PROCEEDINGS OF THE
ECOLOGICAL QUALITY ASSURANCE WORKSHOP
Sponsored by
U.S. Environmental Protection Agency
Corvallis, Oregon
NAPAP
FOREST RESPONSE PROGRAM
QUALITY ASSURANCE
PROJECT
PROCEEDINGS OF THE
ECOLOGICAL QUALITY ASSURANCE WORKSHOP
Sponsored by
U.S. Environmental Protection Agency
Corvallis, Oregon
Organized by
Forest Response Program
QA Staff
March 29-31, 1988
Denver, Colorado
Table of Contents
Preface
Section 1
Overview of the Quality Assurance Programs Represented
Section 2
Quality Assurance Issues
2.1 Adapting QA to Ecological Research
2.1.1 Paper — by Dr. Ian K. Morrison
2.1.2 Discussions
2.2 Comparability Studies
2.2.1 Paper — by Dr. Wayne P. Robarge
2.2.2 Discussions
2.3 Quality Control Data
2.3.1 Paper — by Dr. John K. Taylor
2.3.2 Discussions
Section 3
Conclusions and Future Challenges
3.1 Closing Discussions
3.2 Workshop Critique
3.3 Future Plans
Section 4
4.1 Workshop Agenda
4.2 List of Attendees
Preface
The scientific community is increasingly called upon to
address ecosystem responses to a myriad of human activities at
local, regional, and even global levels. In the decade of the
1990's and beyond, a clearer understanding of such ecosystem
responses will be fundamental to emerging policy issues.
Monitoring and research within ecosystems is seen, therefore, as
a major scientific need for the period ahead, and Quality Assurance
(QA) procedures will need to accompany these developments.
As a preparatory step, the QA staff of the Forest Response
Program under the National Acid Precipitation Assessment Program
(NAPAP) organized a workshop to discuss the role of QA in
ecological science. The workshop was held on March 29-31, 1988 in
Denver, CO; the purpose was to:
1) strengthen interaction between the various QA programs by
exchanging information on QA activities in a number of
different monitoring and research areas, including water,
soil, vegetation, and atmospheric sciences;
2) provide a forum for discussing topics of general concern
to QA implementation in monitoring and research programs; and
3) establish some guidelines for the extension of QA
specifically into terrestrial/ecological research in the
decade ahead.
In order to fulfill the first objective, day one of the
workshop was dedicated to exchanging information about the QA
programs represented and to fostering discussion about the issues at
hand.
program or organization presented an overview of their activities,
innovations, and difficulties. Section 4 contains an agenda, and
a list of participants and groups represented.
The second and third day of the workshop focused on specific
aspects of QA implementation, including: the adaptation of
traditional QA to ecological research, comparability studies, and
collection and evaluation of quality control data. For each of
the three sessions, a discussion leader presented an issues paper
prepared and distributed in advance of the workshop to stimulate
discussion. These papers are presented in Section 2 of the
proceedings, as modified after the workshop.
At the conclusion of the workshop, the workshop participants
attempted to summarize the main issues discussed during the three
sessions and identify conclusions. The outcome is presented in
Section 3 of these proceedings.
One group consensus was to begin planning for an international
symposium in 1989 to further define the role of QA in ecological
research programs in the 1990's. Representatives from other
agencies and governments have expressed interest in assisting with
the arrangements for such a symposium.
Reaction to this workshop, and to the prospect of future meetings, was very
positive. There was consensus that this workshop fulfilled its
objectives and similar workshops or meetings are needed in the
future. The U.S. EPA's Environmental Research Laboratory in
Corvallis and the Forest Response Program plan to actively
participate. We wish to thank the other participants for their
efforts, particularly the authors of the three issues papers and
our colleagues from Canada for their participation.
Section 1
OVERVIEW OF THE QUALITY ASSURANCE PROGRAMS REPRESENTED
About 40 people attended the workshop from across the U.S.
and Canada. They represented QA interests among twenty-one
federal, state, provincial, corporate and consulting organizations
that expressed high interest in the workshop. Introductory remarks
from spokespersons for sixteen organizations provided these
highlights:
o U.S. EPA and U.S. Forest Service, Forest Response Program
(NAPAP),
Corvallis, Oregon - Susan Medlarz
- In 1986, began implementing QA within this multi-agency
program of research on air pollution and forest effects
across the U.S.
Developed and applied a QA program to a highly diverse
program focused on ecological research.
o Great Lakes Survey
Burlington, Ontario, Canada - Keijo Aspila
Conducts inter-laboratory comparison studies.
Services seven major monitoring programs which employ
over 400 laboratories.
o Canadian Territories Sample Exchange
Sault Ste. Marie, Ontario, Canada - Ian Morrison
Conducts inter-laboratory comparison studies.
Uses round-robin tests for forest soil and plant tissues.
o International Soil Sample Exchange
Las Vegas, Nevada - Craig Palmer
Undertakes comparability studies of the U.S. EPA
Direct/Delayed Research Program (DDRP) soil survey
analytical data and compares to standard soil survey
information.
Approach is inter-laboratory soil sample exchanges.
o National Acid Deposition Network/National Trends Network
Ft. Collins, Colorado - Dave Bigelow
Network designed to collect wet deposition samples at
about 200 meteorological stations across the U.S.
Precipitation chemistry determined in laboratories from
weekly samples.
QA/QC activities include guidelines for standard
procedures at sites and in the laboratory.
o National Surface Water Survey
Las Vegas, Nevada - Mark Silverstein
Undertook sampling of 1800 lakes and 450 streams in the
NE U.S. and 750 lakes in the Western U.S.
Implemented QA procedures for lake water sampling
methods, and sample tracking and analysis at eight
contract laboratories.
o U.S. EPA, Direct-Delayed Response Program
Las Vegas, Nevada - Lou Blume
Over 2200 soil samples were collected for the DDRP for
the required chemical and physical analysis.
Analytical work was coordinated by QA staff through
technical caucus approach.
o U.S. EPA, Watershed Manipulation Program
Corvallis, Oregon - Heather Erickson
Program will test three hydrologic models from DDRP
through manipulation experiments in paired catchments.
Goals of QA program, though still in the implementation
phase, are to improve research results by establishing
Data Quality Objectives (DQOs) for field research,
standard support laboratory procedures, and inter-
laboratory comparison studies.
o U.S. EPA, QA Management Staff
Washington, D.C. - Linda Kirkland
Involved in developing QA policy for the U.S. EPA
following the initiation of an EPA agency-wide QA program
in 1984.
Strong emphasis on development of Data Quality Objectives
for environmental monitoring and research.
o U.S. EPA, Environmental Research Laboratory
Corvallis, Oregon - Deborah Coffey
Research program broadly-based across ecological issues
(e.g., effects of UV-B, toxicants, GEMS, and human
development on forests, crops, and wetlands).
Found QA to be most effective with top management support
and when it is involved before, during, and after data
collection.
o U.S. EPA, Environmental Monitoring Systems Laboratory,
Research Triangle Park, NC - Bill Mitchell
Provide the materials, procedures, and QA services needed
to document/assess the quality associated with the
environmental monitoring of air, hazardous waste, and wet
and dry deposition.
Philosophy is to combine R & D with simple common sense
QA procedures relative to the critical measurements.
o U.S. Geological Survey
Arvada, Colorado - Vic Janzer
With a staff of 120, the USGS processes 50,000 water
samples per year from across all 50 states.
The QA unit is responsible for maintaining commitments, from
top management to all field sampling personnel, to achieve high
data quality.
o W.S. Flemming and Associates
Albany, NY - Jim Healy
Provides management, QA, and data management for the
Mountain Cloud Chemistry Program.
At nine eastern U.S. and Canadian mountain sites measures
physical, chemical, and general meteorological
characteristics of clouds and atmospheric inputs to
remote forest locations.
o Research Triangle Institute
Research Triangle Park, NC - Jerry Koenig
A not-for-profit research organization of 1200 persons
in engineering, math, survey, social, and environmental
research projects focused on physical, chemical, and
biological issues.
The QA staff developed a process for setting DQO's and
applied it to their research projects.
o Technology Resources, Inc.
Washington, DC - Jerry Filbin
A consulting firm specializing in environmental survey
and monitoring projects (e.g., Maryland Synoptic Stream
Chemistry Survey).
Developed a computer program, QCIC, to standardize and
automate routine management and examination of analytical
chemistry data.
o Weyerhaeuser Testing Center
Tacoma, Washington - Kari Doxsee
A laboratory for analyzing samples from wood products,
soils, plant tissues, fuels, and a broad array of
environmental projects.
Uses standard QA/QC practices to produce quality data in
a non-threatening manner which fits the needs of clients.
Five additional organizations expressed high interest in the
workshop. Unfortunately, their representatives were unable to
attend because of scheduling conflicts. These were:
o U.S.D.I. National Park Service - Darcy Rutkowski
o U.S. EPA, GLP Program - John McCann
o Research Evaluation Associates - Richard Trowp
o Desert Research Institute - John Watson
o Wildlife International - Hank Krueger
Section 2
QUALITY ASSURANCE ISSUES
2.1.1 Adapting Quality Assurance to Ecological Research
Paper by:
I. K. Morrison, Canadian Forestry Service, Sault Ste. Marie,
Ontario, Canada
Introduction
Quality assurance (QA) and quality control (QC) are common to
a wide range of pursuits from manufacturing to monitoring to
research. In manufacturing, emphasis is placed on process and
product quality control, with exacting requirements for accuracy
and precision. This is especially the case when mass producing a
product.
Environmental monitoring adapted many industrial QC concepts
for mass collection of data (over time and/or space). Features of
both situations generally include: (1) thoughtful objective
setting, (2) establishment of measurable markers, (3) careful
selection and thorough and lucid documentation of methods, (4)
faithful attention to detail in implementation, (5) rigorous
auditing, and (6) timely feedback leading to any required
corrective action. The system organizing these activities was
named "QA". The most recent challenge is the adaptation of QA to
research, specifically ecological research. This paper addresses
the issues at stake in making such an adaptation.
Ecological research studies the relationship of living
organisms or groups (populations or communities) of organisms to
their environment. Processes within the scope of ecology vary
widely in space and time, from small processes occurring over short
time intervals to processes involving major segments of the
ecosphere and occurring over extended time periods. The "ecology"
we presumably must address from a QA perspective is the latter, as
it frequently involves large numbers of individuals, and continuity
often must be maintained over long periods of time. This
represents special challenges to QA.
Challenges
All research, ecological or otherwise, must conform to
acceptable standards. This discussion is based on the special
obstacles to environmental problem-solving which result from
studying processes at the ecosystem level. Specifically, these
are: (1) the evolution of issues and re-ordering of objectives
over the time scale of research, (2) constraints on design,
particularly on experimentation, (3) the need for comparability,
(4) the need for continuity, (5) the availability of research
techniques, and (6) natural variability.
Evolving Issues - A number of environmental issues have
surfaced over the past decade, evoking scientific (and popular)
concern. These issues have pointed out general deficiencies in
our current knowledge of ecological processes, particularly how
such processes inter-relate. In addition, issues frequently evolve
over the time scale necessary for environmental research. For
example, concern about the (purported) impact of regional air pollutants
focused initially and mainly on "acid rain". Later, the issue was
taken to include both "acid rain" and other air pollutants (chiefly
ozone). Now, it is focused on "forest decline" in general,
possibly occurring in response to a mix of stress-inducing factors
including, but not limited to, the above pollutants. These other
stress factors include: (1) climatic stresses (chiefly winter
damage, late or early frost damage, and summer drought), (2) insect
or disease attack, and (3) poor management.
In general, however, tree health and tree growth are the
central issues with respect to effects of air pollutants or any
other stress-inducing factor on forest ecosystems. Thus, to reach
an ultimate resolution, forest growth reduction (or stimulation)
and causal relationships need to be unequivocally demonstrated.
Furthermore, forestry values at risk are primarily economic; these
need to be readily convertible to units of timber measure.
Design Constraints - Various research methods are employed
in ecological research: experimentation, correlation, surveys,
and monitoring. In most research, traditional factorial
experimentation does not usually pose major problems. Hypotheses
can be framed and tested, treatments can be applied and adequately
replicated, and the research can proceed. However, some ecological
research does face unique space and/or time problems. For example,
the relationship of any type of forest to any one or combination
of air pollutants could presumably be characterized by a
dose-response relationship. But mature forests tend not to lend
themselves to direct factorial experimentation without either being
unacceptably artificial or unacceptably confounded. Also,
comparing or contrasting processes in similar forests in zones of
different pollutant loading tends to be confounded by differences
in climate, geology, and soils. Though variation can often be
statistically accounted for by study design and by replication on
a smaller scale, the cost of replicating whole-ecosystem studies
is often a major consideration. Finally, pollutant loadings (in
eastern North America) tend to vary along the same gradient as
climate.
Need for Comparability - If research activities forming part
of an integrated program are carried out at different locations and
by different personnel, some basis of comparison must be included.
This implies a need for at least minimum standardization. Areas
where some degree of standardization has been achieved or where
standardization could be considered include: (1) assessments of
tree health, (2) measurements of forest or tree growth, (3) various
climatic measurements, (4) sampling and chemical analysis of foliar
tissues, and (5) soils descriptions, sampling, and analysis.
Need for Continuity - Fixed reference points are necessary
if research or monitoring is extended over long time periods as in
'baseline' or 'set-the-clock' type studies. However, changes in
technology over time should be considered. While methodologies or
equipment can be resurrected, there is often a reluctance on the
part of scientists to accept obsolete methods or old data, which
can cast some doubt on the utility of studies. Maintenance of
samples and standards over time may help, though there are
considerations of long term stability.
Availability of Techniques - Despite the advance of
technology over the past several decades, there does not appear to
have been a commensurately large increase in the number of
techniques or tools available to the field-ecologist for efficient,
accurate and precise measurements (e.g., forest productivity).
Physiological measures such as CO2 or H2O exchange offer
possibilities, though currently they tend to fall within the domain
of research, and some standardization is in place.
Chemical analysis of plant parts attracts interest. In
forestry, most plant analysis for diagnostic use involves the
determination of total concentrations (usually w/w) of various
macro-elements (N, P, K, Ca, Mg, S) , micro-elements (Fe, Mn, B,
Zn, Cu, Mo, Cl), or other inorganic elements (e.g. Na, Al, Ni, Cd,
Pb, etc.) in dried foliage. However, buds, bark, inner bark, xylem
sap, and fine roots have all been used. The object is usually to
relate internal concentration to some external (environmental)
variable. Various interpretive techniques have been advanced,
including the establishment of 'critical levels', the use of
various ratios and proportions, limiting factors analysis, vector
analysis, and DRIS (diagnosis and recommendation integrated
system). Foliage analysis derives its predictive ability from the
goodness of correlation between analytical results and
environmental measurements. Even when standardized, the necessary
empirical relationships have not always been apparent.
Soils analysis, as currently used in forestry, is largely
borrowed from agricultural usage. Like foliage analysis, soils
tests presumably derive their predictive ability from the goodness
of correlation of analytical results with some measure of tree
response. Unlike foliage analysis (in which total concentrations
are usually determined), most soils tests extract particular
fractions, often purported to be 'available' or 'exchangeable'
forms. Empirical relationships are often species-specific and for
regional application. Only in a few instances have empirical
relationships been derived for natural trees.
Natural Variability - Variability is usually controlled
experimentally or is accommodated statistically by replication or
stratification. The approach is normally dictated by the study
objective. Ecology tends to focus at the population, community,
or ecosystem level, or, for practical purposes, on trees, stands
of trees, forest types, forest associations, etc. On soils, the
focus is frequently at the series level or higher. Studies
therefore tend to be concerned with stand-to-stand,
species-to-species, forest type-to-forest type differences. This
leaves the variability to be accounted for at the within-tree,
tree-to-tree or plot-to-plot levels.
For growth studies, tree dimensions of interest generally
include diameter at breast height (DBH), total height, form class,
etc. Individual tree growth is usually estimated in terms of
diameter or height increment, or change of form class. Stand
dimensions are normally expressed in terms of stemwood volume
(total or merchantable) or in terms of dry weight (biomass or
phytomass). Stand growth on a per area basis is usually expressed
in terms of mean annual increment (MAI), net periodic or current
annual increment, gross periodic or current annual increment, or
(in ecological terms) net primary production. Conventions exist
for all of these. If data are available, the number of trees or
plots necessary to bring standard errors within acceptable limits
can be readily calculated.
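For illustration only, the following minimal sketch (in Python, using the SciPy library and hypothetical pilot values) shows the kind of sample-size calculation described above: given preliminary data, find the number of trees or plots needed to bring the confidence half-width, t·s/√n, within an acceptable limit.

    import math
    from statistics import stdev
    from scipy import stats  # assumed available

    def required_n(pilot_values, acceptable_half_width, confidence=0.95):
        """Smallest n such that t * s / sqrt(n) <= acceptable_half_width."""
        s = stdev(pilot_values)
        n = 2
        while True:
            t = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)
            if t * s / math.sqrt(n) <= acceptable_half_width:
                return n
            n += 1

    # Hypothetical pilot diameter increments (cm/yr) from a few plots
    pilot = [0.42, 0.55, 0.38, 0.61, 0.47, 0.52]
    print(required_n(pilot, acceptable_half_width=0.05))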
Sampling, on the other hand, is not well standardized even
though sampling error generally outweighs analytical error. In addition to
stand-to-stand and species-to-species variation, there is
within-tree and between-tree variation to be taken into account.
The main sources of within-tree variation (which may vary among
elements) include: (1) position in the crown, (2) variation
through the season, (3) difference of needle age for conifers, and
(4) sun versus shade leaves for hardwoods. Aspect may also be
important. Most authorities tend to favor sampling during periods
when concentrations are stable (generally late in the growing
season but prior to leaf coloration for hardwoods, or during the
dormant season for conifers). However, some have suggested
sampling during the period of leaf expansion when plants are
physiologically more active (but, this presents some practical
difficulties). Within tree variation generally lends itself to
stratification. Some between-tree variation may be eliminated by
stratification (e.g., restricting sampling to trees of certain
sizes or crown classes). At present, however, accommodating for
between-tree variation is mainly approached through replication.
Again, if data are available, the number of samples needed to bring
the standard error to within acceptable limits (generally for the
most variable parameter) can be calculated. However, precisely
sampling the required large number of large trees can be a
significant problem.
Soils properties, including physical and chemical properties,
vary both horizontally and vertically, and to some extent,
temporally (in response to processes such as microbial activity,
tree uptake, leaching, etc). Vertical and temporal variability can
be reduced, somewhat, through stratified sampling and
standardization of technique; horizontal variation can be reduced
somewhat by replication. Again, the required number of samples can
be calculated, but may be prohibitively large. For other sampling
(e.g., litter, precipitation, throughfall, or leachate), locating
sites is usually study-specific and the number of collectors can
be calculated as above.
Summary
The aforementioned subjects (the evolution of issues, design
constraints, etc.) are some of the obstacles to solving
environmental problems through ecological research. All have data
quality implications; data quality would be improved through better
issues identification, better study designs, more precise and
accurate methods, continuity and comparability, and accommodating
natural variability. The task before us is to adapt existing QA
concepts to ecological research in ways that promote these improvements.
2.1.2 Workshop Discussions
Dr. Ian Morrison presented his paper to set the stage for the
first discussion session. Based upon his long experience and lucid
thinking in forest ecology research, Dr. Morrison's talk stimulated
the group to rethink some important basic QA issues as a prelude
to reviewing QA applications to ecological research.
For example, what is the basic definition of QA? It is not
in the common reference lexicons. Without getting out the text
books, these thoughts emerged at the workshop:
o QA is a system whereby we define in a general way how to
quantify what we are interested in.
o It is: a) what we do to be sure what we do is
technically sound and legally defensible; b) up front
rules or procedures to follow in monitoring; c) in
research, not a system which drives methodology — rather
some research must come first to lay the foundation for
QA; and d) set forth in a QA plan and in research the
plan keeps changing (i.e., QA evolves).
o Quality data requires documentation of what was done;
thus, the philosophy of QA is partly dependent upon
documentation of each data collection step.
o Conventionally, we thought QA began with taking and
handling a sample; Canadians developed the idea of
starting with work plan development and following through
to and including the scientific report. This concept is
called quality management (QM) in science.
o More definitions of QA and associated terms are presented
in Robarge's paper (Section 2.2.1).
Also, what is quality?
o We judge quality by the lack of quality or departure from
a norm or defects that will or will not be tolerated.
o A value judgement that QA does not define, but rather
depends upon others to establish first (i.e., a kind of
model).
Where does QA fit in science?
o Basically, QA is a component of science.
o Science has to be allowed to proceed.
o The value of QA activities is to ensure that: a) a
scientist can defend his/her work through a system of
documenting and correcting errors; and b) what the
scientist does meets the needs of the policy-maker.
QA has a narrow task and a broad task as reflected in the
activities described in a and b above.
o Through early involvement at the planning phase QA can
guide the data quality objectives (DQOs).
o In research, QA involvement depends on the research to
be undertaken — in leading edge research, QA people are
often following the advice of the scientist.
o An effective QA program is dependent upon good
communications among key managers, scientists,
technicians, and QA personnel.
Levels of QA implementation - Quality Management vs. Quality
Assurance vs. Quality Control?
o There are these three levels at which data quality can
be managed. The appropriateness and utility of each
needs to be considered as new programs or projects are
established. The activities associated with each were
discussed.
o Quality Management: (1) focuses on ensuring data meets
the needs of the users and that the policy questions
driving the program are clearly formulated, (2) is a
system which provides oversight and leaves the
participating scientists to determine how QA and QC will
be addressed.
o The role of Quality Assurance follows the development
of succinct policy goals and is to ensure that data
are collected in a sound manner. QA activities have
developed largely from other disciplines (e.g. chemistry
and air monitoring) so its adaptation to ecological
research is developing without guidance. Checks on many
ecological variables are not available; they are being
developed to check the process but this is still an
imperfect system. QA in ecological fields faces
cross-media complexities. To provide continuity for
long-term ecological research one should bank samples
(e.g. tissue, aqueous, soils, remotely sensed imagery)
so future comparisons are possible when methods have been
refined. New vs. old data must be assessed to minimize
loss of current activities.
o Quality Control is the nuts and bolts of quantifying the
precision and accuracy at the project level.
2.2.1 Comparability Studies
Paper by:
Wayne P. Robarge, N.C. State University, Department of Soil
Science, Raleigh, NC
Introduction
This issues paper focuses on: (1) defining comparability and
models, (2) some examples of how comparability studies can be
implemented, (3) the use of comparability studies, and (4)
questions for discussion on the need for comparability studies in
ecological research. Please note the distinction between the term
"comparability" and the subject area of the discussion group
("comparability studies in ecological research"). This paper is
not intended to be a detailed document on how to carry out
comparability studies among analytical laboratories. Such
information is readily available in published books and the peer
reviewed literature. Rather, this document and the workshop were
intended to develop a better understanding of how to design,
implement, and use results from comparability studies in ecological
research to improve the data quality. Therefore, the terms and
definitions cited in this document serve only as a basis for
further discussion.
COMPARABILITY AND DEVELOPING MODELS
Comparability is one of five data quality indicators required
in the EPA's interim guidelines for preparing quality assurance
project plans. Comparability can be defined as the confidence with
which one data set can be compared to another. A data set can
refer to small sets of data generated by a single technician
comparing two analytical techniques, or to an extensive database
covering several disciplines that serves as a baseline to measure
long-term changes in an ecosystem. Regardless of the scale at
which the comparisons are made, they will require establishment of
suitable confidence limits prior to acquisition of the data. This
is most effectively accomplished through the use of a model.
A model is an idealized representation of an often complex
reality. Models attempt to bring together, among other things,
prior knowledge, hypotheses, and assumptions concerning the
phenomena or the system under investigation. Development of a
correct model (or models) identifies the quality of data that will
be necessary to provide the information required. This in turn
can be used to develop a set of data quality objectives for the
proposed project.
This approach does not have analytical methodology as its
central focus. Rather, selecting a particular analytical technique
and associated QA follows from: (1) a detailed consideration of
the problems to be solved, and (2) a conscientious decision
regarding the quality of data required to reach a satisfactory
solution. Comparability then is not an addition to an experimental
plan, but an integral part of the planned research. It requires
input from both the project leader and the agency receiving the
final data set. These two groups need to decide on the quality of
collected data required for their particular needs. Topics that
need to be addressed concerning comparability among data sets are
covered below.
IMPLEMENTATION OF COMPARABILITY STUDIES
Stating a given confidence level for a particular data set
implies knowledge of the precision, accuracy (bias), and
representativeness of the data. The following is a listing of the
general ways in which this information is obtained. This listing,
however, should only be used as an example of ways to implement
comparability studies. It is important to the success of this
workshop not to let the terminology and concepts often associated
with analytical chemical methodology dominate our
discussions regarding ways to implement comparability studies in
ecological research.
Sampling Design - The importance of sampling is well
understood by most researchers and is the subject of numerous
monographs. Perhaps of most concern to QA are what populations
are actually being sampled in ecological studies and what
assumptions can be made regarding population distributions. Random
sampling is usually the answer to such questions, but the basis for
this is more from statistical concepts underlying the experimental
design than an appreciation for the distribution of the samples.
Many parameters are spatially and temporally related in ecological
systems, thus strict adherence to random sampling may lead to
sampling numbers larger than necessary in order to obtain a valid
conclusion. Alternative sampling strategies are available but
require at least some information about the parameter of interest
before they can be applied successfully.
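As an illustration of one such alternative strategy (not drawn from the paper), the sketch below shows stratified sampling with Neyman allocation, which uses prior information about stratum sizes and variability to distribute a fixed number of samples; the function name and all values are hypothetical.

    def neyman_allocation(total_samples, stratum_sizes, stratum_stdevs):
        """Allocate samples to strata in proportion to N_h * s_h (Neyman allocation)."""
        weights = [n_h * s_h for n_h, s_h in zip(stratum_sizes, stratum_stdevs)]
        total_weight = sum(weights)
        return [round(total_samples * w / total_weight) for w in weights]

    # Hypothetical example: three forest strata with known areas (ha)
    # and pilot standard deviations of the parameter of interest
    print(neyman_allocation(60, stratum_sizes=[120, 80, 40], stratum_stdevs=[2.5, 1.0, 4.0]))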
Reference Sample Exchanges - A reference material is a
substance that has been characterized sufficiently well so that
its composition or related physical property has a certified value
of known accuracy. The chief role of reference samples is to
evaluate the accuracy of calibration standards for a particular
analytical instrument or complete measurement process. Note that
this does not necessarily mean that actual sample accuracy can be
measured using a reference material. The latter can only be
approximated when the matrices of the reference material and
samples are similar. The obvious drawback to the use of reference
materials is the lack of suitable, stable substances that match all
possible sample populations likely to be encountered in ecological
research. As the similarity in matrices diminishes, the role of
a reference material in assessing comparability is reduced.
Audit/Exchange Samples - Sample exchanges of non-certified
material that approximates the matrix of the sample population
provide a means of determining relative accuracy in a data set.
Such estimates may be methodology dependent and may yield results
with substantial unknown bias that could limit comparisons between
data sets. Many measurements made in ecological research are
related to specific biological processes. In other words, they
only have meaning when combined with another data set. Audit or
exchange samples, therefore, provide more useful information
regarding precision within a given data set than they do estimates
of accuracy or bias.
QC Check Samples - Also known as in-house controls, these
samples are necessary for monitoring quality control and estimating
precision within a data set. Their use, however, is limited to
methodologies where repeated measurements can be made from sample
matrices whose composition for the parameter of interest does not
change markedly as a function of time. For biological or
physio-chemical processes that are changing with time, use of
in-house controls will not be possible or will be restricted to
portions of a methodology which are relatively independent of time.
Methodology Comparisons - Adopting a set of methods as
standard operating procedures is a necessary step in developing a
QA plan and is fundamental to providing estimates of precision and
bias in a data set. Selection of a particular method is a function
of the data output desired and its cost. When suitable reference
materials are present, comparison of methods is straightforward.
Lack of a suitable reference material dictates the use of other
approaches, such as: (1) the use of spiked samples and surrogates,
(2) analysis of analogous reference materials, or (3) a comparison
of the selected methodology to a method that is accepted as a
standard but is too expensive to perform on a routine basis. All
three of these approaches are capable of estimating quality
provided the assumptions involved with each are satisfied for any
given situation.
Use of methodology comparisons in ecological research,
however, may suffer the same limitations as outlined for audit and
exchange samples. Many methods in ecological research are designed
as an attempt to quantify different biological or physical
processes which are essentially in a constant state of change. The
method being attempted is based to some degree on physical,
chemical, or biological principles, but there is no way to
determine what is the right answer. For example, measuring dry
deposition to a forest canopy produces an estimate which is based
on current technology and an understanding of deposition processes.
Attempting to assign accuracy and precision estimates to such
methods will require a different approach than that used with
common analytical procedures.
Equipment Comparisons - Equipment comparison between
analytical instrumentation assumes that the same sample matrix can
be introduced into each instrument being tested. If this condition
is met, then such comparisons provide estimates of bias in the data
set introduced by the use of a given instrument. Such comparisons
are definitely in order for contributing to the comparability of
a data set.
The application of equipment comparisons to non-analytical
instrumentation, such as exposure chambers (e.g. CSTRs vs.
Open-Tops vs. field studies) poses a different set of questions.
It might be argued that such a comparison is not valid and should
not be addressed under the category of quality control. This
argument might be valid if the treatments used in these chamber
studies produced a response that followed a continuous function
and was unique upon exposure to the test substance. This is
generally not the case. Also, documentation of a response within
a controlled chamber does not constitute a direct link to similar
processes occurring in the field. If the effect of a given
substance is a function of several environmental variables, then
the response to different concentrations of this substance in the
environment will follow a response surface and not a simple
function. Thus, the difference between the location of a damage
response on the response surface as observed in the field and the
damage observed in controlled chambers is a measure of bias in the
data generated from such experiments.
Comparison between non-analytical instrumentation also serves
to delineate problems with treatment precision within a given
design, and how this may influence interpretation of the results.
Unlike analytical instrumentation, where the x-variable can
generally be assumed to be error-free, there may be a substantial
amount of uncertainty in the actual treatment concentrations
present during an experiment. Failure to account for the
uncertainty in the x-axis may result in a bias in the final
interpretation of the data and the type of response function
assumed to be present. Comparison with chambers specifically
designed to control precision of the treatment concentration would
be one way to determine the presence of such bias in the data set.
USE OF COMPARABILITY STUDIES
Because of the way comparability is defined, it is necessary
to speak of the use of comparability studies at two different
levels. On the one hand, output from a comparability study could
be used to address selected topics for a specific data set produced
by a particular project. The rate of output would essentially
match that of the main project. To a large extent this is the
manner in which most comparability studies are currently defined
and executed. As pointed out above, however, comparability should
be considered an integral part of the development of a model for
a given research project. It would follow then, that questions
raised by addressing the comparability of the projected data set
will require solutions before a project can be successfully carried
out. Such questions may be beyond the scope of the planned
project(s) and require separate investigations to arrive at a
satisfactory answer. The output from these comparability studies
would be used to plan future research projects.
COMPARABILITY IN MONITORING
Following are topics that should be addressed to show
comparability of data sets from different monitoring programs:
(1) how sampling stations or sites in each network are sited.
Ideally, each network has the same probability of collecting a
representative sample (for example, are sites selected to detect
maximum values or values representative of an area or volume?);
(2) how the same variables (analytes) measured in each program
are reported in the same units and corrected to the same standard
conditions. If not, provide a mathematical or gravimetric relation
between variables;
(3) how procedures and methods for collection and analysis of
any particular observable phenomenon are the same among networks.
If not, provide a mathematical relation;
(4) how data quality is established and how there will be
sufficient documentation of audits and DQOs to determine data
adequacy;
(5) how, in terms of accuracy and precision, data for one
observable phenomenon, measured by one method or equivalent
methods, can be combined or compared in a statistically defensible
manner among programs.
2.2.2 Workshop Discussions
Discussions centered on Dr. Wayne Robarge's paper and
presentation on comparability studies, which generated the most
controversy and disagreement. Interestingly, it was not the
thoroughly prepared overview of implementing comparability in
ecological research that was the concern, but the semantics of this
aspect of data quality. Several questions were presented for
discussion. Following is a summary of the participants' responses.
Is it possible to draw a distinction between comparability and
comparability studies?
o The consensus was no. Comparability is one of several
quality descriptors applied to a body of numerical
information.
o Comparability studies are the means of determining the
information necessary to state the degree of confidence
with which one data set can be compared to another.
o Even identifying how comparable different data sets are
that use the same methodology is difficult. We are just
beginning to identify the components that contribute to
variability when different methodologies are used.
What is the difference between comparability studies and
calibration studies?
o This was not resolved.
PROGRAM \ METHOD     SAME METHOD     DIFFERENT METHODS
WITHIN                    1*                 2
BETWEEN                   3                  4
*Complexity increases from 1 to 4
Can comparability studies be defined as a separate entity from a
research program?
o No. This type of study is only pertinent when defined
within the scope of all data quality descriptors for that
research.
Does reference to comparability studies in ecological research
really imply the need for the development of better or alternative
methodologies to study ecological processes?
o Selecting a specific methodology to be used in a research
project is then reflected in the body of numerical
information to be produced. Comparability studies allow
for the determination of overall error variance in the
method selected. If this error variance exceeds the data
quality objectives set for the research program, then
these objectives need to be re-evaluated or a different
methodology selected.
o Comparability studies in ecological research reflect the
heterogeneity of methods in ecological research. They
are very important indicators that no one method has
proven itself above all others in certain fields. Also,
comparability studies help to determine which methods are
better.
Is there too much emphasis on numerical accuracy in ecological
research?
o The participants agreed that the answer to this question
directly depends on the data quality objectives set for
a given research project. It is more than likely that
too much emphasis is being given to numerical accuracy
for many methods currently used in ecological studies,
especially those comparing data sets from different
ecosystems or regions of the country.
o More emphasis should be placed on using comparable
interpretations of such data sets, rather than the data
itself.
Does ecological research require the development of a different
set of quality assurance and quality control criteria (i.e.
different from those developed for monitoring and analytical
chemistry)?
o No. The QA/QC criteria for a given research project are
a direct function of the specific data quality objectives
for the variables of concern. The biggest difference is
that QA for research rests on relative rather than absolute
accuracy. In research, knowledge about accuracy for many
variables exists only as estimates, whereas in monitoring
programs specific accuracy information is available.
o For both research and monitoring programs, data quality
objectives must be based on cost-effectiveness considerations,
established with realistic regard for the needs of the problem
and the capability of the measurement process.
o Data quality objectives should not be confused with
accuracy and precision limits set for individual
analytical techniques.
Does the propagation of error set finite limits on the quality of
data that can be produced with the current experimental designs
used for research projects?
o This question pertains more to the representativeness of
a body of numerical information and was not addressed
during the workshop. It does, however, pertain to the
capability of a particular measurement process to produce
data of sufficient quality to solve the problem at hand.
What role should models or modeling play in setting quality
assurance guidelines in ecological research?
o Simulation models should be used whenever data bases
already exist that are applicable to the questions at
hand. Such models could be very useful in setting
priority areas within a research project requiring
comparability studies.
Is the quality assurance data being generated really being used by
funding agencies?
o Yes. However, it is apparent that in the future more
emphasis should be given at the data quality management
level in establishing well defined data quality
objectives for ecological research, especially for
programs dealing with long term ecological monitoring.
These objectives are the basis for implementing QA/QC
programs. The resulting QA/QC program should provide
direct feedback in terms of the quality of data being
produced and whether such data will be useful in solving
the problems to be addressed.
2.3.1 Quality Control Data
Paper by:
John K. Taylor, Gaithersburg, MD
Introduction
Almost everyone will agree that data which is used for
decisions must be of sufficient accuracy for the specific
application. Otherwise, the decisions based on its use can have
limited, if any, value. Yet, the absolute accuracy of data can
never be known. The only thing that can be known is a reasonable
estimate of the limits of its inaccuracy. Modern measurement
practices, based on sound quality assurance concepts, can provide
a statistical basis for the assignment of limits of uncertainty.
The following discussion reviews the techniques that have been
found to be most useful for providing a statistical basis for the
evaluation of data quality.
Reliable data must be produced by an analytical system in a
state of statistical control. The analytical system, in its
broadest concept, includes the sampling system, the measurement
process, the calibration process, and the data handling procedures.
The sampling system includes the sampling operations, the transport
and storage of samples, and any sub-sampling operations that are
required. The system must provide reproducible samples that do not
undergo any changes compromising their use.
The measurement process must be capable of meeting the data
quality objectives with respect to its accuracy, selectivity, and
sensitivity. The measurement process must be operated in a state of
statistical control, which means it must be stabilized and its
output must be statistically definable. The calibration system must be
adequate and operate in a state of statistical control.
Appropriate quality control procedures should be developed
and applied to attain and maintain statistical control of the above
operations. Quality assessment techniques are applied to verify
statistical control and to evaluate the quality of the various
outputs. The remaining discussion deals with several aspects of
the use of quality assessment samples (often called quality control
samples) to evaluate analytical operations and the data produced
by them.
EVALUATION SAMPLES
Evaluation samples are any materials used in the
evaluation of an analytical system and its data outputs.
Regardless of the kinds of samples used, they must be of
unquestionable integrity. This includes their homogeneity in every
case and the accuracy of their characterization when used for
accuracy evaluation. Samples should be available in sufficient
quantity to allow for periodic re-evaluations and/or analyses at the
conclusion of the measurement program.
that evaluation samples have as close a correspondence as possible
to natural samples, since the data will be used to evaluate the
performance of the analytical system when measuring natural
samples.
Variability Samples - Samples may be measured to evaluate
the attainment and maintenance of statistical control and the
variability of the measurement process. Analysis can include
replicate measurement of some natural samples as well as samples
especially introduced into the measurement program to evaluate
these parameters. Natural samples have the advantage that they
truly represent the sample population, but they can introduce
uncertainties due to homogeneity considerations. Samples of known
composition may be used to evaluate variability as well as
accuracy, but this may consume larger quantities of such samples
than may be desirable. Replicate measurements of natural samples
are superior evaluators of precision, but such measurements can be
used only for precision. Accuracy estimates must be made using
other techniques.
Accuracy Samples - Samples of known composition must be used
to evaluate accuracy. Examples of such samples are, in order of
reliability:
1) Reference Materials- samples of matrix and analyte level
closely analogous to natural samples. They need to be stable and
homogeneous, with thoroughly characterized analyte levels. In some
cases, it may be difficult to meet these needs.
2) Spikes/Surrogates- natural matrix samples spiked with
analyte of interest or a surrogate at appropriate levels. The main
objection is the question of analogy to naturally incorporated
analyte. Alternately, a benign matrix ( e.g. distilled water)
spiked as above may be useful, but only for truly
matrix-independent methodology.
3) Independent Measurement- not truly a sample, this involves
the comparison measurement of a sufficient number of natural
samples by a second reference technique.
Blanks - Blanks are a special class of evaluation samples.
Sampling blanks of various kinds are used to verify that sample
contamination is insignificant or to quantitatively evaluate its
magnitude. Blanks are also used to evaluate artifactual
contributions of the analytical process resulting from the use of
reagents, solvents, and chemical operations. When used for yes/no
decisions, the number of blanks required can be smaller than when
quantitative estimates are of concern. Indiscriminate and
arbitrary schedules for blanks can be counterproductive, since
their measurement can be of little value and consume valuable
resources that may be better diverted to measurement of natural
samples.
Instrument Response Samples - Special samples may be used to
monitor certain aspects of the outputs of instruments such as
response factors, for example. These should not be confused with
other kinds of quality assessment samples that are designed to
monitor the total output of the analytical process.
Synthesized evaluation samples may appear to have some
advantage when they can be more accurately compounded than
analyzed. However, homogeneity considerations and questions of
analogy can override this apparent advantage. It is virtually
impossible to homogenize spiked solid samples; they may need to be
individually prepared and used in their entirety. The accuracy
attained in preparing evaluation samples cannot be assumed, but
must be experimentally demonstrated. Ordinarily, the accuracy of
certified values should exceed that required for the data by a
factor of 3 or greater.
Another kind of evaluation sample, not often considered as
such, is a calibration evaluation sample. There are two types of
calibration samples: those to evaluate maintenance of calibration,
and those to verify that the production of calibration samples is
reproducible. The first consists of remeasuring an intermediate
calibration point at selected intervals during the measurement
program. The second consists of a simple analysis of variance in
which samples are prepared and measured in replicate to estimate
both precision of their measurement and precision of their
production. The accuracy of calibration is evaluated on the basis
of analysis of all sources of bias in their preparation and the
precision with which they can be prepared.
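As a sketch of the second kind of check (the analysis of variance on calibration samples prepared and measured in replicate), the following illustrative Python fragment separates measurement precision from preparation precision; the data values are hypothetical.

    import numpy as np

    # rows = independently prepared calibration samples, columns = repeat measurements
    data = np.array([
        [10.02, 10.05,  9.98],
        [10.11, 10.09, 10.14],
        [ 9.95,  9.97,  9.93],
        [10.04, 10.01, 10.06],
    ])
    k, r = data.shape
    grand = data.mean()
    ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (r - 1))
    ms_between = r * ((data.mean(axis=1) - grand) ** 2).sum() / (k - 1)

    sd_measurement = ms_within ** 0.5                               # precision of measurement
    sd_preparation = max((ms_between - ms_within) / r, 0.0) ** 0.5  # precision of production
    print(sd_measurement, sd_preparation)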
When using reference materials, the overall evaluation of the
calibration process is involved in the overall evaluation of
accuracy. If biases are found, the first place to look to identify
their cause is in the calibration process. And finally, the
appropriateness of the calibration procedure used must be verified
initially and throughout a measurement program.
Accuracy can be evaluated using a reference laboratory to
measure a sufficient number of split samples. The participating
laboratories evaluate their own precision and measure a few
reference samples, but the bulk of the accuracy assessment is based
on comparison of results with those of the reference laboratory.
CONTROL CHARTS
The results of evaluation sample measurements are best
interpreted by use of control charts. In doing so, a single
measurement at any time is related to a body of control data and
becomes meaningful. Otherwise, a number of replicate measurements
must be made each time an evaluation is undertaken. The
accumulation of control data via control charts increases the
degrees of freedom for the statistical estimates and minimizes the
amount of effort to assure measurement quality. Control charts
also can be very useful for the evaluation of the precision and
bias of the measurement process.
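A minimal sketch of such a control chart (Python with matplotlib; the control-sample results are hypothetical) is given below: the central line estimates the limiting mean of the measurement process and the 3-sigma limits reflect its precision.

    import numpy as np
    import matplotlib.pyplot as plt

    control_results = np.array([5.02, 4.98, 5.05, 5.01, 4.97, 5.03, 5.00,
                                4.99, 5.04, 4.96, 5.02, 5.06, 4.95, 5.01])
    center = control_results.mean()          # best estimate of the limiting mean
    sigma = control_results.std(ddof=1)      # best estimate of measurement precision
    ucl, lcl = center + 3 * sigma, center - 3 * sigma

    plt.plot(control_results, marker="o")
    plt.axhline(center)
    plt.axhline(ucl, linestyle="--")
    plt.axhline(lcl, linestyle="--")
    plt.xlabel("Evaluation sample measurement number")
    plt.ylabel("Measured value")
    plt.title("Control chart for a QC check sample")
    plt.show()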
FREQUENCY OF QA SAMPLE MEASUREMENT
The frequency of QA sample measurement will depend on the
stability of the analytical system, the criticality of the data,
and the risk associated with being out-of-control. In principle,
all data during the interval from last known "in control" to first
known "out of control" is suspect and may need to be discarded.
Such situations should be avoided to minimize the resulting loss
of data and programmatic costs.
For a well understood and stable system, devotion of 5 to 10
percent of the total effort to quality assessment may be sufficient
and is not a large cost. For small operations, as much as 50
percent QA may be required. For very critical data, the ratio may
be as high as 95 percent QA to 5 percent natural samples.
Some steps in the measurement program may be adjusted to
minimize the amount of overall evaluation (i.e., the QA samples).
Readjustment of calibration intervals may be an example of this.
Careful attention to blank control, sample preparation steps, and
chemical processing are other areas for better control. Anything
that can be done to improve quality control generally will minimize
the QA effort.
SOURCE OF CONTROL
The bulk of quality control and quality assessment must be
done at the laboratory, and even at the bench level. Checks are
required at the supervisory level, but less frequently. Checks
need to be made at higher levels as is necessary, but are generally
needed at a decreasing frequency as the level of analysis is
removed from the source of data production. A monitoring director
far removed from the scene of action can only evaluate what was
done; the bench can evaluate what is being done. Each level must
engender the respect and earn the trust of every higher level.
This can only happen when there is a mutual understanding of the
goals and delegated responsibility for the quality at each level.
Every time that a sample is handled, there is a chance for
contamination or loss. This increases the need for all kinds of
blanks and check samples. Accordingly, sample handling should be
minimized as much as possible.
DATA CONTROL
Data control consists largely of minimizing data-handling errors, to
the point of virtual elimination. Such errors are different from
measurement errors and may be called by the undignified title of
"blunders". Blunders have a better chance of elimination than any
other kinds of error and are best identified and controlled by
inspection.
UNCERTAINTY EVALUATION
Uncertainty around measured values is estimated based on
measurements of materials assumed to be similar to natural samples.
A further assumption is that the analytical system is in
statistical control at all times. The estimation process is as
follows:
1) Measure a reference sample.
2) Use a t-test to decide significant differences.
2a) If insignificant, conclude that measurement is unbiased.
Assign limits of uncertainty based on the confidence
interval for the mean.
2b) If significant, conclude that the measurement is biased.
Try to identify the cause(s) of bias. Eliminate
source(s) of bias as possible. Correct data only if
validity of correction process can be proved.
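For illustration, a hedged sketch of steps 1 and 2 above as a one-sample t-test against a certified value (Python with SciPy; the measurement values are hypothetical):

    from math import sqrt
    from statistics import mean, stdev
    from scipy import stats  # assumed available

    measurements = [12.31, 12.28, 12.35, 12.30, 12.27, 12.33, 12.29]
    certified_value = 12.25

    n = len(measurements)
    x_bar, s = mean(measurements), stdev(measurements)
    t_stat = (x_bar - certified_value) / (s / sqrt(n))
    p_value = 2 * stats.t.sf(abs(t_stat), df=n - 1)

    if p_value > 0.05:
        # 2a) no significant bias: assign uncertainty from the confidence interval
        half_width = stats.t.ppf(0.975, df=n - 1) * s / sqrt(n)
        print(f"unbiased; uncertainty limits about +/-{half_width:.3f}")
    else:
        # 2b) significant bias: identify and eliminate its cause before correcting data
        print(f"apparent bias of {x_bar - certified_value:.3f}; investigate its source")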
In either case, the decision relates to a particular test
which generally will need to be reproducible. Good measurement
practice dictates that an analytical system should be continually
monitored to verify decisions and evaluate quantitative
relationships. The control chart approach is an excellent way to
accomplish both objectives. The central line of the control chart
becomes the best estimate of the limiting mean of the measurement
process and for evaluation of bias. The control limits become the
best estimate of the precision of measurement.
MEASUREMENTS USING EMPIRICAL METHODS
In some measurement programs, the value of a parameter is
defined empirically by the method of measurement. In such cases,
the accuracy is synonymous with the precision of measurement.
However, one cannot discount that both observer and instrument bias
can enter measurement data. Collaborative testing programs can
identify such problems when the programs are properly designed and
executed. It may be possible to develop a "standard test
instrument". Collaborative tests of any parameter involving
unstable test samples (ozone for example) may require that all
participants assemble in the same area and measure a local,
homogeneous sample (even the same sample if possible) to minimize
the effect of sample uncertainty.
DATA QUALITY OBJECTIVES
DQOs should reflect realistic estimates of tolerable
uncertainty about measurement data as related to a specific use.
They should not be based on the perceived capability of the
measurement system. Once known, the requisite capability of the
analytical system can be estimated and the requirements for total
limits of uncertainty, Um, can be established. Clearly, Um must
be less than the DQOs to provide useful data.
Ideally, the ratio of DQO to Um should be > 10.
Practically, the ratio of DQO to Um should be >= 3.
While Um includes components due to sample and measurement,
we will confine the following remarks to measurement. However,
the basic concepts are applicable to all aspects of the analytical
system.
Let: Um = CI + bias (CI = confidence interval half-width)
where:
CI = t·s/√n (n = number of measurements, s = std. dev., t = Student's t-value)
bias = experimental bias + judgmental bias
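Before discussing how each term is evaluated, the arithmetic of this
relationship can be illustrated with a short Python sketch; the
numerical values are hypothetical, and taking absolute values of the
bias terms is an assumption made here for a conservative estimate,
not a prescription from the text.

    # Illustrative sketch: Um = CI + bias, with CI = t*s/sqrt(n), and the
    # rule of thumb that DQO / Um should be at least about 3.
    import math
    from scipy import stats

    def total_uncertainty(s, n, experimental_bias, judgmental_bias, alpha=0.05):
        t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
        ci = t_crit * s / math.sqrt(n)            # confidence interval half-width
        return ci + abs(experimental_bias) + abs(judgmental_bias)

    def meets_dqo(dqo, u_m, minimum_ratio=3.0):
        return dqo / u_m >= minimum_ratio

    # Hypothetical example: s = 0.8 mg/L from n = 7 reference measurements,
    # experimental bias 0.2 mg/L, judgmental bias allowance 0.1 mg/L, DQO 5 mg/L.
    u_m = total_uncertainty(s=0.8, n=7, experimental_bias=0.2, judgmental_bias=0.1)
    print(round(u_m, 2), meets_dqo(dqo=5.0, u_m=u_m))   # approx. 1.04  True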
Experimental bias is evaluated as mentioned above. It should
represent the average of at least 7 independent estimates of the
bias (e.g., the measured mean minus the certified value). Judgmental bias is based on a
bias budget that reflects contributions from the "lack of control"
of known sources of bias for which quantitative relationships are
known, and best estimates of limits from unevaluated sources. In
setting limits, and especially in correcting data, all of the above
must be documented and the original uncorrected data should be
accessible so that revisions can be made as appropriate. The
following "Good Data Analysis Practices" are recommended for
consideration in this regard:
1. Bias identification is diagnostic but not a calibration
process.
2. Bias identification is not bias evaluation but only a
yes-no decision. Bias evaluation is a quantitative
process and requires extensive quantitative measurement.
3. Never correct for a bias without understanding its
origin.
4. Evaluation of bias over the entire measurement range (at
least at 3 levels such as low, intermediate, and high)
is necessary though not sufficient to understand the
nature of existing bias and can be helpful to indicate
ways to eliminate bias.
5. Ideally, eliminate bias at its source.
6. Development and consideration of a bias budget is a
helpful first step in the elimination of bias.
7. In most cases, reference materials should be considered
as diagnostic tools rather than as calibration items.
8. Data evaluation is an on-going process that must be
pursued systematically and consistently. Any procedure
that is implemented should be reviewed for its utility
and revised as necessary to be most effective. The
measurement of control samples is costly and must be
cost-effective.
9. Involvement of all levels of analytical input is needed
to develop and implement a cost-effective data evaluation
program. Quality assurance requirements imposed from "on
high" can be misunderstood, can meet with resistance, and
are sometimes not credible. Feedback and the mutual
development of remedial actions are necessary for
realistic operation of a reliable analytical system.
10. Without statistical control, measurement data has no
logical significance.
MANAGEMENT OF A QA PROGRAM
The quality assurance aspects of all measurement programs are
essentially the same. Only the details differ, and these should
be developed specifically for each program if they are to be of
optimum value. When designing and managing a quality assurance
program for a research or monitoring project, special emphasis
should be given to such matters as:
1) The amount of effort devoted to quality assessment.
2) The kind of quality assessment samples to be measured.
3) The frequency of measurement of QA samples.
4) The amount of effort that should be carried out as part
of a contract laboratory's internal program.
5) The amount of effort that should be performed using
externally supplied materials and monitored externally.
6) How internal and external QA programs are to be
coordinated.
The preceding discussion provides general guidance on these
important matters. However, specific answers need to be developed
based on the precise nature of a given project. The following set
of questions is presented for consideration by QA management when
designing a program for a specific project.
1) What are the accuracy requirements for the data?
2) What kind of QA samples will be most effective for the
research or monitoring that is contemplated?
3) What is the prior experience in use of QA samples for
the project planned or for projects of a similar nature?
4) What is the relative reliability of natural matrix and
synthetic QA samples?
5) What is the level of QA understanding of participants?
6) What role will blanks play?
7) What is the source of QA samples?
8) What are the accuracy requirements for the QA samples?
9) What will be done to establish the credibility of each
QA sample used?
10) If QA samples are to be produced by a supplier, what will
be the specifications that the samples must meet and how
will compliance be evaluated?
11) What will be the feed-back loop for evaluation of the
effectiveness of an initially implemented QA sample
program?
12) What will be the relative frequencies of bench or
laboratory QA samples?
13) Should a reference laboratory be used as an adjunct to,
or in place of, some of the QA sample program?
14) How will each laboratory establish, and then monitor, its
statistical control?
15) What procedure will be used to establish initial
competence in the methodology that is to be used?
16) What corrective actions should be taken in the event of
unsatisfactory results on QA samples?
17) In the case of a long term project, would periodic
meetings of participants be helpful in solving problems,
improving accuracy, and promoting continuity of effort?
REFERENCE
The following reference by the present author discusses
several of the above matters in more detail and includes
discussions of allied QA topics.
John K. Taylor, Quality Assurance of Chemical Measurements,
Lewis Publishers, Inc., Chelsea, MI, 1987.
2.3.2 Workshop Discussions
A review of the subject of quality control data was led by
Dr. John Taylor. His expertise in this area and the thorough
discussion provided in his issues paper stimulated active
discussion. Unlike the previous two areas addressed in the
workshop, adapting QA to ecological research and comparability
studies, the collection and evaluation of QC data has a
long-standing historical precedent. The points raised and
discussed by the participants are given below.
What determines the level of quality control data needed?
o Depends primarily on the intended use of the data.
What is the risk associated with being "out-of-control"?
o The measurement process used is the primary determinant;
with a well-characterized measurement process, as little
as 5 percent of the effort may need to be devoted to
evaluating data quality (e.g., through quality control
checks). If, however, the measurement process is poorly
defined (or the operation is small), then as much as 20
percent, and sometimes up to 50 percent, of the effort
will need to be spent on defining error and evaluating
data quality.
What is the purpose of defining accuracy, a standard component of
data quality objectives, in ecological research?
o Defining accuracy implies that the true value of a
variable can be identified, which is not possible in
ecological research. The closest one can come is to
identify relative bias or to assign limits of
"uncertainty" to the data sets.
Quality control checks at the routine level are more important than
periodic checks, such as those provided through audits or audit
samples.
What is the definition and importance of bias in the analytical
process?
o Bias is an error source which can be caused by a process,
an operator, and/or a design. One of the most frequent
mistakes made when dealing with QC data is the tendency
to correct for bias without understanding the cause or
origin of the bias.
o Bias needs to be evaluated across the whole range of the
measurement process; this is important for identifying the
cause of bias in a process.
Several types of QC samples were defined (a brief illustrative
sketch follows the list below).
o Evaluation samples include any QA sample developed for a
specific purpose, with the evaluation strategy identified
in advance.
o Variability samples are natural replicates.
o Accuracy samples include: reference materials, spikes,
blanks, and independent laboratory reanalysis.
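The checks most often applied to these sample types can be expressed
compactly; the following Python sketch is hypothetical (the formulas
are standard QC conventions and the numbers are made up), not a
procedure adopted by the workshop.

    # Hypothetical sketch of common checks for accuracy and variability samples.
    def spike_recovery(spiked_result, unspiked_result, spike_amount):
        """Percent recovery of a known spike added to a natural sample."""
        return 100.0 * (spiked_result - unspiked_result) / spike_amount

    def relative_percent_difference(a, b):
        """RPD between natural (field or laboratory) replicates."""
        return 100.0 * abs(a - b) / ((a + b) / 2.0)

    def blank_ok(blank_result, detection_limit):
        """A blank should fall below the method detection limit."""
        return blank_result < detection_limit

    # Example with made-up numbers:
    print(spike_recovery(12.4, 2.3, 10.0))        # about 101 percent recovery
    print(relative_percent_difference(5.1, 4.8))  # about 6 percent RPD
    print(blank_ok(0.02, 0.05))                   # True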
What is an effective way to evaluate what type of QC data is needed
in a measurement process?
o All sources of error need to be identified. Determine
which of those are controlled and which are uncontrolled.
What are the aspects of QC data which need to be considered when
establishing an ecologically based research program?
o Managers need to define what level of effort should be
devoted to QA/QC by clearly identifying the goals of the
program.
o The type of QC checks needed can then be evaluated; the
number of samples will depend on the size and complexity
of the program.
o Internal and external QC should be coordinated to
minimize the cost and to maximize the benefit.
Statistical control is often underemphasized in QA/QC programs.
o One of the primary functions of QC data is to attain and
maintain statistical control, which includes the
measurement process, the sampling system, the calibration
process, and the data custody and management.
o All portions of the QA process need to focus on
statistical control.
Section 3
Conclusions and Future Challenges
3.1 Closing Discussions
The workshop provided a unique opportunity to exchange ideas
and expertise gathered in a wide variety of programs addressing QA
implementation in ecological research and environmental monitoring.
Consensus was reached in several areas. The involvement of quality
assurance staff in managing data quality needs to be expanded to
ensure that data quality goals are realistic and achievable. Most
programs that require data quality to be defined overlook the
importance of also clearly defining programmatic objectives. If QA
is to be effective, in terms of both cost and effort, it needs to
be incorporated when programs are initiated.
General agreement was reached on defining the components of QA
as Quality Management, Quality Assurance, and Quality Control.
These definitions are specific to the implementation of QA in
ecological programs. Ecological research can be divided into two
general areas: (a) monitoring and analysis of data, and (b)
process-oriented research. QA in environmental monitoring and data
analysis is well established; we are in the process of extending
QA principles to process-oriented research. It is important to
include the management of data quality in program administration,
especially when a program is first established. The research team
also needs to be part of the process of developing QC, defining
sources of error, and determining whether QA/QC activities are
quantitative or qualitative.
The process of adapting QA to ecological science is just being
established. The workshop participants identified five important
activities in ecological QA implementation: documenting
procedures, establishing inter-laboratory sample exchanges,
developing methods to archive samples, conducting comparability
studies, and, lastly, jointly training all of the project personnel
who collect data.
QA has become increasingly complex. During the early years
of this century QA became an important consideration in the
industrial process. Then it was expanded to environmental
monitoring and now to ecological research.
                < 1950           1970 - 1980            1980 +
                manufacturing    monitoring             research
                organism         populations            ecoregions
                single           several populations    multiple media,
                                                        stresses, disciplines
                local            international          global
This has led to an expansion of the definition and role of QA.
New, and not yet widely accepted, are the ideas of making QA more
innovative, more responsive to public issues, and better able to
ensure that policy makers' needs are clearly identified. To be
effective, QA needs to lose its negative association by gaining
distance from its enforcement origins and becoming more innovative
and flexible.
3.2 Workshop Critique
The following provides an overview of the recommendations
which came from the Workshop:
1. Expand the involvement of QA to include management in
addition to QA and QC, i.e. Quality Management or QM.
2. Define and/or explain the terminology under Quality
Management, Quality Assurance, and Quality Control
sufficiently to ensure they become a part of the scientific
process.
3. Emphasize the need for a clear definition of the product
required by management. Communication between quality
assurance staff and program management must be a part of this
task.
4. The general guidelines for conducting QA in ecological
research include:
o document procedures
o foster opportunities for joint training
o establish inter-laboratory exchanges
o develop mechanisms for sample banking
o conduct comparability studies
5. Cooperate with the scientific community openly, thereby
encouraging QA/QC by the scientists themselves. This will:
(a) ensure their contribution, and (b) better identify
important aspects of QA/QC.
6. QA needs to be a part of program and project
conceptualization. Impetus to ensure QA is initiated early
must come from managers of scientific programs, i.e. QM.
There were several areas in which no consensus was reached. For
example, a definition of data comparability was not agreed upon,
although many aspects of collecting and analyzing QC data were
discussed. Discussions on adapting QA to ecological research were
likewise not definitive and need to be expanded.
3.3 Future Plans
It was the general recommendation of the participants that QA
personnel hold annual workshops, with sponsorship rotating to
minimize the burden on any single group. Workshop effectiveness
would be enhanced by establishing small working groups to address
priority issues within the areas of ecological monitoring and
experimentation. The working groups should meet, or hold
conference-call discussions, early in the process, with workshop
time then devoted to responding to their preliminary
recommendations.
Section 4
4.1 Workshop Agenda
NATIONAL
ECOLOGICAL QUALITY ASSURANCE WORKSHOP
March 29-31, 1988
Holiday Inn Downtown- Mariner
Denver, CO
Monday, March 28th (Glenarm Place)
7:00pm - 10:00pm Reception
Tuesday, March 29th (Cripple Creek Room)
9:00am Introductory Remarks, John Bailey
9:15 Program Overviews (15 minute presentations)
- Forest Response Program, Susan Medlarz
- Great Lakes Survey, Keijo Aspila
- Canadian Terr. Sample Exchange, Ian Morrison
- International Soil Sample Exchange, Craig Palmer
10:30 Break
10:45 Program Overviews (continued)
- NADP, Dave Bigelow
- Direct-Delayed Response Program, Lou Blume
- Watershed Manipulation Program, Heather Erickson
- Mountain Cloud Chemistry, Jim Healy
- Surface Water Surveys, Mark Silverstein
12:00 Lunch
2:00 Resume Program Overviews, Bob Mickler
- USEPA, QA Management Staff, Linda Kirkland
- USEPA ERL-Corvallis QA Program, Deborah Coffey
- USEPA EMSL-RTP QA Program, Bill Mitchell
- U.S. Geological Survey QA Programs, Vic Janzer
- USDI National Park Service, Darcy Rutkowski
- USFS Rocky Mtn. Station, Claudia Regan
3:30 Break
3:50 Program Overviews (continued, 10 minutes each)
- Research Triangle Institute, Jerry Koenig
- Research Evaluation Associates, Richard Trowp
- Desert Research Institute, John Watson
- Technology Resources, Inc., Jerry Filbin
- Weyerhaeuser Testing Center, Kari Doxsee
4:40 Adjourn for the day
Wednesday, March 30th
8:30am Session 1: Adapting QA to Ecological Research
Discussion Leader: Ian Morrison
Facilitator: Steve Cline
12:00 Lunch
1:30 Session 2: Comparability Studies
Discussion Leader: Wayne Robarge
Facilitator: Bill Burkman
5:00 Adjourn for the day
(morning and afternoon break)
Thursday, March 31st
8:30am Session 3: Quality Control Data
Discussion Leader: John Taylor
Facilitator: Steve Byrne
12:00 Lunch
1:30 Future Challenges in Ecological QA, Jack Winjum
3:30 Adjourn
(morning break)
4.2 List of Attendees
*Paul A. Addison
Government of Canada
Canadian Forestry Service
Ottawa, Ontario, Canada
K1A 1G5
613/997-1107
Keijo I. Aspila
National Water Research Institute
867 Lakeshore Road
Burlington, Ontario L7R 4A6
Canada
416/336-4638
John Bailey
Corvallis Environmental Research Lab
200 SW 35th Street
Corvallis, OR 97333
503/420-4772
Cathy Banic
Environmental Applic. Group, Ltd.
6126 Yonge Street, 2nd floor
Willowdale, Ontario M2M 3W7
416/224-0701
Dave Bigelow
Grasslands Laboratory
Colorado State University
Fort Collins, CO 80521
303/491-5574
Louis J. Blume
U.S. EPA
Environmental Monitoring Systems Lab.
P.O. Box 93478
Las Vegas, NV 89193-3478
702/798-2213
Bill Burkman
Corvallis Environmental Research Lab
c/o USDA Forest Service, NEFES
370 Reed Road
Broomall, PA 19008
215/690-3122
Gerald Byers
Lockheed EMSCO
1050 East Flamingo Road
Las Vegas, NV 89119
702/734-3327
Steve Byrne
Corvallis Environmental Research Lab
c/o NCSU AIR Program
1509 Varsity Drive
Raleigh, NC 27606
919/737-3520
Steve Cline
Corvallis Environmental Research Lab
200 SW 35th Street
Corvallis, OR 97333
503/757-4724
Deborah Coffey
Northrop Services, Inc.
200 SW 35th Street
Corvallis, OR 97333
503/757-4666 ext. 323
A. Scott Denning
NFS/ NREL
Colorado State University
Fort Collins, CO 80523
303/491-1970
Kari Doxsee
Weyerhaeuser Company
WTC 2F25
Tacoma, WA 98477
206/924-6452
Heather Erickson
Corvallis Environmental Research Lab
200 Southwest 35th Street
Corvallis, OR 97333
503/757-4666 ext. 349
Jerry Filbin
Technical Resources, Inc.
3202 Monroe St.
Rockville, MD 20852
301/231-5250
Don Hart
Beak Consultants
14 Abacus Road
Brampton, Ontario, Canada
416/458-4044
Jim Healey
W. S. Fleming & Assoc., Inc.
55 Colvin Avenue
Albany, NY 12206
518/458-2249
*Dan Heggen
Exposure Assessment Res. Div.
EMSL-Las Vegas
P.O. Box 93478
Las Vegas, NV 89193-3478
FTS 545-2278
Victor Janzer
U.S. Geological Survey
5293 Ward Road
Arvada, CO 80002
303/236-3612
Linda Kirkland
U.S. Environmental Protection Agency
RD-680
401 M. St., S.W.
Washington, DC 20460
202/382-5763
Donald E. King
Ontario Ministry of Environment
P.O. Box 213
Rexdale, Ontario, Canada
M9W 5L1
416/235-5838
Jerry Koenig
Research Triangle Institute
P.O. Box 12194
Research Triangle Park, NC 27709
919/541-6934
*Hank Kruegar, Director
Terrestrial Ecology Div.
Wildlife International Ltd.
305 Commerce Dr.
Easton, MD 21601
301/822-8600
John Lawrence
National Water Research Institute
867 Lakeshore Road
Burlington, Ontario, Canada
L7R 4A6
416/336-4638
Bernard Malo
U.S. Geological Survey, WRD
416 National Center
Reston, VA 22092
*John A. McCann
Environmental Protection Agency (EN-342)
401 M. St., S.W.
Washington, DC 20460
202/382-7830
Susan Medlarz
Corvallis Environmental Research Lab
c/o USDA Forest Service, NEFES
370 Reed Road
Broomall, PA 19008
215/690-3105
Bob Mickler
Corvallis Environmental Research Lab
c/o USDA Forest Service, SEFES
Forestry Sciences Lab., Box 12254
Research Triangle Park, NC 27709
919/549-4022
William J. Mitchell
U.S. EPA
Env. Monitoring Systems Lab., MD 77B
Research Triangle Park, NC 27711
919/541-2769
Ian K. Morrison
Canadian Forestry Service
Great Lakes Forestry Centre
P.O. Box 490
Sault Ste Marie, Ontario, Canada
P6A 5M7
705/949-9461
Marilyn Morrison
Corvallis Environmental Research Lab
200 SW 35th Street
Corvallis, OR 97333
503/757-4666 ext. 443
Craig J. Palmer
UNLV Environmental Research Center
4505 South Maryland Parkway
Las Vegas, NV 89154
702/739-3382
Claudia Regan
USFS/Rocky Mtn. Exp. Station
240 W Prospect
Ft. Collins, CO 80526
FTS: 323-1274
Wayne Robarge
NCSU/Soil Science Department
3406 Williams Hall
Raleigh, NC 27695
919/737-2636
Beth Rochette
Dept of Geological Science
Environmental Science Lab
University of Maine
Orono, ME 04469
207/581-3287
Jane Rothert
Illinois State Water Survey
2204 Griffith Drive
Champaign, IL 61820
217/333-7942
LeRoy Schroder
U.S. Geological Survey
P.O. BOX 25046, MS 407
Lakewood, CO 80225
303/236-3605
Randolph B. See
U.S. Geological Survey, MS-401
5293 Ward Road
Arvada, CO 80002
*William J. Shampine
U.S. Geological Survey, MS-401
Denver Federal Center
Denver, CO 80225-0046
Mark Silverstein
Lockheed EMSCO
1050 E. Flamingo Road
Las Vegas, NV 89119
702/734-3291
*Bob Stottlemeyer
Michigan Tech. University
Department of Biological Sciences
Houghton, MI 49931
906/487-2478
John K. Taylor
Quality Assurance Consultant
12816 Tern Drive
Gaithersburg, MD 20878
301/948-9861
*Richard Trowp
Research and Evaluation Assoc., Inc.
727 Eastown Drive
Suite 200A
Chapel Hill, NC 27514
*John Watson
Desert Research Institute
P.O. Box 60220
Reno, NV 89506
702/972-1676
Jack K. Winjum
U.S. EPA
Corvallis Environmental Research Lab
200 SW 35th Street
Corvallis, OR 97333
503/757-4324
* denotes invited but unable to attend