APTD-1132
QUALITY CONTROL
PRACTICES
IN PROCESSING
AIR POLLUTION SAMPLES
U.S. ENVIRONMENTAL PROTECTION AGENCY
Office of Air and Water Programs
Office of Air Quality Planning and Standards
Research Triangle Park, N. C. 27711
-------
APTD-1132
QUALITY CONTROL PRACTICES
IN PROCESSING
AIR POLLUTION SAMPLES
Prepared by
PEDCo-Environmental Specialists, Inc.
Suite 13, Atkinson Square
Cincinnati, Ohio 45246
Contract No. 68-02-0211
EPA Project Officer: Neil Berg
Prepared for
ENVIRONMENTAL PROTECTION AGENCY
Office of Air and Water Programs
Office of Air Quality Planning and Standards
Research Triangle Park, North Carolina 27711
March 1973
-------
The APTD (Air Pollution Technical Data) series of reports is issued by
the Office of Air Programs, Environmental Protection Agency, to report
technical data of interest to a limited number of readers. Copies of
APTD reports are available free of charge to Federal employees, current
contractors and grantees, and non-profit organizations - as supplies
permit - from the Air Pollution Technical Information Center, Environ-
mental Protection Agency, Research Triangle Park, North Carolina 27711,
or may be obtained, for a nominal cost, from the National Technical
Information Service, 5285 Port Royal Road, Springfield, Virginia 22151.
This report was furnished to the Environmental Protection Agency by
PEDCo-Environmental Specialists, Inc., Cincinnati, Ohio, in fulfillment
of Contract No. 68-02-0211. The contents of this report are reproduced
herein as received from the contractor. The opinions, findings, and
conclusions expressed are those of the author and not necessarily
those of the Environmental Protection Agency.
Publication No. APTD-1132
ii
-------
ACKNOWLEDGMENT
This report was prepared under the direction of Mr.
David W. Armentrout. Principal authors were Mr. George A.
Jutze, Mr. Charles E. Zimmer, and Mr. Richard W. Gerstle.
Mr. Robert J. Bryan of Pacific Environmental Services was
principal author of Chapter 3.
Project Officer for the Environmental Protection Agency
was Mr. Neil Berg, Jr. The authors appreciate Mr. Berg's
contributions to the concepts presented in this project.
Several federal agencies and private organizations were
visited to obtain information used in preparing this report.
The authors particularly appreciate the assistance given by
the Center for Disease Control in Atlanta, the American
Society for Testing and Materials, the National Bureau of
Standards, and Armco Steel Corporation.
Mrs. Anne Cassel was responsible for editorial review,
and Mr. Chuck Fleming reviewed the graphics.
iii
-------
TABLE OF CONTENTS
Page Number
Acknowledgment iii
List of Tables viii
List of Figures ix
1.0 INTRODUCTION . . . 1-1
1.1 General Considerations 1-1
1.2 Importance of Quality Control 1-2
1.3 Objectives and Scope of These
Guidelines 1-5
2.0 THE MAJOR ELEMENTS OF QUALITY CONTROL .... 2-1
2.1 Controlling Physical Parameters 2-3
2.2 Analysis of the Total Measurement
System 2-4
2.2.1 Precision and Accuracy 2-4
2.2.2 Control Chart Techniques 2-5
2.2.3 Review and Control Procedures 2-11
3.0 QUALITY CONTROL IN ATMOSPHERIC SAMPLING .. 3-1
3.1 Controlling the Physical Parameters 3-2
3.1.1 Environment 3-2
3.1.2 Reagents and Supplies 3-4
3.1.3 Calibration Materials 3-6
3.1.4 Calibration and Maintenance
Procedures 3-8
3.1.5 Design and Maintenance of Probes
and Manifolds 3-14
3.2 Analyzing and Controlling the Total
Measurement System . . . 3-15
3.2.1 Functional Analysis . 3-16
3.2.2 Sensitivity Analysis 3-18
3.2.3 Compiling and Using Data
Histories 3-21
3.3 Documentation and Demonstration of Quality
Control 3-28
3.3.1 Records 3-28
3.3.2 Reports 3-31
v
-------
TABLE OF CONTENTS (Continued)
Page Number
4.0 QUALITY CONTROL IN SOURCE SAMPLING 4-1
4.1 General Considerations 4-1
4.2 Controlling the Physical Parameters ... 4-2
4.2.1 Interferences 4-2
4.2.2 Sampling Rate and Sample Volume. 4-3
4.2.3 Equipment Maintenance and
Calibration 4-3
4.2.4 Conducting the Emission Test ... 4-6
4.3 Analyzing the Total Measurement System. 4-10
5.0 QUALITY CONTROL IN THE ANALYTICAL
LABORATORY 5-1
5.1 Introduction 5-1
5.2 Controlling Physical Parameters 5-2
5.2.1 Laboratory Support Services .... 5-2
5.2.2 Chemicals and Reagents 5-5
5.2.3 Instruments 5-9
5.2.4 Analytical Technique 5-18
5.3 Statistical Methods 5-20
5.3.1 Defining Performance Levels .... 5-20
5.3.2 Choosing Statistical Techniques. 5-21
5.3.3 Constructing Range Control
Charts 5-25
5.3.4 Constructing Coefficient of
Variation Charts 5-28
5.3.5 Determining Accuracy 5-31
5.3.6 Control Charts for Accuracy .... 5-34
5.4 Interlaboratory Proficiency Testing ... 5-39
5.5 Tracing Errors 5-42
6.0 DATA HANDLING AND REPORTING 6-1
6.1 General 6-1
6.2 Data Recording 6-1
vi
-------
TABLE OF CONTENTS (Continued)
Page Number
6.2.1 Data Errors in Intermittent
Sampling 6-2
6.2.2 Data Errors in Continuous
Sampling 6-3
6.3 Data Validation 6-6
6.3.1 Data Validation for Manual
Techniques 6-6
6.3.2 Data Validation for Computerized
Techniques 6-7
6.4 The Statistical Approach to Data
Validation 6-9
6.4.1 Maintaining Data Quality in
Manual Data Reduction Systems ... 6-9
6.4.2 Acceptance Sampling Applications . 6-10
6.4.3 Sequential Analysis 6-14
7.0 REFERENCES 7-1
Appendix A - Statistical Formulae and
Definitions A-1
Appendix B - Tables of Factors for Con-
structing Control Charts B-1
Appendix C - Calibration Curves from
Regression Analysis by the
Method of Least Squares C-1
vii
-------
LIST OF TABLES
Table Page Number
3.1 Environmental Influences and Effects . 3-3
3.2 Effects of Housekeeping Practices on
System Performance 3-5
3.3 Span Drift for Analyzer A for 26
Two-Week Periods 3-25
4.1 Elements for Control of Interferences. 4-2
4.2 Flow Meter Types and Accuracy 4-4
4.3 Example Determination of Pitot Tube
Calibration 4-5
5.1 Techniques for Quality Control of
Laboratory Support Services 5-3
5.2 Guidelines for Quality Control of
Chemicals and Reagents 5-6
5.3 Restandardization Requirements 5-7
5.4 Techniques for Quality Control of
Analytical Instruments 5-10
5.5 SO2 Calibration Data 5-15
5.6 Problems in Assessing Analyst
Performance 5-19
5.7 Data Used to Construct Scatter
Diagrams 5-25
5.8 Computation of Control Limits for Range
Control Charts 5-26
5.9 Computation of Control Limits for
CV-Charts 5-30
5.10 Techniques for Determining Accuracy .. 5-33
5.11 Accuracy Data for Percent Nitrogen ... 5-36
5.12 Percent Recovery Data 5-37
5.13 Variations in Method Procedures 5-41
5.14 Classification of Error Sources 5-43
Appendix B: Table BI - Factors for Con-
structing Control Charts for
Averages B-1
Table BII - Factors for Con-
structing Control Charts for Range B-2
Table BIII - Factors for Con-
structing Control Charts for Coefficient
of Variation B-3
Table BIV - The t Distribution
(Two-Tailed Tests) B-4
Appendix C: Table C-I - Calibration Data
for SO2 Determination C-2
viii
-------
LIST OF FIGURES
Figure Page Number
2.1 Quality Control in Total Analytical
System 2-2
3.1 Functional Analysis of High Volume
Suspended Particulate Sampling 3-19
3.2 Control Chart for Span Drift 3-27
3.3 Instrument Maintenance Form -
California ARB 3-30
4. 1 Field Data Sheet 4-8
4.2 Field Data Sheet 4-9
4.3 Expected Errors Incurred by Non-
Isokinetic Sampling 4-11
5.1 Spectrophotometer Weekly Function Check . 5-12
5.2 Standardization Check at 420 nm 5-13
5.3 Calibration Curve for SO2 Determination 5-16
5.4 Scatter Diagrams for Determining Control
Charts 5-23
5.5 Control Chart for Range 5-26
5.6 Interpretation of Control Charts 5-29
5.7 Coefficient of Variation Chart 5-32
5.8 Accuracy Control Chart from Analysis of
a Primary Standard 5-36
5.9 Accuracy Control Chart for % Recovery ... 5-38
5.10 Procedures for Tracing Sampling Errors .. 5-45
6.1 OC Curve 6-13
6.2 Sequential Test for the Error Rate of
a Data Analyst 6-17
6.3 Operating Characteristic Curve 6-24
Appendix C: Figure C-1 - Calibration Curve
for SO2 Determination C-3
ix
-------
1.0 INTRODUCTION
1.1 General Considerations
State, regional and local air pollution control labo-
ratories perform an array of technical services involving
sample collection, sample analysis and data validation.
To demonstrate that reliable data are reported, the technical
services group must establish a quality control program.
We define a quality control program as:
"... THE PROGRAM APPLIED TO ROUTINIZED SYSTEMS
(I.E., SYSTEMS COMPOSED OF METHODS, EQUIPMENT AND
PEOPLE) IN ORDER TO EVALUATE AND DOCUMENT THE
ABILITY OF A FUNCTION, ACTIVITY, OR PERSON TO PRO-
DUCE RESULTS WHICH ARE VALID WITHIN PREDETERMINED
ACCEPTANCE LIMITS."
It must be emphasized that we are speaking of "quality
control" not "quality assurance", which we consider as en-
compassing both quality control and methods standardization.
Further, quality control programs should be successfully
implemented prior to involvement of a technical services
group in cooperative-laboratory standardization efforts.
A quality control program is developed to minimize
sources of variation inherent in all analytical and technical
functions. Through the use of standard operating proce-
dures and statistical techniques, items such as determinate
errors are identified and controlled. The effects of random
1-1
-------
errors are measured and used to express the degree of con-
fidence to be placed in the analytical data and to deter-
mine when the process that generates the data is not func-
tioning properly.
The quality control program should be designed so that
the supervisor, through his technicians, can set up a pro-
tocol for performing each operation involved in sample
collection, analysis, and data handling. Implementation of
the quality control program requires that descriptions of
each procedure be readily available to the laboratory staff.
This document is intended to provide the administrator or
supervisor with guidelines for establishing a detailed
quality control program that is consistent with his specific
needs and objectives.
1.2 Importance of Quality Control
The functions of a technical services group in an air
pollution control program encompass both field and labora-
tory operations, including surveillance of pollution sources,
acquisition of ambient air quality data, episode criteria
monitoring and performance of special studies. The technical
services group provides qualitative and quantitative data to
be used at all levels of program operation. Consequently,
each sample procured must be adequately representative of
emissions from the pollutant source or of the atmosphere
sampled. In addition, analysis of the sample, performed in
1-2
-------
the field or in the laboratory, by automatic instrumentation
or by wet chemical means, must provide data that accurately
describe the qualitative or quantitative characteristics of
the sample. In some instances, incorrect data could lead
to faulty interpretations, and this could be worse than no
data at all.
Important and far-reaching decisions may be based on
air quality and emission data which sometimes may be pre-
sented as evidence in courts of law. Aerometric data will
be used to determine whether standards are being met. If
results indicate violation of a regulation, the appropriate
enforcement group is required to take action. With the
current emphasis on legal action and social pressures to
abate pollution, personnel in the technical services group
must be made aware of their responsibility to provide re-
sults that present a reliable description of the sample. In
addition, the analyst should know that his professional
competence, the procedures he uses, and the values he reports
may be presented and challenged in court. To meet such a
challenge, all data must be supported by a detailed program
that documents the proper control of all factors affecting
the reported result.
In testing of pollutant sources, the economic implica-
tions alone are sufficient reason for exercising extreme
care in sampling and analysis. Plant operators may use
1-3
-------
these data as a basis for decisions to change a process,
install control devices, or even to construct new facilities.
Special projects and short-range development studies
in air pollution control must be based on sound laboratory
data. The value of a development effort will depend on the
validity of laboratory results. The progress of a special
study and alternative experimental pathways, especially,
are evaluated on the basis of accumulated data; the final
results and recommendations are usually evaluated by pre-
sentation of data such as averages, standard deviations,
ranges, frequency distribution, and confidence limits.
For these reasons and many more, a quality control
program to assess and document the reliability of data
is essential. Although most chemists, engineers, and
technicians practice some personal form of quality control,
they do so at varying levels and degrees of proficiency,
depending on such factors as professional integrity, back-
ground and training, and understanding or awareness of the
scope and importance of the work they are engaged in. Un-
fortunately, these informal efforts at quality control are
often inadequate and usually fail to provide adequate docu-
mentation.
Because of the routine nature of the normal workload,
or, perhaps, the pressures of occasional high-priority
"rush" projects, quality control can be neglected easily.
1-4
-------
Therefore, in order to assure validity and reliability of
the final results, it is imperative that each agency re-
quire a specific control program for every sampling pro-
cedure and analytical test.
1.3 Objectives and Scope of These Guidelines
The objective of this document is to provide guidance
or instruction for agency personnel at all levels to assist
in the development or modification of quality control pro-
grams. The diverse functions of the many types of agencies
that will use these guidelines necessitate some generali-
zation; whenever possible, however, we provide examples to
illustrate application of the principles cited. We attempt
to answer such questions as:
1. "Are the data valid?"
2. "Are they good enough for the intended use?"
3. "Is the technical services group performing
consistently?"
4. "How can we be sure that our equipment is
capable of providing correct results and
is operating so that it does?"
5. "How should we document or demonstrate our
level of performance to others?"
In the following pages Section 2 describes the major
characteristics of quality control, indicating the types of
activities available for incorporation into a quality con-
trol program. Sections 3, 4, and 5 deal with atmospheric
monitoring, source monitoring, and laboratory operations,
1-5
-------
respectively, considering for each of these functions three
major phases of quality control:
0 Control of physical parameters
0 Analysis of the measurement process
0 Documentation and demonstration of quality control.
Section 6 describes the basic techniques of data handl-
ing and data evaluation, particularly as they are applied in
air pollution control laboratories.
1-6
-------
2.0 THE MAJOR ELEMENTS OF QUALITY CONTROL
The major elements of a quality control program can
be considered broadly in two categories: control of the
physical parameters and analysis of the total measurement
process. We differentiate these two aspects of quality
control chiefly as an aid to understanding. Control of
physical parameters entails such functions as calibration,
maintenance, and standardization of materials. Analysis of
the total measurement process is a management tool for con-
tinuous evaluation of the performance capability of a
technical services laboratory and of the data the laboratory
produces. Analysis encompasses statistical monitoring
techniques in conjunction with such evaluation techniques
as analyzing spiked samples, replicate analysis, and inter-
laboratory comparisons. We may say, then, that control of
physical parameters is designed to reduce the frequency of
occurrence of errors in laboratory operation and that pro-
cedures analysis is applied to determine the effectiveness
of the physical control measures. An effective quality con-
trol program requires both kinds of effort, aimed at one
goal: the generation of valid data. Figure 2.1 illustrates
how each category relates to each of three phases of the
total sample analysis system:
0 Sample collection
0 Sample analysis
0 Data acceptance and performance evaluation
2-1
-------
[Figure 2.1. Quality control in total analytical system. The flow chart covers Phase I (sample collection), Phase II (sample analysis), and Phase III (performance evaluation), linking measurement system checks, review of analyst performance, review of control criteria, and checks of procedures or physical parameters through decision blocks.]
-------
2.1 Controlling Physical Parameters
Control measures to assure the production of valid
data can also be thought of simply as good operating practices,
in both sampling and laboratory analysis. Although details
vary with each laboratory and application, the practices
usually considered are these:
0 Calibration
0 Functional checks of components in the measure-
ment system
0 Scheduled preventative maintenance
0 Nonscheduled maintenance
0 Standardization of materials
0 Control of interferences
0 Recording procedures
0 Batch-checking procedures
0 Good housekeeping techniques
0 Control procedures for auxiliary services
Examples of these procedures and criteria for applying
them are given in the sections dealing with field and labo-
ratory operations. Management of each technical services
group will require decisions regarding which of these measures
should be applied, how often, and specifically in what way
(detailed protocol). Among the many elements that will in-
fluence these decisions are such practical matters as
availability and cost of materials, availability and cost of
manpower, logistics, scheduling, and legal requirements.
Further, the choice of quality control methods will be in-
fluenced by final application of the data generated and by
the results of the continuing second step in quality control:
analysis of the total measurement system.
2-3
-------
2.2 Analysis of the Total Measurement System
The, effectiveness of the types of physical controls
just discussed -- calibration, maintenance, standardization,
and the like — can be determined by a number of statistical
and other techniques, including analysis of data and review
of laboratory records and reports. Procedures for this type
of detailed analysis are amenable to a systematic approach.
We are speaking not about occasional, random spot-checks of
laboratory data and staff performance, but about a continuous,
orderly process of administrative analysis. Techniques of
such analysis are discussed in detail and exemplified in later
chapters concerning field and laboratory operations. At this
point we consider only a few fundamentals.
2.2.1 Precision and Accuracy
The terms "precision" and "accuracy" denote specific,
measurable characteristics of laboratory analysis. They
are key concepts of quality control, defined as follows:
0 Precision is a measure of the reproducibility of
results. It is determined by replicate analyses,
and it represents the variability of results among
those replicate analyses. Precision can be expressed
as standard deviation, variance, or range.
2-4
-------
0 Accuracy is the difference between a measurement
and an accepted value.2 Accuracy represents the
magnitude of error in a measurement. It is ex-
pressed either as relative error in percent or
in concentration terms such as parts per million.
Accuracy is determined by comparing analytical re-
sults from analysis of unknown samples to results
from analysis of reference materials.
Most of the critical parameters in an analytical system
should be evaluated in terms of precision and accuracy. If
they are not obvious, these critical parameters can be
identified through sensitivity testing.
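The two definitions above can be illustrated with a short numerical sketch. The following Python fragment (the replicate values and the reference value are hypothetical, chosen only for illustration) expresses precision as the standard deviation and range of replicate analyses, and accuracy as the relative error of their mean against an accepted reference value.

# Minimal sketch: precision and accuracy for a set of replicate analyses
# compared against an accepted (reference) value.  All values are
# illustrative assumptions, not data from this manual.
import statistics

replicates = [0.104, 0.101, 0.106, 0.103, 0.102]    # ppm, hypothetical replicate results
accepted_value = 0.100                               # ppm, reference material value

# Precision: variability among the replicate analyses
std_dev = statistics.stdev(replicates)               # sample standard deviation
value_range = max(replicates) - min(replicates)      # range of the replicates

# Accuracy: difference between the measurement and the accepted value
mean_result = statistics.mean(replicates)
relative_error_pct = 100.0 * (mean_result - accepted_value) / accepted_value

print(f"precision: std dev {std_dev:.4f} ppm, range {value_range:.4f} ppm")
print(f"accuracy: relative error {relative_error_pct:+.1f} %")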
2.2.2 Control Chart Techniques
Several techniques are available for plotting control
charts of accuracy and precision. Choice of a technique
for computing precision depends primarily on the change in
reproducibility as a function of change in the parameter
being measured. Chapter 5 presents a method for determin-
ing the type of precision control chart to be used in a
given situation.
Choice of a technique for expressing and for plotting
accuracy data on a control chart depends on the nature of
the sample, interferences, and the sensitivity range of
the method. The techniques used to determine accuracy are:
2-5
-------
Analysis of primary or working standards.
Spiked samples.
Percent recovery calculations.
The major difference between precision control charts
and accuracy control charts is that precision charts show
variability between sets of measured values, whereas
accuracy charts show variability between measured values
and known values.
To enhance the reader's understanding of the statistical
terms that are used in discussing accuracy and precision in
later chapters, we provide some important definitions:
0 Measures of Central Tendency - Parameters such as
the arithmetic mean, geometric mean, median, mode, etc.
which are used to describe the point about which the data
tend to cluster.
0 Arithmetic Mean - The most commonly used measure of
central tendency; it is the sum of the values of the observations
divided by the number of observations.
Population Mean:  μ = (1/N) Σ Xi,   i = 1, 2, 3, ..., N
where  N = number of observations in total population
       Xi = observed values
0 Frequency Distribution - Grouping observed values
into specific categories.
0 Normal distribution - The bell-shaped probability
distribution which is determined by two parameters, i.e. the
mean and the standard deviation.
2-6
-------
0 Variance - A measure of the variation of individual
observations about the mean. The variance of the popula-
tion and of a sample are σ² and S² respectively.
σ² = (1/N) Σ (Xi - μ)²,   i = 1, 2, 3, ..., N
where  σ² = variance of population
       Xi = observed values
       μ = population mean
       N = number of observations in total population
S² = (1/(N-1)) Σ (Xi - X̄)²,   i = 1, 2, 3, ..., N
where  S² = variance of sample
       Xi = observed values
       X̄ = sample mean
       N = number of observations in the sample
0 Standard Deviation - A measure of the variation of
individual observations about the mean. The unit of measure-
ment for the standard deviation is the same as that for the
individual observations. The standard deviation (equal to
the square root of the variance) is referred to as σ and S
for the population and a sample respectively.
0 Random Error - Repeated analyses to determine the
concentration of a contaminant in a sample will usually re-
sult in different values. These values, which tend to
cluster about the true value, are caused by indeterminate
2-7
-------
errors. The distribution of random error is generally
assumed to be normal with a mean equal to the true value
and a variance of σ².
0 Bias - The result of a determinate (but possibly
unknown) source of error which causes the result of an
analysis to be above or below the true value. Typical
sources of bias include improper calibration and human error
in reading a meter or a color change.
0 Range - The difference between the maximum and
minimum values for a sample of observed values. When the
number of observed values is small, the range is a
relatively sensitive measure of general variability. As
the number of observations increases the efficiency of
the range (as an estimator of the standard deviation)
decreases rapidly.
0 Coefficient of Variation - The ratio of the
standard deviation to the mean, also referred to as the
relative standard deviation. It is usually expressed as
a percentage and is given by
CV = (S / X̄)(100) %
where  S = standard deviation of a sample
       X̄ = mean of a sample
0 Confidence Interval - A statistic (e.g., the mean X̄)
is computed from the data for a sample. The statistic is
2-8
-------
then used as a point estimate of the population para-
meter (e.g., the mean μ). It is recognized that the
statistic computed from a second sample would not be
identically equal to that for the first sample. Because
of this, points A and B are determined such that it can be
said with a specified probability that the true value of
the population parameter lies within the interval de-
scribed by A and B.
For example, the probability statement for the 95%
confidence interval estimate of the population mean is
given by:
P(X̄ - tₙ₋₁S/√n < μ < X̄ + tₙ₋₁S/√n) = 0.95
where  X̄ = sample mean
       S = sample standard deviation
       tₙ₋₁ = Student "t" value for n-1 degrees
              of freedom
       n = number of observations in the sample.
The probabilities usually associated with confidence
interval estimates are 90%, 95% and 99%. For a given
sample size the width of the confidence interval increases
as the probability increases.
0 Confidence Limits - The end points of the confi-
dence interval A and B as discussed above, where:
2-9
-------
A = X̄ - tₙ₋₁S/√n
B = X̄ + tₙ₋₁S/√n
0 "t" Distribution - A probability distribution de-
veloped by W. S. Gosset (writing under the pseudonym
"Student") used in the computation of confidence interval
estimates when σ (the population standard deviation) is
unknown. In such a case S (the sample standard deviation)
is used as an estimate of σ. When the sample size is
small the value of "t" for a given probability level
differs significantly from the "z" value for the normal
distribution. For example, in determining the 95% con-
fidence interval estimate of the mean when the sample
size is 10, the value of t is 2.262 whereas the value of
z from the normal distribution is 1.96 (regardless of
sample size).
0 Adjustment Factors - Adjustment factors as applied
in the manual are multipliers used to calculate statistical
control limits for control charts. They provide a method
of approximating the distribution of all the values in the
universe when calculating statistical limits. This is
necessary because the distribution of sample values differs
from the distribution of universe values. The factors
used in this manual are D₃, D₄, B₃, B₄, and A₂. Definitions
2-10
-------
of these factors and formulae for computing them are in
Appendix A. Tables with the factors used for the 99%
confidence interval are in Appendix B. The application
of each of the factors is:
0 D₃ - Compute the lower control limit for a
range control chart.
0 D₄ - Compute the upper control limit for a
range control chart. In some cases,
D₄ can be used for the lower control
limit also (see Section 5.3.6).
0 B₃ - Compute the lower control limit for
standard deviation or coefficient of
variation control charts.
0 B₄ - Compute the upper control limit for
standard deviation or coefficient of
variation control charts.
0 A₂ - Compute upper and lower control limits
for average control charts.
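As a minimal computational sketch of the definitions above, the following Python fragment computes the mean, variance, standard deviation, coefficient of variation, and a 95% confidence interval for a small set of hypothetical observations, and then applies the range-chart factors D₃ and D₄ and the averages-chart factor A₂ to obtain control limits for duplicate analyses. The observed values, grand mean, and average range are invented for illustration; the factor values for subgroups of two (D₃ = 0, D₄ = 3.267, A₂ = 1.880) are quoted from standard control chart factor tables, and the scipy library is assumed to be available for the Student "t" value.

import math
import statistics
from scipy import stats   # assumed available for the Student "t" value

# --- basic statistics for a set of observed values (hypothetical data) ---
sample = [1.4, 1.5, 1.3, 1.4, 1.2]
n = len(sample)
x_bar = statistics.mean(sample)              # arithmetic mean
s2 = statistics.variance(sample)             # sample variance S^2 (divisor N-1)
s = math.sqrt(s2)                            # sample standard deviation S
cv_pct = 100.0 * s / x_bar                   # coefficient of variation, percent

# --- 95% confidence interval for the population mean, t with n-1 d.f. ---
t_value = stats.t.ppf(0.975, df=n - 1)
half_width = t_value * s / math.sqrt(n)
A, B = x_bar - half_width, x_bar + half_width    # confidence limits

# --- control limits from the adjustment factors, for duplicate analyses ---
# Factor values for subgroups of size 2, taken from standard tables (assumed).
D3, D4, A2 = 0.0, 3.267, 1.880
grand_mean, r_bar = 1.35, 0.10               # assumed grand mean and average range
range_lcl, range_ucl = D3 * r_bar, D4 * r_bar
avg_lcl, avg_ucl = grand_mean - A2 * r_bar, grand_mean + A2 * r_bar

print(f"mean {x_bar:.3f}  S {s:.3f}  CV {cv_pct:.1f}%")
print(f"95% confidence limits: A = {A:.3f}, B = {B:.3f}")
print(f"range chart:    LCL {range_lcl:.3f}  UCL {range_ucl:.3f}")
print(f"averages chart: LCL {avg_lcl:.3f}  UCL {avg_ucl:.3f}")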
2.2.3 Review and Control Procedures
Non-statistical review procedures are also required
for determining validity of data. These include procedures
for evaluating personnel performance, instrument performance,
and quality of reagents and materials.
Procedures for checking the performance of laboratory
analysts include round-robin tests, both intra-laboratory
2-11
-------
(between individual analysts or groups) and inter-
laboratory (between members of different organizations).
Inter-laboratory proficiency testing can be applied to
both field sampling and laboratory analysis. For deter-
mining trends in individual performance over a long
period of time, control charts showing the accuracy or
precision of the results obtained by a single analyst
using a specific analytical method can be useful.
Procedures for review and control of instrument per-
formance include review of maintenance logs and function
check data and the establishing of control limits for
calibration curves. Procedures are also required for
checking automatic recording and transmission equipment.
Review and control procedures for checking quality
of reagents and materials include scheduled batch check-
ing and restandardization.
2-12
-------
3.0 QUALITY CONTROL IN ATMOSPHERIC SAMPLING
Attaining the highest possible quality of data from
an atmospheric monitoring program requires good operating
procedures and methods of determining the level of quality
achieved. Quality control procedures for atmospheric
monitoring in the field are different from those for
laboratory analyses, because of different operating con-
ditions. Further, air monitoring does not involve uniform
samples, the concept of sample replication is not easily
applied, and the occasional use of unattended automatic
instruments in remote locations may complicate the appli-
cation of quality control techniques. In field operations
the major application of statistical techniques is for
determining that the operational variables of analyzers
and samplers are within acceptable limits. The most im-
portant control variables for which criteria should be
adopted are:
0 Sample and reagent flow rate
0 Span and zero drift
0 Output noise
0 Instrument response time
0 Recorder dead band
0 Sample conditioning
0 Environmental conditions
0 Power supply
0 Response to functional tests
Standard procedures should be established to control
these parameters. Also, output data may be checked for
extreme values, and the total performance of an analyzer
3-1
-------
may be measured in terms of its output of good data as a
proportion of total in-place operating time.
This chapter discusses quality control procedures
common to all phases of air quality monitoring, including
monitoring systems and integrated or static monitoring
techniques.
3.1 Controlling the Physical Parameters
3.1.1 Environment
Air sampling is conducted under a variety of environ-
mental conditions: monitoring sites range from fixed, ground-
level, enclosed (interior) stations controlled for tempera-
ture and relative humidity and having adequate, well-
regulated power supplies to outdoor stations located at
various altitudes and exposed to widely ranging tempera-
tures, possibly mobile platforms mounted in aircraft or
vans. These environmental factors can affect the reliability
of the sampling device and the quality of the data obtained.
Table 3.1 lists some of the more important environmental in-
fluences cited by the International Electrotechnical
Commission and some possible effects of variation. In
atmospheric sampling, each sampler and instrument used
should be examined for these and other possible environmental
effects with particular attention to those mentioned in
methods descriptions and listed by manufacturers. Every
attempt should be made to adhere to absolute limitations
3-2
-------
TABLE 3.1
ENVIRONMENTAL INFLUENCES AND EFFECTS
Influencing Factor: Ambient Temperature
Effects: Sample and reagent flow rates, life of electronic
components, amplifier drift, motor performance, ink drying,
reagent evaporation

Influencing Factor: Barometric Pressure
Effects: Mass flow rate of air sample; requires correction
to m.s.l. pressure in reporting mass concentrations of
contaminants

Influencing Factor: Vibration and Shock
Effects: Alignment of optical components, joint leakage,
microphony in NDIR analyzers, damage to fragile components,
recorder pen movement

Influencing Factor: Electric Fields
Effects: Interference in electronic signal processing

Influencing Factor: Voltage
Effects: Performance of voltage-sensitive electronic
components, motor speed

Influencing Factor: Frequency
Effects: Speed of synchronous motors (chart speed, for
example), performance of frequency-dependent amplifiers
3-3
-------
and, where necessary, to make operating adjustments and
calculation corrections that will account for non-
controllable variations. Examples of such practices in-
clude the calibration of rotameters with regard to tempera-
ture (particularly those used to meter reagent flow) and
the correction of contaminant concentrations reported in
mass units for pressure variation.
Good housekeeping is as important at air monitoring
sites as it is in laboratory operations in that it provides
the proper setting for an effective quality control program.
Although some of the effects of poor housekeeping may seem
indirect or may seem more related to occupational safety
and health than to system performance, they usually entail
poor maintenance and so a reduction in the quality of
data. Some elements of poor housekeeping practices and
possible adverse effects are given in Table 3.2.
3.1.2 Reagents and Supplies
Most of the relevant information on quality control as
related to reagents and supplies is presented in Chapter 5.
Several important points relating to air monitoring are
emphasized here. These include provision for proper trans-
port of reagents in the field and periodic checking of re-
agents that are held in instrument reservoirs or that are
regenerated in situ. Some important reagent checks are pH
tests, visual clarity, and efficiency of dye or color re-
3-4
-------
TABLE 3.2
EFFECTS OF HOUSEKEEPING PRACTICES
ON SYSTEM PERFORMANCE

Element: Excess atmospheric or accumulated dust
Possible Effects: Failure of electrical contacts and switches,
excessive wear of mechanical components, excessive soiling of
optical components

Element: Reagent spillage or leaks
Possible Effects: Corrosion, hazardous vapors, electrical
hazards, insecure footing

Element: Improper maintenance of air conditioning equipment
Possible Effects: Air conditioning failure, operation outside
of designated limits, equipment damage, freezing, inking pen
failures, excessive reagent evaporation

Element: Improper use of extension cords or overloading of circuits
Possible Effects: Poor voltage control, excessive circuit
failures, electrical hazard

Element: Improper cleaning of glassware and reagent containers
Possible Effects: Reagent contamination

Element: Non-systematized storage of parts and tools
Possible Effects: Loss of tools, absence of tools and parts
when required, subsequent system failure
3-5
-------
moval by carbon columns. Procedures for checking reagents
and schedules for their replacement or regeneration should
be documented and followed.
Filter media can be classified as reagents. Random
sampling schemes for checking new lots of filter media can
be established to measure flow characteristics, surface uni-
formity, presence of pinhold leaks, pH, ion blanks, and light
reflectance or transmittance characteristics. Acceptance
criteria should be related to the use of the media. An
example of an acceptance sampling scheme is presented in
Chapter 6 for data checking. The same kind of sampling de-
sign can be applied to media. As a practical consideration
such tests as checking for pinhole leaks in glass fiber
filters should be performed in the laboratory on every filter
to be used in the field. Acceptance sampling becomes practical
only when the number of acceptance criteria is large enough
to make testing of each item costly or time consuming.
3.1.3 Calibration Materials
Calibration materials, primarily gases, are introduced
into air monitoring devices in known quantities so that the
analyzer signal output can be related to the concentration
of the contaminant in the sample stream. Calibration gases
may be obtained commercially or generated on site. In
either case the major concern in quality control is the
degree of accuracy obtainable with the calibration.
3-6
-------
Secondary considerations are stability of output, effect of
storage, effect of operating conditions, and verification
of standards.
Commercial calibration gases are usually obtained in
pressurized cylinders. They are most often used "as delivered",
or they may be diluted on site. Because statements concern-
ing preparation tolerance and analysis accuracy vary among
suppliers, it is useful to define the term "calibration gas"
and to use the definition in procurement and validation. One
useful statement follows:
"For purposes of spanning operations, these
gases represent conventionally true values
against which indicated values are compared.
Therefore, the calibration gases should be
traceable to standards agreed on by the user
and the supplier or to national standards,
and the uncertainty of the conventionally
true values should be stated."
New certified cylinders of calibration gases should be
obtained before depletion of existing cylinders so that
cross-checks can be made. If results indicate uncertainty,
additional interorganization cross-checks or comparison
testing may be required.
Experience has shown that true "zero" gases can be as
difficult to obtain as true "calibration" gases, and their
use requires the same precautions. Increasingly, calibra-
tion gases are being generated on site. One rapidly develop-
ing technique is the use of permeation or diffusion sources.
3-7
-------
This technique usually involves the permeation of a pure
gas through a semi-permeable material (such as Teflon) from
a liquefied quantity of the gas encapsulated under its own
vapor pressure in a container having at least one surface
of the semi-permeable material. The pure gas is diluted by
a carrier gas flowing at a known and precisely controlled
rate. The rate of permeation is temperature-dependent and
must be known. One advantage of the technique is that the
rate of permeation can be verified by determining the weight
loss of the permeation source.
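A brief worked sketch of the permeation-source arithmetic may help. Assuming the permeation rate is established from the weight loss of the source over a weighing interval, the delivered concentration is that rate divided by the carrier (dilution) gas flow; all numbers below are hypothetical and units must be checked against the actual apparatus.

# Minimal sketch of the permeation-tube dilution arithmetic described above.
weight_loss_ug = 14400.0         # µg lost by the permeation source (hypothetical)
elapsed_min = 14 * 24 * 60       # over a two-week weighing interval
permeation_rate = weight_loss_ug / elapsed_min       # µg/min

carrier_flow_l_per_min = 2.0     # dilution air flow, liters per minute (hypothetical)
conc_ug_per_m3 = permeation_rate / (carrier_flow_l_per_min / 1000.0)

print(f"permeation rate: {permeation_rate:.3f} µg/min")
print(f"delivered concentration: {conc_ug_per_m3:.0f} µg/m³")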
Ozone must be generated on-site, by passing oxygen or
air over an ultraviolet source. At the present state of
development, a referee analysis must be performed.
Regardless of the type of calibration material, an
effective quality control program requires accuracy levels
with these materials that are consistent with the method of
analysis. Methods specified by the Environmental Protection
Agency and most other published methods will include state-
ments of the accuracy and replicability to be expected.
Obviously, the accuracy of the calibration materials must be
greater than the overall accuracy expected. The stated
analysis accuracy of calibration sources should be ±2% of the
true value.
3.1.4 Calibration and Maintenance Procedures
Calibration - Specific calibration procedures are
3-8
-------
described in both official and unofficial documents.7,8,9
Procedures specified for the method or analyzer in use should
be followed. The frequency with which calibrations are per-
formed will affect the confidence limits of the data, depend-
ing on the degree to which calibration curves change with
time. Initially the maximum time intervals between cali-
brations may be set by determining what regulatory require-
ments apply, as given, for example, in the surveillance
portion of an implementation plan. This minimal schedule
will, of course, be modified by resource and logistical con-
straints such as total available manpower, manpower time re-
quired for each calibration, station siting (e.g. distance
from central laboratory), and required laboratory assistance.
Each technical services organization should develop a
history of the replicability of calibration runs and of the
expected change in the calibration curve with time. On the
basis of such records, operators can determine the accept-
ability of a single calibration and perhaps modify the cali-
bration schedule.
Generally, calibration records should include space
for the following information:
0 Instrument identification (serial or other
identification number)
0 Date
0 Operator identification
0 Calibration technique
0 Description and identification of standard
material used
3-9
-------
0 Test or other code - for use in identifying
samples for referee analysis
0 Operator comments
0 Data
Example - In calibration of a nondispersive infrared
carbon monoxide analyzer, the following procedure might be
used:
0 Conduct functional checks described in the operator's
manual to be certain the analyzer performs within specifi-
cations.
0 Obtain five cylinders of carbon monoxide zero and
calibration gas covering the range of 0-90% of full scale
and including the zero gas and the 90% of full-scale concentration.
The stated concentrations should be known within ±2% of the
true concentration.
0 Span and zero the analyzer using the zero gas and
the calibration gas closest to 60% of full scale concentration.
Adjust the zero and span controls so that the recorder values
correspond to the specified cylinder values.
0 Introduce the other calibration gases sequentially,
allowing the analyzer to reach a stable value before changing
cylinder sources.
0 Plot the recorder scale values against the stated
values.
0 Repeat steps 3 through 5.
0 Inspect the calibration plots. If both plots indicate
3-10
-------
that any one span gas deviates from an otherwise smooth
curve that could be drawn between the remaining points, re-
check that calibration gas. If a smooth curve can be drawn
through at least one plot, proceed to analyze the difference
between duplicates.
0 The acceptability of the calibration precision can
be determined with a control chart for Range (a computational
sketch follows this procedure). For this in-
strument the control limits are R̄ ± 3σ/√n, where R̄ = average
value of the Range (in the case of duplicates this is equal
to the difference between the two values). Factors have been
tabulated to give the estimate of σ in terms of R̄. The
upper and lower control limits are D₄R̄ and D₃R̄ respectively.
D₃ and D₄ for duplicates are found in these tables to be
D₄ = 3.267 and D₃ = 0. The central line is R̄. After the con-
trol chart has been prepared the values of the Range are
plotted by successive sub-groups (each set of duplicates).
If the values for R fall within the control limits the
replicability or precision would be judged to be within con-
trol. As technical personnel gain experience with a par-
ticular instrument, they may desire to establish new limits
for the calibration curve. The decision will be based on
results of periodic recalculation of the standard deviation
of the Range. An example of how control limits for a cali-
bration curve are calculated is shown in Appendix C.
0 Record all data pertinent to the calibration. Take
3-11
-------
action to implement the use of a new calibration curve if
changed.
0 Introduce calibration gas to be used in routine span
operation and note scale value. This value will be necessary
for proper setting of the span control in future routine
operations until a new calibration is performed.
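The following is a minimal sketch, in Python, of the duplicate-calibration range check described in the procedure above. The recorder readings are hypothetical; the factors for duplicates (D₄ = 3.267, D₃ = 0) are those cited in the text.

# For each span gas the "range" is the absolute difference between the two
# recorder readings from the duplicate calibration runs; the chart limits
# use the tabulated factors for subgroups of two.
duplicates = [(18.2, 18.6), (39.5, 40.1), (61.0, 60.4), (81.8, 82.5)]  # hypothetical readings

ranges = [abs(a - b) for a, b in duplicates]
r_bar = sum(ranges) / len(ranges)        # central line of the range chart

D3, D4 = 0.0, 3.267                      # factors for duplicates (from the text)
lcl, ucl = D3 * r_bar, D4 * r_bar

for i, r in enumerate(ranges, start=1):
    status = "in control" if lcl <= r <= ucl else "out of control"
    print(f"span gas {i}: range {r:.2f} -> {status}")
print(f"central line {r_bar:.2f}, LCL {lcl:.2f}, UCL {ucl:.2f}")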
Other types of calibration may be performed, such as in
dynamic systems utilizing on-site generation of
calibration gases through use of permeation tubes, ultra-
violet generation of ozone, and other physical or chemical
processes. Still other forms of calibration include those
which are designed to calibrate the detector portion of the
analyzer only. These are sometimes referred to as "static"
or "reagent" calibration techniques. Regardless of the
form of calibration, a history of calibration data should be
developed such as described in the preceding paragraphs.
The data collected on replicate sampling during calibration
can be used to determine whether or not the analyzer or
sampler is performing according to expected or required
specifications and to determine the frequency with which
calibrations must be performed in order to keep within per-
formance specifications of the method.
Maintenance - From the standpoint of quality control,
proper maintenance of air monitoring equipment should improve
the quality of data and increase the recovery rate of valid
3-12
-------
data. Servicing and maintenance schedules should relate to
the purpose of the monitoring, the environmental influences,
the physical location of analyzers, and the level of operator
skills. Data provided by instrument manufacturers and the
EPA "Field Operations Guide for Automatic Air Monitoring
Equipment" are useful in developing an initial maintenance
program. Most such manuals present instructions for general
service and for service oriented to specific instruments.
They further indicate the recommended time intervals for
various types of service, such as tasks to be performed
daily, semi-weekly, bi-weekly, monthly, quarterly, and semi-
annually.
Example. Schedule for Daily Servicing - General
0 Upon arrival at the monitoring site observe all
recorders for indication of normal operation.
0 Check liquid traps in instruments and probes.
0 Check for broken sample lines; check, condition
of probe or sample line filters.
0 Check all connections for possible leaks.
0 Check for chart supply and replace charts where
needed.
0 Check for timing of all charts, timers and clocks.
0 Check all flow-rate indicators for proper flow.
0 Check all rotameters for dirt and water, particu-
larly adjacent to ball. Clean and dry if necessary.
0 Check sample conditioning apparatus.
0 Service inking pens where required.
0 Check for reagent supply on all wet chemical
analyzers.
0 Check level of reagent in all lines, reservoirs,
etc. on wet chemical analyzers. Check for evidence
of carryover.
0 Check gas cylinder pressure.
0 Check for noisy or leaking pumps. Lubricate where
required.
0 Check recorder dead zone on all recorders. Note
where checked.
3-13
-------
0 Check sampling schedule for static or intermittent
sampling devices and operate as required.
0 Record all maintenance and adjustments on log
books or forms as specified.
0 Record parts and supplies used on reorder form.
Service and maintenance should be performed by personnel
according to skills and staffing pattern. In general, station
operators perform routine servicing and trouble shooting
tasks; they should not attempt repairs for which they lack
the proper training or equipment, or for which the time re-
quired would interfere with other scheduled operations.
3.1.5 Design and Maintenance of Probes and Manifolds
The sampling of gaseous or particulate air contaminants
ranges from exposure of a static (passive) device (such as
a lead peroxide candle) through sampling from individual
probes and shelters, to sampling with common probes and
manifolds using an auxiliary air-moving device. Various
probe designs in common use have been summarized and recommen-
dations given to assure that the air sample reaching the
analyzer or sampler is altered minimally. After an
adequately designed sampling probe and/or manifold has been
selected and installed, the following steps will help in
maintaining constant sampling conditions:
0 Conduct leak test - Seal all ports and pump down
to approximately 0.5 inch water gauge vacuum as
indicated by a vacuum gauge or manometer connected
to one port. Isolate the system. The vacuum
measurement should show no change at the end of a
15-minute period.
3-14
-------
0 Establish cleaning techniques and schedules - A
large-diameter manifold may be cleaned by pulling
through it a cloth on a string. Otherwise the
manifold must be disassembled periodically and
cleaned with soap and water. Visible dirt should
not be allowed to accumulate.
0 Plug the ports on the manifold when sampling lines
are detached. This will help to maintain the de-
sired mass/velocity ratio.
0 Maintain a flow rate in the manifold at 3 to 5 times
the total sampling requirements or at a rate equal
to the total sampling requirement + 5 ft³/min (a
sizing sketch follows this list). This
will help to reduce sample residence time in the
manifold and insure adequate gas flow to the
monitoring instruments.
0 Vacuum in the manifold should not exceed 0.25
inch water gauge. Keeping the vacuum low will help
to prevent the development of leaks.
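A minimal sizing sketch for the manifold flow guideline above, with hypothetical analyzer demands:

# Size the manifold flow at 3 to 5 times the combined analyzer demand, or at
# the demand plus 5 ft³/min, per the guideline above.  Demands are hypothetical.
analyzer_demands_cfm = [0.2, 0.5, 1.0, 0.25]     # ft³/min drawn by each instrument
total_demand = sum(analyzer_demands_cfm)

flow_low = 3 * total_demand                      # lower end of the 3-5x rule
flow_high = 5 * total_demand                     # upper end of the 3-5x rule
flow_alt = total_demand + 5.0                    # alternative: demand + 5 ft³/min

print(f"total analyzer demand: {total_demand:.2f} ft³/min")
print(f"recommended manifold flow: {flow_low:.1f}-{flow_high:.1f} ft³/min "
      f"(or {flow_alt:.1f} ft³/min)")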
3.2 Analyzing and Controlling the Total Measurement System
Having applied all practical measures for control of
the physical components of a sampling/analysis operation,
the supervisor or manager proceeds with thorough analysis
of the total measurement system in terms of quality con-
trol. He does this by applying several analytical
techniques, which are described and illustrated throughout
3-15
-------
this manual. In this chapter the emphasis is on atmospheric
sampling; the techniques presented here, however, such as
the functional analysis described next, usually can be
applied in other phases of technical services operations.
3.2.1 Functional Analysis
In application of quality control measures, the total
measurement system can be viewed as a complex consisting of
the analytical method, the instrument or analyzer, and the
operator. The critical components of this complex are
identified by functional analysis. We exemplify this
technique as it applies to continuous monitoring, because
continuous monitoring instruments do not provide a discrete
sample with which to perform conventional statistical pro-
cedures, such as use of replicate analyses, spiked samples,
and control samples.
A useful first step in functional analysis is to pre-
pare a schematic diagram of the analyzer, identifying all
major components and controls. Briefly summarize the
theoretical principles of the measurement and indicate all
algorithms and transfer functions relating the measured
quantity to the final data statement of air quality, such
as pollutant concentration. With this information one can
identify the primary variables directly affecting instrument
performance. Some of these variables include:
3-16
-------
0 Source output (active devices)
0 Detector sensitivity
0 Contactor efficiency (where reagents are used)
0 Sample air flow rate
0 Reagent flow rate
0 Cell pressure (nondispersive infrared analyzers)
0 Optical path length and alignment (colorimeters
and spectrophotometers)
0 Amplifier gain and stability
0 Reagent quality
Next, extend the analysis to determine means for control-
ling these critical parameters and for detecting off-specifi-
cation performance. Some parameters (such as flow rates and
pressures) can be observed directly, but often one must
monitor performance by observing secondary parameters that
may indicate malfunction or off-specification performance.
Such secondary parameters include:
0 Signal output noise
0 Span or zero drift
0 Repetitive transient signals
0 Time response to step input of contaminant
0 Atypical appearance of charts
0 Response to built-in function tests
0 Physical appearance of reagent lines, reagent,
and other components
Of these observable indicators of performance, the ones
most adaptable to statistical monitoring are noise, span
drift, and zero drift.
In contrast to continuous monitoring, batch or sequential
monitoring (accomplished by collection of filter samples,
use of impingers, or similar procedures) is much more
closely related to laboratory procedures. Batch monitoring
operations, like those in the laboratory, can be evaluated
3-17
-------
by use of replicate sampling, standard or spiked samples,
and cross-checking among samplers or operators. Beyond
these, however, a thorough functional analysis of the
sampling and analysis procedures should be performed to
identify critical operating parameters and sources of
error, and to define what data histories are needed, how
they shall be collected, and procedures for manipulating
and interpreting the data.
An initial procedure in this functional analysis of
batch monitoring might be to construct a flow chart of the
sampling and analysis processes, identifying all performance
criteria and control and measurement points. Calculations
used in obtaining final results would be shown, as would
information on expected repeatability and accuracy.
Figure 3.1 illustrates a flow chart constructed for the
determination of suspended particulate matter by the high-
volume filter sampler method. The illustration indicates
that flow rate, time of exposure, sample conditioning, and
initial and final weights represent the critical parameters
in the hi-vol method. Opportunities for cross-checking in
the weighing operations also arise, and parallel sampling
can be conducted.
3.2.2 Sensitivity Analysis
Following the functional analysis of the measure-
ment system the technique of sensitivity analysis may be
3-18
-------
[Figure 3.1. Functional analysis of high volume suspended particulate sampling. The flow chart traces the steps: obtain and inspect filter, equilibrate filter under specified conditions, weigh filter (against balance specifications), load filter and start sampler, record time and flow (against sampler operating specifications), terminate sampling and unload, equilibrate and reweigh the filter, record weight, route the filter to other analyses or storage, calculate and document, and report results.]
3-19
-------
used to determine the effect of variations in either primary
measurement control variables or in secondary influences or
interferences. Sensitivity is defined as the partial de-
rivative of a system output with respect to input. In a
complex system a more practical definition of sensitivity,
often employed for analytical purposes, is the incremental
change in output resulting from an incremental change in
input.
If the system can be described in terms of a mathematical
model, an analytical approach may be taken, usually utilizing
a factorial design. In the control of air monitoring systems
an empirical approach to sensitivity testing is more likely.
Example
Excessive zero drift has been detected in a continuous
air monitoring instrument, but no component failure is indi-
cated. Knowing the analyzer, the operator suspects that
temperature or voltage variations may be influencing the
instrument behavior. The operator wants to know how the
voltage and temperature interact. Because the number of
variables is small he could select a full factorial design
of an experiment designed to test each variable at three
levels - (1) within specified normal range, (2) above
normal range, and (3) below normal range. The total number
of experiments required would be pⁿ where
n = no. of factors = 2
p = no. of levels = 3
3-20
-------
or 3² = 9. The experiments can be identified as T₁V₁, T₁V₂,
T₁V₃, T₂V₁, T₂V₂, T₂V₃, T₃V₁, T₃V₂, and T₃V₃, where T and V refer
to temperature and voltage respectively, and the subscripts
refer to the level. Other tests, such as Youden ruggedness
testing, could be used if the number of factors and/or
levels is large. Fewer tests would be required than with the
above technique, but the results would be nearly the same.
If an analysis of the data shows a relationship between
one or more of the influence parameters and the dependent
variable, in this case zero drift, a case would be made for
better control of that parameter.
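A minimal sketch of the full factorial enumeration described in this example, written in Python; the level labels follow the text (level 1 within the specified normal range, level 2 above it, level 3 below it):

# Two factors (temperature and voltage), each at three levels, give
# p**n = 3**2 = 9 experiments.
from itertools import product

factors = {
    "T": ["T1", "T2", "T3"],   # temperature: normal, above normal, below normal
    "V": ["V1", "V2", "V3"],   # line voltage: normal, above normal, below normal
}

runs = list(product(factors["T"], factors["V"]))
print(f"number of experiments: {len(runs)}")      # 9
for t_level, v_level in runs:
    print(t_level, v_level)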
3.2.3 Compiling and Using Data Histories
In a quality control program, positive steps should
be taken to provide methods, equipment, facilities, staff,
and operational procedures that make possible the production
of high quality data. Means of measuring performance and
detecting promptly any deviation from acceptable performance
should be adopted. Corrective actions in the event that
data quality falls outside acceptable limits should be
defined. Adequate data histories must be available with
which to verify statements concerning the quality of the
data.
In addition to the calibration data described earlier,
data histories should be collected on all of the critical
measures of performance identified in the functional analysis.
3-21
-------
Data histories most commonly kept are those for sample air
and reagent flow rates, results of span and zero checks,
results of use of other standards or function tests, noise,
response time, and recorder dead zone. For example, if the
specification for zero drift for a continuous analyzer is
that it not exceed 1% of full scale/24 hours, and if the
data acceptance criteria are set to reject all data recorded
during a period in which the total drift exceeds 2% of
scale, then an initial schedule might be set for zero check-
ing once every two days. Experience may require adjusting
the checking interval.
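The scheduling arithmetic in this example can be written out as a small sketch; the drift specification and rejection criterion are the ones quoted above.

# With a zero-drift specification of 1% of full scale per 24 hours and a
# rejection criterion of 2% of full scale accumulated drift, the worst-case
# interval between zero checks is the rejection threshold divided by the
# drift rate.
drift_spec_pct_per_day = 1.0        # analyzer specification, % full scale / 24 h
reject_threshold_pct = 2.0          # data rejected if accumulated drift exceeds this

max_interval_days = reject_threshold_pct / drift_spec_pct_per_day
print(f"initial zero-check interval: every {max_interval_days:.0f} days")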
Use of data histories in maintaining quality control is
based on the recognition that no analytical method or instru-
ment yields perfect results; when repeated determinations
are made on the same sample, some variation in results is
inevitable. Even if all identifiable or assignable causes
of variation are removed, some indeterminate sources remain.
These indeterminate errors should be randomly distributed.
Since the behavior of these random events can be predicted
statistically, limits within which repeated measurements
should fall can be computed. The design and use of control
charts and other uses of data histories rely on the com-
putation of these 'control limits'. It should be understood
that even though application of statistical techniques
indicates that an analysis or air monitoring process is in
3-22
-------
a state of control, this does not mean that every analysis
or element of data is acceptable. Nor does it mean that the
control limits are rigid; they can be adjusted to produce
higher quality data.
As discussed earlier, replicate analysis cannot easily
be applied to automatic continuous analyzers, and quality con-
trol of the data they produce must usually be based on main-
taining performance characteristics within limits. One
important indication of low-quality data from a continuous
analyzer is excessive zero or span drift. Drift affects the
quality of data because the calibration curve has been
altered without the data reduction process having been
automatically altered at the same time. Within pre-set
limits it is possible to interpolate between the expected
and the altered calibration curve if the altered curve can
be identified. This is done by field operations known as
zeroing and spanning, during which a known zero gas and a
single calibration gas are introduced to the analyzer. The
scale value is noted and compared with the value expected
on the basis of the most recent full calibration. In effect
two calibration curves are available, one representing the
start of the time period immediately following the previous
standardization and one for the time of the current standardi-
zation. If the change is not too great, a linear interpola-
tion between the two curves is performed for data points
between the two standardization operations. The difference
between the succeeding span and zero readings is called span
and zero drift and is expressed in terms of concentration
per unit time, e.g., µg/m³ per 24 hours.
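A minimal sketch of the interpolation just described, assuming
a linear calibration through the zero and span points (the
function name and example numbers are illustrative, not taken
from the report):

    def corrected_value(reading, t, t0, zero0, span0, t1, zero1, span1,
                        span_gas_conc):
        """Correct an analyzer reading taken at time t by interpolating
        linearly between the zero/span results of two standardizations
        performed at times t0 and t1. zero* and span* are the scale values
        observed for zero gas and the single span gas; span_gas_conc is the
        known span gas concentration."""
        frac = (t - t0) / (t1 - t0)              # position within the period
        zero_t = zero0 + frac * (zero1 - zero0)  # interpolated zero offset
        span_t = span0 + frac * (span1 - span0)  # interpolated span response
        # Two-point (zero/span) calibration applied at the interpolated state:
        return (reading - zero_t) * span_gas_conc / (span_t - zero_t)

    # Example: drift of +1 scale unit in zero and -2 in span over 48 hours.
    print(corrected_value(reading=40.0, t=24.0,
                          t0=0.0,  zero0=0.0, span0=80.0,
                          t1=48.0, zero1=1.0, span1=78.0,
                          span_gas_conc=0.50))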
One method of monitoring the state of control for span
drift would be to construct a control chart for the mean
span drift over some reasonable time period, perhaps two
weeks. The center line for such a chart is drawn at the
expected mean drift, µ, and the upper and lower limits will
be µ ± 3σ/√n. If the distribution of the span drift is
normal, these limits should contain approximately 997 values
out of every 1,000. If the subgroup span drift means are
expressed as x̄, the best estimate of µ is the overall
sample mean, x̿, which may be computed from the subgroup
means. If we have fewer than ten span operations during the
two-week period representing the subgroup, then R̄/d2 is an
unbiased estimate of σ, where R̄ is the mean of the subgroup
ranges. The factor A2 = 3/(d2√n) has been compiled and listed
in standard texts on statistics. Appendix A defines these
factors. Table 3.3 lists the span drift data for an
analyzer over 26 two-week periods. Since span checks were
made every other day, seven checks were made in every two-
week period. Therefore

    x̿ = Σx̄/n = 35.0/26 = 1.35
TABLE 3.3
SPAN DRIFT FOR ANALYZER A FOR 26 TWO-WEEK PERIODS

Period    Mean Span Drift (x̄)    Range (R)
   1            1.4                 0.8
   2            1.5                 0.9
   3            1.3                 0.6
   4            1.4                 0.8
   5            1.2                 0.9
   6            1.3                 0.6
   7            1.2                 1.1
   8            1.4                 1.3
   9            1.3                 0.4
  10            1.2                 0.3
  11            1.8                 0.6
  12            1.3                 0.9
  13            1.4                 1.1
  14            1.2                 0.4
  15            1.5                 0.7
  16            1.2                 0.8
  17            1.1                 0.8
  18            1.3                 0.7
  19            1.2                 0.2
  20            1.4                 0.5
  21            1.6                 0.7
  22            1.4                 0.9
  23            1.5                 0.8
  24            1.3                 0.9
  25            1.2                 1.2
  26            1.4                 0.4

Mean span drift (x̄) = average span drift for a two-week period.
Range (R) = max - min values for a two-week period.
    R̄ = ΣR/n = 19.3/26 = 0.74
    A2 = 0.419
(A2 is based on n = 7, the number of checks per two-week period.)
    A2R̄ = 0.419 (0.74) = 0.31
    x̿ + A2R̄ = 1.35 + 0.31 = 1.66
    x̿ - A2R̄ = 1.35 - 0.31 = 1.04
The information developed from Table 3.3 is plotted on the
control chart shown in Figure 3.2. It is obvious that,
except for only one two-week period, the analyzer operated
satisfactorily.
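As an informal check of the arithmetic above (not part of the
original report), the Table 3.3 data reproduce the chart
parameters directly:

    # Mean span drift (x-bar) and range (R) for the 26 periods of Table 3.3.
    xbar = [1.4, 1.5, 1.3, 1.4, 1.2, 1.3, 1.2, 1.4, 1.3, 1.2, 1.8, 1.3, 1.4,
            1.2, 1.5, 1.2, 1.1, 1.3, 1.2, 1.4, 1.6, 1.4, 1.5, 1.3, 1.2, 1.4]
    rng  = [0.8, 0.9, 0.6, 0.8, 0.9, 0.6, 1.1, 1.3, 0.4, 0.3, 0.6, 0.9, 1.1,
            0.4, 0.7, 0.8, 0.8, 0.7, 0.2, 0.5, 0.7, 0.9, 0.8, 0.9, 1.2, 0.4]

    A2 = 0.419                           # tabulated factor for n = 7 checks
    grand_mean = sum(xbar) / len(xbar)   # 35.0/26 = 1.35
    mean_range = sum(rng) / len(rng)     # 19.3/26 = 0.74

    ucl = grand_mean + A2 * mean_range   # about 1.66
    lcl = grand_mean - A2 * mean_range   # about 1.04
    print(round(grand_mean, 2), round(mean_range, 2),
          round(ucl, 2), round(lcl, 2))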
Control charts for duplicate testing, such as might be
developed for use with batch or intermittent monitoring,
are similar to those described in detail in Chapter 5 on
laboratory methods. Parallel sampling, such as might be
done with high-volume particulate samplers and bubblers,
can be analyzed by this technique.
A special control chart may be used for duplicates
obtained in comparison testing. If it is assumed that this
type of testing will be performed only at intervals, it may
be difficult to obtain an unbiased estimate of the true
expected difference, d, between duplicates on a control
chart on which the central line is 0. Existence of bias can
be determined from several indicators by using the method of
extreme runs:
Figure 3.2 Control chart for mean span drift (center line at
1.35, control limits at 1.04 and 1.66)
0 8 successive points are on the same side
of the central line
0 10 out of 11 are on the same side
0 12 out of 14 are on the same side
0 14 out of 17 are on the same side
0 16 out of 20 are on the same side
This type of chart is used, for example, in analyzing
data collected by a mobile unit that regularly checks the
air monitoring conducted at fixed sites. In this type of
operation it is imperative that the mobile unit and the
stationary instruments are sampling the same air mass.
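A rough sketch of how the extreme-run indicators listed above
might be applied to the mobile-versus-fixed-site differences;
the helper function and example values are hypothetical:

    def extreme_run_bias(differences):
        """Flag possible bias in paired (e.g., mobile vs. fixed-site)
        differences using the extreme-run indicators listed above:
        8 of 8, 10 of 11, 12 of 14, 14 of 17, or 16 of 20 of the most
        recent points on the same side of the central line (zero)."""
        rules = [(8, 8), (10, 11), (12, 14), (14, 17), (16, 20)]
        signs = [1 if d > 0 else -1 for d in differences if d != 0]
        for need, window in rules:
            recent = signs[-window:]
            if len(recent) == window and \
                    max(recent.count(1), recent.count(-1)) >= need:
                return True
        return False

    # Example: mobile-unit minus fixed-site readings drifting to one side.
    print(extreme_run_bias(
        [0.2, -0.1, 0.3, 0.4, 0.1, 0.2, 0.5, 0.1, 0.3, 0.2, 0.4]))  # True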
3.3 Documentation and Demonstration of Quality Control
3.3.1 Records
Administration of a quality control program requires
adequate records to validate data prior to process-
ing, to provide data histories, to aid in modification of
service procedures and schedules, to develop records of
parts usage, and to guide trouble shooting and repair. The
general categories of these records are:
0 Instrument service records - These may be in the
form of lab books or prepared multiple copy forms. The
California Air Resources Board has determined that the
latter type better serves management functions. These
forms provide space for entry of routine data such as flow
rates, filter or span checks, reagent checks (such as pH
of potassium iodide reagent for total oxidant), and main-
tenance performed. Similar forms could also include
acceptable performance criteria for convenient reference
by the station operator.
0 Calibration records - These would contain all data
applicable to calibration including certified analyses of
cylinder gases; flow rates; results of referee or side
stream analyses; recorder output data, including scale
values; lag time, rise time, fall time; copy of recorder
chart; gravimetric determinations, if any; temperatures;
barometric pressure; and description of calibration procedures.
0 Station log books - To include data on all station
changes, station conditions, dates of supply delivery, sub-
jective statements on air quality conditions, visibility
observations, and notes of unusual events that might influence
analyzer readings.
0 Data validation and reduction notes - These would in-
clude all entries necessary for chart editing, including
results of span and zero checks, flow rate adjustments,
function tests, down times for service, and notes on known
bad data. These entries may be made either on the chart or
on special report forms. They are normally made on the
chart for manual data reduction and on a special form for
automated data reduction. In the latter case, the notes
form a basis for supplementary computer instructions cover-
ing baseline data, span constants, and data to be excluded.
An example instrument service record form provides spaces for
entry of location, model number, A.R.B. number, operator, and
page; filter checks for OX, NOx, and NO2 (chart reading, sample
energy, and reference energy for each filter); solution flow
checks for oxidant, NOx, and NO2 (check dates, rates set, and
resets); KI solution pH checks by date; and dated maintenance
entries.
In addition to the records mentioned, provision should
be made for rapid notification concerning out-of-limits
analyzer performance as detected by the station operator
or, with telemetry, by the control center supervisor.
3.3.2 Reports
Reports prepared as part of a quality control program
are usually summaries designed to inform administrative
personnel as to the quantitative and qualitative level of
performance of the air monitoring activity. These reports
must be tailored to the size and complexity of the operation.
Several types of reports that might be employed are described
briefly.
0 Calibration Report - This report should summarize
calibration activities during the report period, including
equipment calibrated, date calibration performed, time re-
quired for calibration, source of standards, techniques used,
problems encountered, and indication of any change in cali-
bration. Quality control charts are useful here.
0 Maintenance Summary - This report should describe
significant maintenance, not routine servicing. It includes
such activities as replacement of major components and
modifications to the equipment. This report will aid in develop-
ing a history of parts used and operations performed, to
serve as input for a preventative maintenance program.
0 Data Recovery Report - This report describes the
performance of the air monitoring activity in terms of use-
ful data recovered. For each separate analyzer or batch
sampler one can determine a theoretical maximum amount of
data that should be available, based on time in place,
sample schedule, and scheduled downtimes for calibration,
span and zero checking, and preventative maintenance. This
report would list the percent of data recovered for each
analyzer and sampler, by location, for the report period
involved. A monthly or quarterly report should suffice.
A control chart may also be developed to indicate
whether instrument or station performance is within expected
limits on the basis of valid data, expressed as a percent of
data theoretically available. This is known as a control
chart for percent valid data (reference 10). The following parameters
are calculated:
p = mean percent valid data, expressed as a fraction (in
this case the overall mean determined from past experience
on percent of valid data).
100√(pq/n) = standard deviation of percent valid data.
q = 1 - p
n = number of hours in sample period, e.g. in one month.
For example, if a station is yielding 85% valid data
out of a possible 100%, the following calculations would
apply if the station had operated 720 hours during the
period of interest:
p = 0.85
q = 0.15
n = 720
√(pq/n) = 0.013
UCL (upper control limit) = 0.85 + 3(0.013) = 0.89
LCL (lower control limit) = 0.85 - 3(0.013) = 0.81
The control chart in this case is plotted with p as the
central line, and p ± 3√(pq/n) as the upper and lower control
limit lines. The percent valid data (p) for the monthly sub-
group is plotted on the y-axis, and the subgroup number is
plotted on the x-axis. If n is reasonably large, as for
example the number of hours in a month, the normal distribution
is a good approximation of the binomial distribution. In
this case the probability is only three in a thousand that
a value of p will fall outside the control limits by chance.
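A brief sketch (ours) of the percent-valid-data chart
computation for the worked example above:

    from math import sqrt

    p = 0.85        # long-run mean fraction of valid data
    q = 1.0 - p
    n = 720         # hours in the monthly subgroup

    sigma = sqrt(p * q / n)          # about 0.013
    ucl = p + 3 * sigma              # about 0.89
    lcl = p - 3 * sigma              # about 0.81
    print(round(sigma, 3), round(ucl, 2), round(lcl, 2))

    # A monthly subgroup value outside (lcl, ucl) signals station
    # performance that is unlikely (about 3 chances in 1,000) to be
    # due to chance alone.
    monthly_fraction_valid = 0.79
    print(lcl <= monthly_fraction_valid <= ucl)   # False -> investigate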
4.0 QUALITY CONTROL IN SOURCE SAMPLING
4.1 General Considerations
Currently, application of basic quality control elements
to source or emission testing is almost nonexistent.
Although operators are usually careful to clean and calibrate
equipment before testing, the equally or even more important
phases of sampling, such as statistical test design, control
of interferences from unknown compounds, and other steps to
insure the efficiency of the sampling procedure are often
ignored. The primary reasons for this are lack of knowledge
pertaining to emissions from specific sources and lack of
background data on various source sampling procedures.
In spite of these difficulties, however, the technical
services supervisor who is oriented toward quality control
will work toward developing a set of systematic procedures
that will assure the highest possible quality of source-
sampling data. He will find that many of the basic con-
cepts of quality control can be applied or adapted to
source sampling. As an example, consider the instrumental
sampling of emissions. Although most emission testing is
currently accomplished by manual techniques, instrumental
methods are being developed and in special cases have
proved successful for determining a specific gaseous com-
ponent in a 'clean' dry gas stream. For most instrumental
methods, the same decisions and criteria for maintaining
acceptable operation of atmospheric monitoring (see Chapter
3) will also apply to source sampling.
4.2 Controlling the Physical Parameters
4.2.1 Interferences
The goal of a source sampling team should be to collect,
store, and transport to the analytical laboratory a sample
that is as free as possible from interferences. The inter-
ferences most commonly affecting field samples are related
to these factors (references 11 through 14):
0 Composition of probes
0 Composition of collection media and filters
0 Cleaning procedures
0 Standardization of reagents
0 Storage and transport of samples.
These factors are analyzed more fully in Table 4.1.
TABLE 4.1
ELEMENTS FOR CONTROL OF INTERFERENCES

Element - Control Considerations

Probes - Inert to gases sampled; trial runs to test inertness
  in undefined environment; temperature control.
Media and filters - Filter requirements; non-reactivity of
  filters; requirements for distilled or deionized water;
  blank requirements.
Cleaning procedures - Inertness of cleaning solutions;
  elimination of residues.
Standardization of reagents - Normality; stability; frequency
  of exposure.
Storage and transport of samples - Storage requirements;
  inertness of containers; elimination of atmospheric
  influences; time required for transport and storage.
4.2.2 Sampling Rate and Sample Volume
The sampling rate must be known accurately in order
to relate the amount of sample collected to the stack gas
concentration. Direct displacement or totalizing meters
such as dry or wet test meters are preferred, since they
can accurately measure the gas volume even if the gas flow
rate varies, and they do not require constant attention.
Rate meters require careful measurement of total sampling
time and constant observation to make sure the rate does
not vary because of pressure changes in the sampling train.
Table 4.2 compares the accuracies of various types of flow
meters.
Evacuated tanks or flasks can also be used to determine
the volume of gas sampled if the initial and final pressure
and temperature in the tank are carefully measured and the
tank volumes are known.
4.2.3 Equipment Maintenance and Calibration
The accuracy with which one can measure various emission
parameters depends greatly on the accuracy of the instruments
used in sampling procedures. Some of the common sampling
components that require maintenance and calibration to
assure maximum accuracy are Pitot tubes, manometers, thermo-
meters, flow meters, and gas meters.
Rules for calibration of the various instruments used
in source testing are not available in the literature. In
Table 4.2 FLOW METER TYPES AND ACCURACY

Direct Measurement
  Gas Prover (inverted bell type) - gas - batch displacement -
    calibration - ±0.2%
  Frictionless Piston - gas - batch displacement - calibration -
    ±0.2%
  Bubble Meter - gas - batch displacement - calibration - ±0.2%
  Mass - liquid - mass measurement - calibration - ±0.1%

Fluid Dynamic Type
  Orifices(2) - gas or liquid - head loss - continuous and
    intermittent sampling - ±0.5% (max.)
  Venturi Meter - gas or liquid - head loss - continuous and
    intermittent sampling - ±0.5%

Direct Displacement (Roots type, piston pump, wet test meter,
  dry gas meter, cycloidal) - gas (some types gas or liquid) -
  continuous displacement - calibration, intermittent sampling,
  or continuous sampling - ±0.5% to ±1%

Area Meter
  Rotameter - gas or liquid - head loss - continuous sampling -
    ±1% to 10%

(1) Obtainable with NBS traceability
(2) Commonly used in field sampling work
practice, the measuring devices are calibrated against known
quantities or against devices known to provide a higher
degree of accuracy (see Table 4.2), and are then adjusted
to read the correct value. If the device cannot be adjusted,
it is replaced or used as a spare with an appropriate
correction factor.
Following are guidelines to the effective maintenance
and calibration of source sampling equipment:
0 Pitot Tubes - Compare with a standard type pitot
tube by inserting both Pitot tubes into a duct
and measuring the velocity at a specified point.
A correction factor is then calculated as shown
in Table 4.3
TABLE 4.3
EXAMPLE DETERMINATION OF PITOT TUBE CALIBRATION

Standard Pitot reading    Type S Pitot reading    Ratio √H0/√H1 = Cp
  H0      √H0               H1      √H1
  0.3     0.5477            0.415   0.642             0.853
  0.5     0.7071            0.700   0.837             0.844
  1.0     1.000             1.44    1.200             0.833
                                                   Cp = 0.843

H0 = velocity head ("H2O), standard Pitot tube
H1 = velocity head ("H2O), Type S Pitot tube
Cp = Pitot tube coefficient
0 Manometers - Inclined and U-tube manometers give
direct readings and do not require calibration.
The manometer must be clean, air-tight, and filled
with the liquid specified on the scale. Where
transducers or gauges are used to measure pressure,
they should be connected in parallel with a manometer
and adjusted to read the same value.
0 Thermometers - Bi-metallic dial-type thermometers
are commonly used. These should be checked against
a mercury-in-glass thermometer and adjusted to read
correctly. All thermometers used in a sampling
program should be checked prior to use and should be
adjusted to the following limits:
    up to 150°F:    ±2°
    150° to 500°F:  ±5°
    above 500°F:    ±10°
0 Dry Gas Meters - A dry gas meter is calibrated by
connecting it in series with a bell-prover, or wet-
test meter. Meters should be calibrated before
every test series and should be adjusted to read
within 1% of the true value.
0 Orifices, Rotameters, etc. - These devices are cali-
brated by connecting them in series with a more
accurate volume meter, such as a wet-test meter.
Calibration should be performed at least every 10
months, depending on frequency of usage. Calibration
curves with deviations of no more than 1% should be
established for each device.
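Returning to the Pitot tube comparison of Table 4.3, a short
sketch (not part of the original method description) of the
coefficient calculation:

    from math import sqrt

    # Paired velocity-head readings ("H2O) from Table 4.3:
    standard_h = [0.3, 0.5, 1.0]       # standard Pitot tube, H0
    type_s_h   = [0.415, 0.700, 1.44]  # Type S Pitot tube, H1

    # Cp = sqrt(H0)/sqrt(H1) at each point; the average is the coefficient.
    ratios = [sqrt(h0) / sqrt(h1) for h0, h1 in zip(standard_h, type_s_h)]
    cp = sum(ratios) / len(ratios)
    print([round(r, 3) for r in ratios], round(cp, 3))
    # roughly matches the tabulated ratios and Cp = 0.843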
All sampling equipment must be cleaned carefully to prevent
sample contamination. Dichromate cleaning solutions are
recommended for cleaning glassware prior to beginning a test
series. Complete rinsing with tap and then distilled water
is suggested. For metal analyses, certain types of glass-
ware should not be used and cleaning with nitric acid is
recommended. The method description should include guide-
lines for choice of glassware.
4.2.4 Conducting the Emission Test
Before starting an emission test, the sampling crew
should follow a series of preparatory steps designed to re-
duce or eliminate interferences from various sources. Check-
lists enable the crew to follow the standardized procedures
consistently. The operators should maintain complete notes
in the field. Checklists can include such data as the
following:
0 Preparation of water or reagent blanks.
0 Cleaning of impingers, probes, collection vessels,
etc.
° Confirmation of reagent grades specified in the
method.
0 Preweighing of filters.
0 Checkout of heating or cooling systems for probes.
0 Leak check of sample train.
0 Identification or visual check of Pitot tubes,
meters, thermometers.
Typical data sheets used in the field are shown in
Figures 4.1 and 4.2.
After the preparatory measures, the principal quality
control effort is directed toward preventing measurement
errors. Stack sampling involves a number of physical para-
meters, and the errors of measurement associated with each
parameter combine to produce an error in the calculated
emission rate.
Measurement errors are of two types: bias and random.
In bias errors, which usually result from poor technique or
faulty equipment, the measured value tends to differ from
the true value in one direction. Errors of this type can
be minimized by proper calibration and adequate training.
Random errors result from a variety of sources that cause
the measured value to deviate in either direction from the
Figure 4.1 Field data sheet. The gas sampling field data
sheet provides entries for material sampled for, date, plant
location, barometric pressure ("Hg), ambient and stack
temperatures (°F), run number, power stat setting, filter
used (yes/no), and operator, with columns for time, meter
reading (ft³), flow rate (CFM), and meter temperature (°F),
and spaces for the contents of impingers No. 1, 2, and 3.
Figure 4.2 Gas sampling data sheet for evacuated-flask
sampling, with entries for test number, date, location, type
of operation, sampling flask number and volume, type and
volume of reagent, initial and final flask vacuum (leg 1,
leg 2, and total, "Hg), initial and final flask temperatures,
barometric pressure, clock time, and the sample volume
calculations.
true value. They are caused by inability of the operator
to read scales precisely, by the quality and sensitivity
of the measurement device, and by uncontrolled environmental
variables .
Application of standard statistical techniques has shown
that the maximum error associated with determining an emission
rate (product of concentration and total stack gas flow) is
about 15% (references 12, 13). Under normal sampling conditions, with no
bias in the readings due to faulty equipment or operator
technique, an error of less than 10.4% can be expected 99.6%
of the time. The most significant error associated with any
one measurement involves reading the inclined draft gauge
used to measure stack gas flow velocity head.
Another significant error can occur in measuring
particulate concentrations. Particles are segregated at
the sampling nozzle when the velocity of approach to the
nozzle does not equal the velocity of the stack gas at the
sampling point. This error varies with the size of the
particles, as shown in Figure 4.3.
4.3 Analyzing the Total Measurement System
Although, as mentioned earlier, many statistical tech-
niques of quality control cannot be applied to emissions
testing, a basic and detailed analysis of the total sampling
operation can aid the technical services staff in providing
high-quality samples that yield reliable data.
Figure 4.3 Expected Errors Incurred by Non-Isokinetic Sampling
(These data should not be used to correct concentrations
obtained under non-isokinetic conditions since a wide variety
of particle sizes is usually present.) The figure plots the
expected error against the ratio of nozzle velocity to actual
stack velocity in the duct, with separate curves for 80-100
micron and 5-25 micron particles.
The emissions testing duties of a technical services
group can involve a variety of sampling procedures related
to the variety of sources that are monitored. Selecting the
most appropriate sampling method is the first step in quality
control. Among the factors to be considered are the major
variables of the process to be sampled, location of sampling
points, and size of the sample. Experience and judgement
will influence method selection; review of method descrip-
tions and consultation with others are often helpful, as
are relevant literature sources such as those given in
references 11 through 15.
Working within the framework of the method selected
for a specific emissions test, one can proceed with analysis
of the total measurement system. Prepare a schematic
diagram of the sampling train; list the critical components
of the system and note how their operation affects results.
Set forth, as briefly and simply as possible, the theoretical
principles on which the system is based.
These items should be prepared on paper for use by the
technical services staff. The schematic diagram expresses
relationships in visual form, showing how one element of a
sampling/analysis system affects another. It displays
opportunities for error, and therefore opportunities for
control.
Systems analysis for quality control of emission test-
ing should incorporate as many as possible of the elements
described in Chapter 3 concerning atmospheric sampling and
Chapter 5 concerning laboratory operations. Measures to
assure quality of reagents, for example, are appropriate
here. Compilation and analysis of data histories (such as
calibration records) can lead to improvements in data
quality. Documentation, by way of records, log books,
summaries, and reports, can provide the information base
required for quality control. Although performed in the
field, often under conditions less formal than laboratory
operations, competent source sampling requires the dis-
ciplined application of technical skills. In terms of
quality control, 'stack sampling' represents a challenge
to the technical services group: to develop attitudes,
techniques, and detailed procedures that lead to consistent
production of high-quality emissions data.
5.0 QUALITY CONTROL IN THE ANALYTICAL LABORATORY
5.1 Introduction
Laboratory quality control programs should include
systematic procedures for performing analyses and for check-
ing the level of performance for each method used. Statisti-
cal procedures are usually required for specifying perfor-
mance standards, for recognizing analytical results that do
not meet the standards, and for interpreting historical data.
Standard operating procedures provide a base for achieving
and maintaining a consistent level of analytical performance
and for tracing errors when results do not meet the expected
level. Standard operating procedures should be developed
for each of the four major sources of analytical variation:
0 Support services
0 Reagents and materials
0 Instrumentation
0 Analytical technique
These may be considered as the chief physical parameters of
laboratory operations that are subject to quality control.
Each is treated in detail in later sections of this chapter.
In the discussion that follows it is assumed that
methods have been standardized; analytical methods approved
by EPA should be used whenever possible.
Techniques for controlling the major sources of
analytical variability are presented in this chapter, along
with the criteria for choosing the techniques best suited
to a specific laboratory situation. Certain general
criteria are applicable in nearly all laboratory appli-
cations; e.g., cost, requirements of the analytical methods,
experience of other laboratories, and effects of inter-
ferences as determined through sensitivity testing.
5.2 Controlling Physical Parameters
5.2.1 Laboratory Support Services
Laboratory support services include laboratory gases,
water, and electricity. The parameters that affect the
quality of laboratory support services are given in
Table 5.1 with suggested control techniques.
Decisions concerning the frequency with which generat-
ing and storage equipment should be maintained and reagents
checked need to be made. Manufacturers' recommendations
provide a reasonable starting point, but schedules can be
adjusted as experience dictates. Initially, checks should
be made more frequently than is recommended to provide
data for decisions on scheduling. Maintenance contracts
with the manufacturer provide a convenient, and often
economically justified, method of meeting maintenance require-
ments. When a laboratory reagent such as water or air
has been purchased commercially, it should be subject to
procedures, like conductivity tests for water, that will
verify manufacturers' quality statements and verify that the
reagent is not changing over a period of time in the labora-
tory. The results of all verification procedures should be
TABLE 5.1 TECHNIQUES FOR QUALITY CONTROL
OF LABORATORY SUPPORT SERVICES

Laboratory Gases
  Parameters affecting quality: purity specifications (vary
  among manufacturers); variation between lots; atmospheric
  interferences.
  Control techniques: develop purchasing guides; overlap use
  of old and new cylinders; adopt filtering and drying
  procedures.

Reagent Water
  Parameters affecting quality: commercial source variation;
  purity requirements; atmospheric interferences; generation
  and storage equipment.
  Control techniques: develop purchasing guides (batch test
  for conductivity); redistillation, heating, deionization
  with ion exchange columns; filtration of exchange air;
  maintenance schedules from manufacturer recommendations.

Electrical Service
  Parameters affecting quality: voltage fluctuations; battery
  power.
  Control techniques: constant voltage transformers; separate
  lines; motor generator sets.

Ambient Conditions
  Parameters affecting quality: temperature; humidity.
  Control techniques: heating and air conditioning systems;
  humidity controls.
recorded on standard format check sheets, which could in-
clude the following data:
0 Date of receipt of container
0 Container identification
0 Manufacturer identification
0 Lot number, if available
0 Date of verification test
0 Name and purpose of test
0 Results of test
0 Signature of the analyst
Procedural guidelines for purchasing laboratory gases
and water can be developed by compiling definitions for
purity specifications used by different manufacturers.
Purchasing personnel should refer to such information to
avoid confusion when they are attempting to maintain a con-
stant quality of reagent, but must purchase from different
manufacturers.
Procedures for testing the quality of support media
should be developed with the aid of professional publications
and manufacturers' literature. References 17 through 22
will be useful in designing procedures for testing quality
of laboratory water and gases.
Considering the final application of the data generated
will help in the decision of which control techniques to
apply. The cost and effort required for some control
techniques may not be justified in terms of the purpose of
the analyses. If the data are used for internal preliminary
survey work, for example, to determine the relative
efficiency of a combustion process, control measures may
be considerably less important than they would be if the data
were used for setting emissions standards.
5.2.2 Chemicals and Reagents
A laboratory quality control program should include
standard procedures for choosing chemicals, preparing
standard solutions, storing and handling chemicals and
reagents, and choosing and handling standard reference
materials. Some of the parameters affecting each pro-
cedure are listed in Table 5.2 with some appropriate control
techniques.
The control techniques for choosing chemicals include
the development of purchasing guidelines. These guidelines
should clarify the difference in grade designations used
by various manufacturers. The American Chemical Society
classifications can serve as the reference for most
definitions.
Standard solutions may require occasional restandardi-
zation. If the analytical method does not indicate the re-
quired frequency of restandardization, the frequency can be
established by considering any or all of the following
criteria:
0 Normality of the solution
0 Frequency of exposure to the atmosphere
0 Availability of inert storage containers
0 Availability of storage that prevents temperature
and light effects
0 Cost of restandardization as opposed to making a
new solution - depends on frequency of use
TABLE 5.2
GUIDELINES FOR QUALITY CONTROL OF CHEMICALS AND REAGENTS

Choice of Chemicals
  Control parameters: manufacturer designations; method purity
  specifications.
  Control techniques: develop purchasing guides; use American
  Chemical Society designations as a base; develop purification
  or treatment procedures specified by method.

Preparation of Standard Solutions
  Control parameters: calibrated glassware; standard reference
  materials (SRM); stability.
  Control techniques: purchasing guidelines; schedules for
  restandardization of solutions.

Storage and Handling
  Control parameters: container composition; filtering or
  pretreatment; environmental sensitivity.
  Control techniques: design a labeling system; purchase single
  lot numbers; rotate stock; control temperature, light, and
  atmospheric exposure.

Standard Reference Materials
  Control parameters: availability; stability.
  Control techniques: store in temperature-controlled atmosphere;
  desiccate when necessary; replace if instability is suspected;
  weigh to determine loss or degradation.
Storage and restandardization requirements for several
standard solutions are shown in Table 5.3.
TABLE 5.3
RESTANDARDIZATION REQUIREMENTS

Solution                        Storage           Frequency of
                                Requirements      Restandardization
0.02-1.0N Sodium Hydroxide      Polyolefin        Monthly
0.02-1.0N Hydrochloric Acid     Glass             Monthly
0.02-1.0N Sulfuric Acid         Glass             Monthly
0.1N Iodine                     Amber glass,      Open bottles - weekly;
                                refrigerate       sealed bottles - monthly
0.1N Sodium Thiosulfate         Glass             Weekly
0.1N Ammonium Thiocyanate       Glass             Monthly
0.1N Potassium Dichromate       Glass             Monthly
0.1N Silver Nitrate             Amber glass       Monthly
0.1N Potassium Permanganate     Amber glass       Weekly
Storage and handling procedures for chemicals and re-
agents should include the use of a labeling system. Labels
on chemical bottles should include the following:
0 Chemical Name
0 Formula
0 Manufacturer
0 Lot Number
0 Date Received
° Expiration Date
Labels on standard solution bottles should include the
following:
0 Chemicals used
0 Manufacturers
0 Lot numbers
0 Date of Preparation
0 Date .of next standardization
0 Standardization data
0 Analyst identification
0 Conditions of analysis (temperature, pressure,
humidity)
Standard reference materials are available from the
National Bureau of Standards and from commercial manufacturers.
They are used for standardizing solutions, calibrating equip-
ment, and monitoring accuracy and precision of analytical
technique. NBS classifies chemical standards as (1) primary
standards, (2) working standards, and (3) secondary standards,
having the following definitions (reference 22):
Primary Standard - A primary standard is a commercially
available substance of purity 100 ± 0.02% accuracy
Working Standard - A working standard is a commercially
available substance of purity 100 ± 0.05% accuracy
Secondary Standard - A secondary standard is a substance
of lower purity that can be standardized against a
primary grade standard
The availability of primary standards may be limited.
Since analytical results rely on the accuracy of solutions
made with standard reference materials, it is important that
steps be taken in the laboratory to eliminate errors due to
mishandling of standard reference materials. This is best
accomplished by providing the analysts with written pro-
cedures for handling of standard reference materials. The
vendors usually recommend storage and handling procedures
(see Table 5.2) .
5.2.3 Instruments
Standard procedures for operation, calibration, and
maintenance of analytical instruments are important for in-
terpreting results as well as for preventing errors. Table
5.4 shows the important control parameters for analytical in-
struments and the control techniques that apply to those
parameters.
Function checks can be used to indicate whether a sub-
system within an instrument is functioning properly within
predefined limits. The limits are usually defined by the
instrument manufacturer, and the user should determine
whether the limits are acceptable for his requirements. The
frequency with which function checks are performed depends
on how the instrument is used. If the instrument is used
for short periods each week, function checks may be per-
formed with each use or even monthly. If the instrument is
operated daily, function checks may be made daily or before
each run.
The following example illustrates a function check of
linearity of a spectrophotometer.
0 Prepare a standard data sheet to record relative
concentration of standards, readings at various
wavelengths, date of test, analyst identification,
remarks.
0 Prepare four standard solutions that absorb strongly
at selected wavelengths in the range of the analytical
methods. Commercially prepared solutions were used
for this example.
TABLE 5.4
TECHNIQUES FOR QUALITY CONTROL OF ANALYTICAL INSTRUMENTS

Control Parameter - Control Technique

Instrument operating range - Coordinate instrument selection
  with method requirements.
Interferences - Sample conditioning (drying, separating,
  mixing, etc.); use of blanks; use of spiked samples.
Environmental conditions - Monitor and control temperature,
  humidity, pressure, and any atmospheric parameter that can
  affect system response; consult manufacturer instructions
  and method descriptions.
Associated equipment (cuvettes, volumetric ware, dilutors,
  etc.) - Proper handling procedures; standard procedures for
  cleaning; standardization or calibration.
Normal system drift - Zero adjust; apply function tests.
System component functions - Plot response to changing
  concentrations; perform maintenance when indicated.
Response readout - Use calibration curve, adjust using blanks
  and zero-span controls.
0 For each relative concentration, record the absor-
bance at the specified wavelengths.
0 Using Cartesian graph paper (rectangular coordinates),
plot a graph of absorbance vs. relative concentra-
tion. Prepare a separate graph for each wavelength.
The graph should be a straight line if the instrument
is functioning properly and if the absorbing solution
follows Beer's Law.
0 At each successive testing interval, plot the points
as above.
0 If the points for a wavelength appear to be on a
straight line or to be equally distributed around
it, the operator can assume that the instrument is
functioning satisfactorily.
0 If the points for a wavelength begin to fall con-
sistently above or below the baseline and the
distance between the points and the baseline increases,
the operator should expect an impending problem
that will require maintenance.
For the data obtained at 420 nm (Figure 5.1), the graph (Figure
5.2) indicates that the response is beginning to change. The
manufacturer's literature should indicate whether the amount
of drift is acceptable. If the drift is not within defined
tolerance limits, maintenance should be performed according
to manufacturer's recommendations.
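A simple sketch of such a linearity function check; the
readings here are hypothetical and would be replaced by the
instrument's own data and the manufacturer's limits:

    # Weekly linearity function check (illustrative only).
    def fit_line(xs, ys):
        """Least-squares line through the reference (baseline) readings."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        slope = sxy / sxx
        return slope, my - slope * mx

    concs = [0.00, 0.25, 0.50, 0.75, 1.00]   # relative concentrations
    week1 = [0.00, 0.19, 0.38, 0.57, 0.76]   # hypothetical baseline, 420 nm
    latest = [0.00, 0.17, 0.34, 0.52, 0.70]  # hypothetical later check

    slope, intercept = fit_line(concs, week1)
    residuals = [y - (slope * x + intercept) for x, y in zip(concs, latest)]
    print([round(r, 3) for r in residuals])  # [0.0, -0.02, -0.04, -0.05, -0.06]
    # Residuals that fall consistently on one side of zero and grow with
    # concentration suggest drift to compare against the manufacturer's limit.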
Calibration provides a technique for translating in-
strument response into meaningful concentration units. A
calibration curve is constructed by analysis of materials
containing varying known concentrations of the element or
compound of interest. Each time a curve is constructed, a
test is applied to determine whether instrument response
is within predefined limits. If the calibration curve is
acceptable, i.e., if it lies within the statistical limits,
then the analyst can apply it to translate instrument output
Figure 5.1 Spectrophotometer weekly function check for
linearity and precision: a data sheet recording absorbance
at 670, 520, and 420 mµ for relative concentrations of 1.00,
0.75, 0.50, 0.25, and 0.00, with date, analyst, and remarks.

Figure 5.2 Standardization check at 420 nm: a plot of
absorbance against relative concentration showing the drift
of successive checks relative to the manufacturer's
recommended limit.
into the appropriate concentration units. The example for
constructing an SO2 calibration curve, presented in
Appendix C, will show how to establish control limits.
The standards used to construct the calibration curve
can be primary or working standards, or spiked standards
designed to eliminate the effects of interferences en-
countered in real tests. In calibration the standard is
exposed to the same preparatory steps that are applied to
unknown samples. For example, if samples are subjected
to a separation procedure, then the standard should also
be subjected to that procedure. With this technique the
analyst can introduce the same net interferences, as closely
as possible, that will occur in normal application of the
analytical methods. The choice of standard will depend on
several criteria:
0 Method requirements
0 Expected concentration range of unknown samples
0 Known interferences determined by method
standardization and sensitivity tests
0 Availability of standard reference materials
0 Cost of standard reference materials
0 Stability of standards
Many laboratories use a non-statistical approach to
constructing calibration curves. The concentrations of the
standard that are analyzed to provide data for calibration
curves should cover the working range of the method. The
data in Table 5.5 were used for constructing the calibration
curve (Figure 5.3) for SO2 determination using the modified
pararosaniline method.
TABLE 5.5
SO2 CALIBRATION DATA

Concentration, µg/ml    Average Absorbance
      0.20                   0.095
      0.40                   0.240
      0.60                   0.315
      0.80                   0.440
      1.00                   0.565
      1.20                   0.660
      1.40                   0.780
The steps used to construct the curve are:
0 Analyze at least three concentrations of the
standard. The concentrations should cover
the working range of the method. Three
replicates should be analyzed for each con-
centration .
0 Plot the average absorbance values (y-axis)
for each set of replicates against the con-
centrations (x-axis) on rectangular coordinate
graph paper. If percent transmittance is
plotted, semi-log graph paper should be used.
0 Establish a calibration line by drawing a line
of best fit through the plotted points. This
line will not necessarily pass through the
origin of the graph. The y-intercept can vary
depending on such variable parameters as equip-
ment sensitivity, environmental influences, or
degradation of the standard.
Since the line .of best fit can shift, the analyst
should periodically analyze the standard at one of the con-
centration levels used to construct the calibration curve.
The result can be plotted on the curve to indicate the
Figure 5.3 Calibration curve for SO2 determination: average
absorbance plotted against concentration (µg/ml) with upper
and lower control lines.
degree of curve shift, if any. The frequency with which the
curve should be checked depends on the reproducibility of the
instrument, the ruggedness of the method, and changes in
environmental influences. Control lines cannot be rigorously
determined with statistical techniques for the type of
calibration curve in this example. The control lines in
Figure 5.3 merely illustrate that limits can be placed on the
calibration curve if the method description specifies the
limits. Each time the standard is analyzed, the result should
lie within the established control lines. If it lies outside
the control lines, a change in one or several parameters in the
analytical system has occurred, and either corrective action
or a new calibration curve is necessary. The technique
just described is useful when the analytical method is to be
used infrequently. However, when a method is being used
routinely, the preferred technique for constructing the
curve and its limits is regression analysis by the method
of least squares.
To obtain maximum precision in determining the line of
best fit, the EPA suggests in the Federal Register using
regression analysis by the method of least squares. If
this technique is used, statistical control limits for the
calibration curve can be established that relate to different
confidence intervals. We suggest that if an analytical
method that requires a calibration curve is used continuously
in the laboratory, regression analysis by the method of
least squares is a better technique for constructing the
calibration curves than is the previously explained technique.
An example of how this statistical technique is applied
appears in Appendix C.
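As a minimal illustration of the least-squares approach (the
confidence-limit calculations themselves are left to Appendix C
of the report), the Table 5.5 data give:

    # Least-squares fit of the Table 5.5 SO2 calibration data.
    conc = [0.20, 0.40, 0.60, 0.80, 1.00, 1.20, 1.40]   # ug/ml
    absb = [0.095, 0.240, 0.315, 0.440, 0.565, 0.660, 0.780]

    n = len(conc)
    mx = sum(conc) / n
    my = sum(absb) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, absb))

    slope = sxy / sxx
    intercept = my - slope * mx
    print(round(slope, 3), round(intercept, 3))   # about 0.562 and -0.007

    # Translate an instrument response back into concentration units:
    absorbance = 0.50
    print(round((absorbance - intercept) / slope, 2))   # about 0.90 ug/ml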
5.2.4 Analytical Technique
The quality of analytical technique is a function of the
analyst's experience and training. Basic operations, such
as dilution procedures and handling of analytical weights,
contribute significantly to indeterminate errors, i.e.,
errors that cannot be traced.
Maintaining good analytical technique in the laboratory
is primarily a management task (reference 23). The laboratory supervisor
can use several methods to encourage good technique and to
insure a continuing effort to maintain good technique.
0 Proper allocation of manpower
0 Periodic review of analyst performance
0 Implementation of maintenance schedules and
procedures for all laboratory instruments and
materials
0 Design and review of error histories
If he is applying one or all of these methods to maintain
good technique among the analysts, the supervisor can use a
form of statistical analysis to evaluate and compare the
performances. A control chart can be constructed for each
analyst to show the variability of results he obtains with
any given analytical method. Data used for such charts are
from analysis of replicate samples or of known standards
over a specified period of time. The analyst's performance
can be compared with his performance over another time period
to show trends in his proficiency, or it can be compared
with the performances of other analysts using the same
analytical methods.
The major problems with designing a program to monitor
the analyst's performance are concerned with design of the
sampling system. The problems are:
0 What kinds of samples to use.
0 How to prepare and introduce samples into the
run without the analyst's knowledge.
0 How often to check the analyst's proficiency.
The problems and their suggested solutions or criteria
for decision are given in Table 5.6.
TABLE 5.6
PROBLEMS IN ASSESSING ANALYST PERFORMANCE

Problem - Solutions and Decision Criteria

Kinds of samples - Replicate samples of unknowns or reference
  standards; consider cost of samples.
Introducing the sample - Samples must be exposed by the analyst
  to the same preparatory steps as are normal unknown samples;
  samples should have the same labels and appearance as
  unknowns; because checking periods should not be obvious,
  supervisor and analyst should overlap the process of logging
  in samples; the supervisor can place knowns or replicates
  into the system occasionally; save an aliquot from one day
  for analysis by another analyst (this technique can be used
  to detect bias).
Frequency of checking performance - Consider degree of
  automation; consider total method precision; consider the
  analyst's training and attitude.
In this discussion of quality control in an analytical
laboratory we have considered the four major sources of
variation or error; that is
the laboratory facilities
the reagents and materials
the instruments, and
the analysts.
We have described some measures for quality control in each
category. Now we turn to the second phase of quality con-
trol that is required for effective laboratory operations -
the application of statistical techniques and other review
and control practices that together constitute in-depth
analysis of the total measurement system.
5.3 Statistical Methods
Statistical techniques provide a means of defining
acceptable levels of analytical performance and determining
whether those levels are being achieved and maintained. The
major steps in developing a statistical evaluation system
are:
0 Defining the performance levels.
0 Choosing the statistical techniques.
0 Constructing control charts.
5.3.1 Defining Performance Levels
Before a system for evaluating analytical performance
can be initiated, acceptable performance levels must be
defined. Laboratories normally work in the 99% confidence
interval (see Section 2.2.2 discussion of adjustment factors
and confidence intervals). Statistical formulae can be
applied to determine confidence limits. No formula can be
applied to determine confidence level, but several practical
criteria can affect the choice:
0 Method specifications
. ° EPA recommendations
0 Ultimate use of the data
0 Method standardization precision and accuracy data
0 Intralaboratory and interlaboratory test results
5.3.2 Choosing Statistical Techniques
Statistical techniques are applied to determine whether
the errors associated with analytical data are within
operational limits designated for the method. Precision con-
trol charts are prepared from results of replicate analyses
and are used to monitor the degree of variability among
laboratory results.
Precision control charts indicate the level of precision
in two ways: (1) in the unit of measurement of the variable;
(2) in percent. Precision in units is calculated in terms
of range (R-Chart) or standard deviation (S-Chart). Pre-
cision as percent is calculated in terms of the coefficient
of variation (CV-Chart), also referred to as relative
standard deviation.
    R = Max - Min                        R  = Range
    S = √[ Σ(x - x̄)² / (N - 1) ]         S  = Standard Deviation
    CV = (S/x̄)(100)                      CV = Coefficient of Variation
                                         x  = Individual Value
                                         x̄  = Mean Value
                                         N  = Number of Replicates
The use of R-Charts and S-Charts is based on the
assumption of homogeneity of variance (i.e., the variation
between replicate analyses on a single sample is constant
over the range of concentration being measured). The
CV-Chart is used when precision is dependent on the con- •
centration being measured (i.e. the standard deviation
changes, usually increasing, with concentration). The co-
efficient of variation, expressed as a percent, is indepen-
dent of concentration from the standpoint that the percent
is constant over the concentration range.
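A small helper (ours, with hypothetical replicate values)
showing the R, S, and CV computations defined above for one
set of replicates:

    from math import sqrt

    def precision_stats(values):
        """Range, standard deviation, and coefficient of variation
        (percent) for one set of replicate analyses, per the
        definitions above."""
        n = len(values)
        mean = sum(values) / n
        r = max(values) - min(values)
        s = sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))
        cv = 100.0 * s / mean
        return r, s, cv

    # Hypothetical replicate results for a single sample:
    print(precision_stats([0.52, 0.50, 0.55, 0.49]))
    # R = 0.06, S about 0.026, CV about 5.1%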
Before an analytical method is adopted for routine use
in the laboratory, replicate analyses should be run on known
standards. Prepare the standards to represent both the low
and the high concentrations expected and at least one, but
preferably two, intermediate concentrations. As a good
"rule of thumb", between 5 and 10 replicate analyses should
be run at each known concentration.
Compute the mean (x̄) and standard deviation (s) for
each concentration: x̄1, s1; x̄2, s2; x̄3, s3; x̄4, s4. Plot
the mean and standard deviation on a scatter diagram
(Figure 5.4). Generally the scatter diagram will follow
the pattern of one of the two cases shown.
Case A: The standard deviation is independent
of the mean. Use either an R-Chart or
S-Chart.
Figure 5.4 Scatter diagrams for determining control charts
(standard deviation plotted against concentration mean for
Case A and Case B).
Case B: The standard deviation is dependent on changes
in concentration. Use the CV-Chart.
Choice of an R-Chart or an S-Chart depends on the
number of replicate analyses to be run routinely to monitor
precision. If the number of replicates (n) is small
(n < 12), the R-Chart is the most efficient. When n is
large (n > 12) the S-Chart provides a more efficient control
of precision. Since precision is usually determined on the
basis of a small number of replicates, S-Charts for
precision are not discussed in this manual. Table 5.7
presents data used to plot Case A and Case B in Figure 5.4.
5.3.3 Constructing Range Control Charts
The procedure for constructing a control chart for
range follows (see Table 5.8):
0 List the absolute value of the range (R) for each
set of replicates.
0 Compute the average range (R̄) for all sets of
replicates.
0 Compute the upper control limit by UCL = D4R̄. D4
is from Table BII, Appendix B.
0 Compute the lower control limit by LCL = D3R̄. D3
is from Table BII, Appendix B.
0 Plot the line for R̄ on the control chart.
0 Plot the values for ranges of each set of replicates
(Figure 5.5).
The control limits computed for this control chart are
for the 99% confidence interval (see Section 2.2.2). There-
fore 99% of the calculated range values should be between
these control limits.
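A sketch (not part of the original text) of these steps, using
the duplicate data of Table 5.8 below:

    # Duplicate results (x1, x2) from Table 5.8; D3 and D4 are the
    # tabulated factors for subgroups of n = 2.
    pairs = [(21, 29), (39, 47), (14, 18), (8, 10), (59, 71),
             (88, 96), (7, 9), (88, 98), (38, 46), (22, 28)]
    D3, D4 = 0.0, 3.267

    ranges = [abs(x1 - x2) for x1, x2 in pairs]
    r_bar = sum(ranges) / len(ranges)      # 68/10 = 6.8
    ucl = D4 * r_bar                       # about 22.22
    lcl = D3 * r_bar                       # 0
    print(ranges, r_bar, round(ucl, 2), lcl)

    # Any subgroup range above the UCL indicates precision outside the
    # 99% control limits described above.
    print([r for r in ranges if r > ucl])  # [] -> all ranges in control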
TABLE 5.7
DATA USED TO CONSTRUCT SCATTER DIAGRAMS

CASE A
    x      x-x̄    (x-x̄)²
   2.3      0       0
   2.2     -.1     .01
   2.3      0       0
   2.4      .1     .01
   2.1     -.2     .04
   x̄ = 2.3         S = .12
   3.0     -.1     .01
   3.1      0       0
   3.2      .1     .01
   2.9     -.2     .04
   3.2      .1     .01
   x̄ = 3.1         S = .13
   5.1      0       0
   5.0     -.1     .01
   5.2      .1     .01
   5.1      0       0
   5.0     -.1     .01
   x̄ = 5.1         S = .08
   9.5     -.1     .01
   9.6      0       0
   9.4     -.2     .04
   9.7      .1     .01
   9.6      0       0
   x̄ = 9.6         S = .12

CASE B
    x      x-x̄    (x-x̄)²
   2.3      0       0
   2.2     -.1     .01
   2.3      0       0
   2.4      .1     .01
   2.1     -.2     .04
   x̄ = 2.3         S = .12
   3.0     -.1     .01
   3.1      0       0
   3.2      .1     .01
   2.9     -.2     .04
   3.2      .1     .01
   x̄ = 3.1         S = .13
   5.1     -.1     .01
   5.2      0       0
   5.4      .2     .04
   4.9     -.3     .09
   5.3      .1     .01
   x̄ = 5.2         S = .19
   9.5      .2     .04
   9.1     -.2     .04
   9.6      .3     .09
   9.0     -.3     .09
   9.2     -.1     .01
   x̄ = 9.3         S = .26
TABLE 5.8 COMPUTATION OF CONTROL
LIMITS FOR RANGE CONTROL CHARTS

Sample (n)    x1    x2     R
    1         21    29     8
    2         39    47     8
    3         14    18     4
    4          8    10     2
    5         59    71    12
    6         88    96     8
    7          7     9     2
    8         88    98    10
    9         38    46     8
   10         22    28     6
                  R Total = 68

    R̄ = ΣR/n = 68/10 = 6.8
    UCL = D4R̄ = 3.267 x 6.8 = 22.22
    LCL = D3R̄ = 0 x 6.8 = 0
    (D4 and D3 are for n = 2)

Figure 5.5 Control chart for range (average range R̄ = 6.8,
UCL = 22.22; subgroup ranges plotted in order of replicates)
Data plotted on precision control charts should be
approximately evenly distributed around the mean value line.
The Theory of Runs dictates that, for the 99% confidence
level, if eight successive values appear on one side of the
mean value line, then the process is judged to have bias.
If this occurs, stop the process and follow a standard
routine for error tracing to find the source of error that
is causing the bias. Note the problem and the error source
on the control chart for easy future reference. A system
for tracing errors is described later in this chapter.
If any of the results plotted on the control chart fall
outside of the control limits, the process should be judged
out of control and the analysis stopped. Whenever possible,
reanalyze all affected samples when a bias or an out-of-
control situation is observed. The feasibility of identi-
fying and reanalyzing affected samples depends on the
frequency with which replicate samples are analyzed. For
example, if few samples are analyzed by a method each day,
and only one set of replicates is analyzed each day, bias
would not be confirmed on the control chart until at least
the end of eight consecutive days. It would be impractical,
and in many cases impossible, to save an aliquot of each
sample for at least eight days for the possibility of reruns.
Keep in mind that biased results are still acceptable if the
control limits are not exceeded. The most important step is
to find the source of error and correct it before it causes
the analytical process to go out of control.
Inspection of the control chart will help to determine
which samples may have been affected if bias or loss of
control occur. In a bias situation, all samples associated
with the eight sets of replicates that produced the plotted
control data can be considered to have been affected. This
situation is illustrated by Case A in Figure 5.6. When loss
of control is indicated, all samples analyzed between the
last set of replicates that showed control and the set of
replicates that indicated loss of control can be considered
to have been affected (Case B, Figure 5.6). An exception to
Case B would be the appearance of an upward or downward trend
of the control data prior to loss of control (Case C,
Figure 5.6). This could be a situation in which the
analytical process was subject to bias but went out of
control before enough replicates were analyzed to display
the bias. In this case the control data points are traced
back to the last point appearing on the side of the mean
value line opposite the points that indicate suspected bias.
All samples beyond this point could have been affected.
5.3.4 Constructing Coefficient of Variation Charts
The CV-Chart is prepared from measurements obtained
from replicate analyses of routine samples. Typically the
number of replicates is two. The duplicate analyses are
5-28
-------
Figure 5.6 Interpretation of Control Charts
(Three control charts are shown, each with the results plotted in order against a mean value line, UCL, and LCL. Case A - bias, as indicated by eight consecutive points on one side of the mean value line; all associated samples have possible bias. Case B - out of control at the last point. Case C - out of control and possible bias; all samples past the indicated point may be affected.)
-------
run on 5-10 percent of the incoming samples, selected by a
random process. The duplicates are sent to the analyst in
such a manner that the analyses are run "blind", i.e. the
analyst cannot determine which samples are being used to
monitor precision.
It is difficult to rigorously determine the number of
duplicate analyses needed to establish the necessary control
limits, but a minimum of 15-20 is preferred. Similarly, as
the duplicate analyses yield more data, control limits
should be revised periodically, perhaps monthly or quarterly.
Cumulative data may indicate that control limits should be
revised annually.
The procedure for constructing the CV-Chart for the
data in Table 5.9 is described below. The term "sample"
TABLE 5.9 COMPUTATION OF CONTROL
LIMITS FOR CV-CHART

Sample      x1      x2      R      x̄      CV
   1        23      29      6      26     16.3
   2        39      47      8      43     13.2
   3        14      18      4      16     17.7
   4         8      10      2       9     15.7
   5        59      71     12      65     13.0
   6        78      96     18      87     14.6
   7         7       9      2       8     17.7
   8        80      98     18      89     14.3
   9        38      46      8      42     13.5
  10        22      28      6      25     17.0
  11        12      16      4      14     20.2
  12        29      35      6      32     13.3
  13        48      60     12      54     15.7
  14        75      91     16      83     13.6
  15        48      58     10      53     13.3
  16        80     100     20      90     15.7

                                Σ CV = 244.8
5-30
-------
refers to a set of duplicates. The range is used here for
convenience, since it is an efficient estimate of standard
deviation for n = 2.
0 Compute the range R for each sample.
0 Compute the arithmetic mean x̄ for each sample.
0 Compute the coefficient of variation for each sample, where:

      CV = (R/√n) / x̄  x 100,    n = sample size = 2

0 Compute the average coefficient of variation.

      CV̄ = (Σ CVi)/16 = 244.8/16 = 15.3

0 Compute the upper control limit.

      UCL = B4 CV̄
      where B4 = 1.552 (Table BIII, Appendix B) for n = 16
      UCL = 1.552 (15.3) = 23.75

0 Compute the lower control limit.

      LCL = B3 CV̄
      where B3 = 0.448 for n = 16
      LCL = 0.448 (15.3) = 6.85

0 Prepare the CV-Chart (Figure 5.7) showing CV̄,
UCL, and LCL.
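A minimal sketch of the same computation, using the duplicate results of Table 5.9, follows; variable names are illustrative only, and small differences from the rounded table values are due to carrying full precision.

# Illustrative sketch: coefficient-of-variation control chart limits (Table 5.9).
import math

x1 = [23, 39, 14, 8, 59, 78, 7, 80, 38, 22, 12, 29, 48, 75, 48, 80]
x2 = [29, 47, 18, 10, 71, 96, 9, 98, 46, 28, 16, 35, 60, 91, 58, 100]

cvs = []
for a, b in zip(x1, x2):
    r = abs(a - b)                      # range of the duplicate pair
    mean = (a + b) / 2                  # arithmetic mean of the pair
    s = r / math.sqrt(2)                # standard deviation estimate for n = 2
    cvs.append(100 * s / mean)          # coefficient of variation, percent

cv_bar = sum(cvs) / len(cvs)            # about 15.3
B3, B4 = 0.448, 1.552                   # factors from Table BIII for n = 16
print(f"CV-bar = {cv_bar:.1f}, UCL = {B4 * cv_bar:.2f}, LCL = {B3 * cv_bar:.2f}")
# The text reports UCL = 23.75 and LCL = 6.85 after rounding CV-bar to 15.3.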
The CV-Chart is now ready for use to monitor precision
of routine sample analysis. For each sample compute CV and
plot new values sequentially on the control chart.
5.3.5 Determining Accuracy
Accuracy determinations involve the comparison of
results from analysis of unknown samples with results from
5-31
-------
Figure 5.7 Coefficient of Variation Chart
(CV values plotted in order of results; CV̄ = 15.30, UCL = 23.75, LCL = 6.85.)
analysis of standards of known concentration. Two
techniques for accuracy determination, classified by type
of reference material, are shown in Table 5.10 with the
criteria that affect the choice of each.
One problem with using a primary or a working standard
for accuracy determination is that these standards do not
approximate the quality of actual samples. Field samples
are exposed to a variety of sources of interference during
their collection and transport to the laboratory. After
arrival at the laboratory, the samples are subjected to more
potential sources of interference during sample preparation.
The effects of those interferences are significant for many
5-32
-------
TABLE 5.10 TECHNIQUES FOR DETERMINING ACCURACY

Analytical Technique         Criteria for Application

Pure standards               Often not available.
                             Advantage - composition is well established.
                             Disadvantage - does not duplicate the sample.
                             Interferences normally found in samples are not
                             present, and effect on method cannot be measured.

Spiked samples               Can be used when stable standards cannot be obtained.
(% recovery technique)       Can be used when accuracy determinations are very
                             infrequent.
                             Applicable for trace analysis.
types of analyses, such as atomic absorption analysis for
trace metals. Consequently, when accuracy is being deter-
mined it is often desirable to compensate for the net
effects of normal interferences. The standard additions
technique does this by providing a correction factor that
can be applied to observed values to calculate true values
for the analytical results.
The method of standard additions (Reference 24) includes the
following steps:
0 Analyze an aliquot of the unknown sample.
0 Add to another aliquot of sample a portion of a standard
  of the species of interest such that the total concentration
  of the resulting solution will be within the optimum
  detection range of the method. Analyze the resultant mixture.
0 Calculate the recovery of the added substance with the
  following formula:
5-33
-------
      % Recovery = [C(s + a) - Cs] / Ca  x 100

where: C(s + a) = Observed concentration value of sample plus standard
       Cs = Observed concentration value of sample
       Ca = Actual value of standard

The percent recovery represents a correction factor for
interferences, and the true value of a sample concentration
is calculated by dividing the observed value by the percent
recovery expressed as a fraction. Another way of expressing
the correction factor is by use of the formula:

      CF = R / A

      R = Quantity Recovered
      A = Quantity Added
      CF = Correction Factor

Then  TV = OV / CF

      TV = True Value
      OV = Observed Value
      CF = Correction Factor
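By way of illustration only, a minimal Python sketch of the recovery and correction computation follows. The numeric values in the example call are hypothetical, and the function names are chosen here for clarity rather than taken from any standard library.

# Illustrative sketch of the standard additions (percent recovery) correction.
def percent_recovery(c_sample_plus_std, c_sample, c_std_added):
    """Percent recovery of an added standard (all values in concentration units)."""
    return (c_sample_plus_std - c_sample) / c_std_added * 100.0

def true_value(observed, recovery_percent):
    """Correct an observed value for interference: TV = OV / CF, CF = recovery fraction."""
    correction_factor = recovery_percent / 100.0
    return observed / correction_factor

# Hypothetical numbers for illustration only.
rec = percent_recovery(c_sample_plus_std=14.2, c_sample=6.1, c_std_added=9.0)  # 90%
print(f"Recovery = {rec:.1f}%, corrected value = {true_value(6.1, rec):.2f}")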
5.3.6 Control Charts for Accuracy
Control charts on which to plot the results from
accuracy determinations can be constructed in much the
same manner as precision control charts. Standard deviation
is convenient for measuring the variability among the
accuracy determination results. Two situations are dis-
cussed in this section: (1) a primary or working standard
is analyzed to determine accuracy; (2) the method of
standard additions is used to determine accuracy.
5-34
-------
Figure 5.8 illustrates the first situation, in which
percent nitrogen has been determined by titration. Table
5.11 presents the data used. The control chart in Figure
5.8 was constructed by the following steps:
0 Analyze 15-20 aliquots of a primary standard over a
  period of time sufficient to insure that normal lab-
  oratory operating conditions are represented. Com-
  pute the mean percent nitrogen value for all analyses.
  This value will represent the nominal value about
  which analytical values from later tests will be
  plotted. Do not be confused by the expression of
  nitrogen concentration as a percent; this is not a
  percent recovery technique.
0 Calculate the standard deviation by:

      S = √[ Σ (x - x̄)² / (n - 1) ]

0 Calculate a lower and upper control limit by:

      UCL = x̄ + D4 S   and   LCL = x̄ - D4 S

  Use D4 = 1.652 for the 99% confidence level, when n = 15.
0 Construct the chart.
0 The formula for computing standard deviation in this
  example is interchangeable with the formula in the
  next example.
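A brief sketch of this computation, using the percent nitrogen data of Table 5.11, is given below; the small differences from the table values come from the rounding of the mean and deviations in the table.

# Illustrative sketch: accuracy control chart limits from repeated analysis of a standard.
import math

x = [25.89, 25.92, 25.87, 25.83, 25.79, 25.53, 25.39, 26.00,
     25.53, 25.90, 25.83, 25.60, 25.65, 25.80, 25.40]

n = len(x)
x_bar = sum(x) / n
s = math.sqrt(sum((v - x_bar) ** 2 for v in x) / (n - 1))

D4 = 1.652                    # 99% confidence factor used in the text for n = 15
ucl = x_bar + D4 * s
lcl = x_bar - D4 * s
print(f"mean = {x_bar:.2f}, S = {s:.3f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
# Table 5.11, which rounds the mean and deviations, reports 25.72, 0.207, 26.06, and 25.38.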
In the second example percent recovery is used to
determine accuracy: In the percent recovery technique the
control chart represents recovery efficiency, whereas in
the technique just described the control chart represented
analytical values. In preparing a control chart for per-
cent recovery, follow these steps:
5-35
-------
TABLE 5.11 ACCURACY DATA FOR PERCENT NITROGEN

Sample     % Nitrogen (x)     x - x̄
   1          25.89            .17
   2          25.92            .20
   3          25.87            .15
   4          25.83            .11
   5          25.79            .07
   6          25.53           -.19
   7          25.39           -.33
   8          26.00            .28
   9          25.53           -.19
  10          25.90            .18
  11          25.83            .11
  12          25.60           -.12
  13          25.65           -.07
  14          25.80            .08
  15          25.40           -.32

x̄ = 25.72          S = 0.207

D4 S = 1.652 (0.207) = .342
UCL = 25.72 + .342 = 26.06
LCL = 25.72 - .342 = 25.38
Figure 5.8 Accuracy Control Chart for Percent Nitrogen
(Percent nitrogen values plotted in order of results about the mean value line at 25.72, with UCL = 26.06 and LCL = 25.38, from the data in Table 5.11.)
-------
0 Perform the recovery procedure 15-20 times and use
  the results for calculations.
0 Calculate the average (x̄) for all percent recoveries.
  The average will become the mean nominal value for
  the control chart.
0 Compute the standard deviation of the percent
  recoveries by:

      S = √[ (n Σx² - (Σx)²) / (n(n - 1)) ]

  Convenience determines which formula to use for
  standard deviation.
0 Compute the upper control limit and the lower
  control limit by UCL = x̄ + D4 S and LCL = x̄ - D4 S.
  D4, as it was in the previous example, is based on
  n = 15.
0 Plot the mean value for all the recoveries on the
  control chart and construct the upper and lower
  control limits. The control chart is now ready for
  plotting percent recovery data as generated. Table
  5.12 shows the data used to construct Figure 5.9.
Figure 5.9 Accuracy Control Chart for % Recovery
(Percent recovery plotted in order of results; mean value = 99.74% recovery, UCL = 103.69% recovery, LCL = 95.79% recovery.)
5-37
-------
TABLE 5.12 PERCENT RECOVERY DATA

           % Recovery
Sample         x
   1          .982
   2          .990
   3          .993
   4         1.005
   5         1.021
   6         1.030
   7          .970
   8          .975
   9          .985
  10
  11
  12         1.020
  13         1.015
  14
  15

Σx² = 14.93
x̄ = .9974
S = .0239
D4 = 1.652 for n = 15

UCL = x̄ + D4 S = .9974 + 1.652 (.0239) = 1.0369 = 103.69%
LCL = x̄ - D4 S = .9974 - 1.652 (.0239) = .9579 = 95.79%
The control chart constructed from the data in Table 5.12
has the mean nominal value line at the point representing
the 99.74% recovery level. The upper control limit repre-
sents the 103.69% recovery level, and the lower control
limit represents the 95.79% recovery level. Perfect
recovery would be at the 100% level. Negative interferences
would be expected to keep the recovery below 100%, but
positive interferences can also influence an analysis. The
positive interferences are stronger than the negative when
the percent recovery is found to be more than 100 percent.
The direction of each percent recovery from the 100 percent
level, then, can indicate the kinds of errors to look for
when control limits are exceeded.
5-38
-------
5.4 Interlaboratory Proficiency Testing
The purpose of an interlaboratory proficiency testing
program is to determine the relative proficiency of a group
of laboratories and/or analysts in performing an analytical
method. Laboratories participating in such programs are
provided with standard reference samples and instructions
for analysis of those samples. A co-ordinating laboratory
prepares the samples and evaluates the results. Partici-
pation in interlaboratory programs is recommended.
Several problems can occur in proficiency testing
programs. Some are procedural or administrative problems,
and some are technical problems related to method standardi-
zation and ruggedness testing.
How to obtain unbiased treatment of the sample is an
important problem in proficiency testing programs. Samples
that are analyzed as part of a proficiency testing program
should be placed in the sample run as unknowns. If the
analyst recognizes the sample as coming from the referee
laboratory, he could use extra care in the analysis, and
the result would not represent his normal proficiency with
the method.
Stability of the sample is an important technical
consideration. The referee laboratory should determine the
net effect of storage and transportation on the sample. The
Center for Disease Control, Atlanta, Georgia, has employed
5-39
-------
a technique called "pigeon sampling". A sample is prepared
and an aliquot is analyzed. Another aliquot is shipped to a
designated point and returned unopened to the central labor-
atory. The aliquot is analyzed, and the results are compared
with those obtained for the sample aliquot before shipment.
The difference is an indication of the net effect of trans-
porting the sample.
Before a method can be used routinely to rate the pro-
ficiency of laboratories participating in the program, it
must be subjected to extensive testing to determine the
total effect of changes in operating parameters. Disagree-
ment among analytical results from different laboratories can
indicate that the laboratories are not controlling to the
same extent the parameters that affect the final results.
A case study involves the disagreement in results from
two laboratories using the SPADNS - Zirconium Lake Method
(Reference 25) for determination of fluorides in stack gas. A statis-
tical analysis indicated that the difference between the
results of Laboratory A and Laboratory B was significant.
Because of the significance of the difference in
analytical results, steps were initiated to determine any
operational differences in the use of the method by the two
laboratories. The investigation showed two major differences:
1) Laboratory B varied the steps of the method, but
varied each step in the same manner each time;
5-40
-------
2) Laboratory A stayed within the operational ranges
for each parameter, but varied inconsistently within
those ranges. The specific differences are shown
in Table 5.13 below.
TABLE 5.13 VARIATIONS IN METHOD PROCEDURES

Method Description               Laboratory A       Laboratory B

Charge still with 0.5 to         Same as method.    Charge still with 0.4
0.9 mg fluoride                                     to 0.6 mg fluoride

Perform determinations at        Same as method.    Perform determinations
15° to 30°C                                         at 24.8°C

Change H2SO4·H2O mixture         Same as method.    Change H2SO4·H2O
when recovery check                                 mixture every three
indicates necessity                                 runs
A program was undertaken at Laboratory A to determine
the effects of varying the parameters outlined in the method
description. The investigation showed that change in the
temperature parameter made a significant difference in the
analytical results. The procedure follows:
0 Data were obtained by using the method with
temperature control (SPADNS (b)) and by using
the method without temperature control (SPADNS (a)).
0 Data were obtained by using the method with
temperature control (SPADNS (b)) and by using a
specific ion electrode (c).
0 The differences in results between (a,b) and (b,c)
were analyzed to determine significance.
Comparative data analysis gave the following results:
                                   SPADNS(a)-SPADNS(b)     SIE(c)-SPADNS(b)
No. of samples                             18                     18
Average difference                       -0.79                  -0.10
Standard deviation of difference          0.98                   0.67
Student "t" value                  -3.4 (Significant)     -0.66 (Not Significant)
5-41
-------
NOTE: For a sample of size n = 18, if t = -0.66, the
probability that the two methods yield similar results is
about 0.50. However, if t = 3.4 the probability that the
two methods yield similar results is less than 0.005. It is
generally accepted that a probability of 0.05 or less is
sufficient to allow one to conclude that the difference
between the methods is statistically significant.
These data show that the temperature is more critical
than was indicated in the initial method description. For
use in interlaboratory proficiency testing, it is important
that the operating parameters of the analytical method be
specifically defined.
5.5 Tracing Errors
When precision or accuracy control charts indicate
that the analytical process is in error, it is necessary to
find the source of error and to correct it. The histories
and records from calibration, function checking, maintenance,
and material quality checks can provide information with
which to trace the sources of analytical variability.
To make optimal use of time and resources when tracing
an error, the analyst should perform the search in a standard
logical pattern designed so that the most obvious sources of
error are considered first. If the search must then progress
to less obvious sources of error, the procedure becomes
more involved. A review of reagent standardization charts
5-42
-------
may be required and perhaps such procedures as parallel
analysis using the original standard solutions before and
after restandardization, if the restandardization results
differ.
Table 5.14 illustrates one classification of error-
tracing procedures according to complexity.
TABLE 5.14 CLASSIFICATION OF ERROR SOURCES

Source of Error          Explanation                     Input to Tracing Procedure

Variation of Method      A procedure or material         Control chart - look for
Steps (Intentional)      specified has been changed      trend on all samples.
                         or substituted.                 Method description.

Unintentional Method     A procedure has been altered    Method description - note
Variation                or deleted unintentionally.     special requirements such
                                                         as sample preparation.

Change in Sample         The sample has changed or an    Field sampling records.
                         interference has been           Sample label data.
                         introduced.

Analytical Error         An error has occurred in a      Calibration data for
                         basic procedure such as         dilutors, volumetric ware,
                         dilution, scale reading,        etc. Calibration curves.
                         weighing, etc.                  Maintenance logs. Function
                                                         check results for
                                                         instruments.

Change in Reagent        A change has occurred in a      Standardization and check
or Materials             reagent or in a material        lists for reagents and
                         selected for use in the         laboratory services.
                         preparation or storage of       Method description - look
                         reagents or in preparation      for storage requirements,
                         of sample.                      preparative steps.
5-43
-------
The flow diagram, Figure 5.10, shows the steps that
would be followed for the third level of complexity in error
tracing. At this level the analyst suspects that inter-
ferences have been introduced into the sample during
collection, storage or transport, or during sample condition-
ing and preparation in the laboratory. Pertinent input data
are shown, and logical alternatives are suggested.
The specific classification of error sources and the
logical steps involved in tracing the errors differ in each
laboratory. Application of the error tracing technique
depends on the methods used, complexity of the routine
operational checking and maintenance procedures, and the
training of the analysts and their understanding of the
methods.
5-44
-------
Figure 5.10 Procedures for Tracing Sampling Errors
(Flow diagram for a suspected sample error. Field data and the method description are the input documents; decision points ask whether the samples are compatible, whether adjustment is possible, and whether standard procedures were followed, leading to identification of interferences. Critical steps: A - check sample source; B - check field handling; C - check collection process.)
5-45
-------
6.0 DATA HANDLING AND REPORTING
6.1 General
Measurements of the concentration of a contaminant,
either in the ambient atmosphere or in the exhaust gas from
an emission source, are assumed to be representative of the
conditions existing at the time of sample collection. The
extent to which this assumption is valid depends on the
sources of error and bias inherent in the collection, handling
and analysis of the sample. Methods that have been thoroughly
researched and evaluated should have minimal error and no bias.
Besides the sampling and analytical error and bias, human
error may be introduced anytime between sample collection and
sample reporting. Included among the human errors are such
things as failure of the technician to record pertinent in-
formation, mistakes in reading an instrument, mistakes in
calculating results, and mistakes in transposing data from one
form to another. Data handling systems involving the use
of computers are susceptible to keypunching errors and
errors involving careless handling of magnetic tapes and
other storage media. Although human error cannot be com-
pletely avoided, it can be minimized.
6.2 Data Recording
Methods for determining concentrations of air contami-
nants can be classified into two categories: (1) intermittent,
(2) continuous. In most intermittent methods a discrete
6-1
-------
sample is collected in some media and is sent to a labora-
tory for the analytical determination. Both the field
technician and the laboratory analyst can make errors in
data handling. Continuous methods involve an analytical
sensor that produces a direct readout of the pollutant con-
centration. The readout may be an analog record traced on
a moving strip chart, or it may be a value punched on paper
tape or written on magnetic tape. Some systems use tele-
metry to transmit data in real-time to a data processing
facility.
6.2.1 Data Errors in Intermittent Sampling
The field technician records information before and
after the sample collection period. This information in-
cludes identification of the sampling location, start and
stop date and time, and data pertaining to flow rates, etc.
It is necessary to rely on the integrity of the field
personnel to record such items as location, start and stop
time, and data correctly. Acceptability limits
should be set for data pertaining to flow rates, etc., and
the technician should invalidate the sampling data when
values fall outside of these limits. Questionable measure-
ment results indicate the need for instrument maintenance or
calibration.
The analyst in the laboratory reads measurements from
balances, colorimeters, spectrophotometers, and other
6-2
-------
instruments and records the data on standard forms or in
laboratory notebooks. Each time he records a value he has
the potential for incorrectly entering the result. Typical
recording errors are transposition of digits (i.e. 216
could be incorrectly entered as 126) and incorrect decimal
point location (i.e. .0635 could be entered as 0.635).
These kinds of errors are difficult to detect. The labora-
tory director must continually stress the importance of
accuracy in recording results.
6.2.2 Data Errors in Continuous Sampling
Continuous air monitoring systems may involve either
manual or automated data recording. Manual data recording
is used to reduce data from strip charts. Automated data
recording may involve the use of a data logging device to
record data on paper tape or magnetic tape at the remote
sampling station, or the use of telemetry to transmit data
on-line to a computer at a central facility.
Manual reduction of pollutant concentration data from
strip charts can be a significant source of data errors. In
addition to those errors associated with recording data
values on record forms, the individual reading the chart can
also err in the determination of the time-average value.
Usually the reader estimates by inspection the average con-
centration. When the temporal variability in concentration
is large, it is difficult to determine an average concen-
6-3
-------
tration. Two persons reading the same chart may yield re-
sults that vary considerably.
Individuals who will be responsible for reducing data
from strip charts should be given extensive training. After
a trainee is shown how to read a chart, his results should
be compared with those of an experienced technician. Only
after a technician has demonstrated that he is capable of
obtaining satisfactory results should he be assigned to the
data reduction activity.
Periodically the supervisor or senior technician should
check strip charts read by each technician. When it becomes
obvious that an individual is making gross errors it may be
necessary to provide additional training.
Because manual chart reading is a tedious operation, a
drop in productivity and reliability can be expected after a
few hours. Ideally an individual should be required to
spend only a portion of a day at this task.
The use of a data logging device to automate data
handling from a continuous sensor is not a strict guarantee
against data recording errors. Internal validity checking
is necessary to avoid serious data recording errors. There
are two sources of error between the sensor and the recording
media: (1) the output signal from the sensor; (2) errors in
recording by the data logger.
A system recently installed by the Division of Air
Pollution Control, Cincinnati, Ohio, has validity checks
6-4
-------
for both sources of error. In this system a number of air
quality and meteorological sensors are interrogated at 5-
minute intervals. The data values are assembled into a
record and written on magnetic tape. Two of the data
channels in each record are reserved for an electronic check of
the data logger. One data channel is programmed to read
0000 ± 0005 and the other to read 1600 ± 0010. If the
value recorded in either data channel is not within the pre-
scribed limits, the recorded data values for all of the
sensors are of questionable validity.
The second validity check, performed once each day, is
designed to test the electronics of each sensor. This also
is a two point check in that each sensor transmits a signal
representative of 10 and 70 percent of scale. For the
first 5-minute period the signal corresponding to 10 percent
of scale is transmitted from the sensor to the data logger
and then to the magnetic tape. Similarly for the next 5-
minute period the signal for 70 percent of scale is written
on the magnetic tape. Small tolerances are permitted for
both levels on the scale. Should the value recorded on tape
for a particular sensor fall outside of the acceptable range,
all data for that sensor (since the prior sensor check) is
of questionable value.
For a system involving the use of telemetry it is also
necessary to include a validity check for data transmission.
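The following is a minimal, hypothetical sketch of how the two checks just described might be coded; the record layout, field names, and the tolerance on the daily sensor check are assumptions made only for the example.

# Illustrative sketch of the two validity checks described above.

def logger_record_valid(record):
    """Check the two reference channels written with every 5-minute record."""
    zero_ok = abs(record["zero_channel"] - 0) <= 5       # programmed to read 0000 +/- 0005
    span_ok = abs(record["span_channel"] - 1600) <= 10   # programmed to read 1600 +/- 0010
    return zero_ok and span_ok

def sensor_check_valid(low_reading, high_reading, full_scale, tol=2.0):
    """Daily two-point electronics check: signals at 10% and 70% of scale, small tolerance."""
    low_ok = abs(low_reading - 0.10 * full_scale) <= tol
    high_ok = abs(high_reading - 0.70 * full_scale) <= tol
    return low_ok and high_ok

# Hypothetical example record and sensor check values.
record = {"zero_channel": 3, "span_channel": 1604, "co_ppm": 12.0}
print("logger record valid:", logger_record_valid(record))
print("sensor check valid:", sensor_check_valid(10.4, 69.1, full_scale=100.0))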
6-5
-------
6.3 Data Validation
Data validation is the final step in handling raw
measurement data from air quality monitoring equipment or
emission source testing. Data validation involves a critical
review of a body of data in order to locate spurious values.
It may involve only a cursory scan to detect extreme values
or a detailed evaluation requiring the use of a computer.
In either situation, when a spurious value is located, it is
not immediately rejected. Each questionable value must be
checked for validity.
6.3.1 Data Validation for Manual Techniques
Both the analyst and the laboratory supervisor should
inspect intermittent air quality monitoring data and emission
source testing data. At regular intervals, daily or weekly,
results should be scanned for questionable values. This
type of validation is most sensitive to extreme values, i.e.
either unusually high or low concentrations.
The criteria for determining an extreme value are de-
rived from prior data obtained at the particular sampling
site (or a similar site if no previous data is available
for a site). The data used to determine extremes may be
the minimum and maximum concentrations for all prior data
from a site. The decision criteria might also be based on
minimum and maximum for each season, each month, or each
day.
6-6
-------
The time spent checking data that has been manually re-
duced by technicians depends on the time available and on
the demonstrated abilities of the technicians to follow in-
structions. No agencies appear at this time to be using a
specific formula for determining how much data should be
checked for validity in a manual data reduction system. One
air pollution control agency approached the problem in the
following manner:
0 A senior technician or supervisor was assigned to
check approximately 10% of the data interpreted by
each of four or five technicians. The 10% figure
was arbitrary, based on time availability and experience
in finding errors.
0 Data was checked for obvious trends or unusual
values indicating possible reader bias.
0 No statistical formula was applied to determine
the significance of differences between readings
interpreted by the technician and readings inter-
preted by the senior technician or supervisor. If
the two values differed by more than two digits in
the last significant figure, the data was judged
unacceptable.
0 Each analyst's technique of data interpretation
was checked against written procedures describing
the use of graphic aids to determine if those
graphic aids had been properly used. The most
significant errors originated from the technician
deviating from the written procedures — not from
random error.
6.3.2 Data Validation for Computerized Techniques
A computer can be used not only to store and retrieve
data but also for data validation. This will require the
development of a specialized computer program. The system
6-7
-------
for checking extreme values in manual techniques also
applies here. The criteria for extreme values can be refined
to individual hours during the day. With this procedure
an hourly average concentration for carbon monoxide of 15
ppm may not be considered as an extreme value for 8:00 A.M.
but could be tagged as questionable if it appeared at 2:00
A.M.
Another indication of possible spurious data is a large
difference in concentration for two successive time intervals.
The difference in concentration, which might be considered
excessive, may vary from one pollutant to another and quite
possibly may vary from one sampling location to another for
the same pollutant. Ideally this difference in concentration
is determined through a statistical analysis of historical
data. For example, it may be determined that a difference of
0.05 ppm in the SO2 concentration for successive hourly
averages occurs rarely (less than 5 percent of the time).
But at the same station the hourly average CO concentration
may change by as much as 10 ppm. The criteria for what con-
stitutes an excessive change may also be linked to time of
day.
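As an illustration of the kind of program such validation requires, the sketch below screens a day of hourly carbon monoxide averages against an hour-of-day limit and a maximum hour-to-hour change. The limit values here are hypothetical; in practice they would come from a statistical analysis of historical data at each station.

# Illustrative sketch of two computerized screening checks: an hour-specific extreme-value
# test and a maximum allowable hour-to-hour change. The limit tables are hypothetical.

HOURLY_MAX = {hour: 10 if hour < 6 else 40 for hour in range(24)}   # ppm CO, by hour of day
MAX_HOURLY_CHANGE = 10.0                                            # ppm CO between successive hours

def flag_questionable(hourly_values):
    """Return hours whose averages should be checked by hand, not rejected outright."""
    flagged = []
    for hour, value in enumerate(hourly_values):
        if value > HOURLY_MAX[hour]:
            flagged.append((hour, "exceeds hour-of-day limit"))
        if hour > 0 and abs(value - hourly_values[hour - 1]) > MAX_HOURLY_CHANGE:
            flagged.append((hour, "excessive change from previous hour"))
    return flagged

co = [3, 2, 2, 15, 4, 5, 9, 18, 25, 20, 14, 12, 11, 10, 9, 9, 12, 22, 28, 18, 10, 7, 5, 4]
for hour, reason in flag_questionable(co):
    print(f"hour {hour:02d}: {reason}")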
The extent of the decision elements to be used in data
validation can not be defined for the general case. Rather,
the validation criteria should be tailored along the lines
suggested above for varying types of air monitoring networks.
6-8
-------
6.4 The Statistical Approach to Data Validation
6.4.1 Maintaining Data Quality in Manual Data Reduction
Systems
Often the output from a continuous air monitoring device
is an analog trace on a strip chart. Usually the strip
charts are cut at weekly intervals and are sent to the data
handling staff for interpretation. A technician may estimate
by inspection the hourly average pollutant concentrations and
convert the analog percent of scale to engineering units,
e.g. ppm. He may also read daily maximum five minute or ten
minute concentrations from the chart.
Reading strip charts is a tedious job subject to varying
degrees of error. A procedure for maintaining a desirable
quality for data manually reduced from strip charts is im-
portant. One procedure for checking the validity of the data
reduced by a technician is to have another technician or the
supervisor check the data. Because the values have been
taken from the strip chart by visual inspection, some
difference in the values derived by two individuals can be
expected. When the difference exceeds a nominal amount and
the initial reading has been determined to be incorrect, an
error should be noted. If the number of errors exceeds a
predetermined number, all data for the strip chart are re-
jected and the chart is read again by a technician other than
the one who initially read the chart. The question of how
6-9
-------
many values to check can be answered by applying acceptance
sampling techniques.
6.4.2 Acceptance Sampling Applications
Acceptance sampling can be applied to data validation
to determine the number of data bits (individual values on a
strip chart) that need to be checked to determine with a
given probability that all the data bits are acceptable.
The supervisor wants to know, without checking every data
value, if a defined error level has been exceeded. From
each strip chart with N data values, the supervisor can
randomly inspect n data values. If the number of erroneous
values is less than or equal to C, the rejection criterion,
the values for the strip chart are accepted. If the number
of errors is greater than C the values for the strip chart
are rejected, and another technician is asked to read the
chart.
The following discussion is meant only to explain how
acceptance sampling can be applied. The procedure is
relatively complex, and several types of acceptance sampling
can apply. A procedure for any specific application should
be developed by a competent individual who understands the
statistical derivation of acceptance sampling.
Values of n (the number of data values to be checked)
and C (the maximum number of errors that is acceptable) are
selected to insure a high probability of acceptance of all
6-10
-------
the strip chart values if the error rate is P1 or less. A
low probability of acceptance is insured if the error rate
is P2 or greater. The probability of acceptance of all the
strip chart values is 1-α for an error rate of P1 and β for
an error rate of P2. Typical values of α are 0.01, 0.05, and
0.10, and typical values of β are 0.05, 0.10, and 0.20. The
error rate is the percent of erroneous values.
The probability levels α and β are subjectively
chosen. To determine the probabilities to use, the risk of
accepting bad data or of rejecting good data must be con-
sidered. If the risk is small, the values for α (the
probability of rejecting acceptable data) and β (the
probability of accepting unacceptable data) may be set at
0.10 and 0.20 respectively. Decreasing the values of α and
β increases the size of the sample required to be checked.
Another problem is determining values for P1 and P2
(the acceptable and the non-acceptable error rates). The
acceptable error rate could be ten percent. If the number
of erroneous values exceeds ten percent of all values checked,
then the data for the entire strip chart could theoretically
be rejected. From a practical standpoint, however, if the
error rate is eleven percent, the strip chart may still be
acceptable. The P1 value represents the error rate that the
supervisor is trying to maintain. The P2 value represents
the maximum error rate that can be tolerated. As the
6-11
-------
difference between the acceptable error rate and the non-
acceptable error rate increases for small sample sizes, e.g.
ten or less, the acceptance and rejection probabilities
change significantly.
A technique that can be applied to determine the effect
of varying n (the number of individual values checked) and
C (the maximum number of erroneous values allowed) is the
use of an operating characteristic (OC) curve. The OC
curve (Figure 6.1) gives the probability of acceptance for
various error rates. For the following example values of P
between 0.02 and 0.20 are used.
Consider that the choices of sample size, n, are 10,
25, and 50. Consider the rejection criterion to be 2, i.e.,
if more than 2 errors occur, the strip chart is rejected.
The next step is to compute the probability of 2 or fewer
errors for all P values from 0.02 to 0.20.
The probabilities of 0, 1, or 2 errors for each sample
size n and each error rate P can be evaluated from the binomial
distribution. Since the values of P being considered are
small, the Poisson distribution can be used to approximate
the binomial distribution. The cumulative probability
curves for the Poisson distribution can be approximated by
O £
use of a chart developed by Dodge and Romig.
The OC curves for the values of n and C are shown in
Figure 6.1. When n = 50 the probability of acceptance, if
6-12
-------
Probability of Acceptance
Probability of(Rejection
1.0
0.0
40
20
0.20
0.40
0.60
0.80
1.00
02 .04 .06 .08 .10 .12 .14 .16 .18 .20 .22
Error Rate •
Figure 6.1 OC Curve
6-13
-------
the error rate is P = 0.04, is approximately 0.68. But for
the same sample size of 50, if the error rate is P = 0.10,
the probability of acceptance is approximately 0.12. If the
sample size is n = 10 (and the acceptance criterion remains
at C = 2) the probability of acceptance if P = 0.04 is
approximately 0.99, and if P = 0.10 it is still approximately
0.92. So when the sample size decreases from 50 to 10 the
test cannot discriminate well between error rates of 0.04
and 0.10.
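For readers who prefer to compute the curves directly, the following sketch evaluates the probability of acceptance from the binomial distribution for the sample sizes discussed above; the Poisson approximation and the Dodge and Romig chart served the same purpose when such sums had to be evaluated by hand. The printed values agree with the approximate readings quoted above to within the precision of reading a chart.

# Illustrative sketch: probability of acceptance P(X <= C) for the plans discussed above.
from math import comb

def prob_accept(n, c, p):
    """Probability of c or fewer erroneous values among n checked, error rate p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

C = 2
for n in (10, 25, 50):
    row = ", ".join(f"P={p:.2f}: {prob_accept(n, C, p):.2f}" for p in (0.02, 0.04, 0.10, 0.20))
    print(f"n = {n:2d}  {row}")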
It is apparent that establishing an acceptance plan
for maintaining a defined quality of data is difficult.
Various types of sampling plans are available. A quality
control expert should be consulted before an acceptance
sampling plan is adopted.
6.4.3 Sequential Analysis
6.4.3.1 Test Procedure - The typical approach used in
performing a statistical test of hypothesis requires the
collection of a sample of a fixed size. A statistic is
then computed from the sample data and compared with some
critical values for that statistic. A decision is then made
to accept the hypothesis (H0) or to accept some alternative
hypothesis (H1). With such a procedure it is necessary to
6-14
-------
collect the specified sample of observations regardless of
the results that may be obtained from the first few observations.
A procedure called sequential analysis requires that a
decision be made after each observation is made. The possible
decisions to be made are:
1. Accept H0.
2. Accept H1.
3. Continue the sampling process.
This procedure has the advantage that, on the average, fewer
observations are required to reach a decision than would be
the case with a fixed sample size.
Sequential analysis is readily adaptable to acceptance
sampling related to checking the error rate of an analyst
responsible for reducing data from strip charts. Suppose
that a random sample of data values read by one analyst is
checked for validity by another (and probably more experienced)
analyst. Each time the difference between two such readings
exceeds some nominal value an error is said to have occurred
in the original value. Suppose further that it is desired to
reject a set of data only 5 percent of the time if the error
rate of the analyst is less than 0.05 and accept the set of
data only 10 percent of the time if the error rate is greater
than 0.15.
6-15
-------
      H0 : P = P0 ≤ 0.05
      H1 : P = P1 ≥ 0.15
      α = 0.05
      β = 0.10
On the basis of the above information it is possible
to construct the graph shown in Figure 6.2. The parallel
lines define the critical regions for the three possible
decisions, one of which must be made after each observation.
The actual observational data is plotted on the graph as a
step function (i.e. a line is drawn one unit to the right if
a data value is determined to be valid and one unit up if
the value is declared to be incorrect). The sampling process
is stopped when the step function crosses either parallel
line.
Suppose the following results are obtained when data
values read by an analyst are checked for validity:
6-16
-------
Figure 6.2 Sequential Test for the Error Rate of a Data Analyst
(Number of incorrect observations, dm, plotted against number of valid observations, gm. Two parallel decision lines divide the plane into an Accept H1 region (P1 = 0.15), a Continue Sampling band, and an Accept H0 region (P0 = 0.05).)
-------
Observation Result
1 g
2 g
3 g
4 b
5 g
6 b
7 g
8 g
9 g
10 g
11 g
12 b
13 g
14 g
15 g
16 g
17 g
18 g
19 g
20 b
21 g
22 g
23 g
24 g
25 b
26 g
27 g
28 b
29 g
30 b
31 g
32 g
33 g
34 b
35 g
The results can be plotted on Figure 6.2 by drawing a line
one unit to the right if the data value is good (g) and one
unit up if the data value is bad (b). After only a few
observations it appears that the final outcome will be to
reject the hypothesis that the analyst's true error rate is
equal to or less than 0.05. Indeed, after the 30th observation
(i.e. gm + dm = 30, where gm = 23 and dm = 7) the sampling
6-18
-------
process would be halted and the decision would be to accept
H1 (i.e. reject the hypothesis that the error rate is
equal to or less than 0.05), indicating that the error rate
of the analyst is not acceptable.
The equations of the parallel lines shown in Figure 6.2
are:
      1.099 dm - 0.111 gm = 2.890        (1)

      1.099 dm - 0.111 gm = -2.255       (2)

where dm = number of errors out of m observations
      gm = number of valid values out of m observations
The general form of equations (1) and (2) is given by
      dm ln(P1/P0) + gm ln[(1 - P1)/(1 - P0)] = ln[(1 - β)/α]      (3)

and   dm ln(P1/P0) + gm ln[(1 - P1)/(1 - P0)] = ln[β/(1 - α)]      (4)

where P0 = 0.05 = acceptable error rate
      P1 = 0.15 = excessive error rate
      α = 0.05 = probability of rejecting the set of
                 data when the true error rate is ≤ 0.05
      β = 0.10 = probability of accepting the set of
                 data when the true error rate is > 0.15
6-19
-------
      ln = natural logarithm of the term following

Substituting the values shown above in equations (3)
and (4) yields:

      dm ln(0.15/0.05) + gm ln[(1 - 0.15)/(1 - 0.05)] = ln[(1 - 0.10)/0.05]

      dm ln(0.15/0.05) + gm ln[(1 - 0.15)/(1 - 0.05)] = ln[0.10/(1 - 0.05)]
The method of sequential analysis is based upon the
computation of the sequential probability ratio P1m/P0m.
The denominator of this ratio, P0m, is the probability that
the m observations would occur if hypothesis H0 were true.
Similarly, P1m is the probability that the m observations
would occur if hypothesis H1 were true. The test procedure
followed is as follows:

1. If P1m/P0m ≤ β/(1 - α), accept H0.

2. If P1m/P0m ≥ (1 - β)/α, accept H1.

3. If β/(1 - α) < P1m/P0m < (1 - β)/α, take another observation.
6-20
-------
If the error rate of the analyst is P1, the probability
of getting exactly dm errors and gm valid data values out of
a set of m observations is

      P1m = P1^dm (1 - P1)^gm                                      (8)

On the other hand, if the error rate of the analyst is P0,

      P0m = P0^dm (1 - P0)^gm                                      (9)

The sequential probability ratio is

      P1m/P0m = (P1/P0)^dm [(1 - P1)/(1 - P0)]^gm                  (10)

and the decision to accept H0 shown above becomes

      (P1/P0)^dm [(1 - P1)/(1 - P0)]^gm ≤ β/(1 - α)                (11)

Taking the natural logarithm of (11) yields

      dm ln(P1/P0) + gm ln[(1 - P1)/(1 - P0)] ≤ ln[β/(1 - α)]      (12)

from which equation (4) is obtained by using the equality
(i.e. the critical value). Equation (3) follows in a
similar manner.
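A minimal sketch of the sequential test follows, with the decision-line constants recomputed from P0, P1, α, and β as in equations (1) through (4). The short result strings at the end are hypothetical examples, not the data of the worked example above.

# Illustrative sketch of the sequential test for an analyst's error rate.
import math

P0, P1, ALPHA, BETA = 0.05, 0.15, 0.05, 0.10
A = math.log(P1 / P0)                      # coefficient of d_m, about 1.099
B = math.log((1 - P1) / (1 - P0))          # coefficient of g_m, about -0.111
ACCEPT_H1 = math.log((1 - BETA) / ALPHA)   # about 2.890
ACCEPT_H0 = math.log(BETA / (1 - ALPHA))   # about -2.255

def sequential_decision(results):
    """results: iterable of 'g' (valid value) or 'b' (erroneous value), in checking order."""
    d = g = 0
    for count, r in enumerate(results, start=1):
        d += (r == "b")
        g += (r == "g")
        score = A * d + B * g
        if score >= ACCEPT_H1:
            return f"accept H1 (excessive error rate) after {count} observations"
        if score <= ACCEPT_H0:
            return f"accept H0 (acceptable error rate) after {count} observations"
    return "continue sampling"

print(sequential_decision("ggbgbgbb" * 3))       # hypothetical, error-prone analyst
print(sequential_decision("gggggggggggggggb"))   # hypothetical, one error in 16 values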
6-21
-------
6.4.3.2 Operating Characteristic - The operating characteristic
(OC) function is the function which defines the probability of
acceptance when the error rate for the data analyst is P. For
the example discussed above, four points on the OC curve are
known;
Error Rate Probability of Acceptance
(P) (OC Function)
0 1
0.05 1-a = 0.95
0.15 8 = 0.10
1 0
An additional point between P0 and P1 can be plotted
for the proportion

      P' = ln[(1 - P1)/(1 - P0)] / { ln[(1 - P1)/(1 - P0)] - ln(P1/P0) }

and the corresponding probability of accepting P' as

      Pr(P') = ln[(1 - β)/α] / { ln[(1 - β)/α] - ln[β/(1 - α)] }
6-22
-------
Substituting P0, P1, α, and β for the example yields:

      P' = ln[(1 - 0.15)/(1 - 0.05)] / { ln[(1 - 0.15)/(1 - 0.05)] - ln(0.15/0.05) }

         = 0.09

      Pr(P') = ln[(1 - 0.10)/0.05] / { ln[(1 - 0.10)/0.05] - ln[0.10/(1 - 0.05)] }

             = 0.56
The OC curve for the example problem is presented in
Figure 6.3. As can be seen, if the actual error rate of the
analyst is equal to or greater than 0.15, the probability of
acceptance with this test procedure is small (<0.10).
If, however, the actual error rate of the analyst is about 0.10,
the probability is about 0.5 that this test procedure will
accept the set of data as having an acceptable error rate.
6.4.3.3 Average Sample Number (ASN) - It is obvious from
Figure 6.2 that the sampling process to determine if
the analyst is maintaining an acceptable error rate will
terminate after a few observations if the error rate is
much greater than P1 or much smaller than P0. On the other
hand, if the error rate is near or between P0 and P1, it may
6-23
-------
Figure 6.3 Operating Characteristic Curve
(Probability of acceptance plotted against error rate P.)
6-24
-------
be necessary to check a large number of data values to
ascertain whether or not the analyst is performing at
a satisfactory level. Based on the example, if all of
the data values that are checked are incorrect, the check-
ing process will terminate after only 3 observations.
Likewise if all values are valid the checking process will
be stopped after 16 observations.
The average sample number (ASN) for the example problem
for P = P0 = 0.05 is given by

      ASN = [(1 - α) ln(β/(1 - α)) + α ln((1 - β)/α)] /
            [P0 ln(P1/P0) + (1 - P0) ln((1 - P1)/(1 - P0))]

          = 40

Similarly, for P = P1 = 0.15 the ASN is given by

      ASN = [β ln(β/(1 - α)) + (1 - β) ln((1 - β)/α)] /
            [P1 ln(P1/P0) + (1 - P1) ln((1 - P1)/(1 - P0))]

          = 33
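The OC point and the average sample numbers can be checked with a short computation such as the sketch below; small differences from the rounded values quoted above arise from carrying full precision.

# Illustrative sketch: OC point and average sample numbers for the example plan.
import math

P0, P1, ALPHA, BETA = 0.05, 0.15, 0.05, 0.10
ln_A = math.log((1 - BETA) / ALPHA)        # about 2.890
ln_B = math.log(BETA / (1 - ALPHA))        # about -2.251
ln_p = math.log(P1 / P0)                   # about 1.099
ln_q = math.log((1 - P1) / (1 - P0))       # about -0.111

p_prime = ln_q / (ln_q - ln_p)             # about 0.09
pr_accept = ln_A / (ln_A - ln_B)           # about 0.56

asn_p0 = ((1 - ALPHA) * ln_B + ALPHA * ln_A) / (P0 * ln_p + (1 - P0) * ln_q)  # about 39-40
asn_p1 = (BETA * ln_B + (1 - BETA) * ln_A) / (P1 * ln_p + (1 - P1) * ln_q)    # about 33-34

print(f"P' = {p_prime:.2f}, Pr(accept at P') = {pr_accept:.2f}")
print(f"ASN at P0 = {asn_p0:.0f}, ASN at P1 = {asn_p1:.0f}")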
6.4.3.4 Observations in Groups - There may be situations when
it is more efficient (from the standpoint of the collection of
a sample of observations) to make several observations at a
time and then record the results in a manner similar to that
6-25
-------
for the example problem above. It can be shown mathematically
(Reference 4) that the ASN will be increased by an amount equal
to the number of observations in each group. Thus, for a group
size of five, the ASN's reported for P0 and P1 above would be
increased by 5, to 45 and 38 respectively.
6-26
-------
7.0 REFERENCES
1. Bauer, E.L., A Statistical Manual for Chemists.
2. Bennett, C. A., and Franklin, N. L., Statistical Analysis
in Chemistry and The Chemical Industry, John Wiley and
Sons, New York, 1954.
3. Duncan, A. J., Quality Control and Industrial Statistics,
Richard D. Irwin, Inc., Homewood, Illinois, 1959.
4. Dixon, W. J., and Massey, F. J., Jr., Introduction to
Statistical Analysis, McGraw Hill Book Co., Inc., New
York, 1957.
5. Snedecor, G. W., and Cochran, W. G., Statistical Methods,
The Iowa University Press, Ames, Iowa, 1967.
6. International Electrotechnical Commission, Technical
Committee 66, Working Group 6 on Electronic Measuring
Devices of Air and Water Pollution.
7. Office of Air Programs, "Field Operations Guide for
Automatic Air Monitoring Equipment", Office of Air
Programs Publication No. APTD-0736, EPA, Research
Triangle Park, North Carolina, 1971.
8. Intersociety Committee, APHA, Methods of Air Sampling
and Analysis, American Public Health Association, 1015
18th Street, N.W., Washington, D.C., 1972.
9. "Part 50 - National Primary and Secondary Ambient Air
Quality Standards", Federal Register, 36 FR 22 384,
November 25, 1971.
10. Grant, E.L., Statistical Quality Control, McGraw Hill
Book Co., Inc., New York, 1969.
11. Morrow, N.L., "Sampling and Analyzing Air Pollution
Sources", Chem. Eng.,pp . 85-98, January 4, 1972.
12. PEDCo-Environmental Specialists, Inc., "Administrative
and Technical Aspects of Source Sampling for Par-
ticulates", Contract No. CPA-70-124, EPA, 1971.
13. Shigehara, R.T., et al., "Significance of Errors in
Stack Sampling Measurements", Paper 70-35, APCA, St.
Louis, Missouri, June 14-18, 1970.
7-1
-------
14. American Society for Testing and Materials, "Standard
Method for Sampling Stacks for Particulate Matter",
ASTM Method D 2928-71.
15. Devorkin, H., "Source Testing Manual", Los Angeles
County Air Pollution Control District, Los Angeles,
California, 1963.
16. Price, L.W., "Maintenance of Laboratory Instruments",
"Laboratory Practice", summary of LABEX International
1969 meeting, under leadership of L.W. Price, Bio-
chemistry Dept., University of Cambridge.
17. U.S. Environmental Protection Agency, Handbook for
Analytical Quality Control in Water and Wastewater
Laboratories, June, 1972.
18. American Public Health Association, Standard Methods
for the Examination of Water and Waste Water, 13th
Edition, 1971.
19. American Society for Testing and Materials, "Water;
Atmospheric Analysis", 1971 Annual Book of ASTM
Standards, Part 23, November, 1971.
20. Nelson, G.O., Controlled Test Atmospheres Principles
and Techniques, Ann Arbor Science Publishers, Inc.,
Ann Arbor, Michigan, 1971.
21. Jacobs, M.B., The Chemical Analysis of Air Pollutants,
Interscience Publishers, Inc., New York, 1960.
22. U.S. Dept. of Commerce, National Bureau of Standards,
"NBS Special Publication 260".
23. Bayse, David, "Quality Control in Laboratory Manage-
ment", U.S. Department of Health, Education and
Welfare, Center for Disease Control, Licensure and
Development Branch, Proficiency Testing Section,
Atlanta, Georgia.
24. Varian Techtron Pty. Ltd., "Water Analysis by Atomic
Absorption Spectroscopy", Varian Techtron, Ltd.,
1972.
25. Decker, C.E., and Smith, W.S., "Determination of
Fluorides in Stack Gas: SPADNS - Zirconium Lake
Method". U.S. Dept. HEW, National Center for Air
Pollution Control, Cincinnati, Ohio.
7-2
-------
26. Dodge, H.F., and Romig, H.G., Sampling Inspection Tables,
John Wiley and Sons, Inc., New York, 1944.
27. Reference 3, pp. 365-366
28. Reference 3, p. 886
29. Reference 3, p. 367
7-3
-------
APPENDIX A
STATISTICAL FORMULAE AND DEFINITIONS
ARITHMETIC MEAN

      X̄ = (Σ Xi) / N

      X̄ = mean value
      Xi = individual value in the sample
      N = number of values in the sample

RANGE

      R = X max - X min

      X max = maximum value in the sample
      X min = minimum value in the sample

STANDARD DEVIATION

      S = √[ Σ (X - X̄)² / (N - 1) ]

      X = individual value in the sample
      X̄ = average of all values in the sample
      N = number of values in the sample

COEFFICIENT OF VARIATION (also referred to as relative standard
deviation)

      CV = (S / X̄) (100)

      S = standard deviation
      X̄ = mean value
A-l
-------
CONTROL LIMITS FOR CONTROL CHARTS

RANGE CHART (27)

      Upper Control Limit = UCL = D4 R̄

      R̄ = average range for a set of replicates
      D4 = adjustment factor
      D4 = 1 + 3 σ'w / d2
      σ'w = standard deviation of the relative range
      d2 = an adjustment factor used to estimate
           the standard deviation of a universe

      Lower Control Limit = LCL = D3 R̄

      R̄ = average range for a set of replicates
      D3 = adjustment factor
      D3 = 1 - 3 σ'w / d2
COEFFICIENT OF VARIATION CHART (28)

      Upper Control Limit = UCL = B4 CV̄

      CV̄ = average coefficient of variation for
           a set of replicates
      B4 = adjustment factor
      B4 = 1 + 3 / (c2 √(2N))
      N = number of values in a sample
      c2 = an adjustment factor used to estimate
           the standard deviation of a universe

      Lower Control Limit = LCL = B3 CV̄

      CV̄ = average coefficient of variation
           for a set of replicates
      N = number of values in a sample
      c2 = an adjustment factor used to estimate
           the standard deviation of a universe
CONTROL CHART FOR AVERAGES

      Upper control limit = UCL = x̄ + A2 R̄
      Lower control limit = LCL = x̄ - A2 R̄

      A2 = 3 / (d2 √N)

      N = number of values in a sample
      d2 = an adjustment factor used to estimate the standard
           deviation of a universe.
A-3
-------
APPENDIX B
TABLES OF FACTORS FOR CONSTRUCTING CONTROL CHARTS*
TABLE BI
FACTORS FOR CONSTRUCTING CONTROL
CHARTS FOR AVERAGES

Observations in      Factor for Control Limits
Sample, n                      A2
     2                       1.880
     3                       1.023
     4                       0.729
     5                       0.577
     6                       0.483
     7                       0.419
     8                       0.373
     9                       0.337
    10                       0.308
    11                       0.285
    12                       0.266
    13                       0.249
    14                       0.235
    15                       0.223
    16                       0.212
    17                       0.203
    18                       0.194
    19                       0.187
    20                       0.180
    21                       0.173
    22                       0.167
    23                       0.162
    24                       0.157
    25                       0.153
B-l
*The factors in these tables are for the 99% confidence interval.
-------
TABLE BII
FACTORS FOR CONSTRUCTING CONTROL
CHARTS FOR RANGE

Number of Observations      Factors for Control Limits
in Sample, n                    D3         D4
     2                          0         3.267
     3                          0         2.575
     4                          0         2.282
     5                          0         2.115
     6                          0         2.004
     7                        0.076       1.924
     8                        0.136       1.864
     9                        0.184       1.816
    10                        0.223       1.777
    11                        0.256       1.744
    12                        0.284       1.716
    13                        0.308       1.692
    14                        0.329       1.671
    15                        0.348       1.652
    16                        0.364       1.636
    17                        0.379       1.621
    18                        0.392       1.608
    19                        0.404       1.596
    20                        0.414       1.586
    21                        0.425       1.575
    22                        0.434       1.566
    23                        0.443       1.557
    24                        0.452       1.548
    25                        0.459       1.541
B-2
-------
TABLE BIII
FACTORS FOR CONSTRUCTING CONTROL CHARTS
FOR COEFFICIENT OF VARIATION

Number of Observations      Factors for Control Limits
in Sample, n                    B3         B4
     2                          0         3.267
     3                          0         2.568
     4                          0         2.266
     5                          0         2.089
     6                        0.030       1.970
     7                        0.118       1.882
     8                        0.185       1.815
     9                        0.239       1.761
    10                        0.284       1.716
    11                        0.321       1.679
    12                        0.354       1.646
    13                        0.382       1.618
    14                        0.406       1.594
    15                        0.428       1.572
    16                        0.448       1.552
    17                        0.466       1.534
    18                        0.482       1.518
    19                        0.497       1.503
    20                        0.510       1.490
    21                        0.523       1.477
    22                        0.534       1.466
    23                        0.545       1.455
    24                        0.555       1.445
    25                        0.565       1.435
B-3
-------
TABLE BIV
THE t DISTRIBUTION (TWO-TAILED TESTS)

Degrees of      Probability of a Larger Value, Sign Ignored
Freedom                0.050          0.010
    1                 12.707         63.657
    2                  4.303          9.925
    3                  3.182          5.841
    4                  2.776          4.604
    5                  2.571          4.032
    6                  2.447          3.707
    7                  2.365          3.499
    8                  2.306          3.355
    9                  2.262          3.250
   10                  2.228          3.169
   11                  2.201          3.106
   12                  2.179          3.055
   13                  2.160          3.012
   14                  2.145          2.977
   15                  2.131          2.947
   16                  2.120          2.921
   17                  2.110          2.898
   18                  2.101          2.878
   19                  2.093          2.861
   20                  2.086          2.845
B-4
-------
APPENDIX C
CALIBRATION CURVES FROM REGRESSION ANALYSIS BY THE
METHOD OF LEAST SQUARES
C.1 Constructing the Calibration Curve
The calibration curve presented in Figure C-1 shows
the relationship between the concentration of sulfur dioxide
in the sample and absorbance (i.e. the instrument response).
The calibration curve was determined on the basis of a least
squares regression analysis in which the relationship between
concentration (X) and absorbance (Y) is assumed to be of the
form:

      Y = a + bX

where a = intercept (i.e. the point at which the line
          crosses the Y axis)
      b = slope (i.e. the change in absorbance per unit
          change in concentration)

Equations for determining a and b are:

      b = [N Σ XiYi - (Σ Xi)(Σ Yi)] / [N Σ Xi² - (Σ Xi)²]

      a = Ȳ - bX̄ = (Σ Yi)/N - b (Σ Xi)/N

where N = 12 (the number of samples analyzed) as shown in
Table C-1.
C-l
-------
TABLE C-1
CALIBRATION DATA FOR SO2 DETERMINATION

   X         Y          X²          Y²           XY
  .20       .095       .04        .009025       .0190
  .20       .080       .04        .0064         .0160
  .20       .123       .04        .015129       .0246
  .60       .305       .36        .093025       .1830
  .60       .329       .36        .108241       .1974
  .60       .355       .36        .126025       .2130
 1.00       .559      1.00        .312481       .559
 1.00       .560      1.00        .313600       .560
 1.00       .590      1.00        .348100       .590
 1.40       .780      1.96        .6084        1.092
 1.40       .810      1.96        .6561        1.1340
 1.40       .790      1.96        .6241        1.106

ΣX = 9.6     ΣY = 5.376     ΣX² = 10.08     ΣY² = 3.220626     ΣXY = 5.694
X̄ = .8       Ȳ = .448
Substituting the appropriate summations from Table C-1:

      b = [12 (5.694) - (9.6)(5.376)] / [12 (10.08) - (9.6)²]

      b = 0.5805

      a = 5.376/12 - (0.5805)(9.6)/12

      a = -0.0164
The equation for the calibration line is:
Y = -0.0164 + 0.5805X
To plot this straight line (Figure C-l) two points in
the XY plane are necessary. One point is the Y intercept, that
is when X = 0, Y = -0.0164. A second point can be selected for
any other value of X, for example, X = 1.2. Then
Y = -0.0164 + 0.5805 (1.2)
Y = 0.6802
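A short sketch of the same least squares computation is given below; the sums, slope, intercept, and the point at X = 1.2 printed by it can be compared with the values above.

# Illustrative sketch of the least squares fit of the Table C-1 calibration data.
X = [.20, .20, .20, .60, .60, .60, 1.00, 1.00, 1.00, 1.40, 1.40, 1.40]
Y = [.095, .080, .123, .305, .329, .355, .559, .560, .590, .780, .810, .790]
N = len(X)

sum_x, sum_y = sum(X), sum(Y)                    # 9.6 and 5.376
sum_xy = sum(x * y for x, y in zip(X, Y))        # 5.694
sum_x2 = sum(x * x for x in X)                   # 10.08

b = (N * sum_xy - sum_x * sum_y) / (N * sum_x2 - sum_x ** 2)   # about 0.5805
a = sum_y / N - b * sum_x / N                                  # about -0.0164

print(f"calibration line: Y = {a:.4f} + {b:.4f} X")
print(f"predicted absorbance at X = 1.2: {a + b * 1.2:.4f}")   # about 0.68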
C-2
-------
Figure C-1 Calibration Curve for SO2 Determination
(Absorbance plotted against concentration; the regression line Y = -0.0164 + 0.5805X is shown with 99% upper and lower control lines.)
-------
C.2 Constructing Control Limits for the Calibration Curve
The control limits for the calibration curve are deter-
mined from the standard error of estimate, Syx. The standard
error of estimate is a measure of the spread of the response
variable (absorbance) about the regression line. Its value
can be determined from:

      Syx = √{ [Σ Yi² - a Σ Yi - b Σ XiYi] / (N - 2) }

      Syx = √{ [3.220626 - (-0.0164)(5.376) - (0.5805)(5.694)] / 10 }

          = 0.0185
Regression theory is based on the assumption that, for
each value of X, the values of the response variable Y are
normally distributed about the point on the straight line.
Then a region of ±2.58 Syx units on either side of the regres-
sion line should contain 99 percent of all of the measured values
of Y. The limits of this region are then the UCL and LCL for
determining whether or not future calibration values are in
control.
For any specific methodology, a clear plastic overlay could
be constructed showing the control lines for the calibration
curve. Each time a control sample is run, and the analytical
value is plotted against the calibration curve, the process is
judged to be in control if the point lies between the control
lines of the overlay.
C-4
------- |