1. The use of control charts for plotting audit data
as they are obtained to allow for corrective
action to be taken, if necessary, after each audit.
2. The use of audit data to estimate the precision
and bias of the field observations on a lot-by-lot
basis.
3. Testing the data quality against given upper (U)
and lower (L) limits using sampling by variables
to monitor and thereby help control the average
percentage of reported field observations falling
outside the limits.
Each aspect is treated separately in the following para-
graphs.
A. Use of Control Charts
A form such as shown in figure 5.8.9 is suggested for
recording individual audit data as they are obtained. Fill
in the clerical information which includes the name of the
auditor, date of the audit, name of the observer being
audited, audit number, date that the audit period ends, and
date that the Data Assessment Form is filled out.
Both the observer and the auditor are to calculate the
average opacity, Ō_j and Ō_aj respectively, for each of the 3
runs of 24 consecutive readings each. Using the following
equation, calculate the difference between the auditor's and
observer's values for each run (j = 1 to 3):

$d_j = \bar{O}_j - \bar{O}_{aj}$
The three values of d_j can then be plotted on a control
chart as shown at the bottom of the sample data assessment
form in figure 5.8.9. The control chart is a quick visual
check to determine whether the d_j values are within
acceptable limits.
The values used in figure 5.8.9 for the upper control
limit (UCL) and lower control limit (LCL) represent ±3 standard
[Figure 5.8.9 is a one-page form with header fields for the
auditor's signature, observer's name, audit number, audit
date, today's date, and the end of the audit period; a data
table listing Ō_j, Ō_aj, and d_j = Ō_j - Ō_aj for audit runs
1, 2, and 3; a check for whether |d̄| > 3σ_d̄ = 6; and a
control chart for plotting the d_j values at j = 1, 2, 3
against UCL = +10.4, warning limits of ±6.9, and LCL = -10.4
percent opacity.]
Figure 5.8.9. Sample data assessment form for each audit.
deviations of the differences, based on the between-observer
standard deviation, σ_b = 2.45 percent opacity, reported from
a collaborative test of the method (ref. 3), and thus a
standard deviation of the differences of σ_d = √2 σ_b = 3.46
percent opacity. These limits should be recalculated and
adjusted if necessary as actual field data become available.
If one or more d_j values fall outside the UCL and LCL
limits, or if the average difference of the three runs
satisfies |d̄| > 6, action to correct possible deficiencies
should be taken before the audited observer performs future
field observations. Such action should include either
informal or formal retraining of the observer.
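For illustration, the two acceptance checks above can be
stated algorithmically. The sketch below is not part of the
reference method; it is a minimal Python example with
hypothetical run data and function names, deriving the
control limits from the collaborative-test value σ_b = 2.45
percent opacity:

```python
import math

# Control limits from the between-observer standard deviation (ref. 3):
# sigma_d = sqrt(2) * sigma_b ~= 3.46 percent opacity; UCL/LCL = +/-3*sigma_d.
SIGMA_B = 2.45
SIGMA_D = math.sqrt(2) * SIGMA_B          # ~3.46 percent opacity
UCL, LCL = 3 * SIGMA_D, -3 * SIGMA_D      # ~+10.4 and -10.4 percent opacity

def assess_audit(observer_avgs, auditor_avgs):
    """Apply the per-audit checks to the three run averages (j = 1 to 3)."""
    d = [o - a for o, a in zip(observer_avgs, auditor_avgs)]  # d_j = O_j - O_aj
    d_bar = sum(d) / len(d)
    outside_limits = any(not (LCL < dj < UCL) for dj in d)
    needs_action = outside_limits or abs(d_bar) > 6.0
    return d, d_bar, needs_action

# Hypothetical audit: the observer reads slightly higher than the auditor.
d, d_bar, flag = assess_audit([22.5, 30.0, 18.5], [20.0, 27.5, 17.0])
print(d, round(d_bar, 2), flag)   # [2.5, 2.5, 1.5] 2.17 False
```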
The auditor should complete the data assessment audit
form or equivalent form and forward copies to his supervisor
and to the field observer's supervisor with appropriate
comments if either of the above performance criteria were
exceeded.
B. Estimating Precision and Bias of Field Observations
The average difference, d̄_i, for the ith audit as recorded
on the sample data assessment audit form of figure 5.8.9 is
used to fill in the data in the table at the top of figure
5.8.10 for a given auditing period. That is, values of d̄_i
for i = 1, 2, 3, ..., n are recorded in the table for each
audit period.
Bias of the field observations for that lot of field
observation data, obtained during the audit period, is
estimated by d̄, calculated as shown in figure 5.8.10.
The precision is estimated in terms of the standard
deviation of the average differences of the i audits, i.e.,
s_d̄. The standard deviation of the differences is calculated
as shown in figure 5.8.10 and used as an estimate of the
standard deviation of differences for that lot of data.
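A sketch of this calculation follows; it assumes the
estimators on figure 5.8.10 are the sample mean and the
sample standard deviation of the d̄_i values, and the
function name and data are hypothetical:

```python
import math

def lot_bias_and_precision(d_bars):
    """d_bars: average differences d_bar_i from the n audits in a period."""
    n = len(d_bars)
    d_bar = sum(d_bars) / n                       # estimated bias of the lot
    s_d_bar = math.sqrt(sum((x - d_bar) ** 2 for x in d_bars) / (n - 1))
    return d_bar, s_d_bar

# Hypothetical auditing period with n = 5 audits:
bias, s = lot_bias_and_precision([2.2, -1.0, 0.5, 3.1, -0.4])
print(round(bias, 2), round(s, 2))   # 0.88 1.73
```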
[Figure 5.8.10 is a one-page form with header fields for the
auditor's signature, observer's name, audit period, and
today's date; a table for recording the average difference
d̄_i for audit numbers i = 1, 2, 3, ..., n; and spaces for
computing the lot bias d̄ and the standard deviation s_d̄.]
Figure 5.8.10. Auditing period data assessment form.
The field data for that lot or group of field observa-
tions are reported with the calculated values for bias d̄ and
standard deviation of differences s_d̄.
C. Testing Data Quality Against Given Standards
Because the lot size is generally small, N < 100, and
the sample size is small, say of the order n < 10, it is
important to assess the quality of the data with respect to
prescribed limits using sampling by variables to make as
much use as possible of the audit data.
Some of the background concerning the assumptions and
the methodology will be repeated below for convenience.
However, a number of publications can be referred to for a
more detailed discussion of sampling by variables (see refs.
8, 9, and 10). The discussion below will be given in
regard to the specific problem of analyzing visible opacity
data which has some unique features as compared with the usual
sampling plans.
The plan as illustrated here is designed to provide a
probability of 0.9 of detecting a lot or group of field data
in which 10 percent or more of the differences, if all
observations had been audited, fall outside the limits L and U.
Using the data from a collaborative study on this method
(ref. 3), the mean difference of opacity measurements made by
different observers has a standard deviation of 3.46 percent
opacity. Assuming 3σ_d limits, the values of -10.4 and +10.4
are used to define lower and upper limits, L and U, respec-
tively, outside of which it is desired to control the portion
of differences, d_j. Following the method given in reference
10, a procedure for applying the variables sampling plan is
described below. Figures 5.8.11 and 5.8.12 illustrate
satisfactory and unsatisfactory data quality with respect to
the prescribed limits L and U.
The variables sampling plan requires the following:
[Figures 5.8.11 and 5.8.12 each show a distribution of the
differences relative to the limits L and U, where p is the
percent of measured differences outside those limits.]
Figure 5.8.11. Example illustrating p < 0.10 and satisfactory
data quality.
Figure 5.8.12. Example illustrating p > 0.10 and unsatisfactory
data quality.
d̄, the sample mean difference,
s_d̄, the standard deviation of the differences,
k, a constant whose value is a function of p
and n for a given sampling plan, and
p, the portion of the differences outside the
limits L and U which we want to detect with
a probability P.
For example, to control at 0.9 the probability of detecting
lots with data quality p equal to or greater than 0.10 (i.e.,
10 percent of the differences outside of L and U) for a sample
size of n = 10, the value k = 2.112 is obtained from table
5.8.3. Additional values of k for other sampling plans can
be determined from table II of reference 10. The values of
d̄ and s_d̄ are calculated as shown in figure 5.8.10.
Given the above information, the test procedure is
applied and subsequent action is taken in accordance with
the following criteria:
a. If both of the following conditions are satisfied,

$U_c = \bar{d} + k s_{\bar{d}} \leq U$

$L_c = \bar{d} - k s_{\bar{d}} \geq L$

then the measurements are considered to be
consistent with prescribed data quality limits
and no corrective action is prescribed.
b. If one or both of the inequalities is violated,
possible deficiencies exist in the opacity deter-
mination process as carried out for that particular
lot of field observations. These deficiencies
should be identified and corrected before future
field observations are performed.
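As a worked illustration of criteria a and b, the following
sketch (hypothetical values) applies the variables sampling
test with the plan constant k = 2.112 for n = 10 and p = 0.10:

```python
def variables_test(d_bar, s_d_bar, k=2.112, L=-10.4, U=10.4):
    """Return True if the lot is consistent with the quality limits."""
    U_c = d_bar + k * s_d_bar
    L_c = d_bar - k * s_d_bar
    return U_c <= U and L_c >= L   # both conditions of criterion a

# Hypothetical lot: bias 0.88 and s_d_bar 1.73 from the earlier sketch.
print(variables_test(0.88, 1.73))  # True: U_c ~= 4.53, L_c ~= -2.77
```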
Table 5.8.3 contains a few selected values of n, p, and k
for convenient reference.
Table 5.8.3. Sampling plan constants k for P{detecting a lot
with proportion p outside limits L and U} ≥ 0.9

Sample size n    p = 0.2    p = 0.1
      3           3.039      4.258
      5           1.976      2.742
      7           1.721      2.334
     10           1.595      2.112
     12           1.550      2.045

5.8.4.2 Evaluation of Training Schools--The method
recommended for monitoring a given training school requires
that an auditor attend one complete training course for
every 10 training courses held by the school. If fewer than
10 training courses are held in 1 year, at least one of
the courses must be audited. The audited courses are randomly
selected.
5.8.4.2.1 Method of auditing training schools. The auditor,
i.e., the individual performing the audit, should have extensive
background experience in all of the areas listed in sections
5.8.3.2 and 5.8.3.3. In addition, the auditor must be a current-
ly certified observer and have had experience in the preparation
and implementation of a training course.
Attend all of the lectures and/or seminars, using a
Course Evaluation Form as shown in figure 5.8.13 as a guide-
line for making quality evaluations. Upon completion of the
course, document the evaluation with the designated grading system.
Check the smoke generator for any obvious malfunctions,
following procedures similar to those in section 5.8.3.3. Us-
ing appendix A as a guideline, confirm that the generator meets
those specifications. Document all results on a Smoke Generator
Log (figure 5.8.5).
5.8.4.2.2 Course-quality assessment. It is recommended
that the audit level be increased by a factor of two if either
of the following situations occurs:
[Figure 5.8.13 is a one-page form headed METHOD 9 COURSE
EVALUATION, with a grading system (1 - Excellent, 2 - Good,
3 - Fair, 4 - Poor), a comments column, and the following
quality checks, each rated 1 through 4:
1. Definition of course objectives
2. Choice of subjects
3. Lecturer(s)' knowledge of subjects covered
4. Relevancy of the material covered
5. Proper emphasis on important facts
6. Presentation of the lectures
7. Choice of visual aids
8. Use of visual aids
9. Organization
10. Field-lecture balance
11. Condition of smoke generator
12. Calibration and operation of smoke generator
13. Testing procedures
14. Other comments]
Figure 5.8.13. Sample course evaluation form.
1. Any of the quality checks in figure 5.8.13 falls
below a grade 2;
2. The auditor notices any other serious irregularity
in the course proceedings.
The training school should be audited continuously until
either or both of the following situations are corrected:
1. Any of the quality checks is rated a grade 4;
2. The smoke generator does not meet the specifications
stated in appendix A.
3.0 FUNCTIONAL ANALYSIS OF TEST METHOD
Test Method 9, Visual Determination of the Opacity
of Emissions from Stationary Sources, is described in the
Federal Register, November 12, 1974, and is reproduced in
section 5.8.1 of this document. This method pertains to the
determination of the opacity of visible emissions by quali-
fied observers. It requires the proper training and certi-
fication of the observers and the use of defined procedures
in the field when making the determinations.
This method has been subjected to collaborative testing
(ref. 3); therefore, some quantitative information on the
precision and bias is available. In some areas of variable
evaluation, though, quantitative data are not available, and
the functional analysis is forced to be somewhat general. In
these cases, engineering judgments were used in estimating
variable limits. The subject of error analysis is discussed
in references 11 and 12.
3.1 VARIABLE EVALUATION AND ERROR RANGE DATA
The opacity of the visible emissions from a stationary
source where Method 9 is applicable is reported for enforcement
purposes in terms of the average opacity over a given period.
A set of at least 24 consecutive readings is generally
observed, with each reading taken at 15-second intervals.
From this set, the average opacity can be calculated with
the following equation:
$\bar{O} = \frac{1}{m} \sum_{j=1}^{m} O_j$    (1)
where:
Ō = calculated average opacity, percent opacity,
O_j = determined opacity for the jth reading, percent
opacity,
m = number of consecutive intervals required by the
law enforcement agency for computing average
opacity to determine a violation of the applica-
ble opacity standards.
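As a minimal illustration of equation 1 (the readings and
the function name below are hypothetical, not part of the
method):

```python
def average_opacity(readings):
    """Equation 1: mean of m consecutive opacity readings, in percent."""
    return sum(readings) / len(readings)

# Hypothetical run of 24 readings taken at 15-second intervals:
readings = [20, 25, 20, 15] * 6
print(average_opacity(readings))   # 20.0
```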
The plume at the time and point of the readings will
have a specific but unknown percent opacity (O'). The differ-
ence between Ō' (the average of the true opacity over the same
time interval) and Ō, as calculated in equation 1 above, is due
to a combination of errors in the observation process, some
of which are controllable, while others are not. A short
description of each source of error is given in the following
list.
1. Position of the observer with respect to the plume.
The reference method (section 5.8.1) states that the observer
should locate himself at a sufficient distance from the source
such that he has a clear view of the plume. It also
states that the observer should be perpendicular to the
plume direction in the case of a single stack. Error due to
a deviation from this ideal positioning would depend on the
exact position, the shape of the plume, and the wind speed
and direction at the time of the readings.
The reference method (section 5.8.1) also states that,
in the case of multiple stacks, the observer is to position
himself so that his line of sight is perpendicular to the
longer axis of the set of stacks. Also, his line of sight
should not include more than one plume at a time. A devia-
tion from either of these requirements could cause the data
to be biased.
Although error can result from an improper observer
position, his position can be considered a controllable
variable in the observational process. Proper positioning
can almost always be realized at some point in time with
appropriate weather conditions.
2. Position of the observer with respect to the sun.
The qualified observer should be positioned so that the sun
is oriented in the 140° sector to his back. Just as in the
source of error discussed above, the observer's position
with respect to the sun is a controllable variable, and the
errors due to this source can be minimized. Test data (ref.
1) have shown that the closer the sun is to being directly
behind the observer, the more accurate the observation
values will be.
3. Determination of the weather data. Good judgment
and proper documentation of the weather data play an impor-
tant role in interpreting opacity data at a later date.
Inaccurate weather information or the lack of weather infor-
mation can serve to discredit the data in a court of law.
The use of weather measurement instrumentation, weather
station data, or charts can greatly reduce errors in judg-
ment.
4. Corrective or colored lenses. The use of corrective
or colored lenses can be a major source of observational
error if they were not worn during the certification testing.
This source of error can be eliminated if the observer takes
the precaution of removing any sunglasses or unnecessary
lenses while performing the visual determinations.
5. Background against which the plume is viewed. The
plume is most visible and the observer will determine the
highest opacity value for a given plume when the background
is contrasting with the color of the plume. It is with the
contrasting background that the plume opacity can be deter-
mined with the greatest degree of accuracy. However, the
probability of positive error is also the greatest under
these conditions. As the background becomes less contrasting,
the apparent plume opacity diminishes and determinations tend
to assume a negative bias (which actually favors the plant
operator). The results of studies undertaken to determine
the magnitude of the positive errors are given in the
Federal Register, November 12, 1974.
6. Momentary observations. The observer should not
study the plume continuously, but instead should observe
the plume momentarily at 15-second intervals. More than a
momentary glance may not only cause the observer to lose
his concentration but may also cause eye fatigue.
7. Point of observation. Error can occur if the
observer exercises poor judgment in his determination of
the point of observation. The point of observation should
be the point in the plume closest to the stack where con-
densed water vapor is not present. The point should also
be where the plume exhibits the greatest opacity. Error, if
any, due to readings taken at a point where condensed water
vapor is present is usually positive. However, after compre-
hensive training, a certified observer can readily identify
the portion of the plume which contains condensed water vapor
and will avoid assigning an opacity to a plume which contains
any visible (condensed) water droplets. Thus the probability
of error due to the presence of condensed water vapor is
negligible (ref. 13) . Readings taken at points other than
the point of greatest opacity cause the average opacity
value to be less than the actual opacity. The probability
of this source of error, too, is a function of the quality
of training and the observer's experience, and can be consid-
ered almost nonexistent in observations made by certified
observers.
8. Experience of the observer. The term "experience"
as used here refers to length of time the observer has been
certified, which can dictate his personal biases. Collab-
orative tests have shown that the readings of the more
experienced observer are consistently more accurate than the
readings of the less experienced observer. However, the
less experienced observer's error is almost always negative,
which is in the favor of the emission source.
9. Nighttime observations. Visible emission monitoring
is difficult to apply at night. Any observer who must make
observations at night should receive special training to
calibrate his eye for night conditions. The background
under these conditions will generally be less contrasting;
hence the error will tend to be negative. See item 5
above.
10. Calibration of the observer's eye. The readings
are subject to error from inaccuracies in the calibration
of the observer's eye. Such an error would bias all obser-
vations made until the observer is tested and recertified.
This can be avoided with frequent auditing of the observer's
performance, stringent specifications on the smoke generator
used for certification, and frequent recertification.
11. Data processing. Errors can occur in the calcu-
lation of the average opacity over a given time interval.
These errors can be avoided by consistently rechecking the
data.
3.2 COMBINING ERROR TERMS
All of the error terms discussed thus far are indepen-
dent; at least there are no obvious reasons why they should
not be independent. Therefore, the total bias in the visual
determination of opacity is the algebraic sum of the biases
of the individual terms. The variance of the observational
data is the sum of the variances of the individual error
terms:
$\sigma_T^2 = \sigma_1^2 + \sigma_2^2 + \sigma_3^2 + \cdots + \sigma_n^2$
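As a quick worked example of this rule (the variance values
are hypothetical, chosen only to show the arithmetic):

```python
# Independent error terms combine by summing their variances.
variances = [0.50, 1.20, 0.80, 1.50]   # hypothetical sigma_i^2 values
sigma_T_sq = sum(variances)            # total variance: 4.0
sigma_T = sigma_T_sq ** 0.5            # total standard deviation: 2.0
print(sigma_T_sq, sigma_T)             # 4.0 2.0
```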
3.3 PRECISION ESTIMATES
The variability will be larger when the measurements to
be compared are performed by different observers than when
they are carried out by a single observer performing repli-
cates. Many different measures of variability are conceiv-
able according to the circumstances under which the measure-
ments are performed. Only two situations will be discussed
here. They are as follows:
1. Repeatability, r, is the value below which the
absolute difference between duplicate results,
i.e., two observations made on the same plume
by the same observer over a short interval of
time, may be expected to fall with a 95-percent
probability.
2. Reproducibility, R, is the value below which the
absolute difference between the observations
made on the same plume by different observers
may be expected to fall with a 95-percent prob-
ability.
The above definitions are based on a statistical model,
according to which each observation is the sum of three
components:

$O = \bar{O} + b + e$    (2)

where
O = the measured value, percent opacity,
Ō = the true average, percent opacity,
b = an error representing the differences between
observers, percent opacity,
e = a random error occurring in each observation,
percent opacity.
In general, b can be considered as the sum

$b = b_r + b_s$    (3)

where b_r is a random component and b_s a systematic component.
The term b is considered to be constant during any series of
observations performed under repeatability conditions, but to
behave as a random variate in a series of observations per-
formed under reproducibility conditions. Its variance will
be denoted as

$\operatorname{var} b = \sigma_b^2,$    (4)

the observer bias variance.
The term e represents a random error occurring in each
measurement. Its variance

$\operatorname{var} e = \sigma_r^2$    (5)

will be called the repeatability variance.
For the above model, the repeatability, r, and the
reproducibility, R, are given by

$r = 1.96 \sqrt{2}\, \sigma_r = 2.77\, \sigma_r$    (6)

and

$R = 2.77 \sqrt{\sigma_r^2 + \sigma_b^2} = 2.77\, \sigma_R$    (7)

where σ_R² will be referred to as the reproducibility variance.
Using the data available from a collaborative study (ref. 3),
the reproducibility standard deviation, σ_R, is taken to be
2.45 percent opacity. The repeatability standard deviation,
σ_r, is assumed to be 2.0 percent opacity. The repeatability
and reproducibility can be calculated with these values as
follows:

$r = (2.77)(2.0) = 5.54$ percent opacity    (8)
and
$R = (2.77)(2.45) = 6.79$ percent opacity.    (9)
Using the same data, the observer bias variance, σ_b², is
assumed to be 1.99. When compared with the value of the
within-observer or repeatability variance, σ_r² = 4.0, the
observer bias variance makes up only a small portion of the
composite between-observer or reproducibility variance,
σ_R² = σ_r² + σ_b². Hence the major sources of error are not
items 8, 9, and 10 above, but rather the other sources of
error which can be more readily controlled as discussed above.
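The arithmetic in equations (6) through (9) can be checked
directly; this sketch simply recomputes r, σ_R, and R from
the collaborative-test values quoted above:

```python
import math

sigma_r = 2.0     # repeatability standard deviation, percent opacity
var_b = 1.99      # observer bias variance (ref. 3)

r = 2.77 * sigma_r                         # eq. (6): 5.54 percent opacity
sigma_R = math.sqrt(sigma_r**2 + var_b)    # ~2.45 percent opacity
R = 2.77 * sigma_R                         # eq. (7): ~6.78 (6.79 with
                                           # sigma_R rounded to 2.45)
print(round(r, 2), round(sigma_R, 2), round(R, 2))   # 5.54 2.45 6.78
```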
REFERENCES
1. Philip R. Sticksel, Editor, Instructor's and Operator's
Manual for Evaluation of Visible Emissions for State and
Local Air Pollution Inspectors, EPA Contract No. CPA
70-175; Environmental Protection Agency, Air Pollution
Control Office, Institute of Air Pollution Training,
Research Triangle Park, North Carolina; 1971.
2. Robert Missen and Arnold Stein, Guidelines for Evaluation
of Visible Emissions, EPA Contract No. 68-02-1390, Task
Order No. 2; U.S. Environmental Protection Agency, Office
of Enforcement, Office of General Enforcement, Washington,
D.C.; April 1975.
3. Henry F. Hamil, Richard E. Thomas, and Nollie F. Swynnerton,
Evaluation and Collaborative Study of Method for Visual
Determination of Opacity of Emissions from Stationary
Sources, EPA Contract No. 68-02-0626; Environmental
Protection Agency, Research Triangle Park, North Carolina;
January 1975.
4. William D. Connor and J. Raymond Hodkinson, Optical
Properties and Visual Effects of Smoke-stack Plumes;
Office of Air Programs, Publication Number AP-30;
Environmental Protection Agency, Research Triangle Park,
North Carolina; Revised May 1972.
5. Pamela Giblin, "Opacity as a Readily Enforceable Standard,"
paper presented at 65th Annual Meeting of the Air Pollution
Control Association, Miami Beach, Florida; June 1972.
6. Norman E. Edmisten, Geoffrey Stevens, and Dennis P. Holzschuh,
"Effective Enforcement Through Opacity Provisions," paper
presented at 75th National Meeting of the American Institute
of Chemical Engineers, Detroit, Michigan; June 1973.
7. Melvin I. Weisburd, Field Operations and Enforcement
Manual for Air Pollution Control, Volume I: Organization
and Basic Procedures; EPA Contract Number CPA 70-122;
Environmental Protection Agency, Office of Air Programs,
Stationary Source Pollution Control Programs, Research
Triangle Park, North Carolina; August 1972.
8. A. H. Bowker and H. P. Goode, Sampling Inspection by
Variables, McGraw-Hill, New York; 1952.
9. A. Hald, Statistical Theory with Engineering Applications,
John Wiley and Sons, New York; 1952.
10. D. B. Owen, "Variables Sampling Plans Based on the Normal
Distribution," Technometrics 9, No. 3; August 1967.
11. Philip R. Bevington, Data Reduction and Error Analysis for
the Physical Sciences, McGraw-Hill, New York; 1969.
12. D. C. Baird, Experimentation: An Introduction to Measurement
Theory and Experiment Design, Prentice-Hall, New Jersey; 1962.
13. EPA Response to Remand Ordered by U.S. Court of Appeals
for the District of Columbia in Portland Cement Association
v. Ruckelshaus (486 F. 2d 375, June 29, 1973), Environmental
Protection Agency, Office of Air and Waste Management,
Office of Air Quality Planning and Standards, Research
Triangle Park, North Carolina; November 1974.
APPENDIX A
GLOSSARY OF SYMBOLS
This is a glossary of the symbols used in this document.
Symbols used and defined in the reference method (section
5.8.1) are not repeated here.
m      Number of readings in a given run.
O_j    Opacity value measured by the observer when the jth
       reading is taken.
O_j    Opacity value recorded by the transmissometer recorder
       at the jth reading during a certification test.
Ō_aj   Average opacity value calculated from the auditor's
       data for the jth run.
d_j    Difference between the audit average opacity value and
       the value determined by the observer for the jth run.
S_d    Summation of the d_j values for the three runs per
       audit.
d̄_i    Average of the differences d_j for the ith audit.
N      Lot size, i.e., the number of field observations to
       be treated as a group.
n      Sample size for the auditing period.
S_d̄    Summation of the d̄_i values for the n audits under
       assessment.
d̄      Bias of the field observations for a given lot or
       auditing period.
S      Intermediate summation used in the calculation of
       s_d̄.
s_d̄    Estimated standard deviation of the average of the
       differences between O_j and O_aj.
σ_b    Between-observer standard deviation computed from
       collaborative test data.
σ_d    Standard deviation of the differences computed from
       collaborative test data.
L      Lower quality limit used in sampling by variables.
U      Upper quality limit used in sampling by variables.
L_c    Lower quality limit value calculated from audit data.
U_c    Upper quality limit value calculated from audit data.
k      Constant used in sampling by variables.
p      Percent of differences outside of specified limits
       L and U.
LCL    Lower control limit of a quality control chart.
UCL    Upper control limit of a quality control chart.
σ_T²   Total variance of the observational data.
r      Repeatability.
R      Reproducibility.
b      Error representing the differences between observers.
e      Random error occurring in each observation.
b_r    Random component of b.
b_s    Systematic component of b.
σ_b²   Observer bias variance, also denoted by var b, cal-
       culated from collaborative test data.
σ_r²   Repeatability variance, also denoted by var e, cal-
       culated from collaborative test data.
σ_R    Reproducibility standard deviation calculated from
       collaborative test data.
σ_r    Repeatability standard deviation calculated from
       collaborative test data.
σ_R²   Reproducibility variance calculated from collaborative
       test data.
APPENDIX B
GLOSSARY OF TERMS
The following glossary lists and defines the technical and
statistical terms as used in this document.
Bias             The systematic or nonrandom component of
                 system error.
Duplicate        The results from two observations made on
results          the same plume by the same observer.
Lot              A specified number of objects to be treated
                 as a group.
Observation      A run or series of runs of visual determina-
                 tions made at a given source during a single
                 visit.
Quality          A management tool for independently assessing
audit            data quality.
Quality          Checks made by training school personnel or
control          observers on certain items of equipment and
check            procedures to assure data of good quality.
Precision        A measure of mutual agreement among individual
                 measurements of the same opacity under pre-
                 scribed similar conditions, expressed in
                 terms of the standard deviation.
Reading          A single instantaneous glance at the plume
                 for the purpose of making a determination
                 of plume opacity.
Repeatability    The value below which the absolute difference
                 between duplicate results may be expected to
                 fall with a 95-percent probability.
Reproducibility  The value below which the absolute difference
                 between the observations made on the same
                 plume by different observers may be expected
                 to fall with a 95-percent probability.
Run              A series of consecutive readings from which an
                 average opacity can be determined.
TECHNICAL REPORT DATA

Report No.: EPA-650/4-74-0051
Title: Guidelines for Development of a Quality Assurance
Program: Visual Determination of Opacity Emission from
Stationary Sources
Authors: P. Wohlschlegel, D. E. Wagoner
Report Date: November 1975
Performing Organization: Research Triangle Institute,
P.O. Box 12194, Research Triangle Park, North Carolina 27709
Contract No.: 62-02-1234
Sponsoring Agency: Office of Research and Development,
U.S. Environmental Protection Agency, Washington, D.C. 20460

Abstract: Guidelines for the quality control of opacity
determination by the Federal reference method are presented.
These include:
1. Good operating practices.
2. Directions on how to assess performance and to qualify data.
3. Directions on how to identify trouble and to improve data quality.
4. Directions to permit design of auditing activities.
The document is not a research report. It is designed for use
by operating personnel.

Key Words: Quality assurance; Quality control; Air pollution;
Stack gases
Distribution: Unlimited
Security Class: Unclassified