EPA-R4-73-028a
June 1973
Environmental Monitoring Series
EPA-R4-73-028a
GUIDELINES FOR DEVELOPMENT
OF A QUALITY
ASSURANCE PROGRAM
Reference Method for the Continuous Measurement
of Carbon Monoxide in the Atmosphere
by
Franklin Smith and A. Carl Nelson, Jr.
Research Triangle Institute
Research Triangle Park, North Carolina 27709
Contract No. 68-02-0598
Program Element No. 1H1327
EPA Project Officer: Dr. Joseph F. Walling
Quality Assurance and Environmental Monitoring Laboratory
National Environmental Research Center
Research Triangle Park, North Carolina 27711
Prepared for
OFFICE OF RESEARCH AND MONITORING
U.S. ENVIRONMENTAL PROTECTION AGENCY
WASHINGTON, D.C. 20460
June 1973
This report has been reviewed by the Environmental Protection Agency
and approved for publication. Approval does not signify that the
contents necessarily reflect the views and policies of the Agency,
nor does mention of trade names or commercial products constitute
endorsement or recommendation for use.
PREFACE
Quality control is an integral part of any viable
environmental monitoring activity. The primary goals of
EPA's quality control program are to improve and document
the credibility of environmental measurements. To
achieve these goals, quality control is needed in nearly
all segments of monitoring activities and should cover
personnel, methods selection, equipment, and data
handling procedures. The quality control program will
consist of four major activities:
• Development and issuance of procedures
• Intra-laboratory quality control
• Inter-laboratory quality control
• Monitoring program evaluation and
certification
All these activities are essential to a successful quality
control program and will be planned and carried out
simultaneously.
Accordingly, this first manual of a series of five has
been prepared for the quality control of ambient air
measurements. These guidelines for the quality control
of ambient carbon monoxide measurements have been produced
under the direction of the Quality Control Branch of the
Quality Assurance and Environmental Monitoring Laboratory
of NERC-RTP. The purpose of this document is to provide
uniform guidance to all EPA monitoring activities in the
collection, analysis, interpretation, presentation, and
validation of quantitative data. In accordance with
administrative directives to implement an Agency-wide
quality control program, all EPA monitoring activities
are requested to use these guidelines to establish intra-
laboratory quality assurance programs in the conduct of
all ambient air measurements of carbon monoxide. Your
comments on the utility of these guidelines, along with
documented requests for revision(s), are welcomed.
All questions concerning the use of this manual and
other matters related to quality control of air pollution
measurements should be directed to:
Mr. Seymour Hochheiser, Chief
Quality Control Branch
Quality Assurance and Environmental
Monitoring Laboratory
National Environmental Research Center
Research Triangle Park, North Carolina 27711
Information on the quality control of other
environmental media and categorical measurements can be
obtained by contacting the following person(s):
Water
Mr. Dwight Ballinger, Director
Analytical Quality Control Laboratory
National Environmental Research Center
Cincinnati, Ohio 45268
Pesticides
Dr. Henry Enos, Chief
Chemistry Branch
Primate and Pesticide Effects Laboratory
Environmental Protection Agency
Perrine, Florida 33157
Radiation
Mr. Arthur Jarvis, Chief
Office of Quality Assurance-Radiation
National Environmental Research Center
Las Vegas, Nevada 89114
During the months ahead, a series of manuals will
be issued which describe guidelines to be followed during
the course of sampling, analysis, and data handling. The
use of these prescribed guidelines will provide a uniform
approach in the various monitoring programs which allows
the evaluation of the validity of data produced. The
implementation of a total and meaningful quality control
program cannot succeed without the full support of all
monitoring programs. Your cooperation is appreciated.
TABLE OF CONTENTS
Section Page
1.0 INTRODUCTION 1
PART I. OPERATIONS MANUAL
2.0 GENERAL 3
2.1 Operating Procedures 4
ANALYZER CALIBRATION 8
SAMPLING 20
OPERATIONAL CHECKS 21
DATA PROCESSING 27
2.2 Special Checks for Auditing Purposes 35
A. Measuring Control Samples 35
B. Water Vapor Interference Check 36
C. Data Processing Check 38
2.3 Special Checks to Detect and/or Identify Trouble 39
A. Zero Drift Check 39
B. Flow Rate Variation Sensitivity Check 41
C. Temperature Variation Sensitivity Check 41
D. Voltage Variation Sensitivity Test 42
2.4 Calibration of Sample Flow and Sample Cell
Pressure Indicators 45
A. Flow Rate Calibration 45
B. Sample Cell Pressure Gauge Calibration 45
2.5 Facility and Apparatus Requirements 48
A. Facility 48
B. Apparatus 48
TABLE OF CONTENTS (CONT'D)
Section
PART II. SUPERVISION MANUAL
3.0 GENERAL 50
3.1 Assessment of NDIR Data 52
A. Required Information 52
B. Collection of Required Information 52
C. Treatment of Collected Information 55
3.2 Suggested Standards for Judging Performance 57
3.3 Collection of Information to Detect and/or
Identify Trouble 57
A. Identification of Important Variables 59
B. How to Monitor Important Variables 62
C. Suggested Control Limits 63
3.4 Procedures for Improving Data Quality 66
3.5 Procedures for Changing the Auditing Level to Give
the Desired Level of Confidence in the Reported Data 70
A. Decision Rule - Accept the Lot as Good If No
Defects Are Found 71
B. Decision Rule - Accept the Lot as Good If No More
Than One (1) Defect is Found 71
3.6 Monitoring Strategies and Cost 72
A. Reference Method 72
B. Reference Method with Sample Diffusion Chamber 73
C. Reference Method Plus Sample Diffusion Chamber
and Shelter Temperature Control Unit 73
TABLE OF CONTENTS (CONCL'D)
Section Page
PART III. MANAGEMENT MANUAL
4.0 GENERAL 75
4.1 Data Quality Assessment 76
A. Assessment of Data Quality 78
B. Assessment of Individual Measurements 80
4.2 Auditing Schemes 80
A. Statistics of Various Auditing Schemes 83
B. Selecting the Auditing Level 88
C. Cost Relationships 91
D. Cost Vs. Audit Level 94
4.3 Data Quality Versus Cost of Implementing Actions 96
4.4 Data Presentation 102
4.5 Personnel Requirements 104
A. Training and Experience 104
4.6 Operator Proficiency Evaluation Procedures 105
REFERENCES 107
APPENDIX REFERENCE METHOD FOR THE CONTINUOUS MEASUREMENT
OF CARBON MONOXIDE IN THE ATMOSPHERE
(NON-DISPERSIVE INFRARED SPECTROMETRY) 108
LIST OF FIGURES
Figure Page
1 Operational Flow Chart of the Measuring Process 5-6
2 Carbon Monoxide Monitoring System Flow Chart 7
3 Sample Calibration Curve 15
4 Table for Converting Trace Deflection in Percent of
Chart to Concentration in PPM 16
5 Sample Daily Check Sheet 18
6 Sample Form for Reporting Results of Quality Control Checks 23
7 A Sample Graph of the Mean (c̄) and 3σ Limits of Hourly CO
Concentrations for a 24-Hour Period 28
8 Sample Sheet for Recording Hourly Averages 29
9 Sample Trace of 24-Hour Sampling Period with Zero and
Span Calibrations 31
10 SAROAD Hourly Data Form 33
11 Calibration Set-Up for Pressure Gauges 47
12 Data Qualification Form 56
13 Critical Values of Ratio s/σ Vs. n 82
14 Data Flow Diagram for Auditing Scheme 84
15A Probability of d Defectives in the Sample If the
Lot (N=100) Contains D% Defectives 86
15B Probability of d Defectives in the Sample If the
Lot (N=50) Contains D% Defectives 87
16A Percentage of Good Measurements Vs. Sample Size
for No Defectives and Indicated Confidence Level 89
16B Percentage of Good Measurements Vs. Sample Size
for 1 Defective Observed and Indicated Confidence Level 90
17 Average Cost Vs. Audit Level 97
18 Costs Vs. Precision for Alternative Strategies 101
19 Sample QC Chart for Evaluating Operator Proficiency 106
LIST OF TABLES
Table Page
1 Analyzer Evaluation Data 44
2 Apparatus Used in the NDIR Method 49
3 Suggested Performance Standards 58
4 Methods of Monitoring Variables 62
5 Suggested Control Limits for Parameters and/or Variables 64
6 Quality Control Procedures or Actions 67-69
7 Critical Values of s/σ 81
8 Required Auditing Levels n for Lot Size N=100
Assuming Zero Defectives 88
9 Costs vs. Data Quality 91
10A Costs If 0 Defectives are Observed and the Lot is Rejected 92
10B Costs If 0 Defectives are Observed and the Lot is Accepted 92
11 Costs in Dollars 93
12 Overall Average Costs for One Acceptance-Rejection Scheme 95
13 Assumed Standard Deviations for Alternative Strategies 100
ABSTRACT
Guidelines for the quality control of ambient CO measurements by the Federal
reference method are presented. These include:
1. Good operating practices
2. Directions on how to assess data and qualify data
3. Directions on how to identify trouble and improve data quality
4. Directions to permit design of auditing activities
5. Procedures which can be used to select action options and
relate them to costs
The document is not a research report. It is designed for use by
operating personnel.
This work was submitted in partial fulfillment of Contract
68-02-0598 by Research Triangle Institute under the sponsorship of
the Environmental Protection Agency. Work was completed as of May 1973.
1.0 INTRODUCTION
This document presents guidelines for implementing a quality
assurance program for the continuous measurement of carbon monoxide in
the atmosphere using non-dispersive infrared (NDIR) spectrometry.
The objectives of this quality assurance program for the NDIR method
of measuring atmospheric carbon monoxide are to:
1) provide routine indication, for operating purposes,
of unsatisfactory performance of personnel and/or
equipment,
2) provide for prompt detection and correction of
conditions which contribute to the collection of
poor quality data, and
3) collect and supply information necessary to describe
the quality of the data.
To accomplish the above objectives, a quality assurance program must
contain the following components:
1) routine training and evaluation of operators,
2) routine monitoring of the variables and/or
parameters which may have a significant effect on
data quality,
3) development, through auditing procedures, of statements
and evidence to qualify data and detect defects, and
4) action strategies to increase the level of precision
in the reported data and/or to detect instrument
defects or degradation and to correct same.
Implementation of a quality assurance program will result in data
that are more uniform in terms of precision and accuracy. It will enable
each monitoring network to continuously generate data that approach the
highest level of accuracy attainable with the NDIR method.
This document is divided into three parts. They are:
Part I, Operations Manual - The operations manual sets forth
recommended operating procedures, instructions for performing control
checks designed to give an indication or warning that invalid or poor
quality data are being collected, and instructions for performing certain
special checks for auditing purposes.
Part II, Supervision Manual - The Supervision Manual contains
brief directions for 1) the assessment of NDIR data, 2) collection of
information to detect and/or identify trouble, 3) applying quality control
procedures to improve data quality, and 4) varying the auditing or
checking level to achieve a desired level of confidence in the validity
of the outgoing data. Also, example monitoring strategies and costs as
discussed in Part III are summarized in this manual.
Part III, Management Manual - The Management Manual presents
procedures designed to assist in 1) detecting when data quality is
inadequate, 2) assessing overall data quality, 3) determining the extent
of independent auditing to be performed, 4) relating costs of data
quality assurance procedures to a measure of data quality, and 5) selecting
from the options available the alternative(s) which will enable one to meet
the data quality goals by the most cost-effective means. Also, discussions
on data presentation and personnel requirements are included in this
manual.
The scope of this document has been purposely limited to that of a
field document. Additional background information is contained in the
final report under this contract.
PART I. OPERATIONS MANUAL
2.0 GENERAL
This operations manual sets forth recommended operating procedures
for the continuous measurement of carbon monoxide in the atmosphere
using non-dispersive infrared (NDIR) spectrometry. Quality control
procedures and checks designed to give an indication or warning that
invalid or poor quality data are being collected are written as part of
the operating procedures, and are to be performed by the operator on a
routine basis. In addition, the performance of special quality control
procedures and/or checks as prescribed by the supervisor may be required
of the operator on certain occasions.
The accuracy and/or validity of data obtained from this method
depends upon instrument performance and the proficiency with which the
operator performs his various tasks. Deviations from the recommended
operational procedure may result in the collection of invalid data or at
least reduce the quality of the data. The operator should make himself
familiar with the manufacturer's operational instructions and with the
rules and regulations concerning the NDIR method as written in the
Federal Register, Vol. 36, No. 84, Part II, April 30, 1971 (see Appendix
of this document).
For illustration purposes, directions throughout this document are
written in terms of a 24-hour sampling period (i.e., 24 hours between
zero and span calibrations), and an auditing or checking level of 7 checks
out of a lot size of 100 sampling periods. Sampling period durations and
auditing levels are subject to change by the supervisor and/or manager.
Such change would not alter the basic directions for performing the
operation. Also, certain control limits as given in this manual represent
best estimates for use in the beginning of a quality assurance program and
are, therefore, subject to change as field data are collected.
It is assumed that an analyzer which meets reference method specifi-
cations has been set up and checked out according to the manufacturer's
directions by an experienced technician.
2.1 Operating Procedures
The sequence of operations to be performed during each sampling
period is given in Figure 1. Each operation or step in the process is
identified by a block. Quality checkpoints in the measurement process,
for which appropriate quality control limits are assigned, are represented
by blocks enclosed by heavy lines. Other checkpoints involve go/no-go
checks and/or subjective judgments by the operator with proper guidelines
for decision making spelled out in the procedures. These operations and
checks are presented step by step in the pages that follow.
[Flow chart rendered here as a numbered list; in the original figure each
step occupies a block, and quality checkpoints with assigned control limits
are shown as blocks with heavy borders.]

ANALYZER CALIBRATION
1. Verify concentration of calibration gases when first purchased and any
   time desired performance standards cannot be met. Check cylinder
   pressure daily.
2. Perform multipoint or zero and span calibrations as scheduled.
3. Record settings of zero and span controls after each calibration.

SAMPLING
4. Prepare analyzer for sampling.
5. Check and adjust sample flow and sample cell pressure to specified
   values for sampling.
6. Visually check recording system for proper operation.
7. Sampling period (period between successive zero and span calibrations).
   No adjustments are made on analyzer or recorder controls during the
   sampling period.

OPERATIONAL CHECKS
8. After each sampling period check and compare control settings with
   settings from Step 3.
9. Read and compare sample flow rate with value from Step 5.
10. Read and compare sample cell pressure to the value from Step 5.
11. Check shelter temperature control for proper operation. Check maximum
    temperature variation from the set value.
12. Visually check the water vapor control unit for proper operation daily.
13. Replace filter monthly or at any sign of filter plugging or
    particulate buildup.
14. Visually check sample introduction system daily for breakage, leaks,
    and particulate deposits.
15. Visually check recording system for proper operation over the past
    sampling period.
16. Visually check the recorded data after each sampling period for signs
    of equipment malfunction and unusual pollutant levels or patterns.

DATA PROCESSING
17. Remove recorded data from recorder and edit in preparation for data
    reduction.
18. Convert instrument response to concentration in ppm as hourly averages.
19. Complete SAROAD form for hourly averages and document results of any
    quality control checks. Forward to supervisor.

Figure 1: Operational Flow Chart of the Measuring Process
[Block diagram omitted: ambient air enters the sample introduction system
and passes to the analyzer system, which also receives zero gas and span
gas inputs; the analyzer output goes to the data recording and display
system.]

Figure 2: Carbon Monoxide Monitoring System Flow Chart
ANALYZER CALIBRATION
Step 1. Calibration Gas Check
A multipoint calibration requires calibration gases with
concentrations corresponding to approximately 10, 20, 40, and 80 percent
of full scale and a zero gas containing less than 0.1 mg CO/m³. It is
further recommended that calibration gases certified to be within
± 2 percent of the stated value be purchased in high pressure cylinders
with inside surfaces of a chromium-molybdenum alloy of low iron content
or other appropriate linings. Store the cylinders in areas not subject
to extreme temperature changes (e.g., do not expose to direct sunlight).
It is recommended that CO in synthetic air be used for all calibration gases.
It is recommended that at least three (3) control gas samples,
assayed and certified to be within ± 1 percent of the stated level of CO,
be obtained for use in the auditing process for assessing data quality.
(Sections 2.2, 3.1, and 4.1 discuss the auditing process.) The CO concen-
tration of the control samples should be distributed to cover the
range of about 5 to 40 ppm.* These auditing gases (control samples)
could be purchased in size 3 cylinders to allow for portability and to
insure that the sample is exhausted before the CO concentration has
changed significantly due to deterioration with time.
A. Concentration Verification
When a quality assurance program is first started, the concentration
of all calibration gases on hand should be verified. Two verification
procedures are discussed herein. The first method represents the minimum
action necessary to verify concentration values. It is possible that the
certified concentration values of calibration gases and auditing gases
obtained from the same supplier may be equally in error and, consequently,
be accepted as good by this method. A second and somewhat more thorough
procedure which eliminates this possibility is given using gases from
different sources.
The factor for converting CO from volume (ppm) to mass (mg/m³) units is:
1 ppm = 1.145 mg/m³ at 25°C and 760 mmHg.
Once the calibration gases have been initially verified, new gases
can be verified at the time of purchase in a routine fashion by measuring
against the old gases.
1. Method I
a) Set up and check out the analyzer.
b) Calibrate the analyzer with the auditing gases
(see Step 2A, page 11 for calibration procedures).
c) Construct a calibration curve from at least four (4)
points, i.e., zero, span, and two (2) upscale points
(see Step 2A, Procedure 24 on page 14 for guidance in
constructing a calibration curve).
d) Check the calibration curve and if any of the measured
points deviate from the smooth curve by more than
± (1.0 + 0.01 Ca)* ppm, have that cylinder of auditing
gas reanalyzed. In some cases a subjective decision
will have to be made by the supervisor as to whether it
is the span gas or one of the upscale gases that is in
error. If all measured points are within the above
limits, use the best fit curve as the correct
calibration curve.
e) Measure the calibration gases.
f) Have all calibration gases whose measured value differs
from its certified value by more than ± (1.0 + 0.02 Cc)** ppm
reanalyzed until an acceptable set of calibration gases is
obtained.
g) Obtain and verify new calibration gases before old ones
are exhausted by calibrating the analyzer with the old
calibration gas. Accept the new gas as good if the
measured and certified values are within
± (1.0 + 0.02 Cc) ppm of each other; reject the gas
otherwise.
*
1.0 ppm is based on the 3σ value for repeatability from a collaborative
test (Ref. 1), 0.01 is the stated accuracy of the auditing gas, and Ca
is the certified concentration of the auditing gas.
**
0.02 is the stated accuracy of the calibration gas, and Cc is the certified
concentration of the calibration gas.
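For illustration, the acceptance tests in steps d), f), and g) above reduce
to a single comparison, sketched here in Python (the helper name is
illustrative, not part of the reference method):

    def within_tolerance(measured_ppm, certified_ppm, accuracy):
        # True if the measured and certified concentrations agree within
        # +/- (1.0 + accuracy * certified) ppm.
        limit = 1.0 + accuracy * certified_ppm
        return abs(measured_ppm - certified_ppm) <= limit

    # Auditing gases are certified to +/- 1 percent, calibration gases
    # to +/- 2 percent, so the accuracy term differs:
    print(within_tolerance(39.2, 40.0, accuracy=0.01))  # auditing gas: True
    print(within_tolerance(42.5, 40.0, accuracy=0.02))  # calibration: False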
2. Method II
a) Set up and check out the analyzer.
b) Have two (2) sets of calibration gases (± 2%) from
different suppliers available.
c) Have one set of auditing gases (± 1%) from another
supplier if possible; or when available, "standard
gases" of EPA's recommendation.
d) Calibrate the analyzer with the auditing or standard
gases (see Step 2A, page 11 for calibration procedure).
e) Construct a calibration curve from at least four (4)
points, i.e., zero, span and two (2) upscale points
(see Step 2A, Procedure 24 on page 14 for guidance
in constructing a calibration curve).
f) Check the calibration curve and if either one of the
two upscale points deviates more than ± (1.0 + 0.01 Ca) ppm
from the smooth calibration curve, have that cylinder
of auditing gas reanalyzed. In some cases a subjective
decision will have to be made by the supervisor as to
whether it is the span gas or one of the two upscale
gases that is in error. If both upscale points are
within the above limits, use the best fit curve as the
correct calibration curve.
g) Measure both sets of calibration gases.
h) If both sets of calibration gases disagree (i.e.,
measured and certified values differ by more than
± (1.0 + 0.02 Cc) ppm for cylinders of each set), have
all calibration and auditing gases reanalyzed.
i) If one set of calibration gases agrees (i.e., measured
and certified values agree within ± (1.0 + 0.02 Cc) ppm
for each cylinder), accept that set as good and have the
other set reanalyzed. If both sets agree, accept both
sets as good.
j) Obtain and verify new calibration gases before the
old gases are exhausted by calibrating the analyzer
with the old gases and measuring the new calibration
gas. Accept the new gas as good if the measured and
certified values are within ± (1.0 + 0.02 Cc) ppm of
each other; reject the gas otherwise.
B. Cylinder Pressure Check
Before each calibration, check the cylinder pressure of each
calibration gas to be used. Order replacement for any cylinder with less
than 2.1 × 10⁶ Nm⁻² (300 psi) pressure.
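As a quick arithmetic check of this threshold (an illustrative aside,
assuming the usual conversion factor of roughly 6.895 × 10³ Nm⁻² per psi):

    PSI_TO_NM2 = 6.895e3           # 1 psi is about 6.895e3 N/m^2
    print(300 * PSI_TO_NM2)        # about 2.07e6, i.e., 2.1 x 10^6 Nm^-2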
Step 2A. Multipoint Calibration
A. Frequency of Calibration
A multipoint calibration is required when:
1) the analyzer is first purchased,
2) the analyzer has had maintenance which could
affect its response characteristics, or
3) when results from the auditing process show that
the desired performance standards are not being met
(see A of Section 2.2).
B. Calibration Procedures
Follow the manufacturer's detailed instructions when calibrating a
specific analyzer. General procedures are:
1) Turn the power on and let the analyzer warm up by
sampling ambient air. This usually requires several
hours (as many as 24 to 48 hours) depending on the
individual analyzer.
2) Connect zero gas to the analyzer.
3) Open the gas cylinder pressure valve (see Figure 2,
page 7). Adjust the secondary pressure valve until
the secondary pressure gauge reads approximately
3.4 × 10⁴ Nm⁻² (5 psi) more than the desired sample
cell pressure. Caution: Do not exceed the pressure
limit of the sample cell.
4) Set the sample flow rate as read by the rotameter
(read the widest part of the float) to the value that
is to be used during sampling.
5) Let the zero gas flow long enough to establish a stable
trace. Allow at least 5 minutes for the analyzer to
stabilize.
6) Adjust the zero control knob until the trace corresponds
to the line representing 5 percent of the strip chart
width above the chart zero or baseline. The above is
to allow for possible negative zero drift. If the strip
chart already has an elevated baseline, use it as the zero
setting.
7) Let the zero gas flow long enough to establish a stable
trace. Allow at least 5 minutes for this. Mark the
strip chart trace as adjusted zero.
8) Disconnect the zero gas.
9) Connect the span gas with a concentration corresponding
to approximately 80 percent full scale.
10) Open the gas cylinder pressure valve (see Figure 2,
page 7). Adjust the secondary pressure valve until the
secondary pressure gauge reads approximately
3.4 × 10⁴ Nm⁻² (5 psi) more than the desired sample cell
pressure.
11) Set the sample flow rate, as read by the rotameter, to
the value that is to be used during sampling.
12) Let the span gas flow until the analyzer stabilizes.
13) Adjust the span control until the deflection corresponds
to the correct percentage of chart as computed by
(Cs (ppm) / Cf (ppm)) × 100 + 5 (% zero offset) = correct percentage
of chart
where
Cs = concentration of span gas in ppm,
and
Cf = full scale reading of analyzer in ppm.
As an example see Figure 3, page 15, where the % zero
offset is 5 and the correct percentage of chart for
the span gas of 40 ppm would be
(40 ppm / 50 ppm) × 100 + 5 = 85.
14) Allow the span gas to flow until a stable trace is
observed. Allow at least 5 minutes. Mark the strip
chart trace as adjusted span and give concentration
of span gas in ppm.
15) Disconnect the span gas.
16) Repeat Procedures 2 through 8 and
a) if no readjustment is required, go to Procedure 17;
b) if a readjustment greater than 1 ppm is required,
repeat Procedures 9 through 16.
17) Lock the zero and span controls.
18) Connect the calibration gas with a concentration
corresponding to approximately 10 percent full
scale to the analyzer.
19) Open the gas cylinder pressure valve (see Figure 2,
page 7). Adjust the secondary pressure valve until
the secondary pressure gauge reads approximately
3.4 × 10⁴ Nm⁻² (5 psi) more than the desired sample
cell pressure.
20) Set the sample flow rate to the value used during
sampling.
21) Let the calibration gas flow until the strip chart
trace stabilizes. Note: No adjustments are made at
this point.
22) Disconnect the calibration gas.
23) Repeat Procedures 18 through 22 for each of the
calibration gases with concentrations corresponding
to approximately 20 and 40 percent of full scale in
that order.
24) Fill in the information required on a calibration sheet
and construct a calibration curve of deflection as
percent of chart versus concentration in ppm as illus-
trated in Figure 3. Draw a best fit, smooth curve
passing through the zero and span points and minimizing
the deviation of the three remaining upscale points from
the curve. The calibration curve should have no inflec-
tion points, i.e., it should either be a straight line
or bowed in one direction only. Curve fitting techniques
may be used in constructing the calibration curve by
applying appropriate constraints to force the curve
through the zero and span points. This procedure becomes
quite involved, however, and the most frequently used
technique is to fit the curve by eye (a sketch of one
constrained fit follows Procedure 27).
25) Recheck any calibration point deviating more than
± (1.0 + 0.02 Cc) ppm from the smooth calibration curve.
If the recheck gives the same results, have that cali-
bration gas reanalyzed. Use the best fit curve as the
calibration curve.
26) Fill in the calibration conversion sheet (see Figure 4,
page 16) from the calibration curve.
27) In certain situations the supervisor may request that
the calibration be repeated (replicated). In this case
obtain both sets of data and follow his instructions for
preparing a calibration curve.
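Procedure 24 notes that the curve may be fitted numerically instead of by
eye, provided it is forced through the zero and span points. The following
minimal sketch (Python with hypothetical readings; an illustration, not a
prescribed algorithm) fits a quadratic deflection-versus-concentration
curve with those two constraints imposed exactly and the upscale points
fitted by least squares:

    import numpy as np

    def constrained_quadratic(conc, defl, span_conc,
                              zero_defl=5.0, span_defl=85.0):
        # Fit deflection = a*c**2 + b*c + zero_defl, forced exactly
        # through the zero point (0, zero_defl) and the span point
        # (span_conc, span_defl); returns (a, b).
        c = np.asarray(conc, dtype=float)
        d = np.asarray(defl, dtype=float) - zero_defl
        k = (span_defl - zero_defl) / span_conc
        # The span constraint ties b to a (b = k - a*span_conc),
        # leaving one free parameter, solved in closed form:
        x = c**2 - c * span_conc
        r = d - k * c
        a = float(np.sum(x * r) / np.sum(x * x))
        b = k - a * span_conc
        return a, b

    # Hypothetical upscale readings (10, 20, 40 percent of a 50 ppm scale):
    a, b = constrained_quadratic([5.0, 10.0, 20.0], [14.8, 24.9, 44.6],
                                 span_conc=40.0)
    print(a, b)   # deflection(c) = a*c**2 + b*c + 5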
Step 2B. Zero and Span Calibration
A. Frequency of Zero and Span Calibration
A zero and span calibration is performed before and after each sampling
period (taken as every 24 hours here) or as directed by the supervisor.
Location Date Operator
Analyzer No. Range Flow Rate Cell Pressure
Zero Gas Cylinder Pressure Cylinder No.
Upscale gas (80%) Cylinder Pressure Cylinder No.
(10%) Cylinder Pressure Cylinder No.
(20%) Cylinder Pressure Cylinder No.
(40%) Cylinder Pressure Cylinder No.
Zero Control Setting Span Control Setting
Recorder Type    Serial No.

[Grid for plotting the calibration curve omitted.]
Figure 3: Sample Calibration Curve
Analyzer No.
Date of Calibration
[Blank conversion table omitted: five paired "% Chart | PPM" columns
listing trace deflection from 0.0 to 100.0 percent of chart in 0.5
percent increments, with an empty PPM entry beside each value to be
filled in from the calibration curve.]
Figure 4: Table for Converting Trace Deflection in Percent of Chart to
Concentration in PPM
B. Zero and Span Calibration Procedures
1) Connect the span gas with a concentration value
corresponding to 80 percent of full scale, or other
values as directed by the supervisor, to the analyzer.
2) Open the gas cylinder pressure valve and adjust the
secondary pressure valve (see Figure 2, page 7) until
the secondary pressure gauge reads approximately
3.4 × 10⁴ Nm⁻² (5 psi) more than the desired sample
cell pressure.
3) Set the sample flow rate as read by the rotameter (read
the widest part of the float) to the value to be used
when sampling.
4) Let the span gas flow long enough to establish a stable
trace on the strip chart recorder; allow at least 5 min-
utes. Mark the chart trace as an unadjusted span.
Record unadjusted span reading in ppm on form in Figure 5,
page 18, under column entitled "Unadjusted Calibration."
5) Disconnect the span gas.
6) Connect zero gas to the analyzer.
7) Open the gas cylinder pressure valve and adjust the
secondary pressure valve until the secondary pressure
gauge reads approximately 3.4 × 10⁴ Nm⁻² (5 psi) more than
the desired sample cell pressure.
8) Set the sample flow rate as read by the rotameter to
the value that is used when sampling.
9) Let the zero gas flow long enough to establish a stable
zero trace on the strip chart recorder; allow at least
5 minutes. Mark the chart trace as an unadjusted zero.
Record the unadjusted zero reading in ppm on form in
Figure 5, page 18 under column titled "Unadjusted Cali-
bration." A supervisor could use this data to compute a
mean and standard deviation in ppm for zero drift using
unadjusted calibration data from at least 25 sampling
periods (see Section 4.1 for computing standard deviations;
a sketch of the drift summary follows Procedure 15).
CO ANALYZER DAILY CHECK SHEET

Station Name ___   Analyzer Number ___   Location ___

Columns: Date | Operator | Sample Flow Rate (ℓ/min): Initial, Final |
Sample Cell Pressure (Nm⁻²): Initial, Final | Cylinder Pressure (Nm⁻²):
Zero, Span | New Control Knob Setting: Zero, Span | Unadjusted
Calibration: Zero, Span
Figure 5: Sample Daily Check Sheet
If the unadjusted zero trace is more than ± 3σ ppm
from the true zero value, check the temperature control
for the analyzer and/or shelter for proper operation and
other likely causes. Report the situation to the super-
visor. Continue with the calibration.
10) Adjust the zero control knob until the trace corresponds
to the true zero setting. Let the zero gas flow until a
stable trace is obtained. Mark the chart trace as an
adjusted zero.
11) Disconnect the zero gas.
12) Reconnect the span gas and let flow until analyzer has
stabilized; then adjust the span control until the
deflection on the strip chart corresponds to the span gas
concentration in ppm using the calibration conversion
table as illustrated in Figure 4, page 16. Let the strip
chart trace stabilize. Mark the chart trace as an
adjusted span with the span gas concentration in ppm. A
supervisor could compute a mean and standard deviation in
ppm for span drift using data from at least 25 sampling
periods (obtain span drift in ppm by subtracting the true
span gas concentration from the unadjusted span reading as
recorded in the last column of the form in Figure 5). If
the required adjustment is more than ± 3σ ppm, try to
determine the cause and report the situation to the
supervisor. Continue with the calibration.
13) Disconnect the span gas.
14) If a span adjustment greater than ± 1.0 ppm (2% of chart)
is required, repeat Procedures 6 through 12 until no
adjustments are required.
15) Lock the zero and span controls.
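Procedures 9 and 12 above ask the supervisor to summarize zero and span
drift over at least 25 sampling periods. A minimal sketch of that
bookkeeping follows (hypothetical values; Section 4.1 gives the prescribed
computation):

    import statistics

    # One unadjusted-minus-true reading per sampling period, in ppm,
    # taken from the "Unadjusted Calibration" column of Figure 5:
    zero_drift = [0.2, -0.4, 0.1, 0.0, 0.3]    # hypothetical values

    mean_drift = statistics.mean(zero_drift)
    s_drift = statistics.stdev(zero_drift)     # sample standard deviation
    print(f"zero drift: mean {mean_drift:+.2f} ppm, s {s_drift:.2f} ppm")
    # An unadjusted zero more than 3*s from true zero triggers the
    # temperature-control check in Procedure 9.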
Step 3. Record Analyzer Control Settings
Record the following information on the check sheet (see Figure 5,
page 18). Record 1 and 2 under "New Control Knob Settings," and 3 and 4
under "Cylinder Pressure." Include units of pressure if other than newtons
per square meter (Nm⁻²) are used.
1) Zero control knob position,
2) Span control knob position,
3) Zero gas cylinder pressure (read first stage pressure gauge),
4) Span gas cylinder pressure (read first stage pressure gauge).
SAMPLING
Step 4. Place Analyzer in Sampling Mode
Connect analyzer to sample introduction system. Allow time for
the analyzer to stabilize.
Step 5. Sample Flow and Sample Cell Pressure Check
Check and, if necessary, adjust the sample flow to the desired
value.
Record the sample flow (include units if other than ℓ/min) and sample
cell pressure on the daily check sheet as "Initial" value (see Figure 5,
page 18).
Step 6. Recording System Check
Check the strip chart recorder for proper operation including:
1) chart speed control setting,
2) gain control setting,
3) ink trace for readability,
4) signs of excess noise, and
5) the recorder's deadband (according to manufacturer's
directions about once a month).
Automatic data acquisition systems incorporating magnetic tape
recorder or punched paper tape are checked for proper operation according
to the manufacturer's instructions.
Step 7. Sampling Period
The sampling period is defined as the time interval between
successive zero and span calibrations (usually 24 hours).
Do not change control settings on the analyzer or recording system
during the sampling period.
OPERATIONAL CHECKS
Step 8. Zero and Span Control Settings
Compare the zero and span control settings to the values
recorded on the check sheet (Figure 5) under "New Control Knob Settings."
If the settings before and after the sampling period do not agree,
note the difference in the data log book and
1) perform the normal zero and span calibration. If the
required zero and/or span correction is less than ± 1.0 ppm,
continue in the usual manner (this assumes that the
original settings were recorded wrong or that the change
in setting was not large),
2) if the required zero and/or drift correction is greater
than ± 1.0 ppm, mark the data void and report the situation
to the supervisor. Continue normal operations.
Step 9. Sample Flow Rate
Read the sample flow rate from the rotameter. Record flow rate
(ℓ/min) on daily check sheet (Figure 5) as "Final" value. Compare initial
and final readings.
If the change is greater than ± 20 percent of the initial value, check
the particulate filter for plugging (see Step 13) and the sample air pump
system for proper operation. Take corrective action.
Compute percent difference by
(Qi - Qf) / Qi × 100 = percent difference
where
Qi = initial flow rate (ℓ/min)
and
Qf = final flow rate.
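A one-line check equivalent to the computation above (a sketch with a
hypothetical helper name):

    def flow_change_percent(q_initial, q_final):
        # Percent change in sample flow rate over the sampling period.
        return (q_initial - q_final) / q_initial * 100.0

    # Flag the filter and pump checks when the magnitude exceeds 20 percent:
    print(abs(flow_change_percent(1.00, 0.75)) > 20.0)   # True -> take action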
Step 10. Sample Cell Pressure
Check the sample cell pressure and compare with the initial
pressure recorded on the daily check sheet (Figure 5, page 18). If the
pressure varied during the sampling period by no more than 10 percent, i.e.,
0.9 ≤ final pressure / initial pressure ≤ 1.1,
try to determine the cause and initiate corrective action. Report the
change to the supervisor and record the magnitude and direction of change
on the quality control check sheet (Figure 6, page 23) under "Data Quality
Statement."
If the pressure varied by more than 10%, the supervisor should void
the data and locate and correct the cause before sampling is resumed.
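The 10 percent criterion above can be screened the same way (an
illustrative sketch, not part of the procedure):

    def cell_pressure_ok(p_initial, p_final, tol=0.10):
        # True if the sample cell pressure ratio stayed within +/- tol.
        return abs(p_final / p_initial - 1.0) <= tol

    print(cell_pressure_ok(1.01e5, 1.06e5))   # ~5 percent change: acceptable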
Step 11. Temperature Control Check
Each shelter should be equipped with a temperature-indicating
device such as a wall thermometer or a maximum and minimum registering
thermometer. Check the thermometer to verify that the temperature control
system is operating within limits. Control limits on allowable tempera-
ture variations are determined by a supervisor from the temperature
variation sensitivity check in Section 2.3 and in conjunction with desired
accuracy. If a larger than usual or allowable temperature variation is
observed, record cause and corrective action in the maintenance log book
City_
QUALITY CONTROL CHECKS
Pollutant
Site Location_
Site Number
Analyzer Number
Date
Supervisor Responsible for Checks (Signature)
Auditing Level: ___ n checks per N sampling periods

Columns: Type of Quality Control Check | Result of Check | Corrective
Action Taken | Operator Performing Check
Data Quality Statement:
Figure 6: Sample Form for Reporting Results of Quality Control Checks
maintained in the shelter. If no cause is identified, or if the cause is
determined but cannot be corrected immediately, report it to the supervisor.
Reset the maximum and minimum registering thermometer after checking
temperature variation for a sampling period.
Step 12. Water Vapor Control Check
Several techniques may be used to control water vapor interference.
Refrigeration and drying agents are two of the most commonly used methods.
Daily checks for these two methods are:
1) Drying Agents - The color of the drying agent (i.e.,
silica gel or other indicating desiccants) is checked
daily; the agent is replaced or rejuvenated when necessary as
indicated by a change in color.
2) Refrigeration - Check the moisture trap between the
refrigerator unit and the analyzer for condensed moisture.
Any sign of moisture indicates a malfunction in the control
unit. Check the compressor for proper operation by
measuring the temperature of the cooling coil (units are
designed to operate at a specified temperature). Drain
condensate from the cold trap after each sampling period
and before each calibration (or leave the drain cock open
during sampling). Report any period(s) of time that the
sample air dewpoint was lower than the refrigerator dewpoint.
Take corrective action and/or notify the supervisor at any sign of
malfunction or inadequacy of the water vapor control unit. Report with
the data, by recording on the strip chart record, any period of time for
which the moisture control unit was not operating or its effectiveness was
not certain.
Document malfunctions and corrective actions in the maintenance log
book.
Step 13. Particulate Filter Check
A filter with a porosity of 2 to 10 micrometers is used to keep
large particles from reaching the sample cell.
With no filter in the system observe the sample flow rate. Place a
clean filter in the filter holder and read the new flow rate. Any drop
in flow rate is due to the clean filter. Record the magnitude of the
drop in the maintenance log book.
Initially measure the flow rate with and without the "dirty" filter
once a month. Replace the filter if the flow rate drop for the dirty
filter divided by the drop caused by the clean filter is more than 2.
Experience will suggest how often such checks need to be made for a given
site.
Step 14. Sample Introduction System Check
A sample introduction system usually consists of an intake port,
trap for moisture and large particulates, horizontal sampling manifold,
and exhaust blower as illustrated in Figure 2, page 7.
Check the moisture trap for accumulated water and large particles.
Remove, clean, and replace the trap if any moisture and/or particulates
are present.
Visually check the sample introduction system for breakage, leaks,
foreign objects in the intake port (e.g., spider webs, wasp nests), and
deposited particulates or excess moisture in the horizontal sampling
manifold.
The above checks are made each sampling period (usually daily).
Conditions such as a break in the manifold or a leaky joint in the sampling
manifold network which could affect data quality are reported with the data
by a brief description of the condition under "Data Quality Statements" on
the form in Figure 6, page 23, and by marking the strip chart trace as
void and reporting the situation to the supervisor. Take corrective
action and document in maintenance log book.
Step 15. Recording System Check and Servicing
Check the recording system for signs of recorder malfunctions
occurring during the past sampling period. Specific procedures for
checking and servicing will depend on the type of recording system used.
Check and service automatic data acquisition systems according to
the manufacturer's instructions.
For a strip chart recorder check to see that:
1) the recorder did not run out of chart paper,
2) there is a continuous inked narrow trace for the
entire sampling period, and
3) there was a uniform advancement of the chart paper
by checking the start and end times on the chart and
comparing with actual start and end times.
Malfunctions in the recording system resulting in loss or invalida-
tion of data are corrected and documented in the maintenance log book.
The sampling interval affected by the malfunction is identified on the
strip chart record for that sampling period.
Service the recorder for the next sampling period:
1) Check the ink supply and refill if less than 1/4 full.
2) Install a new roll of chart paper as necessary.
Step 16. Visual Check of Recorded Data
Check and edit the strip chart record for the past sampling
period to detect signs of monitoring system malfunctions and to validate
the data.
Typical points to look for which may indicate system problems are:
1) A straight trace for several hours (other than minimum
detectable).
2) Excess noise as indicated by a wide solid trace, or
erratic behavior such as spikes that are sharper than
is possible with the normal instrument response time.
Noisy outputs usually result when analyzers are
exposed to vibration sources.
3) A long steady increase or decrease in deflection.
4) A cyclic pattern of the trace with a definite time
period indicating a sensitivity to changes in
temperature or parameters other than CO concentration.
5) Periods where the trace drops below the zero baseline.
This may result from a larger-than-normal drop in the
ambient room temperature or power line voltage.
If any of the above conditions are detected, data should be flagged,
troubleshooting done, and the supervisor informed. Data should be declared
invalid only if malfunction of the instrument is detected; otherwise, it
should be reported.
Also, for data validation, a graph could be prepared by the supervisor
from previous data (e.g., 1 year of data) containing the average and ± 3σ
values for the hourly averages for reference when editing data. Figure 7,
page 28, is an illustration of such a graph. The occurrence of any one
or more of the conditions listed below should be investigated for possible
causes (e.g., extra heavy traffic, shift in peak traffic hours, or periods
of atmospheric stagnation with high pollution levels):
1) an estimated 1-hour average falls outside the
± 3σ limits for that specific hour (screened, for
example, as in the sketch following this paragraph),
2) the daily pattern has shifted to the left or right
by 2 or more hours, or
3) an abnormal pattern occurs, such as no peaks.
Document any causes known or suspected or the absence of any known
causes on the form in Figure 6, page 23, under "Data Quality Statement."
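A minimal sketch of the 3σ screen in condition 1) above, assuming the
supervisor's historical means and standard deviations are available hour
by hour (hypothetical values):

    def flag_hours(hourly_ppm, hist_mean, hist_sigma):
        # Return indices of hours whose averages fall outside the
        # historical mean +/- 3 sigma envelope (illustrative screen only).
        return [h for h, (x, m, s) in
                enumerate(zip(hourly_ppm, hist_mean, hist_sigma))
                if abs(x - m) > 3.0 * s]

    # Hypothetical 4-hour excerpt: hour 2 is well above its usual level.
    print(flag_hours([2.1, 2.4, 9.8, 3.0],
                     [2.0, 2.5, 3.5, 3.2],
                     [0.6, 0.7, 0.9, 0.8]))   # -> [2]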
DATA PROCESSING
Step 17. Data Handling
At the end of each sampling period the operator should make
certain that the strip chart contains the following information:
1) Sampling station number, location, pollutant being
measured, and operator.
2) Starting time and date. Ending time and date.
3) Proper identification of unadjusted zero and adjusted
zero traces.
[Graph omitted: mean hourly CO concentration (ppm) versus time of day
(hours), with 3σ limits.]

Figure 7: A Sample Graph of the Mean (c̄) and 3σ Limits of Hourly
CO Concentrations for a 24-Hour Period
4) Proper identification of unadjusted and adjusted span
traces, and the concentration in ppm of the span gas.
5) Editing information identifying any periods of invalid
data due to equipment failure or other known causes.
Step 18. Data Reduction
A. Procedure for Reading Hourly Averages from Strip Chart Records
To determine the hourly average concentration from a strip chart
record, the following procedures are used:
1) Obtain the strip chart record for the sampling period
in question. The record must have adjusted span and
zero traces at the beginning of the sampling period
and unadjusted span and zero traces at the end of the
sampling period.
2) Fill in the identification data called for at the top
of an hourly averages sheet (see Figure 8, page 29).
CITY
SITE LOCATION_
DATE
SITE NUMBER_
POLLUTANT
OPERATOR
CHECKER
Hour | Reading | Zero Baseline | Difference | Add +5 | PPM
(one row per hour interval, 0-1 through 23-24; each column after Hour
has an "Original" and a "Check" entry)
Figure 8: Sample Sheet for Recording Hourly Averages
3) Using a straight edge, draw a straight line from the
adjusted zero at the start of the sampling period to
the unadjusted zero at the end of the sampling period
as illustrated in Figure 9, page 31. This line repre-
sents the zero baseline to be used for the sampling
period.
4) Read the zero baseline in percent of chart at the
midpoint of each hour interval and record the value on
the hourly averages sheet in Figure 8, page 29.
5) Determine the hourly averages by using a transparent
object, such as a piece of clear plastic, with a
straight edge at least 1 inch long. Place the straight
edge parallel to the horizontal chart division lines.
For the interval of interest between two vertical hour
lines, adjust the straight edge between the lowest and
highest points of the trace in that interval, keeping
the straight edge parallel to the chart division lines,
until the total area above the straight edge bounded by
the trace and the hour lines is estimated to equal the
total area below the straight edge bounded by the trace
and hour lines. See Figure 9 for an illustrated example.
Read and record on the hourly average sheet the
percentage of chart deflection.
Repeat the above procedure for all the hour intervals
for which the analyzer was sampling and which have not
been marked invalid. Record all values on the hourly
averages sheet in the column headed Reading under "Original."
6) Subtract the zero baseline value (Column 2) from the reading
value (Column 1) and record the difference in Column 3.
7) Add the percent zero offset (Column 4) to the difference.
8) Convert percentage chart values to concentration in ppm
using the calibration conversion table (in Figure 4)
developed from the calibration curve. Record the ppm
values in Column 5 on the hourly averages sheet. The
"Check" columns will be used in the auditing process and
will be discussed in Section 2.2.
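The arithmetic in procedures 4) through 8) above reduces to a short
routine. In the sketch below the Figure 4 table lookup is replaced by an
assumed linear calibration, which holds only when the analyzer's curve is
in fact a straight line:

    def hourly_ppm(reading_pct, baseline_pct,
                   zero_offset=5.0, full_scale_ppm=50.0):
        # Columns 1-5 of the hourly averages sheet for one hour:
        # (reading - baseline) + zero offset, then chart percent -> ppm.
        corrected_pct = reading_pct - baseline_pct + zero_offset
        # Stand-in for the Figure 4 conversion table (linear curve
        # assumed, with the zero at 5 percent of chart):
        return (corrected_pct - zero_offset) * full_scale_ppm / 100.0

    print(hourly_ppm(reading_pct=45.0, baseline_pct=5.5))   # 19.75 ppm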
[Figure 9: Sample Trace of 24-Hour Sampling Period with Zero and Span
Calibrations. Strip chart reproduction omitted; annotations on the trace
include "Sampling Period Begin" (12 N), "1 PM," "Overnight Period Not
Shown," and "Sampling Period End."]
Step 19. Data Reporting
Transcribe information and data from the hourly averages sheet
to a SAROAD Hourly Data Form (see Figure 10).
Basic instructions for filling out the SAROAD Hourly Data Form are
given below. If the data are to be placed in the National Aerometric
Data Bank, further instructions can be obtained from the SAROAD Users
Manual APTD-0663.
1) The SAROAD Hourly Data Form is an approved form for
the recording of data observed on averages at intervals
of less than 24 hours. In this case the form is to be
used for recording hourly averages of carbon monoxide
observations.
2) Entries on the upper left of the form (see sample form,
Figure 10) provide identification (many of these items
may already be filled in by the time operators receive the
cards). These are:
(1) Agency - group recording the observations.
(2) City - city in which instrument is operated.
(3) Site - specific location of the sampler within city.
(4) Project name, if any.
(5) Parameter observed - carbon monoxide.
(6) Time Interval - Hourly.
(7) Method - Instrumental Nondispersive Infrared.
(8) Units of Observation - parts-per-million.
3) In the upper right hand corner of the SAROAD Hourly Data
Form appear three lines of blocks for coding identifying
information. These correspond to the card columns of the
numbers beneath each box when punched on an 80-column
Hollerith card. EPA will assign codes for the first line
of blocks to the reporting agency when Site Identification
Forms are initially submitted. They consist of a two-
digit code for state (SS), a four-digit code for the area
[Figure 10 reproduction omitted. The SAROAD Hourly Data Form (less than
24-hour sampling interval) carries header entries for Agency, City Name,
Site Address, State/Area/Site codes, Parameter observed, Time interval of
observation, and Method, and is addressed to the Environmental Protection
Agency, National Aerometric Data Bank, P.O. Box 12055, Research Triangle
Park, North Carolina 27711. Coding blocks are keyed to the columns of an
80-column Hollerith card: Agency, Project, Time, Year, Month, Day, Start
Hour, Parameter code, Method, Units, DP, and readings Rdg 1 (columns
33-36) through Rdg 12 (columns 77-80).]
Figure 10: SAROAD Hourly Data Form
of the state in which the sampler is located (CCCC), and
a three-digit number specifically identifying the site
(XXX). For the remaining two lines of blocks, codes are
assigned for each study as follows:
(1) Agency - Agency Code
(2) Project - Project
(3) Time
(4) Year
(5) Month - 01 to 12 for example, as appropriate,
a. July - 07
b. August - 08
c. September - 09
d. October - 10
e. November - 11
(6) Parameter Code - 42101
(7) Method - 11
(8) Units - 07
(9) DP - 1 (designates the number of places to the
right of the decimal point in the value entries)
4) On the body of the form, the two-block first column,
"Day", is the calendar day of the month (e.g., 01, 02).
"ST HR" (start hour) calls for either 00 or 12 to denote
the starting hour for which data on that line are
recorded. Two lines are used for each day's obser-
vations. The first line gives "00" (midnight) for
"ST HR" and lists the a.m. observations. The next line
gives "12" (noon) for "ST HR" and lists p.m. observations.
5) Record the hourly averages in the "Rdg" columns:
"Rdg 1" would be for either the 0 to 1 hour reading
or the 12 to 13 hour reading; "Rdg 2" would be for
either the 1 to 2 hour reading or the 13 to 14 hour
reading; etc. In entering the hourly averages, the
decimal point is located between the first and second
column.
For example:
1.0 ppm would be entered as | | |1|0|
2.5 ppm would be entered as | | |2|5| .
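Equivalently, with DP = 1 each reading can be keyed as the value times
ten, right-justified in its four card columns. An illustrative formatter
(not an official SAROAD utility):

    def saroad_reading(ppm):
        # Format an hourly average for one four-column "Rdg" field,
        # with one implied decimal place (DP = 1).
        return f"{round(ppm * 10):>4d}"

    print(repr(saroad_reading(1.0)))   # '  10'
    print(repr(saroad_reading(2.5)))   # '  25'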
Report the results of any special quality control checks performed
on the special form in Figure 6. Attach the special form for quality
control checks to the SAROAD form and give to the supervisor.
File the hourly averages sheet in the data log book.
2.2 Special Checks for Auditing Purposes
In making special checks for auditing purposes, it is important that
the check be performed without any special preparation or adjustment of the
system (see Section 3.2 for further discussion). Three special checks
are required to properly assess data quality. A checking or auditing
level of 7 checks out of 100 sampling periods is used here for illus-
tration purposes. The supervisor will specify the auditing level to be
used according to monitoring requirements. Each of the three checks is
discussed separately.
A. Measuring Control Samples
The operator, when given a control sample (auditing gas) to measure,
should proceed as follows:
1) Make no checks or adjustments on the system in
preparation of the measurement.
2) Connect the control sample bottle in the system in
the same manner that the regular calibration gases
are connected (i.e., the control sample should pass
through all the analyzer system including the water
vapor control unit and the particulate filter).
3) Let the sample gas flow until a stable trace is
obtained. Mark the trace with the code number of
the control sample and the measured concentration
in ppm.
4) Disconnect the control sample gas and connect the
regular zero gas to the analyzer.
5) Perform a zero and span calibration as in Step 2B.
6) Remeasure the control sample.
7) Return the control sample bottle and the two (2)
measured values, properly identified, to the
supervisor. After the supervisor evaluates the
results, he may request that the operator perform a
multipoint calibration and measure the control
sample again.
8) The supervisor fills out and signs the form in
Figure 6, page 23.
B. Water Vapor Interference Check
Water vapor checks should be independent and random; that is, a
qualified individual other than the regular operator should make the
check. Also, the regular operator should not know in advance when the
check is to be made. The individual making the check should not adjust,
replace, or in any way change the water vapor control unit before per-
forming the check. The exact procedure for performing the check will
depend on the type of control being used. Two procedures are given
below.
Drying Agent - When a scrubber column filled with a drying agent is
used to control water vapor interference, the check can be performed in
the following manner.
1) Connect the dry zero gas directly to the analyzer
inlet, bypassing the scrubber column. Let the
zero gas flow until a stable trace is obtained.
Mark the trace as dry zero gas.
2) Remove the segment of line bypassing the drying
agent. Place a 50-ml impinger containing 25 ml
of distilled water, at room temperature, in the
sample line such that the zero gas passes through
the impinger, drying agent, and analyzer in that
order. CAUTION: With NDIR analyzers having a
pressurized cell, this impinger may have to be
pressurized. If so, it will have the full cell or
pump pressure and will be subject to explosion. Use
a pressure vessel of adequate pressure capacity,
and avoid the use of glass, if possible.
Let the gas flow until a stable trace is
obtained. Mark the trace as response to saturated
zero gas.
3) Determine the difference in the two traces as an
equivalent CO concentration in ppm. Always subtract
the dry measurement from the saturated measurement.
In some cases negative values will result due to
normal measurement error. Document the check on the
form for reporting quality control checks in Figure 6,
page 23, and give to the supervisor for his signature.
Replace or rejuvenate the drying agent if the interference is as
large as ± 0.5 ppm.
Other Methods - Other methods include refrigeration, refrigeration
preceded by humidifier, filter cells, or optical filters. When refriger-
ation alone is used, the cold trap should be thoroughly drained before the
test and the dry zero gas allowed to flow at least 30 minutes before
reading. Otherwise, all these methods can be checked in the following
manner.
1) Pass zero gas through the system in the same
manner as is done in zero and span calibrations.
Mark the trace as dry zero gas.
2) Insert a 50-ml impinger filled with 25 ml of
distilled water at room temperature into the sample
inlet line on the inlet side of the water control
unit. CAUTION: With NDIR analyzers having a pres-
surized cell, this impinger may have to be pressurized.
If so, it will have the full cell or pump pressure and
will be subject to explosion. Use a pressure vessel
of adequate pressure capacity, and avoid the use of
glass, if possible.
3) Pass the zero gas through the impinger and system and
mark the trace as saturated zero gas.
4) Determine the apparent change in concentration in ppm
by subtracting the response to dry gas from the
response to saturated gas.
5) Allowable interference levels for different control
methods could be specified by the supervisor from
method specifications or from the desired accuracy
of the reported data.
6) Corrective action should be taken for a measured
interference exceeding the allowable level for a
particular control unit.
Document the results of the checks on the form in Figure 6, page 23, and
forward to the supervisor for his signature.
C. Data Processing Check
In auditing data processing procedures, making checks for each sampling
period is convenient and allows corrections to be made immediately.
Hence, rather than check all 24 hourly averages for 7 days out of
every 100 days, it is suggested that 2 one-hour averages be checked each
24 hour sampling period. Also, it is suggested that the 2 highest hourly
averages or the 2 hours for which the strip chart trace is most dynamic
in terms of spikes be selected for checking by scanning the strip chart
record. The check must be independent; that is, performed by an indi-
vidual other than the one who originally reduced the data. The check is
made starting with the strip chart record and continuing through the
actual transcription of the concentration in ppm on the SAROAD form.
This, then, would include reading, calculation, and transcribing or
recording errors.
The check is performed in the same manner as the original data were
processed as described in Section 2.1, Steps 17 through 19. Values are
recorded on the form in Figure 8, page 29, in the "Check" columns. If
either one of the two checks differs by as much as ± 1 ppm* from the
respective original value, all hourly averages for that sampling period
should be checked and corrected. In cases where all hourly averages
have been checked, the two original, randomly selected checks should be
clearly identified on the hourly averages sheet.
*For monitoring sites or analyzer configurations where the recorded data
do not exhibit frequently occurring sharp spikes, it would be advisable to
use a value of ± 0.5 ppm as the difference above which all hourly averages
are rechecked.
2.3 Special Checks to Detect and/or Identify Trouble
The following checks may be required: 1) when a quality assurance
program is first initiated, in order to determine the analyzer's perform-
ance capabilities and to identify potential problem areas, and 2) at any
later time when it becomes increasingly difficult to meet the performance
standards of the auditing program, in order to identify and/or evaluate
trouble areas. Procedures for performing a zero drift check, flow rate
variation sensitivity check, temperature variation sensitivity check, and
a voltage variation sensitivity check are discussed individually.
A. Zero Drift Check
If available, set up equipment for monitoring and recording on strip
chart the analyzer's power source voltage and the ambient room temperature.
If such equipment is not available, use a regular A.C. voltmeter capable of
measuring between 100 and 130 V.A.C. and connect it across the analyzer
power plug. Locate a thermometer or other temperature-indicating device
near the analyzer to give a representative reading of the ambient room
temperature. Preferably a maximum-minimum thermometer should be used.
1) Connect the zero gas to the analyzer and adjust
the trace to 5 percent of chart.
2) Start temperature and voltage recorders or read
and record the temperature and voltage each hour
for the duration of the test.
3) Let the analyzer operate unadjusted for 24 hours
with the zero gas.
4) From the strip chart(s) and recorded data determine
the following:
a) difference between the lowest (may be negative)
and highest values of the zero trace in ppm
as ΔC,
b) difference between the lowest and highest temper-
atures in °C as ΔT,
c) difference between the lowest and highest line
voltages recorded during the sampling period in
volts as ΔV.
d) Document the values of ΔC, ΔT, and ΔV on the
quality control check form in Figure 6, page 23.
5) Compare the fluctuation of the zero trace with the
temperature and voltage fluctuations for similarities
(i.e., see if the peaks occur at about the same time).
If it appears that the zero trace is sensitive to
voltage and/or temperature changes, document with a
short explanation under data quality statement on the
form in Figure 6.
B. Flow Rate Variation Sensitivity Check
1) With the analyzer in normal operating condition, connect
a span gas with a concentration corresponding to approxi-
mately 80 percent of full scale to the analyzer.
2) Adjust the sample flow and sample cell pressure to the
normal operating values and allow time to obtain a
stable trace.
3) Mark the trace with flow rate and cell pressure values.
4) Adjust the flow rate to 1/2 of its previous value. Do
not readjust the sample cell pressure. Allow time to
obtain a stable trace.
5) Mark the trace with flow rate and cell pressure.
6) Adjust the flow rate to 3 times its present setting.
Allow time to obtain a stable trace.
7) Mark the trace with flow rate and cell pressure.
8) Record the three flow rate values with corresponding
cell pressures and measured concentrations in ppm on the
form for quality control checks (Figure 6). Give the
form to the supervisor.
C. Temperature Variation Sensitivity Check
From the zero drift check, if ΔC ≤ 1 ppm and ΔT ≥ 6°C (11°F), do not
perform a temperature sensitivity check. Report temperature sensitivity
as ΔC/ΔT, where ΔC and ΔT are the apparent change in concentration and
change in temperature, respectively, as observed from the zero drift
check.
If, however, the above conditions are not satisfied, perform a
temperature sensitivity test as follows:
1) Place the analyzer in a room where the temperature
can be varied by at least ± 6°C (11°F).
2) Let the analyzer warm up sufficiently to get a
stable trace.
3) Set up temperature-measuring device such as a maximum-
minimum thermometer near the analyzer.
4) Perform a zero and span calibration at normal room
temperature. Reconnect the zero gas to the analyzer.
5) Turn the temperature control down 6°C. Allow time for
the room temperature and the analyzer trace to stabilize.
Read from the thermometer and record on the strip chart
the actual temperature.
6) Turn the temperature control up 12°C from its previous
setting (i.e., 6°C above the normal setting). Allow
time for room temperature and analyzer to stabilize.
Record actual temperature on the strip chart.
7) Calculate

ΔC/ΔT = (C1 - C2)/(T1 - T2)

where
C1 = concentration measured at T1,
C2 = concentration measured at T2,
T1 = highest temperature (centigrade),
T2 = lowest temperature (centigrade),
and
ΔC/ΔT = apparent change in concentration
per °C change in temperature.
Document the test on the form for reporting quality control checks
in Figure 6, page 23.
D. Voltage Variation Sensitivity Test
From the zero drift check, if ΔC ≤ 1 ppm and ΔV ≥ 10 volts, do not
perform a voltage variation sensitivity check. Report voltage sensitivity
as ΔC/ΔV, where ΔC and ΔV are the actual values observed from the zero
drift check.
If, however, results from the zero drift check do not fall in the
above category, perform a voltage variation sensitivity test as follows:
1) Plug the analyzer into a variac capable of adjusting
the power line voltage by ± 15 volts from the normal
line voltage and plug the variac into the regular
electrical outlet.
2) Connect a voltmeter across the variac output leads.
3) Perform a regular zero and span calibration with the
variac adjusted so that the voltmeter reads 115 volts.
4) With the span gas still connected, adjust the variac
until the voltmeter reads 105 volts. Allow the
analyzer to stabilize. Identify that portion of the
strip chart trace as being at 105 volts.
5) Adjust the variac until the voltmeter reads 125 volts.
Allow the analyzer to stabilize and properly identify
the trace.
6) Read the trace deflection at 105 and 125 volts and
convert to concentration in ppm.
7) Calculate the change in concentration per unit change
in voltage,

ΔC/ΔV = (C125 - C105)/20

where
C125 = the measured concentration at 125 volts,
C105 = the measured concentration at 105 volts,
and
ΔC/ΔV = the change in concentration per unit
change in voltage.
Results from the flow rate, temperature, and voltage sensitivity checks
and estimates (or actual measurements, if available) of the maximum
expected variation of each of the parameters under normal operational
conditions are recorded in Table 1. The maximum expected error for each
parameter is computed as illustrated in Table 1. Assumed values of
expected variation are given for ambient room temperature as ± 4.5°C from
a set value and for voltage as ± 12 volts from a normal 115-volt source.
Variation in flow rate will depend on the analyzer. A maximum expected
variation in Q can be determined by taking the 3σ value of flow rate
changes from 20 to 30 sampling periods. A fourth important parameter is
water vapor interference. Only two values of water vapor interference are
tested herein, dry and saturated (see Section 2.2.B for this check);
therefore, the value ΔC/ΔW represents the maximum interference that is
expected to occur and may not be representative of error in the measured
data in areas characterized by low relative humidities. Table 1 should be
completed for each analyzer and made a part of the operational data log
book.
Table 1: Analyzer Evaluation Data

Variable       Measure of Sensitivity   Maximum Expected Variation   Maximum Expected Error
Flow Rate      ΔC/ΔQ                    ΔQ = ____                    (ΔC/ΔQ) × ΔQ = ____
Temperature    ΔC/ΔT                    ΔT = 9°C                     (ΔC/ΔT) × ΔT = ____
Voltage        ΔC/ΔV                    ΔV = 24 V                    (ΔC/ΔV) × ΔV = ____
Water Vapor    ΔC/ΔW                    (dry and saturated only)     ΔC/ΔW = ____
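Once the sensitivity checks are in hand, the Table 1 arithmetic is a simple multiplication of each sensitivity by its expected variation. The following Python sketch illustrates the computation; all sensitivity values are hypothetical placeholders, not measured data, and should be replaced by the results of the checks above.

    # Sketch of the Table 1 error-budget arithmetic. All sensitivity values
    # below are hypothetical placeholders; substitute the measured results of
    # the flow rate, temperature, and voltage sensitivity checks.
    sensitivity = {
        "flow rate":   0.05,   # ppm per unit of Q (dC/dQ), hypothetical
        "temperature": 0.10,   # ppm per deg C (dC/dT), hypothetical
        "voltage":     0.02,   # ppm per volt (dC/dV), hypothetical
    }
    expected_variation = {
        "flow rate":   2.0,    # units of Q, e.g., 3-sigma of observed changes
        "temperature": 9.0,    # deg C (+/- 4.5 deg C about a set value)
        "voltage":     24.0,   # volts (+/- 12 V about a normal 115 V source)
    }
    for name, s in sensitivity.items():
        print(f"{name}: maximum expected error = "
              f"{s * expected_variation[name]:.2f} ppm")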
2.4 Calibration of Sample Flow and Sample Cell Pressure Indicators
A. Flow Rate Calibration
Rotameters usually require cleaning every six months to a year. It
is suggested that they be calibrated after having been cleaned or at any
sign of erratic behavior. Calibration can be accomplished using a wet
test meter as a secondary standard or a rotameter which has been recently
calibrated against a secondary standard.
The easiest method is to use a calibrated rotameter as follows:
1) Place the calibrated rotameter in series with the
rotameter to be calibrated and adjust the flow rate
as read on the calibrated rotameter to 80 percent
of full scale.
2) Adjust, if necessary, the test rotameter until it
reads the same as the calibrated rotameter and lock
the adjustment screw or knob.
3) Work down scale stopping at 65, 50, 35, 20 and 5 percent
of full scale. Record corresponding readings from both
rotameters on a rotameter calibration sheet.
4) Using the calibration curve for the calibrated rotameter,
construct a calibration curve of rotameter reading
versus flow rate for the test rotameter (a sketch of this
construction is given below).
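As a sketch of step 4, the paired readings can be turned into a lookup that interpolates flow rate for any test-rotameter reading. The calibration points below are hypothetical illustrations; in practice they come from the rotameter calibration sheet.

    # Hypothetical calibration points: (test rotameter reading, percent of
    # scale; flow rate in m3/hr taken from the calibrated rotameter's curve).
    from bisect import bisect_left

    points = [(5, 0.06), (20, 0.24), (35, 0.42),
              (50, 0.61), (65, 0.79), (80, 0.98)]

    def flow_rate(reading):
        """Linearly interpolate flow rate from a test-rotameter reading."""
        xs = [x for x, _ in points]
        i = bisect_left(xs, reading)
        if i == 0:
            return points[0][1]      # below the lowest calibrated point
        if i == len(points):
            return points[-1][1]     # above the highest calibrated point
        (x0, y0), (x1, y1) = points[i - 1], points[i]
        return y0 + (y1 - y0) * (reading - x0) / (x1 - x0)

    print(flow_rate(42))             # interpolated flow rate at 42 percent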
B. Sample Cell Pressure Gauge Calibration
It is suggested that initially the sample cell pressure gauge be
calibrated at 6-month intervals or at any sign of erratic behavior of the
gauge, such as a change larger than ± 6.9 × 10³ Nm⁻² (1 psi) in the sample
cell pressure during a sampling period in which the sample flow rate did
not change by more than ± 0.014 m³/hr (0.5 ft³/hr). If after two 6-month
calibrations the gauge shows no sign of change (i.e., reads within
± 6.9 × 10³ Nm⁻² (1 psi) of the calculated pressure), go to once-a-year
calibrations. Repeat the process, making the period shorter or longer
according to the magnitude of change of the gauge, until an optimum
calibration interval is realized.
One means of performing the calibration is using a bottle of zero
gas, the gauge to be calibrated, and a mercury manometer in a set-up as
shown in Figure 11. With this set-up the gauge can be calibrated from
atmospheric pressure up to 2.07 × 10⁵ Nm⁻² (30 psi), usually 100 percent
of its range, in the following manner:
1) Open cylinder pressure valve.
2) Adjust secondary pressure regulator until the
secondary pressure gauge reads 1.38 × 10⁵ Nm⁻²
(20 psi).
3) Open the manometer valve slowly and let the manometer
stabilize.
4) Open the gauge valve slowly and let the test gauge
stabilize.
5) Read the difference in the mercury columns (h) in
centimeters.
6) Determine ambient atmospheric pressure (Pr) from a
calibrated wall barometer, or other suitable
barometer, in Newtons per square meter (Nm⁻²).
7) Compute the pressure at the test gauge (Pm) in Nm⁻²
by

Pm(Nm⁻²) = h(cm Hg)/(7.5 × 10⁻⁴) + Pr(Nm⁻²)

(Pm(psi) = h(cm Hg)/5.17 + Pr(psi)) ,

as sketched following this procedure.
8) Record the computed value of Pm and the actual reading
of the test gauge.
9) Repeat Procedures 2 through 8 with the secondary
pressure gauge reading 1.72 × 10⁵ Nm⁻² (25 psi) and
then 2.07 × 10⁵ Nm⁻² (30 psi).
10) Construct a calibration curve of gauge reading versus
computed pressure for the gauge.
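A minimal sketch of the Step 7 computation follows. It assumes h is read in centimeters of mercury and uses the standard conversion 1 cm Hg = 1.333 × 10³ Nm⁻² (the reciprocal of the 7.5 × 10⁻⁴ factor above); the example values are illustrative only.

    # Pressure at the test gauge from the manometer reading and ambient
    # barometric pressure. 1 cm Hg = 1.333e3 N/m2 = 1/5.17 psi.
    def gauge_pressure_nm2(h_cm_hg, p_ambient_nm2):
        """Absolute pressure at the test gauge in N/m2."""
        return h_cm_hg / 7.5e-4 + p_ambient_nm2

    def gauge_pressure_psi(h_cm_hg, p_ambient_psi):
        """Absolute pressure at the test gauge in psi."""
        return h_cm_hg / 5.17 + p_ambient_psi

    # Example: a 71.4 cm Hg column difference with 1.013e5 N/m2 (14.7 psi)
    # ambient pressure gives about 1.96e5 N/m2 (28.5 psi) at the test gauge.
    print(gauge_pressure_nm2(71.4, 1.013e5))
    print(gauge_pressure_psi(71.4, 14.7))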
Figure 11. Calibration Set-up for Pressure Gauges. (Schematic: supply
bottle with cylinder pressure valve and pressure regulator feeding,
through a gauge valve, the test gauge; a mercury manometer with manometer
valve; and a vent.)
2.5 Facility and Apparatus Requirements
A. Facility
A weatherproof shelter or room is required for housing the NDIR
analyzer. Ideally the shelter or room would be equipped with an auto-
matic all-seasons air conditioning unit capable of maintaining a preset
temperature within ± 3°C (5°F). It is desirable that the heating/cooling
be done electrically to guard against the station's emitting pollutants and
altering the ambient air quality. A heat pump or a cooling unit with
electric resistance heaters would be suitable.
The shelter must be large enough to house the analyzer, any data
acquisition equipment, and storage space for the calibration gases. It
should also have adequate working space for the inspection, calibration,
and maintenance of the system.
B. Apparatus
Items of equipment with approximate costs are listed in Table 2.
Costs associated with the analyzer, sample introduction system, and
water vapor control unit vary according to the analyzer model and make,
size of the sampling station, and type of water vapor control unit used;
hence, only approximate ranges of cost are given for these items.
The calibration gases are for size 1A cylinders, certified to an
accuracy of ± 2% of the stated CO concentration as determined by analysis.
Audit gases used as control samples are certified to an accuracy
of ± 1% of the stated CO concentration and for mobility can be obtained
in size 3 cylinders.
Each item is checked according to whether it is 1) required in the
reference method, 2) used to control a variable or parameter, 3) required
for auditing purposes, or 4) used to monitor a variable.
Table 2: Apparatus Used in the NDIR Method

Apparatus
1. Carbon Monoxide Analyzer
2. Sample Introduction System
3. Water Vapor Control Unit
4. Strip Chart Recorder

Reagents
5. Zero Gas (1 bottle)
6. Calibration Gases (4 bottles)

Optional Equipment
7. 3 Primary Standards (Spiked Samples)
8. 2-Stage Pressure Regulator
9. Midget Impinger
10. Temperature Control (Heating/Cooling System)
11. Diffusion Chamber
12. Constant Voltage Regulator
13. A.C. Voltmeter
14. Maximum-Minimum Thermometer

(For each item the original table also lists the approximate 1972 cost,
the associated error source (calibration, total measurement error, water
vapor interference, zero drift, or data reduction), and check marks for
the four classifications described above; these columns are not legible
in this copy and are not reproduced.)
PART II. SUPERVISION MANUAL
3.0 GENERAL
Consistent with the realization of the objectives of a quality
assurance program as given in Section 1.0, this manual provides the
supervisor with brief guidelines and directions for:
1) the collection and analysis of information necessary
for the assessment of NDIR data quality,
2) isolating, evaluating, and monitoring major
components of system error,
3) changing the physical system to achieve a desired
level of data quality,
4) varying the auditing or checking level to achieve
a desired level of confidence in the validity of
the outgoing data, and
5) selecting monitoring strategies in terms of
data quality and cost for specific monitoring
requirements.
This manual provides brief directions that cannot cover all situa-
tions. For somewhat more background information on quality assurance
see the Management Manual of this document. Additional information
pertaining to the NDIR method can be obtained from the final report
for this contract and/or from the literature referenced at the end of
the Management Manual.
Directions are written in terms of a 24-hour sampling period and
an auditing level of n = 7 checks out of a lot size of N = 100 for illus-
tration purposes. Information on different auditing levels is given in
the Management Manual.
Specific actions and operations required of the supervisor in
implementing and maintaining a quality assurance program as discussed in
this Manual are summarized in the following listing.
1) Data Assessment
a) Set up and maintain an auditing schedule.
b) Qualify audit results (i.e., insure that checks are
independent and valid).
c) Perform necessary calculations and compare to
suggested performance standards.
d) Make corrections or alter operations when standards
are exceeded.
e) Forward acceptable qualified data, with audit results
attached, for additional internal review or to user.
2) Routine Operation
a) Obtain from the operator immediate reports of suspi-
cious data or malfunctions. Initiate corrective action
or, if necessary, specify special checks to determine
the trouble; then take corrective action.
b) On a daily basis, evaluate and dispose of (i.e., accept
or reject) data that have been identified as question-
able by the operator.
c) Examine operator's log books periodically for complete-
ness and adherence to operating procedures.
d) Approve data sheets, calibration data, etc., for filing
by operator.
e) File auditing results.
3) Evaluation of Operations
a) Evaluate available alternative monitoring strategies
in light of your experience and needs.
b) Evaluate operator training/instructional needs for
your specific operation.
3.1 Assessment of NDIR Data
A. Required Information
A valid assessment of a batch or lot of NDIR data can be made at a
given level of confidence with information derived from three special
checks. The three checks are:
1) measurement of control samples,
2) water vapor interference check, and
3) data processing check.
Directions for performing the checks are given in the Operations Manual,
Section 2.2. Directions for insuring independence and proper random-
ization in the auditing process and for the analysis of the results are
presented in this section.
B. Collection of Required Information
1) Measurement of Control Samples
Acquisition of Control Samples - Obtain at least three
audit gases, for use as control samples, that have
been assayed and certified to be within ± 1 percent of the
stated level of CO. The three levels should be selected to
span the range from about 5 to 40 ppm. The specific values
should be varied as new control samples are purchased.
Code each control sample and record the code and certified
concentration value in a log book which is not accessible
to the operator(s). Use these coded audit gases as control
samples.
Procedure for Performing Check - From the next 100 sampling
periods,* randomly select 7 periods** (e.g., one period
selected randomly from each of seven intervals of fourteen
sampling periods would be satisfactory). Then randomly select
an hour for each of the 7 periods (one hour randomly selected
from the 8-hour working day will adequately satisfy the
requirements). A sketch of this randomization is given below.
*One sampling period is defined as one 24-hour day.
**The extent of auditing, i.e., the number of checks, will be discussed in
the Management Manual.
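A minimal sketch of this selection in Python; the working day is assumed to run from 0800 to 1600, and the interval layout (six intervals of fourteen periods and one of sixteen) follows the example above.

    import random

    def audit_schedule(n_periods=100, n_audits=7, workday=range(8, 16)):
        """Randomly pick one (sampling period, hour) pair from each interval."""
        interval = n_periods // n_audits
        schedule = []
        for k in range(n_audits):
            start = k * interval
            # the last interval absorbs the remainder (6 of 14 and 1 of 16)
            stop = n_periods if k == n_audits - 1 else start + interval
            schedule.append((random.randrange(start, stop),
                             random.choice(list(workday))))
        return schedule

    print(audit_schedule())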
At the selected hour within the appropriate sampling
period, have the operator measure one of the control samples
(i.e., one of the three audit gases). Instruct the operator
to make no checks or adjustments on the system before
making the measurement. A control sample can be reused
several times as long as the operator does not know the
true concentration. The operator performs the measurement
according to the procedures given in the Operations Manual,
Section 2.2.A.
Treatment of Data - Two values are reported from each check.
One value represents the measured value of the control sample
with no adjustments made to the system prior to measurement.
The second reported value is the measure of the control sample
obtained after a zero and span calibration has been performed.
Results of the second measurement (i.e., the measurement made
after a zero and span calibration has been performed) are
used to detect and identify trouble and are discussed in
Section 2.4. Results of the first measurement are used in
assessing data quality and are treated below.
For each measurement or check, compute the difference between
the true or certified concentration, Cai, and the measured
concentration, Coi, in ppm as

d1i = Cai - Coi

where i is the ith time that the check has been made during a
given auditing period.
2) Water Vapor Interference Check
Procedure for Performing Check - Using the same seven
sampling periods as were randomly selected in 1 above, conduct
an independent (i.e., done by someone other than the regular
operator) water vapor interference check without forewarning
of the operator. Perform the check according to instructions
given in the Operations Manual, Section 2.2.B. To insure
unbiased checks, it is recommended that the individual
performing the checks be changed periodically and that the
results from successive checks be evaluated by the super-
visor for reasonableness in terms of individual magnitudes
and variations in magnitude between checks.
Treatment of Data - For each measurement or check, compute
the difference in instrument response to dry zero gas, Cd,
and saturated zero gas, Cs, in ppm as

d2i = Csi - Cdi

where i is the ith time that the check has been made during a
given auditing period.
3) Data Processing Check
Procedure for Performing Check - Independent checks on
data processing errors are made as directed in the Operations
Manual, Section 2.2.C. Data processing checks are made each
sampling period (24 hours). To insure continuous unbiased
checks, it is recommended that the individual performing the
checks be changed periodically.
Treatment of Data - Two checks are made each sampling period.
For each check determine the difference between the check
value and the original value. If either check differs by as
much as ± 1 ppm from the original value, all hourly averages
for that period are checked and corrected. For reporting data
quality, the value used for correcting all hourly averages (e.g.,
± 1 ppm) is reported. In situations where the procedure in
Section 4.1 of the Management Manual is to be followed for
data quality assessment, compute a value for reporting on the
form in Figure 12 by
a) subtracting the check value in ppm from the original
value in ppm for each of the two hourly averages
that were originally selected for checking.
b) adding the two values computed in a) above
algebraically (i.e., keep track of the signs)
and dividing by 2, and
c) reporting the result as

d3i ,

where i is the ith audit performed during
the auditing period.
C. Treatment of Collected Information
1) Identification of Defects
One procedure for identifying defects is to evaluate auditing
checks in pairs, i.e., (d11, d21), (d12, d22), (d13, d23), ..., (d17, d27).
If one or both members of the pair are defective, it counts as one defect.
No more than one defect can be declared per set. Data processing errors
should be corrected when found, and are not, therefore, discussed here.
Any set of auditing checks in which the value of d1i or d2i is
greater than ± 2.2 ppm or 1.7 ppm, respectively, will be considered a
defect. These values are assumed to be the 3σ values and are discussed
in Section 3.2. As data become available, these limits should be
reevaluated and adjusted, if necessary. Small (e.g., less than 0.4 ppm)
negative values of d2i may occur as a result of measurement error.
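A short sketch of this pairwise rule in Python, using the assumed 3σ limits above; the check values are hypothetical illustrations.

    # Count defects in paired check values (d1i, d2i); at most one defect
    # is declared per pair. Limits are the assumed 3-sigma values above.
    D1_LIMIT = 2.2   # ppm, measurement of control samples
    D2_LIMIT = 1.7   # ppm, water vapor interference

    def count_defects(d1, d2):
        return sum(1 for a, b in zip(d1, d2)
                   if abs(a) > D1_LIMIT or b > D2_LIMIT)

    d1 = [0.4, -1.1, 2.5, 0.0, 0.8, -0.3, 1.9]   # hypothetical d1i values
    d2 = [0.2, 0.5, 0.1, 1.8, 0.3, 0.4, 0.6]     # hypothetical d2i values
    print(count_defects(d1, d2))                 # 2 (third and fourth pairs)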
2) Reporting Data Quality
Each lot of data submitted with SAROAD forms or tapes should be
accompanied by the minimum data qualifying information as shown in
Figure 12. The individual responsible for the quality assurance program
should sign and date the form. As an illustration, values from Section 3.2,
Suggested Standards for Judging Performance, are used to fill in the
blanks in Figure 12.
Supervisor's Signature ______________  Reporting Date ______________

Auditing Rate for Data Errors: n = 7, N = 100
Definition of Defect: |d1i| > 2.2 ppm, d2i > 1.7 ppm

Auditing Rate for Data Processing Errors*: n = 2, N = 24
Definition of Defect**: |d3i| ≥ 1 ppm

Number of Defects Reported ______
(should be circled in the table below)

Audit                                      Check Values (ppm)
1. Measurement of Control Samples (d1i):   d11  d12  d13  ...  d1i  ...  d1n
2. Water Vapor Interference Check (d2i):   d21  d22  d23  ...  d2i  ...  d2n
3. Data Processing Check (d3i):            d31  d32  d33  ...  d3i  ...  d3n

*Data processing errors are corrected when found and are, therefore, not
reported as defects.
**This is actually the value of one check while d3i is the average of
two checks.

Figure 12: Data Qualification Form
The reported auditing rate is the rate in effect at the beginning of the
auditing period. An increase or decrease in auditing rate during the
auditing period will be reflected by the total number of checks reported.
The reason for change should be noted on the form.
Check values (i.e., d1i's, d2i's, and d3i's) are calculated as directed
in Section 3.1.B and reported in ppm. Values of d3i need be reported only
if requested by the Manager. All reported check values exceeding the defini-
tion of a defect should be marked for easy recognition by circling on the form.
Attach the data qualification form to the SAROAD form and forward for
additional internal review or to the user.
3.2 Suggested Standards for Judging Performance
Results from a collaborative test of the NDIR method (Ref. 1) show
that system precision is a function of the CO concentration. The perform-
ance standard given below in Table 3 for measurement of control samples
was taken from the point of maximum system precision which occurred at a
concentration of about 17 ppm. The value of ± 2.2 ppm represents the
3σ limit. This standard should be reevaluated and adjusted for different
concentration levels when data collected from the measurement of control
samples, as directed in Section 2.2 of the Operations Manual, become
available.
The suggested standards given for water vapor interference and data
processing errors are no more than rough estimates. Reasonable performance
standards can be determined as data become available from the auditing
program.
3.3 Collection of Information to Detect and/or Identify Trouble
In a quality assurance program one of the most effective means of
preventing trouble is to respond immediately to reports from the operator
of suspicious data or equipment malfunctions. Application of proper
corrective actions at this point can reduce or prevent the collection of
poor quality data. Important error sources, methods for monitoring
applicable variables, and suggested control limits for each source are
discussed in this section.
Table 3: Suggested Performance Standards

Standards for Defining Defects
1. Measurement of Control Samples; |d1i| > 2.2 ppm
2. Water Vapor Interference Check; d2i ≥ 1.7 ppm

Standard for Correcting Data Processing Errors
3. Data Processing Check; |d3i| ≥ 1 ppm

Standards for Audit Rates
4. Suggested minimum auditing rates for data error; number of
audits, n = 7; lot size, N = 100; allowable number of defects
per lot, d = 0.
5. Suggested minimum auditing rates for data processing error;
number of audits, n = 2; lot size, N = 24; allowable number
of defects (i.e., |d3i| ≥ 1 ppm) per lot, d = 0.

Standards for Operation
6. If at any time d = 1 is observed (i.e., a defect is observed)
for either d1i or d2i, increase the audit rate to n = 20,
N = 100 until the cause has been determined and corrected.
7. If at any time d = 2 is observed (i.e., two defects are observed
in the same auditing period), stop collecting data until the
cause has been determined and corrected. When data collection
resumes, use an auditing level of n = 20, N = 100 until no
defects are observed in three successive audits.
8. If at any time either one of the two conditions listed below is
observed, 1) increase the audit rate to n = 20, N = 100 for the
remainder of the auditing period, 2) perform special checks to
identify the trouble area, and 3) take necessary corrective
action to reduce error levels. The two conditions are:
a) two (2) d1i values exceeding ± 1.4 ppm, or
three (3) d1i values exceeding ± 0.7 ppm
b) two (2) d2i values exceeding 1.0 ppm, or
three (3) d2i values exceeding 0.5 ppm.
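The operating standards 6 through 8 read naturally as a small decision procedure. The sketch below encodes them in Python; the sign conventions (absolute values for d1i, positive values for d2i) are assumptions consistent with Table 3, and the sketch is an illustration of the table, not a substitute for supervisory judgment.

    def suggested_action(d1, d2):
        """Apply Standards 6-8 to one auditing period's check values (ppm)."""
        defects = sum(1 for a, b in zip(d1, d2) if abs(a) > 2.2 or b > 1.7)
        if defects >= 2:
            return "Stop collecting data until the cause is corrected (Standard 7)."
        if defects == 1:
            return "Increase the audit rate to n = 20, N = 100 (Standard 6)."
        warn_d1 = (sum(abs(a) > 1.4 for a in d1) >= 2 or
                   sum(abs(a) > 0.7 for a in d1) >= 3)
        warn_d2 = (sum(b > 1.0 for b in d2) >= 2 or
                   sum(b > 0.5 for b in d2) >= 3)
        if warn_d1 or warn_d2:
            return ("Increase audit rate to n = 20 and perform special "
                    "checks (Standard 8).")
        return "No action required; continue at the current audit rate."

    print(suggested_action([0.4, -1.1, 2.5], [0.2, 0.5, 0.1]))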
A. Identification of Important Variables
A great many variables can affect the expected precision and accuracy
of measurements made by the NDIR method. Certain of these are related to
analysis uncertainties and others to instrument characteristics. Major
sources of error are discussed below.
Inaccuracy and Imprecision in the Stated CO Concentration of
Calibration Gases (Ref. 1) - There are two components of error
involved; one is the error in the original assay, and the second
is due to the deterioration of CO with time.
Large errors in the original assay should be detected when
the gas is first purchased by measuring with a properly cali-
brated and functioning analyzer. Changes in concentration
occurring as a function of time will be detected at a given level
when spiked samples can no longer be measured within given limits
with a properly calibrated and functioning analyzer.
Water Vapor Interference - Water vapor is a positive interference
for all NDIR analyzers (Refs. 1-5). The magnitude of the inter-
ference is a function of the type of control equipment being used
and the operational state of the equipment.
Refrigeration and drying agents have proved to be effective
in controlling water vapor interference (Ref. 1). Refrigeration
units should be preceded by a humidifier when used in locations
where the dewpoint of the ambient air is frequently below the
dewpoint in the refrigeration unit. Drying agents have to be
checked and replaced frequently when used in areas characterized
by high relative humidities (Ref. 4).
Error due to water vapor interference is not compensated for
or corrected by the zero and span calibrations. Its magnitude is
monitored as part of the auditing program by performing periodic
water vapor interference checks.
Data Processing Errors - Data processing, starting with
reducing the data from a strip chart record through the act
of recording the measured concentration on the SAROAD form,
is subject to many types of errors. Perhaps the major source
of error is in reading hourly averages from the strip chart
record. This is a subjective process and even the act of
checking a given hourly average does not insure its absolute
correctness. The approach used in Section 2.2.C of the
Operations Manual means that one can be about 55% confident
that no more than 10% of the reported hourly averages are in
error by more than ± 1 ppm.
The magnitude of data processing errors can be estimated
from, and controlled by, the auditing program through the
performance of periodic checks and making corrections when
large errors are detected. A procedure for estimating the bias
and standard deviation of processing errors is given in
Section 4.1 of the Management Manual.
Zero Drift - Zero drift is defined as the change in instrument
output over a stated period of time, usually 24 hours, of
unadjusted, continuous operation when the input concentration
is zero.
Several variables contribute to zero drift. Some variables
such as variations in ambient room temperature, source voltage,
and sample cell pressure result in a zero drift that is not
linear with time. Therefore, performing a zero and span cali-
bration does not correct for the component of drift throughout
the sampling period but rather just at the time the calibration
is performed.
Degradation of electronic components and increased accumu-
lation of dirt in the sample cell may result in a zero drift that
is linear with time. Periodic zero and span calibrations allow
for correction of this component of zero drift for the entire
sampling period.
The importance of zero drift to data quality can be deter-
mined from the results obtained from measuring control samples.
If a zero and span calibration is nearly always required in order
to measure a control sample within desired limits (see Section 2.1),
a zero drift check as described in section 2.2 should be performed
to determine the characteristics and major causes of the drift.
For a drift that is generally linear with time, it is valid to
perform a zero and span before measuring control samples as part
of the auditing process. However, if the drift is a function of
variations in temperature, voltage, or pressure, as can be deter-
mined by the special checks in section 2.2, zero and span calibra-
tions should not be performed before measuring control samples for
auditing purposes. In this case meeting desired performance stan-
dards may require more frequent zero and span calibrations or more
rigid control of temperature, voltage, and pressure, as appropriate.
Span Drift - Span drift is defined as the change in instrument
output over a stated time period of unadjusted, continuous
operation when the input concentration is a stated upscale
value. For most NDIR analyzers the major component of span
drift is zero drift and is corrected or controlled as dis-
cussed above. The component of span drift other than zero
drift can be caused by either optical or electronic defects.
If this component of span drift is large or shows a continuous
increase with time, the manufacturer's manual should be
followed for troubleshooting and correction of the defect. The
importance or magnitude of span drift can be determined from the
zero and span calibrations after each sampling period.
Excessive Noise - Noise is defined as spontaneous deviations
from a mean output not caused by input concentration changes.
Excessive noise may result when an analyzer is exposed to
mechanical vibrations. Other sources of noise include a high
gain setting on the recorder, accumulation of dirt on sample
cell walls and windows, or loose dirt in the sample cell
(Ref. 6).
Excessive noise is evidenced by either an extra broad
strip chart trace or a narrow but erratic trace. The manu-
facturer's manual should be followed for troubleshooting and
correcting the cause.
B. How to Monitor Important Variables
System noise, zero drift, span drift, and sample cell pressure are
monitored as part of the routine operating procedures. Implementing an
auditing program effectively monitors calibration gas concentration, water
vapor interference, and data processing errors. Variations in ambient
room temperature and/or source voltage can be monitored with a minimum-
maximum thermometer and an a.c. voltmeter, respectively. Table 4 summarizes
the variables and how they can be monitored.
Table 4: Methods of Monitoring Variables

1. Calibration Gas Concentration - Measurement of control samples as
part of the auditing program.
2. Water Vapor Interference - Water vapor interference checks performed
as a part of the auditing program.
3. Data Processing Errors - Data processing checks performed as a part
of the auditing program.
4. Zero Drift - Zero check and adjustment before each sampling period
as part of routine operating procedure.
5. Span Drift - Span check and adjustment before each sampling period
as part of routine operating procedure.
6. System Noise - Check of strip chart record trace for signs of noise
after each sampling period as part of routine operating procedure.
7. Sample Cell Pressure Variation - Reading and recording sample cell
pressure at the beginning and end of a sampling period as part of
routine operating procedure.
8. Temperature Variation - Minimum-maximum thermometer (or any other
temperature-indicating device) placed near the analyzer and read
periodically throughout the sampling period. This would usually be
done as a special check.
9. Voltage Variation - A.C. voltmeter measuring the voltage to the
analyzer, read periodically throughout the sampling period. This
would usually be done as a special check.
C. Suggested Control Limits
Appropriate control limits for individual variables will depend on
the level of performance needed. Table 5 gives suggested performance
standards for measuring control samples and water vapor interference.
The standards are given in terms of a mean (bias) and standard deviation.
Standards given for the measurement of control samples were taken
from the results of a collaborative test (Ref. 1). The standard
deviation, σ1, is actually a function of the CO concentration and should
be evaluated for different levels as the necessary data become available.
The value used here is probably adequate for concentration values between
8 and 30 ppm.
In the table, error in measuring control samples has been divided
into four components. They are: 1) error in calibration gas concentration,
2) zero drift, 3) span drift, and 4) noise. The values given for the
various error components were arrived at in the following way. Verifi-
cation of calibration gas concentrations can be made, at the 3σ level,
within ± (1.0 + 0.02 Cc) ppm by measuring on a properly calibrated and
functioning analyzer. This would result in an upper limit of ± 1.2 ppm
for a calibration gas with a true concentration of 10 ppm. Any deviation
larger than ± 1.2 ppm indicates that the CO concentration value has
actually changed with time from the certified value, and the gas should be
reassayed.
The nonlinear component of zero drift which can result from
variations in temperature, pressure, or voltage is not totally corrected
for by zero and span calibrations. If the zero drift is randomly positive
and negative from sampling period to sampling period, the drift probably
has a large nonlinear component. From previous experience with NDIR
analyzers, a ± 1.2 ppm nonlinear zero drift over a 24-hour sampling period
is believed to be a reasonable upper limit.
The effect of span drift, that component other than zero drift, is a
function of the CO concentration level being measured. This component of
drift is normally small and is usually measured at about 80 percent of full
scale. The effect in this case, then, is the ratio of the CO concentration
being measured to 40 ppm (80 percent of scale), times the drift in ppm.
Table 5: Suggested Control Limits for Parameters and/or Variables

(Each entry gives the suggested performance standard as a mean, a
standard deviation in ppm, and a 3σ upper limit.)

1. Measurement of Control Samples: d̄1 = 0.025 Ca*; σ1 = 0.72;
   upper limit ± 2.16
   Calibration Gas Concentration Error: d̄a = 0.025 Cc**; σa = 0.4;
   upper limit ± 1.2
   Zero Drift (Non-Linear Component: Temperature, Voltage, and Cell
   Pressure Variations): d̄b = 0; σb = 0.4; upper limit ± 1.2
   Span Drift (other than zero drift): d̄c = 0; σc = 0.2;
   upper limit ± 0.6
   Noise: d̄d = 0; σd = 0.2; upper limit ± 0.6
   Total, σ'1 = √(σa² + σb² + σc² + σd²): d̄'1 = 0.025 Ca; σ'1 = 0.67;
   upper limit ± 2.01
2. Water Vapor Interference: d̄2 = 0.3; σ2 = 0.3; upper limit ± 1.74

*Ca = concentration of control sample
**Cc = concentration of calibration gas
It is estimated that this component of drift very seldom introduces an
error as large as ± 0.6 ppm and on the average accounts for less than
± 0.2 ppm error in the measured value.
System noise can originate in the analyzer or recorder. Specifications
on most analyzers quote a maximum noise level of ± 1 percent of full scale,
or ± 0.5 ppm for a 0 to 50 ppm scale. With proper maintenance the combined
noise levels of analyzer and recorder should seldom exceed an equivalent
concentration of ± 0.6 ppm.
Combining the means and standard deviations of component errors as

d̄'1 = d̄a + d̄b + d̄c + d̄d

and

σ'1 = √(σa² + σb² + σc² + σd²)

shows that at this level of control the suggested performance standard for
measuring control samples is satisfied, as is evidenced by

d̄'1 = d̄1 = 0.025 Ca

and

σ'1 ≤ σ1 .
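Numerically, this combination amounts to adding the component biases and root-sum-squaring the component standard deviations. A sketch, assuming a 10 ppm calibration gas (so the 0.025 Cc bias term is 0.25 ppm) and the component values of Table 5:

    import math

    bias  = {"cal gas": 0.25, "zero drift": 0.0, "span drift": 0.0, "noise": 0.0}
    sigma = {"cal gas": 0.4,  "zero drift": 0.4, "span drift": 0.2, "noise": 0.2}

    bias_total  = sum(bias.values())                              # d-bar'1
    sigma_total = math.sqrt(sum(s * s for s in sigma.values()))   # sigma'1
    print(bias_total, sigma_total)   # 0.25 ppm bias, about 0.63 ppm sigma

Note that the root-sum-square of the tabulated components comes to about 0.63 ppm, slightly below the 0.67 ppm total carried in Table 5; either value satisfies the σ1 = 0.72 ppm standard.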
Water vapor is a positive interference for NDIR CO monitors. Standards
given here are strictly estimates and should be reevaluated and adjusted
for different types of control units as data become available. Here it
is assumed that interference errors have a negative exponential distribu-
tion whose mean, d̄2, is given in Table 5.
3.4 Procedures for Improving Data Quality
Quality control procedures designed to control or adjust data
quality may involve a change in equipment or in operating procedures.
Table 6 lists some possible procedures for improving data quality. The
applicability or necessity of a procedure for a given monitoring
situation will have to be determined from results of the auditing process
or special checks as performed to identify the important variables. The
expected results are given for each procedure in qualitative terms. If
quantitative data are available or reasonably good estimates can be made
of the expected change in data quality resulting from implementation of
each procedure, a graph similar to that in Figure 18, Section 4.3 of the
Management Manual can be constructed. The values used in Table 13 and
Figure 18 are assumed and were not derived from actual data.
Equipment and personnel costs are estimated for each procedure.
Personnel costs were taken as 5 dollars per hour for operator time and
10 dollars per hour for supervisor time. Equipment costs were prorated
over 5 years for continuous monitoring, i.e., sampling 365 days a year.
All costs are for a lot size of 100, that is, 100 days of sampling.

Table 6: Quality Control Procedures or Actions

A1. Verify concentration of calibration gas
    Description of Action:
    a) Verify concentration of new calibration gas as described in
       Section 2.1 of the Operations Manual. If certified and measured
       values differ by more than ± (1.0 + 0.02 Cc*) ppm, reject the gas.
    b) Verify concentration of calibration gas anytime a control sample
       cannot be measured within ± (1.0 + 0.01 Ca**) ppm after the
       analyzer has been calibrated and is in proper working order.
    Expected Results: Reduces likelihood of calibration gas errors
    exceeding ± (1.0 + 0.02 Cc) ppm.
    Costs: Equipment $70; Personnel $20; Total $90.

A2. Replicate calibration curve
    Description of Action: Repeat the calibration process after one day
    and use the average of each pair (other than 0 and span) to construct
    a calibration curve.
    Expected Results: Reduces random error in calibration points (other
    than 0 and span) and detects large errors made in the original
    replicate.
    Costs: Personnel $40; Total $40.

A3. Perform multipoint calibrations
    Description of Action: Perform a multipoint calibration when the
    deviation between the measured and stated value of a control sample
    differs by more than ± (1.0 + 0.01 Ca) ppm when measured immediately
    after a zero and span calibration.
    Expected Results: Reduces errors due to a change in instrument
    response characteristics between multipoint calibrations.
    Costs: Personnel $40; Total $40.

A4. Perform a zero and span calibration every 8 hours
    Description of Action: Perform a zero and span calibration every
    8 hours as opposed to every 24 hours.
    Expected Results: Reduces error due to the nonlinear component of
    zero drift.
    Costs: Equipment $100; Personnel $500; Total $600.

A5. Sample diffusion chamber
    Description of Action: Place a diffusion chamber in the sample inlet
    line with sufficient capacity to integrate or smooth out peak
    concentrations of less than 5 minutes duration.
    Expected Results: Reduces reading error by eliminating sharp spikes
    from the strip chart trace.
    Costs: Equipment $1; Personnel $10; Total $11.

A6. Temperature control
    Description of Action: Install a heating/cooling system capable of
    maintaining ambient room temperature to within ± 5°F (3°C) of a
    preset value.
    Expected Results: Reduces zero drift caused by temperature
    variations.
    Costs: Equipment $55; Personnel $20; Total $75.

A7. Voltage control
    Description of Action: Install a constant voltage regulator capable
    of maintaining line voltage to within ± 1% of a preset value.
    Expected Results: Reduces zero drift caused by voltage variations
    and noise spikes resulting from sudden voltage changes.
    Costs: Equipment $15; Personnel none; Total $15.

A8. Improve water vapor interference control
    Description of Action: Improve water vapor interference control by
    equipment change and/or increased maintenance.
    Expected Results: Maintains water vapor interference at an
    insignificant level when compared to normal measurement error.
    Costs: Equipment $25; Personnel $10; Total $35.

*Cc = certified concentration of calibration gas
**Ca = certified concentration of control sample
A procedure for selecting the appropriate quality control procedure
to insure a desired level of data quality is given below:
1) Specify the desired performance standard; that is,
specify the limits within which you want the devi-
ation between the measured and the true concentration
to fall a desired percentage of the time. For
example, to measure within ± 3 ppm 95 percent of the
time, the following performance standard must be
satisfied:

|T̄| + 2σT ≤ 3 ppm.                                  (1)
2) Determine the system's present performance level from
the auditing process, as described in Section 4.1
of the Management Manual by setting
T̄ = d̄1 + d̄2 + d̄3

and

σT = √(s1² + s2² + s3²) .
If the relationship of (1) above is satisfied, no
control procedures are required.
3) If the desired performance standard is not satisfied,
identify the major error components.
4) Select the quality control procedure(s) which will
give the desired improvement in data quality at the
lowest cost. Figure 18 in Section 4.3 of the
Management Manual illustrates a method for
accomplishing this.
The relative position of actions on the graph in Figure 18 will differ
for different monitoring networks according to type of equipment being
used, available personnel, and local costs. Therefore, each network would
need to develop its own graph to aid in selecting the control procedure
providing the desired data quality at the lowest cost.
3.5 Procedures for Changing the Auditing Level to Give the Desired
Level of Confidence in the Reported Data
The auditing process does not in itself change the quality of the
reported data. It does provide a means of assessing the data quality.
An increased auditing level increases the confidence in the assessment.
It also increases the overall cost of data collection.
Various auditing schemes and levels are discussed in Section 4.2.
Numerous parameters must be known or assumed in order to arrive at an
optimum auditing level. Therefore, only two decision rules with two
levels of auditing each will be discussed here.
For conditions as assumed in C of Section 4.2 of the Management
Manual, a study of Figure 17 gives the following results. These conditions
may or may not apply to your operation. They are included here to call
attention to a methodology. Local costs must be used for conditions to
apply to your operation.
A. Decision Rule - Accept the Lot as Good If No Defects Are Found
(i.e., d = 0).
1) Most Cost Effective Auditing Level - In Figure 17 the two
solid lines are applicable to this decision rule, i.e.,
d = 0. The cost curve has a minimum at n = 7 or an audit-
ing level of 7 checks out of 100 sampling periods. From
the probability curve it is seen that at this auditing
level there is a probability of 0.47 of accepting a lot as
good when the lot (for N = 100) actually has 10 defects
with an associated average cost of 234 dollars per lot.
2) Auditing Level for Low Probability of Accepting Bad Data -
Increasing the auditing level to n = 20, using the same
curves in Figure 17 as in (1) above, shows a probability
of 0.09 of accepting a lot as good when the lot actually
has 10 defects. The average cost associated with this
level of auditing is approximately 430 dollars per lot.
B. Decision Rule - Accept the Lot as Good If No More Than One (1)
Defect is Found (i.e., d ≤ 1).
1) Most Cost Effective Auditing Level - From the two dashed
curves in Figure 17 it can be seen that the cost curve has
a minimum at n = 14. At this level of auditing there is a
probability of 0.51 of accepting a lot of data as good when
it has 10 defects. The average cost per lot is approximately
340 dollars.
2) Auditing Level for Low Probability of Accepting Bad Data -
For an auditing level of n = 20 the probability of accepting
a lot with 10 percent defects is about 0.36 as read from the
d ≤ 1 probability curve. The average cost per lot is
approximately 375 dollars.
It must be realized that the shape of a cost curve is determined by
the assumed costs of performing the audit and of reporting bad data. These
costs must be determined for individual monitoring situations in order to
select optimum auditing levels.
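The probabilities quoted above follow from the hypergeometric distribution: the chance of finding at most d defects when n of N = 100 sampling periods are checked and D = 10 of them are actually defective. A sketch in Python; the exact values differ slightly from those read off the curves of Figure 17.

    from math import comb

    def p_accept(n, d, N=100, D=10):
        """P(at most d defects found in n checks of an N-period lot with D defects)."""
        return sum(comb(D, k) * comb(N - D, n - k)
                   for k in range(d + 1)) / comb(N, n)

    print(f"{p_accept(7, 0):.2f}")    # 0.47: n = 7, accept on zero defects
    print(f"{p_accept(20, 0):.2f}")   # about 0.10; Figure 17 reads 0.09
    print(f"{p_accept(14, 1):.2f}")   # about 0.58; Figure 17 reads about 0.51
    print(f"{p_accept(20, 1):.2f}")   # 0.36: n = 20, accept on d <= 1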
3.6 Monitoring Strategies and Cost
Selecting the optimum monitoring strategy in terms of cost and data
quality requires a knowledge of the present data quality, major error
components, cost of implementing available control procedures, and poten-
tial increase in system precision and accuracy.
Section 4.3 illustrates a methodology for comparing strategies to
obtain the desired precision of the data. Table 6 of Section 3.4 lists
control procedures with estimated costs of implementation and expected
results in terms of which error component(s) are affected by the control.
Three system configurations identified as best strategies in Figure 18,
Section 4.3 of the Management Manual are summarized here.
A. Reference Method
Description of Method: This refers to a sampling system as illustrated
in Figure 2, Section 2.2 of the Operations Manual. Routine operating pro-
cedures as given in the Operations Manual are to be followed with special
checks performed to identify problem areas when performance standards are
not being met. An auditing level of n = 7 out of a lot size of N = 100
is recommended for this strategy. This strategy is identified as AO in
Table 13 and Figure 18 in the Management Manual.
Costs: Taken as reference or zero cost.
Data Quality: Combining the assumptions made concerning water vapor
interference and data processing errors with the standard deviation of
measuring control samples (Ref. 1), the data quality is described by
Ct = Cm - (0.025 Ct + 0.30) ± 3(0.93) .

For a true concentration, Ct, of 10 ppm the measured value, Cm, will be
within the following limits

7.8 < Cm < 13.3

approximately 99.7 percent of the time.
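A sketch of this bound as a Python function; the same computation reproduces the limits quoted for strategies (B) and (C) below.

    def measured_limits(c_t, sigma, rel_bias=0.025, abs_bias=0.30):
        """3-sigma limits on the measured value for a true concentration c_t."""
        center = c_t + rel_bias * c_t + abs_bias
        return center - 3 * sigma, center + 3 * sigma

    print(measured_limits(10, 0.93))   # (7.76, 13.34): about 7.8 to 13.3
    print(measured_limits(10, 0.82))   # about 8.1 to 13.0, strategy (B)
    print(measured_limits(10, 0.70))   # about 8.5 to 12.7, strategy (C)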
B. Reference Method with Sample Diffusion Chamber (A5)
Description of Method: Identical with (A) above except a diffusion
chamber large enough to integrate or smooth out sharp spikes of less than
5 minutes duration is used. This reduces the chance of large errors in what
can be a highly subjective process in the measurement method.
Costs: Estimated average cost per lot in excess of the costs of (A)
above is 10 dollars.
Data Quality: From Table 13 and Figure 18 the data quality would
be described by

Ct = Cm - (0.025 Ct + 0.3) ± 3(0.82) .

For a true concentration, Ct, of 10 ppm the measured value, Cm, would
fall within the following limits

8.1 < Cm < 13.0

approximately 99.7 percent of the time.
C. Reference Method Plus Sample Diffusion Chamber (A5) and
Shelter Temperature Control Unit (A6).
Description of Method: Identical to (B) above with the addition of
operating the analyzer in a shelter where the temperature is controlled
to within ± 5°F of a set value.
Costs: From Figure 18 it is seen that the average cost per lot of
data in excess of the cost of (A) above is about 85 dollars.
Data Quality: The combination of A5 and A6 as shown in Figure 18
has a standard deviation of about 0.7 ppm. Neither A5 nor A6 affects
system bias; therefore, data quality can be reported as

Ct = Cm - (0.025 Ct + 0.30) ± 3(0.7) .

For a true concentration, Ct, of 10 ppm the measured value, Cm, will
fall within the following limits

8.5 < Cm < 12.7

approximately 99.7 percent of the time.
PART III. MANAGEMENT MANUAL
4.0 GENERAL
The objectives of a data quality assurance program for the NDIR
method of measuring atmospheric carbon monoxide were given in Sec-
tion 1.0. In this section of the manual, procedures will be given to
assist the manager in making decisions pertaining to data quality based
on the checking and auditing procedures described in Sections 2 and 3.
These procedures can be employed to:
1) detect when the data quality is inadequate,
2) assess overall data quality,
3) determine the extent of independent auditing to be
performed,
4) relate costs of data quality assurance procedures
to a measure of data quality, and to
5) select from the options available to the manager
the alternative(s) which will enable him to meet
the data quality goals by the most cost-effective
means.
Objectives 1 and 2 above are described in Section 4.1. The determination
of the extent of auditing (Objective 3) is considered in Section 4.2.
Finally, Objectives 4 and 5 are discussed in Section 4.3. The cost data are
assumed and a methodology provided. When better cost data become
available, improvements can be made in the management decisions.
If the current reference system is providing data quality consistent
with that required by the user, there will be no need to alter the physical
system or to increase the auditing level. In fact, several detailed pro-
cedures could be bypassed if continuing satisfactory data quality is
implied by the audit. However, if the data quality is not adequate,
i.e. either a large bias and/or imprecision in the reported data, then
(1) increased auditing should be employed, (2) the assignable cause is
to be determined, and (3) the system deficiency corrected. The correc-
tion can take the form of a change in the operating procedure, e.g.
increased frequency of calibration, such as every month; or it may be
improved instrumentation to control environmental variations during the
sampling period, i.e. between zero-span calibrations. Another possi-
bility is to increase the auditing level and hence increase the confidence
in the reported results. These alternatives will be considered in
Section 4.2.
4.1 Data Quality Assessment
As a result of the audits suggested in the Supervision Manual, one
can (1) compare the estimated variations in the measured concentrations
with suggested standards, (2) make an overall data quality assessment,
and (3) detect when the data quality may be inadequate. It is important
that the audit procedure be independent of previously reported results
and be a true check of the system under normal operating procedures.
Independence can be achieved by providing a control sample of unknown
concentration to the operator and requesting that he measure and report
the concentration of the sample or having another person perform the
check. To insure that the check is made under normal operating procedures,
it is required that the audit be performed without any special check of
the system prior to the audit other than that usually performed each
sampling period, such as a zero and span calibration.
Assume for convenience that an auditing period consists of N = 100
days (or sampling periods). Subdivide the auditing period into n equal
periods or nearly equal periods. Make one audit during each period and
compute the deviations (differences) between the audit values and the
stated values (or previously determined values as determined by the
operator) as indicated in the Supervision Manual. For example, if
seven audits (n = 7) are to be performed over 100 sampling periods (N = 100),
the 100 periods can be subdivided into 7 intervals (6 with 14 periods and
1 with 16 periods). The audit scheme for the data processing errors will
be slightly different from the above and will be discussed under 3) below.
The checks are to be combined for the selected auditing period and the
mean difference or bias and the standard deviation of the differences are
to be computed as indicated below.
1) Measurement of Control Samples

Bias = \bar{d}_1 = \frac{1}{n} \sum_{i=1}^{n} d_{1i}

where

d_{1i} = deviation of the measured concentration of CO from the stated
value for the control sample.

Standard Deviation = s_1 = \sqrt{\frac{\sum_{i=1}^{n} (d_{1i} - \bar{d}_1)^2}{n - 1}}

where

\bar{d}_1 = the average bias, and

s_1 = the estimated standard deviation of the measured concentrations
corrected for the average bias \bar{d}_1.
The level of sampling or auditing n will be considered as a parameter
to be selected by the manager to maintain the quality of data as required.
2) Error Due to Water Vapor Interference

Bias = \bar{d}_2 = \frac{1}{n} \sum_{i=1}^{n} d_{2i}

where

d_{2i} = deviation of the measured concentrations under saturated and
dry conditions.

Standard Deviation = s_2 = \sqrt{\frac{\sum_{i=1}^{n} (d_{2i} - \bar{d}_2)^2}{n - 1}}
The formulas for average bias and the estimated standard deviations
are the standard ones given in statistical texts (e.g., see Ref. 7).
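To make these formulas concrete, here is a minimal Python sketch; the
deviation values used are hypothetical.

    import math

    def bias_and_std(deviations):
        """Average bias d-bar and standard deviation s of audit deviations,
        per the formulas above (n - 1 degrees of freedom)."""
        n = len(deviations)
        d_bar = sum(deviations) / n
        s = math.sqrt(sum((d - d_bar) ** 2 for d in deviations) / (n - 1))
        return d_bar, s

    # Hypothetical control-sample deviations (ppm) from n = 7 audits.
    d1 = [0.4, -0.2, 0.6, 0.1, -0.5, 0.3, 0.0]
    d_bar1, s1 = bias_and_std(d1)
    print(f"bias = {d_bar1:.2f} ppm, s1 = {s1:.2f} ppm")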
3) Errors in Data Reduction and Recording
An auditing procedure for data processing errors was described in
the Supervision Manual, Section 3.1. This procedure suggested that two
hours be selected at random during each sampling period, that an indepen-
dent check of the concentration of CO be obtained for these hours, and
the differences between the hourly averages obtained by the operator,
C_{oi}, and the corresponding check values, C_{ci}, i.e., d_{3i} = C_{oi} - C_{ci},
be computed. These differences are treated as a go/no-go check, i.e., if
either one of the differences is larger than 1 ppm in magnitude, all
hourly averages for that period (day) are rechecked and corrected;
otherwise, no corrections are made. The value ±1 ppm is a suggestion
only; experience or results from the auditing process will indicate a
more appropriate limit to use.
In order to compute an overall bias and standard deviation associated
with the data processing, request that the values of d_{3i} be reported
with the data (see Section 3.1.B in the Supervision Manual) and calculate

Bias = \bar{d}_3 = \frac{1}{n} \sum_{i=1}^{n} d_{3i} , and

Standard Deviation = s_3 = \sqrt{\frac{\sum_{i=1}^{n} (d_{3i} - \bar{d}_3)^2}{n - 1}} .
A. Assessment of Data Quality
The above values completely describe the variation of the reported
data from the true or stated concentrations in that all the operator,
instrument, environmental, and data reduction errors are contained as
components of one of the three errors. Hence, an overall statement of
data quality can be obtained by combining these results as follows:
Overall Bias = T = \bar{d}_1 + \bar{d}_2 + \bar{d}_3 ,
where the individual biases are set equal to zero if they are negligible
or not significantly different from zero.
Overall Standard Deviation = \sigma_T = \sqrt{s_1^2 + s_2^2 + s_3^2} ,

and hence the true concentration should fall in the following interval,
where C_m is the measured concentration,*

C_m - T ± 2σ_T ,

approximately 95 percent of the time, or within the interval

C_m - T ± 3σ_T ,

approximately 99.7 percent of the time. The value 2σ_T is actually dependent
on the number of audits conducted. If n is large, say about 25 or larger,
the value 2 is appropriate.

*A positive bias in the measurement must be subtracted from the measured
value when estimating the true concentration.
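The overall statement can be assembled in a few lines; a minimal Python
sketch follows, in which the three audit results are hypothetical.

    import math

    def overall_quality(audits):
        """Combine (bias, standard deviation) pairs from the three audits
        into the overall bias T and sigma_T = sqrt(s1^2 + s2^2 + s3^2)."""
        T = sum(bias for bias, s in audits)
        sigma_T = math.sqrt(sum(s ** 2 for bias, s in audits))
        return T, sigma_T

    # Hypothetical (bias, s) in ppm for the control-sample, water-vapor,
    # and data-reduction audits.
    T, sigma_T = overall_quality([(0.1, 0.72), (0.3, 0.30), (0.0, 0.50)])
    C_m = 12.0  # measured concentration, ppm
    print(f"95% interval: {C_m - T - 2*sigma_T:.2f} to {C_m - T + 2*sigma_T:.2f} ppm")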
In reporting the data quality, the bias, overall standard deviation,
and auditing level should be reported in an ideal situation. (See
Section 4.4 for further discussion on data presentation.) More restricted
information is suggested in the Supervision Manual as a minimal reporting
procedure.
In summary, the data provided by these three audits is sufficient
to provide an overall estimate of the data quality. One assumption has
been made in this analysis, which can be checked with the accumulation
of data, i.e., the values of s_1, s_2, and s_3 have been assumed to be
independent of the concentration level. In practice a slight dependence
on the concentration level is expected, and thus the data should be
logically grouped to measure the variation at low, intermediate, and high
concentration levels within the range of concentrations normally measured.
If the overall reported precisions/biases of the data meet or
satisfy the requirements of the user of the data, then a reduced auditing
level may be employed; on the other hand, if the data quality is not
adequate, assignable causes of large deviations should be determined, and
appropriate action taken to correct the deficiencies. This determination
may require an increased checking or auditing of the measurement process
as well as the performance of certain quality control checks, e.g.,
monitoring temperature and voltage variations over a 24-hour sampling
period, checking zero and span calibration procedures, and determining
the adequacy of the calibration curve.
B. Assessment of Individual Measurements
Individual checks on the standard deviations of the three audits
can be made by computing the ratio of the estimated standard deviation,
s_i, to the corresponding suggested standard, σ_i, given in Table 7. If
this ratio exceeds the critical values given in Table 7 for any one of the
audits, this would indicate that the source of trouble may be assigned to
that particular aspect of the measurement process. Critical values of this
ratio are given in Figure 13 as a function of sample size for two levels
of confidence. Having assessed the general problem area, one then needs
to perform the appropriate quality control checks to determine the
specific causes of the large deviations.
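Table 7 does not state its statistical basis, but its entries are
consistent with a one-sided chi-square test on the sample variance. The
following sketch, under that assumption (and requiring scipy), reproduces
the 95-percent row of Table 7 exactly and the 90-percent row to within
about 0.01.

    from scipy.stats import chi2

    def critical_ratio(n, confidence):
        """Critical value of s_i / sigma_i for a one-sided test of the
        sample variance: sqrt(chi-square percentile, n-1 df, over n-1)."""
        return (chi2.ppf(confidence, n - 1) / (n - 1)) ** 0.5

    for n in (5, 10, 15, 20, 25):
        print(n, round(critical_ratio(n, 0.90), 2), round(critical_ratio(n, 0.95), 2))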
4.2 Auditing Schemes
Auditing a measurement process costs time and money. On the other
hand, reporting poor quality data also can be very costly. For example,
the reported data might be used to determine a relationship between
health damage and concentrations of certain pollutants. If poor quality
data are reported, it is possible that invalid inferences or standards
derived from the data will cost many dollars. These implications may be
unknown to the manager until some report is provided to him referencing
his data; hence, the importance of reporting the precision and bias with
the data.
Table 7: Critical Values of s_i/σ_i

  Level of Confidence   Statistic    n=5     n=10    n=15    n=20    n=25
  90%                   s_i/σ_i      1.40    1.29    1.23    1.20    1.18
  95%                   s_i/σ_i      1.54    1.37    1.30    1.26    1.23

  s_i = estimated standard deviation
  σ_i = hypothesized or suggested standard deviation

  Audit                          Suggested Standard
  Control Sample                 σ_1 = 0.72 (at 10 ppm)*
  Water Vapor                    σ_2 = 0.30
  Data Reduction                 σ_3 = 0.50
  Overall Standard Deviation     σ_T = 0.93 (at 10 ppm)*

*For concentrations different from 10 ppm, use σ_1 = 0.072 × (concentration
in ppm) until further information concerning the dependence of σ on the
concentration of CO is obtained.
[Figure 13: Critical Values of the Ratio s_i/σ_i Vs. n, for the 90% and
95% confidence levels, over sample sizes n from 0 to 40.]
As a result of the cost of reporting poor quality data, it is desirable
to perform the necessary audits to assess the data quality and to invali-
date unsatisfactory data with high probability. On the other hand, if the
data quality is satisfactory, an auditing scheme will only increase the
data measurement and processing cost. An appropriate tradeoff or balance
of these costs must be sought. These costs are discussed in Section C
below.
Consider the use of a control sample to check the major errors in the
NDIR measurement process. Using the suggested standard deviation of 0.72
ppm for the measured concentration of CO at about the 10 ppm level and a
range of 1% in the deviation of the stated concentration of the control
sample from the true concentration, a single measured concentration
should fall between 10 ± [0.1 + 2(0.72)] ppm, or 10 ± 1.54 ppm, approximately
95% of the time, and between 10 ± [0.1 + 3(0.72)] ppm, or 10 ± 2.26 ppm,
approximately 99.7% of the time. A deviation outside the 2σ (or 3σ) limits is
considered a defect in enforcement (or routine) monitoring of air quality.
The number of defects can be determined from the results of the reported
audits.
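A minimal sketch of this defect rule (the function name and example
values are illustrative only):

    def is_defect(measured, stated, sigma=0.72, stated_tolerance=0.1, k=2):
        """Flag a control-sample audit as a defect if the measured value falls
        outside stated +/- (stated_tolerance + k * sigma); k = 2 for enforcement
        monitoring, k = 3 for routine monitoring, per the limits above."""
        return abs(measured - stated) > stated_tolerance + k * sigma

    print(is_defect(11.8, 10.0))        # True: outside 10 +/- 1.54 ppm
    print(is_defect(11.2, 10.0, k=3))   # False: inside 10 +/- 2.26 ppm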
Now consider the implication of an auditing scheme to determine or
judge the quality of the reported data in terms of an acceptance sampling
scheme. Let the data be assembled into homogeneous lots of N = 50 or 100
sampling periods. Suppose that n = 7 (10, 15, or 20) periods are sampled
in the manner suggested in Section 3.1. That is, one day is selected at
random during each 14 periods, and for 100 periods a sample of size 7
would be obtained. Figure 14 gives a diagram of the data flow, sampling,
and decision making process.
A. Statistics of Various Auditing Schemes
Suppose that the lot size is N = 100 periods (days), that n = 7 periods
are selected at random, and that there are 5% defectives in the 100, or 5
defectives. The probability that the sample of 7 contains 0, 1, ..., 6
defectives is given by the following.
[Figure 14: Data Flow Diagram for Auditing Scheme. Successive lots of
data (N = 100 days each) are sampled at n = 7 periods (days); the number
of defects d observed in the sample is used to calculate the costs of
accepting and rejecting the lot; the data are accepted as being of
acceptable quality if the cost comparison favors this action, and
rejected otherwise.]
p(0 \text{ defectives}) = \binom{N - D}{n} \Big/ \binom{N}{n} ,

and for d defectives,

p(d \text{ defectives}) = \binom{D}{d} \binom{N - D}{n - d} \Big/ \binom{N}{n} ,

where D is the number of defectives in the lot of N (the hypergeometric
distribution). The values are tabulated below for N = 100, n = 7, and the
two data quality levels.

  d      D = 5% Defectives     D = 15% Defectives
  0      0.6903                0.3083
  1      0.2715                0.4098
  2      0.0362                0.2152
  3      0.0020                0.0576
  4      0.00004               0.0084
  ≥5     ≈ 0                   ≈ 0
Figure 15A gives the probabilities of d = 0 and d ≤ 1 defectives as
a function of sample size. The probability is given for lot size N = 100,
D = 5 and 15% defectives, for sample sizes (auditing levels) from 1 to 20.
For example, if n = 10 measurements are audited and D = 5% defectives, the
probability of d = 0 defectives is 0.58. Figure 15B gives the probabilities
for lot size N = 50, for D = 6, 10, and 20% defectives, and for d = 0
and d ≤ 1. These curves will be used in calculating the cost relationships
of Section C.
For example, for N = 100, n = 7, and D = 5,

p(0 \text{ defectives}) = \binom{95}{7} \Big/ \binom{100}{7}
= \frac{95! \, 93!}{100! \, 88!}
= \frac{89 \cdot 90 \cdot 91 \cdot 92 \cdot 93}{96 \cdot 97 \cdot 98 \cdot 99 \cdot 100}
= 0.6903 .
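The tabulated probabilities can be reproduced with the Python standard
library alone; a minimal sketch:

    from math import comb

    def p_defectives(d, N=100, n=7, D=5):
        """Hypergeometric probability of observing d defectives in a sample
        of n drawn from a lot of N containing D defectives."""
        return comb(D, d) * comb(N - D, n - d) / comb(N, n)

    # Reproduces the tabulated values above.
    for d in range(5):
        print(d, round(p_defectives(d, D=5), 4), round(p_defectives(d, D=15), 4))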
[Figure 15A: Probability of d Defectives in the Sample If the Lot
(N = 100) Contains D% Defectives; curves for d = 0 and d ≤ 1 at D = 5%
and D = 15%, plotted against sample size n.]
[Figure 15B: Probability of d Defectives in the Sample If the Lot
(N = 50) Contains D% Defectives.]

This graph is for a lot size of N = 50. Only whole numbers of defectives
are physically possible; therefore, even values of D (i.e., 6, 10, and
20 percent) are given rather than the odd values of 5 and 15 percent as
given in Figure 15A.
B. Selecting the Auditing Level
One consideration in determining an auditing level n used in assessing
the data quality is to calculate the value of n which for a prescribed
level of confidence will imply that the percent of defectives in the lot is
less than ten percent, say, if zero defectives are observed in the sample.*
Figures 16A and 16B give the percentage of good measurements in the lot
sampled for several levels of confidence, 50, 60, 80, 90, and 95%. The
curves in 16A assume that 0 defectives are observed in the sample, and
16B, 1 defective observed in the sample. The solid curves on the figures
are based on a lot size of N = 100; two dashed curves are shown in
Figure 16A for N = 50; the differences between the corresponding curves
are small for the range of sample sizes considered.
For example, for zero defectives in a sample of 7 from a lot of
N = 100, one is 50% confident that there are less than 10% defective
measurements among the 100 reported values. For zero defectives in a
sample of 15 from N = 100, one is 80% confident that there are less than
10% defective measurements. Several such values were obtained from
Figure 16A and placed in Table 8 below for convenient reference.
Table 8: Required Auditing Levels n for Lot Size N = 100,
Assuming Zero Defectives

  Confidence Level    D = 10%    D = 15%    D = 20%
  50%                 7          <5         <5
  60%                 9          6          <5
  80%                 15         10         8
  90%                 20         15         11
  95%                 ≈25        18         13
*Obviously, the definition of defective need not always be the same and
must be clearly stated each time. The definitions employed herein are
based on results of collaborative test programs.
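The entries of Table 8 can be checked by searching for the smallest n
supporting the stated confidence; a minimal sketch, assuming the
confidence statement 1 - p(0 | D) described above:

    from math import comb

    def confidence_zero_defects(n, D, N=100):
        """Confidence that a lot of N contains fewer than D defectives,
        given zero defectives in a random sample of n."""
        return 1 - comb(N - D, n) / comb(N, n)

    def required_n(confidence, D, N=100):
        """Smallest n whose zero-defect result supports the stated confidence."""
        for n in range(1, N + 1):
            if confidence_zero_defects(n, D, N) >= confidence:
                return n

    print(required_n(0.50, 10))  # 7, as in Table 8
    print(required_n(0.80, 10))  # 15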
[Figure 16A: Percentage of Good Measurements Vs. Sample Size for No
Defectives and Indicated Confidence Level; solid curves are for lot size
N = 100, dashed curves for N = 50.]
[Figure 16B: Percentage of Good Measurements Vs. Sample Size for 1
Defective Observed and Indicated Confidence Level (60%, 80%, 90%, 95%);
Lot Size = 100.]
C. Cost Relationships
The auditing scheme can be translated into costs using the costs
of auditing, rejecting good data, and accepting poor quality data.
These costs may be very different in different geographic locations.
Therefore, purely for purposes of illustrating a method, the cost of
auditing is assumed to be directly proportional to the auditing level.
For n = 7 it is assumed to be $155 per lot of 100. The cost of rejecting
good quality data is assumed to be $600 for a lot of N = 100. The cost
of reporting poor quality data is taken to be $800. To repeat, these
costs given in Table 9 are assumed for the purpose of illustrating a
methodology of relating auditing costs to data quality. Meaningful
results can only be obtained by using correct local information.
Table 9: Costs vs. Data Quality

                        Data Quality "Good"            Data Quality "Bad"
                        (D ≤ 10%)                      (D > 10%)

  Reject Lot of Data    Incorrect Decision: lose       Correct Decision: lose cost
                        cost of performing audit       of performing audit, save
                        plus cost of rejecting         cost of not permitting poor
                        good quality data.             quality data to be reported.
                        (-$600 - $155)                 ($400 - $155)

  Accept Lot of Data    Correct Decision: lose         Incorrect Decision: lose
                        cost of performing audit.      cost of performing audit
                        (-$155)                        plus cost of declaring poor
                                                       quality data valid.
                                                       (-$800 - $155)

Cost of performing audit varies with the sample size; it is assumed to be
$155 for n = 7 audits per N = 100 lot size.
Suppose that 50 percent of the lots have more than 10 percent
defective and 50 percent have less than 10 percent defective. (The
percentage of defective lots can be varied as will be described in the
final report.) For simplicity of calculation, it is further assumed
that the good lots have exactly 5 percent defectives and the poor quality
lots have 15 percent defective.
Suppose that n = 7 measurements out of a lot N = 100 have been audited
and none found to be defective. Furthermore, consider the two possible
decisions of rejecting the lot and accepting the lot and the relative costs
of each. These results are given in Tables 10A and 10B.
Table 10A: Costs If 0 Defectives Are Observed and the Lot Is Rejected

  Reject Lot       D = 5% (Incorrect Decision)    D = 15% (Correct Decision)
                   P_1 = 0.69                     P_2 = 0.31
                   C_1 = -600 - 155               C_2 = 400 - 155
  Net Value ($)    P_1 C_1 = -$521                P_2 C_2 = $76

  Cost = P_1 C_1 + P_2 C_2 = -$445

Table 10B: Costs If 0 Defectives Are Observed and the Lot Is Accepted

  Accept Lot       D = 5% (Correct Decision)      D = 15% (Incorrect Decision)
                   P_1 = 0.69                     P_2 = 0.31
                   C_3 = -155                     C_4 = -800 - 155
  Net Value ($)    P_1 C_3 = -$107                P_2 C_4 = -$296

  Cost = P_1 C_3 + P_2 C_4 = -$403
The value P_1 (P_2) in the above table is the probability that the
lot is 5% (15%) defective given that 0 defectives have been observed.
For example,

P_1 = P(lot is 5% defective and 0 defectives observed) / P(0 defectives observed)
    = \frac{0.5(0.69)}{0.5(0.69) + 0.5(0.31)} = 0.69 ,

and similarly,

P_2 = P(lot is 15% defective and 0 defectives observed) / P(0 defectives observed)
    = \frac{0.5(0.31)}{0.5(0.31) + 0.5(0.69)} = 0.31 .

It was assumed that the probability that the lot is 5% defective is 0.5.
The probability of observing zero defectives, given the lot quality is 5%
or 15%, can be read from the graphs of Figures 15A or 15B.
A similar table can be constructed for 1, 2, ..., defectives and the
net costs determined. The net costs are tabulated in Table 11 for 1, 2,
and 3 defectives. The resulting costs indicate that the decision preferred
from a purely monetary viewpoint is to accept the lot if 0 defectives are
observed and to reject it otherwise. The decision cannot be made on this
basis alone. The details of the audit scheme also affect the confidence
which can be placed in the data qualification; consideration must be given
to that aspect as well as to cost.
Table 11: Costs in Dollars

  Decision       d = 0    d = 1    d = 2    d = 3
  Reject Lot     -445     -155     +101     +207
  Accept Lot     -403     -635     -839     -928

(d = number of defectives observed in the sample)
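A short sketch reproducing Table 11 (to within a dollar or two, since the
text rounds the posterior probabilities to 0.69 and 0.31):

    from math import comb

    def p_defectives(d, D, N=100, n=7):
        return comb(D, d) * comb(N - D, n - d) / comb(N, n)

    def net_costs(d, prior_good=0.5):
        """Posterior-weighted net costs of rejecting and accepting a lot,
        given d observed defectives; cost figures from Tables 10A and 10B."""
        pg, pb = p_defectives(d, 5), p_defectives(d, 15)   # D = 5% vs. 15%
        post_good = prior_good * pg / (prior_good * pg + (1 - prior_good) * pb)
        post_bad = 1 - post_good
        reject = post_good * (-600 - 155) + post_bad * (400 - 155)
        accept = post_good * (-155) + post_bad * (-800 - 155)
        return round(reject), round(accept)

    for d in range(4):
        print(d, net_costs(d))   # approximately (-445, -403), (-155, -635), ...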
D. Cost Vs. Audit Level
After the decision criteria have been selected, an average cost can
be calculated. Based on the results of Table 11, the decision criterion
is to accept the lot if d = 0 defectives are observed and to reject the
lot if d = 1 or more defectives are observed. All the assumptions of
the previous section are retained. The auditing level is later varied
to obtain the data in Figure 17.
One example calculation is given below and summarized in Table 12.
The four cells of Table 12 consider all the possible situations which can
occur, i.e., the lots may be bad or good and the decision can be to
either accept or reject the lot based on the rule indicated above (accept
if d = 0, reject if d ≥ 1). The costs are exactly as indicated in Tables
10A and 10B. The probabilities are computed as follows.
q_1 = (prob. that the lot is 5% defective and 1 or more defectives are
      obtained in the sample)
    = (prob. that the lot is 5% defective)(prob. 1 or more defectives are
      obtained in the sample given the lot is 5% defective)
    = 0.5 (0.31) = 0.155 .

Similarly q_2, q_3, and q_4 in Table 12 are obtained as indicated below.

q_2 = 0.5 (0.69) = 0.345
q_3 = 0.5 (0.69) = 0.345
q_4 = 0.5 (0.31) = 0.155
The sum of all the q's must be unity as all possibilities are considered. The
value 0.5 in each equation is the assumed proportion of good lots (or poor
quality lots). The values 0.31 and 0.69 are the conditional probabilities
that given the quality of the lot, either d = 0 or d = 1 or more defectives
are observed in the sample. Further details of the computation are given
in the final report of this contract.
Table 12: Overall Average Costs for One Acceptance-Rejection Scheme

  Decision                             Good Lots (D = 5%)    Bad Lots (D = 15%)

  Reject any lot of data if 1 or       q_1 = 0.155           q_2 = 0.345
  more defects are found.              C_1 = -$755           C_2 = $245
                                       q_1 C_1 + q_2 C_2 = -$32

  Accept any lot of data if 0          q_3 = 0.345           q_4 = 0.155
  defects are found.                   C_3 = -$155           C_4 = -$955
                                       q_3 C_3 + q_4 C_4 = -$202

  Average Cost = -$234
In order to interpret the concept of average cost, consider a large
number of data lots coming through the system; a decision will be made
on each lot in accordance with the above and a resulting cost of the
decision will be determined. For a given lot, the cost may be any one of
the four costs, and the proportion of lots with each cost is given by the
q's. Hence the overall average cost is given by the sum of the products
of the q's and the corresponding C's.

In order that one may relate the average cost as given in Table 12
to the costs given in Table 11, it is necessary to weight the costs in
Table 11 by the relative frequency of occurrence of each observed number
of defectives, i.e., prob(d). This calculation is made below.
  No. of        Decision    Cost ($) from
  Defectives    Rule        Table 11         Prob(d)    Cost × Prob(d)
  d = 0         Accept      -403             0.50       -$201.5
  1             Reject      -155             0.34       -52.7
  2             Reject      101              0.1255     12.6
  3             Reject      207              0.030      6.2
  4             Reject      244              0.0042     1.0

  Totals                                     0.9997     -$234.4
Thus the value -$234 is the average cost of Table 12 and the weighted
average of the costs of Table 11. The weights, Prob(d), are obtained
as follows:
Prob(d=0) = Prob(lot is good and d=0 defectives are observed)
+ Prob(lot is poor quality and d=0 defectives are observed)
= 0.5 (0.69) + 0.5 (0.31) = 0.50.
This is the proportion of all lots which will have exactly 0 defectives
under the assumptions stated. For d = 1, 2, 3, and 4 the values of the
probabilities in parentheses above can be read from the table of
probabilities given in Part A above.

Based on the stated assumptions, the average cost was determined for
several auditing levels as indicated in Table 12. These costs are given
in Figure 17. One observes from this figure that n = 7 is cost effective
given that one accepts the lot only if zero defectives are observed.
(See the curve for d = 0.)

If the lots are accepted if either 0 or 1 defectives are observed,
then, referring to the curve for d ≤ 1, the best sampling level is n = 15.
The curve of the probability of d = 0 (d ≤ 1) defectives in a lot of
N = 100 measurements if there are 10% defectives is also given on the
same figure.

[Figure 17: Average Cost Vs. Auditing Level n for the two acceptance
rules (accept on d = 0; accept on d ≤ 1), with the probability of d = 0
(d ≤ 1) defectives in the lot shown on the same figure.]
Another alternative is to accept all data without performing an
audit. Assuming that one-half (50%) of the lots contain more than 10%
defectives, the average cost on a per lot basis would be 0.5(-$800) = -$400.
This, however, would preclude qualification of the data. Regardless of cost
it would be an unacceptable alternative.
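The average-cost curve of Figure 17 can be sketched under the same
assumptions (audit cost proportional to n, $155 at n = 7, and a 50/50 mix
of 5%- and 15%-defective lots):

    from math import comb

    def p0(D, n, N=100):
        """Probability of zero defectives in a sample of n from a lot with D defectives."""
        return comb(N - D, n) / comb(N, n)

    def average_cost(n, audit_cost_per_sample=155 / 7):
        """Average cost per lot for the rule: accept on d = 0, reject on d >= 1."""
        a = audit_cost_per_sample * n
        good0, bad0 = p0(5, n), p0(15, n)
        return (0.5 * good0 * (-a)                    # accept good lot
                + 0.5 * (1 - good0) * (-600 - a)      # reject good lot
                + 0.5 * bad0 * (-800 - a)             # accept bad lot
                + 0.5 * (1 - bad0) * (400 - a))       # reject bad lot (saving)

    for n in (5, 7, 10, 15, 20):
        print(n, round(average_cost(n)))   # n = 7 gives about -$233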
4.3 Data Quality Versus Cost of Implementing Actions
The discussion and methodology given in the previous section were
concerned with the auditing scheme (i.e., level of audit or sample size,
costs associated with the data quality, etc.). Increasing the level
of audit of the measurement process does not by itself change the
quality of the data but it does increase the information about the
quality of the reported data. Hence, fewer good lots will be rejected
and more poor quality data will be rejected. If the results of the
audit imply that certain process measurement variables are major contrib-
utors to the total error or variation in the reported concentration of
CO, then alternative strategies for reducing these variations need to be
investigated. This section illustrates a methodology for comparing the
strategies to obtain the desired precision of the data. In practice it
would be necessary to experiment with one or more strategies, determine
the potential increase in precision, and relate the precisions to the
relative costs as indicated herein. Several strategies are considered,
but only a few of the less costly ones would be acceptable, as
illustrated in Figure 18. The assumed values of the standard deviations
and biases for each type of audit are not based on actual data, except
for the reference method, for which values were taken from Ref. 1. These
values are probably smaller than those experienced in the field.
Several alternative actions or strategies can be taken to increase
the precision of the reported data. For example, if the instrument
responses to voltage, temperature, and humidity variations are large
contributors to the variation of an observed instrument response, then
additional control equipment for one or more of the environmental effects
can reduce the variation of the measured responses by calculated amounts
and thus reduce the error of the reported concentrations. In this manner,
the cost of the added controls can be related to the data quality as
measured by the estimated errors of the reported results. It must be
recognized that these errors are dependent to some extent on the
concentration.
Suppose that it is desired to make a statement that the concentration
of CO is within 2.1 ppm (3σ limit) for 1-hour averages and that the
minimal cost control equipment and checking procedures are to be employed
to attain this desired precision. In order to determine a cost efficient
procedure, it is necessary to estimate the variance for each source of
error (or variation) for each strategy and then select the strategy or
combination of strategies which yields the desired precision with minimum
cost. One such calculation is summarized in Table 13 with assumed costs
of equipment and control procedures.
Examining the graph in Figure 18 of cost versus precision, one
observes that the combination of actions A5 and A6 is the least costly
strategy that meets the required goal of 2.1 ppm (σ_T ≤ 0.7 ppm) in the
reported concentration. Similarly, A5 meets the goal of 2.5 ppm
(σ_T ≤ 0.83 ppm). The assumed values of the standard deviations of the
measured concentrations of CO for the alternative courses of action are
given in Table 13. The estimated costs for the various alternatives are
given in Table 6 of Section 3 and in Table 13.
Two curves are given in Figure 18 to illustrate the assumed
relationship between the cost of reporting poor quality data and the
measure of precision σ_T. In one curve it is assumed that there is zero
cost in reporting data for which σ_T ≤ 0.60 and that the cost increases
rapidly beyond σ_T = 0.60. The other curve assumes σ_T = 0.80 is acceptable.
Data processing or data reduction errors have been included in these
sample analyses for illustration purposes. The value assumed for σ_3 in
Table 13 is an estimate and was not derived from actual data. If
information regarding the distribution of data processing errors is
desired, the field supervisor could be requested to forward actual values
for d_31, d_32, ..., d_37 with the data qualification form shown in Figure 12
of the Supervision Manual.
Table 13: Assumed Standard Deviations for Alternative Strategies*

                                A0    A1    A2    A3    A4    A5    A6    A7    A8

  1. Control Sample      d̄_1   0.25  0.25  0.25  0.10  0.25  0.25  0.25  0.25  0.25
                         σ_1   0.72  0.70  0.69  0.68  0.54  0.72  0.57  0.68  0.72
  2. Water Vapor         d̄_2   0.30  0.30  0.30  0.30  0.30  0.30  0.30  0.30  0.10
     Interference        σ_2   0.30  0.30  0.30  0.30  0.30  0.30  0.30  0.30  0.10
  3. Data Reduction      d̄_3   0     0     0     0     0     0     0     0     0
                         σ_3   0.50  0.50  0.50  0.50  0.50  0.50  0.50  0.50  0.50

  σ_T² (overall variance)**    0.86  0.83  0.82  0.80  0.63  0.67  0.66  0.80  0.78
  σ_T (overall std. dev.)      0.93  0.91  0.90  0.90  0.79  0.82  0.82  0.90  0.88
  Max. Pos. Bias***            0.55  0.55  0.55  0.40  0.55  0.55  0.55  0.55  0.35
  Max. Neg. Bias               0     0     0     0     0     0     0     0     0
  Added Cost/100 Periods       $0    $90   $40   $40   $600  $11   $75   $15   $35

*Alternative strategies are given in Table 6, Section 3; the σ_i's,
i = 1, 2, and 3, are assumed values based on results given in Ref. 1, and
where data are not available they are engineering judgments.

**σ_T² = σ_1² + σ_2² + σ_3²; σ_T = \sqrt{σ_T²}.

***Bias = T = d̄_1 + d̄_2 + d̄_3; the biases and standard deviations which
are dependent on concentration level are determined at 10 ppm. In order to
estimate the true concentration, the estimated bias T must be subtracted
from the measured concentration and then the appropriate 2σ or 3σ error
added, i.e., C_m - T ± 2σ_T gives the 95% limits.
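The selection logic can be sketched directly from the table; the
following uses a few of the strategies and their assumed values from
Table 13:

    import math

    # (name, sigma_1, sigma_2, sigma_3, added cost per 100 periods).
    strategies = [
        ("A0", 0.72, 0.30, 0.50, 0),
        ("A4", 0.54, 0.30, 0.50, 600),
        ("A6", 0.57, 0.30, 0.50, 75),
        ("A8", 0.72, 0.10, 0.50, 35),
    ]

    def sigma_T(s1, s2, s3):
        return math.sqrt(s1 ** 2 + s2 ** 2 + s3 ** 2)

    goal = 0.83  # sigma_T goal for the 2.5 ppm (3 sigma) statement
    feasible = [(cost, name) for name, s1, s2, s3, cost in strategies
                if sigma_T(s1, s2, s3) <= goal]
    print(min(feasible))  # cheapest strategy meeting the goal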
[Figure 18: Cost Vs. Precision σ_T; added costs of the alternative
strategies and two assumed curves for the cost of reporting poor quality
data.]
4.4 Data Presentation
A reported value whose precision and accuracy (bias) are unknown is
of little, if any, worth. The actual error of a reported value—that is,
the magnitude and sign of its deviation from the true value—is usually
unknown. Limits to this error, however, can usually be inferred, with
some risk of being incorrect, from the precision of the measurement
process by which the reported value was obtained and from reasonable
limits to the possible bias of the measurement process. The bias, or
systematic error, of a measurement process is the magnitude and direc-
tion of its tendency to measure something other than what was intended;
its precision refers to the closeness or dispersion of successive
independent measurements generated by repeated applications of the
process under specified conditions, and its accuracy is determined by
the closeness to the true value characteristic of such measurements.
Precision and accuracy are inherent characteristics of the measure-
ment process employed and not of the particular end result obtained.
From experience with a particular measurement process and knowledge of
its sensitivity to uncontrolled factors, one can often place reasonable
bounds on its likely systematic error (bias). This has been done in the
model for the measured concentration as indicated in Table 13. It is
also necessary to know how well the particular value in hand is likely
to agree with other values that the same measurement process might have
provided in this instance or might yield on measurements of the same mag-
nitude on another occasion. Such information is provided by the estimated
standard deviation of the reported value, which measures (or is an index
of) the characteristic disagreement of repeated determinations of the
same quantity by the same method and thus serves to indicate the precision
(strictly, the imprecision) of the reported value.
A reported result should be qualified by a quasi-absolute type of
statement that places bounds on its systematic error and a separate
statement of its standard deviation, or of an upper bound thereto, when-
ever a reliable determination of such value is available. Otherwise a
computed value of the standard deviation should be given together with
a statement of the number of degrees of freedom on which it is based.
As an example, consider the case given in Section 4.3, Table 13.
In this case the estimated 2σ limits of the reported concentration of CO
by the NDIR reference method are ±2(0.93), or ±1.86 ppm. Suppose that
a positive bias of 0.3 ppm results from the water vapor interference;
then the results could be reported as the measured concentration, C_m,
with the following 2σ limits and audit level, e.g.,

C_m - 0.3 ± 1.9 ppm, n = 20 .
The replication error is a measure of the variation of successive
determinations of CO with the same operator and instrument on the same
sample within a time interval short enough to avoid change of
environmental factors. This replication error as given by the standard
deviation s was measured to be about 0.17 mg/m³ (0.15 ppm).

The repeatability error is a measure of the variation between test
results on the same sample on different days by the same laboratory. The
standard deviation was estimated to be s = 0.57 mg/m³ (0.50 ppm). This
measure of variation must by definition include the replication error, i.e.,

σ(repeatability) = [σ²(replication) + σ²(day)]^{1/2} .

It is indicated (Ref. 1) that the σ for repeatability varies significantly
between laboratories; it appears to depend on the concentration, but the
results are erratic.
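For example, solving this relation with the two estimates above gives a
between-day component of σ(day) = [0.50² - 0.15²]^{1/2} ≈ 0.48 ppm.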
The requirements for reporting data quality as outlined in the
Supervision Manual involve adopting a standard, performing an audit, and
comparing the audit result to the standard. A defect is defined in terms
of the standard. This approach does not make maximum use of the collected
data, but its simplicity should aid in the implementation of a quality
assurance program. After experience has been gained in using the auditing
scheme and in calculating the results, it is recommended that the above,
more comprehensive method of data presentation be implemented.
4.5 Personnel Requirements
Personnel requirements as described here are in terms of the NDIR
method only. It is realized that these requirements may be only a minor
factor in the overall requirements from a systems point of view where
several measurement methods are of concern simultaneously.
A. Training and Experience
1. Director
The director or one of the professional-level employees should
have a basic understanding of statistics as used in quality control. He
should be able to perform calculations, such as the mean and standard
deviation, required to define data quality. The importance of and require-
ments for performing independent and random checks as part of the auditing
process must be understood. Three references which treat the above-
mentioned topics are listed below:
Probability and Statistics for Engineers, Irwin Miller
and John E. Freund, published by Prentice-Hall, Inc.,
Englewood Cliffs, N.J., 1965.
Introductory Engineering Statistics, Irwin Guttman and
S. S. Wilks, published by John Wiley and Sons, Inc.,
New York, N. Y., 1965.
The Analysis of Management Decisions, William T. Morris,
published by Richard D. Irwin, Inc., Homewood, Illinois,
1964.
2. Operator
There are or can be two levels of operation involved in the NDIR
method.
First, an operator or technician who is involved in the preliminary or
initial setup and checkout or is responsible for troubleshooting and
repairing the analyzer should have technical training in electronics and/or
instrumentation as obtained in a technical or service school or
several years of on-the-job experience. For a specific analyzer it would
be desirable to have the technician checked out by a manufacturer's repre-
sentative or at least to have him participate, with the representative, in
the initial installation and startup. The manufacturer's instruction
book should be available for study or reference by the technician.
Routine operations involve the use of external controls only and
require no high-level skills. A high school graduate with proper super-
vision and on-the-job training can become effective at this level in a
very short time.
An effective on-the-job training program could be as follows:
a) Observe experienced operator perform the different
tasks in the measurement process.
b) Study the operational manual of this document and
use it as a guide for performing the operations.
c) Perform operations under the direct supervision
of an experienced operator.
d) Perform operations independently but with a high
level of quality control checks utilizing the
technique described in the section on Operator
Proficiency Evaluation Procedures below to encourage
high quality work.
Another alternative would be to have the operator attend an appropriate
basic training course sponsored by EPA.
4.6 Operator Proficiency Evaluation Procedures
One technique which may be useful for early training and qualification
of operators is a system of rating the operators as indicated below.
Various types of violations (e.g., invalid sample resulting from
operator carelessness, failure to maintain records, use of improper equip-
ment, or calculation error) would be assigned a number of demerits
depending upon the relative consequences of the violation. These demerits
could then be summed over a fixed period of time of one week, month, etc.,
and a continuous record maintained. The mean and standard deviation of
the number of demerits per week can be determined for each operator and
a quality control chart provided for maintaining a record of proficiency
of each operator and whether any changes in this level have occurred. In
comparing operators, it is necessary to assign demerits on a per unit
work load basis in order that the inferences drawn from the chart be
consistent. It is not necessary or desirable for the operator to be
aware of this form of evaluation. The supervisor should use it as a means
of determining when and what kind of instructions and/or training is
needed.
A sample QC chart is given in Figure 19 below. This chart assumes
that the mean and standard deviation of the number of demerits per week,
e.g., are 5 and 1, respectively. After several operators have been evalu-
ated for a few weeks, the limits can be checked to determine if they are
both reasonable and effective in helping to improve and/or maintain the
quality of the air quality measurement.
The limits should be based on the operators whose proficiency is
average or slightly better than average. Deviations outside the QC
limits, either above or below, should be considered in evaluating the
operators. Identifying those operators whose proficiency may have
improved is just as important as knowing those operators whose proficiency
may have decreased.
The above procedure may be extended to an entire monitoring network
(system). With appropriate definitions of work load, a continuous record
may be maintained of demerits assigned to the system. This procedure might
serve as an incentive for teamwork, making suggestions for improved
operation procedures, etc.
[Figure 19: Sample QC Chart for Evaluating Operator Proficiency; number
of demerits per week plotted against time intervals (weeks 1-13).]
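A minimal sketch of such a chart check, assuming the mean of 5 and
standard deviation of 1 used in Figure 19 and the conventional 3-sigma
control limits (the weekly record is hypothetical):

    def qc_limits(mean=5.0, std=1.0, k=3):
        """Lower and upper control limits for demerits per week."""
        return mean - k * std, mean + k * std

    lcl, ucl = qc_limits()
    weekly_demerits = [4, 6, 5, 9, 3, 5]          # hypothetical record
    flags = [w for w in weekly_demerits if not lcl <= w <= ucl]
    print(f"limits: {lcl} to {ucl}; out-of-control weeks: {flags}")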
REFERENCES
1. Herbert C. McKee et al., "Collaborative Study of Reference Method
for the Continuous Measurement of Carbon Monoxide in the Atmosphere
(Non-Dispersive Infrared Spectrometry)," Southwest Research Institute,
Contract CPA 70-40, SwRI Project 01-2811, San Antonio, Texas, May 1972.
2. Frank McElroy, "The Intech NDIR-CO Analyzer," presented at the 11th
Methods Conference in Air Pollution, University of California,
Berkeley, California, April 1, 1970.
3. Hezekiah Moore, "A Critical Evaluation of the Analysis of Carbon
Monoxide with Nondispersive Infrared (NDIR)," presented at the
9th Conference on Methods in Air Pollution and Industrial Hygiene
Studies, Pasadena, California, February 7-9, 1968.
4. Richard F. Dechant and Peter K. Mueller, "Performance of a Continuous
NDIR Carbon Monoxide Analyzer," AIHL Report No. 57, Air and Industrial
Hygiene Laboratory, Department of Public Health, Berkeley, California,
June 1969.
5. Joseph M. Colucci and Charles R. Begeman, "Carbon Monoxide in Detroit,
New York, and Los Angeles Air," Environmental Science and Technology
3(1), January 1969, pp. 41-47.
6. "Tentative Method of Continuous Analysis for Carbon Monoxide Content
of the Atmosphere (Nondispersive Infrared Method)," in Methods of
Air Sampling and Analysis, American Public Health Association,
Washington, D.C., 1972, pp. 233-238.
7. John Mandel, The Statistical Analysis of Experimental Data, Interscience
Publishers, Division of John Wiley & Sons, New York, N.Y., 1964.
APPENDIX
REFERENCE METHOD FOR THE CONTINUOUS MEASUREMENT
OF CARBON MONOXIDE IN THE ATMOSPHERE
(NON-DISPERSIVE INFRARED SPECTROMETRY)
Reproduced from Appendix C, "National Primary and Secondary Ambient Air
Quality Standards," Federal Register, Vol. 36, No. 84, Part II, Friday,
April 30, 1971.
APPENDIX C—REFERENCE METHOD FOR THE CONTINUOUS MEASUREMENT OF CARBON
MONOXIDE IN THE ATMOSPHERE (NON-DISPERSIVE INFRARED SPECTROMETRY)
1. Principle and Applicability.
1.1 This method is based on the absorption of infrared radiation by
carbon monoxide. Energy from a source emitting radiation in the infrared
region is split into parallel beams and directed through reference and
sample cells. Both beams pass into matched cells, each containing a
selective detector and CO. The CO in the cells absorbs infrared radiation
only at its characteristic frequencies, and the detector is sensitive to
those frequencies. With a nonabsorbing gas in the reference cell, and
with no CO in the sample cell, the signals from both detectors are
balanced electronically. Any CO introduced into the sample cell will
absorb radiation, which reduces the temperature and pressure in the
detector cell and displaces a diaphragm. This displacement is detected
electronically and amplified to provide an output signal.
1.2 This method is applicable to the determination of carbon monoxide in
ambient air, and to the analysis of gases under pressure.
2. Range and Sensitivity.
2.1 Instruments are available that measure in the range of 0 to 58
mg./m.³ (0-50 p.p.m.), which is the range most commonly used for urban
atmospheric sampling. Most instruments measure in additional ranges.
2.2 Sensitivity is 1 percent of full-scale response per 0.6 mg. CO/m.³
(0.5 p.p.m.).
3. Interferences.
3.1 Interferences vary between individual instruments. The effect of
carbon dioxide interference at normal concentrations is minimal. The
primary interference is water vapor, which with no correction may give an
interference equivalent of as high as 12 mg. CO/m.³ Water vapor
interference can be minimized by (a) passing the air sample through
silica gel or similar drying agents, (b) maintaining constant humidity in
the sample and calibration gases by refrigeration, (c) saturating the air
sample and calibration gases to maintain constant humidity, or (d) using
narrow-band optical filters in combination with some of these measures.
3.2 Hydrocarbons at ambient levels do not ordinarily interfere.
4. Precision, Accuracy, and Stability.
4.1 Precision determined with calibration gases is ±0.5 percent of full
scale in the 0-58 mg./m.³ range.
4.2 Accuracy depends on instrument linearity and the absolute
concentrations of the calibration gases. An accuracy of ±1 percent of
full scale in the 0-58 mg./m.³ range can be obtained.
4.3 Variations in ambient room temperature can cause changes equivalent
to as much as 0.6 mg. CO/m.³ per °C. This effect can be minimized by
operating the analyzer in a temperature-controlled room. Pressure changes
between span checks will cause changes in instrument response. Zero drift
is usually less than ±1 percent of full scale per 24 hours if cell
temperature and pressure are maintained constant.
5. Apparatus.
5.1 Carbon Monoxide Analyzer. Commercially available instruments should
be installed on location and demonstrated, preferably by the
manufacturer, to meet or exceed manufacturer's specifications and those
described in this method.
5.2 Sample Introduction System. Pump, flow control valve, and flowmeter.
5.3 Filter (In-line). A filter with a porosity of 2 to 10 microns should
be used to keep large particles from the sample cell.
5.4 Moisture Control. Refrigeration units are available with some
commercial instruments for maintaining constant humidity. Drying tubes
(with sufficient capacity to operate for 72 hours) containing indicating
silica gel can be used. Other techniques that prevent the interference of
moisture are satisfactory.
6. Reagents.
6.1 Zero Gas. Nitrogen or helium containing less than 0.1 mg. CO/m.³
6.2 Calibration Gases. Calibration gases corresponding to 10, 20, 40, and
80 percent of full scale are used. Gases must be provided with
certification or guaranteed analysis of carbon monoxide content.
6.3 Span Gas. The calibration gas corresponding to 80 percent of full
scale is used to span the instrument.
7. Procedure.
7.1 Calibrate the instrument as described in 8.1. All gases (sample,
zero, calibration, and span) must be introduced into the entire analyzer
system. Figure C1 shows a typical flow diagram. For specific operating
instructions, refer to the manufacturer's manual.
8. Calibration.
8.1 Calibration Curve. Determine the linearity of the detector response
at the operating flow rate and temperature. Prepare a calibration curve
and check the curve furnished with the instrument. Introduce zero gas and
set the zero control to indicate a recorder reading of zero. Introduce
span gas and adjust the span control to indicate the proper value on the
recorder scale (e.g., on the 0-58 mg./m.³ scale, set the 46 mg./m.³
standard at 80 percent of the recorder chart). Recheck zero and span
until adjustments are no longer necessary. Introduce intermediate
calibration gases and plot the values obtained. If a smooth curve is not
obtained, calibration gases may need replacement.
9. Calculations.
9.1 Determine the concentrations directly from the calibration curve. No
calculations are necessary.
9.2 Carbon monoxide concentrations in mg./m.³ are converted to p.p.m. as
follows:
p.p.m. CO = mg. CO/m.³ × 0.873
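As a one-line illustration of this conversion (the factor 0.873
corresponds to 25° C and 760 mm Hg):

    def mg_per_m3_to_ppm(mg_per_m3):
        """Convert a CO concentration from mg/m3 to ppm (25 deg C, 760 mm Hg)."""
        return mg_per_m3 * 0.873

    print(mg_per_m3_to_ppm(11.5))  # about 10 ppm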
10. Bibliography.
The Intech NDIR-CO Analyzer by Frank McElroy. Presented at the 11th
Methods Conference in Air Pollution, University of California, Berkeley,
Calif., April 1, 1970.
Jacobs, M. B. et al., J.A.P.C.A. 9, No. 2, 110-114, August 1959.
MSA LIRA Infrared Gas and Liquid Analyzer Instruction Book, Mine Safety
Appliances Co., Pittsburgh, Pa.
Beckman Instruction 1635B. Models 215A, 315A and 415A Infrared Analyzers.
Beckman Instrument Company, Fullerton, Calif.
Continuous CO Monitoring System, Model A 6011, Intertech Corp.,
Princeton, N.J.
Bendix-UNOR Infrared Gas Analyzers. Ronceverte, W. Va.
ADDENDA
A. Suggested Performance Specifications for NDIR Carbon Monoxide
Analyzers:

  Range (minimum)                          0-58 mg./m.³ (0-50 p.p.m.)
  Output (minimum)                         0-10, 100, 1,000, 5,000 mv. full scale
  Minimum detectable sensitivity           0.6 mg./m.³ (0.5 p.p.m.)
  Lag time (maximum)                       15 seconds
  Time to 90 percent response (maximum)    30 seconds
  Rise time, 90 percent (maximum)          15 seconds
  Fall time, 90 percent (maximum)          15 seconds
  Zero drift (maximum)                     3 percent/week, not to exceed
                                           1 percent/24 hours
  Span drift (maximum)                     3 percent/week, not to exceed
                                           1 percent/24 hours
  Precision (minimum)                      ±0.5 percent
  Operational period (minimum)             3 days
  Noise (maximum)                          ±0.5 percent
  Interference equivalent (maximum)        1 percent of full scale
  Operating temperature range (minimum)    5-40° C
  Operating humidity range (minimum)       10-100 percent
  Linearity (maximum deviation)            1 percent of full scale
B. Suggested Definitions of Performance Specifications:
Range—The minimum and maximum measurement limits.
Output—Electrical signal which is proportional to the measurement;
intended for connection to readout or data processing devices. Usually
expressed as millivolts or milliamps full scale at a given impedance.
Full Scale—The maximum measuring limit for a given range.
Minimum Detectable Sensitivity—The smallest amount of input concentration
that can be detected as the concentration approaches zero.
Accuracy—The degree of agreement between a measured value and the true
value; usually expressed as ± percent of full scale.
Lag Time—The time interval from a step change in input concentration at
the instrument inlet to the first corresponding change in the instrument
output.
Time to 90 Percent Response—The time interval from a step change in the
input concentration at the instrument inlet to a reading of 90 percent of
the ultimate recorded concentration.
Rise Time (90 percent)—The interval between initial response time and
time to 90 percent response after a step increase in the inlet
concentration.
Fall Time (90 percent)—The interval between initial response time and
time to 90 percent response after a step decrease in the inlet
concentration.
Zero Drift—The change in instrument output over a stated time period,
usually 24 hours, of unadjusted continuous operation, when the input
concentration is zero; usually expressed as percent full scale.
Span Drift—The change in instrument output over a stated time period,
usually 24 hours, of unadjusted continuous operation, when the input
concentration is a stated upscale value; usually expressed as percent
full scale.
Precision—The degree of agreement between repeated measurements of the
same concentration, expressed as the average deviation of the single
results from the mean.
Operational Period—The period of time over which the instrument can be
expected to operate unattended within specifications.
Noise—Spontaneous deviations from a mean output not caused by input
concentration changes.
Interference—An undesired positive or negative output caused by a
substance other than the one being measured.
Interference Equivalent—The portion of indicated input concentration due
to the presence of an interferent.
Operating Temperature Range—The range of ambient temperatures over which
the instrument will meet all performance specifications.
Operating Humidity Range—The range of ambient relative humidity over
which the instrument will meet all performance specifications.
Linearity—The maximum deviation between an actual instrument reading and
the reading predicted by a straight line drawn between upper and lower
calibration points.
[Figure C1: Carbon monoxide analyzer flow diagram; sample introduction
system, span and calibration gas inlets, and I.R. analyzer.]
BIBLIOGRAPHIC DATA SHEET

1. Report No.: EPA-R4-73-028a
4. Title and Subtitle: GUIDELINES FOR DEVELOPMENT OF A QUALITY ASSURANCE
   PROGRAM; Reference Method for Continuous Measurement of CO in the
   Atmosphere
5. Report Date: June 1973
7. Author(s): Franklin Smith and A. Carl Nelson
9. Performing Organization Name and Address: Research Triangle Institute,
   Research Triangle Park, N.C. 27709
11. Contract/Grant No.: EPA-Durham 68-02-0598
12. Sponsoring Organization Name and Address: Environmental Protection
    Agency, National Environmental Research Center, Research Triangle
    Park, N.C. 27711
13. Type of Report & Period Covered: Interim contract report - field
    document
16. Abstracts: Guidelines for the quality control of ambient CO
    measurement by the Federal reference method are presented. These
    include: 1. Good operating practices; 2. Directions on how to assess
    and qualify data; 3. Directions on how to identify trouble and
    improve data quality; 4. Directions to permit design of auditing
    activities; 5. Procedures which can be used to select action options
    and relate them to costs. The document is not a research report. It
    is designed for use by operating personnel.
17. Key Words and Document Analysis. 17a. Descriptors: Quality Assurance;
    Quality Control; Air Pollution; Quantitative Analysis; Gas Analysis;
    Carbon Monoxide
19. Security Class (This Report): UNCLASSIFIED
20. Security Class (This Page): UNCLASSIFIED
21. No. of Pages: 128