EPA-650/4-74-005-h
February 1975 Environmental Monitoring Series
GUIDELINES FOR DEVELOPMENT
OF A QUALITY ASSURANCE PROGRAM:
VOLUME VIII - DETERMINATION
OF CO EMISSIONS
FROM STATIONARY SOURCES
BY NDIR SPECTROMETRY
Office of Research and Development
U.S. Environmental Protection Agency
Washington, DC 20460
EPA-650/4-74-005-H
GUIDELINES FOR DEVELOPMENT
OF A QUALITY ASSURANCE PROGRAM:
VOLUME VIII - DETERMINATION
OF CO EMISSIONS
FROM STATIONARY SOURCES
BY NDIR SPECTROMETRY
by
Franklin Smith, Denny E. Wagoner, and Robert P. Donovan
Research Triangle Institute
Research Triangle Park, North Carolina 27709
Contract No. 68-02-1234
ROAP No. 26BGC
Program Element No. 1HA327
EPA Project Officer: Steven M. Bromberg
Quality Assurance and Environmental Monitoring Laboratory
National Environmental Research Center
Research Triangle Park, North Carolina 27711
Prepared for
OFFICE OF RESEARCH AND DEVELOPMENT
U.S. ENVIRONMENTAL PROTECTION AGENCY
WASHINGTON, D.C. 20460
February 1975
EPA REVIEW NOTICE
This report has been reviewed by the National Environmental Research
Center - Research Triangle Park, Office of Research and Development,
EPA, and approved for publication. Approval does not signify that the
contents necessarily reflect the views and policies of the Environmental
Protection Agency, nor does mention of trade names or commercial
products constitute endorsement or recommendation for use.
RESEARCH REPORTING SERIES
Research reports of the Office of Research and Development, U.S. Environ-
mental Protection Agency, have been grouped into series. These broad
categories were established to facilitate further development and applica-
tion of environmental technology. Elimination of traditional grouping was
consciously planned to foster technology transfer and maximum interface
in related fields. These series are:
1. ENVIRONMENTAL HEALTH EFFECTS RESEARCH
2. ENVIRONMENTAL PROTECTION TECHNOLOGY
3. ECOLOGICAL RESEARCH
4. ENVIRONMENTAL MONITORING
5. SOCIOECONOMIC ENVIRONMENTAL STUDIES
6. SCIENTIFIC AND TECHNICAL ASSESSMENT REPORTS
9. MISCELLANEOUS
This report has been assigned to the ENVIRONMENTAL MONITORING
series. This series describes research conducted to develop new or
improved methods and instrumentation for the identification and quanti-
fication of environmental pollutants at the lowest conceivably significant
concentrations. It also includes studies to determine the ambient con-
centrations of pollutants in the environment and/or the variance of
pollutants as a function of time or meteorological factors.
This document is available to the public for sale through the National
Technical Information Service, Springfield, Virginia 22161.
TABLE OF CONTENTS

SECTION                                                              PAGE
I    INTRODUCTION                                                       1
II   OPERATIONS MANUAL                                                  4
     2.0  GENERAL                                                       4
     2.1  EQUIPMENT SELECTION                                           7
     2.2  EQUIPMENT CALIBRATION                                        13
     2.3  PRESAMPLING PREPARATION                                      26
     2.4  ON-SITE MEASUREMENTS                                         31
     2.5  POSTSAMPLING OPERATIONS                                      36
III  MANUAL FOR FIELD TEAM SUPERVISOR                                  38
     3.0  GENERAL                                                      38
     3.1  ASSESSMENT OF DATA QUALITY (INTRATEAM)                       39
     3.2  MONITORING DATA QUALITY (SEC. II)                            43
     3.3  COLLECTION AND ANALYSIS OF INFORMATION
          TO IDENTIFY TROUBLE                                          47
IV   MANUAL FOR MANAGERS OF GROUPS OF FIELD TEAMS                      54
     4.0  GENERAL                                                      54
     4.1  FUNCTIONAL ANALYSIS OF TEST METHOD                           58
     4.2  ACTION OPTIONS                                               65
     4.3  PROCEDURES FOR PERFORMING A QUALITY AUDIT                    68
     4.4  DATA QUALITY ASSESSMENT                                      70
V    REFERENCES                                                        82

APPENDIX
A    REFERENCE METHOD FOR DETERMINATION OF CARBON MONOXIDE
     EMISSIONS FROM STATIONARY SOURCES                                 84
B    FLOW CHART OF OPERATIONS                                          83
C    GLOSSARY OF SYMBOLS                                               92
D    GLOSSARY OF TERMS                                                 94
E    CONVERSION FACTORS                                                95
LIST OF ILLUSTRATIONS

FIGURE NO.                                                           PAGE
1   Operational Flow Chart of the Measurement Process                   5
2   Modified Integrated Gas Sampling Train                             16
3   Stopcock Configuration for Determining the Leak
    Rate of the Sampling Train                                         18
4   Sample Calibration Curve                                           22
5   Sample Daily Check Sheet                                           24
6   Stopcock Configuration for Purging (Integrated
    Sampling)                                                          35
7   Stopcock Movement to Assume Sampling Configuration
    (Integrated Sampling)                                              35
8   Sample Control Chart for the Range, R, of Field
    Analyses                                                           45
9   Sample Control Chart for Calibration Checks                        46
10  Summary of Data Quality Assurance Program                          57
11  Added Cost Versus Data Quality for Selected Action
    Options (at a CO Level of 500 ppm)                                 67
12  Example Illustrating p < 0.10 and Satisfactory
    Data Quality                                                       75
13  Example Illustrating p > 0.10 and Unsatisfactory
    Data Quality                                                       75
14  Flow Chart of the Audit Level Selection Process                    77
15  Average Cost vs. Audit Level (n)                                   80
LIST OF TABLES

TABLE NO.                                                            PAGE
1   Apparatus Checklist for Carbon Monoxide Emissions
    Measurements                                                       28
2   Methods of Monitoring Variables                                    51
3   Estimates of Mean, Variance, and Distribution
    of Important Variables                                             63
4   Computation of Mean Difference, d̄, and Standard
    Deviation of Differences, s_d                                      74
5   Sample Plan Constants, k, for P {not detecting a lot
    with proportion p outside limits L and U} ≤ 0.1                    76
ABSTRACT
Guidelines for the quality control of stack gas analysis for carbon
monoxide emissions by the Federal reference method (NDIR) are presented.
These include:
1. Good operating practices.
2. Directions on how to assess performance and to qualify data.
3. Directions on how to identify trouble and to improve data quality.
4. Directions to permit design of auditing activities.
The document is not a research report. It is designed for use by
operating personnel.
This work was submitted in partial fulfillment of Contract No.
68-02-1234 by Research Triangle Institute under the sponsorship of the
Environmental Protection Agency. Work was completed as of February 1975.
SECTION I INTRODUCTION
This document presents guidelines for developing a quality assurance
program for Method 10, Determination of Carbon Monoxide Emissions from
Stationary Sources. For convenience of reference, this method as published
by the Environmental Protection Agency in the Federal Register, March 8,
1974, is reproduced as appendix A of this report.
This document is divided into four sections:
Section I, Introduction. The Introduction lists the overall objectives
of a quality assurance program and delineates the program components
necessary to accomplish the given objectives.
Section II, Operations Manual. This manual sets forth recommended
operating procedures to insure the collection of data of high quality, and
instructions for performing quality control checks designed to give an
indication or warning that invalid data or data of poor quality are being
collected, allowing for corrective action to be taken before future
measurements are made.
Section III, Manual for Field Team Supervisor. This manual contains
directions for assessing data quality on an intrateam basis and for
collecting the information necessary to detect and/or identify trouble.
Section IV, Manual for Manager of Groups of Field Teams. This manual
presents information relative to the test method (a functional analysis)
to identify the important operations, variables, and factors; a methodology
for comparing action options for improving data quality and selecting the
preferred action; and statistical properties of and procedures for carrying
out a quality audit for an independent assessment of data quality.
The objectives of this quality assurance program for Method 10 are to:
1. Minimize systematic errors
4. Collect and supply information necessary to describe
the quality of the data.
To accomplish the above objectives, a quality assurance program must
contain the following components:
1. Recommended operating procedures,
2. Routine training of personnel and evaluation of performance
of personnel and equipment,
3. Routine monitoring of the variables and parameters which
may have a significant effect on data quality,
4. Development of statements and evidence to qualify data and
detect defects, and
5. Action strategies to increase the level of precision/accuracy
in the reported data.
Component (2) above will be treated in the final report of this contract;
all others are discussed in this report.
Implementation of a properly designed quality assurance program should
enable measurement teams to achieve and maintain an acceptable level of
precision and accuracy in their carbon monoxide emissions measurements. It
will also allow a team to report an estimate of the precision of its
measurements for each source emissions test.
Variability in emission data derived from multiple tests conducted at
different times includes components caused by:
1. Process conditions,
2. Equipment and personnel variation in field procedures, and
3. Equipment and personnel variation in the laboratory.
In many instances time variations in source output may be the most significant
factor in the total variability. The error resulting from this component of
variation is minimized by knowing the time characteristics of the source
output and sampling proportionally. The sampling period should span at
least one complete output cycle when possible. If the cycle is too
long, either the sample collection should be made during a portion of the
cycle representative of the cycle average, or multiple samples should
be collected and averaged.
Quality assurance guidelines for Method 10 as presented here are
designed to insure the collection of data of acceptable quality by prevention,
detection, and quantification of equipment and personnel variations in
both the field and the laboratory through:
1. Recommended operating procedures as a preventive measure,
2. Quality control checks for rapid detection of undesirable
performance, and
3. A quality audit to verify independently the quality of the
data.
The scope of this document has been purposely limited to that of a
field and laboratory document. Additional background information will be
contained in the final report under this contract.
SECTION II OPERATIONS MANUAL
2.0 GENERAL
This manual sets forth recommended procedures for determination of
carbon monoxide emissions from stationary sources according to Method 10.
(Method 10 is reproduced from the Federal Register, Vol. 39, No. 47, Friday,
March 8, 1974, and is included as appendix A of this document.) Quality
control procedures and checks designed to give an indication or warning
that invalid or poor quality data are being collected are written as part
of the operating procedures and are to be performed by the operator on a
routine basis. In addition, the performance of special quality control
procedures and/or checks as prescribed by the supervisor for assurance of
data quality may be required of the operator on special occasions.
The sequence of operations to be performed for each field test is
given in fig. 1. Each operation or step in the method is identified by
a block. Quality checkpoints in the measurement process, for which appro-
priate quality control limits are assigned, are represented by blocks
enclosed by double lines. Other quality checkpoints involve go/no-go
checks and/or subjective judgments by the test-team members, with proper
guidelines for decisionmaking spelled out in the procedures.
The precision/accuracy of data obtained from this method depends upon
equipment performance and the proficiency and conscientiousness with which
the operator performs his various tasks. From equipment checks through
on-site measurements, calculations, and data reporting, this method is
susceptible to a variety of errors. Detailed instructions are given for
minimizing or controlling equipment error, and procedures are recommended
to minimize operator error. Before using this document, the operator
should study Method 10 as reproduced in appendix A in detail. In addition,
the quality assurance documents of this series for Methods 2, 3, and 4
(refs. 1-3) should be read and followed.
To insure that all apparatus satisfies the reference method speci-
fications, acceptance checks, as specified in section 2.1, should be per-
formed when the apparatus is purchased and field tests must always be
preceded by calibration and check-out procedures as given in sections 2.2
and 2.3 respectively. The manufacturer's recommendations should be fol-
lowed when using a particular piece of equipment.
EQUIPMENT SELECTION (2.1)
1. SELECT EQUIPMENT ACCORDING TO THE
GUIDELINES GIVEN IN SUBSECTION 2.1
FOR THE SOURCE TO BE TESTED.
EQUIPMENT CALIBRATION (2.2)
2. CALIBRATE EQUIPMENT ACCORDING TO
SUBSECTION 2.2.
PRESAMPLING PREPARATION (2.3)
3. OBTAIN PROCESS DATA, SELECT/PREPARE
SAMPLING SITE, DETERMINE LOGISTICS
FOR PLACING EQUIPMENT ON-SITE, AND
DETERMINE STACK CONDITIONS Ts, Vs,
Bwo, AND Md. (SUBSECTION 2.3.1)
4. CHECK OUT SAMPLING TRAIN AND RELATED
COMPONENTS. (SUBSECTION 2.3.2)
5. PACKAGE AND SHIP EQUIPMENT.
(SUBSECTION 2.3.4)
ON-SITE MEASUREMENTS (2.4)
6. MOVEMENT OF EQUIPMENT TO SAMPLING
SITE AND SAMPLE RECOVERY AREA.
(SUBSECTION 2.4.1)
7. PRELIMINARY MEASUREMENTS AND SETUP
WILL INCLUDE DUCT DIMENSIONS AND
PROBE POSITIONING. (SUBSECTION 2.4.2)
[Flow chart boxes: Equipment Selection → Equipment Calibration → Preliminary Site Visit (optional) → Apparatus Check → Package Equipment for Shipment → Transport Equipment to Site → Preliminary Measurements and Setup]
Figure 1. Operational flow chart of the measurement process.
8. DETERMINATION OF MAXIMUM AND MINIMUM
ΔP AND STACK GAS TEMPERATURE.
(SUBSECTION 2.4.2.2)
9. ASSEMBLE AND LEAK CHECK THE SAMPLING
TRAIN. (SUBSECTION 2.4.3.2)
10. COLLECT A MINIMUM SAMPLE VOLUME OF
60 LITERS. MAINTAIN PROPORTIONAL
CONDITIONS DURING SAMPLING.
(SUBSECTIONS 2.4.3.3, 2.4.3.4)
11. PERFORM FINAL LEAK CHECK OF THE ENTIRE
SAMPLING TRAIN. (SUBSECTIONS 2.4.3.3,
2.4.3.4)
12. COMPARE THE NDIR MEASURED VALUES
WITH THE VALUE OBTAINED BY AN
ALTERNATE METHOD (3.3.3.1) OR WITH THE
VALUE GIVEN BY COMBUSTION NOMOGRAPHY
IF APPLICABLE. (SUBSECTION 2.5.1)
POSTSAMPLING OPERATIONS (2.5)
13. DISASSEMBLE AND INSPECT EQUIPMENT FOR
DAMAGE SUSTAINED BUT NOT DETECTED
DURING SAMPLING. (SUBSECTION 2.5.2)
14. PACKAGE EQUIPMENT FOR RETURN TRIP TO
BASE LABORATORY. (SUBSECTION 2.5.2)
[Flow chart boxes: Velocity Traverse and Stack Conditions → Assemble and Leak-Check Sampling Train → Collect Sample → Leak-Check Sampling Train → Data Validation → Disassemble and Check Equipment → Package Equipment for Shipment]
Figure 1 (continued).
2.1 EQUIPMENT SELECTION
A listing of the required apparatus (for sampling trains configured
as shown in figures 10-1 or 10-2 of appendix A) and reagents along with
certain miscellaneous equipment and tools to aid in source testing is
given in table 1 of subsection 2.3. Additional specifications, criteria,
and/or design features as applicable are given here to aid in the selection
of equipment to insure the collection of data of consistent quality. Pro-
cedures and, where applicable, limits for acceptance checks are given. The
descriptive title, identification number, if applicable, and the results of
the acceptance check are recorded in the receiving record file, dated, and
signed by the individual performing the check. Calibration data generated
in the acceptance check are recorded in a calibration log book.
2.1.1 Sampling Probe
2.1.1.1 Design Characteristics. The sampling probe should be made of
stainless steel or of borosilicate (Pyrex) glass encased in a steel sheath.
The probe should be equipped with a filter to remove particulate matter.
This filter can be glass wool (borosilicate or quartz glass) packed in the
probe end that extends into the stack. A retaining ring that screws or
clamps to the end of the probe will help hold the filter in place when
sampling into low-pressure stacks.
High-temperature probes (temperatures greater than 870° C) should be
made of quartz. In all sampling setups, the probe material must not react
with the gas constituents in a way that will introduce a bias into the
analytical method.
2.1.1.2 Acceptance Check. A new probe should be visually checked for
suitability, i.e., is it the length and composition ordered. The probe
should be checked for cracks or breaks and leak-checked on a sampling
train as described in subsection 2.2.1. Any probe not satisfying the
acceptance check should be rejected.
2.1.2 Air-cooled Condenser
2.1.2.1 Design Characteristics. The purpose of the condenser is to facili-
tate condensation of water from the gas to be sampled. The long coiled
path (figs. 10-1 and 10-2, appendix A) allows the entering gas to cool to
near ambient temperature (other temperatures can be obtained, if desired,
utilizing a circulating water cooler, for example, to maintain a tempera-
ture below ambient); the large volume collects the condensed water which
can be drained by the valve between or after sampling runs.
The capacity of the condenser must be sufficient to collect all con-
densed moisture from the sample gas during system purging and sampling,
but should not be unnecessarily oversized because the added size increases
the bulk of the sampling train and lengthens purge times. For example, a
sample train of 1 ℓ total volume, including the condenser, should be able
to hold the condensate from about 100 ℓ of sample gas (90 ℓ sample plus 5
displacements of the sampling train volume plus 5 ℓ margin). Assuming 20
percent water concentration in the stack gases, the water vapor content
of this volume of stack gas is 20 ℓ of water vapor. When condensed, this
gas volume corresponds to about 20 mℓ of liquid so that a condenser volume
of 0.25 ℓ allows adequate operating margin.
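The sizing arithmetic above can be checked with a short calculation. This is an illustrative sketch only; the molar volume constant and the function name are assumptions for the example, not part of the method.

```python
# Illustrative check of the condenser sizing arithmetic in the text.
# Assumption: ideal-gas molar volume of about 24 L/mol near ambient conditions.
MOLAR_VOLUME_L = 24.0         # L per mole of gas at roughly 20 C, 1 atm
WATER_MOLAR_MASS_G = 18.0     # g per mole of water
WATER_DENSITY_G_PER_ML = 1.0  # g per mL of liquid water

def condensate_volume_ml(sample_gas_l, water_fraction):
    """Liquid water condensed from a gas sample, assuming all vapor condenses."""
    vapor_l = sample_gas_l * water_fraction  # e.g., 100 L x 0.20 = 20 L of vapor
    moles = vapor_l / MOLAR_VOLUME_L
    grams = moles * WATER_MOLAR_MASS_G
    return grams / WATER_DENSITY_G_PER_ML

print(round(condensate_volume_ml(100, 0.20)))  # 15
```

Roughly 15 mℓ of liquid from 100 ℓ of stack gas at 20 percent moisture, consistent with the text's rounded figure of about 20 mℓ, so a 0.25 ℓ condenser leaves ample operating margin.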
The condenser should possess a drain valve to permit emptying between
samplings.
2.1.2.2 Acceptance Check. Check the condenser visually for damage (such
as breaks or cracks), and manufacturing flaws. The condenser should be
airtight and leak-free when checked at a positive pressure ≥ 2 in. of H₂O
above atmospheric and monitored with a draft gauge.
2.1.3 Needle Valves
2.1.3.1 Design Characteristics. Two metering valves with convenient-sized
fittings are required in the sampling train. It is recommended that stain-
less steel valves be utilized.
2.1.3.2 Acceptance Check. Install the valve(s) in the sampling train and
check for proper operation. Reject a valve that cannot regulate the sample
flow rate over the desired operating range.
2.1.4 Vacuum Pump
2.1.4.1 Design Characteristics. The vacuum pump should be capable of:
1) maintaining a flow rate from approximately 8.5 to 34 ℓ/min (0.3 to 1.2
ft³/min) at atmospheric pressure, and 2) creating pump inlet pressures from
25.4 to 500 mmHg with the pump outlet at or near standard pressure, i.e.,
760 mmHg. The pump must be leak-free when running and pulling a vacuum
(inlet plugged) of 380 mmHg. The vacuum pump should be a leak-free dia-
phragm pump because of the low inherent contamination characteristics of
that type of pump. For safety reasons, the pump should be equipped with
a three-wire electrical cord.
2.1.4.2 Acceptance Check. A new pump should be visually checked for damage.
Leak-check by plugging the pump inlet and passing the outlet line through
a total volume meter such as a dry gas meter. Bubbling the outlet line
through a liquid bath is an alternative method. In any event the volume of
gas flowing should be less than 1 percent of the anticipated sampling rate.
2.1.5 Rate Meter
2.1.5.1 Design Characteristics. The rate meter is a rotameter or its
equivalent used to measure gas flows in the range 0 to 1 ℓ/min (0 to 0.035
ft³/min).
2.1.5.2 Acceptance Checks. Inspect the meter for cracks or flaws and check
its calibration. Reject the rate meter if it is damaged, behaves errati-
cally, or cannot be adjusted to agree within ± 5 percent of the standard
rate meter. The rotameter tube and ball should be cleaned and retested if
dust and/or liquid contamination is suspected.
2.1.6 Flexible Bag
2.1.6.1 Design Characteristics. The flexible bag is an inflatable, leak-
tight bag used to collect a measured volume of sample gas. The bag
capacity should be 60 to 90 ℓ (2 to 3 ft³). It should be sealable to a
nipple or other leak-tight connection such as illustrated in figure 10-2
of appendix A.
2.1.6.2 Acceptance Checks. Leak-test the bag in the laboratory by evacu-
ating with a leakless pump followed by a dry gas meter. When evacuated
and leak free, there should be negligible flow through the meter (less
than about 10 percent of bag volume in 10 hours).
One difficulty in leak-testing a nonrigid volume by evacuating it is
that it is difficult to ascertain that the entire bag has been tested. What
happens is that one flexible wall presses against another section and, even-
tually, the pumping orifice, cutting off flow. The absence of flow then
does not guarantee that all sections of the bag are leak free, analogous to
stopping water flow in a garden hose by pinching a section of line. The
pinch can be used to check for leaks between the faucet and the pinch-off
point but does not test the section between the pinch-off point and the nozzle.
An alternative and preferred test method is to pressurize the bag with
air to approximately 54 mm H₂O (2 in. of H₂O) above atmospheric pressure
and to monitor the pressure with a draft gauge over a period of time. Any
change in pressure over a 24-hour period should be considered an excessive
leak and the bag should be repaired or replaced.
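The 24-hour pressure-decay criterion can be expressed as a simple pass/fail rule. This is a sketch; the function name and its zero default tolerance are illustrative assumptions, since the text treats any pressure change over the period as an excessive leak.

```python
def bag_leak_check(p_initial_mm_h2o, p_final_mm_h2o, tolerance_mm_h2o=0.0):
    """Pass/fail for the 24-hour bag pressurization test.

    The text treats ANY pressure change over 24 hours as an excessive
    leak, so the default tolerance is zero; a small nonzero tolerance
    to reflect draft-gauge resolution would be an assumption, not a
    provision of the text.
    """
    return abs(p_final_mm_h2o - p_initial_mm_h2o) <= tolerance_mm_h2o

print(bag_leak_check(54.0, 54.0))  # True: no change, bag accepted
print(bag_leak_check(54.0, 48.0))  # False: pressure dropped, repair or replace
```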
2.1.7 Stack Gas Velocity, Temperature, and Pressure Measuring System
See the Quality Assurance Document of this series, entitled Determina-
tion of Stack Gas Velocity and Volumetric Flow Rate (type-S pitot tube),
for a discussion of this system (ref. 1). The equipment required includes
a type-S pitot tube, an inclined manometer, and appropriate lines. The
pressure differential measured between the impact pressure and the
static pressure is used to adjust the sampling flow rate so that the samp-
ling is carried out proportional to the flow rate in the stack.
2.1.8 Carbon Monoxide Analyzer
2.1.8.1 Design Characteristics. The carbon monoxide analyzer is a nondis-
persive infrared (NDIR) spectrometer (or its equivalent) meeting the per-
formance specifications listed in the addenda section of appendix A.
2.1.8.2 Acceptance Checks. Demonstrate that the analyzer meets the speci-
fications listed in appendix A as well as those advertised by the manu-
facturer. Guidelines for instrument evaluation are given in "Procedures
for Testing Performance Characteristics of Automated Methods," Federal
Register, Vol. 40, No. 33, Tuesday, February 18, 1975. A strip chart recorder
is a desirable option for making a permanent record of the NDIR readings.
2.1.9 Silica Gel Drying Tube*
2.1.9.1 Design Characteristics. The drying tube is a glass tube or impinger
or the equivalent capable of holding 200 grams of indicating silica gel,
which removes the water vapor passing through the condenser that otherwise
*The silica gel drying tube and the CO₂ removal tube can be combined into
one tube containing layers of the two materials. The inlet line discharges
into the silica gel which grades into a mixed layer of gel and ascarite
and finally becomes a layer composed totally of ascarite. The quantities
of each material should be at least that specified in the standard (appendix
A). The drying tube is repacked when the indicating silica gel exhibits a
characteristic color change.
would interfere with the NDIR measurement of carbon monoxide. The
tube should be tightly stoppered or sealed in order to be leak-free. The
input and output lines should be configured to maximize the interaction
between the gas and the gel. This condition is assured by adding the gel
after the impinger is sealed (through the outlet line).
2.1.9.2 Acceptance Checks. Confirm that the silica gel (6 to 16 mesh) has
been properly dried (2 hrs at 177° C) and indicates a dry state. Check the
seal on the tube to confirm leak-free status.
2.1.10 CO₂ Removal Tube*
2.1.10.1 Design Characteristics. The CO₂ removal tube is a glass tube
or impinger or the equivalent (e.g., a flexible plastic tube) capable of
holding 500 g of ascarite. The dry sample gas passes through the ascarite,
which removes the CO₂. The tube must be leak free when loaded and sealed.
The input and output lines should be configured so as to maximize the ex-
posure of the sample gas to the ascarite. To prevent the ascarite from
plugging the inlet line, the ascarite should be added through the outlet
after the inlet line is sealed in place.
2.1.10.2 Acceptance Checks. Ascertain the status (remaining life) of the
ascarite and replace spent ascarite; it should not be soggy or pasty. A
glass wool plug on both the inlet and outlet lines helps prevent ascarite
dust from being carried to other parts of the system. These plugs should
be in place or added. Confirm that the seals on the removal tube are leak free.
2.1.11 Ice Bath
2.1.11.1 Design Characteristics. The ice bath must be of sufficient size
to contain both the drying tube and the CO₂ removal tube in ice/ice water.
While a simple bucket is suitable for onstack use, a double-walled flask
with an evacuated space (a dewar flask) gives longer ice life and, hence,
requires less refilling for laboratory operations.
2.1.11.2 Acceptance Check. Confirm that the bucket is large enough to
hold the drying tubes surrounded by ice and water.
2.1.12 Orsat Analyzer
See the document of this series entitled, "Gas Analysis for Carbon
Dioxide, Excess Air and Dry Molecular Weight," based upon Method 3 (ref. 2).
*See footnote for 2.1.9.
2.1.13 Calibration Gases
2.1.13.1 Design Characteristics. Five reference levels of carbon
monoxide in nitrogen are required:
1. Zero,
2. 15 percent of span,
3. 30 percent of span,
4. 45 percent of span, and
5. 75 percent of span.
The span (reading at 100 percent of scale) should not exceed 1.5 times the
applicable source standard. For example, for petroleum refineries the car-
bon monoxide concentration cannot exceed 500 ppm by volume (§ 60.103 of FR
38, 15408). Instrument span, therefore, should not exceed 750 ppm. A span
of 600 ppm would be reasonable with calibrating gases of 90, 180, 270, and
450 ppm concentrations in addition to the prepurified nitrogen serving as
a zero calibration gas. The accuracy of the carbon monoxide concentration
in the span gas determines the accuracy with which the carbon monoxide con-
centration in the sample gas can be measured.
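The span and calibration-level arithmetic above can be sketched as follows. This is illustrative only; the function names are assumptions for the example and are not part of the method.

```python
def max_span(source_standard_ppm, factor=1.5):
    """Span (reading at 100 percent of scale) should not exceed
    1.5 times the applicable source standard."""
    return factor * source_standard_ppm

def calibration_levels(span_ppm, fractions=(0.0, 0.15, 0.30, 0.45, 0.75)):
    """Nominal CO-in-N2 calibration gas concentrations: zero plus
    15, 30, 45, and 75 percent of the chosen span."""
    return [f * span_ppm for f in fractions]

print(max_span(500))                                   # 750.0
print([round(c) for c in calibration_levels(600)])     # [0, 90, 180, 270, 450]
```

For the refinery example, a 500 ppm standard caps the span at 750 ppm; a 600 ppm span then calls for calibration gases at 90, 180, 270, and 450 ppm plus prepurified nitrogen as zero gas.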
2.1.13.2 Acceptance Checks. Traceability of the calibration gas to an NBS
standard reference material (CO in N₂) should be established under con-
trolled laboratory conditions prior to acceptance. A cylinder of calibra-
tion gas should not be accepted anytime the average of five or more deter-
minations, made on different days, under controlled conditions and with an
analyzer spanned with an NBS standard prior to each determination, differs
from the vendor's certified value by more than ± 4 percent.
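The cylinder acceptance rule lends itself to a small check routine. This is a sketch; the function name and the sample readings are hypothetical, and the ± 4 percent figure comes from the text.

```python
def accept_cylinder(determinations_ppm, vendor_value_ppm, rel_tol=0.04):
    """Accept a calibration gas cylinder only if the mean of five or more
    independent determinations falls within +/- 4 percent of the
    vendor's certified value."""
    if len(determinations_ppm) < 5:
        raise ValueError("need five or more determinations made on different days")
    mean = sum(determinations_ppm) / len(determinations_ppm)
    return abs(mean - vendor_value_ppm) <= rel_tol * vendor_value_ppm

print(accept_cylinder([446, 452, 449, 451, 448], 450))  # True
print(accept_cylinder([420, 425, 418, 422, 419], 450))  # False
```

In the first case the mean (449.2 ppm) sits well inside the ± 18 ppm window around 450 ppm; in the second the mean (420.8 ppm) misses by far more than 4 percent, so the cylinder is rejected.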
2.1.14 Tubing and Connecting Lines
2.1.14.1 Design Characteristics. Connecting lines—tubing made of the
appropriate material—couple the individual components together to make up
the sampling and analytical systems. For CO sampling and analysis, the
range of acceptable materials is quite wide; stainless steel, aluminum,
glass, plastics such as polypropylene, PVC, Teflon, and Tygon are all
satisfactory. Considerations of weight, ease of assembly, or durability
may favor plastic lines for many of the lengthy couplings. Short lines
which are seldom touched could be made of stainless steel.
2.1.14.2 Acceptance Check. Account for all connecting lines and ascertain
that they are available in the diameters, lengths, and materials desired.
Make sure that all plumbing hardware is available (joints, fittings, and
the tools for installation).
2.2 EQUIPMENT CALIBRATION
Before proceeding to a field site for measurements, the equipment to
be used should be assembled and calibrated in a controlled laboratory
setting. This section reviews calibration procedures and can be used as
part of a presampling checklist for the preparation of the sampling train.
In particular, new apparatus and equipment should be calibrated as part of
the overall sampling system before field use, in addition to undergoing
the acceptance checks for individual parts given in section 2.1.
2.2.1 Sampling Train Assembly and Checkout
Two types of sampling trains appear in Method 10: the continuous
sampling train (fig. 10-1, appendix A) and the integrated gas-sampling
train (fig. 10-2, appendix A).
In the continuous sampling method, the analytical equipment (fig. 10-3,
appendix A) must be in position in the field to couple directly onto the sam-
pling train. A pitot tube is not required in continuous sampling because the
sampling flow rate need not be maintained proportional to the stack gas velocity.
The needle valve and rate meter between the pump and the NDIR analyzer (fig.
10-3, appendix A) permit manual adjustments in accordance with the manufac-
turer's recommendations for gas flow and pressure during NDIR measurements.
The carbon dioxide concentration must also be determined using the
integrated sampling techniques of Method 3 with either the continuous or
integrated sampling techniques of Method 10*. Guidelines for this method
have been previously prepared (ref. 2). The appropriate guidelines are
those for the integrated sampling of CO₂ using the Orsat analyzer. As
emphasized in those guidelines, using the Orsat analyzer in the stack area
*Appendix A (Method 10 Standard) describes an alternative method for
determining the CO₂ concentration. In this procedure, the weight of the
ascarite CO₂ removal tube before and after a given volume of sample gas
has passed through it is used to calculate the average CO₂ concentration.
This procedure has not worked out adequately in various round-robin checks
(ref. 4) and is therefore not recommended. These guidelines assume that
the Orsat method will be used.
is a last resort only. A nearby laboratory or room is far superior with
regard to measurement accuracy and precision as well as analyzer mainte-
nance and life. Similar comments apply to NDIR analytical equipment.
Time-dependent permeation of CO through the flexible bag used to transport
the sample is expected, so that it is important to minimize any delays
between sampling and evaluation when carrying out integrated sampling.
In the integrated gas-sampling train, a sample of gas is collected
over a period of time (a minimum of 60 minutes for petroleum refineries)
and stored in a bag, which is subsequently transported to a convenient
laboratory location for the determination of CO₂ and CO. The gas sample
is taken proportionally, using a needle valve and rate meter (fig. 10-2,
appendix A) to match manually changes in the sampling gas velocity to
changes in the stack gas velocity (as determined by the pitot tube). When
the probe position remains fixed in the vicinity of the duct centroid
throughout the sampling period (no traverses of the duct), the implicit
assumption is that the CO concentration is uniform across the duct. In
some sampling it may be desirable to draw from different points traversing
the duct.
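The manual rate matching can be pictured with the type-S pitot relation, in which stack velocity varies as the square root of the differential pressure, so the proportional sampling rate scales the same way. This is a sketch under that assumption; the function is illustrative and not part of Method 10.

```python
import math

def target_sample_rate(base_rate_lpm, delta_p, delta_p_ref):
    """Sampling rate proportional to stack gas velocity, which varies
    as the square root of the pitot differential pressure (same units
    for both delta-p readings)."""
    return base_rate_lpm * math.sqrt(delta_p / delta_p_ref)

# If the pitot delta-p quadruples, stack velocity doubles, so the
# operator should double the rate set on the needle valve.
print(target_sample_rate(0.5, 4.0, 1.0))  # 1.0
```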
Once collected, the sample can be transported to the analyzers, both
the Orsat and the NDIR, and evaluated under controlled laboratory condi-
tions. While in principle the Orsat analyzer measures both CO₂ and CO, the
minimum detectable CO level, by the Orsat method, is about 1,000 ppm, far
too high for new source performance standards (NSPS) CO concentration levels.
The only measurement of interest by the Orsat method, therefore, is the CO₂
concentration. The volume of sample gas required for Orsat analysis is
small (<1 ℓ) compared to total bag volume.
2.2.1.1 Continuous Sampling Train. The sampling probe (fig. 10-1,
appendix A) and the analytical equipment (fig. 10-3, appendix A) together
constitute the complete sampling train for continuous sampling. The two
separately illustrated units should be directly coupled and mounted within
the same enclosure for transportation and field use. The more firmly
anchored and compactly housed the equipment, the better. All that has to
be free is the probe, which is easily coupled to the condenser by a flexi-
ble hose made of a material inert to the sample gas such as Teflon or
polyethylene.
14
-------
Three checks should be carried out on the assembled sampling chain
before proceeding to the field:
1. A leak test of the system,
2. A calibration of the NDIR analyzer,
3. A calibration of the Orsat analyzer.
The procedures for calibrating the analyzers are the same regardless
of whether the analyzers are used with the continuous sampling train or
with the integrated gas-sampling train. These procedures are therefore
discussed in separate sections (2.2.3 and 2.2.4) applicable to both sampl-
ing trains. The remainder of this section describes a procedure for leak
testing the continuous sampling train.
The most straightforward method for leak testing the continuous sampl-
ing train is by sealing the probe with a plug or stopper, making sure the
calibrating gases are shut off, turning on the pump, and ascertaining that
the gas flow rate, as indicated on the downstream rate meter, falls to
zero. If the sampling train is leak free, the pump will evacuate it
rapidly so that the gas flow at the downstream rate meter becomes negligi-
bly small. The time required to evacuate depends on the pumping speed and
the volume of the sampling train. The flow rate should decrease continu-
ously to effectively zero. Leaks anywhere in the system, including the
pump, establish a steady flow in the system, which registers as a
nonzero reading on the downstream rate meter. To eliminate such a con-
dition, which would contribute a diluting effect during sampling and,
hence, would introduce error into the measurements, standard leak-detection
methods, such as soap-bubble painting or sequential piece-by-piece isola-
tion, must be employed to locate and to eliminate the leak. Not until
the leak rate measured on the flow meter is less than 2 percent of the
planned sampling flow rate can the system be approved as acceptably leak
free.
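The acceptance criterion above, together with the rough pump-down behavior, can be expressed as a short Python sketch (illustrative only; the exponential pump-down is an idealization and the function names are ours):

```python
def leak_check_ok(leak_rate_lpm, planned_rate_lpm, limit_fraction=0.02):
    """Accept the sampling train only if the residual flow on the
    downstream rate meter is below 2 percent of the planned sampling
    flow rate, the criterion stated in the text."""
    return leak_rate_lpm < limit_fraction * planned_rate_lpm

def evacuation_time_constant_min(train_volume_l, pump_speed_lpm):
    """Rough time constant for pumping the train down: for an ideal,
    leak-free train the indicated flow decays roughly as
    exp(-t * pump_speed / volume), so the time constant is V/S."""
    return train_volume_l / pump_speed_lpm

print(leak_check_ok(0.01, 1.0))  # 1 percent of planned rate -> True (acceptable)
print(leak_check_ok(0.05, 1.0))  # 5 percent -> False (locate and fix the leak)
print(evacuation_time_constant_min(2.0, 4.0))  # 2 L train, 4 L/min pump -> 0.5 min
```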
An alternative and preferred leak test procedure is to pressurize
the system (i.e., the sampling train) with air to about 51 mm H2O (2 in.
of H2O) above atmospheric pressure and monitor the pressure with a draft
gauge. If there is no detectable pressure change in 10 minutes, the
system is considered leak-free. A system leak test should be performed be-
fore and after each field test.
15
-------
2.2.1.2 Integrated Sample Train. The integrated gas-sampling train is
depicted schematically in fig. 10-2 of appendix A. It is used in conjunc-
tion with the analytical equipment shown in fig. 10-3, appendix A, but has
the advantage over the continuous sampling system of not having to be
physically coupled to the analytical equipment. In the field, only the
sampling train itself needs to be moved to the vicinity of the stack-
sampling port. The analytical equipment should be at the site, however,
in order to minimize the time between sampling and analysis.
Before field use the integrated sampling train should be checked for
leaks. To facilitate this leak testing (but more importantly, the leak
testing in the field prior to sampling), the sampling train should include
connections, valves, and lines that enable the vacuum pump to perform a
series of different functions without the rerouting or the making/break-
ing of any lines. A suitable, simple arrangement, requiring the addition
of only two three-way stopcocks, is shown in fig. 2. This arrangement,
while not mandatory or necessarily the best modification, will be assumed
in the following discussion.
Figure 2. Modified integrated gas-sampling train (sample pump with
three-way pump and sample stopcocks, flexible vacuum hose to a pumping
port with quick disconnect on a thick-walled container holding the
flexible bag).
16
-------
Figure 2 illustrates the addition of two three-way stopcocks and
appropriate connecting lines to the schematic appearing in the Federal
Register (appendix A, fig. 10-2). These stopcocks can be made of materials
compatible with the tubing used in the system. Teflon stopcocks should be
adequate and relatively maintenance free. All features not shown in fig.
2 are the same as in fig. 10-2 with the exception of a rate meter which is
added to the outlet line of the pump. This meter is used for leak testing.
It can be of the conventional rotameter type, but a dry gas meter is
preferable in terms of reliability and stability.
Once the acceptance check for the flexible bag has been completed
(2.1.6.2), the bag can be attached to a finger, which is inserted through
and sealed to the wall of a protective container (fig. 2). The bag is on
the inside of the container; the outside end of the finger is coupled to
a quick-disconnect valve. Both of these couplings are sketched in fig. 2.
The bag will be used repeatedly in this configuration until it wears
out, i.e., develops a leak.
The material from which the finger and container are made must be
compatible with the volume requirements of the sample and with the weight
restrictions of portability in the field. A heavy-walled, 20-gallon
polyethylene container or drum works adequately, but other containers are
suitable. The container should be relatively leak free, and while not
normally subjected to large pressure differentials, it will experience
large, crushing atmospheric forces. Consequently it must be rigid and
strong.
Once an accepted bag has been mounted within its protective container,
it remains there throughout its useful life, being filled and discharged
as sampling proceeds. When it needs to be replaced, the finger is
removed from the container wall, and a new bag is installed in place of
the defective bag.
To check the system for leaks, the pump stopcock should be in its
straight-through position, as illustrated in fig. 2. The sample stopcock
can be positioned to couple the pumping line to the bag line alone, to
the sample line alone, or to both together. To check the total system
leak rate, the pump should be pumping on both the bag line and the sample
line simultaneously (fig. 3). With the probe sealed by a stopper or
other plug, the flow rate appearing on the rate meter (or the dry gas
17
-------
meter) in the exhaust port of the pump is a measure of the system leak
rate. This leak rate must be negligible (less than 2 percent of the antici-
pated sampling rate) before the sampling train is declared acceptable. The
sample stopcock can be used to isolate the bag line from the sample line
in tracing sources of leaks.
Figure 3. Stopcock configuration for determining the leak rate of
the sampling train (pump stopcock and sample stopcock set so the pump
draws on the sample line, bag line, and container line together).
An alternate approach is to pressurize the bag-inlet system (2 in. of H2O
above atmospheric pressure) and monitor the change in pressure at the outlet
with a draft gauge. Any change in pressure over a 10-minute period is un-
acceptable, indicating a leak in the system.
2.2.3 NDIR Analyzer Calibration
Two types of calibration procedures are given here. The first, a
multipoint calibration, is a laboratory procedure designed to establish the
characteristic curve relating analyzer output voltage to CO concentration
flowing through the analyzer. Once prepared, this curve accompanies the
analyzer and is used to convert all subsequent analyzer outputs into the
parameter of interest, CO concentration.
The second calibration procedure, the zero and span calibration, is
a field procedure carried out on a daily or routine basis to ascertain that
the originally established calibration curve is still valid.
2.2.3.1 Multipoint Calibration Procedure. A multipoint calibration pro-
cedure is required:
1. When the analyzer is first purchased,
18
-------
2. Before each set of NSPS tests and anytime the analyzer has had
maintenance that could affect its response characteristics, or
3. When an auditing process shows that the desired performance standards
for quality of data are not being met.
A multipoint calibration requires calibrating gases with CO concentra-
tions in nitrogen corresponding to approximately 15, 30, 45, 75, and 100
percent of the selected instrument span, and a zero gas (the prepurified
grade of nitrogen) which contains less than 0.1 mg CO/m3.* The instrument
span selected cannot exceed 1.5 times the applicable source performance
standard. For petroleum refineries, the source
standard is 500 ppm so that the maximum allowable instrument span for moni-
toring this industry is 750 ppm. Assuming an instrument range of 1000 ppm,
the span gas then corresponds to an instrument output of about 75 percent
of full scale. The remaining calibrating gases read downscale proportionally
depending upon the linearity of the instrument response.
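These span rules can be illustrated with a short Python sketch (an aid only; the function names are ours, and the numbers reproduce the petroleum-refinery example above):

```python
def max_allowable_span(standard_ppm, factor=1.5):
    """Maximum instrument span: 1.5 times the applicable standard."""
    return factor * standard_ppm

def calibration_gas_points(span_ppm, fractions=(0.15, 0.30, 0.45, 0.75, 1.00)):
    """CO concentrations (ppm) of the calibrating gases, taken at
    approximately 15, 30, 45, 75, and 100 percent of the span."""
    return [f * span_ppm for f in fractions]

def span_percent_of_full_scale(span_ppm, full_scale_ppm):
    return 100.0 * span_ppm / full_scale_ppm

span = max_allowable_span(500.0)                 # 500 ppm standard -> 750 ppm span
print(span)                                      # -> 750.0
print(span_percent_of_full_scale(span, 1000.0))  # -> 75.0 (percent of a 1000 ppm range)
print(calibration_gas_points(span))              # ~112.5, 225, 337.5, 562.5, 750 ppm
```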
All calibration gases, certified by the vendor to be within ±2 percent
of the stated concentrations, should be purchased in high-pressure cylinders
whose inside surfaces have low iron content if available. Prolonged storage
in cylinders with high iron content can deplete the CO concentration through
the formation of iron carbonyl. Calibration gases should be verified by
establishing their traceability to an NBS standard reference material (i.e.,
CO in N2) when first purchased and reverified at six-month intervals.
Cylinders should be stored in areas not subject to temperature extremes
(i.e., not in direct sunlight).
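The volume-to-mass conversion used in this section (1 ppm = 1.145 mg/m3 at 25°C and 760 torr; see the footnote to the zero gas specification) can be reproduced approximately in Python; the sketch assumes a CO molar mass of 28.01 g/mol and an ideal-gas molar volume of 24.45 L/mol:

```python
MOLAR_MASS_CO_G = 28.01  # g/mol
MOLAR_VOLUME_L = 24.45   # L/mol for an ideal gas at 25 deg C and 760 torr

def co_ppm_to_mg_per_m3(ppm):
    """Convert a CO concentration from ppm (by volume) to mg/m3.

    1 ppm CO occupies 1 mL per m3 of gas; dividing by the molar volume
    and multiplying by the molar mass gives the mass concentration.
    """
    return ppm * MOLAR_MASS_CO_G / MOLAR_VOLUME_L

# Reproduces the document's factor of about 1.145 mg/m3 per ppm within rounding.
print(round(co_ppm_to_mg_per_m3(1.0), 3))
```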
When calibrating a specific analyzer, follow the manufacturer's de-
tailed instructions using sound engineering judgment. General illustra-
tive procedures are:
1. Turn the power on and let the analyzer warm up for at least 1
hour by sampling ambient air. Warm-up time varies with individual analyzers
but, for field use, should not exceed several hours. The critical point
is to attain temperature stability before attempting to calibrate the analyzer.
2. Connect zero gas to the analyzer.
3. Open the gas cylinder pressure valve. Adjust the secondary
pressure valve to the pressure recommended by the manufacturer. Caution:
*The factor for converting CO from volume (ppm) to mass (mg/m3) units
is 1 ppm = 1.145 mg/m3 at 25°C and 760 torr.
19
-------
Do not exceed the pressure rating of the sample cell.
4. Set the sample flow rate as read by the rotameter (read the
widest part of the float) to the value that is to be used during sampling.
5. Let the zero gas flow long enough to establish a stable trace.
Allow at least 5 minutes for the analyzer to stabilize. A minimum strip-chart
speed of 5 cm/hr (or 2 inches per hour) is recommended for this application.
6. If a recorder is used, adjust the zero control knob until the trace
corresponds to the line representing 5 percent of the strip-chart width
above the chart zero or baseline. This offset allows for possible nega-
tive zero drift. If the strip chart already has an elevated baseline,
use it as the zero setting.
7. Let the zero gas flow long enough to establish a stable trace.
Allow at least 5 minutes for this. Mark the strip chart trace as adjusted
zero.
8. Disconnect the zero gas.
9. Connect the calibrating gas with a concentration corresponding to
approximately 100-percent span (75-percent full scale in the case of petro-
leum refineries).
10. Open the gas cylinder pressure valve. Adjust the secondary
pressure valve until the secondary pressure gauge reads the operating
pressure recommended by the manufacturer.
11. Set the sample flow rate, as read by the rotameter, to the
value that is to be used during sampling.
12. Let the span gas flow until the analyzer stabilizes.
13. Adjust the span control until the deflection corresponds to the
correct percentage of chart as computed by
     Cs (ppm)
     -------- x 100 + 5 (percent zero offset) = correct percentage of chart
     Cf (ppm)

where Cs = concentration of span gas,
and Cf = full-scale reading of the analyzer in the same units as Cs.
As an example, see figure 4, where the percent zero offset is 5 and
the correct percentage of chart for a span gas of 750 ppm (with a full
scale of 1000 ppm) would be

     750 ppm
     -------- x 100 + 5 = 80.
     1000 ppm
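Step 13's chart-deflection formula can be written as a one-line Python function (illustrative; the name is ours):

```python
def correct_chart_percent(span_gas_ppm, full_scale_ppm, zero_offset_percent=5.0):
    """Chart deflection, in percent of chart width, that the span gas
    should produce: (Cs / Cf) * 100 plus the zero offset."""
    return span_gas_ppm / full_scale_ppm * 100.0 + zero_offset_percent

# The worked example: a 750 ppm span gas on a 1000 ppm full scale with a
# 5 percent zero offset should deflect to 80 percent of chart.
print(correct_chart_percent(750.0, 1000.0))  # -> 80.0
```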
14. Allow the span gas to flow until a stable trace is observed.
Allow at least 5 minutes. Mark the strip-chart trace as adjusted span and
give concentration of span gas in ppm.
15. Disconnect the span gas.
16. Repeat procedures 2 through 8 and:
a. If no readjustment is required, go to Procedure 17;
b. If a readjustment greater than 2 percent of span is required,
repeat procedures 9 through 16.
17. Lock the zero and span controls.
18. Connect the calibration gas with a concentration corresponding to
approximately 15-percent span to the analyzer.
19. Open the gas cylinder pressure valve. Adjust the secondary
pressure valve until the secondary pressure gauge reads the pressure recom-
mended by the manufacturer.
20. Set the sample flow rate to the value used during sampling.
21. Let the calibration gas flow until the strip-chart trace stabi-
lizes. Note: No adjustments are made at this point.
22. Disconnect the calibration gas.
23. Repeat procedures 18 through 22 for the calibration gases with con-
centrations corresponding to approximately 30, 45, and 75 percent of span.
24. Fill in the information required on a calibration sheet and con-
struct a calibration curve of deflection as percent of chart versus con-
centration in ppm as illustrated in fig. 4. Draw a best-fit, smooth curve
passing through the zero and span points and minimizing the deviation of
the two remaining upscale points from the curve. The calibration curve
should have no inflection points, i.e., it should either be a straight
line or bowed in one direction only. Curve-fitting techniques may be used
in constructing the calibration curve by applying appropriate constraints
to force the curve through the zero and span points. This procedure
becomes quite involved, however, and the most frequently used technique is
to fit the curve by eye.
25. Recheck any calibration point deviating more than ±(9 + 0.02 Cs)
ppm* from the smooth calibration curve. If the recheck gives the same

*0.02 is the stated accuracy of the calibrating gas, Cs is the certified
concentration of the calibrating gas, and the number 9 is the 2-sigma limit
for measuring standard samples as obtained from a collaborative study of
the method (ref. 4).
21
-------
Figure 4: Sample calibration curve (deflection in percent of chart versus
PPM CO by volume, plotted on a calibration sheet that records the location,
date, operator, analyzer number and range, recorder type and serial number,
flow rate, cell pressure, cylinder numbers and pressures for the zero and
upscale gases, and the zero and span control settings).
22
-------
results, have that calibration gas reanalyzed. Use the best-fit curve as
the calibration curve. If the calibration curve deviates more than ±2
percent of full scale from a straight line between zero and the span point,
the calibration curve (rather than a linear approximation) should be used
for reducing field data.
26. In certain situations, the supervisor may request that the cali-
bration be repeated (replicated). In this case, obtain both sets of data
and follow his instructions for preparing a calibration curve.
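The recheck tolerance of step 25 can be expressed as a short Python routine (an illustrative sketch; the function names are ours):

```python
def recheck_tolerance_ppm(certified_ppm, two_sigma_ppm=9.0, gas_accuracy=0.02):
    """Allowed deviation of a calibration point from the smooth curve:
    +/-(9 + 0.02 * Cs) ppm, where 9 ppm is the 2-sigma limit from the
    collaborative study and 0.02 is the stated gas accuracy."""
    return two_sigma_ppm + gas_accuracy * certified_ppm

def point_needs_recheck(measured_ppm, curve_ppm, certified_ppm):
    """True if the point deviates from the curve by more than the tolerance."""
    return abs(measured_ppm - curve_ppm) > recheck_tolerance_ppm(certified_ppm)

print(recheck_tolerance_ppm(337.5))              # 45-percent point -> 15.75 ppm
print(point_needs_recheck(350.0, 337.5, 337.5))  # 12.5 ppm off -> False (accept)
print(point_needs_recheck(360.0, 337.5, 337.5))  # 22.5 ppm off -> True (recheck)
```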
2.2.3.2 Zero and Span Calibration
Zero and span checks are performed before and after each sample evalua-
tion or as directed by the supervisor.
Note: In NSPS tests where only CO values close to the standard are of
interest, two calibrating gases with concentrations bracketing the standard
(e.g., 0.8 and 1.2 times the standard) should be used in lieu of the zero
and span check as described here. Using two calibrating gases would increase
the confidence in the data close to the standard and could allow for linear
interpolation between the two upscale points since they are reasonably
close together, even for an otherwise nonlinear calibration curve.
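The linear interpolation between two bracketing gases mentioned in the note can be sketched as follows (illustrative; the function name and the example readings are ours):

```python
def interpolate_bracketed(chart_reading, lo_point, hi_point):
    """Linearly interpolate a CO concentration between two bracketing
    calibration gases (e.g., 0.8 and 1.2 times the standard).

    lo_point, hi_point -- (chart_reading, concentration_ppm) pairs
    """
    (r0, c0), (r1, c1) = lo_point, hi_point
    return c0 + (chart_reading - r0) * (c1 - c0) / (r1 - r0)

# Bracketing a 500 ppm standard with 400 and 600 ppm gases that read
# 40 and 60 percent of chart: a 50 percent reading interpolates to 500 ppm.
print(interpolate_bracketed(50.0, (40.0, 400.0), (60.0, 600.0)))  # -> 500.0
```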
1. Connect the calibrating gas with concentration corresponding to
100-percent span (the span gas), or other values as directed by the super-
visor, to the analyzer.
2. Open the gas cylinder pressure valve and open the secondary pres-
sure valve and let the sampler pump pull from the cylinder as needed.
3. Set the sample flow rate as read by the rotameter (read the widest
part of the float) to the value to be used when sampling.
4. Let the span gas flow long enough to establish a stable trace on
the strip-chart recorder; allow at least 5 minutes. Mark the chart trace
as an unadjusted span. Record the unadjusted span reading in ppm on the form
in fig. 5, under the column entitled, "Unadjusted Calibration."
5. Disconnect the span gas.
6. Connect zero gas to the analyzer.
7. Open the gas cylinder pressure valve and adjust the secondary
pressure valve until the secondary pressure gauge reads the pressure recom-
mended by the manufacturer.
8. Set the sample flow rate as read by the rotameter to the value
that is used when sampling.
23
-------
Figure 5. Sample zero and span check sheet (a form with columns for the
date, operator, unadjusted and adjusted calibration readings, new control
knob settings, and cylinder pressures).
-------
9. Let the zero gas flow long enough to establish a stable zero
trace on the strip-chart recorder; allow at least 5 minutes. Mark the
chart trace as an unadjusted zero. Record the unadjusted zero reading in
ppm on the form in fig. 5, under the column entitled, "Unadjusted Calibra-
tion."
10. Adjust the zero control knob until the trace corresponds to the
true zero setting. Let the zero gas flow until a stable trace is obtained.
Mark the chart trace as an adjusted zero.
11. Disconnect the zero gas.
12. If the unadjusted zero was within ±9 ppm of zero and the un-
adjusted span was within ±9 ppm of its known value, no adjustments are
required and sampling can be resumed. If either or both zero and span
checks are outside the above limits, continue with steps 13 through 16.
13. Reconnect the span gas and let it flow until the analyzer has
stabilized; then adjust the span control until the deflection on the strip
chart corresponds to the span gas concentration in ppm using the calibra-
tion curve as illustrated in fig. 4. Let the strip-chart trace stabilize.
Mark the chart trace as an adjusted span with the span gas concentration
in ppm.
14. Disconnect the span gas.
15. If a span adjustment greater than 2 percent of span is required,
repeat procedures 6 through 13 until no adjustments are required.
16. Lock the zero and span controls.
Note: The second calibrating gas should be read at this point if
bracketing the standard as described in the above note is desired.
17. Record the following information on the check sheet (see fig. 5).
Record a and b under "New Control Knob Settings," and c and d under
"Cylinder Pressure."
a. Zero control knob position,
b. Span control knob position,
c. Zero gas cylinder pressure (read first-stage pressure gauge),
d. Span gas cylinder pressure (read first-stage pressure gauge).
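The step 12 decision can be written as a short Python check (illustrative; the names are ours):

```python
def zero_span_ok(unadjusted_zero_ppm, unadjusted_span_ppm, span_gas_ppm,
                 limit_ppm=9.0):
    """Step 12: no adjustment is required if the unadjusted zero is
    within +/-9 ppm of zero and the unadjusted span is within +/-9 ppm
    of the span gas value."""
    return (abs(unadjusted_zero_ppm) <= limit_ppm and
            abs(unadjusted_span_ppm - span_gas_ppm) <= limit_ppm)

print(zero_span_ok(4.0, 745.0, 750.0))   # -> True: resume sampling
print(zero_span_ok(12.0, 745.0, 750.0))  # -> False: continue with steps 13-16
```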
2.2.4 Orsat Analyzer Calibration
Follow the procedures given on pp. 11 and 12 of the guideline
entitled, "Gas Analysis for Carbon Dioxide, Excess Air and Dry Molecular
Weight" (ref. 2).
25
-------
2.3 PRESAMPLING PREPARATION
2.3.1 Preliminary Site Visit (Optional)
The main purpose of a preliminary site visit is to gather informa-
tion needed to design and implement an efficient source test. Prior
preparation prevents unwarranted loss of time, unnecessary expense, and
injury to test and/or plant personnel. A test plan based on a thorough
set of parameters will yield more precise and accurate results. This
preliminary on-site investigation is optional and not
a requirement. An experienced test group can, in most cases, obtain
sufficient information about the source through communications with the
plant engineer. The information should include pictures (or diagrams) of
the facilities.
2.3.1.1 Process (Background Data on Process and Controls). It is recom-
mended that the tester, before a preliminary site visit is made, become
familiar with the operation of the plant. Data from similar operations
that have been tested should be reviewed.
2.3.1.2 Sampling Site Preparedness. Each facility tested should provide
an individual who understands the plant process and has the authority to
make decisions concerning plant operation to work with the test team.
This would include decisions concerning whether the plant would be
operated at normal conditions or at rated capacity. This individual or
individuals will supervise installation of ports, the sampling platform,
and electrical power. If the above installations are already in exis-
tence, they should be examined for their suitability for obtaining a
valid test and for overall safety conditions. If the sampling plat-
form, port size, and locations are sufficient, the diameter, area of
the stack, and wall thickness should be determined. If ports have to be
installed, specify at least 3-inch ports (4-inch is preferred) with plugs.
Port locations should be based upon Method 1 of the Federal Register
(ref. 5). One electric drop with 115 volt and 20 amp service should be
available at the test facility.
2.3.1.3 Stack Gas Conditions. The following can be determined on the
initial site survey, either by measurement or estimation:
(1) Ts,avg = Approximate stack gas temperature.
(2) Ps = The static pressure (positive or negative).
26
-------
(3) ΔPmax and ΔPmin = The maximum and minimum velocity pressures.
(4) B = Approximate moisture content.
(5) Ms = Molecular weight calculated from approximate gas
constituent concentrations.
The above parameters can be roughly determined using an inclined
manometer (0 to 5 inches of water), a type-S pitot tube, and a manual thermometer,
or thermocouple, attached to the pitot tube with a potentiometric readout
device. The moisture content (approximate) can be determined by wet-
bulb - dry bulb method, and the gaseous constituents by hand-held indi-
cator kits. Nomographs are useful in checking and estimating the pre-
liminary required data (ref. 6).
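The dry molecular weight in item (5) can be estimated from the approximate constituent percentages with the Method 3 relation; this Python sketch is illustrative only, and the example percentages are assumed:

```python
def dry_molecular_weight(pct_co2, pct_o2, pct_n2_plus_co):
    """Estimate the dry stack gas molecular weight (lb/lb-mole) from
    dry-basis constituent percentages, as in Method 3:
        Md = 0.440(%CO2) + 0.320(%O2) + 0.280(%N2 + %CO)
    """
    return 0.440 * pct_co2 + 0.320 * pct_o2 + 0.280 * pct_n2_plus_co

# An assumed flue-gas estimate: 12% CO2, 6% O2, and 82% N2 + CO.
print(round(dry_molecular_weight(12.0, 6.0, 82.0), 2))  # -> 30.16
```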
2.3.1.4 Methods and Equipment for Transporting Apparatus to Test Site.
Ropes, block and tackle, and other hoisting equipment belong in the
inventory of any stack sampler. The initial site visit should include
the drafting of a preliminary plan by plant personnel and the test team
for transporting the equipment to the sampling site. Electric forklifts
should be utilized when at all possible. In addition to the above, it is
recommended, when permissible, that pictures be taken of the hoisting area
and sampling area, so that further discussions (either by letter or tele-
phone) will be facilitated.
2.3.2 Apparatus Check Prior to Packing
Each item to be used should be visually checked for damage and/or
excessive wear before packing. Items should be repaired or replaced as
applicable, if judged to be unsuitable for use by visual inspection.
Table 1 is designed to serve as a sample checklist for the three
phases of a field test. It is meant to serve as an aid to the individuals
concerned with procuring and checking the required equipment and as a
means for readily determining the equipment status at any point in time.
The completed form should be dated, signed by the field crew supervisor,
and filed in the operational log book upon completion of a field test.
Completing the checklist includes initiating the replacement of worn or
damaged items of equipment. Procedures for performing the checks are given in the appro-
priate subsections of this operations manual; a check is placed in the
proper row and column of table 1 as the check/operation is completed.
Each team will have to construct its own specific checklist according to
the type of sampling train and equipment it uses.
27
-------
Table 1: Apparatus Checklist for Carbon Monoxide Emissions Measurements
(The checklist is headed by the test site, crew supervisor, and date. For
each item of equipment it provides columns for a visual check for damage
and a performance and/or calibration check, grouped by phase of the field
test. Listed items include sampling train components such as the probe,
type-S pitot tube, connecting lines, and stack gas temperature measuring
system; wash bottles and sample storage; laboratory items such as glass
weighing dishes, desiccator, and analytical balance; and tools and equip-
ment such as equipment transportation, safety equipment, tools and spare
parts, and miscellaneous supplies and equipment.)
The inclusion of spare parts, particularly those not readily available
at the site, frequently pays rich dividends in reduced downtime.
2.3.3 Source Sampling Tools and Equipment
The need for specific tools and equipment will vary from test to
test. A listing of the most frequently used tools and equipment is given
below.
1. Transportation Equipment
a. A lightweight hand truck that can be used to transport cases
and be converted to a four-wheel cart for supporting the meter box control
unit.
28
-------
b. A 1/2-inch, continuous-filament, nylon rope with a large boat
snap and snatch block for raising and lowering equipment on stacks and
roofs.
c. A tarpaulin or plastic to protect equipment in case of rain.
A sash cord (1/4-inch) for securing equipment and tarpaulin.
d. One canvas bucket is useful for transporting small items up
and down the stack.
2. Safety Equipment
a. A safety harness with nylon and steel lanyards, large-throat
snap hooks for use with lanyards for hooking over guard rails or safety
line on stack.
b. A fail-safe climbing hook for use with climbing harness when
climbing ladders having a safety cable.
c. Hard hats with chin straps and winter liners. Gas masks,
safety glasses, and/or safety goggles.
d. Protective clothing including the following: appropriate
suits for both heat and cold, gloves (both asbestos and cloth), and
steeltoed shoes.
e. Steel cable (3/16-inch) with thimbles, cable clips, and
turn buckles. These are required for installing a safety line or
securing equipment to the stack structure.
3. Tools and Spare Parts (optional)
a. Electrical and Power Equipment
1. Circular saw
2. Variable-voltage transformer
3. Variable-speed electrical drill and bits
4. Ammeter-voltmeter-ohmmeter (VOM)
5. Extension cords, light (No. 14 AWG), 2 x 25 ft
6. Two to three wire electrical adapters
7. Three-wire electrical triple taps
8. Thermocouple extension wire
9. Thermocouple plugs
10. Fuses
11. Electrical wire
b. Tools
1. Tool boxes (one large, one small)
29
-------
2. Screwdrivers
a. one set, flat blade
b. one set, Phillips
3. C-clamps (2) 6-inch, 3-inch
c. Wrenches
1. Open-end set, 1/4-inch to 1-inch
2. Adjustables (12-inch, 6-inch)
3. One chain wrench
4. One 12-inch pipe wrench
5. One Allen wrench set
d. Miscellaneous
1. Silicone sealer
2. Silicone vacuum grease (high temperature)
3. Pump oil
4. Manometers (gauge oil)
5. Antiseize compound
6. Pipe fittings
7. Dry-cell batteries
8. Flashlight
9. Valves
10. Thermometer (dial) (6-inch through 36-inch)
11. Vacuum gauge
12. SS tubing (1/4-inch, 3/8-inch,
1/2-inch) short lengths
13. Heavy-duty wire (telephone type)
14. Adjustable packing gland
2.3.4 Package Equipment for Shipment
Equipment should be packed under the assumption that it will receive
severe treatment during shipment and field operation. Each item should be
packaged as follows:
1. Probes, pumps, and condenser should be packed in cases or wooden
boxes filled with packing material or lined with styrofoam.
2. Rotameters, needle valves, and all small, glass parts should be
individually packed in a shipping container.
3. For integrated samples, it is advantageous that the rigid con-
tainer for the sampling bag serve also as its shipping container.
30
-------
4. The Orsat should be disassembled, and each item individually
packed in suitable packing material and rigid containers. It is recom-
mended that spare parts and the absorbent solution be shipped in another
shipping container.
5. The NDIR analyzer is a self-contained instrument that should be
transported in its original packing carton as initially shipped by the
manufacturer or an equivalent carton to ensure the safety of the analyzer.
2.4 ON-SITE MEASUREMENTS
The on-site measurement activities include transporting the equipment
to the test site, unpacking and assembling the equipment, confirming stack
dimensions and traverse points (such preliminary determinations should be
accomplished in a site visit), sampling, sample recovery (for integrated
sampling), and data recording.
2.4.1 Transport of Equipment to the Sampling Site
The most efficient means of transporting or moving the equipment
from floor level to the sampling site (as decided during the preliminary
site visit) should be used to place the equipment on-site. Care should be
exercised against damage to the test equipment during the moving phase.
Utilization of plant personnel or equipment (winches and forklifts) under
close supervision in moving the sampling gear is recommended. The CO2 concen-
tration determination (Orsat analysis as described in the Quality Assurance
Document for Method 3, ref. 2) and the NDIR analysis of CO should be per-
formed away from the stack or sampling area if at all possible. In most
cases, the sample-recovery area can be used for gas determinations.
2.4.2 Preliminary Measurements and Setup
2.4.2.1 Stack Dimensions. Measure the stack dimensions according to sub-
section 2.2.3 of the Quality Assurance Document of this series for Method
2 (ref. 1). Determine the number of traverse points by Method 1 or check
traverse points as determined from the preliminary site visit.
2.4.2.2 Stack Temperature and Velocity Heads. Set up and level the dual
inclined manometer and determine the minimum and maximum velocity head (ΔP)
and the stack temperature (Ts). This is done most efficiently with a
type-S pitot tube with a temperature-sensing device attached.
31
-------
The ΔP's are determined with an inclined manometer by drawing the pitot
tube across the stack diameter in two directions (circular stack with 90°
traverses).
2.4.3 Sampling
The on-site sampling includes preparation and assembly of the sampling
train, an initial leak test, insertion of the probe into the stack, sealing
the port, sampling proportionally either from the vicinity of the stack
centroid or while traversing, recording of data, and a final leak test of
the sampling system. Sampling is the foundation of source testing. More
problems in testing result from poor or incorrect sampling than from any
other part of the measurement process. The analytical process (laboratory)
can never correct for errors made in the field, by either poor judgment or
instrument failure. If the initial site survey, apparatus check and cali-
bration, and preliminary measurement and setup on-site have been imple-
mented properly, the testing should go smoothly with a minimal amount of
effort and crises.
2.4.3.1 Sampling Train Assembly. Unpack and reassemble the sampling train
using the identical parts that made up the train that was checked leak-
free just prior to shipping. All parts should be inspected for shipping
damage. Once assembled, the sampling train should again be leak-tested
using the same procedures as in the preshipping checks (sec. 2.2.1.1—
Continuous Sampling Train—or 2.2.1.2—Integrated Sampling Train).
After the reassembled system checks out as satisfactorily leak free,
transport the system to the stack to be sampled by disconnecting only the
probe from the condenser, and only if necessary; that is, the preferred
technique is not to alter the system at all, but, for safe portability up
the stack, a break in the system at the condenser is convenient and
involves little risk to the system performance.
Field sampling most often will be with the integrated sampling system;
otherwise, the NDIR analyzer must be in the vicinity of the sampling port
or a long sampling line is required. Continuous sampling is used for
specialized problems, such as the monitoring of short-term CO emissions—
the identification of fine structure in the emission profile in time—or, in
those industries, such as petroleum refineries, for which Federal regulations
require continuous monitoring of CO emissions or some equivalent con-
tinuous check. Even with continuous measurement of CO, however, an
integrated sample may be used to determine the CO2 concentration. These
procedures have been described in the Guidelines for Method 3 (ref. 2).
2.4.3.2 Procedures for Continuous Sampling.
1. With the equipment in place, reconnect the probe to the condenser.
2. Repeat the leak test (sec. 2.2.1).
3. Repeat the zero and span calibration (sec. 2.2.3.2).
4. Remove the plug or cap from the probe end and confirm the position
of the glass-wool filter; place the probe in the stack with the probe tip
in the vicinity of the stack centroid and at least 12 inches from the
stack wall.
5. Plug the sampling port to prevent dilution of the stack gas by
in-leakage of ambient air. Dilution is particularly serious when the
stack pressure is negative.
6. Adjust the sample gas flow rate to the desired value and purge
the system by drawing a volume of sample gas through the system equal to
at least five times the sampling system volume (the sampling train volume
plus the volume of the gas lines in the analytical equipment).
7. Begin recording of sample CO concentration.
8. Record sample gas flow.
9. If used, check the strip-chart recorder for proper operation:
a. Chart speed-control setting,
b. Gain control setting,
c. Ink trace for readability,
d. Excess noise,
or periodically record the measured CO concentration as a function of time.
10. Adjust the flow through the NDIR analyzer in accordance with the
manufacturer's recommendations and operational experience.
11. Upon completion of the sampling period, remove the probe and plug
the open end.
12. Repeat the zero and span calibration (sec. 2.2.3.2).
13. Record the new zero and span settings or incorporate them into
the strip-chart record.
14. Repeat the leak test (sec. 2.2.1).
15. Draw an integrated sample for the Method 3 determination of CO2,
using the procedures established in the appropriate Guidelines Document
(ref. 2).
Note: While the alternative method of measuring the weight increase
in the ascarite over the sampling period is simpler in princi-
ple, this method has yielded unsatisfactory measurements and
is therefore not recommended (ref. 4).
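The purge requirement in step 6 (drawing at least five system volumes of sample gas) translates directly into a minimum purge time. The sketch below is illustrative only; the function name and units are assumptions, not part of the method:

```python
def min_purge_time_min(system_volume_l, flow_rate_l_per_min, purge_factor=5):
    """Minimum purge time needed to draw `purge_factor` system volumes
    (sampling train plus analyzer gas lines) at the given flow rate.
    Step 6 of the continuous procedure requires a factor of at least 5."""
    if flow_rate_l_per_min <= 0:
        raise ValueError("flow rate must be positive")
    return purge_factor * system_volume_l / flow_rate_l_per_min

# For example, a 2-liter system sampled at 1 L/min requires
# 5 * 2 / 1 = 10 minutes of purging before readings are recorded.
print(min_purge_time_min(2.0, 1.0))  # 10.0
```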
2.4.3.3 Procedures for Integrated Sampling.
1. With the sampling train in place on the stack, reconnect the
probe to the condenser.
2. Repeat the leak test for the integrated gas sampling train
(sec. 2.2.2). This procedure leaves the stopcocks positioned as in fig.
3. When the system is declared adequately leak free, the sample stopcock
should be rotated 90° clockwise to isolate the sample line from the pump
but not from the bag line. The stopcocks then have the configuration
illustrated in figure 2.
3. Remove the plug from the end of the probe and check the
position of the glass-wool filter.
4. Place the probe in the stack with the probe tip in the vicin-
ity of the stack centroid and at least 12 inches from the stack wall.
5. Plug the sampling port with wet asbestos or other suitable
material.
6. Rotate the sample stopcock 90° clockwise to couple the sample
line directly to the pump and commence purging (fig. 6).
7. Purge the system by drawing a volume of sample gas equal to
at least five times the volume of the sampling train.
8. Rotate the sample stopcock 90° clockwise to couple the sample
line to the bag line; rotate the pump stopcock 90° counterclockwise to
couple the pump to the container (fig. 7).
9. Sample at a rate proportional to the stack gas velocity as
monitored by a type-S pitot tube. The rate of sampling is varied accord-
ing to the variation of the square root of the velocity pressure differen-
tial, i.e., sampling rate as indicated by the rate meter is set and sub-
sequently adjusted according to the values of √Δp.
Figure 6. Stopcock configuration for purging (integrated sampling).
Figure 7. Stopcock movement to assume sampling configuration
(integrated sampling).
(Both figures diagram the pump stopcock, sample stopcock, pump, sample
line, container line, and bag line.)
10. Disconnect the flexible sampling bag and remove it to a
suitable area for performing the Orsat analysis. Use the procedures des-
cribed in the Guidelines for Method 3 (ref. 2). The analysis should be
performed as soon as possible, but never more than 4 hours after sampling.
11. Record the CO2 concentration of the sample; take at least 3
successive readings.
12. Couple the sampling bag to the NDIR analytical equipment (the
analyzer should have been "on" for at least an hour).
13. Perform the zero and span calibration (sec. 2.2.3.2).
14. Purge the lines of the analyzer by drawing through them a
volume of sample gas equal to at least five times the analyzer volume.
15. Record the CO concentration of the sample as determined by the
NDIR analyzer; take at least 3 successive readings.
16. Compute the concentration of CO in the stack using eq. 10-1
of appendix A.
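The proportional-rate rule in step 9 (sampling rate varied as the square root of the pitot velocity pressure differential) can be sketched as follows; the function and argument names are illustrative assumptions, with the rate meter reset from a base setting whenever Δp changes:

```python
import math

def proportional_rate(base_rate, base_dp, current_dp):
    """Rate-meter setting proportional to stack gas velocity: velocity
    varies as the square root of the type-S pitot velocity pressure
    differential, so rate_new = rate_base * sqrt(dp_new / dp_base)."""
    if base_dp <= 0 or current_dp < 0:
        raise ValueError("velocity pressures must be nonnegative")
    return base_rate * math.sqrt(current_dp / base_dp)

# If the initial setting was 1.0 L/min at dp = 0.50 in. H2O and dp rises
# to 0.72 in. H2O, the rate is reset to 1.0 * sqrt(0.72/0.50) = 1.2 L/min.
print(proportional_rate(1.0, 0.50, 0.72))  # 1.2
```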
2.5 POSTSAMPLING OPERATIONS
2.5.1 Data Inspection
After the analyses have been performed but before the equipment is
disassembled, the measured values of CO concentration should be averaged
and inspected for gross error or inconsistency. Coarse agreement should
exist with the predictions of combustion nomographs, which estimate CO2,
CO, and O2 concentrations when the fuel composition is known (ref. 6).
2.5.2 Equipment Disassembly, Inspection, and Packing
Once the field data have been declared acceptable as judged by the points
stated above, the sampling train can be removed from the stack, disassembled,
and repacked for shipment. Defects or damage to any part of the sampling
train or analytical equipment should be noted on the checklist (Table 1) for
future action.
Damage that was not detected during the sampling should also be in-
corporated into the field data sheet. An estimate of the bias that such
damage could introduce into the measurements should be made and, if signifi-
cant, should be included in the field test report.
The equipment should be repacked in the same containers used to ship
it to the field site. Procedures and precautions for packing are identi-
cal to those used in shipping the equipment to the field site (sec. 2.3.4).
SECTION III MANUAL FOR FIELD TEAM SUPERVISOR
3.0 GENERAL
The term "supervisor," as used in this document, applies to the indi-
vidual in charge of a field team. He is directly responsible for the
validity and the quality of the field data collected by his team. He may
be a member of an organization that performs source sampling under con-
tract to government or industry, a government agency performing source
sampling, or an industry performing its own source sampling activities.
It is the responsibility of the supervisor to identify sources of
uncertainty or error in the measurement process for specified situations
and, if possible, to eliminate or minimize them by applying appropriate
quality-control procedures to assure that the data collected are of accept-
able quality. Specific actions and operations required of the supervisor
for a viable quality-assurance program are summarized in the following list.
1. Monitor/Control Data Quality
a) Direct the field team in performing field tests according to
the procedures given in the Operations Manual.
b) Perform or qualify results of the quality-control checks
(i.e., assure that checks are valid).
c) Perform necessary calculations and compare quality-control
checks to suggested performance criteria.
d) Make corrections or alter operations when suggested perfor-
mance criteria are exceeded.
e) Forward qualified data for additional internal review or
to user.
2. Routine Operation
a) Obtain from team members immediate reports of suspicious
data or malfunctions. Initiate corrective action or, if
necessary, specify special checks to determine the trouble;
then take corrective action.
b) Examine the team's log books periodically for completeness
and adherence to operating procedures.
c) Approve data sheets, data from calibration checks, etc., for
filing.
3. Evaluation of Operations
a) Evaluate available alternative(s) for accomplishing a given
objective in light of experience and needs.
b) Evaluate operator training/instructional needs for specific
operations.
Consistent with the realization of the objectives of a quality assurance
program as given in section I, this section provides the supervisor with
brief guidelines and directions for:
1. Collection of information necessary for assessing data quality
on an intrateam basis.
2. Isolation, evaluation, and monitoring of major components of sys-
tem error.
3. Collection and analysis of information necessary for controlling
data quality.
3.1 ASSESSMENT OF DATA QUALITY (INTRATEAM)
Intrateam or within-team assessment of data quality as discussed herein
provides for an estimate of the precision of the measurements made by a
particular field team utilizing an NDIR analyzer. Precision in this case
refers to replicability, i.e., the variability among replicates, and is
expressed as a standard deviation. This technique does not provide the
information necessary for estimating measurement bias (see subsection A.1.2
for a discussion of bias) which might occur, for example, from failure to
collect a representative sample, sampling train leaks, or inadvertent expo-
sure of the sample to ambient air. However, if the operating procedures
given in the Operations Manual (section II) are followed, the bias should
be small in most cases. The performance of an independent quality audit
that would make possible an interteam assessment of data quality is sug-
gested and discussed in subsection 4.2 of the Manual for Manager of Groups
of Field Teams.
The field data are used to derive a confidence interval for the
reported data. The primary measurement of interest here is the percent
CO in the sample. Two sets of data exist as follows:
1. The Orsat determinations of CO2 concentration;
2. The NDIR determinations of CO concentration.
The latter (#2) can be either a continuous record corresponding to a
continuous chart trace or to a series of NDIR measurements recorded over
a period of time (the continuous sampling method) or it can be the NDIR
analysis of the contents of the same flexible sampling bag that was used
in making the Orsat measurements of CO (the integrated sampling method).
Data quality checks are described for each method in the following
paragraphs.
3.1.1 Continuous Sampling Method
In the continuous sampling method, the data available consist of
1) a series of CO readings, either on a strip chart or as a written, run-
ning record, including periodic measurements of zero and span drift; and
2) three Orsat measurements of the CO2 concentration of a single integrated
bag sample.
The CO2 concentration to be used in equation 10-1 (appendix A), which
is the equation for expressing the CO concentration, is the mean of the
three Orsat measurements of CO2 concentration in the integrated bag sample.
Guidelines given in the quality assurance document for Method 3 (ref. 2)
should be followed in making CO2 measurements.
A large error in the measurement of CO2 does not have a large effect on
the accuracy of the final stack CO concentration as calculated by equation
10-1 of appendix A. Typical stack CO2 concentrations are in the range of
12 percent or less. A 50-percent error in the CO2 measurement, therefore,
produces an error of 6 percent or less in the calculated CO concentration.
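A rough numerical check of this insensitivity, assuming equation 10-1 takes the form C(stack) = C(NDIR) × (1 − F_CO2), where F_CO2 is the volume fraction of CO2 removed by the ascarite (the exact form should be taken from appendix A):

```python
def stack_co(co_ndir_ppm, co2_fraction):
    """Stack CO concentration from the CO2-free NDIR reading, assuming
    eq. 10-1 has the form C_stack = C_NDIR * (1 - F_CO2)."""
    return co_ndir_ppm * (1.0 - co2_fraction)

# A 50-percent error in a 12-percent CO2 measurement shifts the factor
# (1 - F_CO2) only from 0.88 to 0.82, i.e. by 6 percentage points:
nominal = stack_co(500.0, 0.12)   # 440.0 ppm
high    = stack_co(500.0, 0.18)   # 410.0 ppm
print(nominal, high)
```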
The continuous record of CO concentrations must be corrected for
drifts in analyzer zero and span. Periodically, the continuously reading
NDIR instrument should be interrupted for a zero and span calibration
(2.2.3.2). For those systems using a recorder, the unadjusted zero and
span should be incorporated into the strip chart, so that they can be used
to correct readings for drift in both zero and span.
The procedures for correcting the chart readings for zero and span
drifts are as follows:
1. Obtain the strip-chart record for the sampling period in question.
The record must have adjusted span and zero traces at the beginning
of the sampling period and unadjusted span and zero traces at the
end of the sampling period.
2. Using a straight edge, draw a straight line from the adjusted zero
at the start of the sampling period to the unadjusted zero at the
end of the sampling period. This line represents the zero baseline
to be used for the sampling period.
3. Using a straight edge, draw a straight line from the adjusted span
at the start of the sampling period to the unadjusted span at the
end of the sampling period. This line represents the span reference
line to be used for the sampling period.
4. Read the zero baseline in percent of chart at the midpoint of each
hour interval (or other selected time interval).
5. Read the span reference line in percent of chart at the midpoint
of each time interval as selected in step 4.
6. Determine the time interval averages by using a transparent ob-
ject, such as a piece of clear plastic, with a straight edge at
least 1 inch long. Place the straight edge parallel to the
horizontal chart-division lines. For the interval of interest
between two vertical time lines, adjust the straight edge between
the lowest and highest points of the trace in that interval,
keeping the straight edge parallel to the chart-division lines,
until the total area above the straight edge bounded by the
trace and the time lines is estimated to equal the total area
below the straight edge bounded by the trace and time lines.
7. Subtract the zero baseline of each interval from: a) the span
reference and b) the time interval average.
8. Determine the percentage of span for each interval (the b/a ratio
from step 7) and convert to ppm by multiplying it by the span
concentration.
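Steps 1 through 8 above amount to a linear interpolation of the zero and span traces followed by a ratio. A minimal sketch of the arithmetic, with illustrative variable names (times expressed as fractions of the sampling period; chart values in percent of chart):

```python
def corrected_ppm(reading, t, zero_start, zero_end, span_start, span_end,
                  t_start, t_end, span_conc_ppm):
    """Correct a chart reading for linear zero and span drift:
    interpolate the zero baseline (steps 2, 4) and span reference line
    (steps 3, 5) at time t, subtract the zero from both the reading and
    the span (step 7), and scale the b/a ratio by the span gas
    concentration (step 8)."""
    frac = (t - t_start) / (t_end - t_start)   # assumes t_end > t_start
    zero = zero_start + frac * (zero_end - zero_start)
    span = span_start + frac * (span_end - span_start)
    ratio = (reading - zero) / (span - zero)
    return ratio * span_conc_ppm

# Midpoint of a run where zero drifts 0 -> 2 % of chart, span 90 -> 92 %,
# span gas 1000 ppm: a reading of 46 % gives (46-1)/(91-1)*1000 = 500 ppm.
print(corrected_ppm(46.0, 0.5, 0.0, 2.0, 90.0, 92.0, 0.0, 1.0, 1000.0))  # 500.0
```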
When the data are collected by a series of recorded measurements,
(i.e., no strip chart record is available), the zero and span should be
checked and adjusted for each individual reading or once an hour, whichever
time interval is longer. Performance specifications limit both zero and
span drift to a maximum of 10 percent of full scale in 8 hours. Hourly
drifts in excess of about 3 percent of full scale are symptoms of instru-
ment instability and should be monitored closely.
3.1.2 Integrated Sampling Method
The only difference in the intrateam assessment of measurement pre-
cision using an integrated sample as opposed to continuous sampling is that
the NDIR measurements of CO are made from the contents of the same flexible
bags used in the Orsat determinations of CO2.
A minimum of three readings of both CO (NDIR) and CO2 (Orsat) concen-
tration is made on integrated samples. The mean and standard deviation of
each of these variables can be computed for three determinations as follows:
CŌ = (1/3) Σ COᵢ,  i = 1, 2, 3                      (1)
and
s{CO} = [ (1/2) Σ (COᵢ − CŌ)² ]^(1/2)               (2)
where
CŌ = the average of three (i = 1, 2, 3) determinations, ppm
COᵢ = the i-th CO determination after correction for CO2 removal, ppm
s{CO} = the standard deviation calculated from the three CO
measurements, ppm.
The average of the three measurements is reported as the carbon
monoxide concentration in the stack gas. The calculated standard deviation
can be used to place confidence limits on the measurements.
It is recommended that the CO data be reported with 90 percent confi-
dence limits as follows:
COₜ = CŌ ± 2.92 s{CO}/√3                            (3)
where
COₜ = the true mean of the integrated sample
CŌ = the experimental mean determined from 3 measurements
2.92 = the 95th percentile of the Student t-distribution with 2
degrees of freedom, which yields a 90-percent confidence interval
s{CO}/√3 = the calculated standard deviation of the mean of 3 measurements.
Limits constructed in this manner will contain the true mean CO concentration
of the bag approximately 90 percent of the time, assuming that the sampling
is not biased. Assessment of this assumption can be carried out by audits
in which an independent inspecting team prepares reference samples with
known CO, CO2, and, if desired, known levels of water vapor to be measured
by the field team (sec. 4.2).
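The computation behind equations 1 through 3 can be sketched as below; `statistics.stdev` uses the n − 1 divisor, which for three readings matches the divisor of 2 in equation 2:

```python
import statistics

def ci90_from_three(readings_ppm):
    """90-percent confidence limits on the mean of three CO readings
    (eq. 3): mean +/- 2.92 * s / sqrt(3), where 2.92 is the 95th
    percentile of Student's t with 2 degrees of freedom."""
    if len(readings_ppm) != 3:
        raise ValueError("expected exactly three readings")
    mean = statistics.mean(readings_ppm)
    s = statistics.stdev(readings_ppm)     # (n - 1) divisor, as in eq. 2
    half_width = 2.92 * s / (3 ** 0.5)
    return mean, mean - half_width, mean + half_width

mean, lo, hi = ci90_from_three([495.0, 500.0, 505.0])
print(round(mean, 1), round(lo, 1), round(hi, 1))
```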
3.2 MONITORING DATA QUALITY
In general, if the procedures outlined in the operations manual are
followed, the major sources of variability will be in control. It
is felt, however, that as a means of verification of data quality, as well
as a technique for monitoring personnel and equipment variability, two
quality control charts should be constructed and maintained as part of the
quality assurance program. The quality control charts will provide a basis
for action with regard to the measurement process: namely, whether the
measurement process is satisfactory and should be left alone, or whether the
process is out of control and action should be taken to find and eliminate
the causes of excess variability. In the case of this method, in which documented pre-
cision data are scarce, the quality control charts can be evaluated after
20 to 30 data points have been obtained to determine the range of variation
that can be expected under normal operating conditions.
The two recommended quality control charts are:
1. A range chart for the analyses performed in the field, which
should serve as an effective monitor of operator variability and,
to a lesser extent, of equipment variability, and
2. A chart for the differences in measured and known values (span drift),
as obtained from calibration checks, to monitor equipment and/or
operator variability, as well as systematic errors (biases).
Discussions of control charts and instructions for constructing and
maintaining them are given in many books on statistics and quality control,
such as in refs. 7 and 8.
It is good practice to note directly on control charts the reason for
out-of-control conditions, if determined, and the corrective actions taken.
It is also good practice to maintain control charts in large sizes, e.g.,
8-1/2 x 11 inches or larger, and to keep them posted on a wall for viewing
by all concerned, rather than to have them filed in a notebook (when the
analyzer is housed in a laboratory type facility).
3.2.1 Range Chart
Figure 8 is a sample control chart for the range. The chart was
constructed for a sample size of three; i.e., only three replicates per
field test are used. It is recommended here that the range be computed for
the first three analyses performed for a given field test.
For illustrative purposes, a standard deviation of 9 ppm* (see sec. 4.1.2)
for the measurement error was assumed in computing R̄ and the upper control
limit (UCL). In practice, the standard deviation and R̄ computed from actual
data should be used. (For small sample sizes (n ≤ 6) the lower control
limit (LCL) is effectively zero and is not given here.)
The R values are plotted sequentially by the supervisor as they are obtained
from each field test and connected to the previously plotted point with
a straight line. Corrective action, such as a review of operating technique,
should be taken any time one of the following criteria is exceeded:
1. One point falls outside the UCL.
2. Two out of three points from consecutive field tests fall in the
warning zone (between the 2σ and 3σ limits).
3. Points from seven consecutive field tests fall above the CL line.
Exceeding any one of the criteria will usually indicate poor technique.
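The limits in figure 8 follow from the standard range-chart factors for subgroups of three (d₂ = 1.693, d₃ = 0.888); a minimal sketch reproducing them:

```python
def range_chart_limits(sigma_ppm, d2=1.693, d3=0.888):
    """Center line and upper control limit for a range chart with
    subgroups of three (factors d2, d3 for n = 3):
    R_bar = d2 * sigma, UCL = R_bar + 3 * d3 * sigma.
    For n <= 6 the lower control limit is effectively zero."""
    r_bar = d2 * sigma_ppm
    ucl = r_bar + 3.0 * d3 * sigma_ppm
    return r_bar, ucl

r_bar, ucl = range_chart_limits(9.0)
print(round(r_bar, 1), round(ucl, 1))  # matches fig. 8: 15.2 and 39.2
```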
3.2.2 Difference Chart
A sample quality control chart for span drifts, i.e., the difference
between measured and known values of the calibration gas, to be maintained
on site by the field team, is shown in fig. 9. The chart was constructed using
a standard deviation of 9.0 ppm for the measurement error and assuming that
the test gas concentrations are accurately enough known not to increase this
variability substantially. It is suggested that the chart, as set up in fig. 9,
be used until sufficient field data are available to compute new limits.
For each regularly scheduled span check, compute
d = COₘ − COₜ                                       (4)
where
d = the difference in the measured and known concentrations of CO, ppm,
*A σ of 13 ppm was obtained from a collaborative study of the method (ref. 4).
Figure 8. Sample control chart for the range, R, of field analyses.
(Vertical scale: range of field analyses, 0 to 40 ppm. Action limit
(UCL) = 39.2 ppm; warning limit = 31.2 ppm; center line R̄ = 15.2 ppm.
UCL = R̄ + 3 d₃ σ{CO} = 15.2 + 3 × 0.888 × 9 = 39.2 ppm;
R̄ = d₂ σ{CO} = 1.693 × 9 = 15.2 ppm. Rows are provided below the chart
for check number (1 through 10), date, test leader, and problem and
corrective action.)
Figure 9. Sample control chart for calibration checks.
(Vertical scale: span drift d, −30 to +30 ppm. Action limits
(UCL, LCL) = ±27 ppm; warning limits = ±18 ppm; center line (CL) at 0.
Rows are provided below the chart for check number, date/time, operator,
and problem and corrective action.)
COₘ = the measured concentration of CO, ppm, and
COₜ = the true or known concentration of CO in the calibration gas, ppm.
Plot each d value on the quality control chart as it is obtained and connect
it to the previously plotted point with a straight line.
Corrective action, such as recalibrating the analyzer, performing
other equipment repair, and/or checking on proper operating procedures
should be taken any time one of the following criteria is exceeded:
1. One point falls outside the region between the lower and upper con-
trol limits.
2. Two out of three consecutive points fall in the warning zone, i.e.,
between the 2σ and 3σ limits.
3. Seven consecutive points fall on the same side of the center line.
Exceeding the first or second criterion indicates excessive span drift,
sudden changes in environmental conditions (e.g., temperature), or equipment
malfunction, and the analysis should be repeated after corrective action has
been taken. The third criterion, when exceeded, indicates a system bias due
to a faulty analyzer drifting in the same direction each time, eventually
resulting in the inability to properly span the analyzer.
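The three criteria can be applied mechanically to a running list of d values; a sketch assuming σ = 9 ppm, so the warning and action limits are ±18 and ±27 ppm as in figure 9:

```python
def d_chart_violations(d_values, sigma_ppm=9.0):
    """Apply the three out-of-control criteria to a sequence of span-drift
    differences d = CO_m - CO_t: (1) a point beyond the 3-sigma control
    limits, (2) two of three consecutive points in the 2-sigma to 3-sigma
    warning zone, (3) seven consecutive points on one side of zero."""
    ucl, warn = 3 * sigma_ppm, 2 * sigma_ppm
    flags = []
    for i, d in enumerate(d_values):
        if abs(d) > ucl:
            flags.append((i, "outside control limits"))
        window = d_values[max(0, i - 2):i + 1]
        if len(window) == 3 and sum(warn < abs(x) <= ucl for x in window) >= 2:
            flags.append((i, "2 of 3 in warning zone"))
        run = d_values[max(0, i - 6):i + 1]
        if len(run) == 7 and (all(x > 0 for x in run) or all(x < 0 for x in run)):
            flags.append((i, "7 points on one side of center line"))
    return flags

print(d_chart_violations([2, -3, 28, 5]))  # the 28-ppm point exceeds 27
```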
3.3 COLLECTION AND ANALYSIS OF INFORMATION TO IDENTIFY TROUBLE
In a quality assurance program, one of the most effective means of
preventing trouble is to respond immediately to indications of suspicious
data or equipment malfunctions. There are certain visual and operational
checks that can be performed while the measurements are being made to help
assure the collection of data of good quality. These checks are written
as part of the routine operating procedures in section II. In order to
effectively apply preventive-type maintenance procedures to the measurement
process, the supervisor must know the important variables in the process,
know how to monitor the critical variables, and know how to interpret the
data obtained from monitoring operations. These subjects are discussed in
the following subsections.
3.3.1 Identification of Important Variables
Determination of stack gas composition requires a sequence of operations
and measurements that yields, as an end result, a number that represents the
average percent of a component gas for that field test. There is no way of
knowing the accuracy, i.e., the agreement between the measured and the true
value, for a given field test. However, a knowledge of the important vari-
ables and their characteristics allows for the application of quality con-
trol procedures to control the effect of each variable at a given level
during the field test, thus providing a certain degree of confidence in the
validity of the final result.
A great many variables can affect the expected precision and accuracy
of measurements made by the NDIR method. Certain of these are related to
analysis uncertainties and others to instrument characteristics. Major
sources of error are:
1. Inaccuracy and Imprecision in the Stated CO Concentration of
Calibration Gases (ref. 9). There are two components of error involved;
one is the error in the original assay, and the second is due to the
deterioration of CO with time.
Large errors in the original assay should be detected when the gas is
first purchased by establishing its traceability to an NBS standard reference
material as described in section 2.1.13.2. Changes in concentration occur-
ring as a function of time will be detected at a given level when the gas
is reverified at six-month intervals.
2. CO2 Interference. CO2 is a major interference for most NDIR
analyzers of CO. The technique recommended in the reference method (appen-
dix A) is to remove the CO2 before analysis by passing the gas stream through
ascarite. The efficiency of the CO2 removal can be variable, depending on
the status of the ascarite. No compensation for variable CO2 removal exists,
so that significant errors can be introduced when the ascarite loses its ef-
ficiency. If the ascarite is not new, its CO2 removal efficiency should be
checked prior to use by passing a known concentration of CO2 (CO2-in-N2 cali-
bration gas) through the ascarite and observing the analyzer response. Also,
such a check performed after an NSPS test would help ensure data free from
error due to CO2 interference.
3. Water Vapor Interference. Water vapor is a positive interference
for all NDIR analyzers (refs. 9-13). The magnitude of the interference is a
function of the type of water vapor control equipment being used in the meas-
urement system and the operational state of the equipment.
Drying agents have proved to be effective in controlling water-vapor
interference, but they must be checked and replaced frequently when used on
samples characterized by high relative humidities (ref. 9).
Error due to water-vapor interference is not compensated for or corrected
by the zero and span calibrations. Its magnitude is monitored as part of
the auditing program by the performance of periodic water-vapor interference
checks.
4. Data-Processing Errors. Data processing, starting with reducing
the data from a strip chart record through the act of recording the measured
concentration, is subject to many types of errors. Perhaps the major source
of error is in reading averages from the strip chart record. This is a
subjective process, and even the act of checking a given time average does
not ensure its absolute correctness.
The magnitude of data processing errors can be estimated from, and con-
trolled by, the auditing program through the performance of periodic checks
and making corrections when large errors are detected. A procedure for
estimating the bias and standard deviation of processing errors is given
in section 4.1 of the Management Manual.
5. Zero Drift. Zero drift is defined as the change in instrument out-
put over a stated period of unadjusted, continuous operation when the input
concentration is zero.
Several variables contribute to zero drift. Some variables such as
variations in ambient room temperature, source voltage, and sample cell
pressure result in a zero drift that is not linear with time. Therefore,
performing a zero and span calibration does not correct for the component
of drift throughout the sampling period but rather just at the time the
calibration is performed.
Degradation of electronic components and increased accumulation of
dirt in the sample cell for example may result in a zero drift that is
linear with time. Periodic zero and span calibrations allow for correction
of this component of zero drift for the entire sampling period.
The importance of zero drift to data quality can be determined from
the results obtained from measuring control samples. For a drift that is
generally linear with time, it is valid to perform zero and span calibra-
tions before measuring control samples as part of the auditing process.
However, if the drift is a function of variations in temperature, voltage,
or pressure, zero and span calibrations should not be performed before
measuring control samples for auditing purposes. In this case, meeting
desired performance standards may require more frequent zero and span
calibrations or more rigid control of temperature, voltage, and pressure,
as appropriate.
6. Span Drift. Span drift is defined as the change in instrument out-
put over a stated time period of unadjusted, continuous operation when the
input concentration is a stated upscale value. For most NDIR analyzers,
the major component of span drift is zero drift and is corrected or con-
trolled as discussed above. The component of span drift other than zero
drift can be caused by either optical or electronic defects. If this com-
ponent of span drift is large or shows a continuous increase with time, the
manufacturer's manual should be followed for troubleshooting and correction
of the defect. The importance or magnitude of span drift can be determined
from the zero and span calibrations after each sampling period.
7. Excessive Noise. Noise is defined as spontaneous deviations from
a mean output not caused by input concentration changes. Excessive noise
may result when an analyzer is exposed to mechanical vibrations. Other
sources of noise include a high gain setting on the recorder, accumulation
of dirt on sample cell walls and windows, or loose dirt in the sample cell
(ref. 14).
Excessive noise is evidenced by either an extra broad strip-chart trace
or a narrow but erratic trace. The manufacturer's manual should be followed
for troubleshooting and correcting the cause.
3.3.2 How to Monitor Important Variables
System noise, zero drift, span drift, and sample cell pressure are monitored
as part of the routine operating procedures. Implementing an auditing program
could effectively monitor calibration gas concentration, water vapor inter-
ference, CO2 interference, and data-processing errors. Variations in ambient
room temperature and/or source voltage can be monitored with a minimum-maximum
thermometer and an a.c. voltmeter, respectively. Table 2 summarizes the
variables and how they can be monitored.
3.3.3 Optional Control Procedures
Additional measurements or modified procedures can be useful in
obtaining high quality data. Three recommendations are made here in
response to the most common causes of measurement error: calibration gas
error, ascarite failure, and bag leaks.
Table 2. Methods of monitoring variables

1. Calibration gas concentration: measurement of control samples as
part of the auditing program.
2. CO2 interference: check for CO2 interference by measuring
calibration gases of CO2 in N2.
3. Water vapor interference: water-vapor interference checks performed
as a part of the auditing program.
4. Data-processing errors: data-processing checks performed as a part
of the auditing program.
5. Zero drift: zero check and adjustment, if required, before and
after each sampling period as part of routine operating procedure.
6. Span drift: span check and adjustment, if required, before and
after each sampling period as part of routine operating procedure.
7. System noise: check of the strip-chart record trace for signs of
noise after each sampling period as part of routine operating procedure.
When operating properly, system noise should be less than 1 percent of
full scale.
8. Sample cell pressure variation: reading and recording of sample
cell pressure at the beginning and end of a sampling period as part of
routine operating procedure.
9. Temperature variation: minimum-maximum thermometer placed near the
analyzer, or any other temperature-indicating device, read periodically
throughout the sampling period. This would usually be done as a special
check. NDIR analyzers are sensitive to temperature changes; in field
testing this could, if not controlled, be the greatest source of
variability.
10. Voltage variation: a.c. voltmeter measuring the voltage to the
analyzer, read periodically throughout the sampling period. This would
usually be done as a special check.
3.3.3.1 Independent Back-up Determination of Bag CO Concentration. A num-
ber of alternative techniques exist for measuring CO in the concentration
range of interest for stack sampling (ref. 15). Relatively inexpensive,
portable instruments exist based upon such principles as:
1. The heat of combustion from the oxidation of CO in the presence
of a catalyst (MSA Model D Portable CO Indicator),
2. The current flow due to an electrochemical reaction limited by the
availability of CO (Ecolyzer Model 2800),
3. Solid state reactions or interactions (Bullard CO meter, CO-Dackel
meter, Emmet CO meter, others).
None of these methods are as yet acceptable as equivalent standard methods,
but they can, nevertheless, be very useful in identifying errors and pin-
pointing the source of error. Their addition to the gas analysis procedure
is generally inexpensive and potentially very valuable.
What is recommended is that an auxiliary measurement of CO concentra-
tion be made on the integrated bag sample that is required for both the
continuous method and the integrated method, using a measuring technique
other than NDIR.
That two independent measurements of CO concentration based upon dif-
ferent principles can agree adds greatly to measurement confidence. If one
of these two measurements is based on a chemical reaction, such as the
oxidation of CO, and the other on a physical method, such as NDIR, the cross
check is particularly valuable because the interferences and errors
characteristic of one method are not the same as those for the other. For
example, CO2 is a major interference for the NDIR technique, and removing
CO2 is required prior to the NDIR CO measurement. CO2 is not an inter-
ference for the CO oxidation technique. And so, while both methods yield
a measure of percent of CO by volume on a dry basis, the NDIR measurement
relies upon efficient CO2 removal for precision and accuracy. The chemical
technique does not. General agreement in measured percent of CO by the two
methods shows, among other conclusions, that the NDIR CO2 removal technique
is adequate. Conversely, agreement in the evaluation of the dry calibrating
gases, but continued disagreement in the measurement of the sample gas, may
mean inadequate CO2 removal from the sample gas passing through the NDIR
analyzer.
Temperature and humidity sensitivities also will be different, so
that useful information can be obtained by comparing measurements. The
auxiliary method will most likely be less precise than the NDIR technique.
Definite standards on how well the two methods should agree cannot be given
now due to insufficient data. However, at CO concentrations near 500 ppm,
differences as large as 50 ppm should be taken as manifesting a need to
trouble-shoot the system and reanalyze the sample.
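The cross check recommended above amounts to a single comparison. In this sketch the function name is an assumption, and treating the 50 ppm figure as a hard acceptance threshold (rather than a rough guide) is also an assumption:

```python
def cross_check(ndir_ppm, auxiliary_ppm, tolerance_ppm=50.0):
    """Compare NDIR and auxiliary CO readings; a difference beyond the
    tolerance suggests trouble-shooting and reanalysis (sec. 3.3.3.1)."""
    difference = abs(ndir_ppm - auxiliary_ppm)
    return difference <= tolerance_ppm, difference

ok, diff = cross_check(500.0, 540.0)    # 40 ppm apart: acceptable
bad, diff2 = cross_check(500.0, 560.0)  # 60 ppm apart: reanalyze
print(ok, bad)  # prints: True False
```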
3.3.3.2 Alternative Method for Incorporating CO2 Correction. Ascarite fail-
ure is troublesome in that it is difficult to spot. A CO2-in-N2 calibration
gas of known concentration should be measured before and after field tests
to assure and document the efficiency of the ascarite in removing CO2.
3.3.3.3 Replacement of the Flexible Bag. The successful use of a flexible
bag and the lung technique for drawing an integrated sample demands a leak-
free bag. Unfortunately, the bags often leak. This defect can be indicated
by a wide variation of the analyses of the three bags, drawn under essentially
constant stack conditions. Leak testing the bags before each field use as
recommended in section 2.1.6.2 should minimize this problem.
SECTION IV. GUIDELINES FOR THE MANAGER OF GROUPS OF FIELD TEAMS
4.0 GENERAL
The guidelines for managing quality assurance programs for use with
Test Method 10 - Determination of Carbon Monoxide Emissions from Stationary
Sources, are given in this part of the field document. This information is
written for the manager of several teams that measure source emissions and
for the appropriate EPA, State, or Federal Administrators of these programs.
It is emphasized that if the analyst carefully adheres to the operational
procedures and checks of Section II, then the errors and/or variations in
the measured values should be consistent with the performance criteria as
suggested. Consequently, the auditing routines given in this section
provide a means of determining whether the stack sampling test teams of
several organizations, agencies, or companies are following the suggested
procedures. The audit function is primarily one of independently obtaining
measurements and performing calculations where this can be done. The pur-
pose of these guidelines is to:
1. Present information relative to the test method (a functional
analysis) to identify the important operations and factors.
2. Present a methodology for comparing action options for improving
the data quality and selecting the preferred action.
3. Present a data quality audit procedure for use in checking adher-
ence to test methods and for validating that performance criteria are being
satisfied.
4. Present the statistical properties of the auditing procedure in
order that the appropriate plan of action may be selected to yield an accept-
able level of risk to be associated with the reported results.
These four purposes will be discussed in the order stated in the sec-
tions which follow. The first section will contain a functional analysis
of the test method, with the objectives of identifying the most important
factors that affect the quality of the reported data and of estimating the
expected variation and bias in the measurements resulting from equipment
and operator errors.
Section 4.2 contains several actions for improving the quality of the
data; for example, by improved analysis techniques, instrumentation, and/or
training programs. Each action is analyzed with respect to its potential
improvement in the data quality, as measured by its precision. These results
are then compared on a cost basis to indicate how to select the preferred
action. The cost estimates are used to illustrate the methodology. The
manager or supervisor should supply his own cost data and his own actions
for consideration. If it is decided not to conduct a data audit, sections
4.1 and 4.2 would still be appropriate, as they contain a functional analysis
of the reference method and of alternative methods or actions.
There are no absolute standards with which to compare the routinely
derived measurements. Furthermore, the taking of completely independent
measurements at the same time that the routine data are being collected
(e.g., by introducing two sampling probes into the stack and collecting two
samples simultaneously), although desirable, is not considered practical due
to the constrained environmental and space conditions under which the data
are being collected. Hence, a combination of an on-site system audit, in-
cluding visual observation of adherence to operating procedures and a quanti-
tative performance quality audit check, is recommended as a dual means of
independently checking on the source emissions data.
The third section contains a description of a data quality audit pro-
cedure. The most important variables identified in section 4.1 are con-
sidered in the audit. The procedure involves the random sampling of n stacks
from a lot size of N = 20 stacks (or from the stacks to be tested during a
3-month period, if less than 20) for which one firm is conducting the source
emissions tests. For each of the stacks selected, independent measurements
will be made of the indicated variables. These measurements will be used
in conjunction with the routinely collected data to estimate the quality of
the data being collected by the field teams.
The data quality audit procedure is an independent check of data col-
lection and analysis techniques with respect to the important variables.
It provides a means of assessing data collected by several teams and/or
firms with the potential of identifying biases/excessive variation in the
data collection procedures. A quality audit should not only provide an
independent quality check, but also identify the weak points in the measure-
ment process. Thus, the auditor, an individual chosen for his background
knowledge of the measurement process, will be able to guide field teams in
using improved techniques. In addition, the auditor is in a position to
identify procedures employed by some field teams which are improvements over
the currently suggested ones, either in terms of data quality and/or time
and cost of performance. The auditor's role will thus be one of aiding the
quality control function for all field teams for which he is responsible,
utilizing the cross-fertilization of good measurement techniques to improve
the quality of the collected and reported data.
The statistical sampling and test procedure recommended is sampling by
variables. This procedure is described in section 4.4. It makes maximum
use of the data collected; it is particularly adaptable to the small lot
size and consequently to small sample size applications. The same sampling
plans can be employed in the quality checks performed by a team or firm in
its own operations. The objectives of the sampling and test procedure are
to characterize data quality for the user and to identify potential sources
of trouble in the data collection process for the purpose of correcting the
deficiencies in data quality.
Section 4.4.4 describes how the level of auditing, sample size n, may
be determined on the basis of relative cost data and prior information
about the data quality. This methodology is described in further detail in
the Final Report on the Contract. The cost data and prior information con-
cerning data quality used here illustrate the procedure; in practice these
data must be supplied by the manager of groups of field teams, depending
upon the conditions particular to his responsibility.
Figure 10 provides an overall summary of the several aspects of the
data quality assurance program as described in these documents. The flow
diagram is subdivided into four areas by solid boundary lines. These areas
correspond to specific sections or subsections of the document, as indicated
in the upper right hand corner of each area. The details are considered in
these respective sections of the document and will not be described here.
[Figure 10 is a flow diagram subdivided into four areas keyed to sections of
the document: functional analysis of the pollutant measurement method, with
estimates of ranges and distributions of variables, identification and
ranking of sources of bias/variation, and an overall assessment (subsection
4.1 and section III); development of standards and institution of QC
procedures for critical variables (subsections 4.3 and 4.4); evaluation of
action options for improving data quality, leading either to continued use
of the measurement method as specified or to a modified measurement method
(subsection 4.2); and assessment of data quality using audit data.]

Figure 10. Summary of data quality assurance program.
4.1 FUNCTIONAL ANALYSIS OF TEST METHOD
Test Method 10 - Determination of Carbon Monoxide Emissions from Sta-
tionary Sources, is described in the Federal Register of March 8, 1974, and
is reproduced in appendix A of this document. This method is used to deter-
mine the concentration of carbon monoxide in the stack gas on a dry basis.
It requires a measurement of the CO2 concentration in the sample, using
Method 3, to make a total volume correction in the final calculation of
carbon monoxide, since the CO2 component of the sample is removed prior to
NDIR analysis.
This method has not been subjected to a collaborative test; thus, quan-
titative information on precision and bias are not available. Therefore,
the functional analysis of the method is somewhat general, using data from
a collaborative test of the NDIR method of measuring CO in ambient air
(ref. 9) and published data from special tests (refs. 10-14). Engineering
judgements were used to estimate variable limits when data were not avail-
able. These data were used in a variance analysis to determine the resulting
variability of the measured value, i.e., CO concentration.
4.1.1 Variable Evaluation and Error Range Estimates
A functional analysis for determining the CO2 concentration by Method
3 is given in the applicable Quality Assurance Document of this series
(ref. 2).
The concentration of CO in the stack gas is calculated by eq. 10-1 of
appendix A, as follows:

    CO = C_NDIR (1 - F_CO2)                                    (5)

where

    CO     = measured concentration of CO in the stack on a dry basis,
             mg/m3,
    C_NDIR = concentration of CO measured by the NDIR analyzer on a dry
             basis, mg/m3,
    F_CO2  = volume fraction of CO2 in the sample, i.e., percent CO2 from
             Orsat analysis divided by 100.
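Equation 5 amounts to a one-line correction of the NDIR reading. A sketch of the calculation (the function name is an assumption for illustration):

```python
def corrected_co(c_ndir, percent_co2):
    """Eq. 5: correct the NDIR CO reading (taken on a dry, CO2-removed
    basis) back to the full dry-gas basis using the Orsat CO2 percent."""
    f_co2 = percent_co2 / 100.0  # volume fraction of CO2
    return c_ndir * (1.0 - f_co2)

# Example: 500 mg/m3 indicated by NDIR, 12 percent CO2 by Orsat analysis.
print(round(corrected_co(500.0, 12.0), 2))  # prints: 440.0
```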
The stack gas at the time and point of sample collection will have a
specific but unknown CO concentration (i.e., CO_t). The difference between
CO_t and CO as calculated from eq. 5 above is due to a combination of errors
in the measurement process. For discussion purposes the measurement process
is divided into three phases. The phases are: 1) sample collection and
handling, 2) sample analysis, and 3) data processing.
4.1.1.1 Sample Collection and Handling. Sample collecting and handling is
subject to a variety of errors. A short description of each source of error
is given in the following list.
1. Collection of the sample from one point in the stack. Collecting
the sample at one point in the stack requires the assumption that the CO
concentration is the same at each point in the cross-sectional plane from
which the sample is taken. It also requires the assumption that the gas
velocity profile remains relatively constant (i.e., if the velocity varies
at one point in the plane, it varies proportionally at all points in the
plane) for the sample collection period.
2. Proportional sampling. To attain a sample representative of the
stack gas at the point of collection, it is necessary to maintain the
sampling train flow rate proportional to the stack gas velocity. This may
be difficult to do if the gas velocity changes rapidly. However, error due
to deviation from proportional sampling is usually small, for gas velocity
changes of less than ±20 percent of its average velocity.
3. CO2 interference. CO2 is a positive interference for the NDIR
method of measuring CO. Errors can result from a) incomplete removal of the
CO2, and b) an error in determining the true CO2 concentration for making a
volume correction. The potential effect on the CO measurement from either
a) or b) above increases on an absolute basis as the CO2 concentration in-
creases.
4. Sampling train leaks. Leaks in the sampling train dilute the stack
gas sample (both CO and CO2) with ambient air. It is felt that the inte-
grated gas-sampling train as shown in fig. 10-2 of appendix A is highly
susceptible to leaks in the flexible bag. There are no actual data for
estimating this error; however, personnel experienced in the application of
this method feel that it is one of the major sources of variability.
5. H2O interference. Water vapor is a positive interference for the
NDIR method of measuring CO. Several presently unpublished tests indicate
that under normal conditions moisture contents on the order of 1 to 4 per-
cent pass through the silica gel trap and thus to the NDIR analyzer. A
rejection ratio on the order of 3.5 percent H2O per 8 mg/m3 (7 ppm) CO
would indicate that errors as large as 8 mg/m3 may occur frequently, and
that larger errors can result if the stack gas temperature, as it leaves
the silica gel trap, is not 21°C (70°F) or less and/or the silica gel is
spent.
4.1.1.2 Analysis. The analysis of CO by NDIR is subject to error from
inaccuracies in the calibration gases and from analyzer drifts (zero and
span) due to the analyzer's sensitivity to changes in temperature and pres-
sure.
Error in a calibration gas would bias the measurements for the life-
time of that cylinder. Analyzer drift usually is random in nature; there-
fore it influences the precision of the measurements. Temperature control
is extremely important for NDIR analyzers. Applications in which the
analyzer is exposed to the atmosphere or housed in a makeshift shelter with
no temperature control will result in excess variability in the data.
4.1.1.3 Data processing. In continuous monitoring, where the CO concentration
varies rapidly, an accurate estimate of the average value may be difficult
to make, either from a strip chart or from discrete values recorded directly
from the analyzer. Integrated sampling does not suffer from this problem
since the CO concentration has reached an equilibrium in the bag.
4.1.1.4 Error range estimates. All the error terms discussed thus far are
independent; at least there are no obvious reasons why they should not be
independent. Therefore the total bias in the CO measurements is the sum of
the biases of the individual error terms. The variance of the measurements
is the sum of the variances of the individual error terms.
The variability will be larger when the measurements to be compared are
performed by different analysts and/or with different equipment, than when
they are carried out by a single analyst performing replicates, for example,
on the same integrated sample, using the same equipment. Many different
measures of variability are conceivable according to the circumstances under
which the measurements are performed.
Only two extreme situations will be discussed here. They are:
1. Repeatability, r, is the value below which the absolute difference
   between duplicate results, i.e., two measurements made on the same
   sample by the same analyst using the same equipment over a short
   interval of time, may be expected to fall with a 95 percent prob-
   ability.
2. Reproducibility, R, is the value below which the absolute differ-
ence between two measurements made on the same sample by different
analysts using different equipment may be expected to fall with a
95 percent probability.
The above definitions are based on a statistical model according to
which each measurement is the sum of three components:
    CO = CO_bar + b + e                                        (6)

where

    CO     = the measured value, ppm,
    CO_bar = the general average, ppm,
    b      = an error representing the differences between analysts/
             equipment, ppm,
    e      = a random error occurring in each measurement, ppm.

In general, b can be considered as the sum

    b = b_L + b_S                                              (7)

where b_L is a random component and b_S a systematic component. The term
b_L is considered to be constant during any series of measurements performed
under repeatability conditions, but to behave as a random variate in a series
of measurements performed under reproducibility conditions. Its variance
will be denoted as

    var b_L = σ_L^2,                                           (8)

the between-laboratory variance, including the between-analyst and between-
equipment variabilities.
The term e represents a random error occurring in each measurement. Its
variance

    var e = σ^2                                                (9)

will be called the repeatability variance.
For the above model, the repeatability, r, and the reproducibility, R,
are given by

    r = 1.96 √2 σ = 2.77 σ                                     (10)

and

    R = 2.77 (σ^2/m + σ_L^2)^(1/2) = 2.77 σ_R                  (11)

where σ_R^2 will be referred to as the reproducibility variance and m is the
number of repeated measurements averaged to give an observation; m = 1 in
this case.
A collaborative study of the method (ref. 4) indicated that the repro-
ducibility (at the 95 percent confidence level) of the method is 120 ppm,
i.e., R = 120 ppm, and from equation 11 above, σ_R = 43.5 ppm. Repeatability
of the method resulting from the collaborative study is r = 97 ppm, and from
equation 10, σ = 35 ppm.
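The relations in equations 10 and 11 can be checked numerically. The sketch below back-solves the component standard deviations from the collaborative-study values quoted above; the function names are illustrative, and the between-laboratory component σ_L is derived here by assumption from σ_R and σ:

```python
import math

def repeatability(sigma):
    """Eq. 10: r = 1.96 * sqrt(2) * sigma = 2.77 * sigma."""
    return 1.96 * math.sqrt(2.0) * sigma

def reproducibility(sigma, sigma_l, m=1):
    """Eq. 11: R = 2.77 * sqrt(sigma**2 / m + sigma_l**2)."""
    return 2.77 * math.sqrt(sigma**2 / m + sigma_l**2)

# Back out the component standard deviations from R = 120 ppm, r = 97 ppm.
sigma_r_repro = 120.0 / 2.77               # sigma_R, about 43.3 ppm
sigma = 97.0 / 2.77                        # about 35.0 ppm
sigma_l = math.sqrt(sigma_r_repro**2 - sigma**2)  # between-lab component
print(round(sigma_r_repro, 1), round(sigma, 1), round(sigma_l, 1))
# prints: 43.3 35.0 25.5
```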
Bias of the method was found to vary with CO concentration and to vary
between collaborators. The overall average bias derived from the measure-
ment of standard gases was +7 ppm.
Table 3 summarizes the previously discussed sources of error. Estimates
are made of the mean, variance, and probability distribution for each vari-
able, primarily to point out those variables that are considered to dominate
the imprecision of CO measurements on repeatability and reproducibility
bases. Estimates of the error involved in assuming that the CO concentration
is homogeneous at all points in the sampling plane are not included in the
table, due to lack of information necessary to make the estimates. All esti-
mates are made for a true CO concentration of 500 ppm. In actual practice,
precision will probably vary with concentration.
[Table 3, which summarized the sources of error with estimates of the mean,
variance, and probability distribution for each variable, is not legibly
recoverable from the scanned original.]
4.1.2 Variance Analysis
For the relationship

    CO = C_NDIR (1 - F_CO2),

the variance of CO is given by

    σ^2{CO} = (1 - F_CO2)^2 σ^2{C_NDIR} + C_NDIR^2 σ^2{F_CO2}  (12)

where σ^2{C_NDIR} is the sum of the variances given in table 3 for either
repeatability or reproducibility, whichever situation is being evaluated.
From the table, then, σ_r^2{C_NDIR} = 1580 and σ_R^2{C_NDIR} = 2385. The
variance σ^2{F_CO2} is taken from the Quality Assurance Guidelines Document
for Method 3 (ref. 2) as 1.6 x 10^-5. Solving equation 12 for
C_NDIR = 500 ppm and F_CO2 = 0.12 yields

    σ_r^2{CO} = 1225,

and

    σ_r{CO} = 35 ppm,

or about 7 percent of the actual value.
Likewise,

    σ_R^2{CO} = 1847

and

    σ_R{CO} = 43.5 ppm,

or about 8.7 percent of the actual value.
Repeatability, r, from equation 10 is

    r = 2.77 σ_r = 97 ppm.

This says that two repeated measurements of a sample (with a CO concentration
of about 500 ppm) by the same crew using the same equipment should not
differ by more than 97 ppm 95 percent of the time. On a relative basis,
r = 19.4 percent of the actual value.
Reproducibility, R, from equation 11 gives
R = 120 ppm.
Two measurements made on the same sample (with a CO concentration close to
500 ppm) by different teams using different equipment should agree within
120 ppm (24 percent) approximately 95 percent of the time.
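The propagation in equation 12 can be reproduced directly. A sketch (function name assumed; the input variances are those quoted above from table 3 and ref. 2):

```python
import math

def co_variance(var_c_ndir, c_ndir, f_co2, var_f_co2=1.6e-5):
    """Eq. 12: propagate the NDIR and Orsat variances into the CO result."""
    return (1.0 - f_co2)**2 * var_c_ndir + c_ndir**2 * var_f_co2

# Repeatability case: var{C_NDIR} = 1580, C_NDIR = 500 ppm, 12 percent CO2.
var_r = co_variance(1580.0, 500.0, 0.12)
# Reproducibility case: var{C_NDIR} = 2385.
var_big_r = co_variance(2385.0, 500.0, 0.12)
print(round(var_r), round(var_big_r))  # roughly 1225 and 1847, as in text
print(round(math.sqrt(var_r)), round(math.sqrt(var_big_r)))  # ~35, ~43 ppm
```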
4.1.3 Bias Analysis
Results of the collaborative test of the NDIR method of measuring CO
in ambient air indicated a possible positive bias of about 7 ppm. Also,
it is felt that in actual field application to source testing there is a
greater probability of experiencing positive interferences from H2O and CO2.
Assuming that the true or acceptable value, CO_t, is known, then from
equation 6,

    CO - CO_t = τ                                              (13)

represents an estimate of the bias of the measurement method. In table 3,
the sum of the individual variable means gives an estimated bias of

    τ = 7 ppm,

or about +1.4 percent at a CO level of 500 ppm.
4.2 ACTION OPTIONS
Suppose it has been determined as a result of the functional analysis
and/or the reported data from the checking and auditing schemes, that the
data quality is not consistent with suggested standards or with the user
requirements. Poor data quality may result from (1) a lack of adherence to
the control procedures given in section II—Operations Manual, or (2) the
need for an improved method or instrumentation for taking the measurements.
It is assumed in this section that (2) applies, that is, the data quality
needs to be improved beyond that attainable by following the operational
procedures given for the reference method.
The selection of possible actions for improving the data quality can
best be made by those familiar with the measurement process. For each
action, the variance analysis can be performed to estimate the variance,
standard deviation, and coefficient of variation of the pertinent measure-
ment(s). In some cases it is difficult to estimate the reduction in
specific variances that are required to estimate the precisions of the per-
tinent measurements. In such cases, an experimental study should be made
of the more promising actions based on preliminary estimates of precision/
bias and the costs of implementing each action.
In order to illustrate the methodology, four actions and appropriate
combinations thereof are suggested. Variance and cost estimates are made
for each action, resulting in estimates of the overall precision of each
action. The actions are as follows:
A0: Reference Method
A1: Establish traceability of calibration gas to NBS standard when
    first purchased (cost of $200/20 field tests)
A2: Crew training (cost of $1000/20 field tests)
A3: Special temperature control (cost of $1000/20 field tests)
A4: Check NDIR results with an alternate measurement method (cost of
    $250/20 field tests).
The costs given for each action are additional costs above that of the re-
ference method. The assumptions made concerning the reduction in the vari-
ances (or improved precisions) are given in the following for each action.
1. Verification of the span calibration gas when first purchased, by es-
tablishing its traceability to an NBS standard, should eliminate gross errors
in assayed values. It is assumed here that the mean and variance of the er-
ror due to calibration gases would be reduced from 13.5 ppm to 5 ppm and
from 25 to 16, respectively. Since one cylinder of span gas will last for a
long time, the cost of implementing this action would be small. Also, under
the assumptions made in section 4.1, this is one of the dominant sources of
error.
2. From discussions of this method with experienced field testers, it is
felt that the method requires an operator who understands the system and
its capability. Early detection of out-of-control conditions by the opera-
tor can substantially improve data quality. It is assumed here that crew
training could affect all sources of variability. A reduction of σ_R from
15.5 ppm to 10 ppm and of bias from 19.5 to 12 ppm is assumed as achievable
with trained crews. A one-week course once a year, or special OJT training,
is estimated to cost approximately $1000 per 20 field tests.
3. Temperature control is critical for NDIR measurements. This implies
using a laboratory or room with temperature controls to house the analyzer.
If necessary, a portable shelter should be used. An additional cost of
$1000 per 20 field tests is assumed for implementation of this action. A
reduction in the variance (analyzer drift) from 225 to 100 is assumed.
4. The use of an alternate measurement method to check the NDIR mea-
surements, as proposed in subsection 3.3.3.1, should aid in detecting large
deviations and thus increase overall data quality. It is estimated that
for a cost of about $250 per 20 field tests, the bias τ and standard devia-
tion σ_R could be reduced to 15 ppm and 10 ppm, respectively.
Figure 11 shows the results in terms of cost and data quality. Data
quality for this purpose is given in terms of the mean square error (MSE),
which is calculated by

    MSE = (σ_R^2 + τ^2)^(1/2).                                 (14)
The cost-of-reporting-poor-quality-data curve in figure 11 has to be de-
rived for specific situations according to monitoring objectives.

[Figure 11 plots added cost (in dollars, up to about $1200) against MSE
(about 12 to 20 ppm) for the selected action options, indicating the best
action options and the cost-of-reporting-poor-quality-data curve.]

Figure 11. Added Cost versus Data Quality for Selected Action Options
(at a CO level of 500 ppm).

Here it
is assumed that the cost of reporting poor quality data increases as data
quality decreases. The optimum action option for this particular hypothe-
tical situation is A4, as seen in figure 11.
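The action comparison can be sketched numerically. Everything in this sketch is an illustrative assumption: the root-sum-square combination of bias and reproducibility standard deviation is one plausible reading of the MSE used here, and the per-action (bias, σ_R) pairs follow the reductions assumed in the text where they are stated:

```python
import math

def mse_ppm(bias_ppm, sigma_r_ppm):
    """Root-sum-square of bias and reproducibility standard deviation
    (assumed form of the MSE used to compare action options)."""
    return math.sqrt(bias_ppm**2 + sigma_r_ppm**2)

# Hypothetical (bias, sigma_R, added cost per 20 field tests) per action.
actions = {
    "A1 (NBS-traceable cal gas)":  (5.0, 15.5, 200),
    "A2 (crew training)":          (12.0, 10.0, 1000),
    "A4 (alternate method check)": (15.0, 10.0, 250),
}
for name, (bias, sigma_r, cost) in actions.items():
    print(f"{name}: MSE = {mse_ppm(bias, sigma_r):.1f} ppm, cost = ${cost}")
```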
4.3 PROCEDURES FOR PERFORMING A QUALITY AUDIT
"Quality audit" as used here implies a comprehensive system of planned
and periodic audits to verify compliance with all aspects of the quality
assurance program. Results from the quality audit provide an independent
assessment of data quality. "Independent" in this case implies that the
auditor prepares a reference sample of CO and CO2 in air and has the field
team analyze the sample. The field team should not know the true CO and
CO2 concentrations. From these data, both bias and precision estimates can
be made for the analysis phase of the measurement process.
The auditor, i.e., the individual performing the audit, should have
extensive background experience in source sampling, specifically with the
characterization technique that he is auditing. He should be able to
establish and maintain good rapport with field crews.
The functions of the auditor are summarized in the following list:
1. Observe procedures and techniques of the field team during on-site
measurements.
2. Have field team measure sample from a reference cylinder with
known CO and CO2 concentrations.
3. Check/verify applicable records of equipment calibration checks
and quality control charts in the field team's home laboratory.
4. Compare the audit value with the field team's test value.
5. Inform the field team of the comparison results specifying any
area(s) that need special attention or improvement.
6. File the records and forward the comparison results with appro-
priate comments to the manager.
4.3.1 Frequency of Audit
The optimum frequency of audit is a function of certain costs and the
desired level of confidence in the data quality assessment. A methodology
for determining the optimum frequency, using relevant costs, is presented
both in the Quality Assurance Documents for the methods requiring the results
of Method 3 and in the final report for this contract. Costs will vary
among field teams and types of field tests. Therefore, the most cost effec-
tive auditing level will have to be derived using relevant local cost data
according to the procedure given in the final report on this contract.
4.3.2 Collecting On-Site Information
While on-site, the auditor should observe the field team's overall
performance of the field test. Specific operations to observe should in-
clude, but not be limited to:
1. Setting up and leak-testing the sampling train;
2. Purging the sampling train with nitrogen prior to collecting the
sample;
3. Proportional sampling;
4. Frequency of zero and span checks; and
5. Transfer of sample from the flexible bag to the Orsat analyzer and
the NDIR analyzer.
The above observations, plus any others that the auditor feels are
important, can be used in combination to make an overall evaluation of the
team's proficiency in carrying out this portion of the field test.
In addition to the above on-site observations, it is recommended that
the auditor have a pressurized cylinder of CO and CO2 in air to prepare a
reference sample for analyses by the field team.
4.3.2.1 Comparing Audit and Routine Values of CO. In field tests, the
audit and routine (field team's) values are compared by

    d_j = CO_j - CO_a                                          (15)

where

    d_j  = the difference between the field test and audit results for
           the jth audit, ppm,
    CO_a = audit value of the CO concentration, ppm,
    CO_j = CO concentration obtained by the field team, ppm.

Record the value of d_j in the quality audit log book.
4.3.3 Overall Evaluation of Field Team Performance
In a summary-type statement the field team should be evaluated on its
overall performance. Reporting the d value as previously computed,
is an adequate representation of the objective information collected
for the audit. However, unmeasurable errors can result from nonad-
herence to the prescribed operating procedures and/or from poor technique
in executing the procedures. These error sources have to be estimated sub-
jectively by the auditor. Using the notes taken in the field, the team
could be rated on a scale of 1 to 5 as follows:
5 - Excellent
4 - Above average
3 - Average
2 - Acceptable, but below average
1 - Unacceptable performance.
In conjunction with the numerical rating, the auditor should include justi-
fication for the rating. This could be in the form of a list of the team's
strong and weak points.
4.4 DATA QUALITY ASSESSMENT
Two aspects of data quality assessment are considered in this section.
The first considers a means of estimating the precision and accuracy of the
reported data, e.g., reporting the bias, if any, and the standard deviation
associated with the measurements. The second consideration is that of
testing the data quality against given standards, using sampling by vari-
ables. For example, lower and upper limits, L and U, may be selected to
include a large percentage of the measurements. It is desired to control
the percentage of measurements outside these limits to less than 10 percent.
If the data quality is not consistent with the L and U limits, then action
is taken to correct the possible deficiency before future field tests are
performed and to correct the previous data when possible.
4.4.1 Estimating the Precision/Accuracy of the Reported Data
Methods for estimating the precision (standard deviation) and accuracy
(bias) of the CO concentration were given in section 4.1. This section will
indicate how the audit data collected in accordance with the procedure
described in section 4.2 will be utilized in order to estimate the precision
and accuracy of the measures of interest. Similar techniques can also be
used by a specific firm or team to assess their own measurements. However,
in this case no bias data among firms can be obtained. The differences
between the field team results and the audited results for the respective
measurements are
d_j = CO_j - CO_a,j .    (16)

Let the mean and standard deviation of the differences d_j, j = 1, ..., n, be
denoted by d̄ and s_d, respectively. Thus

d̄ = (1/n) Σ d_j    (17)

and

s_d = [Σ (d_j - d̄)² / (n - 1)]^(1/2) .    (18)

Now d̄ is an estimate of the bias in the measurements (i.e., relative to the
audited value). Assuming the audited data to be unbiased, the existence of
a bias in the field data can be checked by the appropriate t-test, i.e.,

t = (d̄ - 0) / (s_d / √n) .    (19)
See ref. 16 for a discussion of the t-test.
If t is significantly large, say greater than the tabulated value of t
with n - 1 degrees of freedom, which is exceeded by chance only 5 percent
of the time, then the bias is considered to be real, and some check should
be made for a possible cause of the bias. If t is not significantly large,
then the bias should be considered zero, and the accuracy of the data is
acceptable.
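As an illustration, the bias check of equations (17) through (19) can be sketched in a few lines of code. This is only a sketch, not part of the reference method; the differences are those of table 4, and the critical value 1.943 is the tabulated t for n - 1 = 6 degrees of freedom at the 5 percent level.

```python
import math

def bias_t_test(diffs, t_critical):
    """Compute d-bar, s_d, and the t statistic of equations (17)-(19)
    for audit differences d_j = CO_j - CO_a,j (ppm)."""
    n = len(diffs)
    d_bar = sum(diffs) / n                                            # (17)
    s_d = math.sqrt(sum((d - d_bar) ** 2 for d in diffs) / (n - 1))   # (18)
    t = d_bar / (s_d / math.sqrt(n))                                  # (19)
    return d_bar, s_d, t, abs(t) > t_critical

# Differences from table 4; 1.943 is the tabulated t for 6 degrees of
# freedom at the 5 percent level.
d_bar, s_d, t, biased = bias_t_test([-40, 20, -10, 80, 60, 30, 10],
                                    t_critical=1.943)
```

With these data the sketch gives d̄ = 21.4 ppm, s_d = 40.6 ppm, and t ≈ 1.40 < 1.943, so no real bias is indicated. (The appendix computes t = 1.30 by using the assumed σ{CO} = 43.5 ppm in place of s_d.)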
The standard deviation s_d is a function of both the standard deviation
of the field measurements and of the audit measurements. Assuming the audit
values to be much more accurate than the field measurements, then s_d is an
estimate of σ{CO}. Table 4 contains an example calculation of d̄
and s_d starting with the differences for a sample size of n = 7. (See the
final report on the contract for further information concerning this result.)
The standard deviation, s_d, can be utilized to check the reasonableness of
the assumptions made in section 4.1 concerning σ{CO}. For example, the
estimated standard deviation, s_d, may be directly checked against the assumed
value, σ{CO}, by using the statistical test procedure

χ²/f = s_d² / σ²{CO} ,    (20)

where χ²/f is the value of a random variable having the chi-square distri-
bution with f = n - 1 degrees of freedom. If χ²/f is larger than the tabu-
lated value exceeded only 5 percent of the time, then it would be concluded
that the test procedure is yielding more variable results due to faulty
equipment or operational procedure.
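A sketch of this variance check (equation (20)), using the example values of table 4 and the assumed σ{CO} of section 4.1; the critical value 1.64 is the tabulated 95th percentile of χ²/f for f = 6 quoted in the appendix.

```python
def variance_check(s_d, sigma_assumed, critical):
    """Chi-square ratio test of equation (20): compare s_d^2/sigma^2{CO}
    with the tabulated chi-square/f value exceeded 5 percent of the time."""
    ratio = s_d ** 2 / sigma_assumed ** 2
    return ratio, ratio > critical

# s_d = 40.6 ppm from table 4, assumed sigma{CO} = 43.5 ppm, f = 6
ratio, too_variable = variance_check(40.6, 43.5, critical=1.64)
```

Here the ratio is 0.87 < 1.64, so the field results are not more variable than assumed.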
The measured values should be reported along with the estimated biases,
standard deviations, the number of audits, n, and the total number of field
tests, N, sampled (n ≤ N). Estimates, i.e., s_d and d̄, which are significantly
different from the assumed population parameters, should be identified on
the data sheet.
The t-test and χ²-test described above, and in further detail in the
final report on this contract, are used to check on the biases and standard
deviations separately. In order to check on the overall data quality as
measured by the percent of measurement deviations outside prescribed limits,
it is necessary to use the approach described in subsection 4.4.2 below.
4.4.2 Sampling by Variables
Because the lot size (i.e., the number of field tests performed by a
team or laboratory during a particular time period, normally a calendar
quarter) is small, N = 20, and because the sample size is, consequently,
small (of the order of n = 3 to 8), it is important to consider a sampling
by variables approach to assess the data quality with respect to prescribed
limits. That is, it is desirable to make as much use of the data as pos-
sible. In the variables approach, the means and standard deviations of the
sample of n audits are used in making a decision concerning the data quality.
Some background concerning the assumptions and the methodology is
repeated below for convenience. However, one is referred to one of a number
of publications having information on sampling by variables; e.g., see
refs. 17-22. The discussion below will be given in regard to the specific
problem in the variables approach, which has some unique features as com-
pared with the usual variable sampling plans. In the following discussion,
it is assumed that only CO measurements are audited as directed in sections
4.2.2.1 and 4.2.2.2. The difference between the team-measured and audited
value of CO is designated as d_j, and the mean difference over n audits,
d̄, is

d̄ = (1/n) Σ (CO_j - CO_a,j) .    (21)

Theoretically, CO_j and CO_a,j should be measures of the same CO concentration,
and their difference should have a mean of zero on the average. In addition,
this difference should have a standard deviation equal to that associated
with the measurements of CO.
Assuming three standard deviation limits, the values ±3σ{CO} = ±131 ppm
define the respective lower and upper limits, L and U, outside of which it
is desired to control the proportion of differences, d_j. Following the
method given in ref. 20, a procedure for applying the variables sampling plan
is described below. Figures 12 and 13 illustrate examples of satisfactory
and unsatisfactory data quality with respect to the prescribed limits L and
U.
The variables sampling plan requires the following information: the
sample mean difference, d̄; the standard deviation of these differences, s_d;
and a constant, k, which is determined by the value of p, the proportion of
the differences outside the limits L and U. For example, if it is de-
sired to control at 0.10 the probability of not detecting lots with data
quality p equal to 0.10 (or 10 percent of the individual differences out-
side L and U), and if the sample size n = 7, then the value of k can be
obtained from table II of ref. 20. The values of d̄ and s_d are computed in
the usual manner; see table 4 for formulas and a specific example. Given
the above information, the test procedure is applied, and subsequent action
is taken in accordance with the following criteria:
Table 4. Computation of mean difference, d̄, and standard deviation, s_d

General formulas:
    d_j = CO_j - CO_a,j
    d̄ = (Σ d_j) / n
    s_d² = [Σ d_j² - (Σ d_j)²/n] / (n - 1)
    s_d = (s_d²)^(1/2)

Specific example:
      j     d_j (ppm)     d_j²
      1       -40         1600
      2       +20          400
      3       -10          100
      4       +80         6400
      5       +60         3600
      6       +30          900
      7       +10          100
    Sum      +150       13,100

    d̄ = +21.4 ppm
    s_d² = 1650
    s_d = 40.6 ppm

1. If both of the inequalities

       d̄ - k s_d > L = -131 ppm
   and
       d̄ + k s_d < U = 131 ppm

are satisfied, the individual differences are considered to be consistent
with the prescribed data quality limits, and no corrective action is required.
2. If one or both of these inequalities is violated, possible defi-
ciencies exist in the measurement process as carried out for that
Figure 12. Example illustrating p < 0.10 and satisfactory data quality.

Figure 13. Example illustrating p (the percent of measured differences
outside limits L and U) > 0.10 and unsatisfactory data quality.
particular lot (group) of field tests. These deficiencies should
be identified and corrected before future field tests are performed.
Data corrections should be made when possible, i.e., if a quanti-
tative basis is determined for correction.
Table 5 contains a few selected values of n, p, and k for convenient
reference. Using the values of d̄ and s_d in table 4, k = 2.334 for a sample
size n = 7, and p = 0.10, the test criteria become

    d̄ - k s_d = 21.4 - 2.334 × 40.6 = -73.4 ppm > L = -131 ppm
    d̄ + k s_d = 21.4 + 2.334 × 40.6 = 116.2 ppm < U = 131 ppm.
Table 5. Sampling plan constants, k, for P{not detecting a lot
with proportion p outside limits L and U} ≤ 0.1

    Sample size n    p = 0.2    p = 0.1
          3           3.039      4.258
          5           1.976      2.742
          7           1.721      2.334
         10           1.595      2.112
         12           1.550      2.045
Therefore, both conditions are satisfied and the lot of N = 20 measurements
is consistent with the prescribed quality limits. The plan is designed
to aid in detecting lots with 10 percent or more defects (deviations falling
outside the designated limits L and U) with a risk of 0.10; that is, on the
average, 90 percent of the lots with 10 percent or more defects will be de-
tected by this sampling plan.
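The acceptance test above can be sketched as a small routine; the k constants are those of table 5, and the limits and example data are those given in the text. This is an illustrative sketch only, with hypothetical function names.

```python
# k constants from table 5: P{not detecting a lot with proportion p
# outside L and U} <= 0.1, indexed by (sample size n, p)
K_TABLE = {
    (3, 0.2): 3.039, (3, 0.1): 4.258,
    (5, 0.2): 1.976, (5, 0.1): 2.742,
    (7, 0.2): 1.721, (7, 0.1): 2.334,
    (10, 0.2): 1.595, (10, 0.1): 2.112,
    (12, 0.2): 1.550, (12, 0.1): 2.045,
}

def variables_test(d_bar, s_d, n, p, L=-131.0, U=131.0):
    """Accept the lot only if d_bar - k*s_d > L and d_bar + k*s_d < U."""
    k = K_TABLE[(n, p)]
    lower, upper = d_bar - k * s_d, d_bar + k * s_d
    return lower > L and upper < U, lower, upper

# Example of table 4: d_bar = 21.4 ppm, s_d = 40.6 ppm, n = 7, p = 0.1
accepted, lower, upper = variables_test(21.4, 40.6, n=7, p=0.1)
```

Here lower ≈ -73.4 ppm and upper ≈ 116.2 ppm both fall inside (L, U), so the lot is accepted.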
4.4.4 Cost Versus Audit Level
The determination of the audit level (sample size n) to be used in
assessing the data quality, with reference to prescribed limits L and U, can
be made either 1) on a statistical basis, by defining acceptable risks for
type I and type II errors, knowing or estimating the quality of the incoming
data, and specifying the desired level of confidence in the reported data,
or 2) on a cost basis, as described herein. In this section, cost data
associated with the audit procedure are estimated or assumed, for the pur-
pose of illustrating a method of approach and identifying which costs should
be considered.
A model of the audit process, associated costs, and assumptions made
in the determination of the audit level is provided in figure 14. It is
assumed that a collection of source emissions tests for N stacks is to be
made by a particular firm, and that n measurements (n ≤ N) are to be audited
at a cost C_A = b + cn, where b is a constant independent of n and c is
the cost per stack measurement audited. In order to make a specific deter-
mination of n, it is also necessary to make some assumptions about the
[Figure 14 is a flow chart tracing lots of N source emission tests (assumed
50 percent of lots with less than 10 percent defective measurements and
50 percent with more than 10 percent defective) through the audit of n
measurements at cost C_A = b + cn = $600, the declaration of acceptable or
unacceptable data quality by the d̄ ± k s_d test, and the associated expected
costs: C_G|P = $15,000 for treating poor quality data as good quality data,
and C_P|G = $10,000 for falsely inferring that data are of poor quality.]

Figure 14. Flow chart of the audit level selection process.
quality of the source emissions data from several firms. For example, it is
assumed in this analysis that 50 percent of the data lots are of good
quality, i.e., one-half of the firms are adhering to good data quality as-
surance practice, and that 50 percent of the data lots are of poor quality.
Based on the analysis in section 4.1, good quality data is defined as that
which is consistent with the estimated precision/bias using the reference
method. Thus if the data quality limits L and U are taken to be the lower
and upper 3σ limits, corresponding to limits used in a control chart, the
quality of data provided by firms adhering to the recommended quality
assurance procedures should contain at most about 0.3 percent defective
measurements (i.e., outside the limits defined by L and U). Herein, good
quality data is defined as that containing at most 10 percent defective
measurements. The definition of poor quality data is somewhat arbitrary; for
this illustration it is taken as 25 percent outside L and U.
In this audit procedure, the data are declared to be of acceptable
quality if both of the following inequalities are satisfied:
    d̄ + k s_d < U
    d̄ - k s_d > L ,

where d̄ and s_d are the mean and standard deviation of the data quality char-
acteristic (i.e., the difference of the field and audited measurements)
being checked. The data are not of desired quality if one or both inequali-
ties is violated, as described in section 4.3. The costs associated with
these actions are assumed to be as follows:

C_A = Audit cost = b + cn. It is assumed that b is zero for this exam-
      ple, and c is taken as $600/measurement.
C_P|G = Cost of falsely inferring that the data are of poor quality, P,
given that the data are of good quality, G. This cost is assumed
to be one-half the cost of collecting emissions data for N = 20
stacks (i.e., 0.5 x $1000 x 20 = $10,000). It would include the
costs of searching for an assignable cause of the inferred data
deficiency when none exists, of partial repetition of data collec-
tion, and of decisions resulting in the purchase of equipment to
reduce emission levels of specific pollutants, etc.
C_G|P = Cost of falsely stating that the data are of good quality, G,
        given that they are of poor quality, P. This cost is assumed to
        be $15,000 (= 0.75 × $1000 × 20), and is associated with health
        effects, litigation, etc.
C_P|P = Cost savings resulting from correct identification of poor quality
        data. This cost is taken to be $7,500, i.e., equal to one-half
        of C_G|P, or equal to 0.375 × $1,000 × 20, where $1,000 × 20 is
        the total cost of data collection.
These costs are given in figure 14. The cost data are then used in
conjunction with the a priori information concerning the data quality to
select an audit level n. Actually, the audit procedure requires the
selection of the limits L and U, n, and k. L and U are determined on the
basis of the analysis of section 4.1. The value of k is taken to be the
value associated with n in table 5 of section 4.4.3, i.e., the value
selected on a statistical basis to control the percentage of data outside
the limits L and U. Thus, it is only necessary to vary n and determine the
corresponding expected total cost, E(TC), using the following cost model

    E(TC) = -C_A - 0.5 P_P|G C_P|G + 0.5 P_P|P C_P|P - 0.5 P_G|P C_G|P ,  (22)

where the costs are as previously defined. The probabilities are defined
in a way similar to the corresponding costs:
P_P|G = Probability that a lot of good quality data is falsely inferred
        to be of poor quality, due to the random variations in the
        sample mean, d̄, and standard deviation, s_d, in small samples of
        size n.
P_P|P = Probability that a lot of poor quality data is correctly identi-
        fied as being of poor quality.
P_G|P = Probability that a lot of poor quality data is incorrectly judged
        to be of good quality, due to sampling variations of d̄ and s_d.

These three probabilities are conditional on the presumed lot quality
and are preceded by a factor of 0.5 in the total cost model, to correspond
to the assumed percentages of good (poor) quality data lots.
In order to complete the determination of n, it is necessary to calcu-
late each of the conditional probabilities, using the assumptions stated
[Figure 15 plots average cost in dollars against audit level n for the two
cases p = 0.1 and p = 0.2, where p is the proportion of defective
measurements in the "lot" and P{accepting a lot with proportion p
defective} ≤ 0.1.]

Figure 15. Average cost versus audit level (n).
for a series of values of n (and associated k, which is given in table 5).
The computational procedure is given in the final report of this contract.
These calculations were made for the cases n = 3, 5, 7, and 10 and for two
degrees of control on the quality of the data that can be tolerated, i.e.,
p = 0.2 and p = 0.1, the proportion outside the limits L and U for which it
is desired to accept the data as good quality with probability less than
or equal to 0.10. These computed probabilities are then used in conjunction
with the costs associated with each condition, applying equation (22) to
obtain the average cost versus sample size n for the two cases p = 0.1 and
0.2. The curves obtained from these results are given in figure 15. It can
be seen from these curves that the minimum cost is obtained by using n = 5,
independent of p. However, it must be recognized that the costs used in
the example are for illustrative purposes and may vary from one region to
another; thus, within the reasonable uncertainty of the estimated costs, the
results suggest that p = 0.2 is more cost effective; this tends to permit
data of poorer quality to be accepted.
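The probability calculations behind figure 15 require the joint behavior of d̄ and s_d in small samples; a Monte Carlo sketch of equation (22) is given below. The normal distributions assumed here (σ chosen so that good lots have about 0.3 percent, and poor lots about 25 percent, of differences outside ±131 ppm), the trial count, and all names are illustrative assumptions, not the report's exact computational procedure.

```python
import random
import statistics

L_LIM, U_LIM = -131.0, 131.0

def accept(sample, k):
    """Variables sampling test: accept if d_bar +/- k*s_d lies inside (L, U)."""
    d_bar = statistics.mean(sample)
    s_d = statistics.stdev(sample)
    return d_bar - k * s_d > L_LIM and d_bar + k * s_d < U_LIM

def expected_total_cost(n, k, sigma_good, sigma_poor, trials=5000, seed=7,
                        c=600.0, C_PG=10000.0, C_PP=7500.0, C_GP=15000.0):
    """Monte Carlo estimate of the cost model of equation (22) for level n."""
    rng = random.Random(seed)
    p_acc_good = sum(accept([rng.gauss(0.0, sigma_good) for _ in range(n)], k)
                     for _ in range(trials)) / trials
    p_acc_poor = sum(accept([rng.gauss(0.0, sigma_poor) for _ in range(n)], k)
                     for _ in range(trials)) / trials
    P_pg, P_pp, P_gp = 1.0 - p_acc_good, 1.0 - p_acc_poor, p_acc_poor
    return -c * n - 0.5 * P_pg * C_PG + 0.5 * P_pp * C_PP - 0.5 * P_gp * C_GP

# sigma = 43.5 ppm puts ~0.3 percent of good-lot differences outside the
# 3-sigma limits; sigma = 131/1.15 ppm puts ~25 percent outside for poor lots.
costs = {n: expected_total_cost(n, k, 43.5, 131.0 / 1.15)
         for n, k in [(3, 4.258), (5, 2.742), (7, 2.334), (10, 2.112)]}
```

Whether n = 5 again minimizes the cost under this sketch depends on the distributional assumptions; the exercise is meant to show how the conditional probabilities enter equation (22), not to reproduce figure 15.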
SECTION V
REFERENCES
1. F. Smith and D. E. Wagoner. "Guidelines for Development of a Quality
Assurance Program—Determination of Stack Gas Velocity and Volume-
tric Flow Rate (Type-S Pitot Tube)." Final Technical Report, Con-
tract No. 68-02-1234, Technical Publications, Research Triangle
Park, North Carolina, 27711.
2. F. Smith and D. E. Wagoner. "Guidelines for Development of a Quality
Assurance Program—Gas Analysis for Carbon Dioxide, Excess Air,
and Dry Molecular Weight." Final Technical Report, Contract No.
68-02-1234, Technical Publications, Research Triangle Park, North
Carolina, 27711.
3. F. Smith and D. E. Wagoner. "Guidelines for Development of a Quality
Assurance Program—Determination of Moisture Content of Stack
Gases." Final Technical Report, Contract No. 68-02-1234, Techni-
cal Publications, Research Triangle Park, North Carolina, 27711.
4. EPA-650/4-75-001, "Collaborative Study of Method 10 - Reference
Method for Determination of Carbon Monoxide Emissions From
Stationary Sources - Report of Testing," Environmental Pro-
tection Agency, Research Triangle Park, N.C., January 1975.
5. Method 1 - Sample and Velocity Traverses for Stationary Sources.
Federal Register, Vol. 36, No. 247, Thursday, December 23, 1971,
pp. 24882-4.
6. Walter S. Smith and D. James Grove. Stack Sampling Nomographs for
Field Estimations. Entropy Environmentalists, Inc., Research
Triangle Park, N.C., 1973.
7. Glossary and Tables for Statistical Quality Control. American Society
for Quality Control, Statistics Technical Committee, Milwaukee,
Wisconsin, 1973.
8. Eugene L. Grant and Richard S. Leavenworth. Statistical Quality
Control. 4th ed. St. Louis: McGraw-Hill, 1972.
9. Herbert C. McKee et al. "Collaborative Study of Reference Method for
the Continuous Measurement of Carbon Monoxide in the Atmosphere
(Non-Dispersive Infrared Spectrometry)." Southwest Research Insti-
tute, Contract CPA 70-40, SwRI Project 01-2811, San Antonio, Texas,
May 1972.
10. Frank McElroy. "The Intertech NDIR-CO Analyzer." Presented at the 11th
Methods Conference in Air Pollution, University of California,
Berkeley, California, April 1, 1970.
82
-------
11. Hezekiah Moore. "A Critical Evaluation of the Analysis of Carbon
Monoxide with Nondispersive Infrared (NDIR)." Presented at the
9th Conference on Methods in Air Pollution and Industrial Hygiene
Studies, Pasadena, California, February 7-9, 1968.
12. Richard F. Dechant and Peter K. Mueller. "Performance of a Continuous
NDIR Carbon Monoxide Analyzer." AIHL Report No. 57, Air and Indus-
trial Hygiene Laboratory, Department of Public Health, Berkeley,
California, June 1969.
13. Joseph M. Colucci and Charles R. Begeman. "Carbon Monoxide in Detroit,
New York, and Los Angeles Air." Environmental Science and Tech-
nology _3_ (1), January 1969, pp. 41-47.
14. "Tentative Method of Continuous Analysis for Carbon Monoxide Content
of the Atmosphere (Nondispersive Infrared Method)," in Methods of
Air Sampling and Analysis, American Public Health Association,
Washington, D. C., 1972, pp. 233-238.
15. "Evaluation of Portable Direct-Reading Carbon Monoxide Meters." HEW
Publication No. (NIOSH) 75-106, September 1974, Available from
Office of Technical Publications, National Institute for Occupa-
tional Safety and Health, Post Office Building, Cincinnati, Ohio,
45202.
16. H. Cramér. The Elements of Probability Theory. New York: John Wiley
and Sons, 1955.
17. Statistical Research Group, Columbia University. C. Eisenhart, M.
Hastay, and W. A. Wallis, eds. Techniques of Statistical Analysis.
New York: McGraw-Hill, 1947.
18. A. H. Bowker and H. P. Goode. Sampling Inspection by Variables. New
York: McGraw-Hill, 1952.
19. A. Hald. Statistical Theory with Engineering Applications. New York:
John Wiley and Sons, 1952.
20. D. B. Owen. "Variables Sampling Plans Based on the Normal Distribution."
Technometrics 9, No. 3 (August 1967).
21. D. B. Owen. "Summary of Recent Work on Variables Acceptance Sampling
with Emphasis on Non-normality." Technometrics 11 (1969):631-37.
22. Kinji Takagi. "On Designing Unknown-Sigma Sampling Plans Based on a
Wide Class of Non-Normal Distributions." Technometrics 14 (1972):
669-78.
APPENDIX A. REFERENCE METHOD FOR DETERMINATION OF CARBON
MONOXIDE EMISSIONS FROM STATIONARY SOURCES
METHOD 10—DETERMINATION OF CARBON MONOXIDE EMISSIONS FROM STATIONARY SOURCES
1. Principle and Applicability.
1.1 Principle. An integrated or continuous gas sample is extracted from a
sampling point and analyzed for carbon monoxide (CO) content using a
Luft-type nondispersive infrared analyzer (NDIR) or equivalent.
1.2 Applicability. This method is applicable for the determination of
carbon monoxide emissions from stationary sources only when specified by
the test procedures for determining compliance with new source performance
standards. The test procedure will indicate whether a continuous or an
integrated sample is to be used.
2. Range and sensitivity.
2.1 Range. 0 to 1,000 ppm.
2.2 Sensitivity. Minimum detectable concentration is 20 ppm for a 0 to
1,000 ppm span.
3. Interferences. Any substance having a strong absorption of infrared
energy will interfere to some extent. For example, discrimination ratios
for water (H2O) and carbon dioxide (CO2) are 3.5 percent H2O per 7 ppm CO
and 10 percent CO2 per 10 ppm CO, respectively, for devices measuring in
the 1,500 to 3,000 ppm range. For devices measuring in the 0 to 100 ppm
range, interference ratios can be as high as 3.5 percent H2O per 25 ppm CO
and 10 percent CO2 per 50 ppm CO. The use of silica gel and ascarite traps
will alleviate the major interference problems. The measured gas volume
must be corrected if these traps are used.
4. Precision and accuracy.
4.1 Precision. The precision of most NDIR analyzers is approximately
±2 percent of span.
4.2 Accuracy. The accuracy of most NDIR analyzers is approximately
±5 percent of span after calibration.
5. Apparatus.
5.1 Continuous sample (Figure 10-1).
5.1.1 Probe. Stainless steel or sheathed Pyrex¹ glass, equipped with a
filter to remove particulate matter.
5.1.2 Air-cooled condenser or equivalent. To remove any excess moisture.
5.2 Integrated sample (Figure 10-2).
5.2.1 Probe. Stainless steel or sheathed Pyrex glass, equipped with a
filter to remove particulate matter.
5.2.2 Air-cooled condenser or equivalent. To remove any excess moisture.
5.2.3 Valve. Needle valve, or equivalent, to adjust flow rate.
5.2.4 Pump. Leak-free diaphragm type, or equivalent, to transport gas.
5.2.5 Rate meter. Rotameter, or equivalent, to measure a flow range from
0 to 1.0 liter per min. (0.035 cfm).
5.2.6 Flexible bag. Tedlar, or equivalent, with a capacity of 60 to 90
liters (2 to 3 ft³). Leak-test the bag in the laboratory before using by
evacuating the bag with a pump followed by a dry gas meter. When evacuation
is complete, there should be no flow through the meter.
[Figures 10-1 (continuous sampling train) and 10-2 (integrated gas-sampling
train) are not legibly reproduced here.]
5.2.7 Pitot tube. Type S, or equivalent, attached to the probe so that the
sampling rate can be regulated proportional to the stack gas velocity when
velocity is varying with the time or a sample traverse is conducted.
5.3 Analysis (Figure 10-3).
1 Mention of trade names or specific prod-
ucts does not constitute endorsement by the
Environmental Protection Agency.
FEDERAL REGISTER, VOL. 39, NO. 47—FRIDAY, MARCH 8, 1974
RULES AND REGULATIONS
5.3.1 Carbon monoxide analyzer. Nondispersive infrared spectrometer, or
equivalent. This instrument should be demonstrated, preferably by the
manufacturer, to meet or exceed manufacturer's specifications and those
described in this method.
5.3.2 Drying tube. To contain approximately 200 g of silica gel.
5.3.3 Calibration gas. Refer to paragraph 6.1.
5.3.4 Filter. As recommended by NDIR manufacturer.
5.3.5 CO2 removal tube. To contain approximately 500 g of ascarite.
5.3.6 Ice water bath. For ascarite and silica gel tubes.
5.3.7 Valve. Needle valve, or equivalent, to adjust flow rate.
5.3.8 Rate meter. Rotameter or equivalent to measure gas flow rate of
0 to 1.0 liter per min. (0.035 cfm) through NDIR.
5.3.9 Recorder (optional). To provide permanent record of NDIR readings.
6. Reagents.
[Figure 10-3. Analytical equipment; not legibly reproduced here.]
6.1 Calibration gases. Known concentration of CO in nitrogen (N2) for
instrument span, prepurified grade of N2 for zero, and two additional
concentrations corresponding approximately to 60 percent and 30 percent
span. The span concentration shall not exceed 1.5 times the applicable
source performance standard. The calibration gases shall be certified by
the manufacturer to be within ±2 percent of the specified concentration.
6.2 Silica gel. Indicating type, 6 to 16 mesh, dried at 175° C (347° F)
for 2 hours.
6.3 Ascarite. Commercially available.
7. Procedure.
7.1 Sampling.
7.1.1 Continuous sampling. Set up the equipment as shown in Figure 10-1,
making sure all connections are leak free. Place the probe in the stack at
a sampling point and purge the sampling line. Connect the analyzer and
begin drawing sample into the analyzer. Allow 5 minutes for the system to
stabilize, then record the analyzer reading as required by the test
procedure. (See 7.2 and 8.) CO2 content of the gas may be determined by
using the Method 3 integrated sample procedure (36 FR 24886), or by
weighing the ascarite CO2 removal tube and computing CO2 concentration
from the gas volume sampled and the weight gain of the tube.
7.1.2 Integrated sampling. Evacuate the flexible bag. Set up the equipment
as shown in Figure 10-2 with the bag disconnected. Place the probe in the
stack and purge the sampling line. Connect the bag, making sure that all
connections are leak free. Sample at a rate proportional to the stack
velocity. CO2 content of the gas may be determined by using the Method 3
integrated sample procedures (36 FR 24886), or by weighing the ascarite
CO2 removal tube and computing CO2 concentration from the gas volume
sampled and the weight gain of the tube.
7.2 CO Analysis. Assemble the apparatus as shown in Figure 10-3, calibrate
the instrument, and perform other required operations as described in
paragraph 8. Purge analyzer with N2 prior to introduction of each sample.
Direct the sample stream through the instrument for the test period,
recording the readings. Check the zero and span again after the test to
assure that any drift or malfunction is detected. Record the sample data
on Table 10-1.
8. Calibration. Assemble the apparatus according to Figure 10-3. Generally
an instrument requires a warm-up period before stability is obtained.
Follow the manufacturer's instructions for specific procedure. Allow a
minimum time of one hour for warm-up. During this time check the sample
conditioning apparatus, i.e., filter, condenser, drying tube, and CO2
removal tube, to ensure that each component is in good operating condition.
Zero and calibrate the instrument according to the manufacturer's
procedures using, respectively, nitrogen and the calibration gases.
Table 10-1. Field data

Location ____________________        Comments:
Date ____________________
Operator ____________________

    Clock time  |  Rotameter setting, liters per minute
                |  (cubic feet per minute)
9. Calculation—Concentration of carbon monoxide. Calculate the
concentration of carbon monoxide in the stack using equation 10-1:

    C_CO(stack) = C_CO(NDIR) (1 - F_CO2)        equation 10-1

where:
    C_CO(stack) = concentration of CO in stack, ppm by volume (dry basis)
    C_CO(NDIR) = concentration of CO measured by NDIR analyzer, ppm by
                 volume (dry basis)
    F_CO2 = volume fraction of CO2 in sample, i.e., percent CO2 from Orsat
            analysis divided by 100.
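The correction of equation 10-1 can be sketched as a small function (an illustrative sketch; the names and the example reading are hypothetical):

```python
def co_stack_ppm(co_ndir_ppm, co2_percent):
    """Equation 10-1: correct the dry-basis NDIR CO reading for the CO2
    removed by the ascarite tube before the sample reached the analyzer."""
    f_co2 = co2_percent / 100.0   # volume fraction of CO2 (Orsat percent / 100)
    return co_ndir_ppm * (1.0 - f_co2)

# e.g., a hypothetical 425 ppm NDIR reading with 12 percent CO2 by Orsat
co_stack = co_stack_ppm(425.0, 12.0)    # 374.0 ppm CO in the stack
```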
10. Bibliography.
10.1 McElroy, Frank. The Intertech NDIR-CO Analyzer. Presented at the
11th Methods Conference on Air Pollution, University of California,
Berkeley, Calif., April 1, 1970.
10.2 Jacobs, M. B., et al. Continuous Determination of Carbon Monoxide
and Hydrocarbons in Air by a Modified Infrared Analyzer. J. Air Pollution
Control Association, 9(2):110-114, August 1959.
10.3 MSA LIRA Infrared Gas and Liquid Analyzer Instruction Book. Mine
Safety Appliances Co., Technical Products Division, Pittsburgh, Pa.
10.4 Models 215A, 315A, and 415A Infrared Analyzers. Beckman Instruments,
Inc., Beckman Instructions 1635-B, Fullerton, Calif., October 1967.
10.5 Continuous CO Monitoring System, Model A5611, Intertech Corp.,
Princeton, N.J.
10.6 UNOR Infrared Gas Analyzers. Bendix Corp., Ronceverte, West Virginia.

ADDENDA
A. Performance Specifications for NDIR Carbon Monoxide Analyzers.
Range (minimum): 0-1000 ppm.
Output (minimum): 0-10 mV.
Minimum detectable sensitivity: 20 ppm.
Rise time, 90 percent (maximum):
APPENDIX B
A flow chart of the operations involved in an auditing program, from
first setting desired limits on the data quality to filing the results,
is given in the following pages. Assumed numbers are used and a sample
calculation of an audit is performed in the flow chart. Each operation has
references to the section in the text of the report where it is discussed.
1. LIMITS FOR DATA QUALITY CAN BE SET BY WHAT
IS DESIRED OR FROM THE NATURAL VARIABILITY
OF THE METHOD WHEN USED BY TRAINED AND
COMPETENT PERSONNEL. FOR THIS EXAMPLE, IT
IS ASSUMED THAT σ{CO} = 43.5 ppm
(subsec. 4.1), AND, USING ±3σ{CO}, THE
LIMITS ARE L = -131 ppm AND U = 131 ppm.
2. FROM PRIOR KNOWLEDGE OF DATA QUALITY, ESTIMATE
THE PERCENTAGE OF FIELD MEASUREMENTS FALLING
OUTSIDE THE ABOVE LIMITS. IF NO INFORMATION
IS AVAILABLE, MAKE AN EDUCATED GUESS. IT IS
ASSUMED IN THIS EXAMPLE THAT 50 PERCENT OF THE
FIELD DATA ARE OUTSIDE THE LIMITS L AND U
(subsec. 4.4.4).
3. DETERMINE: (1) COST OF CONDUCTING AN AUDIT,
(2) COST OF FALSELY INFERRING THAT GOOD DATA
ARE BAD, (3) COST OF FALSELY INFERRING THAT
BAD DATA ARE GOOD, AND (4) COST SAVINGS FOR
CORRECTLY IDENTIFYING BAD DATA (subsec. 4.4.4).
4. DETERMINE THE AUDIT LEVEL EITHER BY (1) MINI-
MIZING AVERAGE COST USING EQUATION (22) OF
SUBSECTION 4.4.4, OR (2) ASSURING A DESIRED
LEVEL OF CONFIDENCE IN THE REPORTED DATA
THROUGH STATISTICS. FOR THIS EXAMPLE, THE
AUDIT LEVEL IS TAKEN AS n = 5 (fig. 15).
5. BY TEAMS, TYPES OF SOURCES, OR GEOGRAPHY,
GROUP FIELD TESTS INTO LOTS (GROUPS) OF ABOUT
20, TO BE PERFORMED IN A PERIOD OF ONE
CALENDAR QUARTER.
6. SELECT n OF THE N TESTS FOR AUDITING. COMPLETE
RANDOMIZATION MAY NOT BE POSSIBLE DUE TO AUDI-
TOR'S SCHEDULE. THE PRIMARY POINT IS THAT THE
FIELD TEAM SHOULD NOT KNOW IN ADVANCE THAT
THEIR TEST IS TO BE AUDITED.
7. ASSIGN OR SCHEDULE AN AUDITOR FOR EACH FIELD
TEST.
SET DESIRED
LOWER AND UPPER
LIMITS FOR DATA
QUALITY, L AND U
ESTIMATE AVERAGE
QUALITY OF FIELD
DATA IN TERMS OF
L AND U
DETERMINE OR
ASSUME RELEVANT
COSTS
DETERMINE AUDIT
LEVEL FROM
STATISTICS, OR
AVERAGE COST
GROUP FIELD TESTS
INTO LOT SIZES OF
ABOUT N = 20
RANDOMLY SELECT
n OF THE N TESTS
FOR AUDITING
ASSIGN/SCHEDULE
AUDITOR(S) FOR
THE n AUDITS
AUDITOR
8. THE AUDITOR OBTAINS APPROPRIATE CALIBRATED
EQUIPMENT AND SUPPLIES FOR THE AUDIT
(subsec. 4.3).
9. OBSERVE THE FIELD TEAM'S PERFORMANCE OF THE
FIELD TEST (subsec. 4.3.2) AND NOTE ANY
UNUSUAL CONDITIONS THAT OCCURRED DURING
THE TEST.
10. THE AUDITOR'S REPORT SHOULD INCLUDE (1) DATA
SHEET FILLED OUT BY THE FIELD TEAM,
(2) AUDITOR'S COMMENTS, (3) AUDIT DATA SHEET
WITH CALCULATIONS, AND (4) A SUMMARY OF THE
TEAM'S PERFORMANCE WITH A NUMERICAL RATING
(subsec. 4.3).
11. THE AUDITOR'S REPORT IS FORWARDED TO THE
MANAGER.
MANAGER
12. COLLECT THE AUDITOR'S REPORTS FROM THE n
AUDITS OF THE LOT OF N STACKS. IN THIS
CASE n = 7 AND ASSUMED VALUES FOR THE
AUDITS ARE d1 = -40, d2 = 20, d3 = -10,
d4 = 80, d5 = 60, d6 = 30, AND d7 = 10
(table 4).
13. CALCULATE d̄ AND s_d ACCORDING TO THE SAMPLE IN
TABLE 4. RESULTS OF THIS SAMPLE CALCULATION
SHOW d̄ = 21.4 AND s_d = 40.6 (table 4, subsec.
4.4.3).
14. USE A t-TEST TO CHECK d FOR SIGNIFICANCE, FOR 14
THIS EXAMPLE t = (21.4 x /7~)/43.5 = 1.30. THE
TABULATED t-VALUE FOR 6 DEGREES OF FREEDOM AT
THE 0.05 LEVEL IS 1.943; HENCE, d IS NOT
SIGNIFICANTLY DIFFERENT FROM 0 AT THIS LEVEL.
ALSO, sd IS CHECKED AGAINST THE ASSUMED VALUE
OF 43.5 ppm BY A CHI-SQUARE TEST.
X2/f = sJ;/a2{cD = (40.6)2/(43.5)2 = 0.87
THE TABULATED VALUE OF x /6 AT THE 95 PER-
CENT LEVEL IS 1.64; HENCE, sd IS NOT SIGNIFI-
CANTLY DIFFERENT FROM 43.5 ppm.
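The arithmetic of steps 12 through 14 can be reproduced with a short script. This is a sketch only: the seven audit differences and the assumed sigma of 43.5 ppm come from the worked example above, and the critical values (1.943 and 1.64) would still be read from t and chi-square tables.

```python
import math

# Audit differences d1 ... d7 from step 12 (ppm)
d = [-40, 20, -10, 80, 60, 30, 10]
n = len(d)

# Step 13: sample mean and standard deviation of the differences
d_bar = sum(d) / n
s_d = math.sqrt(sum((x - d_bar) ** 2 for x in d) / (n - 1))

# Step 14: t-statistic for H0: mean difference = 0.  The example uses
# the assumed population sigma of 43.5 ppm in the denominator.
sigma = 43.5
t = d_bar * math.sqrt(n) / sigma  # compare with tabulated t(6 df, 0.05)

# Chi-square ratio for H0: s_d is consistent with the assumed sigma
chi2_over_f = s_d ** 2 / sigma ** 2  # compare with the tabulated value
```

Running this reproduces d̄ ≈ 21.4, sd ≈ 40.6, t ≈ 1.30, and a chi-square ratio of about 0.87, matching the hand calculation.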
[Flowchart of steps 8 through 14: prepare equipment and forms required in audit; observe on-site performance of test; prepare audit report; forward report to manager; combine results of n audits; calculate the mean, d̄, and standard deviation, sd; test d̄ and sd.]
15. OBTAIN THE VALUE OF k FROM TABLE 5 FOR n = 7
AND p = 0.1. THIS VALUE IS 2.334; THEN
d̄ + k sd = 116 ppm AND d̄ - k sd = -73.4 ppm
(subsec. 4.4.3).
16. COMPARE THE ABOVE CALCULATIONS WITH LIMITS
L AND U (subsec. 4.4.3). FOR THIS EXAMPLE
d̄ + k sd = 116 < U = 131 ppm
d̄ - k sd = -73.4 > L = -131 ppm
BOTH CONDITIONS ARE SATISFIED; GO TO STEP 18.*
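The acceptance check of steps 15 and 16 amounts to comparing two-sided limits built from d̄, sd, and the tabulated constant k against the quality limits L and U. A minimal sketch using the example's values:

```python
# Values from the worked example (ppm)
d_bar = 21.4
s_d = 40.6
k = 2.334            # from table 5 for n = 7, p = 0.1
L, U = -131.0, 131.0 # quality limits set in step 1

upper = d_bar + k * s_d  # 21.4 + 2.334 * 40.6, about 116 ppm
lower = d_bar - k * s_d  # 21.4 - 2.334 * 40.6, about -73 ppm

# Step 16: both conditions must hold for the lot's data to be accepted
data_acceptable = (upper < U) and (lower > L)
```

If either condition failed, the manager would proceed to step 17 and select corrective action options.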
17. STUDY THE AUDIT AND FIELD DATA FOR SPECIFIC
AREAS OF VARIABILITY; SELECT THE MOST COST-
EFFECTIVE ACTION OPTION(S) THAT WILL RESULT
IN GOOD QUALITY DATA (subsec. 4.2). NOTIFY
THE FIELD TEAMS TO IMPLEMENT THE SELECTED
ACTION OPTION(S).
18. A COPY OF THE AUDITOR'S REPORT SHOULD BE SENT
TO THE RESPECTIVE FIELD TEAM. ALSO, THE DATA
ASSESSMENT RESULTS, i.e., CALCULATED VALUES OF
d̄ AND sd AND COMPARISON WITH THE LIMITS L AND U,
SHOULD BE FORWARDED TO EACH TEAM INVOLVED IN
THE N FIELD TESTS.
19. THE FIELD DATA WITH AUDIT RESULTS ATTACHED ARE
FILED. THE AUDIT DATA SHOULD REMAIN WITH THE
FIELD DATA FOR ANY FUTURE USES.
[Flowchart of steps 15 through 19: calculate d̄ + k sd and d̄ - k sd; compare (15) with L and U; modify measurement method; inform field teams of audit results; file and circulate or publish field data.]
*If either one or both limits had been exceeded, one would proceed to step 17.
APPENDIX C
GLOSSARY OF SYMBOLS
This is a glossary of symbols as used in this document. Symbols used and
defined in the reference method (appendix A) are not repeated here.
SYMBOL    DEFINITION
N         Lot size, i.e., the number of field tests to be treated as
          a group.
n         Sample size for the quality audit (section IV).
CV{X}     Assumed or known coefficient of variation (100 σ{X}/μ{X}).
ĈV{X}     Computed coefficient of variation (100 sx/X̄) from a finite
          sample of measurements.
σ{X}      Assumed standard deviation of the parameter X (population
          standard deviation).
T̂{X}      Computed bias of the parameter X for a finite sample
          (sample bias).
R         Range, i.e., the difference in the largest and smallest
          values in r replicate analyses.
dj        The difference in the audit value and the value of CO
          arrived at by the field crew for the jth audit.
d̄         Mean difference between COj and COaj for n audits.
sd        Computed standard deviation of differences between COj and
          COaj.
p         Percent of measurements outside specified limits L and U.
k         Constant used in sampling by variables (section IV).
P{Y}      Probability of event Y occurring.
t(n-1)    Statistic used to determine if the sample bias, d̄, is
          significantly different from zero (t-test).
χ²/(n-1)  Statistic used to determine if the sample variance, sd², is
          significantly different from the assumed variance, σ², of
          the parent distribution (chi-square test).
GLOSSARY OF SYMBOLS (CONTINUED)
SYMBOL  DEFINITION
L       Lower quality limit used in sampling by variables.
U       Upper quality limit used in sampling by variables.
CL      Center line of a quality control chart.
LCL     Lower control limit of a quality control chart.
UCL     Upper control limit of a quality control chart.
COj     Carbon monoxide reported by the field team for field test j.
COaj    Carbon monoxide concentration used in an audit check.
COm     Measured value of a calibration gas.
COa     Assayed or known value of a calibration gas.
CONDIR  Concentration of CO measured by the NDIR analyzer on a dry
        basis and uncorrected for CO2 removal.
APPENDIX D
GLOSSARY OF TERMS
The following glossary lists and defines the statistical terms as used
in this document.
Accuracy               A measure of the error of a process expressed as a
                       comparison between the average of the measured values
                       and the true or accepted value. It is a function of
                       precision and bias.
Bias                   The systematic or nonrandom component of measurement
                       error.
Lot                    A specified number of objects to be treated as a
                       group, e.g., the number of field tests to be conducted
                       by an organization during a specified period of time
                       (usually a calendar quarter).
Measurement method     A set of procedures for making a measurement.
Measurement process    The process of making a measurement, including method,
                       personnel, equipment, and environmental conditions.
Population             A large number of like objects (i.e., measurements,
                       checks, etc.) from which the true mean and standard
                       deviation can be deduced with a high degree of
                       accuracy.
Precision              The degree of variation among successive, independent
                       measurements (e.g., on a homogeneous material) under
                       controlled conditions, usually expressed as a standard
                       deviation or as a coefficient of variation.
Quality audit          A management tool for independently assessing data
                       quality.
Quality control check  Checks made by the field crew on certain items of
                       equipment and procedures to assure data of good
                       quality.
Sample                 Objects drawn, usually at random, from the lot for
                       checking or auditing purposes.
APPENDIX E
CONVERSION FACTORS
Conversion factors for converting the U.S. customary units to the
International System of Units (SI)* are given below.
To Convert from                       To                     Multiply by
Length
foot                                  meter (m)              0.3048
inch                                  meter (m)              0.0254
Pressure
inch of mercury (in. of Hg) (32°F)    newton/meter2 (N/m2)   3386.389
inch of mercury (in. of Hg) (60°F)    newton/meter2 (N/m2)   3376.85
millimeter of mercury (mmHg) (32°F)   newton/meter2 (N/m2)   133.3224
inch of water (in. of H2O) (39.2°F)   newton/meter2 (N/m2)   249.082
inch of water (in. of H2O) (60°F)     newton/meter2 (N/m2)   248.84
Force
pound-force (lbf avoirdupois)         newton (N)             4.448222
Mass
pound-mass (lbm avoirdupois)          kilogram (kg)          0.4535924
Temperature
degree Celsius                        kelvin (K)             tK = tC + 273.15
degree Fahrenheit                     kelvin (K)             tK = (tF + 459.67)/1.8
degree Rankine                        kelvin (K)             tK = tR/1.8
degree Fahrenheit                     degree Celsius         tC = (tF - 32)/1.8
kelvin                                degree Celsius         tC = tK - 273.15
Velocity
foot/second                           meter/second (m/s)     0.3048
foot/minute                           meter/second (m/s)     0.00508
Volume
cubic foot (ft3)                      meter3 (m3)            0.02832
Volume/Time
foot3/minute                          meter3/second (m3/s)   0.0004719
foot3/second                          meter3/second (m3/s)   0.02832
*Metric Practice Guide (A Guide to the Use of SI, the International System
of Units), American National Standard Z210.1-1971, American Society for
Testing and Materials, ASTM Designation: E380-70, Philadelphia, Pa., 1971.
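A few of the factors above can be expressed in code for routine data reduction. This is a sketch only; the constant and function names are illustrative, and the values are copied directly from the table.

```python
# Selected conversion factors from appendix E
FT_TO_M = 0.3048                 # foot -> meter
IN_HG_32F_TO_N_PER_M2 = 3386.389 # inch of mercury (32 deg F) -> N/m2
LBM_TO_KG = 0.4535924            # pound-mass -> kilogram

def fahrenheit_to_kelvin(t_f):
    """tK = (tF + 459.67)/1.8, per the temperature rows of the table."""
    return (t_f + 459.67) / 1.8

def fahrenheit_to_celsius(t_f):
    """tC = (tF - 32)/1.8."""
    return (t_f - 32) / 1.8
```

For example, a stack temperature of 212°F converts to 100°C, and 32°F converts to 273.15 K.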
TECHNICAL REPORT DATA
(Please read Instructions on the reverse before completing)
1. REPORT NO.: EPA-650/4-74-005h
3. RECIPIENT'S ACCESSION NO.:
4. TITLE AND SUBTITLE: "Guidelines for Development of a Quality
Assurance Program: Determination of Carbon Monoxide
Emissions from Stationary Sources by Non-Dispersive
Infrared Spectrometry (NDIR)."
5. REPORT DATE: February 1975
6. PERFORMING ORGANIZATION CODE:
7. AUTHOR(S): Franklin Smith, Denny E. Wagoner, Robert P. Donovan
8. PERFORMING ORGANIZATION REPORT NO.:
9. PERFORMING ORGANIZATION NAME AND ADDRESS:
Research Triangle Institute
P.O. Box 12194
Research Triangle Park, NC 27709
10. PROGRAM ELEMENT NO.: 1HA327
11. CONTRACT/GRANT NO.: 68-02-1234
12. SPONSORING AGENCY NAME AND ADDRESS:
Office of Research and Development
U.S. Environmental Protection Agency
Washington, D.C. 20460
13. TYPE OF REPORT AND PERIOD COVERED:
14. SPONSORING AGENCY CODE:
15. SUPPLEMENTARY NOTES:
16. ABSTRACT:
Guidelines for the quality control of stack gas analysis for carbon monoxide
emissions by the Federal reference method (NDIR) are presented. These include:
1. Good operating practices.
2. Directions on how to assess performance and to qualify data.
3. Directions on how to identify trouble and to improve data quality.
4. Directions to permit design of auditing activities.
The document is not a research report. It is designed for use by operating
personnel.
17. KEY WORDS AND DOCUMENT ANALYSIS
a. DESCRIPTORS: Quality Assurance; Quality Control; Air Pollution;
Gas Sampling; Stack Gases
b. IDENTIFIERS/OPEN ENDED TERMS:
c. COSATI Field/Group: 13H, 14D, 13B, 14B, 21B
18. DISTRIBUTION STATEMENT: Unlimited
19. SECURITY CLASS (This Report): Unclassified
20. SECURITY CLASS (This page): Unclassified
21. NO. OF PAGES:
22. PRICE:
EPA Form 2220-1 (9-73)