United States Environmental Protection Agency
Office of Air Quality Planning and Standards
Research Triangle Park, NC 27711

EPA-340/1-83-013
January 1983

Stationary Source Compliance Series

Performance Specification Tests for Pollutant and Diluent Gas Monitors:
Reporting Requirements, Report Format, and Review Procedures
-------
EPA-340/1-83-013
Performance Specification Tests for Pollutant
and Diluent Gas Monitors:
Reporting Requirements, Report Format,
and Review Procedures
Prepared by:
Guy B. Oldaker III, Ph.D.
James W. Peeler
Entropy Environmentalists, Inc.
Research Triangle Park
North Carolina
Prepared for:
Louis R. Paley
Stationary Source Compliance Division
and
Anthony Wayne
Region VII
United States Environmental Protection Agency
SSCD Contract No. 68-01-6317
U.S. ENVIRONMENTAL PROTECTION AGENCY
Office of Air Quality Planning and Standards
Stationary Source Compliance Division
Washington, D.C. 20460
January 1983
-------
The Stationary Source Compliance series of reports is issued by the
Office of Air Quality Planning and Standards, U. S. Environmental
Protection Agency, to assist Regional Offices in activities related to
compliance with implementation plans, new source emission standards,
and hazardous emission standards to be developed under the Clean Air
Act. Copies of Stationary Source Compliance Reports are available -
as supplies permit - from Library Services, U.S. Environmental
Protection Agency, MD-35, Research Triangle Park, North Carolina
27711, or may be obtained, for a nominal cost, from the National
Technical Information Service, 5285 Port Royal Road, Springfield,
Virginia 22151.
This report has been reviewed by the Office of Air Quality Planning
and Standards, U.S. Environmental Protection Agency, and approved for
publication as received from Entropy Environmentalists, Inc. Approval
does not signify that the contents necessarily reflect the views and
policies of the U.S. Environmental Protection Agency, nor does mention
of trade names or commercial products constitute endorsement or
recommendation for use.
-------
ABSTRACT
This document presents recommended reporting requirements for performance tests
of continuous emission monitoring systems installed at fossil-fuel fired steam
generators subject to New Source Performance Standards (NSPS). The recommended
reporting requirements are applicable to performance tests conducted according
to 40 CFR 60, Appendix B, Performance Specifications 2 and 3 (promulgated,
Federal Register, Vol. 40, No. 194, October 6, 1975). The document details
procedures for reviewing such performance tests.
iii
-------
TABLE OF CONTENTS
Section 1. Introduction 1
Section 2. Discussion of the Review Process 7
2.1 An Overview 7
2.2 Preparing for the Review 8
2.2.1 Pretest Protocol Statements 10
2.2.2 The Observer's Notes and Records 11
2.3 The Level of Review 11
2.4 Conducting the Review 13
2.5 Reviewer's Report 14
Section 3. Test Report Format 17
Section 4. Review of Test Procedures, Data and Results of Monitor
Performance Tests for SO2 and NOx Monitors 27
4.1 Calibration Error Test 28
4.1.1 Background 28
4.1.2 Review of Calibration Error Test Data and
Results 29
4.2 Conditioning Period; Operational Test Period 31
4.2.1 Background 31
4.2.2 Review of Operational Test Period 32
4.3 Response Time Test 35
4.3.1 Background 35
4.3.2 Review of Response Time Test Data Sheet and
Strip Chart Records 37
4.4 Zero and Calibration Drift Tests 43
4.4.1 Background 43
4.4.2 Review of Zero and Calibration Drift Tests. . . 45
4.4.3 Review of 2-Hour and 24-Hour Drift Test
Strip Chart Records 50
4.5 Relative Accuracy Test 51
4.5.1 Background 51
4.5.2 Review of Relative Accuracy Calculations. ... 52
4.5.3 Determinations of Continuous Monitor System
Values 58
4.5.4 Review of Reference Method Data and Results . . 60
-------
Table of Contents
(continued)
Section 5. Review of Test Procedures, Data, and Results of Monitor
Performance Tests for CO2 and O2 Monitors 107
5.1 Background ..... 107
5.2 Reviewing Diluent Monitor Performance Test Reports. . . 110
5.3 Calibration Gases 112
VI
-------
1.0 INTRODUCTION
Continuous monitoring of gaseous pollutant emissions sulfur dioxide
(SO2) and nitrogen oxides (NOx) is required at stationary sources subject to
New Source Performance Standards (NSPS). Requirements for continuous
monitoring are contained in 40 CFR 60.13. Paragraph 60.13(c) states that "the
owner or operator of any affected facility shall conduct continuous monitoring
system performance evaluations and furnish the Administrator within 60 days
thereof ... copies of a written report of the results of such tests." The
contents of this written report are mentioned in Performance Specification 2
(40 CFR 60, Appendix B), which explicitly requires reporting of the following
information:
(1) The data and results from determinations of:
(a) Calibration gas concentrations (Figure 2-1).
(b) Calibration error (Figure 2-2).
(c) Relative accuracy (Figure 2-3).
(d) 2-hour zero and calibration drifts (Figure 2-4).
(e) 24-hour zero and calibration drifts (Figure 2-5).
(f) Response time (Figure 2-6).
(2) The method used to determine the integrated averages from
the continuous emission monitor for the relative accuracy
test (this requirement appears as a note at the bottom of
Figure 2-3).
(3) For relative accuracy tests of wet-basis continuous emission
monitors, the moisture test method and the correction
procedure employed to place the reference method and
monitoring measurements on a consistent basis.
-------
Moreover, additional reporting may be necessary depending on the manner in
which the Reference Methods are employed during the performance testing program
(which may commence with the analyses of calibration gases two weeks prior to
the initiation of the operational test period). Thus, the introduction to 40
CFR 60, "Appendix A - Reference Methods" addresses the necessary reporting
requirements to be observed when the tester elects to employ materials or
procedures identified therein as "optional," "equivalent," or "subject to the
approval of the Administrator."
A performance test report containing only the explicitly required
information would provide little support for the results of monitor performance
testing. Indeed, this type of support is critical to prove that the monitors,
as operated and installed at the source, are providing quality emission rate
data because subsequent decisions and actions presuppose the accuracy of the
indicator, i.e., the continuous monitor.
This manual has been written with a threefold purpose:
(1) Provide a model for the complete reporting of the results of
continuous monitor performance tests.
(2) Provide test report reviewers with sufficient background
information about the continuous monitor performance
specifications so that they may review such reports effectively.
(3) Provide test report reviewers with detailed guidelines for
conducting reviews so that the review process may be
facilitated.
The interpretation of "reporting completeness" is not entirely objective,
and the recommended reporting requirements contained in this manual may be
viewed by some as burdensome. However, the recommendation includes only the
reporting of data generated as a matter of course, if the procedures detailed
in the Performance Specifications and Reference Methods are followed. In
-------
addition, the material related to report formatting and stylistic conventions
(e.g., Introduction, Table of Contents, Discussion of Results, etc.), would
also be included as a matter of course in engineering report documentation.
Standard, complete performance specification test reporting can have
beneficial effects. It will promote adherence to Performance Specification and
Reference Method procedures, which in turn will help ensure quality services to
sources who contract for testing. For the agency, standardized, complete
reporting can facilitate consistent comparisons of performance among many
monitors (which may be useful in recognizing invalid emission rate data), ease
manpower restraints, and increase the cost effectiveness of continuous monitor
certification programs.
This manual is intended to accompany the continuous monitor Performance
Specifications promulgated October 6, 1975 (Federal Register, Vol. 40, No. 194,
pp. 46250-46271). Since that time, however, revisions to these Performance
Specifications have been proposed twice: Federal Register, Vol. 44, No. 197,
Wednesday, October 10, 1979, pp. 58601-58636, and Federal Register, Vol. 46,
No. 16, Monday, January 26, 1981, pp. 8351-8363. The revisions proposed
October 10, 1979 were quite extensive in scope, and in addition, the revisions
addressed required reporting in some detail. The revisions proposed January
26, 1981, on the other hand, are significantly streamlined relative to the
currently promulgated Performance Specifications, and emphasize the assessment
of performance in terms of the relative accuracy test results. Both revisions
require determinations of system relative accuracy in terms of lb pollutant/10^6
Btu when diluent monitors are used in conjunction with pollutant monitors.
Thus, the revisions implicitly include a relative accuracy specification for
diluent monitors. Revised Performance Specifications in the form of those
proposed January 26, 1981, are favored for promulgation.
-------
Other regulatory actions and issues that may ultimately affect the use of
this manual include: (1) the introduction of additional Reference Methods for
determinations of SO2 and NOx emission rates (lb pollutant/10^6 Btu);
(2) revisions to the current Reference Methods so that quality assurance
provisions are included; and (3) the promulgation of Appendix F, which will
specify quality assurance programs for continuous monitors.
In spite of the current regulatory flux, it is hoped that this manual will
establish uniform reporting requirements and be a useful guide for reviewers of
performance test reports.
Although this manual explicitly addresses monitor performance test report
requirements for NSPS sources, much of this information may apply to reports
for monitors installed at sources subject to State Implementation Plans (SIPs),
since many of the continuous monitoring requirements contained in the SIPs are
patterned after the specifications and procedures of 40 CFR 60, Appendices A
and B. For similar reasons, applicability may extend to continuous monitors
installed as required by waivers, consent decrees, etc.
This manual is divided into five sections. Section 2.0 provides
background material concerning the nature of the report and also discusses the
review process. Included within the background material are discussions that
deal with: (1) the function of the pretest meeting, any ensuing test protocol
decisions, and the use of this information by the reviewer; and (2) the role of
the agency observer during the test and the use of the observer's report and
field notes by the reviewer.
-------
Following this information, the goals of the review are listed. These
goals are the basis for subsequent discussions that address (a) procedures for
reducing the complexity of the reviewing task; (b) the organization and
strategy of the review; (c) the depth of the review; and finally, (d) the
actions the reviewer should take upon completing the review.
Sections 3.0 and 4.0 deal with the reporting format and requirements and
the review of pollutant gas monitor performance test reports. The material is
divided into the two sections for convenience. Section 3.0 covers the body
of the test report: the introductory background information; the source,
monitor, and test descriptions; and the test results. The discussions focus on
the format, content, and purpose of this introductory material.
Section 4.0 deals with the raw data generated during performance tests of
gas monitors. Each test of the complete performance specification test is
separately introduced and discussed. Each introduction includes a brief
description of the test and its purpose. The discussions that follow address
reporting format, reporting requirements, and review procedures. Additional
data that supplement the tests, such as data generated by moisture
determinations, are also included in Section 4.0 and are treated similarly.
Section 5.0 addresses the reporting and review of the raw data
accompanying performance test reports of diluent monitors. Because the
performance tests of diluent monitors closely parallel those of gas monitors,
much of Section 5.0 is devoted to directing the reviewer to analogous
discussions pertinent to gas monitors (the subject of Section 4.0).
-------
2.0 DISCUSSION OF THE REVIEW PROCESS
This manual is designed for the novice reviewer. The reviewer is expected
to have some familiarity with Methods 3, 4, 6, and 7 (40 CFR 60, Appendix A)
and Performance Specifications 2 and 3 (40 CFR 60, Appendix B). As the
reviewer gains experience, this manual should become more a reference, rather
than a guide. This is especially true of the many introductions to the
individual test procedures.
2.1 AN OVERVIEW
The primary goal of the test report review is to determine the performance
status of the tested monitor.
Clearly, the amount of data and supporting information contained in the
report will limit the accuracy of the continuous monitor performance
assessment. Sufficient data should be reported to allow independent recalculation
of the test results from the field data, laboratory data, and monitor records.
Accordingly, completeness is the primary aspect of the report to be assessed by
the reviewer.
The reviewer must determine whether acceptable testing procedures were
employed in the monitor performance evaluation. In general, test procedures
must provide accurate data for determining whether the monitor is in compliance
with the performance specifications. Finally, the reviewer must determine
whether the reported test results have been calculated correctly and accurately
from the reported data.
-------
Based upon the results of the report review, the reviewer must conclude
that:
(1) the reported results of the test conform with the
applicable monitoring specifications,
(2) the reported results do not conform with the monitoring
performance specifications, or
(3) the performance status of the continuous monitor cannot be
determined from the reported results.
A complete monitor test report includes a plethora of information. Figure
2-1 illustrates the complexity of the complete review process for an SO2
monitor relative accuracy test. The diagram also shows how the review can be
effectively approached by employing two guidelines: (1) divide the review into
workable units, and (2) work from general to specific (e.g., first check the
final result; second, the supporting intermediate results; and, finally, the
raw data).
2.2 PREPARING FOR THE REVIEW
Before starting the review of a monitor test report, the reviewer should
assemble as much supplementary information as possible. This may include the
following:
(1) pretest protocol statements;
(2) observer's report, notes, or checklist;
(3) previous monitor performance test reports; and
(4) previous source performance test reports.
-------
FIGURE 2-1
Review Process for a Relative Accuracy Test of an SO2 CEM
[Flow diagram not legibly reproduced in this copy: the review proceeds from the final relative accuracy result, through the relative accuracy and sampling calculations, to the supporting raw field and laboratory data.]
-------
Some of the items above may aid the reviewer in validating the test
procedures; this is true for the statements of protocol and information provided
by the observer. Other sources of information can aid the reviewer in
identifying and evaluating anomalous data. In general, the more supplemental
information the reviewer has, the more easily the review can be performed, and
the more easily and accurately the required level of review can be achieved.
2.2.1 Pretest Protocol Statements
Conducting a monitor performance test is not always straightforward. Many
problems can arise which show that there are neither "standard" sources nor
"standard" monitors. The nature of the source, monitor, or the monitoring
location may require modification of the performance specification test (PST)
procedures. In this regard, a fair degree of procedural latitude is permitted in
both the Reference Methods and Performance Specifications.
A pretest meeting is often held at which representatives from the agency
(usually the observer), the source, and the test team discuss the potential
problems associated with the PST and agree upon necessary procedures,
modifications, and reporting requirements. The agreed upon procedures are
sometimes documented within a pretest protocol statement prepared by either the
source or the tester, and the statement may be submitted to the agency. In some
cases, the only record of the pretest meeting agreements is the agency
representative's notes. Whatever the situation, the reviewer should obtain all
documents of pretest procedural decisions and agreements because such information
may prove invaluable for the test report review.
10
-------
2.2.2 The Observer's Notes and Records
Because owners or operators of continuous monitors are required to notify
the Administrator no less than 30 days before the monitor operational test period
[40 CFR 60.7(a)(5) and 40 CFR 60.13(c)], an observer is usually present during
the PST.
If an agency observer is present, a written report and/or observer's
checklist may be available. The report/checklist should contain brief summaries
of any inconsistencies observed during the test and an estimate of their effect
on the test results. Obviously, without having observed the test, the reviewer
cannot fully evaluate the validity of the test procedures and raw data. The
observer's report may supply the reviewer with valuable supplemental information
to facilitate the review and/or aid the reviewer in the review strategy.
2.3 THE LEVEL OF REVIEW
There are three levels of review:
Level I. Cursory check of results.
Level II. Spot check of report.
Level III. Complete review.
In a Level I review, the reviewer checks the results to determine the
performance capability of the monitor and confirms that the report contains
sufficient raw data to permit a higher level of review. Insufficient data would
limit the review to this level.
For Level II, the reviewer recalculates the results of one or more tests
from the raw data. The reviewer makes additional checks of the reported data and
11
-------
procedures using available pretest protocol statements and information provided
by the observer.
A Level III review requires considerable effort and a significant time
commitment. The reviewer checks all results thoroughly, which may require
extensive recalculation, and evaluates all procedures, diagrams, and data using
any supplemental information.
In selecting the level of review, there are several factors to consider.
The reviewer should consider the importance of the test and the significance of
the results. For example, when the accuracy of the values of the reported
results is more important than the status of the monitor (i.e., whether the
monitor passes or fails the performance test), a Level III effort would be
required. The results of a particular part of a performance test, e.g., response
time, can also dictate the level of review. For example, when the results of a
test conclusively show that the monitor failed the particular part of a
performance test, and a glance at the raw data supports this conclusion, further
review should be unnecessary. A Level I review, in this case, would suffice.
Supplemental information can also influence the selection of the level of
review. If an agency observer has thoroughly documented one or more tests and is
satisfied that the results are valid, the reviewer may decide to conduct a Level
I review. On the other hand, special tests, e.g., tests to detect or to quantify
stratification at the monitoring location, may require additional effort on the
part of the reviewer.
The review process is a dynamic activity; the level of review may change
between the time the review is initiated and the time it is completed. Thus, the
reviewer must be sensitive to the review process and flexible in his approach.
12
-------
2.4 CONDUCTING THE REVIEW
As the reviewer gains experience, he will probably develop his own review
strategies. Two basic guidelines are offered to the novice. First, focus on a
manageable part of the report and then proceed from the general to the specific.
For instance, in reviewing a relative accuracy test, first check the calculation
of the relative accuracy, then validate the reported monitor data from the strip
chart records, and finally, validate the reported Reference Method data by
recalculating the results from the field and laboratory raw data. This
progression from the general to the specific may be terminated at any point
corresponding to the selected level of review, or may be terminated upon finding
any major discrepancy in the reported data or results. (A major discrepancy is
considered any error that affects the performance status of the monitor.)
The following paragraphs present a synopsis of a Level III review. The
reviewer is assumed to be well supplied with supplemental information.
(1) The reviewer should check the introductory material of the report
against the observer's report and the statements within the pretest
protocol and note any discrepancies and evaluate their effect on the
results of the test.
(2) The reviewer should examine the results of the test and the discussion
of the results, focusing upon those tests that represent borderline
cases, i.e., where the monitor is close to the specification, either on
the passing side or on the failing side. The order in which the
individual tests comprising the PST are reviewed should be decided
using this criterion.
(3) Before proceeding to the raw data and calculations, the reviewer should
inspect the test report to verify that all the raw data necessary to
13
-------
recalculate the results are available. Omissions of data should be
noted. At the same time, the reviewer should estimate the significance
of these omissions with regard to their perceived effect on the
reported results.
(4) After identifying data omissions and evaluating their effect on
reported results, the reviewer can then recalculate the results. This
is the bulk of the task and can require a significant amount of effort.
The reviewer should divide the task into workable units and should
proceed from the general to the specific. Data should be examined with
regard to reasonableness. Here, common sense and experience will, in
most cases, suffice in evaluating most of the reported data. For these
recalculations, errors and other omissions should be noted, and their
effects on the reported results should be evaluated.
(5) When all the procedures, data, calculations, and results have been
evaluated, the reviewer is then in a position to answer the questions:
(a) Has the testing been properly conducted?
(b) Have adequate data been reported to permit a thorough review?
(c) Have all calculations been performed correctly?
(d) What is the status of the monitor as indicated by the data?
2.5 REVIEWER'S REPORT
The reviewer should promptly submit to the source the results of the review.
The report should answer the questions listed at the end of Section 2.4. Tne
report should identify those errors, omissions, and inconsistencies that the
reviewer feels may have significant effect on the results of the test. Where
appropriate, the reviewer should supply corrections and should provide his own
14
-------
evaluation of the impact of the problems on the results. The reviewer may also
recommend future actions.
The report should include the reviewer's evaluation of the status of the
monitor, as indicated by the available data, and should reach one of the
following conclusions:
(1) The data indicate that the monitor passed the performance specification
test;
(2) The data indicate that the monitor failed the performance specification
test;
(3) The status of the monitor cannot be determined from the available data.
The second and third conclusions require recommendations. If the monitor
failed the PST, the applicable tests should be identified, and appropriate
retesting should be recommended. If the status of the monitor cannot be
determined, the reviewer should recommend either submission of additional data or
retesting.
15
-------
3.0 TEST REPORT FORMAT
A standardized format for continuous monitor PST reports is highly
recommended, because report review can be accomplished more efficiently,
effectively, and accurately. In addition, the adoption of standardized
reporting requirements that emphasize completeness could promote the
application of quality PST procedures. A standardized test reporting format
would be beneficial for both agency and source.
In the paragraphs that follow, a report format is recommended. The
criteria considered for the development of this recommended format include:
(1) the need for formal elements, such as title pages, certification
pages, table of contents, etc.;
(2) the need for background information necessary to present data
and results in the proper technical and regulatory perspective;
and
(3) the need for sufficient data to determine the performance status
of the tested monitor.
The following format reflects minimum, recommended reporting requirements,
which are detailed in subsequent sections of this manual. Because minimum
reporting requirements are reflected, the format would not apply to all test
reporting situations. Rather, the format is a skeleton, which may be added to
as necessary, recognizing that PSTs and accompanying test reports are best
handled on a case-by-case basis.
The recommended format is first presented in outline form in Figure 3-1;
the individual elements are discussed in order of their presentation.
17
-------
FIGURE 3-1.
RECOMMENDED
ELEMENTS OF A CONTINUOUS MONITOR
PERFORMANCE TEST REPORT
1.0 Title Page
2.0 Certification Page
3.0 Table of Contents
4.0 Introduction
5.0 Summary of Results
6.0 Discussion of Results
7.0 Description of Source
8.0 Description of Monitors
9.0 Discussion of Testing Procedures
10.0 Performance Testing Data Sheets
Appendices
Appendix I. Reference Method Raw Data
Appendix II. Strip Chart Records
Appendix III. Calibration Data
18
-------
(1) Title Page
The title page should include the following: (1) identification of the
affected facility and its location; (2) the type of pollutant monitor tested
(e.g., SO2 or NOx monitor); (3) identification of the organization submitting
the test report; and (4) the date on which the report was submitted.
(2) Certification Page
For engineering test reports, the test team leader and a professional
engineer should verify that the report is accurate and has been reviewed.
Review by a professional engineer, however, is not mandatory. The
certification page also provides the Agency reviewer with the name of a
knowledgeable person to contact in the event that questions arise.
(3) Table of Contents
A table of contents aids the reviewer in locating pertinent sections of
the report. The pages of the report should be numbered.
(4) Introduction
The introduction to the test report should include the purpose and
background of the test. It should be brief, and the following items should
also be included: (1) the name and location of the affected source; (2) a
description of the source process (e.g., fossil-fuel fired steam generator);
(3) the process production rate (e.g., MW); (4) the fuel category (e.g.,
bituminous coal); (5) an identification of the regulation(s) requiring
monitoring of emissions; (6) if applicable, an identification of the local
agency with jurisdiction over the source; (7) a discussion of pertinent
pre-test protocol; (8) the names of the testing firm, source, and regulatory
19
-------
agency observers present during the testing; (9) the types of monitors used
(mfg. and model, e.g., Munkus Model 666 SO2 monitor); and (10) the dates of the
operational test period.
(5) Summary of Results
The summary of results is the heart of a monitor test report. It should
be tabulated. If the results are juxtaposed with the applicable performance
specifications, interpretation and review will be greatly facilitated. Figure
3-2 illustrates an example "Summary of Results."
(6) Discussion of Results
A discussion of the results should follow the summary of results. This
discussion should be brief and should focus on those tests in which:
(1) anomalous data were obtained, (2) departures from standard test procedures
were observed, or (3) failure to meet the performance specification occurred.
Possible causes for these anomalies, deviations, and/or failures should be
presented and discussed.
(7) Description of Source
The report should contain a brief description of the source, with emphasis
on those aspects of source operation that directly affect the monitor
performance test. For example, if the source is a fossil-fuel fired steam
generator, then the description should address the heat input rate, the fuel,
and the location of the monitor within the effluent handling system. The heat
input rate determines the necessity for monitoring. The fuel, be it gas, oil,
or coal, determines the appropriate continuous monitor span. The continuous
monitor location helps to determine the representativeness of the sample. The
20
-------
FIGURE 3-2
EXAMPLE
SUMMARY OF RESULTS

SO2 Monitor                     Result                Specification
Relative Accuracy               15%                   ≤20%
Response Time                   81 sec                ≤15 min
Calibration Error               mid 2%, high 0.4%     ≤5%
2-hour Zero Drift               0.3%                  ≤2%
2-hour Calibration Drift        0.1%                  ≤2%
24-hour Zero Drift              0.3%                  ≤2%
24-hour Calibration Drift       1.0%                  ≤2.5%
Operational Test Period         June 1-7, 1982        >168 h

NOx Monitor                     Result                Specification
Relative Accuracy               37%                   ≤20%
Response Time                   110 sec               ≤15 min
Calibration Error               mid 2%, high 2.7%     ≤5%
2-hour Zero Drift               0.3%                  ≤2%
2-hour Calibration Drift        1.7%                  ≤2%
24-hour Zero Drift              0.2%                  ≤2%
24-hour Calibration Drift       0.6%                  ≤2.5%
Operational Test Period         June 1-7, 1982        >168 h

O2 Monitor                      Result                Specification
Response Time                   70 sec                ≤15 min
2-hour Zero Drift               0.09% O2              ≤0.4% O2
2-hour Calibration Drift        0.1% O2               ≤0.4% O2
24-hour Zero Drift              0.1% O2               ≤0.5% O2
24-hour Calibration Drift       0.1% O2               ≤0.5% O2
Operational Test Period         June 1-7, 1982        >168 h
21
-------
location of the monitor (i.e., the region or point within the effluent stream
where pollutant emissions are measured) should be addressed within all PST
reports regardless of the source category.
(8) Monitor Description
The test report should include a brief description of the monitors tested.
This is important because, as applied, PST procedures are generally monitor
specific. For example, the calibration error tests for extractive and in-situ
monitors differ radically. Extractive monitors require injection of calibration
gases, while in-situ monitors require the use of cells containing calibration
gases. Another example is the measurement basis of the monitor. Moisture
measurements must be performed concurrently with reference method
determinations during relative accuracy testing of wet basis pollutant gas
monitors.
The monitor description should include the following information: (1) the
make and model of the monitor; (2) the analytical measurement process (e.g.,
infrared, second derivative ultraviolet, galvanic concentration cell); (3) the
moisture basis (wet or dry); (4) the measurement mode (extractive or in-situ);
and (5) the method employed for calibration.
22
-------
The test report should contain a drawing that illustrates the general
locations and relative positions of the monitors and the reference method
sampling probe(s) within the effluent handling system (a schematic is
acceptable). This drawing serves two purposes. It aids the reviewer in
examining anomalous data with reference to process dependent aberrations.
Secondly, a drawing will denote proper or improper monitoring and sampling
locations.
(9) Discussion of Testing Procedures
The test report should include a section that discusses the procedures
employed during the PST. The discussion should be brief and should emphasize
those parts of the test for which no established procedures exist and/or those
parts of the test in which deviations from established testing procedures
occurred. Several examples of procedures that should be discussed are provided
below.
When concentrations for calibration gases are determined using the
reference methods, procedures for sampling gas cylinders should be described.
This is particularly true for Methods 6 and 7, because these methods do not
directly apply to high pressure samples. The pertinent sampling procedures
should be discussed within the "Testing Procedures" section of the report, or a
reference for the procedures should be cited. In the latter case, it would be
helpful to include within the report's appendices a copy of any cited procedure
that is not readily available.
An analogous example would be the documentation of the method used for
determining effluent moisture, if relative accuracy testing was performed for a
wet-basis continuous emission monitor. Since Performance Specification 2 does
23
-------
not prescribe a method for determining moisture, testers may choose among
several methodologies. If Reference Method 4 is not used, then the tester must
provide a description and discussion of the employed method in the test report.
In this regard, Performance Specification 2 explicitly states that the method
used for determining moisture and the calculation procedures employed for
correcting between differing measurement bases (e.g., wet-basis or dry-basis)
are to be reported.
Performance Specification 2 requires the reporting of the method used for
obtaining the continuous emission monitor's concentration data for the relative
accuracy computation. Therefore, a description of this method should be
included within the "Testing Procedures" section of the test report.
Descriptions of monitor-specific PST procedures should also be included
within the "Testing Procedures" section. Two such monitor-specific aspects of
the PST include:
(1) the pressures and flow rates for calibration gas injections, and
(2) the use of mixed calibration gases, e.g., SO2 and O2 in
nitrogen, for monitors that have dual analysis capability.
Finally, the "Testing Procedures" section should address all deviations
from the reference methods that occurred during the performance program. In
this regard, the introductory paragraphs of 40 CFR 60, Appendix A, state:
". . .an owner electing to use. . .techniques [cited as 'subject
to the approval of the Administrator' or as 'or equivalent'] is
responsible for . . .(2) including a written description of the
alternative method in the test report (the written method must be
clear and must be capable of being performed without additional
instruction, and the degree of detail should be similar to the detail
contained in the reference methods); and (3) providing any rationale
or supporting data necessary to show the validity of the alternative
in the particular application."
24
-------
(10) Performance Testing Data Sheets
The test report should, as appropriate, contain documentation as specified
within Figures 2-1, 2-2, 2-3, 2-4, 2-5, and 2-6 of Performance Specification 2
and within Figures 3-1, 3-2, and 3-3 of Performance Specification 3.
(11) Appendices
The test report should include all the raw data that figured into the PST.
In general, these raw data fall into three categories: (a) data from reference
method (and moisture) testing, (b) continuous monitor data records, and (c)
data from calibration activities.
The reference method (and moisture) raw data should be assembled in one
appendix and, as appropriate, should include: sampling data for SO2, NOx, and
H2O, and analysis data for SO2 and NOx.
Copies of the data recorded by the continuous monitor over the entire
operational test period should be included in another appendix.
All calibration data should be included in a separate appendix. These
calibration data may derive from pre- and post-test dry gas meter calibrations,
thermometer calibrations, and sampling and analysis of SO2, NOx, CO2, and O2
calibration gases.
Finally, a separate appendix should be included for copies of procedures
cited within the "Testing Procedures" section.
25
-------
4.0 REVIEW OF TEST PROCEDURES, DATA, AND RESULTS OF
MONITOR PERFORMANCE TESTS FOR SO2 AND NOx MONITORS
This section provides detailed procedures for reviewing the data,
calculations, and results of the various monitor performance evaluation tests
conducted for SO2 and NOx monitors. Included are background information and
review procedures for evaluating calibration standards, calibration error
tests, conditioning period/operational test period requirements, response time
tests, zero and calibration drift tests, and relative accuracy tests.
Background information outlining the specific tests and associated procedural
requirements is included at the beginning of each subsection.
The procedures presented here provide specific guidelines for review of
the reported data and results. In most cases, the recommended review proceeds
from a review of the general results to a review of specific data. Example data
sheets and calculations are provided for each of the monitor performance
evaluation tests.
The performance specifications for pollutant continuous monitors are shown
below.
27
-------
Parameter                      Specification
Accuracy                       ≤20% of the mean value of the reference method test data
Calibration error              ≤5% of each (50%, 90%) calibration gas mixture value
Zero drift (2 h)(1)            ≤2% of span
Zero drift (24 h)(1)           ≤2% of span
Calibration drift (2 h)(1)     ≤2% of span
Calibration drift (24 h)(1)    ≤2.5% of span
Response time                  15 min maximum
Operational period             168 h minimum

(1) Expressed as sum of absolute mean value plus 95% confidence interval of a series of tests.
4.1 CALIBRATION ERROR TEST
4. 1. 1 Background
The calibration error test for pollutant monitors (SO2 and NOx) is a test
to determine the accuracy and repeatability of the monitor response relative to
calibration standards equivalent to 50% and 90% of the instrument span
(normally either 1000 or 1500 ppm). Since the calibration error test involves
measurements at 0%, 50%, and 90% of span, this test also provides a check of
the linearity of the monitor response over its measurement range.
Performance Specification 2 allows the calibration error test to be
performed either in the laboratory or in the field. Thus, the calibration
28
-------
error test is not necessarily conducted during the operational test period.
Performance Specification 2 states that, for an extractive monitor, three
different concentrations of the appropriate pollutant gas must be introduced
into the monitoring system: 0%, and "approximately 50% and 90% of span." The
performance specification also states that no gas concentration may be
introduced twice in succession. The test requires 15 non-consecutive
measurements and 5 measurements with each gas. If the monitoring system is
non-extractive, the mid- and high-range data are obtained by using gas cells
"vhose concentrations are certified by the manufacturer to be functionally
equivalent to these concentrations."
Performance Specification 2 utilizes only the results from the mid-range
(50% span) data and the high-range (90%) data in determining calibration error.
4.1.2 Review of Calibration Error Test Data and Results
The review of the calibration error test should include: (1) a check to
verify that the mid-range and high-range calibration errors have been
calculated properly from the reported data, (2) a check of the adequacy of test
procedures and data obtained, and (3) a verification of the concentration
values of the calibration standards employed.
The determination of calibration error employs the same equations for
computing differences, mean difference, and confidence interval as does the
determination of relative accuracy. (See the example calculations provided in
Section 4.5 of this manual.) The mid-range and high-range calibration errors
are determined separately from the 5 mid-range and the 5 high-range
measurements, respectively. The mean difference must be calculated
algebraically (retaining the signs of the differences). The confidence
29
-------
interval must also be calculated using the algebraic values of the differences.
The correct value for t0.975 used in the confidence interval calculation is
2.776 for 5 measurements. The calibration error is computed as the sum of the
absolute value of the mean of the differences and the confidence interval,
divided by the appropriate calibration gas concentration.
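The arithmetic can be illustrated with a short computation. The following is a minimal sketch in Python (the gas value and monitor readings are hypothetical) of the calibration error calculation just described: algebraic differences, their mean, the 95% confidence interval using t0.975 = 2.776 for 5 measurements, and the sum of the absolute mean difference and the confidence interval divided by the calibration gas value.

    import math

    def calibration_error(gas_value, monitor_readings, t975=2.776):
        # t975 = 2.776 is the t-value for 5 measurements (4 degrees of freedom)
        n = len(monitor_readings)
        diffs = [gas_value - r for r in monitor_readings]  # retain algebraic signs
        mean_d = sum(diffs) / n                            # algebraic mean difference
        s_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
        ci = t975 * s_d / math.sqrt(n)                     # 95% confidence interval
        return (abs(mean_d) + ci) / gas_value * 100.0      # calibration error, %

    # Hypothetical mid-range data: 750 ppm calibration gas, five monitor responses
    print(calibration_error(750.0, [742.0, 748.0, 739.0, 751.0, 745.0]))

The same difference, mean difference, and confidence interval arithmetic underlies the relative accuracy calculation discussed in Section 4.5.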
Based on the experience of the reviewer and on the particular data set
reported, the reviewer must decide either to recalculate the mid-range and/or
high-range calibration error results or to accept the reported result. At a
minimum, the calibration error test data form should be reviewed for
completeness by checking that each calibration point (zero, mid-range, and
high-range) is represented by 5 independent measurements. Conformance with
specified procedure may be checked by verifying that the datum for each
calibration point was obtained in a non-consecutive fashion.
If the calibration error test was conducted during the operational test
period, the reviewer should examine the strip charts for consistency between
the data reported on the calibration error data form and the values indicated
by the charts. Also, the duration of each measurement at 0%, 50$, and 90% of
span, as indicated by the strip chart, should be greater than the reported
response time of the monitor. This may be difficult to determine visually if,
for example, the response time is approximately one minute and the strip chart
speed is approximately 1 inch/h. The concentrations of the mid- and high-range
calibration gas mixtures should be approximately 50% and 90% of the required
instrument span. The reviewer must interpret the acceptable limits established
by the word "approximately," since Performance Specification 2 does not
elaborate on its meaning. (The October 10, 1979 proposed revisions to
Performance Specification 2 require that the high-range gas concentration be
30
-------
between 80% and 90% of span, and that the mid-range gas concentration be
between 45% and 55% of the span.)
The interpretation of the recorded responses to zero gas injections must
be approached with caution, because anomalous data may reflect the test
technique, fundamental limitations of the monitor, or impending monitor
malfunction. Without a firm understanding of monitor operation, it is generally
difficult to draw valid conclusions about monitor performance from what appear
to be anomalous zero data.
The actual values of the calibration standards employed directly affect
the outcome of the calibration error test. Errors in the values of the
calibration gases cannot be distinguished from monitor non-linearity in the
results of the calibration error test. Thus, the true concentration values of
the calibration gases or functionally equivalent concentrations of calibration
cells must be determined. The reviewer is directed to Section 4.5.1.4 of this
manual, where guidelines for checking reported calibration standard values are
provided.
4.2 CONDITIONING PERIOD; OPERATIONAL TEST PERIOD
4.2.1 Background
Performance Specification 2 requires that pollutant continuous emission
monitors be operated for an initial 168-hour conditioning period in a normal
operating manner. The strip chart recorder should be offset approximately 10%
during the conditioning period to facilitate observation of negative drift. At
a minimum, the "normal operating manner" requires that the monitor zero and
span be checked daily, and that the monitor operate continuously over the
168-hour period. Monitor failure during the conditioning period requires
31
-------
reinitiation of the conditioning period once the monitor is repaired. In
contrast, if the source shuts down during the conditioning period, then the
168-hour period is interrupted and then is continued when the source resumes
operation.
The 168-hour operational test period is conducted after the conditioning
period is completed. The operational test period need not immediately follow
the conditioning period. During the operational test period, the response time
test, relative accuracy test, and zero and calibration drift tests are
conducted. The calibration error test is most often conducted in the field
during the operational test period; however, as pointed out in Section 4.1.1,
Performance Specification 2 permits the accomplishment of this test in the
laboratory also.
During the operational test period, the continuous monitor must
continuously monitor the effluent, and the data recorder zero should be offset
approximately 10%. Performance Specification 2 states, "during the 168-hour
operational test period, the continuous monitor shall not require any
corrective maintenance, repair, replacement, or adjustment other than that
clearly specified as required in the operation and maintenance manuals as
routine and expected during a one-week period."
4.2.2 Review of Operational Test Period
The strip chart records (or computer printouts) for the operational test
period should be included in the test report. The reviewer should check the
data record to ensure that the requirements of the operational test period are
met.
32
-------
The source of the monitoring raw data obtained during the operational test
period is usually strip chart records. These records must be accurately and
extensively documented. Furthermore, copies of the strip charts included in
the report must be of high quality. If these conditions are not met, it is
often neither cost effective nor possible to check the accuracy of the reported
monitor data.
Dates and clock times should be accurately documented on the charts. The
time should be indicated either on a 24-hour basis or labelled with either PM
or AM. (Chart paper is available printed with clock times; however, such paper
does not guarantee the accuracy of the printed times.) At least once daily, the
chart should be labelled with the appropriate date - including year. In
addition, all pertinent traces should be marked to indicate the clock time.
Such marking will ensure that the chart recorder is chronologically accurate.
The identification and description of the concentration-dependent axis
must be precise and complete if the chart is to be interpreted correctly. This
should include: (1) the span indicated by the chart paper both with and without
the 10% zero offset; and (2) an indication of whether a zero offset is present.
The reasoning behind these reporting requirements can be illustrated with the
following example. If the chart has a 1000 ppm span, and the zero is offset
10%, then the new span could either become 900 ppm (the entire range was
shifted) or remain at 1000 ppm (the entire scale was proportionally reduced).
This problem can often be resolved by investigating the responses observed
during monitor calibration, but this is not always possible, especially when
only poor copies of the strip charts are available.
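To make the ambiguity concrete, the following minimal sketch (the reading is hypothetical and the function names are illustrative only) converts a chart reading, expressed as percent of full scale, to a concentration under each of the two interpretations in the example above.

    def shifted_range(reading_pct, span_ppm=1000.0, offset_pct=10.0):
        # Interpretation 1: the entire range was shifted, so one chart percent
        # still equals span/100 ppm and full scale now corresponds to 900 ppm
        return (reading_pct - offset_pct) * span_ppm / 100.0

    def compressed_scale(reading_pct, span_ppm=1000.0, offset_pct=10.0):
        # Interpretation 2: the scale was proportionally reduced, so full
        # scale still corresponds to 1000 ppm
        return (reading_pct - offset_pct) * span_ppm / (100.0 - offset_pct)

    reading = 55.0                    # chart reading, percent of full scale
    print(shifted_range(reading))     # 450.0 ppm
    print(compressed_scale(reading))  # 500.0 ppm

The two interpretations disagree by about 10% of the reading, which is why the report must state which convention applies.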
All pertinent traces should be identified. An often overlooked problem
with strip chart documentation is the identification of traces by pen color:
33
-------
this information is lost in black and white copies.
The reviewer should watch for disparities between reported and recorded
times; this may indicate problems with chart speed control. Variations in
chart speed can be determined by measuring time intervals with a rule.
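As a simple illustration, the following sketch (all values hypothetical) performs the rule check just described: the distance measured between two labelled clock times is compared against the distance predicted by the nominal chart speed.

    nominal_speed_in_per_h = 1.0   # nominal chart speed, inches per hour
    elapsed_h = 24.0               # clock time between two labelled marks
    measured_in = 23.4             # chart distance between the marks, by rule

    expected_in = nominal_speed_in_per_h * elapsed_h
    speed_error_pct = (measured_in - expected_in) / expected_in * 100.0
    print(speed_error_pct)         # -2.5% indicates the chart ran slow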
Those traces corresponding to tests for response, calibration error, and
drifts should be clearly identified on the strip charts. Most important, the
strip chart should be marked to indicate those time periods that supplied the
data for the relative accuracy computation.
Other explainable variations in monitor response (e.g., unusual monitor
responses observed while the test team initially establishes the proper
calibration gas flow rates/ injection pressures) should also be clearly
identified to prevent misinterpretation of these events as monitor failures.
Other than the above instances, the continuous monitor should provide an
uninterrupted data record for the entire operational test period. Data records
indicating unexplained periods of zero readings, offscale readings, exactly
constant readings, or widely fluctuating emission values should be considered
suspect by the reviewer, because they suggest instrument malfunction.
34
-------
4.3 RESPONSE TIME TEST
4.3.1 Background
The continuous monitoring requirements of 40 CFR 60.13(e)(2) state, "All
continuous monitoring systems ... for measuring oxides of nitrogen [and] sulfur
dioxide ... shall complete a minimum of one cycle of operation (sampling,
analyzing, and data recording) for each successive 15 minute period." To ensure
that this requirement is met, Performance Specification 2 limits monitor
response time to 15 minutes and defines response time as, "the time interval
from a step change in pollutant concentration at the input to the continuous
monitoring system to the time at which 95 percent of the corresponding final
value is reached as displayed on the continuous monitoring system data
recorder." It is important to note that the regulations clearly indicate that
the response time test is intended to include the entire monitoring system.
For response time tests performed on extractive monitors, gases
representing 0% and 90% of the applicable span concentration are introduced
into the monitor in sequential order, and the times required for the monitor to
attain 95% of the resultant step changes are recorded. Both upscale and
downscale responses are measured three times each. This is readily
accomplished by alternately switching from zero to span conditions. The
response time for a particular step change is the average of the three
measurements, and the reported monitor response time is the slower of the two
average response times.
The calculation procedures of Performance Specification 2 (Paragraph
7.2.1), as distinguished from the performance specifications contained in Table
2-1, address the matter of differing upscale and downscale response times; a
35
-------
maximum difference of 15% relative to the slower response time is specified.
The response time test procedures of extractive and non-extractive
(in-situ) continuous monitoring systems are not the same. For response time
tests of non-extractive monitors, the analyzer is evaluated with respect to a
calibration gas cell and a simulated zero condition. The resultant analyzer
response time is predominately dependent on the time required for the cells to
be placed within the light path of the analyzer. (Most non-extractive monitors
use the attenuation of electromagnetic radiation, i.e., infrared, ultraviolet,
etc., for determining the concentration of a pollutant species.) The total
response time of a non-extractive monitoring system may also be dependent on
whether the analyzer provides instantaneous or integrated values and whether
the analyzer serves to monitor more than one pollutant (or diluent) gas. For
example, many in-situ monitors are designed with "sample and hold" circuitry to
display integrated sampling values. In this situation, the monitor must
complete a full integration cycle before a new emission value is displayed.
Also, some in-situ systems monitor more than one pollutant; alternating, for
example, between SO2 and NOx for short intervals.
The operation of extractive monitors, on the other hand, requires the
transport of the effluent sample to the analyzer. The process of transporting
the sample takes a finite amount of time, which is dependent on the flow rate
of the sampling system and the length of the sampling line. Consequently,
extractive monitors generally show a delay between the time of sample
acquisition and the time of analysis and recording.
To a large degree, measured response times of extractive continuous
emission monitors reflect the time demanded by sample transport. In this
regard the reviewer should recognize that the length of connections between gas
36
-------
cylinders and the tested monitor can affect the response time test result if
these connections are inordinately long. For well calibrated extractive
monitors, failure to meet the response time specification is rare, and when
failure does occur it most frequently is a result of inappropriate testing
technique.
For extractive monitors, response time also may be lengthened because of
physicochemical interactions, such as adsorption or desorption of the pollutant
on or from the sampling interface and/or dissolving and degassing of the
pollutant to or from liquid phases. These effects are dependent on the
duration of the response time tests, the span gas concentration, and the
sampling history of the monitor. Physicochemical interactions can manifest
themselves as widely differing response times for the upscale and downscale
responses. Such problems, however, are generally encountered during calibration
error testing, if such testing is performed in the field.
Finally, for some extractive monitoring systems, a single analyzer may
monitor several sampling locations; the analyzer is time-shared between the
various sampling locations. The time required to complete a full sampling
cycle, which includes all sampling locations, should be included in the
reported response time.
4.3.2 Review of Response Time Test Data Sheet
and Strip Chart Records
The small amount of data generated for a response time test makes the
review straightforward and relatively easy. The reviewer should confirm the
following: (1) the reported span gas concentration is "approximately" 90% of
the applicable span, (2) three upscale and three downscale response time tests
37
-------
[An example completed "Response Time" data sheet appears here in the original. Its entries include: plant and location; monitor; date of test; span gas concentration; monitor span setting; the three upscale and three downscale response measurements (seconds); the average upscale and downscale responses; the system response time (the slower average); and the percent deviation from the slower system average response.]
38
-------
are reported, (3) the reported values for the average response times (upscale
and downscale) are correct, (4) the reported system average response time is
the longer average response time, and (5) the value reported for the "percent
deviation from slower system average response" is correct. The reported
response time is intended to reflect the response time for the entire
monitoring system. It includes the full sampling cycle time if the monitor
samples multiple gases or at multiple locations.
Performance Specification 2 limits average response times (upscale and
downscale) to a maximum of 15 minutes (900 seconds). An additional limit is
placed on the percentage difference between the average response times; the
percentage deviation of the two response times, relative to the slower average
response time, must be less than or equal to 15%. This is summarized by the
following formula:
  % deviation from slower     |average upscale response time - average downscale response time|
  system average response  =  ------------------------------------------------------------------ x 100%
                                             slower average response time
This latter specification (Paragraph 7.2.7, Performance Specification 2)
is not listed with the monitor performance specifications that appear in Table
2-1 of Performance Specification 2.
In the limit of fast response times, the specification regarding
percentage deviation can cause interpretive problems outside the intent of
Performance Specification 2. For example, if the average upscale response is
25 seconds, and the average downscale response is 30 seconds, the percentage
deviation is 17%. In this example, the monitor response is very fast compared
39
-------
to the 15-minute response time specification, but the monitor fails the
specification for percentage deviation. Obviously, this is not the intent of
the percent deviation specification. Accordingly, the proposed revisions
(October 10, 1979) address only the slower response time.
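The response time checks above reduce to a few lines of arithmetic. The following minimal sketch (hypothetical timings) averages the three upscale and three downscale measurements, selects the slower average as the reported system response time, and computes the percent deviation.

    def response_time_results(upscale_sec, downscale_sec):
        avg_up = sum(upscale_sec) / len(upscale_sec)
        avg_down = sum(downscale_sec) / len(downscale_sec)
        slower = max(avg_up, avg_down)  # reported system response time
        pct_dev = abs(avg_up - avg_down) / slower * 100.0
        return avg_up, avg_down, slower, pct_dev

    # Three upscale and three downscale measurements, in seconds
    up, down, slower, dev = response_time_results([78, 84, 81], [88, 92, 90])
    print(slower)  # 90.0 s, well under the 15-minute (900 s) limit
    print(dev)     # 10.0%, within the 15% deviation limit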
The reviewer may attempt to verify the reported response times by
consulting the appropriate strip chart records. The strip charts should bear
the following information: chart speed, labelled points marking the initiation
of each response time test, and labelled points marking the positions of all
maximum deflections. In most cases, reviewing the strip chart records entails
either visual inspection or an approximation with a rule, depending on the
response time of the monitoring system.
In those cases in which the reported response times are close to the
15-minute limit and in which the final value is approached asymptotically, the
strip chart records should be checked with a rule. The most commonly
encountered problem in attempting to verify the reported response time is
either that the point marking the initiation of the concentration step change
is not labelled on the strip chart or that the times for gas injections are not
contained on the data sheets. This situation prevents the reviewer from
verifying the response time data.
There are situations in which the reviewer will be unable to obtain the
response times from the strip charts. For example, if the response time of a
monitor is very fast relative to the chart speed, the response time test will
appear as a spike on the paper. Although the monitor response time cannot
usually be determined from these records, it is often evident that the monitor
meets the response time specification. The strip chart records serve only to
document that the response time test was conducted.
40
-------
If the strip chart is accurately documented and if the trace shows three
uniformly spaced pairs of alternating upscale and downscale responses within a
time period significantly less than 90 minutes (i.e., 15-minute response time
test x 6 tests), it is reasonable to presume that the monitor meets the
response time specification.
41
-------
4.4 ZERO AND CALIBRATION DRIFT TESTS
4.4.1 Background
The drift tests described in Performance Specification 2 provide estimates
of the temporal stability of the monitor's calibration. The zero and
calibration drifts are determined by two independent tests: the 2-hour zero and
calibration drift test, and the 24-hour zero and calibration drift test.
For extractive monitoring systems, the drift tests involve introducing
zero and span gases into the monitoring system. For in-situ monitoring systems
that cannot accept calibration gases, a calibration gas cell "functionally
equivalent to 50 percent of span concentration" is used in lieu of span gas.
In addition, such in-situ monitors must have some means of producing a "zero
condition that provides a system check of the analyzer internal mirrors and all
electronic circuitry including the radiation source and the detector assembly."
The performance specification permits an extrapolative method as an alternative
to simulating a zero condition, if the monitor lacks direct zeroing ability.
For example, three or more calibration gas cells are inserted in the monitor,
and extrapolation from these values provides the zero response. Owners or
operators of these monitors are required to retain a graph which illustrates
the extrapolation operation. The validity of this extrapolative technique is
based upon the following assumptions: (1) the monitor response over the entire
range of extrapolation can be described by a linear equation; (2) the three or
more points are well spaced over the instrument's measurement range; (3) the
concentrations of the gas cells are known to a high degree of accuracy; and (4)
the extrapolation to the zero value is not affected by span drift. Evaluation
of these extrapolative procedures must be conducted on a case-by-case basis.
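As a minimal illustration of the extrapolative technique (the gas cell values below are hypothetical, and a simple linear fit is only one reasonable choice), a line fitted through three or more gas cell responses can be extrapolated to a zero-concentration response:

    import numpy as np

    # Hypothetical gas cell concentrations (percent of span) and the
    # corresponding monitor responses.
    cell_conc = np.array([30.0, 50.0, 80.0])
    response = np.array([30.4, 50.1, 79.8])

    # Fit response = slope * concentration + intercept; the intercept is
    # the extrapolated zero response (valid only if assumption (1) above,
    # linearity over the range of extrapolation, holds).
    slope, intercept = np.polyfit(cell_conc, response, 1)
    print("extrapolated zero response:", intercept)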
43
-------
4.4.1.1 2-Hour Drift Test
The 2-hour zero and calibration drift tests require 15 drift determinations
from data collected at 2-hour intervals. This particular test is intended to quantify drift on a
short term basis. For example, the test is capable of identifying diurnal
variations in monitor response.
The important data obtained from the 2-hour drift tests are the
differences between consecutive zero and span measurements. Adjustments to the
monitor during the 2-hour drift tests are not allowed. A minimum of 16 data
sets of zero and span measurements should be acquired, because 15 differences,
determined at 2-hour intervals, are required for the test. Thus, the first
data set is the starting point for obtaining the first difference data.
The data are not allowed to represent non-consecutive measurements; thus,
in the event that a series of 2-hour determinations is interrupted, a new set
of data must be generated as a starting point before more difference data may
be obtained.
4.4.1.2 24-Hour Drift Tests
The purpose of the 24-hour drift tests is to reveal variations in monitor
calibration which may occur on a day-to-day basis. Measurements are conducted
at 24-hour intervals until 7 drift determinations are obtained. Since 8
consecutive sets of zero and span readings are necessary to provide 7 drift
determinations, the 24-hour zero and calibration drift tests normally define
the time of the operational test period.
According to Performance Specification 2, manual adjustments to the
monitor during the 24-hour drift tests are permitted only at 24-hour intervals,
44
-------
unless the monitor manufacturer specifies a shorter interval for manual
adjustments. Automatic adjustments, on the other hand, are allowed at any
time. Specifically, the 24-hour drift tests require the following procedure at
24-hour intervals: (1) measurement of the zero value; (2) adjustment of the
zero response to the correct value, if necessary; (3) measurement of the span
value; and (4) adjustment of the span response to the correct value, if
necessary. The chart recorder zero value must be sufficiently offset during the
tests to allow for the determination of negative drift. Performance
Specification 2 states that the offset should be approximately 10%. However,
for practical reasons smaller offsets may be validly employed as long as all
drifts are recorded on scale.
Some monitor vendors suggest that the adjustments not be made at 24-hour
intervals; they would rather allow the drift to accumulate for the entire
operational test period, provided that the total drift does not exceed the
respective performance specification drift limitations. If this procedure is
employed, then the 24-hour zero and calibration drift values should be
calculated in the same manner as in the 2-hour drift tests.
For the 24-hour drift tests, the zero is adjusted after the zero
measurement but before the span measurement. Consequently, the procedure for
conducting the 24-hour drift tests automatically removes the effects of zero
drift from the span measurements. The measured span drifts are, therefore,
equivalent to calibration drifts, and no further manipulation of the data is
required.
4.4.2 Review of Zero and Calibration Drift Tests
In reviewing the drift test data and results for both the 2-hour and
24-hour tests the reviewer should perform the following checks: (1) check the
45
-------
[Example data sheet: 2-Hour Zero and Calibration Drift — source and location; monitor; for each determination: begin and end times, date, zero reading, zero drift (ΔZero), span reading, span drift (ΔSpan), and calibration drift (ΔSpan - ΔZero); annotations tally the drift determinations and data sets obtained. Reported results: Zero Drift = [|Mean Zero Drift| + C.I. (Zero)] / [Instrument Span] x 100; Calibration Drift = [|Mean Span Drift| + C.I. (Span)] / [Instrument Span] x 100.]
46
-------
concentration value of the calibration gas or calibration gas cell used to span
the monitor, (2) check for the required number of zero and span measurements,
(3) check the individual zero and calibration drift calculations, (4) check the
reported zero and calibration drift determinations against the individual drift
measurements, and (5) check the values recorded on the data sheet against the
strip chart records.
4.4.2.1 2-Hour Drift Tests
For extractive monitors, the 90% of span calibration gas is specified for
use in the 2-hour drift tests. For non-extractive monitoring systems, the
calibration gas cell is specified to be equivalent to 50% of span. The
reviewer should check the report for conformance with these specifications.
The 2-hour drift tests require 15 sets of drift determinations, in which
each data set is composed of initial and final zero and span readings.
Consecutive drift determinations may utilize the same data set (i.e., a
particular zero and span value can be used as the final measurements in one
drift determination and as the initial measurements in the following drift
determination). Thus, at a minimum, 16 sets of zero and span measurements must
be acquired. An additional set of measurements is required each time the drift
test sequence is interrupted. It should be noted that 2-hour drift testing may
span and even include the time period when 24-hour drift measurements are made,
as long as no adjustments to the monitor are performed during the 24-hour drift
test for that particular day. If, on the other hand, adjustments occur during
the affected 24-hour drift test, then the 2-hour drift testing is interrupted
and the testing sequence must be reinitiated.
47
-------
Deviations from exact 2-hour timing are not uncommon. Because Performance
Specification 2 does not address this subject, the significance of timing
deviations must be assessed on a case-by-case basis by the reviewer. In this
regard, it is recommended that the reviewer base his judgement on whether the
drift data indicate that the monitor's calibration is stable over short time
periods.
For each drift determination (initial and final zero and span
measurements), the zero drift is determined as the final zero reading minus the
initial zero reading. Similarly, the span drift is determined as the final
span reading minus the initial span reading. The calibration drift is then
determined as the span drift minus the zero drift. The reviewer should check
to see that the above subtractions are performed consistently, and that
algebraic values are employed in determining the calibration drift.
The mean zero drift is determined from the algebraic sum (retaining signs)
of the 15 zero drift determinations. The confidence interval for the zero
drift test is also calculated by employing algebraic values of the measured
zero drifts. The value of t_0.975 for use in the confidence interval
calculation is 2.145 for 15 measurements. (A detailed example confidence
interval calculation is included in the "Review of Relative Accuracy
Calculations" section of this manual.) The reported 2-hour zero drift should be
calculated as:
zero drift = [ |Mean Zero Drift| + Confidence Interval ] / [Instrument Span] x 100
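These operations reduce to a few lines of code. The Python sketch below (illustrative only; it assumes the caller supplies the t_0.975 value, e.g., 2.145 for 15 drift determinations, and uses the confidence interval formula detailed in the "Review of Relative Accuracy Calculations" section):

    import math

    def reported_drift(drifts, span, t_value=2.145):
        # drifts: the algebraic (signed) drift determinations.
        # span: instrument span, in the same units as the drifts.
        n = len(drifts)
        mean_drift = sum(drifts) / n                  # retain signs
        sum_sq = sum(d * d for d in drifts)
        ci = t_value * math.sqrt(n * sum_sq - sum(drifts) ** 2) \
             / (n * math.sqrt(n - 1))
        return (abs(mean_drift) + ci) / span * 100.0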
48
-------
The reported 2-hour calibration drift is determined from the 15
calibration drift determinations in exactly the same manner as the zero drift
is calculated.
4.4.2.2 24-Hour Drift Tests
The review of the 24-hour zero and calibration drift tests is very similar
to the review of the 2-hour drift tests. The same requirements apply to the
value of the calibration standard used to span the system for both drift tests.
The value of the zero offset employed should also be determined before
reviewing data acquired from the strip chart records.
The 24-hour drift test requires 8 sets of zero and span measurements to
facilitate 7 zero and calibration drift determinations. The procedure
prescribed by Performance Specification 2 requires adjustments to the zero and
span values following the respective zero and span measurements. Provided that
this procedure is followed, the individual zero drift values are calculated as
the zero value measured at the end of the 24-hour interval, minus the correct
zero value. In the same manner, the calibration drift is the span value at the
end of the 24-hour interval minus the correct span value. The adjustment of
the zero value after the zero reading and before the span reading automatically
removes the effects of zero drift on the span measurements. Thus, no further
correction of the span measurements is necessary.
For reasons discussed in Section 4.4.1.2, rather than performing the
24-hour zero and span adjustments, testers often allow drift to accumulate. If
this procedure is employed, the 24-hour zero and calibration drifts are
determined in exactly the same manner as for the 2-hour drift tests.
-------
The calculation procedure for determining the reported 24-hour zero and
calibration drifts from the individual drift measurements is the same as that
described for the 2-hour drift tests. The mean zero drift, mean calibration
drift, and respective confidence intervals are determined using algebraic
values. (Pertinent, detailed example calculations are included in the "Review
of Relative Accuracy Calculations" section of this manual.) The value of t_0.975
for use in the confidence interval calculation is 2.447 for 7 drift
determinations. The reported 24-hour zero and calibration drifts are computed
as:
zero drift (or calibration drift) = [ |Mean Zero Drift (or Mean Span Drift)| + Confidence Interval ] / [Instrument Span] x 100
4.4.3 Review of 2-Hour and 24-Hour Drift Test Strip Chart Records
The reviewer should confirm the accuracy of the span and zero values
reported in the drift test data sheets by comparing them with corresponding
values on the strip chart record. In addition, the reviewer should verify that
all drift traces are on scale. Traces which reflect pegging at either end of
the strip chart will necessarily invalidate a drift test, because the true
magnitude of the pegged response cannot be determined. Finally, the reviewer
should check that the times reported on the data sheet match the times
indicated on the strip chart recording for the zero and span measurements.
50
-------
4.5 RELATIVE ACCURACY TEST
4.5.1 Background
The relative accuracy test is performed to assess the adequacy of the
calibration technique of the continuous monitor. The test entails the
comparison of pollutant concentrations determined by the continuous monitor to
concentrations concurrently determined by EPA Reference Methods 6 and/or 7.
When relative accuracy tests are conducted on continuous monitors that measure
pollutant concentration on a wet basis, these two reference methods must be
supplemented by an Agency approved method for determining the effluent stream
moisture content. The result from the moisture determination is used to place
all the effluent measurements on a consistent moisture basis.
The reported relative accuracy result is calculated from three terms. One
term is the algebraic mean difference observed between the monitor and
reference method concentration measurements. This mean difference term may be
interpreted as the absolute bias or inaccuracy of the monitor measurements. A
second term, the 95% confidence interval, is the precision estimate associated
with the determination of the mean difference. The remaining term used in the
computation is the mean concentration of the reference method determinations.
The sum of the confidence interval and the absolute value of the mean
difference, divided by the mean reference method concentration, affords the
relative accuracy, which is expressed as a percentage.
Performance Specification 2 calls for 9 Method 6 measurements when tests
are conducted to determine the relative accuracy of an SO2 monitor. A
determination of the relative accuracy of an NOx monitor likewise requires 9
measurements of the pollutant concentration, but in this case each measurement
is the average of three Method 7 determinations each conducted over
51
-------
approximately three-minute intervals. Thus, 27 determinations of the NOx
concentration in the effluent are required for the relative accuracy test.
Since the determination of relative accuracy involves a considerable
amount of reference method sampling and analysis, the amount of attendant data
is exceptionally large, relative to the other monitor performance tests. The
effort required in reviewing relative accuracy tests is likewise increased
proportionally.
The review of the relative accuracy test can be divided into several major
areas: (1) review of the relative accuracy calculations; (2) determination of
continuous monitor values; and (3) review of reference method sampling data and
results. The following subsections in this report treat each of these topics
separately.
4.5.2 Review of Relative Accuracy Calculations
The most common error in monitor performance evaluation test reports is
the incorrect calculation of relative accuracy from the reported data.
Therefore, if there is any doubt about the value reported for relative
accuracy, the reviewer should perform the entire calculation as a check.
Because of the frequency of this error, a detailed discussion and an example of
the relative accuracy computation are provided below.
The reviewer should first check the relative accuracy data form for the 9
data sets, required as the minimum number of sampling runs by Performance
Specification 2. In many cases, a reference sample is inadvertently lost or
invalidated. This situation is not uncommon because much sampling is involved:
9 S>0 samples; 27 NO samples; and in some cases, 9 H_0 samples. The reviewer
may decide to accept less than 9 data sets, provided the results clearly
52
-------
indicate compliance with (or failure of) the relative accuracy specification.
The review of the relative accuracy test data form should next focus on
the times reported for the reference method sampling. Performance
Specification 2 states that no more than one run may be conducted in any one
hour. The interpretation of this requirement calls for flexibility. Deviations
of several minutes do not violate the intent of this specification.
The reviewer should verify that the monitor and the concurrent reference
method values are accurately reported and the differences have been determined
accurately. According to Performance Specification 2, these differences are
computed by subtracting the reference method value from the concurrent monitor
value. Thus, positive differences arise when the monitor reads higher than the
reference method values. There will be no effect on the value of the
calculated relative accuracy result if the subtraction operation is performed
in the reverse manner, provided the subtraction operation is performed
consistently throughout the calculation. However, the interpretation of the
individual data is reversed if the subtraction operation is reversed, i.e.,
negative differences will occur when the monitor reads higher than the
reference method sampling results.
After checking that the differences are correctly determined, the reviewer
should compute the mean reference method value, the mean of the differences,
the confidence interval, and finally, the relative accuracy.
(1) Mean Reference Method Value
The mean Reference Method value is simply the arithmetic average of the 9
reported pollutant determination results. It is employed in the denominator of
the relative accuracy calculation to express the accuracy on a relative basis.
53
-------
[Example data sheet: Relative Accuracy Determination (SO2 and NOx) — source and location; nine runs (three samples per run for NOx); for each run: reference method value, monitor test value, and difference (ppm); mean of the differences; 95% confidence interval; Relative Accuracy = [|Mean of the differences| + 95% confidence interval] / [Mean reference method value] x 100, reported in % for SO2 and % for NOx; space to explain and report the method used to determine integrated averages.]
54
-------
(2) Absolute Value of the Mean of the Differences
The algebraic differences between concurrent Reference Method values and
monitor values are summed and averaged. Summing algebraic values requires the
retention of the sign of the differences. Consequently, the computed mean
difference may be a negative or positive number, or even zero. (It should be
noted that the sum of the differences result also enters into the calculation
of the 95% confidence interval, and therefore, repetitive calculations can be
avoided by recording this value for later use.)
(3) The 95% Confidence Interval (C.I._0.95)
The determination of the 95% confidence interval is the greatest source of
error. The formula used in the determination is given by:

C.I._0.95 = [ t_0.975 / ( n sqrt(n - 1) ) ] x sqrt( n Σ(x_i²) - (Σx_i)² )

Where:
C.I._0.95 = 95 percent confidence interval;
n = number of data points (in this example, n = 9);
x_i = algebraic value of a measurement (some versions of the performance
specifications define x_i as the absolute value of a measurement; this is
not correct);
Σx_i = sum of the n measurements;
Σ(x_i²) = sum of the squares of the n measurements;
t_0.975 = t score for checking both upper and lower limits with 95 percent
certainty, for n - 1 degrees of freedom.
55
-------
For the relative accuracy test, the operations required for the
calculation of the confidence interval apply strictly to the data represented
by the differences. Again, attention should be placed on summing algebraic
values (as opposed to absolute values) in computations of the 95% confidence
interval.
A common error in the calculation of the 95% confidence interval is the
use of an incorrect t_0.975 value. The reviewer should verify that the
calculation employs the correct value. In the majority of cases, the error is
the result of the use of an adjacent value from the tabulated t scores
(t_0.975). A detailed example confidence interval calculation is provided in
Figure 4-5.
(4) Relative Accuracy
The reported relative accuracy should be calculated as:
Relative Accuracy = [ |Mean of Differences| + C.I._0.95 ] / [Mean Reference Method Value] x 100
It should be emphasized that the absolute value of the mean of the
differences is used in calculating the relative accuracy. As a final step, the
reviewer should check that the calculated relative accuracy value is the same
as the value listed in the "Summary of Results" section of the test report.
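The full computation can be captured in a short Python sketch (illustrative only; the paired measurements must share a consistent moisture basis and units, and t_0.975 = 2.306 for 9 data sets):

    import math

    def relative_accuracy(monitor, reference, t_value=2.306):
        # monitor, reference: paired concentration measurements (9 runs).
        n = len(monitor)
        d = [m - r for m, r in zip(monitor, reference)]  # monitor minus reference
        mean_d = sum(d) / n                              # algebraic mean difference
        ci = t_value * math.sqrt(n * sum(x * x for x in d) - sum(d) ** 2) \
             / (n * math.sqrt(n - 1))
        mean_ref = sum(reference) / n
        return (abs(mean_d) + ci) / mean_ref * 100.0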
56
-------
[Figure 4-5. Example 95% confidence interval calculation (handwritten worksheet).]
-------
4.5.3 Determinations of Continuous Monitor System Values
The relative accuracy test entails comparing the concentrations determined
by the continuous monitor to concentrations determined by the reference method.
The following factors should be considered when quantifying the continuous
monitor system measurements from the chart records: (1) the time interval
(duration) for comparison of monitor system data to sampling data, (2) the
method of integrating or averaging the continuous monitor system data, and
(3) the response time of the monitor system.
Performance Specification 2 (Paragraph 7.2.1) requires the continuous
monitor data to be determined by integrating or averaging the pollutant
concentrations over each of the time intervals concurrent with each reference
method testing period. The data form (Figure 2-3) provided in Performance
Specification 2 indicates that an hourly average of the pollutant concentration
should be used for comparison with reference method data. However, it is
generally recognized that this hourly averaging note is inconsistent with the
sampling times observed for EPA Methods 6 and 7: 20 minutes and approximately 3
minutes, respectively. Accordingly, the average concentration measured by the
monitor ordinarily is determined over time intervals corresponding to reference
method sampling times.
If strip chart records served as the source for the average monitor
concentrations obtained during the relative accuracy determinations, those
portions of the trace corresponding to the tests should be bracketed and
labelled according to run number.
The method of integrating or averaging the continuous monitor system
output over the time interval corresponding to the reference method tests may
58
-------
affect the accuracy determination. If the monitoring system provides a
numerical output or integrated averages, then the problem is usually not
significant. However, equal intervals must be used to average the data. If
the monitoring system provides a continuous record (strip chart), then
averaging the data may be more difficult, particularly where large variations
in the pollutant concentration with time are encountered. In this situation,
taking as many readings as the resolution of the data record permits and
averaging the readings may provide good results.
In many cases, the strip chart record can be integrated by simple visual
inspection.
Performance Specification 2 states that the method for performing this
integration is to be reported. The reviewer should check that the reported
method can be applied to the monitoring data record to afford results
equivalent to those contained in the relative accuracy test data sheet. The
reviewer may find that the method for determining the integrated averages is
not reported. In addition, cases may arise when disparities occur between the
recorded averages and the reported averages. For both these possibilities, the
reviewer should weigh the significance of the report deficiencies against the
impact of these deficiencies on the relative accuracy result. For example, if
the relative accuracy result would change from 1% to 13% as a consequence of
errors in figuring the integrated averages from the continuous emission
monitor data record, the change would be considered insignificant because the
indicated performance is unchanged relative to the 20% specification.
59
-------
4.5.4 Review of Reference Method Data and Results
This section details the review of the reference method sampling results
employed in the relative accuracy determination. In this portion of the
monitor performance test review, the reviewer attempts to accomplish several
objectives: (1) to determine whether appropriate sampling and analytical
procedures were employed; (2) to determine whether sufficient data are included
in the documentation of the testing; (3) to verify the accuracy and correctness
of calculation procedures employed in determining the final test results; and
(4) where possible, to establish the validity of the data and results by
comparison with other parameters, e.g., fuel analysis data.
The agency observer bears the primary responsibility for ensuring that
proper testing procedures were employed. If an agency observer was present
during the monitor performance test, then the observer's report and field notes
should be reviewed and compared with the data and results from the reference
method sampling.
The reviewer cannot always ascertain that correct sampling and analytical
procedures were employed. In some cases, the range of values for particular
data or intermediate results can be bracketed. Results or data that fall
outside of reasonable and normal ranges require additional scrutiny. The
skilled reviewer is often able to determine quickly which parts of the sampling
data and results require a closer check or additional documentation.
The following subsections treat the subjects of SO2, NOx, and moisture
determinations separately. Background information for the novice reviewer and
procedures for reviewing field sampling data, laboratory analytical results,
and calculations of gas concentrations are included in each section, as
appropriate.
60
-------
4.5.4.1 Reference Method 6 - SO2
Background
EPA Reference Method 6 is specified for determining dry basis SO2
concentrations in stationary source effluent streams. The application of the
method essentially entails sampling a measured volume of the effluent through
impingers containing hydrogen peroxide, where SO2 quantitatively reacts to
sulfate ion, which is determined titrimetrically using the barium-thorin
method. When applied to monitor relative accuracy tests, Reference Method 6
sampling is conducted with the probe positioned adjacent to the monitor's
probe. This positioning ensures that the comparative results will be based
upon measurements performed on equivalent samples. The barium-thorin method
does not necessitate a laboratory environment; consequently, titrations are
often conducted in the field.
Review of Field Sampling Data
Raw data sheets documenting the Reference Method 6 sampling should be
included in the monitor test report. Data sheets should be included for all of
the sampling runs reported on the relative accuracy data sheet. The reviewer
should check these data to ensure that the required sampling procedures were
followed, that sufficient data were reported, and that the reported data are
within reasonable ranges. Each raw data sheet should provide the information
necessary to calculate the volume of effluent sampled, corrected to dry
standard conditions. This volume, in conjunction with the laboratory
analytical data, is used to calculate the SO2 concentration of the sample.
61
-------
Each raw data sheet should contain the following identifying information
(see example sheet): (1) plant name and location; (2) date of the test; (3)
sampling location; (4) initials of the sampler(s); (5) run number; (6) material
sampled for (in this case, SO2); (7) barometric pressure; and (8) a number
identifying the meter box used and its associated value of Y (the dry gas meter
calibration coefficient).
Reference Method 6 specifies that the dry gas meter reading, flow rate,
and dry gas meter temperature be recorded at 5-minute intervals. Accordingly,
these readings and respective times should be included on the raw data sheets
that document sampling.
For Method 6, the volume of effluent sampled at dry standard conditions is
calculated according to the following equation:

V_m(std) = K Y V_m P_bar / T_m

Where:
V_m(std) = Dry gas volume expressed at standard conditions, dscm (dscf).
K = 0.3858 °K/mm Hg (17.64 °R/in. Hg).
Y = Dry gas meter calibration coefficient.
V_m = Dry gas volume expressed at meter conditions, dcm (dcf).
P_bar = Barometric pressure, mm Hg (in. Hg).
T_m = Dry gas meter temperature, °K (°R).
62
-------
The variables above are discussed in the paragraphs that follow. Where
possible, guidelines are provided for checking or verifying that the reported
values are within reasonable and expected limits.
The reviewer should note that the reported standard sample volume has the
greatest potential for error.
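A sketch of the volume correction (English units; the function and variable names are illustrative only) makes the dependence on each reported variable explicit:

    def vm_std_dscf(vm_dcf, y, p_bar_in_hg, t_m_deg_f):
        # vm_dcf: metered dry gas volume, cubic feet.
        # y: dry gas meter calibration coefficient.
        # p_bar_in_hg: barometric pressure, in. Hg.
        # t_m_deg_f: average dry gas meter temperature, deg F.
        K = 17.64                        # degR per in. Hg
        t_m_abs = t_m_deg_f + 460.0      # convert to degrees Rankine
        return K * y * vm_dcf * p_bar_in_hg / t_m_abs

    # For example, 0.75 dcf metered at Y = 1.002, 29.6 in. Hg, and 75 F
    # corrects to about 0.73 dscf.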
(1) Flow Rate
Reference Method 6 states that the flow rate should be maintained at 1.0
liter per minute ± 10% (2.1 standard cubic feet per hour). The reviewer
should check that the reported flow rate measurements fall within the
acceptable range. Sampling rates that exceed 1.1 L/min are undesirable,
because the decreased residence time of the sample within the impingers
containing the hydrogen peroxide may result in diminished SO2 absorption
efficiency, which would be ultimately reflected as a negative bias in the
measured concentrations of SO2. (As a note, information is unavailable
regarding the magnitude of the flow rate at which this bias becomes significant;
thus, the reviewer should be somewhat liberal in interpreting the technical
validity of results obtained from sampling at higher than prescribed flow
rates.) Low flow rates, on the other hand, will affect the quality of the data
only if the amount of SO2 sampled is insufficient for analysis.
If the reviewer has doubts concerning the validity of the reported flow
rates, he may check them by comparison to the average sampling rate, which is
easily computed by: (1) subtracting the initial dry gas meter reading from the
final reading, (2) correcting the results by multiplying by the meter
calibration coefficient, and (3) dividing the corrected volume by the reported
sampling time.
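In code, this cross-check might look like the following (a sketch; the meter is assumed to read in liters, so readings in cubic feet would first be converted at 28.32 L per cubic foot):

    def avg_sampling_rate_lpm(meter_initial_l, meter_final_l, y, minutes):
        # Corrected sample volume divided by elapsed sampling time.
        corrected_volume_l = (meter_final_l - meter_initial_l) * y
        return corrected_volume_l / minutes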
63
-------
(2) Meter Volume (V_m)
A sample volume requirement is neither specified within Reference Method 6
nor within Performance Specification 2. The sample volume and sample time
requirements specified within the subparts that describe source performance
tests are generally adopted for conducting monitor performance evaluations.
For example, for fossil-fuel fired steam generators, Subpart D specifies a
minimum sampling time of 20 minutes and a minimum sample volume of 0.02 dscm
(0.71 dscf).
When the volume is measured in cubic feet, the dry gas meter readings
should be reported to a precision characterized by three digits to the right of
the decimal point (e.g., 68.035 dscf). This will ensure that the sample volumes
are measured to three significant figures.
The reported metered sample volume will, in all likelihood, differ from
the sample volume corrected to standard conditions, reflecting volume
dependence upon the meter temperature, barometric pressure, and dry gas meter
calibration coefficient. Thus, the sample volume at dry standard conditions,
rather than the meter volume, should be employed to assess whether the minimum
sample volume requirements have been met. However, the reviewer should
recognize that sample volumes less than the minimum may be technically valid.
(3) Meter Temperature (T_m)
The measured dry gas meter temperature is used for correcting the metered
sample volume to an equivalent sample volume at standard conditions of
temperature and pressure (dry basis).
64
-------
The dry gas meter temperature should be reported for each set of 5-minute
entries. Reporting the temperature to the nearest whole degree is of
sufficient precision. Small errors in the temperature measurement have little
effect on the calculated SO2 concentration because the measured temperature is
converted to an absolute basis for all calculations. For example, a
temperature of 68° F (20° C) becomes 528° R (°R = degree Rankine, absolute
temperature scale in English units) or 293° K (°K = degree Kelvin, absolute
temperature scale in metric units). The temperature that figures in the volume
computation will always be known to three significant figures.
The reviewer should check that the reported average dry gas meter
temperature has been correctly determined. The range of meter temperatures
usually encountered is between ambient temperature and 15° F above ambient
temperature at the meter location.
(4) Dry Gas Meter Calibration Coefficient (Y)
The dry gas meter calibration coefficient (Y) relates the volume measured
by the subject dry gas meter to the true volume measured by a gas meter
standard. This coefficient is employed in computing the sample volume
corrected to standard conditions. The reviewer should check to see whether the
value of Y used in the calculations is consistent with the value appearing on
the calibration data form.
Reference Method 6, Paragraph 5.1.1, states that dry gas meters are to be
calibrated before their initial use and after every use in the field. In order
to ensure completeness, the test reports should include data sheets documenting
the initial calibration and the post-test calibration check for every dry gas
meter used in the monitor performance test.
65
-------
The data sheet documenting the initial calibration of the dry gas meter
should include the data for three (3) calibration runs. The rotameter readings
should each be approximately 1 L/min (2.1 dscfh), and the dry gas meter volume
readings should each be approximately 0.5 dscf or greater. The regulations
require that no individual Y may differ more than 2% from the mean Y. The
reviewer should check for conformance with these specifications.
The data sheet documenting the post-test calibration check should include
data for at least two runs. The rotameter readings should each be
approximately 1 L/min (2.1 dscfh), and the dry gas meter volume readings should
each be approximately 0.3 dscf or greater. Again, no individual Y should
differ from the mean Y by more than 2%. The mean value of Y obtained from the
calibration check should differ by no more than 5% from the mean value of Y
obtained from the initial calibration.
If the 5% specification is not met, the dry gas meter is required to be
recalibrated as per the initial calibration. This would necessitate a third
data form for a dry gas meter. The regulations state that the Y (of the two
initial calibrations) that "yields the lower gas volume for each test run is to
be used." This requirement is intended to apply to source performance tests
rather than to monitor performance tests. An average Y value for monitor
performance tests may be more reasonable for these situations.
(5) Barometric Pressure (P_bar)
The barometric pressure is used in computing the sample volume corrected
to standard conditions. The reviewer can reasonably judge the accuracy of the
reported barometric pressures in light of the range of pressures that could
ordinarily be expected to occur. "Rules of thumb" can be offered: (1) the
66
-------
[Example data sheet: Gaseous Pollutant Sampling Data — plant; plant location; date; sampling location; initials; run number; probe heater used (yes/no); filter used (yes/no); material sampled for; sample box number; meter box number; barometric pressure; ambient temperature; tabulated clock time, dry gas meter reading, flow rate, and meter temperature; total gas meter volume; average meter temperature; clean-up bottle number; and comments. Reviewer annotations confirm that the subtraction was correctly performed and that an adequate sample volume (> 0.71 dscf) was obtained.]
67
-------
range of pressures ordinarily observed at sea level is between 29 and 31 in.
Hg; and (2) for locations above sea level, the pressure decreases approximately
1.0 in. Hg for every 1000 feet. (This rule underestimates the barometric
pressure above 3000 ft by about 2.5%.)
68
-------
Laboratory Analyses
Accurate laboratory analytical results are essential to the validity of
relative accuracy tests because these analyses determine the masses of SO2
sampled. However, unlike the data generated during sampling, laboratory data
are not as abundant and, because of this, they do not permit quite so thorough
a review. Nevertheless, data forms documenting laboratory analyses for SO2
should accompany the relative accuracy sections of monitor test reports. This
practice should ensure reporting completeness and promote quality and adherence
to Reference Method 6. In addition, thorough documentation will permit the
identification and correction of any errors that occur.
SO2 Analyses
The mass of SO2 absorbed during sampling is calculated using the following
equation:

m_SO2 = 32.03 N (V_t - V_tb) V_soln / V_a

Where:
m_SO2 = Mass of SO2 absorbed, mg.
32.03 = Equivalent weight of SO2, mg per milliequivalent (meq) of titrant.
V_t = Average titrant volume, mL (two titrations).
V_tb = Volume of titrant required for the blank, mL.
N = Normality of the barium titrant, meq/mL.
V_soln = Total volume of solution in which the SO2 sample is contained
(most often, 100 mL), mL.
V_a = Volume of the sample aliquot titrated (most often 20 mL), mL.
A review of the analytical phase of Method 6 requires data for all the terms
listed above; thus, at a minimum, all these terms should be recorded on the
69
-------
laboratory data sheets. Sample data sheets accompany this section (see Figure
4-7). The pertinent data will be discussed individually.
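A sketch of the mass calculation (names illustrative; the default solution and aliquot volumes follow the method's usual practice):

    def mass_so2_mg(n_meq_per_ml, v_t_ml, v_tb_ml,
                    v_soln_ml=100.0, v_a_ml=20.0):
        # Equivalent weight of SO2: 32.03 mg per meq of titrant.
        return 32.03 * n_meq_per_ml * (v_t_ml - v_tb_ml) * v_soln_ml / v_a_ml

    # The SO2 concentration then follows by dividing the absorbed mass by
    # the standard sample volume V_m(std), with unit conversions as needed.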
(1) Normality (N)
The normality of the barium perchlorate, Ba(ClO4)2, (or barium chloride,
BaCl2) solution used in titrating the samples should be reported to three
significant figures; the value should lie within the range of 0.00980 N to
0.0102 N. Reported concentrations that fall outside this range will not affect
the results of the analyses as long as sufficient titrant volumes are used.
(2) Volume of Titrant for Blank
The volume of titrant required for the blank (V_tb) should always be reported.
The value should be less than 0.5 mL, or reagent contamination is suspect.
Without identification of the source and identity of the contamination, it is
not possible to assess accurately the impact on the analyses.
(3) Average Titrant Volume
Each SO2 sample is titrated twice, and the volumes for the two titrations
are averaged. Data for all titrations should be recorded on laboratory data
sheets. For each set of titrations, the volumes should agree within 1 percent
or 0.2 mL, or the titrations are invalid according to Reference Method 6. In
summary, three titrant volumes should be reported for each analysis: the
volumes from the two titrations and their average.
(4) Volume of Solution (V_soln) and Volume of Aliquot (V_a)
The volume of solution in which the SO2 sample is contained and the volume
of the aliquot titrated are usually 100 mL and 20 mL, respectively. If these
values are consistent throughout the analyses, it is necessary only to state
70
-------
[Figure 4-7. EPA Reference Method 6 (SO2) Analysis Worksheet — job; analysis location; plant name/unit; analyst; date; columns for each replicate titration: H2SO4 volume (mL), Ba2+ titrant volume (mL), and Ba2+ normality (meq/mL).]
-------
this fact on the laboratory data sheets. If sample dilution is required due to
high SO2 concentrations, the pertinent dilution factors should be reported.
4.5.4.2 Reference Method 7 - NOx
Background
EPA Reference Method 7 is applicable for determining dry basis NOx
concentrations in stationary source effluent streams. In applying this method,
an effluent sample is drawn via a heated probe into an evacuated flask
containing a solution that oxidizes the NOx to nitrate ion. The nitrate ion is
then determined spectrophotometrically using the phenoldisulfonic acid (PDSA)
method. According to convention, the NOx concentration is reported as NO2.
For testing NOx monitor relative accuracy, the inlet of the Reference
Method 7 probe is placed adjacent to the monitor's probe to ensure that both
monitor and reference method are applied to equivalent samples. A relative
accuracy test consisting of 9 runs will include 27 Reference Method 7 samples,
since 3 samples are defined as one run. NOx analysis is rarely (if ever)
performed in the field during the operational test period because of the
extended period required for complete oxidation of the NOx and because of the
need for laboratory hood space.
Review of Method 7, NOx - Data and Results
If a complete review is to be performed, the documentation of Reference
Method 7 sampling must provide sufficient data to recalculate all NOx
concentrations. The novice should note that a substantial amount of data
manipulation is required to recalculate the concentrations of NOx.
72
-------
In reporting the concentration of NOx, the greatest potential for error
lies in calculating the sample volume, i.e., the volume of effluent sampled on
a dry basis and corrected to standard conditions, V_sc. The second major source
of error occurs in the analysis phase, specifically where the spectrophotometer
calibration factor is determined. The review should focus on these two aspects
of the report as a minimum.
The following discussions treat individually all the raw data that enter
into the determination of NOx concentrations. Emphasis is placed on the
following: (1) the precision requirements of these data; (2) factors affecting
these data; and (3) the normal range of values of these data.
Data Reporting
In order to be complete, each raw data sheet should contain the following
identifying information (see example sheet): (1) the source name and location;
(2) the sampling date; (3) the sampling location, e.g., stack breeching, or
duct; (4) the initials of the sampler; (5) run and sample numbers documented on
the data sheet; and (6) the material sampled for (in this case, NOx).
If the reviewer is to recalculate the sample volumes, the data that appear
on the raw data sheet should include the following tabulated information and
data: (1) the run number; (2) the flask identification; (3) the volume of the
flask, V_f; (4) the volume of absorbing solution contained in the flask, V_a;
(5) the initial temperature of the flask, t_i; (6) the initial relative
pressure of the flask (as measured with a mercury manometer), P_i; (7) the
initial barometric pressure, P_bar,i; (8) the final temperature of the flask,
t_f; (9) the final relative pressure of the flask (measured with a manometer),
P_f; and (10) the final barometric pressure, P_bar,f.
73
-------
In order to compute the volume of effluent sampled, it is first necessary
to place the initial and final temperatures and pressures on an absolute basis.
Absolute temperatures are obtained by adding the appropriate constant to the
measured temperatures, and absolute pressures are obtained by adding the
associated barometric pressures to the measured relative pressures. These and
the other data above are then used to compute the standard sample volume using
Equation 7-2 in Reference Method 7:
V_sc = K_1 (V_f - V_a) [ (P_f / T_f) - (P_i / T_i) ]

Where:
V_sc = Sample volume at standard conditions (dry basis), mL.
K_1 = T_std / P_std = 0.3858 °K/mm Hg, metric units (17.64 °R/in. Hg,
English units).
V_f = Volume of flask and valve, mL.
V_a = Volume of absorbing solution, mL.
P_f = Final absolute pressure of flask, mm Hg (in. Hg).
P_i = Initial absolute pressure of flask, mm Hg (in. Hg).
T_f = Final absolute temperature of flask, °K (°R).
T_i = Initial absolute temperature of flask, °K (°R).
P_std = Standard absolute pressure, 760 mm Hg (29.92 in. Hg).
T_std = Standard absolute temperature, 293 °K (528 °R).
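The sketch below (illustrative only; all pressures and temperatures are assumed to already be on an absolute basis) evaluates Equation 7-2:

    def v_sc_ml(v_f_ml, v_a_ml, p_f_abs, t_f_abs, p_i_abs, t_i_abs,
                metric=True):
        # Pressures: mm Hg (metric) or in. Hg; temperatures: K or degR.
        k1 = 0.3858 if metric else 17.64     # T_std / P_std
        return k1 * (v_f_ml - v_a_ml) * (p_f_abs / t_f_abs - p_i_abs / t_i_abs)

    # e.g., a 2000-mL flask with 25 mL of absorbing solution, evacuated to
    # 50 mm Hg absolute at 300 K and vented to 750 mm Hg at 295 K:
    # v_sc_ml(2000, 25, 750, 295, 50, 300) -> roughly 1800 mL.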
(1) Volume of the Flask and Valve
Reference Method 7 specifies the use of two-liter (2000 mL) flasks for the
collection of NOx samples. The flasks must be equipped with 3-way stopcock
valves in order to allow for evacuation and isolation of the sample. The flask
74
-------
volume (Vf) is the combined volume of the flask and the valve. Reference
Method 7 states that the flask volume should be reported to the nearest 10 mL.
Thus, the flask volume should be known at a minimum to three significant
figures.
The volumes of the flasks will not all be 2000 mL; the reviewer will
encounter a range of volumes from 1950 to 2050 mL.
The reviewer should note that the accuracy and precision of the NOx
determination is not affected by the flask volume per se; rather, it is the
resultant standard sample volume that potentially affects the quality of the
result.
(2) Volume of the Absorbing Solution
A relatively small measured volume of absorbing solution is placed within
the sampling flask in order to react with the NOx and thus trap it for the
subsequent laboratory analysis. The volume occupied by the absorbing solution
reduces the flask's available volume and, as a result, the calculation of the
volume of effluent sampled, corrected to standard conditions (V_sc), requires
that the volume of absorbing solution (in mL) be subtracted from the flask
volume.
Reference Method 7 specifies that the volume of absorbing solution be 25
mL. In most cases, the reviewer will find the absorbing solution volume
reported to the nearest mL. Such precision is easily obtained by dispensing
the absorbing solution with a graduated cylinder.
The reviewer may safely assume that the volume of absorbing solution may
vary between 20 mL and 30 mL without any effect on the quality of the result.
Volumes of absorbing solution smaller than 20 mL may be incapable of completely
75
-------
reacting high concentrations of NOx. Volumes larger than 30 mL, while not
affecting the quality of results, may interfere with the stated methodology of
the laboratory phase of the analysis because larger volumetric flasks may be
necessary.
(3) Initial Relative Flask Pressure
The initial flask relative pressure is measured with a mercury manometer.
Since the manometer indicates the pressure of the flask relative to barometric
pressure and since the flask is under vacuum, the initial flask pressure must
be a negative quantity. If the same vacuum pump is used throughout the
sampling, the reported initial flask pressures are generally very close in
magnitude.
Under conditions of fair weather and elevation not far above sea level,
the initial flask relative pressures will normally fall between -26 and -28 in.
Hg (-660 mm Hg to -711 mm Hg). This range of values reflects the specified
criterion that the initial absolute pressure of the flask must be less than 3
in. Hg (75 mm Hg). Adequate precision is maintained if the pressure is
reported to the nearest 0.1 in. Hg (2 mm Hg).
(4) Initial Absolute Pressure of the Flask
This quantity is required by Method 7 to be less than 3 in. Hg (75 mm Hg).
Adherence to this criterion ensures that an adequate volume of effluent will be
sampled.
(5) Final Flask Pressure
The final flask relative pressure, like the initial flask relative
pressure, is measured with a mercury manometer and is thus a relative quantity.
Unlike the initial flask pressure, the final flask pressure may be lower
-------
(negative), greater (positive), or equal to barometric pressure. The sign and
magnitude of the final flask pressure are dependent on the following
parameters: (1) the difference between the stack pressure during sampling and
the barometric pressure before clean-up; (2) the pressure of the effluent
stream sampled; (3) the moisture content of the effluent stream; (4) the
difference between the absolute temperatures during sampling and before
clean-up; (5) the evacuated pressure of the flask before sampling; and (6) the
presence of leaks in the flask. All relative pressures should be reported to
the nearest 0.1 in. Hg (2 mm Hg).
When checking these relative pressure data, the reviewer should recognize
first that Method 7 does not address the acceptable range of the data. Second,
the reviewer should nonetheless recognize that anomalies in the data may
indicate possible problems with sampling and sample integrity. The impact of
such problems on the test results generally cannot be quantified; therefore,
the reviewer should interpret with caution. The following paragraphs provide
example anomalies.
Improper venting of the flasks to the effluent will result in low final
relative flask pressures, e.g., -10 in. Hg. While not specifically addressed
within Method 7, such situations simply mean that less effluent is sampled,
which may potentially affect the precision of the determination.
Final flask relative pressures consistently 0.0 in. Hg in magnitude may be
indicative of leaks. This criterion is somewhat difficult to apply because of
the potential for coincidence.
Finally, large positive final relative pressures should be viewed with
suspicion because, for example, without restraining clips the design of the
flask/valve assembly does not permit the build-up of large relative pressures
77
-------
(greater than 2 in. Hg). Thus, if the Method 7 raw data sheets showed two sets
of relative pressures, e.g., flask pressures greater than 2 in. Hg and flask
pressures of 0.0 in. Hg, some of the flasks could have vented themselves to the
atmosphere. The reviewer would have to assess the significance of venting on a
case-by-case basis.
(6) Final Absolute Pressure of the Flask
This quantity is obtained in a fashion analogous to the initial absolute
pressure discussed above. As such, the dependencies, precision, and range of
values will reflect the final flask relative pressure and the final barometric
pressure.
(7) Barometric Pressure (P_bar)
The reviewer can reasonably judge the accuracy of the reported barometric
pressures in light of the range of pressures that could ordinarily be expected
to occur. "Rules of thumb" can be offered: (1) Pressures ordinarily observed
at sea level range between 29 and 31 in. Hg; and (2) For locations above sea
level, the pressure decreases approximately 1.0 in. Hg for every 1000 feet.
(8) Initial Temperature of the Flask (t_i)
The values reported for the initial flask temperatures for the most part
reflect the sampling environment, and, as such, must be viewed as being
dependent on the weather, the time of day, the time of the year, and the
sampling location at the source. Since flasks are ordinarily cleaned up in a
laboratory environment, the values reported for the final flask temperatures
usually reflect room temperatures.
78
-------
The reviewer should exercise his own common sense in reviewing the reported
temperatures. For example, a reported temperature of 140° F for either the
initial or the final flask temperature should be viewed with suspicion because
of the obvious limitations of the human body under such conditions.
(9) Volume of Effluent Sampled, Corrected to Standard Conditions (V_sc)
This value may not be directly accessible to the reviewer, in which case
it would be necessary to compute it from the raw data. The value for V_sc
should lie between about 1000 mL and 2000 mL. Values lower than 1000 mL may be
indicative of incomplete venting of the flasks to the effluent or the use of
smaller flasks than those specified.
79
-------
Laboratory Analysis
The laboratory analysis for NO provides the mass of NO collected. To
A X
ensure adherence to the Reference Methods and to promote data quality, it is
recommended that data forms documenting the laboratory analyses for NO
accompany the relative accuracy sections of monitor test reports.
NOx Analyses
The mass of NOx absorbed during sampling is calculated using the following
equation:

m = 2 K_c A F

Where:
m = Mass of NOx (computed and reported as NO2) absorbed, µg.
K_c = Empirically determined spectrophotometer calibration factor,
µg NO2/unit absorbance.
A = Measured sample absorbance.
2 = Aliquot factor.
F = Dilution factor (used only if necessary).
The review of the majority of the NOx laboratory data is straightforward
and primarily involves checking the accuracy of simple multiplication
operations. Errors that may have occurred in the analytical phase, however,
cannot be detected by such checks. The parameter K_c should be well
scrutinized. The parameters important to the analyses for NOx and which should
be included in the test report are discussed individually below.
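The multiplication check itself is trivial to script (a sketch; the factor of 2 assumes the standard 25-mL aliquot of a 50-mL sample):

    def mass_no2_ug(k_c, absorbance, dilution_factor=1.0, aliquot_factor=2.0):
        # k_c: spectrophotometer calibration factor, ug NO2 per unit absorbance.
        return aliquot_factor * k_c * absorbance * dilution_factor

    # e.g., with K_c = 723.1 and A = 0.350:
    # mass_no2_ug(723.1, 0.350) -> about 506 ug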
81
-------
(1) Mass of NOx Sampled
The laboratory data sheets should include the mass of NOx (in µg) for
each acquired sample. The reporting convention for the analysis is that NOx is
reported as NO2. The reviewer should ensure that a one-to-one correspondence
exists between the number of reported masses and the number of reported NOx
concentrations.
(2) Aliquot Factor (2)
The analytical phase of Reference Method 7 specifies the analysis of a
25-mL aliquot from the total sample volume of 50 mL. Thus, 50 mL/25 mL = 2. If
Reference Method 7 is strictly followed, the aliquot factor should be implicit
in all the determinations and needs to be shown only in a sample calculation.
The analytical results are not necessarily affected by aliquot factors
other than "2." If factors other than "2" are used, they should be reported on
the laboratory data sheet, and the deviation from prescribed procedure should
be addressed in the text of the report.
(3) Dilution Factor (F)
Occasions arise when sample absorbance is outside the range of the
spectrophotometer calibration. The sample must be precisely diluted with water
in order to determine the NOx concentration in a valid manner. If dilution is
required, the appropriate dilution factor should be reported on the laboratory
data sheet. If proper laboratory technique is applied, dilution should not
affect the quality of the analysis result.
82
-------
(4) Measured Absorbance of the Sample (A)
For each sample, the absorbance as read from the spectrophotometer should
be reported. The values reported will ordinarily range from 0 to approximately
1.5. The absorbances of all samples should be less than the absorbance of the
most concentrated calibration standard. The results from samples not meeting
this criterion may be biased high.
(5) Spectrophotometer Calibration Factor
The single most important element in the determination of the mass of NOx
sampled is the spectrophotometer calibration factor, K_c. In the analysis phase
of Reference Method 7, standards are prepared which cover the linear operating
range of the spectrophotometer. A numerical method is then employed to fit the
resultant concentrations and associated absorbances to a line, and the slope of
the line, K_c, is subsequently used for assigning concentrations to the
absorbances indicated by NOx samples. The value for K_c is computed using the
following equation:
K_c = 100 (A_1 + 2 A_2 + 3 A_3 + 4 A_4) / (A_1² + A_2² + A_3² + A_4²)

Where:
A_1 = Absorbance of a 100-µg NO2 standard.
A_2 = Absorbance of a 200-µg NO2 standard.
A_3 = Absorbance of a 300-µg NO2 standard.
A_4 = Absorbance of a 400-µg NO2 standard.
It is highly recommended that all the pertinent absorbances for the
determination of the spectrophotometer calibration factor be included within
83
-------
the data sheets that document the laboratory analysis of NOx. These
absorbances should be correctly labelled so that the value of K_c can be
determined by the reviewer. Alternatively, the calibration data can be
presented graphically with the calibration points and their associated
absorbances well labelled. Example data sheets follow (see Figures 4-8 and
4-9).
The reviewer should check the linearity of the reported calibration. This
can be accomplished either through the use of a calculator having a linear
least squares program or graphically. (The reviewer should note that the value
of Kc afforded by linear least squares will not necessarily equal the value
obtained through the use of the equation above. In addition, the reviewer is
reminded to include the origin data in the linear least squares computation.)
Ideally, all calibration points should lie on the computed calibration
line; this will be obvious if a calibration is checked graphically. Using the
calculator, the ideal case would be indicated by a correlation coefficient
value of 1.00. (This function is generally included as part of the linear
least squares program.)
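A sketch of such a check follows, assuming an ordinary least-squares fit with the origin included as a data point; the function and data are illustrative, not taken from the method.

```python
# Hedged sketch of the linearity check: an ordinary least-squares fit
# (origin included as a data point) and the correlation coefficient a
# reviewer's calculator would report. Data are hypothetical.

def least_squares(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    syy = sum(y * y for y in ys)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    r = (n * sxy - sx * sy) / ((n * sxx - sx**2) * (n * syy - sy**2)) ** 0.5
    return slope, intercept, r

# Origin plus the absorbances of the 100-400 ug standards (hypothetical):
absorbances = [0.0, 0.142, 0.281, 0.419, 0.556]
masses = [0.0, 100.0, 200.0, 300.0, 400.0]
slope, intercept, r = least_squares(absorbances, masses)
print(round(slope, 1), round(intercept, 2), round(r, 5))  # ~720, -1.28, ~1.000
```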
The current version of Reference Method 7 does not address the precision
of the spectrophotometer calibration; thus, neither established guidelines nor
specifications are available regarding the limits to which calibration points
may deviate from the calibration line before calibration is considered to be of
insufficient quality. The reviewer, nevertheless, should recognize the
critical importance of the spectrophotometer calibration factor in ultimately
providing the NOx concentration result: the accuracy and precision (i.e., the
quality) of the spectrophotometer calibration is a direct factor in determining
the reported NOx concentration.
FIGURE 4-8. Example laboratory data sheet for the determination of NOx (two
pages). [Handwritten data sheet, including absorbance entries such as 0.574
and 0.581, not legibly reproduced in this copy.]
FIGURE 4-9. Example graphical determination of the spectrophotometer
calibration factor: absorbance (0.100-0.600) plotted against concentration
(µg NO2/50 mL), with the fitted slope Kc = 723.1. [Graph not legibly
reproduced in this copy.]
It is not uncommon for the PDSA method, as ordinarily practiced, to show
±5% deviations from the calibration line. Indeed, higher deviations (e.g.,
±10%) are not rare, especially at the lower concentration calibration points.
Consequently, in assessing the adequacy of the reported NOx concentration data,
the reviewer must weigh the significance of spectrophotometer calibration in
light of the precision commonly associated with the method and the reported
relative accuracy status of the subject continuous emission monitor. Clearly,
the review of the spectrophotometer calibration factor is approached on a
case-by-case basis.
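One hedged way to quantify those deviations during review is sketched below; the absorbances, masses, and Kc value are the hypothetical ones used earlier, not data from any actual test.

```python
# Sketch of the case-by-case deviation check: percent deviation of each
# calibration point from the mass predicted by the fitted slope, judged
# against the +/-5% to +/-10% behavior typical of the PDSA method.

def point_deviations(absorbances, masses, kc):
    """Percent deviation of each standard from the mass predicted by Kc * A."""
    devs = []
    for a, m in zip(absorbances, masses):
        predicted = kc * a
        devs.append(100.0 * (m - predicted) / m)
    return devs

for dev in point_deviations([0.142, 0.281, 0.419, 0.556],
                            [100.0, 200.0, 300.0, 400.0], 716.8):
    print(f"{dev:+.1f}%")   # all within +/-2% here; hypothetical data
```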
4.5.4.3 Moisture Sampling/Moisture Correction Factors
Background
Pollutant concentration measurements are expressed as the ratio of the
mass of the pollutant of interest to the total gas sample volume. Wet basis
measurements include the quantity of water vapor in the effluent sample as part
of the total volume of gas (denominator). Dry basis measurements do not
include water vapor as part of the total gas volume. Water vapor dilutes the
pollutant concentration; thus, for a given sample, the dry basis pollutant
concentration will be greater than the corresponding wet basis concentration.
Many continuous monitors measure gas concentrations on a wet basis. In
contrast, Reference Methods 6 and 7 provide pollutant concentration
measurements on a dry basis. Thus, it is often necessary to apply a moisture
correction factor so that all measurements are on the same moisture basis. A
moisture correction factor can be applied to either the monitoring data or the
reference method data. However, it is more consistent to apply the correction
to the reference method data because these determinations of SO2, NOx, and H2O
all entail classical, i.e., wet-chemical methods, as distinguished from
instrumental methods, e.g., continuous monitors. Nevertheless, equally valid
results and interpretations will be provided if corrections are applied to the
continuous emission monitor data.
The moisture correction factor is determined from the results of effluent
moisture measurements. The appropriate monitor sample stream moisture content
is not always the same as the effluent moisture content. All in-situ monitors
provide wet basis measurements; therefore, the moisture content of the stack
gases must be determined for relative accuracy tests. Extractive monitoring
systems, on the other hand, may contain conditioning systems that remove
moisture from the sample prior to analysis; thus a dry basis measurement may be
afforded. Extractive monitors also are available that measure on a wet basis.
Finally, there are extractive monitors with sample handling/conditioning
systems that remove a portion of the water vapor from the sample stream. Thus,
the concentrations indicated by such a monitor are somewhere between dry-basis
and wet-basis. For purposes of relative accuracy testing, the moisture basis
is generally assumed, for convenience, to be either one moisture basis or the
other. The choice is often dictated by the moisture basis of the associated
diluent monitor, if such a monitor is used conjunctively. Accordingly, the
performance specifications require that such pollutant-diluent combinations
must be either total wet basis or total dry basis.
Moisture Sampling
The regulations are not explicit regarding the methodology to be employed
in determining moisture correction factors. Performance Specification 2
states, "Determine the correction factor by moisture tests concurrent with the
reference method testing periods. Report the moisture test method and the
correction procedure employed." Since the moisture correction factor is an
integral part of the relative accuracy calculations, pertinent raw data, sample
calculations, and a brief description of methodology should be included in the
test report. Also, it is necessary to have a record of each correction factor
used, identified according to which reference method test it complements.
Reference Method 4, "Determination of Moisture Content in Stack Gases,"
describes two sampling methods - a reference method and an approximation
method. The reference method employs Greenburg-Smith impingers, whereas the
approximation method uses midget impingers. Reference Method 4 is not often
used in monitor performance evaluations, because the same information can be
obtained through the use of midget impingers, which are also employed in
Reference Method 6. Used instead are modified versions of the approximate
Method 4, acceptable for monitor performance tests. This modified methodology
is described in "An Alternative Method for Stack Gas Moisture Determination,"
by Jon Stanley and Peter R. Westlin. Therein, two alternative sampling
procedures are described: (1) a modified approximate Method 4, and (2) a
modified Method 6 sampling train used to measure moisture content and SO2
concentration simultaneously. For both sampling procedures, midget impingers
in an ice bath are used as condensers and are followed by a silica gel trap.
The sampling train is weighed to the nearest 0.01 gram before and after
sampling, thus affording a gravimetric determination of the moisture. The
weight gain of the sampling train is used to calculate the moisture content of
the sample stream according to the following equations:
V_wc = 1.336 x 10^-3 W

Where: V_wc = Volume of water vapor condensed, corrected to standard
conditions, scm.
W = Total weight gain of the condenser and silica gel trap assembly, g.

V_m(std) = V_m Y (293/T_m)(P_m/760)

Where: V_m(std) = Dry gas volume measured by the meter, corrected to standard
conditions, dscm.
Y = Meter calibration coefficient, dimensionless.
P_m = Absolute meter pressure, mm Hg.
T_m = Absolute temperature at the meter, °K.
V_m = Dry gas volume measured by the meter, dcm.

B_ws = [V_wc / (V_wc + V_m(std))] x 100

Where: B_ws = Water vapor content of the stack gas, percent by volume.
The above equations apply to any condenser type moisture method. An
example data sheet for condenser methods follows.
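A minimal sketch of these calculations is given below, assuming standard conditions of 293 °K and 760 mm Hg (consistent with the 1.336 x 10^-3 constant); all input values are hypothetical.

```python
# Minimal sketch of the condenser-method moisture calculation, following
# the equations above. Standard conditions assumed: 293 K, 760 mm Hg.

def moisture_percent(weight_gain_g, vm_dcm, y, p_mm_hg, t_kelvin):
    v_wc = 1.336e-3 * weight_gain_g                               # scm
    vm_std = vm_dcm * y * (293.0 / t_kelvin) * (p_mm_hg / 760.0)  # dscm
    return 100.0 * v_wc / (v_wc + vm_std)                         # Bws, % v/v

# Example: 2.45 g gained, 0.030 dcm metered at 750 mm Hg and 298 K, Y = 1.00
print(round(moisture_percent(2.45, 0.030, 1.00, 750.0, 298.0), 1))  # ~10.1 %
```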
Review of Moisture Sampling Data and Results
The reviewer may evaluate the reported moisture sampling results either by
checking the raw data and calculations or by checking that the reported results
fall within reasonable limits. In particular, the reviewer should check the
volume of the effluent sampled, corrected to dry standard conditions, and the
subsequent calculation of the sample stream moisture content. The reviewer is
directed to the discussion of the review of Method 6 sampling data for
procedures explaining the check of effluent sample volume data.
The reviewer should check that all reported moisture contents fall within
reasonable limits. For determinations of moisture in effluent streams from
fossil-fuel fired steam generators, nomographs are available which allow
estimations of the percentage of water vapor in the effluent, based upon: (1)
the fuel used, (2) % excess air, (3) free water in the fuel, (4) ambient
temperature, and (5) relative humidity (see Fig. 4-10). Even if measured values
are not available for all of the parameters listed above, the reviewer should
at least be able to bracket the moisture content by using estimated ranges for
those various parameters.
It should be emphasized that the moisture content of the stack gases
cannot exceed the moisture content at saturation conditions. For example,
after the effluent passes through wet scrubbers, stack gases may contain
entrained water droplets. In this instance, any condenser type moisture method
may yield erroneously high results.
Such situations are addressed within Reference Method 4, which states that
if water vapor saturation is suspected within the effluent stream, or if the
effluent stream contains entrained water droplets, then, using the effluent
stream temperature as a reference, moisture should also be determined
simultaneously either by: "(1) using a psychrometric chart and making
appropriate corrections if stack pressure is different from that of the chart,
or (2) using saturation vapor pressure tables." Method 4 states that the lower
of the two moisture determination results shall be considered correct.
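Where a numerical cross-check is wanted, the saturation moisture can be estimated from a vapor-pressure correlation; the Antoine constants below are one published set for water (roughly 1-100 °C) and merely stand in for the saturation vapor pressure tables the method cites. All inputs are hypothetical.

```python
# Hedged sketch of the saturation cross-check: estimate saturation
# moisture at stack temperature and keep the lower of the two results,
# as Method 4 directs. The Antoine correlation is an illustrative
# stand-in for the method's saturation vapor pressure tables.

def saturation_moisture_pct(stack_temp_c, stack_pressure_mm_hg=760.0):
    # Antoine equation for water, P_sat in mm Hg
    p_sat = 10 ** (8.07131 - 1730.63 / (233.426 + stack_temp_c))
    return 100.0 * p_sat / stack_pressure_mm_hg

measured_pct = 14.2                             # condenser method, % H2O
saturation_pct = saturation_moisture_pct(52.0)  # e.g., wet scrubber exit
print(round(min(measured_pct, saturation_pct), 1))  # the lower value governs
```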
FIGURE 4-10. Nomograph for estimating the moisture content of the effluent
from fossil-fuel fired steam generators. Scales include % excess air, % H2O in
flue gas from combustion only, % H2O in flue gas from free H2O only, % free
H2O in fuel (by weight), relative humidity, and ambient temperature.
[Nomograph not legibly reproduced in this copy.]
Moisture Correction Factors
Once the moisture content of the stack gas is determined, the moisture
correction factor should be calculated and applied to the reference method test
results. The following generalized equation is applied to adjust the reference
method results from a dry basis to a wet basis:

(reference method concentration)_wet =
    (reference method concentration)_dry x (100 - B_ws)/100
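A one-line sketch of the adjustment, with hypothetical values:

```python
# Sketch of the dry-to-wet adjustment above; a ~10% moisture content
# reduces the reference method concentration by ~10%.

def dry_to_wet(conc_dry, moisture_pct):
    return conc_dry * (100.0 - moisture_pct) / 100.0

print(dry_to_wet(500.0, 10.1))  # e.g., 500 ppm dry -> 449.5 ppm wet
```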
4.5.4.4 Calibration Standards and Calibration
Background
Calibration is the single most important parameter evaluated during
performance tests of continuous monitors. Indeed, with the exception of the
response time test, all the remaining tests address calibration. Thus,
calibration accuracy and precision are quantified by the calibration error
test. Calibration stability is quantified on a short-term basis by 2-hour
drift tests and on a long-term basis by 24-hour drift tests. Finally, the
adequacy of the routine calibration procedure, embodied within the calibration
error test, is assessed by the relative accuracy test.
The absolute accuracy of calibration is a critical factor for the
calibration error and relative accuracy tests. Put simply, a poorly calibrated
instrument is less likely to pass these two tests, because both place strong
emphasis on the monitor's response relative to known inputs, e.g., calibration
gases (or cells) or effluent samples with concentrations established using
reference methods. (The drift tests, on the other hand, entail only measuring
monitor response to a constant input; the tests do not explicitly address how
accurately the monitor should measure this constant input.) Fundamental to a
properly calibrated monitor are calibration standards known to a high degree of
accuracy and precision.
Performance Specification 2 describes the necessary calibration standards
employed during continuous monitor performance tests. For extractive monitors,
gaseous pollutant standards are required which have concentrations of
approximately 50 and 90 percent of instrument span. (The current Performance
Specification 2 does not amplify the interpretation of the word
"approximately". The revisions proposed October 10, 1979, may be used for
guidance in this regard. Accordingly, the proposed revisions specify that the
gas concentrations be 45 to 55 and 80 to 90 percent of span.) "Calibration gas
cells whose concentrations are certified by the manufacturer to be functionally
equivalent to these gas concentrations" are also permitted in lieu of gas
standards. The performance specification provides assurance that the gas
standards are not affected by temporal variations in concentration by stating
acceptable components of the gas mixtures, e.g., sulfur dioxide in nitrogen,
and by implicitly specifying an expiration date for the analyzed gas mixtures:
"triplicate analyses of gas mixtures shall be performed within two weeks prior
to use using Reference Method 6 for SO and Method 7 for NO ."
C, A
The concentration assigned to the calibration gas is the average of the
three analyses. The criterion for acceptable analytical results is that "each
sample test result must be within 20 percent of the averaged result or the
tests shall be repeated." This criterion gives testers considerable latitude
with regard to the quality of data afforded by determinations of calibration
gas concentrations; however, this latitude is attended by the risk that the
inaccuracy and imprecision of the calibration gas analyses will compromise the
results of the calibration error and relative accuracy tests.
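The check itself is simple enough to sketch; the function name and the results shown are hypothetical.

```python
# Sketch of the acceptability check for triplicate calibration gas
# analyses: each result within 20 percent of the average, else repeat.

def triplicate_ok(results, tolerance_pct=20.0):
    avg = sum(results) / len(results)
    within = all(abs(r - avg) <= tolerance_pct / 100.0 * avg for r in results)
    return avg, within

print(triplicate_ok([480.0, 510.0, 495.0]))  # (495.0, True): max dev ~3%
```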
More stringent criteria for acceptable calibration gas analytical results
are contained in the revisions to Performance Specification 2 proposed October
10, 1979. Many testers have adopted these criteria in order to ensure that
performance tests are not biased by gaseous calibration standards of poor
quality.
The performance specifications do not specify the appropriate sampling
methodology to be employed in sampling calibration gas cylinders. The
reference methods were developed for sampling effluent streams at close to
ambient pressures, while cylinders containing calibration gases are at high
pressures. Procedures for sampling from gas cylinders have not yet appeared in
the Federal Register; however, P. R. Westlin and J. W. Brown of the U.S. EPA
have offered several sampling methods in their paper, "Methods for Collecting
and Analyzing Gas Cylinder Samples" (Source Evaluation Society Newsletter, Vol.
3, No. 3, September 1978).
Some modifications of the reference methods are often either possible or
necessary, because calibration gases are of defined composition and are also
free from impurities. For example, the analysis of sulfur dioxide calibration
gases using Reference Method 6 requires no final purge because the absence of
sulfur trioxide (SO3) in the sample obviates the need of the impinger
containing isopropanol (IPA). (However, a purge is necessary if an IPA
impinger is used because SO2 is absorbed by IPA.) The analysis of nitric oxide
(NO) calibration gases using Reference Method 7 requires modification because
of the necessity for having oxygen (O2) present in the flask to ensure complete
reaction of the NO to nitrate. Since the cylinders are oxygen-free, sampling
must be interrupted before the flask is totally filled with the calibration
gas, and oxygen is then admitted by venting the flask to the atmosphere. This
procedure is described in greater detail within Reference Method 7, Paragraph
4.1.2.
Performance Specification 2 allows the calibration error test to be
performed either in the laboratory or in the field. While it could be argued
that the laboratory option would be an unwise choice, in practice the choice of
when and where the calibration error test is performed is ordinarily
inconsequential. Accordingly, before initiating the operational test period,
testers ordinarily will optimize the monitor's calibration to ensure the
successful completion of the performance test. The lost time and effort
incurred by an aborted or unsuccessful relative accuracy test are sufficiently
great that this test will not be started until the testers are confident that
the monitor passed or could pass the calibration error test.
Calibration gases are also required by Performance Specification 3 for
evaluations of diluent (oxygen or carbon dioxide) monitors. The reviewer
should note that the specifications for concentration ranges and triplicate
analyses are similar to those within Performance Specification 2. However, the
calibration standards have a diminished role because of the lack of
specifications for calibration error and relative accuracy within Performance
Specification 3.
Because of the vital role calibration standards play in performance tests
of continuous monitors, it is highly recommended that test reports contain data
that document and support the concentrations assigned to the standards. For
concentrations established using Reference Method 6 or 7, raw data sheets for
sampling and analysis should be included. If cells are used during the
performance test, the report should include documentation of cell concentration
certification.
Review of Calibration Gas Analysis
If possible, the reviewer should check the composition of the calibration
gases to ensure that the appropriate diluent gas (nitrogen, air) is used.
The reviewer should check the consistency of the reported calibration gas
concentrations by comparing the average values from triplicate analyses with
the values reported for use in the calibration error test, drift tests, and
response time test. The triplicate analyses of the calibration gases should be
checked in order to confirm the accuracy of the reported average
concentrations.
The reviewer should verify that the three individual analyses are each
within 20% of the reported average. This criterion is easily attainable. The
reviewer should recognize that analyses approaching the limit of this criterion
may have an effect on the observed monitor performance. However, without
additional analytical data, the effect cannot be accurately quantified. For
example, if a test report indicated that the monitor did not meet the
specification for calibration error and if the calibration gas analyses showed
poor precision, the reviewer could reasonably recommend that the calibration
error test be repeated with more precisely known calibration gas standards.
Finally, the reviewer should check that the dates of the analyses of the
calibration gases are within two weeks of the dates of their use in the tests
of the affected monitors. Deviations from this criterion should be judged on a
case-by-case basis allowing a liberal interpretation.
The complete review of the analyses of the calibration gases requires
review of the reference method sampling and analytical data. The reviewer is
directed to the "Review of the Reference Method Data and Results" section of
the Relative Accuracy discussion of this manual for guidance in reviewing
reference method sampling data, analytical data, and calculations.
Calibration Gas Cell Certification
Continuous monitors do not necessarily provide for the introduction of
calibration gases for tests of drift, calibration error, or response time.
Used instead are cells containing concentrations of gases functionally
equivalent to
the gas concentrations required in tests of extractive monitors. According to
Paragraph 6.1.2 of Performance Specification 2 and Paragraph 6.2.3 of
Performance Specification 3, the concentrations of gases in the cells are
required to be certified by the manufacturer. Analyses of the gases in the
cells are not possible. Moreover, there are no simple procedures available for
verifying the gas concentrations in the cells. The regulations do not specify
what constitutes manufacturer certification.
As a minimum, a statement by the instrument manufacturer or operator
regarding the gas concentrations of the cells should be included in the test
report. The reviewer should check the consistency of these values in those
tests that employ the cells.
4.5.4.5 Monitor Location and Reference Method Sampling Location
Background
Performance Specification 2, Paragraph 4, requires gas monitors to be
located so that "measurements can be made which are directly representative, or
which can be corrected so as to be representative of the total emissions from
the facility." The Performance Specification states that conformance with the
requirement of representative measurement location can be accomplished by
installing the continuous monitor eight or more equivalent diameters downstream
from positions of air in-leakage. Stratification, the condition of
non-representativeness, is defined by Performance Specification 2, Paragraph
3.9 as "a condition identified by a difference in excess of 10 percent between
the average concentration in the duct or stack and the concentration at any
point more than 1.0 meter [3.3 feet] from the duct or stack wall." This
stratification definition implies that all locations less than 1.0 meter from a
duct or stack wall are suspected of being stratified.
Performance Specification 2 does not contain detailed methodology for
determining the degree of stratification; thus, the specification states only
that the tester may perform a traverse to characterize any stratification of
effluent gases that might exist. The reviewer should recognize that
technically valid assessments of stratification must account for the temporal
variability of concentrations within the effluent stream. Documents that
address valid stratification test procedures are available from the U.S. EPA,
SSCD.
If the results from testing indicate the absence of stratification, then
the established representativeness of the effluent ensures that the continuous
monitor may be acceptably located anywhere within that location. The converse
situation is addressed within Performance Specification 2, which provides,
essentially, the choice of either: (1) determining the pollutant concentration
in a manner which permits correction to representative conditions, or
(2) accounting for the impact of stratification by monitoring the concentration
of either oxygen or carbon dioxide. The reviewer should note that no owner or
operator of a continuous monitor has opted for the first choice because of the
technical difficulty associated with verifying the adequacy of the correction
technique. Thus, diluent monitors (continuous monitors for CO2 or O2) are
employed instead. (Indeed, for sources required to report pollutant emission
rates in units of lb NOx or SO2/10^6 Btu, a diluent monitor, in conjunction with
the pollutant monitor, is absolutely essential.) The proper use of a diluent
monitor for purposes of accounting for stratification effects implicitly
requires that the diluent and pollutant monitors measure effluent samples with
approximately equivalent composition. This requirement is reflected by the
specification that both monitors be of the same "type" (both extractive or both
in-situ). Nevertheless, established guidelines regarding the proper placement
of pollutant and diluent probes in stratified effluent streams are unavailable.
Clearly, the problem is moot if both pollutant and diluent determinations are
performed, either on one sample obtained from a common probe or within the same
sample path (for in-situ monitors).
Location is an issue that must be considered during the monitor
performance test. During the sampling phase of a relative accuracy test, it is
imperative that the gas stream sampled according to the reference method is
equivalent to the gas stream sampled by the monitor. Performance Specification
2, Paragraph 6.2.2.1 addresses the relative positions of the probe tips for an
extractive pollutant monitor and for the reference method during relative
accuracy tests: "For continuous monitoring systems employing extractive
sampling, the probe tip for the continuous monitoring system and the probe tip
for the reference method sampling train should be placed at adjacent locations
in the duct." However, the distance implied by the word "adjacent" is not
quantified. Also, the above criterion is applicable only to extractive and
in-situ monitors that analyze at a point within the effluent. The performance
specification does not provide guidance regarding the relative placement of
reference method sampling probes when tests of multipoint extractive monitors
or across-stack in-situ monitors are performed. A technically valid approach
would be to traverse the monitor measurement path with the reference method
probe.
Reporting and Review
The issue of monitor location should have been settled long before the
monitor performance test is conducted. Nevertheless, the reviewer should, to
be thorough, check the drawings included in the test report to confirm that the
monitor location is acceptable. The reviewer should approach the relative
locations of monitor and reference method probe tips in a similar fashion.
To permit such review, the monitor performance test report should contain
a drawing that shows the position of the monitor in the effluent stream. This
drawing should display approximate distances from the monitor to: (1) the walls
of the stack or duct; (2) control equipment, such as precipitators or
scrubbers; (3) points or sources of air in-leakage; and (4) points at which gas
streams with dissimilar gas concentrations are combined. (Performance
Specification 2 requires that the monitors be located eight or more stack
diameters downstream of any positions of air in-leakage. Equivalent diameters
for non-circular ducts are defined in 40 CFR 60, Appendix A, Reference Method
1.) This same downstream distance criterion should also be applied to points
where dissimilar gas streams are combined.
The reviewer should examine the monitor location drawings to verify that
the monitor is installed sufficiently far downstream of potential sources of
stratification. The reviewer should also check to be sure that single point or
short path length monitors are installed to sample more than one meter (3.3
feet) from the stack wall.
If tests were conducted to detect or to quantify stratification of the
effluent, a brief statement of the results of these tests should be included in
the monitor performance test report. The reviewer should examine these results
in order to attempt to verify that the monitor sampling location provides
measurements representative of the entire effluent stream.
The relative positions of the monitor sampling region and the probe tip of
the reference method sampling train should be illustrated. These positions
could be indicated on the drawing showing the position of the monitor in the
effluent stream, or the sampling positions could be the subject of a separate
drawing. Distances should be indicated in the drawing so that these positions
can be fixed.
Performance Specification 2 states that the probe tip of an extractive
monitor and the probe tip of the reference method sampling train should be
placed at adjacent locations in the duct during field tests for relative
accuracy. The reviewer must decide whether the locations meet the definition
of "adjacent," bearing in mind that the intent of the word "adjacent" is to
ensure that the monitor and probe are sampling identical effluent streams.
Separation distances up to approximately 20 cm (8 inches) between the reference
method sampling probe and continuous monitor sampling region are usually
considered to be "adjacent." It should be emphasized that if the monitor and
sampling probe are not sampling at exactly the same location, and if the issue
of stratification has not been resolved, then the failure of a relative
accuracy test may be due to the effect of stratification as opposed to monitor
inaccuracy.
5.0 REVIEW OF TEST PROCEDURES, DATA, AND RESULTS
OF MONITOR PERFORMANCE TESTS
FOR CO2 AND O2 MONITORS
5.1 BACKGROUND
At fossil-fuel fired steam generators subject to NSPS regulations (40 CFR
60, Subparts D and Da) diluent monitors are required in addition to pollutant
monitors. Diluent monitors enable emission rate measurements in units of the
standard, lb pollutant/10^6 Btu heat input (ng pollutant/Joule heat input). The
diluent gas measured by the diluent monitor may be either CO2 or O2. The
pollutant gas monitor and diluent monitor must measure on the same basis (i.e.,
they both measure on either a dry or a wet basis). This requirement is stated
in 40 CFR 60, Subpart D, Paragraph 60.45(e)(1).
Because either CO2 or O2 can be measured, and because emission rate
measurements can be performed either on a wet or a dry basis, a number of
equations for calculating the emission rates exist. Some that are commonly
used appear below:
Dry Basis (No moisture correction)

E = C_d F [20.9 / (20.9 - %O2d)]

E = C_d F_c [100 / %CO2d]

Wet Basis (Moisture correction)

E = C_w F_w [20.9 / (20.9(1 - B_wa) - %O2w)]

E = C_w F_c [100 / %CO2w]
Where: E = Pollutant emissions, lb pollutant/10^6 Btu (ng/Joule).
C_d = Pollutant concentration, dry basis, lb/dscf (ng/dscm).
C_w = Pollutant concentration, wet basis, lb/scf (ng/scm).
F, F_c, and F_w = Constants determined by the identity of the fuel combusted
and the diluent measured. F accompanies measurement of O2 and F_c accompanies
the measurement of CO2; both of these constants are on a dry basis. F_w
accompanies measurements of O2 on a wet basis.
%O2d and %O2w = % concentration of O2, dry and wet basis, respectively.
%CO2d and %CO2w = % concentration of CO2, dry basis and wet basis,
respectively.
B_wa = Mole fraction of moisture in the combustion air.
B_ws = Mole fraction of moisture in the effluent.
Because the factors F, F_c, and F_w are constants, measurements of the
pollutant and diluent concentrations (and, as necessary, the moisture content)
provide a basis for calculating the emissions in units of the standard. (For a
more detailed presentation of emission rate calculations, the reviewer is
referred to 40 CFR 60, Appendix A, Reference Method 19.)
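The four equations can be sketched as follows; the concentration and F-factor inputs are illustrative only (the F-factor is near the Method 19 tabulation for bituminous coal, but values should be taken from the method itself).

```python
# Sketch of the emission rate calculations above, in units of lb/10^6 Btu.
# Inputs are hypothetical, not taken from Method 19.

def e_dry_o2(c_dry, f, pct_o2_dry):
    return c_dry * f * 20.9 / (20.9 - pct_o2_dry)

def e_dry_co2(c_dry, f_c, pct_co2_dry):
    return c_dry * f_c * 100.0 / pct_co2_dry

def e_wet_o2(c_wet, f_w, pct_o2_wet, b_wa):
    return c_wet * f_w * 20.9 / (20.9 * (1.0 - b_wa) - pct_o2_wet)

def e_wet_co2(c_wet, f_c, pct_co2_wet):
    return c_wet * f_c * 100.0 / pct_co2_wet

# Example: 8.3e-5 lb SO2/dscf, F ~ 9780 dscf/10^6 Btu, 6% O2 dry
print(round(e_dry_o2(8.3e-5, 9780.0, 6.0), 3))  # ~1.139 lb/10^6 Btu
```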
Performance specifications and test procedures for diluent monitors are
contained in 40 CFR 60, Appendix B, Performance Specification 3. These
specifications and test procedures are very similar to those for pollutant gas
monitors. The performance specifications for diluent monitors are tabulated
below.
Parameter                      Specification
Zero drift (2 h)*              <= 0.4% O2 or CO2
Zero drift (24 h)*             <= 0.5% O2 or CO2
Calibration drift (2 h)*       <= 0.4% O2 or CO2
Calibration drift (24 h)*      <= 0.5% O2 or CO2
Operational period             >= 168 h
Response time                  <= 10 min
(* Expressed as the sum of the absolute mean value plus the 95% confidence
interval of a series of tests.)
Major differences between performance specifications for pollutant gas
monitors and for diluent monitors include the absence of specifications for
relative accuracy and calibration error. Another difference between the
specifications is the use of units of concentration (% O2 and % CO2) for the
drift specifications. This difference reflects the fact that the sum of the
absolute mean value and the 95% confidence interval is not subsequently divided
by the instrument span. Thus, the diluent performance specifications are
expressed on an absolute basis as a % O2 or a % CO2, rather than on a relative
basis as a percentage of the measured quantity. (The use of % in both cases
often creates confusion.) Finally, the response time specification is 10
minutes for diluent monitors as opposed to the 15 minute specification for
pollutant gas monitors.
The zero and calibration drift tests and the response time tests for
diluent monitors are conducted in an identical manner to those of pollutant gas
monitors. The associated calculations are also performed in the same manner,
with the exception that final division by the span of the instrument is not
performed.
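A hedged sketch of the diluent drift statistic follows, assuming the usual Student-t form of the 95% confidence interval; the t-value and drift data are illustrative only.

```python
# Hedged sketch of the diluent drift statistic: |mean| plus the 95%
# confidence interval of a series of drift values, reported directly in
# % O2 or % CO2 (no division by span). Data are hypothetical; the t-value
# is taken from a standard table.

import statistics

def drift_statistic(drifts, t_975):
    n = len(drifts)
    mean = sum(drifts) / n
    ci95 = t_975 * statistics.stdev(drifts) / n ** 0.5
    return abs(mean) + ci95

# Seven 24-hour zero drift readings (% O2); t0.975 = 2.447 for 6 d.f.
drifts = [0.1, -0.2, 0.0, 0.1, 0.2, -0.1, 0.1]
print(round(drift_statistic(drifts, 2.447), 3))  # ~0.156; compare to 0.5% O2
```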
5.2 REVIEWING DILUENT MONITOR PERFORMANCE TEST REPORTS
The information and data recommended for inclusion within performance
specification test reports for diluent monitors are identical to those
recommended for pollutant gas monitors with respect to determinations of
calibration values, drifts, and response times. Again, it is important to note
that relative accuracy tests and calibration error tests are not included in
diluent monitor performance evaluations. The absence of both the relative
accuracy test and the calibration error test and their attendant data leaves
little (relative to reports for tests of pollutant gas monitors) to report, and
therefore, to review.
Although no test per se is required for determining calibration error, a
calibration check of the diluent monitor is required by Paragraph 6.1 of
Performance Specification 3. Accordingly, the calibration check entails
establishing a calibration curve using zero, midrange and span concentration
gas mixtures. This resultant curve is then compared with the expected
calibration curve as described by the analyzer manufacturer, and its
"consistency" is verified. Additional calibration gas measurements are made or
"additional steps are undertaken" if the expected response is not obtained.
"Consistency" is not defined within the paragraph.
As a minimum, copies of the calibration curves should be provided within
the report in order to document that the check was performed. An example is
given with Figure 5-1. Raw data forms which document the response time test,
the drift tests, and the calibration checks should be included in the monitor
performance test report. The reviewer should check the data from the
calibration check and should evaluate these data in light of the verified
results from the drift tests.
FIGURE 5-1. Example diluent monitor calibration curve. [Figure not legibly
reproduced in this copy.]
Since the tests for drift and response time are nearly identical to those
for gas monitors, the reviewer is directed to those sections in this manual
where review procedures for pollutant gas monitors are discussed in detail. As
a final note, the reviewer should recognize that mid-range calibration gases
are used for the 2-hour calibration drift test rather than span gases.
5.3 CALIBRATION GASES
Background
In a manner analogous to performance evaluations of pollutant gas
monitors, diluent monitors require gases of known concentrations for the tests
of response time, zero drift and calibration drift, and for calibration checks.
Calibration gases are injected into extractive monitors, while in-situ monitors
may employ certified gas cells functionally equivalent to the required diluent
concentrations. Because the reporting requirements for in-situ diluent
monitors are identical to those for pollutant gas monitors, the reviewer is
directed to the pertinent discussions in this manual (see Calibration
Standards and Calibration, Section 4.5.4.4). The following discussion
addresses the reporting
requirements applicable to the concentrations of the gas cells and the review
of the reported information.
Depending on whether the diluent monitor measures carbon dioxide or
oxygen, the composition of the calibration gas mixture is required to be either
carbon dioxide in air or oxygen in nitrogen.
The required concentrations of the diluent calibration gases are
determined by the "normal carbon dioxide or normal oxygen concentration in the
stack gas of the affected facility." Thus, according to Performance
Specification 3, Paragraph 3.2, the span of the instrument "shall be set no
less than 1.5 to 2.5 times the normal carbon dioxide or normal oxygen
concentration." The concentration of the span diluent calibration gas is
required to be 90% of the instrument span. The concentration of the mid-range
diluent calibration gas is required to be "representative of the normal
conditions [oxygen and carbon dioxide concentrations] in the stack gas of the
affected facility at typical operating rates." This requirement appears in
Paragraph 3.3 of Performance Specification 3. Finally, Performance
Specification 3, Paragraph 2.2 permits the use of ambient air as a diluent
calibration gas. This provision applies strictly to oxygen analyzers with
spans set higher than 21% O2.
The reviewer may also check the reported concentrations of the span and
mid-range gases with regard to the criterion that these should reflect the
diluent concentrations present in the effluent during normal operation of the
affected facility. The concentrations can be checked for reasonableness by
consulting nomographs especially made for the purpose of estimating the
concentrations of oxygen and carbon dioxide in the effluents of fossil-fuel
fired steam generators. These nomographs can provide limits, which are
dependent on the fuel combusted, for the diluent concentrations. It is,
however, difficult for the reviewer to fix accurately the "normal" diluent
concentration. Most of the problem stems from the large ranges of diluent
concentrations encountered at fossil-fuel fired steam generators: 6-15% CO2
and 3-20% O2.
The calibration gases for extractive diluent monitors are required to be
analyzed within two weeks of their use in monitor performance tests. The gases
are analyzed in triplicate, using the methodology of Reference Method 3
- Orsat analyses of bag samples obtained from gas cylinders. Unlike the
paragraph treating the analyses of calibration gases for pollutant gas
monitors, the analogous paragraph for diluent monitors does not give a
criterion for the acceptability of the results of the triplicate analyses.
Reference Method 3, however, provides requirements for maintaining
precision. Paragraphs 4.2.6.1 and 4.2.6.2 of Reference Method 3 address the
acceptable criteria for the respective Orsat analyses of CO2 and O2. For CO2,
Orsat analysis is repeated "until the results of any three analyses differ by
no more than (a) 0.3 percent by volume when CO2 is greater than 4.0 percent, or
(b) 0.2 percent by volume when CO2 is less than or equal to 4.0 percent." For
determinations of O2, the criteria are similar: the analysis is repeated "until
the results of any three analyses differ by no more than (a) 0.3 percent by
volume when O2 is less than 15.0 percent, or (b) 0.2 percent by volume when O2
is greater than 15.0 percent." For the analyses of CO2 and O2, the average of
the three acceptable results is reported as the appropriate concentration.
Thus, the criteria for the acceptability of the analyses of diluent calibration
gases are very similar to those that apply to pollutant calibration gases.
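The spread check can be sketched as follows; the method text does not state whether the 4.0% and 15.0% thresholds apply to the average or to individual readings, so the average is assumed here, and all readings are hypothetical.

```python
# Sketch of the Reference Method 3 acceptability check for triplicate
# Orsat analyses of diluent calibration gases, per the criteria quoted
# above. Readings are hypothetical, in percent by volume; the concentration
# threshold is applied to the average (an assumption, not method text).

def co2_triplicate_ok(readings):
    avg = sum(readings) / len(readings)
    limit = 0.3 if avg > 4.0 else 0.2
    return max(readings) - min(readings) <= limit

def o2_triplicate_ok(readings):
    avg = sum(readings) / len(readings)
    limit = 0.3 if avg < 15.0 else 0.2
    return max(readings) - min(readings) <= limit

print(co2_triplicate_ok([12.1, 12.2, 12.3]))  # True: spread 0.2 <= 0.3
print(o2_triplicate_ok([17.8, 18.1, 18.0]))   # False: spread 0.3 > 0.2
```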
The test report should contain documentation of the triplicate analysis of
all diluent calibration gases used during the monitor performance test.
The review of the analyses of O2 and CO2 calibration gases is simple,
since the methodology (Orsat analysis of bag samples obtained from cylinders)
generates a small amount of data and does not require further manipulation. For
an example, see Figure 5-2. The reviewer should first confirm the completeness
of the data by checking that each diluent calibration gas used in the
performance test is accompanied by a triplicate analysis. Each of the
individual analyses for the gases should meet the acceptability criteria
discussed above.
FIGURE 5-2. Example Orsat field data sheet: plant name, sampling location,
run/sample number, leak test, date, and operator; times of sample collection
and analysis; CO2 reading (A), O2 reading (B), and CO reading (C);
%O2 = B - A, %CO = C - B, %N2 = 100 - C; and averages for each run. [Data
sheet not legibly reproduced in this copy.]
The reviewer should then verify that all the subtraction operations
required by the Orsat analysis are correct, and that the reported average
diluent gas concentration is also correct. All reported average diluent gas
concentrations should be compared as appropriate to the concentrations reported
in the drift test, response time test, and calibration check. The reported
values should be consistent. Finally, the reviewer can check the time between
the analysis of the gases and their use in the performance test. This time
interval should be less than 2 weeks; however, the reviewer should bear in mind
that diluent calibration gas concentrations are generally very stable.
-------
United States Office of Air Quality Planning and Standards
Environmental Protection Stationary Source Compliance Division
Agency Washington, D.C 20460
Official Business Publication No EPA-340/1-83-013 Postage and
Penalty for Private Use pees pai(j
$300 Environmental
Protection
Agency
EPA 335
If your address is incorrect, please change on the above label,
tear off, and return to the above address
If you do not desire to continue receiving this technical report
series, CHECK HERE D , tear off label, and return it to the
above address
------- |