EPA/625/6-89/023
January 1990
Handbook
Quality Assurance/Quality Control (QA/QC)
Procedures for Hazardous Waste
Incineration
Center for Environmental Research Information
Office of Research and Development
U.S. Environmental Protection Agency
Cincinnati, Ohio 45268
Printed on Recycled Paper
-------
Notice
This report has been reviewed by the U.S. Environmental Protection Agency and approved for
publication. Mention of trade names or commercial products does not constitute endorsement or
recommendation for use.
-------
Contents
Page
Figures v
Tables vi
Acknowledgments vii
Chapter 1. Introduction 1
Chapter 2. QA Project Plans in Hazardous Waste Incineration Trial Burns 3
2.1 Structure of QAPjP 3
2.2 Review of the QAPjP with Trial Burn Plan 9
Chapter 3. General Topics 11
3.1 Sample Handling and Custody 11
3.2 Holding Times 11
3.3 Routine Calibration of Stack Sampling Equipment 12
3.4 Internal Auditing 12
3.5 Use of External Audits 15
3.6 Reporting QA/QC Results 17
3.7 Evaluating Trial Burn QA/QC Results 18
Chapter 4. QC Procedures for Sampling Waste, Ash, Fuel, and Air Pollution
Control Device (APCD) Effluent 21
4.1 General 21
4.2 Sampling Design-Representative Samples 21
4.3 Standard Operating Procedures (SOP) for Sampling Activities ... 22
4.4 Summary 23
Chapter 5. QC Procedures for Analysis of Waste, Ash, Fuel, and Air Pollution
Control Device(APCD) Effluent 25
5.1 Analysis of Waste Samples for Heating Value, Ash, Viscosity,
and Chlorine 25
5.2 Analysis for Principal Organic Hazardous Constituents (POHCs) . 26
5.3 Analysis for Metals in Waste, Ash, and APCD Samples 28
Chapter 6. QC Procedures for Stack Sampling 33
6.1 EPA Methods 1 and 2 (40 CFR 60, App. A):
Location and Velocity 33
6.2 EPA Methods 3 and 3A: Gas Analysis for Carbon Dioxide,
Oxygen and Excess Air, and Dry Molecular Weight 33
6.3 EPA Methods 4 and 5: Moisture and Particulates 34
6.4 Hydrogen Chloride 35
6.5 Volatile Organic Sampling Train (VOST)--Method 0030 35
iii
-------
Contents (continued)
6.6 Bag Sampling 36
6.7 Semi-VOST (SVOST)--Method 0010 36
6.8 Determination of Multiple Trace Metal Emissions-Draft Method . . 36
Chapter 7. QC Procedures for Analysis of Stack Samples 39
7.1 Gas Analysis for Carbon Dioxide, Oxygen, and Dry Molecular
Weight; Methods for Moisture and Particulates 39
7.2 Hydrogen Chloride 40
7.3 Volatile Organic Sampling Train (VOST)-Method 0030/5040 .... 42
7.4 Semivolatile Organic Sampling Train (SVOST)--Method 0010 45
7.5 Metals Determination 49
Chapter 8. QC Procedures for General SW-846 Analytical Methods 55
8.1 Volatile Organic GC/MS Analysis 55
8.2 Semivolatile Organic GC/MS Analysis 56
8.3 Gas Chromatography (GC), High Performance Liquid
Chromatography (HPLC), Ion Chromatography (IC) 57
8.4 Metals Determinations 60
Chapter 9. Specific Quality Control Procedures for Continuous Emission Monitors . . 63
9.1 Carbon Monoxide Monitors 63
9.2 Oxygen Monitors 65
Chapter 10. Specific Quality Control Procedures for Process Monitors 67
10.1 Introduction 67
10.2 General QC Procedures 67
Chapter 11. QA/QC Associated with Permit Compliance and Daily Operation 69
11.1 Routine Procedures for Monitoring and Testing/Calibration 69
11.2 Record Keeping 73
Chapter 12. References 75
Appendix
A VOST Calibration 77
B Acronym List 81
IV
-------
Figures
4-1. Example sampling instructions and field record form 23
6-1. Pretest sampling checks 34
-------
Tables
2-1. Sixteen Essential Elements of a Quality Assurance Project Plan (QAPjP) 4
2-2. Recommended Outline for a Trial Burn Quality Assurance
Project Plan (QAPjP) 4
2-3. Example Summary Table of Precision and Accuracy Objectives 6
2-4. Example Table of Calibration Procedures and Criteria
for Sampling Equipment 8
3-1. General Recommendations for Containers, Preservation,
and Holding Times 13
3-2. SW-846 Holding Times for Water Samples 14
3-3. Available Audit Cylinders 16
5-1. Summary of QA/QC Procedures for Heating Value, Ash, Viscosity,
and Chlorine Analysis . 26
5-2. Summary of QA/QC Procedures for Principal Organic Hazardous
Constituent Determination in Waste Feed Samples 29
5-3. Summary of QA/QC Procedures for Metals Determination
in Waste Feed Ash and APCD Samples 31
7-1. Summary of QA/QC for Chloride Determination 42
7-2. Summary of QA/QC Procedures for VOST 46
7-3. Summary of QA/QC Procedures for SVOST 50
7-4. Standard Reference Material (SRM)--Metals on Filter Media 53
7-5. Summary of QA/QC Procedures for Metals Determination
in Stack Gas Samples 54
8-1. BFB Key Ions and Ion Abundance Criteria (Method 8240 Criteria) 55
8-2. Surrogate and Spike Recovery Limits 56
8-3. Decafluorotriphenylphosphine (DFTPP) Key Ions and Ion Abundance
Criteria (Method 8270 Criteria) 57
8-4. Calibration Check Compounds 57
8-5. Summary of QA/QC Procedures for GC/HPLC and IC Determinations 59
8-6. Summary of QA/QC Procedures for Metals Determinations 61
9-1. Carbon Monoxide Performance Test Criteria 63
9-2. Quality Assurance Objectives for CO Monitors 66
9-3. Oxygen Performance Test Criteria 66
9-4. Quality Assurance Objectives for Oxygen Monitors 66
11-1. QA/QC for Routine Operation--CO and O2 Monitors 72
VI
-------
Acknowledgments
This Handbook was prepared for the U.S. Environmental Protection Agency's Center for
Environmental Research Information (CERI), Office of Research and Development (ORD)
under the direction of Sonya Stelmack (OSW) and Larry D. Johnson (ORD) with Justice A.
Manning (CERI) serving as the Project Manager.
The Handbook was prepared by Midwest Research Institute's (MRI) Environmental
Systems Department, with Andrew Trenholm serving as MRI's project manager. Thomas
Dux was principal author with assistance from Pamela Gilford. Other authors who
contributed sections to this Handbook are F. Bergman, B. Boomer, D. Hooton, and R.
Neulicht.
Review was provided by the Permit Writers Workgroup composed of permit writers in the
EPA Regional offices and EPA representatives in OSW and ORD. Particular
acknowledgment is extended to Joe M. Finkel, Senior Chemist, Southern Research
Institute, and Don Wright, EPA Region 2, for their thorough review of the draft.
Appreciation is owed to Jeanne Hankins and Sonya Stelmack, OSW, and Larry Johnson,
ORD, for their review of the draft and the final. Lastly, appreciation is expressed to
Thomas Dux for a final review prior to publishing.
VII
-------
-------
Chapter 1
Introduction
The Environmental Protection Agency (EPA) has
promulgated regulations for hazardous waste
incinerators under the Resource Conservation and
Recovery Act.1* These regulations require the permit
applicant to conduct trial burns to demonstrate
compliance with the regulatory limits and provide data
needed to write the individual permits. Trial burns
require a Quality Assurance Project Plan (QAPjP) with
quality assurance/quality control (QA/QC) procedures
to control and evaluate data quality. Both permit
writers and applicants are in need of specific,
consistent guidance in preparing QAPjPs and for
designing the necessary QA/QC procedures to ensure
consistency and adequacy of plans, reports, and over-
all data quality. Although considerable information is
available on sampling and sample analysis for
hazardous waste and its incineration, guidance on
specific QA/QC methods has not been available
previously.
Guidance on the preparation and review of QAPjPs,
establishment of quality assurance objectives, design
of QA/QC procedures, and assessment of trial burn results is presented in this handbook. In this volume,
QA/QC procedures are defined for process
monitoring, sampling, and analysis for both the initial
trial burn and for later continuing operation of the
incineration facility. Pollutant categories discussed are:
principal organic hazardous constituents (POHCs),
metals, particulates, acid gases, and combustion
gases.
This handbook is intended for a diverse audience:
engineers, chemists, environmental scientists, facility
personnel, and EPA staff at all levels. It has been
written with the EPA or state permit writer's
information needs in mind, but would be, by
extension, of considerable interest to the permit
applicant. The handbook assumes the reader
understands the technical approach to incineration
and is familiar with the basics of most sampling and
analysis methods.
Chapter 2 is a background discussion, covering the
need for a QAPjP in a trial burn. A standardized
format for a trial burn QAPjP has been recommended to unify QA/QC methodologies for hazardous waste
incineration and ensure comparability of data across
all performance tests. A key concept in the handbook
is the use of QC information and the associated QC
criteria for acceptance of trial burn data. The QA/QC
procedures and associated QA objectives for each
critical measurement parameter are identified in this
handbook, along with guidance for acceptance limits.
The handbook also discusses how to evaluate trial burn results when QA/QC objectives have not been achieved.
A wide variety of sampling and analytical methods is
covered in the handbook. Based upon practical
application of the methods, specific QA/QC
procedures have been delineated here which are
beyond those in available written protocols. Key QC
procedures of each method and their associated
acceptance criteria are addressed; some minor QC
procedures have not been covered.
The QA/QC procedures presented in this handbook
should be considered as the minimum necessary for
assessing data quality and ensuring attainment of
project objectives. For some facilities, regions, or
states, these QC procedures may not be sufficient
due to the complexity of a given trial burn; in these
cases, the handbook guidance should constrain
neither the permit applicant nor the regulatory agency.
The primary focus of the handbook is the trial burn
itself; however, a discussion of the QA/QC for routine
incinerator monitoring and permit compliance is
included in a separate chapter. This area has slightly
different requirements and objectives from those of
the trial burn. The trial burn should be viewed as a
short-term project with a defined beginning and end,
while compliance monitoring is considered an ongoing
process.
If trial burns and routine monitoring are designed using
the QA/QC indicated in the handbook and follow the
outline and guidance for the development of a QAPjP,
the level of precision and accuracy will be doc-
umented, and acceptance limits for these parameters
will be defined. If the QC information suggested in this
handbook is presented as part of the final trial burn
report, the subsequent process of reviewing and
assessing the results should be easy, effective, and
standardized.
-------
-------
Chapter 2
QA Project Plans in Hazardous Waste Incineration Trial Burns
The fundamental concepts of quality assurance and
quality control as applied to the hazardous waste
incineration permitting process are introduced in this
chapter. The role of QA objectives in the overall qual-
ity assurance project plan (QAPjP) and in the trial burn
plan (TBP) is discussed in terms of specific
information the permit writer should expect to find in
an applicant's documentation. This section covers
both the format and content of QA plans required for
trial burns. Chapter 11 of this handbook discusses the
QA/QC for daily incinerator operation.
Trial burns of hazardous waste incinerators are
complex activities requiring operation of the incinerator
under rigorously controlled conditions in conjunction
with environmental sampling and analysis of
constituents in diverse matrices. This complexity is
reflected in the permit application and trial burn plans
(TBPs) which must cover facility design, theoretical
design of the trial burn, incinerator operating
conditions (waste streams, temperature, air pollution
control equipment, etc.), complex sampling methods
(e.g., VOST, SVOST, Orsat), and finally, preparation
and analysis of samples ranging from high
concentration waste feeds to low concentration stack
gas samples. All of the data generated must have a
documented, known level for precision and accuracy
sufficient to support decisions based upon those data.
Often, the key procedures and concepts needed to
ensure quality data are vital in presenting the
technical design of the incinerator and trial burn.
The QA/QC procedures for a particular trial burn are
presented in the Quality Assurance Project Plan
(QAPjP). It is designed to document and assess the
precision and accuracy of the trial burn data, and to
assure the permit reviewer that the data will be of
sufficient quality for making regulatory decisions. EPA
quality assurance policy stipulates that every
monitoring and measurement project must have a
written and approved QAPjP.2 This document should
contain, in specific terms, policies, organizational
adaptations, overall objectives, functional activities,
and tailored QA/QC activities designed to achieve the
data quality goals of that particular project or
operation. The QAPjP must be prepared by the
organization responsible for the project work and
approved by the appropriate federal, regional, or state
agency.
The QAPjP and TBP should be considered companion
documents and should be reviewed at the same time.
They may be presented as a single document if that is
the applicant's preference. Generally, the TBP covers
topics related to the experimental design of the trial
burn (e.g., incinerator type, waste feeds, test
schedules), sampling design and methods, as well as
analytical methods. The QAPjP covers all the QA/QC
procedures necessary to fulfill the objectives of the
trial burn. In many areas the TBP and QAPjP will
overlap, or areas will be repeated in both documents;
however, the TBP usually is considered the primary
document, and the QAPjP will often refer to subjects
already considered in the TBP.
2.1 Structure of QAPjP
2.1.1 Format
The general format and required topics in a QAPjP are
outlined by the EPA Quality Assurance Management
Staff (QAMS) in Interim Guidelines and Specifications
for Preparing Quality Assurance Project Plans (QAMS-005/80).2 The
sixteen items that must be considered for inclusion in
each QAPjP are outlined in Table 2-1. QAMS states
directly, "The sixteen essential elements must be
considered and addressed in each QAPjP. If a
particular element is not relevant to the project under
consideration, a brief explanation of why the element
is not relevant must be included."
The permit writer should not accept a QAPjP which
does not cover all the elements in the QAMS
guidance. Standardizing the format will help unify the
QA/QC methodologies for hazardous waste
incineration and ensure comparable data quality for all
performance tests. Usually, each one of the 16 items
is a separate section in the QAPjP. If an item is not
relevant to the QAPjP or is covered elsewhere in the
accompanying TBP, this may be explained and/or
reference may be made to the appropriate section of
the QAPjP or TBP.
However, the QAMS format should not constrain the
applicant if there is a need to cover topics not
included in the 16 elements. A slight modification of
these 16 elements is presented in Table 2-2 that is
more appropriate to incineration trial burns. The only
modifications made were the addition of staff qualifi-
-------
Table 2-1. Sixteen Essential Elements of a Quality
Assurance Project Plan (QAPjP)
1. Title page with provision for approval signatures.
2. Table of contents.
3. Project description.
4. Project organization and responsibility.
5. QA objectives for measurement data in terms of precision, accuracy, completeness, representativeness, and comparability.
6. Sampling procedures.
7. Sample custody.
8. Calibration procedures and frequency.
9. Analytical procedures.
10. Data reduction, validation, and reporting.
11. Internal quality control checks and frequency.
12. Performance and system audits and frequency.
13. Preventive maintenance procedures and schedules.
14. Specific routine procedures to be used to assess data precision, accuracy, and completeness of specific measurement parameters involved.
15. Corrective action.
16. Quality assurance reports to management.
From Interim Guidelines and Specifications for Preparing Quality Assurance Project Plans (QAMS-005/80).2
Table 2-2. Recommended Outline for a Trial Burn Quality
Assurance Project Plan (QAPjP)
QAPjP Outline for Hazardous Waste Incinerator Trial Burns
Section 1.0  Title Page (with approval signatures)
Section 2.0  Table of Contents
Section 3.0  Project Description
Section 4.0  Organization of Personnel, Responsibilities, and Qualifications
Section 5.0  Quality Assurance and Quality Control Objectives
Section 6.0  Sampling and Monitoring Procedures
Section 7.0  Sample Handling, Traceability, and Holding Times
Section 8.0  Specific Calibration Procedures and Frequency
Section 9.0  Analytical Procedures
Section 10.0 Specific Internal Quality Control Checks
Section 11.0 Data Reduction, Data Validation, and Data Reporting
Section 12.0 Routine Maintenance Procedures and Schedules
Section 13.0 Assessment Procedures for Accuracy, Precision, and Completeness
Section 14.0 Audit Procedures, Corrective Action, and QA Reporting
cations to the fourth element and the combining of
audits, corrective action, and QA reporting into a
single section. The sections of a QAPjP and the types
of information the permit writer should expect to see
in this document are described briefly in the
remainder of this chapter. For a more detailed
description of the information that belongs in each
section of a QAPjP, please refer to the above
document (QAMS-005/80).
2.1.2 Document Control, Title Page, and Table
of Contents
Each page of the QAPjP should have a document
control indicator in the top right corner as shown
below:
Section No.:
Revision No.:
Date:
Page ___ of ___
This document control indicator assists the permit
writer in finding information, flags changes made
during the review process, and enables the permit
writer to identify unapproved changes to the QAPjP.
Multiple revisions are frequently difficult to track.
Revised sections of the QAPjP should be submitted
so that the permit writer can update the QAPjP easily
and track areas which have been modified. Also,
QAPjPs may be photocopied and distributed many
times, and the number of pages quickly indicates if a
full copy has been received. A document control
format is also helpful for the TBP.
The title page and table of contents are self-
explanatory. The title page must include approval
signatures from the following personnel: (a) the proj-
ect leader; (b) the project leader's supervisor (if the
trial burn is conducted by a subcontractor and not by
the facility); (c) the quality assurance coordinator
(QAC) for the trial burn; and, (d) the facility-designated
signatory (40 CFR 270.11). A revised title page should
be submitted with every modification of any section of
the QAPjP. Provision should be made for the
signatures of the permit writer and the permit writer's
quality assurance officer. When approving the TBP and QAPjP, the permit writer should return a signed title page to the applicant indicating approval.
2.1.3 Project Description
This section may be redundant since the
accompanying TBP should contain a complete project
description. However, a short project description is
recommended for inclusion along with a diagram of
the incinerator indicating sampling points, especially if
the QAPjP is a separate document. Sometimes
QAPjPs become separated from the TBP and the
duplicate information is useful. At a minimum,
-------
reference should be made to the TBP section containing the project synopsis.
2.1.4 Organization of Personnel,
Responsibilities, and Qualifications
This section of the QAPjP should identify key
personnel, their qualifications, and their QA/QC
responsibilities. At a minimum, the following personnel
must be identified: (a) the facility-designated signatory
(40 CFR 270.11); (b) the overall trial burn project
manager; (c) the field sampling manager; (d) the
analytical manager; and, (e) the QAC. Preferably, a
chart or table should be included showing the project
organization.
The QAPjP should contain an appendix giving the
qualifications, resumes, or curriculum vitae of every
individual with key responsibilities. The permit
reviewer should examine these qualifications to
ascertain that facility and contractor personnel are
sufficiently experienced or trained to conduct a trial
burn.
A single individual must be designated as QAC. The
QAC's function is to conduct or coordinate audits by
other personnel of field and laboratory operations to
ensure compliance with the TBP and the QAPjP. The
QAC should also have the identified responsibility of
examining all project records, analysis data, and
quality control results, and of providing a written independent assessment of overall data quality to be
submitted with the trial burn report (TBR). This
assessment should be in addition to the assessment
and conclusions of the primary author of the trial burn
report (the project leader). The TBR should include
sufficient information to indicate whether the QAC is
organizationally independent of the trial burn's
technical staff (i.e., not the project leader, field
sampling manager, or analysis task manager), and is
not directly responsible for any environmental
measurements nor accountable to those directly
responsible. A designated QAC is essential to an
independent assessment of the data quality presented
in the trial burn report.
2.1.5 Quality Assurance and Quality Control
Objectives
QAMS-005/802 states, "For each major measurement
parameter, including all pollutant measurement
systems, list the QA objectives for precision,
accuracy, and completeness. These QA objectives
should be summarized in a table." These objectives
must be based upon the permit writer's decisions.
Each measurement must have a defined precision and
accuracy objective summarized in a table. If all the
QC data meet the objectives, the trial burn results will
be judged as having an acceptable quality level,
sufficient for making the permitting decision. When QC results are poor and specific criteria have not been established, the acceptance of the data is left to the technical judgment of the permit writer.
Specific QC procedures and associated acceptance
criteria are presented in this handbook. These
procedures should be summarized and presented in
the objectives table. This table should guide the
permit writer to all the quality control and associated
criteria for each measurement (POHCs, CO, O2,
combustion chamber temperature, spike recovery,
etc.). Each associated quality objective must be
related to a method for determining that objective. For
example, an objective for chloride measurement
accuracy stated as 80% to 100% is meaningless,
since no basis has been provided to determine this
objective. Instead, the objective should be associated
with the spike recovery from impingers fortified at the
estimated 99% removal level (80% to 120%
recovery). Table 2-3 is an example of a QA objective
table from a QAPjP.
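To illustrate how such a table of objectives can be used during data review, the following sketch stores a few illustrative precision and accuracy objectives and checks hypothetical QC results against them. The parameter names, limits, and measured values shown are assumptions for the example only, not values taken from any actual QAPjP.

```python
# Illustrative sketch only: QA objectives kept as data and applied to QC results.
# All parameter names, limits, and measured values below are assumed examples.

OBJECTIVES = {
    # parameter: (precision limit as % RSD or None, accuracy range as % recovery)
    "Semivolatile POHC surrogate spike": (50.0, (50.0, 150.0)),
    "Chloride impinger spike": (20.0, (80.0, 120.0)),
}

def meets_accuracy(parameter, percent_recovery):
    """True if a measured spike recovery falls within the accuracy objective."""
    low, high = OBJECTIVES[parameter][1]
    return low <= percent_recovery <= high

def meets_precision(parameter, percent_rsd):
    """True if the relative standard deviation meets the precision objective."""
    limit = OBJECTIVES[parameter][0]
    return limit is None or percent_rsd <= limit

if __name__ == "__main__":
    print(meets_accuracy("Chloride impinger spike", 93.0))             # True: within 80-120%
    print(meets_precision("Semivolatile POHC surrogate spike", 62.0))  # False: exceeds 50% RSD
```

Whatever form the summary takes, the point is the same as in the table: every objective is tied to a specific QC measurement and a numeric acceptance limit.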
QAMS-005/802 also states that this section should
cover the quality objectives of completeness,
representativeness, and comparability. Completeness
is defined as "the amount of valid data obtained from
a measurement system compared to the amount that
was expected to be obtained under optimal normal
conditions." For the permit to be written,
completeness should be 100% in that three valid test
runs are needed for each test condition. Acceptable
results must be obtained for all three trial burn runs.
However, when individual tasks and problems are
considered, completeness is not so easily defined. For
example, in VOST tube analyses four samples are
often collected, and three are analyzed unless there
are problems. Although only three or four samples
have been analyzed, a valid result for a test run was
obtained; the test is complete. The concept of
completeness as defined for a QAPjP is probably
more pertinent to an entire monitoring project, where
a certain amount of data is needed to complete the
statistical design.
Representativeness and comparability objectives are
generally not quantifiable. Representativeness is
defined as "the degree to which data accurately and
precisely represent a characteristic of a population,
parameter variations at a sampling point, process
condition, or an environmental condition," while
comparability is defined as "expressing the confidence
with which one data set can be compared to
another."2 In stack sampling, a representative sample
whose results are comparable to other data sets is
ensured primarily through the use of standard EPA
methods (e.g., M1, M2, SVOST, VOST). The proper
use of a standard stack sampling method ensures a
representative sample. If that sample is analyzed
using standardized methodology and the results are
reported in common units, the results should be
comparable to those obtained from other trial burns. In
rare situations, a trial burn involves unique POHCs,
-------
Table 2-3. Example Summary Table of Precision and Accuracy Objectives

Parameter | Matrix | QC procedure | Precision | Accuracy (mean recovery, %)
Semivolatile POHC (1,2,3-Trichlorobenzene) | Stack emissions: XAD-2, filter, water, front half rinse, back half rinse | Spiked with suitable surrogate compound (use of labeled surrogate is recommended): 13C-hexachlorobenzene and 13C6-1,2,4,5-tetrachlorobenzene; for each SVOST component, average over three runs | 50% RSD | 50-150
Semivolatile POHC (1,2,3-Trichlorobenzene) | Solid waste, organic liquid wastes, ash | As a minimum, one native surrogate will be spiked in each sample; average over three runs | 50% RSD | 50-150
Semivolatile POHC (1,2,3-Trichlorobenzene) | Stack emission | Analysis of spiked blank filter and spiked XAD-2, spiked with all POHCs and surrogates | NA | 50-150
Particulate | Stack emission | Balance calibration with 500 mg weight; duplicate analysis for one run | NA | 499.5-500.5 mg (±0.5 mg)
Chlorine | Aqueous waste, sludge, solid wastes, organic liquid wastes | Blind knowns; duplicate analyses | 20 | 100 ± 10
Hydrogen chloride | NaOH solution/water | Chloride standard in water; duplicate analyses for one run | 30 | 100 ± 15

NA - Not applicable
and standard methodology will not meet the data
needs for the regulatory decision. In such a case, the
performance of any novel methodology should be
determined in advance and documented in the QAPjP.
Comparability also refers to the units in which results
are reported. The handbook on Guidance on Setting
Permit Conditions and Reporting Trial Burn Results
recommends suitable units for data reporting.4
2.1.6 Sampling and Monitoring Procedures
Sampling and monitoring procedures are usually
described in the accompanying TBP, and there is no
need to repeat details already given. However, a table
giving all sampling points, sampling frequency, total
number of samples plus replicate and field duplicates
should be presented in this section. Each sampling
activity needs a written procedure. For stack
sampling, reference to the EPA method is usually
sufficient, but any specific options chosen from those
procedures must be given. However, for waste feed
and ash sampling, an outline procedure should be
presented in the QAPjP. Details of sampling
procedures should be discussed in an appendix (see
Chapter 4).
The key quality parameters for sampling are: (1) use
of standard reference methods; and (2) that sampling
procedures and trial burn design call for sufficient
POHC mass in the stack gas sample for accurate
detection and quantitation at the 99.99% DRE level. The amount of this mass should be included, along with the calibration range of the analytical method used to detect and quantitate the POHC. The mass of POHC in the sample (if DRE is at the 99.99% level) should be within the calibration range and at least 10 times the lowest calibration point to ensure accurate measurement of the DRE. If not, the permit applicant
should either change the waste feed rate, the
sampling rate, or the analytical method to achieve
proper quantitation of the POHC.
For example, the theoretical waste feed input, the
stack sampling rate, and 99.99% DRE should be used to calculate a maximum VOST tube concentration (e.g., 100 ng) if the 99.99% DRE is achieved. This
should be presented with the calibration range (e.g.,
10 to 500 ng) to ensure that a sufficient amount of
POHC is present. For SVOST, this presentation
should take into account the manner in which the
SVOST components are combined; in addition, the
POHC and calibration range must be in the same
concentration or mass units to be comparable.
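As a sketch of the calculation described above, the following example estimates the POHC mass expected on the VOST sorbent at exactly 99.99% DRE and compares it with the calibration range. The feed rate, stack gas flow, sample volume, and calibration range are assumed values chosen only for illustration; an actual QAPjP would use the facility's own design figures.

```python
# Illustrative sketch: expected VOST POHC mass at 99.99% DRE versus the calibration range.
# Every numerical input below is an assumed example value, not a requirement.

POHC_FEED_RATE_G_PER_H = 1000.0     # assumed POHC feed rate (1 kg/h)
STACK_FLOW_DSCM_PER_H = 20000.0     # assumed dry standard stack gas flow rate
VOST_SAMPLE_VOLUME_L = 20.0         # assumed VOST sample volume
DRE_PERCENT = 99.99                 # DRE level to be demonstrated
CAL_RANGE_NG = (10.0, 500.0)        # assumed calibration range of the analytical method

# POHC emission rate if the DRE is exactly 99.99%
emission_g_per_h = POHC_FEED_RATE_G_PER_H * (1.0 - DRE_PERCENT / 100.0)

# Stack gas concentration (ng/L) and mass collected in the VOST sample
conc_ng_per_l = emission_g_per_h * 1e9 / (STACK_FLOW_DSCM_PER_H * 1000.0)
mass_ng = conc_ng_per_l * VOST_SAMPLE_VOLUME_L

low, high = CAL_RANGE_NG
print(f"Expected mass at 99.99% DRE: {mass_ng:.0f} ng")
print(f"Within calibration range {CAL_RANGE_NG}: {low <= mass_ng <= high}")
print(f"At least 10x lowest calibration point: {mass_ng >= 10.0 * low}")
```

With these assumed numbers the expected mass is 100 ng, which lies inside the 10 to 500 ng range and is 10 times the lowest calibration point; if either check failed, the waste feed rate, sampling rate, or analytical method would need to be adjusted as described above.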
2.1.7 Sample Handling, Custody, and Holding
Times
Each sample should be identified in this section, along
with appropriate holding times for each analysis and
any associated preservation techniques. All sample
handling procedures for the trial burn must be
described, including sample labeling, preservation,
packing, shipping, laboratory, and field storage pro-
cedures. All documentation practices should be
described, including field log books, sample analysis
request forms, laboratory custody log books, and field
-------
custody forms. Storage of samples for archive
purposes must also be covered. Often it is most
appropriate to formulate these procedures into a
formalized standard operating procedure included as
an appendix to the QAPjP. These topics are
discussed in more detail in Chapter 3 of this
handbook.
2.1.8 Specific Calibration Procedures and
Frequency
Since the majority of measurements made during a
trial burn are performed using standard EPA reference
methods, calibration procedures and frequency do not
have to be discussed in detail, but should be
referenced. This section of the QAPjP should state
the source of all standard analytical reference material
used in calibration, including chemical standards, gas
calibration cylinders, and reference thermometers.
The ultimate standards used for the analytical
procedure or instrument calibration and the
relationship of the calibration scheme to these
reference materials should be delineated. For any
nonstandard methods (such as facility standard
operating procedures), calibration procedure and
frequency must be included. Particular attention
should be paid to all process monitors and continuous
monitors. Calibrations should be summarized in a
table. Routine calibration of stack sampling equipment
is discussed in Section 3.3. Table 2-4 is an example
from a QAPjP.
2.1.9 Analytical Procedures
Most of these analytical procedures should follow EPA
standard methodology. All samples should be
identified in a table, along with the associated
analytical procedure. Written procedures in an appen-
dix should describe any analytical procedures unique
to that trial burn. All modifications of standard
methods must be identified, along with reasons for the
changes. Most procedures have allowable options to
ensure effective analysis for POHCs. If no options
(especially for VOST, SVOST, and metals) have been
clearly identified in the QAPjP or TBP, the permit
writer should ask the applicant to confirm their
absence.
Two items crucial for all POHC analysis are detection
limit and POHC quantitation. First of all, if a POHC
has not been detected in the stack gas sample, the
detection limit should be used for calculation of the
DRE. Thus, this determination can be a critical parameter in deciding if the DRE has been achieved. Often, the detection limit will be artificially low if it has been based purely on an instrumental detection limit and does not include method recovery of the POHC and possible interference from stack gas components. However, as long as the 99.99% DRE critical level is above the lower quantitation limit, achievement of DRE based upon the detection limit will not significantly affect a regulatory decision based upon DRE. Actually, if no POHC is detectable in the samples, a more conservative quantitation limit is recommended for DRE calculations as compared to
the detection limit.
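The following sketch shows how this plays out numerically when no POHC is detected and the quantitation limit is substituted for the measured stack emission. The DRE expression, DRE = (Win - Wout)/Win x 100, is the standard regulatory definition; the feed rate, stack flow, sample volume, and quantitation limit are assumed example values carried over from the sketch in Section 2.1.6.

```python
# Illustrative sketch: DRE for a nondetect, using the quantitation limit as Wout.
# All numerical inputs are assumed example values.

POHC_FEED_RATE_G_PER_H = 1000.0     # Win: assumed POHC mass feed rate
STACK_FLOW_DSCM_PER_H = 20000.0     # assumed dry standard stack gas flow rate
VOST_SAMPLE_VOLUME_L = 20.0         # assumed VOST sample volume
QUANTITATION_LIMIT_NG = 10.0        # assumed lower quantitation limit per sample

def dre_percent(w_in, w_out):
    """DRE = (Win - Wout) / Win x 100, with Win and Wout in the same mass-rate units."""
    return (w_in - w_out) / w_in * 100.0

# Treat the nondetect as if the POHC were present at the quantitation limit.
conc_ng_per_l = QUANTITATION_LIMIT_NG / VOST_SAMPLE_VOLUME_L
w_out_g_per_h = conc_ng_per_l * STACK_FLOW_DSCM_PER_H * 1000.0 / 1e9

dre = dre_percent(POHC_FEED_RATE_G_PER_H, w_out_g_per_h)
print(f"Wout from quantitation limit: {w_out_g_per_h:.3g} g/h")
print(f"DRE: {dre:.5f}%  (99.99% achieved: {dre >= 99.99})")
```

Here the calculated DRE (99.999%) clears 99.99% even with the conservative quantitation-limit assumption; a much higher quantitation limit or a much lower feed rate could change that conclusion, which is why these limits are critical parameters.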
Secondly, the successful detection and quantitation of
the POHC is of particular importance in trial burns.
This area requires a great deal of analytical expertise
and often involves a choice of options, modifications,
or additions to standard analytical methods. This
section of the QAPjP should present method perfor-
mance data for each POHC to demonstrate in
advance the effectiveness of the proposed
methodologies. These data may be derived from past
trial burns (recoveries of isotopically-labeled
surrogates of POHCs), from recovery studies of
POHC spikes of blank VOST or SVOST components,
or, in cases in which the stack gas matrix might
present serious interference problems, from a prelim-
inary "mini" trial burn conducted at the incinerator
prior to the actual RCRA trial burn. This assures the
regulatory agency that the analytical method is
capable of providing usable data. Permit reviewers
must exercise caution when reviewing the
development of alternative analytical methods or
alternative sampling approaches. The accuracy of a
POHC determination is highly dependent on adequate
method development. A qualified chemist should
make this determination. Analytical method perfor-
mance cannot be assumed from theoretical
postulates, but must be demonstrated in advance
using actual data obtained by the firm conducting the
trial burn analysis.
2.1.10 Specific Internal Quality Control Checks
For each analysis method, specific internal QC
procedures should be detailed in this section of the
QAPjP. These procedures should each have an
associated quality control objective, as outlined in
Section 5 of the QAPjP (Section 2.1.5 of this
handbook). For example, if accuracy is to be 80% to
120% for the chloride reference standard, the section
under chloride analysis should state the source and
concentration of this standard. For SVOST analysis,
the instrument check standard, the surrogate spiking
levels, the component to be spiked, the type and
number of blanks, the spiking levels of the blank
SVOST train, and required duplicate analysis of
samples should be described in detail.
Some QC procedures have criteria not related to
accuracy and precision. Blank analysis is an example.
Its objective is to determine the degree of
contamination of the measurement system. This
objective must be defined by: (a) the type of blank
(blank VOST train from field); (b) the frequency of the
blank (one per trial burn run); and (c) the acceptance
criteria.
-------
Table 2-4. Example Table of Calibration Procedures and Criteria for Sampling Equipment

Parameter | Calibration technique | Reference standard | Acceptance limita | Calibration frequency
1. Probe nozzle | Measure diameter to nearest 0.001 in | Micrometer | Mean of three measurements; difference between high and low ≤0.1 mm | Prior to test
2. Gas meter volume | Compare to wet test meter | Wet test meter | Record calibration factor; ±5% of factor | Prior to test; posttest
3. Gas meter temperature | Compare to mercury-in-glass thermometer | ASTM thermometer | ±5°F | Prior to test
4. Stack temperature sensor | Compare to mercury-in-glass thermometer | ASTM thermometer | ±1.5% of mean temperature | Prior to test; posttest
5. Final impinger temperature sensor | Compare to mercury-in-glass thermometer | ASTM thermometer | ±5°F | Prior to test
6. Filter temperature sensor | Compare to mercury-in-glass thermometer | ASTM thermometer | ±5°F | Prior to test
7. Aneroid barometer | Compare to mercury barometer | Mercury column barometer | ±2.5 mm | Prior to test
8. S-type pitot tube | NA | Design criteria | Meets RM2 criteria | Prior to test

a40 CFR 60, Appendix A.
Occasionally, these items can be summarized in
tables or presented more cohesively in the analysis
section of the QAPjP (handbook Section 2.1.9). If not, the QC section should at least reference the other section of the QAPjP in which they are presented.
Many of the analysis sections of this handbook outline
needed QC procedures in addition to those presented
in the methods; this chapter of the QAPjP should
identify any of those procedures being utilized for a
particular trial burn.
2.1.11 Data Reduction, Validation, and Reporting
For each major measurement parameter, a brief
description of the following should be included:
• The data reduction scheme for nonroutine methods, including all validation steps and the equations used to calculate the final results.
• Listing of all final experimental data to be reported in the trial burn report.
• Listing of all quality control data to be reported in the trial burn report.
This section of the QAPjP is difficult to define
explicitly. Approaches used by past applicants have
varied widely. For data reduction schemes in which
calculations are specified in the methods, only a
summary need be presented with minimal explanation.
However, the validation steps in the data reduction
process need to be identified. Validation of analysis
results can be carried out in many different ways, but
the central concept is that QC results must be within
the acceptance criteria for a given analysis.
Of particular importance is the use of blank data.
Routine correction of any stack gas sample results for
blank analysis is generally not recommended, regard-
less of the type of blank. The purpose of this
recommendation is to disallow any routine correction
of stack gas results to increase the DRE. If a need
does exist for blank correction, the VOST method
(0030)3 and the Hazardous Waste Measurement
Guidance Manual5 give specific procedures for blank
correction. Blank corrected emissions data should
also be reported without correction for comparison.
Any stack gas calculations for DRE, HCl emissions, or
metals emissions presented in this section that
routinely incorporate blank corrections should be
questioned by the permit reviewer.
All reportable test data and QC data must be
identified. This will preclude delays during review of
the trial burn report (TBR) because of insufficient
information. QC data are often neglected in trial burn
reports, but they are vital to assessing overall data
quality. Guidance on Setting Permit Conditions and
Reporting Trial Burn Results4 gives specific reporting
requirements and formats that should be used.
Section 3.6 of this handbook gives a summary of
reportable QC data.
2.1.12 Routine Maintenance Procedures and
Schedules
The purpose of this section is to list all critical
equipment necessary to maintain permit operating
conditions and to demonstrate continuing compliance
-------
to the permit. For each piece of measurement
equipment (e.g., a CO monitor, waste feed rate
monitor, combustion chamber pressure monitor, etc.),
a schedule and maintenance procedure should be
outlined. The brief statement "per manufacturer's
recommendations" is insufficient. Full procedures
must be provided in the permit application or QAPjP.
2.1.13 Assessment Procedures for Accuracy and
Precision
The formulae for assessing precision and accuracy
are given here. If the number of data points is less
than 4, precision should be expressed as range percent (RP):

    RP = [(X1 - X2) / average X] x 100                                   Eq. 2-1

where X1 = highest value and X2 = lowest value.

If n ≥ 4, precision should be expressed as relative standard deviation (RSD):

    RSD = (standard deviation / average value) x 100                     Eq. 2-2

Accuracy, if using reference material of known concentration, is usually expressed as:

    Accuracy (A) = (found concentration / actual concentration) x 100    Eq. 2-3

If accuracy is being determined by adding a known amount to a sample (spiking), it is usually expressed as recovery (R):

    R = [(found - native) / amount spiked] x 100                         Eq. 2-4

The found level is the amount determined in the spiked sample, and the native level is the amount determined in the unspiked sample. For spiked samples, recovery should always be expressed in relation to the amount spiked (a known quantity), not in relation to the amount spiked plus the native level (an unknown quantity, determined by the same analytical system being evaluated for accuracy). Therefore, recovery should not be calculated as R = 100 x [found / (amount spiked + native)].
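The four expressions above translate directly into code. The functions below are a minimal sketch of Eq. 2-1 through Eq. 2-4; the example values at the bottom are hypothetical.

```python
import statistics

def range_percent(values):
    """Eq. 2-1: RP = (highest - lowest) / average x 100, used when fewer than 4 points."""
    return (max(values) - min(values)) / statistics.mean(values) * 100.0

def relative_std_dev(values):
    """Eq. 2-2: RSD = (standard deviation / average) x 100, used for 4 or more points."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

def accuracy(found_concentration, actual_concentration):
    """Eq. 2-3: A = found / actual x 100, for a reference material of known concentration."""
    return found_concentration / actual_concentration * 100.0

def recovery(found, native, amount_spiked):
    """Eq. 2-4: R = (found - native) / amount spiked x 100, for spiked samples."""
    return (found - native) / amount_spiked * 100.0

if __name__ == "__main__":
    # Hypothetical duplicate chloride results (mg/L), a reference standard, and a spike
    print(f"RP  = {range_percent([98.0, 104.0]):.1f}%")
    print(f"RSD = {relative_std_dev([98.0, 104.0, 101.0, 99.0]):.1f}%")
    print(f"A   = {accuracy(47.5, 50.0):.1f}%")
    print(f"R   = {recovery(142.0, 45.0, 100.0):.1f}%")
```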
2.1.14 Audit Procedures, Corrective Action, and
QA Reporting
This section of the QAPjP should be divided into two
parts, one for trial burn activities and one for routine
incinerator operation. This section should cover all QA
activities for both topics. For the trial burn, all QAC
audits and reports should be identified. A minimum of
one audit of overall data quality should be carried out
by the applicant and reported in the TBR. Such audits
are discussed in greater detail in Section 3.4 of this
handbook. All audits, major problems, and significant
corrective action need to be reported to QA
personnel, project management, and corporate
management. The kinds of reports submitted (e.g.,
audits) and who may receive them (e.g., project
leader) should be identified in this section.
2.2 Review of the QAPjP with Trial Burn
Plan
The QAPjP should be detailed, specific, and centered
on the decisions that the permit writer must make. For
the trial burn data to be usable, specific QC
procedures must be followed and the related data
quality indicators must fall within the prescribed
criteria. All of these procedures and accompanying
criteria should be clearly identified in the QAPjP and
addressed in the TBR.
One of the inherent difficulties with a QAPjP is that it
forces an arbitrary distinction between QA/QC
procedures and the technical design and procedures
of the project itself. To avoid this, many people
integrate the TBP and the QAPjP. However, the
problem with this approach is that QC and the
associated data assessment parameters (and criteria)
get lost in the technical discussion of the project. The
QAPjP does not have to repeat details which are
given in the TBP; however, if the details are in the
TBP appendices or the analytical methods, all QA/QC
objectives and procedures at least must be
summarized in the QAPjP.
The regulatory agency needs the QAPjP as a basis for
justifying acceptance or rejection of the trial burn. The
QAPjP should be considered as similar to a contract.
The permit reviewer in approving the QAPjP is stating,
"If all QA/QC procedures are followed and meet the
appropriate acceptance criteria, then the trial burn
data will be judged a sufficient base for making the
permitting decision." An unclear QAPjP can contribute
to many difficulties in reviewing test reports and
possibly the rejection of a test as inadequate.
Previously agreed upon objectives (via a QAPjP)
serve as a useful vehicle for supporting acceptance or
rejection of trial burn results.
-------
-------
Chapter 3
General Topics
Overview discussions of selected general topics that
should be covered in the QAPjP are provided in this
chapter. The topics are not specific to any particular
method. They are relevant to the overall data quality
and conduct of a trial burn.
3.1 Sample Handling and Custody
Chain of custody (COC) is not required for trial burns; however, the permit applicant may choose to use COC procedures. A description of COC requirements can be found in SW-846 (Section 1.3)3 and the National Enforcement Investigations Center (NEIC) Policy and Procedures.6 Strict sample custody is
currently all that is required. Procedures for sample
custody should be outlined in Section 7 of the QAPjP
or in a standard operating procedure appended to the
QAPjP. At the minimum, these procedures should
contain the following elements:
• A master record containing a list of all samples
taken, time and date of sampling, description of
sample, unique identifier for each sample, and
sample preservation and sample storage
conditions before shipment.
• For each sampling event, a sample data form
should be presented, including one for stack
samples, waste feed samples, scrubber water
samples, etc. At the minimum, each form should
indicate: (a) the individual taking the sample; (b)
the date and time of sample collection; (c)
sampling technique; (d) compositing technique;
(e) sample container; (f) sample identifier; (g)
sample location; (h) sampling equipment; and (i)
any sample preservation or storage before
shipment.
• Each sample shipment should be accompanied
by a sample inventory form which should
indicate: (a) every sample shipped (by identi-
fier); (b) sample packaging; (c) date of shipment;
(d) carrier; and (e) any sample preservation such
as packing in ice. Upon receipt, the following
should be recorded on the same form: (a) all
samples received; (b) their condition upon
receipt; (c) if shipped with ice, temperature of
samples upon receipt; (d) person receiving
samples; and (e) storage conditions upon
receipt.
Examples of the above forms and all records should
be presented in the QAPjP. Every sample collection
form, sample shipping inventory, and the master
records should be available to the permit writer. As
part of the review of the trial burn report (TBR), the
permit writer may spot check these records to ensure
that samples have been handled properly, taken at the
correct time and in the correct manner, assigned a
unique identifier, received intact by the laboratory, and
that all sample preservation was appropriate. If
samples are not traceable or not properly handled,
explicit justification for data acceptance from the
permit applicant is required.
3.2 Holding Times
Most analytes have a finite stability in a sample matrix.
Holding time is the maximum allowable time between
sample collection, sample preparation, and sample
analysis; after the holding time has expired, a
significant probability of lowered analyte concentration
in the sample exists. Holding times are dependent
upon the analyte sample matrix and sample
preservation techniques such as storage temperature
and chemical methods to stabilize the analytes (e.g.,
pH adjustment).
Since a lower analyte concentration is the expected
result of exceeded holding times, from the regulatory
perspective (attainment of DRE), waste feed holding times are not as critical as those for stack gas, ash, and air pollution control samples (if waste feeds are biased low, this will lower the DRE). VOST samples
must be kept at or below 5°C and analyzed within 14
days after collection; SVOST samples must be stored
at the 5°C temperature, extracted within 14 days, and
analyzed within 40 days after extraction. These
traditional holding times are not based on
experimental data for the individual analyte in each
matrix, but on information about general classes of
compounds and the most common analytical matrices.
Particularly reactive or labile compounds may require
a more stringent holding time or a different
preservation technique.
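Checking holding times is simple date arithmetic. The sketch below applies the limits quoted above (analysis within 14 days of collection for VOST; extraction within 14 days and analysis within 40 days of extraction for SVOST); the dates themselves are hypothetical.

```python
from datetime import date

def vost_holding_ok(collected, analyzed, limit_days=14):
    """VOST: analyze within 14 days of collection (samples kept at or below 5 deg C)."""
    return (analyzed - collected).days <= limit_days

def svost_holding_ok(collected, extracted, analyzed,
                     extraction_limit_days=14, analysis_limit_days=40):
    """SVOST: extract within 14 days of collection, analyze within 40 days of extraction."""
    return ((extracted - collected).days <= extraction_limit_days and
            (analyzed - extracted).days <= analysis_limit_days)

if __name__ == "__main__":
    # Hypothetical trial burn dates
    print(vost_holding_ok(date(1989, 6, 5), date(1989, 6, 16)))   # True: 11 days
    print(svost_holding_ok(date(1989, 6, 5), date(1989, 6, 21),
                           date(1989, 8, 4)))                     # False: 16 days to extraction
```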
11
-------
General guidance on holding times for incineration
samples is given in Table 3-1, and SW-846 holding
times are contained in Table 3-2. All holding times
should be summarized and reported in the TBR.
Whenever intended holding times are extended in the QAPjP, or actual holding times are exceeded after the trial burn, justification based upon actual sample data should be requested from the permit applicant.
Bag samples or grab samples of stack gas or gaseous
waste feed samples are a very special case. An
analyte in the gaseous state is potentially more
reactive and labile as well as difficult to contain.
Therefore, if bag samples or grab samples of a
gaseous media are taken, the permit writer should
require holding times as short as is logistically
feasible.
3.3 Routine Calibration of Stack
Sampling Equipment
The quality of stack sampling cannot be evaluated by
a performance audit. The QA/QC results therefore
must be managed by controlling the sampling
procedures and the calibration of stack sampling
equipment. The stack sampling components requiring
calibration consist of dry gas meters, rotameters, pitot
tubes, vacuum gauges, manometers, barometers, and
temperature-indicating devices.
Many testing organizations have found it desirable to
establish a routine calibration for these components
before trial burns. In all cases, the calibration is best
performed after every field test and after repairs have
been made on any components. These calibrations
then serve effectively as pretest calibrations for the
tests to follow.
All calibrations must be documented. Copies of the
documents should be included in the TBR. The
calibration documentation should include as a
minimum: (a) the device being calibrated; (b)
identification (ID) number; (c) reference device; (d)
date reference device last calibrated; (e) ID of
reference device; (f) date calibration performed; (g)
by whom calibration was performed; (h) description
of reference device; and (i) total volume sampled
(when applicable).
The calibration documents should be included in the
TBR to enable a permit writer to determine if proper
procedures were employed. A document of
certification performed by an outside organization
without a description of the procedures used and the
organization's qualifications is insufficient.
Procedures specified in the Quality Assurance
Handbook for Air Pollution Measurement Systems
and amendments to the methods published in the
Federal Register provide the calibration procedures.
Dry gas meters used in sampling trains may be
calibrated using either a wet test meter, a secondary
standard dry gas meter, or an orifice. The procedures
are reported in detail in 50 FR 01164 (01/09/85) and
for critical orifices in 52 FR 09657 (03/26/87), and 52
FR 22888 (06/16/87). Reviewers of the TBR should
check calibration for procedural errors. For example,
volume measurement devices may be operated
outside of the range and/or for an insufficient time
period. One or more complete revolutions of wet and
dry gas meters are required and at least three
calibration runs should be made at each setting or
rate.
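One simplified way to screen dry gas meter calibration runs against the ±5% acceptance limit on the calibration factor in Table 2-4 is sketched below. It compares each run's ratio of reference (wet test meter) volume to dry gas meter volume with the mean of the runs; a full Method 5 calibration also applies temperature and pressure corrections, which are omitted here, and the run volumes are hypothetical.

```python
# Simplified sketch only: dry gas meter calibration run consistency check.
# No temperature/pressure corrections are applied; volumes are hypothetical (m3).

def calibration_factors(runs):
    """Each run is (wet_test_meter_volume, dry_gas_meter_volume); factor = Vref / Vdgm."""
    return [v_ref / v_dgm for v_ref, v_dgm in runs]

def within_5_percent_of_mean(factors):
    """True if every run's factor agrees with the mean factor to within +/-5%."""
    mean = sum(factors) / len(factors)
    return all(abs(f - mean) / mean <= 0.05 for f in factors)

if __name__ == "__main__":
    runs = [(0.283, 0.286), (0.285, 0.287), (0.284, 0.288)]  # three runs at one setting
    factors = calibration_factors(runs)
    print([round(f, 4) for f in factors])
    print("Within +/-5% of mean:", within_5_percent_of_mean(factors))
```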
Rotameters used to set a sampling rate, such as those used in Method 3 and VOST, do not need to be calibrated; the manufacturer's calibration curves may be used.
This allowance is permitted because total gas sample
volume is measured by the dry gas meter.
Assurance of the calibration of pitot tubes consists of
visual inspection before and after a test. If the pitot
tube is part of an assembly, it must either meet the
noninterference standards outlined in EPA Method 2
or be calibrated against a reference pitot tube
following the procedure specified in EPA Method 2.
The procedures to be followed for calibrating gauges,
manometers, barometers, and temperature-indicating
devices are specified in the procedures. In most
cases, calibration should be performed after every test
and documented. Documentation is not simply a
statement that a device was calibrated following
recommended procedures. The documentation of the
calibration process is used to facilitate location of any
procedural errors which may have been introduced
into the system. In those cases in which an item is a
subcomponent of a system (e.g., vacuum gauge on a
meter console), the item should at least be listed on
the system check record. Barometer calibration
records should indicate the reference source and any
altitude correction that may have been applied. In
calibrating temperature-indicating devices, any indirect
reading systems should be calibrated using the entire
device, i.e., sensor, umbilical cord, and read-out
system.
The criteria and methods discussed in this section
were summarized in Table 2-4.
3.4 Internal Auditing
Internal audits are conducted by the applicant or the
applicant's contractors. External audits are conducted
by agency personnel or agency contractors.
Firms conducting trial burns should have a QA
program run by a Quality Assurance Coordinator
(QAC). This program may include:
12
-------
Table 3-1. General Recommendations for Containers, Preservation, and Holding Times
[Table not legible in this copy. It summarizes recommended sample containers (e.g., VOST cartridges of Tenax or charcoal, VOA vials with no headspace, glass containers with Teflon-lined caps, standard petri dishes, integrated bags, quartz filters), preservation, and holding times for XAD-2 modules, stack gas filters, waste feeds, ash, liquid wastes, solid wastes, and gaseous samples.]
13
-------
Table 3-2.a SW-846 Holding Times for Water Samples
40 CFR Section 136.3, Table II--Required Containers, Preservation Techniques, and Holding Times (taken from Test Methods for Evaluating Solid Waste (SW-846), except where noted with an (*))

Parameter No./name | Containerb | Preservation | Maximum holding time

INORGANIC TESTS:
1. Acidity | P, G | Cool, 4°C | 14 days
2. Alkalinity | P, G | Cool, 4°C | 14 days
3. Ammonia | P, G | Cool, 4°C, H2SO4 to pH < 2 | 28 days
4. Bromide | P, G | None required | 28 days
5. Cyanide, total and amenable to chlorination | P, G | Cool, 4°C, NaOH to pH > 12, 0.6 g ascorbic acid | 14 days
6. Hydrogen ion (pH) | P, G | None required | Analyze immediately
7. Sulfide | P, G | Cool, 4°C, add zinc acetate plus sodium hydroxide to pH > 9 | 7 days

METALS:
1. Chromium VI | P, G | Cool, 4°C | 24 hours
2. Mercury | P, G | HNO3 to pH < 2 | 28 days
3. Metals, except chromium and mercury | P, G | HNO3 to pH < 2 | 6 months

ORGANICS:
1. Purgeable halocarbons | G, Teflon-lined septum | Cool, 4°C, 0.008% Na2S2O3 | 14 days
2. Purgeable aromatics | G, Teflon-lined septum | Cool, 4°C, 0.008% Na2S2O3, HCl to pH 2 | 14 days (*7 days unpreserved)
3. Acrolein and acrylonitrile | G, Teflon-lined septum | Cool, 4°C, 0.008% Na2S2O3, adjust pH to 4-5 | 14 days
4. Phenol | G, Teflon-lined cap | Cool, 4°C, 0.008% Na2S2O3 | 7 days until extraction, 40 days after extraction
5. Benzidines | G, Teflon-lined cap | Cool, 4°C, 0.008% Na2S2O3 | 7 days until extraction
6. Phthalate esters | G, Teflon-lined cap | Cool, 4°C | 7 days until extraction, 40 days after extraction
7. Nitrosamines | G, Teflon-lined cap | Cool, 4°C, 0.008% Na2S2O3, store in dark | 7 days until extraction, 40 days after extraction
8. PCBs | G, Teflon-lined cap | Cool, 4°C | 7 days until extraction, 40 days after extraction
9. Polynuclear aromatic hydrocarbons | G, Teflon-lined cap | Cool, 4°C, 0.008% Na2S2O3, store in dark | 7 days until extraction, 40 days after extraction
10. Chlorinated hydrocarbons | G, Teflon-lined cap | Cool, 4°C, 0.008% Na2S2O3 | 7 days until extraction, 40 days after extraction
11. TCDD (dioxin) | G, Teflon-lined cap | Cool, 4°C, 0.008% Na2S2O3 | 6 months prior to extraction, 40 days after extraction
12. TCDF (dibenzofuran) | G, Teflon-lined cap | Cool, 4°C, 0.008% Na2S2O3 | 6 months prior to extraction, 40 days after extraction

PESTICIDE TESTS:
1. Pesticides | G, Teflon-lined cap | Cool, 4°C, pH 5-9 | 7 days until extraction, 40 days after extraction

aAdapted from "Removal Program Sampling QA/QC Plan - Interim Guidance," Emergency Response Division, EPA, OERR, OSWER, OSWER Directive 9360.4-01, February 2, 1989.
bPolyethylene (P) or glass (G).
• System audits of field and/or laboratory operations to ensure that the procedures specified in the TBP and QAPjP are followed.
• Instrument calibration check samples.
• Blind spikes of blank SVOST trains with POHC and surrogate POHC. (A "blind spike" means that the amount spiked is known only to the QAC.) This is used to independently verify the accuracy of the sample extraction and analysis.
• Submission of a blind calibration check standard for each instrumental analysis as an independent verification of calibration accuracy.
• Submission of blind spikes of the POHC waste feed for analysis and determination of spike recovery.
• Submission of EPA and/or NIST reference samples for metals and target analytes.
• Audits of the field records, raw analysis data, and other project records to determine if the trial burn was conducted as specified in the TBP and the QAPjP. This audit entails the tracing of one run's data and verification of selected analysis results and is referred to in Section 2.1.14 of this handbook.
• Overall assessment of data quality based upon reported QC data.
14
-------
The level of effort described above is not required for
every trial burn. However, all these audits are
suggested in cases in which the trial burn data are
likely to be challenged, or when the trial burn is highly
complex. These checks are made independently of
the analysis team, and thus are relatively free of any
bias, as well as carrying more weight in validating
sample results. All internal audits identified in the
QAPjP should be reported in the TBR.
For all trial burns, the QAC should do an audit of data
quality, inspecting field records, raw analysis data, and
project records as well as assessing overall data
quality based on reported QC data. The QAC should
inspect all the data for at least one run and ensure traceability from field records through analysis records to final results (DRE, particulate, chloride, etc.). In this
audit, the performance of the experimental work must
be compared with the TBP and QAPjP for compliance.
Selected data should be independently recalculated
and verified by the QAC. In addition to this audit, all
QC data should be examined and compared to the
criteria for data acceptance given in the QAPjP. All
data which do not meet the QC criteria must be
discussed in the TBR in terms of acceptance of
sample results, given the failure to meet the criteria. A
brief summary of the audit results and data quality
assessment should be included as an appendix to the
TBR. This summary must be prepared by the QAC,
not the project leader or TBR author.
The purpose of the QAC audit and quality assessment
is to provide an independent review of the trial burn
results and supporting documentation before
submission to the regulatory agency. This internal
audit should ensure that the data are usable, which
will save time during permit application review. Data
quality problems and possible incomplete or missing
sections of the TBR should be addressed by the
applicant before the TBR is submitted. The EPA
requires a similar review and narrative summary in
other programs for acceptance of experimental
results. Following this audit sequence should also
relieve the permit writer of some of the burdensome
review of the field and analysis records, and allow
more time for engineering and regulatory assessment
of the trial burn results.
3.5 Use of External Audits
3.5.1 Types of Audits
External audits can be a powerful tool in controlling
and assessing the quality of an environmental data
collection program. Four basic types of external audits
that may apply to a trial burn are:
• Field audit. Conducted on-site during the trial
burn. Consists of observation of all sampling and
analysis activities conducted during the trial
burn.
• Laboratory system audit. Conducted at the
laboratory doing the analysis. Consists of
observing the analysis of trial burn samples and
inspecting analysis and project records. This
audit is difficult to accomplish if analysis is
performed at more than one location.
• Performance audit. This audit is an external
check of the accuracy of the measurement
system. It consists of supplying a sample for
analysis whose concentration is known only to
the regulatory agency. Analysis results are
compared to the actual concentration for an
accuracy determination. One common example
is the VOST audit cylinders.
• Referee analysis audit. This audit is also an
external check of accuracy. Trial burn samples
(waste feeds, impingers for chloride, etc.) are
sent to a referee laboratory (contracted by the
regulatory agency) for analysis. Results from the
referee analysis are compared to the results in
the TBR for a determination of accuracy or
precision.
3.5.2 Audits Recommended for All Trial Burns
A field audit is recommended for every trial burn. The
audit usually includes observation by the permit
writers or their representatives and use of a VOST
audit cylinder. QA/QC in field sampling is considerably
more subjective than the QA/QC in analysis. Chemical
analysis can be designed to include many indicators
of data quality; however, the quality of field sampling
is more dependent upon the skills of the field
sampling crew. The permit writer who observes the
trial burn can be assured that sampling was
conducted according to plan, and that all sampling or
incinerator problems are resolved with his or her
concurrence. Ideally, two individuals should be
present, one on the stack to observe the critical stack
sampling continuously and the other to observe waste
feed sampling, air pollution control equipment
sampling, ash sampling, incinerator operation, and the
operation of continuous monitoring systems.
Field audits should be conducted by individuals with
an intimate and thorough knowledge of sampling
methodology. Auditing procedures are discussed in
many documents; however, the Trial Burn Observation Guide (Reference 1) is a good source for specific
guidance on auditing hazardous waste incineration
trial burns.
If any of the POHCs is volatile, the field audit should
include an analysis of a VOST audit cylinder. VOST
audit cylinders and field audits have been effectively
used for many years and constitute accepted practice.
VOST audit cylinders can be used for both VOST and
gas bag sampling. (Use of VOST audit cylinders is
required for all trial burns except where Method 0010
Table 3-3. Available Audit Cylinders*
(Analytes and available concentration ranges†)

Group I: Carbon tetrachloride, Chloroform, Perchloroethylene, Vinyl chloride, Benzene. Concentration ranges: 7-90 ppb, 90-430 ppb, 430-10,000 ppb.
Group II: Trichloroethylene, 1,2-Dichloroethane, 1,2-Dibromoethane, F-12, F-11, Bromomethane, Methyl ethyl ketone, 1,1,1-Trichloroethane, Acetonitrile. Concentration ranges: 7-90 ppb, 90-430 ppb.
Group III: Vinylidene chloride, F-113, F-114, Acetone, 1,4-Dioxane, Toluene, Chlorobenzene. Concentration ranges: 7-90 ppb, 90-430 ppb.
Group IV: Acrylonitrile, 1,3-Butadiene, Ethylene oxide, Methylene chloride, Propylene oxide, Ortho-xylene. Concentration ranges: 7-90 ppb, 430-10,000 ppb.

*From SW-846, Method 0030; cylinders can be obtained from: Audit Cylinder Gas Coordinator (MD-77B), Quality Assurance Division, Environmental Monitoring Systems Laboratory, Research Triangle Park, NC 27711.
†Analytes are in nitrogen. Concentrations are based on volume (v/v).
is the only sampling method.) Cylinders available in
1986 and their source are shown in Table 3-3. To be
an effective audit, the cylinder should be brought to
the trial burn by the field auditor and the field sampling
crew should conduct the sampling. The audit cylinder
seal should be broken by an authorized person, and
the cylinder should be sampled in the presence of the
auditor before the trial burn begins or immediately
afterwards. Four samples should be taken. Only three
must be analyzed; the fourth serves as a backup. If
the fourth sample is analyzed, results of all four
analyses should be reported. The cylinder should not
be left with the sampling crew; it should remain in the
possession of the auditors and be returned by them
following the trial burn. Accuracy criteria for VOST
audits are ±50% of actual concentration (see Section
7.3 and Method 0030).
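The audit accuracy check itself is simple arithmetic. As an illustration only, the following Python sketch screens reported audit results against the ±50% criterion; the cylinder value and reported concentrations are wholly hypothetical.

    # Screen VOST audit results against the +/-50% accuracy criterion.
    # The certified cylinder value and reported results are hypothetical.
    cylinder_ppb = 250.0
    reported_ppb = [210.0, 305.0, 120.0]   # the three analyzed audit samples

    for i, result in enumerate(reported_ppb, start=1):
        percent_of_actual = 100.0 * result / cylinder_ppb
        within = 50.0 <= percent_of_actual <= 150.0
        print(f"Sample {i}: {percent_of_actual:.0f}% of actual, "
              f"{'acceptable' if within else 'outside criterion'}")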
3.5.3 Audits Recommended in Special
Circumstances
Most trial burns are conducted by contractors of the
incineration facility. This lends a degree of
independence to the analysis of the samples.
However, in some cases the facility is large enough to
have the necessary resources to conduct all
determinations internally. In this case, performance
audits, laboratory system audits, and referee analysis
audits are suggested.
At the minimum, performance audits should consist
of:
• Analysis of a calibration standard containing each POHC at the final expected sample concentration at the 99.99% DRE level.
• Analysis of a standard reference solution of
chloride.
• Analysis of a synthetic waste sample at a POHC concentration similar to the waste used during the trial burn. For trial burns using synthetic waste streams or a spiking solution for waste feeds, making a synthetic waste sample is not overly difficult.
• Analysis of the VOST audit cylinder.
At a minimum, the laboratory system auditor should
observe and examine:
• Sample preparation and analysis for samples
from VOST, SVOST, chloride, waste feed, air
pollution control devices, and ash.
• Analysis and balance calibration records for
particulates.
• Analysis staff credentials.
Referee audits should consist of:
• Analysis of waste feed for POHCs.
• Analysis of impingers for chloride.
3.5.4 Documentation of Audits and Objectives
of the Audits
All audit objectives should be set out in detail in
advance by the regulatory agency. Preferably, they
should be outlined in the letter indicating acceptance
of the TBP and QAPjP and providing a schedule for all
audits. The facility should be informed that
acceptance of the trial burn results is dependent on
receiving a positive assessment from all field and
laboratory auditors. All performance audits and referee
audits should show an accuracy within predetermined
acceptance criteria.
Scheduling and discussing the audits in advance is
important. Sometimes there is limited room for field
auditors in the incinerator control room or on the
stack. A performance audit or referee analysis audit
without accompanying acceptance criteria is of limited
use. The acceptance criteria should be agreed to by
both the regulatory agency and the permit applicant.
Without acceptance criteria, the validity of
performance audit results is left to technical judgment
and is open to interpretation. Since sample analysis is
often conducted over 40 to 50 days, laboratory
system audits need to be scheduled so that a
complete preparation and analysis cycle can be
observed.
Finally, audits must be reported. All field and
laboratory audits must be reported as soon as
possible, preferably within 2 weeks of completion.
These audit reports should be appended to the TBR,
and any noted problems or deficiencies should be
addressed by the applicant. Performance audit results
should be reported in the TBR. The accuracy
indicators for performance audit samples (e.g., audit
cylinders) should be calculated by the permit writer
and the results reported to the applicant. The permit
applicant should respond to any difficulties following
receipt of the audit results.
3.6 Reporting QA/QC Results
This section of the handbook gives a general
summary of the QC information needed for the major
measurement areas. QA/QC information from the trial
burn that will be reported should be outlined in the
QAPjP. All field records, all calibration data (analytical
and field), all precision and accuracy determinations
associated with QA objectives (e.g., surrogates,
spikes, duplicates, standard reference material), all
internal audits, and the data quality assessment report
from the QAC should be included in the TBR.
Precision and accuracy determinations should be
clearly presented with all results calculated. For
example, if duplicates are analyzed to determine
precision by range percent (RP), the individual
determinations plus the calculated RP should be
presented. Any value which falls outside the data
quality objectives should be flagged in the data tables
and discussed in the text (or a footnote) in terms of
how the apparent problem affects overall sample
results.
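For illustration, the range percent calculation for a duplicate pair can be sketched in Python as follows. It assumes RP is the absolute difference divided by the mean of the duplicates, times 100; the duplicate values and the data quality objective shown are hypothetical.

    # Range percent (RP) for a duplicate pair, assuming RP = |x1 - x2| / mean * 100.
    def range_percent(x1, x2):
        return abs(x1 - x2) / ((x1 + x2) / 2.0) * 100.0

    dup_1, dup_2 = 14.2, 16.9        # hypothetical duplicate determinations
    objective = 35.0                 # data quality objective from the QAPjP (assumed)
    rp = range_percent(dup_1, dup_2)
    flag = "" if rp <= objective else "  (outside objective; discuss in TBR text)"
    print(f"Duplicates {dup_1} and {dup_2}: RP = {rp:.1f}%{flag}")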
In general the following QA/QC information is
desirable.
Sample traceability:
• Master record or inventory giving all samples
and identifiers.
Holding times:
• For analysis of volatile organics in waste feed,
fuel or ash, and air pollution control device
(APCD) samples, the number of days between
sampling and analysis should be presented
either in a separate table or be incorporated into
the sample results table.
• For analysis of semivolatiles in waste feed, fuel,
ash, and APCD samples (both GC/MS and non-
GC/MS analyses), the number of days between
sampling and extraction, as well as the number
of days between extraction and analysis should
be reported.
• Any sample analysis that exceeded holding times
should be specifically mentioned in the text.
Technical justification for use of the data must
be offered before sample results obtained after
holding times have expired can be considered
for acceptance. However, exceeding holding
times must be avoided and usually results in
rejection of the sample data.
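For illustration, holding times can be tabulated directly from the field and laboratory dates. The following Python sketch flags exceedances for discussion in the TBR; the sample identifiers and dates are hypothetical, and the 7-day and 40-day limits are assumed for the semivolatile example.

    # Tabulate holding times (days) and flag exceedances; values are hypothetical.
    from datetime import date

    samples = [
        # (sample id, date sampled, date extracted, date analyzed)
        ("WF-01", date(1989, 6, 5), date(1989, 6, 9),  date(1989, 7, 1)),
        ("WF-02", date(1989, 6, 5), date(1989, 6, 14), date(1989, 7, 30)),
    ]

    for sid, sampled, extracted, analyzed in samples:
        to_extraction = (extracted - sampled).days
        to_analysis = (analyzed - extracted).days
        notes = []
        if to_extraction > 7:
            notes.append("extraction holding time exceeded")
        if to_analysis > 40:
            notes.append("analysis holding time exceeded")
        print(sid, to_extraction, to_analysis, "; ".join(notes) or "within limits")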
Waste/fuel/APCD sampling:
• All field records showing sampling method,
dates, times, sampling equipment, field sampling
personnel, sample preservation, sample
identification number, and compositing
techniques.
Stack gas sampling:
• All field records required in the methods.
• All calibration records for pretest and posttest
calibration.
• All calibration records for calibration equipment.
Analysis:
• All initial calibration results (e.g., source and
purity of standards, calibration standards and
responses for calibration curve, calculation of
response factors, calculation of linearity, average
response factors, standard deviations). The
forms in SW-846 Chapter 14 may be used.
Tables or graphs contained in output from
analytical data systems may be sufficient.
• All continuing calibration results (e.g., daily
calibration, calculation of percent difference from
initial calibration). Again, tables or graphs
contained in output from analytical data systems
are sufficient. Balance calibrations should be
reported for particulate determinations.
• All accuracy determinations (e.g., calibration
check standards, interference check standards,
spikes, analyses of reference materials, analysis
of performance audit samples, analyses of
spiked sampling trains, and surrogate analyses).
All accuracy values such as percent recovery
should be calculated. Summary averages are
sometimes needed, particularly for surrogate
recoveries, yet the individual values should be
reported also. Comparisons or averages of
heterogeneous matrix types should be avoided.
For example, surrogate or spike recoveries for
aqueous waste should be compared only to
recoveries for similar aqueous waste, not to a
liquid organic waste.
• All precision determinations (e.g., replicate
analyses, replicate sample preparation and
analyses). Again, comparison should be made
only between similar matrix types.
• All blank determinations. At a minimum, this
should include all field blanks and at least one
method blank per analysis. Volatile organic
analyses should include a method blank
determination for each day.
Quality control assessment:
• Each measurement of precision and accuracy
and each instrument calibration which does not
meet criteria established in the method or QAPjP
should be discussed in terms of its effect upon
sample results.
• The quality control assessment should be made
by the author of the TBR, included in the main
text of the TBR, and discussed or elaborated
upon by the QAC in the QAC report.
Quality assurance coordinator report:
• All internal system audits and audits of field records, analysis records, and other project records should be summarized and reported with an assessment of the overall data quality found during the audits. This assessment of data quality should refer to or include any quality assessments conducted by other project personnel and reported in the TBR.
• Results of all samples submitted for analysis by the QAC should be reported, along with associated precision and accuracy results.
3.7 Evaluating Trial Burn QA/QC Results
3.7.1 Use of Data Quality Indicators and
Acceptance Criteria
Quality assurance objectives, QA/QC procedures, and
the acceptance criteria for these parameters must be
clearly identified in the QAPjP. Assessing the data
quality is relatively simple if: (a) agreement exists on
these points; (b) all QC data are reported; and (c) the
TBR contains the independent assessment done by
the QAC.
All measurement systems will have an agreed upon
level of precision and accuracy as well as accom-
panying QC procedures to indicate achievement of
this level. The QAC must see that all QC procedures
have been completed and have met criteria. Cases in
which the criteria have not been met or procedures
have not been completed will be noted by the QAC,
and data should not be accepted unless the applicant
provides an adequate technical justification for use of
the data. The permit writer will need to spot check
critical QC areas to ensure that the QAC review was
valid and then accept or reject the rationale for
accepting any results outside the QC criteria.
When QA objectives are based upon regulatory
decisions and agreed upon in advance, a judgment on
data quality does not have to be made at the TBR
stage of the project if the QC data meet the
acceptance criteria. A decision on data quality is a
major benefit of using QA objectives; however, it is
dependent upon rigorous planning, and it cannot be
added after the data have been acquired.
3.7.2 Checklist for Reviewing RCRA Trial Burn
Reports
A checklist for reviewing RCRA TBRs has recently
been developed (Reference 2). This checklist has six
basic functions:
• Foster consistency in TBR review.
• Evaluate the completeness of the TBR.
• Evaluate the validity of the TBR in relation to regulatory statutes and policy.
• Compare the actual trial burn to the planned activities.
• Ensure that QC results have met the associated objectives and criteria.
• Provide written documentation of the TBR review.
The checklist serves as a supporting document for
regulatory decisions based upon trial burn data. The
various sections of the checklist concerning sampling,
engineering, regulatory policy, analytical methods, and
quality control would best be completed by experts in
each related field. The final decision maker then
needs only to review the checklist to find the critical
problem areas and any data of questionable quality. It
is suggested that the checklist be given to the permit
applicant before the TBR is written to ensure that all
necessary information will be included in the report.
The applicant's QAC may fill out the pertinent
sections of the checklist (sampling, analysis, and
QA/QC) to ensure that the TBR is complete and that
any questionable data areas have been addressed.
This checklist can serve as a guide to "auditing" the
trial burn results and isolating key information for later
agency management review. However, some areas in
the checklist concerning QA/QC cover topics which
are not pertinent to all trial burns. The key documents
to be reviewed are the TBP and the QAPjP.
The analysis and QA/QC portions of the checklist
assume that the permit writer has full access to all the
raw data supporting the trial burn results (field data,
analysis data, etc.). Sometimes these data can come
from multiple laboratories or be intermixed with
another project's records. Since supplying the raw
data is a burden, this need should be conveyed to the
applicant in advance. One option for the reviewer is to
read the TBR, decide which trial burn run is the most
critical (if possible), and thoroughly review the records
for only that run. However, the logistics of obtaining all
raw data from a single run can still present problems.
Sometimes the TBR and associated raw data are
reviewed by a team of technical experts (e.g.,
chemists, engineers, sampling personnel). One prob-
lem with a team review of a TBR is closing the loop
on the decision process. An analytical or sampling
expert may raise serious questions concerning data
quality and relay concerns to the permit writer.
However, the process should not end here. Any
serious problems raised during a trial burn review
should be presented along with an assessment of their
impact upon overall acceptance of the results for
permitting purposes. The concerns of the individual
experts should be relayed to the applicant for
justification of data acceptance given the quality
problems. The applicant response should be
forwarded to the initial expert reviewer who should
respond as to the appropriateness of the justification.
This whole process should be documented in writing
in support of the permitting decision.
3.7.3 Review of TBR-Decision-Based Criteria
The data presented in the TBR must be of sufficient
quality to be the basis for regulatory decisions and
must contain the necessary information for outlining
the permit operating conditions. There are three main
performance-based regulatory questions:
• Was the 99.99% DRE achieved?
• Were the hydrogen chloride emissions < 4 lb/h
or were 99% of potential chloride emissions
removed?
• Were the particulate emissions less than 0.08
grains/dscf?
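For illustration, the arithmetic behind these three questions can be sketched in Python as follows. The DRE calculation uses the standard definition, DRE = (Win - Wout)/Win x 100, and every input value shown is hypothetical.

    # Check hypothetical trial burn results against the three performance criteria.
    w_in_lb_per_hr  = 120.0       # POHC mass feed rate
    w_out_lb_per_hr = 0.00035     # POHC mass emission rate at the stack
    hcl_emitted_lb_per_hr = 2.1
    hcl_fed_lb_per_hr = 310.0
    particulate_gr_per_dscf = 0.05

    dre = (w_in_lb_per_hr - w_out_lb_per_hr) / w_in_lb_per_hr * 100.0
    hcl_removal = (hcl_fed_lb_per_hr - hcl_emitted_lb_per_hr) / hcl_fed_lb_per_hr * 100.0

    print(f"DRE = {dre:.4f}%, criterion met: {dre >= 99.99}")
    print(f"HCl = {hcl_emitted_lb_per_hr} lb/h ({hcl_removal:.1f}% removal), "
          f"criterion met: {hcl_emitted_lb_per_hr < 4.0 or hcl_removal >= 99.0}")
    print(f"Particulate = {particulate_gr_per_dscf} gr/dscf, "
          f"criterion met: {particulate_gr_per_dscf < 0.08}")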
If the data associated with these decisions do not
meet the criteria established for precision and
accuracy, sample results should not be automatically
rejected. For example, if the observed DRE is
99.9996%, well above the regulatory minimum, then
minor problems with accuracy or precision can be
considered moot. Precision and accuracy data can in
some cases be used to define a confidence window
around reported values, which can then support the
regulatory decision. The same can be said concerning
particulate and hydrogen chloride emissions.
However, even though potential precision and
accuracy problems appear minor when the regulatory
objectives have been achieved with large margins for
error (e.g., 99.9996% DRE), if precision and accuracy
are not measured and quality control procedures are
not followed, the data quality is unknown. In these
cases, the permit writer cannot judge whether the
data are acceptable. The only recourse remaining is
to assess the impact of the missing QC data. For
example, if no spike sample was used for the back
impinger of a chloride determination, but levels were
very low compared to the front impingers, then this
missing information is not critical. However, if
surrogates were not spiked into the SVOST compo-
nents and no POHCs were spiked into a blank train
for recovery determinations, then there is no indication
of the accuracy of the complete sample preparation
and analysis. Data critical to the DRE decision are then of completely unknown accuracy. In this case, if no
POHCs are detected in stack samples there is no
evidence to indicate they would have been detected if
the POHCs had been present in the stack samples.
QA/QC criteria need to be applied carefully. Data that
do not meet the criteria should not be accepted
unless the applicant provides adequate technical
justification for use of the data.
Chapter 4
QC Procedures for Sampling Waste, Ash, Fuel, and Air Pollution
Control Device (APCD) Effluent
The QA/QC procedures that pertain to sampling of
waste, ash, fuel, and APCD effluent are covered in
this chapter, with emphasis on the establishment of
written procedures and documentation as well as
ensuring that they are performed.
4.1 General
The QA/QC aspects of field sampling operations are
much more subjective than the QA/QC aspects of
analysis. The wide diversity of waste feeds, POHCs,
incinerators, APCD, and trial burn experimental design
precludes the establishment of firm QA/QC
procedures applicable in all situations. Although some
guidance on sampling design is presented in this
chapter, the major QA/QC activities associated with
sampling are to establish written procedures and to
document that those procedures have been followed.
The basic objective of hazardous waste incineration
sampling is to obtain a representative sample, i.e.,
one that exhibits the average properties of the media
being sampled. This sample must be collected over a
period of time that is sufficient to represent the time-
dependent variability inherent in the relatively
continuous process of incineration. The achievement
of this objective is dependent upon design and
implementation. Proper design of the sampling
operation includes sampling points, number of
samples, sampling equipment, size of sample, and
sampling technique. Implementation of the sampling
design relies on following written procedures,
documenting that procedures have been performed,
and observation in the field to verify that procedures
were followed.
The specific QA/QC elements to be aware of in sampling are:
• Sampling design must produce a sample
which is representative.
• Sampling design must be translated into
written procedures.
• Sampling activities must be thoroughly
documented.
4.2 Sampling Design-Representative
Samples
Some key concepts that need to be considered in
three areas while sampling any media are presented
in Chapter 9 of the publication "Test Methods for Evaluating Solid Waste" (Reference 3). These concepts are
interrelated and are presented below with some
examples of how they should be applied in incinera-
tion sampling to ensure samples which are
representative. Specific guidance on sampling inter-
vals, number of samples, etc., is also available (References 5, 8, and 9).
4.2.1 Waste/Media Considerations
(1) Physical State--The physical state of each type of
media to be sampled must be described. For example,
is the waste a liquid? Is the ash solid or a solid/water
slurry? What is the temperature of the scrubber
water? Is the waste homogeneous or heterogeneous?
Is it stratified into layers? The physical state of the
media being sampled determines both the sampling
technique and the sample container.
(2) Composition--The composition of the media to be
sampled should be given. For example, one waste
stream may be fiberboard drums containing toluene-
soaked rags, with 2 to 3 lb of toluene per drum. A
second waste stream may be an organic liquid waste
which is 25% tetrachloroethylene and 75% methanol.
Sample composition is used to determine the amount
of sample necessary to produce a sufficient sample
size to exceed the detection limit of the analyte.
(3) Volume/Mass--The total volume/mass of the
material to be sampled needs to be given and any
change of volume/mass with time. The total
volume/mass is needed to judge whether a sample
point is appropriate (given the specific incinerator
process) and whether the sample size is adequate.
4.2.2 Site/Location Considerations
(1) Accessibility--Places where the waste can be
accessed should be given (sampling ports, etc.).
Where can samples be taken? How hard is it to reach
the sampling point? Does that area have sufficient
electric power for sampling equipment? From a
conceptual standpoint, waste feeds should be
sampled as close to the introduction of waste to the
incinerator as possible. For example, if a spiking liquid
is being added to an organic or aqueous waste, it
would be preferable to sample the mixed spiked waste
instead of just sampling the spiking fluid. However,
this is often impractical, especially when the waste
and spiking liquid might not be completely mixed, or in
the case of containerized waste, when time
considerations on the day of the trial burn may make
this impractical.
(2) Generation/Handling--The generation of the waste, APCD effluent, etc., and how it is handled should be
described. For example, ash in a rotary kiln might take
a significant amount of time to reach the sampling
point; thus, ash sampled during or immediately after
stack sampling may be more representative of the ash
generated before the trial burn. Sometimes, the
generation and handling of solid waste feeds may
present the most difficult sampling problem.
(3) Time Events--Any time-dependent characteristics
of the waste should be detailed. Are there any timed
events in the relatively continuous operation of the
incinerator? For example, the barrel feed rate creates
a discrete timed event every time a barrel is
introduced into the incinerator. For liquid feeds, given
a steady waste feed rate, every time a new tank truck
is brought on line a timed event occurs. All possible
timed events and time-dependent phenomena need to
be outlined in the TBP or QAPjP.
4.2.3 Sampling Equipment and Sample Storage
(1) Sample Change--Sampling equipment and storage
containers must introduce little or no change in
samples. Samples for the analysis of volatile
components require special containers and sampling
techniques to ensure the integrity of the volatile
analytes. For samples containing particularly labile
analytes, cold temperatures must be maintained from
the moment of sampling.
(2) Sample Properties--The sampling equipment and
containers must be able to withstand effects of the
physical and chemical properties of the sample itself.
Samples may be very hot or corrosive and may melt
or dissolve the sample container. Wide-mouthed
sampling containers may be needed for samples
which are very viscous or heterogeneous (e.g., sludge) to prevent spillage or possible segregation by particle size.
All sampling strategies must be justified in the TBP,
covering all topics discussed above. The permit
applicant should outline all sampling strategies and
offer a clear justification of the design. A clear
delineation of the reasoning behind the experimental
design is invaluable in creating a defensible sampling
strategy.
Trial burns are conducted in triplicate runs, and the
regulatory decisions and operating parameters are
based upon the average values for each run. This
requires a general sampling design consisting of
systematic random grab samples taken throughout the
run and composited into a single sample per run for
analysis. The literature regarding sampling of
hazardous waste incinerators5,8,9 contains basic
guidelines for ensuring a representative sample.
Sampling design should include: number of samples,
duration of sampling, and a sample compositing
scheme, if necessary. A sampling scheme should
reflect the degree of variability of that particular waste
stream. Continuous waste feeds, such as liquid
organic waste, slurries, and solid waste on conveyors,
should be treated differently from APCD effluent and
ash or containerized waste.
4.3 Standard Operating Procedures
(SOP) for Sampling Activities
The majority of TBPs rely on the ASTM procedures
for sampling or on the procedures given in Sampling
and Analysis Methods for Hazardous Waste
Combustion. These procedures are too general for
use on a trial burn; a specific set of instructions or a
SOP should be developed.
Taking an example, a certain TBP states that liquid
organic waste would be pumped from a trailer tank
and sampled from a port immediately following the
waste feed pump but before the flow rate meter using
the tap sampling Method S004 from the above book.
This citation is all that was given. Method S004 states
that: (a) a sample collection line will be used; (b) the
sample vessel and collection line will be rinsed with
the liquid waste; and (c) a 2 L sample will be taken.
However, in reality, there was no need for a separate
sampling line; the sample vessels were clean and did
not need to be rinsed (rinsing containers with sample
sounds appropriate, but is generally not recommended
when the substance is a known hazardous waste).
Containers can be purchased precleaned. Only a 100
mL sample was needed.
In the field, the first sample taken had multiple
phases, since the pump had been used for aqueous
waste before the trial burn. The multiple phases were
due to residual waste in the sampling tap; the sampler
had not been told to flush the lines of the sampling
tap with waste. The sampler did not have a copy of
the cited Method S004. The multiple phases were
noted by an observer, not the sampler. After ~ 3 L of
liquid had run through the tap, the waste stream was
clear. The first sample was discarded and then the
flush time for the tap was made long enough to allow
at least three to five full sampling line volumes of
liquid to flow before collection of the sample.
Samples were taken at the appropriate times, but
since all field information had been recorded on the
sample label, there was no field sampling notebook or
field sampling forms. Thus the only record of the date,
time, and sampler was on the sampling container
which was discarded after analysis. There was no
record of the sampling technique, the compositing
technique, the flush time, the original sample which
had been discarded, or the resulting corrective action.
When the field data were presented with the TBR,
there was no record of any of the field sampling other
than the stack sampling.
An example of a sampling form with instructions is
presented in Figure 4-1. All sampling should have a
short SOP and form or data recording instructions. If
not presented in the TBP, the permit reviewer should
request that the information be provided. Samplers
should have written instructions for all sampling
activities.
4.4 Summary
The primary QA/QC requirements for the sampling of
waste, ash, fuel and APCD are good planning, fully
written procedures, and documented field activities.
Thorough planning must be evidenced in the TBP by
a complete description of various media and their
properties, sampling location and necessary
equipment, along with justification of each sampling
method as tailored to each type of media. Sampling
design must have been translated into comprehensive
written instructions and later supported by information
and data recorded in the field.
Facility: Frank's Hazardous Waste Incinerator
Type of Sample: Liquid Organic Waste
Reference Sampling Method: Tap, S004 [Sampling and Analysis Methods for Hazardous Waste Combustion (NTIS PB84-155845)].
Sampler: , ,
Date of Sampling: ;
Run Number:
Run Description:
Sample Identification Number:
Equipment: one gallon wide-mouth compositing jar and two 1 L sample jars with Teflon-lined polycarbonate tops; one 500 mL graduated
beaker; one funnel; one pail and two 1 gal jugs for waste.
Instructions
(1) Before the trial burn run starts, clear sampling line (2 in ID x 2 ft) by opening the tap and collecting not less than 1 L of waste. Examine waste
to assure the liquid is homogeneous (e.g., free from water, solids, sludge etc.). If not, contact field sampling crew chief before trial burn starts.
(2) At the beginning of the trial burn and every 15 min (±5 min), open the tap, rinse about 1/2 L into the bucket, close tap. Visually inspect waste
to ensure that it is homogeneous. If not contact crew chief.
(3) Open tap slowly, fill beaker to 300 mL mark. Place sample in compositing jar. Seal jar.
(4) Record the time, and any comments. Dump waste in bucket into the waste jug.
(5) Repeat Steps 1-4 every 15 min for the 2 h run. This will result in eight grab samples and 2.4 L of sample at the end of the run.
(6) Mix the final sample by inverting the sealed jar at least 20 times.
(7) Pour the sample into each 1 L sample jar.
(8) Following the traceability procedures, label the jar, seal the jar and fill out the necessary chain of custody forms.
(9) Deliver the sample to the field sample custodian for packaging and shipment.
Grab No.
1
2
3
4
5
6
7
8
9
10
Time of Grab
Comments
USE BACK OF FORM FOR ANY ADDITIONAL INFORMATION
Figure 4-1. Example sampling instructions and field record form.
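The schedule and composite volume called for in the instructions above can be checked with a short calculation; the following Python sketch simply reproduces the arithmetic implied by Figure 4-1 (one 300 mL grab every 15 minutes over a 2 hour run).

    # Grab-sample schedule and composite volume implied by Figure 4-1.
    run_minutes = 120
    interval_minutes = 15
    grab_volume_ml = 300

    grab_times = list(range(0, run_minutes, interval_minutes))   # 0, 15, ..., 105 min
    total_l = len(grab_times) * grab_volume_ml / 1000.0

    print("Grab times (minutes into run):", grab_times)
    print(f"{len(grab_times)} grabs, composite volume = {total_l:.1f} L")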
Chapter 5
QC Procedures for Analysis of Waste, Ash, Fuel, and Air Pollution
Control Device (APCD) Effluent
Sample analysis QA/QC procedures for waste, ash,
fuel, and APCD effluent are covered in this chapter.
The concepts of precision, accuracy, detection limits,
spiking, and calibration are presented and discussed
as appropriate. Stack gas sample analysis is dis-
cussed in Chapter 7. Some general analysis topics
are discussed in Chapter 8. If the permit writer is not
familiar with these analyses, review of these areas of
the TBP, QAPjP, or TBR should be done by qualified
personnel.
5.1 Analysis of Waste Samples for
Heating Value, Ash, Viscosity, and
Chlorine
5.1.1 Sample Matrix
ASTM methods are followed almost universally for
these analyses. These methods are often specific to a
particular matrix, such as chlorine in petroleum
products (ASTM D808), or chlorine in organic
compounds (ASTM E442). The permit applicant must
provide the full title of each procedure, which will give
the appropriate matrix for a given method. The
applicant must justify the use of a procedure if it
appears that the matrix might be incompatible.
For example, the accuracy of chloride analysis is
dependent upon choosing the appropriate analysis
procedure for a given sample chloride level. For
samples with high levels of chloride, sample
preparation/digestion followed by titration or silver
chloride precipitation is often the method of choice;
while for low level samples, ion chromatography is the
preferred analysis method.
5.1.2 Precision Determination
Since field samples are large compared to the amount
needed for each analysis and the analyses are
relatively inexpensive, multiple determinations for
demonstration of precision present little difficulty.
Because most of these samples are analyzed by
subcontractors, a sample can be split in the field and
shipped as two samples to the laboratory. At a
minimum, one test run's sample should be split,
prepared, and analyzed in duplicate for all parameters.
Recommended precision criteria are given in Table
5-1.
5.1.3 Accuracy Determination
The recommended procedure for accuracy
determination for trial burn analyses is to use
reference materials with known values for ash, total
chlorine, heating value, and viscosity. These samples
are submitted for analysis without being distinguished
from field samples and provide an independent check
for any systematic bias and support data validation of
the laboratory.
Reference materials are relatively easy to procure or
prepare and can be chosen to match the relative
physical properties of samples. For example, if the
liquid organic waste feed is primarily 5%
tetrachloroethylene in a waste oil, then a solution of
5% tetrachloroethylene in white oil (chlorine-free oil)
can be submitted as a reference standard. For higher
heating value samples, a compound of known heating
value can be submitted (such as hexane), or the
National Institute of Standards and Technology (NIST)
provides reference material of a known heat of
combustion. For ash, obtaining a reference material
that mimics the field sample composition can be
difficult (e.g., 5% ash in organic liquids). Zinc oxide
can be blended with a solid or liquid sample or a fuel
oil and used as a spike. At least one unspiked sample
and two samples spiked at different levels should be
submitted to the laboratory for analysis.
5.1.4 Summary of QC Procedures
A summary of the QC procedures discussed above is
presented in Table 5-1. These parameters are loosely
based upon the precision and accuracy values given
in the most commonly used ASTM methods. The
permit reviewer is encouraged to check the
acceptance criteria given in the QAPjP versus the
precision and accuracy values given in the methods.
Each quality parameter must be reported in the TBR,
and acceptance of sample results must be justified by
the applicant if the QC procedures were not followed
or the criteria were not met.

Table 5-1. Summary of QA/QC Procedures for Heating Value, Ash, Viscosity, and Chlorine Analysis
(Each entry gives the quality parameter: method of determination; frequency; target criteria.)

Method selection: check the method to ensure it is appropriate to the sample matrix and has an acceptable method detection limit; during QAPjP review; the choice must be justified if the method appears inappropriate.
Precision: duplicate preparation and analysis of at least one run's samples; once per test; 10% range.
Accuracy: analysis of a reference material; once per test; 90%-110% of stated reference value.
Accuracy (optional): spike of sample at 2 times the sample level; once per test; 90%-110% of spiked value.

In cases in which precision and accuracy are poor, acceptance of the data
can be decided based upon the way the information is
to be utilized. For example, if chloride precision is
poor, yet chloride levels are close to the detection
limit and the emission limit of not more than 4 lb/h has
been met, then poor precision will not affect the
regulatory decision.
Most of these analyses are relatively inexpensive. The
preferred option is to repeat the analysis if the
precision and accuracy are not sufficient to make the
permitting decisions. However, the samples must not
be biologically or chemically active and must be
stored to prevent evaporation.
5.2 Analysis for Principal Organic
Hazardous Constituents (POHCs)
5.2.1 General
POHC concentrations in samples collected from an
incinerator trial burn are highly variable. Waste feeds
may have POHC concentrations in the range of 0.1%
to 30%, while ash or APCD samples can have very
low concentrations (1 ppm). The waste feed analysis
is the most critical; however, ash and APCD analyses
are sometimes used to justify delisting the waste to
allow its disposal as a nonhazardous material.
Since the analysis system used for stack gas samples
is designed to detect low concentrations of POHCs, it
can be applied to the analysis of POHCs in ash and
APCD. For a volatile POHC, similar surrogates,
calibration curve, and analysis conditions can often be
used as for VOST. For semivolatile POHCs, solid
samples can be extracted in a manner identical to that
used for the SVOST XAD/filter (Soxhlet extraction),
and the aqueous samples can be extracted like the
SVOST condensate (liquid extraction). The surro-
gates, calibration curve, and analysis conditions will
be the same as for SVOST. Please refer to Sections
7.3 and 7.4 of this handbook for criteria.
Detection limits for ash and APCD samples may be
important if decisions regarding disposal are based on
the amount of POHCs found in the sample. The
QAPjP should make specific mention of the method
chosen for determining the detection limit, and the
TBR should give the results of that determination.
Sections 7.3 and 7.4 give guidance on detection
limits. Also, at least one sample of each matrix type
should be prepared and analyzed in duplicate for a
precision determination and spiked (at 5 times the
detection limit) for an accuracy determination.
POHC levels in waste feed samples are often so high
that using the same analytical system as for stack gas
samples is impractical. Gas chromatography with detectors such as flame ionization, thermal conductivity, electron capture, or flame photometry may be more precise and accurate than GC/MS. The permit writer may accept nonspecific detectors coupled to a gas chromatograph if the waste feed matrix is relatively simple (such as a completely synthetic waste or a single component waste from an industrial process).
The specific QC elements to include in waste feed
determinations are:
• Calibration of the analytical system.
• Determination of accuracy using calibration
check standards, spiked samples, and
surrogates.
• Determination of precision by multiple analysis of
samples.
This section covers QC elements that are not
specifically addressed in the SW-846 methods.
5.2.2 Calibration for Waste Feed Analysis
Waste feed composition should be very well
characterized before the permitting process begins.
The calibration range of the analytical system should
bracket all expected concentrations of the POHC with
a minimum of five standard concentrations. Samples
with concentrations greater than the highest standard
should be diluted into the calibration range and reana-
lyzed. Samples lower than the lowest calibration level
should be concentrated and reanalyzed. If this is not
feasible, the calibration range should be extended with
the inclusion of lower or higher concentration
standards. The calibration range, concentration of
calibration standards, and the expected sample
concentration should be presented in the TBP or the
QAPjP.
Criteria for both initial and daily GC/MS calibration for
POHCs are given in Sections 7.3 and 7.4. Calibration
criteria for other analysis methods are given in Section
8.4. The essential point for calibration, irrespective of
the analysis method, is a successful calibration before
and after sample analysis. The initial calibration curve
must pass the criteria before any sample analysis. At
the end of each analysis period, an end-of-day calibra-
tion standard must be analyzed and must pass the
continuing calibration criteria. Every group of samples
must be bracketed by two successful continuing
calibrations-one preceding sample analysis and one
following. If the calibration check following sample
analysis does not meet the criteria, it should be
repeated; if it fails the second time, analysis problems
should be investigated and corrected and the samples
following the last successful calibration should be
reanalyzed. All initial and continuing calibration results
must be reported in the TBR.
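As an illustration of the bracketing rule described above, the following Python sketch works through a wholly hypothetical sequence of calibration checks and samples, identifying which samples are validly bracketed and which must be reanalyzed after the analysis problem is corrected.

    # Apply the bracketing rule: a group of samples is accepted only when the
    # calibration check that follows it passes; if the check fails twice in a
    # row, the samples since the last passing calibration are reanalyzed.
    events = [                      # hypothetical chronological sequence
        ("cal", True),
        ("sample", "WF-1"), ("sample", "WF-2"),
        ("cal", True),              # closes the first group: accepted
        ("sample", "WF-3"),
        ("cal", False), ("cal", False),   # failed twice: group must be reanalyzed
    ]

    pending, accepted, reanalyze = [], [], []
    previous_check_failed = False
    for kind, value in events:
        if kind == "sample":
            pending.append(value)
            previous_check_failed = False
        elif value:                 # passing calibration check
            accepted.extend(pending)
            pending = []
            previous_check_failed = False
        else:                       # failing calibration check
            if previous_check_failed:
                reanalyze.extend(pending)
                pending = []
            previous_check_failed = True

    print("accepted:", accepted)
    print("reanalyze after corrective action:", reanalyze)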
5.2.3 Accuracy Determination for Waste Feeds
Calibration--A five-point calibration curve is usually prepared from a single stock solution of the reference material (SW-846, Method 8270). As a check on the
validity of the calibration and the identity of the
reference material, a calibration check standard
should be analyzed. This calibration check standard
must: (a) contain all the POHCs and surrogates used;
(b) be at the expected concentration level of the
POHCs in the waste samples; and, (c) be analyzed
after each preparation of calibration standards and
before sample analysis.
Standards should be prepared from EPA standard
reference material obtained from the EPA repository
(QA Branch, EMSL-Cincinnati, USEPA, Cincinnati,
Ohio 45268). Preparation of the check standard from
material of documented purity should be done by the
QAC or by personnel not responsible for the
preparation of the calibration standards. This indepen-
dent preparation should reveal any systematic bias
that may be present. If EPA reference material is not
available, the laboratory must characterize a standard
material for this use. Characterization entails a
qualitative identification of the chosen calibration
standard and a quantitative determination of standard
purity.
The calibration check standard should be within the
same accuracy window as that used for continuing
calibration (e.g., GC/MS 70% to 130%). If the
criterion has not been met, the analytical problem
should be corrected before sample analysis begins.
The results for all calibration check standards should
be presented with the appropriate calibration curve
results in the TBR.
Surrogates-All GC/MS methods must incorporate
analysis of isotopically labeled surrogates. These
surrogates should be added to the sample at the
beginning of sample preparation at a concentration
equal to the estimated POHC level. If surrogate
POHCs are not available, other isotopically labeled
surrogates chemically similar to the POHC can be
substituted; however, the selection must be justified in
the QAPjP. For non-GC/MS methods, in which
isotopes cannot be distinguished from native
compounds, a compound chemically similar to the
POHC can be chosen as a surrogate. Surrogate
recovery of each sample should be within the 50% to
130% range of the amount spiked and must be
reported in the TBR. For POHC analysis of waste
feeds, a low recovery means the calculated DRE could be lower than the actual. However, a recovery higher than 130% should not be accepted and could mean that the calculated DRE is higher than the
actual.
Spikes--For analysis methods that do not employ
GC/MS, accuracy is determined through use of a
waste feed sample spiked with the POHCs. In
addition, sometimes the cost of surrogates does not
allow spiking the samples at the beginning of sample
preparation; thus, the recovery of surrogates is not
indicative of total method accuracy. A minimum of one
sample from a run should be split and a portion spiked
with each POHC at a level of not more than twice the
expected POHC concentration. Samples should be
spiked just before sample preparation. Spike
recoveries must be reported in the TBR; the accuracy
criterion is 50% to 130% of the amount spiked. As
with surrogates, a low bias is less critical than a high
bias.
5.2.4 Precision Determinations for Waste Feed
For analyses using surrogates, precision can be
determined from surrogate recoveries. The relative
standard deviation (RSD) of surrogate recovery from
all three test runs should be < 35%. Surrogate
recovery results should not be compared across
sample matrices (e.g., aqueous sample results should
not be mixed with liquid organic sample data).
Precision data must be calculated also and presented
in the TBR. Precision is determined by duplicate
preparation and analysis of a sample from each
matrix. If problems with precision are anticipated, all
waste feed samples should be prepared and analyzed
in duplicate and the average result used in the DRE
calculation. The percent range should be less than
35%. If the precision determination shows a wide
variability between duplicate sample results, a few
samples should be reanalyzed to determine if the
precision problem is related to sample preparation or
analysis. If the problem is sample preparation, the
method should be modified and all samples should be
reprepared and reanalyzed. If the problem is sample
analysis, the analysis system should be modified and
all samples should be reanalyzed. If this subsequent
analysis shows that precision problems are not
systematic, the average POHC concentration should
be used for DRE calculation. If precision is a systematic problem, the lower of the two values could be used in the DRE calculation. All precision data
must be calculated and reported in the TBR.
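For illustration, the two precision indicators described above can be computed as in the following Python sketch. The recoveries and duplicate results are hypothetical, and the percent range is taken here as the absolute difference divided by the mean, times 100.

    # Precision indicators for waste feed POHC analysis (hypothetical values).
    import statistics

    surrogate_recoveries = [78.0, 91.0, 84.0]      # % recovery, runs 1-3
    rsd = statistics.stdev(surrogate_recoveries) / statistics.mean(surrogate_recoveries) * 100.0

    dup_1, dup_2 = 12.4, 10.9                      # duplicate POHC results, % by weight
    percent_range = abs(dup_1 - dup_2) / ((dup_1 + dup_2) / 2.0) * 100.0

    print(f"Surrogate recovery RSD = {rsd:.1f}% (criterion: < 35%)")
    print(f"Duplicate percent range = {percent_range:.1f}% (criterion: < 35%)")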
5.2.5 Blanks for Waste Feed Analyses
Method blanks must be analyzed to demonstrate that
the sample preparation and analysis system is free
from any significant positive bias. Method blanks must
be reported in the TBR and must be below 5% of the
sample POHC levels measured for the sample
extracts. If the blank value is above these levels, it is
recommended that the sample preparation and
analysis system be examined and corrected. Sample
results should not be corrected for blank values.
5.2.6 Summary of QC Procedures for Waste
Feed Analyses
A summary of QC procedures for waste feed analysis
is presented in Table 5-2. Each quality parameter
must be reported in the TBR. If the QC procedure
was not followed or the criteria have not been met,
sample results should not be accepted unless the
applicant provides an adequate technical justification
for the inclusion of the data. The QC procedures
related to calibration and calibration accuracy must be
completed and must be within the criteria before
sample analysis begins.
For surrogate and POHC spike recovery results, the
50% to 130% range is the suggested limit. High
recovery would significantly affect the regulatory
decision. (Sample results would be biased high, and
the calculated DRE would be higher than actual.)
However, if individual recoveries are lower than 50%,
trial burn results should not be accepted unless the
applicant supplies an adequate technical justification
for the use of the data. Sample results should be
corrected for low surrogate recovery. Guidance on
using surrogate recoveries for correction of
environmental data should be published in the Federal
Register by the end of 1989.
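Pending that guidance, one simple proportional approach is sketched below in Python. It is offered only as an illustration: the 50%-130% screening window comes from the text above, but the correction method and all numerical values are assumptions, not a prescribed procedure.

    # Screen surrogate recoveries against the 50%-130% window and apply a simple
    # proportional correction for low recovery (an assumed approach, for
    # illustration only).  Measured results and recoveries are hypothetical.
    def screen_and_correct(measured, recovery_pct):
        if recovery_pct > 130.0:
            return None, "recovery above 130%: result should not be accepted"
        if recovery_pct < 50.0:
            return None, "recovery below 50%: technical justification required"
        return measured / (recovery_pct / 100.0), "within the 50%-130% window"

    for measured, recovery in [(4.8, 82.0), (5.1, 145.0)]:
        corrected, note = screen_and_correct(measured, recovery)
        print(f"measured {measured}, recovery {recovery}%: corrected {corrected}; {note}")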
Results that lack sufficient precision are of great
concern because POHC waste feed concentration is a
critical parameter. If precision is poor, the laboratory
should attempt to identify and correct the problem. If
this is not possible, all samples should be prepared
and/or analyzed in duplicate.
5.3 Analysis for Metals in Waste, Ash,
and APCD Samples
Metals analysis of waste samples is used in two ways
for regulatory decisions. One is for a metals removal
efficiency calculation; in this case, a high bias to
sample results will result in a removal efficiency that is
better than actual. The other use is for a risk assess-
ment calculation, assuming all metals in the waste
stream are vented to the atmosphere. In this case, a
low bias will have an unfavorable effect on the regu-
latory decision. The acceptance criteria for these
analyses are determined by the permit writer's use of
the data.
For these determinations, the specific QC elements to
be aware of are:
• Calibration of the analytical system.
• Determination of accuracy or matrix effects using
calibration check standards and spiked samples.
• Determination of precision by multiple analysis of
samples.
This section covers QC elements that must be
addressed outside the scope of the SW-846
methods. The general requirements of the SW-846
inorganic methods are discussed in Section 8.4.
5.3.1 Sample Matrix
Waste feed ash and APCD samples can present
some unique problems in metals analysis. These
matrices can vary in composition from virtually 100%
organic material to aqueous solutions. Ten inorganic
analytes are of primary interest: arsenic, beryllium,
cadmium, chromium, antimony, barium, lead, mercury,
silver, and thallium. These samples may be prepared
by a variety of methods (e.g., microwave digestion,
chemical digestion, dissolution) and analyzed by mul-
tiple methods [e.g., graphite furnace atomic
absorption (GFAA), cold vapor atomic absorption
(CVAA), inductively coupled plasma (ICP)]. The TBP
and QAPjP should justify the selection of all sample
preparation and analysis methods. Particular attention
should be paid to the selection of the sample
preparation method in terms of achieving complete
digestion and optimal analyte recovery as well as the
choice of an appropriate analysis method for the
necessary detection limit.
5.3.2 Calibration
The calibration method is often dependent upon the
type of instrumentation used for analysis. For
example, some inductively coupled plasma spec-
trometers as designed require only a blank and one
standard for calibration, while some atomic absorption
spectrophotometers need multiple standards to
calibrate the instrument.

Table 5-2. Summary of QA/QC Procedures for Principal Organic Hazardous Constituent Determination in Waste Feed Samples
(Each entry gives the quality parameter: method of determination; frequency; target criteria.)

Method selection: ash and air pollution control device samples should be analyzed by the same methods as stack gas samples; during QAPjP review; see Sections 7.3 and 7.4 for QC procedures and criteria.
Calibration: initial analysis of five standards at different levels; at least once; see Sections 7.3, 7.4, or 8.4. Sample analysis must be bracketed by calibration standards; all samples; NA. Continuing calibration; before and after sample analysis; see Sections 7.3, 7.4, or 8.4 for appropriate criteria.
Accuracy (calibration): analysis of calibration check standard; after each preparation of standards and initial calibration; must be within continuing calibration criteria.
Accuracy (surrogates): isotopically labeled POHC spiked at the expected POHC level before sample preparation; every sample; 50%-130% recovery.
Accuracy (spikes): one sample from each matrix spiked with POHC at 2 times the expected level; one per sample matrix; 50%-130% recovery.
Precision (surrogates): same as for accuracy (surrogates); one per test condition; < 35% RSD of recovery.
Precision (POHC): duplicate preparation and analysis of one sample from each matrix; one per sample matrix; < 35% range.
Blanks: method blank carried through all sample preparation steps; one per sample batch; < 5% of sample levels.

The applicant should know
the general levels of metals in waste feed samples
because the waste feed should have been very well
characterized before the beginning of the planning
process. The calibration range of the analytical system
should bracket all expected concentrations of metals
in the waste. Any samples with concentrations greater
than the highest level should be diluted into the
calibration range and reanalyzed or the calibration
range should be extended. For samples below the
lowest calibration standard, if possible, the calibration
range should be extended or the samples should be
concentrated.
Criteria for initial and continuing calibration are given in Section 8.4 and summarized in Table 8-7. The ini-
tial calibration curve (which includes all calibration
standards) must pass the criteria before sample
analysis. At the end of each analysis period, a
calibration standard should be analyzed and must
pass the continuing calibration criteria. Every sample
must be bracketed by two successful calibrations-one
full calibration preceding sample analysis and one
midrange calibration standard following each group of
samples. If the calibration standard following sample
analysis does not meet the criteria, it should be
repeated. If it fails the second time, the analysis
problem should be rectified and the samples that were
analyzed after the last successful calibration should
be reanalyzed. All initial and continuing calibration
results must be reported in the TBR.
The instruments used in inorganic metals analysis
have a tendency to drift at both the high and low ends
of the calibration range. Therefore, all continuing
calibrations must also be accompanied by the analysis
of a reagent blank. The acceptance of this blank is
somewhat subjective, depending upon the sample
results and whether the drift is positive or negative.
Calibration blank results should be reported in the
TBR, and any drift greater than 50% of the lowest
standard should be noted and explained.
5.3.3 Accuracy Determination
Calibration--Virtually all SW-846 methods require
some initial check on calibration accuracy using a
second standard different from the one used for
calibration, which is called a "calibration check
standard." It should be analyzed following calibration
and prior to sample analysis. This calibration check
standard must fall within 90 to 110% of the actual
concentration. This range is fairly wide for the analysis
of a pure standard; results outside this range are thus
unacceptable. Any problem must be solved before
sample analysis proceeds.
Spikes--A minimum of one sample from each matrix
should be split and a portion spiked with each metal.
Effort should be exerted to achieve a spike level of
not more than three times the expected sample level
or five times the detection limit, whichever is greater.
Samples should be spiked at the beginning of sample
preparation. Spike results must be reported in the
TBR, and the accuracy target range is 70% to 130%
of the amount spiked.
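As a worked illustration of the spike-recovery check, the short Python sketch below applies the 70%-130% target cited above; the chromium results and variable names are invented for the example.

    # Percent recovery of a spiked split: the spiked and unspiked portions
    # are analyzed, and the difference is compared with the amount added.
    def spike_recovery(spiked_result, unspiked_result, amount_spiked):
        return 100.0 * (spiked_result - unspiked_result) / amount_spiked

    # Hypothetical chromium results in mg/kg.
    rec = spike_recovery(spiked_result=9.1, unspiked_result=4.2, amount_spiked=5.0)
    print(f"recovery = {rec:.0f}%",
          "acceptable" if 70.0 <= rec <= 130.0 else "report and justify")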
5.3.4 Precision
Precision is determined by preparation and analysis of
duplicate samples from each matrix. If precision is
expected to be a problem, all samples should be
prepared and analyzed in duplicate and the average
result be used for calculations. The percent range
should be less than 35% if the sample result is
greater than the lowest calibration standard. If the
precision determination shows a wide variability in sample
results, a few samples should be reanalyzed. If the
reanalysis results agree well, the problem is related to
sample analysis, and all the affected samples should be
reanalyzed. If the precision results still are not improved,
the problem probably is related to sample preparation;
the preparation procedure should be modified and all
samples should be reprepared and analyzed. If this
subsequent work shows that the precision problems
are a relatively isolated occurrence, the average
should be used for all calculations. However, if pre-
cision appears to be a systematic problem, the value
leading to the most conservative regulatory decision
can be used in subsequent calculations. All precision
data must be calculated and reported in the TBR.
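The percent-range calculation for a duplicate pair is shown below as a minimal Python sketch. It assumes the conventional definition (absolute difference divided by the mean, times 100); the arsenic results are invented for illustration, and only the < 35% criterion comes from the text above.

    # Percent range for a duplicate pair: |x1 - x2| / mean * 100.
    def percent_range(x1, x2):
        mean = (x1 + x2) / 2.0
        return 100.0 * abs(x1 - x2) / mean

    # Hypothetical duplicate arsenic results, mg/kg.
    pr = percent_range(3.4, 4.1)
    print(f"percent range = {pr:.1f}%",
          "acceptable" if pr < 35.0 else "investigate")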
5.3.5 Method Blanks
For metals determinations, method blanks are critical
experimental design elements of the trial burn. Method
blanks are samples consisting of the reagents used in
sample preparation. These blanks are processed
exactly like the environmental samples. One method
blank should be introduced per batch of samples.

Method blanks are routinely used in inorganic analysis
to identify contamination problems occurring during
sample preparation and to correct for any systematic
low inorganic levels found in the reagents used in
sample preparation and analysis. However, these
corrections may be inappropriately applied. For data to
be method blank-corrected, the blank result must be
statistically different from the sample result, and the
blank result must be indicative of the "average" level
of contamination.5 An ordinary single blank
determination does not give enough information for
determining if these criteria have been met. Thus,
sample results should be reported without correction,
and if blank correction is justified, results should be
reported with and without the correction. If correction
is desired, multiple blanks should be analyzed, the
average value used, plus the blank values should be
shown to be statistically different from sample values.

The permit writer should be aware that contamination
in trace metals analysis can be a severe problem. If
metals analysis is a critical decision area and the
levels in the samples are very low in comparison with
the SW-8463 detection limit, the permit applicant must
present a statistical design for the method blanks.
Method blanks must be used to interpret sample
results in trace metals analysis, but multiple blanks
(rather than a single blank per sample batch) must be
analyzed to properly characterize the extent of the
system contamination. Method blanks must be
reported in the TBR. If the blank value is above the
detection limit, the detection limit should be changed
to 1.5 times the blank level.

5.3.6 Detection Limit Determination
Many times metals are not detected in the samples at
all; thus the permit reviewer must decide whether
metals emissions are a problem. The QAPjP should
identify a method for determining the detection limit of
each analyte. The TBR must give the results of this
detection limit determination. If this subject is not
addressed in the TBP, the permit writer should
request that the applicant supply the information. All
detection limits must be corrected for the sample
weight/volume and dilutions/concentration used in
sample preparation.
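The detection-limit correction called for above is simple arithmetic. The following minimal Python sketch converts an instrument detection limit to the original sample basis; the digestate volume, dilution factor, and sample weight are invented for illustration.

    # Correct an instrument detection limit (ug/L in the final digestate)
    # to the original sample basis, accounting for digestate volume,
    # dilution, and the sample weight taken through preparation.
    def corrected_dl(instrument_dl_ug_per_l, final_volume_l,
                     dilution_factor, sample_weight_g):
        total_ug = instrument_dl_ug_per_l * final_volume_l * dilution_factor
        return total_ug / (sample_weight_g / 1000.0)   # ug/kg of sample

    # Hypothetical: 2 ug/L DL, 0.1 L digestate, 5x dilution, 1.0 g of ash.
    print(f"{corrected_dl(2.0, 0.1, 5, 1.0):.0f} ug/kg")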
5.3.7 Summary of QC Procedures for Metals
Determination
A summary of the QC procedures for metals
determination is presented in Table 5-3. Each quality
parameter must be reported in the TBR, and
acceptance of sample results must be justified by the
applicant if the QC procedure was not done or the
criteria were not met. The QC procedures related to
calibration and calibration accuracy must be
completed and must be within the criteria before
sample analysis occurs. Deviations from the criteria
after the trial burn should not be accepted.
As stated previously, a high spike recovery in waste
sample results is not acceptable if the data are to be
used in an emission efficiency calculation; a low spike
recovery should not be accepted when the permitting
decision assumes all metals are being discharged to
the atmosphere, ash, or APCD. The permit reviewer
should consult the guidance document which
addresses the measurement of metals10 and another
document which addresses the necessary permitting
decisions regarding metals.11
30
-------
Table 5-3. Summary of QA/QC Procedures for Metals Determination in Waste Feed, Ash, and APCD Samples

Method selection
  Method of determination: Review by expert in inorganic analysis
  Frequency: During review
  Target criteria: Choices must be justified by applicant

Calibration (initial)
  Method of determination: Initial analysis of standards at different concentration levels
  Frequency: At least once before sample analysis
  Target criteria: Instrument-dependent; suggest that linear correlation coefficient of standard data > 0.995

Calibration (continuing)
  Method of determination: Continuing midrange calibration standard
  Frequency: Before and after sample analysis
  Target criteria: 80%-120% of expected value for GFAA and CVAA; 90%-110% of expected value for ICP

Accuracy-calibration
  Method of determination: Analysis of calibration check standard
  Frequency: After every initial calibration
  Target criteria: 90%-110% of theoretical value

Accuracy-spikes
  Method of determination: One sample from a run spiked with analytes at 3 times the detection limit or 2 times the sample level
  Frequency: One per sample matrix
  Target criteria: 70%-130% recovery

Precision
  Method of determination: One sample prepared and analyzed in duplicate
  Frequency: One per sample matrix
  Target criteria: Range < 35% if sample result above lowest standard

Blank
  Method of determination: Method blank carried through all sample preparation and analysis steps
  Frequency: One per sample batch
  Target criteria: Below detection limit

Detection limit determination
  Method of determination: Method is variable; must be given in QAPjP
  Frequency: One for each non-detected analyte per sample matrix
  Target criteria: NA
31
-------
-------
Chapter 6
QC Procedures for Stack Sampling
Methods that are used for stack sampling are
introduced and described in this chapter. All aspects
of QA/QC for EPA Methods 1 through 8, 6A and 6B,
7A and 7D, 9, 10, 13A and 13B, 17, and 18 (40 CFR
60, App. A) are provided in the "Quality Assurance
Handbook for Air Pollution Measurement Systems".7
This chapter covers conditions in a trial burn which
may affect the quality of the test data. Internal QA/QC
items such as repairs, maintenance, spares, etc., will
not be addressed. QA/QC procedures for the analysis
of stack samples are given in Chapter 7 of this
handbook.
6.1 EPA Methods 1 and 2 (40 CFR 60,
App. A): Location and Velocity
The QA/QC procedures required for sample site
selection and velocity traverses consist of ensuring
that the required operations have been properly
carried out and that the equipment has been
calibrated. These operations cannot be checked by a
performance audit, but must be controlled by strict
adherence to the specified procedures. Questions
which should be addressed and answered according
to the methods include: (1) Does the sampling site
meet criteria? (2) Were flow angles measured for
cyclonic flow? (3) Have the proper number of
sampling points been selected? (4) Are all points at
least 1/2 inch from the wall? (5) Are the ports properly
located?
Numerous modifications of EPA methods are
published in the Federal Register. Only the latest
version of a method should be accepted for a trial
burn. These are available in the Code of Federal
Regulations from the Office of the Federal Register.
For instance, if a gauge other than an inclined
manometer is used, the gauge must be checked
against an inclined manometer. If a Method 5 probe is
used for the initial velocity traverse, the pitot assembly
must either meet the noninterference criteria specified
under Method 5 or have been calibrated.
To ensure good quality data, one must perform quality
control checks and independent audits of the
measurement process; document these checks and
audits by recording the results, as appropriate; and
use materials, instruments, and measurement
procedures that can be traced to an appropriate
standard of reference.
Working calibration standards should be traceable to
primary standards. Two recommended ways of
establishing traceability are:
1. Calibrating the pitot tubes against a standard
pitot tube with a known coefficient obtained from
the National Institute of Standards and
Technology (NIST) or against the design
specification in the method which has previously
been shown to give acceptable coefficients.
2. Comparing the stack temperature sensor to an
American Society for Testing and Materials
(ASTM) reference thermometer.
Calibration data on field equipment should contain at
least the information provided in Figure 6-1.
6.2 EPA Methods 3 and 3A: Gas Analysis
for Carbon Dioxide, Oxygen and
Excess Air, and Dry Molecular
Weight (40 CFR 60, App. A)
QC procedures for gas analysis are included in the
method, so that quality assurance consists of ensuring
that the procedure has been accurately followed and
documented. The review should determine that the
proper method was used, that the required leak
checks were performed, and that the sampling rate
was constant (±10%). A performance audit should be
conducted with a cylinder gas of known concentration.
If Method 3A is to be used, the analyzers must be
tested prior to the trial burn. Documentation should be
available showing that the units have been checked
for interference response (Method 3A, 6.2), analyzer
calibration error, and sampling system bias (Method
3A, 6.3), in addition to the calibration concentration
verification required (Method 3A, 6.1). An audit should
be performed either following Method 3 or using a
separate audit gas cylinder.
Data must be routinely obtained by repeat measure-
ments of standard reference samples, or primary,
secondary, and/or working standards. Working
33
-------
Date: ____    Completed by: ____

Pitot Tube Type
  Identification No.:
  Dimension specifications checked?*
  Calibration required? If yes: date and identification of calibration reference

Temperature Sensor
  Identification No.:
  Calibrated?*
  Was a pretest temperature correction used? If yes, temperature correction:
  Identification of reference sensor

Barometer
  Was the pretest field barometer reading correct?*
  Identification of reference barometer

Differential Pressure Gauge
  Was pretest calibration acceptable?*

* Most significant items/parameters to be checked.
Figure 6-1. Pretest sampling checks.
calibration standards should be traceable to primary
standards.
When absorption type gas analyzers are used,
operator techniques and analyzer operations can be
checked by sampling certified mixtures of bottled gas
containing 2% to 4% O2 mixed with 14% to 18%
CO2, and 2% to 4% CO, with the balance being N2.
Bottled gases used for audit purposes should be
traceable to NIST standards.
6.3 EPA Methods 4 and 5: Moisture and
Particulates (40 CFR 60, App. A)
Method 4 is used to determine water vapor and
contains guidance in setting the isokinetic sampling
rate. A preliminary measurement is made using the
Method 4 sampling train. Measurements required
during a trial burn are normally taken simultaneously
with measurement of particulates in the Method 5 train
by analyzing the moisture in the desiccant impingers
of the sampling train. The QA/QC procedures used in
conjunction with the Method 5 train will ensure
obtaining moisture data of the quality required in a trial
burn.
Specific details of procedures that will provide
sufficient QA/QC are available in the method
description. The citation of Reference Method 5 in the
TBP as the procedure to be followed is acceptable.
However, this statement must be modified to include
specific details of areas in which optional procedures
have been chosen. For example, the probe may be
quartz instead of Pyrex; the probe may be air- or
water-cooled; or space limitation may require the use
of a flexible line between the probe and the sample
box.
In other words, the desired approach in preparing a
TBP is to provide the detail necessary to avoid any
misunderstanding between the organizations involved
in the test and to document fully any options which
have been exercised. This allows a permit writer to
assess and approve or reject any proposed options
prior to the test. In addition, the information provided
will assist a reviewer in evaluating the quality of the
test results. These requirements can be satisfied by
the inclusion of a specific written procedure in the
TBP.
Calibration of Method 5 apparatus is one of the most
important functions in maintaining data quality. These
34
-------
calibration procedures are rather straightforward with
two exceptions: the dry gas meter and the pitot tube.
The Method 5 (M5) procedure calls for calibration of
the dry gas meter against a wet test meter (M5, Section
5.3); as alternatives, a standard dry gas meter (M5,
Section 7.1) or critical orifices (M5, Section 7.2) may
be used. The method specifies the measurements
required for each calibration; deviation is not
recommended, as an erroneous calibration may result.
Full documentation of the calibration procedure should
be included in the TBR. This document should include
the method used, the standard device identification,
the date the reference device was last calibrated or
certified, and the organization calibrating or certifying.
The pitot tube specifications provided in Method 2,
Section 2.1, should be followed strictly to prevent gas
flow interference. Type S pitot tube assemblies that
fail to meet any of the specifications of M5 Figures 2-
6 through 2-8 should be calibrated according to
Method 2, Sections 4.1.2 through 4.1.5. These steps
of the calibration procedure should be fully
documented and reported.
Providing complete documentation of all calibration
procedures should not prove to be a burden since
most firms which routinely do stack testing will already
have these documents on file. The documents need
only to be copied and added to other supporting data,
usually as an appendix. Intent to supply all calibration
documentation in the final report should be stated
clearly in the TBP.
In addition to documentation of calibration procedures,
documentation of all procedures should be required in
the final report, including filter weighing (before and
after sampling to establish constant weight), moisture
recovery, the particulate field sampling sheet as
shown in the "QA Handbook for Air Pollution
Measurement,"7 and documentation of the isokinetic
calculations. Sample calculations should be included
in sufficient detail to permit the reviewer to check all
calculations. Calibration records for the balances used
for filter and moisture collection weights should be
included in the TBR.
Inclusion of the simple statement in the TBP, "Copies
of all data will be included in the final report," should
be sufficient to assure submittal of calibration data
with the TBR. However, to avoid misunderstanding, an
itemization of the data and procedural descriptions
that will be included in the final report should be listed
in the TBP.
6.4 Hydrogen Chloride
Sampling of chloride requires employing what is
essentially a Method 5 or Method 6 sampling train.
The draft method describes method-required QC and
includes brief calibration procedures. Since the train
configurations are the same for Methods 5 and 6,
these two reference methods provide thorough cali-
bration instructions. The discussion of Method 5 in
this handbook should be used for QC on the M5
version, and appropriate QC should be employed
when the midget impinger version is used.
The field blank consists of 100 mL of absorbing
solution placed into blank train impingers, which is
recovered and transferred to storage bottles, labeled,
and returned to the laboratory for analysis. At least
one field blank should be collected at the end of the
test period.
6.5 Volatile Organic Sampling Train
(VOST)-Method 0030
The most recent method description is available in
SW-8463 (0030). The testing organization should
describe its train and procedure in considerable detail,
giving all chosen options in the method, plus any
deviations to the method. Discussion should include a
description of the sorbent tubes, the method for
cleaning and preparation of tubes, the method used
for storing and shipping the tubes, and the method
used for checking tube background. The TBP should
contain a statement that a new Teflon sample line will
be used for the trial burn and the sampling train will
use greaseless fittings and connectors.
A clear statement of the number of pairs of sorbent
tubes that will be collected during each run should be
a part of TBP. A basic run consists of at least three
pairs of sorbent tubes, each tube run until not more
than a 20-L sample has been obtained. A fourth pair is
often collected in case one pair is broken or lost
during analysis. The actual sampling time should add
up to a total of at least 1 hour; however, 2 hours is
optimal (exclusive of the time for tube changes and
leak checks). Other options should be fully explained
and justified in the TBP. One pair of field blanks
should be collected for each run (one pair of blanks
for each six pairs of samples). In addition, one
laboratory blank pair and one shipping blank pair
should be analyzed for each test series.
The sample collected should be large enough to
establish compliance, and the front and back tube
must be analyzed separately. Samples are considered
valid (no breakthrough) if the back trap contains no
more than 30% of the quantity collected on the front
trap. This criterion does not apply when the quantity
of sample is less than 75 ng on the back trap (see
Section 7.3 on VOST analysis).
The VOST method (0030) does not contain a section
on calibration of apparatus. These procedures,12
included in Appendix A of this handbook, are
recommended for VOST calibration.
35
-------
6.6 Bag Sampling
The collection of gas samples to determine volatile
organics using a Tedlar bag is listed only as a backup
technique for VOST. This alternative is seldom used
because of: (a) a lack of data on the stability of
organics in Teflon bags; (b) no ability to concentrate
analytes; (c) poor storage characteristics for many
analytes; (d) difficulties involved in shipping the bags;
and (e) the high probability of leaks in the bag. The
procedure followed is similar to that for an integrated
bag sample under Method 18. A more appropriate and
detailed procedure is being developed for inclusion in
SW-846.
General QA procedures are provided in Method 3.
These consist of leak-checking the bag and the
sample line. In addition, the ratemeter (rotameter)
should be accurate enough to permit setting a
sampling rate which allows sampling for the entire run
without overfilling the bag.
QA procedures which are specific to this method
consist of efforts to demonstrate the absence of cross
contamination and the rate of decay of the POHC of
interest. New bags should be used. A field blank filled
from a tank of high purity air or nitrogen should be
collected daily, and a minimum of two trip blanks
should be processed every week. Analysis in the field
is preferred; however, an alternate overnight delivery
of samples by air or surface vehicle to the analysis
location followed by immediate analysis will likely be
acceptable. Holding times should be kept as brief as
possible. Stability in bags must be demonstrated
before use.
6.7 Semi-Vost (SVOST)--Method 0010
The QA/QC for this method consists of verifying that
the test organization understands the correct
procedure and is following that procedure, particularly
in critical areas. Calibration of critical components is
the same as specified in Method 5. With Method
0010, the probe liner must be glass or quartz and the
filter support must be either glass, quartz, or Teflon®.
The temperature of the gas entering the sorbent trap
must be monitored, preferably every 5 minutes, and
its temperature must be held to 20 °C, or lower, but
above 0°C.
In Method 0010, procedures are specified for the
cleanup of the XAD-2 resin including a maximum 4-
week holding time. The trial burn plan should address
the resin cleanup required, and the trial burn report
should specify the date the resin was cleaned. The
report should also contain the results of the residual
methylene chloride test and the residual extractable
organics test on that resin. Alternatively, a certificate
of purity and date of preparation from the resin
supplier stating that the resin meets or exceeds the
purity specified for Method 0010 is acceptable.
To assist the reviewer, the trial burn test plan should
contain a complete description of the sampling train
assembly and a detailed diagram (not a generalized
block diagram). A complete description of the wash
and brush procedure should also be included in the
TBP. The entire train should be considered as
containing the sample, and all interior surfaces should
be considered in the recovery procedure.
All components ahead of the filter should be brushed,
and all components should be solvent-rinsed. All
particulates and liquids are considered part of the
sample. Handling of these components after sampling
should also be addressed in the TBP.
The TBP must show the calculations that will be used
to determine the required sample volume, which must
generally exceed 3 dscm (with compounds that exhibit
relatively high volatility, lower volumes could be
appropriate), and indicate the lower detection limit.
Sampling points should be clearly defined. Minimal
statements such as "all sampling will comply with the
method requirements" are insufficient.
6.8 Determination of Multiple Trace
Metal Emissions—Draft Method
The sampling train for trace metal collection (draft
method from U.S. EPA, AREAL, Source Methods
Standardization Branch, Research Triangle Park, NC
27711) is similar to a Method 5 train with five imping-
ers. Reference should be made to Method 5 in this
handbook for QA procedures referring to train
preparation, calibration, and documentation.
The target level of each metal should be stated, and
the lower detection limit (LDL) for each should be
provided by the specific analytical laboratory handling
the sample. Calculations should be provided to
demonstrate that the proposed sample volume will
meet the program requirements. Typical LDL are
reported in the draft method. This subject is also
discussed in Section 7.5 of this document concerning
analysis of the sampling train components.
A glass or Pyrex probe and filter frit are required, and
their use should be stated in the TBP. The
recommended filter is quartz and must have a metal
blank low enough to allow quantitation of the analytes
at expected concentrations. Information on the actual
levels should be provided in the TBP.
The draft method for trace metals outlines a specific
cleaning procedure for the train components. The
TBP should address this subject to demonstrate that
the test organization is aware of the requirements.
These requirements include the use of surgical gloves
36
-------
and acid-washed nylon brushes for sample recovery.
If zinc is an analyte, surgical gloves should be
checked, since some use a zinc-containing dust. As in
all procedures using a train, field blanks and a train
blank should be collected. Blanks of all reagents
should be collected in the field at the end of the test
period and every time a new reagent lot is opened or
the supply container is changed. Reagent blanks do
not require analysis unless the train blank shows high
levels of contamination. Full documentation and
reporting of all operations and procedures, including
all raw data, should be part of the TBR.
37
-------
-------
Chapter 7
QC Procedures for the Analysis of Stack Samples
This chapter of the handbook gives general
background information on analyses to be used for
stack gas samples. Topics concerning precision,
accuracy, detection limits, and calibration are defined
and discussed. If the permit writer is not familiar with
these analyses, review of these sections of the TBP,
QAPjP, and TBR should be done by qualified
personnel.
7.1 Gas Analysis for Carbon Dioxide,
Oxygen, and Dry Molecular Weight;
Methods for Moisture and
Particulates
Gas analysis may be performed following EPA Method
3 (Orsat) for excess air or emission rate correction
factor, Fyrite for dry molecular weight determination,
or instrumentally following EPA Method 3A under
specified conditions. Information on analytical
procedure, equipment identification, leak check
performance, and calculations should be reported on
data sheets.
QA/QC for gas analysis is enhanced by the analysis of
performance samples. A common practice is to use
ambient air as an audit sample. In this case, triplicate
analysis of air samples should show 20.8 ± 0.5% for
oxygen. In most atmospheric samples, the carbon
dioxide content is too low to be measured using either
the Orsat or Fyrite. Therefore, certified gases should
be obtained from specialty gas manufacturers.
Pressurized canisters containing CO2, O2, and CO in
nitrogen are available.
For CO2, analyses should agree within 0.3% when
CO2 is > 4.0% and within 0.2% when CO2 is < 4.0%.
For O2, analyses should agree within 0.3% when O2
is < 15% and within 0.2% when O2 is > 15%. For CO,
analyses should agree within 0.3%.
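As an illustration of the agreement criteria above, the following minimal Python sketch checks the spread of triplicate readings against the stated tolerances; the readings themselves are invented.

    # Check agreement of triplicate Orsat/Fyrite readings (percent gas).
    def agrees(readings, tolerance):
        return (max(readings) - min(readings)) <= tolerance

    co2 = [8.4, 8.6, 8.5]          # CO2 above 4.0%, so tolerance is 0.3%
    o2 = [10.1, 10.2, 10.3]        # O2 below 15%, so tolerance is 0.3%
    co2_tol = 0.3 if min(co2) > 4.0 else 0.2
    o2_tol = 0.3 if max(o2) < 15.0 else 0.2
    print("CO2 agreement:", agrees(co2, co2_tol))
    print("O2 agreement:", agrees(o2, o2_tol))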
When gas analysis is performed using EPA Method
3A, QA/QC consists of determining the following: that
the system was evaluated according to the procedure,
that the method was operating properly, and that
performance samples have been analyzed. Before
analysis begins, the instrument should be evaluated
for calibration errors, sampling system bias, and
calibration drift and an interference check performed.
The applicant should provide this information in the
TBR.
The average stack concentration of O2 and CO2
cannot be less than 20% of the span value, and the
minimum detection limit should be less than 2% of
span. Calibration should be performed using three
calibration gases: a high-level gas at 80% to 100% of
span, a medium-level gas at 50% to 60% of span,
and a low-level gas at 0% to 10% of span.
An audit should be performed using EPA audit
cylinders, but a suitable alternative would be to
perform a Method 3 analysis on samples obtained at
the inlet to the CO and O2 analyzers. Agreement in
either case should be within ±5%.
Moisture is determined using either EPA Method 4 or
5. Use of the Method 5 sampling train to collect HCI
does not interfere with the simultaneous determination
of water content. Neither method is valid if the stack
gas contains water droplets because the heated probe
vaporizes the water, which is then condensed in the
train and measured as moisture. Method 4 is normally
employed only as a pretest procedure to assist in
determining the proper isokinetic sampling rate. As
such, the method needs only to be approximated. For
either Method 4 or Method 5, QA/QC consists of
determining the moisture collected in the impingers
and should be determined to the nearest 0.5 mL using
either volume or gravimetric procedures. A graduated
cylinder with subdivisions no greater than 2 mL or a
laboratory balance capable of weighing to the nearest
0.5 g or less is suitable.
Particulates are determined gravimetrically by
collection on a filter, drying, and weighing. The filters
should be dried to a constant weight, which is defined
as two successive weighings at a 6-hour interval
showing a weight change of less than 0.5 mg.
Before and after each set of filter weighings, the
balance should be checked by weighing a check
weight of approximately the same weight as the filter
assembly being weighed. If the check weight
disagrees by more than ±0.5 mg, the weighing
should be repeated.
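The constant-weight and balance checks described above can be summarized in a brief Python sketch; all weights below are hypothetical, and only the 0.5-mg criteria come from the text.

    # Constant-weight and balance checks for filter weighing (weights in mg).
    def constant_weight(w1, w2):
        # Two successive weighings at a 6-hour interval.
        return abs(w1 - w2) < 0.5

    def balance_ok(check_weight_reading, check_weight_true):
        return abs(check_weight_reading - check_weight_true) <= 0.5

    print("filter at constant weight:", constant_weight(412.7, 412.4))
    print("balance check acceptable:", balance_ok(500.2, 500.0))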
39
-------
Analytical procedures should be fully documented.
Copies of the documents should be provided in the
TBR. Filters should be identified with a unique
number, traceable from field to analysis records.
7.2 Hydrogen Chloride
7.2.1 General
Analysis of HCI can be done by many techniques
(e.g., silver chloride precipitation, titration, or
colorimetry); however, the current guidance10 indicates
that ion chromatography by ASTM Method D-4327 or
EPA Method 300.0 is preferred. The QC procedures
described in this section are tailored to the ion
chromatography method, although the general
principles are applicable to other methods. The
precision and accuracy of all the methods are similar.
The ion chromatography method is free of some of
the interferences of the other methodologies that can
lead to a positive bias in the chloride results.
Usually in chloride analysis there are two or more
impinger samples from each run. The early impingers
usually contain at least 80% of the chloride and are
the more critical samples in making the regulatory
decision. If a chloride sample is lost during shipment,
and the lost sample is from the front impingers, the
data for that run are unusable. If the back impinger
samples are lost, and the back impingers from the
other two runs show relatively low chloride levels, the
average distribution for front-to-back impingers can
be used to estimate the concentration of the lost
impinger, and the data may still be usable for
regulatory purposes.
The specific QA/QC elements of which to be aware
for this determination are:
• Calibration of the analytical system.
• Accuracy using calibration check standards and
spiked samples.
• Precision by multiple analysis of samples.
• Detection limit.
7.2.2 Calibration
Qualitative Concerns--For most chromatography
procedures, qualitative identification is based upon the
retention time of the analyte or its retention time
relative to an internal standard. However, for ion
chromatography, the chloride ion will exhibit a
changing retention time with different concentration
levels. In some cases, the influence of the sample
matrix can cause a shift in retention. Four acceptable
ways to ensure the identity of the chloride component
in the chromatogram are:

• Internal Standard--An internal standard, such as
sulfate, can be added to all the standards and
samples, and retention time measured relative to
the retention time of the sulfate ion. (The peak is
considered chloride if the relative retention time
[RRT] of the peak is within three standard
deviations of the average RRT observed during
initial calibration.) Most stack gas samples will
contain some sulfate (sulfuric acid is added to
the first impinger in the draft sampling method),
and in many cases sulfate will have to be added
only to the standard solutions.

• Average Retention Time--The average retention
time of the calibration standards is computed. All
peaks within three standard deviations of that
time are considered to be chloride.

• Retention Time Range--The retention time range
of the standard (high to low concentration) is
used; any peak within the range of the chloride
retention time seen in the standards is
considered chloride. For this method sample
concentrations must be bracketed by standard
concentrations.

• Spike Confirmation--All samples are first
analyzed by one of the above three techniques
for identification of the chloride ion. Each sample
is then spiked with chloride at a level twice the
approximate sample level. The chromatogram of
the spike must exhibit a single peak in the
retention time window for confirmation of the
chloride ion. If two peaks are observed in the
spike sample chromatogram, no chloride is
present.

Irrespective of which one of the first three methods is
followed, the spike confirmation technique should be
used for any sample in which identity criteria are
suspect because of interference peaks (poor separation
of chloride from other stack gas components) or
any samples in which the identification is marginal.

Qualitative concerns must be addressed in the TBP or
QAPjP. They are not covered in the ASTM method;
therefore, merely citing the standard methodology
does not address qualitative identification.
Quantitation--The chloride levels in the waste and the
theoretical efficiency of the air pollution control device
are known parameters. Thus, the expected
concentration range for the chloride levels in the
impingers should be determined in advance. The
calibration range for the instrument should consist of
at least four standards that bracket the expected
sample levels and are presented in the TBP or QAPjP.
Sometimes two calibration curves are used, one for
high-level samples and one for low-level samples. Any
sample with a concentration greater than the highest
40
-------
standard should be diluted into the calibration range.
The linearity criterion for acceptance of the standard
curve is that a plot of the standard response versus
standard concentration must yield a linear correlation
coefficient greater than 0.995. If this criterion cannot
be met, sample analysis should not be carried out
until linearity can be demonstrated over the entire cali-
bration range.
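A minimal Python sketch of the linearity check follows; the four standard concentrations and peak areas are invented, and only the > 0.995 criterion is taken from the text.

    import math

    # Pearson correlation coefficient of detector response vs. standard
    # concentration; the calibration is acceptable when r > 0.995.
    def correlation(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / math.sqrt(sxx * syy)

    # Hypothetical four-point chloride curve (mg/L vs. peak area).
    conc = [1.0, 5.0, 20.0, 100.0]
    area = [980.0, 5050.0, 19800.0, 101500.0]
    r = correlation(conc, area)
    print(f"r = {r:.4f}",
          "acceptable" if r > 0.995 else "do not analyze samples")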
Calibration of the analytical system should be checked
on a regular basis. A calibration standard close to the
expected chloride concentration in the front impinger
should be analyzed after every 10 samples and at the
end of the analysis period. The concentration of this
standard determined from the calibration curve must
be within 10% of the theoretical value. Every group of
10 samples must be bracketed by two successful
calibrations-one preceding sample analysis and one
following. If the calibration check following sample
analysis does not meet the criteria, it should be
repeated; if it fails a second time, the analysis system
should be regenerated and the samples following the
last successful calibration should be reanalyzed.
All initial and continuing calibration results must be
reported in the TBR.
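The bracketing and reanalysis rules above amount to a small decision procedure, sketched below in Python. The check-standard concentration and measured values are hypothetical; the 10% window is the criterion stated in the text.

    # Each group of 10 samples is bracketed by calibration checks; a check
    # passes when the measured value is within 10% of the theoretical value.
    def check_passes(measured, theoretical, window=0.10):
        return abs(measured - theoretical) <= window * theoretical

    # Hypothetical sequence: (group label, closing check measured vs. 50 mg/L).
    groups = [("samples 1-10", 51.2), ("samples 11-20", 43.8)]
    for label, measured in groups:
        if check_passes(measured, 50.0):
            print(label, "bracketed by a passing check")
        else:
            print(label, "check failed; repeat it, and reanalyze the group "
                         "if it fails again")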
7.2.3 Accuracy Determination
Calibration--A five-point calibration curve is usually
prepared from a single stock solution of the reference
material. This means that all stack sample results are
traceable to that one weighing. As a check on the
validity of the calibration and the identity of the
reference material, a calibration check standard
should be analyzed. This calibration check standard
must be at the chloride concentration level expected
in the front impinger and must be analyzed after each
initial calibration curve and before sample analysis.
This standard should be prepared from a stock
solution obtained from a different source than the
calibration standards. The stock solution concentration
should be certified by the manufacturer.
The calibration check standard should be within the
same accuracy window as that used for continuing
calibration, i.e., 90% to 110% of the expected
concentration. If this criterion cannot be achieved, the
analytical problem should be identified and rectified
before sample analyses are begun. The results for all
calibration check standards with the appropriate
calibration curve results should be presented in the
TBR.
Spikes--A minimum of one front and one back
impinger sample from each run should be spiked with
chloride at a level of not more than 3 times the
theoretical sample level. The samples should be
spiked at the beginning of sample preparation.
Accuracy results must be reported in the TBR, and
the accuracy criterion is 85% to 115% of the amount
spiked.
7.2.4 Precision
Precision must be determined by duplicate preparation
and analysis of a front and back impinger sample from
at least one run. Given the relatively inexpensive
nature of this analysis, duplicate chloride
determinations for all samples are highly recom-
mended. Experience has shown that an incinerator
operator is more likely to be denied a permit based
upon chloride (and particulate) emissions than upon
DRE determination. Precision is calculated as percent
range and must be less than 25%. If the sample
results are within 5 times the detection limit, the RPD
should be below 50%.
All precision data must be calculated and reported in
the TBR.
7.2.5 Detection Limit Determination and
Method Blanks
Ordinarily, all impinger samples will contain chloride.
Since chloride is virtually ubiquitous, most method
blanks also will contain some chloride. Sample results
should not be corrected for levels of chloride in the
blanks. Usually, blank values are very low in
comparison with sample results.
In the rare instance in which no chloride is detected in
the impingers, a detection limit must be determined.
The method of determining the detection limit should
be reported along with the sample results. All detec-
tion limits must be adjusted for any sample dilution
and the total contents of the impinger. If the blank
value is higher than the detection limit, the detection
limit should be set at 1.5 times the blank level.
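A short Python sketch of the reporting-limit logic described above follows; the method detection limit, blank level, dilution factor, and impinger volume are invented, and only the 1.5-times-blank rule and the dilution/volume adjustment come from the text.

    # Reporting limit for chloride when none is detected: raise the method
    # detection limit to 1.5 times the blank if the blank exceeds it, then
    # adjust for dilution and the total impinger contents (result in mg).
    def reporting_limit(mdl_mg_per_l, blank_mg_per_l,
                        dilution_factor, impinger_volume_l):
        dl = 1.5 * blank_mg_per_l if blank_mg_per_l > mdl_mg_per_l else mdl_mg_per_l
        return dl * dilution_factor * impinger_volume_l

    # Hypothetical: 0.2 mg/L MDL, 0.05 mg/L blank, 10x dilution, 0.25 L.
    print(f"{reporting_limit(0.2, 0.05, 10, 0.25):.2f} mg")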
7.2.6 Summary of QC Procedures
A summary of the QA/QC procedures for chloride
determination is presented in Table 7-1. Each
parameter must be reported in the TBR and
acceptance of sample results must be justified by the
applicant if the QA/QC procedure was omitted or if the
criteria were not met. The QA/QC procedures related
to calibration and calibration accuracy must be
completed and must be within the criteria before
sample analysis begins. Deviations from the criteria
after the trial burn should not be accepted.
In the case of poor accuracy, low recovery could
indicate that the calculated emission rate was lower
than the actual, while high recovery could indicate that
the calculated rate was higher than the actual. Any
accuracy difficulties connected to the back impingers
should not be considered a problem unless the
chloride distribution between the front and back
impingers indicates that the back impinger contains a
41
-------
Table 7-1. Summary of QA/QC for Chloride Determination

Calibration-qualitative
  Method of determination: Relative retention time
  Frequency: Every calibration curve
  Criteria: ± 3 standard deviations of average

Calibration-qualitative
  Method of determination: Average retention time
  Frequency: Every calibration curve
  Criteria: Within retention time window of standards

Calibration-quantitative
  Method of determination: Initial calibration with a minimum of four standards
  Frequency: At least once before sample analysis
  Criteria: Linear correlation coefficient > 0.995

Calibration-quantitative
  Method of determination: Continuing calibration
  Frequency: Every 10 samples and at end of day
  Criteria: 90%-110% of theoretical concentration

Accuracy-calibration
  Method of determination: Certified reference solution
  Frequency: After every initial calibration, before sample analysis
  Criteria: 90%-110% of theoretical concentration

Accuracy-spikes
  Method of determination: One front and one back impinger spiked at no more than 3 times native level
  Frequency: Once per test
  Criteria: 85%-115% recovery

Precision
  Method of determination: One duplicate preparation of both a front and back impinger
  Frequency: Once per test
  Criteria: ± 25% range; if less than 5 times detection limit, ± 50% range

Detection limit
  Method of determination: Method must be reported in TBR
  Frequency: Only if a sample is reported beneath limit
  Criteria: NA

Blank
  Method of determination: One method blank carried through sample preparation and analysis
  Frequency: One per test
  Criteria: Less than 5% of sample levels
significant amount of the chloride collected and must
be considered in the regulatory decision.
If precision is poor, the laboratory should reprepare
and reanalyze the samples. Chloride determinations
are not expensive in light of the importance of the
chloride data to the regulatory decision. Sample
preparation usually entails only dilution. If subsequent
work indicates a systematic problem with precision,
contamination in the analytical laboratory or a matrix
effect from other stack gas components could be
indicated. If precision is poor, the highest
value for the samples can be used (instead of the
average). This high chloride value would provide a
conservative estimate of whether chloride levels are in
compliance with regulations.
7.3 Volatile Organic Sampling Train
(VOST)--Method 0030/5040
7.3.1 General
The primary method for collection of volatile principal
organic hazardous constituents (POHCs) from the
stack gas effluents of hazardous waste incinerators is
the Volatile Organic Sampling Train (VOST), as
described in Method 0030 of SW-846. The analysis
method for VOST is described in the "Protocol for
Analysis of Sorbent Cartridges from Volatile Organic
Sampling Train" as described in Method 5040 and
Method 8240 of SW-846.3 According to EPA
guidance, VOST is preferred over integrated bag
sampling.
Volatile POHCs, generally with boiling points between
30° and 100°C, are collected from a gaseous effluent
source at rates typically from 0.5 to 1.0 L/min and
trapped on a pair of traps comprised of Tenax (front
trap) and Tenax/charcoal (back trap). A maximum of
20 L of sample is run through each pair of traps, and
up to six pairs of sorbent traps may be used to
complete a test run.
The analytical method for VOST is based on the
quantitative thermal desorption of volatile POHCs from
the Tenax and Tenax/charcoal traps and analysis by
purge-and-trap GC/MS.
The specific QA/QC elements of which the permit
writer should be aware are:
• Sample handling/blank results.
• Calibration of the GC/MS.
• Method performance at the 99.99% DRE level.
• Accuracy and precision determinations.
• Breakthrough ratios of POHCs on trap pairs.
• Detection limit determination.
7.3.2 Method Performance
The primary concern of the permit applicant and writer
is whether an analytical system is available that can
be used to reliably identify and quantify the sampled
POHCs at the expected stack concentration where the
99.99% DRE is achieved. The QAPjP should present
all estimated POHC concentrations in both the Tenax
and Tenax/charcoal trap at the DRE critical level
(concentration at 99.99% DRE) plus the calibration
42
-------
range of the GC/MS. The concentration of the POHCs
at 99.99% DRE should be at least one order of
magnitude greater than the concentration of the
lowest standard. Highly efficient incinerators might be
able to achieve DREs significantly better than
99.99%. In such cases, the 99.99% level would be at
the high end of the calibration curve. If the
concentration is not at least one order of magnitude
greater than the lowest standard concentration, the
permit reviewer should request that the applicant
reevaluate the sampling strategy and calibration to
ensure proper mass of POHCs in the samples and
reliable identification and quantitation.
VOST analysis is not always a simple undertaking.
The permit applicant should demonstrate in the TBP
the analytical methodology that will be used for the
POHCs. Four ways are available to demonstrate
performance.
1. Presenting surrogate POHC recoveries from past
trial burns.
2. Presenting POHC recoveries from two VOST
cartridges spiked with POHCs, prepared and
analyzed in advance specifically for this trial
burn.
3. Performing an analysis of a VOST audit cylinder
containing the POHCs of interest.
4. Conducting a miniature trial burn in advance and
presenting the results of surrogate POHC
analysis. (This advance burn is especially helpful
if other stack gas components are expected to
significantly interfere with the sample analysis.)
Given the time and money expended on a trial burn,
successful analysis of the sample should be
supported by performance data, not left to theoretical
performance. If these data are not presented in the
TBP, the permit reviewer should request performance
results from the applicant or a justification of why they
are not needed. Average recoveries of the POHCs or
surrogate POHCs should be between 50% to 150%. If
they are not, the permit writer should request
justification of the experimental design.
7.3.3 Sample Handling
The quantitation of a particular volatile POHC depends
on the level of interference and the presence of
detectable levels of volatile POHCs in the blanks.
Interference arises primarily from background
contamination of sorbent traps before or after sample
collection, usually from exposure to solvent vapors
during preparation or from ambient air at hazardous
waste incinerator sites.
Because of this potential for contamination, numerous
field blanks must be analyzed with the field samples to
demonstrate that background levels and sensitivity are
acceptable and/or to identify the source of any
contamination.
Three types of blanks should be reported with the
VOST sample results: field blanks, trip blanks, and
laboratory or system blanks.
• Field blanks are VOST traps taken to the field
and uncapped during changeovers to simulate
exposure to ambient conditions. A minimum of
one pair of field blanks is required with each six
pairs of traps collected.
• Trip blanks are VOST traps transported to and
from the field and included with each shipment
of samples back to the laboratory. These blanks
are intended to demonstrate that no cross-
contamination of samples has occurred during
storage and shipment.
• Laboratory blanks are VOST traps that are not
sent to the field but remain in the laboratory.
They are analyzed daily after high-level samples
or if high levels of contaminants are found in the
field or trip blanks.
All blanks must be identified in the QAPjP. However,
VOST blank trap results should not be used routinely
to correct trial burn results. Blanks should be used to
correct results only if they are found to be statistically
different from the samples as outlined in the method
(0030) and the two guidance manuals.5,9 For all cases
where a blank correction is used, both corrected and
uncorrected emission data should be presented.
Improper handling of samples may affect the analyses
either by giving the samples a high bias, which would
lower DRE results, or in extreme cases increasing the
method detection limit so that it falls above the
concentration levels required for meeting DRE
regulations.
7.3.4 Calibration
Calibration criteria for VOST trap analysis are listed in
Method 5040 and in Method 8240 "Gas
Chromatography/Mass Spectrometry for Volatile
Organics."3 The primary objective of calibration for
VOST addresses the POHCs and POHC surrogates.
Stock standard solutions should be prepared from
EPA-supplied standard materials or purchased as
certified solutions. If EPA reference material is not
available, all POHC standard materials should be
characterized for identity and purity. The source and
purity of all POHC standards should be reported in the
TBR.
Fresh stock standards should be prepared weekly for
volatile POHCs with boiling points of < 35°C; all
43
-------
other standards should have been prepared no earlier
than 30 days prior to analysis.
A minimum of three concentration levels for each
analyte of interest is required for calibration. Each
calibration standard should be analyzed on both a
Tenax and a Tenax/charcoal cartridge; response
factors for both traps are used for determining quality
control acceptance and for quantitation of sample
results. To ensure adequate sensitivity of the
analytical system, the calibration range should bracket
the 99.99% DRE POHC concentration level. The
expected POHC concentration in both the Tenax and
Tenax/charcoal traps (at 99.99% DRE) must be
presented in the QAPjP and must be shown to be at
least 10 times the level of the lowest calibration
standard. Some applicants know their incinerator is
capable of achieving a much better DRE than
required. In these cases, the 99.99% level might be
above the calibration range. A decision of this type
represents some risk to the permit applicant. Should
incinerator performance not be significantly better than
required to meet DRE, analysis results will be out of
calibration range. Sometimes the calibration range can
be extended by immediately analyzing a higher
concentration standard, but results outside the
calibration range are unacceptable and generally
rejected.
Quantitation is performed for all standard data using
the internal standard method for determining relative
response factors (RRF). If the RRF value over the
working range is a constant (< 20% RSD), the
average RRF may be used to calculate concentration
of POHCs in samples; alternatively, the results can be
used to plot a calibration curve of response ratios
(area of standard/area of internal standard) vs. concentration. Some
TBP and QAPjP give a criterion of ±25% for the
average RRF and do not allow the optional calibration
method. The working calibration curve or RRF must
be verified each working day, or every 12 hours of
operation, by the analysis of a continuing calibration
standard. The continuing calibration check is valid if
the RRF falls within ±25% of the initial calibration
data. If this check does not meet the criterion, the
standard should be reanalyzed. If the second check
does not meet the criterion, the acceptance of sample
results from the last successful check must be
justified. All initial and continuing calibration checks
must be reported in the TBR.
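The relative response factor calculation and the 20% RSD test can be illustrated with the following minimal Python sketch. It uses the usual internal-standard definition of RRF; the areas and amounts for the three calibration levels are invented.

    import statistics

    # RRF for one calibration level:
    # RRF = (analyte area x IS amount) / (IS area x analyte amount)
    def rrf(analyte_area, analyte_ng, is_area, is_ng):
        return (analyte_area * is_ng) / (is_area * analyte_ng)

    # Hypothetical three-level curve for one POHC.
    levels = [  # (analyte_area, analyte_ng, is_area, is_ng)
        (12000, 20.0, 50000, 50.0),
        (61000, 100.0, 51000, 50.0),
        (300000, 500.0, 49500, 50.0),
    ]
    rrfs = [rrf(a, c, ia, ic) for a, c, ia, ic in levels]
    rsd = 100.0 * statistics.stdev(rrfs) / statistics.mean(rrfs)
    print(f"average RRF = {statistics.mean(rrfs):.3f}, RSD = {rsd:.1f}%")
    print("use average RRF" if rsd <= 20.0
          else "plot a calibration curve instead")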
Internal standard responses and retention times must
also be monitored during data acquisition. The internal
standard retention time should not change by more
than 30 seconds from the last calibration check. The
internal standard areas in samples must be within
65% to 135% of the area observed in the last
continuing calibration standard analysis. If either of
these parameters changes during sample analysis, a
calibration standard check should be performed. For
samples in which a low internal standard area occurs,
the fourth VOST trap pair may be analyzed once the
analysis difficulty has been corrected.
7.3.5 Precision and Accuracy Determination
To establish the precision and accuracy of the
analysis, triplicate Tenax and Tenax/charcoal traps
must be spiked with each POHC and surrogate POHC
and analyzed immediately following the initial
calibration and before sample analysis. The spiking
level must be at the expected POHC mass if 99.99%
DRE has been achieved. The spiking standard must
be prepared from stock standards separate from those
used for calibration and, preferably, prepared by differ-
ent personnel to avoid any systematic bias. Recovery
for each POHC and surrogate POHC should be within
75% to 125% of spiked value. A low POHC recovery
may indicate an artificially high calculated DRE;
however, a high bias may indicate that the calculated
DRE may be artificially low. The relative standard
deviation associated with each analyte should be less
than 25%.
The average recovery from this determination should
be used as an acceptance criterion for sample results.
The surrogate recovery in each sample must be within
three standard deviations of the average recovery
obtained from the initial precision and accuracy
determination. If Tenax and Tenax/charcoal traps give
equivalent recoveries, the overall average and
standard deviation for both traps may be used.
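The acceptance window for per-sample surrogate recoveries can be computed as in the brief Python sketch below; the triplicate recoveries and the sample recovery are invented, and only the three-standard-deviation rule comes from the text.

    import statistics

    # Window: within three standard deviations of the average recovery
    # from the initial precision and accuracy determination.
    initial_recoveries = [92.0, 104.0, 97.0]     # percent, triplicate spikes
    mean = statistics.mean(initial_recoveries)
    sd = statistics.stdev(initial_recoveries)
    low, high = mean - 3 * sd, mean + 3 * sd

    sample_recovery = 81.0
    print(f"window = {low:.0f}% to {high:.0f}%")
    print("acceptable" if low <= sample_recovery <= high
          else "justify acceptance of this sample")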
In addition to the precision and accuracy
determinations, an EPA performance audit must be
completed during a trial burn as a check on the entire
VOST system (see Section 3.5). An EPA audit
cylinder is sampled during the trial burn by the same
person on multiple traps at the same time, and using
the same analytical procedure as for the regular
VOST trial burn samples. Generally, four pairs of traps
are taken and three are analyzed, with one pair saved
as a backup. All analyzed pairs should be reported.
The criterion for acceptability of the EPA audit cylinder
results is 50% to 150% of the audit value. A recovery above
the limit is sometimes less of a problem (where DRE
is concerned) than a low recovery (DRE could be
artificially high).
7.3.6 Detection Limit Determination
For each POHC, a detection limit (DL) must be
determined. The DL is a critical parameter since
POHCs are not detected in many trial burns and the
DRE is based upon the DL. The method of
determining the DL can vary from laboratory to
laboratory. However, if the 99.99% DRE level is within
the calibration range, the DL is not critical in
determining achievement of the performance
standard. The method for DL determination must be
described in the QAPjP. If this subject has not been
addressed, the permit writer should request that the
44
-------
applicant supply this information. Guidance is being
developed for detection limit determination in
hazardous waste incineration.
7.3.7 Breakthrough Ratios of POHC
The front and back VOST traps should be analyzed
separately to determine POHC breakthrough to the
charcoal adsorbent. The analysis of the
Tenax/charcoal trap should indicate less than 30% of
the POHC collected on the front Tenax trap. Break-
through of the POHC to the charcoal trap above this
level may cause loss of desorption efficiency and
result in a negative bias in the DRE calculations. This
criterion does not apply when less than 75 ng is
detected on the back trap.
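The breakthrough test for a single trap pair reduces to the check sketched below in Python; the nanogram amounts are hypothetical, and the 30% and 75-ng figures are those given above.

    # Breakthrough check for one VOST trap pair (nanograms of a POHC).
    # The pair is valid when the back (Tenax/charcoal) trap holds less
    # than 30% of the amount on the front (Tenax) trap, or when the back
    # trap holds less than 75 ng.
    def pair_valid(front_ng, back_ng):
        if back_ng < 75.0:
            return True
        return back_ng < 0.30 * front_ng

    print(pair_valid(front_ng=800.0, back_ng=60.0))   # True: back trap below 75 ng
    print(pair_valid(front_ng=800.0, back_ng=300.0))  # False: 37% breakthrough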
7.3.8 VOST Condensate Analysis
The condensate from the sampling train also has to
be analyzed by purge and trap GC/MS, SW-846
Method 8240.3 The QC procedures consist of spiking
with the surrogate POHCs. The accuracy is calculated
by the recovery of the POHC, which should be
between 50% and 150%, and precision is calculated
as the relative standard deviation of the surrogate
recovery from trial burn samples and should be less
than 35%. If the sample is sufficient, the precision can
be determined by duplicate analysis of one run's
condensate sample and calculated as the percent
range of the POHC levels found in both analyses. A
method for the determination of detection limit of the
POHC in the condensate needs to be identified in the
TBP or QAPjP and the results of the determination
reported in the TBR.
7.3.9 Summary of QA/QC Procedures
A summary of QA/QC procedures for the VOST
method is presented in Table 7-2. Each quality
parameter must be reported in the TBR, and sample
results must be justified by the applicant if the QA/QC
procedure has not been performed or the criteria were
not met. QA/QC procedures related to calibration and
the precision and accuracy determinations must be
complete and must be within the established criteria
before sample analysis is initiated.
In assessing the QA/QC results for acceptance of
sample data, precision and accuracy problems are not
as critical if the calculated DRE is much larger than
99.99%. For example, a surrogate recovery below the
criterion is not as critical for a DRE of 99.9999% as it
is if the DRE is only 99.99%. Sample results should
be corrected for low surrogate recovery.
Both segments of a VOST trap pair must be analyzed,
preferably separately. However, sometimes a single
segment of the pair is lost due to breakage. In this
case, one of the backup VOST pairs should be
analyzed and used in the DRE calculation. Also, since
the compounds quantitated in the VOST analysis are
by nature highly volatile, the time between sample
collection and analysis for all samples should be
reported in the same table with VOST results. The
recommended holding time is generally 14 days.
7.4 Semivolatile Organic Sampling Train
(SVOST)-Method 0010
7.4.1 General
The primary method for SVOST is 0010 (SW-846).3
There are three matrices from the SVOST train: (a)
the XAD resin and filter, each prepared separately by
Soxhlet extraction; (b) the aqueous condensate and
impingers, each prepared by solvent extraction; and
(c) the organic solvent rinses of the probe and train
components, each prepared by solvent extraction.
Usually, these matrices are analyzed by GC/MS
Method 8270,3 using isotopically labeled analogs
(surrogates) of the POHCs or compounds chemically
similar to the POHCs. This section covers QA/QC
elements outside the scope of Method 8270 that need
to be addressed. Discussion focuses on the POHCs
and the concentration of final sample extracts at the
99.99% DRE level. Analysis methods which are not
GC/MS are discussed briefly in Section 7.4.8.
The specific QA/QC elements of which to be aware
are:
• Demonstration of method performance at the
99.99% DRE level before the trial burn.
• Calibration of the GC/MS.
• Determination of accuracy using calibration
check standards, spiked samples, -and
surrogates.
• Determination of precision by multiple analysis of
samples.
• Determination of the detection limit.
7.4.2 Method Performance
The primary concern of the applicant and permit writer
is whether an analytical system is available that can
be used to reliably identify and quantify the POHCs in
the SVOST fractions. All estimated POHC concentra-
tions in the SVOST fractions at the critical DRE level
(concentration at 99.99% DRE) and the calibration
range of the GC/MS should be presented in the
QAPjP. If possible, the concentration of the POHC in
the SVOST fractions should fall in the middle of the
calibration curve, but at least 10 times above the
concentration of the lowest standard. If not, the permit
reviewer should request that the applicant reevaluate
the sampling strategy, sample preparation methods,
and calibration to ensure proper mass of POHC in the
samples for reliable identification and quantitation.
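The sketch below illustrates how this comparison can be made; the feed rate,
stack gas flow, sampled gas volume, final extract volume, and lowest
calibration standard are all values assumed only for the example, not method
requirements.

# Sketch (assumed values throughout): estimate the POHC concentration in the
# final SVOST extract that corresponds to exactly 99.99% DRE and compare it
# with the GC/MS calibration range.
feed_rate_g_per_hr = 50000.0        # POHC feed rate (assumed)
stack_flow_dscm_per_hr = 30000.0    # dry standard stack gas flow (assumed)
sampled_volume_dscm = 3.0           # gas volume pulled through the train (assumed)
extract_volume_ml = 10.0            # final concentrated extract volume (assumed)

# Allowable emission rate at exactly 99.99% DRE
emission_g_per_hr = feed_rate_g_per_hr * (1 - 0.9999)

# Stack gas concentration at the DRE critical point, micrograms per dscm
stack_conc_ug_per_dscm = emission_g_per_hr / stack_flow_dscm_per_hr * 1.0e6

# Mass collected by the train and resulting extract concentration (ng/uL)
mass_collected_ug = stack_conc_ug_per_dscm * sampled_volume_dscm
extract_conc_ng_per_ul = mass_collected_ug * 1000.0 / (extract_volume_ml * 1000.0)

lowest_standard_ng_per_ul = 1.0     # lowest calibration standard (assumed)
print(round(extract_conc_ng_per_ul, 1))                          # 50.0 ng/uL here
print(extract_conc_ng_per_ul >= 10 * lowest_standard_ng_per_ul)  # True here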
Table 7-2. Summary of QA/QC Procedures for VOST

Demonstrated ability to do analyses
  Method of determination: (1) Historical data for surrogates, or (2) spiked
  trap recovery of POHC, or (3) surrogate results from another incinerator
  trial burn, or (4) VOST audit cylinder analysis
  Frequency: Before trial burn
  Criteria: 50%-150% recovery

Blanks-sample integrity and field contamination
  Method of determination: Field blanks, 1 pair of traps
  Frequency: 1 pair per 6 samples
  Criteria: Less than lowest standard

Blanks-verify no cross-contamination in storage and shipment
  Method of determination: Trip blanks, 1 pair of traps
  Frequency: 1 pair with each shipment container
  Criteria: Less than lowest standard

Blanks-verify no laboratory contamination and system control
  Method of determination: Lab blanks, 1 pair of traps
  Frequency: Daily, before analysis of samples and in between high-level
  samples
  Criteria: Less than lowest standard

Initial calibration of GC/MS
  Method of determination: 3 to 5 standards bracketing DRE level
  Frequency: Prior to sample analysis
  Criteria: Variability of average RRF ≤ 20% RSD

Continuing calibration
  Method of determination: Mid-level standard
  Frequency: Prior to sample analysis, then every 12 h, or after sample set
  Criteria: RRF within ±25% of initial calibration RRF

Consistency in chromatography
  Method of determination: Monitor internal standard retention time and area
  Frequency: Every sample, standard, and blank
  Criteria: Retention time within ±30 sec of last calibration check; area
  within 65% to 135% of last daily calibration check

Precision and accuracy
  Method of determination: Replicate analysis of 3 traps spiked with a
  standard independent of calibration standards at the expected level of
  99.99% DRE
  Frequency: Demonstrated prior to sample analysis
  Criteria: 75%-125% recovery; ±25% RSD

Continuing accuracy check
  Method of determination: Spike each sample with surrogate POHC
  Frequency: Every sample
  Criteria: Within 3 standard deviations of the initial accuracy found during
  the precision and accuracy determination

Verification of VOST system accuracy
  Method of determination: Analysis of samples from EPA audit cylinder
  Frequency: Required with each trial burn
  Criteria: Within 50%-150% of certified concentration

Detection limit
  Method of determination: Open to choice by applicant
  Frequency: At least once for each POHC if limit is used in DRE calculation
  Criteria: NA

Breakthrough determination
  Method of determination: Separate analysis of front and back traps
  Frequency: Every pair
  Criteria: Tenax/charcoal trap must have less than 30% of POHC amount on
  Tenax trap (does not apply if there is less than 75 ng POHC on back trap)

VOST condensate: precision and accuracy
  Method of determination: Surrogate POHCs spiked
  Frequency: All trial burn condensate samples
  Criteria: Recovery between 50%-150%; relative standard deviation of all
  recoveries < 35%
SVOST sample preparation and analysis cannot be
oversimplified or streamlined; the procedure involves
multiple manipulations prior to a relatively complex
analysis. No single analytical technique is
applicable because POHCs exhibit diverse chemical
properties. Method 0010 allows the selection of an
appropriate extraction method and solvents to
optimize the recovery of POHCs. In the TBP, the
permit applicant should document the performance of
the analytical methodology for the POHCs. This can
be accomplished in three ways:
• Presenting surrogate POHC recoveries from past
trial burns, or
• Presenting POHC recoveries from a train spiked
with POHCs, prepared and analyzed in advance
specifically for this trial burn, or
• Conducting a miniature trial burn in advance and
presenting the results of surrogate POHC
analysis. (This advance burn is a particularly
useful option if other stack gas components are
expected to significantly interfere with sample
analysis.)
Given the time and money expended on a trial burn,
successful analysis of the sample should be
supported by performance data, not left to theoretical
performance. If these data are not presented, the
permit reviewer should request performance results or
a justification of why they are not needed. Average
recoveries of the POHC or surrogate POHCs should
range from 50% to 150%. If they do not meet these
criteria, the permit reviewer should request justification
of the experimental design.
7.4.3 Calibration
The primary calibration objectives for SVOST analysis
concern the POHCs and the surrogate POHCs.
Method 8270 is designed for a broad spectrum of
compounds, but the trial burn itself focuses on a
limited number of compounds (approximately one to
three). Method 8270 requires a five-point calibration
curve for each analyte, and the relative standard
deviation (RSD) of the average RRF must be < 30%.
A continuing calibration standard (CCS) must be ana-
lyzed every 12 hours and at the end of each analysis
day. The RRF for the CCS must be within 30% of the
average RRF.
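A minimal sketch of these calibration checks, using invented peak areas and
amounts, is given below; the RRF convention assumed is the usual internal
standard form (analyte area times internal standard amount, divided by
internal standard area times analyte amount).

# Sketch of the initial and continuing calibration checks described above,
# with invented peak areas and on-column amounts.
from statistics import mean, pstdev

def rrf(a_analyte, a_istd, c_analyte, c_istd):
    return (a_analyte * c_istd) / (a_istd * c_analyte)

# Five-point initial calibration for one POHC:
# (analyte area, internal std area, analyte ng, internal std ng), all assumed.
levels = [
    (12000, 50000, 10, 40), (25000, 51000, 20, 40), (61000, 49000, 50, 40),
    (120000, 50500, 100, 40), (240000, 50200, 200, 40),
]
rrfs = [rrf(*lv) for lv in levels]
avg_rrf = mean(rrfs)
rsd = pstdev(rrfs) / avg_rrf * 100.0
print(f"average RRF {avg_rrf:.3f}, RSD {rsd:.1f}% (criterion: < 30% RSD)")

# Continuing calibration standard: RRF must be within 30% of the average RRF.
ccs_rrf = rrf(58000, 50000, 50, 40)
drift = abs(ccs_rrf - avg_rrf) / avg_rrf * 100.0
print(f"CCS drift {drift:.1f}% (criterion: within 30%)")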
The criteria for both initial and continuing calibration
are critical parameters and need to be calculated and
evaluated before sample analysis. However, in
addition to monitoring the CCS analysis, the POHCs
should be included in the CCS and meet the same
calibration criteria. The POHCs are the critical
analytes upon which the primary regulatory decisions
are based. Sample analysis should not proceed until
the analytical problem has been rectified and the
criteria met. All reported sample results must be
bracketed by two successful CCS analyses. If
samples are analyzed and the end-of-day (or 12 hour)
CCS analysis does not meet the criteria, the CCS
should be reinjected before any corrective action is
taken. If the CCS still fails to meet the criteria, all
samples analyzed since the last acceptable CCS
should be reanalyzed and the initial analysis data
rejected. Sample results should not be reported from
a GC/MS system that does not meet the calibration
criteria. All initial and continuing calibration data for the
POHCs and surrogates must be included in the TBR.
7.4.4 Accuracy Determination
7.4.4.1 Calibration
The five-point calibration curve is usually prepared
from a single stock solution of the reference material.
This means that all stack sample results are traceable
to that one standard preparation. A calibration check
standard should be analyzed to validate the calibration
and the identity of the reference material. The
calibration check standard (a) must contain all the
POHCs and surrogates; (b) should be at the
concentration level of the POHCs at the DRE critical
point; (c) should be analyzed after each preparation of
standards and each calibration curve before sample
analysis; and (d) should be prepared from EPA
reference solutions or prepared from EPA standard
reference material obtained from the EPA repository
(QA Branch, EMSL-Cincinnati, USEPA, Cincinnati,
Ohio 45268). The preparation of the standard should
be done by personnel not responsible for the prepara-
tion of the calibration standards. Independent
preparation should eliminate any systematic bias.
If EPA reference material or certified neat standards
are not available, the laboratory must characterize the
standard material. Characterization should entail a
qualitative identification of the standard and a
quantitative determination of standard purity.
Since GC/MS calibration has a window of 30%
relative standard deviation, the calibration check
standard must be within 70% to 130% of its
theoretical concentration. If this criterion has not been
met, corrective action should be taken to resolve the
analytical problems before any sample analyses are
initiated. The results for all calibration check standards
should be presented with the appropriate calibration
curve results in the TBR.
7.4.4.2 Surrogates
SVOST Method 0010 specifies that all elements of the
sampling train should be spiked with surrogates of the
POHCs and processed separately to yield three final
samples for analysis: combined XAD and filter,
impingers, and solvent rinses. Although Method 0010
states that all surrogates should be spiked at
approximately 10 times the MDL, for trial burns
surrogates must be spiked at a level not more than 2
times the DRE critical level because the DRE level is
the concentration of regulatory concern and that level
should be well within the calibrated concentration
range of the instrument. If surrogate POHCs are not
available, other surrogates may be chosen; however,
the selection must be justified. The recovery of the
surrogates in each sample must be within 50% to
150% of the amount spiked and must be reported.
Method 0010 specifies that all elements of the
sampling train are to be processed separately to yield
three final extracts for analysis. Train components
should not be combined either prior to sample
preparation (e.g., XAD-2 resin combined with the
particulate filter and the back half rinse, or front half
rinse combined with condensate) or after sample
preparation to yield a single extract for analysis.
7.4.4.3 Spikes
At a minimum, a blank of each SVOST component
(e.g., unused XAD/filter or deionized water for
condensate) should be spiked with each POHC and
surrogate POHC at a level not more than two times
the DRE critical level. If the SVOST
components are to be combined, the guidance
discussed above for surrogates also applies. The
required accuracy is 50% to 150% of the amount
spiked and results must be reported in the TBR.
7.4.5 Precision
Each SVOST component is completely used up in
sample preparation, so replicates are not available for
determining precision. Since each SVOST component
must be spiked with the surrogate POHC, however,
precision can be determined from comparison of sur-
rogate recoveries from the different runs. The RPD of
surrogate recovery from each component should be
within ±50%. If there are more than three determinations (a
complex trial burn with multiple test conditions), the
RSD can be used and should be less than 35%.
Surrogate results should not be averaged across the
SVOST components (e.g., XAD results should not be
mixed with condensate results). However, when train
components are combined, the surrogate recoveries
are indicative of the extraction method (e.g., Soxhlet
or liquid/liquid) and are not related to the recovery
from the individual components.
Precision of the analysis can only be determined by
duplicate analysis of the extracts from one run. The
SVOST components from the run with the highest
level of POHC should be chosen for reanalysis. The
percent range should be less than 50%. If the POHC
concentration falls beneath the lowest standard in the
calibration curve, the RPD should be less than 100%.
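The precision statistics named in this section can be computed as in the
sketch below; the surrogate recoveries are invented, and for a duplicate
pair the percent range and the relative percent difference are numerically
identical.

# Sketch of the precision statistics used in this section; the surrogate
# recoveries (percent) are invented, one value per SVOST run.
from statistics import mean, pstdev

def rpd(x1, x2):
    # Relative percent difference; for a duplicate pair this equals the
    # percent range (range divided by the mean, times 100).
    return abs(x1 - x2) / ((x1 + x2) / 2.0) * 100.0

def rsd(values):
    # Relative standard deviation, percent.
    return pstdev(values) / mean(values) * 100.0

xad_surrogate_recoveries = [84.0, 97.0, 78.0]   # assumed, one per run
print(f"RPD of runs 1 and 2: {rpd(84.0, 97.0):.1f}%")
print(f"RSD of all runs: {rsd(xad_surrogate_recoveries):.1f}% (target < 35%)")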
7.4.6 Blanks
Blanks are useful in locating problem areas in the
sampling and analysis program, but should not be
used to routinely correct sample data. There are three
major kinds of blanks: (a) SVOST trains shipped to
the field and returned (trip blanks); (b) SVOST trains
hooked up to sampling apparatus on the stack but
never used for stack gas sampling (field blanks); and
(c) blank XAD/filters and deionized water analyzed by
the laboratory and never shipped to the field
(analytical method blanks).
The analytical method blanks demonstrate that the
detection limit claimed for the analysis is valid given
the background concentration of the POHC in the
laboratory. Sometimes detection limits used for DRE
calculations are based upon analysis of standard
solutions and do not include any possible background
contamination from the laboratory preparation. All
blanks must be reported in the TBR, and values
should be less than twice the estimated detection limit
determined for the sample extracts. If the method
blank value is above this level, it is recommended that
1.5 times the level of POHC in the method blank
should be used as the detection limit.
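A minimal sketch of this adjustment, with assumed values, is:

# Blank-versus-detection-limit rule stated above (values assumed).
estimated_dl = 0.5      # estimated detection limit in the extract, ng/uL
method_blank = 1.2      # POHC found in the analytical method blank, ng/uL

if method_blank > 2 * estimated_dl:
    reported_dl = 1.5 * method_blank   # blank too high; raise the reported DL
else:
    reported_dl = estimated_dl
print(reported_dl)      # 1.8 with these assumed values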
Significant background contamination at the
incinerator could artificially lower a DRE by increasing
the POHC levels determined in the stack gas. If this
problem is expected, the applicant should present a
statistical design for the number and kinds of blank
SVOST trains and how they will be used for correcting
the DRE. At a minimum, each run should contain at least
one trip blank and one field blank, plus one method
blank per sample batch. If sample data are corrected
for blank results, both the uncorrected and corrected
results must be reported.
7.4.7 Detection Limit Determination
For each POHC the detection limit (DL) must be
determined. The DL is a critical parameter because
many times no POHCs are found and thus the DRE is
based upon the DL. The method of determining the
DL can vary from laboratory to laboratory. The method
used must be described in the QAPjP and the deter-
mination presented in the TBR. However, if the
99.99% DRE level is above the lowest calibration
standard, the DL is not critical for assessing
achievement of the performance standard. Guidance
is being developed for detection limit determinations in
hazardous waste incineration. Detection limits used in
DRE calculations must be determined on the actual
sample matrix - not an ideal sample without inter-
ferences.
7.4.8 SVOST - Analysis by Other Methods
Other methods that may be used for trial burn analysis
are described in this section.
7.4.8.1 Background
Not all trial burns require the sensitivity and the
specificity of GC/MS analysis, and some POHCs (e.g.,
formaldehyde) are not amenable to GC/MS. Other
techniques, such as GC with flame ionization
detection (GC/FID) or electron capture detection
(GC/ECD), may give a much more precise and
accurate analysis when compared to GC/MS. Often
the increase in precision is by a factor of 2 or 3.
GC/MS analysis is very expensive, and it generates
large amounts of data that must be reduced and
evaluated by personnel with a very high skill level.
Since GC/FID and GC/ECD analysis is less complex,
problem samples may be more easily and more cost-
effectively reanalyzed.
From the perspective of the permit applicant, GC/MS
provides more assurance against a mistaken
identification of POHC in the stack gas samples due
to more specificity in the analysis. It also assures the
permit applicant that the POHC is being quantitated
and not some other stack gas component. However, if
the 99.99% DRE level is within the analytical system's
calibration range, demonstration of a lower DRE is a
moot point. GC/MS also has one other distinct advan-
tage, which is the use of surrogates to determine
analytical accuracy for each stack gas sample. This
advantage is not available with any other technique.
Some researchers use compounds chemically related
to the POHC for a surrogate, but these compounds
are not considered as reliable as the use of
isotopically labeled surrogates.
7.4.8.2 QA/QC
Situations exist in which GC/MS cannot be used
because the POHC has a low volatility, is unstable, or
is highly reactive. In these cases other
chromatographic techniques such as high pressure
liquid chromatography or ion chromatography should
be used. However, all the QA/QC elements for
GC/MS delineated below must be followed:
• Demonstration of method performance before
trial burn is conducted.
• Calibration curve bracketing concentration at
99.99% DRE.
• Independent calibration check standards,
analyzed before samples and passing criteria.
• Spikes of blank SVOST components for
accuracy.
• Duplicate analysis for precision.
• Detection limit determination.
Some of the key elements of the design for any
determination of POHC in stack gas samples using a
technique other than GC/MS are highlighted below.
• Sample results should be calculated using an
internal standard technique. The lack of an
internal standard must be justified in the TBP or
QAPjP.
• If possible, a compound chemically similar to the
POHC should be identified and spiked into all
samples. The same guidance given for iso-
topically labeled surrogates would apply to this
compound. The lack of a chemical surrogate
must be justified in the TBP or QAPjP.
• If possible and given the desired detection limit,
some consideration should be given to dividing
the condensate, impingers, and solvent rinse
samples for one run with one portion spiked with
POHC at twice the DRE critical level and the
other analyzed unspiked. This measures
accuracy as recovery. However, the XAD and
filter should not be split; in that case the final
extract should be analyzed and then spiked at
twice the DRE critical level and reanalyzed. The
recovery should be 50% to 150%. Good
recovery demonstrates a lack of significant inter-
ference from other stack gas components for the
XAD resin samples.
• The impingers and solvent rinse samples for one
run should be divided, prepared, and analyzed in
duplicate. Precision as RPD can then be
measured.
• The identity of the POHC must be confirmed by
the use of relative retention times (RRT)
compared to the internal standard. The RRT
window should be set daily using the RRT of the
daily calibration standard and three times the
standard deviation of the RRTs of the calibration
curve standards (see the sketch following this
list). If questionable identification occurs due to
matrix interference of a peak that otherwise
meets the retention time criteria, the sample
should be spiked at the same level and
reanalyzed. If two peaks are observed, the
tentatively identified peak should not be
considered to be the POHC. The use of capillary
GC reduces the possibility of false positives, and
second column confirmation should also be
considered to confirm compound identification.
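The RRT window described in the last item above can be set as in the
following sketch; all retention times are invented, and a real implementation
would take them from the calibration data.

# Sketch of a daily RRT window. RRT = analyte retention time divided by
# internal standard retention time; all values below are invented.
from statistics import pstdev

# RRTs of the analyte in the five initial calibration standards (assumed)
calibration_rrts = [1.212, 1.215, 1.210, 1.214, 1.213]
window_halfwidth = 3 * pstdev(calibration_rrts)

daily_standard_rrt = 1.213   # RRT in today's calibration standard (assumed)
low = daily_standard_rrt - window_halfwidth
high = daily_standard_rrt + window_halfwidth

sample_peak_rrt = 1.216      # candidate peak in a stack gas sample (assumed)
print(low <= sample_peak_rrt <= high)   # True: peak meets the RRT criterion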
7.4.9 Summary of QC Procedures
A summary of QA/QC procedures discussed in this
section is presented in Table 7-3. Each quality
parameter must be reported in the TBR. If the QA/QC
procedure was not followed or the criteria not met,
sample results should not be accepted unless the
applicant provides an adequate technical justification
for the use of the data. The QA/QC procedures
related to method performance, calibration, and
calibration accuracy must be completed and must be
within the criteria before sample analysis begins.
For surrogate and POHC spike recovery results, the
50% to 150% level is acceptable. If recovery is above
150%, sample results could be biased high and the
calculated DRE would be lower than the actual
efficiency. Sample results should be corrected for low
surrogate recovery. If individual recoveries are lower
than 50%, results should be rejected.
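A minimal sketch of such a correction, with assumed values, is shown below;
results with surrogate recovery below 50% are rejected rather than corrected.

# Surrogate-recovery correction (all values assumed).
measured_pohc_ug = 12.0        # POHC found in the extract
surrogate_recovery_pct = 62.0  # recovery of the matching labeled surrogate

if surrogate_recovery_pct < 50.0:
    corrected = None           # reject the result rather than correct it
else:
    corrected = measured_pohc_ug / (surrogate_recovery_pct / 100.0)
print(corrected)               # about 19.4 ug with these assumed values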
7.5 Metals Determination
7.5.1 General
EPA methods for sampling and analysis of metal
emissions are Method 12 for lead, Method 101A for
mercury, Methods 103 and 104 for beryllium, and
Method 108 for arsenic. For the past 2 years, a
method has been under development for sampling
and analysis of multiple metal analytes.13 At this
writing, the draft method can be applied to 16
analytes. It has become the most commonly used
procedure for metals because few incinerators
process waste containing only one metal analyte.
NOTE: At this time no valid procedure for stack gas
determination of chromium in the hexavalent state is
available.
Table 7-3. Summary of QA/QC Procedures for SVOST

Method performance-accuracy
  Method of determination: (1) Historical data for surrogates, or (2) blank
  SVOST spiked with POHC, or (3) miniature trial burn surrogate results
  Frequency: Before trial burn
  Criteria: 50%-150% recovery

Calibration
  Method of determination: Five-level calibration curve; DRE critical level
  at least 10 times higher than lowest standard; continuing calibration
  standard
  Frequency: At least once, at beginning of day; continuing calibration once
  every 12 h and at end of day
  Criteria: < 30% RSD(a) of average RRF(b); continuing calibration within 30%
  of average RRF from calibration

Accuracy-calibration
  Method of determination: Analysis of calibration check
  Frequency: After every initial calibration
  Criteria: 70%-130% of theoretical value

Accuracy-surrogates
  Method of determination: Isotopically labeled POHC spiked at not more than
  two times DRE critical level
  Frequency: Every SVOST component
  Criteria: 50%-150% recovery

Accuracy-spikes
  Method of determination: POHC and surrogate POHC spiked not more than two
  times the DRE level into each SVOST component of a blank train
  Frequency: One per trial burn
  Criteria: 50%-150% recovery

Precision-surrogates
  Method of determination: Same as for accuracy-surrogates; pool results
  for each SVOST component
  Frequency: Every SVOST component
  Criteria: < 40% RPD(c) of surrogate recovery; if more than three
  determinations, RSD < 35%

Precision-POHC
  Method of determination: Duplicate analysis of all SVOST components from
  the run with the highest POHC level
  Frequency: One per trial burn
  Criteria: ±50% range if POHC concentration is above lowest calibration
  standard; ±100% in all other cases

Blanks
  Method of determination: Method blank for each SVOST component
  Frequency: One per batch of samples
  Criteria: Blank value < 2 DL; if greater, DL is changed to 1.5 x blank
  level

For Methods Other Than GC/MS

Identification
  Method of determination: Internal standard RRT(d) window
  Frequency: Internal standard in every sample
  Criteria: ±3 SD of RRT from initial calibration curve

Quantitation
  Method of determination: Internal standard RRF
  Frequency: Internal standard in every sample
  Criteria: NA

Chemical surrogate-accuracy
  Method of determination: Compound chemically related to POHC
  Frequency: Spiked into every sample
  Criteria: 50%-150% recovery

Sample spikes-accuracy
  Method of determination: Split impinger and rinsate samples; spike one
  with POHC at two times DRE critical level
  Frequency: At least one run
  Criteria: 50%-150% recovery

XAD spike-accuracy
  Method of determination: Analyze XAD extract, then spike at two times DRE
  critical level
  Frequency: At least one run
  Criteria: 80%-120% recovery

(a) RSD = relative standard deviation.
(b) RRF = relative response factor.
(c) RPD = relative percent difference.
(d) RRT = relative retention time.
Discussion in this section focuses on the
draft method.
One of the biggest difficulties associated with the
permitting process for metals is the lack of a clearly
defined decision level based upon the metals data.
Unlike DRE or particulate emissions, metals emissions
do not have a clear cut-off point for decision-making
based upon stack gas concentration. The draft
document, "Guidance on Metals and Hydrogen
Chloride Controls for Hazardous Waste Incinera-
tors,"11 contains the procedures for determining
whether metals emission rates determined with a risk-
based assessment model must be used in reviewing
the permit application and deciding if emission testing
for metal analytes is necessary.
If testing is required, a target analyte list should be
developed, based upon the expected waste
composition. A stack gas target concentration for
regulatory decision-making should be set for each
analyte based upon the maximum acceptable
emission rate determined from risk assessment
modeling. The draft metals procedure should be
modified (according to the guidance given in the
method) to produce a theoretical method detection
limit at least 10 times lower than the target regulatory
limit. Determination of the regulatory concentration
limit and necessary method modifications should be
discussed in the TBP or QAPjP.
A metals emission determination without a clear
definition of the analytes of concern and their critical
concentration levels can result in a data set of limited
use. For example, metals determinations in the low
concentration ranges can be subject to severe
problems with contamination, precision, and accuracy.
If a low concentration determination is needed,
ultrapure reagents and determination of blank train
reagent levels in advance of the trial burn are
suggested. Metals analysis techniques are chosen
with regard to the expected analyte concentration
range. For example, if arsenic is a concern only at
higher levels, inductively coupled plasma would be
chosen for analysis; however, a low level arsenic
determination is usually carried out by graphite
furnace atomic absorption.
The QAPjP or TBP should contain a discussion
answering the following questions:
• Why is metals determination necessary?
• What are the target analytes?
• What are the stack gas concentrations of
concern?
• What modification to the existing methods will be
needed to quantitate the analytes at the
concentrations of interest?
The following parameters should be clearly delineated:
• Target analytes.
• Stack gas concentrations.
• Final expected concentration range in each
sampling train fraction before and after sample
preparation.
• Calibration range of the analysis system to
bracket the expected sample concentrations.
The specific QA/QC elements suggested for these
analyses are:
• Clear definition of the need for metals analysis,
the metal analytes of interest, and the regulatory
concentration limits.
• Determination of accuracy using calibration
check standards, reference materials, and
spiked samples.
• Determination of precision by multiple
preparation and analysis of samples.
• Determination of the detection limit.
7.5.2 Method Performance
Two primary concerns exist regarding metals method
performance. First, the analytical system must be
capable of reliably identifying and quantifying the
metals in the sampling train fractions. Second, since
stack sampling and analysis for metals is not as
routine as that for VOST and SVOST, the organization
conducting the measurements should be requested to
demonstrate their ability in this area. The draft multiple
metals protocol allows the selection and modification
of various sample preparation and analysis steps to
optimize the measurement system. In the TBP the
permit applicant should demonstrate the performance
of the analytical methodology for the metals. This
can be demonstrated in two ways:
• Presenting QA/QC results from past trial burns
for analysis of similar metals at similar
concentrations.
• Conducting a miniature trial burn in advance and
presenting the QA/QC and emission results.
This demonstration of the ability to carry out the
measurements is burdensome but it ensures some
possibility of obtaining reliable data. This
demonstration is particularly critical with low levels of
metals and the determination of chromium in the
hexavalent state.
7.5.3 Calibration
Calibration procedures are dependent upon the type
of instrumentation used for analysis. The expected
concentrations for the various sampling train fractions
must be presented along with the selected analytical
methods and the calibration range. The calibration
range of the analytical system should bracket all
expected concentrations of the metals. Samples with
concentrations greater than the highest standard level
should be diluted into the calibration range and
reanalyzed.
Criteria for both initial and continuing calibration are
given in Section 8.4. Irrespective of the analysis
method, the essential point is that calibration must be
verified both before and after sample analysis. The initial calibration curve
must pass the criteria before any samples are run. At
the end of each analysis period, a calibration standard
should be analyzed and pass the continuing calibra-
tion criteria. Thus, every sample must be bracketed
by two successful calibrations; one full calibration
curve preceding sample analysis and one midrange
calibration standard following sample analysis. If the
calibration check following sample analysis does not
meet these criteria, it should be repeated; if it fails the
second time, the analysis system should be recali-
brated and the samples following the last successful
calibration should be reanalyzed. All initial and
continuing calibration results must be reported in the
TBR in the exact order in which they were analyzed.
The instruments used in metals analysis have a
tendency to drift at both the high and low levels of the
calibration range. Thus, all continuing calibrations
must be accompanied by results of analysis of a
reagent blank (all reagents contained in standard
solutions). The acceptance of this blank is somewhat
subjective, depending upon the sample results and
whether the drift is positive or negative. The reagent
blank should be reported in the TBR and any drift
greater than 50% of the lowest standard should be
commented upon.
7.5.4 Accuracy Determination
Calibration: Virtually all of the SW-846 methods
require some initial check on the accuracy of
calibration using a second standard, different from that
used for calibration. This sample should be analyzed
immediately following calibration and must fall within
90% to 110% of the actual concentration. This is a
fairly wide range for the analysis of a pure standard;
results outside this range are unacceptable. Sample
analysis should not proceed until a result within this
range is achieved.
Reference material: Spiked samples of all sampling
train fractions are usually not possible since most
fractions cannot be split for spiking purposes. This is
particularly true for the filters. However, the National
Institute of Standards and Technology provides
standard reference material for the analysis of metals
on filter media. The available metals and their
concentration levels are given in Table 7-4. If these
metals are target analytes, a minimum of two filters in
the appropriate concentration range (twice the target
regulatory levels) must be analyzed. Precision can be
determined from duplicate analyses. Recoveries
should be within 75% to 125% of the reference value,
and results must be reported in the TBR.
Spikes: At least two complete blank sampling train
components consisting of all reagents from each
impinger should be spiked with each appropriate metal
analyte. Efforts should be exerted to identify a spiking
concentration which will give a spike level of not more
than twice the expected sample level or five times the
detection limit, whichever is greater. The samples
should be spiked at the beginning of sample
preparation. The duplicate spiked blank train results
will be used for determining precision.
For mercury analysis, an aliquot is taken for analysis
from each fraction (nitric and permanganate
impingers) except for the filter/probe rinse fraction.
The aliquot should be large enough to allow a split
and then a spike of the actual sample without
significantly reducing the total volume of the original
impinger samples. For one sampling train, a portion of
the aliquot from each sampling train fraction (including
the postdigestion filter/probe rinsate) must be spiked
for each metal analyte. Effort should be exerted to
identify a spiking concentration level which will give a
spike of not more than 3 times the expected sample
level or five times the detection limit, whichever is
greater.
The accuracy target range is 70% to 130% of the
amount spiked. Results should be reported in the
TBR.
Post Sample Preparation Spike: For all analytes
except mercury, one sample preparation from each
sampling train component (e.g., one condensate, one
nitric impinger sample) should be analyzed and then
spiked at about two times the sample level. The target
recovery of the amount spiked into the sample should
be within 70% to 130%. If the recovery is not within
this range, the sample should be diluted (at least 1:5)
and reanalyzed. If the value is higher than the
undiluted result, the diluted sample should be spiked
and, again, the target recovery should be within 70%
to 130%. The diluted sample result should be
reported.
If recovery is still poor, the dilution process should be
continued either until the recovery is within the limits
or until further dilution will approach the detection
limit. If the diluted sample result shows little or no
change from the undiluted sample, this is indicative of
a matrix effect which cannot be easily rectified by
dilution; the results from the undiluted sample should
be reported with a discussion of the effect of these
findings upon sample results.
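The sketch below (assumed concentrations) illustrates the post-preparation
spike recovery calculation and the decision to dilute:

# Post-preparation spike check described above (values assumed).
def spike_recovery_pct(spiked_result, unspiked_result, amount_spiked):
    return (spiked_result - unspiked_result) / amount_spiked * 100.0

unspiked = 40.0       # ug/L of metal found in the prepared sample (assumed)
spiked = 118.0        # ug/L found after spiking an aliquot (assumed)
amount_spiked = 80.0  # ug/L added, about two times the sample level (assumed)

rec = spike_recovery_pct(spiked, unspiked, amount_spiked)
print(f"recovery {rec:.0f}% (target 70%-130%)")
if not 70.0 <= rec <= 130.0:
    print("dilute the sample at least 1:5, respike, and repeat the check")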
7.5.5 Precision
Precision is determined from duplicate preparation and
analysis of the standard reference filters and the
spiked blank trains. The target for precision should be
a range of less than 35%. For mercury, all the
components from one stack gas sampling train should
be analyzed in duplicate and must meet the same
criteria. Precision data must be calculated and
reported in the TBR.
Table 7-4. NIST Standard Reference Materials for Metals on Filter Media

SRM 2676C - Metals on filter media; unit size: set of 12
SRM 2677  - Beryllium and arsenic on filter media; unit size: 2 sets of 4

Quantity certified (µg/filter)

Material certified      I         II        III       IV
Cadmium                 0.954     2.83      10.09     (< 0.01)
Lead                    7.47      14.92     29.81     (< 0.01)
Manganese               2.11      9.92      19.85     (< 0.01)
Zinc                    9.99      49.68     99.28     (< 0.01)
Beryllium               0.052     0.256     1.03      < 0.001
Arsenic                 0.103     1.07      10.5      < 0.002

Note: 1. These SRMs consist of potentially hazardous materials deposited on
filters to be used to determine the levels of these materials in industrial
atmospheres.
2. Values in parentheses are not certified but are given for information only.
These can be obtained from the National Institute of Standards and Technology
(NIST), Gaithersburg, Maryland 20899, Phone: 301-975-6776.
7.5.6 Method Blanks
Blanks can become critical experimental design
elements of the trial burn when detection limits are
pushed to very low levels. Three major kinds of blanks
are: (a) sampling train reagents shipped to the field
and returned (trip blanks); (b) sampling trains hooked
up to sampling apparatus on the stack but never used
for stack gas sampling (field blanks); and (c) reagent
blanks of the filters and impinger solutions analyzed
by the laboratory but never shipped to the field
(analytical method blanks).
Many detection limits are based upon the analysis of
standard solutions and do not include any possible
background contamination from the laboratory
preparation. Method blanks must be analyzed to
demonstrate that the detection limit claimed for the
analysis is valid, given the background concentration
of the metals in the laboratory. Method blanks must
be reported in the TBR and must not be more than
twice the estimated detection limit. If the blank value
is above this criterion, 1.5 times the level of analyte in
the blank should be used as the detection limit.
If the permit applicant expects significant background
contamination at the incinerator which could result in
an artificially high metals determination, this topic
should be discussed in detail in the QAPjP. The
applicant should present a statistical design for the
number and kinds of blanks and how they will be used
to correct the sampling results. At a minimum, each
run should consist of at least one trip blank and one
field blank (including probe rinses), plus one method
blank per sample batch. Guidance in these areas is
given in Reference 5.
All blank determinations must be reported in
appropriate units along with the associated samples.
All values that are blank corrected must be flagged as
corrected, and all subsequent results from those
values must also be flagged. Final results must be
presented both with and without the blank correction.
7.5.7 Detection Limit Determination
A detection limit (DL) must be determined for each
analyte. The DL is a critical parameter since metals
are not detected for many trial burns and the
regulatory decision is then based upon the DL. The
method of determining the DL can vary from
laboratory to laboratory, but must be described in the
QAPjP. If this subject has not been addressed, the
permit reviewer should request that the applicant
supply the information. The results of the DL
determination must be presented in the TBR.
Guidance is being developed for detection limit
determinations in hazardous waste incineration.
7.5.8 Summary of QC Procedures
QC procedures for metals determination are
summarized in Table 7-5. Each quality parameter
must be reported in the TBR. If the QC procedure
was not completed or the criteria were not met,
sample results should not be accepted unless the
applicant provides an adequate technical justification
for use of the data. The QC procedures related to
calibration and calibration accuracy in particular must
be entirely documented and must be within the criteria
before sample analysis begins.
A high bias demonstrated in the accuracy
determination indicates that metals emissions are
probably lower than presented. A low bias in the
spiked blank train samples is indicative of a loss of
analyte in the preparation and analysis procedures.
This will mean the regulatory decision is based upon
stack gas emission values that are lower than actual.
In all cases, any blank corrections applied to the data
should be examined in detail, and their use must be
clearly justified by the applicant.
Table 7-5. Summary of QA/QC Procedures for Metals Determination in Stack Gas Samples

Method selection
  Method of determination: Use guidance documents to determine overall data
  quality objectives
  Frequency: Once

Method performance
  Method of determination: Past trial burns or a "mini burn" at the subject
  facility
  Frequency: Once
  Target criteria: QC results for overall precision and accuracy of spiked
  samples within criteria given below for spiked blank trains

Calibration
  Method of determination: Initial analysis of standards at multiple levels
  Frequency: At least once
  Target criteria: Method-dependent; suggest linear correlation coefficient
  of standard data > 0.995

  Method of determination: Continuing mid-range calibration standard
  Frequency: At least before and after sample analysis
  Target criteria: 80% to 120% of expected value for GFAA; 90% to 110% for
  ICP

  Method of determination: Continuing calibration blank
  Frequency: With continuing calibration standard
  Target criteria: Subject to interpretation

Accuracy-calibration
  Method of determination: Analysis of a calibration check standard
  Frequency: After every initial calibration
  Target criteria: 90% to 110% of theoretical value

Accuracy-filters
  Method of determination: Analysis of NIST standard reference filters
  Frequency: Twice
  Target criteria: 75% to 125% of reference value

Accuracy-spikes
  Method of determination: Analysis of a full blank sampling train spiked at
  approximately twice the expected concentration or five times the detection
  limit
  Frequency: Twice
  Target criteria: 70% to 130% recovery

Accuracy-spike (mercury only)
  Method of determination: Spike one portion of a mercury aliquot from each
  matrix at approximately 2 times the expected level or 5 times the detection
  limit
  Frequency: Once
  Target criteria: 70% to 130% of recovery or reference value

Accuracy-postpreparation spike
  Method of determination: Spike at approximately 2 times the level in the
  sample
  Frequency: Once per sample component, for each analyte and each method of
  analysis
  Target criteria: Recovery of spike 70% to 130%

Precision-mercury
  Method of determination: Duplicate analysis of one sample from each matrix
  Frequency: Once
  Target criteria: 25% RPD

Blanks
  Method of determination: Trip blanks, field blanks, and method blanks
  Frequency: One each per trial burn
  Target criteria: Evaluated on case by case basis

Detection limit
  Method of determination: Must be presented in TBP or QAPjP
  Frequency: NA
  Target criteria: NA
If precision on spiked blank train samples is poor, the
precision problem could be in the sample preparation
stage or just the result of an outlier in one of the
spiked train samples. The two spiked blank train
results should be closely examined to determine if the
precision problem was caused by contamination.
Since each fraction of the train is spiked, the results
for the spiked blank train can be compared as the
sum of all components. If this comparison shows
generally good agreement (< 35% range), a precision
problem in only one fraction could be considered of
minimal concern. Assessment of the effect of the
precision problem should be based upon the relative
importance of that fraction in the particular analyte
determination. Results from the three test runs should
all be examined to see whether a lack of agreement
exists between the runs or whether the precision
problem is just with the two spiked blank trains.
Chapter 8
QC Procedures for General SW-846 Analytical Methods
Key QC procedures which are specified in SW-8463
Methods 8240, 8270, 7000, and 6010 and general
chromatography (HPLC or GC) are addressed in this
chapter. The purpose of this chapter is to present the
key QC procedures with guidance on data validation,
allowing data users to gain a level of understanding of
the QC. The various methods are complex and
designed for technical experts; however, the key QC
procedures all share common elements and can be
relatively easily evaluated to detect common
problems. If the person evaluating the trial burn data is
not familiar with these analyses, review of these data
should be done by qualified personnel.
8.1 Volatile Organic GC/MS Analysis
8.1.1 General
Chapter 5 covers QC procedures for POHC analysis
of waste and stack gas samples. Most of these
samples are analyzed by GC/MS using SW-846
Method 8240. Often a TBP will indicate that waste and
stack gas samples will be analyzed for the Appendix
VIII compounds amenable to GC/MS. This section will
cover QC procedures for Method 8240 and offer
some guidance on areas that should be considered in
assessing trial burn results using analysis records with
the "Checklist for Reviewing RCRA TBR."1* If a thor-
ough review and validation of the non-POHC analytes
is needed, Reference 7 contains very good guidance
for validating and accepting analytical data from
GC/MS. POHC analysis is the primary focus of
concern in this handbook, while the full Appendix VIII
analysis is secondary.
8.1.2 Surrogate Standards
Surrogates identified in Method 8240 are added to all
sample standards and blanks. Method 8240
recommends toluene-d8, 4-bromofluorobenzene
(BFB), and 1,2-dichloroethane-d4. Surrogate recovery is
dependent on the matrix. Sample surrogate recovery
should be within 75% to 125%. Data that fall outside
these limits should be flagged and evaluated for
possible effect on trial burn results. When recovery is
low, surrogates should be used to correct sample
data.
8.1.3 Calibration
The GC/MS must be tuned to the criteria given in
Table 8-1 for Method 8240 using BFB. Instrument
calibration should not proceed until these criteria have
been met. In reviewing analysis records, the BFB
tuning should be checked. If the tuning does not meet
criteria (see Table 8-1), all sample results for that
analysis day are suspect and should not be accepted
unless the applicant provides an adequate technical
justification for the use of the data. If the tuning
problem is severe, the absence of a given analyte
should also be questioned.
Table 8-1. BFB Key Ions and Ion Abundance
Criteria (Method 8240 Criteria)
Mass Ion abundance criteria
50 15% to 40% of mass 95
75 30% to 60% of mass 95
95 base peak, 100% relative abundance
96 5% to 9% of mass 95
173 > 2% of mass 174
174 > 50% of mass 95
175 5% to 9% of mass 174
176 > 95% but < 101% of mass 174
177 5% to 9% of mass 176
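A sketch of how the Table 8-1 criteria might be checked programmatically is
given below; the ion abundances are invented, and a working implementation
would read them from the instrument data system.

# BFB tune check against the Table 8-1 criteria (abundances are invented).
def check_bfb(ab):
    """ab maps m/z to raw abundance; returns a list of failed criteria."""
    rel95 = {m: ab[m] / ab[95] * 100.0 for m in ab}   # relative to mass 95
    failures = []
    def need(ok, text):
        if not ok:
            failures.append(text)
    need(15.0 <= rel95[50] <= 40.0, "mass 50: 15%-40% of mass 95")
    need(30.0 <= rel95[75] <= 60.0, "mass 75: 30%-60% of mass 95")
    need(5.0 <= rel95[96] <= 9.0, "mass 96: 5%-9% of mass 95")
    need(ab[173] / ab[174] * 100.0 < 2.0, "mass 173: < 2% of mass 174")
    need(rel95[174] > 50.0, "mass 174: > 50% of mass 95")
    need(5.0 <= ab[175] / ab[174] * 100.0 <= 9.0, "mass 175: 5%-9% of mass 174")
    need(95.0 < ab[176] / ab[174] * 100.0 < 101.0,
         "mass 176: > 95% and < 101% of mass 174")
    need(5.0 <= ab[177] / ab[176] * 100.0 <= 9.0, "mass 177: 5%-9% of mass 176")
    return failures

abundances = {50: 180, 75: 420, 95: 1000, 96: 65, 173: 10,
              174: 820, 175: 58, 176: 800, 177: 55}      # assumed values
print(check_bfb(abundances) or "tune passes")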
Before sample analysis begins, a five-point calibration
curve must be analyzed; the average RRFs for each
analyte must exhibit a relative standard deviation <
30%. Every day or every 12 hours, a continuing
calibration standard containing all analytes must be
analyzed. The RRF for specific calibration check
compounds in the daily standard must be within 25%
of the initial calibration average RRF; if not, then
analysis should not proceed. The other analytes
should also be within the 25% calibration criteria;
however, Method 8240 does allow some deviation
based upon technical judgment. Nevertheless, the
POHCs should meet the calibration criteria in all
cases.
8.1.4 Analyte Identification
Analyte identification is done by comparison of relative
retention time (RRT) and GC/MS spectra; these areas
can be checked if the analysis records are submitted
with the TBR. The RRT of the sample analyte must
be within 0.006 RRT units of the daily standard. If the
RRT deviates significantly, the identification should be
considered suspect. Computer identification by mass
spectra should be confirmed by a mass
spectrometrist. For critical analytes such as POHCs,
the analysis records must clearly document the
rationale if a component that appears to meet the
RRT criteria for POHC identification is rejected
because of mass spectral data. In general, detected
ions greater than 10% relative intensity in a sample
should match the ions in the standard spectra to
within ±20% agreement.
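A minimal sketch of this spectral comparison is given below; it interprets
the ±20% agreement as ±20 percentage points of relative intensity (an
interpretation, not a statement of the method), and both spectra are
invented and normalized to a base peak of 100.

# Compare sample and standard spectra for ions above 10% relative intensity.
standard_spectrum = {51: 35.0, 77: 100.0, 105: 85.0, 182: 40.0}   # assumed
sample_spectrum   = {51: 28.0, 77: 100.0, 105: 70.0, 182: 55.0}   # assumed

mismatches = []
for mz, sample_ri in sample_spectrum.items():
    if sample_ri > 10.0:                    # only ions above 10% are compared
        standard_ri = standard_spectrum.get(mz, 0.0)
        if abs(sample_ri - standard_ri) > 20.0:
            mismatches.append(mz)
print(mismatches or "spectra agree within criteria")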
8.1.5 Quality Control Requirements
Method blanks (reagent water) should be analyzed
initially to demonstrate that the system is contaminant-
free, and after high-level samples have been run, to
demonstrate no cross-contamination with the next
sample.
Duplicate spikes of each matrix type (e.g., scrubber
water, waste feed) should be performed with all target
compounds to obtain accuracy and precision data.
Accuracy as recovery should be within 50% to 150%
of the amount spiked. Precision as percent range
should be within 25%.
Internal standard areas should be recorded and
monitored by the analyst. The criterion is specified by
Method 8240 as -50% to +100% agreement with the
last daily calibration check. A change in the internal
standard area could indicate either a problem with the
GC/MS or with that particular sample. A drop or rise in
the area over several samples would be more
indicative of a GC/MS system problem. Whatever the
cause, reanalysis of this sample (if possible) is
recommended.
A QC check standard should be analyzed to verify the
accuracy of the calibration. The QC check sample
should be a standard solution, prepared independently
from the standard solutions used for calibration. This
check standard can be purchased with a certified
concentration or obtained from EPA (QA Branch,
EMSL-Cincinnati, USEPA, Cincinnati, Ohio 45268).
Agreement should be within ±30% of the certified
value.
8.2 Semivolatile Organic GC/MS Analysis
8.2.1 General
QC procedures for POHC analysis of waste and stack
gas samples were covered in Chapter 5 and Section
7.4. GC/MS using SW-8463 Method 8270 is the
preferred method for these samples. Often the TBP
indicates that waste and stack gas samples will be
analyzed for the Appendix VIII compounds amenable
to GC/MS analysis. This section will cover the QC
procedures for Method 8270 and give some guidance
for areas that should be covered when assessing trial
burn results. If a thorough review and validation is
needed, very good guides exist for validating and
accepting analytical data from GC/MS.15-16 These
handbooks view the POHC analysis as the primary
concern, while the full Appendix VIII analysis is
secondary.
8.2.2 Surrogates
If the full range of analytes are to be analyzed, the
surrogates identified in Method 8270 (see Table 8-2)
must be spiked into all samples. (Please refer to
Section 7.4 for a discussion on surrogate spiking in
SVOST components.) Generally, surrogate recovery
below 50% is considered suspect and should be
closely reviewed. Surrogate recovery limits for soil,
sediment, and water samples are listed in the table;
however, these limits are not applicable to all samples
seen in a trial burn. Many samples, such as the
relatively clean water in the impingers and the XAD
resin, will give better than 50% recovery. Recovery
may be influenced by the solvent used in sample
preparation; if this solvent is optimized for POHC
extraction some of these surrogates may exhibit poor
recovery. Some laboratories use historical data to
supply surrogate recovery limits; however, surrogate
data from different matrices, solvents, and extraction
methods often are combined, yielding wide boundaries
of limited value in judging data acceptance. If the
QAPjP presents historically-defined limits, the sample
matrix, solvent, and extraction method used to derive
the acceptance criteria should be
identified.
Table 8-2. Surrogate and Spike Recovery Limits

Surrogate compound        Low/medium water    Low/medium soil/sediment
Nitrobenzene-d5               35-114                  23-120
2-Fluorobiphenyl              43-116                  30-115
p-Terphenyl-d14               33-141                  18-137
Phenol-d6                     10-94                   24-113
2-Fluorophenol                21-100                  25-121
2,4,6-Tribromophenol          10-123                  19-122
From SW-846, Method 8270.
All surrogates should be identified in the QAPjP along
with their acceptance criteria. Surrogate recoveries
should be reported in the TBR. Any values outside the
criteria given in the QAPjP must be flagged and
should not be accepted unless the applicant provides
an adequate technical justification for the use of the
data. Guidance is available on accepting, qualifying, or
rejecting sample data based upon the surrogate
recoveries.15,16 When recovery is low, surrogates
should be used to correct sample data.
8.2.3 Calibration
The GC/MS must be tuned using decafluoro-
triphenylphosphine (DFTPP) to the criteria given in
Table 8-3. Instrument calibration should not proceed
until these criteria have been met. Some slight
deviations are allowable within the expanded criteria in
Table 8-3; however, these instances should be
documented and explained in the TBR. In reviewing
analysis records, DFTPP tuning should be checked. If
the tuning does not meet either criterion, then all
sample results for that analysis day should be
considered suspect.
Table 8-3. Decafluorotriphenylphosphine
(DFTPP) Key Ions and Ion
Abundance Criteria (Method 8270
Criteria*)
Mass Ion abundance criteria
51 30%-60% of mass 198
68 < 2% of mass 69
70 < 2% of mass 69
127 40%-60% of mass 198
197 < 1% of mass 198
198 Base peak, 100% relative abundance
199 5%-9% of mass 198
275 10%-30% of mass 198
365 > 1% of mass 198
441 Present but less than mass 443
442 > 40% of mass 198
443 17%-23% of mass 442
Expanded DFTPP Criteriab
51 22.0%-75.0% of m/z 198
68 Less than 2.0% of m/z 69
70 Less than 2.0% of m/z 69
127 30.0%-75.0% of m/z 198
197 Less than 1.0% of m/z 198
198 Base peak, 100% relative abundance
199 5.0%-9.0% of m/z 198
275 7.0%-37.0% of m/z 198
365 Greater than 0.75% of m/z 198
441 Present but less than m/z 443
442 Greater than 30.0% of m/z 198
443 17.0%-23.0% of m/z 442
a Eichelberger, J. W., L. E. Harris, and W. L. Budde,
"Reference Compound to Calibrate Ion Abundance
Measurement in Gas Chromatography-Mass
Spectrometry," Analytical Chemistry, 47, 995
(1975).
b Laboratory Data Validation Functional Guidelines for
Evaluating Organic Analysis, U.S. EPA Hazardous
Site Evaluation Division, February 1, 1988.
Before sample analysis, a five-point calibration curve
must be analyzed. The average RRFs for each
analyte must exhibit a relative standard deviation of
< 30%. However, the calibration check compounds
(CCC) and POHCs presented in Table 8-4 should be
within this criterion before commencing analysis.
Every day, or every 12 hours, a continuing calibration
standard containing all analytes must be analyzed.
The RRF for the CCC and POHCs in the daily
standard should be within 30% of the initial calibration
average RRF; if not, analysis should not proceed. Also, every day the
GC/MS should be checked before sample analysis
and every 12 hours with the system performance
check compounds (SPCC) (N-nitroso-di-n-propyl-
amine, hexachlorocyclopentadiene, 2,4-dinitrophenol,
and 4-nitrophenol). These compounds should exhibit a
minimum RRF commensurate with the method before
any sample analysis is initiated.
Table 8-4. Calibration Check Compounds

Base/neutral fraction        Acid fraction
Acenaphthene                 4-Chloro-3-methylphenol
1,4-Dichlorobenzene          2,4-Dichlorophenol
Hexachlorobutadiene          2-Nitrophenol
N-Nitrosodiphenylamine       Phenol
Di-n-octylphthalate          Pentachlorophenol
Fluoranthene                 2,4,6-Trichlorophenol
Benzo[a]pyrene
From SW-846, Method 8270.
The SPCC and CCC criteria exist to ensure that the
GC/MS system is capable of detecting and quantifying
the large range of analytes necessary. If the criteria
have not been met, this information should be
reported in the TBR and discussed in relation to
sample data. However, since POHCs are the critical
analytes, POHCs should meet all calibration criteria
daily (see Section 7.4.3). The use of all SPCCs and
CCCs noted in Method 8270 may not be required
when analysis is conducted to quantitate a few very
specific POHCs. The use of fewer SPCCs and CCCs
should be discussed and justified in the TBP or the
QAPjP.
8.2.4 Analyte Identification
Analyte identification is done by RRT and GC/MS
spectra comparison, and these areas may be checked
by the permit writer if the analysis records are sub-
mitted with the TBR. The analyte identification
requirement discussed previously in Section 8.1.4
applies also to semivolatile analyses.
8.3 Gas Chromatography (GC), High
Performance Liquid Chromatography
(HPLC), Ion Chromatography (IC)
8.3.1 General
These analysis techniques are grouped together
because they share two fundamental characteristics.
First, they depend upon chromatography to separate
the analytes of interest from other sample compo-
nents. Second, these separate analytes are
quantitated by a relatively simple detector. SW-846
Method 8000 covers the basic QC principles used for
GC; these principles are highlighted in this section.
The primary QC concepts of which to be aware are:
• Calibration criteria.
• Retention time criteria.
General QA/QC procedures for these analysis
techniques are summarized in Table 8-5.
8.3.2 Calibration
All chromatographic systems are initially calibrated
with standards at varying concentrations. The initial
calibration serves three purposes: (a) demonstration
of linearity over the concentration range; (b)
delineation of retention time windows for qualitative
identification of analytes in samples; and (c)
establishment of the calibration constants for use in
the calculation of sample results.
Calibration systems follow either the external standard
method or the internal standard method. The external
standard method uses the ratio of the response (peak
area or peak height) of a standard compound relative
to its concentration or mass on the column to
calculate a response factor (RF). The internal
standard method uses an internal standard (a com-
pound chemically similar to the analytes of interest)
added at a fixed concentration to every standard and
sample. The relative response factor (RRF) is
calculated from the ratio of the response of an analyte
to the response of the internal standard related to the
ratio of the concentrations of the internal standard and
the analyte.
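The two conventions can be summarized in the sketch below; all peak areas
and amounts are invented.

# External standard response factor (RF) versus internal standard relative
# response factor (RRF), and quantitation of one sample peak with each.
def response_factor(area_std, amount_std):
    # External standard: response per unit amount.
    return area_std / amount_std

def relative_response_factor(area_analyte, area_istd, amount_analyte, amount_istd):
    # Internal standard: analyte response relative to the internal standard.
    return (area_analyte * amount_istd) / (area_istd * amount_analyte)

# Calibration standard (assumed): 50 ng analyte, 40 ng internal standard.
rf = response_factor(52000.0, 50.0)
rrf = relative_response_factor(52000.0, 48000.0, 50.0, 40.0)

# Sample peak (assumed): analyte area 41000; internal standard area 47000 for
# 40 ng of internal standard added to the sample extract.
amount_external = 41000.0 / rf
amount_internal = (41000.0 / 47000.0) * 40.0 / rrf
print(round(amount_external, 1), round(amount_internal, 1))   # ng on column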
The internal standard method is the method of choice.
It compensates for physical variance in sample
introduction, such as injection size, solvent effects,
and leaking septums. Also, its retention time markers
provide a more precise identification of target
compounds. However, without a selective detector
such as the mass spectrometer, occasionally other
sample components interfere with the quantitation of
internal standards. In these cases, the permit
applicant should provide technical justification in the
TBP or QAPjP for using an HPLC, GC, or IC
procedure without an internal standard, or the data
should not be accepted.
All initial and continuing calibration data should be
reported. All quality control results (e.g., linearity data)
should be calculated, and all data falling outside
criteria should be flagged and explained. If calibration
criteria were not met, sample results should not be
accepted unless the applicant provides an adequate
technical justification for use of the data.
8.3.2.1 Initial Calibration
Each analysis type, including stack gas samples,
waste feed analysis, and scrubber water analysis,
should be discussed in the TBP or QAPjP, along with
the expected concentrations of analytes in the waste,
the predicted final concentration in samples for
analysis, and the calibration range. The calibration
range of the instrument should bracket expected
concentrations. A minimum of three different concen-
tration levels (preferably five) in the standards as well
as a reagent blank should be used to span the
calibration range. The reagent blank should contain all
the reagents in the standards and no analyte should
be detected at a concentration greater than one-fifth
the lowest calibration standard. Sample results higher
than 120% of the high calibration standard should be
diluted into the calibration range. All expected critical
regulatory concentrations (e.g., DRE) should be at
least 10 times the concentration of the lowest
standard to ensure reliable detection and quantitation.
Initial calibration can be used to demonstrate the
linearity of the analytical system in two ways. First, the
relative standard deviation of the RRF for any analyte
calculated from all standards analyzed for initial
calibration must be less than 20%. Sample results are
calculated from the average RF or RRF. Alternatively,
a plot of the response (or response relative to the
internal standard) vs. the concentration of the
compound in the standard must yield a linear
correlation coefficient greater than 0.995. Sample
results in this case should be calculated using the
linear regression equation that fits the calibration data.
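Both linearity checks are illustrated in the sketch below, using an invented
five-point external standard calibration (concentration, peak area).

# Linearity by RSD of the response factors and by the correlation
# coefficient of a linear plot; the calibration points are invented.
from statistics import mean, pstdev

points = [(1.0, 2050.0), (5.0, 10300.0), (10.0, 20900.0),
          (50.0, 101500.0), (100.0, 204000.0)]

# Check 1: RSD of the response factors should be less than 20%.
rfs = [area / conc for conc, area in points]
rsd = pstdev(rfs) / mean(rfs) * 100.0

# Check 2: linear correlation coefficient should be greater than 0.995.
xs = [p[0] for p in points]
ys = [p[1] for p in points]
mx, my = mean(xs), mean(ys)
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
sxx = sum((x - mx) ** 2 for x in xs)
syy = sum((y - my) ** 2 for y in ys)
r = sxy / (sxx * syy) ** 0.5

print(f"RSD of RF: {rsd:.1f}%   correlation coefficient: {r:.4f}")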
Most calibrations for trial burn analysis are linear by
nature. However, some calibration systems are non-
linear. For these systems, an acceptance criterion for
initial calibration must be established. This criterion
could be the correlation coefficient for a polynomial fit
of the standard data, or it could be the relative error of
values for the calibration standards calculated from
the mathematical fit of the standard data. The key
issue is that a criterion for initial calibration must be
established.
The accuracy of this calibration should be verified by
a calibration check standard that includes all analytes.
This standard should be prepared independently of
calibration standards and, ideally, from EPA reference
solutions. The observed concentrations of these
standards should fall within 75% to 125% of the
expected values. If any calibration criteria have not
been met, the problem should be rectified before
sample analysis progresses. If automated equipment
is being used, samples should be reanalyzed if the
subsequent data analysis shows that initial calibration
did not pass criteria.
Table 8-5. Summary of QA/QC Procedures for GC/HPLC and IC Determinations

Analysis type
  Method of determination: Each matrix type
  Frequency: NA
  Target criteria: Must bracket expected sample concentrations

Calibration-initial (internal standard recommended)
  Method of determination: Internal standard added to every standard and
  sample
  Frequency: NA
  Target criteria: NA

  Method of determination: Minimum of three standards; generation of RRF
  or RF
  Frequency: At least once
  Target criteria: Correlation coefficient for linear plot > 0.995; relative
  standard deviation for average < 20%

  Method of determination: Blank
  Frequency: Once following calibration
  Target criteria: One-fifth of lowest standard response

Calibration-initial-accuracy
  Method of determination: Calibration check standard
  Frequency: Once following calibration
  Target criteria: 85% to 115% of expected concentration

Calibration-continuing
  Method of determination: Continuing calibration standard
  Frequency: Beginning and end of analysis period and once every 10 samples
  Target criteria: 85% to 115% of expected concentration

  Method of determination: Continuing calibration blank
  Frequency: Beginning and end of analysis period and once every 10 samples
  Target criteria: One-fifth of lowest standard

Qualitative identification
  Method of determination: Retention time
  Frequency: Every sample
  Target criteria: Must be within three standard deviations of average
  calibration relative retention time

Sample validation
  Method of determination: Internal standard area
  Frequency: Every sample
  Target criteria: Must be within 75% to 125% of area in last calibration
  standard
8.3.2.2 Daily or Continuing Calibration
Once the linearity of the measurement system has
been verified, calibration standards should be
analyzed regularly to verify that the system stays in
calibration. A standard should be analyzed at the
beginning and end of each analysis period and after
every 10 samples. The observed concentration of
each analyte in this check standard should be within
75% to 125% of the theoretical concentration. The
calibration level for continuing calibration should be
chosen to meet either the decision level of the
analysis (e.g., sample concentration when 99.99%
DRE was achieved) or the level which best tests the
accuracy of the measurement system (e.g., high level
standard for GC/ECD analysis). All samples must be
bracketed by two successful calibrations, one before
sample analysis and one after. If the continuing
calibration criteria have not been met, the analytical
problem should be rectified and all samples since the
last acceptable calibration should be reanalyzed.
Sample results obtained from an analytical system in
which daily calibration was not done or did not meet
criteria should not be accepted unless the applicant
supplies an adequate technical justification for use of
the data.
A reagent blank should also be analyzed at the same
frequency as the continuing calibration standard as a
check on any possible contamination in the analytical
system. In general, no analyte concentration greater
than one-fifth the lowest calibration standard should
be detected in the blank.
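The continuing calibration and blank criteria above reduce to simple tolerance checks. The sketch below is one hypothetical way to screen them, with the 75% to 125% recovery window and the one-fifth-of-lowest-standard blank limit taken from the text and all numeric results invented.

    # Hypothetical screening of a continuing calibration standard and blank
    # against the criteria discussed above.
    def check_continuing_standard(observed, expected, low=75.0, high=125.0):
        """Return the percent recovery and whether it falls in the window."""
        recovery = 100.0 * observed / expected
        return recovery, low <= recovery <= high

    def check_continuing_blank(blank_result, lowest_standard):
        """Blank should not exceed one-fifth of the lowest calibration standard."""
        return blank_result <= lowest_standard / 5.0

    # Example: a 50 ug/L check standard observed at 44 ug/L, a blank at
    # 1.5 ug/L, and a lowest calibration standard of 10 ug/L.
    recovery, acceptable = check_continuing_standard(observed=44.0, expected=50.0)
    print(f"Continuing standard recovery: {recovery:.0f}%  acceptable: {acceptable}")
    print(f"Blank acceptable: {check_continuing_blank(1.5, 10.0)}")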
8.3.3 Qualitative Identification
GC, HPLC, and IC generally rely upon detectors
which are not specific enough to positively identify
analytes. The retention of the analyte on the
chromatographic column or its retention relative to an
internal standard provides some identification. Although
this method is inferior to the specific identification
provided by the mass spectrometer, identification by
retention time can be sufficient for incineration
analyses. These methods carry a greater risk of false
positive identification, however, and when interfering
chromatographic peaks occur, the amount of a POHC
can be overestimated.
Initial calibration data should be used to calculate the
average retention time (or relative retention time for
internal standard methods). All peaks within three
standard deviations of this average are identified as
the analyte. Every continuing calibration standard
must be within the current retention time window;
however, the absolute retention time can be updated
when a continuing calibration standard is analyzed. In
chromatographic systems that show very little
measurable variation in retention times, three other
options exist:
• Use of ± 5% of the average retention time.
• Inclusion of all the continuing calibration
standards analyzed during the project to
provide more variability in the retention time
and thus a larger retention time window.
• Use of a spike confirmation technique. All
samples are first analyzed using one of the
other techniques for identification of the ana-
lytes, then each sample is spiked with the
analyte at a level twice the approximate
sample level. The spike chromatogram must
exhibit one peak in the retention time window
for confirmation of analyte identity. If two
peaks are observed in the spike sample
chromatogram, no analyte is present.
Irrespective of which method is used for identification,
the spike confirmation technique should be used for
any sample in which identity criteria are suspect due
to interference peaks (poor separation of analytes
from other sample components) or for samples in
which the identification is marginal. The topic of
qualitative identification must be addressed in the
QAPjP.
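The retention-time-window test can be expressed as a short calculation. The sketch below, using hypothetical retention times, builds the window from the mean relative retention time of the initial calibration standards plus or minus three standard deviations, and also shows the optional window of plus or minus 5% of the average.

    # Sketch of a retention-time-window identification check (hypothetical data).
    import statistics

    # Relative retention times of the analyte in the initial calibration standards
    calib_rrt = [1.042, 1.045, 1.043, 1.046, 1.044]

    mean_rrt = statistics.mean(calib_rrt)
    sd_rrt = statistics.stdev(calib_rrt)
    window = (mean_rrt - 3 * sd_rrt, mean_rrt + 3 * sd_rrt)

    # Optional window for systems with very little retention time variability
    alt_window = (0.95 * mean_rrt, 1.05 * mean_rrt)

    sample_peak_rrt = 1.047
    identified = window[0] <= sample_peak_rrt <= window[1]
    print(f"3-sigma window: {window[0]:.4f} to {window[1]:.4f}  identified: {identified}")
    print(f"+/-5% window:   {alt_window[0]:.4f} to {alt_window[1]:.4f}")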
8.3.4 Sample Validation
For analysis using an internal standard, an additional
quality control check is available. The internal
standard area for each sample should be 75% to
125% of the area observed in the last continuing
calibration standard. If this criterion has not been met,
the sample should be reanalyzed. If still not met, the
problem should be investigated and any acceptance
of sample results should be accompanied by technical
justification by the applicant.
8.4 Metals Determinations
8.4.1 General
Atomic spectroscopy is used for metals detection and
quantitation. SW-846 Methods 7000 and 6010 discuss
the basic QC principles involved. These principles are
outlined and amplified in this section. The most
important QC procedures of which to be aware are:
• Calibration of the analytical system.
• Determination of accuracy or matrix effects
using calibration check standards and spiked
samples.
• Determination of precision by multiple analysis of
samples.
8.4.2 Initial Calibration
All atomic spectroscopy instruments are calibrated
daily with standards at varying concentrations. The
calibration serves two purposes: (a) demonstrating
linearity over the concentration range; and (b)
establishing the calibration constants used in the
calculation of sample results.
As discussed in previous sections, the TBP or QAPjP
should present all analysis types, including stack gas
samples, waste feed analysis, and scrubber water
analysis, along with the expected analyte concentra-
tions in the waste, the final concentration predicted in
samples for analysis, and the calibration range. The
calibration range of the instrument should bracket
those expected concentrations. All expected
regulatory critical concentrations (e.g., DRE) must be
at least twice the concentration of the lowest standard
to ensure reliable detection and quantitation. In
general, a minimum of three different concentration
levels in the standards plus a reagent blank should be
used to span the calibration range. Some spectrom-
eters are designed to require only a blank and one
standard for calibration. Sample results higher than
the high calibration standard should be diluted into the
calibration range. The reagent blank should contain all
the reagents in the standards; no analyte should be
detected at a concentration greater than one-half of
the lowest calibration standard. For inductively
coupled plasma (ICP) analysis, a reagent blank
analysis should follow the initial calibration.
Initial calibration must demonstrate the linearity of the
analytical system for all analytes. A plot of the
response (absorbance units) vs. the concentration of
the compound in the standard must yield a linear
correlation coefficient greater than 0.995. Sample
results should be calculated using the linear
regression equation that fits the calibration data. For
instruments using only two concentration levels, the
correlation coefficient cannot be calculated.
Accuracy of initial calibration: For all analyses, the
accuracy of the calibration should be checked by the
analysis of a calibration check standard obtained from
another source and prepared independently of those
used for instrument calibration. This check standard
should be analyzed following the calibration curve and
before sample analysis. This check standard should
give an observed concentration within 90% to 110%
of its expected value.
For ICP analysis, Method 6010 requires that the
highest standard be reanalyzed immediately following
initial calibration; the observed concentration must be
within 95% to 105% of its expected value. Next, the
above calibration check standard should be analyzed
and should be within 90% to 110%. Following this
standard, analyze an interference check standard; the
observed concentration must be within 80% to 120%
of the expected value. The interference check
standard should be designed to mimic the interfering
analytes found in the sample matrix type and can
contain elements such as Al, Ca, Fe, Na, Zn, Ba, Ca,
Mg, Mn, Cr, Cu, and Ni. A quick ICP survey of one
sample from each matrix type will assist in choosing
the appropriate analytes.
If any calibration criteria have not been met, the
problem should be rectified before sample analysis
proceeds. If automated equipment is being used,
samples should be reanalyzed if subsequent data
review shows initial calibration does not fall within
criteria.
8.4.3 Daily or Continuing Calibration
Once linearity of the measurement system has been
verified, a calibration standard should be reanalyzed
on a regular basis to verify that the system maintains
its calibration. A standard should be analyzed at the
beginning and end of each analysis period and after
every 10 samples. The observed concentration of
each analyte in this standard must be within 80% to
120% of the theoretical concentration for GFAA and
CVAA, and within 90% to 110% for ICP. The concentration for
continuing calibration should be around the middle of
the calibrated concentration range. All samples must
be bracketed by two successful calibration checks,
one before sample analysis and one after. If the
continuing calibration criteria have not been met, the
analytical problem should be identified and rectified,
and all samples since the last acceptable calibration
check should be reanalyzed.
A reagent blank should also be analyzed at the same
frequency as the continuing calibration standard. No
concentration greater than one-half the lowest
calibration standard should be detected in the analyte
blank. For ICP analysis the reagent blank value must
be within three standard deviations of the average
values for the blanks analyzed during initial calibration.
If blank values do not fall within these criteria, the
state of the instrument should be investigated. Low
level samples analyzed since the last acceptable blank
analysis should be reanalyzed if necessary.
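For the ICP continuing-blank criterion just described, the comparison might be screened as in the sketch below; the blank results and standard level are hypothetical.

    # Sketch of the ICP continuing-blank check described above (hypothetical data).
    import statistics

    initial_cal_blanks = [0.2, -0.1, 0.0, 0.3]   # blanks during initial calibration (ug/L)
    continuing_blank = 0.5                       # latest continuing calibration blank (ug/L)
    lowest_standard = 10.0                       # lowest calibration standard (ug/L)

    mean_blank = statistics.mean(initial_cal_blanks)
    sd_blank = statistics.stdev(initial_cal_blanks)

    within_3sd = abs(continuing_blank - mean_blank) <= 3 * sd_blank
    below_half_lowest = continuing_blank <= lowest_standard / 2.0

    print(f"Within 3 SD of initial calibration blanks: {within_3sd}")
    print(f"Below one-half of the lowest standard: {below_half_lowest}")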
8.4.4 Summary of QC Procedures
A summary of the QC procedures for metals
discussed in the previous sections is presented in
Table 8-6. Each quality parameter involving initial and
continuing calibration should be calculated and
reported in the TBR, and acceptance of sample
results should be justified by the applicant. If QC
procedures have not been carried out or the criteria
have not been met, sample results should be rejected
unless sufficient technical justification has been
provided by the applicant. The QC parameters should
be calculated and available for review along with the
raw data supporting the analyses.
Table 8-6. Summary of QA/QC Procedures for Metals Determinations

Quality parameter | Method of determination | Frequency | Target criteria
Calibration-initial | Must bracket expected sample concentrations | Each matrix type | NA
Calibration-initial | Minimum of three standards generating a standard curve (a) | At least once | Correlation coefficient of linear plot > 0.995
Calibration-initial | Blank | At least once | Must be beneath one-fifth of lowest standard
Calibration-initial-accuracy | Analysis of a calibration check standard | At least once | 90% to 110% of expected value
Calibration-initial-accuracy | For ICP, reanalysis of high level standard | At least once | 95% to 105% of expected value
Calibration-initial-accuracy | For ICP, analysis of interference standard | At least once | 80% to 120% of expected value
Calibration-continuing | Analysis of a middle level standard | Every 10 samples | 90% to 110% of expected value for ICP; 80% to 120% of expected value for GFAA and CVAA
Calibration-continuing | Analysis of a calibration blank | Every 10 samples | Less than one-half of the lowest calibration level or (ICP only) within three standard deviations of average blank

(a) Some ICP spectrometers by design require only a blank and one standard for calibration.
61
-------
-------
Chapter 9
Specific Quality Control Procedures for Continuous Emission
Monitors
The instrumental analyzers used for continuous
monitoring of emissions are the focus of this chapter,
with emphasis on carbon monoxide and oxygen
measurements. Guidance is presented on quality
assurance objectives and specific quality control
procedures important to the decision-making process.
Instrumental analyzers are used to continuously
monitor the concentrations of carbon monoxide in
incinerator emissions. In some cases, analyzers are
also used to measure oxygen. Many types of ana-
lyzers are available commercially from different
manufacturers. However, the basic quality objectives
are essentially the same for the different types of
monitors. Calibration procedures used for each
instrument will vary, and specific procedures typically
are specified by the manufacturer.
The QA/QC procedures associated with the trial burn
include:
• Conducting an initial performance test.
• Conducting calibration checks during the trial
burn.
• Obtaining complete data records.
9.1 Carbon Monoxide Monitors
9.1.1 Initial Performance Test
Soon after installation of a Continuous Emission
Monitoring System (CEMS), the acceptability of the
system should be evaluated by conducting a
performance specification test. This test is performed
upon installation to determine if the CEMS is capable
of providing adequate data. The draft procedures for
conducting the performance test and the criteria for
determining if the CEMS performance is acceptable
are available in Appendix A of Reference 10.
The performance specification test on the CEMS must
be conducted and passed before the trial burn is
conducted. The performance specification test criteria
are summarized in Table 9-1. Each of these criteria is
discussed in detail in the Performance Specifications.10
They are discussed only briefly here.

Table 9-1. Carbon Monoxide Performance Test Criteria

Criterion | Acceptable limit
1. CEMS measurement location | Representative sample obtained
2. Calibration drift (precision) | < 5% of full-scale measurement
3. Calibration error (accuracy) | < 5% of full-scale measurement
4. Response time | < 1.5 min
5. Relative accuracy | < 20 ppm or 10% of reference method value, whichever is greater
The location of the CEMS sampling point is important
to ensure obtaining a representative sample.
Recommendations for acceptable sampling locations
are contained in the Performance Specifications. The
primary consideration is that a representative sample
be obtained. However, the location should also be
accessible to allow for routine maintenance.
Both the calibration error and calibration drift of the
instrument must be checked. Both specifications are
presented as a percentage of the instrument's full-
scale measurement range span values. For example,
the acceptable calibration drift is <5% full scale; for
an instrument with a 100-ppm full-scale range, the
acceptable drift is <5 ppm. For an instrument with a
2,000-ppm full-scale range, the acceptable drift is <
100 ppm. The instrument span range chosen will
determine the absolute level (ppm) of calibration error
that is considered acceptable. Consequently, the span
value chosen for the CEMS is important and should
be consistent with the monitoring objectives.
Recommended span values are presented in the
Performance Specifications.
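Because the drift and error limits scale with the span, a short worked conversion may be helpful. The sketch below turns the 5 percent-of-full-scale limit into an absolute limit in ppm for the two span values mentioned above and flags a hypothetical observed drift.

    # Worked conversion of a percent-of-full-scale limit to ppm (hypothetical spans).
    def drift_limit_ppm(span_ppm, limit_pct=5.0):
        """Absolute drift (or error) limit in ppm for a given instrument span."""
        return span_ppm * limit_pct / 100.0

    for span in (100.0, 2000.0):              # the two span examples above
        print(f"Span {span:>6.0f} ppm -> allowable drift {drift_limit_ppm(span):.0f} ppm")

    observed_drift = 8.0                      # hypothetical daily drift (ppm)
    print(f"Observed drift of {observed_drift} ppm acceptable on a 100-ppm span: "
          f"{observed_drift <= drift_limit_ppm(100.0)}")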
The calibration error should be checked at low, mid,
and high levels of the instrument range to ensure that
the instrument is capable of accurately measuring
over the entire range; typically this check is
conducted only once during the performance
specification test. The calibration drift test is
conducted over a one-week period to evaluate the
precision of the measurement. A calibration check is
conducted every 24 hours using the same standard,
and the difference between daily measurements is
evaluated. Calibration drift is calculated and used in
evaluating whether the CEMS is capable of main-
taining calibration over an extended period.
The response time test simply measures the lag time
required for the CEMS to respond to a change in
concentration level. Excessive response lag times are
not desirable since the objective of continuous
monitoring is to be able to obtain real time data to use
in process control.
A relative accuracy test using a reference method
(RM) is the only independent measure of the accuracy
of the CEMS data. An RM is used to measure the CO
concentration, and the results of the RM
measurements are compared to the CEMS data. The
criterion for acceptance is that the CEMS data not
deviate by more than 20 ppm or 10% of the RM
results,* whichever is greater (i.e., at levels greater
than 200 ppm, a variation of 10% is allowed).
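The acceptance limit is therefore the greater of 20 ppm or 10% of the reference method value. The sketch below illustrates only that limit with invented paired data; the Performance Specifications define the full relative accuracy statistic, including a confidence coefficient.

    # Simplified illustration of the relative accuracy acceptance limit; the
    # Performance Specifications define the full statistic (including a
    # confidence coefficient), and all paired values here are hypothetical.
    import statistics

    rm_values = [55.0, 60.0, 58.0, 62.0, 57.0, 59.0]     # reference method CO (ppm)
    cems_values = [50.0, 66.0, 55.0, 70.0, 52.0, 64.0]   # paired CEMS CO (ppm)

    mean_rm = statistics.mean(rm_values)
    mean_abs_diff = statistics.mean(abs(c - r) for c, r in zip(cems_values, rm_values))

    limit = max(20.0, 0.10 * mean_rm)     # greater of 20 ppm or 10% of the RM value
    print(f"Mean RM value: {mean_rm:.1f} ppm, acceptance limit: {limit:.1f} ppm")
    print(f"Mean absolute CEMS-RM difference: {mean_abs_diff:.1f} ppm, "
          f"acceptable: {mean_abs_diff <= limit}")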
In conducting the CEMS performance test, the entire
system must be evaluated in its normal operating
state. For example, the sampling line, sample
conditioning system, and analyzer should all be
checked, not just the analyzer. However, conducting
the tests using the entire sampling line and/or condi-
tioning system may not be practical because of the
amount of calibration gas which may then be required
to purge the system. In such cases, the integrity of
the sampling line may be checked at the beginning of
the performance test by some other means, such as a
leak test (e.g., plugging the sampling line and seeing if
a vacuum can be generated). Also, any problem within
the sampling line or conditioning system will be
identified, since the CEMS will then fail the test of
comparison to the RM values.
If the CEMS fails the performance test, then
corrective action should be taken and the parts of the
test that the monitor failed should be repeated. If
major modifications are made to the system, the
entire performance test may need to be repeated;
judgment must be used in determining what parts of
the test must be repeated. For example, if the sample
flow rate has been increased to reduce response
time, no effect on calibration is likely. In this case,
only the response time test need be repeated. On the
other hand, if the primary electronic circuit boards
have been replaced and the instrument recalibrated to
reduce calibration drift, the calibration error should
also be rechecked and the relative accuracy test
repeated.

*Refer to Performance Specifications10 for actual calculation of
variance.
9.1.2 Calibration
The CEMS is first calibrated according to the
manufacturer's instructions prior to the initial
performance test. After the initial performance test,
calibration must be checked on a routine basis, and if
the calibration has drifted outside allowable limits,
adjustments must be made.
The recommended calibration check is to challenge
the monitor with both a low-level standard (0 to 20%
of full scale) and a high-level standard (80% to 100%
of full scale). Although two standards are preferable, a
single high-level standard is sometimes substituted.
During the trial burn, calibration checks should be
conducted daily to confirm that the instrument remains
calibrated.
9.1.3 Data Records
Calibration records sufficient to evaluate performance
are required. The data recording system for use
during normal operation should also be used for the
performance test and trial burn. When both data
loggers and chart recorders are used, the recorded
values from each device should be compared during
calibration to ensure their consistency.
Typically, either a data logger or a chart recorder, or
both, are used to record real-time CEMS data.
Sometimes, an integrator is used to average the data
as it is collected, and the time-weighted average (e.g.,
hourly) is recorded. Minimum data requirements
include recording a value every minute by recording
the measured value or updating the rolling average
(e.g., a 1-minute rolling hourly average). When
multiple data recorders are used, one recording
medium must be chosen as the source of the official
record. The designated device should be used as the
data source throughout the trial burn.
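One way to picture a 1-minute rolling hourly average is the sketch below, which retains the most recent 60 one-minute CO readings and recomputes the average each minute; the readings are invented.

    # Sketch of a 1-minute rolling hourly average (hypothetical CO readings).
    from collections import deque

    window = deque(maxlen=60)      # holds the most recent 60 one-minute values

    def add_reading(co_ppm):
        """Record a one-minute CO value and return the updated rolling average."""
        window.append(co_ppm)
        return sum(window) / len(window)

    # Simulate two hours of one-minute readings around 25 ppm.
    readings = [25.0 + (minute % 7) for minute in range(120)]
    for value in readings:
        rolling_avg = add_reading(value)

    print(f"Rolling hourly average after the last minute: {rolling_avg:.1f} ppm")
    print(f"Minimum/maximum 1-minute readings: {min(readings)}/{max(readings)} ppm")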
During the trial burn, the minimum and maximum
values obtained also are of interest, and the recording
system should be capable of storing these values. If a
real-time chart recorder is used, the minimum and
maximum values can be obtained from the chart
recorder. If a data logger is used, the data logger
should be capable of storing the minimum and
maximum values (before averaging).
If a data logger system is used and the data logger is
programmed to calculate a CO concentration
normalized to a standard oxygen level (e.g., 7% O2),
provisions should be made during the performance
test and routine calibrations so that sufficient data are
obtained to evaluate
the calibration results. Two approaches can be used.
The first approach is to use calibration gases which
include a known oxygen concentration (such as 7%
oxygen); the measured normalized value can then be
checked against the calculated normalized value of
the standard gas mixture. The other approach is to
ensure that the data logger system is capable of also
providing the uncorrected (not normalized) CO data so
that these data can be evaluated against the
calibration standard.
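The first approach amounts to applying the usual diluent correction to a calibration gas of known CO and oxygen content. The sketch below uses the common correction to 7% O2; the gas values are hypothetical.

    # Sketch of the normalized-calibration-gas check described above, using the
    # commonly applied diluent correction to 7% O2 (gas values are hypothetical).
    def co_corrected_to_7pct_o2(co_ppm, o2_pct):
        """Correct a measured CO concentration to a 7% O2 basis."""
        return co_ppm * (20.9 - 7.0) / (20.9 - o2_pct)

    # Calibration gas certified at 100 ppm CO with 7.0% O2: the normalized
    # reading reported by the data logger should agree with this value.
    print(f"Expected normalized value for the calibration gas: "
          f"{co_corrected_to_7pct_o2(100.0, 7.0):.1f} ppm")

    # Stack example: 45 ppm CO measured at 11.2% O2.
    print(f"Normalized stack CO: {co_corrected_to_7pct_o2(45.0, 11.2):.1f} ppm")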
Calibration records should be maintained so that the
calibration history is available for review. Calibration
records should include:
a. Calibration values on the data logger and/or
chart record.
b. Calibration standards (e.g., cylinder gas
identification and manufacturer's certified
value, gas filter cell identification, and certified
value).
c. Documentation of values obtained during
calibration checks.
d. Calibration log book (including a record of the
date and time of any calibration adjustments
made or changes in the standards used).
A maintenance log book identifying all routine and
nonroutine maintenance on the CEMS should also be
kept and cross-referenced to the calibration log book
when maintenance procedures require subsequent
recalibration.
9.1.4 Quality Assurance Objectives and
Assessment
The quality assurance objectives for a CO CEMS and
the means for assessing data quality are summarized
in Table 9-2. Accuracy and precision objectives are
presented as a percent of the full-scale range of the
instrument. Some judgment should be used in
determining whether the calibration accuracy and
precision values are sufficient. For example, if the
instrument's full-scale range is 100 ppm, then the
calibration error should be < 5 ppm (5% of 100 ppm).
If a calibration check indicates a calibration drift of 8
ppm, there is no need for concern if the facility is
operating consistently at a CO level of 20 ppm and
the regulatory CO standard is 100 ppm. However, a
trend in calibration drift or an excessive daily drift
should be corrected. Proper calibration records must
be maintained so that the data can be evaluated.
9.2 Oxygen Monitors
9.2.1 Introduction
An oxygen monitor may be used in conjunction with a
CO monitor as a diluent monitor; i.e., to obtain the
data necessary to adjust the measured CO
concentration to a reference concentration (such as
7% oxygen). Consequently, the same quality control
procedures that are used for CO monitors apply to
oxygen monitors. These are:
• Initial performance test.
• Calibration checks during the trial burn.
• Complete data records.
9.2.2 Initial Performance Test
The initial performance test for oxygen monitors is
conducted in conjunction with the performance test
for CO monitors, using the same approach. The
specification procedures for oxygen monitors are
summarized in Table 9-3.
When the oxygen monitor is used as a diluent
monitor, the sampling location of the oxygen monitor
should be adjacent to the sampling location of the CO
monitor so that the same portion of gas flow is
measured.
For the relative accuracy test, the performance of the
combined CO/O2 system can be evaluated in lieu of
separate evaluations. In other words, the relative
accuracy criterion can be evaluated using the CO
concentration normalized for O2 and comparing it to
the acceptable limit for CO monitors rather than
evaluating the measured CO and O2 separately.
9.2.3 Calibration
The oxygen monitor should be calibrated according to
the manufacturer's instructions prior to conducting the
initial performance test. The calibration must be
checked on a routine basis; and if the calibration has
drifted outside allowable limits, adjustments must be
made. The recommended calibration check procedure
is to challenge the monitor with a low-level standard
(0% to 20% of full scale) and a high-level standard
(80% to 100% of full scale) at routine intervals.
Although use of two standards is preferable, a single
high-level standard is sometimes used.
During the trial burn, calibration checks should be
conducted daily to confirm that instrument calibration
has not drifted.
9.2.4 Data Records
The requirements for data records for oxygen
monitors are the same as for CO monitors (see
Section 9.1.3).
9.2.5 Quality Assurance Objectives and
Assessment of Results
The quality assurance objectives for oxygen monitors
are summarized in Table 9-4. Assessment results are
similar to those for CO monitors as discussed in
Section 9.1.4.
65
-------
Table 9-2. Quality Assurance Objectives for CO Monitors

Quality parameter | Method of determination | Frequency | Criteria
Accuracy | Multipoint calibration | Performance test prior to TB | ≤ 5% FS (a)
Accuracy | Relative accuracy test | Performance test prior to TB | < 20 ppm or 10% of RM (b) value (whichever is greater)
Precision | Calibration checks | Performance test prior to TB; daily check during TB | < 5% FS
Documentation | Data records | Ongoing | Complete calibration records for performance specification test; calibration records for daily checks; complete data records for trial burn (1-min readings; maximum/minimum values)

(a) FS = full scale.
(b) RM = reference method.
Table 9-3. Oxygen Performance Test Criteria

Criterion | Acceptable limit
1. CEMS measurement location | Adjacent to CO monitor (if to be used as diluent monitor)
2. Calibration drift (precision) | ≤ 0.5% O2
3. Calibration error (accuracy) | < 0.5% O2
4. Response time | < 1.5 min
5. Relative accuracy | ≤ 1.0% O2 or 20% of reference method value, whichever is greater
Table 9-4. Quality Assurance Objectives for Oxygen Monitors

Quality parameter | Method of determination | Frequency | Criteria
Accuracy | Multipoint calibration | Performance test prior to TB | ≤ 0.5% O2
Accuracy | Relative accuracy test | Performance test prior to TB | < 1% O2 or 20% of RM (a) value (whichever is greater) (b)
Precision | Calibration checks | Performance test prior to TB; daily check during TB | ≤ 0.5% O2
Documentation | Data records | Ongoing | Complete calibration records for performance specification test; calibration records for daily checks; complete data records for trial burn (1-min readings; maximum and minimum values)

(a) RM = reference method.
(b) Performance test criteria for O2 may be omitted if performance is evaluated using normalized CO measurement.
66
-------
Chapter 10
Specific Quality Control Procedures for Process Monitors
Guidance in establishing quality control procedures for
process monitors is offered in this chapter. Although
instrument types and parameters vary widely across
facilities, the general topics of calibration and
operational checks, data records, and quality
assurance objectives must be addressed in every trial
burn plan. The discussion in this chapter, therefore,
focuses on general guidance, not specific parameters
and instruments.
10.1 Introduction
A variety of process operating parameters are
monitored during trial burns to provide the data
necessary for developing permit limits. Some of these
parameters are applicable to all trial burns; others are
specific to a given incineration facility. Many of the
parameters can be monitored with a wide variety of
instrument types; e.g., many instruments are available
to monitor waste feed rates. Some of the quality
control procedures needed for the trial burn are similar
to those discussed in Chapter 11 for continuing opera-
tions.
10.2 General QC Procedures
This section covers calibration and operational
checks, data records, and quality assurance
objectives.
10.2.1 Calibration and Operational Checks
Prior to a trial burn, all process monitors and
instruments used to record process data should be
calibrated, if appropriate, and checked for proper
operation. Calibration procedures vary widely, not only
with the type of instrument, but also among
manufacturers. Instrumentation should, however, be
calibrated according to manufacturer's recommended
procedures and meet manufacturer's specifications.
Many instruments are received from the manufacturer
already calibrated. In that case, written records should
be available showing the procedures and results of
that calibration.
Prior to the trial burn, all process monitors should be
checked under the incinerator operating conditions
expected during the trial burn. The automatic waste
feed cutoff system should be included in these
checks. These checks should include visual
inspection to ascertain that the instruments are func-
tioning and that values obtained for the parameters
are within range. Other possible checks include
comparison of readings from redundant units (e.g.,
thermocouples), back-up instruments (e.g., CO
monitors), or alternative methods. For example, the
reading from an installed flow meter may be checked
against the change in a feed tank level for an
approximate comparison. Instruments that are subject
to drift on a short-term basis should be recalibrated
throughout the test period either before each test run
or on a daily basis.
10.2.2 Data Records
Adequate data records should be maintained for all
process monitors to evaluate their functioning and
performance. These records should document the
procedures and results of calibrations and operational
checks, as well as the specifications the monitors
must meet. The data records should reflect the units
and format specified in the TBP. A log book with
records of all routine and nonroutine maintenance
should also be kept. Maintenance records should be
cross-referenced to the associated calibrations and
operational checks.
10.2.3 Quality Assurance Objectives
The QA objectives for process monitors are affected
by the actual capabilities of each monitor as well as
by the method of determining the objective. For
example, a calculation of DRE is based not only on
analysis results (e.g., constituent concentration) and
sampling criteria (e.g., sample volume), but also on
the waste feed rates that may be obtained from
process monitors.
Actual QA objectives for monitored parameters will
vary depending on the type of instruments used and
the individual capabilities of the specific manufactured
unit. Instrument manufacturers may state
specifications as follows: temperature (thermo-
couples), ±0.5%-0.75% accuracy; gas velocity
measurements, ±3% precision; and mass flow
meters, ±0.4% accuracy. Such values may be of
limited usefulness to the permit writer in selecting data
quality objectives.
For example, the specification may not be valid for the
instrument as installed for the specific application, no
alternative method may exist for verifying the
specification, or the specification may exceed the needed
accuracy. For a thermocouple reading of 2000°F, an
accuracy of ±2.5% may be adequate (i.e., ±50°F).
Thus, QA objectives must be based upon the
instrument's capabilities, the ability to measure
tolerance values, and the decisions that will be based
upon the measurement.
68
-------
Chapter 11
QA/QC Associated with Permit Compliance and Daily Operation
In this final chapter of the handbook the RCRA
permitting program is outlined in terms of achieving
and maintaining acceptable performance. Also,
procedures for corrective action and record keeping
requirements are described.
The RCRA permitting program defines the acceptable
performance of a permitted incinerator in terms of
specific operating limits that are continuously
monitored by facility operators. Since these operating
limits are the primary indicators of incinerator
performance, monitoring procedures and
instrumentation must function reliably on a continuing
basis. Permits should be very specific in identifying
requirements for the continuous monitoring, testing,
calibration, and record-keeping activities that are
associated with the demonstration of compliance.
Each permit-limited condition has associated
monitoring/testing/calibration procedures and record-
keeping systems.
Major categories of permit-limited conditions include:
• Waste feed limits.
• Gaseous emission limits.
• Other key operating parameters for the
combustion chambers and air pollution control
equipment.
The measurements associated with waste feeds, for
example, contribute to the performance of the
automatic waste feed shutoff system, which is an
essential safeguard in case an incinerator's operations
deviate from allowable conditions.
Adequate documentation of continuous monitoring
requirements in the permit provides three major
benefits:
1. Establishes in advance the minimum require-
ments for measurement quality.
2. Provides specific criteria to facility owners/
operators.
3. Establishes enforceable specifications for
EPA/state control agency staff who conduct
subsequent compliance inspections.
The types of specifications that a permit writer should
consider for inclusion in RCRA incinerator permits are
discussed in the following sections. These
specifications are not a substitute for a thorough
preliminary review effort to evaluate the adequacy of
each proposed key monitoring instrument. Such a
review should occur early in the permit review
process and address such key issues as:
• Appropriate technique and equipment type.
• Adequate operating range, response time,
precision, and accuracy.
• Proper location of sensor.
• Adequate readouts/data recording.
11.1 Routine Procedures for Monitoring
and Testing/Calibration
11.1.1 Waste Feed Limitations
Permits will typically limit the waste feed in terms of
allowable feed rates and allowable waste feed
characteristics. Both types of limits may be used to
calculate loading rate limits for such parameters as
chloride, metals, ash, and heat input.
11.1.1.1 Feed Rate Monitoring
Waste feed rates for gases, liquids, sludges, and
solids typically are determined by such diverse
devices as differential pressure meters, velocity
meters, mass flow meters, volumetric methods, level
or stationary weight indicators, and conveyor weighing
systems.
At least semiannually, the calibration of feed rate
monitoring devices should be checked and, if
warranted, recalibrated. Applicable calibration
methods, depending on the device, may include:
69
-------
• Using standard weights or other known
weights/flows.
• Comparing readings with duplicate or alternative
devices.
• Using manufacturer's methods.
• Returning the instrument to the manufacturer for
recalibration.
A more limited form of "calibration" is a zero
adjustment having a negligible impact on full-scale
readings. The specified calibration method should be
applicable to the allowable range of feed rates
specified in the permit.
Deviations from a semiannual calibration may be
appropriate in two cases. First, if a waste feed is
abrasive or is otherwise potentially damaging to the
feed rate sensing device, recalibration should be
required on a more frequent basis. Second, if a
device is very reliable but is also very difficult to
calibrate in situ (e.g., some mass flow meters), a less
frequent calibration (e.g., factory calibration) may be
appropriate. In such cases, alternating the use of two
instruments may be an option.
The permit writer should clearly identify the required
calibration method and frequency in the permit. The
calibration method should be specified as thoroughly
as possible.
11.1.1.2 Waste Feed Characteristics
The permit should specify the frequency of reanalysis,
the parameters to be determined, and the
documentation requirements associated with continu-
ing waste characterization. As a minimum, the typical
frequency requirement is annual reanalysis and
additional reanalysis whenever waste characteristics
may have changed (e.g., as a result of process
modifications). Some facilities may be required to
analyze waste feeds for selected parameters on a
batch basis.
Depending on the issues associated with a particular
incinerator, waste characterization may include:
• Appendix VIII constituents.
• Compounds prohibited in the feed (PCBs, etc.).
• Chloride and ash content.
• Viscosity.
• Heating value.
• Other characteristics as applicable.
Documentation of the waste characterization should
include, at a minimum:
• Date sample was obtained.
• Sampling method used to obtain a representative
sample.
• Laboratory performing each analysis.
• Sample preparation and analysis methods.
• Date analyses were performed.
• Results (value and units).
• Analytical QC results and assessment of data
quality.
• Signature of generator representative.
Additional requirements associated with the waste
analysis plan (in the permit application) may also apply
and should be summarized or referenced within the
permit. If practical, waste analysis and associated QC
should be similar to that used in the trial burn.
11.1.1.3 Calculation of Compliance
If some permit limits are expressed as loading rates
(e.g., total chloride input, total heat input), a
calculation is needed to demonstrate compliance with
these limits. The calculation may involve the
multiplication of a concentration or similar value (mg/L,
Btu/lb) and a flow rate (ton/h, gal/min, etc.) with
appropriate conversion factors to yield a loading rate
(lb/h of chloride, MBtu/h, etc.).
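As a worked illustration of such a compliance calculation, the sketch below converts a hypothetical waste chlorine content and feed rate into a chloride loading rate, and a heating value and feed rate into a heat input rate; the values shown are invented.

    # Worked example of loading-rate compliance calculations (all values hypothetical).

    feed_rate_lb_per_h = 2500.0        # waste feed rate (lb/h)
    chlorine_wt_pct = 1.8              # chlorine content of the waste (weight percent)
    heating_value_btu_per_lb = 9500.0  # heating value of the waste (Btu/lb)

    # Chloride loading: feed rate times chlorine weight fraction
    chloride_lb_per_h = feed_rate_lb_per_h * chlorine_wt_pct / 100.0

    # Heat input: feed rate times heating value, expressed here as million Btu/h
    heat_input_mbtu_per_h = feed_rate_lb_per_h * heating_value_btu_per_lb / 1.0e6

    print(f"Chloride loading: {chloride_lb_per_h:.1f} lb/h (compare with permit limit)")
    print(f"Heat input: {heat_input_mbtu_per_h:.2f} MBtu/h (compare with permit limit)")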
Permits should require periodic calculations of
compliance with these categories of operating limits.
Calculations may be continuous, based on automated
computations using computerized analysis results, or
less frequent, based on manual calculations.
Calculations must be performed at least
once a week by selecting an operating point that
represents approximately the highest waste feed rate
of the week. Continuous calculations should be
displayed on computer log sheets with other
monitored parameters. Manual calculations should be
recorded in operators' log files, and within a special
file as part of the records inventory.
11.1.2 Continuous Emission Monitor Systems
(CEMS) for Carbon Monoxide and
Oxygen
The initial performance test discussed in Chapter 9 is
used to evaluate the performance of the CEMS when
first installed. A QC program is required to ensure that
the CEMS continues to operate properly and that
reliable results continue to be obtained. The primary
components of a QC program for a CEMS are:
• Routine calibration checks.
• Preventive maintenance program.
• Performance tests.
• Record keeping.
The routine QC procedures are summarized in Table
11-1.
11.1.2.1 Calibration Checks
The routine calibration check is the primary QC
procedure for ensuring accurate data on an ongoing
basis. Manufacturer's procedures should be followed.
The check should be made daily unless performance
data indicate less frequent calibration is sufficient. The
recommended calibration check is to challenge the
monitor daily with a low-level standard (0% to 20% full
scale) and a high-level standard (80% to 100% full
scale). Although it is preferable to use two standards,
a single mid-level or high-level standard is sometimes
substituted. Corrective action consists of adjusting the
calibration when it has drifted outside the allowable
limit. When the CEMS includes a diluent monitor for
normalizing the CO data, a combined CO/O2 standard
may be used to evaluate the monitor calibration on a
normalized CO concentration basis.
11.1.2.2 Record Keeping
Record-keeping requirements should include: (a) all
calibration and calibration check records; (b)
maintenance records; and (c) data records.
The results of the daily calibration checks should be
recorded automatically by the chart recorder or data
logger system as part of the normal data recording
system. A calibration log book should be maintained
and should include:
1. Chronological record of any calibration/
adjustments.
2. Records of the calibration standards, including a
unique identifier for each standard and the
manufacturer's certified value.
3. Records showing when calibration standards
have been replaced.
4. Cross references to any maintenance log book,
if calibration problems require maintenance or if
maintenance requires recalibration.
A maintenance log of the monitoring system will help
identify recurring problems so that a preventive
maintenance program can be initiated or modified to
address those problems.
Data records sufficient to show compliance with
permit conditions must be maintained. Normally, this
includes chart records or data logger records showing
ppm of CO. Depending upon the permit limits, the CO
data may be 1-minute rolling hourly averages or some
other permit-limited condition. Normalization to 7%
oxygen also may be required.
11.1.2.3 Preventive Maintenance Program
A proper QC program for a CEMS will include
preventive maintenance. The preventive maintenance
program will be based on manufacturers'
recommendations and will include such items as:
1. Checking the integrity of probe and sample line
and backflushing as necessary.
2. Checking and maintaining the sample
conditioning system; e.g., cleaning or replacing
filters.
3. Cleaning optical lens (in situ monitors).
4. Checking operation of recorders and data
loggers (e.g., replacing pens, ink, charts, etc.).
The preventive maintenance program should be
established by the facility operator and should identify
daily, weekly, monthly, and annual maintenance
activities. A maintenance log for the CEMS should be
maintained.
11.1.2.4 Performance Tests
Performance tests of the monitoring system can be
repeated as necessary. Repetition of the relative
accuracy test is recommended every 2 years to verify
monitoring system performance. If the relative
accuracy acceptance criterion is no longer achieved,
then the cause of the problem must be determined,
corrective action taken, and the performance test
repeated. Repetition of the complete performance test
or portions of it may be necessary at other times if
problems are encountered and if the corrective action
taken requires that the monitoring performance be
reevaluated. For example, when the fuel cell for an in
situ oxygen monitor is replaced, a multipoint
calibration should be conducted and the relative accu-
racy should be checked against a reference method
(e.g., Orsat analysis).
11.1.3 Other Monitored Parameters
Proper continuing operation of each monitoring
instrument associated with a permit-limited operating
condition is a crucial portion of the RCRA incineration
program. However, the approach used for continuing
calibration checks is not as straightforward for all
monitoring instruments as it is for the CO and oxygen
monitors. This variety derives from the many designs
of monitors and incinerators.

Table 11-1. QA/QC for Routine Operation--CO and O2 Monitors

Activity | Frequency | Acceptance criterion | Corrective action
Calibration check | Daily | CO < 0.5% FS (a); O2 ≤ 0.5% O2 | Adjust calibration
Record keeping | Daily | Record calibration results; record calibration adjustments; record changes in calibration standards; record maintenance activities; record emissions data per permit requirements | NA
Maintenance | Daily | Establish preventive maintenance program | --
Performance test (relative accuracy) | Every 2 years; more frequently if instrument calibration/repairs warrant | CO < 20 ppm or < 10% RM (b) (whichever is greater); O2 < 1% O2 or ≤ 20% RM (whichever is greater) | Replace/repair equipment, then retest

(a) Full scale.
(b) Reference method.
Three examples demonstrate the types of issues
facing the permit writer when addressing instrument
calibration procedures:
• Thermocouples (used for most of the critical
temperature monitoring requirements) are
typically compared with duplicate units in situ.
Nonfunctioning or suspect malfunctioning units
are replaced. Calibration is not an applicable
term for thermocouples. The permit writer should
require duplicate thermocouples and establish
minimum replacement criteria (e.g., when
duplicate thermocouples vary by more than
50°F). This ensures the precision of the
temperature measurement and keeps the
temperature range similar to that in the trial burn.
• An annubar (used to measure gas flow) is
typically calibrated in a wind tunnel prior to
installation. Indirect monitoring of changes in the
calibration factor may be accomplished by using
a calibrated pitot tube to obtain an independent
measure of velocity. The independent measure
is compared to velocity measured by the
annubar. Recalibration would require removal of
the unit for a repeat test in a wind tunnel.
• Magnehelics (used to measure pressure) may be
checked by temporarily replacing the unit with an
alternative unit for a calibration check. One
example is the use of an inclined manometer
connected, if possible, to parallel pressure taps.
Malfunctioning units may be adjusted or
replaced.
The permit writer should make calibration evaluations
on a case-by-case basis. Calibration requirements
should include the calibration method, the minimum
frequency, an allowable range of variation, and
documentation requirements for calibration and
maintenance.
11.1.4 Automatic Waste Feed Shutoff System
RCRA requires that incinerators be equipped with a
system that automatically stops the flow of waste feed
into the incinerator whenever certain key operating
conditions (e.g., temperature, combustion gas
velocity) deviate from allowable levels. The automatic
waste feed shutoff system (AWFSO) includes: sensing
devices for each key condition; transmitters that send
the signals from sensing devices to a receiver; a
receiver/signal processor that evaluates the signals
and sends a shutoff signal when limits are exceeded;
and a shutoff device that effectively shuts down the
flow of waste materials going into the incinerator. The
AWFSO must operate properly on a continuous basis.
The permit writer should specify the following in the
permit:
• Required frequency of testing the AWFSO.
• Format of the test.
• Any special operating considerations.
RCRA regulations [40 CFR 264.347(c)] require a
weekly test (or a monthly test if the applicant provides
justification). Testing less frequently than once a week
should be allowed only in special cases. Records of
required periodic tests must be maintained.
The permit writer should specify the format of the test
in terms of the parameters that trigger the AWFSO
system. Some complex incineration facilities may have
more than a dozen parameters that can trigger
automatic waste shutoff. The permit writer should
specify how all triggering parameters are to be
included in the AWFSO tests on a periodic basis.
Options may include testing at least one parameter a
week on a rotating basis, or weekly testing that
includes all triggering parameters over a month's time.
Records must be maintained to document compliance
with shutoff limits and any specified time restrictions
associated with the excursions that trigger the system.
For some systems, permit writers may consider
restrictions in the testing of the AWFSO in response
to the potential for the release of uncontrolled
emissions as a result of such tests. Testing for
selected parameters may be appropriate while the
facility is operating with nonhazardous feed material.
Simulated shutoff conditions may be appropriate for
some portion of the AWFSO testing plan instead of
creating actual shutoff conditions. Examples include:
(1) the use of a high CO standard gas to trigger a
shutoff; or (2) overriding actual readings with keyed-in
computer override values to trigger a shutoff.
11.2 Record Keeping
Incineration facilities are required to maintain detailed
records to document compliance with permit
conditions. These records are important for
compliance inspections conducted by EPA and state
agency staff. The required records can be reviewed
by inspectors to demonstrate recent and past opera-
tions at the facility. Permit writers should be very
specific in each permit in defining the following:
• Which records must be maintained?
• What is the content and format of the records?
• What is the frequency of inputs to each type of
record (continuous, weekly, etc.)?
• How are the records stored for ease of access?
In general, documentation maintained by the facility
includes:
• Records associated with continuously monitored
operating parameters (e.g., strip charts, compu-
terized logs, operator logs).
• Records associated with waste characterization.
• Records associated with the characterization and
handling of by-product wastes.
• Records associated with daily (and additional)
inspections performed by facility staff.
• Calibration and maintenance logs.
• Automatic waste feed shutoff system records
(documentation of shutoff incidents and system
tests).
• Records of facility-specific issues (e.g.,
emergency vent stack openings, waste
acceptance, blending, etc.).
The content and format of each record should be
defined in the permit in sufficient detail to ensure that
all needed information will be available to inspectors.
For example, records of calibrations should document
date, calibration method, initial reading, and final
reading. Specific requirements for strip charts may
include: (a) minimum chart speed, (b) minimum
labeling of date and time (e.g., minimum daily manual
labeling by the operator), and (c) use of different ink
colors when the same strip chart is used for more
than one parameter. The permit should clearly identify
the minimum frequency of inputs to records.
Examples include an update each minute of a
calculated 60-minute rolling average for CO concen-
tration and semiannual calibration records of a flow
meter.
Ideally, all records should be stored for ease of
access for inspections. A permit writer can assist the
inspectors by permit requirements such as the
following:
• All records maintained in one central location.
• A daily master log (filed by month) that cross-
references all permit-required activities
completed during each day.
• Separate detailed files maintained for each type
of required activity (e.g., waste characterization,
operating strip charts, calibration of instruments,
etc.). Files to be cross-referenced with the daily
master log.
73
-------
-------
Chapter 12
References
1. Resource Conservation and Recovery Act, 40
CFR 264, Subpart O, Hazardous Waste
Incinerators.
2. Interim Guidelines and Specifications for
Preparing Quality Assurance Project Plans. EPA
QAMS-005/80, December 29, 1980.
3. Test Methods for Evaluating Solid Waste--
Physical/Chemical Methods. SW-846, Third
Edition, September 1986. Washington, D.C.
4. Handbook—Guidance on Setting Permit
Conditions and Reporting Trial Burn Results.
EPA/625/6-89/019, January 1989. Cincinnati,
OH.
5. Hazardous Waste Incineration Measurement
Guidance Manual. EPA/625/6-89/021, June
1989. Cincinnati, OH.
6. National Enforcement Investigations Center
(NEIC) Policy and Procedures (Chapter II, EPA-
300/9-78-001-R).
7. Quality Assurance Handbook for Air Pollution
Measurement Systems. EPA-600/9-76-005; EPA-
600/4-77-027a; and EPA-600/4-77-027b.
8. Sampling and Analysis Methods for Hazardous
Waste Combustion. EPA-600/8-84-002 (NTIS
PB84-155845), February 1984.
9. Practical Guide—Trial Burns for Hazardous
Waste Incinerators. MRI Final Report, MRI
Project No. 8034, EPA Contract No. 68-03-3149,
June 25, 1985.
10. Proposed Methods for Measurements of CO, O2,
THC, HCl, and Metals at Hazardous Waste
Incinerators. Midwest Research Institute Draft
Final Report, EPA Contract No. 69-01-7287,
September 9, 1988.
11. Guidance on Metals and Hydrogen Chloride
Controls for Hazardous Waste Incinerators.
Volume IV of the Hazardous Waste Incineration
Guidance Series, Draft Document, March 1989.
USEPA Office of Solid Waste, Waste Treatment
Branch, Work Assignment Manager: Dwight
Hlustick.
12. Section 3.5.2 from Method 6 in Quality
Assurance Handbook for Air Pollution
Measurement Systems, Volume III. EPA 600/4-
77-027b, August 1977.
13. Methodology for the Determination of Trace
Metal Emissions in Exhaust Gases from
Stationary Source Combustion Processes (Draft
EPA/EMB Metals Protocol, November 1988).
14. Checklist for Reviewing RCRA Trial Burn
Reports. MRI Final Report, MRI Project No.
8982-78, EPA Contract No. 68-01-7038,
February 10, 1989.
15. Laboratory Data Validation Functional Guidelines
for Evaluating Organics Analyses. USEPA,
Hazardous Site Evaluation Division, February 1,
1988.
16. Removal Program Sampling QA/QC Plan
Guidance; Draft Report, OERR, OSWER Direc-
tive 9360.4-01, February 2, 1989.
BIBLIOGRAPHY
Trial Burn Observation Guide. EPA/530-SW-89-027,
March 1989.
Analytical Procedures to Assay Stack Effluent
Samples and Residual Combustion Products for
Polychlorinated Dibenzo-p-dioxins (PCDD) and
Polychlorinated Dibenzofurans (PCDF). A draft
ASME analytical protocol dated September 18,
1984.
Brantly, E., and D. I. Michael. Setting Data Quality
Objectives. Presented at the Second Ecological
Quality Assurance Workshop, Environmental
Research Center, University of Nevada - Las
Vegas, February 8-10, 1989.
Data Quality Objectives for Remedial Response
Activities Development Process. EPA/540/G-
87/003.
Development of Data Quality Objectives, Description
of Stages I and II. EPA Quality Assurance
Management Staff Draft Report, July 16, 1986.
Draft Guide to the Preparation of Quality Assurance
Project Plans for the Office of Toxic Substances.
U.S. Environmental Protection Agency,
September 28, 1984.
Dux, J. P. Handbook of Quality Assurance for the
Analytical Chemistry Laboratory. Van Nostrand
Reinhold Company, New York, New York, 1986.
Environmental Protection Agency Performance Test
Methods. EPA-340/1-78-011.
Freeman, H. W., et al. Incinerating Hazardous Wastes.
Technomic Publishing Company, Lancaster,
Pennsylvania, 1988.
Guidance Manual for Writers of PCB Disposal Permits
for Alternate Technologies. USEPA Office of Toxic
Waste Internal Document, October 1, 1988.
Guidance on PIC Controls for Hazardous Waste
Incinerators. Midwest Research Institute Draft
Final Report, EPA Contract No. 69-01-7287, April
3, 1989.
Guidelines for Stack Testing of Municipal Waste
Combustion Facilities. EPA/600/8-88-085.
Handbook-Permit Writer's Guide to Test Burn Data--
Hazardous Waste Incineration. EPA/625/6-88/012.
LaBarge, R. R. "A Programmatic Approach to
Achieving Data Quality Objectives." Presented at
the Second Ecological Quality Assurance
Workshop, Environmental Research Center,
University of Nevada-Las Vegas, February 8-10,
1989.
Laboratory Data Validation Functional Guidelines for
Evaluating Inorganic Analyses. USEPA, Hazardous
Site Evaluation Division, July 1, 1988.
Permitting Hazardous Waste Incinerators, Seminars
for Hazardous Waste Incinerator Permit Writers,
Inspectors and Operators. EPA/625/4-87/017.
Preparation Aids for the Development of RREL's
Category III Quality Assurance Project Plans, U.S.
EPA, Risk Reduction Engineering Laboratory,
Cincinnati, OH 45268. October 20, 1989.
Quality Assurance Procedures Manual for Contractors
and Financial Assistance Recipients, AEERL (QA)-
003/85.
Samplers and Sampling Procedures for Hazardous
Waste Streams. EPA-600/2-80-018, NTIS PB80-
135353, January 1980.
Standard Practice for Generation of Environmental
Data Related to Waste Management Activities.
ASTM Draft Document No. 5, February 1989.
Taylor, J. K. Quality Assurance of Chemical
Measurements. Lewis Publishers, Chelsea,
Michigan, 1987.
76
-------
Appendix A
VOST Calibration

Note: These procedures are taken from Reference 12 and are presented to give added
detail on sampling train component calibration not presented in Method 0030.
1.0 Calibration of Apparatus Used in
VOST
Calibration of the apparatus is one of the most
important functions in maintaining data quality. All
calibrations should be recorded on standardized forms
and retained in a calibration log book.
1.1 Metering System
1.1.1 Wet Test Meter
The wet test meter is used to calibrate the dry test
meter; it also must be calibrated and have the proper
capacity. The wet test meter should have a capacity
of at least 3 L/min. No upper limit is placed on the
capacity; however, a wet test meter dial should make
at least one complete revolution at the specified flow
rate for each of the three independent calibrations.
Wet test meters are calibrated by the manufacturer to
an accuracy of ±0.5%. Calibration of the wet test
meter must be checked initially upon receipt and
yearly thereafter.
The following liquid positive displacement technique
can be used to verify and adjust, if necessary, the
accuracy of the wet test meter to ±1%.
1. Level the wet test meter by adjusting the legs
until the bubble on the level located on the top of
the meter is centered.
2. Adjust the water volume in the meter so that the
pointer in the water level gauge just touches the
meniscus.
3. Adjust the water manometer to zero by moving
the scale or by adding water to the manometer.
4. A description of the setup of the apparatus
can be found in Figure 2-1 of Section 3.5.2 of
Reference 12.
a. Fill the rigid-wall 5-gal jug with distilled
water to below the air inlet tube. Put
water in the impinger (saturator) and
allow both to equilibrate to room
temperature (about 24 h) before use.
b. Start water siphoning through the
system, and collect the water in a 1-gal
container, located in place of the
volumetric flask.
5. Check operation of the meter as follows:
a. If the manometer reading is < 10 mm
(0.4 in) H2O, the meter is in proper
working condition. Continue to step 6.
b. If the manometer reading is > 10 mm
(0.4 in) H2O, the wet test meter is
defective or the saturator has too much
pressure drop. If the wet test meter is
defective, return it to the manufacturer
for repair unless the defect(s) (e.g., bad
connections or joints) can be found and
corrected.
6. Continue the operation until the 1-gal
container is almost full. Plug the inlet to the
saturator. If no leak exists, the flow of liquid to
the gallon container should stop. If the flow
continues, correct for leaks. Turn the siphon
system off by closing the valve, and unplug
the inlet to the saturator.
7. Read the initial volume (Vi) from the wet test
meter dial, and record on the wet test meter
calibration log.
8. Place a clean, dry volumetric flask (Class A)
under the siphon tube, open the pinch clamp,
and fill the volumetric flask to the mark. The
volumetric flask must be large enough to allow
at least one complete revolution of the wet
test meter with not more than two fillings of
the volumetric flask.
9. Start the flow of water and record the
maximum wet test meter manometer reading
during the test after a constant flow of liquid is
obtained.
10. Carefully fill the volumetric flask, and shut off
the liquid flow at the 2-L mark. Record the
final volume on the wet test meter.
11. Steps 7 through 10 must be performed three
times.
The air volume can be compared directly with the
liquid displacement volume for two reasons. First, the
water temperature in the wet test meter and reservoir
has been equilibrated to the ambient temperature.
Second, the pressure in the wet test meter will
equilibrate with the water reservoir after the water flow
is shut off. Any temperature or pressure difference
would be less than measurement error and would not
affect the final calculations.
The error should not exceed ±1%. Should this error
magnitude be exceeded, check all connections within
the test apparatus for leaks, and gravimetrically check
the volume of the standard flask. Repeat the calibra-
tion procedure. If the tolerance level is not met, adjust
the liquid level within the meter (see the
manufacturer's manual) until the specifications are
met.
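The ±1% acceptance test reduces to simple arithmetic. The following sketch (Python, offered only as an illustration; the function name, variable names, and example readings are assumptions, not part of the procedure) compares the wet test meter reading with the liquid displacement volume for each run.

    def wtm_percent_error(meter_volume_l, displaced_volume_l):
        """Percent error of the wet test meter reading relative to the
        liquid-displacement (volumetric flask) volume, both in liters."""
        return 100.0 * (meter_volume_l - displaced_volume_l) / displaced_volume_l

    # Example: three independent runs against a 2-L Class A volumetric flask
    # (hypothetical readings).
    runs = [(2.012, 2.000), (1.996, 2.000), (2.008, 2.000)]
    for meter_vol, flask_vol in runs:
        error = wtm_percent_error(meter_vol, flask_vol)
        # The error should not exceed +/-1%; otherwise check for leaks, verify
        # the flask volume gravimetrically, and repeat the calibration.
        status = "OK" if abs(error) <= 1.0 else "RECHECK"
        print("error = %+.2f%%  %s" % (error, status))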
1.1.2 Sample Meter System
The sample meter system (consisting of the drying
tube, needle valve, pump, rotameter, and dry gas
meter) is calibrated by stringent laboratory methods
before it is used in the field. The initial calibration is
then rechecked after each field test series. This
recheck requires less effort than the initial calibration.
When a recheck indicates that the calibration factor
has changed, the tester must again perform the
complete laboratory procedure to obtain the new
calibration factor. After the meter is recalibrated, the
metered sample volume is multiplied by the calibration
factor (initial or recalibrated) that yields the lower gas
volume for each test run.
Initial calibration. The metering system should be
calibrated when first purchased and at any time the
posttest check yields a calibration factor that does not
agree within 5% of the pretest calibration factor. A
calibrated wet test meter (properly sized, with ± 1%
accuracy) should be used to calibrate the metering
system.
The metering system should be calibrated in the
following manner before its initial use in the field:
1. Leak check the metering system (drying tube,
needle valve, pump, rotameter, and dry gas
meter) as follows:
a. Temporarily attach a suitable rotameter
(e.g., an airflow range of 0 to 40 cm3/min)
to the outlet of the dry gas meter, and
place a vacuum gauge at the inlet to the
drying tube.
b. Plug the drying tube inlet. Pull a vacuum
of at least 250 mm (10 in) Hg.
c. Note the flow rate as indicated by the
rotameter.
d. A leak rate of < 0.02 L/min is acceptable
and should be recorded; larger leaks
must be eliminated.
e. Carefully release the vacuum gauge
before turning off pump.
2. Assemble the apparatus, as shown in Figure
2-3 in Reference 9 of Section 3.5.2, with the
wet test meter replacing the drying tube and
impingers; that is, connect the outlet of the
wet test meter to the inlet side of the needle
valve and the inlet side of the wet test meter
to a saturator which is open to the
atmosphere. Note: Do not use a drying tube.
3. Run the pump for 15 min with the flow rate
set at 1 L/min to allow the pump to warm up
and to permit the interior surface of the wet
test meter to become wet.
4. Collect the information required in the forms
shown in Reference 9 of Section 3.5.2,
Figures 2-4A (English units) or 2-4B (metric
units), using sample volumes equivalent to at
least five revolutions of the dry test meter.
Three independent runs must be made.
5. Calculate Yi for each of the three runs using
Equation 1. Record the values on the form
(Figures 2-4A or 2-4B, Reference 9 of Section
3.5.2). A worked sketch of this calculation
follows this list.

   Yi = [Vw (Pm + Δp/13.6)(td + 460)] / [Vd Pm (tw + 460)]     (Eq. 1)

   where

   Yi = ratio, for each run, of the volumes measured by the wet test meter
        and the dry gas meter; dimensionless calibration factor,
   Vw = volume measured by the wet test meter, m3,
   Pm = barometric pressure at the meters, mm (in) Hg,
   Δp = pressure drop across the wet test meter, mm (in) H2O,
   td = average temperature of the dry gas meter, °C (°F),
   Vd = volume measured by the dry gas meter, m3 (ft3), and
   tw = temperature of the wet test meter, °C (°F).

   (The 460 term converts temperatures in °F to absolute temperature, and
   the factor 13.6 converts the water-column pressure drop to the
   equivalent head of mercury.)

6. Adjust and recalibrate or reject the dry gas
meter if one or more values of Yi fall outside
the interval Y ±0.002Y, where Y is the
average for the three runs. Otherwise, the Y
(calibration factor) is acceptable and will be
used for future checks and subsequent test
runs. The completed form should be
forwarded to the supervisor for approval, and
then filed in the calibration log book.
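As a worked illustration of Equation 1 and the acceptance interval in step 6, the sketch below (Python) computes Yi for three hypothetical runs in English units; the function name, variable names, and example readings are assumptions, and the 13.6 and 460 constants are the unit conversions noted above.

    def calibration_factor(v_wtm_ft3, v_dgm_ft3, p_bar_in_hg,
                           dp_wtm_in_h2o, t_dgm_f, t_wtm_f):
        """Yi = Vw (Pm + dp/13.6)(td + 460) / [Vd Pm (tw + 460)], English units."""
        return (v_wtm_ft3 * (p_bar_in_hg + dp_wtm_in_h2o / 13.6) * (t_dgm_f + 460.0)) \
            / (v_dgm_ft3 * p_bar_in_hg * (t_wtm_f + 460.0))

    # Three independent runs (hypothetical readings).
    runs = [
        dict(v_wtm_ft3=5.02, v_dgm_ft3=5.00, p_bar_in_hg=29.42,
             dp_wtm_in_h2o=0.30, t_dgm_f=78.0, t_wtm_f=72.0),
        dict(v_wtm_ft3=5.01, v_dgm_ft3=5.00, p_bar_in_hg=29.42,
             dp_wtm_in_h2o=0.30, t_dgm_f=78.0, t_wtm_f=72.0),
        dict(v_wtm_ft3=5.02, v_dgm_ft3=5.00, p_bar_in_hg=29.42,
             dp_wtm_in_h2o=0.30, t_dgm_f=78.0, t_wtm_f=72.0),
    ]
    y_values = [calibration_factor(**r) for r in runs]
    y_avg = sum(y_values) / len(y_values)

    # Step 6 acceptance: every Yi must fall within Y +/- 0.002 Y of the average.
    acceptable = all(abs(y - y_avg) <= 0.002 * y_avg for y in y_values)
    print(y_values, y_avg, "acceptable" if acceptable else "adjust, recalibrate, or reject")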
An alternative method of calibrating the metering
system is to substitute a dry gas meter that has been
properly prepared as a calibration standard for the wet
test meter. This procedure should be used only after
obtaining approval of the Administrator.
Posttest calibration check. After each field test series,
conduct a calibration check as in Subsection 1.1.2, with
the following exceptions:
1. The leak check is not conducted; correcting a
leak at this point would mask a leak that was
present during testing.
2. Three or more revolutions of the dry gas
meter may be used.
3. Only two independent runs need be made.
4. If a temperature-compensating dry gas meter
was used, the calibration temperature for the
dry gas meter must be within ±6°C (10.8°F)
of the average meter temperature observed
during the field test series.
When a lower meter calibration factor is obtained as a
result of an uncorrected leak, the tester should correct
the leak and then determine the calibration factor for
the leakless system. If the new calibration factor
changes the compliance status of the facility in
comparison to the lower factor, either include this
information in the report or consult with the
administrator for reporting procedures. If the
calibration factor does not deviate by more than 5%
from the initial calibration factor Y (determined in
Subsection 1.1.2), then the dry gas meter volumes
obtained during the test series are acceptable. If the
calibration factor does deviate by more than 5%,
recalibrate the metering system as in Subsection 1.1.2.
For the calculations, use the calibration factor (initial
or recalibration) that yields the lower gas volume for
each test run.
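The posttest decision logic above can be summarized in a few lines. The sketch below (Python, illustrative; names are assumptions) applies the 5% criterion and the rule of using whichever factor yields the lower gas volume, which, because the metered volume is multiplied by Y, is the smaller calibration factor.

    def factor_for_reporting(y_initial, y_posttest_check, y_recalibrated=None):
        """Select the meter calibration factor after the posttest check."""
        deviation = abs(y_posttest_check - y_initial) / y_initial
        if deviation <= 0.05:
            # Posttest check agrees within 5%: the test-series volumes stand
            # as metered with the initial factor.
            return y_initial
        if y_recalibrated is None:
            raise ValueError("Deviation exceeds 5%; recalibrate the metering "
                             "system as in Subsection 1.1.2 first.")
        # Use whichever factor yields the lower corrected gas volume for the run.
        return min(y_initial, y_recalibrated)

    print(factor_for_reporting(1.016, 1.021))          # within 5%: use 1.016
    print(factor_for_reporting(1.016, 1.090, 1.072))   # >5%: use min(1.016, 1.072)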
1.2 Thermometers
The thermometers used to measure the temperature
of gas leaving the first cartridge should be initially
compared with a mercury-in-glass thermometer that
meets ASTM E-1 No. 63C or 63F specifications:
1. Place both the mercury-in-glass and the dial
type or an equivalent thermometer in an ice
bath. Compare the readings after the bath
stabilizes.
2. Allow both thermometers to come to room
temperature. Compare readings after both are
stabilized.
3. The dial type or equivalent thermometer is
acceptable if values agree within ±1°C (2°F)
at both points. If the difference is greater than
±1°C (2°F), either adjust or recalibrate the
thermometer until the above criteria are met,
or reject it.
4. Prior to each field trip, compare the
temperature reading of the mercury-in-glass
thermometer with that of the meter
thermometer at room temperature. If the
values are not within ±2°C (4°F) of each
other, replace or recalibrate the meter
thermometer.
The thermometer(s) on the dry gas meter inlet used
to measure the metered sample gas temperature
should be compared initially with a mercury-in-glass
thermometer that meets ASTM E-1 No. 63C or 63F
specifications:
1. Place the dial type or an equivalent
thermometer and the mercury-in-glass
thermometer in a hot water bath, 40° to
50°C (104° to 122°F). Compare the readings
after the bath has stabilized.
2. Allow both thermometers to come to room
temperature. Compare readings after the
thermometers have stabilized.
3. The dial type or equivalent thermometer is
acceptable if values agree within 3°C (5.4°F)
at both points (steps 1 and 2 above), or if the
temperature differentials at both points are
within ±3°C (5.4°F) and the temperature
differential is taped to the thermometer and
recorded on the meter calibration form
(Figures 2-4A or 2-4B, Reference 9 of Section
3.5.2). A simple comparison check is sketched
after this list.
4. Prior to each field trip, compare the
temperature reading of the mercury-in-glass
thermometer at room temperature with that of
the thermometer that is part of the meter
system. If the values or the corrected values
are not within ±6°C (10.8°F) of each other,
replace or recalibrate the meter thermometer.
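Both thermometer comparisons are two-point checks against the mercury-in-glass reference. A minimal sketch is given below (Python; the names, example readings, and tolerance-as-a-parameter structure are assumptions); use the tolerance stated in the text for each thermometer.

    def thermometer_acceptable(readings, tolerance_c):
        """readings: iterable of (reference_c, test_c) pairs, e.g. the ice or
        hot water bath point and the room temperature point."""
        return all(abs(ref - test) <= tolerance_c for ref, test in readings)

    # Cartridge-exit thermometer: agreement within 1 deg C at both points.
    print(thermometer_acceptable([(0.0, 0.4), (22.0, 21.3)], tolerance_c=1.0))
    # Dry gas meter thermometer: agreement within 3 deg C at both points.
    print(thermometer_acceptable([(45.0, 43.2), (22.0, 23.8)], tolerance_c=3.0))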
1.3 Rotameter
The Reference Method does not require that the
tester calibrate the rotameter. The rotameter should
be cleaned and maintained according to the
manufacturer's instructions. Nevertheless, the
calibration curve and/or rotameter markings should be
checked upon receipt and then routinely checked with
the posttest meter system check. The rotameter may
be calibrated as follows:
1. Determine that the rotameter has been
cleaned as specified by the manufacturer and
is not damaged.
2. Use the manufacturer's calibration curve
and/or markings on the rotameter for the initial
calibration. Calibrate the rotameter as
described in the meter system calibration of
Subsection 1.1.2, and record the data on the
calibration form (Figures 2-4A or 2-4B,
Reference 9 of Section 3.5.2).
3. Use the rotameter for testing if the pretest
calculated calibration is within 1.0 ±0.05
L/min. If the calibration point is not within
±5%, however, determine a new flow rate
setting, and recalibrate the system until the
proper setting is determined.
4. Check the rotameter calibration with each
posttest meter system check. If the rotameter
check is within ±10% of the 1-L/min setting,
the rotameter is acceptable, given proper
maintenance. If the check is not within ±10%
of the flow setting, however, disassemble and
clean the rotameter and perform a full
recalibration. (These checks are sketched
below.)
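The two numerical criteria in steps 3 and 4 can be expressed as simple threshold checks, as in the sketch below (Python, illustrative; names are assumptions).

    def pretest_rotameter_ok(flow_l_min):
        # Pretest calibration must fall within 1.0 +/- 0.05 L/min (+/-5%).
        return abs(flow_l_min - 1.0) <= 0.05

    def posttest_rotameter_ok(flow_l_min):
        # Posttest check must fall within +/-10% of the 1-L/min setting.
        return abs(flow_l_min - 1.0) <= 0.10

    print(pretest_rotameter_ok(1.03), posttest_rotameter_ok(0.92))  # True True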
1.4 Barometer
The field barometer should be adjusted initially and
before each test series to agree within ±2.5 mm (0.1
in) Hg with a mercury-in-glass barometer or with the
pressure value reported from a nearby National
Weather Service Station and corrected for elevation.
The tester should be aware that the pressure readings
are normally corrected to sea level. The uncorrected
readings should be obtained. The correction for the
elevation difference between the weather station and
the sampling point should be applied at a rate of -2.5
mm Hg/30 m (-0.1 in Hg/100 ft) elevation increase, or
vice versa for elevation decrease.
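The elevation correction is a linear adjustment of about 2.5 mm Hg per 30 m (0.1 in. Hg per 100 ft). The sketch below (Python, illustrative; names and example values are assumptions) adjusts an uncorrected, station-level reading to the sampling point.

    def pressure_at_sampling_point(station_pressure_mm_hg, sampling_elev_m, station_elev_m):
        """Apply the -2.5 mm Hg per 30 m correction for the elevation difference
        between the weather station and the sampling point (uncorrected,
        station-level pressure in, sampling-point pressure out)."""
        elevation_gain_m = sampling_elev_m - station_elev_m
        return station_pressure_mm_hg - 2.5 * (elevation_gain_m / 30.0)

    # Example: sampling point 60 m above the weather station.
    print(pressure_at_sampling_point(755.0, sampling_elev_m=260.0, station_elev_m=200.0))
    # -> 750.0 mm Hg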
The calibration checks should be recorded on the
pretest sampling form (Figure 2-5, Reference 9 of
Section 3.5.2).
Appendix B
Acronym List
APCD          Air pollution control device
APCE          Air pollution control equipment
AREAL         Atmospheric Research & Exposure Assessment Laboratory
ASTM          American Society for Testing and Materials
AWFSO         Automatic waste feed shutoff
CCS           Calibration check standard
CO            Carbon monoxide
COC           Chain of custody
CEMS          Continuous emission monitoring system
CVAA          Cold vapor atomic absorption (for metals)
DL            Detection limits
DE            Destruction efficiency
DRE           Destruction and removal efficiency
dscf          Dry standard cubic feet
EICP          Extracted ion current plots
GC/MS         Gas chromatography/mass spectrometry
GFAA          Graphite furnace atomic absorption (for metals)
HWERL         EPA's Hazardous Waste Engineering Research Laboratory, now known
              as the Risk Reduction Engineering Laboratory (RREL)
IDL           Instrument detection level
LDL           Lower level of detection
LOQ           Limit of quantitation
M1, M2, M5    Designation of specific EPA sampling and analysis methodologies
MDL           Method detection limits
NBS           National Bureau of Standards
NIST          National Institute of Standards and Technology (formerly the
              National Bureau of Standards)
PCC           Primary combustion chamber
POHCs         Principal organic hazardous constituents
POM           Polycyclic organic matter
QAC           Quality assurance coordinator
QAMS          Quality assurance management staff
QAP           Quality assurance plan
QAPjP         Quality assurance project plan
QAPP          Quality assurance program plan
QA/QC         Quality assurance/quality control
RCRA          Resource Conservation and Recovery Act
RIC           Reconstructed ion chromatograms
RIS           Recovery internal standards
RF            Response factor: ratio of the response of an analyte (peak height
              or area) to its concentration or mass injected into a
              chromatography system
RM            Reference method
RPD           Relative percent difference
RRF           Relative response factor: ratio of the response of an analyte (peak
              height or area) to the response of an internal standard, related to
              the ratio of the concentrations of the internal standard and analyte
RRT           Relative retention time: ratio of the chromatographic retention time
              of an analyte to the retention time of an internal standard
RSD           Relative standard deviation
SCC           Secondary combustion chamber
SIM           Selected ion monitoring
SOP           Standard operating procedure
SVOST         Semivolatile organic sampling train (same as Method 0010 in SW-846)
SW-846        EPA methods publication (see Reference 3)
TBP           Trial Burn Plan
TBR           Trial Burn Report
VOST          Volatile organic sampling train (same as Method 0030 in SW-846)
XAD-2         Resin used in SVOST tube