SEVENTH ANNUAL
WASTE TESTING
AND
QUALITY ASSURANCE
SYMPOSIUM
JULY 8-12, 1991
GRAND HYATT WASHINGTON
WASHINGTON, D.C.
PROCEEDINGS
Volume I
THE SYMPOSIUM IS MANAGED BY THE AMERICAN CHEMICAL SOCIETY
printed on recycled paper
TABLE OF CONTENTS
Volume I
Paper Number                                                               Page Number
QUALITY ASSURANCE
1. Western Processing: Surface and Ground Water Monitoring During A Superfund Remediation. I-1
D. Actor, Z. Naser
2. A Quality Assurance Program for Remedial Actions Within the USEPA ARCS Program. I-18
D. M. Stainken, D. C. Griffin, K. Krishnaswami, J. C. Henningson
3. A National QA Standard for Environmental Programs for Hazardous Waste Management I-26
Activities. G. L. Johnson, N. W. Wentworth
4. The Impact of Calibration on Data Quality. R. G. Mealy, K. D. Johnson I-34
5. Proficiency Evaluation Sample Program for Solid Waste Analysis: A Pilot Project. I-47
D. E. Kimbrough, J. Wakakuwa
6. Technical Data Review - Thinking Beyond Quality Control. K. D. Johnson, R. G. Mealy I-48
7. Quality Assurance Strategies to Improve Project Management. T. L. Vandermark, G. F. Simes I-65
8. Bias Correction: Evaluation of Effects on Environmental Samples. M. W. Stephens, I-66
M. A. Paessun
9. Ensuring Data Authenticity in Environmental Laboratories. J. C. Worthington, R. P. Haney I-81
10. Establishment of Laboratory Data Deliverable Requirements for Data Validation of I-82
Environmental Radiological Data. D. A. Anderson
11. An Assessment of Quality Control Requirements for the Analysis of Chlorinated Pesticides I-83
Using Wide Bore Capillary Columns - A Multi-Laboratory Study. J. A. Serges,
G. L. Robertson
12. Analysis-Specific Techniques for Estimating Precision and Accuracy Using Surrogate I-84
Recovery. C. B. Davis, F. C. Garner, L. C. Butler
13. Use of Organic Data Audits in Quality Assurance Oversight of Superfund Contract I-86
Laboratories. E. J. Kantor, M. Abdel-Hamid
14. Use of Inorganic Data Audits in Quality Assurance Oversight of Superfund Contract I-87
Laboratories. R. B. Elkins, W. R. Newberry
15. Improved Evaluation of Environmental Radiochemical Inorganic Solid Matrix Replicate I-88
Precision: Normalized Range Analysis Revisited. R. E. Gladd, J. W. Dillard
16. Laboratory On-site Evaluations as a Tool for Assuring Data Quality. T. J. Meszaros, I-92
G. L. Robertson
17. Application of Bias Correction. D. Syhre I-93
18. Matrix Spiking: From Sampling to Analysis. D. Syhre, M. Rudel, V. Venna I-116
19. Land Disposal Restrictions Program Data Quality Indicators for BDAT Calculation: Past and I-131
Future. J. Alchowiak, L. Jones
20. Comparison of Quality Assurance/Quality Control Requirements for Dioxin/Furan Methods. I-138
D. Hooton
21. A Study of Method Detection Limits in Elemental Solid Waste Analysis. D. E. Kimbrough, I-151
J. Wakakuwa
22. Preparation and Validation of Proficiency Evaluation Samples for Solid Waste Analysis. I-165
D. E. Kimbrough, J. Wakakuwa
23. Observation of Quality Assurance Anomalies in Superfund Activities. D. M. Stainken I-180
24. Functional Evaluation of QC Samples, a Proactive Approach. D. R. Xiques, J. Allison I-181
25. Features of the U.S. EPA Quality Assurance Material Bank Standards. R. A. Zweidinger, I-186
N. Malof
26. Automated Data Validation - PANACEA or TOOL. G. Robertson I-187
27. Building Data Quality into Environmental Data Management. M. AfiOer, P. Ludvigsen I-188
28. A Software Approach for Totally Automating the Quality Assurance Protocol of the EPA I-198
Inorganic Contract Laboratory Program. C. Anderau, R. Thomas
29. Automated Reporting of Analytical Results and Quality Control for USEPA Organic and I-206
Inorganic CLP Analyses. R. D. Beaty, L. A. Richardson
30. A Customizable Graphical User-Friendly Database for GC/MS Quality Control. P. Chong, I-219
J. S. Hicks, J. Janowski, G. Klesta, C. Pochowicz
31. Computer Assisted Technical Data Quality Evaluation. S. Hopper, J. Burnetti, M. Stock I-225
SAMPLING/FIELD
32. Preparation and Stabilization of Volatile Organic Constituents of Water Samples by Off-Line I-243
Purge and Trap. E. Woolfenden, J. R. Ryan
33. A Remote Water Sampler Using Solid Phase Extraction Disks. H. A. Moye, W. B. Moore I-245
34. Representative Sampling for the Removal Program. W. Coakley, L. Ray, G. Mallon, G. Janice I-262
35. Preliminary Field and Laboratory Evaluations and Their Role in an Ecological Risk I-277
Assessment for a Wetland Subject to Heavy Metal Impacts. G. Linder, M. Bollman, S. Ott,
J. Nwosu, D. Wilborn, B. Williams
36. PAH Analyses: Rapid Screening Method for Remedial Design Program. L. Ekes, M. Hoyt, I-284
G. Gleichauf, D. Hopper
37. Evaluation of Household Dust Collection Methods for HUD National Survey of Lead in I-285
Homes. J. J. Breen, K. Turman, S. R. Spurlin, B. S. Lim, S. Weitz
38. Field Deployment of a GC/Ion Trap Mass Spectrometer for Trace Analysis of Volatile Organic I-296
Compounds. C. R. Leibman, D. Dogruel, E. P. Vanderveer
39. Accurate, On-Site Analysis of PCBs in Soil - A Low Cost Approach. D. Lavigne I-298
40. How Good are Field Measurements? L. Williams I-311
41. Assessment of Potential PCB Contamination Inside a Building: A Unique Multi-matrix I-312
Sampling Plan. W. W. Freeman
42. Comparison of the HNU-Hanby Field Test Kit Procedure for Soil Analysis With a Modified I-323
EPA SW-846 5030/8000 Procedure. J. D. Hanby, B. Towa
43. Field Test Kit for Quantifying Organic Halogens in Water and Soil. D. Lavigne I-331
QUALITY ASSURANCE
WESTERN PROCESSING: SURFACE AND GROUND WATER MONITORING
DURING A SUPERFUND REMEDIATION
David Actor, Manager, Sampling Programs, Chemical Waste Management, Inc., 150
West 137th Street, Riverdale, Illinois 60627; Zaki Naser, Technical and
Environmental Manager, Western Processing Project, Chemical Waste Management,
Inc., 20015 72nd Avenue South, Kent, Washington 98032
ABSTRACT
Collecting high quality, defensible samples from the environment can be a
controversial and difficult task. This paper describes an operating ground water and
surface water sampling program that is monitoring the progress of a long-term
Superfund remediation.
In 1983, Western Processing was listed under CERCLA as one of the fifty most
contaminated sites in the nation. Written into the consent decree are requirements
for both surface and ground water monitoring. The stream that runs adjacent to the
site has intensive monitoring requirements during remediation with clearly defined
water quality objectives. Ground water monitoring is required during remediation
and for 30 years thereafter.
The monitoring approach includes comprehensive quality assurance/quality control
(QA/QC) and sampler training programs. Dedicated ground water monitoring
equipment is utilized to minimize introduction of contamination by sample
collection. The surface stream that runs adjacent to the site is sampled with non-
dedicated equipment. A rigorous QA/QC program has been implemented to track
any possible contamination introduced during sample collection and to ensure the
integrity of every sample obtained.
The monitoring equipment and the methods employed on this project as of 1987 are
discussed, including presampling activities, sampling procedures, field records
handling, sample parameters (which include priority pollutant listed compounds),
sampling schedule, analytical parameters and procedures, and QA/QC
objectives. The health and safety approach for the environmental monitoring
program at Western Processing is discussed, and a brief description of the
environmental cleanup is given.
I INTRODUCTION
History
The Western Processing Superfund site is located in Kent, Washington,
approximately 20 miles south of Seattle. The site is presently surrounded by
warehouse and manufacturing facilities that were built over the last decade.
From 1952 through 1961, the site was operated by the U.S. Army as an anti-aircraft
battery. The Western Processing Company purchased the 13-acre location and
began operations in 1961 as an animal by-products and brewers' yeast processor.
Operations were expanded to include the reprocessing of pickle liquor, recovery of
heavy metals and waste solvents, neutralization of acids and caustics, electrolytic
destruction of cyanide, chemical recombination to produce zinc chloride and lead
chromate, reclamation of flue dust, metal finishing by-products, and ferrous sulfide
in fertilizer production. In 1983, due to environmental problems associated with the
site, the U.S. Environmental Protection Agency initiated closure of the facility and
an emergency response cleanup action. Following surface remediation and
establishment of surface water control measures, an intensive shallow soil
contamination study was conducted to determine the extent of hazardous chemical
contamination and provide a baseline for remedial activities. Sampling
investigations conducted between 1982 and 1986 identified over 70
contaminants in soils and 46 contaminants in ground water samples. From these
initial data, ground water and surface water indicator chemicals were selected for
long-term environmental monitoring (Tables 1 and 2).
Geology
Western Processing is located in the Duwamish/Green River Valley flood plain and
is bounded to the west by Mill Creek and to the east by a shallow drainage ditch
(Figure 1). Ground water is shallow, ranging from 3 to 15 feet below the ground
surface. Underlying soils consist of fill and laterally discontinuous and
unconfined lenses of sands, silts, and clays. A discontinuous sandy and clayey silt
layer is present at about 35 feet below the ground surface. This aquitard is about
5 feet thick. Below 40 feet the soil is generally unconfined sands and constitutes the
regional aquifer. The regional ground water flow is generally to the northwest
resulting in an upgradient direction to the east and southeast of the site. Shallow
ground water flow is influenced by discharge to Mill Creek at depths of 30 to 40
feet during normal to low flow periods. Regional ground water flow is about 100
feet per year.
TABLE 1
ANALYTICAL SUITE FOR GROUND WATER
(Indicator Chemicals)
Volatile Organics - All volatile organic priority pollutants

Metals (total):
Cadmium, Chromium, Copper, Nickel, Lead, Zinc, Iron, Manganese, Sodium, Calcium

Base Neutral/Acid Extractables:
Bis(2-ethylhexyl) phthalate, 2,4-Dichlorophenol, 2,4-Dimethylphenol, Isophorone, Phenol

Other:
Cyanide
3-(2-Hydroxypropyl)-5-Methyl-2-Oxazolidinone

Conventional Parameters:
Total Hardness, Temperature (field), pH (field), Specific Conductance (field),
Total Chlorides, Sulfates, Bicarbonate, Carbonate
TABLE 2
ANALYTICAL SUITE FOR SURFACE WATER
(Indicator Chemicals)
Volatile Organics
Chloroform
1,1-Dichloroethane
1,1-Dichloroethene
Ethylbenzene
Methylene Chloride
Tetrachloroethene
trans-1,2-Dichloroethene
cis-1,2-Dichloroethene
1,1,1-Trichloroethane
Trichloroethene
Toluene
Metals (Total & Dissolved)
Cadmium
Chromium
Copper
Nickel
Lead
Zinc
Iron
Manganese
Sodium
Calcium
Base Neutral/Acid Extractables
Bis (2-ethylhexyl) phthalate
2,4 Dichlorophenol
2,4 Dimethylphenol
Isophorone
Phenol
Conventional Parameters
Temperature (field)
pH (field)
Specific Conductance (field)
Dissolved Oxygen (field)
Hardness
Chloride
Ammonia
Turbidity
Nitrate
Phosphorus
Total Suspended Solids
Other
Cyanide
3-(2-Hydroxypropyl)-5-Methyl-2-Oxazolidinone
[Figure 1. Monitoring Locations, Western Processing, Kent, Washington. Site map
showing stream monitoring stations (e.g., C-1, C-3) and monitoring well locations
with descriptions; scale in feet.]
Contractor/Client/Government Interaction
The Western Processing Superfund project may be unique in its proactive
approach to overall site management. Informal weekly meetings are held to inform
the Trust overseeing the cleanup, the governments (U.S. Environmental Protection
Agency, Washington State Department of Ecology, and the City of Kent), and other
regulatory parties of the status of site operations. Regulatory interaction and
cooperation to resolve project issues are very high.
II REMEDIATION
Subsurface remediation included the removal of highly contaminated soils and non-
leachable materials; installation of ground water extraction, infiltration, and water
treatment systems; and a ground water monitoring network. A site-dedicated
laboratory was constructed to analyze both process and environmental samples
generated by the project.
Twenty-two thousand cubic yards of highly contaminated and low permeability
materials were excavated, and the pits backfilled with clean, high permeability fill.
The site was then graded and bermed, and a shallow (30 feet deep) extraction well
system was installed. Organics and metals contaminated ground water is pumped
from these wells by vacuum extraction and transferred to a water treatment facility
where metals are removed by precipitation/clarification and organics are removed
by air stripping and carbon adsorption. The dedicated laboratory performs analysis
on water and soils for organic and inorganic parameters. Laboratory instruments
include three gas chromatograph/mass spectrometers for volatile and semi-volatile
analyses, an inductively coupled plasma spectrometer and two atomic absorption
furnaces for metals analyses, a gas chromatograph for pesticide analysis, an ion
chromatograph for anion analysis, and a UV-Vis spectrometer for phenol and
cyanide analyses.
A slurry wall was constructed to minimize lateral ground water movement into or
away from the site during ground water extraction. This wall encircles the site and
is composed of a soil and bentonite mixture with a hydraulic conductivity of about
1 x 10^-7 cm/sec. Its depth ranges from 40 to 45 feet so that it joins the aquitard
zone present at 35 feet. The Western Processing Consent Decree stipulates
that an inward gradient to the site must be maintained during ground water
remediation. Twenty-two pairs of shallow and deep piezometers were installed inside
and outside the slurry wall to monitor the horizontal and vertical gradients relative
to the site and to aid in management of the extraction and infiltration systems.
III MONITORING PROGRAM
The monitoring program at Western Processing is divided into two parts: process
monitoring, which includes discharge compliance monitoring, and environmental
monitoring. Process sampling and analysis monitors remediation progress of the
extraction field and treatment efficiency through the various treatment processes.
Environmental monitoring at Western Processing tracks relatively low levels of
contaminants in the ground water that originally migrated off-site. Immediately
adjacent and down-gradient from the site is Mill Creek. Performance standards for
Mill Creek water quality are specified in the Western Processing Consent Decree.
These standards were achieved by reducing the contaminant concentrations at the
downstream sampling locations (Figure 1) below the applicable ambient water
quality criteria (AWQC). The applicable AWQC are those that were published in
the Federal Register at the time of entry of the Consent Decree (April 1987).
Relevant AWQC are for cadmium, chromium (hexavalent and total), copper, lead,
mercury, nickel, silver, zinc and cyanide. This performance standard was achieved
within its three year compliance period. Sampling protocol is extremely important
for collection of representative samples and elimination of potential contamination
from sample collecting activities. Mill Creek water is sampled monthly for ground
water source contamination at one upstream location and at two downstream
locations. In addition, Mill Creek sediments are sampled semiannually at one
upstream location and three downstream locations.
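To make the compliance logic concrete, the following minimal Python sketch checks
downstream results against AWQC values. The actual criteria are those published in
the Federal Register at the time of entry of the Consent Decree; all numbers below
are hypothetical and illustrative only.

    # Hedged sketch of the downstream compliance check described above.
    # Criterion and result values (ug/L) are hypothetical, not the 1987 AWQC.
    awqc = {"cadmium": 1.1, "copper": 12.0, "zinc": 110.0}
    downstream = {"cadmium": 0.4, "copper": 8.3, "zinc": 95.0}

    for analyte, criterion in awqc.items():
        result = downstream[analyte]
        status = "meets" if result < criterion else "exceeds"
        print(f"{analyte}: {result} ug/L {status} the AWQC of {criterion} ug/L")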
Monitoring wells have been installed up-gradient of Western Processing for
background data and down-gradient of the site to track site-related contaminant
migration. Presently the monitoring network includes 54 wells, 11 of which were
pre-existing. The monitoring installations are individual shallow wells and clusters
of three or four wells, with each well in a cluster screened in a different zone. All
newer installations are single completion wells with 10 foot
screen lengths placed in depth zones of 10 to 30 feet, 40 to 60 feet, 80 to 100 feet,
or 120 to 140 feet. The actual screen interval was determined during well
installation by both sieve analysis of the soils in the proposed screen zone and
observations of the hydrogeologist that logged the boring. All monitoring wells were
installed using cable tool drilling equipment. Long-term ground water monitoring
is conducted on a quarterly basis with special interest wells being monitored
monthly. Water levels are being monitored monthly during the operation of the
ground water extraction system. The duration for long-term ground water
monitoring is thirty years after the governments have determined that an acceptable
level of remediation has been achieved by ground water extraction and treatment.
Monitoring Approach
Since environmental monitoring for Western Processing will be conducted for thirty
years after completion of remedial activities, a comprehensive and defensible
program had to be developed. To assure that the sampling was consistent,
especially due to project duration and the long term potential for litigation
concerning the analytical data, a comprehensive Quality Assurance/Quality Control
(QA/QC) program was established. QA/QC for the monitoring program includes:
o Rigorous sampling methodologies;
o Defensible documentation of all calibrations, chain-of-custody,
maintenance, training and sampling procedures;
o Sample contamination evaluation to determine if technique or
equipment is introducing contamination to the sample;
o Duplicate sampling and analyses to assess analytical precision;
o Round robin analyses to measure field analytical performance;
o Instrument calibration and performance checks to assess whether the
field instruments are performing within acceptable parameters; and
o Frequent sampling audits.
Position requirements for sampling personnel at Western Processing are high.
Physical requirements are demanding as the sampler must perform sampling tasks
in protective clothing in a variety of weather conditions. The sampler must possess
adequate academic training to comprehend the concepts and goals of the work.
Sampler training is essential to assure the competency of the sampling team in
performing their tasks and maintaining consistency of protocol and technique
throughout the duration of the program. Training is required a minimum of once
a year for all samplers. Training includes a thorough review of all relevant work
plans, discussion of the sampling theory and its application, and supervised
demonstration of sample collection. A sampler proficiency test is administered
annually by comparing analytical results of the trainee to those of the trainer. All sampler
training is documented and archived with project records.
Monitoring Equipment
The monitoring program is operated out of a 10 x 40 foot trailer that has been
modified for field operations. The monitoring laboratory contains both domestic
and deionized water sources for decontaminating equipment, a lab hood for
application of solvent and acid rinses, flammable storage, equipment and supply
storage, work counters for instrument calibration and maintenance, and a waste
water tank to collect all decontamination water for treatment. Equipment common
to both the ground water monitoring program and the surface water monitoring
program includes pH meters, specific conductivity meters, and a dedicated sampling van.
The van is the platform from which all field activities are conducted. This includes
field pH and specific conductivity analyses, field data entry and sampling equipment
and supply storage.
Monitoring Wells
Presently, 54 monitoring wells are being sampled for Western Processing. To
eliminate the possibility of cross contamination, the sampling equipment that comes
in direct contact with ground water is dedicated to each well. The type of sampling
pump utilized for all monitoring wells on the project is a submersible mechanical
pump: a double check valve, positive displacement, piston pump.
Actuation of the pump is accomplished from the well head with a portable
pneumatic motor. The mechanical connection to the submersible pump is by a
small diameter stainless steel pushrod. Only the pump, pushrod, and discharge pipe
contact ground water down the well. Materials of construction of the sampling
pump are Teflon and stainless steel, which provide good chemical resistance while
maintaining reliability. Discharge pipe is 3/4 inch schedule 80 PVC, which provides
an adequate discharge rate while providing enough room between the discharge pipe
and the well casing to allow the use of a water level probe. The well head
discharge is a 3/4 inch PVC tee with a hose-fitting connection. The advantage of
this system is that the well can be purged at a reasonable pumping rate (5 gal/min)
and sampled (as low as 120 ml/min) with the same pump. The compressed air
source for the pneumatic motors is a trailer-mounted air compressor. The
compressor produces sufficient volume to purge three wells concurrently to minimize
total sampling time. A 1000 gallon purge water collection tank is also mounted on
this trailer.
Surface Water Monitoring
Unlike the ground water monitoring program, the surface water and sediment
sampling program does not employ dedicated sampling equipment. To assure high
quality surface water and sediment samples, rigorous QC procedures have been
incorporated to detect sample contamination. These procedures are described in
QA/QC objectives.
Non-dedicated equipment includes mechanical current meters for flow measurement,
a subsurface grab-type sampler for collecting stream samples, an Ekman-type dredge
for collecting stream sediments, a 2.4 liter pressure filter for filtering stream
samples, and stainless steel mixing bowls and trowels for compositing sediment
samples.
During sample collection, current meters are used to determine stream flow by
taking velocity measurements at different points in the stream's cross section. Two
types of current meters are used; selection depends on the water depth. A current
meter consists of a vertical axis rotor with cups that is attached to a wading rod.
Sediment samples are collected from Mill Creek with a pole-mounted Ekman
dredge. This sediment sampler is a stainless steel box with spring loaded jaws that
are tripped after the box has been driven into the sediment. The pressure filter is
an acrylic pressure filtration unit designed for field filtration of water samples. The
filter unit is pressurized with nitrogen and is used for collecting samples for
dissolved metals analysis. A one liter grab-type water sampler is utilized for
collecting all water samples. Water is transferred directly from the grab sampler to
individual sample bottles. In-situ oxygen measurements are observed with a
dissolved oxygen meter.
Monitoring Methods
Pre-Sampling Activities
Before entering the field and initiating field activities, available background
information on the monitoring station is reviewed. This information includes the
condition of the well or stream station and range of historical field test data (pH,
specific conductivity, dissolved oxygen, temperature, purge volume, etc.). Field
equipment is checked for proper operation (i.e., the air compressor is run,
pneumatic motors are operated, current meters are assembled, the dredge is tested,
etc.). The field instruments are calibrated and results recorded in a laboratory
logbook. Part of the calibration procedure is an instrument performance check
(IPC/QC). Results of this IPC/QC must be within two standard deviations or the
instrument is recalibrated. A closing calibration check is conducted at the end of
the sampling day with the results recorded in the laboratory logbook. All calibration
logbooks are reviewed and initialed monthly by the site QA officer.
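The two-standard-deviation acceptance test lends itself to a short illustration. The
Python sketch below assumes, hypothetically, that the control limits come from the
mean and standard deviation of historical check-standard results; the project's
actual limit derivation is not specified in the text.

    import statistics

    # Historical check-standard results for a field pH meter (hypothetical values).
    historical = [7.02, 6.98, 7.05, 6.97, 7.01, 7.03, 6.99]
    mean = statistics.mean(historical)
    sd = statistics.stdev(historical)

    def ipc_acceptable(result, mean, sd):
        # Accept the instrument performance check if the result falls within
        # two standard deviations of the historical mean.
        return abs(result - mean) <= 2 * sd

    result = 7.04
    if ipc_acceptable(result, mean, sd):
        print("IPC/QC within limits; proceed with sampling")
    else:
        print("IPC/QC outside limits; recalibrate the instrument")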
Pre-cleaned and quality controlled bottles are utilized for sample collection. To
minimize contamination introduced from the field, preservatives, if required, are
added by the laboratory with the exception of volatile samples. Volatile samples are
preserved in the field immediately after sample collection. Proper labels, chain of
custody forms, and custody seals are assembled.
Field Records
Field sampling records consist of the chain of custody, field parameter form, and
field logbook. The chain of custody (COC) accompanies and tracks the sample from
acquisition through analysis to final disposition. The form is designed to summarize
the contents of the shipment, dates and times of custody transfer, and signatures of
all individuals relinquishing and receiving the samples. It includes the following
information:
o Project name o Sample number
o Sampler's name o Date/time
o Analysis parameters o Number of containers
o Remarks o Relinquished by
o Date/time o Received by
The Western Processing COC form closely resembles the NEIC form with the
addition of analytical parameters and project name printed on the basic form for
efficiency.
The field parameter form contains information about sampling procedures,
equipment, conditions, and field measurements. All field information is recorded
via a laptop computer and downloaded at the end of the day into the laboratory
database; a sketch of one such record follows the list below. A hard copy is printed
at that time for review, date, and signature by the field sampler. The signed hard
copy is archived for future reference. The field
parameter form contains the following information:
o Sample point - the complete sample number which includes the
sample location plus the laboratory ID number.
o Purging information - date, time, volume.
o Sample depth, water depth, flow.
o Sampling equipment information.
o Field measurements - pH, temperature, conductivity, dissolved oxygen,
and ground water elevation.
o Field comments - weather conditions, well and dedicated equipment
condition, sample appearance and preservatives added in the field, if
any.
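As a rough illustration of the data captured per sampling event, the Python sketch
below models one field parameter record. The field names and values are
illustrative assumptions, not the project's actual database schema.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class FieldParameterRecord:
        # One record as it might be keyed into the field laptop and later
        # downloaded into the laboratory database (hypothetical schema).
        sample_point: str                 # sample location plus laboratory ID number
        purge_date: str
        purge_volume_gal: float
        sample_depth_ft: float
        ph: float
        temperature_c: float
        conductivity_umhos: float
        dissolved_oxygen_mg_l: Optional[float] = None   # surface water stations only
        comments: str = ""

    record = FieldParameterRecord(
        sample_point="15M16B-0712", purge_date="1991-07-12",
        purge_volume_gal=14.6, sample_depth_ft=25.0, ph=6.8,
        temperature_c=12.4, conductivity_umhos=410.0,
        comments="overcast; well cap intact; sample clear")
    print(record)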
The field logbook is a numbered, controlled document that contains handwritten
field notes and data that complement the information provided on the field
parameter form and COC. Each page is numbered, initialed, and dated by the
sampler. The field log book provides the additional information necessary to
respond in detail to inquiries about a sampling event, especially when conditions
require deviation from the procedures specified in the workplan.
Sample Storage and Transfer
Immediately after sample collection, the bottles are placed in an insulated shuttle
with ice packs and transported to the laboratory for analysis. Samples are
transferred to the laboratory usually within 4 hours of collection, minimizing any
temperature changes that might result from shipping to an off-site laboratory.
Transfer of samples to the laboratory requires a properly prepared chain of custody
form. An incomplete or improperly prepared chain of custody could invalidate any
resulting data.
Analytical Procedures
Laboratory analysis is accomplished using USEPA methods, SW-846, Standard
Methods for the Examination of Water and Wastewater, and CLP methods. Table 3
presents these methods.
TABLE 3
Analytical Laboratory Methods
Parameter                                  Method Reference   Matrix
Acidity                                    305.1              Water
Alkalinity                                 310.1              Water
Ammonia                                    350.3              Water
Anion Chromatography (Chloride,
  Nitrate, Sulfate)                        300.0              Water
Cyanide                                    335.2              Water
Hardness                                   130.2              Water
Metals (ICP)                               200.7              W/S
Arsenic (GFAA)                             206.2              W/S
Antimony (GFAA)                            204.1              W/S
Lead (GFAA)                                239.2              W/S
Mercury (CVAA)                             245.1              W
Mercury (CVAA)                             245.5              S
Selenium (GFAA)                            270.2              W/S
TABLE 3 - Analytical Laboratory Methods (cont'd)
Thallium (GFAA)                            279.2              W/S
Phosphorus                                 365.2              Water
Total Suspended Solids                     160.2              Water
Turbidity                                  170.1              Water
Volatile Organics                          8260               W/S
Semi-Volatile Organics                     8270               W/S
Pesticides/PCB                             8080               W/S
Quality Assurance and Quality Control Objectives
A strong quality assurance/quality control (QA/QC) program for sampling and
analysis is incorporated to determine actual environmental contamination. QA/QC
is used to assess the sample's ability to represent its sampling location. A minimum
of 10% of the total number of samples collected are quality control samples. These
include both sample duplicates and method blanks.
Duplicate samples are collected in the same manner as the actual environmental
samples. The duplicate is not a split sample but is collected immediately after the
original environmental sample. Over the duration of the project, each station is
included in this process.
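Two of the quantities above reduce to simple arithmetic: the 10% minimum QC
sample count and duplicate precision, commonly expressed as a relative percent
difference (RPD). The Python sketch below illustrates both; the RPD formulation
is a standard convention rather than a procedure stated in this paper, and the
numbers are hypothetical.

    import math

    def min_qc_samples(n_samples, fraction=0.10):
        # Minimum number of QC samples (duplicates and blanks) for an event,
        # rounded up so the 10% floor is always met.
        return math.ceil(n_samples * fraction)

    def rpd(original, duplicate):
        # Relative percent difference between an original result and its
        # field duplicate, as a measure of precision.
        return abs(original - duplicate) / ((original + duplicate) / 2.0) * 100.0

    print(min_qc_samples(42))          # 42 samples -> at least 5 QC samples
    print(round(rpd(18.0, 20.0), 1))   # e.g., zinc at 18 vs. 20 ug/L -> 10.5% RPD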
For ground water sampling, a field method blank consists of the appropriate
sampling containers filled by the laboratory with deionized water and sent into the
field with other containers to check the quality of the sampling environment. These
blanks are opened at the sampling station and poured into sample containers during
the sampling event and transported to the laboratory for analysis. Volatile blanks
are not opened in the field and serve as a trip, or transport blank to test container
quality. If contamination is present after analysis of the volatile blank, container
blanks are collected and analyzed on the specific lot(s) of sample bottles used
during that event. The well would then be resampled.
For surface water sampling, field method blanks are collected to check equipment
decontamination procedures, quality of the sampling environment, and sample
container quality. The method blank consists of pouring laboratory deionized water
into the grab sampler and then transferring it to sample containers to be analyzed
as a regular sample. For filtered samples, the water is poured into the pressure
filter and filtered as a normal investigative sample. Volatile sample bottles are
filled in the laboratory and travel to the field and back without being opened.
1-13
-------
For sediment sampling, the Ekman dredge and mixing equipment contact the
sample. For the equipment blank, acid-washed sand is placed in the dredge, emptied
into the bowl, and then transferred to the jar as an actual sample would be handled.
Monitoring Wells
Sampling Procedures
After arrival at the well location, the conditions of the well and the immediate
surroundings are observed and recorded. This includes weather conditions, well
integrity, evidence of tampering or contamination, and conditions in the area that
could affect the quality of the sample (airborne contaminants, etc.). All wells are
photographed annually to document the condition of each well. Prior to sampling,
the ground water elevation is measured and the monitoring well purged. The well
is purged so that the ground water sample collected is representative of the
formation water at that point in time. The ground water surface elevation is
measured with an electric tape from the top of the well casing to the water level in
the casing. Water levels are measured to the nearest hundredth of a foot with a
precision of ±0.02 foot. After the ground water surface elevation
has been determined, the volume of ground water in the well casing can be
calculated and the total purge volume determined. The industry standard and the
approved purge amount for this project is three casing volumes. Purge volumes are
measured in the field with in-line flow meters calibrated in gallons and tenths of
gallons. Once a year pH, temperature and conductivity are measured continuously
during the purge process to verify that purging three casing volumes of ground water
is adequate to provide a representative sample. All the wells monitored to date are
stable with respect to these parameters before complete removal of the third casing
volume.
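The purge calculation itself is straightforward. The Python sketch below computes
three casing volumes from the measured water level; the well dimensions used are
hypothetical, since the project's wells vary in depth and construction.

    import math

    GALLONS_PER_CUBIC_FOOT = 7.4805

    def purge_volume_gal(casing_diameter_in, depth_to_water_ft, well_depth_ft,
                         casing_volumes=3):
        # Standing water volume in the casing, from the water column height and
        # the casing cross-section, multiplied by the required three casing volumes.
        radius_ft = (casing_diameter_in / 12.0) / 2.0
        water_column_ft = well_depth_ft - depth_to_water_ft
        one_volume = math.pi * radius_ft ** 2 * water_column_ft * GALLONS_PER_CUBIC_FOOT
        return casing_volumes * one_volume

    # Hypothetical well: 2-inch casing, water at 8.25 ft, 30 ft total depth.
    print(round(purge_volume_gal(2.0, 8.25, 30.0), 1))   # about 10.6 gallons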
The pneumatic motor and purge water discharge line are attached to the well head
assembly and the well pumped until the required volume has been evacuated. The
purge water discharge lines are not dedicated to each well, so the end contacting the
well head assembly is decontaminated before and after each use. A backflow
preventer assures that purge water cannot drain back into the well during the
pumping process. After completion of purging, the well is allowed a reasonable
period to recharge (usually five to thirty minutes, depending on the recharge rate)
prior to sampling. During this recharge period the dedicated small diameter 3/8
inch sample hose is attached and the motor adjusted for minimum flow operation.
Ground water samples are collected immediately after well purging and recovery.
Samples are collected over a bucket to minimize the possibility of contaminating the
immediate vicinity of the well. A one liter bulk sample is collected and split into
four discrete samples from which temperature, conductivity and pH are measured.
Collection of volatile organic samples involves filling the appropriate vial very slowly
with as little air contact as possible. Because the analysis requires that volatile
samples be headspace free, the vial is allowed to overflow at least 1.5 volumes. The
appropriate amount of preservative (concentrated HCl) is added and the cap is
gently replaced. All other sample bottles are filled with a minimal amount of air
contact. These bottles are filled as full as possible without any overflow.
Sample Parameters and Schedule
The long term ground water monitoring network is sampled on a quarterly basis for
the constituents listed in Table 1. Monitoring wells are sampled in the same
sequence each quarter to ensure that the interval between successive samplings of each
well remains one quarter. During the summer quarter, a complete priority pollutant scan is
conducted on a sample from each monitoring well in addition to those parameters
not on the priority pollutant list.
Surface Water Monitoring
The stream that lies adjacent to the site is continuously monitored for flow using a
pressure transducer located behind a weir. Resulting information is recorded by a
data logger. Comparative flow measurement data is manually collected at each
stream sampling station immediately following sample collection. Velocity
measurements and stream flow are determined using U.S. Geological Survey
methods.
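One standard USGS technique for reducing such measurements is the midsection
method, in which each velocity observation represents a subsection extending
halfway to the neighboring verticals. The Python sketch below illustrates the
general computation with hypothetical station distances, depths, and velocities;
it is not the project's exact procedure.

    def discharge_cfs(stations_ft, depths_ft, velocities_fps):
        # Total discharge (cubic feet per second) by the midsection method:
        # each vertical's subsection width spans half the distance to each
        # neighboring vertical (edge verticals use one half-distance only).
        q = 0.0
        n = len(stations_ft)
        for i in range(n):
            left = stations_ft[i - 1] if i > 0 else stations_ft[i]
            right = stations_ft[i + 1] if i < n - 1 else stations_ft[i]
            width = (right - left) / 2.0
            q += width * depths_ft[i] * velocities_fps[i]
        return q

    # Hypothetical cross section: distances (ft), depths (ft), velocities (ft/s).
    x = [0.0, 2.0, 4.0, 6.0, 8.0]
    depth = [0.0, 1.2, 1.8, 1.1, 0.0]
    vel = [0.0, 0.9, 1.4, 0.8, 0.0]
    print(round(discharge_cfs(x, depth, vel), 2))   # about 8.96 cfs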
Sampling Procedures
Prior to sampling, weather conditions and stream height are evaluated to determine
whether storm conditions (abnormally heavy rain and high stream levels) exist.
Monitoring occurs only if conditions are within the norm for that time of year, so
that the data represent typical conditions for that period rather than storm events. The sampling
station is evaluated for any physical changes or conditions that could impact flow
measurement or sample collection. These are noted on the field parameter form.
Field measurements are conducted using the same protocol as for ground water
monitoring. A one liter bulk sample is collected with the grab sampler and the
contents split into four discrete samples from which temperature, conductivity and
pH are measured. Dissolved oxygen calibration and measurements are conducted
in-situ. Collection of volatile organic samples involves filling the appropriate vial
from the grab sampler with as little air contact as possible. As with ground water
samples, the analysis requires that volatile samples be headspace free. When filling,
the vial is allowed to overflow at least 1.5 volumes. The appropriate amount of
preservative (concentrated HCl) is added and the cap is gently replaced. All other
sample bottles are filled with a minimal amount of air contact. These bottles are
filled as full as possible without any overflow. For total and dissolved metals
analysis, the sample is split into two aliquots. One aliquot is filtered in the field and
the other is left unfiltered. This ensures that the total and dissolved metals analyses
are performed on the same sample of water.
Sediment Sampling Procedures
Mill Creek sediment within the reach of Western Processing is generally silty with
some sand. The Ekman dredge was selected for sediment sampling because it can
collect an adequate volume of reasonably undisturbed sample from this sediment
type. The number of individual samples collected at each station ranges from one
to three, and is dependent upon the stream width at each location. Multiple
samples are mixed to obtain one composite sample for each station. Volatile
samples are collected by taking approximately equal aliquots from beneath the
undisturbed surface of the individual samples immediately after sample collection
and before compositing.
Sample Parameters and Schedule
Mill Creek water is sampled monthly until the completion of remediation.
Parameters for analyses are listed in Table 2. During the summer quarter, a
complete priority pollutant scan is conducted on each surface water sample in
addition to those parameters not on the priority pollutant list.
IV HEALTH & SAFETY
The environmental monitoring program is carried out in accordance with the
approved health & safety plan for the project. Enough historical data has been
accumulated on both the monitoring wells and surface water sampling to allow
protective clothing requirements to be addressed on a well-by-well and station-by-
station basis. Minimum requirements include a working uniform with safety glasses
and disposable latex or PVC gloves.
V SUMMARY
The Environmental Monitoring Program at Western Processing has been refined
with input and constructive criticism by both government and private industry from
the first workplan drafts in June 1987 until government approval of the workplans
in the summer of 1990. The result is a monitoring program that provides defensible
and reproducible data that may well be tested by the courts in the future. A
specific monitoring history can be pulled from archives and a specific sampling event
can be recreated without reliance on the sampler's memory.
VI ACKNOWLEDGEMENTS
The authors would like to express their appreciation to Dennis Sleeves and Paul
Anderson of Chemical Waste Management for their input. The authors would also
like to acknowledge the Western Processing Trust Fund, the U.S. EPA, Region X,
and the Washington State Department of Ecology for their proactive approach to
resolving both technical and regulatory issues during the Western Processing
remediation.
VII REFERENCES
U.S. EPA Region 10, Investigation of Soil and Water Contamination at Western
Processing, King County, Washington. May 1983.
Landau Associates, Extraction/Infiltration Systems Management Manual, Western
Processing. July 1990.
Chemical Waste Management, Inc., Ground Water Monitoring Program Workplan,
Part B. July 1990, Revision 1.
Chemical Waste Management, Inc., Mill Creek & East Drain Monitoring Plan.
August 1990, Revision 1.
Chemical Waste Management, Inc., Laboratory Quality Assurance Quality Control
Project Plan. July 1, 1990.
A QUALITY ASSURANCE PROGRAM FOR
REMEDIAL ACTIONS WITHIN THE USEPA ARCS PROGRAM
by
D.M. Stainken, D.C. Griffin, K. Krishnaswami and J.C. Henningson
Malcolm Pirnie, Inc.
2 Corporate Park Drive
White Plains, New York 10602
ABSTRACT
The Superfund Program has continually evolved over the past decade and many of the NPL
sites have advanced to varying stages where clean-up activities are to commence. The
USEPA has established an Alternative Remedial Contracting Strategy (ARCS) whereby
contractors provide support for remedial activities (response action contracts). Quality
Assurance (QA) activities are an integral component of the technical support provided.
These QA activities are inculcated into the RI/FS and RD/RA phases, as well as in PRP
oversight tasks. Although general QA activities are relatively well defined, including development
of sampling plans and QA project plans, the management and technical details remain to
be implemented.
As an ARCS contractor, we have implemented a practical QA management system spanning
multiple geographic locations. This system is built around a QA Program Plan, a unique
quality assurance manual and SOPs, our laboratory and equipment facility, an audit system
and a technical director - quality management review process with review teams. Applying
this system, we have conducted audits of field events and of other contractors. This paper
will present details of the organization of the QA program, key components and past
experience in implementing and managing the process.
INTRODUCTION
The Superfund Program has steadily grown in size and complexity over the past decade and
the number of hazardous waste sites placed on the National Priority List (NPL) has grown
accordingly. The Superfund Program and its legal and technical components have
influenced or affected numerous Federal and State programs and industrial practices.
During the past decade, EPA used the services of Contractors to fulfill specific tasks and
objectives. Hazardous waste sites were and continue to be evaluated through contracts
termed Field Investigation Team (FIT) contracts. The services provided survey and assess sites,
"score" them for hazard ranking, and, if warranted, place the site on the NPL.
The Superfund Program Contract strategy has undergone internal program management
reviews and has been changed at times to achieve EPA's goals and objectives. Consequently,
acronyms for programs, contracts, tasks, etc., have arisen and, in some cases, dropped from
use. In 1990, the EPA established a long-term contracting strategy for Superfund (1). The
Agency's objectives in developing the strategy were to analyze the long term contracting
needs of the program, and to design a portfolio of Superfund contracts to meet those needs
over the next ten years. Contract support was to be implemented for enforcement support,
regional management support, removal contract support, analytical support, and preremedial and
remedial contract support. The preremedial activities include site preliminary assessments,
inspections, and "scoring" of sites for NPL consideration. Remedial activities include a
variety of activities necessary to actually remediate a site such as the remedial investiga-
tions/feasibility study phase (RI/FS) and the remedial design/remedial action phase
(RD/RA). To conduct preremedial and remedial activities, the Agency established
contracts termed Alternative Remedial Contracting Strategy (ARCS) contracts. As an
additional task under ARCS, some contractors may be assigned oversight and review tasks
of PRP (Potentially Responsible Party) sponsored cleanups.
A key component of ARCS is the implementation of a Quality Assurance Program, which
is a requirement within all EPA programs (2). There are numerous technical guidance
documents (3-11) and manuals which must be integrated into a Superfund ARCS QA
program. The components include establishment of standard operating procedures (SOPs),
field sampling plans (FSP), a quality assurance program plan (QAPP), quality assurance
project plans (QAPjP), use of the CLP program, data validation, conducting audits of the
processes (e.g., field audits, data quality audits, management systems or program audits, file
audits, lab audits, etc.), adherence to applicable or relevant and appropriate requirements
(ARARs), and quality control and quality assurance activities for remedial design and
action phases.
QA PROGRAM
As a new EPA Region II ARCS Contractor, Malcolm Pirnie, Inc. (MPI) has developed a
quality assurance program for administering and monitoring requisite Superfund Quality
Assurance (QA) activities within ARCS remedial work and site assignments. Traditionally
quality assurance at Malcolm Pirnie was a project specific activity and the responsibility of
individual project officers. The focus was on the quality of technical work products and did
not require extensive documentation. The ARCS program activities covered include all
phases of RI/FS and RD/RA work and involve multiple MPI offices, the subcontractor,
CH2M Hill, Inc., and several functions (e.g., ARCS equipment and storage, lab, program
management, site management). A unique aspect of the ARCS and MPI approach is the
staff structure. Unlike other EPA contracts where staff were "dedicated" to the Contract,
MPI's staff is not dedicated. Personnel are drawn from MPI offices throughout Region II
and assigned to ARCS projects on an as-needed basis. This allows a cost-effective application
of appropriate skills and talents to ARCS projects on a timely basis. This program is now
expanding to include preremedial work assignments and subcontractors.
With the initiation of the ARCS contract, a Quality Assurance Program Plan (QAPP) was
established which identified the administrative structure, oversight functions, and responsibili-
ties and process of technical and QA reviews. A Quality Assurance Project Plan (QAPjP)
was also established defining how QA functions, activities, and duties were to be
implemented. The QAPP documents the structure. The Program Management Office
(PMO) receives work assignments and administers the Program. Within PMO, the PMO
Quality Assurance Manager oversees the ARCS QA program. Site Managers are located
in different Offices in New York or New Jersey, depending on site location and available
resources. The site manager is responsible for all activities concerning a site. These
activities include establishment of a site specific QAPjP and Field Sampling Plan (FSP) plus
additional documents, when needed, for the site Work Plan. The site manager is
responsible for arranging adequate technical input including technical reviews. Each site is
also assigned a Quality Assurance Officer (QAO) whose duties are to assist the site
manager on QA issues, and review and forward the site QAPjP and FSP to the PMO QA
Office for review. The PMO QA Office reviews the QAPjP and FSP and may return the
documents with written review comments to the site manager. Figure 1 illustrates the
process of review of FSP and QAPjPs. When these comments have been addressed or
resolved, the documents are returned to PMO QA for forwarding to EPA for approval. A
PMO QA office provides coordination, QA reviews, QA training, and conducts and oversees
audits. Within the MPI QA program, each site QAO is responsible for conducting field
audits of FSPs and QAPjPs. The PMO provides oversight audits of QAOs, field audits, and
management system and file audits of the ARCS Program, subcontractors, and components
within the program (i.e., ARCS equipment storage and staging facility, non-CLP lab
activities). The PMO QA also oversees data validation services for project data and may
audit laboratories when non-CLP laboratories are used in projects.
To manage QA activities effectively within the ARCS program, an MPI ARCS Quality
Assurance Procedures manual was established. This QA Manual specifies how QA is to be
administered in the Program; the duties and responsibilities of the PMO QA and site QAO
personnel; and QA standard operating procedures (SOPs) for carrying out QA functions for
the PMO QA Manager and the site QAOs. Table 1 is the table of contents of the QA
Manual and lists the QA SOPs currently in use. QA and technical details are derived
from EPA technical documents (3) whenever possible. A training session was conducted for
site QAOs and site managers when the MPI ARCS QA manual was implemented.
Technical memoranda, reviews, and audits from EPA are distributed to all site managers and
QAOs, and those affecting QA are to be included in the QA manual in a Technical
Memoranda section. In addition, MPI has now initiated a new phase, whereby site
managers convene to discuss status, trends, problems and resolutions of QA and technical
issues within the ARCS program.
The MPI QA Program, to date, has been relatively successful. Problems noted have usually
been the result of short lead times, new personnel (EPA and MPI), normal project
problems, technical complexity of each site, and changing objectives and DQOs. The ARCS
Program has continued to expand to include more sites, PRP oversight assignments, and
preremedial work. These tasks will require more audits and an extension of MPI's QA
Program.
MPI's audits have shown how difficult it has become to incorporate the growing number
of technical advisories, procedures, memoranda, etc., into a manageable Work Plan for a site.
In addition, the extensive paper trail necessary for all operations, field sampling, and sample
booking into the CLP, and the long lead time for booking SAS and RAS, present real
challenges for all involved. As MPI work assignments shift into RD/RA implementation
and preremedial work, so will the focus of QA activities to assure that quality data and
quality work are attained.
CONCLUSION
In conclusion, the MPI ARCS QA Program has provided a workable, cost-effective program
for administering QA activities in the ARCS Program. The key components in the process
are the site QAO, the PMO QA manager, and establishment and use of an MPI ARCS QA
Manual. Use of the Manual, in addition to reviews and audits, provides an effective QA
Program.
REFERENCES
1. U.S.E.P.A., 1990. Long-Term Contracting Study for Superfund. Public. No. 9242.6-07FS.
2. U.S.E.P.A., 1984. "Policy and Program Requirements to Implement the Mandatory
Quality Assurance Program," EPA Order 5360.1, Office of Research and Development.
3. U.S.E.P.A. CERCLA Quality Assurance Procedures Manual, Region II.
4. U.S.E.P.A., 1990. Quality Assurance/Quality Control Guidance for Removal
Activities. EPA Public. No. EPA/540/G-90/004.
5. U.S.E.P.A., 1991. Preparation Aids for the Development of Category I Quality
Assurance Project Plans. EPA Public. No. EPA/600/8-91/003.
6. U.S.E.P.A., 1991. Preparation Aids for the Development of Category II Quality
Assurance Project Plans. EPA Public. No. EPA/600/8-91/004.
7. U.S.E.P.A., 1991. Preparation Aids for the Development of Category III Quality
Assurance Project Plans. EPA Public. No. EPA/600/8-91/005.
8. U.S.E.P.A., 1987. Data Quality Objectives for Remedial Response Activities. EPA
Public. No. EPA/540/G-87/003.
9. U.S.E.P.A., 1988. User's Guide to the Contract Laboratory Program. EPA Public. No.
EPA/540/8-89/012.
10. U.S.E.P.A., 1988. Guidance for Conducting Remedial Investigations and Feasibility
Studies Under CERCLA. EPA Public. No. EPA/540/G-89/004.
11. U.S.E.P.A., 1990. Guidance for Expediting Remedial Design and Remedial Action.
EPA Public. No. EPA/540/G-90/006.
[Figure 1. ARCS II Quality Assurance Document Review System. Flowchart of the
FSP/QAPjP review path among the Review Team Leader and Members, Site Manager,
Site QAO, Operations Manager, PMO QA Manager, and USEPA.]
TABLE 1. MALCOLM PIRNIE QA PROCEDURES MANUAL - ARCS II
TABLE OF CONTENTS
Page
1.0 INTRODUCTION 1-1
1.1 Quality Assurance in Malcolm Pirnie 1-2
1.2 Quality Assurance Project Plans 1-3
2.0 MALCOLM PIRNIE QA PROGRAM 2-1
2.1 Individual QA Task Assignments 2-1
3.0 IMPLEMENTATION/DOCUMENTATION OF QA PROCEDURES 3-1
3.1 Standard Operating Procedures 3-1
3.1.1 Preparation and Format of Standard Operating Procedures 3-1
3.1.2 Preparation and Format of Audit Reports 3-2
3.1.3 Numbering System of SOPs 3-5
4.0 SITE QA OFFICER 4-1
4.1 SOP Preparation and Review 4-1
4.2 Field Sampling and Analysis Plan Preparation 4-1
4.3 QA Project Plan Preparation 4-1
4.4 Data Validation 4-2
4.5 Field Auditing 4-2
4.6 Lab Auditing 4-2
5.0 PMO QA MANAGER 5-1
5.1 SOP Review and Approval 5-1
5.2 Field Sampling Plan - QA Project Plan Review and Approval 5-1
5.3 Auditing - Site QA Officers/Field, File, Program 5-1
LIST OF APPENDICES
Appendix Description
1 Site QA Officer Specific Tasks
2 PMO QA Manager Specific Tasks
3 Stylized Example of a Technical SOP
4 Standard Operating Procedures (SOPs)
5 Technical Advisory Memos
6 QA References
TABLE 1. MALCOLM PIRNIE QA PROCEDURES MANUAL - ARCS II
TABLE OF CONTENTS
(Continued)
SOP Title SOP No.
PMOQA
Procedure for the QA Technical Review of FSP-QAPjP MP-PMOQA-001-9/90
Management Systems - QA Audit MP-PMOQA-002-9/90
Contents of a PMO QA File MP-PMOQA-003-9/90
Procedure for Filing and the Contents of a MP-PMOQA-004-9/90
PMO QA Site Specific File
Procedure for Documenting the MP-PMOQA-005-12/90
Quality Assurance Review of FSPs and QAPjPs
Procedure for Documenting Technical Quality Reviews MP-PMOQA-006-12/90
Procedure for QA Interaction within ARCS II Program MP-PMOQA-007-12/90
Between MPI and CH2M Hill
Basic Procedures for Providing Quality Assurance MP-PMOQA-008-1/91
to Remedial Design and Oversight of Remedial
Design by PRP's
SQAO (Site Quality Assurance Officer)
Procedure for a Technical Systems - Field Sampling MP-SQAO-001-9/90
Audit
Procedure for Completing Field - CLP Paperwork MP-SQAO-002-1/91
A NATIONAL QA STANDARD FOR ENVIRONMENTAL PROGRAMS
FOR HAZARDOUS WASTE MANAGEMENT ACTIVITIES
Gary L. Johnson and Nancy V. Wentworth, Quality Assurance Management Staff
(RD-680), U.S. Environmental Protection Agency, 401 M Street SW,
Washington, DC 20460
ABSTRACT
The clean-up of Federally-owned facilities contaminated by mixtures of
hazardous chemical and radioactive wastes involves critical decisions
based on environmental data. The Federal Government is currently using
several different standards or sets of requirements, including U.S.
Environmental Protection Agency (EPA) guidance for establishing the
quality assurance and quality control (QA/QC) procedures for these sites.
These standards defined the criteria for the QA activities and documenta-
tion required, the content and format of the documentation, and who was
responsible for them. Shortcomings in these standards or requirements
have led to efforts by several Federal groups to develop a uniform,
consistent standard that produces the needed type and quality of
environmental data in a more cost-effective manner. These efforts are
being conducted under the auspices of the American Society for Quality
Control (ASQC) and involve participation by EPA, the Department of Energy
(DOE), Department of Defense (DOD), Nuclear Regulatory Commission (NRC),
and others in the contractor and regulated communities.
This paper describes the progress which has been made toward establishing
a consensus standard and associated requirements for use by DOE, EPA, and
others for hazardous waste management activities. The standard proposes
two distinct but related levels for management and technical activities,
which include, respectively, the organizational structure, policies and
procedures, and roles and responsibilities needed to conduct effective
site operations, and the project-specific Quality Assurance Program
activities necessary to produce the desired data quality. It is expected
that the "harmonization" of QA/QC requirements will significantly improve
the cost-effectiveness of hazardous waste management activities involving
environmental data operations, as well as provide the basis for a needed
revision and expansion of EPA guidance.
INTRODUCTION
The emergence of hazardous waste remediation as one of the principal
environmental issues of the 1980s has resulted in large-scale environmen-
tal sampling and analysis programs to characterize the waste sites and to
select and implement appropriate remedies. Quality assurance (QA)
programs developed in support of the environmental programs have varied
widely in content and in application. As the responsible authority for
implementing environmental regulations, the U.S. Environmental Protection
Agency (EPA) has mandated a QA program for its environmental programs in
EPA Order 5360.1,(1) issued in 1984. This Order directed that all
environmental data operations conducted by or for EPA in support of Agency
decision-making develop and implement an acceptable QA program. Prior to
the Order, Agency requirements were manifested in two documents developed
by the Quality Assurance Management Staff (QAMS): QAMS-004/80,(2) which
discussed QA Program Plans, and QAMS-005/80,(3) which defined QA Project
Plan requirements for individual data collection activities. Both of
these documents were issued in 1980 and became the de facto standard for
EPA QA requirements for all environmental programs.
While the EPA Order establishes specific authorities and responsibilities
for QA, it does so almost entirely within the context of EPA organiza-
tions. Externalization of EPA QA requirements was accomplished largely
through the contracts regulations, found in 48 CFR 15,(4) and the financial
assistance regulations for grants, cooperative agreements, etc., given by
40 CFR 30(5) and 40 CFR 31(6). Both sets of regulations limited the require-
ments for QA essentially to QA Project Plans for individual environmental
data collection activities. The full scope of the EPA Order has not been
extended to the regulated community, and EPA guidance to implement the
Order was not officially distributed outside the Agency.
Moreover, until recently, QA was not generally included in specific
rulemaking by the agency. Air quality regulations have included QA and
quality control (QC) specifications since the 1970s, but the ongoing
rulemaking to incorporate Chapter One into SW-846(7) represents the first
explicit inclusion of QA requirements in a regulation. Because neither
QAMS-004/80 nor QAMS-005/80 has been revised or updated since its
initial issue, the public was not aware of the changes to the EPA QA
program as it has grown and expanded. Consequently, interpretations of
EPA QA requirements became varied as EPA programs, such as RCRA and
Superfund, developed their own expanded requirements based on specific
program needs, and as the ten EPA Regional Offices responsible for implementing
the hazardous waste programs under RCRA and CERCLA developed their own
interpretations of QAMS QA Program Plan and Project Plan guidance and
program office guidance. In the meantime, the general public continued to
use QAMS-004/80 and QAMS-005/80, and to interact individually with
Regional Offices on their interpretation. Consequently, ten years after
issuing QAMS-004/80 and QAMS-005/80, EPA was faced with multiple
interpretations of QA requirements documents which did not reflect the
current vision of QA in the Agency. The need for new QA guidance from
QAMS provided impetus for EPA to examine the experiences of the past ten
years and to develop focused criteria upon which such guidance could be
based for use in the next ten years.
The situation in other Federal agencies was not dramatically different
from that of EPA. The U.S. Department of Energy (DOE) uses NQA-1(8) as its
standard for QA requirements. The Nuclear Regulatory Commission bases its
QA program on 10CFR50, Appendix B, which is identical to NQA-1. The U.S.
Department of Defense has used various MIL Standards as well as NQA-1 in
its Installation Restoration Programs. NQA-1 was developed for nuclear
facilities and its application to environmental programs has been
difficult to accomplish effectively. Moreover, different field offices in
DOE, for example, like their EPA Regional counterparts, have taken
differing interpretations of what in NQA-1 is applicable to environmental
programs.
Currently, DOE, DOD, and other Federal facilities are generally responding
to multiple sets of QA requirements while trying to satisfy CERCLA and
RCRA regulations. Often this has meant preparing two sets of QA
documentation, one to satisfy the EPA Region (usually QAMS-005/80
requirements and format) and one to satisfy the "owner" agency (DOE, DOD,
etc.), which has often meant NQA-1 requirements and format. At the least,
it has meant having to prepare one document which satisfactorily addresses
the expectations of the EPA Region and the "owner" agency. This has
resulted in costly and time-consuming duplication of effort. In addition,
the perception of inconsistent and often conflicting QA requirements has
created confusion and frustration in the regulated community. It became
increasingly clear that action was needed to bring some order to the
process. The staggering cost of clean-up at Federal facilities has
focused considerable public pressure on all agencies involved, including
EPA, to plan and carry out cost-effective clean-ups.
THE HARMONIZATION PROJECT
In the fall of 1989, an initiative was begun which would, as a minimum,
attempt to "harmonize" the varied QA requirements into a single, uniform,
consistent set for application to environmental programs. This effort was
conducted under the auspices of the American Society for Quality Control
(ASQC) and included active participation by EPA, DOE, DOD, NRC, and
representatives of the contractor community. A Work Group and a Policy
Group were formed in the spring of 1990 to pursue the harmonization
effort. The Work Group was composed of experienced QA professionals
representing EPA, DOE, and Federal contractors, who would make the initial
efforts of harmonizing current QA requirements. The Policy Group included
senior officials at EPA, DOE, DOD, and NRC, and senior QA consultants, who
would guide the efforts of the Work Group.
It became apparent very quickly that a new, national consensus standard
would be the most effective way to harmonize the existing QA requirements.
Moreover, those engaged in the harmonization effort were committed to
producing a standard which could encompass the broad scope of environmen-
tal programs without being overly prescriptive. Emphasis was placed on
defining WHAT the requirements should be, not the HOW TO or BY WHOM. This
recognized that a detailed, omnibus standard could not meet the needs of
all Federal agencies. Their missions are too diverse. The Work Group and
Policy Group decided early to retain as much flexibility as possible in
the standard and not try to prescribe format or detailed specifications.
The differing missions and personalities of the organizations would be
served best by allowing them to define detailed requirements for their QA
programs based on the general requirements of this standard. For example,
the standard would require the use of QA Project Plans, but it would not
prescribe the content and format of the plans. This decision would be
left to the implementing organization.
The outline for this proposed standard was presented in September 1990 at
the ASQC Energy Division National Conference in Tucson.(9) Subsequently,
a standard was drafted and is currently undergoing public review and
comment as part of the ASQC standard-setting process. As part of this
process, EPA will develop new guidance to implement the standard across
Agency programs. Included among the new guidance documents planned are
replacements for QAMS-004/80 and QAMS-005/80. This guidance is expected
to be available shortly after acceptance of the formal standard by EPA.
PROPOSED NATIONAL QA STANDARD FOR ENVIRONMENTAL PROGRAMS
Environmental data have an important role in decisions involving the
protection of the public and the environment from the adverse effects of
a variety of pollutants from waste operations and discharges. To assure
that these data are of the appropriate type and quality to support their
intended use, a proposed standard has been developed for environmental
programs.
The proposed standard includes the basic requirements for a Quality
Assurance Program to plan, implement, and assess the effectiveness of
multimedia data operations to characterize environmental processes and
conditions and to design, construct, and operate environmental engineering
systems. Included in this Quality Assurance Program are the necessary QA
and QC activities to assure that technical and quality specifications are
satisfied.
As noted earlier, NQA-1 has been utilized extensively for environmental
programs by several Federal agencies. Many of the fundamental concepts
found in NQA-1 have been incorporated into this standard and, in several
cases, improved. The standard also reflects the current vision of EPA's
Quality Assurance Program as well as numerous Total Quality Management-
based concepts which have gained widespread acceptance.
The proposed standard provides for two distinct levels of requirements to
be addressed:
the organization (or institutional) level, and
the technical/project level
In each distinct level, there are specific QA elements or functions which
must be addressed.
The elements at the organization level include defining the organizational
structure, policies and procedures, and roles and responsibilities for the
activities to be performed. These elements define what must be addressed
in order to establish and manage an effective Quality Assurance Program
for planning, implementing, and assessing effective environmental data
operations. The organization level elements provide a framework or
infrastructure to enable consistent quality procedures across similar
environmental projects. The organizational level also defines the
requirements for necessary management functions to support multiple
technical activities or projects. These include procurement of services
and items, documents and records, use of computer hardware and software,
and operation of analytical facilities and laboratories. The basic
requirements for these functions are assembled into Management Systems,
Part A, and may be viewed as an umbrella under which technical projects
are performed.
The technical or project level consists of two parallel parts within the
framework defined by the organization level requirements. Each part
describes the project-specific Quality Assurance Program activities
necessary to produce the desired type and quality of data -- one part
relates to process or site characterization and the other part relates to
environmental engineering systems. Dividing technical/project operations
into two parts reflects the differences between requirements for
characterizing an environmental process or condition (Part B) and
requirements for designing, constructing, and operating environmental
engineering systems (Part C).
Characterization of Environmental Processes and Conditions, Part B,
contains the basic requirements for planning, implementing, and assessing
operations to collect, analyze, and evaluate chemical, biological,
ecological, or physical data in the environment. This also includes
compiling, modeling, and analyzing environmental processes and conditions
by mathematical or computerized methods. The emphasis here is on planning
and the approach is based largely on EPA's Data Quality Objectives
process.(10) The study design completes the planning phase and the
standard requires that the data operations be implemented as planned and
documented. There is a focus on performance-based objectives for the
study so that a measure of success may be readily determined. During
assessment of the results obtained, it is recognized that results from
environmental data operations may not completely satisfy the performance
objectives. However, the assessment of data usability may enable the data
or part of the data to be used provided that the data user is willing to
accept less confidence in the results and a greater risk in making the
decision for which the data were needed.
Environmental Engineering Systems, Part C, provides the basic requirements
to ensure effective design, construction, and operation of physical
engineering systems, and their components, which remediate environmental
contamination or remove pollutants from multimedia discharges. While
these requirements were not originally within the scope of the proposed
standard, it was recognized that environmental engineering systems may
require rigorous QA activities to assure their safe and effective
operation. EPA guidance on QA for engineering systems is sparse and has
limited application to hazardous waste remediation technologies being
developed today. The requirements for Part C were drawn largely from
NQA-1, since the principal concern driving NQA-1 was the protection of public
health and safety from the operation of nuclear facilities. While the
magnitude of the potential threat is not as great, it is reasonable to
suggest that the inadequate design, construction, or operation of some
remedial technologies could pose health and environmental concerns.
Consequently, the need to include Part C in the proposed standard became
evident.
The Basic Requirements contained in the current draft of the proposed
standard are listed in Table I.
NEXT STEPS AND SUMMARY
The standard-setting process is under way within the ASQC. Public comment
on the proposed standard has been invited. While it is not possible to
estimate a completion date for the standard at this time, the outlook is
optimistic.
The "proof" of the value of this proposed standard lies largely in its
acceptance and implementation by Federal agencies. Here again, the
outlook is promising. By involving key senior QA officials from the major
agencies in the development of the proposed standard, many issues have
already been addressed which otherwise would have posed serious barriers
to acceptance of the standard.
The value of this standard to future hazardous waste management activities
is severalfold. First, the QA requirements of Federal agencies would be
the same. Some programmatic differences may still occur between RCRA and
CERCLA, for example, but essentially the QA "playing field" would be
level. The consistency of a national consensus standard also opens new
opportunities for increased standardization in other areas, such as field
methods, analytical procedures, and data validation and verification
processes. The standard provides a basis for increased cooperation among
the Federal agencies conducting waste remediation activities and for the
sharing of ideas and experiences. Given the finite resources available
and the magnitude of the clean-up job ahead, no one can afford to re-
invent the wheel. The cost savings that will result from this proposed
standard are difficult to estimate, but they could be substantial.
EPA assumes that an acceptable national consensus standard for QA will
emerge from ASQC. Preparations for adopting and implementing the standard
across EPA programs are under way, including the development of a
comprehensive set of new QA guidance documents. The transition to a new
standard probably will not happen quickly. In time, some programs will
likely recognize the added value and benefits of revising their QA
programs to reflect the standard and the new guidance. Not all programs
will have to be revised or to have new QA Management Plans (QAMPs)
prepared, while others may need more extensive changes.
This standard will help to forge new partnerships among Federal agencies
engaged in hazardous waste management activities and to foster greater
efficiency and effectiveness in future work.
REFERENCES
(1) EPA Order 5360.1, Program and Policy Requirements to Implement
the Mandatory Quality Assurance Program. U.S. Environmental
Protection Agency, 1984.
(2) Interim Guidelines and Specifications for Preparing Quality
Assurance Program Documentation. U.S. Environmental Protec-
tion Agency, QAMS-004/80, 1980.
(3) Interim Guidelines and Specifications for Preparing Quality
Assurance Project Plans. U.S. Environmental Protection
Agency, QAMS-005/80, 1980.
(4) EPA Acquisition Regulations. U.S. Environmental Protection
Agency, 48 CFR 15, 1984.
(5) Financial Assistance Requirements. U.S. Environmental
Protection Agency, 40 CFR 30, 1983.
(6) Financial Assistance Requirements. U.S. Environmental
Protection Agency, 40 CFR 31, 1988.
(7) Test Methods for Evaluating Solid Waste, Physical/Chemical
Methods (3rd Ed.), SW-846. U.S. Environmental Protection
Agency, Federal Register 55(27), February 8, 1990.
(8) ASME NQA-1, Quality Assurance Program Requirements for Nuclear
Facilities, 1989 Edition. ASME, September 1989.
(9) Blacker, Stanley M. Harmonization of Quality Assurance First
Focus: NQA-1 and QAMS. Proceedings, 17th Annual National
Energy Division Conference, Tucson, September 1990.
(10) Draft Information Guide on Data Quality Objectives. U.S.
Environmental Protection Agency, 1986.
TABLE I
QUALITY ASSURANCE PROGRAM
REQUIREMENTS FOR
ENVIRONMENTAL PROGRAMS
Part A. Management Systems
1 Management Commitment and Organization
2 Quality Assurance Program
3 Personnel Training and Qualification
4 Management Assessment
5 Procurement of Services and Items
6 Documents and Records
7 Use of Computer Hardware and Software
8 Work Processes and Operations
9 Quality Improvement
Part B. Characterization of Environmental Processes and Conditions
1 Planning and Scoping
2 Design of Data Collection Operations
3 Implementation of Planned Operations
4 Quality Assessment and Response
5 Assessment of Data Usability
Part C. Design, Construction, and Operation of Environmental Engineering
Systems
1 Planning
2 Design of Environmental Engineering Systems
3 Implementation of Engineering Systems Design
4 Inspection and Acceptance Testing
5 Operation of Environmental Engineering Systems
6 Quality Assessment and Response
THE IMPACT OF CALIBRATION ON DATA QUALITY
Richard G. Mealy - Supervisor, Quality Assurance
Kim D. Johnson - Manager, Analytical Laboratory
Warzyn, Inc., 1 Science Court, Madison, Wisconsin 53705
ABSTRACT
The basic functional elements of the typical Quality Control (QC) program are the QC samples
used to evaluate control of the analytical process. The ability to meet acceptance criteria
associated with QC samples, as well as the acceptance criteria themselves, is directly influenced
by the calibration process. Despite the critical role that calibration plays in the generation of
accurate data, it is the aspect of environmental analysis that is controlled the least, based on a
comparison of the calibration process established in the common regulatory programs.
As the requirements for litigation-quality data become more restrictive, the calibration process
will need to be re-evaluated and subjected to more rigorous controlling measures. This paper
discusses the key aspects of the calibration process, and the strengths and weaknesses associated
with each. Without more control, it will be difficult to achieve data comparability between
laboratories using the same referenced methods.
INTRODUCTION
The calibration process represents the initial controlling mechanism for the generation of quality
data, yet there is a general lack of guidance regarding specific evaluation techniques for this
process. One of the drawbacks of providing such little guidance is the potential loss of data
comparability, one of the chief Data Quality Objectives identified by the EPA.
This paper examines several critical aspects of the calibration process, and identifies those
features that, if overlooked, can significantly impact the quality of the data generated. Initially, a
comparison of calibration processes, as outlined in the various regulatory programs, is presented.
In order to provide a more focused scope for this paper, the discussion is limited to the impact on
methods for the analysis of Volatile Organics, Pesticide/PCBs, and Semi-volatile Organics. The
concepts presented, however, can be extended to other organic and inorganic analytical methods
as well.
It is important to note that some of the issues raised in this paper have been addressed in
regulatory programs that were not evaluated specifically for this paper. The USATHAMA(5)
program, in particular, has incorporated a requirement that calibration data be subject to
statistical tests for both Zero Intercept and Lack of Fit, which serve to resolve some of the
problems associated with non-linear data and calibration intercepts. These problems are
discussed in detail in this paper.
COMPARISON OF REGULATORY APPROACHES TO CALIBRATION
There exists a great deal of difference in the calibration protocols and requirements of the key
regulatory programs, including the 500(1) and 600(2) series of EPA methods, those published in
SW-846,(3) and the Contract Laboratory Program.(4) A summary of calibration requirements of the
various regulatory methodologies is presented in Table 1.
In general, the 600 series of methods offers the least amount of guidance, and thus is the most
open to individual interpretation. More recent revisions to the 500 series of methods for analyses
conducted under the SDWA program introduce several new requirements that provide greater
control over the accuracy of the resultant calibration. As Table 1 indicates, wide variation exists
in the number of calibration standards required both within and across the series of regulatory
protocols. One particularly important assumption that the 500, 600, and 8000 series all share in
common relates to curve linearity. In each of these methods, if the %RSD of response factors
associated with calibration standards is within certain criteria (10-35%), "then linearity through
the origin can be assumed." Clearly, there are widely ranging views regarding when the intercept
of a calibration curve deviates significantly from the origin. In keeping with the goal to establish
data comparability, there is a need to consider the incorporation of a statistical technique to
provide an objective means of determining whether a particular set of data essentially has a zero
intercept.
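One candidate for such a technique, sketched below in Python as an illustration rather than as the USATHAMA procedure itself, is a two-sided t-test of the fitted intercept against zero, which flags calibrations whose intercepts differ from the origin by more than their estimated uncertainty.

    import numpy as np
    from scipy import stats

    def intercept_t_test(x, y, alpha=0.05):
        # Fit y = m*x + b by ordinary least squares, then test H0: b = 0
        # by comparing the intercept with its standard error.
        x, y = np.asarray(x, float), np.asarray(y, float)
        n = len(x)
        m, b = np.polyfit(x, y, 1)
        s2 = np.sum((y - (m * x + b)) ** 2) / (n - 2)         # residual variance
        sxx = np.sum((x - x.mean()) ** 2)
        se_b = np.sqrt(s2 * (1.0 / n + x.mean() ** 2 / sxx))  # std. error of intercept
        p = 2 * stats.t.sf(abs(b / se_b), n - 2)
        return b, p, p > alpha          # True: a zero intercept is statistically plausible

    # Sample Data Set #1 (see Table 2A)
    print(intercept_t_test([1, 2, 5, 10, 50],
                           [65000, 140000, 365000, 680000, 2250000]))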
In the event that %RSD criteria cannot be achieved, 3 of the 4 programs allow the user to simply
prepare a calibration "curve" from concentration vs. instrument response. Unfortunately, there
are no requirements for the type of curve algorithm (linear regression, polynomial fit, etc.)
allowed.
As cleanup criteria continue to evolve, this variability between the different regulatory protocols
can have significant, adverse impact on the comparability of data generated by laboratories. Due
to either regional or site specific preferences, analytical programs can be based on methodologies
from any of these programs. While each of the programs is considered to be designed to produce
quality analytical data, the differences between the calibration protocols will result in significantly
different data quality.
In order to provide more control over the calibration process, each element of the process must
be considered so that the most appropriate combination of elements is employed. The basic
"parts" of the calibration are shown below:
• number of calibration levels
• calibration algorithm
• calibration levels
• calibration acceptance criteria
• effect of "curve-smoothing" routines
The remainder of this paper focuses on a detailed analysis of each of these elements. In particular,
those aspects that potentially lead to inaccurate or biased data are discussed. In addition, the
areas of the methods that are open to interpretation, or require further guidance are identified.
NUMBER OF CALIBRATION LEVELS
Essentially, as the number of calibration levels increases, the relative risk is reduced, as a better
picture of the analyte's performance is obtained. The analytical run-time is also an important
consideration in determining the number of levels to employ. For analyses with a relatively short
analysis time, such as the majority of inorganic parameters, additional calibration levels do not
represent a burden to production. This is not the case, however, for most organic analyses, with
routine run times of 40 to 60 minutes. Laboratories are engaged in a constant struggle between
quality and production. While an increased number of calibration levels would certainly improve
the quality of the data, this is not always possible. The implications of establishing calibration
"curves" with a minimum of data points are brought to light in the next few sections.
CALIBRATION ALGORITHM
While most laboratories default to the standard (least squares) method of linear regression
analysis to develop a calibration algorithm, a wide array of non-linear calibration technique
options is available. These options, including polynomial fits, exponential and power curves,
segmented fits, and even specific manufacturer options, are routinely provided as part of the
software bundled with instrument data stations.
The most common approaches to quantitation use an average response factor (e.g., in GC/MS),
single point quantitation (multi-component analytes such as PCBs), and multiple point
calibration "curves". Of these approaches, single point quantitation has the greatest potential
for inaccuracy because the response from the single standard analyzed is deemed to be
representative of the linearity of the analyte in question. There are two sources of error in single
point quantitation. First, any error in the preparation or concentration of the standard will
directly affect quantitation. In addition, if the level chosen actually represents a significantly non-
linear portion of the relationship between response and concentration, substantial bias will be
introduced.
The use of an average response factor is designed to normalize differences in response factors
over the calibration range. The drawback to this approach is that if only a single response factor
deviates significantly from the others, the bias is normalized by distributing an equivalent degree
of bias in the opposite direction over the other standards.
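The effect is easy to demonstrate with Sample Data Set #1 (Table 2A), where the depressed response factor at the 50-unit level pulls the mean down and pushes compensating positive bias onto the lower standards; a minimal sketch in Python:

    import numpy as np

    conc = np.array([1, 2, 5, 10, 50], dtype=float)
    resp = np.array([65000, 140000, 365000, 680000, 2250000], dtype=float)

    rf = resp / conc                       # 65000, 70000, 73000, 68000, 45000
    mean_rf = rf.mean()                    # 64200
    rsd = 100 * rf.std(ddof=1) / mean_rf   # 17.3%, as reported in Table 2A
    print(round(mean_rf), round(rsd, 1))

    # Quantitating each standard with the mean RF reproduces the RF bias
    # column of Table 4: +1%, +9%, +14%, +6%, -30%.
    print(np.round(100 * (resp / mean_rf - conc) / conc))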
Non-linear data require more advanced statistical treatment. Typically, regressions of a higher
order, quadratic equations, or polynomial fits of the data are employed. The main precaution
associated with these techniques is the minimum number of data points required. Because the
minimum number of points required to define a line is two, a linear regression (1st order
polynomial fit) actually requires a minimum of 3 data points to be meaningful. Similarly, with
each higher order equation, one more data point is required. As with a simple linear regression,
the correlation coefficient must not be used as a measure of linearity. The correlation coefficient
only provides a measure of how well the data points fit the equation generated. Finally, as the
degree of non-linearity increases, the curve of a 2nd or 3rd order polynomial becomes parabolic
(Figures 1B, 2B). This results, at the upper end of the curve, in two solutions for a given data
point. Unless the actual curve is carefully evaluated, the analyst may not even be aware that
multiple solutions are possible for a given response. The consequence associated with this type of
situation is that significantly inaccurate data could be reported.
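For Sample Data Set #1, the 2nd order fit in Table 2A peaks near X = 63, so any response below that maximum maps back to two candidate concentrations; a short sketch of the check (illustrative only):

    import numpy as np

    # 2nd order fit from Table 2A: Y = -599.1*X^2 + 75069*X - 5816.7
    a, b, c = -599.1, 75069.0, -5816.7

    def candidate_concentrations(y):
        # Solve a*X^2 + b*X + (c - y) = 0; keep real, positive roots.
        roots = np.roots([a, b, c - y])
        return sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)

    # The response measured for the 50-unit standard has two solutions:
    print(candidate_concentrations(2250000))   # ~[50.0, 75.3]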
Essentially, a linear regression results in the equation for a straight line, whereas polynomial fits
above the 1st order will result in the equation for a curve. The most recent versions of the 524.1,
524.2, and 525 GC/MS methods for the analysis of volatile and semi-volatile organics specifically
allow the use of 2nd or 3rd order regression equations if the response factor criteria cannot be
met. Figure 3 shows the curves that are associated with a linear fit as well as polynomial fits of
orders 2 through 5 for Sample Data Set #1. Note, in particular, the significant differences in the
curve fit to the data in the region between points D and E. If these higher-order curves are used
for the Sample Data Set, serious inaccuracies would result at the upper range of the curve -- the
recommended range for sample quantitation.
While, in specific cases, each of these statistical manipulations of calibration data can provide a
"better fit" of the calibration equation to the data, they can also have significant impact on the
quality of the data generated. Essentially, with the number of statistical programs readily
available, an equation can be found that will provide a mathematical solution (i.e. "fit") to any
set of data. Consequently, without a complete understanding of the actual effect on the raw data,
none of these statistical techniques should be used in the generation of data for regulatory
compliance.
CALIBRATION LEVELS
The specific levels that are selected for calibration can have a significant impact on the validity of
the calibration equation. Calibration levels should be established based on consideration of (1)
the range of the levels, (2) the reportable detection limit, and (3) the linear range of the
analyte(s). The majority of the regulatory programs reviewed provide little guidance with respect
to the range of calibration levels. A generic statement is provided that indicates that the levels
selected should be based on the expected range of sample results. In some cases, the
"expected" sample concentrations exceed the working linear range of the detector. In the
interest of obtaining accurate results, it is more important to define the linear range of the
analyte and/or instrument, and dilute sample concentrations that exceed this range.
A wide calibration range, based on only a few calibration levels, will nearly always result in a
correlation coefficient greater than 0.995, which is frequently used as the sole calibration
evaluation criterion. In the example of Sample Data Set #2, the linear regression calculated
from all 5 data points yields a correlation coefficient of 0.963. If only the first two and the
uppermost data points are used (10, 20, and 200) however, the correlation coefficient is increased
to 0.999. This is a consequence of the derivation of the correlation coefficient.
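Both values are easy to reproduce from Sample Data Set #2 (Table 2B); a minimal sketch in Python:

    import numpy as np

    x = np.array([10, 20, 50, 100, 200], dtype=float)
    y = np.array([650000, 1400000, 3650000, 6800000, 9000000], dtype=float)

    print(np.corrcoef(x, y)[0, 1])              # all 5 points:  r = 0.963
    keep = [0, 1, 4]                            # 10, 20, and 200 only
    print(np.corrcoef(x[keep], y[keep])[0, 1])  # 3 points:      r = 0.999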
The relative difference between the concentration of the low level standard and the reportable
detection limit is critical to providing confidence in the accuracy of low level measurements. Bias
is more pronounced as the calibration curve approaches the detection limit for a particular
analyte. Consequently, if the low level standard is significantly greater than the detection limit,
then accuracy in the proximity of the detection limit is compromised, because linearity of
response has not been evaluated in this region. Ultimately, the detection limit itself may come
into question. While the majority of the regulatory methods specify that the low level standard
must be prepared at a concentration "near, but above, the detection limit", methods 524.1 and
524.2 allow the low level standard concentration to be as much as 10 times higher than the
detection limit.(1)
Finally, analytes have detector-specific linear ranges. In order to accurately evaluate non-linear
regions of the curve, there must not be a significant difference between the uppermost standard
(X) and the (X-1) concentration level. The consequence of not considering this in a calibration
is that the user may fail to identify a parabolic curve. This is one of the consequences that can result
from establishing calibration levels based solely on the expected concentration range of the
samples.
Recent revisions to the 500 series of methods represent the first attempt (other than the CLP
program, where calibration levels are contractually defined) to provide stronger guidance
regarding the calibration range. Methods 524.1 and 524.2 require at least 3 calibration levels to
encompass a factor of 20 calibration range (e.g., 1 to 20 or 10 to 200). In addition, at least 4
standards are required to cover a range of a factor of 50, and at least 5 standards are required for
a range factor of 100.
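Read as a rule, this requirement can be captured in a few lines; the sketch below is an interpretation (the behavior between the stated range factors is assumed, not method text):

    def minimum_standards(range_factor):
        # Methods 524.1/524.2, as summarized here: at least 3 standards for a
        # factor-of-20 range, 4 for a factor of 50, and 5 for a factor of 100.
        if range_factor <= 20:
            return 3
        if range_factor <= 50:
            return 4
        if range_factor <= 100:
            return 5
        raise ValueError("range factor above 100: dilute or split the range")

    print(minimum_standards(200 / 10))   # a 10-to-200 range needs 3 standards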
CALIBRATION ACCEPTANCE CRITERIA
Once a calibration has been performed, there must be a set of criteria to determine if the curve is
acceptable for use in generating analytical results. This is one of the key weaknesses in the
published regulatory methodology. With the exception of the CLP program, the referenced
regulatory methods have only established acceptance criteria if the mean response factor is to be
used for quantitation. The alternative, if %RSD criteria (relative standard deviation of response
factors from the calibration curve) cannot be achieved, is to simply generate a plot of
concentration vs. response or response factor. This allows the generation of data without control
of data quality until the analysis of the first continuing calibration standard, where a limited
measure of control is obtained. In addition to %RSD criteria, the CLP program has established
minimum response factor criteria for most analytes. This requirement is associated with
confidence in the ability to detect the analyte, however, rather than in quantitation of the analyte.
As indicated in Table 1, even the acceptance criteria associated with continuing calibration
verification (CCV) offer little assurance of accurate quantitation. The most stringent CCV
acceptance criteria are found in method 502.2, which requires the analysis of a midpoint standard
to yield a response within ±20% of that obtained for the same standard in the initial calibration.
In addition, this method requires the analysis of a laboratory fortified blank (LFB) per batch of
20 or fewer samples, fortified at a concentration of 20 ug/L.
For a set of data which is essentially linear, the mathematical basis of a linear regression attempts
to establish the midpoint of the curve as the point which deviates least from the linear equation.
The extent of the deviation then increases at the extremes. The deviation is absolute rather than
relative to concentration, which creates the greatest impact at the lower end of the curve. Due to
the magnitude of response associated with the highest calibration level, the relative effect is
minimal. In the case of strongly non-linear data, such as that in Sample Data Set #2, the point at
which the curve becomes non-linear (in this case, the upper calibration level) is central in the
minimization of deviation from the curve. This effect is evident in Table 4, which indicates that
relatively minimal bias occurs in the upper calibration level, even considering such non-linear
data.
The relationship between bias and concentration has its greatest impact on the continuing
calibration verification process (CCV). The concentration of the CCV is typically equal to the
midpoint concentration of the initial multi-point calibration. With linear calibrations (typically
the norm), the midpoint level is associated with the least degree of bias from the plot of the
calibration equation. Consequently, if the overall error of the analysis is less than 20%, there
is a significant probability that the acceptance criteria for the CCV and the laboratory fortified
blanks can be met.
The correlation coefficient ("r") is the most commonly used statistical measure of calibration
acceptability. One longstanding misconception is that this parameter also provides a measure of
linearity. The correlation coefficient is a measure of the "goodness of fit" of a series of data
points. Basically, the correlation coefficient can be viewed as a mathematical process which
determines the tightest ellipse that defines a set of data. The more the ellipse resembles a
straight line, the higher the "r" value (to a maximum of 1.00). The more the data appear to be
randomly distributed, or the ellipse appears more as a circle, the lower the "r" value (to a
minimum of 0). This effect is illustrated in Figure 4. Consequently, even a particular random set
of data can result in a high "r" value if the data range is such that the data can be described by a
tight ellipse.
Calibration acceptance criteria should be designed to evaluate the relationship between the
intercept of the calibration equation and the reportable detection limit (RDL). The data in
Tables 2A and 2B, for Sample Data Sets 1 and 2, show significant negative bias at the low end of
the calibration. If, as required by most of the regulatory methods, the low level standard is just
slightly greater than the actual RDL, then the RDL would clearly not be valid for these
calibration sets. One requirement that should be imposed on calibration data is that the x-
intercept (expressed as concentration) should be no greater than 50% of the RDL. This will
serve to minimize the reporting of low level false positive results.
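A sketch of this proposed check follows; the use of the absolute value of the intercept and the example RDL of 1 concentration unit are assumptions made for illustration:

    import numpy as np

    def x_intercept_ok(x, y, rdl):
        # Fit the calibration line; express its x-intercept as a concentration,
        # and require that it not exceed 50% of the reportable detection limit.
        m, b = np.polyfit(np.asarray(x, float), np.asarray(y, float), 1)
        x_int = -b / m
        return x_int, abs(x_int) <= 0.5 * rdl

    # Sample Data Set #1 with a hypothetical RDL of 1 unit: the large intercept
    # (110840) puts the x-intercept near -2.6, so the criterion fails.
    print(x_intercept_ok([1, 2, 5, 10, 50],
                         [65000, 140000, 365000, 680000, 2250000], rdl=1.0))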
One final consideration in the evaluation of calibration data is the bias at
each calibration level that results from obtaining a concentration from the calibration equation
using the actual raw calibration data. The software in use today provides graphic representations
of the calibration data, but the plots are typically too small and the resolution too poor to be used
to accurately evaluate point-specific bias.
Each of the generally accepted calibration evaluation mechanisms should be considered no more
than a single data assessment tool, rather than an absolute indicator of calibration acceptability.
For example, the correlation coefficient, used frequently in the inorganic arena, can provide
misleading information if there is a significant range between the uppermost and lower
calibration levels.
EFFECT OF CURVE "SMOOTHING" ROUTINES
With the advent of powerful software routines and instrument data stations, the analyst is now
provided with a series of tools that can be used to "smooth" the fit of any curve. While these
techniques certainly are not an element of the calibration process, their use is rapidly becoming
routine. High-powered calibration algorithms are most often used without an understanding of the
mathematical functions behind them, the limitations on their use, or their impact on a
particular data set. For this reason, these techniques are discussed here.
One of the most routine software options available is that of "forcing" the curve through the
origin. Theoretically, a blank should yield no response for a particular analyte. Due to signal-to-
noise considerations, however, this is rarely the case. Nevertheless, many analysts have been trained
that a curve should pass through the origin, and thus this option is selected. There are two ways
in which curves can be forced through the origin. The first is a simple mathematical formula
designed to result in a slope and zero-intercept. The other option is a manual one, which is based
on the repetitive inclusion of (0,0) data points until the curve is eventually forced through the
origin.
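Both routes can be sketched in a few lines of Python (illustrative only):

    import numpy as np

    x = np.array([1, 2, 5, 10, 50], dtype=float)
    y = np.array([65000, 140000, 365000, 680000, 2250000], dtype=float)

    # Route 1: closed-form zero-intercept least squares.
    slope = np.sum(x * y) / np.sum(x * x)

    # Route 2: pad the data with (0, 0) points until the fitted intercept
    # is dragged toward the origin.
    xz = np.concatenate([x, np.zeros(1000)])
    yz = np.concatenate([y, np.zeros(1000)])
    m, b = np.polyfit(xz, yz, 1)

    print(slope)    # ~46186
    print(m, b)     # slope near 46186; intercept collapses from ~110840 toward 0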
Curve "weighting" techniques are often used to obtain a better fit of the data points at either
extreme of the calibration range. Typically, the low end of the curve is susceptible to poor fit of
the calibration equation. The most common weighting routine employed to improve the fit is a
1/X manipulation of the data. Basically, each data point is weighted by a factor of the inverse of
the associated concentration (for Sample Data Set #1, this is equivalent to replicating the 1, 2,
5, 10, and 50 levels 50, 25, 10, 5, and 1 times, respectively, so the original 5 point curve
becomes a 91 point curve). The results of this weighting are summarized in Table 3. The
table indicates that a significantly better fit is achieved at the low end of the curve while affecting
the midpoint and upper range only minimally.
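A weighted least squares fit with weights of 1/X reproduces both columns of Table 3; a minimal sketch:

    import numpy as np

    x = np.array([1, 2, 5, 10, 50], dtype=float)
    y = np.array([65000, 140000, 365000, 680000, 2250000], dtype=float)

    def weighted_lsr(x, y, w):
        # Least squares line minimizing sum(w * (y - (m*x + b))**2).
        W = np.sum(w)
        xb, yb = np.sum(w * x) / W, np.sum(w * y) / W
        m = np.sum(w * (x - xb) * (y - yb)) / np.sum(w * (x - xb) ** 2)
        return m, yb - m * xb

    for label, w in [("LSR", np.ones_like(x)), ("LSR (1/X)", 1.0 / x)]:
        m, b = weighted_lsr(x, y, w)
        print(label, np.round((y - b) / m, 1))  # back-calculated concentrations
    # LSR       [-1.1   0.7   5.9  13.1  49.4]
    # LSR (1/X) [ 0.4   2.0   6.7  13.2  45.7]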
While each of these techniques results in a better fit of the data points to the calibration
equation, they remain little more than data manipulation techniques. In the generation of
environmental data, analysts must be trained to understand that the use of these techniques can
result in misinterpretation of the data.
SUMMARY
In a regulatory climate that is increasingly concerned with Quality Assurance, most data quality
assessments remain reactive in that they rely on quality control information generated during the
course of analysis, rather than prior to the analysis of environmental samples. The calibration
process should be viewed as the initial opportunity to assess the quality of data to be generated.
Consequently, there is a need for more structure and guidance in the evaluation process in order
to provide analytical methods which ensure data comparability.
REFERENCES
1. U.S. EPA, 1988. Methods for the Determination of Organic Compounds in Drinking Water.
EPA/600/4-88/039. December, 1988.
2. U.S. EPA, 1985. Methods for the Organic Chemical Analysis of Municipal and Industrial
Wastewater. 40 CFR Part 136, Appendix A. July, 1985.
3. U.S. EPA, 1990. Test Methods for Evaluating Solid Waste. Volume 1B: Laboratory Manual,
Physical/Chemical Methods. SW-846. 3rd ed., Revision 1. November, 1990.
4. U.S. EPA, 1990. Contract Laboratory Program, SOW 390. Statement of Work for Organic
Analyses, Multi-Media, Multi-Concentration. March, 1990.
5. USATHAMA, 1990. United States Army Toxic and Hazardous Materials Agency. Quality
Assurance Program. USATHAMA PAM11-41, rev. 0. January, 1990.
Table 1: Comparison of regulatory method requirements for various aspects of the
calibration process.

No. of standards
    500:   3 to 5 (5 recommended); 1 point allowable with criteria; 6 pre-set (525)
    600:   3 minimum
    8000:  5 minimum
    CLP:   5 (general GC/MS); 4 (8 semivolatiles); 3 (pesticides); 1 (multicomponent analytes)

Low standard
    500:   near, but above, EDLs; 2-10x MDL
    600:   each analyte: near, but above, MDL
    8000:  each analyte: near, but above, MDL
    CLP:   contractual requirements

Calibration range
    500:   by range factor: 20: 3 minimum; 50: 4 minimum; 100: 5 minimum
    600:   expected range of samples; detector range
    8000:  expected range of samples; detector range
    CLP:   contractual requirements

Initial calibration: requirement to use mean RF
    500:   %RSD <10% (502); <20% (508, 524); <30% (525)
    600:   %RSD <10% (602, 608); <35% (624, 625)
    8000:  %RSD <20%
    CLP:   generally, %RSD <20.5% [GC/MS] or <10-15% [GC]; no %RSD criteria for
           20 semivolatiles and 10 volatiles

Initial calibration: alternative to RF
    500:   generate a plot of peak height or area response vs. concentration; no acceptance criteria
    600:   generate a plot of peak height or area response vs. concentration; no acceptance criteria
    8000:  generate a plot of peak height or area response vs. concentration; no acceptance criteria
    CLP:   use mean RRF

Continuing calibration frequency
    500:   daily (502, 508); every 8 hours (524, 525)
    600:   once daily
    8000:  once daily; more frequently for ECD methods
    CLP:   every 12 hours

Continuing calibration acceptance criteria
    500:   % of initial standard response: ±30% (524, 525); ±20% (502, 508)
    600:   % of initial standard response: 15% (608); 20% (625); analyte-specific
           QC check standard (601, 602, 624)
    8000:  standard response within ±15% of initial response
    CLP:   generally, maximum %D = 25% (for most analytes)
Table 2A: Sample Data Set #1

      X          Y        RF
      1     65,000    65,000
      2    140,000    70,000
      5    365,000    73,000
     10    680,000    68,000
     50  2,250,000    45,000

    RSD of RFs = 17.3%
    LSR:       Y = 43320X + 110840
    2nd Order: Y = -599.1X^2 + 75069X - 5816.7

Table 2B: Sample Data Set #2

      X          Y        RF
     10    650,000    65,000
     20  1,400,000    70,000
     50  3,650,000    73,000
    100  6,800,000    68,000
    200  9,000,000    45,000

    RSD of RFs = 17.3%
    LSR:       Y = 44071X + 950580
    2nd Order: Y = -238.2X^2 + 94494X - 356730

Table 3: Calculated X values for Sample Data Set #1 using both linear regression
(LSR) and LSR weighted 1/X.

      X     LSR    LSR (1/X)
      1    -1.1     0.4
      2     0.7     2.0
      5     5.9     6.7
     10      13      13
     50      49      46

Table 4: Summary of the bias observed in Sample Data Set #1 data using different
quantitation techniques.

                                 <----- Single Point ----->
      X     LSR    PF2     RF      Mid      Low     High
      1   -191%     6%     1%      -4%       0%      44%
      2    -60%    -2%     9%       3%       8%      56%
      5     19%    -1%    14%       7%      12%      62%
     10     31%     0%     6%       0%       5%      51%
     50     -2%     0%   -30%     -34%     -31%       0%

    RF  = Average Response Factor
    LSR = Least Squares Linear Regression
    PF2 = 2nd Order Polynomial Fit
[Figure 1A: Sample Data Set #1, Linear Regression Plot. y = 1.1084e+5 + 4.3320e+4x; r = 0.995.]

[Figure 1B: Sample Data Set #1, 2nd Order Polynomial Fit. y = -5816.7 + 7.5069e+4x - 599.10x^2; r = 0.999.]

[Figure 2A: Sample Data Set #2, Linear Regression Plot. y = 9.5058e+5 + 4.4071e+4x; r = 0.963.]

[Figure 2B: Sample Data Set #2, 2nd Order Polynomial Fit. y = -3.5673e+5 + 9.4494e+4x - 238.19x^2; r = 0.999.]
[Figure 3: Plot of calibration curves generated from various algorithms (linear regression and 2nd through 5th order polynomial fits) using data from Sample Data Set #1.]

[Figure 4: Illustration of the relationship between the range of data points and the correlation coefficient (r); the panels contrast poor correlation (r < 0.800) with excellent correlation (r > 0.995).]
PROFICIENCY EVALUATION SAMPLE PROGRAM FOR SOLID WASTE
ANALYSIS: A PILOT PROJECT
DAVID E. KIMBROUGH, PUBLIC HEALTH CHEMIST II, AND JANICE WAKAKUWA, SUPERVISING CHEMIST,
CALIFORNIA DEPARTMENT OF HEALTH SERVICES, SOUTHERN CALIFORNIA LABORATORY, 1449 W.
TEMPLE STREET, LOS ANGELES, CALIFORNIA 90026-5698.
ABSTRACT
The use of Proficiency Evaluation (PE) samples for laboratory evaluation is an accepted practice for clinical,
industrial hygiene, and drinking water chemistry laboratories. It has not been systematically applied to the
analysis of solid waste by environmental laboratories. This paper discusses the theoretical and practical
considerations involved in preparing a pilot PE sample program. The project was designed to assess the
types and frequency of laboratory error, completeness of data packages, and to identify logistical and
tracking problems that might occur in an ongoing PE program.
Two sets of PE samples were distributed among 319 environmental laboratories accredited by the
Environmental Laboratory Accreditation Program (ELAP) for either PCB or elemental analysis. One set
consisted of five soils spiked with Aroclor 1260, and the other set of five soils spiked with arsenic, cadmium,
molybdenum, selenium, and thallium.
This project is an attempt to evaluate the competence of the environmental laboratory industry providing
services in the state of California. The data and statistical analysis for this study are presented here.
TECHNICAL DATA REVIEW - THINKING BEYOND QUALITY CONTROL
Kim D. Johnson - Manager, Analytical Laboratory
Richard G. Mealy - Quality Assurance Supervisor
Warzyn Inc., 1 Science Court, Madison, Wisconsin 53711
ABSTRACT
Considerable attention has been given to the concept of quality assurance (QA) during sampling,
laboratory testing, and data reduction; however, few aspects of quality assurance have been
incorporated into the technical review of data that will be used to make critical project decisions.
This paper focuses on the QA aspects of technical review and presents an alternative to the
traditional multi-tiered data review process. In addition, techniques and methods used to perform
overall data assessment are described. The process includes:
1. Quality assurance in data processing
2. Technical review of data
3. Evaluation of trends and anomalies
4. Comparison of historical project information
The use of computer tools and relational databases can streamline the overall process.
INTRODUCTION
Most laboratories have implemented a multi-tier data review process as part of their internal
Quality Assurance/Quality Control (QA/QC) program. Typically, the review moves up the
management chain, beginning with activities such as simple math checks, passing into verification
of QC sample frequency and acceptance criteria (which are usually well defined) and then,
perhaps, some limited "technical" review of the data.
As environmental testing moves into the next phase of quality assurance, it is becoming evident
that analyzing increasing numbers and types of QC samples does not, in and of itself, ensure that
quality data are being reported. QC samples are routinely subject to more intense scrutiny due to
their very nature, and the implications that they present to managers, clients, and the end users of
the data. Simply put, errors are not being made on the QC samples themselves, because these
samples are subjected to very specific acceptance criteria. Unfortunately, routine environmental
samples rarely receive this level of effort.
With errors comes legal liability. The analytical data generated by laboratories are used to
determine regulatory compliance of industry. The implications of not providing quality data that
can withstand the challenge of a court of law are forcing the analytical laboratory industry to
improve its data review process. Even data validation done under the Contract Laboratory
Program does not sufficiently provide for review of data in the technical sense; it merely
contractually provides for review of specific criteria.
This paper presents an alternative to the traditional approach to data review. Described are
specific techniques and examples that can be used to focus on the technical aspects of the data
review process, and the deductive rationale that must be applied in order to understand the "big
picture" needed for sound project decisions.
THE TECHNICAL REVIEW PROCESS
Laboratory analyses are critical in determining project direction; therefore, the reliability of the
analytical data is essential. The financial implications of such decisions have prompted Warzyn to
develop an effective review of our results from a project and regulatory viewpoint.
In this alternative approach, QA is an integral part of laboratory operations, rather than an
isolated entity. QA is an interactive element in each phase of the analytical process including
technical review of data and reports. Figure 1 illustrates this approach to QA.
Technical review is an interpretive process designed to evaluate the overall ability of the data to
satisfy the project objectives. Initially, analytical results must be reviewed in relationship to the
other analytes reported. The purpose of this type of review is to attempt to identify trends,
anomalies, or interferences which can bias the overall usefulness of the data. The technical review
process begins at the onset of any project. Recommendations for testing programs and data
quality objectives (DQO) are examples of project support services offered. This strategy moves
the laboratory from a "black box" to a highly visible, integral part of the project team where
technical expertise is critical to:
The selection of testing programs
Regulatory assistance
Development of project DQO's
The technical review process incorporates an initial review of the testing program upon receipt of
the samples. The reviewer evaluates analyses requested to ensure project DQOs are being met.
As analytical results are generated, initial math check, QC review and supervisors' technical
release of data are performed within the operations unit of the laboratory. Reviewers consider the
relative accuracy and precision of each analyte when interpreting the analytical results. This may
assist in establishing the reliability of the results before proceeding with the overall project
interpretation.
The data entry process is a unique function which requires double-key entry protocols. An
internal computer error-checking routine is employed to compare both data entries and generate
an exceptions report. The double key entry greatly reduces the transcription error rates and cuts
down on the time required for non-technical review and the errors associated with report
preparation.
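The error-checking routine amounts to a field-by-field comparison of the two keyings; a minimal sketch in Python (the field names and values are illustrative):

    def exceptions_report(first_key, second_key):
        # List every field where the two independent entries disagree.
        fields = sorted(set(first_key) | set(second_key))
        return [(f, first_key.get(f), second_key.get(f))
                for f in fields if first_key.get(f) != second_key.get(f)]

    entry_a = {"pH": "6.46", "Chloride": "37", "Hardness, Total": "955"}
    entry_b = {"pH": "6.46", "Chloride": "37", "Hardness, Total": "595"}  # transposed digits
    for field, a, b in exceptions_report(entry_a, entry_b):
        print(f"EXCEPTION: {field}: {a} vs. {b}")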
Once reports have been generated, the next phase of the technical review begins. This review is
performed from a technical perspective and based on a wealth of environmental experience.
Laboratory performance verification (precision and accuracy) is designed to determine the
quality of individual analyses. The interpretive technical review ties all parameters together to
obtain an overall picture of the data quality.
A number of computer generated reports assist as individual parameter relationships are
evaluated. One of the "red herrings" that typically appear in Quality Assurance Program Plans
(QAPPs) or Quality Assurance Project Plans (QAPjPs), is the discussion of historical data
comparison as part of the review process. While this technique is quite valuable in long term
monitoring situations, it is most often performed from memory rather than actual comparison to
previous data. Current technology allows the opportunity to store monitoring data for future
comparison. Advances in relational database software also provide the capability for statistical
and trend analysis, modeling, and evaluation of regulatory compliance.
Figure 2 shows a historical comparison report which clearly presents data at a sampling location
for the past four quarters. With this information the technical reviewer can trace anomalies or
request laboratory confirmation of any analyte in question. The development of this historical
report format has greatly increased trend analysis capabilities. With real historical information
available, anomalies are easily identified.
Figures 3A and 3B illustrate a two-stage process which shows how detailed historical reports can be
used to identify anomalous data and provide information to ascertain which specific parameters
are in question. Initially, the technical reviewer reviews the summary data (Figure 3A) and
observes that the hardness value is significantly lower than recent historical data indicate for this
sampling point. The alkalinity value, which closely correlates to hardness, does not exhibit a
similar trend. The next step in this process is to compare these two results to the conductivity
value (Figure 3B). The technical reviewer can now evaluate all parameters given as a whole. In
this case, the TDS and sodium values provide clues to assist in pinpointing the problem analyte,
i.e., hardness. The laboratory benchsheet for hardness is then reviewed to determine if a
mathematical error was made. In this case, all the parameters indicate that the source of the
anomaly is in the hardness value alone, which was indeed confirmed in an audit of the raw data.
Figure 3C represents a different scenario. In this case, there is a significant reduction in the
hardness value from the previous quarter, and the sodium result is significantly higher than in the
previous quarter. The conductivity results, however, are not significantly different. The field log
notes would then be reviewed to determine if the sampling location might shed some light on the
hardness anomaly, such as water sampled downstream from a water softener. In this example, the
field observations confirmed that a water softener had recently been installed upstream of the
sampling point.
If no apparent error is found, the technical review section has authority to schedule an immediate
confirmation of the analyte(s) in question. Other routine data relationships which are considered
during the review process include:
•   Anion-cation balances and relationships to EC
•   Comparison of theoretical and measured EC/TDS
•   Demand parameter relationships (BOD/COD/TOC ratios)
•   Evaluation of trace element data in terms of potential interelement interferences
•   "Logical" VOC degradation patterns (landfill age vs. solvent breakdown product
    appearances)
•   Confirmation of the presence/absence of common laboratory contaminants such as
    solvents, phthalates, and methylene chloride
•   Interpretation of data relative to detection limits and dilutions
•   Close scrutiny of "rarely" detected analytes
•   Relationship of detected analytes to potential source contamination (e.g., elevated lead
    and cadmium near highways)
Some projects require detailed comparison of analytes. Figure 4 shows a detailed program which
calculates the ion balance for a monitoring well. Upon further examination, the sum of the
parameters exceeds the measured TDS; the TDS value appears biased low. Also, by reviewing
historical data, it was determined that prior alkalinity values were approximately 250 mg/L. With
this correction, an ion balance is achieved.
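The underlying calculation converts each major ion from mg/L to milliequivalents per liter and compares the cation and anion sums; the sketch below uses standard equivalent weights but illustrative concentrations, not the Figure 4 well data:

    EQ_WT = {  # mg per milliequivalent (standard values)
        "Ca": 20.04, "Mg": 12.15, "Na": 22.99, "K": 39.10,
        "Cl": 35.45, "SO4": 48.03, "HCO3": 61.02,
    }

    def ion_balance(cations, anions):
        # Sums are in meq/L; a difference within a few percent suggests balance.
        cat = sum(mg_l / EQ_WT[ion] for ion, mg_l in cations.items())
        an = sum(mg_l / EQ_WT[ion] for ion, mg_l in anions.items())
        return cat, an, 100 * (cat - an) / (cat + an)

    # Alkalinity reported as CaCO3 converts to HCO3 via the ratio 61.02/50.04.
    alk = 250.0                                                # mg/L as CaCO3
    cations = {"Ca": 80.0, "Mg": 30.0, "Na": 20.7, "K": 3.0}   # illustrative
    anions = {"Cl": 37.0, "SO4": 38.0, "HCO3": alk * 61.02 / 50.04}
    print(ion_balance(cations, anions))                        # difference ~+4%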
Another example of a computer database report which assists the technical review process is an
Exceedance Summary. In Wisconsin, the DNR has "NR140" Preventive Action Limits (PALs)
for groundwater standards. The Exceedance Summary report assimilates all historical data for a
site and compares the found values to that of the PAL and presents the comparison in a summary
table. Figure 5 is an example of this report. In this example, the client is provided with the actual
sample results and any regulatory criteria. Any value which exceeds a regulatory criterion level is
flagged appropriately on the report.
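The flagging logic itself is simple; in the sketch below, the PAL values are illustrative placeholders rather than the actual NR140 tables:

    def flag_exceedances(results, pals):
        # Compare each result to its Preventive Action Limit, if one exists.
        for analyte, value in sorted(results.items()):
            pal = pals.get(analyte)
            flag = " *PAL exceeded*" if pal is not None and value >= pal else ""
            print(f"{analyte:<16} {value:>8}  PAL: {pal if pal is not None else '--'}{flag}")

    pals = {"Chloride": 125.0, "Nitrate (as N)": 2.0}    # illustrative limits, mg/L
    results = {"Chloride": 137.0, "Nitrate (as N)": 0.4, "Sulfate": 38.0}
    flag_exceedances(results, pals)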
One of the most powerful tools for a technical reviewer is a database of compiled technical
information to help further their environmental knowledge. Common organic contaminants, trace
elements in natural soils, and common inorganic interferences are a few examples of the type of
support this gives to the technical review group. A central library is maintained, and pertinent
articles are routed throughout the QA and management staff.
TECHNICAL REVIEW - THE NEXT GENERATION
Quality is an evolutionary process. The next step in the ever-expanding quality assurance "tool-
box" is the development of an interactive database. Rather than relying solely on practical
experience, Warzyn is currently experimenting with a user-friendly, menu-driven, software
program that integrates text and graphics in a relational database format. The information being
electronically cataloged and cross-referenced includes the following:
•   common analyte names and "aliases"
•   sources of the analytes
•   environmental fate
•   chemical structures
•   available analytical method summaries
•   information regarding inclusion on various regulatory lists
•   regulated levels for compliance
Figure 6 shows two "snapshots" of the computer screens which are available with the current
program. These screens provide a wealth of information that can be assimilated by the technical
reviewer. The information on sources and environmental fate can be used to evaluate whether or
not a site is likely to be contaminated with these analytes, and which long-term breakdown
products might be expected. Our long range goal is to offer strong technical support to all
locations within our firm.
We envision this program as an excellent tool for staff training as well. Our colleges and
universities simply do not prepare graduates adequately for work in the environmental field.
However, with modifications to this program, detailed information regarding analytical techniques
can be presented in an intriguing, informative format. The net effect would be to provide hands-
on, visual training in critical interpretation of peak overlap, mass spectral identification, and the
impact of method interferences. Each training module can be designed to include an interactive
"test" to help assess trainees' comprehension of technical information and concepts.
SUMMARY
Environmental decisions depend on the quality of the data used to make those decisions. There is
a growing need to look beyond quality control data and ask if the data "make sense". The aim of
the review process described in this paper is to improve the quality of data generated by
incorporating QA as an integral part of laboratory operations. Only with this type of holistic
approach to data review can the "black box" aspect of environmental analysis be eliminated and
attention be focused on technical advancement with full support to project and client needs.
1-53
-------
REFERENCES
"Interim Guidelines and Specifications for Preparing Quality Assurance Project Plans", QAMS-
005-80, United States Environmental Protection Agency, EPA Document 600/4-83-004,
1983.
U.S. EPA, 1988. Methods for the Determination of Organic Compounds in Drinking Water,
EPA/600/4-88/039, December 1988.
U.S. EPA, 1979. Handbook for Analytical Quality Control in Water and Wastewater Laboratories,
EPA-600/4-79-019, March 1979.
U.S. EPA, 1986. Test Methods for Evaluating Solid Waste, SW-846, 3rd Edition, November 1986.
U.S. EPA Office of Solid Waste and Emergency Response, 1983. Hazardous Waste Land Treatment,
SW-874, April 1983, p. 273.
1-54
-------
Quality Commitment
Warzyn's Quality Assurance Program serves as a critical link to laboratory operations,
rather than as an ancillary function.
[Flow chart: Project Development -> Sample Receiving -> Sample Analysis -> Data Review,
annotated with QA functions including completeness checks, historical comparison,
project-specific QC requirements, technical project review, log-in vs. COC and
transcription checks, review of test selection, adherence to SOPs, consultation for
corrective action, and a blind PE program.]
1-55
Figure 1
-------
WARZYN ANALYTICAL LABORATORY RESULTS
LOCATION: MADISON, WISCONSIN
PROJECT NO: 30205 OORL
CK'D: APP'D:
DATE ISSUED:
PAGE: 1

Leach MHZ                    06/07/90  09/18/90  12/11/90  03/04/91
pH                               6.49      6.26      6.68      5.63
Conductivity @ 25 Deg C          5270      7950      9020     10000
Alkalinity, Total                3610      3300      4950      5200
Biochemical Oxygen Demand        3190      4700      5370      6030
Carbonaceous BOD                 2690      3600      5490      4840
Chemical Oxygen Demand           4060      6940      6960      7760
Chloride                          537       728       965      1040
Cyanide, Total                  0.007     0.006     0.006    <0.025
Hardness, Total                  2400      3560      4160      4260
Nitrogen, Total Kjeldahl         89.8       166       159       158
Total Suspended Solids            198       520       128       350
Results in mg/L except elev(ft), pH (S.U.), conductivity (umhos/cm).
1-56
Figure 2
-------
HISTORICAL REPORT EXAMPLE
WARZYN ANALYTICAL LABORATORY RESULTS
LOCATION: MADISON, WISCONSIN
PROJECT NO: 30205 OORY
CK'D: APP'D:
DATE ISSUED:
PAGE: 1

Lysimeter 3                  09/18/90  12/11/90
pH                               6.46      6.22
Conductivity @ 25 Deg C          1510      1470
Alkalinity, Total                 804       817
Chemical Oxygen Demand             64        48
Chloride                           37        28
Hardness, Total                   955       535   [Historical Hardness data indicates a low bias.]
Nitrogen, Ammonia                3.06      0.37
Nitrogen, Nitrate + Nitrite      0.09     <0.02
Nitrogen, Total Kjeldahl          4.1      1.64
Phenolics, Total                0.066     0.036
Sulfate                            38        78
Iron                             7.58      20.4
Manganese                        0.83      1.02
Sodium                           20.7      14.6
Solids, Total Dissolved          1020      1100
Results in mg/L except elev(ft), pH (S.U.), conductivity (umhos/cm).
1-57
Figure 3A
-------
HISTORICAL REPORT EXAMPLE
WARZYN ANALYTICAL LABORATORY RESULTS
LOCATION: MADISON, WISCONSIN
PROJECT NO: 30205 OORY
CK'D: APP'D:
DATE ISSUED:
PAGE: 1

Lysimeter 3                  09/18/90  12/11/90
pH                               6.46      6.22
Conductivity @ 25 Deg C          1510      1470
Alkalinity, Total                 804       817
Chemical Oxygen Demand             64        48
Chloride                           37        28
Hardness, Total                   955       535
Nitrogen, Ammonia                3.06      0.37
Nitrogen, Nitrate + Nitrite      0.09     <0.02
Nitrogen, Total Kjeldahl          4.1      1.64
Phenolics, Total                0.066     0.036
Sulfate                            38        78
Iron                             7.58      20.4
Manganese                        0.83      1.02
Sodium                           20.7      14.6
Solids, Total Dissolved          1020      1100

[Alkalinity, conductivity and TDS results further indicate low bias on Hardness.]
Results in mg/L except elev(ft), pH (S.U.), conductivity (umhos/cm).
Benchsheets reviewed; math error was identified. Confirmation of the Hardness
value was performed.
1-58
Figure 3B
-------
Sample Monitoring Well Data

             Measured   Theoretical     %D
EC               2597          1404   -46%
TDS               435           628    44%
TDS/ECm          0.17          0.24
TDS/ECt          0.31          0.45
pH               7.35

Major ion sums: cations 12.72 meq/L (biased high); anions 10.00 meq/L.
Ion Balance %Diff. = 12.0% (acceptance criteria = ±2.0%). Anions too low; cations OK.
The parameter concentrations (alkalinity as CaCO3, bicarbonate, carbonate, chloride,
nitrate, sulfate, calcium, magnesium, sodium, potassium) total 628 mg/L, with
theoretical EC contributions totaling 1404 umho.

NOTE: Since the sum of the parameters exceeds measured TDS, the TDS value is biased low.

NOTE: By reviewing historical data, it could be determined that prior values were in the
range of 250 mg/L. This would result in an ion balance.

[Table of affected parameter (assumed discrepancy source: alkalinity) showing the
subsequent impact of prior-year data on theoretical TDS, EC, and TDS/EC.]
1-60
Figure 4
-------
Summary of Chapter NR 140 PAL
Concentration Attainments and Exceedances

Parameter         PAL    ES     SWLS 06/08/90   SWLS 09/14/90   SWLS 12/14/90
Chloride (mg/L)   125.   250.   *397.           221.
Iron (mg/L)       0.15   0.3    *2.72           *5.46           *7.05
Notes:
(1) PAL = Preventive Action Limit (Chapter NR 140, Wisconsin Administrative
Code).
ES = Enforcement Standard (Chapter NR 140, Wisconsin Administrative
Code).
* = Concentration attains or exceeds Enforcement Standard Concentration.
(2) The concentrations listed may not be actual PAL or ES exceedances depending
on the location of the facility's Design Management Zone, Site Specific
PAL's, etc. What constitutes an NR 140 exceedance is defined on a site-
by-site basis.
(3) Does not include NR 140 Welfare parameters color and odor or organic
health parameters.
1-61
Figure 5
-------
[Database screen 1: Acrolein]
SOURCES: (1) Byproduct of tobacco smoke. (2) Thermal decomposition of fats/greases, i.e.,
restaurant kitchens. Polyurethane foams; polyester resins; intermediate for synthetic
glycerol; military poison gas mixtures; aquatic herbicide; warning agent (chloromethane
refrigerant).
ENVIRONMENTAL FATE: Undergoes addition of halogens easily. Unstable; polymerizes rapidly
in light or strong acid.
STRUCTURE: H2C=CH-CHO
OTHER NAMES: Allyl aldehyde; Acrylaldehyde; 2-Propenal; Aqualin
REGULATORY LISTS: RCRA App. VIII; AB 1803 (CA); MCL info
ANALYTICAL METHODS (SOIL): SW-846 8030 (purge & trap GC/FID); SW-846 8240 (purge & trap,
packed col. GC/MS)
NOTES: GC/MS Method 624 is only approved as a screen for this compound. If quantitation
is critical or low-level detection is required, the GC method is the method of choice.

[Database screen 2: 1,2-Dibromoethane]
SOURCES: Scavenger for lead in gas; occurs at levels up to 0.0258 (w/v). Grain and tree
crop fumigant. Waterproofing preparations.
ENVIRONMENTAL FATE: Biodegradation occurs in 30-120 days at levels of 15-18 ppm.
Degrades to bromoethane.
STRUCTURE: Br-CH2-CH2-Br
OTHER NAMES: Dowfume W85; EDB; Ethylene dibromide
REGULATORY LISTS: SDWA List 2; RCRA Appendix VIII; MCL info
ANALYTICAL METHODS (WATER): EPA 504 (microextraction GC/ECD); EPA 502.1 (packed P&T
GC/HECD); EPA 502.2 (capillary P&T GC/HECD); EPA 524.2 (capillary P&T GC/MS)
ANALYTICAL METHODS (SOLID/WASTE): SW-846 8011 (microextraction GC/ECD); SW-846 8021
1-62
Figure 6
-------
TABLE 1
COMMON LABORATORY CONTAMINANTS

Substance                               Use in Analytical Laboratory
1,1,2-Trichloro-1,2,2-Trifluoroethane   Solvent for oil & grease/TPH extractions
1,2,4-Trichlorobenzene                  Calibration compound, matrix spike
1,4-Dichlorobenzene                     Calibration compound, matrix spike
2,4,6-Trichlorophenol                   Calibration check compound
2,4-Dichlorophenol                      Calibration check compound
2-Chlorophenol                          Matrix spike compound
4-Bromofluorobenzene                    Surrogate compound for VOA
4-Nitrophenol                           Matrix spike compound
Acenaphthene                            Calibration compound, matrix spike
Acetone                                 Extraction solvent
Aluminum                                Matrix spike (high concentration)
Arsenic                                 Matrix spike (high concentration)
Benzene                                 Matrix spike compound
Benzo(a)pyrene                          Calibration check compound
Boron                                   Glassware
Carbon Disulfide                        GC thermal desorp work (air)
Chlorobenzene                           Matrix spike compound
Chloroform                              Extraction solvent, sample preservation
Chromium                                Cleaning solution/digestion reagent (COD)
Copper                                  Sample preservation (phenols)
Diethyl Ether                           Extraction solvent
Fluoranthene                            Calibration check compound
Freons (CCl3F, CCl2F2)                  Refrigerants (A/C, freezers), fire extinguishers
Iron                                    Matrix spike (high concentration)
Mercury                                 Gas displacement/digestion reagent (COD & TKN)
Methylene Chloride                      Extraction solvent
MTBE                                    Solvent for many new 500 series methods
Naphthalene                             Petroleum distillate (pesticide spraying)
Pentachlorophenol                       Calibration compound, matrix spike
Phenol                                  Calibration compound, matrix spike
Pyrene                                  Matrix spike compound
Selenium                                Shampoo
Silver                                  Digestion reagent (COD)
THMs                                    Water supply system chlorination by-product
Toluene                                 Carpet glue, paints, extraction solvent, matrix spike, electrical tape
Trichloroethene                         Matrix spike compound
Various Phthalates                      Inks, plasticizers, plastics
Xylenes                                 General solvent, slide cleaning
Zinc                                    Sample preservation, hand cream
1-63
-------
TABLE 2
TRACE CHEMICAL ELEMENT CONTENT OF NATURAL SOILS

Element       Symbol   Common Range (ppm)   Average (ppm)
Aluminum      Al       10,000-300,000       71,000
Antimony      Sb       2-10                 --
Arsenic       As       1-50                 5
Barium        Ba       100-3,000            430
Beryllium     Be       0.1-40               6
Boron         B        2-100                10
Bromine       Br       1-10                 5
Cadmium       Cd       0.01-0.7             0.06
Cesium        Cs       0.3-25               6
Chlorine      Cl       20-900               100
Chromium      Cr       1-1,000              100
Cobalt        Co       1-40                 8
Copper        Cu       2-100                30
Fluorine      F        10-4,000             200
Gallium       Ga       0.4-300              30
Gold          Au       --                   1
Iodine        I        0.1-40               5
Lanthanum     La       1-5,000              30
Lead          Pb       2-200                10
Lithium       Li       5-200                20
Magnesium     Mg       600-6,000            5,000
Manganese     Mn       20-3,000             600
Mercury       Hg       0.01-0.3             0.03
Molybdenum    Mo       0.2-5                2
Nickel        Ni       5-500                40
Radium        Ra       --                   8 x 10^-5
Rubidium      Rb       5-500                10
Selenium      Se       0.1-2                0.3
Silver        Ag       0.01-5               0.05
Strontium     Sr       50-1,000             200
Thallium      Tl       --                   5
Tin           Sn       2-200                10
Tungsten      W        --                   1
Uranium       U        0.9-9                1
Vanadium      V        20-500               100
Yttrium       Y        25-250               50
Zinc          Zn       10-300               50
Zirconium     Zr       60-2,000             300
Ref: USEPA Office of Solid Waste and Emergency Response.
Hazardous Waste Land Treatment. SW-874 (April, 1983) page 273.
1-64
-------
7 QUALITY ASSURANCE STRATEGIES TO IMPROVE PROJECT MANAGEMENT
by
Tracey L. Vandermark, Maxwell Laboratories, Inc. — S-CUBED Division
Guy F. Simes, U.S. EPA, Risk Reduction Engineering Laboratory
In these times of ever-shrinking budgets, the primary objective in effective project
management becomes one of how to accomplish as much as possible for minimal cost. There
is no longer any margin for repeating work which was done incorrectly the first time, or for
extraneous investigations which are not directly related to the objectives of the project. Quality
assurance, applied before, during, and after the inception of a project, can vastly decrease
misdirected efforts and needless expenditures.
Some of the key aspects in the conceptual stages of any project are identification of the
types of questions to be answered by, or decisions to be made as a result of, the outcome of
the project, specification of the project objectives, and identification of the uses for and
quality of the resulting data. These are the types of questions which need to be answered before a
quality assurance plan can be written. These concepts are inherent in the strategy embodied by
the Data Quality Objectives (DQO) process, which, though originally developed for application
to the array of problems associated with Superfund sites, may be applied with equal success to
aspects of the RCRA program.
Once a Quality Assurance Project Plan (QAPjP) has been written, a QAPjP review by an
independent third party is often able to identify experimental design flaws, inappropriate
experimental methods, or insufficient QA measures for either sampling or analytical procedures.
The SITE Program has documented several examples of substantial monetary savings to projects
which were accomplished by employing QAPjP reviews prior to the initiation of any sampling
or analytical efforts.
After the project is underway, an audit is the primary means to ensure that the data meet
the established project-specific QA criteria or that the project has not deviated from the quality
assurance plan, and that overall technical systems are all in proper working order. Two types
of audits achieve these purposes: audits of data quality (ADQs) and technical systems audits
(TSAs), respectively. These audits may be conducted by an independent third party, or by in-
house personnel. Regardless of who conducts the audit, however, management must place a high
degree of commitment in responding to the findings of the audit and in rectifying any problems
noted.
With the incorporation of such quality assurance measures, it is possible to meet project
goals within pre-established budgetary constraints. Without including appropriate QA procedures
in a project, not only will redundancy of work and cost overruns be likely, but overall project
goals may be compromised.
1-65
-------
8    BIAS CORRECTION: EVALUATION OF EFFECTS ON ENVIRONMENTAL SAMPLES
Marvin W. Stephens, Ph.D., Vice President/Corporate Technical Director,
Michael A. Paessun, Environmental Regulations Specialist
Wadsworth/ALERT Laboratories, Inc.
4101 Shuffel Drive, NW
North Canton, Ohio 44720
ABSTRACT
September 25, 1990 began a new era in data evaluation. Under the new
Toxicity Characteristic (TC) rule, as well as the Land Disposal Restric-
tions (LDR) program, a requirement for the bias correction of TCLP data
was implemented. Since that deadline, Wadsworth/ALERT Laboratories has
analyzed hundreds of samples for which bias correction factors were
generated and applied. The spike data and the corresponding correction
factors associated with those samples have been collated and evaluated.
For the population of samples studied, the number of samples having
hazardous levels of contaminants and those which became hazardous after
applying the bias correction factor was small compared to those samples
which were non-hazardous.
This paper evaluates the magnitude of the corresponding bias correction
factor for each analyte on the TC list, the variability of the factor for
each analyte, and the likelihood it will change a sample's hazard
classification. A comparison of the individual correction factors to the
recoveries of the respective analytes in control samples is presented to
demonstrate the possibility of using the control sample recoveries for
bias determination. And finally, analytical anomalies, especially for the
organic compounds, are addressed.
From these discussions, proposals are suggested to aid in making decisions
for future sample analyses. An estimate of the benefits and expenses of
these options will be discussed.
INTRODUCTION
On March 29, 1990, the long expected Toxicity Characteristic Leaching
Procedure (TCLP) was finally promulgated in the Federal Register(l). The
short phrase "A matrix spike shall be performed for each waste and the
average percent recovery applied to the waste characterization" (Section
10.3) sent shockwaves throughout the environmental laboratory industry.
A flurry of comments and questions prompted the EPA to issue corrections
to the TCLP on June 29, 1990(2). The above statement was replaced by
another clarifying statement, "The bias determined from the matrix spike
determination shall be used to correct the measured values." (Section
8.2). Also included was the formula (Equation 1) for calculating this
corrected value (Section 8.2.4).
1-66
-------
Equation 1

    Xc = 100 (Xm / %R)

Where:
    Xc = Corrected value
    Xm = Measured value of unspiked sample
    %R = Percent recovery of matrix spiked amount
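As a function, Equation 1 is a single line (a minimal sketch; the example value and
recovery are illustrative):

    def bias_correct(measured, percent_recovery):
        """Equation 1: Xc = 100 * Xm / %R."""
        return 100.0 * measured / percent_recovery

    # A measured value of 4.2 mg/L with an 88% matrix spike recovery
    print(round(bias_correct(4.2, 88.0), 2))  # 4.77 -- adjusted upward for the low bias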
The Toxicity Characteristic (TC), for which the TCLP is to be used, became
effective for large-quantity generators of hazardous waste on September 25,
1990. Since that date, thousands of samples have been extracted using
this procedure and subsequently analyzed and the appropriate correction
factors calculated and applied.
Wadsworth/ALERT Laboratories analyzed a portion of those samples. The
data for nearly 6000 determinations for the toxicity characteristic have
been compiled to evaluate the effect this requirement has had on the types
of samples this laboratory has encountered. This paper will present data
on the magnitude and variability of the bias correction factors calculated
for these samples and compare it to data collected from control samples
run in the same TCLP extraction fluids. It will also evaluate the
probable effect on future samples, suggest changes in protocols, propose
the use of control sample recoveries for bias determination, and present
some interesting anomalies.
The data base for this study consisted of sample matrices from industrial
wastes, soils, oils, sludges, and waters. All samples studied had
undergone TCLP extraction (filtration in the case of waters), had been
spiked with the analytes on the TC list, and a correction factor
calculated following the analysis. Correction factors (CF) as used in
this paper are the decimal equivalents to the percent recovery used in
Equation 1. Table 1 indicates the number of analyses performed for each
analyte and the number of samples having a specific analyte concentration
in excess of the regulatory limit. Of the 5871 determinations, only 0.9%
of the uncorrected results exceeded the regulatory levels. And it can be
readily seen that the majority of those samples exhibiting a toxicity
characteristic are the result of metals contamination. Not seen in this
table are another 8.3% of the results that exceeded the laboratory estab-
lished method detection limits yet remained below regulatory levels.
(Many of those values were likely due to trace levels of metals in the
TCLP extraction fluid as demonstrated by the blank analysis.) The remain-
ing 90.8% of the sample results were less than the method detection limit.
To determine whether an individual correction factor will have an impact
on a given population of samples, a new term was coined: the Critical
Correction Factor (CCF). It is defined in Equation 2. The CCF allows one
to evaluate the potential impact of bias correction on each individual
1-67
-------
Equation 2

    CCF = Method Detection Limit / Regulatory Limit
analyte within a population of samples with no detectable contaminants.
It is simply the correction factor value below which a bias-corrected
"non-detect" would exceed the regulatory limit. As the method detection limit
approaches the regulatory limit, the CCF approaches 1. The smaller the
CCF, the less likely that a component will exceed the regulatory limit
even if very poor recoveries are measured. The right-hand column of Table
2 presents the value of the CCF for each analyte as calculated for this
laboratory. Since detection limits vary among laboratories, the CCF will
change proportionally and must be calculated for each location.
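In code, asking whether a non-detect could become an exceedance reduces to comparing the
correction factor against the CCF (a sketch; the mercury MDL of 0.02 mg/L is an assumption
back-calculated from the CCF of 0.100 in Table 2 and the 0.2 mg/L regulatory limit):

    def ccf(mdl, regulatory_limit):
        """Equation 2: Critical Correction Factor."""
        return mdl / regulatory_limit

    def nondetect_exceeds(mdl, regulatory_limit, correction_factor):
        """A bias-corrected non-detect (MDL / CF) exceeds the regulatory
        limit exactly when CF < CCF."""
        return correction_factor < ccf(mdl, regulatory_limit)

    # Mercury: regulatory limit 0.2 mg/L, assumed MDL 0.02 mg/L -> CCF = 0.100.
    # The lowest mercury CF observed was 0.08 (Table 2).
    print(nondetect_exceeds(0.02, 0.2, 0.08))  # True: 0.02/0.08 = 0.25 > 0.2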
The use of the CCF to predict whether samples within a data set will
exceed regulatory levels can be demonstrated by comparing it to the
magnitude and variability of measured correction factors (Table 2). A
frequency distribution of the CFs for each analyte was plotted to evaluate
its statistical behavior, since a broad spectrum of matrices was included.
A normal distribution occurred in most cases. Figures 1-6 show the
distributions for representative constituents from each analyte group.
For the metals, the mean for the correction factors varied from 0.86 to
1.04. Mercury (Figure 1) had the widest range within the data and was the
only metal where a CF was found to be less than the critical correction
factor.
The variation in the averages of the pesticide correction factors was very
similar to the metals, 0.86 to 1.04. However, the standard deviations are
somewhat larger, indicating the greater variability within the measure-
ments expected for these analyses. On the other hand, the value for the
lowest calculated correction factor in this group (endrin - Figure 2) was
nearly twenty times larger than the CCF for the compound.
The two herbicide residues had average correction factors which were very
similar, and the standard deviations were comparable to the pesticides.
The range within the silvex data (Figure 3) was wider than the previous
constituents. However, the CCF for both herbicides is so small (<0.001)
none of the results approached this level.
All of the volatile organic constituents showed consistent average
correction factors, 1.01 - 1.06, except for methyl ethyl ketone (MEK).
For several samples, the correction factors for MEK (Figure 4) were in the
range of 4 - 8. This variability caused the standard deviation to exceed
0.8. Vinyl chloride (Figure 5) also exhibited a large standard deviation
(0.47). However, none of the calculated correction factors approached its
respective CCF since most of the recoveries were biased high.
1-68
-------
Finally, the semivolatile organic compounds had the lowest average
correction factors (0.53 - 0.74). And the standard deviations for many of
the compounds were larger, which seems to be the case for the analysis of
most semivolatile compounds. Two of the compounds, 2,4-dinitrotoluene and
hexachlorobenzene (Figure 6), had CCFs much larger than any of the other
compounds. (This is due to the fact that the regulatory limit is only a
factor of three greater than the detection limit [CCF = 0.308]). In spite
of that, only six instances out of the combined total of 249 determina-
tions where no contaminants were detected exceeded the regulatory limit
when the correction factor was applied.
In all of the above discussions, it has been assumed that the detection
limit would be bias corrected in the same manner as any measurable
quantities. Since it is unlikely that a detection limit study would be
performed for each matrix analyzed, this is a necessary assumption.
Therefore from the above discussions, it can be concluded that for the
majority of matrix types encountered in this laboratory the corrected
detection limits will not exceed regulatory limits. But it must also be
recognized that a few constituents exhibit recovery characteristics that
warrant more careful review. Of the 288 mercury analyses, only one sample
(0.3%) had a corrected detection limit greater than the regulatory limit.
There were four such occurrences in the 116 analyses for 2,4-dinitro-
toluene (3.4%). And the 133 hexachlorobenzene analyses showed two
occurrences (1.5%). None of these percentages suggest a serious problem.
Several other semivolatile compounds have relatively low biases, but the
CCF in each case is small, so they are unlikely to be problem compounds. One
might expect the MEK results to be similarly affected, but the bias is
toward recoveries greater than 100% which tend not to be a regulatory
concern. Vinyl chloride results do not approach the CCF (0.05) even
though its very large standard deviation accentuates the spread in the
recovery data and makes it suspect.
Although it has been documented that some type of sample matrix effect may
exist, the argument for the need for bias correction must also consider
the effect of the TCLP buffer solution itself on the ability to recover
each of the spiked compounds. To evaluate this effect, the recovery data
for control samples run with each batch of samples was collated. These
control samples were prepared by spiking an aliquot of the TCLP buffer
blank in the same manner as the matrix spikes were prepared. These
control samples then underwent the same extraction or preparative
procedures and analyses as the actual samples. Table 3 summarizes the
number of control samples analyzed, the average recovery, standard
deviation, and measured recovery range for each analyte. It should
immediately be noted that for both 2,4-dinitrotoluene and hexachloro-
benzene, the control sample recoveries in at least one case were less than
the respective CCFs.
To compare this data to the recoveries of the matrix spikes (CFs), a graph
was prepared plotting the pairs of mean correction factors and mean
control sample recoveries for each analyte (Figures 7 - 10). For each
mean the range of two standard deviations is indicated (a 95% confidence
1-69
-------
interval). A quick survey of the graphs shows the remarkable similarity
of the two sets of data. Interestingly, the spike recoveries and control
sample recoveries for several of the compounds previously discussed do not
stand out as being significantly different when compared in this manner.
The CCF for each analyte has also been included for comparison purposes.
Before formulating any conclusions, there are still a couple of signifi-
cant anomalies that should be discussed. Methyl ethyl ketone had
recoveries that frequently exceeded 100% compared to a standard purged
from deionized water. This was true in the control samples and was even
more pronounced in the matrix spiked samples. It is felt that this can be
explained by the decreased solubility of the MEK in the higher ionic
strengths routinely present in the buffer. This would then be even more
exaggerated in the extracts of many of the samples which contained soluble
salts. Similar effects have been noted in this laboratory during the
routine analysis of other water-soluble ketones and alcohols under similar
conditions.
Another important anomaly was found for four of the compounds from the
semivolatile fractions. The cresols, 2,4-dinitrotoluene, nitrobenzene,
and pyridine showed no detectable spike recovery in a small number of
samples. These data were not included in the tables since a bias
correction factor could not be calculated. In such cases the recovery
data could not be used to support or reject the analytical results.
Instead, results had to be evaluated based on the generator's knowledge of
the process producing the waste sample, not on recovery factors. Ideally,
another method should be developed that is capable of quantitating these
compounds in the more difficult matrices.
CONCLUSIONS
What, then, can we imply from these data? Can we just disregard spike
recovery data as being insignificant? Though it might appear so on first
review, the opposite is really true. For many of the analytes, the
information presented indicates there are potential problem areas that
need to be addressed.
For example, if the sample in question is being characterized for the
first time or comes from a process where the characteristics are
continually changing, the data presented suggests that the matrix needs to
be evaluated to determine the extent to which it may affect the recovery
of the designated analytes. However, if it has been demonstrated that no
unusual sample matrix effects are biasing the data, there seems to be no
need to apply any correction factors. The method bias can be represented
by the mean of the control sample recoveries. If a sample contains
concentrations of contaminants that approach a regulatory level, and it is
felt that the bias suggested by the control sample recoveries do not
represent this particular matrix, a more extensive matrix spike study for
those analytes may be considered.
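That decision flow might be sketched as follows (the function, its names, and its branch
conditions are illustrative only, not part of any regulatory protocol):

    def bias_evaluation_basis(first_time_or_changing_process,
                              matrix_effects_ruled_out,
                              near_regulatory_limit):
        """Choose how bias should be evaluated for a sample, per the discussion above."""
        if first_time_or_changing_process:
            return "matrix spike evaluation"        # characterize the matrix first
        if near_regulatory_limit and not matrix_effects_ruled_out:
            return "extended matrix spike study"    # confirm before classifying
        return "control sample recovery database"   # method bias from control samples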
1-70
-------
It must be emphasized that only 0.9% of all constituents tested initially
exceeded the hazardous classification limits. When the calculated
correction factor was applied to the remaining data, an additional 0.2%
exceeded regulatory levels, including the seven samples where the
corrected detection limit was elevated above those limits. Therefore,
nearly 99% of all the analyses completed by this laboratory were still
classified as non-hazardous following bias correction. It can be argued
that there may be a large number of matrix types which are regulated but
are not included in this sample population. This may be true and each of
these matrices needs to be evaluated on its own merit. But as the data
base expands, it is predicted that it will become apparent that the need
to evaluate bias will be based more on the TCLP extraction fluid's effect
than on any sample matrix effects.
If the bias correction requirements are to remain, it is proposed that
control sample recoveries established from a statistically significant
database be used. Figures 7-10 graphically showed the differences be-
tween the average correction factor and the average control sample
recovery value. In all but six cases, this difference is less than 0.05.
Four of the cases involve pesticides where the CCF is very small because
of very good detection limits. These larger differences (0.08 - 0.23) are
not statistically significant and would seem to have no effect. MEK's
spike recovery is larger than the control sample recovery so the
difference is actually negative. But both averages are well above 100% and
any bias correction would only cause the result to be adjusted downward.
Nitrobenzene (a difference of 0.11) is the only remaining compound where
the control sample recoveries might be seen as adversely affecting the
data from an enforcement standpoint. Yet the CCF for this compound is
still very small (0.02) and the actual measured value (or an elevated
detection limit) would have to approach forty times the normal detection
limit before it would be significant. As suggested before, any concentra-
tion approaching a regulatory limit should be evaluated on an individual
basis.
What then are the options and any potential detrimental effects? From the
data used in this study, it seems there is very little opportunity for
significant abuse. By evaluating the bias for a specific component based
on the recoveries from control samples as described in this paper, the
cost of analysis can be significantly reduced with minimal environmental
impact. Another option for determining bias, which has been suggested by
many, is to use isotope dilution techniques, thus reducing the need for
the extra matrix spike. Though reducing the cost and supposedly any
duplicate sample preparation problems, this technique is only useful for
GC/MS methods. But if the control sample can give similar information,
then the same result can be achieved with even less expense than isotope
dilution and can also be used with all the methods. If there is a
requirement that first time or uncharacterized samples undergo some type
of matrix spike recovery evaluation, very few samples which may have
seriously biased data will be overlooked and potentially harm the
environment.
1-71
-------
SUMMARY
To approach all samples that may be regulated by the Toxicity Characteris-
tic rule using the same bias correction requirement seems to be impracti-
cal and definitely unnecessary. Though many argue that the process is
statistically unsound, with which I agree, the fact that bias may exist
should be recognized. If the data being generated is to be compared to a
regulatory level that has been set based on absolute recoveries, the bias
for a given analyte will need to be considered. But from the data
presented, the bias for many matrix types can be represented by the
recoveries of laboratory control samples without the added expense of
another spiked sample analysis.
REFERENCES
1) U. S. Environmental Protection Agency, March 29, 1990. Hazardous Waste
Management System; Identification and Listing of Hazardous Waste; Toxicity
Characteristics Revisions; Final Rule. 40 CFR Parts 261, et al. Federal
Register 55: 11798 - 11877.
2) U. S. Environmental Protection Agency, June 29, 1990. Hazardous Waste
Management System; Identification and Listing of Hazardous Waste; Toxicity
Characteristics Revisions; Final Rule. 40 CFR Parts 261, 264, 265, 268,
271, and 302. Federal Register 55: 26986 - 26998.
1-72
-------
TABLE 1  SUMMARY OF BIAS CORRECTION SAMPLES

                                 Number of        No. of Analytes Exceeding
Constituent                      Spiked Samples   Regulatory Levels
Arsenic 289 1
Barium 290 2
Cadmium 300 16
Chromium 313 5
Lead 320 19
Mercury 288 3
Selenium 286 4
Silver 291 0
Chlordane 64 0
Endrin 64 0
Heptachlor (& its epoxide) 64 0
Lindane 64 0
Methoxychlor 64 0
Toxaphene 64 0
2,4-D 74 2
2,4,5-TP (Silvex) 65 0
Benzene 159 1
Carbon tetrachloride 158 0
Chlorobenzene 158 0
Chloroform 158 0
1,2-Dichloroethane 157 0
1,1-Dichloroethene 158 0
Methyl ethyl ketone 157 0
Tetrachloroethene 158 0
Trichloroethene 159 1
Vinyl chloride 159 0
Cresols 118 0
1,4-Dichlorobenzene 132 0
2,4-Dinitrotoluene 116 0
Hexachlorobenzene 133 0
Hexachlorobutadiene 132 0
Hexachloroethane 132 0
Nitrobenzene 119 0
Pentachlorophenol 125 0
Pyridine 128 0
2,4,5-Trichlorophenol 128 0
2,4,6-Trichlorophenol 127 0
1-73
-------
TABLE 2  SUMMARY OF CORRECTION FACTOR DATA

                             Average       Standard                       Critical
                             Correction    Deviation of                   Correction
Constituent                  Factor (%R)   Corr. Factor   Range           Factor
Arsenic                      0.95          0.093          0.5  - 1.3      0.100
Barium                       0.87          0.117          0.5  - 1.4      0.001
Cadmium                      0.88          0.072          0.6  - 1.4      0.100
Chromium                     0.88          0.070          0.55 - 1.4      0.020
Lead                         0.89          0.103          0.3  - 1.5      0.020
Mercury                      1.04          0.212          0.08 - 1.5      0.100
Selenium                     0.92          0.192          0.4  - 1.5      0.300
Silver                       0.86          0.069          0.45 - 1.3      0.020
Chlordane                    0.86          0.193          0.5  - 1.38     0.017
Endrin                       1.04          0.202          0.45 - 1.38     0.025
Heptachlor (& its epoxide)   1.02          0.162          0.65 - 1.4      0.013
Lindane                      0.90          0.250          0.36 - 1.3      <0.001
Methoxychlor                 0.89          0.208          0.49 - 1.25     <0.001
Toxaphene                    1.00          0.198          0.54 - 1.42     0.010
2,4-D                        0.80          0.166          0.43 - 1.23     <0.001
2,4,5-TP (Silvex)            0.78          0.213          0.15 - 1.5      <0.001
Benzene                      1.04          0.134          0.8  - 2.0      0.010
Carbon tetrachloride         1.02          0.091          0.7  - 1.4      0.010
Chlorobenzene                1.02          0.071          0.8  - 1.2      <0.001
Chloroform                   1.03          0.086          0.8  - 1.3      0.001
1,2-Dichloroethane           1.05          0.110          0.8  - 1.4      0.010
1,1-Dichloroethene           1.01          0.156          0.3  - 1.8      0.007
Methyl ethyl ketone          1.32          0.823          0.4  - 8.2      <0.001
Tetrachloroethene            1.01          0.104          0.8  - 1.8      0.007
Trichloroethene              1.06          0.185          0.2  - 2.0      0.010
Vinyl chloride               1.01          0.474          0.4  - 5.0      0.050
Cresols                      0.57          0.213          0.06 - 1.14     <0.001
1,4-Dichlorobenzene          0.62          0.120          0.1  - 1.11     0.005
2,4-Dinitrotoluene           0.53          0.158          0.05 - 1.1      0.308
Hexachlorobenzene            0.66          0.172          0.11 - 1.22     0.308
Hexachlorobutadiene          0.62          0.114          0.18 - 0.93     0.080
Hexachloroethane             0.56          0.163          0.06 - 1.08     0.013
Nitrobenzene                 0.74          0.221          0.09 - 1.36     0.020
Pentachlorophenol            0.58          0.275          0.08 - 1.54     0.002
Pyridine                     0.59          0.185          0.08 - 1.06     0.008
2,4,5-Trichlorophenol        0.54          0.169          0.015 - 0.89    <0.001
2,4,6-Trichlorophenol        0.62          0.205          0.03 - 1.1      0.020
I-74
-------
TABLE 3  SUMMARY OF CONTROL SAMPLE RECOVERIES

                             Number of                Standard Deviation
                             Control      Average     of Control Sample
Constituent                  Samples(1)   Recovery    Recoveries           Range
Arsenic                      94*          0.93        0.113                0.10 - 1.19
Barium                       92           0.91        0.066                0.73 - 1.21
Cadmium                      92           0.92        0.045                0.77 - 1.06
Chromium                     92           0.92        0.045                0.73 - 1.00
Lead                         92           0.93        0.057                0.74 - 1.30
Mercury                      89           1.09        0.113                0.80 - 1.41
Selenium                     97*          0.98        0.162                0.60 - 1.42
Silver                       92           0.89        0.056                0.56 - 1.02
Chlordane                    49           0.90        0.206                0.24 - 1.32
Endrin                       49           1.12        0.157                0.74 - 1.42
Heptachlor (& its epoxide)   49           1.07        0.147                0.63 - 1.45
Lindane                      49           1.13        0.153                0.75 - 1.40
Methoxychlor                 49           1.04        0.148                0.60 - 1.35
Toxaphene                    49           1.07        0.224                0.53 - 1.63
2,4-D                        55           0.83        0.171                0.44 - 1.34
2,4,5-TP (Silvex)            55           0.73        0.218                0.30 - 1.29
Benzene                      62           1.00        0.081                0.81 - 1.22
Carbon tetrachloride         62           0.98        0.088                0.68 - 1.19
Chlorobenzene                62           1.01        0.079                0.72 - 1.19
Chloroform                   62           1.01        0.084                0.79 - 1.18
1,2-Dichloroethane           62           1.03        0.097                0.83 - 1.29
1,1-Dichloroethene           62           0.99        0.178                0.30 - 2.07
Methyl ethyl ketone          62           1.18        0.323                0.10 - 2.17
Tetrachloroethene            62           0.99        0.082                0.78 - 1.24
Trichloroethene              62           1.02        0.094                0.81 - 1.72
Vinyl chloride               62           0.96        0.202                0.47 - 1.43
Cresols                      95           0.62        0.123                0.33 - 0.99
1,4-Dichlorobenzene          95           0.61        0.097                0.37 - 0.92
2,4-Dinitrotoluene           95           0.48        0.171                0.06 - 1.12
Hexachlorobenzene            95           0.69        0.170                0.20 - 1.12
Hexachlorobutadiene          95           0.60        0.105                0.32 - 0.91
Hexachloroethane             95           0.59        0.125                0.28 - 0.91
Nitrobenzene                 95           0.85        0.234                0.48 - 1.72
Pentachlorophenol            95           0.52        0.219                0.07 - 1.25
Pyridine                     95           0.62        0.245                0.18 - 1.82
2,4,5-Trichlorophenol        95           0.57        0.120                0.28 - 1.05
2,4,6-Trichlorophenol        95           0.68        0.124                0.42 - 1.29

(1) Represents duplicate analyses
*   Includes both ICP and AA furnace results
I-75
-------
Figure 1.  Mercury correction factors: frequency distribution (288 data points).

Figure 2.  Endrin correction factors: frequency distribution (64 data points).
1-76
-------
Figure 3.  Silvex correction factors: frequency distribution (65 data points).

Figure 4.  Methyl ethyl ketone correction factors: frequency distribution (156 data points).
1-77
-------
Figure 5.  Vinyl chloride correction factors: frequency distribution (169 data points).

Figure 6.  Hexachlorobenzene correction factors: frequency distribution (133 data points).
1-78
-------
Figure 7.  Comparison of correction factor and control sample recovery for metals
(arsenic, barium, cadmium, chromium, lead, mercury, selenium, silver). Plotted for each
analyte: correction factor average with 95% CI, control sample average with 95% CI, and
the critical correction factor.
Figure 8.  Comparison of correction factor and control sample recovery for pesticides and
herbicides (chlordane, endrin, heptachlor & its epoxide, lindane, methoxychlor, toxaphene,
2,4-D, 2,4,5-TP). Legend as in Figure 7.
1-79
-------
Figure 9.  Comparison of correction factor and control sample recovery for volatile
organic compounds (VOC). Legend as in Figure 7.

Figure 10.  Comparison of correction factor and control sample recovery for semivolatile
organic compounds. Legend as in Figure 7.
1-80
-------
9    ENSURING DATA AUTHENTICITY IN ENVIRONMENTAL LABORATORIES
Jeffrey C. Worthington, Director of Quality Assurance; R. Park Maney, Esq.,
Vice President; TechLaw, Inc., 12600 W. Colfax Avenue, Suite C-310,
Lakewood, Colorado 80215
ABSTRACT
Data delivered to both government and public clients by environmental laboratories is no
longer reviewed only for contract compliance and acceptability of quality assurance
techniques; data users are concerned that the data be "authentic".
Investigations into data authenticity by government agencies have sparked organized
efforts by data users to ensure that the data delivered is exactly what it is purported to be.
Laboratory personnel should seize the initiative and embrace concerns for data
authenticity as a normal part of conducting business.
Laboratories and other generators of environmental data need to develop policies and
procedures to ensure the authenticity of their own data. These procedures should include
authenticity monitoring as a function of both laboratory management and quality assurance
staff.
The authors present a summary of the practices that have resulted in the delivery of data
that is not authentic and a plan for developing policies and procedures for data authenticity
including:
•  Education and training,
•  Internal audits,
•  Laboratory documentation procedures, and
•  Automated laboratory practices.
1-81
-------
10    ESTABLISHMENT OF LABORATORY DATA DELIVERABLE REQUIREMENTS
FOR DATA VALIDATION OF ENVIRONMENTAL RADIOLOGICAL DATA
David A. Anderson, Scientist
Environmental Restoration Program
EG&G Idaho, Inc.
P.O. Box 1625
Idaho Falls, ID 83401
ABSTRACT
In the world of environmental sampling and analysis, obtaining
accurate, valid data has become increasingly important. Much
attention has been placed on the review and validation of sample data from
CLP organic and inorganic analyses (U.S. EPA Functional Guidelines for
Review of Inorganic and Organic Data). In the field of nuclear waste
management, we are seeing a significant increase in the analysis of mixed-
waste samples from waste sites containing potential radiological
contamination. In light of this fact, a need has been identified for the
establishment of a protocol for data quality assessment and validation of
sample data from radiological analyses. The EPA CLP program protocol
provides a means for obtaining consistent data deliverables from different
analytical laboratories performing organic and inorganic analyses. This
allows for a reasonably equal assessment of sample data regardless of the
laboratory doing the analyses. The same type of protocol is necessary to
adequately address the need for data of known precision, accuracy,
representativeness, comparability and completeness when radiological
analysis of environmental samples is performed.
Establishment of a consistent set of radiological deliverables
provides a method of obtaining useable, accurate data even when using more
than one laboratory for sample analysis. A data assessment and validation
program for radiological data can then be designed from the data
deliverable requirements.
At EG&G Idaho a set of forms has been designed to provide a master
template for the laboratory to format their data deliverables. Use of
these templates ensures consistency in the data received from different
laboratories. Sample results from these forms may be transmitted to EG&G
on computer diskette or electronically transferred directly into a
database. An initial evaluation of sample results data and quality control
data can be performed by a computer analysis of selected data fields in the
forms against an established set of criteria. Initial flagging of quality
control and sample results data can also be computer generated. This
system provides for a substantial savings in time and money for both data
entry functions and the data validation process.
1-82
-------
11    AN ASSESSMENT OF QUALITY CONTROL REQUIREMENTS FOR THE ANALYSIS OF
CHLORINATED PESTICIDES USING WIDE BORE CAPILLARY COLUMNS--
A MULTI-LABORATORY STUDY
Jack A. Serges. Lockheed Engineering & Sciences Company, Las Vegas, Nevada.
Gary L. Robertson, U.S. EPA Environmental Monitoring Systems Laboratory, Las
Vegas, Nevada.
Wide bore (> 0.5 mm) capillary columns are frequently used for the analysis of
chlorinated pesticides by gas chromatography/electron capture detection
(GC/ECD). In March 1990, the U.S. EPA Contract Laboratory Program (CLP)
included the use of wide-bore capillary columns in its protocols for pesticide
analysis. In August 1990, as part of the contracting process, single blind
samples were distributed to 43 laboratories for pesticide analysis.
The CLP protocols allow variability in the analytical columns and conditions
used for pesticide analysis, but the protocols require stringent quality
control (QC) to enable the use of the results for a wide variety of purposes.
The QC requirements include chromatographic resolution, compound breakdown,
compound retention time stability, and detector linearity.
The laboratories participating in the blind study demonstrated a wide range of
proficiency in the use of GC/ECD. An examination of the results of the blind
study thus gives insight into the ruggedness of the chromatographic procedures
under variable conditions. This assessment presents results of a study of the
QC data and its impact on the analytical quality. Recommended QC
modifications that were derived from the study will be discussed.
Notice: Although the research described in this article has been funded
wholly or in part by the United States Environmental Protection Agency through
contract number 68-CO-0049 to Lockheed Engineering & Sciences Company, it has
not been subjected to Agency review and therefore does not necessarily reflect
the views of the Agency and no official endorsement should be inferred.
1-83
-------
12    ANALYSIS-SPECIFIC TECHNIQUES FOR ESTIMATING PRECISION AND ACCURACY USING
SURROGATE RECOVERY
Charles B. Davis. Lockheed Engineering & Sciences Company, Las Vegas, Nevada and
University of Toledo, Toledo, Ohio
Forest C. Garner, Jack A. Serges, Lockheed Engineering & Sciences Company, Las
Vegas, Nevada.
Larry C. Butler, U.S. Environmental Protection Agency, Environmental Monitoring
Systems Laboratory, Las Vegas, Nevada
Statistical techniques are presented for estimating the precision and accuracy
of gas chromatography / mass spectroscopy (GC/MS) analytical determinations by
using associated surrogate recoveries.
The statistical techniques employed involve variations on the so-called "using
the regression line in reverse" approach common in linear calibration. An appropriate
regression model is first identified using prior data. In the simple case, where
the means and variances of analyte and surrogate recoveries are constant with
respect to true concentration, prediction intervals for recovery are inverted to
provide confidence intervals for analyte concentration. Otherwise (if means
and/or variances are functions of concentration), variance stabilizing
transformations are first performed as needed, and then computations similar to
those used in conditional multivariate calibration provide confidence intervals
for analyte concentration. In either case, the resulting confidence intervals
provide the desired information regarding precision and accuracy.
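A bare-bones illustration of "using the regression line in reverse" in the simple,
constant-variance case (wholly hypothetical calibration data, assuming numpy; the paper's
techniques additionally invert the prediction interval to obtain the confidence interval):

    import numpy as np

    # Hypothetical prior data: instrument response y at known concentrations x
    x = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
    y = np.array([0.1, 4.8, 10.3, 19.6, 40.5])

    m, b = np.polyfit(x, y, 1)   # fit y = m*x + b on the prior data

    y_new = 15.2                 # a new observed response
    x_hat = (y_new - b) / m      # read the fitted line "in reverse" for concentration
    print(round(x_hat, 2))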
Examples of the use of the techniques are given, using data sets based on
quarterly blind performance evaluation studies conducted by the U.S. EPA Contract
Laboratory Program and other sources. For these data sets, intra-analysis
estimates of precision and accuracy obtained by the techniques presented here are
1-84
-------
compared and contrasted with inter-analysis estimates available from, for
example, matrix spike studies. The relationship of the improvement in intra-
analysis over inter-analysis estimates to the variability and correlation in
analyte and surrogate recoveries is explored with these examples.
Finally, we discuss the implications of these results regarding the usefulness
of surrogate and/or matrix spike recoveries in "correcting" analytical determina-
tions, and suggest statistical procedures that might be employed when such
"corrections" are warranted.
Notice: Although the research described in this article has been funded wholly
or in part by the United States Environmental Protection Agency through contract
number 68-CO-0049 to Lockheed Engineering & Sciences Company, it has not been
subjected to Agency review and therefore does not necessarily reflect the views
of the Agency and no official endorsement should be inferred.
1-85
-------
13    USE OF ORGANIC DATA AUDITS IN
QUALITY ASSURANCE OVERSIGHT OF SUPERFUND CONTRACT LABORATORIES
Edward J. Kantor
U.S. EPA Environmental Monitoring Systems Laboratory
Las Vegas, Nevada 89119
Mahmoud S. Hamid
Lockheed Engineering & Sciences Company
Las Vegas, Nevada 89119
ABSTRACT:
Organic data audits are performed to assess the technical quality of analytical
data and to evaluate overall laboratory performance. The technical data quality
is assessed on the basis of the total number of problems observed in the case.
The processes used to identify problems in organic analytical data, ranging from
a check of the quality control to a thorough investigation of the raw data
submitted with the case, will be presented. Besides providing the basis for
determining the technical quality, the number and type of problems provide a
mechanism to track data quality for the Contract Laboratory Program (CLP), or for
an individual laboratory, over time. Long-term tracking is accomplished by the
use of an audit comment data base that contains standardized comments explaining
common problems found within the data submitted by CLP laboratories. Each
comment represents an individual problem, and the frequency of use for the
comments is tabulated by the data base. Common problems observed during the past
year in CLP data packages such as calibration errors, failure to submit
deliverables, instrument contamination, and use of incorrect quality control
solutions will be presented.
Notice: Although the research described in this article has been supported by
the United States Environmental Protection Agency through contract 68-CO-0049 to
Lockheed Engineering & Sciences Company, it has not been subjected to Agency
review and therefore does not necessarily reflect the views of the Agency and no
official endorsement should be inferred.
1-86
-------
14    USE OF INORGANIC DATA AUDITS IN QUALITY ASSURANCE OVERSIGHT OF SUPERFUND CONTRACT
LABORATORIES
Robert B. Elkins. Lockheed Engineering & Sciences Company, Las Vegas, Nevada
William R. Newberry, U.S. Environmental Protection Agency, Environmental
Monitoring Systems Laboratory, Las Vegas, Nevada
ABSTRACT:
Inorganic data audits are performed to assess the technical quality of analytical
data and to evaluate overall laboratory performance. The technical data quality
is assessed on the basis of the total number of problems observed in the case.
The processes used to identify problems in inorganic analytical data range from
a check of the quality control to a thorough investigation of the raw data
submitted with the case. Besides providing the basis for determining the
technical quality, the number and type of problems provide a mechanism to track
data quality for the Contract Laboratory Program (CLP), or for an individual
laboratory, over time. Long-term tracking is accomplished by the use of an audit
comment data base that contains standardized comments explaining common problems
found within the data submitted by CLP laboratories. Each comment represents an
individual problem, and the frequency of use for the comments is tabulated by the
data base. Common problems observed during the past year in CLP data packages
include calibration errors, failure to submit deliverables, instrument
contamination, and use of incorrect quality control solutions.
Notice: Although the research described in this article has been supported by
the United States Environmental Protection Agency through contract 68-CO-0049 to
Lockheed Engineering & Sciences Company, it has not been subjected to Agency
review and therefore does not necessarily reflect the views of the Agency and no
official endorsement should be inferred.
1-87
-------
Improved Evaluation of Environmental
Radiochemical Inorganic Solid Matrix
Replicate Precision:
Normalized Range Analysis Revisited
Robert E. Gladd
James W. Dillard, Ph.D.
I.T. Corporation
OAK RIDGE LABORATORY
ABSTRACT: The Normalized Range statistic of EPA-600/4-81-004 provides an empirical
check on replicate precision, but for low-level, small-aliquot inorganic solid matrices
it generates spurious "outlier" statistics owing to the relative magnitudes of the
calculated and "expected" sigmas, the "expected" value being based on much larger
sample aliquots.
The monitoring of laboratory accuracy and precision at the
IT Oak Ridge Laboratory is guided by the statistical procedures
detailed in EPA-600/4-81-004, methods which provide gener-
ally practical empirical point estimators of analytical perform-
ance. Accuracy evaluation is accomplished through the use of
the "Normalized Deviation" statistic, in which the analytical
result of a spiked sample test is "normalized" to the "known"
value and "expected laboratory 1-sigma" precision; in traditional
statistical parlance, a "Z-transformation." Normalized Devia-
tion statistics (NDEVs) are computed and plotted on control
charts with a mean of zero, warning limits at +/-2.0s and control
limits of +/- 3.0s.
Similarly, the analytical precision of unknown replicates (i.e.,
samples where a "known" value from a reference standard is not
present) is assessed via the Normalized Range (NRANGE)
estimator, in which the numerical difference between replicate
results is evaluated in the context of both an "expected 1-sigma"
precision level and an "expected range" factor. NRANGE
statistics are computed and plotted on control charts containing
an X-Y origin of zero, an "expected range" of 1.0s, and warning
and control limits at +3.0s and +4.0s respectively (i.e., mean, or
"expected" range plus 2 and 3 sigma). NRANGE points lying
above the +4.0s Control Limit mandate an investigation of the
analytical data for the replicates in question to ascertain the
causes of the excessive divergence.
These statistical tools assume the presence of a liter or kilo-
gram sample aliquot, the latter necessitated by the EPA food
matrix crosscheck. Analytical results are adjusted to their respec-
tive activities at a liter or kilogram before the statistics are com-
puted. Where the samples are constituted of low-level, low-
volume inorganic solid matrices, the generation of spurious
"outlier" statistics is a recurring phenomenon, owing principally
to the relative magnitudes of the analytically determined 1-sigma
and the "expected 1-sigma" used by the NRANGE computa-
tion. This a-priori 1-sigma, while empirically appropriate for 1
kg. samples, imposes an unrealistic constraint on small aliquot in-
organic replicates. Clearly, in such cases a method of incorporat-
ing the analytically determined 1-sigmas must be employed to
make the NRANGE statistic reflect the true precision level of the
replicates; to the extent that these "outliers" are invalid they con-
1-88
-------
tribute to a misleading impression of laboratory precision capabili-
ties and result in unwarranted technical review of the replicate
sample data, an examination required of all "out-of-control" QC
results.
Since every quantitative sample result is essentially a point-
estimate of a "mean" value which approximates a "true" activity or
concentration level, it would be tempting to dismiss any "expected
sigma" constraints on replicate results, particularly of the types
under discussion here, and apply a sort of "t-test" on our experi-
mental "means" incorporating only the analytical 1-sigmas in
determining the acceptability of the range between replicate values.
Support for such a method derives from the fact that the relative
standard deviation (i.e., the "percent sigma," or coefficient of
variation) accompanying each production analysis is not routinely
evaluated against an "expected" sigma; it is generally accepted
that, as activities and/or aliquots are lesser, sigmas will tend to
be proportionally greater, frequently approaching or even exceeding
the magnitudes of the quantified activities themselves. An "expected
sigma," while serviceable in the main as an objective standard of
analytical variability, is frequently inappropriate in light of the
component measurement particulars of individual cases such as those
under review in this presentation.
The conventional t-test cannot be directly applied to replicate
analytical results owing to the fact that "N=1" for each sample
dataset, leaving us without "degrees of freedom" to employ in the
derivation of t-values. Given this problem, should we wish
to retain the mathematical simplicity of the NRANGE statistic, we
could simply replace the "expected" sigma in the formula with the
mean of the analytical percent sigmas. Such a replacement would
yield NRANGE statistics derived totally in the context of the error
terms of the lab results themselves, removing any empirically
"objective" variability standard in favor of the case-specific uncer-
tainty estimates. The virtue of such an approach would be to
remove any potential argument over whether an "NRANGE > 4"
calculated in such a fashion in fact represented an "out-of-control"
replicate set. Replicate results so divergent as to normalize out to
"NRANGE > 4" even after taking their own individual error
terms into account would indisputably be indicative of unaccept-
able precision and would indeed merit technical review to deter-
mine the causes of the disparity.
Alternatively, eschewing the NRANGE formula entirely, we
might statistically examine our replicates via one of two variations
on a "Z-test" formulation, employing either the mean of the
analytical sigmas or the square root of the sum of the variances as
divisors of the replicate range, as shown in the box below.
Under either "z-score"
approach (Eq. 1 & 2 in
box) our "outliers" would
be those resulting in a z-
statistic > 3.0 absolute.
This approach makes in-
tuitive sense, but is a bit
bothersome in that any
utilization of a reference
value such as the "ex-
[2]
NRadi - NR
acrj
eDa
epa
[3]
«s again pre-
cluded, a tactic that
contravenes an implicit
assessment principle of the
EPA-600 method: the
application of empirical
guideposts to laboratory
precision capability ac-
counting. A simple ad-
justment to the
NRANGE statistic is
therefore proposed, one
that incorporates both the
EPA sigma factor a/nd the
mean of the analytical
sigmas into the
NRANGE calculation, as
shown by Eq. 3 above: a ratio of "expected" over "found."
It should be readily apparent that where the mean lab sigma
nearly equals the "expected" sigma the NRANGE statistic will be
quite close to the value returned by the standard method. Where
the lab error coefficient is greater than the expected, the NRANGE
will be attenuated by the ratio of the two. Further, where the mean
lab sigma coefficient is smaller than the expected, the adjustment
factor will be > 1.0, thereby expanding the NRANGE value. In
this manner the sigma-ratio adjustment factor is a double-edged
sword; if individual error terms are better than the expected, the
results had better be minimally divergent to avoid being pushed
into "out-of-control" status. Such a condition makes methodo-
logical sense; analytical sigmas are mathematical expressions of
our confidence in our quantitative estimates. Replicates returning
better-than-expected error terms and grossly disparate "means"
are indicative of a condition warranting quality control review.
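A minimal sketch of the three formulations follows (Python; this is
our illustration, not code from the paper, and the variable names are
ours). It reproduces the Sr-90 example discussed below.

    import math

    def nr_adjusted(nr, cv_epa, cv_lab_mean):
        # Eq. 3: scale the standard NRANGE by "expected" over "found"
        return nr * (cv_epa / cv_lab_mean)

    def z_alt1(r1, r2, s1, s2):
        # Eq. 1: replicate range over the mean of the analytical 1-sigmas
        return abs(r1 - r2) / ((s1 + s2) / 2.0)

    def z_alt2(r1, r2, s1, s2):
        # Eq. 2: replicate range over the root of the summed variances
        return abs(r1 - r2) / math.sqrt(s1**2 + s2**2)

    # Sr-90 duplicates from the example below: 3.30 and 1.68 dpm,
    # analytical 1-sigmas 0.59 and 0.52, CVs 17.86% and 30.91%
    print(nr_adjusted(14.73, 0.05, (0.1786 + 0.3091) / 2))  # ~3.02 (text: 3.01)
    print(z_alt1(3.30, 1.68, 0.59, 0.52))                   # ~2.92, inside |z| < 3
    print(z_alt2(3.30, 1.68, 0.59, 0.52))                   # ~2.06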
A graphical example effectively illustrates the problem posed by the
application of an expected sigma to a low-level, small-aliquot solid
matrix duplicate result set. Two Sr-90 results, at 3.30 and 1.68 dpm
respectively, are graphed as normal distributions (see box, below),
first with an assumption of a 5% CV, then overlaid using sigmas
derived in the analyses (0.59 and 0.52 1-sigmas, respectively). While
the results are displayed as narrow, peaked distributions whose tails
are quite far apart under a 5% CV assumption, when viewed in the
distributional context of the analytically derived sigmas, quite
another picture emerges; the tails of the distributions overlap
substantially. The traditional NRANGE statistic for this set came in
at NR = 14.73, while the "corrected" NR = 3.01, and this adjusted
Normalized Range value seems appropriate; our replicates diverge,
perhaps more than we would prefer, but certainly not to the extent
indicated by a Normalized Range of 14.73. Were these replicate results
those of a full kilogram vegetation matrix emanating hundreds of dpm,
we would perhaps have cause for concern at the disparate replicate
values returned by the lab. In the instance of 2.07 gram inorganic
aliquots evincing a few dpm, however, the range between R1 and R2 is
not all that severe, certainly not to the point implied by an NRANGE
statistic of 14.73. The relative standard deviations (the CVs) for
R1 and R2 were, respectively, 17.86% and 30.91%.
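Working the adjustment arithmetic through with these figures (our
arithmetic, following Eq. 3): the mean lab CV is (17.86% + 30.91%)/2 =
24.39%, so NR_adj = 14.73 x (5 / 24.39) = 3.02, agreeing within
rounding with the corrected NR = 3.01 quoted above.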
EPA-600 expected 1-sigma % precision guidelines are grouped
by type of analysis into four levels: 5%, 10%, 15%, and 25%. An
examination of 116 inorganic solid matrix replicate results from
our RADQC™ laboratory QC database is revealing. As might be
expected, most of the NRANGE difficulty lies with the analyses
classified in the "5% precision" group, as the following table
illustrates:
     N     EPA 1-sigma    LAB 1-sigma
    68        0.05           0.125
     4        0.10           0.121
    38        0.15           0.159
     6        0.25           0.232

The column on the right tabulates the average CVs found in actual
practice, grouped by the EPA sigma classifications. It is evident
that a 5% sigma represents an unrealistic level of precision where
small aliquot solids are concerned. The application of the NRANGE
sigma ratio adjustment factor to the 116 samples investigated for
this research effort reduced the "NR > 4.0" outliers by 75%, and the
attenuated "adjusted NR" statistics were overwhelmingly of a magnitude
consistent with the type of graphical evidence obtained by plotting
the gaussian distributions in the manner of the above example.

[Box: RADQC solid matrix duplicates -- Sr-90 @ 2.07 grams; the 3.30
and 1.68 dpm replicates plotted as normal distributions, in dpm, with
5% sigmas and with the analytical sigmas.]
Further statistical support for the use of the NR adjustment is
seen by a correlation matrix comprised of the values obtained for
this QC data under the formulas displayed in Eqs. 1, 2, and 3:
              NR_adj    Z alt1    Z alt2
    NR_adj     1.000     0.903     0.925
    Z alt1     0.903     1.000     0.956
    Z alt2     0.925     0.956     1.000
The high Pearson-R correlations among the three methods indicate a
significant agreement between the two Z-score variations and the
adjusted NRANGE method; we are measuring the same phenomenon,
irrespective of algebraic method. Any of these formulations serve to
reduce the quantity of spurious QC outliers, improving the assessment
of inorganic solid matrix precision.
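The agreement check itself is a one-liner; a sketch (ours, assuming
per-replicate-set arrays of the three statistics):

    import numpy as np

    def method_agreement(nr_adj, z_alt1, z_alt2):
        # 3 x 3 Pearson correlation matrix across the three outlier
        # statistics, one value of each per replicate set
        return np.corrcoef(np.vstack([nr_adj, z_alt1, z_alt2]))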
A second graphic plot example is provided below to further
demonstrate the utility of the NRANGE adjustment method.
This replicate set consisted of Gross Beta analyses performed on
aliquot weights of 264 and 258 mg. The dpm results were
calculated to be 112 +/- 23 and 134 +/- 28 (2-sigmas). The
unadjusted NR=4.38, just slightly over into the outlier realm.
The "corrected" NR=2.11, and again, the distribution plots seem
to provide visual agreement with the numerical statistic. The
mean CV was 10.36%.
ROBERT E. GLADD is Vice President of CAMS Associates of
Knoxville, TN. He has served since 1986 as a statistical analyst and
computer applications consultant to the IT Oak Ridge Laboratory.
JAMES W. DILLARD, Ph.D. is the Technical Director of the
Oak Ridge Radioanalytical Laboratory of IT Corporation.
[Box: RADQC solid matrix duplicates -- Gross Beta @ 264 and 258 mg;
the 112 and 134 dpm replicates plotted as normal distributions, in
dpm, with 5% sigmas and with the analytical sigmas.]
A quick look at some of the aggregate univariate and correlation
statistics for the dataset of solid matrix replicates used in this
effort is useful. The median uncorrected NRANGE was 2.14, with a mean
of 3.74, while the median adjusted NRANGE was 1.48, with a mean of
2.02. The smallest aliquot was 4.8 mg, and the largest was 841.9
grams, with a median of 2.07 grams and a mean of approximately 69
grams. The lab sigma precision was, as we might expect, inversely
correlated with aliquot size (R = -.25, statistically "significant"
with p = .007). The median lab sigma precision was 12.17%, with a mean
of 14.04%. One lesson flowing from these data is that perhaps those
types of QC analyses currently classified as requiring "5% expected
1-sigma precision" should instead be evaluated using a 15% expected
precision statistic where the samples are those of inorganic solid
matrices. The NRANGE sigma ratio adjustment factor employed here
essentially performs roughly that very sort of task in attenuating the
Normalized Range where the sigmas are closer to 15% in the laboratory
production environment.
REFERENCES
Kanipe, Larry G. (1977), Handbook for Analytical Quality Control
in Radioanalytical Laboratories, TVA/USEPA Interagency Energy-
Environment Research & Development Program (EPA 600/7-77-088).
Walpole, Ronald E. & Myers, Raymond H. (1989), Probability and
Statistics for Engineers and Scientists, Fourth Edition, New York,
Macmillan.
EPA 600/4-81-004, USEPA, EMSL Office of Research & Development,
Las Vegas, NV.
Copyright 1990 IT CORPORATION
All Rights Reserved
1550 Bear Creek Road
Oak Ridge, TN 37830
1-91
-------
16  Laboratory On-site Evaluations
as a Tool for Assuring Data Quality
Timothy J. Meszaros, Lockheed Engineering & Sciences Company,
Las Vegas, Nevada
Gary L. Robertson, U.S. Environmental Protection Agency,
Environmental Monitoring Systems Laboratory-Las Vegas, Nevada
In its role of providing quality assurance support to the Superfund Office, the
Environmental Protection Agency's Environmental Monitoring Systems Laboratory-Las
Vegas has developed a program for conducting laboratory on-site evaluations ("on-
sites"). Although developed principally for use with Superfund's national
Contract Laboratory Program (CLP), "on-sites" have been incorporated as an
integral element in an overall scheme of laboratory performance evaluation for
activities supporting the Resource Conservation and Recovery Act and the
Department of Energy.
The purposes of the laboratory on-site evaluation, together with the
complementary performance evaluation sample program, are (1) to determine whether
a given laboratory can effectively perform required analyses and (2) to identify
laboratory problems that negatively impact performance. As a part of the "on-
site," the following elements are reviewed: the laboratory's quality assurance
plan; its standard operating procedures; its utilization of available space;
adequacy of personnel; instrumentation capacity; sources of possible
contamination; and the ability to perform the required analyses.
This presentation will describe those good laboratory practices examined in the
"on-site," and highlight the types of problems encountered and their potential
impact upon data quality.
Notice: Although the research described in this article has been funded wholly
or in part by the United States Environmental Protection Agency through contract
number 68-CO-0049 to Lockheed Engineering and Sciences Company, it has not been
subjected to Agency review and therefore does not necessarily reflect the views
of the Agency and no official endorsement should be inferred.
1-92
-------
17
APPLICATION OF BIAS CORRECTION
D. Syhre, Laboratory Supervisor, Browning-Ferris Industries Houston Lab,
5630 Guhn Road, Houston, Texas, 77040.
ABSTRACT
EPA has required bias correction of analytical data generated under the
Toxicity Characteristic Leaching Procedure (TCLP, SW-846, Method 1311)
since September of 1990. In order to determine the validity of correcting
data based on recoveries of matrix spikes, two Standard Reference
Materials (SRMs) were created from real environmental samples. These SRMs
were spiked with stable isotopically labeled compounds. Results were
corrected for bias by three different methods: 1) on an individual sample
basis per individual compound; 2) on an individual compound per batch of
ten basis; and 3) on a class of compounds per individual sample basis.
Results of uncorrected and corrected recoveries were compared to known or
"true" concentrations and conclusions drawn about the effect bias
correction has on data quality.
INTRODUCTION
The 1984 Hazardous and Solid Waste Amendments to the Resource Conservation
and Recovery Act (RCRA) directed the EPA to re-examine the Toxicity
Characteristic (TC) portion of the Extraction Procedure Toxicity test [1]
and make changes necessary to better address the leaching of wastes and to
regulate additional characteristics. A revision to the TC rule was
published in the Federal Register (55FR 11798) which added 25 organic
constituents, and a later revision [2] added quality control measures that
require that the bias determined from matrix spike recoveries be used to
correct the analytical data. In January of 1991 EPA extended bias
correction requirements to the Land Disposal Restriction Rules, and has
proposed including bias correction in Chapter 1 of SW-846. The latter
inclusion would extend bias correction requirements to all RCRA testing.
Despite the proliferation of bias correction requirements, little has been
published regarding how well this technique works. If one is to really
know how well bias correction works, one must know the true concentration
of an analyte. Merely spiking deionized water does not provide the
information needed about recoveries since one would expect uniformly good
results from an "in control" analysis. For this reason two SRMs made from
real environmental samples were created. A water matrix was chosen for
both types of matrices in order to overcome potential mixing problems.
The SRM for Cases A and B was created from 11 liters of sanitary landfill
leachate. SRM Case C was created from a single monitor well from a closed
remediation site. Method 1311 currently requires that spiking for
correction purposes be done on a batch basis (one spiked sample per twenty
samples of a similar matrix).
Proposed changes to Chapter 1 of SW-846 would also require spiking on a
batch basis, but may allow correction by representative compound class.
No bias correction would be required for "self-correcting" methods such as
isotope dilution. Cases A, B, and C are experiments on the use of bias
correction in each of the above ways. In all cases, uncorrected and
corrected data were compared with the true concentrations and changes in
accuracy and uncertainty calculated.
METHODOLOGY
Case A
Eleven liters of sanitary landfill leachate from the same point source were
collected, composited, and three liters removed for analysis as a sample,
duplicate, and spike. The remainder was spiked with the entire list of TC
volatile and semivolatile parameters. This amount is subsequently
referred to as the "true" concentration and was the SRM for both Case A
and Case B. See Figure 1 for the processing of this SRM.
The seven liters of the SRM were then filtered in the TCLP prescribed
manner for volatiles and semivolatiles. It has been our experience that
leachate often contains a large amount of dissolved gases which may cause
foaming during the volatile purge step. Because of this, we routinely
dilute these samples prior to analysis. We had anticipated that a five
fold dilution of the SRM would be adequate, but the sample foamed too
badly and a ten fold dilution was needed. Prior to each analysis for
volatile constituents, the extract (filtrate) was diluted ten fold and
spiked with isotopically labeled volatile TC constituents. These spikes
were used to produce spike recovery correction factors.
Prior to each of the 7 extractions, the semivolatile filtrate was diluted
(one to five) and spiked with the entire list of isotopically labeled TC
semivolatile constituents. A separatory funnel extraction was performed
due to time constraints. These spikes were also used for spike recovery
correction factors.
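The correction itself, as reflected in the tables that follow, divides
a native result by the spike-recovery fraction; a minimal sketch (our
illustration, not code from this study):

    def bias_correct(found, recovery_pct):
        # e.g., hexachlorobutadiene found at 33 ppb with a 70% spike
        # recovery corrects to 33 / 0.70 = 47 ppb (cf. Table 5)
        return found / (recovery_pct / 100.0)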
Spike amounts for volatile and semivolatile parameters varied on a
compound basis. These levels had been predetermined to give adequate
recoveries from a leachate matrix without exceeding regulatory levels when
dilution factors are considered. Volatile analyses were performed via
methods 8240/8260 from SW-846 and semivolatile analyses were performed in
accordance with method 8270. Spiking levels corrected for dilution
factors are reported in Table 5.
Case B
The uncorrected mean recoveries of the native compounds from Case A were
experimentally inserted into a batch of nine other sanitary landfill
leachates. However, each of the nine other samples in the batch of ten
had also been spiked with the entire volatile and semi-volatile TC list,
and recoveries calculated. True concentrations of these nine samples were
not known. Spike recoveries from each of the nine samples were used to
individually correct results of the native compounds found in the SRM.
Case C
A simulated monitor well was constructed using a five foot section of
Corning glass pipe fitted with Teflon [3] endcaps. Each endcap was fitted
with a Teflon stopcock. Total well volume was about 12.6 liters. The top
endcap of the monitor well was removed and the well filled with sample
from a single contaminated groundwater monitoring well. A volatile and a
semivolatile sample were removed from the bottom of the well to determine
pre-spike constituents and concentrations. The well was re-filled with
groundwater which had been spiked with the following classes of volatile
parameters: Saturated Chlorocarbons; Unsaturated Chlorocarbons;
Aromatics; Bromocarbons. The constituents within each class are found in
Table 1. The endcap was replaced and the small remaining volume filled
via access through the stopcock. Top and bottom stopcocks were connected
by Teflon tubing to a pump calibrated to deliver one well volume every 5.6
hours. The stopcocks were opened and the pump turned on for 23 hours
during which time about four well volumes were recirculated, enough for
adequate mixing of the sample. At this point the well was considered to
contain a SRM.
The pump was stopped, stopcocks closed and the top cap removed. A three
foot top section was added to prevent overflow and the well was sampled
seven times by the lab's field crew. The volatile samples were packed
with ice and shipped by Federal Express to ourselves (with routing through
Memphis). Just prior to purging, each of the seven samples were spiked
with a specially prepared surrogate mix that contained stable isotopically
labeled compounds representative of each class of compounds. Recoveries
of each representative compound were then used to bias correct all amounts
of native compounds within that class found in the SRM. Results were
compared to the true concentration in the same way as in Cases A and B.
The simulated monitor well was prepared for a semivolatile experiment in the
same manner. The semivolatile classes were: Nitroaromatics; Acidic
Compounds (phenols); Nitrosoamines; Bases; Polynuclear Aromatics;
Chlorinated Hydrocarbons; and Phthalates. Just prior to extraction, each
sample was spiked with labeled representative compounds (see Table 4).
Results were treated the same way as the volatiles.
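A sketch of the class-based bookkeeping (our illustration; the
representative compounds are those listed in Tables 1 and 4):

    # labeled representative assigned to each volatile class
    CLASS_REPRESENTATIVE = {
        "saturated chlorocarbons":   "chloroform-13C",
        "unsaturated chlorocarbons": "1,1-dichloroethene-d2",
        "aromatics":                 "benzene-d6",
        "bromocarbons":              "bromoform-13C",
        "gases":                     "vinyl chloride-d3",
    }

    def class_correct(found, compound_class, recoveries_pct):
        # recoveries_pct maps each labeled representative to its %
        # recovery; every native compound in the class is corrected by
        # the recovery of that class's representative
        rep = CLASS_REPRESENTATIVE[compound_class]
        return found / (recoveries_pct[rep] / 100.0)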
RESULTS AND DISCUSSION
Case A
Seven replicates of leachate SRM were subjected to normal statistical
treatment. The sample data consisted of three sets:
1) Recovery of known value.
2) Recovery of spiked isotopes.
3) Bias correction of SRM values with recovery of labeled compounds.
The mean and standard deviation for each set of data were calculated. A
confidence interval at the 99% level of confidence was calculated as:

    X   = mean          S = small population standard deviation
    CI  = confidence interval = (S / square root of 7) * 3.707
    LCL = lower control limit = X - CI
    UCL = upper control limit = X + CI

The confidence interval of a data set was considered a reflection of
uncertainty in the data. Changes in uncertainty that bias correction
produced were calculated as follows:

    % uncertainty = (corrected CI - uncorrected CI) * 100 / uncorrected CI
Change in accuracy is best looked at as a relative indication of the
closeness of a corrected versus uncorrected result to a true value.
Accuracy increases as recovery approaches 100%, but begins to decrease as
recoveries exceed 100% of the true value. Changes in accuracy were
calculated as follows:

    UCX = uncorrected mean % recovery of true value
    CRX = corrected mean % recovery of true value
    ABS = absolute value

    Accuracy change = ABS(100 - UCX) - ABS(CRX - 100)
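A sketch of these statistics in code (ours, not the paper's; 3.707 is
the two-sided 99% t-value for 6 degrees of freedom):

    import math, statistics

    def ci_99(replicates):                  # the seven replicate results
        s = statistics.stdev(replicates)    # (n-1) standard deviation
        return (s / math.sqrt(len(replicates))) * 3.707

    def uncertainty_change_pct(ci_corrected, ci_uncorrected):
        return (ci_corrected - ci_uncorrected) * 100 / ci_uncorrected

    def accuracy_change(ucx, crx):
        # ucx, crx: uncorrected / corrected mean % recovery of true value
        return abs(100 - ucx) - abs(crx - 100)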
Table 2 summarizes the uncertainty and accuracy of bias corrected data
from Case A.
Case B
While Case A was an experiment of bias correction on an individual sample
basis, Case B was an experiment on a batch basis. Nine leachate samples
were spiked and recoveries calculated. These recoveries, tabulated in
Table 3, were used to bias correct the mean native values found in the
leachate SRM. Since the proposed EPA guidelines for bias correction in
SW-846 Chapter 1 state that bias correction should not be done if
recoveries are greater than 80%, some recoveries were not used. Accuracy
and uncertainty results for corrected data are summarized in Table 5.
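A sketch of the batch correction with that exclusion rule (our code;
recoveries above 80% are left unused):

    def batch_correct(native_mean, recoveries_pct, cutoff=80.0):
        usable = [r for r in recoveries_pct if r <= cutoff]
        return [native_mean / (r / 100.0) for r in usable]

    # o-cresol from Tables 3 and 5: native mean 1036, batch recoveries
    # 60, 60, 50, 72, 62, 108, 100, 58, 17 -> seven corrected values,
    # 1727 through 6094
    print(batch_correct(1036, [60, 60, 50, 72, 62, 108, 100, 58, 17]))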
Case C
Case C was an experiment in bias correcting an entire class of compounds
based on recovery of a single representative compound from that compound
class on an individual sample basis. Volatiles were divided into five
classes and semivolatiles into six classes. Results are summarized in
Tables 6 and 7.
SUMMARY
Case A
Application of bias correction on an individual compound per individual
sample basis resulted in an overall increase of accuracy by about 9%.
Uncertainty also increased, however, by about 19%.
Case B
Application of bias correction on an individual compound per batch basis
resulted in an average reduction in accuracy of 119%. Calculation of
change in uncertainty was not as straight forward since not every data
point was used per EPA proposed instruction. If batch corrected data
for a compound contained at least 3 data points, then the change in
uncertainty was calculated as follows:
    CIB = batch corrected confidence interval
    CII = individual sample corrected confidence interval

    % uncertainty = (CIB - CII) * 100 / CII
The average percent increase in uncertainty for Case B was 352%! See
Figure 2.
Case C
Application of bias correction on a class of compounds within an
individual sample was averaged on a volatile and semivolatile basis.
Compounds which were corrected by their own isotopes were not included for
averaging purposes. Volatile data increased in percent uncertainty by
55%, but also increased in accuracy by only 8 units. Semivolatiles increased in
accuracy by 4 units and increased in uncertainty by 13%.
The above results clearly show that bias correction on a batch basis
performs very poorly. Case A was essentially an isotope dilution method.
Case C gave results that were surprisingly similar to Case A on an average
basis, but neither instance showed marked improvement in accuracy. Bias
correction also failed to correct a glaring case of systematic error in
the TC method. The average recovery of native hexachlorobenzene was a
dismal 3.49%. There was no error in sample spiking, since the control
sample yielded an excellent 90%. These results are consistent with other
studies conducted at this laboratory: hexachlorobenzene fails to survive
the TC filtration step. The mean corrected value was only 3.87% of the
original concentration in the SRM.
Many people in the industry have lost sight of the fact that every
laboratory in the country using GC/MS methods from SW-846 is already
employing bias correction in a Case C type manner. Every volatile and
semi-volatile analysis goes through an extraction step and a concentration
step. The volatile extraction step is usually referred to as the purge
step. The volatile extract is concentrated onto a trap and then injected
into the instrument. The semi-volatiles are extracted into a liquid
solvent, and the extract concentrated to 1 mL. Internal standards are
used for quantification. The use of internal standards means that all
data is bias corrected.
The only difference between volatile and semi-volatile analysis is in when
(or where) the internal standards are added. In the volatile analysis,
internal standards are added prior to extraction. The semi-volatile
internal standards are added after extraction. This is a critical
difference which may partly explain why bias corrected data for volatile
analytes seems more consistent than semi-volatile data.
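For reference, a sketch of the internal-standard quantitation that
produces this built-in correction (textbook form, not taken from the
paper):

    def quantify(area_analyte, area_is, amount_is, rrf):
        # analyte amount from the ratio of analyte to internal-standard
        # response; losses shared by the analyte and its IS cancel out
        return (area_analyte * amount_is) / (area_is * rrf)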
Perhaps two small changes would be in order:
1) Add semi-volatile internal standards prior to extraction
2) Change the list of internal standards for both volatiles and
semi-volatiles.
These changes would make data that have already been bias corrected
through use of internal standards more representative of the sample.
Currently, internal standards are assigned to an analyte purely on the
basis of retention time. Compounds of very different chemical type share
the same retention time. Careful examination indicates that data quality
improves when analyte recovery is corrected by compounds which more
closely resemble the analyte in terms of both chemical and physical
characteristics. Recommended internal standards are:
    Volatile                    Semi-volatile
    vinyl chloride-d3           n-nitrosodimethylamine-d6
    1,1-dichloroethene-d2       phenol-d6
    chloroform-13C              nitrobenzene-d5
    benzene-d6                  hexachloroethane-13C
    chlorobenzene-d5            hexachlorobenzene-13C6
    bromoform-13C               aniline-d5
                                di-n-butyl phthalate-d4
                                pentachlorophenol-13C6
                                benzo[a]pyrene-d12
ACKNOWLEDGEMENTS
I would like to express my appreciation to the following individuals and
their organizations for their help and participation in this study:
M. Rudel, M. Hackfeld, M. Moore, H. Valdez, A. Valdez, R. Palomino,
M. Noto, T. McKee, S. Mondrik, E. Clemons - BFI Laboratory, Houston,
TX
F. Thomas - Chemical Waste Management, Riverdale, IL
M. O'Quinn - Encotec, Ann Arbor, MI
W. Ziegler - ThermalKEM, Rock Hill, SC
Environmental Laboratory Council, Washington, DC
REFERENCES
[1] Method 1310 from Test Methods for Evaluating Solid Waste,
Physical/Chemical Methods, November 1986, Third Edition, USEPA,
SW-846 and additions thereto.
[2] Federal Register, Vol. 55, No. 126, Friday, June 29, 1990, pp.
26986-98.
[3] "Teflon" is a registered trademark of the E. I. DuPont Corporation.
DLS/slm
Figure 1

    Landfill Leachate Sample, 11 liters
      |
      |-- Control Sample, 3 liters:
      |     analyzed in duplicate -- non-detect levels for OTC
      |     constituents; duplicate spike -- good recoveries
      |
      |-- Experimental Sample, 7 liters, spiked with OTC constituents
            --> SRM of known concentrations
                  |
                  |-- 7 replicates, volatile analysis:
                  |     spike-recovery-corrected concentration in SRM
                  |     based on value of each of 9 other spike
                  |     replicates in batch
                  |
                  |-- 7 replicates, semivolatile analysis:
                        spike-recovery-corrected concentration based
                        on isotope spikes to SRM sample
Figure 2

    [Bar chart: Comparison of Individual vs Batch Based Corrections.
    Y axis: percent recovery corrected, +500 to -500, with the mean
    marked; paired bars show batch-based corrections versus individual
    corrections for nitrobenzene, hexachlorobutadiene, o-cresol,
    2,4-dinitrotoluene, and pyridine.]
Table 1  CASE C ORGANIC VOLATILES BY COMPOUND CLASS
(compounds spiked at 200 ppb in DI H2O; each analyte row is followed by
its corrected-recovery row; the last compound in each class is the
labeled internal standard. CHACC = change in accuracy; REL%UNC =
relative % change in uncertainty.)

SATURATED CHLOROCARBONS
Compound                    Mean      STD       CI        LCI       UCI       Mean%Re   %RSD      CHACC     REL%UNC
CHLOROMETHANE               160       25.37059  35.54375  124.4562  195.5438  80        15.85662  11.85889  25.75486
  (corrected)               183.7178  31.90475  44.69799  139.0198  228.4158  91.85889  17.36617  -7.43032  -53.2495
CHLOROETHANE                168.8571  14.91564  20.89655  147.9606  189.7537  84.42857  8.833287  12.04631  -23.8756
  (corrected)               192.9498  11.35443  15.90736  177.0424  208.8571  96.47489  5.884657  -7.33203  132.1096
METHYLENE CHLORIDE          178.2857  26.35472  36.92251  141.3632  215.2082  89.14286  14.7823   8.595805  21.84452
  (corrected)               204.5227  32.11179  44.98805  159.5346  249.5107  102.2613  15.70034  -13.5244  -52.5379
1,1-DICHLOROETHANE          168.4286  15.24092  21.35227  147.0763  189.7808  84.21429  9.048894  12.2068   15.54175
  (corrected)               192.8422  17.60963  24.67078  168.1714  217.513   96.42109  9.131627  -15.4571  -20.1704
CHLOROFORM                  165.5714  14.0577   19.6946   145.8768  185.266   80.96402  8.490417  13.54218  -70.0991
  (corrected)               189.0124  4.203377  5.888858  183.1236  194.9013  94.5062   2.223863  -30.8633  175.2959
CARBON TETRACHLORIDE        127.2857  11.57172  16.21178  111.0739  143.4975  63.64286  9.09114   9.076084  -29.8286
  (corrected)               145.4379  8.120035  11.37603  134.0619  156.8139  72.71894  5.583164  23.71317  281.4405
1,2-DICHLOROETHANE          209       30.97311  43.39278  165.6072  252.3928  103.5679  14.81967  -15.6283  -19.6459
  (corrected)               238.3924  24.88816  34.86788  203.5245  273.2602  119.1962  10.44     4.624747  -61.6409
1,2-DICHLOROPROPANE         170.8571  9.546877  13.37501  157.4821  184.2322  85.42857  5.587637  12.36843  8.453149
  (corrected)               195.594   10.35389  14.50562  181.0884  210.0996  97.797    5.293561  -20.8684  116.3545
1,1,2-TRICHLOROETHANE       153.8571  22.40111  31.38356  122.4736  185.2407  76.92857  14.55968  10.55197  -44.6627
  (corrected)               174.9611  12.39616  17.3668   157.5943  192.3279  87.48054  7.085093  -14.2663  -61.6337
1,1,2,2-TETRACHLOROETHANE   146.4286  4.755949  6.663001  139.7656  153.0916  73.21429  3.247965  11.03354  311.4857
  (corrected)               168.4957  19.57005  27.4173   141.0784  195.9129  84.24783  11.61457  -3.6764   -15.3949
1,1,1-TRICHLOROETHANE       161.1429  16.55726  23.19643  137.9464  184.3393  80.57143  10.2749   12.11037  68.79756
  (corrected)               185.3636  27.94825  39.15501  146.2086  224.5186  92.6818   15.07753  -5.03894  -44.8891
CHLOROFORM-13C              175.2857  15.40254  21.57868  153.707   196.8644  87.64286  8.787102  -87.6429  -100

UNSATURATED CHLOROCARBONS
VINYL CHLORIDE              149.2857  20.5322   28.76526  120.5205  178.051   74.64286  13.75363  11.69267  -58.0827
  (corrected)               172.6715  8.606541  12.05761  160.6138  184.7291  86.33573  4.984345  -6.2706   28.83677
CIS-1,2-DICHLOROETHENE      161.5714  11.08839  15.53464  146.0368  177.1061  80.06513  6.862841  14.1719   63.74059
  (corrected)               188.4741  18.15619  25.43651  163.0376  213.9106  94.23703  9.633259  -23.8799  -27.2979
TRANS-1,2-DICHLOROETHENE    140.7143  13.19993  18.49287  122.2214  159.2072  70.35714  9.380659  12.39876  117.8068
  (corrected)               165.5118  28.75034  40.27872  125.2331  205.7905  82.7559   17.37056  -11.9702  -71.8245
TRICHLOROETHENE             141.5714  8.100558  11.34874  130.2227  152.9202  70.78571  5.721888  11.71105  53.97156
  (corrected)               164.9935  12.47256  17.47383  147.5197  182.4674  82.49677  7.559421  16.07466  90.48787
CIS-1,3-DICHLOROPROPENE     202.8571  23.75871  33.28554  169.5716  236.1427  101.4286  11.71204  -16.7135  22.66144
  (corrected)               236.2842  29.14277  40.82852  195.4557  277.1127  118.1421  12.33378  -23.3579  -69.4369
1,1-DICHLOROETHENE          117       8.906926  12.47845  104.5216  129.4784  58.5      7.612757  9.946844  101.749
  (corrected)               136.8937  17.96964  25.17515  111.7185  162.0688  68.44684  13.12671  -39.6754  -46.5632
TRANS-1,3-DICHLOROPROPENE   57.54286  9.602405  13.4528   44.09005  70.99566  28.77143  16.6874   4.729886  17.94579
  (corrected)               67.00263  11.32563  15.86702  51.13561  82.86965  33.50132  16.90327  15.0344   140.8324
TETRACHLOROETHENE           97.07143  27.2758   38.21292  58.85851  135.2843  48.53571  28.09869  7.431575  2.208025
  (corrected)               111.9346  27.87805  39.05667  72.87791  150.9912  55.96729  24.90567  37.03271  -32.51
CHLOROBENZENE               214       18.81489  26.35933  187.6407  240.3593  107       8.792004  -18.3474  93.61075
  (corrected)               250.6949  36.42765  51.0345   199.6604  301.7294  125.3474  14.53067  11.77602  -40.4139
1,1-DICHLOROETHENE-D2       172.8571  21.70583  30.40948  142.4477  203.2666  86.42857  12.55709  -86.4286  -100
Table 1 (Cont.)

AROMATICS
Compound                    Mean      STD       CI        LCI       UCI       Mean%Re   %RSD      CHACC     REL%UNC
BENZENE                     1600      74.16198  103.8997  1496.1    1703.9    88.88889  4.635124  8.34939   15.60694
  (corrected)               1849.711  85.7364   120.1152  1729.596  1969.826  102.7617  4.635124  -50.1864  -87.8242
TOLUENE                     101.9143  10.4391   14.62499  87.28929  116.5393  47.05184  10.24302  11.85815  15.60694
  (corrected)               117.82    12.06832  16.90751  100.9125  134.7275  58.90999  10.24302  -1.78239  55.90311
CHLOROBENZENE               214       18.81489  26.35933  187.6407  240.3593  57.1276   8.792004  19.17298  15.60694
  (corrected)               247.3988  21.75132  30.47321  216.9256  277.8721  123.6994  8.792004  -24.717   -65.9896
ORTHO-XYLENE                109.3571  7.397715  10.36407  98.99307  119.7212  51.58356  6.76473   11.62866  15.60694
  (corrected)               126.4244  8.552272  11.98158  114.4429  138.406   63.21222  6.76473   -12.9701  13.33718
ETHYL BENZENE               118.5714  9.692904  13.57959  104.9918  132.151   50.24213  8.174738  23.74426  98.42748
  (corrected)               147.9728  19.23338  26.94564  121.0271  174.9184  73.98639  12.99792  -16.4203  -85.0339
STYRENE                     116.5714  2.878492  4.032717  112.5387  120.6041  57.56614  2.469294  15.29352  521.9419
  (corrected)               145.7193  17.90254  25.08115  120.6382  170.8005  72.85965  12.28564  6.240875  -30.3506
(M+P)-XYLENE                170.8571  12.46901  17.46887  153.3883  188.326   79.10053  7.297915  14.44745  91.47209
  (corrected)               212.904   23.87467  33.448    179.456   246.352   106.452   11.21382  -12.4766  -12.0929
BENZENE-D6                  162.1429  20.98752  29.40316  132.7397  191.546   81.07143  12.94385  -81.0714  -100

BROMOFORMS
BROMOMETHANE                167.5714  12.35391  17.30761  150.2638  184.879   83.78571  7.372324  10.35554  12.35955
  (corrected)               188.2825  13.8808   19.44675  168.8358  207.7293  94.14125  7.372324  -19.7127  46.01783
BROMODICHLOROMETHANE        148.8571  20.26844  28.39573  120.4614  177.2529  74.42857  13.61603  9.199037  12.35955
  (corrected)               167.2552  22.77352  31.90531  135.3499  199.1605  83.62761  13.61603  -28.1133  37.06562
DIBROMOCHLOROMETHANE        111.0286  31.21467  43.73121  67.29736  154.7598  55.51429  28.11409  6.861316  12.35955
  (corrected)               124.7512  35.07266  49.13619  75.61501  173.8874  62.3756   28.11409  2.174663  -82.2813
BROMOFORM                   139.4286  6.214423  8.706298  130.7223  148.1349  64.55026  4.457065  13.78039  12.35955
  (corrected)               156.6613  6.982497  9.782357  146.879   166.4437  78.33066  4.457065  5.455056  85.04849
BROMOFORM-13C               167.5714  12.92101  18.1021   149.4693  185.6735  83.78571  7.710745  -83.7857  -100

GASES
CHLOROMETHANE               160       25.37059  35.54375  124.4562  195.5438  80        15.85662  8.888889  38.88889
  (corrected)               222.2222  35.23693  49.36632  172.8559  271.5885  111.1111  15.85662  -14.246   -41.731
VINYL CHLORIDE              149.2857  20.5322   28.76526  120.5205  178.051   74.64286  13.75363  21.68651  38.88889
  (corrected)               207.3413  28.51695  39.95175  167.3895  247.293   103.6706  13.75363  -11.9008  -47.6955
CHLOROETHANE                168.8571  14.91564  20.89655  147.9606  189.7537  84.42857  8.833287  -1.69048  38.88889
  (corrected)               234.5238  20.71616  29.02298  205.5008  263.5468  117.2619  8.833287  1.047619  -40.3658
BROMOMETHANE                167.5714  12.35391  17.30761  150.2638  184.879   83.78571  7.372324  -0.15476  38.88889
  (corrected)               232.7381  17.15821  24.03835  208.6997  256.7764  116.369   7.372324  -3.05952  -9.99832
VINYLCHLORIDE-D3            161.1429  15.44267  21.63492  139.5079  182.7778  80.57143  9.58322   -80.5714  -100
TABLE 2
SUMMARY OF RESULTS FOR CASE A IN TABLE 1

Compound                    % Uncertainty    Change in Accuracy
vinyl chloride                 +68.70            +14.3
1,1-dichloroethene             + 1.37            +23.6
chloroform                     +75.53            +20.1
1,2-dichloroethane             -15.41            + 0.86
carbon tetrachloride           +11.84            +11.61
trichloroethene                -41.73            + 6.34
benzene                        +245.2            + 8.89
tetrachloroethene              +59.89            - 0.77
chlorobenzene                  -49.46            -11.41
hexachloroethane               +58.69            +26.75
nitrobenzene                   - 2.97            + 2.60
hexachlorobutadiene            +169.2            +25.96
hexachlorobenzene              + 5.99            + 0.38
o-cresol                       - 9.43            + 9.76
pentachlorophenol              -55.22            +12.65
2,4-dinitrophenol              - 6.71            +17.36
pyridine                       -69.50            +34.1
2,4,6-trichlorophenol          -55.99            - 2.84
2,4,5-trichlorophenol          + 7.18            + 4.14
m&p cresol                     -10.29            -17.00
MEAN                           +19.3             + 9.36

* corrected value >100% (two parameters; see discussion below)
Ten parameters, or 50%, had an increase in uncertainty that ranged from +1.4% to
+245%. Ten parameters, or 50%, had a decrease in uncertainty that ranged from
-2.97% to -69.50%. Sixteen parameters, or 80%, had an increase in accuracy that
ranged from +0.86 to +34.1%. Two corrected values had increases in accuracy but
exceeded the maximum, or true value. Four parameters experienced a decrease in
accuracy with a range of -0.77% to -17%.
Overall the uncertainty increased by an average of 19% with only a marginal
improvement in accuracy of 9.36%.
TABLE 3
SPIKE RECOVERY RESULTS FOR OTHER NINE SAMPLES
ANALYZED IN SAME BATCH WITH THE SRM (Case B)

Compound                65984  66092  65058  65560  65068  65781  64697  65238  62836
vinyl chloride             95     85     85     85     95     85    130     95    120
1,1-dichloroethene        105     95     80     75     95    100    135    105    120
chloroform                100     90     85     85    105    115    130    107    110
1,2-dichloroethane         95     84     75     80    110    135    105    120     97
carbon tetrachloride       80     80     85     85    105     95    135    125    110
trichloroethene            95     95     90     90     95    105     95    120    100
benzene                   100     85     90     95    100     90    110    112    110
tetrachloroethene          80     75     85     80     95     95     90    100     95
chlorobenzene              95     85     90     95    100     90     95    110    105
o-cresol                   60     60     50     72     62    108    100     58     17
hexachloroethane           77     80     97     97     77    107    130     80     10
(m&p)-cresol              130    130     99    170    130     53    112    110     28
nitrobenzene               75     60     85    215     70     80     95     73     24
hexachlorobutadiene        90     94    138     70     58    106    158     96     10
2,4,6-trichlorophenol     115    105    175    100     85    100      7     65   0.15
2,4,5-trichlorophenol     104    100     90    116     88    172      9     60      -
2,4-dinitrotoluene         92     74    118     98     86     90    170     78     26
hexachlorobenzene          86     86     88     20     54     88    112     68     28
pentachlorophenol         114    108    104     61    104     90      8     54      -
pyridine                   29     36     28      6     10     25     75     95     14
Table 4  BIAS CORRECTION BY COMPOUND CLASS
(Case C semivolatiles; analytes at 200 ppb in the monitor-well matrix.
Each analyte row is followed by its corrected-recovery row; the last
compound in each class is the labeled internal standard. CHACC =
change in accuracy; REL%UNC = relative % change in uncertainty.)

NITROAROMATICS
Analyte                     Mean      STD       CI        LCL       UCL       Mean%Re   %RSD      CHACC     REL%UNC
NITROBENZENE                169.1429  11.80838  16.68501  152.4578  185.8279  83.73408  7.040428  8.086178  -12.9046
  (corrected)               216.5031  10.37165  14.53188  201.8712  231.0348  107.1797  4.790534  -1.46312  87.98963
2,4-DINITROTOLUENE          182.7143  17.4233   24.41205  158.3022  207.1263  91.35714  9.535818  -7.1772   10.32634
  (corrected)               233.8565  19.22284  26.93341  207.0231  260.8899  115.8201  8.216415  -5.86566  -39.7915
NITROBENZENE-D5             156.4286  11.57378  16.2182   140.2124  172.6448  78.21428  7.398763  -78.2143  -100

ACID COMPOUNDS
4-NITROPHENOL               168       35.6376   50.21259  117.7874  218.2126  84        21.3319   11.24131  10.06863
  (corrected)               190.4826  39.44595  55.26831  135.2143  245.7508  95.24131  20.70843  -41.3127  -48.8668
PENTACHLOROPHENOL           107.8571  20.16919  28.25935  79.58778  136.1165  53.82857  16.69991  7.328386  19.84075
  (corrected)               122.5139  24.17091  33.86622  88.6477   156.3801  61.25896  19.72811  -60.0141  -95.114
O-CRESOL                    2.485714  1.181     1.854717  0.630997  4.140432  1.242857  47.51151  0.154002  8.690042
  (corrected)               2.793719  1.28363   1.798513  0.895206  4.592232  1.396858  45.947    87.03171  921.1366
O-CRESOL-D8                 176.8571  13.10761  18.36527  158.4919  195.2224  88.42857  7.411413  -88.4286  -100

NITROSOAMINES
N-NITROSODI-N-PROPYLAMINE   159.8571  12.07516  16.91869  142.9385  176.7758  78.92857  7.55372   14.29381  -3.45224
  (corrected)               188.4448  11.6583   16.33462  172.1101  204.7794  94.22238  6.186586  3.777619  157.7084
N-NITROSODIMETHYLAMINE      196       30.04441  42.09566  153.8043  238.0957  98        15.32678  -12.8718  -21.2708
  (corrected)               229.8435  23.6537   33.14156  196.602   263.0851  114.9718  10.28674  -0.02824  -40.2117
N-NITROSODIMETHYLAMINE-D6   170       14.14214  19.81476  150.1852  189.8148  85        8.318803  -85       -100

BASE COMPOUNDS
ANILINE                     10328.14  685.0029  1239.991  8086.152  11568.13  106.9387  8.568648  -18.374   41.90038
  (corrected)               12189.26  1255.822  1759.552  10439.73  13958.83  126.3127  10.29423  12.36415  -97.4621
2-NITROANILINE              172.1429  31.87177  44.65601  127.4886  216.7989  86.07143  18.51472  -83.8578  39.84593
  (corrected)               204.1354  44.57137  62.44861  141.6856  266.585   2.11364   21.83422  90.17207  42.69237
PYRIDINE                    184.5714  63.58984  88.11083  95.4608   273.6823  92.28571  34.45817  -90.0627  1.283178
  (corrected)               214.7     64.41604  90.25428  124.4457  304.8543  2.223028  30.00281  82.77897  -78.0456
N-NITROSODIMETHYLAMINE-D6   170       14.14214  19.81476  150.1852  189.8148  85        8.318903  -85       -100

PNA
NAPHTHALENE                 100.7286  4.606776  6.454622  94.27395  107.1832  38.59332  4.573455  4.769462  -3.93562
  (corrected)               113.1769  4.42547   6.200593  106.9763  119.3775  43.36278  3.810225  -18.2326  -28.2652
ACENAPHTHENE                51.01429  3.174602  4.447982  46.5663   55.46227  25.13019  6.222966  -3.19512  -63.1647
  (corrected)               57.25054  1.168741  1.63754   55.613    58.88806  21.93507  2.041449  37.02726  3723.99
BENZO(A)PYRENE              116.5143  44.69252  62.61936  55.88492  181.1336  58.96233  37.71066  -8.54966  0.689022
  (corrected)               131.5771  45.00046  63.05082  68.52625  194.6278  50.41267  34.20084  36.73019  -73.8888
BENZO(A)PYRENE-D12          178.2857  11.70063  16.39393  161.8918  194.6796  89.14286  6.582854  -89.1429  -100

CHLORINATED HYDROCARBONS
HEXACHLOROBENZENE           136.8714  60.66357  84.99664  51.87478  221.8681  68.43571  44.32157  2.306761  -14.6781
  (corrected)               141.4848  51.75928  72.52071  68.96424  214.0057  70.74247  36.58289  -48.4568  -94.9907
1,2,4-TRICHLOROBENZENE      44.57143  2.592755  3.632748  40.93866  48.20416  22.26571  5.81708   1.429292  62.94382
  (corrected)               47.43001  4.224737  5.818343  41.51067  53.34936  23.71501  8.807308  70.99928  432.2455
HEXACHLOROBENZENE-13C6      189.4286  22.48597  31.50544  157.9231  220.934   94.71429  11.87042  -94.7143  -100

PHTHALATES
DIMETHYLPHTHALATE           0.142857  0.377964  0.529572  -0.38671  0.672429  0.071429  284.5751  0.040178  56.25
  (corrected)               0.223214  0.590569  0.827456  -0.60424  1.05067   0.111607  264.5751  12.38839  1161.617
DI-N-BUTYLPHTHALATE         25        7.450727  10.43833  14.56067  35.43933  12.5      28.80291  6.58916   26.6178
  (corrected)               38.17832  9.433954  13.21805  24.96027  51.39637  19.08916  24.71024  37.78655  278.034
DI-N-OCTYLPHTHALATE         113.7714  35.66355  49.96874  63.80269  163.7402  56.88571  31.34667  30.46519  34.23627
  (corrected)               174.7018  47.87414  67.07717  107.6246  241.778   87.35091  27.40334  -22.1152  -56.8054
DI-N-BUTYLPHTHALATE-D4      130.4714  20.67903  26.8737   101.4977  159.4451  65.23571  15.84947  -65.2357  -100
TABLE 5
SUMMARY OF APPLICATION OF BIAS CORRECTION TO SRM FOLLOWING EPA
GUIDELINES ON A BATCH BASIS (from Table 3)

Compound                True Value   Found   % Recovered   Bias Corrected   Change in Accuracy
1,1-dichloroethene          100         67        75              89             +22.0
1,2-dichloroethane          100         93        75             124             -17.0
tetrachloroethene           100         75        75             100             +25
o-cresol                   1250       1036        60            1727             -21.0
o-cresol                   1250       1036        60            1727             -21.0
o-cresol                   1250       1036        50            2072             -48.6
o-cresol                   1250       1036        72            1439             + 2.4
o-cresol                   1250       1036        62            1671             -16.6
o-cresol                   1250       1036        58            1786             -25.8
o-cresol                   1250       1036        17            6094             -370
hexachloroethane            750        369        77             479             +14.7
hexachloroethane            750        369        77             479             +14.7
hexachloroethane            750        369        10            3690             -341
cresol                     2500       3595        53            6783             -128
cresol                     2500       3595        28           12839             -370
nitrobenzene                500        421        75             561             + 3.6
nitrobenzene                500        421        60             762             -24.6
nitrobenzene                500        421        70             601             - 4.4
nitrobenzene                500        421        73             577             + 0.4
nitrobenzene                500        421        24            1754             -225
hexachlorobutadiene         125         33        70              47             +12
hexachlorobutadiene         125         33        58              57             +24
hexachlorobutadiene         125         33        10             330             -50.4
2,4,6-trichlorophenol       500        419         7            5986             -1080
2,4,6-trichlorophenol       500        419        65             645             -12.8
2,4,6-trichlorophenol       500        419        <5              --               --
2,4,5-trichlorophenol      1250        618         9            6867             -399
2,4,5-trichlorophenol      1250        618        60            1030             +33.0
2,4,5-trichlorophenol      1250        618        <5              --               --
2,4-dinitrotoluene          125        100        74             125             +12.0
2,4-dinitrotoluene          125        100        78             128             +17.6
2,4-dinitrotoluene          125        100        26             385             -188
hexachlorobenzene           125        4.4        20              22             +14.1
hexachlorobenzene           125        4.4        54             8.15            + 3.0
hexachlorobenzene           125        4.4        68             6.47            + 1.7
hexachlorobenzene           125        4.4        28             15.7            + 5.0
pentachlorophenol         12500      10350         8          129400             -510
pentachlorophenol         12500      10350        54           19170             -36.2
pentachlorophenol         12500      10350        <5              --               --
pyridine                    500        226        29             942             -33.6
pyridine                    500        226        36             628             +25.2
pyridine                    500        226        28             807             - 6.6
pyridine                    500        226         6            3767             -555
pyridine                    500        226        10            2260             -257
pyridine                    500        226        25             904             -26.0
pyridine                    500        226        75             301             +15.0
pyridine                    500        226        14            1614             -168

The entire set of data had 200 applicable data points. Forty-seven recoveries fit the requirements set by EPA
guidelines for accuracy adjustment. Of these 47 bias corrections, 18, or 38%, resulted in an increase in accuracy
of the number with respect to the true value, averaging 14.8%. Twenty-nine of the bias corrections resulted in
recoveries that reduced the accuracy of the data. This represented 62% of the corrected data. In calculating
the mean change in accuracy, three recoveries below 5% were not included. The overall average reduction
in accuracy was 119%.
TABLE 6
SUMMARY OF RESULTS FOR CASE C SEMIVOLATILES

Compound                    % Uncertainty    Change in Accuracy
nitrobenzene                   -12.8              9.09
2,4-DNT                         10.33            -7.18
4-nitrophenol                   10.07            11.24
pentachlorophenol               19.84             7.33
o-cresol                         8.69             0.16
N-nitrosodi-n-propylamine       -3.45            14.29
N-nitrosodimethylamine         -21.27           -12.97
aniline                         41.9            -19.37
2-nitroaniline                  22.97            11.63
pyridine                         1.17            -0.41
naphthalene                     -3.64             4.77
acenaphthene                    -6.82             3.67
benzo[a]pyrene                   0.69             6.8
hexachloroethane                29.2              1.86
1,2,4-trichlorobenzene           6.46             1.43
hexachlorobenzene              -14.6              2.31
dimethylphthalate               NR                NR
di-N-butylphthalate             26.62             6.59
di-N-octylphthalate             34.24            30.46
MEAN                           +13.2             +4.24

Compounds which were corrected by their own isotope were not used for
averaging purposes (see Table 4).
TABLE 7
SUMMARY OF RESULTS FOR CASE C VOLATILES

Compound                     % Uncertainty    Change in Accuracy
chloromethane                   25.75             11.86
chloroethane                   -23.88             12.05
methylene chloride              21.84              8.60
1,1-dichloroethane              15.54             12.21
chloroform                     -70.10             11.72
carbon tetrachloride           -29.83              9.08
1,2-dichloroethane             -19.64             11.7
1,1,2-trichloroethane          -44.66             10.55
1,2-dichloropropane              8.45             12.37
1,1,2,2-tetrachloroethane      311.48             11.03
1,1,1-trichloroethane           68.80             12.11
vinyl chloride                 -58.08             11.69
cis-1,2-dichloroethene          63.74             13.45
trans-1,2-dichloroethene       117.81             12.40
trichloroethene                 53.97             11.71
cis-1,3-dichloropropene         22.66            -16.71
1,1-dichloroethene             101.75              9.95
trans-1,3-dichloropropene       17.94              4.73
tetrachloroethene                2.21              7.43
chlorobenzene                   93.61            -16.36
benzene                         15.61              8.1
toluene                         15.61              7.38
chlorobenzene                   15.61              8.7
ortho-xylene                    15.61              8.0
ethylbenzene                    98.43             12.45
styrene                        523                14.57
(m&p)-xylene                    91.47              6.12
bromomethane                    12.35             10.36
bromodichloromethane            12.36              9.20
dibromochloromethane            12.36              6.86
bromoform                       12.36              8.62
MEAN                           +55.12

Compounds which were corrected by their own isotope were not used for
averaging purposes (see Table 1).
Figure 1
©ff .
Control Sample 3 liters
Analyze in duplicate
Non-detcet levels for
Duplicate Spike
good Recoveries
Landfill Leachate
Sample 11 liters
Experimental Sample
7 liters spiked with
OTC Constituents
SRM of Known Concentrations
7 Replicates
volatile analysis
7 Replicates
Semivolatile Analysis
Spike Recovery Corrected
Concentration in SRM based
on value of each of 9 other
spike replicates in batch
Spike Recovery Corrected
Concentration based on isotope
spikes to SRM Sample
Figure 2
Comparison of Individual vs Batch Based Corrections
+500
£ 300
o
o
c
0)
2
Q>
C-
0
-300
JS.
Mean
Batched based corrections
cr\r\ I LJ * Individual corrections-
QUU Nitro-
Hexachloro- 0-Cresol 2.4-Dinitro- Pyridine
Benzene butadiene Toluene
1-108
-------
TABLE 1 CASE C ORGANIC VOLATILES BY (XMPOUND CLASS
8
SATURATED C Ml OROC ARSONS
COMPOUND 200PPB IN DIH20
CHLOROMETHANE
CORRECTED RECOVERY
ChLOROETHANE
CORRECTED RECOVERY
METHYLENE CHLORIDE
CORRECTED RECOVERY
1 1-DICHLOflOETHANE
CORRECTED RECOVERY
CHLOROFORM
CORRECTED RECOVERY
CARBON TETRACHLORIDE
CORRECTED RECOVERY
1 2-DICHLOROE1HANE
CORRECTED RECOVERY
1.2-OICHLOROPROPANE
CORRECTED RECOVERY
1,1.2-TRlCHLOROETHANE
CORRECTED RECOVERY
1,1,2 2-TETRACHLOHOETHANE
CORRECTED RECOVERY
\ 1,1 -TRICHLOROETHANE
CORRECTED RECOVERY
CHLOROFORM- 13C
UNSAT U RATED CHLOROCARBONS
VINYL CHLORIDE
CORRECTED RECOVERY
CIS- 1.2-DICHLdHOETHENE
CORRECTED RECOVERY
TRANS- 1.2-DICHLOROETHENE
CORRECTED RECOVERY
TRICHLOROETHENE
CORRECTED RECOVERY
CIS- 1,3-DICHLOROPROPENE
CORRECTED RECOVERY
1,1-DICHLOROETHENE
CORRECTED RECOVERY
THANS- 1,3-DICHLOROPROPENE
CORRECTED RECOVERY
TETRACHLOROETHENE
CORRECTED RECOVERY
CHLOROBENZENE
CORRECTED RECOVERY
1.1- UCHLOROETHENE-D2
Mean
160
183.7176
1600571
192.94 i*0
1782857
204.5227
160.42U6
1928422
165.5714
1890124
127.2057
145.4379
209
238.3924
170.8571
195594
1538571
174.9611
146.4206
1684957
161.1429
1853636
175.2657
0
149.2857
172.6715
161.5714
1884741
140.7143
165.5118
141.5714
164.9935
202.8571
236.2842
117
136.8937
57.54286
67.00263
97.07143
111.9346
214
250.6949
1 72.857 1
STO
2S37U59
31.90475
14.91564
11.25443
26.35472
32.1117'J
152409<>
17.601/63
140577
4.203377
11.57172
8.120035
30 a/a 11
24. eta 16
9.546877
10.35389
22 401 1 1
1239616
4.755949
19.57005
16.55726
27.94825
15.40254
0
205322
8.606541
11.08839
18.15619
13.19993
28.75034
8.100558
12.47256
23.75871
29.14277
8.906926
17.96964
9.602405
11.32563
27.2758
27.87005
18.81489
36.42765
21.70583
Ci
35 543/5
44.6U799
20.89655
1 59073U
3GU225I
44.9UU05
2135227
24.U7078
19.6046
5.0U0858
lli 21 176
11.37603
43.3927U
34.86788
13.37501
14.50562
31.38356
17.3668
6.663001
27.4173
23.19643
3U.15S01
21.57868
0
28.76526
12.05761
15.53464
25.43651
18.49287
40.27872
11.34874
17.47383
33.28554
40.82852
12.47845
25.1 751 5
13.4528
15.86702
38.21292
39.05667
26.35933
51.0345
30.40948
LCI
124.4562
139.0IU8
1 47.9606
177.0424
i4i.36jii
1 Si). 534 6
147.0763
160.1714
145 07C8
183.1236
111.0739
1340U1U
165.6072
203.5245
157.4821
181.0084
122.4736
167.MJ43
139.7656
141.0784
137.9464
146.2086
153.707
0
120.5205
160.6138
146.0368
163.0376
122.2214
125.2331
1302&?
147.5197
16U.5/I6
195.4557
104.5216
111.7105
44.00665
51.13561
50.85851
72.87791
187.6407
199.6604
1 42.447 /
UCI
lift 5430
2204158
10>J.75:»/
200H!./i
2I5.1'OU2
241*611)7
it9.7BOB
217.513
185.260
194.9013
(43.4975
166.8130
262.3020
273 2602
184.2322
2IU.0996
105.2407
102 3279
153.0916
195.9129
184.3393
224.5106
196.8644
0
178.051
184.7291
177.1061
2139106
159.2072
205.7905
152,8202
182.4674
23U.1427
277.1127
12U.4/04
162.0(>80
70.90506
82.06965
135.2843
150.991 2
240.3593
301.72U4
2U3.2tJl>U
Muun%ltu
00
91.8580'J
04.42057
96.4740!)
09.1 4206
102.2. -1 3
04.2142'J
96.421 09
00.96402
94.5002
C3.6420U
72.710U4
)0:i£.li7U
HU.l'Jtii!
85.42057
97.797
76.i)^'057
87.4U054
73.2142'J
04.24/03
00.57143
92 GUI B
07.64200
0
74.64206
86.33573
80.0U513
9423703
70.35714
82.7559
70.7U571
02.4iJa77
101.4286
110.1421
£8.5
68.44U04
20.7/143
33.50132
48.53571
55.06729
107
1253474
ttUMUM
1
%RSD
1S.U5U6^
17.3bU| '
0.U3-J20:'
5.U84tii>/
14.70LM
15.7UOU4
9.U400U4
9.I3IOL'/
6.490417
22^'3U03
U.09114
5&03IU4
14.01007
104-1
5.S07KJ7
S.2U35CI
14.5C000
7.0850U3
3.247&C5
11.61457
10.2749
15.07753
0.707102
ERR
13.75J1.3
4.984345
ti.86284 1
9U3259
9.360659
17.37056
5.72101)0
7.559421
ii.71204
12.33378
7.U12757
13.12671
16,6074
16.110327
28.09069
24.005(27
87U2004
1- 5301J /
t2.S!i/UU
CHACO
II.HMlU'J
-7.4U032
12.04631
~ 7.33203
0.50L.UU5
• 13.i»:M-t
12.1^068
-J54071
13.542 1H
-30 bt,J'J
0 U/LUll-l
^;j./i:ii7
• t^i.b^UU
4 62-1747
\/. JUt-l.l
- 2U U6U4
10.55197
-14.2663
11.03354
-•3.6'/.:«
12.11U3/
-.i.ajU'J-4
-a/.(J4ij
7>..64L'8(i
11.C!)L'07
-6.L70U
14.1719
-23.0799
12.39876
-11.3702
11.71105
16.07466
- 10.7 US
-23.3S/U
9.946044
-39.6754
4.7^Ml)(J
15.0344
7.431575
37.03271
-18.3474
1l.7/uUl>
- U0.40li
i.'8 1.4405
- I'J.li-lfitl
- Ul.u-lDU
U '153140
tlU.3545
41.C627
-61.6337
311.4057
-i 5.304 U
kUy'J7M)
-44.00UI
-100
ERR
-5110027
20 03677
Gi740r/J
- a/L'979
1170068
-71,0^45
53.97 IfG
90.48/07
22(iUK4
-6U4U6U
HII.74U
-4li.5(i32
17.!i-l!>79
140U324
2.200025
-•32.51
93.61075
- 411.4 130
-100
-------
TABLE 1 (continued)
AROMATICS
BENZENE
CORRECTED RECOVERY
TOLUENE
CORRECTED RECOVERY
CHLOROBEN2ENE
CORRECTED RECOVERY
ORTHOXYLENE
CORRECTED RECOVERY
ETHYL BENZENE
CORRECTED RECOVERY
STYRENE
CORRECTED RECOVERY
(MtP)-XYLENE
r*f\anc/'*Tcn acfowcov
UUnHcts 1 1 U HtUUVtnY
BENZENE-DO
BROMOFORMS
BROMOMETHANE
CORRECTED RECOVERY
BROMODICHLOROMETHANE
CORRECTED RECOVERY
DIBROMOCHLOROMETHANE
CORRECTED RECOVERY
BROMOFORM
CORRECTED RECOVERY
BROMOFORM- 13C
GASES
CHLOROMETHANE
CORRECTED RECOVERY
VINYL CHLORIDE
CORRECTED RiCOVERY
CHLOROETHANE
CORRECTED RECOVERY
BROMOMEIHANE
CORRECTED RECOVERY
VINYLCHLORIDE-D3
0
1600
1849.711
101.9143
117.82
214
247.3988
109.3571
126.4244
118.5714
147.9728
116.5714
145.7193
170.8571
cltf.WH
162.1429
0
167.5714
188.2825
148.8571
167.2552
111.0286
124.7512
139.4266
156.6613
h 167.5714
0
160
222.2222
1492857
207.3413
1688571
234.5238
167.5714
232.7381
161.1429
0
74.16198
85.7364
10.4391
12.06832
18.81489
21.75132
7.397715
8.552272
9.692904
19 23338
2.878492
17.90254
12.46901
tfJ.OrWr
20.98752
0
12.35391
13.8808
2026844
22.77352
31.21467
35.07266
6.214423
6.982497
T2.92l5i
0
25.37059
35.23693
20.5322
28.51695
14.91504
20.71616
12.35391
17.15821
15.44267
0
103.8997
120.1152
14.62499
1690751
26.35933
30.47321
1036407
11.98158
13.5?C59
26.94564
4.032717
25.08115
17.46887
33.448
29.40316
0
17.30761
19.44675
28.39573
31.90531
43.73121
49.13619
8.706298
9.782357
18.1021
0
35.54375
49.36632
28.76526
39.95175
20.89655
29.02298
17.30761
24.03835
21.63492
0
1496.1
1729.596
87.28929
100.9125
187.6407
2169256
98.99307
114.4429
1049918
121.0271
112.5387
120.6382
"153.3883
179.455
13277397
0
1502638
168.8358
120.4614
135.3499
67.29736
75.61501
130.7223
146.879
149.4693
0
124.4562
172.8559
120.5205
167.3895
147.9606
205.5008
150.2638
208.6997
139.5079
0
1703.9
1969.826
116.5393
134.7275
240.JS93
277.8/21
119.7212
138.406
132.151
1749184
120.6041
170.8005
188.326
OJA ICO
191.546
0
164879
207.7233
177.2529
199.1605
154.7598
173.8874
148.1349
166.4437
185.6735
0
(95.5438
271.5885
178.051
247.293
189.7537
263.5468
184.879
256.7764
182.7778
0
88.68889
102.7617
47.05184
50.90999
57.1276
123.6994
51.58356
63.21222
5024213
7398639
57.56614
72.859SS
79.10053
IHft 4K9
81.07143
0
83.78571
94.14125
74.42857
83.62761
55.51429
62.3756
64.55026
7U.33006
63.78571
0
80
111.1111
74.64286
103.6706
84.42857
117.2619
8378571
116.369
80.57143
ERR
4.635124
4.635124
1024302
10.24302
8.792004
8.792004
6.76473
6.76473
8.174738
12.99792
2.469294
12.20564
7.297915
n 91000
12.94365
ERR
7.372324
7.372324
13.61603
13.61G03
28.11409
28.11409
4.457065
4.457065
7.710745
ERR
15.85662
15.85662
13753G3
1375363
8.833287
8.833287
7.3723^4
7.372324
9.58322
88.88889
8.34939
-bO.IB64
11.85815
-1.78239
19.17238
-2-. 7:7
iieiooG
-121701
23.74426
-16.4203
15.29352
6.240875
14.44745
— 19 A7RR
-81.0714
83.78571
10.35554
-19.7127
9.199037
-28.1133
6.861316
2.174653
13.78039
5.455056
-8377857
80
8.868039
-14.246
21.68651
-11.9008
-1.69048
1.047619
-0.15476
-3.05952
-80.5714
ERR
15.60694
-87.6242
15.60694
55.9031 1
15.60694
-65.9896
1560694
1333718
90.42748
-850339
521.9419
-30.3506
91.47209
.19 nooo
-100
ERR
1235955
46.01783
12.3595:-
37.06562
12.35955
-82.28(3
12.35955
85.04649
-100
ERR
38.8QU89
-41.731
38.88889
-476955
38.86089
-403658
3000009
-9.99832
-100
-------
TABrg 2
SUMMARY OF RESULTS FOR CASE A TN TABU! 1
Compound % Uncertainty Chancre In Accuracy
vinyl chloride
1, 1-dichloroethene
chloroform
1 , 2-dichloroe thane
carbon tetrachloride
trichloroe thane
ber zene
tetrachloroethene
chlorobenzene
hexachl oroe thane
nitrobenzene
hexachlorobutadiene
hexachlorobenzene
o-cresol
pentachlorophenol
2 , 4-dinitrophenol
pyridine
2,4, 6-trichlorophenol
2,4, 5-trichlorophenol
n&D cresol
MEAN
+68.70
+ 1.37
+75.53
-15.41
+11.84
-41.73
+245.2
+E9.89
-49.46
+58.69
- 2.97
+169.2
+ 5.99
- 9.43
-55.22
- 6.71
-69.50
-55.99
+ 7.18
-10.29
+19.3
+14.3
+23.6
+20.1
+ 0.86
+11.61
+ 6.34
+ 8.89
- 0.77
-11.41
+26.75
+ 2.60
+25.96
+ 0.38
+ 9.76
+12.65
+17.36
+34.1
- 2.84
+ 4.14
-17.00
+ 9.36
* corrected
value >100%
* corrected
value >100%
Ten parameters, or 50% had an increase in uncertainty that ranged from +1.4% to
+245%. Ten parameters, or 50% had a decrease in uncertainty that ranged from -
2.97% to -69.50%. Sixteen parameters, cr 80% had an increase in accuracy that
ranged from +0.86 to +34.1%. Two corrected values had increases in accuracy b_.
exceeded the maximum, or true value. Four parameters experienced a decrease in
accuracy with a range of -0.77% to -17%.
Overall the uncertainty increased by an average of 19% with only a margin?-*
improvement in accuracy of 9.36%.
1-111
-------
TABLE 3
SPIKE RECOVERY RESULTS FOR OTHER NINE SAMPLES
ANALYZED IN SAME BATCH WITH
vinyl chloride
1 , 1-dichloro
ethene
chloroform
1,2-dichloro
ethane
carbon
terra chl oride
trichloroetbene
benzene
tetrachloro-
ethene
chiorobenzene
o-cresol
hexachloroethane
(afcp) -cresol
nitrobenzene
hexachloro-
butadiene
2,4, 6-trichloro-
phenol
2,4, 5-trichloro-
phenol
2,4-dinitro-
toluene
hexachlorobenzene
pentachlorophenol
65984
95
105
100
95
80
95
100
80
95
60
77
130
75
90
115
104
92
86
114
66092
85
95
90
84
80
95
85
75
85
60
80
130
60
94
105
100
74
86
108
65058
85
80
85
75
85
90
90
85
90
50
97
99
85
138
175
90
118
88
104
65560
85
75
85
80
85
90
95
80
95
72
97
170
215
70
100
116
98
20
61
THE SRM
65068
95
95
105
110
105
95
100
95
100
62
77
130
70
58
85
88
86
54
104
(Case
65781
85
100
115
135
95
105
90
95
90
108
107
53
80
106
100
172
90
88
90
B)
64697
130
135
130
105
135
95
110
90
95
100
130
112
95
158
7
9
170
112
8
65238
95
105
107
120
125
120
112
100
110
58
80
110
73
96
65
60
78
68
54
62E36
120
120
110
97
110
100
110
95
105
17
10
2?
24
10
0.15
-
26
28
™
pyridine
29
36
28
10
25
75
95 14
1-112
-------
TABLE 4
BIAS CORRECTION: CASE C SEMIVOLATILES BY COMPOUND CLASS

[Statistics for the 200 ppb MW-matrix semivolatile analytes and their labeled
surrogates: mean, standard deviation, confidence limits, mean percent recovery,
percent RSD, change in uncertainty, and change in accuracy, grouped by compound
class (acids: 4-nitrophenol, pentachlorophenol, o-cresol; bases: aniline,
2-nitroaniline, pyridine; nitrosamines: N-nitrosodi-n-propylamine,
N-nitrosodimethylamine; nitroaromatics: nitrobenzene, 2,4-dinitrotoluene;
chlorinated hydrocarbons: hexachloroethane, 1,2,4-trichlorobenzene,
hexachlorobenzene; polynuclear aromatics: naphthalene, acenaphthene,
benzo[a]pyrene; phthalates: dimethyl, di-n-butyl, and di-n-octyl phthalate).
The numeric entries are not legible in the source.]
-------
TABLE 5
SUMMARY OF APPLICATION OF BIAS CORRECTION TO SRM FOLLOWING EPA
ON A BATCH BASIS (from Table ...)
[Table body not legible in the source.]
-------
TABLE 6
SUMMARY OF RESULTS FOR CASE C SEMIVOLATILES

Compound                     % Uncertainty
nitrobenzene                     -12.9
2,4-DNT                           10.33
4-nitrophenol                     10.07
pentachlorophenol                 16.64
o-cresol                           6.69
N-nitrosodi-n-propylamine         -5.?8
N-nitrosodimethylamine           -21.27
aniline                           41.9
2-nitroaniline                    22.97
pyridine                           1.17
naphthalene                       -5.64
acenaphthene                      -6.82
benzo[a]pyrene                     0.69
hexachloroethane                  29.2
1,2,4-trichlorobenzene             6.46
hexachlorobenzene                -14.6
dimethyl phthalate                NR
di-n-butyl phthalate              26.62
di-n-octyl phthalate              34.24

Change in accuracy, as legible in the source (three entries are missing, so
alignment with the compounds above is lost): 9.09, -7.16, 11.24, 7.35, 0.16,
14.29, -12.97, -19.37, 11.83, 4.77, ?.67, 6.?, 1.86, 2.31, NR, ?.59.
[Footnote not legible in the source.]
TABLE 7
SUMMARY OF RESULTS FOR CASE C VOLATILES

Compounds: chloromethane, chloroethane, methylene chloride,
1,1-dichloroethene, chloroform, carbon tetrachloride, 1,2-dichloroethane,
1,1,2-trichloroethane, 1,2-dichloropropane, 1,1,2,2-tetrachloroethane,
1,1,1-trichloroethane, vinyl chloride, cis-1,2-dichloroethene,
trans-1,2-dichloroethene, trichloroethene, cis-1,3-dichloropropene,
1,1-dichloroethane, trans-1,3-dichloropropene, tetrachloroethene,
chlorobenzene, benzene, toluene, chlorobenzene, o-xylene, ethylbenzene,
styrene, m- & p-xylene, bromomethane, bromodichloromethane.
[% uncertainty and change-in-accuracy columns not legible in the source.]
-------
18 Matrix Spiking: From Sampling to Analysis
D. Syhre, Organic Laboratory Supervisor; M. Rudel, Analyst; V. Verma, Analytical
Laboratory Supervisor; Browning-Ferris Industries Houston Laboratory, 5630 Guhn
Road, Houston, TX 77040.
Abstract
Currently there is little knowledge about how much change in analyte concentration
occurs during the steps from sampling at the sample source (e.g., a monitor well)
through analysis. A series of experiments was performed for metal, volatile, and
semivolatile parameters with various stages of handling, from the sampling of a
simulated monitor well to analysis; percent recoveries were determined, and
conclusions drawn.
Introduction
Current EPA methodology using the SW-846¹ procedures requires a matrix spike
relatively close to the point of analysis for volatile (VOA) organics, semivolatile
(SV) organics, and metals. Specifically, volatile matrix spikes are performed
immediately prior to analysis, semivolatile spikes during the extraction step, and
the metals spiked prior to digestion. Matrix spikes are performed using analytes
listed in the appropriate sections in SW-846; it is assumed that these analytes
were chosen by EPA because they felt these were representative of the analytes
routinely analyzed.

Currently the purpose of these matrix spikes is to provide QA/QC for the labs
performing the analysis by demonstrating that the analyses being performed in the
batch associated with the matrix spike are "in control." The results from the
matrix spike are compared to the appropriate control charts, and if they fall
within an acceptable range the analyses in that batch are deemed within "control."
In a similar vein, the organic analyses are tracked with surrogate spiking
compounds added during the extraction step. The function of the surrogates is to
track "in control" performance on a sample-by-sample basis.
The advent of spike correction, however, changes the role of the matrix spike.
This type of spike correction requires the substitution of the particular analytes
of interest for the standard spike list, and replaces the "in control" aspect of
the matrix spike with an adjustment to the data from the associated batch of
samples based on percent recoveries of the analytes of interest in the spike. Part
of the stated reason for moving to spike correction is to provide data more
representative of what is actually in the sample source (e.g., monitor wells,
leachate, etc.).
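To make the mechanics concrete, here is a minimal sketch of spike correction as a
simple rescaling of a batch result by percent recovery; the function names and
example values are illustrative assumptions, not taken from SW-846 or from the
authors' procedure.

```python
# Minimal sketch of spike correction; names and values are illustrative.

def percent_recovery(spiked_result: float, native_result: float,
                     amount_spiked: float) -> float:
    """Matrix-spike recovery: (spiked - native) / amount added, in percent."""
    return 100.0 * (spiked_result - native_result) / amount_spiked

def spike_corrected(result: float, recovery_pct: float) -> float:
    """Adjust a batch result by the recovery of the analyte of interest."""
    return result / (recovery_pct / 100.0)

# A 50 ug/L result with an 80% recovery for that analyte reports as 62.5 ug/L.
rec = percent_recovery(spiked_result=90.0, native_result=50.0, amount_spiked=50.0)
print(rec, spike_corrected(50.0, rec))  # -> 80.0 62.5
```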
Regardless of the appropriateness of spike correction as a method, this raises an
interesting question: how much does the sample change from the time of sampling to analy-
1-116
-------
sis? To examine this question in detail we performed a series of experiments
designed to determine the change in analyte concentration at each conceivable
stopping point from sampling to analysis. This was done by spiking different sets
of replicates at the possible locations, letting the sample then proceed through
the process as normal, and analyzing the samples. The locations at which it was
determined to spike for the organics (see Figure 1) were: 1. Sample Source;
2. Water Sample (post-sampling, pre-transportation); 3. Water Sample
(post-transportation); 4. Organic Toxicity Characteristic (OTC) Filtered Water
Sample; 5. Extract.

[Figure 1: Organic Sample Pathway]

For the metals, all the samples originated from the same source, then underwent
different degrees of processing along the metals pathway (see Figure 2), and the
differences in concentrations were determined. The concentrations from the
samples were compared to spiked amounts and percent recoveries determined. The
recovery data from the various stages were compared to determine trends in
analyte concentration through the sample pathway.
Methodology²

For the metal constituents a spike was performed in a simulated monitor well,
which yielded the concentrations indicated in Table 1.

Metal   Concentration (ppm)
As          2.51
Ba          2.51
Cd          1.18
Cr          2.51
Pb          1.96
Se          2.51
Ag          1.18
Ca          5.10
Fe          2.51
Mg          5.10
Mn          2.51
Ni          1.96
Zn          1.96

Table 1: Metals Spike Concentrations
1-117
-------
[Figure 2: Metals Sample Pathway]
The monitor well was constructed using a five-foot section of Corning conical
glass pipe with Teflon³ endcaps, each endcap fitted with a Teflon stopcock. The
simulated well was stood vertically in its frame, the bottom sealed with an endcap
and partially filled with deionized (d.i.) water. The spiking solutions were
added, the remainder filled, and the top capped. The remaining small volume was
filled via access from the top stopcock, and Teflon tubing was used to connect the
bottom stopcock to a pump calibrated to deliver one well volume (~12.58 L) in
~5.6 hours, then connected to the top stopcock. The stopcocks were opened, and the
simulated well allowed to recirculate for ~23 hours. This allowed a total of
approximately four well volumes to be recirculated, and was deemed satisfactory
for adequate mixing of the sample.
1-118
-------
After stopping the recirculation process the stopcocks were closed, and the Teflon
tubing removed. One liter of sample was drained from the bottom and analyzed for
the complete list, then the top endcap was removed and the simulated well sampled.
Eight one liter samples were then distributed as follows:
1. One sample was preserved in the manner appropriate for total metals (TM)
analysis, then analyzed for Ba, Cd, Ca, Cr, Fe, Mn, Mg, Ni, and Zn.
2. One sample was preserved as above, then shipped overnight and analyzed for Ba,
Cd, Ca, Cr, Fe, Mn, Mg, Ni, and Zn.
3. One sample was analyzed immediately after sampling for the complete list.
4. One sample was field-filtered and preserved in the manner appropriate for
dissolved metals analysis, then analyzed for Ba, Cd, Ca, Cr, Fe, Mn, Mg, Ni, and Zn.
5. One sample was filtered and preserved as above, then shipped overnight and
analyzed for Ba, Cd, Ca, Cr, Fe, Mn, Mg, Ni, and Zn.
6. One sample was shipped and analyzed for As, Ba, Cd, Cr, Pb, Se, and Ag (the TC
metals except for Hg).
7. One sample underwent TC (Toxicity Characteristic) filtration, then was
preserved and analyzed for As, Ba, Cd, Cr, Pb, Se, and Ag.
8. One sample underwent TC filtration, then was spiked with the appropriate
metals, preserved, and analyzed for As, Ba, Cd, Cr, Pb, Se, and Ag.
All analyses were performed in replicates of five, and the results averaged. The averag-
es were compared versus known spike amounts, and percent recovery computed.
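As a worked illustration of this computation (the five replicate Pb results below
are hypothetical, chosen against the 1.96 ppm Pb spike of Table 1):

```python
# Illustrative averaged-recovery computation; replicate values are hypothetical.
from statistics import mean

def average_percent_recovery(replicates, spiked_ppm):
    """Average the replicate results, then express as percent of the spike."""
    return 100.0 * mean(replicates) / spiked_ppm

pb_replicates = [1.78, 1.82, 1.80, 1.79, 1.83]               # measured Pb, ppm
print(round(average_percent_recovery(pb_replicates, 1.96)))  # -> 92 (percent)
```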
For the semivolatile organics an experiment was performed in which the simulated
monitor well was spiked as above at a level of 200 ppb with seven sets of compounds
deemed to be representative of the priority pollutant analytes. These sets consisted of:
1. Acids: 4-Nitrophenol, pentachlorophenol, o-cresol.
2. Bases: Aniline, 2-nitroaniline, pyridine.
3. Chlorocarbons: Hexachloroethane, 1,2,4-trichlorobenzene, hexachlorobenzene.
4. Nitroaromatics: Nitrobenzene, 2,4-dinitrotoluene.
5. Nitrosamines: N-Nitrosodi-n-propylamine, N-nitrosodimethylamine.
1-119
-------
6. Phthalates: Dimethyl phthalate, di-n-butyl phthalate, di-n-octyl phthalate.
7. Polynuclear Aromatics: Naphthalene, acenaphthene, benzo[a]pyrene.
The well was allowed to recirculate ~23 hours, then five one liter samples pulled from
the bottom of the well via the stopcock. The top endcap was removed, and five one
liter samples taken from the top in the normal sampling method. These samples were
then sent through the normal process (see Figure 1) and analyzed.
In addition four other sets of samples were spiked and sent through the remaining steps
of the pathway:
1. Five liters spiked at 200 ppb, then shipped overnight and proceeding through extrac-
tion to analysis.
2. Five liters spiked at 200 ppb, then proceeding through extraction to analysis.
3. Five liters spiked at 200 ppb then proceeding through the TC filtration process, ex-
traction, and to analysis.
4. Five vials spiked at 200 ppm (simulating the 1000:1 concentration factor) and ana-
lyzed.
During the extraction step a set of isotopically labeled compounds consisting of the fol-
lowing were spiked:
o-cresol-d8, pyridine-d5, hexachlorobenzene-13C6, nitrobenzene-d5,
di-n-butyl phthalate-d4, benzo[a]pyrene-d12, and N-nitrosodimethylamine-d6.
These were considered as surrogates in substitution of the standard surrogates which
are listed in SW-846, and used to track the quality of the extraction and analytical stag-
es. Note that since these compounds are representative of the compounds of interest,
they may also serve as a matrix spike, and are added at the location stated for such.
As for the metals, percent recoveries were determined for each set of replicates.
The volatile organics were treated essentially the same as the semivolatile
organics, except for the exclusion of the extraction step. The analytes of interest were:
1. Saturated Chlorocarbons: Chloromethane, chloroethane, methylene chloride, 1,1-
dichloroethane, chloroform, carbon tetrachloride, 1,2-dichloroethane, 1,2-dichloropro-
pane, 1,1,2-trichloroethane, 1,1,2,2-tetrachloroethane, and 1,1,1-trichloroethane.
1-120
-------
2. Unsaturated Chlorocarbons: Vinyl chloride, 1,1-dichloroethene,
cis-1,2-dichloroethene, trans-1,2-dichloroethene, trichloroethene,
cis-1,3-dichloropropene, trans-1,3-dichloropropene, chlorobenzene, and
tetrachloroethene.
3. Bromocarbons: Bromomethane, bromodichloromethane, dibromochloromethane,
and bromoform.
4. Aromatics: Benzene, toluene, o-xylene, ethylbenzene, styrene, and m- & p-xylenes.
5. Gases: Chloromethane, vinyl chloride, bromomethane, chloroethane; these are
compounds from the above list which, owing to their physical properties, may
additionally be separated from the chemical classifications above.
Isotopically labeled surrogate/matrix spike compounds for the volatiles were:
chloroform-13C, 1,1-dichloroethene-d2, vinyl chloride-d3, bromoform-13C,
and benzene-d6.
Following the normal analytical procedures percent recoveries were calculated as be-
fore for the metals and semivolatiles.
Results and Discussion
The metals data are summarized in Tables 2 and 3. The data in the tables represent av-
erages of the percent recoveries for all five replicates, except for the TC Spike, which
indicates the recovery of the TC analytes spiked at the appropriate location. The
%RSD (percent relative standard deviation) column indicates that very little, if
any, loss occurs in any of the processes; the only exception is a consistent
slight loss during the TC filtration stage.
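For reference, a small sketch of the %RSD computation applied to the Pb row of
Table 2 below (the helper name is an assumption):

```python
# %RSD screen: relative standard deviation of one analyte's stage recoveries.
from statistics import mean, stdev

def percent_rsd(recoveries):
    """100 * (sample standard deviation / mean)."""
    return 100.0 * stdev(recoveries) / mean(recoveries)

pb = [92, 93, 93, 86, 98]       # bottom, top, transport, TC filter, TC spike
print(round(percent_rsd(pb)))   # -> 5, consistent with the tabulated 5%
```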
      Bottom/Well  Top/Well  Transport  TC Filter  TC Spike  %RSD
As       100%        100%      101%        94%       94%      3%
Ba        99%         99%      100%        98%       96%      2%
Cd        99%         99%      101%        92%       99%      3%
Cr        99%         99%      100%        94%       98%      2%
Pb        92%         93%       93%        86%       98%      5%
Se       100%        102%      100%        91%       94%      5%
Ag        97%         97%       96%        96%       94%      2%

Table 2: Metal Recoveries for TC Analytes (except Hg)
1-121
-------
      Bottom/  Top/   Filter/        Preserve  Trans  Trans  %RSD
      Well     Well   Preserve(DM)   (TM)      (DM)   (TM)
Ba     92%      92%      94%          90%       93%    92%    2%
Cd     86%      88%      87%          86%       90%    88%    2%
Ca    100%      99%     102%         100%      102%   106%    2%
Cr     93%      92%      94%          91%       94%    93%    1%
Fe     98%      97%      97%          93%      101%    98%    2%
Mn     94%      94%      96%          92%       95%    94%    1%
Mg     94%      94%      95%          91%       94%    93%    1%
Ni     92%      92%      92%          90%       93%    92%    1%
Zn     92%      93%      95%          91%       92%    91%    2%

Table 3: Metal Recoveries for Non-TC Analytes
The volatile organic results are summarized in Tables 4 and 5 (see end of paper).
Table 4 displays the recovery data for the initial attempt at performing this
experiment. The overall trend displays higher recoveries for the samples drawn
from the bottom of the well via the stopcock, and for the samples taken in the
normal manner, than for those samples which were simply spiked and carried through
their respective processes. These results obviously reflect aberrant experimental
procedure, but where the error lies is the question.
The most obvious candidate is poor spiking methodology. The spiking methods
employed in the initial experiment used four spiking solutions, each at
2000 µg/mL. The appropriate amount was then spiked from each individual solution
into the simulated well, followed by individual spikes of the appropriate
containers of d.i. water.
To determine whether the methodology was in error, a second set (three replicates
instead of five) of samples was analyzed with new spiking solutions. In this set
the steps in which samples were just spiked and run, and those in which they were
spiked, filtered, and run, were repeated with spikes being performed as previously
and also in a composite manner using septum-capped vials (Table 5). In addition, a
single replicate was performed in which a sample was spiked in the previous manner
using the original set of solutions and analyzed. Obvious degradation of the
integrity of the original set had occurred, although the numbers are comparable to
those obtained from the spiked-and-run set of the original experiment.
Although some improvement can be seen in the quality of the data in going from the
previous spiking method to the composite/septum vial method, the improvement is not
enough by itself to explain the variance in the original data. The most likely explana-
tion is that the spiking solutions degraded during the length of the original spiking pro-
cess, causing erratic results. Examination of the new spike data is ongoing.
1-122
-------
Of interest, however, are the exceptional recoveries obtained in the samples
acquired both by sampling via the bottom stopcock and by sampling from the top via
the normal method. This would suggest that very little is lost during the sampling
process and in the steps later on. Obviously these conclusions must be taken with
a grain of salt, however, due to an incomplete set of comparable data. The
available data also suggest that spiking technique is critical when performing
volatile organic spikes.
The semivolatile results are summarized in Figures 3 through 9 (see end of paper). The
observed trends:
1. Acids - No excessive decrease from sampling to analysis. Filtration provided the
greatest reduction in analyte concentration. Note must be taken, however, that d.i.
water is an ideal matrix, and our experience has been that the acids suffer greatly
from matrix interferences in actual matrices.
2. Bases - Although the bases did not suffer greatly as a group in the process from sam-
pling to analysis, it is important to note that their recoveries were highly erratic. The
best actor overall was 2-nitroaniline, which is the least basic and least water-soluble of
the three. Pyridine's erratic behavior may be attributed to the factors of basicity, solu-
bility, and volatility (i.e. during the concentration stage).
3. Chlorocarbons - A trend is obvious which displays that with increased handling from
sampling to analysis there is a reduction in analyte concentration. Of particular interest
is the total failure to recover hexachlorobenzene from the sample which underwent TC
filtration. This result concurs with previous experiments performed in this laboratory,
and calls into question the validity of TC hexachlorobenzene results.
4. Nitroaromatics - No significant trends exist. In general, these compounds are good
actors from sampling to analysis.
5. Nitrosamines - Overall this classification can also be considered good actors, the
only exception being the erratic behavior of N-nitrosodimethylamine. The most likely
explanation of this erratic behavior is its volatility, which could provide problems dur-
ing the concentration stage.
6. Phthalates - The obvious trends are higher recoveries as molecular weight increases,
reflective of a solubility phenomenon, and loss in analyte concentration with increased
handling, especially at the filtration stage. Of particular interest is the failure to recover
dimethyl phthalate, consistent with poor recoveries for this analyte seen previously in
studies at this laboratory.
7. Polynuclear Aromatics - The most striking trend is the large loss of analyte concen-
tration with increased handling time and/or length of time containerized. This trend in-
creased with molecular weight, with extremely poor recoveries being demonstrated for
benzo[a]pyrene. This is especially so during the TC Filtration stage.
1-123
-------
Conclusions
Experience has shown that the matrix plays a very significant role in analyte
recoveries, and all results and conclusions reported herein should be read with
the caveat that these experiments were performed in an ideal matrix, namely d.i.
water.
1. Metals - Current metals methods from sampling to analysis give data which are
representative of the actual analyte concentration within the sample source;
therefore no need is seen for modification of procedures involving metals samples.
2. Semivolatiles - Over the classes defined, it was seen that some classes
performed adequately with respect to recovery, but that on the whole increased
handling showed decreased recoveries. Of interest is the erratic behavior of some
analytes, which defied efforts to determine trends. Erratic recoveries for samples
which were measured on a replicate basis demonstrate that if spike-recovery-
corrected values for these analytes were used, the values would have to be taken
with a grain of salt.
It is our recommendation that the current semivolatile surrogate list, which is not repre-
sentative of typical analytes, be changed to the more representative list below:
Di-n-butyl Phthalate-d4
Hexachloroethane-13C
Hexachlorobenzene-13C6
Benzo[a]pyrene-d12
Aniline-d5
N-Nitrosodimethylamine-d6
Phenol-d6
Pentachlorophenol-13C6
Nitrobenzene-d5
We feel that this list is representative of the typical analytes in terms of reactivity, vola-
tility, and solubility. In addition, if a list such as this were used in place of the current
surrogate list, the recoveries determined could be used in place of the traditional matrix
spike to monitor "in control" performance.
The benefit of changing the list of spike compounds would be to provide a more
representative list of analytes for determining laboratory performance on a
sample-by-sample basis. In addition, elimination of the superfluous matrix spike,
required on a minimum one-spike-per-twenty-sample batch basis, would allow one
additional analysis per batch.
1-124
-------
3. Volatiles - Obviously the data are inconclusive; the most apparent conclusion
to be drawn is that spiking methodology and handling of the spiking solution are
critical steps. It may be that with proper handling there is little loss in
analyte concentration from sampling to analysis.
Acknowledgements
The authors wish to thank the field crew and the extraction, GC/MS, and wet lab
personnel for their willingness to devote intense energy to this project. None of
these data would have arisen without their dedication.
References
1. Method 1310 from "Test Methods for Evaluating Solid Waste, Physical/Chemical
Methods," November, 1986, Third Edition, USEPA, SW-846 and additions
thereto.
2. Methods for sampling and analysis obtained from SW-846.
3. Teflon is a registered trademark of E. I. du Pont de Nemours & Co.
[Figure 3: Acid Recoveries (bar chart of the data below)]

                    Bottom   Top   Shipped   Filtered   Spk & Ext   Spk & Run
4-Nitrophenol        66%     52%     62%       46%         70%         74%
Pentachlorophenol    65%     59%     74%       51%         83%         77%
o-Cresol             76%     70%     71%       65%         67%         74%
o-Cresol-d8          79%     78%     75%       71%         71%         85%
1-125
-------
[Figure 4: Base Recoveries (bar chart of the data below)]

                    Bottom   Top   Shipped   Filtered   Spk & Ext   Spk & Run
Aniline              59%     42%     52%       48%         38%         70%
2-Nitroaniline       87%     76%     70%       66%         60%         74%
Pyridine             54%     99%     45%       47%         50%        155%
Pyridine-d5          40%     48%     36%       49%         47%         83%
[Figure 5: Chlorocarbon Recoveries (bar chart of the data below; the last
Spk & Run entry is not legible in the source)]

                          Bottom   Top   Shipped   Filtered   Spk & Ext   Spk & Run
Hexachloroethane            24%    19%     51%       38%         46%         80%
1,2,4-Trichlorobenzene      17%    14%     62%       41%         61%         79%
Hexachlorobenzene            2%     3%     42%        0%         79%         94%
Hexachlorobenzene-13C6      88%    90%     87%       84%         89%
1-126
-------
[Figure 6: Nitroaromatics Recoveries (bar chart of the data below; only the
Top, Shipped, and Spk & Run columns are legible in the source)]

                     Top   Shipped   Spk & Run
Nitrobenzene         74%     77%        76%
2,4-Dinitrotoluene   76%     84%        71%
Nitrobenzene-d5      72%     69%        74%
[Figure 7: Nitrosamine Recoveries (bar chart of the data below)]

                            Bottom   Top   Shipped   Filtered   Spk & Ext   Spk & Run
N-Nitrosodi-n-propylamine     79%    76%     72%       70%         76%         74%
N-Nitrosodimethylamine        88%   101%     95%       67%         74%        114%
N-Nitrosodimethylamine-d6     71%    72%     66%       69%         69%         94%
1-127
-------
[Figure 8: Phthalate Recoveries (bar chart of the data below)]

                         Bottom   Top   Shipped   Filtered   Spk & Ext   Spk & Run
Dimethyl Phthalate          0%     0%      0%        0%          0%         65%
Di-n-butyl Phthalate        9%    12%     12%        7%         30%         65%
Di-n-octyl Phthalate        9%     7%     37%        2%         59%         66%
Di-n-butyl Phthalate-d4    52%    60%     43%       55%         47%         90%
[Figure 9: Polynuclear Aromatics Recoveries (bar chart of the data below)]

                     Bottom   Top   Shipped   Filtered   Spk & Ext   Spk & Run
Naphthalene           32%     31%     43%       38%         46%         55%
Acenaphthene          17%     15%     58%       37%         59%         70%
Benzo[a]pyrene         2%      2%     39%        0%         75%         84%
Benzo[a]pyrene-d12    86%     86%     87%       80%         87%         96%
1-128
-------
                            Bottom   Top   Shipped   Filtered   Spiked & Run
Chloromethane                103%   103%     27%       54%         15%
Vinyl Chloride                99%   113%     36%       65%         20%
Bromomethane                 162%   164%     64%      100%         48%
Chloroethane                 136%   144%     57%       85%         43%
1,1-Dichloroethene            66%    74%     43%       55%         34%
Methylene Chloride            72%    69%     37%       57%         36%
1,1-Dichloroethane            77%    76%     55%       68%         51%
cis-1,2-Dichloroethene        74%    74%     48%       57%         43%
Chloroform                    73%    74%     54%       65%         54%
trans-1,2-Dichloroethene      68%    71%     44%       53%         39%
Carbon Tetrachloride          63%    73%     52%       61%         51%
1,2-Dichloroethane            69%    72%     51%       60%         50%
Benzene                       77%    79%     55%       60%         51%
Trichloroethene               66%    72%     56%       71%         55%
1,2-Dichloropropane           77%    80%     55%       65%         53%
Bromodichloromethane          64%    64%     42%       58%         43%
cis-1,3-Dichloropropene      110%   100%    102%      114%         88%
Toluene                       53%    56%     50%       52%         41%
trans-1,3-Dichloropropene     36%    46%     33%       44%         28%
1,1,2-Trichloroethane         83%   117%     51%       83%         50%
Dibromochloromethane          98%   100%     54%       64%         53%
Chlorobenzene                 74%    70%     65%       62%         56%
o-Xylene                      63%    60%     54%       54%         50%
Ethylbenzene                  65%    61%     82%       87%         66%
Styrene                       66%    62%     58%       58%         54%
Bromoform                     66%    66%     56%       65%         56%
1,1,2,2-Tetrachloroethane     77%    76%     57%       67%         59%
Tetrachloroethene             58%    62%     68%       75%         57%
1,1,1-Trichloroethane         71%    78%     58%       69%         51%
m- & p-Xylene                107%    98%    117%      121%        106%
Chloroform-13C                77%    81%     75%       77%         79%
1,1-Dichloroethene-d2         92%    95%     82%       83%         81%
Benzene-d6                    82%    83%     82%       84%         87%
Vinyl Chloride-d3            104%   107%     99%      109%        112%
Bromoform-13C                 75%    84%     80%       72%         71%

Table 4: Initial Volatile Organic Results
1-129
-------
                                  Spike & Run                    Spike, Filter, & Run
                            Old Way    Old Way    New Way       Old Way    New Way
                            Old Sol'n  New Sol'n  New Sol'n     New Sol'n  New Sol'n
Chloromethane                  11%       108%        90%          120%       157%
Vinyl Chloride                 19%       126%       106%          143%       138%
Bromomethane                   48%       157%       141%          182%       135%
Chloroethane                   49%       146%       131%          168%       136%
1,1-Dichloroethene             34%        65%        60%           73%       109%
Methylene Chloride             42%        57%        54%           68%        86%
1,1-Dichloroethane             68%        56%        74%           90%       105%
cis-1,2-Dichloroethene         57%        75%        73%           85%       112%
Chloroform                     45%        50%        56%           60%       113%
trans-1,2-Dichloroethene       54%        79%        71%           85%       118%
Carbon Tetrachloride           62%        76%        81%           88%       110%
1,2-Dichloroethane             68%        64%        72%           79%       123%
Benzene                        58%        78%        87%           89%       108%
Trichloroethene                67%        79%        84%           95%       113%
1,2-Dichloropropane            68%        81%        88%           98%       114%
Bromodichloromethane           60%        75%        75%           87%        91%
cis-1,3-Dichloropropene        93%       103%       118%          105%       154%
Toluene                        34%        42%        49%           52%        98%
trans-1,3-Dichloropropene      32%        25%        30%           31%        44%
1,1,2-Trichloroethane          72%        54%        65%           74%       101%
Dibromochloromethane           57%        65%        62%           63%       127%
Chlorobenzene                  80%        88%        99%          104%       100%
o-Xylene                       60%        66%        78%           78%        97%
Ethylbenzene                   93%       101%       105%          104%       105%
Styrene                        66%        72%        86%           85%       106%
Bromoform                      37%        36%        44%           42%        90%
1,1,2,2-Tetrachloroethane      64%        59%        74%           69%        88%
Tetrachloroethene              62%        57%        56%           66%       121%
1,1,1-Trichloroethane          74%        84%        97%           93%       105%
m- & p-Xylene                 145%       161%       175%          177%       195%
Chloroform-13C                 90%        90%        87%           88%        97%
1,1-Dichloroethene-d2         102%        97%        87%          100%       100%
Benzene-d6                     93%        89%        83%           84%       103%
Vinyl Chloride-d3             106%        91%        85%           88%       103%
Bromoform-13C                  79%        74%        75%           70%        79%

Table 5: Spike Method Comparison
1-130
-------
19 LAND DISPOSAL RESTRICTIONS PROGRAM
DATA QUALITY INDICATORS FOR BDAT
CALCULATIONS: PAST AND FUTURE
Lisa Jones, Waste Management Division QA Coordinator, U.S. Environmental
Protection Agency, Office of Solid Waste, Waste Treatment Branch, 401 M
Street, SW, Washington, DC 20460; Justine Alchowiak, QA Coordinator,
Versar Inc., 6850 Versar Center, Springfield, VA 22151
ABSTRACT
The Land Disposal Restrictions (LDR) Program is continuing to develop and
re-examine treatment standards for each of the listed hazardous waste
codes using data of known quality from well-designed and well-operated
treatment systems. The numerical treatment standards are based on the
level of treatment achieved, the accuracy of the data, and the inherent
variability of the treatment processes, sampling, and analytical data.
Analytical data are collected by EPA or submitted by industry for the LDR
Program on an ongoing basis. Consistent with the quality assurance/
quality control (QA/QC) program mandated by the Office of Solid Waste and
Emergency Response, the quality and usability of the data being evaluated
are assessed based on the following data quality indicators: detection
limits, bias, precision, representativeness, and comparability.
EPA OSW's newly issued Quality Assurance/Quality Control Methodology
Background Document (QMBD) explains in detail the LDR Program's
requirements for data quality as well as the history behind existing LDR
standards. This paper discusses these data requirements in order to
present the "whole picture" of QA/QC in the LDR Program.
To verify that substantial treatment has occurred, when a treatment system
is evaluated as a candidate for BDAT, EPA examines data characterizing the
untreated waste and the treatment system residuals. Since treatment
systems frequently reduce the hazardous constituents in many wastes to
levels below detection limits, a detection limit is often used as the
numerical measure of the level of treatment that occurred. The detection
limit is also a factor considered in determining the applicability of the
analytical method for the waste matrices analyzed.
The LDR Program's QA protocols define bias as percent recovery of
laboratory matrix spikes. The results of the matrix spikes are used to
bias correct the data as well as to determine the extent of matrix
interferences on the usability of the data.
Similarly, precision is defined in terms of relative percent difference of
the matrix spike and matrix spike duplicate. The results of relative
percent difference are used to determine the reproducibility of the
1-131
-------
analytical procedure and can provide additional insight into matrix
interferences.
Representativeness is addressed through selection of appropriate sampling
locations and procedures. For the data to be applicable for calculating
treatment standards, the samples must be determined to be representative
of the waste and the treatment system.
Comparability is addressed through use of the same sampling and analytical
procedures. If data from more than one study are used, the results should
be comparable. Therefore, the samples should be collected using the same
procedures (i.e., grab or composite) and the analytical procedures should
be either the same or comparable for all the results to be applicable for
calculating treatment standards.
If the results of the data quality indicators meet the program's
objectives, then the data are defensible and can be used to evaluate the
treatment technology to determine whether it is the best demonstrated
available technology and to calculate concentration-based treatment
standards based on that technology.
INTRODUCTION
Under the Land Disposal Restrictions Program, EPA's Office of Solid Waste
(OSW) developed and promulgated regulations for the following scheduled
wastes:
Solvents and Dioxins - November 7, 1986
California Rule Wastes - July 8, 1987
First Third Wastes - August 8, 1988
Second Third Wastes - June 8, 1989
Third Third Wastes - May 8, 1990
Currently, EPA-OSW's Waste Treatment Branch is developing standards for
newly listed wastes (i.e., wastes listed since the 1984 HSWA Amendments).
EPA-OSW will also re-examine promulgated treatment standards as new
information on treatment technologies or analytical methods become
available to the EPA. As discussed in recent Advance Notices of Proposed
Rulemaking, EPA is actively soliciting treatability information for both
groups of wastes. Treatment standards developed and promulgated under the
LDR Program were, and will continue to be, based on the best data available
for treatment systems that were determined to be well-designed and
well-operated. The Quality Assurance Methodology Background Document is
intended to present EPA's criteria for accepting treatability data as the
basis of subsequent numerical treatment standards.
Analytical data used to develop concentration-based treatment standards can
come from many sources both inside and outside EPA. These may be
collected by EPA for the LDR Program or for other EPA programs (e.g.,
Office of Water's Effluent Guidelines Program) or may be submitted to EPA
1-132
-------
by industry, trade associations, etc., to be considered for inclusion in
the LDR Program's data base. The quality and usability of the data are
evaluated based on the data quality indicators: detection limit, accuracy,
precision, representativeness, and comparability.
EVALUATION OF DATA QUALITY INDICATORS
The results of the data quality indicators are important to determine:
• Appropriateness of the analytical method for the constituents analyzed.
• Bias of the analytical procedure, especially for bias-correction of the data.
• Determination of matrix interferences or other analytical problems.
• Reproducibility of the analytical results.
• Comparability of data, especially for comparability of data sets submitted
  from more than one source.
• Representativeness of samples, especially for comparison of data from more
  than one treatment system.
The LDR Program has established quantitative or qualitative guidelines for
each of the data quality indicators. The guidelines are used to evaluate
the acceptability of the data for developing treatment standards. EPA
OSW's Quality Assurance/Quality Control Methodology Background Document
(QMBD) explains in detail the LDR Program's requirements for data quality
as well as the history of the existing LDR treatment standards.
Detection Limits
To verify that substantial treatment has occurred, data characterizing the
untreated waste and the treatment system residuals are examined. Since
the hazardous constituents in many wastes are treated to non-detect
levels, especially for destruction technologies, a detection limit is
often used to quantify the level of treatment that has occurred and to
develop the treatment standards.
The detection limit may also be used to evaluate the appropriateness of
the analytical procedures. One of the goals of the LDR Program was to
obtain the best data with the lowest detection limits. A target detection
limit of at least 1 ppm was established for data collected specifically
for the LDR Program. However, lower detection limits were achieved for
most of the data collected. The reported detection limit is especially
important in evaluating data reported as non-detect. Data may be judged
to be unacceptable for calculating treatment standards if it could be
1-133
-------
determined that an inappropriate analytical procedure was used based on
the reported detection limits. For example, in the case of metals, if the
residuals and TCLP extracts were analyzed by flame atomic absorption
methods instead of graphite furnace or ICP methods, the detection limits
may be considered too high for the purpose of developing treatment
standards, since lower limits could have been achieved if the best
analytical method had been used.
Bias
Matrix spikes are completed to evaluate whether the laboratory is
performing adequately or whether there is a methodological problem such as
the presence of an interference or a systematic laboratory error. For the
LDR Program, a matrix spike and matrix spike duplicate are required for
the parameters of interest for treatment tests completed specifically for
the LDR Program. A minimum recovery value of 20 percent was determined to
be acceptable for the LDR Program. Data with recovery values below
20 percent are deemed unreliable, since low recovery values indicate the
presence of matrix interferences and may indicate difficulty in obtaining
reproducible analytical results. Therefore, data with low recoveries are
not used to develop treatment standards.
All data used to develop the treatment standards for the First, Second,
and Third Thirds were bias-corrected. If a matrix spike and a matrix
spike duplicate were completed, the lowest recovery value for the
constituent to be regulated is used. If a spike was not completed for the
constituent to be regulated, the lowest average recovery for the subset of
constituents that are representative of the constituent class and analyzed
by the same method (e.g., volatile organics, base-neutral organics, acid
extractable organics, etc.) may be used. It should be noted that, for the
Second and Third Thirds, data were not adjusted if the spike recoveries
were above 100 percent.
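A hedged sketch of these selection and correction rules, with illustrative
function names and example values:

```python
# Sketch of the LDR recovery-selection and bias-correction rules above.

def select_recovery(ms_pct: float, msd_pct: float) -> float:
    """Use the lower of the MS/MSD recoveries for the regulated constituent."""
    return min(ms_pct, msd_pct)

def bias_corrected(value: float, recovery_pct: float) -> float:
    """Correct for recovery, except that data are not adjusted when the
    spike recovery exceeds 100 percent (Second and Third Thirds practice)."""
    if recovery_pct > 100.0:
        return value
    return value / (recovery_pct / 100.0)

print(bias_corrected(4.0, select_recovery(72.0, 65.0)))  # 4.0 / 0.65 ~ 6.15
```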
All data used to develop treatment standards for newly listed wastes or
for revising promulgated standards will also be adjusted for recovery in
order to bring the reported concentration of the target analyte closer to
the true concentration, thus improving the overall accuracy of the value
that serves as the basis for the promulgated standard.
Precision
Precision is defined in terms of relative percent difference of the matrix
spike and matrix spike duplicate. The results of relative percent
difference (RPD) are used to determine the reproducibility of the
analytical procedure. No criteria were established for an acceptable RPD;
however, the RPD is evaluated to gain additional insight into any matrix
interference and to evaluate the reproducibility of the laboratory's
performance. Engineering judgment is used to evaluate analytical data
with RPDs exceeding 20 percent for all of the constituents spiked.
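The RPD computation assumed here is the conventional one: the absolute
difference of the matrix spike and matrix spike duplicate results over their
mean, expressed in percent.

```python
# Relative percent difference of matrix spike (MS) and duplicate (MSD).

def rpd(ms: float, msd: float) -> float:
    return 100.0 * abs(ms - msd) / ((ms + msd) / 2.0)

print(rpd(88.0, 72.0))  # -> 20.0, at the engineering-judgment threshold above
```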
1-134
-------
Representativeness
Representativeness is addressed through selection of appropriate sampling
locations and procedures. For the data to be applicable for calculating
treatment standards, the samples must be representative of the waste and
the treatment system. For a specific waste code, EPA uses data for the
most-difficult-to-treat waste to develop the treatment standards; in most
cases, that is the waste with the highest concentrations of the BDAT
constituents present. Therefore, to evaluate the information for this data
quality indicator it is important to evaluate both the untreated waste and
the treatment residuals that may be land disposed.
For the treatment residuals, the data must also be representative of the
material to be regulated. Since organic constituents can be destroyed, the
treatment standards for organics are based on total composition analysis.
For metals, since treatment technologies usually remove or immobilize the
metal, data are evaluated for both total composition and the TCLP extract,
for both the untreated waste and the treatment residual, to determine the
leachability of the metal.
Comparability
Comparability is addressed through use of the same sampling and analytical
procedures. If data from more than one study are used, the results must
be comparable. Therefore, documentation on how the samples were collected
(i.e., grab or composite) and on the analytical procedures used is
reviewed.
Most of the treatment standards were developed based on grab samples
analyzed using EPA-approved methods published in SW-846. However, this
does not preclude the use of composite data or of samples analyzed using
other valid analytical procedures.
BDAT CALCULATIONS FOR TREATMENT STANDARDS
The general approach for developing treatment standards was promulgated as
part of the November 7, 1986 Solvents and Dioxins Rule. Based on this
approach, EPA's treatment standards are based on the performance of the
"best demonstrated available technology" (BDAT) and are stated as
(1) concentrations of hazardous constituents in nonwastewater and
wastewater residues, (2) specific treatment technologies for the waste, or
(3) a combination of a specific treatment technology for a type of residue
and constituent concentrations.
All valid data available to the Agency will be considered in establishing
the treatment standards. All data either collected by EPA or submitted by
industry, etc. for a specific waste code are available to the public in
the Administrative Record either during proposal or promulgation
(depending upon the date of submission) of the rulemaking for the specific
waste code. Whatever the information source, however, the data underlying
1-135
-------
the performance standards must meet standards of quality assurance and
quality control. Therefore, information for the data quality indicators
is evaluated as discussed earlier. If insufficient information is
available for some of the indicators, engineering judgment may be used to
determine the adequacy of the data. If the data for the indicators are
totally nonexistent or judged to be substandard, the data may be
discarded. All data evaluations are presented in either the background
document for the specific waste and/or in the Administrative Record.
Information for the data quality indicators is collected during all of the
treatment tests conducted by EPA-OSW specifically in support of the LDR
Program. The adequacy of the data is evaluated on a case-by-case basis
and only data meeting the criteria described previously are used to develop
concentration-based treatment standards.
The final step in setting a performance standard is to define the maximum
acceptable constituent levels in treatment residuals based on the
performance of the technologies proven to be both demonstrated and
available for the waste.
All concentration data with spike recoveries between 20 and 100 percent
are bias-corrected. The average treatment value observed in all of the
acceptable data is then multiplied by the "variability factor." The
variability factor takes into account the fluctuations in performance that
may result from inherent mechanical limitations in treatment control
systems, treatability variations caused by changing influent levels,
variations in procedures for collecting treated samples, or variations in
sample analysis.
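Pulling this section's steps together, a minimal sketch under the stated
rules (the data and the variability factor value are hypothetical):

```python
# Sketch: discard results with spike recoveries under 20%, bias-correct
# those between 20 and 100%, leave recoveries above 100% unadjusted,
# average, and multiply by the variability factor.

def treatment_standard(results, recoveries_pct, variability_factor):
    usable = []
    for value, rec in zip(results, recoveries_pct):
        if rec < 20.0:
            continue                        # unreliable; not used (see Bias)
        if rec <= 100.0:
            value = value / (rec / 100.0)   # bias correction
        usable.append(value)
    return variability_factor * sum(usable) / len(usable)

print(treatment_standard([1.2, 0.9, 1.5], [80.0, 95.0, 15.0], 2.8))
```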
Only one major change was made between November 1986 and June 1990 in the
methodology used to calculate the treatment standards. In the solvents
and dioxins rule, the treatment standards were based on TCLP results for
both the organic and inorganic constituents. For the Thirds, the
standards and all subsequent regulations were based on total composition
data for organic constituents and TCLP extract data only for the metal
constituents.
If analytical data of adequate quality or an appropriate analytical
procedure are not available, the Agency sets a performance standard based
on a specific treatment method.
1-136
-------
SUMMATION
Data collected by EPA for the LDR Program have sufficient information
available to evaluate all of the data quality indicators. No changes in
the basic methodology are expected to be made for future regulations. Any
changes in the methodology would be proposed and published for comment
before the change would be implemented. As new data become available or
new analytical procedures are developed, EPA may re-examine promulgated
treatment standards to determine if a revision in the standard is
necessary.
Therefore, data submitted for potential inclusion in the future
rulemakings should have at a minimum information on the following:
Analytical data for untreated waste
Analytical data for treatment residuals (total composition and TCLP
extracts for all inorganic constituents)
Analytical methods used and any modifications
Detection limits
Matrix spike recoveries
Analytical precision (from matrix spike and matrix spike
recoveries or from duplicate analysis)
Sampling method (i.e., grab or composite)
REFERENCES
U.S. EPA. 1986. U.S. Environmental Protection Agency, Office of Solid
Waste and Emergency Response. Test Methods for Evaluating Solid Waste;
Physical/Chemical Methods. SW-846 (3rd Edition), Washington, D.C.,
November 1986.
U.S. EPA. 1987. U.S. Environmental Protection Agency, Office of Solid
Waste. OSW's Generic Quality Assurance Project Plan for Land Disposal
Restrictions Program (BDAT). March 1987. Washington, D.C.: U.S.
Environmental Protection Agency.
U.S. EPA. 1991. U.S. Environmental Protection Agency, Office of Solid
Waste. Draft. Best Demonstrated Available Technology (BDAT) Background
Document for Quality Assurance/Quality Control Procedures and Methodology
(QMBD).
1-137
-------
20 COMPARISON OF QUALITY ASSURANCE/QUALITY CONTROL
REQUIREMENTS FOR DIOXIN/FURAN METHODS
Dennis Hooton, Senior Chemist, Midwest Research Institute, 425 Volker Boulevard,
Kansas City, Missouri 64110
ABSTRACT
Fundamental areas of quality assurance/quality control are compared for several
dioxin/furan analysis methods which use high-resolution gas chromatography/high
resolution mass spectrometry (HRGC/HRMS). These methods are used for analyzing
various environmental matrices and for testing emissions from combustion sources.
General areas of compatibility and differences are discussed, while comparative items are
categorized into tables for EPA Methods 8290, 23, and 1613, and also California ARB
Method 428.
INTRODUCTION
Numerous analytical methods exist for the determination of polychlorinated dibenzo-p-
dioxins (PCDDs) and polychlorinated dibenzofurans (PCDFs). Many of these methods
have been promulgated or drafted under federal and state agencies for use in
environmental programs. These methods involve diverse matrices, and some are targeted
toward specific regulatory applications, such as testing of hazardous waste incinerators
under TSCA, CERCLA site remediations, and for development of national effluent
limitation guidelines.
Although the basic techniques of isotope dilution, gas chromatography, and mass
spectrometry used in identifying and quantifying these compounds are similar across these
reference methods, regulatory applications are complicated by subtle variances in quality
assurance/quality control (QA/QC) requirements imposed by the selection of a particular
method. The goal of this paper is to simplify the technical review and application of
these methods by comparing the various QA/QC elements. This information is
categorized for some of the more commonly used methods into general descriptions,
procedural checks, calibration controls, validation criteria, and external verification
1-138
-------
recommendations. These comparisons may be helpful in formulating test plans and in
data review.
DIOXIN METHODS
For this paper, the following four methods were examined:
• EPA Method 8290—"Analytical Procedures and Quality Assurance for
Multimedia Analysis of PCDDs and PCDFs by High Resolution Gas
Chromatography/High Resolution Mass Spectrometry (HRGC/HRMS),"
EMSL-Las Vegas draft dated May 1987. Although this method has never been
promulgated, it has been widely used to analyze a variety of matrices
including soil, sediment, water, biota, etc. It includes matrix-specific extraction
and analyte-specific cleanup techniques.
• EPA Method 23—"Determination of PCDDs and PCDFs from Stationary
Sources," Final Rule, Federal Register, February 13, 1991. This method has
been promulgated to regulate emissions from municipal waste combustors and is
also being used for trial burns on hazardous waste incinerators. The method
includes descriptions of stack sampling procedures and recovery of the sampling
train components for the PCDD/PCDF analyses.
• ARB Method 428—"Determination of PCDD, PCDF, and PCB Emissions from
Stationary Sources," California Air Resources Board, March 1988. Similar to
Method 23, this state-promulgated method also involves a modified Method 5
source sampling system for collection, recovery, and analysis of source emissions
for PCDD/PCDF.
• EPA Method 1613—"Tetra- through Octa- Chlorinated Dioxins and Furans by
Isotope Dilution HRGC/HRMS," Revision A, October 1990. This method was
developed by the Industrial Technology Division within the U.S. Environmental
Protection Agency's Office of Water Regulations and Standards to provide
regulatory test data on waters, soils, sludges, and other matrices.
1-139
-------
As indicated in Tables 1 through 6, there are many similarities and also subtle differences
between the methods. The following sections present some general discussions in these
areas.
GENERAL AND PROCEDURAL DIFFERENCES IN METHODS
One fundamental difference between these methods is in sampling. Two of the methods
(23 and 428) include protocols for sample collection and recovery from combustion
sources, while the other two procedures focus on the analysis, with only limited
discussion relating to field sampling activities.
Of the sampling procedures, Method 23 and ARB 428, although very similar, do differ
in solvent recovery protocols for the sampling train. Both add labeled congeners to the
XAD adsorbent of the sampling train, prior to sample collection, to monitor train
performance. Method 428 also requires correction of source emission results, if field
surrogate recoveries are below 70%.
For the analytical methods, 1613 has more restrictive criteria for data acceptance than the
draft Method 8290, primarily in the increased number of labeled congeners added prior
to extraction, in specifications for initial demonstration of precision and recovery, and in
continuing calibration verification limits.
Control blanks are required for all methods in the forms of blank sampling trains,
sampling equipment rinsates, and laboratory method blanks.
Sample holding times are only discussed in Method 1613, although the QA sections of
SW-846 provide holding time guidance for aqueous matrices which would be applicable
to Method 8290.
General and procedural comparisons for the methods are presented in Tables 1 and 2.
CALIBRATION REQUIREMENTS
As shown in Table 3, only Method 8290 requires the use of a 7-point initial calibration
curve, while the other methods require 5-point curves. Differences occur in acceptance
1-140
-------
limits for both native and labeled congeners, although none of the methods discuss the
rationale for the selection of these limits. Requirements for ion abundance ratios are the
same for all methods. Mass spectrometer tuning and column performance checks are
standard across the methods.
Continuing calibration requirements are significantly lower for some congeners in
Method 1613, as low as ±6% difference for some congeners versus 20% to 30%
difference used by the other methods. This is an important control parameter in that it
triggers the need to stop analyses and recalibrate the instrument.
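As a minimal sketch of that trigger (the response-factor values are
hypothetical; the limit argument stands in for the method-specific
percent-difference criterion, as low as 6% for some Method 1613 congeners):

```python
# Continuing-calibration check: percent difference from the initial
# calibration mean response factor against a method-specific limit.

def continuing_cal_ok(measured_rf: float, initial_mean_rf: float,
                      limit_pct: float) -> bool:
    percent_diff = 100.0 * abs(measured_rf - initial_mean_rf) / initial_mean_rf
    return percent_diff <= limit_pct

if not continuing_cal_ok(1.04, 1.18, 6.0):      # ~11.9% difference
    print("stop analyses and recalibrate the instrument")
```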
VALIDATION CRITERIA
As presented in Table 4, qualitative determination of dioxin/furan congeners is similar
in most respects, except for differences in the requirements for matching relative
retention times of samples to standards. Confirmatory analyses are required by all methods for
positive identification of 2,3,7,8-substituted congeners.
Most of the methods discuss handling of non-detect values and the estimation of detection
limits as a reportable value. This generally involves estimation of concentrations based
on background noise or reporting values as "maximum possible concentration" for
chemical hits which may match standard retention times but are outside of ion ratio
criteria.
Because these methods involve trace analysis, there are discussions on reagent screening
techniques, glassware tracking, glassware cleaning, and decontamination throughout the
methods. However, the methods do not always indicate how to handle blank
contamination problems other than to discard contaminated sources and restart the
analytical process. Method 8290 does discuss criteria for acceptable method blanks.
Duplicate analysis of field samples (and criteria for acceptance) is only discussed in
Method 8290. However, source testing typically involves multiple runs and samples that
are representative of one test condition, which may be used for comparison. Surrogate
recoveries across like matrices, as discussed in Method 1613, also provide a means to
evaluate analytical precision.
1-141
-------
Acceptance criteria for surrogate recoveries of labeled congeners vary significantly by
method, ranging from 25% to 150% recoveries. The concept and use of the isotope
dilution technique for quantitation allows these tolerances without invalidating the test
data. Poor surrogate recoveries, similar to poor detection limits, may be due to real
matrix effects and/or interferences, and not reflective of analyst performance. As
described in Method 1613, the technique of adding a known amount of a labeled
compound to every sample prior to extraction allows the native compound to be corrected
for recovery because the native compound and its labeled analog exhibit similar effects
through extraction, concentration, and gas chromatography.
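Reduced to its arithmetic core, that correction looks like the following
sketch; actual methods quantify against calibrated relative response factors,
so the values here are only illustrative:

```python
# Isotope-dilution recovery correction, arithmetic core only.

def recovery_corrected(native_result: float,
                       labeled_found: float,
                       labeled_added: float) -> float:
    """Correct a native congener result by its labeled analog's recovery,
    assuming both suffer the same losses through extraction,
    concentration, and chromatography."""
    recovery = labeled_found / labeled_added     # e.g., 0.40 for 40% recovery
    return native_result / recovery

# A 5.0 pg native result with 40% recovery of the labeled analog
# is reported as 12.5 pg.
print(recovery_corrected(5.0, 0.4, 1.0))  # -> 12.5
```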
Table 6 lists the calibration ranges, native and labeled congeners for each method.
EXTERNAL VERIFICATION CHECKS
Because these complex methods are routinely used in regulatory-based decisions, all the
methods specify that an ongoing quality assurance program be implemented by the
laboratory. This encompasses many more areas of QA/QC than are presented in the
methods, such as document control, audits, control charting, etc. However, the methods
suggest several ways that this may be accomplished, including: using standards traceable
to EPA reference materials or other certified standards, using EPA audit samples (spiked
XAD) for source testing, and using field-associated QC samples (field replicates, rinsates,
etc.) to demonstrate the overall quality of sampling and analysis test data.
SUMMARY
In researching this paper, several observations were apparent:
• Successful use of these methods requires a comprehensive QC system that
incorporates procedural elements which are common to all the methods, including
laboratory management, sample handling, instrument operating parameters, and
elimination of potential sources of contamination. In addition, a QA program is
needed to ensure that test data for environmental programs are both internally
consistent and comparable to other EPA data.
1-142
-------
• Control limits and acceptance criteria vary by method for initial/continuing
calibrations, surrogate recoveries, and control samples.
• Validation of test data relies on proper calibration, acceptable method blanks,
surrogate recoveries, qualitative identification criteria, certification of standard
reference materials, and confirmatory analysis of suspect samples. Documentation
of these control parameters is essential for completing technical reviews and
audits.
Notwithstanding the inherent method similarities and differences discussed above, the
authors' discussions of particular technical areas vary in degree of clarity and emphasis,
so that there is complementary information shared by all methods. This diversity of
information gives the reader a better understanding of dioxin analysis techniques,
regardless of the actual method chosen for an environmental program.
1-143
-------
TABLE 1. PCDD/PCDF METHOD COMPARISONS

Application:
  Method 8290: PCDD/PCDF in soil, sediment, water, fly ash, sludge, fuel oil,
    paper, and biological samples
  Method 23: PCDD/PCDF from stationary sources
  ARB 428: PCDD/PCDF and PCB emissions from stationary sources (regulatory
    compliance)
  Method 1613: PCDD/PCDF in water, soil, and other solid matrices

Status:
  Method 8290: EPA draft, May 1987
  Method 23: EPA final rule, Federal Register, February 1991
  ARB 428: Adopted March 23, 1988 by CARB
  Method 1613: USEPA, Revision A, October 1990

Sampling protocol:
  Method 8290: No
  Method 23: Yes, Modified Method 5 (filter/XAD)
  ARB 428: Yes, modified California version of Method 5 (filter/XAD)
    [minimum 3 runs @ 3 h each]
  Method 1613: No

Field spiked surrogates:
  Method 8290: NA
  Method 23: Yes
  ARB 428: Yes
  Method 1613: NA

Preclean and screen reagents and glassware:
  Method 8290: Yes
  Method 23: Yes
  ARB 428: Yes
  Method 1613: Yes, protocol specifies

Sample train recovery:
  Method 8290: NA
  Method 23: 1. Acetone; 2. Methylene chloride; 3. Toluene (separate analysis
    as "QA rinse")
  ARB 428: 1. Methanol; 2. Benzene; 3. Methylene chloride
  Method 1613: NA

Analysis:
  Method 8290:
  • Spike samples with internal standards
  • Extract aqueous with methylene chloride
  • DB-5 column recommended
  • Confirmation column required for positive identification of all isomers
  Method 23:
  • Spike XAD with internal standards
  • Reduce aqueous component and combine with XAD
  • Soxhlet extract with toluene
  • Split sample in half for archiving
  • Column cleanups: silica gel, alumina, carbon/Celite
  • No acid cleanup discussed
  • HRGC/HRMS with DB-5 column
  • Confirmation using SP-2330 or SP-2331
  ARB 428:
  • Spike XAD with internal standards (none is added to aqueous)
  • Extract aqueous with methylene chloride, then add to and extract with
    filter/XAD
  • Extract with benzene or toluene
  • Acid cleanup
  • Column cleanups: silica gel, alumina, carbon/Celite
  • HRGC/HRMS with DB-5
  • Confirmatory analysis using SP-2330 or SP-2331
  Method 1613:
  • Spike samples with 15 labeled analogs
  • Labeled TCDD added after extraction to measure efficiency of cleanup steps
  • Internal standards added prior to analysis for quantitation
  • Sample cleanup by GPC, HPLC, or column chromatography described (column
    calibration required)
TABLE 2. PROCEDURAL QA/QC

Lab method blank
  Method 8290:  Yes, 1 per batch of 24
  Method 23:    Yes
  ARB 428:      Ongoing
  Method 1613:  1 blank control matrix per batch of 20 samples

Matrix spike/MSD
  Method 8290:  Yes, duplicate spike per batch of 24
  Method 23:    All XAD traps are spiked with labeled congeners prior to sampling
  ARB 428:      All XAD traps are spiked with labeled congeners prior to sampling
  Method 1613:  Initial demonstration of recovery and precision from 4 matrix spikes is required; also, 1 control sample per batch is analyzed

Holding times
  Method 8290:  Not discussed; refer to SW-846
  Method 23:    Not discussed
  ARB 428:      Not discussed
  Method 1613:  1 yr @ 4°C for samples; 40 d @ 4°C for extracts

Homogenization of sample
  Method 8290:  Yes (% moisture of soils)
  Method 23:    Complete analysis (except for toluene "QA" rinse)
  ARB 428:      Complete analysis
  Method 1613:  Yes, % moisture of soils

Field blank
  Method 8290:  Rinsate of sampling equipment; 1 "uncontaminated" field blank per 24
  Method 23:    Blank train
  ARB 428:      1 blank train per three runs
  Method 1613:  Recommended

QC check sample
  Method 8290:  One performance evaluation sample in all batches
  Method 23:    EPA audit sample required for regulatory basis
  ARB 428:      EPA or other independent audit sample
  Method 1613:  Ongoing precision and accuracy control samples
TABLE 3. CALIBRATION QA/QC

Number of standards in calibration curve
  Method 8290:  7-point
  Method 23:    5-point (option for two levels)
  ARB 428:      5-point (replacement of calibration standards every 6 months)
  Method 1613:  5-point; standard solutions are analyzed within 48 h of preparation and on a monthly basis to check for degradation

Initial calibration precision
  Method 8290:  20% RSD
  Method 23:    25% RSD (30% RSD for native OCDF and some labeled congeners)
  ARB 428:      15% RSD
  Method 1613:  <20% coefficient of variation for calibration by isotope dilution; <35% coefficient of variation for calculation by internal standard

Ion abundance ratios
  All four methods:  ±15% of theory

Continuing calibration (every 12 h)
  Method 8290:  20% D (bracketing RRFs used for quantitation if RRF is between 20-25% D)
  Method 23:    25% D (30% D for OCDF and some labeled congeners)
  ARB 428:      30% D
  Method 1613:  varies from 6% to 20% D allowed

MS tuning and accuracy check
  Method 8290:  PFK (every 12 h)
  Method 23:    PFK
  ARB 428:      PFK (every 12 h)
  Method 1613:  PFK (every 12 h)

GC column performance check
  Method 8290 (initially and every 12 h):
  • First and last eluters labeled on chromatograms
  • Presence (current switching) of both 1,2,8,9-TCDD and 1,3,4,6,8-PeCDF
  Method 23 (initially, daily):
  • 25% valley for 2,3,7,8-TCDD
  • Established RT windows for homolog series
  ARB 428 (initially, daily):
  • 25% valley for 2,3,7,8-TCDD
  • Meets ion abundance ratio criteria
  • Minimum S/N of 5:1
  • Mass correction for m/e 328 for native TCDD
  • Retention windows for homolog series
  Method 1613 (initially and every 12 h):
  • Mass drift correction using PFK
  • Verify ion abundance ratios, minimum levels, and signal-to-noise ratios
  • Column window-defining mix
TABLE 4. QC DATA AND ACCEPTANCE CRITERIA

Identification
  Method 8290:
  • Ions maximize within 2 s
  • S/N ≥ 2.5 for both ions
  • RT match of -1 to +3 s of standard
  • Ion abundance ratio within ±15% limit
  • RT within homolog window
  • Confirmatory analysis on second column (quantitation is specified by column & congener)
  • No PCDPE interference
  Method 23:
  • Simultaneous (±2 s) detection of ions
  • S/N ≥ 2.5 for both ions
  • RT match of ±3 s of standard
  • Ion abundance ratio within ±15% limit
  • RT within homolog window
  • Confirmatory analysis on second column (quantitation is specified by column & congener)
  • No PCDPE interference
  ARB 428:
  • Simultaneous (±2 s) detection of ions
  • S/N ≥ 2.5 for standards, > 10 for samples (both ions)
  • Ion abundance ratio within ±15% limit
  • RT within homolog window
  • Confirmatory analysis on second column (quantitation is specified by column & congener)
  • No PCDPE interference
  Method 1613:
  • Ions maximize within ±2 s of one another
  • S/N ≥ 2.5 for sample extract and ≥ 10 for a calibration standard
  • Ion abundance ratios within ±15% limits
  • Relative retention time windows within column performance mix
  • Confirmatory analysis for 2,3,7,8-congeners using second column
  • No PCDPE interference

Method blanks
  Method 8290:  Method blank per batch (daily, before samples); internal standards > 10:1 S/N; background below 10% of target detection limit
  Method 23:    No blank criteria given
  ARB 428:      No blank criteria given
  Method 1613:  All materials must be demonstrated to be interferant-free; glassware tracking recommended

Field surrogates
  Method 8290:  NA
  Method 23:    70-130% R (correct test results if R < 70%)
  ARB 428:      60-140% R
  Method 1613:  NA

Internal standard recovery
  Method 8290:  40-120% R
  Method 23:    40-130% R (tetra-hexa), 25-130% R (hepta-octa); data are still acceptable if S/N ≥ 10 for detected PCDD/PCDF
  ARB 428:      40-120% R
  Method 1613:  25-150% R

Duplicate sample analysis
  Method 8290:  < 25% D for 2,3,7,8-congeners
  Method 23:    Not discussed
  ARB 428:      Field replicates (no criteria given)
  Method 1613:  Not discussed

MS/MSD
  Method 8290:  ≤ 20% D for 2,3,7,8-congeners
  Method 23:    Not discussed
  ARB 428:      Not discussed
  Method 1613:  Acceptance criteria for initial and continuing performance tests are listed

Detection limits
  Method 8290:  Concentration corresponding to 7.5x the background noise
  Method 23:    Report Theoretical Minimum Quantifiable Level (TMQL) as: TMQL (pg) = lowest STD (pg/uL) x final extract volume / recovery of internal standards
  ARB 428:      Concentration corresponding to 2.5x noise level; ions outside ion abundance criteria reported as "estimated maximum possible concentration"
  Method 1613:  Minimum levels are listed for water, solid, and extract
TABLE 5. EXTERNAL VERIFICATION

METHOD 8290
  Reporting requirements:
  • Results of at least one GC column performance check; first ("F") and last ("L") eluters labeled on chromatogram
  • Results of 2 mass resolution checks and continuing calibration checks during a 12-h period
  • Toxic equivalency factors (if needed)
  Method recommendations:
  • Calibration standards traceable to EPA (EMSL-LV) reference materials
    - verification data and supporting records on file
    - -1 to +3 s match in retention times
    - ≤ 20% difference in concentration between standards and EPA reference
    - standard preparation and traceability records on file

METHOD 23
  Reporting requirements:
  • Internal standard percent recoveries
  • Field surrogate recoveries
  • Analysis results of toluene QA rinse
  • TMQLs (detection limits)
  • Results for 2,3,7,8-congeners and totals
  Method recommendations:
  • EPA audit sample (for regulatory tests)

ARB 428
  Reporting requirements:
  • Detection limits = 2.5 x noise
  • "Maximum Possible Concentration" for hits that do not meet ion abundance criteria
  • Deviations from method
  • Totals and specific 2,3,7,8-substituted congeners
  • Sample numbers, source, and chain-of-custody records
  • Dates of submittal and GC/MS analysis
  • Raw data (mass intensities)
  • Ion ratios for PCDD/PCDF detected
  • % recoveries of internal standards
  • Recovery of spiked samples
  • Summary of calibration data: mean RRFs; RSDs for 5-point curve; acceptable continuing calibration checks for each 12 h
  • Traceability records and sequence of analysis
  Method recommendations:
  • Verification of 2,3,7,8-TCDD standards to EPA reference materials
  • Formal QA program: ongoing analysis of spiked samples; records of past performance; ongoing screens and method blanks; field replicates for overall S&A precision

METHOD 1613
  Reporting requirements:
  • Report values to 3 significant figures
  • Multiple forms provided for reporting QC and test data
  Method recommendations:
  • Control charting required
  • Standards must be certified for purity, concentration, and authenticity
TABLE 6. ANALYTE LIST AND CALIBRATION RANGES

Calibration ranges:
  Method 8290:  2.5-1,000 pg/uL
  Method 23:    0.5-1,000 pg/uL (low); 5-10,000 pg/uL (high)
  Method 428:   5-10,000 pg/uL
  Method 1613:  0.5-2,000 pg/uL

UNLABELED ANALYTES (17)

 No.  Compound               RRF ref   8290   23   428   1613
  1   2,3,7,8-TCDD           A          X     X     X     X
  2   2,3,7,8-TCDF           B          X     X     X     X
  3   1,2,3,7,8-PeCDD        C          X     X     X     X
  4   1,2,3,7,8-PeCDF        D          X     X     X     X
  5   2,3,4,7,8-PeCDF        D          X     X     X     X
  6   1,2,3,4,7,8-HxCDD      E          X     X     X     X
  7   1,2,3,6,7,8-HxCDD      E          X     X     X     X
  8   1,2,3,7,8,9-HxCDD      E          X     X     X     X
  9   1,2,3,4,7,8-HxCDF      F          X     X     X     X
 10   1,2,3,6,7,8-HxCDF      F          X     X     X     X
 11   1,2,3,7,8,9-HxCDF      F          X     X     X     X
 12   2,3,4,6,7,8-HxCDF      F          X     X     X     X
 13   1,2,3,4,6,7,8-HpCDD    G          X     X     X     X
 14   1,2,3,4,6,7,8-HpCDF    H          X     X     X     X
 15   1,2,3,4,7,8,9-HpCDF    H          X     X     X     X
 16   OCDD                   I          X     X     X     X
 17   OCDF                   I          X     X     X     X

(continued)
TABLE 6 (continued)

INTERNAL STANDARDS (10) -- spiked onto sample prior to extraction

 Compound                     RRF ref   8290   23               428                1613
 13C12-2,3,7,8-TCDD           AA         X     X                X                  X
 13C12-2,3,7,8-TCDF           AA         X     X                X                  X
 13C12-1,2,3,7,8-PeCDD        AA         X     X                X                  X
 13C12-1,2,3,7,8-PeCDF        AA         X     X                X                  X
 13C12-1,2,3,6,7,8-HxCDD      BB         X     X                X                  X
 13C12-1,2,3,4,7,8-HxCDD      BB         X     (1,2,3,6,7,8)a   (1,2,3,6,7,8)a     X
 13C12-1,2,3,4,6,7,8-HpCDD    BB         X     X                X                  X
 13C12-1,2,3,4,6,7,8-HpCDF    BB         X     X                (1,2,3,4,7,8,9)a   X
 13C12-OCDD                   BB         X     X                X                  X
 13C12-2,3,4,6,7,8-HxCDF      BB         -     -                -                  X

RECOVERY STANDARDS (2) -- spiked into extract prior to analysis

 Compound                     RRF ref   8290   23    428               1613
 13C12-1,2,3,4-TCDD           -          X     X     X                 X
 13C12-1,2,3,7,8,9-HxCDD      -          X     X     (1,2,3,4,7,8)a    X

SURROGATE STANDARDS (6) -- spiked onto sample prior to sampling

 Compound                     RRF ref   8290   23              428                1613
 37Cl-2,3,7,8-TCDD            A          -     X               X                  Xb
 13C12-2,3,4,7,8-PeCDF        D          -     X               X                  Xc
 13C12-1,2,3,6,7,8-HxCDD      E          -     X               -                  -
 13C12-1,2,3,4,7,8-HxCDF      F          -     X               X                  Xc (1,2,3,6,7,8)a
 13C12-1,2,3,4,7,8,9-HpCDF    H          -     X               (1,2,3,4,6,7,8)a   Xc
 13C12-1,2,3,7,8,9-HxCDF      F          -     X (alternate)   X                  Xc

a Parentheses indicate alternate or other congener specification by method.
b Spiked into sample after initial extraction, prior to cleanup, to check column
performance.
c Method 1613 uses these labeled congeners as internal standards, i.e., spiked
into sample prior to extraction.
A STUDY OF METHOD DETECTION LIMITS IN
ELEMENTAL SOLID WASTE ANALYSIS
DAVID E. KIMBROUGH, PUBLIC HEALTH CHEMIST, JANICE WAKAKUWA, SUPERVISING
CHEMIST, CALIFORNIA DEPARTMENT OF HEALTH SERVICES, SOUTHERN CALIFORNIA
LABORATORY, 1449 W. TEMPLE STREET, LOS ANGELES, CALIFORNIA 90026-5698.
ABSTRACT
Two types of data are generated when determining the concentrations of
regulated elements in solid matrices (e.g., sediments, sludges, soils,
spent catalysts, press cakes, slags, powders, etc.). These are numerical
values that indicate the amount of analyte present, and "None Detected"
or "Less Than" values. The latter values define, for a given analyte, the
smallest amount that can be quantified. A positive numerical value is
usually defined within a set of precision and accuracy criteria. A "less
than" value on an analytical report is as much a data point as a numerical
value and should be determined with equivalent precision and accuracy.
By far the most common method for determining these "less than" values is
the U.S. EPA's Method Detection Limit (MDL)(1,2), based on the work of
Glaser et al.(3) This method was developed for trace analysis of organic
analytes in water matrices. MDLs have, however, been used extensively for
inorganic analyses and solid matrix analyses without examination of their
applicability.
This study attempts to assess the applicability of the MDL method to solid
matrix analysis. The study compares the calculated MDLs for five analytes
in soil (arsenic, cadmium, molybdenum, selenium, and thallium) with method
performance at concentrations above and below the calculated MDL. The MDL
method is examined both for its empirical suitability for solid waste
analysis and for whether it provides the proper theoretical tools for solid
matrices.
INTRODUCTION
When determining the concentration of regulated elements in solid matrices
(e.g., sediments, sludges, soils, spent catalysts, press cakes, slags,
powders, etc.), two types of data are generated: numerical values
indicating the amount of an analyte present, and "None Detected" or "Less
Than" values that let the data user estimate the smallest amount of the
analyte that can reasonably be quantified. A positive numerical value is
usually defined within a set of precision and accuracy criteria. A "less
than" value on an analytical report is as much a data point as a numerical
value and should be determined with equal precision and accuracy. The most
common method for determining this "less than" value is the U.S. EPA's
Method Detection Limit (MDL), based on the work of Glaser et al. This
method was developed for trace analysis of organic analytes in water
matrices. MDLs have, however, been used extensively for inorganic and
solid matrix analyses without an examination of their applicability to the
procedures.
This study attempts to assess the applicability of the MDL method to solid
matrix analysis. A comparison is made of the calculated MDLs for five
analytes in soil (arsenic, cadmium, molybdenum, selenium, and thallium)
with method performance at concentrations above and below the calculated
MDL. The MDL method is examined both for its empirical suitability for
solid waste analysis and for its appropriateness as a theoretical tool for
estimating MDLs in solid matrices.
METHODS
A) The USEPA's MDL procedure and that of Glaser et al. are both six-step
procedures designed to be used on any matrix. The five-step portion of
these procedures that is applicable to solid wastes is presented below.
1) Estimate the MDL by one of four procedures:
a) The concentration that corresponds to an instrument signal-to-noise
ratio of 2.5 to 5.
b) The concentration value that corresponds to three times the
standard deviation of replicate instrumental measurements for the
analyte in reagent water.
c) The concentration value that corresponds to the region where there
is a significant change in sensitivity at low analyte concentrations.
d) The concentration value that corresponds to the known instrument
limitations.
(This will be referred to as the estimated MDL.)
2) Obtain a solid material corresponding to the matrix type for which
the MDL is to be determined. The material must have a concentration of
the analyte(s) of interest at one to five times (but not to exceed ten
times) the estimated MDL.
3) Take a minimum of seven (7) aliquots of the material and process
each through the entire analytical procedure. Calculate the results back
to solid phase with the appropriate units such as mg/kg or ug/g.
4) Calculate the standard deviation (S) using equation 1, as follows:

    S = sqrt[ (sum(xi^2) - (sum(xi))^2 / n) / (n - 1) ]          Eq. 1
5) Calculate the MDL by equation 2, as follows:

    MDL = t * S          Eq. 2

where t is the Student's t value appropriate for a 99% confidence
level with n - 1 degrees of freedom. Since n is equal to seven, the
Student's t value is 3.143. (This will be referred to as the
calculated MDL.)
6) There is an optional step which calls for the preparation of a
material spiked exactly at the calculated MDL and repeating steps 3-5.
This will be referred to as the iterative procedure. If the calculated
MDL results in a spiked concentration that does not allow qualitative
identification, then repeat steps 3-5 with a higher spiked
concentration. If the spiked concentration is qualitatively
identifiable but the standard deviation of the seven replicates of the
spike (Sb) is more than 3.05 times the standard deviation of the
calculated MDL determination (Sa), then another spiked material must be
prepared. If the ratio of Sa to Sb is less than 3.05, then the MDL is
recalculated by pooling the data following equations 3 and 4:

    S(pooled) = sqrt[ (6*Sa^2 + 6*Sb^2) / 12 ]          Eq. 3

    MDL = 2.681 * S(pooled)          Eq. 4

(This will be referred to as the pooled MDL.)
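The following is a minimal sketch of steps 4 through 6 in code, assuming the t-values
quoted above (3.143 for seven replicates; 2.681 for the pooled case with 12 degrees of
freedom). The replicate results shown are hypothetical, and the standard deviations fed
to the pooling step are the arsenic values from Table I.

    import math

    def std_dev(x):
        """Sample standard deviation, Eq. 1."""
        n = len(x)
        return math.sqrt((sum(v * v for v in x) - sum(x) ** 2 / n) / (n - 1))

    def calculated_mdl(replicates, t=3.143):
        """Eq. 2: MDL = t * S for seven replicate results (e.g., in ug/g)."""
        return t * std_dev(replicates)

    def pooled_mdl(s_a, s_b, n_a=7, n_b=7, t=2.681):
        """Eqs. 3-4: pool the two standard deviations, then MDL = t * S(pooled)."""
        s_pooled = math.sqrt(((n_a - 1) * s_a**2 + (n_b - 1) * s_b**2) / (n_a + n_b - 2))
        return t * s_pooled

    seven = [506, 497, 512, 501, 495, 510, 503]   # hypothetical ug/g results
    print(calculated_mdl(seven))                   # calculated MDL from one material
    print(pooled_mdl(s_a=9.7, s_b=13.3))           # pooling, per step 6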
B) Percent Inaccuracy (%I) is defined here as the absolute difference
between the spiked value (Xs) of a soil and the mean measured value (Xm),
divided by the spiked value, times 100:

    %I = |Xs - Xm| / Xs * 100          Eq. 5
C) Percent Relative Standard Deviation (%RSD) is defined as the standard
deviation, as defined above, divided by the mean value, times 100:

    %RSD = (S / mean) * 100          Eq. 6
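A short sketch of Eqs. 5 and 6 follows; the function names are illustrative, and the
check values are taken from the arsenic 50 ug/g entries in Table I.

    def percent_inaccuracy(spiked, mean_measured):
        """Eq. 5: %I = |Xs - Xm| / Xs * 100."""
        return abs(spiked - mean_measured) / spiked * 100

    def percent_rsd(std, mean):
        """Eq. 6: %RSD = S / mean * 100."""
        return std / mean * 100

    print(percent_inaccuracy(50, 78.7))   # 57.4, matching Table I
    print(percent_rsd(2.7, 42.9))         # ~6.3 (Table I lists 6.2 from unrounded data)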
EXPERIMENTAL SECTION
A) Study Design. To assess the applicability of the MDL method to solid
waste elemental analysis, it was necessary to determine the MDL by means
of the five steps outlined above for a solid matrix, in this case a loamy
soil.
Three instruments were used: a Jobin-Yvon JY-50P simultaneous Inductively
Coupled Plasma Atomic Emission Spectrometer (ICP), a Perkin-Elmer PE-5500
sequential ICP, and a Thermo Jarrell-Ash Video 12E Flame Atomic Absorption
Spectrometer (FAA). The estimated MDL was determined by each of the four
methods in step one for arsenic, cadmium, molybdenum, selenium, and
thallium on the two ICP-AESs. The same was performed for cadmium,
molybdenum, and thallium on the FAA.
Five soils were spiked at several concentrations for each element (see
below) corresponding to the estimated MDLs. Seven aliquots of each soil
were digested and the calculated MDLs were determined for each element on
each instrument. When necessary, the pooled MDL was also determined.
Finally, the estimated, calculated, and pooled MDL values were evaluated
for both precision and accuracy for solid waste analysis.
B) Analytical Procedures. The soils were digested using an aqua regia
method here designated as the SCL method(4,5,6). 2.00 grams of soil were
placed in a 150 mL Phillips beaker. 10 mL of concentrated HCl and 2.5 mL
of concentrated HNO3 were added and heated to 95°C on a heating block.
When there was no more reddish-brown gas (NO2), the beakers were removed
from the heating block and the digestate was filtered through Whatman 41
filter paper and collected in a 100 mL volumetric flask. The residue and
filter paper were then washed with 5 mL of hot (95°C) concentrated HCl and
then 20 mL of hot de-ionized water, which were also collected into the
100 mL flask. The filter paper and residue were then placed back into the
Phillips beaker and heated with 5 mL of concentrated HCl until the filter
paper dissolved; this second digestate was then filtered and collected
into a second 100 mL volumetric flask. Both filtrates were analyzed
and the results were mathematically combined.
The instrumental analysis method used for these materials was EPA
SW-846(7) Method 6010 for both the sequential and simultaneous ICP, using
a 100 ug/mL standard for all elements.
Both ICPs used two background correction points for each analytical line.
Methods 7130, 7480, and 7840 were used for the Flame AA, and a deuterium
lamp was used for background correction in all analyses.
C) Materials. A local soil was dried, sieved, and then analyzed for
arsenic, cadmium, molybdenum, selenium, and thallium. The amounts found
were far below 1 ug/g. These elements were then spiked into the soil to
produce the following concentrations (in ug/g):

             Sample A   Sample B   Sample C   Sample D   Sample E
Arsenic      4,000      500        50         5          -
Cadmium      500        50         5          -          5,000
Molybdenum   30         5          -          5,000      500
Selenium     5          -          5,000      500        50
Thallium     -          5,000      500        50         5
Here and throughout this paper the dash mark (-) is used to indicate that
the analyte in question was not spiked. Soluble salts of each element were
dissolved in water and spiked into the soils. Additional water was added
to form a slurry, which allowed for easy homogenization. The soils were
then dried, milled, and sieved. (For a more complete discussion of the
preparation of spiked soils, see "Preparation and Validation of
Proficiency Evaluation Samples" by Kimbrough, D.E. and J.R. Wakakuwa in
these proceedings.) One additional material, designated "K", was prepared
from 100 g of the same soil spiked with 20 mL of a 100 ug/mL Cd and Tl
standard. This material was treated exactly as the other materials were.
Its concentration was 20 ug/g.
RESULTS
A) Arsenic. Table I lists all of the data for arsenic. Using the
estimation procedures, four estimated MDLs were determined for both types
of ICP. A wide range of estimates was obtained, so three different
spiked materials with concentrations of 500, 50, and 5 ug/g, as well as an
unspiked background material, were used to determine the calculated MDL.
For the simultaneous ICP, using the 500 ug/g material, the calculated MDL
is 30 ug/g. Following the iterative procedure, a 30 ug/g material should
then be analyzed. For this step the 50 ug/g PE sample was used, which
produced a calculated MDL of 41; the ratio of S500 to S50 is far less
than 3.05. The pooled MDL is 29.
For the sequential ICP the 500 ug/g material yields a calculated MDL of 17
ug/g. The 5 ug/g material could be used for the iterative procedure
except the results were not qualitatively identifiable. The 50 ug/g
material is qualitatively identifiable and yields a MDL of 8.4. The ratio
of S500 to S50 is less than 3.05 and the pooled MDL is 10.3 ug/g.
B) Cadmium. Three materials were used, as with arsenic: 500, 50, and 5
ug/g, as well as an unspiked background material. Table II lists the
estimated MDLs.
Using the 500 ug/g material to calculate the MDL on the simultaneous ICP,
the value of the MDL is 17 ug/g. If the iterative step is followed, then
the 5 ug/g material should be used. The 5 ug/g sample is qualitatively
distinguishable, but the ratio of S500 to S5 is far greater than 3.05. The
same is true for the 50 ug/g material. If the 20 ug/g material is used
here, a qualitatively identifiable signal is generated which has a standard
deviation less than 3.05 times that of the 5 ug/g material. The pooled
MDL is 0.8 ug/g.
The sequential ICP analyzing the 500 ug/g material yields a calculated MDL
of 22 ug/g. The 5 ug/g material could be used for the iterative procedure
except the results were not qualitatively identifiable. The 50 ug/g
material is qualitatively identifiable and yields a MDL of 8.4. The ratio
of S500 to S50 is greater than 3.05. The 20 ug/g material produced no
qualitatively identifiable signal.
For the FAA, the 50 ug/g material produces a calculated MDL of 4.4 ug/g.
The 5 ug/g material was qualitatively identifiable and yields a MDL of
3.1. The ratio of S50 to S5 is less than 3.05. The pooled MDL is 3.0 ug/g.
C) Molybdenum. Following the estimated MDLs on Table III, the 500 ug/g
material was used to determine the calculated MDL on the simultaneous ICP,
which had a value of 72 ug/g. The 30 ug/g material was used for the
iterative procedure, but the ratio of S500 to S30 is far greater than 3.05.
Using data from the 30 ug/g material, the calculated MDL is 8.6 ug/g.
However, the 5 ug/g material yielded no qualitatively identifiable signal.
The sequential ICP analyzing the 500 ug/g material yields a calculated MDL
of 56 ug/g. The ratio of S500 to S30 is greater than 3.05. Using data
from the 30 ug/g material, which is qualitatively identifiable, the
calculated MDL is 7.5 ug/g. However, the 5 ug/g material yielded no
qualitatively identifiable signal.
For the FAA, the 30 ug/g material produces a calculated MDL of 9.2 ug/g.
The 5 ug/g material was qualitatively identifiable and yields a MDL of
5.0. The ratio of S30 to S5 is less than 3.05. The pooled MDL is 6.7 ug/g.
D) Selenium. Table IV lists the estimated MDLs and other data. The 500
ug/g material was used to calculate the MDL on the simultaneous ICP, which
had a value of 29 ug/g. The 50 ug/g material was used for the iterative
procedure, producing a calculated MDL of 17; the ratio of S500 to S50 is
less than 3.05.
The sequential ICP analyzing the 500 ug/g material yields a calculated MDL
of 63 ug/g. The 50 ug/g material is qualitatively identifiable and yields
a MDL of 30. The ratio of S500 to S50 is less than 3.05, and the pooled
MDL is 38.
E) Thallium. Table V lists the estimated MDLs. Using the 500 ug/g
material to calculate the MDL on the simultaneous ICP, the value of the
MDL is 23 ug/g. The iterative step using the 50 ug/g material produced an
MDL of 11 ug/g. The pooled MDL is 14 ug/g. This is identical to the
calculated MDL for the 20 ug/g material, which has an S20 of less than
3.05 times the standard deviation of either the 5 ug/g or the 50 ug/g
materials.
The sequential ICP using the 500 ug/g material yields a calculated MDL of
26 ug/g. The 50 ug/g material is qualitatively identifiable and yields a
MDL of 22. The ratio of S500 to S50 is less than 3.05. The pooled MDL is
19. The 20 ug/g material produced no qualitatively identifiable signal.
The FAA analyzing the 50 ug/g material produces a calculated MDL of 26
ug/g. The 5 ug/g material was qualitatively identifiable and yields a MDL
of 4.5. The ratio of S50 to S5 is less than 3.05. The pooled MDL is 5.0
ug/g.
DISCUSSION
The MDL process as applied to arsenic using the simultaneous ICP produced
three estimates, 30, 41, and 29 ug/g, which were in close agreement. These
numbers, however, were indistinguishable from the 34.6 ug/g value that was
obtained from the unspiked soil. Further, the inaccuracy found in the 50
ug/g material was quite high despite the good precision. For the method
used in this study, two grams of soil digested by aqua regia and analyzed
by ICP, any results below 50 ug/g are highly inaccurate even if they are
statistically distinguishable from zero. This is also true for selenium:
the poor accuracy and precision obtained at 50 ug/g would make lower
results quite dubious. Cadmium and thallium had reasonably accurate and
precise results at 50 ug/g but highly inaccurate results at 20 ug/g, which
is far above the calculated and pooled MDLs. Molybdenum results for 30
ug/g were both accurate and precise, but the calculated MDL of 8.6 does
not appear reasonable given the fact that the 5 ug/g material was not
qualitatively identifiable.
The sequential ICP results for arsenic exhibit the same pattern as those
obtained from the simultaneous ICP. The three measurements of the
estimated MDL were 17, 8.4, and 10 ug/g. However, the 5 ug/g material
produced no measurable signals on the ICP, making it indistinguishable
from the blank. This raises serious questions as to the accuracy and
precision of measurements near this point, as was the case for all of the
other analytes. Furthermore, the sequential ICP results for thallium and
selenium were very inaccurate even at 50 ug/g, so lower values were even
more doubtful. Cadmium, on the other hand, produced accurate results at
50 ug/g but no signal at 20 ug/g, again far above the calculated and
pooled MDLs. Likewise, molybdenum results for 30 ug/g were quite good,
but the calculated MDL of 7.5 ug/g is unlikely in light of the results
from the 5 ug/g material. Again the MDL procedure generated values that
can only be described as highly inaccurate.
The FAA data were quite different from those obtained from either of the
ICPs. Estimated MDLs for cadmium were 30, 5, 5, and 5 ug/g. The
calculated MDLs were 4.4 and 3.1, while the pooled MDL was 3.0 ug/g. The
5 ug/g material was accurate (%I = 1.4) and the precision was acceptable
(%RSD = 19). This was also true for molybdenum and thallium. The
MDL method seems to produce values that are accurate and precise for the
FAA.
SUMMARY AND RECOMMENDATIONS
The EPA's method detection limit procedure is not adequate for the
determination of "less than" values for elemental solid waste analysis
using ICP. As the data clearly show, the MDL too often predicts a "less
than" value at a concentration that either generates no signal at all or
is subject to unacceptably high inaccuracy. Interestingly enough,
precision generally was not a problem.
The EPA's MDL procedure does seem to predict reasonably accurate values
for Flame AA determinations. The calculated and pooled MDLs produced
results that were both precise and accurate when using real solid matrix
materials. Even for Flame AA, however, the amount of work needed to
generate these MDLs is quite prohibitive.
The EPA MDL procedure is designed to determine the lowest concentration of
an analyte at which there is 99% confidence that the concentration is not
zero. The determination of this value is based entirely on the precision
of the method, without considering accuracy. The smallest amount of
analyte that can be measured with acceptable precision (i.e., the MDL) is
not the correct question. For the regulatory solid waste community the
question should be: "What is the smallest amount of an analyte that can be
quantified within quality control limits of both precision and accuracy?"
REFERENCES
1. Appendix A, July 1982, to Methods for Chemical Analysis of Wastewater,
EMSL-Cincinnati, USEPA, June 1982.
2. Appendix B to 40 CFR Part 136, October 26, 1984, Federal Register,
Vol. 49, No. 209, pages 198-204.
3. Glaser, J.A., Forest, D.L., McKee, G.D., Quave, S.A., and Budde, W.L.,
"Trace Analysis for Wastewaters," Environmental Science & Technology, 15,
1426, December 1981.
4. Kimbrough, D.E. and J.R. Wakakuwa, "Acid Digestion for Sediments,
Sludges, Soils, and Solid Wastes. A Proposed Alternative to EPA SW 846
Method 3050," Environmental Science and Technology, 23, pages 898-900,
July 1989.
5. Kimbrough, D.E. and J.R. Wakakuwa, "Report of an Interlaboratory Study
Comparing EPA SW 846 Method 3050 and an Alternative Method from the
California Department of Health Services," Proceedings of the Fifth Annual
USEPA Symposium on Solid Waste Testing and Quality Assurance, Washington,
D.C., July 1989. Reprinted in Waste Testing & Quality Assurance: Third
Volume, ASTM STP 1075, C.E. Tatsch, Ed., American Society for Testing and
Materials, Philadelphia, 1991.
6. Kimbrough, D.E. and J.R. Wakakuwa, "A Report of the Linear Ranges of
Several Acid Digestion Procedures," Proceedings of the Sixth Annual USEPA
Symposium on Solid Waste Testing and Quality Assurance, Washington, D.C.,
July 1990.
7. Test Methods for Evaluating Solid Wastes (EPA SW 846, Volume 1A), 3rd
Edition, Office of Solid Waste and Emergency Response, U.S. Environmental
Protection Agency, Washington, D.C., November 1986.
TABLE I
ARSENIC

ESTIMATED MDLs IN ug/g
                       Simultaneous ICP    Sequential ICP
  a)                   268                 71
  b)                   165                 165
  c)                   5.0                 5.0
  d)                   5.0                 50

CALCULATED MDL IN ug/g, SIMULTANEOUS ICP
  Spiked Value          500     50      5       unspiked
  Mean                  506     78.7    51.4    34.6
  Calculated MDL        30      41      5.7     7.7
  % Inaccuracy          1.2     57.4    902     -
  Standard Deviation    9.7     13.3    1.8     2.3
  % RSD                 1.9     3.5     3.5     6.7

CALCULATED MDL IN ug/g, SEQUENTIAL ICP
  Spiked Value          500     50      5
  Mean                  479     42.9    <1
  Calculated MDL        17      8.4     -
  % Inaccuracy          4.2     14.2    -
  Standard Deviation    5.3     2.7     -
  % RSD                 1.1     6.2     -
TABLE II
CADMIUM

ESTIMATED MDLs IN ug/g
                       Simultaneous ICP    Sequential ICP    FAA
  a)                   30                  113               30
  b)                   115                 120               5.0
  c)                   5.0                 50                5.0
  d)                   5.0                 50                5.0

CALCULATED MDLs IN ug/g, SIMULTANEOUS ICP
  Spiked Value          500     50      20      5       unspiked
  Mean                  518     55.5    7.1     7.8     1.5
  Calculated MDL        19      4.0     0.9     1.1     -
  % Inaccuracy          3.6     11.0    74.5    56.0    -
  Standard Deviation    6.1     1.3     0.3     0.4     0.07
  % RSD                 3.9     2.3     3.9     4.5     10.2

CALCULATED MDLs IN ug/g, SEQUENTIAL ICP
  Spiked Value          500     50      20
  Mean                  500     41.8    -
  Calculated MDL        22      5.4     -
  % Inaccuracy          0.0     16.4    -
  Standard Deviation    6.9     1.7     -
  % RSD                 1.4     4.1     -

CALCULATED MDLs IN ug/g, FLAME ATOMIC ABSORPTION
  Spiked Value          500     50      5
  Mean                  508     50.4    5.1
  Calculated MDL        15      4.4     3.1
  % Inaccuracy          2.0     0.8     2.0
  Standard Deviation    4.8     1.4     1.0
  % RSD                 0.95    2.8     19.2
TABLE III
MOLYBDENUM

ESTIMATED MDLs IN ug/g
                       Simultaneous ICP    Sequential ICP    FAA
  a)                   140                 163               30
  b)                   135                 30                4.5
  c)                   5.0                 50                5.0
  d)                   5.0                 50                5.0

CALCULATED MDLs IN ug/g, SIMULTANEOUS ICP
  Spiked Value          500     30      5
  Mean                  439     28.3    -
  Calculated MDL        72      8.6     -
  % Inaccuracy          12.2    5.7     -
  Standard Deviation    23      2.73    -
  % RSD                 5.2     9.7     -

CALCULATED MDLs IN ug/g, SEQUENTIAL ICP
  Spiked Value          500     30      5
  Mean                  473     10.8    -
  Calculated MDL        56      7.5     -
  % Inaccuracy          5.4     64.0    -
  Standard Deviation    18      2.4     -
  % RSD                 3.8     22      -

CALCULATED MDLs IN ug/g, FLAME ATOMIC ABSORPTION
  Spiked Value          500     30      5
  Mean                  453     37.6    4.3
  Calculated MDL        46      11.0    5.0
  % Inaccuracy          9.4     25.3    14.0
  Standard Deviation    14      3.5     1.6
  % RSD                 3.2     9.2     37.0
TABLE IV
SELENIUM

ESTIMATED MDLs IN ug/g
                       Simultaneous ICP    Sequential ICP
  a)                   460                 94
  b)                   210                 120
  c)                   5.0                 50
  d)                   5.0                 50

CALCULATED MDLs IN ug/g, SIMULTANEOUS ICP
  Spiked Value          500     50      5      unspiked
  Mean                  394     6.56    <1     <1
  Calculated MDL        29      11      -      -
  % Inaccuracy          21.2    86.9    -      -
  Standard Deviation    9.1     3.5     -      -
  % RSD                 4.2     53.0    -      -

CALCULATED MDLs IN ug/g, SEQUENTIAL ICP
  Spiked Value          500     50      5      unspiked
  Mean                  425     12.9    <1     <1
  Calculated MDL        63      30.0    -      -
  % Inaccuracy          15.0    74.2    -      -
  Standard Deviation    20.0    9.5     -      -
  % RSD                 1.1     74.0    -      -
TABLE V
THALLIUM

ESTIMATED MDLs IN ug/g
                       Simultaneous ICP    Sequential ICP    FAA
  a)                   120                 200               25
  b)                   110                 150               10.5
  c)                   5.0                 50                25
  d)                   5.0                 50                25

CALCULATED MDLs IN ug/g, SIMULTANEOUS ICP
  Spiked Value          500     50      20      5
  Mean                  452     41.9    12.1    -
  Calculated MDL        23      11      14      -
  % Inaccuracy          9.6     16.2    39.5    -
  Standard Deviation    7.4     3.6     4.4     -
  % RSD                 0.96    8.6     36.5    -

CALCULATED MDLs IN ug/g, SEQUENTIAL ICP
  Spiked Value          500     50      20      5
  Mean                  450     28.6    -       -
  Calculated MDL        26      22.0    -       -
  % Inaccuracy          10.0    43.8    -       -
  Standard Deviation    8.2     6.9     -       -
  % RSD                 1.5     24      -       -

CALCULATED MDLs IN ug/g, FLAME ATOMIC ABSORPTION
  Spiked Value          500     50      5
  Mean                  446     54.7    7.0
  Calculated MDL        20      26.0    4.5
  % Inaccuracy          9.8     9.4     40.0
  Standard Deviation    6.3     2.5     1.4
  % RSD                 1.4     4.56    20.6
PREPARATION AND VALIDATION OF PROFICIENCY
EVALUATION SAMPLES FOR SOLID WASTE ANALYSIS
DAVID E. KIMBROUGH, PUBLIC HEALTH CHEMIST II, RUSTUM CHIN,
PUBLIC HEALTH CHEMIST III, AND JANICE WAKAKUWA, SUPERVISING
CHEMIST, CALIFORNIA DEPARTMENT OF HEALTH SERVICES, SOUTHERN
CALIFORNIA LABORATORY, 1449 W. TEMPLE STREET, LOS ANGELES,
CALIFORNIA 90026-5698.
ABSTRACT
Development, validation, and use of Proficiency Evaluation
(PE) samples for solid waste analysis has generally been
performed by private firms supplying them on a self-
analysis basis to private laboratories. This paper
discusses the preparation and validation of two sets of
proficiency evaluation samples.
For the purpose of this study, PE samples are defined as
materials used to evaluate laboratory (as opposed to
method, instrument, or analyst) performance for a given
matrix-specific analysis. Any material which has a method-
defined mean value and is homogeneous can be used. The
easiest and most practical approach is to spike a known
amount of analyte into a well-defined homogeneous matrix.
This not only meets the criteria for a PE sample but also
gives a "true value" in addition to a mean value.
The first set of PE samples consists of five soils spiked
with Aroclor 1260, and the second of five soils spiked with
arsenic, cadmium, molybdenum, selenium, and thallium. The
samples were first analyzed at the Southern California
Laboratory (SCL) and then sent to twenty-eight laboratories
outside of California for analysis.
This validation study was designed to test the PE sample
preparation procedures used at SCL. The data and
statistical analysis for this study are presented.
INTRODUCTION
The last ten years have seen an explosive growth in the
field of environmental chemistry. Concomitant with this
growth has been the development of laboratory accreditation
programs for environmental laboratories. The federal
government does not accredit environmental laboratories,
but almost every state either has or is developing an
accreditation program for them. It is generally agreed
that Proficiency Evaluation (PE) samples should be an
integral part of a comprehensive laboratory accreditation
program. Significant progress has been made in developing
PE samples for water matrices; however, there has been
virtually no progress in the development of solid matrix PE
samples. This is due, in part, to the newness of programs
accrediting laboratories analyzing solids and the relative
simplicity of preparing aqueous PE samples as compared to
solid matrix samples.
The California Department of Health Services (DOHS),
through its Environmental Laboratory Accreditation Program
(ELAP), is responsible for the accreditation of
laboratories analyzing water, waste water, and solid waste
doing business in the state. ELAP is mandated by
California law to distribute PE samples to the laboratories
it accredits.
PROFICIENCY SAMPLE THEORY
There is significant confusion about the distinctions
between PE samples, Laboratory Control Samples (LCSs), and
Standard Reference Materials (SRMs). There is also a lack
of consensus as to how to prepare solid matrix PE samples.
A number of issues have contributed to this lack of
consensus, the most important of which is the debate
between spiked PE samples and "real world" PE samples.
This debate revolves around the benefits of "true values"
versus mean values, and of "real" samples versus artificial
ones. For the purposes of this pilot project, a set of
definitions was developed for LCSs, SRMs, and PE samples.
A Laboratory Control Sample (LCS) is a material used by a
laboratory for quality control/quality assurance purposes
for a method or set of methods. It contains the analytes
of interest in concentrations within the working range of
the method or methods. It is homogeneous, is of the same
matrix type as the samples, and is analyzed with each batch
of samples. The results for each batch should fall within
established control limits. These data can be used to
monitor long-term trends and method performance. It is
immaterial whether the LCS is spiked or not, since only a
mean value and standard deviations are needed to create a
control chart and set control limits.
A Standard Reference Material (SRM) is used to determine
the applicability, on a particular matrix, of a method,
method comparison, instrument, or instrumental performance.
It should be a "real world" sample, must be homogeneous,
and must not be spiked. The analytes of interest should be
present in measurable amounts.
Proficiency Evaluation samples are used to evaluate the
performance of the entire laboratory system for a given
analyte, not just the methods or instruments. This
includes sample tracking, sample preparation, record
keeping, method selection, method application, and data
reduction. Like the other materials, a PE sample must have
the analytes of interest present in concentrations that are
within the linear range of the methodology. It must be
homogeneous and of a matrix that approximates that of
actual samples. Laboratories analyzing solid wastes cannot
be evaluated using a reagent water PE sample.
Although laboratories use PE samples internally for self-
evaluation, the most important use of PE samples is as part
of a laboratory accreditation program. The accrediting
agency submits the samples blind to the laboratory, and the
laboratory's performance is evaluated based on the results
obtained. The material can also be used for double blind
analysis; that is, it can be submitted to the laboratory
without the knowledge of its personnel. As a result, a PE
sample should have a physical appearance that will not give
it away as a PE sample. For laboratories analyzing soils,
the PE sample should look like a soil.
Theoretically, a laboratory can be put out of business if
it fails a PE sample. This leaves the accreditation
program with a large window of liability. So, in addition
to the above-mentioned factors, a PE sample must be legally
defensible. This means PE samples must be validated prior
to distribution.
All of these needs are best met by using a spiked sample.
Spiking allows for choice of analytes, their
concentrations, and the matrix, and can establish a "true"
value. This last point is important in increasing the PE
sample's legal defensibility. Spiked samples are also
easier and less expensive to prepare.
EXPERIMENTAL SECTION
Experimental Design: To test this approach to PE samples, a
pilot project was designed. The project had two goals.
The first was to test the PE sample preparation process.
The second was to test the validation procedure for PE
samples.
PE Sample Preparation
From previous experience in the preparation of soils spiked
with inorganic analytes(1,2), it was decided to use water
soluble salts of the target elements. The use of water
soluble salts means that it will be easy for laboratories
to solubilize the target elements using any digestion
procedure. It is also easier to make spiking materials
using water soluble salts. Strong oxidizers can attack the
organic component of the soil and volatilize it, leaving
the heavier silica and alumina portions. This increases the
density of the soil and changes the concentration of the
spiked analytes. The appearance of the soil is also
altered, making it look more artificial. For PE samples to
be used as double blind checks, they should appear as
natural as possible.
A large amount of a local soil was collected, milled, and
sieved through a U.S. Standard No. 10 (2 mm) sieve. It was
analyzed for native amounts of sixteen elements regulated
by the State(3). Chromium, cobalt, copper, lead, nickel,
vanadium, and zinc were found to be present in excess of 5
mg/kg. Since most laboratories use EPA SW 846 method
3050(4) as the digestion procedure, antimony, barium, and
silver would be poorly solubilized; these three elements
were not used. Beryllium was not used because most of its
water soluble salts are either unavailable commercially or
are extremely toxic. Beryllium sulfate is relatively safe
but has a very small mole fraction of beryllium; this would
require such large amounts of beryllium sulfate that the
soil matrix would be disturbed. This left arsenic,
cadmium, molybdenum, selenium, and thallium as the target
elements. The inorganic samples were prepared in the
following fashion (concentrations in ug/g):
             Sample A   Sample B   Sample C   Sample D   Sample E
Arsenic      4,000      500        50         5          -
Cadmium      500        50         5          -          5,000
Molybdenum   30         5          -          5,000      500
Selenium     5          -          5,000      500        50
Thallium     -          5,000      500        50         5

(Here and throughout this paper the dash mark, -, is used
to indicate that the analyte in question was not spiked and
is present only at background concentration.)
California ELAP accredits about 200 laboratories for
inorganic hazardous materials analysis. Five kilograms of
each sample was prepared, enough to provide a 20 gram
aliquot of each sample to each laboratory.
Spiking solutions were prepared for each of the salts
listed below. These solutions were diluted one to one
hundred and checked against standards prepared from
different stock materials. All of the solutions were well
within 10% of the expected values.
Salt                 MF      Mass Element   Mass Salt   Conc.
As2O3                0.757   100 g          132 g       100 g/L
3CdSO4·8H2O          0.438   50 g           114 g       50 g/L
(NH4)6Mo7O24·4H2O    0.543   5 g            9.2 g       5 g/L
H2SeO3               0.612   50 g           82 g        50 g/L
Tl2SO4               0.808   5 g            6.19 g      5 g/L

MF = Mole Fraction
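The arithmetic behind the "Mass Salt" column is simply the target element mass divided
by the mole fraction. A minimal sketch follows, assuming only the values in the table
above:

    # Mass of salt needed to deliver a target mass of element:
    # mass_salt = mass_element / mole_fraction
    salts = {
        # salt: (mole fraction of element, target element mass in g)
        "As2O3":              (0.757, 100),
        "3CdSO4.8H2O":        (0.438, 50),
        "(NH4)6Mo7O24.4H2O":  (0.543, 5),
        "H2SeO3":             (0.612, 50),
        "Tl2SO4":             (0.808, 5),
    }

    for salt, (mf, element_g) in salts.items():
        print(f"{salt}: {element_g / mf:.1f} g of salt")
    # As2O3 -> 132.1 g, matching the 132 g in the table; the others agree similarly.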
The amount of each salt to be added to each soil was
calculated and totaled as noted below. The total mass of
salts to be spiked was subtracted from the 5 kilograms of
soil, so that when the salts were added the total weight
would be 5 kg. Due to the limited solubility of ammonium
molybdate and thallium sulfate, dry salts were added for
the 5,000 mg/kg materials. The appropriate mass of soil
was placed in a plastic tray and mixed with enough de-
ionized water to make a slurry. The slurries were then
spiked with the amounts of the salts (and volumes of
spiking solution) indicated below.

                      Sample A      Sample B        Sample C          Sample D      Sample E
Arsenic Trioxide      26.4 g        3.30 g          0.38 g            0.04 g        -
                      (200 ml)      (25 ml)         (250 ml 1:100)    (25 ml 1:100)
Cadmium Sulfate       5.12 g        0.51 g          0.05 g            -             51.2 g
                      (50 ml)       (50 ml 1:10)    (5 ml 1:10)                     (500 ml)
Ammonium Molybdate    0.28 g        0.05 g          -                 46.1 g        4.6 g
                      (30 ml)       (5 ml)                            (dry salt)    (500 ml)
Selenium Trioxide     0.04 g        -               41 g              4.1 g         0.41 g
                      (5 ml 1:10)                   (500 ml)          (50 ml)       (50 ml 1:10)
Thallium Sulfate      -             30.7 g          3.07 g            0.31 g        0.03 g
                                    (dry salt)      (500 ml)          (50 ml)       (5 ml)

Total Mass of Salts   39 g          34 g            44 g              50 g          62 g

This mass was removed from each 5.00 kg batch.

Mass of Soil          4.961 kg      4.966 kg        4.956 kg          4.950 kg      4.948 kg
The slurries were then dried at 95° C with frequent mixing.
After drying, the materials were again milled and sieved
through a U.S. Standard No. 10 sieve.
The PCB Aroclor 1260 was used to spike the same native soil
as was used to make the inorganic PE samples. The soil was
found to be free of PCBs, although low level interferences
from decomposed vegetable matter were detected. The soil
was milled, sieved, and autoclaved (to kill any bacteria
that might be present). Approximately 150 laboratories are
certified for PCB analysis by California ELAP. Ten
kilograms of each PE sample was prepared so that each
laboratory could be provided with about 25 grams per PE
sample. The five PE samples were prepared at the following
concentrations in mg/kg:
               Sample F   Sample G   Sample H   Sample I   Sample J
Aroclor 1260   100        10         1.0        0.1        -

Two solutions were prepared; one contained 10 g Aroclor
1260 in 1 liter of hexane, while a second was made from a
1:100 dilution of the first solution. The Aroclor 1260
was made up from neat PCB and checked against EPA-derived
standards and found to be within 5% of the expected value.
These solutions were spiked in the following fashion:

               Sample F   Sample G   Sample H         Sample I        Sample J
Aroclor 1260   100 ml     10 ml      100 ml (1:100)   10 ml (1:100)   -
The samples were slurried with n-hexane, homogenized, and
dried at room temperature with periodic mixing.
Validation
A two step validation was used. The initial validation was
performed in-house. The inorganic samples were digested
seven times using an aqua regia method(5) (which in previous
publications was referred to as the SCL method(1,2,5)) for
analysis by simultaneous inductively coupled plasma atomic
emission spectroscopy (ICP-M), sequential ICP (ICP-Q), and
Flame Atomic Absorption Spectroscopy (FAA). For Graphite
Furnace Atomic Absorption Spectroscopy (GFAA), EPA SW-846
method 3050 was used. Analytical methods included EPA SW-
846 methods 6010, 7061, 7130, 7131, 7480, 7481, 7740, 7840,
and 7841. The organic samples were analyzed in duplicate
using EPA SW-846 methods 3540, 3620, and 8081. All of the
results were within 20% of the expected values and had a
relative standard deviation of less than 30%.
The samples were then validated by having at least twenty
(20) laboratories not accredited by California ELAP analyze
the materials. The samples would be considered valid if
the mean value from these laboratories was within 20% of
the spiked value and the percent relative standard
deviation (%RSD) was less than 20% for the two higher
concentrations of each analyte. It is to be expected that
the %RSD for an analyte will increase as the concentration
decreases, all other things being equal. In the case of
the inorganic materials, each low concentration analyte was
in a material with a high concentration analyte. So for
the lower concentration inorganic analytes, the analyte is
considered validated if the mean value was within 20% of
the spiked value and the high concentration analyte in the
same sample had a %RSD of less than 20%.
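A minimal sketch of this acceptance test follows; the
laboratory results shown are hypothetical, and the function
name is illustrative.

    import statistics

    def validated(spiked, lab_means):
        """Pass if the interlaboratory mean is within 20% of the
        spiked value and the %RSD across laboratories is under 20%."""
        mean = statistics.mean(lab_means)
        rsd = statistics.stdev(lab_means) / mean * 100
        return abs(mean - spiked) / spiked * 100 <= 20 and rsd < 20

    # True for this tightly clustered hypothetical data set
    print(validated(500, [480, 512, 495, 505, 470, 530, 488]))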
RESULTS
For the inorganic samples, the majority of the laboratories
prepared the samples by an acid digestion, in most cases
EPA SW 846 method 3050 or a similar method. Two
laboratories used chelation extraction for some of the
analytes. For the energy dispersive X-ray fluorescence
instrument (EDXRF), the samples were ground to pass a U.S.
Standard No. 200 sieve.
The digestates and extracts of the PE samples were analyzed
on a number of different instruments: ICP-MS, simultaneous
and sequential ICPs, FAA, GFAA, hydride generation atomic
absorption (HGAA), colorimeter, and fluorescence
spectrometer. The soils were also analyzed directly by
EDXRF. Again, all of the mean values were within 20% of
the spiked values except for arsenic at 5 mg/kg. See
Table I.
The PCB results were more complex. All of the mean values
were within 20% of the true value, and the %RSD was less
than 30% for all except the lowest sample. As can be seen
in Table II, the means tend to be low and the %RSDs high.
However, this is consistent with previous efforts with
solid matrix PCB materials. The data were unlike those for
the inorganic materials, where there was a normal
distribution about a mean. The data for the three highest
PCB materials, samples F, G, and H, were bi-modally
distributed, with one mode at around 95% recovery and
another at 75%.
Outliers
Two types of outliers were identified: even multiples and
baseline interferences. Even multiples are values that are
an even factor off of the true value. These are caused
either by omission or inclusion of dilution factors or by
bad standards. For example, there is a cadmium outlier for
sample C. It is exactly 40 times the true value, and the
worksheet indicates that a 1:40 dilution occurred. Similar
errors seem to have occurred three other times in the
inorganic samples. Bad standards account for the five
arsenic outliers, which were all five to seven times the
true value, and the five high PCB results from laboratory
number 1.
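The "even multiple" screen lends itself to a simple check.
The following sketch flags results that fall close to an
integer multiple (or divisor) of the true value; the
tolerance, range, and function name are assumptions, not
part of the study.

    def even_multiple_factor(result, true_value, tol=0.05, max_factor=100):
        """Return the integer factor k if result ~= true_value * k
        (or true_value / k, reported as -k); otherwise None."""
        for k in range(2, max_factor + 1):
            if abs(result - true_value * k) / (true_value * k) < tol:
                return k
            if abs(result - true_value / k) / (true_value / k) < tol:
                return -k  # negative marks a divisor-type error
        return None

    print(even_multiple_factor(200.0, 5.0))   # 40 -- like the 1:40 cadmium outlier
    print(even_multiple_factor(5.2, 5.0))     # None -- ordinary scatter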
Baseline interferences occurred in the inorganic samples.
These are caused by an interference that raises the
analytical background. Without appropriate background
correction, the instrument reads significant amounts of the
analyte in the blank soil. This caused elevated results at
the lowest spiked value, as can be seen for two
laboratories with molybdenum and thallium. Every
laboratory except two, both with high reporting limits,
measured significant amounts of arsenic in the unspiked
sample (E).
DISCUSSION
Almost all of the inorganic PE materials were validated.
The accuracy and homogeneity were within the acceptance
criteria. The mean values were all well within 20 percent
of the spiked values, and the %RSDs for the high
concentration analytes were all less than 12%. The
exception to this rule was arsenic, at the 5 mg/kg level in
sample D and in the unspiked sample E. In almost every case
the readings from sample E, when subtracted from the
results of sample D, produced results within 20% of the
spiked value. From this it can be concluded that there
was, in fact, a small amount of arsenic present in the
native soil. This is not surprising, since arsenic is
commonly found in soils in the low ug/g range(8). As can
be seen in Table I, the mean value for arsenic is 13 ug/g
for sample D and 8.8 ug/g for sample E, the difference
being 4.2 ug/g, 84% of the spiked value. Table III shows
the results for arsenic instrument by instrument. As can
be seen, for each instrument type, the difference between
the mean values for samples D and E is about equal to the
spiked value of 5.0 ug/g. The ICPs obviously had a high
spectral background problem as well.
One point is clear: if an analyte is going to be spiked at
a concentration near the lower quantitation limit, then to
properly validate it, an internal standard must be spiked
in at the same time as the analyte. The internal standard
should be at a much higher concentration and should be an
easily analyzable analyte such as beryllium or cadmium. It
is also possible to use background analytes; in soils this
internal standard could be iron or calcium. For future
validation studies where low concentration spikes are
planned, it is suggested that a liquid standard containing
these analytes at low concentrations also be analyzed.
The data for the PCBs are more difficult to interpret. As
can be seen, the data are bi-modal for samples F, G, and H,
while the results for sample I are more normally
distributed. Specifically, there are nine sets of results
clustered around the spiked value (labs 2-10) and another
eight sets clustered around 70% recovery. The clustering
is not evident in the results for sample I. The source of
this bi-modality appears to be instrument calibration bias.
This conclusion was reached by elimination: laboratories in
both clusters used the same extraction equipment, solvents,
methods, instruments, columns, and procedures. The only
remaining difference was the calibration procedures. One
possibility is the baseline selection procedure used during
integration.
One conclusion is evident: the PCB PE samples themselves
are homogeneous. This can be shown by two facts. First,
samples F, G, and H all produced results equal to or lower
than the spiked value, with one exception. Second, the
response was generally linear, i.e., F, G, and H were
proportionally lower in each case. It would be
unreasonable to expect a set of heterogeneous materials to
respond in this fashion.
There is also a question of accuracy. For example, sample
F was spiked at 100 ug/g but the mean value is 85 ug/g. A
similar situation exists with samples G and H. Sample I,
on the other hand, was spiked at 0.10 ug/g and had a mean
value of 0.109. Were, then, samples F, G, and H made
incorrectly, leaving these materials with 85% of the
expected value? If the data were distributed normally
about an 85% recovery this might be reasonable. But as has
been noted, there are two normal distributions, one around
95% recovery and another around 75% recovery. For samples
F, G, and H, no more than 15% of the results are near the
mean value (80 to 89% recovery). Further, since H and I
were prepared from exactly the same solution and base
soil, and F and G were made from the same stock as H and I,
it would be unreasonable to conclude that the materials
contain a concentration other than the spiked value.
Rather, there are two distinct populations of laboratories
analyzing the same materials and getting two different sets
of results.
SUMMARY
Using spiked materials is a useful and economical approach
to preparing PE samples for solid waste laboratory
evaluation. The preparation steps used in this study
generate accurate and homogeneous materials. Difficulties
occur when very low concentrations are spiked; this should
be avoided, or the low spike should be combined with an
internal standard. PCB spikes are difficult, not due to
the preparation steps but due to the linearity problems
evident in the analysis process. More study is needed to
identify the source of this problem in PCB analysis, and so
PCB spikes for PE samples must be used with discretion.
ACKNOWLEDGMENTS
We would like to thank Dr. William Nilsson and Monina Ligao
of the Southern California Laboratory for their assistance
with the PCB work. We would also like to thank Dr. Tony
Harding of Spectrace Instruments Inc. for his work on the
inorganic materials and for the EDXRF spectra.
PARTICIPATING LABORATORIES
Alaska Department of Environmental Conservation
Environmental Quality Monitoring & Laboratory Operations
Douglas Laboratory
Arizona Department of Health Services
State Laboratory Division
California Department of Health Services
Hazardous Materials Laboratory
COMPUCHEM Laboratories Inc.
Connecticut Department of Health Services
Bureau of Laboratories
Environmental Chemistry Section
Hittman Ebasco Associates Incorporated of Columbia Maryland
Hawaii Department of Health
Chemistry Branch
Indiana State Board of Health
Environmental Laboratory Division
Lower Colorado River Authority
Environmental Laboratory
Maine Department of Environmental Protection
Bureau of Administration
Division of Laboratory Services
Michigan Department of Natural Resources
Environmental Laboratory
Minnesota Department of Health
Chemical Laboratory
Mississippi State Chemical Laboratory
Ohio Environmental Protection Agency
Division of Environmental Services
Oregon Department of Environmental Quality
Laboratory Division - Portland
Pennsylvania Department of Environmental Resources
Bureau of Laboratories
Québec Ministère de l'Environnement
Direction des laboratoires
Laboratoire de Montréal
Research Triangle Institute
Center for Environmental Measurement & Quality Assurance
Salt River Project
Lab & Field Services Division
Environmental Laboratory
South Dakota Department of Health
State Health Laboratory
Tennessee Valley Authority
Environmental Chemistry, Water Quality Department
U.S. Army Corps of Engineers
Missouri River Division Laboratory
U.S. Environmental Protection Agency
Environmental Monitoring Systems Laboratory-Las Vegas
Methods Research Branch
U.S. Department of Interior
Bureau of Reclamation
Assistant Commissioner-Engineering & Research
Research & Laboratory Services Division
U.S. Geological Survey
National Water Quality Laboratory
1-175
-------
REFERENCES
1. Appendix A (July 1982) to Methods for Chemical Analysis
   of Wastewater, EMSL-Cincinnati, USEPA, June 1982.
2. Appendix B to 40 CFR Part 136, October 26, 1984, Federal
   Register, Vol. 49, No. 209, pages 198-204.
3. Glaser, J.A., Forest, D.L., McKee, G.D., Quave, S.A.,
   and Budde, W.L., "Trace Analysis for Wastewaters,"
   Environmental Science & Technology, 15, 1426, December
   1981.
4. Kimbrough, D.E. and J.R. Wakakuwa, "Acid Digestion for
   Sediments, Sludges, Soils, and Solid Wastes: A Proposed
   Alternative to EPA SW 846 Method 3050," Environmental
   Science & Technology, 23, pages 898-900, July 1989.
5. Kimbrough, D.E. and J.R. Wakakuwa, "Report of an
   Interlaboratory Study Comparing EPA SW 846 Method 3050
   and an Alternative Method from the California Department
   of Health Services," Proceedings of the Fifth Annual
   USEPA Symposium on Solid Waste Testing and Quality
   Assurance, Washington, D.C., July 1989. Reprinted in
   Waste Testing & Quality Assurance: Third Volume, ASTM
   STP 1075, C.E. Tatsch, Ed., American Society for Testing
   and Materials, Philadelphia, 1991.
6. Kimbrough, D.E. and J.R. Wakakuwa, "A Report of the
   Linear Ranges of Several Acid Digestion Procedures,"
   Proceedings of the Sixth Annual USEPA Symposium on Solid
   Waste Testing and Quality Assurance, Washington, D.C.,
   July 1990.
7. Test Methods for Evaluating Solid Wastes (EPA SW 846,
   Volume 1A), 3rd Edition, Office of Solid Waste and
   Emergency Response, U.S. Environmental Protection
   Agency: Washington, D.C., November 1986.
8. Arsenic; Committee on Medical and Biological Effects of
   Environmental Pollutants, National Academy of Sciences,
   Washington, D.C., 1979.
1-176
-------
TABLE I
INORGANIC RESULTS
Arsenic, N = 25
Columns: True Value, Mean Value, SD, %RSD, Reporting.
-------
TABLE II
PCB RESULTS IN MICROGRAMS/GRAM
Columns: Sample, Spiked Value; rows: LAB 1 through LAB 20, with summary rows Mean, SD, %RSD, and N.
-------
TABLE III
Data from Verification of PE Samples for Arsenic
Total Results, N = 25
Columns: ID, True Value, Mean Value, SD, %RSD, Reporting.
-------
23 OBSERVATION OF QUALITY ASSURANCE
ANOMALIES IN SUPERFUND ACTIVITIES
by
Dr. D.M. Stainken
Malcolm Pirnie, Inc.
2 Corporate Park Drive
White Plains, New York 10602
The Superfund Program consists of numerous components, programs, contracts, and
documents which have contributed to placing sites on the NPL, to PRP actions, and to remedial
actions. During this process, quality assurance activities within EPA and the States have
continually evolved. Accordingly, there are QA requirements for activities under the Clean Water
Act, Safe Drinking Water Act, Clean Air Act, RCRA, and the NCP, as well as State programs
which can affect Superfund actions as ARARs. Historically, the Superfund paper-trail
process makes use of field sampling plans, QA program and project plans, and ultimately
Records of Decision (RODs). When a site is to be remediated, additional RI/FS work may
be necessary for RD/RA design.
A review of numerous QA documents within the Superfund process indicates that anomalies
do occur. As examples, RODs have been reviewed in which numerous compounds were
misidentified or reported as isomers based on incorrect R fits or mass spectra, and compounds
were reported in aqueous media at values greatly exceeding solubility maxima. In other reviews,
analytes are listed on the wrong lists with intermixed methods, or the ARAR end-point MDL is
mismatched with the CLP CRQL. This paper will present a synopsis of QA anomalies observed
in reviewing QA activities.
1-180
-------
24 FUNCTIONAL EVALUATION OF QC SAMPLES, A PROACTIVE APPROACH
DONALD XIQUES AND JANICE ALLISON
QUALITY ASSURANCE DIVISION
BECHTEL NATIONAL, INC.
OAK RIDGE, TENNESSEE
Abstract
An aggressive and systematic approach in using and evaluating Quality Control
(QC) samples provides improved data quality and reduced costs for environmental
sampling. Unlike the traditional approach, where evaluation of QC samples has
typically been performed after all sample results have been received from the
laboratory and the process of data validation/evaluation is underway, a proactive
approach involves strategic QC sampling, rapid analytical turnaround, and
preliminary review of QC sample results. This "just-in-time" approach allows for
decisions to be made regarding isolation of contamination, minimization of error
associated with data, and avoiding unnecessary sample analysis costs. This
information provides the sampling team with necessary data to identify, isolate, and
eliminate sources of contamination and provides a definitive means to determine if
resampling is required while still in the field, thus avoiding remobilization costs.
Additional savings result from eliminating unnecessary sample analysis and
reducing the need to resample. Data quality is improved by reducing data qualifiers
and minimizing data gaps. The goal of this manuscript is to summarize the
decision-making process in identifying the feasibility of the proactive approach and
to provide a framework for that process by means of a decision-tree flowchart.
Introduction
Quality assurance (QA) and quality control (QC) are programmatic and systematic
procedures to ensure that a product of known quality is produced. For environmental
sampling, this "quality" is defined by quantitative data characteristics: accuracy,
precision, and minimum acceptable detection limits. Accuracy is the measure of how close
our data come to the true concentration of the contamination present at the site.
Precision is a measure of how reproducible our results are. Minimum acceptable detection
limits determine the amount of contaminant that must be present for detection. These key
elements are defined during the establishment of
data quality objectives (DQOs) for the scope of work. Based upon the decisions
which will be made with the sampling data and the consequences of failure, a
minimum acceptable level will become evident. Cost-effective sampling strategies
can minimize the number of samples required to obtain the data necessary
for the decision-making process. The result of this strategy is that as the number of
samples is reduced, the quality of each sample data point becomes increasingly
1-181
-------
critical. If information from a sample representing a given area is deleted or
becomes unusable, a gap in the data will exist. Depending upon the importance of
this information to the achievement of the scope of work, the data gap created by
missing or substandard data may be sufficient for resampling to be required. For
example, qualitative data may be acceptable for preliminary site investigations. This
situation may not justify the additional costs associated with the proactive
evaluation of QC samples, requiring rapid turnaround, as qualitative (estimated)
data may be satisfactory. On the other hand, data necessary to support closure or
permitting activities would require quantitative data.
Traditional sampling and analysis plans call for critical samples to be analyzed
under rigorous analytical standards at qualified analytical laboratories such as
laboratories participating in the Contract Laboratory Program or one of several
interlaboratory comparison programs. Unfortunately, this process is time
consuming. Constraints from laboratory scheduling, holding times, and
documentation requirements result in a routine 45-day turnaround time for the
results from a sampling event. It is at this point that the data are reviewed and
evaluated. The consequence of discovering at this point that a sample or batch of samples
fails to meet the minimum criteria has serious implications for both cost and
scheduling.
Minimum criteria for sampling are measured by the results of the QC samples. If
the QC samples are discovered to have a significant level of contamination on
review after the typical 45-day wait, it may be too late to remedy the situation.
However, if the QC samples had been evaluated earlier, before the other
associated (within the same sample batch) samples were analyzed, several options
would then be available: 1) discarding the data completely, thus eliminating the
data for that point or area; 2) finding the source of the problem and eliminating it,
with subsequent resampling and reanalysis; 3) only using the data with analyte
concentrations significantly above background (> 5X or 10X above the
concentration found in the blanks); or 4) accepting qualitative data (instead of
quantitative), as all data less than 5X or 10X above the concentration found in the
blanks will be qualified as estimated (J) (EPA Functional Data Validation
Guidelines).
Conclusions
This "just-in-time" approach allows for decisions to be made regarding isolation of
contamination, minimization of error associated with data, and avoidance of
unnecessary sample analysis costs. This is accomplished by reviewing the QC
sample results which are scheduled for rapid turnaround (24 to 48 hours) analysis
before other associated samples are analyzed. This provides initial review of QC
results and the option of cancelling associated sample analyses if it is decided that
estimated data are not or will not be acceptable. It is vital that the rapid-turnaround data
be evaluated immediately so that real-time decisions can be made. For example,
sources of contamination can be identified, isolated and eliminated (see Figure 1),
1-182
-------
thus preventing further contamination of QC and associated samples; and decisions
can be made to resample while still in the field, thus avoiding remobilization costs
(see Figure 2). These options may result in further cost savings by avoiding
unnecessary analyses. Scheduling delays are reduced as a result of faster evaluation
(as opposed to the traditional 45-day, standard-turnaround wait). If these decisions
are used to avoid the cost of remobilization, eliminate unnecessary analyses, and
ensure that data meet DQOs, then thousands, or possibly millions, of dollars in
project and analytical costs will have been saved. The likelihood of poor decision
making is reduced, and the project tasks will have been performed as efficiently and
cost-effectively as possible.
1-183
-------
Contamination Source Isolation Process. The flowchart checks each field QC blank in turn;
a contaminated blank points to its contamination source, and in each case the response is
the same: eliminate the source, evaluate new detection limits, cancel analysis of
associated samples, and resample.
- Preservation blank contaminated? Preservation contamination.
- Trip/transport blank contaminated? Contamination source in transportation/storage or containers.
- Field blank contaminated? Contamination from ambient sampling conditions.
- Wash blank contaminated? Contamination from decontamination procedures.
If no contamination has been detected in any of the field blanks, then no evident
contamination has been introduced by field methods.
Figure 1
1-184
-------
Proactive QC Decision Tree. Sampling -> rapid turnaround of QC for critical samples ->
evaluation of QC results to determine if samples will meet the required minimum criteria
for data end-use -> Does the data meet the minimum usability standards? YES: analyze
associated samples. NO: cancel analysis for associated samples and direct resampling.
Figure 2.
-------
25
FEATURES OF THE U.S. EPA-QUALITY ASSURANCE MATERIAL BANK STANDARDS
Ruth A. Zweidinger, Chief, Analytical Chemistry, ManTech Environmental
Technology, Inc., 2 Triangle Drive, Research Triangle Park, North Carolina
27709 and Nathan Malof, Research Chemist, Environmental Monitoring Systems
Laboratory, U.S. Environmental Protection Agency, Cincinnati, Ohio 45268.
ABSTRACT
The U.S. EPA currently provides organic solution standards to its Contract
Laboratory Program (CLP) laboratories under the Quality Assurance Material
Bank (QAMB). These standards ensure comparability between laboratories
and traceability to U.S. EPA materials. ManTech Environmental operates
the current QAMB program, over which U.S. EPA maintains strong quality
assurance oversight.
The quality assurance requirements for QAMB organic solution standards
have a number of distinct features, and this presentation addresses each.
For example, one requirement is the characterization of the neat materials
with purity analyses, including moisture analyses of hygroscopic
compounds, and confirmation of the compound identity. A U.S. EPA-
supervised, independent laboratory analyzes the purity to verify the
original assay. Furthermore, the identity confirmation procedures are
designed to be unequivocal.
Each lot of standards is analyzed to verify its concentration by ManTech
Environmental and again by a U.S. EPA-supervised, independent laboratory.
Reanalysis of the standards is required periodically to verify stability,
according to a schedule optimized for each standard. Strict control limits
are placed on each of these analyses and will be presented. The details
of these specific requirements and their implications on confidence
intervals associated with the standards will be presented.
The probable error of each step in the solution standard production
process has been evaluated and used to assess the overall probable error.
This information, along with the analytical method performance data for
the quality control, serve to define the quality of the standards
available under the QAMB program. The results of this evaluation will be
presented.
1-186
-------
26
AUTOMATED DATA VALIDATION—PANACEA OR TOOL
Gary Robertson, U.S. Environmental Protection Agency, Environmental
Monitoring Systems Laboratory, Las Vegas, Nevada
ABSTRACT
The increased emphasis on the clean-up of hazardous waste
sites and the quality assurance (QA) of environmental data has
caused a substantial increase in the amount of analytical data that
must be reviewed. Computer programs such as Computer-Aided Data
Review and Evaluation (CADRE) for the review of Contract Laboratory
Program data and "E-DATA" for the review of analytical data from
emergency response teams have been developed to help deal with that
large amount of data. Such programs can be of great value to the
data reviewer; however, the user must be aware of what these
programs can and cannot accomplish. The types of QA measures that
may be checked by computer such as calibrations, holding times and
analytical sequences will be described. The limitations of
computerized checking will also be discussed including the areas
of analytical methodology, electronic data formats, chromatographic
quality and professional judgement. The need for standardization
and QA of the electronic data will be considered.
Notice: Although the information discussed in this article has
been funded wholly or in part by the United States Environmental
Protection Agency, it has not been subjected to Agency review and
does not necessarily reflect the views of the Agency and no
official endorsement should be inferred.
1-187
-------
27 BUILDING DATA QUALITY INTO ENVIRONMENTAL DATA MANAGEMENT
Mitzi Miller, Automated Compliance Systems, 673 Emory Valley Road, Oak Ridge, TN 37830; Dr.
Philip Ludvigsen, Automated Compliance Systems, 245 Highway 22 West, Bridgewater, NJ 08807.
ABSTRACT
With the volume of environmental data increasing and the need to make rapid, accurate decisions,
managing the data is essential. Data management includes consolidating all information into
central, consistent, accurate databases. From these databases, information can be selected,
queried, and electronically transferred to statistical, graphical, and reporting packages. By
establishing a database with output to decision-support software, the accuracy and speed of
decision making are improved. An essential part of computerizing the data is establishing a
credible database. This paper describes methods to improve the quality, integrity, and
connectivity of information.
INTRODUCTION
There are many different programs requiring collection and analysis of environmental data. Among
these are those required under the Comprehensive Environmental Response, Compensation and
Liability Act (CERCLA) and the Superfund Amendments and Reauthorization Act (SARA). In
addition, the National Pollutant Discharge Elimination Systems (NPDES) requires that any effluent
from a facility be monitored regularly according to permit requirements. Other regulations require
monitoring of air and drinking water. No matter which program is discussed, data must be collected
and evaluated. The problem is that if the project is large or if monitoring data is collected for
several years, one is inundated with information. This information becomes difficult to sort and
evaluate. While many have begun using spreadsheets and other similar tools, the amount of data can
quickly exceed the capacity of many tools. The other problem is that the technical staff and
managers typically do not enjoy entering the data. Even if diskette deliverables are presented,
someone must map and move the data into the existing database. Many databases have been
established only to find that the information is inaccurate, inconsistent and difficult to retrieve. The
purpose of this paper is to outline issues in data management and a strategy for success. The
information is based on the experience which Automated Compliance Systems has had in
managing databases of over 3,000,000 records on over 100 projects. These projects have included
CERCLA, RCRA, NPDES, air monitoring and other environmental projects.
PLANNING
Some regulations such as CERCLA, SARA and RCRA require that planning documents be
written. Others require that data be documented but do not require specific planning
documentation such as sampling plans and QA/QC Plans. Most projects do not include a data
management plan. Millions of dollars are spent collecting and evaluating the data; however, little
thought is given to planning how the data will be passed between parties, what information will
be captured and passed, and who will be responsible for this process. The first recommendation
for large projects is to outline a plan for managing the data.
1-188
-------
The goal of any data management system should be to consolidate the data and allow the end user
to be able to use and evaluate the information. The primary purpose of a data management plan
is the communication of how information will be captured, accessed, entered, and used. Data
management plans should include:
1) Data Dictionary
2) Data Naming Conventions
3) Data Entry Criteria
4) Data Consistency Filters
5) A Traffic Control System
6) QC Data Elements and Relationships
7) System Design Strategy
8) Audit Trails and Entry Serial Numbers
9) Connectivity Requirements
10) Tools for Compliance Screening/Data Validation
11) Staff Responsible for Data Management
The following sections will outline issues and provide some examples of success when these areas
are addressed.
DATA DICTIONARY
Data dictionaries often include the elements of data to be captured and the definitions of these
elements. Some data dictionaries include the location of information in the database and diagrams
of the relationships between the data. While this is useful, the most important information is the
data elements and terms. This assures that all project members have consistent understanding of
the pieces of information to be captured.
Another critical issue in establishing a data dictionary is that the geologist will not look at data the
same way the laboratory looks at the same piece of information. The data dictionary needs to be
reviewed by project team members with different viewpoints. This ensures that the same data
element will be understood by all parties. A benefit of the dictionary is that redundancy in the
data system can be reduced. (1)
DATA NAMING CONVENTIONS
The planning document should include a method of naming and identifying samples. Samples are
often collected by one group, analyzed by another and evaluated by yet another group. The key
information which is passed along is the sample identity. If all parties do not know the convention
for numbering samples, mistakes can be made.
One issue with naming a sample is how much information to include in the name. Many projects
have met disaster because the convention included more fields than the sampling, lab or end user
database would allow. If this occurs, the identity may be truncated, making matching the data
1-189
-------
with the sample number difficult. It is important that all the parties involved understand how data
is collected and entered into the respective databases along the data processing path. As an
example, if the sample collection team had understood that the laboratory computer could only
accept 12 digits for the customer sample number, then the sample number could have been
appropriately sized. Long naming conventions increase the chance of data entry errors. If the name
is exceedingly long, the laboratory and end users often have trouble accurately entering the
number for laboratory sample tracking and for using the information. If this occurs, the association
between the data and the sample may be incorrect. Keeping sample location and numbers at less
than 10-12 characters is suggested. This is especially true if bar code readers are not used.
ACS's experience has shown that unique pieces of information should be tracked separately and
not aggregated together. With the power of relational databases, the data can still be related
without elaborate naming conventions which encode all pieces of information. As an example, a
sampling location such as SB-12/A/10-15/3-91 may mean the soil boring (SB) was collected from
location number 12 in zone A at a depth of 10-15 feet in the sampling event of March 1991. It is very
easy for data entry mistakes to be made in entering this number into sampling and laboratory
databases. With modern relational databases, there is no reason for this type of naming. With a
well designed system all the information could easily be tied to Soil Boring number 12. This could
be printed on chain-of-custodies and labels without being encoded.
The recommendation is that the sampling location names or numbers should be short and unique.
Other data should be related to this point name. Information typically related to the point name
is the sampling coordinates, the depth of collection, and complete sampling and analytical data,
including dates of collection and of receipt by all parties. Data from multiple sampling
events can be associated to a single location. Data from each event is designated by sampling and
analysis date and by the laboratory and sampling numbers used in sample identification.
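As an illustration of this recommendation, the sketch below relates sample data to a short, unique location name instead of encoding zone, depth, and date into the identifier; the schema, names, and values are hypothetical, not taken from the systems described here.

    # Sketch: a short location name with related records replaces the encoded
    # identifier SB-12/A/10-15/3-91 (tables and values are illustrative).
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE location (
        loc_name TEXT PRIMARY KEY,          -- short and unique, e.g. 'SB-12'
        zone     TEXT,
        x REAL, y REAL, z REAL              -- survey coordinates
    );
    CREATE TABLE sample (
        sample_no      TEXT PRIMARY KEY,
        loc_name       TEXT REFERENCES location(loc_name),
        depth_top      REAL,
        depth_bot      REAL,
        date_collected TEXT
    );
    """)
    db.execute("INSERT INTO location VALUES ('SB-12', 'A', 1204.5, 887.2, 101.3)")
    db.execute("INSERT INTO sample VALUES ('000123', 'SB-12', 10, 15, '1991-03-14')")

    # Everything once packed into the encoded name remains recoverable:
    row = db.execute("""SELECT s.sample_no, l.zone, s.depth_top, s.depth_bot
                        FROM sample s JOIN location l USING (loc_name)
                        WHERE l.loc_name = 'SB-12'""").fetchone()
    print(row)   # ('000123', 'A', 10, 15)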
A location and sample number system should be established and documented and used by all
contractors performing work. Often the most confusion occurs when several organizations collect
samples at the same site over several years. Without an agreed upon location naming convention,
the data is difficult to connect to the correct location. Having this information available to all staff
performing work is important in maintaining accurate and traceable data.
Since the location name and sample numbers are critical pieces of information, it increases
accuracy if these are passed between the field teams and laboratories via computer generated chain
of custody forms. Many projects have suffered major problems because of difficulty in transcribing
data from a handwritten chain of custody to a laboratory computer tracking system. If the project
planning information, including the sample names and numbers, is entered early for tracking, this can be
easily done. In addition to the forms, bar codes can be placed both on the forms and the bottles.
The bar codes should contain the sample location, sample number, depths, analysis requested and
any other pertinent information. By putting more information on the code than just the number,
sample login in the laboratory can be quick and accurate. If changes occur while samples are being
collected, information can be hand written on the forms. (2)
1-190
-------
DATA ENTRY CRITERIA
Data entry is accomplished via manual entry, scanning or electronic transfer. Criteria should be
specified for all areas. It has been ACS's experience that scanning of documents which are not
neatly typed results in 5-10% errors requiring reentry and correction. This is particularly critical
for sampling logs, boring logs and other documents which are typically hand written. If these are
manually entered into the database, the data can be electronically searched and moved to logs and
construction diagrams. This results in typed, legible forms while entering the data only once.
If manual entry is needed, the best method is to double key the information by separate staff with
the computer making a comparison between the entries. If it is not practical for two people to
enter the information, it should be entered at least twice by the same person followed by computer
comparison of the entries. The computer should print out differences between the entries. These
differences should be resolved prior to moving the data from temporary holding files or tables to
the final database tables.
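A minimal sketch of the double-key comparison follows; the record layout and names are illustrative assumptions.

    # Sketch: compare two independent entries of the same records and report
    # the differences for resolution before the data leaves the holding tables.
    def compare_entries(first_pass, second_pass):
        """Each argument maps (sample_no, field) -> the value as entered."""
        differences = []
        for key in sorted(set(first_pass) | set(second_pass)):
            a, b = first_pass.get(key), second_pass.get(key)
            if a != b:
                differences.append((key, a, b))
        return differences

    entry1 = {("000123", "Pb"): "15.2", ("000123", "As"): "3.4"}
    entry2 = {("000123", "Pb"): "15.2", ("000123", "As"): "34"}   # keying error

    for (sample, field), a, b in compare_entries(entry1, entry2):
        print(f"Resolve {sample}/{field}: first entry {a!r}, second entry {b!r}")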
If electronic transfer is used, typically ASCII files should be transferred. All parties should agree
on the information to be entered and transferred and its location in the file. Two major issues are
that many organizations performing analysis do not have double key entry processes nor do they
have electronic download from the instruments to the central database. This is especially true of
laboratories. As a result the data is rekeyed from the instrument output to the database. This is
why so many differences are normally found between hard copy and electronic transmittals. Data
errors between these two media run from 5-10%. In some cases they are greater. In auditing
laboratory information processes, ACS has found as many as four transcriptions of the same data
prior to entering it into a file for electronic transfer.
As a result, ACS recommends that criteria be outlined for data entry of sampling data, laboratory
data, validation of data and other information needed for the permanent records. These criteria
should include not only the method of transfer of data, but the way the data enters the database
initially. This means that if a laboratory does not have electronic transfer between instruments,
double key entry may be needed. Trend analysis and other checks may be used to further assure
that data is consistent and correctly reported.
DATA CONSISTENCY FILTERS
Many problems in using data result from inconsistencies in the reporting. As an example, ACS
mapped data from a project. The engineering firm had observed a 20-ft water-level difference at the
site when ground water samples were collected. Prior to ACS involvement, much money had been
spent on models to explain these inconsistencies. After the original well logs and methods of
measuring water level were discussed, ACS determined through consistency checks that the
problem was inaccurate survey information.
1-191
-------
The consistency checks are performed via computer and via data inventory printouts. The
consistency checks include identifying:
- Missing sample locations
- Duplicate data and samples
- Improper parameter names
- Duplicate test names
- Samples with missing data
- Data with missing location information
- Time-traveling samples
- Incorrect location and episode data
- Illogical units
- Illogical qualifiers
- Missing detection limits
Any data management process should look for and correct problems in these areas. While software
can help identify these issues, it takes dedicated staff to correct these problems. In order for these
corrections to occur, all the members of the project team must be easily contacted regarding these
problems. It is also helpful if these problems are identified as early as possible in the project. This
allows for correction of inconsistencies prior to use of the data.
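A few of the listed filters might be expressed as in the sketch below; the field names are assumptions, and the checks shown are far from a complete implementation.

    # Sketch: run incoming records through a handful of the consistency
    # filters listed above before upload to the central database.
    def consistency_report(records, known_locations, valid_units):
        problems, seen = [], set()
        for r in records:
            key = (r["sample_no"], r["parameter"])
            if key in seen:
                problems.append(("duplicate data", key))
            seen.add(key)
            if r["loc_name"] not in known_locations:
                problems.append(("missing location information", key))
            if r["units"] not in valid_units:
                problems.append(("illogical units", key))
            if r["date_analyzed"] < r["date_sampled"]:   # analyzed before sampled
                problems.append(("time-traveling sample", key))
            if r["result"] is None and r["detection_limit"] is None:
                problems.append(("missing detection limit", key))
        return problems

    record = {"sample_no": "000123", "parameter": "Pb", "loc_name": "SB-99",
              "units": "ug/g", "date_sampled": "1991-03-14",
              "date_analyzed": "1991-03-10", "result": 15.2, "detection_limit": 0.5}
    print(consistency_report([record], known_locations={"SB-12"},
                             valid_units={"ug/g"}))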
When new data is received, a mechanism must be in place to examine the incoming data and
compare it to the existing data in the central database. The mechanism should also look for
inconsistencies similar to those outlined previously in the new incoming data. The information
upload to the central database should allow only the new data to be entered. If the entire database
must be uploaded, hours may be wasted in data processing. In order to facilitate these data
transfers, special utility programs need to reside on the system which maintains the central
database.
DATA TRAFFIC CONTROL SYSTEM
A major part of managing the data is knowing when deliverables are due and assuring that they
meet schedules and that the data required is delivered. This is particularly critical for sampling and
analytical data. The traffic control system varies depending on the project need. As an example,
for CERCLA/SARA projects the proposed number of samples and QC samples must be compared
to the number actually collected and analyzed. This information is used to alert project managers
to missing information and to assist in cost tracking. In the systems used, all the proposed
samples, matrices, methods of analysis, and QC for delivery are entered. The computer compares
incoming data to the proposed data. Printouts of differences are available immediately after
receipt of data. These reports must occur quickly; this allows rapid corrective action. Having the
cost information available also allows contractual issues to be quickly resolved.
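A sketch of the proposed-versus-received comparison follows; the sample and method identifiers are illustrative.

    # Sketch: compare the proposed sample/method pairs against what the
    # laboratory actually delivered, and alert on the differences.
    proposed = {("MW-01", "8080"), ("MW-01", "6010"),
                ("MW-02", "6010"), ("TRIP-01", "8240")}
    received = {("MW-01", "8080"), ("MW-02", "6010")}

    for sample, method in sorted(proposed - received):
        print(f"ALERT: {sample} / method {method} proposed but not received")
    for sample, method in sorted(received - proposed):
        print(f"NOTE: {sample} / method {method} received but not proposed")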
1-192
-------
For NPDES-type projects, not only is laboratory data tracking important, but sampling staff
need to be alerted to collect specific samples from outfalls on specific days. In addition, the final
reporting of data should be compared to permit requirements. This comparison should include the
analytes, analyte concentration limits, and date reported. Anything exceeding permit criteria must
be flagged and managers notified immediately.
The traffic control system described is important since, typically, there are large amounts of
incomplete data or data which do not meet acceptance criteria. By allowing early notification of
these problems, project schedules can be met. Without computers, tracking the large number
of samples, sample results, costs, and schedules becomes difficult and tedious for technical staff. The
data management system must address these needs. The method of handling these issues should
be outlined in the plan.
QC DATA ELEMENTS AND RELATIONSHIPS
Another critical item is defining the QC samples in the planning document and a numbering
scheme for these samples. The critical item which is often missed is the definition of what the
sample is and how it is collected. Currently at least 10 definitions exist for field blanks and
rinsates. With this many definitions in use, all parties in the process should understand what the samples
are.
The other critical information is how the QC samples are related to the actual samples. These
relationships must be captured or the QC data will not be useful. As an example, if five trip blanks
were collected and shipped in five coolers, one must be able to relate each blank to the
correct cooler or to the samples in that cooler. This is typically done via the chain of custody and
the sampling date, time, and person collecting. However, a more successful method is to number
the QC samples and state which samples by number were associated with the QC samples. This
method of association should be included in the definitions in the data dictionary.
Similar issues occur in the laboratory. Many laboratories do not track batch numbers for samples.
Samples may only be related to QC by date. This leads to problems when multiple instruments are
being used for the same task and when several similar sample preparations are performed on the
same day. More problems arise when the laboratory must reanalyze a sample. Reanalysis results in
two or more batches of QC being related to a sample. If the lab system cannot assist in managing
this, confusion as to which data should be reported may occur. All QC should be available. This
means that QC should be associated with the samples by unique batch numbers. When reanalysis
occurs, both the old and new batch numbers should be available, with a comment on why reanalysis
was performed. Batch numbers will also serve to tie the instrument, method, and person
performing analysis to the samples. Without knowing which samples are associated with specific
QC samples, data validation and evaluation becomes difficult.
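The batch-number association might be carried with every result along these lines; the structure and values are illustrative assumptions.

    # Sketch: each result carries its batch number, so reanalysis keeps both
    # the old and the new QC associations retrievable.
    analyses = [
        # (sample_no, batch_no, result, note)
        ("000123", "B-0412", 15.2, None),
        ("000123", "B-0418", 14.8, "reanalysis: B-0412 matrix spike out of limits"),
    ]
    qc_by_batch = {
        "B-0412": {"method_blank": "<DL", "lcs_recovery_pct": 112.0},
        "B-0418": {"method_blank": "<DL", "lcs_recovery_pct": 98.5},
    }

    for sample_no, batch_no, result, note in analyses:
        print(sample_no, batch_no, result, qc_by_batch[batch_no], note or "")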
1-193
-------
SYSTEM DESIGN STRATEGY
The system must be flexible enough to handle all the sampling, project scheduling and analytical
data. The database must be able to manage large and small projects. The entry process must allow
easy addition of parameters, sampling data points and other information which may not be
identified in the early planning stages. The system must be able to handle numeric and textual
data. A design which has proven to be flexible treats data as three types: sites, locations, and
episodes. Site data is the area of study or the site name. This could be a building, facility, or area
name. The next type of information is associated with the location. This information is fixed in
time. It includes information such as soil lithology, sample coordinates (x,y, and z), and the name
of the sampling location. The last division of data is the episodic data. This data includes
information which changes with time. These data elements include but are not limited to water
level, parameter names, analytical results, detection limits, units, dates sampled, dates prepared and
analyzed, methods used and other information related to the sampling events.
Using this design strategy makes data entry flexible and makes it easy to add information when the need
arises. No matter how much planning occurs, there is always a need to add information during
the project. Once data is entered in this manner, it can be extracted in many different ways
to allow one to examine data by locations, parameters, methods and other queries appropriate for
data review.
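The three-level design might be captured in a relational schema along the lines sketched below; the table and column names are assumptions, not taken from the systems described here.

    # Sketch: site / location / episode as three related tables.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE site (
        site_id   INTEGER PRIMARY KEY,
        site_name TEXT                       -- building, facility, or area name
    );
    CREATE TABLE location (                  -- information fixed in time
        loc_id    INTEGER PRIMARY KEY,
        site_id   INTEGER REFERENCES site(site_id),
        loc_name  TEXT,
        x REAL, y REAL, z REAL,              -- sample coordinates
        lithology TEXT
    );
    CREATE TABLE episode (                   -- information that changes with time
        episode_id      INTEGER PRIMARY KEY,
        loc_id          INTEGER REFERENCES location(loc_id),
        date_sampled    TEXT,
        parameter       TEXT,
        result          REAL,
        qualifier       TEXT,                -- kept separate from the result
        detection_limit REAL,
        units           TEXT,
        method          TEXT
    );
    """)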
Another issue of design is the method of handling data qualifiers. These qualifiers are used to easily
indicate QC problems, detection limits and other pieces of information needed to evaluate the
data. In many databases the qualifiers are coded as part of the result. This makes passing data to
modeling programs, and statistical programs difficult. These programs accept only numeric results.
If qualifiers such as Us, Bs, and Js are part of the result field, it is difficult to extract the data and
move it to user programs. It is critical that qualifiers be placed in separate data fields. The
recommendation is that reporting and usage requirements should not drive the data entry. By
allowing the computer to reformat data, rather than having data entry dictated by the reporting
format, additions can be made quickly and by the user.
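For example, a combined result such as "10U" can be split into a numeric result and a separate qualifier on entry; the parsing pattern below is an illustrative assumption.

    # Sketch: separate the qualifier from the numeric result so that the
    # result field stays usable by statistical and modeling programs.
    import re

    def split_result(raw):
        m = re.fullmatch(r"\s*([<>]?\d+(?:\.\d+)?)\s*([A-Za-z]*)\s*", raw)
        if not m:
            raise ValueError(f"unparseable result: {raw!r}")
        value, qualifier = m.groups()
        return float(value.lstrip("<>")), qualifier or None

    print(split_result("10U"))    # -> (10.0, 'U')
    print(split_result("5.2J"))   # -> (5.2, 'J')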
Other issues for data base design include the use of relational database tools which can be moved
to various hardware systems without rewriting software. This will allow upscale of hardware as the
database grows in size. It also allows end users to have smaller systems and to easily upload and
download between smaller and larger systems. In addition, the ANSI computer standards require
use of the Structured Query Language to facilitate data communication between systems. Because
of these features, ACS has had the best success using the ORACLE Relational Database Management
System with a design built on a row-based entry system consisting of sites, locations, and episodic
data.
1-194
-------
AUDIT TRAILS AND ENTRY SERIAL NUMBERS
The computer must be able to track the entry and changing of information. An effective method
is to establish unique computer generated serial numbers for each entry session. The audit trail
should be kept for both manual and electronic entry. The person performing entry or download
of the data, the origin of the information, and the date entered must be kept by the computer.
The entry serial number ties this information together. By performing this task, an audit trail is
kept on the data entry. Changes to the database should follow a similar strategy.
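A minimal sketch of an entry-session audit trail with computer-generated serial numbers follows; the names and fields are illustrative.

    # Sketch: stamp each entry session with a unique serial number plus the
    # operator, the origin of the information, and the date entered.
    import itertools, datetime

    _serial = itertools.count(1)
    audit_trail = []

    def begin_entry_session(operator, origin):
        session = {
            "entry_serial": next(_serial),
            "operator": operator,            # person entering or downloading
            "origin": origin,                # source of the information
            "entered_on": datetime.datetime.now().isoformat(timespec="seconds"),
        }
        audit_trail.append(session)
        return session["entry_serial"]

    serial = begin_entry_session("J. Smith", "laboratory diskette deliverable")
    # every record loaded in this session is stored with entry_serial = serial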
CONNECTIVITY REQUIREMENTS
In communicating data from planning to field crews, to the laboratory, and back to the data users,
a path with consistent methods of transfer must exist. Without documentation and a well
designed path, data will be transcribed and reentered many times. This results in errors and project
delays. The best strategy is to outline file structures and information which must be passed.
Typically ASCII files can be passed between organizations. However, information sometimes is not
in consistent locations within files. A data exchange language which assists in this process is a
useful tool. This exchange language can parse files apart even when information is not in
consistent locations.
As mentioned in the Data Consistency Filter section, as data is uploaded checking for consistent
information should be performed. These checks should be available to all parties moving data. By
doing this, early warning for missing and inconsistent information occurs.
The recommendations are to use ASCII files, to agree on the information to be transferred and
a general format prior to beginning the project, to use a data exchange tool which will assist in
parsing files even if data moves within the file, and to provide upload and download software which
performs consistency checks on the incoming and outgoing data.
TOOLS FOR COMPLIANCE SCREENING/DATA VALIDATION
Contract Compliance Screening is the systematic verification that the deliverables specified in the
governing contract meet delivery and general QC requirements. It also includes verifying the
frequency of QC analysis and limit checks. Examination of the raw data (chromatograms and
mass spectra) is not included. The process of screening generates both summary and detailed lists
of samples failing criteria with associated notes about how they failed.
Data Validation is a systematic review of the data which includes QC frequency and limit checks.
It also includes review of the raw data, flagging of data which does not meet criteria and technical
judgement in assessing the information.
The part of screening and validation amenable to automation is the QC frequency and limit
checks, and much of the data flagging. Once the initial flags are applied, some evaluation using
1-195
-------
technical judgement must be applied by a qualified technical person to assess the flags. By automating
much of this process, staff will be better utilized, screening will be improved, consistency will be
improved, and faster execution will occur. Currently, much of this process is poorly automated.
Software used for this purpose needs to allow the limits to be changed to meet the objectives of the
project. It must also generate summary reports outlining the number of compliant and non-
compliant items per sample. Detailed reports outlining specific non-compliant data should also be
presented. Ultimately final data with flags should be presented. This should be similar to an EPA
Contract Laboratory Protocol Form 1 with additional validation flags. The system should be able
to handle textual data so that explanations of flags and non-compliance can be related to the
numerical results.
After the screening/validation is complete, the electronic download to a central database should
be easy and menu driven. The data from the central database should then be moveable to other
packages such as statistics and mapping packages. This will allow the central repository to maintain
up-to-date data. The last issue is that an audit trail on the changes should be maintained.
DATA MANAGEMENT STAFF
A key person or persons should be identified in each organization to manage data. This individual must be
a part of all planning from the initial stages. A budget should be set aside for this task. The items
above should be addressed in the planning and continuing phases of the project. The data
manager should be familiar with the types of data collected in the particular process, should
have a desire to create systems which place data in the users' hands, and should be willing to
communicate with the entire project team. This person or organization becomes a focal point to
manage the data traffic.
CONCLUSION
Data management is an integral part of environmental projects. The need to quickly obtain and
assess data continues to be an important factor in successful completion of projects. Because
multiple companies and organizations typically work on these projects, the management of the
information is critical. If a central process is established for data management from the beginning,
projects can be successfully completed. Key elements include computerized chain of custody and
tracking, establishing common terms and meanings, and establishing location naming conventions
which are documented and used by all parties. In order to maintain consistency and integrity, audit
trails and consistency checks are required. To track the information, a traffic control system is
needed to compare the planned samples against those obtained. Screening and validation must be
automated to prevent data review bottlenecks. The data must be made available to users for
decision support in a timely manner. By considering these issues in the planning and
implementation of the project, high quality and timely results can be generated.
1-196
-------
References
1. Koslow, Phyllis, "Data Dictionaries," Database Programming & Design, April 1991, 26-29.
2. Butterfield, S.; Deinse, H.V.; Rumford, Greg; Penalba, Jorge; "Data Management of a
Multimedia Remedial Investigation Program," Presented at the Annual Conference of the Air and
Waste Management Association, Vancouver, British Columbia, June 16-21, 1991.
1-197
-------
28 A SOFTWARE APPROACH FOR TOTALLY AUTOMATING THE QUALITY
ASSURANCE PROTOCOL OF THE EPA INORGANIC CONTRACT
LABORATORY PROGRAM
Cindy Anderau, Technical Specialist, Rob Thomas, Marketing Specialist, The Perkin-
Elmer Corporation, 761 Main Avenue, Norwalk, Connecticut 06859-0219
ABSTRACT
The EPA's Inorganic Contract Laboratory Program requires the determination of 23
elements in a wide variety of matrices. ICP-OES and ICP-MS, because of their
multielement capabilities, are ideal techniques for this type of analysis. Although ICP-
OES has gained full approval by the EPA, ICP-MS, while rapidly gaining momentum, is
still not fully approved for all the EPA Inorganic Programs. However, regardless of the
technique chosen, the quality assurance protocol associated with a CLP analysis is very
complex and time-consuming. Traditionally this has been a manual operation that can
involve rechecking, recalibrating, rerunning, or even rejecting samples, standards,
blanks, and spikes that fall outside certain specified ranges.
This paper will describe a software program, which shall be called "QC Expert™", that
totally automates the operation of both ICP-OES and ICP-MS for this very tedious
quality control protocol. This is achieved by controlling both the instrument and the
autosampler with the software. During an analysis, if the quality of the data is considered
unacceptable, then pre-established procedures to restore the quality will be undertaken.
These will be monitored and then directed by the software.
The major tasks for establishing the proper criteria for this type of analysis are divided
into two separate steps. The first step involves setting the analytical method (standard
concentrations, QC limits, predefined actions, etc.) and the second step involves setting
up the sample parameters (sample IDs, weights, volumes, etc.). The logic behind
this is that once an analytical method has been set up to perform a CLP analysis, it will
not drastically change. On the other hand, the sample information will almost definitely
change on a regular basis. For this reason, these two quite different tasks are separated
and simplified to provide for a quick and easy set up for each analysis.
INTRODUCTION
The EPA's Inorganic Contract Laboratory Program has quality control requirements that
are very stringent. Laboratories participating in this program find that these analyses can
be extremely time-consuming on the analyst's part, and they would rather not have an analyst
spend all of his or her time monitoring the analysis of samples to ensure that good
quality control of the data is maintained. Yet that is exactly what many laboratories
must do. Many instruments still require some user intervention both to ascertain the
quality of the analysis and to control it.
1-198
-------
Typical QC Protocol (flowchart): calibrate instrument; check the calibration curve
(YES: continue); run duplicate, spike, and dilution samples (YES: continue); etc.
A typical analysis requiring extensive QC is shown in the flowchart above. The protocol
shown here is very similar to that mandated by the EPA for the Contract Laboratory
Program. Once the autosampler tray has been loaded with samples and the appropriate
sample IDs and perhaps weights and volumes have been entered into the instrument
software, the analysis is started by establishing a calibration curve. The analyst must be
present at this point to verify that the calibration curve meets the requirements for the
method, for example, the correlation coefficient of the curve must be 0.9995 or better. In
some cases, the correlation coefficients must be hand calculated by the analyst. If the
correlation coefficients do not meet the established limit, an action from the analyst is
required. This action is likely to be initiation of a recalibration.
After the calibration curve requirements have been met, a suite of QC standards must
be run. The result for each element in each QC standard must be within some established
limits. If the results for an element or a number of elements fail to meet the
requirements, an action by the analyst again must be taken. A typical action would be to
recalibrate, confirm the calibration curve and rerun the QC standards until all of them
meet the established criteria. Finally, samples may be run and the analyst may take a
break from closely monitoring each result. However, the analyst may need to know
when the results for an element are below the instrument detection limit, requiring that
the sample be rerun by a more sensitive technique, or when the results for an element are
above the linear range, requiring that the sample be diluted and rerun. In both of these
1-199
-------
cases, the results are not reportable. Rather than searching through the data to see where
the results fall, the analyst may choose to continually check on the results after each
sample is run, to flag the samples that must be rerun. In either case, after 10 samples
have been run, the analyst must return to monitor the results for the next set of QC
standards to verify the continuing quality of the results. As you can see from this
scenario, the analyst has little time for other tasks, and the potential for errors is great.
Just imagine the task of making sure 23 elements in 10 QC standards are within the
allowable limits.
Automation of this analysis would provide benefits to the analyst and ultimately to the
laboratory. The greatest benefit would be that the analyst would be free to multi-task and
would not be required to make any decisions during the analysis or to physically take any
action. The analyst could be responsible for the analysis but at the same time attend to
other challenging tasks and be assured that the quality of the data was being assessed and
controlled by the software. The potential for human error is all but eliminated. The lab would
also be assured of the quality of the data under the close supervision of the software.
SUMMARY
This proposed automation could be met with the QC Expert software which is designed
to provide intelligent quality control for ICP Emission spectrometry and ICP Mass
spectrometry. If the quality of the data is determined to be unacceptable, pre-established
procedures to restore quality will be undertaken in real-time.
Software Structure
The strategy of this software design was to intelligently separate the various tasks that are
required to set up an analysis. The instrumental parameters such as element mass or
wavelength, integration time, etc. are set up with the normal instrument software. What
we are calling the analytical parameters are established in the QC Expert software. The
analytical parameters are divided into two parts or files. One is called the Analytical
Method File and contains all the information necessary to run the analysis except for the
sample information. The other file is the Sample Description file which includes
information specific to the individual samples. The Analytical Method file and the
Sample Description file are then combined into an Autosampler Worksheet File, which is
basically a script for the instrument or a log of the analytical sequence. Results obtained
during the analysis may be used later to create a variety of reports and a variety of QC
charts.
Specifically the Analytical Method File contains information on the calibration standards
and the type of calibration algorithm to use. It also contains information on all the QC
standards that will be run in the analysis and the allowable limits for them. All actions
for out of limit conditions are chosen in this file. The internal standard elements that will
be used are selected here as well as the allowable drift limits for the internal standard
element intensities. This file also contains information on how often a QC standard will
be run during the analysis.
1-200
-------
The information about the calibration and QC standards is stored in separate files. This
allows for flexibility in the selection of QC standards and calibration standards. Once a
standard is created, it may be used with any number of methods; it is not necessary to re-
enter the standard information in each Analytical Method file. A standard file may be
created which contains every element one ever expects to determine. This file could then
be used with any method, but the software would only use the information for the
elements in that method and ignore the others.
QC Limits
The allowable limits for several measurements are established in the Analytical Method
file. Each measurement that a limit has been set for may have an action associated with
it that will be taken if any of the limits are exceeded. Limits may be set for the
following:
* Correlation coefficients for each element
* QC standards
* Intensities for the internal standards
* Duplicates, spikes and dilutions
* Elements in the samples
An example of when one might use the sample limit feature of the software would be to
monitor when the concentrations of the elements in the samples are below the detection
limit or above the linear range. In either case, an action could be selected. A
message would be printed which might simply flag the data and tell the user that the
sample must be rerun for that particular element. For example, the message "Se
concentration is below the Detection Limit, RERUN by GFAA" could appear when the
concentration of Se is below the lower limit. The message "Ca concentration is beyond
the Linear Range, DILUTE and RERUN" could appear when the concentration of Ca
exceeded the upper limit.
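The sample-limit checks and messages just described might look like the following sketch; the element limits are illustrative assumptions.

    # Sketch: per-element lower (detection) and upper (linear range) limits,
    # with printed messages in the style described in the text.
    sample_limits = {
        # element: (detection limit, upper linear-range limit)
        "Se": (0.05, 100.0),
        "Ca": (0.10, 500.0),
    }

    def check_sample(results):
        for element, conc in results.items():
            lower, upper = sample_limits[element]
            if conc < lower:
                print(f"{element} concentration is below the Detection Limit, "
                      f"RERUN by a more sensitive technique")
            elif conc > upper:
                print(f"{element} concentration is beyond the Linear Range, "
                      f"DILUTE and RERUN")

    check_sample({"Se": 0.01, "Ca": 765.0})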
Actions
An action may be chosen for every measurement for which a limit was set. This would
include: correlation coefficients, all QC standards, upper sample limit for each element,
lower sample limit for each element, intensity drift for each internal standard element,
duplicates, spikes and dilutions. It is also possible to qualify when an action should take
place. Rather than have an action, say recalibration, occur when any element out of a
suite of 20 is out of limits, it is possible to establish which elements or how many
elements must be out of limits before an action will occur. A recalibration takes time, and
it may not be worth the time if only one element is out of limits, especially if it is an
element that may be determined later by a more sensitive technique anyway.
There are nine available actions for each measurement for which you have established
limits. Two actions may be selected for each measurement. The second action becomes
the alternative action that will be taken if the first action does not solve the problem.
This would be analogous to a user rerunning a QC standard if it did not meet the
established limits. For the QC Expert this would be action 1. If the QC standard still
1-201
-------
failed, the user would recalibrate and then rerun that standard. This would be action 2
for the QC Expert, as sketched after the list of actions below.
When an out of limit condition is detected by the software, the selected actions are
executed. The actions available are as follows:
*Stop
* Continue
* Recalibrate and Continue
* Recalibrate and Rerun
* Wash for X seconds and Rerun
* Wash for X seconds and Continue
* Rerun current sample/standard
* Continue from... (specify sample or standard ID)
* User Defined Program
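The primary/alternative sequencing might be expressed as in the sketch below; the toy measurements and actions are illustrative, not the QC Expert's internal logic.

    # Sketch: try the measurement, apply the first action on a failure, and
    # fall back to the alternative action on a second failure.
    readings = iter([12.0, 11.5, 9.1])      # toy QC results: fails twice, then passes

    def run_with_actions(measure, within_limits, action1, action2):
        if within_limits(measure()):
            return True
        action1()                            # e.g. rerun the QC standard
        if within_limits(measure()):
            return True
        action2()                            # e.g. recalibrate, then rerun
        return within_limits(measure())

    print(run_with_actions(
        measure=lambda: next(readings),
        within_limits=lambda value: value <= 10.0,
        action1=lambda: print("action 1: rerun QC standard"),
        action2=lambda: print("action 2: recalibrate and rerun"),
    ))                                       # -> True after the alternative action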
User Defined Program
The User Defined Program action allows the user to do virtually anything for which a program
can be written. A User Defined Program is a batch file containing whatever
commands are needed to accomplish the desired task. When this action is executed, the
QC Expert software is temporarily exited and the User Defined Program is run. When
the program is completed, the QC Expert software is automatically returned to and the
analysis continues. The User Defined Program could turn on or off a switch or turn on a
dilution system or even call you at home.
Every time an out-of-limit situation occurs, the message in the message column would be
printed on the printout. This message could be as long as 80 characters, the full width of
a page.
The utility of a User Defined program as well as the utility of establishing limits for
elemental concentrations in samples are best illustrated in a process control situation. An
example of a process that requires good control is the discharge of effluent from a
factory. Depending on what body of water the effluent is being discharged into, the
limits for the various heavy metals monitored vary. In this example, the effluent is being
discharged into a trout stream; thus the allowable levels of many heavy metals are very
low. The allowable level for the discharge of Pb, mandated by the EPA, is as low as 15
µg/L in some states. The QC Expert software in conjunction with an ELAN 5000 could
easily be used both to monitor the level of Pb in the effluent and to control the
discharge of it. The sample limits feature of the software could be used to determine
when the concentration of Pb exceeds 15 µg/L in the effluent. When the level of Pb does
exceed 15 µg/L, the action that the QC Expert would initiate would be a User Defined
Program. This User Defined Program could control a switch that would divert the
effluent to a holding tank where it could be treated to precipitate the Pb before
discharging the effluent into the trout stream.
1-202
-------
Customizing a Method
The QC Expert software has a number of features that allow one to customize a method.
This makes it easy to follow any QC protocol required. Units are available for the
standards, the weights and volumes for the samples and the final concentrations reported
for the samples. It is possible to use different units for each element. Several commonly
used units are available in the software and it is possible to teach the software any other
unit desired. The software is smart enough to know the various conversion factors
needed to calculate the final sample concentrations in the units specified.
It is possible to report the sample concentrations to a certain number of significant
figures or a certain number of decimal places. Each element may also be reported
differently. The available calibration curve algorithms are selectable on a per element
basis and there are a number of linear and non-linear choices available. The samples
have two possible labels, a 15 character batch ID and a 15 character sample ID. It is easy
to copy and increment these IDs.
Once a method has been created and all the sample IDs and weights have been entered,
the analytical sequence is automatically created by the software and assembled into an
Autosampler Worksheet. The software is able to create this worksheet by combining
information on the autosampler tray layout, the calibration and QC standards to be run
and the sample information. This autosampler worksheet is readily edited. It is possible
to print this worksheet according to the analytical sequence so the user knows how the
analysis will proceed or it is possible to print the worksheet according to autosampler
positions so the user can fill the autosampler tray easily.
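The assembly of the worksheet amounts to interleaving the QC standards between batches of
samples. A sketch follows; the batch size of 10 mirrors the analysis described in the
next section, while the names and data structures are illustrative.

    # Build an analytical sequence: calibration standards first, then
    # samples, with the QC standards rerun after every full batch and
    # again after the last sample.
    def build_worksheet(cal_standards, qc_standards, samples, batch_size=10):
        sequence = list(cal_standards) + list(qc_standards)
        for i, sample in enumerate(samples, 1):
            sequence.append(sample)
            if i % batch_size == 0:
                sequence.extend(qc_standards)
        if len(samples) % batch_size != 0:
            sequence.extend(qc_standards)
        return sequence

    print(build_worksheet(["CAL1"], ["QC1", "QC2"],
                          ["S%d" % n for n in range(1, 13)]))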
Running an Analysis
An example of how an analysis is run automatically by the QC Expert is shown in the
flowchart below. The calibration is initiated by the software. The correlation
coefficients are compared to the minimum acceptable value of 0.9995. If any of the
correlation coefficients are 0.9995 or less, the software directs the instrument to
recalibrate for those elements. However, if a recalibration has already occurred once, the
software will simply stop the analysis and sound an alarm to alert the user to the
problem. If all the correlation coefficients are acceptable, the analysis will proceed and
all the QC standards are run. After each QC is run, the results are compared to the
acceptable limits. If the results are unacceptable for more than five elements, a
recalibration occurs. If a recalibration for this QC has already occurred, the analysis is
stopped and an alarm is sounded to alert the user of the problem. If five or fewer elements
were out of limits, the analysis continues with the samples. However, the elements that
were out of limits are no longer reported to avoid having data on the printout that is not
useable.
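The decision logic just described reduces to a pair of small routines. The sketch below
takes the 0.9995 threshold, the single allowed recalibration, and the five-element rule
from the text; the function and return-value names are illustrative.

    # Calibration check: a correlation coefficient of 0.9995 or less fails;
    # one recalibration is allowed before the run stops with an alarm.
    MIN_R = 0.9995

    def check_calibration(corr_by_element, already_recalibrated):
        failed = [el for el, r in corr_by_element.items() if r <= MIN_R]
        if not failed:
            return "proceed", failed
        if already_recalibrated:
            return "stop_and_alarm", failed
        return "recalibrate", failed

    # QC check: more than five out-of-limit elements forces a recalibration,
    # or stops the run if one has already occurred; five or fewer elements
    # are dropped from the report and the analysis continues.
    def check_qc(out_of_limit_elements, already_recalibrated):
        if len(out_of_limit_elements) > 5:
            return "stop_and_alarm" if already_recalibrated else "recalibrate"
        return "continue_and_suppress_failed_elements"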
1-203
-------
[Flowchart: an overview of an analysis run by the QC Expert, showing the Calibrate
Instrument step, the calibration acceptance decision, and the Run QCs step with its
decision branches.]
After each sample is run, the elemental concentrations are compared to the allowable
upper and lower limits. If a concentration for a particular element is below the lower
limit, a message is printed, telling the user that the concentration for that particular
element is below the detection limit and it has to be run again by a more sensitive
technique. If the elemental concentration is above the lower limit, it would be compared
to the upper limit to check that it does not exceed this. If the elemental concentration
does exceed the upper limit, the autosampler would be directed to wash for 20 seconds
and then continue on with the next sample. This would avoid carryover between
samples. A message would also be printed out telling the user that the concentration for
that element exceeded the linear range and the sample would have to be diluted and
rerun. After each sample is run, the sample count is checked. Once 10 samples have
been run, the QC standards are run again as already described. This process continues
in this fashion, with QC standards being run between batches of 10 samples. After the
last sample, a set of QC standards is run again with a similar protocol.
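The per-sample limit test can likewise be pictured as a short routine; the 20-second wash
and the two messages follow the description above, while the wash callback stands in for
the autosampler interface and is purely illustrative.

    # Compare one element's concentration to its limits and return the
    # message to print; exceeding the upper limit also triggers a wash
    # to avoid carryover into the next sample.
    def check_sample(conc, lower, upper, wash):
        if conc < lower:
            return "below detection limit; rerun by a more sensitive technique"
        if conc > upper:
            wash(seconds=20)
            return "exceeds linear range; dilute and rerun"
        return "ok"

    print(check_sample(18.0, lower=0.5, upper=15.0,
                       wash=lambda seconds: print("washing %d s" % seconds)))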
Without the automation of the QC software, the user would have been required to
intervene at all of the stages, to make a decision and/or to physically take action. The
user would also have been required to wait after each intervention to see that the problem
was solved and the analysis could continue. If the problem had not been solved, a second
action would have to be initiated by the user. Again, the potential for errors allowing
"bad" data to slip through cannot be overstated. With the QC software, the user has only
1-204
-------
to be present to start the analysis. The user may want to be within earshot if he wants to
know when the analysis is completed or if the analysis has stopped.
When the analysis is completed, the printout will contain all the information necessary to
track the analysis. Time and date appear for each sample. When QC standards and
diluted samples are run, percent recoveries are calculated and printed. When spiked
samples are run, spike recoveries are calculated and printed. For duplicate samples a
percent difference is printed. All user specified messages will be printed for all out of
limit situations.
If the printout is not what is needed as a final report, one can use the Report Generator
included which will provide a variety of reporting formats. It is also possible to monitor
the quality of the data over time by using the QC Charting mode of the software. In the
QC Charting mode, one can monitor any measurement of any standard and element over
time. For example, one can plot the concentration for each element in each QC standard
over time (an X-bar chart) or the %RSDs for each element in each QC standard over
time.
The QC Expert software totally automates the analysis, allowing the analyst to attend to
other tasks. Actions that would normally be taken by the analyst to assure the quality of
the analysis can be executed by the software. The laboratory can become very efficient
by having the analyst tend to other more challenging and important tasks than "watching"
an instrument. The quality of the data is assured by the software as it executes the
protocol established by the user.
1-205
-------
29 AUTOMATED REPORTING OF ANALYTICAL RESULTS AND QUALITY CONTROL
FOR USEPA ORGANIC AND INORGANIC CLP ANALYSES
Richard D. Beaty, President, TELECATION ASSOCIATES, P.O. Box
1118, Conifer, Colorado 80433; and Leigh A. Richardson,
Laboratory Consultant, PEAK 10 SCIENTIFIC, P.O. Box 278,
Conifer, Colorado 80433
ABSTRACT
The U.S. Environmental Protection Agency's Contract
Laboratory Program (CLP) for environmental analyses follows a
detailed protocol for analysis, quality control, and
reporting. The formal reporting procedure involves
submission of a deliverable data package, which includes a
number of predefined forms and a diskette containing data in
an "Agency Standard" format. Attention to detail in the
reporting process is important, both to assure verifiable,
usable data for the Government, and to guarantee full
compensation for the contractor laboratories. Automation of
the reporting detail is necessary, if compliant deliverables
are to be generated with regularity.
The QC calculation and reporting process lends itself to
automation through the use of PC-based programs for data
reduction. This paper will discuss the factors involved in
developing and maintaining such programs for both the Organic
and Inorganic CLP programs.
Automation of the reporting process begins with electronic
data acquisition from the analytical instruments being used.
Some beginning efforts at standardization of data output
format among instrument manufacturers have resulted in the
ability to include "standard" data import routines in CLP
report generation software. For organic analyses, the
"reduced result" file has been selected as a standard import
structure. For inorganic analyses, a "comma delimited ASCII"
import routine has been selected, which is compatible with
some manufacturers' AA and ICP instruments. For both organic
and inorganic analyses, where instrument output does not
match one of the "standard" structures, special custom data
file import routines have been devised for automatic data
acquisition.
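As an illustrative sketch of such a comma-delimited import routine (the three-column
layout of sample ID, analyte, and result is an assumption here, since each instrument
vendor defines its own field order):

    # Read a comma-delimited ASCII results file into a list of tuples.
    import csv

    def import_results(path):
        results = []
        with open(path, newline="") as f:
            for sample_id, analyte, raw_result in csv.reader(f):
                results.append((sample_id, analyte, float(raw_result)))
        return results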
After instrument data has been accumulated into the CLP
reporting software, additional required information may be
added by keyboard entry. Once all required data is present,
software routines process data to calculate reported results
1-206
-------
according to EPA approved calibration routines, compare
results to specified limits and conditions, flag results
which lie outside these limits or otherwise meet conditions
for data qualification, and finally, generate the formal
printed and computer readable reports.
Until recently, two formats for computer readable data were
allowed. These were known as Format A and Format B. Over
the past year, the intention to go to a single "agency
standard" format was announced. The specifications for this
format, as it applies to CLP diskette deliverables, were
introduced earlier this year. This paper will discuss the
status of automated CLP reporting software to provide "agency
standard" diskettes for organic and inorganic CLP analyses.
While the CLP program was devised strictly as a protocol for
sample analysis and reporting of data by EPA contractors to
the EPA, a trend has emerged for use of CLP, or at least a
"CLP-like" format, as a reporting standard for other
environmental analyses. In such cases, it may be desirable
to modify some of the details of the report, to achieve
modified goals. The modifiability of the CLP reporting
software to address special reporting needs will be
discussed.
INTRODUCTION
Telecation Associates began as a consulting company providing
on-site training, instrument setup and method development for
analytical laboratories. In 1985, while helping a laboratory
set up for compliance with the Contract Laboratory Program's
inorganic Statement of Work 7/85, we became aware of the
detailed calculations and comparisons required to produce a
CLP printed data package. It was quite obvious that a
PC-based software program designed to perform the
calculations and the logic required to fill in the forms
would provide an automated means of generating compliant
deliverables with both regularity and considerable savings of
time.
Somewhat later, a new inorganic Statement of Work, SOW 7/87,
was released. This contract required the laboratory to not
only prepare the printed set of forms for each group of
samples, but also submit the package data on a DOS readable
diskette, in accordance with one of two very specific
computer formats. Computer automated reporting of analytical
results for CLP analyses had now become a requirement.
1-207
-------
In October of 1987, Telecation introduced a commercial
software product which automated the QC calculations and the
report and diskette generation for Inorganic CLP, SOW 7/87.
Since then we have continued providing CLP reporting software
to comply with subsequent contracts under Statements of Work
7/88 and 3/90, in addition to numerous official revisions and
"interpretation updates" issued by EPA for each contract.
In the spring of 1990, Telecation expanded its CLP software
product line to include the new Organic CLP contract for SOW
3/90. In May 1990, Telecation began shipping QC and
reporting software for all three organic protocols, including
volatiles, semi-volatiles, and pesticides. From the
viewpoint offered by our extensive background in developing
software for both inorganic and organic CLP, we will discuss
the technical and economic factors which prevail in the
automation of the QC and reporting requirements of USEPA's
CLP program. We will also identify opportunities to
substantially enhance the benefits of automation for CLP-like
applications, where the restrictions imposed by USEPA may not
be a factor.
REVIEW OF THE REQUIREMENTS
Before examining the factors affecting the development of CLP
reporting software, we will first review some of the elements
of CLP, which the software must address. The analytical
protocols for CLP describe the analysis of set lists of
analytical parameters. In addition to the actual field
samples, a series of QC samples must be analyzed. The QC
samples include such things as blanks, duplicates, spikes,
matrix spike duplicates, control samples, and various
instrument performance checks. The analytical results and
other sample related details, as well as the QC, sample
preparation, instrument calibration and performance checks,
are summarized on printed forms, the format of which is
precisely defined for each reporting protocol. There are 14
different forms plus a cover page required for the inorganic
data package, and 36 different forms for organics.
The CLP reporting requirements do not allow a simple transfer
of information to a fixed format form. All raw analytical
results must first be compared to both actual instrument
detection limits and to the reporting limits required by the
contract to determine which of the values is to be reported.
Each form has different requirements regarding the reporting
of data corrected for sample preparation factors, the
reporting of significant figures, decimals, and values
rounded according to the EPA rounding rules. Data
1-208
-------
qualifiers, or "flags", summarize the analytical performance.
And, in some cases the forms interact with one another,
inasmuch as the results or the flags appearing on one form
may also have to appear on another form in place of, or in
addition to, information which would normally appear there.
And, last but definitely not least, these results must also
be submitted on a computer readable diskette according to a
very fixed format, which is now entirely different from the
format of the data appearing on any form.
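The detection-limit comparison can be sketched under one common convention: results below
the instrument detection limit (IDL) are reported as undetected, and detected results
below the contract required detection limit (CRDL) carry a qualifier. The "U" and "B"
flags shown follow the usual CLP inorganic usage, but whether this matches every form's
rule is an assumption.

    # Choose the value and qualifier to report from a raw result and the
    # two detection limits (a sketch of one common CLP-style convention).
    def select_reported_value(raw, idl, crdl):
        if raw < idl:
            return idl, "U"   # undetected: report the IDL with a "U" flag
        if raw < crdl:
            return raw, "B"   # detected, but below the contract limit
        return raw, ""        # detected at or above the CRDL

    print(select_reported_value(3.2, idl=1.0, crdl=5.0))   # (3.2, 'B')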
In addition to the demands imposed by the contract details of
the CLP statements of work, the practical considerations in
making the software versatile enough to address modified CLP
needs must be considered. When the Contract Laboratory
Program was established, it was originally intended as a
standardized protocol for analysis and reporting by
contractor laboratories who worked under contract to the
USEPA. Soon, however, the quality control standards set by
the program began to be applied to other environmental work,
as well.
Since legal decisions rest heavily on established precedent,
the analytical and reporting protocols of CLP, which were
developed specifically to provide legally defensible data,
were accepted as a de facto standard for such analyses.
Because laboratories of all types are now being held legally
accountable for the data they release, CLP-type reports are
being requested by state environmental agencies, industrial
accounts, environmental engineers, and practically every
facility that is submitting data for any type of
environmental monitoring, especially if they are concerned
that the data may become involved in litigation.
The emergence of other users of CLP reports, besides the
USEPA, has created a demand for a software product which can
automate the generation of a CLP-like reporting package,
which may be substantially the same as strictly interpreted
CLP, but modified in certain details. Even the USEPA
itself has issued special bid requests, which deviate
from the "Routine Analytical Services" contracts. These
include protocols for analysis of "Low Concentration
Organics", "High Concentration Inorganics", and others.
The market place for CLP software is small to begin with,
relative to the development effort required to implement the
complex requirements. These special purpose modifications to
standard CLP serve to reduce the market size even further.
In the case of EPA special bids, the entire market size may
consist of only two or three laboratories. Therefore, the
only feasible way to address the many variations of CLP and
1-209
-------
"CLP-like" applications, is to design flexible software,
which allows the user to modify certain aspects of the
software's performance. The features to be modified may
include the list of analytes or compounds to be reported,
variations in quantitation limits (CRDL's and CRQL's), and
the format or file structure in which data is to be recorded
on the printed forms and/or the diskette deliverable. Some
applications may not require all of the printed forms, and
others may require a modified file structure for the diskette
deliverable. Finally, even the actual analysis details,
including the nature of quality control and associated
calculations to be performed, may be different from USEPA CLP
specifications.
To create a software package which automates the complex
requirements of CLP would be, in itself, a difficult
challenge. To also make it flexible to address any number of
modified specifics, while keeping the final product easy to
use, becomes an almost impossible task. Combining the above
goals with the ever-changing nature of the USEPA requirements
for CLP makes an optimum software solution for all
applications extremely elusive.
In spite of the challenge presented above, Telecation
provides a solution in its "ENVIROFORMS/Inorganic" and
"ENVIROFORMS/Organic" software products, which have
historically provided notably accurate compliance with strict
CLP standards, while providing the flexibility required to
allow the user to address most "CLP-like" applications,
through the use of software utilities for modifying output.
No special knowledge of a programming language or use of
optional compilers is required to implement modified output
through the use of these utilities.
BARRIERS TO AN OPTIMUM SOFTWARE SOLUTION: CONTINUAL CHANGES
Working with the EPA's Contract Laboratory Program can cause
both the contract laboratories and the software vendor
considerable frustrations. Not only do these contracts
contain a profusion of technical and bureaucratic detail, but
also these details are in a constant state of flux. To the
software vendor, once a contract is issued, the program code
necessary to implement the terms of the contract is written,
tested, and documented. But even as samples are being
released for analysis, revisions to the protocol are issued.
Sometimes the official revisions note errors in calculations
or documentation in the original contract. Other times these
amendments are actually "interpretations" intended to
clarify an ambiguous point, or in cases where the contract
1-210
-------
contradicts itself, indicate which statement should prevail.
Inasmuch as the CLP approach is now several years old, it
might be expected that the number and frequency of revisions
would be diminishing. But in fact, this has not been the
case. Wholesale changes in analytical protocol, QC
calculations, and diskette file structure have led to a new
round of contract problems, the fix to which frequently
creates more problems.
Each time revisions are issued by the EPA, the software
vendor is required to make the corresponding changes to the
software, if the product is to remain viable. Since the
nature of CLP software is so interactive, even the smallest
change to the software in one area requires thorough testing
to detect possible problems in other areas, which occur as a
result of the change. This process ultimately identifies
"bugs", which require further changes followed by a new round
of testing. Then, the documentation for the software must be
updated to reflect these latest changes. It should be noted,
that revisions which the EPA might consider "minor" may in
fact have a major effect on software. The code is written
around all of the details specified in the contract, and a
"minor" change in procedure may have a devastating impact on
the logic used for program development.
Besides testing and documenting the updated software, the
vendor must also consider the effect of updating their
current users, considering, among other things, how installing
the changes will affect "work in progress" at the laboratory
site. Lastly, all new software and documentation must be
distributed to the users.
The instability of CLP contract terms, therefore, sets into
motion a series of events, which can only lead to further
changes. Each contract change which necessitates a software
change, requires that new logic and code be developed,
followed by performance testing, software bug fixes and
retesting, and issuance of updated software and
documentation. This is usually followed by identification of
contract flaws, which require new contract modifications,
which sets the whole process in motion again. Meanwhile, as
this moving target progresses through a never ending
evolution, the contract laboratories and software vendors are
expected to "conform" to what ever the latest interpretation
happens to be. Any deviations from this volatile "standard"
result in compensation penalties exacted against the
submitting laboratory.
Because of the continuous EPA revisions to the requirements
for CLP analysis and reporting, most software vendors offer a
1-211
-------
type of software maintenance contract designed to keep the
end user up to date with the latest EPA revision. Some
laboratories view the price of these "Maintenance Contracts"
as expensive, and indeed they are relative to maintenance
contract prices for most software. But the demands outlined
above are not a part of most software. The software vendor
who conscientiously strives to update its clients in a timely
manner after being notified of yet another change, must be
prepared to divert development, testing, and documentation
resources from other projects to CLP projects on a moment's
notice. If commercial software for CLP work is to continue
to be available, the software vendor must be compensated for
the never-ending development and untimely diversion of
resources. In fact, the software vendors, like the contract
laboratories themselves, realize a nominal profit margin on
CLP work, relative to other, less demanding tasks.
Laboratories, who bear the cost for software and software
maintenance, should understand what is behind that cost.
BARRIERS TO AN OPTIMUM SOFTWARE SOLUTION: OTHER
The goal of CLP software is to automate QC calculations and
reporting in as expeditious a manner as possible. There are a
number of details of the CLP contract which impede the
ultimate achievement of this goal. The inorganic contract
SOW 7/85, which was produced prior to the need for
computerization, included some features which defied
computerization. One such detail was the requirement to
"circle" preprinted options on the forms. Since computers
cannot "circle" choices, this was changed for SOW 7/87 to
take advantage of the benefits which computer technology had
to offer. However, at least one requirement, which is almost
equally incompatible with computer technology, still exists
today: the EPA rounding rule.
The EPA rounding rules call for the evaluation of numerical
data in a way which is foreign to computers. Computers can
round on their own very nicely, but the rounding required by
the Contract Laboratory Program calls for a special
evaluation routine to be performed on each and every value.
Since the rounding rule is nothing new, the software routine
to perform the calculation was developed long ago. But this
does not mean that the rounding rule
does not still impact CLP software in a negative way. First
of all, the necessity to round by a special routine
frequently complicates the implementation of other EPA
imposed changes. Further, the ever present requirement to
evaluate every number by a special rounding routine impacts
1-212
-------
the speed of software operation. When it is realized that a
typical sample delivery group contains thousands, if not tens
of thousands of calculated numbers, the special rounding rule
is responsible for a significant increase in software
operation time.
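To illustrate the kind of special evaluation involved, the sketch below implements
round-half-to-even; that this matches the EPA rule in every detail is an assumption, but
the treatment of a trailing 5 is the usual point of departure from a computer's built-in
rounding.

    # Round-half-to-even: a trailing 5 rounds toward the even neighbor,
    # unlike the round-half-up behavior many systems apply by default.
    from decimal import Decimal, ROUND_HALF_EVEN

    def special_round(value, places):
        quantum = Decimal(10) ** -places
        return float(Decimal(str(value)).quantize(quantum,
                                                  rounding=ROUND_HALF_EVEN))

    print(special_round(2.5, 0))   # 2.0: rounds down to the even digit
    print(special_round(3.5, 0))   # 4.0: rounds up to the even digit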
As to the value of the rule, it has been stated that in order
to make the data legally defensible, it is necessary to
precisely define the rounding conditions. This appears to
us, however, to be a flawed argument, since it is highly
unlikely that the outcome of a legal action would ever hinge
on the difference in rounding of a number in the least
significant digit. A competent attorney would see to that.
The USEPA has deemed this rule to be valuable for their
purposes, and hence, it is by definition a part of all
software intended for generating data for EPA use. For all
other applications, the EPA rounding rule serves only to
reduce the automation benefit which would otherwise be
realizable with 1990s computer technology.
A new impediment to efficient compliance with the CLP
protocols is the "Agency Standard" diskette format. Prior to
the more recent contract terms, data could be recorded on
diskette in one of two formats, Format A or Format B. Format
A has been the choice of most software vendors, since that
format closely matches the printed form, and a software
routine which is set up to print the hard copy forms can,
without great difficulty, also produce the Format A file
structure. By contrast, the "Agency Standard" format, which
does not follow the forms, and in fact contains information
which is not contained on any form, requires all new software
code, involving additional data manipulation, to produce the
diskette. This incremental software development will impact
the cost of future software and maintenance contracts.
Software users should also expect increased software
operation time, to complete the additional data manipulation
required to create the Agency Standard diskette.
A recent occurrence has added a new element to the already
uncertain nature of CLP contract details. This occurrence
attacks at the heart of every software vendor's confidence
that after expending the time and resources described earlier
to develop a product to address a new CLP contract, there
will be a market for that product.
When a new CLP contract is issued the required dates for
complete compliance are also stipulated. At a CLP data
management caucus held in Raleigh, North Carolina in the
spring of 1990, the software vendors were asked if they would
be able to respond to the proposed Statement of Work 3/90 for
1-213
-------
volatiles, semi-volatiles, and pesticides by a date specified
by EPA. It was explained to the vendors that it would be
important for software to be available at that time for use
with Performance Evaluation samples distributed in response
to an Invitation For Bid appearing in Commerce Business
Daily. Because of this and prior commitments made to EPA to
meet the required schedules, Telecation redirected resources
to address the new Statement of Work, and ENVIROFORMS/Organic
SOW 3/90 was ready to ship in May, 1990. However, EPA later
rescinded its previously announced schedule, thereby
eliminating all need for the product.
The impact on software cost emanating from the frequent
changes imposed by EPA has already been discussed. Issuing
modification instructions and asking for crash development
efforts, only to have the demand removed after the
development effort has been expended, compounds the economic
impact of the changes. The development effort expended
toward a protocol which is never, in fact, implemented has to
be absorbed in the cost of future CLP software products and
maintenance contracts. It should be noted that the software
vendors affected most by this failure of EPA to follow
through with its announced contracts, are those which strive
the hardest to provide prompt updates to the changes. Such a
practice can only serve to discourage the software vendor
from reacting quickly to future USEPA changes.
OPPORTUNITIES IN CLP-LIKE PROTOCOLS
As discussed earlier, there is an increasing number of
laboratories who wish to report data in a format resembling
CLP, but differing in certain details. For such
applications, the limitations discussed above may not apply.
It is not the authors' intention to second guess the USEPA on
the necessity for the details contained in its contracts, nor
to question the need to frequently change those details. It
is our purpose to identify the stumbling blocks which stand
in the way of automating the good analytical protocol of CLP,
and discuss the benefits that would accrue to non-USEPA
applications, if these stumbling blocks were removed.
As discussed earlier, non-USEPA applications need to
change various performance and output features, including
such things as the list of target analytes or compounds,
quantitation limits, and report and diskette formats.
ENVIROFORMS/Inorganic and ENVIROFORMS/Organic offer this
flexibility to the user. Further, the user may select less
than the total CLP form set for printing. The items mentioned
above are modifiable by the user without programming, because
1-214
-------
the items are controlled by user accessible data bases, as
opposed to program code.
Some modifications for which we have received requests,
involve changes to the analytical and QC procedures from
standard CLP protocols. Since much of the program logic and
calculations are based on the standard CLP procedures and
calculations, applications which require a departure from
standard CLP procedures and calculations are more difficult
or, in some cases, impossible to change without changes in
actual program code.
One such application was recently introduced by EPA, itself.
The statement of work for "Low Concentration Inorganics"
introduced a new technique (ICP/MS), added several additional
forms, and reported detail which was not previously required.
Since these and other changes struck down the program logic
assumed for standard CLP, it was impossible to use the
standard CLP software to address Low Concentration
Inorganics. Similarly, certain state programs, which model
themselves after CLP but alter certain critical logic
elements, destroy the possibility of using off-the-shelf CLP
software to totally automate the response.
A potential solution exists, which would add the ability to
change many procedural issues, in addition to the currently
modifiable characteristics of the software. This solution,
which is currently under investigation by Telecation, would
make the software operation largely independent of analytical
procedure and would put the QC calculation formulas in the
hands of the user. Such a software product would benefit
both the contract laboratory and contractor agency, alike, in
that it would now be technically and economically feasible to
address special contracts, with differing analytical and QC
requirements. If it is easy enough for the user and/or the
vendor to make the changes necessary to address the special
requirements, then such changes can be economically made,
even for low volume work.
Whether or not such a concept is workable rests heavily on
the flexibility of the contractor agency regarding some of
the issues which have complicated CLP in the past. In order
to provide the degree of user control necessary to make the
concept work, the software code will have to be less
regimented. This means that some of the issues which are now
handled by the regimented code must be sacrificed. The
issues which cause particular problems with this concept are:
(1) the EPA rounding rule, and (2) the Agency Standard
diskette.
1-215
-------
The rounding rule, as discussed earlier, offers very
questionable benefit, either from a standpoint of scientific
significance or legal defensibility. However, its presence
would require the software to execute specific rounding
routines, which in turn, would require the software to have
predefined knowledge of the calculations, which eliminates
the desired goal of user-definable formulas.
The Agency Standard diskette format introduces a new and
formidable barrier to development of a generic software
package for CLP-like work. The format, which includes such
things as special data delimiters, hexadecimal calculation of
checksums, and unique file formats, precludes any possibility
of user configuration. On the other hand, a simple file
format based around a comma-delimited ASCII output of
information contained on the forms, would make it possible to
generate a diskette under user control, by allowing the user
to simply indicate the sequence of fields to be written to
the diskette file.
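A sketch of such a user-controlled writer follows: the user supplies only the sequence of
form fields, and each record becomes one comma-delimited line. The field names are
illustrative.

    # Write records to a comma-delimited file in a user-chosen field order.
    import csv

    def write_deliverable(path, records, field_sequence):
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(field_sequence)   # header row naming the fields
            for record in records:
                writer.writerow([record.get(field, "")
                                 for field in field_sequence])

    write_deliverable("deliverable.csv",
                      [{"sample_id": "MAB123", "analyte": "Pb", "result": 4.1}],
                      ["sample_id", "analyte", "result"])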
It is our understanding that the Agency Standard has been
decreed as Government policy even to those responsible for
generation of CLP contract terms. Therefore, it is assumed
that Agency Standard is a nonnegotiable point for USEPA
applications. This simply means that software to address
special low volume USEPA contracts will probably never be
commercially available. Other contractors, who are not under
this constraint, should consider the complications of Agency
Standard very carefully, before adopting this approach and
promulgating more barriers to efficient automation of CLP
type work.
SUMMARY
The technical and non-technical complications discussed in
this paper do not necessarily go hand in hand with the goals
of CLP analyses. For those who have need to develop a good
regimented QC program accompanied by a standardized reporting
format, we would encourage adopting the sound analytical
features of CLP, while excluding the detail which complicates
the process without return of a corresponding benefit. For
state agencies which have not finalized their requirements,
we would encourage allowing flexibility, where such
flexibility does not compromise the quality of the data or
quality control. We would advise against requirements for
special rounding rules, and suggest a less rigid diskette
format based around the printed forms. Such flexibility will
not only reduce the cost of both analysis and automation, it
will also lead to a more reliable deliverable, due to the
reduced complexity.
1-216
-------
Telecation has been dedicated to providing state-of-the-art
software to address CLP applications, since the first
diskette deliverable requirement for Inorganic CLP. The
current ENVIROFORMS/Inorganic and ENVIROFORMS/Organic address
detailed compliance to the USEPA's contract terms. We have
provided timely updates to EPA issued changes, by
conscientiously applying development resources to the CLP
challenge. As an historical supplier of CLP software,
Telecation intends to continue its past policy of providing
the detailed compliance and prompt updates necessary for
USEPA contract laboratories, consistent with what is
technically possible and economically viable.
We are, however, not content with the limitations imposed by
USEPA details and the effect they have on non-EPA
applications for CLP-type analyses. We will, therefore, also
explore new approaches to CLP-like software, which retain the
analytical benefit of CLP, without the cost and operational
complexity burdens which accompany USEPA CLP requirements.
Since USEPA applications will undoubtedly continue to present
obstacles to automation, future developments will probably
involve a CLP software product line, consisting of two
separate product categories, one for USEPA contracts in its
strictest detail, and one for other, more flexible
applications. This allows strict compliance to be maintained
for our EPA contract laboratories, while offering expanded
capability and flexibility for those who are not so
constrained.
For such a segregated software product line, the costs
associated with the changes and complexity imposed by the EPA
could be applied to the price of the software intended for
USEPA applications. The more flexible CLP products would
thereby be less expensive to laboratories, since they would
not carry the cost liability of the complexity and frequent
changes imposed by the EPA program. This would, of course,
make the software for USEPA applications much more expensive
than it is currently, since there would be a smaller market
over which to amortize the development expense. On the other
hand, such a segregation of product application would put the
cost where it really belongs, without penalty to all those
other applications, which are not so constrained.
The authors would again like to clarify that the purpose of
this paper is not to provide an evaluation of the necessity
and value of the items discussed to the USEPA's program. It
is impossible for us to know all of the factors behind the
terms of USEPA contracts. We are, however, qualified to
identify the effects of these terms on software performance
and flexibility, and evaluate the enhanced features and
1-217
-------
benefits which could be made available for CLP-like
applications, where the constraints of the EPA program are
not a factor.
1-218
-------
A CUSTOMIZABLE GRAPHICAL USER-FRIENDLY DATABASE FOR
GC/MS QUALITY CONTROL
Peter Chong, Programmer, Analytical Systems; John Hicks, Manager, Analytical Systems;
John Janowski, Supervisor, Mass Spectroscopy; Chris Pochowicz, Technical Writer,
Analytical Systems; Chemical Waste Management, Inc., 150 West 137th Street, Riverdale,
IL 60627; Gene Klesta, Director, Quality Assurance Programs, Chemical Waste
Management, Inc., 4300 West 123rd Street, Alsip, IL 60658.
Abstract
Standard methods for GC/MS analysis require a significant amount of data gathering
operations to demonstrate good quality control (QC) practices. Both volatile and semi-
volatile analyses require the use of surrogates, internal standards, matrix spikes and matrix
spike duplicates. Laboratories are required by the corporate quality assurance (QA)
program to control the analysis within fixed method specific criteria. Additionally,
laboratories are required to calculate their own control limits.
At Chemical Waste Management (CWM), we developed a customized application to
facilitate the collection, analysis and viewing of GC/MS quality control (QC) data. The
Customizable Graphical User-Friendly Database for GC/MS Quality Control was designed
according to the guidelines specified in EPA SW-846 Methods 8240 and 8270.
Compounds for the QC section use the recommended surrogate standards from the two
methods. Ranges for recovery data are taken from Table 8 of Methods 8240/8270, which
contains multi-laboratory performance-based limits for soil and aqueous samples.
Compounds selected for the duplicate and fortification analysis section were selected from
the recommended compounds listed in EPA SW-846 Method 3500. Ranges for the matrix
spike and matrix spike duplicate compounds were selected from the forms following
Chapter 1 in SW-846, which correspond to the CLP limits for these compounds. In
addition to meeting standard QC practices for collecting, analyzing and viewing GC/MS
QC data, this application allows for the creation of acceptance limits in matrices other than
those matrices allowed by the methods specified within the database.
Surrogate recovery data are compared to these fixed criteria and are used to calculate the
laboratory specific criteria. Relative percent difference and percent recovery data are
accumulated and compared to the fixed acceptance and laboratory generated criteria. QC
data can be viewed, edited and graphically displayed based upon a wide range of selection
criteria (e.g., sample id, matrix, dates, analyst, instrument, etc.). Use of this application
has already significantly increased the efficiency and effectiveness of QC practices within
the company.
Introduction
Analytical methods promulgated by the USEPA which use gas chromatograph/mass
spectrometer instruments require a considerable number of quality control practices. The
complexity of the matrices and the sensitivity of the instrumentation and volume of data
combine to create a situation which requires substantial use of data evaluation to show the
accuracy and precision of the analytical process. In the methods covering volatile and semi-
volatile organics, a large number of compounds of interest are included, and the number of
surrogate compounds and matrix spike compounds is significant. Moreover, the methods
require specific trend analysis procedures and specific data evaluation techniques to
demonstrate acceptable performance.
1-219
-------
A laboratory using one of the GC/MS methods must calculate the average recoveries for all
surrogate compounds added to the calibration standards, blanks, and samples. The
laboratory must compare its variability to the allowable variability in the method. The lab's
own control limits should be developed from these accumulated data and used for control. The
mean percent recoveries and standard deviations need to be calculated and reviewed in a
timely manner. Additionally, the CWM QC Policy requires that the laboratories performing
GC/MS analysis report pertinent quality control data to the central quality assurance unit.
Quality Control policy also mandates that each VOA and Semi-VOA assay act as its own
QC sample with spike duplicates required every 20th sample.
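The control-limit arithmetic itself is brief; the sketch below assumes the common mean
plus or minus three standard deviations convention, though the method text governs the
exact formula a laboratory must use.

    # Laboratory control limits from accumulated surrogate recoveries,
    # assuming the usual mean +/- 3 standard deviation convention.
    from statistics import mean, stdev

    def control_limits(recoveries):
        m, s = mean(recoveries), stdev(recoveries)
        return m - 3 * s, m + 3 * s

    print(control_limits([98, 104, 101, 95, 107, 99]))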
A fully integrated software package was determined to be the best way to accomplish these
tasks, but no such package was commercially available. The GCMS Application, described
in this paper, was developed to fulfill these quality control requirements.
Specification
The specification for GCMS evolved over time as the developers interacted with members
of the Quality Assurance group and the laboratory. Quality Assurance required flexible
access to GCMS QC data to assure the quality of sample analysis and a means to compare
GC/MS quality across the corporation. The lab required flexibility in data input, reporting,
and graphing, plus the ability to isolate potential quality control problems of instruments,
analysts, or matrix interferences. Other requirements included the ability to selectively query
and graph surrogate data, compare our surrogate recoveries to regulatory limits, and the
capability to distribute the application inexpensively throughout the CWM laboratory
system.
Base Software
GCMS was developed in Clarion®, a comprehensive and flexible database development
language. Clarion® satisfied our database requirements and included built-in graphing tools
to allow for visual display of surrogate recoveries. Clarion® applications are compilable,
which facilitated distribution throughout the corporation.
Features
From the specifications described above, an initial prototype was built and demonstrated to
the users for immediate feedback. This process continued through several iterations until
the final application emerged. Ultimately, a "top down" versatile and responsive
screen/menu application was produced. The resulting GCMS application is a feature-rich
quality assurance/quality control tool. Major features include:
Customizable Database
GCMS allows all CWM labs to customize the application to local site requirements.
Laboratories incorporate instruments and GCMS personnel specific to their site,
enabling each laboratory to isolate quality performance and quality trends before
they become significant issues. GCMS also acts as a repository for regulatory
surrogate recovery limits and allows for creation of new acceptance limits in
matrices other than those listed in the regulations. GCMS allows for the
accumulation of QC data over a 5 year period with appropriate control limits for
each year.
1-220
-------
Data Input
Results are entered by selecting the appropriate method and pressing the [Insert]
key. Like data (analyst, instrument, matrix, etc.) are copied into the new record. As
the analyst enters QC data, the results are automatically compared to the pertinent
limits for that method and matrix. Out of control results are immediately flagged.
GCMS can also be shared on a network, allowing simultaneous access by analysts,
management and quality control staff.
Data Queries and Reports
GCMS has summary screens which show the quality performance for volatile and
semi-volatile analyses. The summary screens are queried for all data found in a
specific date range. The search can optionally be qualified for a particular analyst,
instrument, matrix or compound. The summary screen gives the reviewer a quick
view of the quality performance for a range of samples. The screen summarizes, in
an easily read format, the sample identification, the surrogates analyzed, the
acceptability of the surrogate (e.g., by signifying if the result is in or out of limits),
and a summary of the total performance of the assay (i.e., by listing the total
number of surrogates out of control). Since GCMS allows for qualified searches,
the data can be reviewed to identify specific quality problems in the lab.
GCMS is capable of generating two types of reports. The first report is a listing of
all surrogates and spikes entered into the application (Figure 1). The report looks
similar to the CLP format for reporting surrogate results and can be printed using all
of the filters used for the summary screens. The other report present in GCMS is
the QA/QC report which lists all surrogate parameters and gives a total of the
number of analyses entered, the number of samples, and QC calculations that
include: percent of analyses that are within QC limits, the upper and lower limits
along with a mean and the coefficient of variation. The mean percent recovery,
mean percent error, along with the standard deviations for both spikes and
duplicates are also reported. This report is placed in a DOS file and can be printed
or copied on disk and sent to the Quality Assurance unit.
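The statistics named in the QA/QC report are standard quantities; the sketch below shows
three of the central calculations (percent of analyses within limits, coefficient of
variation, and the relative percent difference used for duplicates), with illustrative
data.

    from statistics import mean, stdev

    def percent_within_limits(results, low, high):
        return 100.0 * sum(low <= r <= high for r in results) / len(results)

    def coefficient_of_variation(results):
        return 100.0 * stdev(results) / mean(results)

    def relative_percent_difference(a, b):
        # Difference of a duplicate pair relative to the pair's mean.
        return 100.0 * abs(a - b) / ((a + b) / 2.0)

    print(percent_within_limits([98, 104, 119, 95], 76, 114))   # 75.0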
1-221
-------
Chemical Waste Management, Inc.
GC/MS QC Report for VOA Surrogate Recovery
Date Analyzed: 11/30/90 TO 11/30/90
Instrument ID: *
Waste Matrix: AQU - AQUEOUS
Analyst Name: *

Report Summary

|AID|Sample-ID | DBF | 12D | TO8 | BFB |T-OUT|D|
|KH1|34052 TC  |  98 | 104 | 102 | 101 |  0  |N|
|PD1|34147 ZHE | 110 | 113 |  93 |  88 |  0  |N|
|PD1|34148 ZHE | 109 |  95 |  93 |  86 |  0  |N|
|PD1|34196 ZHE | 119*| 117*| 102 |  96 |  2  |N|
|PD1|34197     | 119*| 119*| 102 |  91 |  2  |N|
|KH1|BLANK VOA | 100 |  94 |  92 |  99 |  0  |N|
|PD1|BLANK VOA |  96 | 100 | 103 |  96 |  0  |N|

Glossary
AID: Analyst ID
T-OUT: Total # OUT of QC Limits
D: Surrogates Diluted Out
*: Values outside Req. QC Limits
-1: Represents Null Field/Data

Surrogates                      QC Limits (Low-High)
DBF: Dibromofluoromethane       86-118
12D: 1,2-Dichloroethane-d4      76-114
TO8: Toluene-d8                 86-110
BFB: Bromofluorobenzene         86-115

Figure 1: QC Report
Graphing
One of the more useful features of GCMS is its graphical capabilities. GCMS uses
two graph formats, a QC graph and fortification/duplicate graph. The data to be
graphed can be selected by range of dates, surrogate, matrix, instrument or analyst.
The QC graph plots the individual surrogate recoveries, the upper and lower
method and laboratory limits by method/matrix, and the mean of the surrogate
analysis. The duplicate/fortification graph plots the surrogate recoveries for each
duplicate, the relative percent difference of the duplicate recoveries and the method
limits (Figure 2). GCMS plots 110 individual graphs (one plot per surrogate, per
matrix). The lab limits displayed are those calculated from the beginning of the year
to date.
Each graph summarizes the limit information in the upper right hand corner of the
display. If users require more information about a datapoint, they click on the point
of interest using a mouse, and the details of the analysis are displayed in the upper
left hand corner. If users require a hard copy of the graph, they click on the print
icon to send the graph to a local printer.
1-222
-------
[Figure: QC graph from GCMS for surrogate 12D in the AQU (aqueous) matrix, Chemical
Waste Management, Inc., plotting individual percent recoveries against sample number
together with the upper and lower method and laboratory limits and the mean of the
surrogate analyses.]
Figure 2: QC Graph
Benefits
The following examples demonstrate how the GCMS Application makes quality problem-
solving a quick and relatively painless operation:
Approximately 25% of the volatile surrogates in a laboratory were found to be
running bias-high upon analysis. The problem was present on the two instruments
used to run the volatile analyses. When the surrogate data was filtered through
GCMS, a pattern emerged that showed that a large percentage of one analyst's
surrogate recoveries were high. Since this analyst used a different stock solution of
surrogates, the stock was re-analyzed and found to be incorrectly constituted.
Samples of a specific matrix received from one customer were found to have low
surrogate recoveries. Two years before, the same customer had submitted similar
samples. By using GCMS to retrieve the surrogate/matrix data for the older
samples, it was shown that the lab had experienced similar analysis problems.
1-223
-------
A large group of samples in the middle of the month had low surrogate recovery.
By using GCMS it became apparent that all of the samples were analyzed on a
specific instrument. The instrument calibration was examined and found to contain
an incorrect number. Upon correction, all surrogate recoveries were
recalculated and found to be within range.
Conclusion
GCMS gives a laboratory with multiple instruments from different manufacturers and with
many analysts the ability to view all of the quality control data at one time. The ability to
group samples by matrix allows the laboratory to quickly and easily calculate its own
quality control limits for the many different matrices an environmental lab needs to analyze.
The GCMS application gives the laboratory a relatively simple way to satisfy these
method requirements. The GCMS
application also allows the laboratory to not only gather quality control data, but to also put
it to good use. From the many sheets of paper and printouts that the GC/MS laboratory had
previously used to accumulate the quality control data, we now are able to put all of the data
into this application in a minimal amount of time. More importantly, by using GCMS we
are able to retrieve the most useful information from the entered data. In a larger sense, this
application can help gather information from a large group of labs doing GC/MS analysis to
study the effect of matrix on surrogates and to evaluate new methods as they are approved.
1-224
-------
31 COMPUTER-ASSISTED TECHNICAL DATA QUALITY EVALUATION
Samuel Hopper, Technical Staff Member, James Burnetti,
Technical Staff Member, Toxic and Hazardous Materials
Assessment and Control, The MITRE Corporation, 7525 Colshire
Drive, McLean, Virginia 22102; Captain Michael Stock,
Systems Analyst, Human Systems Division, Environmental
Information Management Program Office, Brooks Air Force
Base, Texas 78235-5000
ABSTRACT
Assessing environmental contamination and designing and
implementing remediation strategies are invariably based on
analytical data. Sound decisions for managing environmental
assessments and cleanups can only be made if the analytical
data are of a known and documented quality. Manually
assessing the quality of analytical data generated in an
environmental study is a resource intensive endeavor, and
frequently the results of the assessment are not delivered
to the end users of the data. This paper describes an
automated system developed for the Air Force Human Systems
Division Installation Restoration Program (IRP) Office
(HSD/YAQ) to automatically evaluate the quality of technical
data and to store the data with their respective data
qualifiers in an electronic database. The data processed by
this system are generated as part of the Air Force IRP
efforts and are stored in the IRP Information Management
System (IRPIMS). IRPIMS currently contains sampling and
analysis data for IRP efforts at 76 Air Force installations
which comprise nearly one million analytical results.
The process of automated data validation must be considered
as two distinct but interrelated activities. One of these
activities is the preparation of the electronic data files
by contractors. The second activity is the automatic
evaluation of these data by the Air Force. The Air Force
has developed software tools that support these activities.
Two personal computer tools have been developed to assist
contractors in the preparation of their data submissions.
One of these tools (the Contractor Data Loading Tool)
provides the contractor with a convenient means of manually
entering the technical data that are to be processed. The
other tool (the Contractor Quality Checking Tool) enables
the contractor to evaluate the data integrity of a
submission. The final software tool developed by HSD/YAQ
(the Batch Loading Tool) is used at Brooks Air Force Base
(AFB) to automatically evaluate the submission for
consistency with IRPIMS data requirements and the technical
correctness of the data (e.g., holding times met,
contamination in blanks, logic of well completion records,
1-225
-------
etc.). Data assessment reviews that would require staff
weeks of effort can be done in hours with this automated
system. The technical data processed by this batch loading
tool are stored in electronic form fully qualified for all
the technical evaluations that were automatically done
during batch loading.
This discussion covers the two automated data validation
activities, and the three distinct pieces of software that
have been developed by the Air Force to support them.
INTRODUCTION
The analytical data that are procured by the Air Force as
part of its IRP are required to be both scientifically sound
and legally defensible. Procurement of technical data that
meet these requirements must be based on a structured
procurement strategy that includes clearly defined
performance criteria and a quality assurance review of
received data for conformance with these established
criteria. The manual implementation of such a procurement
strategy is very resource intensive, and a manually
implemented review system is prone to errors. This paper
describes an automation tool that has been developed by the
Air Force to facilitate the implementation of a structured
technical data procurement strategy that is capable of a
100 percent check of all data against the established
performance criteria.
The description of this automated tool will be presented in
three sections. The first section will describe the
electronic data loading tool (i.e., the Batch Loading
Utility) that is resident on the VAX 8600 at Brooks AFB.
Included in this discussion will be a description of the
organizational process that supports the operation of the
Batch Loading Utility. This discussion will be followed by
a description of the various data quality reports that are
produced while the technical data are being processed
by the Batch Loading Utility. Finally, a brief discussion
of the various forms of assistance the Air Force is
providing to its contractors to help them prepare suitable
electronic data submissions will be presented.
THE ELECTRONIC DATA LOADING PROCESS
The technical data that are generated as part of Air Force
IRP studies cross several disciplines, and the electronic
collection of these data must be supported by various
technical and administrative personnel. Additionally, the
electronic data loading process consists of activities that
must be done by contractors and other activities that must
1-226
-------
be done by Air Force personnel. The Air Force's electronic
data loading process is based on a standard operating
procedure that defines the roles and responsibilities of the
various technical and administrative staff and describes the
electronic data loading review and decision making
processes. The illustration in figure 1 schematically
depicts this process. The review and decision making
processes and the roles and responsibilities of the various
staff are described below.
ROLES AND RESPONSIBILITIES
Collection of quality data suitable for making decisions
regarding remediating hazardous waste sites is a team
effort. The roles and responsibilities of the various team
members include the following:
The Remedial Investigation/Feasibility Study Contractor.
The remedial investigation/feasibility study contractor is
responsible for conducting the field investigation,
performing the laboratory analyses, and preparing the
electronic data submission. The electronic data submission
contains a record of quality assurance/quality control
(QA/QC) activities in the field and in the laboratory.
HSD/YAQ has provided tools to assist the contractor in
preparing electronic data submissions. Those tools are
described in a subsequent section of this article.
Contracts Administrative Branch. Electronic data
submissions are treated exactly like any other IRP
deliverable. The data submissions are received by contract
administrators, who record the receipt of the deliverables,
and based on the advice of the technical project managers,
accept the deliverables from the contractor or require the
contractor to resubmit the electronic report with revisions.
Technical Project Managers. The technical project manager
(TPM), with the assistance of the data administrator and
hydrogeology and chemistry consultants, is responsible for
accepting or rejecting the contractor's data submissions.
The data submissions may be rejected for two reasons. If
the data do not conform to the standard IRPIMS data
format, they may be rejected, and the contractor may be
required to reconstruct the electronic data submission. If
the reports generated by the automated QA/QC tool indicate
that the contractor did not meet the data quality objectives
identified for the project, the contractor may be required
to repeat field work and/or laboratory analyses.
1-227
-------
[Figure 1 diagram: batch files arrive on floppy disk; the TPM delivers them to the
data administrator, who performs a virus check and a visual inspection with a word
processor, transfers the files to the VAX computer, and QCs the data via the batch
loading software, producing contamination and precision/accuracy reports on which
the chemistry and hydrogeology consultants advise the TPM. Data passing the QC
checks are held in a temporary table and appended to the production database;
rejected batch files, and data failing the QC checks or the data quality
objectives, are returned through the TPM to the contractor.]
Figure 1. Data Flow Diagram of Batch Loading Process
-------
Hydrogeology and Chemistry Consultants. The hydrogeology
and chemistry consultants provide the TPM with expert advice
in interpreting the output of the data QA/QC modules.
The IRPIMS Data Administrator. The data administrator is
responsible for overseeing the work of the contractor that
operates the batch loading software. The data administrator
is ultimately responsible for maintaining the integrity of
the IRPIMS database. He maintains the lists of valid codes
(e.g., codes for analytes, analytical methods, sampling
methods, etc.) that the batch loading software uses to
evaluate the quality of the electronic data submissions. He
provides the TPM with expert advice regarding the quality of
a contractor's data submission from the database integrity
perspective as opposed to the hydrogeology/chemistry
perspective.
The Operator of the Electronic Data Loading Software. The
operator of the electronic data loading software is
responsible for physically loading the electronic submission
onto the IRPIMS computer, running the data quality checking
software, distributing the resulting reports to the
appropriate HSD/YAQ staff, and providing technical support
in interpreting the output. If the TPM decides to accept
the electronic data submission, the operator then appends
the electronic data submission into the production technical
database.
REVIEW AND DECISION PROCESS
As illustrated in figure 1, the electronic data submission
is first received by the contracts and administrative
branch, where it is logged as received. The submission is
then delivered to the technical project manager responsible
for the IRP project. The TPM delivers it to the IRPIMS data
administrator. An operator designated by the data
administrator checks the data submission for computer
viruses, and then performs a visual inspection of the data
using a word processor. If the submission fails the visual
inspection, it is returned to the TPM, who advises the
contracts administrative branch that the submission has been
rejected and should be resubmitted. Data that pass the
visual inspection are loaded onto the IRPIMS computer and
run through the data quality assessment software. Data that
pass all of the QA/QC checks are loaded into a set of
temporary tables that can be appended to the production
database. Data that fail the QA/QC checks are loaded into a
set of temporary tables that can be edited in house or
converted into standard ASCII files to be returned to the
contractor. The quality assessment software generates a
series of reports that are distributed to the TPM and
chemistry and hydrogeology consultants. The TPM, with the
assistance of the technical consultants and data
administrator, ultimately decides whether to accept the data
submission and whether the reports indicate that the
contractor should be required to repeat a portion of the
field and/or laboratory work.
THE ELECTRONIC DATA LOADING SOFTWARE
The IRPIMS system resides on a Digital Equipment Corporation
VAX computer, running the VMS operating system. The
database is managed by relational database management system
software from the Oracle Corporation. The quality
assessment software is written in the "C" programming
language, with embedded structured query language (SQL)
statements. The electronic data submissions are quite large
(typically 25,000 to 50,000 records), and a single record
may contain 10 or more coded fields. To improve the
performance of the software, many of the checks that verify
coded fields contain valid codes are performed in
memory rather than via lookups to tables on disk. A
throughput of approximately 40,000 records per hour has been
achieved using this algorithm.
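
To make the in-memory strategy concrete, the sketch below shows the
idea in illustrative Python (the production software is written in "C"
with embedded SQL, and the code lists and field names shown here are
hypothetical): each list of valid codes is read once, and every record
is then checked with constant-time set lookups instead of a per-record
query against a lookup table on disk.

# Illustrative sketch only; not the production C/embedded-SQL code.
# The code lists and field names below are hypothetical.

VALID_ANALYTES = {"ALDRIN", "DIELDRIN", "ENDRIN"}   # loaded once from the
VALID_METHODS = {"SW6010", "SW8240", "SW8270"}      # database lookup tables

def code_errors(record):
    """Check coded fields against the in-memory sets (no disk lookup)."""
    errors = []
    if record.get("analyte") not in VALID_ANALYTES:
        errors.append("invalid analyte code")
    if record.get("method") not in VALID_METHODS:
        errors.append("invalid analytical method code")
    return errors

def validate(records):
    """Single pass over a 25,000- to 50,000-record submission."""
    return {i: errs for i, rec in enumerate(records)
            if (errs := code_errors(rec))}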
The software evaluates the data for three different types of
quality issues. First, the software evaluates individual
records from the computer scientist's point of view. The
software verifies that key and required fields are not
blank; that numeric fields contain numbers; that date fields
contain dates; and that coded values are drawn from a list
of valid values. Second, the software evaluates the data
from the chemist's and hydrogeologist's perspective. For
instance, analytical results are evaluated for conformance
with holding times, contamination, precision, and accuracy
conformance criteria, and well completion details are
evaluated for conformance with specifications identified for
the project. Finally, the software evaluates the data from
a database integrity point of view. For instance, each
analytical result must be associated with an extraction
event; each extraction event must be associated with a
sampling event; and each sampling event must be associated
with a known sampling location. Details regarding reports
generated by the electronic data loading software are given
below.
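
As a rough illustration of the database-integrity class of checks, the
following sketch (with hypothetical record structures; the real checks
are implemented in the C/SQL batch loading software) verifies the chain
from each analytical result back to a known sampling location.

# Hypothetical sketch of the referential-integrity checks: every result
# must chain through an extraction event and a sampling event to a
# known sampling location.

def integrity_errors(results, extractions, samplings, locations):
    extraction_ids = {e["id"] for e in extractions}
    sampling_ids = {s["id"] for s in samplings}
    location_ids = {loc["id"] for loc in locations}
    errors = []
    for r in results:
        if r["extraction_id"] not in extraction_ids:
            errors.append((r["id"], "result has no matching extraction event"))
    for e in extractions:
        if e["sampling_id"] not in sampling_ids:
            errors.append((e["id"], "extraction has no matching sampling event"))
    for s in samplings:
        if s["location_id"] not in location_ids:
            errors.append((s["id"], "sampling event at an unknown location"))
    return errors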
DATA QUALITY REPORTS
The Batch Loading Utility does a number of quality checks on
the technical data that it processes, and it produces a very
comprehensive set of evaluation reports. Several examples
of these reports will be presented in the figures of this
section. A complete discussion of all the checks done by
the Batch Loading Utility and presentation of all the
reports that are produced by this utility is beyond the
scope of this discussion.
The quality evaluations of the chemistry data generated to
support environmental decision making are perhaps the most
universally acknowledged. These evaluations typically
address the quality concerns of contamination, precision,
accuracy, and the maintenance of sample integrity during
sampling and analysis. The analysis of environmental
samples is almost always accompanied by the analysis of QC
samples that are used to gauge the success of the sampling
and analysis activities. These QC samples are used to
document the precision and accuracy of the analyses and the
presence of contamination introduced in the sampling or
analysis process. The integrity of analytes in
environmental samples can be compromised if the samples are
held for periods longer than their specified holding times.
A typical IRPIMS Batch Loading Utility holding times report
is shown in figure 2. Every test submitted to IRPIMS is
checked for compliance with regulatory specified holding
times. Tests performed outside the holding times are
presented in the holding times report. Tests that do not
comply with the regulatory holding times can be rejected and
not entered into the database, or they can be entered into
the database with flags indicating that the holding times
have been missed.
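
The holding-time check itself reduces to simple date arithmetic. A
minimal sketch follows, assuming hypothetical holding-time limits (the
actual limits are method- and regulator-specific).

# Minimal sketch of the holding-time check behind the figure 2 report.
# The 14-day and 40-day limits used here are hypothetical examples.
from datetime import date

def holding_time_flags(sampled, extracted, analyzed,
                       max_sample_to_extraction=14,
                       max_extraction_to_analysis=40):
    s_to_e = (extracted - sampled).days
    e_to_a = (analyzed - extracted).days
    flags = []
    if s_to_e < 0 or e_to_a < 0:
        flags.append("date recorded incorrectly")    # cf. the -1 in figure 2
    if s_to_e > max_sample_to_extraction:
        flags.append("extraction holding time exceeded")
    if e_to_a > max_extraction_to_analysis:
        flags.append("analysis holding time exceeded")
    return flags

# The TR-17W row of figure 2: the 16-JAN-89 analysis date precedes the
# 30-DEC-89 extraction date, so the record is flagged.
print(holding_time_flags(date(1989, 12, 21), date(1989, 12, 30),
                         date(1989, 1, 16)))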
The IRPIMS Batch Loading Utility produces a complement of
reports on the many types of QC samples that accompany field
sampling and subsequent analysis. Contamination concerns
are addressed by a series of four reports that are produced
by the Batch Loading Utility. An example of one of these
reports is shown in figure 3. A series of six reports
summarize the QC data that document precision concerns and
another series of six reports summarize the QC data that
document accuracy concerns. These reports list the normal
environmental samples that are associated with these
questionable QC samples. The Batch Loading Utility also
sets contamination, precision, and accuracy flags for all
normal environmental samples as passed, failed, or could not
determine. Finally, two reports summarize the QC samples
that accompanied each analytical test reported in the
electronic submission.
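
The contamination flags can be pictured as a join between the normal
environmental samples and the QC blanks analyzed in the same laboratory
lot. A hedged sketch, with hypothetical field names:

# Sketch of contamination flagging: each normal sample is associated
# with the laboratory blank from its lot and flagged accordingly.

def contamination_flags(samples, blanks):
    """Flag each sample passed / failed / could not determine."""
    blank_lots = {b["lot"] for b in blanks}
    hits = {}                                # lot -> analytes found in blanks
    for b in blanks:
        if b["result"] > 0:
            hits.setdefault(b["lot"], set()).add(b["analyte"])
    flags = {}
    for s in samples:
        if s["lot"] not in blank_lots:
            flags[s["id"]] = "could not determine"   # no blank in the lot
        elif s["analyte"] in hits.get(s["lot"], set()):
            flags[s["id"]] = "failed"
        else:
            flags[s["id"]] = "passed"
    return flags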
Submission Identification
Base: XXXXX XXX XXX X xxxxx
Contract: 85-4533    Delivery Order: 06
Contractor: xxxxxxxxxxxxxxxxxxxxxxx
Submission Date: 05-MAR-91    Analysis Date: 26-MAR-91

Method: SW8270 (GC/MS for Semivolatile Organics: Capillary Column Technique)
Matrix: S
Standards: 40 47

                                                               HOLDING TIMES (DAYS)
AFIID  LOG ID  MATRIX  SBD   SED   SA  SAMPLING   EXTRACTION  ANALYSIS   SAMPLE TO   EXTRACTION   SAMPLE TO
                                                                         EXTRACTION  TO ANALYSIS  ANALYSIS
XXXXX  TR-10F  S       0.00  0.00  N   21-DEC-89  19-JAN-90   22-JAN-90  28          3            31
XXXXX  TR-16F  S       0.00  0.00  N   21-DEC-89  19-JAN-90   22-JAN-90  28          3            31
XXXXX  TR-17F  S       0.00  0.00  D   21-DEC-89  19-JAN-90   22-JAN-90  28          3            31
XXXXX  TR-17U  S       0.00  0.00  D   21-DEC-89  30-DEC-89   16-JAN-90  9           16           25
XXXXX  TR-18F  S       0.00  0.00  N   21-DEC-89  19-JAN-90   22-JAN-90  28          3            31
XXXXX  TR-5F   S       0.00  0.00  N   21-DEC-89  19-JAN-90   22-JAN-90  28          3            31
XXXXX  TR-5U   S       0.00  0.00  N   21-DEC-89  30-DEC-89   16-JAN-90  9           16           25
XXXXX  TR-18U  S       0.00  0.00  N   21-DEC-89  30-DEC-89   16-JAN-90  9           16           25
XXXXX  TR-17W  S       0.00  0.00  N   21-DEC-89  30-DEC-89   16-JAN-89  9           -1           -1
XXXXX  TR-17F  S       0.00  0.00  N   21-DEC-89  19-JAN-90   22-JAN-90  28          3            31
XXXXX  TR-16U  S       0.00  0.00  N   21-DEC-89  30-DEC-89   16-JAN-90  9           16           25
XXXXX  TR-10U  S       0.00  0.00  N   21-DEC-89  30-DEC-89   16-JAN-90  9           16           25

(The -1 in row 9 indicates the analysis date is recorded incorrectly.)

Figure 2. Installation Restoration Program Information
Management System Batch Loading Utility Holding Times Report
IRPIMS Batch File Loader
Chemistry Report
Part 2 - Contamination Summary
Section 0 - Normal Environmental Samples Associated with Contaminated Lab Blanks

Submission Identification
Base: BLTST (batch loading test ID)
Contract: 99-XX99    Delivery Order: 01
Contractor: N/A ERROR-N/A
Submission Date: 22-MAR-91    Report Date: 01-APR-91
Analysis Date: 09-NOV-88
Lab Lot Ctl #: 8809-775
Laboratory: XXXXX

Analyte                                   Location   Laboratory   Detection  Actual    Units
                                                     Sample ID    Limit      Result
ALDRIN                                    00-110-P   8809775001   0.0500     0.0000    UG/L
ALPHA BHC (ALPHA HEXACHLOROCYCLOHEXANE)   00-110-P   8809775001   0.0500     0.0000    UG/L
BETA BHC (BETA HEXACHLOROCYCLOHEXANE)     00-110-P   8809775001   0.0500     0.0000    UG/L
DELTA BHC (DELTA HEXACHLOROCYCLOHEXANE)   00-110-P   8809775001   0.0500     0.0000    UG/L
GAMMA BHC (LINDANE)                       00-110-P   8809775001   0.0500     0.0000    UG/L
ALPHA-CHLORDANE                           00-110-P   8809775001   0.5000     0.0000    UG/L
GAMMA-CHLORDANE                           00-110-P   8809775001   0.5000     0.0000    UG/L
DIBUTYLCHLORENDATE                        00-110-P   8809775001   0.0000     108.0000  UG/L
DIBUTYLCHLORENDATE                        LABQC      8809775PBLK  0.0000     101.0000  UG/L
p,p'-DDD                                  00-110-P   8809775001   0.1000     0.0000    UG/L
p,p'-DDE                                  00-110-P   8809775001   0.1000     0.0000    UG/L
p,p'-DDT                                  00-110-P   8809775001   0.1000     0.0000    UG/L
DIELDRIN                                  00-110-P   8809775001   0.1000     0.0000    UG/L
ALPHA ENDOSULFAN                          00-110-P   8809775001   0.0500     0.0000    UG/L
BETA ENDOSULFAN                           00-110-P   8809775001   0.1000     0.0000    UG/L
ENDOSULFAN SULFATE                        00-110-P   8809775001   0.1000     0.0000    UG/L
ENDRIN                                    00-110-P   8809775001   0.1000     0.0000    UG/L
ENDRIN KETONE                             00-110-P   8809775001   0.1000     0.0000    UG/L
HEPTACHLOR EPOXIDE                        00-110-P   8809775001   0.0500     0.0000    UG/L
HEPTACHLOR                                00-110-P   8809775001   0.0500     0.0000    UG/L

Figure 3. Installation Restoration Program Information
Management System Contamination Summary Report
Another set of technical quality concerns addressed by the
IRPIMS Batch Loading Utility is the construction and
maintenance details of groundwater monitoring wells.
A schematic diagram of a typical monitoring well is given in
figure 4. The Batch Loading Utility report shown in
figure 5 identified two types of problems in the
construction records for monitoring wells in a particular
data submission. The first error identifies wells whose
recorded total casing depth is deeper than the borehole the
driller reported. Obviously, this is a recordkeeping error
that must be resolved before these well completion records
can be accepted. The other error identified could be much
more serious: it indicates that the filter pack is only
6 inches long while the screened interval of the well is
10.00 feet. As the schematic diagram of the monitoring well
in figure 4 shows, the filter pack should extend the length
of the screen to ensure proper production from the well.
This error could be a simple recordkeeping problem or a more
serious construction error in the installation of these
wells.
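
Both well-completion checks are simple comparisons between fields of
the same record. A sketch follows, using the column names from
figure 5 (the record layout is otherwise hypothetical).

# Sketch of the two well-completion checks reported in figure 5.

def well_completion_problems(well):
    problems = []
    # Error: recorded casing depth deeper than the drilled borehole.
    if well["TOTDEPTH"] > well["DEPTH"]:
        problems.append("error: TOTDEPTH > DEPTH (recordkeeping)")
    # Warning: screened interval longer than the filter pack around it.
    if well["SCRLENGTH"] > well["FPL"]:
        problems.append("warning: SCRLENGTH > FPL (possible construction defect)")
    return problems

# Well BG-1 from figure 5 trips both checks:
print(well_completion_problems(
    {"TOTDEPTH": 20.00, "DEPTH": 19.0, "SCRLENGTH": 10.00, "FPL": 0.50}))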
In addition to the technical quality evaluations, the IRPIMS
Batch Loading Utility checks the submission to ensure that
all required information was submitted and that it makes
good logical sense. Figure 6 cites analytical result
records that were submitted without the analyte identified
(one instance), without the analytical result stated (three
instances), and without a laboratory detection limit
reported (244 instances). Figure 7 reports nine sample extraction
dates reported before the sample was taken, 34 analysis
dates reported before the sample was taken, and 15 analysis
dates that preceded the date the sample was extracted.
Three instances are also identified on this report where the
units "MG/KG" were used for water samples.
AIR FORCE ASSISTANCE TO CONTRACTORS
The success of electronic data loading depends on the
ability of contractors to prepare the electronic
submissions. The Air Force offers a variety of assistance
to its contractors who are preparing these files. Two
personal computer "tools" are provided to assist contractors
in the preparation and preliminary evaluation of their
submissions. The Contractor Data Loading Tool provides IRP
contractors with a convenient means of manually entering the
various technical data collected by IRPIMS. This tool
performs many integrity checks, provides a complete list of
IRPIMS acceptable codes online, and offers technology that
reduces the number of keystrokes required to make the
submissions. The Contractor QC Tool provides the
Figure 4. Schematic Diagram of an Environmental
Monitoring Well
File BCHUCI
Detailed Listing of Errors and Warnings

Submission Identification
Base: XXXXX XXX XXX X xxxxx
Contract: 85-4533    Delivery Order: 06
Contractor: xxxxxxxxxxxxxxxxxxxxxxxxx
Submission Date: 05-MAR-91    Analysis Date: 26-MAR-91

Error/Warning  Description of Problem with Data            Number of Occurrences

Error          Total casing depth exceeds borehole depth (TOTDEPTH > DEPTH)
               for the following records:

               AFIID  LOCXREF   TOTDEPTH  DEPTH
               XXXXX  BG-1      20.00     19.0
               XXXXX  BG-8      20.40     20.0
               XXXXX  C-12      25.10     25.0
               XXXXX  C-14      25.50     25.0
               XXXXX  C-8       20.30     20.0
               XXXXX  C-9       22.00     20.0
               XXXXX  OG-5      24.20     24.0
               XXXXX  MU-2UWLB  23.70     23.5
               XXXXX  MW-4UULB  25.60     25.5
               XXXXX  HW-8HF    12.60     12.5
               XXXXX  MW-8UWLB  25.30     25.0

Warning        Screen length exceeds filter pack (SCRLENGTH > FPL) for the
               following records:

               AFIID  LOCXREF  SCRLENGTH  FPL
               XXXXX  BG-1     10.00      0.50
               XXXXX  BG-2     9.90       0.50
               XXXXX  BG-3     10.00      0.50
               XXXXX  BG-4     10.00      0.50
               XXXXX  BG-5     10.00      0.50
               XXXXX  BG-6     10.00      0.50
               XXXXX  BG-7     10.00      0.50
               XXXXX  BG-8     10.00      0.50
               XXXXX  C-10     10.00      0.50
               XXXXX  C-11     10.00      0.50
               XXXXX  C-12     10.00      0.50
               XXXXX  C-14     10.00      0.50
               XXXXX  C-8      10.00      0.50

Figure 5. Installation Restoration Program Information
Management System Batch Loading Utility Well Completion
Information Error Report
File BCHRES
Detailed Listing of Errors and Warnings

Submission Identification
Base: XXXXX XXX XXX X xxxxx
Contract: 85-4533    Delivery Order: 06
Contractor: xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Submission Date: 05-MAR-91    Analysis Date: 26-MAR-91

Error/Warning  Description of Problem with Data                Number of Occurrences

Error          Key field PARAMETER LABEL (PARLABEL) was left blank            1
Error          Required field PARAMETER VALUE (PARVAL) was left blank         3
Error          Required field LAB DETECTION LIMIT (LABDL) was left blank      244
Error          Required field EXPECTED VALUE (EXPECTED) was left blank        609

Warning        Contamination in the laboratory or field blank samples was
               detected for the following tests/analytes:

               ANMCODE
               E418.1     2
               SW6010     11
               SW7191     4
               SW8240     35
               SW8270     13

Figure 6. Installation Restoration Program Information
Management System Batch Loading Utility Analytical
Results Error Report
File BCHTEST
Detailed Listing of Errors and Warnings

Submission Identification
Base: XXXXX XXX XXX X xxxxx
Contract: 85-4533    Delivery Order: 06
Contractor: XXX XXXXXXXXXXXXXXXXXX
Submission Date: 05-MAR-91    Analysis Date: 26-MAR-91

Error/Warning  Description of Problem with Data                Number of Occurrences

Error          The extraction date must be on or after sampling date          9
Error          The analysis date must be on or after sampling date            34
Error          The analysis date must be on or after extraction date          15

Error          The following soil or tissue records did not have a value in
               the field BASIS:

               AFIID  LOCXREF   LOGDATE    SBD   SED   MTX  SAC
               XXXXX  MU-7UULB  16-NOV-89  5.00  5.00  S    N

Error          The following UNITS OF MEASURE (UNITMEAS) are not applicable
               to a water sample:

               UNITMEAS
               MG/KG

Figure 7. Installation Restoration Program Information
Management System Batch Loading Utility Sample
Preparation Error Report
contractors with an onsite means to evaluate the format of
their electronic submissions. In addition to these tools,
the Air Force also provides a guidance document, training,
and telephone support of the contractor data loading effort.
SUMMARY
The Air Force has implemented a comprehensive technical data
evaluation process that includes a high level of automation
and results in the technical data being stored in an easily
retrievable electronic format. The data processed by the
Batch Loading Utility are reviewed for a number of technical
quality concerns, and the results of this review are stored
with each record in the database. The Air Force anticipates
that these data will eventually be supplied to
various graphical (including geographical information
systems) and modeling tools that will facilitate IRP
remediations.
ACKNOWLEDGEMENTS
The authors would like to acknowledge Mr. Stephen Shieldes
of OAO Corporation, the implementation programmer who coded
the IRPIMS Batch Loading Utility.
SAMPLING/FIELD
Preparation and Stabilization of Volatile Organic Constituents of Water Samples by Off-Line Purge
and Trap
by
Elizabeth Woolfenden, Perkin-Elmer Limited, Seer Green, Buckinghamshire, England, and
James Ryan, The Perkin-Elmer Corporation, 761 Main Avenue, Norwalk, CT 06859-0219
Purge-and-trap gas chromatographic analysis has a 20 year history of successful analysis of volatile
organics in water. The technique is quantitative, sensitive, and readily automated. Purge-and-trap is
used in all major EPA monitoring programs: RCRA, CERCLA, NPDES industrial wastewater, and drinking
water.
A conventional system is integrated, i.e. the purge vessel and sorbent trap are connected directly to the
gas chromatograph in a laboratory environment. Water samples must be collected in the field, chemically
stabilized, atmospherically sealed, and shipped to a laboratory while chilled. When received at the lab,
they must be stored at 4°C until analyzed, and, at least for CERCLA, these samples must be analyzed
within 10 days of receipt. While this existing analytical system works well, this paper will demonstrate that
it is not necessary for the purging to be done in proximity to the chromatographic analysis.
Purge-and-trap systems which incorporate an integral (on-line) thermal desorption device have been
found to suffer from several limitations. These limitations can adversely affect the practical performance of
such on-line systems. Among the limitations are:
- RISK OF CARRYOVER BETWEEN SAMPLES. This can occur when a particularly high
concentration sample is analyzed.
- RESTRICTED COMPATIBILITY WITH HIGH RESOLUTION CAPILLARY GC + MS DETECTION.
- RESTRICTED STORAGE TIME FOR WATER SAMPLES.
- INCREASED CHANCE OF SAMPLE CONTAMINATION. This can happen because of stabilizers
added to the sample to prevent haloform formation, or from atmospheric contamination of the
aqueous sample from improper sample seals.
One solution to overcoming these potential problems is to separate the purge-and-trap volatile chemical
collection and concentration from the desorption-chromatographic analysis. In other words, perform the
chromatography off-line from the sample concentration.
Using portable traps in combination with an automatic off-line purge unit enables water to be sampled
using conventional EPA purging methodology at field sampling stations. Once sampling is completed, the
tubes may be capped and transferred for thermal desorption GC analysis at a central laboratory facility.
This approach immediately offers several advantages over conventional on-line methodology:
- NO RISK OF CARRYOVER BETWEEN SAMPLES.
- GREATLY EXTENDED MAXIMUM SAMPLE STORAGE TIMES.
- DISTRIBUTED FIELD SAMPLING COMBINED WITH CENTRALIZED LABORATORY ANALYSES.
Commercial automatic thermal desorption instruments allow multiple sample tubes or traps to be
analyzed without operator attendance. These traps, in the form of sampling tubes, are compatible with
the method detection limits specified by EPA 500 and 600 series methods, as well as those purge-and-trap
methods in the RCRA SW-846 analytical method manual. For long term storage, the sorbent tubes can be
capped with brass Swagelok™ caps and one-piece PTFE ferrules. Such tubes, spiked with benzene,
toluene, and m-xylene, are available as certified standards (Ref. 1), and have been shown to be stable for
up to two years of storage time.
In addition, data reported by the Netherlands Organization for Applied Scientific Research shows that
chlorinated hydrocarbons on Tenax™ are stable for over 2 years. Multiple analyses for trichloroethylene
and tetrachloroethylene carried out over a two year period had a reproducibility of less than 10% RSD at
storage temperatures ranging from 4° to 40°C (Ref. 4).
STABILITY OF VOLATILE CHLOROALKANES ON TENAX
Storage       Component             Initial Mean Charge        24 Month Mean Recovery
Temp. (°C)                          ng      RSD%    #Rep       ng      %Rec.
4° Trichloroethylene 840 2.0% 15 856 102%
Tetrachloroethylene 806 1.9% 15 781 97%
20° Trichloroethylene 840 2.0% 15 816 97%
Tetrachloroethylene 806 1.9% 15 756 94%
40° Trichloroethylene 840 2.0% 15 842 100%
Tetrachloroethylene 806 1.9% 15 765 95%
Ref: TNO Division of Technology for Society: Netherlands Organization for Applied Scientific
Research, Report No. R90/268.
The ATD-400 also overcomes another limitation of conventional procedures, i.e. incompatibility with high
resolution capillary GC, by using an optimized two-stage thermal desorption process.
REFERENCES
1. Certified Standard Material Reference No. CRM112. Available from the European Community
Bureau of Reference:
Community Bureau of Reference (BCR)
Rue de la Loi 200
B-1049 Brussels
Belgium
2. How Efficient are Capillary Cold Traps?, J. W. Graydon and K. Grob, Chrom. 15,327,1983, pp.
265-269.
3. Modified Analytical Technique for Determination of Trace Organics in Water Using Dynamic
Headspace and Gas Chromatography-Mass Spectrometry. Bianchi, Varney, and Phillips, J. of
Chrom. 467 (1989), pp 111-128.
4. Stability of Chlorinated Hydrocarbons on Tenax, F. Lindqvist and H. Bakkeren, Netherlands
Organization for Applied Scientific Research, TNO Division of Technology for Society, Report No.
R90/268.
A REMOTE WATER SAMPLER USING SOLID PHASE EXTRACTION DISKS
H. A. Moye, Pesticide Research Laboratory, Food Science and Human Nutrition
Department, University of Florida, Gainesville, FL, 32611 and W. B. Moore,
Pesticides and Data Review Section, Bureau of Drinking and Groundwater Resources,
Florida Department of Environmental Regulation, Tallahassee, FL, 32399-2400.
ABSTRACT
In order to reduce costs and inconveniences associated with the sampling,
preservation, storage, shipping and analysis of large volumes of water for the
analysis of trace levels of pesticides, we have designed and developed a remote
water sampler employing porous extraction disks. Commercially available disks
were studied in the laboratory for their ability to extract the pesticides
alachlor, butachlor, ametryn, prometryn, and terbutryn from both laboratory and
groundwaters. Extraction efficiencies were determined as functions of pesticide
type, concentration, flow rate through the disks, disk pretreatment, storage
temperature, and storage interval. Results were encouraging enough to pursue
design and construction of the remote sampler, made possible by the extensive
modification of an existing commercially available water sampler. Evaluation of
the resulting sampler using extraction disks showed that it was reliable and
accurate enough to be used in the field.
INTRODUCTION
One of the larger expenses in any ground or surface water monitoring project
entails proper collection, preservation, storage, and transportation of the water
samples to be studied. This is particularly true when transient events, such as
a moving groundwater plume or a single surface contamination event, occur. Being
able to effectively sample in a continuous manner over a period of hours, days,
or even weeks would allow for the observation of such an event, and prevent it
from escaping unnoticed.
Most approved USEPA numbered methods for the analysis of trace organics in water
(500 and 600 series), including pesticides, require the collection of large
volumes of water, typically one liter. Transporting multiple one liter
quantities, when multiple analytical methods are to be employed, or when sampling
over regular intervals is to be undertaken, requires expensive transportation and
storage techniques. And when they arrive at the laboratory, they are typically
extracted with large quantities of expensive and hazardous solvents, such as
methylene chloride. Some methods allow the water to be extracted on-site with
solid phase extractors (SPEs; 1-3), with subsequent transportation back to the
laboratory for elution and chemical analysis, usually by gas or high performance
liquid chromatography. These SPE devices are small cylindrical cartridges packed
with intermediate size silica particles, usually chemically coated with an alkyl
material, such as octyl groups. They adsorb organics from water when it is drawn
through by pressure or vacuum, and then release the organics to a small amount
of solvent that is passed through the cartridge.
While on-site extraction via the SPEs obviates several problems, there still is
the requirement that personnel be transported, housed, maintained, and paid if
sampling is to be done over time. In order to eliminate the requirement for
collecting, storing, and transporting large volumes of water, while at the same
time minimizing costs for the presence of on-site personnel, we conceived the
idea of creating a remotely located "dosimeter", which could be placed at a
remote surface or well water site, programmed to collect one liter samples at
various intervals, perform the sampling unattended, process the sample water
through a solid phase extractor, direct the processed water to waste, shut itself
off and await the return of personnel. It was our initial intention to employ SPE
cartridges that are commercially available. However, as the engineering
of the device began, it became readily apparent that switching a stream of water
into and away from an array of SPEs would be very difficult, if not impossible
to achieve, given the pressures required to achieve reasonable flow rates through
the SPEs. Attention turned to the more recently developed extraction disks,
developed by the 3M Company (4,5). These disks have proved their worth for such
an application, as we now report here.
We evaluated the C8 disks for the extraction of the pesticides ametryn,
prometryn, terbutryn, alachlor, and bromacil from laboratory and three typical
Florida groundwaters; some evaluations were done in the laboratory in filter
holders, and some within the "dosimeter".
DISK EXTRACTION EFFICIENCIES
EXPERIMENTAL
All studies were performed on the 4.7 cm C8 Empore extraction disks (Analytichem,
Harbor City, CA). They were held by a glass Millipore model KGS-47TF filter
holder, attached to a 250 mL graduated funnel that was clamped to hold the disk.
The filter holder was placed into a 1 liter suction flask that was attached by
Tygon tubing to a Millipore vacuum pump, model 5KH10GGR28S. This vacuum pump was
equipped with a needle valve for adjusting the vacuum present in the flask.
Pesticide analyses were conducted on a Hewlett-Packard gas chromatograph, model
5890, equipped with a nitrogen-phosphorus (NP) detector, and an electron capture
detector. The instrument was capable of automatic sample injection and peak
integration. Separation of the pesticides under study was accomplished on a DB-5
bonded fused silica column, 30 meters long x 0.25 mm ID, with a film thickness
of 0.25 μm (J and W Scientific, Folsom, CA). Injections were made, 2 μL, in the
splitless mode with a 45 second delay. Carrier gas was helium at 30 cm/sec
linear velocity. Injector temperature, 250°C; NP detector temperature, 300°C;
EC detector temperature, 300°C; oven temperature programmed from 60°C to 300°C at
4°C/min.
Pesticides studied were ametryn, prometryn, terbutryn, alachlor, and bromacil,
all obtained in 99.9% pure analytical standard form from the EPA Pesticide
Repository (Research Triangle Park, NC).
All solvents were HPLC grade (Fisher Scientific or Burdick and Jackson). Wetting
agents (CETAB, polyethylene glycol, octanol, etc.) were ACS reagent grade from
Fisher.
Because reverse phase disks, the C18 and the C8, are hydrophobic by nature
and require pretreatment with a water miscible solvent to activate the
reverse phase material and to allow water flow, various wetting agents, neat
and dissolved in water, were examined for their ability to wet the Empore disks
well enough to allow water flow by gravity. Efforts were made to select water
miscible alcohols and surfactants that would be involatile, so that pretreatment
could be done in the laboratory before assembly of the disks. The disks were
soaked in the following agents for 1 hour and 30 minutes; they were then removed
and placed in the Millipore filter holder, 100 mL of HPLC grade water was added
to the funnel, and the water was allowed to drain by gravity, without vacuum from
the vacuum pump.
Three procedures were studied for their ability to extract the pesticides
ametryn, prometryn, and terbutryn from water. They were designed with an eye
toward their implementation in the dosimeter, since it was known that the
dosimeter would place some constraints on how disk pretreatment could be
accomplished. We wanted to determine what steps in the Analytichem procedure
could be either reduced or eliminated, while still giving good extraction
efficiencies, so that the dosimeter design could be kept as simple as possible.
The selected extraction method was then validated on other pesticides (alachlor
and butachlor) during additional extraction efficiency studies at high water flow
and during volatility studies, where losses from disks used to extract the
pesticides from water were determined (see subsequent section).
These three extraction procedures were:
(1) Modified Analytichem (created from the procedure recommended by Analytichem
in Appendix A by omitting the 5 mL addition of methanol to the water, and by
presoaking the Empore disks in ethyl acetate, rather than passing it through the
disks once they were placed in the holder).
-Presoak disk in 5 mL of ethyl acetate for 6 hours; discard solvent.
-Place disk in Millipore filter apparatus.
-Apply vacuum for 5 minutes.
-Add 20 mL of methanol, apply vacuum; do not let disk go dry.
-Repeat above step with 20 mL of HPLC grade water.
-Begin adding water sample.
-After sample is processed, pull air through disk for 5-30 minutes.
-Place test tube at tip of filter base inside flask.
-Add 10 mL ethyl acetate, pull half of the solvent through, let stand for
approximately 1 minute, then pull the rest through.
-Repeat with a second 10 mL aliquot.
-Sample is ready for analysis or may be concentrated to a smaller volume by dry
nitrogen at ambient temperature.
(2) PRL Procedure #1.
-Presoak disk in 5 mL of ethyl acetate for 6 hours; discard solvent.
-Place disk in Millipore filter apparatus.
-Wash with methanol:water (40 mL of 1:1).
-Apply vacuum.
-Add sample.
-After sample is processed pull air through disk for 5-30 minutes to remove any
water remaining on disk.
-Elute sample as per modified Analytichem procedure above.
(3) PRL Procedure #2. (As for PRL #1, except 20 mL methanol used rather
than methanol:water).
Since the Millipore vacuum pumps used in the laboratory are capable of pulling
water through the Empore disks at a higher rate than can be done in the
dosimeter, and since high flow rates would be desirable when extracting 1 liter
samples, a study was done to determine what effect maximum flow rates would have
on extraction efficiency. Consequently, we studied the two flows of 20 mL/min
and 80 mL/min for several pesticides in HPLC grade water at several
concentrations.
The ability of the Empore disks to function at such high flow rates (80 mL/min)
allowed for the collection of 1 liter samples, provided that neither plugging nor
"breakthrough" resulted. It was conceivable to us that "breakthrough"
might occur for these large volumes. Since the Empore disks are simply very
short and very permeable reverse phase HPLC columns, and even though 100 % water
causes organic compounds to move only slowly down these columns, there is some
volume of water that would cause the compounds to "breakthrough" and exit the
disks. Whether or not this would occur within the 1 liter of water being sampled
was part of this study, as determined by measuring percent recoveries.
Consequently, replicate one liter samples were fortified at concentrations of
0.1, 1.0, and 10.0 ppb with the pesticides alachlor, bromacil, ametryn,
prometryn, and terbutryn, and the PRL #2 procedure was used for extraction and
analysis. All determinations were done in triplicate with duplicate sample
injections on the gas chromatograph. Unfortunately, some of the 0.1 μg/L samples
were not quantifiable due to method sensitivity limitations (alachlor and
bromacil).
In order for the Empore extraction disk concept to be functional in the dosimeter
in the field for extended periods approaching 30 days, it was important to
determine whether after extraction, the pesticides would remain on the disks
without unusual precautions so that they could be transported to the laboratory
for analysis. It was also important to consider that the disks could conceivably
experience elevated temperatures as a result of a "greenhouse" effect as they sat
in the dosimeter.
Consequently, we studied all the pesticides at 15 and 30 days storage at 25°C
(nominal ambient), and also studied alachlor, bromacil, and prometryn at 40°C for
those intervals. We also studied alachlor and bromacil for 1 day and 10 day
intervals. HPLC grade water was fortified at the 10.0 μg/L level, exactly 100
mL was passed through fresh Empore extraction disks at 80 mL/min, and the disks
subjected to the #2 PRL procedure for extraction and analysis. All samples were
prepared and analyzed in triplicate with duplicate injections on the gas
chromatograph.
RESULTS AND DISCUSSION
How the disks responded to pretreatment with alcohols and surfactants in terms
of water flow is shown in Table 1.
Table 1: Flow Through Alcohol- and Surfactant-Treated Disks, Gravity.
Wetting agent
Observation
none
MeOH
80 % MeOH
50 % MeOH
30 % MeOH
Octanol
80 % Octanol
50 % Octanol
30 % Octanol
Ethylene glycol
80 % ethylene glycol
50 % ethylene glycol
30 % ethylene glycol
Acetonitrile
80 % acetonitrile
50 % acetonitrile
30 % acetonitrile
Polyethylene glycol 8000 (0.5 g/100 mL)
CETAB (0.04 g/100 mL)
Tetrahydrofuran
80 % tetrahydrofuran
50 % tetrahydrofuran
30 % tetrahydrofuran
No flow observed
0.76 mL/min
0.7
No flow observed
Disk floated; no wetting
No flow observed
Disk floated; no wetting
No flow observed
No wetting observed
0.6 mL/min
0.6 mL/min
0.5 mL/min
No flow observed
Disk floated; no wetting
No flow observed
0.6 mL/min
0.7 mL/min
0.7 mL/min
0.6 mL/min
These data showed that of the alcohols and surfactants studied, only
methanol, acetonitrile, and tetrahydrofuran would wet the disks adequately for
water flow. We then studied the effect of these solvents on water flow through
prewet disks under vacuum. Empore disks were soaked in each of the three
solvents that gave flow under gravity, and water was passed through at a vacuum
of either 5 or 10 pounds per square inch (PSI). Table 2 summarizes these
results.
Table 2: Flow Through Solvent-Treated Disks, Vacuum.

Solvent                   Vacuum (PSI)    Flow (mL/min)
Tetrahydrofuran (THF)     5               54
                          10              88
Methanol                  5               50
                          10              79
Acetonitrile              5               59
                          10              74
Even though both THF and acetonitrile gave slightly greater flows than did
methanol, methanol was chosen for further studies, so that our studies could be
compared with those already done and with those yet to follow by others, who
probably would use the methanol recommended by Analytichent. Since we felt that
a 10 PSI vacuum could be achieved in the dosimeter using the pumps selected for
it, a flow of 79 mL/min would be adequate. This flow would certainly permit 1
liter samples to be collected within 1 hour, a goal we set, if acceptable
recoveries could be achieved at this flow. These studies showed that some means
of pretreating the disks within the dosimeter on site, before each water sample
was extracted, would have to be provided, since 100 mL subsamples of water
would not pass through an unactivated Empore disk. This was accomplished
within the dosimeter by a separate delivery system for methanol.
How well the pesticides were recovered from the Empore disks using the three
pretreatment and extraction procedures is shown in Table 3.
Table 3: Recovery of the Pesticides Ametryn, Prometryn, and Terbutryn from 100
mL of HPLC Grade Water Using Various Procedures and the Empore Disks.

Pesticide    Procedure       Conc. (ppb)    % Rec.
Ametryn      Modif. Analy.   105            117
Ametryn      PRL #1          105            68
Ametryn      PRL #2          105            106
Prometryn    Modif. Analy.   110            65
Prometryn    PRL #1          110            83
Prometryn    PRL #2          110            95
Terbutryn    Modif. Analy.   110            79
Terbutryn    PRL #1          110            75
Terbutryn    PRL #2          110            88

Number of determinationsa: 1, 3, 1, 11b, 2, 1. %RSD: 6.8, 12, 9.3, 7.2, 6.1.

a Number of individual determinations; each determination injected in duplicate
on the gas chromatograph. All flows through the filter were at approximately 18
mL per min.
b Eleven (11) of these extracted with the same Empore disk.
c Extracted with the same Empore disk.
These data show little if any consistent difference among the three
extraction procedures used. Consequently, since the PRL #2 procedure is
simpler than the other two, it was used throughout the remaining studies, unless
otherwise stated. Acceptable reproducibility exists for replicate determinations,
although we would like to improve on this as the studies continue. The data also
show that the disks can be reused many times without suffering fracturing, erosion,
or loss of the silica particles, all of which would lead to diminished
recoveries.
Table 4 summarizes these results for the effect of flow rate on recoveries from
the disks.
Table 4: Effect of Flow Rate Through Empore Disks on Extraction Efficiencies.a

Pesticide      % Recovery
               20 mL/min    80 mL/min
Alachlor       89           96
Bromacil       36           46
Prometryn      93           92

a Average of triplicate determinations for 10 μg/L samples; average RSD less than
6%.
These results show that extremely high flow rates can be handled with no
loss of extraction efficiency. This ability to rapidly process samples in the
laboratory allowed us to work with 1 liter samples, giving an improvement in
method limit of detection, and would also allow for high sample throughput if
similar vacuum could be established within the dosimeter.
Table 5: Extraction Efficiencies of Pesticides from Empore Disks Using High
Water Sample Flows (80 mL/min).a

Pesticide      Concentration (μg/L)      Average % Recovery
Alachlor 10.0 89
1.0 92
0.1 --b
Bromacil 10.0 36
1.0 58
0.1 --b
Ametryn 10.0 91
1.0 69
0.1 89
Prometryn 10.0 93
1.0 91
0.1 76
Terbutryn 10.0 83
1.0 79
0.1 35
a Averages of triplicate determinations, with duplicate injections for each on
the gas chromatograph. Average % RSD for triplicate determinations less than 8%.
b Unable to quantitate due to sensitivity problems.
From the above data, it is apparent that only bromacil gives low recoveries
at all concentrations. There is also an apparent recovery problem with terbutryn
but only at the lower concentrations. Subsequent studies on pesticide losses due
to volatility (see Table 6 below) indicated that low recoveries of bromacil may
be due to factors other than that of poor adsorption by the Empore disks.
Table 6: Pesticide Losses from Empore Extraction Disks with Storage at Extended
Intervals and at Elevated Temperatures.a

Pesticide    Storage Interval    Temperature    Average % Recovery
Alachlor     1 day               25°C           99
             10 days             25°C           87
             15 days             40°C           91
             15 days             25°C           98
             30 days             25°C           87
Bromacil     1 day               25°C           95
             10 days             25°C           97
             15 days             40°C           64
             15 days             25°C           83
             30 days             25°C           78
Prometryn    15 days             40°C           43
             15 days             25°C           30
Ametryn      15 days             25°C           30
Terbutryn    15 days             25°C           4

a Average of triplicate determinations, 10 μg/L; duplicate injections on the gas
chromatograph. Average % RSD less than 10%.
From these data it is obvious that ametryn, prometryn, and terbutryn cannot
withstand 15 day storage intervals at 25°C on the Empore disks. Surprisingly,
both alachlor and bromacil are recoverable even up to 30 days with average
percent recoveries greater than 78%. Even more surprising is that bromacil,
which had shown very poor recoveries upon immediate extraction of the disks with
ethyl acetate (see Table 5), now gives excellent recoveries after sitting for
days after extraction of the water samples. Apparently, aging the disk
stabilizes bromacil or makes it more amenable to ethyl acetate extraction.
Whether this phenomenon would occur for other pesticides that were not studied
here is yet to be determined, and merits investigation.
Raising the storage temperature from 25°C to 40°C caused little additional loss
of pesticide for the three studied at the higher temperature. This may be a very
important aspect for long term use of the dosimeter when temperatures may become
very high in the field.
Also of potential value is the fact that during these long term studies, all
Empore disks were placed in the open in fume hoods, susceptible to an array of
microbes attached to air particles which would "fall out" on them. Despite this,
none of the pesticides were significantly degraded by exposure to the atmosphere.
This is self evident from a "prima facie" aspect when the mechanism for retention
of pesticides on the 8 μ irregularly shaped silica Empore particle is considered.
It is well recognized that retention occurs at the alkyl (C8) bristles, which are
deep inside the pores of the particle. These pores are only 60 Angstroms in
diameter; expressed in microns (μ), that amounts to 6 x 10^-3 μ. However, microbial
cells are larger than 2 x 10^-1 μ in diameter, making them much too large to
penetrate into the pores of the silica particles of the Empore disks. Therefore,
microbial degradation of pesticides adsorbed by Empore disks is, from first
principles, impossible. For this reason, difficulties we faced in getting water
flow through the 0.2 μ microbial filters are of only academic concern.
EXTRACTION DISK WATER SAMPLER DESIGN, CONSTRUCTION, AND VALIDATION
Introduction
Several goals were established for the design of the
dosimeter. The device needed to be battery powered and capable of operating
unattended over extended intervals of time, as long as one month. Battery drain
should be such that as many as 24 samples, of one liter volume, could be taken
in as little as 24 hours. Sampling volume accuracy should be as good as + or -
10%. It ought to be programmable, such that variable volumes could be sampled
at variable time intervals. It ought to be capable of processing more than one
sample through the same disk. Only minimal redesign and modifications of an
existing inexpensive water sampler ought to be required. While still being
highly portable, the dosimeter ought to have a large enough internal volume to
accommodate at least 24 Empore disk holders.
An American Sigma Streamline water sampler (Middleport, NY) was chosen as the
nucleus of the disk water sampler (See Fig. 1). It came equipped with an
"advanced programming" mode present in the EPROM software that allowed much
flexibility in the programming in regard to sample volume, intervals, multiple
sampling, etc. However, its main feature, which was not apparent until it became
obvious that vacuum would have to be applied to the bottom of each Empore disk
holder, was that the motor that was used to position the sample distribution arm
over the sample bottles could be synchronized with a somewhat similar motor which
would rotate a valve for selecting which filter holder received vacuum. This was
a critical aspect in the successful design of the dosimeter.
Also critical was the fact that electronic timing circuitry was in place and
enabled the installation of a switch as a means of actuating a methanol pump at
the appropriate time for each sample.
The device as modified will accept up to 24 holders. During operation, 1000 mL
of water is processed through each holder, in ten 100 mL increments. In this
report these 100 mL increments are referred to as sub-samples. The machine can
be programmed to vary the time between increments and also set real time start
and stop times for each 1000 mL sample as well as continuous operation once
started.
The number of 1000 mL samples can be selected as any value up to a maximum of 24.
If the multiple stop-start method of programming is to be used and the machine
set to shut off after the desired number of samples have been taken, holders must
be placed in the machine in direct sequence, 1 to whatever number desired, in a
counter clockwise direction from position 1.
After completion of the sampling program, the holders can be brought back to the
lab, where the Empore disks can be processed without removal from the holder. They
are processed by elution of the adsorbed pesticides from the disk material with
a solvent suitable for the type of analysis to be used.
The device, as described here, comes equipped with an intake tube of 3/8 in.
internal diameter, and a weighted intake strainer for sampling surface waters.
An interface remains to be constructed for monitoring wells.
Preparation of Holders
For use in this device, the Empore extraction disks are held in a modified
disposable filter funnel (See Fig. 2). These filter funnels are manufactured by
Micron Separations, Inc. of Westborough, Mass. (cat.# DFN-P4SGS-S1) and are
supplied with various types of filter media. For this work, all of the supplied
filter media, except the supporting membranes, are discarded, making space for the
stacked disks described below.
The remaining support disk is modified by trimming it to 40 mm in diameter. The
trimmed support is then centered in the lower half of the holder and cemented in
place with silicone rubber cement. After this rework is performed, the holder
is then reassembled in the following manner:
1st layer (bottom) - one original support disk (trimmed to 40 mm)
2nd layer - Empore™ cat. # 1214-5002, C8 disk (Analytichem
International)
3rd layer - Whatman Multi-grade filter, # GMF150
The two halves of the holder are then placed together and held with moderate
downward pressure while tape is applied to assemble them. If the holder is to
be reused, ordinary duct tape 3/8 in. wide has proven satisfactory. Care was
exercised while assembling the holder to limit downward pressure so that the
multigrade filter is not cut, which would allow the sample to bypass it.
Description of System for Wetting Extraction Disks
Before any water is passed through the Empore disc, the disc must be wetted with methanol.
In this machine, this is accomplished in the following manner. Methanol is
placed in a 500 mL reservoir (See Fig. 3). At the beginning of the sampling
interval, the main pump runs in reverse for several seconds in order to purge the
intake line to the pump. After that, the methanol pump is turned on, which
delivers approximately 10 mL of methanol to the main pump discharge funnel on top
of the distributor arm. The methanol flows by gravity down to filter holder
number one and wets the disc. On the following pump cycles the methanol pump is
held off until there is counted a total of 20 reverse run times (two for each
sub-sample interval). Runs in the forward direction are ignored due to the
action of a diode. After twenty run times, the circuit is reset and is ready to
pump methanol again for the next sample. See Fig. 4 for a summary of the parts
added to the Streamline water sampler, and Fig. 5 for a logic diagram of the
control circuits, both original and added.
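
The methanol-pump timing can be summarized in a short sketch. The
actual control is discrete circuitry on the interface board, not
software, so the Python below is purely illustrative.

# Illustrative model of the methanol pump hold-off counter: the pump is
# enabled once at the start of each 1000 mL sample, then held off until
# twenty reverse pump runs (two per 100 mL sub-sample) have been counted.

class MethanolPumpControl:
    RUNS_PER_SAMPLE = 20          # 2 reverse runs x 10 sub-samples

    def __init__(self):
        self.reverse_runs = 0

    def on_pump_run(self, direction):
        """Return True when the methanol pump should fire."""
        if direction != "reverse":
            return False          # forward runs ignored (the diode's role)
        fire = (self.reverse_runs == 0)   # wet the disk at sample start
        self.reverse_runs += 1
        if self.reverse_runs >= self.RUNS_PER_SAMPLE:
            self.reverse_runs = 0         # reset; ready for the next sample
        return fire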
Description of the Vacuum Pump Circuit
At the same time the methanol pump starts, a vacuum pump is turned on to pull the
methanol through the filter; it is controlled by a four minute timer. A vacuum
of approximately 15 in. of Hg is applied to the bottom of each individual disk
holder for drawing the sub-sample through the disk. This vacuum is supplied by
a small 12V powered vac pump and distributed by a 24 port rotary valve. The
rotary valve is synchronized manually before the beginning of the program by
rotating the distribution arm and the valve to position #1 individually.
Thereafter, the interface circuitry advances the rotary valve in synchronism with
the distribution arm allowing time for the sub-sample to be drawn through the
disk before advancing the valve to the next position.
Modification to Main Pumping System
The main pump flow rate is approximately 3500 mL/min as supplied by the
manufacturer. Since sub-sample increments of only 100 mL are used, this high
flow rate resulted in splashing and control problems. In order to reduce these
problems the pumping rate was changed by reducing the size of the tubing used in
the pump and the suction line. In addition, it was found that a small amount of
water was trapped in the pump due to sagging of the tubing, resulting in a small
carryover between sub-samples. The tubing was shortened to eliminate this sag
and ensure that the pump was blown dry during the purge cycle.
Materials were chosen for the dosimeter construction and evaluation that were of
high quality, reliable, and economical, all readily obtainable from suppliers
within the United States.
Empore Filter Design Considerations
Study design called for lake water, with 3 levels of algae, to be tested in the
dosimeter, as well as 3 groundwater (well) samples with 3 levels of hardness.
Therefore, a filter holder was identified that could accommodate multiple, or a
stacked array of filters. We erroneously believed that some sort of microbial
filter would have to be incorporated in the filter holder to prevent microbes
from reaching the Empore extraction disk during water filtration.
Overflow Detection When Filter Becomes Plugged
A convenient way was devised to determine when filter holder overflow would
occur, and which filter had overflowed, ostensibly as a result of Empore filter
plugging, either by algae or other microparticulates. This was accomplished by
ringing the top portion of the filter holder with paper tape impregnated with
water soluble ink (Sanford's Mr. Sketch). As the water begins to overflow, it
washes away the ink. This feature ought to be quite useful in that
it would prevent the analytical work from being performed in the laboratory on
samples that plugged in the field.
Dosimeter Pump Accuracy and Reproducibility
The accuracy and reproducibility of the dosimeter peristaltic pump were
determined by evaluations with Fisher HPLC grade water. Before beginning the
study, the pump was calibrated for flow. The pump was then programmed to pump 10
sequential 100 mL samples at 12 minute intervals. Water was collected in a 1000
mL graduated cylinder and measured after each sample. Of the 10 samples
collected, 3 of them were exactly 100 mL, while 7 of them varied plus or minus
5 mL. Total volume delivered was 995 mL, well within the manufacturer's
specifications for the pump.
Four more similar tests were run, with an average delivered volume per sample of
98.9 mL, excellent accuracy and reproducibility.
Lake Water Extraction Efficiencies
Three lake waters were selected with varying amounts of algae as determined by
Secchi disk. Secchi numbers (see Table 7 below) are the number of feet that a
black and white disk can be lowered before definition of its outline is lost.
One gallon (4 L) brown jugs were filled, taken to the laboratory for storage at
4°C, fortified at 10 μg/L with all eight pesticides, and processed in 100 mL
subsamples through the dosimeter to give a total of 1 L for each sample. These
studies were done on stacked filters that did not include the 0.2 μ microbial
filters, but did include the GMF multigrade. The support filter had not been
trimmed for these. Three filter aids were also tested for their ability to
filter out the algae onto the multigrade filter: (1) Solka Floe, James River
Corp., (2) Hyflo Super Cel, Johns-Manville Corp., and (3) white sea sand.
Exactly 15 mL of each were placed into the filter holders, along with several
controls (no filter aid), and the waters were run through.
Table 7: Dosimeter Extraction of Several Lake Waters Fortified with Eight Test
Pesticides; Effect of Various Filter Aids.a

Lake Water                 Secchi Disk Reading
Lake Alice                 1.75
Bivens Arm                 1.8
Lake Wauberg               2.0
Newnans Lake               1.8
Red Water Lake             3.2
Lake Lochloosa             3.5
Riley Lake (Putnam Co.)    8.0

a Fortified at 10 μg/L. All samples plugged the filter stack and flow stopped
at 300 mL of water, + or - 50 mL, whether filter aid was present or not.
These results clearly showed that the tested version of the Empore extraction
disks cannot be used for the extraction of 1 liter quantities of lake water, even
when a multigrade filter is present and even when 15 mL of filter aid are present
in the filter holder.
Extraction of Groundwater Samples Fortified with the Eight Test Pesticides
Three groundwater samples of various hardnesses were to be collected, fortified
with the eight test pesticides and extracted with the dosimeter. Those three
chosen were: (1) Murphree well field, City of Gainesville, well #1, (2)
shallow well at a private residence, Gainesville, FL, and (3) private shallow
well on Riley Lake, Putnam County, FL. These wells ranged widely in their
hardnesses. For example, the Gainesville well was the hardest, with 320 mg/L
(ppm) of bicarbonate, followed by the Murphree well #1, with 208.6 mg/L. The
Putnam County well was the least hard, with a bicarbonate of only 10.2 mg/L.
Samples were collected (8 L of each) in one gallon brown jugs and transported to
the laboratory where they were fortified at 10 μg/L with the five test pesticides
and stored at 4°C before extraction by the dosimeter. Exactly 1 liter samples
of each were extracted in triplicate. Alachlor, bromacil, ametryn, prometryn,
and terbutryn were analyzed by gas chromatography as previously described. Table
8 summarizes the recoveries.
Table 8: Extraction Efficiencies of Eight Test Pesticides Extracted from Three
Florida Groundwaters Using the Dosimeter Equipped with Reduced Size (40 mm)
Support Disks.a

                        % Recoveries
Well                alachlor    prometryn    ametryn
Murphree            109         107          108
Gainesville, FL     113         110          114
Putnam Co., FL      100         102          106

                        % Recoveries
Well                bromacil    terbutryn
Murphree            119         106
Gainesville, FL     91          109
Putnam Co., FL      100         100

a Averages of three separate determinations, fortifications at 10 μg/L; RSD less
than 5%.
Although somewhat high, these recoveries were similar, from pesticide to
pesticide and from well to well. Relative standard deviations for triplicate
analyses averaged less than 5% for all pesticides and wells, not much larger than
what could be expected for instrumental error.
FIELD TEST OF DOSIMETER
The pumping capacity and accuracy of the dosimeter in the field were briefly
examined by placing it on the shore of Riley Lake, Putnam County, FL. The
dosimeter was programmed to collect 24 one liter samples in 100 mL subsamples over
a 24 hour period, beginning at 1:06 PM on Nov. 29 and ending at 1:06 PM on Nov.
30. Since previous runs on Riley Lake water showed that the Empore filter disks
plugged at approximately 300 mL, they were not placed in the disk holders for
this study. Only the support disk and the GMF 150 multigrade disk were installed
in each of the 24 holders. All water pumped through the disks was collected in
large carboys and transported back to the laboratory for volumetric measurement.
After all samples were completed, the overflow detection tape was examined on
each holder. Battery voltage was measured before starting the sampling and
immediately following completion of sampling.
No overflow was observed for any of the samples, nor was there evidence of
splashing inside the dosimeter around the filter disk holders. Exactly 20.83
liters of water were collected, 87% of the programmed volume. This accuracy
falls outside the specifications set by the manufacturer; however, it may be
explained by the extremely cold temperatures the dosimeter experienced on the
morning of Nov. 30, when 40°F was measured. Battery voltage dropped very little, from
12.75 volts to 12.48 volts, indicating that much more sampling could be
accomplished before recharging.
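The accuracy figure reduces to simple arithmetic, and the Summary below notes
that a measured accuracy could be used for sample volume corrections; a minimal
sketch, assuming one plausible form of that correction (the analyte mass and the
correction form are illustrative assumptions, not specified by the authors):

    programmed_L = 24.0     # 24 programmed one-liter samples
    collected_L = 20.83     # measured volumetrically in the laboratory
    accuracy = collected_L / programmed_L
    print(f"pumping accuracy: {accuracy:.1%}")          # -> 86.8%, reported as 87%

    # Hypothetical correction: scale the nominal per-sample volume by the
    # measured accuracy before computing a concentration from analyte mass.
    analyte_ng = 950.0                    # hypothetical mass eluted from one disk
    corrected_volume_L = 1.0 * accuracy   # nominal 1 L sample, corrected
    print(f"corrected conc.: {analyte_ng / corrected_volume_L / 1000:.2f} ug/L")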
SUMMARY
We have shown that the pesticides alachlor, bromacil, ametryn, prometryn, and
terbutryn can be extracted from laboratory and well waters at trace levels using
C8 Empore extraction disks and one-liter samples. Drying of the disks was
required for the near-complete recovery of bromacil. Several lake waters plugged
the disks at approximately 300 mL, whether filter aids and multigrade prefilters
were employed or not. For laboratory water, extraction efficiencies decreased
little with increasing flows, maximum flow being determined by disk rupture. The
1-256
-------
pesticides alachlor and bromacil could be kept on disks at ambient temperatures
(25°C) and elevated temperatures (40°C) for up to 30 days without loss. All five
pesticides could be efficiently extracted from three different well waters
varying widely in hardness using the dosimeter. Pumping accuracy, in the field,
was not as good as observed in the laboratory, although it could be measured and
used for sample volume corrections.
ACKNOWLEDGEMENTS
This work was sponsored by the Florida Department of Environmental Regulation,
under project WM 321.
LIST OF FIGURES
Fig. 1. American Sigma water sampler, model 702, top cover removed.
Fig. 2. Filter holder, MSI model DFN-P4SGS-S1.
Fig. 3. Bottom plate of dosimeter with methanol reservoir, methanol pump,
control circuitry and controller.
Fig. 4. Diagram of dosimeter interconnections, dashed lines denote added parts.
Fig. 5. Logic diagram of control circuitry for dosimeter, thin lines for
original Streamline, bold lines for added logic.
1-257
-------
1-258
-------
[Fig. 2. MSI disposable filter funnel (model DFN-P4SGS-S1) modified for use in
the dosimeter: cover, funnel, filter media, and base, with accessory No. 8
stopper and extender for use in the elution flask; approximately 1/32" of the
lid on the bottom surface is removed.]
1-259
-------
[Fig. 4. Block diagram of dosimeter interconnections: keyboard and display;
Streamline CPU P.C. board; Streamline power control P.C.B.; arm Ledex motor;
auxiliary battery; dosimeter interface P.C. board; vacuum pump; vacuum rotary
valve; sample pump; and methanol pump. Connections are made through the re-wired
auxiliary socket on the pump control assembly; dashed lines denote parts added
for the dosimeter.]
1-260
-------
[Fig. 5. Flow chart for control circuits: delay until restart time; take 100-mL
sample; post purge; move distributor arm to next position; move vacuum valve to
next position; move arm to position 1; move vacuum valve to position 1; reset;
stop. Thin lines denote the original Streamline control circuit; bold lines
denote the added interface.]
1-261
-------
34
REPRESENTATIVE SAMPLING FOR THE REMOVAL PROGRAM
William A. Coakley, Environmental Response Team, U.S. Environmental
Protection Agency, Raritan Depot, Edison, New Jersey 08837; Lauren Ray.
Technical Assistance Team, Gregory Mallon, Technical Assistance Team, Roy
F. Weston, 955 L'Enfant Plaza, Washington, DC 20024; Gregory Janiec,
Technical Assistance Team, Roy F. Weston, Weston Way, West Chester,
Pennsylvania 19380.
ABSTRACT
In addition to addressing catastrophic releases, the Removal Program is
involved in meeting objectives such as identifying the threat a site poses
to public health, welfare, and the environment; delineating sources and
extent of contamination; evaluating waste treatment and disposal options;
and confirming the achievement of cleanup standards. To help meet these
ends, EPA assembled the U.S. EPA Committee on Representative Sampling for
the Removal Program to develop guidance documents for collecting
representative samples at removal sites. The goal of a representative
sampling plan is to accurately depict variations in pollutant presence and
concentration across a site for a given medium (e.g., soil, groundwater).
Five representative sampling guidance documents, addressing different
environmental media, are in various stages of planning and development.
Guidance documents for soil and air sampling are scheduled for publication
in late FY91. Guidance documents for biological sampling, waste sampling,
and groundwater/surface water/sediment sampling will be completed during
FY92. Each document has information unique to its medium, but follows the
overall objectives and recommendations for effective representative
sampling. The documents address: assessing available information;
selecting an appropriate sampling approach (including the selection of
sampling locations); properly selecting and utilizing sampling and field
analytical screening equipment; utilizing proper sample preparation
techniques; incorporating suitable types and numbers of QA/QC samples; and
interpreting and presenting the resulting data. Each document presents a
case study to illustrate how a representative sampling plan may be
developed to meet Removal Program objectives. A representative sampling
training program and an air sampling methods database are also being
developed.
INTRODUCTION
The EPA Removal Program
Under the Comprehensive Environmental Response, Compensation, and
Liability Act of 1980 (CERCLA), the EPA may respond to a release (or
threat of a release) of hazardous materials. CERCLA authorized both long-
1-262
-------
term activities (called remedial actions) and emergency response
activities (called removal actions), and a Hazardous Waste Trust Fund
("Superfund") to pay for them.
A removal action is a short-term action intended to stabilize or clean up
an incident or site which poses a threat to public health or welfare or to
the environment. Removal actions may include but are not limited to:
• removing and disposing of hazardous substances;
• constructing fences, posting signs, and other security
measures to restrict access to the site;
• providing an alternative water source to local residents where
water supplies have become contaminated; and
• temporarily relocating residents.
CERCLA also defined the duration and cost of removal actions. Removal
actions were limited to six months duration and a total cost of $1
million. Exemptions would be required in cases where the action exceeded
these time and cost limitations. In 1986 the Superfund Amendments and
Reauthorization Act (SARA) changed the limitations to 1 year and $2 million.
In cases where imminent threats have been addressed but long term threats
remain, the site is referred to EPA's Remedial Program for further
assessment and investigation. However, remedial actions are only
conducted at sites that have been placed on EPA's National Priorities List
(NPL). Removal actions may be necessary at remedial sites if an emergency
arises. EPA has responded with multiple removal actions at sites with
complex hazards.
Program Implementation
Superfund program implementation is guided by the National Oil and
Hazardous Substances Pollution Contingency Plan (NCP), which outlines the
roles and responsibilities of the Federal agencies who respond to releases
of hazardous substances. The U.S. Coast Guard has responsibility for
releases in or near coastal waters and EPA has responsibility over those
which occur inland.
The initial step in a removal action is the discovery of the incident.
EPA may be notified by the National Response Center, which is operated 24
hours a day by the U.S. Coast Guard, or be notified directly. Once EPA has
been notified, an EPA On-Scene Coordinator (OSC) evaluates the situation.
Based on this evaluation, EPA decides whether Superfund money will be used
to respond to the incident (if the responsible party cannot or will not do
so, or if state or local officials are unable to respond). EPA
notifies or calls for assistance from other agencies, as necessary.
Once EPA determines that a removal action is required, the OSC assembles
the equipment and resources necessary to respond. During an initial site
1-263
-------
assessment, air monitoring equipment helps to determine the nature of the
on-site hazards. After assessing on-site conditions, the OSC establishes
a site command post and, if there is no immediate emergency, begins
monitoring and sampling on-site materials or contaminants. A variety of
media may be sampled, including soil, air, surface water, groundwater,
waste piles, and drums or other containers. Once sampling is completed,
the necessary equipment is mobilized to stabilize the site. Stabilization
can include: building berms/dikes, establishing water treatment systems,
excavating contaminated soil, erecting fences, and other activities.
REPRESENTATIVE SAMPLING
Representative sampling is the degree to which a sample or a group of samples
accurately characterizes site conditions. Representative samples reflect
the concentration of parameters of concern at a given time. Analytical
results from representative samples illustrate the variation in pollutant
presence and concentration across a site.
The U.S. EPA Committee on Representative Sampling for the Removal Program,
comprised of U.S. EPA, state, and contractor representatives, is planning
and developing five representative sampling guidance documents, each
addressing a different environmental medium. Guidance documents for soil
and air sampling are scheduled for publication in late FY91. Guidance
documents for biological sampling, waste sampling, and groundwater/surface
water/sediment sampling will be completed during FY92. The documents are
medium-specific for ease of use; however, multimedia sampling is usually
necessary at removal sites. Each document covers aspects unique to its
medium, but follows the overall objectives and recommendations for
effective representative sampling. The documents address: assessing
available information; selecting an appropriate sampling approach
(including the selection of sampling locations); properly selecting and
utilizing sampling and field analytical screening equipment; utilizing
proper sample preparation techniques; incorporating suitable types and
numbers of QA/QC samples; and interpreting and presenting the resulting
data. The air document also addresses analytical techniques and
atmospheric modeling. The documents address the above considerations
within the objectives and scope of the Removal Program.
Each document presents a case study to illustrate how a representative
sampling plan may be developed to meet Removal Program objectives. The
case study illustrates the concepts discussed in each chapter. For the
soil guidance, the case study illustrates how "interactive" field
analytical screening and other cost-effective field techniques such as
geophysical surveys can be used to characterize a site, from selecting
sampling locations to confirming cleanup.
1-264
-------
USE OF REPRESENTATIVE SAMPLING TO MEET REMOVAL PROGRAM OBJECTIVES
Although field conditions and removal activities vary from site to site,
the primary Removal Program sampling objectives include:
1. Establishing Threat to Public Health or Welfare or to the
Environment -- CERCLA and the NCP establish the funding
mechanism and authority which allow the OSC to activate a
Federal removal action. The OSC must establish that the site
poses a threat to public health or welfare or to the
environment. Sampling is often required to document the
hazards present on site. The analytical data are often needed
quickly to activate the removal action.
2. Locating and Identifying Potential Sources of Contamination --
Sampling is conducted to identify the locations and sources of
contamination. The results are used to formulate removal
priorities, containment and cleanup strategies, and cost
projections.
3. Defining the Extent of Contamination -- Where appropriate,
sampling is conducted to assess horizontal and vertical extent
of contaminant concentrations. The results are used to
determine the site boundaries (i.e., extent of contamination),
define clean areas, estimate volume of contaminated soil,
establish a clearly defined removal approach, and accurately
assess removal costs and timeframe.
4. Determining Treatment and Disposal Options -- Sampling is
conducted to characterize waste and contaminated soil for in-
situ or other on-site treatment, or excavation and off-site
treatment or disposal.
5. Documenting the Attainment of Cleanup Goals -- During or
following a site cleanup, sampling is conducted to determine
whether the removal goals or cleanup standards were achieved,
and to delineate areas requiring further treatment or
excavation as appropriate.
Development and Execution of a Sampling Plan
The representative sampling guidance documents outline how a sampling plan
can be designed to meet these objectives. The sampling plan design
consists of the following steps:
• Review existing historical site information,
• Perform a site reconnaissance visit,
• Evaluate potential migration pathways, receptors, and routes
of exposure,
1-265
-------
• Determine the sampling objectives,
• Utilize field screening techniques,
• Select parameters to be analyzed,
• Select an appropriate sampling approach, and
• Determine the locations to be sampled.
Unless the site is considered a classic emergency, every effort is made to
first review relevant information concerning the site. An historical data
review examines past and present site operations and disposal practices,
providing an overview of known and potential site contamination and other
hazards. Sources of information include: federal, state, and local files
(e.g., prior site inspection reports and legal actions); facility maps;
blueprints; historical aerial photography; and interviews with facility
owners and operators, current and former facility employees, and local
residents. A site reconnaissance, conducted either prior to, or in
conjunction with sampling, assesses site conditions, evaluates areas of
potential contamination, evaluates potential hazards associated with
sampling, and helps to develop a sampling plan. During the
reconnaissance, Removal Program personnel observe and photodocument the
site, noting site access routes, process, waste disposal, and other
potential contaminant source areas, and potential routes for contaminant
transport off-site.
A representative sampling plan considers pollutant migration pathways,
receptors, and routes of exposure. Pollutant migration pathways include
surface drainage, vadose zone and groundwater transport, air transport,
and human activity (such as foot or vehicle traffic). In urban areas,
man-made pathways, such as storm and sanitary sewers and underground
utility lines can influence contaminant transport. Human receptors
include children who can come into direct contact with or ingest
pollutants by playing in a contaminated area. Environmental receptors
include Federal- and state-designated endangered or threatened species,
habitats for these species, wetlands, and other Federal- and state-
designated wilderness, critical, and natural areas.
The scope of the sampling program depends on the Removal Program sampling
objectives previously discussed. In order to attain these objectives, the
quality assurance components of precision, accuracy, completeness,
representativeness, and comparability are considered.
Samples are analyzed by an established and approved off-site laboratory,
or are screened or analyzed on-site using various portable direct reading
instruments. Field analytical screening equipment utilized by the Removal
Program includes the X-ray fluorescence (XRF) meter, photoionization
detector (PID), flame ionization detector, and field test kits. Some
field analytical screening instruments, such as the PID and some XRF
units, can be used in-situ (without collecting a sample). Field
analytical screening methods may be utilized to narrow the possible groups
or classes of chemicals for laboratory analysis. When used appropriately,
field screening can cost-effectively evaluate a large number of samples
1-266
-------
for the purpose of selecting a subset for off-site analysis. The
detection limits and accuracy of the screening method are evaluated by
sending a minimum of 10% of the samples to an off-site laboratory for
confirmation. Field screening techniques and confirmatory sampling can be
used together to identify or delineate the extent of contamination and to
confirm cleanup. Once a contaminated area has been identified with
screening techniques, an appropriate confirmatory sampling strategy
substantiates the screening results. Used in tandem, field analytical
screening and confirmatory sampling provide data more representative of
the problem at the site than off-site laboratory analysis alone. Field
screening is also utilized in the Removal Program for air monitoring
during removal activities and for on-site health and safety decisions.
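A minimal sketch of selecting the 10% confirmation subset described above (the
sample IDs, the use of simple random selection, and the function name are
illustrative assumptions; the Program does not prescribe a particular
algorithm):

    import math
    import random

    def confirmation_subset(sample_ids, fraction=0.10, seed=42):
        """Randomly select at least `fraction` of field-screened samples
        for off-site laboratory confirmation."""
        n = max(1, math.ceil(fraction * len(sample_ids)))
        rng = random.Random(seed)
        return sorted(rng.sample(sample_ids, n))

    screened = [f"SS-{i:03d}" for i in range(1, 43)]   # 42 hypothetical soil samples
    print(confirmation_subset(screened))               # 5 of 42 -> about 12%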
Geophysical techniques such as ground penetrating radar (GPR),
magnetometry, electromagnetic conductivity (EM) and resistivity surveys
may also be conducted during removal actions. Geophysical surveys, in
conjunction with field analytical screening, help delineate areas of
subsurface contamination, locations of buried drums or tanks, and
disturbed areas. Geophysical data can be obtained relatively rapidly,
often without disturbing the site.
Locating sampling points for field screening and laboratory analysis
entails choosing the most appropriate sampling approach. The sampling
objectives, the site setting, limitations in the sampling and the
analytical methods, and available time and resources are all
considerations. Representative sampling approaches include judgmental,
random, stratified random, systematic grid, systematic random, search,
and transect methodologies. A representative sampling plan may combine two or
more of these approaches (as defined below). Although some approaches
(such as judgmental and random sampling) are applicable to a variety of
media, it should be noted that systematic, search, and transect sampling
techniques are specific to soil and sediment sampling.
• Judgmental Sampling - Judgmental sampling is the subjective
selection of sampling locations at a site, based on historical
information, visual inspection, and on best professional
judgment of the sampling team. Judgmental sampling is most
often used to identify the contaminants present at the highest
concentrations (i.e., worst case conditions). This is often
the basis for supporting the removal funding request.
Judgmental sampling has no randomization associated with the
sampling strategy, and therefore prevents statistical
interpretation of the sampling results.
• Random Sampling - Random sampling is the arbitrary collection
of samples within the area of concern. Random sample
locations are chosen using a random selection procedure (such
as a random number table). The arbitrary selection of
sampling points requires each sampling point to be selected
independent of the location of all other points, and results
1-267
-------
in all locations within the area of concern having an equal
chance of being selected. Randomization is necessary in order
to make probability or confidence statements about the
sampling results. Random sampling may be performed for all
media; however, random sampling assumes that the site is
homogeneous with respect to the parameters being monitored.
The higher the degree of heterogeneity, the less the random
sampling approach will adequately characterize true conditions
at the site. For soil and sediment media (which are rarely
homogeneous), other statistical sampling approaches (discussed
below) provide ways to subdivide the site into more
homogeneous areas, and may be more appropriate than random
sampling for sampling soil and sediment at removal sites.
Figure 1 illustrates random sampling; a short location-generation
sketch for the random, grid, and systematic random approaches
follows this list.
• Stratified Random Sampling - Stratified random sampling often
relies on historical information and prior analytical results
(or field screening data) to stratify the sampling area. Each
stratum is more homogeneous than the site is as a whole.
Strata can be defined based on various factors, including:
sampling depth, contaminant concentration levels, and
contaminant source areas. Sample locations are selected
within each of these strata using random selection procedures.
Stratified random sampling imparts some control upon the
sampling scheme (e.g., collection of more samples from depths
or areas having higher contaminant concentrations) but still
allows for random sampling within each stratum. Different
sampling approaches may also be selected to address the
different strata at the site. Figure 2 illustrates a
stratified random sampling approach for soil where strata are
defined based on depth.
• Systematic Grid Sampling - Systematic grid sampling (of soil
and sediment) involves subdividing the area of concern by
using a square or triangular grid and collecting samples from
the nodes (intersections of the grid lines). The distance
between sampling locations in the systematic grid is
determined by the size of the area to be sampled and the
number of samples to be collected. Systematic grid soil
sampling is often used to meet Removal Program sampling
objectives for locating and identifying potential sources of
contamination, defining the extent of contamination, and
documenting the attainment of cleanup goals. Figure 3
illustrates a systematic grid sampling approach.
• Systematic Random Sampling - Systematic random sampling (of
soil and sediment) involves subdividing the area of concern by
using a square or triangular grid and collecting samples from
within each cell using random selection procedures.
Systematic random sampling is a useful and flexible design for
1-268
-------
[FIGURE 1 - RANDOM SAMPLING; FIGURE 3 - SYSTEMATIC GRID SAMPLING;
FIGURE 2 - STRATIFIED RANDOM SAMPLING; FIGURE 4 - SYSTEMATIC RANDOM SAMPLING.
Plan-view plots with axes in feet (0-225 ft by 0-100 ft); Figure 2 shows soil
strata (Stratum 1 and Stratum 2) by depth (0-6 ft). Legend: sample area
boundary, selected sample location, sample location. After: U.S. EPA, 1989,
EPA/230/02-89-042.]
-------
estimating the average pollutant concentration within each
cell of the grid. Also, systematic random sampling allows for
the isolation of cells that may require additional sampling
and analysis. Figure 4 illustrates a systematic random
sampling approach.
• Search Sampling - Search sampling utilizes either a systematic
grid or systematic random sampling approach to search for hot
spots (i.e., areas where contaminants in soil and sediment
exceed applicable cleanup standards). The number of samples
collected and the grid spacing used are determined on the
basis of the acceptable error (i.e., the chance of missing a
hot spot). When conducting search sampling, initial
assumptions must be made about the size, shape, and depth of
the hot spots to be searched for. The smaller and/or narrower
the hot spots are, the smaller the grid spacing (i.e., the
more samples) necessary to locate them. In addition, the
smaller the acceptable error of missing hot spots, the smaller
the grid spacing must be.
• Transect Sampling - Transect sampling involves establishing
one or more transect lines across the surface of a site. Soil
or sediment samples are collected at regular intervals along
the transect lines at the surface and/or at one or more given
depths. The spacing between sampling points along a transect
is determined by the length of the transect line and the
number of samples to be collected. Multiple transect lines
may be parallel or non-parallel to one another. A primary
benefit of transect sampling over systematic grid sampling is
the ease of establishing and relocating individual transect
lines versus an entire grid. Transect sampling is often used
to delineate the extent of contamination and to define
contaminant concentration gradients. Transect sampling has
also been used, to a lesser extent, in compositing sampling
schemes.
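Where grid spacing follows from the area dimensions and the sample count, the
locations themselves are straightforward to generate. A minimal sketch,
assuming the 225 ft by 100 ft plot extents of Figures 1 through 4 (the spacing
value, random seed, and function names are illustrative assumptions, not part
of the guidance documents):

    import random

    WIDTH_FT, HEIGHT_FT = 225.0, 100.0    # site extents, as in Figures 1-4
    rng = random.Random(0)

    def random_locations(n):
        """Simple random sampling: every point equally likely (Figure 1)."""
        return [(rng.uniform(0, WIDTH_FT), rng.uniform(0, HEIGHT_FT))
                for _ in range(n)]

    def grid_nodes(spacing_ft):
        """Systematic grid sampling: sample at square-grid nodes (Figure 3)."""
        xs = [x * spacing_ft for x in range(int(WIDTH_FT // spacing_ft) + 1)]
        ys = [y * spacing_ft for y in range(int(HEIGHT_FT // spacing_ft) + 1)]
        return [(x, y) for x in xs for y in ys]

    def systematic_random(spacing_ft):
        """Systematic random sampling: one random point per grid cell (Figure 4)."""
        pts, x = [], 0.0
        while x + spacing_ft <= WIDTH_FT:
            y = 0.0
            while y + spacing_ft <= HEIGHT_FT:
                pts.append((rng.uniform(x, x + spacing_ft),
                            rng.uniform(y, y + spacing_ft)))
                y += spacing_ft
            x += spacing_ft
        return pts

    print(len(random_locations(12)), len(grid_nodes(25.0)),
          len(systematic_random(25.0)))   # -> 12 points, 50 nodes, 36 cells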
Selection and Use of Sampling Equipment
The manner in which a sample is collected is based on the objectives
stated in the site-specific sampling plan. Sample collection requires an
understanding of the capabilities of the sampling equipment in obtaining
a sample which accurately depicts current site conditions. The use of
inappropriate equipment (or the incorrect use of sampling equipment) may
result in biased samples.
The mechanical method by which a sampling tool collects a sample may
impact sample representativeness. For example, if the goal is to
determine the concentrations of contaminants at each soil horizon
interface, using a hand auger would be inappropriate. Obviously, the
1-270
-------
augering technique would disrupt and mix soil horizons, making the precise
horizon interface difficult to determine. In addition, all sampling
devices must be of sufficient quality not to contribute contamination to
samples (e.g., no painted surfaces that could chip off into the sample),
and the sampling equipment should be either easily decontaminated or
inexpensive enough to be considered expendable.
Sample Preparation
Field sample preparation includes all aspects of sample handling after
collection, until the sample is received by the laboratory. Sample
preparation techniques are specific to the sample medium and sampling
plan. For soil and sediment sample preparation, common techniques are:
removing extraneous material, sieving, homogenizing, splitting, and
compositing samples.
Proper sample preparation and handling maintains sample integrity and
provides a representative sample from the total material collected.
Improper handling can result in a sample becoming unsuitable for the type
of analysis required. For example, homogenizing, sieving, and compositing
of soil samples all result in a loss of volatile constituents and,
therefore, are inappropriate when volatile contaminants are of concern.
Homogenization is the mixing or blending of a soil or sediment sample in
an attempt to provide uniform distribution of contaminants. Ideally,
proper homogenization assures that portions of the containerized samples
are equal or identical in composition and are representative of the total
sample collected. Incomplete homogenization may increase sampling error.
Quartering, as per ASTM Standard C702-87, can be used to simultaneously
homogenize and split a sample. Split samples are most often collected in
enforcement actions for comparing sample results obtained by EPA with
those obtained by the potentially responsible party (PRP). Split samples
also provide a measure of the sample variability, and a measure of the
analytical and extraction errors. Split soil and sediment samples are
commonly collected. Splitting may also be done, in some cases, with water
and air samples.
Compositing is the process of physically combining and homogenizing
several individual soil and sediment aliquots. Compositing samples
provides an average concentration of contaminants over a certain number of
sampling points, which reduces both the number of required lab analyses
and the sample variability. Compositing can be a useful technique, but
must always be considered and implemented with caution. Since compositing
dilutes high concentration aliquots, the applicable detection limits
should be reduced accordingly. If the composite value is to be compared
to a selected action level, then the action level must be divided by the
number of aliquots that make up the composite in order to determine the
appropriate detection limit (e.g., if the action level for a particular
substance is 50 ppb, a detection limit of 10 ppb should be used when
1-271
-------
analyzing a 5-aliquot composite).
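The detection-limit adjustment for composites reduces to one division; a
minimal sketch reproducing the worked example above (the function name is ours):

    def composite_detection_limit(action_level, n_aliquots):
        """Detection limit needed so a single aliquot at the action level
        is still detectable after dilution by compositing."""
        return action_level / n_aliquots

    # 50 ppb action level, 5-aliquot composite -> 10.0 ppb detection limit
    print(composite_detection_limit(50.0, 5))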
To help maintain sample integrity and assure representativeness, it is
sometimes possible to ship samples to the laboratory directly in the
sampling equipment. This is the case for air sampling media. For soil
core samples, the ends of a Shelby tube can be sealed with caps, taped,
and sent to the laboratory for analysis. To help maintain the integrity
of VOA soil samples, soil cores can be collected using acetate sleeves and
sent in the sleeves to the laboratory. To ensure the integrity of the
sample after delivery to the laboratory, laboratory sample preparation
procedures are made part of laboratory bid contracts.
Quality Assurance/Quality Control (QA/QC)
Quality assurance/quality control (QA/QC) measures are an integral part of
representative sampling. QA/QC samples provide information on the
variability and usability of environmental sample results. They also help
evaluate the degree of site variation, whether samples were cross-
contaminated during sampling and sample handling procedures, and whether a
discrepancy in sample results is due to laboratory handling and analysis
procedures.
In the Removal Program, field replicate, collocated, background, and
rinsate blank samples are the most commonly collected field QA/QC samples.
Performance evaluation, matrix spike, and matrix spike duplicate samples,
either prepared for or by the laboratory, provide additional measures of
control for the data generated. QA/QC results may suggest the need for
modifying sample collection, preparation, handling, or analytical
procedures if the resultant data do not meet site-specific quality
assurance objectives.
Three QA/QC objectives (QA1, QA2, and QA3) have been defined by the
Removal Quality Assurance Program, based on the EPA QA requirements for
precision, accuracy, representativeness, completeness, comparability, and
detection level. QA1 standards are used when a large amount of data are
needed quickly and relatively inexpensively, or when preliminary screening
data, which do not need to be analyte or concentration specific, are
useful. QA1 requirements are used with data from field analytical
screening methods for a quick, preliminary assessment of site
contamination. Examples of QA1 activities include: determining physical
and/or chemical properties of samples; assessing preliminary on-site
health and safety; determining the extent and degree of contamination;
assessing waste compatibility; and characterizing hazardous wastes.
QA2 verifies analytical results. The QA2 objective is intended to provide
a certain level of confidence for a select portion (10% or more) of the
preliminary data. This objective allows Removal Program personnel to
focus on specific pollutants and concentration levels quickly, by using
field screening methods with laboratory verification and quality assurance
1-272
-------
for at least 10% of the samples. QA2 verification methods are analyte
specific. Examples of QA2 activities include: determining physical
and/or chemical properties of samples; defining the extent and degree of
contamination; verifying site cleanup; and verifying screening objectives
obtainable with QA1, such as pollutant identification.
QA3 assesses the accuracy of the concentration level, by determining the
analytical error as well as the identity of the analyte(s) of interest.
QA3 data provide the highest degree of qualitative and quantitative
accuracy of all QA objectives by using rigorous methods of laboratory
analysis and quality assurance. QA3 is intended to provide a high level
of confidence so that decisions can be made with regard to:
treatment; disposal; site remediation and/or removal of pollutants; health
risk or environmental impact; cleanup verification; and pollutant source
identification.
Sources of Error
Quantifying the error associated with a sampling activity can be
difficult, but is important in order to identify the possible sources of
error or variation in sampling and laboratory analysis and to limit their
effect(s) on the data. Four potential sources of error are:
• Sampling design -- Site variation (the non-uniform conditions
which exist at a site) includes the identities and
concentration levels of contaminants. The goal of
representative sampling is to accurately identify and define
this variation. However, error can be introduced by the
selection of a sampling design which "misses" site variation.
For example, a sampling grid with relatively large distances
between soil sampling points or a biased sampling approach
(i.e., judgmental sampling) may allow significant contaminant
trends to go unidentified.
• Sampling methodology -- Error can be introduced by sampling
methodology and sample handling procedures, as in cross-
contamination from inappropriate use of sample collection
equipment, unclean sample containers, improper decontamination
and shipment procedures, and other factors. The use of
standard operating procedures for collecting, handling, and
shipping samples allows for easier identification of the
source(s) of error, and can limit error associated with
sampling methodology. Trip blanks, field blanks, replicate
samples, and rinsate blanks are used to identify error due to
sampling methodology and sample handling procedures.
• Sample heterogeneity -- Sample heterogeneity is a potential
source of error, especially for soil and sediment samples.
These media are rarely homogeneous and exhibit variable
1-273
-------
properties with lateral distance and with depth. This
heterogeneity may also be present in the sample container
unless the sample was homogenized in the field or in the
laboratory. The laboratory uses only a small aliquot of the
sample for analysis; therefore thorough sample homogenization
is important to assure that the analytical results are
representative of the sample and of the corresponding site.
• Analytical procedures -- Error which may originate in
analytical procedures includes cross-contamination,
inefficient extraction, and inappropriate methodology. Matrix
spike samples, replicate samples, performance evaluation
samples, and associated quality assurance evaluation of
recovery, precision, and accuracy, can be used to distinguish
analytical error from error introduced during sampling
activities.
A common objective of the evaluation of soil analytical data is to
delineate the extent of site contamination. One cost-effective approach
used in the Removal Program is to correlate inexpensive field screening
data with laboratory results. When field screening techniques, such as
XRF, are used along with laboratory methods (e.g., atomic absorption
(AA)), a regression equation can be used to predict a laboratory value
based on the screening results. The model can also be used to place
confidence limits around predictions. The relationship between the two
methods can be described by a regression analysis and used to predict
laboratory results based on field screening measurements. The predicted
values can then be located on a base map and contoured. These maps can be
examined to evaluate the estimated extent of contamination and the
adequacy of the sampling program.
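A minimal sketch of such a screening-to-laboratory regression, assuming
hypothetical paired XRF and AA lead results in ppm (the data, the ordinary
least-squares fit, and the prediction-interval form are illustrative
assumptions, not a prescribed Removal Program procedure):

    import numpy as np
    from scipy import stats

    # Hypothetical paired results: XRF field screening vs. AA laboratory (ppm Pb)
    xrf = np.array([120.0, 310.0, 450.0, 800.0, 1500.0, 2200.0])
    aa = np.array([100.0, 280.0, 500.0, 760.0, 1620.0, 2100.0])

    fit = stats.linregress(xrf, aa)
    x_new = 1000.0                               # new screening result
    y_pred = fit.intercept + fit.slope * x_new   # predicted laboratory value

    # Rough 95% prediction interval about the regression line
    n = len(xrf)
    resid = aa - (fit.intercept + fit.slope * xrf)
    s = np.sqrt(np.sum(resid**2) / (n - 2))
    se = s * np.sqrt(1 + 1/n + (x_new - xrf.mean())**2
                     / np.sum((xrf - xrf.mean())**2))
    t = stats.t.ppf(0.975, n - 2)
    print(f"predicted AA = {y_pred:.0f} ppm, "
          f"95% PI +/- {t*se:.0f} ppm, r = {fit.rvalue:.3f}")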
Data Presentation and Analysis
Data presentation and analysis techniques can be used to compare
analytical values, to evaluate numerical distribution of data, to
determine and illustrate the location of hot spots and the extent of
contamination across a site, and to assess the need for removal of
contaminated soil with concentrations at or near the action level. Data
presentation and analysis methods include:
• Data posting -- Data posting, the placement of sample values
on a site basemap, is useful for displaying the spatial
distribution of sample values to visually depict extent of
contamination and to locate hot spots.
• Geologic graphics -- Geologic graphics include cross-sections
and fence diagrams, two- and three-dimensional depictions,
respectively, of soils and strata to a given depth beneath the
site. These types of graphics are useful for posting
1-274
-------
subsurface analytical data, for interpreting subsurface
geology and contaminant migration, and for placing monitoring
wells.
• Contour mapping -- After depicting sample values on a basemap
with data posting, contour lines (or isopleths) can be drawn
at a specified contour interval. Interpolating values between
sample points and drawing contour lines is done manually or by
using computer contouring software. Contour maps are useful
for depicting soil and groundwater contaminant concentration
values across a site.
• Statistical graphics -- The histogram is a statistical bar
graph which displays the distribution of a data set. A normal
distribution of data is a bell-shaped curve, with the mean and
median close together about halfway between the maximum and
minimum values. A probability plot depicts cumulative percent
against the concentration of the contaminant of concern. A
normally distributed data set plotted as a probability plot
would appear as a straight line. A histogram or probability
plot shows trends and anomalies in the data prior to conducting
more rigorous forms of statistical analysis. Common
statistical analyses such as the t-test and the regression
analysis rely on normally distributed data. The distribution
or spread of the data set is important in determining which
statistical techniques to use.
• Geostatistics -- A geostatistical analysis can be broken down
into two phases. First, a model is developed that describes
the spatial relationship between sample locations on the basis
of a plot of spatial variance versus the distance between
pairs of samples. This plot is called a variogram. Second,
the spatial relationship modeled by the variogram is used to
compute a weighted-average interpolation of the data. The
result of geostatistical mapping by data interpolation is a
contour map that represents estimates of values across a site,
and maps depicting potential error in the estimates. The
error maps are useful for deciding if additional samples are
needed and for calculating best or worst case scenarios for
site cleanup.
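The variogram described above is just a binned average of squared differences
between sample pairs; a minimal sketch of the empirical (semi)variogram,
assuming concentrations at known coordinates (the data, bin width, and function
name are illustrative assumptions):

    import numpy as np

    def empirical_variogram(xy, z, bin_width, n_bins):
        """gamma(h) = mean of 0.5*(z_i - z_j)^2 over sample pairs whose
        separation falls in each distance bin; a variogram model is then
        fit to this plot for use in weighted-average (kriging) interpolation."""
        d = np.sqrt(((xy[:, None, :] - xy[None, :, :])**2).sum(-1))  # pair distances
        sq = 0.5 * (z[:, None] - z[None, :])**2                      # semivariances
        i, j = np.triu_indices(len(z), k=1)                          # each pair once
        gammas = []
        for b in range(n_bins):
            mask = (d[i, j] >= b * bin_width) & (d[i, j] < (b + 1) * bin_width)
            gammas.append(sq[i, j][mask].mean() if mask.any() else np.nan)
        return np.array(gammas)

    rng = np.random.default_rng(1)
    xy = rng.uniform(0, 225, size=(40, 2))   # 40 hypothetical sample locations (ft)
    z = rng.lognormal(5, 1, size=40)         # hypothetical concentrations (ppm)
    print(empirical_variogram(xy, z, bin_width=25.0, n_bins=6))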
The data interpretation method chosen depends on project-specific
considerations, such as the number of sampling locations and their
associated range in values. A site with extremely low soil data
values (e.g., non-detects) adjacent to significantly higher values (e.g.,
5,000 ppm) from neighboring hot spots, with little or no concentration
gradient in between, does not lend itself to contouring and geostatistics,
specifically the development of variograms. However, data posting would
be useful at such a site to illustrate hot spot and clean areas.
Conversely, geostatistics and contour mapping, as well as data posting,
1-275
-------
can be applied to site data with a wide distribution of values (e.g.,
depicting a "bell shaped" curve) with beneficial results.
Incorporating Representative Sampling into the Removal Program
Although the principles discussed here are utilized in the Removal
Program, there is no national consistency in how they are employed. The
first step to representative sampling consistency in the Removal Program
is the completion of guidance documents for each environmental medium. In
order to keep the guidance documents current while sampling methodologies
are evolving, EPA has placed the documents on a two-year update schedule.
The second component of incorporating the concept of representative
sampling into the Removal Program is the development of a training program
for site personnel, based on the guidance documents. The training course
will introduce the concepts presented in the documents, structured around
a realistic example site (an actual Superfund site). One common site is
being incorporated into each document. This will facilitate the
development of an integrated training program addressing all media.
To enhance the integration of representative sampling into the Removal
Program, the guidance documents discuss computer software that assists in
implementing the concepts presented in the documents. This includes the
use of EPA's Geo-EAS program (Geostatistical Environmental Assessment
Software) and its application to soil sampling in the soil document, and
an evaluation of available air models to assist in sample design in the
air document.
Finally, specific tools will be developed and incorporated into the
documents as necessary. In the air document, an air sampling methods
database has been developed to provide up-to-date information on sampling
methods and compounds that can be sampled by those methods.
SUMMARY
The U.S. EPA Committee on Representative Sampling for the Removal Program
is planning and developing guidance documents to assist Removal Program
personnel in the collection of representative samples at removal sites.
Five guidance documents, addressing soil, air, biota, waste, and
groundwater/surface water/sediment sampling will be published in FY91 and
FY92. Each document addresses considerations unique to its medium, but
follows the overall objectives and recommendations for effective
representative sampling. The documents address: assessing available
information; selecting an appropriate sampling approach; properly
selecting and utilizing sampling, field analytical screening, and
geophysical equipment; utilizing proper sample preparation techniques;
incorporating suitable types and numbers of QA/QC samples; and
interpreting and presenting the resulting data.
1-276
-------
35 PRELIMINARY FIELD AND LABORATORY EVALUATIONS
AND THEIR ROLE IN AN ECOLOGICAL RISK ASSESSMENT
FOR A WETLAND SUBJECT TO HEAVY METAL IMPACTS
Greg Linder. FIT Environmental Services, Lake Oswego, OR, Mike Bollman, Susan
Ott, Julius Nwosu, David Wilborn, ManTech Environmental Technology, Incorporated,
Bill Williams, US EPA Environmental Research Laboratory, Corvallis OR.
ABSTRACT
An integrated laboratory/field project was initiated by Environmental Protection Agency
[US EPA] Region 8 as part of their ecological risk assessment for the Milltown Reservoir
Superfund site located on the Clark Fork River in western Montana, six miles east of
Missoula, Montana. Preliminary work supporting the ecological risk assessment included
field studies completed at Milltown Reservoir, while laboratory work [biological testing
and chemical analysis] was completed at the US EPA Environmental Research
Laboratory-Corvallis [ERL-C], Corvallis, Oregon. For the wetlands evaluation at
Milltown Reservoir, heavy metals appear to be the most critical contaminants that
must be considered in the ecological assessment; those of primary interest include
arsenic, zinc, copper, cadmium, and nickel, as well as manganese and iron. Preliminary laboratory and
field investigations evaluated the extent of contamination and its impact on the indigenous
wildlife and vegetation characteristic of the site. Field work included scoping activities;
the identification of sampling units; preliminary sampling of surface water, soil, and
sediment; and preliminary field screening tests. Results from the preliminary studies
indicate ecological effects may be subtle in expression, and future work should focus
upon current as well as historic sediment deposition areas in the reservoir and associated
wetlands.
INTRODUCTION
Milltown Reservoir is located on the Clark Fork River in western Montana, six miles
east of Missoula, Montana [US EPA Region 8]. The reservoir was formed in 1907
following the construction of a hydroelectric facility located on the Clark Fork River
immediately downstream from its confluence with the Blackfoot River. Since
construction of the dam, a wetland habitat has been created, but because of the upstream
mining activities on the Clark Fork River, Milltown Reservoir has accumulated a large
volume of heavy metal-contaminated sediment. The Milltown Reservoir wetland was
initially identified under CERCLA in 1981 after community well-water samples were
found to have arsenic levels that ranged from 0.22 to 0.51 mg/L; the EPA
recommendation for potable water supplies suggested that arsenic not exceed 0.05 mg/L
[Woessner, et al. 1984]. Within an ecological context, however, the impact of the
contaminated sediments on the wetlands is unclear [Adamus and Brandt 1990; Tiner 1984].
1-277
-------
The laboratory and field investigations completed during FY90 evaluated the extent of
contamination and its impact on the indigenous wildlife and vegetation characteristic of the
site. During the preliminary season, field work was completed at Milltown Reservoir
and supporting laboratory work [biological testing and chemical analysis] was completed
at the Environmental Research Laboratory-Corvallis [ERL-C], Corvallis, Oregon.
Preliminary field work included scoping activities, such as the identification of sampling
units; preliminary sampling of surface water, soil, and sediment; and preliminary field
screening tests [NSI 1989]. In addition to field methods [e.g., earthworm, seed, and
amphibian evaluations] being used as part of the biological and ecological assessment,
complementary laboratory evaluations were completed [Greene, et al. 1989]. Routine
water quality measurements on surface water samples and soil measurements [e.g.,
texture analysis] were also collected as part of the preliminary field activities.
SYNOPSIS: PRELIMINARY FIELD AND LABORATORY ACTIVITIES
Sampling plan. Historic information regarding Milltown suggested that a preliminary
field effort could well determine the extent of contamination, and definitive field
operations planned for FY 91 could be focused along habitat lines suggested by our
preliminary field season. Clearly, the sedimentation problems of the reservoir would
require that those matrices be evaluated in our definitive studies, but little previous
work had considered the impact of periodic inundations upon upland habitats. Accordingly,
the sampling plan which guided our preliminary field season was designed within
topographic and historic bounds and ensured that the definitive sampling and analysis
plan guiding our FY 91 field season would be developed on defensible empirical grounds.
preliminary field effort, Milltown was stratified into sampling units based largely on
topography, and line transects were established on each sampling unit. Each line transect
was derived from an initial random vector, then defined along habitat gradients amenable
to each of the evaluations being completed on the sample units; plant and earthworm
methods were applied on the upland features, and amphibian methods were applied in
emergent zones [Schweitzer and Santolucito 1984].
Laboratory and field testing: Vegetation evaluations. Within laboratory settings,
critical developmental stages in plant life cycles were evaluated and physiological
endpoints pertinent to ecological impact were measured [Linder, et al. 1990]. Seed
germination evaluations were completed to evaluate soils directly in the laboratory
without preparing eluates [Thomas and Cline 1985]; root elongation tests were completed
on site-soil eluates [Greene, et al. 1989]. On-site seed germination evaluations also
considered the germination endpoint, but were less time consuming, generally more cost-
effective and minimized the manipulation of the site-soils, since tests were performed
directly in the field [NSI 1989]. Data derived from on-site testing complemented
terrestrial laboratory tests and chemical analysis of site samples. The on-site evaluations
also addressed questions regarding lab-to-field extrapolation.
1-278
-------
Earthworm testing. Integrated field and laboratory methods using earthworms
contributed to soil evaluations and afforded a direct test of an environmental matrix
which may greatly influence the impact of soil chemicals on indigenous
macroinvertebrate communities [Rhett, et al. 1988; Marquenie, et al. 1987]. Adverse
biological or ecological effects may be expressed owing to contaminant-related effects
associated with anthropogenic chemicals. Or, physical alterations in habitat may
effectively impact terrestrial or wetland systems through direct and indirect mechanisms.
Within ecological evaluations for terrestrial and wetland habitats, then, biological and
ecological measurements in general must assess and, if possible, distinguish between
effects mediated by physical alterations in habitat and those effects mediated by
anthropogenic chemicals or contaminants associated with human activities. Additionally,
lab-to-field extrapolation bias was addressed through the integrated field and laboratory
work completed with earthworms.
Amphibian testing. The integrated laboratory and in situ methods using amphibians
contributed to an evaluation of the extent of contamination as well as lab-to-field
extrapolation error. For the preliminary field season, in situ evaluations were completed
at selected emergent zones at Milltown Reservoir and used field-collected eggs and early
embryos of Rana catesbeiana [NSI 1989]. Grab samples of surface waters at Milltown
Reservoir were also collected as part of the preliminary field season for the Milltown
Reservoir endangerment assessment; laboratory evaluations using standardized amphibian
methods [FETAX] were completed as parallel and complementary components to these
in situ amphibian evaluations [ASTM 1991; Dawson, et al. 1988].
RESULTS AND DISCUSSION
From the preliminary field season completed at Milltown Reservoir:
+ There are no indications that acute effects are associated with any presumptive
contaminant exposures on those areas surveyed and sampled during FY 90.
+ Occasionally, biological tests suggested that subacute or chronic effects may be
expressed at some locations on the Milltown wetlands; these expressions of
adverse biological effects appeared to be associated with deposition zones where
sediments either currently or historically had accumulated as a function of
changes in the Clark Fork channel or flow rates.
+ Samples characterized by adverse biological responses in laboratory or field
exposures were frequently identified by more than one test method; adverse
biological responses in laboratory tests should be evaluated for laboratory-related
manipulation "effects," particularly when their field analogs yielded dissimilar
endpoints.
1-279
-------
+ Potential sample heterogeneity may be minimized by stratifying future sample
plans along topographic boundaries determined by current or historic depositional
areas on the Clark Fork.
+ The significance of Blackfoot River input at its confluence with the Clark Fork
could not be determined from these preliminary studies.
+ Some soil and sediment samples from the Milltown wetlands, again in current or
historic zones of deposition, presented metals concentrations which may be
considered elevated [Beyer 1990].
+ Biological, and presumptively ecological, impacts associated with these elevated
metal concentrations were not overtly expressed, and the biological information
collected during the preliminary field season should be considered when
interpreting these total metals concentrations.
Within an ecological context, these in situ and on-site biological methods have proven
to be applicable to integrated field and laboratory studies that not only evaluate the
current status of the wetland but also provide information that will contribute to
future mitigation and restoration efforts.
SUMMARY
While the work completed in FY 90 represents preliminary efforts in evaluating
ecological risks, the information garnered can focus future work for the Milltown
Reservoir assessment. While overt toxicity was not expressed in the
wetlands, the following technical items were identified as significant considerations
in designing field and laboratory evaluations that could contribute to future Milltown
Reservoir work.
+ "Reference areas" must be defined for future work at Milltown; some
comparative framework must be established, be that "site-equivalent areas,"
within-boundary reference locations or an adequate historical data base, for
evaluating the information generated in the laboratory or field.
+ Within ecological contexts, the area being sampled must be extended to include
the entire Clark Fork Arm and other upstream areas suspected of having more
recently deposited and potentially heavy metal-contaminated sediments.
+ Sediment evaluations within the reservoir must be completed to adequately
evaluate the wetland; these evaluations should consider metals concentrations in
sediments, evaluations of sediment toxicity [including evaluations of its
physicochemical properties], and a field survey which would yield ecologically
1-280
-------
relevant information such as community structure.
+ Sediment evaluations should include invertebrate work [e.g., Timmermans and
Walker 1989], and if possible, wetland plants [floating and emergent] should be
evaluated for the potential adverse biological effects associated with sediment
exposures [Federal Interagency Committee for Wetland Delineation 1989;
Fleming, et al. 1988; Reed 1988; Walsh, et al. 1990].
+ For more adequate site-specific evaluations regarding bioavailability of metals in
soils and sediments, additional physicochemical characterizations may be
advantageous; for example, to adequately evaluate vegetation responses, routine
soil texture, nutrient [N-P-K] and total organic carbon [TOC] analysis may be
beneficial [Chapman and Pratt 1961; SCS 1951; Vandecaveye 1948].
+ If total metals appear inadequate for evaluating metal bioavailability, speciation
studies may be indicated; if such studies are anticipated, site samples targeted for
these analyses should be selected after a thorough review of the current, as well
as historic, geochemical information.
REFERENCES
Adamus, P.R., and K. Brandt. 1990. Impacts on quality of inland wetlands of the
United States: a survey of indicators, techniques, and applications of community
level biomonitoring data. [EPA/600/3-90/073]. U.S. Environmental Protection
Agency, Environmental Research Laboratory, Corvallis, Oregon, 97333.
ASTM. 1991. Draft standard guide for conducting the frog embryo teratogenesis assay-
Xenopus [FETAX]. American Society for Testing and Materials, ASTM
Committee E-47 on Biological Effects and Environmental Fate, Philadelphia, PA.
Beyer, W.N. 1990. Evaluating soil contamination. U.S. Fish and Wildlife Service,
Biol. Rep. 90[2]. 25pp.
Chapman, H.D. and P.F. Pratt. 1961. Methods for the analysis of soils, plants, and
waters. Univ. of California, Division of Agricultural Sciences.
Dawson, D.A., E.F. Stebler, S.L. Burks, and J.A. Bantle. 1988. Evaluation of the
developmental toxicity of metal-contaminated sediments using short-term fathead
minnow and frog embryo-larval assays. Environ. Toxicol. Chem. 7[1]:27-34.
Federal Interagency Committee for Wetland Delineation. 1989. Federal manual for
identifying and delineating jurisdictional wetlands. U.S. Army Corps of
Engineers, U.S. Environmental Protection Agency, U.S. Fish and Wildlife
1-281
-------
Service, and U.S.D.A. Soil Conservation Service, Washington, D.C. Cooperative
technical publication. 76pp.
Fleming, W.J., J.J. Momot, and M.S. Ailstock. 1988. Bioassay for phytotoxicity of
toxicant to sago pondweed. Chesapeake Res. Consort. Publication 129. pp. 431-
440.
Greene, J.C., C. Bartels, W. Warren-Hicks, B. Parkhurst, G. Linder, S. Peterson, and
W. Miller. 1989. Protocols for Short Term Toxicity Screening of Hazardous
Waste Sites. US EPA Research Laboratory, Corvallis, OR. EPA/600/3-88/029.
Linder, Greg, J.C. Greene, H. Ratsch, J. Nwosu, S. Smith, and D. Wilborn. 1990.
Seed germination and root elongation toxicity tests in hazardous waste site
evaluation: methods development and applications. In Plants for Toxicity
Assessment, ASTM STP 1091. W. Wang, J.W. Gorsuch, and W.R. Lower,
Eds., American Society for Testing and Materials, Philadelphia, PA. Pp. 177-
187.
Marquenie, J.M., J.W. Simmers, and S.H. Kay. 1987. Preliminary assessment of
bioaccumulation of metals and organic contaminants at the Times Beach confined
disposal site, Buffalo, N.Y. Miscellaneous Paper EL-87-6, Dept. of the Army,
WES, Corps of Engineers, Vicksburg, MS, 67pp.
NSI Technology Services Corporation. 1989. FY 89 Report: Initial performance
evaluation of three bioassays modified for direct in situ testing. Technical report
submitted to Ecotoxicology Branch, Ecological Assessment Team, US EPA,
Environmental Research Laboratory, Corvallis, OR.
Reed, P.B., Jr. 1988. National list of plant species that occur in wetlands: national
summary. U.S. Fish Wildl. Serv. Biol. Rep. 88(24). 244pp.
Rhett, R.G., J.W. Simmers and C.R. Lee. 1988. Eisenia foetida used as a
biomonitoring tool to predict the potential bioaccumulation of contaminants from
contaminated dredged material. In Earthworms in Waste and Environmental
Management. Eds: C.A. Edwards and E.F. Neuhauser. SPB Academic
Publishing, The Hague, The Netherlands.
Schweitzer, G. E. and J. A. Santolucito. 1984. ACS Symposium Series 267:
Environmental Sampling for Hazardous Waste. American Chemical Society,
Washington, D.C.
Soil Conservation Service. 1951. Soil Survey Manual. U.S. Department of Agriculture
Handbook No. 18. Washington, D.C. 503pp.
Thomas, J. M. and J. E. Cline. 1985. Modification of the Neubauer technique to assess
toxicity of hazardous chemicals in soils. Environ. Toxicol. Chem. 4:201-207.
Timmermans, K.R., and P.A. Walker. 1989. The fate of trace metals during the
metamorphosis of chironomids (Diptera, Chironomidae). Environ. Pollut. 62:73-
85.
Tiner, Jr., R.W. 1984. Wetlands of the United States: Current status and recent trends.
U.S. Fish and Wildlife Service, Habitat Resources, One Gateway Center,
Newton, MA. 02158.
Vandecaveye, S. C. 1948. Biological methods of determining nutrients of soil. In:
H.B. Kitchen [ed.], Diagnostic Techniques for Soil and Crops. American Potash
Institute, Washington, D.C.
Walsh, G.E., D.E. Weber, T.L. Simon, L.K. Brashers, and J.C. Moore. 1990. Use of
marsh plants for toxicity testing of water and sediments. Contribution No. 694,
Environmental Research Laboratory, Gulf Breeze, Florida.
Woessner, W.W., J.N. Moore, C. Johns, M. Popoff, L. Sartor, and M. Sullivan. 1984.
Final report: Arsenic source and water supply remedial action study. Prepared
for Solid Waste Bureau, Montana Department of Health and Environmental
Sciences, Helena, Montana.
PAH Analyses: Rapid Screening Method for Remedial Design Program
Laurie Ekes, Marilyn Hoyt, Gayle Gleichauf, David Hopper
ENSR Consulting and Engineering
35 Nagog Park
Acton, MA 01720
The Iron Horse Park Superfund site (Billerica, MA) includes approximately 15 acres
of lagoon area. Previous site investigations have demonstrated contamination by
Polycyclic Aromatic Hydrocarbons (PAHs) and petroleum hydrocarbons. The US EPA
Record of Decision and administrative order for the site stipulate stringent cleanup goals;
all soil with Total PAH above 1 ppm or TPH above 100 ppm must be remediated.
As part of the pre-remedial design, iterative sampling was planned to tightly define
the spatial limits of the contamination. The program required analytical support with rapid
turn-around on high numbers of samples and action-level detection limits.
The method developed and validated to support this program has since proven to
have wide applicability for site investigation and remediation. Analytical protocol will be
presented with associated quality assurance/quality control data for samples from this site
and a coal gasification waste site. The method includes sample extraction by sonication
followed by direct analysis by GC/MS in the selected ion mode. Total analysis time per
sample is under 30 minutes. Sixteen different PAHs may be identified and quantified in
samples, with individual PAH detection limits of 60 ppb. Comparison data from Method
8270 analysis for split samples will be presented.
A statistically significant correlation at the 95% confidence level was found between
total PAHs and Total Petroleum Hydrocarbons (TPH) at the Iron Horse Park site. Data for
the regression analysis, which will be presented, were used to justify reliance on the PAH
data alone for the remedial design.
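For illustration, the following minimal sketch shows the kind of significance test behind such a regression screening; the paired PAH/TPH values are hypothetical placeholders, not data from the site.

    # Minimal sketch of the PAH/TPH regression screening described above
    # (Python).  The paired values are illustrative, not site data.
    from scipy import stats

    pah = [0.5, 1.2, 3.4, 7.9, 15.0]           # total PAH, ppm (hypothetical)
    tph = [48.0, 110.0, 350.0, 790.0, 1500.0]  # TPH, ppm (hypothetical)

    fit = stats.linregress(tph, pah)
    print(f"slope={fit.slope:.4f}  r={fit.rvalue:.3f}  p={fit.pvalue:.4f}")
    # p < 0.05 indicates a statistically significant correlation at the
    # 95% confidence level, supporting reliance on one analyte alone.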
37 EVALUATION OF HOUSEHOLD DUST COLLECTION METHODS FOR HUD NATIONAL SURVEY OF
LEAD IN HOMES
Benjamin S. Lim, Ph.D., Joseph J. Breen, Ph.D., Field Studies Branch,
Office of Toxic Substances, U.S. Environmental Protection Agency,
Washington, D.C. 20460; Kay Turman, B.S., Stan R. Spurlin, Ph.D., Midwest
Research Institute, Kansas City, Missouri 64110; Stevenson Weitz, Office
of Policy Development and Research, U.S. Department of Housing and Urban
Development, Washington, D.C. 20410.
ABSTRACT
In conjunction with the National Survey of Lead-Based Paint in Housing
conducted by the Department of Housing and Urban Development with
technical support from the Environmental Protection Agency, it became
necessary to develop a method of collecting household dust samples for
lead (Pb) analysis from a variety of surfaces. This sampling technique
needed to be portable, simple to use, and applicable to surfaces ranging
from window seats to baseboards to carpeted surfaces. In addition,
because the field sampling crews would be intrusive into the occupied
dwelling, it was desirable that adequate sample for analysis be collected
in 5 to 10 minutes. Three pumps fitted with 0.8 micron membrane filters
in cassette holders were evaluated at three flowrate (5, 20, and 100
L/min) and pressure head (20, 60, and 125 mm Hg) specifications in
combination with four different nozzle designs.
The results indicated that the higher flowrate sampling pumps not only
produced better collection efficiency in a shorter period of time, but
also were more reproducible in their collection efficiency. The final
system design adopted provides better than 80% collection efficiency
from a 4 sq. ft. area in less than 5 minutes. The system weighs less than 10 lbs and
is usable with a minimum of training.
INTRODUCTION
While developing the design for the HUD National Survey, HUD carried out
a pretest survey of several housing units in three counties of North
Carolina (Boyle et al. 1989). HUD provided the data from this survey to
EPA's Office of Toxic Substances to help select appropriate methods of
sample collection and analysis. The analytical methods used, quality
control samples, analytical results, and numbers of samples collected as
part of the design were reviewed.
In the review of the data generated in HUD's pretest survey, it was noted
that no data were presented on lead in household dust. However, a
procedure was presented in the survey design for sampling and analyzing
household dust (RTI Survey Design 1988). It was determined dust sampling
had been carried out, but that none of the samples collected indicated
lead above the method detection limit. It was decided to evaluate the
pretest survey techniques to determine why the data were not consistent
with previously observed results of lead in almost all dust samples
(Bornschein et al. 1986). Studies by Bornschein and Clark at the
University of Cincinnati showed that the majority of indoor dust had at
least 100 µg/g of lead. These concentration levels were predicted to be
detectable by the AAS protocol, if sufficient sample mass was collected.
Dust sample weights were not determined in the pretest survey to verify
adequate sample mass had been collected. The lack of measurable lead
levels suggests the sampling method did not collect a sufficient amount of
sample for analysis. Therefore, a limited evaluation of the sampling
technique was undertaken. The objective was to formulate a protocol to be
used in the HUD National Survey of Lead in Homes.
EXPERIMENTAL METHODS
The first step of the evaluation was to select a vacuum system adequate
for dust collection from carpets, windowsills, and floors. (The criteria
for selection of methods include: (1) less than 5 min to sample 4 ft2 of
sample area; (2) sampling apparatus weighs less than 10 Ib; and
(3) simplicity of operation.) The major problem with carrying out a
systematic method comparison of indoor dust sampling techniques is the
reproducible generation of representative dust samples on various
substrates. Dust is a complex mixture of organic and inorganic particles
of various shapes and sizes. Because no representative "standard" dusts
were available due to time constraints and lack of commercial
availability, the dust used throughout this evaluation was composited from dust
collected in vacuum cleaner bags at a personal residence and from an MRI
floor vacuum. Staff sieved the material to remove all particles greater
than 250 microns and any extraneous carpet fibers. This produced a fine
dust that appeared to settle in a pattern similar to that found in
households.
Carpet represented a unique problem in carrying out a systematic
evaluation. Staff used the percentage of weight of a representative dust
sample as one of the criteria to evaluate collection efficiency. However,
when vacuuming carpet, the vacuum picks up a large number of carpet fibers
along with the dust. This prevents an accurate determination of the
percentage of dust recovered from carpeted surfaces. For the National
Survey in which the Pb in dust was determined as micrograms of Pb per
square foot of sampled surface, this does not represent a major concern
unless the carpet fibers contained significant amounts of Pb. However, in
trying to assess the efficiency of a particular collection system, the
weight of the fibers collected could significantly bias the results in a
positive direction.
1. Method of Application to Surfaces
The method of application of dust samples to the surface areas needed to
produce a fairly even distribution of a known amount of material. A
fairly even distribution of dust over the surface would represent a
typical field sampling situation. Static effects and non-retrievable dust
in cracks or crevices in the natural dust settling patterns are more
difficult to assess and were beyond the scope of this investigation. A
child's toy flour sifter (Kitchen Play) was used to evenly apply the dust
to the tested surfaces. Evaluation of the holdup of the dust in the
sifter showed that greater than 90% of the dust reached the surface of the
sample area (Table 1). The ability to reproducibly deliver greater than
90% of the dust over a dust burden range of 60 to 229 mg was considered
adequate to proceed with this testing. The dust scattered fairly evenly
over the templated area.
Table 1. Dust Recovery for Sifter

Weight of dust added    Weight of dust determined
to sifter (mg)          on surface (mg)              % recovery
112                     102                          91.1
59.6                    57.0                         95.6
229                     220                          96.1
78.2                    70.8                         90.5
                                                     Avg = 93.3 ± 2.8%
2. Evaluation of Vacuum Collectors
Three vacuum pumps were evaluated, each with four different sample nozzle
configurations. Table 2 lists the three pumps along with their pertinent
specifications. The maximum flow rate for each pump was determined with
a clean sampling cassette containing both filter and pad attached to a
4-ft section of 3/8-in heavy-wall Tygon tubing. The flow rate was
measured using an NBS traceable anemometer.
Figure 1 illustrates the nozzle designs evaluated. Nozzle A, the design
used in the HUD pretest study, attaches to a small sampling tube that is
connected to the small sample port on the cassette. Nozzles B and C are
designs constructed at MRI for evaluation. The fourth option, Nozzle D,
is the open end of the cassette used directly as a sampling port. Each of
these nozzle designs was evaluated using each pump operating at its
maximum flow rate. Nozzle designs B and C evolved during the
investigation of sampling methods to meet the desired sampling time
constraints and were not evaluated simultaneously with Nozzles A and D.
The efficiency of each dust collection method was evaluated by determining
the weight of dust collected from a surface spiked with a known weight of
household dust. In determining the amount of dust collected in each
evaluation, a problem is the weight change in the cassette due to changes
in the amount of moisture on the filter. This problem would be especially
severe at high flow rates which could chill the filter with subsequent
condensation. A preliminary evaluation, sampling just air for 5 min,
showed very little change (< 5 mg) in a 2-L/min (SKC pump) system and
larger changes (> 15 mg) in the higher flow systems (i.e., Gast pump).
This problem was minimized by drying the cassettes at 80°C, removing them
from the oven and immediately sampling, redrying for 5 min, and
immediately weighing after removal. Weight gains associated with air
sampling were less than 2 mg on cassettes after this treatment.
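A minimal sketch of the recovery arithmetic implied by this procedure follows; the weights and the helper name are illustrative assumptions, not values from the study.

    # Sketch of the collection-efficiency calculation implied above
    # (all weights in mg; values and names are illustrative).
    def percent_recovery(cassette_before, cassette_after, dust_applied,
                         blank_gain=2.0):
        # Correct for the residual moisture gain (< 2 mg after oven
        # drying) observed on blank cassettes, then compute recovery.
        collected = cassette_after - cassette_before - blank_gain
        return 100.0 * collected / dust_applied

    print(percent_recovery(1250.0, 1385.0, 150.0))  # ~88.7% (hypothetical)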
Table 2. Pumps Evaluated for Dust Collection

                          Flow rate          Pressure head
                          specification      specification     Weight
Manufacturer   Model      (L/min)            (mm Hg)           (lb)
SKC            101-M      5                  25                1.2
Gast           302-100    100                125               10.8
Fisher         A-20       20                 60                7.2
Figure 1. Nozzles evaluated in dust collection study. [Diagram showing Nozzle A (sampling tube and flange), Nozzles B and C (square-cut Teflon designs), and Nozzle D (open-end cassette), each connected to the pump.]
Dust collection was determined on three different surfaces: (1) vinyl
tile; (2) construction-grade plywood (unpainted); and (3) a piece of
enamel-painted, construction-grade plywood. Table 3 shows the test
matrix. Attempts at evaluating the collection efficiency from carpet were
not successful due to continuing interferences from carpet fibers. These
fibers did not allow for accurate dust weight determinations, and
therefore resulted in artificially high recoveries.
Tables 4 and 5 show recovery results of the sampling evaluations. Each
determination was made in duplicate, and the average value reported. The
procedure used to collect the dust involved 50% overlapping passes made on
the surface from left to right and then top to bottom. Table 6 includes
the approximate time to vacuum a 1-ft2 area for each nozzle type and pump.
The time difference to cover the specified area was significant, and this
impacted the final selection between the systems. The early data on the
time required to vacuum resulted in fabricating a larger vacuum nozzle for
dust sampling (Nozzles B and C).
In evaluating the data in Tables 4 and 5, it is clear the larger flow rate
pump is necessary to achieve high levels of dust recovery. This finding
is contrary to reports by other researchers (Bornschein et al. 1985), who
have previously reported > 80% recovery using the SKC 5-L/min pumps. This
difference may be the result of sampling technique, time, or type of dust
(i.e., particle size) sampled. Our finding, however, is consistent with
the results reported in the HUD pretest survey. After considering the
sampling efficiency (amount collected on filter) and sampling time, Nozzle
C and the Gast pump operated at full capacity were selected to collect
household dust in the HUD National Survey of Lead in Homes. Table 7 shows
a summary of the recovery (%) performance of the Gast pump/nozzle combination system.
It is clear that significant research effort in this area of dust sampling is
warranted. Our efforts and decisions were based on an immediate need to
meet the short-term deadlines required for the HUD national survey. EPA
continues to investigate improved methods for household dust collection
studies (Wilson et al. 1991).
REFERENCES
1. Bornschein, R.L.; Succop, P.A.; Krafft, K.M.; Clark, C.S.; Peace,
B.; Hammond, P.B. (1987b) Exterior surface dust lead, interior house
dust lead and childhood lead exposure in an urban environment. In:
Hemphill, D.D., ed. Trace substances in environmental health - XX:
proceedings of University of Missouri's 20th annual conference; June
1986; Columbia, MO. Columbia, MO: University of Missouri; pp. 322-
332.
2. Boyle, K.E.; Gutknecht, W.F.; Neefus, J.D.; Stutts, E.S.; Williams,
E.E.; Williams, S.R.; "Design A National Survey of Lead-Based Paint
in Housing - Pretest Report". HUD Contract No. HC5796 (August 3,
1989).
3. Wilson, N.K.; Lewis, R.G.; Fortmann, R., et al.; "Evaluation of
Methods for Measurement of Exposure of Young Children to Lead in
Homes," to be presented at International Society of Exposure
Analysis, Annual Meeting, Atlanta, Georgia, November 1991.
Table 3. Test Matrix for Vacuum Dust Collection

                                     Surface type
Pump (operating flow)    Vinyl    Unpainted plywood    Enamel plywood
SKC (2 L/min)            2a       2a                   2a
Fisher (20 L/min)        2a       2a                   2a
Gast (100 L/min)         2a       2a                   2a

aNumber of replicate collections carried out at each sample weight
(50 mg and 150 mg).
Table 4. Recovery of Surface Dust Applied at 50 ± 10 mg/ft2

                           Average recovery (%), x (RPD)*
                                         Unpainted       Enamel-painted
Pump and nozzle          Vinyl           plywood         plywood
SKC (2 L/min) + A        6.5 (23.0)      2.0 (100)       7.5 (20)
SKC (2 L/min) + B        7.5 (6.7)       3.0 (33.3)      3.0 (0.0)
SKC (2 L/min) + C        10.5 (14.0)     12.0 (25.0)     5.5 (9.0)
SKC (2 L/min) + D        3.5 (100)       19.5 (79.5)     7.5 (6.7)
Gast (100 L/min) + A     45.5 (14)       46.5 (3.2)      53.5 (43.9)
Gast (100 L/min) + B     73.5 (7.5)      85.5 (6.4)      93.0 (13.4)
Gast (100 L/min) + C     90.0 (2.2)      92.0 (5.4)      60.0 (46.7)
Gast (100 L/min) + D     36.0 (16.6)     25.5 (37.2)     30.5 (1.6)
Fisher (20 L/min) + A    12.0 (0)        17.0 (29.4)     35.0 (67.9)
Fisher (20 L/min) + B    21.5 (7.0)      28.0 (10.7)     26.5 (17.0)
Fisher (20 L/min) + C    27.0 (7.4)      34.5 (10.1)     51.0 (11.8)
Fisher (20 L/min) + D    16.0 (6.2)      15.5 (100)      17.0 (0)

*RPD = relative percent deviation = (|x1 - x2| / mean of the duplicate pair) x 100.
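A small sketch of the duplicate statistics reported in Tables 4 and 5 follows; the example pair is hypothetical, chosen only to reproduce the form of one tabulated entry.

    # Sketch of the mean-and-RPD computation for a duplicate pair.
    def mean_and_rpd(x1, x2):
        mean = (x1 + x2) / 2.0
        rpd = 100.0 * abs(x1 - x2) / mean
        return mean, rpd

    # A duplicate pair of 91.0 and 89.0 would be reported as 90.0 (2.2),
    # matching the form of the Gast + C vinyl entry above (hypothetical).
    print(mean_and_rpd(91.0, 89.0))  # -> (90.0, 2.22...)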
Table 5. Recovery of Surface Dust Applied at 150 ± 25 mg/ft2

                           Average recovery (%), x (RPD)
                                         Unpainted       Enamel-painted
Pump and nozzle          Vinyl           plywood         plywood
SKC (2 L/min) + A        19.0 (10.5)     10.0 (20.0)     21.5 (25.6)
SKC (2 L/min) + B        2.0 (100)       4.5 (55.5)      3.5 (100)
SKC (2 L/min) + C        6.0 (50)        7.0 (71.4)      8.0 (12.5)
SKC (2 L/min) + D        16.5 (30.3)     6.0 (16.7)      14.0 (7.1)
Gast (100 L/min) + A     41.0 (87.8)     38.5 (13.0)     37.5 (4.0)
Gast (100 L/min) + B     69.5 (20.9)     82.5 (4.2)      77.5 (3.2)
Gast (100 L/min) + C     85.0 (1.2)      94.5 (2.6)      98.5 (8.6)
Gast (100 L/min) + D     35.0 (14.2)     35.0 (2.9)      25.5 (37.2)
Fisher (20 L/min) + A    22.5 (28.9)     16.0 (6.3)      21.0 (4.8)
Fisher (20 L/min) + B    43.0 (9.3)      42.0 (11.9)     29.0 (44.8)
Fisher (20 L/min) + C    68.0 (48.5)     50.5 (16.8)     53.5 (12.1)
Fisher (20 L/min) + D    17.0 (0)        10.5 (100)      23.5 (17.5)
Table 6. Time to Vacuum Surface

Nozzle    Time to cover area of 4 ft2 (min)a
A         24
B         5
C         5
D         32

aTime to execute 50% overlapping passes as specified in the text.
Visual inspection of collection efficiency not used for this
evaluation.
Table 7. Summary Recovery (%) Performance of Selected Pump/Nozzle Combination
(Gast (100 L/min) + C)

                      Average recovery (%), x (RPD)
                                    Unpainted       Enamel-painted
Dust loading        Vinyl           plywood         plywood
50 ± 10 mg/ft2      90.0 (2.2)      92.0 (5.4)      60.0 (46.7)
150 ± 25 mg/ft2     85.0 (1.2)      94.5 (2.6)      98.5 (8.6)
38 FIELD DEPLOYMENT OF A GC/ION TRAP MASS SPECTROMETER FOR TRACE
ANALYSIS OF VOLATILE ORGANIC COMPOUNDS
Chris P. Leibman, David Dogruel, Health and Environmental Chemistry
Group, HSE-9; Eric P. Vanderveer, Instrumentation Group, MEE-3, Los Alamos
National Laboratory, M/S K-484, Los Alamos, New Mexico, 87545
ABSTRACT
Field analytical support can directly impact the expense of environmental
clean-up by reducing the cost per analysis. Costs for sample packaging,
shipment, receiving, and management are eliminated if analyses are
performed on site. Field analytical support improves the chances that
schedules and monetary constraints associated with remedial activities
are met.
To reduce the cost associated with environmental clean-up we have
developed a purge and trap/GC/Ion Trap Detector (ITD) at Los Alamos
National Laboratory for the identification and quantification of volatile
organic compounds present at chemical waste sites. A custom purge and
trap/GC sampling system was integrated with a modified ITD to achieve
instrument operation consistent with field activities.
The instrumentation and associated methods parallel those outlined in
Method 8260, SW-846. Qualitative and quantitative analysis for the 68
target compounds and the associated internal standards and surrogates is
completed in an automated sequence that is executed every 25 minutes.
Sample purging, analysis, data reduction, and preliminary report
generation proceed automatically. The instrument can be operated in a
continuous mode, pausing only for sample loading and data file
specification. All data are archived on floppy disk for subsequent
review by a skilled analyst. Part-per-trillion detection limits can be
attained for many compounds from either 5 gram soil or 5 milliliter water
samples.
The GC/ITD is being deployed in a mobile laboratory which has been
designed to support volatile organic analysis. The use of the
transportable GC/ITD for support of environmental surveillance and the
characterization/clean-up of hazardous waste sites is being evaluated.
We will discuss field activities completed to date and the evolution of
field operation plans and field documentation. Additionally, we will
discuss the quality control we have implemented for field analysis using
the GC/ITD. Results obtained from blind quality control samples will be
presented.
One purpose of the presentation will be to examine problems encountered
with field analyses using the GC/ITD and to discuss any actions taken to
address those problems. Notably, we will discuss how small quantities
of water introduced into the ITD from the purge and trap sampling system
negatively impact quantitation and the steps we have taken to mitigate
those problems.
39 ACCURATE, ON-SITE ANALYSIS OF PCBs
IN SOIL — A LOW COST APPROACH
Deborah Lavigne, Quality Control Manager
Dexsil Corporation
One Hamden Park Drive
Hamden, Connecticut 06517
ABSTRACT
Polychlorinated Biphenyls (PCBs) are very stable materials of low
flammability used as insulating materials in electrical capacitors
and transformers, plasticizers in waxes, in paper manufacturing, and
for a variety of other industrial purposes.
There are many PCB transformers and capacitors still in service
throughout the United States today. The Environmental Protection
Agency estimates that there are 121,000 (askarel) PCB transformers,
20 million PCB contaminated mineral oil transformers and 2.8 million
large PCB capacitors currently in use. A certain percentage of this
equipment will leak, fail or rupture and spill PCB into the
environment each year (1).
Because of equipment leakage and widespread industrial dumping, PCBs
have appeared as ubiquitous contaminants of soil and water. Chemical
analysis for PCBs has been almost exclusively performed by gas
chromatography. Other analytical techniques such as nuclear magnetic
resonance (NMR) and liquid chromatography with UV detection are
alternative methods for PCB analysis but can only be successfully
applied where the suspected concentration level of PCB is greater
than 1000 ppm.
A new instrumental method has been developed to analyze for PCB
content using electrochemical methodology and a chloride specific
electrode to measure quantitatively the amount of chloride. The
instrument converts the chloride concentration into a PCB equivalent
amount of PCB in an oil or soil sample and gives a direct readout in
parts per million of PCB. The preparation steps involve extracting
the PCBs from the soil (not necessary for oil samples) and reacting
the sample with a sodium reagent to transform the PCBs into chloride
which can be subsequently quantified by the instrument. Oil samples
take about 5 minutes to prepare and soils about 10 minutes. One
operator can complete about 150 oil tests or 100 soil tests in an
eight hour day.
Although this paper will concentrate on the results of soil samples
obtained from a Superfund site analyzed electrochemically and by gas
chromatography, it demonstrates the accuracy and economic advantage
of employing the electrochemical procedure in analyzing both oil and
soil samples.
PCBs were first formulated as far back as 1881. Although they were
known to exist in the late 1800s, manufacturing on a commercial scale
did not start until 1929. It was not until 1977 that all U.S. PCB
production was halted.
In the late 1960s, PCBs were recognized as a potential environmental
problem, which was probably due to the unregulated maintenance and
handling of PCB-containing equipment. A series of studies has been
done to identify and quantify the distribution of PCBs in the U.S.
The overall distribution is shown in figure (1).
The wide use of PCBs was due to their non-flammable characteristics
as well as their thermal and chemical stability, low vapor pressures
at atmospheric temperature and high dielectric constants. Although
the use of PCBs has been banned in most applications, they are still
being used in vacuum pumps and gas-transmission turbines. PCBs have
been used as plasticizers in synthetic resins, in hydraulic fluids,
adhesives, heat transfer systems, lubricants, cutting oils and in
many other applications.
The EPA currently recommends two PCB-specific methods of analysis:
the GC/MS Method 680 for quantitating PCB isomer class totals and the
GC/ECD Method 8080 for quantitating Aroclors. Over the past decade,
the use of these instrumental methods has increased dramatically and
it is the purpose of this paper to provide an example of one type of
non-specific analysis of PCBs where simple inexpensive chemical
procedures can in fact, under certain circumstances, be a preferable
alternative to chromatographic methods.
The examples chosen in this paper are the analyses of PCBs in
transformer oil and soil. The tests involve measurements of PCB
concentration down to a few parts per million where, as a result of
extensive legislation, inaccurate results would likely evoke
expensive litigation and heavy fines. The different methodology and
apparatus will be described, the accuracy and precision of each
method discussed, and the costs of each analysis reported.
METHOD FOR THE ELECTROCHEMICAL DETERMINATION OF PCB IN OILS AND SOIL
This procedure utilizes sodium metal to remove chlorine from any PCB
present in the sample. The concentration of chloride contained in
the final aqueous extract can be determined electrometrically by
means of a chloride specific electrode. By immersing a chloride
specific electrode in the aqueous extract and measuring the EMF
produced, the chloride concentration and thus, the PCB content can be
estimated. The chloride concentration is exponentially related to
the electrode EMF and thus with a suitable electronic circuit design
the results can be presented digitally in ppm of the selected PCB on
an appropriate meter.
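The following sketch illustrates that exponential relation in the usual Nernst form; the 59.2 mV/decade slope and the EMF values are textbook assumptions under one common sign convention for an anion-selective electrode, not the instrument's internal calibration constants.

    # Sketch of the exponential EMF-to-chloride relation, Nernst form
    # E = E0 - S*log10(C).  Slope and EMFs below are assumed values.
    NERNST_SLOPE_MV = 59.2  # ideal slope per decade at 25 deg C (assumed)

    def chloride_ppm(emf_mv, cal_emf_mv, cal_ppm=50.0):
        # Single-point calibration against the 50 ppm chloride standard:
        # lower EMF than the standard implies higher chloride.
        decades = (cal_emf_mv - emf_mv) / NERNST_SLOPE_MV
        return cal_ppm * 10.0 ** decades

    print(round(chloride_ppm(182.0, 200.0), 1))  # -> 100.7 ppm (illustrative)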
This is a non-specific method, testing for the presence of chlorine
in the sample being examined. As a result, other chlorinated
compounds will cause a false positive result because the analysis
method reads all chlorinated compounds as PCB. False negative
results should not occur, however, because if no chlorine is present,
PCBs cannot be present.
SAMPLE PREPARATION
1/Oil Samples
0.2 ml of a solution of naphthalene in diglyme is added to 5 ml of
oil sample. To this mixture is added 0.4 ml of a dispersion of
metallic sodium in mineral oil and the mixture shaken for one
minute. 5 ml of buffer is then added to neutralize the excess sodium
and to adjust the pH to 2.0 to ensure the pH of the mixture is within
the operating range of the electrode. 5 ml of the aqueous layer is
then carefully decanted into a suitable vessel.
2/Soil Samples
10 g of the sample of soil is extracted by shaking for one minute
with 12 ml of solvent containing 2 ml of distilled water in 10 ml of
an immiscible hydrocarbon. The soil is then allowed to settle and
the supernatant liquid filtered through a column containing Florisil
to remove any moisture and inorganic chloride. 5 ml of the dry
filtrate is then treated with 0.2 ml of a solution containing
naphthalene in diglyme, followed by 0.4 ml of a dispersion of
metallic sodium in mineral oil and shaken for 1 minute. 5 ml of
buffer solution is then added and the aqueous layer allowed to
separate. 5 ml of the aqueous layer is then decanted into a suitable
vessel.
ANALYTICAL METHOD
The measuring instrument (Dexsil L2000™, Hamden, CT) is fitted
with temperature compensation, as the output of the chloride specific
electrode varies with temperature. Initially the temperature
compensation adjustment is set to the sample/electrode temperature.
The electronic measuring device is then calibrated employing a
solution containing chloride equivalent to 50 ppm. The electrode is
immersed in 5 ml of the calibration solution and appropriate
adjustments made to the calibration control to provide an output on
the digital meter of 50 ppm of chloride.
After rinsing and drying, the chloride specific electrode is immersed
into the 5 ml sample, gently stirred for 5 seconds and allowed to
stand for 30 seconds. The concentration of PCB in ppm is then read
directly from the digital output meter. The dynamic range of this
analytical procedure is from 5 to 2000 ppm. The precision varies
with the concentration. At concentrations between 50 and 2000 ppm,
it is +/" 10% • Between 5 and 50 ppm it is about +/- 2 ppm.
ANALYTICAL TESTS, RESULTS AND DISCUSSION
Oil Samples
In general, PCB-specific methods are more accurate than the
non-specific methods, but they are also more expensive, more lengthy
to run, and less portable. The L2000™ PCB analyzer provides
accurate analysis of PCB concentration in oil by testing for the
total amount of chlorine that is present in the sample.
The PCB concentration is calculated from the chloride concentration
using a conversion factor based on the Aroclor present in the
sample. If the specific Aroclor is not known, then the most
conservative estimate results from assuming that the PCB present is
Aroclor 1242. Aroclor 1242 contains the lowest percentage of
chlorine of the commercially produced PCB mixtures.
The 1260 setting is used when a sample contains Aroclor 1260, but not
the associated trichlorobenzene.
The Askarel setting is used for samples that contain Aroclor 1260 and
associated trichlorobenzene. Askarel accounts for the majority of
contaminated transformer oil samples and therefore this setting will
usually supply the most accurate results; however, if a 1242
contaminated sample is tested on the askarel setting, a false
negative will result if the sample contains between 50 and 120 ppm.
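A sketch of the underlying conversion follows, using the chlorine mass fractions stated later in this paper (42% for Aroclor 1242, 60% for 1260); the function and reading are illustrative, not the instrument's firmware.

    # Sketch of the chloride-to-PCB interpretation for two settings.
    CL_FRACTION = {"1242": 0.42, "1260": 0.60}  # chlorine mass fractions

    def pcb_ppm(chloride_ppm, aroclor="1242"):
        # Lower chlorine fraction -> larger (more conservative) PCB value.
        return chloride_ppm / CL_FRACTION[aroclor]

    cl = 10.5  # ppm chloride (hypothetical reading)
    print(pcb_ppm(cl, "1242"), pcb_ppm(cl, "1260"))  # -> 25.0  17.5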
Tables (1) and (2) show comparison results of transformer oils
contaminated with 1242 and 1260 (as Askarel) respectively, analyzed
by the PCB specific GC method versus the L2000™. The GC method
used to analyze the transformer oils in this study is EPA
600/4-81-045.
It is seen that accurate and precise results are obtained over a wide
concentration range of PCBs and although false positives can cause
unnecessary secondary testing, this method can be very economical
when used on transformer oil, which contains few sources of chlorine
other than PCB. Used crankcase and cutting oils, however, always
contain some chlorinated paraffins and almost always give false
positive results with non-specific testing. More expensive gas
chromatographic analysis is required when testing for regulated
levels of PCB in these matrices.
Soil Samples
The EPA Spill Cleanup Policy stipulates that a PCB spill, once
detected, must be cleaned up within 48 hours. (3) The EPA mandates
that cleanup actions are taken in this short time frame in order to
minimize the risk of human and environmental exposure to the spilled
PCB. In addition to the many PCB Superfund sites, there are still
many other PCB spill sites that have not made the National Priorities
List that still must be cleaned up.
One of the most time consuming steps in laboratory soil analysis is
the drying time. When a soil sample is received for GC analysis by
ASTM D3304, the sample is dried for 24 hours. The sample is then
weighed and placed in a soxhlet extractor and allowed to cycle for 8
hours. The sample must be completely dry, since the extraction
solvent (usually hexane or isooctane) is immiscible with water.
Extraction of a wet sample would yield a low result since the solvent
cannot fully interact with the soil to extract the PCBs. Typically,
90% of soil samples received for laboratory analysis by GC require
drying prior to extraction. With a 48 hour cleanup policy,
twenty-four hours of drying time could be a substantial set-back.
The content of the spilled material must ideally be determined at
once and the cleanup procedures begun immediately. The L2000™
allows the operator to respond immediately and to make a quick
evaluation of the concentration of PCB at the site. At an excavation
site where soil analysis is being performed, the decision can be made
immediately if more soil needs to be removed or if the excavation has
been carried far enough.
The results of soils obtained from a Superfund site and analyzed by
GC and the L2000™ are compared in Table (3). Since gas
chromatography can quantitate each Aroclor present, the GC results
are presented for each Aroclor actually detected in the soil
samples. The corresponding L2000™ results for that particular
sample are seen on the same line. These results are listed according
to each setting available to the analyst. The L2000™ does not
have the capability to quantitate each Aroclor; instead, all the
chloride present is interpreted according to the Aroclor setting
being used. For samples contaminated with an unknown Aroclor, the
prudent analyst would use the 1242 conversion to provide the most
conservative estimate.
Using the L2000™ as a screening method, the samples are evaluated
according to column 4 interpreting chloride as 1242. For the ten
samples analyzed, samples 2, 3, 4 and 6 would be considered as below
the Code of Federal Regulations limit of 10 ppm set by the EPA.
Since this is a site remediation, the results would indicate that
these areas can be considered "clean" and would not need further
treatment. If active clean-up were underway, these samples would
indicate that the excavation has gone far enough in that area.
The remaining samples indicate that there is still possible
contamination above the 10 ppm level. This would result in further
excavation being required to reach safe levels. If active excavation
is not underway then the samples can be further analyzed to determine
the specific Aroclor content. Whether the samples are further
analyzed or excavation is continued based on the 1242 estimate will
depend on the cost consideration of waiting for lab results while
paying for an idle excavation team and remediation equipment, or
excavating excess material while the crew and equipment are still on
site.
From the GC analysis it was determined that only two of the six
"positives" were "false positives" in that the total chlorine
indicated an equivalent of PCB above the regulatory 10 ppm limit
whereas GC analysis of those samples showed an actual level below 10
ppm.
The problem of contamination with chlorinated solvents is exemplified
by sample 1, where the L2000™ result is considerably higher than
the GC results. This high reading is again an overestimation of the
PCB present and would result in a conservative action being taken
such as retesting using GC or further excavation.
To make a systematic comparison of the GC results, which quantify each
Aroclor separately, to the L2000™ results, an equivalent amount of
a single Aroclor must be calculated from the sum of all Aroclors
detected. For the results given in this paper Aroclor 1242 was
chosen as the basis for equating the L2000™ results with the GC
results. The equivalent L2000™ reading, which converts the
chloride concentration to PCB using a single Aroclor conversion
factor, can then be calculated. The direct conversion of ppm 1260 by
GC to its equivalent in ppm 1242 is based on the percent chlorine
difference of 1242, 42%, versus 1260, 60%, according to the equation:

L2000 equivalent ppm 1242 = (X)(60/42)

where: X = ppm 1260 by GC
       60/42 = ratio of percentage chlorine
For example, the GC results for the first soil sample shown in Table
(3) of 11.59 ppm 1242 and 2.24 ppm 1260 should theoretically read
14.79 on the L2000's 1242 setting. The value of 14.79 is attained by
converting the GC 1260 value to 1242 according to the equation above,
and adding it to the GC value for 1242. The actual reading on the
L2000 1242 setting was 25.0 ppm, which is significantly higher than
the theoretical prediction. The false high reading can probably be
attributed to other chlorinated compounds being present in the soil
that the GC does not detect. Nevertheless, from a regulatory point
of view a false positive is preferable. A more realistic and
expected result is seen from the results for the seventh soil
analysis shown in Table (3), and once again a theoretical
concentration of 1242 can be predicted from the conversion equation.
The GC result for that sample was 92.66 ppm 1242 and 15.08 ppm 1260.
15.08 ppm 1260 converts to 21.54 ppm 1242, which when added to 92.66
ppm 1242 gives a theoretical projection of 114.2 ppm 1242 as the
L2000 result. The actual 1242 result given by the L2000 was 122.7,
which is within the +/- 10% accuracy level accepted for GC analysis.
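The worked comparisons above reduce to a one-line conversion; the sketch below simply reproduces both of the paper's examples.

    # Sketch of the GC-to-L2000 equivalence arithmetic worked above.
    def l2000_equiv_1242(gc_1242, gc_1260):
        # Re-express the 1260 fraction as its 1242 equivalent (60/42
        # chlorine ratio) and add the 1242 fraction directly.
        return gc_1242 + gc_1260 * (60.0 / 42.0)

    print(round(l2000_equiv_1242(11.59, 2.24), 2))   # -> 14.79 (sample 1)
    print(round(l2000_equiv_1242(92.66, 15.08), 1))  # -> 114.2 (sample 7)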
Table (4) shows a comparison of results from soil samples obtained
from a PCB spill site.
Like the oil samples, soil sample concentrations of PCBs are also
based on the detection of chlorine; however, it is only chlorine
present from an organic source that would cause a false positive, as
seen in the first example above, rather than an inorganic source such
as road salt or sea salt. Some possible sources of chlorine
contamination are pesticides and solvents.
One benefit to the laboratory personnel analyzing soils is that using
the L2000™ first to screen PCB content allows the GC chemist to
make an accurate dilution right away. The appropriate dilution is to
1 ppm, and one chromatographic analysis is approximately 40 minutes
long. The analysis time can certainly add up with trial-and-error
dilutions being made, especially if there are many samples waiting to
be analyzed. Knowing the right dilution also prevents overloading
the column with PCB contamination.
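A minimal sketch of that dilution planning step, assuming the 1 ppm target level named above:

    # Sketch of choosing a dilution from the screening result.
    def dilution_factor(screen_ppm, target_ppm=1.0):
        # Fold-dilution that brings the extract near the GC target level.
        return max(1.0, screen_ppm / target_ppm)

    print(dilution_factor(122.7))  # ~123-fold for a 122.7 ppm screen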
The L2000™ system can analyze to less than 5 ppm in oil and soil,
can be used in the field by non-technical personnel, and requires
less than 10 minutes to run an analysis. These attributes make the
instrument an excellent alternative to gas chromatographic analysis,
especially for soil samples.
Although this new technique does not replace gas chromatography, it
can significantly reduce the number of samples requiring GC analysis,
and therefore allow a greater number of samples to be run at a lower
cost.
REFERENCES
1./Environmental Progress and Challenges: EPA's Update. United States
Environmental Protection Agency. EPA-230-07-033. August 1988.
2./Finch, S.R., Lavigne, D.A., Scott, R.P.W. "One Example Where
Chromatography May Not Necessarily Be the Best Analytical Method."
Journal of Chromatographic Science. July 1990. pp. 351-356.
3./40 CFR 761.125. Office of the Federal Register. Rev. July 1, 1989.
4./PCB Equipment, Operations and Management Reference Manual. SCS
Engineers, Inc.
FIGURE 1
U.S. Distribution of PCBs (4)
Presently in use 750 million pounds 60%
In landfills and dumps 290 million pounds 23%
Released to environment 150 million pounds 12%
Destroyed 55 million pounds 5%
Total production 1,245 million pounds 100%
TABLE 1
RESULTS OF GC ANALYSIS OF PCBs (1242) IN TRANSFORMER OIL
VS
RESULTS OF L2000 ANALYSIS

Standard     Results from GC Analysis    Results from L2000 Analysis
(ppm 1242)   (ppm 1242)                  (ppm 1242)

0            None Detected (< 2 ppm)     0.6
             None Detected (< 2 ppm)     0.9
             None Detected (< 2 ppm)     1.5
             MEAN: -                     MEAN: 1.0
             STD. DEV.: -                STD. DEV.: 0.4

10           10.0                        9.7
             10.8                        9.3
             10.4                        9.7
             MEAN: 10.4                  MEAN: 9.6
             STD. DEV.: 0.4              STD. DEV.: 0.2

50           51.6                        50.7
             52.3                        46.2
             50.3                        51.4
             MEAN: 51.4                  MEAN: 49.4
             STD. DEV.: 1.0              STD. DEV.: 2.8

100          96.8                        104.9
             95.8                        95.2
             94.2                        95.4
             MEAN: 95.6                  MEAN: 98.5
             STD. DEV.: 1.3              STD. DEV.: 5.5

500          474.0                       522.0
             482.2                       492.0
             497.0                       470.0
             MEAN: 484.4                 MEAN: 494.0
             STD. DEV.: 11.7             STD. DEV.: 26.1
TABLE 2
COMPARISON OF RESULTS FROM THE ANALYSES
OF OIL SAMPLES CONTAINING AROCLOR 1260 (ASKAREL A):
GAS CHROMATOGRAPHY VS L2000

Standard     GC Analysis Results    L2000 Analysis Results
(ppm 1260)   (ppm 1260)             (ppm 1260)

10           9.482                  9.2
             9.241                  9.5
             9.186                  10.6
             MEAN: 9.303            MEAN: 9.8
             STD. DEV.: 0.129       STD. DEV.: 0.6

50           50.923                 53.7
             48.409                 48.6
             51.883                 50.8
             MEAN: 50.405           MEAN: 51.0
             STD. DEV.: 1.465       STD. DEV.: 2.1

250          233.911                255
             232.007                262
             230.215                261
             MEAN: 232.044          MEAN: 259
             STD. DEV.: 1.509       STD. DEV.: 3.8

500          493.232                530
             486.400                519
             472.423                510
             MEAN: 484.018          MEAN: 520
             STD. DEV.: 8.661       STD. DEV.: 10.0
TABLE 3
COMPARISON OF SUPERFUND SITE SOIL ANALYSES:
GAS CHROMATOGRAPHY VS L2000 READINGS

                GC RESULTS (ppm)             L2000 RESULTS (read as, ppm)
Sample     1242      1254      1260          1242       1260       ASKAREL
1          11.59     -         2.24          25.0       17.5       10.6
2          0.32      -         0.25          0.9        0.6        0.4
3          0.33      2.64      1.78          7.9        5.5        3.3
4          -         -         0.20          2.8        2.1        1.4
5          5.00      -         2.53          10.6       7.5        4.6
6          0.77      0.80      0.35          7.5        5.3        3.2
7          92.66     -         15.08         122.7      85.8       51.7
8          7.18      1.54      0.08          11.5       8.1        4.9
9          7.87      3.25      0.30          13.0       9.2        5.6
10         -         -         9.43          16.2       11.4       6.9
TABLE 4
COMPARISON OF PCB SPILL SITE SOIL ANALYSES:
GAS CHROMATOGRAPHY vs L2000

GC also detected low levels of Aroclor 1242/1254 in five samples
(0.30, 0.10, 0.97, 0.38, and 0.68 ppm; row positions not legible).

           GC RESULTS        L2000 RESULTS (read as, ppm)
Sample     1260 (ppm)        1242        1260        ASKAREL
1          6.09              10.8        7.5         4.5
2          41.59             62.5        43.8        26.4
3          0.40              5.7         4.0         2.4
4          0.05              6.1         4.3         2.6
5          6.67              14.8        10.3        6.2
6          4.42              7.3         5.1         3.1
7          206.0             404.0       281.0       167.5
8          1699.0            >2000       1642.0      996.0
40 HOW GOOD ARE FIELD MEASUREMENTS?
Llewellyn R. Williams, Director
Quality Assurance and Methods Development Division
EPA - EMSL - Las Vegas, Nevada
Abstract
Quick! Cheap! High throughput! We have all heard these
words associated with field measurement methods. But what does
the record show with respect to the application of these new
technologies and the quality, acceptability, and usability of
data produced in the field? Are the touted advantages of field
screening methods and field analytical methods being fully
realized to measure, monitor, or characterize waste sites or
waste streams? Highlights will be presented from the Second
International Symposium on Field Screening Methods that was
conducted earlier this year. Technologies presented ranged from
simple chemical and immunochemical test kits to highly
sophisticated fieldable instrumentation for analysis of toxic
metals and organic chemicals in all environmental media. Case
studies indicate the current utility of several key technologies
for monitoring and site characterization. In addition,
information will be furnished on field measurement technologies
recently demonstrated under the Superfund Innovative Technology
Evaluation program. Some institutional inertia appears to hinder
the broader acceptance of field-produced data. One thing remains
clear: the new field technologies do not, nor should they be
expected to, replace operator skill and judgement in generating
environmental data. But they do constitute a battery of new and
available tools that can improve the confidence of decisions
based upon such data.
41 ASSESSMENT OF POTENTIAL PCB CONTAMINATION INSIDE A BUILDING:
A UNIQUE MULTI-MATRIX SAMPLING PLAN
William W. Freeman, Principal Scientist, Roy F. Weston, Inc.
Weston Way, West Chester, Pa. 19380
ABSTRACT
Characterization of potential PCB contamination inside a
large, active market and restaurant area was required. There
was the possibility for PCBs to have entered the facility as
a result of construct ion/demolition activities taking place in
the area above the market.
This case history describes a unique assessment approach,
including a multi-matrix sampling procedure. A representative
number of samples had to be collected from this facility which
occupies approximately 175,000 ft2 in area. The sampling
also had to be performed in a practical manner, with the least
possible disruption of routine daily activities.
A visual inspection and reconnaissance of the market was first
conducted in order to identify entry points for potential PCB-
containing materials such as dust, water and debris. Six
different matrices were identified for sampling, including
air, dust, water and sediments. Wipe samples were also taken
from non-porous surfaces such as counter tops and fixtures.
Destructive (chip) samples were taken from porous solid
surfaces such as wood and insulation materials. Composite
samples were taken from some areas. Quality Control samples,
including items such as duplicates and field blanks were also
taken.
The sampling plan is discussed in detail, including equipment
used, statistics, and the selection of random and biased
sample locations. Analytical procedures are also reviewed,
including extraction techniques and quantitation limits.
1. INTRODUCTION
Roy F. Weston, Inc. was retained by the Department of Health
of a large Eastern United States city to collect and analyze
environmental samples from within a large urban food market.
The purpose of this investigation was to assess the extent of,
or confirm the absence of, polychlorinated biphenyl (PCB)
contamination.
Demolition activities, including PCB removal, had been on-
going in the open shed which is located above the market. The
shed is an elevated area with a roof, but is open at one end.
The market is approximately 175,000 ft2 in area, and the
platform of the shed is supported by steel columns within the
market area. The ceiling of the market is suspended from the
shed steel structure, and consists of a wood layer, covered
with roofing paper and sheet metal. The interstitial space
between the market's wooden ceiling and the underside of the
shed structure houses a combined support-beam and stormwater
drainage system.
The market is primarily a varied food market with a major area
devoted to restaurants of different types to serve the people
from the office buildings surrounding the market. Concern had
previously centered around the potential for PCB contamination
entering the market by way of leaks through or around the
ceiling structure from the demolition activities on-going in
the shed above. This concern was accentuated when the
extremely heavy rains of one day deluged the shed and resulted
in severely heavy leaks into the market below.
This report summarizes the assessment carried out and
environmental samples collected from surfaces within the
market. Field sampling activities were conducted by Roy F.
Weston, Inc. personnel on a Sunday, while the market was
closed. All sampling was conducted in Level "D" personnel
protection requirements except during dust sampling when Level
"C" protection was used.
It should be noted that this sampling effort included 55
samples from a variety of matrices (e.g., sediments, solids,
air, dust, wipes of surfaces, water). Although this is a
relatively small number given the size of the market, it is
considered representative of conditions at that time within
the market regarding assessing potential PCB contamination.
Roughly one-half of the samples were biased toward areas of
higher probability for contamination (such as areas of leaks
or visible staining) and one-half were random throughout the
active, occupied areas of the market.
2. SCOPE OF WORK
The sampling team first performed a visual inspection of the
market in order to identify potential entry points for air,
dust, water, oil and/or debris from the shed above into the
market and delineate initial sampling locations. These entry
points, and potentially affected areas below these points,
were candidates for sample collection. The visual inspection
to identify potential entry points consisted of looking for
areas with distinct discoloration/staining, watermarks, rust,
damaged roof structure, clogged drainages, or penetrations in
the roof. The visual inspection was supplemented with data
from previous reports of a testing effort designed to
determine PCB contamination within the interstitial space.
Sampling locations were also identified for the biased and
unbiased samples.
The battery limits of this study were from the floor of the
market to ceiling level and within the four walls.
The following types of samples were collected by the field
sampling personnel:
1. Wipes - From non-porous surfaces such as metal
poles/beams, counter tops and market furniture,
fixtures, food handling/preparation equipment, and
floor drains.
2. Destructive - From porous hard surfaces such as
wood, pipe insulation and rusted pillars.
3. Sediment - Standing sediment/solids from drainage
outfalls, or "catch" samples from plastic or metal
covers over stalls.
4. Water - Drippings from plastic catch areas, roof
leaks, drains, or standing water.
5. Dust - Primarily from floor sweepings and in
corners.
6. Air - Continuous air samplers to sample the ambient
air in the market.
The actual sampling activities took place three days after the
initial reconnaissance visit.
3. SAMPLING STATISTICS
A total of 55 field samples (and 14 quality control samples)
were collected. Table 1 gives the breakdown of the various
types of samples collected. All the samples were preserved on
Table 1
Summary of Environmental Samples Taken at the Market

                        Number of Samples for PCB Analysis
Type of Samples         Test   Control   Duplicate   Field Blank   Total
Wipes                    27       2          2            2          33
Destructive               3       -          1            -           4
Dust                      7       -          1            -           8
Sediment/solids           6       -          1            1           8
Water
  a) Unfiltered           4       -          1            -           5
  b) Filtered             4       -          1            -           5
Air                       4       1          -            1           6
Totals                   55       3          7            4          69
ice and transported to the Weston Analytics Laboratory
following completion of sampling activities. The individual
sampling procedures are discussed in subsequent sections.
4. SAMPLING PROCEDURES
4.1 WIPE SAMPLING
Wipe sampling was conducted on 27 non-porous surfaces within
the market. Samples were collected from a variety of surfaces
such as counter tops, eating tables, freezers, steel pillars,
cooking range hoods, glass cases, and others. The sampling
locations were spread out over the entire market in order to
get representative coverage. See Appendix A for complete
procedure.
The wipes were divided into 2 categories:
• Biased - discrete
• Random - composite or discrete
The biased samples were collected as discrete samples. These
samples were collected where visual observation and review of
previous reports suggested a potential for PCB contamination.
The random samples were collected at various locations
distributed within the building. These samples were collected
either as discrete or composite samples. The discrete samples
were collected on structures such as pillars, food cabinets,
and range hoods. Composite samples were generally collected
on larger surfaces such as long counter tops.
The composite samples were collected by taking three separate
hexane-soaked gauze pads, wiping each pad in various locations
over a given surface, and then collecting them in one sample
bottle. The sampling area for composite samples was three times
as large as that for the discrete samples (300 cm2 vs. 100 cm2).
The analytical results were then adjusted to consistent units of
100 cm2 for all samples to facilitate data comparison.
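A minimal sketch of that normalization follows; the analyte mass is a hypothetical value, not a result from this survey.

    # Sketch of normalizing wipe results to a common 100 cm2 basis.
    def per_100cm2(analyte_ug, wiped_area_cm2):
        # Composite wipes cover 300 cm2, discrete wipes 100 cm2; scaling
        # to 100 cm2 makes the two sample types directly comparable.
        return analyte_ug * 100.0 / wiped_area_cm2

    print(per_100cm2(0.9, 300.0))  # composite wipe -> 0.3 ug per 100 cm2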
A total of six additional wipe samples were collected as
quality control (QC) samples. Of these, two were controls,
two were duplicates, one equipment blank and one field blank.
The control samples were collected from surface areas within
the market which suggested the least likelihood of being
contaminated (such as inside a closed food cabinet). The
equipment blank sample was collected by wiping the aluminum
foil covered template with the hexane soaked gauze and
analyzing for PCBs. The field blank sample consisted of a
laboratory prepared wipe sample gauze pad in the sample
container taken to the market and returned for analysis along
with the other samples.
The samples were placed in 250 ml wide-mouth glass jars (soil
sample type jars) and preserved at 4°C by ice.
4.2 DESTRUCTIVE (CHIP) SAMPLING
Three destructive samples were taken of hard porous surfaces.
Three different porous media were selected. They were a
piece of wood, rust from a pillar and wrapping from pipe
insulation.
All the samples selected were biased samples from different
areas of the market. The rust sample from a pillar was
collected near the wood ceiling. The insulation wrapping
sample was collected from a drain pipe which comes down from
the shed. The wood sample was collected from a temporary wall
which was erected adjacent to a pillar in a corner of the
market. A duplicate sample of the wood was taken at the same
location.
Samples were collected using a decontaminated stainless steel
trowel or chisel. Samples were collected in a 250 ml wide-mouth
jar with teflon-lined lid and preserved at 4°C using ice
placed in an ice chest. See Appendix B for complete
procedure.
4.3 DUST SAMPLING
The team collected eight dust samples including one duplicate.
Dust samples were collected from a variety of surfaces and
locations. Two samples were collected from floor sweepings of
two aisles, one sample from on top of the men's room roof, one
from the louvers of an air exchange unit, one from a wall fan,
one from the cold storage room screen, and the last one from
a pipe near the roof directly across from an air conditioning
unit. All samples were preserved on ice as previously
described.
4.4 SEDIMENT/SOLID SAMPLING
A total of six sediment samples were collected from the
market. Additionally, one duplicate sample and a field blank
were also taken. Four sediment samples were collected in
separate locations from the plastic suspended from the ceiling
to catch water and solids which had dripped or fallen in from
the shed above. These plastic sheets are predominantly
located along the perimeter of the market.
Except in one area, no water leaks were visually evident on
the day of the sampling. The sediment and water collected on
the plastic was potentially an accumulation from prior
infiltration. It should be noted that during the initial site
visit these plastic sheets were filled with water and some
were overflowing due to the severe rain storm. However, by
the day on which the sampling took place, much of the standing
water was absent. Moist sediment and some water remained.
Except for one sample which was from a floor drain, the other
samples were from solids/sediments which apparently dropped
from the ceiling level. All samples were preserved on ice as
previously described.
4.5 WATER SAMPLING
Water samples were collected in areas where there was stagnant
or standing water. These areas included canopies of some of
the stores and the suspended plastic surrounding some of the
roof leak areas. The water samples were collected in 950 ml
amber glass jars with teflon-lined caps and preserved at 4°C.
Eight water samples were taken from four locations (two
samples per location) plus two duplicates (one location). For
each location, one of the two samples was filtered and then
analyzed for PCBs while the other sample was not filtered
prior to analysis. This protocol was used to assess if the
PCBs in the sample, if any, were potentially associated with
the water phase or the suspended sediment/solid phase.
The water samples collected were from roof leakage which
either collected onto the plastic sheeting beneath the leak or
from a bucket under the leak or from a trough at roof level.
4.6 AIR SAMPLING
Five air samples were taken at locations inside and
immediately outside the market building. Four samples were
collected at inside locations, two from diagonally opposite
corners and two samples within the active space of the market.
One sample was drawn from a location outside the building to
serve as a control sample. One blank sample tube was also
analyzed as a field blank.
Air samples were drawn over an eight-hour period to determine
time weighted average concentrations and for ease of
comparison to OSHA Permissible Exposure Limits (PEL) and
Threshold Limit Values (TLV) as set by the ACGIH. Collection
of the analyte was accomplished as outlined in the National
Institute of Occupational Safety and Health (NIOSH) Method
5503 as modified by Versar, Inc. to provide for the sampling
of a greater volume of air and to provide a lower limit of
detection of the analyte. The analytical methods and
detection limits are discussed in detail in Appendix C. The
control sample was collected from outside the building near
the entrance to the market. This was accomplished by running
the source tube outside while the sampler remained inside the
market.
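The eight-hour samples support a simple time-weighted-average computation, sketched below; the flow rate and collected mass are hypothetical assumptions, and NIOSH Method 5503 governs the actual sampling and analytical details.

    # Sketch of the time-weighted-average concentration calculation.
    def twa_mg_per_m3(analyte_ug, flow_l_per_min, minutes):
        volume_m3 = flow_l_per_min * minutes / 1000.0  # liters -> m3
        return (analyte_ug / 1000.0) / volume_m3       # ug -> mg

    # 12 ug collected at 1 L/min over 8 h (480 min) -> 0.025 mg/m3,
    # which could then be compared against the PEL/TLV (hypothetical).
    print(round(twa_mg_per_m3(12.0, 1.0, 480), 4))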
5. RESULTS AND CONCLUSIONS
The analytical results are confidential to the client, but, in
summary, none of the active areas of the market were found to
contain PCBs at levels above the analytical detection limits.
This includes the air and wipe samples from the restaurant
areas. Trace amounts of PCBs were found in a few of the
sediment and chip samples from isolated perimeter areas of the
facility. These appear to be the result of an accumulation of
residues from various small leaks over a period of time, and
can be removed by routine maintenance operations.
6. SUMMARY
This survey of the extent of potential PCB contamination in
the air, dust, sediments, etc. and on various porous and non-
porous surfaces within the market was completed within a week.
This included one day for reconnaissance and identification of
sample locations, one day of sampling, and production of
validated analytical results within 72 hours by the laboratory.
The sampling scheme represented, both statistically and
logistically, a good survey of various matrices within the
market. The results, indicating a lack of detectable
quantities of PCBs in the active market and restaurant areas,
were important to the continued safe operation of these
facilities.
1-319
-------
APPENDIX A
Procedure for Wipe Sampling
1 Prior to field activities, 3"x3" gauze pads are Soxhlet-
extracted in the laboratory with hexane and placed in the
laboratory-cleaned glass sample containers equipped with
teflon-lined caps.
2 Bring dedicated, prepared gauze pads (secured in glass
containers) to sample site. Select appropriate sample
location and area. Photograph area to be sampled, if
necessary.
3 Measure area to be wiped or use dedicated aluminum
template to mark area. Generally, a 100 cm2 area is sampled;
however, a smaller or larger area may be wiped, depending on
the degree of cleanliness encountered in the field. Record
size of area to be sampled (see the normalization sketch
following this procedure).
4 Put on a clean pair of surgical gloves.
5 Hold gauze pad with clean glove and initially wipe sample
area in a horizontal direction using a forward and backward
motion. Wipe sample area a second time with a clean portion
of the same gauze pad in a vertical direction using a forward
and backward motion.
6 After wiping, replace the gauze pad in the appropriate
laboratory-prepared container and secure the teflon-lined lid
on the sample container.
7 Duplicate wipe samples will be taken in an area directly
adjacent to the original sample location.
8 Attach the sample label with the sample identification
number and other appropriate sample information. Apply
custody seals and place in a plastic self-sealing bag.
9 Record all pertinent information in the site log and, if
appropriate, on the site map, and complete the sample analysis
request form and chain-of-custody record.
10 Follow the sample documentation, packaging, shipment, and
chain-of-custody procedures.
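Because step 3 allows the wiped area to vary, results are most comparable when normalized to the conventional 100 cm2 reporting area (the units used for the wipe detection limits in Appendix C). A minimal sketch; the function name and example values are hypothetical:

    # Hedged sketch: normalize a wipe result to ug per 100 cm2.
    # Function name and the example numbers are illustrative only.

    def loading_per_100cm2(mass_ug, wiped_area_cm2):
        """Convert total ug recovered on the wipe to ug/100 cm2."""
        return mass_ug * 100.0 / wiped_area_cm2

    # e.g. 0.5 ug recovered from a 50 cm2 wipe -> 1.0 ug/100 cm2
    print(loading_per_100cm2(0.5, 50.0))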
1-320
-------
APPENDIX B
Procedure for
Destructive/Chip Sampling
1 Select appropriate sample location and record/mark
location and area. Photograph area to be sampled.
2 Put on a clean pair of surgical gloves.
3 Using a decontaminated chisel and hammer, proceed to chip
material to a depth of less than 2 cm, taking care not to
scatter pieces outside the marked area. Clean, dedicated, or
decontaminated aluminum pans or dust pans may be used to
shield the area to prevent pieces from scattering. Record
area and depth of sample.
4 Using a dedicated brush and dust pan or tweezers, collect
the sample and transfer to an appropriate laboratory-cleaned
container and secure the teflon-lined lid on the container.
5 Duplicate samples will be taken by homogenizing the
sample material by mixing in an aluminum container. The
duplicate portion of the sample will be taken from the same
container as the original sample.
6 Record all pertinent information in the site log and, if
appropriate, on the site map, and complete the sample analysis
request form and chain-of-custody record.
7 Follow the sample documentation, packaging, shipment, and
chain-of-custody procedures.
1-321
-------
APPENDIX C
Analytical Methods for PCBs
The WESTON Analytical Laboratory used the following methods
for analysis of PCB samples. Method references are to EPA SW-
846 (Test Methods for Evaluating Solid Wastes).
I Solid and Water Analysis
• Analytical Method - EPA Method 8080
This is a gas chromatographic method for analysis of
PCBs in various matrices. Prior to use of this method,
appropriate sample extraction techniques, as described
below, are employed.
• Extraction Methods
Solids: Method 3540, Soxhlet Extraction
Water: Method 3520, Liquid/Liquid Extraction
• Detection Limits
Both the extraction methods and the analytical methods
referenced above will, in most instances, yield a
detection limit of 0.1 ppm with acceptable accuracy.
Some matrices, such as those with a high organic and/or
bituminous content, can present interferences to
achieving this low detection limit.
Appropriate "cleanup" procedures, such as the florisil
method (3620) are employed to eliminate the
interferences, if necessary. However, there may be
instances, such as PCBs on oil-based paint surfaces or
PCBs on tar-based roofing materials, where precise
analysis down to 0.1 ppm is not possible. Detection
limits in these cases can be in the 0.5 ppm range.
Detection limits for wipe samples are in the 0.2 to 1.0
ug/wipe (100 cm2) range.
II Air Sample Analysis
• Method: NIOSH 5503
This is the standard NIOSH method for analysis of
Florisil sorbent samples. The normal working range for
this procedure yields results with a detection limit of
10 ug/m3. There is a modification employed by Versar,
Inc., in New York, in which a larger Florisil sample
tube and larger air volume are employed. This can
generate accurate results in the 0.1 to 1 ug/m3 range.
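The arithmetic behind the modification is simple: the analysis fixes the detectable mass on the sorbent, so the concentration detection limit falls in proportion to the volume of air sampled. A minimal sketch, with an assumed mass LOD and air volumes chosen only to reproduce the figures quoted above:

    # Hedged sketch: concentration LOD for a fixed analytical mass LOD.
    # The 5 ug mass LOD and both air volumes are assumptions picked to
    # illustrate the 10 ug/m3 -> 1 ug/m3 improvement cited in the text.

    def concentration_lod_ug_m3(mass_lod_ug, air_volume_m3):
        return mass_lod_ug / air_volume_m3

    print(concentration_lod_ug_m3(5.0, 0.5))  # standard tube/volume: 10 ug/m3
    print(concentration_lod_ug_m3(5.0, 5.0))  # 10x air volume:        1 ug/m3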
III Quality Control (QC)
All standard EPA and/or Contract Laboratory Program (CLP)
laboratory and field QC protocols are followed by WESTON
including use of blank samples and replicate samples.
These QC samples are an integral part of the analytical
scheme and are important in substantiating validity of the
analytical data. All sample data and QC data are reviewed
and validated prior to issuing a report.
1-322
-------
42 COMPARISON OF THE HNU-HANBY FIELD TEST KIT PROCEDURE
FOR SOIL ANALYSIS WITH A MODIFIED EPA SW-846
5030/8000 PROCEDURE
John D. Hanby, Technical Director, HNU-Hanby; Bruno Towa,
Chemist, Hanby Analytical Laboratories; HNU Systems, Inc., 160
Charlemont Street, Newton Highlands, Massachusetts 02161-9987
ABSTRACT
A sample generation procedure for the preparation of gasoline
in soil standards was developed to produce homogeneous blends
of consistent concentration and stability. Three sets of these
standards prepared at different concentrations were analyzed
utilizing the HNU-Hanby Field test procedure for analysis of
soils and a modified EPA SW-846, 5030/8000 GC procedure. The
relatively high degree of consistency of the concentrations
within each set of the gasoline/soil standards allowed a
statistically meaningful comparison between the accuracy of the
two methods.
INTRODUCTION
Concurrent with the development of methodologies for the
chemical analysis of environmental samples has, perforce, been
the search for representative analytical standards. This
principle, of course, is "sine qua non" to accurate chemical
analysis, but probably in no case is it more de rigueur than in
the analysis of soils. Soil matrices vary from relatively
simple configurations, e.g. quartz sand, to those hellish
quagmires we call "toxic waste sites".
This investigation focuses on a problem of sufficient magnitude
to be of large scale environmental concern yet still retain
tractability in an analytical sense. Underground fuel tanks
are generally bedded in sandy soil, primarily because of
structural concerns for tank and associated line integrity. A
procedure was developed at HNU-Hanby Environmental Laboratories
which facilitated the preparation, packaging, and analysis of
gasoline contaminated sand samples which could produce fairly
large numbers (100-200) of homogeneous blends of the mixture at
different concentrations. Three different sets of the
gasoline/sand standards were prepared and subjected to analysis
using the HNU-Hanby field test kit extraction/colorimetric
procedure and modified EPA SW-846 5030/8000 GC methods.
1-323
-------
PROCEDURE
An alluvial sand from a Houston nursery, typical "river bank
sand", was obtained in sufficient quantity to allow for the
preparation of a large number of samples for this as well as
subsequent investigations. First experiments with this soil in
attempts to produce homogeneous mixtures of fuel contaminated
aliquots were frustratingly unsuccessful. Microscopic
examination of the soil revealed that it typically contained
relatively large "chunks" of clay interspersed with the more
abundant quartz sand. The obvious remedy for this problem was
to wash the more water-dispersive (and much more organically
sorbent) clays out of the soil matrix. This washing was
performed with warm laboratory tap water on approximately 25
kilograms of the soil. Subsequent drying of batches of the
soil was carried out, rather tediously, in a laboratory vacuum
oven at approximately 80°C and 29 in. Hg. A corollary effect of
this vigorous approach to the removal of the clay turned out to
be sterilization of the soil, as evidenced by the fact that
subsequent washings of the soil in de-ionized water produced no
biota on filter samples cultured on M-F endo broth media plates.
Thus treated, a 2 kg batch of the sand was introduced into a
gallon, glass screw-capped jar and placed on a small ball mill
roller device. A 25 ml aliquot of an approximately 10% solution
of super unleaded gasoline in methanol was prepared in a 60 cc
plastic syringe fitted with a 26 gauge needle. This solution was
sprayed through a small hole, previously drilled in the cap, as
the bottle turned on the roller device. The mixture was turned
for several hours with occasional hand shaking of the jar and
intermittent tapping with a small wooden mallet to dislodge
sand which accumulated on the sides of the jar. Observation of
this decreasing tendency of the sand to adhere to the glass was
used as the indication of an appropriate end point for the
procedure. That is, the mixture was tumbled for approximately
one hour after cessation of appreciable adherence of the
mixture. The jar was then removed from the ball mill roller,
and, using a specially prepared dispenser rack, the sand was
very rapidly transferred to 1 dram screw cap vials which were
immediately capped and placed in refrigeration at 4°C. The
dispenser rack was designed to hold 24 small plastic funnels
positioned over 24 of the 1 dram glass vials so that rapid
removal and capping of the full vials was facilitated. A
vibrator was attached to the filling rack to promote rapid
funneling and settling of the sand into the vials.
Three separate concentrations of the gasoline/sand mixtures
were prepared, designed to give final concentrations of
approximately 500, 200 and 50 mg/kg. Twenty samples from each
set (approximately 100 in each set) were randomly selected for
1-324
-------
analysis by each of the two methods. The average weight of
sample in the 1 dram vials was 6 grams. Actual weights for
each sample were utilized in the calculations of concentration
(mg gasoline/kg sand).
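For orientation, the nominal loading of a batch follows directly from the spike described above, and each vial's result is then scaled by its actual sample weight. A minimal sketch, assuming a typical gasoline density of 0.74 g/ml (a value not stated in the paper):

    # Hedged sketch of the concentration bookkeeping; the gasoline
    # density is an assumed typical value, not taken from the paper.

    GASOLINE_DENSITY_G_PER_ML = 0.74  # assumed

    def batch_nominal_mg_per_kg(spike_ml, solution_fraction, sand_kg):
        """Nominal mg gasoline per kg sand for a spiked batch."""
        gasoline_g = spike_ml * solution_fraction * GASOLINE_DENSITY_G_PER_ML
        return gasoline_g * 1000.0 / sand_kg

    # 25 ml of an ~10% gasoline/methanol solution into a 2 kg batch:
    print(round(batch_nominal_mg_per_kg(25.0, 0.10, 2.0)))  # ~925 mg/kg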
The method utilized with the HNU-Hanby field test kit is as
follows:
1. Empty sample vial into the 50 ml beaker.
2. Immediately snap one 10 ml solvent ampoule from the kit and
empty into the beaker.
3. Stir the sample/solvent mixture with a spatula for three
minutes.
4. Pour solvent from the beaker into one of the kit test tubes
up to the mark (4.2 ml).
5. Add contents of one of the catalyst vials from the kit into
the test tube.
6. Shake test tube vigorously for three minutes and observe
developed color in the catalyst at the bottom of the tube.
Comparisons of the color with the standard gasoline-in-soil
photographs supplied with the kit, as well as with photographs
made specifically to provide matches with the standard soil
concentrations obtained in this investigation, were facilitated
by juxtaposition of the two sets of pictures with the actual
test tube results. Apparent differences in hue can be seen in
the color photographs accompanying this paper. These hue
differences are largely accounted for by variable composition
in the make-up of the gasoline used in the original kit-supplied
photographs and the gasoline used in this investigation. Use of
black and white photography minimizes the bias that these hue
differences can cause. A quotation from MIT Professor of
Physics Philip Morrison's book The Ring of Truth is appropriate
here: "...spectral photos in black and white...bear the full
information of the spectrum." (1)
The GC methods utilized in this investigation are adapted from
the EPA SW-846 manual, 5030/8000 and from a study conducted by
the Midwest Research Institute for the U.S. Environmental
Protection Agency's Office of Underground Storage Tanks. The
method employed the following procedures and instrumental
parameters:
1-325
-------
1. 40 ml VOC vials were placed on a top-loading balance and
tared.
2. A sample soil vial was emptied into the VOC vial. The
weight of sample was recorded.
3. Immediately, de-ionized water was added to carefully bring
the meniscus of water to slightly above the VOC vial top.
Another weighing was recorded after this step to determine
the weight (volume) of water added.
4. The vials were placed in a Fisher Scientific Bransonic
ultrasonic water bath for 15 minutes, then removed to an
Eberbach shaker for 30 minutes on high speed.
5. The VOC vials were then analyzed on an HNU Model 421 gas
chromatograph equipped with a photoionization detector
connected in series with a flame ionization detector. 5 ml
of water was poured from the VOC vial into a gas-tight
syringe. This was then connected to an O.I. Corporation
Model 4460A sample concentrator. The sample was sparged
with helium for 11 minutes at 25°C.
The trap was desorbed at 180°C for 4 minutes through a heated
transfer line to the GC. GC conditions were: 25 meter Nordion
fused silica 0.53 mm column with a 1.0 micrometer MB 30 coating.
The temperature program employed was: initial temperature 45°C
for 2 minutes, then 10°C/minute to 100°C, hold for 3 minutes.
Data were sent to a Spectra Physics 4270 integrator. GC
parameters were initiated via an IBM PS/2 Model 70 386
channeled through the 4270.
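The recorded weights from steps 1-3 feed a simple back-calculation from the measured water-phase concentration to mg gasoline per kg sand. A minimal sketch, assuming complete transfer of the gasoline into the added water; the variable names and example values are illustrative:

    # Hedged sketch of the back-calculation implied by steps 1-3 and 5;
    # assumes complete transfer of the gasoline into the water phase.

    def soil_conc_mg_per_kg(water_conc_mg_per_l, water_volume_l, soil_mass_g):
        """mg gasoline per kg sand from the water-phase measurement."""
        total_mg = water_conc_mg_per_l * water_volume_l
        return total_mg / (soil_mass_g / 1000.0)

    # e.g. ~40 ml of water over a 6 g sample (the average vial weight)
    # reading 75 mg/L -> 500 mg/kg in the original sand.
    print(soil_conc_mg_per_kg(75.0, 0.040, 6.0))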
A standard solution for calibration of the GC runs was
prepared, composed of the gasoline plus added concentrations of
2-methylpentane and 1,2,4-trimethylbenzene. The GC standard
curve was prepared by analysis of various concentrations of the
gasoline-in-methanol standards, which were injected through the
front of the 5.0 ml sample syringe using a 10 or 100 ul syringe.
Using the protocol established in the MRI study, peaks considered
typical of the gasoline range organics elute inclusive of and
between the 2-methylpentane and the 1,2,4-trimethylbenzene
peaks. Total area counts from the FID as integrated by the
SP4270 were used in this investigation. Further studies
utilizing these same standards are planned which will
incorporate the data from the photoionization detector,
corresponding to the SW-846 5030/8020 method. A separate
series of GC data charts indicates the relevant quality control
methods for blanks, surrogates and spiked samples.
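A minimal sketch of the external-standard calibration this describes: total FID area counts are regressed against gasoline concentration, and sample areas are back-calculated from the fitted line. All numbers below are invented for illustration and do not reproduce the paper's actual calibration data:

    # Hedged sketch of an external-standard curve; the concentration and
    # area values are invented for illustration only.

    import numpy as np

    conc = np.array([10.0, 50.0, 100.0, 200.0])       # mg/L standards
    area = np.array([2.1e6, 10.4e6, 20.9e6, 41.5e6])  # total FID area counts

    slope, intercept = np.polyfit(conc, area, 1)      # area = m*conc + b

    def quantify(sample_area):
        """Back-calculate concentration from total area counts."""
        return (sample_area - intercept) / slope

    print(round(quantify(15.0e6), 1))  # roughly 72 mg/L for this fit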
1-326
-------
All stock solutions, standard dilutions, and prepared soil
standards were stored at 2-4°C in a walk-in refrigerator. A
primary consideration in the design of this study was the
determination of the temporal stability of the soil standards.
The period involved in this investigation was approximately 8
weeks. That is, soil samples were analyzed by both methods
(FTK and GC) over a period of some 56 days.
Figure 1 is a photograph of the HNU-Hanby Field Test Kit
utilized in the investigation. The components pertinent to
this study are the 50 ml beaker, the 10 ml solvent ampoules,
the marked test tubes and the 1 dram vials of catalyst. Figure
2 is a chromatogram typical of the flame ionization detector
peaks described in the report.
SUMMARY
The utility of this method of preparation of consistently
uniform samples of gasoline in soil is realized in the
relatively small variability of the data obtained by both the
HNU-Hanby Field Test Kit and the EPA purge-and-trap/GC
methods of analysis. Correlation between the two methods,
as judged by this visual comparison, is also evident.
The gasoline range organic method of analysis as documented in
the MRI study utilizing FID chromatographic detection gives
consistent comparison to the test kit results even though the GC
method is essentially a total organic integration whereas the
test kit is based on the colorimetric determination of the
total aromatic components of the gasoline.
Photoionization detection, being much more responsive to the
aromatic components in gasoline, will doubtless correlate even
more closely with this extraction/colorimetric technique.
Investigations of this correlation are already underway, as are
more elaborate techniques involving the utilization of
reflectance spectrophotometry, which will provide a means of
obtaining data with the technique that is not dependent on
visual observation of relative color intensity. A study is
also in process which will add a third dimension of analytical
measurement, i.e., the utilization of a head space vapor
technique.
1-327
-------
-------
[Figure 2: Typical Chromatogram of Gasoline Range Organics.
Spectra Physics SP4270 integrator output showing the solvent
peak, the 2-methylpentane and 1,2,4-trimethylbenzene marker
peaks, and the gasoline range peaks with retention times and
FID area counts.]
1-329
-------
ACKNOWLEDGMENTS
The authors are grateful to the U.S. EPA Office of Underground Storage
Tanks and to the Midwest Research Institute, particularly Ms. Linda
McConnell, for inviting this laboratory's participation in the round-
robin study designed to produce a technically defensible Total
Petroleum Hydrocarbon method of analysis of environmental samples.
REFERENCES
1. Morrison, Philip and Phylis, 1987, The Ring of Truth, Random
House, Inc., New York, N.Y., p. 227.
2. Midwest Research Institute Draft Report, Nov., 1990, Evaluation of
proposed analytical methods to determine total petroleum hydrocarbons
in soil and groundwater, MRI, Falls Church, VA.
3. U.S. Environmental Protection Agency, SW-846 Test Methods for
Evaluating Solid Waste, 3rd Edition.
1-330
-------
43 FIELD TEST KIT FOR QUANTIFYING ORGANIC HALOGENS
IN WATER AND SOIL
Deborah Lavigne, Quality Control Manager
Dexsil Corporation
One Hamden Park Drive
Hamden, Connecticut 06517
ABSTRACT
In a continuing data-gathering program, the EPA monitors organic chemicals
in the waters of the United States. The list of monitored chemicals
includes both aliphatic and aromatic hydrocarbons, pesticides, industrial
chemicals, plasticizers, and solvents. Many of these materials are
halogenated, produced by chlorination of water during purification
processes, through industrial and municipal runoff, natural sources, and
sewage purification practices.
Chlorine is a contaminant often present in oils, soils, sludges, and organic
liquids found at hazardous waste sites. Controlling wastewater discharges
and landfilling of chlorinated compounds have become priority issues for
EPA since the passage of the Hazardous and Solid Waste Amendments of 1984.
In response to toxicological and environmental concerns about trihalomethanes
and other halogenated compounds present in water and soil, a quick,
accurate, easy-to-use, portable field test kit has been developed for
quantifying organic halogens. The analytical procedure requires an
extraction with a suitable solvent, followed by colorimetric chemistry to
quantify the organic halogens present.
This paper will detail field and laboratory results, limits of
detection, matrix effects, and cost analysis.
INTRODUCTION
EPA regulation 40 CFR 261 establishes that any used or waste oil
containing greater than 1000 ppm organic chloride may have to be
classified as a hazardous waste. Chlorinated solvents are the primary
contaminants found in waste oils and oily wastes.
Currently available instrumental methods of chlorine analysis
(microcoulometric titration, X-ray fluorescence spectrometry, oxygen bomb
combustion and gas chromatography) are time consuming and must be
performed in a laboratory by trained technicians. Foreseeing the
additional testing that would be required under the new regulations, the
EPA Region II contracted Dexsil Corporation to develop a field-portable
test kit that could be used by untrained personnel. The result was two
small, disposable test kits that require less than five minutes to
determine chloride contamination in waste oil. The first method is a
go/no-go test, indicating over or under 1000 ppm chloride. The second
method is a quantitative analysis giving an amount of contamination
between 200 and 4000 ppm.
1-331
-------
These test kits were evaluated by Research Triangle Institute (Raleigh,
NC) for EPA and were found to be acceptable methodology for chlorine
detection. As a result, the kits were assigned EPA method 9077, to be
published in the upcoming SW-846 manual. Interest has since increased in
a test kit that would work on oil containing large quantities of water
(oily waste) and, in light of the current regulations pertaining to
leaking underground storage tanks, it would be useful to have a kit that
would detect total organic halogens in soil. Two field-portable test
procedures have been developed which address these issues of halogens in
wastewater, oily waste, and soils.
The different methodology and apparatus will be described, the accuracy
and precision of each method discussed and the costs of each method
reported.
USED OIL CONTAMINATION
How do chlorinated solvents contaminate used oil? Chlorinated solvents
are not ingredients of crankcase oil, but are indirectly introduced
through careless management practices, such as pouring used degreasing and
cleaning solvents into used oil storage drums. The most common solvents
found in waste oils are dichlorodifluoromethane, trichlorotrifluoroethane,
1,1,1-trichloroethane, trichloroethylene, and tetrachloroethylene (1).
Levels of contamination range from 100 ppm to thousands of ppm. The
possible presence of chlorinated solvents can be flagged by checking total
chlorine, an indicator of the potentially hazardous chlorinated substances
present.
The EPA estimates that over 350 million gallons or about 30 percent of all
used oil is landfilled or dumped annually. Approximately 160 million
gallons comes from the "do-it-yourself" oil changers, who typically
dispose of their oil by dumping it on the ground, in sewers, or in
waterways, or by placing it with the household trash destined for a
landfill that has not been lined to protect against soil and groundwater
contamination. The remaining 190 million gallons is dumped or landfilled
by automotive shops and industrial facilities. (2)
OILY WASTE SOURCES
Sources of oily waste include bilge and ballast, rain runoff, washings
from cleaning vehicles and tanks, and cutting oils. All of these
materials are predominantly water, containing from 0.1 to five or ten
percent oil.
Bilge oil is a mixture of fuel oil, lubricating oil, and hydraulic oil
dispersed in sea water along with dirt, rust, and bacterial sludge.
Ballast oil composition depends on what is carried in the ballast tanks
when the ship is not in ballast, usually fuel oil, crude oil, or petroleum
products. The oil will usually exist as free oil droplets in the
seawater, or as a sheen on the water surface.
1-332
-------
Rain runoff that carries oil from contaminated areas often cannot be
legally discharged to storm sewers. Trucks and fuel storage tanks are
cleaned with water containing detergents. This produces oily water
containing solids, emulsions, free oil, dissolved oil, and detergents.
Metalworking fluids are used for both lubrication and cooling in various
machinery processes such as cutting and grinding. Oily waste resulting
from used oil mismanagement causes damage to streams, ground water, lakes,
and the oceans. For instance, the Coast Guard estimates that sewage
treatment plants discharge twice as much oil into coastal waters as do
tanker accidents - 15 million gallons per year versus 7.5 million gallons
from accidents. A major source of this pollution is dumping of oil by
do-it-yourselfers into storm drains and sewers. A startling example of
this has occurred in the Seattle area, where more than 40 percent of the
water quality trouble calls received are related to used oil and other
wastes dumped down storm drains, contaminating water bodies (3).
ENVIRONMENTAL IMPACT
Many contaminated sites containing oily wastes and oily waste sludges are
now being cleaned up under authority of Superfund. The Superfund
regulations affect the handling of oil wastes in the areas of spills and
accidental releases, leaky storage tanks, and abandoned storage
facilities. Oils from abandoned storage facilities fall into one of three
categories: abandoned tank pumpings, abandoned drummed oils, or sludge
pit residues (4).
The composition of the oils in each of these categories can vary
significantly from site to site. Over time, the oils in tanks and drums
absorb material from the walls of the container. This process is
exacerbated by corrosion due to seasonal temperature variations, rain,
mechanical abrasion, and the like. The oils are usually significantly
diluted by water infiltration. In order to fall under Superfund
jurisdiction, the sites must present a danger to the public or the
environment. Thus the emphasis is on the quick and inexpensive analysis
and disposal of the materials, rather than on recycling and reuse (5).
Ideally, hazardous waste determinations, whenever possible, should be
carried out in the field to quickly identify the extent and magnitude of
the contamination. The advantages of alternative simple chemical tests
have been foreseen by the EPA, and some procedures have, in the face of
alternative instrumental methods, been examined and subsequently become
EPA approved.
A CHEMICAL METHOD FOR THE DETERMINATION OF ORGANIC HALOGENS IN WASTEWATER,
OILY WASTES AND SOILS
This procedure requires an extraction with a suitable hydrocarbon
solvent. Covalently bonded halogens present in the hydrocarbon solvent
are then stripped from their solvent backbones by sodium metal according
to the Wurtz reaction:
2Na + 2R-X -> 2NaX + R-R
1-333
-------
Any halogens that are present (now in ionic form) are extracted into an
aqueous buffer, to which is added a color reagent to quantitate resulting
chloride. A solution of mercuric nitrate is added dropwise until a color
change from yellow to purple is realized, and the concentration (in ppm)
is read directly off the dropper.
ANALYTICAL MEIHDD
1/Method for Samples Containing Water
10 ml of the liquid sample is extracted by shaking for one minute with 10
g of an immiscible hydrocarbon and 0.5 g of a (granular) emulsion breaking
material. The sample is allowed to settle until it has separated into
distinct phases (about three minutes). Approximately one-third of the top
layer is dispensed into a vial containing a drying agent which will remove
any moisture and inorganic chloride. The vial is shaken and the drying
agent is allowed to settle. 0.34 g of the dried solvent is then treated
with 1.5 ml of a solution of naphthalene in ethyl diglyme, followed by 0.4
ml of an organic dispersion of metallic sodium, and shaken for 1 minute. 7
ml of buffer solution is then added and the aqueous layer is separated and
combined with 0.5 ml of a solution of s-diphenyl carbazone in alcohol. A
solution of mercuric nitrate is added dropwise from a 1 ml microburette.
When a true purple color is realized, the test is stopped and the chloride
concentration of the original oil/water or wastewater sample is read
directly off the microburette.
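The kit's microburette is factory-calibrated so the user reads ppm directly, but the underlying arithmetic is straightforward: each ml of titrant corresponds to a fixed mass of chloride, scaled up from the titrated aliquot to the whole extract and divided by the sample mass. A minimal sketch, assuming complete extraction and an illustrative titrant equivalence (the actual calibration is Dexsil's); the same arithmetic applies to the soil method below:

    # Hedged sketch of the titration arithmetic; the 0.5 mg Cl-/ml titrant
    # equivalence is an assumption for illustration, not the kit's value.

    CL_EQUIV_MG_PER_ML = 0.5  # assumed mg Cl- per ml of Hg(NO3)2 titrant

    def chloride_ppm(titrant_ml,
                     solvent_g=10.0,   # extraction solvent used (from text)
                     aliquot_g=0.34,   # dried solvent actually titrated
                     sample_g=10.0):   # original sample mass, ~10 ml assumed
        cl_mg_in_aliquot = titrant_ml * CL_EQUIV_MG_PER_ML
        cl_mg_total = cl_mg_in_aliquot * solvent_g / aliquot_g  # whole extract
        return cl_mg_total * 1.0e6 / (sample_g * 1000.0)        # -> ppm (w/w)

    print(round(chloride_ppm(0.1)))  # 0.1 ml of titrant -> ~147 ppm here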
2/Method for Soil Samples
10 g of the soil sample is extracted by shaking for one minute with 12 ml
of a mixture that contains 2 ml of distilled water and 10 ml of an
immiscible hydrocarbon. The soil is then allowed to settle and the
supernatant liquid filtered through a column containing Florisil to remove
any moisture and inorganic chloride. 0.34 g of the dry filtrate is then
treated with 1.5 ml of a solution of naphthalene in ethyl diglyme, followed
by 0.4 ml of an organic dispersion of metallic sodium, and shaken for 1
minute. 7 ml of buffer solution is then added and the aqueous layer is
separated and combined with 0.5 ml of a solution of s-diphenyl carbazone
in alcohol. A solution of mercuric nitrate is added dropwise from a 1 ml
microburette. When a true purple color is realized, the test is stopped
and the chloride concentration of the original soil sample is read
directly off the microburette.
ANALYTICAL TESTS, RESULTS AND DISCUSSION
The samples chosen were both laboratory mixtures and Superfund samples
containing a range of 125 ppm to 6500 ppm chloride. The procedures
employed are the same as those previously described except that a packaged
kit was used (Hydroclor-Q™, Dexsil, Hamden, CT). All reactions with this
kit are carried out in sealed plastic tubes and all reagents are contained
in crushable glass tubes to obviate any need to handle the reagents. This
is advisable, as some of the reagents are hazardous to handle in the
normal manner. The results obtained from the laboratory samples are shown
in table (1) and table (2), and the results from the Superfund samples are
shown in table (3). All three tables include results from the
microcoulometric titration (EPA method 9076) of the same samples.
1-334
-------
It is seen that the results from both the test kit and the
microcoulometric titration of the samples agree very reasonably. It is
also clearly demonstrated that no interference occurs in the presence of
inorganic chloride. Laboratory soil samples were also tested in the same
manner using an analytical kit (Dexsil, Hamden, CT). This is a similar
type of kit to the one used for liquids, but it also provides a simple
balance for weighing out the soil. The procedures previously described
were used, and the results obtained for wet and dry soils are shown in
table (4) and the results for wet and dry sands are shown in table (5).
Microcoulometric titration results of the same samples are shown in each
table, and it is seen that agreement is good between the two methods.
The cost of each kit is $10-13 and no capital investment in instruments is
needed. The kits can readily and easily be used in the field and little
skill is needed. The test takes about ten minutes. With increasing
testing requirements, laboratory fees, and laboratory turn-around times,
the field-portable chemical test with a colorimetric end-point would be the
first choice for screening a suspect site or container prior to laboratory
analysis.
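The agreement claimed above can be summarized numerically as a relative percent difference (RPD) between paired results. A minimal sketch using the first kit/titration pairs from Table 1; the function is illustrative and not part of the kit procedure:

    # Hedged sketch: RPD between kit and titration results (Table 1 pairs).

    def rpd(a, b):
        """Relative percent difference between two measurements."""
        return abs(a - b) / ((a + b) / 2.0) * 100.0

    pairs = [(2000, 1980), (2250, 2250), (900, 760), (850, 849)]  # kit, titration
    for kit, titration in pairs:
        print(f"{kit} vs {titration} ppm -> RPD {rpd(kit, titration):.1f}%")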
1-335
-------
REFERENCES
1/ Guide to Oil Waste Management Alternatives, Final Report, p. 4-15,
Energy and Environmental Research Corporation, Irvine, CA, April 1988.
2/ Nolan, J.J., Harris, C., and Cavanaugh, P., Used Oil: Disposal Options,
Management Practices and Potential Liability, 2nd Ed., p. 12, Government
Institutes, Inc., Rockville, MD, 1989.
3/ How to Set Up a Local Program to Recycle Used Oil, EPA Rept. No.
530-SW-89-039A, p. 1, U.S. EPA, Washington, D.C., May 1989.
1-336
-------
TABLE 1
COMPARISON OF LABORATORY PREPARED SAMPLE ANALYSES:
MICROCOULOMETRIC TITRATION VS HYDROCLOR™

                                                    Microcoulometric
Sample                              Hydroclor™      Titration
2000 ppm Cl- as Cl2C2Cl2 in         2000 ppm        1980 ppm
 1% oil in pond H2O                 2500 ppm        2460 ppm
2000 ppm Cl- in previous            2250 ppm        2250 ppm
 matrix + dirt                      2275 ppm        2210 ppm
1000 ppm Cl- as C6H3Cl3 in           900 ppm         760 ppm
 1% oil in pond H2O                 1050 ppm         980 ppm
1000 ppm Cl- in previous             850 ppm         849 ppm
 matrix + dirt                       900 ppm         897 ppm
1000 ppm Cl- as CHCl3 in 1% oil      900 ppm         996 ppm
 in pond H2O + 4000 ppm Cl-          975 ppm         959 ppm
 as NaCl
1000 ppm Cl- in previous            1000 ppm         936 ppm
 matrix + dirt                       900 ppm         871 ppm
1-337
-------
TABLE 2
COMPARISON OF LABORATORY PREPARED ANTIFREEZE SAMPLE ANALYSES:
MICROCOULOMETRIC TITRATION VS HYDROCLOR™

                                    Microcoulometric
Matrix               Sample         Titration        HydroClor™
Tetrachloro-         2740 ppm       2690 ppm         2900 ppm
 ethylene in         2670 ppm       2760 ppm         2850 ppm
 antifreeze/H2O
Same                 1230 ppm       1280 ppm         1200 ppm
                     1140 ppm       1280 ppm         1350 ppm
Same                  481 ppm        535 ppm          500 ppm
                      444 ppm        548 ppm          500 ppm
Trichloro-           3000 ppm       2810 ppm         3000 ppm
 ethylene in         3000 ppm       2800 ppm         3100 ppm
 antifreeze/H2O
Same                 1200 ppm       1120 ppm         1200 ppm
                     1200 ppm       1160 ppm         1250 ppm
Same                  451 ppm        509 ppm          600 ppm
                      462 ppm        521 ppm          600 ppm
1,2-Dichloro-        2950 ppm       2820 ppm         3300 ppm
 ethane in           2800 ppm       2800 ppm         3300 ppm
 antifreeze/H2O
Same                 1400 ppm       1370 ppm         1550 ppm
                     1490 ppm       1410 ppm         1600 ppm
Same                  697 ppm        693 ppm          800 ppm
                      711 ppm        671 ppm          800 ppm
1,2,4-Trichloro-     3260 ppm       2880 ppm         2800 ppm
 benzene in             --          2940 ppm         2800 ppm
 antifreeze/H2O
Same                 1400 ppm       1510 ppm         1500 ppm
                     1640 ppm       1620 ppm         1500 ppm
Same                  812 ppm        857 ppm          800 ppm
                      791 ppm        856 ppm          825 ppm
Chloroform in        3090 ppm       2930 ppm         2900 ppm
 antifreeze/H2O      2930 ppm       2930 ppm         2800 ppm
Same                 1300 ppm       1410 ppm         1400 ppm
                     1310 ppm       1440 ppm         1350 ppm
Same                  728 ppm        732 ppm          800 ppm
                      718 ppm        730 ppm          725 ppm
1-338
-------
TABLE 3
COMPARISON OF LIQUID SUPERFUND SAMPLE ANALYSES:
MICROCOULOMETRIC TITRATION VS HYDROCLOR™

                      Microcoulometric
Sample                Titration          HydroClor™
TX  -  563 ppm         230 ppm            200 ppm
TOX -  242 ppm         242 ppm            200 ppm
TX  -  604 ppm         417 ppm            300 ppm
TOX -  315 ppm         396 ppm            350 ppm
TX  - 2260 ppm        1187 ppm           1350 ppm
TOX - 1400 ppm        1425 ppm           1400 ppm
TX  - 1910 ppm        1539 ppm           1600 ppm
TOX - 1690 ppm        1518 ppm           1700 ppm
TX  - 6420 ppm        5750 ppm           5800 ppm
TOX - 5690 ppm        5900 ppm           5600 ppm
TX  - 4940 ppm        3270 ppm           3600 ppm
TOX - 3980 ppm        3870 ppm           3400 ppm
TX  - 1560 ppm         774 ppm            900 ppm
TOX -  712 ppm         748 ppm            800 ppm
1-339
-------
TABLE 4
COMPARISON OF LABORATORY PREPARED SOIL SAMPLE ANALYSES:
MICROCOULOMETRIC TITRATION VS SOIL FIELD TEST KIT

Sample                     Soil Kit              Microcoulometric Titration
 500 ppm Cl- in dry soil    600 ppm /  500 ppm     515 ppm /  509 ppm
 600 ppm Cl- in dry soil    650 ppm /  650 ppm     635 ppm /  624 ppm
 700 ppm Cl- in dry soil    850 ppm /  650 ppm     700 ppm /  727 ppm
 800 ppm Cl- in dry soil    800 ppm /  800 ppm     784 ppm /  790 ppm
 900 ppm Cl- in dry soil    950 ppm /  900 ppm     931 ppm /  948 ppm
1000 ppm Cl- in dry soil   1000 ppm /  950 ppm     960 ppm /  979 ppm
1500 ppm Cl- in dry soil   1500 ppm / 1450 ppm    1450 ppm / 1490 ppm
 500 ppm Cl- in wet soil    500 ppm /  450 ppm     558 ppm /  595 ppm
 600 ppm Cl- in wet soil    700 ppm /  650 ppm     689 ppm /  719 ppm
 700 ppm Cl- in wet soil    750 ppm /  800 ppm     654 ppm /  677 ppm
 800 ppm Cl- in wet soil    800 ppm /  800 ppm     861 ppm /  883 ppm
 900 ppm Cl- in wet soil    900 ppm /  950 ppm     960 ppm /  946 ppm
1000 ppm Cl- in wet soil   1100 ppm / 1000 ppm    1070 ppm / 1080 ppm
1500 ppm Cl- in wet soil   1600 ppm / 1600 ppm    1520 ppm / 1520 ppm
2000 ppm Cl- in wet soil   2050 ppm / 2000 ppm    1860 ppm / 1910 ppm
1-340
-------
TABLE 5
COMPARISON OF LABORATORY PREPARED SAND SAMPLE ANALYSES:
MICROCOULOMETRIC TITRATION VS SOIL FIELD TEST KIT

Sample                     Soil Kit              Microcoulometric Titration
 300 ppm Cl- in wet sand    350 ppm /  300 ppm     312 ppm /  315 ppm
 400 ppm Cl- in wet sand    400 ppm /  450 ppm     421 ppm /  429 ppm
 500 ppm Cl- in wet sand    500 ppm /  550 ppm     452 ppm /  457 ppm
 500 ppm Cl- in dry sand    400 ppm                533 ppm /  528 ppm
 600 ppm Cl- in wet sand    575 ppm                633 ppm /  632 ppm
 700 ppm Cl- in wet sand    650 ppm /  775 ppm     823 ppm /  812 ppm
1000 ppm Cl- in dry sand   1050 ppm / 1050 ppm    1110 ppm
1186 ppm Cl- in dry sand   1200 ppm / 1250 ppm    1220 ppm
1200 ppm Cl- in dry sand   1200 ppm               1200 ppm / 1200 ppm
1500 ppm Cl- in dry sand   1500 ppm / 1550 ppm    1570 ppm / 1510 ppm
2000 ppm Cl- in dry sand   1800 ppm               1880 ppm
1-341
------- |