PROCEEDINGS
                                  The Eleventh Annual
                                  Waste
                                  Testing
                                  & Quality
                                  Assurance
                                  Symposium
                                  July 23-28, 1995
                                  The Washington Hilton
                                  Hotel and Towers
                                  Washington, DC

-------
  CONTENTS
Paper
Page Number
1      Field Screening—"Quick & Dirty" Is Rapidly Earning the Reputation of "Efficient & Cost-            1
       Effective." G. Armstrong
2     Cost-Effective Management of Large Drum Jobs Utilizing Hazscan Analyses and US EPA's New     12
       DrumTrak Software. S. Benes
3     On-Site Laboratory Support of Oak Ridge National Laboratory Environmental Restoration Field     19
       Activities. J. Burn
4     A Decision Analysis Approach to Determining Sample Sizes for Site Investigation. N. Neerchal,    24
       T. Legore
5     Cost-Effective Statistical Sampling: Compositing, Double Sampling, and  Ranked Sets. R. O'Brien   36
6     The Development of an Innovative Program to Monitor the Effectiveness and Performance of     37
       Remediation Technology at a Superfund Site. R. Rediske, L. Pugh, R. Buhl, R. Wilburn,
       S. Borgesson, D. Rogers, D. Peden
7     Comparison of Alternatives for Sampling & Storage of VOCs in Soil. D. Turriff, C. Reitmeyer,      51
       L. Jacobs, N. Melberg
8     Comparison of Response Factors for Weathered Petroleum Standards. C. Cox, J. Moodier,         59
       M. Schonhardt
9     A Simple, Accurate Field Test for Crude Oil Contamination in Soil. K. Carter                     60
10    A Practical  Field Application of Medium Soil Extraction/Headspace GC Screening for VOCs in      68
       Soil Samples. L. Ekes, S. Frisbie, C. MacPhee
11    Use of a Portable, Fiber-Optic CCD Spectrophotometer to  Measure Friedel-Crafts Products in      69
       the Detection of Crude Oil, Fuel, and Solvent Contamination of Soil. J. Hanby
12    Improved Method for Soil Analysis Screening by Heated Headspace/Ion Trap Mass               80
       Spectrometry. T. Lloyd Saylor, J.  Dougherty, B. Bacon
13    Field Screening of Volatile Chlorinated Hydrocarbons Based on Sonochemistry. E. Poziomek,      81
       G. Orzechowska, W. Engelmann
14    The Environmental Response Team's (ERT's) On-Site (Mobile) Analytical Laboratory Support       87
       Activities. R. Singhvi, J. Lafornara
15    A New Soil Sampling and Soil Storage System for Volatile Organic Compound Analysis.          95
       D. Turriff
16    Performance Evaluation of a New Low-Cost Field-Test Kit for Analysis of Hydrocarbon-            97
       Contaminated Soil at a Diesel Fuel Release Site. K. Wright, J. Seyfried
17    The Admissibility of Scientific Evidence. E. Perkins                                           105
18    Strategic Considerations in Presenting Technical Evidence in Court: A Case Study.              107
       B. M. Hartman
19    Avoiding Successful Challenges of Measurement Data. L. W. Strattan                             111

-------
Paper                                                                               Page Number
20    Soxhlet Alternatives for the Environmental Lab. M. Bruce, J. Hall                                  114
21    Sample Preparation Using Accelerated Solvent Extraction (Method 3545, Proposed).                121
       D. W. Felix, B. Richter, J. Ezzell
22    Evaluation of the New Clean Solid Phases for Extraction of Nitroaromatics and Nitramines           128
       from Water. T. Jenkins, P. Thorne, K. Myers, E. McCormick
23    Environmental Sample Extraction Using Glass-Fiber Extraction Disks. S. Randall,  C. Linton,           143
       M. Feeney, N. Mosesman
24    Capacity Factors in High-Efficiency GPC Cleanup. K. Kelly, D. Stalling, N. Schwartz                 148
25    The Use of Fourier Transform Infrared Spectroscopy for the Analysis of Waste Drum Headspace.      155
       W. Bauer, M. Connolly, A. Rilling, D. Gravel
26    Quantitative Method for the Determination  of Total Trihalomethanes in  Drinking Water.             156
       W. Studabaker, S.  Friedman, R. Vallejo
27    Stability Studies of  Selected Analytical Standards for the Experimental Determination of             160
       Expiration Dates. M. Re, C. Petrinec
28    Determining Volatile Organic Compound Concentration Stability in Soil. A. Hewitt                 173
29    Photolysis of Laboratory Dioxins/Furans Waste. J. P. Hsu, J. Pan                                   184
30    The Effectiveness of Methylene Chloride Stabilizers in an Environmental Laboratory Recycling        190
       Program. T. Willig, J. Kauffman
31    Approaches to Quality Control of Non-Linear Calibration Relationships for SW-846                  203
       Chromatographic Methods. H. McCarty, B. Lesnik
32    How Low Can We Go—Achieving Lower Detection Limits with Modified "Routine" Analytical        209
       Techniques. P. Marsden, S. Tsang, B. Lesnik
33    Non-Phthalate Plasticizers in Environmental Samples. J. Barren, E. Messer                          219
34    Microwave-Assisted Extraction from Soil of Compounds Listed in SW-846 Methods 8250, 8081,      228
       and 8141A. W. Beckett, V. Lopez-Avila, R. Young, J. Benedicto, P. Ho, R. Kim
35    Toxic Congener-Specific, Monoclonal Antibody-Based Immunoassay for PCBs in  Environmental       230
       Matrices. R. Carlson, R. O. Harrison, Y. Chiu, A. Karu
36    Accelerated Solvent Extraction of Chlorinated Hydrocarbons,  Including Dioxin and PCB, from        231
       Solid Waste Samples. J. Ezzell, D. Felix, B. Richter, F. Hofler
37    Robust SFE Sample Preparation Methods for PCB and OCPs Submitted to the US EPA SW-846 for     237
       Consideration as a  Draft SFE Method 3562.  D. Cere, S. Bowadt, P. Bennett, H-B. Lee, T. Peart
38    Analysis of Dioxin in Water by Automated Solid-Phase Extraction Coupled to Enzyme                240
       Immunoassay. R. Harrison, R. Carlson, H. Shirkhan, L. Altshul, C. De Ruisseau, J. Silverman
39    Microwave-Assisted Solvent Extraction of PAHs from Soil—Report of an Interlaboratory Study.        241
       L. Jassie, M. Hays, S. Wise
40   Optimizing Automated Soxhlet Extraction of Semivolatiles. K. Kelly, N.  Schwartz                  244
41    Tests of Immunoassay Methods for 2,4-D, Atrazine, Alachlor, and Metolachlor in Water.              251
       P. Marsden, S. Tsang, M. Roby, V. Frank, N. Chau, R. Maxey
42    Automated Liquid-Liquid Extraction of Semivolatile Analytes. R. McMillin, M. Daggett, D.  Gregg,    266
       L. Hurst, K. Kelly, N. Schwartz, D. Stalling
43    Determination of Poly(ethylene glycol)-600 from the Pharmaceutical Manufacturing Industry by     281
       Derivatization and Liquid Chromatography. A. Messing, W. Telliard, R.  Whitney

-------
Paper                                                                                  Page Number
44    Determination of Non-Purgeable, Water-Soluble Analytes from the Pharmaceutical Manufacturing       286
       Industry by GC/MS and GC/FID. A. Messing, W. Telliard, C. Helms, C. Parsons
45    Evaluation of a Robotic Autosampler for the Analysis of VOCs. V. Naughton, A. Sensel                292
46    Examination of GC/FID for the Analysis of Modified Method TO-14 for VOCs in Ambient Air.            298
       V. Naughton, S. Wang, S. Liu, R. Carley, A. Madden
47    The Suitability of Polymeric Tubings for Sampling Well Water To Be Analyzed for Trace-Level         305
       Organics. L. Parker, T. Ranney
48    The Analysis of Hexachlorophene by SW-846 8151. N. Risser, J. Hess, M. Kolodziejski                 315
49    Solvent Recovery in the Pesticide Extraction Laboratory Utilizing Standard Laboratory Glassware.    321
       N. Risser
50    Determination of TNT in Soil and Water by a Magnetic Particle-Based Enzyme Immunoassay System.      326
       F. Rubio, T. Lawruk, A. Gueco, D. Herzog, J. Fleeker
51    An Immunoassay for 2,4,5-TP (Silvex) in Soil. B. Skoczenski, J. Matt, T. Fan, Y. Xu                 341
52    Determination of Semivolatile Organic Compounds in Soils Using Thermal Extraction-Gas               342
       Chromatography-Photoionization Electrolytic Conductivity Detection. R. Spelling
53    Congener-Specific Separations of PCBs: Extraction by SPME, Separation by Capillary GC, and          347
       Detection by ECD and MS. C. L. Woolley, V. Mani, R. Shirey, J. Desourcie
54    Rapid Separation of VOCs with Short, Small-Diameter Capillary GC Columns. C. L. Woolley,            359
       R. Shirey, J. Desourcie

55    An Improved Temperature Feedback Control Sensor for Microwave Sample Preparation. L. Collins,       375
       K. Williams
56    Improvements in Spectral Interference and Background Correction for Inductively Coupled Plasma      379
       Optical Emission Spectrometry. J. Ivaldi, A. Ganz, M. Paustian
57    Analytical Methods for White Phosphorus (P4) in Sediment and Water. M. Walsh, S. Taylor,            380
       D. Anderson, H. McCarty
58    Effects of Barometric Pressure on the Absorption of Prepared Mercury Standards. S. Siler,           388
       D. Martini
59    A Simple Silver Analysis. D. Yeaw                                                                   396
60    Capillary Ion Electrophoresis, an Effective Technique for Analyzing Inorganic and Small Organic     401
       Ions in Environmental Matrices. G. Fallick, J. Romano, J. Krol, S. Oehrle
61    The Determination of Adamsite, a Non-Phosphorus Chemical Warfare Agent, in Soil Using               406
       Reversed-Phase High-Performance Liquid Chromatography. H. King
62    Microwave Closed-Vessel Sample Preparation of Paint Chips, Soil, Dust Wipes, & Baby Wipes for       411
       Analysis of Lead by ICAP. S. Littau, R. Revesz
63    Removal of Zinc Contamination from Teflon® PFA Microwave Digestion Vessels. R. K. Smith,            424
       F. Secord
64    A Comparative Investigation of Three Analytical Methods for the Chemical Quantification of          428
       Inorganic Cyanide in Industrial Wastewaters. C. Ikediobi, L. Latinwo, L. Wen

-------
Paper
Page Number

65    Managing RCRA Statistical Requirements to Minimize Groundwater-Monitoring Costs. H. Horsey,      443
       P. Carosone-Link, J. Loftis
66    EIS/GWM-An Integrated, Automated Computer Platform for Risk-Based Remediation of               449
       Hazardous Waste Contamination-A Holistic Approach. B. Dendrou, D. Toth
67    Interlaboratory Comparison of Quality Control Results from a Long-Term Vapor Well-Monitoring        466
       Investigation Using a Hybrid EPA Method T01/T02 Methodology. R. Vitale, G. Mussoline,
       W. Boehler
       General
68    Characterizing Hazardous Wastes:  Regulatory Science or Ambiguity? T. Meiggs                    482
69    Misuse of the Toxicity Characteristic Leaching Procedure (TCLP). S. Chapnick, M. Sharma,            487
       D. Roskos, N. Shifrin
70    The Synthetic Groundwater Leaching Procedure (SGLP): A Generic Leaching Test for the              493
       Determination of Potential for Environmental Impact of Wastes in Monofills. D. Hassett
71    Suggested Modification of Pre-Analytical Holding Times—Volatile Organics in Water Samples.         507
       D. Bottrell
72    Secondary Waste Minimization in Analytical Methods. D. Green, L. Smith, J. Grain,  A. Boparai,         517
       J. Kiely, J. Yeager, B. Schilling
73    Ten Sure Ways to Increase Investigation and Cleanup Costs. J. Donley                             527
74    The TCLP Test for Metals: Selection of Extraction Fluid. S. Nagourney, N. Tummillo, Jr., M. Winka,      542
       F. Roethal, W. Chesner
       Quality  Assurance
75    Data Quality Assessment of Data Usability vs. Analytical Method Compliance. D. Blye, R. Vitale        544
76    Planning for Radiochemical Data Validation as Part of the Sample and Analysis Collection             545
       Process. D. Bottrell, L. Jackson, R. Bath
77   New Calculation Tool for Estimating Numbers of Samples. L. Keith, G. Patton, D. Lewis,              550
      P. Edwards, M. Re
78    Use of Standard Reference Materials as Indicators of Analytical Data Quality. A. Bailey, C-A. Manen     565

-------
Paper                                                                                  Page Number
 79  The Generation of Calibration Curves for Multi-Point Standardizations Displaying High Relative         576
      Standard Deviations. D. Lancaster
 80  Data Acquisition and Computer Networking: A Key to Improved Laboratory Productivity.               585
       J. Ryan, C. Sadowski, E. LeMoine
 81   Issues Regarding Validation of Environmental Data.  R. Cohen                                      589
 82   Quality Assurance/Quality Control at a POTW. R. Forman                                         600
 83   Conducting a Performance Evaluation Study-It's not just Analytical Results. L. Dupes, G. Rose         601
 84   Fate or Effect of Data Presented with Qualifier and Laboratory-Established QC Limits. A. Ilias,          613
       J. Stuart, A. Hansen, G. Medina
 85   ISO Guide 25 vs. ISO 9000 for Laboratories. P. Unger                                            621
 86   A Method for Estimating Batch Precision from Surrogate Recoveries. G. Robertson, D. Dandge,        632
       S.  Kaushik, D. Hewetson
 87    Providing Legally Defensible Data for Environmental Analysis. J. Boyd                               634
 88    Audits as Tools for Process Improvement. R. Cypher, M. Uhlfelder, M. Robison                      640
 89    Cost-Effective Monitoring Programs Using Standardized Electronic Deliverables. A. Ilias, G. Medina        641
 90    Performance Objectives and Criteria for Field-Sampling Assessments. M. Johnson                    650
 91    Smart-Sequencing  Environmental GC/MS in a Client-Server Environment. C. Koch, M. Lewis           651
 92    How the US EPA Region 2 RCRA Quality Assurance  Outreach Program, Office of Research and          663
        Development,  and Office of Enforcement and Compliance Assurance Are Helping  Industry to
        Minimize Environmental Compliance Costs. L. Lazarus,  P. Flax, J. Kelly
 93    Quality Assurance and Quality Control Laboratory and in situ Testing of Paper Mill Sludges Used as     668
        Landfill Covers. H.  Moo-Young
 94    Determination of Control Limits for Analytical Performance Evaluation in US DOE's Radiological        686
        Quality Assessment Program. V. Pan
 95    EM Quality Assurance Assessments for Environmental Sampling and Analysis. H. Pandya,              689
        W. Newberry
 96    A  Pattern Recognition-Based  Quality Scoring  Computer Program. A. Sauter                         691
 97    Automated Data Assurance Manager (ADAM). T. Schotz, L. McGinley, D. Flory, L. Manimtim,         692
        D. White
 98    The Diskette Data Dilemma. L.  Smith                                                           707
 99    Development of Assessment Protocols for DOE's Integrated Performance Evaluation Program          712
         (IPEP). E. Streets, P. Lindahl, D. Bass, P. Johnson, J. Marr, K. Parish, A. Scandora, J. Hensley,
        R. Newberry,  M. Carter
 100   International Agreements in Laboratory Accreditation. P. Unger                                   727
 101   International Activities in Reference Material Certification. P. Unger                                736
 102   Automated Sampling Plan Preparation: QASPER, Version 4.1. M. Walsh, W. Coakley, G. Janiec,        737
         R. Weston
 103   Monitoring VOC Losses in Soils Using Quantitation Reference Compounds and Response Pattern       747
        Analysis. S. Ward

-------
Sampling
and Field

-------
   FIELD SCREENING - "QUICK &  DIRTY"  IS  RAPIDLY EARNING  THE
          REPUTATION OF "EFFICIENT  &  COST  EFFECTIVE"
Gavin D. Armstrong, QA/QC Coordinator, Division of Emergency
and Remedial Response, Ohio Environmental Protection Agency,
1800 WaterMark Drive, P.O. Box 1049, Columbus, Ohio 43266-0149


ABSTRACT
As the numbers of environmental remediations and projects
continue to increase, so do the costs associated with them.
With this "trend" in the environmental field,  many
environmental scientists, both in the public and private
sector, are realizing the financial and time-efficient
benefits of field screening.  Existing and emerging field
screening technologies that are specifically geared toward
"real-time" data and information can provide a means of
reducing time and resources typically inherent in most
environmental projects.  As this paper will demonstrate, the
acceptance and practice of utilizing field screening
techniques is an emerging trend that is setting the pace for
environmental investigations and remediations, both today and
in the future.
INTRODUCTION
Representative sample collection is a primary function of any
successful environmental project, be it a site assessment or
an audit.  The ability to achieve this in an inexpensive and
time-efficient manner makes field screening a preferred method
for site analysis.  Field screening techniques facilitate this
quick and cost-effective analysis.  As the market of field
screening and on-site analysis products grows, so does the
ability to conduct a sound, thorough assessment of
environmentally contaminated sites.  This is especially
significant where Phase II site assessments for real estate
transactions are concerned.  Field screening provides a means
of data accumulation without the cost and liability that
permanent features, such as monitoring wells, tend to carry.

The typical sequence of activities for site assessment
includes determination that a contamination problem does
indeed exist, followed by the establishment of objectives for
remediation of those contaminants.  It is during the

-------
                              FIGURE 1
                    SAMPLE COLLECTION APPROACHES
 (Figure not reproduced; it illustrates the judgemental, systematic, and random designs.)

-------
                              FIGURE 2
              SAMPLE COLLECTION APPROACHES - COMBINATIONS
 (Figure not reproduced; combinations shown include stratified random and systematic random designs.)

-------
                                     FIGURE 3
                                 HYPOTHETICAL SITE
                 USE OF FIELD SCREENING TO DELINEATE "HOT SPOTS"
   X = Sample locations utilizing field screening (systematic judgemental sampling).
   • = Confirmatory samples collected.
   (Site sketch not reproduced.  Labeled features: railroad tracks - exact spill area unknown,
   records indicate approximate area of spill; sludge pit - exact contaminant area unknown,
   facility blueprints provide source of approximate area; extent of contamination unknown.)

The amount of background information available regarding possible areas of contamination will assist
in deciding which type of sampling approach will accurately assess the site.  In this hypothetical
situation, a moderate amount of information is known regarding the location of contaminated areas.
Systematic judgemental sampling was utilized, and field screening proved advantageous as the
areas of contamination were more acutely defined.

-------
                               TABLE 1

          SITE INVESTIGATION TIME TRACKING OUTLINE
                     (estimated per project)

                                                            TIME (approx. hours)

1.     Background information search; tasking                        65

2.     Site access; reference material                               60

3.     Site reconnaissance                                           20

4.     Sample projections; coordinate with
       laboratory sample coordinator                                 16

5.     Bottle prep; equipment/vehicle prep                           40

6.     Work plan/safety plan preparation                             40

7.     Field work; field tasks                                      150

8.     Equipment re-stock/cleaning/decon;
       general equipment maintenance                                 40

                                             TOTAL TIME:    431 HOURS
**NOTE:   These times do not include values related to laboratory
           turn-around times and/or analysis times.

-------
establishment of the project objectives that consideration
for field screening comes into discussion.  The statistical
design of sampling should support the established project
objectives.  This is especially relevant for the data quality
objective (DQO) process.  It is these statistics that verify
the samples as being representative of the matrix being
considered.  Common sense, when evaluating the statistical
considerations, will identify the value and pertinence of
field screening.  The ability to field screen, as opposed to
collecting multiple samples for laboratory analysis, will
significantly reduce project expenditures.  While the actual
extent of contamination on-site may still need to be
determined, field screening eliminates the need for
laboratory analysis of every sample point identified during
the statistical evaluation.  Confirmatory sampling of the field-screened
sample points should be conducted so as to reduce the
probability of false-negatives.  This confirmatory sampling
assists in supporting the specific DQO process established
for the project.
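
To put rough numbers on this screening-plus-confirmation strategy, the short sketch below
compares the laboratory workload with and without field screening.  The counts and percentages
are hypothetical assumptions chosen only for illustration; they are not figures from this paper.

    # Hypothetical illustration: laboratory analyses needed with and without
    # field screening; all numbers below are assumed, not measured.
    def lab_samples_needed(total_points, screen_positive_fraction,
                           confirm_negative_fraction):
        """Without screening, every statistically determined point goes to the
        laboratory.  With screening, only the screened-positive points plus a
        fraction of the screened-negative points (confirmatory samples that
        guard against false negatives) are submitted."""
        without_screening = total_points
        positives = round(total_points * screen_positive_fraction)
        negatives = total_points - positives
        confirmatory = round(negatives * confirm_negative_fraction)
        return without_screening, positives + confirmatory

    # e.g., 100 sample points, 15% screen positive, 10% of negatives confirmed
    print(lab_samples_needed(100, 0.15, 0.10))   # -> (100, 23)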

The sampling approach that is determined to be adequate for
contaminant delineation is typically one of three main
processes.  Judgemental, systematic, and random are the three
primary sample collection approaches (figure 1 demonstrates
each of these approaches).  There are,  of course,
combinations of these three approaches (figure 2) which will
be specific to the particular project and established DQO's.
Field screening can assist project coordinators in reducing
the need for random sampling by delineating the "hot-spots"
of a particular site (see figure 3 for a hypothetical site).
Utilization of field screening will result in a scaled-down
systematic sampling approach or judgemental sampling.  All of
this "scaling-down" takes place within a hastened time frame
due to the real-time data generated from the field screening
activities.
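
For concreteness, the sketch below generates sample locations for the systematic and random
approaches over a hypothetical rectangular site; the site dimensions and grid spacing are
arbitrary assumptions, and a judgemental design would simply be an analyst-supplied list of
points near suspected sources.

    import random

    def systematic_grid(width_ft, length_ft, spacing_ft):
        """Systematic design: sample points on a regular grid across the site."""
        return [(x, y)
                for x in range(0, width_ft + 1, spacing_ft)
                for y in range(0, length_ft + 1, spacing_ft)]

    def simple_random(width_ft, length_ft, n_points, seed=0):
        """Random design: n uniformly distributed points across the site."""
        rng = random.Random(seed)
        return [(rng.uniform(0, width_ft), rng.uniform(0, length_ft))
                for _ in range(n_points)]

    # Hypothetical 200 ft x 100 ft site
    grid = systematic_grid(200, 100, 50)    # 5 x 3 = 15 systematic locations
    rand = simple_random(200, 100, 15)      # 15 random locations
    print(len(grid), len(rand))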

On average, an entire sampling episode can take upwards of an
estimated 360 staff hours (1) (Table 1).  This primarily includes
site access, reconnaissance, equipment preparation, sample
collection, sample packaging and shipping, and equipment
decontamination.  The inevitable wait for laboratory analysis and
subsequent return of data is not a part of this estimate.
Depending on the type of analysis required, laboratory
analysis and data return/review can take several more weeks.
CLP turn-around for sample analysis, for example, can take an
estimated 50 to 60 days before the data are returned for
evaluation.  Utilizing field screening techniques can provide
data responses within minutes or hours after sample
collection.  Clearly, it is advantageous to utilize field

-------
screening as much as possible.  Depending on the particular
site situation, field screening may also provide a quick
answer as to whether possible immediate control measures are
needed to avert further contamination and possible health
hazards to the public.

The costs associated with field screening are markedly less
than those incurred through laboratory analysis.  Generally,
the cost of field screening is concentrated in the initial
purchase of the particular piece of equipment.  A portable
volatile analyzer, such as a Micro-Tip, will have an initial
purchase price and subsequent servicing fees, but can be
utilized for many projects over many years.  Laboratory
analyses, in contrast, incur costs per parameter for every
sample submitted.  As each laboratory facility will have its
own schedule of fees, it is difficult to accurately assess
the costs for parameter analysis.  Table 2 is an example of
cost comparisons between various field screening
techniques/equipment and laboratory analysis.  It is
important to remember that the one-time, up-front cost of
purchasing the equipment can be recovered over many years,
whereas a laboratory analysis fee is incurred per sample, per
sampling episode, and per project.
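
The trade-off between a one-time equipment purchase and recurring per-sample laboratory fees
can be sketched as a simple break-even calculation.  The dollar figures below are hypothetical
assumptions loosely patterned on the per-sample and instrument costs quoted in this paper; the
break-even point will differ for every project and product.

    # Hedged illustration only: all dollar figures are assumptions.
    def cumulative_cost(n_samples, fixed_cost, per_sample_cost):
        """Total cost of analyzing n_samples given an up-front fixed cost."""
        return fixed_cost + n_samples * per_sample_cost

    def break_even_samples(field_fixed, field_per_sample, lab_per_sample):
        """Smallest sample count at which field screening becomes cheaper."""
        if lab_per_sample <= field_per_sample:
            return None  # field screening never pays back its fixed cost
        saving_per_sample = lab_per_sample - field_per_sample
        return -(-field_fixed // saving_per_sample)  # ceiling division

    FIELD_FIXED = 22_000      # assumed field instrument purchase price
    FIELD_PER_SAMPLE = 55     # assumed reagents/consumables per sample
    LAB_PER_SAMPLE = 200      # assumed laboratory fee per sample, per parameter
    n = break_even_samples(FIELD_FIXED, FIELD_PER_SAMPLE, LAB_PER_SAMPLE)
    print(f"Field screening pays for itself after about {n} samples")  # ~152
    for count in (50, n, 500):
        print(count,
              cumulative_cost(count, FIELD_FIXED, FIELD_PER_SAMPLE),
              cumulative_cost(count, 0, LAB_PER_SAMPLE))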

What follows is a brief synopsis, by matrix, of some of the
available field screening kits/equipment and the relative
costs.  The costs are estimated and it is not the intention
of this paper to endorse one particular product over another.
Rather, this presentation is designed to provide the reader
with a basis for realizing some of the products that are
available for field screening.
WATER  (INCLUDING GROUNDWATER)


Field Atomic Absorption (in field laboratory) - Metals
Detection Limits:  0.1 ug/L (most metals)
Analysis Time:  2 min./sample (after sample prep)
Cost:  $20,000 - $30,000 initial cost (est.)
Comment:  Easy set-up in a stationary or mobile field lab.  Can
be run off a portable generator.  A specific cathode lamp is
required for each element being analyzed.

-------
WATER, CONTINUED . . .

Immunoassay - Organic Analysis
Detection Limits:  50 ppb to 5,000 ppm (varies)
Analysis Time:  4 to 5 hrs./multiple plate (several samples)
Cost:  @$55/sample or @$22,000 for the system
Comment:  This is still a developing method, but it is gaining
respectability.  Advantages include rapid, accurate results,
minimal sample prep, non-hazardous reagents, and limited sample
volume needed.  Limitations include cross-reactivity and
possible concentration equivalents. (2)

Immunoassay - PCB Analysis
Detection Limits:  0.5 ppm to ppb levels (varies depending on kit)
Analysis Time:  @30-45 minutes (will vary)
Cost:  @$100-200/sample
Comment:  Easy to analyze; rapid, accurate results.  Multiple
samples per kit. (3)


X-ray Fluorescence (XRF) - Metals
Detection Limits:  100 - 600 ug/L (varies)
Analysis Time:  @5-10 minutes/sample (10-30 min. prep time)
Cost:  $80,000 unit - @$50-80/sample
Comment:  Limited sample volume needed (@40 ml), rapid
screening, simultaneous detection (multiple elements/sample).

Headspace Analysis - Organics
GC/MS system, OVA (FID), HNU (PID)
Detection Limits:  Varies depending on the instrumentation
utilized, but usually can detect in the ug/L range.
Analysis Time:  Multiple samples per hour.

Note:     Other standard analyses include those for pH,
dissolved oxygen, conductivity, oxidation-reduction potential
(Eh), and temperature.  These parameters can be measured with
one combined unit, such as a submersible unit, or with
individual testing instrumentation.  All offer a variety of
options depending on the product.  Analysis time is usually
minutes.  Cost will vary depending on the product and its
capabilities.
                                  8

-------
SOIL
Most of the field testing kits and products that are
available for water analyses are also available for soil
analysis.  Costs, analysis times, and benefits will be
directly comparable to those listed for the water analyses.

Penetrometer Testing
Analysis:  Soil electrical conductivity measurements
           Piezometric measurements
           Soil temperature
Penetrometer testing can be used for groundwater, soil gas, and
soil sampling, with the ability to install small-diameter piezometers.
Costs:  Will vary depending on the usage and time for set-up.
Comments:  It is estimated that several hundred geotechnical
soundings have been performed in one day. (4)

Soil Gas
Passive samplers, OVA, HNU, etc.
Analysis Time:  Will vary depending on the bore-hole time,
sample collection, and analysis - typically 90-120
minutes/sample.
Costs:  Will vary depending on the product used.
Detection Limits:  Suited for low-concentration contaminants.
Comments:  Samples are collected using hand-operated augers,
hand-driven devices, hydraulically driven devices, and mobile
drilling rigs.  Soil gas surveys can provide information on
areas of contamination within one day; this will depend on the
number of cores being collected and the ability to mobilize at
various sample points.  Site-specific conditions (e.g., depth
to groundwater, sub-surface geology) will guide the specific
methods to be used during the project.  Depths of sampling will
vary depending on the particular equipment used.  For
example, hand augering will generally achieve depths of 10 to
20 feet, while hydraulically driven devices will achieve
depths of 50 feet or more.

Fiber Optic Sensors - Method under development
Analysis:  Able to detect contaminants to ug/L levels in soil,
water, and air.
Permits real-time analysis, which is especially useful in
difficult or hazardous situations, including spill clean-up
monitoring.
Costs:  It has been indicated that this method will have low
developmental costs but high operational costs (equipment
costs included).
Comment:  Involves use of optical fibers attached to various
analytical instrumentation.  Can be effective over large
distances, but requires a dedicated fiber for each pollutant
to be monitored.  Fiber bundles are being developed to allow
for analysis of several pollutants at once.

-------
SUMMARY
As this presentation has described, there are many benefits
to be acquired through the use of field screening techniques.
Analysis time and costs will vary depending on the product
utilized, but offer the advantage of obtaining real-time
data/information regarding the contaminants and their
locations at a site.  By utilizing field screening
techniques, project organizers/managers will be able to
obtain knowledge about the contaminants present on-site as
well as delineate the major areas ("hot-spots")  of
contamination.  Analytical costs and staff hours can be
significantly reduced, thus providing economic savings to the
overall project.

This presentation has provided a brief overview of the
economic and time-saving benefits of field screening and
offered insight into the variety of methods available to
acquire real-time data and information regarding site
contaminants.  As the number of available products increases,
so does the acceptance and utilization of these field
screening methods.  As this presentation has demonstrated,
field screening - "quick and dirty" - is rapidly earning the
reputation of "efficient and cost effective."
                                   10

-------
REFERENCES

1.   Reinbold, K. Site Investigation Time Tracking Outline,
     1993, Ohio Environmental Protection Agency presentation.

2.   Huellmantel, L.L. Developing a Quantitative PCB
     Immunoassay for Use with Sediments, Asci Corporation
     presentation, 1995.

3.   ENSYS, Environmental Products, Inc., Portable Analytical
     Test Kits for EPA Methods.

4.   Stratigraphies, The Geotechnical Data Acquisition Corp.,
     Penetrometer Subsurface Exploration System.
                                    11

-------
    COST EFFECTIVE MANAGEMENT OF LARGE DRUM JOBS UTILIZING
    HAZSCAN ANALYSES AND U. S. EPA's NEW DRUMTRAK SOFTWARE
Susan R. Benes, Project Manager, Kiber Environmental Services, Inc., 3786 Dekalb
Technology Parkway, Atlanta, Georgia 30340
ABSTRACT

The U. S. EPA Superfund Program is constantly addressing sites with numerous drums, tanks
and other containerized wastes.  The magnitude of these sites can range from a few drums
to tens of thousands of drums; however, regardless of size, keeping track of the sampling,
analyses and disposal data can often require a significant amount  of resources and time.
Hazscan analyses complemented with the U. S. EPA's new DrumTrak computer software can
save time, money and resources when applied to these particular projects.

The analytical cost for tens of thousands of unknown drums can result in millions of dollars,
not to mention the time involved with obtaining the results. Alternatives to this might include
analyzing the material for a few chosen parameters or fully analyzing only a small percentage
of the containers.  This also can prove costly in both time and overall project cost, and it
can create a safety hazard due to incomplete analyses.  An alternative to this
is a succession of screening tests which identify the waste chemical  characteristics in a
relatively short period of time.

This series of screening tests is typically called "Hazscan" or "Hazcat" analyses.  These tests
include water reactivity, air reactivity, water solubility, organic solubility, pH, cyanide,
sulfide, oxidizer, peroxide, flammability, chloride and a screen for polychlorinated biphenyls
(PCB's).   Based  on the Hazscan testing,  the drums  can  then be  composited  for further
analytical and off-site shipment in truckload volumes.

Hazscan testing can  easily be performed either on-site in a mobile laboratory or  off-site at a
stationary  laboratory.   Regardless  of the location  chosen  to perform the analyses, a
tremendous amount of data will be generated as a result of the sampling and characterization
analyses.   This  information can be  placed  into a database; however, most database
applications  are limited to sorting the data and generating reports.

Kiber Environmental Services,  Inc.  has recently completed assisting the U. S. EPA's
Emergency  Response Team in developing a computer program  designed to aid in the
management of data on drum sites.  The program allows the user to quickly manipulate and
generate reports based on the sampling and analytical data generated from each container.
                                               12

-------
These reports can be used to aid in the planning and development of a disposal management
plan, classify the containerized wastes based on the results of the Hazscan testing and provide
a tracking system for each container on site from the initial sample point to off site disposal.
The Program has an additional advantage in that it allows a pen-based computer to be used
to enter the data as samples are being collected in the field.

This  approach provides an alternative to the extensive analytical testing and overwhelming
amount of data utilized today and can  be easily applied to any project containing  large
quantities of unknown wastes.

INTRODUCTION

Abandoned drum sites often pose serious challenges to environmental cleanup organizations
from both the public and private sector.  These sites often have hundreds or even thousands
of containers of unknown waste which have the potential to be reactive, shock-sensitive or
even explosive.  The  drums, tanks or other waste containers are typically scattered  in an
unorganized fashion throughout the  site and are for the  most part in  various  stages of
deterioration.  Clues to what may be in the tanks and drums can be gained  from the past
history of the location or facility;  however, often times, past histories  can  be  limited or
deceiving.

Options for removal of the waste material are non-existent until the waste can be identified
as to its chemical contents.  Options for identification of the waste material as well as options
for disposal  are numerous and depending on the choice,  some of these  options can be quite
costly.  Sampling only a small number of the drums and assuming the remaining material is
similar to the ones sampled could result not only in sending material to a facility for which
it is out of specification, but also in exposing site personnel to a potentially hazardous
situation resulting from blending unknown and/or incompatible materials.  A typical approach
to the removal would be to identify and characterize,  blend similar materials and dispose of
each segregated wastestream.  This approach is detailed  below.

DETAILED APPROACH

An approach taken for removal of waste at any unknown drum site should be  organized and
planned properly.  Steps should be taken to assure the end result of removal and disposal of
the waste is  always held as the overall objective of the project. Removal and disposal of the
waste is typically more cost effective by shipping the material in bulk  quantities.  In order
to accomplish this, drums which exhibit  similar chemical and physical characteristics  can be
blended together.  Once blended,  the materials  can be shipped off-site.  This process  is
outlined below:
                                                13

-------
                       UNKNOWN WASTE DISPOSAL FLOW CHART
In order to blend the materials the waste should be screened individually for characteristics
which may cause the material to be hazardous as well as reactive when mixed with other
materials.  This screening process can serve as the first step in the identification process of
the waste.  This process need not be detailed, but should include screening tests to assess the
chemical and physical characteristics of the wastes.  The process  should also not be time
consuming due to the potential number of samples that will have to be processed (hundreds
or even thousands).

The regime of "Hazscan" or "Hazcat" screening tests which have been successfully utilized
on large drum projects both public and private sector includes thirteen separate screening tests
which can identify potentially reactive  material  as  well  as  a majority of  the  Resource
Conservation and Recovery Act (RCRA) characteristics.  These tests include air and water
reactivity,  radioactivity, water  and  organic  solubility,  pH,  cyanide,  sulfide,  oxidizer,
peroxide, flammability, chloride and PCB's.  The tests, with the exception of the flashpoint
and PCB  screen are all wet chemistry methods that employ a series of color and/or phase
changes that indicate a positive test result.  The flammability screen is  performed utilizing a
Setaflash closed-cup tester and the PCB screen is performed utilizing a gas chromatograph.

Results gained  from these tests can aid a chemist in classifying the  material into hazard
characterizations such as water soluble acid oxidizing liquids, water soluble cyanide liquids
and organic soluble flammable solids.  Positive test results from each  of the screening tests
are used to flag to the individual container as to its contents.  For example, a sample with a
positive test result for water solubility, oxidizer and a pH of 2 would indicate a water soluble,
acid oxidizing material. This method of classification would continue until all drums or waste
materials were placed into a hazard classification.  These classifications can then be utilized
                                                 14

-------
to "group" the drums into chemically compatible bulking groups for additional testing and
analyses.
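
A minimal sketch of this rule-based classification step is shown below.  The field names,
thresholds, and category labels are illustrative assumptions only; they are not the actual
Hazscan or DrumTrak logic.

    # Illustrative hazard characterization from Hazscan-style screening results;
    # rules, thresholds, and labels are assumptions for demonstration only.
    def characterize(results):
        """results: dict of screening outcomes for one drum, e.g.
        {'water_soluble': True, 'oxidizer': True, 'pH': 2.0}"""
        parts = []
        parts.append("water soluble" if results.get("water_soluble") else "organic soluble")
        pH = results.get("pH")
        if pH is not None:
            if pH <= 2:
                parts.append("acid")
            elif pH >= 12.5:
                parts.append("base")
        if results.get("cyanide"):
            parts.append("cyanide")
        if results.get("oxidizer"):
            parts.append("oxidizing")
        if results.get("flammable"):
            parts.append("flammable")
        parts.append(results.get("physical_state", "liquid") + "s")
        return " ".join(parts)

    # Example from the text: positive water solubility, positive oxidizer, pH of 2
    print(characterize({"water_soluble": True, "oxidizer": True, "pH": 2.0}))
    # -> "water soluble acid oxidizing liquids"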

The next step in the removal process is the blending or "bulk" testing.  The purpose of bulk
testing is to attempt to duplicate the on-site blending of chemically compatible wastes.  On-
site blending of the  material is an alternative to shipping  each container separately  for
disposal.   This "bulking" of chemically compatible materials prior to shipment will take
advantage  of the more cost effective  bulk disposal  prices  rather than the more costly
individual shipment and disposal costs.  The formation of Bulk Groups from individual hazard
characterizations minimizes the number of disposal wastestreams that have to be dealt with.
Bulk  Groups are typically chosen based on the disposal alternative available for the waste
material on-site.   Bulking Groups typically are chosen during the development of the site
waste disposal plan.  All waste materials that are proposed to be disposed of utilizing  the
same alternatives such as wastewater treatment, fuels blending or landfilling, can be placed
in the respective Bulking Groups.

The "bench-scale" blending or bulk test will monitor the procedure for possible reactions that
could occur from combining high concentration wastes. By combining proportional volumes
of waste from chemically and physically similar hazard characterizations the blending can be
monitored for temperature increase, polymerization and gaseous emissions that may occur.
A bulk test should be completed for each "Bulking Group" that is proposed. An example of
such  a  grouping is Flammable Solids.  This bulk group would include all solids that were
found to be flammable or combustible and do not exhibit any other chemical characteristics
that would disallow the material to be incinerated.  A pictorial example of this is presented
below:
                      Example of Grouping for Bulk Testing and Bulk Groups
                      (Pictorial example not reproduced; it shows flammable and
                      combustible solids grouped into a "Solids for Incineration"
                      Bulk Group.)
                                                 15

-------
After the bulking test has been completed, further,  more  extensive  testing is  usually
completed  to  further  identify  the chemical  composition of  the  waste.   This  includes
performing analyses  such as volatiles, semivolatiles, metals and pesticides.  These analyses
will help complete the disposal  facility requirements such as profile sheets that are required
by various disposal facilities for approval of waste into  the respective facility.

Characterization and bulk testing can be conducted at a project site utilizing an on-site mobile
laboratory, or the samples can be transferred to a fixed-base laboratory for the analyses.
There are several advantages to an on-site laboratory; the biggest is easy access for
interaction between the site supervisory personnel and the chemists in the laboratory.  The
on-site laboratory is dedicated to the project and, as a result, turn-around times are quicker
and communications become easier and clearer.  The project is also not delayed by the downtime
involved with transferring the samples to the fixed-base lab and waiting for the analyses.  All
of these items contribute to the overall cost savings for the project.

While this approach of testing, characterization and bulking is very cost effective and timely,
the organization and cross-checking involved with this  process is sometimes very detailed.
Attempts  to place this information into a standard computer database program can be
somewhat limited  to allowing the user to generate  a hard copy of the  data.  In order to
facilitate the entire process, the  U. S. EPA has recently developed in conjunction with Kiber
Environmental Services, Inc. a computer software program that allows the user to track these
unknown waste containers from initial inventory  and sampling  through the characterization
and ultimate off-site  disposal.

This computer software program is actually a compilation of four different databases which
track physical container data and Hazscan or Hazcat results as well as generate Hazard
Characterizations and Bulking Groups.  Each of these four databases is programmed to
function as part of one integrated program that allows the user to manipulate and generate a
multitude of individual reports.  The DrumTrak Program (Program) was designed to be utilized on large
drum sites that are following the process described previously.  Each phase  of this process,
including Inventory and Sampling,  Hazscan Testing, Hazard Characterization,  Bulk Group
Selection, Disposal and Shipment of waste off site, can be tracked.

On most drum  sites,  during the  sampling and  inventory process various information is
recorded on a  "drum log" which aids the tracking of the actual container as well as the waste
inside.  This can be transferred to the first screen in the program in the format presented
below:
                                                16

-------
                     U.S. EPA/ERT Drum Tracking Ver 1.0

   Drum ID:                          Manufacturer:
   Date:                             Chemical:
   Time:                             Generator:
   Drum Type:                        Location:
   Drum Top:                         Sampler:
   Drum Cond.:                       Witness:
   Debris/PPE:                       Drum Size:
                                     % Full:
                                     Overpack Size:
                                     No. of Layers:

   LAYERS         PHYSICAL STATE    COLOR    CLARITY    LAYER DEPTH
   1 (Top)
   2 (Middle)
   3 (Bottom)


The Program continues by allowing entry of all the Hazscan test results. Once the test results
have been recorded, the Program will automatically classify each container and place each
one into  a Hazard Characterization category.  The user can then generate proposed Bulk
Groups for the site and assign each Hazard Characterization category to its respective Bulk
Group.  Finally, the program allows for tracking disposal of each container by the manifest
number.

The true benefits of this Program can be appreciated once all information has been entered
into the  database and the full application of the Program can begin.  Over  twenty-five
different pre-set reports can be generated to aid the user and other on-site personnel  in dealing
with the  waste.  These reports include the following:

       Individual Drum Log Sheet With Data
       Numerically Arranged Hazscan Test Results
       Drum Marking by Drum ID Number
       Drums by Location
       Drums by Manifest
       Drums Missing Hazscan Results
       Inventory of Empty Drums
       Inventory of Drums Containing Personal Protective Equipment
       Drums By Hazard Characterization
       Drums By Bulk Group
       Summary of Hazard Characterization
       Summary of Bulk Groups
       Bulk Groups by Target Volume (listing of drums up to a user chosen target volume)
       Disposed Drums by  Bulk Group
                                               17

-------
Quite simply, what this Program does for its user is automate all of the tasks which were once
required to be completed by hand.  These include sorting current paperwork to find containers
that were not analyzed, categorizing and interpreting Hazscan results, assigning drums to
Bulk Groups, generating a list of drums for a tankerload of bulk waste, and finally printing a
listing of all drums assigned to a particular manifest.
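
As a rough illustration of the kind of queries these reports automate, the sketch below runs a
few of them over an assumed in-memory list of drum records.  The record fields, values, and
report names are hypothetical and do not reproduce the Program's actual schema.

    # Hypothetical drum records; field names are assumptions for illustration.
    drums = [
        {"id": "D-001", "hazscan_done": True,  "bulk_group": "Flammable Solids",
         "volume_gal": 55, "manifest": "M-100"},
        {"id": "D-002", "hazscan_done": False, "bulk_group": None,
         "volume_gal": 30, "manifest": None},
        {"id": "D-003", "hazscan_done": True,  "bulk_group": "Flammable Solids",
         "volume_gal": 55, "manifest": None},
    ]

    def drums_missing_hazscan(drums):
        """'Drums Missing Hazscan Results' style report."""
        return [d["id"] for d in drums if not d["hazscan_done"]]

    def drums_by_manifest(drums, manifest):
        """Listing of all drums assigned to a particular manifest."""
        return [d["id"] for d in drums if d["manifest"] == manifest]

    def tankerload(drums, bulk_group, target_gal):
        """'Bulk Group by target volume': drums of one group up to a target volume."""
        load, total = [], 0
        for d in drums:
            if d["bulk_group"] == bulk_group and total + d["volume_gal"] <= target_gal:
                load.append(d["id"])
                total += d["volume_gal"]
        return load, total

    print(drums_missing_hazscan(drums))               # ['D-002']
    print(drums_by_manifest(drums, "M-100"))          # ['D-001']
    print(tankerload(drums, "Flammable Solids", 60))  # (['D-001'], 55)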

This Program becomes especially important to any transportation and disposal coordinators
involved with the project, by having all of the containerized waste information regarding the
physical and chemical characteristics available  at "the touch of a key".  Transportation and
disposal coordinators can minimize the time required to coordinate and arrange disposal for
the waste on-site by utilizing the Program to generate information about the classification and
bulking of the waste.

The overall  benefit of the Program is that it reduces the time and personnel commitments
which are normally  required to process  and track the  vast amount of information that is
generated  during the  project and eventually used to dispose of the material. By utilizing the
Program, in conjunction with an on-site laboratory, two full time personnel can easily process
and track approximately 100 to 200 samples in a twelve hour work shift.

An additional  time saving application of the program is the field data entry system that is
computer  pen-based.  This "Drum  Pen" Program allows the user to take  a pen-based
computer into  the field and enter the data as the samples are being collected. This eliminates
the need to generate a hard copy of the drum information while sampling and then enter this
information  later in the database.  The elimination of this "double-handling" of the data can
save a considerable amount of time in both sampling and data entry.  The program can even
be used to later generate hard copies of all drum information formatted in a  "drum log" sheet.

SUMMARY

Application of the Hazscan or Hazcat testing regime along with the U. S. EPA's new DrumTrak
Software Program can save considerable time, money, and resources on any project requiring
the management of numerous unknown containers.  This approach provides an  alternative to
the extensive analytical testing and overwhelming amount of paperwork utilized today and can
be easily applied to any large-scale project containing large  quantities of unknown wastes.
                                               18

-------
        ON-SITE LABORATORY SUPPORT OF OAK RIDGE NATIONAL
    LABORATORY ENVIRONMENTAL RESTORATION FIELD ACTIVITIES

James L. E. Burn, Ph.D., Sr. Scientist, Bechtel Environmental, Inc., Oak Ridge,
Tennessee 37831-0350

ABSTRACT

A remedial investigation/feasibility study has been undertaken at Oak Ridge National
Laboratory  (ORNL).   Bechtel  National,   Inc. and  partners  CH2M  Hill,  Ogden
Environmental and Energy Services, and PEER Consultants are contracted to Lockheed
Martin Energy Systems, performing this work for ORNL's Environmental Restoration
(ER) Program.  An on-site Close Support Laboratory (CSL) established at the ER Field
Operations Facility has  evolved into  a laboratory  where quality analytical screening
results can be provided rapidly (e.g.,  within 24 hours  of sampling).  CSL capabilities
include three basic areas: radiochemistry, chromatography, and wet chemistry.  Besides
environmental samples, the  CSL routinely  screens  health  and  safety  and  waste
management samples. The cost savings of the CSL are both direct and indirect. Direct
cost savings are estimated based on comparable off-site quick-turnaround analytical costs.
Indirect cost  savings are estimated based on: reduction of costs and liability associated
with shipping for off-site analyses, preparation for sampling, assistance to Health &
Safety staff, use of CSL results to focus further sampling efforts, and sampling crew
downtime. Lessons learned are discussed.

INTRODUCTION

A remedial  investigation/feasibility  study  (RI/FS) began at  Oak Ridge  National
Laboratory (ORNL) in  1987 for ORNL's Environmental Restoration (ER) Program.
Bechtel National, Inc. and partners CH2M Hill, Ogden Environmental and Energy
Services, and PEER Consultants are the RI/FS subcontract team. In 1989 the project
established the Close Support Laboratory (CSL) to provide rapid radiological (α/β/γ) and
volatile organics screens on samples to determine DOT classifications before shipment
to the off-site CLP laboratory. The advent of the Observational Approach and SAFER
led the  RI/FS team to shift  the main use of the CSL from preshipment screening to
screening  to help in technical decisions (e.g., delineating the extent of contamination).
Basic wet chemistry techniques were added to assist in rapid and cost-effective sample
characterization.  CSL scope is now changing further to support other groups performing
environmental restoration activities  for ORNL ER.

TECHNICAL SUPPORT

The CSL provides the quality, quick-turnaround data needed to support results-based field
decision making.   Also,  CSL  staff  assist  RI/FS project geologists with planning,
                                            19

-------
interpretation, and application of sampling  and analysis plans  and associated support
documents.  The staff currently support ER  field efforts with analytical planning, cost
estimating, and data interpretation.

We interact with various ER project staff to provide pre- and post-field-support activities
including   preparation   of   sampling   kits,   sample   screening   for   DOT
transportation/packaging and radioactivity checks, analytical planning and coordination
with off-site confirmatory-level laboratories,  receiving excess sample from off-site labs,
and archiving or disposing of sample remnants (thus closing the chain-of-custody).

Mobile laboratory trailers at the ORNL ER Field Operations Facility (FOF) house the
CSL. This location is convenient for sampling teams to pick up sample kits or to deliver
samples since the FOF is the starting and stopping point for most ER field activities.  We
routinely  screen  environmental, health and  safety,   low-level decontamination  and
decommissioning and waste management samples. Our sample screening results are used
by off-site labs to  guard against instrument contamination and detector saturation.

ANALYTICAL TECHNIQUES

The analytical scope of the CSL covers basic radiological and volatile organics screening,
and  basic  wet chemistry.   Analyses can  be performed  rapidly,  and results from
complementary  techniques are  reviewed  to  provide  a  more  complete  technical
understanding. Method detection limits are comparable to off-site confirmatory labs.
Minimum detectable activity values for radiological samples may be adjusted by changing
sample sizes and count times to  meet the customer's needs.  Radiochemical analyses
include  gamma spectroscopy, tritium and carbon-14  screens using liquid scintillation
analysis, and gross alpha and beta counting.  Cerenkov counting and crown-ether-based
separation are the two rapid methods used for determination of radiostrontium in water
samples.
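
The way count time and sample size drive the minimum detectable activity can be illustrated
with a Currie-type expression.  The formulation and the example numbers below are generic
assumptions for illustration, not the CSL's calibration or procedure.

    import math

    def mda_bq_per_kg(background_cps, count_time_s, efficiency, sample_mass_kg,
                      yield_fraction=1.0):
        """Currie-style minimum detectable activity (Bq/kg); longer counts and
        larger samples lower the MDA."""
        background_counts = background_cps * count_time_s
        ld_counts = 2.71 + 4.65 * math.sqrt(background_counts)  # detection limit, in counts
        return ld_counts / (efficiency * yield_fraction * count_time_s * sample_mass_kg)

    # Illustrative comparison: quadrupling the count time roughly halves the MDA,
    # and doubling the sample mass halves it again (assumed values throughout).
    for t_s, m_kg in [(600, 0.1), (2400, 0.1), (2400, 0.2)]:
        print(t_s, m_kg, round(mda_bq_per_kg(0.02, t_s, 0.30, m_kg), 2))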

Gamma spectroscopy is  performed via an intrinsic germanium detector with a computer-
based multichannel analyzer.  Due to the lack of an autosampler and the long count times
often required, the gamma detector system  is a bottleneck  in sample throughput.  A
second detector will soon be on-line to increase our capacity.

Liquid scintillation is used  to perform 3H and screening 14C analyses.  Samples are not
distilled; instead,  soils are DI water extracted (1:1 w/v) and instrumentation software
corrects for quenching  effects in all samples.   Carbon-14 can be excluded  based on
negative screening results but cannot be confirmed based on positive results (other weak
or quenched β particles may cause 'false' positives).

Gross α and β are measured using proportional counters.  Low-activity samples are
analyzed on a low-background gas-flow proportional counter.  Higher-activity samples
                                             20

-------
are analyzed on scalers because higher-activity samples might contaminate the low-
background counter, and the ZnS solid scintillator probe is immune to the β→α cross-
talk observed in the α signal from the gas-flow proportional counter.

The CSL analyzes 90Sr in water samples using one of two methods.  Strontium may be
separated from unfiltered or filtered samples using SrSpec columns (EiChrom), then
immediately counted for 90Sr as gross β before substantial 90Y ingrowth.  Alternatively,
after a two-week 90Y ingrowth, 90Sr Cerenkov counting may be performed on filtered
samples using the liquid scintillation counter (and no scintillation cocktail).  Strontium-
90 Cerenkov counting also requires gamma spectroscopy to provide a 137Cs/60Co correction
to the Cerenkov-determined activity.
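
A hedged sketch of the interference correction described above is given below: the Cerenkov
count rate attributed to 90Sr (via its 90Y daughter) is obtained by subtracting the
contributions expected from the 137Cs and 60Co activities reported by gamma spectroscopy.  The
efficiency factors and example values are placeholders, not the CSL's calibration values.

    # Illustrative correction only; all efficiencies and inputs are placeholders.
    def sr90_activity_bq_per_l(gross_cerenkov_cps, background_cps,
                               cs137_bq, co60_bq,
                               eff_sr90y, eff_cs137, eff_co60, volume_l):
        """Cerenkov-determined 90Sr activity (Bq/L) after subtracting the count
        rates expected from 137Cs and 60Co measured by gamma spectroscopy."""
        net_cps = gross_cerenkov_cps - background_cps
        interference_cps = cs137_bq * eff_cs137 + co60_bq * eff_co60
        return (net_cps - interference_cps) / (eff_sr90y * volume_l)

    # Example with assumed values: 20 mL aliquot, modest 137Cs/60Co interference
    print(round(sr90_activity_bq_per_l(1.20, 0.05, cs137_bq=2.0, co60_bq=0.5,
                                       eff_sr90y=0.55, eff_cs137=0.02,
                                       eff_co60=0.01, volume_l=0.02), 1))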

Volatile  organics  screens  are performed  by  gas  chromatography  (GC)  using
photoionization (10.2 eV) and Hall electrolytic conductivity detectors and a CSL-specific
method based  on EPA 601 and  602.  A sixteen-port purge-and-trap autosampler
introduces  samples onto the GC column.  The primary volatile organic contaminants of
concern are fuel-based aromatics and solvent-based chlorinated hydrocarbons.

Basic wet chemistry for environmental waters includes alkalinity, dissolved and
suspended solids, ion chromatography (IC), and, for various matrices, pH and
resistivity.  IC is used to analyze both cations and anions following a CSL-specific
method based on EPA 300.  Together, IC and alkalinity provide an ionic profile of water
samples.

QUALITY ASSURANCE

The mission of the CSL is to provide rapid screening (EPA level II) for the ORNL ER
program. The  lab  delivers  these results, using lab-specific methods,  without  time-
consuming deliverable requirements.  Controlled CSL procedures  and the laboratory
quality assurance plan document quality requirements for each  analysis and general
laboratory  practices. QA staff from Bechtel, ORNL Oak Ridge Reservation, and DOE
Oak Ridge routinely audit the lab's procedural conformance and good lab management
practices.  The CSL has used commercially prepared  performance evaluation  (PE)
samples to fine tune method accuracy.  The radiological PE samples were obtained from
Analytics and the chemical from Environmental Resource Associates. Recently, we have
begun to take  part  in EPA-sponsored radiological (EMSL-LV) and chemical  water
pollutant (EMSL-Cinci) PE studies.   Participation in these studies will  verify our
accuracy and interlaboratory comparability.
                                             21

-------
COST EFFECTIVENESS/SAVINGS

The CSL is saving dollars both directly and indirectly. Direct cost savings are based on
comparable  off-site  quick-turnaround  analytical  costs; premium  charges for rapid
response from off-site laboratories make the CSL especially cost-effective.  The RI/FS
team has documented CSL savings estimated to be greater than $1 million for each of the
last two fiscal years.

Indirect savings  are difficult to quantify.   They are based on reduction of costs and
liability associated with shipping samples off-site for analysis, preparing for sampling and
sample shipping, assisting Health and Safety (H&S) staff, and sampling crew downtime.
CSL data  provides for proper DOT classification of environmental samples.   Sample
container  procurement,  sample kit preparation, and sample chain of custody are  all
centralized through the CSL for most samples analyzed by the CSL.  CSL staff also
generally prepares and packages samples for shipment to off-site labs for further analysis.
H&S staff uses the CSL to analyze monitoring samples to minimize personnel risk, and
field sampling crews are more productive because of the rapid turnaround of H&S data
and the ability to base new sampling on the results of previous sampling.  The RI/FS team has made
extensive  use of CSL data  in the Remedial Investigation for Waste Area  Group 5 at
ORNL and other site characterization projects.

LESSONS LEARNED

Several  lessons learned at the CSL may apply to similar screening laboratories.

*    Participate in the initial scoping or DQO Process activities to identify data uses
      and opportunities to  use CSL data.

*    Determine a  general  prioritization  scheme  for  samples and analyses  before
      competing deadlines or customers require  one.   This planning should  include
      holding time, data end-use, and lab staffing considerations. Lab customers should
      be  aware  of and agree with this scheme.

*    Establish appropriate sample selection guidelines to identify possible further
       analyses (e.g., perform γ spectroscopy only when β activity is greater than x)
       within the screening lab or at an off-site confirmation lab.  Setting up a
       formalized analytical decision tree will save money by reducing unnecessary
       analyses and documentation requirements (a minimal illustrative sketch of such
       a rule follows this list).

*    Invest in an expandable data handling system and integrate data handling into the
      appropriate project data management plan.  Data quality can be undermined by
      a poor or 'make-do'  handling system.
                                             22

-------
*     Stagger staffing hours.  Varied schedules reduce overtime, improve morale, and
       serve both the first-of-the-day customers (generally technical staff) and end-of-the-
       day customers (generally field sampling staff).
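
As referenced in the sample-selection bullet above, the following is a minimal,
hypothetical sketch of a formalized analytical decision rule of that kind; the
trigger names, units, and threshold values are illustrative assumptions only, not
CSL procedure values.

```python
# Hypothetical screening decision tree: thresholds and analysis names are
# illustrative, not CSL procedure values.
GROSS_BETA_TRIGGER = 100.0   # pCi/g, hypothetical trigger for gamma spectroscopy
VOC_SCREEN_TRIGGER = 1.0     # mg/kg, hypothetical trigger for off-site confirmation

def further_analyses(gross_beta, voc_screen):
    """Return the follow-up analyses suggested by the screening results."""
    follow_up = []
    if gross_beta > GROSS_BETA_TRIGGER:
        follow_up.append("gamma spectroscopy (on-site)")
    if voc_screen > VOC_SCREEN_TRIGGER:
        follow_up.append("full VOC analysis (off-site confirmation lab)")
    return follow_up or ["no further analysis"]

print(further_analyses(gross_beta=250.0, voc_screen=0.2))
```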

FUTURE DIRECTIONS

The mission of the CSL will likely stay the same as the CSL continues under another
subcontractor to Energy Systems, although with the recent appointment of a technical
interface, Energy Systems will take a more active role in CSL activities.  An upgrade to
the database is under way  to ensure seamless electronic data delivery to CSL customers
and the Oak Ridge Environmental Information System.  As quick-turnaround screening
data are more broadly accepted, the analytical capability and sample capacity of the CSL
will likely expand.

SUMMARY

The ORNL RI/FS team established the CSL to provide rapid radiological (α/β/γ) and
volatile organics screens for ER.  Basic wet chemistry techniques were added to assist
in rapid and cost-effective sample characterization.  The CSL provides its RI/FS and
other ER customers with technical and analytical support, and lessons learned may have
potential application for similar  sites  or labs.  ER is expanding the CSL's scope to
support general environmental restoration/waste management activities at ORNL.
                                                23

-------
                      A DECISION ANALYSIS APPROACH
                       TO DETERMINING  SAMPLE  SIZES
                         FOR SITE INVESTIGATION

Tim  LeGore,  Principal  Engineer,  Boeing  Computer  Services  Richland,
Information Systems and Services, Richland, Washington 99352; Nagaraj K.
Neerchal.   Senior  Research  Scientist,  Pacific  Northwest  Laboratory,
Analytic Science and Engineering Department,  Richland,  Washington 99352.

INTRODUCTION

A  current  problem in  environmental  restoration work  is  the lack  of a
detailed and  complete  definition of the overall site  investigation and
remediation process.   A  generic  process  has  been created under the RCRA
and Superfund laws but it has many gaps.  A number  of individual tools
have  been  developed  to   deal  with  individual  parts   of  the  site
investigation and remediation process, but very little has been done to
connect these parts into  a contiguous whole  in which site investigation
parameters (e.g.,  sample  plan design) can be clearly and traceably related
to the identified risk goals.

This  paper  is  an  attempt to remedy  at least  a  small portion  of that
problem.  The connection between a desired post-remediation condition of
a waste site and the data to be collected during the site investigation is
identified.   To portray the  connection,  the  following  information is
developed and presented:

•  the data necessary  to describe a contaminated waste site
•  the structure of the decision process
•  the relationship of the site data to risk estimates
•  the basis  for designing a sampling plan
•  the required information about a proposed remediation process.

This paper does not concern itself with variations on a theme for how to
perform the risk assessment.  It  will  be  assumed that the risk assessment
methodology is defined and is linear with concentration.   In addition,
only soils are dealt with.

OVERVIEW

The fundamental decision to be made at a waste site is whether or not to
remediate the site.   Secondary considerations include choosing a specific
remediation technology,  where  to remediate,  and how much to remediate.
Once  the  basic mechanism  for arriving  at  the fundamental  decision is
established,  the secondary considerations can be addressed  as optimization
parameters.

This analysis  is based on the following logic:

•  Describe the model of waste site contamination used most commonly in
   site investigations.

•  Describe the parameters used in statistical decision making.

•  Structure the decision process based on the contamination model and
   statistical decision parameters.

•  Use the decision process structure to establish the expected outcome
   of the decision as a function of the contamination model parameters,
   sample plan parameters, and associated decision error rates.

•  Optimize on the  expected outcome of the decision process to obtain
   the best combination of sample plan parameters and error rates for
   a given range of contamination conditions.

DESCRIBING CONTAMINATION

The U.S. Environmental Protection Agency  (EPA) describes the statistical
tools   to  be   used  for  designing   sampling   plans   and  identifying
contamination at facilities.  The model of contamination used (EPA 1994)
is as follows:

•  Some portion of the facility may be contaminated.  This portion is
    identified as epsilon (ε) and may range from 0 (uncontaminated) to
    1 (all of the facility contaminated).

•  The contaminated portion of the facility has had a constant amount of
    contamination added, so that the overall average concentration for the
    site is above the background levels of analytes of concern by a quantity
    identified as delta (δ).

In addition to the above description of contamination, the uncontaminated
conditions of the facility are assumed to  contain the analytes of concern
at concentration levels that vary naturally across the facility.

The values of ε and δ that are important to detect depend upon the nature
of the background distribution, the risk calculation methodology, and the
acceptable post-remediation risk levels.  The risk methodology is assumed
constant and will not be further considered.  A relatively direct linkage
can be made between the desired post-remediation risk and critical values
of ε and δ.  These critical values represent the conditions for which the
sampling plan must  be designed in order to achieve the expected level of
acceptable risk.

PROBABILISTIC DECISION MAKING

The decision process  for whether or not the waste site is contaminated is
usually  based upon a quantifiable decision rule (i.e.,  a  statistical
hypothesis test) that may or  may not  yield the "correct" decision given
the true waste site conditions.   In a  site investigation, the accuracy of
a decision is measured in terms of the probabilities associated with the
two possible decision errors, false positives and false negatives.
                                         25

-------
The null  hypothesis being  tested  (EPA 1994)  is  "The  reference-based
cleanup standard  achieved."   The alternative  is  "The  reference-based
cleanup standard not achieved."   In more common terms,  this  amounts  to
asking if the waste site is clean or dirty.  Using the null and
alternative hypotheses, the nature of the decision errors can be
identified:
                                      TRUE CONDITION OF WASTE SITE
                                   CLEAN                  CONTAMINATED

  DECISION ABOUT    CLEAN          CORRECT DECISION       TYPE II ERROR,
    WASTE SITE                     (1 - α)                β
    CONDITION
                    CONTAMINATED   TYPE I ERROR,          CORRECT DECISION
                                   α                      (1 - β)
The decision rule will have two probabilities associated with it, α and β.
These quantities indicate the probability of committing each of the two
possible errors in making the decision.  The first kind of decision error
(false positive, also called type I error in the statistics literature) is
denoted by α and is the probability of declaring the waste site
contaminated when it is not.  Because the probabilities of declaring the
waste site clean or contaminated, given the true condition is clean, must add
to 1, the probability of making a correct decision under this condition is
1 - α.

The second kind of decision error (false negative, also called type II
error in the statistics literature) is denoted by β and is the probability
of declaring the waste site clean when the true condition is contaminated.
Because the probabilities of declaring the waste site clean or
contaminated, given the true condition is contaminated, must add to 1, the
probability of making a correct decision under this condition is 1 - β.
The term "power" is used to denote the quantity 1 - β.  β depends on the
number of samples taken from both the background (n) and waste site (m),
the extent and magnitude (ε and δ) of the contamination at the waste site,
and the value of α chosen.

ERROR RATES AND SAMPLING PLAN DESIGN

   The usual procedure in designing a sampling plan is to specify the
value of α and one or more combinations of β, ε, and δ.  From this
information, the number of samples from both the background and waste site
areas can be determined by consulting the appropriate power tables.  The
main problem is in determining the necessary and appropriate combinations
of β, ε, and δ.  This is where the connection to the risk assessment
process must be made.
                                         26

-------
POST-CLOSURE RISK

We structure the sampling design problem in terms of a decision tree, to
bring out all  the  relevant  steps  involved and all possible decisions in
facing a variety of uncertain scenarios and the consequent outcomes.  This
is particularly useful in computing probabilities  associated with various
final outcomes  of  a complex process and  thereby  computing the expected
value of a potential decision taken.

Use of such tree diagrams and computation of probabilities associated with
final outcomes  is  described in many statistics books.  See, for example,
Bernardo and Smith  (1994).  Using  the decision tree construct, we develop
the  concept  of   post-closure  risk.    Post-closure  risk  provides  a
quantitative measure  for describing the  goal of a  site investigation,
thereby providing a means  for choosing from among various  sampling designs
and parameters  for the site investigation.

The  use of  a  decision  rule based  upon the  outcome  of  a  statistical
hypothesis test  is  a node in the decision tree.  The outcome of the test
is either remediate or  stop.   Figure  1 shows a simplified decision tree
for the choice of sampling plans.  The decision "Use Sample Plan Xi" is
followed by two binary nodes in sequence for a total of 2² = 4 possible
outcomes.

The first node in the sequence describes the possibilities for the true
condition of the waste site.  The variable δ0 is used to create the binary
nature of the node.  δ0 may be arbitrarily specified as a detectable
difference above background, or it may be interpreted as a regulatory
limit, such as a maximum permissible concentration.  The probability φ
embodies the uncertainty in the knowledge about the true state of nature.
In Bayesian terms, φ is the prior estimate of the probability of the waste
site being dirty (δ0 < δ), while 1 - φ is the probability of the waste site
being clean (δ0 > δ).

The second node in the branch is the hypothesis test used to trigger a
remediation action.  The power of the test (1 - β) is the probability of
accepting Ha when Ha is true.  Similarly, when H0 is true, 1 - α is the
probability of not performing a remediation action.

The expected risk  in the waste site after closure will depend upon

1. the residual risk (Rc2) if the site is determined clean

2. the risk from contamination (Rc1) if the site is determined
   contaminated

3. the power of the decision rule to detect the contamination (1 - β)

4. the risk levels achievable by the remediation process (Rr1, Rr2).

The expected post-closure risk can be  constructed as follows.  The pre-
closure waste site risk is divided into two parts, the risk from a clean
site and the risk associated with a contaminated site.  The expected post-
closure risk, Rf, will be the weighted average of the risk after
remediation, Rr1, and the baseline (current) risk.  The weighting factors
are the values 1 - β and β, respectively:
      Rf = E(Risk) = φ[βRc1 + (1-β)Rr1] + (1-φ)[(1-α)Rc2 + αRr2]            (1)

      where  Rc1 = Waste site risk, given δ0 ≤ δ
             Rr1 = Residual risk after remediation, given δ0 ≤ δ
             Rc2 = Waste site risk, given δ < δ0
             Rr2 = Residual risk after remediation, given δ < δ0
The connection to the sampling plan design is made by replacing β with the
functional form of the power curve, β = f(ε, δ, n, m).  In practice, the
functional form of f(ε, δ, n, m) may not be known.  What will be known are
discrete values from a power table.  The tabular values can be entered
into the equations and Rf calculated.

The value of φ in Equation (1) is unknown and, as described above, must be
estimated a priori.  A reasonable estimate of φ may be obtained by looking
at the proportion of a site contaminated, ε.  Thus, φ may be interpreted as
the probability that a randomly chosen grid location is contaminated.
Consequently, we may estimate φ with ε.  This viewpoint allows us to use
an estimate based on historical information.  The substitution yields

      E(Rf) = ε[βRc1 + (1-β)Rr1] + (1-ε)[(1-α)Rc2 + αRr2]                   (2)
The  risk  variable  in  Equation  (2)  can  easily  be  replaced  by  the
appropriate  cost  variable,  with  the  caveat that the  costs  must be
converted to commensurable units.
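
To make the calculation concrete, the sketch below evaluates Equation (2) for one
hypothetical set of inputs; the risk values, α, ε, and β (one minus a power-table
value) shown here are illustrative assumptions only, not values from this analysis.

```python
# A minimal sketch of Equation (2); all numeric inputs are hypothetical.
def expected_post_closure_risk(epsilon, alpha, beta, Rc1, Rr1, Rc2, Rr2):
    """E(Rf) = eps[beta*Rc1 + (1-beta)*Rr1] + (1-eps)[(1-alpha)*Rc2 + alpha*Rr2]."""
    contaminated_branch = beta * Rc1 + (1.0 - beta) * Rr1
    clean_branch = (1.0 - alpha) * Rc2 + alpha * Rr2
    return epsilon * contaminated_branch + (1.0 - epsilon) * clean_branch

# Hypothetical values: 40% of the site contaminated, alpha = 0.05,
# power = 0.80 (beta = 0.20), risks in arbitrary commensurable units.
print(expected_post_closure_risk(epsilon=0.4, alpha=0.05, beta=0.20,
                                 Rc1=0.70, Rr1=0.05, Rc2=0.10, Rr2=0.05))
```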

Equation  (2)  incorporates all  of  the critical  Data Quality Objectives
information  that  must  be established  before  a  sampling  plan  can be
specified.   The prior  information about  the site  and the remediation
method performance is included  in  the expected outcomes of the  decision
tree and in the specification of the  true  condition.  The hypothesis test
is implicitly  required  in the determination  of  the type  I  and II  error
rates.

DETERMINING A SAMPLE SIZE

Given a single equation such  as  Equation (2),  optimization procedures can
be   directly   applied  to   generate   the  sample   plan  design,   thus
simultaneously optimizing not only the number of samples but also the type
I and II error rates.  Not only can the optimum sampling plan for a given
waste site condition be determined, but the critical (i.e., worst-case)
waste site conditions that a sampling plan should be designed to
detect can also be determined.  The critical condition would be that waste site
condition that results in the highest expected risk for a given sampling
plan.

Equation (2) can be used to plot the expected post-closure risk (Rf)
against δ.(a)  Figure 2 is an example set of such curves of Rf plotted
against the baseline risk.  To generate such curves,  the analyst must make
several decisions:

1. What statistical hypothesis test will be used?
2. What type I error rate (α) will be used?
3. What remediation option will be considered?

The  first two decisions  are  necessary to establish  the  power  of  the
hypothesis test for a specified number of samples.  The  third decision is
necessary to establish the performance level(s) of the remediation process.
The decisions made for this analysis are listed below:

•  Use the Wilcoxon Rank Sum Test (WRS; EPA 1994), applied as sketched in
   the example following this list.

•  Use a significance level (α) of 5%.

•  Use a value for ε of 1.0, consistent with the usage of the WRS test
   to detect a uniform contamination in the waste site.

•  Remediate by removal of soil and replacing with clean backfill,
   i.e., background material (δ0 = 0).
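
As a rough illustration only (not the EPA 1994 procedure or its tables), the
following sketch applies the rank sum decision rule named in the first bullet to
background and waste site data, using the Mann-Whitney form of the test available
in SciPy; the concentration values are hypothetical.

```python
# Hypothetical background (n) and waste site (m) concentration data.
from scipy.stats import mannwhitneyu

background = [1.2, 0.8, 1.5, 1.1, 0.9, 1.3, 1.0, 1.4]
waste_site = [1.6, 2.1, 1.9, 1.2, 2.4, 1.8, 2.0, 1.5]

# H0: reference-based cleanup standard achieved (site ~ background);
# Ha: site concentrations are shifted above background.
stat, p_value = mannwhitneyu(waste_site, background, alternative="greater")
remediate = p_value < 0.05   # significance level alpha = 5%, as chosen above
print(f"p = {p_value:.3f}, remediate = {remediate}")
```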

Several characteristics of the  curves  in  Figure  2  should be noted.   For
the baseline risk equal to very large  values,  the  power of the WRS test
approaches 1.0.  Thus, the remediation will almost certainly be performed,
achieving a  post-closure  risk  equal to the  background risk.   For very
small values of the baseline risk, near background, the power of the WRS
test is also small,  thus  failing  to  trigger  a remediation.   Because the
baseline risk is  small to begin with, this is acceptable.  At intermediate
levels of baseline risk, there  is a maximum in the expected post-closure
risk.  Where this maximum occurs is a function of the shape of the power
curve.   Different sample  sizes  have  been  used to obtain different power
curves and,  hence, different curves of Rf.

The peak of each curve represents the critical waste site condition
leading to the maximum value of Rf.  All other possible waste site
conditions will lead to lower values of Rf.  Selecting the number of
samples that yields a maximum less than the stakeholder-determined target
risk will assure the stakeholders that, regardless of the initial waste
site conditions, the expected result is less than the target risk.  Should
the actual waste site conditions differ from the critical values, then the
expectation is that the final post-closure risk, Rf, will be lower, perhaps
significantly lower, than the target risk.  This establishes the basis for
selecting the optimum sampling plan for the critical waste site
conditions.
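
A hedged sketch of this selection rule is given below; the candidate sample sizes,
power values, and the mapping from δ to baseline risk are hypothetical stand-ins
for the EPA (1994) power tables and a site-specific risk model, not values used in
this paper.

```python
# Choose the smallest waste-site sample size whose worst-case expected
# post-closure risk (the peak of its curve) stays below the target risk.
TARGET_RISK = 0.15
ALPHA, EPSILON = 0.05, 1.0
RR1 = RR2 = 0.05          # hypothetical post-remediation (background) risk
RC2 = 0.10                # hypothetical risk of a truly clean site
DELTAS = [0.5, 1.0, 1.5, 2.0]

# Hypothetical power table: POWER[(m, delta)] = 1 - beta for the decision rule.
POWER = {(20, 0.5): 0.35, (20, 1.0): 0.60, (20, 1.5): 0.80, (20, 2.0): 0.92,
         (40, 0.5): 0.55, (40, 1.0): 0.82, (40, 1.5): 0.94, (40, 2.0): 0.99}

def expected_risk(beta, rc1):
    """Equation (2) with epsilon, alpha, and the clean-site risks fixed above."""
    return (EPSILON * (beta * rc1 + (1 - beta) * RR1)
            + (1 - EPSILON) * ((1 - ALPHA) * RC2 + ALPHA * RR2))

def worst_case_risk(m):
    """Maximum expected post-closure risk over the candidate deltas."""
    return max(expected_risk(1 - POWER[(m, d)], 0.1 + 0.3 * d) for d in DELTAS)

candidates = [m for m in (20, 40) if worst_case_risk(m) < TARGET_RISK]
print(min(candidates) if candidates else "no candidate meets the target")
```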

Figure 3 shows the process repeated for different ε.  An interesting
result of these curves is that for the WRS test, the reduction in power
for decreasing ε is slower than the reduction in risk due to the reduced
exposure area of the waste site.  This is evidenced by the fact that the
peak value for ε = 1 is the largest and the peaks decline as ε declines.
It is not until ε reaches very small values and the contamination is very
large with respect to the background conditions that Rf begins to increase
and approach the ε = 1 maximum.  This indicates a condition in which a
change should be considered in the hypothesis test used.(b)

DISCUSSION

The  examples above were  based upon the  Wilcoxon Rank Sum test.   Any
defined decision rule can be used, provided that the power of the decision
rule  is known  or can be estimated.   The analysis for the critical values
ε, δ and the optimum values m, n requires that

•  the  decision rules be defined

•  their  power be  determined

•  the remediation method performance be defined

•  the  acceptable  maximum  expected post-remediation  outcome  (risk,
   dose,  or  cost)  be established  by the stakeholders.

The  design of  the  sample plan can be performed generically and applied to
many different waste sites.    Site-specific  changes  may  occur  if   a
predetermined  decision  rule  is  sensitive  to  area  (i.e. ,  hot  spot
detection),  a  decision rule is changed (e.g.,  moving  from a statistically
based  decision rule to  a  subjectively  based  decision  rule),   or the
acceptable maximum expected post-closure  risk is changed (e.g., a change
in risk scenarios  from a  change in proposed land use) .

Decision rules based upon professional judgment are also possible.  In
this instance, α and β become subjective probabilities.  A discussion of
the determination of subjective probabilities is beyond the scope of this
paper, but is compatible with the above development.  The "Sample Plan Xi"
branch may be replicated, and the probabilities α and β replaced with the
subjective estimates α′ and β′ for the error rates.

The  analysis presented does not guarantee that every waste site will be
cleaned up to less  than the target risk for which this analysis  is used to
establish sampling plans.   It  does guarantee that,  on average, all of the
waste  sites  will  be  remediated  to  acceptable  conditions.    If  the
stakeholders  desire  to  guarantee  that  every  waste  site  meets  the
acceptable  risk  criteria,   then  each  site  must  be  remediated,  thus
eliminating  the  decision errors.   However,  the  cost of  achieving this
level of certainty may be unacceptable to the stakeholders.

A spreadsheet can easily perform the analysis for a few likely
combinations of β, δ, m, and n.  The sample plan designer can then pick
out the combination that results in the expected post-closure risk, Rf, being
equal to or less than the stakeholder target risk.  However, for
performing  a  comprehensive  optimization  of the  sampling  plan,  the
development  of software  to specifically  perform  the calculations  and
display the curves should be pursued.

SUMMARY

A simple process has been defined for developing soil sampling plan
parameters based upon the power of the decision rule being used and the
stakeholder-defined acceptable post-closure risk levels.  This process
may be performed generically and applied to many waste sites
simultaneously, thus reducing the amount of effort involved in sample plan
generation.

ENDNOTES

(a) δ may also be converted to other scales and units of measurement such
as baseline  (initial) risk or Pr as described in  EPA (1994).

(b) The Quantile test recommended in EPA (1994) was designed specifically
to detect this condition.  Overlaying the curves of the Quantile test and
the WRS test will allow  the analyst to establish a balance between the two
tests based upon  comparable performance in achieving Rf.

ACKNOWLEDGMENTS

The work on which this paper  is based was supported by the U.S. Department
of Energy,  EM-263, under  Technical Task Plan RL323101.   We  thank our
colleagues Richard Gilbert, Nancy  Hassig, Robert O'Brien, and Rick Bates
for their insightful comments.  We also thank Andrea Currie for editorial
review.
                                           31

-------
                            Figure 1.
              SIMPLIFIED DECISION TREE FOR
          SITE INVESTIGATION AND REMEDIATION

[Tree diagram: each decision "Use Sample Plan Xi" is followed by a TRUE
CONDITION node (DIRTY, P(δ0 < δ) = φ; CLEAN, P(δ < δ0) = 1 - φ) and then a
HYPOTHESIS TEST node (H0: δ ≤ δ0 vs. Ha: δ0 < δ) leading to the possible
OUTCOMES.]
-------
                                    Figure 2.
        FINAL WASTE SITE CONDITION VS INITIAL WASTE SITE CONDITION
                 60 BACKGROUND, 50 WASTE SITE SAMPLES
                       WILCOXON RANK SUM TEST
                          EPSILON VARIABLE

[Curves of expected post-closure risk (y-axis values roughly 0.5 to 0.75)
versus the initial probability P(background datum < contaminated portion of
waste site datum) = Pr (x-axis, 0.50 to 1.00), with the fraction of waste
site contaminated (epsilon) varied among curves.]
                                             33

-------
                            Figure 3.
              POST REMEDIATION RISK vs BASELINE RISK
                   ASSUMED REMEDIATION METHOD:
           REMOVE AND BACKFILL WITH BACKGROUND MATERIAL

[Curves of post-remediation risk versus baseline risk; the background risk
level and the uniform contamination case are marked. Axis values not
reproduced.]
                                   34

-------
REFERENCES

Bernardo, J.M., and A.F.M. Smith.  1994.   Bayesian Theory.  John Wiley &
Sons, New York.

U.S. Environmental Protection Agency (EPA).  1994.  Methods for Evaluating
the Attainment of Cleanup Standards, Volume 3:  Reference-Based Standards
for Soils and Solid Media.  EPA 230-R-94-004, Office of Policy, Planning,
and Evaluation, Washington, D.C.
                                         35

-------
                 Cost-Effective Statistical Sampling:
            Compositing, Double Sampling, and Ranked Sets

                             Robert  O'Brien
                     Environmental   Statistics  Group
                      Pacific Northwest  Laboratory
                              Richland, WA
Several cost-effective methods of statistical sampling will be presented.
These methods, compositing, double sampling, and ranked set sampling, allow
for more effective site-specific coverage patterns for detecting
contamination and at the same time reduce sampling costs.  The cost
savings are achieved by reducing the number of necessary laboratory
analyses, which are a major cost in environmental data collection for site
investigations, rather than by reducing the number of site samples taken.
Each of these statistical methods is appropriate for site decision
making under varying assumptions.  A discussion of each method will be
given along with an example data set.  Ways of combining these methods to
achieve greater cost savings will also be discussed.
                                      36

-------
   THE DEVELOPMENT OF AN INNOVATIVE PROGRAM TO MONITOR THE
EFFECTIVENESS AND  PERFORMANCE OF REMEDIATION TECHNOLOGY AT A
                        SUPERFUND SITE
Richard Rediske, Ph.D., Research Associate, Grand Valley
State University, Water Resources Institute, Allendale,
Michigan 49401,  L. Pugh, P.E.,  R. Buhl,  R.  Wilburn,  S.
Borgesson,  Earth Tech, 5555 Glenwood Hills Parkway S.E.,
P.O. Box 874, Grand Rapids, Michigan 49588,  D. Rogers, CPC
International, P.O. Box 8000, Englewood Cliffs, New Jersey
07632, and  D. Peden, CHMM.,  Cordova Chemical Company of
Michigan, 500 Agard Road, North Muskegon, Michigan 49445.
ABSTRACT

An innovative monitoring program was developed to assess the
effectiveness and performance of remediation systems at a
former organic  chemical manufacturing facility, known as the
Ott/Story/Cordova Superfund Site near Muskegon, Michigan.
The groundwater contains an estimated 80 mg/l of ammonia-
nitrogen and 1500 mg/l of COD.  Thirty percent of the COD is
composed of a mixture of 50 Appendix IX compounds that
include aromatic and halogenated organics.  The remaining
COD consists of a complex mixture of known organic compounds
and unidentified chemical process intermediates and
degradation products.  Many of the unidentified chemicals
were phenolic and aromatic nitrogen based compounds related
to historical pesticide production. A two-stage
PACT* (Powdered Activated Carbon Treatment) system was
evaluated at bench and pilot scale levels.

The monitoring  program developed for the remediation system
evaluation had  to address Appendix IX constituents in
addition to the large group of unidentified organic
compounds.  This was accomplished by the following steps:
conventional treatment performance parameter analysis, GC/MS
analysis by 8240 and 8270 expanded to include spectral
libraries of influent and effluent compounds and mass
spectral interpretation, and biological whole effluent
toxicity testing.

Conventional parameter analysis and GC/MS methods were used
to monitor the  operation of the treatment systems. Influent
and effluent mass spectral libraries were developed for each
chromatographic peak detected above threshold.  Forward
searches using these libraries were then conducted to
determine whether unidentified influent organics were
effectively removed by the system.  Mass spectral
interpretation of the unidentified effluent organics was
performed to provide structural information.  In order to
provide an overall indication of treatment performance,
acute and chronic whole effluent toxicity testing was
conducted.

Based on the results of the monitoring program for the bench
scale system, the two-stage PACT technology was found to
effectively  remove organics and ammonia.  A 9.5
liter/minute pilot  PACT* system was then constructed on
site and operated for five months.   The only Appendix IX
compound found in the pilot system effluent was 1,2-
dichloroethane.   Comparisons of the mass spectra for
influent and effluent samples showed that only three
unidentified compounds passed through the system; these were
low molecular weight degradation products with little
environmental significance.  The effluent was also found not
to be acutely toxic to aquatic organisms in the whole
effluent tests.   The data from the monitoring program was
used to demonstrate that the remediation system effectively
removed the unidentified compounds and produced an effluent
that would not impact the environment.    As a result of the
on-site pilot study, a 4,600 cubic meter per day, two-stage
PACT* system is being implemented at the site.  Operation of
the two-stage PACT* system represents a potential cost
saving of  $20,000,000 over the project life as compared to
the several technologies originally recommended.
INTRODUCTION

The Ott/Story/Cordova Superfund Site,  located in Muskegon,
Michigan, has been extensively studied and evaluated for
remediation for almost 20 years.   The  Ott/Story Chemical
Company produced a variety of pesticides and specialty
organic chemicals in a remote area from 1958-1974.
Production wastes were equalized and stored in unlined ponds
prior to discharge in a small stream.   The site is located
on sandy soils with a shallow aquifer  5-10 feet from the
surface.  A plume of contaminated groundwater extends 4,000
feet down gradient from the site and is intercepted by
Little Bear Creek.

The Ott/Story Chemical Company generated phosgene and methyl
isocyanate on site to produce a variety of carbamate and
urea based pesticides.  Azo coupling reactions were also
used in the synthesis of dyes such as chlorazol chloride.
In addition, a number of specialty chemicals based on
camphor and glycine were also manufactured.  PCBs and
chlorinated hydrocarbon pesticides were not detected in the
site soils or groundwater.  The major chemicals produced and
used at the facility are listed in Table 1.

With the exception of a few chlorinated and aromatic
solvents, polycyclic aromatics, and phenols, the groundwater
contained a limited number of HSL compounds.  Appendix IX
analysis plus TICs (Tentatively Identified Compounds),
however, identified over 50 compounds, including aromatic
amines, substituted phenols, and camphor related materials.
This analysis only accounted for 30% of the chemical oxygen
demand.  A listing of groundwater characteristics is given
in Table 2.

The design considerations for remediation at this site are
itemized in Table 3.  The complex chemical composition of
the groundwater in addition to the environmental health
concerns related to phosgene, methyl isocyanate, and
pesticide production resulted, however, in the design of a
very elaborate and costly remediation system.  The EPA
mandated system contained a series of biological and
physical/chemical processes including:

     air stripping                 activated sludge
     clarification                 lime softening
     ammonia stripping             aerobic digestion
     sludge thickening             recarbonation
     sand filtration               carbon adsorption
     thermal oxidation

A detailed analysis of the chemicals and their environmental
fate however supported the use of enhanced biological
treatment. An evaluation of remediation alternatives found
the PACT*(Powdered Activated Carbon Treatment)  System to be
the most effective process due to the combination of activated
carbon with aggressive biological treatment.  This
alternative was not initially acceptable to the EPA due to
concerns related to the unidentified chemicals and the
perceived "fragility" of biological systems when treating
concentrated organic influents.  Bench scale testing of a 2
stage PACT* System found the technology effective in
removing influent organics.  Based on these results, a 9.5
liter/minute pilot system was constructed on site to
evaluate the technology.  A key component to this evaluation
                                   39

-------
                          Table 1.
             Chemicals Produced and Used at the
                   Ott/Story Chemical Co.
Chemicals Produced

     Methyl isocyanate
     Propyl isocyanate
     Ethyl isocyanate
     Butyl isocyanate
     Chlorophenyl isocyanate
     Pentachloronitrobenzene
     Dimethyl carbamoyl chloride
     Camphor sulfonic acid
     Glycerol chlorohydrins
     Ethyl centralite
     Chlorophenyl-n-methyl carbamate
     Tetramethyl urea
     Amylphenyl-n-methyl carbamate
     Phosgene
     Phenyl Glycine
     Ethyl chloroformate
     Isopropylphenyl methylcarbamate
     Tolyl methylcarbamate
     Butyl phenyl methylcarbamate
     Chlorazol chloride
     Diuron
     Monuron

Major Chemicals Used

     1,2 Dichloroethane
     Amyl Phenol
     Aromatic Naphtha
     Ammonia
     Substituted Anilines
     Substituted phenols
     Camphor
     Nitric Acid
     Glycine
                                  40

-------
                          Table 2.
                 Ott/Story Chemical Company
                 Groundwater Characteristics
   1500 mg/l chemical oxygen demand
   81 mg/l ammonia nitrogen
   87 mg/l organic nitrogen
   50 EPA Appendix IX Compounds including halogenated and
   aromatic solvents, chlorinated and alkyl phenols,
   polycyclic aromatic hydrocarbons, phthalate esters, nitro
   aromatics (500 mg/l)
   100 other organic compounds including aromatic amines,
   substituted ureas, ethoxy compounds, aldehydes, camphor
   derivatives, and alcohols (300 mg/l)
   200 unidentified organic compounds (200 mg/l)
   800 mg/l of organic compounds that do not chromatograph
                          Table 3.
          Remediation System Design Considerations
•  Effectively remove a variety of polar and non-polar
   organic chemicals
•  Address concerns related to unidentified organic
   compounds
•  Produce an effluent acceptable for  discharge  to  surface
   water
•  Remove ammonia
•  Cost  effective
•  Easy  to operate
                                  41

-------
was the design and implementation of a monitoring program
that would address unidentified organic compounds and
document a stable treatment process that produced an
effluent acceptable for discharge to the receiving stream.
This paper discusses the remediation system monitoring
program and presents the results.
THE REMEDIATION SYSTEM MONITORING PROGRAM

The monitoring program developed for the PACT* pilot system
had to address conventional wastewater parameters, Appendix
IX constituents, TICs, unidentified organics, and whole
effluent toxicity.  The monitoring program is summarized in
Table 4.  While traditional monitoring programs can be
readily designed around a parameter list, the large number
of TICs and unidentified organics in the site groundwater
presented a problem that required resolution.  Gas
chromatography/mass spectrometry (GC/MS) methods had to be
modified in an innovative manner to include this group of
chemicals.  A project specific target list was first
developed that included Appendix IX volatiles and
semivolatiles in addition to significant site compounds such
as camphor, N,N-dimethyl aniline, N-ethyl aniline,
tetramethylurea, and 1,1-dichloro-2,2-diethoxyethane.  Mass
spectral libraries were then constructed for all
chromatographic peaks in both the influent and effluent
samples for each sampling event.  The influent library was
used to search the effluent samples to monitor the removal
of TICs and unidentified compounds by the PACT* System.  As
a further check of removal, the effluent library was used to
search the influent samples to document the absence of
compound overlap in the chromatogram.  Finally, mass
spectral interpretation was used to characterize the
unidentified compounds in the effluent.  Even though the
exact identity of these compounds could not be determined,
the mass spectra and retention times clearly showed the
chemicals to be of low molecular weight. In addition, it was
evident that the effluent compounds did not contain halogens
or aromatic rings.  This was a significant determination
because most of the environmentally hazardous chemicals
contain halogens and/or aromatic ring structures.
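
As an illustration only (the project used commercial GC/MS data-system library
searches, not the code below), the following sketch shows the general idea of a
forward search: each effluent peak's spectrum is scored against a library built
from influent peaks, and matches above a threshold indicate influent compounds
passing through the system.  The spectra, compound name, and threshold are
hypothetical.

```python
# Illustrative forward library search using a simple cosine match factor;
# spectra are hypothetical {m/z: intensity} dictionaries.
import math

def cosine_similarity(spec_a, spec_b):
    """Match factor between two mass spectra (0 = no match, 1 = identical)."""
    mzs = set(spec_a) | set(spec_b)
    dot = sum(spec_a.get(mz, 0.0) * spec_b.get(mz, 0.0) for mz in mzs)
    norm_a = math.sqrt(sum(v * v for v in spec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in spec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def forward_search(influent_library, effluent_spectrum, threshold=0.8):
    """Return influent library entries that match an effluent peak's spectrum."""
    return [name for name, spec in influent_library.items()
            if cosine_similarity(spec, effluent_spectrum) >= threshold]

# Hypothetical example: does an effluent peak correspond to an influent compound?
influent_library = {"camphor-like peak A": {95: 100.0, 81: 60.0, 152: 20.0}}
print(forward_search(influent_library, {95: 90.0, 81: 55.0, 152: 25.0}))
```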

A diagram of the two-stage PACT* system is presented in
Figure 1.
                                   42

-------
                          Table 4.
            Remediation System Monitoring  Program


• Conventional  Parameter Analysis Oxygen Demand
                                   Nitrogen Series
                                   Solids
                                   Sulfate
                                   Total Phosphorus
                                   Alkalinity

• Organic Analysis  by GC/MS        Volatile Organics
                                   Semivolatile Organics
                                   Spectral Libraries of
                                     Each Peak
                                   Library Searches
                                   Mass Spectral
                                     Interpretation

• Toxicity Testing                 Acute and Chronic
                                   Fish and Invertebrates
                                  43

-------
                            Figure 1.
        EARTH TECH designed PACT process treatment schematic
  [Process flow diagram; discharge to Muskegon River (850 GPM).]

-------
Samples were collected from the influent, stage-one
effluent, and stage-two effluent.  A five month monitoring
program was initiated.  Program components are given in
Table 5.  Conventional parameters were analyzed during the
first month for start-up purposes.  Volatile and
Semivolatile organics were added during months 2 and 3 to
document steady state.  All parameters were analyzed during
months 4 and 5.  As a further verification of performance, 3
sets of samples were sent to an EPA contract laboratory.
All analyses were performed according to EPA-approved methods
(EPA, 1992).
MONITORING PROGRAM RESULTS

The results of the monitoring program are presented in
Figures 2, 3, and 4.  The only Appendix IX compound found in
the effluent was 1,2-dichloroethane.  This compound was
detected at concentrations consistently below the proposed
discharge limit of  mg/l.  The system was also found to
effectively remove BOD, ammonia,  and total phosphate.   Only
three unidentified compounds remained in the effluent.  Two
of these compounds were low molecular weight degradation
products which had spectra similar to alcohols and esters.
The remaining compound had structural similarities to
camphor and was probably an oxygenated metabolite.  There
was no evidence of halogenated or aromatic compounds in the
semivolatile analysis.  The effluent was also found not to
be toxic to fish or invertebrates in the whole effluent
toxicity tests.
SUMMARY

Based on the results of the remediation system monitoring
program, the EPA accepted the two-stage PACT* system as the
appropriate remedy for the site.  A 4,600 cubic meter per
day system is currently under construction.   Operation of
the two-stage PACT* system represents a potential cost
saving of $20,000,000 over the project life  as compared to
the original remediation alternative.  The innovative use of
GC/MS in the monitoring program was a key factor in
                                    45

-------
                                Table 5.
             Ott/Story Remediation System Monitoring Program

                                                   Frequency
Parameter                         Month 1   Month 2   Month 3   Month 4   Month 5
BOD                                5/wk      5/wk      5/wk      5/wk      5/wk
COD                                5/wk      5/wk      5/wk      5/wk      5/wk
Nitrate                            5/wk      5/wk      5/wk      5/wk      5/wk
Nitrite                            5/wk      5/wk      5/wk      5/wk      5/wk
Ammonia                            5/wk      5/wk      5/wk      5/wk      5/wk
Total Organic Nitrogen             5/wk      5/wk      5/wk      5/wk      5/wk
Total Phosphate                    5/wk      5/wk      5/wk      5/wk      5/wk
Sulfate                            5/wk      5/wk      5/wk      5/wk      5/wk
Alkalinity                         5/wk      5/wk      5/wk      5/wk      5/wk
TSS                                5/wk      5/wk      5/wk      5/wk      5/wk
Volatile Organics                   *        1/wk      1/wk      5/wk      5/wk
Semivolatile Organics               *        1/wk      1/wk      5/wk      5/wk
TICs and Unidentified Organics      *         *         *        5/wk      5/wk
Whole Effluent Toxicity             *         *         *        1/mo      1/mo

*  sample analysis not performed
                                            46

-------
                               Figure 2.
                          Ott/Story/Cordova
                  Remediation System Performance

[Bar charts of influent, stage 1, and stage 2 effluent concentrations for
three of the monitored parameters; axis values not reproduced.]
                                         47

-------
                               Figure 3.
                          Ott/Story/Cordova
                  Remediation System Performance

[Bar charts of influent, stage 1, and stage 2 effluent concentrations for
additional monitored parameters; axis values not reproduced.]
                                         48

-------
                               Figure 4.
                          Ott/Story/Cordova
                  Remediation System Performance

[Bar charts of influent, stage 1, and stage 2 effluent concentrations for
the remaining monitored parameters; axis values not reproduced.]
                                      49

-------
obtaining EPA approval for a more cost-effective remediation
technology.
REFERENCES

EPA. 1992. Test Methods for Evaluating Solid Waste,
Physical/Chemical Methods, SW-846, 3rd Edition.
                                   50

-------
 COMPARISON OF ALTERNATIVES FOR SAMPLING AND STORAGE OF VOCS
                           IN SOIL
David Turriff, Director, Chris Reitmeyer, Lloyd Jacobs, and
Nils Melberg, En Chem,  Inc., 1795 Industrial Dr., Green Bay,
WI  54302
ABSTRACT

     The  search  for  an effective alternative to SW 846
Method  5030  for  preparing Volatile Organic Compounds  (VOCs)
in soil must overcome the limitations that were inherent in
that method, i.e., the method must show minimal
volatilization and/or biodegradation losses.  Other issues
such as method sensitivity and waste handling also become
important depending  upon the particular regulations for
which testing is required.  This study was undertaken to
evaluate several alternatives which are currently being
promoted by various state/federal regulations: 1) Brass
Tube; 2) Dynatech Soil Vial; 3) Methanol Preservation;
4) EnCore Sampler.  The Dynatech soil vial is a 40 ml glass vial
with two  teflon-sealed caps and a glass frit on the bottom.
The soil  is  sampled  directly into the vial and the vial is
analyzed  without subsampling. This is the basis of EPA SW
846 Method 5035  for  VOCs.  The EnCore sampler is a stainless
steel volumetric sampling device which has a sealed sample
chamber that can store the sample immediately after
sampling.
     The  results of  this study indicate that only methanol
is completely effective at preventing both volatilization
and biodegradation.  The brass tube showed significant
losses of benzene and other target compounds as early as 12
hours after  sample preparation.  The Dynatech soil vial and
the EnCore sampler did not show significant volatilization
losses over  the  14 day test.  However,  if a
microbiologically active soil was spiked with VOCs,  then
losses could occur within 1-2 days of sample preparation.
The same  soil, after sterilization,  regained its ability to
retain VOCs.  This provides strong evidence that the
Dynatech and EnCore systems are not prone to volatilization
loss but  may not be  suitable for samples which have the
potential  for biodegradation unless the method is modified.
                                    51

-------
     Results will be presented using a sampling and sample
storage scheme which tests the efficacy of adding a
preservative, such as 80% ethylene glycol in water, sodium
bisulfate or sodium azide, to the soil.  The method
recommended here may meet all of the criteria necessary to
provide a unified, effective soil VOC method.
INTRODUCTION
      Samples taken for soil VOCs under EPA protocol are
packed into glass jars in such a manner as to minimize
headspace.  The jars are routinely shipped offsite and held
for up to 14 days before the laboratory prepares the sample
for analysis.  The preparation involves a subsampling of the
soil in order to collection a 5 gm sample into a purge and
trap tube.  The tube,  when attached to the purge and trap
instrument, is no longer subject to any further exposure to
the environment.  This method of storage has been shown by a
number of investigators to be deficient to the point where
the length of storage time after collection can be the major
variable in the analytical results (See the EPA Symposium
reference, 1993).

     This study was undertaken to compare alternatives to
the currently accepted method.  Very few studies have been
published which compare alternative sampling and storage
methods for soil VOCs.  This is partly due to the fact that
it is very difficult to sample soils with a high sampling
precision so that different methods can be statistically
compared.  A soil mixing device based upon the mixing system
developed by Paul King (1993)  was used for this study to
compare four primary alternatives for soil sampling and
storage.  The purpose of this study was to determine whether
recommendations could be made about which method or method
combination might be used to provide precise and accurate
results.
EXPERIMENTAL

     Clayey sand was taken from the field and mixed with sand as
necessary to create a finely mixed soil.   The soil was mixed
in a 35 gallon steel drum into which were welded a series of
mixing blades which facilitated the mixing.   The chamber was
turned on its side and rotated at 5-7 rpm by means of a
motorized belt assembly.   The drum was kept inside an
insulated box which was cooled to 40-45 degrees Centigrade
by means of a refrigerated circulating pump.
     The soil was precooled and water was added to achieve a
10% final moisture content.  The soil was spiked with a
synthetic gasoline standard which contains ten major
components of gasoline and, in several instances, also with
a spiking mixture of 1,2-dichloroethane, trichloroethylene
and tetrachloroethylene.  The starting concentration was
adjusted so that, after 16-20 hours of mixing, the final
concentrations were sufficient to fall in the
middle part of the calibration curve.  All samples were
analyzed using a Hewlett-Packard 5890 GC with either a
PID/FID tandem for BETX and Gasoline analysis or a PID/ELCD
detector for BETX and chlorinated compound analysis.
     All experiments were run over multiple time points and
each time point was done with five replicates.  In some
instances, not every sample was useable due to
instrumentation or quality control problems.  Once the
experimental setup was designed, a computer program was used
to set a randomized sampling order  to control for sample
order bias.
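
The randomization program itself is not described in this paper; the following
is a hypothetical sketch of how such a randomized sampling order could be
generated, with illustrative method names, time points, and seed.

```python
# Not the authors' program: a minimal way to randomize sampling order so that
# method and time-point assignments are not confounded with sampling order.
import random

methods = ["Methanol", "Brass Tube", "Dynatech", "EnCore"]
time_points = ["0 h", "12 h", "48 h", "4 d", "14 d"]
replicates = range(1, 6)

order = [(m, t, r) for m in methods for t in time_points for r in replicates]
random.seed(1995)        # fixed seed so the order can be reproduced
random.shuffle(order)
for i, sample in enumerate(order[:3], start=1):
    print(i, sample)
```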
     On the day of sampling, a team of samplers was
arranged so that the sampling could be completed in less
than 10 minutes.  It was determined that 100-120
samples could be collected within this time frame without
creating a significant bias due to time delays.

     For methanol preservation, twenty-five grams of soil were
preserved immediately upon sampling and analyzed at the
indicated storage times.  For brass tubes, twenty-five grams
of soil were subsampled into methanol and the samples were
analyzed within one week of preservation.  A twenty-five
gram version of the EnCore sampler was used as the third
comparison method.  For these three methods, the soil to
methanol ratio was 1:1.  One hundred microliters of methanol
was analyzed in a 5 ml purge volume.
     For the Dynatech soil vials, a 5 g plug of soil was
sampled into each vial using the EnCore volumetric sampler.
The vials were capped and stored at 4 degrees C until the
specified storage time and then analyzed immediately on a
Dynatrap autosampler.  For the EnCore sampler, the samples
were taken and stored at 4 degrees C until the specified
storage time, then sampled into the Dynatech vial and
analyzed immediately.  After the initial experiment, the
soil was spiked with manure and the experiment repeated.  A
sample of the soil was taken for analysis of petroleum-
degrading bacteria.  After this experiment,  the same soil
was sterilized, re-spiked and the study repeated a third
time.   Again, a sample of the soil was analyzed for
petroleum-degrading bacteria.
     Results were analyzed using the SPSS for Windows
statistical package.
RESULTS AND DISCUSSION

     Table 1 shows the stability of methanol-preserved
parameters over a 28 day period against the brass tube over
a two day period and the EnCore sampler over a five day
period for benzene, which was the compound most susceptible
to losses.  The brass tube was ineffective after 12 hours,
and the EnCore sampler remained stable through 48 hours.  Benzene
in methanol-preserved soils was stable over the 28 days.
     Table 2 shows the results of a comparison study
between the Dynatech soil vial and the EnCore sampler over
14 days for benzene.  The upper set of data is on the
original soil. The middle set of data is for the bacteria
enriched soil.  The bottom set of data is for the sterilized
soil.  In this last case, only data for day six was
generated.
     Table 3 shows the same data pattern for 1,2-
dichloroethane.  As can be seen, benzene in both the
Dynatech and EnCore vials is stable for somewhere between two
and four days, then begins to show a decline in
concentration.  After spiking with manure, the benzene was
essentially gone after two days.  After sterilizing the
soil, the benzene levels were close to zero-time
concentrations.  Bacterial counts from the soil in the
second experiment were 5 x 10^8 and declined to non-
detectable counts after the sterilization procedure.  This
is strong evidence that biodegradation rather than
volatilization is occurring in the Dynatech vial and EnCore
sampler.  Table 3 shows that the 1,2-dichloroethane was
stable within 30% over all three experiments.  This was
added as a control since it is not very susceptible to
aerobic biodegradation.  This also supports the contention
that these methods do not lose significant concentrations
due to volatilization.  A final set of experiments will be
reported where soils sampled into the Dynatech vial are
preserved immediately and after two days with solutions of
ethylene glycol,  sodium azide or sodium bisulfate.  Results
of these experiments will be the basis of recommendations
for a sampling protocol for soil VOCs.
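
The statistical analysis was performed in SPSS for Windows; purely as an
illustration of how a "percent of zero time" entry in Tables 1-3 is formed from
replicate results, a minimal sketch with hypothetical concentrations is given
below.

```python
# Illustrative only: mean recovery at a holding time as a percent of the
# mean at time zero; concentrations below are hypothetical, in ug/kg.
from statistics import mean

def percent_of_zero_time(zero_time_reps, holding_time_reps):
    """Mean recovery at a holding time as a percent of the mean at time zero."""
    return 100.0 * mean(holding_time_reps) / mean(zero_time_reps)

zero_time_benzene = [102.0, 98.0, 95.0, 105.0, 100.0]   # five replicates at t = 0
day_4_benzene = [51.0, 47.0, 44.0, 49.0, 46.0]          # five replicates at 4 days
print(round(percent_of_zero_time(zero_time_benzene, day_4_benzene)))  # about 47
```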
                                  54

-------
CONCLUSIONS

     Methanol preservation of soils prevents both
volatilization and biodegradation.  The brass tube method is
not stable and should not be used for more than a few hours
storage.  The Dynatech and EnCore methods are effective for
longer term storage if the soil does not contain petroleum-
degrading microbes.  However, a water-based preservative may
overcome the limitations of the Dynatech method and allow
real hold times approaching 14 days.
     A sampling and storing scheme will be discussed which
takes advantage of the benefits from the different methods
studied here.
REFERENCES

King, P. Evaluation of sample holding times and preservation
methods for gasoline in fine-grained sand.  In: National
Symposium on Measuring and Interpreting VOCs in Soils: State
of the Art and Research Needs, January 12-14,  Las Vegas, NV.
1993

EPA Symposium:  National Symposium on Measuring and
Interpreting VOCs in Soils: State of the Art and Research
Needs, January 12-14, Las Vegas, NV.  1993.

Urban, M.J., J.S. Smith, E.K. Schultz, and R.K. Dickinson.
Volatile organic analysis for a soil, sediment, or waste
sample. In: 5th Annual Waste Testing and Quality Assurance
Symposium, Washington, D.C.: U.S. Environmental Protection
Agency, p. II-87 to II-101.

Hewitt, Alan.  Concentration Stability of Four Organic
Compounds in Soil Subsamples.  US Army Corp of Engineers
Special Report 94-6, 1994.
                                   55

-------
                                 TABLE 1
                    Comparison of Hold Time by Method
                     Percent of Zero Time for Benzene

Method         Zero Time   12 Hours   48 Hours   14 Days   21 Days   28 Days
Methanol          100         --         --         94        89        98
Brass Tube        100         47         43         --        --        --
EnCore            100        100        100         --        --        --

-------
                                 TABLE 2
                    Comparison of Hold Time by Method
                     Percent of Zero Time for Benzene

                     Zero   24 Hours   48 Hours   4 Days   6 Days   10 Days   14 Days
Original soil
  Dynatech            100       97         97        47       23        48         7
  EnCore              100       87         81        52       15         7        15
Manure-spiked soil
  Dynatech            100       27          8         0        0         0         0
  EnCore              100       16          5         0        0         0         0
Sterilized soil
  Dynatech            100
  EnCore              100                                      67

-------
                                    Table 3
                        Comparison of Hold Time by Method
                    Percent of Zero Time for 1,2-Dichloroethane

Method                     Zero Time  24 Hours  48 Hours  4 Days  6 Days  10 Days  14 Days
Dynatech                      100        107       102       93      71       91       93
EnCore                        100        104        97       93      95       85       79
Dynatech (manure-spiked)      100         90        90       78      78       78       75
EnCore (manure-spiked)        100         82        82       74      76       81       11
Dynatech (sterile soil)       100                                    76
EnCore (sterile soil)         100                                    76

-------
                                                                                        8
A Comparison of Response Factors For Weathered Petroleum
Standards
Christopher S. Cox, Joseph L. Moodier, &
Melissa A. Schonhardt
Restek Corporation
110 Benner Circle
Bellefonte, PA 16823
Phone: (814) 353-1300; FAX: (814) 353-1309
Leaking underground storage tanks represent a growing
environmental concern.  Identification and quantitation of
petroleum products in the environment can be troublesome for
environmental laboratories because the composition of these products
changes in the environment due to weathering.  This weathering
may be caused by evaporative loss, migration through natural
matrices, or biodegradation.

Petroleum products weathered by evaporative loss, both under
controlled laboratory conditions and under real-world conditions,
were analyzed to determine their composition.  These standards
are compared to show how weathering affects the identification
and quantitation of petroleum products.
                                             59

-------
       A Simple, Accurate Field Test for Crude Oil Contamination in Soil
Kevin R. Carter, Ph.D, Vice President, Technical Services, EnSys Environmental
Products, Inc., P. O. Box 14063, Research Triangle Park, North Carolina 27709
Abstract

Crude oil has been pumped out of the earth in the United States for over 100 years.  As a
result of the commercial exploration of petroleum reserves, the soil in the immediate areas
of production, storage, and transportation facilities is contaminated with high levels of
crude oil.  In areas that have been used for these purposes for decades, the concentration
of crude oil in the soil frequently exceeds 10%.  Many of the oil-producing states and oil
production companies are working to reduce the level of crude oil contamination
surrounding these facilities and return the soil to crude oil levels of less than 1%.

Existing methods for the determination of crude oil concentration in soil are usually done
in a laboratory. Those tests that are adaptable to field use are not necessarily easy to use
and suffer from the same interference problems  experienced by the method in the
laboratory.

This paper will introduce a field analytical product called the Crude Check™ Soil Test
which can be used simply and accurately in the  field by personnel otherwise unfamiliar
with chemical analysis.
Introduction

A test has been developed to accurately determine the concentration of crude oil in soil at
contaminated areas of production, storage, and transportation facilities.  The test was
designed to meet the crude oil testing requirement imposed by the Texas Railroad
Commission (Statewide Rule 91) and to be used in the field to expedite delineation and
remediation of crude oil contaminated soil.
Current Analytical Methods

With thousands of crude oil sites to evaluate, clean-up, and monitor, the task of measuring
the extent of the problem is a serious, costly one.  Existing methods for the determination
                                              60

-------
of crude oil concentrations in soil are usually performed in a laboratory.  They are based
either on the direct gravimetric determination of crude oil extracted from soil by a solvent
mixture (Method 9071) or by the measurement of the hydrocarbon content of a Freon
extract of soil using IR spectrometry (Method 418.1).  While it has been feasible to adapt
Method 418.1 to field use with a portable instrument, the field protocol is not especially
easy to use and suffers from some of the same interference problems experienced by the
method as practiced in the laboratory.  In addition, the use of Freon will not be permitted
past the end of 1995, requiring the use of an alternative solvent.
Crude Check™ Soil Test System

The need for a simple, accurate test that can be used in the field by personnel otherwise
unfamiliar with chemical analysis methods has resulted in the development of a field
analytical product called the Crude Check™ Soil Test System.  The test allows the user
to test a small sample of soil for crude oil in less than 5 minutes.  The test results in a
quantitative indication of crude oil concentration over the range of 0.5% to 6%.  The
analysis of soil samples for crude oil can be performed over a wide range of ambient
temperature (40°F - 110°F) and humidity conditions (5% - 95% RH), and the test
materials have a storage shelf life of 1 year.  To provide accurate quantitation, the test
requires the use of a simple piece of field equipment, while complicated solvent
extractions, and the large volume of waste solvent they generate, are avoided.

The method is based on the principle that crude oil will form a stable emulsion in water
solution under certain conditions. A simple procedure is employed to place any crude oil
that may be present in the soil into conditions where emulsion formation will occur.  A
sample of the soil (5g) is first extracted with a small volume of a proprietary solvent and
the extract is subsequently mixed with a water solution that causes the emulsion to form.
The turbidity of the final solution is directly proportional to the crude oil concentration.
A portable, battery-powered turbidimeter is used to measure the turbidity of the solution
and a conversion table is provided in the test instructions to convert to percent oil
concentration by weight.
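
As a rough illustration of how a turbidity reading might be converted to a
percent-oil result, the sketch below interpolates a conversion table and applies
a product-specific correction factor.  The conversion-table values are
hypothetical placeholders (not the values from the Crude Check User's Guide);
the correction factors follow Table 2, and applying them by multiplication is
an assumed convention.

# Sketch: convert a turbidimeter reading (NTU) to percent crude oil by weight
# using a lookup/interpolation table, then apply a product correction factor.
# The conversion-table values below are hypothetical placeholders.

from bisect import bisect_left

# Hypothetical (turbidity NTU, % oil) calibration points, in increasing order.
CONVERSION_TABLE = [(0, 0.0), (50, 0.5), (150, 1.5), (300, 3.0), (500, 5.0), (600, 6.0)]

# Factors for other petroleum products, as listed in Table 2 of this paper.
CORRECTION_FACTORS = {"crude oil": 1.00, "diesel fuel": 0.94, "motor oil": 0.83}

def turbidity_to_percent_oil(ntu, product="crude oil"):
    """Linearly interpolate percent oil from turbidity, then apply a correction factor."""
    xs = [t for t, _ in CONVERSION_TABLE]
    ys = [p for _, p in CONVERSION_TABLE]
    if ntu >= xs[-1]:
        return float("nan")  # above ~6% the response is no longer proportional
    i = bisect_left(xs, ntu)
    if xs[i] == ntu:
        pct = ys[i]
    else:
        frac = (ntu - xs[i - 1]) / (xs[i] - xs[i - 1])
        pct = ys[i - 1] + frac * (ys[i] - ys[i - 1])
    return pct * CORRECTION_FACTORS[product]

print(turbidity_to_percent_oil(225))                  # crude oil estimate
print(turbidity_to_percent_oil(225, "diesel fuel"))   # corrected for diesel fuel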
Test System Performance

Sensitivity
Quantitative methods used for environmental purposes must have the minimum
sensitivity necessary to measure the analyte at concentrations that are lower than the
regulatory action levels.  The minimum sensitivity is usually expressed in two ways:  1)
                                               61

-------
method detection limit, which is a quantity of crude oil equivalent to three standard
deviation increments of turbidity above a mean negative sample result; 2) reliable
quantitation limit, which is the quantity of crude oil derived from four times the turbidity
measurement calculated for the method detection limit.  The method detection limit is usually
regarded as the lowest concentration that could be measured under ideal circumstances and
for the Test System is 0.11% crude oil.  The reliable quantitation limit reflects the
minimum sensitivity that can be reasonably obtained under most circumstances.  The
Test System has a reliable quantitation limit of 0.33% crude oil.
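
The two sensitivity figures described above follow directly from replicate
measurements of a clean (negative) soil.  A minimal sketch of that arithmetic
is shown below; the blank turbidity readings and the turbidity-to-percent-oil
slope are hypothetical, not values from this study.

# Sketch: method detection limit (MDL) and reliable quantitation limit (RQL)
# as described in the text: MDL ~ 3 standard deviations of turbidity above the
# mean negative (blank) result, RQL ~ 4x the MDL turbidity increment.
# Blank readings and the calibration slope below are hypothetical.

from statistics import stdev

blank_ntu = [4.1, 5.0, 3.8, 4.6, 4.4, 5.2, 4.0]   # replicate negative-soil readings
slope_pct_per_ntu = 0.07                          # hypothetical calibration slope

mdl_ntu = 3 * stdev(blank_ntu)                    # turbidity increment for the MDL
rql_ntu = 4 * mdl_ntu                             # turbidity increment for the RQL

print(f"MDL ~ {mdl_ntu * slope_pct_per_ntu:.2f}% crude oil")
print(f"RQL ~ {rql_ntu * slope_pct_per_ntu:.2f}% crude oil")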

The maximum concentration that can be reliably measured is 6% crude oil.  Above this
concentration, the turbidity response is no longer proportional to crude oil concentration.

Accuracy  and Precision
The Crude Check™ Soil Test System is designed to deliver accurate, precise quantitative
results over the range of regulatory interest.  The accuracy and precision of the test was
determined using a silty loam soil fortified with 15 different crude oils at two different
crude oil concentrations. Using the conversion table in the User's Guide to obtain
concentration results from turbidity data, the test characteristics in Table 1 were found.
These results indicate that the test is both accurate and precise.

Furthermore, the recovery of one crude oil (Prudhoe Bay) was evaluated following
fortification of 9 different soil types at a concentration of 1%. The mean recovery of
crude oil from these soils was 116 ± 14%, indicating excellent consistency of recovery,
with little  effect of different soil matrices.
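
Accuracy and precision figures of the kind reported in Table 1, and the mean
recovery quoted above, reduce to simple summary statistics over fortified-sample
results.  A short sketch with hypothetical fortified-sample data follows.

# Sketch: mean recovery (accuracy) and relative standard deviation (precision)
# for soils fortified at a known crude oil concentration. Data are hypothetical.

from statistics import mean, stdev

spike_level = 1.0                                   # % crude oil added
measured = [1.05, 1.21, 0.98, 1.10, 1.30, 1.02]     # % crude oil found (hypothetical)

recoveries = [100.0 * m / spike_level for m in measured]
mean_recovery = mean(recoveries)
rsd = 100.0 * stdev(recoveries) / mean_recovery

print(f"Mean recovery: {mean_recovery:.0f}%   RSD: {rsd:.0f}%")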

Selectivity
The Crude Check Test accurately determines the concentration of crude oil in soil.  In
addition, the test also measures the concentrations of diesel fuel, fuel oil #2, bunker C,
grease, and motor oil in  soil.  These petroleum products are detected with somewhat
more sensitivity than crude oils and, therefore, correction factors must be applied to the
results generated using the conversion table in the User's Guide.  These correction factors
are given in Table 2.  The Crude Check test does not give a useful response to either
gasoline or brake fluid.

Correlation with Standard Analytical Methods
The Crude Check Test has been evaluated with field samples and has been shown to give
results comparable to the laboratory methods commonly used to evaluate crude oil
contamination in soil. The results of a trial conducted with crude oil contaminated soil
samples provided by a large oil company are shown in Table 3.  Each sample was
analyzed by Method 9071, Method 418.1, and the Crude Check Test System.   The
                                                62

-------
correlation between the Crude Check results and the results from either laboratory
method is as good as the correlation between the two laboratory methods.  The
variability seen in any of these results is attributable partially to sample heterogeneity.

Robustness
The ability of technically unsophisticated individuals to run the test will be a key to
getting representative data in the field.  A set of samples from a variety of locations was
tested by two different operators using the Crude Check test.  These results are given in
Table 4.

Many methods perform well in the laboratory, but fail to perform to the levels expected
when taken out into the field and subjected to field conditions.  There are many possible
reasons for performance shortfalls in field trials of field analytical methods.  The
variability of field conditions may have an effect on the performance of a method. These
conditions include temperature, humidity, wind, sunlight, and precipitation.  It is
important to determine whether these environmental factors seriously affect the method in order to
understand its limitations.  Other circumstances such as sample heterogeneity, sample
matrix,  and user training can have an impact on test performance.

The Crude Check Test system is currently undergoing field trial evaluation.
Conclusions

An accurate, rapid, easy-to-use field test has been developed that quantifies crude oil
contamination in soil.  The results from this test correlate well with those obtained for
the same samples analyzed by the standard laboratory methods.  The test allows the user
to quickly assess the achievement of clean-up of crude oil contamination at wellheads and
storage and transportation facilities without sending samples to a laboratory and incurring
the delays inherent in the laboratory analytical process.  This facilitates the clean-up
with a minimum number of trips back to the site.
                                               63

-------
Table 1
                        Bias and Recovery for Crude Check

Characteristic              1% Spike     5% Spike
Recovery (accuracy)           109%          97%
Precision (RSD)                12%           7%
                                     64

-------
Table 2
                    Correction Factors for Other Petroleum Products

Petroleum Product        Correction Factor
diesel fuel                    0.94
fuel oil #2                    0.94
bunker C                       0.83
grease                         0.83
motor oil                      0.83
                                 65.

-------
Table 3
           Comparison of Results from Crude Check with Laboratory Methods
                               Field Sample Results
                                     % OIL

Sample ID      Crude Check       Method 418.1     Method 9071
L                  1.7                ND               1.2
C                  1.1                ND               2.9
K                  4.6                ND               3.9
D                  4.9                ND               5.2
F                > 6.0                ND               9.9
1                < 0.5                0.2              0.9
2                > 6.0            66.1 / 73.5         44.2
3                > 6.0                9.9             14.2
4                  5.8                6.7              7.5
5                > 6.0                6.4              8.5
6                < 0.5              < 0.1              0.4
7                  1.6                1.2              2.3
8                > 6.0                9.4             19.6
9                > 6.0               19.6             31.6
10               < 0.5                0.1              0.6
11               < 0.5                0.1              0.6
12                 2.4                2.9              2.6
13            4.6 / > 6.0             7.9              7.9
14             5.5 / 3.6              7.6             13.1*
15           < 0.5 / < 0.5            0.2              1.0
16             3.7 / 3.0              5.5             12.2*
17                 1.6                0.3              1.5
18                 1.4                4.8              1.2
19               > 6.0           12.0 / 11.1          12
20                 0.5                2.6              1.5
21                 4.4                7.2              3.9
22               > 6.0               22.2             17
23               > 6.0               14.3             18.7
24               > 6.0             8.7 / 8.6           8.3
25               > 6.0               11               13.2
26                 2.6                3.6              2.5
27                 1.9                2.9              1.5
D                  3.5              < 0.1             20.3*
I                  2.6              < 0.1              9.3*
J                  1.0              < 0.1              0.6
M                  0.9                0.2              1
P                < 0.5              < 0.1              0.3
S                < 0.5              < 0.1              0.4

          * Contained material in extract which did not appear to be crude
                                    66

-------
Table 4
                              Operator-Induced Variability

Sample ID     Crude Check Results        Crude Check Results
              Operator 1 (% oil)         Operator 2 (% oil)
1                     >6                         >6
2                     2.7                        3.4
3                     3.7                        3.6
4                     2.7                        2.7
5                     4.3                        4.3
6                     >6                         >6
7                     1.1                        1.3
8                     1.0                        0.9
9                     0.6                        0.8
10                    0.9                        0.7
                                     67

-------
10


A Practical Field Application of Medium Level Soil Extraction/Headspace GC Screening for VOCs
in Soil Samples

Laurie H. Ekes, Seth Frisbie, PhD, and Craig MacPhee

ENSR Consulting and Engineering
35 Nagog Park
Acton, MA 01720
(508) 635-9500

A site for a confidential client consists of approximately nine (9) acres in the Upper Peninsula of
Michigan.  Since the 1930s, the site was used for the disposal of wood tar wastes from the
production of charcoal briquettes and chemicals derived from wood.  Following evaluation of
several remedial alternatives, excavation, removal, shipment by rail, and landfilling of the wood
tar materials was selected as the most viable solution.  The recommended approach was
reviewed by the Michigan Department of Natural Resources (MDNR), and ENSR was the designated
engineer and construction manager for this project.

The major task of the interim response required removal of all visibly contaminated material (over
70,000 tons) while minimizing potential community impacts.  Toxicity Characteristic Leaching
Procedure (TCLP) regulatory levels for volatile organic compounds were used as guidelines for
loading the tar waste.

In order to assure that material being loaded into railroad cars met the waste characteristic
requirements of the landfill to which it was being sent, and to assure that it was properly
manifested as non-hazardous, ENSR successfully employed a Photovac Model 10S Plus field GC
with a photoionization detector (PID) to screen large quantities of soil samples.  The screening
method is a modification of the EPA medium-level VOC soil analysis and includes soil sample
preservation and extraction into methanol, with subsequent analysis of an aliquot of the extract
by headspace equilibrium over 30 mL of water in a 40 mL VOA vial.  Total analysis time per sample
was under 20  minutes.   Analytical  protocol will be  presented with associated quality
assurance/quality control data for samples from this site. Ten percent of the samples were  sent
for laboratory confirmation by EPA SW-846 Method 8240.  This comparison data will also be
presented.

Use of this field screening during one of the most severe winters in the Upper Peninsula saved
the client over $ 200,000 in laboratory costs and enabled work to  progress in a timely fashion
with minimal impacts on the surrounding community.
                                             68

-------
                                                                   11
              USE OF A PORTABLE, FIBER-OPTIC CCD
           SPECTROPHOTOMETER TO MEASURE  FRIEDEL-CRAFTS
        PRODUCTS IN THE DETECTION OF CRUDE  OIL, FUEL, AND
                  SOLVENT CONTAMINATION  OF  SOIL


John  David Hanby.  President, Hanby Environmental Laboratory
Procedures, Inc., 501 Sandy Point Road,  Wimberley, Texas  78676


ABSTRACT

The utilization of a test kit employing  Friedel-Crafts alkylation
reactions to produce intensely colored  products of  aromatic
compounds  in  the analytes  (typically carbon  tetrachloride)  has
facilitated  removal  of  contaminated  soils  and  provided  an
extremely  accurate  and rapid analysis  of  remediation processes.
The extraction/colorimetric method has employed visual comparison
of results with photograph standards.  Testing of a new, portable
spectrophotometric  read-out  device has  been  completed  on a
selected  group  of  crude oils,  fuels, and  solvents.   This paper
describes results  of  the  use  of the device  in determining
concentration of a typical West Texas crude oil,  a gasoline, and
a diesel  fuel in soil.   The  extremely small size  (5" x 7" x 3")
of the device is made possible by the use of a recently-developed
single-fiber  optic/CCD  spectrometer  "bench".  The  instrument is
interfaced to a 486 SX  "notebook" PC.  An algorithm for software
development using  color  values  developed  by the  International
Commission  on  Illumination  (CIE)   was  incorporated to provide
quantitative analytical data.

INTRODUCTION

Since the discovery, in 1986, of  the technique of extracting soil
and water samples with  various  solvents  and  then causing the
extracts  to undergo  Friedel-Crafts  reactions by  the addition of
stoichiometrically  great  excess  (>100x) amounts  of appropriate
Lewis acid catalysts, the procedure has been utilized  as a field
method  to provide  extremely accurate   quantitative analyses  of
these substances on site in  thousands of cleanup and remediation
projects  around the world.   Optimization  of  this  procedure  in
order to maximize the visual detection  sensitivity has typically
involved  the  use  of various amounts of an  alkyl  halide,  carbon
tetrachloride,  which is,  although an  extremely good  Friedel-
Crafts reactant, high on the list of chemical "bêtes noires".  The
subsequent search  for solvents  which would serve to provide
sufficient reactivity and color via this method,  coincident with
a general  focus by  the EPA and other regulatory  agencies  on the
larger scale  environmental problems such as leaking underground
fuel tanks, crude  oil production,  storage  and processing areas,
pipelines, etc., led to the  realization that these substances of
concern  were  generally  composed  of  the requisite  chemical
                                   69

-------
species,  e.g. aromatics, alkenes,  ketones,  to allow  high  level
(ca.  100  PPM)  detection  without  the  use  of  an alkyl  halide
solvent.    This,  however,  further  spurred  efforts to develop  a
spectrophotometric read-out  device which  would alleviate  the
problems  of subjective interpretation of the  colors  obtained in
the precipitates  which are the  determining  parameters  in  the
method.  That  is,  in comparing the  colored products of the FC
reaction  caused by  the sample extract  in the  test tube to  the
photograph  standard,  visual  acuity,  lighting  and   other
uncontrollable factors  played  a part in determining the result.

Two relatively  important  technical  considerations had  stymied
successful  development   of   a   suitable,   field-portable,
spectrophotometric  instrument   for   this   method:   1)   an
appropriately sized and focused optical viewing device, and 2) an
appropriately sized and powered  detector.   The  first technical
problem centered on the fact that the powdered reflective surface is
composed of the excess catalyst and the F-C precipitates.
Numerous studies of these reflection/scattering phenomena have
pointed  to the considerable effects  of  parameters  such  as
particle  size, packing, interstitial fluid,  etc.   A  solution to
many  of  these problems seemed  to  be offered in  the technique
understood by printers and Post-Impressionist painters for many
years: optical integration of  a large area.   Just as  a too-close
inspection of  a  Seurat painting  or  a photograph  reveals  a
confusing jumble of dots, a  microscopic  look at  the color  in  the
Hanby method test tubes showed a wide variation in the color of
the catalyst/precipitate mixture.  The unreacted catalyst
particles (AlCl3) and hydrated catalyst (AlCl3·6H2O) were
typically white, while those particles of the F-C
reaction,  or  adsorbed  product  were  colored  to   an  extent
indicative of  the  relative amounts of the reactants (analytes).

The solution  to the  second  technical  problem,  i.e., size  and
power requirements, lay in the utilization  of  a charge-transfer
device of some  type.   In  the  fall of  1994  the  author  was
introduced to a  group,  Ocean  Optics, Inc.  who had  developed  a
technology in 1992 which seemed  ideally  suited to the needs of
the method.  In the development of optical technologies diversely
used  for pH and  spectrophotometric  applications,  this group  had
produced  an  extremely  small  optical  detector employing  charge
coupled devices and single fiber optic transmission.  Essentially
all that  was necessary in the development of  the  present  device
was the design  and manufacture of  the appropriate optical cell
(test tube) holder.  This was accomplished by the author  after
experiments with various materials  indicated that, probably  due
to  the  rather  inconsistent reflection  characteristics  of
relatively inexpensive,  commercially-available test tubes,  a non-
reflective material should be  utilized.  After  the completion of
the test tube/fiber optic probe module, a series of tests was run
to determine  the correct distance the probe should be  placed from
the sample tube.    This  would optimize  the view area  and  the
signal strength.    In  effect, probe distance  would primarily
determine  the  signal to noise  ratio  (SNR).
                                  70

-------
For  the  initial  trials  of  the  instrument  a  particularly
appropriate algorithm was available  for  the  conversion of input
to output, i.e.,  the tri-stimulus  values  established  by  the
International Commission on  Illumination  (CIE).  Essentially,  the
defined wavelength/color relationships of Red (700 nm), Green
(546.1 nm), and Blue (435.8 nm) are used in the computation of values
in the CIE-derived color space such as: L*, a*, b*, xtri, ytri,
and ztri.  Because the primary indicator of quantitative analytical
results with the method, as heretofore used as a visual method,
had been the lightness/darkness of the precipitated color, it was
assumed that the L* (color intensity from white to total saturation)
value would be the more indicative parameter; however, the data
proved otherwise.  In more spectrophotometrically familiar
(to  a chemist)  terms  this  was  of  benefit as  the  chromophoric
effect various functional groups  have is  a long-established body
of chemical  knowledge,  and using  other   parameters,  namely  the
tristimulus values: xtri, ytri,  and  ztri, a  simpler  translation
to wavelength/absorption numbers  would  be available.   Hence,  the
expansion of the method in terms of  a  qualitative  technique  for
identification of  substances would be enhanced.
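
As a rough sketch of the computation implied here, a measured reflectance
spectrum can be reduced to CIE tristimulus values by weighting it with the CIE
color-matching functions and the source spectrum.  The simplified example below
uses coarse, hypothetical weighting tables rather than the full CIE 1931 data
or the actual lamp spectrum.

# Sketch: reduce a reflectance spectrum to CIE tristimulus values (X, Y, Z)
# by summing reflectance x illuminant x color-matching function over wavelength.
# The coarse tables below are illustrative stand-ins for the real CIE 1931
# color-matching functions and the tungsten-halogen source spectrum.

wavelengths = [400, 500, 600, 700]          # nm (coarse grid for illustration)
xbar = [0.3, 0.05, 1.0, 0.01]               # hypothetical x-bar values
ybar = [0.02, 0.5, 0.6, 0.004]              # hypothetical y-bar values
zbar = [1.5, 0.3, 0.001, 0.0]               # hypothetical z-bar values
illuminant = [0.8, 1.0, 1.1, 0.9]           # hypothetical lamp output

def tristimulus(reflectance):
    """Return (X, Y, Z) for a reflectance spectrum sampled on `wavelengths`."""
    X = sum(r * s * x for r, s, x in zip(reflectance, illuminant, xbar))
    Y = sum(r * s * y for r, s, y in zip(reflectance, illuminant, ybar))
    Z = sum(r * s * z for r, s, z in zip(reflectance, illuminant, zbar))
    # Normalize so a perfect white reflector gives Y = 100.
    k = 100.0 / sum(s * y for s, y in zip(illuminant, ybar))
    return k * X, k * Y, k * Z

print(tristimulus([0.9, 0.7, 0.4, 0.3]))    # hypothetical precipitate spectrum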

EXPERIMENTAL

Selection of substances to be utilized in the initial  trials of
the  instrument  was  prioritized  roughly  be  production  and
environmental importance.   Thus,  crude  oil,   gasoline,  diesel
fuel, and  toluene were  chosen.   Of course,  an extremely wide
range of crude oils  exists,  and  the definition of a "standard"
gasoline or diesel fuel is not chemically available.   Hence  the
use  of  the various  terms  to  describe these substances,  e.g.,
"heavy,  medium,  light" or,  "high  or  low"  octane  or  cetane
numbers, etc.  This lack of  exact chemical definition, of course,
is understandable and perhaps  has given  rise  to  the  often
denigrated term "Total Petroleum Hydrocarbon".  Given the site-
specific,  and,  often,  substance-specific  uses  this  method  has
found  it  was  appropriate  to  prepare  exact  mass/solvent
concentration standards  of  various  "typical"  samples   of  crude
oils,  gasolines,  and diesel  fuels which were  ampoulized as
reference  materials  for the  procedure.   All  standards were
prepared using HPLC grade n-Heptane  or a 20% (v/v)  solution of
carbon tetrachloride in heptane  (Fisher Scientific).   Two ranges
were established according to the solvent selected: 1.
(CCl4/heptane, low range) 2, 10, 25, 50, 75, 100, 200, 500,
750, 1000 mg/Kg and 2. (heptane, high range) 500, 2500,
10,000, 25,000, and 50,000 mg/Kg.

INSTRUMENTAL

The  L-shaped  test tube/probe holder was fabricated from black
Delrin  to  configure  the fiber optic probe orthogonally  to  the
test tube at a distance of  7.0  mm.   This resulted in  a focused
viewing area of 3.4 mm in diameter.   The   reflectance  probe/fiber
optic used  was the  R-200-7-LR;  tungsten-halogen source,  LS-1;
spectrometer optical bench,  PS1000  (Ocean Optics,  Inc., Dunedin,
FL) .    An  aluminum  housing for  the complete  assembly  was
                                 71

-------
manufactured by Preferred  Stampings  of Texas, Inc., Round  Rock,
TX.  The spectrometer was  interfaced via a  ribbon  cable/A/D card
(DAQCard-700, National Instruments) to an AST 486  SX/25  notebook
PC.   Data  and  graphs were  printed on  a  portable  printer  (HP
DeskJet 310).   The portability  of the complete  system:   Field
test kit,  spectrophotometer,  computer and  printer  is  such that it
can easily be carried on-site and  operated, and complete reports
can be generated by the analyst.

EXPERIMENTAL PROTOCOL

Solutions were prepared in the ranges  listed  above corresponding
to the amount of analyte extracted from a 5.0 gram soil sample
using 10 mL (CCl4/heptane) or 20 mL (heptane).  One gram catalyst amounts
were added to 4.2 ml aliquots of these solutions in  the  standard
100 mm x 15  mm  test tubes  according to the Hanby  Field  Test Kit
protocol,  and the solutions were  hand shaken intermittently  for 4
minutes, allowed  to settle for  1  minute,  and then  read in  the
instrument.  Four readings (with ca.  30  degree  rotation of  the
tube  between each)  were taken  for each  concentration.     The
tristimulus value ztri was found  to correlate  extremely well with
concentration for each set  of  readings.   Virtually  all  readings
were  found  to  lie  within  five percent of  the mean  of  the  four
readings.  This verified the optimization of the
tube/probe module configuration and corresponded well with the
typical test kit weighing and extraction error of +/- 5%.
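
A simple way to turn the ztri readings from such standards into a working
calibration is an ordinary least-squares fit of ztri against concentration,
which can then be inverted for unknowns.  A sketch follows; the standard
concentrations are those listed above for the low range, while the ztri
readings are hypothetical.

# Sketch: fit ztri tristimulus readings vs. standard concentration and invert
# the fit to estimate an unknown. The ztri readings below are hypothetical.

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Low-range standard concentrations (mg/Kg) and hypothetical mean ztri readings.
conc = [2, 10, 25, 50, 75, 100, 200, 500, 750, 1000]
ztri = [10.2, 11.0, 12.5, 15.1, 17.4, 19.9, 29.8, 59.5, 84.0, 109.0]

a, b = linear_fit(conc, ztri)
unknown_ztri = 45.0
print(f"Estimated concentration: {(unknown_ztri - b) / a:.0f} mg/Kg")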

DISCUSSION

The primary  aim of  this  research was to  test the  application  of
the new instrument to this field test kit method of analysis.  As
illustrated by the data, the applicability is confirmed.  In the
course of the experimental work, another
desirable   feature  of  the  method   was   demonstrated.    The
utilization  of  this application of  Friedel-Craft  chemistry  now
extends to  the "other"  branch of  this  time-honored  discovery,
i.e., acylation.    That  is,  previous  employment   of  the method
primarily  exploited the  intense colors  produced  by strong
alkylation reactions promoted by  use of very high ratios  (>100:1)
of the  Lewis acid  catalyst  employed,  and  the  use  of the  very
reactive F-C solvent, carbon tetrachloride.  Acylation reactions,
as well as reactions with other alkylating agents such as
alkenes, are also available in substances such as crude oils and
fuels.  Again, the fundamental principle of the Hanby method,
i.e., the use of stoichiometrically very large proportions of the
aluminum chloride catalyst which  serve to dehydrate the extract
and enhance the Friedel-Crafts reaction,  is certainly  key to  the
successful use of this procedure for the high concentration  (ca.
50,000  PPM) ranges that  are being  allowed as  interim   soil
contamination levels  at  designated sites.    Implementation  of
these regulatory limits has been  carefully  considered by  a number
of oil-producing  states  and was  recently  effected by  the  Texas
Railroad Commission.  As stated  in a classic  text of  analytical
chemistry,   "In  the broadest  sense,  an instrument for  chemical
analysis  converts  an analytical signal  that  is  usually not
                                   72

-------
directly detectable  and understandable by a human  to  a  form  that
is.   Thus,   an  analytical  instrument can  be  viewed  as  a
communication device between  the system under  study and  the
scientist."    This  development  can be  regarded  as  a  practical
combination  of  the two  divisions  of analytical  chemistry -
classical or  "wet" analysis and instrumental analysis.
                                  73

-------
REFERENCES

1.   United  States Environmental Protection Agency,  Solid Waste
and Emergency Response  (Os-420),  Field Measurements: Dependable
Data When You Need It, EPA/530/UST-90-003, September  1990.

2.   Roberts,  R.M., Khalaf,  A.A.,  Friedel-Crafts Alkylation
Chemistry; A Century of  Discovery, M. Dekker, Inc., 1984.

3.  Shriner, R.L., Fuson,  R.C.,  Curtin, D.Y.,  Morrill,  T.C., The
Systematic  Identification  of  Organic  Compounds.  John  Wiley  &
Sons, New York,  1980.

4.  Sweedler, J.V.,  Ratzlaff, K.L., Denton, M.B., Charge  Transfer
Devices in Spectroscopy. VCH Publishers, Inc., 1994.

5.   Fox, M.A.,  Whitesell,  J.K.,  Organic  Chemistry,  Jones  and
Bartlett Publishers,  1994.

6.   Hunt,  R.W.G.,  Measuring  Colour.  Second  Edition,  Ellis
Horwood, 1991.
                                  74

-------
 [Reaction scheme: Friedel-Crafts alkylation reactions with AlCl3 and CCl4:
 formation of mono-, di-, and tri-arylalkylhalide structures; products are
 intensely colored and UV-unstable.]
                  75

-------
 [Figure: Test tube holder and fiber optic probe end window.  50 µm source
 fibers; reflectance fiber (to spectrophotometer); emission fibers from the
 tungsten-halogen source are hexagonally arranged around the central
 reflectance fiber leading to the optical bench CCDs.]
                                    76

-------
 [Figure: West Texas Crude Oil in Soil: instrument response vs. concentration (mg/Kg).]

-------
 [Figure: Diesel in Soil: instrument response vs. concentration (mg/Kg), 0-1000 mg/Kg.]

-------
 [Figure: Gasoline in Soil: instrument response vs. concentration (mg/Kg), 0-1000 mg/Kg.]

-------
 12

          IMPROVED METHOD FOR SOIL ANALYSIS SCREENING
       BY HEATED HEADSPACE/ION TRAP MASS SPECTROMETRY

T. Lloyd-Saylor, J. T. Dougherty, and B. P. Bacon, Eastman Chemical Company,
Tennessee Eastman Division, P. O. Box 511, Kingsport, TN  37662-5054.

Screening techniques generally offer a cost-effective alternative to conventional
total analysis by GC/MS when determining organic contamination in soil.  A semi-
quantitative heated headspace screening  method  using  deuterated  internal
standards  and GC/ion  trap  mass  spectrometry  has been developed and
successfully applied in a RCRA facility investigation (RFI). The method optimizes
purge conditions to maximize sensitivity and enable detection of components not
generally thought of as volatile materials.  Perhaps the most distinctive feature of this
method is that quantitation is performed by addition of deuterated analogs of the
analytes for most components.  This approach greatly enhances the overall
accuracy and precision of the method by virtually eliminating matrix effects that
could change the relative responses of the internal standards and analytes in
different samples.  Using isotopically labeled compounds as internal
standards minimizes differences in relative responses since, chemically
and physically, the internal standards and the analytes are almost identical, except
for minor isotope effects.
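
Because the method quantitates against deuterated analogs, the working
calculation is essentially an isotope dilution ratio.  The sketch below shows
that arithmetic under assumed peak areas, spike amounts, and response factors;
the analyte/internal-standard pair and all numbers are hypothetical.

# Sketch: isotope-dilution style quantitation against a deuterated internal
# standard. The analyte amount follows from the area ratio of analyte to its
# labeled analog, the known spike amount, and a relative response factor (RRF)
# measured from calibration standards. All values below are hypothetical.

def quantitate(area_analyte, area_labeled, spike_ug, rrf, sample_g):
    """Return analyte concentration in ug/g (ppm) of soil."""
    amount_ug = (area_analyte / area_labeled) * spike_ug / rrf
    return amount_ug / sample_g

# Hypothetical example: toluene vs. toluene-d8 spiked into a 5 g soil aliquot.
conc_ppm = quantitate(area_analyte=84000, area_labeled=120000,
                      spike_ug=10.0, rrf=0.95, sample_g=5.0)
print(f"Toluene: {conc_ppm:.2f} ppm")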

The method has been used in a RFI to determine if any releases to the soil have
occurred along several miles of process waste sewers. Twelve analytes that were
typically found in the sewer were selected as indicator compounds which could be
analyzed with the method quantitatively to determine if a release had occurred.
With this selected list of analytes, the detection sensitivity ranged from sub-ppm to
a few ppm depending on  the compound.

As a  part of the QA/QC protocol, about 10% of the screened samples were
analyzed for volatile and  semivolatile organics by conventional GC/MS analysis.
This poster presentation gives the  technical details of the developed  method
including the QA/QC protocols and the results of the application of this method to
the analysis of approximately 200 RFI soil samples along with the details of the
associated cost savings.
                                           80

-------
                                                                                      13
   FIELD SCREENING OF VOLATILE CHLORINATED HYDROCARBONS
                       BASED ON SONOCHEMISTRY
Grazyna E.  Orzechowska.  Research  Chemist, and Edward J. Poziomek, Research
Professor, Department of Chemistry  and Biochemistry, Old Dominion University,
Norfolk, Virginia  23529-4628; and  William H. Engelmann,  U.S. Environmental
Protection Agency, Environmental Monitoring Systems Laboratory, Las Vegas, Nevada
89193-3478.
ABSTRACT
A proof-of-principle was recently established for using ultrasound in combination with
relatively simple electrochemical devices for monitoring volatile chlorinated
hydrocarbons in water.  The idea is to use sonochemistry to decompose pollutants such
as trichloroethylene (TCE), chloroform (CHCl3), and carbon tetrachloride (CCl4) into
compounds or ions, such as Cl-, which can be more easily detected than the parent
compound.  For example, one minute of sonication of aqueous solutions containing ppm
concentrations of TCE gives sufficient Cl- to be measured using commercially
available Cl- ion selective electrodes.  An increase in Cl- as a result of sonication indicates
the presence of the chlorinated hydrocarbons.  This method is not meant to replace
laboratory methods; rather, it is meant to be used as a rapid field analytical method.
Excellent correlation coefficients were obtained for Cl- changes versus concentration of
TCE, CHCl3, and CCl4 in low ppm ranges.  Humic substances at concentrations up to
400 ppm did not adversely affect the Cl- sonochemistry yield.  Some lowering was noted
at 800 ppm.  It is concluded that none of the parameters investigated to date seriously
impacts plans to develop miniaturized ultrasound chemical monitoring cells and to
perform operational testing in the field.


INTRODUCTION
The U.S. Environmental Protection Agency (EPA) has been examining the potential of
combining ultrasound with other technologies for monitoring specific classes of organic
pollutants in water.  This  is a new concept for field screening applicable to hazardous
waste sites with particular emphasis on in situ groundwater monitoring. Ultrasound is
defined as any sound of a frequency beyond that to which the human ear can respond, i.e., above
16 kHz.  Excellent summaries of the fundamentals of ultrasound are available (1,2).
Ultrasound in the range of 20-100 kHz affects chemical reactivity.  Tiny bubbles are
formed in liquids through ultrasound processes.  The energy generated on collapse of
these bubbles is given as the underlying reason for chemical transformations and
enhancements (sonochemistry) (2).
                                             81

-------
The concept of using ultrasound in chemical analysis is illustrated in equation 1, using
CCl4 in water:

                        Ultrasound
    CCl4  +  H2O  ------------------>  Cl2  +  HCl  +  CO        (1)

Sonication of a solution containing the chlorinated target analytes yields
ions or other products that can be measured using, for example, ion selective electrodes
(ISEs).  Sonication of chlorinated hydrocarbons usually leads to the formation of Cl-.
Increases in Cl- are an indication of the presence of the analyte.  Initial experimental
results were very promising (3,4).  Chloride ion was detected in aqueous solutions
containing low ppm concentrations of CCl4, CHCl3, and TCE after one minute of
sonication.  Chloride ion increases were accompanied by increases in conductivity and
decreases of pH.  Aromatic and polyaromatic chloro compounds (chlorobenzene and
polychlorinated biphenyls) did not form chloride ion as readily as did CCl4, CHCl3, and TCE.
Changes in anion concentrations via sonication would be used in monitoring the target
pollutants.  The purpose of this paper is to present additional results on the use of
sonochemistry in monitoring CCl4, CHCl3, and TCE in water, with special emphasis on
the potential for quantifying results in the field and possible impacts due to the presence
of humic substances.


EXPERIMENTAL
Chemicals and Test Solutions:  The chlorinated hydrocarbons were obtained from
Aldrich Chemical Co., Inc. in high purity grade (99%). Stock solutions were prepared in
methanol and used for preparation of the test samples (1:100 dilution) with deionized
water or a humic substance solution.  Humic acids were obtained as follows: sodium
salt of humic acid, technical grade from Aldrich Chemical Co.; humic acid from Fluka;
and peat humic acid from International Humic Substances Society, Colorado School of
Mines. Stock solutions were prepared by first mixing weighed amounts of the humic
substance with 500 mL of deionized water in a 1  L volumetric flask.  The solution was
shaken for 2 minutes several times during the day and occasionally during the next two
days. The volume was then  adjusted to 1 L.  The concentrations of the humic acids in
the test solutions (weight/volume) were: sodium salt of humic acid (Aldrich)(HANa),
100 ppm, 200 ppm, 300 ppm, 400 ppm; humic acid (Fluka)(HA), 100 ppm, 200 ppm,
300 ppm, 400 ppm, 800 ppm; and peat humic acid (Peat), reference grade, 400 ppm.
The concentrations of CCl4, CHCl3, and TCE in the test samples were 40 ppm, 37 ppm,
and 37 ppm, respectively.


Equipment and Procedures.  A Branson Ultrasonic Corp. Sonifier Model 450 (20
kHz) was used for sonication of the sample solutions.  The unit was equipped with a
power supply, a soundproof box, a converter, and a 1/2"  horn probe.  There was also a
cup horn which was not used in the present sonication experiments;  however, because
of its design, it served both as a convenient  holder for reaction tubes and as a cooling
                                             82

-------
bath.  Sonication was performed in borosilicate vials. Coolant was passed through the
cup horn using a peristaltic pump in conjunction with a cooling bath.  The output
temperature of the cooling bath was set at -10°C.  The optimum sample volume for use
with the 1/2" horn probe was 15 mL. This allowed proper immersion of the probe. The
horn probe was operated at the maximum output control setting i.e.,  10, during the
experiments.  The average output  power in the  1/2" horn probe was 120 W.  A pulse
mode of 80% was used.  In the pulse mode, ultrasonic vibrations are transmitted to the
test solution at a rate of one pulse per second.  The pulse mode can be adjusted from 10
to 90%, enabling a  solution to be  processed at full ultrasonic intensity while limiting
temperature build-up.  The temperature of the samples after 1 minute sonication under
the conditions of the present experiments was 30ฐC. Readers are referred to reference 3
for additional details on experimental procedures.


RESULTS AND DISCUSSION
Changes in Cl- Concentration.  An increase in Cl- after sonication of an aqueous
solution suspected to contain chlorinated hydrocarbons is taken as a positive test.
Changes in Cl- concentration for aqueous solutions of TCE (37 ppm), CHCl3 (37 ppm),
CCl4 (40 ppm), and chlorobenzene (Ph-Cl)(94 ppm) were reported previously (3).  The
greatest increase was noted for CCl4; smaller changes were noted for CHCl3 and TCE.
The smallest changes were for Ph-Cl.  We have now found that changes in Cl- vs.
concentration of CCl4, CHCl3, and TCE in the range of 3-80 ppm are linear with
excellent correlation coefficients, i.e., 0.995, 0.987, and 0.957, respectively.  The same
order of reactivity is evident as found earlier (3).
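
The correlation coefficients quoted above come from straight-line fits of the
chloride increase against the spiked concentration.  A minimal sketch of that
regression is shown below; the delta-Cl- readings are hypothetical.

# Sketch: linear regression of the chloride increase after 1 minute of
# sonication against spiked TCE concentration, and the correlation coefficient.
# The delta-Cl- readings below are hypothetical.

from math import sqrt

conc_ppm = [3, 10, 20, 40, 60, 80]              # spiked TCE, ppm
delta_cl = [0.2, 0.8, 1.5, 3.2, 4.6, 6.3]       # Cl- increase, ppm (hypothetical)

n = len(conc_ppm)
mx, my = sum(conc_ppm) / n, sum(delta_cl) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(conc_ppm, delta_cl))
sxx = sum((x - mx) ** 2 for x in conc_ppm)
syy = sum((y - my) ** 2 for y in delta_cl)

slope = sxy / sxx
r = sxy / sqrt(sxx * syy)
print(f"slope = {slope:.3f} ppm Cl- per ppm TCE, r = {r:.3f}")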


Perhaps the most important chemical parameter which needs to be taken into account in
developing ultrasound monitoring methods is pH.  Cheung and coworkers (5) recorded
pH data in destroying organochlorine compounds in water as part of a remediation
feasibility demonstration.  The pH decreased rapidly in all cases.  Using  1 minute
sonication times and working with 15 mL samples of water containing various ppm
amounts of chlorinated hydrocarbons, we confirmed that pH decreases. The relationship
was found to be nonlinear.
Effect of Humic Acids. One of the important parameters that needs to be investigated
in the use of sonication for chemical  monitoring in ground water is the effect of
dissolved humic substances.  Soil organic matter is the source of humic materials which
are divided into fulvic acids (soluble in acids and bases), humic acids (soluble in bases
but not in  acids), and humins (insoluble in acids and bases). Humic acid (50 to 80% by
mass) and polysaccharides (10 to 30% by mass) may constitute up to 90% or more of the
total humus in soil (7).  Formation of complexes between pollutants and dissolved humic
substances may have a significant effect on the chemical reactivity and fate of the
contaminants in natural systems (see reference 8 and citations therein). It was of interest
to determine whether sonication of aqueous humic acid will lead to its decomposition. It
was also necessary to establish whether humic substances will inhibit or accelerate Cl-
                                              83

-------
formation in the sonolysis of chlorinated hydrocarbons. For example, the presence of
humic acid was found  to  increase the  reductive  dehalogenation of  chlorinated
hydrocarbons in aqueous solutions containing ferrous ion by factors up to 10 (reference
9).  Three different humic acid substances were examined. It is known that the structure
of dissolved humic substances is affected by pH, ionic strength, and electrolyte cation
valence (see  citations in reference 10). These factors were not investigated in our
sonochemistry studies.  Instead, "high" concentrations of the humic acids were  utilized
in an effort to discover major effects.


The Cl- concentrations, as determined by an ISE using the 400 ppm solutions of HANa,
HA, and Peat, were found to be 2.0, 13, and 1.0 ppm, respectively.  The pH values were
8.1, 6.1, and 6.0, respectively.  The conductivity values (µS/cm) were 135, 91, and 6,
respectively.  It was noted that the solutions were cloudy.  Filtering the humic acid
sodium salt and the Fluka humic acid solutions with either VWR qualitative filter paper
or Micropore 0.45 µm filter paper did not change the results.  It is clear that the peat
humic acid is much less ionic in water in comparison to the other two samples.
Sonication of the humic acid solutions for  one minute did not affect the values within
experimental limits, indicating stability to ultrasound.  If measurable decomposition
occurred, one would have at least expected changes in conductivity.
A number of experiments were performed to compare changes in Cl- concentration in
the sonolysis of TCE, CHCl3, and CCl4 in the presence of humic acids.  No significant
changes with humic acid sodium salt and Fluka humic acid (100 - 400 mg/L), or peat
humic acid (400 mg/L), were noted in comparison to deionized water alone.  No
significant effect was noted as a result of filtering the Fluka humic acid (400 mg/L).
However, it was noted that the presence of humic acid sodium salt at 800 mg/L did
reduce Cl- formation.
Conductivity increases and pH decreases were much smaller in the presence of humic
acids than in their absence  (peat humic acid  gave the greatest changes among the
substances examined).  Generally, it appears that the presence of humic substances such
as the ones examined, at least up to 400 mg/L, will not be a problem in Cl- monitoring
using ultrasound.
Another important question for real-world sonolysis experiments in monitoring
applications relates to the effect of suspended particles.  Kotronarou (11) studied the effect
of large sand particles (500 µm average) and fine particles (7 nm average) on the
sonication rate of sulfide oxidation.  Large particles might be expected to decrease the
rate because of sound attenuation.  The fine particles might enhance the rate by
providing additional nuclei for bubble formation.  The effects of sand particles at the
sizes and concentrations studied were insignificant.  This implies that no problems
should be encountered in chemical monitoring scenarios.  Also, as mentioned above,
filtering humic acid solutions containing very finely divided material made no
difference in the sonication yields of Cl-.
                                                84

-------
SUMMARY
The use of sonication in combination with real-time measurement of changes in Cl- is a
very simple approach to monitoring organochlorine compounds in water.  However,
there are many parameters that may affect the rate of Cl- production.  One may not
necessarily be able to provide controls in a field situation to optimize the course of
sonochemical reactions.   For field screening,  in situations in which the potential
contaminants are known and in which the water system characteristics are understood,
optimization may not be needed. Sonication experiments with water from a particular
location using potential pollutants of interest should allow an understanding of what to
expect in monitoring the water and what the data obtained from that source means. As
mentioned above, it appears that the presence of humic acids will not cause problems.


Design of the ultrasound system and equipment options are very important; they affect
sonochemistry performance. The ultimate goal for field measurements  is to design an
ultrasound  system which would allow  a probe  to be placed into 2" and 4" diameter
monitoring wells. Preliminary engineering designs were not considered under the scope
of this work, but the possibility of miniaturized ultrasound systems appears feasible. For
example, tapered microtip horns are commercially available with  diameters of 3.2 mm.
These can be used for volumes ranging from 1-2 mL.  The design of a cell system
compatible with both sonolysis and reaction product measurements presents technical
challenges, but these do not appear insurmountable.


The sonication approach  is applicable to organic compounds  which contain other
halides, phosphorus, nitrogen, and sulfur that, when released as anions, could be easily
quantified.   It is judged that ultrasound may be very useful as an in situ technique for
monitoring the effectiveness of remediation processes and for post-closure monitoring.
The potential of ultrasound systems for monitoring chemicals in  water is judged to be
high.  Predicted attributes include:

              Adaptability to miniaturization,
              User friendliness,
              No sampling requirements,
              No solvents,
              No wastes,
              Self-cleaning,
              In situ generation of reagents, and
              Adaptability to networking.


ACKNOWLEDGMENT


Parts  of this work were performed at the Harry Reid Center for Environmental Studies,
University of Nevada - Las Vegas.
                                              85

-------
NOTICE
The U.S. Environmental Protection Agency (EPA), through its Office of Research and
Development (ORD), partially funded and collaborated in the research described here.  It
has been subjected to the Agency's peer review and has been approved as a publication.
Mention of trade names or commercial products does not constitute endorsement or
recommendation for use.  The U.S. Government has a non-exclusive, royalty-free
license in and to any copyright covering this article.


LITERATURE CITED

(1) Suslick, K.S., ed.  Ultrasound: Its Chemical, Physical, and Biological Effects.  VCH
Publishers, Inc., New York, 1988, 336 pp.

(2) Mason, T.J.  Chemistry with Ultrasound.  Elsevier Applied Science, New York,
1990, 195 pp.

(3) Orzechowska, G.E.; Poziomek, E.J.  Potential Use of Ultrasound in Chemical
Monitoring, EPA/540/R-94/502; Environmental Monitoring Systems Laboratory - Las
Vegas, U.S. Environmental Protection Agency: Las Vegas, July 1994.

(4) Orzechowska, G.E.; Poziomek, E.J.  Method of Detecting Pollution in Water Using
Sonication, University of Nevada - Las Vegas, Patent Application No. 08/293283,
August 1994 (pending).

(5) Cheung, M.; Bhatnagar, A.; Jansen, G.  Environ. Sci. Technol. 1991, 25: 1510-1512.

(6) Bhatnagar, A.; Cheung, M.H.  Environ. Sci. Technol. 1994, 28(8): 1481-1486.

(7) Bohn, H.L.; McNeal, B.L.; O'Connor, G.A.  Soil Chemistry, 2nd ed.; John Wiley &
Sons: New York, 1985; p 143.

(8) Chen, S.; Inskeep, W.P.; Williams, S.A.; Callis, P.R.  Soil Sci. Soc. Am. J.
1992, 56: 67-73.

(9) Curtis, G.P.; Reinhard, M.  Environ. Sci. Technol. 1994, 28.

(10) Murphy, E.M.; Zachara, J.M.; Smith, S.C.; Phillips, J.L.; Wietsma, T.W.
Environ. Sci. Technol. 1994, 28: 1291-1299.

(11) Kotronarou, A.  Ultrasonic Irradiation of Chemical Compounds in Aqueous
Solutions.  Ph.D. Thesis, California Institute of Technology, Pasadena, California, 1992.
                                             86

-------
                                                                                 14
                 THE ENVIRONMENTAL
              RESPONSE TEAM'S (ERT's)
          ON-SITE (MOBILE) ANALYTICAL
      LABORATORY SUPPORT ACTIVITIES

Raieshmal Singhvi and Joseph P. Lafornara, U.S. Environmental Protection Agency, Office of Solid Waste
and Emergency Response, Office of Emergency and Remedial Response, Emergency Response Division,
Environmental Response Team, 2890 Woodbridge Avenue, Edison, New Jersey 08837
ABSTRACT

One of the critical factors for successfully conducting site evaluation/removal activities is
immediate and appropriate analytical laboratory response.  The United States
Environmental Protection Agency's Environmental Response Team (U.S. EPA/ERT) is at
the forefront of efforts to utilize on-site analytical laboratory support (mobile laboratories)
to provide rapid turnaround of analytical results, the flexibility to meet changing
requirements, and immediate interpretation of complex results during emergency response
and removal activities.

On-site analytical support has proven to be a viable, cost-effective approach in providing
quick turnaround of environmental sample analysis results for site
evaluation/characterization, especially during emergencies and removal actions.

INTRODUCTION
The EPA/ERT's mobile laboratory fleet
has grown from one unit to five units in
the last ten years.  Sample results that
previously took site managers days or
even weeks to receive from fixed
laboratories are now available in real
time or within a few hours.  Through the
Response Engineering and Analytical
Contract (REAC) and the Technical
Assistance Team (TAT), the U.S.
EPA/ERT has successfully implemented
and utilized mobile laboratory support
at over 200 sites, saving over two
million dollars in sample analysis costs
and countless field personnel hours.
The U.S. EPA/ERT mobile laboratory is fully equipped
with state-of-the-art instrumentation to provide analysis
support.
                                           87

-------
BACKGROUND

The U.S. EPA/ERT was established in October 1978 to provide technical assistance to
federal On-Scene Coordinators (OSCs), Regional Response Teams (RRTs), the National
Response Team (NRT), U.S. EPA Headquarters and regional offices, and other
federal/state government agencies.  The U.S. EPA/ERT also provides environmental
emergency assistance to foreign governments during such environmental emergencies as
chemical spills, chemical fires, and oil spills.
CAPABILITIES

The mobile laboratories combine state-
of-the-art instrumentation with U.S.
EPA/ERT approved methodologies and
rigorous Quality Assurance/Quality
Control (QA/QC) procedures to provide
immediate and accurate data analysis.

Some of the procedures include holding
time, frequency of blank and matrix
spikes required, and expected recovery
ranges for surrogates and matrix spikes
as specified in the U.S. EPA
methodologies. Instrumentation is also
required to meet all the criteria for
tuning, initial calibration, continuing
calibration, and check (or verification)
standards. Detection limits are
established before  the methodologies
are adapted and verified as needed.
Blind [performance evaluation (PE)]
samples  are occasionally included with
field samples collected. All of these
procedures are employed to ensure the
reliability of the analytical data.

Furthermore, on-site laboratory
operations conform to all relevant U.S.
EPA and Occupational Safety and
Health Administration (OSHA)
regulations to ensure the safety of
personnel operating the analytical
equipment.  The analytical laboratory
Mobile laboratories can be equipped with fume hoods, gas
chromatographs, gas chromatograph/mass spectrometers,
atomic absorption spectrometers, and glove boxes, depending on
the analyses required at each site.
                                                88

-------
capabilities can be used for on-site characterization of pollutant levels in soil, water, and
complex sample matrices, including:

    •   Atomic absorption (AA) spectroscopy for inorganic metal
        analyses of water, soil, and other media.
    •   Gas chromatography (GC) for analysis of pesticides/poly-
        chlorinated biphenyls (PCBs), pentachlorophenol (PCP), and
        creosote in environmental samples.
    •   Gas chromatography/mass spectrometry (GC/MS) for analysis of
        base neutral/acid extractables (BNAs), volatile organics (VOAs),
        PCP, and creosote in environmental samples.
    •   Gas chromatography/photoionization detection (GC/PID) of
        volatiles in water, soils, and soil gas in bags and acetate sleeves.
    •   X-ray fluorescence (XRF) for analysis of metal contaminants in
        soil and nonroutine elements in other media.
    •   Extraction and analysis of nonroutine pollutants (such as
        dicamba and benzonitrile), as necessary, using GC electron
        capture detection (ECD) and flame ionization detection (FID).


CASE STUDIES

Aladdin Plating

The Aladdin Plating site, located in Chinchilla, PA, was an abandoned "backyard"
chrome-plating operation located on top of a hill. Plating waste was dumped on the
ground and concentrated near the operation but also had spread downslope toward
nearby properties. Based on earlier characterization studies, remedial activities were
undertaken to clean up the site. The soil contaminant of concern was total
chromium (Cr); however, hexavalent chromium (Cr6+) was also suspected as a
groundwater contaminant. The action level set by the site manager was 50 parts per
million (ppm) total Cr.
                                              An on-site laboratory was set
                                              up in a trailer to provide
                                              Cr/Cr6+ analysis for the months
                                              of October and November 1990
                                              and during the spring of 1991.
                                              Analytical instrumentation
                                              included a portable AA unit for
                                              Cr analysis and a portable
                                              spectrophotometer for
                                               determination of Cr6+. Samples
                                              were prepared and analyzed
Atomic Absorption (AA) unit.
                                               89

-------
using standard U.S. EPA methodologies, including PE samples, to satisfy rigorous
QA/QC protocol. The majority of samples were analyzed for total Cr with less than
five percent for Cr6+. Typically, 10-15 samples/day were analyzed over a 3- to
5-month period, providing reliable same-day results to guide additional remedial
activities.

The availability of on-site analytical laboratory support facilitated efficient removal
actions by providing the RPM with cost-effective, same-day turnaround and no
compromise in data quality or reliability.
Shavers Farm

Shavers Farm, an abandoned farm site in Chickamauga, GA, was used as an industrial
waste landfill between 1973 and 1974 and was a landfill approved by the state of
Georgia. Many of the drums deposited in the landfill had corroded and leaked their
contents, contaminating the surrounding grounds. Soil and drum contaminants of
concern included dicamba and benzonitrile.

Computer systems are utilized to track data.
A mobile trailer laboratory was set up in May 1990 to support site
excavation/removal actions. Laboratory instrumentation included two dedicated GC
systems: one to analyze benzonitrile using an FID, and one to analyze dicamba
using an ECD. U.S. EPA-approved methods were modified for field analysis of soil
and drum samples. The modified methods provided quick extraction times and low
detection levels: 2 milligrams per kilogram (mg/kg) for dicamba and 5 mg/kg for
benzonitrile. These detection levels were well below the 25 mg/kg action level set
by the OSC. Typically, 15-20 samples/day were analyzed over a 5-month period,
providing fast results to guide next-day excavation/removal activities. Reliability
was ensured by analyzing PE samples in accordance with strict QA/QC criteria.

On-site analysis of dicamba and benzonitrile contaminant levels provided the OSC
with critical data for field decisions on appropriate removal actions. Fast (24-hour)
turnaround incorporating rigorous QA/QC protocol guaranteed reliability of
analytical results used in that decision process.
                                                90

-------
Petrochem

The Petrochem site, located in
Salt Lake City, UT, was utilized
(prior to 1987) as a hazardous
waste storage facility and a
hazardous waste
incineration/waste oil recycling
facility. Storage tanks and drums
were in poor condition and
numerous spills of oil, acid, and
caustic had been documented.
Soil and water pollutants of
concern included PCBs, BNAs,
VOAs, and polyaromatic
hydrocarbons (PAHs).
Mobile laboratory: Sample preparation.
A laboratory was set up at the Water Resource Center in Salt Lake City to support
site assessment/characterization activities during the months of May and June 1990.
Instrumentation included a GC with dual detectors (ECD and FID), a GC/MS, and
a separate portable GC with a PID. Analyses performed on samples included:
pesticide/PCBs by GC/ECD methods; BNAs, PAHs, and oil fingerprints by
GC/FID with GC/MS confirmation; and VOAs utilizing the portable GC/PID. U.S.
EPA methods were modified for field analysis while maintaining high data quality in
accordance with strict QA/QC protocol. Approximately  150 samples were analyzed
over a 1-month period, providing fast (24-hour) turnaround and high quality results
to the OSC.

Mobile analytical laboratory support provided fast turnaround on high-quality
analyses of several critical pollutants to assist the OSC in the assessment and
characterization of site contamination.

Escambia Woodtreating Sites

The Escambia Treating Company operated four woodtreating facilities located
in Pensacola, FL; Brookhaven, MS; Camilla, GA; and Brunswick, GA. Wooden
telephone poles and foundation pilings were manufactured and treated at these
facilities from the 1940s until they were closed between 1982 and 1991. Poor
handling practices in the treating facilities resulted in PCP and creosote
contamination of soil throughout each site.
                                               91

-------
A gas chromatograph/mass spectrometer (GC/MS).

It was necessary to analyze samples containing dioxin waste material which could not
be analyzed at the ERT/REAC Edison laboratories. Therefore, an on-site High Hazard
laboratory was established in May 1991 at the Brunswick, GA site to provide fast
turnaround on PCP and creosote analyses for dioxin-contaminated samples, using
modified U.S. EPA methods while maintaining high quality of analytical results. This laboratory
provided analytical capabilities for all Escambia locations. Instrumentation included
GC/FID systems in operation since the laboratory was mobilized in 1991. Over
1,000 samples were analyzed using GC/FID.  GC/FID was replaced by GC/MS in
1992 and a GC/MS method was established which provides 24-hour turnaround for
analyses of 15-20 samples per day. Large sample batches for PCP analysis have
also been processed and analyzed by GC/MS, resulting in 240 samples analyzed
within a 2- to 3-week period. Over 4,000 samples have been analyzed by GC/MS
between 1992 and 1995.

Ongoing operations at this High Hazard laboratory continue to provide
high-quality, fast, cost-effective analyses for site characterization, treatability
studies, and remediation/removal activities at several hazardous waste sites.
Capabilities are continually updated and improved as new analytical technology
becomes available.
OTHER SITES

In addition to the case studies discussed above, the U.S. EPA/ERT has utilized
on-site analytical laboratory support at over 200 sites in the United States (Figure
1). For example, PCBs were analyzed at Pagano Salvage in Los Lunas, NM, Beck
Street Salvage in Salt Lake City, UT, and the Raymark site in Stratford, CT; PCP
contamination was determined for Rocky Boy Post and Pole in Box Elder, MT and
at the Blackfeet Pencil Factory site in Browning, MT; and toxaphene levels were
determined by GC/ECD at the PCX site in Washington, NC.
                                                 92

-------
Figure 1. Environmental Response Team (ERT) Field Support (map of U.S. EPA/ERT
on-site analytical laboratory support locations in the United States).

-------
CONCLUSIONS

On-site mobile laboratory analytical support has proven to be a viable, effective
approach to meet pollutant analysis needs in many U.S. EPA/ERT hazardous waste
evaluation/removal programs. High-quality results are achieved with quick
turnaround using U.S. EPA-approved analysis methodologies incorporating
rigorous QA/QC procedures. The availability of highly reliable on-site laboratory
analyses provides site managers with the data needed to guide critical field decisions
concerning remediation/removal actions while at the same time realizing cost and
time savings compared to analysis associated with outside laboratories.  The scope
of the U.S. EPA/ERT mobile laboratory functions and capabilities spans the United
States and continues to broaden.
ACKNOWLEDGEMENTS

The authors wish to thank Vinod Kansal, John Syslo, Yi-Hua Lin, Joseph Soroka,
Jay Patel, and Susan Finelli of REAC for their technical assistance. This work was
performed under U.S. EPA/ERT technical direction by REAC and TAT contractor
personnel. Mention of trade names of commercial products does not constitute
endorsement or recommendation for their use.
REFERENCES

U.S. EPA. 1992. Quick Reference Fact Sheet.  "On-Site (Mobile) Analytical
Laboratory Support." EPA 540/F95/005.

U.S. EPA. 1992. Quick Reference Fact Sheet.  "Wood Treating Sites: Analysis of
PCP and Creosote Using On-Site Mobile High Hazard Laboratory." EPA
540/F94/057.
                                             94

-------
                                                                   15
A NEW SOIL SAMPLING AND SOIL STORAGE SYSTEM FOR VOLATILE ORGANIC
                        COMPOUND ANALYSIS

  David Turriff, Director, En Chem, Inc.,  1795 Industrial Dr.,
                      Green Bay, WI  54302
     ABSTRACT

     The design and performance of a new stainless  steel  coring
     device, the EnCore sampler, will be presented.  This device
     is made in two sizes,  a 25 gm version  for methanol
     preservation,  and a 5  gm version made  for EPA  SW846  method
     5035.   The sampler is  designed to both sample  and hold a
     plug of soil for an interval  of time so that the limitations
     of using other methods in the field are overcome.  The data
     show that the sampler can hold the target volatile organic
     compounds for a minimum of 48 hours.  This will allow the
     field  personnel to bring the  sample to the laboratory for
     either preservation in methanol or  for preparation into a
     soil vial.
     INTRODUCTION

     Wisconsin implemented methanol  preservation  for  soil  BETX
     and GRO and is in the process of implementing the method for
     VOCs.  A new sampler, called the EnCore sampler, was
     developed to overcome the need  for using methanol in  the
     field.   This stainless steel device  is designed  to  sample  a
     25 gm  soil core.   The sampler has a  cap containing  a  Viton
     O-ring and when the cap is attached, the chamber forms  an
     air-tight seal.  The back of the  chamber has a moveable
     plate  which is held in place by a nut.  The  moveable  plate
     is sealed to the back of the chamber with a  small Viton O-
     ring.   When the sampler is filled with soil  and  sealed, the
     sampler can be used as a sample container and can be  sent
     back to the laboratory on ice.  The  laboratory detaches the
     nut and extrudes the soil into  the methanol.  Recently, a  5
     gm version became available which performs exactly  the  same
     way but extrudes a soil plug into a  40 ml VOC vial.   A
     disposable sampler is also in development and its
     performance relative to the stainless steel  samplers  will  be
     discussed.
                                    95

-------
EXPERIMENTAL

A soil mixing system as described in another paper  in these
proceedings was used to generate samples for testing
different methods of sampling and various methods for
storing the samples for VOC analysis.   Common methods such
as using a spatula, brass tube, plastic syringe, plastic
baggy and the EnCore sampler were compared when sampled and
handled immediately versus holding on ice for up to 48
hours.
RESULTS AND DISCUSSION

The results indicate that the method of sampling is not as
critical as the method of storage for obtaining reliable VOC
results.  If sampled quickly, all methods tested provided
equivalent results.  If samples are held two hours, however,
only the brass tube and the EnCore sampler provided results
equivalent to results with no storage.  When the brass tube
     and the EnCore sampler were compared at 48 hours, only the
EnCore sampler showed high recovery.  When stored more than
48 hours, the EnCore sampler shows a steady decline in BETX
compounds, probably due to biodegradation.
Based upon these results it is recommended that 3 EnCore
samplers be taken per sample location.  A 48 hour time limit
is placed on samples in the EnCore device.  One of the
samples is used to screen for high/low level VOCs.  If the
sample is low level, then the other two soils are extruded
into 40 ml vials or into Dynatech soil vials for low level
analysis.  If the sample is high level, at least one sample
     is extruded into methanol.  In this way, limitations that
     exist with both the methanol and the soil vial methods can be
     overcome: methanol can be eliminated from the field, low
     level detection limits are possible, expensive and breakable
     glass vials are not needed in the field, and high level
     samples will be identified so they do not "overload" the
     low level system.
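
The decision workflow described above can also be sketched as a small routine. This is an illustrative outline only: the screening function, the high/low cutoff value, and all names are hypothetical, while the 48-hour holding limit and the three-samplers-per-location recommendation come from the text.

# Illustrative sketch of the three-sampler EnCore workflow described above.
# HIGH_LEVEL_CUTOFF and all names are hypothetical; the 48-hour holding
# limit and the high/low branching follow the recommendation in the text.
from datetime import datetime, timedelta

HOLDING_LIMIT = timedelta(hours=48)   # maximum time soil may be held in the EnCore device
HIGH_LEVEL_CUTOFF = 200.0             # hypothetical screening cutoff, ug/kg

def prepare_remaining_samplers(collected_at, screen_result_ug_kg, now=None):
    """Decide how the two remaining samplers from a location are prepared."""
    now = now or datetime.now()
    if now - collected_at > HOLDING_LIMIT:
        return "48-hour holding limit exceeded; resample the location"
    if screen_result_ug_kg >= HIGH_LEVEL_CUTOFF:
        return "high level: extrude at least one sampler into methanol"
    return "low level: extrude both samplers into 40 mL or Dynatech soil vials"

# Example: a sample collected 20 hours ago that screened low
print(prepare_remaining_samplers(datetime.now() - timedelta(hours=20), 35.0))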
CONCLUSION

The EnCore sampler may provide a simple, short-term method
for holding soil VOC samples until the proper preparation
method is determined.
                                   96

-------
                                                                                16
       PERFORMANCE EVALUATION OF A NEW LOW-COST FIELD TEST
          KIT FOR ANALYSIS OF HYDROCARBON-CONTAMINATED SOIL
                   AT A DIESEL FUEL RELEASE SITE
J. Scott Seyfried, RPSS, REA, Senior Scientist, Levine-Fricke, Engineers,
Hydrogeologists, & Applied Scientists, 3001 Douglas Boulevard, Suite 320,
Roseville, California 95661; Keith A. Wright, RPSS, REA, Consulting Soil
Scientist, 2094 Hideaway Ranch Road, Placerville, California 95667
ABSTRACT

Dexsil  Corporation's new low cost PetroFLAG™ field  test kit was used  in
conjunction with a mobile laboratory to field test soil contaminated by diesel fuel.
This innovative new technology uses no CFCs and is completely field portable.
Initially the PetroFLAG field test results were compared directly to sample splits
analyzed by an on-site  mobile laboratory using EPA  method 8015 for  diesel.
The field  generated PetroFLAG  results  proved to be  very  accurate  when
compared to the mobile laboratory results.  First-time PetroFLAG users
required only 5 minutes of training to become proficient enough at using the test
kit to achieve  this high degree of correlation.  Due to the excellent correlation
between PetroFLAG results and  the mobile laboratory results, the PetroFLAG
test kit was used exclusively in the field to find the zero line of contamination in
the soil. When the PetroFLAG test indicated that no hydrocarbons were present
in the  soil,  the sample was  given  to the mobile laboratory for  confirmation
analysis.   By using  the PetroFLAG test, site work including lateral and vertical
definition  of the  contaminated  area and  excavation  and  removal of the
contaminated soil could proceed without delays caused by lack of test data. The
mobile laboratory was spared the inconvenience of "hot" samples that  might
otherwise  overload  the  laboratory equipment  necessitating a time consuming
recalibration, thus saving time and expense.  Overall, the use of the PetroFLAG
test kit allowed more samples to be tested at a low cost,  freed up the  mobile
laboratory to perform confirmation analysis only,  provided an accurate method
for  locating the zero  line of contamination  so  that additional  volumes  of
uncontaminated soil were not  excavated, accelerated the project, and helped to
keep the project, equipment and manpower working without delays.
                                          97

-------
INTRODUCTION

Releases of petroleum hydrocarbons to surface and subsurface environments
are an unfortunate reality in today's world.  These releases can result in
significant degradation of the quality of our soil and water resources and may
result in substantial health risks to people, plants, and animals in the vicinity of
the release.   Environmental professionals  are  typically called  upon  when  a
release of petroleum hydrocarbons is reported to assess the nature and extent
of the release and to formulate a remedial action plan to address the problem.
The characterization work is frequently conducted on an emergency response
basis, requiring  rapid turn around of data to support remedial decisions in the
field.

A common problem with characterizing the nature and extent of  petroleum
hydrocarbons in soil at a release site has been the lack of a quick, easy to use
and accurate method of measuring the concentration of petroleum hydrocarbons
in  soil  at the  site.   Typically,  soil vapors   are  measured using  a  field
photoionization detector (PID) or other similar device to test for the presence of
gasoline  in soil,  while observations of staining and odors are used  to check for
the presence  of "heavier",  less volatile  petroleum hydrocarbons  (i.e., diesel,
motor oil, kerosene, jet fuel, crude oil). These  semi-quantitative field data are
often  used to  direct soil excavation  activities  and  to  determine where
confirmation samples are to be collected and sent to a state-certified  laboratory
for analysis.

Use of these  semi-quantitative field methods typically results in the following
problems: false  positives (field  methods  indicate the presence of  petroleum
hydrocarbons where they are not present) resulting in unnecessary excavation,
excessive confirmation sampling and lost time and money; false negatives (field
indicators do not indicate the presence of petroleum hydrocarbons where they
are present above the target concentration) resulting in re-excavation of areas
presumed to be "clean" and lost time and money; and uncertainty in the data,
resulting in excessive confirmation sampling and downtime.

Dexsil Corporation, recognizing the need for a  fast, low cost, quantitative field
test for  determining  the  concentration  of  a full  range  of   hydrocarbon
contaminants  in  soil, recently developed the PetroFLAG field test kit.  The
PetroFLAG test is inexpensive, fast, easy to  learn and yields quantitative results
for a  full range of hydrocarbons in  soil.    The PetroFLAG analyzer  displays
sample results directly in  parts per million (ppm).  The test kit can be used to
analyze one sample, or multiple samples at a time.
                                           98

-------
Correlation  between  PetroFLAG test results and standard EPA  Laboratory
methods 8015 and 418.1  is  excellent.   The  PetroFLAG  test kit  provides
environmental professionals with  a new tool to  perform  quantitative on-site
sample analysis quickly and inexpensively. PetroFLAG test  results can be used
to determine when and where to  collect soil samples  for  (more  expensive)
laboratory confirmation analysis, thus eliminating subjective observations such
as soil color and soil odor from the process.

This paper presents a case study involving the use of the PetroFLAG test kit at a
diesel fuel release site where excavation was the selected remedial measure.
BACKGROUND

Several thousand gallons of  diesel fuel were released  from  an underground
pipeline beneath the roadway in a residential  neighborhood in California.  The
diesel  fuel  was released under  pressure from  the  top of the pipeline  at
approximately 3 feet below ground surface, resulting in  the upward migration
and  lateral spread of the diesel beneath the asphalt road.  Some of the diesel
emerged from the seams between the asphalt road and  the concrete  sidewalk
and subsequently flowed above ground into an adjacent storm drain.

Levine-Fricke, Inc. was called in to assess the nature and extent of the release
and to formulate a remedial action plan for the site.  Levine-Fricke is a nation-
wide, full-service environmental consulting firm and is recognized as an industry
leader in the characterization and remediation of petroleum-affected sites.

Due to the residential  setting  of the site and the specific concerns of  the local
residents, the responsible party agreed to  excavate  the diesel-affected soil
beneath  the road to a  concentration below laboratory detection limits (i.e., less
than 1  ppm for total petroleum hydrocarbons as diesel [TPH/d]). The excavated
diesel-affected soil is to be treated using bioremediation  at an off-site location.
A mobile laboratory was dispatched to the site to provide real-time on-site data
to help direct the excavation.  In addition, Levine-Fricke arranged for an on-site
demonstration  of the  PetroFLAG  test  kit  by  a  representative of  Dexsil
Corporation to help assess whether PetroFLAG would be appropriate for use at
the Site.
                                            99

-------
The following sections discuss how the PetroFLAG test kit was used  on the
project.


FIELD PROCEDURES

Training and Confirmation Sampling

A representative of Dexsil Corporation provided Levine-Fricke personnel with a
demonstration on the morning of the second day of the excavation activities.
The demonstration consisted of calibrating the PetroFLAG analyzer using two
prepackaged calibration standards.  The calibration standards consist of a blank
and a 1000 ppm spike, and are provided with every ten pack of soil test reagents
used in  the  PetroFLAG test kit.  The on-site calibration took approximately 10
minutes to perform.
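
As a hedged illustration of the two-point (blank plus 1000 ppm spike) calibration idea described above — not Dexsil's actual internal algorithm, which is not documented in this paper — a simple linear response model could be fit and used to convert a raw reading to ppm; the raw response values below are hypothetical.

# Illustrative two-point linear calibration (blank + 1000 ppm spike).
# The analyzer's internal calculation is not documented here; the response
# values used in this sketch are hypothetical.
def make_calibration(blank_response, spike_response, spike_ppm=1000.0):
    """Return a function that converts a raw analyzer response to ppm."""
    slope = spike_ppm / (spike_response - blank_response)
    def to_ppm(response):
        return max(0.0, (response - blank_response) * slope)
    return to_ppm

to_ppm = make_calibration(blank_response=0.02, spike_response=1.15)
print(round(to_ppm(0.45), 1))   # hypothetical sample reading converted to ppm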

A ten gram sample  of the soil from the site was weighed directly into the
extraction container and a  premeasured ampulized extraction  solvent mixture
was added  to the soil sample. A  timer was set, and the soil and extraction
solvent were then shaken vigorously several  times during the first four minutes
of the five minute extraction period.  The mixture was allowed to settle during the
final minute. The solvent/soil mixture was then decanted into a filter syringe and
the sample extract was  filtered directly into a cuvette  containing the pre-
measured color development solution.  The  digital timer was  then  set for ten
minutes (the color development quantification period).  The cuvette contents
were then mixed thoroughly during this  period.   At the end of the  ten  minute
quantification period,  the  cuvette  was placed into the calibrated  PetroFLAG
Analyzer and analyzed for  diesel.  The  total demonstration including analyzer
calibration took approximately 25 minutes.
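
As a quick consistency check on the timings quoted above (illustrative arithmetic only), the 10-minute calibration, 5-minute extraction, and 10-minute color development account for the approximately 25-minute demonstration:

# Consistency check of the demonstration timing described above (illustrative only).
calibration_min = 10    # on-site calibration with blank and 1000 ppm standards
extraction_min = 5      # shake-and-settle extraction period
development_min = 10    # color development quantification period
print(calibration_min + extraction_min + development_min, "minutes total")   # -> 25 minutes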

Upon completion of the demonstration, Levine-Fricke personnel collected a soil
sample  near the excavation and split the sample into two sub-samples.  One
sample split was analyzed by the on-site  mobile laboratory for TPH/d using EPA
method  8015; the other split sample was analyzed for TPH/d using the methods
described above.  The short demonstration period was sufficient for Levine-
Fricke personnel to conduct the analysis using the PetroFLAG kit.  Results from
both analyses were below detection limits (i.e., less than 1 ppm) for the mobile
laboratory.  The PetroFLAG result was zero.  Based partially on these results,
and the quick and easy nature of the PetroFLAG analysis  method,  PetroFLAG
was selected for use at the site.
                                      100

-------
Excavation and Sampling Procedures

The  objective of the remedial action plan was to excavate petroleum-affected
soil with a concentration of TPH/d greater than the laboratory detection limit (1
ppm) from the site.  To meet this objective, soil samples were collected from the
bottom and sidewalls of the excavation using a slide-hammer sampler fitted with
clean brass  liners.   Subsamples  were  collected  from  the brass liners and
analyzed for TPH/d using the PetroFLAG test kit.

Results of the PetroFLAG  analyses were used  to assess whether additional
excavation would  be  required  in the  area sampled and to assess  where
confirmation samples were to be collected.  If the results from the PetroFLAG
analysis  indicated  the presence of petroleum  hydrocarbons  above 1  ppm,
additional excavation was conducted in that area.   When the results  of the
PetroFLAG analysis indicated that petroleum hydrocarbons were  not present,
the subject sample  was sent to a  state-certified  laboratory  for  confirmatory
analysis and excavation in that  portion  of the site  was stopped.  After results
were received from the state-certified  laboratory  confirming the  PetroFLAG
results, the area was backfilled with clean  fill,  compacted and paved.   This
procedure was followed until the entire portion of the road  was remediated.
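
The excavate/confirm decision rule described above can be summarized in a short sketch; the function and argument names are hypothetical, while the 1 ppm target and the backfill step come from the text.

# Sketch of the excavation decision rule described above. Function and argument
# names are hypothetical; the 1 ppm cleanup target is the laboratory detection limit.
TARGET_PPM = 1.0

def next_action(petroflag_ppm, lab_confirmed_clean=None):
    if petroflag_ppm > TARGET_PPM:
        return "continue excavation in this area"
    if lab_confirmed_clean is None:
        return "collect confirmation sample for the state-certified laboratory"
    if lab_confirmed_clean:
        return "backfill with clean fill, compact, and pave"
    return "re-excavate and re-screen this area"

print(next_action(petroflag_ppm=12.0))
print(next_action(petroflag_ppm=0.0, lab_confirmed_clean=True))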

Approximately 210 samples were analyzed on-site using  the PetroFLAG test kit
during the  excavation work (approximately 8 weeks).  Of the 156 samples that
were sent to the state-certified laboratory for confirmation,  only 3 samples had
results greater than the detection limit (at 3, 4 and 17 ppm, respectively).  Based
on  the excellent agreement  between results from  PetroFLAG analysis and
analysis  results from the state-certified laboratory, the  mobile laboratory was
sent off of the Site after two weeks  and confirmatory samples were sent to a
(less expensive) stationary laboratory.
RESULTS AND DISCUSSION

Training

The on-site training session for Levine-Fricke personnel took approximately 25
minutes to complete; 10 minutes of that time consisted of calibrating the
PetroFLAG analyzer.  From this short training session,
                                         101

-------
Levine-Fricke personnel were able to use the PetroFLAG test kit with confidence
on the same day, immediately after the training session.  Levine-Fricke
personnel operated the PetroFLAG test kit several times a day during the
excavation project with virtually no problems or delays.

Results of Confirmation Analysis

Of the 156 samples analyzed using PetroFLAG and sent to  the state-certified
laboratory for confirmation, only 3 had results greater than the detection limit (at
3,  4,  and 17 ppm, respectively).   It is possible  that the disagreement in the
results associated with these samples may have been the result of soil
heterogeneities within the collected soil sample volume.  In any case, the data
collected during this study indicate an excellent agreement between PetroFLAG
results and results from a stationary, state-certified laboratory using  EPA method
8015.

Use of the PetroFLAG Test Kit at the  Excavation

The PetroFLAG test kit was used  exclusively at the site to  assess when the
lateral and vertical extent of the diesel-affected soil had been reached.  Based
on the excellent agreement between the PetroFLAG results and  the results from
the mobile laboratory,  the mobile laboratory was sent off of the Site after two
weeks and confirmatory samples were sent to a  (less  expensive) stationary
laboratory.  The confidence  in the PetroFLAG data was sufficiently high to allow
for use of the PetroFLAG data only to direct the excavation.

Use of the PetroFLAG test kit in this manner resulted in  substantial savings of
both  time and money.   The quick turn-around time for PetroFLAG results
(approximately 10 minutes) made it possible to make decisions regarding where
to  excavate and where to  halt excavation and  collect  confirmatory samples
rapidly, resulting in efficient use of manpower and excavation equipment.  This
resulted in an accelerated progress of  the excavation project and completion of
the excavation ahead of schedule. Also, use of the PetroFLAG test kit to screen
samples for confirmatory analysis prevented "hot"  samples from  being submitted
to  the mobile laboratory that might overload the  mobile  laboratory equipment,
resulting in costly downtime.

Use of the PetroFLAG test kit at the Site also resulted in substantial
                                           102

-------
savings of money.  Perhaps the most significant cost savings was realized in the
overall savings of time described above.  Other more direct cost savings realized
through the use  of PetroFLAG included reduced volume of excavated  soil and
reduced  total  laboratory costs.   The  quick  (approximately 10 minutes) and
inexpensive (approximately $15.00/sample) nature of the PetroFLAG analysis
process allowed for frequent collection and analysis  of samples to assess the
limits of the excavation.  This increased sampling density and frequency resulted
in better definition of the excavation boundary at any given  place and time, thus
minimizing  excavation of clean soil.

The overall cost of the PetroFLAG test is $15.00 per test.  As discussed above,
use of the PetroFLAG test kit resulted in fewer samples being submitted to a
state-certified laboratory (cost of $100 to $200/sample for 24-hr turnaround) and
allowed Levine-Fricke to discontinue use of the mobile laboratory (approximate
cost of $1500.00 per day).
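
Using only the unit costs and sample counts quoted in this paper, a rough back-of-the-envelope comparison might look like the sketch below; the assumption of 5 working days per week for the mobile laboratory is ours, and the arithmetic is illustrative only.

# Rough cost arithmetic from figures quoted in the text (illustrative only).
petroflag_tests = 210            # on-site PetroFLAG analyses during the excavation
confirmation_samples = 156       # samples sent to the state-certified laboratory
petroflag_cost = petroflag_tests * 15.00
confirmation_low = confirmation_samples * 100.00
confirmation_high = confirmation_samples * 200.00
# Mobile lab released after 2 of ~8 weeks; 5 working days/week is an assumption.
mobile_lab_savings = 6 * 5 * 1500.00

print(f"PetroFLAG testing:     ${petroflag_cost:,.2f}")
print(f"Confirmation analyses: ${confirmation_low:,.2f} - ${confirmation_high:,.2f}")
print(f"Mobile lab days avoided (assumed): ~${mobile_lab_savings:,.2f}")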
SUMMARY AND CONCLUSIONS

The PetroFLAG test kit was used at a diesel fuel release site to provide rapid,
inexpensive and accurate data regarding the nature and extent of the diesel fuel
in soil.  Agreement between PetroFLAG results and results from a stationary,
state-certified laboratory using EPA Method 8015 was excellent. Because of the
excellent agreement between these methods, the PetroFLAG test kit was used
exclusively  at  the Site  to direct  the  excavation and  to determine where
confirmatory samples  were to be  collected for submittal  to a state-certified
laboratory for analysis.

Use of the PetroFLAG test kit at the subject site resulted in substantial savings
of both time and money.   The  quick turn-around  time for PetroFLAG results
(approximately  10 minutes) made it possible to make decisions regarding where
to excavate and  where to  halt excavation and collect confirmatory samples
rapidly, resulting in efficient use of manpower and excavation equipment. Also,
use of the PetroFLAG test kit replaced the need for an on-site mobile laboratory
and reduced the total number of samples sent to a state-certified laboratory for
confirmation analysis.

Based on the performance of the PetroFLAG test kit during the excavation
phase of this project, Levine-Fricke is using the PetroFLAG test kit in the
                                         103

-------
bioremediation treatment phase of the project. A soil biotreatment cell has been
constructed to treat the diesel-affected  soil excavated from the release area.
When results from the PetroFLAG tests indicate that the concentration of diesel
in soil in the  biotreatment cell is below the target remediation level,  confirmation
samples will  be collected and  sent to a state-certified laboratory for analysis.
Additionally,  the low cost associated with the PetroFLAG test kit will allow for
increased sampling of diesel concentrations in soil in the biotreatment cell while
the bioremediation is in progress.  These on-going monitoring data will be used
to track the rate and distribution of bioremediation within the biotreatment  cell
and to evaluate what adjustments to the biotreatment cell (e.g., increased air
flow, addition of nutrients) may be warranted.
                                          104

-------
Enforcement

-------
                                                                                        17

              THE ADMISSIBILITY OF SCIENTIFIC EVIDENCE
Edwin E. Perkins, Environmental Chemist, Chief, Chemical Section, New York State
Department of Environmental Conservation, Albany, New York, 12233
ABSTRACT
Time, energy, money and opportunity are wasted when environmental evidence collected
by technical professionals to achieve a certain objective fails to serve that purpose. This
presentation examines the basic reasons why collected data evidence fails so often to
meet legal standards of proof and what the technical professional can do to ensure that
specific evidence stands up to judicial scrutiny.
INTRODUCTION
Environmental statutes required the promulgation of regulations which specified
detailed, highly technical procedures for handling and analyzing sample data evidence.
Even now other concepts are being introduced into the process such as "performance
based" methods, which will tend to shift more responsibility to the evidence generator.
The use of such evidence in legal actions and the current emphasis on compliance with
environmental regulations have increased the requirements placed on environmental
professionals to generate admissible and defensible data as evidence in civil and criminal
proceedings.
The following outline represents the topics to be discussed in the speaker's presentation:


I.    Environmental Data vs. Scientific Evidence

     A. Good Sample and Poor Evidence

     B. Understanding the Rules in Two Arenas

II.  The Approach used in Evaluating the Admissibility of Scientific Data as Evidence

    A. Frye Standard

    B. Daubert vs. Merrell Dow Pharmaceuticals, Inc.
                                             105

-------
III.  Foundational tests for Environmental Data Evidence

    A. Relevance

    B. Foundation

    C. Authenticity (Chain of Custody)

IV.  Identifying the Language Barrier that Affects Communication between the
    Technical and Legal Professional

V.  Using the Daubert-Blackmun Factors in Determining Scientific Validity of
    Testimony

VI.  Establishing the Data Generation Path
                                             106

-------
                                                                                      18
                    11th Annual EPA-ACS Waste Testing Conference
                                    July 25, 1995

  STRATEGIC CONSIDERATIONS IN PRESENTING TECHNICAL EVIDENCE IN
                             COURT: A CASE STUDY

Barry M. Hartman, Partner, Kirkpatrick & Lockhart, 1800 M Street, N.W., Washington,
D.C.  20036
Part I: The Case Study

Attached to this paper is the transcript of certain testimony by several expert witnesses in
United States v. Frank, et al. No. 93-706 (ERK) (E.D. NY 1995). It will provide the reader
with a bit of the flavor of how expert testimony is actually presented in court.

The case involved charges that the defendants conspired to defraud the United States, and
violated several provisions of the Toxic Substances Control Act ("TSCA"), as well as the
Federal Water Pollution Control Act.  For purposes of this discussion, only the TSCA counts
are relevant.

The facts, which are greatly simplified, are as follows:  The defendants operated an oil and
tank truck cleaning facility.  Their specialty was cleaning out oil barges. They  also cleaned
oil tank trucks. They washed out the barges with high pressure water.  They allowed the
oil/water to  separate in a  large separator tank, which had heating coils in the bottom to help
the separation process. They used the oil as fuel for boilers to create steam, in order to heat
water for cleaning future  barges.  The wastewater was sent through a treatment system
before being discharged pursuant to an NPDES permit.

Over a period of years, solid particles that were suspended in the oil/water that entered the
separator tank, settled to the bottom of the tank, during the separation process,  creating
sludge.  After a period of time, the sludge affected the ability of the heating coils to work,
and had to be removed. The defendants moved the sludge from the tank and placed it in a
barge that was not in use. The barge had four compartments that were each about 60' long,
13' wide, and 20' deep.

Over a year later, during  an inspection on October 5, 1990 the U.S. Environmental
Protection Agency (EPA) took one oil sample from each of the  four barge compartments via
an access hole. Three weeks later the EPA took one more sample (sludge, this time) from
each compartment, using  the same access hole.  One month later they took one  more sludge
sample from each compartment, again using the same access hole.  Testing of these samples
indicated the presence of polychlorinated biphenyls (PCBs).
                                            107

-------
There were other samples taken over the next two years, mostly by private laboratories,
before the material was removed from the barge.  Almost all were taken from the same
sampling point as the initial sets. In several instances, oil samples were tested and did not
indicate the presence of PCBs. In other cases, sludge was tested, and PCBs in excess of 50
ppm were found.  When the sludge was finally removed from the barge (under the direction
of the EPA and the U.S. Coast Guard), the material was sampled again, as it was removed.
Most of those samples indicated PCB  concentrations of under 50 ppm.

TSCA mandates special handling of materials containing PCBs in concentrations of 50 ppm
or more. Based on the test results, the United States obtained an indictment against the
defendants that charged several of them with violations of TSCA marking, storage, and
disposal requirements.  The conspiracy alleged that defendants hid the fact that the storage
tank contained PCB-contaminated sludge and that they were  moving it to the barge.

The attached testimony includes (1) the direct and cross examination of one of the persons
(Randy Braun) who took the samples from the barge (Attachment A); (2)  stipulations about
the testimony of two persons who assisted in analyzing the samples in the NEIC laboratory
(Attachment B); (3) testimony by one of the NEIC laboratory scientists (Dr. Laurence
Strattan) who conducted or oversaw the sampling analysis in the laboratory (Attachment C);
and (4) testimony by one of the persons who analyzed samples taken by a private contractor
(Attachment D).
Part II:  Discussion Points

       A.    Prefiling considerations

             1.     Using the Freedom of Information Act

             2.     Using technical evidence in prefiling negotiations

      B.    Pretrial considerations

             1.     Is this a civil or criminal case?

                    •      Discovery differs

                    •      Obligation to disclose differs

                    •      Trial tactics differ

                    •      Burden of proof differs
                                                108

-------
       2.      Experts (witnesses vs. consultants)

       3.      Finding the "qualified" expert/consultant:  "I never found an expert
              who wasn't qualified./I never found an expert who was qualified."

              •     investigatory experts

              •     sampling experts

              •     laboratory experts

              •     toxicology experts

              •     experts on the law

              •     experts v. consultants

       4.      Reviewing the government's data

              •     Who reviews it?

              •     What is reviewed?

              •     When is it reviewed?

       5.      Government protocols and procedures

              "Alice in Memoland"

C.     Trial considerations

       1.      Who will educate the judge/jury? (Who wants to?)

       2.      Do I even want to dispute the technical evidence?

              a.     Burden of proof

              b.     Credibility of counsel

              c.     Theory of the case

       3.      Did my client get and test split samples:  "What goes around, comes
              around."
                                             109

-------
             4.     Calling an expert

             5.     Attacking technical evidence

                   a.     Sampling plans were flawed.

                    b.     The sample isn't proof of anything (representativeness).

                   c.     The person taking the sample wasn't qualified.

                   d.     The person taking the sample was sloppy.

                   e.     The person taking the sample didn't follow his own agency's
                          procedures.

                   f.     It's not fair to use that evidence because no private company
                          could have known to test the way the government did.

                   g.     The laboratory wasn't qualified to analyze the sample.

                   h.     The laboratory made mistakes when it analyzed the sample.

                   i.      The laboratory didn't follow the right protocols when it analyzed
                          the sample.

                   j.      It's not fair to use evidence against someone who couldn't have
                          known to test the material the way the government did.

                   k.     "Big" attacks vs.  "little" attacks

            6.     Using objections

            7.     When to use technical evidence in the defense case

            8.     How technical evidence is used in closing arguments.

       D.    Conclusions
Those who attend  the  enforcement session on Tuesday afternoon will receive
copies  of attachments at the  meeting.
                                             110

-------
                                                                                 19
           AVOIDING SUCCESSFUL CHALLENGES OF MEASUREMENT DATA


Laurence W. Strattan, Chemist, U.S. Environmental Protection Agency,
National Enforcement Investigations Center, Box 25227, Denver,
Colorado 80225

ABSTRACT

This  paper  is  presented  from  the viewpoint  of  a technical  person,
especially someone performing laboratory analyses, and discusses ways to
thwart  successful  attacks on  data.   Good  planning is the  single most
important  preventive measure.   The best  planning  involves all  of the
personnel needed to produce the data,  and will address all the objectives
of the measurement  activities  (both sampling and analysis).   The people
involved   need  to   know  what  regulations  are  being  enforced,  any
requirements placed  on  measurement procedures  by those regulations, and
what is needed to show a violation of those regulations.  Shortcomings in
these areas due to lack of planning are  the easiest  to attack.  These are
"legal hoops" you need to get through to defend data.  If you have covered
these and other basic legal requirements such as chain-of-custody, testing
with a sound scientific basis  should be able to  withstand attacks.  Having
the  correct  answer  and being able  to  show that fact also  helps,  but
doesn't count if you haven't covered the basics.

INTRODUCTION

It is anticipated that  this presentation will follow a presentation by a
defense attorney, who among other  things, outlined generic approaches to
attacking  data.   Some  general comments  of my own  apply to  this topic.
Having data  good enough that  it  won't be attacked should  be a goal, but
there are ways to attack any data.  Expect to have data attacked and try
to be prepared. Also, there is no one correct way of producing defensible
data.    There  are  fifty  states  each probably  having  one  or  more
laboratories which  analyze  samples for  environmental enforcement cases,
and I believe that they usually do the job successfully.   I am even more
confident  that  they  have  at  least fifty different ways of doing things.
Some  of the examples  I give should be  thought  of as things  that have
worked, not  the only way to do things.

DISCUSSION

The first  thing a technical person thinks of in connection with the term
"data  defensibility" is  probably the  scientific defensibility  of the
procedures  used and the  results obtained,  including  quality assurance
results.    In   practice,  unless  the data  really  do not  support  the
conclusions presented, these areas will not be seriously attacked.  A non-
serious attack  might be trying to obtain victory by  default  -  put up a
weak scientific challenge and hope to  prevail because nobody fights back.
A serious attack would require complicated technical arguments  and, if the
basic conclusions of the data  are  correct, would have little  to gain.  The
easier and usually more productive arguments to make are things such as:
the required procedure  was not used; the  samples  are flawed  making the
results meaningless;  or,  required items such as  holding  times  were not
satisfied.  Checking the claimed  credentials of the people involved to see
if they are  accurate is also standard practice.

If a procedure is  required by regulation,  that is  the procedure that must
be used.   Regulation in  this  sense could be federal,  state,  local,  or
perhaps  a permit.   Knowing  if   something  is  required is  part  of the
planning  which should  occur  prior to  sampling.   However,  it  is  also
                                         111

-------
essential  to  know what  is  actually required  and what  is  an  implied
requirement.   The  implied requirement may be something done by tradition,
or because it is used in another area. Implied requirements are a common
area to attack because it is  easy to  create the impression that something
wrong was done.   Such attacks may be successful  unless  you can quickly
refute the implied shortcoming, usually  by explaining  that it was not a
requirement in this case, and often by adding that the results would have
been the same using either procedure.

Communication  among  the personnel  involved  in  an  enforcement  case  is
obviously  helpful since each  person  can  supply  details   from  their
specialty area, lowering the  chances  of overlooking a requirement unknown
to  persons  less  knowledgeable   in that   area.     The  earlier  this
communication takes place,  the better.  It is best when it takes place in
the planning stages.

Communication should  also address sampling.   Sampling is a critical part
of the measurement process,  and one that often does not  get  its proper
emphasis.   Besides  the  importance  of  communication  to  understanding
objectives, an understanding  of the   sample  collection process can help
the chemist in defending data.  The chemist should know enough about the
sampling to convince  themselves that  the  procedures used were adequate to
meet objectives.  A chemist will give a more credible appearance if they
can discuss the overall measurement, not just  the laboratory analysis.  In
a  deposition  you will  be  questioned  in  your weakest  and/or  least
knowledgeable areas,  if for no other  reason than to shake your confidence
and perhaps get weaker answers within your areas of expertise.  Also,  in
RCRA testing especially,  the  important thing often is not the data alone,
but a conclusion based on the results of  a test.  That conclusion usually
involves a knowledge  of the overall measurement, not just the analytical
step.  However, you  need to  remember your limitations.   Don't get into
being an expert on sampling unless you are one.

Beyond  using  a  procedure if it  is  required,  the  best  way to  have
defensible data is to have the correct answer,  and be  able to show you
have  the  correct  answer.    This  is where  the  process  gets down  to
scientific defensibility.  If the results you obtained would stand up to
being published in a  refereed scientific journal, they will stand up  in
the  legal  process.    This  may mean  doing  a  determination  by alternate
techniques to show you can get the  same result by different methods, much
the way the National Institute of Standards  and Technology does in setting
values for a reference material.  The thoroughness of the process depends
on the tests performed and on the legal venue.   If standard methods meet
objectives, those  tests should be adequate as performed.  If non-standard
tests  are performed,  alternate  procedures  would  probably  give  more
confidence in the  results obtained.

Quality assurance  is  another  important part of being able to show you have
the correct answer.  In our laboratory,  if a  sample exceeds a regulatory
concentration, we like  to  analyze  the sample in  triplicate to show the
result is reproducible and to be able to  make a more definitive statement
about how confident we are the limit is exceeded.  Blanks and spikes are
also important in  this basic scientific defensibility, but  if you have one
result for  a  sample,  running a replicate  gives more  information than
spiking the sample, especially if  sample homogeneity is  a question.  If
you get  good  results  on a spike,  you either were able  to duplicate the
sample result and recover the spike, or you were just lucky.  If you run
replicates, you find  out how well  you can reproduce the  result.   Also,
what is in the sample is  the  issue, not how well you can recover a spike.
Analyzing  a  reference material  to  verify  calibration is  also  a good
practice.
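
As a hedged illustration of the triplicate-analysis practice described above, the sketch below computes the mean, standard deviation, and a simple one-sided lower confidence bound for three replicate results against a regulatory limit; the numbers, and the use of a Student's t statistic, are assumptions for demonstration, not the NEIC laboratory's documented procedure.

# Illustrative only: triplicate results compared to a regulatory limit using a
# one-sided 95% t-based lower confidence bound. Data and limit are hypothetical.
import statistics

replicates = [62.0, 58.5, 60.3]   # hypothetical replicate concentrations, mg/kg
limit = 50.0                      # hypothetical regulatory limit, mg/kg

mean = statistics.mean(replicates)
sdev = statistics.stdev(replicates)
t_95_df2 = 2.920                  # one-sided 95% t value for n - 1 = 2 degrees of freedom
lower_bound = mean - t_95_df2 * sdev / (len(replicates) ** 0.5)

print(f"mean = {mean:.1f}, s = {sdev:.2f}, 95% lower bound = {lower_bound:.1f}")
print("limit exceeded with ~95% confidence" if lower_bound > limit else "exceedance not demonstrated")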
                                         112

-------
SUMMARY

The legal process  can be intimidating, but that doesn't mean you should be
intimidated.    Knowing what  is  required  (and  not  required)  by  the
objectives of a project  is the biggest single  thing  which  can help the
technical person to  defend results.   Being  able to  show you have gotten
the correct answer is also important, either using a prescribed procedure
or by just using accepted  science.  Regardless, the technical person can
expect  to have  results questioned,  and should not take  it  personally;
attacking you is  somebody's job.   You should  look forward to overcoming
arguments based on half-truths and bad science.  On the other hand, if you
need to defend results based on those same things, you will deservedly be
in trouble.
                                          113

-------
Organics

-------
20


  SOXHLET ALTERNATIVES FOR THE ENVIRONMENTAL LAB

Mark L. Bruce, Quanterra, 4101 Shuffel Dr. NW, North Canton, Ohio 44720,
Jack R. Hall, Quanterra, 9000 Executive Park Drive, Suite A110, Knoxville, TN 37923

ABSTRACT
Accelerated solvent extraction (ASE) combines aspects of both supercritical fluid
extraction  (SFE)  and microwave assisted extraction  (MAE).   The extraction is
accomplished  using traditional organic solvents at moderate temperature and pressure.
Extraction time is  faster than SFE, while labor time is comparable to automated SFE.
Solvent usage and instrument cost are intermediate between automated SFE and MAE.
Extraction efficiency is generally equivalent to standard laboratory extractions  with
Soxhlet and sonication.  ASE analyte recovery from some challenging matrices was
significantly higher than sonication.

INTRODUCTION

Sample preparation alternatives to Soxhlet and sonication are  needed to reduce solvent
and labor requirements  while shortening  sample preparation time.   Soxhlet is the
technique to which  other solid sample extraction techniques are usually  compared.
Sonication has been a routine technique for many years.  It is faster than Soxhlet, but is
more labor intensive. Also, analyte recovery from some challenging matrices may be
significantly less than Soxhlet.

Several new extraction technologies have been developed which shorten the extraction
time (like sonication) while often maintaining the thoroughness  of Soxhlet.  This is
usually accomplished by extracting at above room temperature and at elevated pressure.
Alternative extraction fluids may also be used to improve the extraction.  Most of these
alternatives are more automated than Soxhlet  or sonication, thus  less analyst labor is
required.

These new technologies for solid sample extraction are becoming viable alternatives to
traditional Soxhlet extraction of solid samples for the production environmental lab.
Automated and accelerated Soxhlets are available. Also, supercritical fluid extraction
(SFE)  and microwave assisted extraction (MAE)  have been applied to environmental
matrices.   The latest  sample preparation option for the  lab is  accelerated solvent
extraction (ASE).

EXPERIMENTAL

Accelerated solvent extraction (ASE) will be included in SW-846 Update III as Method
3545.  The extraction time is 10 minutes, with sample-to-sample cycle time of about 13
minutes.   The required solvent volume ranges from 15 to 50 mL depending on the
amount of sample extracted. Sample amounts from 10 to 30 g can be routinely extracted.

The ASE uses traditional solvent mixtures (dichloromethane/acetone and hexane/acetone)
at moderate temperature and pressure  to  extract most routine semivolatile organic
analytes listed in SW-846. Figure 1 shows the plumbing arrangement of the system. The
weighed sample is mixed with sodium sulfate and placed in a stainless  steel extraction
vessel and sealed with end caps.  An automatic mechanism seals the  extraction vessel into
                                           114

-------
the plumbing system and moves it into the oven. The load valve opens and solvent is
pumped into the vessel.  When the vessel is full the static valve closes and the pump
continues until the pressure reaches the set-point. As the solvent and sample warm up to
the oven temperature,  the pressure rises  as the solvent expands.  When the pressure
exceeds the set-point by 200 psi, the static valve opens briefly (reducing pressure) and
releases about a milliliter of solvent into the collection vial.  The pump then adds a
milliliter of fresh solvent and brings the vessel pressure back to the set-point. This cycle
repeats many times while the sample and solvent are heating up to the oven temperature.
After a total equilibration and soak time of 10 minutes, the static valve opens and all
solvent in the extraction vessel is flushed out with a few milliliters of fresh solvent. Last,
the purge valve opens and nitrogen gas blows the remaining solvent into the collection
vial. The final extract volume ranges from 15 to 50 mL depending on sample size.
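
The static-valve pressure-relief cycle described above can be restated as a small control loop; the sketch below is a schematic simulation only, in which the Vessel class and its heating rate are hypothetical, while the 200 psi overpressure trigger and the roughly 1 mL vent-and-refill step come from the text.

# Schematic simulation of the ASE heat-up pressure cycle described above.
# The Vessel class and its numbers are hypothetical stand-ins for the instrument.
class Vessel:
    def __init__(self, set_point_psi):
        self.set_point = set_point_psi
        self.pressure = set_point_psi
        self.vented_ml = 0.0

    def heat_step(self):
        self.pressure += 75              # solvent expands as it warms (hypothetical rate)

    def vent_and_refill(self):
        self.vented_ml += 1.0            # ~1 mL released into the collection vial
        self.pressure = self.set_point   # pump adds fresh solvent, restoring the set-point

def heat_up(vessel, steps=20):           # steps stand in for the 10-minute equilibration/soak
    for _ in range(steps):
        vessel.heat_step()
        if vessel.pressure > vessel.set_point + 200:
            vessel.vent_and_refill()
    return vessel.vented_ml

print(heat_up(Vessel(set_point_psi=1500)), "mL vented during heat-up (simulated)")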
                   Figure 1  Accelerated Solvent Extraction System
                   (schematic of the solvent pump, oven, extraction vessel, valves, and collection vial)

Forty samples were examined that had been previously extracted and analyzed for
polychlorinated biphenyls (PCBs), organochlorine pesticides (OCPs) and semivolatiles
(BNAs) with traditional SW-846 methods (3540, 3550, 8080, 8270). Also, a few matrix
spiked samples were extracted for petroleum hydrocarbons and analyzed with infra red
analysis (TPH-IR).  The PCBs and OCPs were extracted with hexane/acetone  (50:50) at
100°C, 1500 psi for 10 minutes.  The BNAs were extracted with
dichloromethane/acetone (50:50) at 100°C, 1500 psi for 10 minutes.  The TPH-IR
samples were extracted with tetrachloroethene at 200°C, 1500 psi in 10-minute segments.

Figures 2-4 show the results from the ASE extraction plotted relative to the results from
sonication & Soxhlet extractions.  The diagonal line in the center indicates the region
where the ASE concentration data corresponds exactly to the sonication/Soxhlet data.
The shaded region covers the area from ASE results being 50% higher to 50% lower than
                                             115

-------
the sonication/Soxhlet results.  The X and Y axes are shown in log-log space since the
concentration data cover many orders of magnitude.
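
The ±50% agreement band used in these figures can be expressed as a simple check; the paired concentrations in the sketch below are hypothetical examples, not data from this study.

# Illustrative check of whether ASE results fall within the +/-50% band around
# the sonication/Soxhlet results, as plotted in Figures 2-4. Values are hypothetical.
pairs = [   # (sonication/Soxhlet result, ASE result), same units
    (12.0, 14.5),
    (250.0, 190.0),
    (3.1, 6.9),
]

for reference, ase in pairs:
    within = 0.5 * reference <= ase <= 1.5 * reference
    print(f"ref={reference:>7.1f}  ASE={ase:>7.1f}  within +/-50% band: {within}")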

On average the ASE PCB results (Figure 2) were comparable to the sonication/Soxhlet
results but data were more scattered than one would like.  Sample homogeneity was the
most likely cause. The prototype ASE system was limited to 10 g samples.

The OCP results (Figure 3)  show similar data scatter which was most likely  caused by
homogeneity limitations. Also, the ASE OCP  concentrations were in general much
higher than the corresponding sonication results.  These OCP samples were all from the
same  site and were  high in  both clay and moisture content, which tend to be the most
difficult type of soil  to extract.

The BNA  samples  (Figure 4)  showed good equivalence between ASE  and
sonication/Soxhlet  extractions.  The analytes  determined  ranged from phenol and
dichlorobenzenes to  6 ring PAHs. There were 32  different semivolatile analytes detected
in all.  Two regions of Figure 4 deserve special  note. Those analyte results that were
significantly  higher with the ASE were typically low concentrations  of low boiling
analytes. These types  of analytes have been previously reported as more difficult to
extract (1).  Thus, the ASE may be a more efficient extractor for these analytes. Another
concern raised about the ASE extraction was solvent saturation when very high level
samples were extracted. The sample whose results are shown at the high end of the BNA
graph was > 3% extractable  hydrocarbon material. These data compare well, so solvent
saturation was not a significant problem.

Figure 5 shows the results from several  matrix spiked petroleum hydrocarbon samples.
This test evaluated the ASE as an alternative to the Freon-113® based Soxhlet for TPH-
IR analyses. Hydrocarbon recovery from wet samples is more difficult when only non-
polar extraction solvents (such as Freon® or tetrachloroethene) are used. It is necessary
to dry the sample either before or during  the extraction to achieve good analyte recovery.
Both  wet  and dry sample  matrix spikes of diesel and  motor oil  were sequentially
extracted until quantitative  recovery  was achieved.   The two  dry samples  were
quantitatively extracted with a single 10 minute extraction with tetrachloroethene at
200°C.  The wet samples were more difficult to extract. The fresh diesel spiked sample
required a second 10-minute, 200°C extraction to reach quantitative recovery.  The aged
motor oil spike, although easy to extract when dry, became very difficult to extract when
the moisture content of this clay sample was brought to 50%.  Three 10-minute
extractions were required to achieve quantitative recovery. The first extraction recovered
very little hydrocarbon, but the extract contained significant amounts of water.
Subsequent extractions of the same clay sample aliquot recovered more hydrocarbons and
less water.
                                            116

-------
        Figure 2  Comparison of Polychlorinated Biphenyl Results: ASE results plotted
        against sonication/Soxhlet results (mg/kg), with +50%/-50% bounds shown.
        Figure 3  Comparison of Organochlorine Pesticide Results: ASE results plotted
        against sonication/Soxhlet results (µg/kg), with +50%/-50% bounds shown.
                                             117

-------
    Figure 4  Comparison of Semivolatile Base/Neutral/Acid Results: ASE results plotted
    against sonication/Soxhlet results (mg/kg).
Figure 5  Recovery of Petroleum Hydrocarbons and Analysis by Infrared: percent recovery
versus extraction number for the dry (aged oil), dry (diesel), wet (fresh diesel) and
wet (aged oil) matrix spikes.
                                             118

-------
Process Evaluation

Once acceptable extraction efficiency was established for the ASE, the effects on the
sample preparation process were evaluated. The evaluation covered several key areas: (1)
total extraction time, (2) labor time, (3) equipment cost, (4) supplies cost and (5) side effects
on other aspects of the sample analysis process.

The total extraction turn-around time is about 13 minutes.  This is the extraction time
only. It does not include sample homogenization, vessel loading, extract concentration or
clean-up.  This extraction time compares well with sonication and is shorter than most
implementations of supercritical fluid extraction and microwave assisted extraction. It is
dramatically shorter  than  the  traditional Soxhlet extraction and newer automated
Soxhlets.  Since the ASE is an automated sequential extractor, the total  extraction time
for a batch of samples will depend on the number of samples in the batch.  For example, a
batch of 10 samples  plus 4 QC extractions would take about 3 hours of unattended
extraction time.
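The batch estimate above is simple arithmetic; the sketch below is illustrative only (not part
of the original study) and assumes the ~13-minute per-extraction turnaround quoted in this
section holds for every sample in the batch:

    # Rough unattended batch time for a sequential ASE run, assuming a
    # constant ~13-minute turnaround per extraction (the figure cited above).
    def batch_extraction_minutes(n_samples, n_qc, minutes_per_extraction=13):
        return (n_samples + n_qc) * minutes_per_extraction

    total = batch_extraction_minutes(10, 4)   # 14 extractions x 13 min
    print(total, "minutes, or about", round(total / 60, 1), "hours")   # 182 minutes, ~3.0 hours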

The labor time component for the extraction process is small and primarily consists of
sample homogenization, drying agent mixing and QC spiking.  This part of the process is
about the same for each of the extraction alternatives. Thus, ASE and Soxhlet labor times
are about the same. Sonication has a significantly higher labor cost. Note that this labor time
comparison does not include extract concentration (see the side effects discussion below).

The initial cost of automated extraction equipment is often a major  component of the total
cost of performing the extraction. An automated ASE system costs about $45,000. This
is significantly more expensive than equivalent sonication, Soxhlet or automated Soxhlet
equipment.  However, the ASE system is less expensive than many  similarly automated
SFE systems.

The supplies cost of the automated ASE system cannot be estimated yet, since the ASE
has not been used in a long-term production mode.  The frits and seals of the
extraction vessels have a finite but undetermined lifetime  at present.  Under ideal
conditions these components could last for hundreds of extractions, but contamination
from real samples and misuse by extractionists will shorten their lifetime. Other supplies
such as solvent, sodium sulfate and filter paper should be less expensive (in total) since
solvent volume is reduced relative to traditional techniques.

Most new sample preparation techniques  will have  side effects on other parts  of the
preparation and analysis process.  Some  new extraction  techniques (such as the
automated SFE  systems) limit the  amount of sample  extracted to 5-10  g. This would
raise non-detected analyte  reporting limits unless other steps in the concentration or
analysis process compensate.  The ASE system does not produce this negative side effect
since 30 g sample sizes are maintained.  The ASE produces a positive side effect on
extract concentration. Since the ASE extract volume is much smaller than sonication or
Soxhlet extracts (50 mL compared to 200-300 mL), the macro-concentration is much
quicker and requires smaller glassware.
                                              119

-------
CONCLUSION

ASE extraction efficiency is generally equivalent to or better than that of the traditional sonication
and Soxhlet techniques. Actual extraction time is short and about the same as sonication.
Labor time is small and comparable to Soxhlet extraction.  Initial equipment cost is much
higher than for current extraction techniques.  The on-going cost of supplies cannot be
accurately estimated at present because some of the component lifetimes are
undetermined. The ASE extraction has the positive side effect of reducing concentration
time.  No negative side effects have been identified at this time. If applied properly, the
ASE should improve extraction turn-around time, reduce total cost and diminish the
health & environmental effects of solvent usage.

ACKNOWLEDGMENTS

Many people have contributed to the success of this study. In particular  the authors
would like to thank Bruce Richter, John Ezzel, Dale Felix and Brent Middleton from the
Dionex Corporation. Essential support was provided by Tom  Hula, Dennis Edgerley,
Sarah Braxter, Tami Stephens, Dennis Mayugba, Tony Young, Mark Gimpel, Richard
Burrows, Steve Kramer and Paul Winkler from Quanterra Environmental Services.

REFERENCES

(1) Burford, M.; Hawthorne, S.; Miller, D. Analytical Chemistry 1993, 65, 1497.
                                             120

-------
                                                                                        21
Sample Preparation Using Accelerated Solvent Extraction
(Method 3545, Proposed)


Dale W. Felix, Bruce E. Richter and John L. Ezzell
Dionex, Salt Lake City Technical Center
1515 West 2200 South, Suite A
Salt Lake City, Utah 84119

Abstract

Various techniques have been promoted during the last decade to replace solvent
intensive extraction techniques such as Soxhlet and sonication. These methods typically
require 300 to 500 mL of solvent for each sample.  However, replacement techniques have been
difficult to use, have taken extensive time and labor for methods development, have been
matrix dependent, or have not provided adequate recoveries.  Accelerated solvent
extraction (ASE) was developed to overcome these problems. ASE applies solvents at
elevated temperatures and pressures compared to traditional methods. At the
temperatures used in ASE, dissolution kinetics are accelerated and solvent capacity is
enhanced. The result is an enhancement of the extraction conditions which allows the
extraction of a wide range of environmentally important matrices in a few minutes with
minimal solvent consumption. For example, a 10-g sample requires only 13 to 15 mL of
solvent and the extraction is completed in twelve minutes. ASE has been compared to
conventional solvent extraction of chlorinated pesticides, BNAs, organophosphorus
pesticides and herbicides, in two laboratory validation studies. Based on results presented
here, ASE will be included as Method 3545 (proposed) in Update III of CFR 40.

Introduction

Preparation of solid waste samples for chromatographic analysis usually requires an
extraction procedure to separate the desired analytes from the matrix. Techniques such as
Soxhlet, which has been in use since the turn of the century, are generally solvent
intensive.  Typically 150 to 500 mL of solvent are required for samples in the range of 10
to 30 g.  Both environmental and economic concerns have led to the push to develop less
solvent intensive extraction techniques.  In the last decade, reduced solvent techniques
including automated Soxhlet, microwave digestion and supercritical fluid extraction
(SFE) have appeared. However, some of the new techniques, while ostensibly showing
great promise for reducing solvent usage, have faced other difficulties in assuring general
acceptance in the chemical laboratory setting.  Some of the problems include difficulty in
use, time required for each extraction, intensive labor requirements, difficult methods
development, matrix dependence, and poor recoveries of target analytes.

Accelerated Solvent Extraction (ASE) was developed to overcome these problem areas.
ASE applies solvents at elevated temperatures and pressures to achieve complete
                                            121

-------
extraction of the typical environmental matrices. As temperature increases, the various
kinetic processes controlling extraction are accelerated. Temperature will also
beneficially affect the solubility of most analytes.  The consequence of using solvents at
higher temperatures than typically used in other solvent methods is an enhancement of
the extraction conditions which then allows the extraction of a wide range of
environmentally important matrices in a few minutes with minimal solvent consumption.
The ASE system consists of an oven chamber, a pump, a solvent source and valves to
control both the liquid and gas purge flow paths.  A schematic diagram of an ASE system
is shown in Figure 1.


Methods

Apparatus and Materials.  Extractions were performed on a pre-production accelerated
solvent extraction (ASE) system  (Dionex  Corporation,  Sunnyvale,  CA) and by
conventional Soxhlet and shaker extraction. All solvents were analytical grade or better
quality.  Spiked soils were purchased from Environmental  Research Associates (ERA,
Arvada, CO), and were stored at approximately 4°C until used. These soils were prepared
in batch mode, and represent artificially aged samples.

Equivalency Study.
Extractions by ASE and automated Soxhlet, Soxhlet or shaker were performed in parallel.
All extracts from  both ASE  and the conventional method  were placed in the normal
sample queue.  No samples were re-extracted, and no extracts were re-analyzed. Seven
replicate extractions by each technique of each concentration level on each matrix for the
two  compound  classes were  performed.  Matrix blanks, spikes and  spike  duplicates
(quality control samples) were included for each matrix. These spikes and blanks were
obtained using clean soils from the same batches that were used for the spiked soils and
were  provided  by  ERA.  The  total number of extractions  and  analyses  for  both
equivalency studies, including blanks  and standards was over 600.
       Extraction. ASE extractions were performed at a pressure of 2000 psi and a
temperature of 100°C.  Additional information on the operation of ASE is reported in a
separate paper (1).  Stainless steel extraction vessels with 10.4 mL volumes and rated for
use at 5000 psi (9.4 mm x 150 mm, Keystone  Scientific,  Bellefonte, PA) were used.
Surrogate compounds (QA/QC compounds, not target analytes) were spiked directly onto
the soils immediately prior to sealing the sample vessels. Ten gram samples of spiked soil
were used for all extractions. Chlorinated pesticide spiked soils were extracted with a 1:1
mixture of hexane/acetone. Organophosphorus  pesticide and BNA spiked soils  were
extracted with a 1:1  mixture of methylene  chloride/acetone,  while herbicide spiked soils
were  extracted  with a  1:2  mixture  of  methylene chloride/acetone  with  4%  (v/v)
H3PO4/H2O (1:1).  All extracts  were  collected  into amber,  precleaned  40 mL  vials
purchased from I-Chem (New Castle,  DE).
       Quantitation. All analyses were performed by contract laboratory personnel.
Average recoveries for each analyte were determined from 7 replicate extractions and
analyses. For all data sets, no recoveries above 150% were included in the calculations of
                                             122

-------
average recoveries; however, any zero values were included. The accuracy and precision
data from the surrogate compounds were well within established quality control limits.
No method analyte was found in any reagent or method blank sample  at levels above
detection limits.
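A minimal sketch of the averaging rule just described (not the authors' code; the replicate
values shown are hypothetical placeholders) might look like this:

    # Mean percent recovery over replicates: values above 150% are excluded,
    # but zero recoveries are retained, as described above.
    def average_recovery(replicate_recoveries):
        kept = [r for r in replicate_recoveries if r <= 150.0]
        return sum(kept) / len(kept) if kept else 0.0

    # Hypothetical replicate recoveries (%), for illustration only.
    print(round(average_recovery([92.0, 88.5, 0.0, 101.3, 95.2, 160.0, 90.1]), 1))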

Results and Discussion
ASE Extractions
A schematic diagram of the accelerated solvent extraction  system used  in this study is
presented in Figure 1. The sample is loaded into the extraction vessel and it is filled with
the extraction solvent by opening the pump valve. Once filled, the cell is maintained at
constant pressure by the pump. The sample and solvent are then heated by placing the cell
in contact with a pre-heated metal block. While heating, thermal expansion of the solvent
occurs resulting in an increase in the measured pressure. This pressure increase is relieved
by periodically opening the  static  valve, venting small amounts of solvent  into the
collection vial. Following 10 minutes in this configuration,  the static valve is opened to
allow 7-8 mL of fresh solvent to flow through the cell. The pump valve is then closed and
the purge valve is opened to allow compressed nitrogen to push the remaining solvent
from the cell into the collection vial.
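The sequence of valve operations described above can be summarized as a simple step list;
the sketch below is a schematic outline only (the wording of each step is ours, and the
timing and volume are those quoted in the text), not control software for the instrument:

    # Schematic outline of one ASE cycle as described above.
    ASE_CYCLE = [
        "fill the cell with solvent through the pump valve and hold at constant pressure",
        "heat the cell against the pre-heated block for ~10 min, venting excess pressure "
        "through the static valve into the collection vial",
        "open the static valve and flush 7-8 mL of fresh solvent through the cell",
        "close the pump valve and purge the remaining solvent to the vial with nitrogen",
    ]

    for i, step in enumerate(ASE_CYCLE, start=1):
        print(f"step {i}: {step}")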

BNAs.
The relative recoveries of ASE compared to automated Soxhlet extraction for the BNA
compounds are summarized in Figure 2. The average relative recovery of ASE relative to
automated Soxhlet at all spike levels and from all matrices was 99.2%. Only one
compound fell below the target relative recovery value of 75%: benzo[g,h,i]perylene,
whose average recovery was 66.4%.
The average relative recoveries from the three matrices were as follows: clay - 96.8%,
loam - 98.7%, and sand - 102.1%. The average relative recoveries at the three concentration
levels were as follows: low - 101.2%, mid - 97.2% and high - 99.2%. The overall average
RSD values for the BNA compounds were 12.8% for ASE and 13.9% for Soxhlet.

Chlorinated Pesticides.
The relative recoveries of ASE compared to automated Soxhlet extraction of OCPs are
summarized in Figure 3. The average relative recovery of ASE relative to automated
Soxhlet at all spike levels and from all matrices was 97.3%. Again, only one compound
fell below the target relative recovery value of 75%: DDT, whose average recovery was
74.9%.
The average relative recoveries from the three matrices were as follows: clay - 96.0%,
loam - 99.1%, and sand - 96.8%. The average relative recoveries at the three concentration
levels were as follows: low - 105.1%, mid - 90.7% and high - 96.1%. The overall average
RSD values for the pesticides were 8.3% for ASE and 8.7% for Soxhlet.

Herbicides.
In order to extract free acid herbicides into organic  solvents, soil samples are normally
acidified with hydrochloric acid prior to extraction.  While  this procedure was followed
for the samples extracted by Method 8150A, samples extracted by ASE were acidified by
                                             123

-------
direct addition of phosphoric  acid to the extraction solvent, as described in Methods.
Recovery of the herbicides by ASE relative to the shake method is summarized in
Figure 4. The average recovery, relative to the shake method, at all spike levels and from
all matrices was 115.7%.
As with the other data, no matrix dependency seemed to exist with the herbicides. The
average relative recoveries from the three matrices were as follows: clay - 99.8%, loam -
138.7% and sand - 108.8%. The average relative recoveries at the two concentration
levels were as follows: low - 112.2% and high - 119.2%. The overall average RSD values
for the herbicides were 24.5% for ASE and 31.5% for the shake method. These values seem
high,  and can be  explained by the  fact that only eight  compounds were used for this
study. If one compound had poor  precision, as was the case with Dalapon, it would
heavily influence the overall precision. If the RSD for Dalapon is excluded, the averages
become 15.8% for ASE and 28.8% for the shake method.

Organophosphorus Pesticides.
The relative recoveries of ASE compared to Soxhlet extraction for the OPPs are
summarized in Figure 5. The average recovery of ASE relative to Soxhlet for the OPPs at
all spike levels and from all matrices was 98.3%. There were cases in which target
compounds were not detected in the  extracts by either extraction technique (TEPP, Naled,
Monocrotophos at all concentrations and Fensulfothion, Azinphos Methyl and Coumaphos
at low level from clay). In these cases, data points were excluded from the relative recovery
calculations.
The average relative recoveries from the three matrices were  as follows: clay - 97.0%,
loam - 100.0%, and sand - 97.0%. The average relative recoveries at the two
concentration levels were as follows: low - 98.9% and high - 97.6%. The overall average
RSD values for the pesticides were 9.3% for ASE and 8.4% for Soxhlet.

Conclusion
Accelerated solvent extraction (ASE) has been shown in this work to be equivalent to
conventional  solvent  extraction  of chlorinated pesticides,  BNAs,  organophosphorus
pesticides and herbicides.  The time required is less than  15  minutes per sample, and
solvent usage is reduced significantly (15 mL per 10 gram sample). Compared to Method
8150A, ASE eliminates the use of hydrochloric acid and diethyl ether, and significantly
reduces analyst labor time. The ability of ASE to achieve these results  is most likely due
to enhanced solubilization,  which occurs at elevated temperatures and pressures.
The data presented in this  study were used in the equivalency evaluation of accelerated
solvent extraction, which is scheduled to appear as SW-846 Proposed Method 3545 in
40-CFR update III (3).

References
(1)     B.E. Richter, J.L. Ezzell,  W.D. Felix, K.A. Roberts and D.W. Later, American
       Laboratory, Feb. 1995 24-28.
(2)     J.L.  Ezzell, B.E. Richter, W.D. Felix,  S.R. Black and J.E. Meikle, LC/GC  13(5)
       1995, 390-398.
(3)     Lesnik, B. and Fordham, O.,  Environmental Lab,  Dec/Jan 1994/95 25-33 (1995).
                                             124

-------
Figure 1.  Schematic diagram of accelerated solvent extraction (ASE): purge valve,
pump valve, extraction cell, collection vial and oven.

Figure 2.  Relative Recovery of BNA by ASE: percent recovery at low, mid and high
spike levels for clay, loam and sand.
                           125

-------
Figure 3.  Relative Recovery of OCP by ASE: percent recovery at low, mid and high
spike levels for clay, loam and sand.



Figure 4.  Relative Recovery of Herbicides by ASE: percent recovery at low and high
spike levels for clay, loam and sand.
                      126

-------
Figure 5.  Relative Recovery of OPP by ASE: percent recovery at low and high spike
levels for clay, loam and sand.
-------
 22
    EVALUATION OF THE NEW CLEAN SOLID PHASES FOR EXTRACTION OF
             NITROAROMATICS AND NITRAMINES FROM WATER
T.F.  Jenkins and P.G. Thorne,  U.S.  Army Cold Regions Research
and Engineering Laboratory, Hanover,  New Hampshire 03755 and
K.F.  Myers and E.F. McCormick, U.S. Army Engineer Waterways
Experiment Station, Vicksburg, Mississippi 39180
ABSTRACT
Salting-out solvent extraction (SOE)  is the preconcentration step
currently specified in SW846 Method 8330,  the reversed-phase high-
performance liquid chromatography (RP-HPLC)  method for nitroaro-
matics and nitramines in water. Previous attempts to utilize solid
phase extraction (SPE) in our laboratories indicated that use of
the solid phases commercially available at that time led to intro-
duction of unacceptable interferences for some matrices. Recently,
several manufacturers have introduced new cleaner solid phases.
This study was conducted to evaluate their utility in providing
preconcentration for low level determination of these analytes.

SOE was compared with cartridge and membrane SPE for preconcen-
tration of nitroaromatics, nitramines and aminodinitroaromatics
prior to determination by RP-HPLC.  The solid phases evaluated were
Porapak RDX for the cartridge method and Empore SDB-RPS for the
membrane method. Thirty-three groundwater samples from the Naval
Surface Warfare Center, Crane, Indiana, were analyzed using the
direct analysis protocol specified in Method 8330 and the results
compared with analyses conducted after preconcentration using SOE
with acetonitrile,  cartridge based SPE and membrane based SPE.
For high concentration samples analytical results from the three
preconcentration techniques were compared with results from the
direct analysis protocol. The results indicated that good recovery
of all target analytes was achieved by all three preconcentration
methods. For low concentration samples, results from the two SPE
methods were correlated with results from the SOE method. Overall,
very similar data were obtained by the SOE and SPE methods, even
at concentrations below 1 µg/L. Chromatograms from the three meth-
ods were examined and the large interferences observed for the SPE
methods in our earlier study, using less clean material, were
largely absent. A small interference was observed for both SPE
methods at the retention time of RDX on the primary analysis col-
umn that translated to concentrations ranging from 0.2 to 0.6 µg/L
RDX. Even though this peak was not present at the proper retention
                                     128

-------
time on the confirmation column, detection limits for RDX should
be raised to 0.6 µg/L if the SPE methods are used due to this
potential interference. We recommend that solid phase extraction
be included with SOE as an option in SW846 Method 8330.

INTRODUCTION

One of the U.S. Defense Department's most serious environmental
problems is associated with sites contaminated with residues of
secondary explosives. Contamination at these sites was chiefly
caused by manufacture of the explosives, loading of explosives
into ordnance, and disposal of off-specification or out-of-date
material. Residues from these activities contain the explosives,
manufacturing  impurities and environmental transformation prod-
ucts (1). Unlike many other organic chemicals, these compounds
are quite mobile in the soil and have resulted in serious ground-
water contamination (2-6). Plumes of contaminated groundwater,
often miles in length, have been identified at military sites
with some extending beyond installation boundaries.
A number of laboratory methods have been developed to character-
ize water samples potentially contaminated with secondary explo-
sives. At present, however, the method most often used by con-
tract laboratories conducting analyses for the Army is SW846
Method 8330  (7) . This is a reversed-phase high performance liquid
chromatographic  (RP-HPLC) method that specifies 14 target nitro-
aromatic and nitramine analytes and two protocols for water anal-
ysis. When detection limits ranging between 4 and 14 µg/L are ad-
equate for project requirements, a direct injection procedure can
be used that does not require sample preconcentration prior to
RP-HPLC determination. When lower detection limits are needed,
a protocol including a salting-out solvent extraction (SOE) pre-
concentration  step is specified  (8,9). Winslow et al.  (10,11)
proposed the use of solid phase extraction (SPE) as an alterna-
tive to SOE and reported excellent recovery and detection limits
that were very similar to those for SOE. Winslow's results were
obtained using Porapak R, a divinylbenzene n-vinylpyrrolidone
co-polymer, in the cartridge format. LeBrun et al.  (12), using
SPE in the membrane format, reported excellent recoveries of the
analytes in Method 8330 using a membrane composed of styrene-
divinylbenzene. Recently Bouvier and Oehrle (13) reported on the
use of Porapak RDX for cartridge SPE preconcentration of nitro-
aromatics and  nitramines.
Because of a number of potential advantages of SPE over SOE,
we conducted a three-way comparison of SOE, cartridge-based SPE
using Porapak R (SPE-C), and membrane-based SPE (SPE-M) using
styrene-divinylbenzene membranes  (Empore SDVB) for preconcentra-
                                     129

-------
tion of waters containing nitroaromatics and nitramines (14, 15).
This evaluation included estimating detection capability and anal-
yte recovery using fortified reagent grade water, and analyte re-
covery for a series of field-contaminated groundwater samples from
the U.S. Naval Surface Warfare Center (NSWC),  Crane, Indiana.
Overall, the results can be summarized as follows:
  (1) The three methods were comparable with respect to low-con-
centration detection capability, ranging from 0.05 to 0.30 µg/L.

  (2)  Recoveries generally exceeded 80%,  except for HMX (octahy-
dro-1,3,5,7-tetranitro-l,3,5,7-tetrazocine)  and RDX (hexahydro-
1,3,5-trinitro-l,3,5-triazine) by membrane-SPE where recoveries
were lower.
  (3)  Large interferences were found on about  half of the  ground-
water samples from the NSWC using the two SPE methods, but none
were found by SOE.
  (4)  The SPE interferences  were traced to a matrix interaction of
the SPE polymers with low pH groundwaters which apparently caused
the release of unreacted monomers or other contaminants from the
interior of the polymeric materials.
At least partly in response to the problems identified above, sev-
eral manufacturers of SPE materials sought to improve the reten-
tion of SPE materials for very polar organics  such as HMX and RDX,
and experimented with new cleaning procedures  to better remove in-
terferences from the SPE materials. As a result, Waters Corpora-
tion released a new ultra-clean SPE material for use in cartridge
SPE under the name Porapak RDX  (13), and 3M Corporation developed
a new surface modified styrene-divinylbenzene membrane which also
had been cleaned more extensively  (Empore SDB-RPS). Initial tests
at the U.S. Army Cold Regions Research and Engineering Laboratory
(CRREL) and elsewhere indicated that these materials were indeed
cleaner than the original SPE materials.

OBJECTIVE

The objective of this study was to reassess SPE for preconcentra-
tion of nitroaromatic and nitramine explosives from water, using
the newly released,  manufacturer-cleaned SPE materials. Special
attention was given to recovery of HMX and RDX, because of the low
recoveries found for these analytes with membrane SPE in the ini-
tial study. The level of contamination resulting from use of these
manufacturer-cleaned materials was assessed using both reagent
water samples and some groundwaters from the Naval Surface Warfare
Center  (NSWC).  These groundwaters included some of the low pH
waters that had revealed the contamination problem with the ini-
tial SPE materials.
                                     130

-------
 EXPERIMENTAL

Conduct of study

This work was jointly conducted by the U.S. Army Engineer Waterways
Experiment Station (WES) and the U.S. Army Cold Regions Research and
Engineering Laboratory (CRREL).

Wells were purged with a PVC bailer to a depth midway down the well
screen, allowed to recharge a minimum of 2 hours, then sampled with
Teflon bailers. Samples were collected in 1-L precleaned, amber glass
bottles and were stored and shipped at 4°C.

RP-HPLC analysis
 All water samples were analyzed by RP-HPLC. Depending on the spe-
 cific test conducted, water samples were either analyzed using
 the direct method specified in SW846 Method 8330 (7)  or were
 preconcentrated using either SOE, SPE-C or SPE-M as described
 below (14) .

Primary analysis was conducted on a 25-cm x 4.6-mm (5-µm) LC-18
column (Supelco) eluted with 1:1 methanol/water (v/v) at 1.2 mL/
min. Injection volume was 50 µL introduced using a 200-µL sample
 loop.  Concentration estimates were obtained from peak heights
 from a Waters 820 Maxima Chromatography Workstation.  The identi-
 ties  of target analytes and transformation products were con-
firmed by analysis of the samples on a 25-cm x 4.6-mm (5-µm) LC-CN
column from Supelco eluted with 1:1 methanol/water (v/v) at 1.2
mL/min (7). Quantitative results for the 2-amino and 4-amino-
 dinitrotoluenes (2ADNT and 4ADNT)  were also taken from the LC-CN
 determination since better separation of these two analytes  was
 obtained on  this column.  Retention times of the analytes  of  in-
 terest for both separations  are  reported elsewhere (17) .
 Primary analyses were conducted  using a  Waters Model  600  system
 controller, Model 610 fluid unit,  Model  717 plus Auto  Injector
set for a 50-µL injection, a 486 UV Variable Wavelength Detector
 set at 245 nm,  and a  Maxima  Chromatography Workstation. Confirma-
 tion analysis was  conducted  on a  Waters  LC Module  1 with a 486 UV
Variable Wavelength Detector (245 nm), a 717 plus Auto Injector
(50 µL) and a Maxima 820 Chromatography Workstation.

Salting-out solvent extraction/non-evaporative
preconcentration procedure
A 251.3-g portion of reagent grade sodium chloride was added to a
1-L volumetric  flask. A 770-mL sample of water was  measured with
a 1-L graduated  cylinder and  added to the flask. A  stir bar was
added and the contents stirred at maximum rpm  until the salt was
                                      131

-------
completely dissolved. A 164-mL aliquot of acetonitrile (ACN),  mea-
sured with a 250-mL graduated cylinder, was added while the solu-
tion was being stirred and stirring was continued for at least 15
minutes. If the ACN was slow in dissolving due to poor mixing, a
Pasteur pipette was used to withdraw a portion of the undissolved
ACN layer and reinject it into the vortex of the stirring aqueous
phase. After equilibrium had been established only about 5 mL of
ACN normally remained in a separate phase. The stirrer was turned
off and the phases allowed to separate for 15 minutes. If no emul-
sion was present, the ACN phase was removed and placed in a 100-mL
volumetric flask and 10 mL of fresh ACN was added to the 1-L flask.
The 1-L flask was again stirred for 15 minutes,  after which 15 min-
utes was allowed for phase separation. The ACN was removed and com-
bined with the initial extract in the 100-mL volumetric.  When emul-
sions were present, that portion of the sample was removed from
the volumetric flask with a Pasteur pipette, placed in a 20-mL
scintillation vial, and centrifuged for 5 minutes at 2000 rpm. The
supernate was also pipetted into the 100-mL volumetric flask,  the
scintillation vial was rinsed with 5 mL of acetonitrile and the
acetonitrile added to the 100-mL volumetric flask. For the first
extraction the pellet that formed after centrifugation was added
back to the 100-mL flask,  but if it formed in the second extrac-
tion, it was discarded.
In order to reduce the volume of ACN, an 84-mL aliquot of salt wa-
ter  (325 g NaCl per 1000 mL of water) was then added to the 100-mL
volumetric flask. The flask was placed on a vertical turntable and
rotated at about 60 rpm for 15 minutes. After the phases  were
allowed to separate for 15 minutes,  the ACN phase was carefully
removed using a Pasteur pipette and placed in a 10-mL graduated
cylinder. An additional 1.0-mL aliquot of ACN was then added to the
100-mL volumetric flask and the flask rotated on the turntable for
15 minutes. Again the phases were allowed to separate for 15 min-
utes and the resulting ACN phase was added to the 10-mL graduated
cylinder. The volume of the resulting extract was measured and
diluted 1:1 with reagent grade water prior to analysis.

Cartridge solid-phase extraction
Prepacked cartridges of Porapak RDX  (Sep-Pak, 6 cc,  500 mg) were
obtained from Waters Corporation. The cartridges were cleaned by
placing them on a Visiprep Solid-Phase Extraction Manifold  (Supel-
co) and passing 15 mL of acetonitrile through each using gravity
flow. The acetonitrile was then flushed from the cartridges using
30 mL of reagent grade water. Care was taken to ensure that the
cartridges were never allowed to dry after the initial cleaning.
A connector was placed on the top of each cartridge and fitted with
a length of 1/8-in.-diameter Teflon tubing. The other end of the
                                     132

-------
tubing was placed  in  a  1-L  fleaker  containing 500 mL of sample.
The vacuum was turned on  and  the  flow rate through each cartridge
set at about 10 mL/min. If  the  flow rate declined significantly
due to partial plugging from  suspended material, it was readjust-
ed. After the sample  had  been extracted, the top plug containing
the fitted tubing  was removed from  each cartridge and 10 mL of
reagent grade water was passed  through the cartridge using gravi-
ty flow unless the cartridges were  sufficiently plugged to re-
quire vacuum. A 5-mL  aliquot  of acetonitrile was used to elute
retained analytes  from  the  cartridges under gravity flow. The
volume of the recovered ACN was measured and diluted 1:1 with re-
agent grade water.
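For orientation, the nominal enrichment implied by this cartridge procedure is simple
arithmetic; the sketch below is an illustrative calculation only, assuming the full 5-mL
eluate is recovered and diluted 1:1 as described:

    # Nominal preconcentration factor for the cartridge SPE procedure above.
    sample_volume_mL = 500.0      # groundwater sample extracted
    eluate_volume_mL = 5.0        # acetonitrile used to elute the cartridge
    final_volume_mL = eluate_volume_mL * 2.0   # 1:1 dilution with reagent grade water
    print(sample_volume_mL / final_volume_mL)  # 50.0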

Membrane solid-phase  extraction
Empore styrene-divinyl  benzene  membranes (47 mm) were obtained
from 3M Corporation.  The  membranes  were designated SDB-RPS and
were not commercially available at  the time the study was con-
ducted. The styrene-divinyl benzene used in these membranes had
been modified to provide  extra  retention for polar organics such
as HMX  (16) . These membranes  were precleaned by centering on a
47-mm vacuum filter apparatus and several milliliters of aceto-
nitrile added to swell  the  membrane before the reservoir was
clamped in place.  A 15-mL aliquot of ACN was then added and
allowed to soak into  the  membrane for 3 minutes. The vacuum was
then turned on and most (but  not  all) of the solvent pulled
through the membrane. A 30-mL aliquot of reagent grade water was
then added and the vacuum resumed.  Just before the last of this
water was pulled through  the  membrane, the vacuum was removed,
the reservoir filled  with a 500-mL  sample,  and the vacuum re-
sumed. This sample extraction took  from 5 minutes to an hour
depending on the amount of  suspended matter present. Once the
water was eluted,  air was drawn through the membrane for 1 minute
to remove excess water. These extractions were conducted six at a
time using an Empore  extraction manifold (3M Corporation). Vials
(40 mL) were placed below the outlets of the six membranes, a 5-
mL aliquot of ACN  was added to  each reservoir, the acetonitrile
was allowed to soak into  the  membrane for 3 minutes, and then the
vacuum was applied to pull  the  acetonitrile through the membranes
into the vials. Each  resulting  extract was removed with a Pasteur
pipette, the volume measured  in a 10-mL graduated cylinder, and
the extract was diluted 1:1 with  reagent grade water prior to
analysis.

Preparation of analytical standards
All standards were prepared from  Standard Analytical Reference
Materials  (SARMs)  obtained  from the U.S. Army Environmental Cen-
ter, Aberdeen Proving Ground, Maryland. Individual stock stan-
dards were prepared in  HPLC grade acetonitrile  (Baker). Combined
                                      133

-------
working standards were in acetonitrile and were diluted 1:1 with
Milli Q Type I water  (Millipore Corp.).


RESULTS AND DISCUSSION
Determination of retention capacity of the
SDB-RPS membrane for HMX and RDX
The retention of HMX and RDX by the SDB-RPS membranes was tested
by extracting a 2-L aliquot of reagent grade water that had been
spiked with 100 µg/L of HMX and RDX using aqueous stock standards.
Samples of the water passing through the membrane were collected
every 250 mL and analyzed by RP-HPLC using the direct analysis
protocol. Results indicated that no breakthrough for either anal-
yte occurred until more than 1 L of water had been extracted  (17).
Thus it appears that the SDB-RPS membranes have an increased re-
tention capacity for the very polar nitramines relative to that
observed with the initial SDB membranes used in an earlier study
(14,15).

Comparison of results using groundwater samples
from Naval Surface Warfare Center
All 33 groundwater samples from NSWC were analyzed by the
direct RP-HPLC method (without preconcentration)  and by RP-HPLC
after preconcentration using salting-out solvent extraction (SOE),
cartridge solid phase extraction (SPE-C),  and membrane solid phase
extraction (SPE-M) (17). The following target analytes were de-
tected in these samples (the number of samples where the analytes
were detected in at least one of the four analyses are given in
brackets): HMX [19],  RDX [22], TNB [4], DNB [5],  3,5-DNA [6],  TNT
[11], 2,4-DNT [2], 4ADNT [15] and 2ADNT [15]. Concentrations meas-
ured for HMX and RDX in these groundwater samples were generally
much higher than for the nitroaromatics and aminonitroaromatics.
While results from the direct method are certainly not error-free,
they are subject to far fewer sources of error than methods em-
ploying a preconcentration step. For that reason, we treated the
results from the direct analysis as "true values" for purposes of
comparison with results from the three preconcentration tech-
niques. Table 1 summarizes results for samples where analytes were
detected by the direct RP-HPLC method. Of the 33 groundwater sam-
ples analyzed,  11 had detectable HMX using direct analysis, with
concentrations ranging from 25 to 325 µg/L. Likewise RDX was de-
tected in 13 groundwaters using the direct method, with concen-
trations ranging from 13 to 608 µg/L; TNT in four samples with
concentrations ranging from 14 to 180 µg/L; the 4ADNT and 2ADNT
in five samples with concentrations ranging from 9 to 59 µg/L
and 7 to 65 µg/L, respectively; and TNB in two samples at 5 and 8
                                     134

-------
        Table 1. Ratio of concentrations obtained for the various
        preconcentration methods relative to that from the direct
        method.

                    Concentration-preconc./Concentration-direct
        Analyte    n        SOE              SPE-C             SPE-M
        HMX       11   0.870 ± 0.188    0.957 ± 0.147     0.833 ± 0.129
        RDX       13   0.800 ± 0.184    0.975 ± 0.192     0.882 ± 0.158
        TNT        4   1.010 ± 0.252    1.143 ± 0.331     1.015 ± 0.244
        4ADNT      5   0.909 ± 0.128    0.996 ± 0.106     0.925 ± 0.095
        2ADNT      5   0.865 ± 0.106    1.021 ± 0.066*    0.871 ± 0.057

        * Value significantly different at the 95% confidence
          level.
        n = the number of ratios in each mean.

µg/L, respectively. For a given analyte, the ratio of the concen-
tration  obtained  for  each preconcentration technique relative  to
that for  the direct method was computed and the mean and standard
deviation obtained  (Table 1). Mean  ratios  ranged  from  0.800  for
RDX using the SOE method to  1.143 for TNT  using the SPE-C  method.
Only for  2ADNT was a  significant difference among methods  detected
(by ANOVA)  at the 95% confidence level  (SPE-C was different  from
SOE and  SPE-M, which  were not significantly different  from each
other).  The results of this  analysis indicate that, for relatively
high concentrations,  all three preconcentration techniques pro-
duced  concentrations  similar to  that from  the direct analysis
method,  with analyte  recoveries  in  all  cases at or above  80%.
These  results demonstrate a  marked  improvement in the  recovery of
HMX and  RDX using the SDB-RPS membrane  relative to that observed
in our original  study where  the  SDB membrane was  used  (14,15).
This improvement  is particularly striking  for HMX, where  recover-
ies  improved from about  49%  to  83%, and appears to be  due  to an
improvement in retention for polar  compounds resulting from  sul-
fonation of the  styrene  divinylbenzene.  Recovery  of HMX and  RDX
using  the Porapak RDX cartridge  remains excellent at  96%  and 98%,
respectively.
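A minimal sketch of the calculation summarized in Table 1 (not the authors' statistics
code; the paired concentrations shown are hypothetical placeholders) follows:

    from statistics import mean, stdev

    # For one analyte: ratio of each preconcentrated result to the paired
    # direct-injection result, reported as mean +/- standard deviation.
    def ratio_summary(preconcentrated, direct):
        ratios = [p / d for p, d in zip(preconcentrated, direct)]
        return mean(ratios), stdev(ratios)

    # Hypothetical paired concentrations (ug/L), for illustration only.
    m, s = ratio_summary([21.0, 250.0, 90.0], [25.0, 280.0, 95.0])
    print(f"{m:.3f} +/- {s:.3f}")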
Since  the value  of  a preconcentration  technique  lies  in the  fact
that  it  allows determination at  concentrations below  those that
can  be determined directly,  it  is important to  evaluate its  per-
formance when  concentrations are below the detection  limits  of
the  direct method.  Since the SOE method is the procedure currently
recommended in SW846 Method 8330, results  for  SPE-C and SPE-M were
compared with  those obtained for SOE for samples  with analyte con-
centrations below the detection limits of  the direct  method.  In
Fibres  1  2  and 3  the concentrations  of HMX,  RDX and TNT deter-
     -,     /„ OPE-C and SPE-M are plotted against  the concentrations
Staled usifg SOE  In the absence of  bias the plots should have  a
slope of 1-00  and an  intercept of zero. Regression analyses were
                                      135

-------
              Figure 1.  Plot of HMX concentrations de-
              termined for groundwater samples using SOE
              vs those using SPE-C and SPE-M (µg/L).
               Figure 2.  Plot of RDX concentrations de-
               termined for groundwater samples using SOE
               vs those using SPE-C and SPE-M (µg/L).
Regression analyses were conducted for the SPE-C vs SOE and SPE-M
vs SOE individually for each analyte, and the resulting slopes,
intercepts and correlation coefficients squared are presented in
Table 2.
                                        136

-------
     Figure 3.  Plot of TNT concentrations de-
     termined for groundwater samples using
     SOE vs those using SPE-C and SPE-M (µg/L).
Table 2. Results of regression analyses of SPE-C vs. SOE and
SPE-M vs. SOE for low concentration* determinations.

               SPE-C vs. SOE                SPE-M vs. SOE
Analyte      m†       b**      r²††        m        b        r²
HMX        1.083    0.125    0.999       0.972    0.113    0.999
RDX        1.255   -1.044    0.987       1.160   -0.850    0.980
TNT        1.264   -0.052    0.933       1.325   -0.085    0.972
4ADNT      1.400   -0.448    0.994       1.208   -0.360    0.992
2ADNT      1.270    0.110    0.981       1.484    0.875    0.974
3,5-DNA    0.972    0.007    0.996       0.930    0.014    0.996

 * Low concentration—concentrations below those detectable using
   the direct method.
 † m = slope.
** b = intercept.
†† r² = correlation coefficient squared.
                                         137

-------
Figure 4.  LC-18 RP-HPLC chromatograms
for sample 30 preconcentrated using SOE,
SPE-C and SPE-M using the initial, less
clean SPE materials.
Similarly, regression analyses
were conducted for 4ADNT, 2ADNT
and 3,5-DNA (Table 2).
Slopes for these 12 regres-
sion analyses range from
0.930 to 1.400, with  inter-
cepts ranging from -1.044
to +0.875. Values for the
square of the correlation
coefficient range between
0.933 and 0.999. The  re-
sults from these regression
analyses indicate that  the
two SPE methods are pro-
ducing data which are very
similar to those obtained
from SOE, even at concen-
trations below 1 µg/L. The
TNT data for concentrations
below 0.5 µg/L are particu-
larly striking in this  re-
spect (Figure 3).
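A minimal sketch of the least-squares fit behind Table 2 (assumed, not taken from the
paper; the concentration pairs are hypothetical placeholders) is:

    # Slope, intercept and r-squared for SPE results regressed on SOE results.
    def fit(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        syy = sum((yi - my) ** 2 for yi in y)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        slope = sxy / sxx
        intercept = my - slope * mx
        r2 = (sxy * sxy) / (sxx * syy)
        return slope, intercept, r2

    # Hypothetical low-level concentrations (ug/L), for illustration only.
    soe = [0.2, 0.5, 1.1, 3.0, 6.5]
    spe_c = [0.3, 0.6, 1.3, 3.4, 7.1]
    print(fit(soe, spe_c))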
Examination of chromatograms for groundwater samples
In our initial comparison
of SOE,  SPE-C and SPE-M, we
found a series of ground-
water samples that caused
the solid phase materials
to release high concentra-
tions of interferences.
This is illustrated for the
chromatograms obtained  for
sample 20641 in 1992  (14)
using SOE, Porapak R  (SPE-
C) and Empore SDB (SPE-M)
(Figure 4). Chromatograms
for this same sample  ob-
tained using the new, manu-
facturer-cleaned Porapak
RDX and SDB-RPS are shown
in Figure 5. Clearly  there
is a vast decrease in in-
terferences released  from
the two solid phases. There
remains,  however, a small
                                     138

-------
             Figure 5. LC-18 RP-HPLC chromatograms for sam-
             ple 30 preconcentrated using SOE, SPE-C and
             SPE-M using new manufacturer-cleaned SPE mate-
             rials showing small RDX interference for SPE-C
             and SPE-M.
interference peak at the retention time  for RDX  in the two chro-
matograms for the SPE methods that is not observed for the SOE
(Figure  5)  and does not confirm as RDX using  the LC-CN confirma-
tion  column (Figure 6).  This peak was observed in the LC-18 chro-
matograms for both the SPE-C and SPE-M for the same six well
waters that resulted in release of interferences in the original
study. Observation of these peaks would require that a confirma-
tion analysis be conducted, and would result in quantitative RDX
estimates ranging from 0.2 to 0.6 µg/L if careful scrutiny of an
LC-CN confirmation analysis had not been done. Thus when SPE pre-
                                     139

-------
             Figure 6.  LC-CN RP-HPLC chromatograms for sam-
             ple 30 preconcentrated using SOE, SPE-C and
             SPE-M using new manufacturer-cleaned SPE mate-
             rials showing small RDX interference for SPE-C
             and SPE-M.

concentration  is  used,  the detection limit for RDX should be
raised to about 0.6 µg/L to eliminate the chance for misidentifi-
cation due  to  this small  interference peak.

CONCLUSIONS AND RECOMMENDATIONS

Solid phase extraction, in both  the cartridge  (SPE-C) and membrane
(SPE-M) formats,  was  evaluated for  its ability to preconcentrate
nitroaromatics, nitramines and aminodinitroaromatics from water
samples prior to analysis by RP-HPLC (SW846 Method 8330). A series
of 33 groundwater samples from the  Naval Surface Warfare Center
was used for comparison.  New,  manufacturer-cleaned solid phase
materials  (Porapak RDX for SPE-C and SDB-RPS for SPE-M) were  com-
pared to salting-out  solvent extraction with respect to their
recovery of target analytes  and  their production of chromato-
graphic interferences.

Based on these results, we recommend that solid phase extraction,
in either the  cartridge or membrane format,  be included as an
                                      140

-------
option along with salting-out solvent extraction  for  the  precon-
centration step in SW846 Method 8330  (7). Comparison  of the re-
sults of  this study and earlier work  (14,15) demonstrates the ne-
cessity of using carefully cleaned solid phases for this  purpose
or interferences will be released for certain water matrices.

ACKNOWLEDGMENTS

The authors gratefully acknowledge Dr. C.L. Grant, Professor Emer-
itus, Chemistry Department, University of New Hampshire,  and Tommy
E. Myers,  Environmental Engineer, WES, for useful  comments on this
manuscript.  In addition, the authors acknowledge Don  E. Parker and
B. Lynn Escalon of AScI Corporation, McLean, Virginia, for con-
ducting a number of the analyses reported here, Roy Wade  (WES) for
collection of all groundwater samples, and Linda  Stevenson (WES)
for sample management. Funding was provided by the U.S. Army Envi-
ronmental Center, Aberdeen Proving Ground, Maryland,  Martin H.
Stutz, Project Monitor. This publication reflects  the personal
views of  the authors and does not suggest or reflect  the  policy,
practices,  programs, or doctrine of the U.S. Army  or  Government  of
the United States.


LITERATURE CITED
  1. Walsh, M.E., T.F. Jenkins, P.H. Miyares, P.S. Schnitker, J.W.
    Elwell and M.H. Stutz (1993).  Evaluation of SW846 method 8330
    for characterization of sites  contaminated with residues of high
    explosives. USA Cold Regions Research and Engineering Laboratory,
    CRREL Report 93-5,  Hanover, NH.
  2. Kayser, E.G. and N.E. Burlinson (1982) Migration of explosives in
    soil.  Naval Surface Weapons Center, Report TR 82-566,  White Oak,
    MD.
  3. Pugh,  D.L.  (1982) Milan Army Ammunition Plant contamination sur-
    vey.  U.S. Army Toxic and Hazardous Materials Agency, Report
    DRXTH-FR-8213, Aberdeen Proving Ground, MD.
  4. Rosenblatt, D.H. (1986)  Contaminated soil cleanup objectives for
    Cornhusker Army Ammunition Plant. U.S. Army Medical Bioengineer-
    ing Research and Development Laboratory, Technical Report 8603,
    Fort Detrick, MD.
  5. Maskarinec, M.P., D.L. Manning and  R.W. Harvey  (1986)  Application
    of solid sorbent collection techniques and high-performance liq-
    uid chromatography with electrochemical detection to the analysis
    of explosives on water samples. Oak Ridge National Laboratory,
    Report TM-10190, Oak Ridge,  TN.
  6. Spaulding, R.F. and J.W. Fulton (1988) Groundwater munition resi-
    dues and nitrate near Grand Island, Nebraska, U.S.A. Journal of
    Contaminant Hydrology, 2,  139-153.
                                        141

-------
 7. EPA (1994) Nitroaromatics and nitramines by HPLC. Second Update,
   SW846 Method  8330.

 8. Miyares,  P.H. and T.F. Jenkins  (1990) Salting-out solvent  ex-
   traction  method  for  determining low  levels of nitroaromatics and
   nitramines in water. USA Cold Regions Research and Engineering
   Laboratory, Special  Report  90-30, Hanover, NH.

  9. Miyares, P.H. and T.F. Jenkins (1991) Improved salting-out
   extraction-preconcentration method for the determination of ni-
   troaromatics  and nitramines in water. USA Cold Regions Research
   and Engineering  Laboratory, Special  Report 91-18, Hanover, NH.

10. Winslow, M.G., B.A. Weichert and R.D. Baker (1991) Determination
   of low-level  explosives residues in  water by HPLC: Solid phase
   extraction vs. salting-out  extraction. Proceedings of the  7th
   Annual Waste  Testing and Quality Assurance Symposium, 8-12 July
   1991.

11. Winslow, M.G., B.A. Weichert, R.D. Baker and P.P. Dumas (1992) A
   reliable  and  cost-effective method for the determination of ex-
   plosives  compounds in environmental  water samples. Proceedings
   of the 8th Annual Waste Testing and  Quality Assurance Symposium,
   13-17 July 1992.

12.LeBrun, G., P. Rethwill and J. Matteson  (1993) Determination of
   explosives in surface and groundwater. Environmental Lab,  12-15.

13.Bouvier,  E.S.P.  and  S.A. Oehrle  (1995) Analysis and identifica-
   tion of nitroaromatic and nitramine  explosives in waters using
   HPLC and photodiode-array detection. LC-GC, 13(2), 120-130.

14. Jenkins, T.F., P.H. Miyares, K.F. Myers, E.F. McCormick and A.B.
   Strong  (1992) Comparison of cartridge and membrane solid-phase
   extraction with  salting-out solvent  extraction for preconcentra-
   tion of nitroaromatic and nitramine  explosives from water. USA
   Cold Regions  Research and Engineering Laboratory, Special  Report
   92-25, Hanover,  NH.

15. Jenkins, T.F., P.H. Miyares, K.F. Myers, E.F. McCormick and A.B.
   Strong  (1994) Comparison of solid phase extraction with salting-
   out solvent extraction for  preconcentration of nitroaromatic and
   nitramine explosives from water. Analytica Chimica Acta, 289,
   69-78.

16. Markell, C., 3M Corporation, personal communication.

17. Jenkins,  T.F., P.G.  Thorne, K.F. Myers, E.F. McCormick, D.E.
   Parker and B.L.  Escalon  (in press) Evaluation of  the new clean
   solid phases  for extraction of nitroaromatics and nitramines
   from water. USA  Cold Regions Research and Engineering Laborato-
   ry, Special Report,  Hanover, NH.
                                       142

-------
                                                                                      23
 ENVIRONMENTAL SAMPLE EXTRACTION USING GLASS-FIBER
                           EXTRACTION DISKS

Sean T. Randall, Sample Prep Product Manager/Environmental Applications Chemist,
Chris Linton, Senior Research Chemist, Mike Feeney, Technical Applications Manager,
Neil Mosesman, Technical Marketing Director, Restek Corporation 110 Benner Circle
Bellefonte, Pennsylvania 16823

ABSTRACT

When performing environmental sample analysis, the extraction procedures are often very
time consuming, expensive and possibly dangerous. A new extraction technology that
would reduce those aspects would be extremely worthwhile to an environmental
laboratory. Fiber membranes demonstrate that capability with respect to semi-volatile
extraction methods for drinking water, wastewater, and groundwater matrices. New
technologies, such as Teflon-based membranes, have shown improvements with clean
sample matrices, but are still quite expensive and time consuming.

The fiber membranes look promising as an alternative to current technology due to their
larger pore size and depth filter capabilities. This could solve one of the biggest downfalls
of SPE technology: clogging due to dirty samples. The fiber membranes would lower the
amount of solvent used, allow less toxic solvents, and increase sample capacity due to
shorter extraction times. The biggest potential lies in providing a sample extract that
does not have the interferences that usually accompany extracts obtained using current
liquid-liquid technology.

Experimental results were generated using the SIMDisk-GF C18 solid phase extraction
disk from Restek Corporation. The method tested was EPA Method 525.1 for semi-
volatile analytes in drinking water.

INTRODUCTION

The US Environmental Protection Agency has recently adopted a streamlined tier system
for promulgating new methods.  This allows more rapid approval of methods that
incorporate new innovative technologies. Recently, several new sample extraction
methods have been approved which overcome many of the shortcomings of classical
liquid-liquid techniques.  Liquid-liquid extractions are time consuming, use expensive
glassware, and require large amounts of solvents. Solid phase extraction has been
promoted  as an alternative to liquid-liquid extraction.
                                           143

-------
Solid phase extraction has been used for several years, but due to the shortcomings of the
SPE tubes or cartridges for extraction of large volumes of water, it has not gained acceptance
for environmental applications. More recently, solid phase extraction disks have been
promoted for the extraction of semi-volatile pollutants from aqueous matrices. The most
popular extraction disk is a Teflon membrane that has been impregnated with C18
bonded silica particles. These disks allow more rapid extraction of larger sample volumes
while maintaining good recoveries for a wide range of non-polar and moderately polar
compounds. However, clogging of these membranes from particulate matter in the sample
can significantly reduce flow through the disk, which greatly increases extraction time.

Restek now offers a new hydrophobic glass fiber extraction disk that is impregnated with
bonded C18 silica particles.  Unlike the Teflon membrane extraction disks that rely
primarily on surface filtration, the glass fiber disk allows extractions to take place deep in
the filter due to its thicker, more open design. This results in less clogging and faster flow
rates even for samples with high particulate matter. Because of the larger pore size,
SIMDisk™-GF disks run at extraction flow rates of 125-150 mL per minute, compared to
only 80-100 mL per minute for Teflon disks with typical water samples. SIMDisk™-GF disks are
more rigid and easier to handle than thin Teflon filter extraction disks.  And, most
importantly, SIMDisk™-GF costs less than Teflon disks, resulting in a savings every time
your lab does an extraction.

The EPA has approved the use of other extraction disks as long as they pass the
QC criteria and are chemically the same.  The only requirement to prove equivalency is to
show that the recoveries of the compounds specified in the method are within the established
limits. Since recovery data is required with any disk, whether specified in the method
or not, there is really no extra work involved. The SIMDisk™-GF and the Teflon disk
both contain C18 bonded silica; therefore they are considered chemically similar.

EPA Method 525.1 is used for the determination of organic compounds in drinking water
by liquid-solid extraction and capillary column gas chromatography/mass spectrometry.  It
is applicable to a wide range of organic compounds that are efficiently partitioned from the
water sample onto a C18 organic phase chemically bonded to a solid silica matrix in a
cartridge or disk. [EPA methods are available from NTIS (National Technical Information
Service), U.S. Department of Commerce, Springfield, VA 22161, 703-487-4650.]

PROCEDURE

SAMPLE PRETREATMENT:  Allow 1 liter of deionized water to equilibrate to room
temperature in a narrow-mouth amber glass bottle. Adjust sample pH to less than 2 with
6M hydrochloric acid. Add 5 ml of methanol and mix thoroughly. Spike internal
standards. For QA/QC samples, spike with 2 ug of each analyte (8 ug of
pentachlorophenol) and 5 ug of each internal standard.
                                           144

-------
APPARATUS ASSEMBLY: Assemble the 47mm apparatus. Place the SIMDisk-GF disk
in the Diskcover-47 filter support, WRINKLED SIDE UP.

DISK PRECLEANING:  Add 5 ml of methylene chloride to the top surface of the disk and
immediately draw through under vacuum at 15 in. Hg (50 kPa). Continue to draw vacuum
at 15 in. Hg (50 kPa) for 5 minutes to remove all solvent.

DISK CONDITIONING: Add 5 ml methanol to the top surface of the disk and
immediately apply low vacuum (1-2 in. Hg, 3-7 kPa). Draw through until the top surface
of the methanol is just above the disk.  DO NOT ALLOW ANY AIR TO PASS
THROUGH THE DISK OR TO REACH THE TOP SURFACE OF THE DISK.
Immediately add 5 ml of DI water to the disk and draw through at low vacuum until the
water almost reaches the top surface of the disk.  NOTE: It is preferable to leave extra
liquid above the disk rather than allow any air to contact the surface of the disk.

SAMPLE ADDITION: Add the sample onto the disk, adding it directly to the film of water
left on the disk from the conditioning step. Adjust the vacuum to 10 in. Hg (35 kPa) for a
flow rate of approximately 100 ml per minute until the entire sample has been processed.

DISK DRYING: After the sample has been processed, draw air through the disk under
vacuum at approximately 15 in. Hg (50 kPa) for approximately 5 minutes.

ANALYTE ELUTION: Release system vacuum. Insert the sample collection rack and
collection vessels. Reassemble the apparatus. Add 5 ml methylene chloride directly to the
sample bottle  and gently swirl to rinse all inner surfaces of the bottle. Allow the  sample
bottle to stand for 1 to 2 minutes, and transfer the methylene chloride to the disk using a
glass pipet and rinsing the sides of the reservoir in the process. Draw the solvent through
the disk at 5 in Hg (17 kPa). Repeat the bottle rinse and disk elution twice with fresh
aliquots of methylene chloride, combining all eluates in the collection tube.

FINAL ANALYSIS:  Remove water from sample eluate by passing through approximately
3 grams of anhydrous sodium sulfate.  Concentrate to 1 mL, and analyze 1 µL by GC/MS.
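
As a reading aid (not part of the method text), the arithmetic behind the final extract
concentration is summarized in the short Python sketch below; the spike level and volumes
are taken from the pretreatment and final-analysis steps above.

    # Sketch of the concentration arithmetic for the disk extraction procedure:
    # a 1 L water sample is reduced to a 1 mL extract, so analyte mass is
    # unchanged while concentration increases by the volume ratio.
    sample_volume_l = 1.0          # liters of water extracted
    spike_conc_ug_per_l = 2.0      # QA/QC spike level (8 ug/L for pentachlorophenol)
    final_extract_ml = 1.0         # extract volume after concentration

    analyte_mass_ug = spike_conc_ug_per_l * sample_volume_l       # 2 ug retained on the disk
    extract_conc_ug_per_ml = analyte_mass_ug / final_extract_ml   # 2 ug/mL in the final extract
    concentration_factor = sample_volume_l * 1000.0 / final_extract_ml

    print(f"{analyte_mass_ug:.1f} ug extracted -> {extract_conc_ug_per_ml:.1f} ug/mL "
          f"({concentration_factor:.0f}x concentration)")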
                                            145

-------
    Accuracy and precision data from four determinations of Method 525.1 analytes at 2 ug/L
    with the Liquid-Solid SIMDisk-GF 47 mm extraction disk and the Finnigan MAT ITS40 Ion Trap MS

Compound                                Target Conc.  Mean    Std. Dev.  %RSD   Accuracy        % REC. in
                                        (ug/L)        (ug/L)  (ug/L)            (% of Target)   Method
Acenaphthene-d10                        3             -       -          -      -               -
Phenanthrene-d10                        5             -       -          -      -               -
Chrysene-d12                            5             -       -          -      -               -
Hexachlorocyclopentadiene               2             1.6     0.03       2.1    80              55
Dimethylphthalate                       2             1.8     0.17       9.4    90              95
Acenaphthylene                          2             2.0     0.06       3.1    100             95
2-Chlorobiphenyl                        2             2.0     0.05       2.4    100             95
Diethylphthalate                        2             2.1     0.07       3.3    105             100
Fluorene                                2             2.1     0.06       3.1    105             110
2,3-Dichlorobiphenyl                    2             2.0     0.07       3.2    100             115
Hexachlorobenzene                       2             2.0     0.06       2.8    100             85
Simazine                                2             1.9     0.19       10.2   95              105
Atrazine                                2             2.1     0.16       7.5    105             110
Pentachlorophenol                       8             9.7     0.79       8.2    121             97
gamma-BHC                               2             2.1     0.04       2.2    105             105
Phenanthrene                            2             2.2     0.04       1.9    110             120
Anthracene                              2             2.0     0.09       4.6    100             85
2,4,5-Trichlorobiphenyl                 2             1.9     0.04       1.9    95              85
Alachlor                                2             2.1     0.04       1.7    105             -
Heptachlor                              2             1.9     0.04       2.2    95              110
Di-n-butylphthalate                     2             2.5     0.24       9.5    125             110
2,2',4,4'-Tetrachlorobiphenyl           2             1.9     0.02       1.2    95              75
Aldrin                                  2             1.6     0.20       12.7   80              80
Heptachlor epoxide                      2             2.1     0.05       2.5    105             115
2,2',3',4,6-Pentachlorobiphenyl         2             1.9     0.05       2.4    95              95
gamma-Chlordane                         2             1.9     0.08       4.1    95              110
Pyrene                                  2             2.0     0.04       2.0    100             95
alpha-Chlordane                         2             1.9     0.05       2.8    95              100
trans-Nonachlor                         2             1.9     0.07       3.7    95              135
2,2',4,4',5,6'-Hexachlorobiphenyl       2             1.7     0.14       8.0    85              80
Endrin                                  2             2.2     0.05       2.2    110             90
Butylbenzylphthalate                    2             2.2     0.12       5.4    110             100
Bis(2-ethylhexyl)adipate                2             1.8     0.21       11.9   90              80
2,2',3,3',4,4',6-Heptachlorobiphenyl    2             1.8     0.04       1.9    90              70
Methoxychlor                            2             2.1     0.05       2.5    105             90
2,2',3,3',4,5',6,6'-Octachlorobiphenyl  2             1.7     0.02       1.2    85              90
Benzo(a)anthracene                      2             1.9     0.02       0.9    95              90
Chrysene                                2             1.9     0.02       0.9    95              110
Bis(2-ethylhexyl)phthalate              2             2.2     0.04       2.0    110             95
Benzo(b)fluoranthene                    2             2.0     0.09       4.2    100             -
Benzo(k)fluoranthene                    2             2.0     0.08       4.2    100             105
Benzo(a)pyrene                          2             2.0     0.10       5.1    100             40
Perylene-d12                            5             4.7     0.34       7.3    94              100
Indeno(1,2,3-cd)pyrene                  2             1.9     0.22       11.6   95              20
Dibenzo(a,h)anthracene                  2             1.8     0.20       11.3   90              15
Benzo(g,h,i)perylene                    2             1.8     0.18       9.9    90              35
                                                146

-------
SUMMARY

       The results show that the recoveries of all compounds are well within the limits
specified in the method. Recoveries ranged from 80-125%, which is well within the
range of 70% to 130% specified in the method.  The RSDs were also well below the 30%
limit specified in the method.  Even the heavier polycyclic aromatic hydrocarbons, which
typically show lower recoveries, are easily recovered with the SIMDisk™-GF. Although
these results demonstrate the equivalency of the SIMDisk™-GF to other extraction disks,
Restek suggests that each laboratory generate data using their own extraction techniques
and equipment.
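
The acceptance checks cited above reduce to simple statistics on the replicate determinations;
the following Python sketch (with hypothetical replicate values) shows one way to compute the
mean, %RSD, and recovery and to flag results against the 70-130% recovery and 30% RSD limits
quoted from the method.

    from statistics import mean, stdev

    def qc_check(replicates_ug_per_l, target_ug_per_l,
                 rec_limits=(70.0, 130.0), max_rsd=30.0):
        """Mean, %RSD, recovery (% of target), and pass/fail for one analyte,
        using the Method 525.1 limits quoted in the text above."""
        m = mean(replicates_ug_per_l)
        rsd = 100.0 * stdev(replicates_ug_per_l) / m
        recovery = 100.0 * m / target_ug_per_l
        passed = rec_limits[0] <= recovery <= rec_limits[1] and rsd <= max_rsd
        return m, rsd, recovery, passed

    # Hypothetical four determinations of a 2 ug/L spike
    m, rsd, rec, ok = qc_check([1.9, 2.0, 2.1, 2.0], target_ug_per_l=2.0)
    print(f"mean = {m:.2f} ug/L, %RSD = {rsd:.1f}, recovery = {rec:.0f}%, pass = {ok}")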

When using the SIMDisk™-GF, a liter of water can be processed in about 10 minutes,
compared to at least 30 minutes using Teflon disks. A package of 20 SIMDisk™-GF
extraction disks costs about $35.00 less than 20 Teflon disks. Faster extraction times and
lower disk prices equate to improved efficiency and lower costs for laboratories processing
samples for EPA Method 525.1.
                                              147

-------
24
           CAPACITY FACTORS IN HIGH-EFFICIENCY GPC CLEANUP
   Kevin P. Kelly. Ph.D., David L. Stalling, Ph.D., Nancy L.  Schwartz;
   Laboratory Automation, Incorporated (a subsidiary of OI Analytical),
   555 Vandiver Drive, Columbia, Missouri  65202

   ABSTRACT

   High efficiency gel permeation chromatography (GPC) cleanup columns increase sample
   throughput and reduce hazardous waste generation by employing smaller gel particle sizes
   to obtain  more chromatographic  efficiency.  They are permitted as  substitutes  for
   referenced columns in EPA methods (e.g. SW-846 Method 3640A) because the packing's
   chemical nature is essentially the same as columns specified in methods. Previous work1
   showed that relative analyte retention times are the same for the two column types packed
   in 100% methylene chloride, indicating that they are interchangeable for this application.

   To some extent advantages of high efficiency columns have been obtained by sacrificing
   sample matrix handling capacity. In other words, commercially available high efficiency
   column sets pass quality control specifications provided in the method, but at any given
   matrix  loading level the  degree of cleanup obtained using the high efficiency technique
   may not be as great as the cleanup obtained using the traditional (low pressure) columns
   specified in the EPA methods. For this reason traditional GPC cleanup columns are still
   recommended for processing samples that are high in lipid, such as tissue extracts.

   In this  work degree of cleanup was  studied for three column types*:  Envirosep-ABC™
   high efficiency columns, EnviroBeads™  low pressure column sized per Method 3640A,
   and a smaller version of the EnviroBeads column.  In addition to the 100% methylene chloride
   eluant specified in the EPA method, a non-chlorinated alternative, an ethyl acetate and
   cyclopentane (CYP) mixture, was explored.  Four types of matrix material were
   investigated:  diesel fuel, corn oil, potting soil extract, and spinach  extract.

   Cleanup efficiency was rated by measuring the amount of matrix material remaining in
   a collected fraction when a calibrated column was loaded with various levels of matrix
   dissolved in  mobile phase.   Calibration was performed with the test mixture cited in
   Method 3640A. Chromatograms were obtained at more than one flow rate to determine
   how much flow could be increased without visually obvious loss of resolution.  In all
   cases the traditional column provides a higher degree of cleanup than a high efficiency
   column set for the same matrix loading level, or the same degree of cleanup at a higher
   matrix loading level.  The results can provide guidance for choosing a column type that
   is appropriate to the user's cleanup goals.
      Envirosep-ABC is a trademark of the Phenomenex Corporation. EnviroBeads is a trademark of Laboratory
      Automation, Inc.
                                           148

-------
INTRODUCTION

GPC cleanup of organic extracts protects data quality and reduces analytical equipment
maintenance requirements2 by removing high molecular weight matrix coextractives.
The GPC separation mechanism is primarily physical in nature; thus the cleanup is
applicable to all organic analytes, including those that may be captured or destroyed
during adsorptive cleanup techniques (alumina, silica gel, Florisil® columns*).

Current EPA methodology for GPC cleanup cites a 25 mm x 700 mm glass
barreled column packed with 70 grams of S-X3 resin beads (a styrene and divinylbenzene
copolymer)  in 100% DCM. Use of this column (referred to as Column A) to clean up
extracts for GC/MS semivolatiles analysis requires a processing time of about 50 minutes
and the use  of 250 mL of chlorinated solvent for each sample. Smaller particle sizes of
copolymer beads provide greater chromatographic efficiency for faster sample throughput
and reduced solvent consumption; however, they are usually packed in smaller columns
due to cost factors.  Since smaller columns  overload at lower matrix coextractive levels,
careful comparison of cleanup requirements to column performance factors is advisable.
Two types of smaller columns were tested: 1) a low pressure column in a glass barrel
(Column B) and, 2) a high-efficiency column set packed in steel columns  (Column C).

In this study traditional columns were compared to smaller ones with presumed lower
matrix handling capacity to assess completeness of  cleanup for several matrix  types.
Table 1 compares characteristics of three column types tested.  Each type was calibrated
using EPA recommended solution3.  Standard solutions of matrix material were injected
on the columns and the collected fraction was analyzed for unremoved matrix material.
Alternate solvent systems offer the possibility to eliminate use of chlorinated solvents4.
Therefore, performance of columns was also compared for two mobile phase systems:
1) 100% DCM, and 2) 7:3 ethyl acetate and cyclopentane (ETA/CYP).
               Table 1.  Column Types Compared in This Investigation

Column Type   Packing                    Weight of Packing   Bed Length   Inner Diameter
A             EnviroBeads S-X3 Select    70 grams            49 cm        2.5 cm
B             EnviroBeads S-X3 Select    21.5 grams          43 cm        1.5 cm
C             Envirosep-ABC              NA                  41 cm        2.1 cm
* Florisil is a registered trademark of the Floridin Company.
                                          149

-------
EXPERIMENTAL PROCEDURE

       Sample Matrices

Four matrix surrogates were examined: Mazola corn oil (biological matrix), number 2
diesel fuel from a service station (petroleum contaminated samples), potting soil extract
(soils), and spinach extract (crop samples).  Standard solutions of each matrix were
prepared in each of the two solvent systems at several concentrations.

       Column Calibration

Calibration of each column was checked first at a flow rate for which the linear velocity
of mobile phase was about the same as in the traditional GPC cleanup method.  This was
determined by multiplying 5 mL/minute times the square of the ratio of the diameter of
the column being tested to the diameter of the traditional column.   Higher flow  rates
were also checked to determine when loss of resolution (by visual inspection of the UV
chromatogram) occurs as mobile phase linear velocity is increased.
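
The flow-rate matching described above is simply a ratio of column cross-sectional areas;
the Python sketch below applies it to the inner diameters listed in Table 1 (a worked
illustration only, not part of the original procedure).

    # Flow rate that preserves the traditional method's mobile-phase linear
    # velocity on a column of different inner diameter:
    #   Q_new = 5 mL/min * (d_new / d_traditional)**2
    TRADITIONAL_FLOW_ML_MIN = 5.0
    TRADITIONAL_ID_CM = 2.5        # column A inner diameter (Table 1)

    def matched_flow(new_id_cm):
        return TRADITIONAL_FLOW_ML_MIN * (new_id_cm / TRADITIONAL_ID_CM) ** 2

    for label, inner_diameter_cm in [("Column B (1.5 cm ID)", 1.5),
                                     ("Column C (2.1 cm ID)", 2.1)]:
        print(f"{label}: {matched_flow(inner_diameter_cm):.1f} mL/min")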

       Determination of Capacity

At a given flow rate, for each  of the two solvent systems, solutions of matrix material
at various loading levels were injected on a column  using a  sample loop of 2.5 mL or
5.0 mL capacity.  Dump and collect times were set for either semivolatiles (BNA)
analysis or pesticides and PCBs analysis.  The collected fraction  was  examined for
unremoved matrix material using  either gravimetric analysis  (corn oil, potting soil
extract, and  spinach  extract)  or  GC/FID analysis for  matrix  materials containing
components that might be lost during evaporation (diesel fuel).
RESULTS

Results provided in this manuscript are for the semivolatiles (BNA)  calibration only.
Additional results will be presented at the conference, including column comparisons for
the pesticide/PCB application.

Table 2 shows amounts of unremoved matrix material for corn oil injected onto column
B or column C under various conditions with either a 5.0 mL or 2.5 mL injection loop*.
Similarly, Table 3 shows amounts of unremoved number 2 diesel fuel  for the same pair
of columns.  All type B results shown are using ETA/CYP mobile phase and all type C
results are  using  100%  DCM mobile  phase.   Additional data will  be available for
presentation at the conference.
   There appears to be little difference in resolution whether using 2.0 mL or 2.5 mL injection. The larger
   size was chosen to facilitate "dirty" sample processing by avoiding viscosity effects on resolution.
                                         150

-------
  Table 2.  Corn Oil in the Collected Fraction of Extracts Cleaned Using Envirosep-ABC

Column   Injection   Flow          Loading   Amount      Percent
Type     Size        Rate          Level     Recovered   Removal
B        2.5 mL      4.0 mL/min    400 mg    14 mg       96.5 %
C        5.0 mL      5.0 mL/min    50 mg     1.3 mg      94.8 %
C        5.0 mL      5.0 mL/min    100 mg    1.1 mg      97.8 %
C        5.0 mL      5.0 mL/min    200 mg    2.5 mg      97.5 %
C        5.0 mL      5.0 mL/min    400 mg    12.8 mg     97.6 %
C        5.0 mL      6.4 mL/min    400 mg    49.6 mg     87.6 %
C        5.0 mL      7.7 mL/min    400 mg    41.8 mg     89.6 %
        Table 3.  Diesel Fuel in the Collected Fraction of Extracts Cleaned by GPC

Column   Injection   Flow          Loading   Amount      Percent
Type     Size        Rate          Level     Recovered   Removal
B        2.5 mL      4.0 mL/min    400 mg    380 mg      5.0 %
C        5.0 mL      5.0 mL/min    400 mg    354 mg      11.5 %
C        5.0 mL      5.0 mL/min    200 mg    167 mg      16.5 %
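
For reference, the Percent Removal column follows directly from the loading level and the
amount found in the collected fraction; the Python sketch below reproduces the diesel-fuel
entries of Table 3 (values taken from the table; this is an illustration, not part of the
original analysis).

    def percent_removal(loaded_mg, recovered_mg):
        """Fraction of injected matrix material removed by the GPC cleanup."""
        return 100.0 * (1.0 - recovered_mg / loaded_mg)

    # Diesel fuel rows from Table 3: (loading level, amount in collected fraction)
    for loaded, recovered in [(400.0, 380.0), (400.0, 354.0), (200.0, 167.0)]:
        print(f"{loaded:.0f} mg loaded, {recovered:.0f} mg recovered -> "
              f"{percent_removal(loaded, recovered):.1f}% removed")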
DISCUSSION

Performance of GPC columns as matrix loading level is varied can be classified into two
arbitrary regions (Figure 1).  In the region below a "point" of matrix overloading, GPC
peak shapes remain normally Gaussian, meaning that a certain percentage of the matrix
material loaded is removed.  That percentage depends on the amount of chromatographic
resolution which the column obtains between the matrix coextractives  and the largest
target analyte molecules.  This in turn depends on  resin bed length and HETP of the
column (which in turn is gel particle size and flow rate dependent).  The concentration
of matrix material  which defines the overload point is primarily dependent on the cross
section of the resin bed and permeability characteristics of the gel particle pores (which
is affected by viscosity of the mobile phase and therefore temperature). Note that as the
amount of matrix material injected increases  toward the overload point, the fraction of
injected material removed remains the same; however, the absolute amount of unremoved
material is increasing linearly with the amount loaded.  Thus the overload point does not
define the limits of acceptable cleanup, but does serve as a reference point for comparing
column types and sizes.
                                             151

-------
                [Figure 1. Two Regions of GPC Cleanup Performance:
                 unremoved matrix material plotted against loading level.]
Results in Table 2 and corresponding UV chromatograms showed that at loading levels
up to 200 mg with a 5.0 mL injection, column C was below the overload point.  At a
400 mg loading level with corn oil, some deterioration of Gaussian shape was observed
on the trailing edge of the corn oil peak, indicating column overload begins to occur near
that loading level.  This is consistent with previous work.  At flow rates higher than the
customary 5 mL/min on column type  C larger amounts of oil  were recovered  in the
collected fraction, indicating that the overload region shifts to lower loading levels when
flow rate is increased past 5 mL/min.

The absolute amount of corn oil in the collected fraction was 12.8 mg at the 400 mg
loading level and 5.0 mL/min flow rate.  That amount of oil residue is higher than the
amount expected to remain in an extract that is cleaned using an EnviroBeads column per
method  instructions (column  A).   This is because the larger size of the standard
EnviroBeads column more than compensates for the lower chromatographic resolution
of its gel particles (larger relative to the Envirosep-ABC gel particles in column C); therefore
a larger percentage of the corn oil is removed when using column A. Note that column
type  B provided matrix removal similar to the results from column C when type B was
used  with ETA/CYP mobile phase at 4.0 mL/min.
                                            152

-------
The results from diesel fuel  (Table 3) illustrate the difficulty in  removing petroleum
derived matrix contamination from samples.  Waxy materials and some of the higher
molecular weight aliphatic compounds are removed, but the GC/FID profile of the
collected fraction appeared essentially unaltered; this is not surprising since at least one
fourth of diesel fuel aliphatics have 16 carbons or less, making them too small to remove
effectively when  using GPC  for  cleanup of semivolatiles extracts.  Clean up of this
matrix type is expected to be  much more efficient for the pesticide/PCBs application.

Column C under  standard conditions removed only 16.5% of the diesel fuel when 200
mg of diesel was loaded.  At 400 mg loading only 11.5% was removed.  Column B at
4.0 mL/min in ETA/CYP was even less effective when measured at the 400 mg loading
level; its collected fraction still had 95% of the diesel fuel remaining.
SUMMARY & CONCLUSIONS

1.     For corn oil type matrices, column types B and C provided similar results at a
loading level of 400 mg per injection. This suggests that smaller columns can provide
similar degrees of cleanup whether they are type  B (large gel particle  size like the
traditional GPC column) or type C (smaller gel particle size in high efficiency, steel
jacketed columns).

2.     For type C 400 mg appears to be the beginning of the overload region for the
corn oil matrix.  Overload seems to occur at a lower loading level when flow rate is
increased beyond 5.0 mL/min for type C.

3.     GPC cleanup with 100% DCM mobile phase is not efficient for removal of diesel
fuel (principally aliphatic hydrocarbons) during sample clean up.  Column type C appears
to be more efficient than type B, but at 200 mg loading level it still only removed 16.5%
of the  matrix.  At 400 mg level it removed 11.5%.  This loading level may be at or near
the overload  region for that matrix type.
                                             153

-------
    REFERENCES
[1]     Conrad, E.  E., Schwartz, N. L., and Kelly, K. P.; Tenth Annual Waste Testing & Quality Assurance
       Symposium, 1994, Paper 102.

[2]     Willig, T., Kaufmann, J.; Ninth Annual Waste Testing & Quality Assurance Symposium, 1993, Paper
       65.

[3]     US EPA Test Methods for Evaluating Solid Waste, 3rd Edition, 1995 (with 2nd Update).

[4]     Conrad, E.  E., Kelly, K. P.; Environmental Testing & Analysis, 1994, September-October, 34-39.
                                                   154

-------
                                                                                                 25
THE USE OF FOURIER TRANSFORM INFRARED SPECTROSCOPY FOR THE ANALYSIS OF
WASTE DRUM HEAD SPACE

W.F. Bauer and M.J. Connolly, Idaho National Engineering Laboratory, LITCo, Idaho Falls, Idaho
83415; A. Rilling and D. Gravel, Bomem Hartmann & Braun, Inc., Quebec, Quebec, Canada G2E 5S5

Transuranic (TRU) radioactive wastes have been retrievably stored in waste drums at Department of
Energy  (DOE) sites since the 1970's.  Ultimately, these waste drums are destined for final disposition in
the Waste Isolation Pilot Plant (WIPP).  Current requirements for acceptance of waste into the WIPP
dictate that a representative drum head space sample be acquired and analyzed prior to the transport and
disposal of waste  in the WIPP. Analysis results  of the head space sample are to  be used for waste
characterization, verification of process knowledge, assigning Environmental Protection Agency (EPA)
hazardous waste codes, determining the  potential for flammability, and as input to gas generation and
transport models. Because of the very large number of waste drums and the rate at which they will need
to be processed, a rapid, simple and reliable  analysis method for waste  drum head space that can be
performed  "at-line" is necessary. Fourier transform infrared (FTIR) spectroscopy was selected because
the analysis times are short, operation of the instrumentation is simple and reliable, and because it could be
easily implemented "at-line". Drum  head space samples are pulled directly into a cell mounted on an FTIR
spectrometer and a spectrum recorded. From each infrared spectrum, 29 volatile organic compounds and
the C1-C3 hydrocarbons are identified and quantitated. To evaluate the analytical performance of the FTIR
system and methodology on real samples, ~300 gaseous samples of actual waste drum head space and the
head space  of other inner layers of confinement  have been analyzed by the "at-line" FTIR  system.
Analytical results are available within 5-6 minutes of sample collection. The  FTIR analysis results were
compared to the results from duplicate samples that were collected in SUMMA canisters and analyzed
by the standard laboratory gas chromatographic (GC) methods. The FTIR analysis results agree well with
the chromatographic analyses and will meet the program required limits for  accuracy and precision for
the analytes of interest. To  date, our results indicate that FTIR spectroscopy is a viable, cost effective
alternative to the laboratory  based GC methods currently specified for the analysis of TRU waste drum
head space.
                                                     155

-------
26

 A QUANTITATIVE METHOD FOR THE DETERMINATION OF
 TOTAL TRIHALOMETHANES IN DRINKING WATER

 W.B. Studabaker, S.B. Friedman, and R.P. Vallejo
 EnSys, Inc., Morrisville, NC 27560

 Abstract

 Despite the need for extensive testing for trihalomethanes (THMs) in the nation's drinking
 water supplies, there is as yet no simple, inexpensive, accurate procedure useful for the
 monitoring of total THMs.  Obstacles to the development of such a procedure include the
 lack of a simple, reliable method for extracting THMs from water, and the inherent
 difficulty in normalizing the assay response of individual THMs on a weight basis.  We
 have now developed a method which overcomes these obstacles.  The test can be
 performed in less than 15 minutes in a laboratory setting, using ordinary laboratory
 equipment.  The test chemistry provides quantitation of any mixture of the four
 trihalomethanes, on a weight basis, with an accuracy of ±15% and a relative standard
 deviation <8%. The method MDL is <5 ppb TTHMs and the RQL is <20 ppb TTHMs.
 Cross-reactivity with most other disinfection by-products is minimal.

 Introduction

 Trihalomethanes form as by-products during the disinfection of water using chlorine-based
 oxidants.  Changes in trihalomethane concentrations in finished water may reflect changes
 in the quality of influent raw water and indicate a need for adjustments in the treatment
 process. THMs are routinely measured using  purge-and-trap/gas chromatographic
 techniques.  The monetary costs involved in acquiring and maintaining the required
 instrumentation and the user dedication and expertise needed to assure reliable data places
 THM monitoring outside the scope of many municipal water treatment laboratories. As a
 result, considerable effort has gone into the  development of simple, inexpensive, and
 reliable tests for the quantitative detection of THMs at concentrations typically found in
 drinking water. These tests are in general based on the Fujiwara reaction, in which
 organic halides, pyridine or a pyridine derivative, and hydroxide react to form a product
 with a strong UV or visible chromophore.1  Methods have involved fluorescence
 detection,2,3 pentane extraction/reaction/evaporative concentration,4 and purging into
 solutions of pyridine and hydroxide.5 Drawbacks to these methods have included the use
 of expensive equipment, difficult or lengthy procedures, matrix interferences, and poor
 relative recognition of THMs.

 We have developed a method for the detection of THMs in drinking water samples which
 involves two simple procedures. First, a 100 mL water sample is extracted using a
 proprietary, carbon-based filter cartridge. A peristaltic pump is used to filter the sample
 under positive pressure in a way that avoids exposing the sample to headspace. Then, the
 analyte is eluted from the cartridge using pyridine and analyzed using Fujiwara conditions
 which were developed to normalize the response of each of the THMs on a weight basis.
                                             156

-------
Analyte quantitation can be accomplished through use of a standard curve generated by
the user or by running a kit standard. The method can be run in any lab equipped with a
working fume hood and needs only common laboratory equipment, a spectrophotometer,
and a peristaltic pump. Laboratory personnel can perform the method without training
and, with little practice, can run 5-10 analyses per hour.

Experimental

Standards and instrumentation: Trihalomethane standard solutions were prepared from
neat reagents (Aldrich).  Aqueous standards were prepared using water from a laboratory
purifier system (MilliQ) following the procedures prescribed in EPA method 502.1.
Analyses were performed using a Hach DR-2000.

 Summary of the test protocol:  The water sample to be analyzed is loaded into a 100 mL
 syringe so that no headspace forms between the sample and the plunger. The volume is
 adjusted to 100 mL, then the entire sample is pumped through the filter cartridge via a
peristaltic pump.  The cartridge is purged of excess water, then attached to a bottle-top
dispenser and eluted with pyridine into a test tube.  A developer reagent is added, then the
solution is incubated in boiling water and cooled. The presence of THMs is indicated by a
 pink color which is measured at 531 nm and quantitated using a kit standard.
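
 Single-point quantitation against the kit standard amounts to scaling the standard's
 concentration by a ratio of blank-corrected absorbances; the Python sketch below is a
 minimal illustration under that assumption, with hypothetical absorbance readings and
 standard concentration (the kit's actual calculation procedure is not detailed here).

    def tthm_from_kit_standard(abs_sample, abs_standard, abs_blank, standard_conc_ppb):
        """Single-point quantitation: scale the kit standard's concentration by the
        ratio of blank-corrected absorbances measured at 531 nm."""
        return standard_conc_ppb * (abs_sample - abs_blank) / (abs_standard - abs_blank)

    # Hypothetical readings for a blank, a 100 ppb kit standard, and an unknown sample
    conc_ppb = tthm_from_kit_standard(abs_sample=0.210, abs_standard=0.480,
                                      abs_blank=0.030, standard_conc_ppb=100.0)
    print(f"Total THMs ~ {conc_ppb:.0f} ppb")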

Results and Discussion

Spike/Recovery.  Figure 1 shows the results obtained from the analysis of distilled water
samples spiked with individual THMs.  The method exhibits excellent linearity throughout
the range normally encountered in finished water samples. Relative response of the
individual THMs in the method is ฑ15% of the mean.

             [Figure 1. Spike/recovery of trihalomethanes: response plotted against
              [THMs] spiked (ppb, 0-350) for chloroform, bromoform,
              bromodichloromethane, and chlorodibromomethane.]
                                              157

-------
Precision. The method exhibits the following precision for the measurement of 20 ppb
spikes (10 replicates):

       Analyte                    RSD

       Chloroform:                5.7%
       Bromodichloromethane:      3.9%
       Chlorodibromomethane:      2.7%
       Bromoform:                7.4%

Method sensitivity. The MDL (3 SD above the mean blank) is approximately 3 ppb. The RQL
(12 SD, measured at 20 ppb, above the mean blank) is approximately 12 ppb.
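
Those sensitivity figures follow from the stated definitions (3 standard deviations above the
mean blank for the MDL; 12 standard deviations, measured at 20 ppb, above the mean blank for
the RQL); the Python sketch below applies them to hypothetical replicate results expressed in ppb.

    from statistics import mean, stdev

    def mdl_rql(blank_results_ppb, spike20_results_ppb):
        """MDL and RQL per the definitions in the text: MDL = mean blank + 3 SD of
        the blanks; RQL = mean blank + 12 SD of replicate 20 ppb spikes."""
        mdl = mean(blank_results_ppb) + 3.0 * stdev(blank_results_ppb)
        rql = mean(blank_results_ppb) + 12.0 * stdev(spike20_results_ppb)
        return mdl, rql

    # Hypothetical blank and 20 ppb spike replicates
    blanks = [0.9, -1.1, 0.4, 1.3, -0.6, 0.8, -0.2]
    spikes_20ppb = [19.2, 21.0, 20.5, 18.9, 20.8, 19.4, 20.2]
    mdl, rql = mdl_rql(blanks, spikes_20ppb)
    print(f"MDL ~ {mdl:.1f} ppb, RQL ~ {rql:.1f} ppb")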

Cross reactants. The following table illustrates the cross-reactivity of all Information
Collection Rule analytes (Methods 551 and 552) and some other organochlorine analytes
in the method described above.
Analyte                      X-react        Analyte                      X-react

Trichloroacetonitrile        59%            1,1,1-Trichloroacetone       60%
Dichloroacetonitrile         0              1,1-Dichloroacetone          1
Dibromoacetonitrile          1              Chloral hydrate              44
Bromochloroacetonitrile      0              Chloropicrin                 0
Trichloroacetic acid         15             Trichloroethylene            31
Dichloroacetic acid          4              1,1,1-Trichloroethane        0
Dibromoacetic acid           2              Carbon tetrachloride         1
Chloroacetic acid            2              Tetrachloroethylene          0
Bromoacetic acid             2
Bromochloroacetic acid       2
A preliminary study of ICR cross reactants present in water samples from nearby
municipalities, using Method 551, indicated that they were present at concentrations of
less than 10% of the TTHMs and would contribute <10% to the total signal in the test.

Conclusions:

The method described above provides a simple, rapid means of quantitating total
trihalomethanes in finished drinking water. Further research will detail the correlation of
the method with EPA methods (502.1).
                                             158

-------
References

1. Fujiwara, K. Sitzungsber. Abh. Naturforsch. Ge. Rostock, 1916, 6, 33.
2. Okumura, K.; Kawada, K.; Uno, T. Analyst 1982, 707, 1498.
3. Aoki, T.; Tanaka, .; Fujiyoshi, K.; Yamamoto, M. Proc. Int. Water Supply Assoc.
Conf. 1989.
4. Huang, J.Y.C.; Smith, G.C. Jour. AWWA 1984, 76, 168.
5. Reckhow, D.A.; Pierce, P.D. A Simple Spectrophotometric Method for the
Determination of THMs in Drinking Water, AWWA Research Foundation, Denver, CO.,
1992.
                                               159

-------
27
       STABILITY STUDIES OF SELECTED ANALYTICAL STANDARDS FOR THE
             EXPERIMENTAL DETERMINATION OF EXPIRATION DATES

 C.A. Petrinec, Staff Scientist, and M.A. Re, Senior Scientist, Radian
 Corporation, P.O. Box 201088, Austin, TX  78720-1088

 ABSTRACT

 The accuracy of analytical data in environmental analysis is  dependent on
 the accuracy of analytical standards used  in the analysis.  Shelf life and
 stability  are  important considerations  when making and using standards.
 Usually, expiration dates for standards in EPA analytical methods are set
 arbitrarily.   Valid expiration dates  can only  be established by
 stability studies over time.  A protocol for  experimental  determination of
 expiration  dates  has been established and applied  to  some specific EPA
 method standards.   Stock and working-level  standard solutions have been
 prepared for a number of EPA methods including Method 8080 for Pesticides
 and  PCB  analysis.   While some  commercial  sources of  standards supply
 stability  data on  high-concentration  stock solutions,  there  is  very
 limited data  on lower  concentration working-level  standards.   Now that
 standards at working levels are commercially available, shelf-life studies
 of these mixtures are  critical.  A general  discussion  of our stability
 testing program,  the importance of experimentally determined expiration
 dates, and initial  results  for some of  our stability  studies  will be
 presented.

 INTRODUCTION

 Accurate determination of pollutants in  the environment is reliant upon a
 number  of   factors  including     field  sampling,   laboratory  sample
 preparation, and methods for analysis and quantitation of  target analytes.
 Methods for many of these aspects  of  environmental analysis  are  well
 established and documented.   The United States Environmental Protection
 Agency  (USEPA)  has  published  methods  in documents  such as  the SW-846
 series and the Contract Laboratory Program (CLP) Statements of Work (SOW).
 These  types  of  documents  describe  sampling  techniques,  methods  for
 preparing  samples  for  analysis,   and  analytical  parameters   such  as
 instrumentation and quantitation guidelines.

 Analytical  methods  will  usually include  sections describing   standard
 solutions  needed  for  the  analysis.    A number of  standards   such  as
 solutions for initial instrument calibration,  calibration  checks,  internal
 and surrogate standards, matrix standards, and quality control standards
 are required for any given method.   The description of standards  may also
 include preparation methods and set  guidelines for storage and shelf life
 of each solution.

 Since the  time that many  of the methods were  written,  the commercial
 availability of standards has  increased  dramatically.    There  are  many
 suppliers  that specialize  exclusively  in  providing  standards to  the
 analytical  testing  community.    Examples  of   standards  commercially
                                         160

-------
available  include high purity neat materials, single and multicomponent
solutions.    Solution  standards  prepared at  high  concentrations  are
typically known as  stock  standards.  These are diluted to  levels at which
the analytes  are in a concentration range appropriate for the analytical
method.  The diluted solutions are known as working-level standards.  Many
stock solutions for specific  EPA methods are commercially  available.  The
availability  of  standards  has  removed some  of  the  burden  of preparing
solutions  from the  analytical laboratory.

Preparation of standards on  a commercial scale is often quite different
than preparing  standards  in the analytical laboratory.  Usually, the batch
size is  much  larger on commercial scale and often solutions are packaged
in flame-sealed ampules.   Laboratories typically prepare smaller amounts
and either  store solutions in screw-cap vials, bottles, or in volumetric
flasks.  As analytical methods were written and guidelines  were set for
preparing and storing standard solutions, the focus was  toward individual
laboratory  preparations.   A solution stored in a sealed ampule may have a
longer  shelf life  than  the  same standard stored  in a vial  or  flask.
Arbitrary storage conditions  and expiration dates were set in the methods
with limited or no experimental basis.   For example,  the guidelines in SW-
846 Method 8081  suggest  replacing stock standards  after  six months and
replacing working-level calibration standards after  two months. The most
appropriate way  to  determine expiration dates is to  prepare  and store
standards  and  study  them over  time   to  determine  changes  in  analyte
composition and concentration.

Another  aspect  of  great  concern  to  the  testing  laboratory  is  the
traceability  and documentation  associated with analytical  standards.
Traceability   of analytical   reference  materials   has  been  discussed
 recently.1,2  Often  when laboratories are audited by government or private
auditors,  they are  asked to  provide documentation and traceability data
for standards that they  have used in their processes.   Using standards
that  have  exceeded  the  expiration  date  may  cause  data  on  analysis
performed with  the  standards to be invalid.  Shelf-life data is part of
the traceability of  the  standard and  should be  provided by commercial
suppliers of  standards.

Until recently,  most commercial  suppliers of  standards have only offered
neat reference  materials  and  stock standard solutions.   Laboratories then
dilute the  stocks to working levels.   An increasing number of working-
level standards are now being prepared  commercially.3 While some data is
available on stability of stock standards,  the amount of data on stability
of working-level solutions is very limited.   Since this data is critical
to  laboratories using  these  solutions,  a  study  was  undertaken  to
experimentally  determine  shelf life of selected solutions and to establish
reasonable  expiration dates.

STANDARD PREPARATION

All neat materials used for standards preparation were either synthesized,
purified or procured by Radian  Corporation  Specialty Chemicals  Group.
                                          161

-------
Each material  was  verified for identity using  a  combination of methods
including GC-MS, NMR,  FT-IR, Melting Point (MP) or  Boiling Point  (BP), and
Elemental Analysis.  Purity assays for  all materials were performed using
two or more methods  including  GC-FID,  HPLC,  DSC,  MP,  TLC, and Elemental
Analysis.

Stock solutions were prepared using the following procedure:  Analytical
balances were calibrated using NIST traceable weights.  Neat materials for
each  stock  standard  were  accurately  weighed  into  vials  and  then
quantitatively  transferred to volumetric  flasks.   This procedure was
performed in triplicate by three different Chemists or Technicians.  The
three preparations  were compared to each  other to ensure  precision of
preparation.  The three  preparations were then combined to form the master
stock  solution.    Working-level  standards were prepared by volumetric
dilutions of the appropriate stock standards.

Ampules to be  used  in the  aliquoting process  were rinsed with deionized
water, oven dried,  and silanized.   The ampules  were then  filled with
appropriate amounts of working  standard and flame sealed.  Random ampules
were removed during the early,  middle,  and late portions of the ampuling
process and used for batch homogeneity testing.  All standards were stored
 in a refrigerator at approximately 4°C and protected from light.

Working-level calibration standards were prepared in this manner for USEPA
Methods 8080/8081.   This method is applicable  for the  determination of
Chlorinated Pesticides  and Polychlorinated  Biphenyls (PCBs) using Gas
 Chromatography (GC) with an Electron Capture Detector (ECD).   A total of
 nine sets of calibration standards were prepared (see Tables 1-9).   Each
set of standards consisted of six or seven concentration levels that would
allow for  generating  a  6 or 7-point  calibration  curve  for  each target
analyte.   Since most laboratories typically generate  5-point curves, the
standards were designed so that a combination of  5 of the 6  or 7 levels
could be used to generate  a higher range or a lower range 5-point curve.
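
 Because each set spans 6 or 7 levels, a laboratory can select any 5 contiguous levels to
 build a lower- or higher-range curve; the Python sketch below (hypothetical detector
 responses, assuming a simple linear response) fits both ranges for the gamma-BHC levels of
 Table 1 and reports the correlation coefficient used later as the linearity criterion.

    from math import sqrt

    def correlation_coefficient(x, y):
        """Pearson correlation coefficient of response (y) versus concentration (x)."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / sqrt(sxx * syy)

    # gamma-BHC concentrations C1-C7 from Table 1 (ng/mL) and hypothetical ECD responses
    levels = [1, 2.5, 5, 10, 25, 50, 100]
    responses = [210, 540, 1020, 2100, 5150, 10300, 20400]

    low_range = correlation_coefficient(levels[:5], responses[:5])    # C1-C5 curve
    high_range = correlation_coefficient(levels[2:], responses[2:])   # C3-C7 curve
    print(f"low-range r = {low_range:.4f}, high-range r = {high_range:.4f}")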

METHOD OF ANALYSIS

All analyses were performed using a Hewlett Packard 5890 Gas Chromatograph
 equipped with a split/splitless injector, autosampler, and ECD.  The GC
 column used for analyses was a DB-5, 30 m x 0.53 mm ID, 1.5 µm film
thickness column (J&W Scientific).   Chromatographic  conditions  were as
follows:

       Injector Temperature                 250°C
       Detector Temperature                 290°C
       Initial Oven Temperature             150°C (0.5 min. hold)
       Ramp Rate                            5°C/min.
       Final Oven Temperature               270°C (5.5 min. hold)

Calibration standards were analyzed and  correlation coefficients  were
calculated for  each  analyte in  each  set.    The random  ampules  removed
during the  aliquoting  process were  analyzed  for homogeneity of  all
                                         162

-------
 analytes.  As an additional check for accuracy of concentrations, second
source standards were used for comparison.  Second source standards were
obtained  as  stock  solutions   and  diluted  to  working  levels  that
corresponded  to  the mid-point concentrations  of each set of calibration
 solutions.  Responses of second source solutions were directly compared to
responses of  appropriate mid-point  solutions.

METHOD FOR STABILITY TESTING

Stability testing was performed by comparing existing solutions to freshly
prepared solutions.  For each set of calibration standards, a new stock
solution was  prepared from neat  materials.  Each stock solution was then
diluted  to a  working level that  resulted  in concentrations at mid-points
of  the  calibration  curves.   Each calibration curve was re-analyzed and
correlation  coefficients were calculated.   The new mid-point solutions
were  analyzed  and analyte  responses  were  compared  to  responses  of
corresponding existing mid-point solutions.
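
 The mid-point comparison reduces to a percent difference between the stored (existing)
 standard's response and that of the freshly prepared standard; the Python sketch below
 (hypothetical peak responses, using one reasonable sign convention) applies the ±15%
 significance criterion discussed later.

    def percent_difference(existing_response, new_response):
        """Percent difference of the stored mid-point standard relative to the
        freshly prepared mid-point standard."""
        return 100.0 * (existing_response - new_response) / new_response

    # Hypothetical peak responses for one analyte's mid-point standard
    existing, fresh = 10450.0, 10200.0
    diff = percent_difference(existing, fresh)
    print(f"% difference = {diff:+.2f}, significant = {abs(diff) > 15.0}")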

RESULTS  AND  DISCUSSION

There are many  factors that may affect the  stability  of compounds in
solution.   Some  considerations  include  reactivity  of an  analyte with: 1)
the solvent,  2) other analytes in the same solution (cross reactivity), 3)
light,  and 4) oxygen.   Another factor to consider is storage temperature.

 The most common way to detect instability of analytes in solution is to
 monitor change in analyte concentration over time.  One cause of
 concentration change is the evaporation of solvent, which will result in
 high analyte concentrations.  This can occur if standards are stored in
 screw-cap bottles or in volumetric flasks.  One way to prevent loss of
 solvents and to extend the shelf life of solutions is to use flame-sealed
 ampules for storage.  Another cause of changes in concentration is the
 tendency of some analytes to stick to glass surfaces.  This may cause
 analyte concentration to appear low because some of the analyte is
 adsorbed to the glass.  This effect can be very pronounced for standards
 at working levels because analyte concentrations are generally very low;
 any small loss of analyte to the glass surface may greatly affect the
 concentration.  The use of silanized glass ampules may help reduce this
 phenomenon because the silanization procedure reduces active sites on
 the glass surface.  All of these factors must be taken into consideration.
                                          163

-------
Analysis of PCBs by Methods  8080/8081  is done by classifying PCBs under
the commercial term of Aroclors.   Aroclors are mixtures  of PCBs that fall
into  a  specific  boiling point range that  is  dependent  on the degree of
chlorination.  PCBs are thought to be very stable and degradation is not
expected to occur in hydrocarbon solution.

The duration of the stability testing study for the nine  calibration mixes
was two years.  The  initial mixes  were prepared and verified as described
above.  The original correlation coefficients for all calibration curves
were  0.997  or greater.   Comparison of original  mid-point  solutions to
second source standards indicated percent differences for pesticides to be
less  than 10% and for PCBs to be less than 15%.

After  two  years, new  standards  were prepared from neat materials  and
comparison studies were performed.   The  first analysis  of the stability
study was  to re-verify each  original  calibration curve for linearity.
Changes in any specific analyte concentration at any of the calibration
levels  should be indicated by non-linearity.   This  analysis,  however,
would not give any indication of overall changes in concentration due to
solvent evaporation.   For this  reason,   it  is critical to  prepare  new
solutions from neat materials  to verify analyte concentrations.  Diluting
the new stock solution to concentrations that are comparable to existing
levels allows for a direct comparison of old and new solutions at the same
prepared concentration.

Results from curve linearity analysis and mid-point comparisons of old and
new solutions for each  set of standards  are  summarized  in Tables 10-18.
 All correlation coefficients were 0.997 or better.  The correlations are
 not significantly different from the initial linearity data on the solutions,
 indicating that significant changes in analyte concentration did not occur
 at any specific concentration level.

The point  to point comparisons of existing solutions to new solutions also
did not reveal  significant  differences  in  analyte  concentration.    A
change  in  analyte  response of  greater than  ฑ15% was  considered  to  be
significant.   Only four analytes  out of 42 analytes  studied were greater
than  10% with the largest difference at  14.2%.   These  results  indicate
that analytes at the mid-point levels were stable over the two year time
period.

Methods 8080/8081 discuss generation of a multi-point calibration curve to
initially calibrate  instruments.   It is also suggested to inject a single
mid-concentration standard after each group of 20  samples as a calibration
check.  The variance  of analyte  responses of  the single point  check to
average responses of  the  multicalibration should be  less than 30%.   A
variance of greater than  30%  indicates  that  multipoint  recalibration is
necessary.   Laboratories will often use separately prepared solutions as
calibration check standards.

Additional calculations were  performed on the data from the  stability
analysis to illustrate consistency of the existing calibration solutions
                                          164

-------
with  separately  prepared check  standards.    Comparison  of the  curve
responses  to  the freshly prepared  mid-point  standard responses yielded
variances of less than 15%.   This  indicates that a multipoint calibration
of the existing two year old standards would still yield acceptable data.

 SUMMARY

The approach for stability analysis of working-level calibration standards
for USEPA Methods 8080/8081 was two-fold.  Linearity over the full range
of each  set of  solutions needed  to be initially verified  and  then re-
evaluated  after  long-term  storage.    Additionally,   concentrations  of
analytes  needed  to  be confirmed by comparison to  new gravimetrically
prepared solutions.

Results from both aspects of  the study met the established criteria.  No
significant changes  were observed in the existing solutions.   Based on
this data it can be concluded  that working level calibration  standards for
Methods 8080/8081 prepared in isooctane,  stored in flame-sealed silanized
amber ampules  at 4ฐC  are  stable for  at least  two years.

These results have established experimentally  determined expiration dates
for each set of standard solutions included in the study.  The study will
continue and another set of analyses will be performed at the end of three
years that may further extend the shelf life of these  standards.  Based on
data from the above studies, no significant changes in these  solutions are
anticipated.

Similar studies  are  currently in progress that include stability testing
on working-level  calibration standards for Method 8240 (volatile organics
 analysis)  and  Method 8270  (semivolatile organics analysis).

REFERENCES:

 1.    Keith, L.H., Environ. Sci. Technol., 28, 590A, (1994).

 2.    Noble, D.  Analytical Chemistry, 66, 868A, (1994).

 3.    Keith, L.H.  "A New Concept:  Ready-to-Use Standards."
       Environmental Lab, p. 19.  (August/September 1993).

 4.    Phillips, D.D., Pollard, G.E., and Soloway, S.B.  Agricultural and
       Food Chemistry, 10, 217, (1962).

 5.    Zabik, M.J., Schuetz, R.D., Buston, W.L., and Pape, B.E.  J. Agr.
       Food Chem., 19, 308, (1971).
                                         165

-------
TABLE 1.  PESTICIDE CALIBRATION MIX A

                                     Concentration (ng/mL) in Isooctane
Analyte                        C1    C2    C3    C4    C5    C6    C7
gamma-BHC                      1     2.5   5     10    25    50    100
Heptachlor                     2     5     10    20    50    100   200
Aldrin                         2     5     10    20    50    100   200
Heptachlor-2,3-exo-epoxide     2     5     10    20    50    100   200
Endosulfan I                   2     5     10    20    50    100   200
Dieldrin                       2     5     10    20    50    100   200
Endosulfan II                  4     10    20    40    100   200   400
p,p'-DDT                       4     10    20    40    100   200   400
Endrin Aldehyde                4     10    20    40    100   200   400
Methoxychlor                   16    40    80    160   400   800   1600
Decachlorobiphenyl             4     10    20    40    100   200   400
Tetrachloro-m-xylene           4     10    20    40    100   200   400
TABLE 2.  PESTICIDE CALIBRATION MIX B

                                     Concentration (ng/mL) in Isooctane
Analyte                        C1    C2    C3    C4    C5    C6    C7
(±)-alpha-BHC                  1     2.5   5     10    25    50    100
beta-BHC                       4     10    20    40    100   200   400
delta-BHC                      2     5     10    20    50    100   200
cis-Chlordane (alpha)          2     5     10    20    50    100   200
trans-Chlordane (gamma)        2     5     10    20    50    100   200
p,p'-DDD                       4     10    20    40    100   200   400
p,p'-DDE                       2     5     10    20    50    100   200
Endosulfan Sulfate             4     10    20    40    100   200   400
Endrin                         4     10    20    40    100   200   400
Endrin Ketone                  4     10    20    40    100   200   400
Decachlorobiphenyl             4     10    20    40    100   200   400
Tetrachloro-m-xylene           4     10    20    40    100   200   400
                      166

-------
TABLE 3.  PCB CALIBRATION MIX A

                              Concentration (ng/mL) in Isooctane
Analyte                  C1    C2    C3    C4    C5    C6
Aroclor® 1016            50    100   250   500   750   1000
Aroclor® 1260            50    100   250   500   750   1000
Decachlorobiphenyl       10    20    50    100   150   200
Tetrachloro-m-xylene     10    20    50    100   150   200
TABLE 4.  PCB CALIBRATION MIX B

                              Concentration (ng/mL) in Isooctane
Analyte                  C1    C2    C3    C4    C5    C6
Aroclor® 1221            50    100   250   500   750   1000
Aroclor® 1254            50    100   250   500   750   1000
Decachlorobiphenyl       10    20    50    100   150   200
Tetrachloro-m-xylene     10    20    50    100   150   200
TABLE 5.  PCB CALIBRATION MIX C

                              Concentration (ng/mL) in Isooctane
Analyte                  C1    C2    C3    C4    C5    C6
Aroclor® 1232            50    100   250   500   750   1000
Decachlorobiphenyl       10    20    50    100   150   200
Tetrachloro-m-xylene     10    20    50    100   150   200
TABLE 6.  PCB CALIBRATION MIX D

                              Concentration (ng/mL) in Isooctane
Analyte                  C1    C2    C3    C4    C5    C6
Aroclor® 1242            50    100   250   500   750   1000
Decachlorobiphenyl       10    20    50    100   150   200
Tetrachloro-m-xylene     10    20    50    100   150   200
                   167

-------
TABLE 7.  PCB CALIBRATION MIX E

                              Concentration (ng/mL) in Isooctane
Analyte                  C1    C2    C3    C4    C5    C6
Aroclor® 1248            50    100   250   500   750   1000
Decachlorobiphenyl       10    20    50    100   150   200
Tetrachloro-m-xylene     10    20    50    100   150   200
TABLE 8.  CHLORDANE CALIBRATION MIX

                              Concentration (ng/mL) in Isooctane
Analyte                  C1    C2    C3    C4    C5    C6
Chlordane (tech.)        50    100   250   500   750   1000
Decachlorobiphenyl       10    20    50    100   150   200
Tetrachloro-m-xylene     10    20    50    100   150   200
TABLE 9.  TOXAPHENE CALIBRATION MIX

                              Concentration (ng/mL) in Isooctane
Analyte                  C1    C2    C3    C4    C5    C6
Toxaphene                50    100   250   500   750   1000
Decachlorobiphenyl       2     4     10    20    30    40
Tetrachloro-m-xylene     2     4     10    20    30    40
                   168

-------
TABLE 10.  PESTICIDE CALIBRATION MIX A

                                                            Comparison of New Std. (C4) to Existing Std. (C4)
                               Correlation Coefficient      Theoret. Conc.       %
Analyte                        (C1-C7)                      (ng/mL)              Difference
gamma-BHC                      0.9991                       10                   -0.26
Heptachlor                     0.9984                       20                   -10.71
Aldrin                         0.9999                       20                   0.49
Heptachlor-2,3-exo-epoxide     0.9999                       20                   0.18
Endosulfan I                   0.9999                       20                   1.27
Dieldrin                       0.9999                       20                   2.22
Endosulfan II                  0.9997                       40                   14.21
p,p'-DDT                       0.9984                       40                   -12.32
Endrin Aldehyde                0.9997                       40                   1.39
Methoxychlor                   0.9994                       160                  -7.41
Decachlorobiphenyl             0.9979                       40                   -2.38
Tetrachloro-m-xylene           0.9991                       40                   2.55
TABLE 11.  PESTICIDE CALIBRATION MIX B

                                                            Comparison of New Std. (C4) to Existing Std. (C4)
                               Correlation Coefficient      Theoret. Conc.       %
Analyte                        (C1-C7)                      (ng/mL)              Difference
(±)-alpha-BHC                  0.9985                       10                   0.67
beta-BHC                       0.9998                       40                   11.59
delta-BHC                      0.9995                       20                   1.91
cis-Chlordane (alpha)          0.9999                       20                   1.08
trans-Chlordane (gamma)        0.9997                       20                   -1.33
p,p'-DDD                       0.9998                       40                   -2.65
p,p'-DDE                       0.9998                       20                   4.00
Endosulfan Sulfate             0.9999                       40                   -2.71
Endrin                         0.9997                       40                   -6.06
Endrin Ketone                  0.9999                       40                   -0.61
Decachlorobiphenyl             0.9990                       40                   -2.78
Tetrachloro-m-xylene           0.9998                       40                   1.99
                           169

-------
TABLE 12.  PCB CALIBRATION MIX A

                                                      Comparison of New Std. (C3) to Existing Std. (C3)
                         Correlation Coefficient      Theoret. Conc.       %
Analyte                  (C1-C6)                      (ng/mL)              Difference
Aroclor® 1016/1260       0.9997                       250                  0.71
Decachlorobiphenyl       0.9992                       50                   -6.51
Tetrachloro-m-xylene     0.9988                       50                   -1.95
TABLE 13.  PCB CALIBRATION MIX B

                                                      Comparison of New Std. (C3) to Existing Std. (C3)
                         Correlation Coefficient      Theoret. Conc.       %
Analyte                  (C1-C6)                      (ng/mL)              Difference
Aroclor® 1221/1254       0.9996                       250                  -1.33
Decachlorobiphenyl       0.9990                       50                   -5.85
Tetrachloro-m-xylene     0.9997                       50                   -0.39
TABLE 14.  PCB CALIBRATION MIX C

                                                      Comparison of New Std. (C3) to Existing Std. (C3)
                         Correlation Coefficient      Theoret. Conc.       %
Analyte                  (C1-C6)                      (ng/mL)              Difference
Aroclor® 1232            0.9998                       250                  3.94
Decachlorobiphenyl       0.9988                       50                   -5.14
Tetrachloro-m-xylene     0.9998                       50                   0.45
                           170

-------
TABLE 15.  PCB CALIBRATION MIX D

                                                      Comparison of New Std. (C3) to Existing Std. (C3)
                         Correlation Coefficient      Theoret. Conc.       %
Analyte                  (C1-C6)                      (ng/mL)              Difference
Aroclor® 1242            0.9997                       250                  4.01
Decachlorobiphenyl       0.9991                       50                   -3.33
Tetrachloro-m-xylene     0.9997                       50                   2.28
TABLE 16.  PCB CALIBRATION MIX E

                                                      Comparison of New Std. (C3) to Existing Std. (C3)
                         Correlation Coefficient      Theoret. Conc.       %
Analyte                  (C1-C6)                      (ng/mL)              Difference
Aroclor® 1248            0.9998                       250                  -0.73
Decachlorobiphenyl       0.9990                       50                   -3.76
Tetrachloro-m-xylene     0.9998                       50                   -1.49
TABLE 17.  CHLORDANE CALIBRATION MIX

                                                      Comparison of New Std. (C3) to Existing Std. (C3)
                         Correlation Coefficient      Theoret. Conc.       %
Analyte                  (C1-C6)                      (ng/mL)              Difference
Chlordane (tech.)        0.9999                       250                  9.99
Decachlorobiphenyl       0.9994                       50                   -2.64
Tetrachloro-m-xylene     0.9998                       50                   2.77
                              171

-------
TABLE 18.  TOXAPHENE CALIBRATION MIX

                                                      Comparison of New Std. (C4) to Existing Std. (C4)
                         Correlation Coefficient      Theoret. Conc.       %
Analyte                  (C1-C6)                      (ng/mL)              Difference
Toxaphene                0.9971                       500                  -9.91
Decachlorobiphenyl       0.9994                       20                   -1.98
Tetrachloro-m-xylene     0.9981                       20                   3.47
                            172

-------
                                                                         28
              DETERMINING VOLATILE ORGANIC COMPOUND
                 CONCENTRATION STABILITY IN SOIL

Alan D. Hewitt, Research Physical Scientist, U.S. Army Cold
Regions Research and Engineering Laboratory, 72 Lyme Road,
Hanover, New Hampshire 03755-1290

ABSTRACT

The pre-analysis concentration stability of volatile organic com-
pounds (VOCs) in soil matrices was evaluated independent of vol-
atilization losses. Soil subsamples were fortified with benzene,
toluene, ethylbenzene, p-xylene, o-xylene, trans-1,2-dichloroet-
hylene, trichloroethylene and perchloroethylene, sealed inside
glass ampoules, and handled in a manner consistent with the EPA's
SW-846 Method 8240. Experiments have repeatedly shown that chlo-
rinated-hydrocarbon concentrations remain fairly constant, while
aromatic hydrocarbons often experience a complete (>99%) loss
when soils are held at 22°C for several days. While refrigeration
at 4°C reduces the rate of biodegradation, more than 50% of some
of the hydrocarbons are lost when soils are held for 14 days.
Chemical preservation by soil acidification with NaHSO4 mitigates
the loss of these aromatic hydrocarbons for periods beyond 14
days when held at 22°C.

INTRODUCTION
Despite the large number of soil subsamples analyzed for volatile
organic compounds (VOCs) each year, there exists little informa-
tion on the stability of these compounds in the absence of vola-
tilization losses (1). The routine acceptance of refrigerated
storage (4°C) up to 14 days after subsamples have been trans-
ferred to airtight vessels (2) continues, even though it is well
recognized that soils remain biologically active under these con-
ditions. Several investigators have observed significant reduc-
tions in VOC concentrations during storage; however, the experi-
mental approaches used were incapable of distinguishing between
volatilization and biodegradation losses (3-5). By encapsulating
subsamples in glass ampoules and then transferring them to
volatile organic analysis (VOA) vials, we can eliminate volatili-
zation losses, isolate the effect of biodegradation, and
evaluate methods of chemical preservation (1, 6, 7).
Our initial experiments used a vapor-fortification procedure to
spike soils. Although this method has many useful applications
(8-10), the number of subsamples that can be made from a single
batch of soil is often limited (< 25), treatment takes several
days, and the soil must be desiccated. Here, a much
                                     173

-------
quicker procedure is described. It uses a spiked aqueous solution
to introduce benzene (Ben), toluene (Tol), ethylbenzene (E-Ben),
para- and ortho-xylene (p-Xyl, o-Xyl), trans-1,2-dichloroethylene
(TDCE), trichloroethylene (TCE), and perchloroethylene (PCE) to
48 replicate soil subsamples held in small glass ampoules. After
treatment, the ampoules are sealed, creating airtight vessels
that can be stored and/or transferred intact to VOA vials. For
the latter, once the VOA vial has been capped the ampoule can be
broken by hand shaking to release the treated soil. Ampoules and/
or VOA vials can be stored according to protocols for low- (< 1
µg VOC/g) and high- (> 1 µg VOC/g) level purge-and-trap gas chro-
matography mass spectrometry (PT/GC/MS), aqueous extraction head-
space gas chromatography (HS/GC), or any other method of analy-
sis, without exposing the sample to the atmosphere. Here, a pro-
tocol is tested that is consistent with soil samples retained in
vapor-tight glass bottles awaiting subsampling [although this
practice is not recommended by the author (11)], or in VOA vials
awaiting low-level PT/GC/MS analysis (2). Samples were chemically
preserved with NaHSO4 because it is one of the more practical
biodegradation inhibitors (12).

SOIL SUBSAMPLE PREPARATION AND TREATMENT

The silty-sand topsoil used in this study was obtained locally
just prior to the experiment, from between 5 and 10  cm below the
ground surface. It was air-dried for 24 hr,  passed through a 30-
mesh sieve and thoroughly mixed.  The moisture content was 4.3%
and the organic carbon content was 0.89%.
Subsamples of 1.00 ± 0.01 g were transferred to 2-mL glass am-
poules (Wheaton, actual vol. = 3.1 mL), some of which contained
0.25 g of NaHSO4 (see Figure 1). In this experiment, 21 ampoules
contained both NaHSO4 and soil, and 27 contained just soil.
The fortification solution was prepared by adding microliter vol-
umes (3.1-5.8 µL) of Ben, Tol, E-Ben, p-Xyl, o-Xyl, TDCE, TCE,
and PCE to a 100-mL volumetric flask containing about 102 mL of
groundwater. Each analyte would have an aqueous concentration of
approximately 50 mg/L if dissolution was complete. However,  this
is unlikely, based on their solubilities. After adding the anal-
ytes the solution was shaken, a stirring bar introduced, and the
flask topped off with groundwater, leaving less than 0.5 mL of
headspace after inserting the glass stopper. This solution was
stirred for at least 24 hr and allowed to sit undisturbed for
1 hr prior to removing aliquots.
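
As a quick check on the nominal 50-mg/L figure above, the volume of neat
analyte needed is simply the target mass divided by the liquid density. The
sketch below uses rounded handbook densities, which are an assumption (the
paper does not list them), and reproduces the 3.1-5.8 µL range:

```python
# Sketch: spike volume needed for ~50 mg/L in a nominal 0.100-L flask.
# Densities (g/mL = mg/uL) are approximate handbook values, not from the paper.
density = {"benzene": 0.88, "toluene": 0.87, "ethylbenzene": 0.87,
           "p-xylene": 0.86, "o-xylene": 0.88, "TDCE": 1.26,
           "TCE": 1.46, "PCE": 1.62}
target_mg = 50 * 0.100               # 50 mg/L x 0.100 L = 5 mg of each analyte
for name, d in density.items():
    microliters = target_mg / d      # mg / (mg/uL) = uL of neat liquid
    print(f"{name}: {microliters:.1f} uL")   # spans roughly 3.1-5.8 uL
```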

Each soil subsample was spiked with a 200-µL aliquot of this
aqueous solution using a 500-µL glass syringe (Hamilton). To
avoid undissolved low-density analytes that would accumulate at
                                     174

-------
               Bulk Sample
                              Ampoule
                                     27: 1.00 g soil
                                     15: 1.00 g soil, 0.25 g NaHSO4
                                      6: 1.00 g soil, 0.25 g NaHSO4, 1.0 mL water
          Day 0
          Spiked: 48 soil subsamples, 3 VOA vials with 15 mL water.
          Stored: 12 spiked soil subsamples refrigerated (4°C), remainder held at 22°C.
          Analyzed: 3 spiked VOA vials, 3 spiked soil samples, 3 spiked soil subsamples preserved
          with NaHSO4.
          Day 5
          Analyzed: 3 spiked soil subsamples stored at 4°C, 3 spiked soil subsamples stored at 22°C,
          3 spiked soil subsamples preserved with NaHSO4.
          Day 9
          Analyzed: 3 spiked soil subsamples stored at 4°C, 3 spiked soil subsamples stored at 22°C,
          3 spiked soil subsamples preserved with NaHSO4, 3 soil subsamples preserved with
          NaHSO4 and 1 mL of water.
          Day 14
          Analyzed: 3 spiked soil subsamples stored at 4°C, 3 spiked soil subsamples stored at 22°C,
          3 spiked soil subsamples preserved with NaHSO4.
          Day 21
          Analyzed: 3 spiked soil subsamples stored at 4°C, 3 spiked soil subsamples stored at 22°C,
          3 spiked soil subsamples preserved with NaHSO4, 3 spiked soil subsamples preserved with
          NaHSO4 and 1 mL of water.
  Figure 1.   Flow diagram of subsample preparation and analysis.

the surface, aliquots were taken from well below the water-air
interface, and the stainless steel needle was wiped prior to
inserting it into the ampoule's neck. Before transferring a spike,
each ampoule was placed in a metal clamp so it could be heat-
sealed with a propane torch immediately after spiking. To enhance
mixing, 1 mL of Type 1 water (Milli-Q, Millipore Corp.) was
introduced with a pipette to 6 of the ampoules containing both
treated soil and NaHSO4 (see Figure 1). It took approximately
1 hour to spike and seal the 48 soil subsamples, after which each
one was hand shaken, mixing its contents. In addition to prepar-
ing the soil subsamples, a 200-µL aliquot of the spiking solution
was placed in each of three autosampler headspace vials (22 mL,
Tekmar) containing 15 mL of Type 1 water, which were immediately
capped with crimp-top caps and Teflon-faced butyl rubber septa
(Wheaton). One of these samples was prepared at the beginning,
middle and end of the soil subsample fortification process to
estimate the spiking solution concentration and homogeneity.
The first, middle, and last soil subsamples prepared with and
without NaHSO4 were selected for analysis on Day 0 (day of treat-
ment). Also on Day 0, twelve sealed ampoules containing only
                                         175

-------
fortified soil were selected at random and placed in a refrigerator
(4°C) for storage. All other subsamples remained at room tempera-
ture (22°C). Triplicates from these three subsample sets (22°C
preserved and unpreserved, 4°C unpreserved) were selected at ran-
dom and analyzed after 5, 9, 14, and 21 days of storage. The six
subsamples preserved with NaHSO4 and made into a slurry by adding
1 mL of water were split into two batches, and analyzed after
holding periods of 9 and 21 days (see Figure 1).
ANALYSIS

All samples were analyzed with an HS autosampler (Tekmar 7000)
coupled to a GC (SRI model 8610-0058) equipped with a 15-m DB-1
capillary column (0.53-mm i.d.). Subsamples in ampoules were
prepared for analysis by placing them in autosampler vials (22 mL)
that contained 14 mL of Type 1 water, or 13 mL for the six ampoules
that already contained 1 mL. After sealing with a crimp-top cap,
each vial was vigorously hand shaken, causing the ampoule to break
and allowing the treated soil to be completely dispersed. Headspace
equilibration was obtained by two minutes of manual shaking fol-
lowed by holding at 25°C for 20 min. A 1-mL headspace sample was
drawn through a heated needle and transfer line to the GC for
separation and flame ionization detection (FID). The GC tempera-
ture sequence started with the sample injection, stayed at 40°C
for 1 min, then increased to 100°C in 6 min, and held at 100°C
for an additional 3.5 min. Sample analyte concentrations were
established relative to aqueous headspace standards prepared by
adding small (<10 µL) quantities of a methanol stock solution to
autosampler vials containing 15 mL of Type 1 water (8).
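
A minimal sketch of the external-standard arithmetic this implies, using
hypothetical areas and masses and assuming the slurried subsample responds
like the aqueous standard; the actual quantitation followed reference (8):

```python
# Sketch: external-standard quantitation against an aqueous headspace standard.
# The sample vial holds the broken ampoule (1.00 g of soil) in ~14-15 mL of
# water, so micrograms found in the vial convert directly to ug per gram of soil.
std_mass_ug = 8.0        # analyte mass in the aqueous standard vial (hypothetical)
std_area = 125000        # FID peak area of the standard (hypothetical)
sample_area = 103000     # FID peak area of the soil-subsample vial (hypothetical)

mass_in_vial = sample_area / std_area * std_mass_ug
conc_ug_per_g = mass_in_vial / 1.00      # divide by the 1.00-g soil subsample
print(round(conc_ug_per_g, 1))           # about 6.6 ug/g for these inputs
```
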
RESULTS AND DISCUSSION
Results for the spiking  solution and the
treated soil subsamples  appear in Tables 1,
2, and 3. The means and  standard deviations
of the analyte mass obtained  for the three
aqueous aliquots  (Table  1)  and those of the
treated soils analyzed on Day 0  (Tables 2 and
3) demonstrate that the  treatment procedure
was precise. The  small  (< 15%) concentration
decrease from the spiking solution to the
unpreserved spiked soil  samples is consistent
with observed analyte-organic carbon parti-
tion phenomena (13); the changes for the
preserved spiked samples are a result of
both partitioning and salting out (14).
Table 1. Means and standard deviations of analyte concentrations
(µg/vial) of the spiking solution in the autosampler vials (trip-
licate analyses).

Analyte      Treatment aliquot
Ben           7.0 ± 0.3
Tol           8.5 ± 0.2
E-Ben         7.8 ± 0.1
p-Xyl         8.2 ± 0.1
o-Xyl         8.2 ± 0.1
TDCE         10   ± 0.3
TCE          13   ± 0.3
PCE           9.6 ± 0.3
                                     176

-------
Table 2. Means and standard deviations of analyte concentrations
(µg/g) in unpreserved subsamples stored at 22 and 4°C (tripli-
cate analyses).

A. 22°C
                                    Analysis day
Analyte   0*           5            9            14           21
Ben       6.6 ± 0.1    ND†          ND           ND           ND
Tol       8.0 ± 0.0    ND           ND           ND           ND
E-Ben     7.0 ± 0.3    ND           ND           ND           ND
p-Xyl     7.1 ± 0.3    0.2 ± 0.03   ND           ND           ND
o-Xyl     7.3 ± 0.6    5.5 ± 0.3    ND           ND           ND
TDCE      9.5 ± 0.3    9.7 ± 0.0    9.3 ± 0.1    8.7 ± 0.1    9.3 ± 0.3
TCE       12  ± 0.3    11  ± 0.2    11  ± 0.6    9.6 ± 0.1    10  ± 0.2
PCE       8.2 ± 0.2    7.2 ± 0.4    6.9 ± 0.6    6.3 ± 0.1    6.8 ± 0.1

B. 4°C
                                    Analysis day
Analyte   0*           5            9            14           21
Ben       6.6 ± 0.1    6.5 ± 0.2    5.7 ± 0.9    1.2 ± 1.4    ND
Tol       8.0 ± 0.0    7.6 ± 0.1    7.6 ± 0.2    7.1 ± 0.5    4.4 ± 0.4
E-Ben     7.0 ± 0.3    6.4 ± 0.1    6.3 ± 0.1    6.1 ± 0.2    5.7 ± 0.2
p-Xyl     7.1 ± 0.3    6.5 ± 0.1    6.2 ± 0.2    6.0 ± 0.3    4.6 ± 0.4
o-Xyl     7.3 ± 0.6    6.7 ± 0.1    6.6 ± 0.2    6.5 ± 0.2    6.6 ± 0.2
TDCE      9.5 ± 0.3    9.4 ± 0.3    9.6 ± 0.2    10  ± 0.4    9.4 ± 0.4
TCE       12  ± 0.3    12  ± 0.2    12  ± 0.1    12  ± 0.4    11  ± 0.4
PCE       8.2 ± 0.2    7.5 ± 0.1    7.5 ± 0.2    8.0 ± 0.3    7.4 ± 0.2

* Same subset used for Day 0 values for both storage conditions.
† Not detected: less than 0.02 µg VOC/g.
Table 3. Means and standard deviations of analyte concentrations
(µg/g) in preserved soil subsamples stored at 22°C (triplicate
analyses).

A. Soil subsamples preserved with NaHSO4
                                    Analysis day
Analyte   0*           5            9            14           21
Ben       7.5 ± 0.2    7.4 ± 0.1    7.4 ± 0.2    6.5 ± 0.1    7.3 ± 0.2
Tol       9.1 ± 0.2    8.6 ± 0.3    8.6 ± 0.3    7.4 ± 0.1    8.5 ± 0.3
E-Ben     7.7 ± 0.4    6.9 ± 0.3    6.9 ± 0.2    5.9 ± 0.2    6.7 ± 0.5
p-Xyl     7.7 ± 0.2    7.0 ± 0.2    6.9 ± 0.2    6.1 ± 0.2    6.6 ± 0.5
o-Xyl     7.9 ± 0.2    7.1 ± 0.4    7.2 ± 0.3    6.2 ± 0.2    6.9 ± 0.4
TDCE      11  ± 0.4    11  ± 0.2    10  ± 0.2    9.5 ± 0.2    10  ± 0.3
TCE       14  ± 0.4    13  ± 0.4    13  ± 1.0    11  ± 0.1    13  ± 0.6
PCE       8.7 ± 0.5    8.0 ± 0.3    7.8 ± 0.2    7.3 ± 0.1    7.6 ± 0.2

B. Soil subsample slurries preserved with NaHSO4
                       Analysis day
Analyte   0*           9            21
Ben       7.5 ± 0.2    7.3 ± 0.4    7.3 ± 0.2
Tol       9.1 ± 0.2    8.1 ± 0.3    8.4 ± 0.2
E-Ben     7.7 ± 0.4    6.3 ± 0.3    6.5 ± 0.2
p-Xyl     7.7 ± 0.2    6.2 ± 0.4    6.5 ± 0.3
o-Xyl     7.9 ± 0.2    6.5 ± 0.3    6.7 ± 0.2
TDCE      11  ± 0.4    10  ± 0.4    10  ± 0.6
TCE       14  ± 0.4    13  ± 0.6    13  ± 0.2
PCE       8.7 ± 0.5    7.3 ± 0.4    7.6 ± 0.1

* Same subset used for Day 0 values for both storage conditions.
                                         177

-------
            Figure 2.  Mean concentrations (µg/g) of VOCs in soil subsamples
            stored in ampoules up to 21 days at 22°C.

As previously observed when storing samples in sealed glass
ampoules or capped VOA vials, the chlorinated compounds showed
little (< 23%) change in concentration, confirming that vapor
losses were controlled (1, 6, 7). Except for the six subsamples
made into slurries, only soil (moisture content 24%) and 2.5 mL
of air existed during storage in the 2-mL glass ampoules. This
moisture and oxygen content is sufficient for complete microbial
degradation of the spiked VOCs (15). Indeed, the soil subsamples
held at room temperature (22°C) showed a complete (> 99%) loss
of the aromatic hydrocarbons within 9 days (Figure 2). These de-
gradation rates are consistent with those observed in aqueous
systems, showing half-lives on the order of days for these
aromatic hydrocarbons, and weeks to months for the chlorinated
compounds (16). Refrigeration (4°C) slowed the degradation, but
after 14 days Ben was substantially (> 50%) reduced in concentra-
tion relative to Day 0 (Figure 3). In contrast, the subsamples
preserved with NaHSO4 showed only small (< 23%) concentration
changes relative to Day 0 for all of the test analytes over a
21-day room-temperature storage period (Figure 4). Similarly,
immersion in MeOH has been shown to be an effective means of
preserving VOC concentrations (1). These findings and others
(1, 7) suggest that refrigeration is not sufficient to elimin-
ate microbial degradation of VOCs.
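
The losses quoted here are simple percent changes from the Day 0 means; for
example, using values taken from Tables 2 and 3:

```python
# Percent change from Day 0 (values in ug/g, from Tables 2 and 3).
def pct_change(day0, dayN):
    return 100.0 * (dayN - day0) / day0

# Benzene, unpreserved, 22 C: 6.6 ug/g at Day 0, < 0.02 ug/g (ND) by Day 9.
print(pct_change(6.6, 0.02))   # about -99.7%, i.e., a > 99% loss
# Toluene, preserved with NaHSO4, 22 C: 9.1 ug/g at Day 0, 7.4 ug/g at Day 14.
print(pct_change(9.1, 7.4))    # about -18.7%, within the < 23% band noted above
```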

Even though these experiments used laboratory-fortified  samples,
field samples should behave similarly because the chemical pre-
                                     178

-------
Figure 3.  Mean concentrations (µg/g) of VOCs in soil subsamples stored in
ampoules up to 21 days at 4°C.

Figure 4.  Mean concentrations (µg/g) of VOCs in soil subsamples preserved
with NaHSO4 and stored in ampoules up to 21 days at 22°C.
                 179

-------
servative inhibited the activity of the indigenous soil microbes.
There are, however, some precautions that need to be addressed.
Analyte transformations due to acidification have been found to
affect the stability of styrene, but not that of the other 23 VOCs
tested to date [Appendix (17)]. In addition, it is probably impor-
tant to obtain pH 2 or lower throughout the sample to inhibit
microbial degradation, perhaps requiring an aqueous slurry. In the
experiment presented here,  slurries were not necessary; the final
analyte concentrations in the slurries were not significantly dif-
ferent from those in the other chemically-preserved samples (Table
3). Until more information is available it is recommended that
soils be evaluated on a case-by-case basis, and furthermore,  that
preservation methods other than acidification (e.g.,  mercuric
chloride or sodium azide) be used when a soil contains carbonates.
By using chemical preservation and sample collection and handling
protocols that minimize volatilization losses during storage and
analysis, environmentally representative analyte concentrations
are more apt to remain stable for 14 days, and perhaps longer
(11). Small (5-15%) VOC losses are expected even with acidifica-
tion since the Teflon septum cap liner is somewhat transparent
to VOCs (17, 18). Losses through Teflon-lined caps, however, do
not appear to be a problem when soil samples are immersed in MeOH
(1, 7). Another advantage of chemical preservation is that refrig-
eration is not as critical.

SUMMARY
Confinement of subsamples in vapor-tight vessels throughout han-
dling and analysis is critical to the accurate assessment of both
biological degradation and chemical preservation of VOCs in soil.
Using such storage protocols allows investigators to determine if
measures other than refrigeration are necessary or effective in
maintaining stable VOC concentrations over the holding times per-
mitted by regulations. For the surface soils used in studies at
this laboratory (1, 7), chemical preservation by acidification with
NaHSO4 (except for soil containing styrene) succeeded in maintain-
ing stable concentrations of aromatic hydrocarbons for periods of
21 days, while refrigeration at 4°C usually failed.

ACKNOWLEDGMENTS

Funding for this work was provided by the U.S. Army Environmental
Center, Martin H. Stutz, Project Monitor. The author thanks Dr.
C.L. Grant and Marianne Walsh for critical review of the text.
This publication reflects the view of the author and does not sug-
gest or reflect policy, practices, programs, or doctrine of the
U.S. Army or of the Government of the United States.
                                    180

-------
REFERENCES

   1.  Hewitt A.D.  (In press)  Preservation of volatile organic compounds  in soil
      subsamples.  American Environmental Laboratory.

   2.  U.S.  Environmental Protection Agency (1986)  Test Methods for Evaluating
      Solid Waste,  Vol.  IB. SW-846.

   3.  Jackson J. ,  N.  Thomey and L.F. Dietlein (1991)  Degradation of hydrocar-
      bons  in soil samples analyzed within accepted analytical holding times.
      In:  Proceedings of 5th Outdoor Action Conference on Aquifer Restoration,
      Ground Water Monitoring,  and Geophysical Methods.  Las Vegas,  Nevada,  May
      13-16.  p.  567-76.

   4.  Maskarinec M.P.,  C.K. Bayne, R.A. Jenkins,  L.H.  Johnson and S.K. Holladay
      (1992)  Stability of volatile organics in environmental soil samples.  Of-
      fice  of Scientific and Technical Information,  Oak Ridge,  Tennessee,  ORNL/
      TM-12128.

   5.  King  P.  (1993)  Evaluation of sample holding times and preservation meth-
      ods  for gasoline in fine-grained sand.  In:  Proceedings of National Sympo-
      sium on Measuring and Interpreting VOCs in Soils:  State of the Art and
      Research Needs. Las Vegas Nevada, January 12-14.

   6.  Hewitt A.D.  (1994) Concentration stability of four volatile organic  com-
      pounds in soil subsamples. U.S. Army Cold Regions Research and Engineer-
      ing  Laboratory, Hanover,  New Hampshire, Special Report 94-6.

   7.  Hewitt A.D.  (1995) Preservation of soil subsamples for the analysis  of
      volatile organic compounds. U.S. Army Cold Regions Research and Engineer-
      ing  Laboratory, Hanover,  New Hampshire, Special Report 95-5.

   8.  Hewitt A.D., P.H. Miyares, D.C. Leggett and T.F. Jenkins (1992) Comparison
       of analytical methods for determination of volatile organic compounds.
       Environ. Sci. Technol., 26: 1932-1938.

   9.  Hewitt A.D. and C.L. Grant (1995) Round-robin study of performance evalua-
       tion soils vapor-fortified with volatile organic compounds. Environ. Sci.
       Technol., 29: 769-74.

  10.  Minnich, M.  and J. Zimmer (In press) Preparation and analysis of forti-
      fied  dry soils for volatile organic compounds performance evaluation ma-
      terials. In:  Proceedings of the International Symposium on Volatile  Or-
      ganic Compounds (VOCs)  in the Environment.  Montreal, Quebec,  Canada,
      April 1994.

  11.  Hewitt A.D.,  T.F.  Jenkins and C.L. Grant (1995)  Collection, handling,  and
      storage: Keys to improved data quality for volatile organic compounds in
      soil. Am.  Environ. Lab. Feb-Jan.

  12.  Maskarinec M.P.,  L.H. Johnson, S.K. Holladay,  R.L. Moody and R.A.  Jenkins
      (1990)  Stability of volatile organic compounds in environmental water
      samples during transport and storage. Environ. Sci. Technol., 24: 1665-
      1670.
  13.  Chiou C.T. (1989) Reactions and Movement of Organic Chemicals in Soils
      (B.L. Sawhney and K. Brown, Eds.). Soil Sci. Soc. Amer. Special Pub. 22,
      Madison, Wisconsin, p.  1-29.

  14.  Ioffe B.V. and A.G. Vitenberg (1982) Head-Space Analysis and Related
      Methods in Gas Chromatography. John Wiley & Sons.
                                              181

-------
15.  Atlas R.M. (1981) Microbial degradation of petroleum hydrocarbons: an en-
    vironmental perspective. Microbiological Reviews, 45: 180-209.

16.  Lewis Publishers, Inc.  (1991) Handbook of Environmental Degradation
    Rates. H.T. Printup Editor, Chelsea, Michigan.

17.  Hewitt A.D. Unpublished data.

18.  Leggett D.C.  and L.V. Parker  (1994) Modeling the equilibrium partition-
    ing of organic contaminants between PTFE, PVC, and Groundwater. Environ-
    mental Science and Technology. 28: 1229-1233.
                                            182

-------
  Appendix: Volatile organic compounds studied in
holding-time and chemical preservation experiments.

               Benzene
               Bromodichloromethane
               n-Butyl benzene
               Carbon tetrachloride
               Chlorobenzene
               Chloroform
               1,3-Dichlorobenzene
               1,1-Dichloroethane
               1,2-Dichloroethane
               cis-1,2-Dichloroethene
               trans-1,2-Dichloroethene
               1,2-Dichloropropane
               Ethylbenzene
               Isopropylbenzene
               Methylene chloride
               n-Propylbenzene
               Styrene
               Tetrachloroethene
               Toluene
               1,1,2-Trichloroethane
               Trichloroethene
               o-Xylene
               m-Xylene
               p-Xylene
                        183

-------
29
            Photolysis of Laboratory Dioxins/Furans Waste
J. P. Hsu, Director, Southwest Research Institute, San Antonio, Texas 78228,
Joseph Pan, Manager, Southwest Research Institute, San Antonio, Texas 78228
Abstract

      The photolysis of polychlorinated dioxins and furans in propylene
glycol, an environmentally benign solvent, was demonstrated to be an
efficient process.
Introduction

      Disposal of dioxin/furan wastes generated in laboratories is troublesome
because RCRA specifies no legal disposal route and most waste companies
refuse to accept this kind of waste.  EPA Method 1613 indicates that
dioxins/furans can be decomposed by photolyzing them in methanol or ethanol
for two to three days.  However, both methanol and ethanol are highly volatile
and, therefore, very flammable.  In addition, methanol is toxic and ethanol is
a controlled substance. We would like to find a solvent which is economical,
non-toxic, less volatile, and, at the same time, efficient in the solvation
and decomposition of dioxins/furans during photolysis.

      Propylene glycol is selected for this purpose since it is harmless (it can
even be taken internally), high boiling (bp760 = 188.2°C), miscible with water,
and able to dissolve most organic compounds.
Experimental

     Ninety mL of propylene glycol (PG) in a 140-mL beaker was spiked with
                                    184

-------
300 uL of dioxins/furans at the concentrations shown in Table 1. The solution
was magnetically stirred throughout the entire experiment.  The beaker was
wrapped with aluminum foil on the outside and bottom to prevent UV from
escaping beyond the beaker.  The UV light was then turned on and sampling
was performed at 0 min, 5 min, 15 min, 35 min, 75 min, 3.25 hr, 9.25 hr, 24 hr,
and 50 hr from the start of the experiment. Two separate aliquots of the
solution were sampled at zero time and only one aliquot at all other sampling
times.  During sampling, a 3-mL aliquot of the solution was quantitatively
transferred to a vial containing 6 mL of water. After thorough mixing, 10 mL
of hexane was added to the PG-water mixture. A 20-uL portion of an internal
standard mixture (Table 2) was spiked into the hexane layer.  The mixture was
shaken vigorously for 30 seconds.  The top layer was transferred to another
vial. The PG-water mixture was again extracted with another aliquot of hexane
(10 mL). Both hexane extracts were then combined and blown down to 6 mL. Two
mL of reagent water was added to the vial containing the hexane extract and
the mixture was shaken for 20 seconds. The hexane extract was then
quantitatively transferred to another vial. Two mL of hexane was used to rinse
out the residue left in the original vial. The hexane extract was blown down
to dryness. The wall of the vial was rinsed with 1 mL of methylene chloride,
which was also blown down to dryness.  The extract was quantitatively
transferred to an injection vial with two aliquots of 200 uL of methylene
chloride. The methylene chloride solution in the injection vial was blown down
to dryness. The wall of the injection vial was then rinsed with 50 uL of
methylene chloride, which was again blown down to dryness.  Finally, 20 uL of
a recovery standard mixture (Table 2) was added to the vial and mixed well
before the gas chromatograph/mass spectrometer analysis.
Result and Discussion

     The concentrations of each congener at the different sampling intervals are
shown in Table 3.  This result indicates that only approximately 1.3% of the
OCDD and OCDF remained after 195 minutes of UV photolysis of total PCDD/PCDF
in propylene glycol.   All the other PCDD/PCDF, including TCDD/TCDF,
                                    185

-------
PeCDD/PeCDF, HxCDD/HxCDF and HpCDD/HpCDF were decomposed.
Within  50  hours  of UV photolysis, all the rest of   OCDD/OCDF was
decomposed.  As shown in Table 4, the photolysis within first five minutes
causes significant decomposition of both OCDD and OCDF.  In this initial
interval, the number of isomers and total concentration increased  for tetra-
through hexa-PCDDs. However, total concentration for tetra- through hepta-
PCDFs decreases in the first five minutes of photodegradation and only the
number of isomers of TCDF increases.  In general, PCDF is photodegraded
much faster than the corresponding PCDD except OCDD. Both  OCDD and
OCDF have approximately the same rate in photodegradation. Two compounds,
1,2,3,4,7,8-HxCDD and 1,2,3,7,8,9-HxCDD, which were not spiked to  the
solution and not found in the two solutions sampled at zero time, were found in
the solution sampled 5 minutes from  the beginning of photodegradation, at
0.107 ng/mL  and  0.60  ng/mL, respectively. These were photodegradation
products of HpCDD or most likely OCDD.

     Table 5 shows the same results as in Table 3, but expressed as TEF
(2,3,7,8-TCDD toxicity equivalents).  This table shows that the TEF was
decreased to less than 10% of the original value in 35 minutes, and to about 3%
in 75 minutes.
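
For reference, a short sketch of how such a toxicity-equivalent concentration
can be computed from the congener data; the TEF values below are the 1989
international TEFs, which is an assumption since the paper does not state the
scheme used:

```python
# Sketch: TEQ = sum of (congener concentration x TEF), in ng/mL
# 2,3,7,8-TCDD equivalents.  I-TEF (1989) values assumed, not stated in the paper.
I_TEF = {
    "2,3,7,8-TCDD": 1.0,   "1,2,3,7,8-PeCDD": 0.5,  "1,2,3,6,7,8-HxCDD": 0.1,
    "1,2,3,4,6,7,8-HpCDD": 0.01, "OCDD": 0.001,
    "2,3,7,8-TCDF": 0.1,   "1,2,3,7,8-PeCDF": 0.05, "1,2,3,6,7,8-HxCDF": 0.1,
    "1,2,3,4,6,7,8-HpCDF": 0.01, "OCDF": 0.001,
}

def teq(conc_ng_per_ml):
    return sum(c * I_TEF[name] for name, c in conc_ng_per_ml.items())

# Zero-time average concentrations from Table 3 (ng/mL):
t0 = {"2,3,7,8-TCDD": 5.25, "1,2,3,7,8-PeCDD": 20.4, "1,2,3,6,7,8-HxCDD": 4.27,
      "1,2,3,4,6,7,8-HpCDD": 11.7, "OCDD": 22.2, "2,3,7,8-TCDF": 5.50,
      "1,2,3,7,8-PeCDF": 17.8, "1,2,3,6,7,8-HxCDF": 9.95,
      "1,2,3,4,6,7,8-HpCDF": 10.6, "OCDF": 20.4}
print(round(teq(t0), 1))   # 18.6, comparable to the 0-min average in Table 5
```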

Conclusion

     Photodegradation by UV light in propylene glycol is an effective way of
destroying dioxins/furans. Better photolysis conditions are being sought to
speed up the process. The mechanism of PCDD/PCDF photolysis is being
studied by using OCDD (or OCDF) as the only substrate. A comparison between
using UV light and sunlight in PCDD/PCDF photolysis is also being studied.
                                  186

-------
                                Table 1
                   Concentration of Spiked Solution

Analyte                     Concentration (ng/uL)
2,3,7,8-TCDD                 2.5
2,3,7,8-TCDF                 2.5
1,2,3,7,8-PeCDD              6.25
1,2,3,7,8-PeCDF              6.25
1,2,3,6,7,8-HxCDD            6.25
1,2,3,6,7,8-HxCDF            6.25
1,2,3,4,6,7,8-HpCDD          6.25
1,2,3,4,6,7,8-HpCDF          6.25
OCDD                        12.5
OCDF                        12.5
                                Table 2
                 Internal and Recovery Standard Solution

Compound                         Standard     Concentration (ng/uL)
13C12-1,2,3,4-TCDD               Recovery     0.56
13C12-1,2,3,7,8,9-HxCDD          Recovery     0.48
13C12-2,3,7,8-TCDD               Internal     0.56
13C12-2,3,7,8-TCDF               Internal     0.46
13C12-1,2,3,6,7,8-HxCDD          Internal     0.54
13C12-1,2,3,4,6,7,8-HpCDF        Internal     0.52
13C12-OCDD                       Internal     0.98
                    187

-------
                                            Table 3
           Concentration of PCDD/PCDF in ng/mL (ppb) of Propylene Glycol

                                        Minutes into experiment
Analyte               0      0      0 avg  5      15     35     75     195    555    1440   3000
2,3,7,8-TCDD          5.17   5.33   5.25   3.55   1.39   0.453  0.373  0      0      0      0
1,2,3,7,8-PeCDD       21.3   19.4   20.4   15.2   7.51   2.5    0.387  0      0      0      0
1,2,3,6,7,8-HxCDD     4.27   4.27   4.27   2.71   0.573  0.067  0      0      0      0      0
1,2,3,4,6,7,8-HpCDD   11.4   12.0   11.7   4.58   0.707  0.247  0.113  0      0      0      0
OCDD                  21.1   23.3   22.2   4.19   1.09   0.62   0.273  0.287  0.153  0.073  0
2,3,7,8-TCDF          5.31   5.68   5.50   1.05   0.147  0.067  0      0      0      0      0
1,2,3,7,8-PeCDF       18.2   17.3   17.8   1.59   0.22   0.14   0      0      0      0      0
1,2,3,6,7,8-HxCDF     9.8    10.1   9.95   0.733  0.173  0.107  0      0      0      0      0
1,2,3,4,6,7,8-HpCDF   10.5   10.7   10.6   0.947  0.267  0.16   0      0      0      0      0
OCDF                  19.5   21.2   20.4   3.14   0.96   0.58   0.273  0.273  0.127  0      0

IS % Recovery
13C12-2,3,7,8-TCDD    50     58     54     50     42     56     75     80     85     91     81
13C12-2,3,7,8-TCDF    47     56     52     52     47     62     80     87     91     107    114
13C12-1,2,3,6,7,8-
  HxCDD               96     96     96     90     83     90     94     86     90     95     107
13C12-1,2,3,4,6,7,8-
  HpCDF               97     100    99     88     85     87     94     83     87     94     113
13C12-OCDD            98     97     98     84     88     80     86     70     75     95     112

Note 1:  Concentration below 0.067 ng/mL (instrument detection limit) is reported as 0.
Note 2:  1,2,3,4,7,8-HxCDD and 1,2,3,7,8,9-HxCDD, which were not spiked to the solution and not found
         in the solution sampled at zero time, are found in B at 0.107 ng/mL and 0.60 ng/mL, respectively.
                                            188

-------
                                            Table 4
     Concentration (ng/mL in PG) of Total PCDD/PCDF with Isomer Number in Parentheses
                      ("Total" includes 2,3,7,8-substituted isomers)

                                      Minutes into experiment
TOTAL                0 min     0 min     0 min      5 min     15 min    35 min    75 min    195 min
PCDD/PCDF                                (Average)
Total TCDD (22)*     5.17(1)   5.33(1)   5.25(1)    5.37(4)   9.21(7)   8.34(9)   2.71(9)   0(0)
Total PeCDD (14)*    21.8(1)   19.4(1)   20.6(1)    24.3(7)   13.5(6)   3.36(6)   0.389(1)  0(0)
Total HxCDD (10)*    4.27(1)   4.27(1)   4.27(1)    16.3(8)   2.39(5)   0(0)      0(0)      0(0)
Total HpCDD (2)*     11.4(1)   12.0(1)   11.7(1)    9.21(2)   1.03(2)   0.247(1)  0.113(1)  0(0)
OCDD (1)*            21.1(1)   23.3(1)   22.2(1)    4.19(1)   1.09(1)   0.62(1)   0.27(1)   0.29(1)
Total TCDF (38)*     5.31(1)   5.68(1)   5.50(1)    1.76(4)   0.147(1)  0.067(1)  0(0)      0(0)
Total PeCDF (28)*    18.2(1)   17.3(1)   17.8(1)    1.59(1)   0.22(1)   0.14(1)   0(0)      0(0)
Total HxCDF (16)*    9.81(1)   10.1(1)   9.96(1)    0.733(1)  0.173(1)  0.107(1)  0(0)      0(0)
Total HpCDF (4)*     10.5(1)   10.8(1)   10.7(1)    0.947(1)  0.267(1)  0.16(1)   0(0)      0(0)
OCDF (1)*            19.5(1)   21.2(1)   20.4(1)    3.14(1)   0.96(1)   0.58(1)   0.27(1)   0.27(1)

* Maximum number of isomers possible.
                                            Table 5
      Concentration (ng/mL in PG) Expressed as 2,3,7,8-TCDD Equivalent Factors (TEFs*)

Minutes into        0 min    0 min    0 min       5 min    15 min    35 min    75 min    195 min
Experiment                            (Average)
TEF (ng/mL)         18.9     18.2     18.6        11.8     5.25      1.73      0.57      0

* Only the 2,3,7,8-chlorinated PCDDs/PCDFs are assigned toxicity equivalence factors (TEFs) in this table.
                                           189

-------
 30
     THE EFFECTIVENESS OF METHYLENE CHLORIDE STABILIZERS IN AN
            ENVIRONMENTAL LABORATORY RECYCLING PROGRAM

 Thomas S. Willig, Chemist, Semivolatile Organics GC/MS, Jon S.
 Kauffman, Ph.D., Group Leader, Semivolatile Organics GC/MS,
 Lancaster Laboratories, 2425 New Holland Pike, Lancaster,
 Pennsylvania 17601

 ABSTRACT

       As solvent purchasing and waste disposal costs rise and
 EPA regulations governing  stack  emissions  tighten,  more  labs
 are turning  to  solvent recycling.    Methylene  Chloride  is
 especially suitable for in-house recycling by the
 environmental labs  that generate large quantities of  this
 solvent.   When  recycling,  though,  one must  be aware of  the
 preservative used by the manufacturer.  Some preservatives are
 lost to the aqueous phase during a water  sample  extraction,
 leaving the  solvent open to degradation.  Other preservatives
 react  during the recycling to form oxidation products which
 may interfere with the sample analysis.   The initial use  of
 the proper preservative in the virgin  methylene chloride  and
 the use of a nitrogen blanket during the distillation process
 by this laboratory has resulted  in  the consistent generation
 of solvent clean enough for BNA extractions.  The distillate,
 which is concentrated 300:1 and analyzed by GC/MS, is free of
 oxidation products and does not degrade in either the long or
 short term.

 INTRODUCTION

 As solvent  purchasing and  waste disposal  costs  rise  and
 environmental  emission standards tighten,  the  recycling  of
 waste  solvents  is becoming a more attractive option  for many
 environmental analytical labs.  Recycling allows them to reduce
 their costs while conforming to environmental regulations.
 Methylene chloride in particular is a viable candidate for a
 solvent  recycling  program  because  it  is  used   in  large
 quantities, is easy to recover, presents no  particular storage
 problems, and can be sufficiently cleaned up by distillation.
 It  is,  however,  susceptible  to  degradation leading to  the
 presence of  such impurities  as phosgene,  hydrochloric acid,
 chloroform and 1,1,2,2-tetrachloroethane.  Manufacturers add
 preservatives to reduce degradation of the  solvent.  These
 preservatives normally do  not  interfere with the analysis  of
 samples  extracted  using  the  preserved   solvent.    When
 recycling, though, one must be aware of the preservative used.
 Some preservatives  can be  lost to  the aqueous phase during
 water   sample   extraction,  leaving   the   solvent  open   to
 degradation.    Other  preservatives  may   react  during  the
 recycling process to  form  products  which  interfere with the
 sample analysis.  In this study we found that if we extracted
 samples using methylene  chloride  which contained the proper
preservative,  and if we distilled under a nitrogen blanket,
                                   190

-------
we could collect  the  solvent  after sample concentration and
successfully distill it to produce  solvent consistently clean
enough to be reused for BNA extractions.

EXPERIMENTAL

Materials

ABC Integrity 2000 spinning band distillation unit
4 Liter amber glass solvent jugs
Buchner funnel
boiling chips
Nitrogen gas
aluminum foil
Kuderna-Danish evaporator-concentrator with Snyder column
Organomation S-Evap unit
Hewlett-Packard   5890  Gas   Chromatograph   &  5970   Mass
Spectrometer
Alltech EPC 1000

Methods

In the  first  part of this experiment  (referred  to  below as
normal conditions) we tested four different preservatives of
methylene  chloride  for  their  suitability   for  use  in  a
recycling  program.   The  four  preservatives  we  tested  were
 1) methanol, 2) cyclohexene, 3) amylene, and 4) amylene and
methanol.   In each case,  we used  the  methylene  chloride to
extract water samples according to the semivolatile extraction
 method 3510.  We added a total of 300 mL of solvent to each
sample,  shook  it out,   and  concentrated it using  a  K-D
apparatus with Snyder column over a steam bath.  The solvent
vapors were condensed and collected using an Organomation S-
Evap.  The waste solvent was stored in  amber glass jugs until
20 liters of solvent had been collected.  This waste solvent
was then  poured into the  ABC  spinning band  unit and glass
boiling chips  added.    Each  distillation run  was  conducted
using the following parameters:

                   shut down temperature = 44
                   motor speed = 2
                   motor on temp = 30  C

                      First Cut      Second Cut

   open cut              30 C          40 C
   close cut             39 C          41 C
   equilibrium hours      0             0
   equilibrium minutes   45            45
   reflux ratio          2:1           4:1
   mantle rate           30            30
                                   191

-------
The distillate from each run was collected in a 20 Liter glass
bottle.  When the run was complete,  300 mL of the distillate
was concentrated to 1.0 mL in a Kuderna-Danish with a Snyder
column, and analyzed by GC/MS on a 30 meter J&W DB-5.625 .25mm
i.d.  column  with  a   1  micron  film   thickness.    The  gas
chromatograph was operated in splitless injection mode.  The
GC temperature program used was:

       injector temp     = 275 C
       detector temp     = 300 C
       initial oven temp = 45 C
       initial time      = 3 min
       temp ramp         = 8 C/min
       final oven temp   = 300 C

In  the second part of the experiment,  three  of  the  above
methylene  chloride  preservatives  were  tested  for  their
suitability  in a  recycling  program  that  used a  nitrogen
blanket over the still to remove all air from the system and
used aluminum  foil over the  distillate collection  bottle to
keep  light  out.    The  methylene chloride  preserved  with
methanol only was not tested in this part because it was found
to be  inappropriate for semivolatile extractions for reasons
discussed below.   The nitrogen blanket  was  accomplished by
running copper tubing  from a  nitrogen  tank  to a teflon tee;
one leg  of the tee was  connected by  teflon tubing to  the
distilling head, and  the  other leg was  connected  by teflon
tubing  to a  100  mL  round bottom  three neck  flask  which
contained a reservoir  of oil.  The middle neck  of  the  flask
was plugged and the third neck was vented to the atmosphere.
The nitrogen pressure was adjusted so that the nitrogen slowly
bubbled through the oil.  This low pressure was enough to keep
air out of the system, but not so high that  it  would affect
the distillation.   Before the  distillation  run  was started,
nitrogen  was  flushed  through  the  boiling pot   and  the
distillate  collection  bottle  to  force  out  any air.    The
distillate collection  bottle was then completely covered with
aluminum foil to keep  out all  light.   Since  the boiling pot
was covered with an insulative blanket and  the  distillation
column was silvered on the inside, the solvent's contact with
light was minimal.

As in the  first part of the experiment,  the methylene chloride
was used in a water extraction, collected on an S-evap, stored
in amber jugs,  poured  into the  still in  20 Liter batches, and
distilled using the same parameters as before.  The distillate
was concentrated 300:1 and analyzed by GC/MS.

RESULTS AND DISCUSSION

When distilled under normal conditions,  peaks  were found which
interfered  with GC/MS analysis   in the methylene  chloride
 preserved with each of the four preservatives which were
                                  192

-------
examined.  The methylene chloride preserved with methanol was
determined to  be  unacceptable because most  of  the methanol
partitioned  from  the   solvent  into  the water  during  the
extraction, leaving the solvent  unprotected.  Figure 1 is the
chromatogram produced by GC/MS analysis.  The largest peak in
the chromatogram,  at 8.23 minutes,  is  tetrachloroethane,  a
common  impurity  in  degraded  methylene  chloride.    This
indicates that the solvent is unprotected and breaking down.
Another problem is the  absence of the last internal standard,
Perylene-dl2.   Methylene  chloride  degradation  products have
been implicated in the  quenching of polyaromatic hydrocarbons
used as semivolatile internal standards.  We observed similar
results in the actual samples which had been extracted using
this solvent in a continuous liquid/liquid extraction .  For
these  reasons,  we  determined  that  the methanol  preserved
solvent was inappropriate  for semivolatile  extractions,  and
did not test it any further.

When  distilled under  normal conditions methylene  chloride
which had been preserved with cyclohexene exhibited two main
peaks  which  elute  at  10.51  and  11.44  minutes  in  the
chromatogram in figure 2.   A library  search of the first peak
suggested 2-chlorocyclohexanol.  GC/MS analysis of a solution
made from  the  purchased neat compound produced  the same two
peaks at the same retention times,  confirming the identity of
the contaminant as 2-chlorocyclohexanol.

When distilled under normal conditions, the methylene chloride
preserved with amylene  exhibited a  cluster  of  early eluting
peaks  (figure 3) .   We have not  been  able to  positively
identify these peaks, but based on the mass spectra we believe
that the two largest peaks could be a result of acid induced
polymerization of the amylene preservative.

When distilled under normal conditions, the methylene chloride
preserved with amylene and methanol exhibited the same cluster
of early  eluting peaks seen in the solvent preserved with
 amylene alone (figure 4).  The peaks, though only about 10% of
 the size of those seen in the solvent preserved with amylene
 alone, were still too large to pass our criteria for solvent
to be used in  BNA extractions.   None of the peaks observed in
the solvent preserved with methanol alone were observed in the
solvent preserved with  amylene and methanol, and no quenching
of the last internal standard was exhibited.

When distilled under a nitrogen blanket and with aluminum foil
over the collection bottle, methylene chloride preserved with
each of the three preservatives  tested: cyclohexene, amylene,
and methanol and amylene,  was free  of any  peaks  which would
significantly interfere with an 8270 GC/MS analysis.  Figure
 5 is a sample of methylene chloride preserved with
cyclohexene which  has  been  successfully cleaned  up.    The
 criterion we used was that no peak could be greater than 3% of
the closest internal standard.  This modification to the still
                                  193

-------
has also allowed us to clean up waste  from the Gel Permeation
Chromatograph  without drying,  filtering,  neutralizing,  or
predistilling the waste before pouring it  into the spinning
band distillation unit (figures 6 & 7).
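
A minimal sketch of that acceptance check, with a hypothetical peak list
(retention time in minutes, area); none of the numbers below come from the
paper's data system:

```python
# Sketch: no contaminant peak may exceed 3% of the area of the closest
# internal standard.  Both lists below are hypothetical.
internal_standards = [(13.2, 250000), (24.8, 240000), (40.1, 230000)]  # (rt, area)
contaminants = [(8.2, 5200), (26.0, 6100)]                             # (rt, area)

def passes(contaminants, internal_standards, limit=0.03):
    for rt, area in contaminants:
        closest_is = min(internal_standards, key=lambda s: abs(s[0] - rt))
        if area > limit * closest_is[1]:
            return False
    return True

print(passes(contaminants, internal_standards))   # True: both peaks are under 3%
```
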
CONCLUSION

Setting up a successful methylene chloride recycling program
in an environmental analytical  laboratory can have positive
environmental and economic rewards,  but some thought must be
given to  the type of  preservative  which is  present  in the
methylene chloride when  it  is purchased.  We found in this
study that methanol alone is not an acceptable preservative.
The other  three preservatives  that  we tested  under  normal
distillation procedures  all produced peaks  which interfered
with an 8270 semivolatile analysis by GC/MS.  By placing the
distillation unit under a blanket of nitrogen to exclude air
from the  system and by  wrapping the  distillate collection
bottle with aluminum foil to keep out light,  we were able to
consistently distill  methylene  chloride  waste  and recover
solvent  which   was  pure  enough  to  reuse   for  BNA  sample
extractions.
                               194

-------
                      Figure 1.  Total Ion Chromatogram of methylene chloride preserved with
                                 methanol which has been distilled under normal conditions and
                                 concentrated 300:1.  Internal standards are at 40 ng/uL.

-------
                      Figure 2.  Total Ion Chromatogram of methylene chloride preserved with
                                 cyclohexene which has been distilled under normal conditions and
                                 concentrated 300:1.  Internal standards are at 40 ng/uL.







-------
                      Figure 3.  Total Ion Chromatogram of methylene chloride preserved with
                                 amylene which has been distilled under normal conditions and
                                 concentrated 300:1.  Internal standards are at 40 ng/uL.

-------
                      Figure 4.  Total Ion Chromatogram of methylene chloride preserved with
                                 amylene and methanol which has been distilled under normal
                                 conditions and concentrated 300:1.  Internal standards are at
                                 40 ng/uL.

-------
                      Figure 5.  Total Ion Chromatogram of methylene chloride preserved with
                                 cyclohexene which has been distilled under the modified conditions
                                 and concentrated 300:1.  Internal standards are at 40 ng/uL.

-------
                      Figure 6.  Total Ion Chromatogram of Gel Permeation Chromatograph methylene
                                 chloride waste concentrated 300:1.  Internal standards are at
                                 40 ng/uL.

-------
                      Figure 7.  Total Ion Chromatogram of Gel Permeation Chromatograph methylene
                                 chloride waste which has been distilled under the modified
                                 conditions and concentrated 300:1.  Internal standards are at
                                 40 ng/uL.

-------
REFERENCES

1. J.T. Baker staff, "Methylene Chloride Production Process
  Yields Stable, Interference-Free Performance,"
  Chromconnection,  July 1994,  pp.  8-9.

2. SW-846, 3rd Edition, Method 3520,  Revision 1, Dec., 1987
   and Revision 2,  Nov., 1990.

3. Cornelius A.  Valkenburg,  "Recycling Program For Laboratory
   Organic Solvents," in Proceedings  of the EPA 9th Annual
    Waste Testing and Quality Assurance Symposium, July 12-16,
   1993, Arlington, VA, 1993.
                                 202

-------
                                                                                 31
  APPROACHES  TO QUALITY CONTROL OF NON-LINEAR CALIBRATION RELATIONSHIPS
                   FOR SW-846 CHROMATOGRAPHIC METHODS

Harry B. McCarty, Ph.D., Senior Scientist, Environmental and Health
Sciences  Group, Science  Applications  International  Corporation,  1710
Goodridge  Drive,  McLean, Virginia,  22102;  and  Barry  Lesnik,  Chemist,
Methods  Section,   Office of  Solid  Waste,  USEPA,   401  M  Street,  SW,
Washington, DC  20460

ABSTRACT

As part of the revisions to  Method 8000B  in the Third Update to the Third
Edition of the SW-846 manual,  EPA provides  a hierarchy of approaches that
may be used  to address  calibration of instruments for organic analyses.
The intent of this  approach  is to provide the  analyst with options to the
traditional approach of calibration factors or response factors that are
assumed to pass through the origin, and is necessitated, in part, by the
use of instrumentation, such as particle beam mass spectrometry, the
response  of  which  is best  described by  a  non-linear  relationship.   The
hierarchy progresses from the simplest, traditional approach,  evaluating
the relative standard deviation of the calibration or response factors, to
a polynomial regression model up to third order that is evaluated on the
basis  of  the weighted  coefficient  of the determination,  a statistical
measure of the variability  in the  calibration data  that is explained by
the calibration model.

INTRODUCTION

One feature  of the SW-846  manual is the series  of  "base methods" which
describe the general approach  to  specific analytical techniques.  Example
base methods  include Method  3500  (extraction procedures), Method 3600
(cleanup procedures), and Method 8000 (chromatographic procedures).  These
base methods provide  details on the many common aspects  of the procedures,
including    concentration     techniques,    calibration    requirements,
calculations, and quality control procedures.  The revision to Method 8000
(8000B) proposed  in  the Third Update  to  the Third Edition  of SW-846
provides  specific  guidance on the use  and  evaluation of both  linear and
non-linear calibration relationships.

Traditionally,  most  EPA  analytical  methods  have  relied  on a  linear
calibration,  where  the instrument response to known amounts of analyte can
be modeled as  a first  order (linear)  equation.   For methods for organic
analytes,  this equation is assumed to pass  through the origin (0,0).  The
methods  call  for  calculating  calibration factors  (CFs)  for  external
standard  calibration procedures  or response  factors  (RFs)  for internal
standard calibration procedures.  Although the forms of the calculations
differ, these factors represent the slope of a line between the origin and
the response of the instrument to the standard.

SW-846  chromatographic methods specify a five-point initial calibration,
thus five  CF  or RF  values are  generated.  The  relative standard deviation
                                         203

-------
(RSD) is used as a measure of the similarity of the five slopes.  An RSD
of  0%  means  that  the  slopes  are  identical.    This  approach  has  been
adequate  for most  methods and  offers advantages  of  ease  of  use and
understanding (i.e., lower RSD  values are  "better").  However, as EPA has
investigated new analytical techniques and reviewed existing ones with an
eye to increasing productivity  and lowering costs, the limitations of the
linear model have become more apparent.

NEW APPROACH

In Method  8000B, OSW  is proposing a hierarchy of calibration approaches
that  may be  employed.    The hierarchy consists  of  the  following  four
approaches to instrument  calibration:

         Traditional linear model, evaluated on the basis of the RSD
         Narrower linear  range, evaluated on the basis of the RSD
         Linear regression, not through the origin, evaluated on  the basis
         of the regression coefficient (R2)
         Polynomial  regression model,  evaluated on  the  basis of the
         weighted coefficient of the determination (COD)

The  first step in  the  hierarchy is  to  attempt to use  the  traditional
linear calibration model  that passes through the origin.  The RSD of the
CFs or RFs is used to evaluate  linearity.   As in earlier versions of this
base method, Method 8000B specifies a maximum RSD of 15% for most GC and
HPLC methods.  For GC/MS and HPLC/MS methods, the  QC limit for the RSD of
the initial calibration is generally 20%.
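
A minimal sketch of this check, with hypothetical areas for a five-point
external-standard calibration (for internal-standard calibration the same RSD
test is applied to response factors instead):

```python
# Calibration factors (CF = peak area / amount) and their RSD, compared
# against the 15% (GC, HPLC) or 20% (GC/MS, HPLC/MS) limits.
# The concentrations and areas below are hypothetical.
import statistics

conc = [10, 25, 50, 100, 200]               # ng/mL
area = [2050, 5020, 10300, 20100, 40900]

cf = [a / c for a, c in zip(area, conc)]
rsd = 100 * statistics.stdev(cf) / statistics.mean(cf)
print([round(f, 1) for f in cf], round(rsd, 1))   # RSD of about 1%: linearity passes
```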

If the RSD for an initial  calibration fails to meet the QC specifications,
then the second approach is to  employ a narrower concentration range with
the linear model,  again  using the RSD to evaluate the linearity.  This can
be accomplished by  eliminating one or more standards from  the  upper or
lower end  of  the calibration and  recalculating  the RSD.   If the new RSD
meets that QC specification for the method,  then the analyst must prepare
additional calibration standards  within the narrower range, as a total of
five standards are  still necessary.  If the RSD  of the new calibration
range meets the QC specification, then the analyst may proceed with sample
analyses.

Narrowing the range involves several trade-offs.  First, as noted above,
five standards  are  still necessary  for  the initial calibration,  so at
least one new standard must be  prepared.  However, assuming that this new
range is truly appropriate for the instrument in question, these standards
should not need  to  be prepared often.   Rather,  the analyst  has simply
better defined the working range of the particular instrument.  The second
significant trade-off is  that  narrowing the calibration  range  may  mean
that more samples will require dilution to  keep their responses within the
narrower linear range.  This will likely be the case when standards from
the high end of the original range are eliminated.
                                         204

-------
The  last obvious trade-off  involves reporting  sample  results when the
standards eliminated come from the  lower  end  of  the original range.  The
analyst must consider  the regulatory limits associated with the analysis
and  ensure  that  the  lowest  standard in the calibration is at  or below a
sample concentration that corresponds to the regulatory limit in question.
Otherwise,  the analysis  will not be able to  demonstrate compliance with
the  regulatory limit.

The  third option is  to  use a  linear calibration that does not pass through
the  origin.  In this case,  a linear regression of the  instrument response
versus  the concentration  of the  standards  is  performed  treating the
instrument response as  the dependent variable  (y) and the concentration as
the  independent variable (x), in the form:

                                y = ax + b
where:

      y  =  Instrument response
      a  =  Slope of the line (also  called the coefficient of  x)
      x  =  Concentration of the calibration  standard
      b  =  The  intercept

The  correlation coefficient of the  regression  (R2) is used to evaluate the
linearity of the calibration. The  analyst must take care not to force the
line through the origin, either by  including  0,0 as a calibration point,
or by using software that forces the  line through the origin.  Forcing the
line through  the origin is  analogous  to  using  the RSD to  evaluate the
calibration.  Since  the traditional calibration approach must be attempted
first, the  analyst  can be  assured  that the approach of forcing the line
through the origin will not meet the  QC specifications.  OSW believes that
including 0,0 as  a calibration point  is inappropriate for organic methods,
as it tends to skew the data in the  lower end of the calibration range.

A  regression  coefficient of 0.99  is necessary when  using  this option.
This R2 is, in fact, greater than that which would be calculated for the
traditional calibration  approach with  an  RSD  of <15%.  The increased R2
requirement  is  intended  to limit  the use of the  option  of  a  linear
regression that does not pass through the  origin  to those instances where
it is truly appropriate,  and not  simply to avoid  appropriate cleaning and
maintenance  of  the  instrument,  or  to  compensate  for  questionable
standards.

In   calculating  sample  concentrations,   the  regression  equation  is
rearranged to solve for the  concentration  (x), as shown below.

                               x = (y - b) / a
The  intercept value  (b)  generated from  the regression  must  also  be
evaluated before  reporting sample  results.    A  positive value  for the
intercept may indicate  that there  is some threshold instrument response
which  is  the  limiting  factor  in establishing  linearity.    A negative
intercept  value  can be  transformed  into  an  x-intercept  value  that
represents  a  threshold  concentration which  is the  limitation.   If the
intercept  is  positive,  then,  as  a general  rule,  results  where the
instrument response is less  than three times  (3x) the  intercept value may
be unreliable.   This  will afford some protection against false positive
results.  If the intercept is negative, results  below the concentration of
the lowest  concentration calibration standard  may be  unreliable.   These
adjustments to the quantitation limits will apply to all samples analyzed
using the regression  line.
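
The following Python sketch illustrates this third option as described above:
a least-squares fit of response on concentration, an R2 check against the 0.99
limit, back-calculation of concentrations as x = (y - b)/a, and screening of
results against the intercept.  The names and the way the flags are reported
are illustrative assumptions, not part of the method.

def fit_linear(conc, resp):
    """Least-squares slope a and intercept b for resp = a*conc + b."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(resp) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, resp))
    a = sxy / sxx
    return a, my - a * mx

def r_squared(conc, resp, a, b):
    """Coefficient of determination for the fitted line."""
    my = sum(resp) / len(resp)
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(conc, resp))
    ss_tot = sum((y - my) ** 2 for y in resp)
    return 1.0 - ss_res / ss_tot

def back_calculate(response, a, b, lowest_std_response):
    """Concentration from the regression, plus a reliability flag based on
    the sign of the intercept (see the discussion above)."""
    x = (response - b) / a
    if b > 0 and response < 3.0 * b:
        return x, "below 3x intercept - may be unreliable"
    if b < 0 and response < lowest_std_response:
        return x, "below lowest calibration standard - may be unreliable"
    return x, ""

A calibration fitted this way would be accepted only when r_squared(...) is at
least 0.99.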

The fourth calibration option is to employ a polynomial equation up to
third order, in the form:

                              y = ax + bx² + cx³

As  with  the  linear  regression model,  the  polynomial  must  treat the
instrument response as the dependent variable (y)  and the concentration as
the  independent variable  (x) .   The  model also  must produce  a unique
concentration  for each response.   In order  to  provide enough  data to
adequately model a non-linear calibration, the analyst  must either perform
triplicate analyses of five calibration standards, or single analyses of
ten, more widely-spaced, standards.

The difficulty with non-linear  (higher order) calibration models is that
a  large  number of  polynomials may  be  fit  to  the  observed  results.
Therefore,  it  can be  difficult to assess  the  "goodness  of  fit"  of a
particular model  relative to  any other polynomial.   In response, Method
8000B  stipulates  that  the non-linear  model be evaluated on the basis of
the weighted coefficient of the determination (COD).  The COD represents
the percentage of the  observed variability in the  calibration data that is
accounted for by the non-linear  equation  chosen as  the model.   The COD is
calculated as:

          COD = 1 - [ Σ(Yobs - Yi)² / (n - p) ] / [ Σ(Yobs - ȳ)² / (n - 1) ]

where the sums are taken over all n calibration points, and:

      Yobs  =  Observed response (area) for each concentration from
              each  initial  calibration point  (i.e.,  10  observed
              responses for  the 10-point  curve,  and 15  observed
              responses for the three replicate 5-point  curves)

      ȳ    =  Mean observed response from the 10-point calibration
              or from all three 5-point calibrations

      Yi   =  Calculated (or predicted) response at each
              concentration from the initial calibration(s)

      n    =  Total number of calibration points (i.e., 10, for a
              single 10-point calibration, and 15, for three 5-point
              calibrations)

      p    =  Number  of adjustable  parameters   in  the polynomial
              equation  (i.e., 3 for a third order;  2  for a second
              order polynomial)

Under ideal conditions, with a  "perfect" fit of the model to  the data, the
coefficient  of the determination  will equal  1.0.    In  order  to be an
acceptable non-linear calibration, the COD must be greater  than or equal
to 0.99.
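
A short Python sketch of the COD check follows.  It assumes the COD is the
adjusted coefficient of determination implied by the symbol definitions above
(observed responses, their mean, the responses predicted by the polynomial,
n points, and p adjustable parameters); the exact expression should be
confirmed against Method 8000B before use.

def cod(observed, predicted, p):
    """Coefficient of the determination for a fitted calibration model.

    observed / predicted: responses at each calibration point.
    p: number of adjustable parameters in the polynomial."""
    n = len(observed)
    mean_obs = sum(observed) / n
    ss_res = sum((o - c) ** 2 for o, c in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - (ss_res / (n - p)) / (ss_tot / (n - 1))

# Acceptance, per the text: cod(...) must be >= 0.99 for the chosen polynomial.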

SUMMARY

The proposed  approach  to  calibration in Method 8000B offers a number of
advantages to the analyst,  including:

         •  Increased flexibility for the analyst.
         •  Applicability to a broader range of analytical techniques
            and instruments, including HPLC-particle beam-MS, which
            exhibits a second-order response to many analytes.
         •  A prescribed hierarchical approach that specifies
            attempting the simplest model first.
         •  A straightforward numerical approach to evaluating the
            results, i.e., small RSD values are better and R2 or COD
            values should approach 1.0.

While one of the aims of this new approach is to provide added flexibility
to the  analyst, there are  a number of restrictions  detailed in Method
8000B.   First,  the purpose  of  the  hierarchical  approach is  not to allow
the analyst to  employ  any  of the  non-traditional procedures in order to
avoid necessary and appropriate  instrument maintenance or to compensate
for detector  saturation.    Second,  whatever  procedure is used,  it must
result in a unique concentration for each instrument response.  In other
words, parabolic functions or other models that would predict two or
more concentrations for a given instrument response are not allowed.

Other  potential  disadvantages  include  the  fact  that  some  of  the
calculations are different  and some, such  as  the COD,  are more involved
than a simple RSD calculation.  However, none  of these calculations are
beyond the sophistication of most instrument data systems.  The approach
of using a narrower  linear range  may require  the preparation  of  new
calibration standards  and/or dilution of more samples to keep the results
within the calibration range.

The use  of a linear  regression  that does not  pass through  the  origin
requires that  the  intercept be  evaluated  relative to  reporting  sample
results.  The polynomial regression approach requires the analysis of more
standards  than  the  other  approaches,   either  triplicate  five-point
calibrations  or  a  single  ten-point  calibration.    Lastly,  the  QC
specifications are increasingly stringent  as  one  progresses through the
hierarchy,  in order to  discourage inappropriate  uses  of  higher  order
calibrations.

Despite these  potential disadvantages, this hierarchy of  approaches may be
applied to  either  external  standard or internal  standard calibrations,
starting  from the  simplest approach  (linear,  through  the  origin),  and
proceeding  through non-linear  calibration,  as necessary.   It  should be
applicable  to any of the  SW-846 8000  series  chromatographic methods and
will be essential in the use of methods such as HPLC-particle beam-MS.

ACKNOWLEDGEMENTS

The authors wish to thank Douglas  Anderson, Michael Kyle,  Sara Hartwell,
and  Scott  Henderson  of  SAIC,  and  John Austin,   of  EPA,  for  their
contributions  to  the  development  and  review  of  this  approach  to
calibration.

REFERENCES

Method 8000B,  in Test Methods for  Evaluating  Solid Waste,  SW-846,  USEPA
Office of Solid Waste,  Third Edition,  Proposed Update 3, January 1995.
                                                                                 32
    HOW LOW CAN WE GO?  ACHIEVING LOWER DETECTION LIMITS WITH MODIFIED
                     "ROUTINE" ANALYTICAL TECHNIQUES

P. Marsden and S.F. Tsang, Senior Chemists, SAIC, San Diego, California
92121, and B. Lesnik, Organics Program Manager, OSW/EPA, Washington, D.C.
22043.
ABSTRACT

As illustrated by the Regulatory Issue Workshop at this Symposium, there
is a demand for lower detection limits for many environmental
pollutants.  Because most EPA methods can be adapted to provide lower
limits of quantitation, it is our experience that modification of
routine methods is almost always the fastest and most cost-effective
approach to achieving additional measurement sensitivity.  The
techniques for modifying routine methods include:

•  the extraction of larger samples
•  collection of large samples with stack sampling trains
•  additional concentration of the final extract
•  incorporation of additional cleanup procedures to reduce background
   signal
•  derivatization of specific analytes
•  use of more sensitive instrumentation
•  reduction of laboratory contamination.

As with any laboratory measurement technique, modified methods must be
tested to document their performance with the particular sample matrix
of concern to the client.  This presentation will provide strategies for
achieving lower quantitation limits illustrated using several specific
examples.  In addition, recommendations for testing proficiency with
modified methods and quality control practices suitable for low
detection level analyses are provided.
INTRODUCTION

More than 10 years ago the US EPA developed prescriptive methods for
the Superfund Contract Laboratory Program (CLP).  These methods have come
to be treated as the "de facto" methods for the determination of semivolatile
organics, extractable organics, organochlorine pesticides and metals in
water and soil.  While standard methods such as those prescribed by the
CLP can facilitate the intercomparison of data, they can also limit the
use of newer techniques for sampling and sample preparation.
Prescriptive methods can also slow the adoption of new, more sensitive
chemical instruments that can allow analysts to achieve lower detection
limits.  SW-846 uses a modular approach for describing analytical
methods.  Compatible modules for extraction/digestion, sample
preparation and measurement are combined to provide an analytical scheme
that satisfies project data quality objectives (DQOs) such as lower
detection limits.

Because SW-846 modules are designed for regulatory applications, they
are rugged,  reproducible and contain embedded quality assurance
procedures.  Each module undergoes several levels of performance testing
and review before it is considered by an SW-846 workgroup.  These
constraints generally limit the analyst's ability to select true state-
of-the-art techniques (e.g., capillary zone electrophoresis).  Rather,
analysts are limited to modifying standard methods in order to achieve
the desired performance, such as lower detection limits.  Depending on
the analyte(s) and the matrix to be analyzed, lower detection limits may
be achieved by:

•  extracting/digesting larger samples (water and soil)
•  collecting larger air samples with stack sampling trains
•  concentrating the final extract/digestate prior to analysis
•  using additional cleanup procedures to reduce background signals that
   interfere with the target analytes
•  derivatizing specific analytes in order to increase signal-to-noise
•  using more sensitive analytical instrumentation
•  reducing laboratory contamination
•  minimizing contamination during sample collection.

Use of the DQO process is central to modifying routine analytical
techniques.  Targets for quantitation limits, accuracy, precision and
matrix suitability are established through this process.  The sections
below describe specific modifications to analytical procedures that
allow these targets to be achieved.


EXTRACTION OF  LARGER SAMPLES

Environmental analysis requires that target analytes be isolated from
sample matrices (air, water, leachates, soil, sediment or tissue).  An
obvious way to lower detection limits is to increase sample size or to
improve the selectivity of the sampling process.  Analysts are cautioned
that collecting larger samples can increase the heterogeneity of the
material taken to the laboratory.  This can be in the form of spatial
heterogeneity (i.e., larger samples contain more volume of incompletely
mixed environmental matrices) or temporal heterogeneity (i.e., larger
samples can include pollutants generated during discontinuous events).1

Increasing the size of an air or stack sample is more complex than just
using a longer sampling time or a larger sampling train.  Analysts must
ensure that target analytes do not break through the sorbent during
sampling.   Analysts must also establish that the sample collected over a
longer time is representative of discontinuous emission events  (e.g.,
different plant processes or changes in incinerator feedstock).

Most air and stack samples are still collected using Tenax (VOST 0030)
or XAD II and Tenax (Semi-VOST 0010).  Charcoal is needed to  retain some
volatiles for some applications.   Anasorb 747 (a beaded, activated
carbon)  is preferred over normal charcoal because it is easier to desorb
target analytes from Anasorb.  Newer sampling materials are not adopted
until thorough ruggedness and performance testing of those
materials  is completed.  Based on such tests, the Summa canister
(Method TO-14) is currently gaining favor for some air sampling
applications.
Lower detection limits  for volatile  organic  analytes  (VOAs) in water can
be achieved using  25-mL purge vessels  or  azeotropic distillation for
polar VOAs  (Method 5031).

Larger water samples can be extracted  in  order to achieve lower
detection limits for semivolatile  organics  (Semi-VOA's) and
organochlorine pesticides using larger  glassware, continuous extraction
processes or solid phase extraction  (draft Method 3535).  Investigators
at the EPA  Laboratory in Duluth/Grand  Isle and Ed Furlong of the USGS
are using custom-made solid phase  materials  to extract water samples
that are greater than 5 liters.  Pesticide manufacturers like Zeneca
(formerly ICI) also use solid phase materials to improve the selectivity
of extractions.  ICI demonstrated that passing a water sample through a
strong anion exchange material before using a C18 sorbent can remove
analytical interferences and increase the capacity of the solid phase
sorbent for specific apolar target analytes (e.g., pyrethroid
insecticides).  In some cases, the recovery of apolar analytes using C18
media can be improved by adding salt to a water sample prior to
extraction.

Ionic or ionizable pollutants such as 2,4-D can be extracted by
adjusting  the  sample pH to  produce a  cationic  or  anionic form  of the
target analyte.  The water sample  is then passed through an ion exchange
resin in order to  remove the target  ionic species.  After extraction, an
appropriate  acid  or base  is used  to change the  ionic  form  of  the
compound to a neutral molecule which is eluted with solvent.

Large soil samples (100 to 500 g) are often required to achieve the low
quantitation limits required for modern pesticides.  They are extracted
with methanol or methanol/water using  a shaker table or a wrist action
shaker.  After extraction, particulates are  removed by centrifugation or
filtration.  Target analytes are back  extracted into methylene chloride
after adding salt  or water to the  aqueous methanol extract.  This back
extraction  step partitions apolar  and  semi-polar compounds away from
polar interferences extracted by the methanol.  The final extract is
dried using sodium sulfate.  This  technique  will be considered for the
fourth update of SW-846 as proposed  method 3570 after performance
testing as  a multianalyte procedure.

Minimum sample size should be established during project planning as
part of the DQO process.  This exercise requires that the analyst back-
calculate the sample size using the  target quantitation limit,
instrument  quantitation limit (IQL), final extract volume and the
anticipated analytical  recovery:

   target quantitation limit = (concentration in sample) / (recovery)

   (target quantitation limit x sample size) / final extract volume = IQL

   sample size = (IQL x final extract volume) / target quantitation limit

where:
      recovery < 100%
      IQL = instrument quantitation limit
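
As an illustration only, the relations above can be folded into a single
expression that also carries the recovery term; the Python sketch below uses
assumed units (micrograms, milliliters, grams) and illustrative names, and is
not part of any SW-846 method.

def minimum_sample_size(iql, extract_volume, target_ql, recovery):
    """Smallest sample size (g) that still meets the target quantitation limit.

    iql            -- instrument quantitation limit in the extract (ug/mL)
    extract_volume -- final extract volume (mL)
    target_ql      -- target quantitation limit in the sample (ug/g)
    recovery       -- anticipated analytical recovery as a fraction (< 1.0)
    """
    if not 0.0 < recovery <= 1.0:
        raise ValueError("recovery must be a fraction between 0 and 1")
    return (iql * extract_volume) / (target_ql * recovery)

# Example: a 0.5 ug/mL IQL, a 10-mL extract, a 0.05 ug/g target QL and 80%
# recovery give minimum_sample_size(0.5, 10.0, 0.05, 0.8) = 125 g of soil,
# consistent with the 100 to 500 g samples discussed above.
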
MINIMIZE CONTAMINATION DURING SAMPLE COLLECTION

The contamination of samples collected for lead analysis was documented
by Patterson and Settle3 in response to the tuna fish contamination
scare of the 1970s.  Fitzgerald and Gill4 adopted a similar approach for
environmental  mercury analysis.  Both groups found that the use of acid
washed Teflon collection vessels, trained staff and frequent changes of
gloves can minimize contamination of samples collected for metals
analysis.  VOA contamination can also present problems, particularly
when gasoline powered vehicles or generators are in use during sampling.

Contamination is not the only problem encountered during sampling.
Reactive or volatile analytes (e.g., mercury and VOAs) can be lost
during transport and storage if the proper preservatives or other
precautions are not employed.5  Minimizing the loss of analytes is a
critical aspect of trace-level analysis.


ADDITIONAL CLEANUP PROCEDURES TO REDUCE BACKGROUND SIGNAL

      Analysis  of environmental  samples  often  requires  a  multi-step
sample preparation  process  to isolate trace-level  components from  the
sample matrix.  The purpose of these steps is to isolate and concentrate
the target analytes into a final extract  that can be  analyzed  with good
accuracy and precision.  Cleanup and derivatization can minimize:

•     false positives due to non-target peaks that elute within the
      analyte retention time window (HPLC and GC)
•     false negatives due to degradation of labile analytes (GC)
•     poor quantitation due to elevated baselines (GC and HPLC)
•     quantitation limits above the action limits (GC and HPLC)
•     retention time shifts due to column overloading (GC and HPLC)
•     damage to chromatography columns caused by deposited materials (GC
      and HPLC)
•     instrument downtime due to the need to clean injector ports or to
      replace precolumns (GC and HPLC).

Sample cleanup  can be  accomplished using  mini-columns  (e.g.,  Pasteur
pipettes),  open chromatography  columns,  solid phase  cartridges,  porous
disks (Empore™) or glass fiber disks (SIMDisk™).  Disks generally
have higher sample capacity than solid phase  cartridges  or mini-columns
and do  not  require  the  training  needed  for open  column  techniques.
However,  open  columns  should  be  used   if  large  sample  capacity  is
required.

Polar organic materials (e.g., phenols, humic acids or amines) are
adsorbed onto the stationary phase (Florisil™, silica or alumina).
Cleanups based on adsorption techniques (Methods 3610, 3620 and 3630)
are generally  suitable for  neutral or  slightly  polar  compounds.   An
organic  solvent is used to  elute  the  less polar analytes while  leaving
the polar interferences in place.   Columns should  not  be  overloaded.
The ability of the adsorbent to retain chemicals is called its activity.
The addition of water (or a wet extract) will reduce the activity of any
of these adsorbents.  The ability of a solvent to elute compounds from
the adsorbent is called its eluotropic strength (methanol > ethyl
acetate > methylene chloride > ethyl ether > toluene > hexane).  The least
polar compounds elute from the solid adsorption media earliest; elution of
more polar material requires additional volumes of solvent or stronger
solvents.

Reversed  phase  cleanup  is  achieved  through the interaction  of  the
analytes  and  interferences  with  silica  derivatized  with  silyl  ethers
(e.g., C8 or C18) or with styrene divinylbenzene.  This technique is
called  reversed phase  because  the mobile  phase  is more  polar than the
stationary  phase.    Apolar  compounds  are  retained  on  the column  and
semipolar   analytes  are   eluted   with  aqueous   methanol   or  aqueous
acetonitrile.   Ionic species are generally eluted using water  or buffer.
Reversed  phase  cleanups  are generally  accomplished  using  solid  phase
cartridges  and porous  disks.   HPLC can  also  be  used  for reversed phase
cleanups; however, it is expensive and relatively labor intensive.

Reversed  phase  cleanups  can also  be  used  for  ionic  species when  ion
pairing reagents are added to the elution solvent.  Quaternary ammonium
salts are added to extracts to form ion-pairs with anions (e.g.,
phenolates) which then behave like neutral molecules and are retained on
the reversed phase media.  The retained ionic species can then be eluted
by removing the ion pairing reagent from the mobile phase.

Metal  ions or ionizable  organic  compounds  can be  isolated  using  ion
exchange  media.   The extract pH is  adjusted  in  order  to  ionize  target
analytes  as cations  or anions.   The extract  is  then  passed  through ion
exchange  media.    Cleanups  using  ion  exchange  media  can   be  highly
selective  allowing  the separation of  very polar  species  that are  not
amenable  to solvent partitioning techniques.

Apolar and polar organic constituents in extracts can be separated by
partitioning between immiscible solvents (e.g., methylene chloride/water
or hexane/acetonitrile).  Generally, apolar target analytes dissolve into
the less polar solvent while polar species partition into the polar
solvent ("like dissolves like").

Gel  permeation  cleanup  (GPC)  is  a  size  exclusion  technique using  a
styrene divinylbenzene column.  This column packing has  numerous  pores
that  allow the entry of  small  molecules while excluding  high  molecular
weight  chemicals.    Large,   unretained  molecules  elute  earlier  while
smaller molecules  have longer  retention  times.   High  molecular  weight
interferences that can be removed by GPC include waxes, resins,
paraffins, humic acids and lipids.  There are two forms of GPC systems:
(1) a higher capacity, low pressure system that requires more solvent,
and (2) a more modern, lower capacity, high efficiency system that uses a
higher pressure pump.

Using either type  of GPC for  environmental  analysis   requires that  the
column be calibrated with the target analytes and several molecular size
indicators (usually corn oil, diethylhexyl phthalate and
pentachlorophenol).    Most   semi-VOAs  elute  from  the  GPC   after  the
phthalate esters and before pentachlorophenol.     GPC  cleanup of samples
containing organophosphorous  insecticides  (OPs)  is  not  appropriate  as
some OPs elute with the corn oil.

Mercury or shiny copper is used to remove elemental sulfur from sediment
extracts prior to analysis for  organochlorine  pesticides (Method 3660).
The sulfur is reduced to sulfide ion and the mercury or copper is
oxidized, forming a black, insoluble metal sulfide.  Whenever sulfur
contamination is  a  problem,  copper  or  mercury should  be added  to  the
extracts until no additional sulfide is formed.

Use  of mercury  to  remove sulfur  is in  decline  due  to  environmental
concerns. Granular  copper  is  an alternative:  it  should be  prepared  by
first  pouring dilute hydrochloric acid  over  the copper  granules.   This
shiny  copper should be  rinsed  with  reagent water and  drained  to  remove
the hydrochloric  acid before  it is  added to the  extract.   Acid washing
the copper is necessary because even a thin coat of oxidized copper will
prevent its reaction with sulfur.


CONCENTRATE THE FINAL EXTRACT PRIOR TO ANALYSIS

Lower detection limits can be achieved by decreasing the final volume of
extracts and digestates; however, that approach has significant
limitations.  Analytical precision decreases significantly when the
final volumes are less than 0.5 mL primarily because it is difficult to
(1) reliably reproduce these volumes and (2)  quantitatively transfer
small volumes.   While dioxin methods (i.e., 8280 and 8290) use stable
labeled analogs to correct for these problems,  isotope dilution methods
are not really practical for routine environmental analysis.

Use of smaller final extract volumes for organic analysis can also
result in the oiling out of apolar interferences or phase separation in
autosampler vials due to the presence of residual water.  Insoluble oils
can trap both semi-VOAs and pesticides.   Drying extracts to remove water
can result in the loss of polar analytes such as 2,4-D, even when
acidified sodium sulfate is prepared according to the instructions in
Method 8151.  Analysts must carefully inspect final extracts for
evidence of heterogeneity whenever concentration techniques are used to
achieve lower detection limits.


DERIVATIZE SPECIFIC ANALYTES TO INCREASE SIGNAL-TO-NOISE

Analytes with  reactive  organic  functionalities may  be derivatized  to
decrease  detection   limits.     Analysts  are   cautioned   that   these
derivatizing reagents react somewhat unselectively and can significantly
increase the potential  for false  positives.  Pentafluorobenzyl  bromide
(PFBBr) is used to prepare pentafluorobenzyl  esters of carboxylic acids
and PFB ethers of phenols prior to GC/ECD analysis.  Samples are
added to aqueous  sodium  carbonate  and the PFBBr  is  added  in methylene
chloride.    Tetrabutylammonium  hydrogen  sulfate   serves  as  a  phase
transfer catalyst  for the  reaction.    While  this two phase  reaction
system  limits   the  hydrolysis  of  PFBBr  and  reduces  the  amount  of
interferences   from   the   sample   extract,   PFBBr   derivatives   of
environmental  samples  are complex with many  large  peaks resulting from
the derivatives  of non-target  compounds.

The use  of PFBBr has other disadvantages.   It is  a  lachrymator and is
unpleasant  to use.  PFBBr derivatives  cannot be  stored for  more than
several  days,  and  some lots of  PFBBr have many  impurities which makes
interpretation   of   results   nearly   impossible.     Despite   these
difficulties,  derivatization with PFBBr  is  often the  only way to achieve
low  detection/quantitation limits  required  for  the analysis  of some
compounds.

Certain compounds (particularly pesticides and pharmaceuticals) can be
derivatized  to  produce  fluorescent  species  which greatly  improve  the
sensitivity and selectivity of HPLC analysis.  Fluorescent derivatives
may be produced  by the reaction  of molecules  in the sample  extract prior
to analysis  or by a post-column  derivatization reaction.

Fluorescent  species can  also  be  produced in  a  post-column  reaction.
Analysis  of  carbamate insecticides  (Method  8315) is one  such application
of  a  post-column  derivatization  technique.   Resolution  of  related
analytes  is  less of a  challenge using  post-column methods because  the
derivatives  are  formed after the chromatographic separation.

Certain  pesticides containing  sulfur  are oxidized  to  make them suitable
for GC analysis  or to generate a common moiety from related metabolites.
Oxidations   are  usually  accomplished with  meta-chloroperbenzoic  acid
(MCPBA).   In methods for Aldicarb, Fenamiphos and  Fenthion,  the parent
pesticide and the sulfoxide (S=O) in its metabolite are oxidized to the
corresponding sulfone (O=S=O).


USE MORE  SENSITIVE ANALYTICAL  INSTRUMENTATION

A number  of  manufacturers have developed  more sensitive  analytical
instrumentation.  The inductively coupled plasma/mass spectrometer
(ICP/MS, Method 6020) is capable of measuring many metals at concentrations
an order of magnitude lower than optical ICP instruments (Method 6010).  The ion
trap mass spectrometer can achieve detection  limits an order of
magnitude lower  than most full scan quadrupole instruments.  GCs with
electronic pressure control in the injector port produce narrower peaks
for late  eluting compounds.  This improvement in chromatography can
result in lower  detection limits.


REDUCE LABORATORY CONTAMINATION

Lower detection limits mean that laboratories must reduce contamination
that interferes with the measurement of target analytes.  Dr. C.
Patterson3 first raised this issue for the analysis of lead in food.
Dr. E. Heithmar at the EMSL-LV  (personal communication) found that low
levels of laboratory contamination limited the lower detection limits
that could be achieved using Chelex™ resin and ICP/MS.  These
investigators documented that metals can be introduced by dust,
including the particles found in the hair of analysts.  Use of cleanroom
techniques, "tacky mats" to limit the introduction of dusts as well as
gloves, coats and hats limited contamination in trace level analyses.  Bloom6
and Prevatt7 recently reviewed the requirements for cleanroom techniques
suitable for environmental analysis.

Analysis of volatile organic chemicals is also subject to contamination
problems.   This is most clearly demonstrated by the increased reporting
levels for acetone and methylene chloride used by routine analytical
services laboratories.  These increased reporting levels are generally
the result of contamination of samples by the solvents used to extract
water and soil samples.  Some laboratories minimize contamination by
maintaining separate air supplies for their volatile and semivolatile
laboratories.  This solution is most appropriate when designing a new
facility and has only limited application for an existing laboratory.
Field laboratories are often plagued with benzene and xylene
contaminants introduced by the combustion of fuels (e.g., from field
generators).

Lopez-Avila and Beckert8 documented the source and ubiquitous nature of
phthalate esters that can interfere with the analysis of semi-VOAs and
pesticides.

Elimination of laboratory contamination is a serious concern in trace
level analysis.  Bloom6 describes a need for "paranoid zeal" in order to
successfully eliminate these contaminants.


DEMONSTRATE LABORATORY PROFICIENCY WITH SENSITIVE ANALYSES

Proficiency testing is particularly critical when modifying analytical
procedures to provide improved performance such as lower detection
limits.  SW-846 describes the process of demonstrating laboratory
proficiency in Section 8 of Method 8000B.   As a first step:

      8.4   Each laboratory must demonstrate initial proficiency with
      each combination of sample preparation and determinative methods
      that it utilizes, by generating data of acceptable accuracy and
      precision for a reference sample containing the target analytes in
      a clean matrix.  The laboratory must also repeat this
      demonstration whenever new staff are trained or significant
      changes in instrumentation are made.

Analysts and laboratory management should evaluate the results of
initial proficiency tests in terms of accuracy,  precision,  percent of
false positives, percent of false negatives and the number of rejected
analyses.  The evaluation should include an examination of interferences and
calibration data.  Some laboratories are able to satisfy the requirement
for a 20% relative standard deviation for the initial calibration
despite a non-detect at the low-point calibration.  This is not
acceptable for trace-level analysis.  The low-point calibration analysis
should produce a signal at least 10 times the chromatographic or mass
spectral background level in order to be suitable for analyzing
environmental extracts containing target analytes at that concentration.
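
A trivial Python sketch of that acceptance check, with illustrative names and
the ten-fold ratio taken from the text, follows.

def low_point_acceptable(low_standard_signal, background_signal, ratio=10.0):
    """True when the lowest calibration standard gives a signal at least
    'ratio' times the chromatographic or mass spectral background."""
    return low_standard_signal >= ratio * background_signal
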
An initial demonstration of proficiency ensures that the laboratory
staff is capable of trace-level analysis and demonstrates that
laboratory contamination is under control.  However, it does not
necessarily ensure that the laboratory or analyst is capable of
analyzing real-world samples at the target quantitation limit.  This
requires analysis of characterized contaminated matrices or real-world
matrices spiked at the target quantitation limit.  Use of real-
world performance materials is described in the next section.


SPECIFIC QA/QC PROCEDURES

Trace-level measurements require an analytical system that is under
reliable statistical control.9  This control is central to a
comprehensive measurement QA program described in Chapter 1 of SW-846.
One aspect of this measurement QA program is a demonstration of
laboratory proficiency in trace level analysis.  The laboratory must
analyze environmental matrices spiked at the target quantitation limit.
Analyses of these spiked materials should provide the method accuracy
and precision specified in the project DQO.  Whenever appropriate,
characterized reference materials containing target analytes at or near
the target quantitation limit should also be analyzed as part of the
demonstration of laboratory proficiency.  Spiked surrogates, matrix
spikes and duplicate analyses described in SW-846 Chapter 1, or other
appropriate methods, also help demonstrate laboratory proficiency and
document the performance of modified methods.  In addition,  the
determination of incorporated, non-target, pollutants may provide an
additional measure of method performance.  For example, analysts using
SW Method 8081B or Method 8082 for the analysis of trace-level
organochlorine pesticides or PCBs in tissues should expect to see DDE, a
near universal contaminant of animal tissues, in the chromatograms of
all trace level analyses, even after application of sulfuric
acid/permanganate cleanup (Method 3670).


ACKNOWLEDGEMENTS

The authors gratefully acknowledge the discussions with Dr. Larry
Johnson, AREAL/EPA, RTP, NC, while preparing this manuscript.


REFERENCES

1.  Keith,  L.H. (ed.).  Principles of Environmental Sampling. 1988,  ACS
Books.

2. draft Environmental  Chemistry Methods Manual,  in preparation.
Analytical Chemistry Branch, USEPA, Office of Pesticides Programs.

3. Patterson, C.C., and D.M. Settle, "The Reduction of Orders of
Magnitude Errors in Lead Analysis of Biological Tissues and Natural
Waters by Evaluating and Controlling the Extent and Sources of
Industrial Lead Contamination Introduced During Sample Collection,
Handling and Analysis", 1977, in Accuracy in Trace Analysis: Sampling,
Sample Handling and Analysis, P.D. LaFleur (ed.), NBS STP 422, pp. 321-
351.

4. Gill, G.A. and W.F. Fitzgerald, "Mercury Sampling of Open Ocean
Waters at the Picomolar Level", 1985, Deep Sea Res., 32, 287.

5. Fitzgerald, W.F. and C.J. Wantras, "Mercury in Superficial Waters of
Wisconsin's Lakes", 1989, Science of the Total Environment, 87/83, 223.

6. Bloom, N.S., "Ultraclean Sample Handling", March/April 1995,
Environmental Lab, 7, 20.

7. Prevatt, F.J., "Clean Chemistry for Trace Metals", March/April 1995,
Environmental Testing and Analysis, 4, 24.

8. Lopez-Avila, V., J. Milanes and W. Beckert, 1986, "Phthalate Esters
as Contaminants in Gas Chromatography", EPA Project Report, Contract
#68-03-3226.

9. Taylor,  J.K.  Quality Assurance of Chemical Measurements.  1988,  Lewis
Publishers.
                                                                                                            33
            NON-PHTHALATE PLASTICIZERS IN ENVIRONMENTAL SAMPLES

James Bairon, Chemist, U.S. Environmental Protection Agency, Central Regional Laboratory, Region III,
Annapolis, MD 21401, Edward Messer, Chemist, U.S. Environmental Protection Agency, Central Regional
Laboratory, Region III, Annapolis, MD 21401

ABSTRACT

Phthalate plasticizers are on all EPA "lists."  However, only drinking water regulations include a non-phthalate
plasticizer, bis(2-ethylhexyl) adipate.  In a recent water quality monitoring project on the Chester River,
in Maryland, we had authentic standards previously obtained from a former plasticizer manufacturer on the
Chester River by the Md. Dept. of Natural Resources.  These materials included both phthalate and non-
phthalate plasticizers.  The non-phthalates included adipates, maleates, a sebacate, a benzoate, and a
trimellitate.  All the materials were technical grade, containing the various isomers of that material.  One
of the adipates manufactured at the Chester River site, di-octyl adipate, is one of the compounds on the
original consent decree list.  One of its isomers, di(ethylhexyl) adipate, is a drinking water analyte.  We
were examining river sediments at low ppb levels.  Most of the plasticizers supplied were "non-target
compounds."  Our results indicated both phthalate and non-phthalate plasticizers were present in the
samples.  We feel the results show non-phthalate plasticizers could be useful indicators at sites where non-
phthalate plasticizers have been used, typical applications being in lubricants, coatings and low temperature
applications for plastics, particularly polyvinyl chloride formulations.

Introduction

Environmental chemists deal with the effects of industrial chemical processes, but may not be familiar with
a particular industry, or uses of a common chemical in a variety of industries.  Even a novice
environmentalist is aware of the wide use of polyvinyl chloride (PVC) and the effects of its monomer,
vinyl chloride.  They may also be aware that phthalate plasticizers are widely used in PVC and other plastics,
but what about the non-phthalate plasticizers?  In a recent study to determine any effects of airborne
industrial pollutants on water quality in the Chester River in Maryland, we had reason to address that
problem.  In the early 1980's, while monitoring for permit violations by a firm located in Chestertown, MD
on the Chester River, which manufactured plasticizers, we had received samples of the products made by
this firm, which included both phthalate and non-phthalate plasticizers.  The problem was settled
administratively, and the materials were never really looked at, or utilized.  In the fall of 1994 CRL agreed
to do analytical work for the State of Maryland, Dept. of Natural Resources to survey pollutants in the
Chester River sediment.  It was realized that the previously obtained materials were still available.  The list
of compounds included phthalates we had never heard of, and non-phthalates such as adipates, a maleate,
a sebacate, a benzoate, and a trimellitate.  While we were aware that ethylhexyl adipate was a drinking water
analyte, and vaguely aware that adipates were a group of non-phthalate plasticizers, we had never heard of
some of the other compounds such as 7-11 phthalate and trioctyl trimellitate.  These compounds are listed
in table one.  Realizing that we had C4, C6, and C10 dibasic acid esters, and the aliphatic dibasic acids
being an important class of industrial materials, we felt it might be interesting to see which of these
materials, as well as other diesters in their grouping, i.e. adipates, were in the NBS75K Mass Spectral library,
which is the one we use for analytical work.  We also wanted to see what other groupings (C3, C5, C7, C8
and C9) might be present in this library.  These are presented in table 2.

                                             NOTICE

Due to time constraints, this paper was not subjected to Agency review. Therefore it does not necessarily
reflect the views of the U.S.  Environmental Protection Agency, and no official endorsement  should be
inferred.
Experimental

A set of sediment samples from sampling stations on the Chester River were analyzed by our usual
procedures for the organic semi-volatile Base Neutral Acid fraction. All sediment samples at our laboratory
are extracted by RCRA Method 3540A, Soxhlet Extraction.  Analysis is by capillary column GC Mass
Spectrometry using CRL BNA SOP R3-QA201.0, which is a consolidated procedure derived from SDWA
525.2, NPDES 625, RCRA 8270 and the current CLP Statement of Work.  A 30-meter DB-5, 1-micron film
thickness capillary column was utilized.  Before acquisition of any sample data, the mass spectrometer is
calibrated by obtaining the spectrum of a known compound (DFTPP).  All mass assignments and relative
abundances are found to be in acceptable ranges or the instrument is adjusted until an acceptable spectrum
of DFTPP is obtained, according to the Superfund CLP Organic Low/Medium Statement of Work (SOW).
Immediately before analysis, each sample is spiked with the internal standard mix used in the current CLP
Statement of Work for semivolatiles.  All quantitation or estimates of concentration are made in comparison
to the  internal  standard nearest the compound of interest.   Mixed standards of Extractable Priority
Pollutants and CLP Hazardous Substances List Compounds (10-100 ng range) are analyzed before each group
of samples.  These are traceable standards obtained from certified vendors. The target compound results
are not reported here but are available.  bis(2-ethylhexyl) Adipate was run as a separate standard.

For each group  of samples extracted, a method blank is prepared and examined for laboratory introduced
contamination.  All reported target compound values are qualified with a "B" if less than or equal to 10x
the concentration determined in the field and/or laboratory blank.  All samples were spiked with a mixture
of six surrogate  compounds prior to extraction. The percent recovery for each was determined to check for
matrix effect. The target limits are those established for the Superfund CLP Organic Low/Medium SOW.
Eighty-six of ninety surrogates recoveries were within the recommended Quality Control Limits.  These
results  are not reported here,  but are available.  Two aliquots of sample  941121-17 were spiked with a
priority pollutant cocktail containing twelve compounds at 100 ng/uL (in the extract). These spiked samples
were then carried through both the extraction and GC/MS analysis. The percent recovery for each spiked
compound was determined to check for matrix effect.  The percent recoveries have been corrected for target
compounds present in the sample.  The target limits are those established for the Superfund CLP Organic
Low/Medium SOW. Eighteen of twenty-four matrix spike recoveries and six of twelve RPDs were within
acceptable QC limits for this case.  The semivolatiles that were of the utmost interest in this case,
plasticizers (phthalates/adipates) did not seem to have suffered from matrix effects as evidenced by the
MS/MSD results. These results are not reported here but are available.
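
The blank-qualification rule used above is simple enough to state as code; the
Python sketch below (illustrative names only) returns a "B" qualifier when a
result is less than or equal to ten times the level found in the associated
field or laboratory blank.

def qualify_against_blank(result, blank_result, factor=10.0):
    """Attach a 'B' qualifier when blank contamination may account for the result."""
    if blank_result > 0 and result <= factor * blank_result:
        return result, "B"
    return result, ""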

Discussion

Besides analyzing the plasticizers we had, the various environmental databases were searched for information
on these compounds, particularly the odd mixtures.  What chemical you  have exactly is important  with
industrial chemicals, since use of synonyms is somewhat loose.  The Toxic Release Inventory (1) indicated
that the dioctyl adipate we had, from the Chestertown manufacturer, was actually bis(2-ethylhexyl) adipate,
which was later confirmed.  According  to the TRI  there had been a release in 1987.

The plasticizers were  run at  100 ppb to establish retention  times  and spectra, then rerun to establish
quantitation limits. Some of the compounds, the monoesters,  had very high limits (>100 ppb) and would
not be seen  except in  spill situations.  Two of the phthalates are already  target compounds and separate
standards were not prepared.  The phthalates found were the usual low level target phthalates and are not
reported in this paper.  It was interesting to us that 7-11 Phthalate was listed in reference 2, and is a mixed
group of alkyl phthalates in that carbon atom range, as figure (1) indicates.  The mixed phthalate esters
are also a multipeak mixture of alkyl phthalates.  We couldn't find out (easily, so we gave up) what 6-10
phthalate was.   But a  drum sample or a spill of these mixed esters could plaster the characteristic 149 ion
across most  of a typical 30-300 deg. C  chromatogram.
The information from the TRI database indicated the Chester River facility may have changed owners,
confusing the issue as to Di-n-octyl adipate or Bis(2-ethylhexyl) adipate.  The Chester Dioctyl adipate was
compared with a known standard of the ethylhexyl adipate.  Spectra and retention times indicated the
Chester River material was Bis(2-ethylhexyl) adipate (fig. 2,3).  The other available materials listed in table
one were also run except for the epoxidized soybean oil, which looked as though it had polymerized.
These materials and other diesters from C3-C10 were searched for CAS  numbers(2).  The NBS spectral
library we utilize was then searched for entries.  We were looking for patterns such as the phthalate 149,
167 masses that could be utilized.  The adipates have masses at 129 and  147 for most of the isomers we
had spectra for.  It did seem as though diesters with alkyl groups higher than diethyl would fragment to give
the same base peak.  The sebacates apparently are widely used, but the  spectra didn't show any  useful
patterns.

In our routine sample analysis target compounds are identified and quantitated, then  tentatively identified
compounds (TICs) are reported.  However, our program, which is typical of most instrument software, only
picks those peaks which are at 20% of the nearest internal standard to avoid doing library searches on noise,
but this can miss low level contaminants, and in our initial sample run it did.  We then generated ion
chromatograms at masses 99, 129, 105, and 305, since once contaminants are identified you can quantitate
them at lower levels.  The only masses that were present, other than the 149 phthalates, were the 129 and 147
adipate masses.  Bis(2-ethylhexyl) adipate was added to our normal quantitation program as an external
standard.  The results are reported in table 3.  We were somewhat surprised that we had the adipate in our
blanks, since the standards  had  been  kept sealed and refrigerated.  Reviewing sample  blanks data on
archived magnetic tapes indicated the presence of adipates in some blanks.  This, and the wide range of
adipates with extremely similar spectra, have implications for drinking water analysis by method 525, in that
retention time windows are as important as spectra.
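
A minimal Python sketch of that kind of extracted-ion screen is shown below.
The scan data structure (a list of retention time / spectrum pairs) is an
assumption for illustration, not any particular instrument vendor's format;
the target masses are those discussed in the text, with the adipate confirming
ion at 147 added.

TARGET_MASSES = (99, 105, 129, 147, 305)

def extracted_ion_chromatograms(scans, masses=TARGET_MASSES, tolerance=0.5):
    """Build one extracted-ion chromatogram per target mass.

    scans: iterable of (retention_time, {mz: abundance}) pairs.
    Returns {mass: [(retention_time, summed_abundance), ...]}.
    """
    eics = {m: [] for m in masses}
    for rt, spectrum in scans:
        for m in masses:
            abundance = sum(a for mz, a in spectrum.items()
                            if abs(mz - m) <= tolerance)
            eics[m].append((rt, abundance))
    return eics

Because the extracted-ion traces are not tied to the 20% internal-standard
threshold, low-level adipate or benzoate peaks that the normal peak-picking
would skip can still be found and then quantitated against an external
standard.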

Mixed texanol benzoates, another multi-peak mixture, gave the usual 105 base peak for benzoic acid for all
the components.  So we have another target compound that can have its quantitation ion across most of
the mass chromatogram.  The trioctyl trimellitate (fig. 4) is a specialty plasticizer whose major use, according
to the Hazardous Substances Data Bank (HSDB) (4), is in PVC for electrical applications.  This could help in
targeting sources  at specific  waste sites related to electrical manufacturing.

CONCLUSION

It is important to be aware that there are other types of materials used as plasticizers besides the phthalates.
The non-phthalate materials are widely used as additives and plasticizers in paints, synthetic lubricants, and
hydraulic fluids.  The adipates offer a group which has consistent masses, 129 and 147, which can be easily
monitored, and which in the case of ethylhexyl adipate is regulated.  Other non-phthalate plasticizers also seem
to have consistent base peaks with the longer alkyl groups; a search program to produce ion chromatograms
of these masses, including the adipates, can easily be set up on the software currently used on most mass
spectrometers.  Where a hazardous waste site has an industrial categorization that commonly uses the non-
phthalate materials, a search for those contaminants could yield additional useful information.

REFERENCES

(1) EPA  Toxic Release Inventory, 1987-1992, Office of Pollution Prevention and Toxic Substances, EPA
   Washington, DC.
(2) Howard, Phillip, and Neal, Michael, "Dictionary of Chemical Names and Synonyms," Lewis Publishers,
   1992
(3) NBS75K.L Mass Spectral Library
(4) TOMES Plus, Volume 25, Micromedex Inc.
TABLE 1.
PLASTICIZERS OBTAINED FROM CHESTER RIVER SOURCE

Plasticizer                             CAS Number         Background Information
                                                           Available in Database
DIOCTYL ADIPATE                         000103-23-1        HSDB(1), RTECS(2)
DIISODECYL ADIPATE                      027178-16-1        (2)
ISODECYL CAPPED 1,3-BUTYLENE ADIPATE    (a)                (a)
TRIDECYL ADIPATE                        (a)                (a)
DIBUTYL MALEATE                         105-76-0           (2)
6-10 PHTHALATE                          (a)                (a)
7-11 PHTHALATE                          068515-42-4        (a)
DI-N-BUTYL PHTHALATE                    000084-74-2        (1,2,3)
DI-2-ETHYLHEXYL PHTHALATE               000117-81-7        (1,2)
DIISODECYL PHTHALATE                    026761-40-0        (1,2)
MIXED PHTHALATE ESTERS                  not available      N/A
DI,TRIDECYL PHTHALATE                   000119-06-2        (2)
TRIDECYL SEBACATE                       (a)                (a)
EPOXIDIZED SOYBEAN OIL                  008013-07-8 (6)    RTECS(2)
EPOXY TALLATE                           (a)                CHRIS(3)
TRIOCTYL TRIMELLITATE                   3319-31-1          HSDB, RTECS

(a)  Not found with available resources
(1)  Hazardous Substances Data Bank
(2)  Registry of Toxic Effects of Chemical Substances
(3)  Chemical Hazard Response Information System

TABLE 2.

Non-Phthalate Potential Plasticizer Materials

Compound                                                    CAS Number     Source of     Base Peak or   Secondary
                                                                           Spectra       Major Ion      Ion

Propanedioic acid, diethyl ester                            000105-53-3    NBS75K.L      29             115
Propanedioic acid, dimethyl ester                           000595-45-0    NBS75K.L      43             73
Propanedioic acid, dimethyl-, diethyl ester                 001619-62-1    NBS75K.L      29             88
Propanedioic acid, ethyl-, diethyl ester                    000133-13-1    NBS75K.L      160            143
Diethyl methylpropylmalonate                                055898-43-6    NBS75K.L      115            174
Propanedioic acid, methyl-, diethyl ester                   000609-08-5    NBS75K.L      129            74
Diethyl isopropylmalonate                                   000759-36-4    NBS75K.L      160            133
Diethyl isobutylmalonate                                    010203-58-4    NBS75K.L      160            133
Propanedioic acid, ethylmethyl-, diethyl ester              002049-70-9    NBS75K.L      73             87

Butanedioic acid, diethyl ester                             000123-25-1    NBS75K.L      101            129
Butanedioic acid, dibutyl ester                             000141-03-7    NBS75K.L      101            119
Butanedioic acid, dimethyl ester                            000106-65-0    NBS75K.L      115            55

2-Butenedioic acid, diethyl ester                           000141-05-9    NBS75K.L      99             127
2-Butenedioic acid, dimethyl ester                          000624-48-6    NBS75K.L      113            85
Dibutyl Maleate                                             000105-76-0    NBS/CRL       99             117

Valeric acid, 2,3-epoxy-3,4-dimethyl-, tert-butyl ester     024222-06-8    NBS75K.L      57             --
Valeric acid, 2,3-epoxy-3,4-dimethyl-, ethyl ester, cis     024222-05-7    NBS75K.L      99             115

Hexanedioic acid, bis(2-ethylhexyl) ester                   000103-23-1    NBS/CRL       129            147
Hexanedioic acid, dihexyl ester                             000110-33-8    NBS75K.L      129            147
Hexanedioic acid, dioctyl ester                             000123-79-5    NBS/CRL       129            147
Hexanedioic acid, dicyclohexyl ester                        000849-99-0    NBS75K.L      129            147
Hexanedioic acid, bis(1-methylpropyl) ester                 000141-04-8    NBS75K.L      129            147
Hexanedioic acid, mono 2-ethylhexyl ester                   004337-65-9    NBS75K.L      129            147
Hexanedioic acid, dipropyl ester                            000106-19-4    NBS75K.L      129            142
Hexanedioic acid, bis(1-methylethyl) ester                  006938-94-9    NBS75K.L      129            142
Hexanedioic acid, dimethyl ester                            000627-93-0    NBS75K.L      59             --
Hexanedioic acid, dibutoxyethyl ester                       000141-18-4    NBS75K.L      41             99
Hexanedioic acid, dibutyl ester                             000105-99-7    NBS75K.L      185            129
Hexanedioic acid, 2,2-dimethyl-, dimethyl ester             017219-21-5    NBS75K.L      157            129
Tridecyl Adipate                                            *              CRL STD       129            147
Diisodecyl Adipate                                          27178-16-1     CRL STD       129            147
Isodecyl Capped, 1,3-Butylene Adipate                       *              CRL STD       129            147

Azelaic Acid, bis(2-ethylhexyl) ester                       000103-24-2    NBS75K.L      171            --
Azelaic Acid, dibutyl ester                                 000103-24-3    NBS75K.L      171            --
Azelaic Acid, dimethyl ester                                000103-24-4    NBS75K.L      152            --

Decanedioic acid, bis(2-ethylhexyl) ester                   000122-62-3    NBS75K.L      185            112
Decanedioic acid, dibutyl ester                             000109-43-3    NBS75K.L      241            185
Decanedioic acid, diethyl ester                             000110-40-7    NBS75K.L      55             60
Decanedioic acid, dimethyl ester                            000106-79-6    NBS75K.L      55             74
Didecyl Sebacate                                            002432-89-5    NBS75K.L      57             71
Tridecyl Sebacate                                           *              CRL STD       57             71
Epoxy Tallate                                               #              CRL STD       57             --
Mixed Texanol Benzoates                                     #              CRL STD       105            --
Trioctyl Trimellitate                                       003319-31-1    CRL STD       305            323

* not found in available library resources
# mixtures not identifiable to a given CAS number
TABLE 3.

Sample number                           Bis(2-ethylhexyl) Adipate (ug/Kg)
941121-14, Chester R. Station 9         32.7 B
941121-15, Chester R. Station 8         55.2 B
941121-16, Chester R. Station 7         53.7 B
941121-17, Chester R. Station 6         <10
941121-18, Chester R. Station 5         <10
941121-19, Chester R. Station 4         <10
941123-15, Chester R. Station 3         27.8 B
941123-16, Chester R. Station 2         148.5
941123-17, Chester R. Station 1         <10
11/21 sand blank                        <10
11/23 Sand Blank                        11.5
            FIGURE 1.
            STANDARD OBTAINED FROM CHESTER RIVER SITE
            7-11 PHTHALATE (MIXED PHTHALATE ESTERS)

            [Figure: total ion chromatogram (TIC: REPHTH1.D, approximately
            24-36 min) and the mass spectrum of scan 1150 (23.676 min),
            which shows prominent ions including m/z 167 and 265.]
FIGURE 2.
STANDARD OBTAINED FROM CHESTER RIVER SITE
DI-OCTYL ADIPATE

[Figure: mass spectrum, average of 23.495 to 23.554 min, MAYPH1.D;
prominent ions include m/z 57 and 147.]
    FIGURE 3.
    STANDARD OBTAINED FROM ALDRICH CHEMICAL
    BIS(2-ETHYLHEXYL) ADIPATE

    [Figure: mass spectrum, average of 23.495 to 23.554 min, MAYPH2.D;
    prominent ions include m/z 57, 70, 83, 111, and 147.]
FIGURE 4.
STANDARD OBTAINED FROM CHESTER RIVER SITE
TRIOCTYL TRIMELLITATE

[Figure: mass spectrum, scan 2043 (41.805 min), REPHTH7.D; prominent ions
include m/z 193, 211, and 323.]
34
 MICROWAVE-ASSISTED EXTRACTION FROM SOIL OF COMPOUNDS LISTED IN
 SW-846 METHODS 8250, 8081, AND 8141A

 W. F. Beckert, U.S. Environmental Protection Agency, EMSL-LV, Las Vegas, Nevada 89119,
 and V. Lopez-Avila, R. Young, J. Benedicto, P. Ho, and R. Kim, Midwest Research Institute,
 California Operations, Mountain View, California 94043.
 ABSTRACT

       This study, which is part of an ongoing U.S. Environmental Protection Agency (EPA)
 research program, carried out by the Environmental Monitoring Systems Laboratory-Las Vegas,
 evaluates new sample-preparation techniques that minimize generation of waste solvents,
 improve target analyte recoveries, and reduce sample preparation costs. We have continued with
 developing a microwave-assisted extraction (MAE) procedure designed to extract pollutants
 from soil and sediment matrices, and are reporting results of MAE for 187 compounds and four
 Aroclors listed in SW-846 Methods 8250, 8081, and 8141A.  All MAE experiments were
 performed on 5-g sample portions at 115°C/10 min with 30 mL 1:1 hexane/acetone. Of 89
 semivolatile and six surrogate compounds spiked on soil, extracted by MAE, and analyzed by
 GC/MS, the spike recoveries for 79 compounds were between 80 and 120%, and for 14
 compounds less than 80% (benzo(b)fluoranthene and benzo(k)fluoranthene were counted as one
 compound, because they could not be resolved on the DB-5 column; the recovery of 7,12-
 dimethylbenz(a)anthracene was 128%).  Of the latter, recoveries for five compounds were below
 20% (benzidine at 0%, a,a-dimethylphenethylamine at 7.0%, 2-picoline at 7.7%,
 dibenzo(aj)acridine at 10.6%, and 2,4-dinitrophenol at 17.2%). When the spiked samples were
 aged for 24 h in the presence of moisture before being extracted, spike recoveries were between
 80 and 120% for 46 compounds, and below 80% for 47 compounds.  Of 45 organochlorine
 pesticides spiked on soil, extracted by MAE, and determined by dual column/dual ECD GC,
 spike recoveries of 38 compounds ranged from 80 to 120%, six ranged from 50 to 80%, and only
 the captafol recovery was above 120%.  Spike recoveries in the range 100 ± 20% were obtained
 for 29 compounds when moistened samples were aged for 24 h before MAE, and for only 15
 compounds after aging for 14 d.

       Recoveries for Aroclors (in spiked, native, or reference materials) from nine different
 matrices ranged from 82 to 93% for Aroclor 1016 and 1260, and from 75 to 157% for Aroclor
 1248 and 1254 (concentrations ranged from 0.022 to 465 mg/kg). Organophosphorus pesticide
 recoveries (47 compounds) were in general slightly lower than those achieved for the other
 pollutant groups tested.

       For 15 compounds in a reference soil, the recoveries of 14 compounds by MAE were
 equal to or better than recoveries obtained by Soxhlet extraction (naphthalene being the
 exception). For selected organochlorine pesticides, recoveries from spiked soil samples were at
 least 7% higher for MAE than for either Soxhlet or sonication extraction.

        The results of this study further demonstrate that MAE of pollutants from soil samples is
 a viable technique for sample preparation. With the use of MAE, many of the compounds of
                                             228

-------
concern to the EPA can be extracted in a single step in 10 min with a small amount of organic
solvent.
NOTICE

       The U. S. Environmental Protection Agency (EPA), through its Office of Research and
Development (ORD), had this abstract prepared for a proposed oral or poster presentation. It
does not necessarily reflect the views of the EPA or ORD.  Readers should note the existence of
a patent (Pare, J.R.J., et al., U.S. Patent 5,002,784, March 1991) describing the use of
microwave-assisted extraction for biological materials.
                                               229

-------
35
   A  TOXIC CONGENER SPECIFIC,  MONOCLONAL ANTIBODY-BASED
       IMMUNOASSAY  FOR PCBs  IN ENVIRONMENTAL MATRICES

Robert E. Carlson, ECOCHEM Research, Inc., Chaska, MN 55318, Robert O. Harrison,
ImmunoSystems,  Inc. Division of  Millipore Corp., Scarborough, ME 04074, Ya-Wen
Chiu and Alexander E. Karu,  Agricultural and Environmental Chemistry Graduate
Program, University of California Berkeley, Berkeley, CA 94720.
      The most toxic PCB congeners are ortho unsubstituted and coplanar.  They
occur in  much smaller amounts than the less toxic congeners  in industrial PCB
formulations and environmental samples.  There is growing recognition that specific
analysis of the toxic PCB congeners in the environment is required for an objective
evaluation of risk and environmental impact.  However, the time, effort and expense
associated with the congener specific analysis  of these compounds by instrumental
methods  such as capillary gas chromatography places substantial constraints on the
scope of  risk assessment and site evaluation studies.  Immunoassay based analytical
methods  have demonstrated value for specific,  high throughput screening as well as
quantitative analyses of  many  environmental analytes.  We have developed  an
enzyme  immunoassay  (EIA) which  is  specific for the most toxic, coplanar PCB
congeners.  This EIA is based on a coplanar hapten derived monoclonal antibody and
a novel competitor labeled enzyme conjugate.  The coated tube format of this assay
can be completed in less than 30 minutes.  The EIA has a minimum detection limit of
less than 0.2 ppb and an I50 of less than 1 ppb for the 3,3',4,4'-tetrachloro and
3,3',4,4',5-pentachlorobiphenyl  congeners.   Cross-reaction  with several  of the
common Aroclor congeners  including  4,4'-dichloro-, 2,2',5,5'-tetrachloro- and
2,2',4,4',5,5'-hexachlorobiphenyl is less than 0.01%.  This presentation will describe
optimization  and performance characteristics of the EIA with emphasis on preparation
of various environmental matrices and the relation of toxic congener quantification to
the total PCB content of a sample.
                                       230

-------
                                                                                      36
ACCELERATED SOLVENT EXTRACTION OF
CHLORINATED HYDROCARBONS INCLUDING DIOXIN
AND PCB FROM SOLID WASTE SAMPLES

John L. Ezzell, Dale W. Felix and Bruce E. Richter
Dionex Salt Lake Technical Center, Salt Lake City, UT 84119
Frank Hofler
Dionex GmbH, Idstein, Germany

Abstract

Accelerated Solvent Extraction (ASE) applies temperature and pressure to accelerate
extraction processes and improve the efficiency of solvent extraction. This paper reports
the results of a study of chlorinated hydrocarbons found at trace levels on a variety of
matrices. PCB contaminants have been extracted from solid wastes including sewage
sludge, urban dust and soil.  Recoveries from these incurred samples were equivalent or
better than from the Soxhlet extractions of equivalent samples. Dioxins at ppb levels
were also extracted by ASE from a number of incurred  samples from a range of
environmental sources. A review of the data again shows that ASE gives equivalent or
better recoveries compared to the traditional  techniques. For a 10-g sample, the
automated ASE system typically requires about 15 minutes to complete an extraction
with a total solvent requirement of from 13 to 15 mL.

Introduction
Organic solvents required to extract solid samples can comprise the largest source of
waste in the environmental analysis laboratory. Typical solvent volumes can range from
50 mL to over 400 mL per sample analysis procedure. At the present time, federal and
state regulatory agencies are placing increased demands to reduce solvents in the
analytical laboratory. However, most states require that only 40CFR promulgated
methods be used in RCRA work. This requirement limits the typical contract and
industrial laboratories to the solvent intensive methods described under SW-846: Method
3510 (separatory funnel), 3540 (Soxhlet), 3541 (automated Soxhlet) and 3550 (ultrasonic
extraction).
A new extraction technique, accelerated solvent extraction (ASE) has recently been
introduced (1). This technique uses conventional liquid solvents at elevated pressures
(1500-2000 psi) and temperatures (50-200°C) to extract solid samples quickly, and with
much less solvent than conventional techniques. With ASE, a solid sample is enclosed in
a stainless steel vessel which is filled with an extraction solvent and heated to
temperature. The sample is allowed to statically extract in this configuration for 5-10
minutes, with the expanding solvent vented to a collection vial. Following this period,
compressed nitrogen is used to purge the remaining solvent into the same vial.  The entire
procedure is completed in 12-17 minutes per sample, and uses approximately 15 mL of
solvent for a ten gram sample. ASE takes advantage of the increases in analyte
solubilities which occur at temperatures above the boiling points of commonly used
                                           231

-------
solvents. At the higher temperatures used by ASE, the kinetic processes for the
desorption of analytes from the matrix are accelerated compared to the conditions when
solvents at room temperature are used. Solvent usage is reduced as a result of the higher
analyte solubility in the heated solvent.
In this study, data will be presented from the extraction of soils contaminated with PCBs and
dioxins.
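
As a rough illustration of the solvent reduction described above, the following sketch compares annual solvent consumption for a conventional 50-400 mL extraction against the roughly 15 mL used by ASE. The per-sample volumes are taken from the text; the sample count is a hypothetical workload, not a figure from this study.

    # Illustrative arithmetic only: per-sample volumes are taken from the text above
    # (50-400 mL for conventional methods, ~15 mL for ASE); the sample count is a
    # hypothetical example, not a figure from this study.
    samples_per_year = 5000                      # assumed laboratory workload
    conventional_ml = (50, 400)                  # range cited for SW-846 methods
    ase_ml = 15                                  # typical ASE volume for a 10-g sample

    low = samples_per_year * conventional_ml[0] / 1000.0   # litres per year
    high = samples_per_year * conventional_ml[1] / 1000.0
    ase = samples_per_year * ase_ml / 1000.0

    print(f"Conventional: {low:.0f}-{high:.0f} L/year; ASE: {ase:.0f} L/year")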

Methods

Materials. All solvents used were pesticide grade or better. Certified PCB contaminated
soil was purchased from Resource Technology Corporation (Laramie, WY).

Extraction. ASE extractions were performed at a pressure of 2000 psi and a temperature
of 100 or 150°C.  Additional information on the operation of ASE is reported in
separate papers (1,2).  Stainless  steel extraction vessels with internal  volumes of 11 mL
were used.  The extraction method was designed so that the vessel containing the sample
was pre-filled  with solvent, and then  allowed to heat  and extract statically for a total
elapsed time of 10 minutes.  The static valve was controlled so that it opened briefly
when the cell  pressure exceeded 2200 psi.  The solvent that was expelled during this
valve opening was routed to the collection vial. A schematic diagram of ASE is shown as
Figure 1.
Following the combined heat-up and static extraction period, the static valve was opened,
and fresh extraction solvent was  introduced for a period of 10-15 seconds (approximately
8 mL), followed by  a purge  with  nitrogen gas at 150 psi.   The final volume of the
extraction solvent was approximately 15 mL; the total extraction time  was approximately
12 minutes per sample. All PCB-containing samples were extracted with hexane/acetone
at 100°C. Dioxin-containing samples were extracted with toluene at 150°C. Fly ash
samples were  extracted with toluene acidified with phosphoric acid. All extracts were
collected into amber, precleaned 40 mL vials purchased from I-Chem (New Castle, DE).
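
The extraction sequence just described can be summarized as a small parameter set. The sketch below is a purely illustrative representation of the conditions reported in this section; the field names and the dataclass itself are ours and are not part of any instrument software.

    # Illustrative summary of the ASE conditions reported above; field names are
    # chosen for readability and are not taken from the Dionex system.
    from dataclasses import dataclass

    @dataclass
    class ASEMethod:
        pressure_psi: int = 2000       # extraction pressure
        temperature_c: int = 100       # 100 °C for PCBs, 150 °C for dioxins
        vessel_ml: int = 11            # stainless steel vessel volume
        static_min: int = 10           # combined heat-up and static extraction time
        flush_ml: int = 8              # fresh solvent introduced after the static step
        purge_psi: int = 150           # nitrogen purge pressure
        solvent: str = "hexane/acetone"

    pcb_method = ASEMethod()
    dioxin_method = ASEMethod(temperature_c=150, solvent="toluene")
    print(pcb_method, dioxin_method, sep="\n")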

Quantitation. Analysis of the PCB extracts was performed by GC/ECD according to EPA
Method 8080. Analysis of dioxin extracts was by GC/MS.

Results

PCBs
PCBs have been extracted from a variety of matrices, including oyster tissue, soils,
sludges, and sediments. All of the extractions were performed according to the following
method: 100°C, 2000 psi, 12 minutes, using hexane/acetone as the extraction fluid. Table 1
summarizes data obtained from the extraction of sewage sludge. This sample was dried
and ground prior to extraction. Percent recovery values are based on Soxhlet  extraction
results. Results presented in Table 2 were obtained by extracting a reference soil with
certified levels of Aroclor 1254. Analysis of these extracts was performed by an
independent contract laboratory and shows excellent agreement with the certified value.
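
The certified-soil result in Table 2 is simply the mean of the four runs expressed against the certified concentration; a one-line check of that arithmetic (values copied from Table 2):

    # Percent recovery of Aroclor 1254 relative to the certified value (Table 2).
    runs_ug_per_kg = [1290, 1366, 1283, 1369]
    certified = 1340

    avg = sum(runs_ug_per_kg) / len(runs_ug_per_kg)        # 1327
    print(f"average = {avg:.0f} µg/kg, recovery = {100 * avg / certified:.1f}%")  # 99.0%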
                                             232

-------
Dioxins
Dioxins and furans have been extracted from a number of matrices as well. Dioxin
samples were extracted at 150°C, 2000 psi, using toluene as the extraction fluid. The
extraction of dioxins requires approximately 17 minutes per sample. Data summarized in Tables 3
and 4 show the absolute levels of dioxins and furans recovered from chimney brick by
ASE and Soxhlet extraction. In all cases, ASE produced levels which very closely
correspond to the Soxhlet values.

Conclusion

In this study, accelerated solvent extraction has been shown to produce results
comparable to traditional solvent extraction of PCBs and dioxins in much less time (12-
17 minutes per sample) and using much less solvent (15 mL for a 10 g sample). Since a
single method is capable of extracting the analytes from a variety of matrices, the time
normally required for method development is significantly reduced.
ASE has previously been shown to be equivalent to Soxhlet extraction of BNAs,
chlorinated pesticides, organophosphorus herbicides and shaker extraction of herbicides
(1,2). Based on these data, ASE will be included as extraction Method 3545 in update III
of CFR 40 (3).

References
(1)    B.E. Richter, J.L. Ezzell, W.D. Felix, K.A. Roberts and D.W. Later, American
       Laboratory, Feb. 1995 24-28.
(2)    J.L. Ezzell, B.E. Richter, W.D. Felix, S.R. Black and J.E. Meikle, LC/GC 13(5)
       1995, 390-398.
(3)    Lesnik, B. and Fordham, O., Environmental Lab,  Dec/Jan 1994/95 25-33 (1995).
                                             233

-------
Figure 1.  Schematic diagram of accelerated solvent extraction (ASE).
[Schematic components: pump, pump valve, extraction cell in oven, static valve, purge valve, collection vial.]
Table 1. Recovery of PCBs from Sewage Sludge

PCB Congener      Avg. (%), n=6      RSD (%)
PCB 28                118.1            2.5
PCB 52                114.0            4.7
PCB 101               142.9            7.4
PCB 153               109.5            5.8
PCB 138               109.6            3.9
PCB 180               160.4            7.5

Analyte concentration range: 160-200 µg/kg/component
                          234

-------
                      Table 2.
        Recovery of PCBs from Contaminated Soil
          (1340 µg/kg certified, Aroclor 1254)

Run Number        µg/kg
1                 1290
2                 1366
3                 1283
4                 1369

Avg.              1327 (99.0%)
RSD               3.5%
Table 3. Dioxins from Chimney Brick

Congeners                     Soxhlet (ng/kg)    ASE (ng/kg)
2,3,7,8-Tetra CDD                  0.006            0.006
1,2,3,7,8-Penta CDD                0.052            0.057
1,2,3,4,7,8-Hexa CDD               0.046            0.052
1,2,3,6,7,8-Hexa CDD               0.12             0.13
1,2,3,7,8,9-Hexa CDD               0.097            0.10
1,2,3,4,6,7,8-Hepta CDD            1.0              0.88
Octa CDD                           2.9              2.6
                       235

-------
Table 4. Furans from Chimney Brick

Congeners                     Soxhlet (ng/kg)    ASE (ng/kg)
2,3,7,8-Tetra CDF                  0.16             0.18
1,2,3,7,8-Penta CDF                0.43             0.47
1,2,3,4,7,8-Hexa CDF               1.1              1.1
1,2,3,6,7,8-Hexa CDF               0.54             0.57
1,2,3,7,8,9-Hexa CDF               0.042            0.042
1,2,3,4,6,7,8-Hepta CDD            2.1              2.0
Octa CDD                           2.0              2.0
                  236

-------
                                                                                                 37
Robust SFE Sample Preparation Methods for PCBs and OCPs Submitted to the US EPA
SW-846 for Consideration as a Draft SFE Method 3562

DENNIS GERE, Hewlett-Packard, Little Falls Site, 2850 Centerville Rd. Wilmington, DE 19808,

SOREN BOWADT, Energy and Environmental Research Center, Campus Box 9018, University of North Dakota,
Grand Forks, ND 58202-9018

DIANNE BENNETT, Chemistry Laboratory Services Branch, California Dept of Food and Agriculture,
Sacramento, CA 95832

H-B. LEE & TOM PEART, Environment Canada, Centre for Inland Waterways, Burlington, Ontario, Canada L7R 4A6

Environmental laboratories  are now in the process of examining  the feasibility  of  replacing traditional  sample preparation
techniques that use macro quantities of chlorinated hazardous organic solvents with supercritical fluid extraction (SFE). Method
development and validation have been underway for at least the past three years in many  laboratories around the world. These new,
emerging technology alternatives primarily use carbon dioxide with only a fraction of the organic solvent used in traditional methods
such as Soxhlet or sonication.

We will present SFE techniques and robust methods  for the  sample preparation of polychlorinated biphenyls (PCBs) and
organochlorine pesticides (OCPs) from solid wastes and fish tissue. In each case, the conditions for the extraction and cleanup prior
to analysis will be given. Validation data will include the recovery and precision results for representative reference samples. This
validation data has been presented to the organic work group of the US EPA SW-846 solid waste operation for consideration as a
draft method, which would be designated Method 3562.

The presented methods will  maximize unattended operation and minimize  the use of organic solvents. For a typical SFE sample
preparation of a fish tissue sample containing PCBs and OCPs, approximately one hour of time is required with no further external
sample cleanup or evaporation/concentration steps (in-situ sample cleanup). A typical traditional sample preparation technique
would require 12-18 hours of time, 250-500 milliliters of organic solvent, and a separate, manual column chromatography
cleanup step with a Florisil column or silica treated with sulfuric acid. The manual column chromatography manipulation would
result in a volume of organic solvent which needs concentration prior to analysis.

The deliverables data include the recovery of PCB congeners and OCP analytes as listed below.

Table 1                                PCBs

    Compounds                              Empirical formula    IUPAC number (a)
 2,4,4'-Trichlorobiphenyl                  C12H7Cl3             CB 28
 2,5,2',5'-Tetrachlorobiphenyl             C12H6Cl4             CB 52
 2,4,5,2',5'-Pentachlorobiphenyl           C12H5Cl5             CB 101
 2,4,5,3',4'-Pentachlorobiphenyl           C12H5Cl5             CB 118
 2,3,4,3',4'-Pentachlorobiphenyl           C12H5Cl5             CB 105
 2,3,4,2',4',5'-Hexachlorobiphenyl         C12H4Cl6             CB 138
 2,3,4,2',3',4'-Hexachlorobiphenyl         C12H4Cl6             CB 128
 2,3,6,2',4',5'-Hexachlorobiphenyl         C12H4Cl6             CB 149
 2,4,5,2',4',5'-Hexachlorobiphenyl         C12H4Cl6             CB 153
 2,3,4,5,3',5'-Hexachlorobiphenyl          C12H4Cl6             CB 156
 2,3,4,5,2',4',5'-Heptachlorobiphenyl      C12H3Cl7             CB 180
 2,3,4,5,2',3',4'-Heptachlorobiphenyl      C12H3Cl7             CB 170
                                                237

-------
(a) IUPAC nomenclature

There are potentially 209 members of a class of compounds known as polychlorinated biphenyls. In this class of compounds,
biphenyl is the backbone and between one and ten chlorine atoms are substituted on this biphenyl nucleus. Of the possible 209 CBs,
only about 120 have been detected in environmental samples.
Table 2     OCPs

Compounds                                      CAS # (a)
Aldrin                                         309-00-2
b-Hexachlorocyclohexane (b-BHC)                319-85-7
d-Hexachlorocyclohexane (d-BHC)                319-86-8
g-Hexachlorocyclohexane (g-BHC) (b)            319-87-9
a-Chlordane                                    5103-71-9
4,4'-DDD                                       72-54-8
4,4'-DDE                                       72-55-9
4,4'-DDT                                       50-29-3
Dieldrin                                       60-57-1
Endosulfan                                     115-29-7
Endrin                                         72-20-8
Endrin aldehyde                                7421-93-4
Heptachlor                                     76-44-8
Heptachlor epoxide                             1024-57-3

(a) Chemical Abstracts Registry Number
(b) Also known as Lindane
        RESULTS:

 The following tables summarize the recovery, bias, precision, minimum detectable limit (MDL), and reliable quantitation
 limit (RQL) data:

  Table 3 PCB Deliverable Data

                     bias      Precision       MDL        RDL
                                 (%rsd)      (mg/Kg)    (mg/Kg)
EC1                  95.7          8.2         10.7       42.8
SRM 1941             85.8          2.2          1.7        6.8
ECS                 104.0          3.6          3.1       12.4
CRM481               79.0          4.7          5.2       20.7
Michigan Bay        108.7          2.6          5.5       22.1
CRM392               91.8          2.7         15.1       60.6
SRM 2974             83.2          3.0          5.0       19.9

GRAND MEANS          92.6          3.9          6.6       26.2
                                                   238

-------
  Table 4 OCP Deliverable Data

                     bias      Precision       MDL        RDL
                                 (%rsd)      (mg/Kg)    (mg/Kg)
Delhi 250           107.9          5.3          0.9        3.4
Delhi 5              74.3          5.2          0.6        2.4
McCarthy 250        102.0          4.0          0.7        2.8
McCarthy 5           85.9          7.4          1.3        5.0
Auburn 250           79.4          4.4          0.6        2.5
Auburn 5             87.8          5.5          0.6        2.5

GRAND MEANS          89.6          5.3          0.8        3.1
The poster session will include additional experimental data on the types of samples used for the results reported here.
The method appears robust: it was tested in four different laboratories in the United States and Canada on a
wide variety of certified samples and, in the case of the organochlorine pesticides, on spiked reference soils.
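
The grand means in Tables 3 and 4 are simple averages over the listed samples, and the tabulated RDL values are approximately four times the corresponding MDL values; that factor is read off the printed numbers, since the underlying definition is not stated here. A minimal check of the Table 3 arithmetic:

    # Reproduces the summary arithmetic of Table 3 (PCB deliverable data).  The RDL
    # column is roughly 4 x MDL throughout both tables; the factor is inferred from
    # the printed values, not from a stated definition, and the MDLs are rounded.
    bias = [95.7, 85.8, 104.0, 79.0, 108.7, 91.8, 83.2]
    rsd  = [8.2, 2.2, 3.6, 4.7, 2.6, 2.7, 3.0]
    mdl  = [10.7, 1.7, 3.1, 5.2, 5.5, 15.1, 5.0]          # mg/Kg

    grand = lambda xs: sum(xs) / len(xs)
    print(f"grand mean bias = {grand(bias):.1f}")          # 92.6
    print(f"grand mean %rsd = {grand(rsd):.1f}")           # 3.9
    print(f"grand mean MDL  = {grand(mdl):.1f} mg/Kg")     # 6.6
    print("approx. RDL     =", [round(4 * x, 1) for x in mdl])   # close to the printed column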
                                                     239

-------
38


ABSTRACT FOR ENVIRACS MEETING, WASHINGTON DC, JULY 1995


ANALYSIS OF DIOXIN IN WATER BY AUTOMATED SOLID PHASE EXTRACTION
COUPLED TO ENZYME IMMUNOASSAY
R.O. Harrison, Millipore Corp.; R.E. Carlson, Ecochem Research, Inc.; H. Shirkhan, Fluid
Management Systems, Inc.; L.M. Altshul, C.A. De Ruisseau, and J.M.  Silverman, Harvard
School of Public Health


An automated system has been developed for solid phase extraction of liquid samples, based
in part on the fluidics and control portions of the automated FMS Dioxin-Prep™ System. The
SPE-Prep™ System has been used to  develop  a method for the  extraction  of 2,3,7,8-
tetrachlorodibenzo-p-dioxin (TCDD) from water. Also, an enzyme immunoassay (EIA) system
has recently been developed for rapid screening of TCDD from a variety of matrices. These
two novel  methods have been coupled to  produce a rapid and simple method for the
screening of water samples for dioxin contamination. Water analysis can be performed by EIA
directly following extraction and solvent exchange with no extract clean-up.  Sensitivity for
TCDD in the EIA is approximately 100 pg per analysis.  Thus sensitivity  to 0.1 ppt TCDD in
water is possible using 1-2 liters of sample.  Scaling the sample size to 50 liters allows better
than 10 ppq sensitivity. Total time for sample preparation and EIA analysis is less than 4 hours
for a  1-2 liter sample. Larger samples can be extracted by running the automated system
overnight, with the  same approximate analyst time required for extraction and EIA analysis.
Optimization of the automated SPE system and its interface to the EIA will be described.
Results from EIA and GC-MS analysis of both spiked and field samples will be presented.
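
The sensitivity figures quoted above follow from simple dilution arithmetic: 100 pg of TCDD detectable per analysis, divided by the volume of water extracted. A minimal check of that arithmetic, assuming the entire extract from one sample is taken into a single EIA determination:

    # Dilution arithmetic behind the quoted EIA sensitivities.
    # Assumes the full extract from one water sample goes into one EIA analysis.
    EIA_SENSITIVITY_PG = 100.0            # pg TCDD detectable per analysis (from text)

    def detectable_concentration(sample_volume_l: float) -> float:
        """Return the detectable TCDD concentration in pg/L (1 pg/L = 1 ppq in water)."""
        return EIA_SENSITIVITY_PG / sample_volume_l

    for volume in (1.0, 2.0, 50.0):
        ppq = detectable_concentration(volume)
        print(f"{volume:5.1f} L sample -> {ppq:6.1f} ppq  ({ppq / 1000:.3f} ppt)")
    # 1-2 L gives 50-100 ppq (0.05-0.1 ppt); 50 L gives 2 ppq, i.e. better than 10 ppq.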
                                       240

-------
                                                                                        39
        MICROWAVE-ASSISTED SOLVENT EXTRACTION OF PAHs FROM SOIL-
                   REPORT OF AN INTER-LABORATORY STUDY

L. B. Jassie, CEM Corporation, Matthews, NC 28106; M. J. Hays and S. A. Wise,
Chemical Science and Technology Laboratory, National Institute of Standards and
Technology, Gaithersburg, MD 20899


ABSTRACT
Solvent extractions are among the oldest and most widely practiced sample preparation
techniques for chemical analysis. Normally, solvents are selected to dissolve target
analytes based on the affinity between solvent and solute and range from highly polar
molecules like water to lipophilic hydrocarbons, depending on the target analyte.
Although traditional extraction methods are labor intensive and often time consuming,
newer extraction techniques using microwave heating more efficiently leach additives
from plastics (1), natural products from botanicals (2), and pollutants from sediment
(3-5). Microwave extractions that are performed in closed vessels achieve higher
temperatures and pressures and thus take less time than traditional methods. Controlled
dielectric heating with microwave sources is more reproducible than room temperature
sonications or open vessel Soxhlet methods. This improves extraction precision.


INTRODUCTION
This presentation discusses optimization of extraction parameters for releasing polycyclic
aromatic hydrocarbons (PAHs) from a reference sediment, SRM 1941a using methylene
chloride, and presents results obtained at NIST by reverse phase liquid chromatography
(4). Both extraction time and temperature were systematically varied to evaluate the two
most important experimental parameters in order to establish the optimum extraction
conditions. Although microwave recoveries showed improvement with increasing
temperatures from 40 to 100 °C, the average ranged from 93-106% of the Soxhlet value.
When the isothermal extraction was varied from 10-30 minutes, similar recovery
efficiencies were found although some degradation was seen at longer times. The
presence of moisture speeded the attainment of the target temperature; however,
extraction efficiency did not improve, nor did the addition of sodium sulfate as a
drying agent improve recovery.

A protocol was subsequently developed for conducting an interlaboratory study to
compare the recovery efficiency of the microwave method with conventional extraction
techniques. It consisted of a 15 minute extraction in 30 mL of methylene chloride
(pesticide grade) at 100 °C. Participants included a regional EPA lab, a municipal
environmental lab, one small and one large private lab and one national laboratory.
Instructions to analysts included a request to extract the sediment by their conventional
method and to analyze the PAHs of environmental interest in both extracts by the best
available method, i.e., GC, GC-MS, or HPLC. In all cases, that analytical method turned
out to be a GC-MS, similar to EPA Method 8270. Information on conventional extraction
methods, sample sizes, and analytical methods is summarized in the following table.
                                             241

-------
Table I. Conventional Methods of Preparation, Analytical Technique, and Sample Sizes
for the Interlaboratory Study

                                              Spikes &             Sample Size, g
Lab #    Preparation          Detection       Surrogates           Conven.   µwave
1        Sonication           GC-MS           d8, d10, d12            5         5
2        Sonication           GC-MS           d8, d10, d12           10         2
3        Sonication           GC-MS           d4, d8, d10, d12        5         1
4        Wrist Shaker/auto    GC-MS           d12, terphenyl          1         1
5        Tumbling             GC-MS           d8, d10, d12            1         1
For the study a batch consisted of six vessels comprising three sample replicates, one
solvent blank and two controls which permitted analysts to run either a surrogate,
laboratory QC or check sample, or a standard or reference material. Vessels were
weighed both before and after microwave heating to identify any potentially compromised
vessel that may have vented during the heating step. Only one lab reported > 0.1 g weight
loss for any vessel during the 25 minutes of elapsed time, which includes ~ 10 minutes for
the six-vessel complement to reach 100 °C. After cooling to room temperature, samples
were centrifuged and the solvent decanted. Sample cleanup in most instances meant
removal of elemental sulfur with copper. A final solvent exchange and blowdown into
hexane was performed for the chromatography.
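
The weight check described above lends itself to a simple screening rule: flag any vessel that loses more than 0.1 g between the pre- and post-heating weighings. A minimal sketch, with the threshold taken from the text and made-up example weights:

    # QC screen for vented microwave vessels: flag > 0.1 g weight loss between the
    # pre- and post-heating weighings (threshold from the text; weights below are
    # hypothetical examples).
    VENT_THRESHOLD_G = 0.1

    def flag_vented(before_g, after_g):
        """Return the indices of vessels whose weight loss exceeds the threshold."""
        return [i for i, (b, a) in enumerate(zip(before_g, after_g))
                if (b - a) > VENT_THRESHOLD_G]

    before = [78.412, 78.390, 78.455, 78.401, 78.377, 78.420]   # g, hypothetical
    after  = [78.405, 78.388, 78.310, 78.399, 78.375, 78.417]   # g, hypothetical

    print("Possibly vented vessels:", flag_vented(before, after))   # [2]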


RESULTS
Results  comparing the percent recovery for typical PAHs found in soil will be presented
for the microwave extraction technique. Spike recoveries for the deuterated compounds
averaged from ~ 65% to slightly over 100%. When compared to the NIST certified value
(6) for the specific PAH, recoveries were often > 100 %. Additional comparisons
between the microwave extract recoveries and conventional extractions of PAHs will be
presented for each participating laboratory.


DISCUSSION
The results presented demonstrate that with dielectric heating sufficient energy is
transferred to solution, with solvents such as methylene chloride, to raise the solution
temperature to well above the boiling point in a matter of minutes. At these elevated
temperatures, the rate of analyte extraction or desorption from the soil surface and
interstices is enhanced. In addition, the solubility of these relatively non-polar analytes in
methylene chloride is substantially improved. Extraction efficiency thus is a function of
the increased temperatures  afforded by the closed vessel technique. This technology has
been shown to be comparable to conventional methods of extraction (5). In systems
where even more polar solvents such as acetonitrile or aliphatic alcohols are used, similar
enhancements can be expected. This approach may lead to  still more efficient and
environmentally friendly extraction systems. With automated cleanup, improved sample
throughput is possible.
                                            242

-------
CONCLUSIONS
We can report that study participants uniformly appreciated the opportunity to
dramatically reduce the solvent volumes needed to accomplish extractions of pollutants
from sediments. In addition, laboratory efficiency may have been improved because of
the time savings realized when closed vessel microwave extraction was used.
REFERENCES
1. Freitage, W.; John, O. Angew. Makromol. Chem.  175 (1990), 181-185.
2. Microwave Assisted Natural Products Extraction, U. S. Patent #5002784, Mar. 91.
3. Onuska, F. L; Terry, K. A. Chromatographia, 36 (1993), 191-194.
4. Jassie, L.; Margolis, S.; Craft, N.; Hays, M. 1993 Pittsburgh Conference, #171.
5. Lopez-Avila, V.; Beckert, W. Anal. Chem., (1994),  66, 1097-1106.
6. Wise, S., Schantz, M. et al (1995) in press
                                            243

-------
40

OPTIMIZING AUTOMATED SOXHLET EXTRACTION OF SEMIVOLATILES
Kevin P. Kelly, Ph.D., Nancy L. Schwartz;
Laboratory Automation, Incorporated (a subsidiary of OI Analytical)
555 Vandiver Drive, Columbia, Missouri 65202
ABSTRACT

Automated Soxhlet Extraction was recently promulgated as an official USEPA method
(SW-846 Method 3541).   Because of more efficient  extraction  design, the revised
technique is faster than traditional Soxhlet Extraction (Method 3540) and uses less solvent
than traditional Soxhlet extraction or Ultrasonication Extraction (Method 3550). In the
first stage of Method 3541 extraction, sample is immersed in boiling solvent, facilitating
extraction of target analytes and thus shortening total extraction time. Following this the
sample thimble is separated from contact with the extraction solvent and there is a second
refluxing extraction  stage  which is  similar to a Soxhlet-type of extraction in that
condensed solvent percolates down through the sample.

The Soxtherm extraction system performs SW-846 Method 3541 and automates all steps
of sample processing including macro concentration of the sample extracts and collection
of evaporated solvent for recycling or disposal.  Samples  up to 30 grams in  size are
extracted in two hours using less than half the solvent required by traditional methods.
A variety of target compounds of environmental interest are  efficiently extracted from
different matrices.

Since automated concentration of the sample extract is included in Soxtherm processing,
an assessment of losses traceable to the evaporation technique is useful.  This work
examines effects on evaporative losses of changes in extraction system operation such as
solvent type,  heater and coolant temperatures, extraction vessel and thimble sizes, and
final concentrate volume.  Very good recoveries of easy-to-lose semivolatile analytes
were demonstrated with excellent precision.  Six replicate aliquots of 1:1 acetone and
methylene chloride (115 mL) spiked with 25 µg of each analyte were concentrated to
final volumes ranging from 4 mL to 11 mL, with an average recovery of 99.5% for all
eleven of the compounds.  The same procedure when conducted using 102 mL of 100%
methylene chloride yielded an average of 94.4% recovery  with final volumes ranging
from 5 to 12  mL.

The system used for this work provides high throughput with minimal labor requirement,
and  presents  opportunities for  laboratories to decrease  turnaround time, minimize
hazardous waste generation, obtain operator independent results, and economize on labor
costs.
                                             244

-------
INTRODUCTION

Soxhlet extraction is a proven technique for recovery of organic analytes,  is simple to
implement, and suffers from few disadvantages; however, one drawback has been  the
method time requirement  of 16 to 24  hours.   Recently an  updated technique was
promulgated as SW-846 Method 3541, "Automated Soxhlet Extraction", which cites a
two hour extraction cycle.   A recent publication1 showed that  extraction times can be
shortened relative to those specified in the method without substantial decrease of analyte
recoveries.  Fast extraction time relative to  the traditional method  stems from direct
exposure of the sample to boiling solvent during the initial portion of the procedure.

Soxtherm performs Method 3541 in the most automated fashion possible, with all steps
under  microprocessor  control.   Following an  initial reflux period  during which  the
sample is immersed in boiling solvent, the system separates the sample thimble from  the
extract by evaporating  a portion of the refluxing solvent.  The second reflux period
which  follows serves to complete recovery of extractable material and assures precise
results. Further reduction in volume of the sample extract can take place following that
reflux  period.

Some  semivolatile target compounds are easily  lost when extracts are reduced to  small
volumes.  Maximum utility of the automated  Soxhlet procedure depends on recovering
such analytes in a volume of concentrate that is small enough to require minimal further
processing, yet large enough to avoid extensive evaporation losses.   Some  recent  work
at US  EPA Region 6 Laboratory in  Houston, Texas used concentrates from Soxtherm
which  were subjected to further evaporation  in Labconco  RapidVap™ N2*, a nitrogen
blowdown apparatus.    These showed  moderate losses in  recovery  for more volatile
analytes upon GC/MS analysis (Table 1).  However, prior work involving extraction of
spiked samples2, in which Soxtherm evaporation was stopped at 10-20 mL final volume
and the concentrate gently  blown down by hand to a smaller volume and  analyzed by
GC/FID (Table 2) indicates that evaporation losses were minimal and that recoveries
were affected principally by matrix effects.

Comparison of the two sets of prior work cited indicates  that users of this technique
should be able to routinely  concentrate samples automatically to some final volume
between  1 mL and 10 mL with very minimal evaporation losses.  This work is aimed at
determining how low in volume Soxtherm concentrates can be made without encountering
significant evaporation losses.   Recovery losses  traceable to the evaporation stage of
sample processing are  assessed and sensitivity of analyte  recoveries to changes in
extraction system operating conditions such as temperatures and final concentrate volume
are determined.
*   RapidVap is a trademark of the Labconco Corporation.
                                              245

-------
 Table 1. Spiked Blanks with Final Evaporation Completed in RapidVap

ANALYTE                          AVER. (6 REPL)    STD. DEVIATION
2-Fluorophenol                         46                3.4
Phenol-d5                              61                4.6
2-Chlorophenol-d4                      56                5.0
1,2-Dichlorobenzene                    63                6.3
Nitrobenzene-d5                        62                5.6
2-Fluorobiphenyl                       66                4.1
2,4,6-Tribromophenol                   77                3.3
Terphenyl-d14                          97                2.9
Phenol                                 52                3.8
2-Chlorophenol                         51                4.8
1,4-Dichlorobenzene                    49                5.3
N-nitrosodi-n-propylamine              70                5.5
1,2,4-Trichlorobenzene                 59                5.0
4-Chloro-3-Methylphenol                68                4.0
Acenaphthene                           71                4.0
4-Nitrophenol                          84                3.5
2,4-Dinitrotoluene                     74                3.1
Pentachlorophenol                      95                4.0
Pyrene                                 96                2.8
Table 2. Recoveries from Spiked Samples Using Gentle Nitrogen Blowdown

Analyte                       Blank    Sand/Clay    Loam
2-Fluorophenol                  93        80          78
Phenol                          92        83          86
1,2-Dichlorobenzene             82        60          45
1,2,4-Trichlorobenzene          88        84          85
Acenaphthene                    90        80          85
Hexachlorobenzene               96        86          96
o-Terphenyl                     93        85          96

                   Average of three or four replicates
                                       246

-------
EXPERIMENTAL PROCEDURE

       Extraction System Operating Parameters

No samples were used during these investigations; however, the extraction system was
allowed to proceed in a manner consistent with sample extraction so that any evaporation
losses  traceable to the extraction period would be duplicated.  Initial solvent volume
during automated Soxhlet extraction must be large  enough to cause the sample to  be
covered with boiling solvent. Therefore, solvent volumes were chosen to equal amounts
required to process various sizes of environmental samples ranging up to thirty grams.
Example Soxtherm operating parameters are shown  below in Table 3.  The automated
extraction system was programmed with an extraction temperature (i.e. heater control
temperature) sufficient to produce adequate reflux action.  During the thirty minutes of
boiling time the system was in total reflux, which produces rapid extraction.

The number of 15 mL aliquots removed during the first  solvent reduction period was
chosen to reduce extract  volume to an amount that would be low enough to uncover the
extraction thimble,  thus  suspending it  above  the boiling extract.   For the standard
extraction beaker (ca. 48 mm. i.d.) the volume remaining after the first reduction period
was approximately 40 mL.
                  Table 3. Example Soxtherm Operation Conditions

Parameter                        Value
Extraction temperature           150 °C
Boiling time                     30 minutes
Solvent reduction A              5 x 15 mL
Extraction time                  45 minutes
Solvent reduction B              2 x 15 mL
Solvent cooling time             15 minutes
Air pulse interval               5.5 minutes
Air pulse duration               3 seconds
Chiller water temperature        15 °C
Following the first solvent reduction  period an  extraction  time  of 45 minutes was
employed.  This would serve to rinse additional extractable materials from the sample
contained within the thimble.  After extraction was completed a second solvent reduction
period was used to concentrate the extract to a small volume (5 to 15 mL).
                                            247

-------
       Extraction and Evaporation

Solvent was measured and placed into each extraction beaker and a solution of eleven
semivolatile compounds (25 µg of each contained in 1 mL of 1:1 acetone and DCM) was
spiked directly into the solvent.  Processing was initiated using  a recirculating chiller
(Neslab CFT-33) to cool water for the condensers.  No gaskets were placed between the
extraction beakers and the  condensers.  Omission of gaskets that contain extractable
material eliminates a potential source of contamination cited in Method 3541. Following
the automated extraction and evaporation the concentrated extract was allowed to cool
for a few minutes with a foil cap covering the extraction beaker to prevent further solvent
losses, then the concentrate  was removed using a syringe to measure its volume and the
beaker was rinsed with two or more small portions of solvent.

       Quantitation

Concentrates were adjusted in volume to 10.0 mL and analyzed using a Hewlett-Packard
5890A gas chromatograph equipped with an Rtx™-5 column* (0.53 mm i.d. x  15 meter,
1 micron film thickness) and a flame ionization detector.  Peak areas from analysis of
recovered extracts were referenced  against those  from solutions of spiking standard
diluted to  10.0 mL and 25.0 mL,  with one set of standard injections performed for each
three  samples  that were analyzed.

RESULTS &  DISCUSSION

Table 4 shows recoveries after evaporation for the semivolatile compound spiked into
either a 1:1 mixture of acetone and methylene chloride (DCM) or  into 100%  DCM.
Measured volumes of the concentrated "extracts" showed similar sizes and variability for
the two experiments, with volumes ranging between 4 and 12 mL, with average volumes
of 8.3 mL and 9.2 mL, respectively.  These compare well with calculated final volumes
of 10 mL and  12 mL respectively for the two experiments (starting volume less amount
expected to be removed  during  solvent reduction steps) and indicates that losses of
solvent vapor  due  to leakage were consistent and not very large.  The final volumes
achieved in these experiments are convenient for further concentration of extracts, when
necessary  to achieve detection limit goals, using nitrogen blowdown or the microSnyder
apparatus.

As  expected, evaporation of 1:1  mixture produced a concentrate that was enriched in
acetone.   Weighing extracts from the 1:1  experiment and computing the ratio to  the
measured volumes produced a calculated density of 0.919 g/mL, which corresponds to
a mixture  containing about 25% DCM.
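
Both checks in the paragraph above are simple arithmetic: the projected final volume is the starting volume minus the seven 15-mL reduction aliquots, and the extract composition can be back-estimated from its measured density. The pure-solvent densities used below (acetone 0.791 g/mL, DCM 1.325 g/mL) are standard handbook values assumed for this sketch and are not stated in the text; ideal volume additivity is also assumed.

    # Arithmetic behind the Results discussion: projected concentrate volume and the
    # DCM fraction implied by the measured extract density.  Pure-solvent densities
    # are assumed handbook values; ideal mixing (additive volumes) is assumed.
    start_ml = 115                     # System A starting volume
    aliquots_removed = 7               # 5 + 2 solvent-reduction aliquots of 15 mL
    projected_ml = start_ml - aliquots_removed * 15
    print(f"Projected final volume: {projected_ml} mL (measured average 8.3 mL)")

    rho_acetone, rho_dcm = 0.791, 1.325        # g/mL, assumed
    rho_measured = 0.919                       # g/mL, from the weights reported above
    dcm_fraction = (rho_measured - rho_acetone) / (rho_dcm - rho_acetone)
    print(f"Estimated DCM content: {dcm_fraction:.0%}")   # ~24%, i.e. about 25% DCM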
* Rtx is a trademark of Restek Corporation.
                                          248

-------
     Table 4. Evaporation Recoveries for Solutions Containing Semivolatile Analytes
                        Six replicate evaporations using Soxtherm

                                      System A               System B
Analyte                            Rec (%)    s.d.        Rec (%)    s.d.
2-Fluorophenol                       103       3.0           93       2.9
Phenol                               102       2.8           94       2.0
1,4-Dichlorobenzene                  101       2.7           94       1.3
1,2-Dichlorobenzene                   97       2.2           94       1.5
Hexachloroethane                      93       1.7           93       1.3
Nitrobenzene                          98       2.5           95       1.7
bis(2-Chloroethoxy)methane           101       2.9           97       3.8
Naphthalene                           98       2.4           95       2.0
Hexachlorobenzene                     97       2.1           94       1.0
2-Fluorobiphenyl                      99       2.6           96       1.8
Acenaphthene                         104       5.3           93       1.9

          SYSTEM A = 115 mL of 1:1 acetone and methylene chloride

          SYSTEM B = 102 mL of methylene chloride
Both experiments produced very good recoveries, with single-analyte precision (standard
deviation) values ranging from 1.0% to 5.3%. Very little loss was measured for even
the most volatile of the compounds tested (2-fluorophenol and phenol) and there was no
discernible difference between recoveries of the most volatile and least volatile analytes.
Overall measured recoveries for the eleven analytes were higher for the experiment with
1:1 acetone and DCM than for the experiment with 100% DCM, 99.5%  average versus
94.4%, respectively.
                                              249

-------
   SUMMARY & CONCLUSIONS

   Data from further experiments not available for this manuscript will be presented at the
   conference. Parameters which may be investigated include the effect of increasing heater
   or coolant water temperatures and the effect of decreasing final concentrate volumes.
   The following conclusions have been drawn from the data accumulated thus far.

   1.     Soxtherm evaporates  extracts  to a  small  volume in a predictable fashion and
   accomplishes the macro concentration function efficiently and automatically.

   2.     For the acetone and DCM experiment, the average final concentrate volume was
   8.3 mL, versus a projected final volume of 10 mL (115 mL starting volume less 7 x 15
   mL removed during solvent reduction steps).  This indicates that losses due to  leakage
   during evaporation are minimal under these conditions.

    3.     Recoveries were very good for all analytes in both solvent systems, and there
    were no statistically significant differences in recoveries between the more volatile and
    less volatile of the analytes tested.  Recoveries also showed no dependence on the
    measured volumes of the final concentrates. This indicates that there are no appreciable
    evaporation losses during evaporation to final volumes between 4 mL and 12 mL.

   4.     Recoveries were higher by an average  of 5% for evaporation of 1:1 acetone and
   DCM mixture than  for evaporation  from  DCM  alone.  Unless  this result is  due to
   measurement errors, it indicates that evaporation of DCM without acetone carries  off
   small amounts of target compounds as high boiling as acenaphthene.
   REFERENCES
[1]     Hollis, W. K., Wilkerson, C. W.; Pittcon 1995, Paper 337P. "Characterization of Semivolatile Organic
       Compounds in Solidified Mixed Waste"

[2]     Conrad, E. E., and Kelly, K. P.; Laboratory Automation, Inc., Application Note 26, "Automated Soxhlet
       Extraction of Semivolatile Analytes in Soil Samples"
                                               250

-------
                                                                                 41
       TESTS OF IMMUNOASSAY METHODS FOR 2,4-D, ATRAZINE, ALACHLOR,
                          AND METOLACHLOR IN WATER

P. Marsden, S.F. Tsang, M. Roby, V. Frank and N. Chau, SAIC, San Diego,
California 92121, and R. Maxey, EPA/OPP/ECS, Stennis Space Center,
Mississippi 39529.

ABSTRACT

There is a high level of public concern about pesticides that might be
dispersed into water systems in the United States as a result of
misapplication, improper disposal, or natural disasters (e.g.,
flooding).  EPA has an interest in immunoassay as a rapid, reliable, low-
cost screening method for environmental and floodwater samples.  The EPA
Office  of  Pesticide Programs (OPP)  tasked SAIC to  conduct  a  systematic
validation of immunochemical methods  for  2,4-D,  atrazine,  alachlor, and
metolachlor  using  surface  water   samples  spiked  at  three  different
concentrations.  Seven replicate samples were  prepared  at  each  of three
fortification  levels  and analyzed  using  EPA Methods  3510/8151  (515.1)
and 507; those results were compared with results obtained using
commercially  available  immunochemical  test  kits.  The  fortification
levels  of  atrazine,  alachlor  and  metolachlor  corresponded   to  the
estimated detection limit (EDL), the limit of quantitation (LOQ, 3 x
EDL), and ten times the LOQ specified in Method 507.  In the case of
2,4-D, the measurement limit for the immunoassay test kit was higher
than the EDL specified in Method 515.1.  Therefore, the fortification
levels were selected according to the limits of the test kit.

The tests of the kits for 2,4-D, alachlor, and metolachlor were fully
successful.  The major problem identified during testing was false
positives and poor  accuracy using the  atrazine test kits.   This  problem
may have been the result of a bad calibration standard supplied  with the
kit  (the manufacturer has  since changed  suppliers)   or cross-reacting
materials in the sample.   No triazine herbicides or degradation  products
were detected in the  matrix blanks using GC/MS (<0.05 ug/mL).
                              Fortification Level
                            Low          Mid          High

2,4-D Recovery (%)         110.8         91.7         124.3
Standard Deviation          28.5         25.2          13.6
RSD (%)                     25.7         27.5          10.9

Alachlor Recovery (%)       89.8         96.4         113.2
Standard Deviation          12.8         18.9          10.3
RSD (%)                     14.3         19.6           9.1

Metolachlor Recovery (%)    81.1        126.2          78.8
Standard Deviation          10.2          2.9          15.4
RSD (%)                     12.6          2.3          19.5

Atrazine Recovery (%)      193.4*       197.8*        143.8*
Standard Deviation          29.9         13.1          11.1
RSD (%)                     15.5          6.6           7.7

      * Failed to meet project data quality objectives (DQOs)
                                         251

-------
INTRODUCTION

Immunoassay was one of the major topics at EPA's Tenth Annual Waste
Testing and Quality Assurance Symposium (ENVIRACS) held last July.  It
appears that the efforts of manufacturers, Regulatory Agencies as well
as academic, government and commercial laboratories have resulted in
reliable products that can be used to screen for pollutants in the
environment using immunochemistry.   A number of immunoassay screening
procedures have been promulgated or will be proposed for the RCRA
program in SW-846.   These include methods for pentachlorophenol, PCBs as
Aroclors,  petroleum hydrocarbons and Toxaphene.

The Office of Pesticide Programs also has an interest in immunoassay as
a low-cost, reliable screening procedure for monitoring pesticides in
the environment.  Based on immunoassay results, investigators from the
University of Iowa reported the presence of pesticides in mid-West
flood water samples during the Spring of 1993.   However, their sampling
and analysis procedures were not well documented.   OPP chose SAIC to
systematically test the validity of immunoassay kits for screening
samples for contamination from 2,4-D, Alachlor, Metolachlor, and
Atrazine.   SAIC was specifically tasked to provide data that could be
used to evaluate the timeliness, costs, accuracy,  and precision of
immunoassay.

This task was specifically authorized to establish the suitability of
immunoassay for screening flood water samples.  However, the mid-Western
flood waters had receded by the time that the SAIC work assignment was
in place.  As a result, each analyte was added to surface waters from
San Diego County in order to simulate flood water.

Immunoassay Binding

The measurement technology used in immunoassay kits is formally called
Enzyme-Linked Immunosorbent Assay (ELISA).  Antibodies used in test kits
are immobilized on the walls of test tubes, 96-well assay plates,
magnetic particles, or membranes.  Immunoassay  measurements are
accomplished by competitive binding between pollutants extracted from a
sample and a pollutant-enzyme-conjugate supplied as part of the kit.
When the extract of a highly contaminated sample is analyzed,  most
antibody sites bind the extracted pollutants.  When a non-contaminated
sample or a blank is analyzed, most antibody sites bind the pollutant-
enzyme-conjugate.  The antibody tubes or assay  plates are then washed to
remove any unbound extract.

Antibody reagents can be used to measure pollutants directly in aqueous
samples.  However,  antibodies are incompatible  with organic extraction
solvents,  most oily matrices and certain reactive  wastes.  As a result,
solids must be extracted with methanol and diluted before antibody
binding.  Additional cleanup procedures may be  required to make
immunoassay suitable for measuring pollutants in oily and reactive
matrices.

Visualization Technique

The difference in the amount of pollutant in sample extracts, blanks,
                                          252

-------
and standards becomes apparent during the second incubation of the ELISA
procedure. All of the bound enzyme-conjugate  reacts with  a substrate to
produce a colored product.  Therefore, the color observed in the kit is
inversely proportional to the concentration of pollutant in the sample
extract:  (1) a darker color means a lower concentration  in the sample,
(2) less color means a higher pesticide concentration in  the sample.

Immunoassay screening techniques compare the  color produced from sample
extracts with a standard that corresponds to  the action level in the
sample.  This relationship is illustrated in  Figure 1 (provided by
Millipore).

Although not currently authorized by the EPA, immunoassay can be used
for quantitative analysis.  The upper and lower quantitation range of
the method is established by analyzing 3 to 5 calibrators in duplicate.
When the concentration of pollutants in a sample extract  exceeds the
upper calibration range (i.e., is lighter), that extract  should be
diluted so that it falls within the dynamic range.  The precision of
immunoassay measurements should be documented by performing duplicate
analyses of sample extracts, positive controls and blanks.
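
The kit-specific data reduction algorithm is not described here; in generic competitive-ELISA practice, however, unknowns are often interpolated on an absorbance-versus-log(concentration) calibration relationship, with color decreasing as concentration increases as described above. The sketch below is such a generic illustration only, not the Millipore photometer's algorithm, and the calibrator values are hypothetical.

    # Generic illustration of competitive-ELISA quantitation: absorbance falls as
    # concentration rises, and unknowns are interpolated on an absorbance vs.
    # log10(concentration) calibration line.  This is NOT the Millipore photometer's
    # algorithm (not described in this paper); calibrator values are hypothetical.
    import math

    calibrators = [(0.1, 1.20), (1.0, 0.80), (10.0, 0.40)]   # (conc µg/L, absorbance)

    def interpolate(absorbance: float) -> float:
        """Piecewise-linear interpolation of log10(conc) against absorbance."""
        pts = sorted(calibrators, key=lambda p: p[1])          # ascending absorbance
        for (c_hi, a_lo), (c_lo, a_hi) in zip(pts, pts[1:]):
            if a_lo <= absorbance <= a_hi:
                frac = (absorbance - a_lo) / (a_hi - a_lo)
                log_c = math.log10(c_hi) + frac * (math.log10(c_lo) - math.log10(c_hi))
                return 10 ** log_c
        raise ValueError("outside the calibration range; dilute the extract and re-assay")

    print(f"{interpolate(0.60):.2f} µg/L")   # ~3.16 µg/L; a darker tube would give a lower result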

Data Review And Quality Assurance Considerations

When reviewing immunoassay data, one must remember that this is a
different measurement technology from chromatographic analysis.
Therefore, data quality objectives (DQOs) need to be designed for ELISA
and not just adapted from monitoring programs that use chromatographic
analysis.  Immunoassay QA should include performance checks that
document photometer performance.  In addition, data sheets should
provide the ambient temperature when the test was performed, storage
conditions for the kits, the lot number of all reagents as well as all
recorded raw data.  Under no circumstances should reagents from
different kits or from different manufacturers be employed to perform a
single analysis.  Data reviewers should ensure that results for
duplicate analyses, positive controls and blanks fall within acceptance
windows specified in the method or defined as project DQOs.  Reviewers
should confirm that sample calculations are provided and  that all data
reductions are performed properly.

Documentation of analyst training is an important quality assurance
consideration for immunoassay techniques.  Specific training in
performing these methods in the field is particularly important when
those measurements may be used to support real-time remediation
decisions.
EXPERIMENTAL DESIGN

Laboratory work was conducted following Good Laboratory Practices, as
described in an approved Quality Assurance Project Plan
(QAPP).  Water samples were obtained from the Santa Margarita River (San
Diego County, CA) and fortified with three different concentrations of
atrazine, alachlor, metolachlor, and 2,4-D.  Seven replicate
fortifications of each fortification level were analyzed using EPA
Methods 507 and 515.1 (8151); those results were compared with results
                                       253

-------
 obtained using  commercially  available  (Millipore)  immunochemical test
 kits  based  on ELISA.

 Single analyses  of  each  of the  seven replicate  fortifications  were  made
 for each target  analyte  using the  chromatographic  methods.   Duplicate
 immunochemical  determinations of each  replicate fortification  were  made
 as specified in  the manufacturer's instructions.   The  mean  value of the
 duplicate immunochemical determinations was used for quantitation.   A
 reagent blank and a matrix blank were  analyzed  along with each
 fortification level.

 Samples analyzed by method 507  were fortified with a mixture of  three
 target herbicides,  atrazine, alachlor, and metolachlor.  Samples  analyzed
 by method 515.1  (8151) were  fortified  with 2,4-D only.  Samples analyzed
 by the immunoassay  kit were  fortified  with the  individual target
 analytes.   The  fortification levels of atrazine, alachlor and
 metolachlor correspond to the estimated detection  limit  (EDL), the  limit
 of quantitation  (LOQ, 3  x EDL), and ten times the  LOQ  specified  in
 method 507.  In  the case of  2,4-D, the measurement limit for the
 immunoassay test kit was higher than that EDL specified  in Method 515.1
 (8151).  Therefore, the  fortification  levels for 2,4-D were selected
 according to the limits  of the  2,4-D test kit.

 Instrumentation

 Gas Chromatograph   Hewlett  Packard 5890 Series II GC with a 7673 auto-
 sampler, and a Vectra 486s/20 with HP  Chemstation.

Method 507 chromatographic conditions:
      column:              30 m x 0.53 mm ID, 1.5 µm film thickness DB-5
      carrier:             helium at 5 mL/min
      make up:             helium at 25 mL/min
      hydrogen:            3 mL/min
      air:                 100 mL/min
      inj. temp:           200°C
      det. temp (NPD):     240°C
      inj. volume:         2 µL
      initial temp:        130°C
      initial time:        5.3 min
      rate 1:              12°C/min
      final temp. 1:       190°C
      final time 1:        0 min
      rate 2:              3°C/min
      final temp. 2:       230°C
      final time 2:        2.37 min

2,4-dimethyl-nitro-benzene:        4.61 minutes (surrogate)
atrazine retention time:          13.35 minutes
alachlor retention time:          16.30 minutes
metolachlor retention time:       17.80 minutes
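
As a quick consistency check on the Method 507 oven program above, the total run time works out to about 26 minutes, comfortably beyond the last retention time listed (17.80 min for metolachlor). A small sketch of that arithmetic, with the segments copied from the conditions above:

    # Total run time for the Method 507 oven program listed above.
    # Each segment is (ramp rate in °C/min, final temp in °C, hold time in min).
    initial_temp, initial_hold = 130, 5.3
    segments = [(12, 190, 0.0), (3, 230, 2.37)]

    time_min, temp = initial_hold, initial_temp
    for rate, final_temp, hold in segments:
        time_min += (final_temp - temp) / rate + hold
        temp = final_temp

    print(f"Total run time: {time_min:.1f} min")     # ~26.0 min
    # Retention times from the conditions above (min): atrazine 13.35,
    # alachlor 16.30, metolachlor 17.80 -- all well within the program.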
                                        254

-------
Method 515.1 chromatographic conditions:
      column:              30 m x 0.53 mm ID, 1.5 µm film thickness DB-5
      carrier:             helium at 5 mL/min
      make up:             5% methane in argon at 55 mL/min
      inj. temp:           200°C
      det. temp (ECD):     300°C
      inj. volume:         1 µL
      initial temp:        150°C
      initial time:        0.5 min
      rate:                6°C/min
      final temp.:         275°C

2,4-D retention time:   8.20 minutes

Photometer   Millipore EnviroQuant™ photometer with filter block,
keypad, liquid crystal display, microprocessor, printer, and tube
holder.  It is a discrete-wavelength, bichromatic photometer.  The
photometer included a CPU with an automated data reduction algorithm.


Immunochemistry Test Kits:  (1) Millipore Envirogard™ Alachlor
QuantiTube Test Kit, ENVIR TOO 06, (2) Millipore Envirogard™ 2,4-D
QuantiTube Test Kit, ENVIR TOO 03, (3) Millipore Envirogard™ Triazine
QuantiTube Test Kit, ENVIR TOO 01, (4) Millipore Envirogard™
Metolachlor QuantiTube Test Kit, SD3P 212S4.
All equipment, supplies, and reagents needed for the immunochemistry
analysis, including calibration standards, were provided in the specific
kits.
         TABLE 1.  CHEMICAL STANDARDS, SOURCES, PURITY, AND LOT NUMBERS

Analyte                        CAS #          Lot #      Purity %          Source
Atrazine                       1912-24-9      J210       99.0 (neat)       EPA
Alachlor                       15972-60-8     115        99.9 (neat)       EPA
Metolachlor                    51218-45-2     124603     97.0 (neat)       Crescent Chemical
2,4-D                          94-75-7        JB01257    99.0 (neat)       Ultra Scientific
1,3-dimethyl-2-nitrobenzene    89-87-2        H-0052     250 µg/mL         Ultra Scientific
(507 surrogate)
                                        255

-------
RESULTS

Method 515.1 (Method 8151) for 2,4-D - Two trials of Method 515.1 were
performed during this study.  Initial evaluation of Method 515.1 was not
successful.  Calibration was not linear and method recoveries did not satisfy
the project DQO criterion for calibration.  After consultation with the EPA
WAM, laboratory procedures were modified: glassware was cleaned with
methanolic potassium hydroxide (10%), 5% methane in argon (P-5) was employed
as a detector makeup gas, and commercial 2,4-D methyl esters (Ultra) were used
for calibration.

The second trial of Method 515.1 gave mean recovery and precision values
meeting project DQOs for 2,4-D at all fortification levels.  The calibration
data for the target analyte was linear and all calibration check standards
gave less than 20% difference from initial calibration.  The reagent and
matrix blanks did not exhibit response exceeding one half the low level
calibration standard response within the retention time window of any of the
target analytes.  The mean recovery for 2,4-D in samples fortified at 2.0 µg/L
(EDL) was 111.9% with a precision of 18.6%.  The mean recovery at 6.0 µg/L
(LOQ) was 104.1% with a relative standard deviation (RSD) of 17.9% and the
mean recovery at 60.0 µg/L (10 x LOQ) was 97.0% with an RSD of 9.3%.  These
data are presented in Table 2A.

2,4-D immunoassay test kits - The test kit gave acceptable 2,4-D recoveries
for samples fortified at 2 µg/L (EDL) and at 6.0 µg/L (LOQ).  The mean
recovery for 2,4-D in samples fortified at 60 µg/L (10 x LOQ) was high.  The
precision values for samples fortified at the EDL and LOQ were high, while the
precision for samples fortified at 10 x LOQ was acceptable.

The mean recovery for 2,4-D in samples fortified at 2 µg/L was 110.8% with an
RSD of 25.7%.  The mean recovery for 2,4-D at 6.0 µg/L was 91.7% with an RSD
of 27.5%, and the mean recovery for 2,4-D at 60.0 µg/L was 124.3% with an RSD
of 10.9%.  These data are presented in Table 2B.

The calibration data associated with these samples was linear  (r = 1.0 and r =
0.9993) with all points falling within the expected  range.  The  calibration
check standards associated with samples fortified at the EDL  and the LOQ
exhibited less  than 12% difference from initial calibration.   However,  the
calibration  check standard associated  with the 10 x LOQ group  exceeded 20%
difference.   The matrix blank response was less than one half  the low level
calibration  response for all data sets.
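
The calibration check acceptance test applied throughout this study (less than
20% difference from initial calibration) is a simple ratio comparison; a minimal
Python sketch with illustrative response factor values rather than study data:

def percent_difference(initial_rf: float, check_rf: float) -> float:
    """Percent difference of a calibration check relative to the initial RF."""
    return 100.0 * abs(check_rf - initial_rf) / initial_rf

initial_rf = 0.105   # hypothetical initial calibration response factor
check_rf = 0.117     # hypothetical continuing calibration check response factor

diff = percent_difference(initial_rf, check_rf)
print(f"{diff:.1f}% difference -> {'acceptable' if diff < 20.0 else 'exceeds criterion'}")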
                                       256

-------
           TABLE 2.  ANALYSIS OF 2,4-D IN TERMS OF PERCENT RECOVERY

                                  2A.  GC/ECD

                                       Fortification Level
Replicate #                   2.0 µg/L       6.0 µg/L      60.0 µg/L
     1                           97.9          107.5          96.9
     2                           93.2          109.3          97.4
     3                          100.7          111.1          93.0
     4                          105.8           78.9          82.4
     5                          104.6          118.3         113.1
     6                          131.4           78.0          98.1
     7                          150.0          125.8          97.9
Mean Percent Recovery           111.9          104.1          97.0
Standard Deviation               20.8           18.6           9.0
RSD (%)                          18.6           17.9           9.3

                                  2B.  IMMUNOASSAY

                                       Fortification Level
Replicate #                   2.0 µg/L       6.0 µg/L      60.0 µg/L
     1                          134.0           86.5          96.6
     2                           83.0          135.0         139.9
     3                           96.5          101.5         132.9
     4                           83.5           72.7         127.0
     5                          159.5          109.8         122.6
     6                          120.5           64.7         127.1
     7                           98.5           72.0         123.8
Mean Percent Recovery           110.8           91.7         124.3
Standard Deviation               28.5           25.2          13.6
RSD (%)                          25.7           27.5          10.9
Method 507 for atrazine, alachlor, and metolachlor - Method 507 gave mean
recovery and RSD values meeting project DQOs (70 - 120% recovery and RSD <
20%) at all fortification levels.  The calibration data for all target
analytes were linear and all calibration check standards gave less than 20%
difference from initial calibration.  The reagent and matrix blanks did not
exhibit response exceeding one half the low level calibration standard
response within the retention time window of any of the target analytes.

Using Method 507, the mean recovery for atrazine in samples fortified at 0.13
µg/L (EDL) was 94.2% with an RSD of 11.2%.  The mean recovery for atrazine at
0.39 µg/L (LOQ) was 117.7% with an RSD of 3.9% and the mean recovery for
atrazine at 3.9 µg/L (10 x LOQ) was 114.6% with an RSD of 6.9%.

Using Method 507, the mean recovery for alachlor in samples fortified at 0.38
                                        257

-------
µg/L (EDL) was 101.2% with an RSD of 16.9%.  The mean recovery for alachlor at
1.10 µg/L (LOQ) was 107.8% with an RSD of 3.5% and the mean recovery for
alachlor at 11.0 µg/L was 89.2% with an RSD of 6.7%.

Using Method 507, the mean recovery for metolachlor in samples fortified at
0.75 µg/L (EDL) was 89.3% with an RSD of 5.9%.  The mean recovery for
metolachlor at 2.20 µg/L (LOQ) was 103% with an RSD of 3.3% and the mean
recovery for metolachlor at 22.0 µg/L was 88.0% with an RSD of 7.0%.

Atrazine immunoassay test kits - The atrazine immunoassay test kit gave high
recoveries (> 120%) for atrazine.  Furthermore, the matrix blanks gave a
positive response for atrazine at levels equivalent to the low level
calibration standard (0.05 µg/mL).  Analysis of the matrix blank using
Method 507 did not show the presence of atrazine, but the atrazine detection
limit by this method is specified at 0.13 µg/L and so may not be capable of
detecting atrazine at 0.05 µg/L.

A major problem was identified in testing the atrazine immunoassay kit.  These
kits produced false positives and poor accuracy.  The manufacturer (Millipore)
believes that the problem is the result of a bad lot of calibration standard
or an amount of atrazine in the standard that is not appropriate to the
capacity of the antibody reagents.  Millipore has subsequently changed the
source for their standards, but those new kits were not retested.
                                        258

-------
          TABLE 3.  ANALYSIS OF ATRAZINE IN TERMS OF PERCENT RECOVERY

                                  3A.  GC/NPD

                                       Fortification Level
Replicate #                  0.13 µg/L      0.39 µg/L      3.9 µg/L
     1                           87.1          116.9         121.8
     2                           85.4          112.8         123.5
     3                           88.8          126.2         101.4
     4                           83.2          116.3         113.0
     5                          102.2          114.4         113.5
     6                          110.5          116.0         114.4
     7                          102.3          121.2         113.0
Mean Percent Recovery            94.2          117.7         114.6
Standard Deviation               10.6            4.6           7.9
RSD (%)                          11.2            3.9           6.9

                                  3B.  IMMUNOASSAY

                                       Fortification Level
Replicate #                  0.13 µg/L      0.39 µg/L      3.9 µg/L
     1                          161.5          220.5         125.6
     2                          238.5          210.3         159.0
     3                          230.8          189.7         135.9
     4                          184.6          189.7         139.7
     5                          192.3          187.2         147.4
     6                          176.9          200.2         152.6
     7                          169.2          187.2         146.2
Mean Percent Recovery           193.4          197.8         143.8
Standard Deviation               29.9           13.1          11.1
RSD (%)                          15.5            6.6           7.7
Alachlor immunoassay test kits - The test kit gave alachlor recovery and RSD
values meeting project DQOs for all fortification levels.  A linear response
was obtained for the calibration standards over the range 0.1 µg/L to 5 µg/L
(r = 0.9999).  The calibration check standards exhibited less than 13%
difference from the initial calibration.  Matrix blanks did not exceed one
half the response of the low level calibration standard (0.1 µg/L).

The mean recovery for alachlor in samples fortified at 0.38 µg/L was 89.9% with
an RSD of 14.3%.  The mean recovery for alachlor at 1.10 µg/L was 95.4% with
an RSD of 19.6% and the mean recovery for alachlor at 11.0 µg/L was 113.2%
with an RSD of 9.1%.  Data for the analysis of alachlor are presented in Table
4.
                                          259

-------
          TABLE 4.  ANALYSIS OF ALACHLOR IN TERMS OF PERCENT RECOVERY

                                  4A.  GC/NPD

                                       Fortification Level
Replicate #                  0.38 µg/L      1.10 µg/L     11.0 µg/L
     1                           96.4          114.5          94.4
     2                          100.3          104.2          95.6
     3                           97.1          109.5          78.9
     4                           74.4          105.7          88.3
     5                          123.7          105.7          88.1
     6                          115.2          107.2          90.2
     7                           96.4          117.1          87.8
Mean Percent Recovery           101.2          107.8          89.2
Standard Deviation               17.1            3.8           6.0
RSD (%)                          16.9            3.5           6.7

                                  4B.  IMMUNOASSAY

                                       Fortification Level
Replicate #                  0.38 µg/L      1.10 µg/L     11.0 µg/L
     1                           71.1           96.4         122.7
     2                           81.6          111.8         103.6
     3                           78.9          130.0         103.2
     4                          105.3           80.0         121.4
     5                           92.1           83.6         120.5
     6                          100.0           78.2         120.9
     7                          100.0           94.5         100.0
Mean Percent Recovery            89.8           96.4         113.2
Standard Deviation               12.8           18.9          10.3
RSD (%)                          14.3           19.6           9.1
Metolachlor immunoassay test kits - The test kit gave acceptable metolachlor
recovery and RSD values for samples fortified at 0.75 µg/L (EDL) and at 22
µg/L (10 x LOQ).  High recoveries were obtained for samples fortified at 2.20
µg/L (LOQ) although the precision was acceptable.  The matrix blank associated
with samples fortified at the LOQ also gave a high result, exceeding one half
the response of the low level calibration standard.  Subtraction of this
value, 0.17 µg/L, from the measured concentrations would result in recoveries
meeting project DQOs.  The matrix blanks associated with the other two sample
sets did not exceed one half the low level calibration standard.

The mean recovery for metolachlor in samples fortified at 0.75 µg/L was 81.1%
with an RSD of 12.6%.  The mean recovery for metolachlor at 2.20 µg/L was
126.2% with an RSD of 2.3% and the mean recovery for metolachlor at 22.0 µg/L
                                          260

-------
was 78.8% with an RSD of 19.5%.  Data for the analysis of metolachlor are
presented in Table 5.
The calibration standards associated with samples fortified at the EDL and LOQ
gave a linear response  (r = 0.9998) with all associated calibration check
standards showing less than 16% difference from initial calibration.  The
calibration standards associated with samples fortified at 10 x LOQ gave a
non-linear response with none of the individual standards responding within
the test kit specified expectation values.  The calibration check standard
associated with this group of samples exceeded 20% difference from the initial
calibration.
        TABLE 5.  ANALYSIS OF METOLACHLOR IN TERMS OF PERCENT RECOVERY

                                  5A.  GC/NPD

                                       Fortification Level
Replicate #                  0.75 µg/L      2.20 µg/L     22.0 µg/L
     1                           92.7          106.4          93.8
     2                           81.2          101.3          94.5
     3                           84.4          107.6          77.2
     4                           93.2          102.4          87.7
     5                           90.3           99.2          87.3
     6                           93.9          100.6          87.7
     7                           83.6          107.5          85.8
Mean Percent Recovery            89.3          102.9          88.0
Standard Deviation                5.2            3.4           6.2
RSD (%)                           5.9            3.3           7.0

                                  5B.  IMMUNOASSAY

                                       Fortification Level
Replicate #                  0.75 µg/L      2.20 µg/L     22.0 µg/L
     1                           69.3          124.1          93.6
     2                           74.7          129.1         103.6
     3                           77.3          124.5          65.6
     4                           72.0          123.2          63.6
     5                           88.0          124.1          67.3
     6                           93.3          130.0          74.5
     7                           93.3          128.6          83.6
Mean Percent Recovery            81.1          126.2          78.8
Standard Deviation               10.2            2.9          15.4
RSD (%)                          12.6            2.3          19.5
                                        261

-------
DISCUSSION

With the exception of the atrazine immunoassay kit, only minor differences
were observed between results obtained using immunoassay analyses and using
Method 507 or 515.1.  Those results are compared using the 95% confidence
intervals calculated for each set of replicate chromatographic and
immunochemical analyses.  These confidence intervals were calculated using the
program InStat™ and are based on a Poisson distribution of the data rather
than the Student t test.  These data are presented in Table 6.
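
For context, a conventional 95% confidence interval for the mean of a set of
replicate results can be computed as in the minimal Python sketch below; the
replicate values are illustrative only, and the intervals in Table 6 were
produced with InStat under the distributional assumption noted above.

from math import sqrt
from statistics import mean, stdev

measurements = [1.9, 2.3, 2.1, 2.6, 2.2, 2.4, 2.0]   # hypothetical results, µg/L

n = len(measurements)
m = mean(measurements)
se = stdev(measurements) / sqrt(n)    # standard error of the mean
t_crit = 2.447                        # two-sided 95% t value for n - 1 = 6 df

lower, upper = m - t_crit * se, m + t_crit * se
print(f"95% CI: {lower:.2f} - {upper:.2f} µg/L")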

Comparison of the 2,4-D results obtained using GC/ECD and immunoassay reveals
that the measured concentrations correspond to the fortification levels in the
samples.  However, the 2,4-D kit gave a somewhat positive bias (< 20%) at the
highest fortification level (60 µg/L).  The immunoassay method gave higher
RSDs than Method 515.1 (8151).  All 2,4-D analyses were less precise than
analyses for atrazine, alachlor and metolachlor.

The chromatographic results for atrazine correspond to the fortification
levels in the samples; however, the immunoassay results were consistently
higher than the fortification levels.  Analysis of blanks produced a positive
result corresponding to the low-point fortification level.  Consultation with
Millipore indicates that there may have been problems with the calibration
reagent supplied with the kit.  These results indicate that the claimed lower
quantitation level for the atrazine kit is greater than the low-point
calibration level (0.13 µg/L).
               TABLE 6.  CONFIDENCE INTERVALS FOR THE ANALYTES

                                       Fortification Level
Analyte/technique            2.0 µg/L       6.0 µg/L       60.0 µg/L
2,4-D GC                     1.9-2.6        5.2-7.3        53-63
2,4-D Immunoassay            1.7-2.8        4.1-6.9        67-82

                             0.13 µg/L      0.39 µg/L      3.9 µg/L
Atrazine GC                  0.11-0.13      0.44-0.47      4.2-4.7
Atrazine Immunoassay         0.22-0.29      0.73-0.83      5.2-6.2

                             0.38 µg/L      1.10 µg/L      11.0 µg/L
Alachlor GC                  0.33-0.43      1.1-1.3        9.3-10.4
Alachlor Immunoassay         0.30-0.39      0.87-1.25      11.4-13.5

                             0.75 µg/L      2.20 µg/L      22.0 µg/L
Metolachlor GC               0.63-0.70      2.2-2.4        18.1-20.4
Metolachlor Immunoassay      0.54-0.68      2.7-2.8        14.3-20.5
                                         262

-------
Figure 2 provides a graphical representation which summarizes the minor
differences observed between chromatographic and immunochemical results in
this study.  Values obtained using the immunoassay kits are plotted on the X
axes; values obtained using chromatographic analyses are plotted on the Y
axes.  Values obtained for the 4 analytes using both methods are presented on
one page to facilitate comparison of these data.

The graph for 2,4-D results illustrates that both chromatographic and
immunochemical methods are less precise for this analyte than for the other
three analytes.  The precision of Method 515.1 is similar to the immunoassay
method for the analysis of 2,4-D.  There appears to be a significant positive
bias for the analysis of 2,4-D using immunoassay at the 60 µg/L fortification
level.

The graph for atrazine results illustrates that both chromatographic and
immunochemical methods have similar precision.  While values obtained using
immunoassay are somewhat higher than chromatographic analyses, any comparison
of method bias is suspect because of calibration difficulties observed using
the immunoassay method for atrazine.

The graph for alachlor results illustrates that both chromatographic and
immunochemical methods have similar accuracy and precision.  The concentration
of alachlor measured using Method 507 was slightly less than the 11.0 µg/L
fortification level; the concentration of alachlor measured using immunoassay
was slightly more than the 11.0 µg/L fortification level.

The graph for metolachlor results illustrates that both chromatographic and
immunochemical methods have similar accuracy.  Method 507 appears more precise
than the immunoassay method.  The metolachlor immunoassay kit had limited
testing outside of the factory; it was released as a product by Millipore
during this study.


CONCLUSIONS

Immunoassay is a useful technique for environmental monitoring and
should become one of the tools used for making environmental decisions.
However,  immunoassay  requires different analytical and data reduction
skills from chromatographic analysis.  The only way to develop these new
skills is through training or by conducting immunoassays.
Unfortunately, it seems that our industry places barriers to adopting
measurement techniques simply because they are new.

Let us hope that immunoassay does not follow the example of the
capillary GC analysis of organochlorine pesticides.  It took a decade to
get that technique approved for regulatory applications.  Now, almost no
one uses packed columns for those analyses.


ACKNOWLEDGEMENTS

The authors acknowledge the helpful discussions of Barbara Young of
Millipore Corp. and Brian Skoczendki of Immunosystems in conducting this
study.
                                         263

-------
                       FIGURE 1
                Immunoassay Technology
Competitive ELISA:  Principles of the Procedure

[Figure:  response of zero, low-concentration, and high-concentration samples
plotted against concentration of sample.]
                          264

-------
         FIGURE 2.  RECOVERY USING METHODS 507/515.1 vs. IMMUNOASSAY

[Figure:  four scatter plots (Atrazine, Alachlor, Metolachlor, and 2,4-D) of
concentrations measured by Methods 507/515.1 (Y axes) versus concentrations
measured by immunoassay (X axes).]
                                    265

-------
42
     AUTOMATED LIQUID-LIQUID  EXTRACTION OF
                    SEMIVOLATILE  ANALYTES
Rick McMillin, Mike Daggett, Diane Gregg, and Lisa Hurst, U.S. Environmental
Protection Agency, Region 6 Lab, Houston, Texas, 77099; Kevin Kelly, Ph.D., David L.
Stalling, Ph.D., Nancy L. Schwartz, Laboratory Automation, Incorporated, Columbia,
Missouri, 65202.
ABSTRACT

Organic extractions have traditionally been very labor intensive and tend to be time
consuming, especially when care is given to recovery and precision of quality control
spikes.  Bath temperatures, emulsions, and rate of concentration can directly affect
control recoveries by traditional methods.  The Region 6 Houston Lab has been actively
evaluating various equipment and methodology for performing organic extractions in
order to reduce turn-around time while maintaining a high level of quality control.  Our
current shift has been away from traditional separatory funnel / K-D, sonication, and
continuous extraction devices, to the improved techniques of the Corning Accelerated
One-Step™ and automated Soxhlet (Soxtherm™).  As part of our ongoing investigation
of new and improved extraction techniques, we are evaluating an automated liquid-liquid
extraction device produced by Laboratory Automation, Inc. (ABC Instruments), called
the ExCell™.  We have compared this new device with various other extraction
techniques, with emphasis on comparison to the Accelerated One-Step1 since this is now
becoming our main water matrix extraction technique.

This new technique (electrically assisted extraction, the ExCell) more closely mimics
separatory funnel extractions (equilibrium based) than the continuous extraction device in
physical interaction.  It uses  an innovative  electric  field to provide the water /  solvent
mixing action as the water sample passes up through aliquots of solvent. Because of the
replacement of mechanical mixing with electrically  actuated dispersion, emulsions were
not encountered with this technique and are much less likely than in separatory funnel
extractions.  This technique is highly automated up to the point of concentration.  For the
purposes of this test, the Labconco RapidVap N2™  was used to concentrate all samples
from the ExCell, traditional continuous extractor, and separatory funnel extractions. The
main goal of our lab was to evaluate the productivity enhancements that this device could
provide to  our lab,  and  attempt to measure how equivalent  this technique is to  other
       1 One-Step is a trademark of the Corning Corporation.  ExCell and Soxtherm are trademarks of
Laboratory Automation, Inc., a subsidiary of OI Analytical.  RapidVap is a trademark of Labconco.
TurboVap is a trademark of Zymark Corp.  No endorsement is indicated or implied by the U.S. EPA for
any of these companies or products.  Opinions expressed are those of the authors only.
                                             266

-------
techniques we are currently employing.  Equivalency was measured by spiking various
matrices with 54-64 semi-volatile target compounds, extracting the samples, and
comparing the results of the ExCell with results from other extraction techniques.
Precision and accuracy data are presented.  The extractions were all carried out at a single
acid pH (< 2).  Analysis of the extracts was performed by method 8270.  For each matrix
evaluated, seven replicates at low level (10 µg/L) were extracted (for MDL determination)
and three replicates were extracted at a high level (500 µg/L).  The matrices evaluated
were TCLP buffer #1 (pH 4.93, ±0.05), DI water, ground water, and waste water.  Not all
extraction techniques could be compared in all matrices at this time.  Results and
principles of extractor operation are discussed.
INTRODUCTION

The Houston Lab has been very interested in the wide range of evolutionary
developments in the organic extraction field.  The reason for this interest is twofold.
First, our agency (the EPA) is under an Executive Order to reduce the amount of
hazardous waste generated by half by the end of 1999.  This has motivated us to look
hard at new methods that reduce solvent consumption or allow solvent collection for
recycling.  The second reason is that there have been many new developments in organic
extractions that have improved productivity, decreased labor, and reduced turnaround
times in the lab.  In the past, our extraction lab has been a bottleneck in the overall
productivity of our organic analysis.  This can certainly cause time problems later,
especially if the analysis shows possible problems with the original extraction that can
only be solved or confirmed by a re-extraction.  Through some implementations of new
techniques and equipment, our turnaround times have improved significantly from 3-6
days (sometimes longer) to 1-3 days.  With the main goal of enhancing these productivity
improvements in our lab even further, we contacted ABC Instruments about conducting a
study on their new ExCell liquid/liquid extraction device to see if it would meet our
needs.

The extraction techniques we currently have evaluated, or have experience with, include
traditional separatory funnel (SF), traditional continuous extractor (CE), Corning
One-Step (OS), Corning Accelerated One-Step (AOS), solid phase extractions (SPE),
and the ExCell (EX) for water matrices.  For solid matrices, we have evaluated or used
sonication, Soxhlet, automated Soxhlet (Soxtherm™), and organic microwave.  In addition
to extraction techniques, we have evaluated medium to large (20-500 ml) concentration
techniques / equipment for the extractions that require a separate concentration step (this
would include all the above listed except for OS, AOS, and Soxtherm).  These
concentration techniques include traditional K-D / water bath, Zymark TurboVap™,
Zymark TurboVap 500™, and the Labconco RapidVap N2™.  The methods we currently
use for the bulk of our water extractions consist of AOS (semi-volatiles), separatory
                                             267

-------
funnel (Pesticides/PCBs), and sonication (soil semi-volatiles (ABNs) & Pest/PCBs).  We
are currently using the RapidVap N2 for concentrating the extracts (not required for
AOS).  We are moving to reduce or eliminate the use of separatory funnel extractions as
we add AOS hardware, or possibly move to other extraction techniques.  A large
percentage of our samples are from the Superfund program and we have standardized the
bulk of our analysis on the CLP methodologies.  This allows us to use a single acid (pH <
2) extraction with a continuous extractor for the majority of our semi-volatile analysis.
For these reasons, we have elected to compare the ExCell mainly with the AOS at a
single acid pH (< 2).
PRINCIPLES OF OPERATION

For most organic extractions, intimate mixing between aqueous and organic phases is
necessary to reach extraction equilibrium and assure good analyte recoveries. How this
mixing  is physically accomplished  can  vary and is  one means of innovation in the
extraction field.   Traditional techniques for liquid/liquid extraction  accomplish this by
mechanical (SF)  or kinetic  energy  (CE) means.  The following will provide a  brief
description of the extraction techniques employed in this comparison.
Separatory Funnel (SF):  This is an equilibrium technique in which the aqueous
sample is first taken to a pH of < 2.  An aliquot of methylene chloride (MeCl) is added
and the two phases are mixed by physical shaking of the separatory funnel to the point
of equilibrium of analytes between the phases.  When the phases separate, the solvent
is removed by draining.  This process is repeated two more times.  All of the solvent
aliquots are combined, dried, and concentrated.  Traditional concentration of solvent is
performed by K-D / water bath and nitrogen blow-down apparatus.

[Figure 1:  Continuous Extractor Illustration]

Continuous Extractor (CE):  This is an extraction that is not required to reach full
equilibrium with the solvent aliquot because the system is being constantly refreshed
with new solvent.  The sample is taken to a pH of < 2 and added to the CE device.  The
CE device consists of a boiling solvent reservoir on one
                                             268

-------
side, and a sample reservoir on the other with a condenser on top (see figure 1).  The
boiling solvent vapor traverses to the condenser over the sample, condenses, and then
drips into the aqueous sample.  The solvent extracts analytes as it passes through the
sample and collects in the bottom of the sample vessel.  After the solvent reaches a
certain level, it siphons back into the boiling vessel to concentrate the analytes and then
recycles through the process again, this time as clean distilled solvent vapor.  This
process continues for 18 to 24 hours, continually providing fresh aliquots (drops) of
solvent to extract the sample.  The only agitation provided by this method is the drop of
solvent falling through the aqueous sample.  The aliquots (drops of solvent) are much
smaller than the SF, but much more numerous and in contact with the sample over a
much longer period of time.  At the end of the 18-24 hours, the solvent is collected, dried,
and concentrated.  Traditional concentration of solvent is performed by K-D / water bath
and nitrogen blow-down apparatus.
[Figure 2:  Accelerated One-Step Illustration (condenser, extractor body, hydrophobic
membrane, and jacketed concentrator tube (thimble))]

Accelerated One-Step (AOS):  This technique is almost identical to CE with some
notable differences.  One difference is the addition of a semipermeable hydrophobic
membrane on the sample side of the apparatus that the water column sits on (see figure
2).  The hydrophobic membrane allows solvent to pass freely through, but does not allow
the water to pass through.  This membrane holds the water on top, and when the stopcock
assembly is open, it allows the solvent to pass directly into the boiling chamber by
gravity (no siphon effect required, thus allowing a faster flow of solvent through the
system).  The hydrophobic membrane also dries the solvent in the process, thus
eliminating a sodium sulfate drying step required before concentration.  Another
difference from traditional CE is in the design of the solvent boiling chamber.  In the
traditional CE this chamber is a boiling flask; in the OS and AOS it more closely
resembles a K-D with a three-ball Snyder column in shape and function.  The bottom of
the AOS solvent boiling apparatus contains a water jacketed concentrator tube (thimble)
for hot water to provide the heat to boil the solvent.  When the solvent return valve
(stopcock assembly) from the sample chamber is closed (after extraction is complete), the
                                            269

-------
concentration of the solvent begins in the jacketed thimble (the solvent boils off and does
not return to the thimble, but collects on the sample side).  The bottom of the thimble
contains a small projection under the hot water jacket of about 0.5 - 1.0 ml that is not
heated by the water and thus protects against the extract going to dryness.  These
modifications to the continuous extractor allow for shorter extraction times (5-6 hrs.
versus 18-24 hrs.), use less solvent (100 ml versus 500 ml), and dry and concentrate the
extract all in one device with little operator intervention.  It also collects the used solvent
for disposal or recycling.  The down side to this device is the initial setup time (not very
significantly different from CE), glass breakage, hot water distribution, and providing
sufficient cooling to prevent solvent and volatile analytes from going out the top of the
condensers.

Electrically Assisted Extraction (ExCell):  This device seems to work somewhat
similarly to both SF and CE, but more closely follows SF in principle.  With this device,
the aqueous sample is drawn up through a standing aliquot of solvent in a stream (similar
to an upside down CE).  This stream is bombarded by an electrical field that provides the
mixing action (similar to SF) between the two phases.  This method also uses limited
fixed aliquots of solvent (also similar to SF).

[Figure 3:  ExCell Schematic Diagram]

Workers at Oak Ridge National Laboratory discovered that droplets of a conductive
fluid (water) can be dispersed within a non-conductive fluid (MeCl) by application of an
electric field2.  This principle is used in the ExCell automated liquid/liquid extraction
system to intimately mix droplets of aqueous sample with solvent as they are pumped
through an aliquot of the extraction solvent.  The dispersed droplets encounter a field of
a different strength and this causes them to recombine as a bulk phase (water) which
floats over the extraction aliquot (MeCl), thus greatly reducing the chance of emulsions.
The original work used dual internal electrodes; however, the commercially
available extraction system combines both in a single, external electrode wrapped around
a PTFE funnel, reducing opportunities for corrosion or contamination3.  The entire
2      U.S. Patent No. 4,767,515. Scott, T.C. and Wham, R.M. "Surface Area Generation on Droplet
Size Control in Solvent Extraction Systems Using High Intensity Electric Fields", Aug. 30, 1988.
3      U.S. Patent No. 5,384,023.
                                                270

-------
sample is pumped through the extraction solvent aliquot at a rate of approximately 32
ml/min.

If desired, the sample can be made to automatically repeat passage through the extraction
aliquot for increased extraction yield.  Following the last passage of the aqueous sample
through the extraction solvent, the aliquot is automatically separated, collected, dried (if
desired) and the extraction can be repeated with fresh aliquots of solvent.  Most of the
automated extraction work described in this manuscript was performed using three
extracts (three fresh solvent aliquots) with the sample making one passage through the
extract. Each ExCell extraction instrument will batch extract up to six samples.

RapidVap N2 Concentration Technique:  This technique is much faster than K-D /
nitrogen blow-down and has a design that provides some relief from watching for dryness
that the K-D does not have.  This device uses a nitrogen stream with physical swirling and
heating of the container to evaporate the extract solvent.  The bottom of the glass
solvent container comes to a low volume point that is not in the heated zone and thus
does not go to dryness as rapidly.  This device will safely evaporate a 250 ml MeCl
extract to 1.5 ml in two hours (by the settings used in our lab).
EXPERIMENTAL PROCEDURE

Sample Preparation & Collection:  All samples were measured at 1 liter and spiked
with 1 ml of either a high level or low level working mixed standard solution in MeCl to
give a concentration of either 500 µg/L or 10 µg/L for 54 target analytes.  A separate
working surrogate solution (in MeCl) was used in which 1 ml was spiked to give a
concentration of 50 or 100 µg/L (depending on analyte).  Stock standards were purchased
from Supelco.  Some analytes (5 compounds) were not in our initial cocktail and were
added later as a separate solution (1 ml) to subsequent sample sets (low level groundwater
and waste water only).  All samples were spiked with surrogates and target analytes
before pH adjustment and subsequent extraction.  TCLP buffer #1 was made according to
method 1311 of the SW-8464 with glacial acetic acid and sodium hydroxide to a pH of
4.93 (±0.05).  The waste water was collected in 4 liter glass amber containers at a waste
treatment facility in Houston, Texas.  The ground water was collected in 4 liter glass
amber containers in McBaines, Missouri.  The waste water and ground water were kept
in the dark and refrigerated at 2-6°C until spiking and subsequent extraction.
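
The spike levels quoted above follow from simple dilution arithmetic (1 ml of
working standard added to a 1 liter sample); a minimal Python sketch, with the
working standard concentrations implied by the text:

def spiked_level_ug_per_L(std_conc_ug_per_mL: float, spike_mL: float,
                          sample_L: float) -> float:
    """Concentration added to the sample by the spike."""
    return std_conc_ug_per_mL * spike_mL / sample_L

print(spiked_level_ug_per_L(500.0, 1.0, 1.0))   # high level spike: 500 µg/L
print(spiked_level_ug_per_L(10.0, 1.0, 1.0))    # low level spike: 10 µg/L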

Extraction: All of the following extractions were performed at a single pH < 2 (using
6N H2SO4 to acidify).
4      EPA Test Methods for Evaluating Solid Waste
                                               271

-------
       Separatory Funnel (SF):  The technique we employed is from method 3510B of
       the SW-846.  Our extractions were performed with three 60 ml MeCl aliquots at a
       single pH of less than 2.  The extract was dried by a sodium sulfate drying
       column.  Mechanical shakers were employed to perform this extraction.

       Continuous Extraction (CE):  We followed method 3520B of the SW-846.
       Approximately 500 ml of MeCl was used as the solvent.  The extract was dried by
       a sodium sulfate drying column.

       Accelerated One-Step (AOS):  We followed a modification of method 3520B in
       which the solvent (MeCl) amount was reduced from 500 ml to 100 ml.  The
       extraction time was also reduced from 18-24 hrs. to 5-6 hrs.

       Electrically Assisted Extraction (ExCell):  All extractions were performed with
       three 80 ml extracts of 1 pass each.  This is similar to what is used in separatory
       funnel extractions, in which three separate 60 ml MeCl aliquot extractions of the
       same sample are performed.  The first 80 ml aliquot is added to the ExCell sample
       container by the technician, and the sample is shaken before being placed into the
       ExCell instrument.  The subsequent solvent metering, solvent addition, timing,
       rinsing, solvent collection, and sample collection are all automated by the
       instrument.  The finished extracts and used sample go into separate collection (or
       disposal) containers at the end of the extraction process, ready for concentrating
       or a second pH adjustment.

Concentration:  Concentrations were performed by the RapidVap N2 rather than by
Kuderna-Danish (K-D) / water bath for all extractions other than the AOS, which has a
self-contained concentration apparatus similar to K-D.  The block temperature on the
RapidVap was set to 35°C and the vortex speed set to 45%.  The nitrogen gas stream was
manually adjusted to an arbitrary flow (just to dimple the surface of the solvent).
Additional nitrogen blow-down was required for all methods to adjust the final extraction
volume down to 1 ml.  This was accomplished using the Organomation Meyer N-EVAP5.

Analysis:  Analysis was performed by GC/MS using method 8270 of the SW-846.
Quantitation was performed with a single 50 ng/µL standard shot daily and compared with
a five-point curve.  The instrument used was an HP-5890/5971 with a 30 m, 0.25 mm ID
HP-5MS GC column with a 0.25 µm film thickness.  An HP-7673 autosampler was attached
in which we used a 1 µL autosampler injection for all samples.  This instrument was being
used on a continuous basis to analyze a variety of dirty samples during the time of this
study and may have been affected by residual effects of such analysis.
5      The Meyer N-EVAP is a trademark of Organomation Associates Inc.
                                            272

-------
RESULTS and DISCUSSION

Productivity Comparison: All methods were evaluated for productivity features. This
can be a  very  biased type of analysis since opinions  are  by nature very  operator
dependent.  Overall, the main technician involved in these extractions preferred  the
operation of the ExCell over all other methods.  This was due to more than mere time
considerations (see table 1), but was also due to mechanical operation of hardware and
potential safety hazards.

       The ExCell is very user friendly and is the most mechanically automated.  It is
       processor controlled and several variables can be programmed (rinse times,
       number of extracts, number of passes for each extract, etc.).  The only glassware
       required is the sample bottle, receiving bottle, and drying funnel.  This reduces the
       chance of injury due to glassware breakage.  The solvents are pumped into the
       instrument externally, thus eliminating repetitive solvent pouring except for the
       first addition, which we performed manually.  Loading and unloading the sample
       is very easy, but care should be taken in positioning of the sample straw.  Extract
       drying may be accomplished on-line with a sodium sulfate drying funnel, or
       performed separately later.

       Table 1:
                        ESTIMATED TIMES FOR SAMPLE PREPARATION
                              (in minutes; for six samples)

                                  ExCell        AOS           SF            CE
                                Tech  Total  Tech  Total  Tech  Total  Tech  Total
Function                        Time  Time   Time  Time   Time  Time   Time  Time
Equipment Prep (wash, etc)       35     35    60     60    60     60    60      60
Time of Extraction                0    210     0    360     0     45     0   1,080
Cooling Step                      0      0     0      0     0      0     0      60
Drying Step (+ prep)             20     20     0      0    15     15    45      45
Breakdown / Washing              20     20    60     60    35     35    35      35
Concentration Step*              30    180     1     10    30    180    30     240
Nitrogen Blow-down Step**        30    120    30    120    30    120    30     120
Total Time                      135    585   151    610   170    455   200   1,640
Amount of Solvent Used (ml)            240          100          250           500

       * The RapidVap N2 was used for all methods that required separate concentration.
       ** The same nitrogen blow-down method was used for all extraction methods.
       The Accelerated One-Step is the second choice for ease of use and speed by the
       operator.  The main productivity feature of this device is that once it is set up and
                                               273

-------
       running, there is very little operator intervention with the extraction and
       concentration process.  After the 5-6 hour extraction time is complete, the
       operator merely turns the stopcock to concentrate the sample down to about 1.5
       ml.  The design is such that it is not very likely for the extract to evaporate to
       dryness, therefore samples are rarely lost for this reason (this problem was also
       reduced on all other methods by using the RapidVap N2 rather than K-D for
       concentration purposes).  The problem for the operator is the initial setup and
       potential injury from glassware breakage.  This device is probably more prone to
       glass breakage than the others due to its somewhat complex design.  Setup is a bit
       involved, but not difficult once accustomed to it.

       The Separatory Funnel is probably the most familiar technique used for
       extractions.  The main advantage it holds is in the shorter total amount of time it
       takes to extract a set of samples (up to 10 + QC?).  The AOS may edge this one
       out at higher sample numbers though.  The big disadvantage is the labor-intensive
       hands-on time required and potential glass breakage / injury.

       The Continuous Extractor by far consumes the most time, labor, and solvent
       compared with the other techniques.  It is similar to the AOS in setup and
       operation, but the separate cool-down, drying, and concentration steps put it at a
       disadvantage in operator and total time involved.

Equivalency Comparison:  Table 2 shows the accuracy and precision data for low level
(10 µg/L) DI water spikes for all four sample extraction methods discussed.  Table 3
shows data provided by ABC Instruments in which DI water was extracted by the ExCell
instrument at different pH.  Tables 4 & 5 show the accuracy, precision and MDL6 data for
low level ground water and waste water spikes respectively (ExCell and AOS only).
Table 6 shows the recovery data for the high level spike (400-500 µg/L) for all currently
available matrices.  The main interest for our lab was the CLP target compounds (not all
of which were included in this list).  Salient aspects of the recoveries afforded using each
extraction technique are discussed below.
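
The MDL values reported in Tables 4 and 5 follow the 40 CFR part 136, appendix B
procedure cited in footnote 6; a minimal Python sketch of that calculation, using
seven illustrative low level replicate results rather than data from this study:

from statistics import stdev

replicates_ug_per_L = [9.1, 10.4, 8.7, 9.8, 10.9, 9.5, 10.2]   # hypothetical

t_99 = 3.143                          # one-sided 99% t value for n - 1 = 6 df
mdl = t_99 * stdev(replicates_ug_per_L)

print(f"MDL = {mdl:.2f} µg/L")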

       Separatory Funnel Extraction provided lower recoveries than the continuous
       extractions for more water soluble analytes, such as phenol and 4-nitrophenol.
       This is expected since the extraction is not an exhaustive one.  Previous workers
       have noted similar differences between separatory funnel and continuous
       extractions7.  Further investigation of our data led to the theory that the separatory
       funnel extractions may not have been made at a sufficiently acidic pH, and the
       suspect data affected were removed from table 2.  Historical data exist for
       comparison and this part of the experiment will be repeated.

6       40 Code of Federal Regulations, part 136, appendix B.
7       Valkenburg, C.A., Munslow, W.D., Butler, L.C.; J. AOAC 1989, 72(4), 602-608.
                                              274

-------
Continuous  Extractor provided higher recoveries of the more  water  soluble
analytes noted  above,  but lower  recoveries of more volatile  analytes (e.g.
dichlorobenzenes,       hexachloroethane,      1,2,4-trichlorobenzene,      or
hexachlorocyclopentadiene).  This result is expected for an extraction that takes
place over a long period of time with reflux in an apparatus that is prone to vapor
losses.    Again, previous workers have  noted  similar  differences between
separatory funnel and continuous extractions5.

Accelerated  One-Step extraction provided higher recoveries of the more water
soluble analytes noted above (similar to  CE).  The tendency to lose volatile
analytes was not as great for this technique than for CE.  This  is probably a
consequence of shorter extraction times needed for the AOS technique relative to
CE. Losses of volatile analytes were more pronounced at the lower spiking level
than the high level in DI water.  The difference was less pronounced when the
TCLP buffer matrix was extracted.  Certain amines were much more  readily
recovered  at the   higher  spiking  level  than at lower   spiking levels (ex.
4-chloroaniline, 2-naphthylamine, and 4-aminobiphenyl). For these compounds,
which are difficult to recover from samples at low pH, AOS provided somewhat
higher recoveries than did CE.

Electrically  Assisted Extraction (ExCell) provided lower recoveries than the
continuous extractions for more water soluble analytes  such as phenol and
4-nitrophenol.  This is not unexpected since electrically assisted extraction, like
separatory funnel extraction, is not exhaustive.  Somewhat higher recoveries were
obtained from  ExCell extraction than from SF extraction for the more water
soluble analytes.  The ExCell data seem to more closely follow the SF data (with
the exception of 4-Aminobiphenyl, once the suspect compounds were removed),
which is to be expected since both are equilibrium processes.

Like CE and AOS, ExCell extraction resulted in lower recoveries for some of the
more volatile analytes.  The increase in recoveries of those analytes  in going from
low to high spiking levels was less pronounced for ExCell, suggesting that the
losses  occur by a different  mechanism.   No heat  is  applied during  ExCell
extraction; however, there are periods during which some air is pumped through
the extraction aliquots, and losses may be occurring at those times.

ExCell also shares the AOS tendency for higher recoveries of amines at the higher
spiking level; however, ExCell recoveries for those compounds were generally
lower than those from AOS and in some cases these analytes were not recovered
at all using the ExCell (aniline).  This may also prove true for separatory funnel
extractions for those compounds at a strongly acidic pH.  Table three shows
results of work performed at the manufacturer's laboratory in Columbia, Missouri
                                              275

-------
       at various pH that indicated better recovery for the amines (and other compounds)
       mentioned.  There were also a few other analytes for which recoveries showed a
       strong dependence on spiking level only for the ExCell technique (e.g. terphenyl,
       di-n-octyl phthalate, and 7,12-dimethylbenz[a]anthracene).

CONCLUSIONS

Although recoveries were often lower using ExCell extraction, the technique can reduce
the cost and complexity of sample preparation while increasing worker safety.  Therefore
instrument  method optimization will  be attempted to bring  recoveries closer to those
obtained using other extraction techniques. Most of the compounds that did not perform
well by the ExCell were not CLP target compounds.  For  those that were, this is a
concern that will be investigated  further  in  an  effort to optimize the method.   It is
believed that future investigation will show this  method to be closer to the separatory
funnel technique in performance.  Ruggedness of this method needs to be investigated
with a wider variety of real sample matrices.  This instrument could be a real benefit to
the overall productivity of the lab and will be investigated further.
       Table 3 (provided by ABC Instruments):
          COMPARATIVE AMINE RECOVERY AT DIFFERENT PH CONDITIONS
                              FOR THE EXCELL*

 #    Compound                          acid then    pH 4    base then   table 2
                                          base                  acid      ExCell
 4    Aniline                                2          4         50          0
16    N-Nitrosomorpholine                   53         30         54         68
17    N-Nitrosodi-n-propylamine             49         36         49         84
27    4-Chloroaniline                       21         28         54          0
29    N-Nitrosodi-n-butylamine              50         38         50         83
36    2-Nitroaniline                        62         55         61         69
41    Dibenzofuran                          50         42         48         67
42    2-Naphthylamine                       39         65         73          0
48    Diphenylamine                         66         68         69         34
49    1,3,5-Trinitrobenzene                 88         87         90         49
52    4-Aminobiphenyl                       63         87         96          0
56    Methapyrilene                          0          0         83          0
58    p-Dimethylaminoazobenzene             91         92         95         66
60    2-Acetylaminofluorene                111        106        115         81
64    7,12-Dimethylbenz[a]anthracene        66         54         79         16

*80 µg/L in DI water, GC/FID
                                             276

-------
                    SUMMATION OF LOW LEVEL DI WATER RECOVERIES
                                   (Table #2)

[Table:  accuracy data (average percent recovery) and precision data (%RSD) for
the EXCELL, AOS, SF, and CE extractions of low level DI water spikes for each
semivolatile compound; compounds marked in the original table are not on the
CLP target list.]
                                                    277

-------
               SUMMATION OF LOW LEVEL GROUND WATER RECOVERIES
                                   (Table #4)

[Table:  accuracy, precision, and MDL data for low level ground water spikes
(ExCell and AOS); compounds marked in the original table are not on the CLP
target list.]
                               278

-------
               SUMMATION OF LOW LEVEL WASTE WATER RECOVERIES
                                   (Table #5)

[Table:  accuracy, precision, and MDL data for low level (10.000 µg/L) waste
water spikes (ExCell and AOS), including p-dimethylaminoazobenzene and
bis(2-ethylhexyl)phthalate.]
                                           279

-------
                     SUMMATION OF ALL HIGH LEVEL RECOVERIES
                                   (Table #6)

[Table:  accuracy data (average percent recovery, EXCELL vs. AOS) for high
level (500 µg/L) spikes in DI water, ground water, waste water, and TCLP
buffer #1; N-nitrosodimethylamine is marked "Not Added" for several matrices.]
                                                      280

-------
                                                                                43
           DETERMINATION OF POLY(ETHYLENE GLYCOL)-600  FROM THE
    PHARMACEUTICAL MANUFACTURING INDUSTRY BY DERIVATIZATION AND LIQUID
                             CHROMATOGRAPHY
William A. Telliard, Chief, Analytical Methods Staff, Engineering and
Analysis Division, Office of Water, USEPA, 401 M Street, S.W., Washington,
DC 20460; Alan W. Messing, Principal Chemist, DynCorp, 300 N. Lee St.,
Alexandria, VA 22314; Richard Whitney, Organics Department Manager, ETS
Analytical Services, Inc., 1401 Municipal Road, NW, Roanoke, VA 24012.

ABSTRACT

Section 304(h) of the Clean Water Act directs EPA to promulgate guidelines
establishing   test  procedures   (analytical  methods)   for  analyzing
pollutants.    These  test  procedures  are used for filing applications for
compliance monitoring under the National Pollutant Discharge Elimination
System  (NPDES)  found at  40  CFR Parts 122.41(j)(4)  and 122.21(g)(7), and
for the pretreatment program  found  at 40 CFR 403.7(d).  Promulgation of
these  methods  is   intended  to  standardize  analytical  methods  within
specified industrial categories and across industries.

EPA has promulgated analytical methods for monitoring pollutant discharges
at 40 CFR Part 136, and has promulgated methods for analytes specific to
given industrial categories at 40 CFR Parts 400 to 480.  EPA has published
proposed regulations (60 FR 21654, May 2, 1995) establishing discharge
limitations for the Pharmaceutical Manufacturing Industry (PMI).
Wastewaters from the PMI contain a complex mixture of conventional
pollutants, toxic (priority) pollutants, and non-conventional pollutants.
Among the non-conventional pollutants identified from the PMI is
poly(ethylene glycol)-600 (PEG-600).

PEG-600 is commonly used in the PMI as a non-ionic surfactant and
thickening agent and has been identified as a constituent of PMI waste
streams.  In addition, PEGs have been implicated in the formation of toxic
alkoxy acetic acid metabolites (Flam, 1994).  PEG-600 is composed of 12 to
15 oligomers with a molecular weight centered around 600 Da.  Methods for
determination of PEGs found in the literature generally call for
hydrohalic acid cleavage followed by gas chromatography or turbidimetric
determination of the native analytes.  Neither of these methods provides
results that identify PEGs in specific molecular weight ranges.  For this
reason, we have developed a method for the quantitative determination of
PEG-600, based on the work of Kinahan and Smyth (1991), using
derivatization followed by high pressure liquid chromatography.  Detection
limits of around 300 parts-per-billion in water can be achieved with a
quantitation limit of one part-per-million.

INTRODUCTION

Section 304(h) of the Clean Water Act directs EPA to promulgate guidelines
establishing   test  procedures   (analytical  methods)   for  analyzing
                                        281

-------
pollutants.   These test procedures  are  used for filing applications and
for  compliance  monitoring   under   the  National  Pollutant  Discharge
Elimination System (NPDES).   Promulgation of these methods is intended to
standardize analytical methods within specific industrial categories and
across industries.   EPA has promulgated  analytical methods for monitoring
pollutant discharges at 40 CFR Part 136, and has promulgated methods for
analytes specific to given industrial categories  at 40 CFR Parts 400 to
480.    EPA  has published   regulations  (60 FR  21654,  May  2,  1995)
establishing discharge limitations  for  the Pharmaceutical Manufacturing
Industry  (PMI).    The  Agency   acquired  data  on  the  presence  and
concentration  of approximately  400  analytes   from the  PMI during  18
sampling episodes and pilot studies conducted during a 10-year period from
May of 1983 to October of 1993.  The data collected during these studies
and  information acquired  from  a  detailed questionnaire  sent  to  all
domestic pharmaceutical manufacturers  form the basis  for  regulation of
about sixty analytes from the PMI.

Wastewaters  from the  PMI contain  a  complex  mixture of  conventional
pollutants,  toxic (priority)  pollutants,  and non-conventional pollutants.
Analytical methods  exist for  the  determination of  all of the conventional
and priority pollutants  from the PMI,  but  many of  the non-conventional
pollutants were without promulgated analytical  methods.   Among  the non-
conventional pollutants  identified  from  the PMI  without a promulgated
analytical method is poly(ethylene glycol)-600  (PEG-600).

PEG-600  is  commonly  used in the  PMI   as  a non-ionic surfactant  and
thickening agent and has  been identified as a  constituent  of PMI  waste
streams.  In addition, PEGs have been implicated  in the  formation of toxic
alkoxy acetic acid metabolites (Flam, 1994) .  PEG-600 is composed of 12 to
15 oligomers with a  molecular weight centered around 600 Da.  Methods for
determination  of  PEGs  found in  the   literature  generally  call  for
hydrohalic acid  cleavage  followed by gas chromatography or turbidimetric
determination of the native analytes.  Neither of these methods provides
results that identify PEGs in specific molecular weight ranges.  For this
reason, we have developed a method for the quantitative determination of
PEG-600,  based  on  the  work   of  Kinahan and   Smyth  (1991),   using
derivatization followed by high pressure liquid chromatography.

EXPERIMENTAL

Sample Extraction and Derivatization

Place one liter of sample and one mL  of surrogate standard (10  mg/mL of
di(ethylene glycol)monohexyl  ether in tetrahydrofuran)  in a liquid-liquid
extractor and extract with pesticide grade dichloromethane for 18 hours.
Dry  the  dichloromethane  solution  over  anhydrous   sodium  sulfate  and
evaporate off the solvent using the Kuderna-Danish procedure.  Dry again
over anhydrous sodium sulfate when the volume reaches 10 - 25 mL and use
a gentle stream of dry nitrogen to remove most of the remaining solvent.
Quantitatively  transfer  the  residue to  a V-shaped reaction  vial  using
                                          282

-------
anhydrous dichloromethane or anhydrous tetrahydrofuran and remove the last
of the solvent with a stream of dry nitrogen.

After ensuring that the extract is free of water, add 5 mL of
derivatization solution (10 mg/mL 3,5-dinitrobenzoyl chloride in
tetrahydrofuran) and 2 drops of anhydrous pyridine.  Seal and heat the
vial and contents in a sand bath at 60°C (±5°C) for 1 hour.  Cool the vial
and quantitatively  transfer the  contents  to a 125-mL separatory funnel.
Add 50 mL of diethyl ether  (ether) and sequentially extract  with two 25-mL
portions of  dilute  hydrochloric  acid,  then two 25-mL portions of water,
then two 25-mL portions  of  sodium bicarbonate  solution, and finally with
two 25-mL portions of saturated sodium chloride  solution.  Take care not
to lose any ether solution  during the extraction procedure.   Place a small
plug of  glass  wool  in a funnel and add approximately  10 g of anhydrous
sodium sulfate.  Drain  the ether solution through the  sodium sulfate in
the funnel,  then rinse  the separatory funnel with two  10-mL portions of
ether  and drain through  the  anhydrous  sodium  sulfate in  the funnel.
Quantitatively  transfer the ether solution  to a  clean Kuderna-Danish
apparatus and evaporate the solvent.   Perform a solvent  exchange with 40%
acetonitrile/water, adjust  the volume  to  2 mL and filter,  if necessary,
for analysis.

High Pressure Liquid Chromatography (HPLC)

      Chromatographic conditions.

             Column:  Betasil C18, 250 mm by 4.6 mm, 5-µm particle size
             (Keystone 255-701, or equivalent)

             Mobile Phase:  40% acetonitrile/water to 100% acetonitrile
             over a period of 20 minutes.

             Flow Rate:  2.0 mL/min.

             UV Detector:  254 nm.

             Injection Volume:  50 µL.

The retention time of the PEG-600 derivative relative to the surrogate
derivative is centered about 0.63.  Because PEG-600 is a mixture of
poly(ethylene glycol) oligomers, the exact nature of PEG-600 samples from
various manufacturers, and of different batches from a single manufacturer,
may vary.  For this reason, concentrations of PEG-600 in a specific waste
stream are  best determined when  standards are  prepared using the same
batch of PEG-600 in use by the  pharmaceutical manufacturer  at the time of
discharge of the waste stream under analysis.  Where it  is not possible to
obtain such a sample,  adequate  results can be obtained by use of a PEG-600
product  as  a  standard  that is  unrelated  to  the  one in  use by  the
pharmaceutical manufacturer, and careful definition of an "elution range"
for derivatized  PEG-600  in both  the  external standards and the samples.
An "elution range"  or retention time window is defined as a characteristic
                                         283

-------
period  of time  during which  the  derivatized PEG-600  elutes  from the
chromatographic column.  This  range should encompass at least 90 percent
of the PEG-600 derivative in both the standard and the sample.  The width
of the  retention  time window used for quantitation  should  be based upon
measurements  of  actual retention time  variations of standards  over the
course of a day.   Three times the standard deviation of the retention time
for a compound can be used to calculate a suggested window size;  however,
the experience of the  analyst  should weigh heavily in the interpretation
of chromatograms.
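A minimal sketch of that window calculation follows; the retention times
are illustrative values, not data from this study.

    import statistics

    # Illustrative retention times (minutes) for the surrogate derivative
    # from repeated standard injections over the course of a day.
    retention_times = [12.40, 12.43, 12.38, 12.41, 12.44, 12.39]

    mean_rt = statistics.mean(retention_times)
    half_width = 3 * statistics.stdev(retention_times)   # suggested window: 3 x SD

    print(f"window: {mean_rt - half_width:.2f} to {mean_rt + half_width:.2f} min")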

Calculations

Calculate each response factor (RF) as follows (the mean RF is based on
the 5 calibration points):

          RF = concentration of standard / area of the signal

          mean RF = (sum of the 5 RF values) / 5

Calculate the concentration of PEG-600 as follows:

          µg/L = mean RF x area of signal x concentration factor

          where:

          concentration factor = final volume of extract /
                                 initial sample volume

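A minimal numerical sketch of the two calculations above; all values are
hypothetical and chosen only to show the arithmetic.

    # Hypothetical 5-point calibration: standard concentration (ug/L in the
    # injected solution) paired with the measured peak area (arbitrary units).
    standards = [(500.0, 1.02e5), (1000.0, 2.05e5), (2000.0, 4.00e5),
                 (4000.0, 8.10e5), (8000.0, 1.62e6)]

    rfs = [conc / area for conc, area in standards]    # RF = concentration / area
    mean_rf = sum(rfs) / len(rfs)                      # mean RF over the 5 points

    # Hypothetical sample: 1 L of wastewater taken to a 2 mL final extract.
    concentration_factor = 2.0 / 1000.0                # final volume / initial volume

    sample_area = 3.5e5                                # area of the PEG-600 signal
    extract_conc = mean_rf * sample_area               # ug/L in the extract
    sample_conc = extract_conc * concentration_factor  # ug/L in the original sample
    print(f"PEG-600 in sample: {sample_conc:.1f} ug/L")
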
CONCLUSIONS

      Using  this  method it  is  possible to  routinely detect  PEG-600 at
about 300  parts-per-billion and to  quantitate at one  part-per-million.
The method is simple to apply and can be performed by any laboratory
equipped with an HPLC system.  Caution must be exercised in the extraction
and  concentration  steps  to minimize  loss of  material.    Extraction
efficiencies  are  around 60 percent.  While  it is best to  use standards
derived from the feed stock used at the time of waste generation, adequate
results can be achieved from standards  that  are  unrelated to the feed
stock.
REFERENCES

Flam, P., Science, 265, 9 Sept. 1994, 1519.

Kinahan, I.M. and Malcolm R. Smyth, J. Chrom., 565  (1991), 297 - 307.

U.S.  Environmental   Protection  Agency,   Analytical  Methods  for   the
Determination  of  Pollutants in  Pharmaceutical Manufacturing Industry
Wastewater,  EPA 821-B-94-001,  February, 1995.
44
     DETERMINATION OF NON-PURGEABLE, WATER-SOLUBLE ANALYTES FROM THE
        PHARMACEUTICAL MANUFACTURING INDUSTRY BY GC/MS AND GC/FID
 William  A.  Telliard,  Chief,  Analytical Methods  Staff,  Engineering  and
 Analysis Division, Office of Water, USEPA, 401 M Street, S.W.,  Washington,
 DC  20460;  Alan W. Messing,  Principal  Chemist,  DynCorp,  300 N.  Lee  St.,
 Alexandria, VA 22314;  C.  Lee Helms and C.S.  Parsons,  Pacific  Analytical,
 Inc.,  6349  Paseo  del Lago,  Carlsbad, CA 92009.

 ABSTRACT

 Section 304(h) of the Clean Water Act directs EPA to promulgate guidelines
 establishing   test  procedures   (analytical  methods)   for   analyzing
 pollutants.   These test procedures  are used for filing applications  for
 compliance  monitoring  under the National Pollutant  Discharge  Elimination
 System (NPDES)  found at 40 CFR Parts 122.41(j)(4)  and 122.21(g)(7),  and
 for the  pretreatment program found  at  40 CFR 403.7(d).  Promulgation of
 these  methods  is  intended  to  standardize  analytical methods  within
 specified industrial categories and across industries.

 EPA has promulgated analytical methods for monitoring pollutant discharges
 at  40  CFR Part 136,  and has promulgated methods for analytes  specific to
 given  industrial categories at 40  CFR Parts 400 to 480.  EPA has published
 proposed  regulations  (60 FR  21654,  May 2,  1995)  establishing  discharge
 limitations  for   the  Pharmaceutical   Manufacturing  Industry   (PMI).
 Wastewaters from  the  PMI  contain  a   complex  mixture  of conventional
 pollutants, toxic  (priority) pollutants, and  non-conventional  pollutants.
 Among  the non-conventional pollutants identified from the PMI are a series
 of  non-purgeable,  water-soluble analytes that provide unique challenges to
 analysis by gas chromatography (GC).  These analytes are miscible with
 water  and are  not  readily transferred  to the vapor  phase by passage of a
 stream of gas  through an aqueous solution.  The analytes  to be determined
 simultaneously include common alcohols, low molecular  weight  amines,  and
 other  low  molecular  weight  nitrogen,  oxygen,   or  sulfur   containing
 analytes.   Two methods  have been developed that  employ direct  aqueous
 injection  of  samples  into  the  GC and  determination  of  the analytes  by
 either mass spectrometry (MS) or flame ionization detection (FID).

 INTRODUCTION

 Section 304(h) of the Clean Water Act directs EPA to promulgate guidelines
 establishing   test  procedures   (analytical  methods)   for   analyzing
 pollutants.   These test procedures  are used for filing applications  and
 for  compliance  monitoring  under   the  National   Pollutant   Discharge
 Elimination System (NPDES).  Promulgation of these methods is intended to
 standardize analytical methods within  specific industrial  categories  and
 across industries.  EPA has promulgated analytical methods  for monitoring
 pollutant discharges at 40  CFR Part 136, and has promulgated  methods  for
 analytes  specific to given industrial  categories at 40 CFR Parts 400 to
 480.  EPA has published proposed regulations (60 FR 21654, May 2, 1995)
establishing discharge  limitations  for the Pharmaceutical Manufacturing
Industry (PMI).  The Agency acquired data on the presence and
concentration  of  approximately 400  analytes  from the  PMI during  18
sampling episodes and pilot studies conducted during a 10-year period from
May of 1983 to October  of  1993.  The data collected during these studies
and  information  acquired  from a  detailed questionnaire  sent to  all
domestic pharmaceutical manufacturers  form the basis  for regulation of
about sixty analytes  from  the  PMI.

Wastewaters  from  the  PMI  contain  a  complex  mixture  of  conventional
pollutants, toxic  (priority) pollutants,  and non-conventional pollutants.
Analytical methods exist for the determination of  all of  the conventional
and priority pollutants from the PMI, but  many of the non-conventional
pollutants were  without promulgated analytical methods.   Among  the non-
conventional pollutants identified  from the  PMI  are  a  series  of non-
purgeable,  water-soluble  analytes   that provide  unique challenges  to
analysis by  GC/MS and  GC/FID.   These pollutants  are  listed with  their
Chemical Abstracts Service  Registry Numbers (CASRNs) in Table 1.
      Table 1.  Non-Purgeable, Water-Soluble Analytes from the PMI

      PMI Analyte                              CASRN
      Acetonitrile                             75-05-8
      Diethylamine                             109-89-7
      Dimethylamine                            124-40-3
      Dimethylsulfoxide                        67-68-5
      Ethanol                                  64-17-5
      Ethylene glycol                          107-21-1
      Formamide                                75-12-7
      Methanol                                 67-56-1
      Methylamine                              74-89-5
      Methyl cellosolve (2-methoxyethanol)     109-86-4
      n-Propanol                               71-23-8
      Triethylamine                            121-44-8

These analytes present a unique challenge to simultaneous  analysis by gas
chromatography  because  they  are miscible  with  water  and cannot  be
efficiently  extracted from the aqueous  waste  streams  in  which they are
found.  In addition,  they cannot be efficiently purged from water, even at
elevated temperatures, and trapped for GC analysis.  One alternative for
analysis is  direct  aqueous injection into a  gas  chromatograph equipped
with a  capillary column and  either  a mass spectrometric  detector or a
flame ionization detector.  Because it was not known at the outset which
might provide the most sensitivity for the simultaneous determination of
these analytes, both column/detector combinations were investigated.

EXPERIMENTAL

The experimental section is divided in two subsections:  GC/MS and GC/FID.
Experimental   conditions   for  each  column/detector   combination  and
column/detector specific information is  found  in each subsection.  Method
Detection Limits (MDLs) and Minimum Levels (MLs)  for both approaches are
provided in  the  results  section and  they are  compared and contrasted in
the conclusions section.

Gas Chromatography/Mass Spectrometry.  Analyses were performed using a VG
Trio-1 GC/MS system.  The capillary column used was a Restek Rtx Amine (30
meter, 0.32mm i.d., 1.5 µm film thickness).  The GC was programmed such
that sufficient separation of target analytes was achieved while
minimizing run times.  The GC was held at 40°C for 4 minutes, ramped to
100°C at 8°C per minute, with no hold at 100°C, then rapidly heated to
220°C at 25°C per minute with a 3 minute hold at 220°C.  A 30:1 pre-column
split and 2 µL injections were used to achieve acceptable chromatographic
peak shape.  Helium carrier gas was  introduced at 1.5  mL/min.   The mass
spectrometer was tuned using  p-bromofluorobenzene at  50  nanograms.  The
mass spectrometer scan range  was  20 to 200 atomic mass units.   Table 2
provides  absolute  retention times,   relative   retention  times,  and
quantitation masses for each  analyte,  their labeled analogs (where used),
and the internal standard.
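Judging from the tabulated values, the relative retention times in Table 2
appear to be computed against the labeled analog where one was used and
against the tetrahydrofuran-d8 internal standard otherwise; that reference
choice is an inference, not stated in the text.  A minimal sketch:

    def relative_rt(rt_sec, rt_reference_sec):
        """Relative retention time: analyte RT divided by the reference RT."""
        return rt_sec / rt_reference_sec

    # Values from Table 2: methyl alcohol vs. its d3 analog, and diethylamine
    # vs. the tetrahydrofuran-d8 internal standard.
    print(round(relative_rt(85.5, 85.0), 3))    # -> 1.006
    print(round(relative_rt(188.0, 263.0), 3))  # -> 0.715 (tabulated as 0.717)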

Some  target  analytes  were  not  quantitated  using  isotope  dilution
techniques.  These  included the amines,  ethylene  glycol,  and formamide.
Labeled analogs  of  the amine free  bases were not available.   Ethylene
glycol-ds and formamide-15N could not be used because of their significant
spectral  contributions  to   the  native  analyte.     In  these  cases,
tetrahydrofuran-d8  was used as an  internal standard.

Gas Chromatography/Flame Ionization Detector.  Analyses were performed
using an HP 5880 GC/FID system.  The capillary column used was an SPB-1
Sulfur (30 meter, 0.32mm i.d., 4.0 µm film thickness).  The GC was
programmed such that sufficient separation of target analytes was achieved
while minimizing run times. The GC was held at 40ฐC for 2 minutes, ramped
to 180ฐC at 10ฐC per minute.   The  injection port was set at 200ฐC and the
FID at 300ฐC.  A 30:1 pre-column  split  and 2  piL injections were used to
achieve acceptable  chromatographic peak shape.  Hydrogen carrier gas was
introduced  at  a head pressure  of  10  psi.   Table 3  provides absolute
retention  times  and relative retention  times for each analyte  and the
internal standard.
     Table 2.  Retention Times and Quantitation Masses for the PMI
                            Analytes by GC/MS

                                     Absolute      Relative
                                     Retention     Retention    Quantitation
     PMI Analyte                     Time (sec)    Time         Mass (Da)
     Methylamine                         81           0.308          30
     Methyl alcohol-d3                   85           0.323          35
     Methyl alcohol                      85.5         1.006          32
     Dimethylamine                       93           0.354          44
     Ethyl alcohol-d5                   103           0.394          49
     Ethyl alcohol                      104           1.010          45
     Acetonitrile-d3                    119           0.452          44
     Acetonitrile                       121           1.017          41
     n-Propanol-1-d1                    170           0.464          32
     n-Propanol                         170.5         1.003          31
     Diethylamine                       188           0.717          58
     Tetrahydrofuran-d8                 263           1.000          80
       (internal standard)
     Methyl cellosolve                  290           1.103          45
       (2-Methoxyethanol)
     Triethylamine                      372           1.414          58
     Ethylene glycol                    398           1.513          31
     Formamide                          400           1.521          45
     Dimethyl sulfoxide-d6              639           2.431          66
     Dimethyl sulfoxide                 643           1.006          63

RESULTS

Method detection limits (MDLs) for each analyte were determined by the
method described  in 40 CFR Part 136, Appendix B.   Minimum levels  (MLs)
were calculated from MDLs by multiplying by a  factor of 3.18 and rounding
to the nearest multiple of 1, 2, or 5 x 10^n, where n is a positive or
negative integer, or zero.  Table 4 provides MDLs  and MLs for each native
analyte and for each GC/Detector combination.  Analytes have been arranged
in groups with similar functionality and with the amines first, alcohols
second, and miscellaneous compounds last.
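A short sketch of the ML rounding rule, assuming "nearest multiple of 1, 2,
or 5 x 10^n" means snapping 3.18 x MDL to the closest value of the form 1,
2, or 5 times a power of ten:

    import math

    def minimum_level(mdl):
        """ML = 3.18 x MDL, rounded to the nearest 1, 2, or 5 x 10^n."""
        raw = 3.18 * mdl
        base = int(math.floor(math.log10(raw)))
        candidates = [m * 10 ** n for n in (base - 1, base, base + 1)
                      for m in (1, 2, 5)]
        return min(candidates, key=lambda c: abs(c - raw))

    # Example from Table 4: the acetonitrile GC/MS MDL of 1.7 mg/L gives
    # 3.18 x 1.7 = 5.4, which rounds to an ML of 5 mg/L.
    print(minimum_level(1.7))   # -> 5
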
          Table 3.  Retention Times for PMI Analytes by GC/FID

               PMI Analyte               Absolute        Relative
                                         Retention       Retention
                                         Time (sec)      Time
               Methylamine                  128            0.307
               Methanol                     139            0.334
               Dimethylamine                165            0.396
               Ethanol                      188            0.452
               Acetonitrile                 203            0.488
               n-Propanol                   307            0.737
               Diethylamine                 341            0.819
               Tetrahydrofuran              416            1.000
                (internal standard)
               Methyl cellosolve            429            1.030
                (2-Methoxyethanol)
               Formamide                    473            1.136
               Ethylene Glycol              495            1.189
               Triethylamine                518            1.244
               Dimethyl sulfoxide           676            1.624

CONCLUSIONS

The results provide no clear indication whether use of a GC/MS combination
or  a  GC/FID combination  is superior  for all  analytes.    Choice of  a
column/detector combination will hinge  on the identity of  the  analytes
most important  to  the analyst, industry, permit writer, or regulator.   It
is apparent that amines are best analyzed by GC/FID;  the MDLs for GC/FID
range from about one-third to about one-fifth of  those  for GC/MS.  MLs for
the amines by GC/FID are consistently one-fourth of those for GC/MS.

MDLs for methanol  by GC/MS and GC/FID are about  the same,  while MDLs for
ethanol and  n-propanol  are lower  by GC/MS.   Due  to rounding,  MLs  for
methanol by  the two methods  are the same, while MLs  for  ethanol and n-
propanol are lower when analyzed by GC/MS.  Methyl cellosolve and ethylene
glycol are apparently better  analyzed by GC/FID because  their  MDLs  are
about one-fourth and one-half those achieved by GC/MS, respectively.

Of the remaining three  compounds,  acetonitrile is best analyzed by GC/MS
while  formamide and dimethyl  sulfoxide  are best  analyzed by GC/FID.
Results for formamide by GC/MS showed a high degree of variability.  For
unknown reasons,  the mass spectrometer response  for  formamide was both
very low and inconsistent.


       Table 4.  MDLs and MLs for PMI Analytes by GC/MS and GC/FID

                                      GC/MS                    GC/FID
  PMI Analyte               MDL (mg/L)   ML (mg/L)    MDL (mg/L)   ML (mg/L)
  Methylamine                  83.8         200          19.2          50
  Dimethylamine                68.5         200          22.8          50
  Diethylamine                 72.5         200          15.9          50
  Triethylamine                55.4         200          20.4          50
  Methanol                     21.4          50          13.4          50
  Ethanol                       5.0          20          14.8          50
  n-Propanol                    9.0          20          15.8          50
  Methyl cellosolve            21.5          50           5.4          20
  (2-Methoxyethanol)
  Ethylene glycol              72.7         200          35.4         100
  Acetonitrile                  1.7           5          16.5          50
  Formamide                   407.5        1000          27.9         100
  Dimethyl sulfoxide           36.5         100           5.2          20

REFERENCES

ASTM.   "Standard Test Methods  for Volatile  Alcohols  in Water by Direct
Aqueous-Injection   Gas   Chromatography."     1994  Annual  Book  of  ASTM
Standards, Volume 11.02 (Water(II)).  ASTM, 1916 Race Street,
Philadelphia,  PA  19103-1187.

U.S.  Environmental Protection Agency.   "Method  1624:   Volatile Organic
Compounds by Isotope Dilution GCMS."  Revision C, June, 1989.

U.S.  Environmental  Protection  Agency.   "Analytical  Methods  for  the
Determination of  Pollutants  in Pharmaceutical  Manufacturing Industry
Wastewater."   EPA  821-B-94-001.   February, 1995.
  45
      EVALUATION OF A ROBOTIC AUTOSAMPLER FOR THE ANALYSIS OF VOC'S

          Anne K. Sensel, Valerie J. Naughton. Tekmar Company, P.O. Box 428576,
                              Cincinnati, Ohio 45242-9576

INTRODUCTION

In the fast-paced world we live in, there is always an emphasis on reliable answers in a minimum
amount of time.  Laboratories face these same demands: the analyst is under increasing
pressure to provide maximum productivity.  By coupling automation and versatility, laboratories
can meet this goal. Automation allows a large number of samples to be run virtually unattended.
Versatility prevents costly downtime while an instrument is moved or reconfigured.

The Precept is a vial autosampler that combines both automation and versatility.  It analyzes up
to 48 aqueous or solid samples unattended. The aqueous samples are typically drinking or
waste water samples. The most common environmental solid samples are soil (e.g. clay, humus,
and sand). The Precept accommodates up  to two different sampling modules. The vials are
moved to the sampling modules using a robotic arm. This allows the vials to remain in an upright
position.  A syringe is used to measure the  sample volume. Up to two different standard
solutions  can be automatically added to the sample prior to purging.

There are three sampling modes available for the Precept. The modes are aqueous, solid S1,
and solid  S2. The aqueous module transfers the sample from a standard 40ml vial to  the
sampling  syringe. The aliquot is then transferred to the glassware of the concentrator where the
sample is purged. If automatic standard addition is used, it is transferred to the glassware with
the sample.

The solid  S1 module purges the sample directly in a standard 40ml vial.  The dry sample is placed
in a vial. A long concentric needle is inserted through the septum. Water is measured into the
syringe and transferred to the vial. The standard is automatically transferred into  the vial with the
water. The purge gas enters the vial through holes at the base of the needle. The purged
analytes are swept away from the vial and onto the trap through a hole near the top of the
needle.

The solid S2 module also purges the sample directly in a special 40ml vial. The vial used is
threaded on both ends for septa and caps.  There is also a frit inside the vial to increase purge
efficiency of the solid sample.  The dry solid is placed on top of the frit and water is added to the
vial.  If automatic standard addition is used, it is transferred to the vial with the water.  The
analytes are purged from the vial by two short needles piercing the top and bottom septa. The
bottom needle introduces purge gas up through the frit. The analytes are then swept onto the
trap through the top needle.

The work in this paper focuses on the evaluation of the aqueous module of the Precept.
Compounds from USEPA Methods 8260 and 524.2 were chosen for this assessment.
Linearity of the system was examined utilizing two different configurations.

EXPERIMENTAL

Table 1: Parameters for the Method 8260 configuration

Tekmar Precept/3000 Parameters
Line Temp                   150°C
Valve Temp                  150°C
MCS Line Temp               150°C
Sweep Needle Time           1min
Syringe Fill Volume         25ml
TransLine Sweep Time        0.5min
Syringe Rinse Volume        25ml
# of Syringe Rinses         2
Backflush Filter Time       1min
Flush Needle Time           0.5min
Sweep Lines Time            3.0min
Purge Ready Temp            30°C
Purge Temp                  0°C
Sample Fill                 1.5min
Purge Time                  11min
Dry Purge Time              0min
Transfer Line Type          0.53mm Fused Silica
MCS Desorb Temp             50°C
Trap Type                   Vocarb 3000
Desorb Preheat              245°C
Desorb Temp                 250°C
Desorb Time                 6min
Sample Drain                On
Glassware Rinse             On
Glassware Rinse Time        3min
Glassware Purge Time        1min
Bake Temp                   260°C
Bake Time                   4min
BGB                         Off
MCS Bake Temp               300°C
TPC Setting                 4psi
Purge Flow                  40ml/min
HP 5890/Glass Jet Separator Parameters
Carrier Gas                 Helium
Flow Rate                   10ml/min
Detector A (Jet Separator)    150°C
Detector B (GC/MS Interface)  280°C
Makeup Flow                 20ml/min
Transfer Line interfaced to the column via zero dead union
Column                      DB-624 75m 0.53mm 3um
Temperature Program         40°C hold 1min;
                            20°C/min to 50°C;
                            7°C/min to 150°C;
                            20°C/min to 220°C hold 6min

HP 5970 Mass Selective Detector Parameters
Solvent Delay               2min
EM Voltage                  1700
Scan Range                  35-260
A/D                         3

Table 2: Parameters for the Method 524.2 configuration

Tekmar Precept/3000 Parameters
Line Temp                   150°C
Valve Temp                  150°C
MCS Line Temp               150°C
Sweep Needle Time           1min
Syringe Fill Volume         5ml
Sample Std 1 Transfer       2.5ml
Sample Std 2 Transfer       2.5ml
TransLine Sweep Time        0.5min
Syringe Rinse Volume        25ml
# of Syringe Rinses         2
Backflush Filter Time       1min
Flush Needle Time           0.5min
Sweep Lines Time            3min
Purge Ready Temp            30°C
Purge Temp                  0°C
Sample Fill                 1.5min
Purge Time                  11min
Dry Purge Time              0min
Transfer Line Type          0.32mm Fused Silica
Cryofocuser                 On
Cryo Standby                150°C
Cryofocus Temp              -180°C
Cryo Inject Time            1.0min
Cryo Inject Temp            180°C
MCS Desorb Temp             50°C
Trap Type                   Tenax/Silica Gel/Charcoal
Desorb Preheat              220°C
Desorb Temp                 225°C
Sample Drain                On
Glassware Rinse             On
Glassware Rinse Time        3min
Glassware Purge Time        1min
Bake Temp                   230°C
Bake Time                   12min
BGB                         Off
MCS Bake Temp               300°C
TPC Setting                 4.5psi
Purge Flow                  40ml/min
HP 5890 Series II Plus Parameters
Carrier Gas                 Helium
Column Head Pressure        15psi
Transfer Line interfaced to the column via zero dead union
Detector A (Jet Separator)    150°C
Detector B (GC/MS Interface)  280°C
Column                      DB-VRX 60m 0.25mm 1.4um
Temperature Program         35°C hold 5min;
                            10°C/min to 200°C hold 5min;
                            20°C/min to 220°C hold 5min
HP 5970 Mass Selective Detector Parameters
Solvent Delay                2.0min
EM Voltage                 1600
Scan Range                 35-260amu
A/D                        4
RESULTS AND DISCUSSION

The Precept was evaluated under two configurations.  The first configuration utilized a wide bore
column and jet separator for Method 8260 (Table 1). The linearity of the system from 1 to
100ppb was excellent.

The second configuration used Method 524.2 compounds on a narrow bore column (Table 2). A
short 0.53mm precolumn was used in the cryofocusing module to increase the capacity of the
column during desorb. The linearity of this configuration from 0.5 to 50ppb is shown in Table 4.
Also listed are the Method Detection Limits (MDL) obtained by using seven replicates of 0.5ppb.

TABLE 3 RRF'S AND RSD'S FOR 8260 ANALYTES

Compounds                               RRF          RSD
1.  dichlorodifluoromethane                   0.4           2.8
2.  chloromethane                           0.2           2.4
3.  vinyl chloride                             0.3           2.3
4.  bromomethane                           0.3           5.1
5.  chloroethane                             0.2           8.8
6.  trichlorofluoromethane                    0.6           2.0
7.  1,1-dichloroethene                        0.3           2.6
8.  methylene chloride                       0.3          24.9
9.  trans-1,2-dichloroethene                 0.3           2.9
10. 1,1-dichloroethane                        0.6           2.7
11. cis-1,2-dichloroethene                   0.3           3.2
12. 2,2-dichloropropane                      0.5           3.6
13. bromochloromethane                     0.1           4.2
14. chloroform                               0.6           3.3
15. dibromofluoromethane                    0.6           1.5
16. 1,1,1-trichloroethane                    0.5           2.2
17. 1,1-dichloropropene                      0.5           3.3
18. carbon tetrachloride                      0.5           2.3
19. 1,2-dichloroethane                        0.2           4.8
20. benzene                                0.8           3.0
21. fluorobenzene                            1.0           0.7
22. trichloroethene                           0.4            1.8
23. 1,2-dichloropropane                      0.3           3.2
24. dibromomethane                           0.2           4.2
25. bromodichloromethane                     0.4           2.7
26. cis-1,3-dichloropropene                  0.4          10.8
27. toluene-d8                               5.3           2.8
28. toluene                                  3.1           5.2
29. trans-1,3-dichloropropene                1.4           4.4
30. 1,1,2-trichloroethane                     0.8           4.9
31. tetrachloroethene                        2.9          12.9
32. 1,3-dichloropropane                      1.4            9.0
33. dibromochloromethane                   1.8           4.7
34. 1,2-dibromoethane                        1.2           4.5
35. chlorobenzene                           3.6           4.0
36. 1,1,1,2-tetrachloroethane                  1.7            5.3
37. ethylbenzene                            6.1            4.6
38. m,p-xylene                              2.2            5.3
39. bromofluorobenzene                     2.1            4.5
40. o-xylene                                2.1            5.4
41. styrene                                  3.3            4.9
42. bromoform                                1.0            4.5
43. isopropylbenzene                         6.1            4.3
44. 1,1,2,2-tetrachloroethane                  1.1            6.8
45. bromobenzene                           1.7            4.4
46. 1,2,3-trichloropropane                     0.3           17.6
47. n-propylbenzene                         1.6            3.0
48. 2-chlorotoluene                           1.4            3.4
49. 1,3,5-trimethylbenzene                    2.4            4.9
50. 4-chlorotoluene                          1.4            4.2
51. tert-butylbenzene                         5.6            5.5
52. 1,2,4-trimethylbenzene                    2.6            5.1
53. sec-butylbenzene                         1.4            4.2
54. 1,3-dichlorobenzene                      2.9            4.8
55. 4-isopropyltoluene                       5.6            5.5
56. 1,4-dichlorobenzene                      1.4            2.9
57. 1,2-dichlorobenzene                      0.8            1.7
58. n-butylbenzene                           2.6            2.5
59. 1,2-dichlorobenzene                      1.1            3.1
60. 1,2-dibromo-3-chloropropane              0.1           11.2
61. 1,2,4-trichlorobenzene                    0.8            3.6
62. hexachlorobutadiene                      0.8            2.9
63. naphthalene                             0.7            7.5
64. 1,2,3-trichlorobenzene                    0.7            2.0

The internal standards used were pentafluorobenzene, 1,4-difluorobenzene, chlorobenzene-d5,
and 1,4-dichlorobenzene-d4.

TABLE 4 RRF'S, RSD'S AND MDL'S OF THE 524.2 ANALYTES

Compounds                                RRF         %RSD         MDL(ppt)
1.  dichlorodifluoromethane                   0.056         18.75             3.7
2.  chloromethane                           0.070           6.98            24.9
3.  vinyl  chloride                             0.025           4.79             5.5
4.  bromomethane                           0.114           6.34             9.8
5.  chloroethane                             0.083           4.41             8.5
6.  trichlorofluoromethane                    0.165          4.05            11.5
7.  diethyl ether                             0.142          2.89             7.1
8.  methylene chloride                       0.176          6.34             7.1
9.  trans-1,2-dichloroethene                  0.142          2.50             6.4
10. 1,1-dichloroethane                        0.342          3.81            13.7
11. cis-1,2-dichloroethene                     0.198          1.69             8.1
12. bromochloromethane                     0.096          3.50             2.6
13. chloroform                               0.352          24.04          20.5
14. 2,2-dichloropropane                      0.176          4.43            18.7
15. 1,2-dichloroethane                        0.268          3.88             9.6
16. 1,1,1-trichloroethane                      0.222          2.55             7.9
17. 1,1-dichloropropene                      0.238          1.62            15.2
18. carbon tetrachloride                      0.191           3.32             8.8
19. benzene                                 0.686          1.00            16.0
20. dibromomethane                         0.120          2.89             4.5
21. 1,2-dichloropropane                      0.218          1.48            10.3
22. trichloroethene                           0.194          3.81            14.3
23. bromodichloromethane                    0.257          11.76           9.4
24. cis-1,3-dichloropropene                   0.292         4.43            11.1
25. trans-1,3-dichloropropene                 0.264         4.83            12.8
26. 1,1,2-trichloroethane                      0.142         2.81            6.1
27. toluene                                  0.388         1.55            16.4
28. 1,3-dichloropropane                      0.298         1.54            5.3
29. 1,2-dibromoethane                       0.162         4.06            5.7
30. dibromochloromethane                    0.174         6.45            9.1
31. tetrachloroethene                         0.252         9.06            75.5
32. 1,1,1,2-tetrachloroethane                  0.161         5.96            4.6
33. chlorobenzene                           0.431         3.16            14.2
34. ethylbenzene                            0.751         0.97            37.8
35. m,p-xylene                              0.270         0.96            26.2
36. bromoform                              0.112         15.20           7.6
37. styrene                                  0.446         3.93            27.9
38. o-xylene                                 0.274         1.98            12.1
39. 1,1,2,2-tetrachloroethane                  0.178         4.38            28.6
40. 1,2,3-trichloropropane                    0.258         6.31            14.6
41. isopropylbenzene                         0.703         1.20            36.0
42. 4-bromofluorobenzene (surr)               0.393         4.29          128.4
43. bromobenzene                           0.177         5.38            5.7
44. n-propylbenzene                         0.866         1.37            56.1
45. 2-chlorotoluene                          0.548         1.44            35.6
46. 4-chlorotoluene                          0.554         1.39            43.3
47. 1,3,5-trimethylbenzene                    0.571         1.34            26.7
48. tert-butylbenzene                         0.123         5.10            9.7
49. 1,2,4-trimethylbenzene                    0.593         1.21            31.0
50. sec-butylbenzene                         0.735         1.13            43.5
51. 1,3-dichlorobenzene                      0.337         4.08            14.1
52. 1,4-dichlorobenzene                      0.355         3.83            30.6
53. p-isopropyltoluene                        0.605         1.54            46.3
54. d4-1,2-dichlorobenzene (surr)              0.358         5.86          122.1
55. 1,2-dichlorobenzene                      0.339         3.12            14.1
56. n-butylbenzene                          0.599         1.28            48.4
57. 1,2-dibromo-3-chloropropane              0.041         4.36            5.4
58. 1,2,4-trichlorobenzene                    0.209         8.27            25.4
59. hexachlorobutadiene                      0.096         4.12            9.9
60. naphthalene                             0.512         11.13           73.0
61. 1,2,3-trichlorobenzene                    0.188         9.87            21.2

The Internal Standard  used for this work was Fluorobenzene.

CONCLUSIONS

Laboratories can increase productivity through automation.  The automation chosen should be
reliable and versatile. The Precept aqueous module was evaluated using two different
configurations and it performed well in both cases. The calibration curves at both  ranges were
linear. The MDLs were also good for the 524.2 analytes. Because the autosampler can also be
equipped with a solid module, the Precept provides maximum productivity.

ACKNOWLEDGEMENTS

The author gratefully acknowledges Richard Herrmann of Environmental Enterprises, Inc. and
Valerie J. Naughton of Tekmar Company.
46
      EXAMINATION OF GC/FID FOR THE ANALYSIS OF
   MODIFIED METHOD TO-14 FOR VOCS IN AMBIENT AIR
Suya Wang, Shili Liu, Robert J. Carley, Jangshi Kang, Environmental Research Institute, University of
Connecticut, Storrs, Connecticut
Al Madden, Research Chemist, Tekmar Company, 7143 E. Kemper Road, Cincinnati, Ohio, 45249
INTRODUCTION

Toxic organic compounds in ambient air are often analyzed by gas chromatography/mass
spectrometry (GC/MS).1  While this approach offers both sensitivity and selectivity, it can be
more complicated  and  expensive  than  necessary. A simpler  analytical method was
evaluated for screening and quantitation of volatile organic compounds(VOCs) from ambient
air samples in SUMMA® canisters.  In this case, a gas chromatograph equipped with a
flame ionization detector was utilized to determine its applicability for an air toxics
monitoring laboratory.

The sensitivity, accuracy, and precision are presented for polar and non-polar analytes. These
are represented by method detection limits, calibration curve linearity, and evaluation of
reference standard samples.

EXPERIMENTAL

The calibration standard was prepared  from a commercially available TO-14 mixture and from
two standards made in 2 liter static dilution bottles (SDB). The stock TO-14 standard has a
concentration of 2.0 ppmv (Alphagaz - Morrisville, PA).  The two SDBs were made by injecting
neat liquids into the SDB for vaporization. Two microliters each of three trihalomethanes
(bromoform, bromodichloromethane, chlorodibromomethane) were injected into the SDB to
yield a concentration of 2 ng/L. The polar standard (acetone, acrylonitrile, 2-butanone, methyl
methacrylate, methyl isobutyl ketone) was prepared in a separate SDB by adding 2.9 uL of
each component, resulting in a concentration of 3 ng/L. The calibration standard was made by
injecting 1.0 mL of the brominated trihalomethane standard, 0.5 mL of the polar standard, and 150 mL of the TO-
14 standard  into an evacuated eight liter canister. After the standards were added, the canister
was pressurized to  22 psig and simultaneously humidified to 35% relative humidity by adding
150 uL of water. The resulting concentrations were: 15 ppbv for the TO-14 components, 12 to
15 ppbv for the trihalomethanes mixture, and 7 to 14  ppbv for the polar analytes.
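A rough check of the stated TO-14 component concentration, assuming ideal-gas
dilution and 14.7 psi atmospheric pressure, so that pressurizing the 8 L
canister to 22 psig dilutes its contents by a factor of (22 + 14.7)/14.7:

    stock_ppmv = 2.0            # TO-14 stock standard, ppmv
    volume_added_ml = 150.0     # volume of stock injected into the canister
    canister_volume_l = 8.0     # evacuated canister volume
    final_pressure_psig = 22.0  # final canister pressure
    atm_psi = 14.7              # assumed atmospheric pressure

    # Effective gas volume at atmospheric pressure after pressurization.
    effective_volume_ml = canister_volume_l * 1000.0 * (final_pressure_psig + atm_psi) / atm_psi

    final_ppbv = stock_ppmv * 1000.0 * volume_added_ml / effective_volume_ml
    print(f"{final_ppbv:.1f} ppbv")   # ~15 ppbv, consistent with the stated value
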

The concentrator, cryofocusing module, and canister interface were connected to the GC. The
flow rates were adjusted to the conditions listed below. The system was leak checked with the
instrument's  control software. The trap in the system  was baked and standards were analyzed.
The system was calibrated using external standard quantitation.

A six point calibration curve was run  on June 29,  1994 from a 15 ppbv standard. The points of
the calibration were obtained by using  six different volumes of this standard. The volumes for
the calibration  (50, 100, 250,  500,  750, and 1000mL) were metered onto the trap  using an
electronic mass flow controller (MFC). The integrated chromatograms were used to calculate
response factors (RF) for each level and the percent relative standard deviation of the response
factors. This was then repeated on July 31, 1994 to determine the reproducibility of calibration.
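A small sketch of this linearity check, assuming a response factor of the
form peak area divided by amount loaded (the sample volume of the fixed-
concentration standard stands in for amount); the areas are hypothetical.
TO-14 asks for less than 30% RSD across the levels:

    import statistics

    volumes_ml = [50, 100, 250, 500, 750, 1000]
    # Hypothetical integrated peak areas for one analyte at each level.
    areas = [1.9e4, 3.8e4, 9.6e4, 1.91e5, 2.9e5, 3.85e5]

    rfs = [area / vol for area, vol in zip(areas, volumes_ml)]
    rsd = 100 * statistics.stdev(rfs) / statistics.mean(rfs)
    print(f"%RSD of response factors = {rsd:.1f}%  (acceptable if < 30%)")
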

Once the calibration curve was built, the sensitivity of the system was determined. For this test,
seven 20 mL aliquots of a 10 ppbv calibration standard were analyzed. These seven aliquots
were quantitated to determine the concentration. The standard deviation of the calculated
concentrations was determined and multiplied by the Student t-value for the 99% confidence
level (3.143) to determine the method detection limit (MDL).
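A minimal sketch of that MDL computation; the replicate concentrations below
are hypothetical:

    import statistics

    # Seven hypothetical replicate results (ppbv) from 20 mL aliquots.
    replicates = [0.188, 0.193, 0.201, 0.179, 0.186, 0.195, 0.190]

    t_99 = 3.143        # Student t-value, 99% confidence, 6 degrees of freedom
    mdl_ppbv = t_99 * statistics.stdev(replicates)
    print(f"MDL = {mdl_ppbv * 1000:.0f} pptv")
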

The system  performance was then verified  against a National Institute of Standards and
Technology (NIST) audit sample to determine the accuracy of the results.  Six samples were
analyzed, three at 500 mL and three at 1000 mL, and  the percent difference was determined
between the actual  and calculated concentrations.
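The averaging and percent-difference step can be illustrated with the vinyl
chloride results reported in Table 3:

    # Vinyl chloride (ppbv): three 500 mL and three 1000 mL aliquots (Table 3).
    measured = [5.43, 5.39, 7.19, 6.63, 5.44, 5.50]
    true_conc = 4.91                  # NIST-assigned concentration, ppbv

    average = sum(measured) / len(measured)
    percent_diff = 100 * (average - true_conc) / true_conc
    print(f"average = {average:.2f} ppbv, difference from true = {percent_diff:.2f}%")
    # -> average = 5.93 ppbv, difference from true = 20.77%
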

CONDITIONS

Tekmar 6000/AEROCan
Line/Valve Temp             200°C
Standby Flow                10mL/min
Trap Standby Temp           100°C
Sweep Gas (Nitrogen)
    Flow Rate               100mL/min
    Sweep/Flush time        1 min
Glass Bead Packed Trap Setpoints
    Cooldown                -165°C
    Desorb Preheat          195°C
    Desorb                  5 min @ 200°C
    Bake                    10 min @ 225°C
    Bake Flow               100mL/min
Moisture Control System
    Standby Temp            200°C
    Desorb Temp             50°C
    Bake Temp               320°C
Cryofocusing Module
    Standby                 100°C
    Cooldown                -175°C
    Inject                  100°C
    Injection port bypassed

Hewlett Packard 5890
Column                      HP-5
ID                          0.32 mm
Film Thickness              1 um
Length                      50 m
Carrier Gas                 Helium
Flow Rate                   2.65 mL/min @ 35°C
Oven Profile
    Initial Temp            5°C for 4 min
    Ramp                    7°C/min to 220°C
FID Temperature             250°C
Hydrogen Flow Rate          30 ml/min
Air Flow Rate               300 ml/min

RESULTS

Calibration
The system was first calibrated on June 29, 1994 and again on July 31, 1994. The results for all
50 compounds are listed in Table 2. Both calibration curves meet precision requirements of less
than 30% RSD stated in EPA Method TO-14, including three polar analytes. There are five sets
of coeluting analytes including:
   3-chloro-1-propene/methylene chloride, benzene/carbon tetrachloride,
   1,2-dichloropropene/trichloroethylene,
   meta & para-xylene,
one month period as evidenced by the similarities of the precision and response factors from
the two calibration curves.  This also gives a good indication of the precision with which the
standards were diluted over this same period. The July data exhibit a slightly higher value than
expected for the precision of dichlorodifluoromethane, probably due to interfering hydrocarbons
from the dilution gas.

The method  detection limits are similar to those expected from a TO-14 analysis by GC/MS.
Some holes in the data appear in the permanent gases, peaks one through six, and are
attributed to the poor sensitivity of the FID to these halogenated C1 and C2 compounds.

The NIST standard was evaluated  on the system to determine accuracy of the  system to a
reference. The determined concentration agreed well with this standard. Of the fifteen analytes in
the mixture, eleven analytes were well within the true concentration range provided with this
standard, dated July 1991.

CONCLUSION

The system is a reliable and rugged means of screening air toxics samples prior to full
analysis. It can also be used as a final analysis tool at well-characterized sampling sites. This
technique  shows impressive sensitivity for the  TO-14  compounds,  the  additional three
brominated trihalomethanes, and the five polar analytes.

There are two drawbacks with this system.  The sets of coeluting peaks could limit final
analysis on this system unless a confirmation column is used. In addition, the poor sensitivity of
the FID to the halogenated C1 and C2 compounds can also be prohibitive. The addition of an electron capture
detector  to the  system could partially  resolve  this  issue.  Overall, the system  exceeded
expectations for sensitivity and linearity for the analytes tested.
REFERENCES


1.  Winberry, J.T.; Carhart, B.S.; Randall, A.J., Decker, D.L.,"Method TO-14."Compendium of Methods for the
     Determination of Toxic Organic Compounds in Ambient Air. EPA-600/4-89-017, U.S. Environmental Protection
     Agency, Research Triangle Park, NC, 1988.
TABLE 1- Calibration Curve Linearity for June and July 1994
Peak*
1
2
3
4
5
6
7
P1
8
P2
9
10&11
12
P3
13
14
15
16
17&18
19&20
Brl
P4
P5
21
22
23
24
Br2
25
26
27
28
29&30
Br3
31
32
33
34
35
36
37
38&39
40
41
42
Analyte
Dichlorodifluoromethane
Chloromethane
1,2-Dichlorotetrafluoroethane
Vinyl Chloride
Bromomethane
Chloroethane
Trichlorofluoromethane
Acetone
1,1-Dichloroethene
Acrylonitrile
1,1,2-Trichloro-trifluoroethane
3-Chloro-1-propene & Methylene Chloride
1,1-Dichloroethane
2-Butanone (MEK)
cis-1,2-Dichloroethene
Chloroform
1,1,1-Trichloroethane
1,2-Dichloroethane
Benzene & Carbon Tetrachloride
1,2-DCP & TCE
Bromodichloromethane
Methyl Methacrylate
MIBK
cis-1,3-Dichloropropene
trans-1,3-Dichloropropene
Toluene
1,1,2-Trichloroethane
Dibromochloromethane
1,2-Dibromoethane
Tetrachloroethene
Chlorobenzene
Ethylbenzene
m-Xylene & p-Xylene
Bromoform
Styrene
o-Xylene
1,1,2,2-Tetrachloroethane
4-Ethyltoluene
1,3,5-Trimethylbenzene
1,2,4-Trimethylbenzene
1,3-Dichlorobenzene
1,4-DCB & Benzyl Chloride
1,2-Dichlorobenzene
1,2,4-Trichlorobenzene
Hexachloro-1,3-butadiene
50ml
RF
0.0336
0.0099
0.0129
0.0072
0.0123
0.0072
0.0059
0.0073
0.0070
0.0087
0.0077
0.0075
0.0061
0.0074
0.0165
0.0076
0.0076
0.0044
0.0059
0.0368
0.0055
0.0039
0.0059
0.0103
0.0022
0.0075
0.0248
0.0077
0.0036
0.0025
0.0019
0.0019
0.0033
0.0021
0.0082
0.0018
0.0018
0.0019
0.0027
0.0040
0.0027
0.0035
0.0046
100ml
RF
0.0665
0.0105
0.0135
0.0075
0.0155
0.0075
0.0426
0.0070
0.0075
0.0074
0.0089
0.0080
0.0079
0.0061
0.0077
0.0170
0.0079
0.0079
0.0046
0.0061
0.0381
0.0058
0.0040
0.0062
0.0105
0.0023
0.0078
0.0237
0.0081
0.0037
0.0026
0.0020
0.0021
0.0498
0.0034
0.0021
0.0085
0.0019
0.0019
0.0020
0.0028
0.0042
0.0028
0.0039
0.0048
250ml
RF
0.0378
0.0110
0.0135
0.0080
0.0160
0.0073
0.0441
0.0070
0.0078
0.0076
0.0091
0.0081
0.0082
0.0063
0.0079
0.0176
0.0081
0.0082
0.0047
0.0063
0.0388
0.0057
0.0041
0.0062
0.0100
0.0024
0.0080
0.0254
0.0083
0.0037
0.0027
0.0021
0.0021
0.0495
0.0035
0.0022
0.0087
0.0020
0.0020
0.0020
0.0029
0.0043
0.0029
0.0039
0.0050
500ml
RF
0.0382
0.0113
0.0135
0.0081
0.0161
0.0077
0.0445
0.0064
0.0077
0.0076
0.0091
0.0081
0.0081
0.0063
0.0078
0.0175
0.0081
0.0082
0.0047
0.0063
0.0387
0.0056
0.0040
0.0064
0.0103
0.0024
0.0081
0.0248
0.0083
0.0036
0.0027
0.0021
0.0021
0.0496
0.0035
0.0022
0.0087
0.0020
0.0020
0.0020
0.0029
0.0042
0.0029
0.0038
0.0050
750ml
RF
0.0382
0.0112
0.0135
0.0081
0.0162
0.0077
0.0437
0.0071
0.0077
0.0076
0.0092
0.0081
0.0082
0.0064
0.0079
0.0177
0.0082
0.0082
0.0047
0.0064
0.0388
0.0056
0.0041
0.0064
0.0099
0.0024
0.0081
0.0245
0.0083
0.0035
0.0027
0.0021
0.0022
0.0482
0.0036
0.0022
0.0087
0.0021
0.0020
0.0021
0.0029
0.0042
0.0029
0.0038
0.0051
1000ml
RF
0.0384
0.0114
0.0134
0.0083
0.0162
0.0077
0.0441
0.0074
0.0078
0.0077
0.0094
0.0080
0.0082
0.0055
0.0079
0.0179
0.0082
0.0084
0.0048
0.0064
0.0391
0.0057
0.0041
0.0064
0.0099
0.0024
0.0082
0.0261
0.0084
0.0035
0.0028
0.0021
0.0022
0.0482
0.0036
0.0023
0.0088
0.0021
0.0020
0.0021
0.0030
0.0042
0.0030
0.0039
0.0053
% RSD
RF
28.66%
5.28%
1.78%
5.32%
10.02%
2.75%
1.67%
7.85%
2.60%
3.48%
2.63%
1.84%
3.48%
5.35%
2.46%
3.04%
3.16%
3.40%
2.92%
3.32%
2.25%
1.37%
1.74%
2.92%
2.32%
3.15%
2.99%
3.28%
2.88%
2.07%
3.18%
3.26%
4.54%
1.60%
3.20%
3.46%
2.66%
5.14%
3.69%
3.59%
3.24%
2.08%
3.39%
4.20%
4.91%
                                                                                      July 1994
                                                                                         % RSD
                                                                                          RF
                                                                                         10.28%
                                                                                         11.71%
                                                                                         11.43%
                                                                                         12.13%
                                                                                         15.62%
                                                                                         7.83%
                                                                                         7.18%
                                                                                         12.21%
                                                                                         8.63%
                                                                                         11.43%
                                                                                         14.27%
                                                                                         7.36%
                                                                                         8.97%
                                                                                         9.35%
                                                                                         8.14%
                                                                                         8.34%
                                                                                         7.96%
                                                                                         8.98%
                                                                                         11.78%
                                                                                         8.59%
                                                                                         8.19%
                                                                                         6.69%
                                                                                         8.65%
                                                                                         7.86%
                                                                                         4.67%
                                                                                         9.41%
                                                                                         8.81%
                                                                                         14.54%
                                                                                         8.52%
                                                                                         7.38%
                                                                                         9.22%
                                                                                         9.33%
                                                                                         9.26%
                                                                                         7.25%
                                                                                         8.98%
                                                                                         9.63%
                                                                                         9.15%
                                                                                         9.90%
                                                                                         9.41%
                                                                                         8.74%
                                                                                         8.94%
                                                                                         7.71%
                                                                                         9.16%
                                                                                         6.43%
                                                                                         8.51%
Method Detection Limits
The method detection  limits are  displayed  from  seven  20  ml  aliquots  of the  calibration
standard.  The MDLs are listed  below  in  Table  2. In this table  the concentration values
calculated from these samples, the standard  deviation, and the method detection limits  are
outlined.
TABLE 2- Method Detection Limits for Seven Replicate Analyses
Peak#
1
2
3
4
5
6
7
P1
8
P2
9
10&11
12
P3
13
14
15
16
17&18
19&20
Br1
P4
P5
21
22
23
24
Br2
25
26
27
28
29&30
Br3
31
32
33
34
35
36
37
38&39
40
41
42
Analyte
Dichlorodifluoromethane
Chloromethane
1,2-Dichlorotetrafluoroethane
Vinyl Chloride
Bromomethane
Chloroethane
Trichlorofluoromethane
Acetone
1,1-Dichloroethene
Acrylonitrile
1,1,2-Trichloro-trifluoroethane
3-Cl-1-propene & Meth Cl
1,1-Dichloroethane
2-Butanone (MEK)
cis-1,2-Dichloroethene
Chloroform
1,1,1-Trichloroethane
1,2-Dichloroethane
Benzene & Carbon Tetrachloride
1,2-DCP & TCE
Bromodichloromethane
Methyl Methacrylate
MIBK
cis-1,3-Dichloropropene
trans-1,3-Dichloropropene
Toluene
1,1,2-Trichloroethane
Dibromochloromethane
1,2-Dibromoethane
Tetrachloroethene
Chlorobenzene
Ethylbenzene
m-Xylene & p-Xylene
Bromoform
Styrene
o-Xylene
1,1,2,2-Tetrachloroethane
4-Ethyltoluene
1,3,5-Trimethylbenzene
1,2,4-Trimethylbenzene
1,3-Dichlorobenzene
1,4-DCB & Benzyl Chloride
1,2-Dichlorobenzene
1,2,4-Trichlorobenzene
Hexachloro-1,3-butadiene
7 Replicate Analyses using 20 mL of a 10 ppbv Standard
20mL-1
0.9392
0.2147
0.3103
0.1879
0.2852
0.2884
0.2051
0.1933
0.2105
0.6027
0.1736
0.4273
0.2042
0.1875
0.1943
0.2070
0.4821
0.2967
0.2061
0.1053
0.0886
0.2049
0.1957
0.2091
0.1942
0.3194
0.1945
0.2327
0.1947
0.1935
0.3600
0.2061
0.0992
0.1887
0.1902
0.1800
0.1703
0.1577
0.1751
0.2845
0.1706
0.0652
0.1299
20mL-2
0.2225
0.3160
0.1865
0.1925
0.2821
0.3051
0.2122
0.2010
0.2148
0.6286
0.1903
0.4168
0.2080
0.1802
0.1998
0.2144
0.4884
0.3019
0.2039
0.1099
0.0892
0.2123
0.1988
0.2124
0.1959
0.3306
0.1995
0.2373
0.1960
0.1979
0.3674
0.2198
0.1014
0.1921
0.1947
0.1854
0.1746
0.1608
0.1806
0.2935
0.1777
0.0940
0.1407
20mL-3
0.9274
0.3026
0.2765
0.2014
0.1768
0.1944
0.5828
0.1759
0.4425
0.1853
0.1692
0.1857
0.1990
0.4566
0.2777
0.1798
0.0991
0.0830
0.1938
0.1823
0.1983
0.1850
0.3227
0.1842
0.2217
0.1816
0.1822
0.3403
0.1998
0.0940
0.1798
0.1798
0.1721
0.1620
0.1496
0.1698
0.2737
0.1633
0.0880
0.1305
20mL-4
0.7959
0.1825
0.3076
0.2711
0.4287
0.1668
0.1515
0.2251
0.4849
0.1417
0.3614
0.1553
0.1453
0.1592
0.1700
0.3452
0.2306
0.1626
0.0828
0.0702
0.1616
0.1602
0.1660
0.1557
0.2834
0.1510
0.1900
0.1532
0.1511
0.2824
0.1677
0.0768
0.1496
0.1507
0.1415
0.1347
0.1211
0.1301
0.2182
0.1296
0.0645
0.1486
20mL-5
0.3662
0.1951
0.3478
0.1699
0.1628
0.1691
0.2670
0.2779
0.1890
0.1665
0.1892
0.5331
0.1657
0.3633
0.1782
0.1656
0.1741
0.1880
0.3892
0.2548
0.1786
0.0936
0.0794
0.1818
0.1747
0.1870
0.1755
0.3148
0.1712
0.2163
0.1722
0.1723
0.3208
0.1876
0.0877
0.1686
0.1705
0.1612
0.1528
0.1392
0.1550
0.2493
0.1508
0.0821
0.1078
20mL-6
0.1995
0.3229
0.1707
0.1729
0.3161
0.3004
0.1979
0.1749
0.1991
0.5603
0.1705
0.3852
0.1868
0.1745
0.1880
0.1981
0.4834
0.2755
0.2117
0.0973
0.0835
0.1912
0.1752
0.1959
0.1823
0.3257
0.1837
0.2280
0.1787
0.1803
0.3392
0.2091
0.0931
0.1775
0.1833
0.1683
0.1610
0.1468
0.1642
0.2707
0.1615
0.0786
0.1251
20mL-7
0.9741
0.0252
0.2563
0.2954
0.8099
0.1834
0.1663
0.1920
0.5222
0.1723
0.4017
0.1739
0.1577
0.1691
0.1787
0.4728
0.2471
0.1755
0.0909
0.0764
0.1633
0.1634
0.1779
0.1678
0.3553
0.1605
0.0953
0.1616
0.1643
0.3009
0.2051
0.0908
0.1625
0.1634
0.1554
0.1597
0.1346
0.1497
0.2461
0.1506
0.0722
0.1172
Avg Cone
(ppbv)
0.8005
0.1732
0.3102
0.1699
0.1733
0.1806
0.2886
0.3838
0.1937
0.1758
0.2036
0.5592
0.1700
0.3997
0.1845
0.1686
0.1814
0.1936
0.4454
0.2692
0.1883
0.0970
0.0815
0.1870
0.1786
0.1923
0.1795
0.3217
0.1778
0.2031
0.1768
0.1774
0.3301
0.1993
0.0918
0.1741
0.1761
0.1663
0.1593
0.1443
0.1607
0.2623
0.1577
0.0778
0.1285
Standard
Deviation
0.2520
0.0739
0.0301
0.0121
0.0113
0.0174
0.1952
0.0153
0.0169
0.0134
0.0497
0.0146
0.0313
0.0180
0.0141
0.0145
0.0157
0.0559
0.0262
0.0187
0.0091
0.0068
0.0194
0.0148
0.0166
0.0144
0.0214
0.0177
0.0500
0.0159
0.0163
0.0308
0.0170
0.0081
0.0150
0.0155
0.0150
0.0130
0.0138
0.0173
0.0260
0.0158
0.0112
0.0137
MDL
(ppt)
944
249
101
84
51
55
614
43
53
42
156
46
98
57
44
46
49
176
82
59
28
21-
61
46
52
45
67
56
157
50
51
97
53
26
47
49
47
41
43
54
82
50
35
43
NIST-traceable Audit Samples

Once the sensitivity and the linearity of the system were determined, the NIST audit sample,
cylinder No. AAL-21390, was analyzed. The resulting concentrations from the three aliquots
each at 500 and 1000 ml  are listed. These values were averaged and compared to the true
concentration  in  Table  3  and  an example  chromatogram  is  shown  in  Figure  2.
An example of 500 ml of the calibration mixture is illustrated below in Figure 1.
FIGURE 1- 500 ml of a 15 ppbv TO-14 Standard
[Chromatogram of the standard: detector response (10,000-50,000 counts) versus retention time, 5.00-25.00 min]
FIGURE 2- 500 ml of a NIST-traceable reference mixture
[Chromatogram of the audit sample: detector response (2,000-20,000 counts) versus retention time, 5.00-25.00 min]

TABLE 3- System Accuracy as compared to NIST-traceable audit mixture

                                              500 ml               1000 ml             True   Average    % Diff.
 Peak#  Analyte                            #1     #2     #3      #1     #2     #3      Conc.  Exp Conc.  from true
 4      Vinyl Chloride                     5.43   5.39   7.19    6.63   5.44   5.50    4.91     5.93      20.77%
 5      Bromomethane                       5.44   5.41   5.37    5.71   5.67   5.66    5.27     5.54       5.18%
 7      Trichlorofluoromethane             4.84   4.71   4.63    4.83   4.70   4.67    5.02     4.73      -5.78%
 10&11  3-Cl-1-propene & Methylene Cl      2.24   2.23   2.22    2.27   2.25   2.26    4.56     2.24     -50.79%
 14     Chloroform                         4.60   4.58   4.53    4.67   4.66   4.65    4.91     4.62      -6.00%
 15     1,1,1-Trichloroethane              4.77   4.73   4.72    4.86   4.85   4.85    5.45     4.80     -12.00%
 16     1,2-Dichloroethane                 5.13   5.08   5.07    5.22   5.20   5.17    4.87     5.15       5.66%
 17&18  Benzene & Carbon Tetrachloride     9.49   9.35   9.32    9.67   9.64   9.58    9.01     9.51       5.52%
 19&20  1,2-Dichloropropane & TCE         11.09  11.00  10.95   11.27  11.23  11.21    9.82    11.12      13.29%
 23     Toluene                            4.87   4.82   4.81    4.97   4.96   4.93    5.05     4.89      -3.11%
 25     1,2-Dibromoethane                  4.14   4.10   4.10    4.23   4.20   4.19    4.84     4.16     -14.04%
 26     Tetrachloroethene                  5.55   5.54   5.50    5.68   5.61   5.60    5.01     5.58      11.39%
 27     Chlorobenzene                      4.88   4.84   4.83    4.99   4.98   4.97    5.10     4.92      -3.60%
 28     Ethylbenzene                       4.43   4.40   4.37    4.52   4.51   4.50    4.89     4.45      -8.92%
 32     o-Xylene                           4.89   4.85   4.82    5.01   4.98   4.97    5.30     4.92      -7.18%

DISCUSSION

The results from the calibration show excellent linearity and precision for the 6 sample volumes
taken during calibration. This also illustrates the wide range of sample volumes which can be loaded onto
the glass bead-packed, cryogenic trap (50 mL to 1000 mL). This system was stable over the
                                                                         47
   THE SUITABILITY OF POLYMERIC TUBINGS FOR SAMPLING WELL WATER
             TO BE ANALYZED FOR TRACE-LEVEL ORGANICS

Louise V. Parker, U.S. Army Cold Regions Research and Engineering
Laboratory, Hanover, New Hampshire 03755-1290, and Thomas A. Ranney,
Science and Technology Corporation, Hanover, New Hampshire 03755
ABSTRACT

There is concern in the groundwater monitoring industry that
polymeric tubings used to sample groundwater can affect contami-
nant concentrations. Results from a recent study that looked for
sorption and leaching of organic contaminants by twenty polymeric
tubings will be presented. The flexible and rigid tubings that
were tested included several polyethylene and polypropylene for-
mulations, several different fluoropolymers, as well as polyure-
thane, polyamide, and flexible PVC.

In this study, the tubings were exposed to a solution containing
a mixture of eight organic compounds (nitrobenzene, trans-1,2-
dichloroethylene, m-nitrotoluene, trichloroethylene, chloroben-
zene, o- and p- dichlorobenzene, and tetrachloroethylene),  each
at a concentration of 10 to 16 mg/L. Our results indicate that
three rigid fluoropolymers [fluorinated ethylene propylene (FEP),
FEP-lined polyethylene, and polyvinylidene fluoride (PVDF)] were
the least sorptive of the tubings tested. During this study,  we
observed that the reversed-phase HPLC chromatograms for samples
exposed to some of the tubings had spurious peaks.  This indicates
that several of the tubings leached contaminants into the test
solution. Only the polyethylene tubings, the rigid fluoropolymer
tubings, and one plasticized polypropylene tubing did not appear
to leach any contaminants.

Based on the findings from this study and relative cost, we ten-
tatively recommend PVDF when a rigid tubing can be used and a co-
polymer of vinylidene fluoride and hexafluoropropylene  [P(VDF-
HFP)] when a more flexible tubing is required. However, since
this study was conducted under static conditions, and sampling
usually involves continual replenishment of the contacting solu-
tion, we are currently conducting studies under dynamic condi-
tions .

INTRODUCTION

It is important that the reported concentrations of contaminants
in samples taken from groundwater monitoring wells reflect the
true in-situ values. One concern about sampling methods that in-
                                      305

-------
                  Table 1. Polymeric tubings used for sampling trace-level organics.

                                                               Cost†/ft  I.D.   O.D.   Wall   Length  Surf. area/
     Tubing material                               Rigidity*   ($)       (cm)   (cm)   (cm)   (cm)    soln vol (cm-1)
 1   polyethylene, low density (LDPE)                  R        0.19     0.64   0.95   0.16     20       6.3
 2   polyethylene, cross-linked high
       density (XLPE)                                  R        0.43     0.64   0.95   0.16     20       6.3
 3   polyethylene liner in ethyl vinyl
       acetate shell                                   R        0.57     0.64   0.95   0.16     20       6.3
 4   polyethylene liner cross-linked to
       ethyl vinyl acetate shell                       R        1.08     0.64   0.95   0.16     20       6.3
 5   co-extruded polyester lining in
       polyvinylchloride shell                         R        0.77     0.64   0.95   0.16     20       6.3
 6   polypropylene (PP)                                R        0.27     0.64   0.95   0.16     20       6.3
 7   polytetrafluoroethylene (PTFE)                    R        4.27     0.75   0.95   0.10     17       5.3
 8   perfluoroalkoxy (PFA)                             R        5.58     0.64   0.95   0.16     20       6.3
 9   polyurethane, ether-grade                         F        0.64     0.64   0.95   0.16     20       6.3
10   ethylenetetrafluoroethylene (ETFE)                R        5.50     0.48   0.64   0.08     27       8.4
11   polypropylene-based material with
       plasticizer, type I                             F        0.58     0.64   0.95   0.16     20       6.3
12   polyamide (nylon)                                 R        0.71     0.71   0.95   0.12     18       5.6
13   linear copolymer of vinylidene fluoride
       and hexafluoropropylene P(VDF-HFP)              F        1.99     0.64   0.80   0.08     20       6.3
14   flexible PVC                                      F        0.89     0.64   0.95   0.16     20       6.3
15   silicone-modified thermoplastic
       elastomer (TPE)                                 F        0.96     0.64   0.95   0.16     20       6.3
16   polypropylene-based material with
       plasticizer, type II                            F        2.48     0.64   0.95   0.16     20       6.3
17   polyvinylidene fluoride (PVDF)                    R        1.80     0.64   0.95   0.16     20       6.3
18   fluoroelastomer                                   F        8.70     0.64   0.95   0.16     20       6.3
19   FEP-lined polyethylene                            R        3.00     0.64   0.80   0.08     20       6.3
20   fluorinated ethylene propylene (FEP)              R        3.90     0.64   0.95   0.16     20       6.3

                      *  R - can be stepped on without collapsing the tubing;  F - finger pressure can collapse tubing.

                      †  Cost varies with quantity, dimensions and supplier.

-------
volve pumping groundwater samples to the surface is that there
may be interactions between the tubing and the sample as it is
pumped. The tubing could leach or sorb inorganic or organic con-
taminants to or from the sample. Also, if a pump and its tubing
are not dedicated to a particular well, it is possible that tub-
ing used previously to sample a well with high concentrations of
contaminants could subsequently desorb previously sorbed contami-
nants into samples. In a recent review of the literature on de-
contamination, Parker (1) found there has been very little study
of desorption of organic contaminants from tubings and very lit-
tle study of how to decontaminate these tubings.

The purpose of this study was to compare sorption of organic sol-
utes by twenty commercially available sampling tubings and to
look for signs of contaminants leaching from these products.
Table 1 lists the tubing materials used in this study and their
abbreviations, tubing dimensions, costs, and flexibilities. The
flexibility of the products we tested varied from non-rigid
(i.e., easy to collapse with only finger pressure) and thus very
flexible, to rigid (i.e., standing on the tubing failed to col-
lapse it) with only slight flexibility (i.e., coilable). Cost of
the tubings used in this study ranged from $19 (LDPE)  to $870
(fluoroelastomer) per 100 ft.

MATERIALS AND METHODS

First sorption study

The twenty tubings were cut to different lengths so that they
would all have the same internal surface area, 40 cm2  (Table 1).
This was necessary because three types of tubing (PTFE,  ETFE,  and
polyamide) had different internal diameters from the rest. As a
result, the tubing surface-area-to-solution-volume ratios dif-
fered for these three materials.
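Because the internal surface area of an open tube is pi x I.D. x length, the cut lengths in
Table 1 follow from the 40-cm2 target area, and the surface-area-to-solution-volume ratio of
a filled tube reduces to 4/I.D. The Python sketch below is illustrative only; the inner
diameters are taken from Table 1, and the printed values agree with the table to within
rounding.

    import math

    def length_for_area(id_cm, area_cm2=40.0):
        """Length giving a fixed internal surface area: A = pi * ID * L."""
        return area_cm2 / (math.pi * id_cm)

    def area_to_volume(id_cm):
        """SA/V of a filled tube: (pi*ID*L) / (pi*ID**2/4*L) = 4/ID."""
        return 4.0 / id_cm

    for name, id_cm in [("LDPE", 0.64), ("PTFE", 0.75), ("ETFE", 0.48), ("polyamide", 0.71)]:
        print(f"{name}: L = {length_for_area(id_cm):.0f} cm, "
              f"SA/V = {area_to_volume(id_cm):.2f} cm^-1")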

The cut tubing sections were rinsed with several  volumes of deion-
ized water and left to air dry. One end of each of the tubings
was plugged with a glass rod whose diameter matched the internal
diameter of the tubing. The glass rod was inserted in the tubing
to a depth of 1 cm, and the outside of the tubing was clamped
with a plastic tubing clamp.

The test solution was prepared by adding eight neat organic com-
pounds directly to well water in a 2-L glass bottle to give mg/L
concentrations of nitrobenzene  (NB), trans-1,2-dichloroethylene
(TDCE), m-nitrotoluene (MNT), trichloroethylene (TCE),  chloroben-
zene  (CLB),  o-dichlorobenzene  (ODCB),  p-dichlorobenzene (PDCB),
and tetrachloroethylene  (PCE). Mercuric chloride was added to the
                                    307

-------
solution (40 mg/L) to prevent losses due to biological activity.
After adding all of the analytes, the bottle was topped off with
well water so there was no headspace, capped with a glass stopper,
tightly wrapped with Parafilm, and stirred with a magnetic stirrer
for two days. The initial concentrations of the organic solutes
varied from 10 to 16 mg/L.

For each type of tubing, there were five sampling times (1, 8, 24,
48, and 72 hours) and two replicates for each sampling time. For
each sampling time, the tubings were filled in random order using
a glass re-pipettor. The open end of each tubing was then sealed
by inserting another piece of glass rod so there was no head
space, and clamped with a plastic tubing clamp. The tubings were
stored in the dark at room temperature. At the beginning and end
of filling each set of tubings, three HPLC autosampler vials were
filled with the test solution, capped with Teflon-lined plastic
caps, and stored in the dark in a refrigerator. These solutions
served as controls.

When it was time to take a sample from one of the tubings, one of
the plugged ends of the tubing was cut off and a Pasteur pipet was
used to transfer an aliquot of the test solution to an HPLC auto-
sampler vial.

Analytical determinations were performed using reversed-phase
HPLC (RP-HPLC). A modular system was employed consisting of a
Spectra-Physics SP8875 autosampler with a 100-µL injection loop,
a Spectra-Physics SP8810 isocratic pump, a Spectra-Physics SP8490
variable-wavelength detector set at 215 nm, and a Hewlett-Packard
3396 series II digital integrator. Separations were obtained on
a Supelco LC-18 25-cm x 0.46-cm (5-µm) column eluted with 65/35
(V/V) methanol/water at a flow rate of 2.0 mL/min. The detector
response was obtained from the digital integrator operating in
the peak-height mode.

Primary and working standards were made as described by Parker and
Ranney (2). The working-standard solutions were made fresh on each
sampling day and run in triplicate. The method detection limit
(MDL) for PDCB was 8.6 ng/L and 3.5 µg/L for PCE. The MDLs for
these analytes were obtained according to the EPA protocol de-
scribed elsewhere  (3).
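The MDL protocol cited in (3) defines the detection limit as the one-sided
99% Student's t value (for n-1 degrees of freedom) multiplied by the stan-
dard deviation of at least seven replicate low-level spikes. The sketch
below illustrates that calculation with hypothetical replicate results; it
is not the authors' data.

    import statistics
    from scipy import stats

    # Hypothetical replicate results for a low-level PDCB spike (ng/L).
    replicates = [9.1, 7.8, 8.5, 9.4, 8.0, 8.8, 8.3]

    s = statistics.stdev(replicates)                 # sample standard deviation
    t = stats.t.ppf(0.99, df=len(replicates) - 1)    # 3.143 for 6 degrees of freedom
    mdl = t * s                                      # MDL per 40 CFR 136, Appendix B
    print(f"s = {s:.2f} ng/L, MDL = {mdl:.1f} ng/L")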

Second Sorption Study

Since three of the tubings used in this study  (PTFE, ETFE, and
polyamide)  had different surface-area-to-solution-volume ratios
than the other tubings, this study was conducted so that we could
compare sorption of organic solutes by these tubings with the
other seventeen tubings.
                                     308

-------
Five-centimeter pieces of the three tubing types were placed in
three different-sized glass vials  (9, 25 and 40 mL). The test so-
lution contained the same organic  compounds and was made in the
same manner as in the previous study. The solution was poured into
the vials so there was no headspace, and the vials were capped
with Teflon-lined plastic caps. The surface-area-to-solution-vol-
ume ratios were: for PTFE, 0.70, 1.15, 3.55; for ETFE, 0.45, 0.74,
and 2.15; and for polyamide, 0.69, 1.14, and 3.59. Same-sized vi-
als, filled with test solution but no tubing, served as controls;
there were two controls for each vial size and sampling time.  All
samples were kept in the dark at room temperature. Duplicate sam-
ples were taken after 1 hour, 8 hours, and 24 hours. With a Pas-
teur pipet, an aliquot of each sample was transferred from each of
the test vials to an autosampler vial.

Analyses were done as described previously in the first sorption
study.

RESULTS AND DISCUSSION

Sorption Of Organic Solutes
Figures 1 and 2 show mean normalized concentrations of PCE and
PDCB in the solutions exposed to the twenty polymeric tubings. These
two analytes and ODCB were sorbed  the most rapidly and to the
greatest extent of all the analytes tested in this study (2).  Mean
normalized concentration was calculated by dividing the mean con-
centration of an analyte exposed to a given tubing for a given
time by the mean concentration of  the same analyte in the control
samples at the same time. Thus, a mean normalized value of 1.00
represents no loss of analyte.
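For clarity, the normalization can be written out explicitly. The sketch
below (with hypothetical duplicate and control values, not data from this
study) shows the calculation for one tubing, one analyte, and one contact
time.

    # Mean normalized concentration: mean concentration in solution exposed
    # to a tubing divided by the mean control concentration at the same time.
    def mean_normalized(exposed, controls):
        return (sum(exposed) / len(exposed)) / (sum(controls) / len(controls))

    exposed_24h = [4.2, 4.4]           # hypothetical duplicate results, mg/L
    controls_24h = [12.1, 12.3, 12.0]  # hypothetical control results, mg/L
    print(f"mean normalized concentration = "
          f"{mean_normalized(exposed_24h, controls_24h):.2f}")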

The figures also show the adjusted mean normalized concentrations
for the three materials that had different surface area to volume
ratios  (PTFE, ETFE, and polyamide). These were found by taking the
best-fit equation for the data from the second experiment  (2)  for
each material, analyte and time, and using it to determine what
the adjusted normalized values would be for these three materials
if the surface-area-to-solution-volume ratios were the same as the
other seventeen tubings.

For PDCB, the least sorptive tubings  (both in rate and extent of
sorption) were FEP-lined PE, FEP,  and PVDF  (Figure la). Fisher's
Protected Least Significant Difference tests showed that FEP-lined
PE generally performed significantly better than FEP, which per-
formed significantly better than PVDF, which in turn performed
significantly better than PFA  (2). The most sorptive tubings were
flexible tubings that were not fluoropolymers  (polyurethane, sili-
cone-modified thermoplastic elastomer [TPE], the plasticized
                                     309

-------
                        Figure 1. Sorption of PDCB.
    [Plots of mean normalized concentration vs. contact time (hr).]
    a. Least sorptive tubings: PP, PTFE*, PFA, ETFE*, polyamide*, P(VDF-HFP),
       PVDF, fluoroelastomer, FEP lining in PE shell, FEP.
    b. Most sorptive tubings: LDPE, XLPE, PE lining in EVA shell, PE cross-
       linked to EVA shell, polyester lining in PVC shell, polyurethane,
       plasticized PP (type I), PVC, TPE, plasticized PP (type II).
    * adjusted surface-area-to-solution-volume ratio

polypropylenes, and flexible PVC) (Figure 1b). These tubings sorbed
more than 98% of the analytes in the first hour. The various poly-
ethylene tubings were the next most sorptive group of tubings (Fig-
ure 1b).

Figure 2a  shows  that  for  PCE, PVDF was  the least  sorptive tubing.
FEP-lined  PE,  FEP, and ETFE were  the  next least sorptive tubings.
Generally  there  was no significant difference in  the concentrations
of solutions exposed  to FEP and FEP-lined PE (2).  The six tubings
that were  the  most sorptive of PDCB were also the most  sorptive
tubings of PCE (Figure 2b). Again,  the  polyethylene tubings  were
the  next most  sorptive tubings.
                         Figure 2. Sorption of PCE.
    [Plots of mean normalized concentration vs. contact time (hr).]
    a. Least sorptive tubings.              b. Most sorptive tubings.
                                           310

-------
These results  are  typical for the other six analytes  (2). We
found that  PVDF was  also the least sorptive material for TDCE and
TCE, and  that  FEP  and FEP-lined PE were the least sorptive mate-
rials for the  other  four analytes (NB,  MNT, CLB, ODCB).

The results from this study appear to agree well with  the results
from a number  of similar studies (4-10).  These studies compared
sorption  of organic  solutes by a few polymeric tubing materials.
Generally,  these studies found that flexible materials like sili-
cone rubber and flexible PVC were the most sorptive materials,
and that  PTFE  and  other fluoropolymers and rigid PVC were the
least sorptive.

Leaching  of Contaminants

When we compared the chromatograms of solutions exposed to the
tubings with those of the control solutions,  we saw additional
peaks in  solutions exposed to some of the tubings. By the end of
the experiment (72 hr),  solutions exposed to nine of the tubings
had extra peaks  (Table 2).  The polyurethane,  polyamide, and PVC
tubing leached at  least eight compounds (as indicated by spurious
peaks), with polyurethane leaching the most (twelve).  Of the rig-
id polymers, only  the fluoropolymers and polyethylenes did not

             Table 2. Number of spurious HPLC peaks found
             during tubing material study.

                                               Contact time (hr)
             Tubing material                      1       72

             LDPE                                 0        0
             XLPE                                 0        0
             PE in an EVA shell                   0        0
             PE cross-linked to EVA shell         0        0
             Polyester lining in a PVC shell      1        4
             PP                                   1        1
             PTFE                                 0        0
             PFA                                  0        0
             Polyurethane                         5       12
             ETFE                                 0        0
             Plasticized PP (type I)              1        1
             Polyamide                            2        9
             P(VDF-HFP)                           1        1
             PVC                                  3        8
             TPE                                  1        4
             Plasticized PP (type II)             0        0
             PVDF                                 0        0
             Fluoroelastomer                      1        1
             FEP-lined PE                         0        0
             FEP                                  0        0
                                      311

-------
leach any contaminants. Of the flexible polymers, only one of the
plasticized polypropylene tubings (type II) did not leach any con-
taminants. However,  several of the flexible tubings leached only
one contaminant [P(VDF-HFP),  the fluoroelastomer, and the other
plasticized polypropylene].

These results agree well with the few studies that looked for
leaching of contaminants from polymeric tubing materials (4, 6,
11, 12). Generally,  these studies found that fluoropolymers
(especially PTFE)  did not appear to leach contaminants (6,  12) .

Based on these findings, we would tentatively recommend not using
the following tubing materials since each of them leached several
contaminants: polyurethane,  polyamide,  flexible PVC,  polyester-
lined PVC, and silicone-modified thermoplastic elastomer. In addi-
tion, polypropylene,  plasticized polypropylene (type I),  P(VDF-
HFP), and the fluoroelastomer tubings each leached one contaminant
and thus may also be less desirable than those tubings that did
not leach any contaminants (the polyethylene and rigid fluoropoly-
mer tubings).

CONCLUSIONS

Based on this study,  the rigid fluoropolymers appear to be the
best materials for sampling groundwater since they were the least
sorptive of organic solutes and do not appear to leach any contam-
inants. Among the fluoropolymers, FEP,  FEP-lined PE,  and PVDF were
the least sorptive materials tested. If one also considers cost,
PVDF becomes the most desirable choice; its price was less than
one-half that of the FEP tubing and approximately 60% of the FEP-
lined PE tubing. In fact, PVDF was the least expensive of all the
rigid fluoropolymers tested.

In some instances a more flexible tubing may be required (e.g., in
the head of a peristaltic pump). Among the flexible (non-rigid)
tubings, the two fluorinated tubings [the fluoroelastomer and
P(VDF-HFP)] were much less sorptive of organic solutes than the
others. In addition,  these two tubings and the two plasticized
polypropylenes were the best products with respect to leaching of
contaminants. Thus,  among the flexible tubings we tested, we would
tentatively recommend using the fluoroelastomer or P(VDF-HFP) tub-
ings. However, if we also consider cost, we see that the fluoro-
elastomer was the most expensive of all the tubings tested  ($8707
100 ft). Since the price of the P(VDF-HFP) tubing was less than
1/4 of that of the fluoroelastomer tubing, we would tentatively
recommend using P(VDF-HFP) when flexible tubing is required.

If under dynamic conditions these tubings reach equilibrium prior
                                    312

-------
to sampling, then  loss  of organic solutes should no longer be an
issue, unless  transfer  through the tubing to the atmosphere is
significant.   It is  also possible that leaching of components
from rigid polymers  is  a surface phenomenon that decreases with
time, as several researchers (13-16)  have observed for rigid PVC.
On the other hand,  if higher flow rates increase leaching, as
Junk et al.  (17) found  with flexible PVC, then leaching may be
more problematic than sorption when sampling a well. Since the
costs of the materials  we found to be the most inert are still
quite high (around $200/100 ft), it would be desirable to use a
less expensive alternative (e.g.,  LDPE is only $19/100 ft) if the
water samples  are  not affected.  We are currently conducting stud-
ies to determine if  the behaviors we discovered in this study
remain the same, increase,  or disappear under dynamic conditions.

ACKNOWLEDGMENTS

Thanks to Martin Stutz  and the U.S. Army Environmental Center for
their support  of this work. Thanks also to Alan Hewitt, Research
Physical Scientist,  and Marianne Walsh, Research Chemical Engi-
neer at CRREL,  for their technical reviews of this manuscript.

This publication reflects the personal views of the author and
does not suggest or  reflect the policy, practices, programs or
doctrine of the U.S. Army or government of the United States. The
contents of this report are not to be used for advertising or
promotional purposes. Citation of brand names does not constitute
an official endorsement or approval of the use of such commercial
products.

REFERENCES

 1. Parker, L.V. (In Press) Decontamination of Organic Contaminants From
    Ground-Water Sampling Devices: A Literature Review. CRREL Special
    Report 95- , U.S. Army Cold Regions Research and Engineering
    Laboratory, Hanover, N.H.
 2. Parker, L.V. and T.A. Ranney (In Press) Sampling Trace-level Organics
    with Polymeric Tubings. CRREL Special Report, U.S. Army Cold Regions
    Research and Engineering Laboratory, Hanover, N.H.
 3. Federal Register (1984) Definition and procedure for the determination
    of the method detection limit. Code of Federal Regulations, Part 136,
    Appendix B, October 26.
 4. Miller, G.D. (1982) Uptake and release of lead, chromium, and trace
    level volatile organics exposed to synthetic well casings. In
    Proceedings of the Second National Symposium on Aquifer Restoration
    and Ground Water Monitoring, National Water Well Association, U.S.
    Environmental Protection Agency, and National Center for Ground Water
    Research, pp. 236-245.
 5. Ho, J.S.-Y. (1983) Effect of sampling variables on recovery of
    volatile organics in water. Journal of the American Water Works
    Association 75(11):583-586.
 6. Barcelona, M.J., J.A. Helfrich, and E.E. Garske (1985) Sampling tubing
    effects on ground water samples. Analytical Chemistry 57:460-464.
 7. Devlin, J.F. (1987) Recommendations concerning materials and pumping
    systems used in the sampling of groundwater contaminated with
    volatile organics. Water Pollution Research Journal of Canada
    22(1):65-72.
 8. Pearsall, K.A. and D.A.V. Eckhardt (1987) Effects of selected sampling
    equipment and procedures on the concentrations of trichloroethylene
    and related compounds in ground water samples. Ground Water Monitoring
    Review 7(2):64-73.
 9. Reynolds, G.W. and R.W. Gillham (1985) Absorption of halogenated
    organic compounds by polymer materials commonly used in ground water
    monitors. In Proceedings of Second Canadian/American Conference on
    Hydrogeology: Hazardous Wastes in Ground Water: A Soluble Dilemma,
    National Water Well Association, Dublin, Ohio, pp. 125-133.
10. Gillham, R.W. and S.F. O'Hannesin (1990) Sorption of aromatic
    hydrocarbons by materials used in construction of ground-water
    sampling wells. In Ground Water and Vadose Zone Monitoring, ASTM STP
    1053, American Society for Testing and Materials, Philadelphia,
    pp. 108-122.
11. Curran, C.M. and M.B. Tomson (1983) Leaching of trace organics into
    water from five common plastics. Ground Water Monitoring Review
    3:68-71.
12. Ranney, T.A. and L.V. Parker (1994) Sorption of Trace-level Organics
    by ABS, FEP, FRE and FRP Well Casings. CRREL Special Report 94-15,
    U.S. Army Cold Regions Research and Engineering Laboratory, Hanover,
    N.H.
13. Packham, R.F. (1971a) The leaching of toxic substances from
    unplasticized PVC water pipe. Part I. A critical study of laboratory
    test procedures. Water Treatment and Examination 20(2):108-124.
14. Packham, R.F. (1971b) The leaching of toxic substances from
    unplasticized PVC water pipe. Part II. A survey of lead levels in PVC
    distribution systems. Water Treatment and Examination 20(2):144-151.
15. Gross, R.C., B. Engelhart, and S. Walter (1974) Aqueous extraction of
    lead stabilizers from PVC compounds. Society of Plastic Engineering,
    Technical Paper 20, pp. 527-531.
16. Boettner, E.A., G.L. Ball, Z. Hollingsworth, and R. Aquino (1982)
    Organic and Organotin Compounds Leached from PVC and CPVC Pipe. U.S.
    EPA-600/S1-81-062, U.S. Environmental Protection Agency, Health
    Effects Research Laboratory, Cincinnati, OH.
17. Junk, G.A., H.J. Svec, R.D. Vick, and M.J. Avery (1974) Contamination
    of water by synthetic polymer tubes. Environmental Science and
    Technology 8(13):1100-1106.
                                        314

-------
                                                                                48
              THE ANALYSIS OF HEXACHLOROPHENE BY SW846 8151

N.  Risser,  Manager,  Environmental  Sciences,  Lancaster  Laboratories
Division, Thermo Analytical,  Inc., Lancaster, Pennsylvania 17601.

J. Hess,  Group  Leader,  Pesticides Residue Analysis Group, Environmental
Sciences,  Lancaster  Laboratories  Division,  Thermo  Analytical  Inc.,
Lancaster, Pennsylvania 17601

M.   Kolodziejski,   Chemist,   Pesticides   Residue   Analysis   Group,
Environmental   Sciences,    Lancaster   Laboratories    Division,   Thermo
Analytical, Inc.,  Lancaster,  Pennsylvania 17601

ABSTRACT

In response  to the   Resource Conservation and  Recovery Act,  hazardous
waste  generators  are frequently  required to monitor  waste  streams  for
target  compounds  known as  the Appendix IX list.  Included in  this list
are    volatiles,    semi-volatiles,     pesticides,     and   herbicides.
Hexachlorophene,     also     known     as     2,2'-Methylene-bis(3,4,6-
trichlorophenol),  is  listed as a target  analyte with poor  and erratic
chromatographic  performance  in  SW846  8270,  the  GC/MS  method  for  the
semi-volatile fraction. Since many of the organic fractions are required
most of the time,  it is practical to analyze hexachlorophene as a target
analyte  in the  herbicide  fraction  using  SW846  8151  and  obtain  much
better   performance  than   that   observed   when   using  8270.   Since
hexachlorophene is  a phenolic compound, it is readily derivatized by the
methylating reagents used in the  8151 preparation. Results indicate the
potential   for  much   improved  method   detection   limits,   improved
chromatographic    performance,       and   acceptable   precision   when
hexachlorophene is  analyzed by 8151 as  opposed to 8270.

Introduction

The passage of the  Resource Conservation and Recovery Act  (RCRA) in 1976
set the stage for the passage of a series of amendments that define
hazardous waste and regulate its disposal. The definition of a waste as
hazardous or not hazardous  can require  extensive analytical testing.  The
methods  for  testing are described in  a series of methods known  as  SW-
846,  which is  now in the third update  of  its  third  edition.  The target
analytes  for  RCRA testing are  compiled  in  several  lists  in  the
regulations, one  of which  is known as  the  Ground-Water  Monitoring List
(40 CFR Part  264  Appendix IX) .  This  list  includes  the names  of  the
target compounds of interest, the CAS number of each, suggested methods,
and the PQL  for  each  target compound.  One  of  the  compounds  on  the
Appendix  IX  list is hexachlorophene  and  the suggested  method  for this
compound is SW846  8270. Upon examination of the list of  compounds in the
method,  it  is  evident  that  hexachlorophene  does not perform  well  by
8270.  Adsorption  to walls  of glassware during  extraction and storage,
and non-reproducible chromatographic performance are likely to occur.
No QC acceptance criteria were given in Table 6 of 8270.

Hexachlorophene  (2,2'-Methylene-bis[3,4,6-trichlorophenol])  is  an anti-
infective agent  that is used  chiefly  in the manufacture of germicidal
soaps.  It is regulated primarily  because  of  its potential neurotoxicity
in humans.
                                        315

-------
                               Hexachlorophene
              [Structure: 2,2'-methylene-bis(3,4,6-trichlorophenol)]
                                 C13H6Cl6O2
                            Formula Weight 406.92
Analytical Approach
The first  approach to the analysis of hexachlorophene at  our laboratory
was to  use SW846  8270.  The  recoveries  observed using  this  method were
erratic  and detection limits were variable.  It appeared that  some form
of  chemical  degradation  or  reaction  was  occurring   during  the  gas
chromatographic analysis. Figure 1 illustrates a typical chromatogram of
a  hexachlorophene standard under  the conditions  of  the 8270  analysis.
Not  only  was the  chromatographic  peak  badly  tailing,  but  the  mass
spectrum of the  peak was not consistent  (Figure 2 and Figure  3)  with the
expected mass spectrum of hexachlorophene. Elevated quantitation limits
are often  the result of  chemical instability and  poor  chromatography.
These limitations  of 8270 indicate that  an alternative method could lead
to better  performance.

                                  Figure 1
             Total Ion Chromatogram of 494 mg/L Hexachlorophene
   [Chromatogram, approximately 34-39 min; labeled peaks: perylene-d12 and
    an unknown hexachlorophene adduct]

              Figure 2                               Figure 3
   Mass Spectrum of Unknown Adduct        Mass Spectrum of Hexachlorophene

The  structure  of hexachlorophene  indicates  that  it  is  a  chlorinated
phenolic  compound  that  might  behave  in  a  similar  manner  to  other
                                          316

-------
phenolic  compounds  such  as  pentachlorophenol  (PCP).  PCP  is  a  target
compound  listed  in  SW846   8151  Chlorinated  Herbicides  by  GC  Using
Methylation  or Pentafluorobenzylation Derivatization:  Capillary  Column
Technique.  The SW846  8151 method could  potentially be  applied  to  the
analysis  of hexachlorophene.  Preliminary  mass  spectral  data  (Figure  4
and  Figure  5)  indicated  that complete derivatization was  observed when
hexachlorophene was  methylated with diazomethane.
              Figure 4                               Figure 5
   Total Ion Chromatogram of               Mass Spectrum of Methylated
   Methylated Hexachlorophene                    Hexachlorophene

Experimental  Design
The  first step was  to  determine the  gas  chromatographic conditions  for
the  analysis  of methylated hexachlorophene.  This  derivative  was  not
readily  available  from commercial  vendors and  so a  stock solution  of
hexachlorophene was derivatized  with diazomethane   according  to  the
bubbler method described in SW846-8151. From this stock, five  levels  of
calibration  standards were prepared at 17.06,  34.11,  68.30, 170.60,  and
341.10 ug/1.

The  standards  were  then  used  to  calibrate  the gas   chromatographic
system. The analysis was performed on an HP5890 Series II GC equipped
with  Electronic  Pressure Control  and two  columns  installed  into  one
injection port.

The columns  chosen  for this  analysis were:
Analytical  columns :
DB-608, 30 meters,  0.53 mm ID,  0.83 micron film  (J&W P/N  125-1730)
DB-1701,  30  meters,  0.53 mm ID, 1.0 micron film  (J&W P/N  125-0732)
Guard column:
RTX-5, 3  meters,  0.53 mm ID, 3.0 micron film  (Restek Cat  #  10282)

The following  chromatographic conditions were used:
Injection port-  260 C;  Detector-  300 C;  Helium  carrier at  3.5  PSI;
Temperature  program- 80 C for 3 min,  5 C/min to 180 C, then 20  C/min to
260 C. Figures  6, 7,  8,  and  9 illustrate the chromatographic results  and
calibration  curves  that were obtained.
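The calibration itself amounts to a least-squares fit of detector response against the
five standard amounts listed above. The Python sketch below is illustrative only; the
standard amounts are those given in the text, while the peak responses and the sample
response are hypothetical placeholders.

    import numpy as np

    amounts = np.array([17.06, 34.11, 68.30, 170.60, 341.10])          # ug/L (from the text)
    responses = np.array([4100.0, 8300.0, 16400.0, 41500.0, 82100.0])  # hypothetical peak heights

    slope, intercept = np.polyfit(amounts, responses, 1)
    r = np.corrcoef(amounts, responses)[0, 1]
    print(f"response = {slope:.1f} * amount + {intercept:.1f}, r^2 = {r ** 2:.4f}")

    sample_response = 25000.0                                           # hypothetical
    print(f"back-calculated amount = {(sample_response - intercept) / slope:.1f} ug/L")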
                                         317

-------
                                    Figure 6
   Chromatogram of Herbicide Methyl Esters and Hexachlorophene on DB-608
   [Chromatogram, detector response vs. retention time (min); the methylated
    hexachlorophene peak is labeled]

                                    Figure 7
   Chromatogram of Herbicide Methyl Esters and Hexachlorophene on DB-1701
   [Chromatogram, detector response vs. retention time (min); the methylated
    hexachlorophene peak is labeled]

                                    Figure 8
          Calibration Curve on DB-608, Hexachlorophene Methyl Esters
   [Calibration curve: detector response vs. amount (ug/L), 0-400 ug/L]
                                           318

-------
The samples  were prepared according  to  the September  1994  revision of
SW846-8151 Section 7.0.

The following steps summarize the preparation for waters:
•  Add NaCl to 1 liter of sample
•  Adjust pH of sample to greater than 12
•  Extract with methylene chloride
•  Adjust pH of sample to less than 2
•  Extract with diethyl ether and dry with sodium sulfate
•  Derivatize the extract with diazomethane using the bubbler method

The steps required in the preparation of soils include:
   Adjust pH of sample to less than 2
   Add sodium sulfate
   Extract with methylene chloride/acetone
   Hydrolyze the extract with KOH
   Extract with methylene chloride
   Adjust pH to less than 2
   Extract with diethyl ether and dry with sodium sulfate
   Derivatize the extract with diazomethane using the bubbler method

Preliminary extraction data indicated 40-50% recovery of hexachlorophene
through this procedure, and that a significant amount of hexachlorophene
was lost in the methylene chloride step. It appears that the pKa of the
second hydroxyl group is quite high and that, even at a pH above 12, this
second hydroxyl is not fully deprotonated.

To  address  the  low  recovery  of  the   hexachlorophene,  the  methylene
chloride  wash  step  was  not performed,  but  instead,  an  additional
florisil cartridge cleanup was used that was  modified  from the florisil
cleanup described in SW846 3620. Preliminary data from real-world soil
samples indicate that the florisil cleanup is effective in reducing some
types of chromatographic interferences.

A  spiked  water  sample   and  a  spiked  soil  sample were  analyzed  in
triplicate.

Analytical Results

The results of the recovery study are illustrated in Figure 10.
                                Figure  10
           Recovery Results for Hexachlorophene by SW846-8151
                                on DB-608
Sample      Spike amount    Spike found     % Recovery
Water 1     9.7 ug/L        7.7 ug/L           79.
Water 2     9.7 ug/L        8.1 ug/L           84.
Water 3     9.7 ug/L        9.8 ug/L          101.
   %RSD (waters): 13.1

Soil 1      320. ug/Kg      240. ug/Kg         76.
Soil 2      320. ug/Kg      310. ug/Kg         97.
Soil 3      320. ug/Kg      230. ug/Kg         72.
Soil 4      320. ug/Kg      270. ug/Kg         84.
   %RSD (soils): 13.4
                                        319

-------
                               Figure  11
           Recovery Results for Hexachlorophene by SW846-8151
                               on DB-1701
Sample      Spike amount    Spike found     % Recovery
Water 1     9.7 ug/L        9.0 ug/L           93.
Water 2     9.7 ug/L        11.2 ug/L         115.
Water 3     9.7 ug/L        12.3 ug/L         127.
   %RSD (waters): 15.4

Soil 1      320. ug/Kg      280. ug/Kg         87.
Soil 2      320. ug/Kg      360. ug/Kg        114.
Soil 3      320. ug/Kg      270. ug/Kg         85.
Soil 4      320. ug/Kg      300. ug/Kg         95.
   %RSD (soils): 13.4
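The recovery and precision values reported in Figures 10 and 11 are simple functions of the
spike amounts and the amounts found. The Python sketch below recomputes them for the DB-608
soil replicates; small differences from the tabulated 13.4% arise only because the reported
"found" amounts are rounded to two significant figures.

    import statistics

    spiked = 320.0                          # ug/Kg
    found = [240.0, 310.0, 230.0, 270.0]    # ug/Kg, DB-608 soil replicates

    recoveries = [100.0 * f / spiked for f in found]
    rsd = 100.0 * statistics.stdev(recoveries) / statistics.mean(recoveries)
    print("recoveries (%):", [round(r) for r in recoveries])
    print(f"%RSD = {rsd:.1f}")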
Conclusion

Recovery data indicate  that  SW846 8151  can  successfully be  applied to
the  analysis  of  hexachlorophene in  soils  and waters.  This  approach
results in much improved chromatographic performance.  The  GC-ECD method
provides for more  reproducible  and  reliable detection  and  quantitation
of hexachlorophene than does  SW846 8270.

Some work remains  to demonstrate the  utility of this  method and validate
its performance. The conditions  of the florisil cartridge  cleanup  need
to be  finalized. A  quad  study and method detection limit study  need to
be  performed.   The  resulting   Method  Detection  Limit  and  Practical
Quantitation Limit obtained using SW846  8151  are expected to be  two to
three orders of  magnitude lower  than  those  obtained using SW846 8270.
                                       320

-------
                                                                                49
         SOLVENT RECOVERY IN THE PESTICIDE EXTRACTION LABORATORY
                 UTILIZING STANDARD LABORATORY GLASSWARE

N. Risser, Manager, Environmental  Sciences, Lancaster Laboratories
Division, Thermo Analytical Inc.,  Lancaster,  Pennsylvania 17601

ABSTRACT

With the  enactment  of  the Clean Air Act,  there is an increased interest
in recovery of  solvents used in  the extraction of soil, water, and waste
samples as prescribed by  the U.  S. Environmental Protection Agency's 500
series Drinking Water  methods,  600 series  Waste  Water  methods,  and the
SW-846 Solid Waste methods. Until  promulgated methodology allows for the
application  of other  techniques  to  pesticide trace  residue analysis,
such as supercritical  fluid extraction (SFE)  and solid phase extraction
 (SPE), the use  of  solvents such as methylene chloride,  acetone, hexane,
and diethyl ether is required. Following the extraction of the sample,
aliquots  of  the  solvent  extract are  combined  in a Kuderna  Danish
concentrator.  During a  typical concentration  process,  the  solvent  is
vented by a fume hood  into the atmosphere. Our environmental laboratory
began capturing some solvents  several years ago.  Our current goal is to
capture more  than  80%  of the  solvents we  use,  including those solvents
used  in  the  pesticides  sample  preparation laboratory.  Commercially
available solvent  recovery products  were  reviewed on the basis of cost,
ruggedness, and ease of  use.  Based  on  this  review, it  was determined
that a  more cost  effective  solution could be  custom designed.  Several
vendors' systems and the performance of the custom-designed system are
reviewed.

INTRODUCTION

Air quality  has been a  major  concern for  the Environmental Protection
Agency  since  the  passage of  the  Clean Air  Act in  1970.  The  list  of
compounds of  interest  has  continued  to  expand through  the  1980s which
led  to  the  passage of  the Clean Air  Act  Amendments  in  1990.  These
Amendments place more  responsibility on industry and local,  state,  and
federal agencies to keep  the public informed  about the health effects of
the hazardous air  pollutants  (HAPs)  emissions and levels of exposure to
these pollutants.1

Although  the   Clean  Air  Act   is  seen  by  many  forecasters  of  the
environmental market as  a driving force in the  creation  of an expanded
market segment  for analytical services,  its  focus is  to  hold industry
accountable for emissions of  HAPs.  Many companies are  concerned about
environmental  issues  and  are  creatively  seeking ways  to  minimize  or
eliminate the generation of hazardous waste through pollution prevention
programs. In addition to  concern for the environment,  environmental and
industrial laboratories that use solvents (many are listed as HAPs) in
sample preparation should be  aware  of their  potential  liability under
the provisions  of  Title  I  or Title III  of the Clean Air Act Amendments.
For  the  environmental  laboratory,  a  perplexing  paradox   exists:  The
solvents  that are  regulated by the Clean Air Act  are the  same solvents
that  are required by  current  approved  analytical methods.  Table  1
summarizes the  solvent requirements for several approved EPA methods.

Solvent reclamation could contribute in a significant way to the
minimization of  solvent emissions from the pesticide sample preparation
laboratory.  In  an effort to implement an effective solution, a study was
conducted to compare several solvent reclamation methods.
                                        321

-------
REVIEW OF LABORATORY OPERATIONS

Many  of  the  current  methods  of  sample  extraction  in  the pesticide
laboratory  have similar approaches to  the separation and  concentration
of  target analytes  from the sample matrix.   First,  a sample of soil  or
water is extracted with several hundred milliliters  of a  solvent such  as
methylene   chloride   using  standard   extraction   techniques  such   as
sonication,  shake out in a  separatory  funnel,  continuous  liquid-liquid
extraction,  or soxhlet extraction.  Secondly,  the extract  is  concentrated
in  a  Kuderna-Danish  concentrator on a steam bath. A  solvent exchange
step  to  a  non-halogenated  solvent is   required   depending on  which
instrumental  analysis  method will be employed. Thirdly,  the extract  is
subjected to  various cleanup techniques such  as gel permeation and solid
phase  cartridge  cleanup.  Re-concentration  of the extract might   be
required after cleanup. And  last  of all,  the extract is analyzed by gas
chromatography  (GC)   using   an  electron  capture   detector  (ECD),  a
nitrogen-phosphorus detector  (NPD), flame  photometric detector (FPD),  or
flame  ionization detector  (FID).   A solvent recovery system  would  be
most effective during the Kuderna-Danish concentration step.

SOLVENT RECOVERY  SYSTEM REQUIREMENTS

Several  performance  requirements  and  hardware  specifications  for  our
solvent recovery  system were identified:
*   The system needs  to recover greater than 90% of  the  solvent during
    extract  concentration.
*   Negative  impacts  on the efficiency  of  laboratory operations  need  to
    be minimized.
*   The hardware needs  to  be reasonably rugged to endure  daily wear and
    tear.
*   The initial  cost of the hardware needs  to be minimized.
*   The operational cost of the system needs to be minimized.
*   The safety of  operators cannot be compromised.

COMMERCIALLY  AVAILABLE SOLVENT RECOVERY SYSTEMS

Several vendors have  developed solvent  recovery systems  that are  based
on  different  approaches to the recovery problem.

Some automated hardware  is available that concentrates  sample extracts
and recovers  the solvent  one extract at  a time.  In  our operation,
several units would  be required  to obtain enough  capacity  to meet  the
demands of  our  laboratory. This approach did not meet our requirement  of
minimal initial cost.

Another approach  is  to connect some  type of condenser to  the Kuderna-
Danish  concentrator  to  collect  the  solvent vapors.  Two  vendors  were
considered  that took  this approach. The  system design from one vendor
included  a  specially designed  Snyder/Condenser  unit.  The  glassware
appeared to be especially susceptible to breakage. There was no
convenient  way to perform the  solvent exchange step.    The size  and
dimensions  of the unit made  it difficult  to  use in  the  laboratory fume
hood.  The design  from the other vendor  reflected more closely the design
of  the glassware used in the normal operation of our laboratory.  Some  of
the components appeared to  be a  special  design,  and  these components
were connected using rigid ball  and socket  clamp  joints.   There  was  a
provision for solvent  exchange.  The potential for breakage appeared  to
be  high which could lead to significant operational  costs.
                                        322

-------
After a review of commercially available systems, it became evident that
an effective solvent reclamation system could be developed in house
that  would   meet  the   system  requirements  of   cost  minimization,
ruggedness,   and  operational   efficiency.   Table  2   summarizes  the
comparison of the various approaches that were considered in this  study.

CUSTOM SOLVENT RECOVERY SYSTEM

The  solvent   recovery  system  designed at  Lancaster  Laboratories took
advantage of the glassware and hardware that was already purchased. This
included much  of the glassware, including  the  Kuderna-Danish apparatus
and the steam bath heater.

The  solvent   extract  is  placed in  the Kuderna-Danish  flask with  its
concentrator tube. A standard three-ball Snyder column is attached to
the  flask. At this point, the  Kuderna-Danish apparatus is placed on a
steam bath  located inside a fume hood. An aluminum  rack  supports  the
Kuderna-Danish  apparatus  as sample  concentration  occurs.  A  standard
vacuum  adapter is  attached  to  the  Snyder column.  Flexible  corrugated
TEFLON  tubing is attached to  the  vacuum adapter which  provides  a flow
path  for  the  solvent  vapors  to an Allihn condenser.  A  recirculating
cooler  provides  the coolant  to  the  condensers.  The condensate collects
in  a gently  sloping  TEFLON tubing  manifold and  then  flows  through  a
drain line to a  10 liter  collection vessel.

Figure 1 illustrates the setup of the custom solvent recovery system.
The components used to construct the system are summarized in Table 3.

Hardware was assembled in two hoods to accommodate the simultaneous
concentration of 24 sample extracts.

THE FUTURE OF SOLVENT RECOVERY

One  reason  that  it was  particularly  appropriate  to minimize  capital
expenditures is  that approval of new methodologies utilizing alternative
isolation and  concentration  techniques  has  been  given  by the  EPA or the
approval is  imminent.    Many of the  500  Series  Drinking Water  methods
already  allow   for  solid   phase   isolation   techniques  to   be  used.
Immunoassay  techniques  are   increasingly   being  applied  to  pesticide
analyses.  New methods are scheduled to be released with  the  Proposed
Update  III  to  the  SW-846 collection  of  RCRA  methods.2 These  methods
include Method 3530(c) -  Pesticides and PCBs by Open-Tubular Solid Phase
Extraction; Method  3535  -  Solid  Phase Extraction  Disk Method   (SPE);
Method 3545(c) - Accelerated Solvent Extraction (ASE); and many
immunoassay methods  specifically  for  pesticides  - Methods  4015,  4020,
4040, 4041, and  4042.  These  new methods require much  less  solvent than
conventional  extraction  techniques.  This  is very  good news  for  the
extraction laboratory. Lower levels  of solvent usage translate to lower
costs for  solvent purchase  and disposal,  and  more importantly,  lower
potential risks associated with worker exposure to these solvents.

In  sample  preparations  where  significant  amounts  of  solvents  are
required, the recovery of those solvents will allow for the purification
and recycling of those solvents. This minimizes expenses associated with
the  purchase  of  new  solvent  and  the  disposal  of  the spent  solvent.
Recycling  of  solvents   in  the  pesticide  laboratory  is  especially
challenging because of  the  range  of  solvents that are  used.  The broad
scope of extraction techniques  employed in the  analysis of pesticides,
                                         323

-------
herbicides,  and  PCBs  leads  to  mixtures  of  solvent that  have similar
boiling points that are difficult to separate for re-use.

CONCLUSION

The  custom designed system  meets the  requirements of  minimal initial
cost,  ruggedness,  ease  of  use,  and  throughput.   The  system  has  been
functioning well for several months.  By designing  and  building our own
system, a savings in initial capital investment of  $10,000 - $12,000 was
realized.

                                Figure 1
                     Custom Solvent Capture System
   [Diagram of the assembled recovery glassware; the coolant inlet and
    outlet connections to the condensers are labeled]
                                  Table 2
               Comparison of Approaches to Solvent Recovery

Vendor      Technique             Cost per    Ruggedness              Ease of      Throughput
                                  sample                              use          (estimate)
                                  position
Number 1    heating with          >$2500.     unknown                 unknown      serial process with
            fan action                                                             1 hour per sample
Number 2    heating with special  $633.       possibly susceptible    somewhat     estimate at 8 samples
            K-D and condenser                 to breakage             cumbersome   per 1/2 hour per unit
Number 3    heating with          $695.       possible breakage       acceptable   estimate at 6 samples
            standard K-D                      from rigid connections               per 1/2 hour per unit
Custom      heating with          $205.*      little breakage         acceptable   estimate at 12 samples
            standard K-D                      during 1st 4 months                  per 1/2 hour per unit

  * assumes use of already purchased steam bath and K-D units.
                                       324

-------
                                  TABLE 1
               Volumes of Solvent used in Sample Preparation
                         using Various EPA Methods

                         petroleum  methylene                   aceto-   diethyl
EPA Method      hexane   ether      chloride   acetone   MTBE   nitrile  ether    Total
                (ml)     (ml)       (ml)       (ml)      (ml)   (ml)     (ml)     (ml)
507/508             0        0         200         0      150      0        0      350
515                 0        0         180        60       20      0      263      523
600/4-81-045      255       10           0         0        0      0        0      265
SW846-3640          0        0         100         0        0      0        0      100
SW846-8080-SW     135        0         195       285        0      0        0      615
SW846-8080-W      105        0         225         0        0      0        0      330
SW846-8150-SW     100        0         252       300        0      0      200      852
SW846-8150-W        0        0           0        30        0      0      512      542
SW846-8310-SW       0        0         195       285        0      0        0      480
SW846-8310-W        0        0         225         0        0      0        0      225
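The Total column in Table 1 is simply the sum of the per-solvent volumes for each method.
A two-row Python sketch of that bookkeeping (volumes in mL, taken from the table):

    usage = {
        "SW846-8080-SW": {"hexane": 135, "methylene chloride": 195, "acetone": 285},
        "SW846-8150-SW": {"hexane": 100, "methylene chloride": 252, "acetone": 300,
                          "diethyl ether": 200},
    }
    for method, solvents in usage.items():
        print(f"{method}: total = {sum(solvents.values())} mL")   # 615 and 852 mL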
                                  Table 3
                  Items for Custom Solvent Recovery System

Item                            Vendor            Part Number   Cost         Total     Subtotal
                                                                              Number
Steam bath (12 pos)*            Lindberg/Blue M   MW-1130A-1    $2,000.00        1     $2,000.00
Teflon tubing (3/8")            Cole Parmer       G-06407-40    $64.30           1     $64.30
Teflon Tees (3/8")              Cole Parmer       G-06361-90    $38.53          12     $462.36
Teflon Elbow (3/8")             Cole Parmer       G-06361-60    $22.00           1     $22.00
Corrugated Teflon tubing (3/8") Cole Parmer       G-06407-52    $70.00           5     $350.00
Extension Clamp                 VWR               05-769-3      $9.72           12     $116.64
Clamp Holder                    VWR               05-754        $5.53           12     $66.36
Pyrexplus 9500 ml bottle        VWR               B7579-9LS     $216.93          1     $216.93
Allihn Condenser                Perpetual Sys     PS500-03      $43.00          12     $516.00
Glass Stopper 24/40             Perpetual Sys     PS1300-05     $7.60           12     $91.20
Inlet/Outlet Adpt 24/40         Perpetual Sys     PS162-01      $8.43           12     $101.16
Inlet/Outlet Adpt 24/40         Perpetual Sys     PS164-01      $9.71           12     $116.52
Vacuum Adapter 24/40            Perpetual Sys     PS202-01      $24.51          12     $294.12
Green Plastic Clips             Perpetual Sys     05-880E       $1.68           12     $20.16
Tygon tubing (3/8")             VWR               --            $2.34           12     $28.08
                                                                              Total    $2,465.83
*NOTE: Steam bath was  already in place so was considered sunk cost.
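The subtotals in Table 3 are unit cost times quantity, and the $2,465.83 total excludes the
steam bath (treated as a sunk cost, per the note above); dividing by the 12 sample positions
gives the roughly $205 per position quoted in Table 2. The Python sketch below reproduces
that arithmetic from the tabulated values.

    items = [   # (item, unit cost $, quantity), from Table 3
        ("Steam bath (12 pos)", 2000.00, 1),
        ("Teflon tubing (3/8 in.)", 64.30, 1),
        ("Teflon Tees (3/8 in.)", 38.53, 12),
        ("Teflon Elbow (3/8 in.)", 22.00, 1),
        ("Corrugated Teflon tubing (3/8 in.)", 70.00, 5),
        ("Extension Clamp", 9.72, 12),
        ("Clamp Holder", 5.53, 12),
        ("Pyrexplus 9500 mL bottle", 216.93, 1),
        ("Allihn Condenser", 43.00, 12),
        ("Glass Stopper 24/40", 7.60, 12),
        ("Inlet/Outlet Adpt 24/40 (PS162-01)", 8.43, 12),
        ("Inlet/Outlet Adpt 24/40 (PS164-01)", 9.71, 12),
        ("Vacuum Adapter 24/40", 24.51, 12),
        ("Green Plastic Clips", 1.68, 12),
        ("Tygon tubing (3/8 in.)", 2.34, 12),
    ]
    total = sum(cost * qty for name, cost, qty in items if not name.startswith("Steam bath"))
    print(f"total excluding steam bath = ${total:,.2f}")       # $2,465.83
    print(f"cost per sample position   = ${total / 12:,.2f}")  # about $205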
                               References

1. Winberry, W.T.,  "Sampling  & Analysis Under Title III", Environmental
Lab, June/July 1993, p.  46.

2. Lesnik, B. and O. Fordham, "SW-846: The Current Status", Environmental
Lab, December/January 1994/95, pp. 25-33.
                                        325

-------
50
  DETERMINATION OF TNT IN SOIL AND WATER BY A MAGNETIC
         PARTICLE-BASED ENZYME IMMUNOASSAY SYSTEM.
Fernando M. Rubio. Timothy S.  Lawruk, Adrian M. Gueco and David P. Herzog,
Ohmicron Environmental Diagnostics, 375 Pheasant Run, Newtown, Pennsylvania 18940;
James R. Fleeker, North Dakota State University, P.O. Box 5516, Fargo, North Dakota
58105.
ABSTRACT

Use of immunoassays as field-screening methods to detect environmental contaminants
has increased dramatically in recent years.  Immunochemical assays  are sensitive, rapid,
reliable, cost-effective and can be used for lab or field analysis.  A magnetic particle-based
immunoassay system has been developed for the quantitation of TNT in soil and water.
Paramagnetic particles used as the solid-phase allow for the precise addition of antibody
and non-diffusion limited reaction kinetics.  The magnetic particle-based immunoassay is
ideally suited for on-site investigation  and remediation  processes to  delineate  TNT
contamination.  This system includes easy-to-use  materials for collection, extraction,
filtration and dilution of soil samples prior to analysis by immunoassay.  The  method
detects TNT and various nitroaromatic compounds, such as Tetryl and 1,3,5-
Trinitrobenzene at less than 0.25 parts-per-million (ppm) levels in soil and at less than 1
part-per-billion (ppb) in water.  The typical precision of the assay (within assay) in  soil and
water is less than 12% and 8%, respectively.  Recovery  studies averaged 106% from soil,
and 103% from water.  The analysis  of soil samples by this ELISA correlates well with
Method 8330, yielding a correlation  coefficient (r) of 0.970; when water samples  were
compared to SW-846 Method 8330, a correlation (r) of 0.951 was obtained.  The
application of this ELISA method permits  the cost-effective evaluation of samples with
minimal solvent disposal and can result in savings of time and  money.  The  system's
flexibility allows the analysis of TNT in many other sample matrices with minimum sample
preparation.

INTRODUCTION

TNT is the common name for 2,4,6-trinitrotoluene, the most widely used military high
explosive. It is used widely in shells, bombs, grenades, and demolition and propellant
compositions. Trinitrotoluene is produced at Army Ammunition Plants (AAPs); its
production from 1969 to 1971 was reported as 85 million pounds per month (Ryon et al.,
1984). During that period, as much as one half million gallons of TNT wastewater were
generated per day by a single TNT production facility (Hartley et al., 1981). The
wastewater was collected in lagoons, which, after evaporation, has resulted in
localized areas of severe contamination. Storage and testing of explosives has also
contributed to environmental contamination.
                                            326

-------
TNT is considered highly toxic, mutagenic and carcinogenic in bacterial and animal tests
(U.S. EPA, 1989). The lifetime Health Advisory Level for TNT in drinking water has
been set at 2 ppb (U.S. EPA, 1989). As military bases throughout the United States and
Europe are decommissioned and turned over for other uses, contaminated sites on these
bases need to be remediated. To define the extent of contamination and monitor the
progress of the cleanup, samples are initially screened on site or sent for laboratory
analysis. The analysis of TNT contamination in environmental samples is typically
performed by HPLC, such as SW-846 Method 8330 (U.S. EPA, 1992); this method is
accurate and precise but can be time-consuming and expensive. This poster describes a
magnetic-particle solid-phase immunoassay method for TNT in water and soil samples.
Immunoassays have the advantage of being rapid and less expensive than GC/MS or
HPLC, as well as field-portable.

The principles of enzyme linked immunosorbent assays (ELISA) have been described
(Hammock and Mumma,  1980).  Magnetic particle-based ELISAs have previously been
described  and applied to the detection of pesticide residues (Itak  et al,  1992,  1993;
Lawruk et al,  1992, 1993; Rubio et al, 1991). These ELISAs eliminate the imprecision
problems  that  may be associated with antibody coated  plates and  tubes (Harrison et al,
1989; Engvall, 1980) through the covalent coupling of antibody to the magnetic particle
solid-phase.  The uniform dispersion of particles throughout the reaction  mixture allows
for rapid  reaction kinetics and precise addition of antibody. The TNT magnetic-based
ELISA described in this paper combines antibodies specific  for TNT with enzyme labeled
TNT.  The presence of TNT in a sample is visualized through a colorimetric enzymatic
reaction and results are obtained by comparing the color in sample tubes to those of
calibrators.

MATERIALS AND METHODS

Amine terminated  superparamagnetic particles of approximately  1  um  diameter were
obtained from Perseptive Diagnostics, Inc. (Cambridge, MA).   Glutaraldehyde (Sigma
Chemical, St. Louis, MO).  Rabbit anti-TNT serum and  TNT-HRP  conjugate (Ohmicron,
Newtown, PA).  Hydrogen peroxide and TMB (Kirkegaard & Perry Labs, Gaithersburg,
MD).   TNT,  its metabolites, and non-related  cross-reactants (Chem  Service,  West
Chester, PA).  Other explosives (U.S. Army Environmental Center, Aberdeen, MD).

The anti-TNT coupled magnetic particles were prepared  by glutaraldehyde activation
(Rubio et al, 1991).  The unbound glutaraldehyde was removed from the particles by
magnetic separation and washing four times  with 2-(N-morpholino) ethane sulfonic acid
(MES) buffer.  The TNT antiserum and the activated particles were incubated overnight at
room temperature  with agitation.  The  unreacted  glutaraldehyde was quenched with
glycine buffer and the covalently coupled anti-TNT particles  were washed and diluted with
a Tris-saline/BSA preserved buffer.

TNT was dried over phosphorus pentoxide overnight under vacuum.  TNT and TNT-related
compounds used during the cross-reactivity studies were diluted in methanol to obtain
                                             327

-------
a stock concentration of 1000 ppb.  The stocks were further diluted in TNT diluent to
obtain concentrations of 10, 5, 1, 0.25, and 0.1 ppb.  These dilutions were then analyzed
as samples in the assay.

Soil samples (TNT free) were fortified with TNT in acetone to obtain concentrations of 1,
5, and 10 ppm; the samples were then air dried and analyzed promptly to minimize
degradation.  When analyzing soil samples, a simple extraction was performed prior to
analysis:  10 g of soil and 20 mL of a methanolic solution were added to a soil collector
(Figure 1).  The collector was shaken vigorously for 1 minute and the mixture allowed to
sit at least five minutes.  The cap of the soil collector was then replaced with a filter cap
and the extract collected in a small glass vial.  The filtered extract was then diluted 1:500
in TNT zero standard and assayed.  Water samples were collected in glass vessels with
Teflon-lined caps before analysis in the assay.

Diluted soil extract or water samples (100 uL) and horseradish peroxidase (HRP)-labeled
TNT (250 uL) were incubated for 15 minutes with the antibody-coupled solid-phase (500
uL).  A magnetic field was applied to the magnetic solid-phase to facilitate washing and
removal of unbound TNT-HRP and to eliminate any potential interfering substances.  The
enzyme substrate (hydrogen peroxide) and TMB chromogen (3,3',5,5'-tetramethylbenzidine)
were then added and incubated for 20 minutes.  The reaction was stopped with the addition
of acid, and the final colored product was analyzed on the RPA-I RaPID Analyzer™ by
determining the absorbance at 450 nm.  The observed absorbance results were compared to
a standard curve generated by linear regression of the log-logit-transformed responses of
calibrators containing 0, 0.25, 1.0, and 5.0 ppb of TNT.  If the assay is performed in the
field (on-site), a battery-powered photometer such as the RPA-III™ can be used.
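
For illustration only, the sketch below shows how a log-logit standard curve of this kind
can be fit and then inverted to back-calculate a sample concentration.  The calibrator
levels are those given above, but the absorbances, the helper names, and the code itself
are illustrative assumptions rather than part of the published method.

    import math

    def logit(b_over_b0):
        """Logit transform of the normalized response B/Bo."""
        return math.log(b_over_b0 / (1.0 - b_over_b0))

    def fit_log_logit(calibrators, a0):
        """Least-squares line of logit(B/Bo) versus log10(concentration).

        calibrators: (concentration_ppb, absorbance) pairs for the non-zero
        standards; a0 is the absorbance of the zero standard."""
        xs = [math.log10(c) for c, a in calibrators]
        ys = [logit(a / a0) for c, a in calibrators]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
                sum((x - mx) ** 2 for x in xs)
        return slope, my - slope * mx

    def concentration(absorbance, a0, slope, intercept):
        """Invert the fitted line to estimate TNT concentration (ppb)."""
        return 10 ** ((logit(absorbance / a0) - intercept) / slope)

    # Hypothetical absorbances for the 0, 0.25, 1.0 and 5.0 ppb calibrators
    a0 = 1.50
    cal = [(0.25, 1.18), (1.0, 0.90), (5.0, 0.48)]
    m, b = fit_log_logit(cal, a0)
    print(round(concentration(1.05, a0, m, b), 2), "ppb")   # about 0.5 ppb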

RESULTS AND DISCUSSION

Figure 2  illustrates  the mean standard curve for the TNT  calibrators collected over 50
runs;  error bars represent two standard deviations (SD).  This figure shows  the typical
response of the assay and the reproducibility of the standard curve from run-to-run. The
displacement  at the 0.25  ppb level is significant (78.9 % B/Bo, where B/Bo is the
absorbance at 450 nm observed for a sample or standard divided by the absorbance at the
zero standard).  The assay sensitivity in  diluent based on 90% B/Bo (Midgley et al, 1969)
is 0.07 ppb. When analyzing water samples, the assay has a range of 0.07 to 5.0 ppb. The
assay range when analyzing soils in conjunction with the TNT Sample Extraction Kit is
0.25 to 5 ppm as a result of sample extraction and dilution.
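
That soil range follows directly from the extraction and dilution factors given in the
Methods section (10 g of soil in 20 mL of methanolic extractant, filtered extract diluted
1:500).  A minimal sketch of the conversion, using only those factors:

    # Convert an assay concentration in diluent (ppb = ng/mL) to a soil concentration (ppm = ug/g)
    soil_mass_g = 10.0        # soil added to the collector
    extract_volume_ml = 20.0  # volume of methanolic extractant
    dilution_factor = 500.0   # filtered extract diluted 1:500 in zero standard

    def soil_ppm(assay_ppb):
        ng_per_g = assay_ppb * dilution_factor * extract_volume_ml / soil_mass_g
        return ng_per_g / 1000.0   # ng/g converted to ug/g (ppm)

    print(soil_ppm(0.25), soil_ppm(5.0))   # 0.25 and 5.0 ppm, the quoted soil range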

A precision study was conducted in which four surface and ground water samples were
fortified with TNT at four concentrations and assayed five times in singlicate per assay
on five different days.  The results are shown in Table 1.  Coefficients of variation
(%CV) within day and between days (Bookbinder and Panosian, 1986) were less than 8% in
both cases.
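
The within-day and between-day estimators of Bookbinder and Panosian (1986) are
ANOVA-based; the sketch below is a generic one-way (day-as-group) version with invented
data, included only to make the %CV definitions concrete, and may differ in detail from
the cited procedure.

    import statistics

    def precision_cvs(results_by_day):
        """Within-day %CV and a combined (within-day plus day-to-day) %CV from a
        one-way ANOVA decomposition with days as the grouping factor.

        results_by_day: one list of replicate results per day (equal n assumed)."""
        n = len(results_by_day[0])
        grand_mean = statistics.mean(v for day in results_by_day for v in day)

        ms_within = statistics.mean(statistics.variance(day) for day in results_by_day)
        day_means = [statistics.mean(day) for day in results_by_day]
        ms_between = n * statistics.variance(day_means)
        var_day = max((ms_between - ms_within) / n, 0.0)   # day-to-day variance component

        cv_within = 100.0 * ms_within ** 0.5 / grand_mean
        cv_combined = 100.0 * (ms_within + var_day) ** 0.5 / grand_mean
        return cv_within, cv_combined

    # Invented data: five singlicate results per day on five days for one sample
    data = [[0.34, 0.36, 0.33, 0.35, 0.37],
            [0.36, 0.35, 0.34, 0.37, 0.35],
            [0.33, 0.35, 0.36, 0.34, 0.36],
            [0.35, 0.37, 0.36, 0.34, 0.35],
            [0.36, 0.34, 0.35, 0.33, 0.36]]
    print(precision_cvs(data))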
                                              328

-------
In another precision study,  ten  samples  of two soils were weighed on  a balance  or
measured by packed volume in the soil collector. The samples were then extracted and
diluted (as described in the Methods Section), followed by assaying  in duplicate in one
assay.  Results are shown in Table 2.  The overall coefficient of variation for TNT
measurement using components of the Soil Collection and Soil Extraction Kit with
analysis by the TNT RaPID Assay® was determined to be less than 12% in both cases.

Table 3 summarizes the cross-reactivity data of the TNT RaPID Assay for various
explosives and nitroaromatic compounds.  The percent cross-reactivity was determined as
the amount of analog required to achieve 50% B/Bo.  The specificity of the antibody used
allows for the detection of TNT and various nitroaromatic compounds.  Many non-
structurally related organic compounds demonstrated  no reactivity at concentrations up to
10,000 ppb (data not shown).

Table 4 summarizes the accuracy of the TNT RaPID Assay in soil samples.  Ten different
soil types were fortified with TNT at 1, 5, and 10 ppm. The samples  were extracted and
diluted as described above,  followed by  analysis in  the immunoassay.   Soil recoveries
obtained  were:  97%  at  1  ppm, 107% at 5 ppm, and 115% at  10  ppm,  obtaining  an
average of 106% across the range tested.

Table 5  summarizes the accuracy  of the TNT  ELISA in  water.  Four ground water
samples were spiked with TNT at the following levels:  0.25, 0.35, 0.50, 0.75, 1.50, 2.0,
3.0,  and  4.0 ppb.  TNT  was recovered correctly in  all cases with an average  assay
recovery  of 103%.

Figure 3 illustrates the correlation, for 30 water samples fortified with TNT, between
the ELISA (y) and SW-846 Method 8330 (x) after correction for surrogate recovery.  The
regression analysis yields a correlation coefficient (r) of 0.951 and a slope of 0.98
between methods.

The correlation for nineteen field soil samples, analyzed by the ELISA method (y) and
SW-846 Method 8330 (x), is illustrated in Figure 4.  The regression analysis yields a
correlation coefficient (r) of 0.970 and a slope of 0.93 between methods.
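
The slopes and correlation coefficients quoted above are presumably ordinary
least-squares quantities.  The sketch below, with invented paired results (the individual
sample values are not tabulated here), shows how such a method comparison is calculated.

    def compare_methods(x_8330, y_elisa):
        """Least-squares slope, intercept, and Pearson r for ELISA (y) versus Method 8330 (x)."""
        n = len(x_8330)
        mx = sum(x_8330) / n
        my = sum(y_elisa) / n
        sxx = sum((x - mx) ** 2 for x in x_8330)
        syy = sum((y - my) ** 2 for y in y_elisa)
        sxy = sum((x - mx) * (y - my) for x, y in zip(x_8330, y_elisa))
        slope = sxy / sxx
        return slope, my - slope * mx, sxy / (sxx * syy) ** 0.5

    # Invented paired results (ppb) purely for illustration
    x = [0.30, 0.50, 1.00, 2.00, 3.50]
    y = [0.28, 0.52, 0.95, 2.05, 3.40]
    print(compare_methods(x, y))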

SUMMARY

This work  describes a magnetic particle-based ELISA for the detection of TNT and  its
performance characteristics in soil and water samples.  The  assay compares favorably to
SW-846 Method 8330, is faster, and eliminates the need for expensive instrumentation and
solvent disposal.  The ELISA exhibits good precision and  accuracy  which can  provide
consistent monitoring  of environmental samples.  Using this ELISA, forty (40) results
from  soil  samples  can  be  obtained in  less than   two  hours without the variability
encountered with antibody coated  tubes  and microtiter plates (e.g.  coating variability,
antibody  leaching, etc.).  This system is ideally suited  for adaptation to on-site monitoring
of TNT in water, soil, and solid waste samples.
                                             329

-------
 REFERENCES

Bookbinder, M.J.; Panosian, K.J. Correct and Incorrect Estimation of Within-day and
Between-day Variation. Clin. Chem. 1986, 32, 1734-1737.

Engvall, E. Enzyme Immunoassay ELISA and EMIT. In Methods in Enzymology; Van
Vunakis, H.; Langone, J.J., Eds.; Academic Press: New York, 1980; pp 419-439.

 Hammock, B.D.; Mumma, R.O. Potential of Immunochemical Technology for Pesticide
 Analysis. In Pesticide Identification at the Residue Level, Gould, R.F., Ed.; ACS
 Symposium Series, Vol. 136; American Chemical Society: Washington, DC, 1980; pp
 321-352.

 Harrison, R.O.; Braun, A.L.; Gee, S.J.; O'Brien, D.J.; Hammock, B.D. Evaluation of an
 Enzyme-Linked Immunosorbent Assay (ELISA) for the Direct Analysis of Molinate
(Ordram®) in Rice Field Water. Food & Agricultural Immunology 1989, 1, 37-51.

 Hartley, W.R.; Anderson, A.C.; Abdelghani, A.A.  Separation and Determination of
Dinitrotoluene Isomers in Water by Gas Chromatography. Proceedings of University of
 Missouri's 15th Annual Conference on Trace Substances in Environmental Health.
 Columbia, MO, 1981, pp 298-302.

Itak, J.A.; Olson, E.G.; Fleeker, J.R.; Herzog, D.P. Validation of a Paramagnetic Particle-
 Based ELISA for the Quantitative Determination of Carbaryl in Water. Bulletin of
 Environmental Contamination and Toxicology  1993, 57, in press.

 Itak, J.A.; Selisker, M.Y.; Herzog, D.P. Development and Evaluation of a Magnetic
 Particle Based Enzyme Immunoassay for Aldicarb, Aldicarb Sulfone and Aldicarb
Sulfoxide. Chemosphere 1992, 24, 11-21.

 Lawruk, T.S.; Lachman, C.E.; Jourdan, S.W.; Fleeker, J.R.; Herzog, D.P.; Rubio, F.M.
 Quantification of Cyanazine in Water and Soil by a Magnetic Particle-Based ELISA. J.
Agric. Food Chem. 1993, 41(5), 747-752.

 Lawruk, T.S.; Hottenstein, C.S.; Herzog, D.P.;  Rubio, F.M. Quantification of Alachlor in
 Water by a Novel Magnetic Particle-Based ELISA. Bulletin of Environmental
 Contamination and Toxicology 1992, 48, 643-650.

 Midgley, A.R.; Niswender, G.D.; Rebar, R.W. Principles for the Assessment of Reliability
 of Radioimmunoassay Methods (Precision, Accuracy, Sensitivity, Specificity). Acta
Endocrinologica 1969, 63, 163-179.

Rubio, F.M.; Itak, J.A.; Scutellaro, A.M.; Selisker, M.Y.; Herzog, D.P. Performance
 Characteristics of a Novel Magnetic Particle-Based Enzyme-Linked Immunosorbent Assay
                                             330

-------
for the Quantitative Analysis of Atrazine and Related Triazines in Water Samples. Food &
Agricultural Immunology 1991, 3, 113-125.

Ryon, M.G.; Pal, B.C.; Talmage, S.S.; Ross, R.H.  Database Assessment of the Health
and Environmental Effects of Munition Production Waste Products, Oak Ridge National
Laboratory, Oak Ridge, TN, 1984, ORNL-6018.

U.S. EPA, Test Methods for Evaluating Solid Waste, Physical/Chemical Methods, SW-846,
3rd Edition, Final Update 1, 1992.

U.S. EPA, Office of Drinking  Water, Trinitrotoluene-Health Advisory, Washington, DC,
January 1989.
                                             331

-------
                                   Table 1

                    Precision of TNT Measurement in Water

Source:             Surface         Municipal       Ground         Surface

Replicates              5               5                5               5
Days                    5               5                5               5
N                       25              25               25              25
Mean (ppb)              0.35            0.77             2.17            3.96
%CV (within)            7.8             4.3              2.9             3.4
%CV (between)           7.8             4.4              4.8             3.1
                                          332

-------
                                    Table 2

                     Precision of TNT Measurement in Soil


Soil:                            Manhattan, KS              Pleasant Hill, NC

Sample Collection Method      weight       volume          weight       volume

Replicates                      10           10              10           10
Mean(ppm)                    0.55         0.53             0.41         0.41
%CV (total)                    8.3           8.3              10.0         11.8
                                            333

-------
                                     Table 3

                           Specificity (Cross-Reactivity)


                                 90% B/Bo    50% B/Bo    % Cross
Compound                      LDP (ppb)   ED50 (ppb)  Reactivity

TNT                                0.07          1.44           100
1,3,5-Trinitrobenzene                 0.04          2.20           65.5
2,4-Dinitroaniline                    0.10          22            6.5
Tetryl                               0.10          30            4.8
2,4-Dinitrotoluene                    1.0           35            4.1
2-Amino-4,6-dinitrotoluene            0.25          45            3.2
1,3-Dinitrobenzene                   2.38          83            1.7
4-Amino-2,6-dinitrotoluene            0.10          98            1.5
2,6-Dinitrotoluene                    100           3880          0.04
2,4-Dinitrotoluene                    7.95          >10,000       <0.1
3-Nitrotoluene                       155           >10,000       <0.1
RDX                               702           > 10,000       <0.1
1,2-Dinitrobenzene                   1000         >10,000       <0.1
Dinoseb                             1000         >10,000       <0.1
4-Nitrotoluene                       1160         >10,000       <0.1
2-Nitrotoluene                       2320         >10,000       <0.1
Nitrobenzene                        3410         >10,000       <0.1
HMX                               4520         >10,000       <0.1

B/Bo = Absorbance at 450 nm observed for a sample or standard divided by the absorbance
at the zero standard.

% Cross-Reactivity = Concentration of TNT exhibiting 50% inhibition (1.44 ppb) divided
by the concentration of the compound exhibiting 50% inhibition, x 100.
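
Applying this definition to the ED50 values in Table 3 reproduces the tabulated
cross-reactivities.  A minimal sketch:

    def percent_cross_reactivity(tnt_ed50_ppb, analog_ed50_ppb):
        """Percent cross-reactivity from the 50% inhibition concentrations in Table 3."""
        return 100.0 * tnt_ed50_ppb / analog_ed50_ppb

    print(round(percent_cross_reactivity(1.44, 2.20), 1))    # 1,3,5-trinitrobenzene: 65.5
    print(round(percent_cross_reactivity(1.44, 3880.0), 2))  # 2,6-dinitrotoluene: 0.04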
                                            334

-------
                                      Table 4

              Accuracy of the TNT RaPID Assay in Different Soil Types

TNT Added:              1 ppm TNT      5 ppm TNT      10 ppm TNT

Mean Observed (ppm)     0.97           5.34           10.2
n                       10             10             10
%CV                     11.1           8.1            8.1
% Recovery              97             107            102

Soil types analyzed: Beardon, ND (clay loam); Churchville, PA (sandy loam); Glen Cove, NY
(loam); Holland, PA (clay loam); Levittown, PA (silt loam); Munin (clay loam); Princeton, NJ
(clay loam); Pt. Pleasant, NJ (sand); Tennessee (sandy loam); Wisconsin (loam).

Mean Observed = concentration obtained after fortification with the listed concentrations of TNT.
                                               335

-------
                   Table 5

   Accuracy of the TNT RaPID Assay in Water

Added        Observed      SD       Recovery
(ppb)          (ppb)      (ppb)        (%)

+0.25           0.24      0.03         96
+0.35           0.35      0.04        100
+0.50           0.52      0.06        104
+0.75           0.79      0.05        105
+1.50           1.65      0.10        110
+2.00           2.11      0.11        105
+3.00           3.25      0.16        108
+4.00           3.95      0.16         99

Average                               103
                           336

-------
[Figure 1 diagram: soil sample collection device (soil collector), showing the screw cap
(top), plunger and plunger rod (bottom), Luer cone, and Luer cap.]
Figure 1.  Diagram of soil collector used to collect and extract soil samples.
                                           337

-------
[Figure 2 plot: TNT RaPID Assay dose response curve; %B/Bo (approximately 20 to 90%)
versus TNT concentration (0.1 to 10.0 ppb, logarithmic scale).]
Figure 2.  TNT RaPID Assay dose response curve.  Each point represents the mean of 50

determinations. Vertical bars indicate +/- 2 SD about the mean.
                                       338

-------
[Figure 3 plot: TNT method comparison for water samples; RaPID Assay result versus
SW-846 Method 8330 (ppb).]
Figure 3.  Correlation between TNT concentration as determined by the RaPID Assay
and SW-846 Method 8330 (corrected for surrogate recovery) in water samples, n = 30, r
= 0.951, y = 0.98x-0.02.
                                      339

-------
[Figure 4 plot: TNT method comparison for soil samples; RaPID Assay result versus
SW-846 Method 8330 (ppm), both axes on logarithmic scales.]
Figure 4.   Correlation between TNT concentration as determined by the RaPID Assay
and SW-846 Method 8330 in soil samples, n = 19, r = 0.970, y = 0.93x + 0.33.
                                     340

-------
                                                                                          51


                  An  Immunoassay  for  2,4,5-TP  (Silvex)  in  Soil

                     Jonathan Matt, Titan Fan, Yichun Xu and Brian Skoczenski
                              Millipore Corp./ ImmunoSystems Inc.
                           4 Washington Ave., Scarborough, Me 04074


2-(2,4,5-Trichlorophenoxy)propionic acid, known as 2,4,5-TP or Silvex, is a herbicide used for the
control of trees and shrubs. 2,4,5-TP is a hormone-type herbicide that is absorbed by leaves and translocated.

An immunoassay has been developed that is sensitive to 2,4,5-TP in soil. The EnviroGard™ Silvex in
Soil Kit uses clear plastic tubes coated on the inside with antibodies raised to 2,4,5-TP. Extracts from soil
samples are added to the tubes along with a 2,4,5-trichlorophenoxyacetic acid (2,4,5-T)-enzyme
conjugate competitor. This competitor is a 2,4,5-T molecule with an enzyme attached that competes with
contaminants in the sample extract for a limited number of antibody binding sites. After a short incubation
the competition is stopped by washing the tubes with tap water. The amount of antibody-bound enzyme
conjugate competitor remaining in the tube is inversely proportional to the amount of contamination in the
soil.  Next, a clear solution of chromogenic (color-producing) substrate is added to the tubes.  The blue
color that develops is also inversely proportional to the contamination in the soil. The reaction is stopped
after 5 minutes, turning the product from blue to yellow, and the tubes are read visually or in a portable
photometer at 450 nm.

Soil extraction is accomplished by adding 10 mL of an extraction buffer to 5 grams of soil and shaking for
2 minutes. The extract is filtered and then used in the assay.

The assay is cross-reactive to 2,4,5-T, a herbicide applied post-emergence as a foliage spray. It
is absorbed through roots, foliage, and bark and, along with 2,4-D, was a major active ingredient in Agent
Orange.  Production of 2,4,5-T has been associated with significant levels of dioxins as a by-product, and
its production has been discontinued.

The detection limit of the assay for 2,4,5-TP is approximately 0.1 ppm and for 2,4,5-T approximately
0.02 ppm. Data will be presented on  negative and fortified soil samples and cross-reactivity.

The Envirogard™ Silvex in Soil Kit provides a fast, convenient and field portable method of screening
soils for contamination with 2,4,5-TP.
                                              341

-------
 52


 DETERMINATION OF SEMIVOLATILE ORGANIC COMPOUNDS
           IN SOILS USING THERMAL EXTRACTION-GAS
   CHROMATOGRAPHY-PHOTOIONIZATION-ELECTROLYTIC
                       CONDUCTIVITY DETECTION

Ronald D. Snelling. Product Line Manager,
OI Analytical, P.O. Box 9010, College Station, Texas 77842


INTRODUCTION

Sample preparation is generally the most time consuming step in the analysis of semivolatile or-
ganic compounds.  The extraction and concentration of the semivolatile organic compounds has
traditionally been a complex, multistep procedure. The extraction of semivolatile organics from a
solid sample matrix involves an initial estimate of concentration so that an appropriate mass of the
sample is extracted. The sample is placed into the extraction apparatus and is then sonicated or
Soxhlet extracted using an organic solvent or a mixture of organic solvents. The solid matrix is
extracted for the amount of time specified in the extraction method, then the extracted liquid is
collected and the volume is reduced. Any necessary sample cleanup procedures such as gel perme-
ation chromatography or column chromatography are performed at this time. In many cases, the
extraction solvent is then exchanged for hexane. The extract may now be injected
into the gas chromatograph or other analytical instrument.

It can be seen that there are several disadvantages to using the above general procedure. The proce-
dure is very labor intensive, and thus is quite expensive. Several of the most common and efficient
extraction solvents  are on the USEPA list of compounds whose use is  to be minimized. Solvent use
costs are steadily rising, with increased record keeping and disposal costs adding to the expense of
using large volumes of solvents. One problem with the conventional extractions which is not gener-
ally recognized is the skill required to reproducibly extract samples. The least experienced person-
nel in the laboratory are generally assigned to this task. While extraction of large numbers of samples
can be tedious, reproducible extractions are a key factor in reproducible final results.

Several methods can be used to modify the conventional extraction procedure to increase efficiency
and reproducibility. One approach is to automate the current extraction process using robotics to
increase the reproducibility of the mechanical phases of the process. This approach is very expen-
sive and does not address the expenses in  the use of large volumes of solvent. Automation of the
current extraction process  is not feasible for the majority of laboratories performing extractions for
semivolatile analysis.

A more promising approach to the improvement of semivolatile organic analysis involves the use of
alternative extraction technology. The most commonly used alternative to solvent extraction is
extraction with a supercritical fluid, usually carbon dioxide or carbon  dioxide with a small percent-
age of an organic solvent added as a modifier to increase extraction efficiency. Supercritical fluid
extraction has suffered from matrix effects, and there is usually a lengthy method development
process required before reproducible, efficient extraction parameters are determined.
                                               342

-------
Another technology which can replace conventional solvent extraction is thermal extraction. This
technique involves heating the sample in a flow of an inert gas to volatilize the organics from the
solid matrix. By controlling the temperature of the extraction cell, compounds of a specific volatil-
ity range may be selectively extracted from the sample matrix.

EXPERIMENTAL

The system used for this work was a Ruska Instrument Corporation ThermEx Inlet System inter-
faced to a Hewlett Packard 5890 Series  II gas chromatograph. The gas chromatograph was fitted
with an OI Analytical PID/ELCD tandem detector. The column used for this work was a J&W 30 m
x 0.32 mm DB-5 with a 0.25 micron phase coating. The oven was held at 35°C for 6 minutes,
ramped to 310°C at 8°C per minute, and held at 310°C for 2 minutes. The sample in the ThermEx
Inlet System was held at 60°C for 1 minute, then ramped to 340°C at 35°C per minute, and held at
340°C for 2 minutes.
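
As a rough throughput check, the run times implied by these two temperature programs can be
computed from the holds and ramp rates. The sketch below uses only the values quoted above and
is not part of the original method.

    def program_minutes(start_c, segments):
        """Total time of a temperature program.

        segments: (ramp_c_per_min, target_c, hold_min) tuples; use ramp None
        for the initial isothermal hold."""
        total, temp = 0.0, start_c
        for ramp, target, hold in segments:
            if ramp:
                total += (target - temp) / ramp
                temp = target
            total += hold
        return total

    oven = program_minutes(35, [(None, 35, 6), (8, 310, 2)])     # GC oven program
    inlet = program_minutes(60, [(None, 60, 1), (35, 340, 2)])   # ThermEx inlet program
    print(round(oven, 1), round(inlet, 1))   # about 42.4 and 11.0 minutes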

The samples for analysis were weighed  into fused quartz crucibles before analysis. The crucibles
had a quartz frit in the base and were fitted with a lid constructed from fused quartz frits. The sample
size was  approximately 50 milligrams for all samples.

The samples used in this work were clean soil which had been analyzed and was free of detectable
semivolatile organic compounds, a subsample of this soil spiked with a standard mix of 14 poly-
chlorinated biphenyl congeners, a subsample of the polychlorinated biphenyl soil spiked with diesel
fuel, and a subsample of the clean soil spiked with a subset of the USEPA SW-846 Method 8270
analyte list. These samples were  aliquots of mixtures used for the verification of thermal extraction
technology for Draft Method 8275A, a gas chromatography-mass spectrometry method for the
quantitative determination of polycyclic aromatic hydrocarbons and polychlorinated biphenyls in
soils and sediments.

RESULTS & DISCUSSION

Thermal extraction-gas chromatography-mass spectrometry has been accepted by the Organic Meth-
ods Work Group of the USEPA as a viable technique for the quantitative determination of polycy-
clic aromatic hydrocarbons and  polychlorinated biphenyls. Many analysts prefer to use detectors
rather than mass spectrometers for a variety of reasons, including cost, complexity, and ruggedness.
The issue of reliability becomes  very important in mobile laboratory applications where it is often
critical to have rapid sample turnaround. The use of selective detectors on a gas chromatograph
interfaced to  a thermal extraction  unit eliminates many of the concerns analysts have regarding
mass spectrometer use.

The samples described above were  analyzed to determine the feasibility of using selective detectors
instead of a mass spectrometer for semivolatile organic compound determination. Figures 1  and 2
are chromatograms of the PCB standard at a concentration of 5 ppm of each congener. Figure 1 is
a photoionization detector trace of the standard spiked soil. The PCB congeners are in a retention
                                                 343

-------
time window from 27 to 38 minutes. Even though the peaks are distinct for the congeners with
retention times of approximately 30 minutes, the signal is noisy and has a high baseline. The smaller
peaks may be compounds other than PCBs which are present in concentrations too low to detect
with a mass spectrometer.
                   Figure 1.  PID Trace of 5 ppm PCB Standard
Figure 2 is the electrolytic conductivity detector trace for the same sample run. The ELCD is very
selective for halogenated compounds, and the trace shows responses only for the PCBs and any
trace halogenated components which may be present in the soil. The chromatogram is much simpler,
making quantitation and congener identification easier with ELCD detection than with PID detection.
                   Figure 2.  ELCD Trace of 5 ppm PCB Standard
                                                 344

-------
Figure 3 illustrates the sensitivity of selective detectors interfaced to the thermal extraction inlet.
The PCB congeners in this soil sample were present at a concentration of 0.5 ppm of each compo-
nent. All fourteen congeners in the mixture are easily detected at this level, which is lower than the
method detection limit specified in Method 8275A. The peak broadening seen in the late eluting
peaks is due to the temperature  limitations of the PID in the tandem configuration. A stand-alone
ELCD will have a better peak shape, increased peak height, and improved detection limits.
                    Figure 3.  ELCD Trace of 0.5 ppm PCB Standard

Figures 4 and 5 show an analysis of a sample which is more representative of the types usually
analyzed. The soil for this sample was spiked with a high concentration of diesel fuel, giving a high
hydrocarbon background.  The PID trace, Figure 4, is similar to the trace of a GC-MS run of the
same type of sample. In both cases, the analyte peaks are masked by the high background signal.
Extracted ion chromatograms from a mass spectrometer run should allow  the integration and
quantitation of the PCB congeners, but the high  background level of hydrocarbons may affect
ionization efficiency and cause inaccurate quantitation.
                    Figure 4.  PID Trace of 10 ppm PCB Standard On Soil Contaminated
                            With Diesel Fuel
                                                  345

-------
Figure 5 illustrates the value of selective detectors in the analysis of a complex sample matrix. The
high levels of hydrocarbon present in the sample are not detected by the ELCD, so the chromato-
gram shows only the PCB congeners. This chromatogram may easily be integrated and the com-
pounds quantitated accurately. The use of a selective detector for the analysis of this sample elimi-
nated the need for sample cleanup.
                    Figure 5.  ELCD Trace of 10 ppm PCB Standard On Soil
                            Contaminated With Diesel Fuel
An important point is the speed of the extraction and analysis of the PCB congeners when using the
thermal extraction-gas chromatography-mass spectrometry system.  The system is configured so
that the extracted analytes are transferred directly to a gas chromatograph injection port, and the
extraction and analysis are integrated. The total extraction and analysis time for the PCB congeners
is approximately 40 minutes. In cases where rapid sample turnaround is essential, this system will
provide quantitative data in a very short time.

The use of selective detectors improves detection limits for halogenated compounds compared to
mass spectrometers. A mass spectrometer operates at low flow rates, but a thermal extraction unit
requires a relatively  high flow rate, generally 20 to 40 mL/min, for an efficient transfer of the
semivolatile organic compounds to the gas chromatograph injection port. This leads to using a split
ratio of approximately 30 to 1 for most thermal extraction-gas chromatography-mass spectrometry
work. Selective detectors can handle much higher flow rates, and with the use of megabore columns
can be operated in a splitless mode.  Selective detectors can also have inherently lower detection
limits for classes  of organic compounds than mass spectrometers. These factors allow detection
limits to be significantly lower for selective detectors than  for mass spectrometers.
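
To make the split arithmetic concrete: with a transfer flow of about 30 mL/min and a typical
capillary column flow of roughly 1 mL/min (the column flow is an assumed value, not a figure from
the text), the split ratio works out to approximately 30 to 1, as quoted above.

    def split_ratio(transfer_flow_ml_min, column_flow_ml_min=1.0):
        """Approximate split ratio: transfer flow divided by the flow entering the column.
        The default 1 mL/min column flow is an assumed typical value."""
        return transfer_flow_ml_min / column_flow_ml_min

    print(round(split_ratio(30.0)))   # about 30, i.e. the roughly 30:1 split cited above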

CONCLUSION

The use of selective detectors is  a viable alternative to mass spectrometry when using  a thermal
extraction-gas chromatography system for the analysis of semivolatile organic compounds. When
the sample matrix is complex the use of selective detectors may be preferable as only the compound
classes of interest are detected. Selective detectors may also improve the detection limits for the
analytes of interest. A thermal extraction system configured with selective detectors on the gas
chromatographs addresses the issues of cost, reliability, and complexity while eliminating solvent
extraction.
                                                  346

-------
                                                                              53
                Congener-Specific Separations of PCBs-
             Extraction by SPME, Separation by Capillary GC,
                     and Detection by ECD and MS
Cole Woolley. Senior Research Chemist, Gas Separations-Capillary GC,
Venkat Mani, Senior Research Chemist, Sample Handling-SPME,
Robert Shirey, Senior Research Chemist, Sample Handling-SPME,
James Desorcie, Research Manager, Gas Separations
Supelco, Inc., Supelco Park,  Bellefonte, PA USA 16823-0048
ABSTRACT

A variety of capillary GC columns, along with a new column containing a bonded
octylmethyl polysiloxane stationary phase (SPB-Octyl), were evaluated for their
propensity to separate PCB congeners.  Separations of all 209 PCB
congeners on columns containing 5%, 20%, 35%, and 50% phenyl polysiloxane
phases, as well as polydimethylsiloxane, were included in the study. Emphasis
was placed on the separation of the low-level, toxic PCB congeners belonging to
the classification of "coplanar PCBs".  The elution order of PCB congeners was
affected by the column polarity and polarizability.

Extracting PCB congeners from soil by using solid phase microextraction(SPME)
proved to be successful.  SPME is a solventless alternative to Soxhlet extraction.
By exposing the 1 cm-long polydimethylsiloxane-coated fiber at the tip of the
SPME device to the headspace above the soil, PCB congeners were extracted
and concentrated without using solvents. Extractions usually required 15-30
minutes, and the SPME fibers were reused 50-200 times. The PCB congeners
were then desorbed in the GC injection port, from which they were transferred to
the column.
INTRODUCTION

The  analysis of PCB congeners  is challenging in several respects.  Synthetic
PCB mixtures are commonly retained in soil, sludge, clay and airborne particles,
but are quite insoluble in water. PCBs bioaccumulate in food chains. They can
be traced from  soil and air to plant life, from plants to herbivores,  and from
herbivores to various levels of carnivores. Both aquatic and terrestrial animals
bioaccumulate PCBs, mainly in fat tissue and vital organs. The most toxic PCB
congeners are in low abundance  in synthetic PCB mixtures, but exist in higher
concentrations in incinerator fly-ash. Escalating the analytical challenge  is the
large number of  possible  PCB  congeners,  209,  with  as  many as  10-15
congeners eluting per minute from a high resolution GC column. The  sheer
                                        347

-------
complexity has resulted in quantitation  often being reported  for two or more
coeluting congeners.

PCBs  have  received  considerable  regulatory  attention  because the  high
toxicities of a dozen  individual PCB congeners are similar to the toxicities of
several  dioxins. The key to the toxicity of these PCB and  dioxin congeners to
mammals is found in their chemical structures (Figure A). The two structures on
the left side of Figure A represent PCB congeners.  Substitution of chloro-groups
in the ortho positions has a marked effect on the free rotation of the coupled
phenyl  rings.   PCB  77 (3,3',4,4'-TCB)  is highly toxic and  is found  in  low
abundance in synthetic PCB mixtures, while PCB 110 (2,3,3',4',6-PCB) is quite
abundant, but relatively non-toxic. The key to the difference in their toxicities is in
their chemical structures (Figure B). The most toxic PCB  congeners have
chloro-groups in the 3,4,4' positions, with zero or one  chloro-group in the 2 or
ortho position.  In PCB 77 there is unrestricted rotation of the bond that links the
phenyl groups. Therefore, the phenyl rings of PCB 77 can achieve geometries
that  are essentially coplanar.  The most toxic dioxins and furans contain  the
common 2,3,7,8-tetrachloro-substitution  with aromatic  rings that are coplanar.
On the other hand, the phenyl  rings of PCB 110 are restricted in rotation, due to
the  chloro-substitution in  the two  ortho  positions. PCB  110  is  limited  to
noncoplanar conformations.
EXPERIMENTAL

Five capillary GC columns were evaluated for PCB congener separations. The
separation of all 209 PCB congeners was determined on polydimethylsiloxane
(SPB-1)  and on 5% (SPB-5), 20%  (SPB-20),  and 50%  (SPB-50)  phenyl
polysiloxane bonded phases. A new capillary GC column, SPB-Octyl, containing
50% n-octyl groups on a polysiloxane backbone, was evaluated based on the
previous positive results [1-4]. ECD and MSD detection with splitless injections
(300°C) on an HP 5890 were used in this work. The columns were run with helium
carrier gas at 37.5 cm/s at 40°C. Oven temperatures were programmed from
75°C (2 min) to 150°C at 15°C/min, then to 280°C at 2.5°C/min.  The last
congener, PCB 209, always eluted before 280°C was reached during the
temperature program.  Mixtures of PCB congeners, at 40 pg/uL per congener for ECD
and 4 ng/uL for MSD, as well as Aroclor mixtures at 400 ppb for ECD and 40 ppm
for MSD, were utilized.  The limits of detection were approximately
0.5 ppb/congener by ECD and 10 ppb/congener by MSD.
RESULTS AND DISCUSSION

The ECD chromatogram in Figure C illustrates the complexity of PCB congener
separations, with nearly 100 congeners eluting within 8 minutes.  The brackets
                                       348

-------
below the chromatogram mark the elution ranges of PCB homologs: trichloro-,
tetrachloro-, pentachloro-, hexachloro-,  heptachloro-  and octachlorobiphenyls.
There are 42 possible tetrachloro-, 46 pentachloro- and 42 hexachlorobiphenyl
congeners.  This  complexity  leads to  coelutions  of  PCB  homologs and
overlapping of elution ranges for PCBs of different homologs (e.g., pentachloro-
and hexachlorobiphenyls).

With a mass selective detector (MS, MSD or ion trap), the congeners of each
chloro-homolog can be extracted from the total ion chromatogram. In Figure D,
pentachlorobiphenyls (m/z 326) and hexachlorobiphenyls (m/z 360) are stacked
separately, thereby overcoming the overlapping of elution ranges.
For  instance, partially  coeluting PCB  118  and  PCB 132  can  be correctly
identified by retention time or retention index and accurately quantified by using
extracted  ion plots.  ECD  is  more  sensitive  to  PCB congeners,  but  mass
spectrometric detection  is more selective and enhances the chromatographic
separation.

The effect of chloro-substitution in the ortho positions can be classified into six
categories (Figure E). Non-ortho and mono-ortho-substituted PCB congeners
can achieve coplanar conformations since the phenyl groups are free to rotate.
Rotation about the common bond of di-ortho-substituted PCB congeners
diminishes, due to steric hindrance. The number of conformations is limited
further by tri-ortho and tetra-ortho substitution of chloro-groups.  Achieving
coplanarity with these congeners is impossible since two chloro-groups repulse
each other as the phenyl groups approach coplanarity. With no chloro-groups in
the ortho positions, PCB 77 has unrestricted rotation and is capable of coplanar
conformations. On the other hand, PCB 95 contains three chloro-substituents in
ortho positions, thereby reducing rotational freedom and limiting phenyl group
conformations to nearly perpendicular geometries. An interesting note is that the
aromatic rings of the most toxic chloro-substituted aromatics (dibenzodioxins,
dibenzofurans and naphthalenes) are rigid and planar (Figure A), whereas the
most toxic PCB congeners are flexible and coplanar.

These classes of ortho-substituted PCBs overlap to differing degrees on the
capillary columns we studied.  With the 5% phenyl SPB-5 column (Figure F)
there was some overlap of the di-ortho (2,6 and 2,2') with the tri-ortho (2,2',6)-
substituted pentachlorobiphenyls. With SPB-20 the overlap increased, because
the increased phenyl content of the column stationary phase widened the elution
range of the ortho-substitution classes.

The overlap between ortho-substitution classes of PCBs increases with
increasing phenyl substitution in the column stationary phase. With the 50%
phenyl SPB-50 column (Figure G), the elution zones are approximately twice as
wide as those for the SPB-5 column. The basis of the widening of the elution zones
is the increase in average dipole-induced dipole interactions between the
                                         349

-------
polarizable phenyl-containing  phases (SPB-5, SPB-20,  and SPB-50) and the
moderately polar PCB congeners.

With SPB-Octyl columns, the elution zones for the ortho-substitution classes are
narrower and well separated from each other (Figure H). The brackets help to
show that the noncoplanar congeners (e.g., tetra-ortho 2,2',6,6') elute first and
the flexible coplanar congeners (e.g., non-ortho) elute last for each group of
chloro-homologs. One of the most toxic PCB congeners, the non-ortho-substituted,
coplanar PCB 126 (3,3',4,4',5-PeCB), elutes well separated and last in the group
of pentachlorobiphenyls.

The same pattern is evident for the hexachlorobiphenyls (Figure I) separated on
the SPB-Octyl column. Another of the most toxic PCB congeners,  PCB 169
(3,3',4,4',5,5'-HxCB), also a non-ortho-substituted, coplanar congener, elutes
last among the hexachloro homologs.

The column stationary phase has a significant effect on the elution order of PCB
congeners. The enlarged  segments of the chromatographic separations of
heptachlorobiphenyls 170 (2,2',3,3',4,4',5-HpCB) and  190 (2,3,3',4,4',5,6-HpCB)
depicted in Figure J demonstrate the effect that increasing phenyl content in the
column  phase has on  elution order.  PCB 190 has 2  chloro-groups in the ortho
positions,  while PCB 170 contains only one. PCB 170 elutes first on the SPB-1
and  SPB-5 columns, but is retained more on SPB-20 and SPB-50. SPB-Octyl
has the  same elution order as SPB-1, but provides resolution and retention times
similar to  SPB-50. As an illustration of the change  in elution order,  resolution
values for 170/190 are positive for SPB-Octyl, SPB-1  and  SPB-5, but negative
for SPB-20 and SPB-50.

The resolution values for a number of closely eluting or coeluting congeners on
SPB-5 are listed for the five column phases evaluated (Figures K and L). The
sets of doublets shown in these figures have the same number of chloro-
substituents; Type A pairs belong to the same ortho-substitution class, while
Type B pairs belong to different substitution classes. PCB 31 and PCB 28
separate on SPB-Octyl, but not on the other phases. Type A separations are the
most difficult because the chemical structures and boiling points are very similar.
Type B separations are more easily facilitated by the differences in ortho
substitution.

Type C separations (Figure L) involve congeners from different chloro-homologs
and different ortho-substitution classes. The changes in resolution values from
SPB-1 up to SPB-50 were not higher than 3 units, while from SPB-1 to SPB-
Octyl changes as high as 10-14 units were achieved. The accentuated resolution
using the SPB-Octyl column indicates a unique selectivity of this column for PCB
congeners.
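
The resolution values in Figures J-L are presumably of the standard chromatographic
form Rs = 2(t2 - t1)/(w1 + w2); the generic sketch below, with invented retention times
and peak widths, is included only to make the sign convention and magnitude of Rs
concrete.

    def resolution(t1_min, t2_min, w1_min, w2_min):
        """Chromatographic resolution from retention times and baseline peak widths.

        A negative value indicates that the pair elutes in the reverse order
        relative to the reference order (as reported for PCB 170/190 on SPB-50)."""
        return 2.0 * (t2_min - t1_min) / (w1_min + w2_min)

    # Invented peak pair: 0.15 min apart with 0.08 min baseline widths
    print(round(resolution(42.10, 42.25, 0.08, 0.08), 1))   # 1.9, near-baseline separation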
                                         350

-------
Solid  phase microextraction (SPME) is a means  of rapidly  extracting PCB
congeners from soil.  The process of microextracting and concentrating organic
compounds from  a sample  of  water, soil or sludge by  SPME is shown  in
Figure M.  By exposing the 1 cm-long polysiloxane-coated fiber at the tip of the
SPME device to the headspace above the soil, one can extract and concentrate
PCB  congeners  without using  solvents.  Extractions  usually  require 15-30
minutes.  The PCBs then are desorbed  in the  injection port, where they are
transferred to the column.

The ECD chromatogram in Figure N depicts the SPME-extracted organics from
a stream sediment collected downstream from a major industrial site where
transformer oils accidentally leaked into the stream more than 10 years ago. The
extracted PCB profile is nearly identical to that of Aroclor 1242, except for the
increased abundance of several di- and trichlorobiphenyls. To give an idea of the
extraction levels, the peak representing PCB 44/65 was at 700 parts per trillion
and PCB 105 was at 50 parts per trillion. With SPME extraction, levels of less
than 5 parts per trillion are detectable by ECD with an extraction time of only
60 minutes.
CONCLUSIONS

From this work, one can conclude that the SPB-50 column phase exhibited the
strongest dipole-induced  dipole  interactions  with  PCB  congeners.    The
SPB-Octyl column was shown to have unique separation characteristics. The
elution orders of PCB congeners that typically coelute on SPB-5 columns were
opposite for the SPB-Octyl and SPB-50 columns. Congeners of the same chloro-
homolog elute in discrete ranges, as a function of their chloro-substitution in the
four ortho positions.  The noncoplanar tetra-ortho congeners eluted first
and the non-ortho congeners eluted last for each series of chloro-homologs.
SPB-Octyl and  SPB-1  columns provide an excellent dual-column system for
ECD, with the SPB-Octyl/SPB-50 combination also showing promise.  Finally,
PCB congeners can  be extracted  from  soil by solid  phase  microextraction
(SPME), down to parts per trillion, without the need for solvents.
                                         351

-------
Figure A. Chemical structure of toxic halogenated aromatics.
[Figure A structures: "Toxic Halogenated Aromatics"; PCB positional nomenclature
(ortho/meta/para positions), 3,3',4,4'-TCB, 2,3,7,8-TCDD, and 2,3,7,8-TCDF.]
Figure B. List of toxic equivalence factors (TEF) for dioxin-like PCB
          congeners.
                         Toxic, Dioxin-Like PCBs

                                                               Toxic Equivalency
Congener Type    IUPAC No.    Structure                        Factor (TEF)

Non-ortho        77           3,3',4,4'-TCB                    0.0005
                 126          3,3',4,4',5-PeCB                 0.1
                 169          3,3',4,4',5,5'-HxCB              0.01
Mono-ortho       105          2,3,3',4,4'-PeCB                 0.0001
                 114          2,3,4,4',5-PeCB                  0.0005
                 118          2,3',4,4',5-PeCB                 0.0001
                 123          2',3,4,4',5-PeCB                 0.0001
                 156          2,3,3',4,4',5-HxCB               0.0005
                 157          2,3,3',4,4',5'-HxCB              0.0005
                 167          2,3',4,4',5,5'-HxCB              0.00001
                 189          2,3,3',4,4',5,5'-HpCB            0.0001
Di-ortho         170          2,2',3,3',4,4',5-HpCB            0.0001
                 180          2,2',3,4,4',5,5'-HpCB            0.00001

Source: U.G. Ahlborg et al., Chemosphere 1994, 28(6), 1049-1067.
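
TEFs such as these are normally used to express a congener mixture as a single
2,3,7,8-TCDD toxic equivalent (TEQ), the sum over congeners of concentration times
TEF. The sketch below uses the Figure B factors with invented concentrations and is
included only as an illustration of that calculation.

    # TEFs for the dioxin-like PCB congeners listed in Figure B
    TEF = {77: 0.0005, 126: 0.1, 169: 0.01, 105: 0.0001, 114: 0.0005,
           118: 0.0001, 123: 0.0001, 156: 0.0005, 157: 0.0005,
           167: 0.00001, 189: 0.0001, 170: 0.0001, 180: 0.00001}

    def teq(congener_conc):
        """Toxic equivalent of a mixture: sum of concentration x TEF per congener."""
        return sum(conc * TEF[iupac] for iupac, conc in congener_conc.items())

    # Invented congener concentrations (same units as the desired TEQ, e.g. ppb)
    sample = {126: 0.02, 118: 1.5, 105: 0.8, 170: 2.0}
    print(teq(sample))   # 2,3,7,8-TCDD equivalents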
                                        352

-------
Figure C. Capillary ECD chromatogram of Aroclor 1254.
[Figure C plot: "Complexity of Congener-Specific Separations"; ECD chromatogram of
Aroclor 1254 (400 ppb total, splitless injection) on the SPB-5 column.]
Figure D.  Selected ion chromatograms for pentachlorobiphenyls and
          hexachlorobiphenyls.
[Figure D plot: "Separation by MS Extracted Ions"; Aroclor 1254 on the SPB-Octyl column,
with stacked extracted-ion chromatograms for the pentachlorobiphenyls (m/z 326) and
hexachlorobiphenyls (m/z 360).]

-------
Figure E. Effect that chloro-substitution in the ortho positions has on the
          coplanarity of PCB congeners.
[Figure E structures: "Order of Decreasing Rotational Freedom (substitution in ortho
positions)"; from the coplanar non-ortho congener through 2,2',6-tri-ortho and
2,2',6,6'-tetra-ortho substitution.]
Figure F. Separation of ortho-substituted PCBs on the SPB-5 column.
[Figure F plot: "Effect of ortho-Substitution: SPB-5"; pentachlorobiphenyls (m/z 326) of
Aroclor 1254, with brackets marking the ortho-substitution classes.]
-------
Figure G. Separation of ortho-substituted PCBs on the SPB-50 column.
[Figure G plot: "Effect of ortho-Substitution: SPB-50"; pentachlorobiphenyls (m/z 326) of
Aroclor 1254, with brackets marking the ortho-substitution classes.]
Figure H. Separation of ortho-substituted PCBs on the SPB-Octyl column.
[Figure H plot: "Effect of ortho-Substitution: SPB-Octyl"; pentachlorobiphenyls (m/z 326)
of Aroclor 1254, with brackets marking the ortho-substitution classes.]
                                    355

-------
Figure I. Separation of ortho-substituted hexachlorobiphenyls on the SPB-Octyl column.
[Figure I plot: "Effect of ortho-Substitution: SPB-Octyl"; hexachlorobiphenyls (m/z 360)
of Aroclor 1254, with brackets marking the ortho-substitution classes.]
Figure J. Effect of column phase on elution order and resolution (Rs) of
         PCB congeners.
[Figure J plot: "Column Phase Affects Elution Order"; enlarged chromatographic segments
showing the resolution of PCB 170/190 on the SPB-Octyl and SPB-50 columns
(Rs = -3.7 on SPB-50).]
                                      356

-------
Figure K. Resolution of PCB congeners with the same number of chloro-
substituents, but different ortho-substitution.
                        Resolution of PCB Congeners

Type A: same total Cl, same ortho-substitution

PCB     Structure               SPB-Octyl   SPB-1    SPB-5    SPB-20   SPB-50
31      2,4',5
28      2,4,4'                  2.4         (0.8)    (0.8)    (0.7)    (0.5)
70      2,3',4',5
66      2,3',4,4'               4.6         1.7      2.1      3.9      3.5

Type B: same total Cl, different ortho-substitution

170     2,2',3,3',4,4',5
190     2,3,3',4,4',5,6         5.1         1.8      (0.8)    -1.7     -3.7
138     2,2',3,4,4',5'
158     2,3,3',4,4',6           3.9         2.4      1.7      (0.2)    (-0.5)
Figure L. Resolution of PCB congeners with differing number of chloro-
           substituents and different ortho-substitution.
                        Resolution of PCB Congeners

Type C: different total Cl, different ortho-substitution

PCB     Structure               SPB-Octyl   SPB-1    SPB-5    SPB-20   SPB-50
110     2,3,3',4',6
 77     3,3',4,4'               6.2         -1.7     (-0.4)   (0.1)    -2.3
132     2,2',3,3',4,6'
105     2,3,3',4,4'             10.6        (-0.4)   (0.9)    (0.6)    -2.9
134     2,2',3,3',5,6
114     2,3,4,4',5              14.7        (0.3)    (1.5)    (1.4)    -2.1
149     2,2',3,4',5',6
118     2,3',4,4',5             11.3        (0.3)    (1.2)    (0.3)    -3.0
                                         357

-------
Figure M.  Extraction and desorption procedures for solid phase
             microextraction (SPME).
[Figure M diagrams: SPME extraction procedure (fiber exposed to the sample or its
headspace) and SPME desorption procedure (fiber desorbed in the GC injection port).]
Figure N.  SPME extraction of PCBs from a polluted stream sediment, separated on the
            SPB-Octyl column.
[Figure N plot: ECD chromatogram of the SPME extract of the stream sediment on the
SPB-Octyl column (30 m); 100 um PDMS SPME fiber, 60-minute headspace sampling,
desorption in the GC injection port; PCB 44/65 at approximately 700 ppt and PCB 105
at 50 ppt.]
                                              358

-------
                                                                               54
                       Rapid Separation of VOCs
            with Short Small-Diameter Capillary GC Columns
Cole Woolley. Senior Research Chemist,  Gas Separations-Capillary GC,
Robert Shirey, Senior Research Chemist, Sample Handling-SPME,
James Desorcie, Research Manager, Gas Separations,
Supelco, Inc., Supelco Park, Bellefonte, PA, USA 16823
ABSTRACT

Rapid screening of VOCs in soil, drinking water, or waste water requires a fast
extraction technique and swift separation step.  This could be accomplished by
extracting  a sample while the previous  extracted sample is being separated.
Fast analysis of environmental samples increases the throughput of data collection
at suspect contaminated sites.  Rapid screening of collected samples can help
in organizing samples for lengthier GC/MS analyses.

Solid  phase  microextraction  (SPME)  is a  fast,  solventless  alternative to
conventional sample extraction techniques. Because no solvent is injected and
the analytes are rapidly desorbed onto the column, short,  narrow-bore capillary
columns can be used.  This greatly reduces analysis time and improves
minimum detection limits, while maintaining resolution.

Three capillary columns (60m x 0.25mm ID) were initially evaluated to determine
the elution order  of  60 common VOCs listed  in  EPA Method  502.2.  Two
common columns  for separating VOCs, an SPB-624 (1.4um film) and a VOCOL
(1.5um film), along with a novel column, an SPB-Octyl, were run  under the same
chromatographic conditions.  The elution order of VOCs, column efficiency,
unique coelutions and separations,  as well as reversal of elution order  were
tabulated for each column.

Finally, SPME extractions using  100um polydimethylsiloxane fibers and 10m x
0.20mm ID capillary columns were used to obtain rapid separation  of VOCs from
EPA Method 624.  The extraction and analysis times were optimized to provide
quick sample screening by GC/FID or GC/MS in a train fashion.
INTRODUCTION

Volatile organic compounds (VOCs) are among the most common chemical
pollutants tested for in soil, sludge, drinking water, and waste water.  U.S. EPA
methods prescribe the use of thick-film capillary GC columns for separating
VOCs.  Methods 502.2, 524.2, 602, 624, 5041, 8010, 8015, 8020, 8260, and
CLP-VOA prescribe purge-and-trap sample preparation and GC separation with
                                        359

-------
30m to 105m long, 0.53mm ID thick-film capillary columns.  Current emphasis on
determining and controlling the VOC contaminants in outdoor and indoor air has
prompted methods by OSHA, NIOSH, ASTM and US EPA (e.g., TO-14).

The great number and chemical diversity of possible VOCs in air, water and soil
require capillary GC columns that are capable  of separating close to  100
compounds.  Of the 189 hazardous air pollutants (HAPs) designated in Title III of
the Clean Air Act Amendments of 1990, 15 are classified as very volatile organic
compounds (VVOC) and 82 as volatile organic compounds (VOC).  These 97
VOCs in air, the 93 VOCs cited in Method 8260 (multimedia), the 84 VOCs in
Method 524.2 for drinking water are commonly extracted with adsorbent tube or
purge-and-trap technology.  To obtain accurate identification and quantitation of
these VOCs, the capillary GC columns used must provide high separating power.
This separating power can be accomplished with columns that have a high
number of theoretical plates, high selectivity, or both.  Long 0.53mm ID columns
of 75m and 105m provide a high number of theoretical plates.  This is also accomplished with
smaller diameter columns (0.25mm ID) with shorter lengths (e.g., 10m, 30m  and
60m). Columns with specially designed bonded stationary phases provide the
polar, polarizable and  dispersive interactions  needed to separate  numerous
VOCs.  Columns with  distinctly different bonded stationary  phases provide a
means of  eluting VOCs  in different  elution orders or separating VOCs that
cannot be separated with other columns.

The  first half of this poster describes the comparison of three new 60m x
0.25mm ID capillary columns for the separation of VOCs. The elution order of 60
common VOCs on these three capillary columns was compared. The second
half  of this poster  describes the combined use of SPME  and fast GC for
screening VOCs using short, 0.20mm ID capillary columns.
EXPERIMENTAL

The capillary columns used for comparing VOC elution order were 60m x
0.25mm ID. The VOCOL column (1.5um film), the SPB-624 column (1.4um film)
and the SPB-Octyl (1.0um film) were examined with an HP 5890/5721 GC/MS
(scan m/z = 45-300) under identical pressures (25 psig) and temperature
programs (40°C/4 min - 4°C/min - 200°C/10 min). VOC standards in water and
soil were extracted with a manual solid phase microextraction holder using a
100um polydimethylsiloxane fiber. SPME extractions were obtained with the
fiber immersed in water or held in the soil headspace.  The length of the SPB-1
and VOCOL 0.20mm ID capillary columns for rapid sample/site screening was
shortened to 10m - long enough for adequate resolution, yet short enough for
rapid separation.
                                       360

-------
RESULTS AND DISCUSSION
THE SPB-624 COLUMN.  The SPB-624 capillary is a key column for separating
volatile organic compounds (VOCs) extracted from drinking water, waste water,
indoor air, outdoor air, and soil/sludge. The SPB-624 column is commonly used
in the separation of VOCs in flavor and fragrance additives as well as residual
solvents in industrial and pharmaceutical products. The SPB-624 column can be
used with purge-and-trap, automated headspace  and automated solid phase
microextraction (SPME) systems for the multimedia extraction and separation of
VOC pollutants. The SPB-624 column (Figure A) provided the highest column
efficiencies (i.e., Trennzahl separation numbers above 200 between a set of
chlorinated ethenes and a set of chlorinated benzenes) and a unique selectivity
for certain VOCs that typically coelute on VOCOL columns.  The identities of
the 60 VOCs used to compare the column elution orders are listed in Table A.

Trennzahl (Separation Number)
211   between vinyl chloride (3) and tetrachloroethene (28)
204   between chlorobenzene (32) and 1,2,3-trichlorobenzene (60)
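
The separation numbers quoted above are presumably computed with the standard
Trennzahl expression for two reference peaks,

    TZ = (t_R,2 - t_R,1) / (w_h,1 + w_h,2) - 1 ,

that is, the number of additional peaks that could be placed, with unit
resolution, between the two reference compounds (here vinyl
chloride/tetrachloroethene and chlorobenzene/1,2,3-trichlorobenzene), where w_h
denotes the peak width at half height.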

Unique VOC Separations Using SPB-624
28 & 29     tetrachloroethene & 1,3-dichloropropane
32 & 33/34  chlorobenzene & 1,1,1,2-tetrachloroethane / ethylbenzene

The  instances of  coeluting VOCs on the SPB-624 column could be resolved
using MS (extracted ions), PID/ELCD or FTIR.  However, m- and p-xylenes
(35/36) could not be resolved  with  the  SPB-624 column  with these selective
detectors.  Although the instances of partially coeluting VOCs were numerous,
all pairs could be  resolved for accurate identification and quantitation by MS  or
PID/ELCD.

Resolution of Coeluting VOCs by Selective Detection:
                                                     MS   PID/ELCD
11/12       2,2-dichloropropane/cis-1,2-dichloroethene   yes        yes
16/17       1,1-dichloropropene/carbon tetrachloride    yes        yes
35/36       m-xylene / p-xylene                        no         no
41/42       1,1,2,2-tetrachloroethane/bromobenzene    yes        yes

Partially Coeluting VOCs
18,19       benzene /1,2-dichloroethane                yes        yes
33,34       1,1,1,2-tetrachloroethane/ethylbenzene      yes        yes
37,38       o-xylene / styrene                          yes        no
51,52       1,3-dichlorobenzene/p-isopropyltoluene     yes        yes
                                        361

-------
THE VOCOL COLUMN.   The VOCOL capillary is an industry-standard GC
column for the separation of environmental VOCs.  It is widely used in purge-
and-trap with  GC-PID/ELCD and GC-MS systems.   VOCOL  columns are
commonly used in U.S. EPA Methods 502.2, 524.2, 624, 8020, TO-14, 8260 and
CLP-VOA.  The VOCOL column yielded the lowest Trennzahl values of the
three columns, yet produced the fewest coeluting VOCs due to the
designed selectivity of the stationary phase (Figure B).

Trennzahl (Separation Number)
197   between vinyl chloride (3) and tetrachloroethene (28)
130   between chlorobenzene (32) and 1,2,3-trichlorobenzene (60)

Unique VOC Separations Using VOCOL
11 & 12      2,2-dichloropropane & cis-1,2-dichloroethene
16 & 17      1,1-dichloropropene & carbon tetrachloride
37 & 38      o-xylene & styrene
41 & 42      1,1,2,2-tetrachloroethane & bromobenzene
52 & 51      p-isopropyltoluene & 1,3-dichlorobenzene
            (elution order reversed relative to SPB-624)

The VOCOL column provided superior separation of the substituted benzenes (40-
55) and of the substituted alkanes and alkenes (11-19).  Coeluting VOCs on
the VOCOL column could also be resolved using MS (extracted ions), PID/ELCD
or FT-IR (except for 35/36, m-xylene and p-xylene).

Resolution of Coeluting VOCs by Selective Detection:
            MS         PID/ELCD
18/19       benzene/1,2-dichloroethane                yes         yes
33/34       1,1,1,2-tetrachloroethane/ethylbenzene      yes         yes
35/36       m-xylene / p-xylene                        no          no

Partially Coeluting VOCs
28,29       tetrachloroethene & 1,3-dichloropropane      yes         yes
32 & 33/34   chlorobenzene & 1,1,1,2-TCA / ethylbenzene   yes         no
Relative to SPB-624, the VOCOL column reversed the elution order of a few
VOCs.  The VOCOL column has a higher average polarity than the SPB-624
column, thereby creating a stronger interaction with the longer retained VOCs.

Elution Order Reversed (relative to SPB-624) Using VOCOL
14 & 13    chloroform & bromochloromethane
23 & 22    bromodichloromethane & dibromomethane
40 & 39    isopropylbenzene & bromoform
                                       362

-------
THE SPB-OCTYL COLUMN.  The new SPB-Octyl capillary column (50% n-
octyl, 50% methyl polysiloxane) was designed for detailed separations of
petroleum hydrocarbons and PCB congeners. The  SPB-Octyl bonded phase is
less polar than polydimethylsiloxane (SPB-1), and slightly more polar than the
totally hydrocarbon, nonbonded squalane phase.  The  SPB-Octyl column has
high theoretical  plates,  even at subzero temperatures,  and increases  the
retention of aromatics and alkenes (Figure C).

Trennzahl (Separation Number)
200   between vinyl chloride (3) and tetrachloroethene (28)
165   between chlorobenzene (32) and 1,2,3-trichlorobenzene (60)

Unique VOC Separations Using SPB-Octyl
35 & 36           m-xylene & p-xylene
43 & 44           1,2,3-trichloropropane & n-propylbenzene
33 & 32 & 34       1,1,1,2-tetrachloroethane & chlorobenzene & ethylbenzene

High Resolution and Reversed Order - relative to SPB-624 and VOCOL
12 & 11           cis-1,2-dichloroethene & 2,2-dichloropropane
19 &18            1,2-dichloroethane & benzene
29 & 28           1,3-dichloropropane & tetrachloroethene
38 & 37           styrene & o-xylene

The elution order of VOCs was quite similar for VOCOL  and SPB-624 columns.
However, the SPB-Octyl column greatly shifted the elution order of many VOCs
compared to the elution orders on VOCOL and SPB-624 columns.

Coeluting VOCs Resolved by Selective Detection:      MS   PID/ELCD
14/11   chloroform/2,2-dichloropropane                 yes         no
43/41   1,2,3-trichloropropane/1,1,2,2-tetrachloroethane  yes         no
45/47   2-chlorotoluene/4-chlorotoluene                 no          no
51/53   1,3-dichlorobenzene/1,4-dichlorobenzene        no          no

Partially Coeluting VOCs
10,9        1,1-dichloroethane/trans-1,2-dichloroethene yes         yes
21,22       1,2-dichloropropane/dibromomethane       yes         no
39 & 43/41   bromoform & trichloropropane /1,1,2,2-TCA  yes         no

Elution Order Reversed (relative to SPB-624 and VOCOL)
      8 & 7        13 & 12     33 & 32     55 & 54
      10 & 9       18 & 17     38 & 37     59 & 58

 Retention Time Greatly Affected:
Reduced:   19    39    43    41
Increased:  20    25    28
                                       363

-------
SOLID PHASE MICROEXTRACTION.  Solid phase microextraction (SPME), like
purge-and-trap,  is a  solventless  extraction procedure, but SPME does  not
require the complex  instrumentation of  purge-and-trap methodology.   SPME
involves immersing a polymer-coated  fused silica fiber into drinking water or
waste water samples, or the headspace above water or soil samples to adsorb
the VOCs.  The adsorbed VOCs are thermally desorbed in the injection port of
any GC and focused at the front of the cooled capillary column (Figure D).
Extraction selectivity can be altered by changing the polymeric fiber coating or its
thickness.   For example,  the small distribution  constants and low polarity of
chlorinated and aromatic VOCs dictate the use of a thick, nonpolar fiber coating
for efficient extraction.  Agitation, addition of salt,  pH  adjustment, and immersion
of the fiber in the aqueous sample improve recovery of difficult-to-extract VOCs.
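
The influence of the distribution constant and of the coating volume can be seen
in the usual equilibrium expression for the mass of analyte taken up by the fiber,

    n = (K_fs V_f V_s C_0) / (K_fs V_f + V_s) ,

where K_fs is the fiber/sample distribution constant, V_f the coating volume, V_s
the sample volume, and C_0 the initial analyte concentration.  When K_fs V_f is
much smaller than V_s (small distribution constants, as for the most volatile
halogenated compounds), the extracted amount reduces to approximately
K_fs V_f C_0, so a thicker coating directly increases the extracted mass and the
sensitivity.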

Comprehensive separation of SPME-extracted VOCs from soil is depicted in
Figure E.  The SPME extraction of VOCs at 40ppb provided the highest
sensitivity for the substituted aromatic VOCs above benzene.  The more volatile
halogenated alkanes and alkenes were not concentrated on the fiber as strongly
as the aromatics were.  Nevertheless, extraction of the volatile gases
(dichlorodifluoromethane to chloroethane) was sufficient for positive
identification and quantification.

RAPID SCREENING.  For screening VOCs with  nonspecific detectors,  such as
FIDs and TCDs, a dual column analysis on columns of different polarity provides
better  identification  and  quantification  of  VOCs.    A  dual-column  system
composed of a 10m x 0.20mm ID x 1.2um SPB-1 column and a VOCOL column
of the same dimensions provided good resolution of US EPA Method 624 VOCs
in about 6 minutes.   Figures F and G show the dual column analysis of  the
Method 624 VOCs at 50ppb, following a 5 minute extraction by SPME.  The
combined analysis time and cool-down time was 10 minutes.  The  10-minute
cycle  time for the  analysis is compatible with  the sample preparation  time by
SPME - 5 minutes for extractions and 3 minutes for desorption.
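
The screening throughput implied by these times can be estimated roughly,
assuming that SPME extraction of the next sample proceeds while the previous
sample is being analyzed and the oven cools; this overlap assumption is ours,
not a claim of the poster:

    # Rough throughput estimate for SPME / fast-GC screening (illustrative only).
    gc_run_min       = 6     # dual-column analysis time (approx., from the text)
    gc_cooldown_min  = 4     # cool-down completing the 10-minute GC cycle
    spme_extract_min = 5     # fiber immersion
    spme_desorb_min  = 3     # desorption in the injection port

    gc_cycle   = gc_run_min + gc_cooldown_min          # 10 min per injection
    spme_cycle = spme_extract_min + spme_desorb_min    # 8 min per sample

    # With sample preparation overlapped, the slower cycle sets the pace.
    cycle = max(gc_cycle, spme_cycle)
    print(f"about {60 // cycle} samples per hour at a {cycle}-minute cycle")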

Because wastewater samples can  contain VOCs  at concentrations ranging from
trace ppb to ppm levels, a sample screening technique must be suitable for
quantifying VOCs over a wide range of concentrations.  In a purge-and-trap
instrument,  VOCs at concentrations greater than 200ppb can saturate  the trap
and contaminate the valves and lines,  requiring downtime to clean the  system.
SPME was  effective over a wide range of VOC concentrations, and proved its
suitability for screening  samples on-site  or  prior  to purge-and-trap/GC/MS
analysis.  Waste water samples  found to  be highly concentrated can be diluted
prior to the formal analysis.

The average response factors for 31 VOCs in US EPA Method 624 over a
concentration range of 25ppb to 1ppm were determined using SPME.  Data for
SPME extractions at 7 concentrations are summarized in Table B.  The low
percent relative standard deviations (%RSD) for most VOCs indicate good
                                        364

-------
linearity for the response factors for this range of concentrations. The % RSD for
vinyl chloride is unusually high because vinyl chloride coelutes with  methanol,
the solvent used with the standard. Responses for vinyl chloride are more linear
with specific detectors, such as ELCD or MS.
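
Response-factor linearity of the kind summarized in Table B can be checked with
a short calculation.  The sketch below uses the internal-standard response
factor as defined in Method 624; the peak areas and the internal-standard
concentration are invented for illustration and are not data from this study:

    import statistics

    def response_factor(area_analyte, area_is, conc_analyte, conc_is):
        """Internal-standard response factor: RF = (A_x * C_is) / (A_is * C_x)."""
        return (area_analyte * conc_is) / (area_is * conc_analyte)

    # Hypothetical peak areas for one VOC at the seven SPME calibration levels;
    # the internal standard (1,4-dichlorobutane) is assumed held at 250 ppb.
    levels_ppb   = [25, 50, 100, 250, 500, 750, 1000]
    analyte_area = [1.1e4, 2.3e4, 4.4e4, 1.1e5, 2.3e5, 3.3e5, 4.5e5]
    is_area      = [9.8e4, 1.0e5, 9.9e4, 1.0e5, 1.0e5, 9.7e4, 1.0e5]
    is_conc_ppb  = 250

    rfs = [response_factor(a, i, c, is_conc_ppb)
           for a, i, c in zip(analyte_area, is_area, levels_ppb)]

    mean_rf = statistics.mean(rfs)
    rsd_pct = 100 * statistics.stdev(rfs) / mean_rf
    print(f"mean RF = {mean_rf:.3f}, %RSD = {rsd_pct:.1f}")  # a low %RSD implies linearity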
CONCLUSIONS

Based on this study, it was concluded that the SPB-624 column provided the
highest column efficiency (Trennzahl values) but resulted in numerous peak
coelutions, which could be resolved using selective detectors (MS or PID/ELCD).
The VOCOL column provided the lowest column efficiency and the longest retention
times, but the fewest coelutions.  The coelutions on the VOCOL column could
also be separated with selective detectors.  The SPB-Octyl column provided high
column efficiency and a unique elution order.  Although the SPB-Octyl column
separated m-xylene and p-xylene, 2-chlorotoluene/4-chlorotoluene (isomers) as
well as 1,4-dichlorobenzene/1,3-dichlorobenzene (isomers) were not resolved.

These results show that SPME is fast, easy and compatible with short, narrow-
bore columns that provide fast analysis times.  Volatile organic compounds can
be extracted with good accuracy over a wide concentration  range. Because the
apparatus is portable and easy to use, SPME can  be employed in the field for
quick turn-around methods, or for screening a sample prior to GC/MS analysis.
Precision  and accuracy also make SPME  effective in quantitative analyses.
                                        365

-------
Table A. Sixty Common VOCs in EPA Method 502.2 For
Comparing Capillary Column Performance
  1.  Dichlorodifluoromethane      31.  1,2-Dibromoethane
  2.  Chloromethane                32.  Chlorobenzene
  3.  Vinyl Chloride               33.  1,1,1,2-Tetrachloroethane
  4.  Bromomethane                 34.  Ethylbenzene
  5.  Chloroethane                 35.  m-Xylene
  6.  Trichlorofluoromethane       36.  p-Xylene
  7.  1,1-Dichloroethene           37.  o-Xylene
  8.  Methylene Chloride           38.  Styrene
  9.  trans-1,2-Dichloroethene     39.  Bromoform
  10. 1,1-Dichloroethane           40.  Isopropylbenzene
  11. 2,2-Dichloropropane          41.  1,1,2,2-Tetrachloroethane
  12. cis-1,2-Dichloroethene       42.  Bromobenzene
  13. Bromochloromethane           43.  1,2,3-Trichloropropane
  14. Chloroform                   44.  n-Propylbenzene
  15. 1,1,1-Trichloroethane        45.  2-Chlorotoluene
  16. 1,1-Dichloropropene          46.  1,2,3-Trimethylbenzene
  17. Carbon Tetrachloride         47.  4-Chlorotoluene
  18. Benzene                      48.  tert-Butylbenzene
  19. 1,2-Dichloroethane           49.  1,2,4-Trimethylbenzene
  20. Trichloroethene              50.  sec-Butylbenzene
  21. 1,2-Dichloropropane          51.  1,3-Dichlorobenzene
  22. Dibromomethane               52.  p-Isopropyltoluene
  23. Bromodichloromethane         53.  1,4-Dichlorobenzene
  24. cis-1,3-Dichloropropene      54.  n-Butylbenzene
  25. Toluene                      55.  1,2-Dichlorobenzene
  26. trans-1,3-Dichloropropene    56.  1,2-Dibromo-3-chloropropane
  27. 1,1,2-Trichloroethane        57.  1,2,4-Trichlorobenzene
  28. Tetrachloroethene            58.  Hexachlorobutadiene
  29. 1,3-Dichloropropane          59.  Naphthalene
  30. Dibromochloromethane         60.  1,2,3-Trichlorobenzene
                                    366

-------
Table B. Linearity of Response Factors for EPA Method 624 VOCs
                                                     Response Factors
No.   VOC                                  Column*     Mean      % RSD
1.    Chloromethane                        SPB-1       0.022      17.0
2.    Vinyl chloride                       SPB-1       0.663      23.0
3.    Bromomethane                         SPB-1       0.025      11.4
4.    Chloroethane                         SPB-1       0.229      14.7
5.    Trichlorofluoromethane               SPB-1       0.022       8.3
6.    1,1-Dichloroethene                   VOCOL       0.341      13.3
7.    Methylene chloride                   VOCOL       0.040      14.7
8.    trans-1,2-Dichloroethene             VOCOL       0.354      15.3
9.    1,1-Dichloroethane                   VOCOL       0.272       9.1
10.   Chloroform                           VOCOL       0.106      12.1
11.   1,1,1-Trichloroethane                SPB-1       0.374       5.1
12.   Carbon tetrachloride                 VOCOL       0.080      11.9
13.   1,2-Dichloroethane                   SPB-1       0.183       7.8
14.   Benzene                              SPB-1       1.951       5.1
15.   Trichloroethene                      VOCOL       0.336       3.9
16.   1,2-Dichloropropane                  VOCOL       0.529       3.4
17.   Bromodichloromethane                 VOCOL       0.072       9.9
18.   2-Chloroethylvinyl ether             VOCOL       0.324       6.0
19.   cis-1,3-Dichloropropene              VOCOL       0.551       3.6
20.   Toluene                              VOCOL       2.091       5.2
21.   trans-1,3-Dichloropropene            VOCOL       0.501       4.3
22.   1,1,2-Trichloroethane                VOCOL       0.247       3.4
23.   Tetrachloroethene                    VOCOL       0.251      13.0
24.   Dibromochloromethane                 VOCOL       0.060       6.1
25.   Chlorobenzene                        SPB-1       1.543       6.5
26.   Ethylbenzene                         SPB-1       1.892      14.0
27.   Bromoform                            SPB-1       0.086       6.4
IS    1,4-Dichlorobutane (int. std.)
28.   1,1,2,2-Tetrachloroethane            VOCOL       0.274       4.9
29.   1,3-Dichlorobenzene                  VOCOL       1.021      16.9
30.   1,4-Dichlorobenzene                  VOCOL       1.078      16.3
31.   1,2-Dichlorobenzene                  VOCOL       1.032      17.4

*Column used to quantify the analyte
     Sample:  US EPA 624 VOCs in 1.8mL saturated salt water
              (2mL vial), 25ppb-1ppm, 7 concentration points
  Fiber Type:  100µm polydimethylsiloxane (PDMS)
  Extraction:  direct immersion of fiber in sample (5 min, rapid stirring)
                                       367

-------
Figure A. Separation of 60 US EPA Method 502.2 VOCs with the SPB-624 column.
  [Chromatogram not reproduced.  Peak numbers correspond to Table A.  Conditions:
  Column: SPB-624, 60m x 0.25mm ID x 1.4µm film; Oven: 40°C/4min - 4°C/min -
  200°C/10min; Det: MS (m/z = 45-300), 300°C; Inj: 250°C, splitless (2min);
  Sample: 60 VOCs, 5ppb, in MeOH.]

-------
Figure B. Separation of 60 US EPA Method 502.2 VOCs with the VOCOL column.
  [Chromatogram not reproduced.  Peak numbers correspond to Table A.  Conditions:
  Column: VOCOL, 60m x 0.25mm ID x 1.5µm film; Oven: 40°C/4min - 4°C/min -
  200°C/10min; Det: MS (m/z = 45-300), 300°C; Inj: 250°C, splitless (2min);
  Sample: 60 VOCs, 5ppb, in MeOH.]

-------
Figure C. Separation of 60 US EPA Method 502.2 VOCs with the SPB-Octyl column.
  [Chromatogram not reproduced.  Peak numbers correspond to Table A.  Conditions:
  Column: SPB-Octyl, 60m x 0.25mm ID x 1.0µm film; Oven: 40°C/4min - 4°C/min -
  200°C/10min; Det: MS (m/z = 45-300), 300°C; Inj: 250°C, splitless (2min);
  Sample: 60 VOCs, 5ppb, in MeOH.]

-------
Figure D.  Extraction and desorption processes for SPME.
  [Diagram not reproduced.  Left panel, "Extraction Procedure for SPME": pierce the
  sample vial septum, expose the fiber (extract), retract the fiber (remove).
  Right panel, "Desorption Procedure for SPME": pierce the GC inlet septum, expose
  the fiber in the injection port liner (desorb), retract the fiber (remove).]
-------
                    Figure E.  SPME extraction and separation of 60 US EPA Method 502.2 VOCs from soil.
  [Chromatogram not reproduced.  Peak numbers correspond to Table A.  Conditions:
  Column: SPB-624, 60m x 0.25mm ID x 1.4µm film; Oven: 40°C/4min - 4°C/min -
  200°C/10min; Det: MS (m/z = 45-300); SPME: polydimethylsiloxane, 100µm;
  Extraction: 25°C, 10min, headspace over water-saturated soil; Desorption:
  250°C, 3min, splitless (2min); Sample: 60 VOCs, 40ppb, 3g soil.]


-------
Figure F.   Rapid Screening of VOCs on 10m VOCOL Column
                     SPME: 100µm PDMS phase fiber
                             immersed in 1.8mL saturated salt water (5 min)
                   Column: VOCOL, 10m x 0.20mm ID, 1.2µm film
                 Oven Temp.: 40°C (0.75 min) to 160°C at 20°C/min
                     Carrier: helium, 40cm/sec (set at 40°C)
                       Det.: FID, 260°C
                         Inj.: 230°C, splitless (closed 3 min)
                     Sample: 50ppb each analyte
               1. Chloromethane
               2. Vinyl chloride
               3. Bromomethane
               4. Chloroethane
               5. Trichlorofluoromethane
               6. 1,1-Dichloroethene
               7. Methylene chloride
               8. trans-1,2-Dichloroethene
               9.1,1-Dichloroethane
               10. Chloroform
               11. 1,1,1-Trichloroethane
               12. Carbon tetrachloride
               13. 1,2-Dichloroethane
               14. Benzene
               15. Trichloroethene
               16. 1,2-Dichloropropane
17. Bromodichloromethane
18. 2-Chloroethylvinyl ether
19. cis-1,3-Dichloropropene
20. Toluene
21. trans-1,3-Dichloropropene
22. 1,1,2-Trichloroethane
23. Tetrachloroethene
24. Dibromochloromethane
25. Chlorobenzene
26. Ethylbenzene
27. Bromoform
IS 1,4-Dichlorobutane (int. std.)
28. 1,1,2,2-Tetrachloroethane
29.1,3-Dichlorobenzene
30.1,4-Dichlorobenzene
31  1,2-Dichlorobenzene
  [Chromatogram not reproduced: FID trace, approximately 0-6 minutes, with peaks
  numbered as in the list above.]
                                             373

-------
  Figure G.   Rapid Screening of VOCs on 10m SPB-1 Column

                      SPME: 100µm PDMS phase fiber
                              immersed in 1.8mL saturated salt water (5 min)
                    Column: SPB-1, 10m x 0.20mm ID, 1.2µm film
                  Oven Temp.: 40°C (0.75 min) to 160°C at 20°C/min
                      Carrier: helium, 40cm/sec (set at 40°C)
                         Det.: FID, 260°C
                          Inj.: 230°C, splitless (closed 3 min)
                      Sample: 50ppb each analyte
                  1. Chloromethane
                  2. Vinyl chloride
                  3. Bromomethane
                  4. Chloroethane
                  5. Trichlorofluoromethane
                  6.1,1-Dichloroethene
                  7. Methylene chloride
                  8. trans-1,2-Dichloroethene
                  9.1,1-Dichloroethane
                  10.  Chloroform
                  11.  1,1,1-Trichloroethane
                  12.  Carbon tetrachloride
                  13.  1,2-Dichloroethane
                  14.  Benzene
                  15.  Trichloroethene
                  16.  1,2-Dichloropropane
                               17. Bromodichloromethane
                               18. 2-Chloroethylvinyl ether
                               19. cis-1,3-Dichloropropene
                               20. Toluene
                               21. trans-1,3-Dichloropropene
                               22. 1,1,2-Trichloroethane
                               23. Tetrachloroethene
                               24. Dibromochloromethane
                               25. Chlorobenzene
                               26. Ethylbenzene
                               27. Bromoform
                               IS 1,4-Dichlorobutane (int. std.)
                               28. 1,1,2,2-Tetrachloroethane
                               29. 1,3-Dichlorobenzene
                               30. 1,4-Dichlorobenzene
                               31.  1,2-Dichlorobenzene
  [Chromatogram not reproduced: FID trace, approximately 0-5 minutes, with peaks
  numbered as in the list above.]

                                              374

-------
Inorganics

-------
                                                                                             55
AN IMPROVED TEMPERATURE FEEDBACK CONTROL SENSOR
              FOR MICROWAVE SAMPLE PREPARATION

Leo W. Collins. Applications Scientist, Karl M. Williams, Research Engineer,
OI Analytical, PO Box 9010, College Station, Texas 77842-9010


ABSTRACT

A patent-pending microbulb thermometry sensor for microwave-assisted sample preparation sys-
tems has been developed. It performs as well as the currently employed sensors in accuracy and
precision. The theory behind the microwave-transparent temperature sensor is based on gas law
principles. In practice, the sensor has a linear response from -50ฐC to 250ฐC, for an extended period
of time. The microbulb sensor's accuracy is enhanced by an applied linearization factor. The sensor
is designed with microwave-transparent materials and is not prone to breakage as are sensors in
other current technologies. Calibration is made in under a minute and replacement of the "expend-
able-priced" probe can be accomplished in seconds. These factors allow the sensor to be employed
on a routine basis for method development or EPA method compliance. A demonstration of the new
technology has been performed on sludge and sediment samples, as outlined in EPA SW-846,
Method 3051, "Microwave-Assisted Acid Digestion of Sediments, Sludges, Soils, and Oils" [1].
After the timesaving microwave digestion period, the samples were analyzed for several of the
approved Resource Conservation and Recovery Act (RCRA) metals by inductively coupled plasma
(ICP) spectroscopy. Excellent accuracy and precision were obtained, in addition to a significant
time reduction in sample preparation. The new Microbulb Thermometry System™ allows micro-
wave sample preparation scientists to use temperature feedback control on a routine basis.

INTRODUCTION

The improvements of microwave-assisted acid digestion over the traditional hot plate techniques, in
terms of time reduction and precision, have been well documented [2]. The development and accep-
tance of the two EPA methods for microwave-assisted acid digestion have ignited a growth in use.
EPA Methods 3015 and 3051 were  written with performance-based criteria so that the transfer of
the method could be made, independent of instrument manufacturer, from one laboratory to another.
Future methodology will also employ performance-based criteria. Temperature is the primary factor
in the microwave-assisted chemical  reactions, and therefore, the most crucial variable in the perfor-
mance-based methods. The need for  accurate, precise, and durable  temperature monitoring and
control is evident. The current technologies include phosphor fiber-optic, passive IR, and thermo-
couple detection. These methods are accurate but have various disadvantages, ranging from cost to
cumbersome use. The new microbulb sensor was developed to remove these obstacles and retain
the superb accuracy that is required.
                                              375

-------
EXPERIMENTAL

An illustration of the Microbulb Thermometry System (MTS™) is displayed in Figure 1.
  [Schematic not reproduced: the microbulb probe is joined by an electrical
  connection to a control box containing the temperature and/or pressure sensors.]

                 Figure 1.  Microbulb Thermometry System

The patent-pending MTS is made from microwave-transparent materials and is extremely robust,
unlike similar products in much of the current technology, and can therefore be used in routine
applications. The MTS has an excellent thermal response, comparable to currently used sensors.
The stability of the MTS is superb, having negligible drift over a period of twelve hours. The MTS
was used with an Analytical Microwave System™ (AMS) Model 7195 from OI Analytical. Tem-
perature feedback control programming was performed according to EPA Method 3051, "Micro-
wave-Assisted Acid Digestion of Sediments, Sludges, Soils, and Oils." The method requires 10.0
mL of nitric acid with 0.5 g of sludge or sediment sample.  When using temperature feedback
control, the programming can be stated simply: heat the acid-sample mixture to 175°C in 5.5 min-
utes and maintain that temperature for an additional 4.5 minutes. This eliminates the need for the
time-consuming calibration procedure that is necessary when performing the method without temperature
feedback control. The temperature program profile for an example of Method 3051 using the new
thermometry technology is shown in Figure 2 and an actual temperature profile for a sediment
sample is shown in Figure 3.
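
The paper does not describe the control algorithm used by the AMS, so the
ramp-and-hold programming can only be sketched generically.  The fragment below
generates the Method 3051 setpoint (ramp to 175°C in 5.5 minutes, hold for 4.5
minutes) and a simple proportional power command; the gain and the example
sensor readings are illustrative assumptions, not the instrument's design:

    def setpoint_c(t_min, start_c=25.0, target_c=175.0, ramp_min=5.5):
        """Linear ramp from start_c to target_c over ramp_min minutes, then hold."""
        frac = min(t_min / ramp_min, 1.0)
        return start_c + frac * (target_c - start_c)

    def power_fraction(setpoint, measured, kp=0.05):
        """Proportional controller with magnetron power clamped to 0-100 %."""
        return max(0.0, min(1.0, kp * (setpoint - measured)))

    # Hypothetical microbulb readings at a few points in a digestion run.
    for t, measured in [(0.0, 25.0), (2.0, 72.0), (5.5, 168.0), (8.0, 174.0)]:
        sp = setpoint_c(t)
        print(f"t={t:4.1f} min  setpoint={sp:5.1f} C  "
              f"measured={measured:5.1f} C  power={100 * power_fraction(sp, measured):5.1f} %")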
                                                  376

-------
  [Plot not reproduced: programmed temperature (°C) versus time over the
  10-minute USEPA Method 3051 run.]
    Figure 2. Microwave Heating Profile for USEPA Method 3051
  [Plot not reproduced: measured temperature (°C, 0 to 180) versus time
  (minutes) for the sediment digestion, with the temperature trace labeled.]
Figure 3. USEPA Method 3051 Temperature Profile for Sediment Sample
RESULTS

The ICP results for several of the RCRA metals for the sludge sample are in excellent agreement
with the expected concentration, and are shown in Table 1. The bias, defined as the difference
between the amount expected and the amount found, was  less than 2% for each of the metals
analyzed.

Good precision is displayed in the ICP analysis of several of the RCRA metals for the sediment
sample. The concentrations along with the precision, based on n=4, are shown in Table 2.
                                                  377

-------
                       Table 1. Analysis of RCRA Metals in Tank Sludge
                              by ICP Spectroscopy

Element        Concentration (ppm)
Ag                    1.40
As                    9.80
Ba                  373
Cd                   12.6
Cr                   49.6
Hg                    0.14
Pb                  825
Se                    0.86
Zn                 1120

              Table 2. Analysis of RCRA Metals in Sediment by ICP Spectroscopy

Element        Concentration (ppm)        Precision
               Experimental Range         (SD, n=4)
As                  31-38                    3.6
Ba                 266-271                   2.2
Cd                   2-3                     0.9
Cr                 50.6-55.3                 2.0
Hg                  <0.1                     --
Se                  24-38                    6.3
Ag                  <0.5                     --
Zn                 129-157                  12

SUMMARY
The demonstration of a newly developed temperature sensor for microwave-assisted sample prepa-
ration is shown. The Microbulb Thermometry System performs as accurately as current tempera-
ture-measuring technologies, and is robust and user-friendly. These differences will permit micro-
wave users in the laboratory to perform routine temperature feedback methods, eliminating the need
for power calibration steps. In addition, as more performance-based methods employing tempera-
ture criteria are created, the use of temperature sensors will escalate.

REFERENCES

[1]    United States Environmental Protection Agency, "Test Methods for Evaluating Solid Waste.
Physical/Chemical Methods," SW-846, 3051-1, November 1990.

[2]    H.M. Kingston, L.B. Jassie, Eds., "Introduction to Microwave Sample Preparation: Theory
       and Practice," ACS Professional Reference Book, American Chemical Society, 1988.
                                                  378

-------
                                                                                56
IMPROVEMENTS IN SPECTRAL INTERFERENCE AND BACKGROUND CORRECTION FOR
INDUCTIVELY COUPLED PLASMA OPTICAL EMISSION SPECTROMETRY.

Juan C. Ivaldi. Alan M. Ganz, and Marc Paustian, The Perkin-Elmer Corporation,
761 Main Avenue, Norwalk, CT, 06859-0293

Treatment of spectral data from an inductively coupled plasma (ICP)
spectrometer is of central importance to the quality of results of CLP and RCRA
analyses.  A routine method is to use off-line background correction coupled with
Interfering Element Correction (IEC). There are inherent limitations with this
approach and known difficulties with its implementation [1].  Conventional IEC
requires the operator to select background correction points. Also, there must
be a linear relationship between the interfering element reference line and the
interfering emission occuring at the analyte wavelength in order for the algorithm
to work. In this paper, we discuss an improved version of this method called
Total Interfering Element Correction (TIEC), which addresses both the
mathematical limitations and the practical implementation problems of
conventional IEC. Similar mathematics are used for TIEC as for IEC but the
information from the spectrometer is used much more efficiently.  For example,
selection of background correction points is superfluous.  Thus, the variability
resulting from this parameter is removed in the method development step. The
argon continuum background is simply treated as another interfering
contribution. Furthermore, with TIEC, proper spectral interference correction is
not dependent on intensity ratios to spectral lines at distant wavelengths from
the analyte.  The TIEC output provides the same interference correction factor
information as IEC, necessary for regulatory compliance. In addition, TIEC
provides diagnostic feedback useful to the operator for instrument performance
verification. It will be shown that TIEC is equivalent in function to IEC but offers
simpler setup for the operator and more reliable results owing to the relaxation of
constraints in the older method.
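
One way to picture the difference is that conventional IEC subtracts a
background-corrected, ratio-based contribution for each interferent, whereas a
TIEC-style treatment fits the measured spectrum in the analyte window as a sum
of component contributions, with the argon continuum included as one of them.
The sketch below illustrates that idea with an ordinary least-squares fit; the
line shapes and intensities are invented, and the code does not represent the
Perkin-Elmer implementation:

    import numpy as np

    # Model the measured window as analyte line + interferent line + flat continuum.
    pixels = np.arange(15)
    analyte_shape     = np.exp(-0.5 * ((pixels - 7.0) / 1.2) ** 2)   # analyte line
    interferent_shape = np.exp(-0.5 * ((pixels - 9.5) / 1.2) ** 2)   # overlapping line
    continuum_shape   = np.ones(15)                                  # argon continuum

    # Synthetic "measured" spectrum: 2 units analyte, 5 units interferent, 0.8 background.
    measured = 2.0 * analyte_shape + 5.0 * interferent_shape + 0.8 * continuum_shape

    design = np.column_stack([analyte_shape, interferent_shape, continuum_shape])
    coefficients, *_ = np.linalg.lstsq(design, measured, rcond=None)

    # Recovers approximately [2.0, 5.0, 0.8] without separate background points.
    print("fitted analyte, interferent, continuum intensities:", np.round(coefficients, 3))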

[1] G.A. Laing et al. in Proceedings: Tenth Annual Waste Testing and Quality
Assurance Symposium, July 11-15, 1994, VA.
                                        379

-------
57
           ANALYTICAL METHODS FOR WHITE PHOSPHORUS  (P4)
                       IN SEDIMENT AND WATER

Marianne E. Walsh. Chemical Engineer,  Susan Taylor, Research
Physical Scientist, U.S. Army Cold Regions Research and Engineer-
ing Laboratory, Hanover, New Hampshire 03755; Douglas Anderson,
Chemist, Harry McCarty, Senior Scientist, Science Applications
International Corporation, Falls Church, Virginia 22043.

ABSTRACT
White phosphorus  (P4)  can produce severe adverse ecological im-
pacts if released into the environment. First produced in the
United States over 100 years ago for use in matches, and subse-
quently for rat poisons and fireworks, today it is primarily used
in the production of phosphoric acid and as a smoke-producing
munition. To date, there is no standard analytical method for
white phosphorus in environmental matrices. We have been using an
analytical method based on solvent extraction and gas chromatog-
raphy to determine white phosphorus in sediments and water from
an Army training area. For sediments, a method detection limit of
less than 1 µg/kg was achieved for white phosphorus extracted with
isooctane and determined with a portable capillary gas chromat-
ograph equipped with a nitrogen-phosphorus detector. For water,
extraction with isooctane may be used to determine concentrations
greater than 0.1 µg/L. However, to meet water quality criteria
for aquatic organisms, preconcentration of the solvent extract is
required. Due to the relatively high vapor pressure of white
phosphorus, a nonevaporative preconcentration step is used. P4
is extracted from water using diethyl ether (10:1 water:solvent
ratio).  The ether phase is collected, then reduced in volume by
shaking with reagent-grade water. By using the appropriate volume
of water, excess ether is dissolved away, resulting in a precon-
centration factor of 500 while heat is avoided and loss of P4 by
volatilization minimized. Using this preconcentration procedure,
a method detection limit of less than 0.01 µg/L was achieved.
To minimize use of solvent in the laboratory, solid phase micro-
extraction (SPME) may be used to screen samples for contamina-
tion. Exposure of a 100-µm polydimethylsiloxane phase to the
headspace above a sediment or water sample for 5 min followed by
thermal desorption in the injection port of the gas chromatograph
provides sensitivity similar to that obtained by solvent extrac-
tion. Since this method is based on equilibrium partitioning be-
                                     380

-------
tween the sample, headspace, and solid phase, response is matrix-
specific. Work is in progress on calibrating this procedure for
quantitative analyses.

This analytical method will be proposed for inclusion in SW-846
Update III as Method 7580: White Phosphorus by Solvent Extraction
and Gas Chromatography.

INTRODUCTION

White phosphorus (P4)  is a synthetic chemical that has been used
in poisons, smoke-screens, matches, and fireworks and as a raw
material to produce phosphoric acid (1). In 1990, a waterfowl die-
off at Eagle River Flats, Alaska, a U.S. Army training site, was
traced to the presence of P4 in the salt marsh sediments (2).  At
that time, no standard analytical method was available for the
determination of P4 in soil/sediment or water.  To analyze the
thousands of samples required by the site investigation, we used
a published method, which was based on solvent extraction followed
by gas Chromatography with a phosphorus selective detector  (3).
The method needed modification to improve extraction efficiency
and detection capability  (4-6). This paper describes further work
performed to validate the method in a variety of matrices and to
test the use of solid phase microextraction (SPME) as a means to
screen samples for P4 contamination.

EXPERIMENTAL
An analytical standard for P4 was obtained from Aldrich Chemical
Co., Milwaukee, Wisconsin. The P4 was supplied as a 5-g stick
with a white coating under water. Pieces (100-300 mg) from the
stick were obtained by placing the stick in degassed water in a
nitrogen-purged glove bag and cutting with a razor blade. Care was
taken to ensure that the surfaces of each piece of P4 were freshly
cut and lustrous in appearance, and showed no evidence of a white
coating. These pieces were used to prepare solutions as described
below.
A stock solution for calibration standards was prepared under
nitrogen by dissolving 250 mg of P4 in 500 mL isooctane (Aldrich
Chemical Co.). Standards over the range 3.5 to 70 µg/L were pre-
pared by dilution of the stock solution with isooctane or diethyl
ether. Standards in isooctane are stable for months stored in
ground glass stoppered flasks in the dark at 4°C. Standards in
ether were prepared the same week of analysis and stored at -20°C.

Aqueous solutions of P4 were prepared by placing pieces of P4 into
an amber jug containing 4 L of Type I water  (MilliQ, Millipore)
with no headspace and agitating the jug for over 60 days.
Blank matrices used to prepare spiked samples were: reagent grade
                                     381

-------
 (Type I) water  (MilliQ, Millipore);  groundwater from a domestic
well in Weathersfield, Vermont; surface water from a pond in Han-
over, New Hampshire; Ottawa sand purchased from U.S. Silica,
Ottawa, Illinois; a loamy soil from the U.S. Army Environmental
Center, Aberdeen Proving Ground, Maryland; a sandy silt from Leb-
anon, New Hampshire; and a Montana soil with high concentrations
of metals purchased from NIST, Gaithersburg, Maryland.  Soil sam-
ples were wetted to 100% moisture (dry weight basis) prior to
spiking.
For each matrix, 10 replicate spiked samples (1 L for water and
40 g for wet soil) were prepared by adding an aqueous solution of
P4 to yield concentrations near the presumed detection limit (0.01
µg/L for water and 1 µg/kg for soil).  This method worked well for
all matrices except the Montana soil,  where the dissolved P4 in
the aqueous spike was lost immediately, probably by fast reaction
with metals in the soil samples. An alternative spiking method
was used instead, where the Montana soil samples were spiked with
small pieces of solid P4.  Spiked water samples  were extracted
within a day; spiked soil samples were equilibrated 24  hr prior
to extraction.
Field-contaminated samples were obtained from Eagle River Flats,
Fort Richardson, Alaska. Water samples were collected in 1-L
amber glass bottles and soil samples were collected in 500-mL
wide-mouth jars filled so that there was no headspace.  Samples
were maintained at 4°C until extracted.  Samples were extracted
and analyzed within 7 days of collection.
For extraction, a 500-mL aliquot of  water was mixed with 50 mL of
diethyl ether by shaking in a 500-mL separatory funnel  for 5 min.
After phase separation, all of the ether layer was collected. The
volume of the ether layer recovered varied, depending on the tem-
perature and the ionic strength of the samples;  it generally
ranged from 3 to 10 mL. The volume of the ether layer was  further
reduced to approximately 0.5 mL by adding the ether extract to
approximately 50 mL of reagent-grade water in a 125-mL separatory
funnel and shaking for 1 min. After phase separation, the ether
layer was collected in a 5-mL graduated cylinder and the exact
volume measured. P4 concentration in the extract was then  deter-
mined by gas chromatography. Extracts were analyzed immediately
to minimize loss due to solvent evaporation.
Wet sediment samples were extracted by placing a 40-g subsample
into a 120-mL jar containing 10.0 mL of degassed reagent-grade
water. Then 10.0 mL of isooctane was added. Each jar was tightly
sealed with a Teflon-lined cap, vortex-mixed for one minute, and
then placed horizontally on a platform shaker for 18 hr. The sam-
ple then was allowed to stand undisturbed for 15 min to permit
phase separation. Extracts were analyzed within a few hours.
                                     382

-------
P4 was determined by injecting a 1.0-µL aliquot of the isooctane
or ether extract on-column into an SRI Model 8610 gas chromato-
graph equipped with a nitrogen-phosphorus detector. The methylsil-
icone fused silica column (J and W DB-1, 0.53-mm-ID, 15-m, 3.0-µm
film thickness) was maintained at 80°C.  The carrier gas was nitro-
gen set at 30 mL/min. Under these conditions, P4 eluted at 2.7
min.

The potential use of SPME as a means to distinguish blank samples
from spiked or field-contaminated samples was tested. SPME fiber
assemblies were obtained from Supelco, Beliefonte, Pennsylvania.
These assemblies are composed of a fused silica fiber coated with
a stationary phase (we used 100-µm polydimethylsiloxane). The fi-
ber is attached to a holder that resembles a modified microliter
syringe. In general,  the fiber is exposed to a sample for a short
period of time, during which analytes adsorb to the stationary
phase. Then the fiber is placed into the injection port of a gas
chromatograph to thermally desorb the analytes. We used the SPME
fibers as follows. For each water sample, a 25-mL aliquot was
placed in a 40-mL VOA vial.  The vial was placed in a sonic bath
for 5 min, during which time the SPME phase was exposed to the
headspace. The SPME phase was immediately transferred to a heated
(200°C) injection port of the gas chromatograph described above.
For each soil sample, a 40-g subsample was placed in a 120-mL jar
containing 10.0 mL of degassed reagent-grade water. The jar was
sealed with a cap equipped with a septum. Each sample was shaken,
then the SPME phase exposed to the headspace for 5 min. The SPME
phase was thermally desorbed as described for the water samples.

RESULTS AND DISCUSSION
Method Detection Limits, Accuracy, and Precision: Method Detection
Limits were computed from the standard deviation of the mean con-
centration found for each matrix and the appropriate Student's t
value (7) (Tables 1 and 2).  For the water matrices, the MDLs were
similar, ranging from 0.003 to 0.005 µg/L.  For the soil matrices,
the range in MDLs was broader, ranging from 0.07 to 0.4 µg/kg. By
definition, the spiked concentration must be within 1 to 5 times
the MDL; therefore, only the MDL for the Lebanon soil should be
considered a valid estimate. Based on the analysis of thousands of
field-contaminated samples, where the lowest detectable concentra-
tions reported are around 0.2 µg/kg, the MDLs obtained for the
sand and Lebanon soils are reasonable estimates of the detection
capability of the method.
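
For reference, the MDL calculation cited here (reference 7) multiplies the
standard deviation of the replicate results by the one-sided Student's t value
at the 99% confidence level for n-1 degrees of freedom.  A minimal sketch
follows; the individual replicate values are not tabulated in this paper, so the
list below is invented to be roughly consistent with the reagent-water summary
in Table 1:

    import statistics

    # One-sided 99% Student's t values, keyed by degrees of freedom (n - 1).
    T_99 = {6: 3.143, 7: 2.998, 8: 2.896, 9: 2.821}

    def mdl(replicates):
        """MDL = t(n-1, 0.99) * standard deviation of the replicate results."""
        return T_99[len(replicates) - 1] * statistics.stdev(replicates)

    # Ten hypothetical replicate results (ug/L), chosen to give a mean near 0.0075
    # and a standard deviation near 0.0011, similar to the Table 1 reagent water.
    replicates = [0.0057, 0.0062, 0.0067, 0.0071, 0.0074,
                  0.0076, 0.0079, 0.0083, 0.0088, 0.0093]
    print(f"s = {statistics.stdev(replicates):.4f} ug/L, "
          f"MDL = {mdl(replicates):.4f} ug/L")   # roughly 0.003 ug/L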
Recovery was estimated from the mean found concentration and the
spiked concentration. The spiking method we used differed from the
commonly used technique where the analyte of interest is dissolved
in an organic solvent,  then added to a matrix. Frequently the sol-
                                      383

-------
         Table 1.  Method Detection Limits for water matrices.

                                            Spiked water samples
                                        Reagent      Well        Pond
Spiked concentration (µg/L)             0.012        0.0097      0.0101
Mean found concentration (µg/L)         0.0075       0.0086      0.0081
Standard deviation                      0.0012       0.0019      0.0013
RSD (%)                                 16           22          16
Mean recovery (%)                       62           89          80
Method Detection Limit (µg/L)           0.003        0.005       0.004

          Table 2. Method Detection Limits for soil matrices.

                                            Spiked soil samples
                                        Sand         Lebanon     USAEC
Spiked concentration (µg/kg)            1.9          0.97        0.84
Mean found concentration (µg/kg)        1.4          0.83        0.71
Standard deviation                      0.061        0.12        0.025
RSD (%)                                 4            14          4
Mean recovery (%)                       73           86          85
Method Detection Limit (µg/kg)          0.17         0.34        0.07

vent used in the spike solution is the same as the extraction
solvent; therefore, interaction between the analytes and the ma-
trix is dissimilar to what may be expected in field samples.
While no spiked matrix can fully mimic the interactions that
occur over extended time periods in field-contaminated samples,
we chose to use an aqueous solution of P4 as a spike solution to
more realistically simulate field-contaminated soils. For the
water matrices, the lowest recovery was from reagent grade water
(Table 1).  Dissolved P4 is readily lost from water (8); however,
previous studies have shown that the rate of loss is slowed by
the presence of dissolved organic matter  (8), dissolved salts
(9), soil (10), or iron (11). Whether or not instability played
a role in the observed recoveries is unknown. Another factor may
have been the more favorable partitioning of P4 between the or-
ganic and aqueous phases when dissolved salts were present in the
aqueous phase, such as in the well and pond water samples. When
water from the salt marsh was spiked, recoveries were near 100%.
With the exception of the Montana soil, mean recoveries for the
soil samples were greater than 70%. Recovery from the spiked Mon-
tana soil was less than 0.1%. Poor recovery was expected due to
the rapid reaction of P4 with copper (12)  that was present in
the soil at over 2900 µg/g. Any soil with high concentrations of
copper will produce unacceptably low recovery of P4.  Preparation
and analysis of matrix spikes should identify soils where matrix
interactions will significantly affect recovery.
                                     384

-------
Precision was better for the soil  samples  (Table  2)  than for  the
water samples (Table 1), probably due to the lack of a preconcen-
tration step. Even at these very low concentrations,  the relative
standard deviations were all less  than 25%.

Field-Contaminated Samples : Replicate samples of field-contami-
nated water and sediment were analyzed (Table 3).  The concentra-
tions in the water samples were very low, with the mean  concentra-
tion within the range of MDLs obtained for the spiked matrices.

P4 was easily detectable in all sediment samples  (Table  3). Low
part-per-billion (µg/kg) concentrations are typical for samples
contaminated by the use of P4 munitions (13-15).  However, concen-
trations up to 3,000,000 µg/kg have been observed for some Eagle
River Flats samples that contain particulate  P4.  For samples that
contain particles of P4, concentration estimates can vary widely
from subsample to subsample.

Calibration: This method utilizes  a nitrogen-phosphorus  detector,
which has proven to be extremely sensitive to P4 and free from
interference. Drawbacks of the detector are the limited  linear
range and the tendency of the response to vary from  day  to day.
To reduce the systematic error, we recommend  generating  a  five-
point calibration curve daily prior to analysis of samples. Since
the gas chromatographic run times  are so short (less  than  5 min),
less than 30 min is required to obtain these  data. To check for
drift in the detector response during the course  of  an analytical
shift, a check standard should be  run every 10 samples and at the
end of the shift. Unless the shift is particularly long, drift
should be less than 10%.
Table 3. P4 concentrations found
in field-contaminated samples.

                     P4 Concentration
Rep              Water (µg/L)    Sediment (µg/kg)
1                  0.0026            20.3
2                  0.0009            14.2
3                  0.0024             5.8
4                  0.0015            17.9
5                  0.0031            13.7
6                  0.0039            21.2
7                  0.0054            14.0
8                  0.0061            11.6
9                  0.0048            11.5
10                 0.0055            18.9
Mean               0.0036            14.9
Std deviation      0.0018             4.7
RSD (%)            50%               32%
Screening by Solid Phase Microextraction: Because the majority
of samples sent to analytical labs for the analysis of volatiles
or semivolatiles tend to be blank or devoid of the analytes of
interest, considerable time and effort could be saved by screening
samples for contamination prior to extraction and analysis.
Recently, several papers have been published describing the use
of solid phase microextraction (SPME) as an alternative to
traditional extraction techniques (16) for volatiles and
semivolatiles. Using the spiked and field-contaminated
                                    385

-------
samples described above,  we tested the use of SPME as a way to
screen samples  for P4  contamination.  The SPME fiber was simply ex-
posed to the headspace above a subsample for 5 min, then  thermal-
ly desorbed in  the injection port of the gas chromatograph.  P4 was
detectable in all the  spiked water and soil samples, and  in the
field-contaminated sediment samples.  P4 was not detected  in  some
of the samples  of Eagle River Flats water, which had P4 concentra-
tions below the estimated MDL. Based on these results, the analy-
sis of a large  number  of blanks can be avoided and use of solvent
minimized in the laboratory by using SPME to screen samples  for
contamination.

SUMMARY
Using spiked and field-contaminated matrices, analytical  methods
for the extraction and analysis of P4 in water and soil matrices
were evaluated.  Method Detection Limits less than 1 µg/kg for
soil and less than 0.01 µg/L for water were obtained using methods
based on solvent extraction followed by gas chromatography with a
nitrogen-phosphorus detector. Solid phase microextraction was
tested and appears to  have great potential as a means to  screen
samples for P4  contamination.

ACKNOWLEDGMENTS
The authors gratefully acknowledge funding for this work,  which
was provided by the U.S.  Army Environmental Center, Aberdeen
Proving Ground,  Maryland,  Martin Stutz, Project Monitor,  and the
U.S. Army Engineer Waterways Experiment Station, Vicksburg,  Mis-
sissippi, Ann Strong,  Project Monitor. Technical reviews  were
provided by Thomas F.  Jenkins and Alan Hewitt. This publication
reflects the personal  views of the authors and does not suggest
or reflect the  policy, practices, programs, or doctrine of the
U.S. Army or Government of the United States.

LITERATURE CITED
  1.  Parkes, G.D.  (1951)  Phosphorus and  the remaining elements of Group
     V.  In: Mellor's Modern  Inorganic Chemistry.  Longmans, Green and
     Co., London.
  2.  Racine, C.H., M.E.  Walsh,  B.D. Roebuck, C.M. Collins, D.J. Calkins,
     L.  Reitsma,   P  Buchli and  G. Goldfarb  (1992) Journal of Wildlife
     Diseases. 28: 669-673.
  3.  Addison, R.F. and R.G. Ackman (1970) Journal of Chromatography, 47:
     217-222.
  4.  Taylor, S. and M.E. Walsh  (1992) Optimization of an analytical
     method for determining  white phosphorus in contaminated sediments.
     U.S. Army Cold Regions  Research and Engineering Laboratory, Han-
     over, New Hampshire,  CRREL Report 92-21.
                                     386

-------
 5.  Walsh, M.E. and S.T. Taylor (1993) Analytica Chimica Acta, 282: 55-61.

 6.  Walsh,  M.E.  (1995)  Bulletin of Environmental Contamination and Toxi-
    cology, vol. 54.

 7.  Federal Register (1984)  Definition and procedure for the determina-
    tion of the method detection limit.  Code of Federal Regulations,
    Part 136,  Appendix B, October 26.

 8.  Spanggord,  R.J., R. Renwick, T.W. Chou, R. Wilson,  R.T. Podoll,
    T.  Mill,  R.  Parnas, R. Platz and D.  Roberts (1985)  Environmental
    fate of white phosphorus/felt and red phosphorus/butyl rubber mili-
    tary screening smokes. U.S.  Army Medical Research and Development
    Command,  ADA176922.

 9.  Bullock,  E.  and M.J.  Newlands (1969)  Decomposition of phosphorus in
    water.  In:  Effects of Elemental Phosphorus on Marine Life: Collected
    Papers  Resulting from the 1969 Pollution Crisis, Placentia Bay, New-
    foundland (P.M. Jangaard, Ed.).  Circular No. 2.  Fisheries Research
    Board of Canada, Halifax, Nova Scotia, p.55-56.

10.  Zitko,  V.,  D.E. Aiken, S.N.  Tibbo,  K.W.T. Besch and J.M. Anderson
    (1970)  Journal of the Fisheries Research Board of Canada, 27: 21-29.

11.  Sullivan,  J.H., H.D.  Putnam, M.A. Keirn, B.C.  Pruitt,  J.C. Nichols
    and J.T.  McClave(1979) A summary and evaluation of aquatic environ-
    mental  data in relation to establishing water quality criteria for
    munitions-unique compounds.  Part 3:  White phosphorus.  U.S. Army Med-
    ical Research and Development Command, ADA083625.

12.  Mellor, J.W.  (1928) A Comprehensive Treatise on Inorganic and Theo-
    retical Chemistry.  Longmans, Green and Co., London,  volume VIII.

13.  Walsh,  M.E.  and C.M.  Collins (1993)  Distribution of white phosphorus
    residues from the detonation of 81-mm mortar WP smoke rounds at an
    upland site. U.S.  Army Cold Regions Research and Engineering Labora-
    tory,  Hanover, New Hampshire, Special Report 93-18.

14.  Racine, C.H.,  M.E.  Walsh, C.M. Collins, S.T. Taylor,  B.D. Roebuck,
    L.  Reitsma and B.  Steele (1993)  Remedial investigation report for
    white phosphorus contamination in an Alaskan salt marsh. U.S. Army
    Cold Regions Research and Engineering Laboratory, Hanover, New Hamp-
    shire,  CRREL Report 93-17.

15.  Simmers,  J.W., R.A. Price and S.Stokke  (1994)  Assessment of white
    phosphorus storage in wetlands within the artillery impact area of
    Ft. McCoy.  In Proceedings of the 18th Annual Army Environmental
    Technology Symposium, 28-30 June. Williamsburg,  Virginia. U.S. Army
    Environmental Center.
16.  Zhang, Z., M.J. Yang and J. Pawliszyn (1994) Analytical Chemistry,
     66: 844A-853A.
                                         387

-------
 58
     EFFECTS OF BAROMETRIC PRESSURE ON THE ABSORPTION OF
                    PREPARED MERCURY STANDARDS

                                                                 S. Siler
                                                                 D. Martini
INTRODUCTION:

Typically mercury samples can be analyzed on any given day with little variation in
Quality Control Reference Standard (QCRS) recovery. However, we have noted
substantial variation when storm systems move through our geographical area. Though
the same standards used to define the morning calibration curve (before the
thunderstorms) were used after lunch, the peak heights varied substantially. Most
interesting was that a third, late afternoon curve, after storm systems had passed, showed
peak heights virtually identical to those generated in the morning.  On days when the
weather patterns are particularly complicated we have found it virtually impossible to
maintain standardization.  Mechanical variables such as tubing tension, aperture
blockage, intermittent valve malfunction, etc. were considered. We suspect that the
barometric pressure may actually be affecting the output of the instrument. Using
official barometric pressure readings provided by the National Weather Service and
comparing them to our recorded peak heights for prepared mercury standards, we have studied
the relationship and, though the evidence is not yet conclusive, we have noted some definite trends.

EXPERIMENTAL:

All analyses were performed at TALEM, Inc. (Texas Analytical Laboratories for
Environmental Monitoring) during the routine course of business. Standards were made
from reagent grade deionized water, ACS grade nitric and sulfuric acids, and either
SPEX EP-8 certified mercury standard (10 ug/ml) or PlasmaPure certified mercury
standard (1000 ug/ml).  Analyses were performed on a PSA 10.04 Automated Mercury
Analyzer using EPA 245.2 procedures. The analyzer is microprocessor controlled and
aspiration times were programmed and constant.

Standards were prepared each time analyses were to be performed and, after analysis,
stored in a refrigerator held at 4°C. Previously used standards were measured only for
experimental reasons.  The instrument blank was 2% nitric acid in deionized water. The
reducing agent was 2.5% stannous chloride in 5% hydrochloric acid. Barometric
pressures were obtained from the National Weather Service as recorded hourly at Dallas-
Fort Worth International Airport.  Peak height vs. barometric pressure was plotted and
classical linear regression  analysis was used to construct trends for mercury standards
prepared at 0.30, 0.50, 1.0, 2.5, and 5.0 ug/L concentrations.
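
The trend construction described above amounts to a classical least-squares fit of peak height
against barometric pressure for each standard concentration. The sketch below is not the
laboratory's own software; the file name and column names are illustrative assumptions, and it
simply shows one way such a fit could be computed.

import csv
import statistics

def linear_fit(x, y):
    """Return slope and intercept of a classical least-squares line y = a + b*x."""
    mean_x, mean_y = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

pressures, heights = [], []
with open("hg_peaks.csv") as f:          # hypothetical file: pressure_cm, peak_height_0_5ppb
    for row in csv.DictReader(f):
        pressures.append(float(row["pressure_cm"]))
        heights.append(float(row["peak_height_0_5ppb"]))

slope, intercept = linear_fit(pressures, heights)
print(f"peak height = {intercept:.2f} + {slope:.2f} * pressure (cm)")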
                                           388

-------
DISCUSSION:

All samples analyzed were used as calibration standards for regular analysis of unknown
samples and yielded correlation coefficients of not less than 0.995.
To observe the effect of barometric pressure on prepared standards, several weeks of data
were recorded. Table 1 shows the raw data. Graphs are included for each mercury
concentration, and even though there is significant scatter in the individual data points,
trend analysis shows a direct relationship between barometric pressure and peak
height response.

Although the curves are not included here, it was interesting to note a general decrease in the
slope of individual calibration curves as barometric pressure decreased. Peak height gain or loss
appears to be directly related to changing barometric pressure.

CONCLUSIONS:

The data seem to indicate that relatively small changes in barometric pressure can have a
profound effect on the peak heights produced by mercury standards.  With many of these
types of systems being automated, the magnitude of the effect can compromise the validity
of calibrations and require more frequent recalibration.  A possible explanation for the
change in response is a change in residence time in the cold vapor cell caused by flow rate
changes induced by fluctuations in barometric pressure, although this and other plausible
explanations have not been explored to this point.

ACKNOWLEDGMENTS:

This work was supported by TALEM, Inc. Fort Worth, Texas. We are especially
grateful for the generous professional help of Ted R. Skingel, Quality Assurance
Administrator, and Tyler Tull, V. P. Environmental Services. We also extend special
thanks to the staff at the National Weather Service in Fort Worth for opening records,
accessing files and generally being helpful and cooperative on our Saturday afternoon
raids of their data files.

REFERENCES:

Methods for Chemical  Analysis of Water and Wastes. Environmental Monitoring and
Support Laboratory, Office of Research and Development,  U.S. Environmental
Protection Agency,  1983.
                                            389

-------
TABLE 1

DATE       P (cm)   0.3 ppb Hg   0.5 ppb Hg   1.0 ppb Hg   2.5 ppb Hg   5.0 ppb Hg
2/2/95     74.60       1.68         3.09         5.60        20.74        42.05
2/4/95     74.92       1.88         3.30         7.49        22.82        43.40
2/7/95     75.32       2.47         3.24         6.30         7.26        42.32
2/7/95     75.40       2.08         2.89         5.83        20.96        41.86
2/9/95     74.36       1.73         2.08         6.96        16.94        38.63
2/9/95     74.89       2.83         3.63         8.41        27.13        55.48
2/11/95    74.71       2.58         3.90         6.90        18.47        45.22
2/13/95    74.98       1.89         3.32         6.24        18.92        42.45
2/14/95    74.68       1.76         3.13         6.95        22.80        51.08
3/3/95     75.25       2.68         4.20         5.93        17.56        39.21
3/8/95     75.84       2.13         3.10         8.27        20.58        45.49
3/9/95     75.45       2.93         5.21         9.54        26.86        58.54
3/10/95    75.16       2.17         4.07         8.11        23.19        52.70
3/10/95    75.53       1.68         4.66         8.93        24.82        56.99
3/13/95    74.27       2.92         4.27         8.25        23.30        49.20
3/14/95    74.41       2.45         3.09         5.60        20.74        42.96
3/17/95    74.97       1.79         4.26         7.13        21.60        46.70
3/22/95    73.88       2.20         3.41         7.61        24.19        52.33
3/30/95    74.70       2.29         3.95         7.37        22.40        42.70
3/30/95    75.00       2.69         4.71         8.81        15.42        53.83
4/4/95     74.64       1.18         1.91         4.47        13.44        29.30
4/11/95    74.47       2.45         3.58         7.54        22.00        45.09
4/12/95    75.02       2.13         3.10         6.00        17.08        31.84
4/20/95    73.71       1.77         3.15         6.79        18.74        38.86
4/20/95    73.85       1.90         3.44         6.77        19.19        39.57
4/20/95    73.88       1.99         3.67         7.23        20.60        42.94
4/26/95    74.18       2.95         5.16         8.44        22.30        53.09
5/1/95     74.40       2.23         2.40         7.43        19.40        42.32
5/2/95     74.64       2.09         3.56         7.30        18.81        42.10
5/4/95     74.74       2.39         2.86         7.29        19.05        47.88
5/5/95     74.63       0.71         1.85         5.97        18.55        37.61
5/8/95     73.90       1.65         3.45         6.25        18.43        37.37
      390

-------
[Graph: peak height vs. barometric pressure (73.50-76.00 cm) for the 0.3 ppb Hg standard]

-------
[Graph: peak height vs. barometric pressure (73.50-76.00 cm) for the 0.5 ppb Hg standard]

-------
[Graph: peak height vs. barometric pressure (73.50-76.00 cm) for the 1.0 ppb Hg standard]

-------
[Graph: peak height vs. barometric pressure (73.50-76.00 cm) for the 2.5 ppb Hg standard]

-------
[Graph: peak height vs. barometric pressure (73.50-76.00 cm) for the 5.0 ppb Hg standard]

-------
59


                   A Simple Silver Analysis


David C. Yeaw, Environmental Chemist, Environmental Sciences Section,
Corporate Health, Safety, and Environment, B-69 R-0420, Eastman Kodak
Company, Rochester, NY  14650-1818


Abstract

A simple, inexpensive colorimetric silver analysis has been developed that is capable
of measuring silver concentrations in varying solutions over a range of 0.2 to 20,000
mg/L. The technique rivals AA and ICP for accuracy and precision, but is easily
performed by inexperienced personnel in a few minutes using inexpensive
equipment.
With increasingly stringent governmental regulation of heavy metal discharges, it
has become more important for manufacturers and processors to be able to
monitor their silver usage, recovery operations and discharge levels.  To date, this
has been difficult at best for all except the largest facilities.  The current options
available are:

       Copper test strips. Under the right conditions, silver will plate out from
solution onto copper metal.  This very inexpensive "test" (it only costs a penny,
and you get to keep the penny) merely indicates the presence of silver without
measuring concentration. At least one purveyor of copper test strips claims to be
able to distinguish 5 mg/L with an extended dip time.  In general, this technology
is of very limited use.

       Silver Estimation Papers. These are (usually) cadmium sulfide impregnated
porous papers designed to be dipped into the solution to be tested. Any silver
present will form a brown silver sulfide stain that is compared to a color chart to
estimate the silver concentration. The drawbacks are: 1) a lack of sensitivity.
Intensity differences become quite  difficult to distinguish below 0.5 gm/L of silver,
and 2) the lack of specificity.  Other metal ions can react, creating a similar brown
stain, producing confounding results. The chelated iron compounds in a
photographic bleach or bleach-fixer may leave a brownish stain that could be
confused with a silver response.  In short, these indicator papers can be
misleading.

       Potentiometric titration.  Potentiometry measures the activity of silver ions
in solution by an ion-specific electrode (I.S.E.). As the silver is removed from
solution by titration with a standard titrant, the solution potential is monitored.
This technique requires equipment that can range from mildly to highly expensive.
It also requires the talents of an experienced operator familiar  with laboratory
                                            396

-------
techniques and interpretation of data. The titrants can be dangerous (generally
sulfides) and the range of detectability in many working solutions is limited. The
I.S.E. readings are affected by any species that reacts with silver, possibly
interfering with the accuracy.  This technology is most effective when applied as a
control of a process such as electrolytic silver recovery, where the actual readings
are not as important as the relative changes in potential.

      Atomic absorption (includes ICP). This technique is generally considered
the most accurate and precise analytical procedure.  It is the methodology
recommended by most regulatory agencies requiring compliance monitoring.
However, it involves extremely expensive instrumentation operated by a highly
skilled analyst. It also utilizes compressed gases for fuel.  Only the very largest
facilities are able to avail themselves of this technology.  Most often when analyses
are required, the generator turns to:

      The independent (reference) laboratory. Although this is the route
mandated by some agencies for the monitoring of compliance, results from these
operations are necessarily delayed by shipping and are never timely. Any
problems may not be caught for several  days. This is also an expensive
alternative, the costs ranging from $25-75 for a single analysis.

Considering the above options, there was a clear need for an analytical technique
to fill the gap between the expensive, difficult analytical technologies and the
simple, undependable estimations. Such a technique would have to be:
      1) simple. Most businesses can ill afford the expense of a full time
      laboratory analyst.
      2) inexpensive.  There is seldom much in the budget to purchase
      equipment that doesn't directly produce profit.
       3) accurate.  If decisions affecting process controls, recovery operations,
      and discharge parameters are to be based on the results of a silver
      analysis, the analysis had better be accurate.
      4) compact.  Most businesses have set aside little or no space for non
      revenue producing activities.
      5) sensitive.  The technique must  be  usable to analyze solutions containing
several gm/L of silver, yet at the same time be able to accurately measure to less
than 1 mg/L in wash waters and effluents.

The above criteria seemed to be best met using colorimetry, which is the
measurement of the intensity of a uniquely colored compound in solution, which
would be formed by the reaction of the silver with a reagent  compound.

There are many chemical compounds which will react with silver to form new
compounds, but most either demonstrate no visible change, or result in a
precipitate that precludes their use as colorimetric reagents.  The proprietary
compound used as the silver sensing reagent in the silver test is a thiol-type metal
ion complexing agent that is soluble and active at pH's of 12  or higher.  In high
pH aqueous solution, this thiol combines on a one-to-one basis with silver ions to
form a reddish-purple compound that demonstrates a λmax at 545 nm. Under the
conditions of the test, the thiol will not only react with free silver ions, but also
                                           397

-------
with those tied up in complexes such as with thiosulfate and thiocyanate. This
product is indeed insoluble, but initially it is so finely dispersed as to appear and
measure as a solution.

In actuality, this thiol forms unique colored complexes with many metals, each
complex having its own distinctive λmax.  Although the technology described
herein was developed for the measurement of silver in photoprocessing solutions, it
could apply to any one of several metal ion concentrations in other venues.  In
fact, several could be determined simultaneously with readings at various points,
or by scanning the visible wavelengths.

The orange color of the reagent has a λmax at 460 nm, but the curve is sufficiently
wide to overlap with readings in the mid-500 nm range. For this reason, the
absorbance of the reagent alone is measured and subtracted from the reading of
the silver complex formed (to effect this, the colorimeter is actually zeroed on the
reagent prior to its use).  The molar absorptivity of the silver complex calculated
from measurements made at 560 nm was 6 x 10^4.  This indicated sufficient
sensitivity to measure silver in the concentration range of interest (0.2 to 20,000
mg/L).
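
As a rough illustration of the measurement principle (this is not Kodak's procedure, and the
constants are assumptions made for the sketch), the blank-corrected absorbance could be
converted to a silver concentration with the Beer-Lambert law:

AG_ATOMIC_WEIGHT = 107.87        # g/mol
MOLAR_ABSORPTIVITY = 6.0e4       # L mol^-1 cm^-1 (order of magnitude assumed)
PATH_LENGTH_CM = 1.0             # assumed cell path length

def silver_mg_per_l(absorbance, dilution_factor=1.0):
    """Estimate silver (mg/L) in the original sample from blank-corrected absorbance."""
    molarity = absorbance / (MOLAR_ABSORPTIVITY * PATH_LENGTH_CM)   # mol/L in the cell
    return molarity * AG_ATOMIC_WEIGHT * 1000.0 * dilution_factor   # scale back to mg/L

# Example: absorbance 0.25 read on a hypothetical 1:100 diluted fixer sample
print(f"{silver_mg_per_l(0.25, dilution_factor=100):.1f} mg/L Ag")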

A reagent mixture was devised containing the thiol complexing agent, a compound
to maintain the pH, a compound to complex iron in order to prevent the
formation of ferric hydroxide at the working pH, an antioxidant to protect the
thiol, and a dispersant to keep the silver complex in fine suspension for
measurement.

Calibration curves generated from standard silver concentrations combined
in varying matrices of photoprocessing solutions demonstrated that, under the
conditions recommended for testing, no matrix effects were evident.  It made
essentially no difference whether fixer, bleach-fix, or silver nitrate solution was
being measured; like silver concentrations yielded like absorbances.

Over the past two years, several thousand samples have been analyzed both by
this technique and by ICP or AA. These samples have been of varying
composition, containing significant concentrations of thiosulfate, thiocyanate,
metal ion chelators such as EDTA, ferrocyanide, halides and many other
compounds typically found in photoprocessing effluents. The silver levels in these
samples ranged from less than 1 mg/L to nearly 20 gm/L.  The correlation
coefficients (r²) calculated from these data as compared to the reference methods
mandated by regulatory agencies were consistently greater than 0.98. A typical
correlation study is shown in Figure 1. The samples used were taken from a
photoprocessing operation and include samples of fixers, EDTA bleach-fixers,
ferrocyanide-containing fixers, and wash waters.  Silver levels ranged from 2
mg/L to over 10 gm/L.

The chemistry of this analysis has also been applied to a continuously sampling
analyzer monitoring the output from an ion exchange silver recovery system.  The
hardware utilized segmented flow technology, measuring in a flow-through cell
mounted in a colorimeter. The colorimeter converted the colorimetric intensities
                                           398

-------
to a millivolt output which was monitored by a process controller.  The controller
used the signal to alert of silver breakthrough and initiate the resin regeneration
cycle. A characterization of this system is shown in Figure 2. This type of
monitor could also be used to control the addition of a silver precipitating agent
prior to the treatment of a fixer for reuse. It would assure that there would be no
excess precipitant in the recycled solution.
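
The breakthrough-control step described above is essentially a threshold comparison on the
colorimeter's millivolt output. The following sketch is hypothetical (the threshold value and the
I/O functions are assumptions, not part of the Kodak system), but it illustrates the logic.

import time

BREAKTHROUGH_MV = 250.0          # assumed alarm level, mV (hypothetical)

def monitor(read_millivolts, start_regeneration, alarm):
    """Poll the colorimeter signal and react to silver breakthrough."""
    regenerating = False
    while True:
        mv = read_millivolts()                      # colorimeter output, mV
        if mv >= BREAKTHROUGH_MV and not regenerating:
            alarm("silver breakthrough detected")   # alert the operator
            start_regeneration()                    # start the ion-exchange regeneration cycle
            regenerating = True
        elif mv < BREAKTHROUGH_MV:
            regenerating = False                    # signal back below threshold; re-arm
        time.sleep(60)                              # poll once a minute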

The greatest need in the industry, however, is for a low-cost, discrete analysis.
To this end, Kodak has produced a silver test kit which contains all the hardware
necessary to perform the silver analyses.  Included are the colorimeter and three
sampling pipettors that cover the entire range of the sensitivity, along with a
dispenser for the liquid portion of the reagents. Reagents are supplied separately
in multiples of 100 tests.  The reagents are in a single-test format, which includes a
sealed packet, the contents of which are dissolved into an aliquot of a provided
liquid reagent in preparation for measurement. To this reagent mixture, the
proper measured sample is added and mixed, and the absorbance of the resultant
solution is measured in a colorimeter.  The entire process takes less than two
minutes.

Full strength fixes and bleach-fixes may be measured as low as 20 mg/L of silver.
The measurement of lower levels in these media would necessitate a larger sample
size than that recommended. At this point, the thiosulfate competes far more
favorably for the silver, so the sensitivity falls off drastically. Because the
thiosulfate in wash waters is diluted 50-100X, the measurement of silver in these
solutions is possible to less than 0.5 mg/L.

Given the versatility, accuracy, and sensitivity of this technique, it is possible to
monitor the silver throughout the process, allowing the operator to optimize
replenishment of process chemicals and washes, to control and verify the recovery
process, and monitor the discharge  for compliance.

I would like to acknowledge Andrew Hoffmann and Dr. Richard Horn of the
Eastman Kodak Company for their contributions to the optimization of this
technique, the generation of thousands of results, and the coordination of the
correlation studies. Their efforts have been invaluable.
                                           399

-------
  Figure 1.  [Log-log plot: silver (Gm/L) determined by the colorimetric method vs. silver (Gm/L)
             determined by the atomic absorption method, 0.001 to 100 Gm/L, with the data shown
             against the line of equality]

  Figure 2.  SILVER RECOVERY PROCESS CONTROL SYSTEM [schematic diagram]
                                  400

-------
                                                                             60
  Capillary Ion Electrophoresis, An Effective Technique for Analyzing
     Inorganic and Small Organic Ions in Environmental Matrices

Joseph P. Romano, James A. Krol, Stuart A. Oehrle and  Gary J. Fallick
                         Waters Corporation
                           34 Maple Street
                         Milford, MA 01757
Capillary Ion Electrophoresis is a mode of capillary electrophoresis which is
optimized for the rapid analysis of inorganic anions, cations, low molecular
weight organic acids and amines. This use is also termed Capillary Ion
Analysis (CIA). It is characterized by high speed, high resolution
separations which are achieved by applying an electric field to a sample
contained in a capillary filled with an electrolyte. Since no chromatography
column is involved, complex samples can often be analyzed without the
extensive  sample preparation commonly needed prior to ion chromatography
or other modes of HPLC.

The mechanism of separation is different from ion chromatography, making
it possible to easily analyze anions and organic acids simultaneously.
Similarly, cations and organic amines can be monitored in a single run
(Figure 1).  As shown in Figure 2, the instrumentation for performing CIA is
very simple. Instead of a chromatography pump, the separation is driven by
a power supply. A portion of the capillary in which the separation takes
place forms the detector cell. Direct UV/vis detection is used as well as
indirect detection for ions which do not absorb UV.

Figure 3 illustrates the rapid, high resolution separations which are provided
by CIA. It also demonstrates significant chloride speciation. Total analysis
times are typically 4 to 6 minutes. As indicated by the response shown for
the low ppm concentrations, detection limits are  typically in the low to mid
ppb levels for the common anions using a simple hydrostatic injection mode.
There is also a method for combined concentration and injection of ultra
pure samples which extends the detection limit into the low ppb/high ppt
range, but this is generally not used for environmental samples.
                                        401

-------
Capillary Ion Analysis is an effective complement to ion chromatography
and has been shown to produce results comparable to both single-column
and chemically suppressed modes of IC (Figures 5, 6 and 7).  Typical sample
preparation, such as for the waste water analysis shown in Figure 8, is often
confined to filtration and dilution. Since the waste water was diluted 1:10,
the very small fluoride peak actually represents a concentration of 7 ppb.  With CIA there
are no early-eluting water dip, carbonate or cation peaks to complicate the
analysis or quantitation.

A recent study compared the values obtained by a commercial testing
laboratory using official wet chemical methods to those produced by CIA
and ion chromatography. Samples included drinking water, process and
waste water as well as landfill leachate. The results of one such comparison
are shown in Figure 9. Overall agreement among the techniques was
considered to be excellent.  The wet chemical Nitrate-Nitrite results were
composite values provided  by the cadmium reduction method. Individual
values for each ion were obtained by CIA and IC and then summed for
comparison purposes.
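
For illustration, the comparison step just described (summing the individual CIA or IC
nitrate and nitrite values against the wet-chemical composite, and forming per-analyte ratios)
could be computed as in the sketch below; the concentrations shown are placeholders, not the
study's data.

cia = {"nitrate": 3.1, "nitrite": 0.4, "chloride": 36.5}     # mg/L, hypothetical CIA results
ic  = {"nitrate": 3.2, "nitrite": 0.4, "chloride": 37.6}     # mg/L, hypothetical IC results
wet_chem_nitrate_nitrite = 3.5                               # composite by cadmium reduction, hypothetical

cia_sum = cia["nitrate"] + cia["nitrite"]                    # sum individual ions for comparison
ic_sum = ic["nitrate"] + ic["nitrite"]
print(f"nitrate+nitrite:  CIA {cia_sum:.2f}  IC {ic_sum:.2f}  wet chem {wet_chem_nitrate_nitrite:.2f}")

for analyte in sorted(set(cia) & set(ic)):                   # per-analyte agreement
    print(f"{analyte:10s}  CIA/IC ratio = {cia[analyte] / ic[analyte]:.3f}")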

Other studies have demonstrated the utility of this technique for analyzing
cations [1] and nerve agent degradation products [2] in environmental samples.
The characteristics of CIA listed in Figure 10 have already resulted in
investigation of its use for additional environmental problems ranging from
characterizing nuclear waste sites to monitoring acid rain. The technique is
rugged and especially well suited for use by the environmental analyst.

References

1. Oehrle,  S.A., Blanchard, R.D., Stumpf, C.L., and Wulfeck, D.L., Sixth
International Symposium on High Performance Capillary Electrophoresis,
January 31 - February 3, 1994, San Diego, CA

2. Oehrle,  S.A., and Bossle, P.C., Journal of Chromatography A, 692 (1995)
247-252
                                       402

-------
                                  Figure 1
                         Capillary Ion Analysis - Definition
     Capillary Ion Analysis is a form of Free Zone Capillary Electrophoresis (CE),
     introduced in 1990 by Waters, for the analysis of inorganic anions & organic
     acids and inorganic cations & organic amines.
     CE has different separation and detection chemistry and physics compared to
     liquid / ion chromatography.

                                  Figure 2
                   Capillary Ion Analysis System Configuration
     [Instrument diagram; labeled components include the sample carousel & working
     electrolytes]

                                  Figure 3
                          Powerful Anion Separations
     [Electropherogram, time scale in minutes. Peaks (concentrations as listed in the
     original): 1 Bromide 4, 2 Chloride 2, 3 Iodide 4, 4 Sulfate 4, 5 Nitrite 4,
     6 Nitrate 4, 7 Chlorate 4, 8 Perchlorate 4, 9 Fluoride 1, 10 Phosphate 4,
     11 Chlorite 4, 12 Carbonate 4, 13 Acetate 5, 14 Monochloroacetate 5,
     15 Dichloroacetate 5]

                                  Figure 4
                Capillary Ion Analysis - Advantages vs IC/LC
      + Small scale of operation
         > Minimal sample & electrolyte used
         > Minimal waste generated (<100 mL/day)
         > Fast analyses (typically 6 minutes)
      + No column involved
         > Minimal sample prep needed
         > Fast methods turnaround
         > Fast run-to-run times
         > Increased cost effectiveness

-------
                                  Figure 5
                       Elution Order of Major Analytes
     IC (borate/gluconate), run time about 15 min:
        1. Water dip, 2. Cations, 3. Fluoride, 4. Carbonate, 5. Chloride, 6. Nitrite,
        7. Bromide, 8. Nitrate, 9. Phosphate, 10. Sulfate
     CIA, run time 5-7 min (corresponding IC elution position in parentheses):
        Bromide (7), Chloride (5), Sulfate (10), Nitrite (6), Nitrate (8), Fluoride (3),
        Phosphate (9), Carbonate (4), Water peak (1); no cation peaks

                                  Figure 6
                     Why Use Capillary Ion Analysis?
      + Rapidly analyze: inorganic anions, organic acids, alkali & alkaline earth
        cations, alkanolamines
      + Results equivalent to ion chromatography
      + Different separation selectivity than ion chromatography (confirm peak identity)

                                  Figure 7
            CIA Anion Analysis - Comparison to Ion Chromatography

     Sample                   Anion         IC (ppm)         CIA (ppm)
     Tap water                Chloride        20.22            20.04
                              Sulfate         14.77            14.[?]
                              Nitrate          2.38             2.23
                              Fluoride      Not Detected        0.06
     Well water               Chloride        37.65            36.46
                              Sulfate         11.95            11.42
                              Nitrate          3.17             3.18
     Industrial wastewater    Chloride        83.18            93.03
                              Sulfate         23.89            23.07
                              Fluoride      Not Detected        0.13
     Power plant wastewater   Chloride       191.83           189.77
                              Sulfate         75.88            76.7[?]

     (The CIA/IC ratio column of the original table is largely illegible.)
     CIA Anion Analysis results using the proposed ASTM Method are equivalent to
     Ion Chromatography results.

                                  Figure 8
                CIA Anion Analysis, Chromate Electrolyte - N601b
     Conditions: 4 mM chromate / 0.3 mM OFM-OH, pH 8 with H3BO3; -15 kV at 12 uA;
     indirect UV detection at 254 nm; 30 s hydrostatic sampling.
     [Electropherograms]
     Waste water diluted 1:10:  Cl = 16.76 ppm, SO4 = 86.62 ppm, NO3 = 10.56 ppm,
                                F = 0.07 ppm, PO4 = 2.01 ppm
     Drinking water standard:   Cl = 30 ppm, SO4 = 10 ppm, NO3 = 2 ppm, NO2 = 3 ppm,
                                F = 1 ppm, PO4 = 5 ppm

-------
                                  Figure 9
                 Anions in Landfill Leachate - Method Comparison
     Sample # 60730-2; concentrations in mg/L.
     Analytes: Chloride, Nitrate-Nitrite, Fluoride, Sulfate, ortho-Phosphate; results
     compared among wet chemical*, CIA, and IC methods.  [Table values illegible in
     the original.]
     *By EPA methods: Chloride 325.3, Nitrate-Nitrite 353.2, Sulfate 375.4,
      Fluoride 340.2, ortho-Phosphate 365.2

                                  Figure 10
                     Why Use Capillary Ion Analysis?
      + Easy to change among applications
      + Simple hardware and operation
         • Low operating costs
         • Minimal waste, less than 100 mL per day
         • No column to foul or void
         • Fast sample turnaround
         • Streamlined sample preparation
         • Amenable to multi-user operation
      + Technology for the 90's and beyond

-------
 61

 The Determination of Adamsite, a Non-Phosphorus Chemical Warfare Agent,
 in Soil Using Reversed-Phase High Performance Liquid Chromatography
Heather  King, Mike Christopherson and Greg Jungclaus
Midwest Research Institute, Kansas City, MO 64110
ABSTRACT

An analytical method was developed to determine low-level concentrations of Adamsite in soil and
sediment matrices.  Air-dried soil samples are extracted with methanol in an ultrasonic bath. A
portion of the extract is diluted with aqueous CaCl2, filtered, and analyzed by high-performance
liquid chromatography.  The procedure provides linearity over the range of 0.2 to 15 µg/g. The
method detection limit study yielded a detection limit of 0.1 µg/g. Matrix spike recoveries were
greater than 90% for all tests conducted.
INTRODUCTION

       Chemical warfare agents have been in existence since World War I and before.  These agents
have been stored at chemical arsenals on military installations, in both small and stockpiled
quantities, depending on their use. Because chemical warfare agents are extremely hazardous
materials, their cleanup and disposal present significant problems.

       Chemical warfare agents are generally classified as organophosphorus (nerve agents) and
non-phosphorus  containing compounds.  These  compounds can be further divided by  their
physiological effects (nerve agents, sensory irritants, psychotoxics, vesicants, etc.).  This paper will
deal specifically with a non-phosphorus-containing compound called adamsite, 10-chloro-5,10-
dihydrophenarsazine, and its hydrolysis product, 10,10'-oxybis-(5,10-dihydrophenarsazine).  Figure
1 presents the chemical structures for both compounds and Table 1  describes the physical properties
of each.

       Adamsite was developed in 1919 by the British army.  It belongs to the riot control family
of agents.  Its  physiological effects include vomiting, difficulty in breathing, and death in large
doses.  With more effective  incapacitating agents available, adamsite did not see wartime use due
to its low toxicity. Its use in controlling civilian riots was considered too harsh; therefore, its use was
limited. Adamsite was used commercially for some years as a pesticide to treat wood used for water
vessels. Its toxic effects and by-products  (arsine-based) led to its  ban in the 1930's [1].

       In recent years, with an increasing awareness in potential health and environmental concerns
from long-term storage of chemical warfare agents, a need has developed to determine low-level
concentrations  of agents in various matrices.  As a result, MRI has developed a solvent extraction
method followed by HPLC  analysis with UV detection.
                                             406

-------
 [Structural diagram: adamsite (DM) reacts with H2O to form its hydrolysis product, in which two
 5,10-dihydrophenarsazine units are bridged by oxygen (H-N ... As-O-As ... N-H).]

  Figure 1. Structure of Adamsite and the Formation of the Adamsite Hydrolysis Product

 Abbreviation/Name       Chemical Name                    Formula        CAS #       M.W.   M.P. (°C)  B.P. (°C)
 DM (Adamsite)           10-chloro-5,10-dihydrophen-      C12H9AsClN     578-94-9    277.6    195        410
                         arsazine
 Hydrolysis Product      10,10'-oxybis-(5,10-dihydro-     C24H18As2N2O   4095-45-8   500.3    350        --
 of Adamsite             phenarsazine)

  Table 1. Physical Properties of Adamsite and its Hydrolysis Product
EXPERIMENTAL

       An analytical standard for adamsite was obtained from the U.S. Army Toxic and Hazardous
Materials Agency, Aberdeen Proving Ground, MD.  Individual stock standard solutions were
prepared in both acetonitrile and methanol. Methanol was used in the preparation of eluant.

       Standard soil used for  method development was obtained from USATHAMA SARM
Repository Soil. Field-contaminated soils were obtained from a military site.

       Analytical separations were obtained on a modular system composed of a Dionex Gradient
pump, Dionex Variable Wavelength detector, Spectra-Physics SP8880 autosampler equipped with
a Rheodyne Model 9010 injector, and a Turbochrom acquisition system. Sample concentrations
were determined by UV response (peak height) and calculated by the internal standard technique
relative to the standard data.
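
       For illustration, internal-standard quantitation by peak height can be sketched as follows;
this is not MRI's acquisition software, and the calibration constants and amounts are placeholders.

CAL_INTERCEPT = 0.0
CAL_SLOPE = 0.30          # response ratio per unit amount ratio (placeholder value)

def amount_from_peaks(analyte_height, istd_height, istd_amount_ug):
    """Return analyte amount (ug) from peak heights via the internal standard calibration."""
    response_ratio = analyte_height / istd_height
    amount_ratio = (response_ratio - CAL_INTERCEPT) / CAL_SLOPE
    return amount_ratio * istd_amount_ug

# Example: analyte peak 1500 counts, ISTD peak 5000 counts, 10 ug of ISTD added
print(f"{amount_from_peaks(1500, 5000, 10.0):.2f} ug adamsite hydrolysis product")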
                                          407

-------
 RESULTS AND DISCUSSION

       Samples were initially extracted with  both acetonitrile and methanol.  Methanol was
 preferred to acetonitrile, in part due to better solubility of adamsite in methanol and also because of
 better recoveries for spiked samples. Samples were also extracted using a sonic cell disrupter and
 an ultrasonic bath. No significant differences in recoveries were observed; therefore, the ultrasonic
 bath is preferred because it allows more samples to be extracted simultaneously. Additional studies
 were performed to examine various extraction times, ranging from 1 to 24 hours. An
 extraction time of 1 hour yielded recoveries in excess of 90% and allows for processing of an
 extraction batch in one day. It should be noted that no significant differences were seen in the longer
 extraction times. A summary of the final extraction procedure follows.

       An air-dried sample is extracted with methanol in an ultrasonic bath. A portion of the extract
 is diluted with aqueous calcium chloride [2], filtered, and analyzed by Reverse-Phase HPLC.

       Analysis parameters evaluated included wavelength and eluant concentrations. The two
 wavelengths evaluated were 229 nm and 254 nm, based on absorption maxima and molar
 absorptivities [3]. The 229 nm wavelength was chosen based on the increased mV response of both the
 hydrolysis product and internal standard.  Several eluant concentrations were evaluated under
 isocratic conditions. The objective was to find a wavelength/eluant combination to produce baseline
 resolution of the hydrolysis product of adamsite in a reasonable amount of time. This objective was
 accomplished with a 25 cm x 4.6 mm (5 µm) C-18 column eluted with 70/30 v/v methanol/water
 (Figure 2).  Retention times and capacity factors for the hydrolysis product and internal standard are
 shown in Table 2. It should be noted that it is the hydrolysis product of adamsite that is seen in the
 chromatography. As described by Kuronen (1990), adamsite is rapidly and completely converted
 to its hydrolysis product when in contact with steel (i.e. chromatography tubing, steel frits, column,
 etc.).
Figure 2.  Chromatogram of Adamsite Hydrolysis Product on C-18 column, eluted with
         Methanol/Water at 1.3 mL/min.
                                             408

-------
Analyte                    Retention Time (min)    Capacity Factor
DM Hydrolysis Product             6.03                   1.97
Internal Standard                 4.09                    --
Table 2. Retention Time and Capacity Factors for Adamsite Hydrolysis Product
       Using peak height and internal standard calibration, a linear curve was produced to cover the
range of 0.2 to 15 µg/g.  The correlation coefficient was 0.999 or greater with a %RSD of 15% or
less (Figure 3).

       Curve parameters: Curve S1, first order fit, weighting factor 1.0 (no weighting),
       r² = 0.999929; calibration curve: response ratio = 0.000582 + 0.297529 x (ISTD amount ratio).

Figure 3.  Internal Standard Calibration Curve for the Adamsite Hydrolysis Product
           [plot of ADAMSITE response ratio vs. ISTD amount ratio]


       The method detection limit study was performed according to USAEC protocol. The spiking
concentration used in the MDL study was 1.5X the lowest standard of the calibration curve.  Seven
replicates of SARM Repository soil were spiked, extracted, and analyzed. MDLs were determined
by calculating the standard deviation of the seven replicates and multiplying the result by the t-value
at the 99% confidence level. The MDL obtained (0.11 µg/g) was less than half the DL (0.26 µg/g)
based on instrument response.  The actual percent recovery values ranged from 87% to 122%. These
values were derived from an internal standard calibration method using a linear regression equation
with zero intercept for the spiked concentrations versus the found concentrations. The actual found
concentrations ranged from 0.27 µg/g to 0.38 µg/g. The results are presented in Table 3.
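
       As an illustration of the calculation just described, the sketch below multiplies the standard
deviation of seven replicate results by the one-sided Student's t-value at the 99% confidence level
for n-1 = 6 degrees of freedom (3.143); the replicate values shown are placeholders, not the study's data.

import statistics

T_99_6DF = 3.143                                   # one-sided 99% t-value, 6 degrees of freedom

def mdl(replicate_results):
    """Method detection limit from replicate spiked-sample results (same units as input)."""
    return statistics.stdev(replicate_results) * T_99_6DF

replicates_ug_per_g = [0.27, 0.31, 0.29, 0.33, 0.38, 0.30, 0.28]   # hypothetical replicate results
print(f"MDL = {mdl(replicates_ug_per_g):.2f} ug/g")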
                                              409

-------
Analyte                  MDL (µg/g)      Reporting Limit (µg/g)      Average % Recovery
DM Hydrolysis Product       0.12                 0.26                        101

Table 3. MDL and RL Results
ACKNOWLEDGEMENTS

       The authors would like to thank Dennis Hooton for suggestions during method development
and for editorial review of the manuscript.
REFERENCES

[1]    Compton, James. Military Chemical and Biological Agents.  New Jersey. Telford Press.
      1987.

[2]    Jenkins, T. F., Schumacher, P. W., Walsh, M. E., Bauer, C. F. (1988) USA Cold Regions
      Research and Engineering Laboratory Special Report 88-8, Hanover, NH

[3]    Kuronen, P. (1990) Development of a Retention Index Monitoring Method for Reversed-
      Phase High-Performance Liquid Chromatography of Non-Phosphorus Chemical Warfare
      Agents, Helsinki, Finland.
                                           410

-------
                                                                         62
   MICROWAVE CLOSED VESSEL SAMPLE PREPARATION

                           of

PAINT CHIPS, SOIL, DUST WIPES, BABY WIPES, & AIR SAMPLING FILTERS

                           for

                ANALYSIS of LEAD by ICAP
             Sara Littau - Senior Application Chemist
         Robert Revesz - Applications Laboratory Manager

                      CEM Corporation
                       P.O. Box 200
                  Matthews, NC 28106-0200
                                     411

-------
                           Innovators in Microwave Technology

                     3100 Smith Farm Road, P.O. Box 200, Matthews, NC 28106-0200 USA
                         Phone (800) 726-3331 or (704) 821-7015 • FAX (704)821-7894
Slide No.                  Description

1                           Title Slide - Microwave Closed Vessel Sample Preparation of Paint
                            Chips, Soil, Dust Wipes, and Air Sampling Filters for Analysis of Lead by
                            ICAP.

2                           Introduction - Exposure to lead in the environment has an adverse effect
                             on our health even at low levels. It can cause

                                           Central nervous system impairment
                                           Mental retardation
                                           Behavioral disorders.

                            Domestic sources of lead exposure are primarily paint, dust, and
                            secondarily food, water, and airborne dust.

                            Industrial sources of lead exposure are abrasive blasting, acid and alkali
                            cleaning of metals, forging, molding, welding, and painting.

                            For this work we will investigate the contribution from the primary
                            domestic sources to lead exposure. The samples selected are reference
                            samples from the American Industrial Hygiene Association.  We will
                            review the

                                   1.  Microwave Sample Preparation Instrumentation
                                   2.  Heating (Digestion) Programs
                                   3.  Conditions used for Lead Analysis  by ICAP
                                   4.  Lead Recoveries vs the Certified Value.

3                           Thermo Jarrell Ash  ICAP 61E Trace Analyzer - All elemental
                            analyses were performed with the TJA - ICAP 61E Trace Analyzer.

4                           MDS-2000 Microwave Sample Preparation System  - All  samples were
                            prepared (digested) using a CEM Corporation Model MDS-2000 with
                            temperature and closed vessels to allow elevated temperatures and
                            pressures to accelerate the digestion step.

5                           Advanced Composite Vessel - All samples, except the baby wipes, were
                            prepared using this vessel design. The vessels have an operating
                             pressure and temperature of 200 psig and 200°C. This slide provides
                            an exploded view of the vessel.

6                           PFA Digestion Vessel - The baby wipe samples were prepared using
                            this vessel design.  The vessels have an operating pressure and
                             temperature of 200 psig and 200°C. It was used for the 3.0 gram baby
                            wipe samples due to the automatic venting / resealing capabilities.
                                           412

-------
Slide No.                  Description

7                           Microwave Closed Vessel Heating Conditions for the
                            Digestion of Lead in Paint Chips - AIHA (American Industrial Hygiene
                            Association) Reference Material ELPAT (Environmental Lead Proficiency
                            Analytical Testing) Round 009 Paint Chips. As a precaution with the first
                            digestion of unknown samples, all paint and soil samples were allowed
                            to predigest for 10 minutes prior to sealing the vessels.  The digestion
                            was performed as shown. For this sample type, the vessel pressure
                            increased to 45 psig.  Twelve samples were simultaneously digested. At
                            the end of the digestion, the samples were filtered through Whatman #40
                            filter paper.

8                           Lead Recovery from Paint Chips - ELPAT 009 samples.  Four different
                            lead levels. All recovery data within acceptance limits. Low RSD.

9                           Lead Recovery from "Real World" Paint Chips - Samples A and B were
                            paint chips removed from walls and woodwork in homes and spiked with
                            ELPAT paint chips.

                            Calculated lead was based on the average lead values determined for
                            samples A and B plus the ELPAT paint chip spike. At the end of the
                            digestion, vessel pressure was 74 psig.

                            Vessel pressure at the end of a digestion is dependent on the
                            temperature, the paint chip sample composition, and any other material
                            such as wood or plaster adhering to the paint chips that can be oxidized
                            to produce carbon dioxide.

10                          Microwave Closed Vessel Heating Conditions for the Digestion of Lead in
                            Soil - Twelve simultaneous digestions of ELPAT Round 009 soils.
                            Samples were predigested for 10 minutes prior to sealing the  vessels.
                            Performed digestion as shown. Vessel pressure increased to 46 psig.

                            The pressure in the vessels after digestion will be dependent on the
                            temperature of the acid and the amount of carbonate or organic  material
                            present in the soil sample.

11                          Lead Recovery From Soil - AIHA samples, ELPAT Round 009.

                             All recoveries are within the certificate acceptance limits and
                             RSDs (a recovery-check sketch follows these slide descriptions).

12                          Microwave Closed Vessel Heating Conditions for the Digestion of Dust
                            Wipes - A dust wipe, ELPAT Sample, is a 9cm round filter paper folded
                            and spiked with  dust containing lead.  Since the sample weight is
                            approximately 0.8 gram of organic material, a ramped temperature and
                            pressure digestion program was used to avoid pressure overruns.
                            Maximum operating pressure is 200 psig for ACV vessels.

                            A total of 12 samples were simultaneously prepared.
                                                 413

-------
Slide No.                   Description

13                           Temperature and Pressure Curves for Digestion of Dust Wipes (These
                             are for the T & P curves).

                              In Stage 1 the temperature increased to 122°C before dropping off as the
                              pressure was held constant.

                              In Stage 2 the temperature increased to 133°C before dropping off as
                              the pressure was held constant.

                              In Stage 3 the temperature increased to 135°C before dropping off as the
                              pressure was held constant.

                              In Stage 4 the temperature increased to 140°C before dropping off as the
                              pressure was held constant.

                              In Stage 5 the temperature increased to 154°C.  At the end of the fifth
                              stage, the temperature had dropped to 136°C.

                              Temperature in the last 4 stages of the program was always greater than
                              120°C, which is the atmospheric boiling temperature of nitric acid.

                             A small amount of filter paper residue remained. Samples were filtered
                             through Whatman filter paper #40.

    14                       Lead Recovery From Dust Wipes - This was the first lead recovery data
                             in this study using a ramped temperature and pressure heating program.

                             The recoveries were all within the certified acceptance limits.

    15                       Microwave Closed Vessel Heating Conditions for the Digestion of Lead
                             Spiked Air Filters - The cellulose ester filters were spiked with a known
                             concentration of lead.

                             This is the heating program for the simultaneous digestion of
                              12 filters. Temperature was controlled at 160°C for 5 minutes.

                             The vessel pressure at the end of this digestion was 35 psig.  This
                             relatively low pressure  results from the inorganic matrix of the spike
                             material and from the low organic filter weight of <0.1  g.

     16                       Lead Recovery From Spiked Mixed Esters of Cellulose Filters - Average
                              recoveries were 88 to 93%. It is interesting to note that as the lead
                              concentration increases, the recovery decreases.

     17                       Microwave Open Vessel Heating Conditions for the Partial Digestion of
                             Lead Spiked Baby Wipes - Since there has been no regulation specifying
                             the type of wipe used to sample surfaces for lead contamination, we used
                                                  414

-------
Slide No.                   Description

                             a 3g wipe (Wash-a-Bye Baby brand) to minimize wipe weight. The
                             majority of the baby wipe weight is organic material. Since the sample
                             size is greater than the recommended sample size for closed vessel
                             digestion of organic materials, we were required to do some open vessel
                             digestion to oxidize some of the organic material prior to closed vessel
                             digestion.

                             The safety relief disks were placed on the 120 ml PFA vessels during
                              the open vessel heating. The disks permitted some acid refluxing and
                              also reduced the possibility of contamination during the open vessel
                              digestion.

                             A five stage program using power control was used to control the rate of
                             heating and eliminate spattering. Approximately 10 -12 ml of the acid
                             and wipe mixture remained in the vessel after completion of the open
                             vessel heating  program. The vessels were cooled and sealed. A
                             total of 12 samples were simultaneously digested.

     18                       Microwave Closed Vessel Heating Conditions for the Digestion of Lead
                              Spiked Baby Wipes - Samples were then digested using the pressure
                              ramping program shown. Some residue remained in the vessel after
                             completion of the digestion. All samples were filtered.

     19                       Temperature and Pressure Curves for Digestion of Lead Spiked Baby
                             Wipes - These are the temperature and pressure curves for the pressure
                             ramped digestion.

     20                       Lead Recovery From Spiked Baby Wipes - ELPAT Round 009 soil and
                             paint chips were used for the spike material.  All recoveries were above
                             90%. The highest spike concentration did  produce the lowest recovery.
                             All samples were prepared and analyzed in triplicate.  Lead recovered
                             for all spiked sample types.

     21                       Conclusion -

                             *Rapid sample preparation for lead analysis.

                                    Paint Chips            15 minutes
                                    Soil                   15 minutes
                                    Dust Wipes            20 minutes
                                    Filters                 11  minutes

                              *Unattended sample digestion.
                             *No special vessel cleaning required.
                             *Excellent lead recoveries.
                                    ELPAT Round 009 Reference Material
                                    Real world samples
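
The recovery checks reported in slides 8, 11, 14, 16, and 20 reduce to averaging triplicate results,
computing a relative standard deviation, and comparing the mean with the ELPAT certificate
acceptance limits. The sketch below is illustrative only (not CEM's worksheet); the triplicate
values are placeholders, and the limits shown are those quoted for soil sample 1 in slide 11.

import statistics

def summarize(results, limits):
    """Return (mean, %RSD, within_limits) for replicate lead results."""
    mean = statistics.fmean(results)
    rsd = 100.0 * statistics.stdev(results) / mean
    low, high = limits
    return mean, rsd, low <= mean <= high

triplicate_mg_per_kg = [465.0, 460.0, 470.0]            # hypothetical soil replicates
acceptance_limits = (433.3, 568.5)                      # certificate limits from slide 11
mean, rsd, ok = summarize(triplicate_mg_per_kg, acceptance_limits)
print(f"mean {mean:.0f} mg/kg, RSD {rsd:.1f}%, within limits: {ok}")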
                                               415

-------
 Microwave Closed Vessel Sample Preparation of
Paint Chips, Soil, Dust Wipes, Baby Wipes, and Air
   Sampling Filters for Analysis of Lead by ICAP
                      Introduction
                   Interest in environmental lead analysis:
                    Domestic concerns
                    Industrial concerns

                   Sample preparation equipment

                   Sample types
                    Paint chips
                    Soil
                    Dust wipes
                      1. Filter paper (9 cm)
                      2. Baby wipes
                    Cellulose acetate filters (37 mm)

                   • Heating programs

                   • Lead analysis by ICAP

                   • Lead recoveries
                           416

-------
Analytical  Instrumentation
    Thermo Jarrell Ash; ICAP, 61E Trace Analyzer*
    Instrument configuration:
       Cyclone Spray Chamber
       Meinhard nebulizer
       Wavelength - 220.353
           * Lead detection limit -1.02 ng/mL
Microwave Sample Preparation System
              Model MDS-2000
                     417

-------
      Advanced Composite Vessel [exploded-view diagram: vent fitting, thread ring]

Cutaway View of Vessel Cap and Pressure Relief Valve Assembly, Sealed and Venting
  Sealed: At pressures below 830 kPa (120 psig), the raised top remains sealed against the cap.
  Venting: Excess pressure forces the top of the cap to flex upward, breaking the seal around the
  raised top and exhausting the pressurized gas. The cap reseals when pressure drops below
  690 kPa (100 psig).
                            418

-------
Microwave* Closed Vessel Heating Conditions
    for the Digestion of Lead in Paint Chips1

                 Stage                 (1)       (2)2
                 Power (%)             100       000
                 Pressure (psig)        100       000
                 Run Time (min)        20:00      5:00
                 TAP (min)             10:00      0:00
                 Temperature (°C)3      160       000

                 Vessel type      ACV
                 Acid and volume  10 mL of nitric acid (70%)
                 Sample wt.       0.1 g
                 Total time       15 min

                    * MDS-2000 Digestion System
                    1. ELPAT ROUND 009 Reference Material
                    2. Cool down stage
                    3. Control parameter
            Lead Recovery From Paint Chips*

                     Average Lead               Certificate            Certificate
          Sample      Recovery1        RSD      Acceptance Limits         RSD
                      (weight %)       (%)        (weight %)              (%)

            1           0.5124        0.74      0.4025 - 0.6973           8.9
            2           0.0442        0.54      0.0354 - 0.0608           3.8
            3           4.441         3.49      3.8909 - 5.6496           6.1
            4           0.4098        0.46      0.3189 - 0.5301           8.3

              * ELPAT ROUND 009 Reference Material
              1. All samples were prepared and analyzed in triplicate.  Analyzed by ICAP.

       Lead Recovery From Real World Paint Chip Samples*

                     Average Lead      Calculated
          Sample      Recovery1       Lead Present       RSD
                      (weight %)       (weight %)        (%)

            A           5.23             5.58            2.00
            B           1.72             1.69            5.68

            * Spiked with ELPAT ROUND 009 Reference Material (paint chips).
            1. All samples were prepared and analyzed in triplicate.  Analysis by ICAP.
                                419

-------
Microwave* Closed Vessel Heating Conditions
        for the Digestion of Lead in Soil1

                 Stage                 (1)       (2)2
                 Power (%)             100       000
                 Pressure (psig)        100       000
                 Run Time (min)        20:00      5:00
                 TAP (min)             10:00      0:00
                 Temperature (°C)3      160       000

               Vessel type      ACV
               Acid and volume  10 mL of nitric acid (70%)
               Sample wt.       0.1 g
               Total time       15 min

                  * MDS-2000 Digestion System
                  1. ELPAT ROUND 009 Reference Material
                  2. Cool down stage

            Lead Recovery From Soil*

                     Average Lead               Certificate            Certificate
          Sample      Recovery1        RSD      Acceptance Limits         RSD
                       (mg/kg)         (%)         (mg/kg)                (%)

            1            465           1.2       433.3 - 568.5            4.5
            2            921           2.3       794.6 - 1119.7           5.7
            3            479           0.7       431.9 - 573              4.7
            4            83.7          1.9        69.5 - 109.5            7.4

          * ELPAT ROUND 009 Reference Material
          1. All samples were prepared and analyzed in triplicate.  Analyzed by ICAP.
 Microwave* Closed Vessel Heating Conditions
    for the Digestion of Lead on Dust Wipes1

         Stage                (1)       (2)       (3)       (4)       (5)
         Power (%)            100       100       100       100       100
         Pressure (psig)       50       100       120       150       200
         Run Time (min)      10:00     10:00     10:00     10:00     10:00
         TAP (min)            3:00      3:00      3:00      3:00      3:00
         Temperature (°C)     120       130       140       150       160

           Vessel type      ACV
           Acid and volume  10 mL of nitric acid (70%)
           Sample wt.       0.8 g
           Total time       22 min

                * MDS-2000 Digestion System
                1. ELPAT ROUND 009 Reference Material (9 cm filter paper)
                          420

-------
               Temperature and Pressure Curves for
                Digestion of Dust Wipes (MDS-2000)

      Lead Recovery From Dust Wipes*

                          Average Lead        Certificate
          Sample           Recovery1          Acceptance Limits
                           (µg/wipe)            (µg/wipe)

            1                 900              729.4 - 1040.2
            2                 325              224.9 - 437
            3                 102               84 - 132.8
            4                 476              376.6 - 580.6
          Blank Wipe          0.13                   —

          * ELPAT ROUND 009 Reference Material
          1. All samples were prepared and analyzed in triplicate.  Analyzed by ICAP.
Microwave* Closed Vessel Heating Conditions
  for the Digestion of Lead Spiked Air Filters1

                 Stage                 (1)       (2)2
                 Power (%)             100       000
                 Pressure (psig)        100       000
                 Run Time (min)        15:00      5:00
                 TAP (min)              5:00      5:00
                 Temperature (°C)3      160       000

                 Vessel type      ACV
                 Acid and volume  10 mL of nitric acid (70%)
                 Sample wt.       < 0.1 g
                 Total time       11 min

                       * MDS-2000 Digestion System
                       1. Mixed esters of cellulose (37 mm)
                       2. Cool down stage
                       3. Control parameter
                              421

-------
 Lead Recovery From Spiked MEC* Filters

             Spike     Average Lead     Average
   Sample    Value      Recovery1       Recovery      RSD
             (µg)         (µg)            (%)         (%)

     1        100         93.4            93          0.51
     2        250         231             92          2.42
     3        500         441             88          0.03

    * Mixed esters of cellulose
    1. All samples were prepared and analyzed in quadruplicate.  Analysis by ICAP.
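
 The recovery and precision figures in the tables above follow from two simple calculations:
 percent recovery is the mean measured lead divided by the spiked amount, and RSD is the
 standard deviation of the replicates relative to their mean.  The sketch below is ours, not
 part of the original presentation; the replicate values are hypothetical, and only the
 100-unit spike level and its reported averages echo the table.

    # A minimal sketch (ours) of the recovery and RSD calculations behind the tables
    # above. The four replicate results are hypothetical; only the spike level (100)
    # and the reported ~93% average recovery mirror the table.
    from statistics import mean, stdev

    spike_value = 100.0                      # spiked lead, same units as the results
    replicates = [93.0, 93.9, 93.4, 93.3]    # hypothetical quadruplicate results

    avg = mean(replicates)
    recovery_pct = 100.0 * avg / spike_value        # percent recovery
    rsd_pct = 100.0 * stdev(replicates) / avg       # relative standard deviation

    print(f"average recovery = {recovery_pct:.0f}%, RSD = {rsd_pct:.2f}%")
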
 Microwave* Open Vessel Heating Conditions
            for the Partial Digestion of
             Lead Spiked1 Baby Wipes2

       Stage                (1)       (2)       (3)       (4)       (5)
       Power (%)             85        65        70        90       100
       Pressure (psig)      000       000       000       000       000
       Run Time (min)       3:00     15:00      5:00      5:00     10:00
       TAP (min)            0:00      0:00      0:00      0:00      0:00
       Temperature (°C)     000       000       000       000       000

         Vessel type      120 mL PFA3
         Acid and volume  30 mL of nitric acid (70%)
         Sample wt.       3 g
         Total time       38 min

           * MDS-2000 Digestion System
           1. Spiked with ELPAT ROUND 009 Reference Material (paint chips and soil)
           2. Wash-a-Bye Baby brand
           3. Vessel not sealed; covered with the Relief Disk only.
          MICROWAVE* CLOSED VESSEL
   HEATING CONDITIONS FOR THE DIGESTION OF
          LEAD SPIKED1 ON BABY WIPES2

  Stage                  1         2         3         4         5
  Power (%)             100       100       100       100       100
  Pressure (psig)3       10        20        40        65        90
  Run Time (min)       10:00     10:00     10:00     10:00     10:00
  TAP (min)             3:00      3:00      3:00      3:00      3:00
  Temperature (°C)               monitored only

  Vessel Type      120 mL PFA4
  Acid and Volume  Approximately 10 mL of nitric acid
  Sample Wt.       3 grams (weight of wipe)
  Total Time       24 min

  * MDS-2000 Digestion System
  1. Spiked with ELPAT Round 009 Reference Material (paint chips and soil)
  2. Wash-a-Bye Baby Brand
  3. Control Parameter
  4. Sealed Vessels
                             422

-------
          Temperature and Pressure Curves for Digestion
               of Lead Spiked Baby Wipes (MDS-2000)

Lead Recovery From Spiked* Baby Wipes1

                                   Spike       Average Lead     Average
                                   Value        Recovery2       Recovery      RSD
                                  (mg/kg)        (mg/kg)          (%)         (%)

         Wipe + Soil #1             500.9           489            98         3.55
         Wipe + Soil #4              89.5           87.8           98         7.67
         Wipe + Paint Chips #3    47,702         43,202            91         2.62
         Blank Wipe                   —              —              —           —

             * Spiked with ELPAT ROUND 009 Reference Material (paint chips and soil)
             1. Wash-a-Bye Baby brand
             2. All samples were prepared and analyzed in triplicate.  Analyzed by ICAP.
                Open and closed vessel digestion sample preparation.
                             Conclusions
                       • Rapid sample preparation for lead analysis
                          Paint chips   15 minutes
                          Soil       15 minutes
                           Dust wipes   20 minutes
                          Filters      11 minutes
                       • Unattended sample digestion

                       • No special vessel cleaning required

                       • Excellent lead recoveries
                          ELPAT ROUND 009 Reference Materials
                          Real world samples
                                   423

-------
63
 ASI
ANALYTICAL SERVICES. INC.
    ENVIRONMENTAL MONITORING & LABORATORY ANALYSIS
 110 TECHNOLOGY PARKWAY • NORCROSS, GA 30092 • (404) 734-4200
          FAX (404) 734-4201 • FEDERAL ID. #58-1625655
   Removal of Zinc Contamination from Teflon® PFA Microwave Digestion
                                     Vessels


 Forrest B. Secord, Sample Preparation Supervisor, Metals Section, and Roy-Keith Smith, PhD,
                       Analytical Methods Manager, QA Section
 ABSTRACT


 Laboratory contamination is one of the largest single sources of error in the analysis of
 environmental samples.  ASI, like many laboratories, converted to Teflon® digestion beakers
 for hot acid digestion of samples for metals analysis.  The digestion liners in the microwave
 digestion system were made of Teflon®, and favorable experiences with them prompted the
 change.  The Teflon® beakers and liners have some very desirable properties, such as ease of
 cleaning and unbreakability, which more than offset the high initial purchase cost of the
 containers.  However, over time while using the Teflon® containers, the blank values for zinc
 were noted to be slowly increasing.  This became of increasing concern when the background
 zinc values in the blanks passed our minimum reporting level and still continued to rise.  We
 suspected that use of a strong chelating agent would reduce the contamination levels in the
 Teflon® beakers and liners and performed a series of experiments to test the hypothesis.  The
 development and description of a highly successful, simple and inexpensive cleaning procedure,
 which eliminates the use of hot concentrated acid leaches yet completely removes the
 background metal contamination problem from Teflon® digestion beakers and liners, is the
 subject of this presentation.


 INTRODUCTION


 Recent advances in the sensitivity of analytical instruments have led to the ability to reliably
 quantitate target analytes at substantially lower levels than those previously possible. These
 sensitivity increases have proceeded hand-in-hand with the lowering of regulatory thresholds,
 for instance the Ambient Water Quality Criteria Levels. These analytical advances and stricter
 monitoring  requirements have simultaneously increased concerns  about  laboratory
 contamination introduced during the collection and  preparation of samples1.  This paper
 discusses  the identification and elimination  of one significant source of laboratory
 contamination encountered in the preparation of samples for analysis of trace metals.

 EPA methods 3015 and 30512 are for microwave digestion of water and solid samples.  They
 recommend (Section 7.2) use of hot acid leaches, first with 1:1 hydrochloric acid for at least 2
 hours and then with 1:1 nitric acid for a minimum of 2 hours, to remove contamination from the
 Teflon® PFA digestion vessels.  Some of the samples we run exhibit rather high levels of zinc
 and copper, and we have found that repeated cycles of microwave digestion in Teflon®
 liners cleaned daily by the EPA procedure, or alternatively with hot aqua regia (concentrated
 3:1 hydrochloric-nitric acid), lead to permanent establishment of background zinc levels
 above detection limits in the blanks and samples.  This form of laboratory contamination can
 be remedied by purchase of new Teflon® digestion vessels.  However, at $50 to $100 each,
                                           424

-------
even doubling the useful life of a liner results in a considerable savings to the overhead
operating costs of the facility.

EDTA (ethylenediaminetetraacetic acid disodium salt, CAS number 6381-92-6) is well known
as a sequestering agent for divalent and higher charged cations.  Approximately 40 different
cations are known to be complexed by EDTA.  There is a pH dependence for optimum
complexation: for instance, pH 1 is optimum for Fe+3, pH 4 for Zn+2, pH 8 for Ca+2, and pH
10 for Mg+2.  Environmental analytical applications of EDTA include the complexometric
titration of calcium and magnesium in hardness determinations in a pH 10.0 ammonia-
ammonium chloride buffer.  We felt that soaking the Teflon® liners with EDTA might serve
to chelate and solubilize the zinc out of the walls of the Teflon® liner and reduce the overall
level of carry-over contamination.  We performed a series of experiments to test this
hypothesis.

A saturated solution (approximately 5%) of EDTA in reagent grade water was prepared, which
exhibited a pH of 4.6.  The original samples were 6 contaminated Teflon® liners which had been
cleaned with the EPA procedure and with aqua regia.  A blank digestion of acid and 45 mL of
reagent grade water was performed in each liner by EPA method 3015 and the digestate
assayed by EPA method 6010 (ICP-AES).  A series of cleaning procedures was devised and
performed sequentially on the liners.  The mildest procedure was performed first, followed by
more rigorous schemes.  Test 1 was an EDTA soak for 1 hour at ambient temperature followed
by rinsing with DI water.  Test 2 was an EDTA soak overnight at ambient temperature,
followed by rinsing with DI water.  Test 3 was an EDTA soak for 1.5 hrs at 60°C, followed by
rinsing with DI water.  After each test treatment, the blank digestion was repeated and the
digestate analyzed.  The results of these tests are presented in the Table.
                                             425

-------
Table.  Analytical results (mg/L) for tests of Teflon® cleaning experiments with EDTA.
ND = Not Detected.

Calcium
  Vessel        1        2        3        4        5        6
  Original    .500     1.02     1.03     .733     .947     .969
  Test 1      .844     1.16     1.42     .924     .910     .605
  Test 2      .891     .873     .854     .651     .776       -
  Test 3      .538     .417     .368     .474     .314     .514

Iron
  Vessel        1        2        3        4        5        6
  Original    .038     .024     .031     .017     .020     .018
  Test 1      .020     .056     .020     .011     .014     .009
  Test 2      .041     .025     .044     .017     .022       -
  Test 3       ND       ND       ND       ND       ND       ND

Potassium
  Vessel        1        2        3        4        5        6
  Original    .054     .074     .029     .023     .013     .064
  Test 1       ND      .018      ND      .003      ND       ND
  Test 2      .106     .145     .073     .078     .090       -
  Test 3      .003     .064     .028     .059     .054     .069

Magnesium
  Vessel        1        2        3        4        5        6
  Original    .042     .049     .048     .025     .057     .044
  Test 1      .033     .047     .037     .030     .032     .018
  Test 2      .067     .073     .055     .051     .038       -
  Test 3      .007     .009      ND      .011     .002     .035

Sodium
  Vessel        1        2        3        4        5        6
  Original    .607     1.10     1.29     .976     1.18     1.34
  Test 1      1.06     1.35     1.25     1.08     1.08     .700
  Test 2      1.29     1.37     1.28     1.05     1.18       -
  Test 3      .845     .643     .547     .688     .514     .790

Zinc
  Vessel        1        2        3        4        5        6
  Original    .052     .038     .040     .023     .034     .021
  Test 1      .037     .052     .064     .024     .024     .021
  Test 2      .033     .034     .023     .019     .020       -
  Test 3      .014     .006     .017     .019     .006     .009
                                            426

-------
Although we had undertaken these experiments with the specific objective of reducing a zinc
contamination problem, examination of the other metals determined in the ICP-AES printout
indicated that we had also significantly reduced calcium, iron and magnesium levels in the blanks.
The background contamination levels of these metals had yet to reach the PQL and were not
yet viewed as a problem.

To understand these results, the following formation constants for EDTA complexes3 are
helpful: Mg+2 4.9 x 10^8, Ca+2 5.0 x 10^10, Zn+2 3.2 x 10^16, Al+3 1.3 x 10^16, and Fe+3 1.3 x
10^25.  The larger the formation constant, the greater the ability of EDTA to dissolve the ion.
The observations recorded in the Table are in line with the magnitudes of the formation
constants: the iron contamination is completely removed, the zinc is substantially
reduced, there is a significant reduction in magnesium and calcium, and finally, potassium and
sodium levels are unchanged.  The last results are not surprising, as EDTA is not noted for any
complexation of alkali metal cations.  Examination of the formation constants further suggests
that aluminum contamination, should it be encountered, would be completely removed.
Even further reductions in background contamination for specific contaminants should be
possible through judicious pH adjustment.  For example, use of an ammonia-ammonium chloride
buffer with EDTA should improve the removal of calcium and magnesium from the plastic;
however, we have not explored this.
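
To make the trend concrete, the short calculation below is ours rather than the authors': it
averages the per-vessel blank values from the Table for the metals discussed above and reports
the change after the hot EDTA soak (Test 3), treating ND results as zero.

    # A minimal sketch (ours, not from the paper): compare mean blank levels before
    # cleaning ("Original") with those after the hot EDTA soak ("Test 3"), using the
    # vessel-by-vessel values from the Table. ND results are treated as zero.
    blanks_mg_per_L = {
        #       Original, vessels 1-6                    Test 3, vessels 1-6
        "Fe": ([.038, .024, .031, .017, .020, .018], [0, 0, 0, 0, 0, 0]),
        "Zn": ([.052, .038, .040, .023, .034, .021], [.014, .006, .017, .019, .006, .009]),
        "Mg": ([.042, .049, .048, .025, .057, .044], [.007, .009, 0, .011, .002, .035]),
        "Ca": ([.500, 1.02, 1.03, .733, .947, .969], [.538, .417, .368, .474, .314, .514]),
    }

    def mean(values):
        return sum(values) / len(values)

    # Print the percent reduction in the mean blank for each metal.
    for metal, (before, after) in blanks_mg_per_L.items():
        change = 100.0 * (mean(before) - mean(after)) / mean(before)
        print(f"{metal}: {mean(before):.3f} -> {mean(after):.3f} mg/L ({change:.0f}% reduction)")
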

How the metal ions are attached to the Teflon® PFA surface is unknown.  It may be that the
pores in the polymer are allowing the metal ions inside, where they are stabilized by the high
electronegativity of the fluorine atoms or by Lewis base coordination to the oxygen atoms
present in the PFA resin.  The oxygen atom complexation may be the culprit which led to the zinc
problem; however, other mechanisms cannot be ruled out, as we have seen similar, although
lower level, contamination in Teflon® PTFE beakers which we use for hotplate digestions.  At
any rate, it appears that the EDTA presents a more suitable resting place for the metals, and
they are efficiently removed from the polymer.

Encouraged by the initial experimental results, we developed an SOP for cleaning Teflon®
digestion vessels which added a weekly treatment with EDTA to the existing EPA procedure
and the regular aqua regia soaking.  The treatment is to take a room temperature saturated
solution of EDTA in reagent grade water, heat it to at least 60°C, then submerge the Teflon®
container in the bath.  The bath is heated for 2 hours, then the container is rinsed with reagent
grade water and allowed to dry.  Although we have reused the EDTA solution up to 4 times
during a month, the current practice is to prepare a new solution every week.  We treat the
Teflon® PTFE beakers weekly with EDTA by filling them with the hot solution, then heating
the beakers on a hotplate for 2 hours.


SUMMARY

This improved cleaning procedure using EDTA has been in place in our laboratory for over a
year, and we have extended the usable life of Teflon® PFA microwave digestion liners by a
factor of at least 5 and of Teflon® PTFE beakers by a factor of 3.
1. Sampling Ambient Water for Determination of Trace Metals at EPA Water Quality Criteria Levels, Method
1669 (Draft), October 1994, USEPA.
2. Test Methods for Evaluating Solid Waste, Physical/Chemical Methods, SW-846, July 1992, USEPA.
3. Skoog, D.A., D.M. West and F.J. Holler, 1990. Analytical Chemistry: An Introduction. Holt, Rinehart and
Winston, Inc., Orlando, FL 32887, pp. 239-249.
                                              427

-------
64
A COMPARATIVE  INVESTIGATION  OF  THREE ANALYTICAL METHODS  FOR
THE CHEMICAL QUANTIFICATION OF INORGANIC CYANIDE IN INDUSTRIAL
WASTEWATERS
Christopher O. Ikediobi, Professor, Department of Chemistry,
L.M. Latinwo, Associate Professor, Department of Biology and
L.Wen,  Department  of  Chemistry,   Florida  A&M  University,
Tallahassee, Florida 32307-3700.
ABSTRACT
     A frightening volume of toxic cyanide-containing liquid
waste  is  generated annually  in  industries involved  in the
mining and extraction of  metals, metal plating and finishing,
hardening of steel, manufacture of  synthetic  fibers and the
processing  of  such  cyanogenic   crops  as  cassava,  bitter
almonds, white clover,  apricots, etc.  The U.S. Environmental
Protection Agency  requires  that  the cyanide  in  this liquid
waste be  destroyed and its  level brought down to less than
1 ppm  before  the  waste  can  be  discharged  into  aquatic
environments.    This  requirement  can  only  be  met  if  a
sensitive, reliable and rapid analytical method suitable for
quantifying cyanide in industrial liquid  wastes  exists.   As
part  of  an   ongoing  cyanide   degradation  project  using
immobilized enzymes and immobilized microbial cells, we have
investigated    and   compared    three    chemically-related
spectrophotometric methods for determining cyanide namely, the
4-picoline-barbituric acid,  the isonicotinate-barbiturate and
the pyridine-pyrazolone methods for their suitability in the
routine determination  of cyanide  in industrial  wastewaters.
Data  from recovery  experiments  carried  out  with  standard
cyanide solutions and those from analyses of actual cyanide-
containing  liquid  wastes obtained  from  metal  and  cassava
processing,  indicate that the  three methods are about equally
sensitive and capable of  reliably detecting free cyanide ions
down to  less  than  0.1 ppm.   In  these methods,  the soluble
colored dyestuff is formed within reasonable time (5-30 min)
and is stable for upwards of 1-2 hr. at room temperature (22-
28 C) .  The chromogenic reagent for the 4-picoline -barbituric
acid method  is  stable for 2-4 hr while those  for the other two
methods can  be stored in  dark  brown  bottles for up to 20 days
without affecting cyanide measurement.  The three methods are
affected  to  varying extents  by  interferences from various
cationic,  anionic  and organic substances that  are usually
encountered  in  industrial cyanide-containing wastewaters.  The
error   in  cyanide   measurement   associated  with   these
interferences  is  sufficiently  serious  as  to  warrant  a
distillation step as part of the  analytical protocol.
                                 428

-------
 INTRODUCTION
     Large  amounts  of  concentrated  cyanide  solution  are
 generated annually from such human activities  as  the mining
 and extraction of metals (e.g. gold and silver), cleaning and
 electroplating  of  metals,  hardening of steel and  production
 of synthetic fibers (1-2).  The processing of cyanogenic crops
 like  cassava,  bitter  almonds,  apricots,  butter beans  etc.
 which  contain  significant  amounts of cyanide in the  form of
 cyanogenic glycosides also produces large volumes of cyanide-
 rich waste liquor  in the food industries  (3).   Hydrolysis of
 cyanogenic  glycosides by  the endogenous  enzyme systems  of
 these  plant  raw materials during processing results  in  the
 conversion  of   large  amounts  of organic   cyanides  (e.g.
 nitriles) into inorganic cyanide.  Since cyanide is a potent
 respiratory poison (3), undetoxified cyanide-containing liquid
 wastes could easily contaminate fish and ultimately
 extinguish aquatic life if discharged into aquatic
 environments.

     Chemical  methods  for  the  detoxification  of  cyanide-
 containing industrial wastes are  expensive,  energy intensive
 and leave other environmentally undesirable  byproducts  (4).
 This fact coupled with EPA's  stringent requirements regarding
 cyanide levels in detoxified  cyanide-containing liquid wastes,
 has encouraged us to embark on an investigation of the use of
 immobilized enzymes and microbial cells in the degradation of
 cyanide in industrial wastewaters  in collaboration with Norris
 Industries.   An important  part of this collaborative effort
 calls for the identification and adaptation of existing methods
 of  cyanide  determination  to the analysis  of  concentrated
 industrial   cyanide-containing    liquid   wastes.       The
 modifications introduced in the three methods reported in this
 work have enabled us to integrate these analytical  methods
 into  our ongoing  project  on waste cyanide  degradation  by
 biotechnological methods.
EXPERIMENTAL
Materials:  4-Picoline, barbituric acid, isonicotinic acid,
chloramine-T, 3-methyl-1-phenyl-5-pyrazolone, bis-pyrazolone,
spectroscopic-grade  pyridine  and  potassium  cyanide  were
purchased from Sigma  Chemical  Company, St.  Louis, Missouri.
Other chemicals and reagents used were of analytical grade.
Doubly-distilled and deionized water was used throughout the
work.  Two different samples of cyanide-containing wastewater
(referred to as SP  and Nu)  were gratefully  obtained from
Norris Industries,  Los Angeles, CA.
                                  429

-------
Cassava Waste Liquor:  This was prepared with cassava tubers
from two different varieties of cassava (var. A and B).
Cassava tubers were peeled to remove the bark and expose the
cortex.  These tubers were cut into small cubes measuring
about 3x3x3 cm.  Approximately 500 g of these were blended in
batches of 100 g with 300 ml of cold distilled water each
time.  The pooled homogenate was filtered.  The residue was
reextracted with 500 ml of water and filtered again.  The
filtrate was left to stand overnight at room temperature.
The precipitated starch was subsequently removed by filtration,
and the resulting cyanide-rich filtrate, referred to as cassava
waste liquor (A and B), was used for all cyanide analyses
described below.

Preparation of Chromogenic Reagents:  Sodium isonicotinate,
sodium barbiturate, 4-picoline-barbituric acid, and sodium
isonicotinate-sodium barbiturate reagents were prepared as
described by Nagashima (5-6).  The pyridine-pyrazolone reagent
was a pyridine solution of 0.1% bispyrazolone and 0.5% 3-
methyl-1-phenyl-5-pyrazolone prepared in situ as recommended
by Cooke (7) and Ikediobi et al. (8-9).

Cyanide by the 4-Picoline-barbituric acid method:  Six
milliliters of a sample containing less than 10 µg of cyanide was
pipetted into a dry reaction test tube.  To this were added
3.0 ml of phosphate buffer (pH 5.2) and 0.2 ml of a 1% (w/v)
solution of chloramine-T.  The test tube was stoppered and the
resulting solution gently mixed and left at room temperature
for 1-3 min.  Then 1.8 ml of 4-picoline-barbituric acid
reagent was added, the tube stoppered again, the contents
mixed and the solution kept at 25 °C for 5 min in a fume hood
for color development.  The absorbance of the blue-violet
color was read at 605 nm against a suitable reagent blank in
a Shimadzu-160 double beam spectrophotometer.

Cyanide by the Isonicotinate-barbiturate method:  Into a dry
test tube was pipetted 6 ml of sample containing less than
10 µg CN-.  To this solution was added 1.8 ml of phosphate
buffer followed by 0.2 ml of a 1% solution of chloramine-T.  The
tube was stoppered and the contents mixed gently.  After standing
for 1-3 min, 3 ml of sodium isonicotinate-sodium barbiturate
reagent was added.  The tube was again stoppered, the contents
mixed and the tube kept at 22-25 °C for 30 min in a fume hood
for color development.  The absorbance of the blue-violet-
colored dyestuff was measured at 600 nm against a reagent
blank in a Shimadzu-160 double beam spectrophotometer.

Cyanide by the Pyridine-pyrazolone method:  Approximately 1
ml of sample containing less than 10 µg CN- and 0.4 ml of
chloramine-T were added to a dry reaction test tube.  The tube
was stoppered and allowed to stand for 5 min at 0 °C (ice-H2O
-------
temperature in a fume hood.  The absorbance of the resulting
soluble blue dye was  determined at 630 nm against a reagent
blank in a Shimadzu-160 double  beam spectrophotometer.

Distillation of cyanide-containing solutions:  Distillation
was  routinely  used  in  recovery   experiments  and  in  the
preparation of  complex and  concentrated cyanide-containing
liquid waste for cyanide analysis.  The cyanide distillation
train consisted  of a  1-liter  boiling flask connected  to a
glass condenser which in turn was connected to two consecutive
glass traps  equipped with medium-porosity  sparger  and each
containing 1 M NaOH.  Provision was made in the distillation
setup for a boiling flask air inlet, a suction flask trap and
a vacuum connection.  Approximately 500 ml of diluted sample
containing less than  10 ppm  of CN~ was distilled at a time.
For distillation of complex  concentrated cyanide solutions,
e.g. industrial wastewater, cassava waste liquor etc., 2g of
sulfamic acid, 50 ml of diluted (1:1 v/v) H2SO4 and 20 ml of
51% solution  of MgCl2.6H2O  were also added to  the boiling
flask through the  air  inlet tube  before  distillation.   The
entire setup was connected  to a vacuum source and suitable air
flow  maintained  at  a  rate   of  1-3  air  bubbles/second.
Refluxing was allowed to proceed for  1-2  hr.  at the rate of
40-50  drops/min from  the condenser  lip.   At  the end  of
distillation,  heating was discontinued but air flow maintained
for additional 15-30 min.  while the train cooled down to room
temperature.  The  cyanide  traps were  disassembled and the 1
M NaOH solutions containing the trapped CN  were pooled, the
traps were rinsed and the pooled solution was suitably diluted
for CN~ determinations.

Statistical Analysis:   Statistical analysis of data in Table
1 was performed using the paired t-test.
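
As a rough illustration of that comparison (ours, not the authors' actual computation), the
sketch below runs a paired t-test on the mean CN- values reported in Table 1 for two of the
methods; scipy is assumed to be available.

    # A minimal sketch (ours) of the paired comparison mentioned above, using the
    # mean CN- values (ppm) reported in Table 1 for the five samples (standard,
    # SP, NU, A, B) as measured by two of the methods. The authors' actual test
    # setup may have differed; scipy is assumed to be installed.
    from scipy import stats

    picoline_barbituric = [10.10, 13812, 16660, 7.40, 6.22]
    isonicotinate_barbiturate = [10.44, 12665, 15337, 8.21, 6.98]

    t_stat, p_value = stats.ttest_rel(picoline_barbituric, isonicotinate_barbiturate)
    print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
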

RESULTS AND DISCUSSION:
Cyanide Content of Industrial Wastewater:  Our design of an
enzyme-based detoxification of cyanide involves a continuous
packed-bed reactor through which cyanide-containing liquid
waste is recycled, with the kinetics of CN- degradation followed
by accurate monitoring of the CN- content of the influent and
effluent wastewater.  Tables 1 and 2 present data on the CN-
content of four samples of CN--containing liquid waste, two
of which (NU and SP) arise from actual industrial activity
while the other two (A and B) represent cyanide-containing
waste liquor arising from laboratory-scale processing of two
different varieties of cassava tubers for starch (3).  Table
1 shows that liquid wastes SP and NU are too concentrated in
cyanide to be discharged without prior detoxification.
Although they exceed by an order of magnitude the EPA ceiling
for CN- in industrial wastewater, the cyanide levels in A and
B are relatively low because the latter were prepared from
                                 431

-------
edible (genetically low cyanide-containing) varieties of
cassava.  Reports in the literature indicate that similar
preparations from cassava varieties bred for industrial use,
upon processing, leave behind waste liquors with cyanide
levels as high as 1200-2000 ppm (3).  We conclude from the
statistical analysis of data in Table 1 and the standard
curves shown in Figs. 1-3 that the three analytical methods
are about equally sensitive and capable of detecting CN- down
to less than 0.10 ppm, a sensitivity that more than meets the
EPA requirements.  Data in Table 2 also show that there is a
significant difference between the cyanide levels in SP and
NU before and after distillation, with errors in CN-
estimation of at least 21%.  This suggests the presence in the
wastewater samples of substances capable of interfering in the
determination of cyanide by any of these methods, although
data shown in Table 2 have been presented for only one of
these methods.  The fact that each method recovers essentially
100% of the CN- in standard cyanide solutions, as shown in
Table 1, is additional proof that the three methods
are about equally responsive to the presence of cyanide in
aqueous solutions.
Comparative Chemistry of the Three Spectrophotometric Methods:
Figs. 4 and 5 summarize the color-forming reactions of the
three spectrophotometric methods studied.  As can be seen from
these figures, the three-step color-forming reaction is similar
in the three methods.  Essentially it consists of the reaction
between the free cyanide (CN-) ion and chloramine-T (N-
chloro-p-toluenesulfonamide) to yield cyanogen chloride as
one of the products.  The cyanogen chloride (CNCl) in turn
attacks 4-picoline (in the 4-picoline-barbituric acid method),
isonicotinate (in the isonicotinate-barbiturate method) or
pyridine (in the pyridine-pyrazolone method) to form glutaconic
dialdehyde or its derivative.  The last reaction involves a
condensation between the glutaconic dialdehyde and barbiturate
or pyrazolone to yield a soluble blue to violet colored dye
that absorbs strongly in the range of 600-630 nm as specified
in the text.

     In all these methods, the formation of the soluble
colored dye occurred within a reasonable time period (5-30 min),
while the dyestuff formed in each case was stable for upwards
of 1 hr at room temperature.  The chromogenic reagent for the
4-picoline-barbituric acid method was stable for about 1-2 hr,
while those for the isonicotinate-barbiturate and the
pyridine-pyrazolone methods stored well for up to 20 days
without significantly affecting cyanide measurements.  The
pyridine-pyrazolone method, however, is limited by the
expense, offensive odor and toxicity of the chemicals
used.  The three methods are affected to varying extents by
interferences from cations, anions and organics that are
likely to be found in industrial cyanide-containing liquid
                                   432

-------
waste.  The cumulative effect of the interfering substances
in industrial wastewater is sufficiently serious to require
distillation of wastewater samples before cyanide
determination is carried out by any of the three methods
investigated.
Interferences in the Determination of Cyanide:  While Table
2 demonstrates vividly the additive effect of the various
interfering substances on cyanide determination, Table 3
presents data on the effect, on cyanide determination, of
specific substances added to standard cyanide solutions.  The
presence of the two cations - Ca2+ and Mg2+ - generally found
in hard water typical of most municipal water supplies results
in slight to moderate overestimates or underestimates of
cyanide content, especially with the pyridine-pyrazolone
method.  Of the anions tested, thiocyanate (SCN-) caused
the most error in the analysis, particularly with the pyridine-
pyrazolone and isonicotinate-barbiturate methods.  The two
organics, benzaldehyde and 1-butanol, at most of the
concentrations tested, caused moderate underestimation of the
level of cyanide.  Aldehydes may react with CN- ions to form
nitriles which cannot be detected by any of these methods.
The data in Table 3 also reveal that for the ions and organic
compounds tested, the magnitude of the error caused is
roughly related to the concentration of the interfering
substance.  Although in most cases the mechanism of
interference is unclear, there is little doubt about the
wisdom of distilling complex cyanide-containing liquid wastes
prior to CN- determination (Table 2).
CONCLUSIONS

     Three related spectrophotometric methods have been
compared for their suitability in the determination of cyanide
in industrial wastewaters.  The data presented demonstrate that
they are about equally and reliably capable of detecting CN-
ions down to less than 0.1 ppm and are comparably affected by
some interfering inorganics and organics, many of which can be
encountered in CN--containing liquid wastes.  The three
analytical methods are used successfully and interchangeably
to monitor the kinetics of cyanide degradation in cyanide-
containing wastewaters by immobilized enzymes and immobilized
microbial cells.
                                  433

-------
                    LITERATURE CITED
(1)   Patterson, J.W. Industrial Wastewater treatment
     technology, 2nd ed. p.115-134  (1985) Butterworths
     London.

(2)   Wild, J. Liquid wastes from the metal finishing
     industry.  In D. Barnes, C.F.  Forster and  S.E.  Hrudley
     (eds.) 1987, Surveys  in  industrial  wastewater  treatment
     Vol.  3,  Longman London.

(3)   Nartey,  F. Manihot  Esculenta  (Cassava);  Cyanogenesis and
     Ultrastructure.  Munksgaard (1978) Copenhagen Denmark.

(4)   Olaluwoye, S.  Norris Environmental Services.  Norris
     Industries, Los Angeles,  CA (1994)  (Personal
     Communication)

(5)   Nagashima, S.  Spectrophotometric  determination of
     cyanide  with 4-Picoline  and  barbituric  acid.  Analytica
     Chimica  Acta 91,  303-306  (1977).

(6)   Ibid.  Spectrophotometric determination of cyanide with
     sodium isonicotinate and sodium barbiturate.  Analytica
     Chimica  Acta 99,  197-201  (1978).

(7)   Cooke, R.D. An Enzymatic assay for the total cyanide
     content  of cassava  (Manihot esculenta Crantz).   J.  Sci.
     Fd Agric.  29.  345-352  (1978).

(8)   Ikediobi,  C.O.,  Terry, D.E.  and Ukoha,  A.I.  The use of
     sorghum  dhurrinase-enriched preparation in  the
     determination  of total cyanide in sorghum  and sorghum
     products.  J.  Food Biochem.  18,  17-29 (1994).

(9)   Ikediobi,  C.O., Olugboji, O. and  Okoh,  P.N.  Cyanide
     profile  of component parts of  sorghum  sprouts.   Food
     Chem.  27 167-175  (1988).
                                  434

-------
              Table 1.  Determination of the Cyanide Content of a Standard Cyanide Solution and Samples
                        of Industrial Wastewater(a)

              Analytical          Standard CN-         Industrial Wastewater         Cassava Processing Waste Liquor
              Method              solution (10 ppm)    SP (ppm)        NU (ppm)      A (ppm)          B (ppm)

              4-Picoline-barbi-
              turic acid              10.10            13812±467      16660±1318     7.40±0.04        6.22±0.14

              Isonicotinate-
              barbiturate             10.44            12665±51       15337±281      8.21±0.06        6.98±0.08

              Pyridine-pyra-
              zolone                   9.98            12390±144      15890±782      7.38±0.10        5.99±0.17


              (a) Mean ± SD.  The differences in the CN- values for SP, A and B as determined by the three
              methods are statistically significant (P < 0.01), as indicated by different superscripts in
              the original table.

-------
                  Table 2.  Effect of Distillation on Cyanide Content of Industrial Wastewater

                                               CN- before distillation    CN- after distillation    % Error in
                                                       (ppm)                      (ppm)             CN- Content

                  Standard Cyanide Solution            10.08                      10.08                  0
                  Wastewater, SP                       14800                      12270                 21
                  Wastewater, NU                       17530                      10540                 66

-------
                  Table 3.  Effect of Diverse Substances on the Determination of 2.4 µg CN-/11 mL

                                                                    Percent of Cyanide Recovered
                  Ion/Compound    Added as     Amount (µg)     Method Ia     Method IIb     Method IIIc

                  Mg2+            MgCl2            1600           97.4          98.6           101.7
                                                   8000           96.6          98              97.3
                                                    400           98.3          99.4           103.2
                                                     80           98.9         100.6           104.2

                  Ca2+            CaCl2           16000          107.6         117.1           124.3
                                                   3200          102.0         104.2           120.0
                                                     80          100.6         102.3           113.5
                                                     16           99.7         101.3           111.0

                  NO2-            NaNO2            6400          100.9         101.7           107.1
                                                   1280          100.4         101.3           102.2
                                                    320           98.7         100.4            99.2
                                                     64           98.9         100.8           100.2

                  SO32-           Na2SO3           9000            0             0               0
                                                     90           83.8          83.6            82.5
                                                      9           98.8          95.5            90.4
                                                    0.9           99.5         100.8            99.7

                  SCN-            KSCN                2          136.4         133.4           148.4
                                                    0.2          103.7         106.1           136.0
                                                    0.1          101.1         103.4           118.4
                                                   0.02           99.7         102             108.1

                  I-              NaI              7000            0             ND              ND
                                                    700           53.1           ND              ND
                                                    350            -            94.1            93.3
                                                      7           96.4          96              96.3

                  Benzaldehyde                   104,000           82.6          82.4            86.5
                                                  10,400           98.5          97.6            97.3

                  1-Butanol                      405,000           92.6           ND              ND
                                                  81,000           97.4          92.5            98.3
                                                  40,500            ND          100.4           101.2

                  a. 4-Picoline-barbituric acid;  b. Isonicotinate-barbiturate;  c. Pyridine-pyrazolone
                  ND = not determined due to solubility or color problems.

-------
   [Figure: standard curve, absorbance at 605 nm vs. CN- concentration]
   Fig 1 : Standard Curve for Cyanide Determination by the
           4-Picoline - Barbituric acid Method
                        438

-------
         [Figure: standard curve, absorbance at 600 nm vs. CN- concentration]
         Fig 2 : Standard Curve for Cyanide Determination by the
                 Isonicotinate - Barbiturate Method
                             439

-------
[Figure: standard curve, absorbance at 630 nm vs. [CN-] (µg/mL)]
Fig 3 : Standard Curve for Cyanide Determination by the
        Pyridine - Pyrazolone Method
                                440

-------
[Figure: reaction scheme.  NaCN reacts with chloramine-T to give cyanogen chloride (CNCl);
 CNCl reacts with the pyridine derivative (R = CH3: 4-picoline; R = COO-: isonicotinate) to
 give glutaconic aldehyde, which condenses with barbituric acid (-2H2O) to the blue-violet dye.]
Fig 4 : Chemical reactions for cyanide determination by the 4-picoline - barbituric acid
        and isonicotinate - barbiturate methods

-------
[Figure: reaction scheme.  NaCN reacts with chloramine-T to give cyanogen chloride (CNCl);
 CNCl reacts with pyridine to give glutaconic aldehyde, which condenses with the pyrazolone
 (-2H2O) to the blue-colored dye.]
Fig 5 : Chemical reactions for cyanide determination by the pyridine - pyrazolone method

-------
     Air and
Groundwater

-------
                                                                                 65
   MANAGING RCRA STATISTICAL REQUIREMENTS TO MINIMIZE
                GROUND WATER MONITORING COSTS
Henry R. Horsey, Ph.D., President, Phyllis Carosone-Link, Senior Statistician,
Intelligent Decision Technologies, Ltd., 3308 Fourth Street, Boulder, Colorado
80304;
Jim Loftis, Ph.D., Department Head, Department of Agricultural and
Chemical Engineering, Colorado State University, Fort Collins, Colorado
80523
ABSTRACT

This paper will  provide an overview of how ground  water monitoring
statistical choices  can significantly impact a facility's ground water monitoring
costs.  For example, a recent study analyzed the long term  ground-water
monitoring cost impacts of different statistical analysis approaches on over 20
landfills.  The study found that the choice of statistical approach can make
over a 50% difference in long term monitoring costs.  Four  key issues in
choosing a statistical approach that minimizes monitoring costs were:
      •  Minimizing retesting because of inappropriate hydrogeologic
         assumptions;
      •  Minimizing site-wide false positive rates;
      •  Minimizing sample  size requirements; and
      •  Maximizing statistical flexibility when data characteristics change.
INTRODUCTION

Usually, the costs of performing statistical
analyses range between 10% and 15%, and
should rarely exceed 20%, of the total
ground water  monitoring costs.   Field
sampling,  analytic  laboratory   and
regulatory reporting costs comprise most
of the monitoring  costs.   For example,
analytic laboratory costs for the Subtitle D
Appendix 1 constituents required under
detection   monitoring   usually  cost
between  $350  and  $400 per  well per
sample.  When sampling and  reporting
costs  are also  included, the  per well
ground water monitoring costs often will
                  [Pie chart: Typical Distribution Of Monitoring Costs
                   (segments of 45%, 25%, and 15% shown)]
                                          443

-------
 climb to $700 or more.  If a facility is forced into a retesting situation because
 an inappropriate statistical test was used, the sampling, lab and reporting costs
 will often exceed $2000.   And if the facility is inappropriately forced  into
 assessment monitoring the analytic laboratory costs alone can easily exceed
 $1900 per well.

 By contrast, with the prudent use  of appropriate  statistics, the statistical
 analysis cost for a detection monitoring program should run from $125 to
 $175 per monitoring well per reporting period and may prevent a  facility
 from being inappropriately forced into retesting or assessment monitoring.
 Usually,  when statistical costs  exceed  20% of the total  ground  water
 monitoring costs, the cost advantages of specialized ground-water statistical
 software  are being overlooked.  For example, EMCON, IT Corporation  and
 Law Environment along with a number of other national and regional solid
 waste consulting companies use the Groundwater Statistical Analysis System
 (GSAS™) software at many sites to automate the  statistical analyses  and
 ensure that the most appropriate statistical tests are being used.  This decision
 support system developed by Intelligent Decision Technologies in Boulder,
 Colorado is one example  of how artificial intelligence software is being used
 to reduce solid waste management operating costs.

 Ironically, at sites which do not have large analytic programs, when statistics
 comprise less than 10% of the total ground water monitoring costs, this is
 often an indication that the inappropriate use of statistics  has forced a  facility
 into extensive retesting or assessment monitoring which drives the sampling
 and analytic costs through the roof. There is no single statistical approach for
 all sites that minimizes ground-water monitoring costs.  There are, however,
 general considerations  that apply to most  sites.   These considerations  are
 summarized in four steps that you can take to gain better control over your
 monitoring  costs.


 APPROPRIATE HYDROGEOLOGIC ASSUMPTIONS

 The first step in controlling your ground water monitoring costs is to  ensure
 that the hydrogeologic assumptions of your statistics accurately reflect your
 site's hydrogeology.  Many site managers are surprised to find that different
 statistical  tests  implicitly make different  assumptions about the  site
 hydrogeology and monitoring  program.  Misunderstanding  these implicit
 assumptions is the greatest cause of skyrocketing ground water monitoring
 costs.

For example, we  have seen sites where interwell statistics indicate a release
from the facility when no waste has yet been placed [See Sidebar On Different
Types Of Statistics].  These  and  other studies have demonstrated that an
intrawell statistical approach is generally more appropriate than an interwell
approach  when  there  is  evidence  of  spatial  variation   in  the  site's
                                          444

-------
 hydrogeology.  However,  demonstrating  to  regulators the  need for and
 effectiveness of an intrawell approach can be difficult,  especially for  sites
 where the monitoring program began after waste was placed.

 Intrawell statistics compare historical data at the compliance well against
 recent observations from that well.  This eliminates the possibility that spatial
 variation between  upgradient and  downgradient wells  can  cause an
 erroneous conclusion that a release  has occurred, but assumes that the
 historical data at the compliance wells have not been impacted by the facility.
 The fundamental regulatory concern about the  intrawell approach is whether
 the historical data have been impacted; if they have, the historical data do not
 provide an accurate baseline against which to detect a future impact.  This is a common
 problem faced  by older facilities where the monitoring wells were installed
 after waste had been placed  at the facility.  How do they demonstrate that  their
 historical data are "clean"?

 Generally,  the  facility should  first  use  hydrogeologic  information
 supplemented  with statistical  evidence  to demonstrate  that  there  is
 significant natural spatial  variation in the site's hydrogeology.   One statistical
 approach is to evaluate whether there are statistically significant differences
 among the upgradient  wells.   If  there are, this  is usually evidence of
 significant spatial variation at the site and therefore it  can be reasonably
 concluded that an interwell  approach will make erroneous conclusions about
 the facility's water quality.

 The facility can then screen the historical  data at the compliance wells to
 ensure that only clearly unimpacted data are used to develop each compliance
 well's background standard.  Statistical  approaches used to assist  in the
 screening include VOC tests, trend tests  and even interwell limit based
 analyses. This approach has worked for many  facilities.  It has  received
 regulatory acceptance in a number of states including California and Colorado
 and has  allowed  facilities to significantly  reduce their ground  water
 monitoring costs by reducing retesting  and  keeping facilities  out of
 unnecessary assessment monitoring.


 MINIMIZING  SITE WIDE  FALSE POSITIVES

 The second step in controlling your ground  water  monitoring costs is to
 minimize your site wide false positives.   False positive rates in the original
 EPA guidance1 and in most  state and Federal regulations are considered  only
 on a test or individual comparison basis.  Facilities, however, need to focus
 on the site-wide false positive rate, which is the probability of finding at least
 one statistical false positive result in a regulatory reporting period.

 Site-wide false positive rates are far higher than the individual  test false
positive rates and  site-wide false positive rates increase with the number of
                                         445

-------
 statistical tests being performed.  For example, if an interwell statistical test is
 run on only 10 constituents at a 5% false positive rate per constituent, the site-
 wide false positive rate will be approximately 40%.   Consequently, many
 facilities have at least a 50% chance of one  or more false positives in each
 reporting period.  The site-wide  false positive rate is critical because it only
 takes one finding of a statistically significant difference to move a facility into
 retesting and/or assessment  monitoring.
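
 The arithmetic behind these site-wide rates is simple; the sketch below (ours, not from the
 paper) reproduces the 10-constituent example and also shows the per-test rate needed to hold
 a 5% site-wide rate, assuming independent comparisons.

    # A minimal sketch (ours) of the site-wide false positive arithmetic described
    # above, assuming the individual comparisons are independent.

    def site_wide_rate(alpha, n_comparisons):
        """Probability of at least one false positive across n independent comparisons."""
        return 1.0 - (1.0 - alpha) ** n_comparisons

    def per_test_rate_for_target(site_target, n_comparisons):
        """Per-comparison alpha needed to hold the site-wide rate at site_target."""
        return 1.0 - (1.0 - site_target) ** (1.0 / n_comparisons)

    # The paper's example: 10 constituents at a 5% rate each -> roughly 40% site-wide.
    print(round(site_wide_rate(0.05, 10), 2))             # 0.40
    # Holding a 5% site-wide rate over those 10 comparisons requires alpha of about 0.5%.
    print(round(per_test_rate_for_target(0.05, 10), 4))   # 0.0051
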

 One approach to minimizing the  site-wide false positive rate is to reduce the
 number of constituents that are  statistically analyzed under detection
 monitoring.   Federal Subtitle  D regulations do not require  that  every
 Appendix I constituent be statistically analyzed.  In fact, the EPA regulators
 who promulgated  the EPA guidance recognize that in detection monitoring it
 may be preferable  to statistically analyze a  subset of the inorganic and organic
 Appendix I constituents.2,3  The choice of the subset should be based upon
 prior monitoring  results, local  hydrogeology  and leachate characteristics.
 Some state regulatory  agencies such as the regional water quality boards in
 California have specified shortened lists of  inorganic parameters  to  be
 statistically analyzed.

 A second approach to reducing site-wide false positive rates for VOC analyses
 is to use composite analyses. Analyses such as Poisson based limits or the
 California  screening method reduce site-wide false positive rates and are
 usually far more appropriate  for VOCs because of the high proportion of non-
 detects commonly found in VOC data. Care  must be taken, however, in the
 application of Poisson based limits.  A recent  EPA review4  criticized a
 commonly used formulation  for the Poisson limit.  This is just one example
 of how the application of statistics to ground water quality data is continuing
 to change  rapidly as more  is learnt about  the ramifications  of using the
 various statistical tests.

 A third approach  to reducing site-wide false positive rates is to reduce the
 false positive  rate of the individual tests. Reducing the false positive rate
 will, however, increase the false negative rate. In  such situations, increasing
 the background sample  size can offset  the  potential increase in the false
 negative rate.  An equation for computing a  reduced false positive rate has
been developed by California regulators and approved by EPA. Alternatively,
power analyses can be performed  to justify the use  of a reduced false positive
rate.
MAXIMIZING STATISTICAL POWER FOR A GIVEN SAMPLE SIZE

The  third  step in controlling  your ground water  monitoring costs is  to
maximize statistical power for a given sample size. The power of a statistical
test is its ability to detect a "true" difference or change. There are a number of
factors that can affect the power of a test and unfortunately, not all statistical
                                           446

-------
tests have the same power under the same circumstances. A key determinant
of power is the statistical sample size (e.g. the number of analytic results). For
example, parametric tests usually have more power than nonparametric tests
for the same sample size.  Thus, a parametric test is often the test of choice for
ground water monitoring, especially when sample sizes are limited, provided the data
are normally distributed.

The difficulty in using a parametric test is that ground  water quality data often
do not fit a normal or log transformed normal distribution when rigorous
normality tests such as the Shapiro-Wilk or Shapiro-Francia tests are used.
The general response to this situation is to proceed with a parametric analysis,
which will yield unpredictable results, or to move to a nonparametric
analysis.  The disadvantage of the parametric analysis in such circumstances
is that it is impossible to accurately control the  power or false positive rate of
the test. The  disadvantage of the nonparametric analysis is that it has much
lower power for a given sample size when compared  to the parametric test if
the data are normally or transformed normally  distributed.

There is one other option that can be employed when the data do not fit a
normal or log transformed normal distribution.  This option is to utilize a
family of transforms identified by Dennis Helsel, one of the US  Geological
Survey's water quality  statistical analysis experts5. These  transforms, called
"The Ladder Of Powers", significantly increase the possibility that the data can
be transformed into a normal distribution and  that a parametric analysis can
be used. This increases the power of the test for a given false positive rate and
sample size.  Thus additional sampling, expensive  retests, and/or being
unnecessarily forced   into  assessment  monitoring can  be avoided.
Unfortunately, the  computations  required to perform and  evaluate the
effectiveness  of these  transforms are quite extensive  and  thus  using
specialized statistical software is often a necessity.
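
 As an illustration of how such a search might be automated (our sketch, not the GSAS™
 implementation or Helsel's exact procedure), the code below tries a small ladder of power
 transforms on a hypothetical background data set and keeps the one whose transformed values
 look most normal by the Shapiro-Wilk test.

    # A minimal sketch (ours) of a "ladder of powers" search: try several power
    # transforms and keep the one with the highest Shapiro-Wilk p-value. The
    # background concentrations are hypothetical; numpy and scipy are assumed.
    import numpy as np
    from scipy import stats

    ladder = {
        "x^2":        lambda x: x ** 2,
        "x":          lambda x: x,
        "sqrt(x)":    lambda x: np.sqrt(x),
        "log(x)":     lambda x: np.log(x),
        "-1/sqrt(x)": lambda x: -1.0 / np.sqrt(x),
        "-1/x":       lambda x: -1.0 / x,
    }

    def best_ladder_transform(values):
        """Return (transform name, Shapiro-Wilk p-value) for the best-fitting rung."""
        values = np.asarray(values, dtype=float)
        p_values = {}
        for name, transform in ladder.items():
            _, p = stats.shapiro(transform(values))
            p_values[name] = p
        best = max(p_values, key=p_values.get)
        return best, p_values[best]

    # Hypothetical background concentrations (mg/L) for one well and constituent.
    background = [0.8, 1.1, 0.9, 2.4, 1.6, 0.7, 3.9, 1.2, 1.0, 5.3]
    print(best_ladder_transform(background))
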


ENSURING FLEXIBILITY

The fourth step  in controlling your  ground  water  monitoring costs  is to
develop a site specific analysis methodology that incorporates the spectrum of
possible changes in data characteristics over time.  For example,  data
distributions, percentage of non detects, or equality of variances can and  often
do dramatically change as ground  water monitoring programs mature. All
too often facilities do not plan for  these  possible  changes.   Instead of
proposing in their permit applications and monitoring plans a decision  logic
for choosing the most appropriate statistical approach  based upon the current
characteristics of the data, they propose one statistical  test based solely on the
limited  data available at the time the application  was submitted. In  turn,
when data characteristics do change, and the facility does  not adjust its
statistical  approaches, retesting  and  assessment  monitoring with all  their
associated increased monitoring costs are  highly probable.
                                        447

-------
 Unfortunately, adjusting the statistical approach once the permit has been
 issued can be costly and used as a mechanism by other parties to raise a host of
 other unrelated issues.  The approach we use is to incorporate into permit
 applications and monitoring plans  a decision logic for choosing the most
 appropriate statistical approach based upon the current characteristics of the
 data.  On a regular basis, the data characteristics are reviewed and the most
 appropriate statistical test is selected based  upon the permit  decision logic.
 While the test may change,  so long  as the decision logic remains  consistent,
 no permit modification is required.  This is a concept that has been accepted
 by US EPA and  numerous state regulatory agencies but again, to  cost
 effectively  implement this type of flexibility, specialized software is often  a
 necessity.


 SUMMARY

 Statistical issues are  driving both short and  long term  monitoring costs at
 municipal  landfills around  the nation.  Utilizing specialized  ground-water
 statistical analysis  software, the costs of performing the statistical analyses
 should rarely exceed 20%  of the total ground  water monitoring costs  for
 ongoing  monitoring programs. The cost of initial statistical evaluation  and
 permit preparation,  however,  may  often  exceed  this  amount.   When
 knowledgeably applied, statistics can reduce the number of samples required,
 minimize retesting, and  prevent  a facility from being unnecessarily forced
 into assessment monitoring, yet  provide a reliable  indication of  a release.
 Unfortunately, when statistical tests are inappropriately applied, the statistical
 findings  can result in grossly inflated monitoring costs and yet still provide
 inaccurate answers.
1 United States Environmental Protection Agency, "Statistical Analysis Of Ground-Water
Monitoring Data At RCRA Facilities - Interim Final Guidance", Office Of Solid Waste,
Washington, D.C., Series No. EPA/530-SW-89-026, April, 1989.
2 United States Environmental Protection Agency, "Statistical Analysis Of Ground-Water
Monitoring Data At RCRA Facilities - Addendum To Interim Final Guidance", Office Of
Solid Waste, Washington, D.C., Series No. EPA/530-R-93-003, July, 1992.
3 United States Environmental Protection Agency, "Regulatory Impact Analysis For
Amendments To The Hazardous Waste Facility Corrective Actions Regulations - Draft
Report", Office Of Solid Waste, Washington, D.C., 1993.
4 Cameron, Kirk M., "RCRA Leapfrog: How Statistics Shape And In Turn Are Shaped By
Regulatory Mandates", presented at The International Biometric Society - Eastern North
America Region Spring Meeting, Birmingham, Alabama, March, 1995.
5 Helsel, D.R. and Hirsch, R.M., "Statistical Methods In Water Resources", Elsevier
Scientific Publishing, New York, 1992.
                                           448

-------
                                                                                        66
              EIS/GWM - AN INTEGRATED AUTOMATED
     COMPUTER PLATFORM FOR RISK BASED REMEDIATION
              OF HAZARDOUS WASTE CONTAMINATION -
                        A HOLISTIC APPROACH

David L. Toth, US EPA, Region III, 841 Chestnut Bldg., Philadelphia, PA 19107
Basile Dendrou, MicroEngineering, Inc., P.O. Box 1344, Annandale, VA 22003
Stergios Dendrou, MicroEngineering, Inc., P.O. Box 1344, Annandale, VA 22003


ABSTRACT

The cleanup of contaminated sites is likely to remain the number one environmental
concern for the  foreseeable future. Successful  remediation must be  based  on a
thorough understanding of the contaminant migration and fate.  Existing simplified
empirical modeling and simulation tools are no longer sufficient to design, regulate,
and manage the contamination problem effectively.  For example, the simple
exponential decay law does not adequately describe the chemical and biological
interactions between the contaminant, the terminal electron acceptors, the soil
matrix, and the available nutrients that take place in bioremediation.  Likewise, for in
situ remediation technologies such as bioventing (interaction of thermal and
chemical processes), pump-and-treat (interaction of mechanical and chemical
processes), vitrification (interaction of thermal, chemical, and mechanical
processes), electrokinetics (interaction of electrical, chemical, and mechanical
processes), and chemical barriers and in situ containment technologies (interaction of
chemical and mechanical processes), a 3-D simulation model capable of quantifying
their impact on the geomedia environment is usually lacking.  The result is an
inaccurate risk assessment and excessive cleanup costs, presenting a heavy burden
to the Nation's economy.  Furthermore, the environment must be considered
holistically, where remediation of one medium (e.g., soil) must not result in the
contamination of another (e.g., air).  Sustainable development requires a holistic
macroengineering approach where interactions of different natural processes are an
integral part of the theoretical model, which must be used to simulate actual
contamination episodes so as to determine optimum, innovative, and effective
mitigative measures.  The Environmental Impact System/Ground Water Module
(EIS/GWM) computer platform was developed to support this novel Contaminant
Migration Risk Assessment approach in a unified computational framework based on
strong Scientific Engines, or simulators, embedded in the platform.  The EIS/GWM
integrated modeling platform is written for MS Windows and has been used to
demonstrate the feasibility of embedding this holistic approach in an integrated/
automated computer platform based on the interaction of natural objects in 3-D
space.  Example cases from EPA Region III, including industrial and defense sites,
illustrate the operation, framework, and philosophy of the EIS/GWM platform.
                                            449

-------
INTRODUCTION

Context of the Environmental Risk Assessment Problem
Risk assessment of the health hazards posed by contamination episodes, and site
remediation to reduce and control those risks, require a good understanding of how
chemicals move through, and interact with, the subsurface and above-ground
environment.  To remove the source or to pump-and-treat the aquifer are simply
declarations of intent, not directly amenable to efficient ways of dealing with the
problem.  What should drive the remediation effort is risk assessment and risk
characterization.  In this light, the question becomes: what is the level of treatment
necessary to meet health risk standards at "end points," compatible with what can be
physically and economically achieved?

For example, let us consider the risk of contamination from a typical hydrocarbon
spill illustrated in Figure  1.  The bulk of  the fuel ("free product")  occupies the
interstitial space  in the  vadose zone  and "floats"  on top of the  water table.
Constituents of the fuel dissolve into the saturated part of the aquifer and tend to
contaminate the aquifer by advection and dispersion. This in turn may contaminate a
water supply well located downgradient from the spill and cause a health hazard.
Direct use of the ground water is one of the most important "end points" to consider
in a risk assessment.  Others include fumes emanating from the unsaturated zone and
pollutants reaching surface waters, either through ground water discharge or by
runoff.  The credibility of a risk assessment hinges to the largest extent on the
ability to accurately predict contamination pathways and, equally importantly, on
the ability to predict the efficacy of treatment alternatives.

In the case of Figure 1, pumping the free product will not eliminate the entire source.
The fuel adsorbed on the soil particles will continue to dissolve and contaminate the
ground water.  The question then is what level of soil treatment to seek.  The way to
address the problem is to determine all pathways to 'end points' and relate residual
source strength to acceptable risk standards at those 'end points.'  For this exercise to
have any credibility the following conditions must be met: (1) a thorough understanding
must be demonstrated of the processes that link the source to 'end points,' usually by
means of a simulation model; and (2) the model must be validated against site-specific
data.  Interestingly, in some instances nature can be shown to perform some
remediation on its own with minor human intervention, but this also requires
thorough proof with the use of simulation models and compliance monitoring.

The EIS/GWM platform offers the ability to go from the screening level (rudimentary)
to the most detailed level progressively, within the same platform, using all
previously compiled data and calibrated models.  Therefore, the distinction between
levels of analysis does not require an a priori selection among a hierarchy of models:
all modeling needs are automatically available under the same EIS/GWM roof.
                                             450

-------
[Schematic: possible ground water pathways from the source of contamination to humans and to aquatic plants and animals.]
               Figure 1. Typical ground water pathways to humans.
Early in the process, screening models are used to identify environmental concerns.
Screening level modeling, often based on a structured-value approach, is designed to
be used with regional/representative information. Models such as the EPA Hazard
Ranking System (HRS) divide the site and release characteristics into predetermined
categories that are assigned a point value based on answers to a set of questions. The
score from such systems is useful to determine if a situation is a problem, but not to
provide a risk-based relative ranking of problems.

Detailed analyses  require a highly specialized  assessment  of potential  impacts.
Methodologies, such as the Chemical  Migration Risk Assessment  (CMRA), are
composite  coupled approaches that use  numerically  based models  that are not
physically linked and represent single-medium models, implemented independently
in series.  This approach usually is reserved for the most complex models, is data
intensive,  and  relies on the expertise of  the  analyst.  Although such tools are
appropriate for their intended application, extension beyond site-specific applications
often is either difficult or cost-prohibitive.

An alternative to these Analytical/Semi-Analytical/Empirical-Based multimedia
models (designated as  analytical  models) is offered by the EIS/GWM  platform
which includes a large pool of numerical models (scientific engines) that can be used
for prioritization, preliminary assessments, and exhaustive risk assessment studies.
                                          451

-------
These models are all integrated in the EIS/GWM platform. They are fully coupled
approaches that use numerically based algorithms, combined into a single code to
describe each environmental medium.

Figure 2  illustrates the value of simulation models in the risk assessment process.
They can be used in  a detailed  (i.e., numerical) or  an initial-screening (i.e.,
ranking/prioritization) assessment, where data and circumstances warrant. Figure 2
also illustrates  the relative  relationships between  input-data  quality,   output
uncertainty,  and types of problems at each level of assessment. The computational
requirements tend to be less intensive at the earlier stages of an assessment when
there are fewer available data, and, correspondingly, the uncertainty with the  output
results tends to  be  greater.   As  the  assessment   progresses, improved  site
characterization  data and conceptualization  of the  problem  increase, thereby
reducing the overall uncertainty in risk estimates.  As indicated in Figure 2, the
EIS/GWM platform offers the most accurate site-specific evaluation of risk.

EIS/GWM integrates standard approaches into a consistent and powerful tool. This
multimedia (multiphase) platform incorporates medium-specific,  transport-pathway,
and exposure-route codes  based on standard, well accepted algorithms; hence, their
acceptance by regulators is favored.   For example,  numerical solutions  to  the
advective-dispersive equation describe contaminant migration in the ground water
environment.  The platform allows the analyst to link migration models with risk
exposure models and immediately assess the entire process of contaminant release,
migration (transport), exposure, and risk at once.
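For reference, a common general form of the advective-dispersive equation referred to above is (written here in standard textbook notation; this is not necessarily the exact formulation coded in EIS/GWM):

    \frac{\partial C}{\partial t} \;=\; \nabla \cdot \left( \mathbf{D}\,\nabla C \right) \;-\; \mathbf{v} \cdot \nabla C \;-\; \lambda C \;+\; S

where C is the dissolved concentration, D the hydrodynamic dispersion tensor, v the seepage velocity, lambda a first-order decay (reaction) constant, and S a sink/source mixing term; sorption is commonly introduced through a retardation factor multiplying the storage term.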
[Chart: level of analysis from screening to analytical; input-data quality from least to highest; output uncertainty from greatest to least.]
      Figure 2. Relationship between input-data quality and output uncertainty
                                (after Whelan, et al., 1994).
                                               452

-------
The value of the platform is exemplified by  an  order-of-magnitude reduction in
assessment time, as compared to the  traditional risk  assessment models. It can
concurrently  assess  multiple  waste sites with  multiple  constituents to include
baseline  (at  t  =  0),  no  action  (at t  > 0), during-remediation, and  residual
(post-remediation)  assessments,   including   changing  land-use  patterns   (e.g.,
agricultural, residential, recreational,  and industrial).  Its scientific engines can
describe  the environmental  concentrations  within each  medium  at  locations
surrounding the waste sites to a radius of 80 km (50 mi).  Spatially distributed,
three-dimensional concentration isopleths can be constructed detailing the level of
contamination within each environment.  By coupling land-use patterns with the
environmental concentrations, three-dimensional risk isopleths can be developed (as
a function of land-use pattern and location).

RISK ASSESSMENT IMPLEMENTATION UNDER EIS/GWM

The EPA risk assessment methodology for exposure assessment suggests a series of
standard default exposure routes  and exposure  assumptions/parameters for use in
conjunction with discrete current and future land use scenarios. While the exposure
routes themselves may be more or less applicable to a specific site, the majority of
the standard exposure assumptions advocated for use in estimating chemical intakes
are not  site-specific, nor are they necessarily the most current, relevant numerical
values. Historically, the use of alternate standard assumptions of the development of
site-specific assumptions has  been  met  with varying degrees of acceptance by
regulatory  agencies, although the existing guidelines for these assumptions (EPA
1989, 1991b) and the guidelines regarding the formulation of site-specific  PRGs
(EPA 199la) advocate the use  of site-specific  information wherever possible.
Site-specific information and viable exposure routes will vary with the location,
magnitude, and nature of the spill or leak,  as the local human  populations, regional
topology and hydrogeology, and land use. Practical, site-specific considerations as
implemented in the EIS/GWM platform are discussed below.

Rather than using point estimates in  exposure assessment, the EIS/GWM simulation
tools can be  used to estimate distributions for  exposure assumptions. Use of this
methodology does not alter the basic structure of the exposure estimate.  However, it
does refine the way chemical intakes are calculated in the exposure assessment.

Figure 3 illustrates the overall approach as implemented in the program. The starting
point  is to establish the statistical characteristics of all pertinent input parameters
characterizing a site. Such parameters include: soil properties (soil layers, porosity,
hydraulic  conductivities,  dispersion  etc.),  chemical  properties   (adsorption,
stoichiometry, etc.), as well as loading/source  site-specific conditions. Then, their
mean  values and standard  deviations automatically  feed the scientific engines
(simulation algorithms) available in the platform.
                                             453

-------
[Figure 3 flow chart: statistical treatment of the input parameters (hydraulic conductivity, dispersion) -> mean values and standard deviations of the input variables -> EIS/GWM contaminant migration simulations -> mean values and standard deviations of concentrations at the receptors -> maximum entropy to obtain the distribution at a given location -> EIS/GWM risk exposure module -> lifetime average daily intake (LADD), Intake = (Cw)(IR)(EF)(ED) / [(BW)(AT)] -> Cancer Risk = LADD x (Oral Slope Factor) -> risk maps.]
           Figure 3. Risk assessment implementation in EIS/GWM.
                                          454

-------
 These algorithms simulate different natural processes that include:

   *  ground water flow, with special features such as slurry walls, geosynthetics,
      and geologic faults;
   *  single-species contaminant migration processes such as advection (computed
      from fluxes produced by the flow module), dispersion, chemical reaction
      (sorption, ion exchange, chemical decay), and sink/source mixing; and
   *  migration and degradation of hydrocarbons, accounting for oxygen-limited
      biodegradation occurring at the site.

 The outcome of the simulation is the spatial distribution of the mean values and
 standard deviations of the concentrations throughout the site. The mean values are
 the  conventional point  estimates  as  produced by  the  corresponding algorithms
 activated on a given site.  A first-order approximation is used to compute the
 standard deviation of the concentrations "C," assuming that all inputs are statistically
 independent.
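 A minimal sketch of such a first-order (delta-method) propagation, assuming the simulator can be treated as a function of statistically independent inputs; the finite-difference sensitivities, the toy concentration function, and the numbers below are illustrative assumptions, not EIS/GWM internals:

    import numpy as np

    def first_order_std(f, means, stds, rel_step=1e-3):
        """Approximate the std. dev. of f(x) assuming independent inputs:
        Var[f] ~ sum_i (df/dx_i)^2 * Var[x_i]  (first-order Taylor expansion)."""
        means = np.asarray(means, dtype=float)
        stds = np.asarray(stds, dtype=float)
        base = f(means)
        var = 0.0
        for i, (m, s) in enumerate(zip(means, stds)):
            x = means.copy()
            h = max(abs(m) * rel_step, 1e-12)   # finite-difference step
            x[i] = m + h
            dfdx = (f(x) - base) / h            # sensitivity of C to input i
            var += (dfdx * s) ** 2
        return base, np.sqrt(var)

    # Hypothetical example: concentration as a toy function of
    # hydraulic conductivity K, dispersivity a, and source strength q.
    def toy_concentration(p):
        K, a, q = p
        return q * np.exp(-0.5 * K) / (1.0 + a)

    mean_C, std_C = first_order_std(toy_concentration,
                                    means=[2.0, 10.0, 5.0],
                                    stds=[0.4, 2.0, 1.0])
    print(mean_C, std_C)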

 At this stage we only know the mean and variance of the concentration probability
 distribution.  However, invoking the principle of maximum entropy (Jaynes 1978),
 the concentration probability distribution assigned is the one that maximizes the
 information entropy subject to the constraints imposed by the given information
 (i.e., the mean and variance values).  Detailed solutions for a number of cases are
 given in Goldman (1968), Tribus (1969), and Dendrou (1977).
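 For context, the standard maximum entropy result under these two moment constraints (a textbook fact, not a claim about the EIS/GWM implementation) is that the entropy-maximizing density takes the exponential-family form

    p(C) \;\propto\; \exp\!\left( \lambda_1 C + \lambda_2 C^2 \right),

 which on an unbounded support is exactly the normal distribution N(mu, sigma^2), and on a restricted support (e.g., C >= 0) becomes a truncated member of the same family, with lambda_1 and lambda_2 fixed by the mean and variance constraints.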

 In the EIS/GWM simulation model, the analyst determines  a continuous or discrete
 distribution to describe each random variable. This distribution is defined in terms
 of the probability density function (PDF) or the cumulative distribution function
 (CDF). Several distributions are defined by one, two, or more parameters. When
 running the EIS/GWM simulation model, the computer automatically proceeds to
 determine the distribution of daily intakes. From this distribution, a specific intake
 can be selected (e.g., the average or mean intake, median intake, or 95th percentile
 upper  confidence limit  on the intake) that, in combination with the appropriate
 toxicity benchmark concentration, is used to calculate risk.

 EIS/GWM simulations can also include correlations between variables (Smith et al.
 1992). For example, there is a correlation between body weight and ingestion rate.
 Using  strongly correlated variables in deriving an  estimate of exposure serves to
 strengthen  the estimate by preventing nonsensical combinations  of variables in its
 derivation.
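 A minimal sketch of how two correlated exposure variables (for example, body weight and ingestion rate) can be co-sampled so that nonsensical combinations are avoided; the lognormal marginals and the correlation value are illustrative assumptions, not project data:

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative (assumed) lognormal marginals for body weight [kg] and
    # water ingestion rate [L/day], with an assumed positive correlation.
    mu = np.array([np.log(70.0), np.log(2.0)])   # log-space means
    sd = np.array([0.2, 0.25])                   # log-space std. devs.
    rho = 0.6                                    # assumed correlation

    cov = np.array([[sd[0]**2,        rho*sd[0]*sd[1]],
                    [rho*sd[0]*sd[1], sd[1]**2       ]])

    # Draw correlated normals in log space, then exponentiate -> lognormals.
    z = rng.multivariate_normal(mu, cov, size=10_000)
    body_weight, ingestion_rate = np.exp(z).T

    # Heavier individuals now tend to drink more, which prevents nonsensical
    # pairings such as a very low body weight with a very high ingestion rate.
    print(np.corrcoef(body_weight, ingestion_rate)[0, 1])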

In most cases, the daily human intake calculated using the EIS/GWM  simulation is
less than that calculated using point estimates. This is not to suggest the use of the
platform because it produces lower estimates, but rather because its estimates can be
associated with probabilities. This results in increased confidence in the estimate of
intake, thus ensuring increased confidence in public health protection.
                                             455

-------
CASE STUDY

Site Description

The selected site in this application is an industrial site where BTEX has been
released into the ground water from leaky oil tanks.  It is determined that ground water
is the medium of concern and that off-site residents are the population of concern, based
on their use of ground water as a drinking water source.  The site covers an area of
approximately 4550 x 4720 meters.  An LNAPL fuel has been released into the soil
and the shallow confined aquifer as shown in Figure 4.  The contaminant plume
migrates toward the north, where several drinking water wells are located.  These
wells are of primary importance to the adjacent municipality, and a detailed
ground water study is initiated to predict the extent and migration of the spill in the
shallow confined layer that is the subject of the investigation.  Samples collected
around the perimeter of the spill indicate that free product has not reached the
perimeter of the oil tank farm.  The flow regime, which is relatively uniform throughout
the year, together with the low annual precipitation (4 cm), corroborates the assumption
of a stable piezometric map throughout the duration of the selected scenario analysis.
             Figure 4. Site layout and location of NAPL contamination.
                                             456

-------
The objectives of the study are the following:
  *  Determine the extent of contamination; rate of progression.
  *  Evaluate the risk of contamination of municipal wellfields.

Groundwater FLOW and Migration Models

The EIS/GWM platform is used to simulate the migration of dissolved-phase
contaminants in the vicinity of the oil tank site.  This simulation necessitates the use
of the groundwater flow (Miflow) and migration (Migra &  BioRemSD) models,
built on the site-specific data. These data  indicate  the  existence of a relatively
shallow layer of medium-grained sand throughout the examined area (Figure 5). The
thickness of this aquifer varies between 0 and 20 meters, overlying a relatively
impermeable clay layer.  The general grid orientation is in the direction of the flow.
Several grid discretizations were examined leading to the 48x48 macroelement mesh
shown below. Calibration runs for heads involved adjusting hydraulic conductivities
so as to  match observed piezometric heads  (first  100 days of  the contaminant
migration).  A constant  source  mechanism  is  retained to  specify  the  initial
contaminant plume.
          Figure 5. 3-D configuration of the geologic strata (sandy layer overlying a clay layer).
                                            457

-------
A series of sensitivity analyses was performed on the conductivities, as well as on the
advective resolution process, using the MOC method (USGS Method of
Characteristics) and the Discrete Element Method.  The latter, being more efficient,
was retained for the production runs used for the risk analysis.  Among the several
grids used to discretize the site area, the medium grid (100 x 100 m cells) offered
the best cost-efficiency ratio; that is, the accuracy of the simulation is maintained
while the computational burden is kept at acceptable levels.

Calibration of the BTEX plume involved many steps, some of which required a
sensitivity analysis that included the proper characterization, in order of
importance, of the advection mechanism (influenced by the flow regime), dispersion,
retardation, decay properties, and other chemical interactions.  The following
computer runs were retained for the risk assessment study:
File Name   Description                Degrees of Freedom   Computational Module
RISKM       Medium mesh,               6,912                Migra
            one-species analysis
RISKB       Medium mesh,               6,912                Biorem 3D (includes
            multispecies analysis                           biodegradation process)

The results of these simulations are shown in Figures 6 and 7.  Figure 6 illustrates the
variation of the piezometric heads in the vicinity of the contaminant plume, while
Figure 7 shows the extent of the contaminant plume 1000 days after the initial
release.  As can be seen, the extent of the contaminant plume has reached the zone
of influence of the municipal wells, the "end points" (receptors) in our investigation.
To proceed with the risk assessment of the use of the ground water as a drinking
water resource, we need to use the exposure equation to estimate the intake from
ground water.  The concentration of BTEX (Cw) in this equation is obtained from the
computational modules "Migra" and "Biorem 3D."  This equation is applied to
each well located north of the contaminant plume.  A summary of the raw values for
the concentrations is given below:
Well Number                    Results from Migra       Results from Biorem 3D
                               [ppm]                    [ppm]
G171                           2.15                     0.56
G25                            1.52                     0
G23                            3.09                     0.89
Uncertainty in the Estimates   Moderate Uncertainty     Large Uncertainty
                               [0.5 - 1.0 ppm]          [1.5 - 2.0 ppm]
Natural biodegradation considerably reduces the concentrations of dissolved BTEX
in the ground water, but these estimates include a large uncertainty that may affect
the associated risk to drinking water.
                                       458

-------
               Figure 6.  Computed Piezometric Heads.

    Figure 7.  Computed Extent of BTEX Plume (Migra Simulation).
                              459

-------
Risk Exposure Study

The deterministic values of the BTEX concentrations (Cw) reported in the previous
table will typically result in a point-estimate scenario of benzene ingestion in
water.  However, to perform an uncertainty analysis, a distribution is needed for all
the exposure parameters (such as the ingestion rate, the exposure frequency, the
exposure duration, the body weight, and the cancer potency factor) and for the BTEX
concentration at the "end points" (receptors).  The distribution of the exposure
parameters is obtained from databases of laboratory experiments.  However, obtaining
the distribution of the BTEX from the migration simulation is a more elaborate task.
Several options are offered for this implementation, as shown in the table below:
Method               Fundamental Principle        Analysis Type &    Size [Degrees   Number of Runs
                                                  "End" Results      of Freedom]
Deterministic        One value estimate           Nonlinear;         6,912 DOF       1
(Point estimate)                                  Mean
Monte Carlo          Multiple estimates for       Nonlinear;         6,912 DOF       5,000 to 10,000
                     range of probable values     Distribution
Generalized point    Equivalent discrete          Nonlinear;         6,912 DOF       2,048
estimate             probabilities at             Distribution
                     reaction points
Stochastic Finite    Karhunen-Loeve               Nonlinear;         37,600 DOF      1
Element              expansion                    Distribution
Uncertainty model    Max entropy                  Nonlinear;         6,912 DOF       1
(proposed model)     principle                    Distribution
A quick observation of the resources needed to estimate the BTEX distribution at the
"end points" favors the proposed uncertainty model.  Indeed, the maximum entropy
principle explained in the previous section allows a cost-effective evaluation of the
BTEX concentration distribution without any penalty on the computational effort.  At
this stage we can combine all the various distributions to arrive at the
distribution of the "Cancer Risk" due to the BTEX contamination of this particular
site.  This is achieved as shown in Figure 8.

First we compute the lifetime average daily dose (LADD) based on  the input
distribution of the following parameters: the BTEX concentration obtained from the
simulation, the ingestion rate, the exposure frequency, the exposure  duration, the
body weight, the life span and the conversion factor.  Then  the cancer  risk is
estimated based on the following equation:

                Cancer Risk = (LADD)(CPF)
Where:
     LADD = Lifetime Average Daily Dose (mg/kg/day)
     CPF  = Cancer Potency Factor (mg/kg/day)^-1
                                       460

-------
[Figure 8 schematic: from the input distributions, compute the intake LADD = (Cw)(IR)(EF)(ED) / [(BW)(LS)(CF)], then compute Cancer Risk = LADD x CPF.]
              Figure 8.  Estimating distribution of human daily intakes.

The input values that are used in this risk assessment study are shown below.  For the
exposure parameters we assumed lognormal and beta distributions as the most
representative of the laboratory tests.
Parameter                     Distribution   Mean                      Variance   Min-Max Range
Chemical Concentration        Lognormal      (mg/L, from simulation)      -             -
Ingestion Rate                Lognormal      2.0 L/day                  0.25            -
Exposure Frequency            Beta           350 days/year                -       Min=250, Max=365
Exposure Duration             Beta           70 years                     -       Min=9, Max=70
Body Weight                   Lognormal      70 kg                        -             -
Cancer Potency Factor (CPF)   Lognormal      0.029 (mg/kg/day)^-1       0.67            -
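As a cross-check of the point-estimate arithmetic (a sketch of the conventional calculation only, using the mean parameter values above and the Migra concentration at well G23, about 3.1 mg/L, from the table of raw values):

    # Point-estimate check of the well G23 (Migra) values, using the mean
    # exposure parameters tabulated above. This sketches the conventional
    # point-estimate calculation only, not the EIS/GWM uncertainty models.
    Cw = 3.1      # BTEX concentration at G23 from Migra [mg/L]
    IR = 2.0      # ingestion rate [L/day]
    EF = 350.0    # exposure frequency [days/year]
    ED = 70.0     # exposure duration [years]
    BW = 70.0     # body weight [kg]
    LS = 70.0     # life span [years]
    CF = 365.0    # conversion factor [days/year]
    CPF = 0.029   # cancer potency factor [(mg/kg/day)^-1]

    LADD = (Cw * IR * EF * ED) / (BW * LS * CF)   # lifetime average daily dose
    risk = LADD * CPF                             # excess lifetime cancer risk

    print(f"LADD = {LADD:.2e} mg/kg/day")   # ~8.5e-02, cf. 8.4e-02 in Table 1
    print(f"Cancer risk = {risk:.1e}")      # ~2.5e-03, cf. 2.4e-03 in Table 1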


These parameters are now combined to estimate the cancer risk according to the
above equation.  A typical set of results is shown in Table 1 for municipal well G 23
(the retained "end point"; see also Figure 7).  Several uncertainty models are
compared with the conventional point estimate approach proposed by EPA.  These
models are based on the computed concentration at well G23; the exponential
model retains only the mean concentration from the simulation, while the lognormal
model retains both the mean and variance of the concentration.  Two contaminant
migration scenarios are also considered: one which considers advection and
dispersion as the predominant natural processes (implemented in module Migra) and
one which also includes biodegradation (implemented in module BioRemSD).  A
variety of very interesting observations can be made, focusing the discussion first on
the merits of the two simulation scenarios and then on the advantages of the
corresponding uncertainty models.
                                      461

-------
     Table 1. Summary of the results of the risk assessment study for well G 23.

Processing Module   Max Entropy           G23 BTEX         Life Average    Cancer Risk    95th Percentile
                    Uncertainty Model     Concentration    Daily Dose      (Expected      of PDF
                                          (mg/liter)       (LADD)          Value)
Migra               Point Estimate        3.1              8.4 x 10^-2     2.4 x 10^-3          -
(Advection,         Exponential (mean)    3.1              2.5 x 10^-2     2.1 x 10^-4     2.2 x 10^-4
Dispersion)         Lognormal             (3.1), (0.5)     2.1 x 10^-2     1.9 x 10^-4     2.0 x 10^-4
                    (mean, variance)

Biorem              Point Estimate        0.89             2.36 x 10^-2    6.0 x 10^-4          -
(Advection,         Exponential (mean)    0.89             0.3 x 10^-2     6.2 x 10^-5     1.9 x 10^-4
Dispersion,         Lognormal             (0.89), (0.3)    0.2 x 10^-2     5.5 x 10^-5     1.6 x 10^-4
Biodegradation)     (mean, variance)
Comparing Simulation Scenarios at Well G 23

The discussion is focused on the results of the conventional point estimate
procedure.  As can be seen in Table 1, the expected "Cancer Risk" is the highest
when the simulation includes only advective and dispersive terms (module Migra).
The simulation that includes biodegradation (module BioRemSD) shows a
dramatic decrease in the expected cancer risk.  In the context of hazardous waste
cleanup, a site-specific cancer risk between 10^-4 and 10^-6 may be deemed acceptable
by the appropriate authority.  In that respect, when biodegradation is accounted for,
the point estimated risks are acceptable.  However, this picture changes when we
consider the results based on the distribution of the input simulation parameters.  In
this particular case, for example, little gain is obtained when biodegradation is
included.  This is due to the high uncertainties associated with the input site-
dependent parameters of the biodegradation process.

Comparing Uncertainty Models at Well G 23

In general, the use of the uncertainty models results in reduced estimated cancer risk.
As can be seen in Table 1, the point estimate of the cancer risk gives 2.4 x 10^-3,
while the 95th percentile for the cancer risk based on the exponential model is
2.2 x 10^-4.  The lognormal uncertainty model decreases this value even further
because the quality of the information is better, since the variance of the computed
concentrations is also included in the risk assessment study.
                                       462

-------
CONCLUDING REMARKS

Aiming at Improving Risk Characterization under EIS/GWM

The basic objective of the risk assessment study under EIS/GWM is to be able to
better characterize the risk as our knowledge of the natural processes affecting the
contaminant migration improves. Figure 9 illustrates the outcome  of the present
analysis. It  is clear that risk assessment using the conventional statistical approach
leads only to an average risk estimate based on in-situ measurements that are lumped
together over the entire studied domain. As a consequence, the same risk estimate is
applied to all "End Points". The penalty on some of these "end Points" is steep as the
high estimated cancer risk may result in a prohibitive remediation cost.
[Figure 9 schematic: under the conventional statistical approach, measurements are lumped over the studied domain and the same estimated cancer risk (on the order of 10E-3) is applied to all "end points"; under risk assessment with EIS/GWM, results are applied to each "end point" individually, with an estimated cancer risk on the order of 10E-4 at the 0.95 level.]
                  Figure 9. Simulation based risk assessment.
                                             463

-------
A more pragmatic approach is adopted in this document which improves greatly on
the conventional risk evaluation. The improved methodology combines statistics of a
site characterization with a  variety of numerical models  that  simulate  different
physical processes. This results in reducing the inherent uncertainties of the study,
narrowing the probability distribution function  of the health risk. This  approach
requires a good understanding of the use of the different models in the overall risk
assessment. Otherwise the high uncertainty of the estimated concentrations at  "End
Points" will again 'flatten' the probability curve resulting  in high estimated risk
percentiles.

Numerical models designed to describe and simulate environmental systems cover a wide range of
detail and complexity: they range from very simple  statistical  black-box  models to  the
"all-inclusive", multiphase, spatially discretized simulation models  offered  under the  EIS/GWM
platform.  But even for the most detailed and refined models, macroelements (discrete elements or
compartments) require some distribution at a scale larger than that of the size of the sample from the
field or experiment. In fact, what models really describe are simplified conceptualizations of the
real-world system, which are very difficult to relate directly to the data point samples of these
systems.  In that respect, models and  data operate on two  different levels of abstraction and
aggregation, and therefore traditional data from a spatial or functional microlevel can hardly be used
directly. Instead, from the available data one can try to derive information about the system  at the
appropriate scale, for comparison with the  respective modeling factors. Ideally, the measurements
should be made directly at the appropriate level, but some of the more promising techniques in
environmental data collection are still in their infancy, at least as far as scientific applications are
concerned.

The two elements that will drive the risk based approach in the near future are:

   *  the introduction and use of scale-dependent statistics; and
   *  the association of cost with the  corresponding target risk reduction level in
      relation to the uncertainty of that estimate.

Scale-dependent statistics is an aspect of the risk-based remediation approach which
must be thoroughly developed. Elements of the statistical theory of scale dependence
exist but their application in risk assessment is lacking. As an example, geostatistical
kriging as implemented in EIS/GWM  can be used  to  derive 'point' estimates or
'volume' estimates. Depending on  the nature of an input parameter to a  simulation
model (e.g. conductivity, or initial  concentration) a point estimate or a volume
estimate may be the appropriate statistical inference to use. Similarly, scale must be
accounted for at the level of the field  or laboratory measuring device.

Finally, cost must by necessity be a key element of the risk-based approach, because
the marginal rate of return per unit of additional risk reduction will show whether we
have reached the level of diminishing returns.  When this happens, the system must be
evaluated at a higher level of accuracy (narrowing the probability distribution)
before further analyzing remedial options.
                                               464

-------
REFERENCES

USEPA, 1989a.  OSWER Models Study: Promoting Appropriate Use of Models in Hazardous
Waste/Superfund Programs.  Information Management Staff, Office of Program Management and
Technology, Office of Solid Waste and Emergency Response, USEPA, Washington, DC.

USEPA, 1989b.  Risk Assessment Guidance for Superfund, Volume I: Human Health Evaluation
Manual (Part A).  Interim Final.  EPA/540/1-89/002.  Office of Emergency and Remedial Response,
USEPA, Washington, DC.

USEPA, 1990.  Report on the Usage of Computer Models in Hazardous Waste/Superfund Programs:
Phase II Final Report.  Information Management Staff, Office of Solid Waste and Emergency
Response, Washington, DC.

USEPA, 1991.  Risk Assessment Guidance for Superfund: Volume I, Human Health Evaluation
Manual (Part B, Development of Risk-Based Preliminary Remediation Goals).  Publication
9285.7-01B, Office of Emergency and Remedial Response, Washington, DC.

USEPA, 1992.  MMSOILS: Multimedia Contaminant Fate, Transport, and Exposure Model.  Office
of Research and Development, Washington, DC.

Vandergrift, S.B. and R.B. Ambrose Jr., 1988.  SARAH2, A Near Field Assessment Model for
Surface Waters.  EPA/600/3-88/020.

Whelan, G., J.W. Buck, D.L. Strenge, J.G. Droppo Jr., and B.L. Hoopes, 1986.  Overview of the
Multimedia Environmental Pollutant Assessment System (MEPAS).  Hazardous Waste and Hazardous
Materials, vol. 9, no. 2, pp. 191-208.

Whelan, G., D.L. Strenge, J.G. Droppo Jr., and B.L. Steelman, 1987.  The Remedial Action
Priority System (RAPS): Mathematical Formulations.  PNL-6200.  Pacific Northwest Laboratory,
Richland, Washington.

Whelan, G., J.W. Buck, and A. Nazarali, 1994.  "Modular Risk Analysis for Assessing Multiple
Waste Sites."  PNL-SA-24239, In: Proceedings of the U.S. DOE Integrated Planning Workshop,
U.S. Department of Energy, Idaho National Engineering Laboratory, Idaho Falls, Idaho, June 1-2,
1994.

Woolhiser, D.A., 1975.  The watershed approach to understand our environment.  Journal of
Environmental Quality, vol. 4, pp. 17-20.

MicroEngineering, 1994.  "BIOREM-3D, Contaminant Biodegradation Models Under the
EIS/GWM Platform,"  Report No. Mi-94-M009.

MicroEngineering, 1994.  The EIS/GWM Integrated Computer Platform: "Environmental Impact
System for Contaminant Migration Simulations and Risk Assessment,"  Report No. Mi-94-M001.

MicroEngineering, 1994.  "Automated Spatial Inference - Geostatistical and Fractal Interpolation
Schemes Under the EIS/GWM Platform,"  Report No. Mi-94-M004.

MicroEngineering, 1994.  "Framework for an Automated RI/FS Study Under the EIS/GWM
Platform,"  Report No. Mi-94-M005.

MicroEngineering, 1995.  "Environmental Risk Assessment Under the EIS/GWM Platform,"
Report No. Mi-95-M030.
                                                 465

-------
67
 INTER-LABORATORY COMPARISON OF QUALITY CONTROL RESULTS FROM
     A LONG-TERM VAPOR WELL MONITORING INVESTIGATION USING A
                 HYBRID EPA METHOD T01/T02 METHODOLOGY

R. Vitale and G. Mussoline, Environmental Standards,  Inc. 1140 Valley Forge Road, Valley
Forge, PA 19482-0911  and W. Boehler,  Suffolk County Department of Health Services,
Forensic Sciences Building, Hauppauge, NY 11787-4311.

ABSTRACT

Analyses of air samples have received a significant amount of attention since the passage of the
Clean Air Act (CAA).  Of particular interest is the analysis for volatile organic compounds
(VOCs) in air.  Although the CAA recently has focused a significant amount of attention on the
analysis of air samples,  a very large, complex air investigation was initiated in 1988 when a
small leak in an underground gasoline line was discovered, but not before over a million gallons
of gasoline was released to the saturated and unsaturated zones, under and around a major oil
terminal in  New York State.  The facility is located in a generally residential area and as such,
the possibility of gasoline fumes migrating to the basements of residents,  all of which  were
utilizing municipal water supplies, was of concern to the facility owners and the local health
department.

During 1988 and 1989, the facility owners pro-actively installed several hundred ground water
and vapor monitoring wells on the facility and in the nearby residential community.  Extensive
monitoring of the vapor wells for eighteen VOCs has been conducted over the last 7-year period
via air sample collection, using calibrated personal sampling pumps with multi-media
Tenax/Ambersorb sorbent tubes followed by direct thermal desorption GC/MS analytical
techniques (a hybrid of EPA Methods T01 and T02).  As a fairly low analytical action limit of
10 nl/L of benzene was established based on conservative health considerations, the facility
owners contracted the primary author to design, implement, and oversee the quality assurance
program such that data quality was evaluated on a real-time basis.

During the  aggressive remediation and weekly vapor  well monitoring that has occurred  since
early 1988, a significant amount of performance data has been generated by two commercial
laboratories and one regulatory laboratory and has been independently validated.  This paper
presents a comparison of the QC results generated since 1988, such as inter- and intra-laboratory
duplicates/triplicates, surrogate spike recoveries and double-blind performance evaluation results
between the three laboratories over the seven-year period.  Based on these performance data,
the hybrid EPA T01/T02 methodology appears to represent an acceptable alternative to the more
recently commercially adopted EPA T014 (e.g., Summa canister) methodology for the analysis
of VOCs in air.
                                            466

-------
INTRODUCTION

In 1987, a gasoline leak occurred at a storage facility of a large petroleum distributor located
in New York State. It was estimated that over one million gallons of gasoline was released to
the saturated  and unsaturated zones under and around a petroleum terminal from a hole in an
underground  supply line on the petroleum distributor's property.  Approximately 110 vapor
monitoring wells were installed in and around the potentially impacted residential community.
In addition, approximately 200 water monitoring wells were also installed around the facility and
in the surrounding community.

An initial survey of the impacted area revealed that one of the water monitoring wells contained
seven (7) feet of gasoline floating product.1 Public health was of concern due to the possibility
of toxic gasoline fumes entering the basements of the potentially impacted community.

Approximately 470,000 gallons of gasoline were recovered during the first three years of the
clean-up of this spill1; however, the continuous monitoring of the remaining fuel would require
additional time and effort.  Due to the long-term monitoring (estimated 5 - 10 years) of the
vapor wells, "normal," frequent indoor air quality sampling was not feasible due to the
inconvenience that would be placed upon the homeowners surrounding the facility.  The
alternative sampling design that was established involved an outdoor soil vapor monitoring well
program.  The soil vapor monitoring wells were installed in and around the potentially affected
residential area.  This type of program allowed accessibility at any time and enabled the soil
vapors to be tested at typical basement depth.

The vapor wells were constructed by blending existing vapor probe technology with a length of
3/8" outside diameter Teflon® tubing (at basement depth) with a thumb-wheel fitting capable of
a leak-tight seal for the organic sampling tubes (see Figure 1).1,2
Figure 1
Vapor Monitoring Well Construction
[Schematic: cover plate and locking plate at ground level with a thumb-wheel fitting; weep hole; grout (concrete/bentonite mix); Teflon tube inside flush-joint 2.25" PVC pipe screened with 0.020" slots; steel vault; gravel pack above the screen; total depth 8 ft.]
                                              467

-------
EXPERIMENTAL
Approximately 25 - 30 vapor wells were sampled on a weekly basis.  Multi-layer sorbent tubes2
were employed as the collection vessels for this investigation.3  The sorbent tubes were
constructed of Pyrex® glass (20 cm length x 6 mm O.D. and 4 mm I.D.).  These tubes contained
sequential layers of glass beads, Tenax, Ambersorb XE-340, and charcoal adsorbents, which were
held in place with a glass frit and glass wool (see Figure 2).1
                                       Figure 2
                                  Sorbent Tube Design
                                  [Schematic: 1/4" O.D. tube.]
Before each sampling round, the sorbent tubes were conditioned by the analytical laboratory.
This conditioning involved a purge with ultra high purity helium (60 mL/min) at a temperature
of approximately 310°C for twenty minutes. After the 20 minute conditioning period, the tubes
were  continuously purged until room temperature  was  achieved.  A surrogate compound,
4-Bromofluorobenzene (BFB) was added to each conditioned tube just prior to shipment to the
field sampling team. Each tube was placed in a storage container and shipped in a cooler to the
field sampling team.
Sample collection employed the use of individual personal pumps.  The pumps were
programmed to obtain an air sample volume of approximately 1.0 liter.  The flow rate for these
pumps was typically around 50 mL/min.  The analytical design consisted of a Multi Tube
Desorber (MTD), a concentrator, and a Gas Chromatograph/Mass Spectrometer (GC/MS).
Eighteen volatile organic target compounds (Table 1) were investigated.
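A small back-of-the-envelope check of what these sampling parameters imply (the ideal-gas molar volume and the assumption of collection at about 25 °C are ours, not stated in the text):

    # Back-of-the-envelope sampling arithmetic for the vapor-well program.
    # Assumptions (not from the paper): vapor treated as an ideal gas at ~25 C,
    # molar volume 24.45 L/mol; benzene molar mass 78.11 g/mol.
    sample_volume_L = 1.0         # programmed sample volume [L]
    flow_mL_min = 50.0            # typical pump flow rate [mL/min]
    action_level_nl_per_L = 10.0  # benzene action level [nl/L]

    duration_min = sample_volume_L * 1000.0 / flow_mL_min   # ~20 min per sample

    molar_volume_L = 24.45        # assumed, at ~25 C and 1 atm
    benzene_g_mol = 78.11
    benzene_volume_L = action_level_nl_per_L * 1e-9 * sample_volume_L
    benzene_mass_ng = benzene_volume_L / molar_volume_L * benzene_g_mol * 1e9

    print(f"Sampling time per tube: {duration_min:.0f} min")
    print(f"Benzene mass on tube at the action level: ~{benzene_mass_ng:.0f} ng")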
                                       Table 1
                         Target Volatile Organic Compounds
           benzene                   1,2,4-trimethylbenzene    trichloroethylene
           toluene                   m-dichlorobenzene         tetrachloroethylene
           m-xylene                  p-dichlorobenzene         1,2-dichloroethane
           o-xylene                  o-dichlorobenzene         1,2-dibromoethane
           p-xylene                  p-diethylbenzene          ethylbenzene
           1,3,5-trimethylbenzene    1,1,1-trichloroethane     naphthalene
                                            468

-------
The analytical laboratories involved in the analysis of these samples were required to follow
strict quality assurance/quality control (QA/QC) procedures.  All analytical procedures had to
meet the requirements as stated in a quality assurance project plan (QAPP) which was prepared
during the design phase of the investigation.  Due to the possible public health  concern,  the
laboratories were required to electronically  transmit preliminary analytical results to  the QA
oversight contractor within 7 calendar days after sample receipt. Complete data packages were
submitted by both commercial laboratories for rigorous third-party data validation, as
these data were used as the basis for the risk assessments/remedial decisions.4,5

Two commercial laboratories were involved in the analysis of the vapor well sorbent tubes. The
sampling schedule was established such that one commercial laboratory would receive samples
every other week.   Having two laboratories involved in the project allowed flexibility in  the
scheduling of sampling events. In situations where one laboratory was experiencing difficulties
in meeting the analytical schedule,  the other  laboratory  was usually  available  to meet  the
project's needs.  Additionally, Suffolk County's Department of Health Services Air Pollution
Laboratory served  as a  reference  laboratory and provided regulatory oversight  for  the
commercial laboratories involved.

A rather extensive analytical database was maintained for all the sampling events (field and
performance) that were conducted for this investigation.  The database that was designed allowed
the project team to monitor trends in levels of the target compounds in various ways.  Some of
the trends observed over the duration of the sampling events are summarized and discussed in
the Results and Discussion Section.

Periodically,  performance evaluation (PE)  samples were prepared by the  Suffolk  County
laboratory and issued to project laboratories.  Known concentrations of target compounds  were
spiked onto conditioned sorbent tubes and issued to the two  commercial laboratories as blind  PE
samples. The results of these PE  samples  were issued by the Suffolk County laboratory to  the
QA oversight contractor and reviewed against the "theoretical" values. A summary of one such
round of PE samples is summarized and discussed in the Results and Discussion  Section.

Additionally,  split sampling events between the Suffolk County laboratory and the commercial
laboratories  were  conducted as  an  additional  measure of performance of  the  commercial
laboratories.  These split sampling events involved  the collection of simultaneous field samples
that were issued to the commercial  laboratories as  well as Suffolk County's laboratory  for
analysis. An evaluation of the results of a split sampling event is also discussed in the Results
and Discussion Section.

As a way of monitoring the analytical procedure,  a surrogate compound, BFB,  was added to
each sampling tube prior to shipment to the field for use in sampling.  The percent recoveries
of BFB were  reviewed and evaluated to ensure  that the  analytical technique was properly
followed by the laboratories and that problems in the sampling and analysis did not occur (i.e.
leaks occurring during the thermal desorption of the samples for GC/MS analysis).  A summary
and evaluation of typical BFB recoveries observed during the investigation are also discussed in
the Results and Discussion Section.
                                             469

-------
RESULTS AND DISCUSSION

The ongoing monitoring of the vapor wells demonstrated that, in general, the concentrations of
the target compounds decreased as a function of time.  As shown in Figures 3, 4, and 5, the
average concentrations of benzene, toluene, ethylbenzene and xylenes decreased as the number
of months in which the wells were monitored increased.  The statistics associated with BTEX in
VW-12 are presented in Table 2.  The average concentration of benzene observed in this particular
vapor well was 4.0 nl/L over the six years of sampling.

                     Table 2 - BTEX Statistics in VW-12 over Time

             Benzene    Toluene   m,p-Xylenes   o-Xylene   Ethylbenzene
Minimum        0.5        0.5         0.5          0.5          0.5
Maximum       40.7       44.0         4.0          1.7          1.3
Mean           4.0        3.7         1.1          1.0          1.0
Median         1.5        1.5         1.0          1.0          1.0
Std. Dev.      7.8        6.4         0.5          0.2          0.1
Range         40.2       43.5         3.5          1.2          0.8
n               70         70          70           70           70
During the months of June 1989 through May 1990, elevated levels of BTEX were observed
in VW-12, as shown in Figure 4.  The statistics associated with BTEX during these months in VW-
12 are presented in Table 3.  The average concentration of benzene observed in this particular
vapor well during these months was 10.9 nl/L, as opposed to the average concentration of 4.0
nl/L that was observed over the six years of sampling.

            Table 3 - Statistics for BTEX in VW-12 during June 1989 - June 1990

             Benzene    Toluene   m,p-Xylene   o-Xylene   Ethylbenzene
Minimum        1.5        1.5         1.0          1.0          1.0
Maximum       40.7       44.0         4.0          1.7          1.3
Mean          10.9        9.5         1.4          1.1          1.0
Median         6.3        7.3         1.0          1.0          1.0
Std. Dev.     12.7       10.4         0.8          0.2          0.1
Range         39.2       42.5         3.0          0.7          0.3
n               19         19          19           19           19
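A minimal sketch of how per-well summaries like Tables 2 and 3 can be produced from the project database and screened against the 10 nl/L benzene action level; the sample values below are hypothetical, not project results:

    import numpy as np

    # Hypothetical weekly benzene results for one vapor well [nl/L]; the
    # real project database and its field layout are not reproduced here.
    benzene = np.array([0.5, 1.5, 6.3, 40.7, 2.2, 1.0, 12.4, 0.5, 3.8, 1.5])

    summary = {
        "Minimum": benzene.min(),
        "Maximum": benzene.max(),
        "Mean": benzene.mean(),
        "Median": np.median(benzene),
        "Std. Dev.": benzene.std(ddof=1),   # sample standard deviation
        "Range": benzene.max() - benzene.min(),
        "n": benzene.size,
    }
    for stat, value in summary.items():
        print(f"{stat:>9}: {value:g}")

    # Flag results above the 10 nl/L benzene action level for confirmation
    # by split sampling or resampling, as described in the text.
    print("Action-level exceedances:", benzene[benzene > 10.0])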
In general, seasonal trends were not observed as shown in these figures.  Also, as shown in
Figure 5, occurrences of elevated benzene levels were observed periodically in the latter months
of the investigation.  These occurrences (when greater than 10 nl/L) were confirmed or negated
through the analysis of a split sample by the County's laboratory.  If a split sample was not
available for analysis, resampling of the monitoring well in question was immediately initiated,
and analysis of the resampling event was conducted by the County's laboratory and the other
project laboratory that did not report the initial positive results. In all such instances, the vapor
well result in question was negated through split  sample analysis and/or resampling.
                                             470

-------
The results of a blind PE sampling event are presented in Figure 6.  These PE samples were
prepared by the Suffolk County laboratory and issued as PE samples (single blind) to one of the
project laboratories.  The commercial laboratory results were in agreement with both the
theoretical concentration and the county laboratory's concentration for every compound of interest
with the exception of o-dichlorobenzene and naphthalene.  These two compounds were observed
at elevated concentrations by the commercial laboratory.  Additional PE samples were provided
to the commercial laboratory, since these two compounds did not fall within the acceptance range
(0% - 30% difference acceptable, 31% - 50% borderline acceptable, and > 51% not
acceptable) around the theoretical concentration.  The analysis of these additional PE samples by
the commercial laboratory was acceptable.

The results of split sampling events for VW-1, VW-8 and  VW-104 are presented in Figures 7,
8 and 9, respectively. In general, the commercial laboratories compared well to each other and
to the regulatory laboratory. Variations in specific compound concentrations were observed in
some PE/split  sampling events;  however,  these instances were usually  corrected through
additional sampling and analysis or routine laboratory maintenance and/or corrective  actions.

The surrogate compound,  BFB, was used to monitor the analytical technique.  The percent
recoveries of BFB, from the analytical laboratories, were reviewed and evaluated to ensure that
the analysis of project samples was in control.  This surrogate compound ensured that problems
in the sampling and analysis did not occur (i.e.  leaks occurring during the thermal desorption of
the samples for GC/MS analysis).  Typical BFB recoveries that were observed for this project
are presented in Figure 10 and a statistical interpretation of these recoveries is presented in Table
4.

                                 Table 4 - Statistics for BFB
                                     Percent Recoveries
                                                    BFB
                                Minimum             60
                                Maximum             137
                                Mean                103
                                Median               101
                                Std. Dev.             19
                                Range                77
                                n                   34
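A minimal sketch of how surrogate recoveries like those in Table 4 can be screened against the project's 50 - 150% control window (discussed in the Conclusions); the spike amount and measured values below are hypothetical:

    # Hypothetical BFB surrogate screening against the 50 - 150% recovery
    # window used on this project; the amounts below are illustrative only.
    spiked_ng = 100.0                                    # BFB added to each tube
    measured_ng = [60.0, 103.0, 137.0, 45.0, 101.0]      # reported by the lab

    for tube_id, found in enumerate(measured_ng, start=1):
        recovery_pct = 100.0 * found / spiked_ng
        in_control = 50.0 <= recovery_pct <= 150.0
        status = "OK" if in_control else "FLAG: check spiking or desorption leak"
        print(f"Tube {tube_id}: BFB recovery {recovery_pct:.0f}% - {status}")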
                                             471

-------
CONCLUSIONS

Overall, the vapor well monitoring program that was instituted for this project performed very
well.  The design of the program was such that the monitoring wells were accessible to the field
sampling team  at any time  and the homeowners'  daily routines were not disrupted due  to
sampling events.

Concentrations of the analytes of concern were monitored on a timely basis.  During the life of
the project, over 9,500 vapor monitoring well samples were collected and analyzed.  Approximately
2.0% of these samples contained benzene above the 10 nl/L action level.  Also, approximately
7.4% of the vapor monitoring wells tested had on some occasion exhibited benzene at a level
greater than the laboratory's analytical detection limit.

As demonstrated in Figures 3, 4 and 5, BTEX concentrations decreased over time and seasonal
variations in vapor well concentrations were  not  observed.   This demonstrated and  well-
documented decrease in vapor concentration has led to a revised monitoring schedule in which
vapor wells are now sampled and analyzed on a quarterly basis instead of the weekly basis that
was initially instituted for this project.

Performance  Evaluation  samples  and split samples were  used to  assess  project laboratory
performance.  As seen in Figures 6 through 9, laboratory performance on PE samples as well
as split samples was, in general, acceptable.  Variations in specific compound concentrations
were  observed  in some  PE/split  sampling events; however,  these instances were usually
corrected through additional sampling and analysis or routine laboratory maintenance and/or
corrective actions.

Finally, the QC measures that were instituted for this soil vapor well monitoring program were
effective in monitoring the quality of the data that was used as a basis for risk
assessment/remedial decisions.  The surrogate compound, BFB, was effective in monitoring the
laboratory analysis conditions.  Recoveries of this compound outside a 50 - 150% window
usually provided an indication that either (1) the laboratory incorrectly spiked the sorbent tubes
before they were issued to the field sampling team or (2) a leak occurred during the analytical
process and analytes were lost.  As seen in Figure 10, BFB recoveries throughout the
investigation were typically within the 50 - 150% recovery window.

-------
REFERENCES
1.    W. F. Boehler, R. L. Huttie,  K. M. Hill, P. R. Ames, "A Gasoline Vapor Monitoring
      Program For a Major Underground Long-Term Leak", Presented at the EPA/A&WMA
      International Symposium on the Measurement of Toxic and Related Air Pollutants, May
      10, 1991.

2.    W.R.   Benz,  "Monitoring  A Wide Range  of Airborne  Contaminants",  from the
      Proceedings EPA/A&WMA Symposium on Measurement Of Toxic And Related Air
      Pollutants, A&WMA, Pittsburgh, PA, pp 761   770, (1987).

3.    W.R.  Betz,  "Monitoring  A  Wide  Range of  Airborne  Contaminants",  from the
      Proceedings EPA/A&WMA Symposium on Measurement Of Toxic And Related Air
      Pollutants, A&WMA, Pittsburgh, PA, pp 761-770, (1987).

4.    R.L.  Forman,  "Guidance For Determining  Data Usability  Of  Volatile Organic
      Compounds In Air", Presented at the Superfund XV, November, 1994.

5.    R.J. Vitale, "Assessing  Data Quality For Risk Assessment Through  Data Validation",
      Presented  at the ASTM Second Symposium  on Superfund Risk Assessment  in Soil
      Contamination Studies,  January,  1995.

-------
[Figure 3.  Average Monthly BTEX Concentration (nl/L) for VW-12: concentrations by month for benzene, toluene, m,p-xylene, o-xylene, and ethylbenzene; see the blown-up area in Figure 4.]

-------
[Figure 4.  Expanded View of BTEX in VW-12 (nl/L): concentrations by date, February 1989 through August 1990, for benzene, toluene, m,p-xylene, o-xylene, and ethylbenzene.]

-------
[Figure 5.  Average Monthly BTEX Concentration (nl/L) for VW-99: concentrations by date for benzene, toluene, m,p-xylene, o-xylene, and ethylbenzene.]

-------
[Figure 6.  Comparison of PE Sample Results: concentrations by compound for the county laboratory and the commercial laboratory against the theoretical concentration.]

-------
[Figure 7.  Comparison of Split Sample Results from VW-1: concentrations by compound for the county laboratory, commercial laboratory 1, and commercial laboratory 2.]

-------
[Figure 8.  Comparison of Split Sampling for VW-8: concentrations by compound for the county laboratory, commercial laboratory 1, and commercial laboratory 2.]

-------
[Figure 9.  Comparison of Split Sample Results for VW-104: concentrations by compound for the county laboratory, commercial laboratory 1, and commercial laboratory 2.]

-------
[Figure 10.  Typical BFB Recoveries (%) Observed for samples 1 through 34.]

-------
General

-------
 68
 CHARACTERIZING HAZARDOUS WASTES:  REGULATORY SCIENCE
                               OR AMBIGUITY?

Theodore Q. Meiggs, Ph.D., President, Meiggs Environmental Consultants, Inc., Golden,
Colorado 80401

ABSTRACT

Proper waste characterization is one of the cornerstones of the RCRA program and its
regulations. A waste generator needs to determine if his waste is hazardous or not.  All
future actions and liabilities regarding the waste flow from that initial determination. The
costs of making an incorrect determination can be staggering in terms of potential waste
cleanup costs and possible civil or even criminal penalties.

Waste characterization should be based on good science and not be ambiguous.  It should
be a straightforward, simple, and objective process.  Unfortunately, it is not.  Too much of
the process is subjective.  Even where objective tests are possible, parts of the tests are
poorly defined and lead to confusion among the regulated public and the testing laboratories.
These poorly defined aspects or "gray areas" need clarification from EPA.  This paper will
identify a number of problems with the waste characterization methods and regulations, and
will suggest possible steps that the waste generator can take to reduce his liability until EPA
takes some of the guesswork out of this important process.  Until then, all waste
generators need to be aware of these potential problems so that they can take steps to
minimize their liability.

INTRODUCTION

Proper classification of a waste material is extremely important for the waste generator.
The waste generator needs to know if his waste is hazardous  or not.  Classification of a
waste as hazardous or  non-hazardous is the initial step determining which road the
generator must follow in the handling, labeling, transportation  and disposal of that waste.
This difference can mean savings of thousands, even millions, of dollars to the generator,
and can substantially impact the generator's long-term liability.  The penalty for not properly
characterizing a waste can be huge: it includes increased costs for handling, treatment, or
disposal of the waste, plus potential liability that may arise from cleanup costs at a
Superfund site or from civil or even criminal sanctions imposed by the regulators.

Generators are always responsible for their wastes, but the requirements under Subtitle D
(for non-hazardous wastes) are substantially less than those under Subtitle  C (for
hazardous wastes).  In  addition, liability from mishandling  hazardous wastes is much
greater than for non-hazardous wastes.  Furthermore, the public's interest and concern are
much greater for hazardous wastes than for non-hazardous wastes.

EPA attempted to minimize confusion when they devised the RCRA regulations  and
promulgated the waste classification rules in 1980.  These  rules stated that wastes were
only considered hazardous if they were "listed," or if they  failed to pass any of four
"characteristic tests".  A waste was either on the list or it wasn't.  It either passed the
characteristic test, or it failed . This approach looked to be very objective, and to many, it
was very black or white.  Unfortunately as more wastes have been subjected to these rules,

-------
it has become evident that these rules are not clear or objective. Instead, they contain a
great deal of "gray."

Since 1980, EPA has attempted to deal with some of the gray by adding notes or memos
to the RCRA Docket, redescribing what they really meant on the RCRA Hot Line, and in a
few cases, officially clarifying specific parts of the regulations through the normal rule
making process.  Unfortunately, there are still many issues to be dealt with,  and EPA has
put most of these aside due to budgetary constraints.

A federal judge recently described the regulations that relate to waste characterization as
"...a sea of ambiguity."1  This judge later vacated his preliminary findings when the
plaintiff abandoned its theory and dismissed the case, but the comment still seems
appropriate.

Here, we define a "gray area" as a part of the regulations or test methods that are unclear or
ambiguous. The characteristic regulations are described in Part 40 of the Code of Federal
Regulations (CFR),  Section 261.2. We can identify "gray  areas" in each  part of these
regulations.

REPRESENTATIVE SAMPLES

Wastes are often complex mixtures of chemicals and other materials.  Physically and
chemically they can be difficult to work with.  Often they are very difficult to accurately
analyze or characterize. The first part of the process requires that we obtain a sample of the
waste for analysis, and here is where we encounter our first gray area.

The regulations require that we test a "representative sample" of the waste. This is defined
in 40 CFR, Section 260.10 as,  "A  representative sample means a sample of a  universe or
whole (e.g., waste pile, lagoon, ground water) which can be expected to exhibit the
average properties of the universe or whole."

This definition has  two distinct problems.  First, we do not know how  much waste
constitutes the "universe or whole," and second, we do not  know how to determine the
"average" properties.

We  do know that wastes are almost never homogeneous.  Essentially, all  wastes are
mixtures that are heterogeneous or even extremely heterogeneous.  As a result, the average
property is likely to be different for different amounts of the waste. The average for a
scoop of the waste will be different from a drum, or a pile or a day's production of the
waste.  The average for the top of a pile may well be different from that at the bottom of the
pile. Without knowing the size of the universe, we can not make accurate  comparisons
between samples or against a regulatory standard.

The waste characteristics measure attributes of a waste.  These attributes are not additive,
and therefore they cannot be averaged.  For example, a waste that flashes at 20°C is no
more hazardous, for classification purposes, than a waste that flashes at 50°C.  The average
is not a waste that flashes at 35°C.  The Ignitability test results in a yes or no answer.  How
do we average yes and no?  Is the average a maybe?

-------
Another example is Corrosivity, which is determined from the pH of a sample.  However,
pH is a logarithmic property that technically cannot be averaged.  Multiple factors affect
the pH or flash point or leaching potential of a waste.  These factors vary to differing
degrees throughout a heterogeneous waste, and therefore they influence the attribute to
differing degrees.  This ensures that the values will be neither additive nor averageable.
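A short worked example illustrates why.  The sketch below uses two assumed subsample
measurements of ideally dilute strong-acid solutions; the arithmetic mean of the pH values
differs from the pH corresponding to the mean hydrogen-ion concentration by nearly a full unit.

    import math

    ph_values = [4.0, 6.0]  # assumed subsample measurements

    arithmetic_mean_ph = sum(ph_values) / len(ph_values)  # 5.0

    # Average the underlying hydrogen-ion concentrations instead, then convert back.
    h_concs = [10.0 ** (-ph) for ph in ph_values]  # mol/L
    ph_of_mean_conc = -math.log10(sum(h_concs) / len(h_concs))  # about 4.3

    print(arithmetic_mean_ph, round(ph_of_mean_conc, 2))

Neither number is the "average pH" of the waste in any regulatory sense, which is precisely
the problem.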

Another consequence of the concept of an average property is the fact that the "average"
represents a range of actual values.  If the average value is near the regulatory limit, then
some of the actual values are likely to be above the regulatory limit.  Technically, half of the
actual values can be above the regulatory limit and the average will still not exceed the limit.
If we can expand or contract our universe to include more values above the limit, then we
definitely have a violation.  If we can expand or contract the universe to include more
values below the limit, then we do not have a violation.

Unreliable results are often obtained because there is not a clear understanding of what
constitutes a representative sample or detailed  guidance has not  been given on how to
properly sample and subsample a waste. Sometimes, efforts made in the field to collect a
representative sample are defeated in the laboratory when proper guidance is not given.
For example, some laboratory personnel  have been known to scoop a subsample for testing
only from the  top of the container. The resulting values may be high, low or even non-
detect, but they are not likely to be representative of the entire sample in the container.
Other laboratory personnel selectively remove rocks or other material from the subsample
and then do not correct their result for this change. This situation will lead to erroneously
high values. Both situations illustrate the importance of double checking a laboratory's
results. Waste generators need to be aware of these potential problems. When their waste
appears to be near the regulatory limit, it  can be especially important to them to have a third
party assure that these types of errors are not being made.

There are also a number of "gray areas" in the individual characteristic tests  and regulations
and we will highlight a few.

CHARACTERISTIC OF IGNITABILITY

The only test for ignitability applies only to liquids. However, liquid is not defined and is
subject to misinterpretation especially as to how  it relates to sludges, semi-solids, semi-
liquids, free liquids, etc. In addition, aqueous solutions containing less than 24 percent
alcohol by volume are excluded.  Aqueous solutions and alcohol are not defined.  The
regulation writers sought  to exclude alcoholic beverages, but there are many ways  to
describe an aqueous solution and there are many different alcohols besides ethanol.

When the  waste is not a  liquid, there are no tests prescribed; only  subjective descriptions
that are open to misinterpretation.  For example, an ignitable solid  is described as burning
so "vigorously and persistently that it creates a hazard."   This can encompass a wide
variety of situations, and the degree of hazard will  depend upon where the waste is burned.

CHARACTERISTIC OF CORROSIVITY

Again we encounter the  term "aqueous" without a clear definition. pH was chosen as the
indicator of corrosivity, but pH can only be  accurately measured in dilute aqueous

-------
solutions.   Suspended solids, oils and water soluble organics all interfere with the test.
Solids were excluded from this classification, but what about pastes, semi-solids, semi-
liquids, etc.?  Do we consider only free liquids or not?

EPA recently clarified one important part of this characteristic: the explicit
requirement to measure the pH at 25 ± 1°C when the pH is greater than 12.0.  This
clarification was necessary because pH is very sensitive to temperature changes at the high
end of the pH scale, and widely differing values are obtained unless the temperature is
controlled.  EPA should be commended for clarifying this issue, which could have had a
negative impact on thousands of users of lime and other highly alkaline materials
throughout the country.

CHARACTERISTIC OF REACTIVITY

For the most part, the characteristic of reactivity is the classic example of subjective
regulations.  Of eight different properties, only one has an objective test associated with it,
and even that test has problems. The problems with measuring Reactive Cyanide and
Sulfide have been described elsewhere.4 The measurement problems also reflect on the
theoretical basis for the recommended guidelines for cyanide and sulfide. It is important to
note that the guidelines of 250 mg/kg for cyanide and 500 mg/kg for sulfide have never
been formally adopted through the rule making process and are still subject to challenge.

In all fairness, developing reliable, objective tests for reactive wastes is not an easy task.
EPA has made some efforts to develop a few more objective tests, but this work has been
put aside due to budgetary constraints.  However, more work needs to be done in this area,
and EPA should address the issue of characterizing a waste that results from a mixture of a
small amount of "reactive" material with a large amount of non-reactive material.  Where
does one draw the line?

CHARACTERISTIC OF TOXICITY

This characteristic does in fact have an  objective test called the toxicity characteristic
leaching procedure or TCLP. It should be kept in mind that the values obtained from the
TCLP are not inherent properties of the waste, but are instead, attributes.  Consequently,
the amount extracted is dependent upon the conditions of extraction.  If two laboratories do
not provide exactly  the same conditions or use  exactly the  same procedure, then their
results will differ. The TCLP is a lengthy procedure with numerous opportunities for
error. Again, waste generators need to be aware of these potential problems.  When their
waste appears to be near the regulatory limit, it can be especially important to them to have
a third party assure that errors are not being made.

SUMMARY

As we discussed earlier, wastes by their very nature are often complex mixtures of various
materials.  Physically and chemically they are  difficult to  work with and difficult  to
accurately characterize. The current regulations are flawed, and better guidance is needed
from EPA on both sampling and analysis.  Guidance on how to deal with the situation
where representative sampling is not possible or where the characteristic tests do not work
for one reason or another would also be helpful.

-------
Until additional guidance is forthcoming, the waste generator needs to be alerted to the
"gray areas" in these regulations. When in doubt or especially when the results appear to
be close to the regulatory limit, the generator should get a third party opinion. When these
"gray areas" are encountered, the generator should consider challenging the classification of
a waste. Such challenges will encourage the adoption of better science and less ambiguity
in these important regulations.

REFERENCES

1.) Alcael Information Systems, Inc., et al. v. State of Arizona, et al. (No. CIV 89-188
PHX RGB, Feb.  8,  1993).
2.) ibid., Dec. 19, 1994.
3.) Final Rule, RCRA Docket No.  F-95-W2TF-FFFFF, Mar. 29, 1995.
4.) Lowry, J., Fowler, J., Ramsey, C., Siao, M., 1992. Releasable Cyanide and Sulfide:
Dysfunctional Regulation, Proceedings of the 8th Annual Waste Testing & Quality
Assurance Symposium, American Chemical Society, Hyatt Regency Crystal City,
Arlington, Virginia.

-------
       THE MISUSE OF THE TOXICITY CHARACTERISTIC LEACHING
                              PROCEDURE (TCLP)
Susan D. Chapnick, M.S., Program Manager, Manu Sharma, M.S., Senior
Hydrogeologist, Deborah Roskos, Analytical Chemist, and Neil S. Shifrin, Ph.D.,
Principal, Gradient Corporation, 44 Brattle Street, Cambridge, Massachusetts 02138

ABSTRACT

The Toxicity Characteristic Leaching Procedure (TCLP) was developed to estimate the
mobility of certain organic and inorganic contaminants in a municipal landfill and to
determine if these wastes should be classified as "hazardous".  However, many
regulators, environmental consultants, and industry environmental managers are
inappropriately applying the TCLP to determine potential leaching to groundwater of a
variety of contaminants in soils and sediments at a variety of non-landfill sites.  The
TCLP is misused in two ways: it is applied to incompatible chemistries (e.g.,
performing TCLP to determine cyanide leaching from soil) and it is used for
incompatible site scenarios (e.g.,  using TCLP to determine leaching of contaminants at
sites that are not equivalent to the municipal landfill model). The effect of misusing the
TCLP is that regulators and decision makers will misinterpret results, and incorrectly
predict the potential for impact to  groundwater of the contaminants. The TCLP is a test
that should be limited to determining if a waste should be considered a RCRA-regulated
waste.  For other settings, such as at Superfund and state-lead sites, alternatives to using
the TCLP in the determination of potential impact to groundwater, such as mathematical
models to estimate mobility using  site-specific parameters, are also discussed.

INTRODUCTION

Under Subtitle C of the Resource Conservation and Recovery Act (RCRA), hazardous
wastes are evaluated using four characteristics:  corrosivity, ignitability, reactivity, and
leaching potential.  The Toxicity Characteristic Leaching Procedure (TCLP) was
developed to test the leaching potential of toxic constituents in a municipal landfill under
specific  landfill conditions.  The TCLP Final Rule (Hazardous Waste Management
System: Identification and Listing of Hazardous Waste; Toxicity Characteristics
Revisions, Federal Register, Vol. 55, No. 61,  March 29, 1990) lists the 39 compounds
that are regulated based on the TCLP test.  Method 1311 in SW-846 (July 1992) is the
EPA-approved TCLP method.  The TCLP thereby replaced the Extraction Procedure
(EPTOX) leach test formerly required under RCRA.

The TCLP was designed to simulate leaching of an industrial waste dumped in a
municipal (sanitary) landfill; therefore, acetic acid was chosen as the extraction fluid
because it is a major component of typical municipal landfill leachates.  However, the
TCLP scenario may not be applicable to contaminated soils at sites that do not fit the

-------
municipal landfill scenario.  At such sites, organic acids may not be present, and
therefore, leaching tests that use organic acids may selectively solubilize certain
compounds or elements from the contaminated soil (in the laboratory TCLP test) whereas
this would not occur in the environment. Elements such as lead are especially
susceptible to incorrect classifications in contaminated soils, where a TCLP
test would classify the soil as a hazardous waste but the environmental site conditions
would not be conducive to leaching.

The dilution and attenuation factors (DAFs) developed for organic compounds under the
TCLP scenario were based on a database of municipal landfills. The equations used to
determine the compound-specific DAFs simulate the transport and attenuation of
contaminants as they travel through a landfill, to the underlying groundwater, and then to
a drinking water well exposure point.  Unlike the organic TCLP constituents, the
inorganic regulatory limits for TCLP metals are not derived from DAFs modeled for a
municipal landfill scenario.  Regulatory levels for metals were set at ten times the
Drinking Water Standards, rather than being derived from a subsurface fate and transport
model that calculates constituent-specific DAFs.  Therefore, exceedances of TCLP leachate
results for metals, especially at non-landfill sites, must be interpreted with caution since
no fate and transport related processes were considered in the development of the regulatory levels.

The TCLP has been used incorrectly by regulators to evaluate potential impact on
groundwater due to leaching from contaminated soils for a number of years.  A
Leachability Subcommittee established by EPA's Science Advisory Board (SAB)
acknowledged some of the problems with the  test and stated that in most cases of
inappropriate use, "the justification given was that it is necessary to cite standard and
approved methods." This paper discusses examples of the inappropriate use of the
TCLP, in terms of incompatible chemistries and sites.  It also recommends the use of an
alternate leachate test,  Synthetic Precipitation  Leaching Procedure (SPLP), and site-
specific modeling.

INCOMPATIBLE CHEMISTRIES

The TCLP was developed to assess potential groundwater contamination for a specific set
of environmental contaminants including 8 metals, 11 volatile organic compounds
(VOCs), 12 semivolatile organic compounds (SVOCs), 6 pesticides, and 2 herbicides.
Extensive research and method development studies were performed during the
development of the TCLP procedure for these specific compounds and analytes.  Use of
the TCLP for some chemicals that were not included in the method development or in the
final TCLP rule constitutes "misuse",  and may not provide technically valid results.

An example of the inappropriate use of the TCLP by a regulatory agency, where the method
is incompatible with the chemistry of the analytes of concern, is discussed below.  The
Record of Decision (ROD) for a site in New York (1), where cyanide and fluoride were
to be analyzed in soils beneath excavated material, required the use of a TCLP leachate
test, and cleanup goals were to be set based on these TCLP results.  In this example, the
ROD required the TCLP procedure to be modified such that the pH of the extraction

-------
fluid was adjusted to background overburden groundwater pH conditions.  There are
several potential problems with analyzing cyanide in a TCLP leachate.  In addition, no
method study was undertaken to validate the TCLP method for cyanide and fluoride.

Cyanide exists in the environment in many forms that have different mobility, stability,
and toxicity.  Most environmental regulations require the analysis of "total cyanide" as a
measure of the potential impact of cyanides as a health threat at contaminated sites.  This
is a very conservative approach, as "total cyanide" refers to all cyanide compounds, which
can be classified as simple or complex.  The simple cyanide compounds dissociate easily
under acidic conditions and are present in aqueous solutions as HCN and CN-, which are
the most toxic forms of cyanide in water and in the air.  Some complex cyanides, such as
the iron-cyanide complexes, are very stable in soils and are not toxic due to their
extremely low human bioavailability (2).  Simple cyanides exist most typically in aqueous
solutions as HCN rather than CN- because the pH of most natural waters is lower than
the pKa of molecular HCN.
In order to maintain the integrity of the sample from the time of sampling to the time of
analysis, and to prevent loss of the simple cyanides, aqueous samples are preserved with
sodium hydroxide (NaOH), which increases the pH and also converts the HCN (which is
easily lost to the atmosphere) to CN-.  During the TCLP extraction procedure, the pH is
maintained in an acetic acid buffer solution at 4.93.  This acidic pH releases any cyanide
that was not tightly bound (simple cyanide complexes) in the soil sample.  The TCLP
extract is maintained in this acidic buffer until the time of analysis.  Without method
development to determine the accuracy of cyanide recovery, or techniques to minimize
the loss of cyanide during the laboratory testing, the TCLP extraction method should not
be used for determination of cyanide because the most toxic forms of cyanide are lost
during the procedure.  At best, the TCLP is expected to produce cyanide results that are
biased low; at worst, the TCLP yields false negative results for cyanide in the
leachate.

An alternative procedure to assess the potential leaching of cyanides to groundwater is
the SPLP.  The SPLP, Method 1312 (September 1994), is a leach test that uses an
extraction fluid modeled after the pH of the precipitation in the region of the US where
the soil is located.  The SPLP does not use acetic acid.  Instead, it uses an unbuffered
extraction fluid of sulfuric and nitric acids (Extraction Fluids #1 and #2).  However, the
SPLP procedure also allows for a deionized water leach, in place of the acidic extraction
fluid, to determine cyanide and volatiles leachability (Extraction Fluid #3).  A deionized
water extraction fluid provides more technically sound results for cyanide in the leachate
because the simple cyanides are not lost upon introduction to the extraction fluid.
Further, if it is important to obtain a laboratory test result for the potential of cyanide
leaching from contaminated soils in an acidic environment, a zero headspace extractor,
similar to that used for the determination of VOCs, may be used to prevent the loss of
the volatile cyanide species in reaction with the acidic extraction fluid.  Method
development and validation would need to be undertaken to verify that this is a viable
alternative.
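The fluid-selection logic described in this paragraph can be stated compactly.  The sketch
below is based only on the description above, not on the full text of Method 1312; the function
name and inputs are illustrative.

    def choose_splp_extraction_fluid(analytes, regional_acid_fluid=1):
        """Return the SPLP extraction fluid implied by the discussion above.

        Cyanide and volatiles use the deionized-water leach (Fluid #3);
        other analytes use the regional sulfuric/nitric acid fluid (#1 or #2).
        """
        if any(a.lower() in {"cyanide", "vocs"} for a in analytes):
            return 3  # deionized water
        return regional_acid_fluid  # unbuffered sulfuric/nitric acid fluid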

-------
As another example, the use of TCLP at sites in which organic acids are not the
dominant leachate form can result in an incorrect classification of a soil as "hazardous"
due to the reaction of the contaminant in question with the acetic acid extraction fluid of
the TCLP. Lead is a prime example of this phenomenon. In an interlaboratory
comparison study, conducted hi 1988 by RTI for OSW (3), the TCLP Method 1311
consistently leached more lead from soil than the SPLP method. Results from the study
showed that for soil samples with total lead concentrations ranging from 2000 to
30,000 mg/Kg, the TCLP leached lead at levels ranging from 2.0 to 375 mg/L whereas
the SPLP leachates contained lead ranging from nondetect values (detection limit was
0.1 mg/L) to 19 mg/L  (3).  These are significant differences  in the leachate potential
based upon the chemical reaction of the extraction fluid with the site soils.

INCOMPATIBLE SITES

In addition to the analytical chemistry related issues with the TCLP discussed above, another
major problem with the TCLP is how these results are being used by regulators at hazardous
waste sites.  At a number of sites (under non-landfill conditions) across the nation, TCLP
test results have been used by regulators as a measure of potential for impact to groundwater
with little or no consideration given to site-specific conditions.  This has led to remediation
of soils (in most cases, excavation and off-site disposal) that exceed the TCLP standard, even
though in a number of cases these soils did not pose a threat to human health or the
environment.  The following example illustrates this problem.

At the South Cavalcade Superfund site in Houston, a former wood-preserving and coal-tar
distillation facility, the selected remedy required excavation and treatment of all soils that
either:  1) exceeded the risk-based cleanup goal or 2) exceeded the regulatory TCLP levels
(leaching-based cleanup goal) (4).  The primary contaminants of concern at the site included
benzene, 3- to 5-ring polycyclic aromatic hydrocarbons (PAHs), and metals: arsenic,
chromium, and lead.  Benzene degrades very rapidly in the environment, and the large-ring
PAHs and lead bind readily to any organic material present in soils.  As a result, benzene
concentrations in the leachate might be degraded to acceptable levels by the time the leachate
reaches the water table, and the PAH compounds and lead may not reach the water table in
a reasonable time frame (travel time could be on the order of a few hundred to a few thousand
years, depending on the distance to the water table), hence having no impact on the
groundwater.   Thus,  the use of the leaching-based cleanup standard at this site cannot be
justified because exceedance of the TCLP regulatory level in a vadose zone soil sample does
not necessarily mean that groundwater will be impacted. Further, basing remedy decisions
on exceedance of TCLP regulatory levels can easily result in unnecessary remediation.

As the above  example shows, TCLP results should not be used to evaluate potential impact
on groundwater at non-landfill sites because this approach does not account for contaminant
properties (e.g., mobility and degradation) and site-conditions  (e.g., lack of acetic acid from
municipal waste and other site-specific soil conditions)  which dictate the fate and transport
of contaminants hi the subsurface and determine the  potential impact on groundwater.  The
best approach for  assessing potential  for  impact  to groundwater  would  be to use  a
combination of leaching tests and transport models.

-------
ALTERNATE APPROACH

The first step of our approach consists of using the SPLP test to determine leaching potential
under laboratory conditions.  These SPLP results would be compared to a compliance
concentration equal to the maximum contaminant level (MCL) times a universal or generic
DAF.  The universal DAF would account for different soil types and other variables, such
as depth to groundwater, similar in concept to that of Chiang et al. (6).  If the SPLP
concentrations exceeded this compliance concentration (the universal DAF times the MCL),
a site-specific model would then be developed.
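A minimal sketch of this screening step follows.  The analyte, SPLP result, MCL, and generic
DAF are hypothetical values chosen for illustration, and the function name is ours rather than
a regulatory construct.

    def needs_site_specific_model(splp_results_mg_L, mcls_mg_L, universal_daf):
        """Flag analytes whose SPLP leachate concentration exceeds MCL x DAF."""
        flags = {}
        for analyte, conc in splp_results_mg_L.items():
            compliance_conc = mcls_mg_L[analyte] * universal_daf
            flags[analyte] = conc > compliance_conc
        return flags

    # Hypothetical example: an SPLP benzene result of 0.08 mg/L screened against the
    # 0.005 mg/L MCL with an assumed generic DAF of 20 (compliance level 0.10 mg/L).
    print(needs_site_specific_model({"benzene": 0.08}, {"benzene": 0.005}, universal_daf=20))
    # {'benzene': False}: no exceedance, so no site-specific model is triggered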

The site-specific modeling would consist of two parts:

        1)     Estimation of a site-specific soil/water partition coefficient (Kd).

        2)     Estimation of  contaminant  concentrations incorporating  the  effects  of
              contaminant dispersion, retardation, and degradation as the leachate migrates
              through the vadose zone and subsequently mixes with groundwater.
The data needed for defining the soil/water partition coefficient (Kd) depend on the
chemicals of concern.  If organics are of interest, the organic content of the soil, often referred
to as the fraction organic carbon (foc), is needed to estimate the soil/water partition coefficient
(Kd); if metals are of concern, the metal concentration present in the soil and the metal
concentration expected to be present in the aqueous form in the leaching liquid (i.e.,
rainwater) are needed to estimate the soil/water partition coefficient (Kd).  The SPLP, rather
than the TCLP, is recommended to estimate the metals concentration expected to be present
in the leaching fluid since this test more closely simulates field conditions at non-landfill
sites.

The soil/water partition coefficient, estimated either using the foc (for organics) or the SPLP
(for metals),  would then be used  as  input to  a transport model to  estimate leachate
concentrations as a function of depth and evaluate the potential for impact to groundwater.
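The two Kd estimates can be written out explicitly.  The sketch below is illustrative only: the
function names and example values are assumptions, Koc values must come from the
literature, and the metals estimate simply ratios the soil concentration to the SPLP leachate
concentration.

    def kd_organic(koc_L_per_kg, f_oc):
        """Soil/water partition coefficient for an organic: Kd = Koc * foc (L/kg)."""
        return koc_L_per_kg * f_oc

    def kd_metal(soil_conc_mg_per_kg, splp_leachate_mg_per_L):
        """Soil/water partition coefficient for a metal estimated from an SPLP test (L/kg)."""
        return soil_conc_mg_per_kg / splp_leachate_mg_per_L

    # Hypothetical inputs: a compound with Koc = 1,000 L/kg in soil containing 1% organic
    # carbon, and a metal present at 500 mg/kg that leaches 0.25 mg/L in the SPLP.
    print(kd_organic(1000.0, 0.01))  # 10.0 L/kg
    print(kd_metal(500.0, 0.25))     # 2000.0 L/kg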

The main advantage of this approach is the improved accuracy and increased reliability in
predicting potential impact to groundwater.

Replacing the leaching test with a model was also recommended in a 1992 EPA workshop
to assess the potential impact of oily wastes in the environment, because it was recognized
that the TCLP does not accurately represent the disposal scenario of oily wastes in a landfill.

-------
CONCLUSIONS

The TCLP is an appropriate tool only where it is truly applicable.  Site-specific models
should be used to assess potential leaching of contaminants that are not compatible with the
TCLP test or at sites that are not comparable to a municipal landfill management disposal
scenario. Thus, we recommend:

•      The TCLP should be used only for the scenario for which it was developed, i.e., for the
       municipal landfill scenario and for specific chemicals of concern with chemistries
       that are compatible with the acetic acid fluid leach and with the specifics of the
       method techniques.

•      Use SPLP and/or modeling for other sites to generate the best estimate of mobility
       of contaminants to groundwater and potential threat to human health.

REFERENCES

1.     NYSDEC, 1991.  Record of Decision for the Aluminum Company of America,
       Massena Operations, Massena, New York.  New York State Department of
       Environmental Conservation, Division of Hazardous Waste Remediation, Watertown,
       New York, March 1991.

2.     P. Nielsen, B. Dresow, R. Fischer, and H.C. Heinrich, 1990.  Bioavailability of
       iron and cyanide from oral potassium ferric hexacyanoferrate (II) in humans.  Arch.
       Toxicol., 64:420-422.

3.     USEPA, 1988. "Interlaboratory Comparison of Methods, 1310, 1311,  and 1312 for
       Lead in Soil," Research Triangle Institute (RTI), Contract No. 68-01-7075, prepared
       for Office of Solid Waste, USEPA, Washington, DC.

4.     USEPA Region VI, 1988. "Superfund Record of Decision: South Cavalcade Street
       Site, Houston,  Texas."

5.     J. Bear, 1979.  Hydraulics of Groundwater, McGraw-Hill Book Company (New
       York), pp. 569.

6.     C.Y. Chiang, P.O. Petkovsky, and P.M. McAllister, 1995. A risk-based approach
       for management of hazardous waste, Groundwater Monitoring Review, pp. 79-89.

-------
                                                                                           70
     THE SYNTHETIC GROUNDWATER LEACHING PROCEDURE (SGLP):  A
  GENERIC LEACHING TEST FOR THE DETERMINATION OF POTENTIAL FOR
            ENVIRONMENTAL IMPACT OF WASTES IN MONOFILLS

David J. Hassett, Senior Research Advisor, Energy & Environmental Research Center,
University of North Dakota, PO Box 9018, Grand Forks, North Dakota 58202-9018

ABSTRACT

The toxicity characteristic leaching procedure (TCLP) is often used in a generic manner for
the prediction of leaching trends, although the intent of this test was for the prediction of
leaching under codisposal conditions in sanitary landfills.  The application of acidic conditions
to predict field leaching that can occur under a wide range of conditions may lead to false
prediction of leaching trends.  Additionally, conditions imposed on leaching systems by
inappropriate leaching solutions may alter the distribution of redox species that would be found
in the field. In some cases (with reactive wastes), 18 hours, as specified in the TCLP and
other short-term leaching  tests, may be an insufficient equilibration time.

A generic test of leachability called the synthetic groundwater leaching procedure (SGLP) and
a long-term leaching (LTL) procedure,  developed at the Energy & Environmental Research
Center (EERC) at the University of North Dakota, have been used to predict leaching under
field conditions.  Specific uses have included characterization of coal ash  disposed of in
monofills and prediction of mobility of selenium in mined areas.  In many applications, the
SGLP has demonstrated trends widely different from those of the TCLP and other commonly used leaching
protocols; in the case of coal ash, the leaching trends indicated by the SGLP differ markedly
from those of the TCLP.  These differences can be explained by the fact that many
commonly used leaching  tests impose conditions on samples different from those in a field
environment and, thus, bias data in a manner leading to inappropriate interpretation for
environmental impact.  Elements most often affected include arsenic, boron, chromium,
vanadium, and selenium.  Long-term leaching using the LTL procedure is used for waste
materials that may undergo hydration reactions after disposal upon contact with water.  The
implication for the usefulness of these tests is magnified by the increase in reactive wastes that
will  be produced using advanced combustion systems to comply with the  Clean Air Act
Amendments. These materials, which are almost always reactive, behave much differently
under field conditions than would be predicted using the TCLP or other short-term leaching
procedures.  At the present time, the SGLP test along with long-term leaching is being used
in a number of states, including Minnesota and Indiana, for determination of the environmental
impact of coal conversion  solids.  The test has been written up in draft form for consideration
by the American Society for Testing and Materials (ASTM) as a standard for leaching of coal
ash.

-------
INTRODUCTION

Waste materials are of general concern to all, and the potential for environmental harm through
disposal is real.  Because of this, the proper testing of materials to evaluate the potential for
environmental harm must be carried out in a manner that is scientifically valid,  defensible,
accurate, precise, and relevant to the disposal conditions anticipated.   Often materials are
subjected to the toxicity characteristic leaching procedure (TCLP) (1), and for the most part,
some waste  materials can be at least partially  evaluated  for their potential  environmental
impact.  It is recognized that nearly any disposed material has the potential to generate leachate
with characteristics  different  from  local groundwater,  but this does  not  always  imply
degradation of the environment.  Some waste materials disposed of in environments where local
sediments produce groundwater of relatively high ionic strength may generate leachates, from the
infiltration of rainwater, that are of higher quality than the native groundwater.  These waste
materials are of little concern, and it is the potentially problematic substances for which proper
testing is imperative.

A limitation of the TCLP that appears to be often overlooked is that the application for which
it was intended  was the evaluation  of leaching under codisposal conditions in a sanitary
landfill.  Numerous materials are highly unlikely to be disposed of in sanitary landfills,  and
under expected monofill disposal,  are  highly unlikely to encounter an acidic environment.
Rather, an alkaline environment would be maintained for long-duration leaching because of
the nature of these wastes. While numerous other waste streams could also be considered, coal
combustion solid residues (CCSRs) will be the focus of this paper, because of the high volume
and mass of CCSRs and because most residues from lower-rank coals or from advanced
combustion  processes will  be alkaline in nature  (either  because of inherent properties or
alkaline additives used to scrub acid gases for emission reduction).

Many coal combustion solid residues have physical, chemical, and mineralogical characteristics
advantageous for utilization and can be marketed for a wide variety of engineering applications
in construction and other industries.  Despite their potential for use, high volumes of these
materials are disposed of every  year throughout the United States.  With the enactment  of the
1990 Clean Air Act Amendments, coal combustion solid residues may change in character (as
combustion methods change) and will  certainly increase in volume.  The use of advanced
combustion processes and scrubbers for acid gas reduction will provide an alkaline nature to
many residues and will certainly affect trace element distribution and mobility. The quantity
of these materials requiring disposal is  expected to increase dramatically as coal combustion
and environmental systems change to  meet new regulations.  The environmental disposal
practices for these materials are important  issues impacting the coal mining  and utility
industries, regulatory agencies, electric utility ratepayers, and the general public.

Regulatory  agencies, the coal mining industry,  and the utility industry agree that  the
environmental issues of clean air and  water are  of the highest priority when considering
disposal/utilization of coal conversion solid residues and  other  by-products.  Regulatory
approaches must be adequate to safeguard the environment while minimizing the economic

-------
burden on industries that must, in turn, pass that cost on to consumers in the form of increased
rates for electricity.  Comprehensive and appropriate scientific information is essential to make
the difficult, but necessary, decisions  regarding the disposal or utilization of these highly
complex solid materials.

Chemical, physical, and mineralogical characterizations of wastes are all important in
formulating a plan for scientifically based disposal.  This paper is a discussion of protocols for
the leaching characterization of waste materials to determine their potential for environmental
impact.

EXPERIMENTAL

Numerous investigations of the leachability of trace elements from coal combustion solid by-
products have been conducted at the Energy & Environmental Research Center (EERC) using
several leaching procedures.  The primary objectives of these investigations can be summarized
as follows:

     •   Identify trace elements of environmental significance, to include currently regulated
         trace elements and others present at significant total concentrations

     •   Determine the total amounts of all identified trace elements

     •   Measure and compare the leachability (mobility) of the identified trace elements
         using several leaching tests

The materials included as examples in this report are two CCSRs, a low-rank coal fly ash and
a solid scrubber residue from a duct-injection demonstration project to control  stack gas
emissions of sulfur and nitrogen acid gases.   These  solid residues were subjected to a
comprehensive chemical characterization scheme that met the objectives listed above.  These
were as follows:

     •   Qualitative screening for identification of elements present.

     •   Quantitation of total concentrations of selected elements in the bulk sample and
         determination of mineral phases present.

     •   Leaching  of the  solid by  the  selected leaching  procedures, determination of
         concentrations of all selected elements in resulting leachates, and identification of
         mineral phases present in leached solids.

The qualitative screening was performed by proton-induced x-ray emission (PIXE).  PIXE was
used to identify elemental constituents (from sodium through uranium) present in the material.
The purpose of the screening was to identify all elements of interest either from the standpoint
of potential toxicity or from the standpoint of scientific  interest.

-------
The results of the screening procedure were used to select elements of interest for the various
studies used as examples in this paper.  Table 1 is a list of both Resource Conservation and
Recovery Act (RCRA) elements and non-RCRA elements of interest included in these studies.
The RCRA elements are arsenic, barium, cadmium, chromium, lead, mercury, selenium, and
silver. Several of the RCRA trace elements were not identified as present by the screening
procedure and are not typically found in coal combustion solid by-products, but were included
in this study  for completeness. Boron and molybdenum were also included since they are
elements that  are often concentrated in coal combustion solids and are not always identified by
PIXE at low concentrations.

Other materials referred to in this report were also  subjected to the same screening protocols.
Because of space limitations, only elements of interest that illustrate differences in results from
various leaching procedures are discussed, as well as examples to show the need for long-term
leaching, which is not addressed by the regulatory leaching tests.

Following the  qualitative screening and  identification of elements to be included in the
investigation, total concentrations of the identified elements were determined in the original solid material.
                                      TABLE 1

                        Elements of Interest Identified by PIXE

                               Element        Type1
                               Ag              1
                               As              1
                               B               2
                               Ba              1
                               Cd              1
                               Cr              1
                               Cu              2
                               Hg              1
                               Mn              2
                               Mo              2
                               Ni              2
                               Pb              1
                               Se              1
                               Sr              2
                               Zn              2

                      1  Type 1 indicates RCRA elements.
                         Type 2 indicates elements of high
                         interest in coal conversion solid
                         residues.

-------
Appropriate sample dissolution techniques were used for different groups of
analytes.  Sample dissolutions were performed in duplicate, and resulting solutions  were
analyzed by atomic absorption (AA) or inductively coupled argon plasma (ICAP)
spectrophotometric techniques as appropriate.  Mercury was analyzed by cold-vapor generation with AA
detection.  Matrix-matched standards were used to calibrate instruments, and standard
laboratory quality control methods were employed, including sample duplicates and analyte
spike recoveries. Major and minor constituents were determined in addition to the identified
trace elements.  These constituents provide standard information important in the classification
of these material types for utilization  and required for interpretation of the mineralogical
characterization. The major and minor elemental constituents shown in Table 2 are reported
as percent oxides.  Trace elements have been reported as elemental concentrations.   This
reporting format is only a convention  specified by the American Society for Testing and
Materials (ASTM).  The results do not indicate actual oxides present in the materials, but
rather the total concentrations of these elements expressed as oxides.
                                       TABLE 2

                    Major, Minor, and Trace Bulk Chemical Analyses

   Major/Minor     Duct-Injection   Fly Ash,       Trace       Duct-Injection   Fly Ash,
   Constituents    Solid, %         %              Elements    Solid, µg/g      µg/g
   SiO2            17.3             49.7           Ag          140              0.6
   Al2O3           8.73             22.1           Cd          <0.1             12
   Fe2O3           7.84             17.5           Ba          350              600
   CaO             32.6             1.77           Cr          58               150
   MgO             0.58             0.94           Hg          0.9              0.6
   Na2O            4.27             0.45           Se          9.1              2.2
   K2O             0.47             2.22           As          140              54
   P2O5            0.17             0.16           Pb          39               140
   TiO2            0.42             0.91           B           400              340
   BaO             0.04             0.07           Mo          28               11
   MnO2            0.02             0.04           Ni          50               220
   SrO             0.03             0.04           Cu          39               91
   Moisture        0.80             0.31           Zn          130              460
   LOI1            9.02             3.39           Br          110              ND2
   SO3             16.6             0.25           Cl          8530             ND

   1  Loss on ignition.
   2  Not determined.

-------
It has been widely reported that relatively low percentages of the elemental concentrations of these elements
are actually present in simple, pure oxide forms.  Most coal combustion solid by-products
contain an amorphous or glassy phase and numerous and diverse crystalline phases.

The final step of the laboratory investigation was to subject the samples to several leaching procedures,
with subsequent analysis of the resulting leachates.  A summary of the procedures used for the
leaching (trace element mobility) characterization is as follows:

     •   The TCLP (U.S. Environmental Protection Agency [EPA], 1986) is the EPA
         regulatory leaching procedure, under RCRA, for determining whether a waste is
         hazardous.  Land disposal of materials identified as hazardous by this
         leaching procedure is prohibited by the EPA.  The TCLP has also been adopted by
         many state regulatory agencies to provide leaching information on solid wastes (not
         hazardous) which are not federally regulated.  This test uses end-over-end agitation
          and a 20-to-1 liquid-to-solid ratio with an 18-hour equilibration time.  Two leaching
         solutions are specified for use with this test.  Leaching Solution #1 is an acetate
         buffer prepared  with 5.7 mL of glacial acetic acid per liter of distilled deionized
         water and adjusted to pH 4.93 with 1  N sodium  hydroxide solution.   Leaching
         Solution #2 is an acetic acid solution prepared by diluting 5.7 mL of glacial acetic
         acid to one liter with distilled deionized water.  This  solution will have a pH of 2.88.
          The TCLP specifies a test to determine the alkalinity of the waste to be leached,
          which, in turn, determines which leaching solution should be used.  More-alkaline
          materials are leached with Solution #2, while less-alkaline materials are leached with
          Solution #1 (a minimal sketch of this selection logic appears after this list).  Both
          leaching solutions were used in this leaching characterization, although by
          definition, leaching Solution #2 would have been chosen according to the test
          protocol for nearly all of the alkaline materials discussed in this report, based on a
          determination of the alkaline nature of each solid residue.

         The use of both leaching solutions allows comparisons to be made  between the waste
         forms and also provides an interesting comparison between the materials with respect
         to the acid leachability of each element tested.

     •   The synthetic groundwater  leaching procedure  (SGLP) (2)  was developed  as a
         generic  leaching test to be applied  to materials to simulate  actual field leaching
         conditions.

          Since the TCLP was designed to simulate leaching in a sanitary landfill under
          codisposal conditions, it is not appropriate for evaluating leaching of coal conversion
          by-products in typical disposal or utilization scenarios.  To provide more appropriate
         and predictive information for  coal  conversion  by-products  and  other unique
         materials, a leaching test was developed using the same basic protocol as the TCLP,
         but allowing for the appropriate leaching solution  chemistry.  Test conditions are
          end-over-end agitation, a 20-to-1 liquid-to-solid ratio, and an 18-hour equilibration

-------
          time.  The leaching solution used for this project was distilled deionized water.  For certain
         predictive applications, this solution may not be totally appropriate since mercury,
         for example, would be highly influenced by the presence of chloride because of the
         formation of an extremely stable mercury chloride complex.  Local, site-specific
         factors,  such  as  the  presence  of  significant  halide  concentrations  or other
         geochemical factors likely to influence trace element mobility, would have to be
         considered in any real disposal setting.  For our work on many research projects, the
         most likely source of water would be rainwater, thus prior mineralization would not
         be a consideration.  Additionally, because of the extremely alkaline nature of the
         samples included in this report and their high acid-neutralization capacity beyond the
          simple high pH, acidity from the impact of varying acid precipitation concentrations
          was not considered to be an important factor (although, like every imaginable
          factor, it would no doubt have influenced results to some small degree).  The purpose
          of this test was to provide data not influenced by the presence of acetate ion or the
          initial acid impact when the sample and leaching solution were mixed.

     •   A long-term leaching (LTL) procedure, also using distilled deionized water,  was
         included to identify effects associated with any mineralogical changes that may occur
         in the waste forms upon long-term contact with water.  Separate samples were
         analyzed after 18 hours,  48 hours,  1 week, 4  weeks, and 12 weeks.  It has been
         found previously that, on long-term contact with water, certain coal conversion solid
         waste materials form secondary hydrated phases with mineralogical and chemical
         compositions different from any of the material in the original ash (3). In another
         research project, it was demonstrated that the formation of these hydrated phases was
         often accompanied by dramatic decreases in solution concentrations of oxyanionic
         species such as borate, chromate, selenate, and vanadate (4).  Ettringite formation
         has been implicated in this phenomenon.  The decrease in the concentrations of these
         elements would not be predicted from the results of short-term leaching tests.

RESULTS AND DISCUSSION

The results of laboratory investigation are  summarized in four separate figures for clarity in
the presentation of data.  Some LTL results have been omitted, where concentrations were
below the lowest level of quantitation (LLQ), to simplify interpretation of these data and to
emphasize the change in leachate concentrations of elements with time.  In these cases, the
absolute concentration value is of less scientific significance than trends in concentration.

Figures 1 and 2 show elemental concentrations  for all of the RCRA elements in leachates from
the three leaching solutions.  The SGLP, TCLP leaching Solution #1, and TCLP leaching
Solution #2 represent a series of increasing acidity; thus, TCLP #1 is less acidic than TCLP
#2, and the SGLP leaching solution is essentially neutral.  The measured concentration for
each element is compared  with  the RCRA limit  as well as  with the  maximum theoretical
concentration. This maximum concentration is calculated by using the results of bulk chemical
analysis for each element, assuming total dissolution of each analyte at the 20-to-l  liquid-to-
solid ratio used in the leaching protocols.  This allows comparison of RCRA limits  and
leachate  concentrations  with  a  calculated  worst-case  scenario  (maximum  calculated
concentrations), assuming total dissolution of analyte.
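As an illustration of this worst-case calculation, the short Python sketch below converts a bulk
concentration (mg/kg) into the maximum possible leachate concentration at the 20-to-1 liquid-to-solid
ratio used in these protocols; the bulk values shown are hypothetical placeholders, not data from this
study.

    # Maximum theoretical leachate concentration assuming total dissolution.
    LIQUID_TO_SOLID_RATIO = 20.0  # L of leaching solution per kg of solid

    bulk_mg_per_kg = {"As": 12.0, "Ba": 800.0, "Se": 4.0}  # hypothetical bulk analysis results

    for element, bulk in bulk_mg_per_kg.items():
        # (mg analyte / kg solid) / (L solution / kg solid) = mg analyte / L solution
        max_mg_per_l = bulk / LIQUID_TO_SOLID_RATIO
        print(f"{element}: maximum theoretical leachate concentration = {max_mg_per_l:.2f} mg/L")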
[Figure omitted: bar chart of leachate concentrations (mg/L) for As, Ba, Cd, Cr, Pb, Hg, Se, and Ag,
comparing SGLP, TCLP #1, TCLP #2, the calculated maximum, and the RCRA limit (100 mg/L for Ba).]
Figure 1. Fly ash SGLP and TCLP (RCRA elements).  (Reprinted with permission from
         Elsevier Science.)
[Figure omitted: bar chart of leachate concentrations (mg/L) for As, Ba, Cd, Cr, Pb, Hg, Se, and Ag,
comparing SGLP, TCLP #1, TCLP #2, the calculated maximum, and the RCRA limit.]
Figure 2.  Duct-injection ash SGLP and TCLP (RCRA elements).  (Reprinted with
           permission from Elsevier Science.)
A general conclusion that can be drawn from these figures is that the leachability of most of
the RCRA  elements present in these samples is extremely low. It can be seen that mobile
analytes, as represented by leachate concentrations, are always a  fractional portion of that
available and are often several orders of magnitude lower than theoretical calculated amounts.

Figures 3 and 4 contain information on the non-RCRA elements similar to that shown for the RCRA
elements in the previous figures; however, in these figures, the comparison is with respect to the maximum
calculated value only since RCRA limits  do not exist.   This  concentration  is calculated
assuming that the total mass of trace elements as determined from the bulk  analysis had
dissolved.  The non-RCRA elements were chosen for the purpose of scientific evaluation of
the various leaching tests.

Greater differences in leachability of trace elements for the various leaching solutions were
shown with the fly ash selected than with the duct-injection residue.  This is often the case for
the less-alkaline materials like this low-calcium fly ash, where acidity of leaching solution has
more control  over  final pH  than  for  the  strongly alkaline materials, where  final  pH is
essentially controlled by the large amounts of alkaline materials available.

Figures 5 and 6 show the change in concentrations during LTL tests of RCRA elements measured
above the LLQ.  As before, the levels are compared against the RCRA limit for each element.
The y-axis for concentration has been split for more meaningful representation of very low and
relatively high concentrations.
[Figure omitted: bar chart of leachate concentrations (mg/L) for B, Cu, Mn, Mo, Ni, Sr, V, Y, Zn,
and Zr, comparing SGLP, TCLP #1, TCLP #2, and the calculated maximum.]
Figure 3. Fly ash SGLP and TCLP (non-RCRA elements).  (Reprinted with permission
          from Elsevier Science.)
[Figure omitted: bar chart of leachate concentrations (mg/L) for B, Cu, Mn, Mo, Ni, Sr, and Zn,
comparing SGLP, TCLP #1, TCLP #2, and the calculated maximum.]
Figure 4.  Duct-injection ash SGLP and TCLP (non-RCRA elements).  (Reprinted with
           permission from Elsevier Science.)
[Figure omitted: bar chart of LTL leachate concentrations (mg/L) for As, Ba, Cr, and Se at 30 and
60 days, compared with the calculated maximum and the RCRA limit (100 mg/L for Ba).]
Figure 5.  Fly ash LTL (RCRA elements). (Reprinted with permission from Elsevier
           Science.)
[Figure omitted: bar chart of LTL leachate concentrations at 48 hours and 1 week, compared with
the calculated maximum and the RCRA limit.]
Figure 6.  Duct-injection ash LTL (RCRA elements).  (Reprinted with permission from
           Elsevier Science.)
Additionally, it should be noted that the RCRA limit for barium is actually 100 mg/L, as noted in
the chart, which extends the overall range for all elements to between 6 and either 25 or 35 mg/L,
as required by the leachate composition of each sample.

The important information in these figures is in the trends of solubility with respect to time
rather than actual concentrations.  It can be seen that the behavior of arsenic, chromium, and
selenium in the duct-injection ash is anomalous with respect to the expected gradual increase
with time.  In the case of these elements, the formation of new  mineralogical phases is most
likely responsible for the decrease in leachate concentration over time.  It has been shown in
another research project that the formation of ettringite can be accompanied by the fixation of
a number of elements (4).   Arsenic, chromium,  and selenium  can be immobilized by
incorporation into an insoluble ettringite phase. Normal leaching with a gradual increase in
leachate concentration with respect to time was seen for the fly ash. This material, because
of its low alkalinity, was not expected to form ettringite or ettringite-like phases that could
affect oxyanion concentrations.  Additionally, most of the RCRA elements in the fly ash
leachate were below the detection limits and are not shown on the graph.

Figures 7 and 8 compare LTL leachate concentrations for non-RCRA elements versus time and
include the calculated maximum concentrations of elements in the leachate, assuming total
dissolution of trace elements, as determined from the bulk analysis.

Anomalous behavior is seen for molybdenum concentrations in fly ash leachate and for boron
concentrations in duct-injection ash leachate.  These trace elements can be immobilized in
ettringite phases, as seen with the RCRA elements (Figure 7).  Several other elements show a
gradual increase in concentration, indicating their presence in moderately soluble phases which
gradually dissolve or release trace elements on long-term contact with water.
[Figure omitted: bar chart of LTL leachate concentrations (mg/L) for B, Cu, Mn, Mo, Ni, Sr, V, Y,
Zn, and Zr at 18 hours, 30 days, and 60 days, compared with the calculated maximum.]
Figure 7.  Fly ash LTL (non-RCRA elements).  (Reprinted with permission from Elsevier
           Science.)
[Figure omitted: bar chart of LTL leachate concentrations for Mo, Ni, Sr, and Zn at 18 hours,
48 hours, 1 week, 4 weeks, and 12 weeks, compared with the calculated maximum.]
Figure 8. Duct-injection ash LTL (non-RCRA elements). (Reprinted with permission
          from Elsevier Science.)

SUMMARY

The characterization of a waste material for disposal must include a complete study consisting
of chemical, mineralogical, and often physical characterization. Site characterization and the
effects of potential interactions of leachate with the environment are essential parts of any
complete study designed to evaluate potential for environmental impact of a waste material.

Long-term leaching results indicate the importance of this test in appropriate circumstances
primarily because of the unpredictability of results for individual elements between different
solids. Additionally, LTL most closely represents the environmental scenario most wastes are
likely to encounter, whereas multiple pore volumes of leaching solution in contact with the
solid over an 18-hour period are highly unlikely to occur.  Despite the inconvenience of having
to wait for months for results, LTL should be performed as a part of the environmental impact
evaluation if the potential to exceed RCRA limits exists.

Trace element mobility (leachability) in coal combustion solid by-products can be characterized
for individual materials and not for generalized categories of these materials.  Therefore, each
specific disposal project requires appropriate material characterization based on the distinct and
specific attributes of that material.

An acidic leaching solution does not constitute a "worst-case scenario," as applied to the TCLP
leaching of coal conversion solids or any other material.  Leaching procedures and solutions
must be carefully chosen and evaluated to provide reliable information and to be scientifically
valid.  A regulatory testing scheme should include flexibility to adopt a short-term leaching
procedure allowing the use of appropriate leaching solutions and/or long-term leaching tests
when necessary.  It may not be appropriate to attempt to model a worst-case scenario in a
laboratory procedure, since this represents the use of science to demonstrate a "case" rather
than to accurately characterize a sample.

There are currently no laboratory leaching tests available that provide an accurate prediction
of absolute leachate concentrations of trace elements  in  field settings.   Thus,  leachate
concentration trends, as provided by LTL results, and comparisons of leachable  amounts
versus total amounts of analyte in LTL, provide the most scientifically useful and valid
information currently available from laboratory tests.  Empirical results of short-term leaching
can be very misleading and are often misapplied in the formulation of decisions impacting our
environment.

Absolute containment of a waste and its leachate  is impossible even in the best  engineered
disposal facility. Thus, since escape of leachate is inevitable,  slow controlled release of trace
elements is essential to ensure low, nontoxic leachate concentrations that can be re-equilibrated
in the environment.  Toxicity with respect to most trace elements is a function of concentration
and not identity, thus release is not necessarily undesirable.   Since disposal  is forever,
scientific thought about disposal must consider the long term and be realistic in terms of what
actually constitutes a hazard.

REFERENCES

1.  U.S. Environmental Protection Agency.  Federal Register 1986, 51(9), 1750-1758.

2.  Hassett, D.J.  "A Generic Test of Leachability:  The  Synthetic Groundwater Leaching
    Procedure,"   In Proceedings of  the Waste Management  for the Energy Industries
    Conference; Grand Forks, ND, Apr. 29-May 1,  1987.

3.  Hassett, D.J.; McCarthy, G.J.; Kumarathasan, P.; Pflughoeft-Hassett, D.F. "Synthesis
    and Characterization of Selenate  and Sulfate-Selenate Ettringite Structure  Phases,"
    Materials Research  Bulletin 1990, 25,  1347-1354.

4.  Stevenson, R.J.; et al. "Solid Waste Codisposal Study," Final report prepared for the Gas
    Research Institute, Contract No. 5083-253-1283,  February 1988.
                                                                                71
           SUGGESTED MODIFICATION OF PRE-ANALYTICAL HOLDING TIMES -
                      Volatile Organics in Water Samples

David W. Bottrell, Chemist, U.S. Department of Energy, Office of Environmental
Management (EM-263), 1000 Independence Ave. S.W., Washington, D.C. 20585-0002
ABSTRACT:

The current political climate for environmental programs dictates that quality
control/quality assurance requirements be cost effective, reduce error,
improve quality, or otherwise add value to data collection.   Current holding
time requirements, especially in the case of volatiles in water, are a prime
example of minimal improvement in environmental data reliability at
inordinately high costs to both regulators and the regulated community.
Through the development and validation of a holding time model,  this paper
describes a concept of "practical reporting times"  and suggests alternative
approaches for implementation.  These include the extension of pre-analytical
holding times for volatile organics in appropriately preserved (i.e., no headspace and
pH <2) water samples to 28 days.  Based on common data delivery
schedules, e.g., 30-day submission, this functionally eliminates the current
regulatory requirement.  A second aspect of the modification of regulatory
requirements is the definition of a mechanism for further extension based on
analyte- and sample-specific demonstration of acceptable stability.  The
activities summarized in this paper were designed by a steering committee with
representation from across U.S. Environmental Protection Agency (EPA) Program
and Regional Offices, the Department of Energy (DOE, Office of Environmental
Management), and the Department of Defense (DOD).  The work was performed at
Oak Ridge National Laboratory (ORNL).
BACKGROUND:

Pre-analytical "Holding Times"  for environmental samples were initially based
on the reasonable concept that  chemical and physical characteristics may
change during each of the many  steps  from sampling through analysis.  In
response to the need to limit degradation or loss in water samples,  holding
times were arbitrarily set and  specified in 40CFR Part 136 (1979).   Unlike
most technical and legislative  aspects of environmental programs, this
requirement has never been significantly updated.  Actually, its impact has
been expanded beyond the initial application to many additional regulatory
programs and environmental media (1,2,3).  There is widespread skepticism in
the environmental community about the technical basis for the requirement.
However, the complexity of the organizational structure sustaining it is a
daunting obstacle to change.  Table 1 summarizes the legally mandated
requirements.  Table 2 lists the Environmental Protection Agency (EPA) Offices
that explicitly or implicitly (e.g., included within method guidance) maintain
the regulatory status.
                                         507

-------
                                    Table 1
                        MANDATED HOLDING TIMES FOR VOCs

                                          Holding Time (Days), 4C storage
Program / Reference                    pH adjustment      no pH adjustment

SDWA - 40CFR 141
     Halocarbons                            14             pH adjustment required
     Aromatics                              14             pH adjustment required

CWA - 40CFR 136
     Halocarbons                            14                     14
     Aromatics                              14                      7

RCRA (TCLP, Delisting) - 40CFR 261
     Halocarbons                            14(a)                  14(a)
     Aromatics                              14(a)                  14(a)

(a) For TCLP characterization, the sample must be extracted within 14 days and
then analyzed within 14 days of the TCLP extraction.
                                    Table 2

      EPA PROGRAMS that PROPOSE / CONTROL / MODIFY / APPROVE VOC Holding Times

*     Office of Emergency and Remedial Response (OERR - Superfund)

*     Office of Solid Waste (OSW - RCRA)

*     Office of Water (OW - Drinking Water/Wastewater - CWA)

*     Office of Prevention, Pesticides, and Toxic Substances (OPPTS - Pesticides/TSCA)

*     Office of Air and Radiation (OAR - Stationary/Ambient)
HISTORY:

Since the mid-1980s, various interagency groups (US EPA, DOE, and DOD) have
funded projects to evaluate options and clarify analyte-specific holding time
considerations.  Table 3 lists physical characteristics of various target
analytes to illustrate the diversity across the volatile organic analytical
fraction.  Initial holding time investigations centered on ways to bring
chemistry into the interpretation of analytical results.  Risk assessment may
be questioned as a relatively inexact estimate or interpretation of technical
variables, but at least the discipline recognizes that not all chemicals behave
identically.  The basic studies supporting several subsequent publications were
presented at the 3rd, 4th, and 5th Annual Waste Testing and Quality Assurance
Symposia and were again discussed during a special session at last year's
Symposium (4-6).  In addition, several EPA and DOE groups have published
summary studies interpreting and expanding various aspects of the data sets.
These are readily available elsewhere and are beyond the scope of this
presentation (7-10).
                                    Table 3

          Physical Constants for Selected Volatile Organic Chemicals (VOCs)

                                            BOILING           VAPOR
                           DENSITY           POINT           PRESSURE
COMPOUND                   (g/mL)          (degrees C)       (mm of Hg)
Bromomethane                1.68                3              1250
Chloroform                  1.48               61               160
Trichloroethene             1.46               87                60
Styrene                     0.906             145                 5
Benzene                     0.877              80                76
Toluene                     0.867             110                22
CURRENT STATUS:
Based on last year's special session and subsequent interagency discussions,
EPA's Analytical Operations Branch provided DOE with data (results only) from
historical holding time studies, which were sent to Oak Ridge National
Laboratory for assessment.  The purpose has been to use these data, supported
by a limited verification study, to demonstrate applicability of a model
developed at ORNL that defines a "practical reporting time" (PRT).  Table 4
summarizes suggested holding times developed through application of the model
for both data sets.
        Table 4.  PRT values (in days) compared with other holding time studies in aqueous matrices.

                               ORNL, 4C, no preservative          ORNL, 4C, NaHSO4           EPA, 4C, no preservative
Compound                      Distilled  Ground-  Surface-   Distilled  Ground-  Surface-          Sewage
                                water     water     water      water     water     water            water
Acetone                          -         -         -        ≥112      ≥112      ≥112               31
Benzene                        ≥112       102        62         42      ≥112        34              ≥90
Bromodichloromethane             -         -         -          -         -         -               ≥90
Bromoform                      ≥112      ≥112        51       ≥112      ≥112      ≥112              ≥90
Bromomethane                     -         -         -        ≥112      ≥112      ≥112               -
2-Butanone                       -         -         -        ≥112      ≥112      ≥112               23
Carbon Disulfide                 -         -         -          31        39        34               68
Carbon Tetrachloride           ≥112      ≥112        96         57      ≥112      ≥112               19
Chlorobenzene                  ≥112      ≥112        65       ≥112      ≥112        54              ≥90
Chloroethane                     -         -         -        ≥112      ≥112      ≥112               -
Chloroform                     ≥112      ≥112        57       ≥112      ≥112      ≥112              ≥90
Chloromethane                    -         -         -        ≥112      ≥112      ≥112               -
Dibromochloromethane             -         -         -          -         -         -               ≥90
1,1-Dichloroethane             ≥112      ≥112        55       ≥112      ≥112      ≥112              ≥90
1,2-Dichloroethane               -         -         -          -         -         -               ≥90
1,1-Dichloroethene             ≥112      ≥112        80       ≥112        51      ≥112              ≥90
1,2-Dichloroethene               -         -         -          -         -         -               ≥90
1,2-Dichloropropane            ≥112      ≥112        85         41      ≥112        45              ≥90
Ethylbenzene                   ≥112        19        13       ≥112      ≥112      ≥112              ≥90
2-Hexanone                       -         -         -        ≥112      ≥112      ≥112               28
Methylene Chloride             ≥112      ≥112        71         19        22        40              ≥90
4-Methyl-2-Pentanone             -         -         -        ≥112      ≥112      ≥112               54
Styrene                        ≥112         0        10       ≥112      ≥112      ≥112               79
1,1,2,2-Tetrachloroethane         9      ≥112      ≥112         49      ≥112        40              ≥90
Tetrachloroethene              ≥112      ≥112        55       ≥112      ≥112      ≥112               81
Toluene                        ≥112        67        56       ≥112      ≥112      ≥112              ≥90
1,1,1-Trichloroethane            -         -         -          -         -         -               ≥90
1,1,2-Trichloroethane          ≥112      ≥112        87       ≥112      ≥112        26              ≥90
Trichloroethene                ≥112      ≥112        53         36      ≥112        33              ≥90
o-Xylene/Xylene (total)        ≥112        80      ≥112       ≥112      ≥112      ≥112              ≥90
Specific details of the statistical approach are beyond the scope of this
presentation and are published elsewhere (11).  Generally, the
model specifies the length a sample can be held with reasonable assurance the
analyte concentrations have not changed significantly.  The value of the
approach is that the key terms (significant change and reasonable assurance)
are user variable and user defined.  Risk of error is quantitative and
consistent with currently available draft data quality assessment guidance.

The ability to assess the analyte-specific variability contributed by holding
times, especially for samples analyzed beyond currently specified limits, is
critical for accurate data interpretation, e.g., data validation.  Empirical
data from historical and more recent studies have determined the appropriate
degradation (loss) model (e.g., zero-order, first-order, log-term, etc.),
providing analyte specificity to the statistical approach.  Figure 1
demonstrates the PRT for a linear decreasing concentration, which applies to
approximately 80% of the cases reviewed.
[Figure omitted: plot of a decreasing analyte concentration versus time (days), showing the
critical concentration used to define the PRT.]
      Figure 1.  Practical reporting time (PRT) for an analyte with a linear decreasing
      concentration.
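To make the PRT idea concrete, the Python sketch below fits a straight line to hypothetical
storage-stability data and reports the time at which the fitted concentration has fallen by a
user-defined fraction (15% here); the published model (11) additionally folds a confidence level
(85% in this work) into the estimate, which this simplified sketch omits.

    # Illustrative PRT estimate for a linearly decreasing analyte concentration.
    # The (day, concentration) pairs below are hypothetical, not study data.
    days = [0, 7, 14, 28, 56]
    conc = [10.0, 9.6, 9.1, 8.3, 6.9]   # e.g., ug/L

    n = len(days)
    mean_t = sum(days) / n
    mean_c = sum(conc) / n
    # Ordinary least-squares fit: conc = a + b * t
    b = sum((t - mean_t) * (c - mean_c) for t, c in zip(days, conc)) / \
        sum((t - mean_t) ** 2 for t in days)
    a = mean_c - b * mean_t

    change_fraction = 0.15                  # user-defined "significant change"
    critical_conc = a * (1.0 - change_fraction)

    if b < 0:
        prt_days = (critical_conc - a) / b  # time at which the fit crosses the limit
        print(f"Estimated practical reporting time: {prt_days:.0f} days")
    else:
        print("No decreasing trend; holding time limited only by the study length")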
Processing the EPA data through the model verified that at the level of
assurance defined as acceptable (85% confidence in identifying a 15% change),
there was minimal loss in data reliability from holding time extension.   Loss
/ degradation does occur (e.g., carbon  tetrachloride in this particular data
set) and is extremely variable.  Table  4 summarizes the results of the data
analysis (n=150 analyte pairs for the ORNL studies  and n ranged from  79 to 86
for the EPA data set).  The results are dependent upon many chemical,
physical, and biological factors.   There is clearly no universal  correct
answer for all analytes in all samples.   However, the approach described here
is technically defensible because it recognizes analyte variability and
because it provides a mechanism to assess  changes (degradation).

Understanding and managing variability is preferable to the arbitrary
black-and-white decisions that are currently required.  The possibility of
minimal loss, especially of infrequently found analytes, is a cost-effective
tradeoff worth making given the value added by the approach described here.
This is preferable to maintaining an entire system based on the worst-case
situation.  The approach described here will introduce a risk-based system that
supports quantification and management of sample-specific variability,
consistent with interagency interests and Congressional direction.

A generally recognized need in the modification of  current holding time
requirements is separation of contractual  issues from technical  considerations
(9th Annual Waste Testing and Quality Assurance Symposium).  Current  Superfund
Guidance (12) attempts to address this  problem by relying on data validator
judgement to interpret effects and define the  impact of analyses performed
beyond current limits (for soil and water).  This is not a solution for a
consistent, documented, defensible process; however, it clearly points out the
need for a defined procedure to quantify analyte-specific effects and to
actually provide the necessary guidance for interpretation.


PROPOSAL(S) FOR MODIFICATION OF CURRENT REQUIREMENTS:

The first aspect of proposed modification is a simple extension of maximum
holding time to 28 days for water samples properly  collected, preserved,  and
stored (e.g., no headspace and acidified to pH <2).  This functionally
separates technical and contractual requirements assuming current, typical
data delivery schedules of approximately 30 days.  This approach is simple and
technically more justifiable (reliability and cost) than current requirements.
It is immediately applicable to many routine monitoring data collection
activities and essentially all of the regulated community.

A second, more complex aspect of the proposed  modification is to define a
procedure for site- and analyte-specific stability  studies to extend  holding
times beyond 28 days for specific matrix and analyte combinations, especially
for large scale or long term projects (e.g., DOE's  mixed waste programs
requiring radiochemical screening prior to sample shipment for hazardous
chemical analysis).  This would allow program  decisions to determine  an
appropriate, cost-effective approach for a wide variety of environmental
investigation and monitoring programs.   In addition, the approach effectively
defines and meets data needs consistent with current EPA guidance and future
requirements (13-15).
CONCLUSION
Recently, several encouraging factors have pushed the identification and
implementation of more effective environmental data collection activities.
These include:

*     The current EPA emphasis stressing scientific input into the decision-making
      process of environmental programs (16,17).

*     Distribution of several Quality Assurance guidance and requirements documents
      covering data collection aspects (e.g., Data Quality Objective planning and
      Data Quality Assessment) (13-15).

*     Recognition that environmental decisions are never without risk and that
      variability / error can be managed but not eliminated.  A related aspect is
      the acceptance of laboratory or measurement uncertainty as a potentially minor
      contribution to overall uncertainty in estimating site conditions (18).

*     Questions about cost versus increased knowledge from traditional quality
      control parameters, e.g., duplicated matrix spikes (19).

*     The emphasis on performance-based methods criteria to replace earlier
      generations of method adherence philosophy not directly related to decision
      criteria.

*     The role of the Environmental Monitoring Methods Council (EMMC) to integrate
      diverse requirements across EPA programs, e.g., the Offices of Water and Solid
      Waste and Superfund.

This initial set of proposals is  a first attempt to establish a process for
cooperative efforts to  identify changes in environmental data collection
activities that can improve the decision making process for both regulators
and the regulated community.  Additional potential  topics include
polychlorinated biphenyls in water and soil and volatile organics in soil.

In the current budget climate,  it is  essential  to assure that research is tied
directly to regulatory  concerns to efficiently focus on relevant problems,
facilitate distribution of information, and enhance the rate of acceptance of
technical advances.  Environmental research is  relevant only when it is
accepted and directly applied to  real  data collection projects and
environmental decisions.  This particular project was selected as an informal
pilot partly because, consistent with the DQO process, it required minimal new
data.  This has obvious advantages in cost, but also in the reduction of time
required to complete the study.  However, the primary reason for selection
was (perhaps inappropriately) simplicity.  Adopting this proposal will result
in essentially no negative impact on environmental decisions.  However,
decisions may be made with a better perspective on the inherent uncertainty of
environmental data collection activities.  This isn't a sacrifice, but a
necessary recognition of reality.  The primary potential benefit from the
suggested changes is  significantly more cost-effective data collection
supporting environmental decisions.   The following is a brief summary of
potential areas for cost reduction:
*     Effective analytical "batch sizing" for analytical laboratories can
      improve the efficiency of sample processing (see the sketch after this
      list).  For example, single or very small batches require up to a 300%
      increase in actual analyses to meet quality control requirements.
      Extensions of holding times would significantly increase analytical batch
      sizes, dropping the routine percentage of QC samples from a potential
      >40% to <20%.

*     "Required" instrumentation/personnel  for analytical demand, a standard
      contract feature, would be reduced (e.g.,  two instruments necessary
      instead of three contractually "required").

*     Relief from holding times as a "functional" sampling schedule driver.
      For example, sampling is often limited to half-day activities to
      accommodate overnight shipment to remote laboratories.  Alternatively,
      sampling days could be extended, followed by "bulk sample" shipment.
      This could easily result in a "5-day" sampling program being completed
      in two days, with resource and time reductions carried on through the
      entire data collection process.
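The batch-sizing arithmetic in the first bullet can be illustrated with the sketch below, which
assumes, purely for the sake of example, a fixed overhead of three QC analyses per batch (e.g., a
method blank, a laboratory control sample, and a matrix spike); the actual QC mix is program
specific and is not taken from this paper.

    # Fraction of total analyses consumed by batch QC for a hypothetical
    # overhead of three QC analyses per analytical batch.
    QC_PER_BATCH = 3

    for samples_per_batch in (1, 2, 5, 10, 20):
        total_analyses = samples_per_batch + QC_PER_BATCH
        qc_fraction = QC_PER_BATCH / total_analyses
        extra_work = QC_PER_BATCH / samples_per_batch
        print(f"{samples_per_batch:2d} samples/batch: QC = {qc_fraction:.0%} of all analyses "
              f"({extra_work:.0%} more analyses than the samples alone)")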

The adoption of the modifications suggested in this presentation will have
essentially no adverse effect on reliability of environmental data.  The only
effect would be to significantly reduce program costs and improve the
technical understanding of environmental conditions.  Current plans are to
introduce the approach through the Quality Assurance Management Staff (QAMS)
and the EMMC as a mechanism for entry across boundaries.  Draft sections
consistent with the EPA's methods format have been submitted for consideration
as a section in SW-846 (19).
                                  REFERENCES

1) Federal Register. 1979. 40CFR Part 136, Proposed Rules. Vol. 44, No. 233: 69534. Dec. 3.

2) Federal Register. 1984. 40CFR Part 136, Rules and Regulations. Vol. 49, No. 209: 145. Oct. 26.

3) Test Methods for Evaluating Solid Waste, Physical/Chemical Methods, SW-846, 3rd Edition.
US EPA Office of Solid Waste and Emergency Response. November 1986.

4) M.P. Maskarinec, L.H. Johnson, and S.K. Holladay. Recommended Holding Times of
Environmental Samples. Proceedings of US EPA Third Annual Symposium on Solid Waste Testing
and Quality Assurance, H29-35. 1988.

5) M.P. Maskarinec, et al. Recommended Holding Times of Environmental Samples. Proceedings
of US EPA Fourth Annual Symposium on Solid Waste Testing and Quality Assurance. 1988.

6) D. Bottrell, J. Fisk, and C. Dempsey. Pre-analytical Holding Time Study: Volatiles in Water.
Proceedings of the Fifth Annual Symposium on Solid Waste Testing and Quality Assurance,
11:24. 1989.

7) M.P. Maskarinec, R.L. Moody. Principles of Environmental Sampling, ed. by L.H. Keith.
1988. 145.

8) M.P. Maskarinec, L.H. Johnson, S.K. Holladay, R.L. Moody, C.K. Bayne, and R.A. Jenkins.
1990. Stability of Volatile Organic Compounds in Environmental Water Samples During
Transport and Storage. Environ. Sci. and Technol. 24: 1665-1670.

9) D.W. Bottrell, J.F. Fisk, and M. Hiatt. Holding Times: VOAs in Water Samples. Environmental
Lab, 29-31, June/July 1990.

10) D.W. Bottrell, et al. "Holding Times of Volatile Organics in Water," Waste Testing and
Quality Assurance: Third Volume, ASTM STP 1075, C.E. Tatsch, Ed., American Society for
Testing and Materials, Philadelphia, 1991.

11) Bayne, C.K., Schmoyer, D.D., and Jenkins, R.A. "Practical Reporting Times for Environmental
Samples." Environmental Science and Technol. 1994, 28, 1430-1436.

12) US EPA Contract Laboratory Program Functional Guidelines for Organic Data Review,
EPA-540/R-94/012, Office of Solid Waste and Emergency Response. February 1994.

13) Measuring and Interpreting VOCs in Soils: State of the Art and Research Needs.
Environmental Monitoring Systems Laboratory, Las Vegas, NV, ed. R.L. Siegrist and J.J. van Ee.
EPA/540/R-94/506. January 1993 (Pre-issue Copy).

14) U.S. Environmental Protection Agency. "Guidance for the Data Quality Objectives Process,"
EPA QA/G-4, Quality Assurance Management Staff, September 1994.

15) U.S. Environmental Protection Agency. "Data Quality Objectives Process for Superfund,"
EPA 540-R-93-071, Office of Solid Waste and Emergency Response, September 1993.

16) Guidance for Data Quality Assessment, EPA QA/G-9 (External Working Draft), US EPA,
QAMS, March 27, 1995.

17) W.F. Raub. Plenary Presentation, 15th Annual National Meeting on Managing Environmental
Data Quality, March 27-31, 1995, San Antonio, TX.

18) W.F. Raub. Keynote Address, 9th Workshop on Quality Assurance for Environmental
Measurement, April 25-27, 1995, Scottsdale, Arizona.

19) G. Robertson. "Lessons Learned from a Review of the CLP Database," 9th Workshop on
Quality Assurance for Environmental Measurement, April 25-27, 1995, Scottsdale, Arizona.
                                                                                                    72
        SECONDARY WASTE MINIMIZATION IN ANALYTICAL METHODS*
                                            by
              David W. Green, Lesa L. Smith, Jeffrey S. Crain, Amrit S. Boparai,
                    James T. Kiely, Judith S. Yaeger, and J. Bruce Schilling
                               Analytical Chemistry Laboratory
                               Chemical Technology Division
                                Argonne National Laboratory
                                  9700 South Cass Avenue
                                Argonne, Illinois 60439-4837

                             Telephone Number:  (708)252-4379
                                Fax Number: (708)252-5655
                             Electronic Mail:  green@cmt.anl.gov
                                     To be presented at:
               Eleventh Annual Waste Testing and Quality Assurance Symposium
                                      Washington, DC
                                      July 23-28, 1995
                                 The submitted manuscript has been authored
                                 by a  contractor of the  U. S. Government
                                 under  contract  No.  W-31-109-ENG-38.
                                 Accordingly, the U. S. Government retains a
                                 nonexclusive, royalty-free license to publish
                                 or reproduce the  published form  of this
                                 contribution, or allow others to do so, for
                                 U. S. Government purposes.
*Work supported by the U. S. Department of Energy under Contract W-31-109-ENG-38.
         SECONDARY WASTE MINIMIZATION IN ANALYTICAL METHODS

David W. Green, Manager, Analytical Chemistry Laboratory, Lesa L. Smith, Sr. Scientific Associate,
Jeffrey S. Crain, Associate Chemist, Amrit S. Boparai, Organic Analysis Group Leader, James T.
Kiely, Scientific Assistant, Judith S. Yaeger, Scientific Assistant, and J. Bruce Schilling, Assistant
Chemist, Analytical Chemistry Laboratory, Chemical Technology Division, Argonne National
Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439-4837

ABSTRACT

The characterization phase of site remediation is an important and costly part of the process. Because
toxic solvents and other hazardous materials are used in common analytical methods, characterization
is also a source of new waste, including mixed waste. Alternative analytical methods can reduce the
volume or  form of hazardous waste produced either in the sample preparation step or in the
measurement step.

We are examining alternative methods in the areas of inorganic, radiological, and organic analysis.
For determining inorganic constituents, alternative methods were studied for sample introduction into
inductively coupled plasma spectrometers. Figures of merit for the alternative methods, as well as
their associated waste volumes, were compared with the conventional approaches.  In the radiological
area, we are comparing conventional methods for gross α/β measurements of soil samples to an
alternative method that uses high-pressure microwave dissolution.  With the alternative method,
liquid waste was reduced by a factor of nine (200 mL/sample), dry active waste was reduced by a
factor of two, and analysis time was reduced by a factor of three.  Preliminary measurements using
the alternative method on other matrices (i.e., oils, greases, sludges), and for the use of alternative,
nonhazardous solvents for the preparation of soils, indicate additional reduction in waste volumes is possible.  For
determination  of  organic constituents, microwave-assisted  extraction  was studied for RCRA
regulated semivolatile organics in a variety of solid matrices, including spiked samples in blank soil;
polynuclear aromatic hydrocarbons in soils, sludges, and sediments; and semivolatile organics in soil.
Extraction efficiencies were determined under varying conditions of time, temperature, microwave
power, moisture content, and extraction solvent. Solvent usage was cut from the 300 mL used in
conventional extraction methods to about 30 mL.   Extraction results varied from one matrix to
another.  In  most cases, the microwave-assisted extraction technique was  as efficient as the more
common Soxhlet or sonication extraction techniques.

INTRODUCTION

The U.S. Department of Energy (DOE) will require a large number of waste characterizations over
a multi-year period to  accomplish the Department's goals in  environmental restoration and waste
management. Estimates vary, but two million analyses annually are expected.1 The waste generated
by the  analytical procedures used for characterizations is a significant source of new DOE waste.
Success in reducing the volume of secondary waste and the costs of handling this  waste would
significantly decrease the overall cost of this DOE program.

Selection of appropriate analytical methods depends on the intended use of the resultant data.  It is
not always necessary to use a "high-powered" analytical method, typically  at higher cost, to obtain
data needed to  make decisions about waste management. Indeed, for samples taken from some
heterogeneous  systems, the meaning of "high accuracy" becomes clouded if the data generated are
 intended to measure a property of this system.  Among the factors to be considered in selecting the
 analytical method are the lower limit of detection, accuracy, turnaround time, cost, reproducibility
 (precision), interferences, and simplicity. Occasionally, there must be tradeoffs among these factors
 to achieve the multiple goals of a characterization program.  The purpose of the work described here
 is to add "waste minimization" to the list of characteristics to be considered. In this paper we present
 results of modifying analytical methods for waste characterization to reduce both the cost of analysis
 and volume of secondary wastes. Although tradeoffs may be required to minimize waste while still
 generating data of acceptable quality for the decision-making process, we have data demonstrating
 that wastes can  be reduced in some cases without sacrificing accuracy or precision.

 APPROACH

 A typical characterization includes the following sequential steps:  planning,  sample collection,
 sample  transport, sample preparation (including separations), measurement, data analysis, and
 reporting.  Opportunities for waste minimization exist in the planning stage and in  the sampling
 process.  However, we have taken the preparation, separation, and measurement steps as our prime
 targets because these laboratory-based processes involve chemicals, sometimes hazardous ones, and
 typically generate significant volumes of waste. Furthermore, we have data to show that the waste
 volume can be  significantly reduced by applying emerging new technologies. We have chosen to
 review the analytical procedures in three areas — sample introduction for inorganic analysis, dissolution
 of  waste  samples  for  radiochemical analysis, and sample preparation for analysis of organic
 constituents.

 SAMPLE INTRODUCTION FOR INORGANIC ANALYSIS

 With the promulgation  of SW-846  Update  II,2 many  of the regulated  elements present in
 environmental and waste samples may be determined by using inductively coupled plasma (ICP)
 atomic  emission spectroscopy,  ICP-mass spectrometry (ICP-MS), or a combination thereof.
 Although these measurement techniques are often capable of achieving instrument detection limits
 of  micrograms per liter or better,  normal ICP sample  introduction —  continuous pneumatic
 nebulization (CPN) of a sample solution — utilizes only 1 to 10% of the sample uptake.   The
 remaining portion of the consumed sample goes directly to laboratory waste, thereby creating a
 secondary  waste stream that would be considered corrosive by  standards  in the  Resource
 Conservation and Recovery Act, and could also be toxic or mixed radioactive waste.  Despite the
 poor efficiency of the pneumatic nebulization process, dissolution or digestion is the preferred means
 of preparing bulk solids for ICP analysis.  Our objective in this project is to  identify and evaluate
 high-efficiency  alternatives for solution introduction that will reduce  or eliminate this  particular
 secondary waste stream.

 Graphite furnace atomization, hydride generation, and nebulization can all be used to  introduce
 dissolved analytes into an ICP.3 In the case of furnace atomization and hydride generation, the
 efficiency with which the analyte is introduced depends in large part upon the chemical properties
 of the element.  The utility of these techniques varies considerably among groups in the periodic
table. Solution nebulization, which is a physical means of analyte transport, works well for a broad
range of elements and, thus, for a broad range of applications; however, the inefficiency of solution
nebulizers was,  until recently, the major source of ICP waste. However, development of the direct
injection nebulizer (DIN),4'5 which  utilizes 100% of a sample solution by nebulizing it directly into
the base of the ICP, has allowed analysts to reduce or eliminate  ICP waste.
We compared solution analyses using FI-DIN and CPN.  Table 1 summarizes the equipment used and
operating conditions.  Use of the flow injection (FI) manifold was critical because it facilitated
reductions in sample uptake and rinsing between samples.  The impact of these reductions is also
shown in the last two rows of Table 1.  Note that the duration of each spectral integration and the
number of repeat integrations were identical for the two systems. The 33% improvement achieved
in analysis time using FI-DIN was due principally to the excellent rinseout characteristics of the FI-
DIN system.  Better rinseout also contributed to the 50% reduction in per sample waste volume;
however, the lower consumption of the FI-DIN system was also a factor.

              Table 1. Equipment and operating conditions used in this work.

                               Continuous pneumatic           Flow-injection direct-
                               nebulization                   injection nebulization

ICP-mass spectrometer          PlasmaQuad II+ with high performance interface
                               (Fisons Instruments, Winsford UK)
Nebulizer                      V-groove (Fisons)              Microneb 2000
                                                              (CETAC, Omaha NE)
Spray chamber                  Scott double-pass (Fisons)     none
Primary solution pump          Minipuls 3 peristaltic pump    Model SI 100 HPLC pump
                               (Gilson, Middleton WI)         (CETAC)
Solution consumption (mL/min)  1.0                            0.06
Injection loop (mL)            none                           0.5
Analysis time (min/sample)     7.5                            5.0
Waste volume (mL/sample)       7.1                            3.4
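Working from the per-sample figures in the last two rows of Table 1, the stated time and waste
reductions can be checked directly:

    # Per-sample figures taken from Table 1.
    cpn_time, din_time = 7.5, 5.0      # analysis time, min/sample
    cpn_waste, din_waste = 7.1, 3.4    # waste volume, mL/sample

    time_reduction = (cpn_time - din_time) / cpn_time
    waste_reduction = (cpn_waste - din_waste) / cpn_waste
    print(f"Analysis time reduced by {time_reduction:.0%}")   # about 33%
    print(f"Waste volume reduced by {waste_reduction:.0%}")   # about 52%, i.e., roughly half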
Tables 2 and 3 compare important analytical figures of merit that were obtained using each of
the sample introduction systems. The data in Table 2, which are based upon nine blank analyses
carried out over two days, indicate that the  instrumental detection  limits achieved with each
system are quite similar.  However, neither system obviates blank limitations as shown by the
comparatively poor detection limits for Ni and Pb.  The blank limitations for Ni and Pb also
appear to affect the precision of Ni and Pb determinations in dilute aqueous standards and two
representative aqueous laboratory wastes (Table 3); however, determinations made using both
systems appear to agree well in most instances, even where precision is poor.
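The instrument detection limits in Table 2 were derived from replicate blank measurements; a
minimal sketch of the usual 3σ calculation is shown below, using made-up blank readings rather
than the actual data.

    import statistics

    # Nine hypothetical blank readings (ug/L) for a single element; the study
    # used nine blank analyses carried out over two days.
    blank_readings = [0.02, -0.01, 0.03, 0.00, 0.01, 0.02, -0.02, 0.01, 0.00]

    sigma = statistics.stdev(blank_readings)   # sample standard deviation of the blanks
    detection_limit = 3.0 * sigma              # 3-sigma instrument detection limit
    print(f"3-sigma instrument detection limit: {detection_limit:.3f} ug/L")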

The data we have collected thus far suggest that significant reductions in waste volume and
analysis time are realized, with little or no  compromise in analytical figures of merit, when FI-
DIN is used in place of CPN for ICP-MS analyses.  These results should also be directly
applicable to ICP atomic emission spectroscopy. As we continue to examine the FI-DIN system,
we intend to make further comparisons of long-term figures of merit, while also studying the
susceptibility of FI-DIN sample introduction to common ICP-MS interferences, i.e., polyatomic
ion spectral interferences and sensitivity suppression by matrix elements.  We will also examine
means of further reducing waste and analysis time by means of different flow injection protocols,
i.e., smaller injection loops, shorter rinse times, and changes in valve and pump switching logic.

                  Table 2.  Comparison of ICP-MS 3σ detection limits.

                          Instrument detection limit (ug/L)
        Element              FI-DIN               CPN
        Ni                     1                  0.5
        Cd                     0.05               0.05
        Pb                     0.8                0.6
        U                      0.01               0.003
   Table 3. Comparison of analyte concentrations determined in nine ICP-MS analyses.

                                        Analyte concentration (mg/L)
Sample            Method    Ni            Cd                Pb            U
10 mg/L Std       FI-DIN    10.1 ± 0.9    10.2 ± 0.1        12 ± 1        10.14 ± 0.04
                  CPN       10.2 ± 0.3    10.02 ± 0.09      9.7 ± 0.2     9.4 ± 0.2
Waste sol'n # 37  FI-DIN    0.8 ± 0.2     1.31 ± 0.01       1.8 ± 0.3     3.24 ± 0.03
                  CPN       0.79 ± 0.05   1.34 ± 0.03       1.58 ± 0.06   3.06 ± 0.09
Waste sol'n # 40  FI-DIN    0.38 ± 0.03   0.0656 ± 0.0005   0.77 ± 0.06   0.613 ± 0.006
                  CPN       0.37 ± 0.09   0.073 ± 0.008     0.72 ± 0.07   0.57 ± 0.02
SOIL DISSOLUTION FOR RADIOCHEMICAL ANALYSES

Dissolution is a vital aspect of sample preparation for environmental radiochemical analyses of
soils.  The traditional laboratory techniques6'7 of high temperature fusion and prolonged acid
digestion are time consuming.  In addition, they both  generate large quantities of secondary
wastes and fume hood emissions. Microwave technology has previously had limited application
in the radiochemical laboratory because of constraints on sample size resulting from vessel
pressure limitations.  However, newer microwave systems incorporating closed vessels can
withstand pressures up to 10 MPa (1500 psi). Thus, larger sample sizes can be accommodated.
We have achieved shorter processing times and reliable sample digestion while dramatically
reducing secondary wastes.

We have used gross α/β measurements to compare the performance of alternative procedures
for sample preparation:  (1) a high-pressure microwave system and (2) a traditional procedure
that uses a hot plate for digestion by repetitive acid treatment.   A  variety of soil types of
potential interest to DOE were selected for testing, including a National Institute of Standards
and  Technology reference  soil  from the Rocky  Flats Plant (SRM  4353) and  several
environmental and contaminated soils from selected DOE sites (labeled Conl, Con2, and Con3).
Paired, two-tailed t-tests indicate no significant differences at the 95% confidence level in the
measurements on samples prepared by the hot plate and microwave digestion procedures for
these soils; representative data8 are shown in Table 4.  In addition, the microwave procedure
demonstrated good reproducibility and low blank values.  In comparison to the traditional hot
plate method, the acid volumes required for the microwave procedure are a factor of 20 lower,
the analyst time for sample processing is a factor of 2.5 lower, and the sample turnaround time
is a factor of 16 lower.
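For illustration, the sketch below runs the same kind of paired, two-tailed t-test on the
gross-alpha values listed in Table 4 (scipy is assumed to be available); the published comparison
was presumably made on the full replicate data rather than on these rounded table values.

    from scipy import stats

    # Gross-alpha results (pCi/g) for the six soils in Table 4.
    hot_plate = [15.0, 9.0, 22.0, 320.0, 174.0, 183.0]
    microwave = [18.0, 9.0, 13.0, 354.0, 191.0, 202.0]

    t_stat, p_value = stats.ttest_rel(hot_plate, microwave)  # paired, two-tailed
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    if p_value > 0.05:
        print("No significant difference at the 95% confidence level")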

Because reactivity increases as pressure increases, these high-pressure microwave systems may
make it possible to use alternative, nonhazardous solvents to leach certain contaminants from
soils for analysis.  We have also investigated replacing strong, corrosive acids with milder,
nonhazardous complexing agents for removing plutonium from soils. While these complexing
agents have been successful for the extraction of contaminants such as plutonium, as shown in
Table 5,  the reagents fail to totally  break down the sample matrix and, therefore,  are not
applicable to matrix constituents such as U and Th.

       Table 4.  Gross α/β analyses by hot plate and microwave digestion methods.

                      Alpha (pCi/g ± 2σ)                  Beta (pCi/g ± 2σ)
Soil type          Hot plate      Microwave           Hot plate      Microwave
SRM 4353            15 ± 5         18 ± 5              14 ± 4         11 ± 3
Fernald              9 ± 7          9 ± 5              <6             10 ± 3
Mound               22 ± 9         13 ± 7              16 ± 6         19 ± 4
Con1               320 ± 34       354 ± 35             31 ± 7         32 ± 7
Con2               174 ± 26       191 ± 26             22 ± 7         23 ± 7
Con3               183 ± 26       202 ± 27             27 ± 8         38 ± 8
        Table 5. Alternative solvents for high pressure microwave digestion of soils.
                  Soil utilized was 1 g of SRM 4353 "Rocky Flats Soil #1."
                       Accepted value is 0.217 ± 0.016 pCi 239Pu/g.

Solvent specifications                    239Pu activity (pCi/g ± 2σ)   Chemical recovery (%)
20 mL 1M citric acid                          0.214 ± 0.020                      67
20 mL 1M sodium citrate                       0.237 ± 0.025                      56
10 mL 2M citric acid                          0.180 ± 0.044                      59
10 mL 1.5M sodium citrate                     0.124 ± 0.029                      33
10 mL 4M tartaric acid                        0.257 ± 0.055                      55
10 mL 1.5M sodium tartrate                    0.218 ± 0.040                      68
10 mL 1M Na2CO3-0.1M EDTA                     0.201 ± 0.014                      45
20 mL 1M Na2CO3-0.1M EDTA                     0.174 ± 0.032                      36
10 mL 2M Na2CO3-0.1M EDTA                     0.183 ± 0.044                      55
20 mL 2M Na2CO3-0.1M EDTA                     0.189 ± 0.039                      62
20 mL 1M citric acid + 1 mL H2O2              0.238 ± 0.041                      50
10 mL 2M citric acid + 1 mL H2O2              0.209 ± 0.037                      58
MICROWAVE-ASSISTED EXTRACTION OF ORGANIC COMPOUNDS

Standard U.S. Environmental Protection Agency (EPA) methods for the extraction and analysis
of semivolatile organic compounds (SVOCs) (also called the "base/neutral/acid fraction") in soil
and solid waste samples typically use over 300 mL of hazardous solvents, such as methylene
chloride. Microwave assisted extraction (MAE)9'10'11'12 has the potential to reduce the amount
of solvent required to 30 to 50 mL.  We have studied the extraction of  SVOCs from soil,
sediment, and sludge samples using SW-846 Method 8270B2 for measurement and the MAE
technique for preparation of samples.  In most cases, the MAE results compare favorably with
the conventional extraction techniques while simultaneously allowing for reduced solvent usage.

To test the extraction of all Method 8270B SVOCs, these materials were spiked onto a blank soil
(Environmental Resource Associates) and extracted at various temperatures.  Three  solvents
were used: methylene chloride, a 50:50 mixture of methylene chloride:acetone, and a 50:50
mixture of hexane:acetone.  With the spiked samples, no obvious trends were seen  between
extractions carried out at 40, 80, and 120°C.  At 40°C, increasing the extraction time from 5 to
20 minutes increases the extraction yields; however, at 80 and 120°C this trend is not observed.
No dependence of recoveries on the microwave power setting was observed.  Sample water
content tends to decrease extraction efficiency for the acetone-containing solvents while
increasing the extraction of polar compounds with methylene chloride.  Table 6 gives the
recoveries of semivolatile organic compounds by class for  sonication extraction,  Soxhlet
extraction, and MAE with four different solvent compositions.

 Table 6. Comparison of the recoveries of SVOCs using alternative extraction techniques.

                                                 Average percent recovery
                                                               Microwave-assisted extraction
Semi-volatile      Compounds   Sonication   Soxhlet     CH2Cl2   CH2Cl2      CH2Cl2     Hexane
compound class     in class    extraction   extraction           + H2O(a)   + acetone  + acetone
Alkylphenol            5           67           56          68       69          70         72
Halophenol            10           72           78          79       76          78         82
Nitrophenol            4           46           64          56       76          70         76
Phthalate              6          110           97          97       76          70         74
PAH                   20           86           84          82       90          87         93
Halocarbon            13           60           70          70       81          78         82
Ether                  6           72           75          72       79          77         80
Ketone                 2           67           74          70       84          81         81
Sulfonate              2           66           76          24       73          69         63
Alcohol                1           69           73          72       70          71         71
Carboxylic acid        1           13           61          17       38          41         37
Pyridine               2            1           36           0       54          19         24
Amide                  2           57           75          56       85          84         86
Nitrosoamine           5           64           70          60       77          77         83
Aromatic amine        12           41           57          49       71          56         54
Hydrazine              1           73           70          69       79          76         78
Azoamine               1           18           78          20       78          88         96
Nitroamine             5           84           88          86      101          95         96

                            (a) Water is 10% by weight of sample.
More complete data are available elsewhere.13  Direct comparison with  an 18-h Soxhlet
extraction procedure  using methylene  chloride  gives  very similar results  for  methylene
chloride:water, methylene chloride:acetone, and hexane:acetone.  Methylene chloride MAE
extractions yield similar results to sonication extractions with methylene chloride. Neither MAE
nor sonication with methylene chloride is as efficient as the Soxhlet and MAE procedures with
other solvents.  A number of compounds are not extracted efficiently (particularly strongly polar
materials such as benzoic acid and some amines and pyridines).  However, this inefficiency is
observed with both MAE and traditional extraction techniques.

The MAE extractions were carried out on soil CRM103-100 (Lot No. RQ103), which contains
15  certified compounds.   This  PAH-containing soil  sample (Fisher Scientific/Resource
Technology Corporation)  is from a Superfund site located in the western  United  States.
Extraction times of 5, 10, 20, and 40 minutes and temperatures of 40, 80, and 120°C were tested.
The optimum time/temperature combination was found to be 20 minutes at 120°C.  Under these
conditions, the average percent recovery for the certified compounds in the reference material
is 90% of the certified values with  methylene  chloride solvent,  113% with  methylene
chloride:acetone, and 109% with hexane:acetone. When 10% by weight of water is added to the
solid before extraction, the methylene chloride extraction efficiency goes up to 100%, while the
other two solvents decrease to around 80%.  Addition of sodium sulfate does not improve yields.
Experiments with different microwave power settings showed no clear trends.
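The recovery figures quoted above are simple ratios of measured to certified concentrations,
averaged over the certified compounds; a minimal sketch with hypothetical numbers follows.

    # Average percent recovery against certified values.  Concentrations (mg/kg)
    # are hypothetical, not the CRM103-100 certified data.
    certified = {"naphthalene": 32.0, "phenanthrene": 48.0, "pyrene": 25.0}
    measured = {"naphthalene": 29.5, "phenanthrene": 44.0, "pyrene": 24.0}

    recoveries = [100.0 * measured[name] / certified[name] for name in certified]
    average_recovery = sum(recoveries) / len(recoveries)
    print(f"Average recovery: {average_recovery:.0f}% of certified values")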

Recoveries of SVOCs with MAE extraction  on two quality control standards (Environmental
Resource Associates) were  comparable to those for most compounds extracted by traditional
techniques.  The low recoveries observed could be an indication of either a problem with the
MAE technique or a lack of sample stability. Extraction of PAHs from a certified American
Petroleum Institute  separator sludge (CRM101-100, Fisher  Scientific/Resource Technology
Corporation) gave compound recoveries well within certified prediction intervals.  Extraction
of PAHs from NIST SRM 1941a, however, only yields an average recovery of about 50% of the
certified value.

SUMMARY

We have investigated alternative methods for sample preparation and analysis that minimize the
production of secondary wastes.  Performance data on samples of interest have shown that these
alternative methods yield results of comparable quality to those  obtained for traditional methods.
Our work has demonstrated that flow injection coupled with direct  injection  nebulization
(FI-DIN) is less wasteful than conventional sample introduction techniques, yet critical analytical
figures of merit (precision, accuracy) are uncompromised.  Significant reductions in waste
volume from radiological analysis have been achieved by preparing samples with a high-pressure
microwave system.  In addition, we have demonstrated that alternative, non-toxic solvents can
be used for radiological analyses without compromising extraction efficiency.  Recoveries of
semivolatile organic compounds from  soil,  sediment, and sludge using microwave-assisted
extraction compare well with those using traditional extraction techniques.  Solvent usage and,
thus, waste  produced  are  decreased by an order of magnitude with microwave-assisted
extraction.
                                                 525

-------
ACKNOWLEDGMENTS

This work was performed for the U.S. Department of Energy under Contract W-31-109-Eng-38.
Thanks to Ray Lang and  Jim Thuot, who have encouraged this work.  Thanks to Cecilia
Newcomb of Lab Support, who did many of the experiments with microwave-assisted extraction.

REFERENCES

 1.  Analytical Services Program Five-Year Plan; Laboratory Management Division, Office of
    Environmental  Restoration  and  Waste Management, U.S. Department  of Energy,
    January 29, 1992.

 2.  Test Methods for  Evaluating Solid Waste;  U.S.  Environmental  Protection  Agency,
    Document SW-846, 3rd ed., Office of Solid Waste and Emergency Response: Washington,
    DC, September  1994.

 3.  Handbook of Inductively Coupled Plasma-Mass Spectrometry; K.E. Jarvis, A. L. Gray, and
    R. S. Houk, Ed.; Chapman and Hall: New York, 1992, Chapters 3 and 4.

 4.  Wiederin, D. R., Smith, F. G., and Houk, R. S.; Anal. Chem. 63,  1477 (1991).

 5.  Wiederin, D. R., Smyczek, R. E., and Houk, R. S.; Anal. Chem. 63, 1626 (1991).

 6.  Chieco, N.  A.; Environmental Measurements Procedure  Manual HASL-300,  U.S.
    Department of Energy: New York, 1990.

 7.  Sill, C. W., Puphal, K. W., and Hindman, F. D.; Anal. Chem. 46, 1725 (1974).

 8.  Yaeger, J. S. and Smith, L. L.; Waste Minimization through High-Pressure Microwave
    Digestion of Soils for Gross α/β Analyses, ANL/ACL-95-3, Argonne National Laboratory,
    in preparation.

 9.  Lopez-Avila, V., Young, R., and Beckert, W. F.; Anal. Chem. 66, 1097 (1994).

10. Pare, R. J., Belanger, J. M. R., and Stafford, S. S.; Trends in Anal. Chem. 13, 176 (1994).

11. Renoe, B. W.; Amer. Lab. 26, 34 (1994).

12. Majors, R. E.; LC-GC 11, 82 (1995).

13. Schilling, J. B. and Newcomb, C. M.; manuscript to be submitted for publication, 1995.
                                              526

-------
                                                                   73
                  TEN SURE WAYS TO INCREASE
               INVESTIGATION AND CLEANUP COSTS


 John W. Donley, President, a priori, inc., 218 Garfield
 Avenue, Colonial  Beach,  Virginia 22443


 ABSTRACT

 Due   to  the   interdisciplinary   nature  of  environmental
 investigations, they present a unique challenge to  scientists
 and   engineers  engaged   in   these   activities.     Indeed,
 environmental professionals are often encouraged or even
 pressured to take actions  which may  test the limits of their
 experience  and  training.    This  can  present  interesting
 opportunities  and  is  a  large part  of the  allure of  the
 environmental  field.   However,  it also encourages  a  rigid
 adherence to "standard" procedures and provides opportunities
 for  the misapplication  of scientific principles  which are
 beyond the individual's experience.  This,  in turn, increases
 the cost of environmental investigations and cleanup projects.

 This paper will examine ten of the most common data  collection
 and  interpretation problems  the  author has identified  in
 providing "second   opinions"   on  more  than  two  hundred
 environmental  investigation work plans and reports.  It will
 include a brief description of each problem, case  studies to
 illustrate, and  recommendations  on how to  avoid or overcome
 the  problem.   The  paper should provide a useful  guide for
 facility managers and regulators who are tasked with reviewing
 and authorizing environmental investigations.  It may also be
 helpful for consultants, contractors, and others involved in
 planning  and implementing  these investigations.

 INTRODUCTION

 Nearly everyone involved in environmental remediation projects
 seems to  feel  that the  costs are  too  high.   Some  blame the
 lawmakers for  creating unnecessary administrative burdens in
 the legislation.  Others  blame the regulators for promulgating
 complex regulations with unnecessary  bureaucratic procedures.
 Still others blame  the legal system for taking  advantage of
 the  complexities.   While the  topic makes  for  stimulating
 conversation,  those of us  who  are most directly affected --
 the  environmental  professionals  who  work  for  industry,
 regulatory agencies,  and consulting  or contracting firms --
 are seldom in a position to do much about the problem.  Still,
we  complain with  increasing  levels  of  frustration  about
 circumstances which are  largely beyond our control.
                                  527

-------
While we may have  little  influence over the processes through
which  environmental  projects are  conducted,  we  often have
significant control over  the costs. Regardless of whether the
administrative and legal components of a particular regulatory
program are overly complex and largely unnecessary, the fact
remains that the  largest portion of  the  funds  expended on a
typical  environmental  investigation  or  remediation project
goes to contract field and laboratory services.   And most of
the  participants  in a typical  project  have  at  least some
influence over the scope  of these activities.

This paper explores how our training,  attitudes,  preferences,
and   prejudices   influence   the  cost   of   environmental
investigations and of the decisions that ultimately determine
the  cost  of the final remedy.   As the  title suggests,  the
paper describes ten issues often encountered in environmental
projects which may have a  dramatic impact  on the investigation
and cleanup costs.  These issues are  related more to the way
we think about  environmental investigations than to our choice
of sampling or analytical techniques.

CAUSES

It is not a goal of this paper to explore  the  causes  of the
various actions that might lead to increased investigation and
cleanup costs.  Still, the subject is of more  than academic
interest  to  those who  are ultimately  responsible for  the
success or failure of  a particular project.  To the extent we
are  able  to  understand  and  influence  the  causes,  we  are
obviously better equipped to control the costs.  Consequently,
the subject deserves at least some mention.  Conveniently, the
majority of the activities  that have the most dramatic impact
on investigation and cleanup costs also seem to have  one or
two common causes.

Many  appear   to  be  related  to  the  differences  or,  more
properly,   to  the  investigator's  failure  to  recognize  the
differences  between  the  methods   of   inquiry   that   are
appropriate in a theoretical or academic setting, versus those
that  are  more properly used  in  the applied  sciences.   As
students,   we   are taught  that  science  is  a  process  of
discovery.  We learn about  the most  significant achievements
in our particular field of  endeavor  and we  are  encouraged to
expand this knowledge  base  for the benefit of humankind.  But
few scientists leave college with any real  understanding of
how to apply their knowledge  outside  of  an  academic setting;
this type  of training seems to be  reserved  for  engineers and
technicians.   As a result, scientists are  tempted  to  borrow
techniques from the academic world and to apply these methods
to virtually any  problem they encounter  in  the  "real world."
Unfortunately, these  two  worlds  operate  under  different
                                  528

-------
priorities.  And, these  differences  are often manifested as
increased costs.

The second most  common cause seems to be related to the extent
to  which  environmental  science  is  distinct  from  other
scientific specialties.   Due to the interdisciplinary nature
of  the  environmental field,  environmental  investigations
present a unique challenge to scientists and engineers engaged
in these activities.   Indeed, environmental professionals are
often encouraged or even pressured to take actions which may
test the  limits of their experience  and training.   This can
present interesting opportunities and is a large part of the
allure  of  the  environmental  field.    However,  it  also
encourages a  rigid adherence  to "standard"  procedures  and
provides  opportunities for  the misapplication of scientific
principles which are beyond  the individual's  experience.
This,   in  turn,   increases   the  cost  of   environmental
investigations and cleanup projects.

THE PROBLEMS

Inappropriate Statements of Objectives.  Every investigation
that has  the potential  to lead  to  costly cleanup  actions
should begin with a clear statement of objectives to provide
a target for the sampling approach.  Put simply, the sampling
objectives are a statement  of the problem that must be solved
or,  in scientific  terms,  the  hypothesis the  investigator
intends to  test.   Unfortunately,  the majority  of  sampling
plans  fail to  accomplish  this  simple task.    Failure  can
usually be traced to one of two problems.  Either the author
of the plan is confusing the "what" or "how" of the sampling
activity with the "why," or he really doesn't understand the
objectives of  the study.   In the  first  case,  the  stated
objectives may read something like this:

     "The objective of  this study is to  drill a monitoring
     well network around the landfill  and to sample the wells
     quarterly for heavy metal contaminants."

Here we are told "what" will be done, but given no indication
as to "why" the study is being conducted.   The goals of the
study are not provided.

On the other hand,  an  investigator who knows what a statement
of  objectives   should look  like, but who  doesn't  really
understand the objectives of a particular study,  might say:

     "In this study,  we  will determine whether contamination
     is present by  collecting  and  analyzing  ground-water
     samples, using statistical methods to compare background
     values with the area of suspected contamination."
                                   529

-------
This is a far more subtle problem.  To most technical people
not familiar with the project,  this may seem to be an entirely
acceptable  statement of objectives.   We are told  that the
study is  intended to determine if contamination is present.
However,  that  may or may  not be the  real  objective  of the
study.  In  other words, the  goal of the investigator may not
necessarily be  the   goal  of  the program.   At  best,  the
objective stated in the  second example is probably only a part
of the program objective, which may be something like:

     "To  determine whether  releases of  hazardous  wastes or
     constituents have entered the environment at levels above
     the  federal  Maximum   Contaminant  Levels  (MCLs)   for
     drinking water  supplies."

If so, the investigator who adopts the more generic objective
of comparing sampling data to  background levels may be doing
his  client/employer  a  disservice,   since  it  is  entirely
possible  for a  constituent  to be present  above  background
levels but  below the applicable MCL.   Therefore,  a properly
crafted statement  of objectives should clearly establish the
decision criteria that will later be used to determine whether
the investigation  is a  success.

Failing to  Recognize the Role of  Experimentation.   Once we
have an appropriate statement of objectives, the next step is
to design an investigation to achieve these objectives.   In
its   most  basic  form,   this   involves   engineering   the
circumstances to test an hypothesis which has been developed
from  observation.    This  process is  usually  called  the
scientific  method.

Scientists are taught that objectivity is the most fundamental
precept of  the  scientific  method.   They must  be prepared to
take  failure in stride.   When one approach  does  not  prove
fruitful, the  scientist learns from her mistakes  and tries
again using a different approach.

But those who oversee environmental investigations conducted
pursuant to a  legal  or regulatory requirement  do  not often
respond well to failure.  They have deadlines to meet, "beans"
to count, budgets  to account for, and other projects waiting
to begin.   These  factors  impose severe limitations  on the
extent  to  which  the  investigator  should  feel  free  to
experiment.  In short, experimentation should be confined to
the project objectives.   The investigator  should  use proven
methods to  answer  a particular question or set of questions,
leaving the use  of  innovative  research techniques to  the
academic community.  Unfortunately, the investigation process
offers abundant temptations to turn even the simplest projects
into significant scientific  works.
                                   530

-------
A few years ago, a client was convinced by an eager contractor
to try a new and  improved  analytical method that promised to
provide lower detection  limits, albeit at a much higher cost.
The client viewed the method as an opportunity to demonstrate
to  the  regulators  that  the  company was  environmentally
proactive,  that their  policy  was to  exceed  the  regulatory
requirements.   As  expected, the regulators agreed to the new
approach, provided that  a  substantial  number of verification
samples  would  be  analyzed using  the  traditional  method,
further increasing the  cost.

The  method exceeded expectations.    It  provided  detection
limits in the parts per  trillion  range for constituents that
would not normally be detected below a few parts per billion
using conventional techniques.   But the  client will not be
receiving  any   awards for  innovation.   Instead, he  will be
spending  additional money conducting  a  risk  assessment to
demonstrate that  the constituents -- which could probably be
found at the same levels in the polar ice caps if anyone cared
to  look —  do not pose  a threat to  human  health  or  the
environment.

Failing  to  Consider   Site-Specific   Factors.    While  the
researcher who  operates  in a laboratory environment seeks to
control  all  of   the   conditions of  the  experiment,  the
investigator must work  with what he  has been given.   This
situation can certainly  make the  investigation more complex,
but  it has  the  advantage that  we can  use  the  available
information to  eliminate some  alternatives,  thereby reducing
the scope of work  and the  associated costs.

Unfortunately,  many investigators  fail to recognize  site-
specific  factors when   selecting  sampling  locations,  field
methods, or analytes.  They lament their inability to control
every aspect of the situation,  while failing to recognize the
corresponding advantages.  They take a formula approach to all
similar investigations.

A few years  ago, a  company in the metal finishing business
decided to  sell surplus property that had  been used  for a
small electroplating operation. A prospective buyer hired an
environmental contractor to determine whether the property had
become contaminated as a result of the  industrial activities.
At the time, the typical approach to such investigations was
to begin with a "phase I"  audit of the industrial activities
to determine whether a   sampling investigation was  warranted
and,  if  so,  to  identify prospective sampling  locations.
However, this particular contractor was  in  the  sampling and
remediation business and, apparently, had no interest  in
conducting  any  of  these  preliminary  activities.     The
                                   531

-------
contractor ignored the available site-specific information and
proceeded directly to a sampling program.

In the absence of any  historical  information to help select
appropriate  sampling  locations,  the  investigators  simply
divided the site into a grid and obtained  shallow soil samples
at the intersection of each grid line.  They were not equipped
to sample within the building, so they further limited their
sampling  locations   to  those  grid  intersections  on  the
remainder of the property.  Most of this  area was occupied by
an asphalt parking lot.

Without the benefit of historical information on past chemical
management practices, they  elected to analyze the samples for
a wide  range of  constituents.   The  result was  that nearly
every one of the  shallow soil samples  obtained  from beneath
the  asphalt parking  lot  was  found  to contain  petroleum
constituents.  As a  result, the contractor's report to their
client  recommended   a  multi-million  dollar cleanup,  at  a
property that was on the market for  approximately $800,000.
A more careful review of the  data revealed  that the samples
had contained pieces  of asphalt  pavement which was the source
of the petroleum constituents.

Using Random Sampling Inappropriately.   The investigators in
the  previous  example   should  have  used  the   available
information  to   select   biased   sampling  locations   and
appropriate  analytes.    Unfortunately,  many  inexperienced
scientists  and engineers mistakenly believe that  one  must
obtain only random samples in order to ensure the integrity of
the data.  Presumably,  the goal  is to reduce the influence of
the  investigator's   expectations  on  the  outcome  of  the
investigation.  This  is certainly  a valid concern, and random
sampling  is an essential  component  of  most  environmental
investigations.   But it  must  be used  appropriately.   It is
reasonable  to  assume  that a sufficient number  of  random
samples will provide data which  are representative of  the
population/area as a whole.  However,  when the "population" is
not correctly defined,  random sampling can unnecessarily add
to  the  investigation  costs   and   can  even   be  used  to
intentionally deceive in  a properly  crafted study.
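
As a rough, hypothetical illustration of this point (not part of any study described here), the
following Python sketch estimates how often a purely random design of n samples happens to land
in a small impacted zone covering only a few percent of the site.

    # Hypothetical illustration: if an impacted zone covers ~2% of the sampled area,
    # the chance that at least one of n purely random samples lands in it is small
    # unless n is large.  All numbers below are invented for illustration.
    import random
    random.seed(1)
    impacted_fraction = 0.02
    trials = 10000
    for n in (4, 8, 16):
        hits = sum(any(random.random() < impacted_fraction for _ in range(n))
                   for _ in range(trials))
        print("%2d random samples: P(hit impacted zone) ~ %.2f" % (n, hits / trials))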

A few years ago, I attended a workshop in which EPA personnel
were  to   learn   how  to  oversee   environmental  sampling
investigations.    Following several  days  of lectures,  the
participants were divided into  groups  and asked to design a
sampling  strategy to  determine  whether  contaminants  had
entered the soil surrounding a  concrete  container storage pad.
The workshop leader  asked my group,  in confidence, to design
a strategy  that  would be  unlikely  to achieve the  desired
results.
                                  532

-------
An experienced investigator would probably have suggested that
several random samples be obtained from biased locations which
the investigator felt were most likely to have been impacted
by a release.  These  locations would have been selected based
on runoff patterns and  evidence of staining  or  loss  of the
integrity of the concrete pad.   This program might also have
been  supplemented  with  random  samples  obtained from  the
general vicinity.

Our proposed  approach was limited  to  random sampling around
the perimeter of the pad.   I presented  this approach to the
audience,  stressing   the scientific  "validity"  of  random
sampling.   After I  finished my pitch,  the  workshop  leader
asked the participants to critique the approach.   Not one of
the  more than  100  regulators  could find  a flaw  in  our
strategy.  To  the consternation of  the leader,  they generally
agreed that ours was a valid approach, despite the fact that
not one of our random samples happened to be located in one of
the areas most likely to have been impacted by a release.

Collecting  Too  Many Samples.    Before  the  advent  of  the
personal computer,  the scientific method served as an outline
for experimental design.  It encouraged  the investigator to
collect as much  data  as were necessary to support or refute a
specific  hypothesis.     However,   the   computer   enables
investigators to screen  large quantities of data in search of
hypotheses.    This encourages investigators to collect large
quantities of  data.   Needless to say,  those who pay the bills
do not often view this as a change for the better.

About seven years ago, I  attended a workshop to find out more
about  the  application   of  geostatistics  to  environmental
investigations.   Geostatistics  have been used for many years
to  characterize ore deposits   and   for  other  traditional
geological  investigations.     They  include  a  variety  of
techniques,  most of which are based on  the premise  that the
characteristics   of   interest   vary   spatially.      Since
environmental data,  including  chemical  data, are  usually
consistent with this premise,  I  expected to acquire  some
valuable new  tools  that would  help  me  in  my work.    I  was
especially  hopeful,   given  that  the   instructors   were
consultants who  claimed  to specialize in using geostatistics
for environmental investigations of industrial sites.

It was  a three-day program.   The  morning of  the first  day
consisted of  an overview of geostatistical tools.  We were
told that one of the best uses of geostatistics  is  to help
design sampling  programs.   The  first  step  is to  obtain real
data using  random  data   collection techniques.   A  sampling
strategy is then developed  by using geostatistical  tools to
evaluate these data.
                                  533

-------
Since many environmental investigations begin and end with the
collection of  random data, I  immediately began to question
whether this approach would be cost-effective.  The bad news
came  later the same day when we  learned that,  for a simple
ground-water   investigation,   the  instructors  recommended
installing a preliminary monitoring well network consisting of
between 50 and 100 wells placed at equal  intervals  on a grid.
In theory, data  obtained from these wells  would  be used to
design a permanent monitoring well network.

At the time,  most simple ground-water investigations employed
far  fewer than  50  wells  [new  direct  sampling  techniques
developed over the  past few years have overcome some of the
obstacles, making this  method  more attractive].  The simple
fact  that  most  customers  of investigation   services  are
unwilling to pay the cost of  such an extensive preliminary
effort, forces the investigator to pursue more cost-effective
alternatives.    Not  the  least  of these  is  the  use  of
professional judgement in lieu of  random  sampling techniques,
as discussed previously.

Using Inadequate Quality Control.  Assuming the investigator
has  a clear  statement  of  objectives   and has designed  a
sampling  strategy  to collect  the right  kind and  amount  of
information to  achieve  the objectives,  the next  step  is  to
make  sure the  data  are  of sufficient quality.   Fortunately,
most  investigations  conducted  pursuant to  a  regulatory
requirement are driven by a variety of guidance documents that
specify  minimum  acceptable   quality   control  procedures.
However,  far too many investigators seem to feel that these
procedures  do  not  apply  when  the  investigation  is  not
immediately subject to regulatory scrutiny.  In addition, some
regulators apparently feel that investigations conducted by,
or on behalf  of the regulatory agency should not  be subject to
the same level of control.

The most disturbing example in the author's experience occurred
during a  Sampling Visit  conducted as part of  the RCRA Facility
Assessment  of  an  industrial   facility   in the  northeast.
Facility representatives watched in disbelief as a contractor
acting on behalf of the regulatory agency poured fuel into a
gasoline-powered auger as it sat in the borehole, then later
obtained a sample  from  that same boring to be  analyzed for
petroleum constituents.

Using   Inappropriate   Decision   Tools.     Assuming   the
investigation  is  well  planned,  carefully  executed,  and
properly documented, the next step is to  examine the data to
determine whether it supports  the objectives.   This process
typically involves the use of a decision  criterion.  Perhaps
the most  common  approach is  to  compare the  data  to  some
                                 534

-------
reference, such as a regulatory standard, background samples,
or,  in  the  case  of  naturally-occurring  substances,   the
"normal"  range  of concentrations for the  constituent in  the
medium of concern.  Each of these methods can provide valuable
insights, but the investigator must be  careful to  recognize
that each method  has  limitations.   Problems  most often arise
when the investigator allows his expectations to influence  how
he uses these decision tools.

A few years ago, representatives of a state regulatory agency
visited  one  of  our  client's  facilities  and  obtained soil
samples to determine  if an abandoned landfill  on-site should
be added to the National Priority List.  The investigation was
flawed  from the  outset.   The  strategy  was  to compare soil
samples obtained  from two  areas:  one within the landfill  and
one from  a nearby "background"  location.   Unfortunately,  the
regulators elected to sample within the upper few  inches of
the  soil  surface,  even though they had  been  told that  the
landfill  was  covered  with a one-foot thick,  natural clay  cap
comprised of soil  from an  undisturbed, wooded area of  the
facility.  Consequently, one would not expect that  they would
find any  indication of contamination.

But  that  minor obstacle did not  deter  this sampling team.
They simply  adopted an innovative way of analyzing  the data:
the  democratic  method.    Basically,  their reasoning  went
something like  this.   The background samples,  by definition,
represent the normal range of concentrations one would expect
to find in soil samples from this area [begging the question].
Therefore,  they concluded,  if  any of the samples from  the
landfill  area exceed  the highest reading from  the background
area,  the landfill must be contaminated.   The greater  the
number  of  samples that exceed background,  the  worse   the
contamination.

It seemed like a sure-fire  approach, but they still had to
look long and hard to  find a problem at this  particular site.
Eventually, they discovered that barium was slightly higher in
some of the landfill  samples than  in any of  the "background"
samples.   Even though they had  no reason  to suspect that
barium would  be present in the  landfill,  and the levels were
well within  the natural variation  in soils, the regulators
concluded that  the area was contaminated.  They apparently
ignored the  fact  that the  background samples exceeded  the
highest reading taken  from within the landfill  area for every
other constituent measured.  In other words,  if  these were
"blind" samples,  the   same  reasoning would have led them to
conclude that the background area was contaminated.
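
The weakness of this "democratic" criterion can be made concrete with a little probability.  If the
landfill and background soils were statistically identical, the chance that at least one of n landfill
samples exceeds the highest of m background samples is n/(n+m).  The Python sketch below checks
this by simulation; the sample counts and concentrations are hypothetical.

    # Hypothetical illustration: with identical soils, at least one of n landfill
    # samples exceeds the background maximum with probability n / (n + m).
    import random
    random.seed(2)
    n_landfill, n_background, trials = 10, 6, 20000
    exceed = 0
    for _ in range(trials):
        background = [random.gauss(100.0, 15.0) for _ in range(n_background)]  # e.g., barium, mg/kg
        landfill = [random.gauss(100.0, 15.0) for _ in range(n_landfill)]       # same distribution
        exceed += max(landfill) > max(background)
    print("Simulated: %.2f   Expected n/(n+m): %.2f"
          % (exceed / trials, n_landfill / (n_landfill + n_background)))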

Misuse of Statistics.   Perhaps the most common way to look  for
meaningful differences between data sets is to use statistical
                                535

-------
tools.  Most scientists and engineers had at least one basic
course  in  statistics  and  can perform  simple  statistical
calculations from memory.  In addition, procedures for using
more sophisticated or complex techniques are readily available
from a number of sources for use on a personal computer.

Entire books have been written  about ways in which statistics
can  be  used to  mislead  or  deceive.    Fortunately,  most
environmental issues which may  provide an opportunity for the
intentional  misuse of  statistics  do  not  offer  sufficient
motive  to  entice  most  environmental  professionals  into
committing  such an act.   Consequently,  when  environmental
professionals misuse statistical tools, it  is most often out
of lack of knowledge.

One of the reasons that statistics are so often misused is
that statistical procedures are so easy to use.  Most
environmental professionals,  engineers  in  particular,  are
fairly  proficient  at  mathematical   calculations.     Since
statistics  generally require  only  rudimentary math  skills,
they   pose  no   great   challenge   to  the   environmental
professional.  Therefore, difficulties most  often arise when
interpreting the results of these  calculations.

One  of  the  most  fundamental issues  is  the  concept  of
significance.  For  example, we say that  differences  between
two data sets are significant if they exceed some  statistical
threshold.   But,  what do  we  mean by  significant?   To  the
statistician, significance  is  an inherent  property of  the
numbers  we  use  to  represent  the   data.    The   particular
attribute  these  numbers  represent   is  of  little   or  no
consequence.   In  other  words, one  set  of  numbers  can  be
significantly higher than another set of numbers whether they
represent color intensity, chemical  concentrations,  or age.
But environmental  professionals  sometimes  try to  attribute
more meaning to  these differences than the statistical method
is capable of distinguishing.  In other words, they attempt to
use statistics as a surrogate for professional judgement.
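
A simple, hypothetical numerical example makes the point.  Two sets of soil samples drawn from
locations with naturally different mean concentrations will often test as significantly different; the
statistic flags the difference but says nothing about whether a release caused it.

    # Hypothetical illustration: naturally different soils produce a "significant"
    # t-statistic with no release involved.  All values are invented.
    import random, statistics
    random.seed(3)
    background = [random.gauss(120.0, 10.0) for _ in range(8)]  # mg/kg, natural variation
    tank_area = [random.gauss(132.0, 10.0) for _ in range(8)]   # mg/kg, natural variation

    def welch_t(a, b):
        va, vb = statistics.variance(a) / len(a), statistics.variance(b) / len(b)
        return (statistics.mean(b) - statistics.mean(a)) / (va + vb) ** 0.5

    print("Welch t-statistic: %.1f (|t| > ~2 is conventionally 'significant')"
          % welch_t(background, tank_area))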

A few years ago, one of our clients called in desperation.  He
had hired an engineering firm to oversee  the RCRA closure  of
a hazardous waste  storage  tank.   Consistent with the  state
regulations, the engineer in charge of the project had used a
statistical  comparison  to background  to determine  whether
releases  from the  tank had  entered  the  underlying  soil.
Unfortunately, he  had made a small but  extremely  important
error  in  identifying   sampling  locations.    Rather  than
collecting  background   samples from  locations   that  were
randomly  distributed  around   the  tank  area, he  chose  a
"background" area of approximately the same size as the former
                                   536

-------
tank pad and obtained all of the so-called background samples
from within that area.

By itself, this mistake was not  sufficient to have created a
problem.  But  it  created an opportunity for the engineer to
misuse the data. As  any soil scientist knows, measurements of
almost  any  soil  attribute  from  samples  taken  from  two
different  locations  are bound  to exhibit  some degree  of
statistically significant variation due to the high degree of
variability inherent in natural soils.  Through the  use of
statistics,  we  can  identify   these  differences,   but  no
statistical  procedure  can  tell us  the reason for  these
differences.  For that, we need  professional judgement.

As expected, the engineer in our example identified some
statistically  significant differences between  the  two data
sets.  To the practiced professional, these differences were
clearly  attributable  to  natural  variability.    They  all
involved  differences  in the  concentrations  of naturally-
occurring chemicals.   Some  of these chemicals were  found at
higher  concentrations in samples taken  from  the background
area.   Others  were  found  at higher concentrations  in the
samples taken from the tank area.  But all of the levels were
within  the  range  of concentrations that we would expect to
find  in natural soils.   In  addition,  none  of  the chemicals
found at  statistically higher levels in the tank area were
known to have  been managed  in the tank.  Unfortunately, the
engineer failed to recognize these important  factors and chose
instead   to   base   his   conclusion  on   a   fundamental
misunderstanding of the limits of the  statistical method.  In
short, he concluded  that  any  chemical found at statistically
higher  levels  in  the tank area must indicate  a  release from
the tank.   He  ignored the data showing statistically higher
levels of some  constituents in the background area.

Unfortunately,  the regulator  who reviewed the  report shared
the same misconceptions  about the limits  of the statistical
procedure.   She agreed  with the engineer's conclusion and
required  that  the  facility  "clean up the  contamination."
Following months  of  negotiations,  another  regulator finally
reversed this  decision.   But, he based the reversal  on the
fact that the  chemicals  in  question had not been managed in
the tank.  He refused to accept the  limits of the statistical
method as a valid explanation.

Failing  to  Recognize  the   Practical Limits  of  Inductive
Reasoning.    From grade  school  through  our  undergraduate
studies  we  learn   to  interpret  the  world  in  terms  of
principles, laws,  and theories.   This  approach encourages
basic deductive reasoning.    In  graduate  school, scientists
begin to focus more on inductive methods. In practical terms,
                                  537

-------
this means that we are initially taught to solve problems by
looking at data in terms of what we know about the world.  But
the more education we receive, the more we are encouraged to
look at data in terms of what  it can tell us about the world.
Both approaches are effective  when used  in the proper context
and most problems are efficiently solved using a combination
of the two.

One  of our  clients  owns  industrial  rental  property.   The
ground  water  underlying  this  property  has  come  to  be
contaminated with chlorinated  organic  solvents.  To determine
which tenant was responsible for the release, our client asked
us  to calculate  the age  of  the ground-water  plume.    We
approached the problem using classic deductive reasoning.  We
developed several methods for estimating the  age of a plume,
each based on sound scientific  and/or mathematical principles.

For example, one of our methods was based on the premise that
the age of a release  can be calculated  if:  a)  the plume is
traveling at a constant velocity,  and b) we know the distance
between the source and the location of  the plume on a specific
date.  Using field data, we were able to demonstrate that the
plume  at  this  particular site was traveling  at  a constant
velocity  and we knew the location of the plume  on several
dates.  As a result,  the argument is  inherently  valid.   The
only question then is whether our measurements  are accurate
(i.e., whether the conclusion is also  true or  correct).
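
Stated as arithmetic, the constant-velocity premise gives age = (distance from source to plume
front) / (plume velocity).  The sketch below uses invented numbers, not the actual site data, to
show the calculation.

    # Minimal sketch of the constant-velocity plume age estimate (hypothetical
    # numbers, not the actual site data).
    from datetime import date, timedelta
    distance_to_front_m = 240.0    # distance from source to plume front on the survey date
    velocity_m_per_yr = 30.0       # constant velocity inferred from repeated surveys
    survey_date = date(1993, 6, 1)
    age_years = distance_to_front_m / velocity_m_per_yr
    release_date = survey_date - timedelta(days=age_years * 365.25)
    print("Estimated age: %.1f years; release on or about %s" % (age_years, release_date))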

By combining the date of  the release calculated  using  this
method, with dates calculated  using several other methods, we
arrived at a range of possible dates.   We were  also able to
develop statistical data to identify the most likely date and
to assign probabilities  to  a  particular tenancy.

The tenant who  was  implicated through our efforts,  hired a
professor from a well-known university to render an opinion.
The  professor  used  an inductive approach.    He  began  by
examining  the  data   for  trends  or  inconsistencies.    He
discovered that  the  velocity  of the plume which would  have
been predicted from aquifer characteristics (e.g.,  pump  test
data) was greater than the  measured velocity of  the plume.

There are several possible explanations for this observation.
For  example,  the constituents  of concern  may  have  been
traveling through the aquifer more slowly  than the  water.
This is a common phenomenon known as the retardation factor.
It is also possible that  the  predicted velocity  of the plume
is less accurate since it is  based on  indirect measurements,
while the  actual velocity  of the  plume is based  on direct
measurements.
                                538

-------
However, the professor  assumed that both  the  predicted and
measured velocities  were  accurate  representations of  the
actual velocity.   He  then concluded that the differences must
reflect a gradual  change in velocity over time  (i.e., the time
between the two measurements).  He then had no choice but to
dismiss as  inaccurate the evidence that the  plume velocity had
actually been constant over time.  This reasoning led him to
develop  a  different  date for  the release that was several
months after his client had vacated the property.

The  example   illustrates   the   limitations   of   inductive
reasoning;  the argument can never be proven to be valid, but
can only be shown  to be possible.  As in the example, the data
may suggest two or more possible  conclusions or explanations.
The  academic  community responds to  this  shortcoming  by
developing  new  procedures  to  test  the  conclusion;  the
conclusion  becomes   the  hypothesis  for another  series  of
experiments.  However,  outside of the academic community, few
of us can afford to pay the cost of this research effort.

Dogmatism.   When faced with a problem, most of us respond in one
of two ways.   We  either instantly recognize  a  solution and
forge ahead, or we begin to  analyze the problem, weighing the
alternatives and  trying to  make an informed  decision.   Our
response to a particular  problem  depends on  a  number  of
factors, including  the complexity  of  the problem and  the
consequences of  an  inappropriate decision.    But  the  most
significant  factor  is  often  our   level of   comfort  or
familiarity with the subject matter.

The less knowledge we have  about a particular  subject,  the
more thought we are likely to give to a problem involving that
subject.   Conversely,  as we  become  "expert"  in  a  subject,
solutions naturally come easier.  This  phenomenon is  one of
the reasons why we employ specialists,  or  consultants,  when
faced with an important  decision  involving  a subject which is
not our primary area of expertise.

But to what extent should we rely on  consultants' advice?  The
answer is probably obvious.  We should rely  on their advice to
the extent  that we are convinced they really are expert in the
particular subject matter and to the extent that they can be
objective  in formulating an opinion.    Since  most companies
that hire outside  consultants  have procedures to assess their
level of  expertise,  the second requirement,  objectivity,
provides more opportunities  for problems.

Objectivity is influenced by  two kinds of forces.   For the
purposes of  this discussion,  I will  refer  to  them  as
"external"  and "internal."   The external  forces  exerted on
consultants are related to  their "ownership"  in a particular
                                    539

-------
problem, as well as pressures from within their own firm to
increase  the  scope  of a  project.    As a  result,  the most
objective consultants  are  often those who have the smallest
investment  in a problem,  and  the  least  to gain  from its
resolution.    As  long  as   the  client  recognizes  these
influences, he  or  she can  assign  the proper weight  to  a
particular piece of  advice.

Internal  forces  can be much more  complex.    The  previous
example  described  some  of   the  limitations   of  inductive
reasoning.  But  the deductive method also has limits.  One of
the most important  of  these  is that  it  encourages dogmatism.
As  we  increase   our  level  of  knowledge, we  also tend  to
increase our comfort level  with a particular subject.  Most of
us  are  able to  temper this  with a  healthy  amount  of self-
doubt.  But a few people are  able to  convince themselves that
they cannot make mistakes.

This attitude complements  the deductive method.  The method,
or  argument,  consists  of  one  or more  premises  and  a
conclusion.   If  the conclusion follows necessarily from the
premise(s), then the  argument  is deductive  and inherently
valid.   But a valid argument is  not necessarily true.   The
dogmatist may be inclined to overlook this  "minor"  point,
creating  all  sorts  of valid,   but  completely  inaccurate
conclusions.

In the previous  example, I described a  consulting assignment
in which we attempted  to establish the  age of a ground-water
plume.   I  described how a professor had used  the inductive
method  to derive what  I believe is  an  erroneous conclusion.
Ironically, the  professor  had also used the deductive method
earlier in the project and  had experienced problems with that
method as well.

In a preliminary report, we  had hypothesized that the plume
was  only a few  years old.   We  based this opinion  on the
relative  absence of  known degradation  products  of  the
contaminant, trichloroethene.

The professor disagreed. Apparently, in his research, he had
found no evidence that trichloroethene degrades under aerobic
conditions:   a  conclusion  derived from inductive reasoning.
He reviewed the available data for this project and concluded
that conditions  within the  aquifer  were aerobic.  He then
transformed his  inductive conclusion into one of the premises
for a deductive argument that might be restated  as follows:

     *    Trichloroethene  does  not  degrade  under  aerobic
          conditions.
     *    The conditions in  this aquifer are aerobic.
                                    540

-------
     *    Therefore,   trichloroethene   could  not  possibly
          degrade in this particular aquifer.

His  argument  was  valid  because  the  conclusion  follows
necessarily  from the  premises.   Unfortunately,  his  first
premise  was wrong.    Other  researchers have  been  able  to
demonstrate   aerobic   degradation  of   trichloroethene   in
laboratory experiments.  More  importantly, three years after
the professor gave this opinion, the  predominant chemical
within  the plume  is  no longer  trichloroethene.    At  some
locations, one of its degradation products, dichloroethene, is
present at concentrations more  than twice  as high  as  the
levels of trichloroethene.

SUMMARY -- THE CURE

The solutions  to each of the  ten common problems described
above lead fairly neatly to the following ten-step process to
control the cost of environmental investigation and cleanup
projects:

1.   Begin each project with  a  clear and accurate statement of
     objectives.

2.   Design the investigation to achieve the objectives using
     proven and widely accepted methods.

3.   When  selecting sampling  locations, field  methods,  and
     analytes, consider  site-specific factors.

4.   To  the  extent  possible,  use  existing information  to
     identify sampling locations which are most appropriate to
     achieve  the  objectives,  then  obtain  representative
     samples from those  locations.

5.   Collect   only   enough   information  to   achieve   the
     objectives.

6.   Use adequate quality  control protocols  to preserve  the
     integrity of the data.

7.   When evaluating the data  against the objectives, use an
     appropriate decision criterion.

8.   Recognize the limits of statistical decision tools.

9.   Structure your conclusions as deductive arguments,  then
     evaluate their validity as well as their accuracy.

10.  Be open-minded to new ideas and unexpected results.
                                   541

-------
74
       THE TCLP TEST FOR METALS - SELECTION OF EXTRACTION FLUID
 Stuart J. Nagourney, Nicholas J. Tummillo, Jr. and Michael Winka, New Jersey Department of
 Environmental Protection, Trenton, NJ, Frank Roethal, State University of New York at Stony Brook
 and Warren Chesner, Ph.D., P.E., Chesner Engineering, P.C., Commack, New York

       Resource recovery facilities produce ash that is a heterogeneous mixture of inorganic and
 biological materials and a variety of chemically inert substances such as glass and ceramics.  The
 decision whether a waste material will be disposed of as a hazardous or nonhazardous waste depends
 upon the results of tests by several EPA-approved analytical methods.  Since the cost for disposal of
 waste designated as hazardous is many times the cost for disposing of the same amount of
 nonhazardous material, the economic, public health, and safety realities of decisions about the
 nature of waste disposal place an enormous burden upon the validity of the test data.

       The Toxicity Characteristic Leaching Procedure (TCLP, USEPA Method 1311) for metals is
 often the determining factor in whether a solid waste will be classified as hazardous or nonhazardous.
 Cd and Pb, with regulatory limits of 1.0 and 5.0 µg/g, respectively, are the elements that often
 determine the waste characterization.  The TCLP method for the determination of metals in waste
 consists of five sequential procedures:

       1. Waste characterization (sections 7.1.1 and 7.1.2)
       2. Waste homogenization (section 7.1.3)
       3.  Selection of extraction fluid (section 7.1.4)
       4. Sample preparation (sections 7.2.10 to 7.2.12)
       5. Analysis (section 7.2.14)

       The uncertainty and variability inherent in any physical or chemical test procedure cannot be
 completely eliminated. For instance, there is always error associated with the sample preparation and
 instrumental measurement for any metal.  There are established statistical procedures to quantify,
 report and interpret these types of errors. More difficult to measure and evaluate are the uncertainties
 associated with selecting an aliquot of the waste material (#1 above) and how differences in the way
 laboratory personnel interpret and conduct sections of the TCLP protocol (#2 and #3) can affect the
 final measured analyte concentration.

       The New Jersey  Departments  of  Environmental  Protection and  Transportation,  in
 collaboration with the New York State Energy Research and Development Authority and the Port
 Authority of New York and New Jersey, are evaluating whether bottom ash from the Warren County,
 New Jersey resource recovery facility can be beneficially reused by incorporating it into asphalt to
 be used  as road paving material. This much is generally understood by the vast majority of the
 resource recovery, waste management and laboratory testing communities concerning the application
 of the TCLP procedure for metals analysis:

 -  various types of waste streams (ash) from the same waste management facility (bottom, fly or
   combined) will yield different results
 -  different mean particle sizes of the same type of ash may yield different results
                                            542

-------
- application of lime or other treatment technologies can affect TCLP results
- some portions of the TCLP method offer the laboratory analyst discretion in how the method is
  carried out
- if extraction fluid #1 is selected, bottom ash will likely test nonhazardous for Pb and Cd
- if extraction fluid #2 is selected, bottom ash will likely test hazardous for Pb and Cd

       As part of this research and development study, the following questions regarding the
application of the TCLP procedure for the determination of metals in ash from the Warren County
facility were examined:

- does combined ash behave differently from bottom ash?
- does ash with a mean particle size of <9.5 mm. behave differently than ash with a mean particle
  size of <1  mm.?
- what are the variables in the extraction fluid selection section of the TCLP procedure?
- do these affect the selection of the extraction fluid?

       Archived samples of bottom and combined ash produced at the Warren County (NJ) resource
recovery facility in December  1993, and bottom ash obtained from this facility in December 1994
were used in this evaluation study. Samples of various particles sizes, ranging from 0.375 inch mean
mass diameter to those prepared in a ball mill (<1 mm. size), were obtained.  Multiple aliquots of each
ash type were treated according to section 7.1.4 of the TCLP method that determines the selection
of the appropriate extraction fluid.  The pH was monitored at regular intervals throughout the
procedure, various methods and gradients for heating and cooling were employed, and the pH after
reaching room temperature recorded over time.  Elemental determinations were also made. Tests
were performed on aliquots of the same sample by several analysts and by different laboratories.
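
       For context, the fluid-selection step under study (Method 1311, section 7.1.4) reduces to a
pH-based decision: a small slurry of the waste is stirred in reagent water and its pH measured; if
the pH remains at or above 5.0 even after a small addition of 1 N HCl and brief heating, the more
aggressive extraction fluid #2 is used.  The Python sketch below paraphrases that logic for
illustration only; the method itself governs the actual masses, volumes, and temperatures.

    # Paraphrased sketch of the TCLP extraction fluid selection logic (Method 1311,
    # section 7.1.4).  Illustration only; consult the method for the authoritative
    # procedure.
    def select_extraction_fluid(ph_initial, ph_after_acid_and_heating):
        """Return 1 or 2 for the TCLP extraction fluid, given the two pH readings."""
        if ph_initial < 5.0:
            return 1                      # buffered acetate fluid
        if ph_after_acid_and_heating < 5.0:
            return 1
        return 2                          # more aggressive acetic acid fluid

    # Example: an alkaline bottom ash slurry that stays above pH 5.0 after acid
    # addition and heating selects fluid #2.
    print(select_extraction_fluid(11.2, 9.8))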

      Data will be presented on the results of our study and suggestions offered to the USEPA
regarding potential modifications to the TCLP method to improve data precision and ultimately the
accuracy of waste characterization.
                                               543

-------
                                    Quality Assurance

-------
75

DATA  QUALITY -- ASSESSMENT  OF DATA  USABILITY VERSUS ANALYTICAL
METHOD COMPLIANCE

D. R. Blye and R. J. Vitale, CPC, Environmental Standards, Inc., 1140 Valley Forge Road,
Valley Forge, Pennsylvania  19482.

ABSTRACT

The quality  of analytical data used throughout an investigative project is generally determined
by  assessing the data usability  and evaluating the compliance of the data with the analytical
protocol. Data usability is typically determined by assessing quantitative and  qualitative quality
control measures against predetermined criteria, collectively termed the Data Quality Objectives
(DQOs), and by determining how well the data can meet the intended use of the analytical
measurements.  Compliance to the analytical protocol is determined by evaluating the data against
contractually mandated reporting and QA/QC criteria.

In many cases, the extent and determination of the usability of the analytical data is a much more
important indicator of data quality than the contractual compliance of the analysis performed to
generate the data.   Assessing the contractual  compliance  of the  analytical  data is  fairly
straightforward, while determining data usability often requires a high degree of professional
judgement.  Lack of compliance to the analytical method may  prevent data usability from being
assessed. However, because  most environmental data users are  non-chemist professionals, far too
often contractual noncompliance is unknowingly equated to poor or unusable data.  For example,
if an organic  analysis method  blank was not performed  as  required by the method, but the
associated samples contain no positive results and exhibit excellent surrogate recoveries, then the
analysis is contractually noncompliant; however, the data usability is not impacted.  The authors
do  not want to imply that  noncompliant analytical  data  is  acceptable.  Rather,  professional
judgement must be exercised to determine if the noncompliance impacts data usability.

Conversely, compliant data may not always be usable data.  For example, a project DQO might
be to determine the presence or absence of methylene chloride at greater than or equal to 10 µg/L
in a ground water sample collected from a location downgradient of a source area. Methylene
chloride is a common volatile analysis laboratory contaminant due to its use as a semivolatile
analysis extraction solvent. The volatile analysis method blank associated with the ground water
sample analyzed for this project detected methylene chloride at 50 µg/L, which is contractually
compliant and acceptable by Contract Laboratory Program (CLP) analysis protocol.  However,
the ground water sample volatile analysis detected 20 µg/L of methylene chloride.  In this case,
the data quality and  usability  of the analysis  have not met the DQO since  the method blank
analysis suggests that the sample result may have been due to external contamination.
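
Checks of this kind can be expressed simply.  The Python sketch below applies the commonly used
convention of qualifying a result as possibly attributable to blank contamination when it is less
than roughly ten times the associated blank level for common laboratory contaminants such as
methylene chloride (five times for other analytes); it is an illustration of the reasoning, not a
restatement of the CLP guidance.

    # Sketch of a blank-contamination usability check using the common 5x/10x
    # convention.  Illustration only, not a restatement of the CLP guidelines.
    COMMON_LAB_CONTAMINANTS = {"methylene chloride", "acetone", "2-butanone", "toluene"}

    def usable_against_blank(analyte, sample_ug_L, blank_ug_L):
        """True if the sample result clearly exceeds the possible blank contribution."""
        factor = 10.0 if analyte.lower() in COMMON_LAB_CONTAMINANTS else 5.0
        return sample_ug_L >= factor * blank_ug_L

    # The case above: blank at 50 ug/L, sample at 20 ug/L -- the result cannot be
    # used to show methylene chloride is truly present at >= 10 ug/L.
    print(usable_against_blank("methylene chloride", 20.0, 50.0))   # False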

This paper will present a summary of the key  issues that must be evaluated to determine when
noncompliance to the analytical methods affects data usability.
                                              544

-------
                                                                                76
Planning for Radiochemical Data Validation as Part of the Sample and
Analysis Collection Process

David W. Bottrell, US Department of Energy, EM 263, 1000 Independence
Ave.  S.W. Washington, DC 20585-0002, Larry Jackson Ph.D., 26 Keenan
Drive, Peterborough, NH 03458, and Raymond J. Bath Ph.D., Waste Policy
Institute, 555 Quince Orchard Road, Gaithersburg, MD  20878.

ABSTRACT:
    The sample and analysis environmental data collection process requires
the coordinated efforts of many individuals.  An integral part of this
process is validation of the data including the preparation of a validation
plan.  This plan should integrate the contributions and requirements of all
stakeholders and present this information in a clear, concise format.  To
achieve this goal, the validation plan should be part of initial planning, e.g.,
the DQO (Data Quality Objective) process.  Placing validation in the upfront
planning process will ensure that data reliability and technical defensibility
are determined in a cost efficient manner.
    Radiochemical validation planning includes developing standard
operating procedures and tests for  evaluating the data for detection, unusual
uncertainty and  quality control. The validation tests of detection determine
the presence or absence of important analytes while the tests of unusual
uncertainty verify that the data are  consistent with the statistical confidence
limits for error established during  the DQO process.  The radiochemical
tests  of quality control serve two purposes. In one application, they
establish that the laboratory measurement system was in control during the
testing and that the data reporting requirements were met.  In a second
application, they demonstrate if the sample system is in control (performs
within historical limits of similar samples).
    The validation plan is an integral part of the QAPP (Quality Assurance
Project Plan) and should be included as  either a section within the QAPP or
as a stand alone document attached as an appendix.  The validation plan
should be approved by an authorized representative of the project for whom
the work is being done, the validation group performing the validation, and
any other stakeholder whose agreement is  needed (e.g., regulators) for the
assessment of the data.

INTRODUCTION
   Validation is the process  of examining  the available laboratory data to
determine if an analyte is present or absent in a sample,  and if the overall
unusual uncertainty is within project limits. Validation is frequently preceded
by verification, a related but distinctly different process. Verification
determines if the laboratory carried out all steps required by any  contractual
requirements governing the analysis  and the reporting of the data.  After data
are validated, they are forwarded to the project  staff with the validation  report.
The project staff integrates the laboratory  data,  current field information and
historical project data to assess overall data quality and use in the decision
process by comparing it to the original project Data Quality Objectives
(DQOs)  (ref. 1,2,3).  Verification and validation are the performance measures
of laboratory data quality. Validation and assessment assure the technical
strengths and weaknesses of the overall project data are known, and establish
the technical defensibility of the data.
                                          545

-------
   Environmental data operations require the coordinated efforts of many
individuals.  The validation plan should integrate the contributions and
requirements of all stakeholders and present this information in a clear,
concise format. To achieve this goal, validation planning should be part of the
initial planning process, e.g., DQO process, to assure that the data identified
as essential will be validated efficiently to determine their reliability and
technical defensibility.
   For radiochemical data validation there are three series of validation tests:
detection, unusual uncertainty, and quality control. The tests of detection determine the
presence or absence of the specified analytes, and the tests of unusual
uncertainty verify the  data are consistent with the statistical confidence limits
for error established during the DQO process.  The tests of quality control
serve two purposes. In one application, they establish if the laboratory
measurement system is in control during the analysis, and that the data
reporting requirements are met. In a second application, the quality control
tests demonstrate that  the analytical system (including sample preparation,
etc.) is in control. This means that the  total process is performed within
historical limits indicating a reasonable match among method/matrix/analyte,
and that routine expectations of data quality are appropriate.
   The verification process, completed  before the validation process,  compares
the laboratory  data package to a list of requirements associated with each
sample. These requirements  are generated by two separate activities.  The first
activity is the preparation of a contract  for analytical services. The second
activity is the development of the project sampling  and analysis plan with its
accompanying quality assurance project plan (QAPP) (ref. 4).  These two
activities determine, a priori, the procedures the laboratory should use to
produce data of acceptable quality and, in addition, the content of the
analytical data package. Verification compares the material delivered by
the laboratory against these requirements and produces a report that identifies
those requirements which were not met (called exceptions).  Verification
exceptions normally identify the following (a minimal illustrative comparison
is sketched after this list):

•  required steps not carried out by the laboratory (e.g., correction for yield,
proper signatures, etc.)
•  analyses not conducted at the required frequency (e.g., blanks, duplicates,
spikes, etc.)
•  procedures that do not meet pre-set acceptance criteria (e.g., laboratory
control sample recovery, etc.)
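
   As a minimal, purely illustrative sketch of this comparison (the requirement
names, data-package fields, and acceptance limits below are hypothetical and
would come from the analytical services contract and the QAPP, not from this
paper), verification can be viewed as checking each delivered data package
against a checklist and reporting the requirements that were not met:

# Hypothetical sketch of the verification step: compare a laboratory data
# package against a priori requirements and report exceptions (requirements
# not met). Field names and acceptance limits are illustrative only.
def verify_data_package(package: dict, requirements: dict) -> list:
    # Return the list of verification exceptions for one data package.
    return [name for name, check in requirements.items() if not check(package)]

requirements = {
    "yield correction applied": lambda p: p.get("yield_corrected", False),
    "required signatures present": lambda p: p.get("signed", False),
    "method blank analyzed with batch": lambda p: p.get("n_blanks", 0) >= 1,
    "LCS recovery within limits": lambda p: 75 <= p.get("lcs_recovery_pct", 0) <= 125,
}

package = {"yield_corrected": True, "signed": True, "n_blanks": 0, "lcs_recovery_pct": 68}
print(verify_data_package(package, requirements))
# ['method blank analyzed with batch', 'LCS recovery within limits']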

   The radiochemical validation process begins  with a review of the
verification report and laboratory data package to rapidly screen the areas of
strengths and weaknesses of the data set (i.e., tests of quality control).  It
continues with objective testing of environmental sample data to confirm the
presence or absence of an analyte (tests  of detection), and to establish the
unusual uncertainty of the measurement process for the  analyte (tests  of
unusual uncertainty).  Each data point is then assessed as to its integrity and
dependability in the context of all available laboratory data.

VALIDATION PLAN
   The validation plan is an integral part of the  QAPP and should be  included
as either a section within the QAPP or as a stand-alone document attached as
an appendix (ref. 5).  The validation plan should be approved by an authorized
representative  of the project for whom the work is  being done, the validation
group performing the validation, and any other stakeholder whose agreement is
needed (e.g., regulator).
   Identification of key analytes and samples that drive the project decisions is
part of the validation plan.  In addition, the plan should define the association
of required quality  control samples with project environmental samples. For
projects with large  numbers of samples relying on manual validation of data,
the plan may identify a statistically derived subset of samples used to
estimate the reliability of the larger data set.  This  will result in significant
cost savings.  As automated systems are developed, this strategy should be
dropped in  favor of validation of all samples  because the cost advantages of
smaller validation sets will be eliminated.
   During the validation planning process, planners should identify those
samples/data sets that have less rigorous standards  for data quality and
defensibility.  The plan should then specify that fewer validation tests be
applied to those sets of data or establish relaxed performance criteria.  Site-
specific data validation guidelines should establish a protocol to prioritize the
data validation requirements (i.e., which validation tests are most important).
This can eliminate unnecessarily strict requirements that commit scarce
resources to the in-depth  evaluation of data points  with high levels of
acceptable unusual uncertainty. For example, results very much above or
below an action level may not require rigorous validation; even relatively
large unusual uncertainty would not affect the ultimate decision or action.

   The data validation plan should:
•  provide  sufficient detail about the project  technical and quality objectives in
terms of sample and analyte lists, limits of detection for the analyses, and level
of acceptable unusual uncertainty on a sample/analyte specific basis (where
appropriate);
•  specify the necessary validation tests (quality control, detection, and
unusual uncertainty) and performance criteria deemed appropriate for
achieving project objectives; and
•  assure that qualified data are properly identified and documented.

   The data validation plan should include the following sections:
   title  and approval sheet,
   table of  contents,
   distribution list,
   quality objectives  and criteria for measurement  data,
   validation narrative,
   requirements for verification, validation and reconciliation with DQOs,
   reporting,
   training  requirements/certification, and
   documentation and records.

   A section of the data validation plan should specify the following  technical
and quality objectives:
•  the level of measurement system performance (tests of quality control),
•  regulatory decision level and desired analytical measurement level (tests of
detection), and
•  level of  analytical unusual uncertainty at the analytical measurement level
(tests of unusual uncertainty).

   A section of the data validation plan should address the validation tests,
including:
•  the quality control samples that apply to the validation effort,
•  the specific quantitative validation tests to be used, and
•  the statistical confidence intervals and/or fixed limit intervals applied to
each of the validation tests.

   The reporting and documentation section identifies the priority rating
system applied to the set of validation tests used to qualify specific data. This
system provides guidance to the validator concerning which of the quality
issues (i.e., validation tests) are considered the most important in determining
data reliability.  At one extreme, this system can be very  prescriptive and
assign scores and weighting factors for each validation test and a method of
summing the results to determine which, if any, qualifier should be used. At
the other extreme, the validation plan can rely solely  on the professional
judgment of the validator to determine the qualifier.   When deciding which
system to use, the planners  should attempt to devise the least prescriptive
approach that would allow two qualified and independent validators to  reach
similar conclusions about the data.
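
   As an illustration only (the weights, scores, and qualifier thresholds below
are hypothetical placeholders that a project validation plan would define; they
are not prescribed by this paper), a prescriptive priority rating system might
sum weighted validation-test scores to select a qualifier:

# Hypothetical weighted-score approach to assigning a data qualifier from the
# outcomes of the individual validation tests; weights and cutoffs are
# placeholders a real validation plan would set.
def assign_qualifier(test_scores: dict, weights: dict) -> str:
    # test_scores: 0.0 (clean pass) to 1.0 (clear failure) for each test.
    total = sum(weights[test] * score for test, score in test_scores.items())
    if total < 0.2:
        return ""    # no qualifier; data acceptable as reported
    if total < 0.6:
        return "J"   # estimated value
    return "R"       # rejected

weights = {"quality control": 0.4, "detection": 0.4, "unusual uncertainty": 0.2}
scores = {"quality control": 0.0, "detection": 0.5, "unusual uncertainty": 1.0}
print(assign_qualifier(scores, weights))   # 'J' (weighted total of 0.4)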
   The plan should identify documentation and records which should be
included in a validation report for the project or task.  The reporting format
should also be specified.  Disposition requirements for records and documents
related to  the project should be specified.
   The validation plan should identify procedures for non-conformance
reporting which detail the means by which the laboratory communicates non-
conformances against the validation plan. This should include all instances
where the a priori  analytical data requirements  and validation requirements
established by the DQO process and validation plan,  respectively, cannot be
met due to sample matrix problems and/or unanticipated laboratory issues (i.e.,
loss of critical personnel or equipment failure).

CONCLUSION
Data validation is part of the overall data collection process that accompanies
most environmental decisions.  The primary reason data validation is
performed is to provide data of known quality and technical defensibility that
can be integrated with other sources in a final assessment supporting a
decision. Project-specific data validation requirements, both for the samples
that drive a decision and for any statistically derived set of samples used to
estimate the reliability of a larger data set, are decided upon during the DQO
process. These requirements are documented in the data validation plan. If the
requirements are too stringent or extensive, the process may commit resources
to the evaluation of inconsequential variables. The strategy developed during
data validation planning is essential to support the acceptable use and
integration of field screening and analytical approaches with more expensive
and cumbersome laboratory measurements.  The acceptance and integration of
alternative screening and measurement techniques is a key component of
design optimization (e.g., the DQO process) and cost-effective environmental
program decisions.

DISCLAIMER
   This paper was prepared as  an  account of work sponsored by an agency of
the United States Government. Neither the United States  Government  nor any
agency thereof, nor any of their employees, makes any warranty, express or
implied, or assumes any legal liability or responsibility for the accuracy,
completeness, or usefulness of any information, apparatus, product, or process
disclosed, or represents that its use would not infringe privately owned rights.
Reference herein to any specific commercial product, process, or service by
trade name, trademark, manufacturer, or otherwise does not necessarily
constitute or imply its endorsement, recommendation, or favoring by the
United States Government or any agency thereof.  The views and opinions  of
authors expressed herein do not necessarily state or reflect those of the United
States Government or any  agency thereof.

REFERENCES
1. "Guidance for Data Quality Assessment", EPA QA/G-9,External Working
Draft,  March 1995.
2. "Guidance for the Data Quality Objectives Process",EPA QA/G-4
(Final),September 1994.
3. "Data Quality Objectives for Superfund", EPA/540/G-93/071 Interim Final
Guidance, Publication 9355.9-01,September 1993.
4. "Requirements for Quality Assurance Project Plans for Environmental Data
Operations", EPA QA/R-5, Interim Final Draft, January, 1994.
5. "Performance Objectives and Criteria for Conducting DOE Radiochemical
Data Validation", DOE EM-263 (Draft), February, 1995.
 77
          A New Calculation Tool for Estimating Numbers of Samples
L. H. Keith, G. L. Patton, D. L. Lewis, P. G. Edwards, and M. A. Re, Radian Corporation,
P. O. Box 201088, Austin, Texas 78720-1088

ABSTRACT
Some of the most frequently asked questions involving environmental sampling and analysis
are: (1) What kinds of QC samples are needed? (2) How many QC samples are needed?  and,
(3) How many environmental samples are needed? Answers to the first question are facilitated
by using an inexpensive expert system which is part of "Practical QC" (a program available
from ACS Software). Answers to the other questions are  derived by statistical equations and
your specific requirements (i.e., Data Quality Objectives). However, although the equations
have been known for years, they are not frequently available in a convenient form for use by
chemists, project managers, samplers, regulators and others, who would use this information
more often if they could understand it and if it were in an easily used form. "DQO-PRO" is a
series of programs with a user interface like a common calculator, and it is accessed using
Microsoft® Windows™. DQO-PRO provides answers for three objectives: (1) determining
the rate at which an event occurs, (2) determining an estimate of an average within a tolerable
error, and (3)  determining the sampling grid  necessary to detect "hot spots". DQO-PRO
facilitates understanding  the significance of DQOs by showing the relationships between
numbers of samples and DQO parameters such as (1) confidence levels versus numbers of false
positive or false negative conclusions; (2) tolerable error versus analyte concentration, standard
deviation, etc., and (3) confidence levels versus sampling area grid size.  The user has only to
type in his or her requirements and the calculator instantly provides the answers. For example,
if you provide  numbers of samples that you have (or  plan to take), the calculator estimates
various confidence levels or, if you provide confidence levels (as part  of your DQOs), the
calculator estimates the numbers of samples you'll need  to  obtain  those confidence levels.
Switching between numbers of samples  and  DQO parameters such as confidence levels,
standard deviations, tolerable errors, etc. is accomplished by simply leaving blank the parameter
to be calculated or by selecting a button on the calculator.  Help in the form of definitions and
guidance for using the calculator is provided in hypertext windows and  also in more detailed
help files. When used in conjunction with newly introduced QC Assessment Kits that contain
blanks and certified matrix spiked material, the program can effectively help project managers
and data users make informed decisions and improve the planning process. The key for cost
effective use is not to spend more money on more QC samples but rather to use those QC
samples  already available (or being planned) as part of a statistical population of QC samples.
The program is free.
Introduction
The purpose of environmental sampling and analysis is to assess a small,  but informative,
portion of a population and then draw an inference about that population from the data
gathered. There are an almost infinite number of samples that could be taken at any given site,
so environmental samples must be collected in such a way as to be representative of the
environmental area of interest. Typically, environmental samples may be taken from matrices
that include water (surface waters, drinking water, ground water, industrial wastewater, etc.),
soils, aqueous sediments, vegetation, air, or manufactured products  (e.g., paper, waste oils,
etc.). Quality  control (QC)  samples are used to provide an assessment of the kinds and
amounts of bias and/or imprecision in the data that are obtained from the environmental samples.
Thus, QC samples are used to assess the collection and measurement system in a similar way
that environmental samples are used to assess the portion of the environment from which they
come. Therefore, representative environmental samples are collected and analyzed to form
conclusions about a particular site, and representative QC samples are analyzed to form
conclusions about the system that measures the environmental samples. This similarity in
environmental sample usage and QC sample usage is often not appreciated or even recognized.

There are many different types of QC samples, and each  is designed for a specific purpose.
Some provide an assessment of bias while others provide an assessment of imprecision.  In
addition, some are designed to assess laboratory-based variability and others are designed to
assess overall variability (both sampling and analysis). An expert system named "Practical
Environmental QC Samples" (1) provides answers for the question of what kinds of QC
samples to use for specific purposes but it doesn't calculate how many QC samples are needed
to assure specific confidence levels. A new computer program named DQO-PRO complements
Practical Environmental QC Samples  and calculates the numbers of samples (both QC
samples and environmental samples) needed to resolve individual project needs. For example,
DQO-PRO calculates numbers of samples needed to assure, at a selected confidence level, that
a localized area of contamination ("hot spot") is  not missed. It also calculates numbers  of
samples needed, at a selected confidence level, to estimate the average concentration of a
pollutant in samples and the standard deviation or the relative standard deviation (coefficient of
variation) of the method used for its analysis.

The "calculators" in  this software tool are provided to assist the sampling  design stage  of
project planning.  The  calculators were  designed to specifically help with the final step
(optimize the design  for collecting data) of EPA's Data Quality Objectives (DQO) process.
The DQO process is a structured way to plan data collection efforts.  It was developed by the
U.S. EPA Quality Assurance Management Staff (QAMS) to help decision makers define the
specific questions that a data collection effort is intended to answer, identify the decisions that
will be made using the data, and define the allowable risk of decision errors in specific,
quantitative terms.
The DQO process comprises seven steps:

1.   State the problem;
2.   Identify the decision;
3.   Identify input to the decision;
4.   Define the study boundaries;
5.   Develop a decision rule;
6.   Specify limits on decision errors; and
7.   Optimize the design for collecting data.

This results in qualitative and quantitative statements  that pinpoint specific study objectives,
define the types of data needed, define the statistical  populations the data are considered to
represent, and specify tolerable risks for false positive  and false negative decision errors.  The
calculators help the user evaluate these statements of need by determining the number of
samples needed to meet  three different types of study objectives.  Used iteratively,  the
calculators will help  optimize the sampling design used to complete a study.  The three
objectives covered by these calculators are to:

1.   Determine whether the frequency with which a characteristic occurs in a population
     exceeds some frequency of concern (e.g., determine whether the frequency of false positive
     measurements due to laboratory contamination exceeds 5%); [Success-Calc]

2.   Estimate the average concentration of a target analyte in a specific medium (e.g.,  the
    average concentration of a target analyte in water or soils at a site); [Enviro-Calc] and

3.   Determine if at least one localized area of contamination (a "hot spot") of a given size and
    shape exists at a site [HotSpot-Calc].
       Initial DQO Inputs
The initial inputs include a concise statement of the problem which is being addressed, the
decision(s) that will be  made based  on the results of the  study, and all  of the important
parameters that are needed in order to make the decision(s). Parameter inputs may include
decisions such as a list of analytes, types of sample containers needed, sample preservation
requirements, analytical methods that can be used, types of QC samples needed, etc.

       Define the  Study Boundaries
The fourth step of the DQO process is to identify the boundaries of the study. This involves
not only defining the physical boundaries of the site being investigated, but also the boundaries
of the inference space, that is, defining the conceptual population represented by the sample
data. Defining the boundaries of the study, however, goes beyond defining the physical
boundaries of the site.  It also  includes defining temporal boundaries, i.e., considering  and
addressing the potential impacts of seasonality or other time-related considerations and how
these will be addressed in the data collection process.

One of the fundamental ideas that must be kept in mind when defining the boundaries of a
study is  that the  decisions made ultimately rest  on inference.  Although we talk about
measuring the concentration at a site and basing our decisions on these measurements, what we actually do is
make decisions on the basis of inferences that are, in turn, based on estimates.  When we
analyze a sample, the result obtained is only one result out of a theoretically infinite number of
possible results for a theoretically infinite number of possible analyses of that sample.

       Decision Rule
The decision rule is a summary statement that defines how a decision maker expects to use
data to make the decision(s) identified in DQO Step 2.  In  the same way that multiple
decisions, for example, might pertain to multiple areas within a site, there also may be (and
often are)  multiple decision rules for different  areas of the site or for different pollutants.
Development of the decision rule involves the following three steps:

1.   Specify the parameter that characterizes the population of interest;

2.   Specify the action level for the study; and

3.   Develop an "if...then" statement that describes the decision rule in terms of alternative
    actions.

The parameter characterizing the population of interest is a statistical parameter, such as the
mean or 90th percentile or upper tolerance limit, for a  particular analyte or measurement
characteristic.  For the calculators programmed in two parts of the software tool (Success-
Calc and HotSpot-Calc), the parameter of interest is the individual measurement result for
each sample or grid point.  For the other calculator (Enviro-Calc), the parameter of interest is
the average concentration (e.g., the average concentration of a target analyte over the entire
sampling site).
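
As a purely illustrative example (the analyte, units, and action level are hypothetical), an
"if...then" decision rule of the kind developed in this step can be written down directly:

# Hypothetical decision rule for one area of a site. The parameter of interest
# is the estimated mean concentration of a target analyte; the action level is
# a project-specific threshold chosen here only for illustration.
def decision_rule(estimated_mean_mg_per_kg: float,
                  action_level_mg_per_kg: float = 1.0) -> str:
    # Return the alternative action implied by the decision rule.
    if estimated_mean_mg_per_kg > action_level_mg_per_kg:
        return "Mean exceeds the action level: plan remedial action."
    return "Mean does not exceed the action level: no further action."

print(decision_rule(1.7))   # exceeds the hypothetical action level
print(decision_rule(0.4))   # does not exceed it
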
       Specify Limits on Decision Errors
As noted in the above discussion on defining study boundaries, decisions about a site ultimately
rest on estimates of parameters of statistical populations.  The true average concentration at a
site is not known and  is not knowable because it  is the mean of an infinite  population.
Therefore, decisions based on the average site concentration must be made using estimates of
the true site average, developed on the basis of limited sampling data for an infinite population.
This introduces sampling error into the estimate that is used as the basis for decision making.
These estimates, which are based on measurement data,  also have an inherent uncertainty
associated with them because of random and systematic errors in the measurement process.
These elements of uncertainty reflect measurement error. Because the decisions are based on
estimates that contain  inherent uncertainty,  there is always some risk of error in  the final
decision.

For any binary decision, that is, a decision for which there are two possible outcomes, there are
two  ways to make a correct decision and two ways to make an  incorrect  decision.  The
comparison of an average site concentration with an action level is  one example of a binary
decision. The two possible decisions are that the site exceeds the action level or that it does
not.  If the true (but unknowable) site concentration does not exceed the action level, and if our
estimate leads us to the decision that the site concentration does not exceed the action level,
then we have made  a correct  decision.   Likewise, if the true  (but unknowable)  site
concentration exceeds the action level, and our estimate of the site concentration leads us to
conclude that  the site concentration exceeds the action level, then we have made the other
possible correct decision.   Thus, there is one  possible correct decision for each of the two
possible states of nature.

There is also one possible incorrect decision for each of the two possible states of nature. If the
true  (but unknowable)  site concentration does not exceed the action level, but our  estimate
leads us to the decision that the site concentration does exceed the action level, then we have
made an incorrect decision. Likewise,  if the  true (but still unknowable) site concentration
exceeds the action level, and our estimate of the site concentration leads us to conclude that the
site does not exceed the action level, then we have made the other possible incorrect decision.

These two types of decision errors are commonly referred to as false positive errors and false
negative errors. To reduce the risks of false positive and false negative errors, the study design
must include sufficient data collected in a statistically sound manner to adequately estimate the
population parameter used as a basis for decision-making.   Uncertainty due to sampling error
can be reduced by collecting large numbers of samples.  Uncertainty due to measurement error
can be reduced by using more precise  and  accurate analytical methods and  by performing
multiple analyses of each sample and averaging the results.  However, reducing uncertainty and
the associated  risks of decision errors increases the costs of collecting data. Therefore, one of
the most important steps of the DQO process is the sixth step, in which the acceptable risks of
the two  types of decision errors are established.

       Optimize the Design
The  seventh and final  stage of the DQO  process is to develop  and optimize the sampling
design.  This involves  integrating the output of the previous six steps into the most cost-
effective data collection design that satisfies the DQOs.  At this step the final sampling design is
developed and the number of samples to be collected is defined. For this step the DQO-PRO
calculator tool is most helpful.
Three sampling models are addressed in DQO-PRO.

1.  When using Success-Calc to determine if the frequency of some characteristic in a
    population (such as a false positive or negative rate or the percent of a site which is
    contaminated) exceeds a limit or a frequency of concern, the number of samples required is
    driven by the confidence that the user desires to correctly conclude that the true frequency
    of the population exceeds the limit.  Also, the number of samples is driven by the decision
    rule used to claim that the true frequency exceeds the limit.  The minimum number of
    samples needed is always associated with a decision rule that does not allow any samples to
    contain the characteristic of concern (e.g.,  the analytical results for method blank samples
    cannot report a hit for any target analyte in order to be able to conclude that the true
    frequency of laboratory contamination is less than X% for that analyte). As the decision
    rule allows for more samples to contain the characteristic of concern (for example, 1 or
    more false positives) in the process, the number of samples needed to make a decision with
    the specified confidence increases. Given these general design considerations, the sampling
    design needed to meet DQOs can be developed and optimized.

2.  When estimating the average concentration of a  target analyte with Enviro-Calc the
    number of samples is driven by the magnitude of error that can be tolerated in the estimate
    of the average. Also, the number of samples needed is driven by the amount of confidence
    the user desires in the estimate of the average within the tolerable error.

3.  In HotSpot-Calc the number of samples will be driven by the size of the hot spot that it is
    desirable to detect and the allowable error in missing the hot spot.  Optimization of the
    sampling design for this model involves balancing total sampling and analysis costs (number
    of samples) against the size specified for a hot spot, the shape of the hot spot, and the
    acceptable risk of a false negative error.
Sampling Design
The objective of sampling is always to gather information that will allow us to answer some
question or questions  about a particular statistical  population. In many cases, sampling
objectives can be defined in terms of one of three basic conceptual sampling models:

1.  Sampling to determine if the frequency of some characteristic exceeds a limit (e.g., the
    percent of a site contaminated or the percent of measurements that are false-positives
    because of laboratory contamination).

2.  Sampling to estimate the average concentration of some target analyte; and

3.  Sampling to estimate the minimum size of a "hot spot" that is acceptable to  be missed.
The first question that must be answered when developing a conceptual model for a particular
sampling application  is whether the pollutant or pollutants of interest are expected to be
distributed over the entire site or localized in "hot spots." Hot spots are most often associated
with spills, leaks, or other similar point sources of relatively nonmobile contaminants. Hot spot
sampling is used when the  objective is to find these localized areas of contamination.  In this
model, the site is viewed as an area that consists of some number of discrete units, where each
unit is either contaminated  (above  some level considered "hot") or not contaminated.  If "hot
spots" are found, they may be cleaned up, and the rest of the site is typically left alone.  An
analogous sampling problem would be sampling to determine if there are any black beans in a
bowl full of red beans.

In contrast  to  the  "hot spot"  model, the  other  two conceptual models are applicable to
situations where it is  more likely that the pollutants of interest are distributed over the entire
site. In these models the objective is to determine the average concentration for the site as a
whole or to determine the percent of  a site  that is  contaminated.  Because the resulting
characterization is of the site as a  whole, the remediation strategies for these two cases also
apply to the whole site: either the whole site is  cleaned up or it is not. These sampling models
are analogous to sampling in order to estimate the average weight of the beans in a bowl.

The choice of an appropriate sampling model depends on characteristics of the site in question
and upon the contaminants of interest at that site. In many cases, and particularly when little
information  is available about the distribution of pollutant concentrations at a site during the
sampling design stage, the most  cost-effective sampling strategy will  be to use  a phased
approach. A phased approach typically involves an initial screening phase, followed by one or
more definitive  sampling phases. The design of the  screening phase will vary, depending on the
specific objectives and the specific information needed to optimize the definitive designs.
Often, the screening phase is conducted using "screening methods" for sampling and analysis.
Screening methods are usually amenable to on-site analysis, thus providing quick feedback and
low cost compared with off-site analyses.  The trade-off for screening methods is typically
lower qualitative specificity, and poorer precision  and accuracy. Because screening samples
are relatively cheap, one common use for them is to collect samples from a grid over the whole
site, and then use the data to stratify the site for subsequent sampling. Data from a screening
phase are particularly useful for developing the variability  estimates used to determine the
number of samples required for definitive sampling.
Success-Calc
One of the most basic QC data assessments is to determine the presence of false positive and
false negative measurements in environmental analytical data. An analyte  that is incorrectly
concluded to be present in a sample is a false positive; these can cause regulatory and financial
consequences for a laboratory's clients. One cause of false positives is mistaking interfering
analytes for the target analytes. When interferents are present in a sample,
the method must be modified to eliminate them, but when they are present in the materials used
to prepare or analyze samples (e.g., bottles, solvents, reagents, filters, columns, detectors, etc.),
their sources must be determined and the interferent removed if possible. Various kinds of QC
samples (e.g., as determined  from the Practical QC (1) program) can be used to determine
where, in the chain of events, the interferents are contributed but the first step is to recognize
their presence. Method blanks, which consist of a blank  matrix similar to the samples, but
without the target analytes, are used to determine overall  if false positives are present in the
materials and/or the process used to prepare and analyze samples (but they don't identify the
source of error).

A false negative occurs when  an analyte is concluded to be absent in a sample while, in reality,
it is present at detectable levels. False negatives commonly  occur from poor recovery of target
analytes from a matrix, or from interferences that mask the target analytes. They are especially
troublesome to government and regulatory personnel and also to scientists who work with risk
assessments because they result in pollutants being concluded to be absent when, in fact, they
are present.

Most environmental analyses are conducted in  "batch"  modes to facilitate  cost effective
analyses. In doing so, one method blank (also called a lab blank) and one or two method spikes
(or matrix spikes) are typically analyzed along with about 10 to 20 environmental samples. The
resulting data for all of the environmental samples in that batch are accepted or rejected on the
basis of those QC samples.

When used this way, the QC  data of a batch does not provide a statistically sufficient  amount
of information for the environmental samples. One or two QC samples, which is how these QC
samples are grouped, does not provide enough information to predict the reliability of the other
environmental  samples that  are  grouped with  them. An implicit  assumption  that the
environmental samples analyzed in conjunction with a method blank and one or two spiked
method blanks (or matrix spikes) do not contain false  positives or false negatives because the
accompanying one or two QC samples did not contain them is not necessarily correct. Thus,
the present way of assessing QC data contains a basic flaw that is not usually recognized.

How  can  method  blanks and  method  spikes (i.e.,  spiked  method blanks) be used  as
representatives for the environmental sample population? The answer is to use a statistically
valid number of QC samples. That number depends on the Data Quality Objectives (DQOs) of
a particular sampling and analysis project. As an example,  the number of QC samples needed
can vary from 6  (for an 80% probability that the associated environmental samples will not
contain more than 25% false  positives or false negatives) to 458 (for  a 99% probability that
associated  environmental  samples will not contain more than  1% false positives  or false
negatives).

Success-Calc is designed to determine the number of samples needed to detect a specified
frequency of some characteristic occurring in the population  (e.g., the % defectives or %
contamination).   In an environmental program it can be used for  a  number of different
purposes.  It can be used to design a QA program (i.e., the number of blanks and spikes needed
to test for a percentage of problems in the sampling or analytical process) or it can be used to
design an investigation program (i.e., the number of environmental samples needed to
determine if some percentage of a site is contaminated). This calculator does NOT calculate
the number of samples needed to estimate the frequency at which  a characteristic occurs,
rather, it calculates the number of samples required to decide whether the true frequency of
occurrence exceeds some predefined frequency using a specified decision rule.

An important point to note is that many of the QC or environmental samples needed for
a statistical population are available (or can easily be made available); they are just not
presently used in this way. Thus,  increased costs  associated with large numbers of
samples may not be necessary - they may,  in fact,  be minimal  or even reduced with
proper planning. For example, consider that a method blank is typically analyzed for each
batch of samples;  this results in a large number of blank samples that may be useable for a
statistical population of a method and matrix when gathered over the  period of several  weeks
or months. The key to obtaining a statistically useable  population of sample data is that all
significant parameters that can affect  analytical method performance must remain constant.
Significant parameters include the instrumentation and method, the analyst, and the matrix.

       Approach

The approach used addresses the objective of determining whether the frequency at which some
characteristic (e.g., false-positive measurements or contamination at a site) occurs is greater
than a desired frequency.  For example, this calculator will determine the number  of samples
needed to determine  if the true  rate of  false  positive  measurements  due to  laboratory
contamination is greater than 5%  with 95%  confidence.   Three pieces of information are
needed:

1.   The frequency of concern;
2.   The confidence desired in concluding that the true rate exceeds the frequency of concern;
    and
3.   The decision rule that will be used to conclude if the true frequency exceeds the frequency
    of concern.

If the frequency of concern is less than 10%, the calculator uses an equation based on an
exponential approximation to the binomial distribution that provides an approximate
determination of the number of samples required (N). If the frequency of concern is greater
than 10%, an iterative approach is used that calculates the confidence achieved for some
specified number of samples. In this case, the equation used takes the number (N) a  user enters
and calculates the confidence (for a specified  decision rule) with which one  can correctly
conclude that the samples could come from a population that has  a higher frequency of
occurrence than desired. This approach was used instead of an exact  calculation to show the
user the tradeoffs  of modifying the numbers of samples,  the decision rule,  and the desired
confidence. It thus allows the user to evaluate if the cost of these additional samples is worth
the improved decision-making confidence.

This approach also  allows the user to  change decision rules while manipulating N and the
frequency of concern.  The decision rule is the statement of how many samples must exhibit the
characteristic of concern (e.g., target analyte  detections in blanks or environmental samples
from a site) before the user will conclude that the true frequency of this characteristic in the
population exceeds the frequency of concern. The easiest, and least expensive, decision rule is:

       If zero of the samples collected exhibit the characteristic, then the true frequency is less
       than the frequency  of concern.   If one or  more  samples collected  exhibits the
       characteristic, then the true frequency is greater than the frequency of concern.

Decision rules that allow samples to have the  undesirable characteristic, but allow the user to
conclude that the true frequency is less than that of concern, allow for "errors" due to a variety
of sources  but they also require more  samples be collected.  For example, using the above
decision rule of zero "hits" in blanks to  determine if the true frequency of false positives from
blank contamination is greater than 5%, with  95% confidence, approximately 60 method
blanks are required.  Changing the decision rule to allow one "hit" in a blank, and still conclude
that the true rate is greater than 5% if two or more blanks have hits, requires approximately 90
samples.  The equations used for this approach are presented below.

       Equations

If the frequency which it is desirable to detect is less than 10%, the following equation is used:

  n = ln(alpha) / ln(1 - Y)

  where:  alpha = 1 - the desired confidence;
        Y = the frequency to detect (this must be less than 10%).

The 10% limit is based on comments by W.G. Cochran (2). The equation itself is based on the
exponential  distribution and assumes  that the characteristic to be detected occurs very
infrequently,  as opposed to the binomial, which can tolerate any frequency from 0 to 100%.
The reference for this approach is information available from EPA  on "Xmax and the
Exponential Distribution Model in the Development of Tolerance Intervals". This information
is used in conjunction with guidance on  evaluating gas pipelines for PCB  contamination, but is
currently not published.

If the frequency which it is desirable to detect is more than 10%, we must use the binomial
equation and iteratively solve for an appropriate n. In this case:

  Pr = [n! / (r! (n - r)!)] * p^r * q^(n - r)
where n = the number of samples in a sample collection;
       r = the number of samples with the characteristic to be detected;
       p = the true percentage of the population with the characteristic to be detected; and,
       q = the true percentage of the population without the characteristic to be detected and
        q = 1 - p.

Solve for Pr, which is the probability that a sample of size n can be collected from a
population in which truly p% of the items have the characteristic and only r of the samples
show the characteristic (e.g., false positives or contamination).  Then calculate 1 - Pr, which
is equivalent to the confidence in concluding that the true rate is less than p.

The best decision rule for the user is usually that which requires the fewest samples, i.e., the
zero/one rule described earlier.  However, if a different decision rule is used, Pr is calculated for
each "r" allowed and the resulting Pr(s) summed. For example, if the user picks a decision rule
of 1 or fewer "characteristic" results passes, then we must calculate the Pr for 1 and add it to
the Pr for 0. If the user picks r = 2, we must add the Pr for 2 to the Pr for 1 to the Pr for 0 for
a total Pr. Then take 1 minus this total Pr to get the confidence.
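
The two calculations just described can be sketched in a few lines of code. The sketch below is
an illustrative reconstruction from the equations above, not the published DQO-PRO source, and
the printed examples simply echo the figures in the text:

import math

def n_samples_zero_hit(confidence: float, frequency: float) -> int:
    # Zero-hit decision rule: n = ln(alpha) / ln(1 - Y), where alpha = 1 - confidence.
    alpha = 1.0 - confidence
    return math.ceil(math.log(alpha) / math.log(1.0 - frequency))

def confidence_for_rule(n: int, r_allowed: int, frequency: float) -> float:
    # Confidence of concluding the true frequency exceeds the frequency of
    # concern when up to r_allowed samples may show the characteristic
    # (1 minus the summed binomial Pr for r = 0 .. r_allowed).
    p, q = frequency, 1.0 - frequency
    total_pr = sum(math.comb(n, r) * p**r * q**(n - r) for r in range(r_allowed + 1))
    return 1.0 - total_pr

print(n_samples_zero_hit(0.95, 0.05))     # about 59-60 method blanks
print(confidence_for_rule(60, 0, 0.05))   # roughly 0.95
print(confidence_for_rule(93, 1, 0.05))   # roughly 0.95 when one hit is allowed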

The final confidence, n, r, and p define the sampling design that will meet the user's objectives
(3). When the user implements the sampling design from this exercise, the decision based on
the results of the sampling exercise will be that either the true frequency exceeds the frequency
of concern or it does not.  If it does (i.e., more  samples reflected the characteristic than
allowed) the user  may desire  to estimate the range of true frequencies possible, given the
observed results. Or, if the number of samples with the observed characteristic was small, the
user may desire to determine what decreased confidence they have that the true rate is less than
the frequency of concern.

The last portion of Success-Calc determines the minimum and maximum percentage of the
population with the chosen characteristic given that some number of  samples  collected
indicated the presence of this characteristic.  The user enters the number of samples collected,
the number of samples with the chosen characteristic, and the confidence level that the user
desires when estimating the minimum and maximum frequency with which the characteristic
could occur.  This calculation is analogous to setting an upper and lower confidence level for a
mean (4).

The equations for  calculating the lower confidence level (LCL) and upper confidence level
(UCL) for the binomial distribution are:

       LCL = {1 + ((n - r + 1) * F(1 - alpha/2; 2n - 2r + 2, 2r) / r)}^-1

       UCL = {1 + ((n - r) / ((r + 1) * F(1 - alpha/2; 2r + 2, 2n - 2r)))}^-1
where        F = the F statistic with the above specified degrees of freedom;
              n = the number of samples collected;
              r = the number of samples with some characteristic (r in the earlier
                      equations); and,
              ^-1 = raise the result to the power of negative 1.
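
Assuming the F-distribution form given above, these limits can be computed directly; the sketch
below is illustrative (it is not the DQO-PRO implementation), and it handles the r = 0 and r = n
edge cases, which the equations themselves do not address. The example inputs are hypothetical.

from scipy.stats import f as f_dist

def binomial_confidence_limits(n: int, r: int, confidence: float = 0.95):
    # Lower and upper limits on the population frequency when r of n samples
    # showed the characteristic, using the F-distribution equations above.
    q = 1.0 - (1.0 - confidence) / 2.0          # e.g., 0.975 for 95% two-sided
    if r == 0:
        lcl = 0.0
    else:
        f_low = f_dist.ppf(q, 2 * (n - r + 1), 2 * r)
        lcl = 1.0 / (1.0 + (n - r + 1) * f_low / r)
    if r == n:
        ucl = 1.0
    else:
        f_up = f_dist.ppf(q, 2 * (r + 1), 2 * (n - r))
        ucl = 1.0 / (1.0 + (n - r) / ((r + 1) * f_up))
    return lcl, ucl

# Example: 2 hits observed in 60 method blanks, 95% two-sided confidence.
print(binomial_confidence_limits(60, 2))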
 Enviro-Calc

 Enviro-Calc is designed to determine the number of samples needed to estimate an average
 analyte concentration in site-specific media within a specified absolute or relative error with a
 specified confidence.  This calculator assumes that measurements of analyte concentration will
 be normally distributed and that a random sampling plan will be used to collect samples.  While
 simple, random sampling plans are often used in environmental investigations, the assumption
 of measurements following a normal distribution is less certain. Therefore, unless the user has
 previous information indicating that the assumption of normality is reasonable, the number of
 samples  estimated by  this calculator should  be considered to be sufficient only to  gather
 preliminary information about an investigative media.  Additional sampling may be required in a
 second or third  phase after initial data have been analyzed and the underlying assumptions
 tested.
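
 The paper does not reproduce Enviro-Calc's equation. Under the stated assumptions (normally
 distributed measurements and simple random sampling), the usual sample-size relation for
 estimating a mean within a tolerable error E at a given confidence is n = (t * s / E)^2; the
 sketch below uses that relation and is only a stand-in for whatever DQO-PRO actually computes.

import math
from scipy.stats import norm, t

def n_for_mean_estimate(std_dev: float, tolerable_error: float,
                        confidence: float = 0.95) -> int:
    # Approximate number of random samples needed to estimate a mean within
    # +/- tolerable_error at the stated confidence, assuming normal data.
    # Starts from the z-based value and refines it with t quantiles.
    z = norm.ppf(1.0 - (1.0 - confidence) / 2.0)
    n = max(2, math.ceil((z * std_dev / tolerable_error) ** 2))
    for _ in range(10):
        t_val = t.ppf(1.0 - (1.0 - confidence) / 2.0, df=n - 1)
        n_new = max(2, math.ceil((t_val * std_dev / tolerable_error) ** 2))
        if n_new == n:
            break
        n = n_new
    return n

# Example: estimate a site mean to within +/- 5 (concentration units) when the
# expected standard deviation is 10, at 95% confidence.
print(n_for_mean_estimate(std_dev=10.0, tolerable_error=5.0))   # about 18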
 HotSpot-Calc

HotSpot-Calc is designed to determine the grid spacing needed to detect the presence of a single
hot spot of a specified size and shape with a specified probability of missing the hot spot.  This
calculator is based on the following key assumptions:

•   the hot spot is circular or elliptical;
•   sample measurements are collected on a square, rectangular, or triangular grid;
•   that the definition of a "hot spot" is clear and agreed to by all decision makers; and,
•   that there are no misclassification  errors  (i.e., that there are no  false-positive or false-
    negative measurement errors).

This last assumption is the most often overlooked and requires careful
consideration of the QA program and its design to prevent misclassification errors.

The objectives of hot spot sampling are fundamentally different from the objectives of the other
two sampling  models.   Whereas the other two models focus  on  estimating the site-wide
average concentration or the percentage of an  area contaminated, the primary objective of hot
spot sampling is to pinpoint localized areas of contamination. A single site might have multiple
hot spots of different origin.
Basically, hot spot sampling involves performing a systematic search of a site for "hot spots" of
a certain specified shape and area.  The search is conducted by sampling every point on a two-
dimensional grid.  The probability of finding a hot spot is determined as a function of the
specified size and shape of the hot spot, the pattern of the grid, and the relationship between
the size of the hot spots and the grid spacing. For example, if one uses a square grid to search
for circular hot spots of radius r, the probability of locating a hot spot, if one exists, is 100%
when the distance between grid points is r.  Obviously, this probability decreases as the grid
spacing increases relative to hot spot size.

       Assumptions

The methods discussed in this section are based on those described by Gilbert (5).  They  are
based on the following assumptions:

•  A hot spot may be a surface area,  or a volume at any depth below the surface (i.e., at a
   particular soil horizon), but the surface projection of the hot spot is assumed to be circular
   or elliptical in shape;

•  Samples are collected on a two-dimensional grid of a specified pattern;

•  The distance between  grid points  is large relative to the projected surface area of the
   sample that is actually removed for analysis;

•  The criteria for defining  a hot spot are unambiguous with respect to the measurement
   method and the concentration considered "hot," and  there are no classification errors in
   applying these criteria.

Although triangular  grids have  been shown  to  give  more information than  square or
rectangular grids and are therefore recommended  as the preferred approach,  all three grid
designs are addressed.
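
Gilbert's tables and nomographs are not reproduced here, but the relationship between grid
spacing and the chance of intersecting a circular hot spot can be illustrated with a simple
Monte Carlo sketch (an illustration under the assumptions above, not the DQO-PRO algorithm):

import random

def prob_hit_square_grid(radius: float, spacing: float, trials: int = 100000) -> float:
    # Monte Carlo estimate of the probability that a circular hot spot of the
    # given radius, centered at a random location, covers at least one node of
    # a square sampling grid with the given spacing (no misclassification).
    hits = 0
    for _ in range(trials):
        # By symmetry only the hot-spot center's position within one grid cell
        # matters; the nearest grid nodes are the four corners of that cell.
        x = random.uniform(0.0, spacing)
        y = random.uniform(0.0, spacing)
        corners = ((0.0, 0.0), (spacing, 0.0), (0.0, spacing), (spacing, spacing))
        if any((x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2 for cx, cy in corners):
            hits += 1
    return hits / trials

# Consistent with the discussion above: with grid spacing equal to the hot-spot
# radius, a circular hot spot always covers at least one grid node.
print(prob_hit_square_grid(radius=1.0, spacing=1.0))   # about 1.0
print(prob_hit_square_grid(radius=1.0, spacing=2.0))   # noticeably less than 1.0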
QC Assessment Kits
When DQO-PRO is used to optimize a study design so that statistical confidence levels
planned with sampling  and analysis  projects can  be  achieved, all  significant  analytical
parameters must be maintained without change during the period of time that the QC samples
are being accumulated. Significant parameters that can affect analytical method performance
include the instrumentation, the analyst, and the matrix.

•  Changing or modifying instruments can affect instrument detection levels and many other
   measurement parameters.
•   Analysts with varying degrees of experience and different analytical techniques can also
    affect results of the measurement system.

•   Different matrices may have different artifacts, interferences, and also affect the recovery of
    target analytes differently.

Laboratories can readily document the consistent use of instrumentation and an analyst for a
given period of time or a specific project. It is more difficult and inconvenient, however, to
maintain consistency in environmental matrices over a period of time; this is especially true with
soils. Thus, a consistent source of representative matrices is also important for an assessment of
false positive and false negative conclusions from the analytical measurement system. We are
providing DQO-PRO at no cost to people who wish to use it. In addition, we have packaged
representative soils in convenient QC Assessment Kits. Using these kits provides ongoing
control of the third  major parameter (the matrix) needed  to maintain consistency among a
statistically relevant population of QC samples over time or for a project.

The QC Assessment Kits contain 10 units of conveniently packaged soil for method blanks
using any desired method for PCBs, PCDDs, PCDFs or any  other target analytes. Some QC
Assessment  Kits also contain  soils  from the identical lot of homogenized soil that are pre-
spiked with PCBs, PCDDs and PCDFs and thoroughly homogenized. Alternatively, two QC
Assessment  Kits with blank soils  can be purchased and one of them spiked with custom
prepared target analytes at any desired concentrations. The soils used  in these kits were
selected from pristine areas in North Carolina and California so they represent both East Coast
and West Coast regions. Both soils are sandy loam; this type of soil  was selected because  it
commonly occurs throughout the world and also because most organic pollutants spiked onto
this type of soil typically give average recoveries (not high as with sand and not low as with
clays).

The more kits that are used over any given time period, where all significant parameters remain
constant, the higher the statistical probability becomes that low rates of false positives or false
negatives can be identified in the associated environmental samples. Since similar QC samples
would be  analyzed anyway, analyzing a group  or  batch  of  samples  from a QC
Assessment Kit will not significantly increase costs, but it will significantly improve the
assumption of measurement process consistency  because  it removes  the  variability
associated with unknown matrices and poorly homogenized samples. Time limitations of 3
to 6 months are recommended as reasonable  lengths of time over which to  accumulate
statistical populations of QC data from these kits. Documented method parameters should be
consistent in laboratories that frequently use a given method for several weeks to several
months.  Table 1 provides an example of potential benefits, in terms of increasing statistical
confidence to detect  a low error rate, that can be gained by using QC Assessment Kits over a
controlled period of time.
Table 1  Numbers of QC Samples Versus Confidence Levels (Probability)
                 of Not Exceeding Selected Average Error Rates

Number    Number of     Confidence Level      Confidence Level      Confidence Level      Confidence Level
of Kits   QC Samples    With 20% Error Rate   With 10% Error Rate   With 5% Error Rate    With 1% Error Rate
   1           10               89%                   65%                   40%                   10%
   2           20               99%                   88%                   64%                   18%
   5           50              100%                   99%                   92%                   39%
  10          100              100%                  100%                   99%                   63%
  15          150              100%                  100%                  100%                   78%
  20          200              100%                  100%                  100%                   87%
  30          300              100%                  100%                  100%                   95%
  50          500              100%                  100%                  100%                   99%
 100         1000              100%                  100%                  100%                  100%
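
The confidence levels in Table 1 appear consistent with a zero-hit decision rule, i.e.,
confidence = 1 - (1 - error rate)^n; assuming that is the basis of the table, the entries can be
reproduced as follows (an illustrative check, not a statement of how the table was generated):

def zero_hit_confidence(n_qc_samples: int, error_rate: float) -> float:
    # Probability of seeing at least one hit among n QC samples when the true
    # error rate is error_rate (zero-hit decision rule).
    return 1.0 - (1.0 - error_rate) ** n_qc_samples

# Reproduce the first few Table 1 rows (10 QC samples per kit).
for kits in (1, 2, 5, 10):
    n = 10 * kits
    row = [round(100 * zero_hit_confidence(n, p)) for p in (0.20, 0.10, 0.05, 0.01)]
    print(kits, "kits,", n, "QC samples:", row)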
References

1. Keith, L. H. "Practical QC Samples" an expert system program in Practical QC, Instant
Reference Sources, Inc., 7605 Rockpoint Dr., Austin, TX 78731 (1994).

2. Cochran, W. G., "Sampling Techniques", John Wiley & Sons Inc., New York, 3rd Edition,
1977.

3. Grant, Eugene L. and Richard S. Leavenworth. "Statistical Quality Control", Sixth Edition.
McGraw-Hill, Inc., New York,  pp. 201-208 (1988).

4. Hahn, Gerald J. and William Q. Meeker. "Statistical Intervals: A Guide for Practitioners"
John Wiley & Sons, Inc., New York, pp. 104-105 (1991).

5. Gilbert, R.O., in  "Statistical Methods for Environmental Pollution Monitoring", Van
Nostrand Reinhold Company, Inc., pp. 119-131 (1987).
                                                                   78
              USE OF STANDARD REFERENCE MATERIALS
           AS INDICATORS OF ANALYTICAL DATA QUALITY
Ann K. Bailey, President, EcoChem, Inc., 1401 Norton Building,
801 Second Avenue, Seattle, Washington 98104; Carol-Ann Manen,
Chief, Injury Assessment Branch, Damage Assessment Center,
National Oceanic and Atmospheric Administration, 1305 East-West
Highway, Silver Spring, Maryland 20910

ABSTRACT

A large program of studies was performed as part of a natural
resource damage assessment conducted in the Southern California
Bight. These studies included biochemical and physiological
work on birds, fish, and sediments.  As part of these studies,
samples of sediments and tissues were analyzed for the presence
and quantification of dichlorodiphenyltrichloroethane and its
metabolites (DDTs) and polychlorinated biphenyl congeners
(PCBs). The analyses were performed by two different
laboratories, one analyzing tissue samples,  and the other
analyzing tissue and sediment samples, over a period of
approximately 14 months. The quality assurance program for
these analyses specified the analyses of appropriate Standard
Reference materials  (SRMs) for indication of the quality of the
analytical measurements.  The SRMs were extracted and analyzed
as part of the sample string at a rate of one for every ten
samples; analyses were by dual column GC/ECD. Fifty sediment
SRMs (SRM 1941 and 1941a) and 92 tissue SRMs (SRM 1974 and 1974a)
were analyzed for this project. Data from the analyses of these
materials were monitored on a near real-time basis to determine
if the data met the required quality control criteria of plus or
minus 30% of the National Institute of Standards and Technology
values. If results for an SRM did not meet the criteria,
corrective action for that batch of samples (n = 10) was
performed.  The use of SRMs and the near real-time assessment of
the data from the repetitive analysis of these materials were the
critical components in developing a data set that met the data
quality objectives for this project.
INTRODUCTION

Beginning in 1990, a large program of studies was performed as
part of a natural resource damage assessment conducted in the
Southern California Bight.  These studies included biochemical
and physiological work on birds, fish, and sediments. As part of
these studies, samples of sediments and tissues were analyzed
for the presence and quantification of dichlorodiphenyl-
trichloroethane and its metabolites (DDTs) and polychlorinated
biphenyl congeners (PCBs).  Because it was possible that the
results from these analyses would be used in a court of law, it
was necessary to be able to define and demonstrate the accuracy,
precision, and comparability of the analytical data.

No particular analytical method was specified to the
laboratories for extracting and analyzing samples for this
project. Instead, the Analytical Chemistry Quality Assurance
Plan (ACQAP) (Manen, 1993) for this work specified a "common
foundation".  This "common foundation" included: 1) the analytes
to be identified and quantified, 2) the minimum sensitivity of
the analytical methods, and 3) the use of calibration materials
from the National Institute of Standards and Technology (NIST).
In addition, prior to the analysis of samples, each laboratory
was required to demonstrate proficiency through the analysis of
a blind, accuracy-based material; provide written protocols for
the analytical methods to be used; calculate method detection
limits for each analyte in each matrix of interest and establish
an initial calibration curve in the appropriate concentration
range for each analyte. Each laboratory was audited once before
samples were analyzed, and once during the project to document
that the laboratory was in compliance with the ACQAP
specifications.

The laboratories were also required to demonstrate continued
analytical proficiency by the analysis of surrogates, method
blanks,  calibration checks, matrix spikes, and replicates.  The
critical on-going quality control check was the analysis of a
standard reference material (SRM).  The use of SRMs is considered
to be one of the best available approaches for decisions on the
accuracy of measurement data (Becker, et al., 1992). By
                                 566

-------
analyzing an SRM with every batch of ten samples, the SRM results
provided information regarding the successful completion of all
steps in the analytical sequence for that batch. The near real-
time monitoring of these data re-emphasized the importance of
these data to the project and allowed for cost-effective
corrective actions. Comparing the SRM data over the period of
the project demonstrated the overall accuracy and precision of
the developed data.  Lastly, use of the SRMs provided a
traceability to a national standard for the data.

METHODS

Two different laboratories performed the analyses over a period
of 14 months. One laboratory analyzed both tissue and sediment
samples; the other analyzed tissue samples only. Both
laboratories used similar methods of extraction and analysis.
Sample extraction was performed using methylene chloride,
followed by extract clean-up and fractionation using alumina and
high pressure liquid chromatography (HPLC).  Instrumental
analysis was performed using dual column gas chromatography-
electron capture detection (GC-ECD). Ten percent of the sample
analyses were confirmed by gas chromatography-mass spectrometry
(GC-MS).

The GC columns used were 30-m long by 0.25-mm I.D. fused silica
capillary columns with DB-17 and DB-5 or RTX-5 bonded phases. The
samples were analyzed for a suite of seven DDT isomers and
metabolites and 42 PCB congeners.  The data from the two columns
were reduced to one result, i.e., the data reported herein are
"merged". All results are also reported corrected for recovery
of an internal standard added to the samples prior to extraction.

Each batch of ten samples was accompanied through the analytical
process (extraction, cleanup, and quantification) by an SRM. For
the sediment samples, this was either SRM 1941 or 1941a, Organics
in Marine Sediments.  For the tissue samples, bird eggs and fish
livers, the best reference material match was SRM 1974 or 1974a,
Organics in Mussel Tissue (Mytilus edulis).  Only SRM 1941a
provided certified values for organochlorine compounds. The
other three SRMs provided non-certified or informational values
for the analytes listed in Tables 1 through 4.  These values were
                                  567

-------
obtained by NIST using solvent extraction and GC-ECD analysis
(Schantz,  et al.,  1990; Wise, et al., 1991).

Analytical results with supporting instrument read-outs were
reported to an independent data validator on a batch basis, i.e.,
ten samples with accompanying quality control data;
calibration, surrogate recovery, SRM, blanks, matrix spikes,
and replicates. Data were examined by the data validator shortly
after being reported. The ACQAP required that the laboratory
obtain SRM results within plus or minus 30% of the NIST values on
average for all analytes and that no more than 30% of the
individual analytes exceed plus or minus 35% of the NIST values.
If these criteria were not met, corrective actions, ranging from
re-injection to re-extraction and re-analysis for the entire
batch of samples,  were performed.
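
A minimal sketch (Python; the function and variable names are illustrative, not part
of the project software) of the batch acceptance test implied by these criteria:

```python
# Hypothetical helper illustrating the ACQAP SRM check described above:
# the SRM analyzed with each 10-sample batch passes only if (1) the average
# absolute percent difference from the NIST values is <= 30% and (2) no more
# than 30% of the individual analytes differ from NIST by more than 35%.

def srm_batch_acceptable(measured, nist):
    """measured, nist: dicts mapping analyte name -> concentration (ng/g)."""
    pct_diff = {a: abs(measured[a] - nist[a]) / nist[a] * 100.0
                for a in nist if a in measured}
    n = len(pct_diff)
    average_ok = sum(pct_diff.values()) / n <= 30.0
    individual_ok = sum(d > 35.0 for d in pct_diff.values()) <= 0.30 * n
    return average_ok and individual_ok, pct_diff

# If the check fails, corrective action (re-injection through re-extraction
# and re-analysis of the batch) is taken before the data are accepted.
```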

RESULTS

Results for 50 sediment SRMs (SRM 1941 and 1941a) are summarized
in Tables 1 and 2.  A concentration for PCB 66 was provided for SRM
1941, but the NIST data summary listed PCB 95 as coeluting with PCB
66. Thus, the results were not comparable to the analytical
results obtained for this project. SRM results were less than 10
times the method detection limit for 4,4'-DDT in SRM 1941, and
for PCB 95, PCB 128, and 2,4'-DDE in SRM 1941a.  Thus, these
results are not provided in the data summary.  Results for 92
tissue SRMs (SRM 1974 and 1974a) are summarized in Tables 3 and 4.
Tissue SRM results for PCB 28 were reported by NIST, but PCB 28
coeluted with PCB 31 for most project tissue sample results.
Thus, PCB 28 data were not comparable to the NIST SRM results.  SRM
results were less than ten times the method detection limit for
PCB 44 in SRM 1974a and for 2,4'-DDE, 2,4'-DDT, and 4,4'-DDT in both
SRMs.

The same logic for merging the two-column results was used for
the SRMs as for the samples.  The overall selection logic was to
report the result from the column that gave  the lowest reliable
value.  Selection of the lowest value is the same logic as used in
the EPA Contract Laboratory Program Pesticide/PCB Statement of
Work (U.S. EPA, 1991). An exception to the merge logic was made
for PCB congeners 138 and 187.  Results from column DB-17 for
                                  568

-------
these two PCB congeners showed consistently poor comparability
with NIST results.  Thus, only RTX or DB-5 data were used for the
two congeners (for both the samples and the SRMs).
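
The merge logic can be illustrated with a short sketch (Python; illustrative only,
not the laboratories' data system):

```python
# Report the lower reliable value of the two columns, except for PCB 138 and
# PCB 187, for which only the DB-5/RTX-5 result is used because the DB-17
# results compared poorly with NIST.

DB5_ONLY = {"PCB 138", "PCB 187"}

def merged_result(analyte, db17, db5):
    """db17, db5: concentrations from the two columns (None if not reliable)."""
    if analyte in DB5_ONLY:
        return db5
    reliable = [v for v in (db17, db5) if v is not None]
    return min(reliable) if reliable else None
```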

The minimum and maximum results for each analyte listed in
Tables 1 through 4 indicate whether the percent difference from the
NIST confidence interval was greater than 35% for any one analyte
in any one SRM. The range of minimum and maximum values was less
for the tissue SRMs, with fewer results exceeding a 35% difference
from NIST. The quality control criteria allowed up to 30% of
the analytes to vary by more than 35% from NIST, provided the overall
average percent difference was less than 30%. However, if an
analyte in the SRM did vary by more than 35%, the sample results for
that analyte in the associated batch (n = 10) were qualified as
estimated (J). DDD and DDE concentrations were
significantly greater in the samples than in the SRMs.  This
resulted at times in SRM results that were several ng/g higher
than the NIST value. The associated sample DDD/DDE con-
centrations were at least an order of magnitude greater than SRM
and method blank results; thus, the apparent carryover was judged
not to affect the sample results.

DISCUSSION

The probability that these analytical data would be presented in
a court of law required a means of demonstrating the overall
precision and accuracy of the dataset. The relatively unusual
sample matrices, bird eggs and fish livers, and the problems
associated with the analysis of these matrices, as well as a
range of contaminant concentrations from parts per thousand to
parts per billion, complicated the problem. The QA program
developed and implemented for this project relied heavily upon
the analysis of SRMs and the near real-time monitoring of the
resultant data.  This approach is based on statistical
techniques which consider the results from the repetitive
analyses of a reference material to be part of an infinite
population of measurements.  The data from the analysis of the
reference materials can then be considered as random samplings
of the output and can be used for evaluation of the measurement
process (Taylor, 1983).
                                 569

-------
The repetitive analysis of SRMs and the near real-time
monitoring of the data from these analyses were not the only
mechanisms used to develop and demonstrate the quality of the
dataset,  but they were a critical component.  They provided a
mechanism to verify the precision and accuracy of analytical
methods employed by the laboratories, demonstrated the
comparability of the results from the two laboratories, and
assured consistent results over time.
                                  570

-------
REFERENCES

Becker, D., R.  Christensen, L.  Currie,  B.  Diamondstone,  K.
     Eberhardt, T. Gills,  H.  Hertz,  G.  Klouda,  J.  Moody,  R.
     Parris, R. Schaffer, E. Steel, J. Taylor, R. Watters, and R.
     Zeisler.   1992.  Use of NIST Standard Reference Materials
     for  Decisions  on  Performance  of  Analytical  Chemical
     Methods  and  Laboratories,   Technology  Administration,
     NIST, Special Publication 829. Gaithersburg, Maryland.

Manen, C.  A.  1994.   Southern California  Damage  Assessment
     Analytical  Quality  Assurance  Plan,  NOAA/DAC.  Silver
     Spring, Maryland.

Schantz, Michele M., B. A. Benner, Jr., S. N. Chesler, B. J.
     Koster, K. E. Hehn, S. F. Stone, W. R. Kelly, R. Zeisler, and
     S. A. Wise.  1990.  Preparation and Analysis of a Marine
     Sediment Reference Material for the Determination of Trace
     Organic Constituents.  Fresenius' Journal of Analytical
     Chemistry 338:501-514.

Taylor, J. K.  1983.  Reference Materials-What They Are and How
     They Should be Used.  JTEVA. Vol.  11,  No. 6, November 1983.
     pp 385-387.

Wise, Stephen A., B. A. Benner, Jr., R. G. Christensen, B. J.
     Koster, J. Kurz, M. M. Schantz, and R. Zeisler.  1991.
     Preparation and Analysis of a Frozen Mussel Tissue
     Reference Material for the Determination of Trace Organic
     Constituents.  Environmental Science & Technology, Vol. 25,
     No. 10, October 1991, pp 1695-1704.

U.S. Environmental Protection Agency.  1991.  Contract
     Laboratory Program Statement of Work for Organics
     Analysis.  OLM01.8.  Washington, D.C.
                                  571

-------
                                          Table 1
                      NIST SRM 1941.  Organics in Marine Sediment.

Standard Reference Material
                        Value (1)    Uncertainty      ACQAP Limits (3), ng/g
  Analyte                 ng/g           (2)            LCL          UCL
  18                       9.9           0.25           6.19         13.6
  28                      16.1           0.4           10            22.1
  52                      10.4           0.4            6.36         14.4
  101                     22             0.7           13.6          30.4
  105                      5.76          0.23           3.51          8.01
  118                     15.2           0.7            9.18         21.2
  138                     24.9           1.8           14.4          35.4
  153                     22             1.4           12.9          31.1
  180                     14.3           0.3            9            19.6
  170                      7.29          0.26           4.48         10.1
  187                     12.5           0.6            7.53         17.5
  195                      1.51          0.1            0.88          2.14
  206                      4.81          0.15           2.98          6.64
  209                      8.35          0.21           5.22         11.5
  4,4'-DDD                10.3           0.1            6.6          14
  4,4'-DDE                 9.71          0.17           6.14         13.3

Laboratory SRM Results
                        Average     Minimum    Maximum                 Number
  Merged                Result      Result     Result      Standard    of          Percent
  Result (4)            ng/g        ng/g       ng/g        Deviation   Analyses    Difference
  18                     7.18        0.98      13.3         3.25        32          27
  28                    13.79       10.2       38           5.54        26          14
  31/28                 37.18        9.26      88.4        31.01         6         131
  52                    12.33        8.26      15.6         1.60        32          19
  101                   22.52       16.4       51.2         6.08        32           2
  105                    5.70        3.94       7.07        0.76        32           1
  118                   16.33       12.7       20           1.95        22           7
  118/2,4'-DDD          14.89       12.9       16.9         1.32        10           2
  138                   23.13       16.2       30.3         3.82        32           7
  153                   22.00       16         28.5         3.31        32           0
  180                   14.75       11         17.7         2.10        32           3
  170                    7.23        6.93       7.52        0.42         2           1
  196/170                7.01        3.41      11.9         1.38        30           4
  187                   12.98        8.95      18           2.62        32           4
  195                    1.67        1.15       3.01        0.43        32          11
  206                    5.46        3.88       6.4         0.72        32          14
  209                    8.93        6.23      12           1.17        32           7
  4,4'-DDD               9.64        9.64       9.64        0.00         1           6
  4,4'-DDD/114           7.81        4.93      10.4         1.28        31          24
  4,4'-DDE              11.61        5.83      30.6         5.89        32          20
                                                                        Mean =      15

Notes:
  All values are in dry weight.
  SRM:  Standard Reference Material.
  LCL:  Lower Control Limit.
  UCL:  Upper Control Limit.
  1.  Noncertified concentration.
  2.  NIST confidence interval, which is one standard deviation of a single measurement (triplicate injection).
  3.  Acceptance limit for the Southern California Damage Assessment Analytical Chemistry Quality Assurance Plan (ACQAP), Manen, 1993.
  4.  The single analyte results are chromatographically resolved.  Analyte results separated by a "/" are chromatographic co-elution results.

-------
                                          Table 2
                      NIST SRM 1941a.  Organics in Marine Sediment.

Standard Reference Material
                        Value (1)    Uncertainty      ACQAP Limits (3), ng/g
  Analyte                 ng/g           (2)            LCL          UCL
  44                       4.8           0.62           2.5           7.1
  49                       9.5           2.1            4.08         14.9
  52                       6.89          0.56           3.92          9.86
  66                       6.8           1.4            3.02         10.6
  87                       6.7           0.37           3.99          9.42
  99                       4.17          0.51           2.2           6.41
  101                     11             1.6            5.55         16.5
  105                      3.65          0.27           2.1           5.2
  110                      9.47          0.85           5.31         13.6
  118                     10             1.1            5.4          14.6
  138                     13.38          0.97           7.73         19
  153                     17.6           1.9            9.54         25.7
  180                      5.83          0.58           3.21          8.45
  170                      3             0.46           1.49          4.51
  206                      3.67          0.87           1.52          5.82
  209                      8.34          0.49           4.93         11.8
  4,4'-DDD                 5.06          0.58           2.71          7.41
  4,4'-DDE                 6.59          0.56           3.72          9.46

Laboratory SRM Results
                        Average     Minimum    Maximum                 Number
  Merged                Result      Result     Result      Standard    of          Percent
  Result (4)            ng/g        ng/g       ng/g        Deviation   Analyses    Difference
  44                     4.39        1.21       4.91        1.28        18           9
  49                     5.52        1.37       9.73        2.69        18          42
  52                     6.85        2.4        8.2         1.77        18           1
  66                     6.07        1.96       7.18        1.90        18          11
  87                     6.16        2.19       8.17        1.70        18           8
  99                     3.97        1.48       4.64        1.15        18           5
  101                   11.17        3.99      12.2         2.96        18           2
  105                    2.85        1.05       3.45        0.94        18          22
  110                    9.82        3.69      12.1         2.45        18           4
  118                    7.71        2.69       9.18        2.48        17          23
  118/2,4'-DDD           7.79        7.79       7.79        0.00         1          22
  138                   13.23        5.13      15.5         3.43        18           1
  153                   13.64        4.83      15.4         4.62        18          23
  180                    7.09        2.09       8.28        1.52        18          22
  196/170                3.23        1.09       3.97        0.81        18           8
  206                    4.06        0.591      5.61        1.05        18          11
  209                    8.58        3.36       9.52        2.12        18           3
  4,4'-DDD               4.77        4.42       5.11        0.30         4           6
  4,4'-DDD/114           6.08        2.38      21.8         0.00        14          20
  4,4'-DDE              12.47        3.17      84.8         1.70        18          89
                                                                        Mean =      16

Notes:
  All values are in dry weight.
  SRM:  Standard Reference Material.
  LCL:  Lower Control Limit.
  UCL:  Upper Control Limit.
  1.  Certified concentration.
  2.  NIST confidence interval, which is one standard deviation of a single measurement (triplicate injection).
  3.  Acceptance limit for the Southern California Damage Assessment Analytical Chemistry Quality Assurance Plan (ACQAP), Manen, 1993.
  4.  The single analyte results are chromatographically resolved.  Analyte results separated by a "/" are chromatographic co-elution results.

-------
                                          Table 3
                NIST SRM 1974.  Organics in Mussel Tissue (Mytilus edulis).

Standard Reference Material
                        Value (1)    Uncertainty      ACQAP Limits (3), ng/g
  Analyte                 ng/g           (2)            LCL          UCL
  18                       3             1              0.95          5.05
  44                       8             3              2.20         13.80
  52                      12             5              2.80         21.20
  66                      13.6           0.06           8.24         18.96
  101                     13             1              7.45         18.55
  105                      5.6           0.4            3.24          7.96
  118                     13.6           0.6            8.24         18.96
  128                      1.9           0.3            0.94          2.87
  138                     14             1              8.10         19.90
  153                     18             1             10.70         25.30
  180                      1.7           0.2            0.91          2.50
  187                      3.7           0.1            2.31          5.10
  2,4'-DDD                 2.5           0.9            0.73          4.28
  4,4'-DDD                 8.4           0.4            5.06         11.74
  4,4'-DDE                 5.9           0.2            3.64          8.17

Laboratory SRM Results
                        Average     Minimum    Maximum                 Number
  Merged                Result      Result     Result      Standard    of          Percent
  Result (4)            ng/g        ng/g       ng/g        Deviation   Analyses    Difference
  18                     2.50        0.402      5.07        0.85        26          17
  44                     7.69        5.15       9.51        0.94        26           4
  52                    11.82        9.35      13.92        1.13        26           2
  66                    11.82        7.1       14.52        1.79        26          13
  101                   14.49       11.544     19.08        1.64        26          11
  105                    5.57        4.188      6.58        0.60        26           1
  118                   15.91       13.13      20.71        1.92        22          17
  2,4'-DDD/118          19.51       17.24      21.56        1.77         4          43
  128                    1.91        1.26       2.41        0.33        26           1
  138                   16.17       12.36      19.81        1.92        26          15
  153/114               13.76       10.52      15.95        1.32        18          24
  153                   13.79       11.976     15.653       1.35         8          23
  180                    1.76        1.284      2.31        0.28        18           3
  157/180                1.91        1.55       2.43        0.35         8          12
  187                    3.45        2.18       4.46        0.51        26           7
  2,4'-DDD               2.20        1.2        3.3         0.51        26          12
  4,4'-DDD               6.95        5.03       9.88        1.94         6          17
  4,4'-DDD/114           5.50        3.8        9.53        1.37        20          35
  4,4'-DDE               6.91        3.59      16.81        3.21        26          17
                                                                        Mean =      14

Notes:
  All values are in wet weight.
  SRM:  Standard Reference Material.
  LCL:  Lower Control Limit.
  UCL:  Upper Control Limit.
  1.  Noncertified concentration.
  2.  NIST confidence interval, which is one standard deviation of a single measurement (triplicate injection).
  3.  Acceptance limit for the Southern California Damage Assessment Analytical Chemistry Quality Assurance Plan (ACQAP), Manen, 1993.
  4.  The single analyte results are chromatographically resolved.  Analyte results separated by a "/" are chromatographic co-elution results.

-------
                                          Table 4
               NIST SRM 1974a.  Organics in Mussel Tissue (Mytilus edulis).

Standard Reference Material
                        Value (1)    Uncertainty      ACQAP Limits (3), ng/g
  Analyte                 ng/g           (2)            LCL          UCL
  18                       3.98          NA             2.59          5.37
  52                      13.5           NA             8.78         18.2
  66                      10.54          NA             6.85         14.2
  101                     14.51          NA             9.43         19.6
  105                      7.23          NA             4.7           9.76
  118                     18.34          NA            11.9          24.8
  128                      2.66          NA             1.73          3.59
  138                     19.91          NA            12.9          26.9
  153                     19.86          NA            12.9          26.8
  180                      1.84          NA             1.2           2.48
  187                      4.00          NA             2.6           5.4
  2,4'-DDD                 1.86          NA             1.21          2.51
  4,4'-DDD                 4.06          NA             2.64          5.48
  4,4'-DDE                 6.49          NA             4.22          8.76

Laboratory SRM Results
                        Average     Minimum    Maximum                 Number
  Merged                Result      Result     Result      Standard    of          Percent
  Result (4)            ng/g        ng/g       ng/g        Deviation   Analyses    Difference
  18                     3.17        2.5        4.76        0.52        66          20
  52                    12.52       10.54      15.62        0.91        66           7
  66                    11.42        8.73      14.3         1.05        66           8
  101                   15.09       11.11      17.09        1.06        66           4
  105                    5.58        4.03       7.34        0.71        66          23
  118                   15.70       10.38      19.79        1.64        57          14
  2,4'-DDD/118          16.39       14.77      18.35        1.28         9          11
  128                    2.11        1.01       2.99        0.31        66          21
  138                   16.54       14.52      20.94        1.33        66          17
  153                   13.76       12.53      18.26        1.47        14          31
  153/114               14.33       11.16      18.06        1.68        52          28
  180                    2.40        1.54       4.07        0.87        12          30
  157/180                2.19        1.4        3.27        0.48        54          19
  187                    3.74        3.16       4.68        0.35        66           6
  2,4'-DDD               1.60        0.71       3.39        0.41        66          14
  4,4'-DDD               4.61        3.51       6           0.66        13          14
  4,4'-DDD/114           3.77        3.12       5.44        0.49        53           7
  4,4'-DDE               6.78        2.47      18.34        2.93        65           4
  4,4'-DDE/87           12.13       12.13      12.13        0.00         1          87
                                                                        Mean =      19

Notes:
  All values are in wet weight.
  SRM:  Standard Reference Material.
  LCL:  Lower Control Limit.
  UCL:  Upper Control Limit.
  1.  Noncertified concentration.
  2.  NIST uncertainty and confidence interval not provided for SRM 1974a.
  3.  Acceptance limit for the Southern California Damage Assessment Analytical Chemistry Quality Assurance Plan (ACQAP), Manen, 1993.
  4.  The single analyte results are chromatographically resolved.  Analyte results separated by a "/" are chromatographic co-elution results.

-------
79
      THE GENERATION OF CALIBRATION CURVES FOR MULTI-POINT
       STANDARDIZATIONS DISPLAYING HIGH RELATIVE STANDARD
                                  DEVIATIONS

  D. Lancaster. Senior Quality Assurance Chemist, Environmental Standards, Inc., 1140
  Valley Forge Road, P.O. Box 911, Valley Forge, Pennsylvania 19482-0911 (610) 935-
  5577

  ABSTRACT

  Many analytical methods, especially the GC methods, state that a calibration curve should
  be used if the percent relative standard deviation  (RSD) precision criterion for the initial
  calibration standards is exceeded.  However, no guidance is usually given in the methods
  on how the calibration curve is generated from the initial calibration data points or what
  determines  an  acceptable calibration curve.   This  lack of guidance has  led  to
  inconsistencies  within and  among laboratories.   For example, in  its analysis  of
  organochlorine pesticides by dual-column GC (SW846 Method 8080), one laboratory used
  linear calibration graphs for  certain compounds  because  these compounds  were
  "historically" linear up to the highest calibration  standard concentration, despite the fact
  that the data showed distinct tapering at the high end of the calibration curves.  The same
  laboratory used a quadratic curve fit for other compounds even when the data showed a very
  good straight-line fit (low %RSD).  This paper discusses the lack of guidance
  for quantitating positive results from non-linear calibration curves and suggests a possible
  solution to the  problem.  The paper provides an  easy method for generating calibration
  curves using available software and includes quality control and corrective actions. The
  adoption of such a procedure as detailed in this paper would help to make comparisons
  of positive results from different laboratories more reliable, since the laboratories would be
  using similar calibration and quantitation techniques.

  INTRODUCTION

  Many analytical methods,  especially the gas chromatography (GC) methods, state that a
  calibration curve should be used if the percent relative standard deviation (RSD) precision
  criterion for the initial calibration standards  is  exceeded.  However, no guidance is
  usually given in the methods on how the calibration curve should be generated from the
  initial calibration data points or what should determine an acceptable calibration curve.
  For example, SW846 Method 8000, which is the parent method of many of the GC
  methods in SW846, states that if the percent RSD for the calibration factors for a given
  compound obtained in the standardization of the instrument is less than 20%, then the
  laboratory can  assume that the calibration exhibits linearity and the average of the
  calibration factors can be used for quantitating all positive results for that compound
  across the range of the calibration standards. If the %RSD is greater than 20%, the
  method indicates that a calibration curve should  be generated.  However,  no guidelines
                                             576

-------
are specified for the generation of this calibration curve. Consequently, laboratories vary
with regard to the way in which they generate calibration curves for these analyses.
Some laboratories simply plot the data points  on a graph and generate a best-fit line
through the points using available linear regression software.  This seems inappropriate,
since the method implies that high %RSDs demonstrate that the instrument response is
not linear across the calibration range being examined. Other laboratories use a point-to-
point method of calibration, in which a straight line is drawn between the origin and the
data point  for the first (low  concentration) standard,  another  straight line is drawn
between the data points for the first and the second standards, and so  forth.  This is a
more accurate method of quantitation, but it suffers from the drawback of being very
difficult to verify.  One five-point calibration curve for an instrument standardization
would  require five separate equations to calculate the positive results for a  single
compound. Another laboratory based the quantitation of compounds on past performance
in calibration curves.  For example, in its analysis of organochlorine pesticides by dual-
column GC (SW846 Method 8080), this laboratory used linear calibration graphs for
certain compounds because these compounds were "historically" linear up to the highest
calibration standard concentration, despite the fact that the data showed distinct tapering
at the high end of the calibration curves. Inter-laboratory comparability with regard to
instrument calibration is of increasing importance because of the rising cost of performing
site  investigations.  Companies involved with the clean-up  of contaminated sites are
realizing the value of performing laboratory audits  and evaluating laboratory performance
through performance evaluation (PE) sample studies. Yet how can one judge the results
for the  analysis  of a PE sample performed by several laboratories if each is generating
its positive results in different ways?  As will be seen later, a large discrepancy between
results can occur depending on the method of quantitation the laboratory uses.  In order
to make comparisons between the results for a given analysis from different laboratories
more meaningful, the laboratories should be using the same method of quantitation to
calculate the positive results.

SUMMARY OF  METHOD

First, the  laboratory should  analyze  five  standards  (per  SW846 Method 8000)  of
increasing  concentration  on an instrument that has been  set  up according  to the
manufacturer's specification. The low concentration standard should be at a concentration
equal to the reporting limit for the analyte. The concentrations of the other standards
should be selected to represent the range of interest for the analyte, based on the expected
levels of the analyte in the samples and the expected linear range of the analyte. The
calibration factor (CF) for each analyte in the standards should be calculated using the
following equation:
                                              577

-------
                         CF = Instrument Response / Amount Injected

The average CF and %RSD for the calibration factors for an analyte are then calculated
using the following equations:

                         average CF = (CF1 + CF2 + CF3 + CF4 + CF5) / 5

                         %RSD = { sqrt[ sum of (CFi - average CF)^2 / (5 - 1) ] / average CF } x 100

where the sum runs over the five calibration factors (i = 1 to 5).
If the %RSD is less than or equal to 20.0% (or the quality control criterion stated in the
method used),  then the laboratory should use the average  calibration factor for the
calculation of the positive results for the analyte in the samples and calibration check
standards.  If the %RSD is calculated to be greater than 20.0%, the laboratory should
generate a quadratic curve for the calculation of the positive results.  The equation
should be of the form y = ax² + bx.  Next, the upper limit of the calibration curve
should be determined using the following procedure: find the slope (m) of the calibration
curve at the low concentration standard (x').  This is done using the equation m = 2ax'
+ b.  This equation represents the first derivative of the quadratic equation.  Divide this
slope by five and find the concentration (x) which corresponds to this reduced slope (m')
by using the equation x = (m' - b)/(2a).  (This equation comes from rearranging the
equation of the slope to give the concentration in terms of the slope.)  This concentration
represents the point at which the calibration curve has degraded to only 20% of the slope
of the curve at the low concentration standard.

If this concentration is less than the fourth calibration standard, then the laboratory should
adjust the operating conditions of the instrument  and recalibrate the instrument. If the
upper  limit is determined to be between  the  fourth and fifth (high concentration)
calibration standard, then the laboratory can analyze samples and quantitate  positive
results up to the concentration determined to be the upper limit of the curve; if a sample
displays a result higher  than the upper limit, the laboratory should dilute the sample and
                                              578

-------
reanalyze  accordingly.   If the upper limit is determined to be greater than the high
concentration standard, then the concentration of that standard should be considered the
upper limit.
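
A minimal sketch of the procedure (Python with NumPy; the least-squares fit shown here is
one way to obtain the forced-through-origin quadratic and is not necessarily the algorithm
used by the FIT software):

```python
import numpy as np

def calibration_factors(concs, responses):
    """Calibration factors, their average, and %RSD for a five-point calibration."""
    cf = np.asarray(responses, float) / np.asarray(concs, float)
    avg = cf.mean()
    rsd = cf.std(ddof=1) / avg * 100.0          # (n - 1) standard deviation
    return avg, rsd

def quadratic_through_origin(concs, responses):
    """Least-squares fit of y = a*x**2 + b*x (no intercept term)."""
    x = np.asarray(concs, float)
    y = np.asarray(responses, float)
    coeffs, *_ = np.linalg.lstsq(np.column_stack([x**2, x]), y, rcond=None)
    return coeffs                                # (a, b)

def curve_upper_limit(a, b, low_std):
    """Concentration at which the slope m = 2*a*x + b has fallen to one-fifth
    of its value at the low standard:  x = (m' - b) / (2*a)."""
    m_low = 2.0 * a * low_std + b
    return (m_low / 5.0 - b) / (2.0 * a)

# Usage: if %RSD > 20%, fit the quadratic, compute the upper limit, and then
# compare it with the fourth and fifth calibration standards as described above.
```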

In addition, after the instrument has been calibrated but before samples are analyzed, the
laboratory should analyze a standard at a concentration in the middle of the calibration
range.  If this calibration check standard fails the criterion specified in the method, a new
calibration curve should be  generated.  If the calibration check standard passes the
specified criterion, then the laboratory can proceed to analyze samples.

In the following section, the use of this procedure using laboratory-generated data will
be examined.

PRACTICAL USE OF PROCEDURE

In the analysis of project samples for endosulfan II by SW846 Method 8080, one large
environmental production laboratory provided the following data:

TABLE 1   Raw Data and Calibration Factors for Endosulfan II

   x (conc., µg/L)               y (area counts)               Calibration Factor
        0.125                        160,000                        1,280,000
        0.250                       320,000                        1,280,000
        0.50                       645,000                        1,290,000
         1.0                        1,080,000                       1,080,000
         2.0                        1,410,000                       705,000

Due to past analytical performance for this compound, the laboratory used a linear
regression program to create a straight-line calibration curve of the form y = mx + b.
However, the %RSD was calculated to be 22.3%.  In such a case, it would be more
appropriate to use a quadratic equation to generate a calibration curve. The software used
was FIT, Version 1.0 by Matthias Kretschmer, available from WindowChem Software.
The two equations derived from the raw data were as follows:

Laboratory-derived equation:       y = (654,140)x + 216,042

FIT-derived equation:              y = (-372,252)x² + (1,450,011)x

A plot of the raw data and the calibration curves is presented below.   As  can be
observed,  the results from the two  curves  can generate large discrepancies for  a given
sample response.   In the region of responses of 1,000,000 area counts, the difference
                                             579

-------
between the concentrations from the linear curve and the quadratic curve can be as much
as 0.30 µg/L or more.  Table 2 summarizes the predicted concentrations from both
calibration curves for given area counts and the differences between the predicted values.
As expected, the differences are minimal only near the regions where the two curves
intersect.   Even so, the differences are notable, especially at the low end of the curve
(near 200,000 area counts) and in the middle of the curve (around 1,000,000 area
counts).
Figure 1  Calibration Curves Generated for the Raw Data for Endosulfan II
          (plot of the raw data and the two calibration curves; not reproduced)
                                             580

-------
TABLE 2 Predicted Concentrations from the Calibration Curves for a Given Area Count
and the Differences Between the Predicted Concentrations

                Concentration (µg/L) from      Concentration (µg/L) from     Difference
  Given Area    Quadratic Calibration Curve    Linear Calibration Curve      (µg/L)
    150,000              0.106                         -0.101                 -0.207
    200,000              0.143                         -0.025                 -0.168
    250,000              0.181                          0.052                 -0.129
    300,000              0.219                          0.128                 -0.091
    350,000              0.259                          0.205                 -0.054
    400,000              0.299                          0.281                 -0.018
    450,000              0.340                          0.358                  0.018
    500,000              0.382                          0.434                  0.052
    550,000              0.426                          0.511                  0.085
    600,000              0.471                          0.587                  0.116
    650,000              0.517                          0.663                  0.147
    700,000              0.565                          0.740                  0.175
    750,000              0.614                          0.816                  0.202
    800,000              0.665                          0.893                  0.227
    850,000              0.719                          0.969                  0.250
    900,000              0.775                          1.046                  0.271
    950,000              0.834                          1.122                  0.288
  1,000,000              0.896                          1.198                  0.303
  1,050,000              0.961                          1.275                  0.313
  1,100,000              1.032                          1.351                  0.319
  1,150,000              1.109                          1.428                  0.319
  1,200,000              1.193                          1.504                  0.311
  1,250,000              1.288                          1.581                  0.293
  1,300,000              1.399                          1.657                  0.258
  1,350,000              1.539                          1.734                  0.194
  1,360,000              1.574                          1.749                  0.175
  1,370,000              1.612                          1.764                  0.153
  1,380,000              1.654                          1.779                  0.125
  1,390,000              1.704                          1.795                  0.090
  1,400,000              1.768                          1.810                  0.042
                                             581

-------
                Concentration (µg/L) from      Concentration (µg/L) from     Difference
  Given Area    Quadratic Calibration Curve    Linear Calibration Curve      (µg/L)

  1,405,000             1.810                      1.818              0.007
  1,410,000             2.022                      1.825              -0.196
An examination of the slope of the quadratic calibration curve shows that the calibration
range for the data points for endosulfan II should not be extended to 2.0 µg/L.  However,
using the linear calibration curve, the laboratory assumed that the data points were valid
throughout the range of calibration standards up to and including the high calibration
standard concentration of 2.0 µg/L.  Instead, the data show that an upper limit of
approximately 1.6 µg/L would be more appropriate.

TABLE 3  Slope of Quadratic Calibration Curve  at Given Concentrations

          Concentration (ug/L)                  Slope  of Quadratic Curve
                 0.125                                 1,356,948
                 0.200                                 1,301,110
                 0.300                                 1,226,660
                 0.400                                 1,152,209
                 0.500                                 1,077,759
                 0.600                                 1,003,309
                 0.700                                  928,858
                 0.800                                  854,408
                 0.900                                  779,957
                 1.000                                  705,507
                 1.100                                  631,057
                 1.200                                  556,606
                 1.300                                  482,156
                 1.400                                  407,705
                 1.500                                  333,255
                 1.600                                  258,805
                 1.700                                  184,354
                 1.800                                  109,904
                 1.900                                   35,453
                 2.000                                  -38,997
                                            582

-------
As can be seen, the slope of the quadratic curve is negative at the high end;
therefore, the data points should be considered unreliable at the upper end of the
calibration curve, since one area count response can produce two concentration values.
This fact is obscured by using the linear calibration curve.  But what constitutes an
acceptable upper limit to a quadratic calibration curve?  A good rule of thumb is a 20%
guideline.  The upper limit to the calibration curve should be the point where the slope
of the curve has decreased to only 20% of the slope at the low standard concentration.
In this example, the slope of the graph at 0.125 µg/L is 1,356,948.  One-fifth of this
value is 271,390, which corresponds to a concentration of 1.583 µg/L.  Therefore, the
upper limit of the calibration curve should be approximately 1.600 µg/L.  Since this value
falls between the two highest initial calibration standard concentrations, the laboratory can
use the calibration curve but should dilute and reanalyze any sample displaying an on-
column concentration of endosulfan II greater than 1.600 µg/L.  If the calculated upper
limit were less than 1.0 µg/L (the concentration of the fourth calibration standard), the
laboratory would have to restandardize the GC.  If the calculated upper limit were greater
than 2.0 µg/L (the concentration of the fifth calibration standard), then the upper limit
would be 2.0 µg/L, since the laboratory should not report values at concentrations greater
than the highest standard used for instrument calibration.

Another issue with the calibration curves concerns minimum area counts: what is the
minimum area count necessary to achieve the reporting limit for the analyte?  Using the
previous equations, the minimum area required to report a positive result of 0.125 µg/L
for endosulfan II in a sample is 175,435 for the quadratic calibration curve.  The
minimum area required to report a positive result for endosulfan II in a sample is 297,809
for the linear calibration curve, which is almost 70% higher than the area count required
to produce the same result from the quadratic calibration curve.  This is due to the fact
that the linear calibration curve crosses the y-axis at an area count of 216,042.  This
value is almost 60,000 area counts higher than the area count obtained from the low
concentration standard.  Indeed, the minimum area count required to obtain a sample
concentration of 0.125 µg/L using the linear calibration curve of the previous example is
almost twice the area count obtained in the analysis of the low calibration standard.  This
demonstrates the utility of forcing the calibration curve through the origin.  Forcing the
calibration curve through the origin helps to minimize the amount of error between the
true and calculated concentrations at the low end of the calibration curve.  It eliminates
the possibilities of a positive y-intercept (which serves to increase the minimum area
required to report a positive result at the low standard concentration) and of a negative
y-intercept (which serves to decrease the minimum area required to report a positive
result at the low standard concentration, so that any detection could be calculated to be
greater than the reporting limit).
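
The figures quoted in the two preceding paragraphs can be checked with a few lines of
arithmetic (Python; the coefficients are those of the two calibration equations given earlier):

```python
a, b = -372_252.0, 1_450_011.0          # FIT quadratic:      y = a*x**2 + b*x
m, c = 654_140.0, 216_042.0             # laboratory line:    y = m*x + c

slope_low = 2 * a * 0.125 + b           # 1,356,948 at the 0.125 ug/L standard
upper = (slope_low / 5 - b) / (2 * a)   # ~1.583 ug/L, rounded to ~1.6 ug/L

area_quad = a * 0.125**2 + b * 0.125    # ~175,435 counts to report 0.125 ug/L
area_line = m * 0.125 + c               # ~297,809 counts for the same result

print(round(slope_low), round(upper, 3), round(area_quad), round(area_line))
```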
                                               583

-------
SUMMARY

A method for generating a quadratic calibration curve which is forced through the origin
has been described.  The quadratic equation obtained from the initial calibration data is
easy to use and its results are equally easy to interpret without a detailed knowledge of
statistics.   Many types  of software are currently available  which can perform these
calculations; analytical chemists can learn how to set up and run the software within a
day. Having a consistent approach to the generation of calibration curves would permit
a more accurate assessment of the results of a performance sample study.  The method
for generating calibration curves as presented in this paper is easy to follow and can be
adopted by a laboratory  with a minimum of effort.
                                             584

-------
                                                                                               80
      Data Acquisition and Computer Networking :  A Key to Improved Laboratory
                                       Productivity
   C. S. Sadowski, E. A. LeMoine and J. F. Ryan, The Perkin-Elmer Corporation 50 Danbury Road
                                   Wilton, CT 06897-0259

 It is no surprise to anyone connected with the environmental laboratory community that the days
 of strip chart recorders with red ink pens are long gone.  Many of today's analysts likely have
 never used such a device, since that was in one sense the first era of data acquisition. There are
 those who argue that a second era began in the 1970's with the advent of the computer data
 acquisition device.  Spurred initially in a typical environmental laboratory by the need to analyze
 and reduce vast quantities of data from gas chromatography-mass spectrometry (GC/MS)
 instruments, the first data acquisition systems were based on minicomputers, large floor model
 computers by today's standards. These systems generally involved one minicomputer that
 operated sequentially as a data acquisition, data reduction and data storage device. There was a
 productivity penalty paid which some labs circumvented  by buying a second minicomputer.  In the
 intervening 20 years, this basic model still holds, though  the computer data handling capabilities
 have gotten larger and the price much cheaper.

 Then came the 1980s, and environmental GC/MS analyses took on a whole new series of quality
 assurance and data reporting functions.  But by and large, the work was still done on computer systems
 tied one-on-one to a GC/MS instrument.  QA/QC and data reporting were often done on
 separate computer systems with the use of spreadsheets and stand-alone EPA reporting software
 packages.

 Now we are approaching the third wave in environmental data handling with a need to integrate all
 computer and reporting functions into one system.  This need arises from the enormous pressure
 on a laboratory to improve the efficiency of its information collection and data reduction in order
 to minimize analyst time and to maximize the quality of its environmental data.

 Instrument and Environmental Protection Agency History
 Since the formation of EPA in December 1970, over 20 major pieces of environmental legislation
 have been enacted.  Over the same time period, there have been major improvements in
 analytical instrumentation that mirrored these legislative acts.
[Figure not reproduced: a bar chart of the number of major environmental laws enacted per
period from 1945 to 1995, marking the creation of EPA in 1970 and statutes such as FWPCA,
SDWA, CAA, TSCA, RCRA, and FIFRA.]

                         Fig. 1:  Environmental legislation time line

In the 1960s, GC/MS required packed columns with a complex interface between the
chromatography and spectroscopy instrument sections. Instruments cost $100,000 or more, and
                                                585

-------
were collections of relatively exotic vacuum and electronic technology. Individual mass
chromatograms were collected manually by watching a Faraday cup collector of ions. When the
meter indicated that the total number of ions was increasing (i.e., a peak was eluting from the GC
column into the MS ion source), an operator initiated a magnetic scan across a typical 20 to 600
amu mass range. Photographic oscillographic paper was used to collect the spectrum. An
experienced mass spectroscopist would then count across the amu mass range, identifying the
m/e value of each of the major ion fragments. The identity of the unknown compound would then
be constructed based on knowledge of ion fragment identities, isotope ratio values and operator
experience.  Ph.D.s typically operated these instruments and identified unknown compounds.
The process took a great deal of time and experience.

From an environmental viewpoint, there was little incentive to improve the productivity of this
process, since none of the then existing environmental statutes required individual chemical
analysis.  Most environmental monitoring requirements were based on gross parameters such as Total
Suspended Solids (TSS) and Biological Oxygen Demand (BOD).

Then began the wave of environmental  legislation shown in Figure 1  above.  The growth of this
legislation was congruent with the new era of computerized GC/MS operations.  In the early
1970s, relatively large and certainly expensive mini-computers were  used to collect GC/MS data.
But these were "big iron" single tasking computers operating on each manufacturer's proprietary
computer operating system. Instruments collected data and then had to process the data.  And
the processing phase could only be considered by today's standard to be semi-automated.
Individual chromatographic peaks had to be manually identified by an operator, key mass spectra
identified and background subtracted, and then matched against standard libraries of GC/MS data
using a forward search algorithm.  Instruments could not be collecting data while processing and
searching.  Finnigan made the first breakthrough by offering a multi-tasking GC/MS acquisition
and data processing system, Incos, in 1977.  Other manufacturers soon followed.  While such
systems were more productive than previous ones (and certainly more productive than an
operator manually collecting strip chart spectra and counting up m/e values), they were
nonetheless based on expensive minicomputers and a lot of operator experience and interaction.

EPA Approach to Environmental Monitoring
Besides computerizing what had been a manual instrumental system, EPA changed the world in
the 1970s as well.  With the development of chemical-specific waste  water monitoring methods,
EPA established a fixed list of analytes for which monitoring would be performed. Now industries
would no longer just look for all the possible chemicals which might be in an industrial effluent, but
rather for a list of volatile, base, neutral and acid compounds that would act as indicators. These
were known as "priority pollutants."  The theory was that if EPA could establish upper limits for
these indicator compounds which a plant's waste treatment process must not exceed, any
chemical pollutant not on the priority pollutant list would likely be effectively treated as well.  In
addition to limiting monitoring to specific chemicals, the priority pollutant list also led to the
development of specific environmental analytical methods. This monitoring model - fixed lists of
analytes and fixed analytical methods - began in the mid 1970s and has held for the last two
decades. What has grown is the analytical quality assurance and EPA reporting requirements.

In the mid-1980s came the era of the personal computer.  As much to reduce the cost of the
minicomputer as to take advantage of spreadsheet and database software, GC/MS
instruments moved to PCs, but still each instrument had its own.  In addition, the reporting
requirements have increased.  It is not sufficient to report just data.  Legal regulations now
require that all quality assurance and quality control be reported along with the analytical data.
It is not uncommon for GC/MS reports to take five times longer to generate than to acquire the
raw chromatographic data.  Environmental labs were faced with a situation where data from GC/MS
                                                 586

-------
 and GC data acquisition systems had to be individually evaluated for compliance with QA limits
 and manually combined to generate EPA-compliant reports.
                   Fig. 2:  GC/MS data acquisition and report process:
                   Step 1, GC/MS data acquisition; Step 2, GC/MS data reduction;
                   Step 3, evaluation of data QA/QC; Step 4, EPA report generation

The four steps in modern GC/MS operations are illustrated in Figure 2. Labs need to perform all
these steps in order to have their data judged acceptable.

The other trend of the 1980s was competition.  Environmental labs had to lower their costs
because the amount of money received for each sample analysis had declined to the point where
not only could Ph.D.s not be afforded, but neither could M.S. or even B.S. chemists.  Many labs
operated with B.A.-level staff who were trained only to perform EPA fixed analyte-fixed method
analyses.  These trends are illustrated in Figures 3 and 4.
                     Fig. 3:  GC/MS operator skill and dollar trends
                     Fig. 4:  GC/MS data production time and QA/QC
Productivity in the Environmental Analytical Laboratory
So here we are in the 1990s. Environmental protection still fundamentally depends on good
analytical chemistry performed with high levels of quality assurance.  But environmental labs are
faced with the need to optimize resources and minimize costs. We've moved well beyond the red
pen and strip chart recorder. But we also have resources that were unavailable even just 5 years
ago. Most of us have office work places where PCs are networked for e-mail and shared files.

So the infrastructure is already there. Networks exist, and these days software exists to integrate
all four GC/MS functions from data acquisition to report generation in a single system. In a
modern system, one PC on a network can be acquiring data, another performing  quality
assurance  evaluations, while yet a third generates EPA compliant reports.  And given the nature
of environmental analytical chemistry, i.e., soil, water and air samples being examined for a fixed
series of analytes using fixed analytical methods, the four steps of a GC/MS determination can be
performed by analysts without Ph.D.s.
                                                 587

-------
Network systems with distributed processing can process data on any one of the PCs connected
to a network.  One can produce calibration and quantitation data with enhanced data review
without need to revert to the minicomputers of old.  Given the distributed nature of the GC/MS
data production task, PCs may not always be engaged in data acquisition,  reduction, QC
evaluation and reporting functions. In a network environment, a  PC which  is found idle from its
primary task can be passed a processing task required by some other computer in a network. The
object of the overall GC/MS undertaking is to make the chromatography in  step one of Figure 2
the rate determining function.
    Fig. 5:  Distributed parallel processing links multiple instruments as a network of personal
           computers with the capability to assign pending tasks to any available PC

In short, in a distributed processing system,  multiple tasks can be performed as efficiently as
possible, since tasks can be sent to any PC on a network. The modern lab, set up to produce
EPA reports as a routine measure, can take data from a group of 10 samples, evaluate the data,
and generate the required reports all in the same day. In practice, CLP-like data packages can be
prepared in only 2 hours more than the 8 hours needed for chromatography with a distributed
processing package of software and a laboratory computer network with 3 personal computers on
it.  Laboratory productivity, as measured by sample data packages completed per unit of time or
by number of labor hours needed to complete a data package, has been improved by thousands
of percent compared to the original manual GC/MS work.
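
The scheduling idea behind such a system can be sketched in a few lines (Python; a toy
illustration of a shared task queue, not the vendor's actual software): each idle PC pulls the
next pending reduction, QA/QC, or reporting task so that the chromatography itself remains the
rate-determining step.

```python
import queue, threading

tasks = queue.Queue()
for sample in range(1, 11):                              # a 10-sample batch
    for step in ("data reduction", "QA/QC evaluation", "EPA report"):
        tasks.put((sample, step))

def networked_pc(name):
    """Each PC repeatedly takes the next pending task until none remain."""
    while True:
        try:
            sample, step = tasks.get_nowait()
        except queue.Empty:
            return                                       # idle: no work pending
        print(f"{name}: sample {sample} -> {step}")
        tasks.task_done()

pcs = [threading.Thread(target=networked_pc, args=(f"PC-{i}",)) for i in (1, 2, 3)]
for pc in pcs: pc.start()
for pc in pcs: pc.join()
```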
                                                588

-------
                                                                               81
            ISSUES REGARDING VALIDATION OF ENVIRONMENTAL  DATA

R. Cohen, Principal Scientist, Environmental Technical Services Division, Fernald Environmental
Restoration Management Company, P.O. Box 538704, Cincinnati, Ohio, 45253-8704

ABSTRACT

The collection and analysis of environmental data are subject to a number
of conditions that often have an effect on the technical usability of the
data.  These conditions are frequently related to matrix and to the manner
in which varying sample collection, sample preparation, and analysis create
bias in the final analytical results.  End users of such data need to be
made aware of these potential biases.  Data validation can give users a
level of confidence in the reported values, and can also identify
reporting/calculation errors (through data verification).  The US EPA has
required data validation for Superfund-related sample analyses since the
early 1980's, and several DOE sites require data validation as well.
However, there are varying opinions regarding the extent of validation
and the effects of several variables.  These differences of opinion are
most noticeable when dealing with radiochemistry data, for which no
standard protocols exist for either analysis or data validation.

Data validation should be concerned with all aspects of the sample, from
sampling through data generation and reporting.  Data validation should
evaluate such items as sample holding times, sample preservation,
instrument performance (calibration, method blanks, etc.), QC sample
results, and, if necessary, raw data inspection (TIC evaluations, for
example).  These items allow the validators to determine precision,
accuracy, completeness, and contract compliance of the data; the end users
can evaluate representativeness once these other parameters are evaluated.

There is definitive guidance for the validation of CLP inorganic and
organic data, as presented in the National Functional Guidelines.  This
has resulted in a fairly uniform validation effort for much of the
chemical environmental data generated by the CLP Statements of Work for
Organic and Inorganic analyses.  There are those who say that these
methods are restrictive, but some level of consistency is achieved by
their use.  Radiochemical data generation and validation is another story;
several agencies are grappling with the attempt to standardize methods
and subsequent data validation.  The Fernald Environmental Management
Project (FEMP) is among several DOE sites attempting to create standard
data verification/validation procedures.  We have taken the lead on the
development of such procedures, many of which rely on software data
evaluation.  The DOE is working toward the development of electronic data
verification/validation software that can be used across the entire DOE
complex.  The creation of such software will reduce the effort required to
perform these tasks, and will result in consistency across the DOE
                                          589

-------
complex.

It is the author's contention that all data used for making any
environmental decisions, especially data generated to satisfy regulatory
requirements, must be verified and validated.  The FEMP has invested much
time and resources to standardize these procedures, and believes that
these procedures can serve as a model, or at least a good starting point,
for other environmental firms.  It is our objective to share some of our
"lessons learned" and inform the environmental community of our progress.

INTRODUCTION

One of the most powerful and indispensable tools available to the
environmental decision-maker is validated data.  Validated data are used to
define the nature and extent of contamination, evaluate resulting effects on
human health (risk assessment), and the extent to which a remediation
effort was successful (were contaminants of concern adequately reduced or
removed?).  Data generated under Superfund are subjected to verification and
validation as a matter of course; EPA has defined guidance for the
validation of organic and inorganic data, especially data generated via
the CLP Program.  This effort is often manual; however, software exists to
expedite the process and remove some of the subjectivity inherent in
manual validation efforts.  The verification/validation of radiochemical
data, however, enjoys no such standardized guidance; it is an area under
much development, especially within the DOE complex.  The DOE is
attempting to standardize verification/validation procedures for
radiochemical data.

One cross-cutting issue centers on differentiating between validation
and verification.  There is a difference between the two processes which
is not often recognized, much less addressed.  The DOE has been attempting
to define these terms and identify specific functions that are performed
for each of these processes.  At the DOE Fernald site, we have been
tackling these differences and attempting to build the data evaluation
process around a more technically defensible view of verification and
validation.

Another issue deals with applying conventional validation wisdom to
samples representing unconventional matrices.  At many DOE sites, samples
are often radioactive, which poses unique problems for the analyses of
these samples.  Even in cases where samples are not "hot," the
laboratories are forced to attempt digestion/extraction of matrices best
described as semi-refractory.  Traditional QC performed on such samples
often indicates problems or, in some cases, fails to indicate analytical
problems because the analytical methods require sample spiking just
prior to extraction, which is not a good indicator of the extraction/
digestion efficiency of the target compounds from these difficult
matrices.  There are no easy answers to these types of problems, but a
                                        590

-------
recognition that they exist is a first step at addressing them.

Much effort has been expended in an attempt to streamline costs associated
with verification and validation.  At several DOE sites, these processes
account for as much as one-half the total costs associated with a
particular sampling event.  (Associated costs include sampling, lab
analysis, data package generation, verification, validation, database
activities, and final evaluation/data use.)  There are numerous software
programs in existence today that purport to reduce the time necessary for
verification/validation (V&V) by factors of two, three, or more.  This
author has not evaluated these programs, but some indeed have merit.  As
computer software and algorithms become more advanced and user-specific,
the ability to perform automated V&V will increase in acceptance by users
and regulators as well.

DISCUSSION

Historically, the validation process included the entire sequence of events
from receiving and logging in data packages, through verifying completeness
and contractual compliance, to the determination of actual data usability,
often even including database entry and approval. In reality, and now more
often in practice, these steps are referred to individually and are not
lumped together under the general misnomer of "validation". The DOE has
defined two basic steps in the overall process: data verification and data
validation. The current definitions are presented below.


      Analytical Data Verification: A process of evaluation for
      completeness, correctness, consistency, and compliance of a set of
      facts against a standard or contract. Data verification is defined
      as a systematic process, performed by either the data generator or
      an entity external to the data generator.1

      Analytical Data Validation: A technically based, analyte- and sample-
      specific process that extends the qualification process beyond
      method or contractual compliance and provides a level of confidence
      that an analyte is present or absent and, if present, the associated
      variability. Data validation is a systematic process, performed
      external to the data generator, which applies a defined set of
      performance-based criteria to a body of data and may result in
      qualification of the data. Data validation occurs prior to drawing
      a conclusion from the body of data.1

These definitions may appear to be a bit "dry". To explain: data verification
can be thought of as an evaluation of contractual compliance (is everything
there that was asked for by the contract governing the analyses?), data
completeness (is all the necessary information present that is needed to
validate the data?), data consistency (when the same information is found in
the data package in multiple locations, was the same information
transcribed/downloaded at each location?), and data correctness (are the
results calculated correctly?). These criteria are sometimes known as the
four C's. These criteria are also applicable to electronic data deliverables
(EDDs).
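
For illustration only, the four C's lend themselves to a simple automated
checklist. The following Python sketch assumes a hypothetical data-package
structure and field names; it is not a standard deliverable format or an EPA
procedure.

    # Hypothetical sketch only: the "four C's" expressed as an automated checklist.
    # The data-package structure and field names below are illustrative assumptions.
    def verify_four_cs(package, required_items):
        findings = {}
        # Completeness: every deliverable item required by the contract is present.
        findings["completeness"] = [i for i in required_items
                                    if i not in package["deliverables"]]
        # Compliance: the contract-specified method was actually used for each result.
        findings["compliance"] = [r for r in package["results"]
                                  if r["method"] != package["contract_method"]]
        # Consistency: a value reported in two places carries the same number.
        findings["consistency"] = [r for r in package["results"]
                                   if r["summary_value"] != r["form_value"]]
        # Correctness: the reported result matches a recalculation from raw data.
        findings["correctness"] = [r for r in package["results"]
                                   if abs(r["reported"] - r["raw"] * r["dilution"]) > 1e-9]
        # An empty list for a criterion means that criterion passed.
        return findings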

Data validation is concerned with the technical usability of the data. The
validator, in an ideal world, is handed a data package (paper or electronic)
that has been verified against the four C's criteria, and assesses the data
on the basis of associated QC, sampling information, analytical performance,
and other relevant information. The validator assesses the impacts of these
factors and assigns data qualifiers to individual data points, analyte
groups, or results for entire samples, depending on the nature and severity
of the affecting factor(s). Qualification can range from suggesting that a
data point is imprecise or biased, to the rejection of a result or group of
results.
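
To make the idea of qualification scope concrete, the following Python sketch
(an illustration built on assumed record fields, severity labels, and
qualifier letters, not the FEMP procedure) attaches a qualifier to a single
result, an analyte group, or an entire sample depending on a QC finding:

    # Illustrative sketch only: attaching a qualifier at the scope described in
    # the text (a single result, an analyte group, or an entire sample).
    def apply_qualifier(results, finding):
        flag = "R" if finding["severity"] == "gross" else "J"   # reject vs. estimate
        for r in results:
            affected = (
                (finding["scope"] == "result" and r["id"] == finding["result_id"]) or
                (finding["scope"] == "analyte_group" and r["group"] == finding["group"]) or
                (finding["scope"] == "sample" and r["sample_id"] == finding["sample_id"])
            )
            if affected:
                r.setdefault("qualifiers", []).append(flag)
        return results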

The question naturally arises, "What is the VALUE of verified and validated
data?" The answer to this question is manifold. First, it must be understood
that all data potentially contain error. Very few results are "pure", that
is, absolutely correct. Just the process of collecting a sample and
attempting to achieve some degree of homogeneity (representativeness)
introduces some uncertainty, and the uncertainties associated with sample
preservation, shipment, and analysis all add to (i.e., "propagate" into) the
uncertainty, or imprecision, of the final analytical result for the sample.
It is the aim of the verification/validation (V&V) process to identify these
uncertainties and give the data user a clear sense of the confidence that can
be placed in the data. It is generally accepted that if samples were
collected, preserved, shipped, and analyzed within the bounds of accepted
protocols, then, barring any unusual occurrences, the resultant data will not
be qualified. These data represent results of the highest level of confidence
within the scope of the protocols followed. So, in summary, data verification
and validation serve to increase the user's level of confidence in a
particular data set; the data are "of known and accepted quality", except
where indicated. The intended use(s) of the data are specified in Project
Specific Plans (PSPs) and in the Data Quality Objectives (DQOs) for the
project. V&V identifies data that are usable for the intended purpose(s) as
outlined in these documents.
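
The notion of "propagation" can be illustrated with a minimal sketch.
Assuming the individual contributions are independent relative standard
uncertainties (so they combine in quadrature), and using purely hypothetical
values:

    import math

    # Minimal illustration of how independent uncertainty contributions propagate;
    # the step names and values are hypothetical example inputs.
    contributions = {
        "sampling/homogeneity":  0.08,
        "preservation/shipment": 0.03,
        "laboratory analysis":   0.10,
    }
    combined = math.sqrt(sum(u ** 2 for u in contributions.values()))
    print(f"combined relative uncertainty ~ {combined:.3f}")   # ~0.13, i.e. about 13%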

Another reason why data should undergo V&V is wrapped up in the term
"defensibility". Analytical data are often used in litigation, and if the
result(s) in question have not been thoroughly assessed, the usability of the
data is seriously questioned. Data that are found to be non-defensible can
make or break a case. It is crucial that all data used to make legal (or
potentially legal) decisions are carefully evaluated in light of all factors
that can affect the result, and this is the V&V process.

The V&V process as envisioned by EPA is meant to assure that reported
concentrations of a particular analyte are indeed indigenous to the sample
and not attributable to laboratory or method contamination. Inadvertent
contamination can also occur during sample collection, shipping, and
preservation.

Another factor to consider is the proper calibration of laboratory
instrumentation prior to sample analysis. Improper or incomplete calibration
will result in incorrect identification and quantitation of analytes. The
analysis of Laboratory Control Samples (LCSs) helps to assure calibration
accuracy.
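
As a simple illustration of such a check, a hypothetical LCS result can be
compared against an assumed 80-120% acceptance window (the window is an
example only, not a method-specific limit):

    # Hypothetical check of a Laboratory Control Sample (LCS) recovery; the
    # 80-120% acceptance window is an assumed example, not a method requirement.
    def lcs_within_limits(measured, true_value, lower=80.0, upper=120.0):
        recovery = 100.0 * measured / true_value
        return recovery, lower <= recovery <= upper

    print(lcs_within_limits(47.5, 50.0))   # (95.0, True)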

Verification can be accomplished manually or via commercially available
verification software. Either way, verification is performed in a similar
manner: assuring that the four C's are evaluated. Many organizations have
standardized checklists to streamline manual verification, ensuring
consistency between data sets. Validation is performed similarly;
standardized protocols exist for assessing the impact of the various factors
discussed above on the data. The EPA's "National Functional Guidelines"
furnish data validators with a widely accepted protocol for evaluating
inorganic and organic data. This assures a high degree of uniformity in the
application of validation criteria, regardless of the entity performing the
process.

In recent years, it has become necessary to perform the V&V process on
radiochemical data, especially within the DOE complex. Validators at the
Fernald site began communicating with other DOE sites in an effort to share
information regarding the various approaches being taken toward radiochemical
validation. As DOE sites began exchanging information, it rapidly became
clear that many sites had very little guidance in this area, for several
reasons. First, no standardized procedures exist for radiochemical analysis;
how can data be validated by a defined set of guidelines when a variety of
methods is being utilized? Second, laboratories performing radiochemical
analyses report their data in a variety of formats; until recently, the
concept of a "data package" was rather foreign to many laboratories, through
no fault of their own. Customers had never requested a standardized set of
data deliverables for the purpose of V&V, so the laboratories did not have to
provide one. At the FEMP, as the RI/FS process gained momentum in the early
1990's, it became clear that validation of radiochemical data was necessary.
Over the past four years, the validation group at the FEMP has expended a
great deal of effort in an attempt to identify verification criteria, and
then validation criteria. Often these criteria were nuclide- or
method-specific (alpha, gamma, proportional counting, LSC, etc.), due to
significant differences in sample preparation and counting between these
methods. In 1993, the DOE formed a complex-wide work group to begin the
arduous process of attempting to define standardized V&V guidance that could
be used across the entire DOE complex, and even be applied to EPA data as
well. This work group has made progress, and a draft guidance document is
nearing completion. Many of the lessons learned at the FEMP are being
incorporated into this document.

Third, there is no consensus regarding the effects of various QC indicators
on the associated data. Various validation entities view these effects
differently and weight their importance differently as well. In organic and
inorganic validation, the EPA has defined the relative importance and
associated effects of QC information on the data. No such national
standardization of the impacts of QC data exists for radiochemical
validation.

Consequently, significant areas of discussion have centered on the evaluation
of calibration, analytical/result uncertainty (TPU), batch-specific QC, the
utilization of numerous detectors, and other factors. Several working
conferences have been convened, and sub-groups have been formed to address
the various issues. It is not the intent of this paper to identify the
resolutions of these issues, in part because resolution has not yet been
finalized on several of them. It is the author's hope that the awareness
level has been raised regarding some of the issues, and that knowledgeable
professionals, experienced in the assessment of data, will be encouraged to
participate in the attempts at standardizing radiochemical data assessment.

The National Functional Guidelines for Organic and Inorganic Validation deal,
for the most part, with the validation of aqueous and soil matrices.2 The
behavior of holding times, spikes, surrogates, internal standards, etc. has
been well studied in these more common matrices. The acceptance ranges of the
QC parameters are based on studies of the QC behavior in hundreds of samples,
and the acceptance limits used by the CLP SOWs are based on a statistical
evaluation of the QC results. However, when dealing with more complex
matrices, such as concrete, paint scrapings, fly ash, and other unique, more
refractory matrices frequently encountered at DOE sites, the QC results do
not reflect the expected behavior of the soils found at most EPA sites. Data
validation professionals at the FEMP have recognized that unique matrices do
not behave in the same way as routine soil or water samples, and we have made
some "adjustments" to some of the EPA guidelines, especially with regard to
spike and lab duplicate performance. This study is still underway, but it can
be said that a somewhat looser set of criteria has been employed, via
matrix-specific variances. Concrete has proved to require these variances
most of all, probably due to the high concentrations (percentage levels) of
mineral elements, as well as the non-homogeneous nature of the samples. QC
behavior with respect to radiochemical analyses has proved to be the most
challenging problem. Because of the nuclide separation/isolation methods,
100% actual recovery of a spiked analyte is nearly impossible to achieve. The
use of tracers helps in the evaluation of method efficiency, but there is
still no standardized protocol for
evaluating QC data from radiochemical analyses.  Again,  it  is the author's
hope that knowledgeable professionals will be of assistance by providing
input to the eventual  solutions to these challenges.

SUMMARY/CONCLUSIONS

Data Verification and Validation are a necessary part of the sample
collection, analysis, and data evaluation process. The issue of legal
defensibility makes the careful assessment of data most important. Any data
that are generated could conceivably be called into question; without V&V, no
level of confidence can be associated with the data.

There are accepted guidelines for the V&V of inorganic, organic, and
conventional data resulting from the analyses of the more common matrices of
soils and aqueous media. These processes can be performed manually or via
software systems. V&V for non-routine matrices, and of radiochemical data in
general, enjoys no such standardization; differences in the scope and content
of radiochemical data review do not allow for a consistent evaluation of the
level of confidence for radiochemical data, which is necessary to achieve the
goals of various projects, especially those related to environmental cleanup
and remediation. Making the environmental community aware of these issues is
a first, but important, step in coming to grips with them, and it is
sincerely hoped that careful thought will be given to the issues raised in
this paper, and that helpful dialogue will follow.
                              REFERENCES

1. Draft Document: Radiochemistry Data Verification and Validation, January
6, 1995.

2. National Functional Guidelines for the Validation of Organic Data;
National Functional Guidelines for the Validation of Inorganic Data, 1994.
                     DEFINITIONS

1. Data Verification

VERIFICATION: A process of evaluation for Completeness, Correctness,
Consistency, and Compliance (the 4 C's) of a set of facts against a standard
or a contract. DATA VERIFICATION is defined as a systematic process
(performed by either the data generator, or an entity external to the data
generator) of determining the 4 C's of a data deliverable.

2. Data Validation

A technically based, analyte- and sample-specific process that extends beyond
method or contractual compliance (verification) and provides a level of
confidence that an analyte is present or absent and, if present, the
associated variability. Data validation is a systematic process, performed
external to the data generator, which applies a defined set of [contractual
or] performance-based criteria to a body of data. Data Validation occurs
prior to drawing a conclusion from a body of data.
                            The 4 C's
1. Completeness: The presence of all the necessary technical information
that is needed to verify and validate the data.

2. Consistency: The same information, located in multiple sections of a data
package, is transcribed/downloaded correctly at each of the locations.

3. Correctness: The assurance that results are calculated correctly.

4. Compliance: The assurance that all the information required by the
governing analytical SOWs and client contracts is present in the data
deliverable.

  A Snapshot of items Validation evaluates in determining Data Usability

  • Sample Collection Process (physical sampling, preservation...)
  • Holding times: from sampling to analysis, and from lab receipt to
  analysis
  • Analytical Quality Control Analyses
       • Blanks
       • Matrix Spikes
       • Laboratory Duplicates
       • Organic Surrogate Spikes
       • Lab Control Samples (LCSs)
       • Interference checks
       • Calibration Stability
       • Sample-specific issues (dilutions, re-analyses...)
       • Radiochemical tracer yields
       • Radiochemical Uncertainties (TPU, Counting errors)
                  Issues that Complicate Validation

    1. Effects of difficult matrices (high-organic-content materials,
    high-mineral-content samples, and samples with significant radioactivity
    [especially Th]).

    2. Consistent application of radiochemical QC parameters:

        * Confusion regarding the meaning of QC parameters
        * Confusion regarding when QC results are non-compliant
        * Confusion regarding how to apply non-compliant QC (extent of
        bias; are data estimated or unusable?)

    3. Where verification leaves off, and validation begins.
                       Suggested Path Forward

1.   Recognize that all data requires verification  and validation  at some
    level.

2.   Adopt a consistent approach to verification and validation, such as the
    EPA  National Functional Guidelines for Inorganic and Organic data.

3.   Recognize that there  are complicating  issues  with  validation of
    environmental data, and that resolution is needed.

4.   Participate in discussions regarding the verification and validation of
    radiochemical data.  (The workgroup is headed by Jeff Paar, of Martin
    Marietta,  Oak Ridge, TN)

-------
82
QUALITY ASSURANCE/QUALITY CONTROL AT A POTW

R.  Forman,  Environmental  Standards, Inc.,  1140 Valley  Forge  Road,  Valley Forge,
Pennsylvania 19482.

ABSTRACT

An essential component of any large field investigation is a working quality assurance/quality
control (QA/QC) program. A group of citizens living in the community surrounding a Publicly
Owned Treatment Works (POTW) facility were concerned that the operation of the POTW
facility potentially contributed to  the adverse health effects  they  experienced.   Since their
primary concern was the air emissions from the various plant operations, they requested that
their county department of public works conduct an air sampling study of the POTW facility.
Upon the citizens' request, the county funded a $1.5 million study to develop a technically sound
and legally defensible investigative program to determine whether the operation of the POTW
facility had the potential to contribute to the adverse health effects of the local residents.

Many  aspects of the laboratory and field quality assurance/quality control activities conducted
for this investigation have been custom designed.   An ambient air monitoring  network was
designed to allow for the simultaneous collection of air samples from ambient air  monitoring
stations strategically located in the neighborhoods surrounding the POTW facility. Samples were
collected on a routine basis as well as during periods in which there were high  incidences of
odor complaints.  The target compound list  for this project was based on compounds known to
cause odors at a POTW facility and compounds that were on the Clean Air Act list of volatile
organic compounds.   Samples were  analyzed  for  volatile  organic  compounds  by U.  S.
Environmental Protection Agency (EPA) Method TO-14,  for sulfur compounds by a modified
U.S. EPA Method 16, for chlorine by NIOSH  Method  6011, and for ammonia  by NIOSH
Method S347.  Because  these methods do not require extensive  laboratory QA/QC,  these
methods were modified for this investigation to include additional laboratory QA/QC samples
such as laboratory duplicate samples, laboratory control samples, and matrix spike/matrix spike
duplicate samples. In addition, since many laboratories do not routinely offer  complete and
comprehensive data package deliverables, specific data package deliverables were developed to
substantiate the reported analytical results  and additional QA/QC.  As one of the additional
QA/QC measures developed for this study, split  samples were collected and analyzed by both
the  project laboratories  and other laboratories.   In addition, blind performance evaluation
samples were submitted to the project  laboratories.  This  paper will discuss the  details of the
design of the field study, the  modified analytical methods, the data package deliverables, and
the results from the split samples and performance evaluation samples.  In revealing the details
of  the QA/QC measures  employed in this  investigation, this paper will  demonstrate  the
effectiveness and utility of a well-designed  QA/QC program.

-------
                                                                                                       83
           CONDUCTING A PERFORMANCE EVALUATION STUDY
                       IT'S NOT JUST ANALYTICAL RESULTS
Lester J. Dupes, CPC, Quality Assurance Chemist, Chemistry Quality Assurance Department,
Environmental  Standards, Inc., 1140 Valley Forge Road,  P.O. Box 911, Valley Forge, Pennsylvania
19482-0911; Gregory M. Rose, Supervisor - Site Remediation, Chrysler Corporation, 2301 Featherstone
Road, Auburn Hills, Michigan 48326-2808

Abstract

Conducting an effective Performance Evaluation (PE) study can provide more than an indication of the
laboratory's analytical expertise. Typically, a PE study is used to determine the laboratory's accuracy in
identifying and quantifying the compounds contained in the PE sample as compared to known identities
and  concentrations.  An effective PE study  can also  include evaluating the client,  technical and
administrative  services, sample login  and receipt, data packaging,  method compliance,  and quality
assurance.

This presentation will  focus on  an initial Performance  Evaluation study which  involved thirteen
laboratories.  A review of the initial setup procedures with the Performance Evaluation sample supplier,
contacting the laboratories, answering questions from laboratories, and the non-analytical and analytical
results obtained from the study will be presented.

Introduction

The PE study involved 13 laboratories selected for participation in a single-blind PE study using whole-
volume samples.  Since all laboratories were informed of the study date, no laboratory was placed at an
advantage or disadvantage based on sample workload conditions or potential of subcontracting the PE
samples to another laboratory facility.

The parameters chosen for analysis, the required analytical methods, the
volume and bottle types received, and the preservation of the samples are
summarized below.

Parameters      Method                                 Volume/Bottle            Preservation
Volatiles       Method 8240A/8260A                     3 x 40 mL / VOA vial     HCl, pH <2
Semivolatiles   Method 8270A                           2 x 1 L / Amber glass    Cool to 4°C
PCBs            Method 8080                            2 x 1 L / Amber glass    Cool to 4°C
Trace Metals    Method 6010A/7470/7060/7421/7740/7841  1 L / HDPE               HNO3, pH <2
Cyanide         Method 9010                            1 L / HDPE               NaOH, pH >12

Once the final group of parameters was identified, the individual analytes and required concentrations
were determined with input from several PE sample manufacturers.   The following questions were
answered prior to the final design of the PE study.

•   Should off-the-shelf standards or custom standards be used in the PE study?
•   What analyte concentrations are required?
•   Should ampuled-PE samples or whole volume samples be used?
In this study, our goal was to include a wide range of analytes from each group of parameters; therefore,
off-the-shelf standard lots (except for the PCB PE sample) were used.  The volatile organics included
chlorinated alkanes, alkenes, and aromatics.  The semivolatile PE sample contained substituted phenols,
base/neutrals, phthalates, PAHs, and several pesticides that were included in the PE sample lot but are
not typically analyzed by the method chosen for semivolatile analysis.  The PCB sample was custom-made
to include both Aroclor 1016 and Aroclor 1254.  The inorganic PE samples contained typical Target
Analyte List parameters.  Performance evaluation sample lots containing concentrations at the low-to-
midrange of typical instrument calibration ranges were requested for the organic parameters.  The PE
samples were prepared as whole-volume samples by a reputable vendor, which eliminated differences in
PE sample dilution techniques between laboratories.  The PE samples for all laboratories were prepared
from the same lot number to further reduce variance and permit result comparison between laboratories.
The PE samples were preserved and shipped on ice under Chain-of-Custody procedures for delivery to 11
laboratories in October of 1994.  A second set of freshly prepared PE samples was sent in November of
1994 to two laboratories that were added to the PE study.

Initial Contact with the Participating Laboratories

Initial contact with the laboratories is an important first step in conducting an effective PE study and in
evaluating the customer service provided by the laboratory.  The information that is relayed to the
laboratories must be clear, concise, and consistent.  One individual should make all contacts with the
laboratories through a letter that introduces and explains the PE study.  The following questions
should be answered:

•   What parameters will be analyzed?
•   What method will be used?
•   What list of compounds?
•   Will the PE samples be ampuled or whole-volume?
•   If the samples are whole-volume, when does the holding time commence?
•   What preservation requirements will the samples be arriving under?
•   When will the samples arrive and from which supplier?
•   When is the due date for results?
•   What types of data deliverables are required?
•   Who will be the single-contact for questions and submission of data?

In our initial contact letter, we decided not to provide answers to several of the questions posed above.
In this way we could evaluate a laboratory's customer-service response to missing or incomplete
information, as would occur with a regular sample submission.  The facilitator of the PE study must keep an
accurate phone log of the questions asked and the answers provided for later use in the final evaluation.  In
our letter we did not provide information on the list of compounds to be reported or on holding times.  This
omission resulted in many, but not all, of the laboratories calling to ask for the list of compounds to be
analyzed.  Since whole-volume PE samples were used in this study in an attempt to mimic actual field
samples, holding times began on the date of PE sample preparation.  The holding times listed in each
analytical method were used for evaluation purposes.  Again, several labs did not inquire about holding
times; however, most correctly assumed that method holding times were to be followed.  Additional
questions or problems encountered by the laboratories also provide an indication of the communications
systems and corrective actions employed by the laboratories on a daily basis.
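
A holding-time check of this kind reduces to simple date arithmetic; the
sketch below uses example dates and an assumed 14-day limit purely for
illustration:

    from datetime import date

    # Illustrative holding-time check: for whole-volume PE samples the clock is
    # assumed to start at sample preparation, and the day limit comes from the
    # analytical method.  The dates and the 14-day limit are example values only.
    def holding_time_met(prepared, analyzed, limit_days):
        return (analyzed - prepared).days <= limit_days

    print(holding_time_met(date(1994, 10, 3), date(1994, 10, 15), 14))   # True
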
Review of Non-Analytical Factors

As stated above, the phone log provided the PE study facilitator with a wide range of good information on
each of the laboratories participating in the study. Below are some examples of the information collected
during the PE study.

•   Many laboratories called to ask which list of compounds was requested. However, upon final review
    of the reported data, several labs had not analyzed the method lists or provided additional compounds
    such as pesticides, when only PCBs for Method 8080 had been requested.

•   Several laboratories called to indicate that a specified method was not performed; however, a slightly
    different method or a modification to the method was used by the laboratory.  These types of
    communications indicate good review and communication by customer service as well as by the
    analytical departments.  Several laboratories provided different methods than those requested, without
    approval.  For example, Method 524.2 data with a 25 mL purge were provided instead of the specified
    Method 8240/8260, which specifies a 5 mL purge volume.  This resulted in much lower reporting
    limits for the analyte list.  The lower reporting limits may create additional problems for the project
    staff.

•   Several laboratories called to inform the facilitator that  the extraction holding  times  for the
    semivolatiles and/or PCBs had been exceeded and requested information as  to the recommended
    course of action. This again indicates very good communication when problems are encountered and
    corrective action is necessary.  In this example, the analysts had assumed that the PE samples were
    contained in ampules, which do not  have specified holding times. However, the initial letter
    submitted to the laboratory indicated that the samples would arrive as whole volumes and not as
    ampules (for which holding times are typically started when the ampules are opened).
The final results reported can also provide information beyond the compound results printed on the pages.
Depending on the type of data deliverables requested, a review of data deliverables from "results only" to
full CLP-style deliverables can be completed.  Full data validation by highly trained chemists,
knowledgeable in the analytical method requirements and data deliverables, can provide an indication of
the total quality of the data.  The data can be reviewed for compliance with method requirements, sample
custody, holding times, blank contamination, surrogate recoveries, matrix spike/matrix spike duplicate
recoveries, laboratory control sample recoveries, initial and continuing calibrations, quantitation of
results, proper corrective actions, and reporting errors.  Even "results only" packages can provide an
indication of the type of deliverables a laboratory routinely provides.  Data packages that contain excess
information not necessary for the client's needs can be reduced, saving money for the client.  Conversely,
data packages found deficient can be supplemented to meet project requirements prior to project
initiation.  "Results only" packages can also indicate the laboratory's quantitation limits, whether the lab
reports values below the quantitation limit, and whether the package is "user-friendly".  Chain-of-Custody
Records and laboratory sample log-in and receipt forms can provide an indication of the quality and
accuracy of sample receipt procedures.  Examples of these procedures include whether or not the
temperatures of sample coolers are taken and recorded, whether the pH measurements of preserved samples
are reported, whether signatures and the date and time of sample receipt are completed on the
Chain-of-Custody Records, whether the laboratory uses internal Chain-of-Custody documentation, and
whether any warranted corrective action procedures are documented.  Several items are presented below as
examples of "non-analytical" factors found during this study:

•   One laboratory received the PE samples at 11 °C and did not report the problem to the PE facilitator.
•   One laboratory provided full CLP-style data packages instead of the "results-only" package requested.

•   One laboratory provided multiple copies of the same data delivered on different days, which could
    cause additional confusion when data is reviewed by the data user.

Methodology and Scoring

The laboratory results were compared to the certified performance evaluation sample results and then
evaluated against the 95 percent confidence limits provided by the PE sample supplier.  A Microsoft Excel
spreadsheet was prepared to determine the percent recovery of each analyte, compliance with the 95
percent confidence limits, individual analyte scores, and a final laboratory score.  The percent recovery is
determined by dividing the laboratory-reported result by the true value reported by the PE sample supplier
and multiplying by 100; the percentage is reported on the spreadsheet.  Compliance was determined by
comparing the laboratory-reported results to the lower and upper confidence limits.  In addition,
average percent recoveries were computed for the following parameter subsets for each laboratory and
charted for comparison purposes: volatiles, acid extractables, base/neutral extractables, polychlorinated
biphenyls (PCBs), metals, and cyanide.  Four analytes (heptachlor, gamma-BHC, boron, and molybdenum)
were not scored, since these compounds/elements are not typically included as normal parameters analyzed
by the analytical methods requested.

The laboratory results were scored on an individual analyte basis using the following  scoring criteria:

                Recovery Criteria	Points Awarded

                90-110 percent                                  10 points/analyte
                80-120 percent                                  8 points/analyte
                70-130 percent                                  6 points/analyte
                60-140 percent                                  4 points/analyte
                50-150 percent                                  2 points/analyte
                <50 or >150 percent                              0 points/analyte
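
A minimal sketch of the recovery and scoring calculation described above is
shown below; the analyte names and concentrations are hypothetical, the tier
boundaries are treated as inclusive, and the logic paraphrases the spreadsheet
rather than reproducing it:

    # Sketch of the per-analyte calculation described in the text: percent recovery
    # is the reported result divided by the supplier's certified value times 100,
    # and points are awarded in the tiers listed in the table above.
    def percent_recovery(reported, true_value):
        return 100.0 * reported / true_value

    def analyte_score(recovery):
        deviation = abs(recovery - 100.0)
        if deviation <= 10: return 10
        if deviation <= 20: return 8
        if deviation <= 30: return 6
        if deviation <= 40: return 4
        if deviation <= 50: return 2
        return 0

    # Hypothetical reported vs. certified concentrations (same units).
    results = {"benzene": (18.5, 20.0), "phenol": (14.0, 20.0)}
    total = 0
    for analyte, (reported, certified) in results.items():
        rec = percent_recovery(reported, certified)
        pts = analyte_score(rec)
        total += pts
        print(f"{analyte}: {rec:.0f}% recovery -> {pts} points")
    print("analytical score:", total)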

In addition, the following negative and/or positive scores were also assessed, as necessary, at the
reviewer's discretion.  Major laboratory deficiencies, such as incorrect identification or missed holding
times, were assessed a negative 10-point score for each infraction, which was deducted from the final
analytical score.  Minor laboratory deficiencies, such as late or multiple submission of results, were
assessed a negative 5-point score for each infraction.  Positive scores for "user-friendly" data, verification
of sample receipt, and verification of data receipt were awarded as additional points.  Items assessed included:

•   Incorrect identification of Aroclor 1016
•   Missed semivolatile holding time
•   Late deliverables/incomplete deliverables
•   Wrong deliverables format/multiple submissions of results
•   Missing analyte or additional analyte not present in mix detected at levels above quantitation limit
•   Temperature excursion not reported to Environmental Standards, Inc.
•   Identification and/or quantitation of pesticides as Tentatively Identified Compounds in Base/Neutral
    Extract
•   Verification of sample receipt
•   Verification of data receipt after submission to facilitator
Analytical Results

The laboratories, in general, scored well for the volatile organic compounds, with average recoveries
ranging from 66 to 139 percent.  The compound acetone was routinely detected but was not reported by
the PE sample supplier at a certified amount.  The presence of acetone is probably due to laboratory
contamination or to its use as a standard solvent in the PE sample.  Several laboratories identified the
dichlorobenzene isomers contained in the volatile PE sample as Tentatively Identified Compounds.  One
laboratory did not identify the dichlorobenzenes.  One laboratory did not identify 1,2-dichloroethane in
its PE sample.

The acid extractable compounds exhibited average recoveries of 34 to 72 percent. It is difficult to obtain
good recoveries for these compounds as exhibited by the 95 percent confidence limits reported by the
supplier. In addition, acid surrogates typically used for semivolatile analysis also exhibit wide recovery
acceptance limits.  Several laboratories  did not identify 2-nitrophenol, or 2,4-dimethylphenol  in their PE
samples although the concentrations of these compounds contained in the PE samples were above the
laboratories' reporting limits.  One laboratory also reported the presence of 4-methylphenol which was not
present in the PE sample, according to the supplier.

The base/neutral extractable compounds exhibited average recoveries of 35 to 71 percent, which are
similar to the average recoveries for the acid extractables.  The compound hexachlorobutadiene and the
phthalate esters exhibited low recoveries for many of the participating laboratories.  In addition,
several laboratories reported low recoveries for the polynuclear aromatic compounds.  One laboratory
reported a positive result for anthracene, which was not present in the PE sample according to the
supplier.

The polychlorinated biphenyl PE samples contained both Aroclor 1016 and Aroclor 1254. The decision to
use two Aroclors in a custom PE standard was made to determine the laboratories' ability to accurately
identify and quantitate multiple Aroclors. The decision to  use these two Aroclors was based on non-
overlap of the Aroclor peaks.  If the sample had contained  Aroclors that had many common peaks (e.g.,
Aroclors 1248 and 1254), the results may have caused problems during the evaluation process.   All
laboratories correctly identified Aroclor 1254, with recoveries ranging from 33 to 82 percent. However,
only five of the thirteen laboratories correctly identified Aroclor 1016.  The remaining laboratories
incorrectly identified the multi-peak pattern as Aroclor 1242, which is similar to the Aroclor 1016 pattern.

The laboratories scored very well for the metals, with average recoveries ranging from 84 to 162 percent.
Cyanide also exhibited good recoveries, ranging from 81 to 100 percent.  Several laboratories had low
recoveries for aluminum, iron, and mercury. One laboratory did not report results for copper.

The laboratory-reported results were outside the 95 percent confidence limits for five to 15 compounds of
the 50 compounds tested.  The average was approximately eight compounds, with the majority of
laboratories incorrectly identifying Aroclor 1016 in the PCB PE sample.  Many of the laboratories also
reported low recoveries for the acid extractables, base/neutrals, and mercury.

Graphs showing the average recovery by fraction for each laboratory are presented in Attachment 1.
Conclusion

The additional work necessary to conduct an effective Performance Evaluation study can provide a wide
range of analytical, as well  as non-analytical, data for evaluation of the  laboratories in our study.
Information obtained from evaluating client services, sample log-in and  receipt,  and data packaging
departments can help to determine the level of quality,  responsiveness, and completeness that may be
expected from the laboratory on an actual project. However, it should be realized that this "snap-shot" in
time may not fully demonstrate the laboratories' capabilities  on an  actual project.  The  information
obtained from the Performance Evaluation study is best used in conjunction with information obtained
through laboratory audits, which also provide insight into the analytical and, equally important, the
non-analytical procedures practiced at the laboratory.
                               ATTACHMENT 1

[Graphs: average recovery (percent) by participating laboratory (1-13) for
each fraction. Panel titles: Average Recoveries of Volatile Organic
Compounds; Average Recoveries for Acid Extractable Compounds; Average
Recoveries of Base Neutral Extractable Compounds; Average Recoveries for
PCBs; Average Recoveries for Inorganics; Recovery for Cyanide.]

-------
                                                                                         84
      FATE OR EFFECT OF DATA PRESENTED WITH QUALIFIER AND
                  LABORATORY ESTABLISHED QC LIMITS

A. Ilias, J. Stuart, A. Hansen, and G. Medina, U.S. Army Corps of Engineers, North
Pacific Division Laboratory, 1491 NW Graham Ave, Troutdale, Oregon, 97060
ABSTRACT

    Data presented with qualifiers such as "B" in organics and inorganics, "J" in organics,
and "E" in both organics and inorganics often end up being accepted only as
estimates. Estimated data do not always serve the project's needs. Data reported below
background due to so-called matrix effects and qualified as "J", meaning found below the
practical quantitation limit but detected above the instrument or method detection limit,
are ultimately used as estimates. Non-detect data sometimes are used as
a tool for data censoring.  Some data are presented with the qualifiers "M" or "EMPC",
which indicate an imperfect spectral match or an estimated maximum possible
concentration, respectively. Data flagged with these symbols often lead to false positive
data reporting.  These types of data are often found in low-detection-limit studies
such as groundwater, drinking water, or dioxin/furan analyses.  False positive reporting
of data is also encountered with "B"-flagged data.  Some of these data are also reported as
estimates based on the data reviewer's opinion.  False positive and false negative data
reporting is expensive and delays and adversely hampers the project. There should be
controls on these types of data reporting.  The term "data rejection" should be employed,
but is not, due largely to loosely regulated methods and the lack of clearly defined QC
limits. EPA SW-846 methods, used under the RCRA program, use laboratory-established
QC limits, and no guidance has been given for "cut-off" levels.  Due to these deficiencies,
data precision, accuracy, and sensitivity subsequently suffer.  In general, data
assessments are affected due, in part, to the ambiguous use of qualifiers and poorly
defined or regulated laboratory-established QC limits.

INTRODUCTION

    Data generated in hazardous, toxic, and radiological waste (HTRW) studies are viewed
differently by each of the three main technical staffs involved with data production and
review: 1) the analytical chemist (the principal data generator), 2) the data evaluator/validator,
and 3) the data user (often a regulatory authority such as the EPA). Data generated by
the laboratory must meet method-required internal quality control (QC) criteria, including
the EPA Contract Laboratory Program (CLP) and SW-846 laboratory-established (LE) QC
limits (1,2).  The laboratory normally uses qualifiers (flags) when certain internal QC
results are outside of the criteria.  Data evaluators/validators add another function to
the review process, namely "data quality objective" (DQO) requirements, during quality
assurance report (QAR) preparation. At this stage of review, the data may be labeled
with additional qualifiers. During QAR preparation, data without qualifiers are
segregated for use. Data with qualifiers are treated with caution, with the concurrence of
the regulatory authority whenever possible. This paper exposes some of the common
problems in reporting data with qualifiers and the implications of these problems for site
evaluation.  It would be difficult and cumbersome to discuss all of the EPA functional
guideline qualifiers (1,2); therefore, examples of some common qualifiers, such as "B", "E",
and "J", are discussed.  These qualifiers are added to the data because of questionable
precision, accuracy, or sensitivity, and to indicate false positive and false negative reporting.

DATA COLLECTION AND EVALUATION

    Data from various commercial laboratories were compiled and evaluated.  The data
presented here are for samples that were split or collected sequentially and analyzed by
two independent laboratories.  This was done to demonstrate inter- and intra-laboratory
data comparability and data reproducibility, and to further illustrate the implications of
qualified data for data usability.  Split samples were analyzed for volatile organics by
EPA methods 8260 or 8020, semi-volatile organics by EPA method 8270,
dioxins/furans by EPA methods 8290/1613, and radiological parameters by EPA methods
9310/9320.  Some of the data are presented with laboratory method blank contaminants,
elevated detection limits, surrogate, matrix spike, or relative percent difference (RPD)
failures, or holding time expirations.  Gasoline range organics (GRO) and diesel range
organics (DRO) data, determined by EPA modified 8015 (3) and EPA modified 8100 (4),
respectively, were chosen to demonstrate the limits of performance-based methods where
the internal QC criteria are inadequately defined.

RESULTS AND DISCUSSION

    Positive results presented in Tables 1 and 2 are qualified with either the "B" or the "J"
flag, except for the QA laboratory's results for phenanthrene and pyrene.  The "B" and "J"
flags attached to the data indicate that the respective analytes were detected in the associated
method blank and detected below the quantitation limit, respectively.  The common
laboratory contaminants, such as the water-soluble volatiles or phthalates reported in
Tables 1 and 2, are not considered significant if the sample concentrations are less than
10 times the concentration in the associated method blank.  It is noted that the project
acetone datum is greater than 10 times the blank concentration (Table 1) and is reported
with a "B" qualifier.  The presence of this analyte was supported by the other laboratory,
where the datum was qualified with "J" and laboratory blank contamination was not
encountered. In both instances, the data would be considered estimates. Most likely, the
detected acetone results of both laboratories were due either to laboratory cross-
contamination or to some sort of laboratory artifact.
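
As an illustration of the 10-times-the-blank convention described above
(using the acetone concentrations from Table 1):

    # Sketch of the blank-evaluation convention described in the text for common
    # laboratory contaminants: a "B"-flagged result is not considered significant
    # unless the sample concentration exceeds 10 times the level in the associated
    # method blank.  The values mirror the acetone example from Table 1 (ppb).
    def exceeds_blank_rule(sample_conc, blank_conc, factor=10):
        return sample_conc > factor * blank_conc

    print(exceeds_blank_rule(130, 10))   # True: 130 ppb > 10 x 10 ppb
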
    The project laboratory reported all semi-volatile organics (BNA) data in Table 2 as
either not detected (ND) or with a "J" flag.  The BNA project data are considered low
estimates based on two of six surrogate recoveries and six (acidic) of 22 matrix spike
recoveries being below EPA/LE QC limits.  Instead of accepting the project data as low
estimates, the data should be rejected.  The QA laboratory reported four of six positive
BNA results with a "J" flag.  The QA laboratory's internal QC was acceptable, and it was
noted to have been performed using the sample.  The dilemma is that all the qualified
data presented in Table 2 have been considered estimates while in reality the evidence
supports the rejection of the project data. There should be some mechanism to reject data
in lieu of accepting data with qualifiers as estimates.
    The QA data in Table 3 are reported as estimates due to holding time expiration, and
benzene was found at a higher level than in the sequential blind duplicate data.
In general, the clean-up level for benzene in soil is 50 ppb; the QA datum (68 ppb)
would trigger clean-up, while the project blind duplicate data indicate that no action needs to
be taken. It was noted that the internal QC of both laboratories was within EPA or
LE QC limits.
    The radiological data in Table 4 are questionable and are not suitable for
decision making. Gross alpha in project groundwater sample -09WA was detected
close to the action level of 5 pCi/L, but was not detected in the blind duplicate sample
and was detected at a lower level in the QA (external laboratory) sample.  The low levels of
gross alpha and radium-226/228 reported by the laboratories are probably due either to
background noise or to some sort of laboratory artifact. In all probability, these data should
be attributed to false positive reporting. It was noted that the internal QC data of both
laboratories were acceptable per method requirements.
    The majority of the dioxin/furan data in Table 5 are reported with the qualifier
"estimated maximum possible concentration" (EMPC).  Low levels (parts per
quadrillion) of dioxins/furans are reported as positive hits, but in reality are false positive
data.  A few analytes are reported with two qualifiers, B and EMPC, indicating that the
respective analytes were also detected in the associated method blanks.  The project blind
duplicate and QA data could not be compared, as the QA laboratory accidentally spiked
the sample with targeted analytes.  Despite this, about one half of the results for the QA
sample are flagged with the qualifiers EMPC, or EMPC and B, due to an imperfect
spectral match and/or to laboratory cross-contamination. Dioxin/furan analyses, which
utilize high-performance GC/MS methodologies, are very expensive, often as high as
$3,000 per sample. And after paying the high cost of analysis, the data reported, in some
cases, are not usable due either to laboratory cross-contamination or to artifacts. Data are
reported per SW-846 under the RCRA program guidelines due, in part, to laboratory-
established QC requirements where the QC limits are not very well defined. In projects
involving risk assessment, it is normal for false positive dioxin/furan data to be used with
qualifications, in lieu of rejection, for site evaluation because of DQO requirements.
    The data presented in Tables 6a and 6b were generated by State of Alaska Department of
Environmental Conservation (ADEC) methods for GRO and DRO, i.e., modified
EPA methods 8015 and 8100, respectively.  Internal QC requirements and limits of data
acceptability are not well defined except for surrogate and laboratory control (LC) recoveries
and RPD results.  The GRO data reported by the project and QA laboratories do not agree
due, in part, to the fuel quantification approach used by the laboratories and to the non-identical
samples submitted.  Some of the early-eluting hydrocarbons of the DRO eluted in the GRO
range and were quantitated as GRO.  The clean-up action level for GRO in Alaska is
50 ppm, and the falsely elevated GRO result reported by the QA laboratory (49 ppm) is on
the borderline and puts the issue of clean-up into question. If the presence of GRO in the
soil is confirmed, additional analyses such as aromatic volatile organics (AVO) and
halogenated volatile organics (HVO) may be required to verify the need for clean-up.  In
general, the false positive reporting of GRO results by the laboratories is due to the use
of loosely defined performance-based methods.  In this particular case, false positive
GRO reporting hampered the progress of the project.  Costs associated with this anomaly
could be as high as hundreds of thousands of dollars if decisions are based solely on these
data.

CONCLUSIONS

    There is a need for a mechanism to reject data instead of using qualified, but
questionable, data.  The internal QC requirements of EPA SW-846, where LE QC limits
and/or method-required QC limits are used to evaluate the data, are not well defined.  The
National Functional Guidelines (2) used in the EPA CLP offer some limited data
rejection guidelines, but the data eventually can be used with qualifications.  The project
BNA data in Table 2 should have been rejected due to internal QC failure, but the data were
reported as estimates. Often, not-detected (false negative) data are reported as estimates
due to internal QC failure, as seen in one of the blind duplicate results in Table 2. Possible
false positive data, such as those presented in Tables 4 and 5, are expensive and hamper the
progress of the project.  Data could not be rejected due, in part, to loosely defined
precision, accuracy, and sensitivity QC criteria.  Data evaluators hired to validate data are
often as expensive as the cost of the analyses.  Frequently, after evaluation, the data are
reported with qualifiers.  Data rejected based on EPA functional guidelines are most of
the time reported as estimates and are finally used in decision making as qualified data.
    The false positive GRO data reported in Table 6 may have hampered the progress of the
project due to loosely regulated methods.  Based on the DRO data, further clean-up may
have been required, but the additional analyses needed to substantiate the levels of GRO
found would not have been required. Use of modified, performance-based EPA methods with
loosely defined internal QC requirements should be discouraged.  It is further
recommended that total petroleum hydrocarbon methods for fuels (GRO and DRO),
using gas chromatography with stringent internal QC requirements, be developed
to avoid unexpected additional costs during the progress of a project.

Acknowledgments:  The authors are grateful to Mr. Timothy J. Seeman, Director, U.S.
Army Corps of Engineers North Pacific Division Laboratory, for his support and
encouragement.  The authors acknowledge the efforts of Ms. Elizabeth Trent and Mr.
Mark Francisco in the development of the camera-ready manuscript.

REFERENCES

1.  EPA SW-846, Final Update I, July 1992.
2.  EPA  CLP Statement of  Work  (No. OLMO1  and  OLMO2.1),  Draft National
    Functional Guidelines for Data Review, Dec. 1990 and June 1991.
3.  AK101  (modified EPA 8015),  Gasoline  Range Organics, Alaska  Department of
    Environmental Conservation, Alaska, 1992.
4.  AK102 (modified  EPA  8100), Diesel  Range  Organics,  Alaska  Department of
    Environmental Conservation, Alaska, 1992.
                                            TABLE 1

                      COMPARISON OF VOLATILE ORGANICS (EPA 8260) DATA

Analytes Detected      Project Lab 213SL   Detection Limits   QA Lab 201SL   Detection Limits
Acetone                130 B               14                 24 J           130
2-Butanone             13 J                14                 ND             130
Methylene Chloride     18 B                6.8                ND             6

Percent Solids         73.2                                   78.0

Units = µg/Kg (ppb)
J = Estimated concentration
B = Found in method blank [acetone at 10 ppb, 2-butanone at 2 ppb, and methylene chloride at 8 ppb]
ND = Not detected

SUMMARY:  The project and QA data agree within a factor of five of each other or their detection limits for all
targeted volatiles and are comparable, except for the project acetone result, which is affected by laboratory
cross-contamination.  All three detected analytes are common laboratory contaminants and were detected within a
factor of ten of the levels found in the associated method blank, except for acetone.  The acetone datum was
reported with a qualifier as if it were attributable to method blank contamination.  The presence of acetone is
also supported by the QA laboratory's data, where no method blank contamination was encountered.
                                            TABLE 2

                  COMPARISON OF SEMI-VOLATILE ORGANICS (EPA 8270) DATA

                                      Project Lab              Detection      QA Lab     Detection
  Analytes Detected              H001SL         H007SL        Limits       H008SL        Limits

  Phenanthrene                     ND             49 J       430/450         700           600
  Anthracene                      ND            ND         430/450         200 J         600
  Fluoranthene                     ND             88 J       430/450         600 J         700
  Pyrene                          ND             92 J       430/450         1300          600
  Benzo(a)anthracene               ND            ND         430/450         300 J         500
  Chrysene                        ND             64 J       430/450         400 J         700
  Di-n-butylphthalate               80 J,B        ND         430/450         ND           900
  bis(2-Ethylhexyl)phthalate         100 J            80 J       430/450         ND           1000

  Percent Solids                    76             74                          79

Units = µg/Kg (ppb)
J = Estimated concentration
B = Found in method blank [di-n-butylphthalate at 40 ppb]

SUMMARY: The project blind duplicate and QA data agree within a factor of four of each other or their
detection limits for all targeted analytes and are comparable.  Data comparisons at or below detection limits are
not considered significant at these levels of detection.
                                                 617

-------
                                            TABLE 3

               COMPARISON OF AROMATIC VOLATILE ORGANICS (EPA 8020) DATA

Analytes Detected            Project Lab          Detection      QA Lab      Detection
                           030SL      031SL        Limits        031SL*       Limits
Benzene                      ND         ND           7/72           68           37
Toluene                      13         ND           7/72           ND           37
Ethylbenzene                 ND         ND           7/72           ND           37
Total Xylenes                74        440           7/72          110           37

Percent Solids              68.5       69.9                        67.3

Units = µg/Kg (ppb)
ND = Not detected
* = Expired holding time

SUMMARY:  The project blind duplicate and QA data agree within a factor of five with each other or their
detection limits except for the project blind duplicate data of total xylenes and the QA data of benzene.  Since
both laboratories had acceptable internal QC data, the discrepancies could not be analytically resolved except for
the fact that the QA sample was analyzed past the recommended maximum holding time of 14 days.
                                           TABLE 4

              COMPARISON OF RADIOLOGICAL PARAMETERS (EPA 9310/9320) DATA
Analytes Detected           Project Lab           Detection       QA Lab        Detection
                          09WA        11WA         Limits          10WA          Limits
Gross Alpha              4 ± 2         ND            2          0.90 ± 0.58       0.05
Radium-226             1.9 ± 0.6       ND            0.6        1.94 ± 0.39       0.48
Radium-228                ND           ND            1          0.59 ± 0.41       0.54
Units = pCi/L
ND = Not detected

SUMMARY:  The project blind duplicate and QA data agree within a factor of three and are comparable.
                                                   618

-------
                                            TABLE 5

          COMPARISON OF POLYCHLORINATED DIOXINS AND FURANS (EPA 8290) DATA
Analytes Detected              Project Lab               Detection        QA Lab        Detection
                          01-WA          03-WA             Limits          02-WA          Limits
2,3,7,8-TCDD                ND             ND             3.2/3.1            ND             5.3
1,2,3,7,8-PeCDD             ND             ND             4.4/9.9           16.6             --
1,2,3,4,7,8-HxCDD           ND             ND             3.7/3.8           13.9 EMPC        --
1,2,3,6,7,8-HxCDD           ND             ND             2.3/2.3           23.7 EMPC        --
1,2,3,7,8,9-HxCDD           ND             ND             3.7/4.6           15.8 EMPC        --
1,2,3,4,6,7,8-HpCDD         ND             ND             1.0/1.4           21.3 EMPC        --
OCDD                     11.6 EMPC      12.0 B            4.9/3.5            912 EMPC,B      --
2,3,7,8-TCDF                ND             ND             2.8/2.6            ND             4.4
1,2,3,7,8-PeCDF             5.6            ND             2.8/2.6            9.2 EMPC        --
2,3,4,7,8-PeCDF             ND             ND             2.7/2.5           17.0 EMPC        --
1,2,3,4,7,8-HxCDF           ND             ND             2.4/1.7           14.8             --
1,2,3,6,7,8-HxCDF           5.25           4.7            1.5/1.1           17.5             --
2,3,4,6,7,8-HxCDF           ND             ND             2.0/1.5           17.8             --
1,2,3,7,8,9-HxCDF           ND             ND             2.0/1.5           17.1             --
1,2,3,4,6,7,8-HpCDF         7.5 EMPC       7.3 EMPC       2.3/1.3           18.0             --
1,2,3,4,7,8,9-HpCDF         ND             ND             2.6/1.6           19.8             --
OCDF                        8.3 B         64 EMPC         2.0/1.6           40.7             --
Total TCDD                  3.9            ND             3.2/3.1            ND             5.3
Total PeCDD                 ND             ND             4.4/9.9           16.6             --
Total HxCDD                 ND             ND             2.6/2.7           40.8 EMPC        --
Total HpCDD                 ND             ND             3.7/4.6            213 EMPC,B      --
Total TCDF                  ND             ND             0.0/0.0            ND             4.4
Total PeCDF                 ND             ND             2.7/2.5           26.2 EMPC        --
Total HxCDF                 ND             4.723          1.9/1.4           67.3             --
Total HpCDF                 ND             ND             2.5/1.4           37.7             --
Units = pg/L (ppq)
B = Found in method blank
EMPC = Estimated maximum possible concentration

SUMMARY:  Project blind duplicate data agree for all targeted analytes. Project blind duplicate and QA data do
not agree for over  one half of the targeted analytes, due to QA laboratory error in which the QA sample was
accidentally spiked with dioxin/furan analytes.
                                               619

-------
                                          TABLE 6a

             COMPARISON OF GASOLINE RANGE ORGANICS (ADEC 8015 MOD.) DATA

Analytes                    Project Lab              Detection         QA Lab        Detection
Detected               E023SL       E053SL         Limits          E054SL          Limits
GRO                    18            9.0             5.0              49              5


Percent Solids            100            100                            72.2

Units = mg/Kg (ppm)

SUMMARY: The project blind duplicate data agree within a factor of two of each other.  The QA data agree
within a factor of five with one project sample (E023SL) but do not agree with the blind duplicate (E053SL).  It
was noted that the percent solids of the QA and project samples are not the same, which indicates that
non-identical sample aliquots were submitted for analysis.


                                          TABLE 6b

              COMPARISON OF DIESEL RANGE ORGANICS (ADEC 8100 MOD.) DATA

Analytes                   Project Lab              Detection         QA Lab       Detection
Detected             E023SL       E053SL         Limits           E054SL         Limits
DRO                   2100          2500          500/240           2030             10


Percent Solids            80            83                             72.2

Units = mg/Kg (ppm)

SUMMARY: The project blind duplicate and QA data agree within a factor of two of each other and are
comparable.
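
The agreement criterion used in the table summaries above can be stated simply: two results are
considered comparable if they agree within a given factor of each other, or if either result is at or
below its detection limit.  The short Python sketch below illustrates that comparison; the function and
variable names are our own illustration, not part of any cited method.

    # Minimal sketch of the "agree within a factor of N" comparison used in
    # the table summaries above. Names and structure are illustrative only.
    def comparable(project_result, qa_result, factor,
                   project_dl=None, qa_dl=None):
        """Return True if two results agree within the given factor, or if
        either result is at or below its detection limit."""
        # Results at or below a detection limit are not considered
        # significant for comparison purposes.
        if project_dl is not None and project_result <= project_dl:
            return True
        if qa_dl is not None and qa_result <= qa_dl:
            return True
        ratio = max(project_result, qa_result) / min(project_result, qa_result)
        return ratio <= factor

    # Example using the GRO results of Table 6a (mg/Kg):
    print(comparable(18.0, 49.0, factor=5, project_dl=5.0, qa_dl=5.0))  # True
    print(comparable(9.0, 49.0, factor=5, project_dl=5.0, qa_dl=5.0))   # False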
                                               620

-------
                                                                                     85


            ISO GUIDE 25 VERSUS ISO 9000 FOR LABORATORIES
Peter S. Unger, Vice President
American Association for Laboratory Accreditation
656 Quince Orchard Road, Suite 620
Gaithersburg, MD 20878-1409
Abstract

Before  laboratories jump on the ISO 9000 bandwagon, they should understand
whether this type of third-party recognition is really appropriate for the
needs of their customers.  From the point of view of the user of test data,
the quality management systems approach to granting recognition to
laboratories is deficient in that it does not provide any assessment of the
technical competence of personnel  engaged in what can only be described as a
very technical activity, nor does it address the specific requirements of
particular products or measurements.  The better method of achieving these
two objectives is through laboratory accreditation bodies that operate
themselves to best international practice, require laboratories to adopt best
practices, and engage assessors who are expert in the specific tests in which
the customer is interested.
Summary

Users of test data should be concerned with both the potential for performing
a quality job (quality system)  and technical competence (ability to achieve a
technical result).  The best available method of achieving these two
objectives is through laboratory accreditation bodies that operate themselves
to best international practice, require laboratories to adopt best practices,
and engage assessors who are expert in the specific tests in which the
customer is interested.  Acceptance of test data, nationally or
internationally, should therefore be based on the application of Guide 25 to
assure the necessary confidence in the data's validity.
Introduction

Internationally, as well as here in  the United States, there is considerable
debate and confusion about the similarities, differences and relationships
between laboratory accreditation (usually performed using ISO/IEC Guide 25,
General requirements for the competence of calibration and testing
laboratories") and quality system certification  (or registration) to one of
the three ISO 9000 series of quality system models, usually 9001, 9002 or
9003.  For a laboratory, quality system certification is normally performed
using ISO 9002.
                                           621

-------
Quality system certification has become a popular method of providing
assurance of product quality.  But does it?  The large number of
organizations offering certification to the ISO 9000 series has created, perhaps
accidentally and certainly deliberately in some cases, the impression that
certification to ISO 9000 assures product quality and, for laboratories, the
validity of specific test (and calibration) results.  To the well informed,
this is misleading.

There are several significant differences between laboratory accreditation
using Guide 25 and quality system certification, but  the key difference can
be summarized by the fact that the essence of Guide 25 is to ensure the
validity of test data, while technical  credibility is not addressed in ISO
9002.

Why is there so much confusion?

First, there is a significant problem of semantics.   Second,  the purposes of
each standard are different and thus examination against them gives different
levels of assurance.  The ISO 9000 series of standards provide a generic
system for quality management of an organization,  irrespective of the product
or service it provides.

Guide 25 is a document developed specifically to provide minimum requirements
to laboratories on both quality management in a laboratory environment and
technical requirements for the proper operation of a  laboratory.   To the
extent that both documents address quality management,  Guide 25 can be
considered as a complementary document to ISO 9002  written in terms most
understandable by laboratory managers.

There is, however,  a view being expressed that the  application of ISO 9002 is
sufficient for the effective operation of a laboratory and thus ensures the
validity of test data.  This opinion has caused some confusion in the
laboratory community itself and also,  more broadly, among users of laboratory
services.  The problem is compounded when accreditation of the laboratory by
a third party is required.


The Semantics Problem

Terminology used in this area of conformity assessment is in a state of flux,
and is confusing or even misleading.   The three "tion"  words --
"accreditation," "certification" and "registration" --  are often used
interchangeably.  For example,  the US EPA talks about accredited asbestos
workers and certified drinking water laboratories when others in the same
agency talk of certifying laboratory personnel  and  accrediting laboratories.


The problem is compounded by some very specialized bodies using the words in
a different context altogether.  For example,  U.S.  building code groups refer
to accredited products rather than certified products and Underwriters
Laboratories (or UL) uses the term "listed" instead of "certified" partly
because the word "certified" carries with it the connotation of a guarantee,
                                           622

-------
which according to UL representatives is misleading and goes beyond what UL
product safety certification actually is.

The ISO Council Committee on Conformity Assessment (CASCO) has attempted to
resolve the semantics problem by standardizing the following definitions:

accreditation:  procedure by which  an authoritative body  gives formal
                recognition that a  body or person is competent to carry out
                specific tasks.

certification:  procedure by which  a third party gives written assurance
                (certificate of conformity) that a product, process or
                service conforms to specified requirements.

registration:   procedure by which  a body indicates relevant characteristics
                of a product, process or service, or particulars of a body or
                person, in an appropriate publicly available list.

Internationally, certification has become the dominant term.  However, common
usage in the United States is not always in harmony with this
international guidance, nor particularly with European practice.  The
European approach is to label both  quality system registrars and product
certifiers as certification bodies.  There is very little if any use of the
term registration in Europe.   So we have certification bodies performing
either or both product certification and quality system registration.

There seems to be some agreement in the U.S.  that "accreditation" is a  formal
recognition that a body is competent to carry out specific tasks, while
"certification" is either self-declaration by a supplier (also known as self-
certification, a term CASCO discourages in favor of "supplier declaration") or
a formal evaluation by a third party that a product conforms to a standard.

"Registration" is the common term in the United States when referring to
certification of quality systems.    So we have laboratory accreditation
defined as a formal  recognition that a laboratory is competent to carry out
specific tests or specific types of tests;  and quality system registration
being defined as a formal  attestation that a supplier's quality system  is in
conformance with an appropriate quality system model  (i.e., either ISO  9001,
9002 or 9003).  Thus, the ASQC's Registrar Accreditation Board (RAB)
accredits quality system certification bodies.

Traditionally, certification in the U.S.  has related to products, processes
or services, but because of the European influence we are hearing more
references to the certification of  quality systems,  or the very misleading
short-hand, "ISO certified" seen in many advertisements.  ISO is vigorously
discouraging this type of reference as inappropriate,  inaccurate and possibly
an infringement on the ISO trademark.   Unfortunately,  this type of
advertising is largely to blame for perpetuating the confusion and hyping
quality system registration beyond  that which it can honestly deliver.
                                             623

-------
Differing Purposes of the Standards

ISO 9000 Series

The primary aim of the  ISO 9000 standards is defined in the "Scope" section
of ISO 9001:

      "...  specifies  quality-system  requirements for use where a  supplier's
      capability to design  and supply  conforming product needs to be
      demonstrated."

The standards' primary  purpose is, therefore,  to provide a  management model
suitable for the supply of a conforming product or service  between  two
parties -- a supplier and his customer.  However, the focus on the use of the
ISO 9000 standards as two-party models has shifted greatly  as more  and more
use is made of them for third-party certification purposes.   In today's
complex world, there are limited opportunities for all  customers to have
direct relationships with their suppliers, so third-party certification
bodies are, in effect,  taking on the roles of representatives of multiple
second parties (all the customers which rely on independent certification for
their reassurance about a supplier).  It is important,  therefore, that users
of third-party certification understand what form of reassurance is provided
when an organization is certified against a quality system  standard.

Since the ISO 9000 standards are generic, it is often a significant challenge
to interpret their use  in different industry sectors,  or in organizations of
different sizes or technical complexities.  Quality system  certification does
not,  however, certify the quality of a particular  product or service for
compliance with specific technical specifications,  but only the management
system's compliance with a defined model (ISO 9001,  9002, or 9003).

The "Introduction" to the ISO 9001 standard makes  this distinction  between
systems and product conformance, where it states:   "It is emphasized that the
quality-system requirements specified in this International  Standard,  ISO
9001,  are complementary (not alternative) to the technical  (product)
specified requirements."  Essentially, the ISO 9000  standards are reminding
customers that they need to consider whether assurance is required  not only
on the compliance of a  supplier's management system,  but also on the
technical  compliance of the products provided by the supplier.   This product
assurance may be provided through a range of mechanisms such as product
certification, product  or process audits by the purchaser and vendor-supplied
test data.
ISO/IEC Guide 25-1990

Unlike the ISO 9000 series, ISO/IEC Guide 25 was  not  established primarily as
a contractual model for use between suppliers and their  customers.   Its aims
are to:

  •   Provide a basis  for  use  by accreditation bodies in assessing competence
      of laboratories;
                                            624

-------
  •   Establish general requirements  for  demonstrating laboratory compliance
      to carry out specific calibrations  or  tests; and

  •   Assist in the development and implementation of a  laboratory's quality
      system.

Historically, Guide  25 was developed within the framework of third-party
accreditation bodies.  Its early drafting was largely the work of
participants in the  International Laboratory Accreditation Conference (ILAC)
and the  latest edition was prepared in response to a request from ILAC in
1988.

To understand the significance and purpose of Guide 25 and its relationship
to ISO 9002, it is essential that it be viewed in light of its development
history  --  it was initially to assist the harmonization of criteria for
laboratory  accreditation.  Guide 25 is now being used by laboratory
accrediting bodies throughout the world and is the basis for mutual
recognition agreements among accrediting bodies.

Laboratory  accreditation is defined in ISO/IEC Guide 2 as "formal  recognition
that  a testing laboratory is competent to carry out specific tests  or
specific types of tests."  The key words in this definition are "competent"
and "specific tests."  Each accreditation recognizes a laboratory's technical
capability  (or competence) defined in terms of specific tests,  measurements,
or calibrations.   In that sense, it should be recognized as a stand-alone
form  of  quite specialized technical certification -- as distinct from a
purely quality management system certification -- as provided through the ISO
9000  framework.

Laboratory  accreditation may also be viewed as a form of technical
underpinning for  a quality system in much the same way that product
certification could  be considered as another form of complementary
underpinning for  a certified quality management system.
Similarities and Differences

Both the ISO 9000 series and ISO/IEC Guide 25 are used as criteria  by third-
party certification bodies, and both contain quality systems elements.   The
systems elements of ISO 9000 are generic;  those of ISO/IEC Guide are  also
generic but more specific to laboratory functions.   The textual  differences
between ISO 9002 and Guide 25 are obvious, but, when interpreted in a
laboratory context, it is generally accepted that the systems elements of the
two documents are closely compatible.  This is acknowledged in the
introduction of Guide 25 which states:  "Laboratories meeting the
requirements of this Guide comply, for calibration and testing activities,
with the relevant requirements of the ISO 9000 series of standards, including
those of the model described in ISO 9002, when they are acting as suppliers
producing calibration and test results."

It is not true, however, that laboratories meeting the requirements of ISO
9002 will thus meet the requirements or the intent of Guide 25.   In addition
to its system requirements (which are compatible with ISO 9002), Guide 25
                                           625

-------
emphasizes technical  competence of personnel  for their assigned functions,
addresses ethical  behavior of laboratory staff,  requires use  of well-defined
test and calibration  procedures and participation in relevant proficiency
testing programs.   Guide 25 also provides more relevant equipment management
and calibration requirements,  including traceability to national  and
international  standards for laboratory functions;  identifies  the role of
reference materials in laboratory work;  and provides specific guidance
relevant to the output of laboratories --  the content of test reports and
certificates -- together with the records requiring management within the
laboratory.

Although Guide 25  contains a combination of systems requirements and those
related to technical  competence,  for laboratory accreditation purposes,  the
Guide is normally  used only as a starting point.   Guide 25  recognizes in its
"Introduction" that "... for laboratories engaged in specific fields of
testing such as the chemical field .  .  .  the  requirements of  this Guide will
need amplification and interpretation . . . ."

In A2LA's system of laboratory accreditation,  these additional  technology-
specific criteria  are contained in special program requirements documents
such as the "Environmental Program Requirements."

However, there is  another level  of technical  criteria which must  be  met for
the accreditation of laboratories: the technically specific requirements of the
individual test methods for which the laboratories' competence is publicly
recognized.  So the hierarchy of criteria which must be met for laboratory
accreditation purposes is:

  •   ISO/IEC Guide 25;
  •   Any  field-specific criteria; and
  •   Technical requirements of specific test  methods and procedures.

Apart from comparisons on the similarities and differences between the
purposes of ISO 9000  and Guide 25 and their use  for third-party conformity
assessment purposes,  it is important to examine  the differences in skills and
emphasis of assessors involved in quality system certification  and laboratory
accreditation assessments.

For quality system certification,  emphasis is traditionally placed on the
qualifications of  the assessor to perform assessment against  the  systems
standard.   The systems assessor (often referred  to as the Lead  Assessor) is
expected to have a thorough knowledge of the  requirements of  that standard.
In current practice internationally,  a quality system assessment  team may or
may not include personnel  who have specific technical  backgrounds or process
familiarity relevant  to the organizations being  assessed.

For laboratory accreditation,  the assessment  team always involves a
combination of personnel  who have expert technical  knowledge  of the  test or
measurement methodology being evaluated for recognition in  a  specific
laboratory, together  with personnel  who have  specific knowledge of the
policies and practices of the accreditation body and the general  systems
applicable to all  accredited laboratories.  Thus,  the laboratory
                                            626

-------
accreditation assessment includes a technical peer-review component plus a
systems compliance component.

There are some other elements of difference in the respective assessment
processes.  For example, laboratory accreditation involves appraisal of the
competence of personnel as well as systems.  Part of the evaluation of a
laboratory includes evaluation of supervisory personnel, in many cases
leading to a recognition of individuals as part of the laboratory
accreditation.
The technical competence and performance of laboratory operators may also be
witnessed as part of the assessment process.  The loss of key personnel may
affect the continuing accreditation of the laboratory by the accrediting
body.  For example, A2LA recognizes key staff whose absence would reduce the
laboratory's technical competence and may prompt a reassessment before it
would be normally scheduled.

The final product of a laboratory is test data.  In many cases, laboratory
accreditation assessments also include some practical testing of the
laboratory through various forms of proficiency testing (interlaboratory
comparisons or reference materials testing).

Quality system certification is not normally linked to nominated key
personnel.  The technical competence of managers and process operators is not
a defined activity for quality system assessment teams.   It is through the
documented policies, job descriptions, procedures, work instructions,
training requirements of organizations and objective evidence of their
implementation, that quality system certifiers appraise the people component
of a system.  Staff turnover is not an issue in maintaining certification.
Complementary Functions

Recognizing that there are differences in the purpose,  criteria and emphasis
of ISO 9000 and Guide 25 and their use for conformity assessment purposes,  it
is worthwhile to consider how the roles of quality system certification and
laboratory accreditation can best interact.

Quality system certification for a laboratory should be viewed as a measure
of a laboratory's capability to meet the quality expectations of its
customers in terms of delivery of laboratory services within a management
system model as defined in ISO 9002 or 9001 -- a "quality" job.  Laboratory
accreditation, in turn, should be viewed by customers as an independent
reassurance that a laboratory is technically and managerially capable to
perform specific tests, measurements or calibrations -- a "technically
competent" job.

If satisfaction is needed on both these characteristics, then a combination
of quality system certification and laboratory accreditation may be
appropriate.
If a laboratory's function is purely for internal quality control purposes
within an organization and does not require any formal output in terms of
certificates or reports to either external customers (or internal customers
within a larger organization requiring formal test reports), it may be
                                             627

-------
appropriate for the laboratory to operate within the overall  ISO 9002
framework of the parent company.  Nevertheless, such laboratories and their
senior management may also benefit from the external,  independent appraisal
provided by the technical assessors used in laboratory accreditation.
However, if a laboratory issues certificates or reports certifying that
products, materials, environmental conditions, or calibrations conform to
specific requirements, it may need to demonstrate to its clients or the
general community that it is technically competent to conduct such tasks.
Laboratory accreditation provides the independent measure of  that competence.


Scope of Accreditation/Certification

Organizations may be certified to a quality system standard within very broad
industry or product categories.  Naturally, organizations with a very narrow
product range are certified in these terms.

Laboratories, on the other hand, are accredited for quite specific tests or
measurements, usually within specified ranges of measurement  with associated
information on uncertainty of  measurement,  and for particular products and
test specifications.

Accreditation bodies encourage laboratories to endorse test reports in the
name of the accreditation body to make a public statement that the particular
test data presented has been produced by a laboratory which has demonstrated
to a third party that it is competent to perform such tests.

The ISO 9000 series of standards are not intended to be used  in this way.
They address the quality system, not specific technical capability.  A
quality system certification body's logo should not be used as a
certification mark or endorsement of the conformity of a particular
product with its specified requirements.  Similarly, it should not be used to
endorse the competent performance of tests, calibrations or measurements
reported by laboratories.  Only a logo or endorsement  showing accreditation
to Guide 25 or equivalent for  specific calibrations or tests  denotes
technical credibility and an expectation of valid results.  Laboratories
certified to ISO 9000 cannot make the same claim.
The Special Role of Accredited Calibration Laboratories

For more general interaction between certified quality systems  and  laboratory
accreditation, one very significant area is the role that  accredited
calibration laboratories play in demonstrating traceability to  national  and
international standards of measurement.  The ISO 9000 series require that
". . . suppliers shall . . . calibrate . . . inspection, measuring and test
equipment . . . against certified equipment having a valid known
relationship to nationally recognized standards."

Many calibration certificates presented to quality system  auditors  contain
statements that the measurements or calibrations are "traceable to  national
standards."  Some auditors also insist that suppliers' calibration  documents
provide cross-reference to the other reference standards used to calibrate
                                            628

-------
their own devices, to provide a documented chain of traceability back to
their own country's or international standards of measurement.   There may be
multiple steps, involving various calibration devices,  required to
demonstrate traceability back to a national standard.   This can therefore
become a very complex and, in some perceptions, bureaucratic demonstration of
traceability by a supplier.  The supplier may also have no direct access to
information, or influence over, the provider of calibrations for its
equipment.

Concentration by auditors on documented statements of traceability of
measurements can be viewed as an exercise in "paper traceability," not
"technical traceability" -- that is, the calibrations performed on their
equipment have been performed by personnel competent to undertake the
measurements, under controlled environmental conditions (where  appropriate),
using other higher accuracy equipment that is maintained and recalibrated
within appropriate intervals and backed up by records and other management
systems which meet the principles of good laboratory practice embodied in
Guide 25.  Accreditation of the laboratory providing a  specialist calibration
service provides such reassurance of technical traceability.
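
As a rough illustration of the distinction between "paper traceability" and the underlying chain of
calibrations, the following Python sketch simply follows documented calibration certificates back to a
national standard.  The device names and data are invented for illustration; they are not drawn from
Guide 25 or from any accreditation body's records.

    # Illustrative sketch only: follow a documented calibration chain from a
    # piece of working equipment back to a national standard. All names and
    # data are invented.
    certificates = {
        "shop_multimeter":         {"calibrated_against": "transfer_standard_A"},
        "transfer_standard_A":     {"calibrated_against": "working_standard_B"},
        "working_standard_B":      {"calibrated_against": "national_volt_reference"},
        "national_volt_reference": {"calibrated_against": None},  # national standard
    }

    def trace(device):
        """Return the documented chain from a device back to the national standard."""
        chain = [device]
        while certificates[device]["calibrated_against"] is not None:
            device = certificates[device]["calibrated_against"]
            chain.append(device)
        return chain

    print(" -> ".join(trace("shop_multimeter")))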

As it is a fundamental requirement for accredited calibration laboratories to
have their own equipment traceable to national and international  standards,
both the interest and spirit of the ISO 9000 requirements are thus met when
accredited calibration laboratories are used by suppliers.   This principle
has been recognized in the recently issued ISO Standard 10012.1-1992 where
Clause 4.15 "Traceability," states that "... the supplier may provide the
documented evidence of traceability by obtaining his calibrations from a
formally accredited source."
Fundamental Difference

Quality system registration  (ISO 9000) asks:

  •  Have  you  defined your procedures?
  •  Are they  documented?
  •  Are you following them?

Laboratory accreditation  asks the same questions but then goes on to ask:

  •  Are they the most appropriate test procedures to use in the
     circumstances?
  •  Will they produce accurate results?
  •  How have you validated the procedures to ensure their accuracy?
  •  Do you have effective quality control procedures to ensure ongoing
     accuracy?
  •  Do you understand the science behind the test procedures?
  •  Do you know the limitations of the procedures?
  •  Can you foresee and cope with any technical problems that may arise
     while using the procedures?
  •  Do you have all the correct equipment, consumables and other resources
     necessary to perform these procedures?
                                             629

-------
The registration of a laboratory's quality management system is a component
of laboratory accreditation  --  not a substitute.    Quality system
registration of a laboratory to ISO 9000 misses  a  key element --  technical
validity and competence.

Unfortunately,  quality system registration of  laboratories is already being
seen as an easier route to some form of recognition for a  laboratory than
full accreditation.
European Position

In an April 1992 statement, the European Organization for Testing and
Certification (EOTC) said:

      .  . .  the only acceptable stand is to state that QS certification
     cannot be taken as an alternative to accreditation, when assessing the
     proficiency of testing laboratories.    Not trying to underrate the QS
     certification procedure,  it should none the less be underlined that, by
     being intended as a systematic approach to the assessment of an
     extremely broad scope of organizations and field of activity, it cannot
     include technical requirements specific to any given domain.


Conclusion

Before laboratories  jump on  the  ISO 9000 bandwagon, they should  understand
whether  this type of third-party recognition  is  really appropriate for the
needs of their customers.  From  the point of  view of the user of test data,
the quality  management systems approach to  granting recognition  to
laboratories is deficient in that it  does not provide  any assessment of  the
technical competence of personnel engaged in  what can  only be described  as a
very technical  activity, nor does it  address  the specific requirements of
particular products or measurements.  The ISO 9000 series state explicitly
that they are complementary, not alternatives, to specified technical
requirements.

Users of test data,  therefore, should be concerned with both the potential
for performing a quality job (quality system) and technical competence
(ability to  achieve  a  technical  result).  The best available method of
achieving these two objectives is through laboratory accreditation bodies
that operate themselves to best international practice, require laboratories
to adopt best practices, and engage assessors who are expert in the
specific tests in which the customer is interested.  Acceptance of test data,
nationally or internationally, should therefore  be based on the  application
of Guide 25  to assure  the necessary confidence in the  data's validity.


References
 1.  ISO/IEC Guide 2-1993,  "General terms and their definitions concerning
     standardization and related activities."
                                            630

-------
2.  International Laboratory Accreditation Conference, "Validity of
    Laboratory Test Data: The Application of ISO Guide 25 and ISO 9002 to
    Laboratories," June 1993.

3.  Anthony J. Russell, "Laboratory Accreditation in a World-wide
    Perspective," Pittcon, March 7, 1994.

4.  International Laboratory Accreditation Conference Committee 1 on
    Commercial Applications, "Conformity Assessment: Testing, Quality
    Assurance, Certification and Accreditation," February 1994.

5.  European Organization for Testing and Certification (EOTC/AdvC/34/92),
    "Ascertaining the Competence of Test Laboratories, in the Framework of
    EOTC Agreements Groups," April 15, 1992.

6.  Malcolm Bell, "Laboratory Accreditation," TELARC Talk, December 1994.
                                            631

-------
86
 A METHOD FOR ESTIMATING BATCH PRECISION FROM SURROGATE
 RECOVERIES

 G. Robertson, U.S. Environmental Protection Agency, EMSL-LV, Las Vegas, Nevada
 89119 and D. Dandge, S. Kaushik, D. Hewetson, Lockheed Environmental Systems and
 Technology Company, Las Vegas, Nevada 89119

 ABSTRACT

 The U. S. EPA Environmental Monitoring Systems Laboratory in Las Vegas provides
 quality assurance support to the Superfund Contract Laboratory Program (CLP).  In part,
 this effort involves evaluating the effectiveness of the quality control required in analytical
 methods. Previous work has shown that the matrix spike/matrix spike duplicate
 (MS/MSD) analysis, as applied in the CLP, adds little or no value to the CLP organic
 analyses. One problem with eliminating the MS/MSD analysis is that this pair of analyses
 provides the only estimator of analytical precision for the sample batch. The current work
 provides a precision estimator for the entire sample batch with no additional analyses.

 Precision is generally defined as a measure of the variability around a mean value.
 Traditionally, in analytical chemistry, this has been measured by replicate analyses  of a
 sample.  For this approach to provide useful information, the sample chosen for replicate
 analysis  must contain the analytes of interest, the sample must be analyzed a minimum of
 three times to provide a statistically valid estimate, and the sample must be representative of
 the sample batch.  The CLP MS/MSD analysis would rarely meet these three criteria.  The
 precision estimator that has been developed uses the surrogates that are added to every
 sample and blank to estimate precision for the entire sample batch. Data obtained for each
 surrogate may be applied to estimate recoveries and precision of chemically similar analytes,
 and a general precision value obtained for the entire analytical fraction  may be used when a
 single estimate of precision is desired for the batch. Historical precision data based on over
 2000 CLP sample batches are given  as a reference for interpreting the precision estimate.
 The individual surrogate recoveries can also be compared to the average  recoveries for the
 sample batch to  identify matrix and/or laboratory problems.  The adoption of the new
 precision estimator will yield improved information on the laboratory precision of the
 analyses in the entire sample batch while reducing costs by eliminating  two  analyses per
 batch.
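
As a rough illustration of the idea (not the authors' estimator itself, whose formulas are not reproduced
in this abstract), batch precision can be summarized from the surrogate recoveries reported for every
sample and blank, for example as a percent relative standard deviation per surrogate and a pooled value
for the analytical fraction.  A minimal Python sketch, with invented recovery data:

    # Illustrative sketch only: summarize batch precision from surrogate
    # recoveries. The recovery values below are invented, and this is not
    # the estimator described in the paper.
    from statistics import mean, stdev

    # Hypothetical percent recoveries for each surrogate across one sample batch
    batch_recoveries = {
        "nitrobenzene-d5":  [82, 88, 79, 91, 85, 84],
        "2-fluorobiphenyl": [95, 90, 97, 88, 93, 92],
        "terphenyl-d14":    [101, 97, 110, 95, 104, 99],
    }

    per_surrogate_rsd = {}
    for surrogate, recoveries in batch_recoveries.items():
        # Percent relative standard deviation for this surrogate across the batch
        rsd = 100.0 * stdev(recoveries) / mean(recoveries)
        per_surrogate_rsd[surrogate] = rsd
        print(f"{surrogate}: mean {mean(recoveries):.1f}%, RSD {rsd:.1f}%")

    # A single pooled value when one precision estimate is wanted for the fraction
    pooled_rsd = mean(per_surrogate_rsd.values())
    print(f"Pooled batch RSD estimate: {pooled_rsd:.1f}%")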
                                               632

-------
                         NOTICE

The U.S. Environmental Protection Agency, through its Office
of Research and Development, has prepared this abstract for an
oral presentation.  It does not necessarily reflect the
views of EPA or ORD. Mention of trade names or commercial
products does not constitute endorsement or recommendation
for use.
                                  633

-------
87

 PROVIDING LEGALLY DEFENSIBLE DATA FOR ENVIRONMENTAL
 ANALYSIS

 Jo Ann Boyd, MS,
 Southwest Research Institute,
 6220 Culebra Road, San Antonio, Texas 78228-0510
 ABSTRACT

      Environmental analysis plays a very important role in the environmental
 protection program.  Because litigation is always possible, most environmental
 analyses follow stringent criteria, such as the Environmental Protection
 Agency Contract Laboratory Program procedures, with analytical results documented
 in an orderly manner.

      The documentation demonstrates that all quality control steps are followed and
 facilitates data evaluation to  determine  the quality  and usefulness of the  data.
 Furthermore, the detailed records concerning sample check-in, chain of custody,
 standard or surrogate preparation, daily refrigerator and oven temperature monitoring,
 analytical and extraction logbooks, standard operating procedures, etc., are also part
 of the laboratory's documentation process.

       The fundamental element in the success of environmental analysis is people: their
 knowledge and experience form the basis of the work.  In order to grow into this new
 area, the ability to develop new methods is crucial.  In addition, the laboratory
 information management system, laboratory automation, and quality assurance/quality
 control are major factors in laboratory success.  This presentation concentrates on
 these areas.

 QUALITY ASSURANCE PROGRAM

      The implementation of a good quality assurance program within the laboratory
 ensures that all data generated are scientifically sound.  The laboratory must follow
proper quality assurance/quality control procedures throughout the process.

      The  consistency  of  quality is maintained by the laboratory  with the
implementation of a quality assurance program plan and detailed standard operating
procedures.   The  quality  assurance program plan should  provide guidance for
                                        634

-------
 laboratory personnel by documenting the daily required quality assurance/quality
 control performed.   The laboratory should have routine audits against the Quality
 Assurance Program Plan.  The routine audits will bring ongoing problems to light for
 resolution and prepare the laboratory for any external audits in the future. This will also
 document that the laboratory follows the protocol detailed in the plan.

      A thorough sample custody log-in and tracking process  identifies quality
 assurance/quality  control aspects  of the  project requirement prior to analysis of
 samples.  This will enable the analysts to have  a detailed recording of the quality
 assurance/quality control requirements as a guideline  to the statement of work.

       The custody log-in and tracking process can also provide quality control back-up
 for following up on holding times, maintaining documented custody of the sample,
 internally tracking the documented analytical steps of the sample analysis, and
 confirming that the proper quality control has been followed.

      Quality assurance/quality control follow-up of the analytical data continues
 throughout  the data  entry process  with a program that flags quality  control
 discrepancies programmed from the Quality Assurance Program Plan and Standard
 Operating Procedures.  Prior to submission of data to the client a validation is
 performed on the final package.  This validation is based on  the quality  control
 specification indicated in the statement of work or client contract.

 STANDARD OPERATING PROCEDURES

       Standard operating procedures exist expressly to provide the user with an
 efficient means of producing quality data in a timely manner. They should be
written in such a manner that the user will understand  all aspects of each appropriate
standard operating procedure and  be able to follow  the protocol with little or no
supervision.
                                        635

-------
      The standard operating procedures should detail all aspects of the environmental
 analysis process. Steps should be written in the standard operating procedure to cover
 the following:

            the process of handling samples upon receipt at the laboratory
            all custody procedures and documentation
            each analytical process required
            instrument calibration
            QA/QC requirements through the analytical process
            analytical documentation requirements
            internal laboratory audits

 The standard operating procedures should also cover:

            traceability of standards
            corrective action procedures and follow-up
            review and validation of data
            maintenance of equipment and records
            data reporting procedures
            training of personnel
            all forms  for laboratory use and instructions
            safety requirements
            sample storage and disposal procedures

 LABORATORY INFORMATION MANAGEMENT SYSTEM

       In evaluating commercially available laboratory information systems, we found
 the options for analytical laboratories to be limited. Clients have differing criteria
 and needs, based on sample identifications and deliverables, which make altering a
 commercial LIMS difficult. In the final analysis, our decision was to customize our own
 version of a laboratory information system to meet our needs and requirements.
 This system enables the laboratory to follow a tracking flow throughout the analytical
 process. Client information, delivery requirements, date  and time of receipt, sample
 ID's, required analyses and special instructions are entered into our sample log-in
program.  The program assigns a unique work order number, assigns a one time unique
 sample identification  as a cross  reference to the  client identification,  adds the
 information to the database, generates a printed work order, determines distribution
requirements for the work order based on analyses required, computes holding times
and due dates, generates sample tracking, chain of custody and corrective action forms,
                                        636

-------
 and initiates a billing record form.  There are program modules which can search the
 database for specific information, determine current sample backlog and list work
 orders with approaching deadlines.  Work  is continuing on laboratory-interactive
 portions of the program for full sample tracking capabilities.
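
 The log-in behavior described above (assigning a work order and a unique sample identification, then
 computing holding times and due dates from the date of receipt) can be pictured with a short Python
 sketch.  This is not the laboratory's actual LIMS code; the names, holding times, and record structure
 are assumptions for illustration only.

    # Illustrative sketch of the sample log-in step described above; not the
    # actual LIMS. Holding times are examples only (the governing method controls).
    from datetime import date, timedelta
    from itertools import count

    HOLDING_DAYS = {"VOA": 14, "SVOA extraction": 7, "metals": 180}
    _work_order_seq = count(1)

    def log_in_sample(client_id, received, analyses):
        """Assign a work order number, a unique lab sample ID, and due dates."""
        work_order = f"WO-{next(_work_order_seq):05d}"
        lab_id = f"{work_order}-01"   # cross-reference to the client identification
        due_dates = {a: received + timedelta(days=HOLDING_DAYS[a]) for a in analyses}
        return {"work_order": work_order, "lab_id": lab_id, "client_id": client_id,
                "received": received, "due_dates": due_dates}

    record = log_in_sample("MW-3-950712", date(1995, 7, 12), ["VOA", "SVOA extraction"])
    print(record["work_order"], record["lab_id"], record["due_dates"])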

 LABORATORY AUTOMATION

      Laboratory automation includes hardware, such as gas chromatography auto
 samplers, and associated software for processing sample results.  Some examples of
 this hardware will be presented in the next eight slides, which include auto samplers
 for GC/MS, GC, IC, GPC, AA, and ICP. We have automated the sample result process
 for the GC/MS, GC, IC, etc.

      In terms of data reporting, we have several options available through the use of
 our Banyan Vines network.  By using the  network telefax/mail feature we have
 developed various programs which will telefax the results to the client directly from
 the program.  Additionally, the data are transmitted directly from the instruments to a
 program that provides data results, minimizing data entry errors.

 LABORATORY ANALYTICAL FLOW

      As indicated  on the flow chart, once the client  request has  been approved
 samples are received at the laboratory from the field samplers.  If any discrepancies are
 found the client is notified immediately. Quality Assurance approves the work order
 which is then submitted to the analysts who will be involved with the project. If there
 are any problems during the analyses the lab manager, project manager, and client are
 contacted for resolution.  Once a  decision is made the analysis is  completed and
 submitted to the supervisor for review.  The supervisor submits the data for data entry,
 where a program checks the quality control aspects. If there are problems, the
lab manager and supervisor are contacted to resolve the problems. Upon completion
the final data is submitted to the supervisor for a final review and returned to QA/QC
                                       637

-------
 for validation. If quality control does not meet the requirements, the lab manager and
 supervisor are notified so that narrative documentation can be prepared, and the final
 package with this documentation is paginated, copied, and submitted to the client.

       External audits will be performed by potential clients from time to time.  An
 audit provides the laboratory with positive feedback.  Laboratory personnel should
 improve existing procedures by implementing new suggestions and updating Quality
 Assurance Program Plans and Standard Operating Procedures.  These changes
 continually assist in improving the quality procedures of the laboratory.

       Quality assurance/quality control follow-up of the analytical data continues
 throughout the data entry process  with a program  that flags  quality control
 discrepancies programmed from the Quality Assurance Program Plan and Standard
 Operating Procedures.  Following the data entry process a final analytical supervisory
 review is performed. Prior to submission of data to the client a validation is performed
 on the final package.  This  validation is based on the quality control specification
 indicated in the statement of work, client contract, or method related to the analysis.

 CONCLUSION:

      Any project that is performed  within the laboratory  program, as previously
 discussed, will provide legally defensible data through the custody and documentation
 of the sample analysis.   All documentation is performed  according to the Good
 Laboratory Practice guidelines.  This approach will identify all problems, the corrective
 actions taken, and the effect the problems have on the quality of the data.

       The program allows internal audits, tracking the sample through the system at
 each step beginning from receipt of the sample.  The unique identification and LIMS
 practices give the laboratory the ability to track the sample through each process and
 stage of analysis in the system.  Because the custody and documentation provide the
 ability to repeat the analysis, or the steps taken throughout the process, the data
 provided will be legally defensible for the client.
                                         638

-------
[Flow chart: laboratory analytical flow. CLIENT REQUEST (can the requested results be
achieved? If not, the problem is relayed to the client and modifications are sought;
otherwise the request is denied) -> SAMPLES RECEIVED (discrepancies are relayed to the
client) -> SAMPLE MANAGEMENT for approval and tracking -> ANALYSIS (problems are referred
to the project manager, lab manager, and client) -> SUPERVISORY REVIEW -> DATA ENTRY
(problems are referred to the lab manager; final corrections) -> SUPERVISOR review ->
QA/QC CHECKS (discrepancies are referred to the analyst/supervisor and lab manager;
final review) -> SAMPLE MANAGEMENT (paginate and copy) -> CLIENT (submission of data).]
                                           639

-------
88

 AUDITS AS TOOLS FOR PROCESS IMPROVEMENT
 R. Cypher, EA Engineering,  Science, and Technology, Hunt Valley, Maryland 21031,  M.
 Uhlfelder, EA Laboratories, Sparks, Maryland  21152,  and M. Robison,  Maryland Spectral
 Services, Baltimore, Maryland 21227

 ABSTRACT

 Environmental testing laboratories are audited frequently by certifying agencies and clients. This
 paper describes how a laboratory can take advantage of the audit by using the results as a learning
 experience to upgrade or refine the laboratories  processes, techniques, or systems.  All audit
 comments should be given consideration.  What  may seem minor to the laboratory, may be of
 major concern to the auditor. In this paper we discuss the different types of audits, the general
 criteria used, and review the different styles and techniques  used by auditors.  We summarize  the
 most common findings  from recently conducted audits of over 30 environmental laboratories, our
 information gathering process, and a trend analysis of those findings. We discuss how we used
 the audit process as a positive learning experience, how we upgraded our own auditing system,
 and how any laboratory can benefit from an audit by using benchmarking techniques to improve
 their quality systems. We also describe how the audit process can be used as a mentoring tool for
 small disadvantaged or  minority-owned businesses.  In summary, we demonstrate how the lessons
 learned  from an audit can benefit a laboratory  and result in  cost reductions  and improved
 efficiency of its operations.
                                            640

-------
                                                                                        89
COST-EFFECTIVE MONITORING PROGRAMS USING STANDARDIZED
ELECTRONIC DELIVERABLES

G. Medina and A. Ilias. U.S. Army Corps of Engineers, North Pacific Division
Laboratory, 1491 NW Graham Ave. Troutdale, Oregon, 97060

ABSTRACT

Hidden costs associated with the management of monitoring wells, attributable to the
assessment and archiving of analytical and field information/data, have greatly inflated the
overall costs of groundwater monitoring and clean-up. This article addresses elements
universal to project management and describes the cost-cutting benefits of using a
standardized electronic deliverable format (EDF).  The Corps of Engineers North Pacific
Division Laboratory (NPDL) is in the process of implementing standardization of all
reported analytical data and field information in a standardized digital format.

INTRODUCTION

The Corps of Engineers North Pacific Division district offices are responsible for the
overall management of groundwater monitoring and clean-up projects at various military
installations. Architecture and engineering (A/E) firms are contracted to oversee the day-
to-day activities and functions associated with a given clean-up project. Typically, the
A/E is responsible for executing a scope of work that, amongst other criteria, clearly
defines a clean-up objective, a sampling plan, a chemical data acquisition plan (CDAP),
and the development of a model, based on field and analytical information and data. The
analytical work is contracted out by the A/E  to laboratories (if they do not have in-house
capability) that are referred to as primary laboratories. The analytical data obtained from
the primary laboratories is used to model the type and level of contamination.

Contaminated groundwater monitoring well  programs typically span an extended period
of time. It is not uncommon for a monitoring program to extend over several years.
During the life of the project, it is not unusual for multiple laboratories to provide
analytical data to  a given A/E.  It is also possible that more than one A/E firm may be
involved with the project through its duration. Thus, a tremendous amount of data is
being generated and processed.  The tracking and management of this data is a costly and
demanding undertaking.

NPDL is tasked with the responsibility of serving as a quality assurance (QA) laboratory.
To assure that the government is receiving the analytical services and quality it is paying
for, a minimum of ten percent of all field samples collected by an A/E firm are taken as
sample splits and are analyzed by NPDL or one of its eight contract laboratories that
provide analytical support. NPDL is also responsible for the generation of a quality
                                             641

-------
assurance report that comprises the evaluation of data from split samples, reported by the
primary and QA laboratories. This process entails physically extracting and compiling
information and data from hardcopy reports. Although performed for different reasons,
the review and validation functions employed by NPDL are repeated by the A/E and the
Corps of Engineers district office managing the project.

DISCUSSION

Hidden costs associated with analytical data processing are not immediately apparent but
ultimately make themselves evident somewhere along the monitoring program. In short,
hidden costs come down to the irreplaceable commodity of time spent singling out and
correcting errors reported by laboratories in hardcopy reports and in manual database
entry. For example, the review and validation process associated with hardcopy reports
is tedious and time consuming. Contacting the laboratories responsible for the analysis
for clarification of ambiguous data, or of data that are not supported with the correct
or sufficient amount of quality control (QC), takes time, especially after a lapse of time
between the generation of the report and the review process at NPDL, the A/E, or the
Corps field district office.  All too often, analytical reports contain the ambiguous use of
flags and qualifiers, erroneous field identification information, improper use of significant
figures, misprints and miscalculations.  The absence of method of preparation
identification, initial and/or continuous calibration information or the revision date for the
QC criteria used for evaluation can greatly undermine the integrity and validity of the
data. When such information is questionable or missing, a good deal of time can be spent
in its clarification or acquisition.

 For the purpose of modeling, the data from hardcopy reports must be manually keyed
into a database system by the A/E. To assure error free data, it is common to see the time
consuming practice of double entry for the same data by two different data entry
personnel. In the case where different A/Es overlap  the life of the project, there is the
need and potential cost for matching database platforms and structures. Similarly, the
Corps district field offices must compile their own archival databases for a given project,
based on hardcopy reports. It is conceivable that at a minimum, three separate database
systems for three separate functions, maintained at three separate locations will be
generated  from hardcopy reports.

In an effort to standardize the exchange of data and information between the A/Es, the
primary and QA laboratories, and the managing Corps district field offices, NPDL has
developed an electronic deliverable format package that is currently being used by NPDL
and its eight contract laboratories. The intent is to distribute EDF to A/Es and their
contract laboratories in the future and to contractually require that all analytical data
provided to NPDL and Corps district field offices be in a standardized EDF.
                                               642

-------
EDF is loosely based on the Air Force's Installation Restoration Program Information
Management System (IRPIMS). EDF uses the basic IRPIMS (1) valid value dictionary
(VVD) as a foundation for analytical methods and parameter labels because it lends itself
well to EPA SW-846 (2) methodologies. EDF is based on an ASCII, field delimited,
relational database structure that comprises five files (see Tables 1-5). The additional
fields and modifications to IRPIMS that comprise EDF reflect NPDL's focus on the need
to monitor sample custody/control as well as in-house laboratory QC.

The EDF package comprises the VVD, an electronic data loading tool (COELT) and an
electronic data consistency check tool (EDCC). The COELT is used to parcel the data into
the required five files by taking information and data that has been processed by a
laboratory's laboratory information management system. The EDCC tool is used to
validate the consistency of the reported data/information in terms of structure, format, the
correct use of valid values, and completeness. An error report is generated by the EDCC
to identify consistency or completeness problems. An EDF report is not considered valid
unless it has been run through the EDCC and is accompanied by an error-free EDCC report.
NPDL will only accept valid EDF reports.
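As an illustration of the kind of checks the EDCC performs, the following minimal sketch
(in Python, using field names from Table 1) flags missing required fields and invalid matrix
codes. The delimiter, file name, and valid-value list are assumptions for illustration only;
the actual EDCC, its file layouts, and its valid value dictionary are more extensive.

    # Minimal sketch of an EDCC-style consistency check. Field names follow
    # Table 1; the delimiter, file name, and valid-value list are assumptions.
    import csv

    REQUIRED_FIELDS = ["LABCODE", "LOCID", "LOGDATE", "MATRIX", "SAMPID"]
    VALID_MATRIX = {"WATER", "SOIL"}   # stand-in for the IRPIMS valid value dictionary

    def check_sample_file(path):
        """Return a list of error messages; an empty list means the file passed."""
        errors = []
        with open(path, newline="") as handle:
            for line_no, row in enumerate(csv.DictReader(handle), start=2):
                for field in REQUIRED_FIELDS:
                    if not (row.get(field) or "").strip():
                        errors.append("line %d: missing %s" % (line_no, field))
                matrix = (row.get("MATRIX") or "").strip().upper()
                if matrix and matrix not in VALID_MATRIX:
                    errors.append("line %d: invalid MATRIX code %s" % (line_no, matrix))
        return errors

    for problem in check_sample_file("sample_info.txt"):
        print(problem)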

In addition to providing a consistently formatted and structured report (both digitally as
well as hardcopy), the use of EDF forces the laboratory to provide required QC, initial
and ongoing calibration information, revision dates for the QC criteria being used,
identification of sample preparation and analytical batches, unique  and consistent flags
and qualifiers, codes that identify analytical work subcontracted out, codes to identify
secondary column confirmation for positive hits for gas chromatograph analysis, dilution
factors, etc.

NPDL has set up an electronic bulletin board system for the bi-directional exchange of
information and data between its laboratories, consultants and Corps district field offices.
The use of EDF will eliminate the need for manual database entry.  To provide support,
NPDL has set up a help desk to field questions and resolve potential problems associated
with EDF and its ancillary tools.

CONCLUSION

Although it is still in its infancy, the use of a standardized EDF is beginning to provide
benefits in the elimination of transcription errors. EDF is forcing the laboratories to
provide complete, error-free, standardized reports. Because the COELT is capable of
generating hardcopy output, the hardcopies that are being generated at NPDL from the
contract laboratories' EDF reports are uniform and identical in structure and format. It is
anticipated that in the future, EDF will greatly help reduce the hidden costs associated
with data and information management in groundwater monitoring programs. EDF will
lend itself to the automation of data review and processing, the
                                             643

-------
elimination of transcription errors, and the immediate transport and exchange of data
via telephone lines. Because EDF is based on an accepted and supported platform, the
archiving of data will be straightforward. In the interest of data integrity, plans are
underway to investigate the potential and feasibility of having one database repository for
a given project that will be accessible to NPDL, the managing Corps field office, the A/E
and the contract laboratories.

Acknowledgments:  The authors are grateful to Arsenault and Associates, the primary
architects of EDF and Ms. Ruth Abney for her exhaustive efforts in the de-bugging
process of EDF and its output product. The authors acknowledge the efforts of Dr. R.
Bard for the review and proofing of the manuscript.

REFERENCES

1. Installation Restoration Program Information Management System Data Loading
Handbook, Version 2.3, May 1994, Air Force Center for Environmental Excellence,
Brooks Air Force Base, Texas.

2. EPA SW-846, Final Update I, July 1992
                                            644

-------
                              TABLE 1
                       SAMPLE INFORMATION

Field Name        Description
LABCODE           Analytical laboratory
LOCID             Location identification
LOGCODE           Log code
LOGDATE           Log date
LOGTIME           Log time
MATRIX            Sample matrix
CNTSHUNUM         Control sheet number
NPDLWO            Work order number
PROJNAME          Project name
SAMPID            Field assigned sample ID

BOLD = IRPIMS Fields
                   645

-------
                                TABLE 2
                          TEST INFORMATION

Field Name        Description
ANADATE           Date of analysis
ANMCODE           Analytical method code
BASIS             Wet/dry weight
EXMCODE           Extraction method
EXTDATE           Extraction date
LABCODE           Analytical laboratory
LABLOTCTL         Laboratory control number
LABSAMPID         Laboratory assigned ID
LOCID             Location identification
LOGCODE           Log code
LOGDATE           Log date
LOGTIME           Log time
MATRIX            Sample matrix
QCCODE            Quality control type
RUN_NUMBER        Analysis run
APPRVD            Approved by
COCNUM            Chain of custody No.
EXLABLOT          Extraction control number
LAB_REPNO         Laboratory report No.
LNOTE             Laboratory notes
MODPARLIST        Modified parameter list
PRESCODE          Preservation
RECDATE           Date received
REP_DATE          Date of report
SAMPID            Field assigned sample ID
SUB               Subcontracted test

BOLD = IRPIMS Fields
                      646

-------
                                 TABLE 3
                         RESULTS INFORMATION

Field Name        Description
ANADATE           Date of analysis
ANMCODE           Analytical method code
EXMCODE           Extraction method
LABCODE           Analytical laboratory
LABDL             Lab detection limits
LABSAMPID         Laboratory assigned ID
MATRIX            Sample matrix
PARLABEL          Parameter code
PARUN             Parameter uncertainty
PARVAL            Analytical result
PARVQ             Parameter value qualifier
PVCODE            Parameter value class
QCCODE            Quality control type
RUN_NUMBER        Analysis run
UNITS             Units of measure
CLREVDATE         Control chart revision date
DILFAC            Dilution factor
LNOTE             Laboratory notes
REPDL             Reported detection limits
REPDLVQ           Rep. det. limit qualifier
RT                TIC retention time
SRM               Standard reference material

BOLD = IRPIMS Fields
                       647

-------
                              TABLE 4
                  QUALITY CONTROL INFORMATION

Field Name        Description
ANMCODE           Analytical method code
LABCODE           Analytical laboratory
LABLOTCTL         Laboratory control number
LABQCID           Quality control sample No.
MATRIX            Sample matrix
PARLABEL          Parameter code
QCCODE            Quality control type
UNITS             Units of measure
EXPECTED          Expected parameter value
LABREFID          Reference sample number
                         648

-------
                               TABLE 5
                     CONTROL LIMIT INFORMATION

Field Name        Description
ANMCODE           Analytical method code
EXMCODE           Extraction method
LABCODE           Analytical laboratory
MATRIX            Sample matrix
PARLABEL          Parameter code
CLCODE            Control limit code
CLREVDATE         Control revision date
LOWERCL           Lower control limits
UPPERCL           Upper control limits

BOLD = IRPIMS Fields
                     649

-------
90

 PERFORMANCE OBJECTIVES AND CRITERIA FOR FIELD SAMPLING ASSESSMENTS
 Michael Johnson, United States Department of Energy, Environmental Measurements
 Laboratory,  New York, NY  10014
 ABSTRACT

 The Analytical Services Division, Office of Environmental Management (EM-263) has
 developed and is implementing an assessment program to evaluate EM's environmental
sampling and analysis activities. To support these goals, the Environmental Measurements
 Laboratory has developed Performance Objectives and Criteria (POCs) for Field Sampling
 Assessments.

 The performance objectives address the key elements necessary for effective programmatic
 control of sampling services.  They are intended to guide an assessment team in evaluating
 the effectiveness of the sampling program and the system used by the facility to establish and
 implement  QA standards for sampling activities.

 Performance Objectives and Criteria were developed in the following areas:

                QA Project Plans and Sampling and Analysis Plans
                Standard Operating Procedures
                QA for Sample Collection
                Sample Management
                Operator Training
                Operational Criteria
                Maintenance and Decontamination

The performance criteria emphasize policies and programs that must generally be defined and
implemented to achieve the performance objective. Several performance indicators have
been identified for each criterion. These indicators are examples of concrete, verifiable
practices and activities that provide positive indications that the facility is meeting the
performance objectives. They are indicators of the facility's approach to complying with the
performance objective.

Primarily, the POCs serve as guidance for DOE program managers, field offices and
contractors to establish self-assessment programs for improving their field sampling
programs. They also provide direction to technical personnel who function as technical
specialists (auditors) conducting assessments of sampling activities. Assessment findings may
be based on a performance objective itself or on a failure to satisfy one or more of the objective's
criteria.
                                             650

-------
                                                                                        91
     SMART SEQUENCING ENVIRONMENTAL GC/MS IN A CLIENT/SERVER
                                ENVIRONMENT

Charles A. Koch. Ph.D.. Application Engineer, Hewlett-Packard, 9606 Aero Drive, San
Diego, CA 92123; and Mark Lewis, Application Engineer, Thruput Systems, 450 East South
Street, Orlando, Florida, 32801

ABSTRACT

Smart sequencing of gas chromatography/mass spectrometry (GC/MS) analytical systems
allows results from data analysis to control the data acquisition system without operator
intervention. Intelligent sequencing of this type adds more reliability  and efficiency  to
environmental testing. A failed quality control limit, such as decafluorotriphenylphosphine
(DFTPP) tune criteria,  pauses  data acquisition of a set  of samples referenced to that
particular DFTPP run. Corrective action is possible; the system may tune itself and re-
inject DFTPP. Traditionally, environmental  smart sequencing for GC/MS is  limited  to
standalone  computer systems,  where a  single computer  operates the instrument and
analyzes the data. Today, computers operate on local area and wide area  networks, and
share tasks. Client/server environments partition jobs among the various processors. This
work describes  a client/server application for environmental GC/MS.   A  PC  client
computer runs the GC/MS, and a UNIX server computer analyzes the data.  The client
runs the samples and sends the data files to the server. The server analyzes the data and
checks integrity according to  EPA rules. Using the results from the  server, the client
makes intelligent decisions concerning data acquisition, and thus, can either  abort a system
out of control, or take corrective action. This sets the operator free for more productive
tasks. Intelligent  instrument control minimizes incorrect testing decisions and  keeps
operating costs low.

INTRODUCTION

Intelligent control of GC/MS analytical systems allows response from quality-control (QC)
check samples to control  the  autosampler without user intervention.  This adds more
reliability and efficiency  to environmental testing. A failed quality control  limit, such  as
DFTPP tune criteria, halts data acquisition of samples in a batch job called  a shift. All the
samples in a particular shift are referenced to the same DFTPP run. Corrective action may
be possible. The system may tune itself and re-inject DFTPP, or merely stop. Traditionally,
environmental smart sequencing for GC/MS is limited to standalone computer systems,
where a single computer operates the instrument and analyzes the data. Today,  computers
operate on local area and wide area networks, and share tasks.  Client/server environments
share jobs among the various processors. In the analytical  laboratory, the client computer
runs the samples and  sends the  data files to the server computer, which analyzes the data
and checks QC integrity according to EPA rules. Using the results from the server, the
client has information with which to make intelligent decisions concerning data acquisition.
                                              651

-------
It can either abort a system out of control, or take corrective action. Operators are free to
do  more productive tasks.  Intelligent instrument  control minimizes  incorrect  testing
decisions, and keeps costs low.

Smart control of an autosampler for environmental GC/MS applications first appeared in
1988 for use with the Hewlett Packard Real Time Executive (RTE) Aquarius system(l). It
was developed  in an Environmental Protection Agency (EPA) contract lab where strict
tuning and calibration criteria must be met before any samples can be run. This software
enhancement allows feedback from QC check samples to control the autosampler during a
twelve hour sequence run. The logic relies on 1986 US EPA contract laboratory program
(CLP) requirements(2).  The results of a QC run are used in a decision-making process that
instructs the autosampler to  do one of three things: reanalyze the QC  sample, continue
with the next sample, or stop the sequence all together. It attempts to  remedy a  system
that is out of control. The system becomes 'smart' enough to conclude data acquisition if
either the  EPA mass spectrometry tuning criteria  or  the calibration  criteria are  out  of
control.  See Figure 1 for the basic algorithm. This software evolved in 1992(3) to check
for the newer CLP rules(4), and accommodate laboratories abiding by Federal Register(5)
or SW-846(6) guidelines. It ran only on the proprietary RTE system that performed both
data acquisition and data analysis. Hewlett Packard  moved away from the mature RTE to
faster, modern processors. These platforms rely on open operating systems, such as UNIX
and WINDOWS. This move  demanded a new generation of intelligent  sequencing
software.
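The branching described above can be pictured in a short sketch. The following Python
fragment is only an illustration of the decision logic summarized in Figure 1; the original
software ran on the RTE system, and the callback names here are hypothetical.

    # Illustrative sketch of the smart sequencing decisions (see Figure 1).
    # dftpp_passes() and calibration_passes() stand in for the data-analysis
    # results; run_shift_samples() and stop_run() stand in for autosampler control.
    def smart_sequence(dftpp_passes, calibration_passes, run_shift_samples, stop_run):
        if not dftpp_passes():
            # one corrective attempt: retune and re-inject DFTPP
            if not dftpp_passes():
                stop_run("DFTPP tune criteria failed twice")
                return
        if not calibration_passes():
            stop_run("calibration criteria failed")
            return
        run_shift_samples()   # system in control: analyze the shift's samples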

The newer computing technologies added benefits to intelligent autosampler control. The
initial  experiments with the new systems duplicated the  earlier RTE/Aquarius work. PC
based  systems ran environmental application software under the Microsoft WINDOWS
operating system and included smart sequencing for environmental GC/MS as a standard
feature.  This was still a single computer solution, and very similar to the original RTE
work.  Hewlett Packard combined client/server expertise and measurement technology  to
create a UNIX based server for  environmental target compound analysis. Client/server
smart sequencing became possible.

A simple model of client/server computing is  the distribution of tasks between two  or
more computer applications.  See Figure 2. The model has three parts: a client, a  server,
and the  slash that binds the client to the server(7).  The  client runs the client side of the
application, and often sends data to the server. The client in return requests information or
resources from the server. The more powerful server provides information or resources to
clients. The  slash is the middleware that runs on both the client and the server sides of the
application.  An example of middleware is the well-known TCP/IP transport stack used to
transport files from the client  to the server.

The client/server application of interest here is smart sequencing of environmental GC/MS
instrumentation. The system consists of a WINDOWS  client and a UNIX server. The
                                              652

-------
client runs batch data acquisition jobs and sends raw GC/MS data files using TCP/IP to
the server for processing. It needs information back from the server to control the batch
autosampler. The server gets the data file from the client, then performs the analysis. It
communicates to the client if the QC  samples passed  or failed the EPA criteria.  The
middleware  is  the  network  communications software packages NFS,  PC/NFS,  and
TCP/IP.  The  computers connect  with standard Ethernet  hardware.  The  application
software consists of user-contributed macros running on both the client and the server that
add functionality to the basic product. The client macro pauses the autosampler after a QC
run, sends the data to the server, searches for a response from the server, then makes the
appropriate decision about  the autosampler. The server macro uses the results  of the
environmental targeting software on the server and intelligently decides if the QC sample
has passed or failed criteria.  The server macro signals the  client macro  about the QC
result. Finally, the client controls the autosampler. There are no limits, as far as the
application is concerned, on the number of client systems that can be connected to a particular server.
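As a rough illustration of the client side of this exchange, the sketch below simply copies a
finished data file into a directory shared with the server; in the actual system the share was
provided by NFS/PC-NFS over TCP/IP, and the paths shown here are hypothetical.

    # Minimal sketch: hand a raw GC/MS data file to the server for processing.
    # The mount point and directory name are assumptions for illustration only.
    import shutil

    def send_to_server(local_datafile, server_incoming=r"N:\target\incoming"):
        shutil.copy(local_datafile, server_incoming)   # server picks it up and analyzes it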

EXPERIMENTAL

The experiment consisted  of one Hewlett Packard  model 4920 server using  a  735
processor and the standard amount of core memory and hard disk space. There were two
PC clients. Each ran the Hewlett Packard WINDOWS based GC/MS  software.  The
instrumentation for each client consisted of a Hewlett Packard 5970B mass spectrometer
and  a 5890 gas  chromatograph. The  clients sent semivolatile data to the server  for
processing there with the Hewlett Packard Target III server  based software. Hewlett
Packard versions of NFS and TCP/IP ran on the server,  and  PC/NFS and TCP/IP  ran on
each client. The network backbone was Ethernet 10 BASE-T.

System modifications were minor. The  custom macro running on the PC  clients,  named
smart_seq.mac, ran  automatically after data  acquisition.  Similarly, the custom  macro
named "TuneCCalcheck.mac"  ran  automatically  at the server following data analysis.
These additional macros were the only modifications required  to the standard systems.
The client and server  systems each  provide  services to run custom user  macros
automatically.

See Figure 3 for the basic algorithm of client/server smart sequencing. The first step is to
make a sequence file describing the samples. Each sequence run must have a tune run and
daily calibration run. The run typically can only last twelve hours, as mandated by the EPA
methods. There are sequence keywords and sample types that trigger special computations
by the client/server intelligent  control software. A "BFB" or "DFTPP" sample type, for
example, causes the sequence to halt until the client gets the message back from the server
concerning the quality of the QC data. A "daily calibration" sample type similarly stops the
sequence. The  server gets  the daily calibration  data file sent to it by the client, and
automatically analyzes it. Then the macro TuneCCalcheck.mac decides if the QC  criteria
for daily calibration have been satisfied. It creates a text file called "passed.smt" if the
                                             653

-------
system is in QC compliance, or one called "failed.smt" if the system is out of control. The
macro places them in the data file directory on the server's hard drive. Smart_seq.mac
continually monitors the data directory for the result. When smart_seq.mac detects the
result, it either halts the autosampler, or continues with the run.
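The client-side wait for the QC verdict can be pictured with a small polling loop. This is
only a sketch of the behavior described above, written in Python rather than the instrument
macro language; the directory path and timeout values are assumptions.

    # Sketch of smart_seq.mac's wait for the server's verdict: poll the shared
    # data directory for "passed.smt" or "failed.smt", then continue or halt.
    import os, time

    def wait_for_qc_result(data_dir, timeout_s=600, poll_s=5):
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            if os.path.exists(os.path.join(data_dir, "passed.smt")):
                return True    # QC in compliance: resume the autosampler
            if os.path.exists(os.path.join(data_dir, "failed.smt")):
                return False   # QC out of control: halt the autosampler
            time.sleep(poll_s)
        return False           # no answer in time is treated as a failure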

RESULTS AND DISCUSSION

Smart sequencing with one server and two clients worked.  The entire process took about
twenty seconds.  Time depends upon network traffic. It rarely took  more  than  forty
seconds. A big advantage that this computing scenario has over single computing systems
is the ability to distribute the tasks logically. A PC fits well on the lab bench and is well
suited for  chromatographic data  acquisition that  does not require a very powerful
computer. Data analysis  requires more power due to the floating point calculations. It is a
process better suited for a powerful server. Client/server computing offers other benefits
besides intelligent sequencing and logical job partitioning. The main advantage allows the
easy transfer of information among various computers  connected on a  network. A
Laboratory  Information  Management  System  (LIMS)   computer  contains  all  the
information about a sample from the time it is sampled in the field, to the time results are
released.  Sample information can be downloaded  as a text file to the PC running the
GC/MS,  and be used by the client to construct a sequence file.  This saves time and
eliminates typing mistakes. Figure 4 shows a text file used to automatically create a
sequence with the proper keywords necessary for smart sequencing. No typing is required
by the chemist  at the  bench, and information does  not have  to be entered twice.
Acquisition occurs, then the data files go to the server for analysis. The quantitative results
move from the server to the LIMS  as formatted text files. Networked laboratory systems
share information and process samples together, with minimum operator intervention.
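A sequence file of the form shown in Figure 4 is straightforward to generate from a LIMS
export. The sketch below writes a few [LINE n] sections in that style; the sample list, output
file name, and the subset of header keys used are assumptions for illustration.

    # Sketch: build a Figure 4 style sequence file from LIMS sample information.
    # Only a subset of the keys from the figure is written; values are hypothetical.
    samples = [("dummy1", "sample one"), ("dummy2", "sample two")]

    lines = ["[SEQUENCE HDR]",
             "      SEQFILE=start.s",
             "      STOPLINE=%d" % len(samples), ""]
    for i, (datafile, sample_id) in enumerate(samples, start=1):
        lines += ["[LINE %d]" % i,
                  "      VIALNO=%d" % i,
                  "      DATAFILE=%s" % datafile,
                  "      ACQMETHOD=default.m",
                  "      SAMPLEID=%s" % sample_id, ""]
    with open("smart.s", "w") as out:
        out.write("\n".join(lines))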

CONCLUSION

It  is  simple  to incorporate  the  user-contributed  macros  at  the  PC  controlling the
autosampler and the server performing data analysis. Client/server smart sequencing saves
time for the operator, and lets a computer take over tedious data validations.  It halts a
system that has gone out of control. This saves wear and tear on GC columns and mass
spectrometer sources by avoiding unnecessary injections of contaminated waste samples.
A cleaner system gives more reproducible data. The server part of client/server computing
is better suited for the large floating point  calculations involved in environmental target
compound  analysis. The client is better suited for the lab bench and data acquisition. This
logical partitioning of computer tasks adds to laboratory efficiency.  There is no cost for
the user-contributed smart sequencing macros, and they are available from the author.

There is a future for intelligent control of lab systems. Computer systems will use response
from data analysis to adjust acquisition parameters for both the chromatograph and the
mass  spectrometer.  Self correcting analytical systems could  keep themselves  in QC
                                               654

-------
control. Failed tune runs can signal the mass spectrometer to tune itself and inject DFTPP
again. Deviations from allowed retention times and response factors will flag the system to
correct itself.  Advances in computer technology advance smart  control of analytical
systems. The  current Ethernet  client/server  framework will  most likely give way to
another era in which proximity does not matter.  Improved  communications will  link
mobile lab PC's and powerful servers. Object technology will affect both the data and the
processing. A sample run becomes an object, not just data, but data with an associated
action.  A DFTPP  object for example would consist of the GC/MS  data, the action to
check EPA criteria, and the final QC result. Object technology allows a busy computer to
send an object to another computer for processing help. This would be like sending to a
friend's house all the ingredients and the recipe to bake a cake. Environmental laboratory
computers networked across the hall, or across the world, could assist each other to
control instruments and validate  data simply by passing objects.
                                              655

-------
LITERATURE CITED

(1) Koch, C. A., Dewald, J., LC/GC, 1988, 2, 150-152

(2) "Chemical Analytical Services for Organics Applying GC/MS Techniques",
WA-87-J001, (US EPA Contract Lab Program, November 24, 1986)

(3) Koch, C. A., LC/GC, 1992, 9, 709

(4) "US EPA CLP Statement of Work for Organic Analysis for Multimedia and
Multiconcentration", Document 01M01.1 (US EPA Contract Laboratory
Program, Washington, D.C., 1990).

(5) Federal Register, 49, 153, October 26, 1984

(6) "Test Methods for Evaluating Solid Waste, Physical/Chemical Methods"
(SW-846, US EPA, Washington, DC, 3rd ed., 1990),
Section IB

(7) Orfali, R., Harkey, D., and Edwards, J., Byte, 1995, 4, 108-122
                          656

-------
Figure 1  The smart sequencing GC/MS algorithm. [Flowchart: run the DFTPP tune check;
if it fails, run the tune check again; if it fails a second time, stop the run and prepare a
report. If DFTPP passes, prepare tune reports and run a 1-point calibration; if the
calibration fails, stop the run and prepare a report; if it passes, prepare calibration reports,
run the 1st shift samples, then run the DFTPP tune check for the 2nd shift.]
                       657

-------
Figure 2  The client/server model. [Diagram: client/server computing is defined as the
distribution of tasks between two or more computer applications; clients request
information or resources from servers, and servers provide information or resources to
clients.]
                     658

-------
Figure 3  The client/server smart sequence algorithm. [Flowchart: the server macro creates
a file called "Passed.smt" if the QC check passes or "Failed.smt" if it fails.]
                                    659

-------
Figure 4  The LIMS text file used to construct the smart
sequence file at the client PC
[SEQUENCE HDR]

      SEQPATH=c:\hpchem\l\sequence
      SEQFILE=start.s
      SEQFILENEW=smart.s
      SEQCHECKBAR=0
      SEQSKIPACQ=0
      SEQOVERWRITE=1
      SEQCOMMENT=smart sequence for tune and daily cal runs
      SEQOPERATOR=operator name
      SEQDATAPATH=c:\hpchem\l\data\
      SEQPRESEQCMD="
      SEQPOSTSEQCMD="
      STARTLINE=1
      STOPLINE=8

[LINE 1]
      VIALNO=1
      DATAFILE=
      ACQMETHOD=On_Flag
      TYPE=6
      CALLEVEL=1
      UPDATEQI=1
      UPDATERF=1
      UPDATERT=1
      SAMPLEID=STOP
      SAMPLEMISC=
      SAMPLEAMT=0
      DILFACTOR=1
      PREMACRO=
      POSTMACRO=

[LINE 2]
      VIALNO=1
      DATAFILE=dummyl
      ACQMETHOD=default.m
      TYPE=8
      CALLEVEL=1
      UPDATEQI=1
      UPDATERF=1
      UPDATERT=1
      SAMPLEID=sample name stuff here
      SAMPLEMISC=sample misc stuff here
      SAMPLEAMT=0
      DILFACTOR=1
      PREMACRO=
      POSTMACRO=
                      660

-------
Figure 4 (cont.)
[LINE 3]
      VIALNO=2
      DATAFILE=dummy2
      ACQMETHOD=default.m
      TYPE=4
      CALLEVEL=1
      UPDATEQI=1
      UPDATERF=1
      UPDATERT=1
      SAMPLEID=sample name stuff here
      SAMPLEMISC=sample misc stuff here
      SAMPLEAMT=0
      DILFACTOR=1
      PREMACRO=
      POSTMACRO=

[LINE 4]
      VIALNO=3
      DATAFILE=dummy3
      ACQMETHOD=default.m
      TYPE=1
      CALLEVEL=1
      UPDATEQI=1
      UPDATERF=1
      UPDATERT=1
      SAMPLEID=sample name stuff here
      SAMPLEMISC=sample misc stuff here
      SAMPLEAMT=0
      DILFACTOR=1
      PREMACRO=
      POSTMACRO=

[LINE 5]
      VIALNO=4
      DATAFILE=dummy4
      ACQMETHOD=default.m
      TYPE=1
      CALLEVEL=1
      UPDATEQI=1
      UPDATERF=1
      UPDATERT=1
      SAMPLEID=sample name stuff here
      SAMPLEMISC=sample misc stuff here
      SAMPLEAMT=0
      DILFACTOR=1
      PREMACRO=
      POSTMACRO=
                       661

-------
Figure 4 (cont.)
[LINE 6]
      VIALNO=5
      DATAFILE=dummy5
      ACQMETHOD=default.m
      TYPE=1
      CALLEVEL=1
      UPDATEQI=1
      UPDATERF=1
      UPDATERT=1
      SAMPLEID=sample name stuff here
      SAMPLEMISC=sample misc stuff here
      SAMPLEAMT=0
      DILFACTOR=1
      PREMACRO=
      POSTMACRO=

[LINE 7]
      VIALNO=6
      DATAFILE=dummy6
      ACQMETHOD=default.m
      TYPE=1
      CALLEVEL=1
      UPDATEQI=1
      UPDATERF=1
      UPDATERT=1
      SAMPLEID=sample name stuff here
      SAMPLEMISC=sample misc stuff here
      SAMPLEAMT=0
      DILFACTOR=1
      PREMACRO=
      POSTMACRO=
[LINE 8]
      VIALNO=7
      DATAFILE=dummy7
      ACQMETHOD=Label
      TYPE=6
      CALLEVEL=7
      UPDATEQI=2
      UPDATERF=2
      UPDATERT=2
      SAMPLEID=STOP
      SAMPLEMISC=
      SAMPLEAMT=0
      DILFACTOR=1
      PREMACRO=
      POSTMACRO=
                      662

-------
                                                                    92
     How the U.S. Environmental Protection Agency Region 2 RCRA
     Quality Assurance Outreach Program, Office of Research and
     Development, and Office of Enforcement and Compliance
     Assurance are Helping Industry to Minimize Environmental
     Compliance Costs

Leon Lazarus. U.S. Environmental Protection Agency, Region 2,
Edison, New Jersey 08837;  Phil Flax, U.S. Environmental
Protection Agency, Region 2, New York, New York 10278;  and Jeff
Kelly, U.S. Environmental Protection Agency, Washington, DC
20460.

ABSTRACT

The U.S. Environmental Protection Agency  (EPA) Region 2 Resource
Conservation and Recovery Act  (RCRA) quality assurance outreach
program is cooperating with the EPA Office of Research and
Development  (ORD), and Office of Enforcement and Compliance
Assurance  (OECA) to help the regulated community minimize costs
when complying with environmental regulations.

The OECA computer bulletin board system (BBS) recently merged
with ORD's pollution prevention BBS.  This new BBS is named
Enviro$ense.  The three goals of the Enviro$ense BBS are to
prevent pollution, increase compliance with environmental
regulations, and reduce environmental compliance costs.

The Enviro$ense computer bulletin board system will contain
compliance and pollution prevention files from EPA program
offices.  This will allow "multi-media, one stop shopping" for
compliance and pollution prevention information.  For example,
Enviro$ense can scan file titles and abstracts for the key words
"cadmium in water", and list all compliance and pollution
prevention files that contain those key words.  The files of
interest can then be downloaded.  Enviro$ense will be accessible
directly or via the Internet.

INTRODUCTION

The U.S. Environmental Protection Agency  (EPA) Region 2 Resource
Conservation and Recovery Act  (RCRA) quality assurance outreach
program is cooperating with the EPA Office of Research and
Development  (ORD), and Office of Enforcement and Compliance
Assurance  (OECA) to help the regulated community minimize costs
when complying with environmental regulations.

The OECA computer bulletin board system (BBS) recently merged
with ORD's pollution prevention BBS.  This new BBS is named
Enviro$ense.  The three goals of the Enviro$ense BBS are to
prevent pollution, increase compliance with environmental
regulations, and reduce environmental compliance costs.
                                   663

-------
The Enviro$ense computer bulletin board system will contain
compliance and pollution prevention files from EPA program
offices.   This will  allow "multi-media, one stop shopping" for
compliance and pollution prevention information.  For example,
Enviro$ense can scan file titles and abstracts for the key words
"cadmium in water",  and list all compliance and pollution
prevention files that contain those key words.  The files of
interest can then be downloaded.  Enviro$ense will be accessible
directly or via the  Internet.

The BBS, which became fully operational in April, will assist industry in minimizing
compliance costs by providing:

1.  Program specific (Clean Air Act,  Clean Water Act, Resource
Conservation and Recovery Act,  etc)  regulations, guidances, and
strategies for reducing environmental compliance costs;  program
specific quality assurance guidances and strategies for reducing
environmental compliance costs;  and industry specific (mining,
manufacturing, petroleum, etc)  regulations, guidances, and
strategies for reducing environmental compliance costs.  These
regulations, guidances,  and strategies may be downloaded by
anyone who has a modem and a computer.

2.  Weekly updates of EPA's Federal Register notices.

3.  The EPA Region 2 seminar, symposia, and workshop schedule.

4.  Quality assurance project plan guidances by program and by
region.

5.  Information about SW-846 analytical issues, including:  data
validation, method updates, performance evaluation studies,
immuno assay methods,  and the Office of Solid Waste Quality
Assurance Newsletter.

6.  Information about EPA's July 1995 Waste Testing and Quality
Assurance Symposium, in Washington,  DC (seminars, workshops, call
for papers, etc).

7.  The EPA Region 2 quality assurance standard operating
procedures (SOPs), toxicity characteristic leaching procedure
(TCLP) manual, and Comprehensive Environmental Response,
Compensation, and Liability Act (CERCLA)  manual.

8.  Information about air and water quality assurance issues.

9.  Information on hazardous waste identification,
characterization, and sample transportation.

The EPA Region 2 Office is coordinating the "ASK EPA" forum on
the BBS.   The ASK EPA forum will consist of question and answer
forums, where people post questions and EPA experts post answers
within a few days.   Other individuals interested in the topic can
read the questions and answers.
                                 664

-------
ASK EPA forums on the following topics will be offered:

1.  Ground Water.  Hosted by Bill Stelz, EPA, Washington DC.

2.  How to Use EPA's Decision Error Feasibility Trials (DEFT)
Software to Reduce Monitoring Costs.  Hosted by Nancy Wentworth,
EPA, Washington  DC.

3.  EPA Region 2 CERCLA quality assurance policies on data
validation, routine analytical services (RAS) and non RAS
methods.  Hosted by Peter Savoia, EPA, Edison NJ.

4.  EPA Region 2 NPDES policies.  Hosted by John Kushwara, EPA,
New York NY.

5.  EPA Region 2 RCRA quality assurance policies.  Hosted by Leon
Lazarus, EPA, Edison NJ.

6.  Mobile labs  and robotics.  Hosted by Vernon Laurie, EPA,
Washington DC.

The Ask EPA forum will describe how to download the following
files from Enviro$ense:
1) EPA hot line  and help line telephone numbers on EPA policies,
guidances and monitoring methods.
2)  Monthly summaries of questions commonly asked on EPA hot
lines and help lines.

USING ENVIRO$ENSE

The following discussion explains how to utilize the Enviro$ense
computer bulletin board system:

Modem Settings

Speed      1,200, 2,400, 4,800, 9,600, or 14,400  baud
Data       8 bits
Parity     None
Stop       1
Duplex     Full
Emulation  VT-100 or ANSI or BBS
Phone #    703-908-2092
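For readers connecting programmatically rather than with a terminal program, the settings
above map directly onto a serial-port configuration. The sketch below uses the third-party
pyserial package and a Hayes-style dial command; the COM port name and timeout are
assumptions, and the sketch is an illustration only, not part of the BBS documentation.

    # Sketch: open the modem with the settings listed above and dial the BBS.
    import serial   # third-party pyserial package

    ser = serial.Serial("COM1", baudrate=14400, bytesize=serial.EIGHTBITS,
                        parity=serial.PARITY_NONE, stopbits=serial.STOPBITS_ONE,
                        timeout=30)
    ser.write(b"ATDT7039082092\r")                  # dial 703-908-2092
    print(ser.readline().decode(errors="replace"))  # expect a CONNECT banner
    ser.close()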

After logging on and selecting a password, files may be uploaded
or downloaded.
                                  665

-------
UPLOADING FILES

Any type of PC file can be uploaded onto the BBS.  However, the
vast majority of BBS files are text files.  The authors recommend
that text files be uploaded in one of the following formats: 1)
DOS based ASCII text,  or 2) WordPerfect 5.1/5.2 files.  ASCII
text files may be generated by using the "Save As" or "Text Out"
commands in most word processors.   WordPerfect files are
compatible with most word processors.

Uploaded files should be compressed unless the user wants BBS
callers to be able to read the files on-line.  All large files
must be compressed.  Compressed files must be downloaded before
they are read.  Compression reduces the amount of disk space
utilized by a file, and reduces the time required to upload or
download a file.  Files may be compressed by using PKZip
utilities.  PKZip utilities may be obtained from the Enviro$ense
BBS by downloading file "PKZ204G.EXE".  After downloading this
file, its name is typed at the DOS prompt.  This will decompress
PKZ204G.EXE into a number of files, including a software
documentation file.  The PKZip documentation file explains how to
use PKZip utilities.  This documentation file may be accessed in
any word processor.

If the user wants a text file to be readable on-line, it must be
saved as an ASCII file or WordPerfect file, and its name should
end with the TXT extension (i.e.,  MYFILE.TXT).  All files other
than these TXT files should be compressed using PKZip.
Compressed files will always have the file extension ZIP (i.e.,
MYFILE.ZIP).  Therefore, when preparing a file for uploading, it
should have either the TXT or ZIP extensions.  However, small
files with different extensions are acceptable.
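The naming and compression rules above can also be applied with a few lines of code. The
sketch below uses Python's zipfile module as a stand-in for PKZip; the file names are
hypothetical.

    # Sketch: prepare a file for upload. Readable text keeps its .TXT name;
    # everything else is compressed into a .ZIP archive (PKZip stand-in).
    import os, zipfile

    def prepare_for_upload(path, readable_online=False):
        if readable_online and path.upper().endswith(".TXT"):
            return path                      # leave it readable on-line
        zip_path = os.path.splitext(path)[0] + ".ZIP"
        with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as archive:
            archive.write(path, arcname=os.path.basename(path))
        return zip_path

    print(prepare_for_upload("MYFILE.DOC"))  # -> MYFILE.ZIP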

Placing a file to be uploaded on a hard drive accelerates the
uploading.  The hard drive is usually designated as the "C:"
drive.
                                   666

-------
To upload a file onto the Enviro$ense BBS, the user must identify
and locate a specific file, and instruct the user's
communications software to transfer the file.  To notify the
Enviro$ense BBS that a file is to be uploaded, the user selects
"U" from the main menu, and presses enter.  The Enviro$ense BBS
will ask for the name of the file to be uploaded.  It must have
the same name as the file that has been prepared for uploading
(i.e., MYFILE.TXT or MYFILE.ZIP).  After verifying that the BBS
does not already have a file with that name, the user will be
asked to briefly describe the file.  The file description may be
up to 10 lines of 45 characters each.  The first line should
describe the file.  Subsequent lines should describe the file in
more detail, utilizing as many key words as possible.  BBS users
may easily scan all file descriptions for key words.  When
uploading files, the agency/company that produced the file, and a
contact name and phone number should be included to allow people
to obtain additional information.  After receiving the file
description, the Enviro$ense BBS will grant permission to
transfer the file.  At this time, the user's communications
software should transmit the file to the Enviro$ense BBS.

The communications software manual illustrates how to transfer a
file.  Some communications programs utilize the "Page Up" key to
transfer files.  In order for a file to be transferred, it must
be properly named and located on a specific drive.  While the
file is being transferred, an indicator of transfer progress can
be viewed on your screen.  Depending on the size of the file and
the modem speed at which the user connected, uploading may take a
few seconds or many minutes.  Once the upload is completed, the
Enviro$ense BBS will thank the user, and scan the upload for
viruses.  If viruses are not present, the file will be placed in
an appropriate topic directory of the files section.  The key
words used in the file description will determine the appropriate
file directory.

For information about the Enviro$ense BBS, please contact Myles
Morse at 202-260-3161 or Jeff Kelly at 202-260-2809.  For
information about the ASK EPA forum, please contact Leon Lazarus
at 908-321-6778.
                                   667

-------
93
QUALITY  ASSURANCE  AND  QUALITY CONTROL  LABORATORY  AND
INSITU TESTING OF PAPER MILL SLUDGES USED AS LANDFILL COVERS
Horace  K. Moo-Young Jr.,  Department  of Civil  and  Environmental  Engineering,
Rensselaer Polytechnic Institute, Troy, NY 12180 , and Thomas F. Zimmie, Department
of Civil and  Environmental Engineering,  Rensselaer Polytechnic Institute,  Troy, NY
12180.

ABSTRACT

Paper mill sludges have been successfully used as an alternative to clays as  landfill  cover
material for the  past  decade.   Although paper mill sludges are approximately 50%
kaolinite clay, the geotechnical properties of paper mill sludges differ from a typical clay.
Paper mill  sludges are characterized by a high water content  and organic  content  in
comparison to a typical clay  which contribute to the variations in the  geotechnical
properties.  The purpose of this paper is to give regulators a better understanding of the
geotechnical properties of paper mill sludges which are used as landfill cover material.

Laboratory tests  were  conducted on  seven  paper  sludges  to obtain the  geotechnical
properties such as the Atterberg limits, compaction characteristics, water content, organic
content, and shear strength.  Typical laboratory procedures used for clays were altered for
paper sludge due  to the high initial water content. Standard procedures for the laboratory
testing of the geotechnical properties of paper sludges and insitu sampling are discussed.

Hydraulic conductivity (permeability) and compressibility tests were conducted on the
various paper sludges.  A direct relationship between organic decomposition, water
content, and  compressibility was  established.   Laboratory permeability tests  were
conducted on insitu samples taken from an actual paper sludge landfill cover layer.

The permeability  varied considerably among the paper sludges. Factors which influence
the permeability include water content,  consolidation, and  organic content.   Although a
paper sludge may not initially meet the regulatory requirement for permeability (when the
sludge cover system is constructed at the natural water content),  the change in void ratio
that results from consolidation and dewatering under a low effective stress can reduce the
hydraulic conductivity to an acceptable value.

INTRODUCTION

       The high price of solid waste disposal has sparked interest in the development of
alternative uses for waste sludges (paper mill sludges and water treatment plant
sludges). Because they can be compacted to a low permeability despite high water contents and low
solids contents in comparison to clays, paper mill sludges can substitute for clays in landfill
                                              668

-------
covers.  Since 1975, paper mill sludges have been used to cap landfills in Wisconsin and
Massachusetts (Stoffel and Ham,  1979; Pepin,  1984; Aloisi and Atkinson,  1990; Swann,
1991; Zimmie et al.,  1993).  This paper establishes design criteria for landfill covers using
paper sludge.

       Seven sludges were used in this study.  Sludge A is a wastewater treatment plant
sludge from a  deinking recycling paper mill.  The treatment plant receives 96% of the
flow from the paper mill and 4% of the flow from the town. Sludge B is a blended sludge
from a wastewater treatment plant which receives its effluent from a recycling paper mill
and the neighboring community.  Sludge C is a blended  sludge from an integrated paper
mill and is comprised of kaolin clay, wood pulp and organics.  Sludge C was mined from
a sludge monofill landfill which had been in operation since 1973. Samples were collected from
different sections of the monofill  to represent different sludge ages: one week (Cl), 2-4
years (C2),  and 10-14 years (C3).  Sludge D is a primary wastewater treatment plant
sludge from a recycling paper mill.  Sludge E is a primary wastewater treatment plant
sludge from a non-integrated paper mill.

GEOTECHNICAL CLASSIFICATION

       The geotechnical classification of paper mill sludges is not like that of typical clays
used in landfill  cover systems.  For example, Atterberg Limits tests are very difficult to
perform on  paper  sludges  and  the  results may not be meaningful in terms of classical
geotechnical  classification  (Zimmie and Moo-Young, 1995).  Organic content,  specific
gravity, natural water content, and permeability appear to be the major physical properties
of interest.

       The  ranges of natural  water  contents,  organic  content, specific gravity,  and
permeability are summarized in Table  1.  Water  contents were determined according to
American  Society  for Testing and  Materials (ASTM) procedure D2974.  The organic
contents of paper sludges were determined according to ASTM procedure D2974, method
C for geotechnical classification purposes.  Specific gravity tests were performed on the
sludges according  to ASTM procedure D854.  Permeability  tests were conducted  on
remolded specimens of the various sludges using ASTM procedure D5084.  Paper sludge
specimens were remolded at various water contents in the range of the initial moisture
content.  Average  initial permeability values were measured  at a low confining stress of
34.5 kPa.

MATERIAL WORKABILITY

       Proctor  tests were performed following ASTM procedure D698-78. Because of
the high water content,  tests were conducted from the wet side rather than from the dry
side  as  recommended by ASTM.   When water  was added to dry sludge, large clods
formed, the  clods were difficult to  break apart, and the sludge lost its initial plasticity.
                                             669

-------
During the drying process, the sludge was passed through the number 4 sieve and placed
in a pan to air dry.  Many trials were conducted to reach the optimum moisture content
and density.

       Figure 1 shows the Proctor curve, optimum moisture content, and dry density for
the various sludges.  The Proctor curves show a wide range of moisture contents on the
wet of optimum portion of the curve and a small range of water contents on the dry of
optimum portion of the curve.  At higher water contents, the dry density obtained from the
Proctor curve for the various sludges is similar.  At the optimum density and moisture
content, the sludge is dry, stiff, and unworkable. A very high water content is desirable if
the sludge is to be used as a landfill  capping material (Zimmie et al., 1993).   These test
results compare favorably to research conducted on water treatment plant sludges (Raghu
et al., 1987; Alvi  and Lewis, 1987; Environmental Technology Inc., 1989; Wang et al.,
1991).

       During the construction of the Hubbardston landfill in Hubbardston, Massachusetts
and Erving Paper mill test plots in Erving,  Massachusetts, different types of equipment
were used to place the sludge cap.  Four types of equipment were used: a small ground
pressure vibratory drum roller, a vibrating plate compactor, a sheepsfoot roller,  and a low
ground pressure track dozer.  The sheepsfoot roller, which is generally used to compact a
clay liner, clogged immediately due to the cohesive nature of the sludge and the high water
content.   The vibratory  methods did not  provide homogeneous mixing  and did not
compact the sludge effectively.   The  small ground  pressure  dozer  provided the  best
method for placement and compaction.   This equipment  successfully  eliminated large
voids from the sludge material and kneaded the material homogeneously.

CONSOLIDATION BEHAVIOR

       The water content of paper  sludge is the most useful  parameter in predicting
consolidation behavior.  The sludge samples are assumed to be fully saturated so that the
void ratio is equal to the specific gravity of the sludge multiplied by the water content. To
simulate insitu consolidation behavior, water contents were kept as close as possible to the
initial value.  Higher initial water contents result in higher initial void ratios which increase
the potential consolidation.

       Consolidation tests were performed on sludge A at various water contents to show
the highly  compressible nature of the  paper sludge and to establish a relationship between
consolidation behavior and initial water content (Figure 2).  The change in void ratio per
log cycle of pressure (Cc, the compression index) increases with higher initial water
contents, as shown in Figure 2.  Higher initial water contents will result in higher void
ratios, which account for the increasing magnitude of compression with increasing water
content.
                                             670

-------
      Consolidation tests were also performed on the other sludges at their natural water
content.  The compression index was plotted against the initial water content for the paper
sludges and for water treatment sludge (Wang et al., 1991). The relationship between the
compression index and water content is as follows:

Cc = 0.009w0	(1)

The relationship between the compression index and void ratio is as follows:

Cc = 0.39e0	(2)

Landva and LaRochelle (1983) established a relationship between compression index and
water content for peats which is similar to the one obtained for paper mill sludges.
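Equations (1) and (2), together with the saturation assumption stated above (void ratio
equals specific gravity times water content), are simple to apply; the sketch below uses
hypothetical input values for illustration.

    # Sketch of equations (1) and (2) with the full-saturation assumption.
    def void_ratio(specific_gravity, water_content_pct):
        return specific_gravity * water_content_pct / 100.0   # e0 = Gs * w0

    def cc_from_water_content(water_content_pct):
        return 0.009 * water_content_pct                       # equation (1)

    def cc_from_void_ratio(e0):
        return 0.39 * e0                                       # equation (2)

    e0 = void_ratio(1.9, 160.0)          # hypothetical sludge: Gs = 1.9, w0 = 160%
    print(round(cc_from_water_content(160.0), 2),   # about 1.44
          round(cc_from_void_ratio(e0), 2))         # about 1.19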

INFLUENCE OF ORGANIC CONTENT ON COMPRESSIBILITY

      Consolidation tests were performed on the seven sludges to obtain a relationship
between compressibility and organic content. Paper mill sludges are composed of 40-60%
organics. Twenty-two consolidation tests were conducted to obtain a relationship between
organic content and compressibility. Paper sludges were tested at an initial water content
ranging from 109% to 224%. The sludges tested had an average water content of 166.4%,
with a standard deviation in the water content of 37%. The compression index (Cc),
which is the change in void ratio per logarithm cycle of the vertical stress, and the
coefficient of compressibility (Av), which is the change in void ratio per change in vertical
stress, were computed for the various test specimens. The correlation coefficients between
the organic content and the compression index and the coefficient of compressibility are 0.47
and 0.53, respectively, which indicates a positive correlation between the variables.

      Figure 3 plots the compression index and the organic content for various sludges.
The relationship between the compression index and the organic content from Figure 3 is
as follows:

      Cc = 0.027 Oc                                                 (3)
       Figure 4 plots the coefficient of compressibility and the organic content for the
various sludges.   The  relationship  between the coefficient  of compressibility and the
organic content  is as follows:

       Av= 0.000263 Oc                                                    (4)
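Equations (3) and (4) can be applied the same way; the organic content value in the sketch
below is hypothetical.

    # Sketch of equations (3) and (4) relating compressibility to organic content.
    def cc_from_organic_content(oc_pct):
        return 0.027 * oc_pct            # equation (3)

    def av_from_organic_content(oc_pct):
        return 0.000263 * oc_pct         # equation (4)

    print(cc_from_organic_content(50.0),   # 1.35 for a 50% organic content
          av_from_organic_content(50.0))   # 0.01315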
                                             671

-------
       Previous research indicates that there  is little to no  data relating the organic
content to  compressibility for sludges.   A relationship can be developed to predict the
permeability of paper sludge from the organic content.
INSITU SAMPLING PROCEDURES FOR PERMEABILITY ANALYSIS

       The best sampling procedure was discovered through trial and error using Shelby
tubes.  Slow static pressure (pushing the Shelby tube into the sludge layer with a constant
vertical force)  compressed the sludge during  the sampling process and led to low
recovery rates.  A dynamic sampling process, like striking the Shelby tube with a hammer,
resulted in high rates of recovery and minimal disturbance.  Apparently, due to the fibers
and tissues in the sludge matrix, a sharp blow was needed to cut through the sludge. The
normal field procedure was to place the Shelby tube on the sludge, place a wood block on
top of the Shelby tube, and strike the block with a hammer.  This procedure resulted in the
highest rates of recovery and the least disturbance (Moo-Young, 1992).

       Laboratory permeability tests were conducted on undisturbed  sludge A samples
taken from the  Hubbardston Landfill on five occasions:  July 1991, October 1991, April
1992, January 1993, and July 1993.  All laboratory permeability tests in this study were
performed following the  procedures of ASTM  D5084  for measuring the hydraulic
conductivity  of saturated  porous  material using a  flexible wall permeameter with
backpressure.   Samples were tested at a low confining stress of 34.5 kPa to simulate the
worst case, that is the highest permeability.

In general, the samples met the 1 x 10^-7 cm/sec regulatory requirement for a low
permeability  landfill cover  system  in  Massachusetts.    Table 2   summarizes the
permeabilities of the samples.  The water contents  of the samples 1, 2, and 4 taken from
the landfill after construction varied from 150% to  220%.  In general all specimens taken
from  various  sections  of the landfill immediately after  construction  either  met the
regulatory requirement for permeability or were very close.

       Sample  3,  taken  after 9  months,  was  dewatered and  consolidated  under an
eighteen inch overburden.  It was  markedly stiffer and denser than  samples  obtained
shortly  after  construction.   The  permeability  for the sample  meets the regulatory
requirements of 1 x 10^-7 cm/sec. Sample 5 was taken from the same section of the landfill
as sample 3,  eighteen months after placement.  Permeability tests yielded an  average
permeability of 3.4 x 10"^ cm/sec at  a water content of 107 %, which easily meets the 1 x
10^-7 cm/sec standard for landfill cover design.   After 18 months of consolidation the
sludge layer met the regulatory requirements.  The sludge  layer performs as an  adequate
hydraulic barrier at a water content of 107% and a  void ratio of 2.1. Sample 6 was taken
two years after placement from  the same  section of the  landfill as samples 3  and  5.
                                              672

-------
Sample 6 meets the permeability requirement.  Thus, time, dewatering and consolidation
have reduced the permeability of sludge A.

HYDRAULIC CONDUCTIVITY

       A major factor in the design of a paper sludge landfill cover is the estimation of the
permeability after initial settlement  (approximately six months to one year).  There are
three major factors that contribute to the permeability characteristics of paper sludges:
water content, organic content, and consolidation.  Zimmie and Moo-Young (1995) have
conducted research on the hydraulic conductivity of various paper sludges.  In general, the
water content and permeability relationship for paper sludges reveals that the permeability
increases near the optimum moisture content (40% to 60%).  The minimum permeability
for paper sludges occurs approximately 100 percent wet of the optimum water content.
When constructing a paper sludge landfill cover, a high water content is desirable, usually
at the natural water content, ranging from 150-250% (Zimmie et al., 1993).

       Figure 5 shows a relationship between the organic content and permeability.  The
organic content and the permeability were plotted for the various sludges. The organic
content  ranged from  25% to  73%.  For the average permeability  line, the hydraulic
conductivity ranged from  2  x 10"^  to 2  x 10"^ cm/sec.  The 95% prediction interval is
shown to give a range of values for the permeability and organic content relationship.  The
upper  prediction interval  ranges  from 4 x 10"' to   1 x  10"^ cm/sec, and the lower
prediction interval ranges from 1 x  10"^ to 2 x  10'^ cm/sec.  As the organic content
decreases, there is a decrease in  permeability.  Points outside of the prediction interval
indicate that the prediction interval is only an estimated range for the permeability.

       The consolidation  characteristics of paper sludges are  well documented (Zimmie
and Moo-Young, 1995; Zimmie et  al., 1993; Wang et al.,  1991; Alvi and Lewis, 1987;
Raghu et al., 1987).   Paper sludge is a highly compressible material with  a compression
index of 1.1 to 1.5 (Moo-Young, 1992).  Sludges with higher initial water contents have
steeper decreases in void ratio under equivalent changes in effective stress. The amount  of
reduction in void ratio under a given change in effective stress directly affects the
magnitude of the change in permeability.  A typical paper sludge shows approximately one
order of magnitude decrease in permeability while a clay shows a reduction of a factor  of
two over the same range of pressures (Zimmie and Moo-Young, 1995). These results  (a
decrease in permeability resulting from an increase in effective stress on a  sample) are
comparable to the results obtained from studies  conducted on organic clays and peats
(Landva and LaRochelle, 1983).

       Figure 6 shows the effects of a change in void ratio on the permeability of samples
of sludges A, C1-C3, and D which were molded at various water contents. It is of interest
to examine the curves for sludges A and D.  These sludges were selected for comparison
due  to  their similar organic contents,  water contents,  and  compression  properties.
Permeability tests were performed at 34.5 kPa, 69 kPa, and 138 kPa.  Although the two
sludges  do not have permeabilities of the same magnitude, they show nearly equivalent
changes in void ratio per log cycle change in permeability.  Sludges C3 and C1, which were
molded at higher water contents, have a steeper change in void ratio and a larger reduction
in permeability.

       The void ratio-permeability relationships were also established for the C sludges
(Figure 6).  Sludges C1 and C3 were molded at 250% water content and show similar
changes in void ratio.  Sludge C2 was molded at 190% water content, which is identical to
one of the sludge D samples.  Sludges C2 and D show similar compressive behavior, and
the changes in void ratio per log cycle change in permeability are comparable.

LANDFILL COVER DESIGN FOR PAPER SLUDGE

       For landfill cover design, one of the common stipulations is that the cover should
include a barrier layer with a permeability less than or equal to 1 x 10^-7 cm/sec.  Most
sludges in this study (C1, C2, D, and E) do not initially meet that regulatory requirement
for permeability when tested at the natural water content under a low confining stress
(Table 1).  In general, most of the sludges meet the 1 x 10^-7 cm/sec permeability
requirement when tested at higher consolidation pressures (Moo-Young, 1992; Zimmie
and Moo-Young, 1995).  The time for this reduction in permeability must be short for the
material to be considered the low permeability component of a landfill cover system.
Short term laboratory tests take consolidation effects into account but cannot judge long
term effects such as organic degradation.  However, the use of higher effective stresses to
measure the permeability of paper sludges yields conservative results, since organic
decomposition also reduces the permeability (Figure 2).

       The estimated load at the mid-point of a typical paper sludge landfill cover system
is approximately 23.9 kPa.  At higher water contents (166-190%), the minimum change in
void ratio (from the initial void ratio to a vertical pressure of 23.9 kPa) ranges from 0.5 to
1.0 (Zimmie and Moo-Young, 1995).  Using the minimum change in void ratio, the
change in permeability can be predicted for the various sludges using Figure 6.  A
maximum change in permeability of approximately one order of magnitude can result from
the consolidation of the paper sludge cover.  Initially, most of the sludges (C1, C2, D, and
E) do not meet the 1 x 10^-7 cm/sec regulatory requirement (Zimmie and Moo-Young,
1995).  However, large changes in void ratio (Δe = 1 or greater) that may occur within one
year will reduce the hydraulic conductivity of the sludge to an acceptable value of
1 x 10^-7 cm/sec or less.

       The laboratory permeability tests on the insitu samples (Table 2) can be used to
illustrate the change in insitu hydraulic conductivity that results from a change in void
ratio.  For the Hubbardston Landfill, the average initial permeability, water content, and
void ratio were 1.06 x 10^-7 cm/sec, 190%, and 3.72, respectively.  After nine months, the
permeability, water content, and void ratio were 4.47 x 10^-8 cm/sec, 106%, and 2.1,
respectively.  This is a decrease in water content of 84 percentage points and a change in
void ratio of 1.62.  The resulting change in permeability was approximately one half an
order of magnitude, which compares favorably to the observed changes in laboratory
compacted samples.
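
       The reported half-order-of-magnitude reduction follows directly from the logarithm
of the permeability ratio.  The short sketch below is illustrative only (it is not part of the
original study) and simply reproduces the arithmetic from the Table 2 values.

    import math

    # Illustrative check of the in situ permeability change at the Hubbardston
    # Landfill, using the average values reported in Table 2.
    k_initial = 1.06e-7   # cm/sec, average initial permeability (July 1991)
    k_nine_mo = 4.47e-8   # cm/sec, permeability after nine months (April 1992)

    orders_of_magnitude = math.log10(k_initial / k_nine_mo)
    print(f"Reduction = {orders_of_magnitude:.2f} orders of magnitude")
    # prints about 0.38, i.e., roughly one half an order of magnitude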

CONCLUSION

       Paper sludges  are  characterized by high water contents, organic contents, and
compressibilities,  and  are  compactable to low permeabilities. A high water content  is
recommended for the construction of a paper sludge cap, since paper sludge is stiff and
unworkable near the optimum water content.  For best insitu compaction, a low ground
pressure dozer is recommended for the  construction of a paper sludge cap.

       One-dimensional consolidation tests revealed a direct relationship between the
water  content  and the  compression  index  and  between  the organic  content, the
compression index, and the coefficient of compressibility.   Paper mill sludges  were
characterized by high strains and large reductions in void ratio.  Higher water contents
resulted in higher void ratios and increased the  compressibility.

       Permeability tests  were  performed on the  various  sludges.   The minimum
permeability of paper sludge occurs far wet of the optimum  moisture content.  Organic
content, water  content, and compressibility  are the key  parameters  which affect the
permeability of paper sludges.  Paper sludges  yield a decrease in permeability five  times
that for a typical clay for an effective stress range of 34.5 to 138 kPa. Observations of the
municipal landfill using a 91 cm layer of sludge A as the impermeable barrier indicate that
it is providing an adequate hydraulic barrier.

       When designing a  landfill  cover system  using paper  sludge as the impermeable
barrier, the sludge layer should be constructed at the natural water content. Initially at the
natural water  content,  the sludge  may not  meet  the regulatory requirement for
permeability of 1 x 10^-7 cm/sec or less.  However, the change in void ratio that results
from the application of an overburden pressure (i.e., drainage layer and vegetative support
layer) can reduce the permeability to an acceptable value.

ACKNOWLEDGMENTS

       Support for this research  was received from the following organizations: Erving
Paper Company, Erving, MA; International Paper Company, Corinth, NY; Marcal Paper
Mill Inc., Elmwood Park, NJ; Mead Specialty Paper Division, Lee, MA; Clough Harbour
and Assoc., Albany, NY; and the Army Research Office, Research Triangle Park, NC.

REFERENCES

Aloisi, W. and Atkinson, D.S. (1990). "Evaluation of Paper Mill Sludge for Landfill
       Capping Material"  Prepared for Town of Erving, MA by Tighe and Bond
       Consulting Engineering, Westfield, MA.

Alvi, P.M. and Lewis, K.H. (1987). "Geotechnical Properties of Industrial Sludge."
       Proceeding from International Symposium on Environmental Geotechnology.
       Ed. Hsia-Yang Fang. Bethlehem, PA. 2:57-76.

Environmental Engineering & Technology Inc. (1989). "Water Plant Sludge Disposal in
       Landfills," Quarterly Report 1. Oklahoma City, Oklahoma.

Landva, A.O. and LaRochelle (1983). "Compressibility and Shear Characteristics of
       Radforth Peats"  Testing Peats and Organic Soils, P.M. Jarrett, Ed., ASTM
       STP 820, ASTM, Philadelphia, pp. 157-191.

Moo-Young, H.K. (1992).  "Evaluation of the Geotechnical Properties of a Paper  Mill
       Sludge for Use in Landfill Covers"  Master of Science Thesis, Rensselaer
       Polytechnic Institute, Troy, NY.

Pepin, R.G. (1984). "The Use of Paper Mill Sludge as a Landfill Cap"  Proceedings
       of the 1983 NCASI Northeast Regional Meeting, NCASI, New York, NY.

Raghu, D., Hsieh, H.N., Neilan,  T., and Yih, C.T. (1987), "Water Treatment Plant
       Sludge as Landfill Liner," Geotechnical Practice for Waste Disposal 87.
       Geotechnical Special Publication No.  13, ASCE, New York, NY, 744-757.

Stoffel, C.M. and Ham, R.K. (1979). "Testing of High Ash Paper Mill Sludge for Use
       in Sanitary Landfill Construction"   Prepared for the City of Eau Claire,
       Wisconsin by Owen Ayers and Assoc., Inc. Eau Claire, WI.

Swann, C.E. (1991). "Study Indicates Sludge Could Be Effective Landfill Cover
       Material" American Papermaker, pp. 34-36.

Wang, M.C., He, J.Q., and Joa, M. (1991). "Stabilization of Water Plant Sludge for
       Possible Utilization as Embankment Material"  Report, Dept. Civil Engineering,
       The Pennsylvania State Univ., PA.

Zimmie, T.F., Moo-Young, H.K., and LaPlante, K. (1993). "The Use of Waste Paper
       Sludge for Landfill Cover Material."  Proceedings from the Green '93-Waste
       Disposal by Landfill Symposium. Bolton Institute, Bolton, UK.

Zimmie, T.F., Moo-Young, H.K., Harris, W.A., and Myers, T.J. (1994).
       "Instrumentation of Landfill Covers to Measure Depth of Frost Penetration."
       Transportation Research Record: Innovations in Instrumentation and Data
       Acquisition Systems. 1432: 44-49.

Zimmie, T.F., and Moo-Young, H.K. (1995). "Hydraulic Conductivity of Paper
       Sludges Used For Landfill Covers"  GeoEnvironmental 2000, Eds. Yalcin B.
       Acar and David E. Daniel, ASCE Geotechnical Special Publication No. 46,
       New Orleans, LA. 2: 932-946.

Table 1  Summary of Water Content, Organic Content, Specific Gravity, and Average
         Initial Permeability

SLUDGE   WATER CONTENT   ORGANIC CONTENT   SPECIFIC     AVERAGE INITIAL
         (%)             (%)               GRAVITY      PERMEABILITY (cm/sec)

A        150-230         45-50             1.88-1.96    1.0 x 10^-7
B        236-250         50-60             1.83-1.85    1.0 x 10^-7
C1       255-268         50-60             1.80-1.84    1 x 10^-6
C2       183-198         45-50             1.90-1.93    3 x 10^-7
C3       222-230         40-45             1.96-1.97    1 x 10^-7
D        150-185         42-46             1.93-1.95    1 x 10^-6
E        150-200         40-45             1.86-1.88    5 x 10^-6

Table 2  Summary of Laboratory Permeability Tests on Insitu Samples

SAMPLE   DATE            PERMEABILITY    WATER CONTENT
                         (cm/sec)        (%)

1        JULY 1991       1.06 x 10^-7    190
2        OCTOBER 1991    4.0 x 10^-8     185
3 (a)    APRIL 1992      4.47 x 10^-8    106
4        APRIL 1992      4.2 x 10^-7     220
5 (b)    JANUARY 1993    3.4 x 10^-8     107
6 (c)    JULY 1993       3.8 x 10^-8     91.5

(a) nine months   (b) eighteen months   (c) twenty-four months

FIGURE 1  PROCTOR CURVE FOR VARIOUS SLUDGES
          [Compaction (Proctor) curves over water contents of 50-300% for sludges A, B, D, and E.]

FIGURE 2  CONSOLIDATION TEST ON SLUDGE A
          [Consolidation curves versus vertical pressure (0.01-10.00 tsf) for samples molded at
          water contents of 190%, 180%, 166%, 134%, and 106%.]

FIGURE 3  COMPRESSION INDEX AND ORGANIC CONTENT RELATIONSHIP
          [Compression index versus organic content (10-60%) for sludges A, B, C1, C2, C3, and E.]

FIGURE 4  COEFFICIENT OF COMPRESSIBILITY AND ORGANIC CONTENT RELATIONSHIP
          [Coefficient of compressibility (0.000-0.025) versus organic content (10-60%) for
          sludges A, B, C1, C2, C3, and E.]

FIGURE 5  RELATIONSHIP FOR THE ORGANIC CONTENT AND PERMEABILITY
          [Permeability (log scale, cm/sec) versus organic content (20-80%) for sludges A, B, C,
          D, and E, with the average line and the 95% prediction interval.]

FIGURE 6  PERMEABILITY VS. VOID RATIO RELATIONSHIP
          [Void ratio versus permeability (log scale, cm/sec) for sludges C3 (250%), C2 (190%),
          C1 (250%), A (195%), D (150%), and D (190%); percentages are molding water contents.]

 94

               DETERMINATION OF CONTROL LIMITS FOR
        ANALYTICAL PERFORMANCE EVALUATION IN U.S. DOE'S
           RADIOLOGICAL QUALITY ASSESSMENT PROGRAM

Vivian  Pan,  Chemist, U.S.  Department of Energy,  Environmental Measurements
Laboratory, Analytical Chemistry Division, 376 Hudson Street, New York, NY  10014,
(212) 620-3601

ABSTRACT

The Environmental Measurements Laboratory (EML) administers a semi-annual Quality
Assessment Program (QAP) for the U.S. Department of Energy to assess the quality of
environmental radiological data that are generated by its contractors.  Participation in EML
QAP is required under DOE Order 5400.1 for all laboratories providing monitoring and/or
surveillance support to DOE sites.  Furthermore, analytical laboratories  supporting
DOE/Environmental Management (EM) Program activities are required to participate in
QAP under an EM memorandum issued in 1993 (P. Grimm, Memorandum March 5,
1993).  Beginning with QAP41 (9/1994), all participants' analytical QAP performance will
be evaluated based on control limits derived from EML's historical radioanalytical QAP
data from 1982 through 1992.

The historical data comprise performance-based analytical measurements of radionuclides
in environmental matrices of air filter, soil, vegetation, and water.  The analytes for air
filter are 7Be, 54Mn, 57Co, 60Co, 90Sr, 134Cs, 137Cs, 144Ce, 234U, 238U, 238Pu, 239Pu, and
241Am; for soil, 40K, 90Sr, 137Cs, 226Ra, 234U, 238U, total U, 238Pu, 239Pu, and 241Am; for
vegetation, 40K, 60Co, 90Sr, 137Cs, 234U, 238U, 238Pu, 239Pu, and 241Am; and for water,
3H, 54Mn, 57Co, 60Co, 90Sr, 134Cs, 137Cs, 144Ce, 234U, 238U, total U, 238Pu, 239Pu, and
241Am.  These radioanalyte/matrix pairs are evaluated on the basis of reported variations
among intercomparisons over time as well as on possible correlations of the variations with
activity levels.  Results from the data analysis show that the variability of the environmental
matrices decreases in the order soil > vegetation > air filter > water.  This order may be due
to the structural complexities of soil and vegetation, which are natural matrices, whereas
the air filter and water matrices are spiked synthetic matrices (no interferences).

       The QAP control limits are established from percentile distributions of cumulative
historical reported values that are normalized to EML's values.  The operational criteria
developed for QAP performance are based on observed analytical capabilities for
individual radioanalyte/matrix pairs over the ten-year history of the program.  The middle
70% of all historical reported values per analyte/matrix pair has been established as
"acceptable", and the next 10% on either side of that 70% is "acceptable with warning".
Reported values less than the 5th percentile or greater than the 95th percentile are
established as "not acceptable."  These control limits derived from the historical
radioanalyte/matrix percentiles have been used to evaluate the QAP41 (9/1994) data
(Figure).  Results of the evaluation show that the performance proportions observed for the
QAP41 data are consistent with those of previous QAP intercomparisons evaluated using
±20% and ±50% limits.  Further discussion of this topic appears in Pan, V. (1995), Analysis
of EML QAP Data from 1982-1992: Determination of Operational Criteria and Control
Limits for Performance Evaluation Purposes, U.S. DOE Report EML-564, New York.
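
The percentile-based limits can be stated compactly: for each radioanalyte/matrix pair, the
5th, 15th, 85th, and 95th percentiles of the normalized historical results bound the "not
acceptable", "acceptable with warning", and "acceptable" regions.  The sketch below is a
minimal illustration of that classification, assuming the historical values have already been
normalized to the EML value; the function and variable names are illustrative and are not EML's.

    import numpy as np

    def control_limits(normalized_history):
        """Percentile-based control limits from historical reported/EML value ratios."""
        p5, p15, p85, p95 = np.percentile(normalized_history, [5, 15, 85, 95])
        return p5, p15, p85, p95

    def classify(value, limits):
        """Classify one normalized result against the control limits."""
        p5, p15, p85, p95 = limits
        if p15 <= value <= p85:
            return "Acceptable"                 # middle 70% of historical values
        if p5 <= value <= p95:
            return "Acceptable with warning"    # next 10% on either side
        return "Not acceptable"                 # below the 5th or above the 95th percentile

    # Hypothetical historical ratios for one radioanalyte/matrix pair
    history = np.random.default_rng(0).normal(1.0, 0.15, 500)
    limits = control_limits(history)
    print(classify(0.97, limits), classify(1.40, limits))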

[Figure:  9/1994 QAP Summary of Evaluations of 2596 Reported Analyses.  Pie charts show the
proportions of Acceptable (A), Acceptable with Warning (W), and Not Acceptable (N) results for
Air Filter (855 analyses), Soil (484 analyses), Vegetation (398 analyses), Water (859 analyses),
and for all analyses combined.]

                                                                   95
EM  QUALITY ASSURANCE ASSESSMENTS FOR ENVIRONMENTAL SAMPLING
                        AND ANALYSIS


Hemant Pandya,  Environmental Scientist,  U.S.  Department  of
Energy, Environmental Measurements Laboratory, New York,  New
York 10014-3621.
William  R.  Newberry,  Program Manager,  U.S.  Department  of
Energy, Office of Environmental Management, EM-263,  Cloverleaf
Building,  19901  Germantown Road, Maryland  20874-1290.


ABSTRACT
The collection of credible and cost-effective environmental data is critical to the
success of environmental management (EM) programs at DOE facilities.  A well-
established and well-supported assessment program is likewise essential to the
characterization, remediation, and post-closure monitoring activities at those
facilities.  The Office of Environmental Management, Analytical Services Division
(EM-263), along with DOE's Environmental Measurements Laboratory (EML) and the
Radiological and Environmental Sciences Laboratory (RESL), has developed a
comprehensive program to conduct independent assessments of EM analytical
laboratory and field sampling activities and the associated quality assurance (QA)
implementation.  The assessment is designed to address both compliance and
performance issues.  This balanced approach will assess the existence, adequacy,
implementation, and effectiveness of the QA elements.

The program was developed for technical, QA, sampling, and data management
assessors.  It employs a line-of-inquiry interview approach rather than the more
common checklist approach of a compliance audit.  Assessment standards for the
interviews related to laboratory and sampling activities have been developed.  The
two most important features of the program are: (1) the use of technical and
management assessors; and (2) that it is not regulatory compliance driven (i.e., no
checklists).  The end result will be the production of quality data through
improvement of the technical and QA processes.

In order to provide  guidance that  is  comprehensive  enough to
address  the various  aspects of  environmental  sampling and
analysis activities  and  to  provide  criteria  leading  to
consistent EM assessments across the DOE  complex, six separate
documents were issued. Performance  objectives  and  criteria
have  been developed which  establish the basis for assessment
findings. The performance objectives also provide criteria for
consistent  assessment.   The  performance   objectives  and
associated criteria are  directly related to QA guidance for
laboratory and sampling  activities  which was promulgated by
the Department to support EM activities. This QA guidance was
developed to ensure  that the quality  of environmental data
produced  is  systematically  documented  and  can  be  easily
verified, making the  data readily  acceptable  to  regulatory
agencies and to the public.

The assessment program is designed to assess appropriate DOE
field organizations and EM contractors. It can also serve as
guidance for the assessment program at any facility performing
analytical laboratory and field sampling activities.
                                                                                                          96
A PATTERN RECOGNITION BASED QUALITY SCORING COMPUTER PROGRAM

Andrew D. Sauter, A.D. Sauter Consulting, 217 Garfield Dr., Henderson, Nevada, 89014

ABSTRACT

QC criteria for environmental data often have explicitly delineated acceptance windows.  When data is out of a
stipulated tolerance, various penalties might be applied that require sample reanalysis or a payment penalty.  When
such problems arise, and the criteria are not technically based, as is often the case, everyone loses.  Data quality
decision dilemmas of this nature are not uncommon, and they result in adverse economic impact on regulators,
analytical laboratories, and others.  It is a true lose-lose situation when perfectly acceptable data is classified as out
of specification because the criteria are incorrect.  This quandary often arises due to the nature of the approach
taken to score data against the quality criteria.  Essentially, the "in vs. out" nature of the specification is binary in
form.  However, the data is multivariate in nature and does not fit the binary decision model.

In this presentation, we demonstrate a simple new software tool that employs pattern recognition techniques that
allow one to compare analytical data across organic and inorganic results for standards and samples. We
demonstrate how data can be "resurrected" using a pattern recognition approach that provides alternate scales of
comparison for multivariate data sets, be they in or  out of specification. We show how a quality matching factor
system can be employed to score the results for any set of environmental data.  We demonstrate how the program
uses a point-and-click approach to transform, weight, and otherwise modify organic, inorganic, and other data types
to easily provide alternate perspectives  on the information. We show how a more  informed data perspective results
from such comparisons.  We demonstrate how, through this approach, one can bring common sense to
environmental data analysis and save significant funds by using alternate data quality classification schemes.  We
discuss the powers and limitations of a pattern recognition based quality scoring approach and we propose the
adoption of such a technique for examining environmental data quality scoring when traditional data analysis
methods incorrectly classify data.
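
The abstract does not give the quality matching factor formula; the sketch below merely illustrates one
pattern-style comparison of the kind described (log-transformed results, an analyte-by-analyte difference
plot, and a correlation-style matching score).  The names, the data values, and the choice of score are
illustrative assumptions, not the program's actual algorithm.

    import numpy as np

    def log_pattern(concentrations, floor=1e-3):
        """Log-transform one sample's analyte concentrations (floor avoids log of zero)."""
        return np.log10(np.maximum(np.asarray(concentrations, dtype=float), floor))

    def matching_score(sample_1, sample_2):
        """Illustrative matching score: Pearson correlation of log-concentration profiles."""
        x, y = log_pattern(sample_1), log_pattern(sample_2)
        return float(np.corrcoef(x, y)[0, 1])

    # Hypothetical ICP/AES results for two soil samples (same analyte order in both)
    soil_1 = [12000, 350, 8.2, 140, 0.9, 2700]
    soil_2 = [11500, 410, 7.5, 120, 1.1, 2500]
    diff_plot = log_pattern(soil_1) - log_pattern(soil_2)   # analyte-by-analyte difference
    print(np.round(diff_plot, 2))
    print(f"matching score = {matching_score(soil_1, soil_2):.3f}")
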
            Figure 1 - Comparison of analysis of two actual soil samples by ICP/AES.
                     The top two graphs show the analyte number vs. log of
                     concentration. The lower left graph shows the difference plot of
                     analyte vs.  log of concentration, and the lower right graph is a
                     scatter plot of log of concentration for Sample 1 plotted against log
                     of concentration for Sample 2.  All graphs can be displayed full
                     screen and can be edited by the user.
 97
                      Automated Data Assurance Manager (ADAM)
Taryn G. Scholz, Louise McGinley, and Donald A. Flory, Ph.D., Quality Assurance Associates,
2050 North Loop West, Suite 201, Houston, Texas 77018; Lyn Manimtim and Donald R. White,
RCG Information Technology, 1900 North Loop West, Suite 200, Houston, Texas, 77018.

ABSTRACT

Data management and data quality assessment are inextricably linked functions that are among the most
important aspects of any environmental analysis monitoring program. Data management begins
with preparation of sampling documents, proceeds through data quality assessment, and finishes
with storage and retrieval of technically valid, legally defensible data of known and documented
quality. Data quality assessment is a determination of the suitability of the data for the intended
use and includes  the  four  major tasks  of data management, data  validation, data
qualification/review (flagging),  and finally, the determination  of suitability which  must  be
consistent with the intended use of the analytical data. We have developed a software system,
called  ADAM,  which provides  automated  data management  and data  quality  assessment
functions. ADAM is the first comprehensive sample data management system to include all of
the following outstanding major features:

        1)  cradle-to-grave sample documentation using pre-printed forms
        2)  field data entry
        3)  analytical data importing (manual, electronic, or combined)
        4)  automated data validation and qualification
        5)  data reduction and reporting
        6)  data storage/archiving
        7)  laboratory invoice checking.

ADAM operates in Microsoft Windows® and utilizes software that will be industry compatible for
many years into the future (Microsoft Visual Basic® and the Microsoft Access® relational
database). ADAM is a state-of-the-art system that can be easily tailored to any site or project
through a user-friendly menu system. ADAM relies heavily on dictionary or maintenance tables
which  reduce  repetitive data  entry. The data  validation performed by ADAM  includes all QC
checks (calibration  response,   internal  standard area  reproducibility,  surrogate  recovery,
precision, accuracy, etc.)  for  the  major analysis  methods (SW-846, EPA-CLP,  Standard
Methods, and  EPA's 200, 300,  500, and 600  series). Data qualification can be performed in
accordance with the EPA's Functional Guidelines for laboratory data review  or  any project
specific guidelines. ADAM operates on the direct data output of the laboratory  instrumentation
thus reducing transcription and  calculation errors. The design of ADAM provides for maximum
flexibility and includes procedures that (1) accommodate different QC sample naming protocols
by different laboratories, (2) allow for entry of client-specific analytical method protocols, and (3)
handle varying degrees (levels) of QC.

ADAM is an invaluable tool to support the task of determination of suitability of analytical data for
its specified intended use. The  design, construction,  and  implementation of ADAM  provides a
range of benefits unavailable in any other commercial software system. The major benefits of
ADAM include flexibility, improved data defensibility, open database connectivity, and improved
efficiency which results in lower costs.
1.0    INTRODUCTION

In  recent years the environmental arena has expanded dramatically, pushing technology to new
limits  and  encouraging  innovation  in  an  ever  changing  regulatory  atmosphere.  One
environmental issue dominates all aspects of this dynamic arena: the science of identifying and
quantitating regulated chemical species, or laboratory analysis.  A tremendous amount of
resources and capital is expended based on laboratory analytical results, and the legal aspects
surrounding environmental issues make it imperative that these results be "of known and
acceptable quality".  To  achieve this goal,  data  quality  assessment procedures  which are
standardized and uniform must be applied to all environmental analysis data. The only practical
means of attaining complete, uniform data quality assessment is to  use an automated software
application that  processes electronic laboratory  analysis results for QC  and environmental
samples. Any such automated software application must be able to provide efficient sample data
management and tracking, process data from any laboratory, validate to control limits for all the
major  environmental methods,  incorporate  new methods or project specific control  limits,
accommodate different QC protocols and agency standards, securely maintain large volumes of
data, and easily communicate with other software. The Automated Data Assurance Manager
(ADAM), which utilizes Microsoft Visual Basic and the Microsoft Access relational database in a
Windows environment, is such a software application.

2.0     MAJOR FEATURES

Data quality assessment is the determination of the suitability of chemical analysis data for the
intended use and includes four major tasks:

    1)  Data Management - sample documentation and tracking.
    2)  Data Validation - verifying that the laboratory  has complied with all QC data quality
        requirements (QC Checks) of the specified analytical method.
    3)  Data Qualification/Review - flagging the data to  reflect any failures  to meet the data
        quality requirements according to a set of pre-established functional guidelines.

    4)  Suitability  Determination -  determining the suitability of  the qualified  data for the
        intended use.
ADAM is a comprehensive automated system which includes the  following  major features to
support these tasks:
    1)  Project Data Management

    2)  Importing of Analytical Data

    3)  QA/QC Data Validation

    4)  QA/QC Data Reduction

    5)  QA/QC Data Flagging

    6)  Completeness Calculations

    7)  Laboratory Invoice Checking

    8)  QA/QC Reporting

    9)  Data Storage and Archiving
Figure 1 shows the Main Menu used to access the major features. Each feature is designed to
provide for maximum flexibility while insuring the integrity of existing functions and allowing for
easy incorporation of new functions. System flexibility and integrity are provided through the use
of "maintenance" tables.  These  tables  define  analytes,  analytical  method  control  limits,
parameters, parameter groups (e.g. RCRA metals), project detection limits, QC levels, and utility
information such as company addresses and  laboratory analysis costs).  User-friendly entry
screens make it easy to customize the  maintenance tables for a particular project. Once set  up,
the tables are  used to reduce the amount  of repetitive data entry thus reducing errors and
providing consistency.
Incorporation  of new functions is facilitated by the  use of "system" tables. Instead of hard code,
these tables are used to define procedural elements such as processing order and processing
steps to be included. Like the maintenance tables, the system tables can be customized on a
project or system level.
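
As an illustration of this table-driven design (a sketch, not ADAM's actual schema), control limits
and processing steps can be represented as plain data records that the processing code looks up at
run time, so adding a method or step means adding rows rather than changing code.  The QC level
name is an example only; the limit values echo those shown later in Table 3.

    # Sketch of table-driven configuration (illustrative; not ADAM's real tables).
    # "Maintenance" rows hold method control limits; "system" rows hold the ordered
    # processing steps for a given QC level.
    MAINTENANCE_CONTROL_LIMITS = [
        # (method, qc_check, element, low, high)
        ("SW-846 8240A", "Surrogates", "Recovery %", 76, 114),
        ("SW-846 8240A", "Internal Standards", "Area % of CCAL", 50, 200),
    ]

    SYSTEM_PROCESSING_STEPS = [
        # (qc_level, order, step)
        ("Level III", 1, "check holding times"),
        ("Level III", 2, "check continuing calibration"),
        ("Level III", 3, "check surrogate recoveries"),
    ]

    def steps_for(qc_level):
        """Return the processing steps for a QC level in table-defined order."""
        rows = [r for r in SYSTEM_PROCESSING_STEPS if r[0] == qc_level]
        return [step for _, _, step in sorted(rows, key=lambda r: r[1])]

    print(steps_for("Level III"))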

Each major feature is discussed in more detail in the following sections.

2.1    PROJECT DATA MANAGEMENT

Centralized data management is essential to the success of an environmental project and must
include cradle-to-grave sample documentation and tracking.  ADAM is designed to facilitate the
preparation of  sample  control documentation;  tracking of samples from  collection through
disposal; acceptance of field and analytical sample data; data searching, sorting, and  editing;
data storage and security; transfer of data; and reporting of data.

Sample tracking is initiated in the system by creating a Chain-of-Custody  (COC) record and
continues  via manual entry of completion  dates (i.e.  when  samples are  received  by the
laboratory,  analytical results are received by the project, validation report is submitted to the
user,  samples are disposed of, etc.). A  COC  record is created for both internal sample sets and
extant sample  sets. For internal sample  sets, the COC record is used  to print an Analysis
Request/Chain-of-Custody (ARCOC) report which is given to the sampler and executed in the
field.  For extant sample sets, the  COC record is  used to enter information off a  report from
another system, i.e. the laboratory's or  project's, that is needed for sample tracking  and  QA/QC
processing.

To create  a COC record, ADAM begins  by  assigning a  unique ADAM COC  Number and Set
Number. (For extant sample sets, the extant numbers are also carried.) ADAM Sample Numbers
are assigned  by adding an incrementer to the Set  Number. The ADAM Set  Number includes a
four-character Set Group Code which is assigned by the database manager. This can be used to
provide for easy recognition of sample sets and to group data  for statistical calculations.
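
Based on the set and sample numbers visible in Figures 2 and 3 and in Table 3 (set WELL000003
with samples WELL000003 01 and 02), the numbering scheme can be sketched roughly as follows;
the helper names and the zero-padding widths are assumptions made for the example.

    def make_set_number(set_group_code, set_sequence):
        """Set Number = four-character Set Group Code plus a zero-padded sequence,
        e.g. 'WELL' + 3 -> 'WELL000003' (pattern inferred from the figures)."""
        assert len(set_group_code) == 4, "Set Group Code is four characters"
        return f"{set_group_code}{set_sequence:06d}"

    def make_sample_numbers(set_number, sample_count):
        """Sample Numbers = the Set Number plus a two-digit incrementer."""
        return [f"{set_number} {i:02d}" for i in range(1, sample_count + 1)]

    set_number = make_set_number("WELL", 3)
    print(make_sample_numbers(set_number, 2))   # ['WELL000003 01', 'WELL000003 02']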

Wherever possible, all remaining information for the COC record is entered  using maintenance
tables (see Figure 2 and  Figure  3).  This  includes project and laboratory  addresses,  SOP
references, the QC level, and parameters requested.  Additionally, existing container lists,
sample information,  or entire COCs can be used to create a new unique COC. In this manner,
the database manager can set up a "template" to be used for recurring sampling events such as
quarterly well  monitoring.

Any field on the COC record can be left blank and a partially completed ARCOC report printed
and given to the sampler as instructions. For some projects, it is helpful to pre-print ARCOCs for
all sampling events scheduled for the week and forward them to the sampling team to direct their
efforts. By creating COCs in the system for scheduled sampling events, it is also possible to
develop reports showing projected laboratory analytical costs.
2.2    IMPORTING OF ANALYTICAL DATA

The common practice of manually entering analytical data into a computer database greatly
increases the error and effort for a project. Importing electronic analytical data facilitates data
manipulations such as statistical calculations, graphing, etc. and,  most importantly, automated
QA/QC data validation and review. The ADAM system  includes  applications of an importing
software  which can import virtually any electronic laboratory  analysis results and QC data
formats. This includes data for analyses on four different instrument types: GCMS including air
volatile organic compounds (VOC), volatile, and semi-volatile organic analyses;  GC including
PCB, pesticide, and herbicide analyses; Metals including ICP, AA, and cold vapor analyses; and
Miscellaneous Parameters including a variety of wet chemical tests.

ADAM is designed to handle either "raw" analytical data such as a GCMS quantitation report
produced by the instrument data system or "calculated" analytical data such  as a  results report
produced by the laboratory data management system, CLP software, etc.  In  addition to the
analytical data, ADAM requires  sample-to-QC sample references. This data can be imported
from a laboratory report or taken from the instrument run log and extraction lab log.

All data is first imported  into holding tables. If electronic analytical data is not available, these
tables can be  used for manual  entry of the  analytical  data.  The  import routine  includes
procedures that verify no duplicate records exist and check data formats by  applying validation
expressions and/or numerical ranges. Procedures are also included that assign a Sample Type
(e.g., CCAL, MS, BLANK) to each sample analysis and the ADAM Set Number and Sample
Number to each extant sample analysis. Sample Types, which are used by ADAM for QA/QC
processing, are defined in an ADAM system table and thus can be varied depending on a
particular laboratory's naming convention. Finally, ADAM copies the data into permanent tables.
Figure 4 shows an example of a GCMS quantitation  report downloaded as an ASCII text file from
a  Finnigan  data system  and Figure 5  is the resultant Access table obtained from the import
routine.
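
The import step described above (holding tables, a duplicate-record check, and format checks by
validation expression or numerical range) can be sketched as follows.  This is a simplified stand-in,
not ADAM's Visual Basic code, and the field names and patterns are invented for the example.

    import re

    HOLDING_TABLE = []    # records staged before being copied to permanent tables
    SEEN_KEYS = set()     # supports the duplicate-record check

    def validate_record(rec):
        """Apply simple format checks: a validation expression and a numerical range."""
        if not re.fullmatch(r"[A-Z]{4}\d{6} \d{2}", rec["sample_number"]):
            return "bad sample number format"
        if not 0.0 <= rec["amount"] < 1e6:
            return "amount out of numerical range"
        return None

    def import_record(rec):
        """Stage one analytical result, rejecting duplicates and bad formats."""
        key = (rec["sample_number"], rec["analyte"])
        if key in SEEN_KEYS:
            return f"duplicate record for {key}"
        error = validate_record(rec)
        if error:
            return error
        SEEN_KEYS.add(key)
        HOLDING_TABLE.append(rec)
        return "imported"

    print(import_record({"sample_number": "WELL000003 01",
                         "analyte": "Trichloroethene", "amount": 8.589}))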

2.3    QA/QC DATA VALIDATION

Data validation  is a process to verify that the laboratory has complied with all QC data quality
requirements (QC Checks) of the specified analytical method. The QC Checks are defined in
terms of data quality objectives  (DQOs) which include both procedural requirements, such as
calibrating the instrument each shift and numerical requirements, such as accuracy and precision
control limits.  ADAM  performs  an automated QA/QC data validation  for analyses on four
different  instrument types: GCMS, GC, Metals,  and Miscellaneous Parameters. The system
includes procedures in the code and control limits in the maintenance tables for all of the QC
Checks in Table 1. The coded procedures are based on the USEPA's National Functional
Guidelines for Data Review.  The maintenance tables include control limits for the major analytical
methods  listed in Table 2. Updates to these methods or the addition of control limits for new or
project-specific methods are easily accomplished via control limit entry screens.

ADAM performs the QA/QC data validation in two steps: QA/QC Pre-processing and QA/QC
Processing. Both steps  are completed for the current  Set Number for  each sample and
parameter and at the QC Level indicated on the COC. Based on the QC Level, an ADAM system
table is used to define which QC Checks are to be included and for each QC Check which type
of QC Sample is required (see Figure 6).
The first step, QA/QC  Pre-processing, verifies that all procedural requirements have been met
by checking that all sample results, both environmental and QC, have been received from the
laboratory and that maximum sample-to-QC sample  ratios have not been exceeded. ADAM uses
the system table mentioned above to determine which types of QC Samples are required (e.g.,
MS, LCS, CCAL, etc.). If any data is found to be missing or invalid, an exception report is
printed. If no data is found to be missing, the Set Number is ready for QA/QC Processing.

 QA/QC Processing verifies that all numerical requirements have been met by adjusting the units
 of the analytical data  to those specified in the  control limits, calculating  the  necessary QC
 Elements (e.g. %RSD) using raw analytical data or retrieving them from calculated analytical
 data, and comparing each to the control limit. QA/QC processing is performed in a sub-database
 which includes only the imported data and the required maintenance tables.  This design feature
 results in a fixed processing time regardless of the data stored in the system. Again, ADAM uses
 the  system table mentioned above to determine which QC  Checks are to be included. All
 outcomes for the QA/QC Process are printed to a QC Failure Report or the QC Summary Report
 (see Section 2.8) and stored for use in QA/QC Data Reduction, Flagging, and Reporting.
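
A single numerical QC Check of the kind described (adjust units, calculate or retrieve the QC
Element, and compare it to the maintenance-table control limit) reduces to a small amount of logic.
The sketch below is illustrative only; the surrogate recovery limits echo the d4-1,2-dichloroethane
limits shown in Table 3.

    def qc_check(value, low, high):
        """Compare one QC Element (e.g., a surrogate recovery) to its control limits."""
        return "pass" if low <= value <= high else "fail"

    def percent_rsd(responses):
        """%RSD of replicate calibration responses, a typical calculated QC Element."""
        mean = sum(responses) / len(responses)
        variance = sum((r - mean) ** 2 for r in responses) / (len(responses) - 1)
        return 100.0 * variance ** 0.5 / mean

    print(qc_check(88, 76, 114))                            # surrogate recovery: pass
    print(round(percent_rsd([0.98, 1.02, 1.05, 0.97]), 1))  # about 3.7 %RSD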

 2.4     QA/QC DATA REDUCTION

 Data Reduction is the process of performing calculations on the analytical data to obtain reported
 amounts that are printed on the QA/QC Reports and exported to the user. Presently, the ADAM
 system includes Data Reduction Steps for calculating an analysis detection limit which is the
 reported amount for non-detects and for combining an original and diluted analysis pair into  a
 single set of reported amounts.  Analysis  detection limits  are  calculated  using the sample
 correction factor and the Method Detection Limit. This step is essential if raw analytical data from
 a quantitation  report that shows only hits is received  and  imported. The  combination of an
 original and diluted analysis is performed analyte-by-analyte taking the result that is within the
 calibration range. Like QC Checks, Data Reduction Steps are defined in an ADAM system table
 based on the  QC Level. Therefore, it is easy to incorporate new functions such as blank
 subtraction or Air front plus back tube addition as needed for project customization.
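
The two reduction steps named above can be expressed compactly: a sample-specific detection
limit from the sample correction factor and the Method Detection Limit, and an analyte-by-analyte
merge of an original and diluted analysis that keeps the result falling within the calibration range.
The sketch below is an illustration of those rules, not ADAM's code, and the numbers are invented.

    def analysis_detection_limit(method_detection_limit, sample_correction_factor):
        """Reported amount for a non-detect: the MDL scaled by the sample correction factor."""
        return method_detection_limit * sample_correction_factor

    def merge_original_and_dilution(original, diluted, cal_high):
        """Combine an original/diluted analysis pair analyte by analyte, keeping the
        result that falls within the calibration range (here, at or below cal_high)."""
        return {analyte: (amount if amount <= cal_high else diluted[analyte])
                for analyte, amount in original.items()}

    print(analysis_detection_limit(5.0, 1.0))       # 5.0 ug/l reported for a non-detect
    print(merge_original_and_dilution({"TCE": 450.0}, {"TCE": 430.0}, cal_high=200.0))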

 2.5     QA/QC DATA QUALIFICATION/REVIEW (FLAGGING)

 Data qualification or data review, also known as data flagging, is a process to apply qualifying
 flags to each sample to reflect any failures found in QA/QC Data Validation according to a set of
 functional guidelines. QA personnel can then determine  if the qualified data is suitable for the
 intended use. Presently, ADAM includes procedures for data flagging according to the USEPA's
 National Functional Guidelines for Data Review (i.e. using U,J,D,B,N,R). The  procedures include
 flagging each sample to reflect any failure for the sample itself and any failure of a QC sample
 referenced to the sample. Like QC Checks and Data Reduction Steps, Data  Flagging  is defined
 in an ADAM system  table  making it easy to incorporate  different functional  guidelines for
 flagging as needed for project customization.
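
As a simplified illustration of sample-level flagging against functional guidelines (again, not
ADAM's implementation, and with an abbreviated rule set), each failure found during validation
contributes a qualifier to the affected result:

    def flag_result(amount, detection_limit, blank_hit, surrogate_failed, grossly_unusable):
        """Apply a few National Functional Guidelines style qualifiers (U, J, B, R)."""
        flags = set()
        if grossly_unusable:
            flags.add("R")      # result rejected
        if amount < detection_limit:
            flags.add("U")      # not detected above the reporting limit
        if blank_hit:
            flags.add("B")      # analyte also found in an associated blank
        if surrogate_failed:
            flags.add("J")      # result treated as estimated
        return "".join(sorted(flags))

    print(flag_result(100.0, 10.0, blank_hit=True, surrogate_failed=True,
                      grossly_unusable=False))      # -> 'BJ'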

 2.6     COMPLETENESS CALCULATIONS

 Completeness is the yardstick of any Quality Assurance program. Completeness is defined as
 the percentage of samples which  pass a specific QC Check. ADAM calculates completeness for
 all QC Checks included in the QA/QC Data Validation and prints a completeness report. The
 calculations are performed on a select group of samples chosen by parameter, Set Group Code,
 collection date, laboratory,  project, client, etc. The ADAM system includes a  mechanism by
 which QA  personnel can reject data for a  sample analysis with gross  QC failures and thus
exclude it from export to the user and completeness calculations.

2.7    LABORATORY INVOICE CHECKING

The  ADAM  system  includes an invoice checking feature which  calculates  invoices  using the
 number of samples and parameters called out on the COC, the laboratory analysis costs stored
 in the maintenance tables,  and  any applicable  penalties or  surcharges.  Invoice subtotals and
totals are calculated, and an invoice report is printed for comparison to the laboratory invoice. The
calculations are performed on a select group of samples chosen by Set Number, analysis date,
project, client, etc. The ADAM system includes a mechanism by which QA personnel can reject
the invoice for a sample analysis with gross QC failures.
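
The invoice check amounts to recomputing what the laboratory should have billed from the COC
and the cost maintenance table.  A minimal sketch follows; the parameter names, unit costs, and
penalty treatment are examples only.

    # Unit analysis costs would come from the maintenance tables; these are examples.
    ANALYSIS_COSTS = {"VOA (SW-846)": 150.00, "8 RCRA Metals (SW-846)": 90.00}

    def expected_invoice(parameters_by_sample, penalty_fraction=0.0):
        """Expected invoice total from the COC, less any applicable penalty."""
        total = sum(ANALYSIS_COSTS[p]
                    for parameters in parameters_by_sample.values()
                    for p in parameters)
        return round(total * (1.0 - penalty_fraction), 2)

    coc = {"WELL000003 01": ["VOA (SW-846)", "8 RCRA Metals (SW-846)"],
           "WELL000003 02": ["VOA (SW-846)"]}
    print(expected_invoice(coc, penalty_fraction=0.10))   # compare to the laboratory invoice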

2.8    QA/QC REPORTING

ADAM can  print  or export a QC Summary Report that is  specific for  each instrument type
(GCMS,  GC,  Metals, Miscellaneous Parameters).  The report is intended  as an aid to  QA
personnel making the suitability determination. It is printed by Set Number and includes all types
of environmental  and QC samples, as specified in an ADAM system table, in order of analysis.
Table 3 shows an example of the GCMS  QC Summary Report for an environmental sample.
Custom report formats can easily be created from stored data in the ADAM system.

2.9    DATA STORAGE AND ARCHIVING

ADAM is designed to provide facilities for data storage and manipulation during the active phase
of a  project. Although not  designed to provide trend analysis and long-term data archival, the
system may be  equipped  with  sufficient  disk capacity  to support these functions. Since  the
system stores unprocessed  as well  as processed  (flagged) data, many useful project data
summaries may be generated using Microsoft Access query and reporting tools. Due to the open
database  connectivity (ODBC) supported by Microsoft Access, users  may also select from a
growing number of third-party applications for reporting and statistical functions.

3.0    CONCLUSIONS

We have found ADAM to be an  invaluable tool to  support the task of determination of suitability
of analytical data  for its specified intended use. The design, construction, and implementation of
ADAM provides a range of benefits unavailable in any other commercial  software system. The
major benefits of ADAM include flexibility, improved data defensibility, improved efficiency which
results in lower costs, and communicability with most commercial software.

Flexibility is realized through ADAM's ability to handle virtually any analytical method, set of QC
Checks (QC Level),  or laboratory electronic reporting format and a user friendliness which allows
easy setup of the desired  protocol. Maximum data defensibility is achieved  by cradle-to-grave
documentation, using unprocessed laboratory data, minimizing  manual data manipulation,
making it  practical (from the standpoint of cost) to validate and review all QC and sample data,
and the resultant elimination of errors of omission  prior to archiving of data. Increased efficiency
and lower costs result from the significantly lower labor costs needed for automated as compared
to manual data validation and review; and the accurate cost/work control which derives from the
sample tracking, scheduling,   and  invoice checking  features.  ADAM provides  an ideal
environment for evaluation of historical data by the  engineer/scientist or QA personnel through
its open database connectivity. QC and sample data  can  be easily imported to other software for
statistical calculations, graphing, trend analysis, etc.

                                   TABLE 1
                               ADAM QC CHECKS

Holding Times:             leach, extraction, analysis, tube (air)
Instrument Performance:    tune %RA, tune frequency, PEM concentration, PEM RPD,
                           PEM %breakdown, ICS %R, ICS frequency, analytical spike %R,
                           MSA coefficient, serial dilution %D
Initial Calibration:       concentrations, %RSD / %R, RRF, RT vs. ICAL average,
                           peak / RT vs. established, frequency
Continuing Calibration:    concentration, %D / RPD / %R, RRF, RT vs. ICAL average,
                           peak / RT vs. established, frequency
Blanks:                    contamination, frequency
Surrogates:                recovery, RT vs. ICAL average, RT vs. CCAL, corrective action
MS/MSDs:                   spike level, recovery, RPD, frequency, corrective action
Duplicates:                RPD, absolute difference, frequency
Lab Control Samples:       recovery, RPD, frequency
Internal Standards:        area, RT vs. CCAL, corrective action
Compound Identification:   RT vs. ICAL average, RT vs. CCAL, ion
Compound Quantitation:     amount

[In the original table, an X marks which of these checks apply to each of the four instrument
types: GCMS, GC, Metals, and Miscellaneous Parameters.  The grouping of check elements under
categories is reconstructed from the table's two-column layout.]

                                   TABLE 2
                          ADAM ANALYTICAL METHODS

GCMS:     Air*, VOA, SVA
GC:       PCB, Pesticides, Herbicides
Metals:   AA, CV, ICP
Misc:     BOD, Bromide, Chloride, COD, Coliform, Fluoride, Gross A, Gross B, NH3N,
          Nitrate, Nitrite, Nitrate/Nitrite, Odor, O&G, Phosph, Radium, Sulfide,
          Sulfate, Surfact, TDS, TKN, TOC, TOX, TPH, TSS, Turbid

[In the original table, an X marks the method sources that supply control limits for each
analysis type: CLP, EPA 100-400 series, EPA 500 series, EPA 600 series, Standard Methods,
and SW-846.]

*  Methods TO1 and TO2 in "Compendium of Methods for the Determination of Toxic Organic
   Compounds in Ambient Air," EPA-600/4-84-041.

                                        TABLE 3
                              EXAMPLE GCMS QC SUMMARY
                              Quality Assurance Associates

Set Number: WELL000003            Instrument Type: GCMS          Analysis Type: VOA
Analytical/Prep Method: 5030A/8240A          Laboratory Number: 95050124
Sample Number: WELL000003 02 (location P1 Out, grab, water, environmental sample)
Sample's MS/MSD: WELL000003 01
Collection Date: 5/15/95          Analysis Date/Time: 5/18/95, 09:05
Shift window: 3.5 hr (limit 12)   Analysis hold time: 3 days (limit 14)

Target analytes (ug/l, with qualifying flags):
  Chloromethane 10 U; Bromomethane 10 U; Vinyl chloride 7 J; Dichloromethane 100 JB;
  Carbon disulfide 110 J; 1,1-Dichloroethene 5 U; trans-1,2-Dichloroethene 50 R

Surrogates (recovery %, control limits; maximum failures allowed: 0):
  d8-Toluene 89 (88-110); 4-Bromofluorobenzene 90 (86-115); d4-1,2-Dichloroethane 88 (76-114)

Internal standards (recovery %, limits 50-200; maximum failures allowed: 0):
  Bromochloromethane 210; 1,4-Difluorobenzene 110; d5-Chlorobenzene 135
Internal standard retention times (scan, limits):
  Bromochloromethane 305 (300-330); 1,4-Difluorobenzene 373 (350-380);
  d5-Chlorobenzene 606 (580-610)

W = QC Failure Waived

[Screen capture of the ADAM main menu.  Top-level items: COC, Import, QA/QC Preprocess,
QA/QC Process, Completeness Calcs, Invoice Check, Queries, Reports, Archive, Tables, and Exit.
The Tables menu lists the maintenance tables: Analytes, Control Limits, Parameters, PGroups,
Project Detection Limits, QC Maintenance (QC Levels, QC Checks), and Utilities.]

                                    Figure 1.

                                ADAM Main Menu

[Screen capture of the Analysis Request and Chain of Custody (ARCOC) entry screen for set
WELL000003: COC number, extant COC, schedule date (05/15/95), requester, project and
reporting laboratory addresses, and requested parameters (8 RCRA Metals (SW-846), SVA
(SW-846), VOA (SW-846)).  [*] indicates items entered using the maintenance tables.]

                                    Figure 2.

                       Typical Entry Screens for ADAM COC

[Screen capture of the sample and container list for set WELL000003: samples WELL000003 01
(location P1 In) and WELL000003 02 (location P1 Out), grab samples of water, with container and
preservative entries (e.g., one 32-oz plastic bottle with HNO3; two 40-ml VOA vials,
refrigerated).]

                                    Figure 3.

                   Sample and Container List for ADAM COC

[Unprocessed quantitation report for sample H5000401 (data file H50004V01A) as downloaded from
the Finnigan data system: internal standards (pentafluorobenzene, 1,4-difluorobenzene,
d5-chlorobenzene, 1,4-dichlorobenzene-d4), surrogates (d4-1,2-dichloroethane, d8-toluene,
p-bromofluorobenzene), and target compounds (trichloroethene, tetrachloroethene), each with
quantitation mass, scan number, retention time, area, response factor, and calculated amount
in ug/l.]

                                        Figure 4.

              Unprocessed ASCII Results File from Finnigan GCMS.

[Microsoft Access datasheet view of the same H50004 records after import: one row per analyte
carrying the COC/set and sample numbers, data file, analysis type (VOA), method (8260), compound
code, scan number, area, calculated amount, units, and quantitation mass.]

                                        Figure 5.

            Datasheet View of Imported Data from Microsoft Access

                       Automated Data Assurance Manager

[Screen capture: ADAM QC level entry screen listing, for QC level F, analysis type VOA, and
instrument type GCMS, checks such as Blank Contamination Check (Established and Method), CCAL
Concentration Check, CCAL Max %D Check, CCAL Min RRF Check, CCAL Peak No. Check (vs. Established),
CCAL RRF Calculation Check, CCAL RRT Check (vs. Ave ICAL and vs. Established), Chromatogram Check,
Detection Limit Check, and Duplicate Precision Checks (CS/CSD and MS/MSD). A second screen assigns
the CS, lab blank, CCAL tune, CCAL, ICAL tune, and ICAL samples used by each QC check.]

                                  Figure 6.
                   Typical Entry Screens for ADAM QC Levels
                                    706

-------
                                                                                          98
                       THE DISKETTE DATA DILEMMA

Lisa Smith, Environmental Chemist, Rust Environment & Infrastructure, 4738 North 40th
Street, Sheboygan, Wisconsin 53083.

ABSTRACT

The growing trend in analytical data reporting includes submittal of the data in an
electronic format (on diskette).  Laboratories generally feel this is a minor request;
however, laboratories need to re-evaluate this process.  Discrepancies are often found in
the data: results submitted on hard copy do not match the data submitted on diskette.

Why are these discrepancies getting to the client?  The stringent quality control
procedures that apply to bench chemists generally are not the same for, or do not exist
for, the "computer people" generating the diskette.

The potential impact is enormous if these discrepancies are not detected.  At a minimum,
resampling is initiated.  It is very important that the laboratory community understand
the importance of reporting accurate analytical results.

What should labs be doing to avoid this problem?  Quality control procedures comparable
to those applied to the bench chemist's data reporting should also apply to the computer
personnel - someone needs to check their work.  This final quality control check should
include a comparison of hard copy results to diskette results.  Laboratories should have
Standard Operating Procedures (SOPs) for producing diskette deliverables; if they do not
have these SOPs, they should not attempt diskette reporting.

INTRODUCTION

Major decisions are made based on environmental data; therefore, the quality of these data
is very important.  Many types of QA/QC precautions may be incorporated into a project
with a Quality Assurance Project Plan (QAPP) and rigorous data validation procedures;
however, if hard copy data is not checked against electronically submitted data, reporting
discrepancies can occur.

How can reporting discrepancies affect the data?  If only rounding problems occurred
on the project, the problem may be minor; however, when positive results are reported
incorrectly, the analytical results may jeopardize the project.  The severity of the
discrepancy depends on the magnitude of the error in the reported results.

The purpose of this paper is to make data users aware of problems that may occur with
electronic data submittal.  Examples of reporting errors are given, along with a discussion
of the benefits of data validation, the regulations associated with analytical data reporting, and
                                              707

-------
possible solutions to the discrepancies found between hard copy data and data submitted
electronically.

EXAMPLES

There are many different types of errors found during data reporting.  Most of these errors
can be detected through the data validation process.  The first, and most often
encountered, type of reporting error is the rounding error.  Reports generated from
instruments and from external software packages (for CLP reporting) may not be the same as
the diskette deliverables generated from the Laboratory Information Management System
(LIMS).  An example of a rounding error follows:

On a recent project, a laboratory reported results on CLP forms, on laboratory-generated
reports (from the LIMS), and on diskette.  All three reports had different quantitation limits.
How did this happen?  It appears that the CLP forms were generated from the instrument
(GC/MS), the laboratory hard copy reports were generated from the LIMS with the results
rounded, while the results submitted on diskette did not undergo rounding.  Although
rounding discrepancies may not be as severe as other error types, they should not be
occurring.  The data user should not have to decide which number is correct; this can be
a time-consuming task.
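
One way to catch this kind of discrepancy before the deliverable leaves the laboratory is a
simple cross-check of the three reports.  The sketch below is only an illustration of the idea;
the sample and analyte names, field names, and rounding tolerance are assumptions, not part of
any particular LIMS or deliverable format.

    # Illustrative cross-check of one result reported three ways (CLP form,
    # LIMS hard copy, diskette). Names and the rounding tolerance are assumptions.
    def check_result(sample_id, analyte, clp_value, lims_value, diskette_value,
                     decimals=1):
        """Flag reported values that do not match exactly across deliverables."""
        values = {"CLP form": clp_value, "LIMS report": lims_value,
                  "diskette": diskette_value}
        if len(set(values.values())) > 1:
            # Distinguish rounding-only differences from real reporting errors.
            rounding_only = len({round(v, decimals) for v in values.values()}) == 1
            kind = "rounding difference" if rounding_only else "value mismatch"
            print(f"{sample_id} {analyte}: {kind} {values}")
            return False
        return True

    # Example: a diskette value that was never rounded before submittal.
    check_result("MW-01", "Trichloroethene", 8.6, 8.6, 8.589)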

The second type of error is generated when an analyst changes a value that has already
been reported to the client.  For example, the laboratory generated hard copy reports;
after these hard copy results were generated, the analyst changed the results; the diskettes
were then generated, and the results on the hard copy and the diskette did not match.  There
should be a mechanism at the laboratory to prevent analysts from changing results; results
should be final before submittal to the client.

An example of the third type of error occurs during QA review.  The laboratory QA
officer reviewed the hard copy CLP package; results were corrected manually on the hard
copy data; however, these changes were not incorporated into the electronic submittal of
the data.  This type of problem is easily found during data validation if the reviewer is also
checking the electronic data.  This is a crucial step during data validation and should not be
overlooked.

The fourth type of error occurs when an analyst fails to report a result correctly.  This
type of error can be detected by an experienced data reviewer.  Such errors include
misidentification of compounds, wrong dilution factors, and missing peaks on the
chromatogram (due to peak shape or extreme saturation).

It is seldom that projects get through the validation process without finding at least one
problem with the results reported.
                                              708

-------
DATA VALIDATION

Data validation is a rigorous review of the analytical data reported.  During the review, the
data validator reviews raw data packages, assesses the severity of quality control
noncompliances, and determines whether the data are acceptable for project use.  The reviewer
also determines whether bias has been induced in reported results due to out-of-control
QC results.  Qualifiers (codes) are placed on the data to make the data user aware that
problems may be associated with the data.

The review is based on the following USEPA guidelines:

USEPA, February  1994. "USEPA Contract Laboratory Program National  Functional
Guidelines for Organic Data Review", Office of Solid Waste and Emergency Response.
EPA-540/R-94/012.

USEPA, February  1994. "USEPA Contract Laboratory Program National  Functional
Guidelines for Inorganic Data Review", Office of Solid Waste and Emergency Response.
EPA-540/R-94/013.

In addition, each EPA regional office may have specific standard operating procedures for
validating data within their region.

The level of expertise required of the data reviewer includes five years of GC (including
GC/MS) experience for the organic data reviewer and three years of experience with
inorganic instrumentation (AA, GFAA, and ICP) for the inorganic reviewer.  It is very
important to have qualified reviewers who are very experienced with the instrumentation and
have the ability to target areas where mistakes often occur.

A thorough review will include a review  of chain-of-custody documentation, a review of
the raw instrument data, a check on calculations, and a check on electronic data.
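
For the electronic-data portion of that review, a record-by-record comparison of the diskette
deliverable against the validated hard copy values (including any qualifiers added during QA
review) is straightforward to automate.  A minimal sketch follows; the column names and record
layout are assumptions for illustration, since real deliverable formats vary by laboratory and
contract.

    import csv

    # Minimal sketch of a diskette-versus-hard-copy comparison during validation.
    # Column names ("sample_id", "analyte", "result", "qualifier") are assumptions.
    def compare_deliverables(diskette_csv, hardcopy_records):
        """Report rows where the diskette disagrees with the validated hard copy."""
        problems = []
        with open(diskette_csv, newline="") as f:
            for row in csv.DictReader(f):
                key = (row["sample_id"], row["analyte"])
                hc = hardcopy_records.get(key)
                if hc is None:
                    problems.append((key, "missing from hard copy"))
                elif (float(row["result"]), row["qualifier"]) != hc:
                    problems.append((key, f"diskette {row['result']} {row['qualifier']} "
                                          f"vs hard copy {hc[0]} {hc[1]}"))
        return problems

    # hardcopy_records would be keyed (sample_id, analyte) -> (result, qualifier),
    # e.g. {("MW-01", "Benzene"): (5.0, "J")}, transcribed from the reviewed package.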

REGULATIONS

At this time, environmental laboratories are regulated at the state level.  Each state has
its own certification program for environmental laboratories.  Large laboratories that work
in many states may have to go through the certification program of each state for which they
perform analyses.  Certification programs generally entail analysis of performance
evaluation (PE) samples and an audit by the regulatory agency.

The National Environmental Laboratory Accreditation Program (NELAP) has been created
to provide a national set of environmental laboratory accreditation standards; if the
national standards are met, individual states are to provide reciprocity.  The individual
states would continue to enforce accreditation; however, laboratories would only have to
                                              709

-------
analyze one set of PE samples and undergo only one audit.

The proposed NELAP can be found in the Federal Register (Volume 59, No. 231).  This
standard thoroughly discusses the reporting of data via hard copy; however, it does not
specifically discuss electronic submittal of data.  The NELAP discusses SOPs
and says laboratories "shall maintain standard operating procedures (SOPs) that accurately
reflect all phases of current laboratory activities including assessing data integrity".  Such
a statement would encompass an SOP for data reported electronically.

Other regulations that address laboratory practices include those known as "Good
Laboratory Practices".  These regulations govern medical laboratories (21 CFR 58),
agrochemical laboratories (40 CFR 130), and laboratories performing analysis under the
Toxic Substances Control Act (40 CFR 792).  These current regulations also thoroughly
discuss hard copy data reporting but do not specifically address electronic data
reporting.

SOLUTIONS

The attractiveness of electronically submitted data lies in the ease of statistical analysis,
data searching, and reporting.  However, electronic submittal is beneficial only if the data
are submitted correctly.  Data validation is a tool used to review the quality of data;
however, there may be more ways to encourage laboratories to report data correctly.
QAPPs, contractual agreements, and review of the laboratory's SOP for electronic data
submittal may be key to reducing data reporting problems.

The procedures used to submit electronic data, and the quality control requirements
associated with the electronic submittal, should be discussed thoroughly in the QAPP.
This discussion belongs in Chapter 9, "Data Reduction, Validation, and Reporting".

Contracts are also beneficial for communicating project requirements to the laboratory.
Contracts requiring laboratories to submit data correctly or be penalized may
be an option (contractual agreements are also very helpful in defining holding time
and turn-around requirements).

Another option is to review the laboratory's SOP for electronic submittal and include it
in a QAPP appendix.  If an SOP does not exist, the laboratory should not be submitting
data on electronic media.

SUMMARY

Assuming data submitted electronically is valid is  a dangerous mistake.  Electronically
submitted data must also go through a QA review  to determine if results were reported
correctly.  A more thorough review of electronically submitted data at the  laboratory
                                             710

-------
would benefit data users.

At this time, regulations do not specifically discuss electronically submitted data.
However, laboratories should have SOPs regarding electronically submitted data.
Communicating specific requirements to the laboratory is critical to obtaining quality data.
Including laboratory reporting requirements in QAPPs and laboratory contracts may help
reduce discrepancies in reported data.

REFERENCES

Garner, W.Y,  M.S. Barge,  and J.P. Ussary,   Good Laboratory Practice  Standards:
Applications for Field and Laboratory Studies. American Chemical Society, Washington,
DC,  1992.

USEPA, National Environmental Laboratory Accreditation Conference (NELAC), Federal
Register, Volume 59, No. 231, December 2, 1994.

USEPA, Region  V Model QA Project Plan, USEPA Region V, Chicago, Illinois, May 24,
1991.
                                              711

-------
99
                     Development of Assessment Protocols for DOE's
                  Integrated Performance Evaluation Program (IPEP)*
                       E. Streets, P. Lindahl, D. Bass, P. Johnson,
                            J. Marr, K. Parish, A. Scandora
                             Chemical Technology Division
                                          and
                                       J. Hensley
                           Environmental Assessment Division

                             Argonne National Laboratory
                                9700 South Cass Avenue
                                  Argonne, IL  60439

                                          and

                               R. Newberry and M. Carter
                               U.S. Department of Energy
                             Germantown, MD 20874-1290
                                   To be presented at
           The Eleventh Annual Waste Testing & Quality Assurance Symposium
                                    Washington, DC
                                    July 23-28, 1995
                                The submitted manuscript has been authored
                                by a contractor of the U. S. Government
                                under  contract No. W-31-109-ENG-38.
                                Accordingly, the U. S. Government retains a
                                nonexclusive, royalty-free license to publish
                                or reproduce the published form of this
                                contribution, or allow others to do so, for
                                U. S. Government purposes.
   Work supported by the U.S. Department of Energy under Contract W-31-109-ENG-38.
                                             712

-------
                          Proposed Routine IPEP Reports

Report Type                           #/Year   Audience(s)

Single Study PE Reports                 16     EM-26
    CLP Inorganic                        4     Laboratories
    CLP Organic                          4     DOE Offices of Sample Management
    WS                                   2     DOE Operations Offices
    WP                                   2
    EML QAP                              2
    MAPEP                                2

Consolidated Reports                           EM-26
                                               Laboratories
                                               DOE Offices of Sample Management
                                               DOE Operations Offices

Management Reports                      12     EM-26
    DOE Operations Offs.                 4     DOE Operations Offices
    EM HQ Area Program Offs.             4     EM HQ Area Program Offices
    Offices of Deputy Assistant          4     Offices of Deputy Assistant
        Secretaries, EM-30, -40                    Secretaries, EM-30, -40

                                 713

-------
[Figure: IPEP Reports (diagram illegible in source).]
-------
                 Performance Evaluation Program Time Schedule FY94

[Schedule chart, October through September, showing FY94 study dates for CLP (EMSL-LV);
WP (EMSL-CI); WS (EMSL-CI); DMR-QA (EMSL-CI); QAP (EML); RIS (EMSL-LV) studies for gross
alpha/beta (water), Sr-89/90 (water), I-131 (water), U and Ra-226/228 (water), Pu-239 (water),
mixed alpha/beta/gamma (water), gamma (water), H-3 (water), gross alpha/beta, Sr-90, and Cs-137
(air filter), and Sr/Y (milk); and MAPEP (soil).]
-------
                   PE Program Single Analyte Assessment Categories

IPEP:           A (Acceptable); W (Acceptable with Warning); N (Not Acceptable)
WS:             A (Acceptable); N (Not Acceptable); (U)2 (Unusable)
WP:             A (Acceptable); CFE (Check for Error); NA (Not Acceptable); (U)2 (Unusable)
CLP Inorganic:  $ (Warning); U (Analyzed, Not Detected); X (Outside Action Limits);
                UX (Element Not Identified); # (False Positive)
CLP Organic:    (W)1 (Warning); U (Analyzed, Not Detected); X (Outside Action Limits);
                & (Cmpd. Not Identified); NS (Required Data, Not Submitted)
EML QAP:        A (Acceptable); W (Acceptable with Warning); N (Not Acceptable)
MAPEP:          A (Acceptable); W (Acceptable with Warning); N (Not Acceptable)

      1  Warning limits provided by CLP, but not used in its assessment.
      2  EMSL-CI assesses a result that is reported as a "less than" or "greater than" value as
         "Unusable," because it could not be quantitatively judged. However, if the true value is
         higher than a "less than" value, the reported result is assessed as "Not Acceptable."
         IPEP will assess both these situations as "Not Acceptable."

-------
                                 Single Study Assessment

[Diagram: a single cell assessment (statistical) leads to a qualitative assessment (A, W, N),
which feeds the matrix/analyte class and overall % acceptable assessments and the single study
historical assessment.]

                             Consolidated Report Assessments
                   (Multiple Studies, Current and Historical Assessments)

[Blank report form with rows for PE program/quarter (FY95 Q2 back through FY93 Q3) and columns
for CLP INORG, CLP ORG, WS, WP, EML QAP, MAPEP, overall or matrix/analyte class % acceptable,
total % acceptable, and IPEP assessment.]

                                             717

-------
                            Single PE Program Assessments

Assessment Criterion: Participation
  Current study:
    % Participation = 100 of EM-required matrix/analytes
        A: Acceptable, No Corrective Action Recommended
    % Participation < 100 of EM-required matrix/analytes
        N: Not Acceptable, Corrective Action Recommended
           - Reason for not participating
           - Participation in next available study for all EM-required matrix/analytes

Assessment Criterion: Overall % Acceptable
  Current study:
    % Acceptable >= 90
        A: Acceptable, No Corrective Action Recommended
    75 <= % Acceptable < 90
        W: Acceptable with Warning, No Corrective Action Recommended*1
    % Acceptable < 75
        N: Not Acceptable, Corrective Action Recommended for unacceptable matrix/analytes
  3-study history:
    Overall % Acceptable < A in >1 of last 3 studies
        Corrective Action Recommended for unacceptable matrix/analytes in current study

Assessment Criterion: Matrix/Analyte (Cell) Class*2
  Current study:
    % Acceptable >= 90
        A: Acceptable, No Corrective Action Recommended
    75 <= % Acceptable < 90
        W: Acceptable with Warning, No Corrective Action Recommended*3
    % Acceptable < 75
        N: Not Acceptable, Corrective Action Recommended for unacceptable matrix/analytes
  3-study history:
    % Acceptable < A in >1 of last 3 studies
        Corrective Action Recommended for unacceptable matrix/analytes in current study

Assessment Criterion: Single Matrix/Analyte (Cell)
  Current study:
    Single analyte < A
        W: Acceptable with Warning or N: Not Acceptable,
        No Corrective Action Recommended
    For programs with >1 matrix per study, any analyte < A in >1 matrix
        W: Acceptable with Warning or N: Not Acceptable,
        Corrective Action Recommended
  3-study history:
    Analyte < A in >1 of last 3 studies*4
        Corrective Action Recommended for unacceptable analytes

Notes:
*1:  If a laboratory has only participated once in a given PE program, corrective action should be performed on unacceptable analytes, to provide stricter oversight of new laboratories.
*2:  If there is only one analyte in the matrix/analyte class, use the single matrix/analyte assessment as the matrix/analyte class assessment.
*3:  If a laboratory has only participated once, corrective action should be performed on unacceptable analytes if the class assessment is < A, to provide stricter oversight of new laboratories.
*4:  If a laboratory has only participated once, corrective action should be performed on unacceptable analytes if the cell assessment is < A, to provide stricter oversight of new laboratories.
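
The thresholds in this table reduce to a small amount of decision logic.  The sketch below is
only an illustration of the overall-%-acceptable rule and its three-study history trigger; the
names, the treatment of first-time participants, and the structure are ours, not IPEP software.

    # Illustrative sketch of the overall % acceptable rule for a single PE study.
    # Function and variable names are ours; this is not IPEP software.
    def overall_assessment(pct_acceptable, history_pct_acceptable):
        """Return (IPEP assessment, corrective action recommended?).

        history_pct_acceptable: overall % acceptable from up to the last 3 studies.
        """
        below_a_history = sum(1 for p in history_pct_acceptable if p < 90)
        if pct_acceptable >= 90:
            return "A: Acceptable", False
        if pct_acceptable >= 75:
            # Warning; corrective action only if performance was also below A in
            # more than one of the last three studies, or on first participation
            # (note *1 above).
            trigger = below_a_history > 1 or len(history_pct_acceptable) <= 1
            return "W: Acceptable with Warning", trigger
        return "N: Not Acceptable", True

    print(overall_assessment(82.0, [95.0, 88.0, 71.0]))  # ('W: ...', True)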

-------
                     Assessment of Participation, Single PE Studies

[Flowchart: calculate % participation for all EM-required matrix/analytes.  If participation is
complete, assess A: Acceptable with no corrective action recommended; otherwise assess
N: Not Acceptable, with corrective action recommended (reason for non-participation and
participation in the next available study for all EM-required matrix/analytes).  In either case,
proceed to examine the overall % acceptable.]
-------
                 Assessment of Overall % Acceptable, Single PE Studies

[Flowchart: translate the PE program single matrix/analyte assessments into IPEP assessments
and calculate the overall % acceptable for all EM-required matrix/analytes.  At or above 90%,
assess A: Acceptable with no corrective action recommended; between 75% and 90%, assess
W: Acceptable with Warning, with corrective action recommended for unacceptable analytes in the
current study when the overall assessment was below A in more than one of the last three studies
or when this is the only participation in the last three studies; below 75%, assess
N: Not Acceptable with corrective action recommended for all unacceptable analytes in the
current study.  In all cases, proceed to examine the matrix/analyte class assessments.]

-------
                 Assessment of Matrix/Analyte Class, Single PE Studies

[Flowchart: calculate the % acceptable for all EM-required matrix/analytes in the matrix/analyte
class.  The same 90% and 75% decision points and three-study history check used for the overall
assessment yield A: Acceptable, W: Acceptable with Warning, or N: Not Acceptable, with corrective
action recommended for all unacceptable analytes in the matrix/analyte class in the current study
when warranted.  In all cases, proceed to examine the single matrix/analyte assessments.]

-------
               Assessment of Single Matrix/Analyte, Single PE Studies

[Flowchart: examine the IPEP single matrix/analyte assessment.  If the analyte is acceptable, no
corrective action is recommended.  If the assessment is W or N, corrective action is recommended
when the analyte has been below A in more than one of the last three studies or when this is the
laboratory's only participation in the last three studies.]
-------
                    Assessments for Quarterly Consolidated Reports

Assessment Criterion: Participation
  Current study:
    % Participation = 100 of EM-required PE programs
        A: Acceptable, No Corrective Action Recommended
    % Participation < 100 of EM-required PE programs
        N: Not Acceptable, Corrective Action Recommended
           - Reason for not participating
           - Participation in next available study

Assessment Criterion: Matrix/Analyte (Cell) Class
  Current study:
    All individual PE assessments of matrix/analyte class = A
        A: Acceptable, No Corrective Action Recommended
    1 individual PE assessment of matrix/analyte class < A
        W: Acceptable with Warning, No Corrective Action Recommended
    >1 individual PE assessment of matrix/analyte class < A
        N: Not Acceptable, Corrective Action Recommended for unacceptable matrix/analytes
  4-quarter history:
    % Acceptable < 75
        N: Not Acceptable, Corrective Action Recommended for unacceptable matrix/analytes

Assessment Criterion: Single Matrix/Analyte (Cell)
  Current study:
    All individual PE assessments of matrix/analyte = A
        A: Acceptable, No Corrective Action Recommended
    1 individual PE assessment of matrix/analyte < A
        W: Acceptable with Warning, No Corrective Action Recommended
    >1 individual PE assessment of matrix/analyte < A
        N: Not Acceptable, Corrective Action Recommended for unacceptable matrix/analytes
  4-quarter history:
    % Acceptable < 75
        N: Not Acceptable, Corrective Action Recommended for unacceptable matrix/analytes
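
The roll-up rule for a cell (or cell class) in a consolidated report can likewise be sketched in
a few lines; this is only an illustration of the current-study column of the table above, with
names of our own choosing.

    # Illustrative roll-up of individual PE program assessments for one
    # matrix/analyte (cell) in a quarterly consolidated report. Names are ours.
    def consolidated_cell_assessment(individual_assessments):
        """individual_assessments: single-study IPEP assessments, 'A'/'W'/'N'."""
        below_a = sum(1 for a in individual_assessments if a != "A")
        if below_a == 0:
            return "A: Acceptable", False               # no corrective action
        if below_a == 1:
            return "W: Acceptable with Warning", False  # no corrective action
        return "N: Not Acceptable", True                # corrective action

    print(consolidated_cell_assessment(["A", "A", "W"]))   # ('W: ...', False)
    print(consolidated_cell_assessment(["A", "N", "W"]))   # ('N: ...', True)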

-------
                    Assessment of Participation, Consolidated Reports

[Flowchart: calculate % participation for all EM-required PE programs.  If participation is
complete, assess A: Acceptable with no corrective action recommended; otherwise assess
N: Not Acceptable, with corrective action recommended (reason for non-participation and
participation in the next available study for all EM-required PE programs).  In either case,
proceed to examine the matrix/analyte class assessments.]

-------
                  Assessment of Matrix/Analyte Class, Consolidated Reports

[Flowchart: examine all individual PE program assessments of the matrix/analyte class.  If all
are A, assess A: Acceptable; if one is below A, assess W: Acceptable with Warning; if more than
one is below A, assess N: Not Acceptable, with corrective action recommended for all unacceptable
analytes in the matrix/analyte class in the current report.  In all cases, proceed to examine the
single matrix/analyte assessments.]

-------
                 Assessment of Single Matrix/Analyte, Consolidated Reports

[Flowchart: examine all individual PE program assessments of the matrix/analyte.  If all are A,
assess A: Acceptable; if one is below A, assess W: Acceptable with Warning; if more than one is
below A, assess N: Not Acceptable, with corrective action recommended for all unacceptable
matrix/analytes.]

-------
                                                                                    100


        INTERNATIONAL AGREEMENTS IN  LABORATORY ACCREDITATION
Peter S. Unger, Vice President
American Association  for Laboratory Accreditation
656 Quince Orchard Road, Suite 620
Gaithersburg, MD  20878-1409
Abstract

Internationally, there is growing pressure to provide for acceptance of test
data on a worldwide basis under provisions of international and regional
treaties such as the General Agreement on Tariffs  and Trade (GATT), the North
American Free Trade Agreement (NAFTA) and a variety of directives promulgated
to establish the European Union (EU) Single Internal Market.  But today's
emphasis on quality has heightened awareness of the importance of good data
and competent testing laboratories.  Laboratory accreditation is a means to
promote the acceptance of test data.

Existing accreditation bodies can cooperate, through multilateral mutual recognition
procedures, to create in effect one international system, thus paving the way for
worldwide acceptance of test data.  Such an
international laboratory accreditation system is well underway in the
European Union.  European nations have established the European Cooperation
for Accreditation of Laboratories (EAL).  The EAL  approach is to create a
forum for arriving at a multilateral agreement (Mutual Recognition
Agreement -- MRA) among various accreditation systems.  This means that
appointed representatives from the laboratory accreditation systems which are
members of EAL perform an assessment of an applicant laboratory accreditation
system on behalf of all the systems in the agreement.  If the basic
requirements are met, then the accreditation is recognized by all systems
party to the agreement.  This model is being used  as a basis for similar
models in most industrial nations of the world,  most recently, in the Asia
Pacific area.  Efforts are also being made in North America to forge a multi-
lateral agreement among accrediting bodies.   The private sector European
Organization for Testing and Certification (EOTC)  is strongly encouraging
this MRA approach and has already recognized the MRA among several laboratory
accreditation systems in Europe.
Introduction

The achievement of an appropriate accuracy of testing  and measurement is
necessary for effective quality control  in industrial  enterprise.  To give
assurance of test and measurement accuracy to the customer, it  is necessary
to demonstrate the capability of the laboratory.   This is equally true for
both domestic as well as foreign customers.   To serve  this purpose, many
nations have laboratory accreditation systems that give industry confidence
in test data through accredited services of calibration and testing.  The
preferred mechanism for facilitating acceptance of tests and measurements
between countries appears to be the mutual recognition of national laboratory
accreditation systems.
                                            727

-------
In particular, the European Union (EU) has aggressively pursued various
programs to establish confidence in each country's laboratories as part of
the establishment of an "internal market."  Thus, the EU, along with the
European Organization for Testing and Certification (EOTC) and its
recognized agreement group, the European cooperation for Accreditation
of Laboratories (EAL), developed a multilateral mutual recognition agreement
among laboratory accreditation bodies of the EU.

The importance of test (and calibration) data in trade is increasing rapidly.
Although there are many examples where  test reports from countries of export
have been accepted by the importer without retest,  this acceptance is limited
either by mutual  agreement between buyer and seller or by ad hoc decisions by
an importer.  But 'adhocracy'  is being  actively discouraged by current
international  quality assurance standards (e.g., ISO 9000 series).   Thus,  the
ability to sell internationally based only on reputation or salesmanship is
diminishing.  Unfortunately,  areas where test and  calibration data are not
accepted internationally are growing and products  are being retested in the
country of import.  Exporters often face a troublesome and  time-consuming
journey through foreign administration  of testing  acceptance.   The delays and
costs of retesting in a foreign country may even discourage the pursuit of
that market.

Lack of acceptance of test data across  national  borders is  claimed to be a
very significant barrier to trade and a number  of  international  agreements,
such as the GATT Standards Code,  the OECD Code  of Good Laboratory Practice,
and the European Union (EU) and the European Free Trade Association (EFTA)
policies on testing and certification,  have been developed  in efforts to
overcome this  particular problem.  If these agreements and  policies are to be
effective,  it  is essential  that one can rely on tests  made  in other
countries.   No one in an importing country should accept data from an
exporting country unless they are confident that these data are as reliable
(or of equivalent quality)  as if the instrument had been tested by a
competent body in the importing country.   Therefore,  in order to be able to
rely on foreign test results it is necessary to know,  or be assured of,  the
competence of  the laboratories providing the test data.   In turn,  this should
provide a high degree of confidence (but not a  guarantee) that the data is of
the requisite  quality.


Laboratory Accreditation

It is because  of the difficulty as well  as the  growing necessity to evaluate
the performance of laboratories that laboratory accreditation has developed.
It is defined  in ISO Guide 2 as "the formal  recognition that a testing
laboratory is  competent to carry out specific tests or types of tests."
Testing in its broadest sense includes  calibration.   Laboratory accreditation
is usually granted:

     •     By  an  identified accreditation body  to prescribed criteria;

     •     For  specific tests  or  types  of tests described in reference
           documents or otherwise defined by performance descriptors;
                                            728

-------
      •     After an  initial on-site assessment of QA management and specific
            capability by qualified assessors.

Surveillance of ongoing performance, by reassessment at periodic intervals and
by proficiency testing or other forms of relevant auditing, is commonly
accepted or required.

In performing accreditations of calibration laboratories, it is recognized that they
function differently from testing laboratories.


International  Acceptance of Testing

Existing mechanisms  by which test data are accepted in foreign countries are
based  on:

      •     Acceptance of foreign data without question;

      •     Approval  of a foreign laboratory by the acceptance body or the
            customer of the laboratory (designated laboratories);

      •     Approval  of a foreign laboratory through evaluation or
            recommendation by a third-party in either country;

      •     Mutual  recognition agreements between laboratories;  and

      •     Mutual  recognition agreements between laboratory accreditation
            organizations in both countries.

Many examples of all these mechanisms are effectively in operation, but it is clear
that the latter offers the most universal approach to the problem.  That
is why the concept of laboratory accreditation has been so popular and has
spread so fast in the last 15 years.


International  Laboratory Accreditation Conference (ILAC)

One of the  most significant factors influencing the growing acceptance of
laboratories among countries, and within countries for that matter, is the
existence of an informal group of laboratory accreditation system managers
and interested parties known as ILAC.   The first ILAC conference was held in
Denmark  in  1977.   Since then, conferences were held in the United States,
Australia,  France, Czechoslovakia,  Mexico,  Japan, the United Kingdom, Israel,
New Zealand,   Italy,  Canada, and Hong Kong.   Future meetings are planned in
Amsterdam and  Sydney.  ILAC has no permanent secretariat; the host acts as
the secretary.   There is no formal  delegation procedure; interested persons
from the various countries volunteer to attend and pay the modest conference
fee.   Conferences last one week,  with reports from various task forces and
committees;  decisions are made by unanimous agreement on various resolutions
which  come  out of the work of the committees and task forces.

Acceptance of the ILAC Work.  In spite of this informality and the lack of a
permanent secretariat, ILAC has produced a number of documents which have been
                                            729

-------
adopted by other organizations to become, in effect, national as well as
international standards.  The International Standards Organization (ISO) has
been particularly active in converting these documents to ISO Guides (see
Table 1).  Subjects of the guides deal with general criteria for accrediting
laboratories (ISO Guide 25), requirements for the acceptance of testing
laboratories (Guide 38), proficiency testing (Guide 43), guidance for
operation and recognition of accrediting bodies (Guide 58).   OIML has
published guidelines for determining calibration intervals (International
Document No. 10) based on the work of ILAC.
                           Table 1 - ISO/IEC GUIDES

Guide 2         General Terms and Their Definitions Concerning
                Standardization, Certification and Testing Laboratory
                Accreditation.
Guide 25        General Requirements for the Competence of Calibration and
                Testing Laboratories.
Guide 43        Development and Operation of Laboratory Proficiency Testing
Guide 58        Calibration and Testing Laboratory Accreditation Systems --
                General Requirements for Operation and Recognition


Most if not all national systems, including the American Association for
Laboratory Accreditation (A2LA) in the United States, use ISO Guide 25 as their
formal criteria for accreditation.

Other international standards related to this subject  are listed in Table 2.
                       Table 2 -  INTERNATIONAL STANDARDS

                                 ISO  STANDARDS

8402       Quality --  Vocabulary.
9000       Quality Management and Quality Assurance Standards  --  Guidelines
           for Selection and Use.
9001       Quality Systems -- Model for Quality Assurance  in
           Design/Development, Production,  Installation and Servicing.
9002       Quality Systems -- Model for Quality Assurance  in Production,
           Installation and Servicing
9003       Quality Systems -- Model for Quality Assurance  in Final  Inspection
           and Test
9004       Quality Management and Quality System Elements  -- Guidelines
10011      Generic Guidelines for Auditing Quality Systems

ILAC Committees.   ILAC has four Committees to carry out its work.  Table 3
lists the current work of the first three ILAC Committees.  Committee 4 is
the administrative committee for the  conference.
                                            730

-------
                           Table 3 - ILAC WORK ITEMS


Committee 1. Commercial Applications

Costs of Mutual Recognition Agreements and the efficiency of the process
Acceptance of test data on basis of Guide 25 or ISO 9000 for laboratories
Seminar on Guide 25 or ISO 9000 for laboratories
Uncertainty, repeatability, reproducibility
Advantages of laboratory accreditation for insurance industry
Competition in laboratory accreditation
Abuses of accredited status by laboratories
Promotion of Mutual Recognition Agreements
Legal implications of agreements on acceptance of test reports
Effectiveness of MRAs in dealing with technical barriers to trade
Agreements between laboratory accreditation bodies and certification bodies.
ILAC Handbook and Directory
Assist in Realizations of GATT Agreements
Liaison with International Trade Related Organizations
Liability in Testing
Testing, Quality Assurance, Certification and Accreditation
Guidelines on Cross-national Accreditation of Laboratories
Role of Testing and Laboratory Accreditation in International Trade

Committee 2.  Laboratory Accreditation Practices

Surveillance and Reassessment of Accredited Laboratories
Assessor Qualifications and Competence
Traceability of Measurements
Measurement Uncertainty in Testing
Accreditation of Multidisciplinary Laboratories
Accreditation of Non-routine Work
Harmonization of the Rules relating to Logos
Relationship between Testing, Inspection and Product Certification

Committee 3.  Laboratory Practices

Demonstration of traceability of measurements
Selection and use of reference materials
Validation and verification of test methods
Determination of uncertainties associated with test results
Test data processing and presentation: connection with declaration of compliance
Follow-up of the revision of ISO/IEC Guide 43
Follow-up of the revision of ISO/IEC Guide 25
Quality Assurance in relation with use of automated test equipment and
Implementation of laboratory information systems
Guidance for the preparation of a quality manual
                                             731

-------
 ISO and IEC

The references to the  International Organization for Standardization (ISO)
Guides in Table 1 really should  include the International Electrotechnical
Commission (IEC) as well,  since  the IEC has taken formal action to comment on
 and approve these Guides.   But most of the committee work has been performed
by ISO CASCO, the ISO Conformity Assessment Standards Committee.  CASCO has
been responsible for many new ISO/IEC Guides dealing with product
certification.  Most of  its work related to laboratory accreditation is based
on the material supplied by ILAC, starting with Guide 25.

 ISO has published the  ISO 9000 series of standards to establish the basic
requirements for generic quality management programs in the manufacturing
 industries.  ISO 9000  provides guidelines for selection and use of quality
management and quality assurance standards.  ISO 9001, 9002,  and 9003 are
models representing three distinct forms of functional or organizational
capability suitable for  purchaser-supplier contractual purposes.  ISO 9004
consists of a fuller description of each of the quality system elements.

The ISO 9000 series have been adopted by virtually all of the industrialized
nations as their own national standards on this subject.  The ISO 9000 series
is having a significant  effect on the revision to the laboratory
accreditation criteria (ISO Guide 25).

Commission of the European Union (EC)

The European Commission  (EC) has implemented various programs in its effort
to achieve a "single internal market."   Many of these programs involve
standards-related issues and any firm doing business in Europe must keep
aware of the effect of these programs and must be ready to take action to
ensure equitable access  to markets.  The advantage of these programs is that
a single internal market will be created instead of the many separate markets
corresponding to the number of countries making up the European Union.   The
disadvantage is that the EU may  implement trade restrictive policies.

In 1985,  the EU decided  against  detailed standards for everything in favor of
only regulations containing "core requirements".  In the absence of EU-wide
standards or directives, member  states may use their own national  standards.
Products in compliance with these national  standards would have uninhibited
entry into other member  countries.   The EU has basically adopted the ISO/IEC
Guides 2, 25, 43, and 58 as well as the ILAC work for its standards in
laboratory accreditation.
Bilateral and Multilateral Agreements

The first set of bilateral agreements was signed between European bodies, as
well as between NATA Australia and TELARC New Zealand, in the 1970s.  Several
more bilateral agreements emerged in the 1980s.  Because of the substantial
cost of maintaining several bilateral agreements, the accreditation community
has recognized the need for multilateral arrangements, led by European
systems.  Table 4 lists national laboratory accreditation systems and the
number of other countries' systems with which they have mutual recognition.
                                            732

-------
          Table  4 -  LIST OF NATIONAL LABORATORY ACCREDITATION SYSTEMS

                                     Number of Mutual
                         Year        Recognition
Country       System     Established  Agreements

Australia     NATA        1946            5
Austria       OKD         1983
Canada        SCC/PALCAN  1981            2
P.R. China    SBTS        1984
F.R. Germany  DKD         1977           12
Finland       MSF         1980
France        COFRAC      1969           12
Hong Kong     HOKLAS      1985            1
Hungary       MSZH        1985            5
India         NCTCF       1988
Italy         SINALP      1977           12
Netherlands   STERLAB     1975            1
New Zealand   TELARC      1973            5
Norway        NOLA        1988
Poland        NLMS
Portugal      IPQ         1986
Saudi Arabia  SASO        1987            2
Singapore     SINGLAS     1986
South Africa  CSIR/NCS    1987            1
Spain         RELE        1986           12
Sweden        MPR         1972           12
Switzerland   SAS         1988           12
Turkey        TSE         1987
U.K.          NAMAS       1966           12
U.S.A.        A2LA        1978            4
U.S.A.        NVLAP       1976            3
EAL

National laboratory  accreditation services have been developed mainly in
Europe as early as 1966 as a tool for the efficient dissemination of
standards and the confirmation of traceability of measurements to national
standards.  To avoid barriers to trade relating to calibration and test
certificates, the laboratory accreditation systems of Western European
countries are cooperating within what's called the European cooperation for
Accreditation of Laboratories (EAL).  EAL has set up an on-going program of
technical cooperation aimed at establishing mutual confidence among systems,
leading to formal declarations (multilateral  agreements or MLAs) of the
technical equivalence of accredited laboratories and their data.  Thus the
acceptance of data is being made possible through EAL.  This mechanism has
the potential to restrict the flow of data unless the data are generated by a
laboratory accredited by one of the EAL MLA members or recognized outside
bodies.
                                             733

-------
APLAC

Another multilateral  arrangement  is emerging in the Asia  Pacific region.  A
formal  Memorandum of  Understanding (MOU), designed to reduce technical
barriers to trade, was signed by  laboratory accreditation bodies from 16 Asia
Pacific countries, on April 4,  1995 in Jakarta,  Indonesia.   A total of 20
accreditation bodies  signed the agreement.

The Asia Pacific Laboratory Accreditation Cooperation (APLAC)  has met
informally for six meetings over  the past three years.  At its seventh
meeting on April 4, 1995,  Mr. John Gilmour, Chief Executive of the National
Association of Testing Authorities (NATA) in Australia, was elected Chairman.
The Board of Management will consist of the chairman and  members from five
other  countries:  Hong Kong, New  Zealand, Singapore, People's  Republic of
China,  and the United States.

John Locke,  President of the American Association for Laboratory
Accreditation (A2LA)  was named  chairman of the  first standing  committee
approved,  the Mutual  Recognition  Agreement (MRA)  Committee.  The Management
Board  will  organize additional  committees for proficiency testing,  the  APLAC
New Notes,  training,  the bibliography,  etc., as  deemed appropriate.

Full APLAC members are:

AUSTRALIA: National Association of Testing Authorities, NATA;
BRUNEI DARUSSALAM: Ministry of Development, Construction Planning and Research Unit;
CHINESE TAIPEI: Chinese National Laboratory Accreditation, CNLA;
HONG KONG: Hong Kong Laboratory Accreditation Scheme, HOKLAS;
INDIA: (National Accreditation Board for Testing & Calibration Laboratories, NABL);
INDONESIA: National Accreditation Body of Indonesia, KAN;
JAPAN: Standards Department, AIST;
      Japan Calibration Service System, JCSS;
KOREA: Korean Laboratory Accreditation Scheme, KOLAS;
MALAYSIA: Laboratory Accreditation Scheme of Malaysia, SAMM, Accreditation Council, MAC, SIRIM;
NEW ZEALAND: Telarc New Zealand;
PAPUA NEW GUINEA: National Institute of Standards and Industrial Technology;
PEOPLE'S REPUBLIC OF CHINA: China National Accreditation Committee for Laboratories, CNACL;
      Chinese Import Export Commodity Inspection Bureau, SACI;
SINGAPORE: Singapore Laboratory Accreditation Scheme, SINGLAS;
THAILAND: Thai Laboratory Accreditation Scheme, TLAS, Industrial Standards Institute;
UNITED STATES OF AMERICA: American Association for Laboratory Accreditation, A2LA;
      National Voluntary Laboratory Accreditation Program, NVLAP (at NIST);
      ICBO (International Conference of Building Officials) Evaluation Service; and
VIETNAM: Directorate for Standards and Quality, DSQ.

The MOU states:  "Laboratory testing  is recognized as an important element in
acceptance of products,  and the lack of acceptance of test  data accompanying
traded goods has long been one  of the major technical  barriers to trade.
APLAC's main objective is to establish a regional  network in which products
tested in one country need not  be retested in the importing country,  thereby
reducing both costs and delays  in shipment of the product."
                                              734

-------
By signing the MOU, laboratory accreditation bodies in the Asia Pacific Area
have expressed a desire to cooperate to generally improve standards of
testing and calibration in the economies of the region and to enhance the
freer trade objectives promoted by the Asia Pacific Economic Cooperation
(APEC).
MUTUAL RECOGNITION AGREEMENT PROCESS

The International Laboratory Accreditation Conference (ILAC) developed a
document entitled, "Guidelines for Establishment and review of Mutual
Recognition Agreements" in 1994.  This document will serve as the reference
document for establishing a mutual recognition agreement among accreditation
bodies in the area.  This document is similar in content and material  to the
agreement established by the European cooperation for the Accreditation of
Laboratories (EAL) and now in place with 12 of the 18 member countries in the
European Union recognizing each other's accredited laboratories.

The Guidelines deal with the major steps in assessing accreditation bodies:

•     criteria  for mutual  recognition  (ISO/IEC Guide 43 and  58 for the
      accreditation bodies, Guide  25 for the laboratories);

•     the contents of a quality manual needed by the body seeking recognition;

•     procedures  for  preparing  for evaluations, including the selection of the
      evaluation  team members;

•     conduct of  the  evaluations (both of the applicant system and
      representative  laboratories  being assessed  by  that system);

•     the procedures  for completing the agreement (including handling of
      discrepancies found); and

•     procedures for maintaining and monitoring the agreement.

The MRA Committee will consider these guidelines and recommend adoption of a
final set of guides by all members of APLAC.
                                            735

-------
101


            INTERNATIONAL ACTIVITIES  IN REFERENCE MATERIAL CERTIFICATION
                                   Peter S. Unger
                                   Vice President
                  American Association for Laboratory Accreditation
 ABSTRACT
 Interest in developing reliable reference materials for analytical  measurements
 is growing worldwide.  Serious discussions, both nationally and internationally,
 are underway on the need for third-party assessment of reference materials
 producers and the certification of their reference materials.   Programs  have
 evolved in the United States and other countries such as China.

 Established in 1975, the ISO Council Committee on reference materials  (REMCO)
 has developed guidance for definitions, categories, and levels  of classification
 of reference materials.  Similarly, the ISO Council Committee on conformity
 assessment (CASCO) has developed guidance for conformity assessment tools
 including laboratory accreditation, quality system registration (or
 certification) and product certification.  More recently, the Cooperation for
 International Traceability of Analytical Chemistry (CITAC)  has  emerged to deal
 specifically with the issue of reliable reference materials so  that measurements
 across international boundaries are comparable.  The documents  that these three
 bodies have published as well as what else is being developed or needed  for
 certification of reference materials will be discussed.
 AUTHOR'S BIOGRAPHICAL SKETCH

 Peter Unger is Vice President of the American Association for Laboratory
 Accreditation (A2LA).  Previously, he served as Associate Manager of Laboratory
 Accreditation at the National Bureau of Standards (now the National Institute of
 Standards and Technology).  He has been involved with laboratory accreditation
 on the national  level since 1978.

 Mr.  Unger is currently Chairman of ASTM E-36 on Laboratory Accreditation and
 Vice Chairman, Quality Provisions, of ASTM E-11 on Quality and Statistics.

 Mr.  Unger has a BS degree in systems engineering from Princeton University and a
 masters in environmental management from George Washington University.  He is a
 certified lead auditor under both the RAB's ISO 9000 auditor certification
 program and the U.K. IQA Institute for the Registration of Certificated
 Assessors.
                                              736

-------
                                                                                   102
  AUTOMATED SAMPLING PLAN PREPARATION:  QASPER, VERSION 4.1

William Coakley, QA Coordinator, Environmental Response Team/U.S. Environmental
Protection Agency, 2890 Woodbridge Avenue, MS-101, Edison, New Jersey 08837;
Gregory Janiec, Project Manager, and Mary Pat Walsh, QA Specialist, Federal Programs
Division, Roy F. Weston, Inc., 120 Uwchlan Avenue, Exton, Pennsylvania 19341.

ABSTRACT

The Quality Assurance Sampling Plan for Environmental Response (QASPER)  software
facilitates the preparation of sampling plans by prompting users, through an automated
process, to consider elements which should be addressed  in comprehensive QA/QC
Sampling Plans for environmental response actions. The software compiles user-selected,
technical text and user-provided, site-specific information into a QA/QC Sampling Plan
which can be implemented to generate reliable, accurate data of known quality that will
meet its intended use.  More specifically, QASPER allows the user to document general
site background information and data use objectives.  QASPER then focuses on specific
remedial units (or sampling areas) by prompting the user to define the sampling design,
sampling requirements, and analytical requirements for each unit. Other elements to be
identified  are  standard  operating  procedures, quality  assurance and data validation
protocols, deliverable formats, personnel responsibilities, and the project schedule.

QASPER, Version 4.1 is consistent with Superfund Accelerated Cleanup Model  (SACM)
initiatives  as well as current U.S. EPA documents such as the Data Quality Objectives
Process for Superfund  (OSWER 9355.9-01) and the Removal Program  representative
sampling guidance documents.

U.S. EPA On-Scene Coordinators (OSCs) and Remedial Project Managers (RPMs) are
currently the primary users of QASPER. The software is very flexible and would prove
beneficial  to other  regulatory,  academic, and scientific organizations and  their
contractors.

INTRODUCTION

QASPER  was developed to assist the site  Project Manager in developing  a timely
sampling plan which  includes many critical elements.   Users are prompted by the
program to consider elements necessary to generate a comprehensive QA/QC Sampling
Plan which is consistent with current U.S. EPA guidance.  QASPER creates a database
of user-selected, technical text and user-provided, site-specific information which is used
to generate a QA/QC Sampling Plan ready for review, approval, and implementation.

This paper will describe the Superfund data categories, define the essential components
of QA/QC Sampling Plans, and describe the features of the QASPER software.
                                           737

-------
SUPERFUND DATA CATEGORIES

The Superfund program has developed the following two descriptive data categories:

      •     Screening data
      •     Definitive data

Minimum  QA/QC requirements are associated  with  each category  and a variety of
analytical methods may be used to generate either type of data.

QA/QC Sampling Plans created within QASPER are developed around these two equally-
important categories; therefore, a brief definition of each category follows.  Screening
and definitive data categories  are  described in greater detail  in  the  Data Quality
Objectives Process for Superfund (OSWER 9355.9-01).

Screening Data - These data are generated by rapid, less precise methods of analysis
(e.g.,  field portable X-ray fluorescence, portable gas  chromatography, immunoassay)
with simple or minimal sample preparation steps.  For example, sample preparation may
be a  simple  procedure  such as dilution with a  solvent  rather  than an  elaborate
extraction/digestion and cleanup. The resulting data provide analyte identification  and
quantification, although the quantification may  be imprecise.   At  least 10% of the
screening data must be confirmed using more rigorous QA/QC procedures and criteria
associated  with definitive data.

Definitive Data - These data are generated using more exact or  precise analytical
methods, such as gas chromatography/mass spectroscopy or atomic absorption. Data are
analyte-specific, with confirmation  of analyte identity  and concentration.   Methods
produce tangible raw data (i.e., chromatograms,  spectra,  digital values) in the form of
paper  printouts or electronic files.  For data to be definitive, either analytical or total
measurement error must be determined.
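
As a simple illustration of the 10% confirmation requirement noted above, the following
Python sketch (illustrative only; the function name and the round-up rule are assumptions,
not part of the EPA guidance or of QASPER) computes the minimum number of screening
results that must be confirmed with definitive-level QA/QC:

def min_confirmation_samples(n_screening):
    """At least 10% of screening results must be confirmed with
    definitive-level QA/QC; any fraction is rounded up to a whole sample."""
    return -(-n_screening // 10)   # integer ceiling of n/10

print(min_confirmation_samples(42))   # -> 5

For example, a plan calling for 42 screening samples would require at least 5 of those
results to be confirmed.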

QA/QC SAMPLING PLAN ESSENTIAL COMPONENTS

Comprehensive  QA/QC  Sampling  Plans  should  include  the  following  elements:
background, data use objectives, sampling design, sampling and analytical methodologies,
QA requirements, and project organization.

Background - a description of how the site was used or the cause of the contamination.
This will help in choosing sampling locations, target compounds, and analytical methods.
Sources of this information could include local, state,  and federal files; representatives
of various  agencies; and previous response action reports.
                                           738

-------
Data Use Objectives - statements of the intended use of the data, questions that must be
answered, or decisions that will be made based on the collected data.  Examples of data
use objectives are:  determining the presence of contamination, determining the extent
of contamination, identifying threats  to humans or the environment, and verifying
cleanup.

Sampling Design - a discussion of the matrices to be sampled and the compounds for
which they will be sampled, the sampling strategy to be implemented, a description of
sampling locations and the numbers of environmental and QC samples to be collected.

Sampling  and  Analytical Methodologies  - a  description  of  sample  handling
requirements, the sampling  equipment to be used, and analytical requirements.  In
addition, the standard operating procedures (SOPs) to be employed for sampling, sample
documentation, and sample transportation should be described.

QA Requirements - a detailed description of the appropriate data quality indicators and
QA/QC protocols.  Data  quality indicators  are quantitative  statistics and qualitative
descriptors that are used to interpret the degree of acceptability or utility of data to the
user.  The principal data quality indicators are bias, precision, accuracy, comparability,
completeness, and representativeness.

Project Organization - a list of personnel responsible for conducting the  investigation
and the laboratories  responsible for analyzing the samples should be provided.

QA/QC Sampling Plans prepared  using the QASPER software include  all of these
components.

QASPER QA/QC SAMPLING PLANS

QA/QC Sampling Plans created within QASPER include a title page and 11  sections
which are based on the requirements of two documents: Data Quality Objectives Process
for Superfund (OSWER 9355.9-01) and the Removal  Program QA/QC Guidance on
Sampling  QA/QC Plan and Data Validation  Procedures (OSWER 9360.4-01).

QASPER has a database of standard technical text which is utilized in an electronic "cut
and paste" process  with user-provided,  site-specific information to  create a QA/QC
Sampling Plan.  This allows the user to focus on critical information while the software
handles the presentation and  correlation of that information with data in other sections.
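
Conceptually, this "cut and paste" assembly is a template merge: stored generic text
supplies the boilerplate and the user's entries supply the site-specific values.  A minimal
sketch of the idea follows (Python; the field names and wording are hypothetical and are
not taken from QASPER's actual text database):

# Stored generic text with placeholders for site-specific entries.
GENERIC_TEXT = ("Samples will be collected at {site_name} in accordance with the "
                "sampling design in Section 3.0.  The data will be used to "
                "{data_use_objective}.")

# User-provided, site-specific information gathered by the prompts.
site_info = {
    "site_name": "Test Site",
    "data_use_objective": "determine the extent of contamination",
}

# The plan section is the generic text merged with the site-specific values.
section_text = GENERIC_TEXT.format(**site_info)
print(section_text)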

This process will be illustrated by "walking through" QASPER.  It is recommended that
users progress in a sequential manner since the database builds on previously provided
information. It is possible  to skip sections or avoid input requirements, especially when
information is not yet known, but  it may not be possible to complete certain sections
                                            739

-------
(i.e., 3.0, 4.0, 6.0, and 7.0) without providing information in preceding sections (i.e.,
1.0 and 2.0). Figure 1 depicts the menu of QASPER sections.
[Screen capture: QASPER main menu for the current plan "Test Site", listing the plan
sections: 1.0 Site Background; 2.0 Data Use Objectives; 3.0 Sampling Design; 4.0 Sampling
and Analysis; 5.0 Standard Operating Procedures; 6.0 Quality Assurance Requirements;
7.0 Data Validation; 8.0 Deliverables; 9.0 Project Organization and Responsibilities;
10.0 Schedule of Activities; 11.0 Attachments]

Figure 1.  QASPER QA/QC Sampling Plans Sections

Title Page -  This section  includes basic information such as the site name, various
identifying numbers, and the names and affiliations of key personnel associated with the
site.  Some of this information will be utilized elsewhere within the completed plan.  If
the user chooses  not to  enter the requested information, the completed plan will  be
assembled without  the  information.   To add  information  that is not requested  by
QASPER, the user may edit the plan using a word processing program.

Section 1.0, Site Background - In this section, information about the site is entered,
including: location and size of the site; information about the surrounding environment;
status of current site activity; general types of materials that may be present; remedial
units (sampling areas); specific contaminants of concern and their volumes; cause of the
contamination; potential migration pathways, exposure routes, and receptors; constraints
that may  hinder sampling; additional information about the site;  source  of the
information; and current  stage/phase of the project.
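
A rough way to picture these Section 1.0 entries is as a set of named fields attached to
the plan.  The sketch below is purely illustrative; the field names and sample values are
hypothetical and do not represent QASPER's internal database schema:

# Hypothetical structure for Section 1.0, Site Background entries.
site_background = {
    "location_and_size": "3-acre former drum storage yard",
    "surrounding_environment": "light industrial area with a nearby drainage ditch",
    "current_site_activity": "inactive",
    "materials_present": ["waste oils", "spent solvents"],
    "remedial_units": ["drainage ditches", "drum staging area"],
    "contaminants_of_concern": {"metals": "unknown volume"},
    "migration_pathways": ["surface runoff", "groundwater"],
    "sampling_constraints": ["limited site access"],
    "information_source": "previous response action report",
    "project_phase": "removal assessment",
}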

Section 2.0, Data Use Objectives - Here the user specifies the organizational program
area within which they are working and the objective(s) of the sampling event.  QASPER
then identifies the data category (screening or definitive) that is applicable to the project.
If QASPER indicates a data category of screening (S), the user is able to upgrade it to
definitive (D)  by following the instructions on the screen, as shown in Figure 2. In some
cases,  either  type of data  may be collected  so the user is  requested to specify the
category.  In addition, the user specifies acceptable limits for making decision errors.
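
The selection logic can be thought of as a lookup from (program area, sampling objective)
to a required data type, with an optional upgrade from screening to definitive.  The sketch
below is a simplified illustration; apart from the Removal/Quantity of contamination case
shown in Figure 2, the table contents are assumptions, not QASPER's actual combinations:

# "S" = screening data, "D" = definitive data.
REQUIRED_DATA_TYPE = {
    ("Removal", "Quantity of contamination"): "S",
    ("Removal", "Verification of cleanup"):   "D",   # assumed entry for illustration
}

def required_category(program_area, objective, upgrade=False):
    # Unknown combinations default to screening here purely for illustration.
    category = REQUIRED_DATA_TYPE.get((program_area, objective), "S")
    # A screening requirement may be upgraded to definitive;
    # a definitive requirement is never downgraded.
    if upgrade and category == "S":
        category = "D"
    return category

print(required_category("Removal", "Quantity of contamination"))                # S
print(required_category("Removal", "Quantity of contamination", upgrade=True))  # D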
                                              740

-------
[Screen capture: Section 2.0, Data Use Objectives.  For Program Area "Removal" and
Sampling Objective "Quantity of contamination," the screen reports "The required data
type for the above combination is: S" and asks "Do you wish to change the required data
type?  [Y/N]"]

Figure 2. Data Category Upgrade Mechanism

Section 3.0, Sampling Design - In this section, for each remedial unit and its associated
program area/sampling  objective, the user specifies the matrix to be sampled and the
parameters  for which the samples will be analyzed.   Users also  specify the sampling
approach,  and  the  locations and  numbers  of samples  to be  collected,  including
background and QA/QC samples as shown in Figure 3.
[Screen capture: Section 3.0, Sampling Design entry screen for the drainage ditches
remedial unit (Removal/Quantity of contamination; Soil, Metals).  The screen prompts for
the numbers of background samples, samples to be collected for screening, replicate
aliquots for determining screening analytical error, screened samples to be used for
confirmation, trip blanks, field blanks, rinsate blanks, matrix spikes, and PE samples.]

Figure 3. Numbers of Samples to be Collected
                                                   741

-------
Section 4.0, Sampling and Analysis - Here users identify the sampling requirements,
sampling equipment, and sample analytical information for each remedial unit, program
area/sampling objective, matrix, and parameter combination as shown in Figure 4.
[Figure 4 (screen capture): Section 4.0, Sampling and Analytical Summary for the drainage
ditches remedial unit (Removal/Quantity of contamination; Soil, Metals), with entries for
Sampling Equipment and Sample Analysis]
-------
Section 7.0, Data Validation - This section contains the instructions for validating the
analytical data generated under  this plan.   The instructions  are based on  the  data
categories that were identified in  Section 2.0.  Users may view and edit the text.

Section 8.0, Deliverables - In this section, users specify deliverables or reports that will
be produced, including analyses, analytical reports, final reports, maps/figures, etc.
QASPER provides standardized summary text for the deliverables shown in Figure 5.
Users may choose from a picklist or enter new deliverables.
[Figure 5 (screen capture): Standard Deliverables picklist (Analytical Report, Data Review,
Final Report, Maps/Figures, Status Report, Trip Report), with keys to add a deliverable,
select a deliverable, or save and exit]
-------
Export Plan - This utility sends a copy of a plan database to the designated floppy disk
or subdirectory. The file may be brought back into this or another copy of QASPER
using the corresponding Import Plan feature, enabling users to transfer and share plans.

Maintain Lists - This utility allows additions to or deletions from the various picklists
which are provided in QASPER.

Maintain Generic Text - This utility allows users to customize generic text provided in
QASPER.  This feature provides for variations in the text to accommodate regional or
programmatic differences in policy and procedures.

Maintain Plan Text - This utility allows users to customize the format and content of
the QA/QC  Sampling  Plan template,  again allowing for variation in regional or
organizational differences.

Export Modifications  - This  utility allows users  to  prepare a file which includes
customized lists, generic text, and plan templates as shown in Figure 6. This file can be
distributed throughout a region or an organization  and imported into all regional or
organizational copies  of QASPER.  This will ensure consistency of QA/QC Sampling
Plans prepared  within the region or organization.
[Screen capture: Export Modifications Option menu, with selections including Generic Text
and Plan Templates]
Figure 6. Export Modifications Option

Reindex - This utility recreates the indexes for the QASPER databases. This procedure
is useful as an  initial solution when data appear to be corrupted.  For example, if the
user knows that a picklist  has five  entries  but sees  only  one,  this option may be
implemented.  If the user is currently in a sampling plan and chooses this option, then
the system files or the plan files may be reindexed.

Pack and Reindex - This option also recreates the indexes for the QASPER databases.
In addition, it permanently removes any information that has been deleted.
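
The distinction between the two utilities can be sketched in a few lines: reindexing
rebuilds the lookup index over the stored records without touching them, while packing
first discards records flagged as deleted.  The following Python fragment is a conceptual
illustration only, not QASPER code:

# Records flagged "deleted" are hidden by reindexing but are only
# physically removed by a pack-and-reindex operation.
records = [
    {"id": 1, "text": "Analytical Report", "deleted": False},
    {"id": 2, "text": "Trip Report",       "deleted": True},
]

def reindex(recs):
    # Rebuild the index of visible records; deleted records remain stored.
    return {r["id"]: r for r in recs if not r["deleted"]}

def pack_and_reindex(recs):
    # Permanently remove deleted records, then rebuild the index.
    recs[:] = [r for r in recs if not r["deleted"]]
    return reindex(recs)

print(len(records), sorted(reindex(records)))   # 2 [1]  (deleted record still stored)
pack_and_reindex(records)
print(len(records))                             # 1      (deleted record removed)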
                                          744

-------
System Configuration - Here the user may change monitor or printer types, and enter
the command line for the word processing package that will be used during editing. This
allows customizing of the QASPER software based on the user's hardware and software.

In addition to these features, status lines appear at the bottom of each screen throughout
the QASPER program to assist users with input in any field.  The status lines prompt the
user for information required by QASPER. They also indicate available function keys.

An On-Line Help system is available any time the program is waiting for user input;
however, it is not available while reports are being generated, databases are being
reindexed/recovered, or system files are being searched.  Help may be accessed from
anywhere in the program by pressing the help key.

In addition,  a User's  Guide is provided with the software and technical support  is
available between the hours of 9:00 AM and  5:00 PM ET by  calling:  U.S. EPA/ERT
Software Support, (800) 999-6990.

HARDWARE REQUIREMENTS

To run QASPER the following  must be available:

       •     An IBM personal computer (PC) or 100% compatible system
       •     A hard drive with at least 2 megabytes (MB) of free space
       •     At least  640  kilobytes (K) of random access memory  (RAM)
       •     A printer for hard-copy output

CONCLUSION

Using QASPER, Project Managers may save time, money, and resources by quickly
developing sampling plans  which address all  of the elements required by current U.S.
EPA guidance. Once familiar with QASPER,  a user may generate a technically complete
plan  in approximately 90 minutes. When a base format is established, users may take
advantage of QASPER's copy feature and generate plans in approximately 45 minutes.

Due to its minimal hardware requirements, QASPER may be incorporated  into any site
"tool-box" and may be used to generate sampling plans en route to or upon arrival at the
site.  These QA/QC Sampling Plans can be immediately utilized by knowledgeable field
crews to collect representative samples and increase the probability of generating reliable
data of known quality that will  meet the intended use.

In addition,  QASPER,  Version 4.1, is highly flexible and  may be adapted to a wide
variety of regional or organizational situations. This allows organizations to customize
or structure QASPER to  provide QA/QC  Sampling  Plans in their own standard
                                        745

-------
appearance and with their own specific content. Standardized formats may be exported
and transmitted to various regional and national locations.

REFERENCES

Office of  Emergency and  Remedial Response,  U.S. EPA, Data Quality Objectives
Process for Superfund, Interim Final Guidelines, 9355.9-01, EPA540-R-93-071,
PB94-963203, September 1993.

Office of Emergency and Remedial Response, U.S. EPA,  Quality Assurance/Quality
Control  Guidance for Removal Activities, Sampling QA/QC Plan and Data Validation
Procedures, Interim Final,  EPA/540/G-90/004, April 1990.

QASPER User's Guide, January 1995.
                                          746

-------
103
                             Monitoring VOC Losses in Soils Using
                 Quantitation Reference Compounds and Response Pattern Analysis

        Quantitation Reference Compounds (QRCs) can be used to monitor the behavior of target VOCs
frequently found at Superfund sites.  The QRCs are spiked into the sample matrix at the time of sampling
and can monitor target VOC behavior in soils by response pattern similarities. Whatever the mechanism
of VOC/soil interaction, be it surface sorption, inter- and intra-particle partitioning, biodegradation, etc.,
use of QRCs will accurately track the behavior and fate of certain VOC target compounds.
        Target VOCs are spiked onto aliquots of soils of varying particle size distributions and total
organic carbon contents.  Phase I of the study spikes the QRCs onto the soil aliquots at the same time
as the target VOCs, while Phase II spikes the QRCs at various lengths of time after the target VOC
spike.  The samples are then connected to a purge-and-trap GC/MS and analyzed according to Method
8260.  The absolute response of the target VOCs and the QRCs is plotted against the sample number,
and the responses are compared for pattern similarities.
        The QRC responses paralleled the responses of certain target VOCs regardless of soil type or
length of time target VOCs were held  before addition of the QRC spike.  The same target VOC/QRC
pairs showing parallel response behavior were identified across the range of soil characteristics. Response
factors were calculated from target VOC/QRC pairs exhibiting similar response patterns within a given
soil type. These response factors were used in subsequent analyses using the same soil type to determine
the percent recoveries of the target VOCs.  These were compared to the percent recoveries using the
current method without QRCs.  Target compound recoveries ranged  from  15-30% using the  current
method, while recoveries  ranged from 93-105% using QRCs and response pattern similarities.
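
One plausible reading of that calculation is sketched below: a response factor is taken from a target
VOC/QRC pair showing parallel response behavior in a given soil type, and that factor is then applied
to a later analysis of the same soil type to estimate the fraction of the target compound remaining.
The Python sketch uses hypothetical instrument responses; the numbers are not data from the study.

# Response factor from a paired target VOC and QRC with parallel behavior:
rf = 85000.0 / 100000.0          # target response / QRC response = 0.85

# Later analysis of the same soil type:
qrc_response = 40000.0           # measured QRC response in the field sample
target_response = 33000.0        # measured target VOC response

# Expected target response if no additional loss had occurred since the QRC spike:
expected_target = rf * qrc_response          # 34,000

recovery_percent = 100.0 * target_response / expected_target
print(round(recovery_percent, 1))            # -> 97.1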
        Target VOCs can be monitored for their behavior within a specific soil type by using QRCs as
compounds capable of demonstrating similar behavior.  This includes losses from sorptive mechanisms,
biodegradation and  losses arising from  headspace partitioning prior  to  analysis  and venting of the
headspace when the sample container is opened.
        There is a great amount of research designed to understand the behavior of VOCs in various
matrices under various environmental conditions. Even the best VOC modeling systems  available require
many estimations or 'best guess' inputs  to quantify VOC properties, movements, losses, etc.   The
properties of VOCs and their complex mobility pathways make piecing together all of the information
necessary to accurately describe the behavior of VOCs in soils very difficult. It is likely to be quite some
time before this information can be tested and consolidated. In the meantime, using QRCs takes all of
these variables into account as a  'summed' property, and  demonstrates the capability to more accurately
describe what VOCs existed in the sample at the time of sampling, not what is in the sample at the  time
of analysis.
                                                 747

-------
AUTHOR INDEX
Name                               Paper Number
Altshul, L.                        38
Anderson, D.                       57
Armstrong, G.                      1
Bacon, B.                          12
Bailey, A.                         78
Barron, J.                         33
Bass, D.                           99
Bath, R.                           76
Bauer, W.                          25
Beckert, W.                        34
Benedicto, J.                      34
Benes, S.                          2
Bennett, P.                        37
Blye, D.                           75
Boehler, W.                        67
Boparai, A.                        72
Borgesson, S.                      6
Bottrell, D.                       71, 76
Bowadt, S.                         37
Boyd, J.                           87
Bruce, M.                          20
Buhl, R.                           6
Burn, J.                           3
Carley, R.                         46
Carlson, R.                        35, 38
Carosone-Link, P.                  65
Carter, K.                         9
Carter, M.                         99
Chapnick, S.                       69
Chau, N.                           41
Chesner, W.                        74
Chiu, Y.                           35
Coakley, W.                        102
Cohen, R.                          81
Collins, L.                        55
Connolly, M.                       25
Cox, C.                            8
Crain, J.                          72
Cypher, R.                         88
Daggett, M.                        42
Dandge, D.                         86
De Ruisseau, C.                    38
Dendrou, B.                        66
Desourcie, J.                      53, 54
Donley, J.                         73
Dougherty, J.                      12
Dupes, L.                          83
Edwards, P.                        77
Ekes, L.                           10
Engelmann, W.                      13
Ezzell, J.                         21, 36
Fallick, G.                        60
Feeney, M.                         23
Felix, D.                          36
Felix, D.W.                        21
Flax, P.                           92
Fleeker, J.                        50
Flory, D.                          97
Forman, R.                         82
Frank, V.                          41
Friedman, S.                       26
Frisbie, S.                        10
Ganz, A.                           56
Gere, D.                           37
Gravel, D.                         25
Green, D.                          72
Gregg, D.                          42
Gueco, A.                          50
Hall, J.                           20
Hanby, J.                          11
Hansen, A.                         84
Harrison, R.O.                     35, 38
Hartman, B.M.                      18
Hassett, D.                        70
Hays, M.                           39
Helms, C.                          44
Hensley, J.                        99
Herzog, D.                         50
Hess, J.                           48
Hewetson, D.                       86
Hewitt, A.                         28
Ho, P.                             34
Hofler, F.                         36
Horsey, H.                         65
Hsu, J.P.                          29
Hurst, L.                          42
Ikediobi, C.                       64
Ilias, A.                          84, 89
Ivaldi, J.                         56
Jackson, L.                        76
Jacobs, L.                         7
Janiec, G.                         102
Jassie, L.                         39
Jenkins, T.                        22
Johnson, M.                        90
Johnson, P.                        99
Karu, A.                           35
Kauffman, J.                       30
Kaushik, S.                        86
Keith, L.                          77
Kelly, J.                          92
Kelly, K.                          24, 40, 42
Kiely, J.                          72
Kim, R.                            34
King, H.                           61
Koch, C.                           91
Kolodziejski, M.                   48
Krol, J.                           60
Lafornara, J.                      14
Lancaster, D.                      79
Latinwo, L.                        64
Lawruk, T.                         50
Lazarus, L.                        92
Lee, H-B.                          37
Legore, T.                         4
LeMoine, E.                        80
Lesnik, B.                         31, 32
Lewis, M.                          91
Lewis, D.                          77
Lindahl, P.                        99
Linton, C.                         23
Littau, S.                         62
Liu, S.                            46
Loftis, J.                         65
Lopez-Avila, B.                    34
MacPhee, C.                        10
Madden, A.                         46
Manen, C-A.                        78
Mani, V.                           53
Manimtim, L.                       97
Marr, J.                           99
Marsden, P.                        32, 41
Martini, D.                        58
Matt, J.                           51
Maxey, R.                          41
McCarty, H.                        31, 57
McCormick, E.                      22
McGinley, L.                       97
McMillin, R.                       42
Medina, G.                         84, 89
Meiggs, T.                         68
Melberg, N.                        7
Messer, E.                         33
Messing, A.                        43, 44
Moo-Young, H.                      93
Moodier, J.                        8
Mosesman, N.                       23
Mussoline, G.                      67
Myers, K.                          22
Nagourney, S.                      74
Naughton, V.                       45, 46
Neerchal, N.                       4
Newberry, R.                       95, 99
O'Brien, R.                        5
Oehrle, S.                         60
Orzechowska, G.                    13
Pan, J.                            29
Pan, V.                            94
Pandya, H.                         95
Parish, K.                         99
Parker, L.                         47
Parsons, C.                        44
Patton, G.                         77
Paustian, M.                       56
Peart, T.                          37
Peden, D.                          6
Perkins, E.                        17
Petrinec, C.                       27
Poziomek, E.                       13
Pugh, L.                           6
Randall, S.                        23
Ranney, T.                         47
Re, M.                             27, 77
Rediske, R.                        6
Reitmeyer, L.                      7
Revesz, R.                         62
Richter, B.                        21, 36
Rilling, A.                        25
Risser, N.                         48, 49
Robertson, G.                      86
Robison, M.                        88
Roby, M.                           41
Roethal, F.                        74
Rogers, D.                         6
Romano, J.                         60
Rose, G.                           83
Roskos, D.                         69
Rubio, F.                          50
Ryan, J.                           80
Sadowski, C.                       80
Sauter, A.                         96
Saylor, T.L.                       12
Scandora, A.                       99
Schilling, B.                      72
Schonhardt, M.                     8
Schotz, T.                         97
Schwartz, N.                       24, 40, 42
Secord, F.                         63
Sensel, A.                         45
Seyfried, J.                       16
Sharma, M.                         69
Shifrin, N.                        69
Shirey, R.                         53, 54
Shirkhan, H.                       38
Siler, S.                          58
Silverman, J.                      38
Singhvi, R.                        14
Skoczenski, B.                     51
Smith, L.                          72, 98
Smith, R.K.                        63
Snelling, R.                       52
Stalling, D.                       24, 42
Strattan, L.W.                     19
Streets, E.                        99
Stuart, J.                         84
Studabaker, W.                     26
Taylor, S.                         57
Telliard, W.                       44
Thorne, P.                         22
Toth, D.                           66
Tsang, S.                          32, 41
Tummillo, Jr., N.                  74
Turriff, D.                        7, 15
Uhlfelder, M.                      88
Unger, P.                          85, 100, 101
Vallejo, R.                        26
Vitale, R.                         67, 75
Walsh, M.                          57, 102
Wang, S.                           46
Ward, S.                           103
Wen, L.                            64
Weston, R.                         102
White, D.                          97
Whitney, R.                        43
Wilburn, R.                        6
Williams, K.                       55
Willig, T.                         30
Winka, M.                          74
Wise, S.                           39
Woolley, C.L.                      53, 54
Wright, K.                         16
Xu, Y.                             51
Yeager, J.                         72
Yeaw, D.                           59
Young, R.                          34


-------