Second International Symposium
FIELD SCREENING METHODS FOR
HAZARDOUS WASTES AND
TOXIC CHEMICALS
February 12-14, 1991
Symposium Proceedings
-------
SECOND INTERNATIONAL SYMPOSIUM
FIELD SCREENING METHODS FOR
HAZARDOUS WASTES AND
TOXIC CHEMICALS
February 12-14, 1991
CO-SPONSORS
U.S. Environmental Protection Agency
U.S. Department of Energy
U.S. Army Toxic and Hazardous Materials Agency
U.S. Army Chemical Research, Development and Engineering Center
U.S. Air Force
Florida State University
National Environmental Technology Applications Corporation
National Institute for Occupational Safety and Health
-------
DISCLAIMER
Although this Proceedings Document reports the oral and poster presentations and
discussions that occurred during this Symposium funded by the United States
Environmental Protection Agency, the contents represent views independent of
Agency Policy. This Document has not been subjected to the Agency's peer review
process and does not necessarily reflect the Agency views. No official endorse-
ment should be inferred.
-------
SYMPOSIUM ORGANIZATION
Symposium Chairman - Llewellyn Williams, EPA/EMSL-Las Vegas, NV
Vice-Chairman - Eric Koglin, EPA/EMSL-Las Vegas, NV
Executive Secretary - John Koutsandreas, Florida State University
ACKNOWLEDGEMENTS
This symposium has been arranged through a contract with
ICAIR, Life Systems, Inc. The following personnel were involved in
coordinating this symposium:
Program Manager - Ms. Jo Ann Duchene
Presentation Coordinators - Mr. Ron Polhill
Ms. Donna Studniarz
Exhibit Coordinator - Mr. Charles Tanner
Registration Coordinator - Ms. Linda Hashlamoun
-------
FOREWORD
The role of and need for field screening methods for the identification and quantification of contaminants in
environmental media are growing rapidly. This nation and its European neighbors are faced with the tremendous task
of remediating thousands of hazardous waste sites -- the legacy of our much less environmentally aware
predecessors. Field screening methods that generate real-time information on the nature and extent of contamina-
tion improve the cost-effectiveness of remediation. Many of these same methods can be, and in some cases already
are being, used to improve our capability to measure exposure, at the point of exposure, thereby improving our ability
to assess risks to human health and the environment.
The U.S. EPA is not the only viable user of field screening methods; that fact is reflected in the list of this
Symposium's co-sponsors. Other agencies are discovering applications for these same technologies to address
issues such as worker safety, drug interdiction, and chemical warfare defense. The research activities supported by
these same agencies are advancing innovative technologies that may have application in environmental monitoring
and field screening.
To present a global view of technological developments, this Symposium featured over 120 platform and poster
presentations from the United States and around the world. The papers and discussions that follow represent three
days of intense communication and cooperation among a variety of communities—regulatory, academic, industrial
and users. It is my hope that the products of this Symposium will find many uses and will provide the impetus for
new initiatives in field screening methods.
Llewellyn R. Williams
U.S. Environmental Protection Agency
Environmental Monitoring Systems Laboratory
Las Vegas, Nevada
-------
CONTENTS
OPENING PLENARY SESSION
Opening Remarks — Dr. Llewellyn Williams, U.S. EPA, Environmental Monitoring Systems Laboratory, Las Vegas 1
Keynote Address — Analytical Issues in the U.S. EPA Superfund Program
Larry Reed, U.S. EPA, Director Hazardous Site Evaluation Division, Office of Emergency and Remedial Response ....3
Overview of DOE's Field Screening Technology Development Activities
C.W. Frank, T.D. Anderson, C.R. Cooley, K.E. Hain, S.C.T. Lien, U.S. Department of Energy; R.L. Snipes,
Martin Marietta Energy Systems; M.D. Erickson, Argonne National Laboratory 5
Department of Defense Field Screening Methods Requirements in the Installation Restoration Program
Dennis J. Wynne, U.S. Army Toxic and Hazardous Materials Agency 15
An Overview of Army Sensor Technology Applicable to Field Screening of Environmental Pollutants
Raymond A. Mackay, U.S. Army Chemical Research, Development and Engineering Center 17
Field Analytical Methods for Superfund
Howard M. Fribush and Joan F. Fisk, U.S. EPA 25
Field Delineation of Soils Contamination on Hazardous Waste Sites Regulated Under New Jersey's Hazardous Waste Program
Frederick W. Cornell, New Jersey Department of Environmental Protection 31
Plenary Session Discussion 40
SESSION 1:
Chemical Sensors
Chairperson: Dr. Ed Poziomek, University of Nevada Environmental Research Center
A Fiber Optic Sensor for the Continuous Monitoring of Chlorinated Hydrocarbons
P.P. Milanovich, P.P. Daley, K. Langry, B.W. Colston, S.B. Brown and S.M. Angel, Lawrence
Livermore National Laboratory 43
Chemical Sensors for Hazardous Waste Monitoring
M.B. Tabacco, Q. Zhou, K. Rosenblum, Geo-Centers, Inc.; M.R. Shahriari, Rutgers University 49
Rapid, Subsurface, In Situ Field Screening of Petroleum Hydrocarbon Contamination Using Laser Induced
Fluorescence Over Optical Fibers
S.H. Lieberman, G.A. Theriault, Naval Ocean Systems Center; S.S. Cooper, P.O. Malone and R.S. Olsen, U.S. Army
Waterways Experiment Station, Vicksburg; P.W. Lurk, U.S. Army Toxic and Hazardous Materials Agency 57
Chemical Sensors Panel Discussion 64
Spectroelectrochemical Sensing of Chlorinated Hydrocarbons for Field Screening and In Situ Monitoring Applications
Michael M. Carrabba, Robert B. Edmonds and R. David Rauh, EIC Laboratories, Inc.; John W. Haas, III,
Oak Ridge National Laboratories 67
Surface Acoustic Wave (SAW) Personal Monitor for Toxic Gases
N.L. Jarvis, H. Wohltjen and J.R. Lint, Microsensor Systems, Inc 73
Arrays of Sensors and Microsensors for Field Screening of Unknown Chemical Wastes
W.R. Penrose, J.R. Stetter and W.J. Buttner, Transducer Research, Inc.; Z. Cao, Illinois Institute of Technology 85
SESSION 2:
Ion Mobility Spectrometry
Chairperson: Dr. Steve Harden, U.S. Army Chemical Research, Development and Engineering Center
Real-Time Detection of Aniline in Hexane By Flow Injection Ion Mobility Spectrometry
G.E. Burroughs, National Institute for Occupational Safety and Health; G.A. Eiceman and L. Garcia-Gonzalez,
New Mexico State University 95
Detection of Microorganisms by Ion Mobility Spectrometry
A.P. Snyder, M. Miller and D.B. Shoff, U.S. Army Chemical Research, Development and Engineering Center;
Gary A. Eiceman, New Mexico State University; D. A. Blyth, J. A Parsons, Geo-Centers, Inc 103
Data Analysis Techniques for Ion Mobility Spectrometry
Dennis M. Davis, U.S. Army Chemical Research, Development and Engineering Center 113
-------
Ion Mobility Spectrometry as a Field Screening Technique
Lynn D. Hoffland and Donald B. Shoff, U.S. Army Chemical Research, Development and Engineering Center 137
Hand-Held GC-Ion Mobility Spectrometry for On-Site Analysis of Complex Organic Mixtures in Air or Vapors Over Waste Sites
Suzanne Ehart Bell, Los Alamos National Laboratory; G.A. Eiceman, New Mexico State University 153
Remote and In Situ Sensing of Hazardous Materials by Infrared Laser Absorption, Ion Mobility Spectrometry and Fluorescence
Peter Richter, Technical University of Budapest 167
SESSION 3:
Robotics
Chairperson: Dr. Carolyn Esposito, U.S. EPA Risk Reduction Engineering Laboratory
The Department of Energy's Robotics Technology Development Program for Environmental Restoration and
Waste Management
A.C. Heywood, Science Applications International Corporation; S.A. Meacham, Oak Ridge National Laboratory;
P.J. Eicker, Sandia National Laboratories 173
Field Robots for Waste Characterization and Remediation
William L. Whittaker, David M. Pahnos; Field Robotics Center, Carnegie Mellon Institute 181
Space Technology for Application to Terrestrial Hazardous Materials Analysis and Acquisition
Brian Muirhead, Susan Eberlein, James Bradley and William Kaiser, NASA/Jet Propulsion Laboratory 187
Development of a Remote Tank Inspection (RTI) Robotic System
Chris Fromme, Barbara P. Knape, Bruce Thompson, RedZone Robotics, Inc 197
Automated Subsurface Mapping
Jim Osborn, Field Robotics Center, Carnegie Mellon Institute 205
SESSION 4:
QA and Study Design
Chairperson: Dr. Janine Jessup Arvizu
A Quality Assurance Sampling Plan for Emergency Response (QASPER)
John M. Mateo, Christine M. Andreas, Roy F. Weston; William Coakley, U.S. EPA 217
A Rationale for the Assessment of Errors in Soil Sampling
Jeffrey van Ee, U.S. EPA; Clare L. Gerlach, Lockheed Engineering & Sciences Company 227
A Review of Existing Soil Quality Assurance Materials
K. Zarrabi, A.J. Cross-Smiecinski and T. Starks, University of Nevada 235
SESSION 5:
Air Pathway Monitoring at Superfund Sites
Chairperson: Dr. William McClenny
Evaluation of Emission Sources and Hazardous Waste Sites Using Portable Chromatographs
R.E. Berkley, U.S. EPA 253
High Speed Gas Chromatography for Air Monitoring
S.P. Levine, H.Q. Ke and R.F. Mouradian, University of Michigan; R. Berkley, U.S. EPA; J. Marshall, HNU Systems.... 265
Screening Volatile Organics By Direct Sampling Ion Trap and Glow Discharge Mass Spectrometry
Marcus B. Wise, G.B. Hurst, C.V. Thompson, Michelle V. Buchanan and Michael R. Guerin, Oak Ridge National
Laboratory 273
Development and Testing of a Man-Portable Gas Chromatography/Mass Spectrometry System for Air Monitoring
Henk L.C. Meuzelaar, Dale T. Urban and Neil S. Arnold, University of Utah 289
On-Site Multimedia Analyzers: Advanced Sample Processing with On-Line Analysis
S. Liebman, Geo-Centers, Inc.; M.B. Wasserman, U.S. Army Chemical Research, Development and Engineering
Center, E.J. Levy and S. Lurcott, Computer Chemical Systems, Inc 299
Using a FID-Based Organic Vapor Analyzer in Conjunction with GC/MS Summa Canister Analyses to Assess the Impact of
Landfill Gases from a Superfund Site on the Indoor Air Quality of an Adjacent Commercial Property
T.H. Pritchett, U.S. EPA; D. Mickunas and S. Schuetz, IT Corporation 307
-------
SESSION 6:
Field Mobile GC/MS Techniques
Chairperson: Dr. Stephen Billets, U.S. EPA Environmental Monitoring Systems Laboratory, Las Vegas
Field Analytical Support Project (FASP) Use to Provide Data for Characterization of Hazardous Waste Sites for Nomination
to the National Priorities List (NPL): Analysis of Polycyclic Aromatic Hydrocarbons (PAHs) and Pentachlorophenol (PCP)
Lila AccraTransue, Andrew Hafferty and Tracy Yerian, Ecology and Environment 309
Thermal Desorption Gas Chromatograph-Mass Spectrometry Field Methods for the Detection of Organic Compounds
A. Robbat, Jr., T-Y Liu, B. Abraham and C-J Liu, Tufts University 319
Rapid Determination of Semivolatile Pollutants by Thermal Extraction/Gas Chromatography/Mass Spectrometry
T. Junk, V. Shirley, C.B. Henry, T.R. Irvin and E.B. Overton, Louisiana State University; J.E. Zumberge,
C. Sutton and R.D. Worden, Ruska Laboratories, Inc 327
The Application of a Mobile Ion Trap Mass Spectrometer System to Environmental Screening and Monitoring
William H. McClennen, Neil, S. Arnold, Henk L.C. Meuzelaar, JoAnn A. Lighty, University of Utah;
Erich Ludwig, GSF Munchen, Institut fur Okologische Chemie 339
Field Measurement of Volatile Organic Compounds by Ion Trap Mass Spectrometry
M.E. Cisper, J.E. Alarid, P.H. Hemberger, E.P. Vanderveer, Los Alamos National Laboratory 351
Transportable GC/Ion Trap Mass Spectrometry for Trace Field Analysis of Organic Compounds
Chris P. Leibman and David Dogruel, Eric P. Vanderveer, Los Alamos National Laboratory 367
SESSION 7
Portable Gas Chromatography
Chairperson: Dr. Thomas Spittler, U.S. EPA New England Regional Laboratory
The Use of Field Gas Chromatography to Protect Groundwater Supplies
Thomas M. Spittler, U.S. EPA 377
Field Screening Procedures for Determining the Presence of Volatile Organic Compounds in Soil
Alan B. Crockett and Mark S. DeHaan, EG&G Idaho, Inc 383
Comparison of Field Headspace Vs. Field Soil Gas Analysis Vs. Standard Method Analysis of Volatile Petroleum
Hydrocarbons in Water and Soil
Randy D. Golding, Marty Favero, Glen Thompson, Tracer Research Corporation 395
Field Screening of BTEX in Gasoline-Contaminated Groundwater and Soil Samples by a Manual, Static Headspace GC Method
James D. Stuart, Suya Wang and Gary A. Robbins, University of Connecticut; Clayton Wood, HNU Systems, Inc. ...407
Comparison of Aqueous Headspace Air Standard Vs. SUMMA Canister Air Standard for Volatile Organic
Compound Field Screening
H. Wang, Roy F. Weston, Inc.; W.S. Clifford, U.S. EPA 415
Quantitative Soil Gas Sampler Implant for Monitoring Dump Site Subsurface Hazardous Fluids
Kenneth T. Lang, Douglas T. Scarborough, U.S. Army Toxic and Hazardous Materials Agency; Mark Glover,
D.P. Lucero, IIT Research Institute 423
SESSION 8
Field Screening Methods for Worker Safety
Chairperson: Dr. Judd Posner, National Institute for Occupational Safety and Health
Tunable CO2 Laser-Based Photo-Optical Systems for Surveillance of Indoor Workplace Pollutants
Harley V. Piltingsrud, National Institute for Occupational Safety and Health 433
Immuno-Based Personal Exposure Monitors
Arbor Drinkwine, Stan Spurlin, Midwest Research Institute; Jeanette Van Emon, U.S. EPA;
Viorica Lopez-Avila, Mid-Pacific Environmental Laboratory, Inc 449
A Remote Sensing Infrared Air Monitoring System for Gases and Vapors
S.P. Levine, H.K. Xiao, University of Michigan; W. Herget, Nicolet Analytical; R. Spear, University of California;
T. Pritchett, U.S. EPA 461
Adriamycin Exposure Study Among Hospital Personnel
R.L. Stephenson, Thomson Consumer Electronics Inc.; C.H. Rice, J. Dimos, University of Cincinnati 465
-------
Real-Time Personal Monitoring in the Workplace Using Radio Telemetry
Ronald J. Kovein and Paul Hentz, National Institute for Occupational Safety and Health 473
Improvements in the Monitoring of PPM Level Organic Vapors with Field Portable Instruments
Gerald Moore, GMD Systems, Inc 483
SESSION 9
X-Ray Fluorescence
Chairperson: Dr. John Barich, U.S. EPA Region X
Rapid Assessment of Superfund Sites for Hazardous Materials with X-Ray Fluorescence Spectrometry
W.H. Cole III, R.E. Enwall, G.A. Raab, C.A. Kuharic, Lockheed Engineering and Sciences Co.;
W.H. Engelmann, L.A. Eccles, U.S. EPA 497
A High Resolution Portable XRF HgI2 Spectrometer for Field Screening of Hazardous Wastes
J.B. Ashe, Ashe Analytics; P.P. Berry and G.R. Voots, TN Technologies, Inc.; M. Bernick, Roy F. Weston, Inc.;
G. Prince, U.S. EPA 507
Low Concentration Soil Contaminant Characterization Using EDXRF Analysis
A.R. Harding, Spectrace Instruments, Inc 517
Data Quality Assurance/Quality Control for Field X-Ray Fluorescence Spectrometry
Clark D. Carlson, John R. Alexander, The Bionetics Corporation 525
A Study of the Calibration of a Portable Energy Dispersive X-Ray Fluorescence Spectrometer
C.A. Ramsey, D.J. Smith and E.L. Bour, U.S. EPA 535
SESSION 10
Fourier Transform Infrared Spectrometry and Other Spectroscopy Methods
Chairperson: Dr. Donald Gurka, U.S. EPA Environmental Monitoring Systems Laboratory, Las Vegas
Use of Long-Path FTIR Spectrometry in Conjunction with Scintillometry to Measure Gas Fluxes
Douglas I. Moore, Clifford N. Dahm, James R. Gosz, University of New Mexico; Reginald J. Hill, NOAA 541
Pattern Recognition Methods for FTIR Remote Sensing
Gary W. Small, University of Iowa; Robert T. Kroutil, U.S. Army Chemical Research,
Development and Engineering Center 549
Remote Vapor Sensing Using a Mobile FTIR Sensor
R.T. Kroutil, J.T. Ditillo, R.L. Gross, R.J. Combs, W.R. Loerop, U.S. Army Chemical Research,
Development and Engineering Center; G.W. Small, University of Iowa 559
Use of Wind Data to Compare Point-Sample Ambient Air VOC Concentrations with Those Obtained by Open-Path FT-IR
Ray E. Carter, Jr. and Dennis D. Lane, Glen A. Marotz, University of Kansas;
Mark J. Thomas, Jody L. Hudson, U.S. EPA 571
Remote Detection of Organics Using Fourier Transform Infrared Spectroscopy
Jack C. Demirgian and Sandra M. Spurgash, Argonne National Laboratory 583
Interpretation of PPM-Meter Data from Long-Path Optical Monitoring Systems as They Would be Used at
Superfund Hazardous Waste Sites
Thomas H. Pritchett, U.S. EPA; Timothy R. Minnich, Robert L. Scotto and Margaret R. Leo,
Blasland, Bouck & Lee 591
CLOSING PLENARY SESSION
Awards Ceremony 593
Closing Remarks 595
POSTERS
Calibration of Fiber Optic Chemical Sensors
W.F. Arendale and Richard Hatcher, University of Alabama; Bruce Nielsen, Hq. AFESC/RDVW 597
Gas-Chromatographic Analysis of Soil-Gas Samples at a Gasoline Spill
R.J. Baker, J.M. Ficher, N.P. Smith, S.A. Koehnlein, A.L. Baehr, U.S. Geological Survey 599
-------
Significant Physical Effects on Surface Acoustic Wave (SAW) Sensors
David L. Bartley, National Institute for Occupational Safety and Health 601
An Evaluation of Field Portable XRF Soil Preparation Methods
Mark Bernick, Donna Idler, Lawrence Kaelin, Dave Miller, Jayanti Patel, Roy F. Weston; George Prince and
Mark Sprenger, U.S. EPA 603
Development of a Field Screening Technique for Dimethyl Mercury in Air
Brian E. Brass, Lawrence P. Kaelin, Roy F. Weston; Thomas H. Pritchett, U.S. EPA 609
Applicability of Thin-Layer Chromatography to Field Screening of Nitrogen-Containing Aromatic Compounds
William C. Brumley, Cynthia M. Brownrigg, U.S. EPA 615
Assessing the Air Emissions from a Contaminated Aquifer at a Superfund Site
S. Burchette and T.H. Prichett, U.S. EPA; S. Schuetz, IT Corporation; K. Harvey, Roy F. Weston, Inc 619
Calculation and Use of Retention Indices for Identification of Volatile Organic Compounds with a Microchip
Gas Chromatograph
K.R. Carney, E.B. Overton and R.L. Wong, Louisiana State University 621
Determination of PCBs by Enzyme Immunoassay
Mary Anne Chamerlik-Cooper, Robert E. Carlson, ECOCHEM Research, Inc; Robert O. Harrison,
ImmunoSystems, Inc 625
Practical Limits in Field Determination of Fluorescence Using Fiber Optic Sensors
Wayne Chudyk, Kenneth Pohlig, Carol Botteron and Rose Najjar, Tufts University 629
The Colloidal Borescope—A Means of Assessing Local Colloidal Flux and Groundwater Velocity in Porous Media
T.A. Cronk, P.M. Kearl, Oak Ridge National Laboratory 631
Fieldable Enzyme Immunoassay Kits for Drugs and Environmental Chemicals
Peter H. Duquette, Patrick E. Guire, Melvin J. Swanson, Martha J. Hamilton, Stephen J. Chudzik and
Ralph A. Chappa, Bio-Metric Systems, Inc 633
Xuma Expert System for Support of Investigation and Evaluation of Contaminated Sites
W. Eitel, R. Hahn, Landesanstalt f. Umweltschutz B.; W.W. Geiger and R. Weidemann,
Institut f. Datenverarbeitung 645
A Rapid Response SAW-GC Chemical Monitor for Low-Level Vapor Detection
John A. Elton, James F. Houle, Eastman Kodak Company 649
Passive Cryogenic Whole Air Field Sampler
Steven J. Fernandez, Bill G. Motes, Joseph P. Dugan Jr., Susan K. Bird, Gary J. McManus, Westinghouse Idaho
Nuclear Company 653
Effectiveness of Porous Glass Elements for Suction Lysimeters to Monitor Soil Water for Organic Contaminants
Stanley M. Finger, Hamid Hojaji, Morad Boroomand and Pedro B. Macedo, Catholic University of America 657
Comparison of Mobile Laboratory XRF and CLP Split Sample Lead Results from a Superfund Site Remediation in New Jersey
Jon C. Gabry, Ebasco Environmental 671
Screening of Groundwater for Aromatics by Synchronous Fluorescence
R.B. Gammage, J.W. Haas, III and T.M. Allen, Oak Ridge National Laboratory 673
In Situ Detection of Toxic Aromatic Compounds in Groundwater Using Fiberoptic UV Spectroscopy
J.W. Haas III, T.G. Matthews and R.B. Gammage, Oak Ridge National Laboratory 677
Development of Field Screening Methods for TNT and RDX in Soil and Ground Water
Thomas F. Jenkins and Marianne E. Walsh, U.S. Army Cold Regions Research and Engineering Laboratory;
Martin H. Stutz and Kenneth T. Lang, U.S. Army Toxic and Hazardous Materials Agency 683
Quantification of Pesticides on Soils by Thermal Extraction-GC/MS
T. Junk, T.R. Irvin, Louisiana State University; K.C. Donnelly and D. Marek, Texas A&M University 687
A Portable Gas Chromatograph with an Argon Ionization Detector for the Field Analysis of Volatile Organics
Lawrence P. Kaelin, Roy F. Weston, Thomas H. Pritchett, U.S. EPA 689
Sea Mist—A Technique for Rapid and Effective Screening of Contaminated Waste Sites
Carl Keller and Bill Lowry, Science and Engineering Associates, Inc 693
-------
Portable Gas Chromatograph Field Monitoring of PCB Levels in Soil at the Elza Gate Property
Marty R. Keller and Gomes Ganapathi, Bechtel National, Inc 697
Real Time Monitoring of the Flue of a Chemical Demilitarization Incinerator
S.N. Ketkar and S.M. Penn, Extrel Corporation 701
Field Evaluation of the Bruker Mobile Mass Spectrometer Under the U.S. EPA SITE Program
S.M. Klainer, M.E. Silverstein, V.A. Ecker, D.J. Chaloud, Lockheed Engineering and Sciences Company and
S. Billets, U.S. EPA 705
The DITAM Assay - A Fast, Fieldable Method to Detect Hazardous Wastes, Toxic Chemicals, and Drugs
Cynthia Ladouceur, U.S. Army Chemical Research, Development and Engineering Center 709
Rapid Screening of Ground Water Contaminants Using Innovative Field Instrumentation
Amos Linenberg and David Robinson, Sentex Sensing Technology, Inc 711
Improved Detection of Volatile Organic Compounds in a Microchip Gas Chromatograph
Aaron M. Mainga and Edward B. Overton, Louisiana State University 713
On-Line Screening Analyzers for Trace Organics Utilizing a Membrane Extraction Interface
Richard G. Melcher and Paul L. Morabito, The Dow Chemical Company 717
Candidate Protocols for Sampling and Analysis of Chemicals from the Clean Air Act List
R.G. Merrill, J.T. Bursey, D.L. Jones, T.K. Moody, C.R. Blackley, Radian Corporation;
W.B. Kuykendal, U.S. EPA 721
The Investigation of Soil Sampling Devices and Shipping and Holding Time Effects on Soil Volatile Organic Compounds
J.R. Parolini, V.G. King, T.W. Nail and T.E. Lewis, Lockheed Engineering and Sciences Company 725
Developmental Logic for Robotic Sampling Operations
Michael D. Pavelek II, Micren Associates, Chris C. Fromme, RedZone Robotics, Inc 729
Practical Problems Encountered in Remote Sensing of Atmospheric Contaminants
Kirkman R. Phelps and Michael S. DeSha, U.S. Army Chemical Research, Development and Engineering Center....733
A Si(Li) Based High Resolution Portable X-Ray Analyzer for Field Screening of Hazardous Waste
Stanislaw Piorek and James R. Pasmore, Outokumpu Electronics, Inc 737
Measurement and Analysis of Adsistor and Figaro Gas Sensor Used for Underground Storage Tank Leak Detection
Marc A. Portnoff, Richard Grace, Alberto M. Guzman, Jeff Hibner, Carnegie Mellon University 741
Extraction Disks for Spectroscopic Field Screening Applications
Edward J. Poziomek, University of Nevada; DeLyle Eastwood, Russell L. Lidberg,
Gail Gibson, Lockheed Engineering and Sciences Co 747
Field Analytical Support Project (FASP) Development of High-Performance Liquid Chromatography (HPLC) Techniques
for On-Site Analysis of Polycyclic Aromatic Hydrocarbons (PAHs) at Pre-Remedial Superfund Sites
Andrew Riddell, Andrew Hafferty and Tracy Yerian, Ecology and Environment, Inc 751
A Field Comparison of Monitoring Methods for Waste Anesthetic Gases and Ethylene Oxide
Stanley A. Salisbury, G.E. Burroughs, William J. Daniels, Charles McCammon and Steven A. Lee,
National Institute for Occupational Safety and Health 755
On-Site and On-Line Spectroscopic Monitoring of Toxic Metal Ions Using Fiber Optic Ultraviolet Absorption Spectrometry
Kenneth J. Schlager, Biotronics Technology, Inc.; Bernard J. Beemster, Beemster and Associates 759
Rapid Screening of Soil Samples for Chlorinated Organic Compounds
H. Schlesing, N. Darskus, C. Von Holst and R. Wallon, Biocontrol Institute for Chemische Und Biologische
Untersuchungen Ingelheim 763
Development of a Microbore Capillary Column GC-Focal Plane Mass Spectrograph with an Array Detector
for Field Measurements
M.P. Sinha, California Institute of Technology 765
Application of a Retention Index Approach Using Internal Standards to a Linear Regression Model for Retention Time
Windows in Volatile Organic Analysis
Russell Sloboda, NUS Corporation 775
-------
Detection of Airborne Microorganisms Using a Hand-Held Ion Mobility Spectrometer
A. Peter Snyder, U.S. Army Chemical Research, Development & Engineering Center; David A. Blyth,
John A. Parsons, Geo-Centers, Inc; Gary A. Eiceman, New Mexico State University 783
Field Analysis for Hexavalent Chrome in Soil
Robert L. Stamnes, U.S. EPA; Greg D. DeYong, HACH Company; Clark D. Carlson, Bionetics Corp 785
Transportable Tunable Dye Laser for Field Analysis of Aromatic Hydrocarbons in Groundwater
Randy W. St. Germain and Gregory D. Gillispie, North Dakota State University 789
Real Time Detection of Biological Aerosols
Peter J. Stopa, Michael T. Goode, Alan W. Zulich, David W. Sickenberger, E. William Sarver and
Raymond A. Mackay, U.S. Army Chemical Research, Development and Engineering Center 793
Laser Fluorescence EEM Instrument for In-Situ Groundwater Screening
Todd A. Taylor, Hong Xu and Jonathan E. Kenny, Tufts University 797
Analysis of Total Polyaromatic Hydrocarbon Using Ultraviolet-Fluorescence Spectrometry
T.L. Theis, A.G. Collins, P.J. Monsour, S.G. Pavlostathis and C.D. Theis, Clarkson University 805
On-Site Analysis of Chlorinated Solvents in Groundwater by Purge and Trap GC
Stephen A. Turner, Daniel Twomey, Jr., Thomas L. Francoeur and Brian K. Butler, ABB
Environmental Services, Inc 811
U.S. EPA Evaluation of Two Pentachlorophenol Immunoassay Systems
J.M. Van Emon, U.S. EPA; R.W. Gerlach, R.J. White and M.E. Silverstein, Lockheed Engineering and
Sciences Company 815
Rapid Screening Technique for Polychlorinated Biphenyls (PCBs) Using Room Temperature Phosphorescence
T. Vo-Dinh, G.H. Miller, A. Pal, W. Watts and M. Uziel, Oak Ridge National Laboratory;
D. Eastwood and R. Lidberg, Lockheed Engineering & Management Services Co 819
Rapid Determination of Drugs and Semivolatile Organics by Direct Thermal Desorption Ion Trap Mass Spectrometry
Marcus B. Wise, Ralph H. Ilgner, Michelle V. Buchanan and Michael R. Guerin, Oak Ridge National Laboratory ....823
A New Approach for On-Site Monitoring of Organic Vapors at Low PPB Levels
H. Wohltjen, N.L. Jarvis and J.R. Lint, Microsensor Systems 829
A Rapid Screening Procedure for Determining Tritium in Soil
K.M. Wong and T.M. Carsen, Lawrence Livermore National Laboratory 835
Field Preparation and Stabilization of Volatile Organic Constituents of Water Samples by Off-Line Purge and Trap
Elizabeth Woolfenden, Perkin-Elmer Limited, James Ryan, The Perkin-Elmer Corporation 837
A Field-Portable Supercritical Fluid Extractor for Characterizing Semivolatile Organic Compounds in Waste and Soil Samples
Bob W. Wright, Cherylyn W. Wright and Jonathan S. Fruchter, Battelle, Pacific Northwest Laboratories 841
Detection of Mercuric Ion in Water with a Mercury-Specific Antibody
Dwane E. Wylie, Larry D. Carlson, Randy Carlson, Fred W. Wagner, Sheldon M. Schuster, BioNebraska 845
The Effects of Preservatives on Recovery and Analysis of Volatile Organic Compounds
Kaveh Zarrabi, Steven Ward, Thomas Starks and Charles Fitzsimmons, University of Nevada 849
Participants' List 851
-------
OPENING REMARKS
Welcome to the Second International Symposium on Field Screening Methods for Hazardous Waste and Toxic Chemicals.
Twenty-eight months ago, the first of these symposia was held here in Las Vegas, here at the Sahara, and the response to
that Symposium clearly indicated that the time was right. There was really a need for a forum to exchange information
about the emerging technologies that can be and have been applied to environmental monitoring in the field.
As you can see from the list of the Symposium sponsors, EPA is certainly not alone in its appreciation for these technolo-
gies and their potential for the future. I believe that we have assembled a powerful program for you this next two and a
half days.
The team responsible for this program was made up of Mr. John Koutsandreas, the Executive Secretary from Florida
State University; Mr. Eric Koglin, the Matrix Manager here at EPA-Las Vegas for the Advanced Field Monitoring Meth-
ods Program; and the coordinators at Life Systems, Inc. But for all of the efforts of the Symposium team, it's really the
interest, the enthusiasm, and the participation, over the next couple of days, of all the attendees, that will really set this
Symposium apart.
We are already planning for the Third International Symposium. We try to stagger them in such a way that enough time
elapses — that the papers aren't the same and the technologies have had an opportunity to advance. We're looking at just
about two years from now.
We are very interested in getting your feedback on what you like and what you don't like about the way the Symposium goes
this year, and any recommendations you can make to help us strengthen the next Symposium will be greatly appreciated.
This year, we have added a Scientific Awards Committee. Some of you have had an opportunity to see the certificates and
a couple of dramatic eagle trophies as you came in.
We're privileged to have a number of leaders in the area of environmental measurement here at this Symposium. We'll
share their views and the views of their organizations about current and future applications of field screening and field
analytical technologies.
Someone once said, "Llew, why don't you write a poem?" It was a long time ago, but it was never quite forgotten, so if
you'll bear with me:
The Second International has finally arrived.
Your program will suggest to you just how hard we have strived
To bring to you the latest scoops and field technology
That's based on engineering, chemistry, biology.
Besides the platform papers that I know you'll want to hear,
The poster session entrees will just knock you on your ear.
And for the technophilic crowd, exhibitors galore
Will tell you all about their products, and a wee bit more.
We'll try to slake your appetite for the newest and the best,
And give you opportunities to mingle with the rest,
To share and learn, to see and show our efforts, may they yield
Accelerated products we can take into the field.
The future of our measurements, if any bets I'd hedge,
Resides in these technologies; we're on the leading edge.
So welcome to this overview of all those things to come.
And welcome to Las Vegas, where the Rebs are number one.
Llewellyn R. Williams
Symposium Chairperson
-------
KEYNOTE ADDRESS
ANALYTICAL ISSUES IN THE U.S. EPA SUPERFUND PROGRAM
Larry Reed, U.S. Environmental Protection Agency, Director Hazardous Site Evaluation Division, Office of Emergency and Remedial Response
I am glad to be invited to this Field Screening Symposium because our
Superfund office in EPA is such a primary user and booster of the
technology. We want to get more and more use out of the technology
and the field analytic methods. It's always good to be here and
participate. We've been very strong participants and boosters of the
EMSL-Las Vegas operation in field methods, and we'll continue to do
so for years to come.
I wanted to begin by setting the backdrop of where we are in the
Hazardous Waste Superfund Program, and then discuss the vital role
that field methods plays in that program.
We now have a Superfund Program that encompasses a full pipeline:
from the discovery of sites (1,500 to 2,000 new potential sites are
identified to us each year), to the listing of approximately one
hundred sites a year on the National Priorities List, through the
remedial action and remedial design process. The whole pipeline is in
complete use, and now more than ever, we're putting higher and
higher emphasis on focusing on the worst sites first throughout the
program. This obviously puts a premium on having the best and the
quickest environmental data available to evaluate and clean up sites
in the program.
The second aspect of where we are now in the Superfund Program that
bears on this Symposium is that we have just (in December) promulgated
revisions to the Hazard Ranking System (HRS) which will become
effective March 12, 1991. This will expand the types of sites that we
will be looking at and screening. We have added new concerns: a
greater emphasis on ecological concerns, incorporation of direct expo-
sure to soils, and more emphasis on sediments. We are very proud of
this rule, and we will be gathering a lot more information for screening
sites for future National Priorities List updates.
Also, we have finalized the last of our proposed sites. In the Federal
Register, we proposed ten sites under the old HRS. All sites have
therefore been finalized. Eleven hundred and eighty-nine (1,189)
final sites are now on the National Priorities List. We will be hitting
the ground running, listing new sites as quickly as possible under the
new Hazard Ranking System, for the rest of the Superfund Program.
The focus of our Superfund Program has been on enforcement first,
integrating the use of the fund with the use of our enforcement
authorities. This is focusing more and more on a consistent use of
analytic methods, including both field methods and fixed lab meth-
ods, across the program, and on appropriate QA procedures across all the
different types of sites, regardless of whether they are enforcement
lead, state lead or fund lead.
The final background point as far as field methods is concerned is our
adoption and phased incorporation of the principles of Total Quality
Management (TQM) into the Superfund Program. We began with
pilot projects last year, designed to embrace the principles of TQM.
The basic concepts of this program include:
• continual improvement in the process
• identifying our clients (ensuring that you know who they are,
since there are various levels of clients and different relationships
with those clients)
• working with our clients
• identifying and addressing the worst problems first
• gathering data for informed decision making
These are the kinds of principles we're trying to address in all the
aspects of our program. So more and more, we'll be working with you,
and participating in this kind of audience where, at various times,
either you're our clients or we're your clients. This type of gathering
enforces that interaction among the various communities that deal
with field screening.
I now want to discuss some specific points on field analysis.
Howard Fribush of my staff went out and visited all 10 of our regional
offices to determine the state of the use of field screening in
our very decentralized program. We found field screening has many
purposes, including determining worker safety requirements, particu-
larly for our removal program and for the site assessment program,
which lists sites.
Field screening obviously provides immediate feedback to the site
assessors, to the samplers and to our clean-up contractors. That, again,
is a strong benefit that we see in encouraging the use of field methods
to continually improve and streamline our Superfund process.
An important application of field screening methods is how they can
be used to shorten the time that it takes to evaluate the risk posed at a
site. This can also be used to generate data to determine the appropri-
ate technologies to be used for clean-up and what levels of clean-up
are appropriate. These applications are evident looking at Regional
history—field screening technologies have been used in the Superfund
Program, basically from its inception. We have seen advances in field
instruments, and this is making on-site analysis at Superfund sites
much more desirable.
As part of Howard's Regional visits, the different aspects of our
Superfund program were polled. The arms of the Program can be
divided into three functional aspects: 1) the Site Assessment Program,
the front end of the Program that generates data needed to evaluate the
site and whether it needs to be included on the National Priorities List,
2) the Remedial Program, where once a site is on National Priorities
List the actual clean-up process is initiated, and 3) the Removal
Program, which can be called out at any time to clean up immediate
health threats at sites. We found a split among those different parts of
the program. About ten percent of the data being gathered for the Site
Assessment Program was from field screening. Similarly, for the
Remedial Program, about ten percent of the data gathered was with
field screening methods (field analytic methods). The biggest user
proportionally was our Removal Program: slightly over a third of the
data gathered for the Removal Program came from field screening
methods. What we'd like to do is, working through symposia such as
this, try to encourage and increase that use to even higher levels as
appropriate.
The role that we play in the Office of Emergency and Remedial
Response, and my division, the Hazardous Site Evaluation Division,
is basically providing guidance for this on-site analysis. As I men-
tioned before, when you're dealing with a decentralized program, you
always have to encourage consistency of methods among sites, but
you also have to deal with the uniqueness of each site. We are bridging
the gap by coming up with guidance to provide to the Regional offices
on the use of methods.
-------
As a follow-up to the Regional review, we are evaluating the advantages
of field analysis in the Superfund Program and building on that to
expand its uses as appropriate in the future. Our future guidance
documents will address evaluating when to use it and then how to use
it. We are also trying to get consistent terminology in our guidance.
Screening technology, portable methods, fieldable methods, mobile
methods, all of these terms have been used. We've been trying in our
field methods catalog to come up with some consistency so that even those
unfamiliar with these various technologies can become familiar with
the basic terminology.
Several major efforts are underway in the Superfund Program. Within
the last year we established our first field methods management
forum. The focus of that field methods management forum was to get
managers involved, not just those that have to go out in the field and
implement the technology, but the managers who would be the ones
to determine what proportion of overall analytic support is necessary
for field methods versus fixed labs. The first meeting of this group was
in June, 1990. We had seven regions, headquarters offices, and the
EMSL-Las Vegas group at this session. The objective of this effort
was to get management involved and to focus on the blockages
preventing us from getting field methods used to a greater extent.
Future topics for meetings include: 1) regional administration of field
screening (where does it go, who is in charge?); 2) collecting the
method and instrument performance information, and 3) trying to get
this data out to the field in the best usable form to those familiar with
the technologies, their usage, their limitations and their strengths.
There's another effort underway — the Field Methods Work Group.
This group contains the worker bees, the people that have to go out and
get the job done. This group has been meeting since 1987. Their initial
focus was looking at things at the very basic level of data quality
objectives—how to define them in order to get them in a more useful
format understood by both the chemists and the field engineers. In
July, 1990, they met and focused on the catalog of field methods and
the need for a new version. The Field Methods Screening Catalog
User's Guide came out three or four years ago, and we realized the
limitations of it. At the time we wanted to get information on some 30
different screening methods. Obviously the next stage is updating
this, adding more methods and more data that will be useful to our
field offices. We expect to release this update of the Field Methods
Catalog some time this year. It will triple the number of methods that
are contained in the original catalog to about 100.
We are obviously going to be looking at both QA and QC of field
methods. The basic question is the need for Regional consistency.
What is appropriate QA for a field lab? How can we get that guidance
out? What are the appropriate QC requirements for field methods? We
need to get that information out to the field again, by bringing in the
user community.
Another issue in encouraging consistency and appropriate
use of the technology is, obviously, training. We have been working to come up
with a training program on field methods with the regions and EMSL-
Las Vegas. We've even gotten one of our regional offices, in a true
bureaucratic gesture, to loan some of their field equipment to
EMSL-Las Vegas to use as a basis for training programs. We hope to
have this training program developed this year. Obviously the level
we'll have to look at then is how much and what level of training do
we need to provide out there? How much training should be done for
the people using the field methods? At what levels should it be
presented, and how much should be mandatory to ensure and promote
consistency?
There are several basic field method issues that I haven't mentioned,
but that I'd like to touch on before closing: how do we capture
performance information on methods and the instruments? The state-
of-the-art is obviously rapidly changing. How do we capture that
information given, among other things, federal regulations about how
much we can provide in working with industry? How do we capture
that performance data and get it to the field in the most usable
form? We have 100 methods that we have looked at for the upcoming
catalog update. What type of data do the people want, and what type
of format? How much? Do they want extensive data, shortened data
or very abstract data? What type of data will encourage use in the
field?
The final point, and one that I know this Symposium will be working
on, is introducing improved methods, particularly to the Superfund
Program. How do we get the new methods out? What are the incentive
systems? How do we call out and identify the best methods so that they
are being selected for use in the field?
In closing, there are a lot of efforts we have underway to encourage
a maximal, appropriate use of field screening methods. This sympo-
sium is a key one. I mentioned the Field Methods Management Forum
and the Field Methods Work Group, two continuing efforts to provide
direction and recommendations for additional guidance for consis-
tency and use of technology in the field. Field screening methods are a
big field. It is a continuing, emerging field that will continue to
command national attention. We in the Superfund Program are great
boosters and great users of it. I speak as both a provider, working with
EMSL-Las Vegas and their services, and a user, working on risk
assessments and the site assessment program. I encourage you in your
pursuits to increase the use of field screening methods.
-------
OVERVIEW OF DOE'S FIELD SCREENING
TECHNOLOGY DEVELOPMENT ACTIVITIES
by
C.W. Frank, T.D. Anderson, C.R. Cooley, K.E. Hain, and S.C.T. Lien
Office of Technology Development
U.S. Department of Energy
Washington, DC 20874
R.L. Snipes
Support Contractor Office
Martin Marietta Energy Systems
Oak Ridge, Tennessee 37831
M.D. Erickson
Research and Development Program Coordination Office
Chemical Technical Division
Argonne National Laboratory
Argonne, Illinois 60439
ABSTRACT
The Department of Energy (DOE) has recently created
the Office of Environmental Restoration and Waste
Management, into which it has consolidated its environmental
restoration and waste management activities.
Within this new organization, the Office of Technology
Development (OTD) is responsible for research,
development, demonstration, testing, and evaluation
(RDDT&E) activities aimed at meeting DOE cleanup
goals, while minimizing cost and risk. Site
characterization using traditional drilling, sampling, and
analytical methods comprises a significant part of the
environmental restoration efforts in terms of both cost
and time to accomplish. It can also be invasive and
create additional pathways for spread of contaminants.
Consequently, DOE is focusing on site characterization
as one of the areas in which significant technological
advances are possible which will decrease cost, reduce
risk, and shorten schedules for achieving restoration
goals. DOE is investing considerably in R&D and
demonstration activities which will improve the abilities
to screen chemical, radiological, and physical parameters
in the field. This paper presents an overview of the
program objectives and status and reviews some of the
projects which are currently underway in the area.
(Work supported by the U.S. Department of Energy under
contract W-31-109-Eng-38.)
INTRODUCTION
The Department of Energy (DOE) has recently
consolidated its environmental restoration and waste
management activities into the Office of Environmental
Restoration and Waste Management, formed by
Secretary James Watkins in early 1989. Within that
-------
new organization, the Office of Technology
Development (OTD) oversees DOE's Technology
Development Program, whose objective is to establish
and maintain a national program for applied research,
development, demonstration, testing, and evaluation
(RDDT&E). These activities will pursue technologies
that will enable DOE to meet its 30-year compliance
and cleanup goals safely, efficiently, and effectively.(1)
The first step in environmental restoration is site and
contaminant characterization. Characterization of the
current distribution of contaminants and the
geohydrological factors that promote and control their
spread will provide the starting point for determining
what must be remediated and for selecting and
designing remediation methods.
STATUS OF OTD ACTIVITIES
A cross section of the technology development activities
which have been or are being conducted are described
below. Space limitations preclude describing all
activities in this area. Some of these activities will be
described in more detail by the principal investigators at
this conference.
DUVAS Fiberscope for in Situ Groundwater
Monitoring. Because of its proven ability to detect
compounds such as benzene and its derivatives, which
are common solvents and components of fuels,
derivative ultraviolet absorption spectrometry (DUVAS)
is being developed as a rapid and reliable method for in
situ detection of aromatic pollutants. To date, a
prototype DUVAS fiberscope has been constructed and
tested for measuring spatial and temporal distribution of
organics in groundwater. An important component of
the fiberscope is a rugged, down-well probe with a
unique "detector-in-head" design that increases the
maximum depth of subsurface detection. Results
comparable to those obtained with a conventional
laboratory spectrometer have been achieved with optical
fiber lengths up to 50 meters. The portable DUVAS
fiberscope will provide faster, more reliable, and less
expensive measurement of subsurface groundwater
contamination. For further information, contact the
Principal Investigators, J.W. Haas III and R.B.
Gammage, Oak Ridge National Laboratory, P.O. Box
2008, Oak Ridge, TN 37831-6113. Phone: (615) 574-
5042 (Haas), (615) 574-6256 (Gammage).
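[Editor's illustrative note] As a rough sketch of the derivative-spectroscopy principle behind DUVAS, and not the instrument's actual software, the Python example below shows how a second derivative suppresses a broad absorbance background while emphasizing narrow aromatic bands. The band positions, widths, and amplitudes are synthetic assumptions chosen only for demonstration.

# Minimal sketch (not the DUVAS instrument software): taking the second
# derivative of a UV absorbance spectrum sharpens narrow aromatic bands
# (e.g., benzene-like fine structure near 254 nm) relative to a broad,
# slowly varying background. All spectral shapes below are synthetic.
import numpy as np

wavelengths = np.linspace(230.0, 290.0, 601)   # nm, 0.1 nm steps

def gaussian(x, center, width, height):
    return height * np.exp(-0.5 * ((x - center) / width) ** 2)

# Synthetic spectrum: broad background plus narrow aromatic-like bands
background = gaussian(wavelengths, 260.0, 40.0, 0.50)
aromatic_bands = sum(gaussian(wavelengths, c, 0.8, 0.05) for c in (248.0, 254.0, 260.5))
absorbance = background + aromatic_bands

# Second-derivative spectrum: the broad background is largely suppressed,
# leaving features centered on the narrow bands.
step = wavelengths[1] - wavelengths[0]
second_derivative = np.gradient(np.gradient(absorbance, step), step)

for center in (248.0, 254.0, 260.5):
    idx = np.argmin(np.abs(wavelengths - center))
    print(f"{center:6.1f} nm  A = {absorbance[idx]:.3f}  d2A/dlambda2 = {second_derivative[idx]:+.4f}")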
Advances in Surface-Enhanced Raman Spectroscopy for
Applications in Real-Time Subsurface Monitoring.
Because of its excellent selectivity, surface-enhanced
Raman scattering (SERS) has attracted considerable
attention as a potentially powerful analytical tool for
detecting and screening trace-level contaminants in
groundwater. The narrow Raman bands hold promise
for simplifying the identification of individual
components in complex mixtures. An inexpensive
computer-controlled portable spectrometer system
coupled to a fiber-optic probe is being developed for
rapid on-site and in situ determination of organic
groundwater contamination. Critical issues pertaining to
durability, repeatability, sensitivity, selectivity, and
universality are being examined, while means for
improvement in these areas are being tested. The
feasibility of utilizing SERS under harsh conditions has
been demonstrated. Substrates have been tailored for
maximum efficiency at particular excitation wavelengths
as a means for increasing the sensitivity of the
technique. Ongoing efforts have refined the state-of-the-
art Raman optrode design and have shown the feasibility
of producing a simple, inexpensive instrument for field
applications. As the technique approaches maturity,
SERS will provide powerful screening capabilities for
numerous organic and inorganic materials. It promises
rapid, reproducible, quantitative detection of trace-level
contaminants in aqueous solutions. For further
information, contact the Principal Investigator, Eric A.
Wachter, Oak Ridge National Laboratory, Health and
Safety Research Division, P.O. Box 2008, Oak Ridge,
TN 37831. Phone: (615) 574-6248 (FTS 624-6248).
Fiber Optic Raman Spectrograph for in Situ
Environmental Monitoring. A small (suitcase-sized)
surface-enhanced Raman spectrometer (SERS) is being
developed to use in field screening for a wide variety of
-------
organic and metallic pollutants in ground and surface
waters. The focus of this contract is twofold: (1) to
demonstrate a small spectrograph with high resolution
(3500 cm⁻¹) and (2)
to demonstrate a micro-optical SERS probe head with
substrates engineered to detect certain critical pollutants
at ppm to ppb levels. The spectrograph will have no
moving parts and will employ fiber-optic sampling, an
ultracompact solid-state diode laser for Raman
excitation, a high-order diffraction grating, holographic
optical filters, and a state-of-the-art charge-coupled
device (CCD) detector. The probe head will be
contained at the sampling end of a fiber-optic probe
over 50 meters long inserted into a well less than 5
centimeters in diameter. The system will identify trace
contaminants in groundwater in real time.
This technique will increase the efficiency of
environmental characterization and mapping, reduce
costs of field sampling and ex situ laboratory analysis,
reduce personnel exposure, and provide site
characterization information. For further information,
contact the Principal Investigator, Michael Carraba, EIC
Laboratories, Inc., 111 Downey St., Norwood, MA
02062, Phone: (617) 769-9540.
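[Editor's illustrative note] The spectrograph coverage quoted above is expressed in wavenumbers. As a hedged illustration of what that implies for the optics, the short Python sketch below converts an assumed Raman shift to the scattered wavelength for an assumed 785 nm diode-laser excitation line; the laser wavelength and shifts are illustrative values, not specifications from this project.

# Illustrative conversion only (values are assumptions, not from the abstract):
# a Raman shift in wavenumbers relates the excitation and scattered wavelengths by
#   shift [cm^-1] = 1e7 / lambda_laser[nm] - 1e7 / lambda_scattered[nm]

def raman_scattered_wavelength_nm(laser_nm: float, shift_cm1: float) -> float:
    """Wavelength (nm) of Stokes-shifted light for a given Raman shift."""
    return 1.0 / (1.0 / laser_nm - shift_cm1 * 1e-7)

laser_nm = 785.0  # assumed near-infrared diode laser line, for illustration
for shift in (500.0, 1500.0, 3500.0):  # cm^-1; 3500 cm^-1 spans the range quoted above
    scattered = raman_scattered_wavelength_nm(laser_nm, shift)
    print(f"shift {shift:6.0f} cm^-1 -> scattered light at {scattered:.1f} nm")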
In Situ Detection of Organics. The long-term objective
of this research is to develop a fiber-optic-based system
for monitoring contaminant species in groundwater and
to demonstrate it on contaminated groundwater at
Lawrence Livermore National Laboratory (LLNL).
These efforts require the development of optical
indicator reagents that are compatible with fiber-optic
chemical sensors (optrodes). Development of optrodes
for ppb-level detection of trichloroethylene (TCE) and
chloroform (CHCl3) is complete and has moved into the
demonstration phase. Carbon tetrachloride (CCl4) and
perchloroethylene (PCE) optrodes are currently being
developed.
The fiber-optic approach has the potential of providing
less expensive measurements of groundwater
contaminants. Also, the reagent indicators and the
chemistry developed in the process of developing the
optrodes will "spin off" into other applications. For
example, one chemistry that was developed serves as the
basis for a proposed TCE remediation technique, the
"TCE sponge". Finally, it should be pointed out that
these simple indicators are new and could be used in
other types of contaminant assays. For further
information, contact the Principal Investigator, Mike
Angel, Lawrence Livermore National Laboratory,
Environmental Sciences Division, P.O. Box 808, L-524,
Livermore, CA 94550. Phone: (415) 423-0375 (FTS
543-0375).
Optical Fiber Photothermal Spectroscopies for in Situ
Monitoring and Characterization. Optical fiber sensors
using thermal lens and photoacoustic spectroscopies for
remote, on-site, real-time optical absorption
measurements of chemical species in groundwater
environments are being developed. Optical fiber sensors
based on photothermal spectroscopies are ideal for
ultrasensitive optical absorption measurements of
actinides and other chemical species in aqueous
environments. An optical absorption spectrum provides
qualitative and quantitative analysis of the species
present in the aqueous environment. The spectra can
also provide complexation information for actinides,
which is important for migration behavior. These
photothermal sensors rely on tunable wavelength for
selectivity and therefore do not require immobilized
agents at the distal fiber end (in the sample area).
Research has demonstrated two optical fiber
photothermal sensors with excellent sensitivity for rare
earth and actinide ions in aqueous solutions. A remote
photoacoustic sensor was demonstrated using a 100-
meter fiber to deliver the tunable laser beam to a glove
box located in a separate room from the laser. Acoustic
signals were returned to the instrument lab via coaxial
cables. An all-fiber thermal lens sensor was
demonstrated using a fiber to deliver the laser light to a
remote sample solution and a second fiber, with a
photodiode attached to the distal end, to measure optical
absorption; electrical cables were not required at the
sample area. For further information, contact the
Principal Investigators, Richard Russo, Lawrence
-------
Berkeley Laboratory, Applied Science Division, M.S.90-
2024, Berkeley, CA 94720, Phone: (415) 486-4258 (FTS
452-4258); and Robert Silva, Lawrence Livermore
National Laboratory, Nuclear Chemistry Division, L-
396, Livermore, CA 94550. Phone: (415) 423-9798
(FTS 543-9798).
Field Measurement of Groundwater Contamination by
Ion Trap Mass Spectrometry. A transportable ion trap
mass spectrometer for the in situ characterization of soil,
air, or water at chemical waste sites is being developed
and demonstrated. The instrument will have a turnkey
operating system for use by minimally trained
personnel. The approach uses modular design to
produce an instrument that can be readily modified and
repaired in the field. Specifically, this project will
develop a daughter microprocessor system to control
ancillary hardware for sampling and separation and will
develop new software, write macros, and modify
existing software for semi-automated computer control
of the instrument.
The instrument consists of specialized sampling modules
for air, soil, or water samples; a separations module
containing sorbent traps and a megabore capillary
chromatography column; and a detection module, the
Finnigan Ion Trap Detector. Soil or water samples are
purged with helium and the evolved organics are
collected on sorbent traps. A sampling pump is
incorporated for air samples. The full analysis sequence
required 10 minutes. The Finnigan software was
modified through the addition of macros and Forth
routines. The analytical procedure can be selected from
a menu from the instrument's data system. Sampling,
calibration, analysis, and data reduction proceed under
computer control.
The detection limit for TCE in water is approximately
20 picograms. Mass spectral identification of 50
picograms of TCE is possible by library comparison of
spectra. A linear calibration curve can be obtained from
10 ppt to 10 ppm organics in water.
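[Editor's illustrative note] As a hedged sketch of how a calibration spanning 10 ppt to 10 ppm might be fit and applied, the Python example below fits a power law in log-log space so that every decade of the range is comparably weighted. The instrument responses are synthetic; a real curve would be built from standards run on the ion trap itself.

# Hedged sketch of a calibration curve spanning roughly 10 ppt to 10 ppm.
# The responses are synthetic, not data from this project.
import numpy as np

conc_ppt = np.array([10.0, 100.0, 1e3, 1e4, 1e5, 1e6, 1e7])   # 10 ppt ... 10 ppm
noise = np.random.default_rng(0).normal(0.0, 0.03, conc_ppt.size)
response = 3.2 * conc_ppt * (1.0 + noise)                      # assumed linear detector

# Fit in log-log space so each decade contributes comparably to the fit
slope, intercept = np.polyfit(np.log10(conc_ppt), np.log10(response), 1)

def concentration_from_response(counts: float) -> float:
    """Invert the fitted power-law calibration to estimate concentration (ppt)."""
    return 10 ** ((np.log10(counts) - intercept) / slope)

unknown_counts = 2.5e4
print(f"slope = {slope:.3f} (ideally 1.0 for a linear detector)")
print(f"estimated concentration: {concentration_from_response(unknown_counts):.0f} ppt")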
Although transportable mass spectrometers are
commercially available for environmental analyses in
the field, the transportable ion trap technology described
here provides several additional benefits, including low
cost. The instrument can be assembled for a parts cost
of about $75K. For further information, contact the
Principal Investigator, Philip H. Hemberger, Los Alamos
National Laboratory, Analytical Chemistry Group, Mail
Stop G740, Los Alamos, NM 87545. Phone: (505) 667-
7736 (FTS 843-7236).
Direct Sampling Mass Spectrometry. Rapid analytical
technology based upon direct sampling mass
spectrometry is being developed to determine trace
organic pollutants in the environment. This project is
jointly sponsored by DOE, the Department of the Army,
and EPA. Closely related work is sponsored by the
National Cancer Institute (NCI) for analyses of
physiological fluids. Oak Ridge National Laboratory
(ORNL) has developed sampling, sample interface, and
ionization chemistry techniques that are first being
combined with commercial mass spectrometers to
provide rapid laboratory-based methods. Knowledge
gained is used to develop instrumentation optimized for
on-site analysis. Field-sampling and field-sample-
processing methods are being developed to support the
mass spectrometric technologies. The general approach
involves a systematic comparison of the developed
methods using accepted EPA methods to analyze
organics in water, soil, air, and waste. Ion trap mass
spectrometry (ITMS) and glow discharge ionization
quadrupole mass spectrometry (GDMS) are being
investigated. Both GDMS and ITMS are applicable to
the quantitative determination of ppb concentrations of
organics in water and in soil with analysis times of five
minutes or less. This is achieved by purging the water
or soil-water slurry with air or helium and routing the
purge stream directly into the mass spectrometer. Less
volatile organics may be similarly determined by
collection on a suitable solid sorbent followed by
thermal desorption. The method has thus far been
demonstrated for the quantitative determination of
benzene, trichloroethylene, and tetrachloroethylene.
Applicability to semivolatiles has been demonstrated by
-------
the successful determination of nicotine and cotinine in
urine for the NCI and for the determination of military
chemical agents in air for the Army. A method is under
development for the simultaneous collection of samples
for subsequent confirmatory analysis in those cases
where interferences cannot be distinguished by mass
spectrometry or by mass spectrometry/mass
spectrometry alone.
Successful development and validation can reduce costs
and increase sample throughput by up to 90% as
compared to current regulatory analytical methods.
Field-versions of the technology will allow real-time
monitoring of remedial action progress, monitoring of
associated occupational exposure, and screening of
samples prior to shipment to the laboratory for
regulatory analyses. For further information, contact the
Principal Investigators, M.B. Wise, M.R. Guerin, and
M.V. Buchanan, Oak Ridge National Laboratory, P.O.
Box 2008, Bldg. 4500-S, MS-6120, Oak Ridge, TN
37831-6120. Phone (615) 574-4862 (FTS 624-4862)
(Mike Guerin).
Assessment of Subsurface Volatile Organic Compounds
(VOCs) Using Chemical Microsensor Arrays. A new
monitoring instrument that utilizes an array of coated
surface-acoustic-wave (SAW) microsensors is being
developed. Pattern recognition analysis of the
multidimensional sensor output permits determination of
the identity and quantity of target vapors from
different chemical classes typically found in
contaminated soils and groundwater. The small size,
low cost, low power requirements, high sensitivity, and
large dynamic range of the instrument will facilitate its
use in a variety of applications related to site assessment
and process control.
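The pattern recognition step can be pictured as a simple nearest-pattern
comparison against a small library of array responses (a hedged sketch;
the coatings, response values, and classification rule are invented for
illustration and do not represent the algorithm used in this project):

    import numpy as np

    # Hypothetical normalized response patterns of a 4-element coated SAW array
    # to three vapor classes (each pattern sums to 1; values are illustrative only).
    library = {
        "chlorinated solvent": np.array([0.50, 0.20, 0.20, 0.10]),
        "aromatic":            np.array([0.15, 0.55, 0.20, 0.10]),
        "aliphatic fuel":      np.array([0.10, 0.20, 0.25, 0.45]),
    }

    def classify(raw_response):
        """Identify the vapor class whose library pattern best matches the
        shape of the measured array response, and report the total response."""
        pattern = raw_response / raw_response.sum()          # shape only
        best = min(library, key=lambda k: np.linalg.norm(pattern - library[k]))
        return best, raw_response.sum()                      # class, total shift (Hz)

    measured = np.array([120.0, 45.0, 50.0, 22.0])           # frequency shifts in Hz
    vapor, total_shift = classify(measured)
    print(vapor, f"total shift = {total_shift:.0f} Hz")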
The project addresses some fundamental questions: (1)
what is the performance of the SAW microsensor array
instrument in applications relevant to site assessment
and restoration, namely, monitoring volatile organic
chemicals (VOCs) in high humidity environments, (2)
how are the measurements provided by this instrument
related to soil contaminant levels, and (3) how can they
best be utilized in site assessment and restoration
activities? A series of controlled laboratory experiments
will be performed to address these questions.
The results of this research will demonstrate that
microsensor array instruments can provide rapid and
reliable compound-specific concentrations of volatile
organics in soil vapor. The low projected cost of
manufacture (less than $1000 in production quantities),
the capabilities of continuous, unattended operation, and
the ability to transmit data from remote locations make
the SAW sensor-based monitors a cost-effective and
desirable monitoring approach. For further information,
contact the Principal Investigator, Stuart Batterman,
University of Michigan, Department of Environment &
Industrial Health, 2505 School of Public Health, Ann
Arbor, MI 48109-2029. Phone: (313) 763-2417.
Thin-Layer Detectors: NO2 Detection with Polystyrene
Thin Layers. A solid-state sensor that can be used to
detect NO2 without interference by other species is
being developed. The device incorporates an
interdigitated electrode with a polystyrene thin layer and
operates by simply monitoring the change in
conductance of this thin film as a function of NO2
exposure. The film is essentially nonconducting in the
absence of NO2; upon exposure to NO2 gas, the conductivity
of this highly insulating material increases by several
orders of magnitude, to 10^-8 to 10^-9 S. No
interference from ambient gases or water vapor has been
observed, and the effect is very specific to NO2. Upon
elimination of the NO2 gas, the device becomes
completely insulating again, all effects occurring at
ambient temperature and pressure.
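The sensing principle reduces to tracking the film's conductance and
flagging the large increase that accompanies NO2 exposure. A minimal
monitoring sketch follows; the bias voltage, alarm threshold, and
current readings are assumptions for illustration only:

    # Conductance G = I / V for the interdigitated electrode; an increase of
    # several orders of magnitude above the insulating baseline signals NO2.
    BIAS_V = 5.0          # applied bias, volts (assumed)
    THRESHOLD_S = 1e-10   # alarm level in siemens (assumed, below the ~1e-8 S exposed state)

    def conductance(current_a, bias_v=BIAS_V):
        return current_a / bias_v

    # Illustrative current readings (amperes) during an exposure sequence.
    for i_meas in [2e-14, 3e-14, 4e-9, 6e-9, 5e-14]:
        g = conductance(i_meas)
        state = "NO2 detected" if g > THRESHOLD_S else "baseline"
        print(f"G = {g:.1e} S -> {state}")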
The mechanism of the conduction within the film
remains unclear, although the level of conductivity is
related to the amount of residual benzene solvent within
the film. Thus, as the benzene evaporates from the
film, the change in conductivity of the film upon NO2
exposure diminishes dramatically. This effect appears
to be related to a stabilization of NO2 dimer by benzene
within the film. The increased conductivity of the film
-------
in the presence of benzene is attributed to the well-
known self-ionization of N2O4 to NO+ + NO3-. For
further information contact the Principal Investigator
Stephen F. Agnew, Los Alamos National Laboratory,
Los Alamos, NM 87545. Phone: (505) 665-1764 (FTS
843-1764).
Antibody-Based Fiberoptics Sensors For in Situ
Monitoring. Sensitive and selective chemical sensors
for in situ monitoring of hazardous compounds in
complex samples are being developed. Special focus is
on a unique fluoroimmuno-sensor (FIS) which derives
its analytical selectivity through the specificity of
antibody-antigen reactions. Antibodies are immobilized
at the terminus of a fiberoptic within the FIS for use in
in situ fluorescence assays under field conditions. High
sensitivity is provided by laser excitation and optical
detection techniques. The technique can detect
femtomoles (10^-15 M) of the carcinogen benzo(a)pyrene
and other chemicals of environmental interest. For
further information, contact the Principal Investigators,
T. Vo-Dinh and G.D. Griffin, Oak Ridge National
Laboratory, P.O. Box 1008, MS-6101, Oak Ridge, TN
37831-6101. Phone: (615) 574-6249 (Vo-Dinh) and
(615) 576-2713 (Griffin).
Underground Imaging for Site Characterization and
Clean Up Monitoring. State-of-the-art image
reconstruction techniques (tomography) can be used to
characterize the geology and hydrology of hazardous
waste sites. These methods extend spatial information
of geologic structure and hydrology between boreholes.
Both two- and three-dimensional imaging can be done
using these techniques. High-frequency electromagnetic
(HFEM) tomography is a proven technology for imaging
water content with high spatial resolution (submeter
scale) in small-scale geologic applications (ten meters).
Electrical resistance tomography (ERT) is a newer
technology which has been used in the field with
moderate resolution (meters) on larger-scale images
(tens to hundreds of meters).
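Both techniques rest on the same kind of image reconstruction. The
sketch below illustrates a generic algebraic reconstruction (Kaczmarz)
update for cross-borehole data; the ray geometry and measurements are
synthetic assumptions, not HFEM or ERT field data:

    import numpy as np

    def art_reconstruct(A, d, n_sweeps=50, relax=0.5):
        """Algebraic reconstruction: A[i, j] is the length of ray i in pixel j,
        d[i] is the measured projection (e.g., travel time or attenuation) for
        ray i, and the result is the estimated property in each pixel."""
        x = np.zeros(A.shape[1])
        for _ in range(n_sweeps):
            for i in range(A.shape[0]):
                a = A[i]
                denom = a @ a
                if denom > 0:
                    x += relax * (d[i] - a @ x) / denom * a   # Kaczmarz row update
        return x

    # Tiny 2x2 pixel example with four crossing rays (assumed geometry).
    A = np.array([[1.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0, 1.0]])
    true_model = np.array([1.0, 2.0, 1.5, 3.0])
    d = A @ true_model
    print(art_reconstruct(A, d).round(2))   # approximate image of the four pixels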
Characterization of the subsurface geology and
hydrology is needed to select the most appropriate
remediation alternative and to demonstrate regulatory
compliance. Design of remedial actions must be based
upon knowledge of the often anisotropic and
heterogeneous nature of the subsurface environment and
the natural processes that act upon the waste, as well as
upon protective barriers. Groundwater flow strongly
influences contaminant mobilization and transport and
geologic structure affects the flow of groundwater.
Current subsurface characterization techniques for
addressing these above problems depend heavily upon
drilled boreholes. Drilling is expensive and time
consuming and also creates conduits for contaminant
spread. A special need exists for three-dimensional
noninvasive subsurface characterization technologies.
For more information, contact the Principal Investigator,
William Daily, Lawrence Livermore National
Laboratory, P.O. Box 808, L-156, Livermore, CA
94550. Phone: (415) 422-8623 (FTS 532-8623).
Development of the SEAMIST Concept for Site
Characterization and Monitoring. This project is
developing the Science and Engineering Associates'
Membrane Instrumentation and Sampling Technique
(SEAMIST). The technique permits rapid emplacement
of instrumentation and sampling apparatus in a punched
or drilled hole. The objective of the technique is to
pneumatically emplace an impermeable membrane liner
carrying many instruments into a hole to provide
simultaneous access to the entire hole wall (e.g., many
measurement horizons per hole), elimination of
circulation of fluids within the hole, and isolation of
instruments at discrete locations between the hole wall
and the membrane. The membrane is emplaced by
eversion—it is rolled inside out and then everted using
air pressure. This causes minimal disturbance to the
hole because the assembly does not slide down as with
traditional rigid casings. Instruments such as fiber-optic
sensors, thermocouple psychrometers, gas- and liquid-
sampling systems, and other small instruments are easily
attached to the membrane and carried into the hole with
it.
Using this technique will save 50%-90% of the field
costs, as compared to current monitoring well practices.
10
-------
In addition, the technique is applicable to both vertical
and horizontal wells. For further information, contact
the Principal Investigator, Carl Keller, Science and
Engineering Associates, 612 Old Santa Fe Trail, Santa
Fe, NM 87501. Phone: (505) 646-5188.
Site Characterization and Analysis Penetrometer System
(SCAPS). DOE is working with the Department of
Defense on the further development and demonstration
of the SCAPS for use on DOE facilities. The SCAPS,
as developed by the Army Corps of Engineers
Waterways Experiment Station for the Army Toxic and
Hazardous Materials Agency, includes surface
geophysical equipment, survey and mapping equipment,
sensors for contaminant detection, and soil sampling
equipment. Computer systems have been integrated
with the SCAPS in order to provide data acquisition,
data processing, and 3-D visualization of site conditions.
The system is mounted on a uniquely-engineered truck
that provides protective work spaces to minimize worker
exposure to toxic chemicals. The truck also provides
equipment to seal each penetrometer hole with grout.
Real-time sensors that are currently available for
characterization work include those which can determine
the strength, electrical resistivity, and spectral properties
of soils. Two sensors successfully demonstrated to
detect contaminant plumes at DOD facilities are the soil
resistivity unit and a fiber optic contaminant sensor.
The primary advantage of the fiber-optic sensor over
resistivity measurements is that it is based on laser-
induced fluorescence; this, however, presents a problem
for contaminants such as TCE that do not fluoresce.
Colorimetry and absorption techniques, such as the
sensors being developed by Lawrence Livermore National
Laboratory and by FiberChem, are therefore tentatively
planned to be demonstrated in conjunction with the
penetrometer at the Savannah River integrated
demonstration in FY-91.
Additionally, samplers such as the "Terra Trog"
developed by the Army Corps of Engineers may be
tested in FY-91 at the Savannah River Site. For further
information, contact the Principal Investigator, Stafford
Cooper, Waterways Experiment Station, P.O. Box 631,
Vicksburg, MS 39181-0631. Phone: 601-634-2477.
Design, Manufacture, and Evaluation of a Hydraulically
Installed, Multi-Sampling Lysimeter. A new lysimeter
sampling device design, approximately 1 inch in
diameter, having multiple sampling zones and capable
of being hydraulically installed at a desired depth in the
vadose zone without drilling will be developed. This
lysimeter will be readily retrievable for reuse and will
provide an inexpensive monitoring technique in
comparison to installation of lysimeters into predrilled
holes. In this project, the hydraulically inserted
lysimeter will be designed and constructed. The effect
of hydraulic insertion on the operation of the lysimeter
will be investigated by comparing hydraulic insertion
with standard boring procedures. The lysimeter should
be commercialized within three years. This new design
is less disruptive to the subsurface, both during
installation and after removal, requiring only a 1-inch-
diameter hole vs. the 4-inch holes commonly drilled for
monitoring wells. Costs are estimated to be under 50%
of that to drill monitoring wells. This project is a
collaborative effort among Bladon International, Inc.,
Institute for Gas Technology, and Timco Manufacturing.
For further information contact the Principal
Investigator, Joe Scroppo, Bladon International, Inc.,
880 Lee Street, Des Plaines, IL 60018. Phone: (505)
883-3636.
Minimally Invasive Three-Dimensional Site
Characterization. Hardware and software are being
developed to permit data acquisition from three
minimally invasive measurement techniques—cone
penetrometer, synergistic electromagnetic mapping
technology and reflection seismology. The software will
permit rapid feedback, comparison, co-calibration, and
data analysis from the combined technology.
Simultaneous application of these three technologies
permits physical and electrical property measurements
to be used to cross-calibrate each data set. The early
acquisition of preliminary data allows field personnel
quickly to adapt their field study strategy to changes in
the perceived site conditions or contamination
distribution.
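One way to picture the cross-calibration step is a simple least-squares
mapping between two techniques' readings at co-located points, which can
then be applied where only one technique has coverage. The values and
linear form below are assumptions for illustration, not the project's
actual software:

    import numpy as np

    # Hypothetical co-located measurements: cone penetrometer tip resistance
    # (MPa) and an electromagnetic response (mS/m) at the same depths.
    cpt = np.array([2.1, 3.4, 5.0, 6.2, 8.0])
    em  = np.array([41.0, 35.5, 28.0, 24.5, 18.0])

    # Fit em ~ a*cpt + b so EM-only soundings can be expressed on the CPT scale.
    a, b = np.polyfit(cpt, em, 1)

    def em_to_cpt(em_value):
        """Invert the fitted relation to estimate an equivalent CPT reading."""
        return (em_value - b) / a

    print(f"fit: em = {a:.2f}*cpt + {b:.2f}")
    print(f"EM reading of 30 mS/m -> ~{em_to_cpt(30.0):.1f} MPa equivalent")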
11
-------
Costs will be reduced through rapid feedback of data to
field personnel, improved information quality, and the
lower cost of an integrated system. The minimally
invasive system reduces environmental impact and
reduces risk to field personnel. For further information
contact Principal Investigator, John Gibbons, Applied
Research Associates, Inc., 4300 San Mateo Blvd., N.E.,
Suite A220, Albuquerque, NM 87110. Phone: (505)
883-3636.
High Resolution Shear Wave Seismic Reflection
Surveying for Hydrogeological Investigation. This
technology will enhance the ability to directly delineate
aquifers in the characterization and sensing of geologic
and hydrogeologic features. The project will extend the
state-of-the-art of shallow subsurface hydro-geological
characterizations by means of high resolution shear (S)
wave seismic reflection profiling. High resolution
seismic reflection profiling using conventional
compressional (P) wave technology has evolved over the
past ten years to the point where this technique has
become a major component of numerous environmental
investigations. Extension of the existing technology to
include S-wave reflections has the potential for greatly
enhancing the data which can be extracted from the
subsurface. Unlike a P-wave, an S-wave will not travel
through a purely liquid medium, hence its advantage
over current P-wave techniques.
Conventional high-resolution seismic reflection profiling
has proven cost-effective for environmental assessment
by reducing the number of holes and the cost of boring.
S-wave reflection technology will enhance the
information content of the seismic reflection technique
and improve the cost-effectiveness of the technique. For
further information contact the Principal Investigator,
William Johnson, Paul C. Rizzo Associates, Inc., 300
Oxford Dr., Monroeville, PA 15146. Phone: (412) 856-
9700.
Field Measurements for the Hydrology and Radionuclide
Migration Program (HRMP) at the Nevada Test Site.
The HRMP was begun in 1974 for the purpose of
determining the potential for migration of radionuclides
from underground test areas. HRMP is a multi-agency
research project and is coordinated by the Nevada
Operations Office of DOE. The participants are
Lawrence Livermore National Laboratory, Los Alamos
National Laboratory, Desert Research Institute, and the
U.S. Geological Survey. The present goals of the
program are to learn more about the groundwater rates
and directions of flow on the Nevada test site (NTS),
which is located approximately 80 miles northwest of
Las Vegas, in regional and local systems, to develop
mathematical models of the flow systems, to determine
the effects of nuclear tests on the systems, and to
measure the migration rates of selected radionuclides
under various conditions.
Transport mechanisms for radionuclides from
underground nuclear detonations are studied by
sampling both the contaminated cavity water and
groundwater pumped from the surrounding formation.
Radioactivity in water greater than 9-cavity-radii
distance from the detonation point has been measured
without stressing or pumping the aquifer. A plume of
radioactivity which is being rapidly transported by the
local groundwater has been intercepted. Micro- and
ultrafiltration studies on this groundwater have shown
that radionuclides can be present and mobile in
groundwater systems in colloidal form. Water pumped
from a tritium contaminated satellite well over a 20-year
period drains into a mile-long ditch and has created a
secondary site emphasizing the unsaturated zone.
Current studies along the discharge ditch are
investigating the moisture and tritium front through
shallow alluvium. This project is developing systems
which can measure contaminants such as organics,
tritium, and long-lived radionuclides in wells in depths
from 1400 to 3300 feet. For further information,
contact the Principal Investigator, Jo Ann Rego at
Nuclear Chemistry Division, Lawrence Livermore
National Laboratory, P.O. Box 808, L-234, Livermore,
CA 94551. Phone: (415) 422-5516 (FTS 532-5516).
Depth Profiling in the Water Table Region of a Sandy
Aquifer. The feasibility of using a new multilayered
sampler to investigate organic contaminants in
12
-------
groundwater is being explored. The device passively
collects simultaneous groundwater samples from
multiple levels in the subsurface. In addition, the
project will develop a new device based on experience
with the existing sampler.
The sampler, developed at the Weizmann Institute of
Sciences, Rehovot, Israel, was used to detect the
presence of several inorganic and organic species at a
contaminated Brookhaven site. The presence of
microscale heterogeneities in concentration gradients
over a vertical interval of 200 cm was observed for
eight solutes, including metals, organics, and anions. A
planned remediation was modified based on results of
this short sampling event. It is believed that the new
plan will be more cost effective than the original
because the contamination was better defined in the
vertical plane and because an oxygen-depleted zone was
found where it was previously thought to be fully
saturated. For further information, contact the Principal
Investigator, Edward Kaplan, Brookhaven National
Laboratory, Radiological Sciences Division, Building
703M, Upton, NY 11973-5000. Phone: (516) 282-2007
(FTS 666-2007).
Kr-81 Counting for Nuclear Waste Sites. A new
technology to date groundwater is being developed. By
combining resonance ionization spectroscopy and mass
spectroscopy, ultralow levels of Kr-81 in groundwater can
be detected. From the quantity of Kr-81, the age of the
groundwater can be determined. This information helps
find suitable locations to store nuclear wastes or highly
toxic chemical wastes in groundwater. Several samples
from Europe have been tested and the results are
adequate to search for new waste sites. It is beneficial
to the Department of Energy waste program to find a
geologically safe place to store nuclear wastes and
highly toxic chemical wastes. For further information,
contact the Principal Investigators, C.H. Chen and M.G.
Payne, Oak Ridge National Laboratory, Photophysics
Group, Building 5500, MS-6378, P.O. Box 2008, Oak
Ridge, TN 37831-6378. Phone: (615) 574-5895 (FTS
574-5895).
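The dating step follows directly from the radioactive decay law: taking
the Kr-81 half-life as roughly 229,000 years, the fraction of Kr-81
remaining relative to modern atmospheric krypton gives the groundwater
age. A schematic calculation (the sample fraction is invented, not a
measured result from the European samples mentioned above):

    import math

    HALF_LIFE_YR = 2.29e5            # approximate Kr-81 half-life in years

    def groundwater_age(fraction_remaining):
        """Age from the decay law N/N0 = exp(-ln(2) * t / t_half)."""
        return HALF_LIFE_YR / math.log(2) * math.log(1.0 / fraction_remaining)

    # Example: a sample retaining 40% of the modern atmospheric Kr-81 level.
    print(f"estimated age: {groundwater_age(0.40):,.0f} years")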
FUTURE TECHNOLOGY DEVELOPMENT
NEEDS
The OTD activities described here address some, but by
no means all, of the key needs which DOE foresees in
the area of in situ monitoring.
Present site characterization methods are imprecise,
costly, time-consuming, and overly invasive. Improved
site characterization methods will require better
technologies for accurately describing the subsurface
geohydrologic features of a site. For example, more
efficient nonintrusive sampling strategies and practical
models are necessary for understanding and predicting
subsurface transport. Also needed are more reliable
procedures for interpreting characterization data, such as
how clean is "clean".
Traditional hydrologic characterization of the subsurface
environment is highly dependent on data from
groundwater monitoring wells. A thorough
understanding of the subsurface environment requires a
series of hydraulic wells. Interpretation depends greatly
on proficiency of the scientific staff, making subsurface
characterization highly subjective and at times uncertain.
Research is needed to make hydrologic characterization
more precise and more cost effective.
Currently accepted analytical procedures such as those
in the Environmental Protection Agency's (EPA's) SW-
846 do not cover all materials that need to be measured
at DOE sites. DOE is working with the EPA and others
to alleviate such problems with sampling and analyses.
Close coordination with EPA and other regulatory
agencies is needed not only to identify, develop, and
validate appropriate methods, but also to ensure the
acceptance of data generated using these methods.
Intrusive exercises, such as sampling and excavation
during remediation of a site, often involve immediate
hazards to workers in the form of exposure to
radioactive and/or toxic materials. Remote real-time
analyses of ambient levels of potential hazards in the
air, water, and soil during characterization, as well as in
13
-------
the remedial action phase, would help ensure worker
safety and allow continuous operation. Instrumentation
capable of detecting broad classes of hazardous
materials and specific compounds is needed to indicate
cleanup status. Better characterization methods based
on real-time analyses are especially important to confirm
the most effective use of certain in situ remediation
technologies. In the absence of real-time monitoring,
excessive volumes of soil and water must be treated to
guarantee compliance; otherwise, pockets of
contamination may be missed.
Special characterization technologies are necessary for
inactive facilities, underground storage tanks, and
wastewater lagoons. These facilities often contain
significant quantities of radioactive wastes, in certain
cases mixed with heavy metals and/or hazardous organic
compounds that make personnel entry unacceptable.
Thus, the development of advanced robotic samplers,
smart probes, mobile and in situ fiber-optic devices, and
nonintrusive characterization instrumentation (based on
electromagnetic, thermographic, and acoustic principles)
is needed for sampling and chemically characterizing
these sites. The development of such techniques will
significantly reduce radiological exposure to workers
and provide more assurance that the correct remedial
technology has been selected.
Clearly, there are more technology development needs
and more good ideas than there are resources to devote
to these investigations. Priorities must be set to support
those activities deemed most urgent.
OPPORTUNITIES FOR PARTICIPATION
OTD is interested in eliciting broad participation from
qualified organizations who can contribute to its
RDDT&E activities. We are becoming increasingly
aware of the wealth of technological talent and good
ideas in all sectors. OTD has initiated steps during the
past year to increase participation of the private sector
(academia and industry) through competitive
solicitations and through funding of unsolicited
proposals. We have also worked to increase
participation by academia through interagency
agreements for cooperative funding of research and
through establishment of DOE educational consortia.
Several significant technology development activities are
being conducted at DOE sites such as national
laboratories. DOE is funding technology development
activities beyond the United States through direct
contracts, international agreements, and other
mechanisms.
DOE plans to continue this type of support for
technology development in the coming years.
Organizations interested in responding to solicitations
should contact John Beller (for Innovative Technology)
at the Innovative Technology Program Coordination Office,
EG&G Idaho, P.O. Box 1625, Idaho Falls, ID 83405-
6902; Dr. Erickson (for applied R&D) at the above
address; or Mr. Snipes (for DT&E) at the above address,
to be placed on distribution lists. Organizations wishing
to submit unsolicited proposals should contact Larry
Harmon, Director, Division of Program Support (EM-
53), Department of Energy, 12800 Middlebrook Road,
Trevion II Building, Germantown, MD 20874, for
information on submission format and procedures prior
to preparation of a proposal.
REFERENCES
1. United States Department of Energy
Environmental Restoration and Waste
Management. Five-Year Plan, Fiscal Years 1992-
1996, June 1990, DOE/S-0078P.
14
-------
DEPARTMENT OF DEFENSE FIELD SCREENING METHODS REQUIREMENTS IN THE
INSTALLATION RESTORATION PROGRAM
Mr. Dennis J. Wynne
U.S. Army Toxic and Hazardous Materials Agency
The Superfund Amendments and Reauthorization Act
(SARA) and the implementing executive orders under this
legislation require that contamination resulting from Depart-
ment of Defense (DOD) past operations be remediated. In
response to this legislation, the DOD has undertaken a com-
prehensive program to comply with these mandates. Over the
years this program has expanded from a $150 million effort in
FY 1984 to a $1 billion effort in FY 1991. Some 17000 sites
have been identified at 1808 DOD Installations. Ninety DOD
Installations have been identified on the National Priorities
List by the Environmental Protection Agency. The detection
and remediation of contamination is a long term and resource
intensive effort. Research that allows us to proceed more
quickly in locating contaminants and in pinpointing key soil
and water samples for analysis, assessment, and remediation
purposes can provide a tremendous resource savings to the
Installation Restoration Program and, ultimately, the taxpayer. It is noted that
over 30% of the budget is estimated to be totally dedicated to
drilling, sampling and sample testing. Any improvement in
Field Sampling and Analysis will quickly repay the cost of
its associated research and development.
DOD Field Sampling and Analysis accomplishments include
the fielding of a truck-mounted cone penetrometer for more
efficient contaminant plume identification, tracking and
reducing well drilling requirements. Also completed was the
development of a field Analytical Method for the explosives
TNT and RDX in soil and water. Current program efforts
include the development of various contaminant sensors to
be employed in the cone penetrometer system to define
concentrations of contaminants in soil and groundwater as
the penetrometer is advanced through the soil. Future plans
include the concept of placing sampling devices into the
ground with the penetrometer which can be sampled and
analyzed with field instrumentation at regular intervals
thereafter. All these efforts have significant cost reduction
implications and have the interest and funding support of not
only DOD but also DOE.
15
-------
AN OVERVIEW OF ARMY SENSOR TECHNOLOGY APPLICABLE
TO FIELD SCREENING OF ENVIRONMENTAL POLLUTANTS
RAYMOND A. MACKAY
U.S. Army Chemical Research,
Development and Engineering Center
Detection Directorate
ATTN: SMCCR-DDT
Aberdeen Proving Ground, MD 21010-5423
ABSTRACT
The Army has under development a number
of technologies directed toward the field
detection and identification of chemical and
biological (CB) agents. This includes not
only specific sensors, but the technology
required to integrate these sensors into
effective field detection systems. Much of
this technology can be adapted to materials
of environmental concern. In particular,
there are technologies in various stages of
development which are applicable to vapor
and aerosol clouds, as well as to
contaminated surface water and terrain.
These include both point sampling and
monitoring systems, as well as remote sensing
systems capable of providing rapid wide area
coverage. This paper will provide an
overview of Army programs applicable to
field screening methods, with particular
emphasis on mass spectrometric, infrared,
and aerosol sampling technologies.
INTRODUCTION

Technologies which can be utilized for
the detection of chemical warfare agents in
the field may also be applicable to the field
detection, classification and identification
of various substances of environmental
interest. Although Army detection programs,
particularly those in the early stages of
development, focus on biological as well as
chemical detection, much of the technology
is applicable to both. In this paper, the
emphasis will be on chemicals in the form of
vapors or aerosols. The two main areas which
will be covered are standoff detection and
point detection. Standoff detection has
sometimes been referred to as remote
detection. However, remote detection is
defined here as the use of point detectors
which are located at the site to be
monitored, which may be at some distance
from the main monitoring station or base,
and connected to it by hard wire or
telemetry. Standoff detection refers to the
use of equipment located at the monitoring
base which can sense chemicals at a distant
location. The point detection technology to
be discussed in this paper is pyrolysis-mass
spectrometry. There will also be some
discussion of aerosol sampling, since this
is pertinent to point detection of
aerosolized particulates, liquid or solid.
It is not the aim of this paper to present
detailed experimental results but rather to
provide an overview of the technology and
its range of applicability.
DISCUSSION
STANDOFF DETECTION: The U.S. Army Chemical
Research, Development and Engineering Center
(CRDEC) is currently engaged in an extensive
multi-year exploratory development program
to exploit laser radar for Chemical
Biological (CB) Standoff Detection. At
present, the only near term capability for
the detection of chemical agents at a
distance is the use of passive infrared
sensors. These sensors can detect only
chemical vapors. Active (laser) infrared
17
-------
(IR) systems employing Differential
Scattering and Absorption Lidar (DISC/DIAL)
are being developed for the detection of
chemical agents in all physical forms:
vapors, aerosols, and rains, as well as
liquid surface contamination. In addition,
an ultraviolet (UV) system employing laser-
induced fluorescence is being developed for
the detection of biological clouds
consisting of organisms, toxins and related
materials. The principles of operation of
these systems and the background of their
development will be briefly discussed. The
IR and UV breadboard systems have recently
been used in an extensive field test
employing various non-toxic chemicals and
interferents with excellent results. These
data will be discussed along with the
necessary development efforts required to
adapt the DISC/DIAL technology to practical
field use.
The Army is making a significant
investment in standoff technology because it
is the only technology known that can provide
rapid wide area surveillance capability
while simultaneously reducing the total
number of detectors required. At CRDEC
there are three phases to the Standoff
Detection program: the XM21 Passive Remote
Sensing Chemical Agent Alarm, along with
technology upgrades; the Laser Radar (LIDAR)
CB Standoff Detection System; and, for the
future, combining these technologies with
other electro-optic systems in integrated
sensor suites.
First to be discussed is the chemical
detection portion of the laser radar project,
called IR DISC/DIAL. The objective is to
provide chemical laser standoff detection
systems for CB defense applications. The
planned system's capabilities are to scan the
surrounding atmosphere and terrain, operate
in fixed or mobile mode, detect chemical
contamination in all its physical forms, and
range resolve, quantify and map data. The
purposes of the current program are to
demonstrate concept feasibility, establish
capabilities and limits, complete the science
base, determine effectiveness in field
situations and establish a basis for rapid
transition to mature development. The IR
DISC/DIAL system can develop data in four
ways (as shown in Figure 1):
FIGURE 1. IR DISC/DIAL detection modes: topographical
reflection (vapor), differential absorption off natural
aerosols (vapor), differential scattering (agent
aerosol/rain), and differential scattering (surface
contamination).
18
-------
Topographic reflection DIAL: By transmit-
ting different IR frequencies and detecting
their topographic return, chemical vapor
clouds can be identified by their selective
absorption of some of the IR frequencies.
This measurement detects the presence of the
cloud and its total concentration times path
length (CL); however, it does not tell how
far away the cloud is or its density
(concentration). (A numerical sketch of this
column-content retrieval follows the four
detection modes below.)
Aerosol backscatter DIAL: By the same
technique, but with higher laser power, the
normally occurring atmospheric aerosol
begins to reflect IR energy back to the
detector. This distributed reflector can be
"range resolved" by gate timing the
returning signal just as radar systems do.
In this way, average concentrations and
ranges can be developed for many cells
(range lines) down the LIDAR path. By
scanning the system spatially, a map can
then be made of chemical agent vapors.
Agent backscatter DISC: In the same manner,
chemical agent aerosols and agent rains can
be detected by the selective frequencies
that they directly backscatter to the
detector.
Surface reflection: The fourth mode of
detection is the detection of selective IR
frequencies backscattered from agents on
surfaces. This measurement is dependent on
the amount of material located on the
surface of dirt, grass, trees or equipment.
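For the topographic reflection mode, the standard two-wavelength DIAL
relation gives the concentration-path-length product directly from the
on-line and off-line returns. The sketch below uses assumed return
powers and absorption cross sections purely for illustration; it is not
drawn from the GMBU data system:

    import math

    # Two-wavelength DIAL relation for a topographic (hard-target) return:
    #   CL = ln(P_off / P_on) / (2 * (sigma_on - sigma_off))
    # where the factor of 2 accounts for the two-way path.

    def column_content(p_on, p_off, sigma_on, sigma_off):
        """Concentration x path length (mg/m^2) from on/off-line returns.
        sigma_on and sigma_off are absorption cross sections in m^2/mg."""
        return math.log(p_off / p_on) / (2.0 * (sigma_on - sigma_off))

    # Hypothetical returns (arbitrary units) and cross sections.
    cl = column_content(p_on=0.62, p_off=1.00,
                        sigma_on=3.0e-3, sigma_off=2.0e-4)
    print(f"column content CL ~ {cl:.0f} mg/m^2")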
Figure 2 shows that, for each of the
detection modes, the return signals are
different so that all measurements can be
made simultaneously. This is important
because there are no significant hardware
design constraints to add aerosol rain and
surface detection to an aerosol backscatter
DIAL system. The first objective of the
DISC/DIAL project was to build a Ground
Mobile Breadboard (GMB) system to demon-
strate the feasibility of DISC/DIAL chemical
detection. The system was mounted in a van
and tested. Based on these tests, the GMB
was upgraded. The current specifications of
the Ground Mobile Breadboard Upgrade (GMBU)
are given in Table I.
The GMBU along with other devices was
then exposed to extensive U.S. Army Dugway
Proving Ground (DPG) field testing. The
goals of these tests were:
(1) Investigate effects of reducing
system size, weight and power on detection
performance. This was because the Army's
near term use was a ground mobile vehicle
application for reconnaissance.
(2) Obtain quantifiable data on vapors,
aerosols, and liquid detection and on
interferences to prove feasibility.
(3) Use more realistic field scenarios
to develop workable use concepts.
FIGURE 2. Return signal versus time, showing backscatter
from "normal" clear air, an aerosol cloud, a vapor cloud,
and the hard target return.
19
-------
TABLE I. DISC/DIAL Specifications

Transmitter
  Lasers                          Four CO2 TEA lasers
  Tunability                      Line-tunable by grating
  Wavelengths                     9.2 to 10.8 microns
  Energy (on 10P20)               2.0 J/pulse
  Pulse-to-pulse power stability  ±0.3 percent
  Pulsewidth (3 dB)               90 ns
  Repetition rate                 20 Hz
  Beam divergence                 3.5 x 4.0 mrad
  Mode                            Multimode or TEM00
  Timing jitter                   2 ns pulse-to-pulse

Receiver
  Telescope diameter              16 inches
  Detector                        HgCdTe quadrant
  Size                            1 x 1 mm per element
  Detectivity                     4x10 cm Hz^1/2/W
  Field of view                   8 mrad
  Overall electronic bandwidth    10 Hz to 7 MHz
These tests involved large scale
simulant clouds created by a special 100
meter long spray system as well as aircraft
spray. Also, aerosols were generated by
spray from a high ranger boom, and surfaces
(such as dirt, grass, concrete, trees, or
vehicles) were coated with simulants. The
many accomplishments of these large scale
tests were:

- Demonstrated feasibility of DISC/DIAL technology
- Demonstrated high sensitivity
- Demonstrated operation in motion, scanning and mapping
- Detected cloud through a cloud
- Detected collocated DMMP and SF6
- Detected DMMP (dimethyl methylphosphonate)
  - up to 5 km (range resolved)
  - up to 10 km (column-content)
  - in presence of all interferents (fog, rain, dust and military smokes)
  - on ground by secondary vapor
  - at night and in reduced visibility
  - in calibrated chamber
- Detected SF96 as an aerosol and as ground contamination on six surfaces
- Detected other volatile and non-volatile simulants
- Validated emulation and simulation models
Figure 3 shows a typical GMBU map of a
simulant vapor cloud. Although not evident
in this black and white illustration, the
range cells are colored to show the average
concentration from 0.1 to 2.0 mg/m3.

Additionally, this field work was backed
up with an extensive emulation and simula-
tion program which was able to show excel-
lent correlation between predicted and
actual performance. For example, the DMMP
and SF6 1 km range resolved predicted and
measured values are identical. Using this
excellent agreement, one can infer the
following sensitivities to chemical vapors
with strong absorptions in the 9-10 micron
region of the infrared:
    Column content:    2 km     10 mg/m2
                       10 km    12 mg/m2
    Range resolved:    1 km     0.5 mg/m3
The minimum detectable concentration of
liquid simulants on the ground was measured
at 0.5-5.0 g/m2, depending on the porosity of
the surface.
20
-------
FIGURE 3. GMBU map of a simulant vapor cloud.

Also very encouraging is the fact that
one four-wavelength set (1/20 sec data) can
provide a large amount of information about
the situation. An example:

    Information                           Accuracy of Prediction
                                          (Range Over All Data)
    1 simulant on any 1 of 5 surfaces     97.2-100 percent
    1 simulant on any 5 of 5 surfaces     87.4-87.8 percent
    3 simulants on any 6 of 6 surfaces    66.2-74.1 percent

This demonstrates that a real time surface
detection algorithm can be developed.

The UV LIF based laser radar was also
successfully tested at DPG for detection of
biological and toxin materials. While not
nearly as far along in development as the IR
system, this system demonstrated significant
detections at ranges up to 1.2 km. The
system, which measures the laser induced
fluorescence of tryptophane, a compound
occurring in all living material, can sense
the presence of biological/toxin clouds but
cannot as yet uniquely identify the
material. The relative optical discrimination
between biological simulants and
interferents/backgrounds by UV/LIF is shown
below:

                        Scattering            Fluorescence
                        Signal Level          Signal Level
                        (248 nm)              (280-410 nm)
    Tryptophane         None                  Strong
    EG                  None                  Strong
    Egg Albumen         None                  Strong
    Diesel Exhaust      Small                 Strong
    Auto Exhaust        Small                 Weak
    Road Dust           Strong                None
    Trees               Strong                Strong

Other optical concepts based on Mueller
Matrix scattering are currently being
investigated to add additional identification
capabilities to the UV/LIF system.
21
-------
Passive IR. The standoff detection and
identification of chemical vapor clouds is
currently achieved by recording the IR
spectrum in the 8-12 micron wavelength
region by means of an interferometer. This
is the XM21 Remote Chemical Agent Sensing
Alarm. It is a tripod-mounted device
weighing approximately 55 pounds, exclusive
of the power source. It scans a 1.5° field
of view (FOV) for 2 seconds, co-adding eight
scans. If the cloud fills the entire FOV,
the sensitivity is on the order of a
concentration-path length product of 150
mg/m2, the precise value depending upon the
strength of the absorption bands. The
interferogram, taken in the time domain, is
converted to a frequency domain spectrum in
the microprocessor by means of a fast
Fourier transform. A background spectrum of
the FOV must be obtained and stored, and
then subtracted from the sample scan prior
to further signal processing. Because of
the relatively slow scan speed, and the
requirement of the current algorithm for a
background subtraction, it cannot be operated
from a moving platform.
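The processing chain described above, a time-domain interferogram
transformed to a spectrum and compared against a stored background, can
be sketched generically as follows (synthetic data; this is not the
XM21 algorithm itself):

    import numpy as np

    def to_spectrum(interferogram):
        """Convert a time-domain interferogram into a magnitude spectrum
        (the role played by the fast Fourier transform in the instrument)."""
        return np.abs(np.fft.rfft(interferogram * np.hanning(len(interferogram))))

    n = 1024
    t = np.arange(n)
    # Synthetic interferograms: the sample adds a weak band to the stored background.
    background_ifg = np.cos(2 * np.pi * 0.10 * t)
    sample_ifg = background_ifg + 0.05 * np.cos(2 * np.pi * 0.20 * t)

    background_spectrum = to_spectrum(background_ifg)   # stored background of the FOV
    residual = to_spectrum(sample_ifg) - background_spectrum

    peak_bin = int(np.argmax(residual))
    print(f"strongest residual feature at bin {peak_bin} "
          f"(expected near {int(0.20 * n)})")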
A lightweight (20 lbs), fast scan
interferometer is under development. In
addition, recent developments in direct
signal processing in the time domain have
both reduced demands on the microprocessor
and relieved the requirement for a
background scan. Since results equivalent
to those on the XM21 can be achieved in a
single scan without a pre-determined back-
ground spectrum, this device can be operated
from a moving platform such as a ground
vehicle or airframe. Thus, if only vapor
detection is required, passive technology
represents an attractive method for rapid
survey of an area, particularly by air.
In summary, CRDEC has demonstrated the
feasibility of IR DISC/DIAL technology for
the detection of chemical agents in all
forms, as well as passive IR for chemical
vapor detection. Prototypes for ground
mobile, fixed site and test facility appli-
cation are beginning to be developed. The
potential exists for modifying these systems
to mount on helicopters, RPVs, and even
satellites, and to add the capability of
detecting biologicals and toxins, as well
as chemicals.
POINT DETECTION: There are two specific
technologies which form the basis of
recently fielded and developmental Army
point detectors, namely, ion mobility and
mass spectrometry.
Ion Mobility Spectrometry. This is a
technology which operates at atmospheric
pressure. The air sample containing the
vapor(s) to be detected is drawn through a
permselective membrane into an ionization
region where reagent gas ions react with the
(polar) compounds to be detected and form
cluster ion species. These are gated into a
drift tube where the ions migrate under an
applied electric field, and are separated
according to their mobility as measured by
their time of arrival at the collector at
the end of the drift tube. These devices may
be operated in both a positive and negative
mode. The U.S. Army currently has fielded a
hand-held monitor, the Chemical Agent
Monitor (CAM), and has a point alarm system
(XM22) under development. These relatively
low weight, man-portable, field-hardened
devices are quite sensitive and should be
quite useful for field screening and
monitoring of a wide variety of
environmentally hazardous vapors. Since
this technology and its applications will be
discussed extensively in the symposium, it
will not be considered further here.
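Although the technique is not pursued further in this paper, the
underlying measurement reduces to a simple relation between drift
length, electric field, and arrival time, normalized to standard
temperature and pressure. The worked example below uses assumed
instrument dimensions, not CAM or XM22 specifications:

    # Reduced ion mobility K0 from a drift-tube measurement:
    #   K = L / (t_d * E), then K0 = K * (273.15 / T) * (P / 760)
    # All instrument values below are assumptions for illustration.

    L_CM   = 6.0      # drift length, cm
    E_VCM  = 200.0    # drift field, V/cm
    T_K    = 300.0    # gas temperature, K
    P_TORR = 760.0    # pressure, torr

    def reduced_mobility(drift_time_ms):
        k = L_CM / ((drift_time_ms * 1e-3) * E_VCM)       # cm^2 / (V s)
        return k * (273.15 / T_K) * (P_TORR / 760.0)

    for t_ms in (14.0, 20.0):
        print(f"t_d = {t_ms} ms -> K0 = {reduced_mobility(t_ms):.2f} cm^2/Vs")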
Mass Spectrometry. A mass spectrometer
system which can provide sensitive,
effectively real time detection and
identification of chemicals in the form of
vapors, aerosols, and ground surface
contamination, is currently under
development by CRDEC. Since this system
also has the potential to detect materials
of biological origin, it is referred to as
the Chemical Biological Mass Spectrometer
(CBMS).
The CBMS consists of two major
components, the biological probe and the
mass analyzer chassis. An artist's concept
is shown in figure 4. The biological
sampling probe contains the virtual impactor
and infrared pyrolyzer. The mass analyzer
chassis contains the mass analyzer,
instrument computer, data processing
computer and display, alarm and
communication modules.
The virtual impactor block of the
biological sampling probe consists of a 1000
l/min pump and a four stage virtual impactor
concentrator. This device separates the
aerosol particles from the air by virtue of
their inertia and directs them onto a quartz
wool matrix. The quartz wool is mounted
inside of the infrared pyrolyzer assembly.
Periodically this assembly is heated to
22
-------
Figure 4. Chemical/Biological Mass Spectrometer (artist's concept).
temperatures near 600 C. As a result, any
biological material collected on the quartz
wool is pyrolyzed. Although the focus is on
biological aerosols, any aerosol particle in
the applicable size range will also be
collected and analyzed in the same way.
This includes liquid or solid chemical
aerosols, or chemicals adsorbed on or
attached to other aerosol particles of
natural or anthropogenic origin. These
pyrolysis products are then drawn into a
heated 3 meter long, 1 mm O.D. capillary
column and pulled to the mass analyzer
chassis. Any chemical vapors in the air are
also drawn into this capillary and pulled to
the mass analyzer.
The pyrolysis products and/or chemical
vapors enter the mass analyzer by permeating
through a silicone membrane. This membrane
separates the high vacuum mass analyzer from
the ambient pressure sample. After the
sample enters the mass analyzer, it is
ionized using an electron gun and the mass
spectra taken of the ionized components.
The instrument control computer controls
the mass analyzer, the pyrolysis event, and
all other instrument related functions
including temperature settings, electron gun
current, and rf/dc voltages and frequencies.
The data processing computer interprets the
mass spectra and generates the necessary
system responses. The display, alarm and
communications modules are the primary
interfaces to the operator. A block diagram
is shown in figure 5.
A QUISTOR (Quadrupole Ion Storage
Device) mass analyzer is used in the CBMS.
(Figure 6) This mass analyzer consists of
two end caps and a ring electrode. An ion
getter pump or molecular drag pump can be
used to produce the required vacuum. An
electron gun is mounted on the sample inlet
side. Selected masses are either trapped
within the QUISTOR or expelled out through
the end caps depending on the voltages and
frequencies applied to the caps and ring.
The masses of the ions that are expelled are
directly correlated to the voltages and
frequencies applied to the rings and caps.
In principle, a mass analysis is made as
follows. First a vapor sample enters the
QUISTOR. This sample is then ionized using
the electron gun. The voltages and
frequencies applied to the rings and end
caps cause these ions to become trapped
within the QUISTOR's internal electric
fields. The dc voltage applied to the
QUISTOR is then changed at a controlled
rate. At specific voltages, certain masses
become unstable and are expelled from the
QUISTOR and are detected at the electron
multiplier. A plot is made of the signal
from the electron multiplier as a function
of the applied voltage. This voltage is
increased until all ions are expelled. The
final mass record is then obtained by
correlating the applied and plotted voltage
to the corresponding masses that should be
expelled.
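The last step, converting the recorded signal-versus-voltage ramp into a
mass spectrum, amounts to applying a voltage-to-m/z calibration point by
point. In the sketch below the linear calibration constants and the
signal trace are assumptions for illustration, not CBMS values:

    import numpy as np

    # Assumed linear calibration: ions of mass-to-charge m/z are ejected when
    # the ramped voltage reaches  V = V0 + K * (m/z).
    V0, K = 10.0, 2.5          # volts, volts per m/z unit (illustrative)

    def voltage_to_mz(voltage):
        return (voltage - V0) / K

    # Synthetic ramp record: electron-multiplier signal sampled as the voltage rises.
    voltages = np.linspace(50.0, 500.0, 901)
    signal = np.zeros_like(voltages)
    for v_peak in (V0 + K * 91, V0 + K * 146):      # two ions at m/z 91 and 146
        signal += np.exp(-0.5 * ((voltages - v_peak) / 1.5) ** 2)

    mz_axis = voltage_to_mz(voltages)               # the "final mass record"
    peak_idx = np.where((signal[1:-1] > signal[:-2]) &
                        (signal[1:-1] > signal[2:]) &
                        (signal[1:-1] > 0.5))[0] + 1
    print("detected m/z:", np.round(mz_axis[peak_idx]).astype(int))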
23
-------
Figure 5. CBMS block diagram (display, alarm and communications modules).

Figure 6. QUISTOR Schematic: ring electrode, ionization region,
ion storage region, and electron multiplier.
24
-------
FIELD ANALYTICAL METHODS FOR SUPERFUND
Howard M. Fribush, Ph.D. and Joan F. Fisk
U.S. Environmental Protection Agency
Analytical Operations Branch (OS-230)
Washington, D.C. 20460
Abstract
The Analytical Operations Branch (AOB)
of the U.S. EPA is responsible for
coordinating field analytical methods
information transfer for Superfund. With the
assistance of the Environmental Monitoring
Systems Laboratory in Las Vegas (EMSL-LV) , AOB
has initiated a series of projects designed to
facilitate the appropriate use of field
analytical methods throughout Superfund. This
paper will summarize the use of field
analytical methods in the various phases of
Superfund activities, and will describe AOB
efforts in coordinating field analytical
methods information transfer throughout EPA.
In addition, this paper will summarize the
field analytical methods currently used
throughout EPA's Superfund program and
describe the development of a comprehensive
document that will compile field analytical
methods and provide guidance on the use of
field analytical methods for environmental
samples.
Introduction
Field analytical methods have been widely
used for the past eight-to-ten years by EPA
organizations under various Superfund
contracts, such as Field Investigation Teams
(FIT), Technical Assistance Teams (TAT),
Emergency Response Cleanup Services
(ERCS), and Remedial Engineering
Management (REM) contracts. As efforts
to streamline the Superfund site
assessment, site characterization, and
site clean-up processes have developed,
the need to assess field analytical
technologies for their appropriate use in
Superfund decision-making has increased.
The Analytical Operations Branch (AOB) of
the Hazardous Site Evaluation Division
(HSED) has been involved in coordinating
information on field analytical methods
used in support of Superfund. The AOB's
first efforts at coordinating field
analytical methods resulted in a paper
entitled 'Field Monitoring Methods in Use
for Superfund Analyses' (1) and the Field
Screening Methods Catalog (2).
Field analytical methods are used
throughout the Superfund process. In
EPA's Site Assessment, or Pre-remedial
Program, FIT teams under the direction of
EPA Site Assessment Managers (SAMs)
analyze samples in the field for Site
Inspections (SI). The results of the SI
determine whether a site should be added
to the National Priorities List (NPL) of
Superfund hazardous waste sites. In
EPA's Removal Program, a TAT team, under
the direction of an EPA On-Scene
Coordinator (OSC), will conduct a Removal
25
-------
Assessment, often using field analytical
methods, to determine if an emergency response
(a removal action) is necessary. When a
Removal Action is initiated, an ERCS cleanup
contractor, under the direction of the OSC,
may be dispatched to the site for further
analysis and cleanup. The result of the
removal action is typically a short-term
stabilization of a site, and field analytical
methods are often used to monitor the extent
of the cleanup and determine when to stop the
removal action. In EPA's Remedial Program,
REM contractors, under the direction of
Remedial Project Managers (RPMs), have
conducted field analyses to characterize the
extent of contamination at a site (the
Remedial Investigation), to test remedial
treatment technologies (the Remedial Design),
and for site cleanup activities (Remedial
Actions). In all of these programs, field
analytical methods are often used to identify
critical samples for CLP confirmatory
analyses.
Field analytical methods are typically
not as rigorous as chemical analyses conducted
in a "fixed" laboratory - a laboratory in a
permanent location. Field methods are often
used for screening sites to determine if
contamination is present, and to obtain a
general idea of the extent of contamination.
Further, field analytical methods are most
useful when the contaminants of concern have
already been identified, so that the
appropriate methods, dilutions, calibration
ranges, etc., can be employed. In addition,
field analytical methods are usually designed
to identify only a limited number of analytes.
Recently, however, more sophisticated and more
rugged instrumentation have allowed for more
rigorous analyses in the field; consequently,
field analytical chemistry does not have to be
limited to screening. Even so, it is
generally believed that field analyses provide
less precision and accuracy than analyses
conducted in fixed laboratories. (It should
be noted, however, that despite this
perception, a focused gas chromatographic
analysis is likely to be better than a heavily
quality-controlled GC/MS screen.) In all of
the Superfund activities described in the
previous paragraph, field analyses are used
for the rapid turnaround of sample results.
These results are, in turn, used to expedite
site assessments for NPL listings or for
emergency removal actions, site
characterizations, and ultimate cleanup. Data
quality is not compromised, since field
analyses are usually conducted in conjunction
with confirmatory analyses, such as GC/MS
or ICP/MS analyses using EPA Contract
Laboratory Program (CLP) protocols.
Consequently, field analyses are often
used to identify samples for more
rigorous, CLP-type analyses.
Site Assessment Program
As part of determining whether a
site should be added to the NPL, the Site
Inspection (SI) attempts to make a
determination of "observed release".
This determination indicates that the
site is discharging contaminants into the
environment.
The Site Assessment Program conducts
up to ten percent of its analyses in the
field, and about 75 percent of the
samples are sent to the CLP for full scan
analysis. In the Site Assessment
Program, very little is usually known
about the site and its contaminants;
consequently, it is more cost effective
to use the CLP as a screen rather than
conduct extensive field analyses designed
for analyzing a limited number of target
compounds. Nevertheless, FIT, the Site
Assessment Program's primary contractors,
conduct a limited number of field
analyses to obtain real-time data to
determine worker safety requirements, the
extent of contamination, the presence or
absence of contamination, for the
placement of monitoring wells, and to
select samples for subsequent CLP
confirmatory analysis.
To accomplish these analyses, EPA's
Site Assessment Branch has developed the
Field Analytical Support Project (FASP).
This project has, at this writing,
developed 31 field analytical methods,
called FASP Standard Operating Guidelines
(SOGs), which are designed to be modified
as needed to meet site-specific
conditions (3). These rapid turnaround,
FASP SOGs have been developed by FIT for
water, soil, or oil analyses for
volatiles, polynuclear aromatic
hydrocarbons, pesticides, PCBs, and
metals.
Some EPA Regions have used FASP to
perform preliminary evaluations of new
instrumentation. For example, two
Regions are evaluating Long Path Fourier
Transform Infrared (FTIR) Spectroscopy
26
-------
for the analysis of air samples remote from a
site, and one Region has evaluated the Thermal
Chromatography/Mass Spectrometry (TC/MS)
system for the analysis of solid samples.
According to these latter studies, TC/MS shows
promise as a rapid screen for solid samples
since there is minimal sample preparation.
Remedial Program
The purpose of the Remedial Program is
to clean, or remediate, a site. This process
can be rather complex, and usually consists of
a Remedial Investigation (RI) phase, a
Feasibility Study (FS), a Record of Decision
(ROD), a treatability study, a Remedial Design
(RD) phase, and a Remedial Action (RA) phase.
The RI consists of data collection activities
undertaken to determine the degree and extent
of contamination within all media. The RI
supports the FS, which determines the risk
that the site poses to human health and the
environment, and identifies the most
appropriate remedial alternatives that can be
used to remediate the site. The ROD is issued
by EPA as the final remedial action plan for
a site. If necessary, a treatability study is
performed to determine the most appropriate
conditions for treatment, the remedy is then
designed (RD), and the site is cleaned (RA).
During all of these phases, the
potential exists for the use of field
analyses. For example, during the three-
dimensional characterization of the extent of
contamination (the RI),
rapid turnaround of sample results may be
necessary to focus subsequent analyses on the
determination of the extent of contamination.
Here, the analyses may be used to optimize
sampling grids for three-dimensional site
characterizations, to determine the location
of monitoring wells and well screen depths, or
to determine the direction and speed of
groundwater plumes. During treatability
studies, rapid turnaround of data may be
necessary to avoid shutting down a treatment
operation to wait for sample results. In the
Remedial Design phase of the remediation,
rapid turnaround of sample results may be
necessary to evaluate the efficiency of a
design. These data may then be used to make
improvements on the design, the net result
being more rapid development of remedial
designs. In removal and remedial actions,
rapid turnaround of data may be required to
determine cleanup levels and to minimize the
costs associated with using expensive cleanup
equipment such as bulldozers. When the field
analyses suggest that a regulatory level
has been reached, CLP confirmatory
analyses can then be performed to confirm
the cleanup level reached.
To accomplish these analyses, EPA's
Hazardous Site Control Division developed
the Close Support Laboratory (CSL)
Program. Because site remediations are
often very complex and typically take
several years to complete, the REM
contractors found it more convenient to
construct temporary, "close-support"
laboratories at the site rather than use
mobile laboratories or portable
instruments for the analytical
investigations. This program has
resulted in the development of 15 field
analytical methods for metals, volatiles,
semivolatiles, and polynuclear aromatic
hydrocarbons in water and soil matrices
(4). In addition, the CSL program has
developed 16 field protocols for the
determination of physical measurements to
be used during treatability studies.
The Remedial Program conducts about
ten percent of its analyses in the field.
Once EPA has placed the site on the NPL,
Potentially Responsible Parties (PRPs)
are finding that it is more cost-
effective to assume the costs of site
characterizations. Consequently, there
are a growing number of these "PRP-Lead"
sites, requiring fewer analyses by the
EPA. As a result, in many Regions the
Remedial Program is devoting increasingly
more resources to overseeing the
analytical activities of the PRPs. This
shifting of focus from "Superfund-Lead"
sites to PRP oversight has also coincided
with the phasing out of the REM contracts
and phasing in of the new Alternative
Remedial Contracts Strategy (ARCS)
contracts. Nevertheless, there are still
many Superfund-Lead remediations in
progress, and the Remedial Program is
planning to use ARCS contractors to
perform analyses in the field.
Removal Program
In addition to the long-term
remedial actions, Superfund legislation
provides for short-term, removal actions.
Removals are performed in emergency-type
situations on unstable sites. A removal
is the cleanup or removal of released
hazardous substances which may present an
imminent and substantial danger.
Consequently, removals may be necessary in the
event of a release of hazardous substances, or
to monitor, assess, and evaluate the threat of
release of hazardous substances to prevent,
minimize, or mitigate damage to human health
or the environment.
Due to the nature of these activities,
removals often require a rapid turnaround of
analytical data; consequently, field analyses
are used quite often. The Removal Program
conducts about 30 percent of its analyses in
the field. Under the direction of the OSC,
TAT - the Removal Program's primary technical
contractor - may use field analytical methods
for purposes similar to those of the FIT
teams. If a more in-depth study is required,
the OSC may require the use of field
analytical methods to determine an estimated
extent of contamination. If drums are present
and the contents within the drums are unknown,
TAT may use a Hazard Categorization field kit
to categorize the potential hazard associated
with the contents of the drums. TAT uses this
field kit to perform simple qualitative tests
to determine gross characteristics of the
waste - the compound class, flash point and
other properties, and consequently, determine
the disposal options for the waste.
The Removal Program uses field analyses for
Classic Emergencies (for example, for fires,
spills, train derailments, and explosions), to
determine worker safety requirements, for
designing sampling grids, to estimate
exposure, for monitoring well placement, and
to determine cleanup levels. Across all
programs, the reasons for using field analyses
are time savings, cost savings, and the
identification of critical samples for
confirmatory analyses. Other reasons include
the ability to take more samples, ease of
acquisition, and minimal paperwork
requirements.
To accomplish these analyses, the
Removal Program established the Environmental
Response Team (ERT). The ERT provides
expertise to the OSCs in the area of
performing field analyses and field analytical
methods development. The ERT has developed a
number of field analytical methods, including
portable gas chromatography methods, x-ray
fluorescence methods for metals, and methods
for the screening and analysis of air samples
(5).
EMSL-LV
The Environmental Monitoring Systems
Laboratory in Las Vegas (EMSL-LV)
supports the Superfund field analytical
programs through both research and
development and through technical support
to the EPA regions. In the Advanced
Field Monitoring Methods Program (AFMMP),
EMSL-LV is developing and validating
field analytical methods. In its
Technical Support Program, EMSL-LV
dispatches field analytical teams to
hazardous waste sites for
characterization studies.
EMSL-LV is working under its
Advanced Field Monitoring Methods Program
(AFMMP) in coordination with the
Analytical Operations Branch (AOB) to
identify, develop, and validate new and
existing field analytical methods and
instrumentation. In addition, the
objectives of AFMMP include the transfer
to and exchange of information with the
EPA regions. EMSL-LV has performed
studies involving immunochemical methods,
soil gas techniques, portable gas
chromatographs and associated analytical
methods, X-ray fluorescence, and fiber
optic sensors. In addition, EMSL-LV has
identified a number of new techniques for
study, including Fourier Transform Infra-
Red (FT-IR) spectroscopy, portable
supercritical fluid extraction and solid
phase extraction,
field test kits, portable GC/MS, ion
mobility spectrometers, and luminescence
methods.
Development of a Superfund Field
Analytical Methods Catalog
The Analytical Operations Branch
(AOB) is the focal point for coordinating
field analytical method information
transfer for Superfund. In 1988, the AOB
coordinated an effort to compile some of
the field analytical methods used in
Superfund into a document entitled "Field
Screening Methods Catalog".
The AOB is currently designing and
developing a comprehensive compendium
that will contain many of the field
analytical methods described in this
paper for use by all persons involved
with Superfund field analyses. This
compendium will contain developed field
analytical methods, instrumentation
requirements, requirements for quality
assurance and quality control, analytical method
performance, guidelines for effective
communication, health and safety guidelines,
and evidentiary guidelines. This compendium
is being prepared with the assistance of the
Field Analytical Methods Workgroup, which had
its first meeting on July 19-20, 1990 and the
Field Analytical Methods Management Forum.
The forum is a group of EPA Headquarters and
Regional management representatives who met on
June 27-28 to determine Superfund policies
regarding field analyses in Superfund.
The field analytical methods that will
be a part of the catalog will come from the
sources described in this paper. The methods
will be presented in chapters structured by
fraction, analyte group, and media. In
addition, the methods will be restyled into
SW-846 format for consistency, ease of
reading, and to allow for variations.
Instrumentation requirements will be provided
for each type of method based on available
information and research by EMSL-LV. Quality
assurance and quality control information will
be designed to facilitate the rapid turnaround
expected of field analytical data, and will be
tiered so that requirements can vary with the
level of data quality needed.
The compendium will contain a user's guide and
will stress "interactive management" - the
communication between the site manager, the
field analyst, and the sampler. In addition,
an electronic bulletin board will be
established to house the methods for
downloading and to facilitate the quick
transfer of technology, information, and
ideas. Health
and safety guidelines will be established
based on recent OSHA regulations, and evidence
guidelines for samples and analyses will also
be addressed.
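To illustrate how such a compendium entry might be organized electronically, a minimal sketch follows; the field names, tiers, and values are hypothetical and do not represent the actual compendium format:

    # Hypothetical sketch of one compendium entry; keys and tiers are illustrative.
    catalog_entry = {
        "method": "Field GC screening of volatile organics in soil",
        "chapter": {"fraction": "volatiles", "analyte_group": "aromatics",
                    "media": "soil"},
        "format": "SW-846 style",
        "instrumentation": ["portable GC with PID"],
        "qa_qc_tiers": {
            "screening": {"calibration": "daily single point",
                          "duplicates": "1 per 20 samples"},
            "definitive": {"calibration": "multipoint",
                           "duplicates": "1 per 10 samples",
                           "confirmation": "subset sent to a fixed laboratory"},
        },
        "guidelines": ["interactive management", "health and safety",
                       "evidentiary"],
    }

    for tier, requirements in catalog_entry["qa_qc_tiers"].items():
        print(tier, requirements)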
References
1. Fisk, Joan F. Field Monitoring Methods in
Use for Superfund Analyses. Pittsburgh
Conference. February, 1987.
2. Office of Emergency and Remedial
Response, Hazardous Site Evaluation Division.
Field Screening Methods Catalog. Users Guide.
EPA/540/2-88/005. U.S. EPA. Washington, DC.
September, 1988.
3. Site Assessment Branch, Hazardous Site
Evaluation Division. Field Analytical Support
Project Standard Operating Guidelines.
Unpublished. U.S. EPA. Washington, DC.
July, 1990.
4. Hazardous Site Control Division.
Compilation of CSL Analytical Methods.
Unpublished. U.S. EPA. Washington, DC.
5. Environmental Response Team. Quality
Assurance Technical Information Bulletin,
Standard Operating Procedures.
Unpublished. U.S. EPA. Edison, NJ.
-------
FIELD DELINEATION OF SOILS CONTAMINATION ON HAZARDOUS WASTE SITES
REGULATED UNDER NEW JERSEY'S HAZARDOUS WASTE PROGRAM
Frederick W. Cornell
New Jersey Department of Environmental Protection
Division of Hazardous Site Mitigation
Bureau of Environmental Evaluation and Risk Assessment
401 East State Street, Floor 6W
Trenton, NJ 08625-0413
ABSTRACT
The New Jersey Hazardous Waste
Management Program (HWMP) recognizes
the potential for field analysis
techniques to expedite site
delineation while decreasing site
characterization costs. Although
field analysis methods produce
accurate, real-time data at a low cost
per sample, the absence of
standardized data quality objectives
and method specific quality assurance
and quality control (QA/QC)
requirements has prevented widespread
use of these technologies. The HWMP
has defined data quality objectives
for each phase of site investigation,
and outlined QA/QC procedures for
several widely available field
analysis methods, including field
x-ray fluorescence spectrometry, field
gas chromatography, colorimetric
analysis, and photoionization
surveying. The development of these
method-specific and use-specific
procedures has allowed the HWMP to
routinely recommend the use of field
analysis methods to expedite site
evaluation.
INTRODUCTION
The New Jersey Environmental
Cleanup Responsibility Act (ECRA)
program requires industrial facilities
that handle hazardous materials to
conduct a site evaluation and develop
a site remediation plan (if necessary)
prior to any real estate transfer or
cessation of industrial operations.
Given the real estate and stock market
activity of recent years it is not
surprising that ECRA subject sites are
often operational facilities. Since
ECRA's enactment in 1984, thousands of
sites have been processed under the program.
For larger industrial facilities, site
evaluation has proven to be costly and
time consuming, frequently taking
several years to complete.
Site characterization efforts
typically involve a historical site
survey, site screening, and several
phases of site delineation (1).
Although initial site screening is
usually conducted using survey
instruments, the remaining delineation
phases generally involve collecting a
limited number of samples for
laboratory analysis and evaluating the
lab results to determine the need for
additional sampling phases. This
typical investigation scheme is time
consuming, requiring months between
phases to allow for sample collection,
data analysis, delineation plan
development, and regulatory review and
interface; however, the phased
approach is necessary to limit
analytical costs. The unfortunate
result of phased investigation is that
remedial investigations frequently
last years and cost hundreds of
thousands of dollars.
These delays in site remediation
may not only render industrial
operations or property transfers
difficult or impossible to conduct,
but also may cause unnecessary
contaminant migration and exposure to
human or environmental receptors. In
these situations it is desirable to
implement analytical methods that can
provide the necessary data in a timely
and cost-effective manner. Field
analysis is ideally suited for rapid,
cost-effective site characterization
as it can provide real-time data which
is reliable and inexpensive on a per
sample basis.
FIELD SCREENING AND ANALYSIS METHODS
To develop the standard operating
procedures (SOPs) included in this
paper, efforts were initially directed
at determining the minimum data
quality necessary to make appropriate
technical and regulatory decisions
(this is described in further detail
below). Subsequently, a literature
search was used to identify reliable
methods from the vast number of
commercially available technologies.
Using this information, method
specific SOPs were developed to detail
the minimum requirements a field
delineation plan must meet to receive
agency approval. These SOPs are
designed to encourage the generation
of consistent and reliable data from
user to user and site to site. The
quality assurance and quality control
(QA/QC) requirements in each SOP were
formulated to be consistent with the
reliability, accuracy, and limitations
of each method (particularly when
considering field use), while
considering the ultimate use of the
resulting data.
Several instruments and methods
have been evaluated and determined
effective (or potentially effective)
at detecting site contamination at
milligram per kilogram (mg/kg)
concentrations. Although a single
instrument or method may only be
useful for analyzing one or two
classes of compounds, the use of
several field analysis procedures in
tandem enables site investigation
teams to detect most priority
pollutant compounds at or near
background concentrations. For
example, ambient temperature headspace
analysis is extremely effective in
analyzing volatile organic compounds,
but not polyaromatic hydrocarbons
(PAHs) or metals. Colorimetric tests,
on the other hand, are effective at
analyzing aromatic compounds
(including the PAHs), and a field XRF
will detect PCBs and most metals at
concentrations as low as 20-100
milligrams per kilogram. Thus, by
using several field instruments or
methods in tandem a broader suite of
contaminant compounds may be field
analyzed. It should be noted that the
methods cited in this paper are by no
means a comprehensive list of suitable
or potentially suitable field
methodologies. Initial selection for
these SOPs was based on instrument
availability, amenability to field
use, and in-house experience.
DATA QUALITY OBJECTIVES
The New Jersey Hazardous Waste
Management program (HWMP) data quality
designations are based on those
developed by the EPA (2-3). The EPA
has established five levels of data
quality objectives (DQOs). Two of
these, Level 1 (Field Survey
Instruments) and Level 2 (Field
Portable Instruments), generate real-
time, field data. Level 3 and 4 are
laboratory methods with differing
QA/QC requirements, and level 5 is
laboratory special services. The EPA
has clearly stated the minimum data
quality level required for each stage
of site investigation. Additional
explanation of these data quality
levels may be found in any of the
EPA's Data Quality Objectives manuals,
cited above.
The HWMP data quality standards
have been developed to encourage the
use of real-time analysis methods
during site characterization (4). The
HWMP field data DQOs are: Level 1
(Field Survey Instruments), Level 1A
(Field Analytical Methods), and Level
2 (Field Portable Instruments).
However, unlike the EPA designations,
minimum QA/QC and support documen-
tation (deliverables) requirements are
defined to assure that the data
generated by these methods can be
validated based on technical criteria.
A detailed description of all DQO
levels is provided below.
Data Quality Level 1 instrumen-
tation is intended primarily for
health and safety or initial site
screening. Quality control and
deliverable requirements are limited
to a continuing calibration for
site-specific compounds and the
reporting of values on field/boring
logs. Level 1 methods are
real-time and, at times, erratic.
These methods can be described as
pseudo-qualitative and pseudo-
quantitative as the end user can
easily be led to believe that these
instruments are reporting "true
values" or providing selectivity, when
indeed they are not. For example, the
photoionization detector (PID) survey
instrument is commonly thought to be
selective and not sensitive to species
whose ionization potentials (IPs) are
higher than that of the internal
ionization lamp. In practice,
however, species with IPs above the
lamp energy are routinely detected by
PID survey instruments. With respect
to quantitation, a PID survey
instrument reports a value often
expressed in mg/kg; however, since
detector response is highly variable
among chemical species this reported
value may not represent site
conditions or correlate with other
site data. For these reasons level 1
data should generally be used to
indicate contaminant presence or
absence, rather than compound identity
or total concentration. The
application of level 1 data should
therefore be limited to health and
safety screening or to guide the
placement of samples being analyzed by
higher DQO methods. Level 1
instruments include field x-ray
fluorescence spectrometers (XRF) with
a remote probe and PID survey
instruments.
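As a small illustration of why Level 1 readings should be treated as presence/absence indicators, the sketch below applies compound-specific response factors to a PID reading; the factors shown are placeholders, not published values, and an unrecognized species is deliberately left unquantified:

    # Illustrative sketch only: a PID calibrated to a single reference gas
    # responds differently to each compound, so a reported value is meaningful
    # only when a compound-specific response factor can be applied.
    RESPONSE_FACTORS = {"benzene": 0.5, "toluene": 0.5, "vinyl chloride": 2.0}  # placeholders

    def corrected_reading(pid_reading, compound):
        factor = RESPONSE_FACTORS.get(compound)
        if factor is None:
            return None          # unknown species: treat as presence/absence only
        return pid_reading * factor

    print(corrected_reading(10.0, "benzene"))        # 5.0
    print(corrected_reading(10.0, "unknown vapor"))  # None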
Data Quality Level 1A methods
produce fairly precise data; however,
a reduced quality control program is
employed to allow high frequency,
low-cost sampling. Level 1A methods
are suitable for site screening and
site delineation when proper QA/QC
practices are employed. When
delineating using level 1A methods,
minimum deliverable requirements
typically include: calibration data
for site-specific compounds, check
standards data, a non-conformance
summary, a certification statement
signed by the analyst, sample
calculations, isopleth maps, tables
indicating results (raw and
"corrected" based on lab confirmation
data), and chain-of-custody documen-
tation. In addition, lab confirmation
data (10-30% of all samples collected)
must provide "calibration" throughout
the entire analysis range and
confirmation of the "clean" zone.
Level 1A methods include headspace
analysis of volatile compounds and
analysis using colorimetric techniques.
Data Quality Level 2 methods
produce precise data when required
QA/QC procedures are employed.
Quality assurance and quality control
requirements are sufficient to allow
rigorous data interpretation, while
providing reasonable field operation
requirements. Level 2 methods are
ideally suited for low-cost, one phase
delineation. Minimum deliverables
requirements will include: an
instrument log, calibration data for
site specific compounds, standards
data, split sample data, raw sample
data, blank data, a certification
statement signed by the analyst, a
non-conformance summary, sample
calculations, isopleth maps, tables
indicating results (raw and
"corrected" based on lab confirmation
data), and custody documentation. Lab
confirmation data (5-15% of all
samples collected) must provide
"calibration" throughout the entire
analysis range and confirmation of the
"clean" zone. Level 2 methods include
field gas chromatography (GC) and
field XRF analysis using a
silicon-lithium detector.
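One simple way to choose the lab confirmation subset so that it spans the full analysis range and the apparent clean zone is sketched below; the sample identifiers, results, and selection rule are hypothetical and are not taken from the SOPs:

    # Hypothetical sketch: pick confirmation samples from field results so the
    # subset covers the clean zone, the mid-range, and the maximum value.
    def pick_confirmation_samples(field_results, fraction=0.10):
        """field_results: dict mapping sample id -> field concentration."""
        ordered = sorted(field_results, key=field_results.get)
        n = max(3, round(len(ordered) * fraction))        # at least low/mid/high
        step = max(1, (len(ordered) - 1) // (n - 1))
        picks = ordered[::step][:n]
        if ordered[-1] not in picks:                      # always confirm the maximum
            picks[-1] = ordered[-1]
        return picks

    results = {f"S{i:02d}": c for i, c in enumerate(
        [0, 0, 2, 5, 8, 15, 40, 60, 120, 300, 450, 900])}
    print(pick_confirmation_samples(results, fraction=0.15))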
Data Quality Levels 3 and 4 are
"Standard Lab Methods" with varying
deliverable requirements. Methods
which provide these data qualities may
be used for conventional site
characterization activities or to
confirm field instrument results
obtained during site delineation
activities. It should be noted that
the specific QA/QC procedures required
will be dictated by the applicable
regulatory program. Data quality
level 3 methodologies include SW-846
(5) methods and NJ ECRA Deliverables
(1). Data quality level 4 methods
include CLP methods and Scope of Work
(SOW) requirements (6).
Data Quality Level 5 methods are
generally state-of-the-art or non-
approved methods chosen specifically
for a particular site. Level 5
methods are required when "Standards
Lab Methods" are either unavailable or
impractical. Level 5 data may be
accepted to confirm field results or
define a "clean zone".
The goal of any site investigation
is to assure that the information
obtained is sufficient to select and
design an appropriate remedial
technology. Ideally, site character-
ization will provide complete
definition of contamination with
respect to both concentration trends
and actual contaminant load. The
advantages of levels 1, 1A, and 2
analysis are rapid site delineation
and low per sample costs allowing high
frequency sampling and a rapid
estimation of concentration gradients;
however, the concentration results
must be assumed to have up to a 150%
error. Level 3 and 4 analysis methods
are not real-time and are more
expensive, limiting sampling
frequency, but reported results can be
assumed to be quite accurate and a
good indicator of actual contamination
present. In summary, the trade-off is
rapid, less expensive site
characterization versus data quality
and accuracy.
At first glance it may appear as
if HWMP has chosen to expedite site
characterization at the expense of
data quality by encouraging the use of
level 1A and level 2 methods. Upon
closer examination, it can be seen
that although the raw data obtained by
field instruments are less accurate
and less precise, the data set is
highly consistent within itself,
clearly indicating trends and
contamination zones. Also, since
field analysis costs are generally per
diem rather than per sample, field
samples may be collected at a greater
frequency, providing the project team
with better site definition and fewer
data gaps. Lastly, all field data are
supported by an independent
calibration or correction factor
provided by the required lab
confirmation samples, discussed above.
Thus, the end product generated is
actually a hybrid of field analysis
data and lab data which, when
combined, may not only be equivalent
in data quality to that obtained by
standard methods, but may actually
provide a more reliable and complete
characterization of site conditions.
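To make the cost argument concrete, a back-of-the-envelope comparison is sketched below; the prices and throughput are invented for illustration and do not come from this paper:

    # Rough sketch with made-up prices: analyses a fixed budget buys under a
    # per-diem field crew versus a per-sample fixed-laboratory fee.
    budget = 20000.0
    field_per_diem, field_samples_per_day = 1500.0, 40    # hypothetical
    lab_cost_per_sample = 250.0                            # hypothetical

    field_days = int(budget // field_per_diem)
    field_samples = field_days * field_samples_per_day
    lab_samples = int(budget // lab_cost_per_sample)

    print(f"Field analysis: {field_samples} samples over {field_days} days")
    print(f"Fixed laboratory: {lab_samples} samples")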
SITE INVESTIGATION STRATEGY
The newly developed HWMP DQOs use
a combination of high and low quality
data to produce a data set which is
moderate in both quality and quantity.
These DQOs rely on the ability of
users to calibrate field analysis data
to laboratory confirmation samples,
providing superior site character-
ization at a reduced cost. The net
effect is that most site
investigations may be completed in a
maximum of 1-2 phases or less than one
(1) year. To accomplish this, the
following site investigatory procedure
is recommended (where site contam-
ination is known, step 1A may not be
required).
1. Obtain historical information
(i.e. past or present site
activities).
1A. If the contamination source
is unknown, a sampling
program incorporating site
screening tools (level 1) and
laboratory sample analysis
(level 3/4) should be
implemented. The goal of
this effort is to identify
all contaminants present by
documenting worst-case site
conditions.
2. The information above should
then be used to develop an
open ended, contaminant
delineation plan, including
the use of real-time (Level
1A/2 quality data) methods.
The plan should incorporate
sampling contingencies to
assure site delineation is
completed during this sampling
phase. To provide additional
data reliability, field
instruments should be
calibrated to site-specific
compounds of interest as
defined by previously obtained
information.
3. Upon receipt of the laboratory
confirmation data, the need
for a revised delineation plan
should be assessed. If
required, a phase II delin-
eation plan should incorporate
field analysis methods to
complete site delineation.
4. The complete database should
then be used to develop a site
remediation plan. If in situ
remedial measures are to be
used and system design limits
are being approached, an
increased percentage of
laboratory data may be
required.
DEVELOPMENT OF FIELD SOPS
The development of field SOPs is
considered the most efficient means of
assuring that data collected from site
to site is consistent. These SOPs
were developed by consulting the
literature, instrument manufacturers,
and personnel with extensive field
and/or instrumental experience. Each
SOP has 5 technical sections, i.e.
method overview, method requirements
(including QA/QC requirements),
interferences and limitations, data
interpretation and reporting require-
ments, and health and safety
considerations.
The method overview or general
guidance section is intended to
provide the reader with a basic
understanding of the method. This
section details method applications,
including applicable matrices,
detectable compounds, and minimum
detection limits (MDLs). Additional
information is provided for use by the
project manager, including estimated
cost per sample, level of training
required to effectively use the
method, lab method equivalent, and
theory of operation. The theory
section contains instrumental and/or
chemical details aimed at
familiarizing the reader with the
actual science of operation. The last
section of each SOP also provides a
list of references directing
interested readers to a more detailed
explanation of instrumental theory and
use.
The method requirements section
provides four types of information:
sampling considerations, sampling
requirements, field operation require-
ments, and QA/QC requirements.
Sampling considerations include
general information applicable when a
sampling program is being developed.
This section provides guidance with
respect to sample frequency, selection
of lab confirmation samples, and any
other useful information gained
through field experience. As would be
expected, this section is continually
evolving as the experience base grows.
The sampling requirements section
details proper sample collection
procedures when standard field
sampling methods (7) are inappro-
priate. This section also includes
sample handling requirements when past
experience has shown sample
preparation to significantly impact
final results, as is the case with XRF
analysis. The field operation section
contains actual method guidance
intended to supplement or replace
manufacturer's recommendations. This
guidance customizes method procedures
in an effort to meet the goals of the
HWMP regulatory program. The last
section, QA/QC, states all quality
assurance recommendations and require-
ments. The requirements include
analyst "competence" tests, submission
of all raw data, and support
documentation.
The interferences and limitations
section discusses problems which may
be encountered during field use.
These comments are intended to
supplement manufacturer's recommen-
dations by highlighting problems
encountered during previous site
operations. It is likely that this
section will be in constant transition
until a comprehensive database has
been established.
The data interpretation and
submission requirements section
details data manipulation procedures
and regulatory submission require-
ments. Data interpretation require-
ments vary by method and DQO level;
however, all SOPs require the
calculation of "corrected" results,
accounting for discrepancies between
laboratory and field data. Reporting
requirements are standardized for all
field methods and include: scaled
site maps with plotted data, summary
tables indicating all field results
(raw and corrected) and lab reported
values, a calibration plot of lab
split sample data versus field data,
and quality assurance and quality
control documentation (consistent with
the QA/QC requirements stated above).
These requirements are intended to
expedite the required review time by
standardizing report contents and
format, while facilitating validation
of both lab and field data.
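A minimal sketch of one way to compute such "corrected" results, assuming a simple linear fit of lab split-sample results against the paired field results (the numbers are hypothetical and the actual correction procedure is defined in each SOP):

    # Hypothetical sketch: fit lab split-sample results against paired field
    # results and use the fit to report "corrected" field concentrations.
    import numpy as np

    field = np.array([5.0, 20.0, 60.0, 150.0, 400.0])   # field results, mg/kg
    lab   = np.array([8.0, 26.0, 75.0, 170.0, 520.0])   # paired lab results, mg/kg

    slope, intercept = np.polyfit(field, lab, 1)         # calibration of lab vs field

    def corrected(field_value):
        return slope * field_value + intercept

    for raw in (10.0, 100.0, 300.0):
        print(f"raw {raw:6.1f} mg/kg -> corrected {corrected(raw):6.1f} mg/kg")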
CURRENT AND PENDING SOPs
Standard operating procedures have
been completed for four field
instruments and two field analysis
methodologies. Additionally, several
other SOPs are under development. A
listing of all SOPs is provided below.
Level 1 Data Quality
Field Screening Using a Photo-
ionization Survey Instrument.
Field Screening Using an X-ray
Fluorescence Spectrometer
Equipped with a Remote Probe.
*Field Screening Using a Flame
Ionization Survey Instrument.
*Field Screening Using a Portable
Infrared Instrument.
Level 1A Data Quality
Field Delineation Using a
Colorimetric Test Kit.
Field Delineation Using Ambient
Temperature Headspace Analysis.
*Field Delineation Using a Portable
Infrared Instrument.
*Field Delineation Using a Portable
Ultraviolet Spectrometer.
Level 2 Data Quality
Field Delineation Using X-ray
Fluorescence.
Field Analysis Using a Field Gas
Chromatograph.
Attachments: 1. PID Detector.
2. FID Detector.
3. AID Detector.
4. ECD Detector.
*5. Analyzing Extractables
(BNs/PCBs).
*6. Analyzing Water Samples.
*7. Analyzing Air or Headspace
Samples.
* - under development
FUTURE DIRECTIONS
Currently, the field SOPs
described above are in widespread use
throughout the HWMP program. Since
these instruments and methods are a
small subset of all currently
available field analysis methods,
similar SOPs will be developed for
several additional methods, including
FID survey instruments, several
spectrometers, and additional field
gas chromatography applications.
The performance of each of these
methods (on NJ regulated sites) will
be monitored using an in-house
database. Upon collection of
sufficient data, the SOPs will be
revised as appropriate. It is
expected that additional field
experience and the associated
understanding of method limitations
and accuracy will lead to wider use of
field analysis methods, making site
evaluation a much less time-consuming
and costly process.
REFERENCES
1. New Jersey Department of Environmental
Protection (NJDEP), Division of Hazardous
Waste Management (DHWM). March 1990.
Remedial Investigation Guide.
2. U.S. EPA. March 1987. Data Quality
Objectives for Remedial Response Activities.
EPA/540/G-87/003, EPA/540/G-87/004 and
OSWER Directive 9335.0-7A&B.
3. U.S. EPA. October 1988. Guidance for
Conducting Remedial Investigations and
Feasibility Studies Under CERCLA.
EPA/540/G-89/004 and OSWER Directive
9355.3-01.
4. NJDEP, Division of Hazardous Site
Mitigation (DHSM). 1990. Standard Operating
Procedures: Field Delineation Series (4.25).
5. U.S. EPA. SW-846, Third Edition.
6. U.S. EPA. CLP-IFB, most recent version.
7. NJDEP/DHWM. February 1988. Field
Sampling Procedures Manual.
BIBLIOGRAPHY
General Field Sampling and Analysis
Myers, J.C. "Converging Technologies",
Hazmat World, June 1989, p24-27.
U.S. EPA. Environmental Response
Team. 1989. Standard Operating
Procedure: Photoionization Detector
(12056).
Siegrist, R.L.; Jenssen, P.O.
"Evaluation of Sampling Method Effects
on Volatile Organic Compound
Measurements in Contaminated Soils",
Environmental Science and Technology,
1990, 24, 1387-1392.
U.S. EPA. September 1988. Field Screening
Methods Catalog. EPA/540/2-88/005.
Keith, L.H. "Environmental Sampling:
A Summary", Environmental Science and
Technology, 1990, 24, p610-617.
Gretsky, P.; Barbour, R.; Asimenios,
G.S. Pollution Engineering, June 1990,
p102-108.
Organic Survey Instruments
Nyguist, J.E.; Wilson, D.L. "Decreased
Sensitivity of Photoionization
Detector Total Organic Vapor Detectors
in the Presence of Methane", Journal
of the American Industrial Hygiene
Association, 1990, 51(6), p326-330.
Gervasio, R. ; Davis, N.O. "Monitoring
in Reduced Oxygen Atmosphere Using
Portable Survey Direct Reading
Instruments (PID and FID)",
Proceedings HMCRI, 1989-90.
Tillman, N.; Ranlet, K.; Meyer, T.J.
"Soil Gas Surveys: Part I", Pollution
Engineering, July 1989, p86-89.
Tillman, N.; Ranlet, K. ; Meyer, T.J.
"Soil Gas Surveys: Part II",
Pollution Engineering, August 1989,
p79-84.
Headspace Analysis
Holbrook, Tim "Hydrocarbon Vapor Plume
Definition Using Ambient Temperature
Headspace Analysis", Proceedings of
the NWWA/API Conference on Petroleum
Hydrocarbons and Organic Chemicals in
Ground Water - Prevention, Detection,
and Restoration, November, 1987.
Roe, V.D.; Lacy, M.J.; Stuart, J.D.
"Manual Headspace Method to Analyze
for the Volatile Aromatics of Gasoline
in Groundwater and Soil Samples",
Analytical Chemistry, 1989, 61,
P2584-5.
Colorimetric Analysis
Roberts, R.M., Khalaf, A.A.,
Friedel-Crafts Alkylation Chemistry; A
Century of Discovery, Marcel Dekker,
Inc., New York, 1984.
Shriner, R.L., Fuson, R.C., et al.,
The Systematic Identification of
Organic Compounds, John Wiley and
Sons, New York, 1980.
Hanby, J.D., "A New Method for the
Determination and Measurement of
Aromatic Compounds in Water", Written
Communication, Hanby Analytical
Laboratories, Inc., Houston, Texas,
1989.
X-ray Fluorescence
Piorek, Stanislaw "XRF Technique as a
Method of Choice for On-site Analysis
of Soil Contaminants And Waste
Material", Proceedings 38th Annual
Conference on Applications of X-ray
Analysis, Denver, Vol. 33, 1988.
Grupp, D.J.; Everitt, D.A.; Beth,
R.J.; Spear, R. "Use of a
Transportable XRF Spectrometer for
On-Site Analysis of Hg in Soils", AFI,
November 1989, p33-40.
J.R. Rhodes, J.A. Stout, J.S.
Schindler, and Piorek "Portable X-Ray
Survey Meters for In Situ Trace
Element Monitoring of Air
Particulate", American Society for
Testing and Materials, Special
Technical Publication 786, 1982,
p70-82.
Piorek, S.; Rhodes, J.R. "Hazardous
Waste Screening Using A Portable X-ray
Analyzer", Presented at the Symposium
on Waste Minimization and
Environmental Programs within D.O.D.,
American Defense Preparedness Assoc.,
April 1987.
Piorek, S.; Rhodes, J.R. "A New
Calibration Technique for X-ray
Analyzers Used in Hazardous Waste
Screening", Proceedings 5th National
RCRA/Superfund Conference, April 1988.
Piorek, S.; Rhodes, J.R. "In Situ
Analysis of Waste Water Using Portable
Preconcentration Techniques and a
Portable XRF Analyzer", Presented at
Electron Microscopy and X-ray
Applications to Environmental and
Occupational Health Analysis
Symposium, Pennsylvania State
University, October 1980.
Barish, J.J.; Jones, R.R.; Raab, G.A.;
Pasmore, J.R. "The Application of
X-ray Fluorescence Technology in the
Creation of Site Comparison Samples
and in the Design of Hazardous Waste
Treatability Studies", First
International Symposium: Field
Screening Methods for Hazardous Waste
Site Investigations, October 1988.
Piorek, S. "XRF Technique as a Method
of Choice for On-Site Analysis of Soil
Contaminants and Waste Material", 38th
Annual Denver X-Ray Conference, 1989.
Watson, W.; Walsh, J.P.; Glynn, B.
"On-Site X-Ray Fluorescence
Spectrometry Mapping of Metal
Contaminants in Soils at Superfund
Sites", American Laboratory, July
1989, p60-68.
Freiburg, C.; Molepo, J.M.; Sansoni
"Comparative Determination of Lead in
Soils by X-Ray Fluorescence, Atomic
Absorption Spectrometry, and Atomic
Emission Spectrometry", Fresenius Z
Anal Chem, 1987, 327, p304-308.
Smith, G.H.; Lloyd, O.L. "Patterns of
Metals Pollution In Soils: A
Comparison of the Values Obtained By
Atomic Absorption Spectrophotometry
and X-Ray Fluorescence", Environmental
Toxicology and Chemistry, 1986, Vol.5,
P117-127.
Jenkins, R "X-Ray Fluorescence
Analysis", Analytical Chemistry, 1984,
56(9), p1099A.
Field Gas Chromatography
U.S. EPA. Standard Operating
Procedure: Sentex Scentograph G.C.
Field Use (SOP #1702), December 1988.
U.S. EPA. Standard Operating
Procedure: Photovac 10S50, 10S55, and
10S70 Gas Chromatography Operation
(SOP #2108), January 1989.
Wylie, Philip, L. "Comparing Headspace
with Purge and Trap for Analysis of
Volatile Priority Pollutants",
Research & Technology, 1988, 80(8),
p65-72.
TABLE I. NJDEP/HWMP DATA QUALITY CLASSIFICATIONS

DATA QUALITY LEVEL: 1
PURPOSE OF SAMPLE: Health & Safety. Site Screening. Field use when excavating.
EXAMPLE METHODS OR INSTRUMENTS: Portable PID (HNU). Portable FID (OVA). Colorimetric Analysis. XRF with a remote probe (X-met).

DATA QUALITY LEVEL: 1A
PURPOSE OF SAMPLE: Site Screening. Field use when excavating. Site Delineation.
EXAMPLE METHODS OR INSTRUMENTS: ATH Analysis. Colorimetric Analysis.

DATA QUALITY LEVEL: 2
PURPOSE OF SAMPLE: Field use when excavating. Site Delineation.
EXAMPLE METHODS OR INSTRUMENTS: Portable GC. Portable XRF with SiLi detector. Mobile Lab (limited QA/QC).

DATA QUALITY LEVEL: 3
PURPOSE OF SAMPLE: Site Delineation. Lab Confirmation of field delineation samples. Traditional Site Characterization.
EXAMPLE METHODS OR INSTRUMENTS: Laboratory Analyzed Samples, without QA/QC documentation, i.e. 600 Series. Mobile Lab.

DATA QUALITY LEVEL: 4
PURPOSE OF SAMPLE: Traditional Site Characterization. Lab Confirmation of field delineation samples.
EXAMPLE METHODS OR INSTRUMENTS: Laboratory Analyzed Samples, with full QA/QC documentation, i.e. CLP-IFB.

DATA QUALITY LEVEL: 5
PURPOSE OF SAMPLE: Traditional Site Characterization. Lab Confirmation of field delineation samples.
EXAMPLE METHODS OR INSTRUMENTS: Laboratory Special Services. Mobile Lab.
-------
PLENARY SESSION DISCUSSION
LLEWELLYN WILLIAMS: There was a reference in the first or second paper
to our concern about the acceptance of field screening and field analytical data
by a regulatory group. How do we deal with, or how do we encourage the
acceptance of field screening and field analytical method data in the regulatory
arena?
DENNIS WYNNE: Part of it, I think, is encouraging risk-taking among our
managers. What we have dealt with in the past is a tendency to rely almost
exclusively on the tried and true methods of the contract lab program (CLP).
What we are trying to do in the Superfund Program is to wean people off
inordinate use of the CLP by saying that that is for a specific intended purpose.
It's not intended for all uses. If you focus on the data quality objectives approach,
there's not a need to over-rely on the CLP, because you often have for some uses
a gold-plated version that isn't needed for some of the basic uses. The field
screening methods would be more appropriate. Some of the ways to do that are by
the work group approach and trying things to encourage managers to use it. We're
trying to focus on things like streamlining the remedial investigations and
feasibility studies, and you really can't tell what you'll find if you're using fixed
labs exclusively. There's downtime while the data are being sent out, analyzed
and reviewed. In some cases I think what we're trying to do is look at where the time
is being spent on the program. Trying to shorten those times where we can, trying
to encourage the user community to come together in work groups, and being able to
provide guidance through training programs are ways we can get more people
familiar with field screening and thereby limit some of the conservatism
that we deal with in some of the managers who tend to rely on contract lab
programs.
Another part of it, I think, would be to emphasize that some of the field analytic
methods can provide you with as much accuracy as you get through fixed labs.
We need to emphasize those points, so people aren't always assuming field
methods are sort of the poor cousin of the fixed labs.
I guess from a PRP perspective, which the Army is, our approach has been to use
field screening as a powerful tool to guide the traditional quality control in lab
data. We've been forced that way because of our negotiations with the regions,
because of the requirement for a lot of this data to eventually stand up in court.
So we see it as minimizing the requirement for that extreme case of chain of
custody and total reliability of the data because of extreme quality control. We'll
minimize the number of samples we really have to take, because of this powerful
tool, the field screen.
HOWARD FRIBUSH: I think that your continued use of field analytical
methods and analyses is going to force it to be accepted, for one thing. Another
thing, the acceptance seems to be more fragmented. That is, it seems to be more
accepted, say, in the Removal Program, and less accepted but somewhat
accepted in other programs. I think that without field analytical methods and
analyses, we're really sampling blind, and there is no reason why 90% of the
samples that get sent to the CLP should be non-hits, when 90% could be hits.
Another way is to document all this just like we're trying to do with the catalog
and the user's guides. I think it will be accepted much more than it is now.
NABIL YACOUB: My question has two parts: 1) Would that manual encompass
methods developed by the Army and other entities? 2) Would the methods include
those for matrices other than water, because in the real world, you have a problem
with soils and sludges and such.
HOWARD FRIBUSH: Yes, it will include other matrices. As long as the
methods have been used for Superfund activities, and they have been shown to
work, and they've been field tested, there is no reason why they can't be included.
That's why channeling performance information on these methods back to
EMSL-Las Vegas for an ongoing evaluation is so important. In future
updates, we can either delete some of the methods, or it might help us to combine
some of the methods. And as far as your first question: I would say that we're
definitely open to including methods developed by the Army, especially if
they're used in Superfund activities, for example in the Federal Facilities
Program. If there is performance information, we would like to know that.
MICHAEL CARRABBA: I have more of a comment or a suggestion directed
at both the Environmental Protection Agency, as well as the Department of
Energy.
If you look at the Chemical Sensors session, there are six talks: two representa-
tives from the Federal Government, and four representatives from small busi-
ness. My comment is that the Environmental Protection Agency, as well as the
Department of Energy, are grossly underutilizing the Small Business Innovative
Research (SBIR) program to bring forth some of these field screening technologies, such
as in the area of chemical sensors or optical spectroscopy. If you look at the
current solicitations for 1991 for the Department of Energy, we've been hearing
about this great problem in environmental restoration and field screening. There
are no topics in there for small business, and a lot of the innovation that we're
going to need in the future, particularly for the DOE and EPA, is going to come
from small business with new and innovative ideas. This is not the case for the
Department of Defense, who is actually doing a pretty good job in using the SBIR
program to fulfill these needs.
EDGAR SHULMAN: I noticed in the user's guide that is presently out, that
there is a heavy emphasis on fieldable methods, and very little on the man-
portable type of instruments or methodology. Could you comment on what the
future direction is relative to the man-portable type of instruments for field
screening? And also perhaps to other panelists in terms of their judgment as to
the value of smaller devices for field screening?
HOWARD FRIBUSH: I think that the catalog and the user's guide are intended to
be comprehensive, and there is no reason that the smaller survey instruments,
such as organic vapor analyzers, or portable radionuclide analyzers couldn't be
included. In fact, since they are used a lot, especially in the Site Assessment
Program, and the Removal Program, they should be included and will be
included.
Up a stage to the man-portable instruments, we now have portable GC/MS. Those
certainly will be included. I think the short answer to your question is that we
want everything that is typically used in Superfund to be included in the catalog
and user's guide.
EDGAR SHULMAN: I guess I was looking toward your judgment in terms of
the value of small devices. Would the priority in the future be toward encouraging
people to actually get much smaller devices? I know you are talking about man-
portable GC/MS, but they really are not man-portable right now. They're
fieldable; you still need a truck or something similar.
HOWARD FRIBUSH: Are you talking about field kits, or are you talking about
survey instruments?
EDGAR SHULMAN: I'm talking about survey instruments, trying to encourage
the research and development community, in terms of an agenda for
research. Maybe that's what I'm looking for. Where should the priorities be put,
from the R&D community, relative to the kinds of methods that are envisioned
for the future?
LARRY REED: You've made a good point, I think, looking at the present catalog
we have out. There is a bias that was introduced when we were gathering existing
information, a large pile that was available as part of the Field Analytical
Support Program. This program was developed in part for field investigation
teams, and the Site Assessment Program nationwide. There had been a focus to
look at the bigger equipment and the more refined type of equipment. I think that
was done just to get the catalog out with what information was in use, available,
and useful. I think what we are going to try and do is balance it now with more of
the technologies, and try to focus more on portable kinds of instruments, also. I know
in particular when I'm looking at the future of the Site Assessment Program, as
the field investigation team contracts start to expire this year, we're going to be
looking at two phases of the shifting of the equipment, the larger field analytic
support equipment, and then the portable equipment. We want to make sure that
that equipment will be transferred to the people who are going to be doing the Site
Assessment work and in looking at the next generation of it. That's a good point
you make. I think we'll try to balance it out.
HOWARD FRIBUSH: I just wanted to say that the survey instruments have a
definite use in Superfund. They are used quite often to determine the health and
safety requirements for workers, and also to identify hot spots. So since they have
a definite use in Superfund, they will be included in the next update.
LLEWELLYN WILLIAMS: I was just reminded that the EPA is not 100%
Superfund. There are a fair number of other programs out there for which field
screening technologies will have a place, and in many of those applications, I
think some truly portable measurement instruments are going to be very, very
important.
CHRIS LIEBMAN: I thought that the key to the compendium and the success
of the compendium was really dependent on people submitting their methods to
the working group, so that we can see that they are included in the compendium.
I think it's important to point out that if survey instruments are not currently in
the compendium, that largely reflects the fact that people had not submitted
methods. I think if you are unhappy with what is in the compendium, to change
that, make your submissions.
DOUG PEERY: In putting the catalog together, of course, you're addressing
purely programs that the EPA is addressing. In dealing with private clients who
rely on these things for their own information, you get locked in. We also have
to respond to that. Is there going to be flexibility in this catalog whereby we,
as the person developing the procedure, can go through steps and prove that the
procedures are applicable and usable, and not be locked in or having to reply?
Maybe taking the USATHAMA Procedure and Methodology Proof Program,
making it simpler, and integrating the two, so that it can be done very quickly and
easily and economically would be one way. Is there a procedure or a thought to
adding something along that line?
HOWARD FRIBUSH: I think that is a really good idea, and a really good
statement. This is something that the work group has not yet addressed. I think
that is a good topic for a future item at our next work group.
Originally, we had talked about EMSL-Las Vegas doing some of that validation.
When we look at all the methods that we have, I think it might be more
appropriate to have EMSL-Las Vegas look at the performance information. But
for new methods, I think that that is an area for future consideration and I
appreciate the comment.
COLLEEN PETULLO: I notice that DOD, DOE and EPA are all developing
innovative technologies, or supporting innovative technologies to be developed.
We all march to a different drummer in terms of QA. How is that all being
coordinated?
LLEWELLYN WILLIAMS: There are a number of ways in which attempts are
being made to harmonize Quality Assurance, not the least of which is the
interagency ad hoc committee on QA for environmental measurements that's just
been established. We are looking very hard at both QC and QA requirements,
both from a process standpoint and from an operations standpoint, to see if we
can get more uniform application of QA/QC procedures, agency wide, as well as
across the agencies. We're well aware that there have been concerns in the past
with respect to dealing with each of our individual Regions, as separate
autonomies, and that a DOE or a DOD may have a difficult time in getting the
same kind of response to the same situation going from Region to Region. Part
of what we're hoping can come out of the interagency work is to get more
uniform application and use.
COLLEEN PETULLO: Is there one form of QA program plan that you're kind
of leaning to at this point?
LLEWELLYN WILLIAMS: When you say a form of QA program plan, I'm
not quite sure what you mean.
COLLEEN PETULLO: EPA tends to be more a laboratory type QA versus
field operational, and DOE tends to be more field operational, and I'm just
curious as to how you're going to get all this melded together.
LLEWELLYN WILLIAMS: I think there is much we can learn from the
approaches of other agencies. We will attempt to accommodate and utilize the
best that the other agencies can offer, and provide a focused program that
everyone can buy in on and live with.
-------
A FiberOptic Sensor for the Continuous Monitoring of
Chlorinated Hydrocarbons
F.P. Milanovich1, P.F. Daley2, K. Langry1, B.W. Colston1 Jr.,
S.B. Brown1, and S.M. Angel1
Environmental Sciences Division,
Environmental Restoration Division
Lawrence Livermore National Laboratory
Livermore, CA 94550
Abstract
We have developed a fiber optic chemical sensor for use
in groundwater and vadose zone monitoring. The sensor is
a result of modification of previous work in which we dem-
onstrated a fluorescence based sensor for the non-specific
determination of various volatile hydrocarbons. The prin-
ciple of detection is a quantitative, irreversible chemical
reaction that forms visible light absorbing products. Modifi-
cations in the measurement scheme have lowered the detec-
tion limits significantly for several priority pollutants. The
sensor has been evaluated against gas chromatographic
standard measurements and has demonstrated accuracy and
sensitivity sufficient for the environmental monitoring of
trace levels of the contaminants trichloroethylene (TCE) and
chloroform.
In this paper we describe the principles of the existing
single measurement sensor technology and show field test
results. We also present the design of a sensor which is
intended for continuous, sustained measurements and give
preliminary results of this sensor in laboratory experiments.
Background
This sensor technology is an outgrowth of research
initially sponsored by the U.S. Environmental Protec-
tion Agency. Here, a fluorescence based probe for the
remote detection of chloroform was conceived, devel-
oped and demonstrated in the mid-1980's.1 The sensi-
tivity and accuracy of the probe proved insufficient for
many monitoring applications and research was dis-
continued. However, in DOE sponsored research one
of us (SMA) invented a new concept sensor that has
demonstrated significantly improved sensitivity and
accuracy for both TCE and chloroform.2 This sensor is
currently under evaluation in monitoring well and
vadose zone applications.
Principles of Operation
The basic components of the sensor technology are
the chemical reagent, electro-optic measurement de-
vice, and the sensors. For the latter, we have developed
two versions, one for single and one for continuous
measurements. A brief description of the components
follows.
Chemistry. The chemical basis of this technology is
the irreversible development of color in specific re-
agents upon their exposure to various target molecules.
The primary reagent is an outgrowth of the work of
Fujiwara3 who first demonstrated that basic
pyridine, when exposed to certain chlorinated com-
pounds, developed an intense red color. This red color
is due to the formation of highly conjugated molecules
as shown below. We and others have since demon-
strated that this and closely related reactions can be
used to detect trace amounts of these same com-
pounds.4
[Reaction scheme showing formation of the red, highly conjugated product]
Sensors. The single measurement sensor (Fig 1) is
composed of the termini of two optical fibers and an
aliquot (20 µl) of reagent in a small capillary tube. The
fibers are sealed into one end of the capillary tube and
reagent is placed into this capillary to a length of ap-
proximately 5 mm. A porous teflon membrane is
placed over the open end of the capillary to prevent
loss of the reagent. Target molecules, TCE for example,
readily pass through the membrane and produce color
in the reagent. This color results in decreased transmis-
sion of light at 540 nm. The measurement of the time
history of the color development provides a quantita-
tive measure of the target molecule concentration.
Since the reaction is non-reversible, the reagent must be
replenished for every measurement. This is readily
accomplished through the use of easily replaceable,
disposable capillaries.
Electro-optics. The readout device is shown highly
schematically in Fig 3. Here the emission of a minia-
ture tungsten-halogen lamp is collected by suitable
optics, chopped with a tuning fork and directed into an
optical fiber. The fiber transmits this light with high
efficiency to the sensor where it passes through the
chemical reagent, reflects off the teflon membrane, and
is collected by a second optical fiber. This latter fiber
transmits the reflected light to an optical block where it
is divided into two beams by a long pass dichroic
mirror. These resulting beams are optically filtered at
540 nm and 640 nm, respectively, and their intensity is
ultimately measured with silicon photodiodes using
phase sensitive detection techniques.
Figure 3. Sensor readout device
Figure 1. Schematic of the single-measurement sensor
Figure 2 shows a sensor that has been designed for
continuous operation.5 It is essentially identical to the
single measurement version with the exception of the
addition of two micro-capillary tubes. These are used
to supply new reagent to the sensor either continuously
or on demand.
Figure 2. Schematic of the continuous-measurement
sensor
Since the colored product absorbs strongly at 540
nm and is virtually transparent at 640 nm, the ratio of
540 to 640 gives a nearly drift-free measure of 540 nm
absorption. The sensors are calibrated in two ways: (1)
in the headspace above standard TCE solutions of
known w/w concentration or (2) in vapor phase using
calibrated dilutions (v/v) of dry TCE vapor. Figure 4
shows the time dependent transmission of sensors
exposed to TCE standard solutions and a resulting
calibration curve.
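The data reduction implied by this scheme can be sketched as follows; the blank readings and standard-curve values below are invented for illustration and are not the calibration data of Figure 4:

    # Illustrative sketch: form the 540/640 transmission ratio and read TCE
    # concentration off a fixed-time standard curve (all values invented).
    import numpy as np

    def transmission_ratio(i540, i640, i540_blank, i640_blank):
        """Ratio of 540 nm to 640 nm transmission, each normalized to a
        reagent-blank reading so that lamp drift largely cancels."""
        return (i540 / i540_blank) / (i640 / i640_blank)

    # Hypothetical standard curve: ratio at a fixed exposure time versus the
    # headspace over stirred water standards (ppb TCE in water).
    std_ppb   = np.array([0.0, 50.0, 100.0, 200.0, 400.0, 600.0])
    std_ratio = np.array([0.98, 0.85, 0.72, 0.55, 0.35, 0.22])

    def tce_ppb(ratio):
        # np.interp requires increasing x, so interpolate on reversed arrays.
        return float(np.interp(ratio, std_ratio[::-1], std_ppb[::-1]))

    r = transmission_ratio(0.41, 0.95, 0.62, 0.97)
    print(round(r, 3), "->", round(tce_ppb(r), 1), "ppb (illustrative)")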
Results and Discussion
Groundwater monitoring. The sensor has been
evaluated against contractor sample and analysis of 40
monitoring wells located within the boundary of LLNL.
These wells are sampled quarterly with subsequent
chemical analysis performed by EPA standard 624
purge and trap gas chromatography (GC). We ob-
tained concurrent samples during the quarterly con-
tractor sampling and used our fiber sensor to make
duplicate TCE concentration determinations.
Figure 4. Sample transmission ratio curve, and working
standard curve for dual-wavelength absorption sensor.
Standard curve obtained from % transmission at a fixed
time following initiation of exposure; [TCE] expressed as
the equilibrium vapor phase over the given ppb in stirred
water solution at 25°C.
Figure 5
shows a diagram of the laboratory measurement appa-
ratus. Samples were sequestered with no head space
into 250 ml Pyrex bottles. These were immediately
returned to the laboratory and divided in half. The
fiber sensor was then introduced into the resulting
headspace through a gas tight valve and a measure-
ment was initiated after stirring the sample for 5 min-
utes.
[Figure 5 diagram labels: optical fiber (to spectrometer); capillary to pump (when operating in continuous mode); gas-tight valve; sensor; water sample; magnetic stirrer]
Table 1 below shows the comparison of some of the
contractor measurements with the fiber sensor. All
fiber sensor values are the average of the duplicate
samples. There is excellent agreement between the GC
and fiber sensor determinations with nearly all values
within the variance of the GC.
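One simple way to express this kind of agreement numerically is the relative percent difference between paired determinations; the pairs below are hypothetical and are not the Table 1 values:

    # Illustrative sketch: relative percent difference (RPD) between paired
    # fiber sensor and GC determinations; pairs are hypothetical.
    def rpd(a, b):
        return abs(a - b) / ((a + b) / 2.0) * 100.0

    pairs = [(45.0, 52.0), (80.0, 85.0), (25.0, 21.0)]   # (fiber ppb, GC ppb)
    for fiber, gc in pairs:
        print(f"fiber {fiber:5.1f}  GC {gc:5.1f}  RPD {rpd(fiber, gc):5.1f}%")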
Vadose zone monitoring. LLNL site 300 was chosen
as the location for initial vadose zone evaluation of the
fiber sensor. The vadose zone was accessed at several
locations through existing dedicated soil vapor moni-
toring points. The samples were drawn at nominally
450 cc/min through copper tubing to a remote mobile
laboratory. The lab contained both the fiber sensor
apparatus and a portable GC. The instruments were
connected to the sample stream in series as depicted in
Fig 6 below. Both devices were calibrated for TCE
Figure 5. Schematic of vessel used for laboratory
headspace measurements
Figure 6. Schematic of vadose zone sampling and
calibration apparatus. Sample air is drawn with a pump
on board the GC
45
-------
Table 1. Representative data from field calibration study, compiled from TCE measurements
from monitoring wells and piezometers at LLNL.

Well     Date      [TCE] (ppb)         Well     Date      [TCE] (ppb)
                   Fiber     GC                           Fiber     GC
MW352    2/13/90   44        58        MW357    2/13/90   78        84
P418     2/13/90   54        72        P419     2/13/90   61        66
MW271    3/7/90    86        160       MW364    3/7/90    59        74
MW217    3/5/90    106       86        MW458    3/6/90    33        20
MW365    3/6/90    27        22        MW142    3/6/90    94        140
measurements with precision gas mixtures prior to
sampling. The fiber sensor tracked the GC very well
through a wide range of concentrations. Figure 7 is a
particularly interesting result. Here both instruments
were compared in a nearly contamination free location.
It is clear that the GC was at its limit of detection,
whereas the fiber sensor readily made a successful
measurement. Estimates of TCE concentration for this
location were <10 ppb.
Continuous measuring sensor. The above described
sensor has demonstrated adequate sensitivity and
accuracy to represent a viable new environmental
monitoring technology. However, the current design,
Figure 7. Results of (above) GC (SRI Instruments 8610,
PID detector, 6' x 1/8" silica gel column), and
(below) fiber sensor measurement of extremely
low TCE levels in soil gas (estimated to be ~150
ppb v/v, i.e., 150 μmoles TCE per mole air).
Figure 8. On-demand measurement of 10 ppm TCE
(i.e.: headspace measurement over water containing 10
ppm TCE) with continuous sensor system
-------
which incorporates an irreversible chemical
reaction, requires the sensor to be refurbished
subsequent to each measurement. This liabil-
ity limits its application somewhat in envi-
ronmental monitoring.
The sensor shown in figure 2 represents the lowest
risk mitigation of this liability. Preliminary results with
prototypes of this sensor are very promising. Figure 8
shows typical on-demand measurements obtained with
this sensor in laboratory testing. We anticipate that this
sensor will become an integral component in a down-
well monitoring instrument currently being developed
at LLNL.
Acknowledgements
This work is supported by the DOE Office of
Technology Development (OTD) and performed
under the auspices of DOE contract W-7405-
Eng-48 and the Center for Process Analytical
Chemistry. The authors are indebted to Dr.
Lloyd Burgess and the Center for Process Ana-
lytical Chemistry, Univ of Washington for col-
laboration that led to the design and demon-
stration of the continuous sensor. The authors
also wish to thank Dr. F. Hoffman of LLNL for
many helpful discussions.
References
1. F.P. Milanovich, D.G. Garvis, S.M. Angel, S.K.
Klainer, and L. Eccles, Anal. Inst., 15,
137 (1986).
2. S.M. Angel, M.N. Ridley, K. Langry, T.J. Kulp
and M.L. Myrick, "New Developments and
Applications of Fiber-Optic Sensors," in
American Chemical Society Symposium
Series 403, R.W. Murray, R.E. Dessey, W.R.
Heineman, J. Janata and W.R. Seitz,
Eds. (American Chemical Society, Washington,
D.C., 1989) pp 345-363.
3. K. Fujiwara, Sitzungsber. Abh. Naturforsch.
Ges. Rostock, 6, 33 (1916).
4. S.M. Angel, P.F. Daley, K.C. Langry, R.
Albert, T.J. Kulp, and I. Camins, LLNL UCID
19774, "The Feasibility of Using Fiber Optics
for Monitoring Groundwater Contaminants
VI. Mechanistic Evaluation of the Fujiwara
Reaction for the Detection of Organic
Chlorides," June 1987.
5. R.J. Berman, G.C. Christian and L.W. Burgess,
Anal. Chem., 62, 2066 (1990).
47
-------
Chemical Sensors for Hazardous Waste Monitoring
M.B. Tabacco, Q. Zhou, and K. Rosenblum
GEO-CENTERS, INC.
7 Wells Avenue
Newton Centre, MA 02159
M.R. Shahriari
RUTGERS UNIVERSITY
Fiber Optic Materials Research Program
Piscataway, NJ
ABSTRACT:
A family of novel fiber optic
sensors is being developed for on-
line monitoring of chemical species
in gases and liquids. The sensors
utilize porous polymer or glass op-
tical fibers in which selective che-
mical reagents have been immobiliz-
ed. These reagents react with the
analyte of interest resulting in a
change in the optical properties of
the sensor (absorption, transmis-
sion, fluorescence). Using this ap-
proach, low parts per billion level
detection of the aromatic fuel va-
pors, benzene, toluene and xylene,
and hydrazines has been demonstrat-
ed, as have sensors for ethylene
vapor. Also relevant to groundwater
monitoring is the development of a
pH Optrode System for the pH range
4-8, with additional optrodes for
lower pH ranges.
INTRODUCTION
The functional operation of
optical fiber chemical sensors in-
volves the interaction of light
which propagates through the fiber,
with a reagent that in turn selec-
tively interacts with the environ-
ment to be sensed. Typical optical
properties including evanescent ab-
sorption and fluorescence, and che-
miluminescence can be exploited in
these sensors. The reagents are
normally immobilized into a membrane
or porous polymer matrix and then
coated either on the tip or side of
the fiber.
One of the problems encounter-
ed with fiber optic chemical sensors
based on evanescent absorption is
their characteristic low sensitivi-
ty. This results from the limited
depth of penetration of the evanes-
cent field of the light into the
reagent cladding as well as the ef-
fect of internal reflections [1-4].
Figure 1 illustrates the prin-
ciple of detection used in fiber
optic chemical sensors. In the fig-
ure, porous glass and porous polymer
approaches are compared to conven-
tional evanescent chemical sensors.
In the porous fiber, the analyte
penetrates into the pores and inter-
acts with the reagent which is pre-
viously cast (immobilized) into the
pores. The porous fiber has a large
interactive surface area (due to the
large surface area provided by the
pores), resulting in dramatically
enhanced sensitivity in the optrode.
Another advantage of a porous glass
fiber is the small sensing region
(about 0.5 cm in length and 250 mi-
crons in diameter). Additionally
the sensor is an integral part of
the fiber waveguide. This latter
feature minimizes the complications
associated with the physical and
optical coupling of the sensor probe
to data transmission fibers. In
addition, multiple fiber sensors can
be deployed using a single analyti-
cal interface unit. These sensors
are expected to be less expensive
than conventional fiber optic chemi-
cal sensors based on materials cost
and ease of fabrication. Porous
49
-------
fiber sensors for the measurement of
humidity, pH, ammonia, ethylene, CO,
hydrazines, and the aromatic fuel
constituents benzene, xylene and
toluene have been successfully dem-
onstrated by GEO-CENTERS, and by
Rutgers University [5-12].
Fabrication of Porous Glass Optical
Fiber
Porous glass optical fibers
are fabricated by the Fiber Optic
Materials Research Program at
Rutgers University, using the meth-
odology described below [5].
The material used in the fiber
is an alkali borosilicate glass with
the components SiO2, B2O3 and alkali
oxides. This type of glass is a
well characterized system, produc-
ible at a low cost. Most important-
ly it exhibits the phenomenon of
liquid/liquid immiscibility within a
certain temperature range. The
above composition is melted in an
electrical furnace at 1400°C and
cast into rods with a 20 mm diameter
and 0.5 m in length. The rods are
drawn into fibers at about 700°C by
a draw tower equipped with an elec-
trical furnace. Fibers with a 250-
300 micron diameter and a 5-10 cm
length are then heat treated in a
tube furnace at 600°C for about 3
hours. The heat treated glass be-
comes phase separated, with one
phase silica rich and the other bo-
ron rich. The boron rich phase is
leached out of the glass by placing
the fiber in a bath of hydrochloric
acid. The fibers are subsequently
washed with distilled water and
rinsed with alcohol. Figure 2 il-
lustrates the processing steps for
fabricating porous fibers.
Subsequent to fiber prepara-
tion, the porous segment is cast
with the sensing reagent (indicat-
or). This is done by dissolving the
reagent in a solvent at a predeter-
mined concentration and soaking the
porous fiber in the solution. The
reagent is then dried into the pores
by air drying or in a low tempera-
ture oven. Alternatively, the glass
surface can be treated with a silan-
izing reagent to facilitate chemical
coupling to the sensing reagent.
Fabrication of Porous Polymer
Optical Fiber
As an alternative to chemical
immobilization or physical adsorp-
tion in porous glass, porous polymer
optical fibers can also be used to
create fiber optic chemical sensors.
Sensors using these fibers have been
demonstrated for ethylene, CO, NH3,
pH, and humidity detection. The
principle of porous polymer fiber
sensors has the same basis as porous
glass sensors. Consequently high
sensitivity is achieved. In this
approach the indicator is dissolved
directly into the monomer solution
before forming the polymer fiber;
therefore, the indicator is strongly
bonded to the polymer network. In
fact, the porous polymer approach
provides the advantage of both chem-
ical bonding and physical entrapping
of the indicator. Also, the pore
size and the amount of indicator can
be precisely controlled by changing
the composition of the monomer solu-
tion, resulting in very good sensor-
to-sensor reproducibility. This
fabrication process is additionally
quite suitable for mass production.
This reduces the cost of optrodes.
The porous polymer fibers are
prepared by a heterogeneous copoly-
merization technique. The basic
principle behind this technique is
the polymerization of a mixture of
monomers which can be crosslinked in
the presence of an inert and soluble
component solvent. Subsequent to
polymerization, the inert solvent
which is not chemically bound to a
polymer network, is easily removed
from the polymer leaving an inter-
connected porous structure.
Monomer starting solutions are
prepared which contain the cross-
linker, initiator, inert solvent and
chemical indicator. The mixture,
including the indicator, is injected
into a length of glass capillary,
(typically 500 microns in diameter).
The filled glass capillaries are
sealed such that they are virtually
free of air, and polymerization is
initiated and completed in a low
temperature oven. After polymeriza-
tion, the uniform and transparent
polymer fibers are pulled out of the
capillaries. Finally, the fibers
50
-------
are washed in an organic solution to
remove any remaining inert solvent.
A combination of parameters
determines the final physical prop-
erties of the cross-linked polymer
network. These include the solvent
properties, amount and type of inert
solvent, as well as the quantity of
cross-linking agent employed.
Results and Discussion
Porous glass and porous poly-
mer optrodes have been designed and
demonstrated for aromatic fuel va-
pors (benzene, toluene, xylene),
hypergol vapors (hydrazine and
UDMH), for NH3, CO and ethylene.
Similarly, optrodes have been demon-
strated for the chemical parameters
pH, humidity and moisture content.
A pH Optrode System is cur-
rently under development which is
applicable to a variety of field
screening and contamination monitor-
ing tasks. Porous glass pH optrodes
have been fabricated which are oper-
ational in the pH 4-8 range. A
unique co-immobilization technique
was developed to tailor the sensor
pH sensing range to a specific ap-
plication. Optrodes are fabricated
by first silanizing the porous fiber
surface to facilitate the attachment
of the sensitive indicator material.
Spectral transmission scans are con-
ducted in order to identify the
wavelength region of maximum sens-
itivity to pH. The sensor interro-
gation wavelength is selected based
on these spectral scans.
Optical intensity versus time
measurements as a function of pH,
have been made for each optrode at
the interrogation wavelength. The
sensitivity and linearity are deter-
mined by plotting optical intensity
at equilibrium, versus pH. Figure 3
shows the response of the optrode
with an immobilized indicator. The
sensor is operational between pH 4
and pH 6.5, with greatest sensitivi-
ty and linearity between pH 4.5 and
pH 6. Saturation of the sensor re-
sponse occurs at pH values above 7
and less than 4.
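As a sketch of the data reduction implied above (equilibrium intensity plotted against pH, with sensitivity and linearity taken from a straight-line fit over the working range), the snippet below runs a simple linear regression. The intensity values are invented placeholders, not the measured optrode response.

    import numpy as np

    # Hypothetical equilibrium intensities (arbitrary units) at the
    # interrogation wavelength; placeholders only.
    ph        = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5])
    intensity = np.array([92.0, 80.0, 66.0, 53.0, 41.0, 33.0])

    # Restrict the fit to the region reported to be most linear (pH 4.5-6).
    mask = (ph >= 4.5) & (ph <= 6.0)
    slope, intercept = np.polyfit(ph[mask], intensity[mask], 1)
    r = np.corrcoef(ph[mask], intensity[mask])[0, 1]
    print(f"sensitivity = {abs(slope):.1f} units per pH unit, r = {r:.3f}")

    def ph_from_intensity(i):
        # Invert the linear calibration to estimate pH from a new reading.
        return (i - intercept) / slope

    print(f"intensity 60 corresponds to pH {ph_from_intensity(60.0):.2f}")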
A second indicator, which is
structurally very similar to the
first indicator, has been tested
with the intent of increasing sensi-
tivity at higher pH values. The
response of this indicator is pre-
sented in Figure 4. The data indi-
cates good linearity and sensitivity
above pH 7.
A mixture of the two indica-
tors was immobilized in a porous
glass fiber. The results with this
sensor are shown in Figure 5. The
data indicates both excellent sensi-
tivity and linearity across a pH
range extending from 4 to 8. The
co-immobilization of these two indi-
cators represents a unique approach
to sensor design and demonstrates
that sensing range can be tailored
to meet specific requirements.
The reversibility of these
sensors has been evaluated. This is
accomplished by cycling a test solu-
tion, into which the pH optrodes
have been immersed, between pH val-
ues of 4.5 and 7.
Figure 6 depicts the variation in
optical transmission of the pH opt-
rode as a function of time. The
data indicate that the sensor is
fully reversible and peak to peak
reproducibility is better than 90%.
The spikes in the response curves
are artifacts associated with the
test setup. Similar results have
been obtained using porous polymer
optical fiber.
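One way the peak-to-peak reproducibility quoted above could be computed is sketched below; the synthetic transmission readings and the simple min/max metric are assumptions for illustration, not the authors' test data.

    import numpy as np

    # Synthetic optrode transmission readings for repeated cycling of the
    # test solution between pH 4.5 and pH 7 (placeholder values).
    high_peaks = np.array([0.81, 0.80, 0.79, 0.80])   # readings at pH 4.5
    low_peaks  = np.array([0.42, 0.43, 0.42, 0.44])   # readings at pH 7

    swings = high_peaks - low_peaks                # peak-to-peak excursion per cycle
    reproducibility = swings.min() / swings.max()  # 1.0 = perfectly repeatable
    print(f"cycle-to-cycle reproducibility = {100 * reproducibility:.1f}%")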
Fuel Vapor Optrodes
GEO-CENTERS, INC. has design-
ed, fabricated and evaluated porous
fiber optrodes for detection of aro-
matic fuel constituent vapors. A
xylene optrode with sensitivity <50
ppb has been demonstrated. Response
time, reproducibility, linearity,
and selectivity have been determin-
ed. Benzene and toluene optrodes
have also been demonstrated. Labo-
ratory results indicate that these
are highly sensitive optrodes, with
near real time response. They are
additionally capable of selective
detection of target species.
51
-------
With these optrodes (as well
as the hypergol, ethylene, and CO
optrodes) the rate of change of the
optical transmission is directly
proportional to analyte concentra-
tion. An example of xylene optrode
response to different xylene concen-
trations is presented in Figure 7.
Each curve corresponds to a differ-
ent xylene concentration. A plot of
the slopes of the data in Figure 7
versus xylene concentration is shown
in Figure 8. This data demonstrates
good sensor linearity from low part
per billion to low part per million
concentrations.
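Because the rate of change of transmission, rather than its absolute level, carries the concentration information, data reduction amounts to fitting an initial slope and reading it against a slope-versus-concentration calibration such as Figure 8. The sketch below is a minimal illustration with invented calibration numbers.

    import numpy as np

    def initial_slope(t_min, intensity):
        # Least-squares slope of optical intensity versus time (units/min).
        return np.polyfit(t_min, intensity, 1)[0]

    # Hypothetical slope-versus-concentration calibration (cf. Figure 8);
    # concentrations in ppm, slopes in arbitrary intensity units per minute.
    cal_conc  = np.array([0.04, 0.2, 0.5, 1.0, 2.0])
    cal_slope = np.array([0.8, 3.9, 9.7, 19.5, 39.0])
    k = np.polyfit(cal_conc, cal_slope, 1)[0]     # assume slope = k * concentration

    # A new measurement: intensity recorded once per minute for four minutes.
    t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    i = np.array([100.0, 96.2, 92.3, 88.5, 84.6])
    s = abs(initial_slope(t, i))
    print(f"measured slope = {s:.2f} units/min, xylene = {s / k:.2f} ppm")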
Hypergolic fuel optrodes have
been developed to detect vapors for
NASA and U.S. Air Force operational
applications.
The principle of operation and sen-
sor response is similar to that of
the xylene optrodes. The hypergolic
fuel optrodes can be configured as
personal dosimeters for industrial
hygiene applications or as portable
detection instruments. Figure 9
shows a typical optrode response as
a function of time for different
concentrations of hydrazine. The
slope of the optical intensity ver-
sus time curve may be correlated to
the hydrazine vapor concentration.
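If the regression indicated in Figure 9 (Y = 2.55 + 0.113X with R = 0.99, where Y is the average slope and X the hydrazine concentration in ppb) is taken at face value, converting a measured slope to a concentration is a one-line inversion. The coefficient values and the example slope below are read from the figure and assumed only for illustration.

    def hydrazine_ppb(avg_slope, intercept=2.55, gain=0.113):
        # Invert Y = intercept + gain * X to estimate hydrazine (ppb) from a
        # measured average slope; nominally valid near 32% RH and 24 C.
        return (avg_slope - intercept) / gain

    # Example: a hypothetical average slope of 9.3 would correspond to
    print(f"{hydrazine_ppb(9.3):.0f} ppb hydrazine")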
Conclusions
Sensors utilizing optical
waveguides offer many advantages for
hazardous waste monitoring applica-
tions including size, near real time
response, and low manning and exper-
tise requirements. Additionally,
porous glass and polymer optical
fibers offer significant advantages
in these applications because their
large interactive surface area dra-
matically improves sensitivity.
They also provide a continuous opti-
cal path. This minimizes mechanical
and optical coupling losses. Addi-
tionally, sensor interfaces can be
developed that allow multi-sensor
operation. These chemical optrodes
can be applied in a variety of envi-
ronmental monitoring scenarios, as
well as to developmental bioreac-
tors, control of process streams,
and industrial hygiene. A family of
fiber optic optrodes offers the pos-
sibility of effectively having a wet
chemistry laboratory that can be
brought to the field.
References
1. J.F. Giuliani, H. Wohltjen,
and N.L. Jarvis, Opt. Lett. 8,
54 (1983).
2. A.P. Russell and K.S. Fletch-
er, Anal. Chim. Acta, 170,
209 (1985).
3. D. S. Ballantine and H.
Wohltjen, Anal. Chem. 58, 883
(1986).
4. C. Zhu and G. M. Hieftje,
Abstract 606, paper presented
at the Pittsburgh Conference
and Exposition on Analytical
Chemistry and Applied Spectro-
scopy, Atlantic City, N.J.,
(1987).
5. M.R. Shahriari, Q. Zhou, G.H.
Sigel, Jr., and G.H. Stokes,
First International Symposium
on Field Screening Methods for
Hazardous Waste Site Investi-
gations, Las Vegas, NV (1988).
6. M.R. Shahriari, G.H. Sigel,
Jr., and Q. Zhou, Proc. of
Fifth International Conference
on Optical Fiber Sensors, Vol.
2 Part 2, 373, (January 1988).
7. M.R. Shahriari, Q. Zhou, and
G.H. Sigel, Jr. Opt. Lett. 13,
407 (1988).
8. M.R. Shahriari, Q. Zhou and
G.H. Sigel, Jr., "Detection of
CO Based on Porous Polymer
Optical Fibers", Chemical,
Biochemical and Environmental
Fiber Sensors, V. 1172, SPIE
Sept. 6-7, 1989.
9. M.B. Tabacco and K. Rosenblum,
"Aromatic Hydrocarbon Optrodes
for Groundwater Monitoring
Applications", GEO-CENTERS,
INC. Technical Report GC-TR-
89-1912. April 1989.
52
-------
10. M.B. Tabacco, K. Rosenblum,
and Q. Zhou, "Optrode Develop-
ment for Environmental pH Mon-
itoring", GEO-CENTERS, INC.
Technical Report GC-TR-89-
1989, August 1989.
11. M.B. Tabacco, K. Rosenblum,
and Q. Zhou, "Personal Hydra-
zine Vapor Dosimeter", GEO-
CENTERS, INC. Technical Report
GC-TR-90-2071, February 1990.
12. M.B. Tabacco, Q. Zhou, and K.
Rosenblum, "Development of
Trace Contaminant Vapor Moni-
tors", GEO-CENTERS, INC. Tech-
nical Report GC-TR-90-2138,
August 1990.
53
-------
Figure 1. Schematic Diagram Comparing Basic Sensor Designs: (a) Evanescent (Internal Reflection), RFS;
(b) Evanescent (Internal Reflection), Side Coated FOCS; (c) Porous Fiber (In-Line Absorption or Luminescence)
Figure 3. Sensor Response With Bromocresol Green Indicator As A Function Of pH
Figure 2. Processing Steps For Producing Porous Glass Fibers: Composition Design (Na2O-B2O3-SiO2),
Melting And Casting, Fiber Drawing, Heat Treatment, Leaching, Surface Treatment
Figure 4. Sensor Response With Bromocresol Purple Indicator As A Function Of pH
-------
Figure 5. Sensor Response With Co-immobilized Indicators As A Function Of pH
Figure 6. Optrode response time as a function of pH
Figure 7. Response Curves for Porous Glass Xylene Sensor. Xylene Concentrations Range from 2 ppm to ~40 ppb
Figure 8. Calibration Curve for Xylene Optrode Based on Porous Glass Fiber
55
-------
Figure 9. Optrode Response to Hydrazine Vapor at 32% Relative Humidity and 24°C.
Average slope versus hydrazine concentration (ppb); Y = 2.55 + 0.113X, R = 0.99
-------
Rapid, Subsurface, In Situ Field Screening
of Petroleum Hydrocarbon Contamination
Using Laser Induced Fluorescence Over Optical Fibers
S.H. Lieberman and G.A. Theriault
Naval Ocean Systems Center
Code 522
San Diego, CA 92152
(619) 553-2778
S.S. Cooper, P.G. Malone and
R.S. Olsen
U.S. Army Waterways Experiment Station
Vicksburg, MS 39180
(601) 634-2477
P.W. Lurk
U.S. Army Toxic and Hazardous Materials
Agency
Aberdeen Proving Ground, MD 21010
(301) 671-2054
ABSTRACT
A new field screening method is described that couples a fiber
optic-based chemical sensor system to a truck mounted cone
penetrometer. The system provides the capability for real-time,
simultaneous measurement of chemical contaminants and soil
type to depths of 50 meters. Standard sampling rates yield a
vertical spatial resolution of approximately 2-cm as the
penetrometer probe is pushed into the ground at a rate of 1-m
min⁻¹.
The system employs a hydraulic ram mounted in a truck with a
20-ton reaction mass to push 1 meter long, threaded, steel pipes
into the ground. The first section of pipe is terminated in a
60-degree cone and includes strain gauges for measurement of
tip resistance and sleeve friction. A sapphire window mounted
in the side of the pipe, approximately 60-cm above the probe
tip, provides a view port for a fiber optic-based fluorometer
system. The soil sample is excited through the sapphire window
by light transmitted down the probe over a 500 micron diameter,
60 meter long fiber coupled to a pulsed nitrogen laser located
at the surface. Fluorescence generated in the soil sample is
carried back to the surface by a second fiber where it is
dispersed using a spectrograph and quantified with a time-
gated, one-dimensional photodiode array. Readout of a fluores-
cence emission spectrum requires approximately 16
milliseconds. A micro-computer based data acquisition and
processing system controls the fluorometer system, acquires
and stores sensor data once a second, and plots the data in
real-time as vertical profiles on a CRT display.
Results are presented from the first field tests of the system at a
POL (Petroleum-Oil-Lubricant) contaminated hazardous waste
site. Initial results from a series of more than thirty pushes
indicate that the system is useful for rapid characterization, in
three-dimensions, of the boundaries of a POL contaminant
plume at concentrations equivalent to sub-parts-per-thousand
of diesel fuel marine. Vertical fluorescence profiles show sig-
nificant small scale vertical structure on spatial scales of a few
cm. This vertical micro-structure appears to correlate with soil
characteristics estimated from point resistance and sleeve fric-
tion. Field and laboratory calibration of the fiber optic sensor
system using different fuel products is presented and discussed.
Sensor performance is characterized as a function of soil mois-
ture content.
Introduction
Defining the location and extent of subsurface chemical con-
tamination is a difficult task. Detailed site investigations require
installation of many monitoring wells and subsequent analysis
of discrete soil and groundwater samples. Effective site char-
acterization is often limited by the ability to select optimal
locations for monitoring wells. Furthermore, the ability to
resolve horizontal and vertical features in the distribution of
chemical contaminants is a function of limitations imposed by
the spacing between wells and the vertical spacing between
samples.
At present, locations for monitoring wells are usually based on
information gleaned from site historical data, ground water
hydrology, and/or indirect chemical screening such as soil gas
measurements. Because of uncertainties in the information
available, well placement is at best an inexact science. Histori-
cal data is often incomplete or inaccurate. Knowledge of
groundwater hydrology at the site may not provide the level of
detail required to understand site characteristics. Interpreta-
tions of soil gas measurements may be complicated by erratic
movement of vapor in the soil due to impervious layers and
changes in atmospheric temperature and pressure. Conse-
quently, many wells are not properly positioned and, therefore,
yield information of marginal utility.
Accurate delineation of the boundaries of contaminant plumes
and defining small scale vertical structure in the distribution of
contamination has important implications with respect to site
remediation. The more precisely the area of contamination is
defined, the less likely it is that "clean" material will be unneces-
sarily removed or subjected to costly remediation procedures.
Improved techniques for in situ, subsurface, field screening
would have several benefits. Knowledge of the distribution of
chemical contamination in soils and groundwater could be used
to more effectively guide the placement of monitoring wells and
thereby, greatly reduce the number of wells required. Field
screening methods that provide real-time chemical information
at closely spaced intervals could be used to rapidly delineate
small scale horizontal and vertical structure in contaminant
plumes. In addition to increasing the effectiveness of site char-
acterization there should also be a significant cost savings
57
-------
Figure 1. Photograph of penetrometer truck developed for use with the fiber optic fluorometer system. The data acquisition system
and fluorometer system are located in the rear compartment. The hydraulic system used to push the penetrometer probe into the soil is
in the forward compartment.
associated with the reduced requirement for monitoring wells
and associated analytics.
Towards this goal of improving capabilities for rapid site char-
acterization, we have equipped a truck-mounted cone
penetrometer system (Fig. 1) with a fiber optic based, laser-in-
duced fluorometer system. Cone penetrometers have been
widely used for determining soil strength and soil type from
measurements of tip resistance and sleeve friction on an instru-
mented probe (1). The probe is normally pushed into the
ground at a rate of approximately 2-cm sec⁻¹ using hydraulic
rams working against the reaction mass of the truck. For a 20
ton vehicle, the standard (35-mm diameter) penetrometer rod
can be pushed to a depth of approximately 50-m in normally
compacted soils. In order to extend the measurement
capabilities of the penetrometer system to chemical con-
taminants of environmental concern, it is possible to use the
penetrometer system as a platform for insertion of other sensors
into the soil. To date, use of penetrometers for direct sensing
of chemical constituents in soils has been limited to resistivity
measurements (2) and sensors for measuring radioactivity (3).
This report describes the development of an optical based
sensor for direct in situ screening of chemical contaminants.
The system employs optical fibers to make remote laser-in-
duced fluorescence measurements through a window in the
probe tip. The system can be used to characterize contaminant
plumes that contain compounds that fluoresce when exposed
to ultra-violet light. In its present configuration, which uses a
nitrogen (N2) laser (337 nm) excitation source, the system is
selective for polycyclic aromatic hydrocarbon compounds
which are components of POL products. Coupling the optical
fiber sensor with the cone penetrometer provides a capability
for direct, real-time sensing of petroleum hydrocarbon com-
pounds in soils that has not previously existed.
System Description
A schematic diagram of the fiber optic fluorometer system is
shown in Fig. 2. The system was adapted from a design original-
ly developed for in situ fluorescence measurements in seawater
(4-5). The penetrometer system uses two silica clad silica
UV/visible transmitting optical fibers. One fiber is used to carry
excitation radiation down through the center of penetrometer
pipe and a second fiber collects the fluorescence generated in
the soil sample and carries it back to the detector system.
Excitation and emission fibers are isolated from the sample at
the probe tip by a 6.35-mm diameter sapphire window mounted
flush with the outside of the probe approximately 60-cm from
the tip. Although different fibers from several sources have been
employed, the fibers used in studies reported here were 500-μm
in diameter and 60-m in length, unless otherwise noted. At-
tenuation was specified by the supplier to be about 100 dB/km
at 337 nm (this corresponds to 25% transmission at 337 nm for
a 60 m fiber).
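The 25% figure follows directly from the attenuation specification; a quick arithmetic check (a sketch, not part of the original analysis) is:

    # 100 dB/km attenuation over a 60 m (0.060 km) fiber
    loss_db = 100.0 * 0.060                  # 6 dB total loss
    transmission = 10 ** (-loss_db / 10.0)   # fraction of light transmitted
    print(f"{transmission:.1%}")             # about 25%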
Excitation radiation is provided by a pulsed N2 laser (Model
PL2300, Photon Technology, Inc) that operates at 337 nm with
a pulse width of 0.8 nsec and a pulse energy of 1.4 mJ. The
beam is coupled into the excitation fiber using a 2.5-cm quartz
lens. Because of asymmetry in the beam dimensions, 6-mm x
9-mm at the laser aperture, coupling losses into the fiber are
somewhat greater than what would be expected for a conven-
tional Gaussian resonator type laser. No attempt has been
made to reshape the beam to improve coupling. Instead, we
take advantage of the non-symmetrical beam shape by using a
separate length of optical fiber to intercept a portion of the laser
beam that would not normally be coupled into the excitation
fiber. This auxiliary fiber is coupled to a photodiode that is used
to provide an optical trigger for time gating the detector. Opti-
cal triggering of the detector eliminates problems associated
with laser jitter that are experienced with electronic triggering of
58
-------
Figure 2. Schematic of laser induced fiber optic fluorometer system.
the detector.
A photodiode array detector system is used to quantify the
fluorescence emission spectrum brought back to the surface
over the second 60-m fiber. The detector system consists of a
Model 1420 Intensified Photodiode Array Detector (EG&G
PARC) coupled to a quarter-meter spectrograph which houses
a 300 line/mm diffraction grating. The 1024 element array con-
sists of 25 micron wide diodes centered at 25 micron incre-
ments. For the 300 line/mm grating the dispersion of the
spectrograph translates to a spectral resolution of 0.45 nm per
pixel at the array surface when a 25 micron input slit is used.
The resolution may be increased to 0.075 nm per pixel by using
an 1800 line/mm grating. Readout of an emission spectrum
requires approximately 16 msec. Because the detector can be
read out quickly, it is possible to add spectra from multiple laser
shots in order to improve the signal to noise ratio of the meas-
urement. Typically, 10 laser shots are used per sample interval.
Control and readout of the detector is performed by a Model
1460 optical multichannel analyzer (OMA) (EG&G PARC).
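The per-pixel dispersion bookkeeping and the multi-shot accumulation described above can be captured in a few lines. The synthetic spectrum, noise level, and array names below are illustrative assumptions, not the instrument's actual readout.

    import numpy as np

    n_pixels = 1024
    nm_per_pixel = 0.45            # 300 line/mm grating; 0.075 with 1800 line/mm
    print(f"spectral coverage across the array = {n_pixels * nm_per_pixel:.0f} nm")

    # Accumulate spectra from several laser shots to improve signal to noise;
    # for random noise the SNR grows roughly as the square root of the shots.
    rng = np.random.default_rng(0)
    true_spectrum = np.exp(-0.5 * ((np.arange(n_pixels) - 400) / 60.0) ** 2)

    def acquire_shot():
        # One gated readout: the true spectrum plus simulated random noise.
        return true_spectrum + rng.normal(scale=0.2, size=n_pixels)

    n_shots = 10                   # typical number per sample interval
    mean_spectrum = sum(acquire_shot() for _ in range(n_shots)) / n_shots
    print(f"expected noise reduction factor = {np.sqrt(n_shots):.1f}")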
Measurements are initiated by an electronic signal from the OMA
that fires the laser. The laser pulse then triggers an optical
trigger (Model 1303, EG&G PARC) which sends an electronic
signal to a fast pulser (Model 1302, EG&G PARC). The fast
pulser implements an appropriate delay and gates the detector
"on" for a period of 20 nanoseconds. Fast-gating of the detector
activates it only during the time period when the fluorescence
signal is present, thereby minimizing any contribution to the
signal from background light and detector noise.
Incrementing the delay of the detector gate for successive laser
pulses also permits determination of fluorescence decay times.
Other studies have shown that differences in fluorescence
decay times are useful for discriminating compounds of environ-
mental interest (e.g., polycyclic aromatic hydrocarbons) that
cannot be resolved based on differences in their fluorescence
emission spectra (5). At present, fluorescence lifetime meas-
urements are not performed routinely with the penetrometer
system because additional measurement and processing time
would be required. In the future, however, fluorescence decay
measurements could easily be implemented via software con-
trol to take advantage of "dead time" that is currently not utilized
when the push is halted every meter in order to install the next
section of pipe.
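A decay-time measurement of the kind described, incrementing the detector gate delay on successive laser pulses and fitting the gated intensities, could be reduced as in the sketch below. The synthetic data and the single-exponential model are assumptions for illustration.

    import numpy as np

    # Gated intensities at increasing detector delays (ns); synthetic data
    # generated from an assumed 45 ns single-exponential decay plus noise.
    delay_ns = np.arange(0.0, 200.0, 20.0)
    rng = np.random.default_rng(1)
    intensity = 1000.0 * np.exp(-delay_ns / 45.0) \
                * (1.0 + rng.normal(scale=0.02, size=delay_ns.size))

    # Fit ln(I) versus delay; the slope of the line is -1/tau.
    slope, _ = np.polyfit(delay_ns, np.log(intensity), 1)
    print(f"fitted fluorescence decay time = {-1.0 / slope:.1f} ns")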
An Intel 386 based microprocessor host computer is used to
automate the overall measurement process. The host computer
controls the OMA system and stores fluorescence emission data
received from the OMA and data from strain gauges on the
probe tip. A representative fluorescence spectrum obtained
59
-------
Figure 3. Fluorescence emission spectrum measured for
contaminated soil using fiber optic fluorometer system
(intensity versus wavelength, 300-700 nm).
from contaminated soil at the first test site is shown in Figure 3.
The host computer is also used to generate real-time depth plots
on a CRT of the chemical fluorescence measurements and soil
characteristics as interpreted from the strain gauge data. Under
normal operating conditions, fluorescence measurements are
made at a rate of approximately once a second. For the stand-
ard push rate of 2-cm sec⁻¹ this corresponds to a vertical spatial
resolution between measurements of 2-cm. Because each
fluorescence measurement consists of intensities measured at
1024 wavelength points, a push to a depth of only 10 meters will
generate more than 500,000 data points. In order to simplify
data presentation a window (approximately 50 nm wide) is set
in the spectral region anticipated to contain the maximum
fluorescence intensity. The average fluorescence intensity in
the spectral window is then plotted as a function of depth, in
real-time, as the probe is pushed into the soil. A typical data plot
is shown in Figure 4. The entire fluorescence emission
spectrum is stored on a fixed disk to facilitate post-processing
of the data.
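The windowing step described above, averaging intensity over a band roughly 50 nm wide and plotting the result against depth, reduces each 1024-point spectrum to a single number. A minimal sketch, with assumed array names, window limits, and random placeholder spectra, follows.

    import numpy as np

    # Assume `spectra` is an (n_samples, 1024) array of gated fluorescence
    # spectra recorded once per second, and `wavelength` the matching nm axis.
    n_samples = 500
    wavelength = np.linspace(300.0, 760.0, 1024)         # nm, illustrative axis
    spectra = np.random.default_rng(2).random((n_samples, 1024))

    # Depth axis: 2 cm per sample at the 2 cm/sec push rate and 1 Hz sampling.
    depth_m = np.arange(n_samples) * 0.02

    # Average intensity in a ~50 nm window centered near the emission maximum.
    center_nm, width_nm = 450.0, 50.0                    # assumed window
    in_window = np.abs(wavelength - center_nm) <= width_nm / 2.0
    profile = spectra[:, in_window].mean(axis=1)

    # `profile` versus `depth_m` is what is plotted in real time (cf. Figure 4).
    print(profile.shape, depth_m.shape)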
Characterization and Calibration of Sensor Response
Initially, there were several practical concerns about the viability
of using an optical fiber system to make in situ measurements
in soil in conjunction with the cone penetrometer. Issues of
concern included: (1) Would the sapphire viewing window retain
contaminant after exposure and thereby exhibit a memory ef-
fect? (2) Could the optical fiber withstand the necessary han-
dling required to thread it through the penetrometer pipe during
insertion and removal? (3) Would the constant flexing of the fiber
during measurement significantly alter the attenuation charac-
teristics of the fiber and thus, invalidate quantitative measure-
ments? Experience gained to date, suggests that none of these
issues appears to be a problem. Inspection of data in Fig. 4
shows that when the probe was pushed through layers of soil
containing relatively high concentrations of contaminant,
fluorescence intensities rapidly approached background levels
as soon as the probe moved out of the contaminant zone. This
suggests that the high pressures acting on the window as the
probe is forced through the soil are effective in removing any
residual contamination that might be adsorbed on the window.
Field experience to date demonstrates that the fibers can
withstand the normal handling required for operations with the
penetrometer. No fiber failures have occurred during the more
than 80 cone penetrometer tests (CPTs) that have been made
so far. Finally, measurements in the field showed that there was
no measurable difference in the amount of laser energy trans-
mitted through the 60-m excitation fiber depending on whether
the fiber was laid out on the ground with no bends or threaded
through 50 meters of penetrometer rod with a 180 degree bend
approximately every meter (as was normally the case). It ap-
pears that as long as the minimum fiber bend radius, for which
total internal reflection is maintained for all modes, is not ex-
ceeded there is no significant variation in throughput loss.
Response of the fiber optic fluorescence sensor has been
calibrated both in the laboratory and in the field using different
fuel products added to soils. We have elected to use fuel
Figure 4. Example of real-time display showing vertical profiles of soil characteristics and chemical fluorescence measurements
(panels: sleeve friction (tons/sq ft), cone resistance (tons/sq ft), soil class, and fluorescence (relative) versus depth).
60
-------
Figure 5. Laboratory calibration curves for DFM in soil as
a function of soil moisture content.
Figure 6. Field calibration of penetrometer fiber optic sen-
sor using diesel fuel marine in sand. Inset shows response is
linear below 10 ppt.
products rather than pure compounds because fuel products
contain a representative mixture of the compounds that may
fluoresce in environmental samples. Obviously, there is no way
to be sure that the distribution of compounds that respond to
our measurement system in the field is an exact match to the
product we select for calibration. In fact, in many cases there
will undoubtedly be a mismatch between the distribution of
compounds in the product used for standardization and the
mixture of compounds present at environmental "dump" sites.
These sites may contain a potpourri of products that have had
time to undergo degradation and loss of more volatile com-
ponents. However, at sites such as tank farms that contain
recent or ongoing fuel leaks, it may be possible to get a good
match between the product used to calibrate the sensor and the
product in the ground. Therefore, it should be stressed that the
utility of the system, in its present form, is for rapid delineation
of hydrocarbon contaminant plumes in order to guide the place-
ment of monitoring wells. With these qualifications with respect
to calibration in mind, data is presented which shows that the
fluorescence sensor appears to be at least a semi-quantitative
sensor for in situ screening of petroleum hydrocarbons.
Laboratory results (Fig. 5) show that measured fluorescence
intensities increased linearly as a function of diesel fuel marine
(DFM) added to uncleaned beach sand. Added quantities of
DFM ranged from 500 to 20,000 parts-per-million (ppm) for this
experiment. Standards were generated by adding known
quantities of fuel product to weighed samples of "clean" soil and
tumbling the mixture overnight in tightly sealed glass containers.
Figure 5 also shows that the measured response did not change
significantly when the water content of the soil was varied from
0 to 10%. Other calibrations using jet fuel (JP-5) in sand also
showed that the fluorescence response did not change when
the water content of the soil sample was varied from 0 to 25%.
This suggests that the response of the fluorescence sensor
should be relatively insensitive to changes in soil moisture
content as the probe moves through the vadose zone into the
saturated zone.
The penetrometer fluorescence sensor was also calibrated in
the field by placing a cylinder over the sapphire window and
filling it with "clean" beach sand (Fisher Scientific) containing
added quantities of DFM. Results (Fig. 6, inset) show linear
response (r² = 0.99) for concentrations in the range of 1000 -
10000 ppm. This is similar to laboratory results discussed
above. Figure 6 shows that for higher concentrations, fluores-
cence intensities appear to approach a saturation value at about
10% DFM in sand (weight/weight). This appears to set an upper
limit on the concentrations that can be quantified with this
system. We believe this saturation effect arises because the
fluorescence response of the sample is to a large extent a
surface phenomenon. At high concentrations of fluorophore, the
surface of the soil particles becomes saturated with product and
therefore, the fluorescence approaches a limiting value. The
lower limit of detection for the system configuration described
in this report is approximately 100 ppm (two times noise) using
10 laser shots. Detection limits can be improved, at the expense
of analysis time, by increasing the number of laser shots that are
stacked for each sample interval. Efforts are currently in
progress to determine the effect of soil type on fluorescence
response and to evaluate the "depth of view" of the fluorescence
measurement (i.e., how far into the sediment adjacent to the
sapphire window does the measurement penetrate).
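The quoted detection limit, roughly 100 ppm at two times the noise with 10 stacked shots, can be expressed as a short calculation. The blank-noise value and calibration slope below are placeholders chosen only to reproduce that order of magnitude.

    import numpy as np

    def detection_limit_ppm(blank_noise, cal_slope, n_shots, k=2.0):
        # Concentration at which signal = k * noise, assuming the noise of the
        # stacked spectrum falls as sqrt(n_shots).  Illustrative numbers only.
        return k * (blank_noise / np.sqrt(n_shots)) / cal_slope

    blank_noise = 15.0   # counts, single-shot standard deviation of a blank
    cal_slope = 0.095    # counts per ppm DFM, from a linear calibration

    for shots in (10, 40, 160):
        print(f"{shots:4d} shots: LOD = {detection_limit_ppm(blank_noise, cal_slope, shots):.0f} ppm")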
Results of initial field tests
Initial field tests of the fiber optic fluorometer equipped
penetrometer were conducted at a hazardous waste site in the
southeastern United States. The site, which dates back to the
1940's, had been used for several decades as a disposal area
for mixed petroleum wastes. In the mid-1980's a ditch was dug
around the site and a recovery system installed. A map of the
site showing locations of the CPTs is given in Figure 7. Figure
8 shows representative results from a transect paralleling the
recovery ditch (CPTs 30-37). The depth of sampling in this study
was limited to 30 ft by a hard limestone layer. Inspection of the
fluorescence profiles indicates that hydrocarbon related fluores-
cence was detected at locations 30, 32, 33, 35 and 36 but not at
location 34 or 37. These results illustrate how it is possible to
rapidly delineate the horizontal extent of the contaminated area
by making a series of CPTs at the site. Each CPT required
61
-------
Figure 7. Map showing locations of cone penetrometer
tests (CPTs) at test site.
approximately 20 minutes to complete. Detailed inspection of
the vertical structure in fluorescence profiles at the locations with
the highest fluorescence intensities (CPTs 30, 32 and 36) shows
marked similarities. Highest intensities were observed at a
depth of approximately 15 feet with a secondary maximum at
about 10 feet and background levels at the surface and at the
bottom of each profile. Similarity in the vertical structure ex-
hibited by the fluorescence profiles at the three locations and
the covariance with measured soil characteristics supports the
hydrogeological consistency of the data. The observation that
CPTs 34 and 37 showed no measurable fluorescence suggests
that at this site naturally occurring organic material did not
contribute to measured fluorescence signals. In order to
facilitate interpretation, fluorescence and soil property data from
individual CPTs can be combined with position information and
transformed (Dynamic Graphics, Inc) into a 3-dimensional
gridded file for visualization on a minicomputer system. Figure
9 shows an example of a 3-dimensional representation of the
fluorescence data from the CPTs at the sites indicated on the
map in Figure 7. For this example, fluorescence intensities have
been converted into diesel fuel equivalents using the linear
portion of the calibration curve presented in Figure 6.
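The conversion to diesel fuel equivalents and the assembly of a gridded volume could look roughly like the sketch below. The calibration coefficients, the use of a general-purpose scipy interpolator, and all array names are assumptions; the authors used a commercial package (Dynamic Graphics, Inc.) for the gridding.

    import numpy as np
    from scipy.interpolate import griddata

    def dfm_equivalent_ppm(intensity, slope=9.5, intercept=0.0):
        # Linear portion of an assumed fluorescence calibration (cf. Figure 6)
        # mapping relative intensity to diesel-fuel-marine concentration (ppm).
        return np.clip((intensity - intercept) / slope, 0.0, None)

    # Assume each CPT push contributes (x, y, depth) positions with a
    # fluorescence intensity at each point; random placeholders here.
    rng = np.random.default_rng(3)
    pts = rng.random((3000, 3)) * np.array([100.0, 100.0, 10.0])   # meters
    vals = dfm_equivalent_ppm(rng.random(3000) * 2000.0)           # ppm DFM

    # Interpolate the scattered measurements onto a regular 3-D grid for
    # visualization; points outside the sampled volume come back as NaN.
    xi, yi, zi = np.mgrid[0:100:25j, 0:100:25j, 0:10:25j]
    volume = griddata(pts, vals, (xi, yi, zi), method="linear")
    print(volume.shape)    # gridded DFM-equivalent concentrations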
Conclusions and future efforts
Efforts to date suggest that use of a fiber optic based fluorometer
system in conjunction with a cone penetrometer may be useful
for rapid delineation of subsurface petroleum hydrocarbon con-
tamination at hazardous waste sites. Laboratory and field
calibration of the fluorometer system using fuel products (diesel
fuel marine and JP-5) indicates that the fluorometer system is
quantitative for direct determination of these products in soil
Figure 8. Test data showing the use of the fiber optic fluorescence sensor for locating the boundaries of a hydrocarbon plume.
62
-------
Figure 9. Example of 3-dimensional visualization of soil contamination based on CPT data. The volume shown represents areas that
had fluorescence intensities equivalent to 1000 ppm or more diesel fuel marine. The lines on the upper surface represent cultural fea-
tures (ditches and roads) present at the site.
(sands) for concentrations in the range of 100 ppm to 10000
ppm. At present, the greatest utility of the system is for rapid
screening for POL contamination in order to more precisely
locate contaminated zones, and thus significantly reduce the
number of monitoring wells required for site characterization.
The accuracy of converting measured fluorescence intensities
to concentration units will depend on how closely the product
used for sensor calibration emulates the product in the soil.
Experience in the field indicates that the optical fiber system is
rugged enough to withstand normal deployment procedures
with the penetrometer system and that the sapphire viewing
window appears to be self-cleaning, thereby avoiding memory
effects.
Efforts currently planned, or in progress, include: (1) rigorous
intercomparison of penetrometer field measurements with con-
ventional sampling and standard analytical methods, (2) char-
acterization of the effect of different soil types and
characteristics on system calibration, (3) enhancing the
capabilities of the sensor system for measuring compounds that
are excited at higher energies by replacing the N2 excitation
source with a Nd:YAG laser operating at the third and fourth har-
monics (355 and 266 nm).
References
1. Olsen, R.S. and J.V. Farr. "Site Characterization Using the
Cone Penetrometer Test." Proceedings of the ASCE Con-
ference on Use of In-situ Testing in Geotechnical Engineering.
Amer. Soc. of Civil Eng., New York, N.Y., 1986.
2. Cooper, S.S., P.G. Malone, R.S. Olsen and D.H. Douglas.
"Development of a computerized penetrometer system for Haz-
ardous waste Site Soils Investigations." Rept. No. AMXTH-TR-
TE-882452, U.S. Army Toxic and Hazardous Materials Agency,
Aberdeen Proving Ground, MD. (1988), 58 pp.
3. Campanella, R. G. and P. K. Robertson, "State-of-the-art in
in-situ testing of Soils: Developments since 1978," Department
of Civil Engineering, University of British Columbia, Vancouver,
Canada, 1982.
4. Lieberman, S.H., S.M. Inman and G.A. Theriault. "Use of
Time-Resolved Fluorometry for Improving Specificity of Fiber
Optic Based Chemical Sensors." In: Proceedings SPIE Op-
toelectronics & Fiber Optic Devices & Applications, Environ-
ment and Pollution Measurement Systems. Vol 1172. (1989),
p. 94-98.
5. Inman, S.M., P.J. Thibado, G.A. Theriault and S.H. Lieberman,
"Development of a pulsed-laser, fiber-optic-based fluorimeter:
determination of fluorescence decay times of polycyclic
aromatic hydrocarbons in sea water," Anal. Chim. Acta, 239,
(1990), p. 45-51.
63
-------
DISCUSSION
The following is a panel discussion in which questions were posed to the first three
authors of papers in the Chemical Sensors Session.
DICK GAMMAGE: Most of the data you showed was for sand. You're going
to have different quenching problems, different degrees of quenching for
different soils. Can that throw you out at all? Also, I thought the original intent
of this device was to be able to lower it directly into groundwater and take in water
measurements. And I'm wondering why your focus seems to be totally on the
headspace at this stage?
STEPHEN LIEBERMAN: I'll talk about this soil type question. That's a good
question. It's something that has been on our mind. We actually have a laboratory
study going right now where we're going to evaluate the effect of soil type on the
response of the sensor. One of the other considerations with soil type, and this was
something we visually observed, is if you have a sand the sample volume is going
to be different than if you have a very fine grain clay or something like that. We
have not parameterized that or really documented what that effect is yet. but we
are looking at that. That's kind of one of the drawbacks of rushing some of the
stuff out in the field, just to see if you can get that fiber down there without
breaking it, and some of those very basic questions. But we haven't ignored that.
FRED MILANOVICH: The answer to the second part is a quite complicated
answer. The experience we've had is that headspace measurement is far and away
more reproducible. And since this is a result-driven technology, we want
something that works. When we designed the continuous probe the reagent is
now in contact with a membrane. When we wet it on the other side with water,
we have problems. In the original probe there was an air space, and you could
stick that probe into the water. With the membrane being teflon, the wetting
phenomenon was different than what has been exposed to the pyridine. So some
work would have to be done there. But I don't see there's a great liability to stay
with headspace.
JOHN SCHABRON: How often do you have to recalibrate the probe? I guess
now that you can introduce solution into it, you can calibrate it more frequently.
Could you also address the issue that, with the two diodes, the red and the green,
you're not compensating for the difference in output of the two diodes as you
would if you had a single lamp and a monochromalor with two different
wavelengths.
FRED MILANOVICH: The calibration issue is a function again of a lot of
factors. If you make enough reagent and it's stored cold, you can go with the
calibration. We've gone months with the calibration. But if you mix a new
reagent, open a new bottle of pyridine, chemistries are different. So you'd have
to recalibrate.
MARY BETH TABACCO: Basically we found that you can adjust the output
from those two diodes to make them match, make one greater or one less. The
ability for the ratio to remain constant isn't dependent on the output from the
diodes. In the graph that I showed you, the green output was lower. In fact, in the
system electronics that we've built, the green is just about the same output value.
By adjusting the current to the LED, you adjust the output value.
DeLYLE EASTWOOD: As some of you may know, there is a fiber optic
committee chaired by Dr. Tuan Vo-Dinh of Oak Ridge which is working on
developing the calibration standards, fluorescence standards and standards for
terminology, and collecting a data base for fiber optic chemical sensors. We use
the term fiber optic chemical sensors because, as some of you know, Optrode is
a registered trademark. Dr. Vo-Dinh is giving a presentation on that at the
Pittsburgh conference in Chicago, Monday, March 4. I will also chair a meeting
on luminescence at that conference.
There's been a lot of previous work on classification and identification of oils,
some of which is in the literature, and is the basis for a couple of ASTM methods.
My question is, do you plan to use another laser and fiber to measure BTX?
STEPHEN LIEBERMAN: Yes, but I'm not sure we're going to get down to
BTX. We did have plans to use a different excitation source. That should be
coming on line, should at least be available to us about the end of this month. That
will give us the 266 and 355 excitation. But I think benzene and others are even
excited at lower wavelengths. The thing we're bucking there is the transmission
down the fiber. As you know, the attenuation dramatically increases as you go
down in the UV. So right now the 337 is kind of a nice compromise between what
we can get down there and a wavelength that will excite some of the 2,3,4-type
ring compounds. But if we could get the energy down there, it would be real nice
to try to go 200 or so. But I don't see that happening right now. I think 260 is going
to be pushing it. Even at that, we're going to be brute forcing the energy down
there. So I think we may be approaching the damage threshold of the fiber, versus
what we can get out the other end.
GORMAN BAYKUT: I have a question about telling compounds apart in a
mixture. You gave an example of a mixture of three compounds. If you have a
high concentration of some compounds with a very low concentration of another
compound, do you have any problems with determining them just using the
slopes?
STEPHEN LIEBERMAN: We have not actually done experiments where
we've juggled concentrations of these different compounds and really determined
what the range of concentrations we're able to discriminate. Obviously that's a
concern. We've done a little bit of work using lifetimes as a way to discriminate
different metal ions that complex with a particular indicator molecule. We've had
some success fitting biexponential curves to those compounds. But again, we
haven't really pushed the limit by having tremendous differences in concentration.
Our current thinking is it's going to take a combination of techniques and maybe
a smart pattern recognition-type technique. We may be looking at neural
networks as a way. But obviously, there's going to be some point in the
differences in concentration that you're going to be able to determine.
FRED MILANOVICH: In these experiments we actually prepared the solutions
so that they'd give similar initial intensities.
BRIAN PIERCE: I have four questions: (1) These indicators in your porous
meter, are these reactions reversible? (2) What are the polymers you're using in
your porous polymer monitor or sensors? (3) Have you considered waveguide
configurations? (4) How is it possible to construct these 3-D visualizations from
the finite number of points that you've sampled? What kind of assumptions go
into that?
MARY BETH TABACCO: We're working with both reversible and irrevers-
ible systems. The pH Optrodes, the ammonia sensors are all fully reversible.
Right now, for some of the other vapor sensors for hydrazines, carbon monoxide,
we have irreversible indicator systems. But as I mentioned, in the case of the
irreversible systems we've demonstrated that by monitoring the slope you can
look at real time changes in concentration. For example, with ethylene, we've
cycled concentrations from 100 ppb to 100 ppm and you basically can monitor
the change in the slope to pull out real time information.
Your third question was about waveguiding. And no, we've not considered that
approach here.
Concerning the actual polymers we're working with, we're using a variety of
polymer systems, both hydrophilic and hydrophobic. These are
methylmethacrylate systems with bis-acrylamide cross-linkers. The actual
formulation varies depending on the sensor. We have applied for a patent for the
pH Optrode under development. But as I mentioned, it is kind of a witch's brew
at this point.
STEPHEN LIEBERMAN: By the 3-D visualization I assume you mean the
fancy three-dimensional figure of field data. I'm not quite sure if I understand the
question. There's actually a lot of data points here that represents about 30
64
-------
DISCUSSION
pushes. We're firing that laser about once a second as we're pushing it into the
ground. So we're getting a point in the vertical about every two centimeters. Now
obviously you have to be careful in any kind of three-dimensional visualization
— it only represents reality as good as those contouring algorithms. I think the
proper way is to first plot out your raw data in cross-section or by profile. You
have to make sure that the visualization you generate by the more sophisticated
computer program reflects the reality of what you saw in those individual
profiles.
MARTY HARSHBARGER-KELLY: What is the software package you're
using on that Macintosh for data manipulation and who's the software
manufacturer?
FRED MILANOVICH: The software package is LabVIEW. It's all icon driven,
so no words are typed to do all that interfacing, just moving icons around. I
believe the software manufacturer is National Instruments.
BERT FISHER: Your instrument is measuring polyaromatic hydrocarbons, so
it's a bit misleading to say that you're measuring product, because you're
measuring some chunk of that. Also, this really should be able to look at historical
spills. Have you looked at weathered materials, because the PAH's will hang
around. And my comment on the three-dimensional visualization is, it's a lot like
doing geology. You have great resolution in the vertical and you accept the
horizontal on faith. So it's like doing stratigraphy.
STEPHEN LIEBERMAN: As to your question regarding weathered product,
I showed you data from a Jacksonville site that has a rather checkered past. Those
deposits go back 30 or 40 years. Now in geological terms that may not be your
idea of a weathered product, but it's not a fresh product. Actually there's some
work I know out of the petroleum people that shows that those PAH spectra don't
seem to change very much as a function of time, at least with the PAH
components, but we don't have any real evidence. This is also sort of a brute force
method here. We're taking this thing out on the field and we're sticking it in the
ground. We don't know very well what's down there or what we're even looking
at. Personally I think it would be much nicer to go to some sites where we have
some more recent leaks from a tank farm or something like that where we could
put ourselves to a better test of whether we can discriminate for instance JP5 from
diesel fuel. Hopefully we would also have information on how old the product
is and how long it's been in the ground.
BERT FISHER: That really was my concern, in that you would be seeing stuff
where there in fact was no product, but you were looking at a tremendous amount
of PAHs that had been hanging around for many years.
STEPHEN LIEBERMAN: That may be the case in that example.
PETER KESNERS: As I understand your apparatus, there's a membrane
permeation front on it. What sort of membrane types have you investigated? Do
you think it's feasible to measure pyridine in water with other membranes with
the sensor working the other way around?
FRED MILANOVICH: That's a real interesting point. Our concern with the
membrane is to keep pyridine out of the water, so we have solicited help
anywhere we can. The current membrane that works the best is plumber's tape,
simple expandable teflon plumber's tape. And that's a result of trial and error
from attempts too numerous to mention. Probably 40 or 50 membranes have been
tried and plumber's tape is the best. We do have some proprietary technology
from companies that we aren't able to speak about yet that could exceed the
plumber's tape.
TODD TAYLOR: It seems to me that the calibration curve that you showed on
the screen is going to depend on quite a few things in addition to the soil type. It
seems to me it's going to depend on the water content, because water is going to
affect the amount of oxygen quenching going on in the soil. It's going to depend
on the oxygen concentration. Surface soils are known to contain a lot of humic
materials, and those materials naturally fluoresce. Their fluorescence, in fact,
depends on metal concentration in the soil. So it seems to me there are quite a few
factors which may be involved in looking at the fluorescence of the soil. And the
last question is not really a question. It's more the fact that I think that you have
a lot more work to do in characterizing your system.
STEPHEN LIEBERMAN: In the previous graph, I did show that we have
looked at varying the water content all the way from dry up to 10% in the
data I showed you, and up to 25% with JP5, and we saw, somewhat surprisingly to me,
no real significant change in the response of the sensor. And so I think at least as
a first cut we have addressed that. As to the question of humics, we've also
considered that question. In the case of the Jacksonville data, we showed the fact
that we could leave the area that historically was the site where the contamination
was and get down to the background fluorescence, at least at that site. I don't think
we have a problem with background fluorescence due to the humic substances,
although we have done some other tests where we've measured humic substances.
We've looked at their spectral characteristics and also looked at their decay times.
The decay times for the humic substances appear to be much shorter than what
we're seeing for the petroleum products. So if we do run into a case where we are
getting background fluorescence due to naturally occurring organics, there's at
least some hope that we may be able to resolve that based on their emission
curves.
I agree with you, there's tons of problems out there that need to be addressed and
looked at in more detail. Our approach has been to push this thing out
in the field and see what happens. Let's fill in some of these questions later, when
we get some handle on what we are seeing. But I think that the true proof of this
thing, and this is where we stand right now, is going to be to do some of these
profiles and then rigorous validation of it: to collect samples and analyze them
by the more conventional methods. Obviously that needs to be done. And that's
going to be the thrust of our effort now.
65
-------
SPECTROELECTROCHEMICAL SENSING OF CHLORINATED HYDROCARBONS
FOR FIELD SCREENING AND IN SITU MONITORING APPLICATIONS
Michael M. Carrabba, Robert B. Edmonds and R. David Rauh
EIC Laboratories, Inc.
111 Downey Street, Norwood, MA 02062
and
John W. Haas, III
Oak Ridge National Laboratories, Health and Safety Division
P.O. Box 2008, Oak Ridge, TN 37831-6383
ABSTRACT
The detection and identification of chlorinated hydrocarbon
solvents (CHS) have been demonstrated by combining the
principles of spectroscopy and electrochemistry. The
successful observation of the CHS is highly dependent on
the analysis procedure. The procedure is based on a photon
induced electrochemical reaction which is detected by
surface enhanced Raman spectroscopy (SERS) on
electrodes. The results and methodology of the technique
will be discussed.
INTRODUCTION
Techniques to sense and monitor chlorinated hydrocarbon
solvents (CHS) are becoming increasingly important with
the intensifying presence of groundwater contamination.
Our research and
development effort is aimed at producing a commercial,
low cost, field portable instrument for the field screening/in
situ monitoring of contamination from chlorinated organic
solvents based on spectroelectrochemical fiber optic
probes. Some of the advantages of this technique for
monitoring a contamination site are cost, small size of
sampling probe, real-time analysis, the capability of sensing
in adverse environments, and the ability to use a central
detection facility. The technique has an advantage over
current fiber optic chemical sensing methods for
chlorinated organics in that the sensing only takes place
when the electrochemical device is turned on. This should
enable long term monitoring of a well to be accomplished
with only one probe.
Our monitoring system for chlorinated organic solvents is
based on the principle of combining spectroscopic,
electrochemical and fiber optic techniques (Spectro-
electrochemical Fiber Optic Sensing (SEFOS)). SEFOS is,
in principle, a generic technique which can be adapted to
many different sensing applications. With the SEFOS
technique, we use electrochemical methods to reduce the
chlorinated organic solvents into reactive intermediates.
The reactive intermediates can then react with the
"trapping" reagent and spectroscopic changes, such as
surface enhanced Raman spectra, are used to sense the
chlorinated organics at levels far below their detection
limits by electrochemical methods alone. Previous work
(1) has shown the usefulness of using surface enhanced
Raman spectroscopy (SERS) for the detection of
groundwater contaminations and the technique has also
been successfully applied to fiber optics (2). However,
these past experiments have mainly been restricted to
aromatic hydrocarbons.
In this manuscript we will discuss some of the fundamental
aspects of using SERS for the examination of the following
chlorinated hydrocarbons or organochlorides: carbon
tetrachloride, 1,2-dichloroethane (DCE), chloroform and
trichloroethylene (TCE). Our interest in these compounds
stems from their existence in the groundwater at the
Department of Energy hazardous waste sites.
EXPERIMENTAL
The Raman spectroscopy system for conducting the SERS
experiments at EIC has been previously described (2). The
system used at Oak Ridge National Laboratory (ORNL) is
shown in Figure 1 and, with the use of an optical fiber for
excitation, represents a first step toward a remote fieldable
Raman system. Of note in the optical system is placement
of the laser line pass filter (BP) after the optical fiber to
remove interfering Raman scattering from the fiber itself
67
-------
(3). Both research groups employed high-resolution
spectrometers and diode array detectors for measuring
Raman scattering from similar spectroelectrochemical
cells. As shown in Figure 2A, each cell was fabricated from
a 3 x 6 x 3 cm quartz cuvette with O-ring joints fused into
three sides and the top. Electrodes were fed into the cell
through O-ring joints and consisted of Pt counter, Ag/AgCl
reference, and copper working electrode. The working
electrode was placed about 2 mm from the (large) face of
the cell between the two electrodes. This orientation
minimized the path length of incident and scattered light
through the sample solution and simplified alignment of the
electrode in the optical system. For transport/concentration
studies, a membrane could be sandwiched between the
spectroelectrochemical cell and a second cuvette with
matching O-ring joint fused into the bottom (Figure 2B).
The spectroelectrochemical procedures were first
developed at EIC and then used at ORNL. Electrochemical
roughening of polished copper electrodes, consisting of
high purity 1.0 mm copper wire, was achieved with an
oxidation/reduction cycle (ORC) from -0.6 to +0.2V in a
0.1M KCl electrolyte at 25 mV/sec. Saturated solutions of
the chlorohydrocarbon solvents (CHS) in distilled water or
100 ug/ml solutions of CHS in 0.1 M KCl were cycled
several times under the same conditions and optimum SERS
spectra were acquired at -0.2V on the cathodic sweep. All
cycling occurred under laser illumination at 625 nm at EIC
or 647 nm Krypton illumination at ORNL. The use of the
slightly different wavelengths for illumination and Raman
spectroscopy did not produce significantly different results
at the two labs.
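As an aid to reproducing the cycling schedule, the short sketch below (plain Python, not part of the original work) computes the timing implied by the sweep limits and rate quoted above; the assumption of a simple triangular sweep is ours.

# Timing of one oxidation/reduction cycle (ORC), assuming a triangular sweep
# between the limits quoted in the text.
V_LOW, V_HIGH = -0.6, 0.2        # sweep limits (V vs. Ag/AgCl)
RATE = 0.025                     # sweep rate (V/s), i.e. 25 mV/sec
V_ACQ = -0.2                     # potential for SERS acquisition on the cathodic sweep

leg = (V_HIGH - V_LOW) / RATE            # duration of one sweep leg, seconds
cycle = 2 * leg                          # full anodic + cathodic cycle, seconds
t_acq = leg + (V_HIGH - V_ACQ) / RATE    # time into the cycle when -0.2 V is reached

print(f"one ORC cycle: {cycle:.0f} s ({leg:.0f} s per leg); "
      f"-0.2 V reached {t_acq:.0f} s into the cycle")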
RESULTS AND DISCUSSION
Our results confirmed previous experiments (1) which
indicated that carbon tetrachloride was not observable on
Ag substrates. In addition, we were unable to observe the
chlorinated hydrocarbons on Ag or Au substrates.
However, when we examined the chlorinated hydrocarbons
with a Cu electrode, we were able to observe the SERS
spectra of carbon tetrachloride (Figure 3) as well as the
SERS spectra of TCE, DCE and chloroform (Figure 4).
The best SERS spectra were obtained when the ORC cycle
was stopped during the reduction step at the potential of
zero charge for Cu (-0.2V) (4). The observation of the SERS
spectra was also highly dependent on illumination during
the cycling. Previous work by Thierry and Leygraf (5) has
indicated the importance of illumination during the
electrochemical roughening of Cu electrodes to produce
Raman active sites.
The vibrational features in Figure 4 indicate that a reaction
is occurring on the electrode surface (see Table 1 for
vibrational assignments). From the spectra, it appears that
ring formation is occurring due to an electrochemical and/or
photochemical process. However, in our experiments no
SERS spectra of the CHS were observed unless the
electrode was illuminated during the reduction step and thus
a strictly electrochemical reaction can be ruled out.
This "photo" induced result indicates the possibility of a
photoelectrochemical process. Copper oxides are known
to be p-type semiconductors which eject electrons under
illumination (Equation 1) (6). The band gaps for the two
possible copper oxides are 2.0-2.6 eV (620-477 nm) for
Cu2O and 1.7 eV (730 nm) for CuO. These electrons can
then electrochemically reduce the chlorinated hydrocarbon
solvents.
Cu2O + hv -> Cu2O (h+) + e-        (1)
This electrochemical reduction is similar to a reaction
scheme for the electrochemical reduction of chloroform
which has been determined by Fritz and Kornrumpf (6) to
be:
CHCl3 + 2e- -> CHCl2- + Cl-
CHCl2- + CHCl3 -> CH2Cl2 + CCl3-
CCl3- -> :CCl2 + Cl-
The formation of the dichlorocarbene during the
electrochemical reduction process would tend to form a ring
type structure (6). This ring type structure is indicated in
our SERS spectra with the strong band at 1380 cm-1.
A preliminary observation has indicated that the SERS
spectrum is only observable for a finite amount of time. The
result is either due to the degradation of the electrode or the
sample. If the electrode was replaced with a new SERS
surface and then placed in the same solution, the spectrum
was still not observable. This indicates that the chlorinated
hydrocarbons were being consumed during the experiments
in the small volume (10 ml) of analyte. Confirmation of this
result would indicate that the SERS on Cu surfaces is a
method which is capable of both sensing and removing the
chlorinated hydrocarbons from the solution.
To determine the cause of the disappearing SERS signal, a
series of SERS/GC experiments was performed to determine
the TCE concentration before and after the SERS
experiments. Saturated samples of trichloroethylene (TCE)
in 0.1M KCl and distilled H2O were cycled in a sealed glass
SERS cell to prevent the possibility of outgassing of the
TCE. Samples of the saturated TCE solutions were
collected both before and after the electrochemical cycling.
These samples were analyzed on a Hewlett-Packard Model
HP 5730A Gas Chromatograph. Chromatograms were
recorded and the magnitudes of retention peaks were
examined for the TCE peak in the experiments. Large
spikes at the 45 second retention time were due to impurities
in the distilled water. The chromatograms showed that a
large amount of TCE was consumed during electrochemical
cycling. Figure 5 represents a typical "before" and "after"
chromatogram.
68
-------
Analysis of "before" and "after" chromatograms showed an
average consumption of 66% of the trichloroethylene
during the electrochemical cycling and SERS experiments.
This is consistent with our observation that a film was being
formed on the roughened copper surface of our working
electrode. The formation of a film also indicated that the carbene
may be initiating a radical-induced polymerization.
Methods for determining the exact structure of the products
formed during electrochemical cycling are currently under
investigation.
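The consumption figure can be reproduced directly from the "before" and "after" peak magnitudes; the sketch below shows the arithmetic with placeholder peak areas (the areas themselves are illustrative; only the roughly 66% average is from the text).

# Percent TCE consumed during cycling, from GC peak areas measured before and
# after the SERS experiment.  The peak areas below are placeholders; the paper
# reports an average consumption of about 66%.
runs = [
    {"before": 1.00e5, "after": 0.36e5},   # hypothetical peak areas (arb. units)
    {"before": 8.50e4, "after": 2.70e4},
]

consumed = [100.0 * (r["before"] - r["after"]) / r["before"] for r in runs]
for i, pct in enumerate(consumed, start=1):
    print(f"run {i}: {pct:.0f}% of the TCE consumed")
print(f"average consumption: {sum(consumed) / len(consumed):.0f}%")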
CONCLUSION
The observation of a "photo" induced SERS process in the
analysis of the chlorinated hydrocarbon solvents has future
implications for environmental sensors. Previous to this
work it was thought that the CHS type compounds were not
observable by the SERS technique. Upon completion of
our fundamental experiments, future work will concentrate
on the analytical applications of the process and the
development of field portable Raman instrumentation.
ACKNOWLEDGMENT
This work was conducted in part under a collaborative
research agreement (CR-90-003) between EIC and ORNL
(Martin Marietta Energy Systems). Financial support for
this work was derived in part from the Office of Health and
Environmental Research Division of the Department of
Energy under the Small Business Innovative Research
program.
REFERENCES
1. Carrabba, M.M., R.B. Edmonds and R.D. Rauh,
"Feasibility Studies for the Detection of Organic Surface
and Subsurface Water Contaminants by
Surface-Enhanced Raman Spectroscopy on Silver
Electrodes", Anal. Chem., 59, 2559 (1987).
2. Carrabba, M.M., R.B. Edmonds, P.J. Marren and R.D.
Rauh, Proceedings of the First International Symposium
on Field Screening Methods for Hazardous Waste Site
Investigations, Las Vegas, Nevada, October 1988, p. 33.
3. Carrabba, M.M. and R.D. Rauh, "Apparatus for
Measuring Raman Spectra Over Optical Fibers", U.S.
Patent Application 07/442,235 (1989).
4. Bunding, K., J. Gordon, and H. Seki, "Surface-Enhanced
Raman Scattering by Pyridine on a Copper Electrode",
J. Electroanal. Chem., 184, 405 (1985).
5. Thierry, D. and C. Leygraf, "The Influence of
Photoalteration on Surface-Enhanced Raman Scattering
from Copper Electrodes", Surf. Sci., 149, 592 (1985).
6. Fritz, H. and W. Kornrumpf, "An Improved Cathodic
Generation of Dichlorocarbene", Liebigs Ann. Chem., 2, 1416 (1978).
Figure 1. Experimental setup for "photo" induced SERS
experiments at ORNL. F = optical fiber, O = microscope
objective, CL = collimating lens, FL = focusing lens, P =
right angle prism, BP = laser line pass filter, BR = laser line
rejection filter, C = spectroelectrochemical cell.
Figure 2. Diagram of spectroelectrochemical cell. (A)
Top view showing 3 electrode ports and O-ring joint
opening in the top of the cell. (B) side view showing sample
reservoir attached to the top for membrane concen-
tration/transport studies. Only 2 of the 3 electrode ports are
visible. In both diagrams the arrows point along the optical
axis as shown in Figure 1.
69
-------
Figure 3. The SER spectrum of a saturated solution of
carbon tetrachloride in water on a Cu electrode. The
spectrum has been smoothed for clarity.
Figure 5. Gas chromatograms of TCE solution before
and after the SERS experiment. Retention time for the TCE
peak was 2 minutes.
Figure 4. The SER spectra of saturated solutions in water
on a Cu electrode of (A) trichloroethylene, (B)
1,2-dichloroethane and (C) chloroform.
70
-------
Table 1
Major Raman/SERS Peak Positions (cm-1) and Vibrational Assignments for the Chlorinated Hydrocarbon Solvents

CCl4
  Raman: 227 s, 319 s, 462 s, 762 w, 787 w
  SERS: 220 w, 261 w, 288 w, 521 w, 791 m, 1051 w, 1089 w

CHCl3
  Raman: 689 s, 760 m, 1218 w
  SERS: 526 m, 670 w, 783 s, 1021 w, 1056 m, 1151 w, 1234 w, 1313 m, 1352 m, 1381 s, 1465 w, 1550 w, 1581 w

DCE
  Raman: 656 s, 674 m, 755 s, 882 w, 944 w, 1055 w, 1209 w, 1306 w, 1433 w
  SERS: 521 m, 782 s, 965 w, 1024 w, 1058 m, 1101 w, 1148 w, 1239 w, 1312 m, 1379 s, 1464 w, 1509 w, 1582 m

TCE
  Raman: 628 s, 780 m, 842 w, 930 w, 1247 m, 1585 s
  SERS: 524 m, 781 s, 862 w, 963 w, 1018 w, 1055 m, 1105 w, 1167 w, 1237 w, 1312 m, 1358 m, 1379 s, 1463 w, 1505 w, 1580 s

Vibrational Assignments: Cu-C?; Cu-C?; Cu-C?; "chain expansion"; symmetric CCl4 str.; CCl str., Cu-C stretch?; CCl str. - secondary CA; CCl str. - primary CA; symmetric CCl3 str.; CCl str. - primary CA; CCl str. - primary CA; CCl str. - primary CA; CC skeletal str.; CC skeletal str., ring "breathing"; in-plane CH deformation, CC str., ring "breathing"; CC str., ring "breathing"; CC str., ring "breathing"; ring "breathing" - cyclopropane type; CH2 twist and rock; CH2 twist and rock, in-plane CH deformation; CH2 in-phase twist, CH2 twist and rock, in-plane CH deformation; CH deformation; ring str., CH2 deformation; CH2 deformation; symmetric C=C str. - cyclobutene; C=C str. - cyclobutene; C=C str. CA, 3 or C=C coupled str. - polyene

s = strong intensity, m = moderate intensity, w = weak intensity
CA = Chloroalkane, str. = stretch
71
-------
DISCUSSION
ARTHUR D'SILVA: In the E.I.C. experiments at what wavelength did you
measure the fluorescence?
MICHAEL CARRABBA: We're looking at the complete spectrum, in this case
a very simple proof of concept. We weren't trying to develop as sophisticated
a system as the Livermore people have developed, or as the people at GEO-
Centers. We're proving the concept here. We just monitored the intensity under
the total fluorescence band.
ARTHUR D'SILVA: What is the excitation wavelength?
MICHAEL CARRABBA: The excitation wavelength was 514 nanometers. We
added an argon-ion laser. We believe we could use just about any of the
wavelengths from 488 up to about possibly 600, but we really didn't try the 600.
EDWARD POZIOMEK: In the experiment where you described the photon
induced reaction, did you utilize a base?
MICHAEL CARRABBA: In the electrochemical experiment you don't need
the base. We use it as our bench mark, and then put the electrodes in. I believe we
don't need the base, and that's probably the important point.
EDWARD POZIOMEK: If you had the opportunity to solve a technology
barrier, which one would you go after first in this area to move it faster?
MICHAEL CARRABBA: The implication of the dichlorocarbene, going after
a double bond, could be quite lucrative in the future. And we believe we can make
probe systems that have been coated right onto an optical fiber and a very simple
sensor. That's where I think we'd pursue it at this point. Basically we'd use some
particular dyes that when the dichlorocarbene attacks the double bond it breaks
the conjugation and the fluorescence disappears or new fluorescence appears.
That's the direction that we're working on right now.
72
-------
SURFACE ACOUSTIC WAVE (SAW) PERSONAL MONITOR FOR TOXIC GASES
N. L. Jarvis, H. Wohltjen, and J. R. Lint
Microsensor Systems, Inc.
6800 Versar Center
Springfield, VA 22151
ABSTRACT
A demonstration model 4-sensor Surface Acoustic Wave
(SAW) Personal Monitor for Toxic Gases was designed and
built, with emphasis on minimizing the overall system
size, weight, power consumption, and complexity. The
completed demonstration unit contained four 158 MHz SAW
delay lines, supporting RF electronics, microcomputer
(microcontroller), a miniature pump, valve, gas transfer
lines, and a small scrubber to provide a clean, dry, air
source to establish sensor baseline frequencies. The
demonstration unit weighs approximately 2 pounds. The
projected size of the follow-on unit is expected to be 6" x
3" x 1". Unlike previous SAW vapor sensor arrays, which
utilized coatings that interact reversibly with specific
classes of toxic organic vapors, this SAW Personal Monitor
takes advantage of sensor coatings that react irreversibly
with toxic chemicals. Thus it can more easily and
effectively determine total exposure to a given toxic gas.
The following toxic inorganic gases were selected for study
with the demonstration system: HCl, NO2, SO2, H2S
and NH3. Coating materials were selected that react
irreversibly with each gas. The coatings were applied to
the SAW sensors and their performance evaluated for
exposure to a single gas. The results show that suitable
materials are available for use as dosimeter coatings for
SAW sensors. Thus the potential exists for developing an
effective SAW Personal Monitor for detecting and
monitoring each of the above gases, except NO2, at
concentrations well below the OSHA "action levels".
INTRODUCTION
In all areas of environmental monitoring, as well as
industrial hygiene, there is a need for smaller, more
sensitive, and inexpensive personal monitors (e.g.,
dosimeters) for toxic gases and vapors. For example,
personnel involved in field screening must be concerned
with their personal health and safety when working at a
field site, and may often require accumulated exposure data
for various toxic gases. SAW sensor technology, however,
is not limited to use in a Personal Monitor (e.g., a toxic gas
monitor that can be worn on clothing). The same sensor
technology could be extended to the development of small,
hand-held or in-situ monitors for a variety of field
screening applications.
There are a number of techniques currently being used to
acquire toxic exposure data; however, each has its
limitations. In the future, large numbers of more effective
monitors will be required for the rapid and reliable
detection and/or monitoring of toxic gases and vapors at
ever lower concentrations, in response to increasingly
stringent state and federal health and environmental
regulations. Chemical microsensors have demonstrated the
sensitivities and physical properties needed to meet the
size, cost, and performance requirements of a new
generation of personal monitors, and should ultimately
find a wide range of applications within the industrial,
medical, and environmental communities (1 - 13).
Of the chemical microsensors that have been investigated to
date, SAW devices, which measure changes in mass when a
chemically specific surface coating adsorbs or reacts with
an appropriate gas, are the best characterized and the most
promising for rapid development. SAW devices have been
shown to respond in just seconds to selected vapors at
concentrations down to the parts per billion range for
specific organic chemicals. Because of their solid state
construction and compatibility with integrated electronics,
they can be easily incorporated into very small,
lightweight instruments, small enough to be worn on
clothing. The primary challenge remaining in the
development of SAW based microinstruments is the
development of more selective and sensitive SAW coatings
for specific gases and vapors. Other technical areas to be
addressed are the miniaturization of supporting electronic
components and the development of computer software to
facilitate sensor operation, data analysis, and data
reporting.
73
-------
OBJECTIVE
The objective of the present study was to demonstrate the
feasibility of developing a miniaturized Surface Acoustic
Wave (SAW) Personal Monitor with the size, sensitivity,
selectivity, reliability, and low power consumption
appropriate for wearing on clothing. To achieve this
objective it was necessary to demonstrate that: (1) the SAW
sensors and necessary support electronics can be
sufficiently miniaturized; (2) chemically selective SAW
coating materials are available or can be developed for the
detection of a wide range of toxic gases; and (3) the SAW
sensors and their coatings can be sufficiently sensitive to
specific toxic gases to meet the requirements of field
screening, personal safety, and related monitoring
applications.
SAW SENSOR INSTRUMENTATION
1. SAW Sensor Operating Principles
SAW devices are mechanically resonant structures whose
resonance frequency is perturbed by the mass or elastic
properties of materials in contact with the device surface.
Rayleigh surface waves can be generated on very small
polished chips of piezoelectric materials (e.g. quartz) on
which an interdigital electrode array is lithographically
patterned. When the electrode is excited with a radio
frequency voltage, a Rayleigh wave is generated that
travels across the device surface until it is "received" by a
second electrode. The Rayleigh wave has most of its energy
constrained to the surface of the device and thus interacts
very strongly with any material that is in contact with the
surface. Changes in mass or mechanical modulus of a
surface coating applied to the device produce corresponding
changes in wave velocity. The most common configuration
for a SAW vapor/gas sensor is that of a delay line
oscillator in which the RF voltage output of one electrode is
amplified and fed to the other. In this way the device
resonates at a frequency determined by the Rayleigh wave
velocity and the electrode spacing. If the mass of the
coating is altered, the resulting change in wave velocity
can be measured as a shift in resonant frequency. SAW
vapor/gas sensors are similar to bulk wave piezoelectric
crystal sensors, except they have the distinct advantages of
substantially higher sensitivity, smaller size, greater ease
of coating, uniform surface mass sensitivity, and improved
ruggedness. Practical SAW sensors currently have active
surface areas of a few square millimeters and resonance
frequencies in the range of hundreds of MHz. However,
SAW devices having total surface areas significantly less
than a square millimeter and resonant frequencies in the
gigahertz range are possible using modern
microlithographic techniques. Such devices would
ultimately increase device sensitivity as well as decrease
size. Most of the SAW vapor sensors reported in the
literature employ two delay line oscillators fabricated side
by side on the same chip, with one delay line used to
monitor the toxic chemical and the other to act as a
reference to compensate for changes in ambient
temperature and pressure.
2. SAW Sensitivity and Selectivity
A 158 MHz SAW device having an active area of 8 mm2
will give a resonant frequency shift of about 365 Hz when
perturbed by a surface mass change of 1 nanogram. This
sensitivity is predicted theoretically and has been
confirmed experimentally. The same device exhibits a
typical frequency "noise" of less than 15 Hz RMS over a 1
second measurement interval (i.e. 1 part in 10^7). Thus,
the 1 nanogram mass change gives a signal to noise ratio of
about 24 to 1. For vapor or gas sensing applications, the
objective is to have the chemical selectively adsorb onto
the mass sensitive surface of the device. Chemically
selective coatings are used for this critical operation.
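The signal-to-noise figures quoted above follow from simple arithmetic; the sketch below (not part of the original paper) restates the 365 Hz/ng sensitivity and 15 Hz RMS noise from the text, and any other mass values used with it are illustrative.

# Signal-to-noise estimate for a 158 MHz SAW delay line, using the mass
# sensitivity and frequency noise quoted in the text.
SENSITIVITY_HZ_PER_NG = 365.0    # frequency shift per nanogram of added mass
NOISE_HZ_RMS = 15.0              # typical 1-second frequency noise

def snr_for_mass(mass_ng):
    """Return (frequency shift in Hz, signal-to-noise ratio) for a mass change."""
    shift = mass_ng * SENSITIVITY_HZ_PER_NG
    return shift, shift / NOISE_HZ_RMS

shift, snr = snr_for_mass(1.0)   # the 1 ng example discussed above
print(f"1 ng -> {shift:.0f} Hz shift, S/N about {snr:.0f}:1")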
3. Selective Coatings
The operational behavior of a Surface Acoustic Wave device
can be very sensitive to changes in density, elastic
modulus, and viscosity of the surrounding medium;
however, SAW devices are not inherently sensitive to the
chemical properties of the medium surrounding the device.
When coated with a chemically selective thin film they can
exhibit remarkable sensitivity to small quantities of a
chemical vapor or gas. The development of such selective
coatings for toxic chemicals can take two directions, (1)
coatings that will selectively and reversibly adsorb a
selected vapor or gas by matching "solubility"
characteristics; and (2) coatings that react chemically and
irreversibly with a selected vapor or gas. SAW
selectivities in excess of 10,000 to 1 for certain toxic
chemical agents have been demonstrated using the
"solubility" approach. Much greater selectivities should
be possible using chemically reactive coating/vapor (gas)
combinations.
SAW INSTRUMENTATION DEVELOPMENT
1. Miniaturization of SAW Sensor Array and RF
Electronics
Ultimate miniaturization would be achieved by going to
hybrid circuitry, where the sensors and support RF
electronics could be reduced in size to a few cm2 or less.
Hybridization, however, will require a major engineering
effort and was beyond the scope of this study. The emphasis
was therefore on the selection and arrangement of the
discrete components and electronic packages to minimize
the size of the demonstration unit. The basic design of the
system is essentially the same as used in previous SAW
Vapor Monitors. The four coated SAW dual delay line
devices were mounted in small, gold IC packages. The lids
of each package were modified with short, 1/16" ID, gold
plated gas inlet and outlet tubes to provide the toxic gases
access to the sensors. A fifth SAW dual delay line, sealed to
prevent exposure to the ambient environment, was placed in
a separate package. In the demonstration unit, this fifth
device was used as a reference for all other sensors to
compensate for changes in temperature and pressure. The
output of the 4 SAW Sensor Array was integrated with a 4
channel frequency interface card to generate the measured
74
-------
frequency differences, Δf, and with an onboard
microcomputer (microcontroller) for data analysis.
2. Instrument Configuration
The system was designed with three circuit cards: a sensor
card, a four channel frequency interface card, and a
microcomputer card. The entire instrument will fit in an
enclosure 4-3/4" x 8" x 3", allowing room for the
necessary pumps, valves and gas transfer lines. The
system was designed for either battery operation or a
120 VAC 50-60 Hz power supply. 1/8" Swagelok
bulkhead fittings on the enclosure provided gas inlet and
outlet to the system. Except for the stainless steel
Swagelok fittings on the front of the enclosure, all surfaces
in contact with the gas up to the SAW devices are either
Teflon or gold.
The four channel microcomputer controlled frequency
counter measures and reports the frequency of each SAW
sensor every two seconds while controlling the solenoid
valves by means of a solid state relay. For laboratory
evaluation of the demonstration model SAW Personal
Monitor for Toxic Gases, the counter output is provided on
a 9600 baud RS-232C serial communications line. For
better control and monitoring of the demonstration model,
and its subsystems, all communication with the unit was
through the RS-232 line and a personal computer with a
serial communication port. In a follow-on program, a
different communication scheme will be devised so that the
user will have the option of entering all instructions
directly on the instrument. Also, all concentration data
and/or signals will be presented on visual (LCD) displays
or by audio alarms mounted on the instrument enclosure.
There will still be the option of communicating with the
SAW Personal Monitor via a personal computer to retrieve
data stored in memory.
In the demonstration unit, the onboard Octagon SB S-150
microcomputer was programmed to control operation of
the system, but not for analysis of the sensor array data.
Development of a sensor array data analysis program is
planned for the follow-on effort. With the demonstration
unit, the performance of each SAW sensor, and its coating,
was evaluated individually against a specific toxic gas.
There are a number of experimental variables that also
require computer control and/or analysis. For example,
due to the possible adsorption/desorption of ambient gases
(especially water vapor) on the coatings, the computer
must continually determine the actual baseline for each
sensor, by intermittently providing clean, dry (filtered)
air to the sensors. The computer must also store
calibration data for each sensor and provide total exposure
values on demand and/or activate an alarm when certain
values are exceeded. Figure 1 provides a pictorial layout
of a SAW Array Personal Exposure Monitor.
SAW COATING SELECTION
1. Selection of Candidate Coatings
A series of candidate materials was selected for screening
as coatings for the SAW devices. They were selected on the
basis of their known reactivity with the toxic gases chosen
for evaluation. The coatings selected for screening against
the reactive gases are given in Table 1.
Table 1. Candidate Coating Materials for SAW Sensors

Candidate Coating                Reactive Gas
Diphenylbenzidine                NO2
2,4-Dinitrophenylhydrazine       NO2
o-Toluidine                      NO2
Triethylenediamine (TEDA)        SO2
Na[HgCl2] (hydrate)              SO2
Pb(C2H3O2)2 · 5H2O               H2S
CuSO4 · 5H2O                     H2S
K[Ag(CN)2]                       H2S
Ninhydrin                        NH3
Polyvinylpyridine (PVP)          HCl

2. Coating of SAW Devices
Each of the above coatings was applied to two 158 MHz SAW
devices. Each SAW device to be coated was inserted into a
suitable connector mounted on a circuit board that
contained the necessary electronics to operate the device
and provide frequency signals to an external data acquisition
system. Prior to coating, each dual 158 MHz SAW device
was ultrasonically cleaned in isopropanol or chloroform,
dried in a stream of compressed dry, zero air, and
positioned in the coating apparatus. In all but a few
instances, the coatings were applied by a spray deposition
technique developed by Microsensor Systems. The primary
requirement is that the coating material must be soluble in
a volatile solvent. Zero air was used to generate a fine mist
of the specific coating solution. A mask was placed over the
SAW device so that only the interdigitated delay lines were
coated.
The quantity of coating material deposited on each delay
line was closely monitored by the computer data system
which reported the mass of material deposited as an
increase in frequency, Δf. The amount of coating material
applied was held closely to 250 KHz ± 50 KHz. The
frequency shift, Δf, corresponds to coating thickness,
assuming uniform surface coverage. Once the coatings
were applied, the SAW devices were covered and stored in a
low humidity (< 10% RH) environment until ready for
testing. As the candidate coating materials given in Table 1
are generally hygroscopic, it can be assumed that a certain
amount of water will be associated with each coating and
must be considered in subsequent gas interactions.
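As a rough cross-check, the 250 KHz ± 50 KHz coating shift can be converted to a deposited mass if one assumes that the 365 Hz/ng mass sensitivity quoted earlier also applies during coating; that assumption, and the sketch below, are ours and are intended only as an order-of-magnitude illustration.

# Rough conversion of the monitored coating frequency shift to deposited mass,
# assuming (our assumption) the 365 Hz/ng sensitivity applies to the coating step.
SENSITIVITY_HZ_PER_NG = 365.0
TARGET_SHIFT_HZ = 250_000.0      # nominal coating shift, 250 KHz
TOLERANCE_HZ = 50_000.0          # +/- 50 KHz

low = (TARGET_SHIFT_HZ - TOLERANCE_HZ) / SENSITIVITY_HZ_PER_NG
high = (TARGET_SHIFT_HZ + TOLERANCE_HZ) / SENSITIVITY_HZ_PER_NG
print(f"deposited coating mass: roughly {low:.0f} to {high:.0f} ng per delay line")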
75
-------
Figure 1. Pictorial Layout of SAW Array Personal Exposure Monitor, showing the
replaceable SAW sensors (four underneath a screw-on lid), the microcomputer board,
the rechargeable battery pack, and the ambient vapor inlet.
76
-------
3. Screening and Selection of Coatings for SAW Test
and Evaluation
The following criteria were established to define a
successful candidate material: (1) that a coating give a
frequency shift equivalent to a 100:1 signal to noise ratio
when exposed to the toxic gas at a concentration of
approximately 100 ppm for 1 minute or less; and (2)
that the coating react irreversibly with the test gas. With
a baseline noise level of approximately 15 Hz, a 100:1
signal to noise ratio would be equivalent to a frequency
shift on the order of 1500 Hz. Thin film coatings showing
less response would not have sufficient sensitivity nor
capacity to be useful in field monitoring applications.
A calibrated cylinder of each of the test gases (NO2, SO2,
HCl, H2S, NH3) in air was obtained from the Scott
Specialty Gas Co. The concentration of each gas source
was:

Toxic Gas    Source Concentration
HCl          103.3 ppm
NH3          106.5 ppm
H2S          100.6 ppm
NO2          108.0 ppm
SO2          102.5 ppm
By simple dilution of the compressed gas with clean, dry,
zero air, a steady state concentration at any value less than
100 ppm could be easily prepared. A constant gas flow
rate of 200 cc/min was maintained. A valve was arranged
so that clean air, or a known concentration of the specific
test gas, could be alternately delivered to the sensor. A lid
with 1/8" gold gas inlet and outlet tubes was placed over
the device and was connected to the output of the gas
dilution chamber. The frequency output of the dual delay
lines could be monitored using a small frequency counter.
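The flow split needed to reach a given test concentration follows from a simple two-stream blend; the sketch below illustrates the calculation using the source concentrations and the 200 cc/min total flow from the text (the specific target concentration and the resulting flows are examples, not values reported in the paper).

# Flow split for diluting a ~100 ppm calibrated gas with zero air to a target
# concentration at a fixed total flow of 200 cc/min.
def dilution_flows(c_source_ppm, c_target_ppm, total_flow_ccmin=200.0):
    """Return (source-gas flow, zero-air flow) in cc/min for a two-stream blend."""
    source = total_flow_ccmin * c_target_ppm / c_source_ppm
    return source, total_flow_ccmin - source

# Example: 20 ppm HCl from the 103.3 ppm cylinder (illustrative target value).
src, air = dilution_flows(c_source_ppm=103.3, c_target_ppm=20.0)
print(f"source gas: {src:.1f} cc/min, zero air: {air:.1f} cc/min")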
In the tests, a coated SAW device was first exposed to clean,
dry air at 200 cc/min to obtain a steady baseline
frequency. The valve was then turned to expose the sensor
to a known concentration of the toxic gas, at the same flow
rate, for a pre-determined period of time. The sensor was
then exposed once again to clean, dry air to establish a new
baseline. If the clean air baseline, after exposure to the
toxic gas, was significantly different from the initial clean
air baseline, it was assumed the change in frequency was
due to an increase in coating mass resulting from the
irreversible reaction with the challenge gas. If there was
no significant change in SAW frequency, the device was
exposed to higher gas concentrations for longer periods of
time. If there was still no permanent change in baseline,
it was assumed there was no reaction and that the coating,
in its present form at least, was ineffective. All tests were
performed with dry air, unless otherwise specified in the
text.
The results of the initial screening tests are given in Table
2. They show that for each toxic gas there was at least one
coating that gave an acceptable response. However, in
several instances there were rather unexpected results.
For example, NO2 did not appear to react at all with
2,4-Dinitrophenylhydrazine unless there was a relatively high
moisture content (≈ 80% RH) in the carrier gas. It was
also surprising that H2S did not react readily with the lead
acetate coating, even though we have observed this surface
reaction in a previous study. Copper sulfate seemed
unreactive initially, however, after repeated cycling it did
react to give a very large and permanent frequency shift.
The reaction, or lack of it, in each case may depend to a
large extent upon the amount of water present in the film.
Table 2. Results of Initial Coating Screening Test
(Thickness of all coatings approx. 250 KHz)

Coating                        Gas    Conc./Time        Δf (Hz)    Stable Reaction
Diphenylbenzidine              NO2    50 ppm/60 s       900        No
2,4-Dinitrophenylhydrazine*    NO2    50 ppm/60 s       2,800      Yes
o-Toluidine                    NO2    50 ppm/60 s       <100
TEDA                           SO2    50 ppm/60 s       1,000      Yes
Na[HgCl2]                      SO2    50 ppm/60 s
Pb(C2H3O2)2**                  H2S
CuSO4***                       H2S    50 ppm/60 s       2,000      Yes
Ninhydrin                      NH3    50 ppm/60 s       100
CoCl2                          NH3    50 ppm/20 s       2,700      Yes
PVP                            HCl    (known to react)

* Reacted only in presence of high RH
** Reacted in a previous study, but not in these tests
*** Reaction occurred after repeated H2S exposure
Based on the results of Table 2, the following coatings were
selected for more careful evaluation. 2,4-Dinitrophenyl-
hydrazine was not used for NO2; rather, TEDA was used for
both SO2 and NO2.
Toxic Gas       Coating Material
HCl             Polyvinylpyridine (PVP)
NO2 and SO2     Triethylenediamine (TEDA)
H2S             Copper sulfate (CuSO4)
NH3             Cobaltous chloride (CoCl2)
TEST AND EVALUATION OF SAW SENSORS AS MONITORS FOR
TOXIC GASES
1. Coating of SAW Sensors
The coating procedure used was the same as described
above. Both SAW delay lines on each device were coated
simultaneously, and the amount deposited was measured
and recorded. The identification number of each device and
the coating mass (in terms of frequency shift, Af) are
given in Table 3. The coatings applied are very thin, on the
order of a micron or so in thickness, on the average.
2. Evaluation of SAW Sensors as Monitors for Toxic
Gases
The frequency difference, Δf, of each SAW device being
tested was input to an Apple Macintosh computer where the
data was collected and displayed. The test system evaluated
77
-------
only one sensor at a time against a single toxic gas. Even
though each of the coating materials being tested could very
likely react with more than one gas, binary gas mixtures
and interference studies were not included in this
preliminary investigation. Interference studies will be a
part of the follow-on study, using multiple sensor arrays
and other techniques to address the problem of sensor
specificity.
The gas dilution chamber was again used to deliver known
concentrations of each test gas to the SAW sensors at a
constant flow rate of 200 cc/min at ambient pressure, and
a constant "baseline" frequency established for each SAW
device by exposing it to a clean, dry air stream. Once a
constant baseline frequency was established, the sensor
was exposed to a predetermined "dose" of the selected toxic
gas. The size of the dose could be varied from 10 to 100
ppm over any selected time interval. After exposure to the
toxic gas, the sensor was again exposed to clean, zero air
until a new baseline frequency was established. The
difference between the initial baseline and the final
baseline was taken as the frequency shift due to the
irreversible reaction of the toxic gas with the coating
material. The magnitude of this frequency shift could be
correlated with the amount of toxic gas interacting with the
sensor.
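The dosimetry idea behind this protocol can be written as a simple proportionality: the irreversible baseline shift is roughly the product of a coating-specific sensitivity (in Hz/ppm/sec) and the accumulated dose (concentration times time). The sketch below states that relation; the 5 Hz/ppm/sec value in the example is taken from the CoCl2 results reported later and is used here only for illustration.

# Dosimetry relation implied by the exposure protocol:
#     delta_f  ~  S * C * t
# where S is a coating-specific sensitivity (Hz/ppm/sec), C the gas
# concentration (ppm) and t the exposure time (seconds).
def expected_shift(sensitivity_hz_ppm_s, conc_ppm, time_s):
    """Expected irreversible baseline shift (Hz) for a given exposure."""
    return sensitivity_hz_ppm_s * conc_ppm * time_s

def estimated_dose(delta_f_hz, sensitivity_hz_ppm_s):
    """Accumulated exposure (ppm*s) inferred from a measured baseline shift."""
    return delta_f_hz / sensitivity_hz_ppm_s

print(expected_shift(5.0, 20, 20))    # 5 Hz/ppm/sec coating, 20 ppm for 20 s -> 2000 Hz
print(estimated_dose(2000.0, 5.0))    # a 2000 Hz shift -> 400 ppm*s accumulated exposure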
The intent of the tests was to quickly look for order of
magnitude changes in frequency and general reproducibility
of performance when exposed to moderate changes in
gas concentrations; i.e., to identify coatings that could be
used in a more comprehensive follow-on development
program. This study did not include a careful
characterization of each coating reaction. In any event an
accurate characterization of the surface reactions would be
difficult without a more careful control of trace water,
both in the hygroscopic coating materials and the gas
delivery system.
3. Exposure of CoCl2 Coated SAW Sensor to NH3 Gas
The SAW devices were at ambient temperature and thus
subject to room temperature fluctuations (≈ 25 ± 1°C).
Although a reference SAW device was used to
compensate for both temperature and pressure changes,
the compensation is not exact, and may have caused some
small, random drift in device background frequency. These
slow changes occurred in cycles of many minutes and thus
did not adversely affect the measurements. Even though a
number of the coating materials have a small volatility,
the signal drift reflected "apparent" increases as well as
decreases in weight. Thus volatility did not have a
measurable effect on the measurements. Once a device was
equilibrated with the laboratory environment (temper-
ature and pressure) the slow baseline drift was usually on
the order of ± 50 Hz. In addition to temperature changes
and the possibility of volatility, the baseline drift may also
be due in part to changes in gas flow rate (due to changes in
flow through the non-precision needle valve used to set the
flow rate). Even with the small observed background
drift, the following data show that system performance was
excellent and clearly able to detect and monitor changes in
SAW frequency upon exposure to the challenge gases.
Sensor drift will be corrected for in the follow-on
Personal Monitor development program.
An example of data for the exposure of ammonia to the
CoCl2 coated SAW devices is shown in Figure 1. An
exposure of 20 ppm NH3 for 20 seconds was selected for
testing the CoCl2 coated sensors. When the NH3 was
introduced, there was a large initial decrease in SAW
frequency followed by a rapid increase. Each point on the
curve corresponds to a 2 second time interval. After 20
seconds, when the clean air at 200 cc/min was again
introduced, Δf continued to increase through a small
maximum and then leveled off to a new, higher, baseline
value. The initial negative "spike" in the Δf vs. time plot
may be due in part to disruption and re-establishment of a
constant gas flow rate, while the subsequent increase in Δf
most probably results from both adsorption and reaction of
the NH3 with the CoCl2 coating. The maximum may result
from a more gradual desorption of non-reacted NH3 from
the coating. The equilibrium frequency shift values for all
devices are shown in Table 4.
Figure 1. Frequency Shift (Hz) vs. Time for Repeat
Exposure of CoCl2 Coated SAW Device
(9024-11) to 20 ppm NH3 for 20 Sec.
Table 3. Thickness of SAW Device Coatings

Coating Material    Device Number    Coating Thickness, Side "A" (KHz)
PVP                 9024-1           255
                    9024-2           198
                    9024-3           198
CuSO4               9024-7           149
                    9024-8           150
                    9024-9           196
CoCl2               9024-10          136
                    9024-11          112
                    9024-12          106
TEDA                9024-4           149
                    9024-5           178
                    9024-6           300
78
-------
Table 4. Frequency Shifts for CoCl2 Coated SAW
Devices Upon Repeated Exposure to
20 ppm NH3 for 20 seconds

Device Number         Exposure                            Frequency Shift
9024-10               a. - d. (dose optimization test)
(Coating 112 KHz)     e.                                  1,200 Hz
                      f.                                  0 Hz
9024-11               a.                                  4,000 Hz
(Coating 136 KHz)     b.                                  4,000 Hz
                      c.                                  1,000 Hz
9024-12               a.                                  2,600 Hz
(Coating 106 KHz)     b.                                  2,000 Hz
                      c.                                  1,200 Hz
                      d.                                  1,600 Hz
                      e.                                  2,000 Hz
                      f.                                  0 Hz
From the data in Table 4 it is evident that CoCl2 coated
SAW devices show large (kilohertz), irreversible shifts
in frequency when exposed to small doses of ammonia, and
that with continued exposure the coatings saturate as
expected. Even allowing for the variation in response of
the different sensors, the sensitivity of the CoCl2 coatings,
i.e., those with some residual capacity, is on the order of 5
to 10 Hz/ppm/sec. Considering that the background noise
level of the SAW sensors is on the order of 15 Hz, a ten
second exposure of a sensor to 1 ppm NH3 would give a
signal of better than 50 Hz, at least three times the
background noise. Thus the CoCl2 coatings have more than
enough sensitivity to detect ammonia at concentrations
below the OSHA Exposure Limit of 50 ppm NH3 for an 8
hour weighted average.
4. Exposure of CuSO4 Coated SAW Sensor to H2S Gas
The test procedure was essentially the same as described
above. Typical results are shown in Figure 2 for device
9024-7. H2S shows a decrease in SAW frequency with
exposure rather than an increase in Δf as observed with
the reaction of NH3 with the CoCl2. Also, there was no
initial "spike" in Δf when the challenge gas was introduced.
Upon repeated exposure, the frequency shifts became
progressively smaller, due to saturation of the reactive
sites of the CuSO4 coating.
The Δf values for the CuSO4 coated sensors 9024-7 and
9024-8 are given in Table 5. SAW device 9024-9
apparently became defective during the coating process.
SAW device 9024-7 was exposed five times to 20 ppm of
H2S for 20 seconds. With the initial dose of H2S, Δf
decreased by 1,400 Hz. The second exposure decreased Δf
by only 400 Hz. Subsequent doses caused essentially no
further change in Δf. Thus the CuSO4 coatings were
essentially saturated by a single 20 ppm dose of H2S for
20 seconds.
Figure 2. Frequency Shift (Hz) vs. Time for Repeat
Exposure of CuSO4 Coated SAW Device
(9024-7) to 20 ppm H2S for 20 Sec.
Table 5. Frequency Shifts for CuSO4 Coated SAW
Devices Upon Exposures to 20 ppm H2S
for 20 seconds

Device Number         Exposure    Frequency Shift
9024-7                a.          1,400 Hz
(Coating 149 KHz)     b.          400 Hz
                      c.          100 Hz
                      d.          0 Hz
                      e.          0 Hz
9024-8                a.          1,400 Hz
(Coating 150 KHz)
9024-9                (device defective after coating)
(Coating 196 KHz)
Thus the CuSO4 coated SAW devices, like the CoCl2 coated
devices, do give large (KHz), irreversible shifts in
frequency when exposed to small doses of an appropriately
reactive gas, and with continued exposure the coatings
saturate as expected. The sensitivity of a newly prepared
CuSO4 coating is on the order of 3 to 4 Hz/ppm/sec. With
background noise on the order of 15 Hz, a ten second
exposure to 1 ppm H2S would give a signal of around 30 to
40 Hz, equivalent to a signal to noise ratio of 2:1. The
detection limit of this coating is thus also well below the
OSHA Exposure Limit of 20 ppm H2S for an 8 hour
weighted average.
5. Exposure of TEDA Coated SAW Sensor to SO2 Gas
The procedure used to test the TEDA coated SAW sensors
with SO2 was the same as described above. Typical results
are shown in Figure 3 for device 9024-6. The results for
device 9024-5 were similar. SAW device 9024-4 was
reserved for testing with NO2, which was expected to react
with TEDA in much the same way as SO2. A rather
unexpected behavior was observed when the TEDA coated
devices were initially exposed to SO2. For the first few
79
-------
exposures of 20 ppm SO2 (20 seconds), the coatings did
not respond significantly. After several repetitions,
however, the coatings did begin to respond with positive
shifts in Δf with the continuing exposure. Thus it appears
there was a "conditioning" period, after which the coatings
began to respond. The "conditioning" must be associated
with some chemical change in the coatings upon exposure to
the test gas, or to the zero air, most likely involving
associated water. As each device, after being coated, was
covered with a close fitting lid (but not hermetically
sealed) and stored in a ≈ 10% RH environment, they must
have adsorbed some water vapor (or perhaps another
ambient gas) which was subsequently desorbed from the
coatings by the dry (< 1% RH) zero air and/or the dry
sample (SO2) air. This "conditioning" or "ageing" effect
was not further explored at this time, but will of necessity
be investigated in the follow-on study in order to provide
coatings that behave predictably and reproducibly.
Figure 3(a). Frequency Shift (Hz) vs. Time for Repeat
Exposure of TEDA Coated SAW Device
(9024-6) to 20 ppm SO2 for 20 Sec.
(First exposure, a)

Figure 3(b). Frequency Shift (Hz) vs. Time for Repeat
Exposure of TEDA Coated SAW Device
(9024-5) to 20 ppm SO2 for 20 Sec.
(Exposures d to h)
After the initial induction period, the frequency shift vs.
time plots in Figures 3(a) and 3(b) show an increase in the
SAW baseline with each 20 second dose of SO2, after the
initial "spike" in Δf. Device 9024-5 was allowed to stand
in the test apparatus for approximately two hours with
continuous exposure to zero air before the run. Even so,
it wasn't until exposure f that the device began to respond.
Somewhat similar behavior was observed for device
9024-6; however, the conditioning period was much
shorter. For both device 9024-5 and 9024-6, once the
coatings became reactive, the shifts in frequency were
regular and irreversible.
The frequency shifts are given in Table 6. The data clearly
show the induction period during which there was no effect
of SO2 exposure, and the subsequent increases in Δf when
reaction began to occur. If we assume an average response
of 1,200 Hz for device 9024-5 and 1,800 Hz for device
9024-6, the sensitivities are approximately 3 and 4.5
Hz/ppm/sec, respectively. The coating on device 9024-6
had considerably more mass than the coating on 9024-5
(300 KHz vs. 178 KHz); thus one would expect a
proportionately higher sensitivity to SO2, which was
observed. Thus, allowing for the difference in coating mass,
the two coated devices had essentially equivalent
sensitivities.
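The sensitivity figures quoted above follow directly from the average shifts and the 20 ppm x 20 second dose; the short sketch below simply repeats that division (numbers taken from the text).

# Sensitivity (Hz/ppm/sec) from the average post-induction frequency shift and
# the 20 ppm x 20 s dose used in the tests (values from the text).
DOSE_PPM_S = 20 * 20    # 400 ppm*s per exposure

for device, avg_shift_hz in [("9024-5", 1200.0), ("9024-6", 1800.0)]:
    print(f"{device}: {avg_shift_hz / DOSE_PPM_S:.1f} Hz/ppm/sec")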
Table 6. Frequency Shifts for TEDA Coated SAW
Devices Upon Repeated Exposure to
20 ppm SO2 for 20 seconds

Device Number         Exposure    Frequency Shift
9024-5                a.          0 Hz
(Coating 178 KHz)     b.          0 Hz
                      c.          0 Hz
                      d.          0 Hz
                      e.          0 Hz
                      f.          800 Hz
                      g.          1,400 Hz
                      h.          1,000 Hz
9024-6                a.          0 Hz
(Coating 300 KHz)     b.          0 Hz
                      c.          200 Hz
                      d.          1,600 Hz
                      e.          2,000 Hz
                      f.          1,800 Hz
With sensitivities of about 3 to 4 Hz/ppm/sec, depending
upon coating thickness, and a background noise level of 15
Hz for the SAW devices, the sensors should ultimately
detect concentrations of SO2 as low as 1 ppm within 10
seconds at a signal to noise ratio of about 2:1. With this
sensitivity, these coatings should easily detect SO2 at or
below the OSHA Exposure Limit of 5 ppm SO2 for an 8 hour
weighted average.
-------
6. Exposure of TEDA Coated SAW Sensor to NO2 Gas
It was anticipated that TEDA would respond to NO2 in much
the same manner as to SO2; however, the data for the one
available sensor showed quite different behavior. First, no
conditioning period was observed. The first 20 second dose
of 20 ppm NO2 gave a relatively small but definite
increase in SAW frequency which apparently saturated the
sensor, as no further increase in Δf was observed with
additional exposure to NO2. The frequency shift data are
given in Table 7. The baseline shift of approximately 350
Hz for an exposure of 20 ppm NO2 for 20 seconds is
equivalent to about 1 Hz/ppm/sec, well below the
sensitivity to SO2. With a sensitivity of approximately 1
Hz/ppm/sec, and a background noise level of 15 Hz, the
TEDA coated sensors would have to be exposed to 1 ppm NO2
for over 30 seconds to give a 2:1 signal to noise ratio. In
addition, the film apparently has a very low capacity for
NO2 (i.e., saturating at a very low exposure
concentration). TEDA is therefore of only marginal utility
as a dosimeter coating for NO2.
Table 7. Frequency Shifts for TEDA Coated
SAW Devices Upon Repeated Exposure
to 20 ppm NO2 for 20 seconds

Device Number         Exposure    Frequency Shift
9024-4                a.          350 Hz
(Coating 149 KHz)     b. - g.     0 Hz
7. Exposure of PVP Coated SAW Sensors to HCl Gas
Device 9024-1 was given 5 separate exposures to 20 ppm
of HCl for 20 seconds, over approximately a 30 minute
period, with no apparent reaction of the HCl with the PVP.
We know from previous studies that surface films of PVP
do react with HCl, thus the lack of response must be
similar to the "conditioning" period observed for SO2 gas
on TEDA. To accelerate the reaction, the PVP coated device
9024-1 was exposed to a higher concentration of HCl
(100 ppm) for 2 minutes. The result was a very large
increase in Δf, over 30,000 Hz in the 2 minute period, as
shown in Table 8. A second large dose (100 ppm over a
60 second period) further increased Δf by only 4,800 Hz,
indicating that the PVP coating was approaching saturation.
The estimated sensitivity, based on the 30,000 Hz shift, is
about 3 Hz/ppm/sec.
Device 9024-2 was exposed to repetitive doses of HCl at a
concentration of 25 ppm for 20 seconds. The results given
in Table 8 indicate no conditioning period was needed. The
very first exposure gave an increase of about 900 Hz and
appeared to be stable with time. Subsequent exposures also
increased Δf, until the film began to saturate. Sensitivity
based on the initial exposure is about 2 Hz/ppm/sec.
Device 9024-3 did require a conditioning period when
exposed to 25 ppm HCl for 20 seconds. HCl exposures
were increased to 50 ppm for 30, 60 and 90 seconds
before an increase in Δf was observed. With the final
exposure, a frequency increase of approximately 6,400 Hz
was observed.
Table 8. Frequency Shifts for PVP Coated SAW
Devices Upon Repeated Exposure to HCl

Device Number         Exposure                 Frequency Shift
9024-1                a. (20 ppm, 20 sec)      0 Hz
(Coating 255 KHz)     b. (20 ppm, 20 sec)      0 Hz
                      c. (20 ppm, 20 sec)      0 Hz
                      d. (20 ppm, 20 sec)      0 Hz
                      e. (20 ppm, 20 sec)      0 Hz
                      f. (100 ppm, 120 sec)    30,000 Hz
                      g. (100 ppm, 60 sec)     4,800 Hz
9024-2                a. (25 ppm, 20 sec)      900 Hz
(Coating 198 KHz)     b. (25 ppm, 20 sec)      600 Hz
                      c. (25 ppm, 20 sec)      400 Hz
                      d. (25 ppm, 20 sec)      600 Hz
                      e. (25 ppm, 20 sec)      400 Hz
                      f. (25 ppm, 20 sec)      200 Hz
9024-3                a. (25 ppm, 20 sec)      0 Hz
(Coating 198 KHz)     b. (25 ppm, 20 sec)      0 Hz
                      c. (25 ppm, 20 sec)      0 Hz
                      d. (50 ppm, 30 sec)      0 Hz
                      e. (50 ppm, 60 sec)      0 Hz
                      f. (50 ppm, 90 sec)      6,400 Hz
The sensitivities of the PVP coated SAW devices were in the
range of 1 to 3 Hz/ppm/sec. Device 9024-1, with the
greatest apparent sensitivity (3 Hz/ppm/sec), had the
highest coating mass, as would be expected. Thus the
results for the three devices are consistent. With a
sensitivity of 1 to 3 Hz/ppm/sec, a sensor would have to
be exposed to 1 ppm HCl for 10 to 30 seconds to give a 2:1
signal to noise ratio. The PVP films do appear to have a
high capacity for HCl, as evidenced by the 30,000 Hz shift
for device 9024-1. Considering that the OSHA Exposure
Limit is 5 ppm HCl for an 8 hour weighted average, the
PVP coating should be considered a good candidate for
further development as a coating for monitoring acid gases.
CONCLUSION
In the evaluation of the various SAW coatings it was found
that for each toxic gas, except NO2, a relatively large,
easily measured SAW response was observed when an
appropriate coating was exposed to small concentrations.
The measured sensitivities show that each toxic gas studied
(except NO2) could be detected by a SAW sensor well below
the "action level" set by OSHA, when monitored for a
period of one minute or less. The candidate coatings, toxic
gases, and the respective OSHA exposure limits are:

Candidate Coating             Toxic Gas      OSHA Exposure Limit (8 hour Weighted Ave.)
polyvinylpyridine (PVP)       HCl            5 ppm
triethylenediamine (TEDA)     NO2 and SO2    5 ppm
copper sulfate (CuSO4)        H2S            20 ppm
cobaltous chloride (CoCl2)    NH3            50 ppm
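As a rough summary check, the sketch below combines the lower-end sensitivities reported in the body of the paper with the 15 Hz noise figure to estimate how long each coated sensor would have to sample at its OSHA limit concentration to reach a 2:1 signal-to-noise ratio. The sensitivity values and the 2:1 criterion come from the text; the calculation itself is only an illustration, and NO2 is omitted because the TEDA coating saturates at very low NO2 exposures.

# Exposure time needed at the OSHA 8-hour limit concentration to reach a 2:1
# signal-to-noise ratio, using lower-end sensitivities from the body of the
# paper and the 15 Hz RMS noise figure.  NO2 is omitted (very low TEDA capacity).
NOISE_HZ = 15.0
TARGET_SNR = 2.0

coatings = {
    # gas: (OSHA limit, ppm; sensitivity, Hz/ppm/sec; coating)
    "HCl": (5.0, 1.0, "PVP"),
    "SO2": (5.0, 3.0, "TEDA"),
    "H2S": (20.0, 3.0, "CuSO4"),
    "NH3": (50.0, 5.0, "CoCl2"),
}

for gas, (limit_ppm, sens, coating) in coatings.items():
    t = TARGET_SNR * NOISE_HZ / (sens * limit_ppm)   # seconds at the limit concentration
    print(f"{gas:>3} on {coating:>5}: about {t:.1f} s at {limit_ppm:g} ppm for 2:1 S/N")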
The study thus successfully achieved its objective of
demonstrating that: (1) the SAW sensors and necessary
support electronics can be appropriately miniaturized;
81
-------
(2) a number of successful coatings are readily available
and others can certainly be identified in the literature, or
developed, for additional toxic gases; and (3) SAW sensors
are sufficiently sensitive to meet OSHA requirements, at
least for the toxic gases selected for this demonstration
study. A number of technical problems and/or potential
limitations of the technology were identified and
approaches suggested for their solution. Based on the
results of this program, we conclude that a prototype
Surface Acoustic Wave Personal Monitor for Toxic Agents
could be readily developed in a follow-on program. In
addition to use as a Personal Monitor, such a small,
sensitive and rugged solid state instrument could possibly
find other applications in the field screening for toxic
chemicals. In all applications, however, the usefulness of
SAW sensors will increase with the continued development
of more sensitive and selective device coatings.
ACKNOWLEDGEMENT
This research was supported by the Department of Health
and Human Services, Public Health Service, Small Business
Innovation Research (SBIR) Program, under Phase I Grant
No. 1R43 ES5039-01A1.
REFERENCES
1. H. Wohltjen and R.E. Dessy, "Surface Acoustic Wave
Probe for Chemical Analysis I. Introduction and
Instrument Design", Anal. Chem., 51(9), 1458-
1464 (1979).
2. H. Wohltjen and R.E. Dessy, "Surface Acoustic Wave
Probe for Chemical Analysis II. Gas Chromatography
Detector", Anal. Chem., 51(9), 1465-1470 (1979).
3. H. Wohltjen and R.E. Dessy, "Surface Acoustic Wave
Probe for Chemical Analysis III. Thermomechanical
Polymer Analyzer", Anal. Chem., 51(9), 1 470-
1475 (1979).
4. H. Wohltjen and H. Ravner, "The Determination of the
Oxidative Stability of Several Deuterated Lubricants
by an Electronic Gas Sensor", Lubrication
Engineering, 39(11), 701-705 (1983).
5. (Invited) H. Wohltjen, "Chemical Microsensors and
Microinstrumentation", Analytical Chemistry,
56(1), 87A-103(1984).
6. H. Wohltjen, "Mechanism of Operation and Design
Considerations for Surface Acoustic Wave Vapor
Sensors", Sensors and Actuators, 5 (4), 307-325
(1984).
7. H. Wohltjen, W. R. Barger, A. W. Snow, and N. L.
Jarvis, "A Vapor Sensitive Chemiresistor Fabricated
with Planar Microelectrodes and a Langmuir-Blodgett
Organic Semiconductor Film", IEEE Trans. on Electron
Devices, ED-32, No. 7, 1170-1174 (1985).
8. W.R. Barger, J.F. Giuliani, N.L. Jarvis, A.Snow, and H.
Wohltjen, "Chemical Microsensors- A New Approach
for the Detection of Agro Chemicals", Environ. Sci.
Health, B20(4), 359-371 (1985).
9. W.R. Barger, A.W. Snow, H. Wohltjen, and N. L.
Jarvis, "Derivatives of Phthalocyanine Prepared for
Deposition as Thin Films by the Langmuir-Blodgett
Technique", Thin Solid Films, 133, 197 206 (1985).
10. A.W. Snow, W.R. Barger, M. Klusty, H. Wohltjen, and
N.L. Jarvis, "Simultaneous Electrical Conductivity and
Piezoelectric Mass Measurements on Iodine-Doped
Phthalocyanine Langmuir-Blodgett Films", Langmuir,
2, 513-519 (1986).
11. D.S. Ballantine, S.L. Rose, J.W. Grate, and H.
Wohltjen, "Correlation of SAW Coating Responses with
Solubility Properties and Chemical Structure Using
Pattern Recognition", Anal. Chem. 58, 3058 (1986).
12. G. S. Calabrese, H. Wohltjen, and M.K. Roy, "Surface
Acoustic Wave Devices as Chemical Sensors in
Liquids", Anal. Chem. 59, 833 (1987).
13. J. W. Grate, A. W. Snow, D. S. Ballantine, Jr., H.
Wohltjen, M. H. Abraham, R. A. McGill, and P. Sasson,
"Determination of Partition Coefficients from Surface
Acoustic Wave Vapor Sensor Responses and
Correlation with Gas-Liquid Chromatographic
Partition Coefficients", Anal. Chem. 60, 869 (1988).
82
-------
DISCUSSION
WILLIAM BOWERS: You showed some data on individual sensor responses
for single exposures. Have you done any interference effects on some of these?
I am glad to see you're going to resonators now.
N. LYNN JARVIS: We did no interference studies in this particular program.
You could probably tell that many of the coatings used would respond to more
than one vapor. These were not selective coatings in that sense. Selectivity is
much more difficult to get. That's why we end up using an array of sensors to get
the selectivity. Resonators are much, much nicer.
MICHAEL CARRABBA: When you put the coating on these SAW devices,
and the coating goes over electrodes, is the area on the whole surface sensing the
weight or is it just the area between the electrodes, or the area on the electrodes?
N. LYNN JARVIS: The whole surface area senses the weight. The wave will
cover most of the surface. Most of the surface is sensitive and you get a response.
PHILLIP GREENBALM: Have you tried attaching antibodies to these? And
if not, do you think that would be a problem?
N. LYNN JARVIS: We have not and you could certainly attach them. The
problem is that antibodies are very large, and you're trying to attack very small
molecules with the antibody. You may get a very small signal, i.e., the change in
weight is very small. Sensitivity might be fairly low in this case. It would not be
a way we would probably choose to go with these particular sensors. There are
probably better sensors for that.
MAHADEVA SINHA: Are these things disposable once you use them? After a
certain while do you throw them out?
N. LYNN JARVIS: Yes. In this system, once a sensor is used up, we propose to
throw it away and plug in a new one.
MAHADEVA SINHA: You talked about the reversibility of some of the
reactions. What did you mean by that?
N. LYNN JARVIS: There are two ways you can go with a coating on a SAW
device. You can use coatings where the vapors absorb onto the coating, depending
on solubility characteristics and other factors. They will absorb when the vapor
is present. When the vapor challenge is removed, it desorbs again from this
polymer and is removed. So it's a completely reversible system with certain
vapor coating combinations. You can use a coating where there is no chemical
reaction. However, if you have a chemical reaction, then it is completely
irreversible, which is what we're looking for in this particular application. In
some applications you want reversibility; in some you don't, depending on the
intended use.
EDWARD POZIOMEK: In your last viewgraph and also in your comments
you mentioned the possibility of the wide applications to environmental
measurements, and you said something about putting a SAW down a well.
Perhaps you could comment on the state of this SAW technology for use in
liquids, because the applications presented here were for vapors or for gases.
N. LYNN JARVIS: If we put a sensor in a well, it would have to be within the
well headspace to be monitored, not the liquid. The technology for SAWs in
liquid is very poorly developed, and is just barely beginning. We know of no
really effective way to monitor using a SAW in solution.
83
-------
ARRAYS OF SENSORS AND MICROSENSORS
FOR FIELD SCREENING OF UNKNOWN CHEMICAL WASTES
W.R. Penrose, J.R. Stetter, M.W. Findlay, and W.J. Buttner
Transducer Research, Inc., Naperville, IL 60540
Z. Cao, Illinois Institute of Technology,
Department of Chemistry, Chicago, IL 60616
Abstract
The high cost of laboratory-based analysis has
driven the development of rapid screening
methods for hazardous chemicals in unknown
wastes. Screening methods permit the "triage"
of samples into those that (a) contain no
regulated wastes, (b) definitely contain
regulated chemicals, or (c) are ambiguous.
Only the last category requires detailed
analysis.
The requirements of portability and ease of use
place extraordinary demands on the designers of
analytical instruments. In this paper, we will
discuss several approaches to obtaining
qualitative analytical data from multiple
sensors or highly-selective sensors. These
are: (a) a sensor with a selectivity 1000-
10000 times greater for chlorinated or
brominated compounds than for unsubstituted
ones; and (b) pyrolysis-EC, which uses
catalytic pyrolysis, arrays of electrochemical
sensors, and pattern recognition methods to
identify pure chemicals and mixtures. Two
applications of the latter are described, the
rapid identification of chemical vapors, and
the grading of grain according to "odor".
Introduction
The high cost of laboratory-based analysis has
driven the development of rapid screening
methods for hazardous chemicals in unknown
wastes. A screening method is one that can be
done on-site, by non-chemists, inexpensively
and safely. On the other hand, a screening
method is less likely to provide the definitive
data that a full laboratory analysis, perhaps
requiring GC/MS or ICP, might give. In the
case where no information is available, however,
even limited information can be of value,
especially if it is used to supplement data
gathered from other sources. For example, a
suite of simple screening methods may be used for
the "triage" of unknown samples into positive,
negative, and ambiguous groups. Often, the
nature of the chlorinated compounds may be known
from purchase or production records, so that only
the ambiguous category may require detailed
analysis. Screening methods may also be useful
for confirming conclusions that have already been
drawn from independent data, for example, that a
collection of similar barrels do indeed contain
the same materials.
The willingness to accept reduced certainty for
the sake of economy and practicality opens the
door to a wide variety of useful techniques that
can be used in the field. In this paper, we will
describe two such methods.
A unique semiconductor sensor has been found that
is very sensitive to chlorinated and brominated
organic compounds (1-3). It shows no detectable
response to hydrocarbons, oxygen- or nitrogen-
containing organic compounds, or fluorocarbons.
A second method that has given us promising
results has been catalytic pyrolysis of chemical
vapors combined with electrochemical detection.
Compounds that are not normally thought of as
electrochemical analytes, such as chloroform or
cyclohexane, can be partially oxidized on a hot
platinum surface (4). The volatile products
always include some that give a response on a
porous-electrode electrochemical sensor. We have
confirmed over several years that the products of
the pyrolysis are reproducible for most organic
and some inorganic compounds when the conditions
are kept reasonably constant (5). We have also
85
-------
confirmed the critical requirement that the
products are independent of analyte
concentration, at least at concentrations
below 200 ppm. We call this method pyrolysis-
EC.
The present embodiment of pyrolysis-EC is an
instrument we call the CPS-100. This device
uses an array of electrochemical gas sensors
with different, but overlapping, selectivities.
The incoming gases are pyrolyzed over noble
metal catalysts heated at controlled
temperatures. The operation of the instrument
is orchestrated by a fairly powerful computer
which can perform pattern analysis on the
resulting data. In this paper, we report the
results of a study on pattern recognition of
odors in spoiled grain. The unique properties
of neural networks have been shown to have
significant potential for handling low-quality
information. On reflection, this unique
application is not so different from the
problems encountered in classifying and
handling hazardous wastes.
A simplified implementation of pyrolysis-EC has
also been tested that uses a single sensor and
a single catalytic filament. This drastically
simplified system was still capable of
distinguishing many organic chemicals. With
fewer parts and lower power consumption, this
simplified configuration may be suitable for
selective hand-held vapor monitors.
Experimental Methods
Organochlorine sensor. The sensor was made by
mounting a coil of platinum wire on a threaded
base. A separate platinum wire is also mounted
on the base and located axially within the
coil. A mixture of lanthanum oxide, lanthanum
fluoride, and a binder was applied to the coil.
The coil was slowly heated with an electric
current until a reaction occurred, forming the
active material. The sensor is used by heating
it to 550 °C with an electric current;
conductivity is measured between the heating
coil and the separate platinum electrode. When
the sensor contacts the vapor of a chlorinated
organic compound, the conductivity increases.
A simple circuit can be used to provide a
voltage output which is proportional to the
concentration.
Permeation device. The permeation sampler
consisted of a bundle of 0.025" o.d.
dimethylsilicone tubing (Silastic, Dow-Corning)
(Figure 1). The bundle could be placed in an
aqueous sample containing dissolved organic or
organochlorine compounds. A continuous flow of
air was circulated through the lumen of the
tubing, and organic material diffusing inward
through the silicone membrane entered the gas
phase. In a typical experiment, two permeators
were used to provide separate reference and
sample signals (Figure 2).
Pyrolysis-EC. The CPS-100 Toxic Gas Monitor has
been described in several earlier publications
(5-11); its configuration is diagrammed in Figure
3. The four sensors had platinum or gold working
electrodes. For the grain odor experiments, the
sensors were biased at differing oxidizing
potentials, since reducing potentials gave very
low or poor signals. A single rhodium pyrolysis
filament was operated at 25, 450, 750, and 850 °C.
The combination of four sensors and four
temperatures gave an array of sixteen data points
per analysis.
The apparatus for simplified pyrolysis-EC
consisted of a single platinum filament and a
single platinum-electrode gas sensor. A control
circuit maintained the catalyst at any one of
four preselected temperatures. The filament was
enclosed in a Teflon-lined chamber of small
volume through which the analyte gas was pumped
at about 50 cc/min. The gas then passed through
a short tube to the sensor. The experiments were
controlled, and data gathered, by a commercial
datalogger (Onset Computer Corp., N. Falmouth,
MA).
Gas samples. Accurate samples of test compounds
in vapor form were made by injecting measured
volumes of the liquids into 40-liter Tedlar gas
bags and filling with air pumped through a
charcoal/Purafil filter. A flowmeter together
with a stopwatch was used to determine the volume
of air being pumped into the bag. Samples of
permanent gases were made from standard mixtures
obtained from commercial sources. Volumes of the
standard mixtures and air were calculated and
pumped into a sample bag, using the flowmeter and
stopwatch to determine the volumes.
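The arithmetic behind these bag standards is simple; the sketch below
(Python) illustrates it for an assumed chloroform injection, treating the
vapor as an ideal gas. The compound properties and volumes in the example
are illustrative values, not data from this study.

    # Estimating the vapor concentration produced by injecting a measured
    # liquid volume into a gas bag (ideal gas, complete evaporation assumed).
    MOLAR_VOLUME_L = 24.45          # L/mol for an ideal gas at ~25 C, 1 atm

    def bag_ppm(liquid_ul, density_g_ml, mol_wt_g_mol, bag_volume_l):
        """Vapor concentration (ppm v/v) in a bag of the given volume."""
        moles_analyte = (liquid_ul * 1e-3) * density_g_ml / mol_wt_g_mol
        moles_air = bag_volume_l / MOLAR_VOLUME_L
        return 1e6 * moles_analyte / moles_air

    # e.g., 5 uL of chloroform (d = 1.49 g/mL, MW = 119.4) in a 40-L Tedlar bag
    print(round(bag_ppm(5, 1.49, 119.4, 40.0)))   # about 38 ppm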
Samples from grain odors were generated by
heating a sample of grain to 60 °C and flushing
with a measured volume of air. The effluent air
was passed through an ice trap to collect a "non-
volatile" fraction and a liquid nitrogen trap to
collect the "volatiles". The two fractions were
run separately and in duplicate. Grain samples
were obtained from Drs. L. Seitz and O. Saur of
the USDA Grain Marketing Research Laboratory.
86
-------
Results and Discussion
Organochlorine sensor. Typical responses of
the sensor to different vapors in air are shown
in Figure 4. The sensor was exposed to 100 ppm
concentrations of chlorobenzene, benzene, and
n-hexane. Only chlorobenzene caused a
response. Of a series of compounds
investigated, only HCl, and compounds
containing carbon-chlorine and carbon-bromine
bonds, gave a response (Table I). The response
to concentration is essentially linear over at
least four orders of magnitude.
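A calibration spanning that range is conveniently checked on logarithmic
axes; the sketch below (Python) fits a power law to invented calibration
points (a log-log slope near 1 indicates the linear response described
here) and inverts the fit to estimate a concentration from a reading. The
numbers are placeholders, not measurements from this sensor.

    # Log-log calibration check for a sensor with a linear response.
    import numpy as np

    conc = np.array([1.0, 10.0, 100.0, 1000.0])    # ppm (hypothetical)
    resp = np.array([0.003, 0.031, 0.29, 3.1])     # output (hypothetical)

    # Fit log(response) = m*log(conc) + b; m near 1 means a linear response.
    m, b = np.polyfit(np.log10(conc), np.log10(resp), 1)

    def to_ppm(signal):
        """Invert the power-law calibration to estimate concentration."""
        return 10 ** ((np.log10(signal) - b) / m)

    print("slope %.2f, 0.15 reading -> about %.0f ppm" % (m, to_ppm(0.15)))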
Combined with the permeator device, the highly-
selective organochlorine sensor was shown to
respond rapidly to dissolved material. Figure
5 shows the response to chloroform in water at
concentrations that dip below the part-per-
million level. This sensor can be used to
measure an organochlorine in groundwater, for
example, without any sample preparation. Many
sites, especially military bases, and areas
such as Rockford, Illinois, where there is a
large concentration of machine shops, have
serious problems with chlorinated C2 compounds
in the groundwater. In these cases, the nature
of the compounds is generally known, and
selectivity is not a concern. Nevertheless,
the sampling procedure, sample preparation, and
gas chromatography to determine these compounds
are involved and expensive. The availability of
a simple probe that can just be inserted into
a groundwater sample will greatly reduce the
number of laboratory analyses that need to be
done. The silicone material is chemically
resistant, and can be left in place for years.
Particulates cannot enter the system. Lastly,
and importantly, the permeator is very
inexpensive.
Pyrolysis-EC: Grain Odors. Only a few organic
compounds will react directly with amperometric
sensors under field conditions. On a typical,
platinum-electrode sensor, we can detect
alcohols, epoxides, and formaldehyde. We also
detect many permanent gases, such as carbon
monoxide and hydrogen sulfide. Among these
gases that do react, there is no inherent
selectivity. The use of different sensors and
controlled pyrolysis, however, gives us extra
degrees of freedom that can be used to achieve
selectivity.
The grain odor problem is very instructive,
even to an audience that is concerned with
identifying individual hazardous compounds.
Sensor-array-based methods, including the
pattern-analysis methodologies used, treat
mixtures no differently than single compounds;
both give characteristic patterns which can be
identified against a pattern made from the same
mixture. The individual components of a mixture
need not be identified.
Grains are presently classified by odor by a
panel of trained inspectors. The results are
necessarily subjective. More importantly, the
subjective opinion is the standard; there is no
point in telling a customer that a sample of
grain is acceptable because a machine says so.
If it smells bad, it smells bad. On the other
hand, trained inspectors frequently disagree to a
greater or lesser extent on both the category and
degree of an odor (Table II). Attempts to
identify specific compounds associated with the
odors, using GC or GC/MS, have produced masses of
data, but limited results (12, 13).
The data obtained on the CPS-100 was subjected to
two different kinds of analysis. The first was
an established method called k-nearest neighbor
(KNN, ref. 5). The 16 data points acquired by
the CPS-100 were treated as a vector in 16-
dimensional space. Each known sample of grain
produced a vector which could be associated with
a particular odor category. The vectors from the
unknown samples were tested against this library
of known vectors by calculating the scalar
distance between the unknown vector and each
known vector in the library. All vectors were
first normalized to constant length, to remove
the concentration-dependent part of the
information. The shortest distance is the
identification (Figure 6).
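A minimal sketch of this classification step (Python), with random
placeholder vectors standing in for the 16-point library and unknown
patterns:

    # Nearest-library-vector (KNN) classification of normalized response
    # patterns; the vectors here are placeholders, not study data.
    import numpy as np

    def normalize(v):
        """Scale a response pattern to unit length (removes concentration)."""
        v = np.asarray(v, dtype=float)
        return v / np.linalg.norm(v)

    def classify(unknown, library):
        """Return the label of the library pattern closest to the unknown."""
        u = normalize(unknown)
        label, _ = min(library.items(),
                       key=lambda kv: np.linalg.norm(u - normalize(kv[1])))
        return label

    library = {"good": np.random.rand(16),      # placeholder averaged patterns
               "insect": np.random.rand(16),
               "sour": np.random.rand(16)}
    print(classify(np.random.rand(16), library))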
The second method is the neural network (for
general references, see 14, 15). This is a
recently-developed method that has received so
much "hype" that we were at first suspicious of
it. However, its performance has been
outstanding in this application, the more so
because we used a commercially-available packaged
method (NeuroShell, Ward Systems Group,
Frederick, MD), without really understanding the
internal mechanics of the method. This is a very
important feature of a method which may be used
in the field by operatives with differing
technical backgrounds.
Figure 7 shows the CPS-100 data, in histogram
form, for "good" wheat samples. The patterns are
very similar, in contrast with data showing some
extreme samples (one "sour" (S3) and one "insect"
(I3) odor) (Figure 8). An experiment using the
older KNN method was run using a dataset derived
from three grades of wheat samples. A library of
vectors was prepared by averaging the signals for
all runs on each sample of wheat. The scalar
distances were calculated between all possible
pairs of the original data set and each of the
averaged vectors. A summary of the
identifications is shown in Table III. We were
very (pleasantly) surprised to find that those
samples that are "misclassified" by the KNN
87
-------
algorithm are also those that the human
inspectors did not agree on! Sample 42, for
example, was voted "good" by two inspectors and
"musty" and "COFO" by the other two. (COFO
means "commercially objectionable foreign
odor".)
Although KNN has shown good performance in past
applications (5, 6, 8-11), it has some serious
practical disadvantages. The greatest is that,
when the sensors become aged or drift for other
reasons, the complete training set must be
remeasured.
A larger data set had been gathered by the time
the work was begun with the neural nets. This
data set had a peculiarity built into it: one
of the sensors in the array went bad halfway
through the measurements and was replaced. The
data taken after that point gave noticeably
different histograms.
The data set was arbitrarily divided into two
groups. One group was used to "train" the
neural network, a process requiring up to 150
hours on a 386-type computer. The actual
classification process took seconds. Two tests
were run on the optimized neural net. First
was a test to confirm that the optimization
process was complete. This was done by using
the training set itself as unknowns. The rate
of correct classification was 100%. Second,
random, linearly-distributed errors were added
to the data, followed by classification. The
net tolerated 5% error without missing a
correct classification. Added error of 10% and
15% caused a small amount of degradation (Table
IV).
Having confirmed the robustness of the neural
net, it was challenged using the reserved
dataset. The net had not seen these numbers
before; nevertheless the rate of correct
classification was 65% (Table IV). This is
low, although substantially better than random.
Because the test conditions had changed during
the measurements, we added another element to
the data vectors to differentiate the
measurements made before and after the sensor
was changed. The numbers were arbitrary, 100
for the old sensor and 200 for the new. Using
these 17-element vectors, the neural net was
retrained. Now, the rate of correct
classification of the reserved dataset jumped
to 83%.
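The augmentation amounts to appending one extra channel to each response
pattern before retraining; a sketch (Python, with placeholder patterns)
follows. The flag values of 100 and 200 are those quoted above.

    # Appending a test-condition channel to a 16-point response pattern so
    # the classifier can learn around the sensor replacement.
    import numpy as np

    def augment(pattern_16, used_new_sensor):
        """Return a 17-element vector: the pattern plus a condition flag."""
        flag = 200.0 if used_new_sensor else 100.0
        return np.append(np.asarray(pattern_16, dtype=float), flag)

    old_run = augment(np.random.rand(16), used_new_sensor=False)
    new_run = augment(np.random.rand(16), used_new_sensor=True)
    print(old_run.shape, old_run[-1], new_run[-1])   # (17,) 100.0 200.0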
Pyrolysis-EC: Simplified Version. This work is
the result of a project to determine whether a
greatly-simplified form of pyrolysis-EC would
be useful for situations requiring limited
selectivity. Figure 9 is a diagram of the
patterns obtained for representative compounds
in a typical experiment. The temperature of
the catalyst is programmed for two minutes at
room temperature, two minutes each at
temperatures of 500, 600, 700, and 800 °C, and
finally two minutes at room temperature again.
The patterns that are obtained are distinct for
many compounds. If your field problem is simply
confirming the identity of the contents of a
number of similar barrels of an unknown chemical,
the pyrolysis-EC approach may in itself be
sufficient, although most practitioners would
feel more comfortable if it supplemented other
field screening methods.
A table of distances for this limited
configuration is shown in Table V. The smaller
the number, the more similar the two compounds
will appear for a given configuration of the
experimental apparatus. This configuration gives
very good identification of ethylene oxide in the
presence of all but alcohols.
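A table of this kind is the set of pairwise distances between normalized
response patterns; the sketch below (Python, with placeholder patterns in
place of the measured temperature-program responses) shows the
computation.

    # Pairwise Euclidean distances between unit-length response patterns.
    import numpy as np

    patterns = {"ETO": np.random.rand(6),   # placeholder 6-step patterns
                "ISO": np.random.rand(6),
                "CHX": np.random.rand(6)}

    unit = {k: v / np.linalg.norm(v) for k, v in patterns.items()}
    names = list(unit)
    for a in names:
        row = ["%5.2f" % np.linalg.norm(unit[a] - unit[b]) for b in names]
        print("%-4s %s" % (a, " ".join(row)))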
The pyrolysis-EC method has several advantages
that are especially conducive to field work. It
is suitable for portable instrument use; the
components are shock-resistant and will operate
in any orientation. They are compact and
lightweight, and the power requirements are
small. They are also inexpensive.
Conclusions
1. A sensor has been developed and
characterized that can identify chlorinated or
brominated compounds in the vapor phase or, with
the use of a permeable membrane, in dissolved
form.
2. A combination of catalytic pyrolysis and
electrochemical detection (pyrolysis-EC) can be
used to distinguish unknown compounds with a
modest degree of selectivity that may be adequate
for many field applications.
3. Pyrolysis-EC data, combined with k-
nearest neighbor and neural network
classification methods, has been used effectively
for such varied tasks as the classification of
stored grains by odor, or the classification of
waste chemicals by functional group (11).
4. The neural net can be made to adapt
dynamically to instrument drift. In effect, it
learns from experience.
5. Errors made by the classification
methods correspond in a general way to errors
made by human experts faced with similar
ambiguities in the data.
88
-------
Bibliography
1. Stetter, J.R., and Cao, Z. "Gas sensor and
   permeation apparatus for the determination of
   chlorinated hydrocarbons in water". Anal. Chem.
   62, (1990), 182-185.
2. Stetter, J.R., and Cao, Z. "A real-time monitor
   for chlorinated organics in water". Proc. 1990
   EPA/AWMA Int'l. Symposium on "Measurement of
   Toxic and Related Air Pollutants", Raleigh, NC,
   April 3 - May 4, 1990.
3. Cao, Z., and Stetter, J.R. "A selective
   solid-state sensor for halogenated
   hydrocarbons". Case Western Reserve University,
   Edison Sensor Technology Center, Proc. Third
   Int'l. Meeting on Chemical Sensors, Cleveland,
   OH, September 24-26, 1990.
4. Stetter, J.R., Zaromb, S., and Findlay, M.W.
   "Monitoring of electrochemically inactive
   compounds by amperometric gas sensors". Sensors
   and Actuators 6, (1984), 269-288.
5. Stetter, J.R., Penrose, W.R., Zaromb, S.,
   Christian, D., Hampton, D.M., Nolan, M.,
   Billings, M.W., Steinke, C., and Otagawa, T.
   "Evaluating the effectiveness of chemical
   parameter spectrometry in analyzing vapors of
   industrial chemicals". Proc. Second Annual
   Technical Seminar on Chemical Spills,
   Environmental Protection Service, Environment
   Canada, Toronto, Canada, February 5-7, 1985.
6. Stetter, J.R., Jurs, P.C., and Rose, S.L.
   "Detection of Hazardous Gases and Vapors:
   Pattern Recognition Analysis of Data from an
   Electrochemical Sensor Array," Anal. Chem. 58,
   (1986), 860-866.
7. Stetter, J.R., Zaromb, S. and Penrose, W.R.
   "Sensor array for toxic gas detection". U.S.
   Patent no. 4,670,405, 1987.
8. Stetter, J.R., Penrose, W.R., Zaromb, S.,
   Nolan, M., Christian, D.M., Hampton, D.M.,
   Billings, M.W., and Steinke, C. "A portable
   toxic vapor detector and analyzer using an
   electrochemical sensor array". Proc.
   DIGITECH/85 Conference, Instrument Society of
   America, Boston, MA, May 14-16, 1985.
9. Stetter, J.R., Zaromb, S., Penrose, W.R.,
Otagawa, T., Sincali, A.J., and Stull, J.O.
"Selective monitoring of hazardous
chemicals in emergency situations". Proc.
1984 JANNAF Safety and Environmental
Subcommittee Meeting, Laurel, Maryland.
10. Stetter, J.R., Zaromb, S., Penrose, W.R.,
Findlay, M.W., Otagawa, T., and Sincali,
A.J. "Portable device for detecting and
identifying hazardous vapors". Hazardous
Materials Spills Conference, April 9-12,
1984, Nashville, TN.
11. Findlay, M.W., Stetter, J.R., and
Pritchett, T. "Sensor array based monitor
for hazardous waste site screening". Proc.
HAZMAT 90 Central Conference, Environmental
Hazards Management Institute, Durham, NC,
March 13-15, 1990.
12. Weinberg, D.S. "Development of an Effective
Method of Detecting and Identifying Foreign
Odors in Grain Samples," Final Report,
Volume I, USDA Contract # 53-6395-5-59,
SoRI-EAS-86-1208, Dec., 15, 1986.
13. Ponder, M. C. and Weinberg, D.S.
"Development of an Effective Method of
Detecting and Identifying Foreign Odors in
Grain Samples," Literature and Equipment
Survey USDA, Contract # 53-6395-5-59, SoRI-
EAS-85-727, Aug., 5, 1985.
14. Nelson, M.M., Illingworth, W.T. A
Practical Guide to Neural Nets. Addison-
Wesley Publishing Company, Reading, Mass.,
1990.
15. Caudill, M., "Neural Network Primer", AI
Expert, Miller Freeman Publications, 1990.
89
-------
Table I. Sensitivities of the organochlorine
sensor to several halogenated compounds.

Vapor       Concentration (ppm)   Response (x 10^-6 mho/ppm)
C«HTCI      125                   0.024
dH?Br       125                   0.016
C,H,I       125                   0.003
CJfcF       62.5                  0.005
C6H5Cl      62.5                  0.029
C6H5Br      62.5                  0.020
C6H5I       125                   0.003
C.C1F.      12.5                  0.022
Table IV. Summary of the accuracy of the neural
network algorithm for identifying vapors drawn
from the wheat samples.

Sorghum Data Set                      Accuracy of Identification
1. Original Data                      100%
2. 5% Error Added                     100%
3. 10% Error Added                     98%
4. 15% Error Added                     92%

Wheat Samples Data Set                Accuracy of Identification
1. Total Data                         100%
2. Train on 55% of Data Set            65%
3. Add Channel for Test Conditions     83%
Table II. Subjective odor characterization of the
grain samples used in our study by four GMRL
inspectors, with the FGIS consensus and the
average odor intensity.

Sample No.    FGIS Consensus    Ave. Intensity
F41           OK                0.5
F42           OK                0.7
F67           OK                0.2
F78           OK                0.5
F128          OK                0.0
F30           INSECT            3.0
F39           INSECT            2.0
F69           INSECT            1.0
F89           INSECT            2.7
N53           S3                2.8
N166          S3                3.9
N168          S3                2.7
Table V. Distance matrices for a series of
organic compounds. Table V-A is several
concentrations of ethylene oxide; the
concentrations are shown as the numbers in the
symbols, e.g., ETO100 = 100 ppm. Table V-B shows
the distances among the series of thirteen
compounds. The abbreviations are:
ISO - isopropanol
KER - kerosene
STY - styrene
ETG - ethylene glycol
CHX - cyclohexane
ETE - ether
CLO - chloroform
FORM - formaldehyde
ETO - ethylene oxide
ACE - acetone
XYL - xylene
HAL - halothane
ETA - ethanol
TABLE V-A
Distances for Ethylene Oxide

          ETO100   ETO40   ETO20   ETO10   ETO5    ETO1
ETO100     0.00     0.31    0.28    0.22    0.25    1.02
ETO40      0.31     0.00    0.07    0.21    0.18    0.80
ETO20      0.28     0.07    0.00    0.21    0.16    0.82
ETO10      0.22     0.21    0.21    0.00    0.09    0.85
ETO5       0.25     0.18    0.16    0.09    0.00    0.80
ETO1       1.02     0.80    0.82    0.85    0.80    0.00
TABLE V-B
Distances among the thirteen compounds (13 x 13
symmetric matrix; compound abbreviations as
listed above).

Table III. KNN classification of the USDA grain
samples.

                       Average of Known Vectors
Runs on:        Good                Insect             Sour
                (128, 42, 67, 41)   (30, 39, 89)       (53, 166, 168)
good samples    128, 128, 42, 67,   42                 42
                67, 41, 41, 41, 41
insect samples  89                  30, 30, 30, 39,    30
                                    39, 89, 89, 89
sour samples    168                 168, 168           53, 166, 166,
                                                       166, 168, 168
90
-------
Figure 1. Permeation apparatus used to extract
organochlorines from water.

Figure 3. Configuration of the CPS-100 Toxic Gas
Analyzer, fitted with four electrochemical
sensors and two catalyst filaments.
Figure 2. Experimental apparatus for selective
analysis of aqueous chlorinated hydrocarbons
using a separate reference permeator.
Figure 4. Response of the organochlorine sensor
to chlorobenzene, benzene, and hexane.
91
-------
Figure 5. Response of the organochlorine sensor
to decreasing concentrations of chloroform.
Figure 7. Histogram of normalized responses of
the CPS-100 to four samples of "good" grain.
Figure 6. Schematic representation of the KNN
pattern recognition method in 3-dimensional
space. PI and P2 are library patterns for
known compounds, and Ul is the vector for an
unknown. The distances from Ul to PI and P2
are calculated and compared.
Figure 8. Normalized responses of the CPS-100 to
"good" (OK), sour (S3), and COFO grain.
-------
Chloroform
Isopropanol
Ethylene Oxide
Acetone
Ethanol
Halothane
Figure 9. Responses of the simplified
pyrolysis-EC apparatus to six different
chemicals. In this experiment, the catalyst
filament was programmed in 2 minute steps at
room temperature, 500, 600, 700, and 800
degrees, and room temperature again.
DISCUSSION
GORMAN BAYKUT: My question is about the chemical analysis with these
sensors. I'm not talking right now about the wheat vapor. But in terms of real
chemical analysis, you must know the compounds you are going to analyze,
otherwise you can't do the analysis because you need training. You can't analyze
the unexpected compounds, am I right?
WILLIAM BUTTNER: The way the CPS 100 Program was originally
envisioned, you had to install the library vectors of potential compounds. If you
were going to look at TCE, there had to be a library vector associated with the
TCE. On the other hand, these arrays are not totally selective in response. The
response to TCE was similar to PCE, that is, tetrachloroethane. You could
therefore identify classes of compounds. But you are right. You have to have
some idea of the type of vapors present. A totally unknown situation will still give
some ambiguity in your analyses.
GORMAN BAYKUT: But I think even though your software is powerful, you
need a training period for every compound. How about the mixtures? If you
analyze the mixtures will there be a problem?
WILLIAM BUTTNER: Mixtures are a problem for this type of system. Certain
types of mixtures are well behaved. Gasoline, for example, is a mixture of many
types of compounds, but it behaves as a single class.
GORMAN BAYKUT: I'm referring to the cracker. You have a thermal cracker
in front of the electrochemical sensor areas. Sometimes you have a mixture of
two or three compounds, or five, or seven and they react in the cracker. You get
different answers, and the correlation is not linear.
WILLIAM BUTTNER: What you're referring to are the reaction products of
the thermal catalysis that result from mixtures being exposed to the sensors. Yes,
you are right. There is frequently a nonlinear response. The reaction products
frequently do react with each other. That's a comment relevant to many field
screening techniques. In some mixtures that factor is a little less significant. If
you do generate very reactive compounds, for example from chlorinated com-
pounds like TCE, you do get a nonlinear response. That is a problem. This instrument
was designed to look at single vapors, maybe not necessarily positively identi-
fied, but single vapors.
STEVEN KARR: I wondered if you've given any thought to applying fuzzy
logic algorithms to this problem as opposed to neural networks?
WILLIAM BUTTNER: The neural network was a six-month program that we
tried on the SBIR (we've just finished Phase I). To stay within the time
constraints, we stuck to simple systems. We are investigating other neural
network software packages and other identification algorithms. We will certainly
consider fuzzy networks.
EDWARD POZIOMEK: Have you tried any real-world environmental samples
with the system?
WILLIAM BUTTNER: I had a program through Savannah River to monitor for
TCE emissions out of their stripping tower, as part of their groundwater clean up.
Initially the results were very encouraging. The analyses that I measured were
compared back to groundwater samples as measured at an independent labora-
tory. They were comparable in value. The unfortunate thing is that these
amperometric sensors did not behave truly reversibly to chlorinated compounds,
and that after a period of time their response factor, their sensitivity, would
degrade and ultimately their response would die completely. For that reason it
was determined that these types of sensor systems would not be applicable for
the problems associated with Savannah River Laboratory. This was before this
chlorine selective sensor was developed. It could potentially have application
down there.
93
-------
REAL-TIME DETECTION OF ANILINE IN HEXANE
BY FLOW INJECTION ION MOBILITY SPECTROMETRY
G.E. BURROUGHS
National Institute for Occupational
Safety and Health, 4676 Columbia Parkway,
Cincinnati, OH 45226
G.A. EICEMAN and L. GARCIA-GONZALEZ
Chemistry Department
New Mexico State University
Las Cruces, NM 88003
DISCLAIMER: Mention of company names or
products does not constitute endorsement
by the National Institute for
Occupational Safety and Health.
ABSTRACT
Ion mobility spectrometry (IMS) with a
conventional "Ni ion source exhibits
chemical behavior that should be
advantageous in detection of molecules
with high proton affinity such as
aromatic amines in common organic
solvents. Since IMS instrumentation can
be considered a continuous-sampling point
sensor, IMS may be adapted for industrial
process monitoring or area environmental
monitoring. However, quantitative aspects
of IMS are not well established and
possible interferences may limit the
usefulness of IMS. In order to
characterize IMS behavior as an effluent
sensor, a flow injection IMS device was
evaluated in which an IMS was used as a
detector for a heated injector port. An
IMS drift tube was used with an acetone
doped reaction region and a membrane
inlet. Five microliter replicate samples
were introduced and vaporized in the
inlet at 15 - 90 second intervals and
drawn into the IMS. Detection limits
were ca. 0.5 mg L-1 for 5 ul aliquots (2
ng per sample). Sampling intervals could
be reduced to 15 seconds for all
concentrations below 40 mg L-1; above this,
a working range could be considered to
extend to approximately 100 mg L-1.
Precision was 10 - 25% RSD and was
largely concentration independent. Since
the IMS alone in a vapor stream shows ca.
1-2% RSD, the bulk of variance was from
the inlet and inlet-IMS interface. Four
solvents (benzene, methylene chloride,
ethyl acetate, and acetone) were
evaluated as interferences. All solvents
at some concentrations affected the peak
area for aniline, although the causes
arose through different mechanisms. The
use of IMS as a flow sensor for aniline
in organic solvents should presently be
restricted to samples free of compounds
with strong proton affinities and
solvents which do not exhibit strong
dipoles.
INTRODUCTION
Ion mobility spectrometry (IMS) with a
conventional "Ni ion source exhibits
chemical behavior that should be
advantageous in detection of molecules
with high proton affinity such as
aromatic amines in common organic
solvents. Since IMS instrumentation can
be considered a continuous-sampling point
sensor, IMS may be adapted for industrial
process monitoring or area environmental
monitoring. However, quantitative
aspects of IMS are not well established
and possible interferences may limit the
usefulness of IMS. Among the attributes
of an acceptable "field screening method
for hazardous waste and toxic chemicals"
are sensitivity, specificity, accuracy,
precision, speed, and portability. Also,
to be worthwhile, it should be applicable
to the screening of analytes or classes
of compounds which have a reasonably high
toxicity. The optimum value of a
real-time field technique would be in the
screening of substances with acute
toxicity, thereby assisting in the
elimination of short term exposures. The
purpose of this work is to investigate
95
-------
such quantitative aspects of IMS as
sensitivity, accuracy, and precision;
interference is examined as a comparison
of response to solvents of varying proton
affinity; and speed of analysis is an
additional experimental parameter.
In IMS, vapors are drawn into a reaction
region where analyte is ionized through
proton or electron transfers from a
reservoir of charge, the reactant ions.
The reactant ions originate in beta
emission from a 63Ni radioactive foil and
the reactant ions exhibit near thermal
energies. Consequently, product ions
usually experience little fragmentation
and exist principally as M+, MH+, or M2H+.
Ionization in the reaction region is
based on competitive charge exchange, and
unequivocal response occurs when the
target analyte has a proton affinity
larger than that for any component in the
sample matrix. When this is not assured,
response can become confusing even for
simple mixtures (1). Thus, the primary
basis for selectivity of IMS as a
detector is based upon differences in
proton affinities of constituents
following vaporization into a flowing air
stream. Product ions are injected into a
drift region where ions acquire a
constant velocity in a weak electric
field. Differences in ion velocities are
due to differences in cross-sectional
areas, and this serves as a useful,
second level of selectivity in IMS.
However, response in IMS is fundamentally
governed by the original step of product
ion creation; thus, if a product ion is
not formed in the ion source, regardless
of cause, a peak corresponding to that
substance will not be observed in the
mobility spectrum.
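The drift-time measurement is related to mobility in the usual way, K =
L/(td E), normalized to standard temperature and pressure. The sketch
below (Python) illustrates the calculation; the drift length, field,
temperature, and pressure are assumed values chosen for illustration, and
only the 8.74 ms aniline drift time comes from this work.

    # Converting a drift time into a reduced mobility K0.
    def reduced_mobility(drift_time_s, drift_length_cm, field_v_per_cm,
                         temp_k=298.0, pressure_torr=760.0):
        """K0 in cm^2 V^-1 s^-1, normalized to 273 K and 760 torr."""
        k = drift_length_cm / (drift_time_s * field_v_per_cm)
        return k * (273.0 / temp_k) * (pressure_torr / 760.0)

    # Aniline product ion at 8.74 ms in an assumed 4 cm, 200 V/cm drift region
    print(round(reduced_mobility(8.74e-3, 4.0, 200.0), 2))   # ~2.1 cm2/V/s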
Flow injection analysis (FIA) is a type
of continuous analytical technique where
discrete, reproducible aliquots of sample
are introduced into a flux, allowed to
interact with other components of that
flux or with forces exerted on that flux,
and are subsequently monitored by a
detector having some inherent specificity
for the resultant species. Reviews of
flow injection analysis by Betteridge (2)
and by Ranger (3) date the origins of
this technique to the early to mid-1970's
as an adaptation or subcategory of
"continuous flow analysis" as described
by Skeggs (4). This type of analysis has
the advantages of being simple, accurate,
reliable, and reproducible, and can be
accomplished with a small amount of
simple equipment. All of these
attributes are desirable in any
real-time, field screening method. The
disadvantages of FIA methods come from a
dependence on detector selectivity in the
absence of any separator techniques, as
will be seen later.
Chemically, the high proton affinities of
aniline and other aromatic amines suggest
that ion mobility spectrometry may be a
technically acceptable technique for
monitoring of these substances by flow
injection technique. Development of a
field screening method for these
compounds would be worthwhile based on
toxicity, the primary toxic effects of
this class of compounds on man including
methemoglobin formation and cancer of the
urinary tract (5). Environmentally,
"aromatic amines constitute a family of
serious pollutants due in part to a high
degree of toxicity toward aquatic life
(6) . Particular attention has been given
to the effects of aniline, aniline
derivatives, and aromatic amines on fish
(7,8), Daphnia magna (9,10) and microbes
in estuarine water (11)." (Eiceman et al)
Commercially, they are important as
intermediates in the manufacture of
dyestuffs and pigments, but are also used
in the chemical, textile, rubber, dyeing,
paper, and other industries (5).
EXPERIMENTAL
Instrumentation
The introduction of a flow injection
stream to an IMS detector was
accomplished using the instrumentation
and procedures described below. A block
diagram of the flow injection IMS
apparatus is shown in Figure 1 and was
comprised of a heated injector taken from
a gas chromatograph, an Airborne Vapor
Monitor (Grasby Analytical, Ltd.,
Watford, UK) as the IMS detector, a
pressurized source of air and supporting
electronics to control injector
temperature. Air flow through the
injector port was ca. 5 ml/min and the
injector temperature was 100°C. Both the
injector block assembly and the IMS
instrument were placed inside a
laboratory hood, and there was a distance
of less than 1 cm between the injector
exhaust and the IMS inlet. Digital
signal averaging was used to acquire
mobility spectra with an Advanced Signal
96
-------
Processor (ASP) (Grasby Analytical, Ltd.)
into an IBM XT microcomputer. Also,
signal was routed from an output voltage
on the ASP to a Hewlett-Packard 3380A
recording integrator so peak areas for
the aniline product ion could be recorded
versus time and integrated. The window
of observation for drift times for the
aniline peak was ca. 0.1 - 0.2 ms wide
and was centered on the drift time for
aniline, 8.74 ms. Other parameters for
signal collection through the ASP board
were: number of waveforms, 32; points per
spectrum, 512; and scale expansion, 0.25.
The integrator parameters were:
attenuation and threshold, each 9; chart
speed, 1 cm/min; area rejection, 10000;
and peak width 0.5.
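The averaging and windowed peak-area steps are easily expressed; the
sketch below (Python) mirrors them with a placeholder set of 32 waveforms
of 512 points and a 0.2 ms window centered on the 8.74 ms aniline drift
time. The full-scale drift-time span is an assumed value, not an
instrument specification given here.

    # Signal averaging of mobility spectra and integration of a drift-time
    # window around the aniline product ion; waveforms are placeholders.
    import numpy as np

    POINTS, N_WAVEFORMS = 512, 32
    FULL_SCALE_MS = 20.0                  # assumed span of one spectrum

    waveforms = np.random.rand(N_WAVEFORMS, POINTS)   # placeholder spectra
    spectrum = waveforms.mean(axis=0)                 # signal averaging

    drift_ms = np.linspace(0.0, FULL_SCALE_MS, POINTS)
    window = (drift_ms > 8.74 - 0.1) & (drift_ms < 8.74 + 0.1)
    peak_area = np.trapz(spectrum[window], drift_ms[window])
    print(round(peak_area, 3))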
Reagents and materials
The following solvents were obtained in
high commercial purity and used without
further treatment: aniline (Aldrich
Chemical Co., Milwaukee, WI, 99.5%+),
hexane (Chromopure, Burdick & Jackson
Co., Muskegon, MI), acetone (Chromopure,
Burdick & Jackson Co.), benzene (B&J
Brand, Chromopure, Burdick & Jackson
Co.), ethyl acetate (Fisher Scientific,
Pittsburgh, PA), and methylene chloride
(Fisher Scientific) .
Procedures
In general, 5 ul aliquots of liquid
sample were delivered with a 10 ul
syringe (Hamilton Co., Reno, NV) to the
heated injection port during continuous
signal processing with the IMS. An
interval of 15 to 90 seconds was
permitted for the air to sweep vapors
from the inlet before another injection
was made. Several parameters were
examined to determine optimum operating
conditions and assess the reliability of
IMS as a flow injection detector. The
particular details of each of these
studies were:
Clearance study and response curve - Five
microliters of aniline in hexane at
concentrations from 0 to 100 ppm
(volume/volume liquid) were delivered in
five replicates at different intervals
from 15 to 90 seconds. Peak areas were
determined for the aniline product ion in
the preparation of a quantitative
response curve. The effect of injection
interval also permitted the determination
of memory effects in the IMS under a
range of concentrations.
Chemical interferences - In the study of
chemical interferences in aniline
determinations, 5 ul of 5 ppm aniline in
hexane were co-injected with 0 to 4 ul of
pure interfering solvent. These
interfering solvents were methylene
chloride, benzene, acetone, and ethyl
acetate. Five replicate determinations
were made at 60 second intervals.
RESULTS AND DISCUSSION
General
The reactant ion peak (RIP) with acetone
reagent ion chemistry and the mobility
spectrum for aniline in the hand-held IMS
are shown in Figure 2. The mobility
spectrum for aniline contained a single
symmetrical peak at 8.74 ms drift time,
consistent with previous findings for
aniline with water-based chemistry in the
ion source (12). Residual amounts of
reactant ion at 6.97 ms in the aniline
mobility spectrum demonstrated that the
ion source was not saturated and that
comparable behavior may be anticipated at
vapor levels lower than this. This
mobility spectrum was generated using 5
ul of a 5 ppm solution (25 ng absolute
mass) and the peak height relative to the
RIP was reasonable considering the high
proton affinities of aniline.
Previously, aniline was shown with IMS/MS
to yield a protonated molecule, MH+
product ion (12), although the ambient
temperature drift tube and alternate ion
chemistry used here may favor the
existence of an MH+S ion where S is an
acetone solvent molecule, but this has
not been unequivocally established.
Clearance Behavior, Standard Deviation,
and Response Curve
The hand-held IMS used in this work would
be suited for field use due to its size
(40 cm x 15 cm x 8 cm), weight (2.6 kg), and
ability to operate continuously in
hostile environments unattended. The IMS
itself is battery powered and could be
interfaced with a battery powered lap top
computer for data acquisition, providing
a portable system. However, this IMS
could be expected to exhibit memory
effects from the ambient temperature
drift cell and membrane-equipped inlet.
At high concentrations of aniline, slow
clearance from repetitive determinations
might occur. In Table 1, peak areas and
percent relative standard deviations
(%RSD) from repetitive determinations are
given for solutions between 0 and 100 ppm
at injection intervals from 15 to 90
seconds. The %RSD ranged from 13 to 125,
but showed a median of 21%. Previous
97
-------
experience with this IMS as a detector in
FIA methods had yielded reproducibility
of peak heights of 8 to 10 %RSD and this
large variance was suspected to be due to
the placement of the FI-IMS in the fume
hood. Turbulence in a fume hood has been
associated with position and movement of
the user as well as amount and location
of equipment in the hood (13). This
turbulence likely affected yields in the
interface between the inlet and IMS and
this large RSD was suggestive that
mechanical improvements in interface
between the IMS and injection port are
needed. A straightforward leak-tight
connection was not employed in these
studies due to the flow characteristics
for this IMS and the imminent rupture of
the membrane inlet if pressure
differences developed between the inlet
and ion source regions.
The anticipated memory effect from slow
clearance of the aniline from the IMS was
evident in the peak areas given in Table
1. In general, peak areas with 90 second
injection intervals were the lowest for a
given concentration level. Injection
intervals less than 90 seconds caused an
accumulation of aniline in the IMS and
peak areas increased, for example, by as much
as 100% at 30 second intervals with the
100 ppm concentration. This was
manifested in the signal for continuous
monitoring as a rising baseline and in
the mobility spectrum as a persistent
product ion. Memory effects here were
dependent upon concentrations, as
expected, and at concentrations below 20
ppm, injection intervals of 15 seconds
could be employed with reasonable
differences in absolute areas.
A plot of peak area versus concentration
of aniline in hexane for 5 ul injections
at 90 second intervals is shown in Figure
3 and resembled previous response or
calibration curves in IMS (14). Such
curves are comprised of narrow linear
ranges (in this instance between 5 and 20
ppm), a shallow but mostly linear
response at concentrations above the main
linear region and a nearly linear plot
with shallow slope below the linear
region. This behavior is due to the
nature of the kinetics of reactant ion
formation from the beta emitting ion
source and, thus, to the limited
reservoir of charge available to analyte
vapors.
Chemical Interferences
The existence of solvents with a range of
proton affinities in industrial waste
streams constitutes a potential
compromise on the integrity of IMS
response in flow injection determinations
through two mechanisms. Conceivably,
large levels of such solvents might
compete for charge resulting in reduced
peak areas for aniline at given vapor
levels. Alternately, solvents may cause,
at ambient cell temperatures, ion-solvent
clusters which lead to shifts in drift
times for product ions. This will cause
a decline in certainty regarding peak
identity or may cause the peak to fall
outside a window of observation in the
signal processing software.
Four solvents with low and medium proton
affinities were selected for interference
studies and mobility spectra for
individual solvents are shown in Figure
4. Methylene chloride gave little
response in positive polarity IMS as
expected due to a low proton affinity.
For the same reason, benzene showed a
weak response with an acetone reactant
ion chemistry and the product ion had a
drift time shorter than that for the RIP.
Acetone formed cluster ions, with drift
times longer than that for the RIP,
through ion-molecule interactions in the
IMS drift region as described by Preston
and Rajadhyax (15). Only ethyl acetate
(EtOAc) showed significant competition
with the reactant ion, due to large
proton affinities of EtOAc relative to
acetone, with the obvious result of a
product ion. Of these solvents, only
benzene has been mass identified as M+
(16), though acetates are known to form
MH+ and M2H+ product ions (17).
The influences of these solvents on IMS
response to a 5 ul injection of 5 ppm
aniline in hexane are shown in Figure 5
as a plot of peak height for aniline in
various ratios of four solvents in a
binary mixture with hexane. All solvents
affected the peak area for aniline
although the causes arose through
different mechanisms. In Figure 6,
mobility spectra are shown from equal
mixtures of hexane and solvent for 5 ppm
aniline and these can be compared
directly to spectra for individual
solvents (Figure 4) and for aniline
(Figure 2) . For EtOAc, the product ion
dominated the ion chemistry when aniline
98
-------
was present even though proton affinities
favored aniline. Ethyl acetate at high
concentrations relative to aniline
appropriated virtually all the charge
except that remaining with the RIP. The
ion-molecule chemistry for acetone as an
interference also followed this pattern
and aniline was not detected with high
levels of acetone. Thus, the rise in
peak areas in Figure 5 represented a
false positive by acetone for aniline
since acetone product ion intensity
intruded upon the drift time window used
to monitor aniline. In such a situation,
only inspection of the mobility spectrum
could avert an error in monitoring or
analyses. A product ion for aniline was
evident with methylene chloride due to
the low proton affinities of methylene
chloride. However, the increase in
response for aniline in positive
polarities from addition of methylene
chloride to hexane (Figure 5) was
unprecedented in IMS and conclusions
cannot be made pending IMS/MS studies.
Benzene, with proton affinities between
methylene chloride and acetone or EtOAc,
exhibited a type of intermediate
behavior. A product ion for aniline was
observed in the presence of benzene, but
the benzene was at a level sufficient to
effectively compete for protons from the
RIP and a benzene product ion was also
observed (Figure 6). These spectra and
trends suggest that an IMS will be
sensitive to common solvents at low
levels even with an alternate reactant
ion chemistry, a membrane inlet, and low
(<1%) levels of solvents other than
hexane. However, if the solvent
composition is known and reasonably
constant, calibrations presumably could
be prepared in that matrix. These
findings for simple compositions argue
for standard addition techniques with
flow injection IMS determinations.
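The standard-addition calculation itself is brief; the sketch below
(Python, with invented peak areas) fits peak area against added
concentration and takes the magnitude of the x-intercept as the
concentration originally present in the sample.

    # Standard-addition estimate of the original analyte concentration.
    import numpy as np

    added_ppm = np.array([0.0, 2.0, 4.0, 8.0])          # spiked aliquots
    peak_area = np.array([1250., 1750., 2260., 3260.])  # hypothetical areas

    slope, intercept = np.polyfit(added_ppm, peak_area, 1)
    original_ppm = intercept / slope                    # x-intercept magnitude
    print(round(original_ppm, 1))                       # about 5 ppm here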
CONCLUSIONS
Ion mobility spectrometry has never been
widely regarded as a quantitative
instrument, but as a detector for flow
injection determination, IMS exhibited
suitable response curves, standard
deviations, and response times. This was
accomplished under the demanding
situation of a fast transient vapor level
in FIA methods. The linear range is a
weak aspect to quantitative IMS and
alternative configurations to
conventional 63Ni sources should be
sought. Reactant ion chemistry based on
acetone was not wholly successful in
discriminating chemically against common
organic solvents. Consequently, until
improved source chemistry is found,
standard addition should be considered
the method of choice for quantitative FIA
with IMS for aromatic amines.
REFERENCES
1. Eiceman, G.A., Blyth, D.A., Shoff,
D.B., Snyder, A.P. Anal. Chem., 1990, in
press.
2. Betteridge, D. Anal. Chem. 1978,
50, 832A-845A.
3. Ranger, C.B. Anal. Chem. 1981 53,
20A-32A.
4. Skeggs, L.T. Am.J. Clin. Pathol.
1957 13, 451.
5. Beard, R.R., Noe, J.T. "Aromatic
Nitro and Amino Compounds," in Patty's
Industrial Hygiene and Toxicology, Vol.
2A, G.D. Clayton and F.E. Clayton,
editors, Wiley-Interscience, New York,
1981.
6. National Research Council, "Aromatic
Amines: An Assessment of the Biological
and Environmental Effects," No.
PB83-133058, Washington, DC. 1981.
7. Bradbury, S.P., Henry, T.R., Nieme,
G.J., Carlson, R.W., Snarski, V.M.
Environ. Toxicol. Chem. 1989, 8, 247-261.
8. Newsome, L.D., Johnson, D.E.,
Cannon, D.J., Lipnick, R.L. "Comparison
of Fish Toxicity Screening Data and QSAR
Predictions for 48 Aniline Derivatives,"
QSAR Environ. Toxicol., Proc. Int.
Workshop, 2nd, Kaiser, K.L., Editor,
Reidel, Dordrecht, Netherlands, pp.
231-250, 1987.
9. Kuehn, R., Pattard, M., Pernak,
K.D., Winter, A. Water Res., 1989, 23,
495-499.
10. Gersich, R.M., Milazzo, D.P. Bull.
Environ. Contam. Toxicol., 1988, 40, 1-7.
11. Hwang, H.M., Hodson, R.E., Lee, R.F.
Water Res. 1987, 21, 309-316.
12. Karpas, Z. Anal. Chem. 1989, 61,
684-689.
13. National Research Council, Committee
on Hazardous Substances in the
Laboratory, "Prudent Practices for
Handling Hazardous Chemicals in
Laboratories," Washington, DC, 1981.
14. Leasure, C.S., Eiceman, G.A. Anal.
Chem., 1985, 57, 1890-1894.
15. Preston, J.M., Rajadhyax, L. Anal.
Chem., 1988, 60, 31-34.
16. Kim, S.H., Betty, K.R., Karasek,
F.W. Anal. Chem, 1978, 50, 1754-1758.
17. Eiceman, G.A., Shoff, D.B., Harden,
C.S., Snyder, A. P. Internal. J. Mass
Spectrom. Ion Processes, 1988, 85,
265-275.
99
-------
[Figure: plot of aniline product ion intensity (peak area) versus aniline concentration (ppm).]
-------
4. Ion mobility spectra for solvents
expected to be encountered in
analysis of non-aqueous streams for
aniline. Mobility spectra were
obtained in positive polarity with
acetone reagent ion chemistry.
Spectra were obtained with solvent
vapors permitted to deplete reactant
ion intensity ca. 50% from
background levels.
6. Mobility spectra for mixtures of 5
ppm aniline in 50 : 50 mixtures of
individual solvents with hexane.
Aniline in hexane exhibited a single
product ion with drift time of 8.74
ms as shown in Figure 2.
5. Effect on peak height for aniline at
5 ppm in binary solvent mixtures of
hexane with other common solvents,
with vol/vol percentages from 0% to
50%. Curves were normalized to the
peak height of aniline in hexane
solution.
101
-------
DISCUSSION
STEVEN HARDEN: The question I have is with respect to orthonitrophenol
and the sensitivity of the IMS system to that particular kind of material. Did you
ever do a calibration run to determine what that sensitivity might be under various
conditions?
PETER SNYDER: The answer to that question is no, we have not on pure
orthonitrophenol. However, consider the amount of signal that we see from the
other point of view, looking at it from the organism's point of view and knowing
how much organism we have. It seems like there is still plenty of analyte, given
the relatively short time of detection, and knowing that the signal is still a bit
spread out. The signal is not in one, or say two, or maybe three at the most,
peaks. We see it at about seven, eight, nine, ten peaks, until it finally clears
down.

So I'm not trying to skirt the question. It's just that no, we haven't done it to see
how sensitive the CAM itself is, or the ion mobility spectrometer 20MP. However,
I suspect that it has to be very sensitive, since 200, even 50 cells is a good
response, and the response is spread out, so if we can find ways of compacting
it, it'd be that much better.
MAHADEVA SINHA: What are the vapor pressures for the orthonitrophenol
when it gets combined with the glucose. Do you get any response?
PETER SNYDER: Yes, we've done many, many blanks. We always do a blank
before and after.
First of all, the vapor pressure of orthonitrophenol is 5.54 torr at ambient
temperature. That doesn't sound like much, but relatively speaking, that's a lot
for the CAM. And the controls — we have done ONP by itself, with buffer,
without buffer, and then just organisms themselves. Organisms do produce some
peaks, but that's just right after the reactant ion peak. But it just happens to tail
off, and there is no signal in the area that the ONP shows. So we have been pretty
lucky in that respect.
The ONP has very negligible vapor pressure by itself. Even if you get a bottle of
the dry powder, and just stick the CAM in the bottle, you see no response at all.
That should be the most amount, the dry powder, and if anything's going on it
would show. But even in the solution, there's no problem.
Orthonitrophenylacetate is a different story. There is hydrolysis going on and
over a couple of hours, you can see orthonitrophenol being produced.
102
-------
DETECTION OF MICROORGANISMS BY ION MOBILITY SPECTROMETRY
A.P. Snyder, M. Miller
and D.B. Shoff
U.S. Army Chemical Research,
Development and Engineering
Center, Aberdeen Proving
Ground, MD 21010-5423
G.A. Eiceman
New Mexico
State University
Las Cruces, NM
88003
D.A. Blyth & J.A. Parsons
GEO-CENTERS, INC.
c/o U.S. Army Chemical
Research, Development
and Engineering Center
Aberdeen Proving Ground,
MD 21010-5423
ABSTRACT
A relatively new concept is explored
in which ion mobility spectrometry is
investigated for the detection and
determination of living microorganisms.
The hand-held, NATO-fielded Chemical
Agent Monitor (CAM) serves as the
analytical device. Advantage is taken
of the inherent enzymes found in
microorganisms; an exogenous, tailored
substrate is provided to initiate the
desired biochemical reaction. The
substrate was ortho-nitrophenyl-beta-
D-galactopyranoside, and the product,
ortho-nitrophenol, can be detected in
the negative ion mode of the CAM and
signals the presence of bacteria.
Detection limits of approximately 10E4
E. coli bacterial cells in 5 min. and
3300 E. coli cells in 15 minutes were
realized. The results suggest a new
application of the CAM in the
screening of bacterial contamination
in community water and wastewater
testing situations.
KEYWORDS: ion mobility spectrometry;
microorganisms; E. coli; enzymes;
ortho-nitrophenol; Chemical Agent
Monitor; ortho-nitrophenyl-galacto-
pyranoside; fecal coliforms.
INTRODUCTION
Detection and identification of
microorganisms is a challenge in view
of the required sensitivity, selec-
tivity, and time of response of the
detection technique. Table 1 lists
these requirements for a number of
methods. It appears that analytical
instrumentation techniques broadly
fall in the detection limit range of
10E6 bacterial cells with an instru-
mental response time of approximately
1.5 hr. The colorimetric and fluoro-
metric enzyme assay procedures fare
better and can be characterized by
10E3-10E5 bacterial cell limits of
detection in a 0.25-4 hr response
time domain.
Ion mobility spectrometry (IMS) is a
straightforward analytical vapor
detection technique. Neutral analyte
vapors enter the device and are
ionized, usually by a 63Ni ring. The
ions are electrically gated and
"drift" through an antiparallel flow
of buffer gas (air or nitrogen). The
ions are focussed by an electrical
field about the heated, cylindrical
drift region and are registered by
a Faraday cup detector. The entire
process, from vapor sampling to the
detection event, takes place at
ambient or near-ambient pressure, and
thereby atmospheric pressure ioniza-
tion chemistry characterizes the ion
formation process. Ions are parti-
tioned primarily according to their
mass and shape and are characterized
by their corrected drift times (typi-
cally in msec) or ion mobilities. In
the negative mode, IMS is very simi-
lar to an electron
103
-------
capture detector in terms of the
detection event and sensitivity.
The detection of bacteria by IMS
originated from the concept of aug-
menting the hand-held Chemical Agent
Monitor (CAM) with capabilities for
biological detection, more specifi-
cally, that of viable microorganisms.
The hypothesis was that the ion-mole-
cule chemistry that characterizes the
atmospheric pressure-based IMS tech-
nique, embodied by the CAM device,
could be used to detect a targeted
volatile product of the biochemical
reaction between an in vivo bacterial
enzyme and a tailored organic
substrate. This proved to be an
interesting challenge because
parallels could be drawn with that of
standard, well-established microbio-
logical and clinical bacterial evalu-
ation procedures in the process of
devising the CAM detection of micro-
organisms.
EXPERIMENTAL
Ortho-nitrophenol (ONP) and ortho-
nitrophenyl-beta-D-galactopyranoside
(ONPG) were obtained from Aldrich
Chemical Co., Inc., Milwaukee, WI and
Sigma Chemical Co., St. Louis, MO,
respectively. The beta-galactosidase
enzyme and ONP-acetate were obtained
from Sigma Chemical Co., St. Louis,
MO. Pure E. coli suspensions (ATCC
11303) or Bacillus globigii (ATCC
9372) were prepared by growth in a
nutrient broth solution for 48 hr
which was supplemented with 0.5%
lactose sugar for induction of the
beta-galactosidase enzyme. The
bacterial growth was centrifuged
and the pellet was washed three times
with a sterile 0.1M phosphate-
buffered saline solution (0.7% NaCl)
at pH 7.4 (PBS). Approximately 1 g of
human fecal matter was suspended in
10 ml of distilled water. Strips of
Whatman 15 filter paper (Whatman
International, Ltd., Maidstone,
England) were baked at 150 °C over-
night in glass vials and used for
bacterial determination experiments.
Two microliters of the E. coli or
fecal matter suspensions were used
for filter paper experiments and 0.1
ml was used for bulk volume liquid
experiments. Two microliters of a
2.0 mg/ml ONPG solution in PBS were
used for the filter paper experiments
and 1.9 ml of the same ONPG solution
was used for bulk volume microbial
determinations. The fecal bacterial
experiments were conducted at room
temperature (25 °C) while the pure E.
coli experiments were carried out at
38 °C.
After selected incubation periods at
the given temperatures, the headspace
of the bottle was sampled with the
hand-held CAM by removing the cap and
immediately placing the vial opening
at the inlet of the CAM unit.
The hand-held CAM (Graseby Ionics,
Ltd., Watford, England) device was
used as the analytical detection
technique which was designed speci-
fically for air sensing in military
field applications (15). Signals
from the CAM were processed by using
a Graseby Ionics, Ltd., advanced
signal processing (ASP) board and
software with an IBM-PC/AT. Details
of the CAM unit are as follows:
drift gas, nitrogen or air; ion
source, 10-mCi 63Ni; drift region
length, 7 cm; drift field, 230 V/cm;
drift gas flow, 300 mL/min; reaction
region length, 3 cm; drift tube tem-
perature, ambient; shutter width,
0.1 msec (16). A schematic and de-
tails of the operation of the hand-
held CAM ion mobility spectrometry
unit can be found elsewhere (17).
For the fecal bacteria experiments
the data were captured and displayed
by the ASP software while for the
pure E. coli determinations, the ion
mobility signals were captured and
displayed by a Nicolet 4094A oscillo-
scope and Hewlett/Packard 7470A
plotter.
RESULTS AND DISCUSSION
A number of constraints were
realized in that for a system such as
the CAM to be a realistic analytical
method for biological detection, only
minimal logistic burdens to the
collection, processing and introduc-
tion of the sample to the hand-held
IMS unit would be tolerated. There-
fore the question was posed: How can
the CAM be used as it is intended
(i.e. - a vapor detector) in the
detection and possible identification
of extremely complex entities such as
microorganisms? The microbiological
104
-------
literature provided constructive in-
sights into this problem in the form
of constitutive enzymes (enzymes that
are always present in a bacterium)
that are secreted at significantly
different quantitative levels depen-
ding on the organisms. This is a
property of living active cells and
not of dead or dormant microorgan-
isms. Conventional clinical proce-
dures used in the detection and iden-
tification of organisms rely on
tailored substrates (i.e. - compounds
that mimic the enzyme's natural
substrate) to interact selectively
with the secreted enzymes of bac-
teria. The enzyme-catalyzed products
of natural substrates are usually
spectroscopically-silent and as such,
tailored compounds substitute a por-
tion of the natural substrate with a
compound such that when it is re-
leased, it becomes spectroscopically
active (e.g. - colorimetric or
fluorimetric properties). This con-
cept was then related to the proposed
CAM detection of bacteria, except
that the product would have to dis-
play a relatively high vapor pressure
and the CAM must respond to the
product.
Enzyme Substrate and Product
Previous investigations in this
laboratory (13) have shown that
bacteria such as Bacillus subtilis
(BG) , the yeast Saccharomyces
cerevisiae, Serratia marcescens (SM)
and E. coli produced at various rates
the 3-hydroxyindole (indoxyl) as a
highly fluorescent and blue colori-
metric product from the reaction of
indoxylacetate, indoxylglucoside and
indoxylphosphate with their respec-
tive esterase, glucosidase and phos-
phatase enzymes. 4-methyl-umbelli-
feryl-beta-D-galactoside reacted with
the beta-D-galactosidase enzyme in E.
coli and SM to produce the
fluorescent 4-methylumbelliferone
product (13). The indoxylacetate
probe (13) was the most sensitive
where as little as 500 BG cells/ml
could be detected in under 15
minutes. Modification of these
substrates, with extensive biochemi-
cal IMS experimentation, underscored
the role of the organic substrate as
the heart of the project. A number
of important requirements concerning
the substrate must be satisfied in
order to ensure a successful
approach. Requirements of the sub-
strate include that it (a) is water
soluble, (b) is recognized by a tar-
geted enzyme, (c) displays rapid
enzyme-substrate kinetics (i.e. -
favorable association constant), (d)
has minimal/negligible spontaneous
hydrolysis and (e) that it gives a
minimal/negligible response to the
CAM. Requirements for the product
include (a) a low association
constant with biological material,
(b) a relatively low water solubil-
ity, (c) favoring the gaseous phase,
and (d) being "CAM-active". Alter-
nate compounds were sought. Instead
of ester compounds, established
microbiological colorimetric indica-
tors were analyzed. ONPG displays an
acetal functional group that joins
ONP and the beta-D-galactopyranoside
sugar monomer (Figure 1) and is a
standard microbiological indicator
for the detection of all (total)
fecal coliform bacteria (18, 19).
Fecal coliform bacteria belong to the
Enterobacteriaceae and are comprised
of E. coli (4x10E8 cells/g feces),
Klebsiella sp. (5x10E4 cells/g),
Enterobacter (10E5 cells/g) and
Citrobacter (10E6 cells/g) (20).
These bacteria, with E. coli as the
predominant species, are found in
fecal matter, and the latter three
genera are also associated with
plants and soils. E. coli, however,
can only be found in the environment
through fecal contamination (21).
Figures 1 and 2 pictorially display
the enzyme-substrate biochemical and
detection events of the ONP product
by the CAM. Figure 3 shows a CAM
response of a phosphate-buffered
saline solution of ONP in the
negative ion mode. The main peak at
6.2 msec consists largely of
O2-(H2O)n clusters, and the shoulder
to the left of the peak is
characteristic of the chloride ion.
The peaks at 9.1 msec represent the
ONP monomer at different concentra-
tions and the low intensity peak at
11.7 msec represents the dimer ion
(22). Thus, a favorable analytical
situation has been established in
that a compound has been found that
not only has established roots in the
microbiological detection and
identification arsenal as a
colorimetric indicator but also
105
-------
responds to ion mobility spectrometry
through well established ion-mole-
cule, gas-phase reaction chemistry.
CAM-Bacterial Trials
A buffered solution of the ONPG
substrate produced no response from
the CAM unit. When an aliquot of
pure beta-D-galactosidase enzyme was
added to the ONPG solution, a yellow
color appeared within seconds and the
CAM ASP registered this event in the
negative ion mode in a fashion
similar to that in Figure 3. Bac-
terial tests followed. One was from
a pure culture of E. coli and the
other bacterial source was of fecal
origin. Microliter volumes of
bacterial sample and buffered ONPG
were spotted on a strip of sterile
filter paper and the latter was
inserted into a vial. The vial was
secured with a screw cap in order to
contain any ONP product that was
released into the vial headspace.
For the fecal suspension, 2
microliters of a 1 g feces/10 ml
distilled water suspension was used. Since an
approximate concentration of E. coli
in human fecal matter (20) is about
4x10E8 cells/g, the actual applied
amount of bacteria was approximately
8xlOE4 cells. Figure 4 portrays the
results of this study. Position A
in Figure 4 represents a background
CAM response of the bacterial
inoculation without ONPG substrate.
The bacterium does provide distinct
ion mobility peaks which are most
likely due to inherent bacterial
volatile compounds. A blank
consisting of the buffered ONPG
solution produced only the negative
background ion mobility signal
(Position B in Figure 4). Position C
in Figure 4 represents the CAM
response of the vial headspace after
the buffered ONPG substrate was added
to the bacterial spot on the filter
paper and was acquired 40 min. after
substrate addition. Note that in
addition to the background ion
mobility signal and the three peaks
representing the bacterial volatile
products, a new peak appeared at 9.1
msec which matched that of ONP
(Figure 3). Figure 5 shows a
replicate experiment where frame A
represents the ONP response 15 min.
after an ONPG solution was added to a
fecal inoculation on a filter paper
strip. Frame B shows that at 45 min,
the ONP signal grew considerably.
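The inoculum size quoted earlier in this section (approximately 8x10E4 cells from 2 microliters of the fecal suspension) follows directly from the stated figures; a minimal check in Python:

```python
# All values are taken from the text of this section.
volume_spotted_ml = 2e-3          # 2 microliters of fecal suspension
feces_g_per_ml = 1.0 / 10.0       # 1 g feces suspended in 10 ml distilled water
e_coli_per_g = 4e8                # approximate E. coli density in feces (ref. 20)

cells_applied = volume_spotted_ml * feces_g_per_ml * e_coli_per_g
print(f"{cells_applied:.0e} cells applied")   # -> 8e+04 cells, as stated
```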
An inoculated dose of 10E4 E. coli
cells from a pure suspension on a
filter paper strip produced a peak in
five minutes (Figure 6). At the same
E. coli inoculation, Figure 6 also
shows the CAM "response to the
production of ONP after 10, 15 and 20
minutes. The background shows
essentially no peak in the 9.1 msec
time window and the reaction
consisted of 2 ul of phosphate buffer
added to 2 ul of ONPG. This
indicates that the spontaneous
hydrolysis of ONPG at 38 °C is minimal
and intense ONP signals can be
observed over a relatively short
period of time resulting from the
bacterial enzymatic reaction at the
relatively low amount of 10E4 E. coli
cells. Figure 7 shows similar data
except that the amount of inoculated
E. coli was 3.3x10E3 cells. Indeed,
within 20 min., a clear ONP signal
was observed at 9.1 msec. This
experiment was repeated (Figure 8)
and in 15 min. a discernible ONP peak
was observed. A bulk 2.0 ml volume
suspension consisting of ONPG and
fecal matter (a total of 4x10E6 fecal
bacterial cells) took 2 hr for a
response from CAM while the yellow
ONP color in the suspension was
observed prior to the CAM detection
event. The longer dwell time is to
be expected because the relatively
large volume of water had a small
surface area for the ONP to partition
into the gas phase as opposed to
microliter amounts which rapidly
diffuse across a strip of filter
paper.
Other Enzyme/Substrate Complexes
ONP-acetate can be cleaved by an
esterase and this compound was used
in the determination of the lipase
enzyme in Bacillus globigii. Table
2 presents the amount of bacteria
used to generate an ONP ion mobility
peak after a 15 min. incubation time.
One thousand cells of B. globigii
produced an ONP signal comparable to
that of Figure 6E. However, with the
ONPG substrate, no signal was
observed with 10E5 cells. The
absence of an ONP signal is due to
106
-------
the fact that B. globigii, as well as
most other bacilli, do not contain
the beta-galactosidase enzyme and as
such ONP is not produced. The
opposite situation occurs with E.
coli. As Table 2 indicates, E. coli
provides a positive biochemical
reaction with ONPG, but not with
ONP-acetate.
Comparison to Other Techniques
For the E. coli fecal coliform ONPG
test, the CAM unit was observed to
provide an ONP signal in 15 min. with
approximately 3.3x10E3 E. coli bacterial cells.
It is of interest
to compare these response time/
inoculation figures of merit with
that of established and potential
microbiological, clinical and
analytical instrumentation
techniques. Table 1 provides a list
of a number of these methods
including total number of bacteria
and the time needed for a reliable
analysis of bacterial presence. The
CAM concept of bacterial detection
via inherent enzyme biochemical
reactions which yield tailored
volatile products appears to be a
competitive technique in the
determination of microbial presence.
CONCLUSIONS
A major step in the chemical detec-
tion and identification of viable
(i.e. - living) microorganisms was
presented in terms of analytical
techniques. The ion-molecule
chemistry associated with IMS was
shown to be a promising avenue for
the monitoring of bacterial presence
by taking advantage of available sub-
strate-induced accessible enzymes.
The hand-held ion mobility spectrom-
eter CAM unit displayed detection
sensitivity levels for E. coli fecal
coliforms and response times similar
to or better than those of most
commercially available methodologies and
analytical instrumentation techni-
ques. This suggests a potential
application of IMS for screening of
bacterial presence in community/local
water and wastewater testing
protocols.
ACKNOWLEDGEMENT
The authors wish to thank Ms. Linda
Jarvis for the preparation and
editing of the manuscript.
REFERENCES
1. Newman, R.S. and O'Brien, R.T., "Gas Chromatographic Presumptive Test for Coliform Bacteria in Water," Appl. Microbiol., Vol. 30, 1975.
2. Bachrach, U. and Bachrach, Z., "Radiometric Method for the Detection of Coliform Organisms in Water," Appl. Microbiol., Vol. 28, 1974, pp. 169-171.
3. Wilkins, J.R., Young, R.N. and Boykin, E.H., "Multichannel Electrochemical Microbial Detection Unit," Appl. Environ. Microbiol., Vol. 35, 1978, pp. 214-215.
4. Cady, P., Dufour, S.W., Shaw, J. and Kraeger, S.J., "Electrical Impedance Measurements: Rapid Method for Detecting and Monitoring Microorganisms," J. Clin. Microbiol., Vol. 7, 1978, pp. 265-272.
5. Fraatz, R.J., Prakash, G. and Allen, F.S., "A Polarization Sensitive Light Scattering System for the Characterization of Bacteria," Am. Biotechnology Lab., Vol. 6, 1988, pp. 24-28.
6. Libby, J.M. and Wada, H.G., "Detection of Neisseria meningitidis and Yersinia pestis with a Novel Silicon-Based Sensor," J. Clin. Microbiol., Vol. 27, 1989, pp. 1456-1459.
7. Shelly, D.C., Quarles, J.M. and Warner, I.M., "Preliminary Evaluation of Mixed Dyes for Fingerprinting Non-Fluorescent Bacteria," Anal. Lett., Vol. 14(B13), 1981, pp. 1111-1124.
8. Steinkamp, J.A., Fulwyler, M.J., Coulter, J.R., Hiebert, R.D., Horney, J.L. and Mullaney, P.F., "A New Multiparameter Separator for Microscopic Particles and Biological Cells," Rev. Sci. Instrum., Vol. 44, 1973, pp. 1301-1310.
9. Graham, K., Keller, K., Ezzel, J. and Doyle, R., "Enzyme-Linked Lectinosorbent Assay (ELLA) for Detecting Bacillus anthracis," Eur. J. Clin. Microbiol., Vol. 3, 1984, pp. 210-212.
10. Feng, P.C.S. and Hartman, P.A., "Fluorogenic Assays for Immediate Confirmation of Escherichia coli," Appl. Environ. Microbiol., Vol. 43, 1982, pp. 1320-1329.
11. Warren, L.S., Benoit, R.E. and Jessee, J.A., "Rapid Enumeration of Fecal Coliforms in Water by a Colorimetric beta-Galactosidase Assay," Appl. Environ. Microbiol., Vol. 35, 1978, pp. 136-141.
12. Godsey, J.H., Matteo, M.R., Shen, D., Tolman, G. and Gohlke, J.R., "Rapid Identification of Enterobacteriaceae with Microbial Enzyme Activity Profiles," J. Clin. Microbiol., Vol. 13, 1981, pp. 483-490.
13. Snyder, A.P., Wang, T.T. and Greenberg, D.B., "Pattern Recognition Analysis of In Vivo Enzyme-Substrate Fluorescence Velocities in Microorganism Detection and Identification," Appl. Environ. Microbiol., Vol. 51, 1986, pp. 969-977.
14. Berg, J.D. and Fiksdal, L., "Rapid Detection of Total and Fecal Coliforms in Water by Enzymatic Hydrolysis of 4-Methylumbelliferone-beta-D-Galactoside," Appl. Environ. Microbiol., Vol. 54, 1988, pp. 2118-2122.
15. CAM Chemical Agent Monitor; Commercial brochures from Graseby Ionics, Ltd., Watford, England, 1988.
16. Eiceman, G.A., Shoff, D.B., Harden, C.S., Snyder, A.P., Martinez, P.M., Fleischer, M.E. and Watkins, M.L., "Ion Mobility Spectrometry of Halothane, Enflurane, and Isoflurane Anesthetics in Air and Respired Gases," Anal. Chem., Vol. 61, 1989, pp. 1093-1099.
17. Eiceman, G.A., Snyder, A.P. and Blyth, D.A., "Monitoring of Airborne Organic Vapors Using Ion Mobility Spectrometry," Intl. J. Environ. Anal. Chem., Vol. 38, 1990, pp. 415-425.
18. Paik, G., "Reagents, Stains, and Miscellaneous Test Procedures," in Manual of Clinical Microbiology, Third Edition, E.H. Lennette, A. Balows, W.J. Hausler, Jr. and J.P. Truant, eds., American Society for Microbiology, Washington, DC, 1980, p. 1006.
19. Colilert Most Probable Number Method Product Brochure, Access Medical Systems, Inc., Branford, CT 06405, 1989.
20. Olivieri, V.P., "Bacterial Indicators of Pollution," in Bacterial Indicators of Pollution, W.O. Pipes, ed., CRC Press, Boca Raton, FL, Chapter 2, 1982.
21. Stratman, S., "Rapid Specific Environmental Coliform Monitoring," Am. Lab., Vol. 20, 1988, pp. 60-64.
22. Snyder, A.P., Shoff, D.B., Eiceman, G.A., Blyth, D.A. and Parsons, J.A., Anal. Chem., 1991, in press.
108
-------
TABLE 1. COMPARISON OF MICROORGANISM DETECTION BY IMS TO OTHER TECHNIQUES

Total number    Time
of bacteria     (hr)    Technique                                  Response               Reference
                8.5     gas chromatography                         ethanol metabolite     1
                1       radiometry                                 CO2 metabolite         2
                1.5     electrochemical                            H2 metabolite          3
                0.5     organism growth                            electrical impedance   4
                0.25    polarized light scattering                 Mueller matrix         5
                0.4     light-addressable potentiometric sensor    redox potential        6
                1       excitation-emission matrix                 fluorescence           7
                        3-laser flowthrough cytometry              fluorescence           8
                        enzyme-linked lectinosorbent assay         lectin-conjugate       9
                        H2/CO2 evolution                           visual/gas bubbles     10
                        glucuronidase enzyme                       fluorescence           10
                        extracellular enzyme                       colorimetric           11
                        aminopeptidase enzymes                     fluorescence           12
                        extracellular enzymes                      fluorescence           13
                        extracellular enzymes, nutrients           fluorescence           14
3.3x10E3        0.25    CAM                                        vapor metabolite       this study
TABLE 2. ENZYME/SUBSTRATE BIOCHEMICAL REACTIONS PROBED IN MICROORGANISMS

                                                         PRESENT LIMIT
                    ENZYME                               OF DETECTION
ORGANISM            PROBED               SUBSTRATE       (Bacterial Cells)*
E. coli             beta-galactosidase   ONPG            3.3 x 10E3
E. coli             Lipase               ONP acetate     6 x 10E5**
Bacillus subtilis   beta-galactosidase   ONPG            10E5**
B. subtilis         Lipase               ONP acetate     10E3

*Within 15 minutes
**No signal observed at the given concentration
109
-------
[Reaction scheme: ONPG + E. coli beta-galactosidase enzyme -> galactose + ortho-nitrophenol (ONP)]
FIGURE 1. PICTORIAL REPRESENTATION OF THE E. COLI/BETA-GALACTOSIDASE BIOCHEMICAL
          REACTION WITH THE ONPG SUBSTRATE.
[Hand-held vapor detector schematic: smaller ions travel faster than larger ions in an electrical gradient.]
FIGURE 2. PICTORIAL REPRESENTATION OF THE ONP DETECTION EVENT WITH THE CAM
          HAND-HELD MONITOR. REFERENCE 17 PROVIDES DETAILS OF THE OPERATION
          OF THE CAM.
FIGURE 3. ION MOBILITY SPECTRUM OF ONP IN THE NEGATIVE MODE. THE PEAK AT 6.2
MSEC REPRESENTS THE BACKGROUND ION SIGNAL AND THE PEAKS THAT LIE AT
9.1 MSEC REPRESENT ONP AT DIFFERENT RELATIVE CONCENTRATIONS.
110
-------
FIGURE 4. ION MOBILITY SPECTRA IN THE NEGATIVE ION MODE (A) OF AN INOCULATION
          OF 8x10E4 FECAL BACTERIAL CELLS ON A FILTER PAPER STRIP, (B) OF ONPG
          SOLUTION ON A FILTER PAPER, (C) AFTER 40 MIN. FROM AN ONPG SOLUTION
          ADDED TO AN INOCULATION OF 8x10E4 FECAL BACTERIAL CELLS ON A STRIP OF
          FILTER PAPER. A PEAK AT 9.1 MSEC, DUE TO ONP, ONLY APPEARS WHEN BOTH
          ONPG AND BACTERIAL CELLS ARE PRESENT.
FIGURE 5. (A) 15 MIN. AND (B) 45 MIN. ION MOBILITY SPECTRA OF A REPLICATE FECAL
          BACTERIA EXPERIMENT (REFER TO FIGURE 4C FOR DETAILS).
FIGURE 6. ION MOBILITY SPECTRA OF ONP LIBERATED FROM THE REACTION OF 10E4 E. COLI
          CELLS AND ONPG WITH AN INCUBATION AT 38°C FOR (B) 5 MIN (SHADED AREA),
          (C) 10 MIN, (D) 15 MIN, (E) 20 MIN. FRAME A REPRESENTS THE ION MOBILITY
          SPECTRUM OF A BLANK CONSISTING OF TWO MICROLITERS OF BUFFER AND ONPG
          SOLUTIONS ON A PIECE OF FILTER PAPER.
111
-------
FIGURE 7. ION MOBILITY SPECTRA OF ONP LIBERATED FROM THE REACTION OF 3.3x10E3
          E. COLI CELLS AND ONPG WITH AN INCUBATION AT 38°C FOR (B) 5 MIN,
          (C) 10 MIN, (D) 20 MIN. FRAME A REPRESENTS THE ONPG BLANK. NOTE
          THAT ONLY FRAME D SHOWS A CLEAR ONP RESPONSE OVER BACKGROUND.
FIGURE 8. REPLICATE EXPERIMENT OF FIGURE 7 EXCEPT THAT SPECTRUM D WAS TAKEN
          AT 15 MINUTES. NOTE THAT ONLY FRAME D SHOWS A CLEAR ONP RESPONSE
          OVER BACKGROUND.
112
-------
DATA ANALYSIS TECHNIQUES FOR
ION MOBILITY SPECTROMETRY
Dennis M. Davis
Analytical Research Division, Research Directorate
U.S. Army Chemical Research, Development and Engineering Center
Aberdeen Proving Ground, MD 21010-5423.
ABSTRACT
The past several years have seen
the advance of ion mobility
spectrometry (IMS) as an analytical
technique. Most of these advances
have been made in the hardware
development end of the problem, the
result being that portable IMS
devices have begun to appear in the
marketplace. The other end of the
problem, the signal processing and
data analysis techniques, has not
been addressed to the same degree.
Recent attempts at applying data
analysis techniques to IMS data have
been made, and the results are
encouraging. Data processing
algorithms ranging from those which
perform simple tasks to those
performing more difficult tasks have
been developed. Among the algorithms
which will be discussed are
algorithms for measuring the peak
areas of selected peaks of interest
in biological studies, and linear
discriminant analysis for detecting
and identifying industrial chemicals
at, or near their maximum exposure
limits.
INTRODUCTION
When dealing with environmental
issues, there are two points of
emphasis that must be considered.
These two points of emphasis are the
protection of individuals in the
workplace, a task regulated by the
Occupational Safety and Health
Administration (OSHA), and the
protection of the environment in
which we live, a task regulated by
the Environmental Protection Agency
(EPA). These two points of
emphasis, while dealing with the
same general problem, are typically
at different ends of the
concentration range of chemical or
biological contamination or
exposure. The concentration range over
which one must monitor an individual's
exposure to chemical and biological
contaminants is usually low
parts-per-million (ppm) to tens of
thousands of ppm [1-3], and is set by
Federal law [3]. The concentration
range monitored for environmental
compliance is usually parts-per-billion
(ppb) to low ppm. A useful method for
monitoring both concentration ranges at
the same time is ion mobility
spectrometry (IMS).
Ion mobility spectrometry is
based upon the flow, or drift, of
molecular ions through a gas of
uniform temperature and pressure. A
weak electric field is uniformly
applied to the gas in the drift
region of the IMS, causing the ions
to move along the field lines.
These ions continue to drift until
their movement is impeded by
collisions with neutral gas
molecules. Since the electric field
is still being applied to the gas,
113
-------
the ions are accelerated once again
and the process of acceleration and
collision is repeated until the ions
strike the detector. IMS is similar
to Time of Flight mass spectrometry
in that the electric field causes the
ions to drift, but it differs in that
Time of Flight mass spectrometry is
performed under vacuum and there are
few, if any collisions to retard the
ions. The average velocity, vd, of
the ions is determined by millions of
these accelerations and energy-losing
collisions. The time required for an
ion to traverse a known distance in
the drift region of the spectrometer
is the drift time, td.
The average velocity of the
ions, also called the drift velocity,
is related to the strength of the
applied electric field through the
equation
vd = ld / td = KE    (1)
where vd is the drift velocity, ld is
the length of the drift region of the
spectrometer, td is the drift time of
the ion, E is the electric field
strength, and K is a constant of
proportionality. This constant K is
also called the "mobility" of the
ion. The mobility of the ion is
directly dependent upon both the
molecular ion being studied, and the
neutral gas through which the ion
must drift. A more useful constant
which is used in IMS work is the
"reduced mobility" of the ion. The
reduced mobility of the ion, the
mobility of an ion through a gas at
standard temperature and pressure, is
related to the measured mobility of
the ion through the equation
K0 = K (273.15/T) (P/760)
(2)
where T is the absolute temperature
of the gas in the drift region, P is
the total pressure of the gas and the
ions in the drift region, and Ko is
the reduced mobility of the ion.
Because it is often difficult to
measure the temperature and pressure
within the drift region of the
spectrometer, a common practice which
is used in determining the identity
of ions is to measure the ratio of
the reduced mobility of the ion of
interest to that of a known species.
This known species is usually the
reactant ion for the study. If the
neutral gas is air, the reactant
ions are H3O+ when dealing with
positive ions, and 02"when dealing
with negative ions. The ratio of
the reduced mobilities is related
to measurable quantities through the
equation
(K01/K02) = (K1/K2) = (td2/td1)    (4)
The only parameter needed in the
analysis is the ratio of the
drift times for the ions.
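As a sketch of how equations (1), (2) and (4) are used, the Python fragment below converts a measured drift time to a mobility, corrects it to standard temperature and pressure, and forms the drift-time ratio against a known reference ion. The drift length, temperature, pressure, and drift times in the example are hypothetical, not measurements from this work.

```python
def mobility(drift_length_cm, drift_time_s, field_v_per_cm):
    """Eq. (1): K = v_d / E, with v_d = l_d / t_d."""
    return (drift_length_cm / drift_time_s) / field_v_per_cm

def reduced_mobility(k, temp_k, pressure_torr):
    """Eq. (2): correct the measured mobility to 273.15 K and 760 torr."""
    return k * (273.15 / temp_k) * (pressure_torr / 760.0)

def mobility_ratio(td_known_ms, td_unknown_ms):
    """Eq. (4): K0(unknown)/K0(known) equals td(known)/td(unknown)."""
    return td_known_ms / td_unknown_ms

# Hypothetical example: 7 cm drift region, 200 V/cm field, 25 C, 700 torr
k = mobility(7.0, 12.0e-3, 200.0)                 # ion drift time of 12.0 ms
k0 = reduced_mobility(k, temp_k=298.15, pressure_torr=700.0)
ratio = mobility_ratio(td_known_ms=7.0, td_unknown_ms=12.0)
print(round(k, 3), round(k0, 3), round(ratio, 3))
```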
The equation for calculating the
mobility of an ion through a gas has
been shown to be dependent on the
first-order collision integral
[4,5], which is proportional to the
transport cross section. This
implies that the mobility of an ion
is dependent on the size of the ion,
ions, the shape of the ion, and the
distribution of charge on the ion;
this results in the possibility of
more than one ion having the same
mobility.
In an ion mobility
spectrometer, Figure 1, the sample
is introduced through a sample inlet
probe. This inlet probe contains a
semi-permeable membrane, which
allows only a portion of the sample
to enter the ionization chamber.
The portion of the sample which does
not enter the ionization chamber is
vented through the exhaust. The
carrier flow gas, which is input
directly into the ionization chamber,
and the sample are then exposed to
the ionizing source, a 63Ni source
in this work. The ions and the gas
molecules are then allowed to mix
and react in the ionizing chamber.
Typical ion reaction schemes which
take place in the ionization chamber
are shown in Table A. A driving
pulse of known shape and duration is
then applied to the bipolar gating
grid, allowing the mixture to enter
the drift region of the
spectrometer. While in the drift
region, the ions are subjected to an
applied electric field (200 V/cm in
our studies), which causes the ions
to begin their acceleration and
collision process. After the ions
have traversed the drift region,
114
-------
TABLE A
TYPICAL ION REACTION SCHEMES
Typical Positive Ion Reactions
(X is the species to be detected)
Typical Negative Ion Reactions
(AB is the species to be detected)
O2- + AB -> O2 + AB-
O2- + AB -> O2 + A + B-
O2- + AB -> (AB·O2)-
they strike the collector electrode.
The signal is then processed to
produce the ion mobility spectrum.
For those who wish, a more detailed
description of ion mobility
spectrometry can be found elsewhere
[6].
The past several years have
seen the advance of ion mobility
spectrometry as an analytical
technique. The utility of IMS as
an analytical tool for the rapid
detection of airborne vapors in the
atmosphere has been demonstrated
previously [7-10], and computer
techniques for pre-processing IMS
signals have also been presented
[11,12].
EXPERIMENTAL
Equipment
Data were collected on an IMS
spectrometer [Airborne Vapor Monitor
(AVM) from Graseby Analytical,
Watford, Great Britain] and stored on
an IBM Personal Computer. The data
transfer is accomplished using a
Graseby Analytical Advanced Signal
Processing (ASP) board and its
associated software. Each spectrum
consisted of 640 data points, which
were collected at a sampling frequency
of 30 kHz. The other operational
parameters of the AVM are shown in
Table B.
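For orientation, the record length implied by these settings (assuming the 640 points are acquired contiguously at 30 kHz) is about 21 ms per spectrum, which fits inside the 25 ms period of the 40 Hz gating pulse listed in Table B:

```python
samples_per_waveform = 640     # from Table B
sampling_rate_hz = 30_000      # 30 kHz sampling frequency
gating_rate_hz = 40            # 40 Hz gating pulse repetition rate

record_length_ms = 1e3 * samples_per_waveform / sampling_rate_hz   # ~21.3 ms
gating_period_ms = 1e3 / gating_rate_hz                            # 25.0 ms
print(record_length_ms, gating_period_ms)
```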
Vapor Generation
The vapors being used in the
linear discriminant data set are
generated with a Q5 vapor generator,
shown in Figure 2 . The Q5 generator
has 16 component parts. These parts
are: (1) an equilibrator assembly,
(2) an air supply (or nitrogen
supply) stopcock, (3) a constant
pressure regulator (stabilizer) for
the air supply, (4) two sampling
bubblers filled with solvent (the
bubblers are not shown in Figure 2), (5)
a flowmeter (manometer) for the air
supply, (6) a constant pressure
regulator (stabilizer) for the
diluent air supply, (7) stopcocks
for the stabilizers, (8) a stopcock
to shut off the flow of air from the
equilibrator to the mixing chamber,
(9) a flowmeter (manometer) for the
diluent air supply, (10) a mixing
chamber, (11) a reservoir, (12,13)
sampling stopcocks, (14) a reservoir
exhaust stopcock, (15) a charcoal
trap on the exhaust of the reservoir
(not shown in Figure 2) , and (16) a
charcoal canister on the sampling
line after the SAW device (not shown
in Figure 2) .
The equilibrator assembly is the
liquid test reagent container of the
dilution apparatus. Dry air, under
a constant controlled pressure,
flows into the equilibrator. This
air stream passes over the surface
of the test reagent, and becomes
115
-------
TABLE B
OPERATIONAL PARAMETERS FOR THE AVM
Number Of Waveforms To Be Summed - 32
Number Of Samples Per Waveform - 640
Gating Pulse Repetition Rate - 40 Hz
Gating Pulse Width - 180 µs
Delay To Start Of Sampling - 0 µs
Sampling Frequency - 30 kHz
Gating Pulse Source Is
** External **
saturated with the reagent vapor.
The equilibrator is maintained at a
constant temperature of 25 °C by
partial immersion in a constant
temperature water bath. Included in
the equilibrator is a porous alumina
cylinder (from Thomas Scientific,
Swedesboro, N.J.) to produce a
greater surface area for the liquid-
vapor equilibration. The dry air-
test vapor mixture flows from the
equilibrator assembly to the mixing
chamber where it is diluted with dry
air to the required concentration of
milligrams test vapor per liter of
dry air.
The flow of air through the
equilibrator is controlled by an in-
line stopcock, a constant pressure
regulator, and a flowmeter. The
stopcock is located at the inlet of
the equilibrator, and acts as the
shutoff valve for the air supply,
from the flowmeter to the
equilibrator. The constant pressure
for the air supply is maintained by
bubbling the dry air through a
constant level of fluid, e.g. water,
in the stabilizer. By raising or
lowering the level of the fluid in
the stabilizer, the air pressure is
controlled. The level of the fluid
is raised by adding fluid to the
stabilizer, and lowered by draining
fluid through the stabilizer stopcock
located on the bottom of the
stabilizer. Changing the pressure of
the air supply in this way increases
or decreases the flow of the test
vapor through the dilution apparatus.
Excess air passing through the
stabilizer is vented to the
laboratory hood. The flowmeter, or
manometer, consists of an inner
glass tube, which is graduated in
millimeters, an outer glass tube
through which the air flows, a glass
capillary tube of predetermined bore
size, a cover to seal the capillary,
and a bulb type bottom filled with
colored water, which is connected to
the constant pressure regulator.
The capillary is calibrated such
that the flowrate through the
capillary is known for any water
height. Thus, the flowrate is
determined by the height of the
water in the inner tube, and the
capillary calibration data. The
flowmeter measures the flow rate of
the dry air-test vapor mixture in
milliliters per minute. The flow
rate of the diluent air is
controlled in the same fashion as
the equilibrator air supply with a
larger inside diameter capillary
tube. The flowmeter for the diluent
air is measured in liters per
minute. The nominal concentration
of the test vapor can be calculated
using the equation
C = {(f * p)/([F + f]*P)}
(5)
where C is the nominal concentration
of the test vapor in parts-per-
million by volume, f is flow rate of
air through the equilibrator, F is
the flow rate of the diluent air, p
is the vapor pressure of the test
reagent at the temperature of the
experiment, and P is atmospheric
pressure. Thus, the concentration
116
-------
of the test vapor may be easily
changed by varying either the flow
rate of air through the equilibrator,
or by changing the flow rate of the
diluent air. In practice, it works
best to change the flow rate of the
diluent air, when possible, because
the efficiency of the vapor
generation in the equilibrator
decreases at higher flow rates.
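A minimal numerical sketch of equation (5); the flow rates and vapor pressure below are hypothetical, and the factor of 1e6 simply expresses the resulting volume fraction in parts-per-million:

```python
def nominal_concentration_ppm(f_ml_min, big_f_l_min, p_torr, big_p_torr=760.0):
    """Eq. (5): C = (f * p) / ((F + f) * P).

    f_ml_min    : flow of air through the equilibrator (ml/min), vapor-saturated
    big_f_l_min : diluent air flow F (l/min)
    p_torr      : vapor pressure of the test reagent at the bath temperature
    big_p_torr  : atmospheric pressure P
    The 1e6 factor expresses the volume fraction in parts-per-million.
    """
    f_l_min = f_ml_min / 1000.0            # put both flows in the same units
    return 1e6 * (f_l_min * p_torr) / ((big_f_l_min + f_l_min) * big_p_torr)

# Hypothetical settings: 50 ml/min through the equilibrator, 10 l/min diluent air,
# and a test reagent vapor pressure of 20 torr at 25 C
print(nominal_concentration_ppm(50.0, 10.0, 20.0))   # roughly 131 ppm
```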
The dry air-test vapor mixture
from the equilibrator and the diluent
air are passed into the mixing
chamber located at the entrance of
the reservoir. The dilute test vapor
is thoroughly mixed by a swirling
circular motion of the air in the
mixing chamber before entering the
reservoir. The reservoir is the
container for the diluted test vapor,
from which samples are taken for
concentration analysis and for
testing purposes. There is a
charcoal canister located on the
exhaust of the reservoir. This
canister serves as a scrubber to
remove test vapors passing from the
reservoir to the atmosphere in the
laboratory hood.
Pre-Processing of Spectra for Linear
Discriminant Analysis
The pre-processing and data
processing procedure used in the
linear discriminant analysis is shown
in Figure 3. The first pre-
processing step is to determine if
the spectrum has been collected in
the positive (+) or negative (-)
mode. This knowledge is important
since the Graseby ASP board does not
differentiate between the two types
of spectra, i.e. the ASP board
converts all spectra to positive
values. The determination of the
operating mode under which the
spectrum was collected is made by
reading the data file header which
includes a single character which is
used to designate mode. A
preliminary discrimination is made
based on the mode; a spectrum
collected in the negative mode has
no chemical resemblance to a spectrum
collected in the positive mode. Once
the mode has been determined, it is
necessary to determine the time at
which the reactant ion peak (RIP)
appears. The reactant ion for the
AVM, O2- in the negative mode and
H3O+ in the positive mode, is the
species which transfers the charge
to the chemical species being
analyzed. The location of the RIP
must be determined for each
spectrum, if possible, because the
location is affected by changes in
temperature, pressure, and relative
humidity. If no RIP is found, then
one must assume the RIP is located
at the same time as the RIP for the
previous spectrum. After
determining the time at which the
RIP appears, the spectrum is
normalized to create a dimensionless
X-axis. To do this, each value on
the X-axis was divided by the value
position of the reactant ion peak.
For negative ion spectra collected
at or near sea level, the peak with
the maximum intensity between 6.0
and 7.0 milliseconds drift time was
used for the identification of the
reference ion
peak. For positive ion data, a
value between 6.5 and 7.5
millisecond drift time was used as a
window in which to find the
reference ion peak. This reference
window is easily adjusted for
spectra collected at other altitudes
or pressures by multiplying the
window values by the ratio of the
operating pressure to atmospheric
pressure at sea level. This new
spectrum also appears as a pseudo-
"Reduced Mobility" spectrum which
has a dimensionless X-axis
corresponding to a Ratio of Drift
Times, TR. Only the data in the
range 0.5 to 3.0 along the TR axis
are used. A cubic spline is then
applied to the spectra such that
every spectrum has the same data
spacing with respect to the Ratio of
Drift Times axis. The IMS data
files used in this study have data
points every 0.005 TR.
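A hedged sketch of this pre-processing sequence is given below. It assumes the mode-dependent RIP windows and the 0.005 TR grid described above, uses SciPy's CubicSpline in place of whatever spline routine the original Fortran code used, and is an illustration rather than the published implementation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def preprocess_spectrum(drift_ms, intensity, mode, rip_prev_ms=None):
    """Normalize an IMS spectrum onto the dimensionless drift-time-ratio (TR) axis."""
    # Mode-dependent window in which the reactant ion peak (RIP) is expected
    lo, hi = (6.0, 7.0) if mode == "-" else (6.5, 7.5)
    in_window = (drift_ms >= lo) & (drift_ms <= hi)
    if in_window.any():
        rip_ms = drift_ms[in_window][np.argmax(intensity[in_window])]
    elif rip_prev_ms is not None:
        rip_ms = rip_prev_ms            # fall back on the previous spectrum's RIP
    else:
        raise ValueError("no reactant ion peak found and no previous RIP given")

    tr = drift_ms / rip_ms              # dimensionless ratio of drift times
    tr_grid = np.arange(0.5, 3.0 + 0.005, 0.005)   # fixed grid, 0.005 TR spacing
    resampled = CubicSpline(tr, intensity)(tr_grid)
    return tr_grid, resampled, rip_ms
```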
LINEAR DISCRIMINANT ANALYSIS
Traditionally, much of the
effort associated with the analysis
of the IMS spectra has been left to
the chemist. In an effort to aid in
the preliminary identification, a
117
-------
personal computer (PC) based spectrum
identification package has been
developed. This package, written in
Microsoft Fortran, uses a linear
discriminant function for its
identification, and consists of three
separate programs. These programs
are: IMSDISC, a program which reads
selected data files from the PC and
builds a discrimination data set;
TRAIN, a program which analyzes the
discrimination set and calculates the
linear discriminant function that
best isolates the data of interest
from the interferant data; and
IMSIDENT, a program which reads the
data to be analyzed and identified
and calculates its linear
discriminant value.
Linear discriminant analysis, one
of the most basic forms of pattern
recognition used by scientists, is
used as a supervised learning
technique. In supervised learning
techniques, the computer learns to
classify the samples being analyzed
based on knowledge about the samples;
in this study, the samples either
belong to the class of chemicals you
wish to identify, or they do not.
The goal of the learning is to
develop a classification rule, the
linear discriminant function, which
allows the validity of the
classification to be tested and
ultimately to properly classify
unknowns.
The linear discriminant function
has the general form
g(x) = w0 + Σ(i=1 to n) wi xi    (6)
where w0 is the threshold vector, wi
is the weight vector, xi is the
response vector, and g(x) is the
response function. The discriminant
function, g(x), is determined by
choosing those variables xi with
characteristics which differ between
the groups being classified. These
variables are then linearly combined
and weighted such that the groups are
as statistically different as
possible. This linear combination of
variables is calculated using the
perceptron convergence criteria.
The perceptron [13-15] is a
pattern recognition procedure which
consists of updating the weight vector
by considering only those patterns,
or spectra in this work, which have
been misclassified in the training
set. Each misclassified pattern is
considered in turn, with a fraction
of each misclassified spectrum being
added to the weight vector. This
procedure is continued until all of
the spectra are classified
correctly, or until it is determined
that the procedure fails to converge
to a satisfactory solution.
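A minimal Python sketch of this perceptron-style training is shown below. It assumes the sign convention used by IMSDISC and described below (interferant spectra multiplied by -1, so every correctly classified pattern gives a positive discriminant value); the array shapes, the single scaling factor, and the iteration limit are illustrative assumptions, not the published TRAIN code.

```python
import numpy as np

def train_perceptron(patterns, scaling_factor=0.01, max_passes=1000):
    """Perceptron training of the linear discriminant g(x) = w0 + sum(wi * xi).

    `patterns` is a 2-D array of pre-processed spectra in which interferant
    spectra have already been multiplied by -1, so every row should satisfy
    g(x) > 0 once training converges.
    """
    n_patterns, n_points = patterns.shape
    # Augment each pattern with a constant 1 so the threshold w0 is learned too
    augmented = np.hstack([np.ones((n_patterns, 1)), patterns])
    weights = np.zeros(n_points + 1)

    for _ in range(max_passes):
        misclassified = False
        for row in augmented:
            if row @ weights <= 0.0:              # pattern still misclassified
                weights += scaling_factor * row   # add a fraction of it
                misclassified = True
        if not misclassified:
            break                                 # all patterns classified correctly
    return weights

def discriminant_value(weights, spectrum):
    """Evaluate g(x) for a new (un-negated) spectrum; positive means 'alarm'."""
    return weights[0] + weights[1:] @ spectrum
```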
In this software package, the
three programs are run separately,
but are still inter-related. The
first program, IMSDISC, uses a file
called NAMES. NAMES is simply the
file that contains the names of the
individual data files to read, and a
value that tells the program whether
the file is to be treated as the
sample or as an interferant. The
data from the individual data files
is then treated such that all the
files are compatible with respect to
time spacing between data points,
delay to start of data sampling, and
number of data points. To
accomplish this, IMSDISC uses a
spline function to interpolate and
fit the data. After the data has
been treated to fulfill the
compatibility requirement, the
discriminant threshold is set to
zero by multiplying all interferant
spectra by negative 1, (-1). The
sample spectra are left unaltered.
The data is then stored in a
discriminant data file.
The second program, TRAIN,
develops a linear-discriminant based
on the perceptron convergence
criteria. TRAIN prompts the
operator for the name of the input
discriminant file that was created
with the program IMSDISC. It reads
the data from the discriminant data
set, accepts input for the values of
a scaling factor, between
0.000000001 and 0.1, and the number
of iterations to perform using this
scaling factor. In practice, it is
generally necessary to use a series
of decreasing scaling factors and
iterations to calculate the linear
discriminant function which best
differentiates the samples and the
interferants. After the linear
118
-------
discriminant function has been
calculated, the linear coefficients
are written to a file on the computer
disk for use by the last program.
These first two programs, IMSDISC and
TRAIN, are the time consuming
programs and are run only when a new
compound is to be added to the
database.
The third program in this
package, IMSIDENT, uses the linear
discriminant values created with the
program TRAIN. Thus, it is dependent
on the first two programs in the
package. IMSIDENT can be used in one
of two possible configurations; the
first configuration is as a stand-
alone program, and the second is that
it can be incorporated into a data
collection program for real time
identification of an unknown
environment. In the stand-alone
configuration, the program prompts
the operator for the name of the data
to analyze. The program reads the
data, and performs a spline
interpolation to make the data
compatible with the discriminant data
sets. Next, the program reads a file
named COEF.FIL that contains the
names of the coefficient files. The
linear discriminant value is then
calculated. If the linear
discriminant value is positive, an
alarm message is generated which
notifies the operator that the
spectrum has been identified. No
message is generated if the
discriminant value is negative. The
results of the identification process
are then written to a file named
ALARM.RPT for later use, and the
program then prepares to read the
next data file to be analyzed.
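The decision step in this stand-alone mode amounts to evaluating g(x) for each stored coefficient set and reporting any positive value. The sketch below illustrates that logic; only the file names COEF.FIL and ALARM.RPT come from the text, while the function, the dictionary of coefficient sets, and the exact report wording are assumptions.

```python
import numpy as np

def identify_spectrum(tr_intensity, coefficient_sets, data_name, report="ALARM.RPT"):
    """Evaluate the linear discriminant for one pre-processed spectrum.

    coefficient_sets : dict mapping compound name -> weight vector (w0 first),
                       e.g. built from the coefficient files listed in COEF.FIL
    """
    lines = []
    for compound, w in coefficient_sets.items():
        g = w[0] + np.dot(w[1:], tr_intensity)
        if g > 0.0:                               # positive discriminant -> alarm
            lines.append(f"ALARM ({compound}) FOR FILE {data_name}")
    if not lines:                                 # negative for every compound
        lines.append(f"ALL CLEAR FOR FILE {data_name}")
    with open(report, "a") as fh:
        fh.write("\n".join(lines) + "\n")
    return lines
```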
In the second configuration, the
program functions as a real time
monitor. The name of the data file
to be analyzed is passed from the
data collection program to the
IMSIDENT package rather than
prompting the operator for the name
of the data file to analyze. The
spline interpolation is then
performed on the data, and the linear
discriminant value is calculated. If
the discriminant value is positive,
the alarm message is generated; no
message is generated if the
discriminant value is negative. The
results of the identification
process are written to a file named
ALARM.RPT for later use.
DISCUSSION
The program package was
developed for use with the Graseby
Ionics Advanced Signal Processing
(ASP) board, the Graseby Airborne
Vapor Monitor (AVM), and a Zenith
286 PC. Using this hardware and the
linear discrimination package, it
has been possible to identify and
semi-quantitate the presence of 15
common chemical vapors in air.
These compounds, most of which are
of industrial importance, and the
levels at which the Occupational
Safety and Health Administration
(OSHA) have determined them to be
hazardous are shown in Table C, with
the ion mobility spectra of these
compounds shown in Figures 4 through
21. When the software is used in
the stand-alone configuration (i.e.,
separate from the data collection
routines) and using the Zenith 286
PC, the presence of these compounds
can be determined and the compound
identified in less than ten seconds.
This includes the time necessary to
perform the spline interpolation and
the calculation of the discriminant
value for the data; however, this
does not include the time required
to create the discriminant
functions.
The results shown in Table D
are from the evaluation of a series
of files used to determine the
presence of N-Methyl Formamide. The
"All Clear" report indicates that
the IMSIDENT program does not find
any similarities between the N-
methyl formamide test spectrum and
the spectra of the fifteen compounds
stored in the database. The report
of an alarm indicates that the
program did find similarities in the
spectra, and the magnitude of the
discriminant is a measure of the
amount of similarity.
It is not really surprising
that there are a number of false
positive alarms indicating the
presence of diethyl ether. Older
119
-------
versions of the AVM used an acetone
dopant within its detection system,
whereas newer versions of the AVM use
water vapor in the atmosphere as the
dopant. This dopant in the older
AVM's results in the presence of an
acetone reactant ion. This reactant
ion is the ionic species which is
responsible for transferring the
ionic charge to the chemical compound
being studied. All of the spectra
used in the discrimination functions
were recorded using water as the
reactant ion. Thus, the discriminant
functions have not been trained to
eliminate the possibility of alarming
on a spectrum which has an acetone
reactant ion peak, and an alarm is
reported. Examination of two
representative spectra for which an
alarm was reported shows the
similarity of the IMS spectrum of
diethyl ether, the lower trace
in Figure 22 (ETHER in Table D), and
the N-methyl formamide background
spectrum, the upper trace in Figure
22 (\AVM\DATA\nmfo0000.ACQ in Table
D). The reactant ion peak does not
appear at the same time as the
diethyl ether peak; however, the
band shapes are similar.
If the discriminant function is
trained to ignore the acetone
reactant ion peak, one does not get
an alarm. Results of the identification
procedure with the acetone reactant
ion peak ignored are shown in
Table E.
TABLE E
File "ALARM.RPT" for
N-Methyl Formamide Analysis
with Acetone Reactant ion Ignored
ALL CLEAR FOR FILE \AVM\DATA\nmfo0000.ACQ
ALL CLEAR FOR FILE \AVM\DATA\nmfo0001.ACQ
ALL CLEAR FOR FILE \AVM\DATA\nmfo0002.ACQ
ALL CLEAR FOR FILE \AVM\DATA\nmfo0003.ACQ
ALL CLEAR FOR FILE \AVM\DATA\nmfo0004.ACQ
ALL CLEAR FOR FILE \AVM\DATA\nmfo0005.ACQ
ALL CLEAR FOR FILE \AVM\DATA\nmfo0006.ACQ
ALL CLEAR FOR FILE \AVM\DATA\nmfo0007.ACQ
ALL CLEAR FOR FILE \AVM\DATA\nmfo0008.ACQ
ALL CLEAR FOR FILE \AVM\DATA\nmfo0009.ACQ
ALL CLEAR FOR FILE \AVM\DATA\nmfo0010.ACQ
ALL CLEAR FOR FILE \AVM\DATA\nmfo0011.ACQ
ALL CLEAR FOR FILE \AVM\DATA\nmfo0012.ACQ
ALL CLEAR FOR FILE \AVM\DATA\nmfo0013.ACQ
ALL CLEAR FOR FILE \AVM\DATA\nmfo0014.ACQ
ALL CLEAR FOR FILE \AVM\DATA\nmfo0015.ACQ
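One way to "ignore" the acetone reactant ion peak, as was done for the results in Table E, is to zero the region of the normalized TR axis where that peak falls before spectra are used for training or identification. The sketch below illustrates the idea; the masked window is an assumed example, since the paper does not state the exact region that was excluded.

```python
import numpy as np

def mask_tr_window(tr_axis, intensity, lo=0.95, hi=1.15):
    """Zero the spectrum inside a drift-time-ratio window (for example, around a
    dopant reactant ion peak) so that neither training nor identification can
    key on that region. The default window is an illustrative assumption only."""
    masked = intensity.copy()
    masked[(tr_axis >= lo) & (tr_axis <= hi)] = 0.0
    return masked
```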
120
-------
LITERATURE CITED
1. Threshold Limit Values and
Biological Exposure Indices for 1989-
90, American Conference of
Governmental Industrial Hygienists,
Cincinnati, OH, (1989) .
2. NIOSH Pocket Guide to Chemical
Hazards, U.S. Department of Health
and Human Services, National
Institute for Occupational Safety and
Health, Washington, D.C., (1985).
3. Code of Federal Regulations, 29
CFR 1910, Subpart Z - Toxic and
Hazardous Substances, 1910.1000 - Air
Contaminants. 19 January 1989.
4. E. W. McDaniel and E. A. Mason,
The Mobility and Diffusion of Ions in
Gases. Chapt. 2, John Wiley and Sons,
New York, 1973.
5. H. E. Revercomb and E. A. Mason,
Anal. Chem., 1975, 47, 970.
6. Plasma Chromatography, Carr, T.
W., ed., Plenum Press, New York,
1984.
7. G. A. Eiceman, A. P. Snyder, and
D. A. Blyth, Inter. J. of Environ.
Anal. Chem., 1989, 38, 415.
8. G. A. Eiceman, M. E. Fleischer,
and C. S. Leasure, Inter. J. of
Environ. Anal. Chem., 1987, 28, 279.
9. C. S. Leasure, M. E. Fleischer,
and G. A. Eiceman, Anal. Chem.,
1986, 58, 2141.
10. J. M. Preston and L. Rajadhyax,
Anal. Chem., 1988, 60, 31.
11. D. M. Davis and R. T. Kroutil,
Anal. Chim. Acta, 1990, 232, 261.
12. D. M. Davis and R. T. Kroutil,
in P. Jurs (Ed.), Computer-Enhanced
Analytical Spectroscopy, Vol. 3,
Plenum Press, New York, 1990, (in
press).
13. F. Rosenblatt, Principles of
Neurodynamics: Perceptrons and the
Theory of Brain Mechanisms, Spartan,
New York, (1962).
14. R.O. Duda and P.E. Hart, Pattern
Classification and Scene Analysis,
Wiley, New York, (1973).
15. Y.-H. Pao, Adaptive Pattern
Recognition and Neural Networks,
Addison-Wesley, New York, (1989).
121
-------
[Schematic labels: carrier flow (air); driving pulse to gating grid; gating grid; ionizing region; sample ions and air ions; drift region; collector electrode; output signal to amplifier and microprocessor system; to pumps.]
Figure 1. Schematic diagram of an ion mobility spectrometer.
122
-------
[Schematic labels: capillary; air supply connection; flowmeter; nitrogen supply; sampling line; vacuum supply.]
Figure 2. Schematic diagram of the Q5 vapor generator.
123
-------
[Flowchart, linear discriminant analysis: acquire data; determine mode (+ or -); locate reactant ion peak (RIP), and if no peak is present, assume the location from the last spectrum; normalize spectrum with respect to the RIP; compare resultant spectrum with the reference spectra; if discriminant value < 0, all clear; if discriminant value > 0, set alarm for that species.]
Figure 3. Block diagram showing the steps taken when performing a linear
discriminant analysis on an ion mobility spectrum.
124
-------
[Figures 4 through 21: ion mobility spectra of the compounds listed in Table C (intensity in arbitrary units).]
-------
Figure 22. Typical IMS spectra analyzed using linear discriminant analysis.
Spectra show the similarities often encountered in IMS spectra. Spectrum A
is diethyl ether, and spectrum B is an acetone reactant ion spectrum.
134
-------
DISCUSSION
DREW SAUTER: Perhaps you could explain certain aspects that have hindered
adoption of ion trap mass spectroscopy, basically ion molecule reactions. One of
the things I've run into, and others have, is that in certain limited scenarios, you
can probably define your ion molecule chemistry.
PETER SNYDER: Yes.
DREW SAUTER: But the truth of the matter, and correct me if I'm wrong, is
that you can have unknown reacting ions in the sample. In an unknown situation,
it would seem that you could actually get spectra that were sample dependent.
Basically, would you see IMS being more useful as a sort of screening tool on
relatively limited scenarios, as opposed to a tool that could offer more general
analysis capabilities?
PETER SNYDER: Well, I can't disagree with that when you just talk about IMS
by itself. Because of the potential complicating responses that can occur if your
environment is not controlled, anything can happen.
DREW SAUTER: What I mean though is in the real environmental world, a lot
of samples have a lot more than one compound, and not only that do they have
a lot more than one compound, recognizing that you can separate things by GC's,
they tend to have different concentrations.
PETER SNYDER: Yes.
DREW SAUTER: Hence if they have different concentrations, and there's ion
molecule reactions going on, you have them going on with some rate constant.
They're producing different populations of ions, and hence a different sample
dependent spectra. That strikes me as a significant drawback, despite all the
grand things that you've shown.
PETER SNYDER: You have to consider what IMS is based on. IMS is based
on ion molecule reactions, and that can be broken down into proton affinity and
electron affinity by and large. So then you have to look at what kind of
compounds are responding.
DREW SAUTER: But there's also a concentration term that you showed in your
graph.
PETER SNYDER: Yes, absolutely. Concentration is very important. I guess the
difficulty in response comes then when you get to phosphonate compounds or
phosphoryl compounds that are very sensitive to proton affinity. They get that
proton very nicely, and by and large to the exclusion of many other compounds,
even in their presence, or at relatively high concentrations. Ammonia probably
would take exception too. That might be a complicating factor.
But in most cases, phosphoryl compounds really come through, and that's one of
the strengths of the chemical agent monitor, in terms of looking for phosphoryl-
based nerve agents.
As you go down to amines, esters, ketones and alcohols, the relative proton
affinities are not as wide.
STEVE HARDEN: I'd like to just comment on that before we get on to the next
question, and say that yes, you have indeed hit upon one of the problems with ion
mobility spectrometry for analyzing real-world mixtures.
The reason the Army has developed it for their purposes is that the compounds
they're interested in either have such extreme proton affinities or extreme
electronegativities that the sensitivity is very high for those compounds. So
it works for our purposes, and it may not work for some environmental purposes
that you mentioned, because of this mixture problem.
It also points out one of the needs and requirements in this unknown analysis, or
analysis of unknowns, for preparation of sample; you mentioned the GC/MS
system. We'll hear some more about that in our next paper.
But one can also point out that in some of the data (in this paper), some
compounds that do have a high electronegativity can be picked out using these
techniques that we were talking about, and we can then point out the fact that yes,
indeed, that material was present.
That little bump on the side of that peak was, I think, the mustard, which is an
Army compound of interest. The bump was on the side of a peak of phenol,
phenol being in much greater concentration.
With the previous sensitivities and signal processing techniques, we can bring it out
even more if we use preparation of samples. However, you do separate samples at the
expense of complexity of instrumentation, and that's one reason why the Army
hasn't pursued that to this particular point. So we have.
HERB HILL: For a long time now we have been using ion mobility spectrometers
as a chromatographic detector, basically because we feel that there really are
problems with interferences, except for very specific cases.
I'm really excited to see us beginning to talk about the use of what I call
chromatographic filters on the front end of IMS, for field monitoring. We've
done studies, for example, treating IMS as a chromatographic detector, and you
can see that the interferences under conditions like that are no worse than you
would have with a flame ionization detector, an electron capture detector. The
quantitative value of IMS is acceptable in any range. It's as good as any of the
standard chromatographic detectors that we have. We've published papers in
which we've put interfering species in, compared them to an FID, an ECD, and an
IMS, and you see that the quantitative value of the data is fine, it's good in IMS.
When you add the chromatograph controls on the front end, you can do dioxins.
We do ligands in blood analysis, we do a variety of very small, minute trace
compounds in very, very complex mixtures, as well as or better than you can with a
lot of techniques.
And it should apply very well to portable field analysis if you put a portable
GC on the front end of that.
PETER SNYDER: Yes, you're absolutely right. And the literature that you have
published over the past decade and a half, attests to that. There's many different
sample matrices that Professor Hill has looked at with very good resolution,
depending upon the column characteristics. There has been a lot of good
information coming out of that, using an IMS as a detector.
So basically the newer innovative topic we're looking at here, is using the hand-
held version of the IMS, to see how far we can go with that.
135
-------
ION MOBILITY SPECTROMETRY AS A FIELD SCREENING TECHNIQUE
Lynn D. Hoffland and Donald B. Shoff
Analytical Research Division, Research Directorate,
U.S. Army Chemical Research, Development and Engineering Center,
Aberdeen Proving Ground, MD
1. INTRODUCTION
Ion Mobility Spectrometry (IMS),
also called Plasma Chromatography, is
used to detect trace quantities of
organic vapors in gaseous mixtures.
Several researchers over the past 15
years have demonstrated the utility of
mobility detection for a variety of
organic compounds.[1-11] Quantities as
low as 10^-10 grams of nitrosamines have
been reported.[12]
IMS is a conceptually simple
technique that relies on the drift time,
or time of flight, of molecular, or
cluster, ions through a host gas as a
means of differentiation. This differs
from classical mass spectrometry in that
there is little, if any, fragmentation
and the ions are not mass analyzed.
Detailed theory can be found
elsewhere.[13-15] The ions are
differentiated by charge and by
mobility. The reduced mobility K0
(corrected for standard pressure and
temperature) is expressed as

K0 = 42.51 D

where D is the scalar diffusion
coefficient of Fick's law. This reduced
mobility K0 is catalogued and identified
for each ionic species present.
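As a small worked illustration (not part of the original paper), the sketch below evaluates the K0 = 42.51 D relation quoted above and, separately, the standard temperature and pressure normalization of a measured mobility; the numerical inputs are assumed values chosen only for illustration.

    def reduced_mobility_from_diffusion(D_cm2_per_s):
        """K0 (cm^2 V^-1 s^-1) from the scalar diffusion coefficient D of
        Fick's law, using the factor quoted in the text (K0 = 42.51 D)."""
        return 42.51 * D_cm2_per_s

    def reduced_mobility(K, pressure_torr, temperature_K):
        """Standard correction of a measured mobility K to the reduced
        mobility K0 at 760 torr and 273.15 K."""
        return K * (pressure_torr / 760.0) * (273.15 / temperature_K)

    # Illustrative (assumed) numbers:
    K0_from_D = reduced_mobility_from_diffusion(0.047)      # D = 0.047 cm^2/s
    K0_corrected = reduced_mobility(2.10, 700.0, 313.15)    # 40 C cell, 700 torr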
2. EXPERIMENTAL
The work was performed on an MMS-290
Ion Mobility Mass Spectrometer (PCP,
Inc.), shown in figure 1, and an Airborne
Vapor Monitor made by Graseby Ltd.
The PCP, Inc. MMS-290 spectrometer
used in these experiments consists of an
ion mobility spectrometer followed by a
quadrupole mass spectrometer coupled to a
Nicolet signal averager with a computer
interface for storage, data manipulation
and display.
There are four modes of operation
for the MMS-290. In the total ion mode
the MMS-290 acts as an ion mobility
spectrometer. Ions are gated into the
drift region and detected by the
electrometer. All ions detected are
averaged, stored and displayed. In the
integral ion mode the mass spectrometer
is the detector instead of the
electrometer. Again, all ions are
detected, averaged, stored and displayed.
There is no mass analysis in this mode.
It is used to check that the ion
distribution is not changed by traveling
the extra distance through the mass
spectrometer. The third mode is the mass
spectrum. The shutter grid is held open
to allow a continuous stream of ions into
the mass spectrometer which is mass
analyzing the ions. This provides a mass
spectral scan of the total ion flux. The
last mode of operation is the tuned ion
mode where the MMS-290 is operated as in
the integral ion mode but the mass
spectrometer is only detecting one mass
ion at a time. This shows which mass
ions are associated with each mobility
peak.
The Airborne Vapor Monitor (AVM)
used in these experiments consists of an
IMS described above with a membrane inlet
and internal electronics for signal
processing and alarm. It operates in
both positive and negative ion mode, has
137
-------
no internal display but can be
interfaced to a personal computer for
display and storage of the IMS spectra.
The AVM has only an electrometer, it has
no mass spectrometer to mass analyze the
ions, and it operates as the total ion
mode of the MMS-290.
Air, or the sample gas, is drawn
into the ionizing region and is ionized
by 60 keV Beta rays from a radioactive
Ni63 source. A potential exists
between the ionizer and the collector
forcing the ions in the direction of the
shutter grid. The closed shutter grid
neutralizes all ions reaching it. The
shutter is pulsed open for approximately
0.1 millisecond (msec) and a cross
section of the ions flow into the drift
region. The shutter closes again
isolating a short pulse of ions that
travel down the drift region propelled
against the drift gas flow by the
potential on the collector. The ions
are differentiated by their charge in
the electric field and their mobility in
the drift gas (drift velocity vd):

vd = K E.
The IMS differentiates the ions because
by the time that they reach the shutter
grid the ion molecule reactions have
equilibrated and in the drift region no
more reactions take place.
As the separated ions reach the
collector, they are detected by a fast
electrometer, and a current is generated
directly proportional to the number of
ions. The resultant spectrum is
depicted in figure 2.[16]
The highest K0 ions (C+) are
usually smaller or more compact followed
by the slower ions B+ and A+, in time.
Both positive and negative ion
formation of reactant and product ions
are multistep processes. Good, Durden,
and Kebarle[17] have determined the
mechanisms involved in positive reactant
ion formation:

N2 + e- → N2+ + 2e-
N2+ + 2N2 → N4+ + N2
N4+ + H2O → 2N2 + H2O+
H2O+ + H2O → H3O+ + OH
H3O+ + H2O + N2 → (H2O)2H+ + N2
The size of the resultant reactant ion
water clusters depends upon the relative
humidity, but generally water chemistry
dominates the positive ion mode. The
water ion may cluster directly with the
sample molecule M or, as is the case
more often, the sample molecule
abstracts the proton from the water
cluster and then may attract more or
fewer water molecules depending on the
humidity. At high concentrations the
sample molecules may form dimers with a
proton. Whenever there are other
molecules present with a higher proton
affinity than water, they may replace the
water in the above mechanisms (e.g.,
acetone or NH3). So, in figure 2, peak
C may be the reactant ion, B the
hydrated monomer, and A the protonated
dimer.
Negative reactant ion formation as
summarized by Spangler and Collins[18]
includes the following:

e-(thermal) + O2 → O2-
O2- + nH2O → (H2O)nO2-

where n = 1, 2. The sample molecule can
cluster with the O2- or abstract the
O2- from the CO2·O2-. As can be easily
seen, the chemistry can be quite involved
before any products are formed.
138
-------
The operating parameters for the
MMS-290 were:

Cell length           15 cm
Operating voltage     3000 volts
Electric field        200 volts/cm
Carrier gas           200 ml/min
Drift gas             500 ml/min
Cell temperature      40 °C
Pressure              entered daily
Drift distance        10 cm
The AVM was operated as received
from Graseby Analytical, Ltd. (Watford,
Herts, UK). Signals from this IMS were
processed with a Graseby Analytical,
Ltd., advanced signal averaging (ASP)
board installed in an IBM PC/AT
computer. Known or approximate operating
conditions were:

Inlet flow               500 ml/min
Drift tube temperature   ambient
Membrane temperature     70 °C
Reaction region          2.2 cm
Drift region             ~3.8 cm
Field gradient           ~200 V/cm

The samples were generated using a
Q5 apparatus, in which a saturated vapor
stream is mixed with a high volume
diluent dry air stream. By varying the
quantities of both streams, the
concentration of sample in the diluted
vapor stream was controlled. The
resulting diluted vapor stream was
sampled by either the IMS/MS inlet or
the AVM. All samples were used as
received from the manufacturer. The
concentration of the saturated vapor
stream was calculated from vapor pressure
data or from the Antoine equation.
3. RESULTS AND DISCUSSION
The data that follow are an example of
the response of this detection system to
high concentration vapors of acetic acid.
The acetic acid was used "as is" and, as
will be shown, was contaminated with
acetic anhydride (as is often the case).
The target concentrations for acetic acid
139
-------
detection with the AVM were the Time
Weighted Average (TWA) of 10 ppm,[19] the
Short Term Exposure Limit (STEL) of 15
ppm,[19] and the Immediately Dangerous
to Life or Health (IDLH) level of 1000
ppm.[20] Figures 3-6 show the response of
the AVM for these three concentrations.
The identity of the peaks in the
above data was determined with the
IMS/MS in the following manner. First,
the reduced mobility is calculated for
each peak. Since the reduced mobility is
a function of pressure and temperature and
these vary in the AVM and between the
AVM and IMS/MS, a drift time ratio is
calculated by dividing the species
mobility by the reactant ion mobility
(both are under the same temperature and
pressure). Then, the IMS/MS is operated
in the total ion mode and the integral
ion mode to check that there is no
effect between the different inlets of
the AVM and the IMS/MS and that the mass
spectrometer entrance of the IMS/MS does
not change the species (figures 7 and 8).
The first thing noticed is that the
pinhole inlet of the IMS/MS is much more
sensitive than the membrane of the AVM.
The membrane is required, however, to
keep too much water and contaminants
from spoiling the sensitive IMS cell.
So, allowing for the difference in
sensitivity, the mobility spectrum of
the IMS/MS is compared with the AVM to
correlate the mobility peaks between the
two instruments. Once confirmed, the
mass spectrum is taken to determine what
mass species are the major contributors
to the ion mobility spectrum (figure 9).
Then, each mass is scanned in the tuned
ion mode to determine to what peak in
the mobility spectrum each mass
contributes (figure 10). As can be
seen, in this low concentration, the
masses 55, 73, 83, 101, and 129 are all
hydrates and "nydrates" of the H+
"reactant ion" and the masses 79, 97,
125, and 153 are hydrates and "nydrates"
of the H+ acetic acid monomer. The
concentration is then increased and the
analysis series is repeated. As the
concentration increases the mass
spectrum becomes more complicated but
assignments can be made based upon past
experience. Since, at this time, we do
not have the capability, there is no
secondary mass fragmentation for
confirmation of these species. Table 1
indicates the assignments for each mass
fragment in the mass spectrum. Table 2
is a list of the mobility ratios and the
assignments for each mobility peak seen
at the various concentrations.
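As an illustration of the assignment step described above (a sketch only, not the authors' software), the following Python fragment converts a pair of drift times into a ratio relative to the reactant ion peak and looks it up against the ranges reported in Table 2; the ratio convention (species drift time divided by reactant ion drift time, so that heavier ions give values above 1.00), the tolerance, and the example drift times are assumptions.

    # Ranges taken from Table 2; the tolerance is an assumption of this sketch.
    ASSIGNMENTS = [
        ((0.95, 1.04), "Reactant ion"),
        ((1.08, 1.09), "Acid monomer"),
        ((1.18, 1.24), "Acid dimer"),
        ((1.34, 1.35), "Acid anhydride"),
        ((1.47, 1.48), "Anhydride dimer"),
    ]

    def assign_peak(t_drift_ms, t_rip_ms, tolerance=0.02):
        """Assign a mobility peak from its drift-time ratio relative to the
        reactant ion peak, using the ranges listed in Table 2."""
        ratio = t_drift_ms / t_rip_ms
        for (low, high), label in ASSIGNMENTS:
            if low - tolerance <= ratio <= high + tolerance:
                return label, ratio
        return "unassigned", ratio

    # Illustrative call with assumed drift times (ms):
    print(assign_peak(t_drift_ms=9.8, t_rip_ms=8.1))    # falls in the acid dimer range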
CONCLUSION
This example of acetic acid illustrates
the potential of this hand-held ion
mobility spectrometer to differentiate
between regulated concentrations of
hazardous chemicals. In support of
another program this work has been
extended to identification of these
regulated concentrations (TWA, STEL, and
IDLH) of 15 other solvent chemicals.
Although limited in scope, by extending
this data base the AVM could be used as a
field screening device and as a safety
device for field personnel.
140
-------
TABLE 1

AMU    Specie              Comment
55     H+(H2O)3            reactant ion
73     H+(H2O)4            reactant ion
79     m H+(H2O)           monomer hydrate
83     H+(H2O)3+N2         reactant ion
97     m H+(H2O)2          monomer hydrate
101    H+(H2O)4+N2         reactant ion
125    m H+(H2O)2+N2       monomer "nydrate"
129    H+(H2O)4+2N2        reactant ion
153    m H+(H2O)2+2N2      monomer "nydrate"

TABLE 2

Mobility Ratio    Assignment
1.00              H+(H2O)x(N2)y        Reactant Ion
1.08-1.09         m H+(H2O)x(N2)y      Acid Monomer
1.18-1.24         m2 H+(H2O)x(N2)y     Acid Dimer
1.34-1.35         m n H+(H2O)x(N2)y    Acid Anhydride
1.47-1.48         n2 H+(H2O)x(N2)y     Anhydride Dimer
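The nominal masses in Table 1 follow from simple arithmetic on the building blocks H+ (1), H2O (18), N2 (28) and the acetic acid monomer m (60); the short Python check below is added only as a consistency illustration and is not part of the original paper.

    H, H2O, N2, M = 1, 18, 28, 60   # proton, water, nitrogen, acetic acid (nominal masses)

    species = {
        "H+(H2O)3":       H + 3 * H2O,               # 55, reactant ion
        "H+(H2O)4":       H + 4 * H2O,               # 73, reactant ion
        "m H+(H2O)":      M + H + H2O,               # 79, monomer hydrate
        "H+(H2O)3+N2":    H + 3 * H2O + N2,          # 83, reactant ion
        "m H+(H2O)2":     M + H + 2 * H2O,           # 97, monomer hydrate
        "H+(H2O)4+N2":    H + 4 * H2O + N2,          # 101, reactant ion
        "m H+(H2O)2+N2":  M + H + 2 * H2O + N2,      # 125, monomer "nydrate"
        "H+(H2O)4+2N2":   H + 4 * H2O + 2 * N2,      # 129, reactant ion
        "m H+(H2O)2+2N2": M + H + 2 * H2O + 2 * N2,  # 153, monomer "nydrate"
    }

    assert sorted(species.values()) == [55, 73, 79, 83, 97, 101, 125, 129, 153]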
141
-------
References
1. Cohen, M. J., and Karasek, F. W.,
"Plasma chromatography - A new dimension
for gas chromatography and mass
spectrometry", J. Chromatogr. Sci., 8,
(1970), 331.
2. Karasek, F. W., and Kane, D. M.,
"Plasma chromatography of the n-alkyl
alcohols", J. Chromatogr. Sci., 10,
(1972), 673.
3. Karasek, F. W., Tatone, O. S., and
Denney, D. W., "Plasma chromatography of
the n-alkyl halides", J. Chromatogr., 87,
(1973), 137.
4. Karasek, F. W., Tatone, O. S., and
Kane, D. M., "Study of electron capture
behavior of substituted aromatics by
plasma chromatography", Anal. Chem., 45,
(1973), 1210.
5. Karasek, F. W., "Plasma
chromatography", Anal. Chem., 46, (1974),
710A and references.
6. Karasek, F. W., Denney, D. W., and
Dedecker, B. H., "Plasma chromatography of
normal alkanes and its relationship to
chemical ionization mass spectrometry",
Anal. Chem., 46, (1974), 970.
7. Karasek, F. W., and Denney, D. W.,
"Detection of aliphatic N-nitrosamine
compounds by plasma chromatography",
Anal. Chem., 46, (1974), 1312.
8. Karasek, F. W., Malcan, A., and
Tatone, O. S., "Plasma chromatography of
n-alkyl acetates", J. Chromatogr., 110,
295.
9. Karasek, F. W., Kim, S. H., and
Rokushika, S., "Plasma chromatography of
alkyl amines", Anal. Chem., 50, (1978),
2013.
10. Spangler, G. E., and Lawless, P. A.,
"Ionization of nitrotoluene compounds in
negative ion plasma chromatography",
Anal. Chem., 50, (1978), 884.
11. Shumate, C., St. Louis, R. H., and
Hill, Jr., H. H., "Table of reduced
mobility values from ambient
pressure ion mobility spectrometry",
J. Chromatogr., 373, (1986), 141.
12. Karasek, F. W., and Denney, D. W.,
"Detection of Aliphatic N-
Nitrosamine Compounds by Plasma
Chromatography", Anal. Chem., 46, No.
9, (August 1974), 1214-1312.
13. McDaniel, E. W., and Mason, E.
A., The Mobility and Diffusion of
Ions in Gases, John Wiley and Sons,
New York, (1973).
14. McDaniel, E. W., Cermak, V.,
Dalgarno, A., Ferguson, E. E., and
Friedman, L., Ion-Molecule Reactions,
John Wiley and Sons, New York,
(1970).
15. Loeb, L. B., Basic Processes of
Gaseous Electronics (2nd edition),
University of California Press,
Berkeley, (1960).
16. Spangler, G. E., and Cohen, M.
J., "Instrument Design and
Description", p. 15, in Plasma
Chromatography, Ed. Timothy W. Carr,
Plenum Press, New York, 1984, 1-42.
17. Good, A., Durden, D. A., and
Kebarle, P., "Ion-molecule reactions
in pure nitrogen and nitrogen
containing traces of water at total
pressures 0.5-4 torr. Kinetics of
clustering reactions forming
H+(H2O)n", J. Chem. Phys., 52,
(1970), 212.
18. Spangler, G. E., and Collins, C.
I., "Reactant Ions in Negative Ion
Plasma Chromatography", Anal. Chem.,
43, (March 1975), 2.
19. Threshold Limit Values and
Biological Exposure Indices for
1988-1989, American Conference of
Governmental Industrial Hygienists.
20. NIOSH Pocket Guide to Chemical
Hazards, U.S. Department of Health and
Human Services, 1985.
142
-------
FIGURE 1. IMS/MS schematic, atmospheric pressure (760 torr) mobility region. [Figure not reproduced.]
143
-------
FIGURE 2. Typical ion arrival time spectrum (ion current vs. time in milliseconds). [Figure not reproduced.]
144
-------
Figure 3: AVM Spectrum [plot not reproduced; y-axis: arbitrary units]
-------
Figure 4: AVM Spectrum (Acetic Acid 10 ppm) [plot not reproduced; y-axis: arbitrary units]
-------
Figure 5: AVM Spectrum (Acetic Acid 15 ppm). Labeled peak: m H+(H2O)x(N2)y Acid Monomer. [Plot not reproduced; y-axis: arbitrary units.]
-------
Figure 6: AVM Spectrum (Acetic Acid 1000 ppm). Labeled peaks: m n H+(H2O)x(N2)y Acid Anhydride and m2 H+(H2O)x(N2)y Acid Dimer. [Plot not reproduced; y-axis: arbitrary units.]
-------
Figure 7: IMS/MS Spectrum "Total Ion Mode" (Acetic Acid 80 ppb). [Plot not reproduced; x-axis: time (ms).]
149
-------
Figure 8: IMS/MS Spectrum "Integral Ion Mode" (Acetic Acid 80 ppb). [Plot not reproduced; x-axis: time (ms).]
150
-------
Figure 9: IMS/MS Spectrum "mass spectrum mode" (Acetic Acid 80 ppb). Labeled peaks at masses 55, 73, 79, 83, 97, 101, 125, 129, and 153. [Plot not reproduced; x-axis: mass.]
151
-------
Figure 10: IMS/MS Spectra "Tuned Ion Mode" (Acetic Acid 80 ppb). Panels: m/e 153, m/e 125, m/e 97, m/e 79, and Total Ion; x-axis: time (ms). [Plots not reproduced.]
152
-------
HAND-HELD GC-ION MOBILITY SPECTROMETRY FOR ON-SITE ANALYSIS
OF COMPLEX ORGANIC MIXTURES IN AIR OR VAPORS OVER WASTE
SITES
Suzanne Ehart Bell
Los Alamos National
Laboratory
MS K484
Los Alamos, NM 87545
G.A. Eiceman
New Mexico State University
Department of Chemistry
Box 30001, Dept. 3C
Las Cruces, NM 88003
ABSTRACT
Ion mobility
spectrometry (IMS) was
formally introduced
approximately 21 years
ago, and has been used as
a detector for chemical
warfare agents. IMS
research and development
outside the military has
recently been the subject
of renewed interest.
Military IMS units are
small, rugged, and
portable which makes them
ideal candidates for
inclusion in portable
airborne vapor monitoring
systems. The strengths of
IMS are low detection
limits, a wide range of
application, and
simplicity of design and
operation. The gentle
ionization processes used
in IMS impart a measure of
selectivity to its
response. However,
atmospheric pressure
chemical ionization with
compounds of comparable
proton affinities leads to
mobility spectra for which
interpretive and
predictive models do not
exist. An alternative
approach for the analysis
of complex mixtures with
IMS is the use of a
separation device such as
a gas chromatograph (GC)
as an inlet. The
attractions of GC-IMS over
GC-mass spectrometry (MS)
for field use include the
small size, low weight,
and low power demands of
GC-IMS.
Parameters in GC-IMS
which required examination
before further development
or field application
included three major
concerns. The first was
selection of an optimum
temperature of the IMS
detector and evaluation of
the effect of IMS
temperature on mobility
spectra. The second was a
study of the stability
and reproducibility of
chromatographic retention
and mobility behavior.
The final issue was the
153
-------
development of suitable
data reduction methods.
Results suggest that an
IMS cell temperature of
ca. 150° to 175°C provided
mobility spectra with
suitable spectral detail
without the complications
of ion-molecule clusters
or fragmentation. A
commercially available,
portable IMS unit was
configured as a GC
detector to evaluate the
possibility of using the
unmodified unit as the
basis for a portable
prototype. Significant
fluctuations in peak
heights were observed (ca.
+/- 12%), but mobilities
varied only slightly (ca. 1%)
over a 30 day test period.
Neural network pattern
identification techniques
were applied to data
obtained at room
temperature and at 150°C.
Results showed that
spectral variability
within compound classes
was insufficient to
distinguish related
compounds when mobility
data was obtained using
the commercial room
temperature IMS cell.
Similar but less severe
difficulty was encountered
using the 150°C data.
Incorporation of retention
indices as a referee
parameter was useful in
eliminating false
positives.
INTRODUCTION
Background
The detection of
trace levels of hazardous
organic volatile compounds
in complex mixtures
represents an analytical
and sampling challenge.
Waste site sampling
requires ppb detection
limits in samples
comprised of complex
matrices and mixtures of
from ten to hundreds of
analytes. Other
considerations include the
time of sampling and time
of analyses, delays in
analysis, labor costs,
labor training, and
cost/sample ratio. The
time and expense of
complete laboratory
analyses can force that
fewer samples be taken
with the attendant risks.
Technical aspects make the
translation of widely
accepted laboratory
instrumentation (GC-MS and
GC-FTIR) difficult or
unsatisfactory due to cost
and complexity.
Certainly, gas
chromatography with some
advanced detector will be
required for chemical
resolution of complex
mixtures of organic
compounds over waste
sites. Proven detectors
such as mass spectrometry
and infrared spectrometry
allow necessary
specificity of detection
but represent cumbersome
and intricate
instrumentation not easily
configured for field use.
These instruments often
require highly skilled
operators as well. The
high power consumption of
portable GC/MS and GC/IR
systems certainly limits
their use in many field
situations. Other
detectors which have been
154
-------
common to portable GC
units lack specificity and
necessitate a reversion to
dual column or dual
detector methods for
confirmation of peak
assignments. The
development of a hand-held
GC-IMS combines the
separation power of GC in
combination with a
multidimensional detector.
The release of the
civilian counterpart of
the military IMS units was
a logical starting point
for development of a
portable GC-IMS.
Ion Mobility Spectrometry
Ion mobility
spectrometry (Figure 1) is
based on the ionization of
vapors in air at
atmospheric pressure. The
differentiation of ions
occurs by measurement of
gaseous ionic mobilities
(1). A typical IMS
instrument is divided into
two regions. The first is
the reaction region
containing an ion source
(typically 63Ni). Ion
separation occurs in the
second (drift) region of
the spectrometer, where
separation is based on the
size-to-charge ratio of
the ions. The ion shutter
that separates the two
regions injects ions from
the reaction to the drift
region using periodic pulses
of the shutter field. The
drifting ions are detected
at the end of the drift
tube by a detector plate.
In IMS, ionization
occurs through collisional
charge transfer between a
reservoir of charge, i.e.
the reactant ions, and
neutral analytes, M. The
most abundant reactant
ions generated from a
beta-emitting source in
air are (H2O)nH+ and
(H2O)nO2-. These ionic
clusters co-exist at near
thermal energies in the
reaction region. Product
ions experience little or
no fragmentation and exist
commonly as M+ and MH+ or
M- and M·O2- depending on
proton or electron
affinities of the neutral
species. Ions formed in
the reaction region are
injected into the drift
region by the ion shutter.
In the drift region, ions
move at particular drift
times (td) through an
electric field, E, of ca.
200 V/cm. For a drift
region with a given
length, L (cm) , the drift
time is related to
velocity (vd, cm/s) and
ion mobility (K, cm2/(V·s))
through equations 1 and 2:

vd = L / td        (1)

K = vd / E         (2)
Ions strike a flat plate
detector and a mobility
spectrum or plot of
detector current (in pA or
nA) versus td (usually in
ms) is produced.
Consequently, the basis
for selectivity in IMS is
differences in drift times
for ions governed by ion
mobilities. Drift times
are dependent on
temperature and pressure
and are normalized to
reduced mobility
constants, K0, that are
related to molecular
155
-------
properties through the
Mason-Schamp equation. In
general, the equations for
mobility constants are
considered well-
established for small
spherical ions but
extrapolations to large
organic molecules may be
tenuous. Practically
speaking, direct
quantitative predictions
of Ko values for organic
molecules are presently
impossible. Mobilities
are inversely proportional
to collisional cross
sections. Thus, IMS is an
ion separator based on
size/charge rather than
mass/charge as found in
mass spectrometers.
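A minimal Python sketch of the drift-time arithmetic in equations (1) and (2), together with the standard normalization to a reduced mobility constant, is given below; it is an illustration added here, not code from the study, and the drift length, drift time, field, pressure, and temperature are assumed example values.

    def mobility_from_drift(L_cm, t_d_s, E_V_per_cm):
        """Equations (1) and (2): v_d = L / t_d and K = v_d / E."""
        v_d = L_cm / t_d_s            # drift velocity, cm/s
        return v_d / E_V_per_cm       # mobility K, cm^2 V^-1 s^-1

    def reduced_mobility(K, pressure_torr, temperature_K):
        """Normalize K to standard temperature and pressure (K0)."""
        return K * (pressure_torr / 760.0) * (273.15 / temperature_K)

    # Assumed example: 6 cm drift region, 12 ms drift time, 200 V/cm field,
    # 660 torr ambient pressure, 150 C (423.15 K) cell temperature.
    K = mobility_from_drift(6.0, 12e-3, 200.0)
    K0 = reduced_mobility(K, 660.0, 423.15)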
Ion mobility
spectrometry offers
advantages such as low
power, simple and rugged
construction, ppb
detection limits, and
mobility spectra
representative of
individual constituents.
Disadvantages
traditionally ascribed to
IMS include significant
memory effects,
irreproducible behavior
and complex response to
mixtures (2). These
difficulties can be
circumvented with the
addition of a GC as an
inlet and with the
reconfiguration of the
drift tube (3,4).
Furthermore, hand-held IMS
instruments are currently
available in military-
hardened form with battery
operation (5). The
military IMS cells are
attractive for use in
portable GC units and were
used as a starting point
for the study of GC/IMS
parameters.
Objectives
Several areas of GC-
IMS have not been
addressed and must be
understood for practical
advances in field
applications of GC/IMS.
The first area is
optimization (or
influence) of IMS
temperature on GC/IMS
performance and on the
mobility spectra obtained
from the IMS. Second is
the evaluation of the
effect of concentration on
reduced mobility and
mobility patterns. Third
is the evaluation of a
commercially available
portable IMS as a GC
detector, and the final
area is the preparation of
a suitable software peak
identification program.
Each of these has served
as the basis for an
objective in the work
described below.
RESULTS AND DISCUSSION
Effects of Temperature on
Ion Mobility
The successful
development of a portable
GC-IMS requires that the
optimum IMS temperature be
determined. This data had
to be determined
empirically, since little
foundational theory was
available. Typically, low
temperature mobility
behavior shows
considerable ion
clustering and complexity,
while higher temperatures
encourage ion
156
-------
fragmentations. An
intensive study was
undertaken to determine
the optimum operating
temperature for the IMS
since a wide variety of
analytes are expected to
be encountered. A
representative set of 43
compounds was selected
from seven different
chemical classes, shown in
Table 1. The temperature
effect study was conducted
on a Tandem Ion Mobility
Spectrometer (TIMS, PCP
Inc., West Palm Beach,
Fla.) which allowed
heating of the inlet and
drift tube.
Confirmational mass
spectral studies were
conducted on an MMS-160
IMS/MS (PCP, Inc., West
Palm Beach, Fla).
There are four basic
processes that can occur
when a compound is
introduced into the IMS.
First, there may be no
detectable reaction, such
as when a species that is
active only under positive
polarity is introduced
into an IMS operating in
negative polarity.
Second, clusters may form
between the analyte and
various ions such as
N2+, or NH4+. Such
clusters appear as peaks
in the spectrum. The
third possibility is the
formation of cluster ions
which subsequently undergo
equilibria reactions while
in the drift tube. The
magnitude of the
equilibrium constant will
determine the effect on
the resulting mobility
spectrum. If the
equilibrium is slow
relative to transit time,
no significant effects
will be seen. If the
equilibrium is fast
relative to the transit
time, the ions arriving at
the detector can differ
significantly from the
original ions produced,
and peak broadening may
result. Finally,
fragmentation may occur,
and the resulting spectra
may exhibit such behaviors
as a generalized increase
in the baseline or a
series of numerous small
peaks. The exact
manifestation will depend
on the degree of
fragmentation. The IMS
portion of a portable
GC/IMS should operate
isothermally to reduce
power consumption and
complexity. It is thus
essential to select the
cell temperature such that
clearly resolved, sharp,
and reproducible peaks are
produced. Peak broadening
and fragmentation patterns
will be difficult, if not
impossible, for a data
reduction system to
classify. It is also
desirable that the cell
operating temperature be
as low as possible to
minimize power
requirements. The other
factor that must be
considered for temperature
selection is memory
effect. Higher
temperatures encourage
rapid clearing of the cell
and promote cleaner
operation. Thus, 3
factors must be balanced
in selecting the optimum
IMS temperature: clearing
time, mobility behavior,
and power requirements.
157
-------
The effect of IMS
cell temperature on
mobility behavior was
studied by analyzing the
43 target compounds using
nine different cell
temperatures from 50 to
250°C. The results showed
that while all compounds
behaved differently, a
general pattern was
discernable. At the lower
temperatures (ca. 50 to
150°C), many compounds
experienced drift tube
reactions, and peaks were
either very broad or moved
as the concentration in
the drift tube changed.
At the midrange
temperatures (ca. 100-
200°C), drift tube
equilibria decreased, and
stable ion/molecule
clusters were observed.
At the higher temperatures
(ca. 200-250°C),
fragmentation became
prevalent. Figures 2 and
3 show two examples of
compound classes and their
behavior over the
temperature range studied.
The aromatics (figure 3)
are not dramatically
affected by temperature
changes, although benzene
and ethylbenzene do show
evidence of drift tube
reactions at 75 through
150°C. The alcohols
(figure 4) show greater
variability with
temperature than the
aromatics, but the general
pattern of drift tube
reactions-clustering-
fragmentation is evident
in the ethanol and n-
propanol.
Members of the
chemical classes of
ketones, alcohols,
halocarbons, and esters
were examined by IMS/MS at
three temperatures to
confirm the data obtained
using the TIMS. At 50°C,
ion cluster formation
dominated mobility spectra
and the formation of dimer
and solvated ions was
evident. At elevated
temperatures (150° and
225°C), these ions were
not observed or present at
low levels. At 225°C,
fragmentation was
prevalent rendering
mobility spectra less
informative than those
from lower temperatures.
Compilation of the
TIMS and IMS/MS data leads
to several observations
cogent to the design of a
hand-held GC/IMS. First,
a portable GC/IMS will
require the use of a
heated IMS cell to obtain
distinctive and
informative mobility
spectra. If the
instrument is to be used
as a monitor for a wide
range of compounds, the
optimum temperature range
appears to be 150-200°C.
Second, the cell
temperature can be set to
optimize the response of
selected compound classes.
For example, the
halocarbons showed greater
spectral detail at higher
temperatures than did the
rest of the target
compounds. If the GC/IMS
is to be used as an in-
situ monitor for
halocarbons, the IMS cell
temperature could be set
at 225°C. Finally, the
variations in behaviors
with temperature might be
useful as an added
discriminator in GC/IMS
applications. For
158
-------
example, acetone and
isopropanol have similar
chromatographic retention
indices on many GC
columns. At lower IMS
cell temperatures,
isopropanol and acetone
both exhibit drift tube
equilibrium reactions, and
their spectra have many
similar features that
might confuse pattern
recognition software. At
175°, the spectrum of
isopropanol begins to show
distinct stable peaks,
while acetone still shows
drift tube reactions up to
ca. 225°. Thus, the
selection of cell
temperature could be used
to help discriminate
between these two
compounds.
Stability and
Reproducibility of IMS
Graseby Analytical
(United Kingdom),
manufactures a portable
IMS that is used by
western military
establishments for
detection of chemical
warfare agents. This IMS
(abbreviated as AVM for
airborne vapor monitor)
was coupled to a GC to
evaluate three parameters.
The GC used was a Hewlett-
Packard (Palo Alto, CA)
5730 equipped with a
Supelco (Supelco Park, PA)
SPB-5 30 meter capillary
column. Nitrogen was used
as the carrier gas, and
makeup gas was air. The
AVM operated in a water
chemistry mode. The
effect of concentration on
mobility behavior was
examined first to
determine if IMS mobility
patterns were
significantly influenced
by analyte concentration.
The stability and
reproducibility of the IMS
response over an extended
period was evaluated as
well. These findings were
then used to determine if
it would be practical to
use an essentially
unaltered AVM as the IMS
cell for a portable
prototype GC-IMS. These
findings were also used to
isolate and identify those
features of the AVM that
could be modified to
improve its performance as
a GC detector.
The effect of
concentration on mobility
was studied by injecting a
series of dilutions of
each of the target
compounds into the GC-AVM.
Review of the data
obtained led to several
unanticipated findings.
First, the AVM spectra of
many of the positive mode
compounds were very
similar. The data
obtained at 50°C using the
TIMS did not show these
similarities. As the
concentration of the
target analyte decreased,
the similarities between
the spectra generally
increased. Product ions
were often shoulders off
the reactant ion peak as
opposed to the separate
product peaks usually
observed using the TIMS.
Finally, a clear linear
relationship between peak
height and concentration
was not obtained over the
concentration range
studied. As a result, no
definitive statement
159
-------
regarding the effect of
concentration on mobility
was possible.
The reproducibility
of AVM was evaluated over
a 1 month period. Peak
heights, drift times, and
mobilities were monitored
for positive and negative
background spectra. The
spectra of known amounts
of positive and negative
mode standard compounds
(ethylbenzene and CCl4,
respectively) were also
examined. The results of
the study are shown in
Table 2. The variability
of intensity of the
reactant and product ions
showed drift over the 30
days, but reduced
mobilities varied
slightly. Any attempt at
quantitation using only
mobility spectra patterns
and relative abundances
would be difficult using
the AVM as configured.
Table 2 also shows that
the larger ions exhibit
more reproducible
behavior, as shown by the
decrease in relative
standard deviations with
decreases in mobility.
This fact was exploited in
neural network pattern
identification studies
which followed.
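The quantities reported in Table 2 (means and percent relative standard deviations of peak heights and reduced mobilities over the 30 day period) can be computed as in the short sketch below; the daily values shown are hypothetical and serve only to illustrate the calculation.

    import statistics

    def relative_std_dev_percent(values):
        """Percent relative standard deviation, as reported in Table 2."""
        return 100.0 * statistics.stdev(values) / statistics.mean(values)

    # Hypothetical daily positive-mode reactant ion peak heights (mV):
    daily_heights = [6800, 7150, 6500, 7300, 6900, 6820, 7010]
    print(round(statistics.mean(daily_heights)),
          round(relative_std_dev_percent(daily_heights), 1))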
Evaluation of Neural
Networks for
Identification of
Compounds
Neural networks have
in the last 10 years
become very popular for
pattern recognition in
many disciplines. A
network consists of a
series of interconnected
nodes (called neurons or
perceptrons) in which
mathematical weighting,
summation, and submission
to a function are
performed. The output of
each neuron is then sent
on to another neuron where
a similar operation takes
place. The network itself
can consist of a variable
number of neurons in a
layer, and variable
numbers of layers. The
network is trained by
submitting to it target
vectors consisting of
input and the target
output desired. In this
work, the factors included
in the training vector
were retention index and
mobility peak data. The
target output was the name
of the compound possessing
these GC-IMS
characteristics. The
network takes each
training vector and
adjusts the weights
applied in each neuron to
get the correct value
output. The next training
vector is submitted using
the previously obtained
weighting factors, and the
resultant error is used to
adjust the weights again.
This repetitive process
continues until the
weights are adjusted so
each training vector
submitted to the network
yields the correct output.
Training sets may consist
of hundreds of facts, and
the training process
itself may take hours.
Once the network is
trained, however, response
is rapid. For this
reason, neural networks
are well suited for use
in a portable instrument.
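The following Python sketch shows, in generic form, the kind of small feed-forward network and iterative weight adjustment described above; it is not the network used in this work, and the layer sizes, learning rate, and training vectors (scaled retention index plus mobility data) are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train(X, Y, hidden=6, lr=0.5, epochs=5000):
        """Tiny two-layer network trained by repeated weight adjustment.
        X: (n_samples, n_features) training vectors; Y: one-hot targets."""
        W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
        W2 = rng.normal(scale=0.5, size=(hidden, Y.shape[1]))
        for _ in range(epochs):
            H = sigmoid(X @ W1)             # hidden-layer outputs
            P = sigmoid(H @ W2)             # network outputs
            dP = (P - Y) * P * (1 - P)      # output error term
            dH = (dP @ W2.T) * H * (1 - H)  # back-propagated hidden error
            W2 -= lr * H.T @ dP             # adjust weights toward the targets
            W1 -= lr * X.T @ dH
        return W1, W2

    def predict(X, W1, W2):
        return sigmoid(sigmoid(X @ W1) @ W2)

    # Hypothetical, already-scaled vectors: [retention index, K0 peak 1, K0 peak 2]
    X = np.array([[0.61, 0.74, 0.52], [0.63, 0.71, 0.55], [0.35, 0.88, 0.40]])
    Y = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # two compound classes
    W1, W2 = train(X, Y)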
160
-------
For this study,
neural networks were used
with both the TIMS data
(150°C) and the AVM data.
The training vectors
consisted of retention
indices, reduced
mobilities, and in some
cases, the percent
relative abundance of the
mobility peaks. Aspects
of network structure,
training, and failures
were examined with both
data sets. The network
was unable to train on the
AVM data for the alcohols.
Many of the alcohol
spectra were very similar,
and the network was unable
to distinguish between
them even with the
retention index included.
The network was able to
train successfully using
the TIMS alcohol data.
The difficulty with the
AVM data may arise from
operating the cell at
ambient temperature and
from using a membrane in
the inlet.
A network was trained
using data from all the
positive mode compounds
obtained at 150°C.
Approximately 10% of the
initial test data was set
aside as a test set. The
network was trained using
the remaining 90% of the
original data set. The
trained network was able
to identify ca. 95% of the
test set. Failures were
associated with similar
compounds, i.e., within
compound classes. A
typical problem was
differentiating
ethylbenzene from the
xylenes. This problem was
successfully addressed by
using the retention index
of the test compound to
determine the correct
identification. For
example, if the network
yielded both ethylbenzene
and o-xylene as potential
identifications, the
retention index of the
test compound was compared
to the retention index of
the standard target
compounds. In all cases
of multiple
identifications, this
approach eliminated the
false positives. In no
instances were false
identifications seen
across compound classes,
i.e., never was a ketone
mistakenly identified as
an alcohol when the
retention index criterion
was used.
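A sketch of the retention-index tiebreak described above is given below; the window width and the reference retention indices are hypothetical values, and the function names are assumptions of this illustration rather than the authors' software.

    def resolve_identification(candidates, measured_ri, reference_ri, window=15.0):
        """When the network returns several candidate compounds, keep only
        those whose library retention index lies within `window` units of
        the measured retention index."""
        kept = [name for name in candidates
                if abs(reference_ri[name] - measured_ri) <= window]
        return kept[0] if len(kept) == 1 else kept

    # Hypothetical retention indices for two easily confused compounds:
    reference_ri = {"ethylbenzene": 855.0, "o-xylene": 887.0}
    print(resolve_identification(["ethylbenzene", "o-xylene"], 858.0, reference_ri))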
CONCLUSIONS
The findings
demonstrate that GC-IMS is
a viable field monitoring
technique, and holds
promise of evolving into a
genuinely portable and
powerful field screening
device. Elevated
temperature cells,
operating without
membranes, will be
required for such devices.
Commercial portable IMS
units such as the AVM
cannot, as currently
configured, be used as
detectors for GC-IMS.
While these devices work
well for specialized
applications, use of the
AVM as a generalized
detector is not possible
without modifications.
Neural networks can be
successfully used to
identify compounds when
161
-------
chromatographic data is
included in the training
process and mobility data
obtained at elevated
temperatures is used.
When the pattern
recognition process fails
to identify a compound,
retention index can be
used to obtain the correct
identification. Neural
networks are system
specific. The network
cannot be trained using data
obtained on a different GC-
IMS system. Aspects of
the chromatographic and
mobility behavior (via
temperature) can be
modified to suit specific
applications or can be set
to cover a broad range of
target compounds. The
small size and low power
requirements of GC-IMS
combined with the ability
to tune the instruments to
different applications
gives GC-IMS an advantage
over many other portable
techniques.
REFERENCES
1. G.A. Eiceman, Critical
Reviews in Analytical
Chemistry 1990, in press.
2. M.M. Metro and R.A.
Keller, J. Chrom. Sci.
1973, 11, 520.
3. H.H. Hill, Jr.,
Critical Reviews in
Analytical Chemistry 1990,
21,
4. C.S. Leasure, V.J.
Vandiver, G. Rico, and
G.A. Eiceman, Analytica
Chimica Acta 1985, 175,
135.
5. D.A. Blyth, "A Vapour
Monitor for Detection and
Contamination Control",
Proc. Internl. Symp.
Protection Against
Chemical Warfare Agents,
Stockholm, Sweden June 17-
19, 1983, pp. 65-69. b)
Commercial brochures from
Graseby Ionics, Ltd. and
Graseby Analytical, Ltd.,
Watford, Herts., UK.
ACKNOWLEDGEMENTS
Financial support to
NMSU by KRUG Life Sciences
for NASA through project
no. 50,016 is gratefully
acknowledged as is
financial and professional
assistance from Los Alamos
National Laboratory to
Suzanne Bell.
162
-------
Table I. Listing of analytes studied using GC-IMS.
Positive Mode
ALCOHOLS
Methanol
Ethanol
n-Propanol
i-Propanol
n-Butanol
i-Butanol
s-Butanol
t-Butanol
AROMATICS
Benzene
Toluene
Ethylbenzene
o-Xylene
m-Xylene
p-Xylene
Styrene
ESTERS
Methyl Methanoate
Methyl Ethanoate
Methyl Propanoate
Methyl Butanoate
Methyl Pentanoate
Ethyl Methanoate
Ethyl Ethanoate
KETONES
Acetone
2-Butanone
3-Methyl-2-Butanone
2-Pentanone
3-Pentanone
ALDEHYDES
Propanal
Butanal
3-Methylbutanal
Pentanal
Hexanal
Negative Mode
HALOCARBONS
Methylene Chloride
Chloroform
Carbon Tetrachloride
Trichloroethene
1,1,1-Trichloroethane
Tetrachloroethene
1,2-Dichloroethane
1,1,2,2-Tetrachloroethane
CHLORINATED AROMATICS
Chlorobenzene
o-Dichlorobenzene
2-Chlorotoluene
163
-------
Table 2
AVM Reproducibility Study

Description                     Mean*    Rel. Std. Dev. (%)
Reactant Ions
  Peak Height
    Positive Mode               6911     11.2
    Negative Mode               2109     22.2
  Reduced Mobility
    Positive Mode               1.87     2.01
    Negative Mode               1.60     2.18
Product Ions
  Peak Height
    Positive Mode**             935      8.65
    Positive Mode**             679      8.22
    Negative Mode               1687     8.77
  Reduced Mobility
    Positive Mode**             1.64     1.19
    Positive Mode**             1.39     0.98
    Negative Mode               2.22     0.99

*: Mobilities reported in cm2 V-1 s-1 and peak heights reported
in millivolts.
**: Ethylbenzene had 2 product ions.
164
-------
Figure 1. Schematic of ion mobility spectrometer: reaction region (carrier gas inlet, Ni63 source, repeller) separated by the shutter from the drift region (drift gas, detector, vent). [Schematic not reproduced.]
Figure 2. Behavior of selected ketones (acetone, 2-butanone, and 3-methyl-2-butanone) over the 9
temperatures studied; each panel plots K0 against temperature from 50 to 250 °C. [Plots not
reproduced.] Legend for Figures 2 and 3: P marks the extremes of the mobility of a peak that moved
over the course of the elution; X marks a distinct stable peak; the remaining symbols mark the
extremes of a drift tube reaction broadened peak and the approximate center of the peak associated
with a drift tube reaction.
165
-------
Figure 3. Behavior of selected aromatics (toluene, ethylbenzene, and styrene) over the 9
temperatures studied (50 to 250 °C). See Figure 2 for key. [Plots not reproduced.]
DISCUSSION
COLLEEN PETULLO: Did you use the same IMS in the IMS-MS study or
were several used?
SUZANNE BELL: The IMS-MS instrument was different from the heated
instrument we used at New Mexico State. That's simply because we didn't have
an IMS-MS available, so we simply used one that PCP was gracious to rent us
for a week.
COLLEEN PETULLO: But you only used one in the study at any given time,
right?
SUZANNE BELL: Right. The nine temperatures and 43 compounds were all
run on one instrument. The IMS-MS was on another instrument, and then the GC/
IMS was yet another instrument.
COLLEEN PETULLO: How long would it have taken you to train the neural
networks if you would have programmed it for the 43 compounds?
SUZANNE BELL: I would assume it would take eight to ten hours, at the worst.
The training time gets longer as you get more and more similar data. If we gave
it, for example, 25 examples of benzene spectra over a wide concentration range,
that would let the network generalize but you pay the price in training time. It
could take hours or weeks to train the computer.
COLLEEN PETULLO: You had mentioned that you didn't do this because of
time constraints.
SUZANNE BELL: Right.
COLLEEN PETULLO: How many did you ultimately program?
SUZANNE BELL: We ultimately trained 23 in the combined data set. This was
about half.
166
-------
Remote and In Situ Sensing of Hazardous Materials
by Infrared Laser Absorption, Ion Mobility
Spectrometry and Fluorescence
Dr. Peter Richter
The Institute of Physics, Technical University of Budapest
1111 Budapest, Budafoki út 8, HUNGARY
ABSTRACT
Three instruments will be described that were
developed at the Technical University of Budapest
for the sensing of hazardous materials. A remote
sensing infrared differential absorption lidar
based on the coherent detection of backscattered
CO2 laser light has been built. The lidar can be
used for the detection of a wide range of molecular
pollutants in the atmosphere from ranges of a few
kilometers along a path to a topographic target.
Results of field measurements to detect molecular
pollutant clouds from km ranges will be presented.
The experiments were carried out on NH3 and
DDVP, but detection of more than 80 air-polluting
components such as Freons, SO2, etc. is also
potentially possible. In addition, an ion mobility
spectrometer will be discussed which has been
developed for in situ measurements of impurities
in air. The impurities are identified with the help
of a dynamic dual-grid cell. Upon evaluation of
the frequency-ion current spectrum, the detection
of several impurities (e.g., NH3, DDVP, HF, etc.) was
demonstrated. The instrument can operate either
in a stand alone or a remote controlled mode and
can be connected to a central computer. A
fluorescence detector for the detection of surface
contamination will also be discussed. Based on
chemical indicator reactions, UV excitation and
fluorescence detection via fiber optics, a mobile
instrument for detection of pesticide
contamination and control of decontamination has
been built. Reliable detection of concentrations
of 0.1 mg/cm2 for DDVP was achieved with a
measurement time of less than 5 sec. Applications
of the instruments and methods will also be
discussed.
INTRODUCTION
Sensing hazardous materials is a task that should
be approached using techniques that are
appropriate not only for the materials to be
detected but also for the measurements required.
A variety of sensing techniques are available to
accomplish this end. In this presentation, three
different methods and instruments that have been
developed at the Technical University of Budapest
will be described. As will be noted, these
instruments are applicable for different specific
purposes. The sensing techniques that will be
discussed are as follows:
A remote sensing lidar to measure pollutant
clouds in the atmosphere from km ranges;
An ion mobility spectrometer for in situ
measurement of air samples; and
An UV fluorescence detector to measure
surface contamination without direct
surface contact.
REMOTE SENSING LIDAR
Lidars are laser radars sensing backscattered laser
light from long ranges making use of the special
characteristics of laser light. Differential
absorption lidars measure light intensities at two
wavelengths corresponding to absorption maxima
and minima of the absorbing atmospheric
component along the beam path. Due to their
broad tunability range in the infrared region
around the 10 µm wavelength where several
molecular pollutants have characteristic
absorption spectra, systems based on CO2 laser
sources are of major importance [1]. In the group
of more than 80 detectable pollutants, some of the
more important ones are: NH3, C2H4, O3, SO2, SF6,
C2H3Cl, as well as pesticides such as DDVP (2,2-
dichlorovinyl dimethyl phosphate).
167
-------
Two major problems associated with this technique
were eliminating the disturbances due to the open
path and keeping the system compact and
transportable. These problems were solved by the
development of the system, the optical part of
which is shown schematically in Fig. 1. Electronic
separation of the signals at the two wavelengths
allow the measurements to be simultaneous and
coincident, thus avoiding, for example, problems
due to turbulence and differential backscattering.
Use of the internal amplification of the
backscattered light by the lasers and of heterodyne
detection makes it possible to use small CW lasers and
a transmit-receive telescope of only 15 cm in
diameter. Topographic backscattering makes
long path absorption measurement possible. The
system used in the field tests is shown in Fig. 2 and
the results of a field test using stationary
topographic backscattering from 500m range with
an artificial cloud of NH3 is shown in Fig. 3. It is
the time dependence of the differential absorption
signal

E(t) = ln [ I(λ2,t) / I(λ1,t) ]

that is displayed, where I(λ1,t) and I(λ2,t) are the
normalized detected light intensities at the two
wavelengths at time t. The column content along
the beam path, cL (molecular concentration c times
the path length L), is given by

cL = E / Δσ

where Δσ is the absorption cross section
difference of the molecule for the two
wavelengths.
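A short worked example of the column-content relation above is given below; the intensity ratio and the absorption cross-section difference are assumed numbers chosen only to show the arithmetic, and the units of cL follow whatever units are used for c and Δσ.

    import math

    def column_content(I_lambda1, I_lambda2, delta_sigma):
        """Differential absorption signal E = ln(I(lambda2)/I(lambda1)) and
        column content cL = E / delta_sigma."""
        E = math.log(I_lambda2 / I_lambda1)
        return E, E / delta_sigma

    # Assumed example: an 18% differential absorption and a cross-section
    # difference of 3.0e-19 cm^2 give cL in molecules/cm^2.
    E, cL = column_content(I_lambda1=0.82, I_lambda2=1.00, delta_sigma=3.0e-19)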
The temporal variations of E in Fig. 3 are due to
the concentration changes in the cloud blown
across the beam path. The time resolution is 1 sec.
Due to the atmospheric window around λ = 10 µm,
the reference range of the system is about 3 km
(material dependent) and is not significantly
influenced by the visibility conditions.
The measurement wavelengths and sensitivities
for some specific molecules are as follows:

            Wavelengths (µm)      (cL)min
                                  (ppb)(km)    (mg/m3)(km)
NH3         10.33   10.32         8            8.6 x 10-3
C2H4        10.53   10.59         8            9 x 10-3
O3          9.49    9.59          22           4.2 x 10-2
SO2         9.02    9.02          710          1.7 x 10-2
SF6         10.51   10.50         1.5          9 x 10-3
C2H3Cl      10.61   10.50         34           8.4 x 10-2
This system can be used in a stationary mode when,
with a scanning attachment, it can monitor either
large area (~ 30 km2) pollution distribution
(immission), or emission from certain selected
sources. When coverage of a larger area is
necessary, it can be used from a flying platform as
well.
ION MOBILITY SPECTROMETRY
A simple and cost-effective technique for the in
situ detection of air pollutants is through the use
of ion mobility spectrometry. Here the air sample
is ionized by a radioactive source in a chamber and
the ions produced are moved by the use of an
electric field. The arrival time and current of the
ions characterize the products and their
concentration. However, as the predominant
charge carriers in the chamber are ion clusters
consisting of fragments of water, nitrogen, as well
as the molecule to be detected (e.g., NH3, HF,
CH3COCH3, C2H5OC2H5, HCN, different pesticides), the
selectivity of the system requires the application
of sophisticated hardware and software solutions
[2].
The structure of the chamber is shown in Fig. 4.
Ambient air is drawn in across a semi-permeable
membrane allowing a portion of its component
gases and vapors to be introduced to relatively dry
air in the ionizing region. An alternating voltage
with frequencies sweeping from 0-30 kHz is
connected to a dual grid of transversal Venetian
blind type in front of the collecting electrode.
Recombination on the grid is dependent on the
mobility of the ions; therefore, evaluation of the
ion current as a function of grid frequency
improves the selectivity of the system. In Fig. 5,
ion currents are shown as a function of grid
frequency for clean air and air with NH3.
Automatic evaluation of these curves is carried
out by a microprocessor taking derivatives of the
ion current curve at five characteristic
frequencies that correspond to f = 0 Hz, f(Imin),
f(d2I/df2 = 0), f(dI/df = 0), and f = fmax = 30 kHz.
With the help of an algorithm, these values are
compared with sets of stored data that had been
determined empirically.
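The kind of curve evaluation described above can be sketched as follows; this is only an illustration of extracting a few characteristic features of the ion current vs. grid frequency curve and comparing them with stored data, not the instrument's actual algorithm, and the feature names and tolerance are assumptions.

    import numpy as np

    def characteristic_features(freqs_hz, currents_pA):
        """Simple features of the ion-current vs. grid-frequency curve
        (endpoints, current minimum, steepest slope) for comparison with
        empirically stored reference sets."""
        f = np.asarray(freqs_hz, dtype=float)
        i = np.asarray(currents_pA, dtype=float)
        slope = np.gradient(i, f)
        return {
            "I_at_f0": i[0],                      # f = 0 Hz end of the sweep
            "I_at_fmax": i[-1],                   # f = 30 kHz end of the sweep
            "f_of_Imin": f[int(np.argmin(i))],    # frequency of the current minimum
            "f_of_max_slope": f[int(np.argmax(np.abs(slope)))],
        }

    def matches(features, reference, rel_tol=0.2):
        """Compare extracted features against one stored, empirically
        determined signature."""
        return all(abs(features[k] - reference[k]) <= rel_tol * abs(reference[k]) + 1e-9
                   for k in reference)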
Many materials can be monitored in the low ppm
region. The system shown in Fig. 6 can be used in
a network through an RS-232 line; network use is
also supported by its low mass and power
consumption (2 kg, 1 W).
SURFACE CONTAMINATION DETECTOR
Determining the contamination of surfaces of
ground areas as well as equipment and personnel
and the verification of the effectiveness of
decontamination from hazardous materials are
important considerations in assessing the extent of
168
-------
residual chemical activity such as in the
application of pesticides or in setting clean up
goals for site remediation.
With the technique described here, the
monitoring is based on the fluorescence analysis
of chemical compounds produced in a reaction
where a non-fluorescent compound, indole, in an
alkaline peroxidase solution is oxidized by the
agent to be detected to give highly fluorescent
indoxyl [3].
To detect trace impurities, fluorescence techniques
show an inherent advantage compared to methods
based on absorption. Namely, while the extinction
shows a logarithmic dependence on light intensity,
given by

E = α c L = ln [ I0 / (I0 - Ia) ],

the fluorescent light intensity F exhibits
an approximately linear relationship given by

F = QF Ia = QF I0 (1 - e^(-α c L)) ≈ QF I0 α c L,

where I0 is the incident light intensity, Ia the
absorbed light intensity, and QF is the quantum
efficiency of the fluorescence. Therefore, with
fluorescence the sensitivity can be improved by
increasing the exciting light intensity I0. Also
surface contamination often appears in thin,
sometimes discontinuous layers or droplets where
the additional selectivity provided by the
wavelength discrimination of the fluorescent
light from the backscattered light can be
exploited.
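The near-linear dependence of the fluorescence signal on the exciting intensity for a thin, weakly absorbing layer can be seen numerically in the sketch below; the quantum efficiency, absorption coefficient, concentration, and layer thickness are assumed illustrative values, not measured ones.

    import math

    def fluorescence_signal(Q_F, I0, alpha, c, L):
        """F = Q_F * Ia with Ia = I0 * (1 - exp(-alpha*c*L)); for small
        alpha*c*L this is approximately Q_F * I0 * alpha * c * L."""
        return Q_F * I0 * (1.0 - math.exp(-alpha * c * L))

    # Assumed thin-layer example: doubling I0 essentially doubles F.
    F1 = fluorescence_signal(Q_F=0.1, I0=1.0, alpha=1.0e3, c=1.0e-6, L=0.01)
    F2 = fluorescence_signal(Q_F=0.1, I0=2.0, alpha=1.0e3, c=1.0e-6, L=0.01)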
In the chemical reaction described above, the
material to be detected plays the role of the
catalyst; therefore, the quantity of the fluorescent
material can be controlled to a certain extent by
the amount of reagent added.
The advantages of this method compared with
those requiring probe sampling are, that this
method operates without physical contact, is not
influenced by the surface type, and is highly
selective. The application of this method consists
of the following steps:
• spraying the contaminated area,
• illuminating it with UV light, and
• detection of the frequency shifted
fluorescent light and evaluating
the detector signal.
This system (shown in Fig. 7) consists of the
following three units:
• a spray unit to store and pump the
chemical reagents;
• an optoelectronic unit housing the
Mercury vapor light source, the
photomultiplier detector, the
spectral filters matched to the
compound to be detected and the
electronics using lock in detection;
and
• a sensor head unit (containing
optical elements and controls)
connected with 3 m long hoses,
cables and fiber optic bundles to
the other units.
Experiments carried out with DDVP and a reagent
containing NaBO3 and indole in water solution
showed that the response time was less than 5 sec
after spraying and the detection limit was at 100
µg/cm2. Time duration of the fluorescence can be
adjusted by proper selection of concentration of
the reagents. This system can be used either in a
stationary mode or on a moving vehicle to monitor
large ground surfaces.
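The lock-in detection mentioned for the optoelectronic unit can be illustrated with a brief sketch; the chopping frequency, noise level, and signal amplitude below are illustrative assumptions, not values taken from the instrument.

# Minimal sketch of the lock-in detection principle: the excitation is
# chopped at a reference frequency and the photomultiplier signal is
# multiplied by the reference and averaged, so only the fluorescence
# component synchronous with the chopping survives the background.
import numpy as np

fs, f_ref, T = 50_000.0, 1_000.0, 2.0               # sample rate, chop frequency, duration (s)
t = np.arange(0.0, T, 1.0 / fs)
reference = np.sign(np.sin(2 * np.pi * f_ref * t))  # chopper reference (+/-1)

fluorescence_amplitude = 0.02                        # weak signal of interest
signal = fluorescence_amplitude * 0.5 * (1 + reference)  # chopped fluorescence
noise = 0.5 * np.random.randn(t.size)                # broadband background / stray light
measured = signal + noise

demodulated = measured * reference                   # mix down to DC
estimate = 2.0 * demodulated.mean()                  # crude low-pass: average over T
print(f"recovered amplitude ~ {estimate:.3f} (true {fluorescence_amplitude})")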
REFERENCES
1. Richter P., Proc. SPIE 883, 162 (1988)
2. Brokenshire J., Pay N., International
Laboratory, Oct. 1989.
3. Diehl W., Proc. 2nd Int. Symp. Protection
Chemical Agents, Stockholm, Sweden, 173
(1986)
169
-------
Figure 1. Arrangement of the differential absorption lidar (lasers, chopper,
beam splitter, attenuator, telescope, lens, and detector).
Figure 2. The lidar system.
170
-------
Figure 3. Time evolution of differential absorption signal
for an artificial NH3 cloud.
Figure 4. Structure of the ion mobility spectrometer chamber (metal housing,
insulator rings, source electrode, dual grid, collector electrode).
171
-------
Figure 5. Dependence of ion current I (pA) on grid frequency (100 Hz to
30 kHz) for clean air and air with 0.2 ppm NH3.
Figure 6. The ion mobility
spectrometer sensor.
Figure 7. The surface contamination
fluorescence detector.
172
-------
THE DEPARTMENT OF ENERGY'S
ROBOTICS TECHNOLOGY DEVELOPMENT
PROGRAM FOR ENVIRONMENTAL RESTORATION
AND WASTE MANAGEMENT
A. C. Heywood
Science Applications International Corporation
Pleasanton, California 94566
S. A. Meacham
Oak Ridge National Laboratory
Oak Ridge, Tennessee 37831-6305
P. J. Eicker
Sandia National Laboratories
Albuquerque, New Mexico 87185
In August 1989, the new Office
of Environmental Restoration and
Waste Management (ER&WM) in the
Department of Energy (DOE)
published an ER&WM Five-Year Plan
which established DOE's agenda and
commitment to correct existing
environmental problems, ensure
compliance with applicable Federal,
State, and local requirements, and
effectively execute DOE's waste
management programs. The plan
includes a section covering the
applied research and development
needed to support the five-year
plan. In November 1989, DOE issued
a draft Applied Research,
Development, Demonstration,
Testing, and Evaluation (RDDT&E)
Plan for ER&WM which expands on the
applied research and development
section of the five-year plan. The
RDDT&E plan provides guidance to
the new ER&WM Office of Technology
Development (OTD) for its mission:
"to manage and direct programs and
activities to establish and
maintain an aggressive national
program for applied research and
development to resolve major
technical issues and rapidly
advance beyond current technologies
for environmental restoration and
waste management operations." The
development and application of
robotics technology for the
resolution of identified problem
areas at DOE sites is a major
element of the RDDT&E program plan.
The OTD has established a
Robotics Technology Development
Program (RTDP) to integrate
robotics RDDT&E activities and to
provide needs-oriented, timely, and
economical robotics technology to
support environmental and waste
operations activities at DOE sites.
DOE laboratories, private industry,
and universities have existing
robotics technology that provides a
strong foundation for initiating an
aggressive RDDT&E program to
support ongoing and emerging ER&WM
functions.
A major objective of the ER&WM
Program's five-year RTDP is the
application of robotic technology
in the resolution of DOE's
identified problem areas. The
thrust of the application is to
reduce exposure of personnel to
hazardous substances and radiation
while increasing productivity. An
additional goal is to integrate all
such activities to obtain the most
economical approach to resolving
site-related waste problems using
robotic technology and to
demonstrate robotic technologies
that can be applied to major site-
specific waste clean-up efforts.
The Robotics Five-Year Program
Plan provides the focus and
direction for the near-term (less
than five years) and guidance for
the long term (five to twenty
173
-------
years) R&D efforts associated with
resolution of site-specific waste
problems. The goals include: (1)
supporting the ER&WM Program and
being responsive to the ER&WM Five-
Year Plan, (2) focusing near-term
robotic R&D efforts to be
responsive to application
requirements, (3) ensuring that
robotic applications are responsive
to site requirements and schedule
needs, (4) integrating all robotic
activities to obtain the most
economical approach to resolving
site problems while reducing
personnel exposure, and (5)
providing guidance for the Office
of Energy Research long-range (>5
year) robotics research program.
Program Focus and Objectives
The Program currently
addresses a number of important
issues facing the ER&WM activities
at the DOE sites. Among the areas
included are:
• underground storage tanks
(material characterization
and remedial actions),
• buried waste retrieval,
• waste minimization,
• contaminant analysis
automation,
• decontamination and
decommissioning,
• basic and applied research
and development required to
support the above areas.
The objectives of the Program
are to develop, test, evaluate, and
make available robotic technologies
that:
• allow workers in waste
operations and remediation
to be removed from hazards,
• increase the speed and
productivity with which
ER&WM operations can be
carried out when compared
to alternative methods and
technologies,
• increase the safety of
ER&WM operations, and
• provide robotic and remote
systems technologies that
have lower life cycle costs
than other methods and
technologies.
In addition to developing
robotics technology, the program
promotes the availability of the
technology and supports its
deployment and use in ER&WM
activities at DOE sites. The
program further serves as a bridge
between the ER&WM robotics RDDT&E
and the basic robotics research
carried out by DOE's Office of
Energy Research, providing guidance
for the basic research program and
integrating its results in applied
research and advanced development
projects.
Program Organization
The Program has been structured
as shown in Fig. 1. Since the
Program is an element of the DOE
ER&WM Applied RDDT&E program, it is
administered by the ER&WM OTD
through the Robotics Program
Manager (RPM).
To ensure that the Program
responds to the needs of the DOE
complex, RPM is assisted by an
Operations Review Group (ORG).
This group is familiar with the
ER&WM issues facing the DOE
complex. RPM also receives
assistance from a Technical Review
Group (TRG) of robotics and
automation experts from the DOE
laboratories and sites,
universities, industry, and other
federal agencies. A Program and
Budget subcommittee of the TRG also
assists the RPM.
The Robotics Applications
Coordinators (RAC) develop robotics
program plans focused on each of
the major ER&WM issues.
The RAC is responsible for
coordinating the flow of technical
information relevant to the
applications area among those
groups having an interest in the
area. RAC is also responsible for
keeping the other groups in the
relevant applications areas
apprised of the results of RTDP
174
-------
Figure 1. RTDP Organization (applications coordinators for underground
storage tanks (west and east), buried waste, contaminant analysis
automation, waste minimization, decontamination and decommissioning,
and waste facilities operations).
175
-------
funded activities. The
coordinator, with the approval of
the RPM, also convenes occasional
conferences on the applications
area.
The coordinators function as
advocates for the technologies
applicable to their particular
problem area. To facilitate the
application of the best technology
with a high probability of success
to the particular problem area, the
coordinator actively solicits
proposals from the entire robotics
and automation community for
routing to the RPM. A thorough
familiarity with the ER&WM problems
and issues is required of the
coordinators. This familiarity
will be maintained through site
visits, personal contacts, and
symposia where appropriate.
Applied research is funded
through the applications center
that has identified the
technological need. This helps
insure that the applied research is
responsive to the needs of the
group sponsoring the research.
Coordinators who put together a
team approach with industry, labs,
universities, or other agencies are
most favorably reviewed.
The R&D Coordinator (RDC)
reports to the RPM and is
responsible for coordinating the
flow of technical information other
than applied research. The RDC is
familiar with all aspects of the
RTDP and is able to identify areas
of future need in robotics and
ancillary systems which are not
being addressed in the applied R&D
areas. The RDC is responsible for
coordinating with universities,
industry, DOE laboratories, and
other federal agencies to bring
proposals for needed advanced
technology to the TRG and RPM.
Program Planning
A comprehensive technical
program plan has been developed
during the first year of funding.
This initial plan development is a
significant effort since the plan
is based on the needs of the
environmental restoration and waste
management operations as identified
by the eight DOE field offices and
the sites they administer. A major
portion of the initial plan
development is assessing and
understanding those needs. The
technical program plan covers a
five-year period with primary
emphasis on the one-year plan and
secondary emphasis on the two- and
three-year projections. The plan
covers technical work, budget
requirements, and schedules and is
tied closely to the requirements
and schedules of individual site
environmental restoration and waste
management projects.
FY 1990 Accomplishments: The RTDP
accomplished a number of
significant activities in FY 1990,
which facilitated a fast start for
robotics technology development and
established a sound basis for
program activities over the next
five years.
Program Planning: Five priority
DOE sites were visited in March
1990 to identify needs for robotics
technology in environmental
restoration and waste management
operations. This 5-Year Program
Plan for the RTDP was prepared on
the basis of the needs identified
at the DOE sites, and provides a
needs-based road map for detailed
annual plans for robotics
technology development.
Initiating Interactions with the
Robotics Technology Community: In
July 1990, a forum was held
announcing the robotics program.
Over 60 organizations (industrial,
university and federal laboratory)
made presentations on their
robotics capabilities.
Technology Demonstrations: To
stimulate early interactions with
the ER&WM activities at DOE sites,
as well as with the robotics
community, the RTDP sponsored four
technology demonstrations related
to ER&WM needs. These
demonstrations integrate commercial
technology with robotics technology
developed by DOE in support of
areas such as nuclear reactor
maintenance and the civilian
reactor waste program.
176
-------
Rapid, swing-free movement of
simulated waste containers was
demonstrated using control
algorithms developed at Sandia
National Laboratories (SNL) with
technology in computer control of
large gantry bridges at Oak Ridge
National Laboratory (ORNL). This
technology decreases the time for
materials movement and increases
safety by eliminating the potential
for collisions of swinging
payloads.
A scaled waste tank
remediation demonstration at SNL
integrated sensors and advanced
computer control into a commercial
gantry robot. The extensive use of
models for robot system control
allowed graphical programming of
the system complete with operator-
supervised path planning to
increase speed of repetitive waste
removal tasks.
A teleoperated vehicle with
advanced sensing technologies for
mapping of buried waste sites was
demonstrated at a small buried
waste site at ORNL. Navigation
technologies were coupled with the
sensing information (from
radiation, gas, and subsurface
large object sensors) to
automatically map subsurface
materials.
A team consisting of LLNL,
SNL, LANL, SAIC, and IBM demon-
strated a robotic system for
loading powder into a furnace in a
Pu production line, and then
transferring the product to the
next operation in a mock-up
facility. This robotic system
eliminates the need for operator
hands-on transfer operations and
reduces the generation of operator-
associated waste materials such as
wipes, protective clothing, gloves,
and transfer bags.
SITE VISITS/NEEDS
In March 1990 RTDP planning
teams visited five DOE sites.
Additional site visits will be
conducted in the future to expand
the planning basis.
The purposes of these visits
were (1) to understand the needs
and requirements of the highest
priority environmental restoration
projects and waste management
operations at the sites, (2) to
obtain information for use in
planning the program, and (3) to
describe the RTDP to personnel at
the site and discuss development of
the program plan. Emphasis was
placed on both technical and
schedular (i.e., compliance dates)
needs and requirements.
The results of these visits
are documented in a Site Needs and
Requirements Document. This
document summarizes the findings at
each site and highlights priority
needs.
APPROACH TO NEEDS DIRECTED
TECHNOLOGY DEVELOPMENT
The visits to five DOE sites
led to selection of six areas of
need for robotics technology to
support ER&WM activities. These
need areas are:
• Remediation of waste
storage tanks,
• Retrieval of buried wastes,
• Automation of contaminant
analyses,
• Waste minimization,
• Decontamination and
decommissioning,
• Waste Facilities Operations
Plans for development and
application of robotics technology
are based on the need areas listed
above. In addition, the plans
reflect other aspects of needs at
the sites such as regulatory
compliance dates, planned remedial
actions, and established schedules.
The fundamental approach to
developing robotics technology to
meet these needs couples available
and emerging technology with
advanced technology. Near-term
needs can be met by integrating
177
-------
available commercial technologies
with emerging technologies
available in R&D laboratories. At
the same time, development of
advanced technology will proceed to
meet intermediate and long-term
needs. In addition, attention will
be given to development of cross-
cutting technology which will be
applicable to multiple need areas.
Technology development will be
keyed to integrated demonstrations
at the DOE sites to further couple
the robotics technology development
to the site needs and to the
deployment of remedial actions
technology.
The DOE sites are evaluating
alternative approaches to remedial
actions. The robotics technology
developed for each application must
meet the needs, and match the
approach selected by each site.
The plans described for robotics
technology development are based on
reference concepts, selected as
reasonable and likely concepts from
the alternatives, which form the
basis for identifying needed
technology development, estimating
schedules, and estimating budgets.
The robotics technology
development plans are also keyed to
demonstrations of technology at the
DOE sites. Wherever possible,
demonstration of the robotics
technology is integrated with
larger integrated remediation
technology demonstrations.
CROSS-CUTTING AND ADVANCED
TECHNOLOGY DEVELOPMENT
Near-term applications of
robotics to ER&WM activities are
necessarily focused on existing
technologies that can be readily
adapted to the specific cleanup
tasks and environments. As the DOE
cleanup activities progress and
evolve, a larger body of robotic
technology will be needed for
application to ER&WM projects. A
technology development program
targeted at relevant cross-cutting
and advanced technology development
will make possible a more rapid
insertion of beneficial technology
into these activities. This
technology development will be
focused on high payback projects
that offer safer, faster, or
cheaper approaches to cleanup
goals.
An advanced technology
development program including a
long term research and development
component is a means to effectively
incorporate the expertise of the
universities, national laboratories
and other basic research
organizations into the nation's
cleanup projects. Also, this
offers educational training
opportunities consistent with the
DOE emphasis on developing the next
generation technical work force.
Needs identified at DOE sites
indicate that cross-cutting and/or
advanced technology development in
the areas listed below would be
highly beneficial to application of
robotics in ER&WM activities.
Mechanical Subsystems
     Manipulators
     End-Effectors
     Mobile Systems
Control Subsystems
     Computing, Graphics and Modeling
     Man-Machine Interfaces
     Communications
     Telerobotic Operations
     Motion Planning and Control
Sensor Subsystems
     Environmental Sensors
     Servo Mechanical Control Sensors
     Imaging & Vision Systems
     Multi-Sensor Integration
Cross-cutting and advanced
technology developments need to
focus on near-term, mid-term, and
long-term implementations. By
investing in a sustained long-term
development program, emphasizing a
balanced evolution in technology
development with implementations
continually encompassing technology
advances, steady progress may be
assured toward the technology
required for the more complicated
or demanding tasks of the decades
to come. Development of advanced
robotics technology that is
commonly applicable to many
environmental restoration, waste
178
-------
management, and waste minimization
activities can lead to higher
efficiency, increased reliability,
and reduced life cycle costs in
these operations.
Participants in this program
are the following, whom we wish to
thank for their contributions.
SAIC - Science Applications
International Corporation
LANL - Los Alamos National
Laboratory
SNL - Sandia National
Laboratories
LLNL - Lawrence Livermore
National Laboratory
ORNL - Oak Ridge National
Laboratory
Y-12 - Oak Ridge Y-12 Plant
RF - Rocky Flats Plant/EG&G
Rocky Flats
SR - Westinghouse Savannah
River Company
WHC - Westinghouse Hanford
Company
PNL - Pacific Northwest
Laboratory
EG&G - EG&G Idaho
INEL - Idaho National
Engineering Laboratory
WMC - Westinghouse Materials
Company of Ohio
WINCO - Westinghouse Idaho
Nuclear Company, Inc.
Fernald Feed Materials Production
Center
179
-------
Field Robots for
Waste Characterization and Remediation
William L. Whittaker
Field Robotics Center
Carnegie Mellon University
Pittsburgh, PA 15213
(412)268-6559
David M. Pahnos
Field Robotics Center
Carnegie Mellon University
Pittsburgh, PA 15213
(412) 268-7084
Abstract
Field operations for waste characterization and remediation
offer real opportunities and compelling motivations for
advanced robot work systems. The application of field
robotic technology can enhance the quality of data collected
at waste sites through standardization, verification, and
repeatability of methodology. It can increase the coherence
of data by enabling dense data collection, advanced
correlational databasing, and the collection of previously
unavailable data, such as position tagged data or
interpretable 3D subsurface images. Field robots can operate
where humans are precluded, in pipes, tanks, abandoned
mines, and sea and river bottoms or where humans perform
inefficiently in protective clothing and breathing apparatus.
Thus, field robots can greatly increase the knowledge base
gained during site investigations; this knowledge will
expand remediation options performed by humans and open
the way for the use of field robots in remediation activities.
Moreover, the development and use of field robotic
technologies in the service of national efforts to characterize
and remediate nuclear and hazardous waste will eventually
have profound effects on large commercial industries and
open new world markets for robotic technologies.
Introduction
Hazard has been the historical justification for the use of
field robots; operations surrounding accidents at Chernobyl
"id Three Mile Island have world impact, preclude humans
and call field robots to action. Less reactive than these crises
are the innumerable nuclear, deep sea, military, and space
operations that are inhospitable to humans and are
significant both strategically and fiscally. The ultimate
opportunities, however, for field automation are those
immense and inefficient industries like construction, mining,
timbering, hazardous waste management, subsea and outer
space that dwarf the economics of manufacturing.
Characterization and cleanup of the nation's weaponry
complex alone is now estimated at 100 billion dollars: efforts
of this magnitude require new technologies. As a growing
technology, the potential of field robots to apply sensing and
analytical capabilities and to perform precise, repetitive, and
dangerous tasks is virtually untapped in the world.
Field robots work in environments as they are encountered,
not idealized or altered to accommodate automation. While
an assembly process can be structured into a limited number
of predictable actions, a robot working in an unstructured
environment encounters new situations that it has not been
explicitly programmed to deal with.
Field robots are thus challenged to perform goal-driven tasks
that defy pre-planning in unpredictable and changing
environments. In order to explore, work, and safeguard
themselves and the environment, field robots must sense
complex phenomena in a dynamic world. As these robots
move towards autonomy, they must plan and implement
their work tasks.
181
-------
Robots are quickly becoming mobile in natural terrains,
perceptive, self-navigating, and competent in the field.
Within the next few years, a number of robotic performance
niches in waste characterization and remediation will be
exploited where humans are precluded from the scene or
where robots offer superior capabilities. Areas of
opportunity include reconnaissance, surveying, subsurface
imaging, soil gas sampling, perimeter monitoring, fast
analytical screening, accident response, remote sampling
and manipulation, remote coring, and excavation.
Automated Characterization
Perhaps the most frustrating aspect of waste characterization
is the paucity of reliable data that scientists and engineers
have to work with following an investigation. Field sampling
is expensive, time consuming, and labor intensive. Although
methodologies are standardized, human judgement and
sometimes intuition are broadly applied when deciding
where to sample or survey and how to interpret data once
collected. This is particularly true for the selection of
boreholes, the interpretation of geophysical data, and the
selection of soil and soil gas sampling points. Analytical
instruments and techniques have improved greatly over the
past several years, but the results are only as good as the
choice of sampling points, which often are too few and
chosen poorly.
Field robots can deploy screening instruments far more
rigorously, sampling hundreds or thousands of times per
acre, achieving total site coverage. They can create a three-
dimensional data base by analyzing air, soil gas, and the
subsurface; they can screen organics on the fly and create 3D
images of buried waste from radar data, sampling at
centimeter resolution. Field robots can survey a site and
lay out a precise grid; take samples, position tag, package,
and label them; position tag instrument data, store the data in
a single spatially correlated data base, and present multiple
types of data to users in a straightforward visual format.
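As one illustration of what position-tagged, spatially correlated data might look like in practice, the following sketch (class and field names are hypothetical, not taken from any of the systems described here) stores every measurement with site coordinates so that all data types collected near a point can be retrieved together.

# A minimal sketch of a position-tagged measurement record and a single
# spatially correlated store that any instrument on the robot could write to.
from dataclasses import dataclass

@dataclass
class TaggedMeasurement:
    x_m: float          # site-grid easting (m)
    y_m: float          # site-grid northing (m)
    z_m: float          # depth below surface (m); 0.0 for surface readings
    instrument: str     # e.g. "GPR", "soil-gas survey", "gamma survey"
    value: float
    units: str

class SiteDatabase:
    # All instruments write here; a query returns every kind of data
    # collected near a point, so different data types stay correlated.
    def __init__(self):
        self.records = []

    def add(self, rec):
        self.records.append(rec)

    def near(self, x, y, radius):
        return [r for r in self.records
                if (r.x_m - x) ** 2 + (r.y_m - y) ** 2 <= radius ** 2]

db = SiteDatabase()
db.add(TaggedMeasurement(12.00, 4.50, 0.0, "soil-gas survey", 35.0, "ppm"))
db.add(TaggedMeasurement(12.02, 4.50, 0.6, "GPR reflection", 0.8, "rel. amplitude"))
print(len(db.near(12.0, 4.5, 0.1)), "records within 10 cm of (12.0, 4.5)")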
Quality of Data
Capable field robots can greatly increase the quality of data
from a waste site by obtaining verifiable data with a high
degree of repeatability, and they can advance the process of
data collection to a higher standard than is possible using
present methodologies. Ultimately, field robots can also help
ensure that the right samples are sent to analytical
laboratories.
Standardized Data
Most waste sites have long lives; the time from preliminary
assessment to the remedial action can stretch into years, and
monitoring can take place for decades after. Throughout the
life of a site, scores of scientists, engineers, technicians, and
workmen perform tasks, and as a site transitions from
assessment to investigation to remediation, the cast of actors
changes.
Although methodologies are standardized, no two
investigations at a site are performed in exactly the same
way; indeed, no two investigators can be relied upon to bring
the same experience, judgement, and skill to a site or to
collect data in exactly the same way, thus making it difficult
to achieve standardization.
Moreover, because waste sites vary greatly in topography,
soil types, geology, and the nature of contaminants, it is
difficult to achieve standardization across a range of sites,
partly because humans perceive the sites differently.
The use of field robots to collect and screen data can
significantly improve standardization. Robots can be relied
upon to treat data in the same way in each investigation.
Robots eliminate human variables and collect far greater
quantities of data. The data thus become more reliable, and
data from different sites can be compared legitimately.
Ultimately, a single, complete data base can follow a site for
its entire life. Created during the preliminary assessment, a
three-dimensional computer data base can be an interactive
repository in which each new set of data is entered.
Verifiable and Repeatable Data
Field robots can verify data taken previously at a site and
repeat the collection and screening process precisely.
Because robots process and store data at the time of
collection, the chain of custody can be maintained more
reliably and securely. Repeatable outcomes translate into
defensible conclusions and reduce uncertainty when
182
-------
planning remedial actions and issuing a record of decision.
Field robots can become an important tool in the process.
Relevant Data
Two ways to increase the relevance of data are to collect it in
quantities great enough to yield high statistical reliability and
collect several types of data at the same time. Field robots
can build dense data bases. They are also capable of
deploying a range of sensors that humans cannot; e.g., three-
dimensional laser rangefinders, infrared sensors, sonar,
radar, etc. In addition, they can deploy analytical instruments
simultaneously and determine their position accurately in
global coordinates.
The site investigation robot (SIR) under development at
Carnegie Mellon's Field Robotics Center collects ground
penetrating radar (GPR) data at two centimeter intervals,
accumulating in excess of 400,000 data points per acre.
GPR data are inherently three-dimensional and can be
processed into 3D images, if the data are dense enough. A
human cannot attain the positioning accuracy or deploy the
sensor with enough precision to collect dense data, as a robot
can. The result is not just more data but new and better data.
Further, the robot can be configured to collect additional
types of data or samples simultaneously, e.g., organics in air
or soil.
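The stated density of over 400,000 GPR samples per acre can be checked roughly, under our own assumption (not stated in the paper) of traverse lines spaced about half a meter apart with 2 cm along-track sampling.

# Rough consistency check of the stated data density, assuming traverse
# lines roughly 0.5 m apart and 2 cm along-track sampling.
ACRE_M2 = 4046.86          # one acre in square metres
line_spacing_m = 0.5       # assumed spacing between GPR traverse lines
sample_interval_m = 0.02   # 2 cm along-track sampling, as stated in the text

total_line_length_m = ACRE_M2 / line_spacing_m
samples_per_acre = total_line_length_m / sample_interval_m
print(f"{samples_per_acre:,.0f} samples per acre")   # ~400,000, matching the text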
Interpretable, Usable Data
Investigators are often confronted with data that do not
easily yield to interpretation or, at worst, require the
investigator to make a guess as to what the data show. Field
robots can process data, making it easier to visualize and
understand.
CMU's Site Investigation Robot provides a visual image that
is not only quantitatively better but qualitatively better than
standard GPR data bases. The user is provided with an image
defined accurately in x, y, and z, making the data more
interpretable, even to a novice.
Data bases become more usable when one is able to see
correlations among data in new ways. The availability of
multiple types of data superimposed on a computer-
generated site map will enable investigators to gain a whole
site profile in a single visual image. This kind of user power
will not only speed the investigation process but give
entirely new insights to investigators.
Accessible Data
Finally, when data are accessible to many people over time,
the likelihood of good use being made of the data increases
significantly. Data collected by field robots can be stored on
central file servers, available to all who need to determine
what is known about a site or who have new data to add to
the file.
When Humans Are Precluded
Some investigations and remedial activities preclude
physical human access, such as the interiors of pipes, tanks,
and ducts; abandoned mines; and river, harbor, and sea
bottoms. Field robotic technologies offer the best access to
collect data and to perform remedial activities.
Generations of competent pipe crawlers have been
developed and are in service in petroleum and natural gas
industries. In-tank inspection robots and remediation robots
are needed at DOE complexes. One such robot is being
developed by RedZone Robotics to inspect tanks containing
nuclear waste. At CMU's Field Robotics Center, we are
developing autonomous navigation and vision systems for
underground mining equipment and autonomous navigation
systems for walking machines and wheeled vehicles to
traverse rough terrain. Others have significant experience
with competent sub-sea robots and have demonstrated their
capabilities and utility.
Another class of sites precludes humans because of health
and safety concerns, e.g., high-level waste, mixed waste,
transuranics, unbreathable atmospheres, unknown waste,
and accident response. These sites present strong motivations
for robots to perform not only reconnaissance and sampling
activities but forceful manipulation and heavy work to a high
degree of precision. These activities include excavation,
loading, haulage, and packaging of diffuse materials;
removal of sludges and mixing of materials; removal of
debris; barrel handling; boring on gassy landfills, and the
handling of explosive materials or operations in explosive
environments.
183
-------
Field robotic technologies have now progressed to the point
where the robotics community can begin to build competent,
rugged, and reliable systems to meet the performance needs
of waste characterization and remediation programs.
Integrated Characterization and Remediation
Systems
Robotic technologies can fulfill the need to better integrate
characterization and remediation systems. An excellent
example of this is the case of trenched transuranic wastes.
Conditions preclude most invasive means of characterizing
the volume and position of the waste, and having a human
onboard an excavator is precluded during the remediation.
The work can, however, be performed by robots in a
coordinated sequence. A site investigation robot (SIR), using
ground penetrating radar, can produce measurements of
buried waste in x and y to a reasonable accuracy (7 to 14
inches), which would allow a robotic excavator to trench on
both sides of the waste to install steel sheeting. The excavator
would have the SIR's position data and subsurface map
available to it to guide it through the digging process, along
with active sensing of its own.
The SIR also surveys the z axis, determining the depth of the
waste and the distance from the soil surface to the waste.
Through a sequence of iterative sensing and excavation, the
clean overburden could be removed, leaving 4 inches of soil
covering the waste. The excavator could then remove the
waste autonomously.
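The coordinated sequence just described can be summarized as a schematic control loop. The SIR and excavator below are reduced to simple simulated stand-ins so the sketch runs end to end; this is not CMU's software, and the cut depth per pass is an assumed value.

# Schematic of the iterative sense/dig cycle described above.
COVER_LIMIT_M = 0.10        # roughly 4 inches of soil left over the waste

class SimulatedSIR:
    def __init__(self, depth_to_waste_m):
        self.depth = depth_to_waste_m
    def measure_depth(self):
        return self.depth          # z survey from ground penetrating radar

class SimulatedExcavator:
    def remove_overburden(self, sir, lift_m):
        sir.depth -= lift_m        # each pass removes one layer of clean soil
        print(f"removed {lift_m:.2f} m lift; {sir.depth:.2f} m of cover remains")
    def remove_waste(self):
        print("retrieving waste with no human onboard")

def remediate(sir, excavator):
    # Trenching and sheeting along the SIR's x-y waste boundary would precede
    # this loop; only the iterative overburden removal is shown here.
    while sir.measure_depth() > COVER_LIMIT_M:
        lift = min(0.15, sir.measure_depth() - COVER_LIMIT_M)  # assumed cut per pass
        excavator.remove_overburden(sir, lift)
    excavator.remove_waste()

remediate(SimulatedSIR(depth_to_waste_m=0.60), SimulatedExcavator())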
In this scenario, robots working together can perform the
tasks more efficiently and with greater accuracy than human
operators. Five years ago, sensing and control in both robots
to the degree of accuracy described above would have been
wishful thinking; two years ago it was beyond the reach of
the technology; today it is within reach. Although it is not
yet ideal for selectively finding and excavating deeply
buried hot spots, it is likely the safest, most cost-effective
approach to retrieving radioactive, trenched wastes that can
be expected in the next several years.
Future Opportunities
Commercial applications for capable field robots will
number in the hundreds. Among them are significant field
robotic applications that are achievable in the near term with
evolutionary extensions to our current technology base.
Moreover, there are significant opportunities, some of which
are unique to the U.S., e.g., robotic timbering, surface
mining, and large-scale agriculture.
Federal agencies should not miss opportunities to develop
and apply robotic technologies in programs where they have
a legitimate interest and obligation to protect human health,
increase productivity, and decrease costs. Because robotic
technologies are extensible to many applications, there
should be a coordinated effort by Federal agencies to 1)
focus performance-based research to move the technologies
forward; 2) apply the technologies in Federal programs
where they will produce high-leverage results, sufficient to
pay for the investment; and 3) ensure that programs will be
sufficiently stable over time to attract world-class
researchers to the field.
There is an opportunity to reduce significantly the total
cleanup costs of chemical and nuclear waste sites through
the programmatic development of robots to perform site
investigation, data collection, and remedial activities. The
core technologies have reached a stage of development to
begin the task of putting together integrated, teleoperated
and semi-autonomous systems for this purpose. The
opportunity is to alleviate a major national problem and, at
the same time, to develop and apply new technologies that
will impact the world.
184
-------
DISCUSSION
BRIAN PETERS: You mentioned American leadership. What about the position
of the Japanese in this area? They're well known for corporate robotics on
automobile assembly lines, for example.
WILLIAM WHITTAKER: The Japanese are a significant force in this arena.
Particularly, they have programs that have matured, driven in a strategic way, top
down, over several years, and they look very good. They look extremely good in
construction. They have lesser presence in subsurface and in space. Consider, if
you will, that we enjoy a 20- or 30-year history in space, and they're just building
their first rockets. But to bring it to terms here, I look for the United States to drive
this agenda because we are the ones who pioneered some of the nuclear
technologies, and we are the ones that have the volumes and the programs to go
at this.
For instance, if you looked at the navigation technologies, there aren't a lot of
places in Japan that have enough roads to drive something like that. And so if you
total the agenda in the program, I think that is enough to really focus operations
here. I actually have a video tape of condensed Japanese technology that I just
put together this week. After this session I'll be happy to show that.
GREGG DEMPSEY: On your remote vehicles that stand completely alone
(they run on telemetry or whatever), is the technology such that if there's an
incident out on a site or something, and you lose communication, can the
machine actually turn itself around and come back?
WILLIAM WHITTAKER: Yes, that technology is available. However, I think
it's important to know that it's in very select pockets of seasoned research groups,
and very select pockets of small organizations that can move fast to put it
together. Specifically, that kind of technology source is from the DOD Strategic
Computing Initiatives and DARPA's Road Following Programs, which were
funded at the hundred million dollar level over a number of years, going back
three or four years.
GREGG DEMPSEY: I remember when the robots went into Three-Mile Island
there were problems with the camera lenses darkening up because of the radiation
exposure. Has that problem been solved to any great extent?
WILLIAM WHITTAKER: In the first deployment in November of 1984, it's
true that the cameras didn't function well. And that's because we were using a
CCD technology. It was small, and it was very new! But within a month that was
straightened up. And with the years that have gone by, particularly out of military
and space initiatives, rad hardened CCD's are a known technology. It's very
straightforward now.
GREGG DEMPSEY: So we have technology that can operate in the thousands
of roentgens per hour now?
WILLIAM WHITTAKER: Yes.
185
-------
SPACE TECHNOLOGY FOR APPLICATION TO TERRESTRIAL HAZARDOUS
MATERIALS ANALYSIS AND ACQUISITION
Brian Muirhead Susan Eberlein
James Bradley William Kaiser
NASA/Jet Propulsion Laboratory
California Institute of Technology
Pasadena, CA
ABSTRACT
In-situ and remote measurements of elemental, molecular
and mineralogical composition of materials have been part
of the space science program since its beginnings. There is
a great deal of commonality between space science missions
and terrestrial hazardous materials screening in the types
of measurements, methods and instrumentation used.
There are also strong parallels between the hostile
environments of space and those of a hazardous material site.
This paper discusses the measurements, methods and
instrumentation used on past, present and future space
missions for in-situ and remote analysis of materials.
Specific instrumentation discussed includes gas
chromatographs, mass spectrometers, imaging
spectrometers, X-ray and gamma-ray spectrometers.
Work sponsored by the National Aeronautics and Space
Administration's Sample Acquisition, Analysis and
Preservation technology program is discussed, including
concepts and hardware for multi-spectral remote sensing,
instrument data analysis and interpretation, material
acquisition and processing. Some new concepts for
microsensors for making various chemical measurements are
also discussed. Possible applications of space technology to
terrestrial hazardous materials field acquisition and
analysis are presented.
INTRODUCTION
In-situ and remote measurements of elemental, molecular
and mineralogical composition of materials have been part
of the space science program since its beginnings. Two of
the best known surface science missions were the Viking
mission to the surface of Mars and the Soviet Venera series
to Venus. The Galileo spacecraft is carrying a probe to
sample Jupiter's atmosphere and the National Aeronautics
and Space Administration (NASA) has just started a project
to make a variety of in-situ measurements of the comet
Kopff. NASA is currently working on technology to enable
robotic and human missions to the Moon and Mars. Such
missions will include a wide variety of in-situ and remote
science and engineering measurements. There is a great
deal of commonality between space science missions and
terrestrial hazardous materials screening in the types of
measurements, methods and instrumentation used as well
as in the hostile nature of the environment in which these
measurements are made. NASA is very active in the design,
development and utilization of the instruments. Table 1
contains a listing of some science data requirements and
associated instrument(s) that are used and/or under
development within NASA for its past, present and future
missions.
NASA has established a technology program called Sample
Acquisition, Analysis and Preservation (SAAP) to address
the specific needs of in-situ science and engineering
measurements. SAAP is intended to develop critical and
significantly enhancing technologies for remote
identification, acquisition, processing, analysis and
preservation of materials for in-situ science, engineering
characterization and earth return. Although the technology
being developed in the SAAP program is not currently
being applied to specific missions, the SAAP program will
broaden the base of technology available for future
missions. Specifically, SAAP is developing concepts and
hardware for multi-spectral remote sensing, instrument
data analysis and interpretation, material acquisition and
containment [1,2,3,4]. Some new concepts for
microsensors for making various chemical measurements
are also under development. There are many possible
applications of space technology to terrestrial hazardous
materials field acquisition and analysis.
SPACE INSTRUMENTS, MEASUREMENTS AND
APPLICATIONS
There is very high scientific value to direct surface
measurements, independent of whether a sample is
returned to a laboratory. In particular, the analysis of
volatiles is probably best done in-situ due to the potential
for loss or chemical change after prolonged storage. For
space applications, in-situ measurements may be a
necessity because of the limitations on sample return.
187
-------
Table 1. SCIENCE DATA REQUIREMENTS vs INSTRUMENT TYPES
Required Data                   Example Instruments
Elemental Composition           Gamma-ray Spectrometer, a-p-x Spectrometer,
                                XRF, a-Backscatter
Mineralogical Composition       Visible-Infrared Spectrometer,
                                Mossbauer Spectrometer, DSC, XRD
Water Detection and Mapping     Neutron Spectrometer, Electromagnetic Sounder
Atmospheric Composition         GCMS, Laser Spectrometer
Subsurface Structure            Electromagnetic Sounder, Active Seismometer
Seismometry                     Passive Seismometer
Volatiles                       DSC-EGA, Visible-Infrared Spectrometer
Imaging                         Camera, Imaging Spectrometer
Exobiology                      Viking Biology Instrument
Magnetic Fields                 Magnetometer
Although terrestrial applications do not face the same
limitations, major advantages in speed and accuracy can be
gained by employing field analysis prior to selecting
samples for laboratory study.
Below are listed some of the characteristics of a few
instruments that have been flown by NASA or are being
proposed for NASA future missions. The constraints on
mass and power, combined with the need to function in a
hostile environment, place severe requirements on these
instruments. The technology developed to meet these
requirements could benefit the production of similar
instruments for terrestrial applications.
Figure 1. The mass spectrometer for the Viking Lander GCMS (ion source
housing, electron multiplier, ion pump, magnet, and electric sector). The
electric sector has a radius of 4.7 cm.
Chemical Analyzers
The prime example of a chemical analyzer is the Biology
Experiment on the Viking Landers. The experiment
included a GC-MS system for analysis of organic
compounds in Martian soil [5]. The GC-MS part of the
system had a mass of 16 kg, measured 28 cm x 38 cm x 27
cm and consumed 25 to 125 W when active. When the
system was presented with a soil sample it could sift a soil
sample into a pyrolysis tube, seal the tube to a GC inlet,
perform a controlled heating on the sample, and perform a
mass spectral analysis of the GC effluent with
exceptionally high sensitivity. The mass spectrometer also
had a direct inlet for analysis of the Martian atmosphere.
Figure 1 shows a diagram of the mass spectrometer.
Currently under development for the Comet
Rendezvous/Asteroid Flyby mission is the Cometary Ice and
Dust Experiment (CIDEX) instrument that incorporates a
3-column GC system for evolved gas analysis over a
sample temperature range of -90 to +1000 C. The
instrument also includes an x-ray fluorescence
experiment in a 15 kg package that uses an average of
about 22 W. The system will analyze comet dust for
organic materials and elemental composition.
New GC-MS systems have been proposed that combine the
analytical speed of microbore GC columns with the
exceptionally high sensitivity of a focal-plane mass
spectrometer equipped with an integrating focal plane
detector. Such a flight system would be comparable in size
and mass to the Viking Lander GC-MS, but with analytical
cycle times of a few minutes and the ability to analyze GC
peaks separated by a few hundred milliseconds. Such a
system could measure dynamic processes or determine
planetary atmospheric composition while descending on a
probe or parachute. The robust, portable nature of such an
instrument would make it a good candidate for deployment
in terrestrial field screening activities as well. A gas
chromatogram from a laboratory prototype is provided in
Figure 2.
188
-------
Figure 2. Chromatogram of a mixture of EPA priority
pollutants (detector intensity versus frame number). Each
50 ms frame contains a time-integrated mass spectrum from
mass 25 to 500 amu. (Peak 1 is air and peak 9 is toluene.)
Elemental Analyzers
Gamma-ray spectrometers have been used in orbiting
spacecraft to obtain elemental maps of atmosphere-free
bodies such as the moon. The Mars Observer spacecraft
will contain a gamma-ray spectrometer for elemental
mapping of the Martian surface through its thin
atmosphere. The recently built and proposed gamma-ray
systems for elemental analysis have tended to follow
commercial technology by use of cooled germanium
detectors. These detectors use radiators aimed into cold
space to achieve the required temperatures. The detected
elements are those with naturally radioactive isotopes or
which are excited by cosmic rays. Long counting times are
needed. Related instruments may be useful in the remote
determination of radioactive isotope composition at
terrestrial sites.
New, high efficiency x-ray fluorescence analyzer systems
have been proposed for lunar and Martian landers that use
new toroidal focussing crystals to achieve many orders of
magnitude increase in x-ray flux from microfocus x-ray
tube sources to achieve rapid and high sensitivity analyses
[6]. With the use of uncooled mercuric iodide x-ray
detectors, such an x-ray fluorescence system might have a
mass of 4 kg, consume 10 W, and occupy a volume of about
35 cm x 25 cm x 25 cm. The same microfocus x-ray
source could be used in a high-efficiency, toroidal-
focussing powder x-ray diffractometer for identification of
minerals. Both instruments can work in an atmosphere of
low x-ray absorption density, such as that on Mars, or in
vacuum.
VISIBLE AND NEAR INFRARED REMOTE SENSING
Imaging spectrometers play a major role in both Earth
observation and planetary exploration. The Airborne
Visible/Infrared Imaging Spectrometer (AVIRIS) images
with 20 m x 20 m spatial resolution in 224 spectral
channels from 400 to 2450 nm wavelengths [7,8]. The
data, obtained from NASA ER-2 aircraft at 20 km altitude,
is spectrally and radiometrically calibrated to provide
information for disciplines such as ecology, geology,
oceanography, inland waters, snow hydrology and
atmospheric science. An AVIRIS type instrument might be
used for aircraft tracking of ocean oil spills, smoke
plumes, or other indicators of chemical contamination.
In addition to visible and near infrared imaging
spectrometers, NASA has developed a portable backpack
point spectrometer (Portable Instantaneous Display and
Analysis Spectrometer - PIDAS). At a mass of about 30
kg, PIDAS obtains and records with integrating detectors,
reflectance spectra in 830 bands from 400 to 2450 nm.
The instrument, developed at JPL, has been used to support
geological and ecological disciplines, and can be calibrated
for identification of a wide range of materials. The
instrument field of view is 10 to 30 cm when hand held.
NASA is currently working to develop an adaptive, reliable
and compact imaging spectrometer system for autonomous
site and sample selection and analysis of materials. This
system will provide wide area as well as close-up
identification of minerals which is enabling for surface
science and engineering missions.
The key element of the SAAP remote sensing subsystem is a
multi-spectral imager based on the solid-state acousto-
optic tuneable filter (AOTF). This device operates on the
principle of acousto-optic interaction in an anisotropic
medium and acts as a controllable narrow band filter. The
current breadboard version can collect spectral images at
4 nm spectral resolution in the visible range (0.5 and 0.8
microns). It has been implemented with a 1000x1000
fiber optic bundle between the foreoptics and the AOTF.
The fiber optic cable enables the mounting and articulation
of the foreoptics, remote from the main spectrometer body.
Figure 3 shows the current breadboard hardware.
By altering the pass band sequentially, only the desired
spectral bands are collected. Each pixel has a spectral
signature associated with it and classification is
accomplished on the basis of elemental content and spatial
location. Figure 4 shows a set of spectrometer images of a
rock containing the rare earth mineral neodymium taken
in the range of 783-710 nanometers. The absorption
characteristics of this mineral at around 750 nanometers
are evident in the dark spot in the right-center of the second
row of images. Figure 5 shows the complete spectral
signature of neodymium as taken by the AOTF
spectrometer.
Although the current instrument operates in the visible
region, the AOTF technology will also allow construction of
tunable filters for the infrared and ultraviolet regions of
the spectrum, with a total range between 0.35 and 25
microns. This may provide a new class of tunable spectral
analyzers for a variety of space and earth applications.
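A sketch of how such a band-sequential instrument could be driven and its pixels classified against reference signatures is given below; the tune_filter and read_frame callables and the spectral library are illustrative assumptions, not the AOTF breadboard's actual interface.

# Sketch of band-sequential acquisition with a tunable filter and per-pixel
# classification against reference spectral signatures.
import numpy as np

def acquire_cube(tune_filter, read_frame, wavelengths_nm):
    # Step the pass band through the requested wavelengths and stack the
    # resulting frames into an (nx, ny, nbands) spectral image cube.
    frames = []
    for wl in wavelengths_nm:
        tune_filter(wl)                # command the filter to the next pass band
        frames.append(read_frame())    # one monochromatic image per band
    return np.stack(frames, axis=-1)

def classify_pixels(cube, library):
    # library: dict of material name -> reference spectrum (nbands,).
    # Each pixel is assigned the material whose normalized spectrum it matches best.
    names = list(library)
    refs = np.stack([library[n] / np.linalg.norm(library[n]) for n in names])
    flat = cube.reshape(-1, cube.shape[-1])
    flat = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-12)
    best = np.argmax(flat @ refs.T, axis=1)       # highest spectral correlation
    return np.array(names)[best].reshape(cube.shape[:2])

# Example with stand-ins: a random 4x4 frame per band and two reference spectra.
wls = [500, 540, 580, 620]
cube = acquire_cube(lambda wl: None, lambda: np.random.rand(4, 4), wls)
lib = {"mineral_A": np.array([1, 2, 3, 4.0]), "background": np.ones(4)}
print(classify_pixels(cube, lib))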
189
-------
Figure 3. AOTF Spectrometer Breadboard
-------
Figure 4. AOTF Spectrometer Output in
710-783 nm
The completed imaging spectrometer will be capable of
collecting high resolution images at hundreds of discrete
wavelengths. Processing of such a large amount of
information (>1 gigabit per scene) will strain
computational systems without some means of data
reduction. Hierarchical analysis schemes, in combination
with neural nets, have been shown to produce several
orders of magnitude reductions in total computation time
and are discussed below.
INTELLIGENT DATA ANALYSIS
Spectral data from a variety of instruments is used in
many areas of chemical analysis. The proceedings of the
First International Symposium on Field Screening Methods
for Hazardous Waste Site Investigation [9] report on the
use of fieldable instruments for mass spectroscopy, x-ray
fluorescence spectroscopy, infrared spectroscopy and
Raman spectroscopy. For any of these instruments, the
spectral data produced is complex, requires a highly
trained chemist to assist in the interpretation process,
and often requires extensive computer work for proper
analysis. In many cases the data analysis and interpretation
step presents a significant bottleneck which prevents the
most efficient utilization of the instruments.
Work done within the SAAP program has concentrated on
the analysis of visible and near infrared spectra for
mineral determination [10]. The developing system
incorporates a number of data analysis methods and
algorithms which will transfer readily to use with other
types of spectral data. Application of these approaches to
the instrumental analysis required for field screening of
toxic waste will improve the speed and efficiency of the
analysis step. Table 2 shows a comparison for speed and
accuracy of four classification methods.
Figure 5. Neodymium Absorption Spectrum from AOTF Spectrometer
(intensity versus wavelength in µm).
The first matched filter is a brute force approach using full dimensionality
of all patterns, and requiring the most computation. By
reducing the dimensions used for matching, or performing
the matching in several steps (e.g. a grouping step and a
finer classification step), the computation is reduced. The
hierarchy of neural network pattern classifiers combines
these approaches. Images consist of 32-band spectra for
all pixels, and are classified as one of 28 known minerals
in each case.
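The computational advantage of a coarse-to-fine hierarchy over a single matched filter can be illustrated with a back-of-the-envelope sketch; the image size and class-group layout below are our own assumed numbers, not those of the Table 2 datasets.

# Why a hierarchy needs fewer operations than a full matched filter.
pixels, bands, classes = 10_000, 32, 28
groups, classes_per_group, coarse_bands = 7, 4, 8      # assumed hierarchy layout

single_matched_filter = pixels * bands * classes
hierarchy = pixels * (coarse_bands * groups             # cheap grouping pass
                      + bands * classes_per_group)      # fine pass within one group
print(single_matched_filter, hierarchy)                 # 8,960,000 vs 1,840,000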
Neural networks are trained to recognize spectra or
classes of spectra by presenting many examples of each
spectrum, complete with noise and normal variation in
features. Following training, new variants of the spectra
contained in the training set may be identified with a high
degree of accuracy. During the training procedure, the
network extracts the common features among the training
examples representative of each type of spectrum, and
learns to recognize these as important identifying factors,
while the noise is discarded. Thus new spectra are
classified based on the presence of the diagnostic features
specific to a type of compound, without significant
interference due to normal variation, noise, and
background contamination. The major components in
mixture spectra may also be identified, if the mixing
process does not obscure the critical features.
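The idea of learning robust class prototypes from many noisy training examples can be illustrated with a toy sketch; a nearest-class-mean classifier stands in here for the neural network described in the text, and the spectra are synthetic.

# Toy illustration of training on noisy example spectra so that common
# features are retained while the noise averages out.
import numpy as np

rng = np.random.default_rng(0)
bands = 32
true_spectra = {"mineral_A": np.sin(np.linspace(0, 3, bands)) + 1.0,
                "mineral_B": np.cos(np.linspace(0, 5, bands)) + 1.0}

# "Training": average many noisy variants of each reference spectrum.
prototypes = {name: np.mean([s + 0.2 * rng.standard_normal(bands)
                             for _ in range(200)], axis=0)
              for name, s in true_spectra.items()}

# "Recognition": a previously unseen noisy variant is matched to the
# closest learned prototype despite the added noise.
unknown = true_spectra["mineral_B"] + 0.2 * rng.standard_normal(bands)
guess = min(prototypes, key=lambda n: np.linalg.norm(unknown - prototypes[n]))
print(guess)   # expected: mineral_B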
The neural network spectrum classifiers currently used
within the SAAP system work hierarchically, placing
spectra into progressively more detailed classes. This
approach allows either a rough estimate of mineral
composition, or a very detailed analysis and identification.
The final analysis step includes an assessment of the
classification accuracy. This allows the system to identify
those spectra which were poorly classified, and which may
represent mixtures or other unexpected spectra. Since the
191
-------
Table 2. COMPARISON OF 4 SPECTRAL CLASSIFICATION METHODS
METHOD                              DATASET   TOTAL OPERATIONS   ACCURACY
Single Matched Filter               Mars          16,226,560        80%
                                    AISA           5,017,600
Reduced Dimension Matched Filter    Mars           8,113,280        80%
                                    AISA           2,508,800
Two Step Matched Filter             Mars           6,374,720        69%
                                    AISA           1,971,712
Hierarchy                           Mars           4,858,284        89%
                                    AISA           1,006,099
Note: Mars dataset is a simulated multispectral image derived from a Viking Lander image.
AISA dataset is a real multispectral image taken by the Airborne Imaging Spectrometer.
final application of this spectral analysis system requires
almost complete automation of the analysis process, the
results of the spectral analysis are integrated into an
automated decision making procedure. The decision making
is goal-driven: specific classes of minerals may be
searched for and analyzed in great detail, while other less
important compounds are discarded at an early step in the
analysis procedure.
The goals of the existing (planetary) spectral analysis and
decision making system include identifying interesting and
uninteresting areas on the basis of spectral information,
and identifying samples which should be acquired for more
detailed analysis. Similar goal driven systems could be
designed with the objectives of finding specific types of
chemical compounds or determining which samples will
prove most informative regarding chemical distribution in
an area. The hierarchical goal driven architecture allows
the system to analyze many samples rapidly, and to provide
the user with information regarding which samples are
most important for further examination.
Application to field screening for hazardous waste:
Two aspects of the work done for spectral data analysis in
planetary exploration will be of interest for the field
screening of hazardous waste. The neural network based
spectral analysis approach will be useful for the analysis
of IR, XRF, Raman, and mass spectra, if networks are
trained with real spectra gathered under the anticipated
field conditions. The hierarchical analysis architecture
that incorporates goal driven decision making may be
adapted to assist field workers in making rapid decisions
regarding the areas requiring special attention during a
field screening operation.
Although special neural network pattern recognition
systems will be required for each type of instrument data,
the basic algorithms developed for the analysis of
visible/near IR mineral spectra should transfer readily to
the analysis of other spectra. A hierarchical, neural
network based spectral identification system will have
several applications:
1. Unknown identification.
A network based hierarchy can replace a library search
procedure with favorable results for the identification of
unknown spectra. Progress is being made in the
implementation of hardware network pattern matchers
which will allow the equivalent of a very large library
search procedure to occur in microseconds.
2. Searching for specific compounds.
A hierarchy of networks is particularly well suited to the
search for specific compounds. A spectrum is presented to
the hierarchy, and is progressively classified until it
becomes apparent that the spectrum does not represent the
desired compound (or until the desired compound is found).
A negative result is usually determined fairly quickly,
since at each step of the hierarchy, a large group of spectra
may be eliminated since they are not potential matches.
3. Searching for classes of compounds based on specific
features.
This is a variant of the hierarchical search for a specific
compound, with the difference that a positive result may
occur when a given branch point of the hierarchy is
reached, rather than only at the end of the search. The
192
-------
hierarchy is designed so that the groups of spectra that
represent important classes are together within a branch
of the hierarchy. The selection of critical spectral features
for identifying a class is ensured by using specific spectral
bands for training the networks. Extensive knowledge of
the chemistry is required at the training step for optimal
results.
4. Extracting major components from mixtures.
Identification of spectra of mixtures presents problems for
traditional library search and match techniques. Since
mineral spectra generally derive from mixtures of pure
minerals, this problem is being addressed in the work
within the SAAP program. The neural network approach
has the advantage of basing results on important features
which are extracted from the anticipated data in advance,
rather than on complete spectral matching. This allows
identification of major components in many mixtures.
Situations where mixing causes masking or shifting of
critical spectral features require special treatment.
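As an illustration of the hierarchical search outlined in items 2 and 3 above, a minimal sketch follows. The two-level tree, mineral class names, and four-band prototype spectra are hypothetical, and a simple nearest-prototype cosine score stands in for the trained network that would sit at each branch point of an operational system.

import numpy as np

class BranchNode:
    """One branch point in the spectral hierarchy. In the SAAP system each
    node would be a trained neural network; here a nearest-prototype cosine
    score stands in for the network output."""
    def __init__(self, name, prototypes, children=None):
        self.name = name                    # class label for this branch
        self.prototypes = prototypes        # child label -> prototype spectrum
        self.children = children or {}      # child label -> BranchNode

    def classify(self, spectrum):
        """Return the child label whose prototype best matches the spectrum."""
        def score(proto):
            return float(np.dot(spectrum, proto) /
                         (np.linalg.norm(spectrum) * np.linalg.norm(proto)))
        return max(self.prototypes, key=lambda label: score(self.prototypes[label]))

def search_for_compound(root, spectrum, target_path):
    """Descend the hierarchy, stopping as soon as the spectrum leaves the
    branch containing the target compound (an early negative result)."""
    node = root
    for expected in target_path:
        label = node.classify(spectrum)
        if label != expected:
            return False                    # eliminated at this branch point
        if label not in node.children:
            return True                     # reached a leaf: positive match
        node = node.children[label]
    return True

# Hypothetical two-level hierarchy over four-band "spectra".
carbonates = BranchNode("carbonates",
                        {"calcite":  np.array([0.9, 0.2, 0.1, 0.4]),
                         "dolomite": np.array([0.7, 0.3, 0.2, 0.5])})
silicates = BranchNode("silicates",
                       {"quartz":   np.array([0.1, 0.8, 0.7, 0.2]),
                        "feldspar": np.array([0.2, 0.6, 0.8, 0.3])})
root = BranchNode("minerals",
                  {"carbonates": np.array([0.8, 0.25, 0.15, 0.45]),
                   "silicates":  np.array([0.15, 0.7, 0.75, 0.25])},
                  children={"carbonates": carbonates, "silicates": silicates})

unknown = np.array([0.85, 0.22, 0.12, 0.42])
print(search_for_compound(root, unknown, ["carbonates", "calcite"]))   # True
print(search_for_compound(root, unknown, ["silicates", "quartz"]))     # False, rejected at the first branch

In the sketch, a spectrum that descends into the wrong branch is rejected at that branch point, so most negative results never reach the leaves of the tree.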
SYSTEM CONCEPTS
In-situ analysis systems can range from single
instruments placed on the surface to multi-purpose,
mobile units looking for specific materials or unique
materials units. An autonomous space exploration system
will require the functions of planning, analysis, execution
control, reflex action, data processing and interpretation,
in order to operate in real time in a hostile environment.
For an in-situ analysis subsystem, the spectrum of
possible architectures can be characterized by two
extremes. At one end is a set of disjoint, self-contained
elements working more or less independently to perform
the required functions. At the other extreme is a fully
integrated system with many interdependent relations
between the elements. The former case is probably more
comparable to the terrestrial applications, where several
independent instruments are operated by humans. This
system design causes some problems for space systems
since it is not efficient in terms of mass or power and
compromises science due to uncoordinated measurements.
Multi-instrument data fusion and corroboration is an
important consideration in this system design.
An extreme example of the latter case is a multi-purpose,
factory-like system, implementing a set of processes that
may vary significantly depending on the desired outcome or
product. Physical material, not just data, must move
between the elements. Current requirements and desires
for coordinated measurements as well as mass, power and
volume limitations make an integrated design approach the
logical basis for technology requirements, but this
approach clearly pushes technology. Technology developed
for such an integrated system could be applicable to the
automation of sample gathering and analysis in extremely
hostile earth environments, in cases where human
interaction must be remote and limited for safety reasons.
Technology will be validated in the laboratory and then
integrated into the series of evolving SAAP testbeds. The
representative environment provided by the testbed will
be used to verify technologies and demonstrate overall
SAAP operational capability. By the end of September
1992 an initial laboratory testbed will be constructed to
demonstrate sample identification and acquisition. By the
end of 1995 a fully functional system testbed will be in
operation which will transition into a complete self-
contained transportable testbed for end-to-end "field"
operations. A preliminary system conceptual design of a
SAAP platform with a full complement of subsystem
components except for a regolith deep core drill is shown
in Figure 6. This configuration can be considered a
preliminary model for the full-up system testbed; no final
payload or mission configuration has been selected.
SAMPLE ACQUISITION
The capability to acquire physical samples robotically,
without human intervention, would be significantly
beneficial in many hazardous waste screening
applications. The principal requirement driving sample
acquisition for planetary exploration is to obtain samples
of weathered and unweathered materials from accessible
rocks or outcrops. These samples must not be significantly
altered either mechanically or thermally during
acquisition. Conceptual designs and early experimental
work have been completed to help understand the
mechanical, controls and automation issues for sample
acquisition in the hostile environment of a planetary
surface. Effort has focused on mechanical designs to
achieve functional capability and is now proceeding to
include testing of control and automation methodologies.
Laboratory validation at the component level will be
followed by further development and verification at the
system level in a series of SAAP testbeds.
Various techniques have been studied for sample
acquisition including sawing, coring and chipping. Of
these, core drilling represents an efficient way of
obtaining surface and subsurface samples that are easy to
handle by a preparation or storage subsystem. Terrestrial
coring processes, however, require direct human
supervision and utilize high power and introduce large
volumes of fluid to aid the cutting process by cooling the
bit and removing cuttings of rock and/or soil.
SAAP has developed the means for core drilling low
porosity, high compressive strength rocks without the use
of coolant. High velocity diamond matrix core barrels are
used under the control of robotic manipulators. Under
study are various control approaches and a variety of
sensor modalities including position, force, vision,
spectral, temperature and vibration. Progress in this area
should improve the prospects for remote robotic
acquisition of solid samples from hazardous areas on earth
as well.
In addition to tools, work is underway to identify and
develop end effector and manipulator technologies
necessary for the sample acquisition operations.
Preliminary studies of end effector and manipulator
dexterity versus reliability, mass, power and performance
have been made for some mission scenarios. The current
193
-------
Figure 6. SAAP Preliminary System Conceptual Design (callouts: 6 DOF arm, 7 DOF arm, sample preparation system, multispectral imager, tool/instrument box, sample canister, X-ray fluorescence spectrometer, reduced degree-of-freedom end effector, X-ray diffractometer, differential scanning calorimeter, gas chromatograph/mass spectrometer)
state-of-the-art in end effectors consists of either very
limited capability industrial vise-type grippers, or
extremely complex anthropomorphic designs being studied
in research laboratories. In general, the fewer degrees of
freedom the better for simplicity. However, to achieve
high inherent reliability, mechanical redundancy at each
degree of freedom will be required. Concepts that provide
adaptability or flexibility and involve trade-off's of
degrees of freedom with redundancy will be studied
further.
ADVANCED CONCEPTS
NASA is interested in developing new sensing device
technology for in-situ science investigations. Currently
available instruments for in-situ science investigations
are often incompatible with mission requirements due to
their excessive mass, volume and power consumption.
Science capabilities may be significantly extended by the
development of sensing device systems which represent
smaller payloads. The sensing device development is
directed to enable compact, low-mass, low-power
consumption instruments for a variety of mission
requirements. The advanced technology of silicon
micromachining for device fabrication will be employed to
implement highly capable, sensitive, and robust
instruments while retaining compact structure and low
mass attributes.
The development of silicon micromachined gas
sensors will be based on the compact gas chromatography
(GC) instruments recently demonstrated in silicon
micromachined structures. The key components of the
compact GC systems include a silicon micromachined gas
dispersion column, integral gas metering valves and
silicon thermistor gas detectors, fabricated entirely on a
single silicon wafer. The successful operation of this
prototype time-of-flight GC system indicates the range of
opportunities for unique instruments of this type. In this
task, specific gas detector applications will be identified
and instrument requirements will be formulated. Gas
sensors and instruments will be fabricated and tested for
operation in the Martian atmospheric environment.
Finally, with results of device testing, complete
instruments will be designed for specific mission
applications.
CONCLUSION
This paper has discussed some of the measurements,
methods and instrumentation used on past, present and
future space missions for in-situ and remote analysis of
materials. Work sponsored by NASA's Sample Acquisition,
Analysis and Preservation technology program included
concepts and hardware for multi-spectral remote sensing,
instrument data analysis and interpretation, and material
194
-------
acquisition, and new concepts for micro sensors for making
various chemical measurements. Much of the technology
under development in the SAAP program has application to
terrestrial hazardous waste materials acquisition and
analysis.
REFERENCES
1) Moreno, C., Editor, In-situ Science Investigation
System Catalog, Version 1.0, JPL Document, June 5, 1990
2) B. Muirhead and G. Varsi, (1990), Next-
Generation In-situ Science Concepts and Technology, IAF
90-444, 41st Congress of the International Astronautical
Federation, Oct. 6-12, 1990.
3) Muirhead, B., et al, (1989), " Sample Acquisition,
Analysis and Preservation Technology Development",
Presented at the 2nd International Conference on Solar
System Exploration, Pasadena, CA.
4) Plescia, J., Editor, Sample Acquisition, Analysis
and Preservation Instrument Technology Workshop,
Proceedings, Johnson Space Center, November 14-16,
1988.
5) D.R. Rushneck, A.V. Diaz, D.W. Howarth, J.
Rampacek, K.W. Olson, W.D. Dencker, P. Smith, L.
McDavid, A. Tomassian, M. Harris, K. Bulota, K. Biemann,
A.L. LaFleur, J.E. Biller, and T. Owen, (1978), Viking gas
chromatograph-mass spectrometer, Rev. Sci. Instr.
49(6), 817-834.
6) D.M. Golijanin and D.B. Wittry, (1988).
Microprobe x-ray fluorescence analysis - new
developments in an old technique, Microbeam Analysis-
1988, D.E. Newbury, Ed., San Francisco Press. 1988,
397-402.
7) W.M. Porter and H.T. Enmark, (1987), A system
overview of the Airborne Visible/Infrared Imaging
Spectrometer (AVIRIS), Proc. SPIE 834.
8) G. Vane, M. Chrisp, H. Enmark, S. Macenka, and J.
Solomon, (1984), Airborne Visible/Infrared Imaging
Spectrometer: An advanced tool for earth remote sensing,
Proc. 1984 IEEE Int'l Geosciences and Remote Sensing
Symposium, SP215, 751-757.
9) Field Screening Methods for Hazardous Waste Site
Investigation, First International Symposium Proceedings,
October 11-13, 1988.
10) Eberlein, S., Yates, G. (1990). "Neural Network
Based System for Autonomous Data Analysis and Control",
In "Progress in Neural Networks". Volume 1, pp 25-55,
Ablex Publishing Corp.
ACKNOWLEDGEMENTS
The Sample Acquisition, Analysis and Preservation
Program is part of the Exploration Technology Program
within the NASA Office of Aeronautics, Exploration and
Technology. This project is the joint effort of the Jet
Propulsion Laboratory, Johnson Space Center and Ames
Research Center, with JPL as the lead center.
The research described in this paper was carried out by
the Jet Propulsion Laboratory, California Institute of
Technology, under a contract with the National Aeronautics
and Space Administration.
DISCUSSION
BRIAN PIERCE: My question concerns the fiber optic bundle. You said
infrared. Do you mean the near infrared or closer to the mid-IR?
SUSAN EBERLEIN: Right now the fiber optic bundle that we've actually
worked with has only been in the visible range. We're looking this year in the near
infrared of 1.2 to 2.5 microns. In the long-term maybe more, but I gather that as
you go further into the infrared you get more trouble with your fibers.
BRIAN PIERCE: Yes, that's right. You also mentioned very intriguing hardware
neural networks. What do you mean by that?
SUSAN EBERLEIN: What I mean by hardware neural networks is micro
silicon chips where the connection weights for the neural network matrices are
actually in the resistances in the chips. JPL is fabricating some of these. They are
still in the early stages, and not as precise as we need them. Some other companies
are working on making them commercially as well. If in fact they turn out to be
a viable technology that can be space qualified, they offer very, very rapid
processing for specific problems.
195
-------
DEVELOPMENT OF A REMOTE TANK INSPECTION (RTI)
ROBOTIC SYSTEM
Chris Fromme
RedZone Robotics, Inc.
2425 Liberty Avenue
Pittsburgh, Pennsylvania 15222
(412) 765-3064
Barbara P. Knape
RedZone Robotics, Inc.
2425 Liberty Avenue
Pittsburgh, Pennsylvania 15222
(412) 765-3064
Bruce Thompson
RedZone Robotics, Inc.
2425 Liberty Avenue
Pittsburgh, Pennsylvania 15222
(412) 765-3064
ABSTRACT:
RedZone Robotics, Inc. is developing a Remote Tank
Inspection (RTI) robotic system for Westinghouse Idaho
Nuclear Company to perform remote visual inspection of
corrosion inside high level liquid waste storage tanks. The
RTI robotic system provides 5.8 m (19 ft) of linear extension
inside the tank to position a five degree-of-freedom robotic
arm with a reach of 1.8 m (6 ft) and a payload of 15.9 kg (35
Ib). The primary end effector is a high resolution video
inspection system. The RTI Intelligent Controller provides a
standardized, multi-tasking environment which supports
digital servo control, I/O, collision avoidance, sonar
mapping, and a graphics display. The RTI robotic system
features an innovative, standardized, and extensible design
with broad applicability to remote inspection,
decontamination, servicing, and decommissioning tasks
inside nuclear and chemical waste storage tanks.
I. APPLICATION
Westinghouse Idaho Nuclear Company (WINCO) will
use the RTI robotic system at the Idaho Chemical Processing
Plant (ICPP) to perform remote visual inspection of corrosion
inside high level liquid waste (HLLW) storage tanks. The
ICPP tank farm consists of several HLLW storage tanks that
are 15.2 m (50 ft) in diameter with a capacity of
1,135,000 liters (300,000 gallons). The domed roofs of the
tanks are buried 6.1 m (20 ft) below ground level. The bottom
of the tanks is located approximately 12.5 m (41 ft) below
ground level. The tanks will be drained of liquid prior to
inspection; however, a 30 cm (1 ft) layer of caustic sludge will
remain on the bottom of the tanks. The only access to the
tanks is through 25 cm (10 in) and 30 cm (12 in) diameter riser
pipes which extend from ground level down into the tank
roof dome. Accessible risers are typically located 0.8 m (2.5
ft), 3.6 m (12 ft), and 6 m (20 ft) away from the tank wall.
Currently, the RTI system will only be deployed through the
30 cm (12 in) tank risers. Cooling coil arrangements line the
tank walls and the tank floor.
The primary mission of the RTI robotic system is to
perform remote visual inspection of the interior walls of the
tanks for corrosion which may have been caused by the
combined effects of radiation, high temperature, and caustic
chemicals present. Due to the location and limited number of
accessible risers inside a tank, the intent is to inspect only a
pie-shaped portion of the tank to qualify the typical
condition of corrosion inside the tank. Thus the application
does not require a robotic arm with a long reach.
II. SYSTEM OVERVIEW
The RTI robotic system features a vertical deployment
unit, a robotic arm, and a remote control console and
computer. One of the major design constraints for the RTI
system is that the in-tank components are inserted through a
25.4 cm (10 in) diameter riser. This criterion led to the
design of compact electric actuators for the robotic arm,
which provide high torque and absolute position feedback.
The RTI robotic system is initially lowered by a facility
crane into the top of the riser. The vertical deployment unit
then provides another 5.8 meters (19 ft) of servo controlled
extension inside the tank. The RTI robotic system transmits
minimal loading to the riser pipe since it is self-supporting
via a support structure that rests on the ground above the
riser. Figure 1 provides an illustration of the RTI robotic
system installed inside a tank.
A five degree-of-freedom robotic arm provides 1.8
meters (6 ft) of articulated reach to accurately position a
high resolution video inspection camera to examine the tank
walls. The arm has sufficient dexterity to position the
camera normal to the curvature of the tank wall. The
controller provides coordinated end point motion so that the
operator can easily jog the arm inside the tank. A graphics
display is provided at the control console to give the
operator a sense of how the arm is positioned inside the
tank. The robotic arm also positions a pressurized spray
nozzle to wash down the tank walls prior to inspection. In
addition, the end of the arm has an interchange flange to
allow the robotic arm to carry a gripper instead of the
inspection camera. Another camera system is mounted at the
top of the robotic arm to provide the operator with an
overview of the arm operating inside the waste tank. The
RTI robotic system is capable of manual recovery to retrieve
the system in event of motor failure.
197
-------
Figure 1. RTI Robotic System Deployed Inside HLLW Waste Tank (callouts include the lifting cage, Z-axis actuator with brake and homing limit switch, tether management system with counter-wound cable drum to eliminate slip rings, support structure with adjustable foot pads, guide sleeve, tilt-mounted overview camera with remote focus, zoom and iris, variable intensity light, sonar sensor, gripper, and the arm joints: shoulder rotate ±180°, shoulder pitch ±90°, elbow pitch ±120°, wrist pitch ±120°, wrist roll ±180°; horizontal reach approximately 6 ft with gripper)
198
-------
The RTI system is radiation and environmentally
hardened to assure reliable performance in the tank
environment. The design criteria require that all in-tank
components be capable of withstanding a 20 psi washdown of
10% nitric acid and 10% oxalic acid, a radiation field of 100
rad/hr for a total accumulated dose of 10,000 rad, and
operating temperatures of 4 to 49 °C (40 to 120 °F) at 100
percent humidity. The RTI system uses sealed components
such as connectors, video equipment, sensors, and actuators to
preclude the intrusion of decontamination fluids. Bearing
and wear surfaces are stainless steel and non-stainless
components are anodized or coated with epoxy paint to
prevent damage from caustic decontamination fluids.
The RTI's control system uses RedZone's standardized
Intelligent Controller for Enhanced Telerobotics to provide a
high speed, multi-tasking environment on a VME bus.
Currently, the robot is controlled in a manual, joint jog mode
or a coordinated end point motion control mode. Control
capability is available to develop a pre-programmed,
automated or teach/playback mode of operation. The
control system incorporates sensing and software safeguards
to prevent an operator from inadvertently colliding with the
tank wall. Collision prevention is implemented in software
and backed up with four proximity sensors. A sonar range
finding sensor is used to establish the orientation of the RTI
robotic system inside the tank.
III. MECHANICAL DESIGN
The major components of the RTI mechanical system are
the support structure, vertical deployment unit, robotic arm,
accessories, and strongback. These assemblies are described
in the sections that follow.
A. Support Structure
The support structure rigidly supports the vertical
deployment unit at ground level. It consists of the alignment
guide sleeve and support stand assembly. The support stand
is a four legged structure that spans the riser pipe and
bunker. Its leg pads provide 1 foot of vertical adjustment and
allow the stand to be levelled. A facility crane is used to
position the support structure over the riser and to insert the
alignment guide sleeve into the riser pipe. The guide sleeve
follows the inclination of the riser pipe to guide the vertical
deployment unit during insertion. The objective is to avoid
loading the riser pipe if it is not absolutely vertical.
B. Vertical Deployment Unit
The vertical deployment unit provides 5.8 m (19 ft)
of servo-controlled vertical extension, at speeds of up to 7.6
cm/sec (3 in/sec), to position the robotic arm inside the waste
tank. The vertical deployment unit consists of a telescoping
tube assembly, cable management system, drive motor, and
junction box. The telescoping tube assembly contains a fixed
outer tube and an inner extending tube to minimize the
overall retracted height of the system. With the inner tube
extended, the wrist flange of the arm can reach the tank
floor. An adjustable hard stop is provided to safely reduce
the extent of vertical travel. The outer tube is a 20 cm (8 in)
square stainless steel tube and the inner tube is a 15 cm (6 in)
square tube. The vertical deployment tubes are designed for
deployment through 30cm (12 in) risers. However, the
robotic arm is designed to pass through a riser as small as
25 cm (10 in). The inner extending tube is supported and
guided along the upper tube by stainless steel linear bearings
and rails. The rails are mounted along the length of the
inner tube and the bearing blocks are attached to the inside
of the outer fixed tube.
An electric motor drives the lower tube, Z-axis, by a
dumb-waiter arrangement of a drive chain and pulley. The
motor package includes an integral gear reducer, brake and
resolver. The motor's output shaft is directly coupled to a
drive sprocket which drives a steel chain attached to the
upper section of the inner tube. The chain moves within the
gap between the upper and lower tubes. The drive sprocket
was designed so it can be driven from either side. In the
event of a motor failure, an identical backup motor package
can be quickly mounted in order to drive the telescoping tube
assembly. Due to the relatively large gear ratio and large
travel of the chain, absolute position feedback on the
vertical deployment was avoided. Instead, a resolver is
attached directly to the motor shaft and a limit switch is
used to home Z-axis position at start-up.
After insertion into the riser, the top flange of the
vertical deployment unit is bolted to the guide sleeve. On
top of the vertical deployment are located the cable
management drum and a junction box. Cabling is payed out
from a spring loaded cable drum which has a large diameter
so that only two wraps are required to pay out the 5.8 m (19
ft) of cable length. This design obviates the need for
electrical slip rings. The vertical deployment junction box is
connected to the control console with 30.5 m (100 ft) of cable.
The junction box contains some pneumatic and valve
equipment and terminal strips but no circuitry. Its main
purpose is to serve as a termination point for cables routed
down the vertical deployment unit to the robot arm.
At the base of the vertical deployment unit is a
mounting flange for the robotic arm. Cables are routed
internal to the inner tube and exit the tube at its bottom. At
the bottom of the outer fixed tube, a spray ring is mounted to
spray decontamination fluid on the inner tube as it retracts
upward. This minimizes the spread of contamination inside
the telescoping tube assembly.
C. Robotic Arm
The RTI robotic arm mounts to the bottom of the
lower extending tube. The arm is a five-degree-of-freedom
revolute arm consisting of shoulder rotate, shoulder pitch,
elbow pitch, wrist roll and wrist pitch axes. The primary
function of the robotic arm is to position the WINCO
inspection camera system mounted to the wrist flange. The
arm has sufficient degrees of freedom to position the
inspection camera normal to the curvature of the tank wall.
Coordinated end point motion control allows the operator to
move the inspection camera in/out and along the curvature of
the tank wall. An overview camera is packaged between
the shoulder rotate and pitch joints to rotate with the arm,
allowing a continuous view of the end of arm. A spray nozzle
is attached to the robot wrist so that the robot can wash
down the tank wall prior to corrosion inspection.
The robotic arm weighs approximately 100 Kg (220 Ib)
and has an overall length of 2.5 m (8 ft). The arm has a 1.6 m
(64 in) length to the wrist mounting flange, providing the
199
-------
robot with a 1.8 m (6 ft) reach when positioning the
inspection camera. The last three joints of the arm, elbow
pitch, wrist roll and wrist pitch, are clustered in close
proximity to provide dexterous manipulation. All axes are
electrically driven, feature absolute position feedback, and
are actively servoed to hold position. Upon loss of power,
the controller automatically shorts the motor leads to
provide dynamic braking. Gravity will backdrive the arm
into a nearly vertical position so the RTI system can be
removed from the riser in a manual recovery mode. Table 1
provides performance characteristics of the arm.
Table 1. Performance Specifications

Description                     Travel     Max Velocity
Shoulder Rotate                 ±180°      1.0 rpm
Shoulder Pitch                  ±90°       1.0 rpm
Elbow Pitch                     ±120°      2.4 rpm
Wrist Pitch                     ±120°      5.5 rpm
Wrist Rotate                    ±180°      5.5 rpm
Reach of Arm                    6 feet
Coordinated End Point Motion    2.5 ips

Key: ips = inches/sec, rpm = rev/minute, ° = degrees
The five joints of the robot arm are driven by three
different sized actuator packages as specified in Table 2.
The three actuators are similar in concept and design but
provide differing torque and speed characteristics. The
capabilities of these actuators were optimized to meet the
goal of providing a 15.9 Kg (35 Ib) payload for the robot.
The actuators are designed into a compact, pancake-style
package. In the case of the shoulder pitch it was necessary
to keep the actuator small enough to fit sideways, in profile,
through the 25 cm (10 in) riser. Frameless DC high torque
brush motors were used as they offer the smallest size,
highest torque and lowest speeds available. Each motor is
coupled to a pancake type Harmonic Drive gear reducer,
providing a single step reduction of up to 200:1. These drive
components are integrated with slim line ball bearings and a
resolver to produce compact servo-actuators capable of large
torques. The integral resolver is directly coupled to the joint
output allowing precise, absolute, servo control of the arm.
Table 2. Mechanical Characteristics of Actuator Packages

Robot Joints              Actuator Size   Dimensions                Max Torque (in-lbs)   Max Speed (rpm)
Shoulder Rotate & Pitch   Heavy           9.0" dia x 4.5", 35 lbs   8400                  1.1
Elbow Pitch               Medium          6.5" dia x 3.5", 18 lbs   2500                  2.4
Wrist Roll & Pitch        Light           5.2" dia x 3.0", 8 lbs    800                   5.5
The actuators and links are constructed of aluminum,
which is anodized on all exterior surfaces. The actuators are
environmentally sealed to protect them from the
decontamination solution. Since the actuators are not
equipped with brakes, they experience a 100% duty cycle
when the arm is loaded, causing the motors to heat up
significantly. Analysis of the system indicates that the
actuators' capabilities are thermally limited. That is, the
maximum payload of the arm is dictated by the motors'
maximum winding temperature of 155 °C (311 °F) and not by
the maximum mechanical torque of the actuators. To
increase the actuator and arm payload capabilities an air
line is run into the actuators to provide cooling for the
motors. Cabling to each of the joints and tooling is routed
along the I-beam shaped linkages of the arm. Submersible,
molded connectors are provided on each motor.
D. Accessories
Accessories for the RTI robotic arm comprise the
quick change interface, decon spray nozzle, gripper,
overview camera system, sonar sensor, and proximity sensors.
A description of each accessory is provided below:
• A manual quick change interface is provided at the
wrist mounting flange to change end effectors
(inspection video system and gripper). The interface
consists of an electrical connector, pneumatic connectors,
and a common mounting plate.
• A decontamination spray nozzle is mounted directly
above the wrist flange to wash down the tank walls. It
has an adjustable flowrate of up to 15 liters (4 gallons)
per minute.
• A pneumatic parallel jaw gripper is provided with a 10
cm (4 in) stroke and adjustable gripping force of up to 482
kPa (70 psi).
• The overview camera system consists of an
environmentally sealed color camera with a zoom and
focus lens. The camera is mounted inside a cut-out
section of the robot shoulder linkage. A rotary actuator
provides the ability to pitch the camera along the
robot arm while zooming in for close views. Remote
control of the camera, rotary actuator, and light
intensity is provided at the control console.
• A miniature sonar detector is used to determine the
relative orientation of the robot inside the tank. The
sonar detector is mounted on the shoulder of the robot
arm to calibrate shoulder rotation to distance of the
tank wall. Since the risers are not located on the center
line of the tank, radial extensions from the riser to the
tank wall vary in length. An applications software
package automatically controls the sonar sensing and
rotation of the shoulder axis. The software processes
the data to identify the location of the wall closest to
the riser. Once distance to the tank wall is known as a
function of shoulder rotation, distance of the robot's end
of arm to the tank wall can be calculated based on
forward kinematics. Distance to the wall is displayed
on the graphics monitor and also used for software
collision avoidance. The accuracy of this information is
dependent on the combined accuracy of the robot, sonar
detector, data processing, arm dimensions, and the
assumed location of the riser.
200
-------
• For impending collision detection, four photoelectric
proximity sensors are mounted on the leading edge of
the robot arm linkages to detect close proximity to the
tank wall.
E. Strongback
The Strongback fixture rigidly supports the RTI
robotic system during shipment. It is designed to attach to
the bed of a semi-trailer truck. The Strongback consists of a
tubular framework to cradle and support the full 10.7 m (35
ft) horizontal length of the RTI system. For additional
protection, the robotic arm is housed inside a reinforced cage
before it is placed onto the Strongback. A facility crane is
used to pivot the RTI robotic system vertical from the
Strongback during deployment at a riser.
IV. CONTROL CONSOLE
The RTI control console is the remote station from
which an operator can control and monitor the robotic arm to
perform visual inspection of the tank. The control console
will be located on a desk top inside a trailer located up to
30.5 m (100 ft) from the RTI mechanical system. The console
consists of two side-by-side 48 cm (19 in) racks which
maximize the useful working and viewing area to the
operator. The racks are encased in structural foam and
housed together in one self-contained shipping container. A
removable front cover protects the monitors and control
panels during shipment. All cables enter the control console
through external chassis mounted connectors.
The control console is composed of industrial grade
components, rated for operation in indoor, industrial
environments. The inspection and overview camera each
have their own display monitor and camera control panel.
VCR's are provided to record the video output signal of the
cameras.
As shown in Figure 2, the control console displays the
following equipment to the operator:
• Operator Control Panel
• 8-inch Color Monitor to display Overview Camera
• 9-inch Black & White Monitor to display B&W
Inspection Camera
• Two Super VHS Recorders
• Overview Camera Control Panel
(camera, zoom, pan, & lights)
• Inspection Camera Control Panel
(camera, zoom, pan & tilt, & lights)
• Control Panel (B&W Camera focus & iris)
• 20-inch Color (Video & Graphics) Monitor to display
inspection cameras or computer graphics
The control console also contains the following components
within its cabinet:
• Intelligent Controller Rack
• Servo Amplifier Rack
• Power Box
• Fan Panels
Figure 2. Operator Control Panel (Front View)
A. Operator Control Panel
The operator control panel provides the operator
with a complete interface to drive the RTI system. All
devices and accessories are operated from the control panel
with the exception of the cameras which have independent
control panels. The operator control panel is wired directly
to the digital I/O boards of the controller. The controller
acknowledges operator commands by illuminating activated
switches. The controller performs safety checks of operator
commands before executing them.
The operator control panel provides switches for speed
selection and jogging of each individual axis. To prevent
accidental activation, each "Axis Jog" toggle switch must bo
held down continuously by the operator to jog the axis. The
axis will move at the selected speed (slow, medium or fast).
Once released, the toggle switch returns to its neutral "off"
position. In the event of a controller failure, the robot can be
driven in an open-loop mode by hooking up a battery power
supply directly to the motor amplifiers. An emergency stop
pushbutton is provided on the operator panel.
201
-------
The operator must depress a pushbutton to select
coordinated end point motion. A 4-position joystick is
provided to jog the end point of the arm towards or away
from the tank wall and clockwise or counterclockwise along
the tank wall. Consistent orientation of the end point is
maintained. In coordinated motion control, the Z-axis, wrist
roll and wrist pitch axis jog keys are also active. Depending
on the orientation of the robot arm, wrist roll and pitch
control the relative pan & tilt of the inspection camera
mounted at the end point.
Controls are also provided to open and close the gripper
and to control the decon spray ring and spray nozzle. The
operator control panel provides an up/down arrow and enter
key so the operator can make selections of menu commands
displayed on the graphics monitor.
B. Intelligent Controller
The design of the RTI controller is based on
RedZone's Intelligent Controller for Enhanced Telerobotics,
a proprietary, standardized platform for computation and
communications for the control of a wide variety of multi-
axis robotic systems. The Intelligent Controller is housed
inside a 12-slot VME bus chassis inside the control console.
The Intelligent Controller performs the following functions,
in a multi-tasking environment, for the RTI robotic system:
• Translation and execution of all operator commands
originating from the operator control panel.
• Digital servo control of all movement including
individual axis joint control and coordinated end point
motion of the robot arm.
• Execution of automatic routines; system self check,
power-up, sonar map control, and shut-down sequences.
• Safety monitoring of proximity sensors, joint overtravel,
joint and velocity tracking errors and overtorque
conditions.
• Continuous monitoring of potential collision states.
• Logging significant events in a data file.
• Displaying on the graphics monitor: plan view and side
view of robot arm inside tank, distance and orientation
of end point to wall, absolute position of each axis, error
message & diagnostics, and menu prompting of routines.
The computational devices of the RTI Intelligent Controller
consist of the following boards:
• 68020 CPU Boards (2) with floating point processors.
• RGB Video Board to interface the controller to the
graphics display monitor.
• Resolver to Digital Boards (2) to transform resolver
output to the digital signal used to compute current
position and velocity of each axis.
• Digital to Analog Board to convert the digital control
signal generated by the CPU to an analog control signal
to drive the motor amplifiers.
• Timer Interface Board to measure time-of-flight of
sonar echoes generated by the sonar ranging module.
• SCSI Interface Board to interface to the removable
cartridge disk drive.
• 44 MByte Removable Cartridge Disk Drive to provide
portability with hard disk performance. All software
resides on the disk drive.
• Digital Input Boards (2) to provide 64 opto-isolated
channels that are interrupt driven to the controller.
Digital I/O serves as primary interface between CPU
and operator control panel.
• Digital Output Board to provide 32 dry reed relay
outputs. Allows CPU to control devices and indicator
lights on each switch.
Control of robot motion is achieved by a control law
implemented in software on the main CPU. Motion control
boards are not required as servo control is flexibly
implemented in software. The CPU reads resolver inputs,
computes forward and inverse kinematics, and generates a
digital control signal. This digital control signal is then
converted into an analog input to the motor amplifiers. The
CPU performs all of the control calculations for robot motion,
interprets user commands from the operator control panel,
and maintains the graphics display. Two CPU boards allow
the computational load to be distributed by running the
motion planner on one board, and the remainder of the
software modules on the other. This results in stiffer motion
control and faster updating of the graphics display.
V. SOFTWARE
Under separate contract to the Department of Energy
(DOE), RedZone is developing an Intelligent Controller for
Enhanced Telerobotics to provide a standardized, multi-
tasking, VxWorks™ environment for software
development. The RTI system uses the hardware and
software architecture defined by the DOE Intelligent
Controller architecture. All software is written in the C-
language and resides on the disk drive. Figure 3 is a block
diagram of the major software modules of the system. The
software is organized into five main modules: the task
executive, the motion planner, the motion controller, the
data processor, and the graphics module. Communication
between (and in some cases within) these modules is
performed using RedZone's proprietary Robotic
Communications Protocol (RCP) which is the heart of the
Intelligent Controller. RCP provides both intra-cpu and
inter-cpu communications as well as global variables,
function calls and semaphores between modules. Below,
each module is described in detail.
A. System Control
The system control module is the "front-end" of the
RTI controller. It contains four sub-modules: digital
input/output drivers, task executive, health monitor, and
data logger. The digital input and output drivers provide a
standardized software interface to the digital I/O boards.
The task executive's main function is to monitor the state of
the operator panel and of the robot. It directs action based
on these inputs. The data logger records events, errors, and
change of state into a file. The log is maintained on the
hard disk to help understand and troubleshoot failure or
accident scenarios.
202
-------
B. Motion Planner
The motion planner module provides a collection of
high level path generating modules, collision detection
modules, and kinematics utilities that operate with a
nominal cycle time of 10 milliseconds. The path generating
modules include joint space profile generation, cartesian
space profile generation, and control for sonar mapping.
Cartesian space points are transformed via inverse
kinematics into joint space goals to generate a smooth
trajectory for each joint in motion. The sonar map utility
automatically controls the arm while the sonar mapping
sequence is in progress. The collision avoidance module
monitors the proximity of the arm to the tank wall. The
kinematic module contains the mathematical model of the
arm, including link lengths and axes of rotations. Forward
kinematics are used to compute the end point position of the
arm based on axis joint positions for collision avoidance
checks. Inverse kinematics are used to compute the axis joint
positions necessary to achieve a desired end point position
for coordinated motion control.
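As a rough illustration of these forward and inverse kinematic utilities, the sketch below treats only the shoulder-pitch/elbow-pitch pair that sets the radial reach of the arm in a vertical plane; the link lengths and function names are assumptions, since the actual RTI dimensions and joint conventions are not reproduced here.

import math

L1 = 0.9   # shoulder-to-elbow link length in meters (assumed)
L2 = 0.7   # elbow-to-wrist link length in meters (assumed)

def forward(shoulder_pitch, elbow_pitch):
    """Planar forward kinematics: joint angles (rad) -> wrist (r, z) in the
    vertical plane through the shoulder-rotate axis."""
    r = L1 * math.cos(shoulder_pitch) + L2 * math.cos(shoulder_pitch + elbow_pitch)
    z = L1 * math.sin(shoulder_pitch) + L2 * math.sin(shoulder_pitch + elbow_pitch)
    return r, z

def inverse(r, z, elbow_up=True):
    """Planar inverse kinematics: wrist (r, z) -> (shoulder_pitch, elbow_pitch).
    Raises ValueError when the requested point lies outside the workspace."""
    d2 = r * r + z * z
    cos_elbow = (d2 - L1 * L1 - L2 * L2) / (2.0 * L1 * L2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target outside workspace")
    elbow = math.acos(cos_elbow) * (1.0 if elbow_up else -1.0)
    shoulder = math.atan2(z, r) - math.atan2(L2 * math.sin(elbow),
                                             L1 + L2 * math.cos(elbow))
    return shoulder, elbow

# Round trip: forward kinematics of the inverse solution reproduces the goal.
s, e = inverse(1.2, 0.4)
print([round(v, 3) for v in forward(s, e)])   # ~[1.2, 0.4]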
Figure 3. Software Organization (block diagram of the major modules: operator console and graphics display; system control module with task executive, health monitor, and data logger; motion planner with collision avoidance, Cartesian and joint space profile generation, forward and inverse kinematics, and sonar map control, running at 100 Hz; motion control module with soft limits, interpolation, and the servo law, running at 1 kHz; sonar data processor; and the resolver and servo drivers to the arm)
1. Jog Control. Robot motion is initiated whenever
the operator holds down an axis jog toggle switch or the
coordinated motion joystick. The controller responds to the
switch transition state. An acceleration ramp is
immediately generated to ramp up to the preselected speed
range. The motion control module then generates new,
incrementally small, position goals for the joint every 10
milliseconds.
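A minimal sketch of this jog behavior is given below. The 10 ms planning period follows the description above, while the speed values, ramp time, and names are assumed for illustration.

SPEEDS = {"slow": 0.5, "medium": 1.0, "fast": 2.0}   # deg/s per selection (assumed)
DT = 0.010                                           # 10 ms planner cycle
RAMP_TIME = 0.5                                      # acceleration ramp duration, s (assumed)

def jog_goals(start_angle, speed_select, hold_time):
    """Generate incremental joint position goals while the jog switch is held.
    The commanded velocity ramps up to the selected speed, and a new,
    incrementally larger goal is produced every 10 ms."""
    v_max = SPEEDS[speed_select]
    angle, t, goals = start_angle, 0.0, []
    while t < hold_time:
        ramp = min(t / RAMP_TIME, 1.0)      # linear acceleration ramp
        angle += v_max * ramp * DT          # incrementally small position step
        goals.append(angle)
        t += DT
    return goals

# Operator holds the "Axis Jog" switch for 0.2 s at medium speed.
goals = jog_goals(10.0, "medium", 0.2)
print(len(goals), round(goals[-1] - 10.0, 4))   # 20 goals, small net motion during the ramp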
2. Coordinated End Point Motion. The operator's
primary objective is to position the robot's inspection video
camera relative to the tank surface. It is often difficult and
tedious to position the end-of-arm while jogging individual
axes. To facilitate easier positioning of the camera,
coordinated end point motion is provided in two axes while
maintaining a consistent orientation of the tool faceplate:
horizontal extension of the arm to the tank wall and
following the curvature of the wall at a constant distance.
Coordinated motion for the RTI robotic system is constrained
in the cylindrical world frame of the tank. Control is
simplified by requiring the arm to be in a preferred
orientation. Should the operator choose to deselect
coordinated motion and jog in joint mode, a resume function is
available to allow the operator to return to his former
position and resume coordinated motion.
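The cylindrical constraint might be sketched as follows. The tank radius comes from the application description, while the riser offset, speed scaling, and function names are assumptions; a real controller would pass the resulting goal through the inverse kinematics rather than stopping at cylindrical coordinates.

import math

TANK_RADIUS = 7.6        # m, 50 ft diameter tank
RISER_OFFSET = 3.6       # m, assumed offset of the riser from the tank centerline
END_POINT_SPEED = 0.063  # m/s, roughly the 2.5 ips coordinated end point speed

def end_point_step(radial, azimuth, joy_in_out, joy_cw_ccw, dt=0.01):
    """Advance the camera end point one planner cycle in the tank's
    cylindrical frame; orientation toward the wall is held fixed, so only
    the radial distance and the azimuth around the tank change."""
    new_radial = min(radial + joy_in_out * END_POINT_SPEED * dt, TANK_RADIUS)
    # Following the wall "at constant distance" keeps the radius fixed, so the
    # circumferential joystick only advances the azimuth.
    new_azimuth = azimuth + joy_cw_ccw * (END_POINT_SPEED * dt) / max(new_radial, 0.1)
    return new_radial, new_azimuth

def to_cartesian(radial, azimuth):
    """Convert the cylindrical goal to x, y in the tank frame for the inverse kinematics."""
    return radial * math.cos(azimuth), radial * math.sin(azimuth)

r, a = RISER_OFFSET + 1.0, 0.0
for _ in range(100):                 # hold the joystick "toward the wall" for 1 s
    r, a = end_point_step(r, a, +1.0, 0.0)
print(round(r, 3), [round(v, 3) for v in to_cartesian(r, a)])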
3. Collision Avoidance. The collision avoidance
software consists of a real-time background program that
continuously checks the position of the arm to avoid a
collision with the tank. The computer checks for penetration
by the arm into a safety zone that extends from the tank
walls and floor. If the robot enters the safety zone, the
computer executes an interrupt of the current motion and
warns the operator of the condition. Once the robot arm is in
the software collision state, the software only allows the
operator to jog arm motion away from the tank surface.
Proximity sensors are also provided to detect an impending
collision and initiate an emergency stop. A manual override
button is provided so the operator can override collision
avoidance so that the RTI can touch the tank wall or floor.
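A simplified version of the safety-zone check might look like the sketch below, assuming the cylindrical tank geometry given earlier and hypothetical zone thicknesses; the actual RTI software presumably checks every link of the arm rather than a single end point.

TANK_RADIUS = 7.6     # m (50 ft diameter tank)
FLOOR_Z = -12.5       # m, tank bottom relative to ground level
WALL_ZONE = 0.3       # m, assumed zone thickness inward from the wall
FLOOR_ZONE = 0.3      # m, assumed zone thickness above the floor

def in_safety_zone(radial, z):
    """True when the end point has penetrated the zone that extends inward
    from the tank wall or upward from the tank floor."""
    return radial > TANK_RADIUS - WALL_ZONE or z < FLOOR_Z + FLOOR_ZONE

def filter_command(radial, z, d_radial, d_z):
    """Background check run every planner cycle: once inside the safety zone,
    steps toward the wall or the floor are clamped to zero and a warning is
    raised, so only motion away from the tank surface gets through."""
    if not in_safety_zone(radial, z):
        return d_radial, d_z, None
    return min(d_radial, 0.0), max(d_z, 0.0), "WARNING: arm inside tank safety zone"

print(filter_command(7.5, -5.0, +0.01, +0.01))   # in the wall zone: radial step clamped to 0
print(filter_command(5.0, -5.0, +0.01, +0.01))   # clear of both zones: command passes through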
C. Motion Control
The motion control module reads the joint absolute
position from the resolver-to-digital driver every
millisecond. The servo law, an enhanced PID control, uses
commanded and actual position read from the resolvers to
calculate a command output to send the power amplifiers.
Robot motion is controlled in a position controlled mode, not
a rate controlled mode, as commonly used on robotic
manipulators. Position control provides stiffer motion
control with more damping. It also allows an easy upgrade
to programmed operation at a later date. Execution of the
motion control task is triggered by a clock interrupt to ensure
precise timing. The motion control module also enforces soft
stop limits and performs linear interpolation on the
commanded positions.
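The 1 kHz position loop can be sketched as a conventional PID law with soft limits, as below; the gains, limits, and the toy plant used to exercise it are placeholders rather than the tuned values or dynamics of the RTI joints.

class PositionServo:
    """One joint's 1 kHz position loop: commanded vs. measured position -> amplifier command."""

    def __init__(self, kp, ki, kd, soft_min, soft_max, dt=0.001):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.soft_min, self.soft_max = soft_min, soft_max   # soft stop limits (rad)
        self.dt = dt
        self.integral = 0.0
        self.prev_measured = 0.0

    def update(self, commanded, measured):
        # Enforce soft stop limits on the commanded position before servoing.
        commanded = max(self.soft_min, min(self.soft_max, commanded))
        error = commanded - measured
        self.integral += error * self.dt
        # Derivative acts on the measured position so a step in the commanded
        # position does not produce a spike (one common PID refinement).
        derivative = -(measured - self.prev_measured) / self.dt
        self.prev_measured = measured
        # The result stands in for the analog command sent to the power amplifier.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical gains and a toy unit-inertia joint, stepped at 1 kHz for 2 s.
servo = PositionServo(kp=50.0, ki=5.0, kd=10.0, soft_min=-1.57, soft_max=1.57)
position, velocity = 0.0, 0.0
for _ in range(2000):
    torque = servo.update(commanded=0.5, measured=position)
    velocity += torque * servo.dt            # frictionless unit-inertia plant
    position += velocity * servo.dt
print(round(position, 3))                    # settles near the 0.5 rad goal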
D. Sonar Data Processor
The sonar data processor module reads and
processes the sonar data to map distance to the tank wall as
a function of shoulder rotation. Radial extensions from the
RTI to the tank wall vary in length, since the RTI system is
inserted through a riser that is offset from the tank center.
The sonar sensor produces a digital pulse each time it is
203
-------
fired. The length of the pulse is proportional to the time
from transmission of the sonar signal to the return of the first
echo. The sonar driver measures this time-of-flight which
is converted into distance and recorded in an array with the
corresponding shoulder rotation angle. The sonar mapping
module performs pre-processing of the signal to remove
erroneous data and compensate for the wide beam width of
the sonar. Signal processing of the sonar signal is performed
to derive a circular model of the tank from the raw data.
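A condensed version of this processing chain might look like the following sketch. The speed of sound, the simple range gate used for outlier rejection, and the least-squares circle fit are generic choices standing in for the specific algorithms of the RTI software, and the synthetic sweep assumes a 7.6 m radius tank with the sonar 3.6 m off center.

import math
import numpy as np

SPEED_OF_SOUND = 343.0    # m/s in air (assumed; varies with temperature)

def pulse_to_range(pulse_width_s):
    """Convert the echo pulse width (round-trip time of flight) to range in meters."""
    return 0.5 * SPEED_OF_SOUND * pulse_width_s

def reject_outliers(angles, ranges, r_min=0.5, r_max=15.0):
    """Drop returns outside the physically plausible range for the tank;
    dropouts and multipath echoes show up as very short or very long returns."""
    angles, ranges = np.asarray(angles), np.asarray(ranges)
    keep = (ranges > r_min) & (ranges < r_max)
    return angles[keep], ranges[keep]

def fit_circle(angles, ranges):
    """Least-squares (Kasa) circle fit to the wall returns: gives the tank
    center offset (x0, y0) in the sonar frame and the tank radius."""
    x, y = ranges * np.cos(angles), ranges * np.sin(angles)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x * x + y * y
    (x0, y0, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return x0, y0, math.sqrt(c + x0 * x0 + y0 * y0)

# Synthetic sweep: sonar 3.6 m off the center of a 7.6 m radius tank.
true_r, offset = 7.6, 3.6
angles = np.linspace(-math.pi, math.pi, 72, endpoint=False)
ranges = np.array([-offset * math.cos(a) +
                   math.sqrt(true_r ** 2 - (offset * math.sin(a)) ** 2)
                   for a in angles])
angles, ranges = reject_outliers(angles, ranges)
print([round(v, 2) for v in fit_circle(angles, ranges)])   # ~[-3.6, 0.0, 7.6]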
E. Graphics Module
The graphics display on the large color monitor
provides the operator with a physical sense of the robot
arm's position inside the waste tank. Objects are portrayed
as two-dimensional diagrammatic models. A plan view
shows the orientation of the arm inside the tank and a side
elevation view shows the robot arm configuration to the
tank wall. The monitor displays robot joint angles, as well
as the distance and orientation of the end of the arm to the
tank. These views and information will greatly enhance the
operator's efficiency in operating the robot within the tank.
The graphics software module continuously reads the current
position of all axes and uses the kinematic model to compute
and display the configuration of the arm. The graphics
display module also provides menu commands, status
information, and messages to the operator.
VI. CONCLUSION
RedZone Robotics will deliver the RTI robotic system to
WINCO in April 1991. The RTI robotic system will then
become one of the first robotic systems deployed to remotely
inspect hazardous waste tanks. The initial mission of the
RTI will be remote visual inspection of corrosion inside the
ICPP waste tanks. WINCO is currently planning additional
development of the RTI robotic system including advanced
tooling to sample the sludge and inspect the bottom of the
tank, supervisory control to provide enhanced force control of
the tooling, and a programmed mode of operation.
The RTI robotic system provides a 15.9 Kg (35 Ib)
payload, 1.8 m (6 ft) reach, five degree of freedom robotic
arm that can be inserted through a 25 cm (10 in) diameter
opening. The vertical deployment unit provides 5.8 m (19 ft)
of servo controlled extension. The robotic arm can
manipulate a variety of tools: inspection viewing systems,
gripper, spray nozzle, or other specialized end of arm
tooling. The arm can be flexibly mounted on a variety of
platforms or even a mobile base. Its compact, high torque,
electric, servo-controlled actuators can be re-configured with
different linkages to customize a robotic arm of any
configuration and degrees of freedom. The RTI robotic system
is radiation and environmentally hardened to assure
reliable operation in hazardous environments. The
Intelligent Controller provides a multi-tasking environment
to support digital servo control, I/O, collision avoidance,
sonar mapping, and a graphics display. The controller,
based on the standardized DOE architecture, is extensible to
servo control almost any multiple axis application. In
conclusion, the RTI robotic system and its components offer an
innovative, standardized, and extensible design with broad
applicability to remote inspection, decontamination,
servicing, and decommissioning tasks.
REFERENCES
Griebenow, Bret & Martinson, Lori, "Robotic System for
Remote Inspection of Underground Storage Tanks,"
Proceedings of the 1990 American Nuclear Society Winter
Meeting. Washington D.C., Nov. 1990.
204
-------
AUTOMATED SUBSURFACE MAPPING
Jim Osborn
Field Robotics Center
Carnegie Mellon University
Pittsburgh, PA 15213
412-268-6553
Abstract
Non-invasive imaging of the underground is an essential
component of hazardous waste site investigations, yet,
despite advances in sensor technology, high quality maps of
the subsurface are difficult to obtain. Subsurface mapping
depends on the spatial correlation of individual sensor
measurements taken at multiple locations. Current manual
data collection techniques, however, are suboptimal for
precisely positioning subsurface imaging sensors and, in
general, are quite inefficient. Use of the sensors also requires
considerable experience on the operator's part to acquire and
interpret sensor data. In short, locating and identifying
buried objects and geological features is a process that relies
heavily on human adeptness and expertise. Thus by applying
automation and computer vision technologies to the
problem, subsurface mapping can be improved.
In our Site Investigation Robot (SIR) project, prototypical
robots are used to position ground penetrating radar (GPR)
equipment with the accuracy needed to generate three
dimensional subsurface maps. Estimating its site location by
a combination of dead reckoning and inertial measurements,
a rough terrain mobile robot deploys a gantry mechanism to
scan the ground with the GPR antenna. Radar data are
digitized and stored in three dimensional arrays for spatial
correlation and image enhancement on a color graphics
workstation. We have also applied basic image processing
and visualization techniques to assist in the interpretation of
these subsurface maps. Control of the robots and access to
the software are through user-friendly interfaces, which
facilitate the subsurface mapping process.
Introduction
For years, robotics and automation have increased
productivity in manufacturing industries through
standardization and repeatability. Core robotic technologies
have now progressed to the point that robots are moving into
the field and offering similar benefits performing tasks in
unstructured settings. One class of these field robots is
emerging to meet one of the most important challenges now
facing the world: the clean up of hazardous waste sites.
One of the cost drivers in remediation of a site is the lack of
information about the site itself. A detailed and costly
investigation is required to develop a knowledge base of site
geology, hydrology, chemistry, the extent of contamination,
etc., that can be used to select appropriate remediation
technologies and effectively plan the cleanup effort. Much of
this expense can be attributed to inefficiencies in manual
data acquisition techniques, lack of standard data collection
procedures, and the cost of insuring and protecting the
personnel who conduct the investigation. As an alternative,
automation offers the prospect to collect large quantities of
data in a form that supports more complete assessments and
at a significantly lower cost.
205
-------
Most investigations include efforts to locate buried objects
that are potential sources of contamination (such as drums),
identify and measure the extent of contaminant plumes, and
determine the morphology of geological formations that
affect pollutant migration. Commonly used methods to
generate such information include resistivity measurements,
acoustic techniques and ground penetrating radar. While
each has unique advantages, no single method alone
provides complete information, and all have limited utility
owing to the inaccuracies and inefficiencies of manual
sensor deployment. Ideally, the data resulting from the
application of these non-invasive techniques can be used to
construct an accurate graphical representation of the
geometry of buried structures - a map of the subsurface.
In this paper we present the Site Investigation Robot, a
system for automated subsurface mapping with ground
penetrating radar (GPR), as one aspect of a program to
automate hazardous waste site characterization. The Site
Investigation Robot is a mobile robot that collects and
spatially registers GPR data and recovers them to its base
station where they are correlated, enhanced and displayed so
that inferences about the shape and location of buried
structures can be made. This program's broader goal is to
develop robotic systems to make the data acquisition process
faster and more complete and to apply advanced data
processing techniques that will make these data more
accessible and easier to interpret.
System Overview
The Site Investigation Robot consists of a robot and
controller, data acquisition system, and a body of subsurface
mapping software to manage, process and visualize data
collected during investigation missions. The present
configurations of these subsystems are described below;
future enhancements planned for each are described in the
section that follows.
Robot
The Site Investigation Robot prototype is pictured in Figure
1. We have employed an existing mobile robot, the
Terregator (short for terrestrial navigator), a driverless,
outdoor vehicle built for autonomous driving and
exploration research, for the data acquisition aspect of this
project. Terregator is a rugged, six-wheel, skid-steer
locomotor scaled and powered to negotiate moderately
rough terrain and steep slopes.
On both the right and left sides of the base locomotor, three
wheels are linked together with chains and driven by a low-
speed, DC motor through a harmonic gear unit. This
drivetrain, in conjunction with off-road flotation tires, gives
Terregator excellent tractive characteristics to overcome
obstacles and grades. For position feedback, each motor is
coupled to an incremental rotary encoder. Theoretically, this
arrangement gives the Terregator open loop positional
accuracy in the sub-millimeter range; in practice, tire
deflections, vehicle/ground surface interaction and other
non-linearities limit Terregator's dead-reckoning ability to
distances on the order of centimeters.
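A minimal dead-reckoning update of the kind implied here, using assumed wheel, track, and encoder parameters rather than the actual Terregator values, would accumulate pose from incremental encoder counts as follows.

import math

WHEEL_RADIUS = 0.30        # m (assumed)
TRACK_WIDTH = 1.00         # m, distance between the left and right wheel sets (assumed)
COUNTS_PER_REV = 4096      # encoder counts per wheel revolution after gearing (assumed)

def dead_reckon(pose, d_counts_left, d_counts_right):
    """Update (x, y, heading) from incremental encoder counts on a skid-steer base.
    Straight-line segments are where this estimate is trusted; for point turns
    the SIR relies on a gyroscope instead, because track slip makes the
    encoder-derived heading unreliable."""
    x, y, heading = pose
    d_left = 2 * math.pi * WHEEL_RADIUS * d_counts_left / COUNTS_PER_REV
    d_right = 2 * math.pi * WHEEL_RADIUS * d_counts_right / COUNTS_PER_REV
    d_center = 0.5 * (d_left + d_right)
    d_heading = (d_right - d_left) / TRACK_WIDTH
    x += d_center * math.cos(heading + 0.5 * d_heading)
    y += d_center * math.sin(heading + 0.5 * d_heading)
    return x, y, heading + d_heading

pose = (0.0, 0.0, 0.0)
for _ in range(100):                     # 100 equal encoder increments straight ahead
    pose = dead_reckon(pose, 50, 50)
print([round(v, 3) for v in pose])       # ~[2.301, 0.0, 0.0]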
To position subsurface imaging sensors, a single-axis gantry
mechanism is attached to Terregator's frame forward of the
generator such that the direction of motion is perpendicular
to the mobile robot's path. The mechanism consists of a
buggy that is pulled along parallel fiberglass T-beams by a
chain belt driven by a DC motor. The GPR antennas are
suspended from the buggy with threaded rods for height
adjustment. A rotary encoder directly coupled to the motor
allows the antenna to be positioned accurately to one
centimeter over the entire two-meter length of the gantry.
Limit switches at each end of the gantry ensure safe
operation and provide a convenient way to identify the
antenna's limits of travel.
A 3kW, 120 VAC gasoline generator and ventilated, shock-
isolated electronics enclosure are mounted atop Terregator's
base to provide power for the locomotion, computation,
sensing and communications. Raw generator output is tied in
to the base locomotor's 90 VDC power supply; the generator
output is also conditioned by an uninterruptible power supply
(UPS) for more sensitive devices, including telemetry
equipment, onboard computers and disk drives, safety logic,
sensors and interface electronics. Substantial auxiliary
power is available for mission specific payloads, such as
GPR equipment.
206
-------
At the heart of the Terregator is a VMEbus card cage that
houses a 68020 CPU card with 4 Mbyte onboard memory,
SCSI and ethernet ports. The system CPU functions as a
multi-tasking controller, coordinating and sequencing
locomotion and gantry motions, GPR data acquisition,
communications with the base station and system
monitoring functions. Other boards in the card cage include
a serial interface card, two 2-axis motion control cards, and
a sensor interface card with eight channels of analog-to-
digital (A/D) conversion, four channels of digital-to-analog
(D/A) conversion and 16 bits of digital I/O. All connections
to these boards are made through an intermediate patch panel
that facilitates the addition of new sensors and other
peripherals to the basic system. For development purposes, a
single board workstation and disk are located on the
equipment deck above the electronics enclosure and
interfaced to Terregator's CPU via an ethernet cable. The
organization of these components is shown graphically in
Figure 2.
Controller
The Site Investigation Robot is intended for use by persons
who are much better versed in the practices of field
screening, data collection and analysis than they are in
operating a robot. It is thus essential to hide the complexities
of controlling the robot from its users and make interactions
with the SIR as simple and straightforward as possible. This
motivated us to develop a control architecture that allows
SIR users to command and monitor the robot at a high level
while masking the details of implementing expressed user
intentions.
The SIR command interface presents the user with a set of
2-D surface maps of the site that show the size, spatial location
and orientation of boundaries, known man-made structures
(e.g., buildings and roads) and natural features (e.g., trees
and surface water bodies) in a consistent, user defined site
coordinate system. These maps are created with a simple
CAD package, developed specifically for this purpose, at the
outset of a site investigation, and can be updated and edited
as the investigation proceeds. To initiate a data acquisition
run, the user first displays a map of the site on the base
station computer by recalling a file that contains a CAD
description of a particular region of interest. Site boundaries
are indicated by straight line segments while all known
objects and other obstacles to the mobile robot are shown as
polygons. Using the computer mouse, the user then draws a
bounding box (a rectangle that encloses part of the map)
around the area of the site from which data is to be collected.
A set of routines to plan a path that covers all of the obstacle-
free ground surface within the bounding box is then
invoked. First, the dimensions of the bounding box and all
obstacles it contains are adjusted using dimensional
parameters of the SIR. In this algorithm, the robot's effective
turning radius is calculated by finding a circle within which
all parts of the skid steered locomotor will remain when it
turns in place. All sides of the bounding box and all included
polygonal obstacles are 'grown' by an amount equal to the
radius of that circle. Should the transformed bounding box
be found to intersect a site boundary, which is a pathological
case for the current path planner, the initial bounding box is
rejected and the user is instructed to redraw it. Once an
acceptable bounding box is found, the robot can be modelled
as a single point travelling through a more constricted space,
thus simplifying subsequent path planning.
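To make the growing step concrete, the following is a minimal Python sketch, not the SIR code: it approximates obstacles by axis-aligned rectangles (our own simplification) and inflates them, and shrinks the user's bounding box, by the radius of the robot's turning circle so the robot can then be treated as a point.

```python
import math

def turning_circle_radius(length_m: float, width_m: float) -> float:
    """Radius of the smallest circle containing the skid-steered base
    while it turns in place about its center: half the footprint diagonal."""
    return 0.5 * math.hypot(length_m, width_m)

def shrink_bounding_box(box, r):
    """Pull every side of the user-drawn bounding box inward by r so that a
    point robot kept inside the result keeps the real robot inside the box.
    box = (xmin, ymin, xmax, ymax) in site coordinates."""
    xmin, ymin, xmax, ymax = box
    if xmax - xmin <= 2 * r or ymax - ymin <= 2 * r:
        raise ValueError("bounding box too small for the robot; redraw it")
    return (xmin + r, ymin + r, xmax - r, ymax - r)

def grow_obstacle(rect, r):
    """Conservatively grow an obstacle (approximated here by its axis-aligned
    bounding rectangle) outward by r."""
    xmin, ymin, xmax, ymax = rect
    return (xmin - r, ymin - r, xmax + r, ymax + r)
```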
Planning paths for the Site Investigation Robot is a departure
from traditional mobile robot path planning in that the objective
is to cover as much of the ground surface as possible, rather
than finding the shortest route between two points. The SIR
path planning problem is constrained by the mobility
characteristics of the Terregator mobile robot. Terregator
can faithfully execute straight line motions of specified
length by dead reckoning, in which the wheel encoders are
used to measure distance travelled; it can also make accurate
turns in place, using a gyroscope to measure the angle of
rotation. However, the indeterminacy of Terregator's skid
steering makes following an arc of specified curvature
difficult even on hard, flat surfaces. For this reason, we have
limited all driving to straight line motions and point turns.
This is acceptable given the data acquisition protocol
described below.
SIR's path planner examines the resulting free space in the
transformed bounding box and finds a way to cover it such
207
-------
that the number of turns is minimized. If obstacles are present, the user-selected area is divided into smaller
obstacle-free areas, and a path is planned for each. Since
there are often multiple ways to perform the subdivision,
solutions are not always unique. Furthermore, there is no
way to guarantee that the resulting path is optimal. However,
once a path is found, it is overlaid on the site map for
validation. This affords the user the opportunity to draw
smaller bounding boxes and specify point-to-point moves
that connect the subregions of the map.
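One way to picture the coverage objective is the boustrophedon (back-and-forth) sweep sketched below for a single obstacle-free rectangle. This is only an illustration under assumptions of our own, namely that the swath width equals the gantry length and that sweeping parallel to the longer side keeps the number of point turns low; it is not the SIR planner.

```python
def plan_coverage_path(box, swath):
    """Return (x, y) waypoints that sweep an obstacle-free rectangle in
    parallel straight passes separated by `swath`, alternating direction."""
    xmin, ymin, xmax, ymax = box
    waypoints, forward = [], True
    if (xmax - xmin) >= (ymax - ymin):          # sweep along the longer (x) side
        y = ymin
        while y <= ymax:
            x0, x1 = (xmin, xmax) if forward else (xmax, xmin)
            waypoints += [(x0, y), (x1, y)]
            forward = not forward
            y += swath
    else:                                       # otherwise sweep along the y side
        x = xmin
        while x <= xmax:
            y0, y1 = (ymin, ymax) if forward else (ymax, ymin)
            waypoints += [(x, y0), (x, y1)]
            forward = not forward
            x += swath
    return waypoints
```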
The final path description is translated into a sequence of
driving commands (straight lines and rotations) that are
placed in a queue and transmitted to the robot via a wireless
modem. Using a software joystick, the robot is then
teleoperated to its starting point and set on its route. While
driving, the robot transmits its location back to the base
station, where it is displayed as an icon on the site map. Other
status information is similarly relayed so that the user can
supervise the data acquisition mission.
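A sketch of the translation step, assuming only the two motion primitives described above (point turns and straight drives); the command tuple format is our own illustration, not the actual telemetry protocol.

```python
import math

def waypoints_to_commands(waypoints, start_heading=0.0):
    """Convert consecutive waypoints into ('turn', radians) and
    ('drive', metres) commands suitable for queueing and transmission."""
    commands, heading = [], start_heading
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        target = math.atan2(y1 - y0, x1 - x0)
        turn = (target - heading + math.pi) % (2 * math.pi) - math.pi  # shortest turn
        if abs(turn) > 1e-6:
            commands.append(("turn", turn))
        commands.append(("drive", math.hypot(x1 - x0, y1 - y0)))
        heading = target
    return commands
```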
Subsurface Mapping Software
The Site Investigation Robot deploys and supports a
commercial ground penetrating radar set (Geophysical Survey Systems, Inc. SIR-3) to acquire subsurface data. A
data acquisition run is comprised of combinations of
Terregator drive motions and gantry movements in which
the basic procedure is to move the antenna from one limit to
the other and then drive forward some incremental distance.
At regular intervals through the antenna's travel, a series of
radar pulses are transmitted into the ground and the energy
reflected to the receiving antenna amplified, filtered and
digitized. These signals are stored adjacently in a buffer until
the antenna has completed a full scan. The result is a two
dimensional data array, in which the columns are individual
GPR waveforms, stored on disk as an image along with the
mobile robot's site coordinates. More details on the
principles of GPR are presented in the Appendix.
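The structure of one section can be sketched as follows; `read_waveform` is a hypothetical stand-in for triggering the radar and reading the digitizer at the current gantry position, and the array layout (rows are time samples, columns are antenna positions) follows the description above.

```python
import numpy as np

def acquire_section(n_positions, n_samples, read_waveform):
    """Assemble one 2-D GPR section: each digitized return becomes a column,
    so a row holds samples recorded at the same time delay across the scan."""
    section = np.empty((n_samples, n_positions), dtype=np.float32)
    for col in range(n_positions):
        section[:, col] = read_waveform(col)   # hypothetical hardware callback
    return section
```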
Every row of pixels in the GPR image contains data acquired
at a constant time delay relative to the transmitted pulse. That
time delay is converted into a distance from the antenna by
the speed of electromagnetic wave propagation in the
imaged subsurface media based on measured and/or inferred
electrical parameters. Since the position of the mobile robot
and the position of the antenna relative to the mobile robot
are measured for every recorded GPR waveform, it is
possible to assign three spatial coordinates to each pixel in
the image. It is this position tagging that makes it possible to
spatially correlate and visualize GPR data in three
dimensions.
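A minimal sketch of the tagging arithmetic, assuming a uniform medium and a gantry mounted perpendicular to the direction of travel. With a relative permittivity of 4 this gives roughly 75 mm of depth per nanosecond of two-way time, consistent with the scale shown in Figure 5.

```python
import math

C = 299_792_458.0  # free-space speed of light, m/s

def two_way_delay_to_depth(delay_s, rel_permittivity):
    """Depth below the antenna for a two-way travel time in a uniform medium."""
    v = C / math.sqrt(rel_permittivity)   # propagation speed in the soil
    return v * delay_s / 2.0              # halve: the pulse travels down and back

def pixel_site_coordinates(robot_xy, gantry_offset_m, delay_s, rel_permittivity):
    """Tag one GPR sample with site coordinates: robot position along the path,
    antenna offset across the gantry, and depth from the time delay."""
    x, y = robot_xy
    return (x, y + gantry_offset_m,
            -two_way_delay_to_depth(delay_s, rel_permittivity))
```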
Each recorded waveform spans a depth range that is
governed by the wavelength of the transmitted energy and
the electrical properties of the subsurface medium. Generally
speaking, there is a trade-off in depth of penetration and the
physical dimensions that can be resolved. The 500 MHz
antenna used in this work can image structures buried to
depths of 3 meters with 5-10 cm resolution in the best of
conditions (e.g. dry, sandy soils); lower frequencies
penetrate deeper at the sacrifice of resolution. GPR
performance is poorer in materials with high conductivity
and high dielectric constant - conditions associated with high
moisture content - due to attenuation of the radar energy. In
saturated soils and clays, imaging potential may be limited to
depths of only one meter.
This data acquisition procedure is repeated until the robot
has covered its entire planned route. Once the robot returns
to its base station, all acquired images are transferred from its
onboard disk to mass storage devices connected to the base
station computer for archiving and processing. Acquired
GPR data are arranged in volumes, each containing a set of
parallel subsurface sections stored as images. Individual
sections are stored as files that also contain other parameters,
including location of the scan, date and time of acquisition,
and radar gain and time base settings. These files are
organized in a Unix file system such that each subdirectory
corresponds to a unique site volume. Each subdirectory also
contains an additional site index file that is used to retrieve
and store individual images. Figure 3 shows an example of a
site map from which nine volumes of the subsurface would
be scanned.
Since the intuition of experienced field screening personnel
is still required to apply the appropriate processing steps and
208
-------
choose parameter values to make sense of the images, we
have developed a set of programs to process GPR data
acquired by the Site Investigation Robot that are called by
the user through a common menu-driven interface. This
software package, known as gpr-shell, includes routines for
reading and writing data files, applying time domain filters
to individual records, displaying 2D subsurface sections as color or gray-scale images, scaling and windowing images, spatially correlating all GPR records in a subsurface
volume, and a variety of image enhancement functions. To
facilitate processing, gpr-shell also provides command line
completion, prompting, and on-line help. It also provides the
user with an 'on-line lab notebook', in which the steps and
parameters used to process each image are automatically
recorded for future reference.
In order to transform raw GPR data scans into high
resolution images, several processing steps have been
implemented, as illustrated in Figure 4. (We have yet to
identify a single methodology or set of parameters that can
be successfully employed to generate interpretable
subsurface maps from all GPR data, however, the following
steps are generally taken.) First the signal is deconvolved
with the return signal from a pulse transmitted into air.
Deconvolution is a matched filter operation that removes the
effects of the secondary pulses from the return signal and
effectively transforms a return from the transmitted pulse
into the return that would have been caused by an ideal
impulse function. The resulting signal is then low pass
filtered to remove noise components introduced by the
deconvolution.
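The following is a hedged sketch of the deconvolution and low-pass steps using a water-level (regularized) spectral division; the regularization level, filter order and cutoff are illustrative assumptions, not the authors' values.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def deconvolve_trace(trace, air_return, fs_hz, water_level=0.01, cutoff_hz=1.0e9):
    """Deconvolve one GPR trace by the return from a pulse transmitted into
    air, then low-pass filter to suppress noise amplified by the division."""
    n = len(trace)
    T = np.fft.rfft(trace, n)
    R = np.fft.rfft(air_return, n)
    floor = water_level * np.abs(R).max()          # water-level regularization
    denom = np.maximum(np.abs(R), floor) ** 2
    impulse_like = np.fft.irfft(T * np.conj(R) / denom, n)
    # cutoff_hz must lie below the Nyquist frequency fs_hz / 2
    b, a = butter(4, cutoff_hz / (fs_hz / 2.0))
    return filtfilt(b, a, impulse_like)
```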
The waveform recorded at each grid point is actually a
composite of all radar reflections within the antenna's
conical beam pattern due to the poor focusing of the GPR
antenna. However, since the spacing between surface grid
points is accurately measured, we are able to correlate all of
the measurements and synthetically focus the antenna. A
process known as 'migration' is applied to convert the
deconvolved and filtered data into a representation of the
subsurface. Migration is very similar to the synthetic
aperture focusing techniques used for high resolution pipe
location, in that its underlying principle is that data from adjacent
scans tend to reinforce one another.
A three dimensional array of GPR data is constructed by
sampling data from vertical sections in the scanned volume.
The value in each cell, or voxel (for volume element), is then
added to all array locations equidistant from the transmitter
and within the antenna beam. This effectively 'spreads' each
part of the return signal over a surface that is a locus of points
with the same time of flight from the antenna. By applying
this algorithm to each cell in the array, the recorded signals
originally associated with individual voxels constructively
interfere with one another. This reinforcement indicates the
presence of an impedance discontinuity at the corresponding
subsurface location and emerges in the migrated image.
Migration can thus be used to effectively focus the
transmitted radar beam. (We note, however, that its success
requires a good estimate of the soil's dielectric constant,
which determines the speed at which GPR waves travel
through the subsurface, and the antenna beam pattern and
soil conductivity, both of which influence attenuation.)
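The fragment below is a compact 2-D diffraction-summation analogue of the 3-D procedure described above, offered only as an illustration under simplifying assumptions (constant velocity, straight rays, and a crude lateral aperture standing in for the true beam pattern); it is not the authors' implementation.

```python
import numpy as np

def migrate_section(section, dx_m, dt_s, velocity_mps, aperture_m=1.0):
    """Diffraction-summation migration of a 2-D section (rows: two-way time,
    columns: antenna positions).  For each subsurface cell, sum the amplitudes
    of nearby traces at the two-way time matching the antenna-to-cell
    distance; returns from real reflectors add constructively."""
    n_samples, n_traces = section.shape
    xs = np.arange(n_traces) * dx_m
    migrated = np.zeros_like(section)
    for j in range(n_traces):                      # output column (position)
        near = np.nonzero(np.abs(xs - xs[j]) <= aperture_m)[0]
        for i in range(n_samples):                 # output row (depth index)
            z = velocity_mps * (i * dt_s) / 2.0    # cell depth for this row
            r = np.hypot(xs[near] - xs[j], z)      # antenna-to-cell distances
            k = np.rint(2.0 * r / velocity_mps / dt_s).astype(int)
            ok = k < n_samples
            migrated[i, j] = section[k[ok], near[ok]].sum()
    return migrated
```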
Once a volume of data has undergone 3-D migration, vertical
and horizontal sections are extracted from it as individual
images. These images are then enhanced by a number of
image processing operations, including 2D low- and high-
pass filters of varied bandwidths, edge detectors and region
growing operators, depending on the image features of
interest.
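A small sketch of one such enhancement chain (a 2-D high-pass filter followed by thresholding), using a Gaussian-smoothed copy as the low-frequency estimate; the sigma and quantile are illustrative parameters, not those used to produce the figures that follow.

```python
import numpy as np
from scipy import ndimage

def highpass_and_threshold(image, sigma=3.0, quantile=0.98):
    """2-D high-pass filter (original minus Gaussian-smoothed copy) followed
    by thresholding; returns a binary mask of the strongest reflectors."""
    highpass = image - ndimage.gaussian_filter(image, sigma)
    threshold = np.quantile(np.abs(highpass), quantile)
    return np.abs(highpass) >= threshold
```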
Figures 5 through 7 show the results of these processing steps.
All three are images of a small metallic drum containing
water buried in sand. Figure 5 is a vertical section of raw data
and Figure 6 is the same image after deconvolution and
migration. In this case, the barrel cross section is best seen by
thresholding the image after it is processed by
the 2-D high pass filter (Figure 7).
Future Enhancements
A number of enhancements to our current system are
planned to increase its ability to operate on waste sites, ease
its use, and improve the quality of the subsurface maps it
generates.
209
-------
For sites with very rough terrain and/or numerous obstacles,
improving the mobility of the base locomotor will result in a
greater percentage of ground surface that SIR can cover.
This can be accomplished by adding a suspension, increasing ground clearance, replacing the wheels with tracks, and so on. An even
more significant performance increase can be realized by
improving SIR's position cognizance, regardless of its
mobility characteristics. The most promising technologies to
provide a more accurate measurement of the robot's location
on the site are inertial navigation units (INS) and global
positioning (GPS) receivers, both of which can be deployed
onboard and readily interfaced to the robot controller. By
providing a position estimate that is independent of the
robot's dead reckoning, the robot can be navigated with a
closed loop path tracking control scheme, a paradigm in
which the robot's actual (measured) position is used to
correct for deviations from the planned path that may result
from wheel slippage or other controller disturbances. Path
tracking control using combined INS and GPS has
successfully guided our NavLab mobile robot at speeds
exceeding 20 km/hr; more recently, the same controller has
been ported to an off-road dump truck.
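As a sketch of what closed-loop path tracking adds, the fragment below computes a heading-rate correction from an external position fix (INS/GPS) against the current straight-line segment; the gains and the simple proportional law are assumptions for illustration, not the NavLab controller.

```python
import math

def tracking_correction(pose, seg_start, seg_end, k_heading=1.0, k_cross=0.5):
    """Steer the robot back onto the planned segment.  pose = (x, y, heading);
    positive cross-track error means the robot is left of the path, so the
    correction turns it right (and vice versa)."""
    x, y, heading = pose
    sx, sy = seg_start
    ex, ey = seg_end
    seg_len = math.hypot(ex - sx, ey - sy)          # assumed non-zero
    path_heading = math.atan2(ey - sy, ex - sx)
    cross = ((ex - sx) * (y - sy) - (ey - sy) * (x - sx)) / seg_len
    heading_err = (path_heading - heading + math.pi) % (2 * math.pi) - math.pi
    return k_heading * heading_err - k_cross * cross   # commanded yaw-rate change
```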
More accurate GPR antenna positioning can also be
achieved by replacing the gantry mechanism with a multi-
degree-of-freedom robot arm. Our concept for such a sensor
deployment arm (SDA) is a long reach mechanism able to
position and orient sensor payloads weighing up to 10 kg
over a 2 meter x 2 meter area, adjusting to any undulations
of the terrain. The principal advantage of an SDA is greater
integrity of the sensor position measurements - a complete 3D data array can be collected with a common frame of
reference, eliminating the possibility of positioning errors
between adjacent scans due to motions of the mobile base,
which are typically an order of magnitude less accurate than
manipulator movements.
There appears to be a synergy between the Site Investigation
Robot and geographical information systems (GIS), another
emerging technology for waste site investigations.
Geographical information systems are software tools for
cataloging, manipulating, and displaying any form of data
that can be related to a cartographic map. GIS applications
include land use management, record keeping of legal
boundaries, roads and utility networks, agriculture, and
many others. A GIS can also be linked to a relational data
base to provide a powerful tool for site investigation. Many
available GIS packages include routines to enter previously
digitized terrain maps and survey data which would aid in
the development of site maps for the SIR user interface. The
other attractive feature of GIS is simplified storage and
retrieval of data: entry of acquired position tagged data into
the GIS data base can be automated and its recall reduced to
the simple positioning of a cursor in the display window.
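A toy sketch of the recall idea (not a real GIS API): position-tagged records are filtered by their distance from the cursor position on the site map.

```python
def records_near(records, cursor_x, cursor_y, radius):
    """Return the position-tagged records within `radius` of the cursor.
    Each record is assumed to be (x, y, payload) in site coordinates."""
    r2 = radius * radius
    return [rec for rec in records
            if (rec[0] - cursor_x) ** 2 + (rec[1] - cursor_y) ** 2 <= r2]
```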
Two advances in subsurface mapping software are currently
being pursued. One is the development of more general three
dimensional migration algorithms that will account for the
non-homogeneous nature of the subsurface medium. This will entail assigning permittivity
and conductivity values to each voxel in the scanned
subsurface volume in order to better model GPR wave
propagation. Techniques to measure and/or infer these
parameters will have to be developed to make the best use of
this algorithm. In addition, faster processing engines and
techniques will be required to achieve results in useful time
frames. The second advancement will be the application of
three dimensional enhancement and rendering techniques to
subsurface maps. Such techniques exist in the domains of
medical imaging and geological exploration, but have yet to
be adopted for GPR.
Finally, our goal is to integrate these hardware and software
elements into a more complete system for waste site
characterization, as shown in Figure 8.
Summary
Subsurface mapping is a discipline that has advantageously
adopted technologies from the domains of robotics and
computer science. In this research, we have successfully
implemented registration of sensor position and automated
acquisition of sensor data using a robot, and thereby created
opportunities to apply processing techniques to create 2-D
and 3-D subsurface maps of higher quality than previously
attainable. This and other spatially correlated information
that the Site Investigation Robot generates can be used to
210
-------
more effectively characterize waste sites and ultimately
lower the expense of site cleanups.
More generally, robotics and automation can benefit waste
site characterization in a number of ways.
• The enormous data requirements will be satisfied
faster and at lower cost when data are acquired by
robots.
• The quality of those data will be enhanced
through standardized, repeatable measurement
techniques.
• By automatically indexing measurements by
position in a geographical information system,
opportunities for numerical modeling, graphical
visualization and straightforward data correlation
are created.
The Site Investigation Robot is an example of an emerging
class of robots dedicated to the solution of hazardous waste
problems. We view the SIR as the first in a family of robots
for environmental applications. Systems that follow will
have additional perceptive capabilities and self-reliance to
perform detailed site assessments.
Acknowledgments
This research is sponsored through a cooperative agreement
with the U.S. Environmental Protection Agency and a grant
from the Ben Franklin Technology Center of Western
Pennsylvania. We also acknowledge RedZone Robotics,
Inc., for its participation in the Site Investigation Robot
project.
Appendix: Principles of GPR Sensing
Ground penetrating radar works by transmitting an
electromagnetic pulse into the earth which spreads as a
conical wavefront as it travels further from the antenna.
When the radar wave reaches a discontinuity in electrical
impedance of the subsurface, an echo is returned, the
strength and the phase of which indicate the magnitude and
sign of the change. Mathematical descriptions of these
interactions in all but the simplest of cases defy closed form
solutions; even finite element methods are too cumbersome
for practical modeling of the GPR phenomenon. Fortunately,
modeling the physics using geometrical optics can produce
meaningful results. With this simplification, the transmitting
antenna is treated as a light source from which rays emanate
and are reflected to the receiving antenna. The distance to the
point of reflection (assuming a direct reflection) can thus be
estimated with time-of-flight measurements, i.e., the latency
of the echo relative to the transmitted pulse.
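In equation form, under the direct-reflection assumption and in a uniform medium of relative permittivity εr, the one-way distance d follows from the echo latency t as

    d = v·t / 2,   where v = c / √εr.

For example, εr = 4 gives v ≈ 0.15 m/ns, so a 10 ns echo corresponds to a depth of about 0.75 m.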
A difficulty with the optical assumption is the poorly
focused radar beam. Commercially available GPR antennas
are designed to limit beam spread of the transmitted wave to
an elliptical cone; however, for a single return, the exact
location of an echo within this volume cannot be determined.
To resolve this ambiguity, the antenna is scanned in a line
over the ground surface to create an ensemble of return
signals. Echo latency is lowest when the antenna is directly over an object and increases as the antenna moves
away. By combining recorded echoes from points along the
scan line, distinctive curves are generated which are then
interpreted by GPR experts to identify subsurface features.
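Written out for the simplified geometrical-optics model above, a compact point reflector at depth d below the scan line produces an echo latency

    t(x) = (2/v)·√(d² + x²)

at horizontal antenna offset x, a hyperbola whose minimum 2d/v occurs directly over the object; it is these distinctive curves that are interpreted.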
In practice, there are several factors that complicate the radar
return. Time of flight measurements on return echoes depend
on knowledge of the propagation velocity of the transmitted
pulse, which is not a constant but instead depends on
electrical permittivity (or equivalently, dielectric constant)
of the subsurface material. This introduces uncertainty in the
measurements, which is currently resolved either by
calibration in the field or simply by estimation of subsurface
permittivity. Both the transmitted and reflected radar waves
are attenuated due to losses in the media that are governed
primarily by its conductivity, another parameter requiring
estimation. Geometric dilution of the wave energy as the
beam spreads with distance travelled is a further
complication since the exact shape of the antenna beam
pattern within the subsurface medium cannot be determined.
Finally, the difficulties of controlling the shape of the
transmitted pulse at GPR operating frequencies (one
hundred megahertz to over one gigahertz) introduce
additional return signals that confuse the main return echo
and must be removed.
211
-------
Figure 1. Site Investigation Robot prototype
Figure 2. Organization of onboard components (block diagram; legible labels: VME Bus Computer, Optical Disk)
-------
Figure 3. Site map with nine scanned subsurface volumes (parking lot site map showing test patches 1 through 9 and parking curbs)
Figure 4. Ground penetrating radar processing steps (flow diagram; first block: Data Acquisition)
213
-------
Figure 5. Vertical section of buried drum (raw GPR data). Axes: two-way time (ns), 0 to 19.0, and depth (mm), 0 to 1424.0, versus X (mm), 0 to 1948. Header: Volume: test volume 1, Slice Number: 5 (400 mm); Antenna: 3102, Freq: 500 MHz, Perm: 4.00, Time Step: 50 ps.
Figure 6. Image of buried drum (Figure 5) after deconvolution and migration. Axes: two-way time (ns) and depth (mm) versus X (mm), as in Figure 5. Header: Volume: test volume 1, Slice Number: 5 (400 mm); Antenna: 3102, Freq: 500 MHz, Perm: 4.00, Time Step: 50 ps.
214
-------
Figure 7. Drum image from Figure 6 following 2-D high pass filter and thresholding. Header: Volume: test volume 1, Slice Number: 5 (400 mm); Antenna: 3102, Freq: 500 MHz, Perm: 4.00, Time Step: 50 ps.
215
-------
Figure 8. Site Investigation Robot system architecture. Legible panel contents: robot functions (motion planner, motion controller, site position, data acquisition, self monitoring); user interface (display site map, display acquired data regions, select acquired data, overlay scalar data arrays, show robot position and path, select robot path); position and time tagged data (sensor specific data, site maps, data annotation, processing results); file system (variable length GPR data, processing history, processing results); visualization (2D radar display, 3D GPR 'stacks', solid visualization, depth map displays); processing (deconvolution, 2D migration, 3D migration, depth mapping, parameter estimation); image processing (thresholding, segmentation, frequency analysis, filtering, object tagging).
DISCUSSION
BRIAN PIERCE: My first question has to do with using ground penetrating
radar, as just one example, or using a magnetometer as another type of sensing
device. And the second question has to do with the use of a pair of robots or a team
where you could take advantage of forward scattering using the ground penetrating
radar. Right now it seems to me you're just using back scattering in a monostatic
configuration.
JAMES OSBORN: That's certainly correct. If you recall the viewgraph that
Ann put up, they are actually going to pursue the magnetometry type of sensing.
In fact, there is really a whole class of sensors that can be put on it. Each one has
unique requirements. In particular, some of the magnetic techniques can't be near
these very metallic robots. So you've got to come up with long deployment
booms. The idea of doing bistatic radar soundings is an interesting one. I can
think of a couple of ways to do that. One is to have a multiple arm system on a
single mobile base. And the other is to actually go with two mobile bases. I would,
at this time, say the preferred way would be the former (two arms) because of the
ability to register a manipulator and/or affect your position with much higher
accuracy than you could a mobile robot.
CHRISTOPHER FROMME: There are some excellent available technologies
for registering line of sight over short ranges, like the distance between the two
of us right now. So the idea of a pair of robots working in unison and precision
may have some merit.
DOUGLAS LEMON: Is this technology resident in the university or is it in the
RedZone Robotics Company, and who has funded this?
CHRISTOPHER FROMME: The project is funded by EPA. And the technology
is currently in the university, although we have had some collaboration from
RedZone, in particular to turn out the robot controller that drives the system. So
we are getting some collaboration from RedZone, but the project is resident at
CMU.
DOUGLAS LEMON: Do you expect this technology to eventually be
commercially available? Is that where you're headed?
CHRISTOPHER FROMME: Yes. If not, then it doesn't make any sense to do
it.
216
-------
A QUALITY ASSURANCE SAMPLING PLAN FOR EMERGENCY
RESPONSE (QASPER)
John M. Mateo, Quality Assurance Officer
and Christine M. Andreas, Assistant
Quality Assurance Officer, Roy F. Weston,
Inc./REAC, GSA Raritan Depot, 2890
Woodbridge Avenue, Building 209 Annex,
Edison, NJ, 08837-3679
William Coakley, Quality Assurance
Coordinator, USEPA, Environmental
Response Team, GSA Raritan Depot,
2890 Woodbridge Avenue, Building 18,
Edison, NJ, 08837
Abstract
Integration of critical elements into a com-
prehensive Quality Assurance Sampling
Plan (QASP) is crucial to implementation
of an effective plan. How can a project
manager ensure consideration of all these
elements? Utilizing a software package
called QASPER, a project manager is
prompted to consider elements necessary
to generate a comprehensive Quality As-
surance Sampling Plan for Emergency
Response.
QASPER is a PC-based software package
which compiles generic text and user
provided, site-specific information into a
draft QA/QC Sampling Plan for the
Removal Program. QASPER addresses
the nine sections of a QA/QC Sampling
Plan, as specified in OSWER Directive
9360.4-01, Removal Program QA/QC In-
terim Guidance, Sampling QA/QC Plan,
and Data Validation Procedures (revised
April, 1990). Sections include: Initial data,
background information, data use objec-
tives, QA objectives, approach and sam-
pling methodologies, project organiza-
tion and responsibilities, QA
requirements, deliverables, and data
validation.
QASPER was created to facilitate the
timely assembly of a comprehensive sam-
pling plan for emergency response ac-
tions. By thorough consideration and
attention to the necessary requirements
of QA/QC sample planning through an
automated process, it is anticipated that
reliable, accurate and quality data can be
generated to meet the intended use.
The On-Scene Coordinators (OSC) or
the Technical Assistance Team (TAT)
contractors are the primary users of
QASPER. These individuals will have
access to the site specific information and
the sampling objectives which charac-
terize a particular hazardous waste site
investigation. They are also responsible
for assembling the information into an
acceptable plan for implementation.
217
-------
The system, however, is applicable to many
regulatory programs that require the com-
pletion of QASPs.
Features of QASPER are numerous.
QASPER is self-contained; no other
software is required for support. ASCII
outputs are generated so that files may be
uploaded to other word processing pack-
ages for further manipulation. Database
files on all previous sampling plans are
retained. Consistency and comprehensive-
ness of sampling plan creation efforts are
maintained throughout office, region or
zone; therefore, sampling plans are created
more efficiently. Redundant data entry is
minimized by integrating repetitive infor-
mation throughout the plan after one entry.
The user is provided access to standardized
generic text with the capability to overwrite
and edit. QASPER allows for flexible data
entry throughout the plan. QASPER runs
on an IBM PC or 100% compatible, with a
hard drive, 640K RAM and a printer (for
hardcopy output).
Introduction
The U.S. Environmental Protection Agen-
cy (EPA) has divided the Superfund
cleanup program into short-term and long-
term remedial activities. Short-term inves-
tigative and mitigative efforts, typically
addressing imminent threat, are referred to
as "Emergency Response Actions" under
EPA's Removal Program. To ensure ade-
quate and comprehensive response, suffi-
cient time must be allocated for thorough
planning; however, planning is often
regarded as a luxury in an emergency
response scenario.
The EPA has taken a number of steps to
establish planning criteria for emergency
response actions which are sufficiently
detailed to ensure that data generated will
be of known quality to serve its intended
purpose and are commensurate with the
emergency response timeframes. The
first of these steps was the establishment
of data quality objectives (DQOs) for the
Removal Program. Second, EPA also
established a minimum framework for an
acceptable Quality Assurance Sampling
Plan. Both of these guidelines are set
forth in OSWER Directive 9360.4-01
released April 1990 (Publication No.
EPA/540G-90/004).
This paper will describe the Removal
Program DQOs, define the framework
of the QASP, and describe a third, in-
novative step EPA has taken in creating
a software package which facilitates the
timely assembly of both into a com-
prehensive plan ready for implementa-
tion in an emergency response. The
majority of this paper will describe the
features of the software program.
Removal Program Data Quality Objec-
tives
The quality of data is determined by its
accuracy and precision against
prescribed requirements or specifica-
tions, and by its usefulness in assisting the
user to make a decision or answer a ques-
tion with confidence. OSWER Directive
9360.4-01 guides the user in defining data
quality within a framework that also in-
corporates the intended use of the data.
The guidance is structured around three
quality assurance objectives, each as-
sociated with a list of minimum require-
ments. The three QA Objectives,
hereafter referred to as QA1, QA2 and
QA3 are described as follows:
QA1 is a screening objective to afford a
quick, preliminary assessment of site
contamination. This objective for data
218
-------
quality is for data collection activities that
involve rapid, non-rigorous methods of
analysis and quality assurance. These
methods are used to make quick, prelimi-
nary assessments of types and levels of pol-
lutants. The primary purpose for this
objective is to allow for the collection of the
greatest amount of data with the least ex-
penditure of time and money. The user
should be aware that data collected for this
objective have neither definitive identifica-
tion of pollutants nor definitive quantita-
tion of their concentration level.
QA2 is a verification objective used to
verify analytical (field or lab) results. A
minimum of 10% verification of results is
required. This objective for data quality is
for data collection activities that require
qualitative and/or quantitative verification
of a "select portion of sample findings"
(10% or more) that were acquired using
non-rigorous methods of analysis and
quality assurance. This quality objective is
intended to give the decision-maker a level
of confidence for a select portion of
preliminary data. This objective allows the
user to focus on specific pollutants and
specific levels of concentration quickly, by
using field screening methods and verifying
at least 10% by more rigorous analytical
methods and quality assurance. The results
of the 10% of substantiated data give an
associated sense of confidence for the
remaining 90%. However, QA2 is not
limited to only verifying screened data. The
QA2 objective is also applicable to data that
are generated by any method which satisfies
all the QA2 requirements, and thereby in-
corporates any one or a combination of the
three verification requirements.
QA3 is a definitive objective used to assess
the accuracy of the concentration level as
well as the identity of the analyte(s) of in-
terest. This objective for data quality is
available for data collection activities
that require a high degree of qualitative
and quantitative accuracy of all findings
using rigorous methods of analysis and
quality assurance for "critical samples"
(i.e., those samples for which the data are
considered essential in making a
decision). Only those methods that are
analyte specific can be used for this
quality objective. Error determinations
are made for all analytes of the critical
sample(s) of interest.
Quality Assurance Sampling Plan
Framework
There are nine sections to a Removal
Program QA Sampling Plan. Section 0.0
addresses basic information require-
ments such as site name, relevant work
order numbers, primary personnel
names and titles, etc. Section 1.0 solicits
information about the location of the
facility, type of facility, type and volume
of materials to be addressed, sensitive
adjacent environments, and action
levels. Section 2.0 addresses data quality
objectives (DQOs), i.e., regarding
decisions the data will support. Section
3.0 addresses the linkage of DQOs with
matrix and parameters. The project
manager must decide which parameter
will be assessed, by matrix, for which in-
tended data use, at which QA objective
(QA1, QA2, or QA3). Section 4.0 ad-
dresses the Sampling Approach and
Methodologies, including documenta-
tion requirements. This section will in-
clude a discussion of sampling design,
type of equipment, fabrication and
whether equipment decontamination
will be employed, standard operating
procedures, numbers of field samples
and control samples needed to achieve
the stated QA Objectives. It also in-
cludes a timetable for sampling activities.
219
-------
Section 5.0 addresses information about
what personnel are assigned which respon-
sibilities, and which laboratories will be
analyzing which samples. Section 6.0 dis-
cusses the requirements necessary to
achieve the quality assurance objectives
identified in Section 3.0. Section 7.0 ad-
dresses the types of deliverables to be
produced and what they will contain. Sec-
tion 8.0 addresses the degree of data valida-
tion necessary to achieve the identified QA
Objective.
Quality Assurance Sampling Plan for
Emergency Response (QASPER)
QASPER is a PC-based software package
which compiles generic text and user
provided, site-specific information into a
draft QA/QC Sampling Plan for the
Removal Program. QASPER addresses
the nine sections of a QA/QC Sampling
Plan, as specified in OSWER Directive
9360.4-01, Removal Program QA/QC In-
terim Guidance, Sampling QA/QC Plan,
and Data Validation Procedures.
The site manager (On-Scene Coordinator)
or contractors are the primary anticipated
users of QASPER.
These individuals will have access to the site
specific information and the sampling ob-
jectives which characterize the site inves-
tigation. It is their responsibility to
assemble that information into an accept-
able sampling plan for implementation.
QASPER has a database of standard
generic text which is utilized in an
electronic "cut and paste" process with user
provided site specific information to create
a draft QA Sampling Plan. This approach
enables the user to focus on critical infor-
mation while the software manages both
the presentation and correlation of that
information with other essential data.
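As a rough sketch of the "cut and paste" idea (the template text, keys, and function below are illustrative, not the actual QASPER database), generic text is merged with user-supplied, site-specific fields:

```python
# Hypothetical fragment of a generic-text database; QASPER's real entries differ.
GENERIC_TEXT = {
    "decon_sequence": ("Non-dedicated sampling equipment will be decontaminated "
                       "between locations using the following sequence: {sequence}."),
}

def compile_section(key, **site_info):
    """Electronic 'cut and paste': merge stored generic text with the
    user-provided, site-specific information for one plan section."""
    return GENERIC_TEXT[key].format(**site_info)

# Example:
# compile_section("decon_sequence",
#                 sequence="detergent wash, tap water rinse, distilled water rinse")
```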
Perhaps the best way to illustrate this
process is to "walk through" QASPER.
The user should progress in a sequential
manner, starting with section 0.0 because
the plan database will build on previously
provided information. This feature
avoids the need for redundant input of
data which must appear in several sec-
tions of the completed plan. It is possible
to skip sections, or avoid certain input
requirements (e.g., when information re-
quested is not yet known to the user).
This allows the user to create those por-
tions of the database at times that are
convenient to the user. However, it may
not be possible to complete certain sec-
tions (most notably the DQO sections:
3.0, 6.0, and 8.0) without providing cer-
tain information in preceding sections
(e.g. Section 2.0).
Figure 1. Main Edit Plan Menu
Section 0.0 identifies certain information
required to complete the title page of a
Sampling Plan; some of the information will
also be utilized elsewhere throughout
the completed plan. If the user chooses
not to enter the information requested,
220
-------
the completed plan (through the Output
menu) will be assimilated as if that informa-
tion was not requested. Should the user
wish to add alternate information currently
not requested by QASPER, this would be
accommodated through the Edit menu
after the plan has been compiled from the
database (through the Output menu).
Section 1.0 solicits background information
about the site. The user is first prompted to
geographically locate the site, characterize
its size and operating status, i.e., operation-
al or abandoned. The user is requested to
provide information about the type of
facility. (This information request is cur-
rently limited to one response per
category). For sites with multiple facility
types, the user may enter this data through
the Edit Text menu after the file has been
compiled. Next, the user is requested to
provide information about the materials
handled, the surrounding environs and
populations. Responses to these requests
are facilitated by pop-up menus of standard
responses. In the last three parts of this
section, the user provides the information
requested by typing onto free-form text
screens. Although there is room for multi-
ple page responses under each information
request, one to several paragraphs should
be sufficient.
Section 2.0 requests information regarding
the objective and purpose of the sampling
event. How does the user expect to utilize
the resultant data? Several standard
responses are provided and may be ac-
cessed by the arrow keys and/or selected by
the "Return" key. The user may input an
alternate "objective" or "purpose" by select-
ing the "Other" category and specifying the
other use. The return key is utilized to
mark or unmark each item. A critical con-
sideration for any data collection event is
whether the data will be evaluated against
an existing database or action level.
Specification of the contaminants of con-
cern and their respective actionable
levels will help determine appropriate
analytical methods and quality assurance
needs later in the plan. Multiple selec-
tions are permissible from the screen.
Selections under the "Purpose" group
will be carried forward to other sections
of the plan. This section, therefore, re-
quires input in order to enable the user
to complete portions of Sections 3.0, 4.0,
6.0, and 8.0.
Figure 2. Section 3.0 QA Objectives Menu
In Section 3.0, the user will select among
various parameters to identify the class
of compounds to be investigated. This
parameter selection will initiate the
DQO logic for a parameter, in a matrix
(next menu), for a given purpose (sub-
sequent menu), at a selected Quality As-
surance Objective (subsequent menu).
At the end of the logic path, the user will
be brought back to the parameter menu
to make another selection, if ap-
propriate. QASPER remembers the last
logic path, therefore if the user wishes to
select the same parameter, same matrix,
same purpose, and a different QA Objec-
221
-------
tive, he/she need only move the highlight on
the last option.
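The logic path can be pictured as building a table of (parameter, matrix, purpose, QA objective) records; the sketch below only illustrates that data structure and is not QASPER's internal representation.

```python
from itertools import product

def build_dqo_records(selections):
    """Each selection is (parameter, matrices, purposes, qa_objectives); the
    result holds one record per combination, mirroring the Section 3.0 logic path."""
    records = []
    for parameter, matrices, purposes, qa_objectives in selections:
        for matrix, purpose, qa in product(matrices, purposes, qa_objectives):
            records.append({"parameter": parameter, "matrix": matrix,
                            "purpose": purpose, "qa_objective": qa})
    return records

# Example: build_dqo_records(
#     [("metals", ["soil"], ["extent of contamination"], ["QA1", "QA2"])])
```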
Section 4.0 of the system solicits informa-
tion about the proposed sampling rationale
and how sampling will be conducted. There
are five subsections which address the fol-
lowing:
1. Sample Equipment
The user is requested to identify sampling
equipment that will be utilized, what
material it is made of (fabrication), and
whether it is to be dedicated and/or decon-
taminated. The user must identify the sam-
pling tools which will be used to collect
samples from the various matrices. This
process is initiated by first selecting a matrix
from among those previously identified in
Section 3.0. Next, the user will identify the
type(s) of equipment to be used in the
various matrices selected. The emphasis
here is on the equipment which will be
utilized to obtain the sample from the en-
vironment and transfer it to the sample con-
tainer. Most of the equipment in the menu
has a corresponding Standard Operating
Procedure (SOP) available in Subsection
4.3.
Figure 3. Sampling Equipment Decon-
tamination Sequence Menu
The user is also requested to identify the
equipment fabrication, or material of
construction. This is important so that
the quality of the sample is not com-
promised, inadvertently, by the materials
it comes in contact with during sample
collection. This is usually critical for low
concentration investigations, or situa-
tions of incompatibility between sample
contaminants and sampling device
fabrication. If the equipment is not dedi-
cated, QASPER will import generic text
describing decontamination procedures
and solicit additional information about
the user's preference for the decon-
tamination sequence and chemicals (e.g.
solvents) of choice. The user will high-
light, or select, the decontamination
steps from a menu in the order he/she
wishes the sequence to be conducted in
the field. A manifestation of that se-
quence will be compiled in the plan out-
put.
2. Sampling Design
In this section, the user will indicate the
sampling design or grid proposed to
achieve the sampling event objective. It
is expected that the user will detail where
and how many samples will be collected.
A basis for the sampling scheme would
be described herein, and a sampling map
would be referenced. QASPER will
print a blank page with the name of the
site and the title, "Sampling Location
Map", for incorporation of this map.
3. Standard Operating Procedures
There are three sections to the SOP sub-
section, addressing standard text for
Sample Documentation, Sampling, and
Sample Handling and Shipment.
QASPER allows the user to choose exist-
ing generic text from the database, or
222
-------
write new text to describe how sample
documentation will be achieved. If the user
selects "Write own Text", a free form edit
screen of several pages will appear to
receive the user's narrative.
Figure 4. Available SOPs Menu
QASPER enables the user to choose from
an inventory of standardized SOP texts to
prepare a description of how the sampling
event will be conducted. There are several
approaches for incorporating Sampling
SOPs:
-The user may import only the titles of
SOPs into the compiled plan. This reduces
the bulk of the final plan document and may
be appropriate where all users of the plan
would have access to a repository of the
actual SOP texts.
-The user may import title and text into the
compiled plan. This allows the final plan to
be a "stand alone" document.
-The user may import any portion of the
generic titles and text available through
QASPER and/or modify and add SOPs to
the QASPER database.
4. Schedule of Activities
The user is requested to provide a
timetable for the sampling activities.
This usually begins with the procurement
process for laboratory services and may
end with delivery of the final report. A
tabular presentation will be created
when the plan is compiled.
5. Tables
QASPER presents a summary table of
each parameter, matrix, purpose, and
QA objective as compiled in Section 3.0.
The user will select by means of the high-
light bar and "return" key to initiate a
method selection for each parameter,
identification of level of sensitivity, num-
ber of samples to be collected and QC
samples needed to address the relevant
QA objective. This information will be
assimilated by QASPER into Field Sum-
mary and QA/QC Summary Tables.
Figure 5. Field QA/QC Summary Tables Menu
In Section 5.0, the user is requested to
identify what personnel will be perform-
ing what tasks or responsibilities for the
sampling event. Likewise, the user is re-
223
-------
quested to provide the name of the lab and
a city or state descriptor for an address.
Labs will be characterized as either CLP,
commercial, EPA or field under the space
for lab type. Parameters may be identified
by class of compound.
Section 6.0 of the plan database receives
standardized text regarding QA require-
ments, based on the QA Objectives
selected in Section 3.0. The user has the
opportunity to view and edit the text in
Section 6.0, since this is where the informa-
tion will appear in the final compiled plan.
There are also options for deleting generic
text or writing unique text (requirements).
The menu will indicate which QA Objective
requirements are being imported (e.g.
QA1, QA2, and/or QA3).
In Section 7.0, QASPER contains an inven-
tory of standardized descriptions of the
types of deliverables which may be
prepared under a sampling event. The user
need only select the appropriate
deliverables, and the resultant plan will
contain the appropriate text.
Figure 6. Deliverables Menu
Section 8.0 contains the requirements for
validating the data generated under the
plan. The text in this section will be auto-
matically imported at the time the QA
Objective(s) is selected.
After completing review and/or
modification of Sections 0.0-8.0, the user
may proceed to the output menu to com-
pile the plan for eventual printing or
sending to diskette.
Features of QASPER
-Self contained, requires no other software for support
-Generates ASCII outputs - file and
hardcopy. Files may be uploaded to
other word processing packages for fur-
ther manipulation
-Creates (draft) hard copy QA/QC Sam-
pling Plan document ready for approval
signatures and implementation
-Retains database files on all previous
sampling plans for future manipulation
(e.g. recreating documents, searching for
similar sampling plans by location,
facility type, contamination, etc.)
-Capable of transmitting (compiled)
sampling plan or database via diskette or
modem
-Improves consistency and comprehen-
siveness of sampling plan creation efforts
throughout office, region, or zone
-Improves efficiency for creating and
reviewing QA/QC Sampling Plan docu-
ments
-Repetitive use of information
throughout the plan without the need for
redundant data entry
224
-------
-Provides the user access to standardized
generic text with overwrite capability for
editing
-Flexible data entry throughout
Requirements
QASPER runs on an IBM PC or 100%
compatible, with a hard drive, 640K RAM
and a printer (for hardcopy output).
Conclusion
QASPER is a PC-based software package
which compiles generic text and user
provided, site-specific information into a
draft QA/QC Sampling Plan for the EPA
Removal Program. It is envisioned that this
tool will primarily facilitate the timely as-
sembly of comprehensive QA Sampling
Plans in emergency response scenarios and,
indirectly, educate users on the correlation
of data quality objectives and sampling ac-
tivities.
Mention of trade names or commercial
products does not constitute EPA endorse-
ment or recommendation for use.
References
U.S. Environmental Protection Agency,
Quality Assurance/Quality Control
Guidance for Removal Activities, Sam-
pling QA/QC Plan and Data Validation
Procedures, Interim Final EPA/540G-
90/004, April 1990.
225
-------
A RATIONALE FOR THE ASSESSMENT OF ERRORS IN SOIL SAMPLING
J. Jeffrey van Ee*
Exposure Assessment Division
Environmental Monitoring Systems
Laboratory
Las Vegas, Nevada 89193
*Direct questions to this author.
Clare L. Gerlach
Lockheed Engineering & Sciences
Company
Las Vegas, Nevada 89103
ABSTRACT
Considerable guidance has been provided on
the importance of quality assurance (QA),
quality control (QC), and quality assessment
procedures for determining and minimizing
errors in environmental studies. QA/QC
terms, such as quality assurance project
plans and program plans are becoming a part
of the vocabulary for remedial project
managers (RPMs). Establishment of data
quality objectives (DQOs) early in the
process of a site investigation has been
stressed in EPA QA/QC guidance documents.
Quality assessment practices, such as the
use of duplicates, splits, spikes, and
reference samples, are becoming widely
accepted as important means for assessing
errors in measurement processes. Despite
the existence of various forms of guidance
for hazardous waste site investigations,
there have been no clear, concise, well-
defined strategies for precisely how these
recommended QA/QC materials can be utilized.
The purpose of this paper is to familiarize
field scientists with an approach to these
questions:
How many and what type of samples are
required to assess the quality of data
in a field sampling effort?
How can the information from these
quality assessment samples be used to
identify and control sources of error
and uncertainties in the measurement
process?
The primary audience for this paper is
assumed to be RPMs who have concerns about
the quality of the data being collected at
Superfund sites but have little time to
investigate the complexities of the
processes used to assess the quality of
data from the total measurement process.
The approach outlined in this document for
assessing errors in the field sampling of
inorganics in soils may be transferrable,
with modification, to other contaminants in
other media.
This presentation is a summary of "A
Rationale for the Assessment of Errors in
the Sampling of Soils" by J. Jeffrey van
Ee, Louis J. Blume, and Thomas H. Starks,
1990.
An in-depth treatment of the statistical
approach is outlined in the Rationale (1),
and it is recommended reading.
INTRODUCTION
This document expands upon the guidance for
quality control samples for field sampling
as contained in Appendix C of EPA's Data
Quality Objectives for Remedial Response
Activities - Development Process (2). That
report outlines, in greater detail,
strategies for how errors may be assessed
and minimized in the sampling of soils with
emphasis on inorganic contaminants.
Basic guidance for soil sampling QA, which
includes a discussion of basic principles,
may be found in EPA's Soil Sampling Quality
Assurance Users Guide developed at the
Environmental Monitoring Systems
227
-------
Laboratory, Las Vegas (3). The Users Guide
is intended to be revised on a periodic
basis. It is anticipated that some of the
guidance provided in this document will
eventually be incorporated into the Users
Guide.
The sampling and analysis of soils for
inorganic contaminants is a complex
procedure from experimental design to the
final evaluation of all generated data.
Sources of error abound but they can be
successfully mitigated by careful planning
or isolated by intelligent error assessment.
Error (or variability) can be either biased or
random. Biased error is indicative of a
systematic problem that can exist in any
sector of soils analysis, from sampling to
data analysis. The first step in analysis
of variability (or error) is to establish a
plan that will identify errors, trace them
to the step in which they occurred, and
account for variabilities to allow direct
action to correct them. In anticipation of
errors, it is essential to ask two
questions:
1. How many and what type of samples are
required to assess the quality of data
in a field sampling effort?
2. How can the information from these
samples be used to identify and
control sources of error and
uncertainty in the measurement?
Error assessment should be understood by the
field scientist and the analyst. To aid
scientists in the estimation and evaluation
of variability, the Environmental Monitoring
Systems Laboratory-Las Vegas (EMSL-LV) has
developed a computer program called ASSESS.
ASSESS can trace errors to their sources and
help scientists plan future studies that
avoid the pitfalls of the past.
BACKGROUND
Superfund and RCRA site investigations are
complicated by: the variety of media being
investigated, an assortment of methods, the
diversity of investigators, the variety of
contaminants, and the numerous risks to and
effects on human health and the environment.
Many phases exist in Superfund site
investigations. An initial phase, generally
described as a preliminary investigation,
consists of collecting and reviewing
existing data and data from limited
measurements using practically any
available method. The next phase,
generally described as site
characterization, uses selected methods and
prescribed procedures to characterize the
magnitude and extent of the contamination.
Later phases include an examination of
remedial actions, which involve an
assessment of treatment technologies, and
continued monitoring to assess the degree
of cleanup at a site. A final phase may
require long-term monitoring to
substantiate that no new or additional
threats occur to affect human health and
the environment. Throughout Superfund site
investigations QA/QC procedures change as
data quality objectives vary and different
phases proceed.
RANDOM ERRORS
Random errors can result in variations from
the true value that are either positive or
negative but do not follow a pattern of
variability. During the measurement
process, random errors may be caused by
variations in:
1) sampling
2) handling
3) transportation
4) preparation
5) subsampling
6) analytical procedures
7) data handling
The greatest source of error is usually the
sampling step. In the Comprehensive
Environmental Response, Compensation, and
Liability Act of 1980 (Superfund, or
CERCLA) and the Resource Conservation and
Recovery Act (RCRA) site investigations,
analytical, and data handling variability
are checked by the CLP protocol. When more
than one laboratory is involved, handling,
transportation, subsampling, and
preparation can be checked at Level IV.
All analyses are performed in an offsite
Contract Laboratory Program (CLP)
analytical laboratory following CLP
protocols.
228
-------
But how can the analyst know that the sample
in the jar is representative of the
surrounding samples at the site? How can
the field analyst know that the more (or
less) contaminated soil didn't stick to the
auger or split-spoon?
It is strongly recommended that the
traditional approaches used in mitigating
the error in the last six steps be applied
to sampling itself, i.e., use of duplicates,
splits, spikes, evaluation samples, and
calibration standards. A certain amount of
random error is inherent in samples
themselves. In fact, the total variance
equals the measurement variance plus the
population variance, as defined by the
equations:

    σt² = σm² + σp²

where  σt = total variability
       σm = measurement variability
       σp = population variability

and

    σm² = σs² + σh² + σss² + σa² + σb²

where  σs = sampling variability
            (standard deviation)
       σh = handling, transportation and
            preparation variability
       σss = preparation (subsampling)
            variability
       σa = laboratory analytical variability
       σb = between-batch variability
NOTE: It is assumed that the data are
normally distributed or that a
normalizing data transformation has
been performed.
We can address the variance in measurement;
the population variance, however, is a true
picture of the complexity of the soil.
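A minimal numerical illustration of the relationships above, treating the components as independent standard deviations (the function is ours, not part of ASSESS):

```python
import math

def total_variability(sigma_p, sigma_s, sigma_h, sigma_ss, sigma_a, sigma_b):
    """Combine population and measurement components into total variability
    (all arguments are standard deviations of independent error sources)."""
    sigma_m = math.sqrt(sigma_s**2 + sigma_h**2 + sigma_ss**2
                        + sigma_a**2 + sigma_b**2)
    return math.sqrt(sigma_p**2 + sigma_m**2)
```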
BIAS ERROR
Some sources of error are systematic, that
is, in a given situation conditions exist
that consistently give positive or
consistently give negative results. This
skewing of data can be introduced early in
a sampling regime, e.g., by a sampling
device that alters the composition of the
soil matrix. It can occur in the middle of
the sampling regime, e.g., by the
preferential handling of a sampler who
isn't trained in the intricacies of sample
handling and preparation. Or it can be
introduced in the later, analytical stages,
where it is easier to trace because of
interlaboratory comparisons and frequent
calibration checks. The pervasive quality
of an early bias error is its resistance to
detection and the fact that other
variabilities are added throughout the
process until, finally, the reported data
may be significantly non-representative of
the true value. Bias errors can be traced
to:
faulty sampling design
skewed sampling procedure
systematic operator error
contamination
degradation
interaction with containers
displacement of phase (or chemical
equilibria)
inaccurate instrument calibration
PREVENTION
To avoid both random and bias errors (or at
least to be able to pinpoint their
occurrence and estimate their extent), it
is wise to plan a study well, anticipating
possible sources of error. The inclusion
of quality assurance samples used for
quality assessment and quality control can
help isolate variability and identify its
effect.
An effective technique is to concentrate
duplicate sampling early in the study and
send the samples off for rapid CLP
analysis. Depending on the results, it may
not be necessary to include as many quality
assessment samples after these samples
demonstrate reliability in the sampling
process. Early detection of sources of
error can help the field scientist
customize the remainder of the study to
meet the specific needs of the project.
QUALITY ASSESSMENT SAMPLES
A Remedial Project Manager (RPM) must ask:
how many samples are needed to adequately
characterize the soil at this site? The
229
-------
key word is "adequately." By determining
the data quality objectives (DQOs) in
advance, the RPM can assure adequate
sampling at a site. Too little sampling, as
well as too much, is a waste of time and
money. The extent of QA/QC effort is
dependent on the risk to human health, the
nearness of action levels to detection
limits, and the size, variability, and
distribution of contamination. Ultimately,
the number of quality assessment samples is
determined by the DQO for the site. Table
1 explains various types of quality
assessment samples and their uses.
SOME STATISTICAL CONCERNS
Confidence in quality assessment sample data
can be expressed as an interval or as an
upper limit. All confidence levels/limits
are based on the number of degrees of
freedom and the limits get lower (or the
intervals get smaller) as the number of
degrees of freedom increases. For example,
if 15 samples are taken at a site, each
sample is divided into 2 preparation splits,
each split is extracted twice at a CLP
laboratory, and 2 injections of each
extraction are made into an Inductively
Coupled Plasma/Mass Spectrometer (ICP/MS),
the total number of degrees of freedom
associated with this experimental design
would be calculated as:
15 samples × 2 preparation splits = 30
× 2 CLP extractions = 60
× 2 injection replicates = 120
120 degrees of freedom for the whole
process. But, if only the population
variability in the field samples (which
includes the sampling error) is being
estimated, the number of degrees of freedom
is 15-1, or 14. There are 15 independent
samples but one degree of freedom is lost
with the estimation of the mean. Therefore,
there are 14 degrees of freedom for the
sampling variance estimate. As another
example, to estimate the variability in the
extraction step, one has 30 independent
pairs of numbers (the duplicate extractions),
each pair associated with one preparation
split. Thus, there are 30 degrees
of freedom associated with the extraction
error.
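The degrees-of-freedom bookkeeping for this design can be written out as a brief sketch (illustrative Python only; the counts mirror the example above):

n_samples    = 15  # field samples
n_splits     = 2   # preparation splits per sample
n_extracts   = 2   # CLP extractions per split
n_injections = 2   # ICP/MS injections per extraction

# Total number of measurements in the whole design.
total_measurements = n_samples * n_splits * n_extracts * n_injections   # 120

# Degrees of freedom for the sampling (population) variance estimate:
# one is lost in estimating the mean.
df_sampling = n_samples - 1                                              # 14

# Degrees of freedom for the extraction error: one per pair of duplicate
# extractions, i.e., one per preparation split.
df_extraction = n_samples * n_splits                                     # 30

print(total_measurements, df_sampling, df_extraction)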
Obviously, the confidence associated with
any particular sampling is directly related
to the number of samples taken. In Table 2
(also Table 3 of the Rationale Document) or
in a statistics manual, guidance is given
for the number of quality assessment
samples that must be used with the routine
site characterization samples. These
tables assume that data are normally
distributed. The tables will show the user
the confidence interval associated with the
degrees of freedom. Then, decisions may be
based upon the requirements of the DQOs. A
synopsis of this targeted approach can be
seen in Figure 1. The total measurement
error comprises error in the sampling
(σs), subsampling (σss), handling (σh),
batch (σb), and analysis (σa) steps. Each
is addressed in the regime depicted in
Figure 1.
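For readers without Table 2 or the Rationale Document at hand, the tabulated interval factors can be approximated from the chi-square distribution. The fragment below is an illustrative Python sketch (using scipy, which is assumed to be available), not part of the referenced documents; it computes a 95 percent confidence interval for a population variance from a sample variance s² and its degrees of freedom.

from scipy.stats import chi2

def variance_ci(s2, df, confidence=0.95):
    """95% confidence interval for a population variance, given a sample
    variance s2 with df degrees of freedom; normality is assumed."""
    alpha = 1.0 - confidence
    lower = df * s2 / chi2.ppf(1.0 - alpha / 2.0, df)
    upper = df * s2 / chi2.ppf(alpha / 2.0, df)
    return lower, upper

# With s2 = 1 the bounds are the multipliers of s2; for 14 degrees of
# freedom this gives roughly 0.54*s2 and 2.49*s2, in line with Table 2.
print(variance_ci(1.0, 14))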
SAMPLE COLLECTION CONSIDERATIONS
If Level IV CLP analysis is performed on
the soil, we can assume that very little
error occurs in the analytical stage. This
focuses our attention on sources of error
in the sampling, handling, and preparation
steps. The two major considerations in
collection of environmental samples are:
1. Will the collected data give the answers
necessary for a correct assessment of
the contamination or a solution to the
problem?
2. Can sufficient sampling be done well and
within reasonable cost and time limits?
ASSESS
The EMSL-LV has developed an easy-to-use
program to calculate the necessary
statistics, as described in the Rationale
(1), from the generated data for an
accurate determination of precision and
bias. ASSESS is a public-domain FORTRAN
program, written for personal computers,
that is available from EMSL-LV. It may be
applied to cases where no field evaluation
samples are available as well as to cases
where they are. ASSESS is user-friendly, and its use
will greatly aid both field scientists and
RPMs in decision-making based on soil
studies.
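The precision and bias statistics themselves are straightforward. The fragment below is a minimal illustration (in Python, with hypothetical numbers) of the kind of calculation ASSESS automates: a pooled relative standard deviation from field duplicate pairs and a mean percent bias from evaluation samples of known concentration. It is a sketch of the general approach, not the ASSESS code.

import math

def pooled_rsd(pairs):
    """Pooled relative standard deviation (%) from duplicate pairs; each
    pair contributes one degree of freedom: variance = (x1 - x2)**2 / 2."""
    rel_vars = []
    for x1, x2 in pairs:
        mean = (x1 + x2) / 2.0
        rel_vars.append(((x1 - x2) ** 2 / 2.0) / mean ** 2)
    return 100.0 * math.sqrt(sum(rel_vars) / len(rel_vars))

def percent_bias(measured, certified):
    """Mean percent bias of evaluation sample results against their known
    (certified) concentrations."""
    diffs = [(m - c) / c for m, c in zip(measured, certified)]
    return 100.0 * sum(diffs) / len(diffs)

# Hypothetical field duplicate results (mg/kg) and evaluation samples.
duplicates    = [(105.0, 98.0), (47.0, 52.0), (210.0, 195.0)]
fes_measured  = [92.0, 480.0, 21.0]
fes_certified = [100.0, 500.0, 20.0]

print("pooled RSD (%):", round(pooled_rsd(duplicates), 1))
print("percent bias (%):", round(percent_bias(fes_measured, fes_certified), 1))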
230
-------
TABLE 1
QUALITY ASSESSMENT SAMPLES AND THEIR USES
• ALLOW STATEMENTS TO BE MADE CONCERNING THE QUALITY OF THE MEASUREMENT SYSTEM
• ALLOW FOR CONTROL OF DATA QUALITY TO MEET ORIGINAL DQOs
• SHOULD BE DOUBLE-BLIND:

Field Evaluation (FES): Samples of known concentration are introduced in the field as early as possible to check for measurement bias and to estimate precision.

Low Level Field Evaluation (LLFES): Low concentration FES samples check for contamination in sampling, transport, and analysis, and check the detection limit.

External Laboratory Evaluation (ELES): Similar to FES but without exposure in the field; ELES can measure laboratory bias and, if used in duplicate, precision.

Low Level External Laboratory (LLELES): Similar to LLFES but without field exposure; LLELES can determine the method detection limit and the presence of laboratory contamination.

Field Matrix Spikes (FMS): Routine samples spiked with the analytes of interest in the field check recovery and reproducibility over batches.

Field Duplicates (FD): Second samples taken near routine samples check for variability at all steps except batch.

Preparation Splits (PS): Subsample splits are made after homogenization and are used to estimate error occurring in the subsampling and analytical steps of the process.

• SHOULD BE SINGLE-BLIND:

Field Rinsate Blanks (FRB): Samples obtained by rinsing the decontaminated sampling equipment with deionized water to check for contamination.

Preparation Rinsate Blank (PRB): Samples obtained by rinsing the sample preparation apparatus with deionized water to check for contamination.

Trip Blanks (TB): Used for Volatile Organic Compounds (VOC); containers filled with American Society for Testing and Materials Type II water are kept with routine samples through the sampling, shipment, and analysis phases.
• MAY BE NON-BLIND: AS IN THE INORGANIC CLP PROTOCOL
231
-------
TABLE 2
Some 95 Percent Confidence Intervals for Variances
Degrees of Freedom     Confidence Interval
  2                    0.27s² < σ² < 39.21s²
  3                    0.32s² < σ² < 13.89s²
  4                    0.36s² < σ² <  8.26s²
  5                    0.39s² < σ² <  6.02s²
  6                    0.42s² < σ² <  4.84s²
  7                    0.44s² < σ² <  4.14s²
  8                    0.46s² < σ² <  3.67s²
  9                    0.47s² < σ² <  3.33s²
 10                    0.49s² < σ² <  3.08s²
 11                    0.50s² < σ² <  2.88s²
 12                    0.52s² < σ² <  2.73s²
 13                    0.53s² < σ² <  2.59s²
 14                    0.54s² < σ² <  2.49s²
 15                    0.54s² < σ² <  2.40s²
 16                    0.56s² < σ² <  2.32s²
 17                    0.56s² < σ² <  2.25s²
 18                    0.57s² < σ² <  2.19s²
 19                    0.58s² < σ² <  2.13s²
 20                    0.58s² < σ² <  2.08s²
 21                    0.59s² < σ² <  2.04s²
 22                    0.60s² < σ² <  2.00s²
 23                    0.60s² < σ² <  1.97s²
 24                    0.61s² < σ² <  1.94s²
 25                    0.62s² < σ² <  1.91s²
 30                    0.64s² < σ² <  1.78s²
 40                    0.67s² < σ² <  1.64s²
 50                    0.70s² < σ² <  1.61s²
100                    0.77s² < σ² <  1.35s²
232
-------
Figure 1. QUALITY ASSESSMENT SAMPLES
[Flow diagram: routine samples, field duplicates and preparation splits, evaluation samples (FES, ELES), and rinsate blanks (FRB, PRB) are traced through the sample taking, preparation, and analysis stages, with the sources of error (σs, σss, σh, σb, σa) addressed by each type of quality assessment sample.]
ACKNOWLEDGEMENT
This work is based on the in-depth treatise,
"A Rationale For the Assessment of Errors in
the Sampling of Soils" by J. Jeffrey van Ee,
Louis Blume, and Thomas Starks.
NOTICE
Although research described in this article
has been funded wholly by the United States
Environmental Protection Agency under
contract number 68-03-3249 to Lockheed
Engineering & Sciences Company, it has not
been subjected to Agency review and
therefore does not necessarily reflect the
views of the Agency, and no official
endorsement should be inferred. Mention of
trade names or commercial products does not
constitute Agency Endorsement of the
product.
REFERENCES
(1) van Ee, J.J., L.J. Blume, T.H. Starks,
A Rationale For the Assessment of
Errors in the Sampling of Soils, U.S.
EPA, 1990, 600/4-90/013.
(2) U.S. EPA. 1987. Data Quality
Objectives for Remedial Response
Activities - Development Process.
EPA/540/6-87/003.
(3) U.S. EPA. 1989. Soil Sampling Quality
Assurance Users Guide (2nd. Edition).
Environmental Monitoring Systems
Laboratory, Las Vegas, Nevada. EPA
600/8-89/046.
233
-------
DISCUSSION
REX RYAN: You did an admirable job of explaining the strategy of breaking down
what we call a "nugget effect" by using ANOVA techniques. I was a little bit
shocked that you didn't discuss the amount of variance distance contributes
within a sampling program. I was also surprised that you didn't discuss variograms
or any of those kind of issues that would affect a sampling team's success in
determining what is in fact going on at a site.
JEFFREY VAN EE: The two methods go together. The method I've described
is useful in pinpointing sources of variability in the measurement process if you
want to make changes. But the points that you're making address the larger
question of where your samples are located and whether they're going to be
representative of the site, assuming that the measurement variability is relatively
low. That certainly needs to be looked at: how representative are your sampling
locations of the contamination throughout the site.
REX RYAN: In your experience which do you think is larger—which in fact
could—in your professional judgment be a larger contribution to total variabil-
ity: the problem of extending samples in distance or trying to replicate samples
at the same location?
JEFFREY VAN EE: I don't think I have enough data to answer that question.
I can pose a few questions for all of you to consider. Let's say that we're sampling
volatile organics or a contaminant that varies with depth. This approach would
be useful in determining whether the sampling of that contaminant is being done
well. If you take a field duplicate sample and you go down, say, four inches and
your contamination is in the first two inches of the surface, then this method will
allow you to see that kind of variability from how the samples are actually collected.
This method would also allow you to look at the loss of volatile organics. By the
time the samples get to the lab, it's more difficult with volatile organics and we
need to do some more research to see if this approach is applicable. But those are
some of the questions that can be answered by using this approach.
Both methods have been used together at a site in Region VII, and they both
yielded very useful information. The GEO Statistical Approach, again, looks at
the question of how many samples you need to collect to characterize a site and
then our method looks at whether those samples are being collected properly,
handled properly, those kinds of questions.
NABIL YACOUB: I have a question about a statement you made about a second
sample collected at about an inch and a half and two inches from the original
which relates directly to this concern. Would this be a measure of the effect of
sample handling, the performance of the laboratory, containers, etc.? I beg to
differ because we are introducing here a variable that might bias the results.
Would you consider this sample as a split sample? If not, would you consider a
split sample more representative of the effect of these operations rather than this
end?
JEFFREY VAN EE: You need to use a combination of samples together. We are
assuming (although we can disprove it) that the spatial variability in those two
inches is insignificant. We can disprove it by the introduction of other samples
throughout the process. Once we collect a field duplicate, we could split that
sample and then analyze it separately to get a handle on errors down the line:
handling in the subsampling of the core or analytical errors. If we do come back
with this analysis and see that we do indeed have tremendous differences in
moving two inches away and we compare that to the GEO Statistical Approach
then we've got some real problems in characterizing that site.
A lot really depends on how the contaminant was distributed at the site. If the
contaminant was uniformly distributed at a site, then I would expect the spatial
variability to be low. If we have leaking drums, we might just happen to hit on
that area, and if we move two inches over we would get a dramatically different
result. But the more samples we collect, the more field duplicates we collect,
presumably we will get a more representative idea of where the variability is. If
we were to rely on just one field duplicate or a few, then we would really be prone
to some of the misjudgments that you're alluding to.
ROY KAY: As I understand it, the objective of sampling and population
comparison within samples is to provide a cost-effective means of reducing the
total sampling costs while maintaining a high level of accuracy. Am I correct
there so far?
JEFFREY VAN EE: Yes.
ROY KAY: Has there been any cost evaluation information developed on the
relative cost of going through the process of designing and multiple batching
your samples versus simply expanding randomly the samples that you take—
particularly if you're starting from a nonhistorical, time-zero point of view?
JEFFREY VAN EE: I think a lot depends on the objectives that you establish
for that site. You need to look at the economics of collecting more samples, what
type of samples, versus the kind of action that you're going to be taking. If you
know that you're going to be cleaning up the site in large part, then taking a lot
of samples may not be appropriate.
But if the cost of that clean-up is significant, if the cost of disposing of the
contaminant is significant, then you will want to pay more attention to how
accurately you can characterize the site. And then, of course, you want to know
whether the data that you're getting represents the site or whether it is more
representative of variabilities in the measurement process.
I'm not sure I really answered your question well. It's a difficult question to
answer, because it varies depending upon the site.
ROY KAY: I'm looking at a situation where in a time-zero, first evaluation of
a site, there are certain theoretical things that you had assumed, like if you have
an explosion of some kind, it would naturally be expected to disperse contami-
nants. Whereas a leaking drum would expect to leach in a continuous fashion and
probably in all geometric dimensions. That is. of course, is a seat-of^the-punts
guess in each individual case. But lacking historical experience on that particular
site, do the sampling techniques dial in on the proper variables and reduction of
their influence faster than simply expanding the sampling population?
JEFFREY VAN EE: In a situation like that I would weigh more QA samples,
as well as more samples, period, early on in the process. You can hopefully back
off as you learn more about the site. Now that's assuming you don't have
historical information on how well that particular contractor performs out in the
field, or how well that particular sampling method performs.
Let me demonstrate very quickly another value that comes out of this process.
Say you're out sampling the site and you're concerned about the change of the
contaminant over time, you may have different labs involved, and you may have
different sampling crews involved. If you do not have a rigorous QA program
instituted, then when the data comes back out of the lab, it's very difficult for you
to say whether that data reflect the pollutant changing over time or whether it's
your measurement process changing over a period of time. So, at some point,
you've got to pay your dues and you've got to start developing that data. We have
a tremendous amount of data right now on how well the contract labs perform,
but we don't have enough data on how well those samples are transported to the
lab and how well they're prepared. Say there's a rainfall event during your
sampling study, how do you know that the data you collect after that significant
event is comparable to the data before that event?
JANINE ARVIZU: Could you describe some of the programmatic applications
of the program and whether or not there were any good real world experiences
learned?
JEFFREY VAN EE: The philosophy I'm espousing today is relatively simple
and it's relatively new. My hope is that more people will pick up on it whether
they're in RCRA or Superfund Programs. I think we really do need to demon-
strate where the variability is throughout the measurement process. Right now
I'm simply advocating that we try it. How well it's used remains to be seen. We
have applied it to a Superfund site in the middle part of the country and we looked
at the spatial variabilities. As a result of our efforts using GEO Statistics, we
saved about 6 million dollars in the sampling effort at this particular site. We were
able to demonstrate that the sampling method that they were using, while it was
crude, was sufficient to meet data quality objectives. We were able to tell them
that they could back off on a number of samples that they're taking in certain
areas, because the measurement variability was relatively low. They weren't
getting a lot of variability in the compositing of the samples. We have had a few
success stories, but not nearly enough. We can just hope with time there will be
more stories like that.
234
-------
A REVIEW OF EXISTING SOIL QUALITY ASSURANCE MATERIALS
Kaveh Zarrabi, Chemist
Amy Cross-Smiecinski, Quality Assurance Officer
Thomas Starks, Senior Statistician
Environmental Research Center
University of Nevada, Las Vegas
4505 S. Maryland Parkway
Las Vegas, Nevada 89154
ABSTRACT
Assessment of the quality of environmental data
often depends on the availability of quality
assurance (QA) materials to measure errors at
various stages of the measurement process. A
rigorous approach has been developed to
evaluate the quality of data from the sampling
of metals in soils. "A Rationale for the
Assessment of Errors in the Sampling of Soils"
was written for application to hazardous waste
site investigations. The rationale described
is based primarily upon duplicate and split
samples and QA materials known as performance
evaluation materials. The rationale depends,
in varying degrees, on performance evaluation
materials being readily available for use in a
hazardous waste site investigation.
Unfortunately, early experiences in testing the
rationale indicate that inadequate numbers,
types, and volumes of performance evaluation
materials and other types of soil QA materials
exist to fully implement the rationale.
In order to begin to answer questions as to the
necessity of, and alternatives to, soil QA
materials, it is necessary to know the current
availability and the state of research and
development of soil QA materials. The intent
of this paper is to provide such information -
what materials are available and what is being
done to provide more materials.
INTRODUCTION
SCOPE
Millions of dollars are spent in designing and
implementing monitoring and remediation programs
for hazardous waste sites. It is the Agency's
responsibility to ensure that the data resulting
from these programs are of adequate quality to be
defensible in a court of law as well as to be
considered scientifically sound.
Quality assurance (QA) materials are an important
part of many environmental sampling and analysis
programs today. Results from the analyses of
hazardous waste site samples are often accepted
or rejected solely on the basis of data obtained
from QA samples analyzed for Agency programs
ranging from water quality monitoring to
hazardous waste remediation. It is alarming that
only a limited supply of these QA materials is
available for soil sampling and analysis (Table
1). What does a project manager do when no QA
materials exist? It is the intent of this report
to discuss the need for soil QA materials in many
environmental programs[1,3] and to demonstrate the
limited availability of these materials. An
alternative to the use of manufactured QA
materials is briefly described as are approaches
for increasing the supply and variety of the most
commonly needed soil QA materials. This report
does not purport to have the answer to the
scarcity of soil QA materials, but simply to
point out the problem and explore some solutions
with the hope that more attention will be given
to the issue.
RESEARCH
Research in the area of QA materials has been
limited. In fact, the bulk of the information
gathered for this report came from catalogs,
personal communications, and internal reports.
The following examples were obtained through a
literature search. Recently, Taylor[3] published
a comprehensive book, Quality Assurance of
Chemical Measurements. The book discusses the
basic concepts of quality assurance and provides
details on evaluation samples, traceability, and
235
-------
reference materials. Seward[4] of the National
Institute of Standards and Technology (NIST),
formerly the National Bureau of Standards
(NBS), published a book which contains 25
papers describing national and international
programs for the development of reference
materials. The selection criteria, use of
statistics, and steps for certification of
standard reference materials are discussed.
Reports of 15 panel sessions reviewing the use
of and needs for reference materials are
included.
Cali[5] of NIST, in another NBS monograph,
examines the general use of standard reference
materials and their role in the measurement
system. Further, procedures for certification
of standard reference materials are discussed,
and examples of several selected industries are
given in which standard reference materials
have made a significant contribution. Steger[6]
compiled the information on all of the
available certified reference materials through
the Canadian Certified Reference Material
Project. Taylor[7] published a handbook for
standard reference material users. The
preparation and analysis of reference materials
have been discussed and documented by several
programs.[8-13] In other studies, the
design and stability of reference
materials have been evaluated.
Another search of "Chemical Abstracts" from the
year 1979 to the present resulted in just five
more references. Studies in which the QA
materials were used range from proficiency
samples discerning between immunoinhibition and
electrophoretic measurement to soil and
geological reference materials.
SOIL QA MATERIALS
DEFINITIONS
The uses of QA materials have been predefined
for the purposes of this paper in the EPA
report referenced in the abstract: "A
Rationale for the Assessment of Errors in the
Sampling of Soil."m Briefly summarized, there
are two basic uses of QA materials: quality
assessment or evaluation (QAS) and quality
control (QC). QAS samples are intended to aid
in evaluating data quality and can be used in
QC. QC samples are used specifically on a
real-time basis to detect and correct problems
before a large body of erroneous or out-of-
control data is generated. The main difference
between the two uses becomes evident when the
data generated from them are interpreted. QAS
data are usually analyzed at the end of
studies, whereas QC data are analyzed as they are
generated; hence, the quality is "controlled."
QAS and QC samples exist in several types such as
reference materials and performance evaluation
materials. Reference materials are defined as
having "one or more properties which are
sufficiently well established to be used for the
calibration of an apparatus, for
the assessment of a measurement method, or for
assigning values to materials." Reference
materials are typically used as QC samples but
can be used as QAS samples. Originally, soil QA
materials came into existence as reference materials,
and they are slowly evolving into important components
of QA programs.
Performance evaluation materials[2,17] often are
associated with an analytical program in which
participants submit results to a central
authority who "grades" the data either in
comparison to the pooled results of all of the
participants or against a "referee" laboratory in
order to judge the overall performance or
accuracy of the laboratory. Performance
evaluation materials are, therefore, examples of
QAS samples.
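As a simple illustration of such grading (the z-score convention and the acceptance limit below are common practice and are assumed here, not taken from this paper), one laboratory's result on a performance evaluation material can be compared with the pooled results of all participants:

import statistics

def grade_result(result, pooled_results, limit=2.0):
    """Grade one laboratory's result against the pooled results of all
    participants using a z-score; abs(z) <= limit is treated as acceptable."""
    mean = statistics.mean(pooled_results)
    sd = statistics.stdev(pooled_results)
    z = (result - mean) / sd
    return z, abs(z) <= limit

# Hypothetical performance evaluation results (mg/kg) from participants.
pooled = [98.0, 102.0, 95.0, 110.0, 101.0, 97.0, 104.0]
z, acceptable = grade_result(123.0, pooled)
print("z =", round(z, 2), "acceptable =", acceptable)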
Whether the data is used on a real-time basis
(QC) or at the end of a study (QAS), the overall
effect of a QA sample is to evaluate measurement
system performance. The sample may be used to
evaluate a whole system, from sampling through
data validation, or a part of the system, such as
extraction efficiency.
An important issue for soil sampling and
analytical QA is how closely soil QA samples
represent the routine samples of interest. A QA
sample should be similar to the routine samples
for the analytical parameter in order for a true
correlation to exist between the two. Analytes
spiked onto potter's clay or sand probably do not
accurately mimic environmental samples visually
or analytically and, therefore, test only the
recoverability of the analytes from the clay or
sand in combination with the competence of the
analysts. In the chemical analysis of natural
soil samples, it is especially important that a
QA sample be of a similar soil type as that of
the samples being analyzed to eliminate the
effects of various matrix effects on analytical
measurements and final results.
This paper deals with three basic types of QA
soil samples which are non-blind, single-blind,
and double-blind soil QA samples. Non-blind QA
samples are used for internal quality control and
for calibration. Single- and double-blind QA
samples are used in quality assessment and
external quality control. All three types of
QA materials have been successfully
236
-------
utilized to control and evaluate laboratory
measurements.[18,19]
Non-blind QA Samples
These samples are not blind to the analyst.
The identity and reference values of the sample
are known. Reference materials and laboratory
control samples are examples of non-blind
samples.
Single-blind QA Samples
Single-blind QA samples are used principally as
a reference point in analyses, the data from
which serve as a guide to acceptance or
rejection of routine sample data. A single-
blind QA sample is known to be a QA sample, but
its composition is not known to the analyst.
A performance evaluation material is an example
of a single-blind QA sample.
Double-blind QA Samples
Double-blind QA samples are used as a basis for
acceptance or rejection of routine sample data
and for quality assessment. The difference
between single- and double-blind QA samples is
that the double-blind QA sample is intended to
be indistinguishable from a routine sample.
Visually, the QA sample resembles the routine
sample in container type, numbering system, soil
texture and soil color. Analytically, the QA
sample resembles the routine sample in
interferences, coanalytes, etc. This minimizes
bias in processing the sample batch. A double-
blind QA sample is even more difficult to
compose or develop because, in addition to
having the same or similar chemical make-up,
the sample must appear to be of the same soil
type. For example, if the soil being sampled
for analysis is a Hagerstown silt loam (a fine
textured medium brown soil with a neutral pH),
an acidic red-colored sand would not be an
appropriate double-blind sample. Spiked field
samples and field duplicates are examples of
double-blind QA samples. Manufactured double-
blind QA materials are rare.
Use of Single-blind and Double-blind QA Samples
Quality assurance samples are used to detect
bias and to estimate precision in the
measurement system. The advantage of double-
blind QA samples is that they are treated
exactly like the routine samples in the
analytical laboratory and hence should be
exposed to the same types and levels of errors
in the preparation and analytical processes.
Unfortunately, it is often difficult to employ
double-blind QA samples for studies of
environmental pollution. Difficulties in using
double-blind QA samples usually arise for one of
two reasons. The first reason is that the nature
of the pollutant may make it impossible to carry
out the drying, grinding, sieving, homogenizing,
and subsampling (to obtain a laboratory sample)
of routine samples outside the analytical
laboratory. This series of preparatory steps is
essential for obtaining homogeneous soil QA
materials. Such treatment produces QA samples
that look different from the routine samples,
provided the routine samples did not go through
the same process before entering the laboratory.
The second most probable reason is that an
appropriate soil QA material is not available,
and there is insufficient time prior to field
sampling to characterize the soil QA material for
double-blind samples. It should be noted that no
matter how many soil QA materials are available,
it is unlikely that a soil QA material exists
that is appropriate for double-blind samples
unless the material actually comes from the site
under investigation.
If it is not possible to employ double-blind QA
samples in an investigation, an alternative
procedure has been suggested based upon single-
blind samples and additional field duplicate
samples.[1] The additional field duplicate
samples in this alternative procedure allow the
estimation of total measurement error (i.e., the
precision of the measurement system) and the
estimation of the variance contributions of
several of the possible sources of error.
Depending on where they are incorporated into the
sampling and analytical scheme, the single-blind
samples provide means for detecting bias from
sample handling, preparation, and analysis.
Unfortunately, the single-blind QA samples may
miss some of the bias in the laboratory, owing to
special handling by the chemist, to which a
double-blind sample would not have been
subjected. A research study by Rumley[20]
evaluated the effects of favorable treatment of
samples and of alteration of results to reduce
bias on indices of performance in external
quality assessment (EQA) schemes. He concluded,
in fact, that EQA schemes can be affected by
giving favorable treatment to single-blind
samples.
Since there will always be a need for single-
blind soil QA samples, and the need will often
involve situations requiring rapid response, it
seems imperative that an extensive inventory of
soil QA materials be prepared and maintained for
future environmental pollution studies. Double-
blind soil QA samples should be employed where
practicable, and facilities should be available
to produce such samples in an expeditious manner.
237
-------
AVAILABILITY
The establishment and expansion of monitoring
and enforcement programs by federal agencies
requires the use of many QA samples. Federal
agencies such as the U.S. EPA and the Food and
Drug Administration (FDA) established
repositories of QA materials out of the
necessity to support their own programs. The
private sector, although originally interested
in producing standards for calibration of
different instruments, produces QA samples in
various media and for specific environmental
programs (e.g., RCRA) in a limited variety.
Although the listing of soil QA materials
available today (Table 1) may appear
sizable, many analytes are
not represented. At this time, the authors are
unaware of any sources of soil QA materials for
volatile organic analytes. The natural
variability of soils, however, is the factor
that makes a large number of QA materials
necessary. The same factor limits the ability
to manufacture sufficient materials to provide
realistic and/or blind QA materials for all
hazardous waste sites that are being
investigated. This deficiency makes it
difficult to plan and implement many soil
sampling and analysis QA programs.
SOIL QA MATERIALS NEEDED
AGENCY NEEDS
Clearly there is a need for more sources of
soil QA materials. This leads to certain
questions. Which types are most often needed?
Which materials should be manufactured first?
A survey[21] of U.S. EPA officials shows that
all 10 Regions share an interest in a national
QA material program for Superfund analyses,
primarily for use by the Contract Laboratory
Program and by Potentially Responsible Parties
(PRPs).
Although each Region has specific needs, there
is some agreement on analytes. Most interest
is in materials containing Target Compound
List[21] analytes. Special requests include
tetrachlorodibenzo-p-dioxin (TCDD) and
pentachlorodibenzo-p-dioxin and -furan
(PCDD/PCDF) isomers; explosives (RDX); benzene,
toluene, and xylene (BTX), solvents; and
polycyclic aromatic hydrocarbons (PAHs) in
sediment. The number of QA materials needed
per year and their concentrations vary among
the Regions (Table 2).[21] One Regional
official commented that site-specific QA
materials are needed. The value of the soil
QA materials distributed by the EMSL-LV CLP
Performance Evaluation Program has been
proven, but as demonstrated in Table 1, this
program offers a limited variety of samples and
analytes. The EMSL-LV program would need
additional resources in order to be able to
provide a wider variety of materials.
INDUSTRIAL POLLUTANTS
Industrial organic chemicals presently comprise
the highest volume of hazardous waste produced,
followed by wastes from general chemical
manufacturing, petroleum refining, and explosives
(Table 3). According to the Comprehensive
Environmental Response, Compensation, and
Liability Act Information System database, the most
abundant pollutants on the National Priority List
(NPL) of Superfund sites are from the industrial
and general organic chemicals industries,
petroleum refining, and explosives industries
(Table 3). The pollutants found most often on
these NPL sites (Table 4) are Pb, As, Cd, Cr, Hg,
Cu, and cyanides for inorganic pollutants, and
trichloroethylene (TCE), other chlorinated
solvents, and BTX for organic pollutants. The
highest volumes of organic pollutant/waste are
volatile organic compounds (VOCs), while heavy
metals comprise the greatest volume of inorganic
wastes. It would seem that soil QA samples
containing the pollutants specified by the users
(e.g., Regional users) and/or those most commonly
found at the NPL sites should be the first to be
produced.
SUGGESTED RESEARCH
Supplying Blind Soil QA Materials
At this time, preparing and stocking complete
(adequate analytes) and realistic (double-blind
as well as single-blind) soil QA materials is not
feasible due to the tremendous natural
variability of soils. On the other hand, as
stated previously, the variety of QA samples from
present sources is limited (Table 1).
Two general approaches, that overlap somewhat in
their philosophy, are presented for manufacturing
both single-blind and double-blind QA materials.
These are: industry-specific QA materials in
which a limited number of soils are produced that
contain analytes specific to polluting
industries; and site-specific QA materials in
which soils found at hazardous waste sites are
prepared to contain analytes or analyte
combinations commonly found at hazardous waste
sites. Either approach would require a rigorous
multi-laboratory characterization study. As one
example, soils naturally rich in particular
metals could be obtained and processed for either
industry- or site-specific QA materials
238
-------
representing mining industry wastes for sites
with similar soil characteristics.
Industry-specific Materials
Using historical industry data as well as NPL
data, information such as geographic location,
contaminant types, and concentrations can be
mapped and evaluated for any general geographic
trends. This information can then be
correlated with 10-15 general soil types
to narrow the choices of industry-specific
soil/analyte combinations. The next step would
be to collect and homogenize the selected
soils. During homogenization some of the soils
would be spiked with contaminants for
characterization and distribution. This would
result in samples that could be used for non-,
single-, and perhaps double-blind, blank, or
contaminated soil QA materials. The materials
could then be stored at distribution centers to
fill user requests for various industry-
generated hazardous waste sites.
Site-specific QA Materials
Relying on NPL site data in combination with
geographically related soils, a set of site-
specific soil QA samples could be developed.
In this approach, the selected soils could be
collected for spiking and processing, as
described in the previous section; or, using
site-specific soil/analyte combinations, the
materials could be collected from actual
hazardous waste sites, with blanks being
obtained from nearby uncontaminated soils of
similar composition. The artificially composed
materials and the materials obtained from waste
sites could be used during the investigation
and remediation of sites having similar soils
characteristics, or they could be stored and
used throughout the study of the site from
which they were obtained.
Site-specific QA materials have been
successfully manufactured and used for
treatability studies for similarly
characterized sites,[28] as single-blind QA
samples with routine samples, and for
integration of QA data (site comparison
soils)[29] among several projects on a large (21
square mile) site for the duration of the site
investigation and remediation.
A disadvantage in preparing site-specific soil
QA materials is that often they cannot be used
as double-blind samples because their visual
characteristics may be altered by the
processing that is employed to prepare QA
materials. The site-specific approach is very
successful, however, when the site is fairly
dry[15] and sieving is not necessary.
CONCLUSION
Increased public interest in environmental issues
has led to new legislation at both the state and
federal levels. As a result of these laws, many
contaminated sites have been or will be
evaluated. A large number of these sites have
been grossly contaminated by a variety of
hazardous chemicals at different concentrations.
A parallel increase in the number of sites added
to the National Priority List (NPL) and the
number of contaminants regulated by RCRA, the
Superfund Amendments and Reauthorization Act
(SARA/CERCLA), and other federal and state regulations
demands a comprehensive suite of quality
assurance samples[1] or a mechanism to produce
such on short notice. The QA samples should
represent the variety of contaminants at
appropriate concentrations and natural soil
characteristics to provide a true comparison to
real world samples. The authors of this report
recommend that the rationale document[1] described
previously be consulted to determine whether the
information and conclusions presented there pose
serious problems for the investigator. If the
quality of environmental data cannot be
adequately assessed because suitable QA materials
do not exist, then more effort clearly needs to
be made to increase the supply of soil QA
materials.
Future research should include a preliminary
study comparing approaches for producing
realistic soil QA materials. It is felt that
such a study may show that the site-specific
approach produces the most useful soil QA
materials. A multi-laboratory pilot study would
evaluate the advantages and disadvantages of each
approach and should lead to a long-term plan for
providing a supply of soil QA materials.[28,29]
NOTICE
Although the research described in this paper has been funded wholly
or in part by the United States Environmental Protection Agency under
Cooperative Agreement No. CR 814701 with the Environmental Research
Center of the University of Nevada, Las Vegas, it has not been
subjected to Agency review and therefore does not necessarily reflect
the views of the Agency and no official endorsement should be
inferred.
REFERENCES
1. U.S. EPA. 1989. "A Rationale for the
Assessment of Errors in the Sampling of
Soils." EPA 600/X-89/203, Environmental
Monitoring Systems Laboratory-Las Vegas,
NV.
239
-------
2. Hertz, H.S. 1988. "Quality Assurance,
Reference Materials, and the Role of a
Reference Laboratory in Environmental
Measurements." Proceedings, The
International Symposium on Trace Analysis
in Environmental Samples and Standard
Reference Materials. Honolulu, HI, pp.
5-8, January 6-8.
3. Taylor, J.K. Quality Assurance of
Chemical Measurements. Lewis Publishers,
Inc., Chelsea, MI, 1987, pp.159-163.
4. Seward, R.W., editor. Standard Reference
Materials and Meaning/Measurement. NBS
SP 408. National Bureau of Standards,
Gaithersburg, MD, 1973.
5. Cali, J.P. The Role of Standard
Reference Materials in Measurement
Systems. National Bureau of Standards
Monograph 148. NBS, Gaithersburg, MD,
1975.
6. Steger, H.F. Certified Reference
Materials Report 80-6E. Canada Centre
for Mineral and Energy Technology, Ottawa
Canada, 1980.
7. Taylor, J.K. Handbook for SRM Users.
NBS SP 260-100. National Bureau of
Standards, Gaithersburg, MD, 1985.
8. U.S. EPA. 1984. "Quality Assurance
Support: Project Plan for the Superfund
Standards Program." Tr-506-112A
(Internal Report). Project Officer J.G.
Pearson.
9. Bowman, M.S., G.H. Faye, R. Sutarno, J.S.
McKeague, and H. Kodema. Soil Samples
SO-1, SO-2, SO-3, and SO-4: Certified
Reference Material. Report 79-3. Canada
Centre for Mineral and Energy Technology,
Ottawa Canada, 1979.
10. Stoch, H., and E.J. Ring. The
Preparation and Analysis of Reference
Materials and the Provision of
Recommended Values. Progress Report No.
5, Report No. M. Council for Mineral
Technology, Randburg, South Africa, 1983.
11. Holynska, B., J. Jasion, M. Lankosz, A.
Markowitz, and W. Baran. "Soil SO-1
reference material for trace analysis."
Fresenius Z Analytical Chemistry,
322:250-254, 1988.
12. Campana, J.E., D.M. Schoengold, and L.C.
Butler. "An environmental reference
material program: Dioxin performance
evaluation materials." Chemosphere 18(1-
6):169-176, 1989.
13. Inn, K.G.W., W.S. Liggett, and J.M.R.
Hutchinson. "The National Bureau of
Standards Rocky Flats Soil Standard
Reference Material." Nuclear Instruments
and Methods in Physics Research 223:443-
450, 1984.
14. Jorhem, L., and S. Slorach. "Design and
use of quality control samples in a
collaborative study of trace metals in
daily diets." Fresenius Z Analytical
Chemistry, 322:738-740, 1988.
15. Thiers, R.E., G.T. Wu, H. Reed, and L.K.
Oliver. "Sample stability: A suggested
definition and method of determination."
Clin. Chem. 2212:176-183, 1976.
16. McKenzie, R.L., ed. "NIST Standard
Reference Materials Catalog 1990-1991."
NIST Special Publication 260, January,
1990.
17. U.S. EPA. "Annual Summary Report FY89,
Quality Assurance in Support of Superfund."
EPA600/X-90/033, Environmental Monitoring
Systems Laboratory-Las Vegas, NV, February,
1990.
18. Frank, D.J. "Blind sample submission as a
tool for measurement control." Institute
of Nuclear Materials Management, 14(3):112-
117, 1985.
19. Glenn, G.C., and T.K. Hataway. "Quality
control by blind sample analysis."
American Journal of Clinical Pathology
72(2):156-162, 1979.
20. Rumley, A.G. "External Quality Assessment
(EQA): The effect and implications of
favourable treatment of EQA samples."
Medical Laboratory Sciences, 41:295-298,
1984.
21. Bleyler, R. Viar, and Company. "Survey of
Quality Control for Superfund Programs."
April, 1989.
22. Gaskill, A. "News and Views:
environmental reference standards."
Environmental Lab, Z: 12-15, 1990.
23. Butler, L.C. Personal communication. U.S.
EPA, Environmental Monitoring Systems
Laboratory-Las Vegas, NV, 1990.
24. Krieger, J. "Hazardous waste management
database starts to take shape." Chemical &
240
-------
Engineering News, pp. 19-21. February 6,
1989.
25. McCoy, D.E. '"301" Studies provide
insight into future of CERCLA.' The
Hazardous Waste Consultant, March/April
1985, McCoy and Associates, Lakewood,
Colorado, Vol. 3/2: 18-24, 1985.
26. U.S. Department of Agriculture. Land
Resource and Major Land Resource Areas of
the United States. U.S. Soil
Conservation Service, Agriculture
Handbook 296, 1981.
27. U.S. Geological Survey. The National
Atlas of the United States of America.
Department of the Interior. Washington,
D.C. pp. 85-88, 1970.
28. Esposito, P., J. Hessling, B. B. Locke,
M. Taylor, M. Szabo, R. Thurman, C.
Robers, R. Traver, and E. Barth.
"Results of treatment evaluations of a
contaminated synthetic soil." JAPCA 39:
294-304, 1989.
29. Barich III, J.J., G. Raab, R. Jones, and J.
Pasmore. "The Application of X-ray
Fluorescence Technology in the Creation
of Site Comparison Samples and in the
Design of Hazardous Waste Treatability
Studies." First International Symposium
Field Screening Methods for Hazardous
Waste Site Investigations, Symposium
Proceedings. Las Vegas, NV, pp. 75-80,
October 11-13, 1988.
241
-------
TABLE 1. LIST OF GOVERNMENT AND PRIVATE SOURCES FOR SOIL/SOLID QA MATERIALS*
SUPPLIER
Environmental
Research Associates
5540 Marshall St.
Arvada, CO 80002
USA
1-800-372-0122
QA
MATERIAL
Sludge
CLP-priority
pollutant in soil
Hydrocarbon
fuel in soil
Total petroleum
hydrocarbons
(TPH) in soil
Benzene,
toluene, ethyl
benzene and
xylene (BTEX)
in water/soil
DESCRIPTION
Certified QC standards in a
sludge matrix for volatile
(Benzene & TCE), semi-
volatiles (5 BNA), pesticides/
PCB, and metal analysis (11
metals)
Certified QC standards in soil
matrix for Superfund volatiles
(6 to 8 VOCs), semi-volatiles,
trace metals, and cyanide
analysis
Standards of gasoline, No. 2
diesel, heating oil, and crude
oil in a soil matrix
Standardized 50 g QC soil
samples, one specifically
designed for analysis of TPH
in soil in the presence of fatty
acids in screw top bottles
QC set containing two
standard concentrates and
one soil matrix
TYPE & CONCENTRATION
RANGE
Volatiles (5-500 ug/kg)
Semi-volatiles (300-30,000 ng/kg)
Pesticides/PCBs (10-10,000
"g/kg)
Trace metals (1-5,000 mg/kg)
Volatiles (5-500 ug/kg; Sealed
ampoule containing VOCs in
methanol to be spiked into 10 g
of soil)
Semi-volatiles (300-30,000 ug/kg)
Pesticides/PCBs (10-10,000
ug/kg)
Trace metals (1-5,000 mg/kg)
20 g QAS containing unleaded
gasoline (5-500 mg/kg)
No. 2 diesel fuel, heating oil or
crude oil (10-5,000 mg/kg)
Standard 1 - 50 g (100-2000
mg/kg) level
Standard 2 - the presence of fatty
acids (100-2000 mg/kg)
Ampulated 5-500 ug/kg in
CH3OH to be spiked onto 10 g
soil
APPLICATION
40 CFR 503
Evaluation of
laboratory
performance -
especially for
CLP-type
analysis
Evaluation of
specific analysis
for Underground
Storage Tanks
(UST program)
UST program
UST program
-------
TABLE 1. LIST OF GOVERNMENT AND PRIVATE SOURCES FOR SOIL/SOLID QA MATERIALS (Continued)
SUPPLIER
QA
MATERIAL
DESCRIPTION
TYPE & CONCENTRATION
RANGE
APPLICATION
Fisher Scientific
711 Forbes Avenue
Pittsburgh, PA 15219
USA
(412) 562-8300
Solid waste
Real world samples,
homogenized for consistency
and tested for accuracy
Fly ash (4 metals)
Waste water treatment
media (3 metals)
Diatomaceous earth filter cake (4
metals)
Circuit board coating sludge
(5 metals)
Electroplating tank bottoms
(5 metals)
Raw sludge, chrome plating
process (4 metals)
Incinerated sludge (5 metals)
Municipal incinerator ash (8
TCLP metals, 4-4000 ppm)
PAH-contaminated soil
(14 PAH and PCPs, 20-1200
ppm)
Custom Orders
SW846
Water treatment
facilities
SW846
Waste from
electronic
industries
Waste from
electroplating
Waste from
electroplating
Waste from
incinerators
SW 846,
Methods 3050,
6010
SW 846,
Methods 3540,
3550
As required
-------
TABLE 1. LIST OF GOVERNMENT AND PRIVATE SOURCES FOR SOIL/SOLID QA MATERIALS (Continued)
SUPPLIER
National Institute of
Standards and
Technology
Chemistry Bldg. B-
311
Gaithersburg, MD
20899 USA
302-975-6776
QA
MATERIAL
Ore, minerals,
and refractories
Solid organics
DESCRIPTION
QC reference materials for
critically important material
balance in mining and
metallurgical industries
QA materials for analysis of
materials for constituent of
interest
TYPE & CONCENTRATION
RANGE
Copper ores (5 metals,
0.03 ppm to 0.84%)
Fluorospar (CaF2) (97.4 to
98.8%)
Iron ores (Fe, 58 to 90.8%)
Bauxite ores (Al, 21.1 to
28.8%)
Powdered lead-based paint
(Pb, 12%)
Trace mercury in coal
(Hg,0.13ng/g)
Lead in refinery fuel
(5 varieties, 11.0 to
780.0 ug/g)
APPLICATION
Mining and
metallurgical
processing
Lead-based paint
analysis
Heavy metals in
fuel
-------
TABLE 1. LIST OF GOVERNMENT AND PRIVATE SOURCES FOR SOIL/SOLID QA MATERIALS (Continued)
SUPPLIER
National Institute of
Standards and
Technology
Chemistry Bldg. B-
311
Gaithersburg, MD
20899 USA
302-975-6776
QA
MATERIAL
Trace elements
Urban dust
Diesel
particulate
matter
PAH in solid
matrices
Polychlorinated
biphenyls in
sediments
Organics in
marine
sediments
DESCRIPTION
Trace elements in solid
matrices (12 to 42 elements)
Urban dust QA materials for
analysis of organic
constituents
QA materials for analysis of
diesel particulate matter and
its organic constituents
QA materials with variety of
PAHs on solid matrices
QA materials of sediments
contaminated by PCBs
QA materials made of marine
sediment contaminated by
organics
TYPE & CONCENTRATION
RANGE
Urban particulate
(1.0-860 ug/g)
Coal - bituminous
(0.1-100 ug/g)
Coal - fly ash, 4 varieties
(0.2-200 ug/g)
Coal - subbituminous
(0.1-20 ug/g)
Estuarine sediment
(0.5-375 ug/g)
Buffalo River sediment
(0.1-555 ug/g)
10 g
100 mg/ampoules
6 varieties, 1.0-4000 ug/g
In preparation
In preparation
APPLICATION
Evaluation of
laboratory
performance
especially for
analysis of trace
elements in
variety of
matrices
Air pollution
Air pollution
SW 846 or
similar analytical
programs
SW 846 or
similar analytical
programs
General
-------
TABLE 1. LIST OF GOVERNMENT AND PRIVATE SOURCES FOR SOIL/SOLID QA MATERIALS (Continued)
SUPPLIER
Canada Centre for
Mineral and Energy
Technology
555 Booth Street
Ottawa, Canada
K1A OG1
United States
Geological Survey
Geochemistry
Branch
P.O. Box 25046 MS
973
Denver Federal
Center
Denver, CO 80225
U.S. Environmental
Protection Agency
RREL, Releases
Control Branch
Edison, NJ
08837-3079 USA
201-321-4372
QA
MATERIAL
Soil Samples
SO-1, SO-2,
SO-3, SO-4
GXR-1-6
Synthetic Soil
Matrix/I
Synthetic Soil
Matrix/II
DESCRIPTION
Compositional Reference
Materials
Jasperoid soils, Cu millhead
tailings, B horizon soil
30% clay, 25% silt, 20% sand,
20% topsoil, 5% gravel
High organic, low metal
Low organic, low metal
TYPE & CONCENTRATION
RANGE
Clayey soil, sandy podzolic B
horizon with a high organic
content, a calcareous till, and a
chernozemic A horizon
Chemical and physical soil and
mineral properties
Organic: 400-8200 mg/kg
Metal: 10-450 mg/kg
Organic: 40-820 mg/kg
Metal: 10-450 mg/kg
APPLICATION
General
analytical and
earth science for
agricultural,
forestry, and
environmental
applications,
especially for
mining and
metallurgical
operations.
General
analytical and
earth science for
agricultural,
forestry, and
environmental
applications,
especially for
mining and
metallurgical
operations.
Soil treatability
studies
Soil treatability
studies
-------
TABLE 1. LIST OF GOVERNMENT AND PRIVATE SOURCES FOR SOIL/SOLID QA MATERIALS (Continued)
SUPPLIER
U.S. Environmental
Protection Agency
RREL, Releases
Control Branch
Edison, NJ
08837-3079 USA
201-321-4372
U.S. Environmental
Protection Agency
EMSL-LV, QAD
P.O. Box 93478
Las Vegas, NV
89193-3478
702-798-2114
FTS 545-2214
U.S. Environmental
Protection Agency
EMSL-LV, QAD
P.O. Box 93478
Las Vegas, NV
89193-3478
702-798-2114
FTS 545-2214
QA
MATERIAL
Synthetic Soil
Matrix/Ill
Synthetic Soil
Matrix/IV
Dioxin
performance
evaluation
materials
Base-neutral-
acid PEMs
Pesticide PEMs
DESCRIPTION
Low organic, high metal
High organic, high metal
Real World samples
contaminated by dioxin and/
or selected matrices fortified
by dioxin
Sand fortified with selected
BNAs
Real world samples
contaminated with toxaphene
and other pesticides or
selected soil fortified by
selected pesticides & PCBs
TYPE & CONCENTRATION
RANGE
Organic: 40-820 mg/kg
Metal: 500-22,500 mg/kg
Organic: 40-8200 mg/kg
Metal: 500-22,500 mg/kg
Kiln ash, XAD Resin, filter
paper, florisil, clay, sand (20 ppt
to 6 ppb)
TCDD/PCDF soil
Times Beach soil
Times Beach & PCDD/PCDF
soil
Times Beach & Region 9 soil
Low level BNA (400 ppb)
Medium level BNA (15 ppm)
High level BNA (75 ppm)
Mixed level BNA
Toxaphene soil
Pesticide soil 1 (4-40 ug/kg)
Pesticide soil 2 (4-40 ug/kg)
Pesticide soil 3 (30-100 ug/kg)
+ PCB 1016
Pesticide soil 4 (30-60 ug/kg)
+ PCB 1266
APPLICATION
Soil treatability
studies
Soil treatability
studies
SW 840, 8280
SW 846, 8250,
8270
SW 846, 8080
-------
TABLE 1. LIST OF GOVERNMENT AND PRIVATE SOURCES FOR SOIL/SOLID QA MATERIALS (Continued)
SUPPLIER
U.S. Environmental
Protection Agency
EMSL-LV, QAD
P.O. Box 93478
Las Vegas, NV
89193-3478
702-798-2114
FTS 545-2214
QA
MATERIAL
Inorganic PEMs
DESCRIPTION
Selected soil samples fortified
with metals and cyanide
TYPE & CONCENTRATION
RANGE
LCS metals (1 ppm-200,000 ppm)
LCS, cyanide (4-8 ppm)
APPLICATION
SW 846, 6010
Information contained in this table was obtained in September 1990 and may not include some sources of QA materials, despite
the authors' efforts to be accurate and complete.
DISCLAIMER: Mention of trade names or commercial products does not constitute endorsement or recommendation for use.
-------
TABLE 2. SOIL AND WATER PE SAMPLES NEEDED BY THE 10 REGIONS OF THE U.S. EPA[21].
Region
I
II
III
IV
V
VI
VII
VIII
IX
X
Analytes
VOA, BNA, PEST/PCB
soil blanks for VOA and BNA
Dioxin
Unspecified
• TCE 25 ppb
toluene
vinyl chloride
phenols
naphthalene
pentachlorophenol
2 or 3 mixes for each fraction; e.g.
5 analytes
7 analytes
3 analytes (determine in workgroup)
•VOA and BNA from CLP-TCL
PEST/PCBs
Metals
VOA and BNA
case by case; not routine enough to predict
levels or analytes
PCBs
Pest/Herb
PCP
TCE and solvents
dioxin congeners, tetrachloro-specific isomers
Complete TCL (grouped aromatics, PAH,
etc.)
EDB
RDX explosives
TCDD only
PCDD/PCDF
chloroform, carbon tetrachloride
BTX
chlorinated hydrocarbons
VOA and BNA
Heavy metals
include most common and possibly some
more difficult compounds
PAH (sediment)
Levels
same as CLP PE
no detectable levels
isomer specific; not only 2,3,7,8
Unspecified
100 ppb
100 ppb
100 ppb
50 ppb
100 ppb
2 x (CRQL)"
5 x (CRQL)
10 x (CRQL)
-1.5 ppb
-CRQL
-CRQL
•CRQL
100-80,000 ppm (soil)
300-10,000 ppm (soil) ,
300-30,000 ppm (oily matrix)
low ppb (water)
Low (10 x CRQL)
Med (50 x CRQL)
100 ppt; 1 ppb
1 ppb
1 ppb; 5 ppb; 10 ppb (soil)
10 ppt (water)
10 ppb (soil)
20 ppb
wide variety; high for soils,
low for drinking water
asbestos needed but don't expect it
in this effort
low and high (within DOT
regulatory limits)
# PE samples/year
100/type/year
100, or if replace MS/MSD* 1/50
samples
unknown
15-20
15-20
15-20
15-20
up to 100 if convenient and
flexible schedule
200 water, 200 soil
50
20
1500 soil
50 water
50
~30; contractors would like 2
50-75/matrix/analyte set
if replace MS/MSD, 1 per data set
* Matrix spike/Matrix spike duplicate
*Soil samples not requested.
"Contract required quantitation limit
249
-------
TABLE 3. VOLUME OF WASTE GENERATED BY INDUSTRIAL ACTIVITIES PER YEAR[24].
Standard Industrial    Category                                      Hazardous Waste Volume,
Classification                                                       Millions of metric tons
2869                   Industrial organic chemicals                  60-80
2800                   General chemical manufacturing                40-50
2911                   Petroleum refining                            20-30
2892                   Explosives                                    10-15
2821                   Plastic materials/resins                      6-10
4953                   Refuse systems (commercial TSDR* facility)    5-8
2879                   Agricultural chemicals                        5-8
2865                   Cyclic crudes/intermediates                   5-8
2816                   Inorganic pigments                            3.5-5
2812                   Alkalis/chlorine                              2.5-4.5
* Transportation, storage, disposal, or recycling
250
-------
TABLE 4. MOST FREQUENTLY REPORTED SUBSTANCES AT 546 NPL SITES[25].
Rank   Substance                               Percent of Sites
1      Trichloroethylene                       33
2      Lead                                    30
3      Toluene                                 28
4      Benzene                                 26
5      Polychlorinated biphenyls (PCBs)        22
6      Chloroform                              20
7      Tetrachloroethylene                     16
8      Phenol                                  15
9      Arsenic                                 15
10     Cadmium                                 15
11     Chromium                                15
12     1,1,1-Trichloroethane                   14
13     Zinc and compounds                      14
14     Ethylbenzene                            13
15     Xylene                                  13
16     Methylene chloride                      12
17     Trans-1,2-Dichloroethylene              11
18     Mercury                                 10
19     Copper and compounds                    9
20     Cyanides (soluble salts)                8
21     Vinyl chloride                          8
22     1,2-Dichloroethane                      8
23     Chlorobenzene                           8
24     1,1-Dichloroethane                      8
25     Carbon tetrachloride                    7
251
-------
DISCUSSION
JANINE ARVIZU: Have you considered as one of your options for preparation
of these materials, reconstruction of some simulated soils from stockpiles of
individual soil constituents (clays and gravels) and so forth? Based on compo-
sitional analysis of the soils, would you be able to reconstruct QA materials on
a site-specific basis?
AMY CROSS-SMIECINSKI: Yes, we have considered this possibility and
have tried to locate large stockpiles of various types of soils. Most of the sources
of soils that we have found are not extensive. They're small volumes and the
people who distribute them are apprehensive about sending out large quantities.
They are used mostly for routine soil sample analysis.
JANINE ARVIZU: I'm curious as to how you would envision addressing the
problem of accurately dealing with active soils (e.g., biologically active soils or
natural soils that have absorptive properties) and being able to accurately
determine the recovery of analytes from those types of materials?
AMY CROSS-SMIECINSKI: In another study we have in the poster session,
we have looked into various types of soil preservatives, specifically volatile
organic preservatives, to prevent those kinds of degradation and activity. But it's
something that would be a real problem for any type of soil QA material.
LLEW WILLIAMS: I might just comment on something we've been wanting
to try, to see if we can get better representative spiking into QA materials. I think
this has always been a concern, that spiked materials frequently don't reflect true
recoveries. For the same analytes, if they were naturally in a waste material, we
may get fifteen percent (15%) recovery; we spike them and then we
get ninety percent (90%) back.
One of the things that we're looking into right now and some of you who have
the facilities might want to play around with it a little bit, too, is looking at the
concept of using supercritical fluid to put analytes back into matrices, rather than
taking them out. If the concept is a good one to reach down into the pores and draw
analytes out of a matrix, it may be possible to release the pressure and put
analytes deeply into a matrix in a way that they may better assimilate natural
materials.
JANINE ARVIZU: Your concerns about double blind QA samples for soils, I
think are really legitimate. Have you considered the introduction of single blind
QA samples with every analytical batch as an alternative to having a double
blind? Would it serve some of the same purposes?
AMY CROSS-SMIECINSKI: We believe it does and it has. Single blind QA
samples have been used this way for some time, particularly in the dioxin
program. But we feel that the double blind QA samples, although they're very
hard to manufacture, would be the most realistic type of soil QA samples at this
point.
252
-------
EVALUATION OF EMISSION SOURCES AND HAZARDOUS
WASTE SITES USING PORTABLE CHROMATOGRAPHS
R. E. Berkley
Environmental Protection Agency
Atmospheric Research and Exposure Assessment Laboratory
Research Triangle Park, NC
ABSTRACT
Portable gas chromatographs (PGC) cap-
able of direct detection of ambient con-
centrations of toxic organic vapors in
air were operated in field studies while
simultaneous data were taken for compar-
ison by the Canister/TO-14 Method. Sam-
ples were obtained downwind of Superfund
hazardous waste sites, highways, chem-
ical plants, and in locations where
there was concern about odors or nasal/
respiratory irritation. In some cases
two PGCs equipped identically were used
side-by-side or upwind/downwind. In ot-
hers, different columns were used side-
by-side to analyze a larger group of
compounds. Reasonable agreement between
methods was found, even though sampling
techniques were not equivalent. Such
agreement suggests that both methods
were free of sampling errors, and that
the data were substantially accurate.
This paper has been reviewed in accor-
dance with the U. S. Environmental Pro-
tection Agency's peer and administra-
tive review policies and approved for
presentation and publication. Mention
of trade names or commercial products
does not constitute endorsement or re-
commendation for use.
INTRODUCTION
Toxic organic compounds are usually pre-
sent in ambient air at such low levels
(typically about one ppB) that they can-
not be analyzed without preconcentra-
tion. In the TO-14 Method, six-liter
air samples are collected in passivated
canisters and stored pending analysis.
Just prior to analysis they are cryogen-
ically preconcentrated (1). Use of a
portable gas chromatograph (PGC) equip-
ped with a photoionization detector
(PID) sensitive enough to detect organic
compounds at sub-ppB levels without pre-
concentration offers an alternative sam-
ple collection method which produces
data on-the-spot in near real-time.
PID detectors are no longer novel. In
1984-5 Verner (2) and Driscoll (3) re-
viewed more than a decade of PID use in
gas chromatography. There have been
several reports since 1980 describing
analyses of airborne organic vapors with
them. However, none of the instruments
were portable, and sample preconcentra-
tion was always required because those
PIDs were not significantly more sensi-
tive than other kinds of detectors
(4-8). Then Leveson and coworkers dev-
eloped a 10.6 electron-volt PID of sig-
nificantly greater sensitivity and in-
corporated it into a PGC (9). The light
source was an electrodeless discharge
tube which was excited by a radio-fre-
quency oscillator to produce an intense
emission line. The chromatograph was
claimed to detect benzene without pre-
concentration at 0.1 ppB (10-13). How-
ever, the lamp is restricted to low-tem-
perature operation because heating it
would decrease sensitivity by broadening
the emission line. For Leveson's PGC
(Photovac Model 10A10), Berkley estim-
ated a benzene detection limit equival-
ent to 0.03 ppB. The smallest sample
actually analyzed, one microliter con-
taining 1.6 picogram of benzene, produ-
253
-------
ced a 2.3 volt-second peak at maximum
gain. A linear response to benzene was
observed over a wide concentration range
(0.5 to 130 ppB), and injections as
large as one milliliter could be made
without significant loss of chromato-
graphic resolution. Similar sensitivity
to other aromatic compounds and to
chloroalkenes was also observed (14).
Such an instrument obviously should be
useful for air monitoring, but few re-
ports of it have appeared. Lipsky an-
alyzed vinyl chloride from landfills
(15), and Hawthorne analyzed indoor air
in a "research house" (16). Jerpe est-
imated a benzene detection limit of 20
picograms using a Model 10A10 PGC to
which an external capillary column and
constant-volume sample loop had been
connected (17). Users of the Model
10A10 PGC experienced difficulty with
battery endurance, baseline drift, and
on-site data interpretation. These pro-
blems were mostly resolved by the later
series of Model 10S PGCs. Since PGCs
can be more easily transported than
large numbers of canisters, they more
readily produce large volumes of data in
the field. Their disadvantages are that
(a) at present they are limited to low
resolution chromatography, (b) they id-
entify, by retention time only, the lim-
ited number of compounds which they can
detect at low ppB levels, and (c) they
require a skilled operator.
It is difficult to be certain that pre-
concentrated samples are not being spoi-
led by sampling errors. Although sample
integrity during storage in passivated
canisters has been demonstrated in the
absence of highly reactive compounds
(18), artifact formation can be caused,
for example, by HCl (19). We have eval-
uated PGCs in both laboratory and field
operation (20, 21). Because PGCs are
not affected by breakthrough of analytes
from a preconcentration trap, by chem-
ical reactions between collected com-
pounds, or by sample degradation during
storage, use of them in parallel with
the Canister/TO-14 Method could identify
such problems, should they ever occur,
if the two methods could be shown to
consistently produce similar results
under field conditions. That requires
much parallel use over a long period of
time at a variety of sites under differ-
ent ambient conditions using many kinds
of operating parameters. Herein are re-
ported an accumulation of comparative
data obtained during the past two years.
EXPERIMENTAL
Spherical 6-liter electropolished can-
isters (SIS, Incorporated) were used to
collect air samples and store PGC calib-
ration standards. Canisters were clean-
ed by heating to 90°C while evacuating
through a liquid nitrogen trap to a fin-
al pressure below 10 micrometers (mer-
cury equivalent) for two hours. Samp-
ling for direct comparison of canister
and PGC data was done by holding a can-
ister with its inlet less than 10 centi-
meters from the end of the PGC probe and
opening the valve to fill it during the
time the PGC sample pump was running.
Another method of comparison was to per-
form consecutive PGC analyses while
time-integrated canister samples were
being collected. For time-integrated
measurements, evacuated canisters were
fitted with pre-calibrated mechanical
flow controllers, and air was sampled at
25 milliliters/minute for two hours. Air
samples collected in canisters were
transported to a laboratory, cryogen-
ically preconcentrated, and analyzed
using a modified Hewlett-Packard Model
5880A gas chromatograph equipped with
flame ionization and electron capture
detectors. A Hewlett-Packard Model
5970A mass selective detector was used
for some samples. Calibration was based
on 41 organic compounds cited in the
Canister/TO-14 Method (1).
Microprocessor-controlled PGCs (Photovac
Model 10S70) were used. They were
equipped with constant-temperature col-
umn enclosures and 0.53 millimeter ID X
10 meter fused-silica wall-coated open-
tubular (WCOT) columns, a 1.67 meter
section of which was a backflushable pre-
column. Chemically-bonded stationary
liquid phases were used, either CPSil5CB
or CPSil19CB (Chrompack). A KCl/Alumina
porous-layer open-tubular (PLOT) column
of the same size and configuration was
used for extremely volatile compounds.
Ultrazero air (less than 0.1 ppM carbon)
was the carrier gas. An IBM-compatible
laptop computer, using vendor-provided
software via an RS-232 interface, con-
trolled chromatograph operation and data
storage. Chromatographic peaks were id-
entified and quantitated using retention
times and response factors stored in
nonvolatile memory of the PGC micropro-
cessor. The calibration library was
created by analyzing mixtures of anal-
ytes (10 ppB) produced by flow-dilution
of commercially-prepared standards as
described above. Compounds with ioniza-
-------
tion potentials greater than 10.6 elec-
tron-volts were not detected by PGCs at
ambient (below 10 ppB) levels. Before
beginning to sample, a stable baseline
was observed, and the library was recal-
ibrated with a single-compound standard
(approximately 10 ppB) which had been
certified by GC/FID analysis. Chloro-
benzene or tetrachloroethylene were used
as calibrants with the WCOT columns, and
vinylidene chloride with the PLOT col-
umn. During sampling, automatic recal-
ibration was performed every 4 or 5 runs
using the single compound standard, af-
ter which the microprocessor corrected
the retention time and response factor
for the calibrant, then corrected pro-
portionally the retention times and res-
ponse factors of other compounds. Samp-
les were taken every 15 minutes. Air
was drawn into the sample probe (3 met-
ers long X 2 millimeter ID stainless
steel tubing) for 45 to 60 seconds. Then
the sample was injected for 7 to 15
seconds, after which the sample loop was
removed from carrier flow to minimize
peak tailing. The precolumn was back-
flushed by the carrier stream except
while calibrated compounds were passing
through it. Calibration runs differed
from sample runs only in that the loop
received calibration mixture instead of
an air sample. PGCs were sheltered from
drafts and direct sunlight inside a ve-
hicle or building, and a stainless steel
sample probe was extended through a win-
dow or a sampling port. External re-
chargeable 12-volt batteries (Johnson
Controls GC12800 or PP12120 Gel-Cell,
and Sears Die-Hard Marine) were used to
supply power.
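To make the proportional recalibration scheme described earlier in this section concrete, the short Python sketch below rescales a stored library by the ratios between the calibrant's freshly measured and stored retention time and response factor, and then quantitates a peak against its corrected response factor. This is a minimal illustration, not the vendor-provided software; the library entries and numbers are hypothetical.

```python
# Minimal sketch (not vendor code) of the proportional recalibration described
# in the text: the calibrant's measured retention time and response factor
# update the stored library, and every other compound is scaled by the same ratios.

def recalibrate(library, calibrant, measured_rt, measured_rf):
    """library: {name: (retention_time_s, response_mV_s_per_ppb)}"""
    rt_ref, rf_ref = library[calibrant]
    rt_ratio = measured_rt / rt_ref
    rf_ratio = measured_rf / rf_ref
    return {name: (rt * rt_ratio, rf * rf_ratio)
            for name, (rt, rf) in library.items()}

def quantitate(peak_area_mv_s, response_mv_s_per_ppb):
    """Single-point calibration: concentration in ppb from peak area."""
    return peak_area_mv_s / response_mv_s_per_ppb

# Hypothetical library entries (values are illustrative, not from the paper):
lib = {"chlorobenzene": (240.0, 800.0), "toluene": (150.0, 650.0)}
lib = recalibrate(lib, "chlorobenzene", 245.0, 780.0)
print(quantitate(130.0, lib["toluene"][1]))   # ppb of toluene for a 130 mV*s peak
```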
RESULTS AND DISCUSSION
In comparing canister and PGC data it is
important to remember that samples col-
lected by the two methods are not equi-
valent. A PGC analyzes only one of 50
to 70 milliliters of air which enter the
probe during sampling, whereas a repre-
sentative sample of the entire six lit-
ers collected by the canister is anal-
yzed. If the air is well-mixed and dev-
oid of reactive or corrosive materials,
then canister and PGC data should resem-
ble each other, and generally do. How-
ever, if a heterogeneous plume is samp-
led, or if highly reactive materials en-
ter the canister, then PGC and canister
data could differ significantly even
though the "same" air was sampled.
Complaints about episodes of stench at
Marcus Hook, PA were investigated at the
request of EPA Region III. A PGC was
operated in a van at several sites, and
canister samples were taken for compari-
son. The results are shown in TABLE 1.
The PGC twice failed to recognize small
benzene peaks which eluted in the tail
of the large initial peak. The CPSil5CB
column eluted compounds so close toget-
her that resumption of backflush always
interfered with some peak, no matter
when it occurred. In this case toluene
was missed. Trichloroethylene, reported
by the PGC, was never found in the can-
isters. That peak was undoubtedly due to
some other compound which had a similar
retention time. For other compounds,
agreement between the two methods was
reasonable.
TABLE 2 shows samples taken at hazardous
waste sites near Wilmington and New Cas-
tle, Delaware. Concentrations at the
Superfund remediation sites were low,
typical of sub-ppB background levels in
remote areas, showing that buried waste
was not emitting significant quantities
of these compounds into the air. Rela-
tive agreement between PGC and canister
data seemed to improve with increasing
concentration. PGC data for tetra-
chloroethylene at the waste lagoon were
not reported because of a persistent co-
eluting peak. Samples taken by both
methods near the waste incineration
plant show toluene and higher homologues
at significant levels. High levels of
benzene and chlorobenzene were found by
both methods downwind of the Standard
Chlorine plant. For compounds found by
both methods, agreement was reasonable
over a wide range of concentrations.
Under Project 02.01-12 of the US-USSR
Environmental Agreement, samples were
taken at a roadside site about 12 kilo-
meters from Vilnius, Lithuania. Two
PGCs were operated while time-integrated
canister samples were collected. A mo-
bile laboratory stood about 20 meters
from the highway on ground about 2 me-
ters below it. Daytime traffic volume
was moderate-to-heavy without stop-and-
go congestion and subject to a 100 km
per hour speed limit. No industrial ac-
tivity was visible in the immediate vi-
cinity. Two identically equipped PGC's
were compared side-by-side and then up-
wind/downwind. During side-by-side op-
eration inside the mobile laboratory,
the sample probes extended to about 18
meters from the roadway and one meter
above it. TABLE 3 compares colocated
255
-------
and upwind/downwind PGC analyses with
time-integrated canister data. During
colocated sampling canisters were placed
3 and 10 meters downwind of the highway.
Sampling was done during nonturbulent
movement of air across the site and
while traffic density was fairly con-
stant. Average levels of benzene, tol-
uene, ethylbenzene, m,p-xylene (reported
as one compound) and o-xylene found by
the PGC's were in reasonable agreement
with data from the canisters. PGC data
for toluene, and sometimes m,p-xylene,
exceeded average concentrations found in
the 10 meter canisters, even though the
PGCs were farther from the highway.
This discrepancy may have occurred be-
cause the PGCs often sampled the plumes
of passing vehicles. When the PGCs were
deployed across the highway from each
other, PGC-1 was inside a van parked 12
meters downwind while PGC-2 remained
upwind in the mobile laboratory. Canis-
ters were again placed 3 and 10 meters
downwind of the highway. Scheduling
constraints allowed only a half hour of
PGC sampling to be compared to the can-
isters, but downwind PGC results agreed
substantially with canister data.
At a Superfund remediation site in
northwest Georgia, airborne emissions
produced strong odor but contained low
levels of compounds which could be det-
ected by the PIDs. Two PGCs equipped
with CPSil19CB columns were operated
side-by-side while canister samples were
taken for comparison. Data are shown in
TABLE 4. Toluene and xylenes were con-
sistently seen by both methods at sim-
ilar levels. Some styrene was also
seen. These compounds probably came
from trucks and earth-movers on the
site. The CPSil19CB columns provided
better resolution than CPSil5CB columns,
but benzene peaks smaller than one ppB
were missed because the PGC peak-recog-
nition algorithm could not find them on
the tail of the large initial peak.
Compounds which can be analyzed without
concentration by a PGC are those to
which the PID is sensitive and which can
be separated from each other by an iso-
thermal column at low temperature (50°C
maximum). The number of compounds which
can be analyzed can be increased by op-
erating two PGCs side-by-side with dif-
ferent columns. An example is shown in
TABLE 5. The site was about 40 meters
downwind of a dry cleaning plant. PGC-1
was equipped with a KCl/Alumina PLOT
column and used to analyze vinyl chlor-
ide and vinylidene chloride. Since the
PLOT column had very low bleed, the PGC
could be operated at maximum gain
(1000). PGC-2 equipped with a CPSil5CB
column was calibrated for the usual list
of compounds. Traces of vinyl chloride
and vinylidene chloride were found by
PGC-1 but not found in the canisters.
These concentrations were below detec-
tion limit (approximately 0.2 ppB) for
the Canister/TO-14 Method. PGC detec-
tion limits for vinyl chloride and vin-
ylidene chloride were 0.005 and 0.010
ppB, the amounts which would have pro-
duced 5 millivolt-second peaks. The
integration algorithm does not process
smaller peaks. Canister and PGC data
showed tetrachloroethylene at elevated
concentrations. They did not agree
closely, probably because the plume was
poorly mixed. To measure the extent of
agreement between PGC and canister data
a criterion for evaluation is needed.
The absolute difference between results
was chosen because it does not change
drastically with concentration. For
each compound, the averages of absolute
differences are shown in TABLE 6. For
the CPSil5CB column these differences
(from data in TABLES 1, 2, and 5) range
approximately from 1 to 2 ppB. Appar-
ently, absolute differences do increase
slightly with increasing concentration.
Supposing they did not, then at about
100 ppB, relative differences would be
5%. At 10 ppB they would be approx-
imately 10%, and at one ppB, 100%. A
difference of 100% seems large, but sup-
pose one method reported one ppB of tol-
uene while the other reported two ppB.
That difference would arouse little
concern; the data would be considered
similar because both results are
"small". Detection limits for the
Canister/TO-14 Method (about 0.2 ppB)
prevent making such comparisons at sig-
nificantly lower concentrations. For
data taken with CPSil19CB columns
(TABLE 4), agreement was much better,
because those columns retain compounds
longer and resolve them better, so peaks
are more likely to be identified and in-
tegrated properly. Agreement for ben-
zene and styrene was poorer than for
other compounds because benzene was lost
in the tail of the initial peak on every
run, while styrene was crowded by an ar-
tifact peak produced by column bleed.
PGC performance could most readily be
improved by using a column with better
resolution and less bleed, perhaps a
thicker-phase CPSil5CB, which would pro-
256
-------
vide better resolution of early-eluting
compounds and sufficient space between
later peaks to accommodate the minute-
long baseline disturbance which erupts
when backflush resumes. Improvement of
resolution will ultimately be limited by
flow system configuration. Another ad-
vantage of using a column with less
bleed would be that operation at higher
gain could result in lower detection
limits.
CONCLUSIONS
Portable gas chromatographs can rapidly
produce reasonable estimates of ambient
background concentrations of many vol-
atile nonpolar and semi-polar organic
air pollutants which ionize below 10.6
electron-volts. Because they process
data immediately, they are useful for
evaluation of hazardous waste sites,
chemical spills, and other sources of
airborne organic vapors. PGC data gen-
erally agree well with data from the
Canister/TO-14 Method, which provides
further indication that the latter is
generally valid for sampling atmospheres
not contaminated with highly reactive
compounds, even when analyses are de-
layed. Combined Canister/PGC analyses
should be used at uncharacterized sites
or where highly reactive compounds are
suspected. Positive interferences could
affect either PGC or canister data, but
negative interferences might be less
likely to influence PGCs because they do
not store or preconcentrate samples.
Furthermore, when analyses using dif-
ferent sampling methodologies produce
similar results, a preponderance of ev-
idence is created that sampling errors
did not occur and that data are sub-
stantially correct. Comparison of can-
ister and PGC sampling should be exten-
ded to include additional classes of
compounds, especially polar compounds.
REFERENCES
1. Compendium of Methods for the Deter-
mination of Toxic Organic Compounds
in Ambient Air. Environmental Pro-
tection Agency, Atmospheric Research
and Exposure Assessment Laboratory,
Research Triangle Park, NC 27711.
EPA-600/4-84-017. June 1988.
2. Verner, P. J. Chromatogr. 1984,
300, 249-264.
3. Driscoll, J. N. J. Chromatogr.
Sci., 1985, 23, 488-492.
4. Driscoll, J. N.; Atwood, E. S.; He-
witt, G. F. Ind. Res. Dev. , 1982,
24, 188-191.
5. Cox, R. D.; Earp, R. F. Anal.
Chem., 1982, 54, 2265-2270.
6. Rudolph, J.; Jebsen, C. Int. J.
Environ. Anal. Chem., 1983, 13, 129-
139.
7. Nutmagul, W.; Cronn, D. R.; Hill, H.
H., Jr. Anal. Chem., 1983, 55,
2160-2164.
8. Langhorst, M. L. J. Chromatogr.
Sci., 1981, 19, 98-103.
9. Leveson, R. Ger. Offen. DE 3031358,
3-19-83. Leveson, R. C. US-
4398152, 8-9-83. Leveson, R. C.;
Barker, N. J. CA 1158891 A1. 12-20-
83.
10. Barker, J. J.; Leveson, R. C. Am.
Lab., 1980, 12, 76.
11. Leveson, R. C.; Barker, N. J.
Proc. of the Annu. ISA Anal.
Instrum. Symp., 27th, St. Louis,
MO, Mar. 23-26, 1981. Pages
7-12.
12. Collins, M.; Barker, N. J. Am.
Lab., 1983, 15, 72.
13. Clark, A. I.; Mclntyre, A. E.; Les-
ter, J. N.; Perry R. Intern. J. En-
viron. Anal. Chem., 1984, 17, 315-
326.
14. Berkley, R. E. Evaluation of Photo-
vac 10S50 Portable Photoionization
Gas Chromatograph for Analysis of
Toxic Organic Pollutants in Ambient
Air. EPA/600/4-86/041. PB87-132858.
15. Lipsky, D. Proceedings of the APCA
Mid-Atlantic States Section Confer-
ence, Wilmington, DE April 18-19,
1983. Paper D.
16. Hawthorne, A. R.; Matthews, T. G.;
Gammage, R. B. Proceedings, 78th
Annual Meeting - APCA, Detroit, MI,
June 16-21, 1985. Paper 85-30B.
17. Jerpe, J.; Davis A. J. Chromatogr.
Sci., 1987, 25, 154-157.
18. Oliver, K. D.; Pleil, J. D.; McClen-
ny, W. A. Atmos. Environ., 1986,
20, 1403-1411.
257
-------
19. Gholson, A. R.; Storm, J. F.; Jayanty, R. K. M.; Fuerst, R. G.; Logan,
    T. J.; Midgett, M. R. JAPCA 1989, 39, 1210-1217.
20. Berkley, R. E. Field Evaluation of Photovac 10S50 Portable Photoionization
    Gas Chromatograph for Analysis of Toxic Organic Pollutants in Ambient Air.
    EPA/600/D-88/088.
21. Berkley, R. E.; Varns, J. L.; McClenny, W. A.; Fulcher, J. Proceedings of
    the 1989 EPA/AWMA Symposium on Measurement of Toxic and Related Air
    Pollutants, AWMA, Pittsburgh, PA, 1989, pp. 19-26.
TABLE 1. MOBILE PGC AND CANISTER SAMPLING AT MARCUS HOOK, PENNSYLVANIA
April 25, 1990. PGC in van with probe one meter above roof on upwind side.
CPSil5CB column. Concentrations are parts per billion by volume.
Compounds reported: benzene, trichloroethylene, toluene, tetrachloroethylene,
chlorobenzene, ethylbenzene, m,p-xylene, o-xylene, and styrene. PGC and
canister (CAN) results were tabulated for each sampling location, including
Market Street at Railroad Overpass (77°C), Rt. 13 at Trailer Park, and
Railroad Street SW Parking Lot, Trainer, PA (77°C).
[Individual concentration entries are not legible in the available scan of
this table.]
+ An appreciable concentration of hydrocarbons (not calibrated) was observed
in the canister sample.
* Toluene detection by PGC prevented by incorrect placement of valve time.
ND Not detected. Peak was absent or smaller than 5 millivolt-second.
258
-------
TABLE 2. PGC AND CANISTER DATA AT HAZARDOUS WASTE SITES IN NORTHERN DELAWARE
April, 1989. Samples taken at Superfund hazardous waste sites. PGC was
mounted in a van with probe one meter above roof on upwind side. CPSil5CB
column. Concentrations are parts per billion by volume.
Compounds reported: benzene, trichloroethylene, toluene, tetrachloroethylene,
chlorobenzene, ethylbenzene, m,p-xylene, o-xylene, and styrene. PGC and
canister (CAN) results were tabulated for the Grantham Lane, Army Creek, and
Delaware Sand & Gravel sites (April 5, 1989) and for additional locations
sampled on April 6, 1989.
[Individual concentration entries are not legible in the available scan of
this table.]
ND Not detected. Peak was absent or smaller than 5 millivolt-second.
-------
TABLE 3. COLOCATED AND UPWIND/DOWNWIND PGC AND CANISTER OPERATION IN USSR
Vilnius, June 1989. Colocated: PGCs in mobile laboratory. Probes 2.5 cm
apart, 18 m from highway. Canisters sited on same side of road as PGCs and
filled continuously between 1631 and 1815. Data not shown if either PGC was
recalibrating. Upwind/downwind: PGC-1 in van 12 meters downwind of roadway
with probe extended 1.5 meters above roof. PGC-2 in mobile laboratory.
Canisters downwind of road and filled continuously between 1100 and 1300.
CPSil19CB columns. Concentrations are parts per billion by volume.
[Individual 15-minute PGC runs are not legible in the available scan; the
average PGC and canister values below are as recoverable.]

COLOCATED DATA, June 1, 1989
                                Benzene  Toluene  Ethylbenzene  m,p-Xylene  o-Xylene
Average PGC values during the canister sampling period
  PGC-1                            0.9      2.4        0.0          0.7        0.0
  PGC-2                            0.8      2.5        0.0          0.0        0.1
Canister sample values
  3 m from roadway                 2.1      3.1        0.4          1.2        0.5
  10 m from roadway                1.1      1.3        0.2          0.5        0.2

UPWIND/DOWNWIND DATA, June 2, 1989
Average PGC values during the canister sampling period
  PGC-1 (downwind)                 2.3      5.9        0.0          2.8        0.0
  PGC-2 (upwind)                   3.3      1.6        0.0          0.0        0.0
Canister sample values
  3 m from roadway                 2.1      3.1        0.4          1.2        0.5
  10 m from roadway                1.1      1.3        0.2          0.5        0.2

ND Not detected. Peak was absent or smaller than 5 millivolt-second.
260
-------
TABLE 4. SIDE-BY-SIDE PGC AND CANISTER DATA AT LAFAYETTE, GEORGIA
June 6, 1990. Shaver's Farm Superfund Site. PGCs in van were moved to several
sites. CPSil19CB columns. Concentrations are parts per billion by volume.

                 Tri-             Tetra-
                 chloro-          chloro-  Chloro-  Ethyl-   m,p-
        Benzene  ethylene Toluene ethylene benzene  benzene  Xylene  o-Xylene Styrene
Site 1
 PGC-1    ND       ND      1.10     ND       ND      0.67     1.31     0.12      *
 PGC-2    ND       ND      1.75     ND       ND      0.62     1.84     ND      18.03
 CAN      0.7      ND      1.0      0.1      ND      1.0      1.6      0.8       5.9
Site 2
 PGC-1    ND       ND      0.99     ND       ND      0.46     0.28     ND        *
 PGC-2    ND       ND      0.18     ND       ND      0.69     0.58     ND      13.68
 CAN      0.3      ND      0.7      0.2      ND      0.8      0.8      0.4       4.5
Site 3
 PGC-1    ND       ND      0.59     ND       ND      ND       ND       ND        *
 PGC-2    ND       ND      ND       ND       ND      ND       ND       ND       ND
 CAN      0.1      ND      0.4      ND       ND      0.2      0.4      0.2       0.2
Site 4
 PGC-1    ND       ND      0.32     ND       ND      ND       0.04     ND        *
 PGC-2    ND       ND      0.10     ND       ND      ND       0.07     ND       ND
 CAN      3.0      ND      0.2      ND       ND      0.1      0.2      0.2       0.2
Site 5
 PGC-1    ND       ND      0.08     ND       ND      ND       ND       ND        *
 PGC-2    ND       ND      ND       ND       ND      ND       ND       ND       ND
 CAN      0.1      ND      0.1      ND       ND      ND       ND       0.1      ND
Site 6
 PGC-1    ND       ND      0.19     ND       ND      ND       ND       ND        *
 PGC-2    ND       ND      ND       ND       ND      ND       ND       ND       ND
 CAN      0.1      ND      0.3      ND       ND      0.2      0.4      0.3       0.4

* PGC-1 was not calibrated for styrene because of a persistent interfering
peak probably caused by column deterioration.
ND Not detected. Peak was absent or smaller than 5 millivolt-second.
261
-------
TABLE 5. TANDEM PGC DATA AND CANISTER DATA IN
RESEARCH TRIANGLE PARK, NORTH CAROLINA
March 23, 1990. PGC-1 analyzed vinyl chloride and 1,1-dichloroethylene with a
KCl/Alumina PLOT column. PGC-2 analyzed other compounds with a CPSil5CB column.
PGCs in car with probes 1.5 meters above the roof on upwind side, 40 meters
downwind of dry-cleaning plant. Concentrations are parts per billion by volume.

                  1,1-Di-                      Tetra-
        Vinyl     chloro-                      chloro-   m,p-
        chloride  ethylene  Benzene  Toluene   ethylene  Xylene   Styrene  o-Xylene
PGC       0.01      0.02      ND       ND        1.41      ND       ND       ND
CAN       ND        ND        0.63     0.56      3.38      0.35     0.20     0.20
PGC       ND        0.01      ND       ND        4.09      ND       ND       ND
CAN       ND        ND        0.76     0.62      3.15      0.29     ND      <0.20

ND Not detected. Peak was absent or smaller than 5 millivolt-second.
TABLE 6. AVERAGE ABSOLUTE DIFFERENCES BETWEEN PGC AND CANISTER DATA
Absolute values of differences between PGC and canister results for each
compound were averaged. CPSil5CB data taken from TABLES 1, 2, and 5.
CPSil19CB data taken from TABLE 4, in which two PGC values for each analysis
were averaged. CPSil5CB is methylsilicone. CPSil19CB is 7% cyanopropyl-
silicone, 7% phenylsilicone, 85% methylsilicone, and 1% vinylpolysiloxane.
Phase thicknesses 2 micrometers. Differences have dimensions of parts per
billion by volume.

                              Column
Compound               CPSil5CB   CPSil19CB
Benzene                   1.49       0.72
Trichloroethylene         2.03        *
Toluene                   0.75       0.15
Tetrachloroethylene       1.27       0.05
Chlorobenzene             1.17        *
Ethylbenzene              0.97       0.20
m,p-Xylene                1.69       0.30
o-Xylene                  0.40       0.31
Styrene                   1.53       3.69

* No data were available for these compounds.
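A minimal sketch of how the TABLE 6 entries are obtained: the two PGC readings for each analysis are averaged first, and the absolute PGC-canister differences are then averaged over all analyses. The hard-coded values below are the benzene column of TABLE 4 (with ND taken as zero) and reproduce the 0.72 ppB entry above; the helper name is mine.

```python
# Sketch of the TABLE 6 calculation: average absolute difference between
# PGC and canister results, with the two PGC values for each analysis
# averaged first (values here are the benzene entries from TABLE 4).

def avg_abs_difference(pgc1, pgc2, can):
    """Average |mean(PGC-1, PGC-2) - canister| over all analyses.
    ND (not detected) entries are represented as 0.0."""
    diffs = [abs((a + b) / 2.0 - c) for a, b, c in zip(pgc1, pgc2, can)]
    return sum(diffs) / len(diffs)

pgc1 = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]   # benzene, PGC-1 (all ND)
pgc2 = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]   # benzene, PGC-2 (all ND)
can  = [0.7, 0.3, 0.1, 3.0, 0.1, 0.1]   # benzene, canister

print(round(avg_abs_difference(pgc1, pgc2, can), 2))  # -> 0.72 ppB
```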
262
-------
DISCUSSION
WARD FURTAUGH: We also have a Photovac 10S70. I used it in a smoking
lounge and at a gain of 100, there was a monster peak occurring near the retention
time of toluene. Any suggestions what it could have been? The Photovac people
that I've talked to haven't been able to shed much light on it.
RICHARD BERKLEY: Indoor air is a pretty tough thing to deal with. You
normally see ambient background levels of things like toluene as a result of single
photon absorptions. In indoor air you can have ppm concentrations, so you can
see things which are ionized in double photon absorptions. You can, for example,
calibrate things like carbon tetrachloride and chloroform, which you can't see at
ambient background levels. So, in indoor air all bets are off, and I have seen
some horrendous things in indoor air which are probably relatively high levels
of things that the instrument normally can't see. If it didn't have the retention time
of toluene, and if you were using a constant temperature column accessory, the
chances are very good that it was not toluene. It may be a much larger level of
something else.
TOM SPITTLER: We just did an air study of our building in Boston using the
Photovac. And we found 1 to 2 ppb of benzene and toluene in every place because
it's a very well ventilated building. But in the smoking room we found about 100
ppb of toluene and about 50 of benzene. There wasn't any question, the retention
times matched beautifully. We took samples back and confirmed them on GC/
MS. You get benzene and toluene in all smoking rooms. I'm not sure why your
peak wasn't exactly there, but I bet anything that's what it was. A question
though: you were using canisters and the Photovac with what? Occasional
sampling or regular sampling? How often did you sample with the Photovac in
order to cover the period of time you were drawing the canister sample?
RICHARD BERKLEY: In most cases what I did was take a canister grab
sample by holding the canister within ten centimeters of the tip of the Photovac
probe and opening the can so that it filled during the same time, during the minute
or so that the Photovac pump was running. These samples are necessarily
nonequivalent. In the canister you get six liters, and you take a representative
sample of that to analyze it. The Photovac takes a milliliter or something that
happened to be flying through at the moment when it decided to inject. These are
not equivalent, but if the same air is being sampled they ought to resemble each
other.
TOM SPITTLER: Yes, I agree. I think it's really a nice correlation.
RICHARD BERKLEY: So, in all cases expect variances while taking those
grab samples. Variances were seen with two-hour integrated canister samples,
and we were taking Photovac runs during the time.
TOM SPITTLER: You just averaged them then?
RICHARD BERKLEY: Well, the canister samples were shown as dotted and
dashed lines because they were the time-integrated samples. If you could go back
and compare those slides, you'd find that all those dotted and dashed lines were
at the same level on all of those slides. We couldn't quite figure out how to show
the continuity there.
TOM SPITTLER: No, I thought it was really nice data. This afternoon a couple
of guys from the Regional Lab up in Boston are going to show some Photovac
versus canister standards and calibrated by different techniques. They are
actually directly comparable samples, and you see the same basic kind of
correlation. It may be a little tighter now because they're sampling exactly the
same way and they're sampling the same known mixture of air.
RICHARD BERKLEY: Something I forgot to mention and it'll be important
to some people, we are using canisters to hold our calibration standards, and
we're preparing the standards the same way we prepare the standards for the
method that is used to analyze the canisters. There is no independence on that
point. These two methods are locked together, and if we make a mistake on one,
we make a mistake on the other. What's independent here is sampling methodology,
and I should have said that.
JOSEPH EVANS: My question pertains to detection limits. I notice that you're
down measuring at very low levels (1-2 ppb). Your worst agreement was at those
levels. When you got to the higher levels you had much better agreement. And
I was wondering about how close you were to your detection limits for the two
different methods?
RICHARD BERKLEY: Well, there are two limits to talk about here. One of
them is detection limit, and for single photon ionizations, compounds that ionize
well below 10 eV, such as benzene, its homologs and the chloroethylenes. We
measured detection limits by extrapolation, three times the baseline noise, using
an old 10A10 with a gain turned all the way up, and it appeared that the absolute
detection limit was somewhere in the neighborhood of 18 femtograms. That
would translate out to down in the neighborhood of 1/100 ppb in a 1 mL sample.
That's just a detection limit. The instrument in fact will refuse to process any peak
that is smaller than five millivolt seconds. And of course, when we did that
detection limit we were only extrapolating — our smallest sample was 1.6
picograms, and it produced a peak of about 2.3 volt seconds. We were nowhere
near this extrapolated detection limit with any sample we actually delivered to
the instrument. So, we're just guessing. But, we do have a substantial basis to
guess that a 5 millivolt second peak is way above that. And all you have to do if
you want to really get tough about what the detection limit is, is to run a sample
on a blank library, then shift to a calibrated library and calculate how much it
would take to make a five millivolt second peak, assuming linear response.
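The two calculations Berkley sketches here are easy to reproduce. Assuming an ideal-gas molar volume of 24.45 L/mol (25 °C and 1 atm, not stated explicitly in the transcript), the snippet below converts an absolute mass in a 1 mL injection to a mixing ratio, and also turns the 5 millivolt-second integration threshold into a concentration limit for a given response factor. The response factors shown are hypothetical, back-calculated only to illustrate the arithmetic.

```python
# Two illustrative conversions (assumptions noted in comments, not from the paper):
# (1) absolute analyte mass in an air sample -> mixing ratio in ppb (v/v),
# (2) the 5 mV*s minimum integrable peak -> a concentration detection limit.

MOLAR_VOLUME_L = 24.45   # L/mol, assumed ideal gas at 25 degC and 1 atm
MIN_PEAK_MV_S = 5.0      # smallest peak area the PGC integration algorithm accepts

def mass_to_ppb(mass_g, mol_weight_g_mol, sample_vol_ml):
    moles_analyte = mass_g / mol_weight_g_mol
    moles_air = (sample_vol_ml / 1000.0) / MOLAR_VOLUME_L
    return moles_analyte / moles_air * 1e9

def detection_limit_ppb(response_mv_s_per_ppb):
    """Concentration that would just produce a 5 mV*s peak (linear response assumed)."""
    return MIN_PEAK_MV_S / response_mv_s_per_ppb

# 18 femtograms of benzene (78.11 g/mol) in a 1 mL sample:
print(round(mass_to_ppb(18e-15, 78.11, 1.0), 4))   # ~0.0056 ppb, "about 1/100 ppb"

# Hypothetical response factors chosen to reproduce the limits quoted in the paper:
print(detection_limit_ppb(1000.0))   # 0.005 ppb (cf. vinyl chloride)
print(detection_limit_ppb(500.0))    # 0.010 ppb (cf. vinylidene chloride)
```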
JOSEPH EVANS: What levels were your calibration standards?
RICHARD BERKLEY: We generally try to use between 10 and 20 ppb. If
something is very convenient to prepare like chlorobenzene or tetrachloroethylene,
we like to use one of them on one instrument and the other one on the other
instrument, because there is some tendency, if the calibration gas valve is a little
bit weak, to have some carryover contamination, usually no more than 0.5 ppb.
You do need to look at that for your standard, whatever your standard compound is.
JOSEPH EVANS: You were actually measuring below your lowest calibration
standard?
RICHARD BERKLEY: We're using single-point calibrations. We did a lot of
work on this thing early on and found that we were getting pretty consistent linear
responses from as low a sample as we could inject all the way up to higher than
we could inject.
263
-------
HIGH SPEED GAS CHROMATOGRAPHY FOR AIR MONITORING
Levine, S.P. (A,*), Ke, H.Q. (A), Mouradian, R.F. (A)
Berkley, R. (B) and Marshall, J. (C)
(A) Department of Environmental and Industrial Health,
University of Michigan, Ann Arbor, Michigan 48109-2029
(B) U.S. EPA, AREAL/MRB (MD-44), 79 TW Alexander Dr.,
Research Triangle Park, NC 27709
(C) HNU Systems, 160 Charlemont, Newton Highlands, MA 02161
(*) Author to whom correspondence should be addressed.
Abstract
Gas chromatography has the potential
to be a much faster method of
separation than is usually realized.
If column operating conditions are
optimized for speed and injection
band width is minimized, some simple
separations can be completed in a few
seconds. In the work described here
the system was evaluated using common
organics including alkanes,
aromatics, alcohols, ketones and
chlorinated hydrocarbons.
Quantitative trapping and reinjection
was achieved for all tested
compounds. Limits of detection (LOD)
for many compounds, based on a 1 cm3
gas sample, were less than 1 ppb, but
for one-carbon chlorocarbons the LOD
when using a flame ionization
detector was inadequate. By using
the cold trap inlet with a low dead
volume detector and a high speed
electrometer, the efficiency
available from commercial capillary
columns can be better utilized and
retention times for some routine
separations may be reduced to a few
seconds.
Introduction
Gas chromatography (GC) is often used
for routine, repetitive analysis of
simple mixtures. For some of these
applications, the use of 2 to 5 m
capillary columns operated at linear
velocities of 100 to 200 cm/s offers
the possibility of greatly decreased
analysis times. This potential for
high speed analysis has been
documented in the literature (1-7).
Under optimal conditions, a 0.25 mm
i.d. column should be capable of
achieving 5000 to 7000 effective
plates with retention times of 5 to
10 seconds (4,8). Although this
number of plates is low compared to
most capillary systems, it is
comparable to the number of plates
achieved by many packed column
systems with retention times of
several minutes or more. Therefore,
some routine GC separations that are
currently performed using packed
columns or non-optimized open tubular
columns could be performed much
faster with a capillary system that
is optimized for speed.
While the theoretical potential of
capillary columns for high speed
analysis is well known, limitations
in commercially available equipment,
especially inlet systems, have
prevented general application of high
speed techniques. With most
commercial instruments, the major
265
-------
factors that limit analysis speed are
the width of the initial band
produced by the inlet system and the
response time of the electrometer.
Efficient separation with retention
times of 5 to 10 seconds and a column
diameter of 0.25 mm requires an
initial band width of about 20 ms or
less and an electrometer response
time of about 5 ms. For purposes of
comparison, most capillary GC systems
produce injection band widths of 50
to 500 ms and feature electrometer
response times of 150 ms or longer.
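One way to see where figures of this order come from (an illustrative variance-addition estimate under assumed numbers, not the authors' derivation): a Gaussian peak eluting at 5 s from a column delivering 5000 plates has a standard deviation of roughly 70 ms, so an injection pulse whose own standard deviation is kept in the low tens of milliseconds adds little to the total band width.

```python
# Rough variance-addition estimate of the allowable injection band width
# for fast isothermal GC (illustrative assumptions, not the authors' derivation).
import math

def column_sigma_ms(t_r_s, plates):
    """Standard deviation of an ideal Gaussian peak, in milliseconds."""
    return 1000.0 * t_r_s / math.sqrt(plates)

def max_injection_sigma_ms(t_r_s, plates, max_loss=0.10):
    """Injection sigma that keeps the fractional plate-count loss below max_loss,
    using additive variances: sigma_tot^2 = sigma_col^2 + sigma_inj^2."""
    sigma_col = column_sigma_ms(t_r_s, plates)
    return sigma_col * math.sqrt(max_loss / (1.0 - max_loss))

# A 5 s peak on a column giving 5000 plates:
print(round(column_sigma_ms(5.0, 5000), 1))          # ~70.7 ms peak sigma
print(round(max_injection_sigma_ms(5.0, 5000), 1))   # ~23.6 ms allowable injection sigma
```

With these assumed numbers the allowable injection contribution comes out near 20 ms, consistent with the requirement stated above.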
In response to the requirement for
narrow injection bands, a number of
experimental inlets have been
described (5, 9-13). Our group has
described a prototype cold trap that
was used as a vapor collection device
and which may also serve as a
focusing system for rapid analysis of
simple mixtures (14-15). The design
reported by our group, which expanded
on the innovative work of Hopkins and
Pretorius (16), featured a cold trap
that was cooled by a continuous flow
of cold nitrogen, and was resistively
heated using a current pulse. This
design was a marked improvement over
that reported earlier, which had a
number of unrecognized serious flaws
that prevented reliable and/or
quantitative operation (17-18). More
recently, van Es et al described a
fast GC system that utilized a
similar inlet (19). In their design,
a 50 micron capillary column was used
for the separation.
Experimental Section
The design and operation of the cold
trap is given in detail elsewhere
(14,15), and is shown schematically
in Figure 1.
Operating conditions and
chromatographic equipment. All
chromatograms were collected
isothermally at column temperatures
of 35 to 60 °C using a 5 m long, 0.25
mm i.d. fused silica column with a
0.1 micron bonded methyl silicone
stationary phase (Quadrex). The
carrier gas was hydrogen, which was
supplied at a flow rate of 2.5 to 3
ml/min to produce linear velocities
of 85 to 102 cm/s. The injector and
detector were heated to 225 °C. A
flame ionization detector (FID) was
used in all experiments. To minimize
the effective dead volume, the column
was moved close to the base of the
flame. Either a Varian 3700 or an HNU
301 GC was used.
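As a quick consistency check (my arithmetic, not part of the paper), the quoted flow rates and linear velocities agree for a 0.25 mm i.d. open tube:

```python
# Consistency check: carrier linear velocity from volumetric flow in an open tube.
# (Neglects carrier-gas compressibility; adequate for a rough check.)
import math

def linear_velocity_cm_s(flow_ml_min, id_mm):
    area_cm2 = math.pi * (id_mm / 10.0 / 2.0) ** 2   # tube cross-section, cm^2
    return (flow_ml_min / 60.0) / area_cm2            # 1 mL == 1 cm^3

for flow in (2.5, 3.0):
    print(flow, "mL/min ->", round(linear_velocity_cm_s(flow, 0.25), 1), "cm/s")
# 2.5 mL/min -> ~84.9 cm/s ; 3.0 mL/min -> ~101.9 cm/s (cf. 85-102 cm/s quoted)
```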
For trap recovery studies, test
mixtures were prepared either
without solvent or in high purity
carbon disulfide provided by The Dow
Chemical Company. The injection
volume was 2.5 uL in all cases and
the split ratio ranged from about
50:1 to 500:1 depending on the sample
concentration. For vapor studies,
samples were injected in humidified
or laboratory air in volumes of
0.025-1.0 cm3.
Results and Discussion
Design Considerations. A number of
design considerations were found to
be important in determining the
durability and performance of the
system. The choice of trap material
and dimensions affects durability and
reinjection performance. An ideal
material would have high electrical
resistivity, low chemical activity, a
low coefficient of thermal expansion,
would be highly malleable and would
not work harden. A number of
materials, including stainless steel,
nickel, platinum, Monel 400, and an
alloy of thirty per cent copper -
seventy per cent nickel were
evaluated for use as trap tubes. The
work reported here was done using a
trap made of Monel 400. Stainless
steel, which was used in some early
studies (17, 18), is the least
desirable choice because of its
tendency to work harden and become
brittle. For a trap made of hard-
tempered Monel 400 with an internal
diameter of 0.25 mm, a wall thickness
of 0.18 mm provided a good
combination of strength and
performance.
266
-------
Trapping and Reinjection Efficiency.
Cold traps have been used in GC for
many years (19-23). Since the short,
open tubular trap used in these
experiments may be less efficient
than some other designs (23), a
careful evaluation of trapping
efficiency was necessary.
In order to test trapping and
reinjection efficiency, samples were
injected without using the cold trap
and average peak areas were
calculated for each compound. In
addition to comparing peak areas
obtained with and without trapping,
the FID response was monitored during
the entire process to allow any
breakthrough of the sample to be
detected. At temperatures of -100 °C
or colder, each of the tested
compounds was quantitatively trapped
and reinjected. Peak area
reproducibility for all compounds was
very good with coefficients of
variation ranging from 1 to 5 per
cent, or less in all cases in which
trapping was used.
Compounds tested were (given in order
of increasing boiling point):
isoprene, pentane, dichloromethane,
acrolein, chloroform, methanol, hexane,
carbon tetrachloride, acrylonitrile,
2-butanone, benzene, propanol, heptane,
i-octane, toluene, n-butanol,
tetrachloroethylene, octane, m- & o-
xylene, nonane, 4-ethyltoluene, and
1,3-dichlorobenzene. Detailed results
are given elsewhere (15). Trapping
efficiency was also measured for 1%
solutions of aromatics prepared in
carbon disulfide. The trapping
efficiencies obtained in those
experiments were not significantly
different than those measured without
solvent. These materials can be
effectively trapped and reinjected at
temperatures of -100 °C. However,
trapping behavior is not easily
predicted on the basis of boiling
point or freezing point, and in most
cases an effective temperature must
be experimentally determined for each
type of sample. Highly volatile
materials, which may be gases at room
temperature, and low volatility
materials, which may be difficult to
revaporize, have not yet been tested
and may be difficult to trap and
reinject with this system.
Limit of Detection (LOD). For
monitoring volatile organics in
ambient or workplace air, the LOD of
the method must be very low. As of
early November, 1990, the LOD's for
pentane, hexane, heptane, octane,
benzene, toluene, xylene,
ethylbenzene, 4-ethyltoluene, 1,3,5-
/1,2,4-trimethylbenzenes, and
chlorobenzene have been measured and
been shown to be in the range of 0.2
- 5 ppb, with the most recent results
all being <1.0 ppb. (The drop in LOD
has occurred as a result of improved
methodology as work has proceeded
over the past few months. There has
not been time to re-do some of the
earlier work.)
All of these values were determined
based on an injection of a maximum of
1 cm3 of air, and the use of an FID.
The LOD was calculated based on a
definition of three times the
standard deviation of the noise.
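The 3-sigma definition can be written directly as a few lines of code; the sketch below is generic, and the noise readings and calibration slope are made up purely for illustration.

```python
# Limit of detection from the 3-sigma-of-noise definition used in the text:
# LOD (ppb) = 3 * std(baseline noise) / calibration sensitivity.
import statistics

def limit_of_detection(noise_samples, sensitivity_per_ppb):
    """noise_samples: blank baseline readings (detector units);
    sensitivity_per_ppb: calibration slope (detector units per ppb)."""
    return 3.0 * statistics.stdev(noise_samples) / sensitivity_per_ppb

# Hypothetical example: baseline noise of ~0.2 units (1 sigma), slope 1.5 units/ppb
noise = [0.1, -0.2, 0.3, 0.0, -0.1, 0.2, -0.3, 0.1]
print(round(limit_of_detection(noise, 1.5), 2), "ppb")   # ~0.4 ppb with these made-up numbers
```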
One of the major factors contributing
to the reduced LOD was the
optimization of the custom-designed,
high speed electrometer supplied for
this project by HNU Co. A filter
setting of 12 Hz was found to be
optimal for GC peaks in the retention
time range of 5-10 seconds.
Note that these LOD's are not
achievable for one-carbon
halocarbons. LOD's in the sub-20 ppb
range for certain halocarbons will
only be achievable with the use of an
electron capture detector (ECD).
Unfortunately, an ECD has, of
necessity, a certain internal volume
that may significantly spread peaks,
and reduce the advantage of the Fast-
GC method. This may require assays to
be performed on a 30-60 second basis,
rather than on a 5-10 second basis.
267
-------
In addition, it is important to
remember that the Fast-GC technique
trades chromatographic resolution for
speed. Although the cost of this
trade is reduced by tuning the column
for high speed, low retention time
use (8,14), the separation of
components of complex mixtures may
not always be possible.
Further, the limitations imposed by
the use of an isothermal GC method
(necessitated by the short analysis
times) limit the ability to monitor
compounds of widely differing boiling
point simultaneously. While this
might be overcome by flow-programming
methods, the extent to which such
strategies will allow effective
ambient air monitoring is unknown at
this time.
Acknowledgements
The authors acknowledge Lauri
Mendenhall and George Capps of
Prototype Design Inc. for engineering
and technical assistance in the
development of the capacitor
discharge power supply and
temperature measurement devices.
This research was supported by U.S.
EPA (AREAL/MRB) cooperative agreement
CR-817123-01-0. Earlier work leading
to this stage had been supported by
the Centers For Disease Control,
National Institute for Occupational
Safety and Health Grant R-01-OH02303,
the U.S. EPA (OER) R814389-01, and
The Dow Chemical Company Health and
Environmental Studies Laboratory.
References
1. D. H. Desty. Capillary columns:
trials, tribulations and triumphs.
Advances in Chromatography, Vol. 1,
J. C. Giddings and R. A. Keller eds.,
Marcel Dekker, NY, 1965, pp. 199-228.
2. D. H. Desty, A. Goldup and W. T.
Swanton. Performance of coated
capillary columns. Gas
Chromatography. N. Brenner, J. E.
Callen and M. D. Weiss eds., Academic
Press, New York, 1962, pp. 105-135.
3. J. C. Sternberg. Extra column
contributions to chromatographic band
broadening. Advances in
Chromatography, Vol. 2, J. C. Giddings
and R. A. Keller, eds., Marcel
Dekker: N.Y. 1966, pp. 203-270.
4. G. Gaspar, R. Annino, C. Vidal-
Madjar and G. Guiochon. Influence of
instrumental contributions on the
apparent column efficiency in high
speed gas chromatography. Anal. Chem.
50: 1512-1518 (1978).
5. G. Gaspar, P. Arpino and G.
Guiochon. Study in high speed gas
chromatography. J. Chromatogr. Sci.
15: 256-261 (1977).
6. A. van Es, J. Janssen, R. Bally,
C. Cramers and J. Rijks. Sample
introduction in high speed capillary
GC; input band width and detection
limits. HRC&CC. 10: 273-279 (1987).
7. C. P. M. Schutjes, E. A.
Vermeer, J. A. Rijks and C. A.
Cramers. Increased speed of analysis
in isothermal and temperature-
programmed capillary GC by reduction
of the column inner diameter. J.
Chromatogr. 253: 1-16 (1982).
8. R. Villalobos and R. Annino. The
computer aided optimization of
capillary columns for minimum time
analysis and minimum detectability.
HRC&CC. 12: 149-160 (1989).
9. R. L. Wade and S. P. Cram.
Fluidic logic sampling and injection
system for gas chromatography. Anal.
Chem. 44: 131-139 (1972).
10. R. Annino and J. Leone. The use
of Coanda wall attachment fluidic
switches as GC valves. J. Chromatogr.
Sci. 20: 19-26 (1982).
11. C. P. M. Schutjes, C. A.
Cramers, C. Vidal-Madjar and G.
Guiochon. Fast fluidic logic
injection at pressures up to 25 bar
in high-speed capillary GC. J.
Chromatogr. 279: 269-277 (1983).
12. R. J. Jonker, H. Poppe and J. F.
K. Huber. Improvement of speed of
separation in packed column GC. Anal.
Chem. 54: 2447-2456 (1982).
13. R. Tijssen, N. van den Hoed and
M. E. van Kreveld. Theoretical
aspects and practical potentials of
rapid gas analysis in capillary GC.
Anal. Chem. 59: 1007-1015 (1987).
268
-------
14. Mouradian, R.F., Levine, S.P.,
Sacks, R.D. and Spence, M.W.
Measurement of Organic Vapors at Sub-
TLV Concentrations Using Fast Gas
Chromatography. Amer Ind Hya Assoc
J. 51:90-95 (1990).
15. Mouradian, R.F., S.P. Levine and
R.D. Sacks. Limits of Detection and
Recoveries for Fast-GC. J.
Chromatogr. Sci. 28: 643-648 (1990).
16. B. J. Hopkins and V. J.
Pretorius. Rapid evaporation of
condensed GC fractions. J.
Chromatogr. 158: 465-469 (1978).
17. B. A. Ewels and R. D. Sacks.
Electrically heated cold trap Inlet
system for high-speed GC. Anal. Chem.
57: 2774-2779 (1985).
18. L. A. Lanning, R. D. Sacks, R.
F. Mouradian, S. P. Levine, and J. A.
Foulke. Electrically heated cold trap
inlet system for computer-controlled
high-speed gas chromatography. Anal.
Chem. 60: 1994-1996 (1988).
19. A. van Es, J. Janssen, C.
Cramers and J. Rijks. Sample
enrichment in high speed narrow bore
capillary gas chromatography. HRC&CC.
11: 852-857 (1988).
20. G. Schomburg, H. Husmann and F.
J. Weeke. Aspects of double-column GC
with glass capillaries involving
intermediate trapping. J. Chromatogr.
112: 205-217 (1975).
21. J. A. Rijks, J. Drozd and J.
Novak. Versatile all-glass
splitless sample-introduction system
for trace analysis by capillary GC.
J. Chromatogr. 186: 167-181 (1979).
22. D. Kalman, R. Dills, C. Perera
and F. DeWalle. On-column cryogenic
trapping of sorbed organics for
determination by capillary GC. Anal.
Chem. 52: 1993-1994 (1980).
23. J. W. Graydon and K. Grob. How
efficient are capillary cold traps?
J. Chromatogr. 254: 265-267 (1983).
24. Mouradian, R.F., S.P. Levine,
H.Q. Ke and H.H. Alvord. Measurement
of Volatile Organics at Parts Per
Billion Concentrations Using a Cold
Trap Inlet and High Speed Gas
Chromatography. Submitted to J. Air
Waste Manag. Assoc.
269
-------
Figure 1
Fast-GC system:
A: syringe or gas sampling
   loop injection port;
B: silica transfer line;
C: low dead volume unions;
D: electrical contacts;
E: trap tube;
F: upper chamber cold trap;
G: lower chamber cold trap;
H: baffle;
I: capillary column;
J: flame ion. detector;
K: capacitor power supply
270
-------
DISCUSSION
HANK WOHLTJEN: How much energy did your capacitive discharge heater
use?
STEVEN LEVINE: It's running about 30 to 70 volts discharge with a few tens
of amps.
HANK WOHLTJEN: How big are the capacitors? Are they a tenth of a farad
or something like that?
STEVEN LEVINE: All the details of the design are in that paper in Analytical
Chemistry.
HANK WOHLTJEN: You mentioned electric cooling of the trap. What do you
think you'd use for that, a refrigerator or a thermoelectric?
STEVEN LEVINE: It would have to be a thermoelectric cooler. We are
investigating that at this moment.
JOHN SNYDER: I was curious as to the diameter of the columns you're using.
STEVEN LEVINE: They're just 0.25 mm columns. They're very traditional
columns. They're not megabore. They're not ultra small.
JOHN SNYDER: You also spoke about the dead volume in the detectors. Are
you modifying traditional detectors or are you making your own detectors?
STEVEN LEVINE: We have a 90 µL dead volume ECD from HNU Systems at
this point that we're working with. We feel that size is probably too big.
271
-------
SCREENING VOLATILE ORGANICS BY DIRECT SAMPLING ION TRAP AND
GLOW DISCHARGE MASS SPECTROMETRY*
Marcus B. Wise, G.B. Hurst, C.V. Thompson, Michelle V. Buchanan, and Michael R. Guerin
Analytical Chemistry Division
Oak Ridge National Laboratory
Oak Ridge, Tennessee 37831-6120
ABSTRACT
Two different types of direct sampling mass
spectrometers are currently being evaluated in our
laboratory for use as rapid screening tools for volatile
organics in a wide range of environmental matrices.
These include a commercially available ITMS ion trap
mass spectrometer and a specially designed tandem
source glow discharge quadrupole mass spectrometer.
Both of these instruments are equipped with versatile
sampling interfaces which enable direct monitoring of
volatile organics at part-per-billion (ppb) levels in air,
water, and soil samples. Direct sampling mass
spectrometry does not utilize chromatographic or other
separation steps prior to admission of samples into the
analyzer. Instead, individual compounds are measured
using one or more of the following methods: spectral
subtraction, selective chemical ionization, and tandem
mass spectrometry (MS/MS). For air monitoring
applications, an active "sniffer" probe is used to achieve
instantaneous response. Water and soil samples are
analyzed by means of high speed direct purge into the
mass spectrometer. Both instruments provide a range of
ionization options for added selectivity and the ITMS
can also provide high efficiency collision induced
dissociation MS/MS for target compound analysis.
Detection limits and response factors have been
determined for a large number of volatile organics in air,
water, and a number of different soil types.
INTRODUCTION
Direct sampling mass spectrometry for the
measurement of trace levels of volatile organics in
environmental matrices has a wide range of important
field screening applications. These include the
measurement of volatiles in waters, soils, oily wastes, stack
emissions, and ambient air, among others. In addition, real-
time "sniffing" capability provides a convenient means of
detecting soil gas emissions, leaking waste containers, and
probing the atmosphere in enclosed storage facilities.
Because of their small size, relative simplicity,
ruggedness, and low power consumption, conventional
quadrupole mass spectrometers and quadrupole ion trap
mass spectrometers are especially attractive for
transportable field screening applications. In fact, several
commercial quadrupole based instruments are currently
available for field monitoring applications and recently,
several different research groups have been developing and
demonstrating transportable ion trap mass spectrometers for
on-site GC/MS applications (1-3).
This paper describes the use of an ion trap mass
spectrometer and a tandem source glow discharge mass
spectrometer for the direct measurement of ppb levels of
volatile organics in air, water, and soil. Because these
instruments do not use chromatographic separation prior to
admitting a sample into the mass spectrometer, the response
time is virtually instantaneous and accurate quantification of
target analytes can be accomplished in less than 2 minutes.
Although the tandem source quadrupole mass spectrometer
is somewhat limited in its ability to handle complex samples,
the ion trap mass spectrometer has the capability of
selective ion storage and multiple stages of collision induced
dissociation for much greater specificity.
Laboratory-based instruments are currently being used
to develop and validate methods for direct air monitoring
and the screening of water, soil and waste samples. A
transportable ion trap mass spectrometer for field use is
under construction in our laboratory and will be initially
tested in 6-9 months.
273
-------
EXPERIMENTAL
Instrumentation
Ion Trap Mass Spectrometer
All ion trap experiments were performed with a
Finnigan MAT Corporation ITMS ion trap mass
spectrometer. Our instrument is equipped with a
specially designed vacuum chamber which is
electropolished on the inside and pumped to high
vacuum with two air cooled 330 L/sec turbomolecular
pumps. The vacuum chamber and analyzer cell are
maintained at a constant temperature of 120° C by
means of infrared heating lamps which help to minimize
the adsorption of contaminants on the analyzer surfaces.
This instrument is also equipped with the necessary
hardware and software to perform electron impact (El)
and chemical ionization (CI), as well as selective ion
ejection, and collision induced dissociation multiple-step
(tandem) mass spectrometry experiments (MS/MS).
Control of the instrument and data acquisition are
performed with an IBM AT compatible computer using
software provided by the manufacturer.
The standard chromatographic interface provided
with the ITMS instrument has been replaced with a
custom designed interface developed in our laboratory.
This interface consists of a short length (14 inches) of
110 micron ID uncoated fused silica capillary tubing
which is maintained at atmospheric pressure at one end
and high vacuum at the other end. The high vacuum
end of the capillary is inserted directly into the ITMS
analyzer cell and the atmospheric pressure end is
connected to a quick-coupling device which allows rapid
switching of sampling modules for different monitoring
applications. The gas flow rate through the capillary
restrictor is approximately 0.5-1.0 mL/min. Because the
samples are introduced directly into the ion trap cell,
the surrounding vacuum manifold remains at a lower pressure.
This is believed to help reduce deterioration of the
electron filament and the electron multiplier. For
example, even when sampling water-saturated air for
extended periods of time, the electron filament lifetime
has been approximately 6 months and the multiplier
lifetime has been in excess of 12 months.
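As a rough, order-of-magnitude cross-check (not part of the original work), the flow through such a restrictor can be estimated in Python from the Hagen-Poiseuille relation for compressible laminar flow; the bore, length, and viscosity below are nominal assumptions, and the result (roughly 1-2 mL/min) is of the same order as the 0.5-1.0 mL/min quoted above, with the true value depending strongly on the actual bore, since flow scales with the fourth power of the diameter.

    # Order-of-magnitude estimate (assumption, not from the paper) of flow through
    # a 14 inch, 110 micron ID fused silica restrictor from 1 atm into vacuum,
    # using the Hagen-Poiseuille relation for compressible laminar flow.
    import math

    def capillary_flow_std_mL_per_min(bore_m, length_m, p_in_pa, p_out_pa,
                                      viscosity_pa_s, p_std_pa=101325.0):
        """Volumetric flow referred to standard pressure, in mL/min."""
        r = bore_m / 2.0
        q_m3_s = math.pi * r**4 * (p_in_pa**2 - p_out_pa**2) / (
            16.0 * viscosity_pa_s * length_m * p_std_pa)
        return q_m3_s * 1e6 * 60.0

    # Assumed values: nominal 110 um bore, 14 in length, air viscosity ~2.3e-5 Pa s
    # at the 120 C interface temperature, atmospheric inlet, vacuum outlet.
    flow = capillary_flow_std_mL_per_min(110e-6, 14 * 0.0254, 101325.0, 0.0, 2.3e-5)
    print(f"estimated restrictor flow ~ {flow:.1f} mL/min")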
ITMS Air Sampling Probe
For direct air monitoring experiments, a special
sampling system has been developed as shown by the
diagram in Figure 1. This system consists of an 1/4 inch
OD teflon transfer line which is connected at one end
to the air sample generation system and at the other
end to a sampling "cross" arrangement which allows
helium to be mixed with the air sample prior to entering
the ITMS. The helium is necessary as a buffer gas in
the ITMS to collisionally cool ions, thus reducing loss of
ions from the trap and improving the overall
performance. A pulsed valve is used to meter helium
into the air stream providing approximately an order of
magnitude increase in sensitivity relative to a fixed-ratio,
continuous mixing of helium with the air. A vent port also
located on the inlet "cross" of the sampling system allows
the gas stream to be continuously sampled at a high flow
rate, thus decreasing the response time for the mass
spectrometer. The other port of the inlet "cross" is
connected to a short section of uncoated fused silica
megabore capillary which is used as an "open/split" interface
with the ITMS by inserting 1 inch of the microbore capillary
restrictor into the other end of the megabore tubing.
Approximately 2 L/min of air is drawn through the
megabore tubing by means of a small sampling pump;
however, a metering valve located between the pump and
the splitter can be used to reduce the pumping speed if
desired. This combination of active pumping and the use of
the open/split capillary interface minimizes the dead volume
in the inlet system leading to a response time of only a few
seconds.
Purge Device for Water and Soil samples
For the measurement of volatile organics in water and
soil samples (slurries), the air sampling probe is simply
replaced with a high speed needle sparge purge device as
shown in Figure 2. This device accepts standard 40 mL
VOA vials which mount directly on the needle sparger. A
pressure regulator and a precision needle valve control the
flow of helium purge gas through the sample and the
purged components exit through a 10 inch length of
megabore capillary tubing. Normal helium flow rates vary
from 100 to 200 mL/min which efficiently purges the volatile
components from a room temperature sample in less than 5
minutes. The purge device connects directly with the
capillary restrictor interface in an open-split configuration
with a split ratio of approximately 100:1. The bulk of the
sample is diverted to the vent port. As an added feature
for screening applications, the vent port is capable of
accepting resin cartridges for trapping of components that
would normally be vented. This enables the collection of an
archived sample which may be sent back to a central
laboratory for confirmatory analysis by GC/MS.
Tandem Source Quadrupole Mass Spectrometer
The tandem-source quadrupole mass spectrometer
(TSMS) is a prototype instrument constructed using an
EXTREL C-50 quadrupole mass spectrometer as the basic
system. This instrument was configured with 3/4" diameter
rods for high transmission efficiency and a 300 watt RF
power supply for a maximum mass range of 500 amu.
Control of the instrument is provided by a Dell 325
computer using software written in our laboratory. An axial
El source was purchased with this instrument for testing
purposes and for generating conventional 70 eV electron
impact spectra.
In order to produce a versatile instrument for
environmental monitoring applications, the configuration of
the standard C-50 mass spectrometer was extensively
274
-------
modified. In addition to the axial El source which was
purchased with the spectrometer, a glow discharge
ionization source was designed and constructed for this
instrument. This source is housed in a differentially
pumped vacuum chamber which is separated from the
rest of the mass spectrometer by a 1.5 mm diameter
vacuum conductance limit as shown in Figure 3. The
glow discharge source is typically maintained at a
pressure of 0.25 torr while the analyzer is maintained at
2 x 10⁻⁵ torr. Ions generated by glow discharge
ionization pass through a lens assembly into the high
vacuum portion of the instrument where they enter the
lens assembly of the axial El source and are
subsequently focussed into the mass analyzer.
Air samples can be introduced into the tandem
source quadrupole mass spectrometer by two different
methods, either through the differentially pumped glow
discharge source chamber, or directly into the electron
impact source by means of a simple capillary restrictor.
Both inlet systems have been designed so that they are
directly compatible with the same sampling devices used
with the ion trap mass spectrometer. Thus, essentially
the same apparatus and experimental conditions are
used for direct purging of water and soil samples
regardless of the mass spectrometer used. The only
difference is the ability of the glow discharge ionizer to
sample air directly without the need for the air sampling
pump and open/split interface used with the ITMS.
Dynamic Sample Generator
A dynamic sample generation apparatus is used to
produce known concentrations of volatile organic
analytes in an air stream. This apparatus was used for
the determination of instrumental detection limits for
real-time air monitoring experiments. It basically
consists of a variable speed syringe pump and a dilution
air manifold. The syringe pump continuously meters
small amounts of organic compounds into a controlled
stream of air. Concentrations of the analytes can be
easily varied by adjusting the speed (metering rate) of
the syringe pump and/or by changing the flow rate of
dilution air through the manifold. Turbulent mixing of
the organic compounds and the dilution air occurs in the
manifold line which provides a homogeneous
concentration at the sampling ports.
Components of the dynamic sample generator
include a Razel Instruments model A-99 syringe pump
equipped with a 5 mL syringe, a 100 psi air supply line
equipped with an on/off toggle valve and a precision
metering valve, a 1.5 m x 6 mm Teflon line (dilution
manifold), and two 1/4 inch Swagelock sampling ports.
The apparatus produces continuous and stable
generation of organic concentrations in air and also
allows rapid changes in concentration without having to
wait excessively to reach a steady-state concentration.
Air containing the desired concentration of individual
organic compounds is typically generated by metering a
(1:1) water/methanol solution containing approximately 400
µg/mL of the organic compound into the dilution air stream
using the syringe pump. The flow rate of the syringe pump
can be continuously varied from 8.47 x 10⁻⁴ mL/min to
0.0503 mL/min. The dilution air flow is typically adjusted
for a rate of 25 L/min through the manifold. As this air
flows rapidly past the syringe pump needle, it quickly
vaporizes the volatile organics and the solvent. Liquid flow
from the syringe, however, must be maintained low enough
to prevent condensation in the system. By knowing the
concentration of the organic in the liquid solution, the flow
rate out of the syringe, and the flow rate of the dilution air,
the concentration of the organic compounds in the air can
be readily calculated. This assumes that there is minimal
adsorption of analytes on the walls of the manifold and
complete vaporization of the liquid into the dilution air.
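For illustration only, the conversion just described can be written out as a short Python calculation; the molar volume, molecular weight, and example settings below are assumed values rather than numbers taken from the authors' procedure.

    # Sketch of the concentration calculation described above, assuming complete
    # vaporization, negligible wall adsorption, and ideal-gas behavior at ~25 C.
    MOLAR_VOLUME_L = 24.45  # L/mol at ~25 C and 1 atm (assumed)

    def air_concentration_ppbv(liquid_conc_ug_per_mL, syringe_flow_mL_per_min,
                               dilution_air_L_per_min, molar_mass_g_per_mol):
        """Volume mixing ratio (ppbv) of one analyte in the dilution air stream."""
        mass_rate_g_per_min = liquid_conc_ug_per_mL * syringe_flow_mL_per_min * 1e-6
        vapor_L_per_min = (mass_rate_g_per_min / molar_mass_g_per_mol) * MOLAR_VOLUME_L
        return (vapor_L_per_min / dilution_air_L_per_min) * 1e9

    # Example: a 400 ug/mL benzene solution metered at the slowest syringe rate
    # (8.47e-4 mL/min) into 25 L/min of dilution air gives roughly 4 ppbv.
    print(air_concentration_ppbv(400.0, 8.47e-4, 25.0, 78.11))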
Operating Conditions
Ion Trap Mass Spectrometer
Most of the ion trap data presented in this paper was
generated using electron impact ionization conditions. Scan
functions for the acquisition of mass spectra were written
using the scan function editor program supplied with the
commercial software. Typically, for optimum sensitivity the
electron ionization time was 50 msec. Low mass cut-off was
60 amu, preventing the storage of ions due to water and air.
The mass scan range was approximately 50 to 200 amu
which enabled the detection of major ions for each of the
volatile organic compounds. In order to improve the signal-
to-noise ratio, 16-25 microscans were averaged per displayed
scan. Axial modulation was used for all experiments in
order to achieve optimum instrument performance. Helium
buffer gas was admitted into the system exclusively through
the sample transfer line.
Tandem Source Quadrupole Mass Spectrometer
The glow discharge ionization source is specifically
designed for high sensitivity direct air monitoring
applications. Air is admitted into the ionization region
through a metering valve at a flow rate of 0.5-1.0 standard
mL/min while a 160 L/min roughing pump maintains the
pressure in the ionizer at a constant 0.25 torr. Coaxial
ionization electrodes are used for the discharge and consist
of a 1 cm diameter x 2 cm long hollow cathode with a 20
gauge wire anode. A potential difference of approximately
600 volts is sufficient to strike and maintain a discharge in
the source. Ionization of organic compounds in this source
is the result of ion molecule reactions which produce proton
transfer and charge exchange reaction products. Conditions
within the glow discharge source can be adjusted to
optimize either proton transfer or charge exchange
reactions. The proton transfer reactions provide high
sensitivity for compounds which have proton affinities
greater than that of water (which is the primary proton
275
-------
transfer reagent). Charge exchange on the other hand,
is a much more universal ionization method and
produces fragmentation spectra which are similar to
electron impact ionization spectra. By operating the
glow discharge source at low pressures, the formation of
water cluster ions which often hamper API mass
spectrometers is nearly eliminated, improving sensitivity
and decreasing the complexity of the spectra.
Direct sampling using the electron impact ionization
source of the quadrupole mass spectrometer is
accomplished by means of a 1 meter length of 110
micron ID uncoated fused silica capillary tubing. A
simple on/off valve between the capillary and the source
allows the restrictor to be isolated when not in use. The
conditions in the ionizer include an electron current of
0.5 to 1.0 milliamps and an electron energy of 17 to 20
eV. The use of lower electron energies helps to
minimize fragmentation, thus concentrating ion current
in fewer ions.
Samples and Chemicals
Individual samples of 31 different volatile organic
compounds from the USEPA Target Compound List
were obtained from Ultra Scientific Company as
solutions of the neat compound dissolved in methanol at
a concentration of 10,000 ppm. Solutions for use in the
dynamic sample generation system were prepared from
the methanol stock solutions using ultra-pure water and
spectroscopic grade methanol. In order to verify the
proper calibration and performance of the dynamic
sample generation system, certified standards of volatile
organics in nitrogen were purchased from Scott
Specialty Gases.
Water samples were prepared using distilled water
containing 0.15 g/L of sodium chloride and 0.17 g/L of
sodium sulfate. A series of concentrations of individual
volatile organics from approximately 1 ppb to 200 ppb in
water was prepared by injecting a known concentration
of a methanol solution into water and then carefully
pipetting the water standard into a 40 mL pre-cleaned
VOA vial. The vials were capped with Teflon lined
septa until used. Most samples were prepared at
approximately pH 7; however, samples of benzene,
trichloroethylene, and tetrachloroethylene were also
prepared at pH 2 and pH 10.
A total of 5 different soil samples were examined as
part of this study including 2 soils provided by the U.S.
Army Toxic and Hazardous Materials Agency
(USATHAMA), 2 local soils, and a potting soil. These
represent a range of soil types including clay, sand, and
high humic content. The soil samples were prepared by
injecting a pre-weighed 5 gram sample of soil in a 40
mL VOA vial with a known quantity of the volatile
organic in methanol and allowing it to sit for a short
period of time. Slurries of the soil samples for direct purge
experiments were prepared by adding 25 mL of water to the
sample and allowing them to sit for at least 1 hour prior to
analysis.
RESULTS AND DISCUSSION
Volatile Organics in Air
The primary objective of the air monitoring study was to
optimize the experimental conditions and determine the
real-time detection limits for a representative sample of
volatile organic pollutants. This sensitivity assessment was
performed using standard electron impact ionization on
both the tandem source quadrupole mass spectrometer and
the ITMS. This enables comparison of our results with
other mass spectrometer systems which are commercially
available and use electron impact ionization. For all ITMS
experiments, the electron ionization time was 50 msec.
Mass scan ranges were selected as appropriate for each
compound although the lower mass cut-off was normally at
least 40 amu or higher. This prevented water, nitrogen, and
oxygen ions from being stored in the ion trap simultaneously
with the analyte ions, thus minimizing the effects of space
charge and unwanted ion-molecule reactions. Future studies
will involve a comparison of sensitivities for chemical
ionization and electron impact ionization.
Using the ITMS instrument, sensitivities for the 31
volatile organics were determined. However, pumping
problems with the tandem source quadrupole mass
spectrometer restricted experiments to the determination of
detection limits for only 3 compounds: benzene,
trichloroethylene, and tetrachloroethylene. For both
instruments, response curves (instrument response vs.
concentration in air) were prepared for each of the
compounds studied. The range of concentrations examined
was generally between 4 and 200 ppb. A typical experiment
involved the acquisition of a background level signal,
followed by the acquisition of spectra for a series of
decreasing concentrations in air generated with the dynamic
sample generator. Instrument response vs. time produced a
"stair-step" curve as the concentration of organic was
reduced to successively lower levels. Each concentration
level was maintained for several minutes to ensure that a
steady state concentration was reached before further
reducing the level.
Ion Trap Mass Spectrometer
An electron impact mass spectrum of a mixture of
volatile organics in air is shown in Figure 4. This mixture
contained carbon disulfide, benzene, chloroform, toluene,
and ethyl benzene at concentrations of approximately 1 to
10 ppm. As shown in this figure, space-charge-induced
peak broadening and mass shifting are not significant.
A typical "stair-step" air monitoring response curve
acquired with the ITMS is shown in Figure 5. This is a
276
-------
reconstructed plot of the ion current for m/z 83 as "seen"
by the ITMS instrument vs. time for a sample of
chloroform in air. As the concentration of the
chloroform was decreased to lower values over a period
of time, the response of the ITMS decreased
proportionally. This same type of plot can be generated
in real-time continuous monitoring applications, allowing
changes in the concentration to be readily visualized.
As shown in Figure 5, the response time of the ITMS to
changes in concentration was very fast (less than 15
seconds) and the time required for the sample generator
to reach steady state at a new concentration was
typically less than 3 minutes.
In addition to the continuous plotting of the ITMS
total ion response, it is also possible to monitor the
actual mass spectrum in real time in order to detect
changes in specific ion intensities. This is especially
useful whenever multiple components are present in a
sample. All of the information which is generated in
real-time may be stored on a hard disk as a temporal
series of mass spectra, allowing response curves for any
ion in the mass range to be reconstructed, plotted, and
integrated. An example of a post-processed mass
spectrum of chloroform in air is shown in Figure 6.
An important feature of the response curves
generated with the ITMS is the pseudo-sinusoidal
waveform superimposed on the curve. This is not noise,
but rather an effect of the pulsed-valve addition of helium
into the air stream. Maxima
correspond to the optimum helium/air ratio and minima
correspond to the least effective helium/air ratio. By
synchronizing the pulsing of the helium valve with the
acquisition of the spectral scans, this effect should be
nearly eliminated.
The experimentally determined detection limits for
the 31 volatile organic compounds in air are presented
in Table 1. As shown in this table, the detection limits
are generally in the low ppb range which is comparable
to the sensitivity of some commercially available API
mass spectrometers. Exceptions to this include
bromoform, chloroethane, and chloromethane.
However, because chloromethane and chloroethane are
extremely volatile (boiling points of -24°C and +12.3°C,
respectively), it is likely that these compounds were lost
during preparation of the standard. Bromoform, on the
other hand, is less volatile than most of the compounds
examined, with a boiling point of +150.5°C. Bromoform
probably condenses on the walls of the vapor generating
system at room temperature and never reaches the
ITMS inlet. With proper sample preparation techniques
and a shorter, heated sampling line, detection limits for
chloromethane, chloroethane, and bromoform would
probably be more comparable to the other compounds
studied. This is a reasonable assumption since these
compounds are chemically very similar to other
halogenated hydrocarbons that have been successfully
measured and would be expected to have similar ionization
efficiencies under electron impact ionization conditions.
The detection limits which are reported for volatile
organics in air, were calculated using the RMS (root mean
square) variation in the signal measured with no sample
present (a blank). This is an accurate determination of the
analytical detection limit and represents the lowest
concentration of a compound in air that can reliably be
observed with the current sampling interface and ITMS
operating parameters. For these calculations, the lowest
reliably measured signal is defined as the average of the
blank signal plus three times the RMS variation in this
signal. From the lowest reliably measured signal, the
detection limit can be calculated from a calibration curve
relating signal to concentration. Linear least squares
calibration curves were constructed for the 31 volatile
organics studied. For a few compounds, space charging
effects made a quadratic model necessary to obtain a
better fit to the data.
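The blank-based calculation described above can be illustrated with the short Python sketch below; the blank readings and calibration points are hypothetical, and the quadratic option simply mirrors the case where space charging makes a linear model inadequate.

    # Illustrative sketch (hypothetical data) of the detection limit calculation:
    # threshold = mean blank signal + 3 x RMS variation of the blank, inverted
    # through a least-squares calibration curve (linear, or quadratic if needed).
    import numpy as np

    def detection_limit(blank_signals, cal_conc_ppb, cal_signals, degree=1):
        blank = np.asarray(blank_signals, dtype=float)
        threshold = blank.mean() + 3.0 * blank.std()
        coeffs = np.polyfit(cal_conc_ppb, cal_signals, degree)
        poly = np.poly1d(coeffs) - threshold
        roots = [r.real for r in poly.roots if abs(r.imag) < 1e-9 and r.real > 0]
        return min(roots) if roots else float("nan")

    # Hypothetical blank readings and a roughly linear calibration from 4-200 ppb.
    blank = [12, 15, 11, 14, 13, 12, 16, 13]
    conc = [4, 10, 25, 50, 100, 200]
    signal = [20, 33, 65, 118, 230, 450]
    print(f"estimated detection limit ~ {detection_limit(blank, conc, signal):.1f} ppb")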
Tandem Source Quadrupole Mass Spectrometer
Detection limits for benzene, trichloroethylene, and
tetrachloroethylene in air were also determined using the
tandem source quadrupole mass spectrometer. Various
concentrations of the individual compounds were generated
using the dynamic sample generator as previously described.
One signal averaged mass spectrum (n=36) was acquired
and stored for each concentration. Signal averaged
background samples were also acquired and subtracted from
the mass spectra of the actual samples. Experimental
difficulties arising from a high hydrocarbon background in
the instrument complicated these low-level analyses. The
background problem was due to backstreaming of diffusion
pump oil and condensation on the ionization source.
Linear regressions of the data were calculated and both
data and regression were plotted for each compound. Due
to the nature of the signal averaging experiments, an
accurate detection limit could not be determined for the
three compounds using the same RMS noise calculation
method as the ITMS. Rather, the detection limit was
determined by calculating the standard deviation of the
linear regression plot and then determining the
concentration at which the signal is equal to the standard
deviation (4), as shown in Figure 7. The regression curve for
benzene in air is shown in Figure 8 and the calculated
detection limit was determined to be approximately 11 ppb.
Based on the linear regression curves for trichloroethylene
and tetrachloroethylene, detection limits for these
compounds were determined to be approximately 42 and 29
ppb respectively.
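The regression-based estimate can be sketched as follows; the data points are invented for illustration, and the calculation is a simplified reading of the Hubaux and Vos treatment cited as reference 4 (concentration at which the predicted signal equals the standard deviation of the fitted line).

    # Sketch (hypothetical data) of the regression-based detection limit: fit a
    # straight line to signal vs. concentration, take the standard deviation of
    # the residuals, and find the concentration whose predicted signal equals it.
    import numpy as np

    def regression_detection_limit(conc_ppb, signals):
        conc = np.asarray(conc_ppb, dtype=float)
        sig = np.asarray(signals, dtype=float)
        slope, intercept = np.polyfit(conc, sig, 1)
        s_resid = (sig - (slope * conc + intercept)).std(ddof=2)  # 2 fitted parameters
        return (s_resid - intercept) / slope

    # Invented background-subtracted, signal-averaged responses (n=36 scans each).
    conc = [10, 25, 50, 75, 100, 150]
    signal = [30, 190, 230, 420, 620, 790]
    print(f"detection limit ~ {regression_detection_limit(conc, signal):.0f} ppb")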
Although the electron impact ionization was used
predominantly for this study, earlier experiments with the
glow discharge ionization source indicate that the detection
limits are very similar to or slightly better than those
277
-------
achievable with the electron impact ionization source.
In fact, the tandem source configuration of the
quadrupole mass spectrometer is unique and provides
extra versatility in terms of sample introduction and
ionization options relative to a conventional electron
impact ionization quadrupole. For example, air may be
sampled and ionized directly with the glow discharge
source or it may be sampled through a capillary
restrictor and ionized with the axial electron impact
ionization source. Since both ionization sources are
simultaneously installed on the spectrometer, switching
between ionization modes or sample inlet systems is a
simple matter of opening the appropriate valve and
turning on the electronics for the selected source.
The advantages of the glow discharge source
relative to the electron impact ionization source are that
it is more rugged for long term operation, the response
time is virtually instantaneous, and the source is very
tolerant of high oxygen and water saturated
atmospheres. Primary advantages of the axial electron
impact ionization source are ease of operation and the
ability to produce library searchable mass spectra. A
major problem with the electron impact source is that
the filament assembly is very susceptible to oxidation
and burn-out if exposed to large amounts of oxygen or
water. For example, when performing direct air
monitoring experiments with the electron impact source,
the filament must be replaced every 3 to 4 weeks.
Volatile Organics in Water and Soil
The sample handling apparatus and methods for the
determination of volatile organics in water and soil
slurries are identical for both the ITMS and the TSMS
experiments. Volatile organics are purged from a water
or soil slurry directly into the mass spectrometer without
any preconcentration such as trapping on a resin
cartridge. In the simplest case, conventional electron
impact ionization spectra are continuously acquired over
a mass range of approximately 40-200 amu in order to
observe the response for ions corresponding to the
purged volatile organics. As shown in Figure 9, the
purge profiles for a particular ion can be reconstructed
as a plot of response versus purge time. At a helium
purge flow of 200 mL/min, purging is normally 90% or
more complete after 3 minutes. The area beneath a
purge profile correlates well with the concentration of
the analytes in the sample as shown in Figure 10.
Quantification is accomplished simply by integrating the
area of a reconstructed purge profile for the ions
corresponding to the target analytes. A typical
calibration curve for benzene in water from 1 to 100
ppb is shown in Figure 11. Using carefully prepared
standards, correlation coefficients of better than 0.998
are possible. Quantitative reproducibility of less than
10% at the 95% confidence level can also be achieved
for water samples without the use of internal standards.
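As an illustration of the quantification scheme just described (with invented numbers throughout), the purge profile for a target ion can be integrated and compared against a calibration built from the standards:

    # Sketch: integrate a reconstructed purge profile for a target ion, calibrate
    # peak area against water standards, and invert for an unknown sample.
    import numpy as np

    def purge_area(time_s, ion_current):
        """Trapezoidal area under a reconstructed purge profile."""
        t = np.asarray(time_s, dtype=float)
        y = np.asarray(ion_current, dtype=float)
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

    # Hypothetical calibration: m/z 78 areas for 1-100 ppb benzene standards.
    std_conc_ppb = np.array([1, 5, 10, 25, 50, 100])
    std_area = np.array([210, 1050, 2150, 5300, 10400, 21100])
    slope, intercept = np.polyfit(std_conc_ppb, std_area, 1)
    r = np.corrcoef(std_conc_ppb, std_area)[0, 1]
    print(f"correlation coefficient r = {r:.4f}")

    # Invented m/z 78 profile for an unknown, sampled every 5 s over a 3 min purge.
    t = np.arange(0, 185, 5)
    profile = 60 * np.exp(-t / 60.0) + 2
    print(f"estimated benzene conc ~ {(purge_area(t, profile) - intercept) / slope:.1f} ppb")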
A series of experiments were conducted in which the
detection limits, relative response factors, and standard
spectra were generated for a series of volatile organics in
water. In addition, studies with benzene, trichloroethylene,
and tetrachloroethylene were also conducted in order to
examine the effects of pH and soil type on the purge
efficiency of water samples and soil slurries relative to
solutions of volatile organics in pH-7 water. Data for these
samples were acquired simultaneously using both the ITMS
and the TSMS instruments in order to compare detection
limits and quantification accuracy.
The detection limits for 21 different volatile organics in
pH-7 water using the ITMS and electron impact ionization
are shown in Table 2. These range from approximately 3
ppb for benzene to approximately 60 ppb for dichloro-
ethane and appear to be routinely achievable using the
direct purge method. For comparison, the detection limits
for compounds purged into the TSMS are also typically less
than 200 ppb, although they are generally not quite as good
as can be achieved with the ITMS. Accurate detection
limits for acetone, 2-butanone, and 4-methyl-2-pentanone
have not yet been established due to much lower purge
efficiencies.
The matrix effect experiments which were conducted for
benzene, trichloroethylene, and tetrachloroethylene
appeared to show essentially the same purge efficiency at
pH-2, pH-7, and pH-10. Similar results for these
compounds were also obtained for a potting soil leachate
with a high humic content. These results suggest that
accurate quantification may be achieved without the need
for extensive sample preparation or the use of internal
standards for many water samples. An exception to this
may be water samples which contain a high surfactant
concentration, although comparative data have not yet been
generated.
As opposed to the water samples, differences in the
purge efficiencies for volatile organics in soil slurries are
more pronounced. As shown in Table 3, the relative purge
efficiency for benzene, trichloroethylene, and
tetrachloroethylene ranges from approximately 25% to 90%
relative to pH-7 water. The least efficient purging was from
the soils which had a high clay content and the most
efficient purging was from soils having the highest sand
content. Although the general trend exhibited by these
results is probably reasonable, the actual purge efficiencies
are probably better than the data indicate. For example,
comparative purge profiles for benzene, trichloroethylene,
and tetrachloroethylene in pH-7 water and a potting soil
slurry are very similar as shown in Figure 12.
Apparent differences in purge efficiency most likely
reflect inefficient stirring and sample purging using a single
needle sparger. Further studies have also shown that there
was probably significant loss of volatiles from the soil
278
-------
samples during the preparation step using our soil
spiking procedure. Improvements in the purging of soil
samples could probably be achieved by simultaneously
stirring samples to ensure more homogeneous sparging.
Further, the use of an internal standard would be useful
to help minimize quantitative errors due to differences
in purge efficiency.
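A minimal sketch of the two corrections discussed in this section is given below; the areas, response factor, and spike level are invented, and the internal standard calculation is the generic area-ratio method rather than a procedure taken from this work.

    # Sketch (invented numbers): relative purge efficiency of a soil slurry versus
    # pH-7 water, and internal-standard normalization to compensate for the loss.
    def relative_purge_efficiency(area_matrix, area_ph7_water):
        """Purge efficiency of a matrix relative to pH-7 water, in percent."""
        return 100.0 * area_matrix / area_ph7_water

    def internal_std_corrected_conc(area_analyte, area_istd,
                                    response_factor, istd_conc_ppb):
        """Analyte concentration from the analyte/internal-standard area ratio."""
        return (area_analyte / area_istd) * istd_conc_ppb / response_factor

    # Example: trichloroethylene purged from a clay slurry vs. pH-7 water (~20%,
    # cf. Table 3), then quantified against a hypothetical 50 ppb internal
    # standard spike with an assumed response factor of 1.1.
    print(relative_purge_efficiency(820, 4100))
    print(internal_std_corrected_conc(820, 900, 1.1, 50.0))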
CONCLUSIONS
The results of these studies have demonstrated the
feasibility of using direct sampling mass spectrometry for
the real-time detection of trace organic compounds in
air, water, and soils. Detection limits for both the
tandem source quadrupole mass spectrometer and the
ion trap mass spectrometer are generally in the range of
5 to 200 ppb for water and soil samples without any
sample preparation or preconcentration. The detection
limits for volatile organics in air using the ITMS range
from approximately 1 to 45 ppb for the 31 volatiles
studied which is approximately 1,000 times lower than
the threshold limit values (TLV's) for these compounds.
These detection limits are comparable to those that can
be achieved with API mass spectrometers. Detection
limits for the compounds studied using the TSMS are
slightly worse than those obtained with the ITMS;
however, they also are well below the published TLV's.
This suggests that the ITMS or TSMS could indeed be
useful for field monitoring of stack emissions and soil
gas emissions at hazardous waste sites.
Although it is not likely that significant
improvements can be made in the detection limits
achieved with the TSMS, modification and optimization
of the sampling interface for the ITMS will probably
result in even better detection limits than reported in
this document. In addition, the ITMS instrument also
has the capability of chemical ionization which can be
used to selectively enhance certain target analytes
relative to other compounds in a sample stream.
Both the TSMS and ITMS have excellent detection
limits for volatile organic compounds in air, water, and
soil; however, experience with the two different mass
spectrometer systems suggests that the ion trap mass
spectrometer overall is a more useful instrument for
continuous air monitoring. Specifically, the ITMS is
highly reliable, easier to operate, and more stable than
the tandem source quadrupole mass spectrometer.
Further, the ion trap mass spectrometer has the
capabilities of controlled chemical ionization, selective
ion storage, and collision induced dissociation (CID)
tandem mass spectrometry (MS/MS). These features
are especially important in helping to identify individual
components in a complex sample, especially since no
chromatographic separations are performed on the
sample prior to entering the mass spectrometer.
Without these features, the TSMS is restricted to
monitoring samples that typically have fewer than 10-15
components. Finally, due to the simplicity of the ion trap
analyzer assembly, this type of instrumentation lends itself
to downsizing, portability, and remote operation better than
the TSMS.
While the results of this study have been quite
successful and demonstrate the potential of the
instrumentation for screening of environmental samples,
much work remains. Especially important is the
development of methods for the identification and
quantification of compounds in complex mixtures. This
work will involve a thorough examination of chemical
ionization reactions, the generation of MS/MS spectra of
commonly encountered organic pollutants and potential
interferences, and the development of computer programs
to process this information in real time.
ACKNOWLEDGEMENT
Research sponsored by the U.S. Army Toxic and
Hazardous Materials Agency under Interagency Agreement
1769-A073-A1 under U.S. Department of Energy Contract
DE-AC05-84OR21400 with Martin Marietta Energy
Systems, Inc.
REFERENCES
1. McClennen, W.H., Arnold, N.S., Sheya, S.A.,
Lighty, J.S., Meuzelaar, H.L.C., "Direct Transfer
Line GC/MS Analyses of Incomplete
Combustion Products from the Incineration of
Medical Wastes and the Thermal Treatment of
Contaminated Soils", Proc. 38th ASMS Conf. on
Mass Spec. All. Topics, Tucson, AZ, 1990, 611-
612.
2. Hemberger, P.H., Alarid, I.E., Cameron, D.,
Leibman, C.P., Cannon, T.M., Wolf, M.A.,
Kaiser, R.E. "A Transportable Gas
Chromatograph/Ion Trap Detector for Field
Analysis of Environmental Samples", Int. J. Mass
Spectrom. Ion Proc., in press.
3. Wise, M.B., Buchanan, M.V., Guerin, M.R.,
"Rapid Environmental Organic Analysis by
Direct Sampling Glow Discharge Mass
Spectrometry and Ion Trap Mass Spectrometry",
Oak Ridge National Laboratory TM-11538, Oak
Ridge Tennessee, 1990.
4. Hubaux, A., Vos, G., Anal. Chem., 235, 1967, 849-855.
279
-------
Table 1
Detection Limits for Volatile Organics in Air using Direct Sampling ITMS
Compound Detection Limit (ppb)
1,1,1-Trichloroethane 2
1,1,2,2-Tetrachloroethane 3
1,1,2-Trichloroethane 20
1,1-Dichloroethane 16
1,1-Dichloroethene 6
1,2-Dichloroethene 3
1,2-Dichloropropane 45
2-Butanone 48
4-Methyl-2-Pentanone 17
Acetone 22
Benzene 5
Bromodichloromethane 4
Bromoform > 80
Bromomethane >280
Carbon Disulfide 25
Carbon Tetrachloride 16
Chlorobenzene 2
Chloroethane >209
Chloroform 3
Chloromethane >268
Cis-1,3-Dichloropropene 6
Dibromochloromethane 12
Ethylbenzene 2
Methylene Chloride 12
Tetrachloroethylene 8
Toluene 3
Trans-1,3-Dichloropropene 7
Vinyl Acetate 44
Vinyl Chloride 5
O-Xylene 4
280
-------
Table 2
Detection Limits for Volatile Organics in pH-7 Water using Direct Purge ITMS
Compound Detection Limit (ppb)
1,1,1-Trichloroethane 12
1,1,2,2-Tetrachloroethane 28
1,1,2-Trichloroethane 18
1,1-Dichloroethene 33
1,2-Dichloroethane 27
1,2-Dichloroethene 21
Benzene 3
Bromoform 15
Carbon Disulfide 18
Carbon Tetrachloride 16
Chlorobenzene 5
Chloroform 20
Cis-1,3-Dichloropropene 6
Ethylbenzene 4
Methylene Chloride 60
Styrene 5
Tetrachloroethylene 5
Toluene 4
Trans-1,3-Dichloropropene 15
Vinyl Chloride 5
Xylenes (total) 4
Table 3
Purge Efficiency of Volatile Organics in Soil Slurries Relative to pH-7 Water
Soil Sample   Soil Type         Relative Purge Efficiency (%)
                                Benzene   Trichloroethylene   Tetrachloroethylene
THAMA 1       Clay              29        20                  19
THAMA 2       Sand/Clay         51        48                  46
Local 1       Sand/Clay         61        45                  61
Local 2       Sand/Clay/Humic   46        42                  42
Potting       Sand/Humic        91        77                  53
281
-------
[Figure 1 diagram labels: ITMS direct air inlet; air inlet, pulsed solenoid valve, helium inlet with metering valve, megabore and microbore capillary restrictor, vent/auxiliary sampling pump, connection to ion trap cell.]
Figure 1 Air sampling interface for ITMS.
Figure 2 Device used for direct purge of volatiles from water and soil samples.
282
-------
[Figure 3 diagram labels: 1.5 mm vacuum conductance limit, analyzer vacuum chamber, glow discharge source vacuum chamber, connection to roughing pump.]
Figure 3 Diagram of the tandem source quadrupole mass spectrometer.
Figure 4 ITMS electron impact mass spectrum of ppm levels of VOCs in air.
283
-------
[Figure 5 plot: ITMS response to chloroform in air; signal vs. elapsed time (seconds) with concentration steps at 170, 56, 28, and 14 ppb and a blank.]
Figure 5 ITMS response for m/z 83 at various concentrations of chloroform in air.
[Figure 6 spectrum: chloroform in air; intensity vs. mass (amu), principal peak at m/z 83.]
Figure 6 ITMS post processed mass spectrum of chloroform in air.
284
-------
[Figure 7 plot: estimate of tetrachloroethylene detection limit, tandem-source quadrupole; signal vs. concentration (ppb) with calibration curve and 95% confidence limits.]
Figure 7 Graphical determination of detection limits for the TSMS instrument.
[Figure 8 plot: benzene, tandem source quadrupole air monitor; intensity vs. concentration in air (ppb).]
Figure 8 Linear regression curve for benzene in air using the TSMS instrument.
285
-------
[Figure 9 plot: response for m/z 78 vs. time (min).]
Figure 9 Reconstructed purge profile for 100 ppb of benzene in water.
[Figure 10 plot: solution purge profiles of aqueous vinyl chloride standards (ppb = ng/mL); relative intensity vs. time (min) for 40, 20, 10, and 2 ppb.]
Figure 10 Direct purge profiles for 4 different concentrations of vinyl chloride in water.
286
-------
[Figure 11 plot: response for m/z 78 vs. concentration (ppb).]
Figure 11 Response curve from 1 to 100 ppb for direct purge of benzene from water.
287
-------
[Figure 12 panels: volatile organics purged from water (labeled pH-2 in the figure) and from a potting soil slurry, 50 ppb of each compound; traces for benzene (m/z 78), trichloroethylene (m/z 130), and tetrachloroethylene (m/z 166) vs. time (min).]
Figure 12 Comparison of VOC purge profiles for pH-7 water and potting soil.
288
-------
DEVELOPMENT AND TESTING OF A MAN-PORTABLE
GAS CHROMATOGRAPHY/MASS SPECTROMETRY SYSTEM
FOR AIR MONITORING
Henk L.C. Meuzelaar, Dale T. Urban and Neil S. Arnold
Center for Micro Analysis & Reaction Chemistry, University of Utah
214 EMRL, Salt Lake City, UT 84112
ABSTRACT
A fully man-portable, GC/MS system based on the
combination of an automated vapor sample inlet, a
"transfer-line" gas chromatography module and a
modified Hewlett Packard model 5971A quadrupole MS
system is described. The current prototype weighs
approx. 70-75 lbs and uses 150-200 W of battery power.
The mass spectrometer and computer are carried in front
of the operator by means of a shoulder harness whereas
battery pack, carrier gas supply and roughing vacuum
system are carried as a backpack. Air samples can be
analyzed using a special automated air sampling inlet.
The man-portable GC/MS system is designed to be
supported by a vehicle transportable "docking station".
BACKGROUND
In situations involving severely contaminated hazardous
waste sites, industrial accidents or natural disasters, as
well as special military or law enforcement operations,
mobile laboratories may be of little use because of
limited site access, restrictions due to contamination or
terrain constraints. Under such conditions, man-portable
analytical instruments may offer the only acceptable
means of carrying out on-site analyses.
Obviously, man-portability puts severe constraints on
weight, size and power requirements as well as on
ruggedness and user-friendliness. Consequently, the man-
portability requirement may also function as a convenient
benchmark for the development of analytical equipment
for a variety of special operational environments ranging
from remotely operated devices (e.g., robotic vehicles,
drones or probes) to space stations and operating rooms.
All of the above environments require a high degree of
miniaturization, reliability and ease of operation.
The past decade has witnessed impressive progress in
miniaturization of mass spectrometric systems. Besides a
broad range of commercially available benchtop
instruments, including the Hewlett Packard MSD (Mass
Selective Detector) and Finnigan MAT ITD (Ion Trap
Detector), several specialized MS instruments have been
developed for applications where transportability is a
prime requirement. Well known examples include the
Bruker Franzen MM1 system, originally developed for
military applications involving chemical agent detection,
and the Viking Spectratrak system primarily designed for
environmental applications.
As shown in Figure 1 most commercially available
miniaturized systems are characterized by a combination
of relatively low weight (typically 100-300 lbs, excluding
power source) and modest power requirements (600-1800
W range). In spite of these marked advances in system
miniaturization, however, man-portability and some of the
other abovedescribed applications require even more
stringent size, weight and power limitations.
This prompted us to undertake a study aimed at obtaining
maximum power and weight reduction using the Hewlett
Packard MSD as a starting point. Although the project is
still under continuing development, some preliminary
results and conclusions are starting to take shape, as will
be discussed in the following paragraphs.
SYSTEM DESIGN CONCEPTS
An overview of the selection criteria for the main
system modules and components is given in Table I.
289
-------
Automated Vapor Sampling Inlet Module
Transfer Line Gas Chromatography Module
Transfer line gas chromatography (TLGC) is defined here
as a form of GC in which the column connects two
environments, viz. an atmospheric environment at
ambient pressure and the vacuum environment of the MS
ion source region. In other words, column inlet and
outlet pressures are more or less fixed and, consequently,
optimization of column flow requires suitable adaptation
of column length and/or diameter. This sets TLGC apart
from the more widely used short column gas chromato-
graphy (SCGC) technique in which column inlet
pressures can usually be adjusted while column length is
kept below 5 meters or so.
Although most TLGC applications reported thus far do
use short to very short column lengths, optimum GC
conditions for a 500 µm i.d., ambient inlet transfer line
column connected to a vacuum detector (e.g., MS) may
dictate column lengths in the 50-100 m range (see Figure
2). In view of the abovedescribed distinctive differences
between TLGC and SCGC we feel justified in adding yet
another term to the already baffling jargon of the
chromatographer.
When sampling condensable and potentially labile vapors
from air, the main challenge is to avoid compound losses
through irreversible adsorption and/or decomposition in
the transfer line section. To this end, a novel, automated
vapor sampling method was recently designed at the
University of Utah Center for Micro Analysis & Reaction
Chemistry (1,2). The most characteristic property of this
sampling method, illustrated in Figure 3, is the absence
of any valves or other mechanical obstructions in the path
of the molecules between the ambient environment and
the ion source. Only quartz walls and/or surfaces coated
with inert stationary phases (e.g., poly-dimethylsilicones)
are seen by sample molecules on their way to the ion
source.
A second advantage of the new sampling technique is the
potentially very short switching time. Sampling times as
short as 60 msec have been used already (2) and 20 msec
or less may be achievable in the near future. This
enables "injection" of a narrow sample plug into the
TLGC column, thereby minimizing peak broadening due
to sample injection and allowing repeat GC analyses at 6-
60 sec intervals (3). All air flows in the inlet are
sustained by means of a Graseby Ionics miniaturized dual
air pump (max. capacity 2 x 500 ml/min, max power
consumption 1 W) whereas rapid switching of air flows is
performed with a Skinner micro valve (5 msec response
time).
The GC oven module consists of a simple heated
aluminum cylinder which houses the capillary GC
column, e.g., a 29 cm long, 50 µm i.d. fused silica
capillary coated with a 0.2 µm thick layer of poly-
dimethylsilicone (DB5) and providing a continuous He
flow of approx. 0.02 mL/min.
At present the oven is used in isothermal mode only. A
temperature programming option as described by Arnold
et al. (4), which would allow a broader range of
compounds to be analyzed in a single GC run and also
help protect the column from oxidative degradation, has
not yet been implemented in the present prototype. A
direct consequence of the rapid GC run time is the need
for very high temperature programming rates, e.g., 10-20
C/sec. This requires significantly larger power supplies
than necessary for isothermal operation.
A small (2 ft3) compressed gas cylinder with flow
controller provides more than 36 hours of He or N2
carrier gas flow. The theoretical relationship between
inner column diameter, max. resolving power, column
length and retention time is depicted in Figure 2.
Obviously, the use of a 50 µm i.d. column (primarily
selected to keep gas flows as low as possible) has the
advantage of allowing very rapid separations, although
limiting maximum achievable resolving power.
Quadrupole MS Module
A Hewlett Packard Model 5971 MSD (Mass Selective
Detector) was modified extensively in order to reduce
system weight and power requirements and increase
overall maneuverability. The original housing was
completely discarded and the relative positions of the
electronic boards were changed to enable convenient
operation of the air sampling inlet. The new
configuration is shown in Figures 4 and 5. Most
importantly, the original AC and DC power supplies were
removed and replaced by a battery powered 12 V DC
supply with DC/DC converters for the various DC
voltages required for mass spectrometer, computer and
sampling inlet operation. Total power consumption of
the modified MS system was determined to be 43 W (see
Table I).
Vacuum System
The vacuum system of the HP model 5971 MSD was
completely reconfigured to provide operating pressures in
the 10⁻⁴-10⁻⁵ torr range while minimizing roughing
vacuum requirements. The original 60 L/sec diffusion
290
-------
pump was exchanged for an Alcatel Model 5010 MDP
(Molecular Drag Pump) with a max. pumping speed of 8
L/sec for N2 and a roughing vacuum requirement of < 30
millibar. This enabled us to replace the original rotary
pump (power requirement approx. 160 watts; weight 14
lbs) with a simple vacuum buffer capable of maintaining
a roughing vacuum of better than 10 millibar for up to 12
hours at the specified GC column flows. The vacuum
assembly configuration can be seen also in Figs. 4 and 5.
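A back-of-the-envelope estimate (not given in the paper) suggests why a passive buffer can suffice: assuming the roughly 0.02 std mL/min column flow quoted for the GC module is the only gas load exhausted by the drag pump, a reservoir of only a liter or two keeps the backing pressure below 10 millibar for 12 hours.

    # Rough sizing estimate (assumption, not from the paper) for the vacuum buffer
    # that backs the molecular drag pump.
    STD_PRESSURE_MBAR = 1013.25

    def min_reservoir_volume_L(column_flow_std_mL_min, hours, max_pressure_mbar):
        """Smallest buffer volume (L) that absorbs the gas load within the limit."""
        gas_load_mbar_L = (column_flow_std_mL_min / 1000.0) * 60.0 * hours * STD_PRESSURE_MBAR
        return gas_load_mbar_L / max_pressure_mbar

    # ~0.02 std mL/min column flow for 12 hours, buffer limit 10 mbar -> ~1.5 L.
    print(f"~{min_reservoir_volume_L(0.02, 12, 10.0):.1f} L buffer required")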
Micro Computer Module
A Toshiba model 5200, 20 MHz, 80386 laptop is used to
control all GC/MS functions by means of a standard PC
interface and software available from Hewlett Packard.
In addition the PC system controls the operation of the
air sampling inlet. The only modification of the Toshiba
5200 consisted of removing the built-in, relatively heavy
DC and AC power supplies and connecting the unit
directly to the specially constructed DC power supply
shown in Figures 4 and 5.
SYSTEM INTEGRATION
Mechanically, the various components described thus far
were integrated by means of a specially designed
shoulder harness and backpack frame, as shown in Figure
5. The aluminum backpack frame carries the two
batteries as well as the vacuum reservoir whereas the
entire mass spectrometry assembly with MDP and PC is
suspended from the shoulder straps and stabilized by two
hip straps. Due to the difficulty of typing in detailed
computer commands during field use, especially when
wearing gloves, a beach ball type mouse was installed to
enable direct communication with a single (gloved) hand.
Alternatively, one could envisage the use of a built-in PC
computer card (without display screen or keyboard)
remotely controlled by a second, more completely
outfitted PC using standard PC software such as Carbon
Copy® or PC Anywhere®.
The simplest remote control option would be to use
an umbilical cord carrying a twisted pair cable in addition
to AC power. The latter option would eliminate the
heavy (28 lb) battery pack, thus resulting in greatly
reduced overall size and weight. Finally, as also shown
in Figure 4, a special transportable "docking station" (still
under construction) enables vacuum system regeneration,
battery recharging and carrier gas refills at 6-10 hour
intervals.
PRELIMINARY TEST DATA
TLGC/MS curves generated with a 100 cm long, 100 µm
i.d. capillary column, coated with 0.25 µm
polydimethylsilicone (DB5, Supelco) while sampling a
mixture of 10 ppm vapor components in air for 1 sec at
30 sec intervals are shown in Figure 6. Obviously, a
highly useful level of chromatographic separation is
achieved with the very short transfer line. Also the
narrow peak shapes (half height width < 1 sec) illustrate
the efficiency of the rapid sampling air inlet. Overall
peak height reproducibility (approx. ±10%) is influenced
by the limited resolution of the sampling time due to
manual operation.
From the selected ion profile (tropylium fragment ion at
m/z 91) in Figure 6 the minimum detectable
concentration in direct air sampling mode appears to be
approx. 1 ppm. Although this is 1-2 orders of magnitude higher than the
minimum concentrations detected by means of ion trap
type MS systems when using the automated vapor
sampling inlet (2), the MSD system has not yet been
fully optimized for operation under the present vacuum
and flow conditions. However, since it may be
anticipated that some of the most promising applications
will require detection limits in the lower ppb range, a
suitable adsorption/desorption module is currently under
development in our laboratory.
Figure 7 illustrates the performance of the automated air
sampling TLGC/MSD system with polar compounds
under similar experimental conditions as in Figure 6.
Note the rapid separation of a mixture of ketones into its
components and the relatively minor degree of peak
tailing due to the heated, all quartz vapor sampling inlet.
Finally, Figure 8 shows selected ion chromatograms for
several chemical agent simulants, demonstrating the fast,
repetitive (17 sec interval) analysis capability of the short
(29 cm) narrow bore (50 µm i.d.) capillary column used
while maintaining adequate chromatographic resolution.
Although it is tempting to envisage the use of man-
portable GC/MS instruments for military reconnaissance
purposes, e.g., when venturing into contaminated regions
with high levels of background interferents, it should be
pointed out here that the current sensitivity of the MSD
based TLGC/MS system is insufficient for such appli-
cations. Partially, this is due to the relatively low sample
mass flow through the narrow bore capillary columns
used. In principle, this could be corrected by closing up
the MSD ion source thereby increasing the residence time
291
-------
of the vapor molecules in the source which would result
in increased ionization efficiencies.
Additionally, the use of rapid absorption/desorption
methods for sample preconcentration should be
considered. Assuming a 10 second absorption interval at
10 times normal flow, followed by a 1 second desorption
interval at normal flow, it should be possible to obtain a
100 times enrichment factor without sacrificing analysis
speed. Basically, the 10-15 seconds necessary for
chromatographic separation is then being used to collect
and preconcentrate the next sample.
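The enrichment arithmetic in the preceding paragraph can be checked with a short sketch; the flow multiples and times are simply those quoted above.

    # Enrichment factor for a rapid adsorption/desorption preconcentrator: mass
    # collected during adsorption divided by the mass delivered during desorption
    # at the stated relative flows.
    def enrichment_factor(adsorb_time_s, adsorb_flow_multiple,
                          desorb_time_s, desorb_flow_multiple=1.0):
        return (adsorb_time_s * adsorb_flow_multiple) / (desorb_time_s * desorb_flow_multiple)

    # 10 s adsorption at 10x normal flow, 1 s desorption at normal flow -> 100x.
    print(enrichment_factor(10, 10, 1))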
Finally, we are investigating the use of rapid (10-20
C/sec) temperature programmed heating in order to
broaden the range of compounds that can be analyzed in
a single chromatographic run. The feasibility of this
approach has been demonstrated by Arnold et al. (4). A
second, important advantage of rapid temperature
programming is that the initial "air peak" passes through
the column at low temperature, thereby considerably
reducing the likelihood of oxidative degradation of the
column. This then allows programmed heating of the
column to high temperatures (e.g., 300 C) thus enabling
separation and detection of large polar molecules such as
underivatized trichothecenes, as demonstrated by
McClennen et al. (5). Many commercially available, air
sampling mass spectrometry and ion mobility
spectrometry systems use silicone membrane interfaces,
thereby precluding the detection of large, polar compounds.
CONCLUSIONS
The feasibility of constructing a fully man-portable
"transfer line" GC/MS system with automated vapor
sampling capability has been demonstrated. In its present
form, the system weighs 72 pounds, consumes 160 W of
electrical power and can operate continuously for 6-10
hours. Application of novel battery technologies, further
integration of the microcomputer module and use of
alternative vacuum pumping strategies is expected to
reduce overall system weight to less than 50 lbs.
Without vapor preconcentration, practical detection limits
appear to be in the low ppm range. Development of
rapid temperature programming capabilities is being
considered in order to facilitate detection of relatively
nonvolatile species and to increase the range of
compounds that can be analyzed in a single run. The
ultralow power and weight requirements of the technique
would seem to offer promise for a broad spectrum of
field applications ranging from hazardous waste sites and
industrial or natural disaster areas to reconnaissance
drones, space stations, interplanetary probes and
autonomous vehicular robots.
REFERENCES
1. McClennen, W.H., Arnold, N.S., Meuzelaar,
H.L.C., Apparatus and Method for Sampling. U.S.
Patent 4,970,905.
2. Arnold, N.S., McClennen, W.H., Meuzelaar,
H.L.C., "A Vapor Sampling Device for Rapid,
Direct Short Column Gas Chromatography/Mass
Spectrometry Analyses of Atmospheric Vapors",
Anal. Chem., in press.
3. McClennen, W.H., Arnold, N.S., Sheya, S.A.,
Lighty, J.S., Meuzelaar, H.L.C., "Direct Transfer
Line GC/MS Analyses of Incomplete Combustion
Products from the Incineration of Medical Wastes
and the Thermal Treatment of Contaminated
Soils", Proc. 38th ASMS Conf. on Mass Spec.
All. Topics, Tucson, AZ, 1990, 611-612.
4. Arnold, N.S., Kalousek, P., McClennen, W.H.,
Gibbons, J.R., Maswadeh, W., Meuzelaar, H.L.C.,
"Application of Temperature Programming to
Direct Vapor Sampling Transfer Line GC/MS",
Proc. 38th ASMS Conf. on Mass Spec. All.
Topics, 1990, 1401-1402.
5. McClennen, W.H., Meuzelaar, H.L.C., Snyder,
A.P., "Biomarker Detection by Curie-point
Pyrolysis in Combination with an Ion Trap Mass
Spectrometer", Proc. 1987 CRDEC Conf., 271-
277.
ACKNOWLEDGEMENTS
The authors acknowledge Jean-Luc Truche and
John Fjeldsted (Hewlett Packard Corp.) for their valuable
ideas and continued technical support and thank William
H. McClennen and Pavel Kalousek (University of Utah,
Ctr. Micro Analysis & Reaction Chemistry) for their
expert technical advice and assistance. This work was
financially supported by Hewlett Packard Corporation
(University of Utah Instrumentation Grant) and by the
Advanced Combustion Engineering Research Center.
Funds for this Center are received from the National
Science Foundation, the State of Utah, 23 industrial
participants and the U.S. Department of Energy.
292
-------
TABLE I: PRIMARY SYSTEM COMPONENT SELECTION CRITERIA
Automated Vapor Sampling Inlet Module
fully automated
only inert quartz and fused silica materials
ultrashort sample "injection" pulse
Transfer Line GC Module
interferent rejection
rapid analysis capability
Hewlett Packard 5971A Mass Selective Detector
low power requirements (43 W)
lightweight (7 kg)
Alcatel 5010 Molecular Drag Pump
low power consumption (17 W)
high backing pressure up to 40 mbar (no backing pump
needed)
light weight (2.35 kg)
Toshiba 5200, 20 MHz, 386 Computer
low power consumption (40 W)
high speed, capable of running existing MSD software
[Figure 1 data points: Finnigan ITD, Bruker MM-1, INCOS 500, HP MSD 5971A, Spectratrak 600, and the man-portable system; axes: weight (lbs) vs. power (watts). Figure 2 axes: maximum resolving power and transfer line length (cm) vs. retention time (s).]
Figure 1. Power requirements and weights of typical
miniaturized GC/MS systems (note that man-portable
system includes power and carrier gas sources).
Figure 2. Theoretical relationships between internal
column diameter (in \am), maximum achievable resolving
power, column length and retention time for a compound
with capacity factor k=5.0. (Triangles indicate points of
minimum plate height operation.)
-------
To Transfer Line
and Detector
Sampling
Mode
Inert
Vacuum Carrier
Gas
Sample He He +
Sample
Control
System
J Vacuum
Inert
Vacuum Carrier
Gas
To Transfer Line
and Detector
Separation
Mode
Figure 3. Operating principle of automated vapor sampling inlet developed at University of Utah (US patent no.
4,970,905).
[Figure 4 block diagram labels: HP MSD analyzer (RF generator, quadrupole, detector, HP hardware interface); vacuum system (molecular drag pump, vacuum reservoir); vapor inlet (sample pump, GC oven heaters, carrier gas); central data system (portable 386 computer); power system (24 VDC battery, DC/DC converter); "docking station" (refill carrier gas, recharge battery, evacuate reservoir).]
Figure 4. Block diagram of man-portable GC/MS system and docking station interface.
294
-------
Figure 5. Schematic outline of GC/MS man-portable
system with operator. A) vapor inlet/transfer line GC
column; B) MSD analyzer; C) control electronics; D)
portable 386 computer; E) molecular drag pump; F)
vacuum hose; G) vacuum reservoir; H) carrier gas, and I)
24VDC battery.
[Figure 6 plot: ion abundance vs. time (minutes, 1.2 to 3.2).]
Figure 6. Selected ion chromatogram profile of an alkylbenzene mixture at m/z 91 obtained by TLGC/MS using
the automated vapor sampling inlet in combination with a 100 cm long, 100 µm i.d., DB-5 coated fused silica
capillary column. (1) toluene; (2) ethyl benzene; (3) m-xylene; (4) o-xylene. Approximate vapor concentrations:
10 ppm. Arrows indicate air sampling events at 30 second intervals. Note that o-xylene (peak 4) elutes after the
next sampling event.
[Figure 7 plot: total ion counts vs. Time (minutes).]
Figure 7. Total ion chromatogram (TIC) for a mixture of four ketones and ethyl acetate. 1) acetone; 2) methyl ethyl
ketone; 3) ethyl acetate; 4) 3-pentanone; 5) methyl iso-butyl ketone.
295
-------
[Figure 8 plot: selected ion traces (including m/z 79 and m/z 111 for DEEP) vs. Time (s); elution window of about 6 sec.]
Figure 8. Selected ion chromatograms of 4 chemical agent simulants (DMMP=dimethyl methyl phosphonate,
DEEP=diethyl ethyl phosphonate, DIMP=diisopropyl methyl phosphonate, DEM=diethyl malonate). Arrows
indicate air sampling points (17 sec interval). Note separation of all 4 simulants within 6 sec. Star symbol (*)
indicates "pseudo" peak due to effect of eluting air on MS system.
296
-------
DISCUSSION
RALPH SULLIVAN: With these high flow rate systems, how did you go about
calibrating it and how do you introduce the gas to it to know what you have in
the system?
HENK MEUZELAAR: You make diluted air; the flow rate doesn't have to
be above 100 mL per minute, or even 50 per minute. So, if you have a dilution
system that can give you that kind of output, you can just calibrate it with a
calibrated dilution system.
AUDIENCE PARTICIPANT: Could you repeat that?
HENK MEUZELAAR: All right. What I said is the high flow of the outer tube,
the first sampling tube, can be as little as 50 or 100 mL per minute. So, if you have
a vapor dilution system that can give you a couple hundred mL output you can
do a loose coupling for such a system and get very good results. If you have a
vapor dilution system that just puts out a few mL per minute it would be more
difficult to do that. You could do it from a bag: if you could fill a bag and keep it
at atmospheric pressure for several minutes, you could obtain a sample without
changing the pressure or the concentration in the bag.
BILL McCLENNY: I was wondering what the prospects would be for using
some type of preconcentration that involved a cold trap, using thermo electric
cooling or something of that sort, and what that would add to the power
requirements for this unit?
HENK MEUZELAAR: I think almost any type of absorption, desorption, or
preconcentration by any method I know that would keep the high response
characteristic intact would certainly require power, because you would have to
desorb for a relatively short period of time. And the only way to make gain is to
absorb for, let's say, 60 seconds and flush desorb in one or two seconds. That's
going to require power. We are currently looking at a number of different
methods. The power is just needed for a second, or maybe even less
than that. I think it's a doable thing, but it certainly will add to the power
requirement.
297
-------
ON-SITE MULTIMEDIA ANALYZERS:
ADVANCED SAMPLE PROCESSING WITH ON-LINE ANALYSIS
S. Liebman
GEO-CENTERS, INC.
c/o U.S. Army Cml
Rsch, Dev & Engr Ctr
Attn: SMCCR-RSL
Aberdeen Proving Ground,
MD 21010-5423
M. B. Wasserman
U.S. Army Cml Rsch,
Dev & Engr Ctr
Attn: SMCCR-RSL
Aberdeen Proving Ground,
MD 21010-5423
E. J. Levy and S. Lurcott
Computer Chemical Systems, Inc.
Rt. 41 and Newark Rd., Box 683
Avondale, PA 19311
ABSTRACT
The need for on-site chemical analysis
of air, water, and soils has led to
development of two highly automated
prototype instruments in the field of
trace organic analysis: EPyA, the
Environmental Pyroprobe Analyzer and
CHAMP, the Chemical Hazards Automated
Multiprocessor. In the EPyA unit, a
purge and trap module permits routine
determination of target chemicals in
water and hazardous wastes. A thermal
desorption module permits controlled
thermal desorption of air sampling
cartridges, as well as dynamic
headspace/pyrolysis analyses of
solids. CHAMP is based on supercriti-
cal fluid extraction (SFE) with liquid
CO2 mobile fluid for solid samples in
amounts from milligrams to over
several grams in six individually
heated extractors. Specialty
interfaces, such as TRANSCAP, provide
on-line analysis by chromatographic
and/or spectral detectors.
Both benchtop, microprocessor-based
systems are newly designed for in-
field operation, as well as laboratory
or plant sites. Highly automated
instruments such as EPyA and CHAMP
operating with external expertise
provided by artificial intelligence
(AI) software, illustrate the Inte-
grated Intelligent Instrument (I3)
approach which is focused on multi-
media analyses for hazardous
materials.
INTRODUCTION
Advantages of precision, accuracy, and
reproducibility are realized with the
use of automated instruments to per-
form thermal and nonthermal sample
processing with on-line chromato-
graphic and/or spectral analyzers.
New engineering designs are required
to bring this analytical power on-site
to the field, mobile lab, or plant to
provide rapid, validated information
to analysts. Two prototype analytical
systems are described to meet these
needs; the Environmental Pyroprobe
Analyzer, EPyA (1) and CHAMP, the
Chemical Hazards Automated Multi-
Processor (2). The prototypes are
designed for compactness with inte-
grated specialty separation and/or
detector units that are important to
the hazardous waste field for on-site
use. Figure 1(a,b) shows the bench-
top units, each about 2'x2'x3' and
weighing ca. eighty pounds. The pur-
pose of this report is to describe the
ongoing development of specialty in-
strumentation that is based on proven
analytical methodologies in trace
organic analysis.
I. Thermal Sample Processing - EPyA
The thermal analyzer system, EPyA, is
the result of over fifteen years of
engineering design and manufacture of
microprocessor-based instrumentation
used throughout the world for trace
organic analysis of vapors, liquids
and solids. Studies in the 70's and
299
-------
80's developed purge and trap modules
for water analyses and thermal desorp-
tion methods for rapid analyses of air
sampling cartridges that contained
treated charcoals, porous polymers,
Ambersorb, Tenax, etc. (3a). Figure 2
shows a test air mixture with 40 ppb
levels of typical solvents (benzene,
toluene, chlorobenzene, heptane, o-
dichlorobenzene, and dodecane) sampled
for 90 sec at 0.5 mL/min on a Tenax
sorbent bed (100 mg) which was then
thermally desorbed for GC/FID analysis
(3b-e). Figure 3 shows an analysis
conducted for gasoline/fuels using a
cryofocusing concentrator module and
on-line capillary GC/FID detection
(4). Figure 4a,b,c give results from
other studies (4) using remote air
sampling cartridges for analyses of
outside air, laboratory air, and paint
shop air (all 500 ml samples) with
GC/FID analysis.
Recently, the thermal desorption/cryo-
trapping module was used in trace
particulate analysis of a microencap-
sulated pesticide, Diazinon, in an air
sampling cartridge with on-line analy-
sis by GC-MS (5) (Figure 5). A
corresponding dynamic headspace/py-
rolysis method using the Pyroprobe Pt
coil pyrolyzer on a few micrograms of
a microencapsulated sample also
provided trace detection and
identification of the Diazinon core,
which gives a parent ion at m/z 304
and a base peak at m/z 179. Clearly,
thermal desorption, rather than CS2
solvent stripping, proved to be the
optimum analytical method which is now
used throughout the world in the
industrial R&D, forensic, and
environmental fields. However, some
thermally sensitive samples required
additional effort for reliable
analyses. A more effective method
than solvent extraction was needed,
both for analyzing thermally labile
materials, as well as to eliminate
solvent wastes. The traditional
Soxhlet solvent extraction method has
further disadvantages of hour or day-
long extraction times and off-line,
more labor intensive, multistep
analyses for complex environmental
samples.
II. The Nonthermal Sample Processing
Analyzer - CHAMP
The nonthermal multiple sample proces-
sing system, CHAMP, using supercriti-
cal fluid (SF) technology (6) permits
the conduct of trace organic analysis
on diverse samples, including cart-
ridge sorbent beds (7), soils, coals,
or hazardous waste solids. Six
individually heated sample extractors
may contain up to five grams or more
of material to be treated near or at
supercritical fluid conditions in the
2, 4, or 6 mL extractor vessels.
Automated SF extraction (SFE)-capil-
lary GC analysis of gasoline from
charcoal filters may be routinely
analyzed with either single or
multiple SFE units. Analytes re-
quiring well-established capillary GC
methods use the automated SFE system
configured for GC separation.
Alternatively, in Figure 6, the SFE-
SFC analysis is shown of a phosphonate
chemical in soil (ca. 500 mg) with
detection by FID at estimated ppb
levels. The SFE was conducted at 3000
psi, 100°C with CO2, which is a
nontoxic, safe and inexpensive mobile
fluid. The SFC was conducted with a
Nucleosil CN microbore column at 120°C
and pressure programming from 2000 to
6000 psi at 300 psi/min with a FID
unit.
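The SFC pressure program quoted above is a simple linear ramp; the sketch below (illustrative only, not the CHAMP control software) lays out the time and pressure points and the total ramp time implied by those conditions.

    def sfc_pressure_program(start_psi=2000, end_psi=6000,
                             rate_psi_per_min=300, step_min=1.0):
        # Time/pressure points for a linear pressure ramp; the step size is
        # arbitrary and chosen only for illustration.
        ramp_min = (end_psi - start_psi) / rate_psi_per_min   # about 13.3 min here
        points, t = [], 0.0
        while t < ramp_min:
            points.append((t, start_psi + rate_psi_per_min * t))
            t += step_min
        points.append((ramp_min, end_psi))
        return points

    profile = sfc_pressure_program()   # ends at (13.3 min, 6000 psi)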
As with the thermal processing
analyzer, EPyA, it is necessary to
have a variety of detection systems
for adequate analytical sensitivity
and specificity. The SFE process has
been used with FID, ultra-violet and
mid-infrared (ir) spectrometers using
fiber optic monitors (FOM) (6,8).
Figure 7 represents the recent on-line
SFE-SFC analysis of a polyolefin/
naphthalene mixture. An ion trap MS
detector (ITD) was used to detect the
molecular ion from naphthalene (m/z =
128) (9). Other detectors show
similar potential for trace on-line
analyses with highly specific and
sensitive responses to hazardous/toxic
substances, e.g., fluorescence/uv with
fiber optic technology and advanced
data analysis with applied AI (10).
Both EPyA and CHAMP incorporate new
design engineering features that
emphasize compact, transportable
systems. Sample processing,
integrated with separation and
300
-------
detection units, is controlled by
microprocessors with programmable,
interactive software. External AI
software will provide guidance in the
use of the total system.
III. Applied Artificial Intelligence
- Expert System Networks
The I3 approach combines data
generation using highly automated
modular/interfaced systems with
external intelligence for development,
data analysis, interpretation and
validation. Development of a
proprietary expert system network for
SF technology, MicroEXMAT, has been
reported using CCS SF hardware and
methods (11). Currently, a multi-
variate experimental design based on a
Box-Behnken central composite is
linked explicitly in the network via
an expert system, EXBOXB. Further
integration of MicroEXMAT into a full
laboratory information management
system (LIMS) was also outlined
previously (12). Applications to EPyA
and CHAMP are being developed. The
recent ACS Symposium on expert systems
applied to the environmental field
(13) indicates the growing importance
of AI in analytical chemistry.
IV. Summary
Newly designed instrumentation for
multimedia (air, water, solids)
environmental trace organic analysis
is described for on-site applications.
The automated prototype units feature
advanced sample processing with
interfaces for on-line analyses with
chromatographic and/or spectral
detectors. Thermal sample processing
is provided by EPyA, including modules
for purge and trap/thermal desorption,
dynamic headspace, and pyrolysis.
Nonthermal multi-sample processing is
conducted with CHAMP based on super-
critical fluid extraction and
specialty interface units. Analyses
of low ppb levels of vapors, aerosols/
particulates, gasoline, and soils
illustrate the proven capabilities of
the integrated modular systems. A
developing expert system network,
MicroEXMAT, encodes expertise to guide
analysts in analytical strategy,
instrumental configurations, and
method development for the proposed
on-site analyzers.
REFERENCES
1. Manufactured by CDS Instruments,
Division of Autoclave Engineers,
Oxford, PA.
2. Manufactured by Computer Chemical
Systems, Inc., Avondale, PA.
3. (a) Michael, L.C., Pellizzari,
E.D., Norwood, D.L., Environ. Sci.
Technol.. 25, 150-155 (1991), "Appli-
cations of the Master Analytical
Scheme to Determination of Volatile
Organics in Wastewater Influents and
Effluents."
(b) Applications Laboratory,
Chemical Data Systems, Inc., Oxford,
PA.
(c) Liebman, S.A., Ahlstrom,
D.H., Sanders, C.I., First FACSS
Mtg., Atlantic City, NJ, Nov 1974.
"Automatic Concentrator/GC System for
Trace Analysis."
(d) Ahlstrom, D., Kilgour, R.,
Liebman, S., Anal. Chem., 47, 1411
(1975), "Trace Determination of Vinyl
Chloride Monomer by a Concentrator/GC
System."
(e) Liebman, S.A., Wampler,
T.F., Levy, E.J., EPA Internat.
Sympos. on Recent Advances in
Pollutant Monitoring of Air, Raleigh,
NC, May 1982, "Advanced
Concentrator/GC Methods for Trace
Organic Analysis."
4. Applications Lab., CDS
Instruments/ Division of Autoclave
Engineers, Oxford, PA.
5. Liebman, S.A., Smardzewski, R.R.,
Sarver, E.W., Reutter, D.J., Snyder,
A.P., Harper, A.M., Levy, E.J.,
Lurcott, S., O'Neill, S., Proc. Poly-
meric Materials Science and Engineer-
ing. 5.2, 621-625, Amer. Chem.
Soc., Los Angeles, CA, September 1988.
6. (a) Liebman, S.A., Levy, E.J.,
Lurcott, S., O'Neill, S., Guthrie, J.,
Yocklovich, S., J. Chromatogr. Sci.,
27, 118-126 (1989), "Integrated
Intelligent Instruments: Supercritical
Fluid Extraction, Desorption, Reaction
and Chromatography."
7. Raymer, J.H., Pellizzari, E.D.,
Anal. Chem., 59, 1043, 2069 (1987),
"Toxic Organic Compound Recoveries
Using SF C02 and Thermal Desorption
Methods."
8. Liebman, S.A., Fifer, R.,
Griffiths, P.R., Lurcott, S., Bergman,
B., Levy, E.J., Pittsburgh Conf., March
1989, Atlanta, GA, Paper No. 1545,
"Detection Systems for Supercritical
Fluid/GC Instrumentation: Flame
301
-------
Ionization Detector (FID) and Fiber
Optic Monitor (FOM) Units."
9. Liebman, S.A., et al., Pittsburgh
Conf., March 1990, NY, Paper No. 546,
"New Applications of I3 in Trace
Organic Analysis."
10. (a) Siddiqui, K.J., Eastwood, D.,
Lidberg, R.L., SPIE, 1054, 77-90
(1989), Fluorescence Detection III:
Soc. Photo-Optical Instrument. Eng.,
Bellingham, WA, "Expert System for
Characterization of Fluorescence
Spectra for Environmental
Applications."
(b) Eastwood, D., Lidberg, R.L.,
Simon, S.J., Vo-Dinh, "An Overview of
Advanced Spectroscopic Field Screening
with In-Situ Monitoring Instrumenta-
tion and Methods," private communica-
tion.
11. Liebman, S.A., Fifer, R.,
Morris, J., Lurcott, S., Levy, E.J.,
Intelligent Instruments and
Computers, May/June 1990, pp. 109-120,
"An Expert System Network for
Supercritical Fluid Technologies."
12. Liebman, S.A., Snyder, A.P.,
Wasserman, M., Brooks, M.E.,
Watkins, J., Lurcott, S., O'Neill, S.,
Levy, E.J., Internat. Conf. on Anal.
Chem., University of Cambridge, UK,
July/Aug, 1989, "Integrated
Intelligent Instruments in Materials
and Environmental Sciences."
13. Hushon, J.M., Ed., ACS Sympos.
Series 431, Amer. Chem. Soc.,
Washington DC, 1990, "Expert Systems
for Environmental Applications."
[Figure 1 photograph callouts: (a) EPyA; (b) CHAMP; analytical pyrolysis module (Pyroprobe); specialty interfaces to FTIR and MS, MS/MS systems; TRANSCAP interfaces to GC/SFC, FTIR, MS systems.]
Figure 1. (a) EPyA, the Environmental Pyroprobe Analyzer
(b) CHAMP, the Chemical Hazards Automated Multiprocessor
(c) TRANSCAP Interface to Finnigan TSQ MS/MS
302
-------
[Figure 2 chromatogram; panel titles: Test Air Mixture Analysis, Tenax Cartridge.]
Figure 2. Cartridge Sampling for Low PPB Levels of Halocarbons,
Aliphatics, and Aromatics in Air with Thermal Desorber Module
[Figure 3 chromatogram; panel title: Test Samples with Wide-Ranging Volatiles for Cryotrapping, Desorption, and Capillary GC Analysis; Sample Concentrator CDS 530/GC with cryofocusing.]
Figure 3. Gasoline and Diesel Fuel Test Mixture Analyzed with
Cryotrapping, Thermal Desorption, and Capillary GC/FID System
303
-------
Figure 4. Air Monitoring on Tenax Cartridge with Direct Column
Cryofocusing, 500 ml Sample
(a) Outside Air, (b) Lab Air, (c) Paint Shop Air
Figure 5. (a) Reconstructed Ion Chromatogram of Cartridge
Aerosol/Particulate. Thermal Degradation
(260°C/5 min) GC-MS Analysis of Microcap fl
(b) Electron Impact MS of Scanset 1529
304
-------
Figure 6. Supercritical Fluid Extraction-Chromatography (SFE-SFC/FID)
of Bis-(Ethylhexyl)phosphonate, 3000 psi CO2 Mobile Fluid
Figure 7. SFE-SFC Interfaced to Ion Trap Detector (ITD) of Polyolefin
Mixture with Naphthalene
(a) Reconstructed Chromatogram
(b) Mass Spectrum, m/z 128
305
-------
USING A FID-BASED ORGANIC VAPOR ANALYZER IN CONJUNCTION WITH
GC/MS SUMMA CANISTER ANALYSES TO ASSESS THE IMPACT OF LANDFILL
GASES FROM A SUPERFUND SITE ON THE INDOOR AIR QUALITY OF AN
ADJACENT COMMERCIAL PROPERTY
Thomas H. Pritchett
U.S. Environmental Protection Agency
Edison, NJ
David Mickunas and Steven Schuetz
IT Corporation, REAC Contract
Edison, NJ
The ERT was tasked to assess the degree to which VOCs, which
may have been co-migrating with methane from a Superfund
site, were affecting the indoor air quality of a shopping mall.
Of particular concern to the Region was the fact that the mall
had actually been built on top of the site prior to its being
added to the NPL. The actual assessment used a combination
of both field screening methods and fixed laboratory meth-
ods to gather two separate sets of data: one set on the landfill
gases and the other set on the air inside the mall. OVA,
Explosivity, and HNU readings from all of the landfill vents
were used to select the vents from which the Summa canis-
ters would be taken for GC/MS and permanent gas analyses.
Concurrent with Summa sampling, the inside of the mall was
screened using an OVA - particularly at all of the likely
entry points for subsurface gases.
The analytical results were interpreted as follows: The
Summa results were used to determine the "worst case" ratio
of target compound to methane observed in the vent gases.
These values were then multiplied by the worst OVA read-
ings observed in the vicinity of a likely soil gas entry point
in order to predict the highest possible concentration of
VOCs that could have been present due to co-migration with
the methane from the landfill. These "worst case" predic-
tions clearly indicated that there was not an apparent long-
term health risk due to VOC migration from the landfill.
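The ratio approach described above reduces to a one-line calculation. The sketch below is a minimal illustration with entirely hypothetical vent and OVA values; the abstract does not report the actual numbers.

    def worst_case_indoor_voc_ppm(vent_data, indoor_ova_ppm):
        # vent_data: {vent_id: (target_voc_ppm, methane_ppm)} from the Summa
        # canister results; indoor_ova_ppm: OVA readings (as methane) near
        # likely soil-gas entry points inside the building.
        worst_ratio = max(voc / ch4 for voc, ch4 in vent_data.values())
        return worst_ratio * max(indoor_ova_ppm)

    # Hypothetical inputs: two vents and three indoor OVA readings.
    vents = {"V-1": (0.8, 450000), "V-2": (2.5, 600000)}
    upper_bound = worst_case_indoor_voc_ppm(vents, [3.0, 12.0, 7.5])  # ppm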
307
-------
FIELD ANALYTICAL SUPPORT PROJECT (FASP) USE TO PROVIDE DATA FOR
CHARACTERIZATION OF HAZARDOUS WASTE SITES FOR NOMINATION TO
THE NATIONAL PRIORITIES LIST (NPL):
ANALYSIS OF POLYCYCLIC AROMATIC HYDROCARBONS (PAHS)
AND PENTACHLOROPHENOL (PCP)
Lila Accra Transue, Andrew Hafferty, and Dr. Tracy Yerian
Ecology and Environment
101 Yesler Way, Suite 600
Seattle, Washington 98104
ABSTRACT
The path from initial discovery of a site
as potentially contaminated to its inclu-
sion on the National Priorities List (NPL)
requires numerous activities, most impor-
tantly the identification and quantitation
of hazardous wastes or contaminants asso-
ciated with the site and the surrounding
area. New guidance for NPL nomination
places greater emphasis on accurate deter-
mination of the areal and volumetric extent
of contamination during the site assessment
phase of work. Under this guidance, exten-
sive sampling is a prerequisite for charac-
terization of a site. This places a heavy
burden on the United States Environmental
Protection Agency (EPA) regions' ability to
provide quality assurance oversight for
data generated by Contract Laboratory Pro-
gram (CLP) analysis of these samples, and
adds considerable costs and time to the
nomination process. If the contaminants of
concern have been identified previously, it
may be appropriate to characterize the site
using field analytical support. In Region
10, the Field Analytical Support Project
(FASP) program has been integrated into the
Screening Site Inspection (SSI) and Listing
Site Inspection (LSI) process to provide
cost savings and near real-time analytical
information about the site. FASP methods
are designed to meet the data quality ob-
jectives (DQOs) established for each site.
All FASP data used for site characteriza-
tion are confirmed by analyzing 10 percent
of the samples collected for full Target
Compound List (TCL) analysis through the
CLP. Gas chromatographic methodologies for
field analysis of selected PAHs and PCP
have been developed for FASP in response
to a regional need for site characteriza-
tion at wood treating facilities. FASP
methods are developed for small volumes,
rapid extraction and analysis, and minimum
labor intensity. Methods developed for
FASP will be presented, as well as the
results from two LSIs, including a compari-
son of FASP data to CLP confirmation re-
sults at each site.
INTRODUCTION
The United States Environmental Protection
Agency (EPA), under the Superfund Amend-
ments and Reauthorization Act of 1986
(SARA), uses the National Hazardous Waste
Site Investigation program to identify
hazardous waste sites for inclusion on the
National Priorities List (NPL). Ecology
and Environment, Inc. (E & E) holds the
Zone 2 Field Investigation Team (FIT)
contract, under which potential hazardous
waste sites are investigated, and relative
risks and threats to human health and the
environment are evaluated. FIT assists the
EPA in its goal of identifying sites for
the NPL in three stages: 1) Preliminary
Assessments (PAs), 2) Screening Site
Inspections (SSIs), and 3) Listing Site
Inspections (LSIs). A potential hazardous
waste site would go through all three
phases before it could be listed on the
NPL.
In 1988, EPA released the proposed re-
visions to the Hazard Ranking System (HRS),
which is used to score potential hazardous
waste sites based on an assessment of rela-
309
-------
tive risks. Prior to the revised HRS
(rHRS), the extent of contamination at a
hazardous waste site was determined only
after the site actually was placed on the
NPL (during the remedial investigation
phase of site cleanup). The rHRS includes
new guidance for nomination to the NPL, and
places greater emphasis on accurate deter-
mination of the areal and volumetric extent
of contamination during the site assessment
phase of work. Coupled with congressional
mandates aimed at streamlining the listing
process, the new guidance places a heavy
burden on the limited analytical resources
available in terms of the number of samples
required for accurate site characteriza-
tion, and rapid turnaround of analytical
data after sample collection.
The site assessment program obtains most of
its required data through the EPA CLP,
since the CLP provides cost-effective
analyses for a large number of
contaminants. Sometimes, however, it may
be impractical to utilize the CLP to
characterize a site if preliminary data are
already available that identify the target
analytes of concern. The costs and time
involved with a large-scale sampling plan
can be minimized by tailoring the type of
sample analyses performed to the specific
project needs. Also, information obtained
from the laboratory during the sampling
event may allow the field team to optimize
sample locations for proper identification
of site boundaries, while minimizing the
total number of sample analyses required.
These types of laboratory interaction and
sample location tailoring are currently
difficult to obtain through CLP Routine
Analytical Services (RAS).
In addition, RAS contract required quanti-
tation limits may not be adequate to deter-
mine the extent of on-site contamination at
sites where the NPL listing criteria estab-
lishes a need for the lowest obtainable
quantitation limits. It also is possible
that CLP methodology may be inappropriate
under specific matrix conditions present at
a site, potentially resulting in further
elevation of the quantitation limit above
required action levels. Determination of a
matrix interference in advance, through
real-time analysis, may allow for modifica-
tion of the CLP method as requested through
the Special Analytical Services (SAS) pro-
cess, to minimize the necessity of resamp-
ling.
This paper describes an alternative to the
exclusive use of full organics and in-
organics CLP RAS analysis of samples col-
lected during the SSI and LSI processes.
When compared to CLP RAS, this alternative
often results in cost and time savings
while providing analytical information that
satisfies the data quality objectives
(DQOs) for each site.
DQOs
DQOs are statements regarding the level of
uncertainty that a data user or decision-
maker is willing to accept in results de-
rived from environmental measurements. The
DQO process is designed to help the data
user match quality needs with the appro-
priate analytical laboratory and methods so
that the right type, quality, and amount of
data are collected (1).
When applied to hazardous waste site inves-
tigations, the DQO process provides a quan-
titative basis for designing rigorous, de-
fensible, and cost-effective investiga-
tions. The DQO planning process recognizes
that decision making is driven by regula-
tory requirements and by risks to public
health and that the uncertainty in deci-
sions will be affected by the type and
quality of data collected. DQOs provide a
qualitative and quantitative framework
around which data collection programs are
designed, and can serve as performance
criteria for assessing projects (2).
DQOs determine the level of analytical sup-
port necessary to provide decision-makers
with sufficient confidence upon which to
select options with known levels of uncer-
tainty. Choice of specific analytical op-
tions may be determined by:
o Health-based concerns,
o Sample analysis cost,
o Analytes of concern or target/indicator
analytes,
o Regulatory action levels that dictate
method quantitation limits,
o Sample matrices,
o Sample collection, handling, and storage
requirements, and
o Statistical uncertainty in the qualita-
tive identification of analytes and
errors associated with the quantitation.
310
-------
All of the above considerations must be
weighed to determine the appropriate analy-
tical needs for the project data. Rarely,
if ever, will a single analytical program
provide the best technical information and
the most cost effective solution to address
all concerns at the site.
The "art" of field analytical support is to
match analytical capability to the DQOs re-
quired for a specific site in a cost-effi-
cient manner. Once the acceptable level of
error in the result is determined, the
acceptable level of inherent error in the
measurement system can be addressed.
FIELD ANALYTICAL SUPPORT PROJECT (FASP)
PROGRAM
Broadly defined, field analytical support
is the use of chemists in an analytical
laboratory at or near the site of a hazar-
dous waste investigation, removal, or re-
medial action. Field analytical support is
more than a facility or vehicle stocked
with instrumentation, glassware, and
expendables; it is the interactive
management process by which decision-makers
and the personnel who provide the
analytical results integrate planning,
execution, and assessment of analytical
data collection into environmental studies.
These procedures form the basis of the FASP
program.
In the late 1970s and early 1980s, field
analytical support for determinations of
contaminants at hazardous waste sites was
almost exclusively restricted to health and
safety monitoring of on-site personnel.
Early site screening was limited primarily
to air monitoring for volatile organic com-
pounds with hand-held instruments such as
the HNu PI101 (photoionization detection)
and the Foxboro OVA (flame ionization
detection). Within the last decade, more
sophisticated analytical instrumentation,
such as portable (hand-carried) and trans-
portable (mobile laboratory supported) gas
chromatographs and light-weight, compact
X-Ray fluorescence and atomic absorption
analyzers, have begun to be employed rou-
tinely in hazardous waste site investiga-
tions. These new instruments, coupled with
field-experienced chemists, have provided
near real-time organic and inorganic
analyses for contaminants in air, soil,
water, and other matrices (3).
Under E & E's Zone 2 FIT contract, a FASP
program was initiated in 1984. The main
purpose of FASP is to support the PA, SSI,
and LSI process by utilizing field analyti-
cal methods to provide useful information
about site contaminants on a real- or near
real-time basis. FASP can be a cost- and
time-effective alternative or supplement to
conventional laboratory sample analysis in
many situations. Turnaround time for
conventional laboratory analyses, such as
CLP RAS, is 40 days after receipt of the
samples. CLP data for site assessment
activities must undergo data validation by
a FIT chemist, which takes approximately two
weeks. By contrast, FASP data are
generally provided verbally within 24 hours
of sample receipt, and a final deliverable
is often available approximately 14 days
after the project is completed. FASP data
are evaluated during laboratory projects.
Additional data validation time is not
required.
The EPA recognizes that field analytical
methods, such as those FASP provides, are appro-
priate for many decisions made in Superfund
(American Environmental Laboratory, October
1990). The EPA encourages the use of these
field analytical methods for screening,
monitoring and other assessments requiring
rapid turnaround of data, and for decisions
where unconfirmed analyte identity and
estimated concentrations are appropriate.
FASP methods are currently included in
EPA's revised Field Analytical Methods
Catalogue. FASP data have been used to:
o Optimize sampling grids,
o Select groundwater well screen depths,
o Guide remedial disposal requirements,
o Provide guidance to cleanup contractors,
o Assist in spill response,
o Select well locations based on soil gas
monitoring,
o Provide enhanced site characterization,
o Identify the most appropriate samples
for CLP analysis,
o Estimate waste quantities,
o Determine extent of contamination migra-
tion, and
o Find "hot-spots".
FASP is not a replacement for or an equiva-
lent of the EPA CLP. FASP does provide
real-time data of known (legally admis-
sible) quality, which may be used in situa-
tions where data generated by a certified
laboratory and standard methodology is not
a requirement for decision making. All
FASP analytes are, by definition, tenta-
311
-------
tively identified, and all FASP quantita-
tive data are estimated concentrations be-
cause methods and quality control (QC) are
a subset or variants of standard CLP QC.
Although both qualitative and quantitative
accuracy and precision may nearly equal
CLP, no attempt is made to alter these
limitations. Therefore, to properly iden-
tify FASP data as tentatively identified
with estimated concentrations, all FASP
data in Region 10 are annotated with the
qualifier "F". This qualifier also indi-
cates that field methodologies were
employed to generate the data.
FASP often is used at sites where previous
sampling has been performed and target
analytes have been identified. When
analytes have been identified previously,
unambiguous identification (i.e., mass
spectral detection) may not be required.
FASP is used most efficiently in the
analysis of samples for a limited group of
analytes requiring only one or two analyti-
cal methodologies. FASP is not used rou-
tinely for analysis of samples for unknown
contaminants.
FASP STANDARD OPERATING GUIDELINES (SOGs)
The FASP program functions under SOGs that
provide guidance on general QC and analyte-
matrix-specific methodologies which have
been developed within the FASP program.
Methodologies are developed on an as-needed
basis, to accommodate the FIT program, or
any other program in which FASP is uti-
lized. FASP methods are designed to pro-
vide near real-time data to field person-
nel. To accomplish this goal, the methods
utilize simplified sample preparation tech-
niques (disposable glassware, smaller scale
extractions) based on more exhaustive con-
ventional laboratory methods, such as CLP
methods. As field analytical methodologies
and the associated QC are generated, they
are standardized, reviewed by FASP
chemists, and submitted to EPA for review
by the Analytical Operations Branch (AOB)
Field Methods Workgroup for final approval.
By the use of standardized and approved
SOGs, consistent data of known quality are
generated.
Like EPA or other standard methods, SOGs
prepared for field analytical support pro-
vide information on the approximate pre-
cision and accuracy that the methods may
provide for sample analysis. However, FASP
methods often are tailored to meet site-
specific requirements. This increases the
probability of obtaining useful data by
overcoming matrix problems, establishing
appropriate quantitation limits for the
project DQOs, or focusing on specific
target analytes.
QC
FASP QC is based on the needs of the FIT
program and may vary according to the
analytical method and/or specific project
needs. There are, however, some general
guidelines provided by SOGs which are
consistently employed.
Instrument Calibration
Gas chromatographic response to target
analytes for the external standard method
of quantitation is measured by determining
calibration factors (CFs), which are the
ratio of the response (peak area or height)
to the mass injected. An initial calibra-
tion designed to demonstrate the instru-
ment's linear response is generated for
each target analyte by analyzing a minimum
of three standard concentrations which
cover the working range of the instrument.
Using the calibration factors calculated
from the initial calibration, the percent
relative standard deviation (%RSD) is cal-
culated for each analyte at each concentra-
tion level. The percent relative standard
deviation generally is required to be less
than or equal to 25 percent.
The mean initial calibration factor for
each analyte is verified by the continuing
calibration during each operational period
(daily) to ensure detector stability. Mid-
range standards are analyzed, and calibra-
tion factors are compared to the mean
initial calibration factor for each
analyte. The relative percent difference
generally is required to be less than or
equal to 25 percent. If the continuing
calibration criteria are not met for each
target analyte, a new initial calibration
is performed.
Final calibrations are performed at the end
of a project, or sampling effort to ensure
analytical instrument stability. The cali-
bration factor from the final calibration
is compared to the mean initial calibration
factor for each analyte. The relative per-
cent difference is required to be less than
or equal to 50 percent. If the relative
percent difference meets continuing cali-
312
-------
bration criteria, the final calibration
also may be used as a continuing calibra-
tion.
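The calibration checks above lend themselves to a short numerical illustration. The following sketch is not part of the FASP SOGs; the responses are hypothetical, and the relative percent difference is computed against the mean of the two calibration factors, which is one common convention.

    import statistics

    def calibration_factor(response, mass_ng):
        # CF = peak area (or height) per unit mass injected
        return response / mass_ng

    def initial_calibration(cfs, max_rsd=25.0):
        # Linearity check: %RSD of the CFs across standard levels <= 25 percent
        mean_cf = statistics.mean(cfs)
        rsd = 100.0 * statistics.stdev(cfs) / mean_cf
        return mean_cf, rsd, rsd <= max_rsd

    def calibration_check(mean_cf, check_cf, max_rpd=25.0):
        # Continuing (daily) check: RPD <= 25 percent; the final calibration at
        # the end of a project uses the same comparison with a 50 percent limit.
        rpd = 100.0 * abs(check_cf - mean_cf) / ((check_cf + mean_cf) / 2.0)
        return rpd, rpd <= max_rpd

    # Hypothetical three-level initial calibration for one analyte:
    cfs = [calibration_factor(a, m) for a, m in [(5200, 10), (26500, 50), (51800, 100)]]
    mean_cf, rsd, linear_ok = initial_calibration(cfs)
    rpd, daily_ok = calibration_check(mean_cf, calibration_factor(25400, 50))
    final_rpd, final_ok = calibration_check(mean_cf, calibration_factor(23000, 50), 50.0)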
Analyte Identification and Quantitation
Qualitative identification of target
analytes is based on both detector selec-
tivity and relative retention time as com-
pared to known standards, using the
external standard method. Generally,
individual peak retention time windows
should be less than ±5 percent for packed
columns.
The concentration of an analyte in the
sample is calculated using the calibration
factor for that analyte calculated from the
continuing calibration. Reported results
are in micrograms per kilogram (ug/kg)
without correction for blank results, spike
recovery, or percent moisture.
Sample chromatograms may not match identi-
cally with those of analytical standards.
When positive identification is question-
able, the chemist may calculate and report
a maximum possible concentration (flagged
as < the numerical value) which allows the
data user to determine if additional (e.g.,
CLP RAS or SAS) analysis is required or if
the reported concentration is below action
levels and project objectives and DQOs have
been met.
Similarly, when sample concentration ex-
ceeds the linear range, the analyst may
report a probable minimum level (flagged as
> the numerical value) which allows the
data user to determine if additional (e.g.,
CLP RAS or SAS) analysis is required or if
the reported concentration is above action
levels and project objectives and DQOs have
been met.
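Putting the retention time window, the external standard calculation, and the two reporting flags together gives a compact decision rule. The sketch below assumes a typical external standard formula (injection volume, extract volume, dilution factor); the SOG's exact equation is not reproduced in this paper, so the arithmetic and the numbers are illustrative.

    def within_rt_window(rt_sec, std_rt_sec, window_pct=5.0):
        # Packed-column identification: retention time within +/-5% of the standard
        return abs(rt_sec - std_rt_sec) <= std_rt_sec * window_pct / 100.0

    def soil_conc_ug_per_kg(area, cf_area_per_ng, inj_ul, extract_ml,
                            sample_g, dilution=1.0):
        # External standard quantitation; no blank, spike recovery,
        # or percent moisture correction is applied.
        ng_injected = area / cf_area_per_ng
        ng_in_extract = ng_injected / inj_ul * extract_ml * 1000.0 * dilution
        return ng_in_extract / sample_g            # ng/g is numerically ug/kg

    def report(conc, identification_certain, within_linear_range):
        if not identification_certain:
            return "< %.0f ug/kg" % conc           # maximum possible concentration
        if not within_linear_range:
            return "> %.0f ug/kg" % conc           # probable minimum level
        return "%.0f ug/kg F" % conc               # Region 10 field-data qualifier

    # Hypothetical sample: 2 uL injected from a 1.0 mL extract of 2.5 g of soil.
    c = soil_conc_ug_per_kg(84000, 520, 2.0, 1.0, 2.5)
    print(report(c, within_rt_window(612, 604), True))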
Blank Analysis
A method blank is performed with every set
of samples extracted; a minimum of one
method blank per 20 samples is performed.
The method blank must contain less than the
project quantitation limit, the minimum
reportable value, for each target analyte.
Matrix Spike Analysis
Accuracy is defined as the closeness to
-------
analyte concentrations may be used as a
comparison of the two data sets.
FASP POLYCYCLIC AROMATIC HYDROCARBONS
(PAHs) ANALYTICAL METHODOLOGY
FASP PAH methodology provides identifica-
tion of a subset of the base/neutral acid
(BNA) compounds included on the CLP Target
Compound List (TCL). The method provides
tentative identification of the PAH com-
pounds listed below, at estimated concen-
trations:
Naphthalene
Acenaphthylene
Acenaphthene
Fluorene
Phenanthrene
Anthracene
Fluoranthene
Pyrene
Chrysene
Benzo(a)anthracene
Benzo(b)fluoranthene
Benzo(k)fluoranthene
Benzo(a)pyrene
Indeno(1,2,3-cd)pyrene
Dibenzo(a,h)anthracene
Benzo(g,h,i)perylene
For the soil matrix, a well homogenized 2
or 3g sample is weighed into a disposable
culture tube with a Teflon-lined cap. The
sample is extracted with 6 mLs of methylene
chloride twice by vortexing for 2 minutes,
combining the extracts. The final extract
is dried with a small amount of sodium
sulfate and then solvent exchanged into
isooctane.
Isolation of the target analytes is accomp-
lished by a small-scale silica gel column
cleanup. A disposable glass 4 mL giant
pipette is filled with a plug of glass
wool, silica gel, and sodium sulfate. The
column is eluted first with methylene
chloride, then petroleum ether (10 mLs of
each). The sample, in isooctane, is then
introduced onto the column. After the
sample is introduced to the column, the
column is first eluted with petroleum ether
(6 mLs) in order to allow interfering
contaminants, such as hydrocarbons, to be
removed. The PAHs are then eluted with
methylene chloride (10 mLs), and the final
volume of the extract is reduced to 1.0
mL under a stream of nitrogen.
The sample is analyzed by gas chromato-
graphy, using a J&W 0.53 mm x 15 m DB-5
fused silica megabore column and employing
flame ionization detection. A temperature
program is utilized to optimize separation
of the analytes. The gas chromatographic
analysis time is approximately 30 minutes.
Samples are quantitated using the external
standard method. Standard mixes are pur-
chased from a commercial manufacturer and
diluted to appropriate concentrations for
instrument calibration. Calibration
factors are calculated for each analyte in
the initial and continuing calibrations.
The concentration of the analyte(s) in a
sample is calculated based on the analyte
calibration factors calculated from con-
tinuing calibrations.
The quantitation limits for the FASP PAH
methodology are 1,000 ug/kg, while CLP RAS
required quantitation limits are 330 ug/kg.
As the CLP samples do not undergo silica
gel cleanup, the final matrix potentially
contains a higher degree of interference
from petroleum hydrocarbons, which are
often present along with the PAHs. When
petroleum hydrocarbon interferences are
present, the sample often requires dilution
before an accurate analysis can occur.
This results in an elevation of the actual
contractual quantitation limits. Samples
analyzed by FASP methodology are relatively
free of these interferences, and generally
do not require dilution.
The total time for preparation and analysis
of 10 soil samples is 490 minutes. In a
10-hour day, the maximum capacity for a
field analytical laboratory equipped with
one gas chromatographic system is approxi-
mately 11 samples during the first day of
operation, and 20 samples each day there-
after. This projected capacity does not
take into account any dilutions which may
be required when high target analyte levels
are present.
This method employs only disposable glass-
ware, eliminating time required for clean-
ing glassware, and minimizing the potential
for cross contamination. Solvent volumes
are minimal, requiring a total of only 40
mLs per sample, compared to the CLP method
for BNAs which requires 300 mLs of solvent
per extraction.
314
-------
FASP PENTACHLOROPHENOL (PCP) ANALYTICAL
METHODOLOGY
For soil, a well homogenized 2 or 3g sample
is weighed into a disposable culture tube
with a Teflon-lined cap. The soil is dried
by adding a small amount of sodium sulfate.
The sample is then extracted with methanol
(10 mLs) by vortexing for 2 minutes. Five
mLs of the extract is transferred into a
clean culture tube.
The extract is derivatized with a solution
of pentafluorobenzyl bromide and hexaoxa-
cyclooctadecane (18-crown-6 ether) in 2-pro-
panol. One mL of the derivatization solu-
tion is added to the sample extract, along
with 3 mg of potassium carbonate. The
culture tube is then capped, gently shaken,
and left in a hot water bath at 80°C for 4
hours. The culture tube is allowed to
cool, then the sample is extracted with 5
mLs of hexane by vortexing for 1 minute.
Five mLs of carbon-free water are added to
the culture tube, and vortexed for an addi-
tional minute. The hexane layer, which
contains the derivatized PCP, is trans-
ferred to a clean culture tube and dried
with a small amount of sodium sulfate. The
extract is then ready for analysis.
The extract is analyzed by gas chromato-
graphy using a 1.0 m, glass column packed
with 1.5% SP-2250/1.95% SP-2401 and employ-
ing electron capture detection. The iso-
thermal column oven temperature is 275°C,
and gas chromatographic analysis time is
approximately 20 minutes.
Samples are quantitated using the external
standard method. Standards, blanks, and
appropriate quality control samples are
prepared with each batch of samples de-
rivatized.
The quantitation limit for PCP using this
methodology is 50 ug/kg. The quantitation
limit for PCP by CLP BNA methodology is
significantly higher (1,600 ug/kg). FASP
methodology allows for the lower quantita-
tion limit by isolating the PCP present in
the sample and removing matrix interfer-
ences, and then using a more sensitive in-
strumental technique (GC/ECD).
The total time for preparation and analysis
of 10 soil samples for PCP is 530 minutes.
In a 10-hour day, the maximum capacity for
a field analytical laboratory equipped with
one gas chromatographic system is approxi-
mately 10 samples during the first day of
operation, and 20 samples each day there-
after. This projected capacity does not
take into account any dilutions which may
be required due to high target analyte con-
centration in the sample.
This method, like the PAH method, employs
only disposable glassware, and consumes
only minimal solvent volumes (21 mLs total)
compared to CLP solvent volumes of 300 mLs
per sample extracted.
CASE STUDY 1
E & E was tasked to perform an LSI at an
active wood treating facility occupying 19
acres in Oregon. The facility operations
involve pressure treating wood products
using creosote (containing PAH compounds)
and PCP in a petroleum oil carrier. The
determination of the extent of on-site sur-
face contamination was defined as one of
the objectives of the LSI, requiring
analysis of 56 on-site grid surface soil
samples. Since the target analytes were
known, it was determined that site-specific
DQOs could be met by using FASP at a sub-
stantial cost and time savings compared to
a full CLP sample analysis scheme.
Sixty-two surface soil samples were col-
lected at the site for FASP analysis, in-
cluding six duplicate, or colocated
samples. The samples were shipped to the
FASP Seattle Base Laboratory for analysis,
as the project was not large enough to
justify mobilization. The sample analyses
were completed within 24 hours of receipt
of the last sample shipment.
A cost comparison was calculated for FASP
versus CLP RAS analysis of the samples.
The total FASP costs included the purchase
of required expendables, which totaled
approximately $4,546.00 and labor, which
totaled approximately $13,300 for 350 hours
of effort. If CLP had been utilized for
these analyses, the total cost would have
been $27,308, which accounts for laboratory
charges and data validation. This amounts
to a savings of $9,461 by utilization of
the FASP program. This comparison indi-
cates that full organics CLP RAS would not
be appropriate for these samples. Rather,
a focused analysis, such as CLP SAS or FASP
would be more appropriate. For near real-
time availability of sample data, FASP
would be the preferred alternative.
The confirmatory samples were analyzed for
315
-------
BNA compounds by a CLP laboratory at a fre-
quency of approximately 10 percent (8
samples). Sample quantitation limits were
consistently higher for the CLP data set
due to the matrix interferences from the
oil present in the samples. For most
samples, quantitation limits were elevated
2 to 300 times above the contract-required
quantitation levels.
Correlation between the FASP and CLP data
sets was excellent. FASP identification of
PAHs and PCP was confirmed, and relative
trends in concentrations generally agreed.
A statistical analysis of the data sets was
performed using correlation coefficients.
FASP and CLP data sets were compared for
analytes where four or more pairs of data
points were available (i.e., four or more
samples sent for confirmatory analysis had
results above method quantitation limits
for the analyte). The calculated correla-
tion coefficients are summarized in Table
1.
Table 1. CORRELATION COEFFICIENTS FOR FASP AND CLP DATA: CASE STUDY 1

Analyte                                        Data Pairs Used   Correlation Coefficient (r)
Phenanthrene/Anthracene                               6                 0.999
Fluoranthene                                          6                 0.999
Pyrene                                                6                 0.999
Chrysene/Benzo(a)anthracene                           8                 0.9997
Benzo(b)fluoranthene/Benzo(k)fluoranthene             8                 0.9775
Benzo(a)pyrene                                        4                 0.9703
Pentachlorophenol                                     6                 0.9696
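The correlation coefficients in Tables 1 and 2 are ordinary Pearson coefficients computed on matched FASP/CLP result pairs (only analytes with four or more usable pairs were compared). A minimal sketch, using hypothetical concentration pairs since the raw paired results are not tabulated here, is:

    import statistics

    def pearson_r(fasp, clp):
        # Pearson correlation coefficient for paired FASP and CLP results.
        if len(fasp) != len(clp) or len(fasp) < 4:
            raise ValueError("need at least four matched data pairs")
        mx, my = statistics.mean(fasp), statistics.mean(clp)
        sxy = sum((x - mx) * (y - my) for x, y in zip(fasp, clp))
        sxx = sum((x - mx) ** 2 for x in fasp)
        syy = sum((y - my) ** 2 for y in clp)
        return sxy / (sxx * syy) ** 0.5

    # Hypothetical paired soil results in ug/kg:
    r = pearson_r([1200, 4800, 900, 15000], [1350, 5100, 800, 14100])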
As a result of the FASP analysis and CLP
confirmation, the data generated by FASP
were determined to be acceptable for use in
determining the on-site hazardous waste
quantity. This allowed data users to
accurately measure the relative risks
resulting from on-site contamination.
CASE STUDY 2
An LSI was performed at an inactive pipe-
coating facility, which had generated coal
tar, coal tar epoxies, asphalt, and cement
mortar wastes over the 51 acres for
approximately 30 years. Several target
analyte groups had been identified pre-
viously, including volatile organic com-
pounds, PAHs, and polychlorinated biphenyls
(PCBs). The project objectives required
on-site surface soil contamination to be
characterized. An on-site grid sampling
pattern was used, resulting in collection
of 54 samples.
Previous site sampling events had identi-
fied the target analytes, allowing for FASP
analysis of the on-site surface soil
samples while maintaining the project DQOs.
The soil samples were analyzed for volatile
organic compounds, PAHs, and PCBs at the
FASP Seattle Base facility. It was more
cost-effective to analyze the samples at
the base facility due to the variety of
analyses required and the relatively small
size of the project.
The cost of FASP analysis of the 54 samples
and four field duplicate samples was
$20,900 ($1,900 for supplies, $19,000 for
labor) compared to CLP analysis costs which
would have totaled $57,408. This amounted
to a total savings of $36,508 by utilizing
FASP. All sample analyses were completed
within 7 days of the last sample shipment
date.
Six samples (approximately 10 percent of
the total number of samples) were split and
sent to a CLP laboratory for confirmatory
volatile, BNA, and pesticide/PCB analysis.
Again, matrix interferences prevented CLP
BNA analysis without elevated quantitation
limits due to the presence of oil. FASP
methodology, involving sample cleanup for
specific analyses, removed much of the oil
interference.
Correlation between the two data sets was
excellent. FASP identification of volatile
compounds, PAHs, and PCBs was confirmed by
CLP data, and relative trends in analyte
concentrations agreed. Calculated correla-
tion coefficients were generated where four
or more data pairs were available. One
split sample contained extremely high
levels of PAHs. CLP results were signifi-
cantly and consistently higher than the
FASP results for all PAHs detected in this
sample. It is most likely that this
phenomenon was due to the non-homogeneous
nature of the soil matrix. Therefore, this
data pair was not included in the correla-
tion coefficient calculation. The correla-
tion coefficients are presented in Table 2.
316
-------
Table 2. CORRELATION COEFFICIENTS FOR FASP AND CLP DATA: CASE STUDY 2

Analyte                                          Data Pairs Used   Correlation Coefficient (r)
Fluoranthene                                            4                 1.000
Pyrene                                                  4                 1.000
Chrysene/Benzo(a)anthracene                             4                 1.000
Benzo(b)fluoranthene/Benzo(k)fluoranthene               4                 1.000
Benzo(a)pyrene                                          4                 1.000
Indeno(1,2,3-cd)pyrene/Dibenzo(a,h)anthracene           4                 0.999
Benzo(g,h,i)perylene                                    4                 0.999
Aroclor 1254                                            5                 0.945
A statistical analysis of matrix spike re-
covery data for eight samples collected at
both of the sites described above is pre-
sented in Table 3.
CONCLUSION
Recently, EPA has placed a greater emphasis
on the determination of extent of contami-
nation during site assessments. FASP was
initiated under E & E's Zone 2 FIT contract
in 1984, and is a viable alternative or
supplement available to address the
analytical demands for determining relative
risks at hazardous waste sites. FASP
provides data of known quality, using
standard methodologies and QC modified to
meet the project DQOs. FASP data can be
obtained at a substantial cost and time
savings when compared to conventional CLP
analysis, and has been used successfully
for characterization of sites with known
target analytes.
Table 3. AVERAGE MATRIX SPIKE RECOVERIES FOR SOIL SAMPLES AT HAZARDOUS WASTE SITES

Analyte                                          Average Percent Recovery   Standard Deviation
Naphthalene                                               70.0                     36.8
Acenaphthylene                                            103                      47.3
Acenaphthene                                              94.3                     36.6
Fluorene                                                  90.3                     24.9
Phenanthrene/Anthracene                                   93.4                     38.9
Fluoranthene                                              118                      53.5
Pyrene                                                    123                      53.8
Chrysene/Benzo(a)anthracene                               121                      34.5
Benzo(b)fluoranthene/Benzo(k)fluoranthene                 107                      26.6
Benzo(a)pyrene                                            112                      24.0
Indeno(1,2,3-cd)pyrene/Dibenzo(a,h)anthracene             98.7                     25.8
Benzo(g,h,i)perylene                                      88.0                     28.8
Pentachlorophenol                                         122                      51.4
REFERENCES
1. Cram, S.P., American Environmental
Laboratory, September 1989, pp. 19.
2. Neptune, D., E.P. Brantly, M.J.
Messner, D.I. Michael, May-June 1990,
Hazardous Materials Control, Volume 3,
Number 3, pp. 19.
3. Hafferty, Andrew, September 1989, "A
Cost Summary of Field Screening Implementa-
tion in Region 10", Division of Environ-
mental Chemistry, Proceedings, American
Chemical Society National Meeting, Miami
Beach, Florida.
DISCUSSION
DOUG PEERY: You were talking about doing 20 samples in a ten-hour day with
a 30-minute run time. Does that include your QA/QC or did you have another four
hours of work time to cover that?
LILA ACCRA-TRANSUE: We did ten samples or 20 sample analyses. So that
includes the QC samples that we need to run.
DOUG PEERY: So you're talking about your standards and your QC's within
that 20 number.
LILA ACCRA-TRANSUE: Right.
VICKI TAYLOR: How many split sample pairs did you take?
LILA ACCRA-TRANSUE: We take approximately 10%. For the first project
we'd taken eight and for the second project, six.
VICKI TAYLOR: So you were basically presenting a correlation coefficient for
all the split samples that you took?
LILA ACCRA-TRANSUE: Right. All of the comparable data pairs are reported
where they were hits in both samples.
317
-------
Thermal Desorption Gas Chromatography-Mass Spectrometry
Field Methods for the Detection of Organic Compounds
A. Robbat, Jr., T-Y Liu, B. Abraham, and C-J Liu,
Tufts University, Chemistry Department, Trace
Analytical Measurement Laboratory,
Medford, MA 02155
INTRODUCTION
The overwhelming amount of information required to characterize
purported hazardous waste sites, as well as to support Superfund
site cleanup and closure activities, has catalyzed the development
of field instrumentation capable of providing site managers with
immediate access to chemical and physical data. The demand for
field "practical" methods and instrumentation has been recognized
by the U.S. Environmental Protection Agency (1, 2).
Faster data turnaround times and ease of operation have been the
primary motivation for selecting field gas chromatographic (GC)
methods of analysis. Despite recent advancements in field GC
instrumentation, typical applications focus on the detection of EPA
listed volatile organic compounds (VOCs) in water, air, or soil
gas. The primary limitation of commonly employed field GC's is
the non-definitive signal response of the detectors (including
photoionization, flame ionization, thermal conductivity, and
electron capture) which are incapable of providing unambiguous
identification of the wide variety of organic compounds that may
be present in a highly contaminated sample. Generally, ten to
twenty percent of the samples analyzed on-site are "split" for
confirmation by GC with mass spectrometric (MS) detection.
Because most commercially available mass spectrometers have
traditionally been housed and operated in clean-air, temperature-
controlled rooms, and because of the notion that economies of scale require
highly trained MS operators to be based in multi-MS laboratories,
misapprehensions have arisen as to whether MS's can be operated
successfully (and profitably) in the field.
The limited availability of field GC-MS's is not a function of MS
operating requirements but rather of the perception that significant
sample cleanup and QA/QC procedures will be required to obtain
useful data as well as the apparent reluctance of instrument
manufacturers to enter the field marketplace. Until recently, these
misconceptions have perpetuated the myth that GC-MS's belong
solely in the laboratory.
Over the last several years, we have discussed field GC-MS
applications utilizing Bruker Instruments' mobile mass
spectrometer (2-6). The MS, initially designed for NATO as a
chemical warfare detector, was manufactured from the outset as a
field instrument. In our studies, the MS was transported from
site-to-site in a mid-sized truck and was battery operated for ~8
to 10 hr at ambient conditions. For example, samples have been
analyzed under outdoor conditions where temperatures ranged
between 10 °F and 90 °F, in rain, snow, and high humidity. Gas
cylinders were not necessary for GC operation since charcoal
filtered ambient air served as the carrier gas.
Simple field methods have been developed based on analyte
introduction by thermal desorption (TD) followed by fast GC
separation and MS detection. Screening level and more
quantitative TDGC-MS methods have been submitted to EPA's
EMSL-Las Vegas for VOCs in water, soil/sediment, soil gas, air
and polychlorinated biphenyls (PCBs) and polycyclic aromatic
hydrocarbons (PAHs) in soil/sediment for inclusion into the
compendium of field methods that will be published by EPA's
Analytical Operations Branch. The methods include a menu of
QA/QC procedures whose implementation depends upon a given
study's objectives. The goal is to provide a practical GC-MS tool
that can deliver the quality of data required for the study with
minimal sample cleanup. Presented in this paper are typical
examples of data quality and a comparison of field and laboratory
results one can expect from both the screening and more
quantitative field TDGC-MS methods for PCBs, PAHs, and
pesticides.
EXPERIMENTAL SECTION
A mobile mass spectrometer (Bruker Instruments, Billerica, MA)
was used in these studies. The TDGC-MS was powered by
battery or electrical supply from the site. The MS was transported
to Superfund sites in Westborough (Hocomonco Pond; PAHs) and
319
-------
North Dartmouth, MA (Resolve; PCBs) in a Chevrolet Blazer. In
addition to the instrument's internal data collection and monitoring
system, the MS was equipped with an external data system and
thermal desorption sampling probe. Sample introduction was
made by thermally desorbing (TD) the analyte directly from
soil/sediment or from an organic extract through the TD sampling
probe's (SP) short 3.5 m fused silica capillary column. For direct
TD soil/sediment experiments, 0.5 g of soil was placed on an
aluminum foil covered petri dish. An internal standard was
injected into the soil before the measurement was made. In
contrast, the more quantitative measurements required several
additional steps: 1) 0.5 g of soil was weighed and extracted with
2 ml of solvent; 2) prior to extraction, a known quantity of
surrogate (or target) compound(s) was added to the soil (or field
blank) to determine extraction efficiencies (note: this step was
required since a single 2 ml extraction yielded analyte recoveries
of less than 100%); 3) co-inject known aliquots of extract and
internal standard onto an aluminum foil covered petri dish; 4)
thermally desorb analyte. Shown below are the TDGC-MS
operating and PCB, PAH, and pesticide experimental conditions:
Operating Conditions

Mass Spectrometer      Bruker Instruments (Billerica, MA)
  electron energy      70 volts (nominal)
  mass range           45 to 400 amu
  scan time            2 sec
  MS tune              autocalibrate (H2O; FC-77); 18, 69, 119, 169, 331 amu
  mass resolution      set to unity; ca. 10% valley definition
  ion detection        17-stage Cu-Be dynode electron multiplier with
                       self-scaling integration amplifier (10^8 linearity)

Sampling Probe Head    260 °C

GC Column              DB-5 (J & W Scientific, Folsom, CA)
  dimensions           3.5 m x 0.32 mm i.d.; 0.25 µm film thickness
  carrier gas          ambient air purified through carbon filters
  flow rate            3 to 4 ml/min

                       PCBs              PAHs               Pesticides
initial temp           140 °C, 30 sec    70 °C, 40 sec      120 °C
temp prog              120 °C/min        35 °C/min          17 °C/min
final temp             200 °C, 90 sec    233 °C, 80 sec     233 °C
internal standards     d10-pyrene        d8-naphthalene     d10-phenanthrene
                                         or d10-pyrene
solvent extraction     C6H14             CH2Cl2             C6H14
Data were acquired by using the internal monitor's selected ion
monitoring program. The data system reported the total ion
current as a logarithmic value. The antilog value is used in
conjunction with MS response factors and analyte recoveries to
calculate concentrations in the sample. Standards were purchased
commercially from the following companies: PCBs (Ultra
Scientific, Hope, RI); PAHs (Supelco, Inc., Bellefonte, PA);
Pesticides (Chem Service, West Chester, PA); internal standards
(Cambridge Isotope Laboratories, Woburn, MA). All standards
and soil recovery experiments were prepared with high purity
solvents (> 96 %) as received.
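To make the quantitation step concrete, the following sketch (written in Python; the variable names, the response factor convention, and the example numbers are illustrative assumptions rather than the instrument's documented algorithm) shows how a concentration could be computed from the logged ion current, an internal standard response factor, and a measured extraction recovery.

    # Minimal sketch of the quantitation described above: the data system
    # reports log10(ion current); the antilog is combined with a response
    # factor (RF) relative to the internal standard and an extraction
    # recovery.  Names, the RF convention, and the numbers are assumptions.
    def amount_ng(log_signal_analyte, log_signal_istd,
                  istd_amount_ng, rf, recovery_fraction):
        """Return the analyte amount (ng) in the desorbed aliquot."""
        area_analyte = 10 ** log_signal_analyte   # antilog of the logged ion current
        area_istd = 10 ** log_signal_istd
        amount = (area_analyte / area_istd) * istd_amount_ng / rf
        return amount / recovery_fraction         # correct for extraction losses

    # Example: logged signals of 5.3 (analyte) and 5.0 (d10-pyrene, 200 ng
    # added), RF = 0.16, and an 80% solvent extraction recovery.
    print(round(amount_ng(5.3, 5.0, 200.0, 0.16, 0.80), 1))

Dividing the result by the mass of soil taken for extraction would then give a soil concentration.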
RESULTS and DISCUSSION
The objective of this study was to develop fast TDGC-MS
methods (< 20 min/sample including sample cleanup). Two
methods were developed. Analyte introduction for quantitative
measurements was made by co-injecting organic extracts (or
standard solutions) of PCBs, PAHs, or pesticides and internal
standard(s) onto an aluminum covered petri dish followed by
TDGC-MS; for screening measurements, analytes were introduced
by direct thermal desorption from soil/sediment.
The surface monitor program mode was employed in this study.
Target compounds (maximum number twelve) were detected by
selected ion monitoring (SIM) MS. The (logarithm) ion current
was recorded and displayed visually on the system's monitor.
Found in Figure 1 are typical PCB and pesticide outputs. Three
fragment ions representative of each compound(s) and an
impossible ion (see below for rationale) were selected for
detection. For example, in cell A the target ions and their relative
intensities for the three monochlorinated PCBs were 188 (100%),
190 (33.5%), 152 (31.1%), and 189 (0%). Similarly, cells B-H
in Figure 1a illustrate the SIM four-ion current responses for
chlorination levels 2 - 8, respectively; cell I, d10-pyrene (internal
standard); cells J - K, PAH surrogates; and cell L, hydrocarbon
signals indicative of matrix complexity. Detection was made, and
printed on screen, when the signals from the four ions relative to
each other agreed to within preset criteria over a predetermined
retention time window. In this mode, SIM response may be
considered analogous to selective GC detection. Note above, that
the last fragment ion for the monochlorinated PCBs had a relative
intensity of 0%. Inclusion of an impossible ion served to provide
selective detection. For example, an increase in fragment 1 ion
current relative to fragments 2-4 within the target compound's
retention window precluded compound identification. Thus, the
mathematical algorithm assisted in screening out interferants
present in the sample.
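A minimal sketch of this acceptance logic follows (Python); the tolerance values, the baseline threshold for the impossible ion, and the data layout are illustrative assumptions, not the surface monitor's internal algorithm.

    # Sketch of the four-ion acceptance test: the three characteristic ions
    # must match their expected relative intensities within a tolerance, and
    # the "impossible" ion (expected 0%) must stay near baseline.  The
    # tolerance and threshold values are illustrative assumptions.
    def accept_hit(measured, expected, rel_tol=0.30, zero_ion_max=0.05):
        """measured: raw intensities of the four monitored ions, base peak first.
        expected: relative intensities (base peak = 1.0, impossible ion = 0.0)."""
        base = measured[0]
        if base <= 0:
            return False
        for m, e in zip(measured[1:], expected[1:]):
            rel = m / base
            if e == 0.0:
                if rel > zero_ion_max:        # impossible ion responded: interferant
                    return False
            elif abs(rel - e) > rel_tol * e:  # ratio outside the preset criteria
                return False
        return True

    # Monochlorobiphenyl ions from the text: m/z 188 (100%), 190 (33.5%),
    # 152 (31.1%), 189 (0%).
    expected = [1.0, 0.335, 0.311, 0.0]
    print(accept_hit([5000, 1700, 1500, 30], expected))   # accepted
    print(accept_hit([5000, 1700, 1500, 900], expected))  # rejected (impossible ion up)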
320
-------
[Figure 1a: surface monitor SIM display for the PCB standard, flagging the chlorinated biphenyl levels, the deuterated internal standard/surrogate compounds, and hydrocarbon matrix signals.]
[Figure 1b: surface monitor SIM display for the chlorinated pesticide standard (BHCs, heptachlor, heptachlor epoxide, aldrin, dieldrin, 4,4'-DDE, 4,4'-DDD, 4,4'-DDT, with d10-phenanthrene internal standard).]
The fast GC linear temperature programs and MS detection
provided sufficient separation to identify compound(s) as shown in
Table 1. Figure 2 is a typical instrument printout for the amount
(4-ion total current count, in log values, left vertical axis) vs. time
response curves (horizontal axis) for four of the chlorinated
pesticides shown in Figure Ib. In addition to the compound and
amount detected, other information visible on the display included
"real-time" monitoring of: logarithm of ion current, left vertical
axis; MS vacuum pressure, right vertical axis; and column
temperature, above right vertical axis.
[Figure 2: amount-versus-time display panels for four of the chlorinated pesticides from the run shown in Figure 1b.]
Figure 1. Typical Field TDGC-MS SIM response of a standard
solution containing PCBs (1a) and chlorinated pesticides (1b).
Figure 2. Amount versus time curves for several chlorinated
pesticides shown in Figure 1.
321
-------
TDGC-MS experiments were performed between the concentration
range of 40 and 4000 ng/compound. Repetitive measurements at
each concentration yielded differences in the log value of ± 0.13
producing ion current differences of less than 30%. Table 1 lists
typical response factors (RF) and percent relative standard
deviations (%RSD) calculated for PCBs, PAHs, and pesticides
thermally desorbed from an organic extract. Plots of signal versus
concentration were linear (r= 0.999) with the %RSD for the
average RF less than 30%, meeting initial and continuing
calibration criteria in the Contract Laboratory Program. Table 2
lists representative RF and RSDs for PCBs and PAHs thermally
desorbed directly from soil. Despite somewhat larger percent
RSDs for some PAHs, measurement precision at this level will
only be critical at site cleanup "action" levels. It should be
pointed out that thermal desorption extraction efficiencies differ
greatly for some PAHs (see Table 3 for minimum detectable
quantity. Note: RF in Table 2 calculated over linear range as
shown in Table 3). Minimum detection levels for most
compounds were ~ 1 ppm for soil/solvent extraction and slightly
higher for direct soil thermal desorption. Because TDGC-MS
experiments can be performed in 5 to 20 min depending on the
method employed (with known data quality), many more analyses
can be performed than currently practiced for site characterization,
stockpiling, and worker/community protection activities. The
frequency for performing continuing calibration checks may be
determined (on-site) by following surrogate compound RF values
(see below).
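As a small illustration of this calibration check, the sketch below (Python; the replicate RF values are illustrative placeholders, and the 30% limit is the criterion quoted above) computes the average response factor and its percent relative standard deviation and flags an out-of-criterion result.

    # Sketch of the on-site calibration check described above: average the
    # replicate response factors, compute the %RSD, and compare against the
    # 30% criterion.  The RF values below are illustrative, not measured data.
    from statistics import mean, stdev

    def rf_check(response_factors, max_rsd_pct=30.0):
        avg = mean(response_factors)
        rsd_pct = 100.0 * stdev(response_factors) / avg
        return avg, rsd_pct, rsd_pct <= max_rsd_pct

    avg, rsd_pct, ok = rf_check([0.16, 0.14, 0.18, 0.15, 0.17])
    print(f"average RF = {avg:.3f}, %RSD = {rsd_pct:.1f}, within criterion: {ok}")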
Research has shown that compound recoveries vary with soil-type.
For example, PCB/hexane (0.5 g/2 ml hexane, 2 min) extraction
recoveries were 69 ± 5% for 50 ppm backyard (organic) soil,
80 ± 2% for 25 ppm sandy material from the Resolve Superfund
site in North Dartmouth, MA, and 73 ± 5% for a 35 ppm ERA
soil. Therefore, appropriate surrogate compound(s) and/or
target standards must be added to samples as the soil-type varies.
Such experiments can be used to determine instrument
performance as well.
Tables 4-7 illustrate typical examples of data quality one can
expect from the field TDGC-MS methods. Split samples were
collected by EPA's Region 1 oversight contractor and analyzed in
the field (Tufts) and lab (Lockheed ESC, Las Vegas, NV). Table
4 compares field and lab GC-MS measurements for total PCB
present in several samples obtained from the Resolve site while
Table 5 delineates chlorination level comparisons for two of the
samples. The field and lab results are in excellent agreement.
Shown in Tables 6 and 7 are field and lab comparisons for four
PAH samples from the Hocomonco Pond (Creosote contaminated
Superfund) site. Note that the samples in Table 6 and the sample
labeled HP-SB5 in Table 7 were analyzed by SIM using the
system's internal monitor as described above. In contrast, the
sample labeled pond (Table 7) was analyzed by acquiring the total
ion current chromatogram and quantifying by selected ion
extraction. The advantage of this detection method was that full
mass spectral fragmentation data and compound library matching
could be applied. On the other hand, the disadvantage was that ion
current from matrix components may add to the SIM signal,
resulting in higher concentrations than may actually be present.
This, however, is no different from what can occur using
traditional CLP MS methods. Field and lab
comparisons for PAH samples also appear to be in good
agreement.
Additional data will be presented describing further application of
the field TDGC-MS methods. Illustrations will be given
documenting cost effectiveness. Results will show that GC-MSs
can be operated in the field, provide rapid access to data, and
allow project managers to make decisions on-site.
ACKNOWLEDGEMENTS
Partial financial support for this project was provided by the U.S.
Environmental Protection Agency, EMSL-LV; New Jersey
Institute of Technology's Northeast Hazardous Substance Research
Center; and Tufts University's Center for Environmental
Management. The authors wish to thank EPA's Region 1
Hazardous Waste Division for providing access to Superfund sites
and samples, and the oversight contractors for their cooperation.
REFERENCES
1) Williams, L.R., Editorial Article, American Environmental
Laboratory, October, 1990 (see additional articles by EPA).
2) U.S. Environmental Protection Agency, Sixth Annual Waste
Testing and Quality Assurance Symposium, July 16-20, 1990,
Washington, DC; Field Analytical Methods Workgroup sponsored
by Analytical Operations Branch. See Proceedings, Robbat, A.,
Xyrafas, G., Abraham, A., "A Fast Field Method for The
Identification of Organics in Soil", 1-350.
3) "Method Evaluation for Field Analysis of PCBs and VOCs
Using a Field Deployable GC-MS", Xyrafas, G., Ph.D. Thesis,
Tufts University, Chemistry Dept., Medford, MA. O2155.
3) "A fieldable GC-MS for the Detection and Quantitation of
Hazardous Compounds: Analytical Chemistry in the Field?"
Robbat, A., Jr., Xyrafas, G., 198th American Chemical Society
National Meeting, 411, 29(2), 1989, Miami Beach, Florida.
4) "Evaluation of a Field-Based, Mobile, Gas Chromatograph-
Mass Spectrometer for the Identification and Quantification of
Volatile Organic Compounds on EPA's Hazardous Substance
List", Robbat, A., Jr., Xyrafas, G., In Proceedings of the First
International Symposium on Field Screening Methods for
Hazardous Waste Site Investigations, October 11-13, 1988, Las
Vegas, Nevada, Pg. 343.
5) "On-Site Soil Gas Analysis of Gasoline Components Using a
Field-Designed Gas Chromatograph-Mass Spectrometer", Robbat,
A., Jr., Xyrafas, G., In Proceedings of the First International
Symposium on Field Screening Methods for Hazardous Waste Site
Investigations, October 11-13, 1988, Las Vegas, Nevada, Pg. 481.
322
-------
Table 1. Thermal Desorption Field GC-MS Response Factors and
Percent Relative Standard Deviations - from Extract (Quantitative
Method)

Polychlorinated Biphenyls
Chlorination Level            Ave RF (n=5)    %RSD
Cl-1                          0.47            20
Cl-2                          0.26            17
Cl-3                          0.27            17
Cl-4                          0.16            15
Cl-5                          0.15            12
Cl-6                          0.10            15
Cl-7                          0.06            17
Cl-8                          0.03            10

Polycyclic Aromatic Hydrocarbons
Compound                      Ave RF (n=5)    %RSD
naphthalene                   1.37            12.3
acenaphthylene                8.63            12.3
acenaphthene                  0.82            24.8
fluorene                      0.58            12.3
phenanthrene/anthracene       4.59            12.3
fluoranthene/pyrene           9.52            9.5
chrysene/benz(a)anthracene    0.90            13.3

Chlorinated Pesticides
Compound                      Ave RF (n=5)    %RSD
BHCs                          0.10            9.6
Heptachlor                    0.02            23.5
Aldrin                        0.07            16.3
Heptachlor epoxide            0.03            10.9
Dieldrin                      0.02            25.4
4,4'-DDE                      0.32            16.3
4,4'-DDD                      0.16            18.6
4,4'-DDT                      0.12            19.7

Table 2. Thermal Desorption Field GC-MS Response Factors and
Percent Relative Standard Deviations - Direct from Soil

Polychlorinated Biphenyls
Chlorination Level            Ave RF (n=5)    %RSD
Cl-1                          13.44           19
Cl-2                          3.75            25
Cl-3                          3.91            23
Cl-4                          2.55            16
Cl-5                          2.02            16
Cl-6                          1.61            16
Cl-7                          1.04            23
Cl-8                          0.36            19

Polycyclic Aromatic Hydrocarbons
Compound                      Ave RF (n=5)    %RSD
naphthalene                   2.39            75
acenaphthylene                1.21            356
acenaphthene                  0.33            52.1
fluorene                      0.16            165
phenanthrene/anthracene       0.25            213
fluoranthene/pyrene           0.06            310
chrysene/benz(a)anthracene    0.003           229
323
-------
Table 3. PAH Dynamic Range Directly Desorbed from (0.5 g)
Soil Matrix

Compound(s)        Concentration (ng)   Signal (n=5)         Linearity (r)
Naphthalene        4000                 510084 ± 22.9%       0.999
                   2000                 255648 ± 22.9%
                   1600                 210541 ± 31.2%
                   800                  110357 ± 12.9%
                   120                  28371 ± 13.2%
                   80                   5312 ± 23.4%
                   40                   1995(a) ± 22.4%
Acenaphthylene     4000                 255648 ± 22.9%       0.999
                   2000                 129245 ± 26.1%
                   1600                 94858 ± 10.8%
                   800                  51454 ± 26.1%
                   80                   3575 ± 13.2%
                   40                   794(a) ± 17.2%
Acenaphthene       4000                 94858 ± 10.8%        0.999
                   2000                 23397 ± 12.7%
                   1600                 20307 ± 22.9%
                   800                  9936 ± 34.5%
                   80                   740 ± 12.7%
                   40                   251(a) ± 16.8%
Fluorene           4000                 52714 ± 11.0%        0.999
                   2000                 24197 ± 32.8%
                   1600                 18585 ± 12.7%
                   800                  8971 ± 13.2%
                   120                  371 ± 12.8%
                   80                   794(a) ± 13.4%
                   40                   251(a) ± 15.4%
Phenanthrene &     8000                 42578 ± 35.4%        0.999
Anthracene         4000                 21245 ± 12.2%
                   3200                 16271 ± 26.1%
                   1600                 8084 ± 22.9%
                   240                  877 ± 24.3%
                   160                  3981(a) ± 18.6%
                   80                   316(a) ± 20.2%
Fluoranthene &     8000                 17498 ± 24.3%        0.999
Pyrene             4000                 8629 ± 13.8%
                   3200                 6854 ± 13.8%
                   1600                 3221 ± 40.1%
                   240                  371 ± 12.8%
                   160                  195(a) ± 15.3%

(a) These values were not included in the dynamic range.

Table 4. Comparison of Field and Lab GC-MS Results for Total
PCBs in Samples from the Resolve Superfund Site, North
Dartmouth, MA

                         Quantitative     Screening Level
                         TDGC-MS          TDGC-MS          Lab GC-MS
EPA ID#                  (ppm)            (ppm)            (ppm)
TUF-RS-SO-A26-2-4        368.3            309.4            298.6
TUF-RS-SO-A1-5-2         274.6            213.6            260.0
TUF-RS-SO-A42-6-8        23.1             7.2              15.9
TUF-RS-SO-A37-0-2        9.1              3.2              1.3
TUF-RS-SO-A14-0-2        7.6              1.6              5.0
TUF-RS-SO-A5A-2-4        1.7              1.7              0.4
TUF-RS-SO-NH24-2-4       1.7
TUF-RS-SO-A14-6-8        1.3              -                3.0
TUF-RS-SO-A7-4-6         ND               ND               ND

ND, compound not detected
Sample comparison on an as collected basis (i.e., soils were not dried)
Lab GC-MS performed by Lockheed ESC, Las Vegas, NV
Field GC-MS performed by Tufts University
Samples collected by EPA's Region 1 oversight contractor
-, Samples were not analyzed
Table 5. Comparison of Field and Lab GC-MS by Chlorination
Level (ppm), Resolve Superfund site, North Dartmouth, MA

                  TUF-RS-SO-A1-5-2            TUF-RS-SO-A42-6-8
Cl-level          Field          Lab          Field          Lab
                  TDGC-MS        GC-MS        TDGC-MS        GC-MS
Cl-1              12.5           ND           0.5            ND
Cl-2              7.6            10.8         1.5            1.0
Cl-3              60.3           56.5         4.5            4.1
Cl-4              121.4          122.8        5.1            5.3
Cl-5              59.5           53.6         6.3            4.3
Cl-6              20.9           15.9         3.0            1.2
Cl-7              1.7            0.4          0.3            ND
Cl-8              0.7            ND           1.9            ND
total PCB         274.6          260.0        23.1           15.9

ND, compound not detected
Sample comparison on an as collected basis (i.e., soils were not dried)
Lab GC-MS performed by Lockheed ESC, Las Vegas, NV
Field GC-MS (Quantitative Method) performed by Tufts University
Samples collected by EPA's Region 1 oversight contractor
324
-------
Table 6. Comparison of Field and Lab GC-MS Results for PAH's
From the Hocomonco Pond Superfund Site in Westborough, MA,
in ppm.

                               DSTB22 (0'-2')        DSTB22 (2'-4')
                               Lab     Field(1)      Lab     Field(1)
Naphthalene                    0.1     ND            2.2     ND
Acenaphthylene                 0.1     0.1           ND      0.7
Acenaphthene                   1.4     0.1           6.0     0.2
Fluorene                       2.9     1.5           16.3    3.0
Anthracene & Phenanthrene      8.3     40.3          81.8    72.7
Pyrene & Fluoranthene          11.8    10.6          112.2   60.5
Chrysene & Benz(a)anthracene   6.0     6.2           37.2    37.2
Benz(b)fluoranthene,           3.2     23.8          17.7    22.3
  Benz(k)fluoranthene, &
  Benz(a)pyrene

ND, compound not detected
Sample comparison on an as collected basis (i.e., soils were not dried)
Lab GC-MS performed by Lockheed ESC, Las Vegas, NV
Field GC-MS performed by Tufts University (Thermal Desorption of Methylene Chloride Extract)
(1) Data collected by Selected Ion Monitoring (Internal Data System)

Table 7. Comparison of Field and Lab GC-MS Results for PAH's
From the Hocomonco Pond Superfund Site in Westborough, MA,
in ppm.

                               POND                  HP-SB5
                               Lab     Field(2)      Lab     Field(1)
Naphthalene                    1.3     1.9           54.8    32.0
Acenaphthylene                 1.4     ND            ND      ND
Acenaphthene                   0.7     ND            ND      1.2
Fluorene                       2.5     ND            ND      0.8
Anthracene & Phenanthrene      16.7    10.4          ND      ND
Pyrene & Fluoranthene          30.7    43.6          ND      ND
Chrysene & Benz(a)anthracene   37.2    55.2          ND      ND

ND, compound not detected
Sample comparison on an as collected basis (i.e., soils were not dried)
Lab GC-MS performed by Lockheed ESC, Las Vegas, NV
Field GC-MS performed by Tufts University (Thermal Desorption of Methylene Chloride Extract)
(1) Data collected by Selected Ion Monitoring (Internal Data System)
(2) Data collected as Total Ion Current Chromatogram and quantified by Selected Ion Monitoring Extraction (External Data System)
DISCUSSION
ALAN CROCKETT: I found your presentation and the results extremely
informative and the accuracy or the precision you were getting was fantastic. Did
you say that you were using two tenths of a gram sample or a two-milligram
sample?
AL ROBBAT: A half a gram.
ALAN CROCKETT: That's impressive just being able to sub-sample a jar of
soil as repetitively as you've been able to. What's your preparation procedure for
homogenization of soil that comes into your facility?
AL ROBBAT: These samples were all homogenized by EPA Region I. We didn't
do anything more after we got them, except stir them up a little bit.
ALAN CROCKETT: How did they homogenize them to get them so
homogeneous?
AL ROBBAT: Basically, they screened them, collected them in a large jar, and
simply rotated it. We did not do any of the real homogenization of the sample.
ALAN CROCKETT: What's the cost of the instrumentation by the way?
AL ROBBAT: I think it's about $180,000, but your best bet is to ask Bruker
Instruments.
JON GABRY: What are your power requirements for the unit?
AL ROBBAT: We use six 24-volt batteries out at the site. We also can power up
at the site if there's an electrical supply. So again, if you're interested in those
types of details, I would suggest you visit the Bruker Instruments booth.
325
-------
RAPID DETERMINATION OF SEMIVOLATILE POLLUTANTS BY
THERMAL EXTRACTION/GAS CHROMATOGRAPHY/MASS
SPECTROMETRY
T. Junk, V. Shirley, C. B. Henry, T. R. Irvin, E. B. Overton
LSU Institute for Environmental Studies
42 Atkinson Hall, Baton Rouge, LA 70803
J. E. Zumberge, C. Sutton, R. D. Worden
Ruska Laboratories, Inc., 3601 Dunvale, Houston, TX 77063
Abstract
There is considerable interest in rapid,
field deployable analytical systems.
Conventional gas chromatography/mass
spectrometry analytical techniques
provide sensitivity and specificity but
require cumbersome solvent
extractions. Thermal extraction offers a
fast and safe alternative to classical
extraction procedures for a wide range
of semivolatile pollutants. In this
technique samples are loaded into
porous quartz crucibles with no
preparation other than weighing
required prior to analysis. Analytes are
volatilized into the helium carrier gas
flow at controlled, preprogrammable
temperature profiles and subsequently
cryocondensed onto a conventional gas
chromatographic column. The method
was demonstrated by analyzing for a
representative group of organic
pollutants covering a wide range of
polarity/volatility contained in natural
soil matrices at concentrations as low as
0.5 ppm using a Pyran Thermal
Chromatograph. Analyses were
independently performed by three
different laboratories (Institute for
Environmental Studies, Louisiana State
University; Engineering Toxicology,
Texas A & M University; Ruska
Laboratories, Inc.) using an on-line
Finnigan Ion Trap Detector for
identification and quantification.
Average correlation coefficients for
calibration curves ranged from 0.938 to
0.997 for compounds less volatile than
naphthalene. Naphthalene and more
volatile compounds experienced
variable losses during open-air sample
loading. Dialkylphthalates underwent
partial decomposition during the
thermal extraction process. Recoveries
varied depending on soil types as well
as on the physical and chemical nature
of analytes, with generally the highest
thermal extraction yields for river silt
and the lowest yields for clay. Typical
recoveries were 10 to 30% for
polynuclear aromatic hydrocarbons, 60
to 70% for hexachlorobenzene, and
nearly 100% for chloronaphthalenes.
However, the pesticide aldrin showed
recoveries of at most 19%. A majority
of the analytical results are within an
accepted range for quantitative analysis.
The Pyran system can be adapted to be
327
-------
deployable. With sample turn-around
times of typically 30-60 minutes this
instrument should greatly facilitate
remediation and hazardous waste
cleanup efforts.
Introduction
Transportable, field deployable
analytical systems that provide
unambiguous data on the amount of
semivolatile organic pollutants can aid
in the rapid assessment and cleanup of
hazardous waste sites. By
complementing the Environmental
Protection Agency's Contract Laboratory
Program through interactive field
management, the efficient remediation
of hazardous waste sites can be
accomplished (R. J. Bath, personal
communication).
Mass spectrometry provides the
specificity and sensitivity necessary for
the identification and quantification of
most environmental pollutants.
However, to introduce analytes into the
mass spectrometer, the pollutants must
first be extracted from the soils.
Normally, organic solvents are used for
this purpose, a cumbersome and labor
intensive approach. Thermal extraction,
in contrast, desorbs analytes from their
matrices (soils) by controlled heating
under conditions which avoid analyte
decomposition (as opposed to pyrolysis).
In this report, we describe results from
a study aimed at verifying the
suitability of thermal extraction as
alternative to conventional extraction
for a representative cross section of
semivolatile organic pollutants. We
establish the factors controlling analyte
recoveries from different types of
matrices. Three laboratories
participated in this study, using
identical instrumentation (Institute for
Environmental Studies, Louisiana State
University; Texas A & M University,
College Station; and Ruska Laboratories,
Inc., Houston).
Instrumentation
A Level 2 Thermal Chromatograph
(Ruska Laboratories, Houston, Texas)
was interfaced with a Finnigan Ion Trap
Detector. Samples were heated in a
quartz chamber using a linear
temperature program and semivolatile
analytes purged with helium gas. These
analytes were cryocondensed onto a
fused silica chromatographic column
(Hewlett Packard HP-5, 12 m x 0.2 mm)
cooled with liquid carbon dioxide,
separated, and identified by mass
spectrometry. Thermal extraction
efficiencies for specific toxicants were
also monitored by thermal extraction
under identical conditions in an
identical quartz chamber coupled to a
flame ionization detector (Level 1
Thermal Extractor). Schematic diagrams
of these instruments are shown in Fig. 1.
Other experimental parameters were
chosen as follows: 30 ml/min He carrier
flow during thermal extraction phase,
30:1 split ratio between thermal
extraction chamber and GC column, 1
ml/min carrier flow through GC column.
Standards Preparation
Test soils were prepared by adding
stock solutions of 20 semivolatile
organic pollutants covering a wide range
of polarity/volatility to three different
organic-lean natural soil matrices:
kaolin clay, sandy river silt, and
subsurface terrestrial soil from
Livingston Parish, Louisiana containing
30% clay, 66% silt, and 4% sand with a
total organic content of 0.11%. Stock
solutions of the 20 standards (see Table
1) were prepared by weighing pure
328
-------
compound standards (primarily from
Aldrich Chemical Co.) and diluting 4000
µg/ml stock standard (PP-HC8, Chem
Service, Inc.; lot #25-121B) with
dichloromethane to 20 ng/µl per
component.
The three soils were crushed using a
mortar and pestle and sieved through a
850 µm sieve. The sieved soils were
slurried for 1 hour with the appropriate
amount of stock standards (pure
dichloromethane for controls), the
solvent then removed at room
temperature by evaporation under a
fume hood to produce two sets of test
soils with concentrations of 50 ppm and
0.5 ppm, respectively, per analyte. The
soil standards were then sent to the
three participating laboratories for
independent analyses in well-filled
Teflon-lined screw cap vials and stored
at 6 °C to avoid analyte losses.
Methods
Soil samples were weighed into the
porous fused silica crucibles, while
standard stock solutions (20 ng/µl) were
injected onto the porous fused silica lids
of the sample crucibles using a 10 µl
syringe just prior to loading into the
thermal extraction chamber. All
samples were heated from 30 °C to 260 °C
at 30 °C/min and held isothermally at
260 °C for 10 min before cooling to 30 °C.
The "trap" and "splitter" regions (see Fig.
1) were held isothermally at 300 °C and
310 °C, respectively; interface and
transfer line temperatures to the MS
were held between 280 °C and 290 °C. The
column was held at 5 °C until the thermal
extraction process was complete, then
temperature programmed to 285 °C at
10 °C/min and kept isothermal for 5 min.
Total cycle time was 59 min. The ion
trap detector was scanned from 47 to
440 amu at 1 scan/sec, peak threshold
was set at 2, and a mass defect of 100
mmu/100 amu was used. Full scan
mass spectra of the eluting compound
standards were verified using the NBS
mass spectral library. Areas and
retention times of characteristic ion
masses were recorded after each run for
each of the 20 compounds and internal
standards. Calibration curves for each
of the 20 compounds in the stock
solution (20 ng/µl) were obtained by
injecting 2, 5, 10, 15 and 20 µl onto the
crucible lids (corresponding to 40, 100,
200, 300, and 400 ng/component,
respectively). Ten µl (200
ng/component) of the deuterated
internal standards (Table 2) were also
added to the lid prior to each of the
above five runs. This experiment was
done in triplicate at Ruska Laboratories,
using a Finnigan Ion Trap Detector for
two runs as described above and a
Hewlett Packard Mass Selective Detector
(MSD) once for comparison. Just prior to
each run of the standard soils (10.0 to
13.8 mg for the 50 ppm standards and
approx. 100 mg for the 0.5 ppm
standards), 10 µl (200 ng/component)
of the deuterated internal standards
were injected into the soil/sediment.
Response factors (RF) and percent
relative standard deviations (%RSD)
were calculated for each compound
based on EPA's "Test Methods for
Evaluating Solid Waste, Physical,
Chemical Methods", SW-846, Third
Edition, Method 8270 (GC-MS for
semivolatile organics, capillary column
technique). RF values are based upon
the results of the on-lid injections of the
stock solutions.
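As a sketch of the Method 8270-style relative response factor calculation referred to above (RF = (Ax x Cis)/(Ais x Cx), with A the characteristic-ion peak area and C the amount), the following Python fragment computes RFs for a five-point on-lid series; the peak areas are illustrative placeholders, while the injected amounts mirror the 40-400 ng levels and 200 ng internal standard described in the text.

    # Relative response factor per the SW-846 8270 convention:
    #   RF = (A_analyte * C_internal_standard) / (A_internal_standard * C_analyte)
    # The areas below are illustrative placeholders; the amounts follow the text.
    def response_factor(area_analyte, area_istd, ng_analyte, ng_istd=200.0):
        return (area_analyte * ng_istd) / (area_istd * ng_analyte)

    # (analyte area, internal standard area, ng injected) for one compound
    points = [(2100, 9800, 40), (5300, 10100, 100), (10400, 9900, 200),
              (15800, 10300, 300), (20600, 9700, 400)]
    rfs = [response_factor(a, ai, ng) for a, ai, ng in points]
    print([round(rf, 2) for rf in rfs])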
Soil/sediment samples were also
analyzed using the Level 1- FID
instrument (see Fig. 1) to further
329
-------
elucidate the thermal extraction process
in an independent study at Louisiana
State University. This set of
experiments seeks to identify factors
influencing analyte recoveries by
systematically varying operator-
controllable variables including gas flow
rates, additives to facilitate extraction,
extraction temperature and duration; as
well as to define limiting factors for
target analytes and matrices. Three
analyte solutions were prepared: n-
triacontane ("C-30"), pyrene, and
hexachlorobenzene ("HCB"). These
compounds were chosen for their
thermal stabilities and chemical
inertness. Two are structurally similar,
all three are neutral and devoid of
reactive functionalities. Ten µl of stock
solutions in dichloromethane (10 mg/ml
for pyrene, HCB; 2 mg/ml for C-30)
were spiked onto the soils immediately
prior to analysis. The resulting FID
signals were integrated to calculate
analyte recoveries (Table 3), with the
FID signal of the pure analytes (no
matrix) as reference.
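A one-line version of that recovery calculation is sketched below (Python; the integrated areas are illustrative values).

    # Recovery from the Level 1 FID data: integrated area from the spiked soil
    # relative to the area for the same amount of analyte run with no matrix.
    # The area values are illustrative.
    def percent_recovery(area_with_matrix, area_no_matrix):
        return 100.0 * area_with_matrix / area_no_matrix

    print(round(percent_recovery(3.1e5, 1.0e6)))   # approx. 31%, e.g. pyrene from clay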
Conclusions
Level 1 Thermal Extraction/FID
Thermal extraction efficiencies vary
considerably with the nature of analytes
as well as matrices (Table 3). While
conventional solvent extraction
procedures would be expected to
produce similarly high recoveries for n-
triacontane, pyrene, and
hexachlorobenzene, thermal extraction
produced markedly different results for
clay as matrix (Fig. 2a). HCB recoveries
were quantitative, while C-30 and
pyrene recoveries were approximately
30%. Variation of the matrix had a less
pronounced effect on the recovery of
pyrene. These results cannot be
explained solely in terms of polarity or
volatility. Not surprisingly, percent
deviations of recovery decrease
dramatically in the presence of a soil
matrix (Fig. 2b). The increase of helium
flow during the thermal extraction
process from 40 to 100 ml/min did not
increase the extraction yields of C-30
significantly (see Table 3); however,
addition of polar additives to the soil
samples immediately before thermal
extraction, such as water or phosphoric
acid, improved the recovery of pyrene
from clay markedly (Fig. 2c). Figure 2d
illustrates blockage of reactive sites of
the soil matrices by repeated spiking of
the same river silt sample. Thermal
extraction efficiencies increased from 25
to 65%. Simple physical obstruction of
the carrier gas flow is certainly one of
the factors contributing to reduced
recoveries. The soil samples "cake" and
block the desorption of analytes into the
carrier gas flow. Thus, recoveries sank
to 69% for pyrene and to 82% for C-30
when standards were spiked onto the
lids of crucibles filled with 100 mg clay
without direct contact between analyte
and matrix (Table 2). Repeated thermal
extraction, increasing extraction
temperatures above 450 °C, or
extending extraction times were not
promising, as illustrated by Fig. 4a-c.
These figures compare the thermal
desorption of identical amounts of
pyrene (100 ng) from a porous quartz
crucible (Fig. 4a) and spiked into a
kaolinite clay sample (Fig. 4b)
using the temperature profile shown in
Fig. 4c under otherwise identical
conditions. Not only is the thermal
desorption of the standard from the
spiked clay considerably below 100%,
but it is also shifted to higher
temperatures. At 450 °C, no further
analyte was released upon prolonged
heating. The fate of the unextracted
330
-------
analytes is currently unknown and
subject to future investigations.
Level 2 Thermal Extraction/GC/MS
The results of analyses from all three
laboratories are summarized in Table 1.
The 20 organic compounds and
corresponding characteristic ion masses
are listed along with linear correlation
coefficients (r) derived from the five
point calibration curves of the on-lid
stock solution injections. Fig. 3 shows
examples of four calibration curves
from one laboratory; the more volatile
components (e.g. naphthalene)
experience variable rates of evaporation
after injection of the standard stock
solution onto the porous quartz crucible
lids prior to sample insertion into the
pyrocell (approx. 2 min from injection
onto the lid until sample loading).
Dioctyl phthalate signals were relatively
low except at high concentration levels
(300-400 ng); it appears that
much of this compound degraded to
phthalic anhydride (which was always
detected) during the on-lid calibration
runs. Diethyl phthalate, in comparison,
showed good linearity and less
degradation. Pentachlorophenol
linearity was not as good as that of
other compounds in the same volatility
range. All other compounds showed
good linearity.
Also listed in Table 1 are the percent
relative standard deviations (%RSD) of
calculated response factors based on the
on-lid injections of the 20 ng/µl mix of 20
compounds plus the deuterated internal
standards listed in Table 2. Since %RSD
values are also a measure of the
precision for each compound, it is not
surprising that most volatile compounds
also show the highest deviations.
Although there is some variation
between the participating laboratories,
specific compounds tend to yield high
%RSD values while others showed
consistently good precision. The same
holds for deuterated standards. Again,
the more volatile naphthalene-d8 and
dichlorobenzene-d4 showed the most
variation, phenanthrene-d10 and
chrysene-d12 the least.
From the obtained data set, recoveries
could be calculated either by the
external standard method using the
least square fits of the five point
calibration curves for all compounds or,
alternatively, by internal standard
quantitation based on the response
factors calculated for each compound.
Table 1 lists results for both methods,
which do not reflect the expected
improved accuracy for the internal
standard method. Due to the
considerably different chemical and
physical environments the standards
experience while being partially
adsorbed by the soil samples and
partially by the porous crucibles, no
high degree of accuracy can be expected
by the internal standard method. The
implicit assumption made in
conventional chromatography, namely
that standards and analytes are
subjected to identical environments,
cannot easily be realized in thermal
extraction.
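To make the contrast concrete, the sketch below (Python; all calibration areas and the sample measurement are invented placeholders) quantifies one compound both ways: by interpolating on a least-squares fit of a five-point external calibration and by dividing by an average response factor.

    # Sketch contrasting the two quantitation routes discussed above:
    # (1) external standard - interpolate on a least-squares line fitted to
    #     the five-point on-lid calibration curve;
    # (2) internal standard - divide by the average response factor.
    # Calibration areas and the sample measurement are illustrative values.
    def fit_line(xs, ys):
        """Ordinary least-squares slope and intercept (area vs. ng injected)."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return slope, my - slope * mx

    cal_ng = [40, 100, 200, 300, 400]
    cal_area = [2100, 5300, 10400, 15800, 20600]
    slope, intercept = fit_line(cal_ng, cal_area)

    sample_area, istd_area, istd_ng, avg_rf = 7200, 9900, 200.0, 1.05
    ng_external = (sample_area - intercept) / slope
    ng_internal = (sample_area / istd_area) * istd_ng / avg_rf
    print(round(ng_external, 1), round(ng_internal, 1))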
Percent recovery appears to be
dependent on a number of factors
including polarity, molecular weight,
and interactions with constituents of the
soil matrix, both organic and inorganic.
Not surprisingly, recovery was
significantly greater for many
compounds from the river silt than from
the clay or subsurface soils (e.g.,
phenanthrene: 11% from clay and 31%
from silt) while chloronaphthalene was
close to 100% for both clay and silt. The
331
-------
recovery of diphenylamine was equally
low (approx. 5%) for clay and silt. Since
the subsurface soil contains about 30%
clay, percent recovery is generally in
between those for clay and silt. It is
interesting to compare recoveries for
the structurally similar tricyclic
compounds dibenzothiophene, fluorene,
and carbazole. In all three soil types
the order of recovery efficiency was
dibenzothiophene>fluorene>carbazole,
which likely reflects increasing binding
to the soil matrix. At the 0.5 ppm
concentration levels, naphthalene,
chloronaphthalene, fluorene,
hexachlorobenzene, dibenzothiophene,
phenanthrene, aldrin, and pyrene were
all detected in the soil standards in at
least two of the three laboratories.
It is apparent from these results that
small aliquots of soils can be analyzed
by thermal extraction/GC/MS without
any prior sample preparation. While
the method is not generally suited for
situations requiring high precision or
low detection limits, it performs well at
analyte concentrations >50 ppm, is
amenable to full automation, and will
serve for rapid screening of soils
contaminated with thermally stable
organic semivolatiles, a class of
compounds that includes PNAs, PCBs,
most petroleum products and pesticides,
and is commonly encountered in
hazardous waste cleanup efforts.
References
Bath, R.F., personal communication
(1989).
Environmental Protection Agency, "Test
Methods for Evaluating Solid Waste,
Physical, Chemical Methods", SW-846,
Third Edition, Method 8270 (GC/MS for
semi-volatile organics: capillary column
technique).
Henry, C.B., Overton, E.B., and Sutton, C.,
"Applications of the Pyran Thermal
Extraction-GC/MS for the Rapid
Characterization and Monitoring of
Hazardous Waste Sites"; Proceedings of
the First International Symposium for
Hazardous Waste Site Investigations,
399-405 (1988).
Overton, E.B., Henry, C.B., and Martin,
S.J., "A Field Deployable Instrument for
the Analysis of Semi-volatile
Compounds in Hazardous Waste";
Pittsburgh Conference and Exposition on
Analytical Chemistry and Applied
Spectroscopy, New Orleans, LA.,
Abstract (1988).
Zumberge, J.E., Sutton, C., Martin, S.J.,
and Worden, R.D., "Determining Oil
General Kinetic Parameters by Using a
Fused Quartz Pyrolysis System"; Energy
and Fuels, 2, 264-266 (1988).
Junk, T., Irvin, T.R., Donnelly, K.C., and
Marek, D., "Quantification of Pesticides
on Soils by Thermal Extraction-GC/MS",
in preparation.
Acknowledgements
We thank Drs. R.J. Bath and D. Flory for
helpful comments and suggestions.
332
-------
[Figure 1: schematic diagrams (10 cm scale) of the Level 1 FID analyzer and the Level 2 system, showing the sample crucible, pyrocell, air inlet, liquid CO2 cooling, and column exit to the MS.]
-------
[Figure 2a: percent recovery versus matrix at 40 ml/min carrier flow.]