United States
Environmental Protection
Agency
Office Of Water
(4303)
EPA 821-R-95-008
January 1995
Proceedings Of The
Seventeenth Annual EPA
Conference On Analysis Of
Pollutants In The Environment

May 3-5, 1994

-------
         Proceedings Of The
     Seventeenth Annual EPA
  Conference On Analysis Of
Pollutants In The Environment
              May 3-5, 1994

-------
                                              INTRODUCTION
       I am pleased to present the proceedings of the 17th Annual  EPA Conference on
Analysis  of Pollutants in the Environment.  This year's conference, which  was held in
Norfolk, VA on May 3-5, 1994, was jointly sponsored by the EPA Office of Water and the
Water Environment Federation. The result was an enormously successful conference that
provided more than 400 participants with an opportunity to meet and discuss contemporary
issues concerning the measurement of environmental pollutants.

       Once again, conference  participants  represented  a  broad  spectrum of the
environmental  community,  including Federal,  State, and  local  regulatory authorities,
environmental  laboratories, regulated  industries, wastewater treatment facilities, and
environmental groups. This diversity has always been and continues to be an integral aspect
of the conference in that it fosters small-group discussions among individuals representing
different analytical or regulatory perspectives.

       The proceedings contained in this document reflect the 31 presentations given at the
Conference. These presentations, which were chosen  because they reflect current areas of
concern in the water pollution control arena, included discussions on:  efforts to replace the
use of Freon 113 in methods for the determination of oil and grease and total petroleum
hydrocarbons  in environmental  samples;  the development of  alternate techniques for
measurement  of these  parameters;  techniques and  current efforts  to  make reliable
measurements of trace metals at ambient water quality criteria levels;  research concerning
the effects of  interferences  on cyanide measurements; new  procedures for measuring
biochemical oxygen demand;  new techniques for  determination  of volatile organic
compounds,  polynuclear aromatic compounds,  and total  organic halides;  statistical
procedures used to derive numerical effluent limitations;  the results of recent studies to
evaluate the performance of several methods for measuring organic  pollutants; and current
issues and developments concerning the implementation of performance-based  methods and
analytical  detection limits.

       I would like  to take this opportunity to thank Ms.  Jan  Kourmadas of Ogden
Environmental, Ms. Cindy Simbanin of DynCorp  Viar, Mr. Dale Rushneck of Interface, and
the staff at the Water Environment Federation for their assistance in making this conference
such a resounding  success.  I would also like to  extend my thanks to all who participated
in this conference; I look forward to seeing you  at the 18th Annual  Conference next May!
                                                               William A. Telliard

-------
                                                            CONTENTS
MAY 3, 1994 PRESENTATIONS AND SPEAKERS                                 PAGE
Opening Remarks                                                                   1
       William A. Telliard, Engineering and Analysis Division
       Office of Science and Technology, USEPA Office of Water

Introduction and Welcome                                                          3
       Robert K. Wyeth
       Recra Environmental, Inc.
       Laboratory Practices Committee
       The Water Environment Federation

Freon Replacement Study Phase II                                                   7
       William A. Telliard, Engineering and Analysis Division
       Office of Science and Technology, USEPA Office of Water

Nothing In Life Is Freon                                                           35
       Harold  Rhodes, RLT Consultants
       Authors: Harold Rhodes; Alexis Steen, Roger Claff, American
       Petroleum Institute; Ronald Benjamin, Southern Petroleum
       Laboratories

Impact Of Detergents On The Determination Of Oil And
Grease By Gravimetric And Infra-Red Analysis                                      59
       David  L. Clampitt, Uniform & Textile Service Association
       Authors: David L. Clampitt; Robert B. Schaffer, Coyne Textile Services;
       David F. Tompkins, ETS Analytical Services, Inc.

Solid Phase Extraction  Disks-A Solution For The Freon Problem                     91
       Craig Markell, 3M Corporation
       Authors: Craig Markell, Eric Wisted, Donald F. Hagen, 3M Corporation

An Update On The Status Of Oil and Grease Measurements
By Solid Phase Extraction                                                         125
       R. E. Hawley, Varian Sample Preparation Products

Nondispersive Infrared Analysis Of Oil And Grease And
Total Petroleum  Hydrocarbons In Wastewater                                     133
       Jim Vance, Horiba Instruments,  Inc.

-------
MAY 3, 1994 PRESENTATIONS AND SPEAKERS                                 PAGE
Current Advances In Oil And Grease Using NDIR                                157
      Gerald DeMenna, Chem-Chek Corporation
      Authors: Gerald DeMenna
MAY 4, 1994 PRESENTATIONS AND SPEAKERS
Regulatory Background For Determination Of Metals
At Ambient Water Quality Criteria Levels                                        173
      James Hanlon, Office of Science and Technology
      USEPA Office of Water

Trace Metal Clean Techniques: Problem, Quality
Assessments,  Comparisons                                                      197
      Carlton Hunt, Battelle Ocean Sciences

U.S. Geological Survey Protocol For Measuring Low
Levels Of Inorganic Constituents Including Trace
Elements in Waste Samples                                                     239
      Timothy Miller, U.S. Geological Survey

The Preparation Of NRC Certified Reference Materials                            281
      S.S. Berman, Institute for Environmental Chemistry
      National Research Council of Canada

Enzyme Immunoassay To Determine Heavy Metals Using
Antibodies To Specific Metal-EDTA Complexes                                   293
      Diane A. Blake, Ph.D., Tulane University School of Medicine
      Authors: Pampa Chakrabarti, Ph.D., Frank M. Hatcher, Ph.D.,
      Robert C. Blake II, Ph.D., Patricia A. Ladd, Meharry Medical College;
      Diane A. Blake, Ph.D., Tulane University School of Medicine

Determination Of Total Mercury For The Water
Quality-Based Approach                                                        317
      Billy B. Potter, USEPA Environmental  Monitoring Systems Laboratory
      Authors: B.B. Potter, USEPA; Winslow J. Bashe, Miguel D. Castellanos,
      Stephen E. Long, Jane A. Doster, Technology Applications, Inc.
                                         VI

-------
MAY 4, 1994 PRESENTATIONS AND SPEAKERS                                 PAGE
Determination Of Metalloid Concentrations And
Speciation In Natural Waters                                                     333
       Gregory Cutter, Department of Oceanography
       Old Dominion  University
       Authors: Gregory Cutter, Lynda Cutter, Old Dominion University

Adaptation Of Ultra-clean Techniques For An Environmental
Monitoring Program  And Establishing Site-Specific Water
Quality Criteria In San Francisco Bay                                             355
       A. Russell Flegal, University of California-Santa Cruz
       Authors: A. Russell Flegal; Michael P. Carlin, California Regional Water
       Quality Control Board, San Francisco Bay Region

Can Hg Be Routinely Monitored At The Parts Per Trillion Level                    363
       Nicolas S. Bloom, Frontier Geosciences
        Authors: Nicolas S. Bloom, Frontier Geosciences; Eva Butler,
        Brown and Caldwell; Val Conner, CVRWQCB, Sacramento, CA

Effects Of  Multiple Interferences On The Determination
Of Total Cyanide In  Simulated Electroplating Waste
By EPA Method 335.4                                                            393
       Margaret Goldberg, Research Triangle  Institute
       Authors: Margaret Goldberg, Andrew Clayton, Research Triangle
       Institute; Billy B.  Potter, USEPA Office of Research and Development

The  Headspace  Biochemical Oxygen Demand (HBOD) Test:
A  New Approach For Measuring BOD                                             429
       Bruce E. Logan, University of Arizona

A  High Speed Automated BOD System                                            473
       Greg Hill, Hampton Roads Sanitation District
        Authors: Greg Hill, Dr. Anna Rule, Allison Wilson,
       Hampton Roads Sanitation District, Central Environmental Laboratory
MAY 5, 1994 PRESENTATIONS AND SPEAKERS
Performance Characteristics Of An Isotope Dilution
HRGC/LRMS Method for Volatiles                                                507
       Bruce N. Colby, Pacific Analytical, Inc.
       Authors: Bruce N. Colby, Lee Helms, Pacific Analytical, Inc.
                                         VII

-------
MAY 5, 1994 PRESENTATIONS AND SPEAKERS                                 PAGE
Micellar Electrokinetic Capillary Chromatography:
Application To Separations Of Mycotoxins And
Polyaromatic Compounds                                                        541
      Michael J. Sepaniak, University of Tennessee

The Analysis Of Kraft Mill  Effluent Using The
Non-Purgeable Total Organic Halide Test                                         551
      Bruce R. Locke, FAMU/FSU College of Engineering
      Authors: Geoffrey B. Watts, Bruce R. Locke,
      FAMU/FSU College of Engineering

Pitfalls Using Conventional TPH Methods For Source
Identification:  A Case Study                                                     581
      Ileana A. Rhodes, Shell Development Company
      Authors: I. A. Rhodes, E. M. Hinojosa, D. A. Barker, R. A. Poole,
      Shell Development Company

Statistical Analysis Of Environmental Data Sets Which
Contain 'Non-Detected' Observations                                            625
      Steven W. Hinton, National Council of the Paper Industry
      for Air and Stream Improvement (NCASI),
      Tufts  University

Determination Of Proposed Effluent Limitations  For The
Pulp and Paper Industry                                                         657
      Henry Kahn, Economic and Statistical Analysis Branch,
      Engineering and Analysis Division, USEPA Office of Water
      Authors: Henry Kahn, Maria D. Smith, USEPA; Amy Brockman,
      Science Applications International Corporation

Methods Integration In  EPA
The Environmental Monitoring Management Council                               673
      Robert M. Runyon, Environmental Services Division
      USEPA,  Region II

A Nationwide Strategy To  Improve Water-Quality In
The United States                                                               699
      Elizabeth Jester Fellows, Office of Wetlands, Oceans, and Watersheds
      USEPA,  Office of Water
      Authors: David A. Rickert, U.S. Geological Survey;
      Elizabeth Jester Fellows, USEPA Office of Water
                                         VIII

-------
MAY 5, 1994 PRESENTATIONS AND SPEAKERS                                PAGE
The Quality Control Level:  An Alternative To
Detection Levels                                                               739
        David Kimbrough, Department of Toxic Substances Control,
        California Environmental Protection Agency
        Authors: David Kimbrough, Janice Wakakuwa, California
        Environmental Protection Agency

Reporting And Interpreting Data Near The Limit
Of Detection                                                                   757
       P. M. Berthouex, Department of Civil and Environmental
       Engineering, University of Wisconsin

Shell Performance Evaluation Study Of Methods 8270,
8020, and Modified 8015 (TPH)                                                 777
       George  H. Stanko
       Shell Development Company
        Authors: G. H. Stanko, T. L. Norton, R. A. Poole,
       Shell Development Company

An Extensive Evaluation Of An SPE Sample Prep
For Method 608                                                                805
       Craig Markell, 3M Corporation
       Authors: Craig Markell, Anh Dao Vo, Sandra Rodriguez,
       Keith Hoffman, 3M Corporation
Closing Remarks                                                                841

List of Speakers                                                                843

List of Attendees                                                               847
                                         IX

-------
                                                PROCEEDINGS
                                                                   May 3,  1994
                                      MR. TELLIARD:  I would like to welcome you to
the 17th annual meeting on measuring pollutants in the environment.  This meeting is
sponsored by the Office of Water and also co-sponsored this year by the Water Environment
Federation.

      There are a couple of housekeeping rules I need to tell you about.  First of all, my
name is Bill Telliard. I am from EPA, and I am here to help you.

      During the sessions, if you have any questions, there are microphones spaced around
the room.  If you would, please go to those  microphones, identify yourself, and ask your
question.

      There is no limit on questions.  That is covered in  your registration  fee.

      This year is our first effort at a co-sponsorship.  The Water Environment Federation
has been involved in a number of activities  with the Office of Water over the years, not
surprisingly. In addition to workshops and other committees that they serve on, the Water
Environment Federation has been a long-time source of information and  consulting to the
Agency.

      So, we are very pleased to initiate this program this year, and we look forward to
having a  number  of other years where we will  be co-sponsoring  with  the Water
Environment Federation.

      I would be happy to get your comments later on after this meeting on what you think
of it, how we can improve it, and how we can make it better.  We are all interested in that.

      I would like to introduce at this time Bob Wyeth.  Bob is Chairman  of the Laboratory
Practices Committee of the  Water Environment Federation, and in his spare  time, he is
Senior Vice President for Recra Environmental, Incorporated, a laboratory which does work
in the environmental field.  So, I would like  to introduce  Bob at this time.

-------
(Blank Page)

-------
               17th ANNUAL EPA CONFERENCE ON ANALYSIS OF
                      POLLUTANTS IN THE ENVIRONMENT

                         INTRODUCTION AND WELCOME

                                  Robert K. Wyeth
                           Senior Vice President and Principal
                              Recra Environmental, Inc.
                      Laboratory Practices Committee Chairperson
                             Water Environment Federation
       On behalf of the Water Environment Federation, it gives me great pleasure to welcome
all of you to the 17th Annual Conference on Analysis of Pollutants in the Environment.

       The Water  Environment Federation is  very  pleased to be able to co-sponsor this
prestigious conference with the United States Environmental Protection Agency.

       The Water Environment Federation, formerly the Water Pollution Control Federation,
is a not-for-profit technical and  educational organization that was founded in 1928.  Its mission
is to preserve and enhance the  global water environment.  Federation members number more
than 40,000 water quality specialists from around the world. Included in this membership are
environmental, civil and chemical engineers, biologists, chemists,  government officers, treatment
plant managers  and  operators,  laboratory  analysts  and  technicians, college professors,
researchers, students and equipment manufacturers and distributors.

       The strengths of the Water Environment Federation, in addition to its size, are:
1)     Active volunteer leadership
2)     Quality technical materials
3)     Technically  diverse membership
4)     Organizational culture responsive to change
5)     Financial stability and strong staff resources

       Other strengths include:
6)     Geographically broad base
7)     Worldwide reputation
8)     A track record of success and credibility
9)     Continuous growth in quality member services

-------
       And most importantly:
10)    Both individual and organizational commitment to water quality!

       In support of attaining its mission, the Water Environment Federation established a
number of strategic initiatives:
       Create and sustain a quality improvement philosophy
       Explore new methodologies and technologies for delivery of services
       Develop  a  system  for  comprehensive  technological  information  exchange  and
       dissemination
       Provide leadership for environmental policy information
       Educate the public
       Support and encourage research and development
       Expand the focus to include all issues related to the water environment

       Co-sponsorship of this conference is totally consistent with these initiatives and with the
Water Environment Federation's mission.

       I mentioned active volunteer leadership.  The Water Environment Federation generates
much of its strength from its committee activities.  The Water Environment Federation has over
40 standing committees, including Toxic Substances, Standard Methods  (which  works jointly
with American Public Health Association and the American Water Works Association), Ecology
and Groundwater, to name a few.

       Another of these committees is  the  Laboratory Practices Committee of which I am
chairperson.  Our committee consists of approximately 30 active members from  across the
country. Members include predominantly chemists from the commercial laboratory industry,
industrial laboratories, municipal, state and federal government laboratories and agencies.

       Activities of the Laboratory Practices Committee include participation in such issues as:
National Environmental Laboratory Certification, Performance Based Methods, and application
of Method Detection Limits.  Significant committee efforts are also focused  on training and
education  and sponsorship of a Laboratory Practices Committee Specialty Conference.  These
specialty conferences are focused strictly on issues and concerns of the laboratory professional
and have consistently received exceptional ratings  from the participants and attendees.

       With the specific intention of a plug or commercial, please allow me  to state that the
Laboratory Practices Committee's next specialty conference will be in August of 1995 in

-------
Cincinnati, Ohio. I assure you that you will receive information about our specialty conference.
I hope you'll be able to attend.

       In reviewing the program for the 17th Annual Norfolk Conference, it is clear that the
content and  quality  of materials  will once  again  meet the high  standard for  which this
Conference has become known.

       Issues  of environmental  sampling,  analysis and discharge monitoring are  critically
important to the government, to industry and to all laboratories.   As a principal of Recra
Environmental, Inc., which is a commercial environmental laboratory with locations in Buffalo,
New York; Cleveland, Ohio; Detroit, Michigan; and Columbia, Maryland, I have to deal with
these types of concerns and issues each and every day.

       As  many of you may know, the commercial laboratory industry, particularly in the
Environmental Sector, has its  own set of problems, including greater demands from our clients,
continually eroding prices and shortened turnaround time, which exacerbate the operational and
management difficulties that we face.  An example of one of these problems, which is  being
addressed at the conference, is the question of Freon use.  As an environmentalist,  I share the
concern over the use and control of Freon and all CFCs.  As a scientist, I need to ensure that
its replacement is appropriate, efficient, and capable of providing reasonably comparable data.  As
a laboratory operations person, I need to ensure that replacement technologies can be implemented
to produce high quality results in an effective and productive manner.  And lastly, as a laboratory
owner, I must be concerned about the cost and performance of any new procedures, but I anxiously
await changing from a method where my solvent costs alone are $1000/gallon.

       Likewise, in order to remain competitive and increase market share, I have to provide
capital for new technologies and procedures like immunoassay, SPE, SFE, ion-trap GC/MS, and
post-column derivatization HPLC with various detectors, some of which are new in their
application to the environmental analysis arena.

       All  of these issues, of course, also have to be considered in concert with the long term
trend in ever decreasing limits of quantification and ever increasing requirements of analytical
certainty.  - Oh, what a tangled web we weave! -  Over the years, the Norfolk Conference, as
much as any other I am familiar with has continued to assist in untangling our web.

       In closing, let me state that the Vision of the WEF is for the federation to be the pre-
eminent organization  dedicated to the preservation and enhancement of the global water
                                        5

-------
environment.

       The membership of the organization, while in pursuit of this mission, is committed to
the principles of providing technical information to a worldwide audience, expanding quality
services for its members and building alliances with other organizations.

       This Conference provides an ideal forum for attainment of our mission and realization
of our vision.  As a principal in Recra Environmental, Inc., the Chairperson of the Water
Environment Federation Laboratory Practices Committee, and a co-sponsor of the Conference,
I want to thank you for your attendance, invite your active participation in the proceedings, and
once again, welcome you to the 17th Annual Norfolk Conference.

-------
                                      MR. TELLIARD: Thanks, Bob.

      This afternoon's session is going to focus on the cutting edge of science.  None of
this high resolution mass spectrometry. We are going to talk about oil and grease.

      For those of you who were here last year and heard my presentation, you know that
as part of the effort to find  a suitable replacement solvent for  Freon in oil  and grease
methods, EPA had conducted Phase I of the Freon Replacement Study. Today I am going
to present the results of Phase II of the Freon Replacement Study.  My talk will be followed
by a number of other speakers, including vendors and industry representatives, who will be
presenting information on additional studies related to this effort.
                      FREON REPLACEMENT STUDY PHASE II
                                      MR. TELLIARD: First I am going to provide you
with some background  information pertaining to the Freon  Replacement Study.   I will
quickly review the objectives and conclusions of Phase I  of the study, and then I will focus
the remainder of my presentation on Phase II of the study and bring you up to date on the
present status of this project.

       As a party to the Montreal Protocol on Substances  that Deplete the Ozone Layer and
as required by  law under the Clean Air Act Amendments  of 1990, the United States is
committed to controlling and eventually phasing out chlorofluorocarbons, which have been
shown to cause depletion of the stratospheric ozone layer. Freon-113 is a CFC whose  use
as an extraction solvent is mandated under some EPA methods for the determination of oil
and grease.  As part of the effort to eliminate the use of  CFCs, the EPA initiated studies to
find a suitable replacement for Freon-113 in these methods.

       Phase  I of the Freon Replacement Study was the  result of a cooperative effort
between the Office of Water, the Office of Solid Waste and Emergency Response, the Office
of Air and  Radiation, and the Office of Research and Development. The objective of Phase
I  of the Freon Replacement Study was  to either find a solvent that gave results equivalent
to Freon-113 for gravimetric determination  of oil and grease in both aqueous and solid
samples, or to select a solvent or alternative technique  for further  study.  Results  of this
study  demonstrated  that of the five solvents tested, none produced results equivalent to
Freon-113 when the sample results were  evaluated collectively.  If the aqueous samples
were separated into petroleum and non-petroleum subcategories, however, values produced
when  n-hexane  and  perchloroethylene were used as  the extraction solvent were  not
significantly different from results produced  when Freon-113 was the extraction solvent.

-------
      As  a result, n-hexane was retained as a candidate solvent for further study of
gravimetric  oil  and  grease  and  total  petroleum  hydrocarbons determination,  and
perchloroethylene  was  retained  for  consideration  in  future  studies  related  to  the
measurement of oil and grease and TPH by infra-red techniques. Perchloroethylene was not
considered for further testing as a replacement solvent for gravimetric purposes because of
its high boiling  point, which would require a higher temperature for evaporation  and
therefore result in a loss  of anything that evaporates below 121 °C, and because it is more
toxic than n-hexane.

      Phase II of the Freon Replacement  Study focused on the gravimetric determination
of both  oil and grease and  TPH in aqueous  samples.  The purpose of this phase was to
further assess the suitability of n-hexane as a replacement solvent and, based on comments
received about the neurotoxicity of n-hexane,  to consider  cyclohexane as  an alternative
solvent.

      Oil  and  grease analysis was performed using MCAWW Method  413.1, with
modifications to compensate for the lighter density and higher boiling points of n-hexane
and cyclohexane as compared to Freon-113.  TPH analysis was performed using Standard
Methods 5520F.  Each  sample was analyzed  in triplicate and  1600-series method  QC
requirements were added to the analytical protocol to monitor laboratory performance.

      In addition, other techniques were examined independently by vendors and included
solid phase extraction, both column  and disk, non-dispersive IR, and immunoassay.  EPA
supplied these vendors with splits of the same  samples used for  EAD study purposes.

      In Phase  II, 34 samples from 25 facilities covering 15 industrial  categories were
collected. Sample collection activities ended in April, so we have not received all of the
data from this study.  This  presentation  is based on what data we have  available.  In
addition to these efforts, a round robin study has been initiated to test the new method,
Method 1664, with a group of laboratories located in the Minneapolis-St. Paul area known
as the Twin Cities Round Robin Group.

      Samples were collected from a  number of industrial categories in order to test a
variety of sample matrices.  In Phase I we learned that many of the effluents sampled did
not contain detectable levels of oil and grease.  In order to avoid the statistical  problems
associated with nondetect values, and to ensure that analyses would produce measurable
values that could be evaluated, samples were prepared by mixing the influent to treatment
with the effluent from the facilities.  Facilities were surveyed to determine the average oil
and grease values  in  the effluent, and  based on the information provided, a portion of
influent was added to the  effluent to  hopefully produce  a sample with oil and  grease
concentrations in the range of 40-300 mg/L.
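
       As a simple illustration of that mixing arithmetic (the function and the concentrations
below are hypothetical, not values from the study), the required influent fraction follows from
a two-component mass balance:

    def influent_fraction(c_influent, c_effluent, c_target):
        """Volume fraction of influent needed so that a simple volume-weighted
        blend of influent and effluent reaches the target oil and grease
        concentration (all values in mg/L)."""
        if not (min(c_influent, c_effluent) <= c_target <= max(c_influent, c_effluent)):
            raise ValueError("target must lie between the influent and effluent concentrations")
        return (c_target - c_effluent) / (c_influent - c_effluent)

    # Hypothetical values: 900 mg/L influent, 2 mg/L effluent, 100 mg/L target
    fraction = influent_fraction(900.0, 2.0, 100.0)   # about 0.11, i.e., ~11% influent by volume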

      Hexane and cyclohexane results  were  compared to Freon-113 results by calculating
the root mean square deviation of the results from the alternative solvents around the results


                                        8

-------
from Freon-113.  Acceptance limits indicate whether or not the hexane and cyclohexane
results are comparable to the Freon-113 results.  A smaller RMSD indicates better agreement
with Freon-113.   The results from  Phase II show that  both hexane  and cyclohexane
produced results significantly different from Freon, except for TPH analysis using hexane as
the extraction solvent.
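
       A minimal sketch of this comparison, assuming paired results for each sample and the
natural log transformation noted in the accompanying slides, might look like the following;
the acceptance limit itself is study-specific and is left as a parameter:

    import math

    def rmsd_vs_freon(alt_results, freon_results):
        """Root mean square deviation of ln-transformed alternative-solvent
        results around the paired ln-transformed Freon-113 results."""
        deviations = [math.log(a) - math.log(f)
                      for a, f in zip(alt_results, freon_results)]
        return math.sqrt(sum(d * d for d in deviations) / len(deviations))

    def comparable_to_freon(alt_results, freon_results, acceptance_limit):
        """Smaller RMSD means better agreement; a value above the study's
        acceptance limit is flagged as significantly different from Freon."""
        return rmsd_vs_freon(alt_results, freon_results) <= acceptance_limit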

      Another  way to compare the hexane and cyclohexane results to Freon results is to
look at the mean absolute deviation of the alternative solvent results from the Freon results.
The calculated values show that in general, the deviation  of the hexane and cyclohexane
results from Freon are often less than 20%, which is within the  acceptance criteria for
deviation  of duplicate analysis  results.  This indicates that, although the  hexane and
cyclohexane results are statistically different from the Freon results, they are close enough
in value to the Freon results to be within the variability of duplicate analyses.
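
       Using the percent deviation definition given in the accompanying slides, the deviation
statistics can be sketched as follows, reported as a mean with its standard error:

    import math

    def percent_deviation(c_alternative, c_freon):
        """|100 x (alternative-solvent result / Freon-113 result) - 100|,
        the definition given in the Phase II summary slides."""
        return abs(100.0 * c_alternative / c_freon - 100.0)

    def mean_and_sem(values):
        """Mean and standard error of the mean of the per-sample deviations."""
        n = len(values)
        mean = sum(values) / n
        variance = sum((v - mean) ** 2 for v in values) / (n - 1)
        return mean, math.sqrt(variance / n)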

      As part of the Phase II data evaluation, we also determined the percentages of the
hexane and cyclohexane results that were above or below the Freon results.  As you can
see, analysis for oil and grease using either hexane or cyclohexane  produced results less
than the results generated with Freon for approximately 80% of the samples,  and analysis
for TPH using either hexane or cyclohexane produced results less than the results generated
with Freon for over half of the samples.

      This means that approximately  20% of the hexane and cyclohexane oil and  grease
values are above the Freon values, which raises some concerns about the effect of a solvent
change on compliance monitoring and the ability to meet permit limits.  We  plan to look
at this particular data more closely to further evaluate how much of an effect this may have
on compliance  monitoring.

      Based on our evaluation of the Phase II data, we have concluded that both hexane
and cyclohexane  produce  results that  are  different from  Freon,  but  that  hexane and
cyclohexane results are equivalent to one another.

      Since hexane and cyclohexane had similar extraction efficiencies, when determining
the most  suitable solvent to  replace Freon  more practical issues were considered, namely
analytical conditions.  Due to the lower boiling point of hexane, the solvent evaporation
step took much  less time for hexane than cyclohexane.  The wide  range of evaporation
times presented on this slide  is the result of laboratories  using  various  evaporation
techniques,  such as water baths, steam baths, and rotovaps.

      We also  determined  that the toxicity of hexane was not that much higher than that
of cyclohexane,  and that by using good laboratory practices, this health and safety issue
should be manageable.

      Based on all of these considerations, the Agency is recommending the use of n-
hexane to replace  Freon as  the extraction solvent for gravimetric determination of oil and

-------
grease and TPH.  Though we have not yet analyzed the data from other studies,  such as
those that API and the Uniform and Textile Service Association will be presenting, our
evaluation of the data we have collected to date has led us to recommend  the use of n-
hexane.

      A comparison of the n-hexane extracted oil and grease data from Phase I and Phase
II demonstrates that Phase II results had much less deviation between replicate analyses than
the Phase I data.  This improved performance in Phase II analysis can be attributed to the
more stringent quality control objectives that were implemented in Phase II, which  resulted
in more careful and thorough application of analytical technique.

      These quality control criteria were incorporated into the new method for gravimetric
oil and grease and TPH determination, Method 1664.  Analysis consists of a series  of three
extractions with hexane and requires that both the sample bottle and cap be rinsed to ensure
removal of all extractable material that may adhere to the sample container. Sodium sulfate
is used to remove any residual water from the solvent after extraction of the sample.  For
the TPH procedure, the amount of silica gel used increases proportionately with the amount
of HEM in the sample at a ratio of 30:1. Method 1664 also requires the use of hexadecane
and stearic acid as reference standards, which were used for the QC analyses in Phase II of
the study.

       Quality control is more extensive than in previously used methods and consists of a
two-point calibration of the analytical balance, calibration verification every ten samples, initial
precision and recovery analysis prior to the analysis of field samples, ongoing precision and
recovery with each batch of samples, a reagent water method blank with the IPR analysis
and with each batch of samples, and a matrix spike/matrix spike duplicate with each batch.
A batch consists of ten samples.
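
       For orientation only, these quality control elements can be summarized as a simple
checklist along the following lines; the controlling requirements remain those spelled out in
Method 1664 itself:

    # Illustrative summary only; the controlling requirements are those in Method 1664.
    METHOD_1664_QC = {
        "balance_calibration": "two-point calibration of the analytical balance",
        "calibration_verification": "every 10 samples",
        "initial_precision_and_recovery": "before analysis of any field samples",
        "ongoing_precision_and_recovery": "with each batch",
        "reagent_water_method_blank": "with the IPR analysis and with each batch",
        "matrix_spike_and_duplicate": "with each batch",
        "maximum_field_samples_per_batch": 10,
    }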

      As I stated earlier, hexadecane  and stearic acid are used as reference standards for
QC analyses. These compounds were chosen over materials such as Wesson oil or fuel oil
because they are standards of known composition and purity that can be readily obtained
from vendors. In addition, the use of stearic acid serves to verify the adsorptive properties
of the silica gel.

      Method  1664 is a performance-based method,  so it allows the use of alternate
extraction and concentration techniques, as long as the performance meets the specifications
in Method 1664.  The laboratory that chooses to use alternate techniques must demonstrate
equivalency by meeting the data quality  objectives for the specified QC tests in  Method
1664 which, among others include the method detection limit, initial precision and recovery
analysis, ongoing precision and recovery, and matrix spike/matrix spike duplicate analysis.
In addition, detailed records of the method changes and QC analyses must be maintained.
These recordkeeping specifications are provided in Section 9.1.2.2 of Method 1664.
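
       A laboratory tracking such an equivalency demonstration might organize the initial
precision and recovery check roughly as sketched below; the acceptance limits are left as
parameters because the actual values must be taken from the QC specifications in Method 1664:

    def meets_ipr_limits(recoveries, recovery_limits, max_rsd):
        """Compare a set of replicate initial precision and recovery (IPR)
        percent recoveries with method-specified limits.  recovery_limits is a
        (low, high) pair for the mean recovery and max_rsd is the allowed
        relative standard deviation; both must be taken from Method 1664."""
        n = len(recoveries)
        mean = sum(recoveries) / n
        sd = (sum((r - mean) ** 2 for r in recoveries) / (n - 1)) ** 0.5
        rsd = 100.0 * sd / mean
        low, high = recovery_limits
        return low <= mean <= high and rsd <= max_rsd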
                                        10

-------
      As I  mentioned earlier, an interlaboratory study of Draft Method 1664 is currently
being conducted by the Twin City Round Robin Group. Approximately 16 laboratories that
are part of this group are voluntarily participating to analyze two samples, one from an olive
packaging plant and one from a shore reception facility, each in triplicate, for oil and grease
using Method 1664.   Prior to analyzing field samples, the laboratories were required to
demonstrate their ability to meet  method protocol by performing initial  precision and
recovery analyses.  We are in the process of receiving the IPR data and, of the data that
have been submitted, almost all meet the data quality objectives of Draft Method 1664. We
anticipate that analyses will be completed and data submitted within a month or so.

      In the future, we plan to conduct an MDL study using  hexadecane and stearic acid
to determine the MDLs and Minimum Levels  for Method 1664.  Other related projects
under consideration  include an interlaboratory study  to compare solid phase extraction
techniques  with liquid/liquid extraction  techniques,  and a  study to  evaluate infra-red
techniques  for the analysis of oil  and grease and TPH using perchloroethylene as the
extraction solvent.

      Method 1664  will be proposed in the Federal  Register  as the Freon replacement
method for oil and grease and TPH. We will distribute copies of Draft Method 1664 at the
end of this presentation, and would be glad to receive your comments on this method. In
addition, a  questionnaire  is being distributed in an effort to collect additional comments,
information, and suggestions on the  Freon  replacement study.   Please  complete the
questionnaire and submit it to us before you leave.

      If there are  any questions, I will be happy to  answer them at this time.  Thank
you.
                                        11

-------
(Blank Page)
     12

-------
 EPA Efforts to Replace
        Freon-113
          Phase II

       William A. Telliard
           USEPA
       Engineering & Analysis Division

-------
Regulatory History
  Montreal Protocol on Substances that
  Deplete the Ozone Layer regulates the
  use of chlorofluorocarbons (CFCs), with
  an eventual phase out by 1996.

-------
Regulatory History (cont'd)
  The Clean Air Act Amendments of 1990
  (CAAA) commit EPA to phase out CFCs and
  other ozone depleting chemicals by 1996.

  Freon-113 is the only CFC used in laboratory
  testing that falls under these regulations.

-------
Study Plan:  Phase I
• Find a solvent (if any) that gives results
  equivalent to Freon-113, or

• Select solvent or alternative technique for
  further study

-------
Summary of Results from Phase 1
  All Solvents Were Significantly Different
  From Freon When Samples Were Not
  Segregated By Sample Category

  Hexane and Perchloroethylene Were Not
  Significantly Different From Freon For
  Aqueous Non-Petroleum Samples

-------
Study Plan:  Phase II
  Further Assess Precision, Accuracy, and
  Comparability of n-Hexane in Gravimetric Oil &
  Grease Analyses of Aqueous Samples

  Evaluate Cyclohexane as an Alternative Solvent
  (Based on Concerns About Neurotoxicity of
  n-Hexane)

  Evaluate n-Hexane and Cyclohexane as Alternative
  Solvents For Gravimetric Total Petroleum
  Hydrocarbons (TPH) Analysis (Silica Gel Procedure)
  of Aqueous Samples

-------
Solvents and Techniques:  Phase II

• Freon 113
• Freon 113 + silica gel adsorption procedure
• n-Hexane
• n-Hexane + silica gel adsorption procedure
• Cyclohexane
• Cyclohexane + silica gel adsorption
  procedure

-------
Overview of Sampling Plan:
Phase II

• 34 Samples From 25 Facilities Covering
  15 Industrial Categories Have Been
  Collected

• Sampling Ongoing Through April, 1994

• Twin City Round-Robin Study Began in
  April, 1994
  - 16 Laboratories Participating

-------
Industrial Categories Sampled:
Phase II
     Non-Petroleum Sources:

     4   Meat Product Plants
     2   Coil Coating Plants
     2   Miscellaneous Foods
     2   Textile Manufacturers
     2   Leather Tanning Plants
     1   Metal Molding and Casting
     1   Meat Processing Plant
     1   Soap & Detergent Manuf.
     1   POTW
Petroleum Sources:

3  Metal Finishing Plants
3  Shore Reception Facilities
2  Petroleum Refineries
2  Transportation Facilities
2  Drum Reconditioning Fac.
2  Organic Chemical Plants
1  Industrial Laundry

-------
Phase II
Comparison of Solvents Alternative to Freon
Following Natural Log Transformation

[Table: Normalized RMSD* of the hexane and cyclohexane oil & grease results, with the
corresponding acceptance limit, by sample category (All Samples, Non-Petroleum, Petroleum);
the numeric entries were not legible in the source.]

* Root Mean Square Deviation; Significantly Different Than Freon If Exceeds Acceptance Limit
** Value Within Acceptance Limit

-------
Phase II
Mean Absolute Deviation From Freon Of Alternative Solvents
Following Natural Log Transformation

[Table: Percent deviation from Freon-113 of the hexane and cyclohexane results (mean ±
S.E.M.), by sample category (All Samples, Non-Petroleum, Petroleum); the tabulated values
(10.4 ± 2.1, 15.9 ± 3.3, 25.9 ± 11.9, 11.0 ± 2.7, 25.8 ± 6.7, 38.4 ± 15.3, 14.4 ± 4.8, and
14.5 ± 6.7) could not be reliably assigned to their cells.]

† Mean ± Standard Error of the Mean (S.E.M.)
Percent Deviation = | (100 x Conc derived with Hexane or Cyclohexane / Conc derived with Freon) - 100 |

-------
Phase II
Percentages of Solvent Results Above and Below Freon Results
Following Log Transformation

[Table: For each analysis and solvent (hexane, cyclohexane), the percentage of results below
and the percentage above the corresponding Freon result; the numeric entries were not legible
in the source.]

-------
Conclusions:  Phase II

Both n-Hexane and Cyclohexane Produce
Results Which Are Different From Freon

n-Hexane and Cyclohexane Results Are
Equivalent

-------
Comparison of Analytical Conditions:
Phase II

  n-Hexane
    •  Boiling Point = 69° C
    •  Solvent Evaporation Time ~ 30 - 150 Minutes,
       With Water Bath at 85 - 90° C

  Cyclohexane
    •  Boiling Point = 81° C
    •  Solvent Evaporation Time ~ 80 - 240 Minutes,
       With Water Bath at 90 - 95° C

-------
Recommendation:  Phase II

EPA Recommends the Use of n-Hexane as
a Replacement For Freon in Gravimetric
Oil & Grease and TPH Analyses of
Aqueous Samples

-------
Summary of Hexane Results For Analyses
Of Total Oil & Grease
Aqueous Samples

[Table: % Freon Recovery* and whether results were similar to Freon, by study phase and
sample category (All Samples, Non-Petroleum, Petroleum); the numeric entries were not
legible in the source.]

* Mean ± S.E.M.

-------
Draft Method 1664—Hexane Extractable Material
(HEM) and Silica Gel Treated Hexane Extractable
Material (SGT-HEM) By Extraction and Gravimetry

Characteristics:
•   Hexane Used As Extraction Solvent (Purity > 95%)
•   Bottle and Cap Rinsed 3x With Extracting Solvent
•   Sodium Sulfate Used to Remove Residual Water
•   Set Ratio of Silica Gel to HEM at 30:1
•   Reference Standards—Hexadecane and Stearic Acid

-------
Draft Method 1664

Quality Control:
•   Two Point Calibration of Analytical Balance
•   Calibration Verification Every 10 Samples
•   Initial Precision and Recovery (IPR; 4 Reps)
•   Ongoing Precision and Recovery (Each Batch)
•   Reagent Water Method Blank (With IPR and With Each Batch)
•   Matrix Spike/Matrix Spike Duplicate Each Batch
•   Batch Size = Maximum of 10 Field Samples

-------
Draft Method  1664
 Hexadecane and Stearic Acid as Reference
 Standards:

 •  Standards of Known Composition and Purity That
   Can Be Readily Obtained From Vendors

 •  Adsorptive Properties of Silica Gel Are Verified
   Using Stearic Acid

-------
       Draft Method 1664
Performance-Based Approach
• Sect 9.1.2 Allows for the Use of Alternative Extraction and
  Concentration Devices and Procedures, as Long as Performance is
  Equivalent to Draft Method 1664
• Equivalency is Demonstrated by Meeting Data Quality Objectives in
  Draft Method 1664 for:
   - Sensitivity (Method Detection Limit;  Sect 9.2.1)
   - Precision (Sect 9.2.2 & 9.3.5)
   - Accuracy (Sect 9.2.2 & 9.3.4)
   - and Other Criteria
• Required Recordkeeping is Specified in Sect 9.1.2.2

-------
Twin City Round-Robin Group

Interlaboratory Study of Draft Method 1664

-------
      Future Plans
Conduct MDL and ML Study For Draft Method 1664
Using Hexadecane and Stearic Acid as Analytes

Conduct an Interlaboratory Study to Compare Solid
Phase Extraction Techniques With Liquid/Liquid
Extraction Techniques

Evaluate Infra-Red Techniques For the Analysis of Oil
and Grease and TPH Using Perchloroethylene as the
Extraction Solvent

-------
                                      MR. TELLIARD:  Our next speaker is presenting
on behalf of the American Petroleum Institute or, as we refer to it in government, the big
API, not to be confused with the little API.  Harold Rhodes is going to talk about a study
that they conducted looking at 30 facilities and comparing solvents.  Harold has just recently
retired from Texaco,  but he is  going to say Texaco things to  us, so please give him your
attention.  Thank you.
                           NOTHING IN LIFE IS FREON
                                      MR. RHODES: The title of our study was Nothing
in Life is Freon, and this can be attributed to the old philosopher from API, Roger Claff, or
you can blame me for it also.

      Last year at this meeting, we saw what Bill has just finished telling you again, that
the Phase I study was done and gave us some ammunition to conduct our own study for the
petroleum industry.  We did this in support of EPA so they could have some information
for their decision making.  A contractor was contacted  to do this for us.

       Our API project goals were to seek an alternative solvent to Freon 113 in Method
413.1 only.  This is a gravimetric method, and we were going to report the findings to EPA
in response to their request.

      Now, what we already knew when we started  is that oil and grease is a defined
parameter and is just used to monitor effluent quality. In our industry, it  is many different
compounds and many different classes of compounds.

      These  include the crude oil from produced  waters or refinery products from  our
refineries or our finished products from our marketing terminals, but the petroleum facilities
in our industries are all different.  Each one of those effluents is different, so we were going
to conduct a study of different sectors of our industry.

      Our goal was, can we find  another  solvent which is either equivalent  to  or
proportional to Freon 113?

      After listening to Bill, we  probably made the wrong choices, but it looks like we
came out  pretty  close.    We  were  going  to  try  n-hexane  and cyclohexane and
perchloroethylene, all in the gravimetric test.  Also in our scope, we wanted to take three
different industry sectors, the marketing terminals, production platforms,  and refineries.

       Our original goal was to select 10 sites from each sector.  We almost made it, in that
one biotreater was out at a marketing terminal, so we had 29 samples.

                                        35

-------
      Triplicate samples were taken from each site at each location for each solvent.  The
data were duly compared against results for Freon 113 for each of the solvents.

      Our operating procedures we have gone through already, so I will show you exactly
what we did. Method 413 we followed rigorously and made some observations as to what
was going to be different for each one of our solvents.  We did a quality control daily,
method  blanks, all of the good QA/QCs that you are  supposed to do.

       If you use cyclohexane, one of the first things you run into is that it is lighter than water.
This causes a perturbation in the Freon extraction procedure, because Freon, being heavier
than water, is easier to handle.

      Some of the things that you had to take care of were that, at each serial extraction,
you had to drain the solvent back into the bottle and remove the water, and do some more
handling.  Of course, the solvent is highly flammable and has a flash point of about -20°C.
It requires a higher water bath temperature to remove, but its price and disposal costs are
relatively low.

      Again, here are some observations using hexane. The same low density is a problem.
It is a known neurotoxin.   We  had to take this and the flammability into  consideration.
Again, it has a relatively low price and disposal costs.

      For perchloroethylene, we go back to the greater density,  so it was  a lot  easier to
handle,  but its high boiling point required  the use of a heating mantle in order to remove
the solvent. It is a little more toxic than the Freon, but it is not known as a neurotoxin.

       One observation we made concerned a quality control standard that we ran with each
batch on each day, which is a 33 API gravity crude with a density of about 0.86; this provides
a means to compare results for recovery.

       Now, this crude oil has some volatile components.  As you can see, none of the
solvents gave as good a recovery of the gravimetric residue as Freon did.  This may be
due to the higher boiling points or analyte loss at different stages.

      Here are some real data.  This is data plotted as a function of Freon for each sector.
This particular one shows production platform wastewaters.  Only one of the sites had any
significant...Bill, you would like that...had significant oil and grease in it, but this data point
was included in the pooled data for our decision.

      In refinery effluents, we tried to  select, again, streams that were upstream of our final
effluent.  The refinery 104  is a final effluent, and you can see we find none in it. This is
really a  comparison study for the solvents  more than  the sites.
                                        36

-------
       In the marketing terminals, we had a lot of non-detects and one super-detect.  This
point was rejected in our statistical data due to the fact that it had floating oil in it and was
not representative.

       Now, what did we do to check for equivalency of oil and grease  concentrations?
This is the data evaluation. What we did was take all of the data, and we took it for each
solvent at each site and for all the sites.

       We calculated the mean square error of the natural logs of the concentrations, and we
calculated the mean square deviation of the natural logs for each candidate solvent relative
to Freon.  For each solvent, an F-test was used to test the null hypothesis that there was no
significant difference between the candidate solvent and Freon, based on the ratio of the
mean square deviations.
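
       In outline, that variance-ratio test can be set up as in the following sketch, which
assumes ln-transformed results and uses the F distribution from SciPy; the degrees of freedom
and significance level used in the API study are not reproduced here:

    from scipy import stats

    def solvent_differs_from_freon(msd, df_msd, mse, df_mse, alpha=0.05):
        """Variance-ratio (F) test of the null hypothesis of no difference
        between a candidate solvent and Freon-113: the mean square deviation
        of the candidate from Freon (msd) is compared with the replicate mean
        square error (mse).  Degrees of freedom and alpha are inputs; the
        values used in the API study are not reproduced here."""
        f_statistic = msd / mse
        f_critical = stats.f.ppf(1.0 - alpha, df_msd, df_mse)
        return f_statistic > f_critical   # True means significantly different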

       So, what did that say?  No candidate solvent was shown to be equivalent to Freon...
we have  already  heard this  from  Bill... from  refinery or  production  effluents.  The
perchloroethylene and cyclohexane were equivalent to  Freon  for marketing terminals, but
this may really be due to the fact that there were many non-detects.

       Were they proportional to Freon in any case?  This is what we did to determine that.

       We found that the data variability increased with concentration.  So, we took the
natural logs of the oil and grease concentrations for each candidate solvent and regressed
them against the natural logs of the concentrations obtained using Freon.

       Coefficients of determination were 0.8 and above.  (A regressed slope of 1 indicated
solvent behavior proportional to Freon.) The 95 percent confidence intervals were estimated
for all of the calculated correction  factors.
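
       A sketch of that regression step, assuming paired detected results, is given below;
judging proportionality would still require the 95 percent confidence interval on the slope,
which the study estimated but which is not shown here:

    import numpy as np

    def log_log_regression(alt_conc, freon_conc):
        """Regress ln(candidate-solvent concentration) on ln(Freon-113
        concentration) for paired detected results.  Returns slope, intercept,
        and the coefficient of determination; a slope near 1 indicates
        behavior proportional to Freon-113."""
        x = np.log(np.asarray(freon_conc, dtype=float))
        y = np.log(np.asarray(alt_conc, dtype=float))
        slope, intercept = np.polyfit(x, y, 1)
        predicted = slope * x + intercept
        ss_res = float(np.sum((y - predicted) ** 2))
        ss_tot = float(np.sum((y - np.mean(y)) ** 2))
        return slope, intercept, 1.0 - ss_res / ss_tot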

       What we found  here was  that, sure enough,  all three candidate solvents were
proportional in some manner to Freon within the three  individual petroleum sectors. The
exception was cyclohexane with the terminal effluents. Again, that may be due to the non-
detects.

       Only hexane and perchloroethylene were proportional to Freon for all of the pooled
data.  The only rejected point was that one marketing terminal.

       However, the confidence intervals for the correction factors estimated at each of the
concentrations were very large.  This is just an example from our main report of the
confidence intervals for hexane and PCE.

       We did one more test sort of as a  sideline because of the  boiling point differences
of the solvents.   We did a little experiment here with analyte  loss as a function  of
evaporation  temperature.


                                        37

-------
      What we did was take one milligram of our general oil standard, take it up in the
solvent, evaporate it, and take the residue up  in methylene chloride and run a GC on it.
This is the comparison of the GC data as a function of temperature and carbon number.

      This study  was a one-time shot, and it needs further study to evaluate.

       What did we learn?  We learned that, for production and refinery effluents, none of the
candidate solvents was equivalent to Freon for Method 413.1, and that the marketing terminals
had too many non-detects.

      With the  exception  of cyclohexane on marketing terminals,  all  of them were
proportional  to  Freon.   For the  pooled  data,  hexane  and perchloroethylene were
proportional.

       The main thing we found out was that the confidence intervals for the estimated
correction factors were very large, and it is amazing, but perchloroethylene, in our test,
turned out to be the closest to being proportional to Freon.

      This  is our  recommendation  from our study, and Bill  has  already  made his
recommendation, so this  is whatever you want to do with it.  Even though we did the
statistical analysis on all of our data and tried to make it either equivalent to or proportional
to Freon, the confidence intervals were so great that this was not possible.

       So, we are strongly recommending against correction factors, and I was glad to hear
from our previous speaker that this is not being done.

      I would like to thank all of these people who worked on this project, and if there are
any questions within the API sector, I would be glad to answer them.

                                      MR. TELLIARD:   Any questions?

(No response.)

                                      MR. TELLIARD:   Thank you.
                                        38

-------
                NOTHING IN LIFE IS FREON!
                   ANALYSIS OF OIL AND GREASE
                            FOR
                 PETROLEUM INDUSTRY EFFLUENTS

-------
                        API PROJECT GOALS

 THROUGH A LABORATORY STUDY OF CANDIDATE SOLVENTS,

1. To seek an alternative solvent to Freon 113 in EPA Method 413.1, Oil and
   Grease;

2. To report the study findings to EPA in response to their request for
   industry recommendations on a Freon replacement.

-------
                  WHAT WE ALREADY KNEW

        * O&G is a method-defined parameter used to monitor effluent quality

        * O&G is many different compounds and classes of compounds

        * Petroleum industry facility effluents are not alike

                       WHAT WE ASKED

        Can another solvent produce results equivalent to or proportional
        to Freon for EPA Method 413.1?

-------
                        PROJECT SCOPE

     Candidate solvents:
       - perchloroethylene
       - n-hexane
       - cyclohexane

     Petroleum industry facility types:
       - marketing terminals (9)
       - production platforms (10)
       - refineries (10)

     Triplicate samples were collected at each location.

Effluent samples were analyzed using modified Method 413.1 for each candidate
solvent.  Data were compared against results from Method 413.1 with Freon® to
identify an equivalent or proportional solvent.

-------
OIL AND GREASE DETERMINATIONS USING CYCLOHEXANE
   * NO APPARENT BACKGROUND  INTERFERENCE IN
     REAGENT GRADE SOLVENT.

   * NO SEVERE EMULSIONS OBSERVED.  LOW DENSITY
     SOLIDS AND POTENTIAL FOR WATER TRANSPORT IN
     SOME SAMPLES MADE FILTRATION OF EXTRACT
     NECESSARY STEP.

   * DENSITY LESS THAN WATER COMPLICATES ANALYSIS.
     FLOATING SOLVENT ADDS MANIPULATION STEPS TO
     PROCEDURE (REQUIRES DRAINING OF SAMPLE INTO
     ORIGINAL CONTAINER AND RINSING WITH SOLVENT
     AFTER EACH  SERIAL  EXTRACTION).  ADDITIONAL
     MANIPULATION COULD LEAD TO LOSS OF ANALYTE.

   * BOILING POINT 81°C COULD CONTRIBUTE TO ANALYTE
     LOSS, i.e., LOW RECOVERY.

   * HIGHLY FLAMMABLE, FLAMMABILITY LIMITS IN AIR 1.3
     - 8.4% V/V.  FLASH POINT IS -20°C.

   * LONGEST ANALYSIS TIME OBSERVED, 73 MINUTES,
     WITH WATER BATH USED IN STRIPPING STEP, 95°C.

   * LEAST TOXIC OF SOLVENTS EXAMINED, TLV = 300 PPM
     (TWA).  SKIN IRRITANT AND NARCOTIC AT HIGH
     CONCENTRATIONS.

   * RELATIVELY LOW PRICE AND DISPOSAL COSTS.
                   43

-------
OIL AND GREASE DETERMINATIONS USING HEXANE
 * NO  APPARENT  BACKGROUND INTERFERENCE  IN
  REAGENT GRADE SOLVENT.

 * NO SEVERE EMULSIONS OBSERVED. LOW DENSITY
  SOLIDS AND POTENTIAL FOR WATER TRANSPORT IN
  SOME SAMPLES  MADE FILTRATION  OF EXTRACT
  NECESSARY STEP.

 * DENSITY LESS THAN WATER COMPLICATES ANALYSIS.
  FLOATING SOLVENT ADDS  MANIPULATION STEPS TO
  PROCEDURE (REQUIRES DRAINING OF SAMPLE INTO
  ORIGINAL  CONTAINER AND RINSING WITH SOLVENT
  AFTER EACH SERIAL  EXTRACTION).   ADDITIONAL
  MANIPULATION COULD LEAD TO LOSS OF ANALYTE.

 * BOILING POINT (69°C) COULD CONTRIBUTE TO MINOR
  ANALYTE LOSS, i.e., LOW RECOVERY. BP CLOSEST TO
  FREON OF THREE STUDIED.  LOW POLARITY MAY
  AFFECT PERFORMANCE RELATIVE TO FREON.

 * FLAMMABLE, FLASH POINT IS - 26°C.

 * ANALYSIS TIME OBSERVED, 40 MINUTES, WITH WATER
  BATH USED IN STRIPPING STEP, 90°C.  SIMILAR TO
  FREON.

 * MORE TOXIC THAN FREON, TLV = 50  PPM (TWA).
  RESPIRATORY  TRACT  IRRITANT AND  NARCOTIC AT
  HIGH CONCENTRATIONS.

 * RELATIVELY LOW PRICE AND DISPOSAL COSTS.
                     44

-------
   OIL AND GREASE DETERMINATIONS USING
          PERCHLOROETHYLENE
* NO APPARENT  BACKGROUND INTERFERENCE IN
 REAGENT GRADE SOLVENT.

* NO SEVERE EMULSIONS OBSERVED. HIGH DENSITY
 SOLIDS OF SOME SAMPLES  MADE FILTRATION OF
 EXTRACT NECESSARY STEP.  EXTRACT APPEARED
 HAZY IN BLANK RUNS.

* DENSITY  GREATER   THAN  WATER   SIMPLIFIES
 ANALYSIS. SETTLING SOLVENT IS CONSISTENT WITH
 STANDARD SEP FUNNEL TECHNIQUES.

* BOILING POINT (121°C) COULD CONTRIBUTE TO MAJOR
 ANALYTE LOSS, i.e., LOW RECOVERY. LOW POLARITY
 MAY AFFECT PERFORMANCE RELATIVE TO FREON.
* NONFLAMMABLE.

* ANALYSIS TIME  OBSERVED, 40  MINUTES, WITH
 HEATING MANTLE USED IN SOLVENT STRIPPING STEP,
 130°C. ANALYSIS TIME EQUIVALENT TO FREON.

* MORE TOXIC THAN FREON, TLV = 50 PPM (TWA), EQUAL
 TO HEXANE.  DEFATTING ACTION ON SKIN AND
 NARCOTIC AT HIGH CONCENTRATIONS.

* RELATIVELY LOW PRICE, BUT DISPOSAL COSTS ARE
 HIGH; MAY QUALIFY FOR RECYCLING.
               45

-------
            TABLE A-4.  QUALITY CONTROL MEASUREMENTS:
               PERCENT RECOVERY OF REFERENCE OIL

     QC SAMPLE NO.    FRE    HEX    CYC    PCE
           1           65     44     59     59
           2           69     42     56     56
           3           70     44     56     55
           4           70     44     56     55
           5           62     47     52     51
           6           62     50     44     52
           7           56     51     56     53
           8           70     46     54     54
           9           60     55     60     54
          10           71     48     56     58
          11           60     54     52     58
          12           63     49     64     63
          13           73     68     64     75
          14           66     54     55
        MEAN           66     50     56     57

FRE = Freon-113®; HEX = n-Hexane; CYC = Cyclohexane; PCE = Perchloroethylene
                                A-4
                                  46

-------
     METHOD 413.1 FREON REPLACEMENT STUDY
    MARKETING TERMINAL WASTEWATER SAMPLES

[Bar chart: O&G concentration for samples MKT-101, MKT-102, MKT-103, MKT-104,
MKT-105, MKT-107, MKT-109, and MKT-110, comparing Cyclohexane, Freon-113,
n-Hexane, and Perchloroethylene.]

-------
                 METHOD 413.1 FREON REPLACEMENT STUDY
                       REFINERY WASTEWATER SAMPLES

[Bar chart: O&G concentration for samples REF-101 through REF-110, comparing
Cyclohexane, Freon-113, n-Hexane, and Perchloroethylene.]

-------
     METHOD 413.1 FREON REPLACEMENT STUDY
   PRODUCTION PLATFORM WASTEWATER SAMPLES

[Bar chart: O&G concentration for samples OCS-101 through OCS-110, comparing
Cyclohexane, Freon-113, n-Hexane, and Perchloroethylene.]

-------
  EQUIVALENT OIL AND GREASE CONCENTRATIONS WITH METHOD 413.1?

Data Evaluation for Each Industry Sector

     •  Mean square error of the natural logs of O&G concentrations was calculated.

     •  Mean square deviation of the natural logs of O&G concentrations was calculated for
        each candidate solvent relative to Freon®.

     •  For each solvent, an F-test was used to test the null hypothesis of no statistically
        significant difference between the candidate solvent and Freon®, based on the ratio of
        the mean square deviation to the mean square error (a sketch of this test follows).
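The ratio test in the last bullet can be written out numerically.  The sketch below is a
minimal illustration only, assuming the mean square error comes from triplicate replicate
scatter and the mean square deviation from the location-mean differences between the
candidate solvent and Freon; the concentrations and degrees of freedom shown are
hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the equivalence F-test (hypothetical data, not the study's).
import numpy as np
from scipy import stats

# Rows are sampling locations, columns are triplicate results (mg/L).
freon     = np.array([[12.0, 14.0, 13.0], [55.0, 60.0, 58.0]])
candidate = np.array([[10.0, 11.0, 12.0], [48.0, 52.0, 50.0]])

ln_f, ln_c = np.log(freon), np.log(candidate)

# Mean square error: pooled replicate variance of the natural logs.
mse = np.mean([np.var(row, ddof=1) for row in np.vstack([ln_f, ln_c])])

# Mean square deviation of the candidate relative to Freon, from location means.
msd = np.mean((ln_c.mean(axis=1) - ln_f.mean(axis=1)) ** 2)

# F-test of the null hypothesis of no difference between candidate and Freon.
F = msd / mse
df_num = freon.shape[0]                                 # illustrative df only
df_den = 2 * freon.shape[0] * (freon.shape[1] - 1)
p_value = 1.0 - stats.f.cdf(F, df_num, df_den)
print(f"F = {F:.2f}, p = {p_value:.3f}")
```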

-------
  EQUIVALENT OIL AND GREASE CONCENTRATIONS WITH METHOD 413.1?

     •  No candidate solvent was equivalent to Freon® in O&G concentrations for refinery or
        production effluents.  Perchloroethylene and cyclohexane were equivalent to Freon® for
        marketing terminal effluents; however, this may be an artifact of the many NDs.

-------

 PROPORTIONAL OIL AND GREASE CONCENTRATIONS WITH METHOD 413.1?

Data Evaluation

     •  Data variability increased with concentration.

     •  Natural logs of O&G concentrations by each candidate solvent were regressed against
        natural logs of O&G concentrations using Freon®.

     •  Coefficients of determination (r2) were 0.8 and above.

     •  A regressed slope of 1 indicated solvent behavior proportional to Freon®.  If the regressed
        slope was not statistically significantly different from 1, the y-intercept of the regression
        line was identified as the natural log of the correction factor.

     •  95% confidence intervals were estimated for calculated correction factors (a sketch of
        this calculation follows).
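The slope test and correction factor calculation in the last two bullets can be sketched as
follows, assuming paired location-average O&G results for one candidate solvent and Freon;
the values are hypothetical placeholders and the t-based intervals are illustrative, not the
study's actual computation.

```python
# Minimal sketch of the ln-ln proportionality regression (hypothetical data).
import numpy as np
from scipy import stats

freon     = np.array([20.0, 45.0, 80.0, 150.0, 300.0, 600.0])   # mg/L by Freon
candidate = np.array([14.0, 30.0, 55.0, 100.0, 210.0, 400.0])   # mg/L by candidate

x, y = np.log(freon), np.log(candidate)
res = stats.linregress(x, y)
print(f"slope = {res.slope:.3f} +/- {res.stderr:.3f}, r^2 = {res.rvalue**2:.3f}")

# If the slope is not significantly different from 1, the y-intercept is ln(k),
# and exponentiating it (and its confidence limits) gives the correction factor.
n = len(x)
t_crit = stats.t.ppf(0.975, n - 2)
if abs(res.slope - 1.0) < t_crit * res.stderr:
    ln_k, se = res.intercept, res.intercept_stderr
    lo, hi = np.exp(ln_k - t_crit * se), np.exp(ln_k + t_crit * se)
    print(f"k = {np.exp(ln_k):.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```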

-------
 PROPORTIONAL OIL AND GREASE CONCENTRATIONS WITH METHOD 413.1?

     •  All three candidate solvents were proportional to Freon® within each of the three
        individual petroleum industry sectors.  Exception: cyclohexane with marketing terminal
        effluents.

     •  Only hexane and perchloroethylene were proportional to Freon® for pooled data.

     •  The confidence intervals for the estimated correction factors for the candidate solvents
        were large.

-------
                     TABLE C-6. CORRECTION FACTORS

     Solvent     Estimate of k     95% Confidence Interval
     HEX            0.662             (0.352, 1.247)
     PCE            0.737             (0.519, 1.046)

These correction factor estimates and confidence intervals were obtained by an
exponential transformation of the corresponding estimates and confidence intervals for
ln(k).  For completeness, the estimates, standard errors, and confidence intervals for
ln(k) are provided below.  The confidence intervals for ln(k) are based on the
Student's t distribution with 26 degrees of freedom, and the appropriate critical value is
2.056.

     Solvent     Estimate of ln(k)     Standard Error     95% Confidence Interval
     HEX             -0.412                0.308              (-1.045, 0.221)
     PCE             -0.305                0.171              (-0.656, 0.045)

FRE = Freon-113®; HEX = n-Hexane; CYC = Cyclohexane; PCE = Perchloroethylene
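As a check, the first table can be reproduced (to rounding) from the ln(k) estimates in the
second table by applying the stated t-based interval and exponentiating:

```python
# Reproduces the Table C-6 correction factors from the ln(k) estimates above.
import math

t_crit = 2.056   # Student's t, 26 degrees of freedom, 95% two-sided
for solvent, ln_k, se in [("HEX", -0.412, 0.308), ("PCE", -0.305, 0.171)]:
    lo, hi = ln_k - t_crit * se, ln_k + t_crit * se
    print(f"{solvent}: k = {math.exp(ln_k):.3f}, "
          f"95% CI ({math.exp(lo):.3f}, {math.exp(hi):.3f})")
# Prints k = 0.662, CI (0.352, 1.248) and k = 0.737, CI (0.519, 1.048),
# matching the tabulated values to rounding.
```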
                                          54

-------
          TABLE D-1.  COMPONENT LOSS RELATIVE TO FREON-113®

     Component   RT (min)   B.P., °C   Loss CYC   Loss HEX   Loss PCE
     n-C10          7.98       174        72%        71%        57%
     n-C11         11.79       196        70%        70%        44%
     n-C12         14.06       216        65%        65%        36%
     n-C13         15.83       235        61%        61%        33%
     n-C14         17.38       254        55%        55%        31%
     n-C16         20.06        -         41%        43%        24%
     n-C18         22.39        -         29%        32%        24%
     n-C20         24.51        -         21%        25%        21%
     n-C22         26.46        -         19%        23%        23%
     n-C24         28.22        -         16%        20%        20%
     AVG                                  45%        46%        31%
OBSERVATIONS

   1.  These data show that cyclohexane and n-hexane residues experienced severe
       losses relative to Freon-113® residues.  The bulk of these analyte losses were
       observed in components with boiling points less than 254°C (n-C14).
   2.  Perchloroethylene residues experienced severe relative losses of components
       with boiling points below 174°C (n-C10).
   3.  These data suggest that losses of "light end" crude oil components caused by
       evaporation temperature modifications made to Method 413.1 may be related to
       low-temperature azeotropic conditions that apparently exist between the low
       boiling oil components and the non-halogenated solvents.
   4.  Perchloroethylene residues displayed moderate to low relative losses overall
       despite the 130°C evaporation temperature.  This finding appears consistent with
       the QC data presented in Table A-4 (Appendix A).
   5.  Further study would be required to verify these findings.
                                     D-2
                                  55

-------
                  WHAT WE LEARNED - METHOD 413.1

    •  For production and refinery effluents, none of the candidate solvents tested was
       equivalent to Freon® in EPA Method 413.1.  Findings for marketing terminal effluents
       may be influenced by the large number of NDs.

    •  With the exception of cyclohexane on marketing terminal effluent samples, all three
       candidate solvents were proportional to Freon® within each of the three individual
       petroleum industry sectors.  For pooled data, hexane and perchloroethylene were
       proportional to Freon®.

    •  Confidence intervals for the estimated correction factors were large.

    •  Perchloroethylene was the candidate solvent which appeared closest to proportional
       behavior relative to Freon® for pooled data by EPA Method 413.1.

-------
Even though statistical analysis demonstrated proportional relationships between each of
the candidate solvents and Freon-113, the application of correction factors based on this
study is not recommended.

Statistical analysis showed large uncertainties associated with correction factors.  The 95%
confidence intervals for the correction factors were found to be excessively broad for use.

-------
                          ACKNOWLEDGEMENTS
The following people are recognized for their contributions of time and expertise during
this study and in the preparation of this report:
                           API STAFF CONTACTS

                             Alexis Steen, HESD
                              Roger Claff, HESD

             MEMBERS OF THE OIL AND GREASE WORK GROUP

                           Kris Bansal, Conoco, Inc.
                           Stan Curtice, Texaco, Inc.
             Robert R. Goodrich, Exxon Research and Engineering Co.
                         Larry Henry, Chevron USA, Inc.
                       Zara Khatib, Shell Development Co.
              David LeBlanc, Texaco Exploration and Production, Inc.
             Francis C. McElroy, Exxon Research and Engineering Co.
              David W. Pierce, Chevron Research and Technology Co.
                           James P. Ray, Shell Oil Co.
            Harold A. Rhodes, Texaco Research and Development Co.
                Joseph P. Smith, Exxon Production Research Co.
                   George H. Stanko, Shell Development Co.
             Allen Verstuyft, Chevron Research and Technology Co.
The authors would like to thank the sample coordinators for their assistance in the
completion of this work. These individuals are: D. Pierce, marketing terminals; H.
Rhodes, refineries; and S. Curtice, exploration & production.  A special thanks is
extended to P. Smith, Shell Development Co., for her assistance in reviewing
statistical methods.
                                         58

-------
                                      MR.  TELLIARD:   Our  next speaker  is Dave
Clampitt from the Uniform and Textile Services Association.  Dave's group looked at the
application of the solvents as it relates to their particular wastewater and, in particular, the
impact of surfactants and detergents as it relates to the method and the method application.
       IMPACT OF DETERGENTS ON DETERMINATION OF OIL AND GREASE
                   BY GRAVIMETRIC AND INFRA-RED ANALYSIS
                                      MR. CLAMPITT:  Good afternoon.  I  am Dave
Clampitt, Director of Environmental Regulatory Services for the Uniform and Textile Service
Association or UTSA.  My presentation today is entitled "Impacts  of Detergents on the
Determination of Oil and Grease by the Gravimetric and Infrared Analysis".

      UTSA  is a national trade association representing the industrial laundry industry.
Most industrial launderers are small, family-owned businesses. Several are large, publicly
traded organizations. These companies rent and launder uniforms, coveralls, jackets, shop
towels, roll towels, floor mats,  bed and table linens, kitchen dish towels, health  care,  and
other items.

      The product mix among companies and individual facilities is highly diversified.
Industrial  launderers service a wide range of industrial  and commercial industries such as
auto repair shops, gas stations,  printing facilities, machine shops, special trade contractors,
restaurants, hotels, and agricultural services, to name a few.

      In the course of use, these rented textile products become soiled with a wide range
of materials, including grease, solvents, oils, inks, food,  blood, medical products,  and other
chemical substances.

      A textile supplier picks up these soiled textiles in their fleet of company-owned or
operated vehicles, furnishes clean replacements, transports these soiled textiles to a central
plant for laundering or dry cleaning, and provides replacements for  worn out textiles.

      After arrival at the laundry, the textiles are put through various laundering and
maintenance processes.  The wastewater from the laundry process is discharged to publicly
owned treatment works or POTWs.

      Because of the wide variety  of customers served, contaminants and product mixes,
no two  laundry effluents  are alike.  Even within each  laundry facility, the wastewater
characteristics change daily, hourly, and even by the minute, depending upon the types of
textiles that are being processed at that time.
                                        59

-------
      UTSA believes that the oil and grease content in industrial laundry wastewater, as
measured by the current required method of USEPA 413.1, yields biased high results. Since
the solvent, Freon,  cannot  distinguish  between  oil  and  grease and  other extractable
materials, it is believed that a significant amount of the oil and grease now being measured
in these discharges actually consists of Freon-soluble materials from chemical detergents and
other sources.

      If so,  UTSA contends that the contribution  of these  other materials should not be
regulated as oil and grease.  Although Freon will  no longer be available after 1994, it is
unlikely that any of the EPA-proposed replacement solvents will alter this situation.

      The detection  of detergents by the oil and grease analysis was demonstrated by one
of our members in California. This company basically took a 600-pound washing machine,
loaded it with  the water level required to process  textiles and detergents for those loads.
They ran the machine  for basically  30 seconds, and then  took samples out of the wash
wheel.

      These are the analyses for those samples in parts per million of oil and grease (O&G)
with just water and detergents. As you can see, by whatever  method they used, they ranged
from 30 ppm all the  way up to  926  ppm for just water and detergents.

      With  these  results, UTSA met with  EPA on September 13, 1993 to discuss the
problem with the current oil and grease  test methods.  As  a result of this meeting, UTSA
volunteered  to conduct  some testing  to  evaluate the  impact of detergents on the
determination of oil and grease  by gravimetric and infrared analysis.

      In addition, industrial laundry wastewater  was  sampled and tested, using Freon,
hexane, and cyclohexane to assist EPA in their determination  of a replacement solvent for
the 413.1 test method.

      A total of 12  laundry effluent samples from UTSA members  were shipped to ETS
Analytical Services of Roanoke, Virginia. The  12  plants were from different areas of the
country, which included different product mixes, customer bases, and wastewater treatment
processes.

      The plants were located in the following States:  2 samples came from California, 3
from West Virginia, and one each from Illinois, Texas, Ohio, Washington State, Louisiana,
North Carolina, and Massachusetts.

      Some of the laundries processed  high volumes of wipers used in the printing and
auto industries which contain high levels of toxics such as toluene and xylenes, while others
mainly process uniforms which are considered light soiled  products.
                                        60

-------
      Wastewater treatment at these plants ranged from simple techniques such as pH
control and shaker screens all the way up to dissolved air floatation systems.  Each sample
was extracted in triplicate with the three different solvents, Freon, hexane, and cyclohexane.
Each extract was split into two equal fractions and one treated with a silica gel clean-up
step.

      In all, a total of 216 samples were measured for oil and grease content using EPA
Method 413.1.  Six samples that were extracted with Freon were also measured using the
IR analysis.

      In addition, 7 chemical suppliers were asked to send 3 different detergent samples
to the lab, but, actually, one sent 4 samples.  Therefore, we had 22 products that  went
through the test.

      Of the 22 products, 20 were powders, 2 were liquids.  Each detergent was sampled
and analyzed with the 3  solvents and with and without the silica gel, according to Method
413.1, the gravimetrical analysis. Five of the detergents that were extracted with Freon  were
also measured with the IR analysis.

      The following results that I am going to cover are preliminary.  The analysis was just
basically completed last week, and everything was faxed to me. The final report should be
done in the next couple of weeks.

      The 12 wastewater samples analyzed using the 413.1 Method with Freon for O&G
basically ranged from  20 ppm all the way up to 792 with an average of 270 ppm.

      The 12 wastewater samples that were analyzed with the modification of a silica gel
with the Freon extraction ranged from 5 to 425 ppm with an average of 112 ppm which is
more than a 50 percent reduction from the 270  ppm with the normal test procedure.

      The wastewater results from the analysis of the 413.1 Method modified with n-
hexane for O&G ranged  from 12 to 865 ppm, with an average of 222; and the samples for
TPH with the silica gel clean-up ranged from 4 to 464 with  100 ppm average, again, a
greater than  50 percent reduction.

      The wastewater results with the 413.1 Method with cyclohexane for O&G ranged from
18 to 588 ppm with an average of 228 ppm and, when modified with the silica gel clean-up
step, from 6 to 389 ppm with an average of 94 ppm, again, as the other test results
showed, a greater than 50 percent reduction in numbers.
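The greater-than-50-percent reductions quoted for the three solvents follow directly from
the reported averages; a quick check of that arithmetic:

```python
# Percent reduction in average O&G (mg/L) with the silica gel clean-up step,
# using only the averages quoted above.
pairs = {"Freon": (270, 112), "n-Hexane": (222, 100), "Cyclohexane": (228, 94)}
for solvent, (standard, with_silica_gel) in pairs.items():
    reduction = 100.0 * (standard - with_silica_gel) / standard
    print(f"{solvent}: {reduction:.0f}% reduction")   # 59%, 55%, 59%
```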

      Using the IR techniques, 3 of the total oil and grease samples and 3 of the TPH  were
analyzed. You can see from  the slide that the numbers greatly reduced with the silica gel
clean-up step.
                                       61

-------
      This study concludes that industrial laundry wastewater analyzed by USEPA Method
413.1 and 418.1  modified with a silica gel clean-up step consistently produced lower
numbers than the standard 413.1 and 418.1 methods.

      What was the clean-up step removing?  We believe some of the materials removed
were detergents. So, let us go over what our detergent analysis determined.

      The results varied across the board for mg/kg of product. For Freon, you can see it
went from 14 all the way up  to 419,000 mg/kg.  TPH was 7 to 101,000 mg/kg.  Hexane,
under oil and grease was 28 to over 189,000 mg/kg. TPH was less than 7 to 14,894 mg/kg.
Cyclohexane was 32 to over 240,000 mg/kg, and cyclohexane with TPH was less than 7 to
over 75,000 mg/kg.

      Five detergent samples were analyzed using infrared analysis techniques.  For total
oil and grease analysis using Method 418.1 modified with the use of silica gel, the numbers
were as the chart shows,  ranging from less than  17 mg/kg all the way up to 253,703 mg/kg.
For TPH, all five samples were under the limit of detection.

      The limit of detection was high on the detergent samples due to the foaming of the
detergents during the analysis.

      Using the chemical supplier's recommended amount of 2 gallons of water per pound
of textiles laundered, it was calculated for two detergents how much they would contribute
to the analysis using the 413.1 method for total oil and grease and TPH.
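The form of that calculation is a simple dilution: the detergent's extractable content times
the detergent dose, divided by the wash water volume at 2 gallons per pound of textiles.
The sketch below only illustrates the arithmetic; the dose and extractable content shown
are hypothetical placeholders, not the values behind the two samples reported next.

```python
# Illustrative wash-water contribution calculation (hypothetical inputs).
GAL_TO_L = 3.785
LB_TO_KG = 0.4536

extractable_mg_per_kg = 150_000          # hypothetical detergent O&G result (mg/kg)
detergent_lb_per_100_lb_textiles = 1.0   # hypothetical detergent dose

water_L_per_lb_textiles = 2 * GAL_TO_L   # supplier's 2 gallons per pound of textiles
detergent_kg_per_lb_textiles = detergent_lb_per_100_lb_textiles / 100 * LB_TO_KG

contribution_mg_per_L = (extractable_mg_per_kg * detergent_kg_per_lb_textiles
                         / water_L_per_lb_textiles)
print(f"{contribution_mg_per_L:.0f} mg/L apparent O&G from the detergent alone")
```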

      For the first sample, it was calculated that the Freon extraction yielded 226 ppm;
for TPH, it dropped to 127 ppm.  The n-hexane extract yielded 168 ppm for total oil and
grease and, for TPH, less than 10 ppm, below the detection limit.  Cyclohexane
extraction yielded 309 ppm for oil and grease, and the TPH was 40 ppm.

      For the second sample  that we did, again, everything with the silica gel clean-up step
was lower.  Whichever solvent we used for extraction, the results always produced a lower
number with the silica gel clean-up step.

      Detergents that yielded high numbers using the standard 413.1 method were those
having glycols, alcohols, and various solvents.  Since these materials are polar, the silica gel
was able to remove them. Detergents that yielded low numbers were caustics and silicates.

      Therefore, the addition of a silica gel clean-up step to the 413.1 method  may reduce
the impact of detergents during the analysis. As a concern to the industry, the contribution
of detergents  in the wastewater can cause the  results  of the analysis to exceed regulatory
limits.
                                       62

-------
      Talking about regulatory limits, another part of our study addressed the compliance
and enforcement  issue that deals directly with the oil and grease measurement.   Some
municipalities are requiring the use of infrared analysis 413.2 for the analysis of total oil and
grease and method 418.1 for petroleum hydrocarbons.  This has been required even though
these  methods are not approved  by EPA for use in the NPDES permit program.

      Arguments have been made by local and State regulatory agencies that, since EPA has
published the 418.1 method in the EPA manual Methods for Chemical Analysis of Water
and Wastes, it is validated and therefore does not require approval.  It is assumed that it is
not necessary to follow the promulgated approval procedures for alternative methods set out
in the Part 136 regulations.

      This raises several questions by our members.  Why is it that  regulators do not
comply  with  their own regulations? Why would EPA documents such as the manual  of
methods specifically point out that method 418.1 is not approved for use in NPDES
permits but is still being used?  Finally, why would the Part 136 regulations go to the trouble
to lay out approval procedures if they are not meant to be followed?

      The industrial laundry industry has been arguing that method 418.1 is inappropriate,
not only because it is not approved but also that it consistently gives much higher numbers.
This puts many facilities  in violation and in jeopardy of enforcement  actions, including
substantial penalties without any real basis.

      The value found in most local ordinances for total oil and grease is 100 ppm.  When
this historical value was chosen, the method of analysis was hexane extractable compounds
determined by gravimetric analysis.  The value in ordinances has been  slow to change to
correspond with a change to Freon as a solvent and the ability to  use the IR analysis.

      Some  municipalities have chosen to change their ordinances to limit petroleum
hydrocarbons instead of total oil and grease, usually at the same 100 ppm. Some, however,
have chosen  25 ppm, no doubt  to protect the environment, but still without a scientific
basis.

      Also,  many municipalities have specified the use of infrared 418.1 method in their
issued permits.  Some cities have promulgated higher oil and grease limits. Most of these
limits  are still arbitrary. To our knowledge, only a few  municipalities have actually studied
the impact of total oil and grease or TPH on POTWs based on accepted criteria such as
pass-through and sludge impacts, et cetera.

      Unfortunately, the value,  having been around for so long, has been assumed to be
based on some actual  authority.

      In our study, the Freon extracts of 12 samples were analyzed using the 418.1 infrared
method  modified without the use of silica gel for total oil and grease and for 418.1 method


                                       63

-------
petroleum hydrocarbons.  The data suggest that the total oil and grease values by IR analysis
are 45 percent higher than those by the approved 413.1 gravimetric method.

      The petroleum hydrocarbon fraction was compared in the same fashion. The analysis
shows an increase of over 120 percent over the gravimetric method. We realize that there
are many factors that may account for this difference, including volatilization of lighter
fractions during solvent evaporation, the presence of different materials... industrial laundry
wastewater is one of  the most variable imaginable... and the contribution of detergent
products.

      The oil and grease limit in municipal  ordinances and permits is the major source of
enforcement problems for industrial laundries and many municipalities. In one northeastern
city, industrial user violations for total  oil and grease limits account for  over 50 percent of
the total violations.  Of course, USEPA, the State, and the environmental groups are after
the city to enforce their limits or face Federal enforcement action against them and the
industry.

      The sad thing is that there actually is no evidence of environmental impact  on the
POTW or the receiving waters from the oil and grease normally found  in POTW influent.

      UTSA hopes that the results of this study will assist EPA in their decision on a
replacement for Freon in the 413.1 method.  However, we believe that this study shows
that other materials, such as detergents, interfere with the test method, which results
in biased high numbers.

      It is also  our hope that, with a change in solvent, EPA will modify the approved Part
136 oil  and  grease method  to  include the silica  gel  option  to measure  petroleum
hydrocarbons.

      Resolving the analytical problem will not completely solve the problems associated
with these limits, but it will go far toward alleviating the excessive number of oil and grease
violations that pose no real environmental threat.

      As I stated earlier, these are only preliminary test results.  Anyone who would  like
to  get a copy of this final report with all the  statistical  information, just give me a business
card and put O&G study on the back.

      Also, you can write to me  at the Uniform  and  Textile Service Association, 1730 M
Street, N.W.,  Suite 610, Washington, D.C. 20036.  Thank you. I can take any questions.
                                        64

-------
                       QUESTION AND ANSWER SESSION
                                     MR. PARANJAPE: Can you go back to your slides
in the beginning? I got confused over the fact that you have taken averages anywhere from
20 to 900 or something, and this does not fit some of the other.

                                     MR. CLAMPITT:  Well, I am not a lab person.  I
am a compliance guy, so I averaged them just for my own numbers to compare.

                                     MR. PARANJAPE: Because, you see, normally, if
the analysis is performed, let us say, on the same samples collected four or five times during
the day,  I can understand  taking an average, but if you have performed  analysis on a
sample, let us say, the first of the month and the last day of the month, taking an average
for the industrial surcharge  purposes is also okay, but for scientific purposes, taking an
average of values ranging anywhere from below 20 to 800, somehow or other, I fail to
understand how one can take that average.  Can you explain that?

                                     MR. TELLIARD:   Excuse me. Can you identify
yourself and your organization, please?

                                     MR. PARANJAPE: Oh, yes.  I am Bhal Paranjape
from the City of Solon, Ohio,  and we have an industry in town, a laundry industry with
whom we do not have the problem.

                                     MR. CLAMPITT: I just averaged  them for myself.
I was not sure if it was correct or not, but if you get a copy of the final report, you will see
each sample:  whatever the number was under the normal 413.1 method, when you used the
clean-up step, that same sample showed a lower result.  So, I just averaged it just for this
presentation, but the report will have each one. I was not going to put 12 samples on a
slide, because I figured people in the back would not be able to see it.  But,  as I said, I am
not a lab statistician.

                                     MR. TELLIARD:  Any other questions for Dave?

                                     MR. BOURBON: I am John Bourbon from USEPA
Region II. Dave, I agree with your study here,  I think, the approach. I have  been exposed
to some things from other industries like the dye industry claiming the same type of thing,
that there are interferences.  In a sense, I guess, you could look at it that way.

      I guess  I am just saying I think  everybody  realizes that oil and grease is an
operationally defined  parameter.  It is sort of like a catch-all for a general rough indicator
of the effluent, and I am just not so sure how much... maybe Bill can discuss more, I do not
know... how much the EPA  really can do about all of these so-called interferences.

                                       65

-------
      You know, you are talking about compounds like these detergents or dyes or things
like that. They themselves could be causing problems when they are being dumped into
the ambient waters. So, I am just not so sure what...

                                      MR. CLAMPITT:   There are some cities,  like
Seattle, which have actually studied the impact of total oil and grease and petroleum oil and
grease.  For total oil and grease, they realize that it did not impact their system at all, and
they waived the limit.  For TPH, they have a limit of 100 ppm.

      All  we are trying to say with this study  is that we would rather you analyze our
wastewater for TPH and not for total oil and grease, because we do not think a lot of the
cities have actually done analysis to determine total  oil and grease on their systems.

                                      MR. TELLIARD:  Any other questions?
(No response.)

                                      MR. TELLIARD: There is a study underway by our
division to look at the industrial laundry group.  We are looking at  various measurement
techniques, including silica gel and actually running detergents as part of the test, and so
forth.  How this is all going to shake out at the end only God knows at this point, but we
are trying to make some of these considerations.

      As was referred to,  the method by itself is solvent dependent, you know, oil and
grease is that which comes out in the solvent you use.  As we all know, that could be ball
bearings, that could be a lot of other things, but whatever is left on the balance when you
are done is, by definition,  oil and grease.

      That does not make it right.  That does not make it scientific.  It is just the way it is,
and we are trying to address some of those issues as we get into some of these studies.

      In this  particular one where  surfactants and detergents are a  very large  part of the
effluent, it is something we are going to look at. That is all I can tell you.

      Somebody in the back?

                                      MR. LEVY:   Nathan  Levy  with A&E  Testing in
Baton Rouge.  Hi, Bill. How are you doing?  Happy 17th.  Hope you have 17 more.

                                      MR. TELLIARD:  Thank you.

                                      MR. LEVY:  I have got three questions for you just
to show you  I was  paying attention. The first one is your reference to MS and MSDs.
Typically,  that terminology has been used for organic analytes,  and you have used  it  now
in reference to an inorganic analyte.  Is that a trend that we should be expecting in the future?

                                        66

-------
                                     MR. TELLIARD: Yes.

                                     MR. LEVY:  Good  answer.  All right.

                                     MR. TELLIARD: Even if it was not, it is now.

                                     MR. LEVY:  You also have taken the opportunity
to use, if I  could use the expression, SW846 syntax in naming the method.  Is that a trend
for the Office of Water?

                                     MR. TELLIARD: No. Basically, the method format
is that which the Agency has agreed on through their EMMC which is the Environmental
Monitoring and Management Council. We have adopted a new format that, hopefully, all
the methods will eventually end up in or something  that looks a lot like it.

                                     MR. LEVY: I was hoping you would say that. That
is good.

      My third question is that, with hexane seeming to be the solvent of choice but the
recoveries seeming to be quite a bit lower than with Freon, do you think there will be an
effort in the regulatory agency to reduce the NPDES limits for this analyte to correspond
with the lower recoveries from hexane?

                                     MR. TELLIARD: I do not know yet.  We are not
there  yet.  We are  still playing science.  We will get to policy a little bit down the road
here.

      That is certainly something we are going to solicit comment on when we propose the
method, as one of the implementation issues, and I am sure we will hear a few words,
probably.  Thank you.

      Anyone else for either one of these gentlemen?  (No response.)

                                     MR. TELLIARD: Well, you are a quiet crowd, and
I would like to thank you for your attention.  I would like to thank this afternoon's first
batch  of speakers, and we are going to take a ten-minute break to get a cup of coffee and
get back in here.  Thank you.
(A brief recess was taken.)
                                       67

-------
(Blank Page)
    68

-------
         IMPACT OF DETERGENTS ON THE
      DETERMINATION OF OIL AND GREASE
        BY GRAVIMETRIC AND INFRA-RED
                     ANALYSIS

     By:  David L. Clampitt, Uniform & Textile Service Association
          Robert B. Schaffer, Coyne Textile Services
          David F. Tompkins, ETS Analytical Services

-------
                     INTRODUCTION

          Industrial Laundry Industry
          Product Mix Highly Diversified
          Wide Range of Customers
          Wastewater Characteristics Vary

-------
       REASON FOR STUDY
Method 413.1 Yields Biased Results
Freon Soluble Materials - i.e., Detergents
Replacement Solvent Will Not Alter Problem

-------
                       CALIFORNIA LAUNDRY STUDY

        SOAP     SOAP VOLUME   WATER VOLUME   418 IR PPM   503E PPM
        Liquid     0.5 gal.      144 gal.         510          40
        Powder     12 lbs.       144 gal.         620         170
        Powder     12 lbs.       250 gal.         460          30

        SOAP     SOAP VOLUME   WATER VOLUME   503A PPM     503E PPM
        Liquid     1 gal.        125 gal.         926         642
        Powder     12 lbs.       144 gal.         793         423
        Powder     12 lbs.       250 gal.         461          89

-------
            SCOPE OF STUDY

12 Laundry Effluent Samples
 - Freon
 - n-Hexane
 - Cyclohexane
 - Silica Gel

7 Chemical Suppliers
 - 3 Detergents
 - 3 Solvents
 - Silica Gel

-------
               WASTEWATER RESULTS
                413.1 METHOD/FREON
                      (mg/L)

                 MEAN      SD     RMSD
      LOW          20     2.1     10.1
      MEDIAN      167     7.8      4.7
      HIGH        792    18.1      2.3
      AVERAGE     270    11.5      4.7

-------
               WASTEWATER RESULTS
          413.1 METHOD WITH SILICA GEL
                      FREON
                      (mg/L)

                 MEAN      SD     RMSD
      LOW           5     0.9     17.7
      MEDIAN       46     7.8     16.6
      HIGH        425    31.3      7.4
      AVERAGE     112    10.0     15.8

-------
               WASTEWATER RESULTS
              413.1 METHOD/N-HEXANE
                      (mg/L)

                 MEAN      SD     RMSD
      LOW          12     1.7     13.8
      MEDIAN      143     9.7      6.8
      HIGH        865    38.2      4.4
      AVERAGE     222     8.6      5.6

-------
               WASTEWATER RESULTS
          413.1 METHOD WITH SILICA GEL
                     N-HEXANE
                      (mg/L)

                 MEAN      SD     RMSD
      LOW           4     1.6     40.8
      MEDIAN       35     4.1     11.6
      HIGH        464    19.7      4.2
      AVERAGE     100     9.0     15.3

-------
               WASTEWATER RESULTS
            413.1 METHOD/CYCLOHEXANE
                      (mg/L)

                 MEAN      SD     RMSD
      LOW          18     2.4     13.8
      MEDIAN      154     4.3      2.8
      HIGH        588     9.7      1.7
      AVERAGE     228    17.2      7.6

-------
               WASTEWATER RESULTS
          413.1 METHOD WITH SILICA GEL
                   CYCLOHEXANE
                      (mg/L)

                 MEAN      SD     RMSD
      LOW           6     1.7     30.0
      MEDIAN       51     3.7      7.3
      HIGH        389    28.3      7.3
      AVERAGE      94     9.0     16.0

-------
               WASTEWATER RESULTS
              INFRARED SPECTROSCOPY
                      (mg/L)

      SAMPLE #    TEST     MEAN       SD    RMSD
      152149      O&G       517     27.5     5.3
      152150      O&G     1,520    123.6     8.1
      152151      O&G       429      9.0     2.1
      152149      TPH        64      3.8     5.9
      152150      TPH     1,283     77.6     6.0
      152151      TPH       300      8.8     2.1

-------
                 DETERGENT RESULTS
                    413.1 METHOD
                       (mg/kg)

      SOLVENT       TEST    RANGE            AVERAGE
      Freon         O&G     14 - 429,940     102,846
      Freon         TPH     7 - 101,639       20,417
      Hexane        O&G     28 - 189,793      54,030
      Hexane        TPH     <7 - 14,894        4,110
      Cyclohexane   O&G     32 - 240,147      71,584
      Cyclohexane   TPH     <7 - 75,248       16,215

-------
                 DETERGENT RESULTS
                 INFRARED ANALYSIS
                       (mg/kg)

      SAMPLE #    O&G FREON    TPH FREON
      152825        253,703      <9,683
      152826         20,158      <1,976
      152930            <17
      152931         16,288      <1,894
      152932         17,200      <2,000

-------
                DETERGENT CALCULATION
                       (mg/L)

      SAMPLE #    TEST    SOLVENT        CONTRIBUTION
      152462      O&G     Freon               226
                  TPH     Freon               127
                  O&G     Hexane              168
                  TPH     Hexane              <10
                  O&G     Cyclohexane         309
                  TPH     Cyclohexane          40

-------
                DETERGENT CALCULATION
                       (mg/L)

      SAMPLE #    TEST    SOLVENT        CONTRIBUTION
      152825      O&G     Freon               193
                  TPH     Freon                85
                  O&G     Hexane
                  TPH     Hexane
                  O&G     Cyclohexane         108
                  TPH     Cyclohexane          16

-------
              IMPACT OF DETERGENTS

The Addition Of A Silica Gel Clean-Up Step To
The 413.1 Method May Reduce Impact Of Detergents

Contribution Of Detergents Can Exceed
Regulatory Limits

-------
            COMPLIANCE & ENFORCEMENT

         Some Municipalities Require I/R
         Test Is "Validated"
         Method 418.1 Is Inappropriate
         Local Limits Of 100 mg/L

-------
                 TOTAL O&G ANALYSIS
                 GRAVIMETRIC VS. I/R
                       (mg/L)

                SAMPLE #     GRAV.     I/R    IR/GRAV.
      LOW       153672         43       59      1.37
      MEDIAN    152149-1      384      485      1.26
      HIGH      152150-3      813     1640      2.02
                                   AVERAGE      1.45

-------
                    TPH ANALYSIS
                 GRAVIMETRIC VS. I/R
                       (mg/L)

                SAMPLE #     GRAV.     I/R    IR/GRAV.
      LOW       153672         18       33      1.83
      MEDIAN    152151-1      149      288      1.93
      HIGH      152150-3      469     1380      2.94
                                   AVERAGE      2.23
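The IR/GRAV. ratios in these two tables can be recomputed directly from the paired
gravimetric and infrared results; the averages of 1.45 and 2.23 on the slides presumably
cover all 12 samples rather than just the three shown.

```python
# Recomputes the IR/GRAV. ratios from the paired results (mg/L) tabulated above.
total_og = {"153672": (43, 59), "152149-1": (384, 485), "152150-3": (813, 1640)}
tph      = {"153672": (18, 33), "152151-1": (149, 288), "152150-3": (469, 1380)}

for label, data in [("Total O&G", total_og), ("TPH", tph)]:
    for sample, (grav, ir) in data.items():
        print(f"{label} {sample}: IR/GRAV = {ir / grav:.2f}")
# Total O&G: 1.37, 1.26, 2.02; TPH: 1.83, 1.93, 2.94
```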

-------
                       CONCLUSION

          Assist EPA
          Detergents Interfere With Test Method
          Modify The Approved Part 136 Gravimetric
          Method To Include The Silica Gel Option

-------
      FOR COPY OF FINAL REPORT
Write To:
     Uniform & Textile Service Association
     1730 M Street, N.W.
     Suite 610
     Washington, D.C. 20036

     Attn: David Clampitt

-------
                                     MR. TELLIARD: We would like to get going with
our second session this afternoon which is going to focus primarily on the application of the
solid phase extraction technique as it relates to the oil and grease analysis and the infrared
application.

      Our first speaker is... this is a replay  of last year, although he says he has new data,
though  I am not a  believer... Craig Markell from 3M who  is here to talk about the
application of the Empore disks.
                        SOLID PHASE EXTRACTION DISKS
                     A SOLUTION FOR THE FREON PROBLEM
                                     MR. MARKELL: Thanks, Bill. It is a great pleasure
to be here, and I do have new data, believe me.  It is kind of nice to be  here once again
and speak in this airplane hangar.

      You know, last year, if you were here... I know some of you were here... the first
slide came up, and it was backwards. Well, this year, before I even got in the door, I had
to arm wrestle a couple of people who said we want your slides and we want them now.
We want to put them in the projector.  Then, after I went through that humiliation, they
actually went ahead and reviewed them for me. So, I think the slides are going to be in the
proper order.

      True to 3M traditions, we have a  multi-media presentation  today.  We have got
overheads which are  manned by  Professor Wisted down here, and slides.  So, we will see
how it works out.

      What I wanted to do is start out by recapping a little bit of what we did in Phase I
of the oil  and  grease study, then tell you how we used that information to go on  and
construct a new disk for Phase II which is specific for oil and grease analysis.  Even though
Bill refers to this as the cutting edge of science, it turns out that for solid phase extraction,
it is the cutting edge, because you have got to extract just what you want to, nothing more,
nothing  less.

      So, let us see how this works.  In Phase I, we started out knowing absolutely nothing
about oil and grease analysis. We knew all about pesticide analysis, but oil and grease was
something new to us.

      We started out by  looking at a number of different options to see how well we could
extract these extractable materials from water samples.  What we  looked at were three
different parameters that  we varied.


                                       91

-------
      One was sorbents.  We looked  at a couple  of different sorbents  to extract the
materials out of the water.   We looked at C18 silica, and  we also looked at styrene-
divinylbenzene which is able to extract more polar materials.

      We  looked at  a  couple of different disk  sizes to see what  size was  the most
appropriate for these matrices.  Finally,  we looked at six different elution  solvents.  We
looked at things as non-polar as hexane and Freon, then we went up to methylene chloride
and methyl t-butyl ether, and it turns out that all these things make a difference.

      In fact, when we were all done, we could give you any answer you  wanted. That
will be good for the industrial laundry people.  Just kidding.

      Now, for the Phase I recipe, this was the best one we found. We took a 90 mm C18
silica disk, washed it and conditioned it. We extracted the sample through the disk and
eluted  with hexane.  That  was the  best elution solvent we found to get an answer
somewhere in  the ball park of Freon. We dried it, filtered it to get residual sodium  sulfate
out of there, and then evaporated it and weighed  the residue. It is a very easy  recipe.  It
worked pretty  well.

      Now, I  think you are all familiar with disks.   I will just show you what they are.
Here is a 47 on the left and a 90 mm on the right.  The sorbent you know of in solid phase
extraction tubes, but here it is in disk form to give  you more of a filtration type of process.
It is very efficient.  It is the newest technology in solid phase extraction.

      This is a typical scattergram of our earlier results from Phase I.  Let me explain how
the graph is constructed.  Bill, I hope I do not get you with this pointer.

      What we have plotted here are the disk results eluting with  methyl t-butyl ether
versus the 413.1 results.  This is a liquid-liquid extraction with Freon.

      Now, if you get perfect 1:1 correspondence of the results, you will get a straight line
which goes up with a slope of 1.  These are the actual results  from a number of matrices,
and what you are seeing is that, although there is some scatter  in the data, the results tend
to track the Freon result.  So, that was pretty encouraging.

      After we knew the answer, we went back to the data, the results of the Phase I study,
and looked at the hexane results only on the 90 mm disk, the recipe I showed you a minute
ago. What we looked at was we took the Freon result as the target. This is the liquid-liquid
extraction Freon result, and we said okay, how did these three techniques...  hexane liquid-
liquid, 90 mm C18, and the SPE tubes,  compare  in closeness to the target and in what
number of matrices were they closest?

      So, the  total number of samples is down here in the bottom in the parentheses, and
we are looking at the three techniques. So, for hexane here, it turns out that for 27 samples


                                        92

-------
that were analyzed, 6 of them were closer to the Freon result than any of the two other
techniques, and there were 6 ties with one of the other two techniques.

      We looked at the disks.  There were 10 closer to Freon than any other technique,
and, again, 6 results were tied with other techniques.  Finally, for the tubes, that result is
a bit biased, because there were only 20 samples done by that technique.

      The point is, clearly, we are on the right track with solid phase extraction.

      So, armed with that knowledge, we went on and drew some conclusions, and we
said okay, first of all, non-polar is good and polar is bad. We looked at the elution solvents.
It was true for that.  The methyl t-butyl ether and methylene chloride gave you much higher
numbers than if you eluted with something  like Freon or hexane.

      Also,  in the sorbents,  polar was bad.  The styrene-divinylbenzene can extract more
of the polar materials that you  really do not want in there with  your results. Again, the
number could be high using styrene-divinylbenzene.
      So, based on those first two points, we decided we wanted to be as non-polar as
possible in the sorbent, the disk matrix material, and also the elution solvent.  That was
what determined the way we went for Phase II.

      Finally, you have heard this before, nothing duplicates Freon. That is becoming a
fashionable statement to make at this meeting, and I am proud to say that we do not, either.

      So, we went to Phase II.  We were told by certain  people very close to me that we
wanted to  use hexane and  cyclohexane as the elution solvents.  Do not bother with
anything else. We were also told that we wanted to do 1-liter samples, so we did those,
and we designed a new disk which was more appropriate for this analysis.

      Here is what  we came up with.  We call it the  oil and grease disk.  Now, the
marketing people are not going to like this, but let's call  it the OG disk.

      47  and 90 mm, you  need both sizes, especially  depending on  if you are  doing
influent or effluents. Certainly, the 90 is good for the influents that are a little more chunky.
It is C18 silica in a non-polar fibril matrix.  Again, we wanted to keep the polarity as low as
possible.  With this disk, there are fewer plugging problems than we saw with the traditional
Empore disk.  Finally, we wanted to design it and price it  so  it was very cost-effective,
because if you have got a $30 or $40 test, you just do not want to pay $50 for a new disk.
No, it is not $50, not even close.  So, that is the OG disk.

      Also, as a part of the system,  we have got to have more than just a disk, because,
remember, you are dealing not only with  dissolved species in the water. You are dealing


                                        93

-------
with the chunks floating around, and when you have high levels of oil and grease, those
chunks have all the oil and grease agglomerated on them, and you have also got to extract
those, just like you do with liquid-liquid extraction.

      So, to extend the range of what you could do, we took some filter aid material and
stuck it on top of the disk. These are small glass beads, and now what you have is an even
higher surface area to sort of adsorb and absorb the free oil  and  grease in the system.  It
speeds the filtration, it helps your recoveries, and it certainly increases the capacity for free
oil and grease in your system.  So, that  is the system we came up with.

      Here are the  directions.  Number one, you assemble the disk.  You put some filter
aid on the disk after you have assembled it, about 1  cm of filter aid. Number 2, you wash
and condition the disk.  This is stuff you all know very well.  You  run  the sample through.
We found you could run it as quickly as you wanted, just like the traditional solid phase
disks. Finally, we eluted it, and we blew it down.  We did a filtration step, of course, to
get rid of sodium sulfate fines that  might be in your extract,  and then we weighed  the
residue.

      Now,  certainly,  you can  use the old glass apparatus, but if you have multiple
samples,  it is nice to have an apparatus  like this, and this is actually what we used for the
evaluation, and this is Professor Wisted  hard at work.

      Spike recoveries.  First of all, we  started out by spiking some of our own samples.
We wanted to see how it performed before we started using the precious EPA samples.  So,
we looked at the lubricating oil which we thought was a non-polar hydrocarbon, corn oil,
and, finally, Eric went home and fried up some bacon, and we had bacon grease.

      So, we have got a wide range of  polarities from hydrocarbons to even a lot of fatty
acids, and the results are very good out to about 1 g/L.  So, that looked okay so far.

      Ah, there is a hole here.  That means we must have an overhead. You are on,  big
guy. And there is our first overhead.

      The other thing is we kind of heard through the grapevine that maybe there would
be a new spiking mixture for this analysis, hexadecane plus stearic acid, 20 ppm each, and
that  represents non-polar materials  and  polar materials.  We did those.  Eric just got  the
results of this last  week.  These are the triplicate results  we got, 100 percent recovery plus
or minus 2, certainly capable of meeting the QA/QC criteria in the new method.

      The next slide shows the flow rates we found. Now, we have done about 25 of the
samples so far.  These... oops,  30. These are the results we got.

      If we used a 47 mm disk, 21 samples gave an average flow time of about 55 minutes
per liter, and, in fact, that was skewed to the high side by some very slow samples we


                                        94

-------
probably should have used a 90 for.  In fact, if you use 21 of the samples, the average was
55 minutes.  If you throw one data point out, for 20 samples, the average went down to 43
minutes. That shows you how skewed the result is. In fact, it was in the range of 15 or 20
minutes per liter for the typical sample that came  in.

      Now, 9 samples we did on 90 mm disks.  These tended to be kind of the meat types
of samples, meat packers, slaughter houses, things  like that which  had a kind of gelatinous
material  that really was very pluggy in terms of its behavior on  the disk. So, we used 90s
there. The average was 36 minutes if we used 9 samples.  If we threw one data point out,
it went down to 19 minutes. Again, it gives you a feeling for one or two bad samples in
the batch. The longest flow time we had on  a  90 which was the worst sample was 173
minutes.
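The quoted averages also imply how slow the dropped points were; a simple trimmed-mean
back-calculation, using only the rounded averages stated above, is sketched here.

```python
# Back-calculates the single dropped flow time implied by the two averages.
def implied_outlier(n_all, mean_all, mean_trimmed):
    """Flow time (min) of the one sample removed between the two averages."""
    return n_all * mean_all - (n_all - 1) * mean_trimmed

print(implied_outlier(21, 55, 43))   # ~295 min for the slowest 47 mm sample
print(implied_outlier(9, 36, 19))    # ~172 min, consistent with the 173 min quoted
```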

      Now, you have got to remember that these were blended by the master himself, so
these are a  blend of influent samples plus effluent  samples.  If you only had effluent
samples, they are a lot cleaner, and I  suspect  we probably  could  have used 47s for all of
them.

      Why don't  we look at a couple of slides  first?  All right, what we are looking at is
some of the data points. We are comparing, again, this type of  plot where we have Freon
liquid-liquid results versus the new oil and grease disks.

      Here  is a scattergram going from  zero to about 800  ppm.  In fact, the highest data
point we saw was down around 700.  Again,  you can  see, certainly, a general correlation
of our results with the Freon results with a fair bit of scatter.

      Now, if we zoom in on the zero to 100 ppm part of that curve, this is what you see.
The scatter increases, as you might expect, as you get down towards the detection limits.
Still, over all, many of the data points are within  a fairly reasonable envelope of comparing
with the Freon number.

      I  have got all the data with me.  If anyone wants to visit afterwards, we can certainly
go over  this.

      Okay, we need another overhead.

      Now, we also looked at hexane and cyclohexane as an eluting mixture to see if there
was any difference. We like hexane a lot better, because you can blow it down much more
quickly.

      This is what we found, really very good correlation between the two solvents which
should not be any surprise.  They are very close  in polarity.  So, we like hexane much
better.
                                       95

-------
      Okay, now we have a slide. We also analyzed TPH and this is the way we did it.
We redissolved the oil and grease residue in some hexane, and then we added silica gel,
stirred it up as the method calls for, and then filtered, evaporated, and weighed the residue.

      I do not have any of that data to show you. We are working on it now.  The study
is not quite complete yet, so we do not have all the data points, but, so far, the results look
pretty good. It  looks like it works just fine.

      There is one other laboratory that has actually taken this a step further. They wanted
to look at the infrared technique.  So, what they  did is at this  point, they redissolved the
residue in Freon.  Now,  I do not know what other solvent you are going to use if you are
going to use infrared, but at any rate,  they used Freon, and that worked out very nicely as
well.

      Every year when I come here, there is a character named Jack Cochran from Illinois.
I think he has one of his cohorts here this year, and Jack always stands up at the end of the
presentation and says yeah, great, well, what about surfactants?

      Well, we actually did some work with a local Twin Cities company called Economics
Laboratories or  Ecolab to take a look at a design experiment to  see how surfactants impact
the oil and grease result.  So, they spiked, I believe, 10 samples in a design experiment, and
we took a look at surfactants.

      Now, you can rationalize this any way you like.  You can postulate perhaps we will
get low results, because the surfactant solubilizes and  complexes the oil and grease and
makes  it water  soluble.  Therefore, it stays in the water instead  of partitioning into the
organic phase.

      You can  also postulate that maybe you will get high results, because you are going
to extract the surfactant and it is going to add to  the oil and grease result.

       Finally, you wonder whether the result depends on the type of surfactant.

      The answer to all of those is yes, by the way. Ah, another hole. It must be overhead
time.

      Okay, these are the recoveries. Now, this  was a 10-point design experiment. I am
not a statistician, and I do not have a clue as to what it all means, but this  is what the folks
at Ecolab told us the findings were.

      With one type of surfactant, we got an 88 percent recovery of the oil and grease.
Because they spiked these, they knew what the number should  be. The Freon liquid-liquid
                                        96

-------
extraction got a  135 percent recovery on the same sample.  These are the RSDs you see
here in parentheses.

      With surfactant 2, we got 64 percent; Freon got 106.  Finally, with surfactant 3, we
got 72; Freon got 136.
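
       For readers following the numbers, the percent recoveries and the RSDs quoted in
parentheses come from ordinary replicate statistics; a short sketch (the replicate values
here are hypothetical) is:

```python
# Percent recovery against the spiked amount, and relative standard
# deviation (RSD) of replicate results.  Replicates here are hypothetical.
from statistics import mean, stdev

def percent_recovery(measured_mean_mg_l, spiked_mg_l):
    return 100.0 * measured_mean_mg_l / spiked_mg_l

def rsd_percent(replicates):
    return 100.0 * stdev(replicates) / mean(replicates)

replicates = [17.2, 18.0, 17.6]                            # mg/L, made up
print(round(percent_recovery(mean(replicates), 20.0), 1))  # 88.0 percent
print(round(rsd_percent(replicates), 1))                   # 2.3 percent
```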

      So, certainly, it looks like we are extracting less of the polar materials in this case.
Whether it is through selective extraction or whether it is through solubilization, we just do
not know at this point, but it looks more attractive if you have got a lot of surfactant in the
sample.

      Jack Cochran is also looking at some of these in Illinois, doing some very creative
work with selective elution, and we should have some interesting results next year for you.

      This is also from that same study. What you are seeing here is the variation from 100
percent recovery. What we are doing, again, is using the target technique where we say the
target is the Freon  liquid-liquid extraction number. No, for this study, that is wrong. The
target is the actual amount of oil and grease spiked into the sample, because in this case,
we knew the answer, and we are looking at the variation from 100 percent.

      So, the height of these bars really is how close the result is to the target. For the
Empore here, this was certainly closer with surfactant 1 than the Freon was.  In the second
sample, that was reversed, and in the third with a third type of surfactant, it was more or
less of a  wash in closeness to the target.

      Here what we are looking at is RSDs.  Certainly, the RSDs look better using the disk
technique in all three cases.  So, that looked pretty good, too.  That is all we have so far in
the study, but it  looks like it can work for surfactants.

      The last thing I have for you is an overhead. This is what we have concluded from
the study so far.  We have got a couple of samples left to do, but it looks like the system
works. It looks like it is certainly capable of extracting these extractable materials from the
water samples.

      It  is user friendly.  Certainly, a lot of people are not going to like going to hexane
with the  two separatory funnels and everything else.

      Good  RSDs.  In fact, in our own internal evaluations of the EPA samples, the RSDs
tend to be in the single digit range, so they look very good.

      There is general agreement with the Freon result, and that is a very  broad term, you
understand.  It is good  for oil and  grease or TPH, and there is potential for  using it for
infrared.  In fact, perhaps if you chose the right elution solvent, you might be able to do it
with no further steps.

                                        97

-------
      No emulsions.  You will never have to worry about another emulsion.

      It is cost effective, and we have done a little field testing so far with very positive
results. People really like it in terms of comparing with their standard technique which is
413.1 and also just the handle-ability of the whole system.

      So, thank you for inviting me,  sir.  Thank you all for coming.
                                        98

-------
                        QUESTION AND ANSWER SESSION
                                      MR. TELLIARD: Any questions?

                                      MR. BANSAL:  Kris Bansal from Conoco. I have
two or three questions on your solid phase extraction technique.  One  is relating to the
effect of solids.

       I understand the samples that you have done basically are in the lab, but you are
really not simulating the effect of the solids, or are you?  Did you put any solids in to see
whether the solid phase extraction plugging tendency will increase with  solids?

                                      MR. MARKELL:  Yes,  we have done about 25
samples that Bill has collected, and those samples, many of them have high solids, yes.

                                      MR. BANSAL:  When you say high solids, I am
looking at small particle sizes also, because if you leave the sample for  a long period of
time,  most of the solids will settle down, and if you are  taking that extract, the results are
really not comparable.

                                      MR. MARKELL: Well, let me explain a little to you
how we did it.  We allowed the samples to settle first so  all  the solids went to the bottom.
Then  we extracted most of the sample so that it went through easily without the solids in
the system.  Then, finally at the end, we added the solids, and then we rinsed the bottle,
shook it up well a couple  of times with the hexane that  we were using for elution.

      So, I think we got a good extraction of things that  were adsorbed on the solids, and
there  was a range of solid sizes from a fine  precipitate or clay, perhaps from a formulating
plant, all the way to large  gels from meat processing.

                                      MR. TELLIARD: Sausage maker.

                                      MR. MARKELL: Sausage maker, yes.

                                      MR. BANSAL:  The second question I had is the
effect of salinity on the performance of the solid phase extraction technique.

                                      MR. MARKELL: I suspect there  is. We have not
looked at that,  but we do know that  in usual solid  phase extraction, if you have polar
materials, the more salt, the more salt  strength there  is in the solution, the more you will
tend to partition from the water into the organic phase. So, there may be an effect.  We just
have not seen  it.
                                        99

-------
                                     MR. BANSAL:  I come from the oil industry.  In
our case, the salinity really is anywhere from 30,000 to 200,000 ppm, so that is a very, very
major concern.

                                     MR. MARKELL:  I suspect if it is a hydrocarbon
type of material you are looking at, salt will not make any difference.  If it is fatty acids, it
could make a difference.

                                     MR. BANSAL:  The last question is, what is the
volume of n-hexane which you used as an eluant?  I am just comparing what is used for the
solid phase with the liquid-liquid extraction.

                                     MR. MARKELL:  We used two 15 ml  portions
which not only was good for the elution,  but it was also good for rinsing out all of the
garbage in the bottle and desorbing things from the particulates. So, about 30 ml for the
elution.

                                     MR. TELLIARD:  Yes, sir?

                                     MR. SLENTZ: My name is Kurt Slentz. I  am from
Energy Laboratories in Rapid City, South Dakota.

      Have you guys done any detection limit determinations on that at all to see how it
acts below 20 ppm?

                                      MR. MARKELL: We have done a little bit of work
looking at MDLs but not extensive enough to give you an actual number. It certainly looks
like it is good down to 5 ppm at any rate.  That is something we have got to look at yet.

                                     MR. PRONGER:  This question may  be more
pertinent for Bill.  My name is Greg Pronger with National Environmental Testing.

      When you have a method that is... the result is clearly very method specific, what
is EPA's position when you have clearly two different technologies to get the result?

                                     MR. TELLIARD:  Good question, thank  you.

      Hopefully, if we write this in a performance-based method format like we have said
we would, we are going to be able to allow  the option of using solid  phase in whatever
form it  is or the liquid-liquid extraction phase for use in the measurement technique.
Again, this is dependent upon review of the data and discovering that they are somewhat
compatible.  We do not know yet.

                                     MR. PRONGER: Thank you.

                                       100

-------
                                  MR. TELLIARD: You are welcome. Anyone else?




(No response.)
                                   101

-------
  Solid Phase Extraction Disks -
A Solution for the Freon Problem
 Craig Markell, Eric Wisted, Donald F. Hagen, 3M

-------
Phase I Study - Rangefinding
       • Sorbents
       • Disk Sizes
        • Elution Solvents

-------
      Phase I Recipe
1. Wash and Condition 90mm C18 Disk
2. Extract Sample Through Disk
3. Elute Disk With Hexane
4. Dry, Filter, and Evaporate Hexane
5. Weigh Residue

-------
Results - Disk Vs. 413.1 - High Levels
[Scatter plot: disk result versus 413.1 result (ppm), axes 0 to 900 ppm]

-------
                Phase I Results -
          Correlation With Freon LLE
[Bar chart of the number of samples "Closest to Freon" and "Ties" for each
technique: Hexane LLE (27), 90mm C18 Empore™ (28), SPE Tubes Varian (20)]
(  ) Number of Samples Run - 28 Total

-------
      Phase I Conclusions

      • Nonpolar is Good
      • Polar is Bad
      • Nothing Duplicates Freon

-------
Phase II - Optimization
  • Hexane or Cyclohexane
  • 1 L Samples
  • New Disk

-------
  Oil and Grease Disk
 • 47 or 90mm
 • C18 in a Non Polar Fibril Matrix
 • Fewer Plugging Problems
 • Cost Effective

-------
       Filter Aid 400
1 cm on Top of O and G Disk
Speeds Filtration
Helps Recoveries
Increases Capacity for Free O and G

-------
Spike Recoveries, % (RSD, n=3)

Sample             20 ppm      175 ppm     900 ppm
Lubricating Oil   101 (0)     100 (3.1)    95 (6.6)
Corn Oil           95 (11.8)   92 (2.3)    93 (5.0)
Bacon Grease      101 (4.5)    96 (3.1)   102 (5.1)

-------
      Spike Recoveries
20 ppm hexadecane + 20 ppm stearic acid

         100%
          98%
         102%
         100% ± 2%

-------
              Flow Rate
 21 Samples on 47 mm       Average 55 (43)
                          n = 21      n = 20

 9 Samples on 90 mm        Average 36 (19)
                          n = 9      n = 8

 Longest Flow time on 90 mm was 173 min.
 Influent + effluent blends

-------
    Oil and Grease Comparison
             LLE Vs. Disk
[Scatter plot: LLE result versus Empore disk result (ppm), 0 to 800 ppm]

-------
    Oil and Grease Comparison
             LLE Vs. Disk
[Scatter plot: LLE result versus Empore disk result (ppm), 0 to 100 ppm]

-------
         Oil and Grease Disk
         n-Hexane vs Cyclohexane
[Scatter plot: cyclohexane result (ppm) versus n-hexane result (ppm), 0 to 800 ppm]

-------
      TPH Analysis
1. Redissolve Residue in Hexane
2. Add Silica Gel and Stir
3. Filter, Evaporate, Weigh Residue

-------
Oil and Grease Results in the Presence
     of Surfactants - Ecolab and 3M
 • Low Results Because of O and G Solubilization?
 • High Results Because of Surfactant Extraction?
 • Surfactant Dependence?

-------
         Recoveries (RSD)
               Empore™    Freon LLE

Surfactant 1     88 (48)      135 (82)
Surfactant 2     64 (52)      106 (67)
Surfactant 3     72 (18)      136 (110)

-------
Oil and Grease Method Comparison
[Bar chart: variation from 100% recovery, Empore™ versus Freon LLE, for
Surfactants 1, 2, and 3]

-------
     Oil and Grease Method Comparison
[Bar chart: % RSD, Empore™ versus Freon LLE, for Surfactants 1, 2, and 3]

-------
           Conclusions
System works
 » User friendly
 » Good RSD's & recoveries
 » General agreement with 413.1
Good for Oil and Grease or TPH
 » IR potential
No emulsions
Cost effective
Very positive field testing

-------
                                      MR. TELLIARD: Our next speaker is Rex Hawley.
Rex is with Varian Sample Preparation Products, which you all know as Varian.  Rex has
been in the market development section of Varian forever, I guess, according to this resume.
Could not find a real job; stayed at Varian, I guess.

      Rex is going to describe a similar study that Varian has been carrying out looking at,
again, application of solid phase extraction.  Now, the samples that Rex is going to describe
to you are the same samples that Craig analyzed and that we analyzed. So, these are all the
same matrices; identical samples taken from the same locations.
         OIL AND GREASE MEASUREMENT BY SOLID PHASE EXTRACTION
                                     MR. HAWLEY: Thank you, Bill.

       First of all, I have to thank Craig for saying almost everything I have to say.  I think
we would expect to have similar results, and, as you will see here, we tend to agree.

      Last year,  I stood here and presented information regarding  a new method for
determining oil and grease content from aqueous samples, and the technique involved solid
phase extraction  followed by gravimetric measurement of the analyte.  Today, I am going
to talk about the  results of additional studies that were conducted over the last few months
for further evaluation of the system.

      Fortunately or unfortunately, I do not have a lot of new features to describe to you,
simply because very few changes were really needed in order to have the unit function as
designed.

      Again, as a brief description, I would like to go through the fact that the system was
based on three design parameters.  First of all was simplicity, the ease of use.  There are no
large separatory funnels. It is compact in space requirements, and it is basically a cookbook
approach.

       The next slide covers, as Craig has already talked about, the fact that you want
it to be cost effective.  You have low solvent usage, again, on the order of 25 to 30 ml of
solvent.  You have a less expensive solvent, less expensive disposal costs.

      The third   parameter which is on  the  next  slide  is  error reduction.  We were
attempting to minimize the potential for error wherever it may occur.  We tried to use the
same components, whether or not we were talking about sampling or running the test.  We
used full liter samples.  We were not splitting samples. As Craig  pointed out, we do not
have emulsions because of the technology that is being used.


                                       125

-------
      On the next slide is a brief flow chart, basically, for the procedure. If you cannot
read it, do not worry about it.  We are going to describe it in more detail here, because I
found out one thing from last year's talk was that my verbal description of the apparatus and
the procedure just was not sufficient to ignite your imagination so that you  understood what
I was talking about. So, I have some slides here that I would like to show, and it will give
you a better idea of what I am going to be talking about over the next few minutes.

      If  I can get the first slide at this stage, the processor that we used comes in two
stages. The first stage is basically set up for a single sample at a time.  It comes with two
stations, first of all, for sample application, and the second side over here is for elution, and
you will see a little bit better in a few minutes what this means.  It also has a version for six
samples at a single time.

      One of the keys that we talked about for error reduction was the fact that we are
using similar  components for both the sampling and the processing.  The first of that is
simply the bottle that is used.

      We designed a special cap to fit this bottle.  It is an 80 mm standard 1 liter bottle.
You can buy  it from any of the laboratory supply houses. As you can see, when you  have
a sample put  into it, you  have  an insert plug to keep it from leaking.

      As far as the extraction is concerned, we use a solid phase extraction cartridge which
has filtration material built into it, a depth filter material, to remove the particulates, the gel
or whatever it happens to  be that you may have in the sample that you are working with.

      The first step in the process is to condition the column and then to apply the sample.
Now, as far as applying the sample, it is nothing more than removing the plug from the cap.
It is a hexagonal key, as you may have noticed...  I should have pointed it out on the cap...
which fits into the holes in the apparatus. So, you simply put the cartridge on the top of
the cap, invert the bottle, and, in this case, we open a vacuum line and suck all the  fluid
through.

      As you can see, the filtration material itself really does hold most of the particulates
prior to the extraction sorbent.

      After the  sample has gone through, you  simply rinse the bottle  with the elution
solvent.  In our case, we use 20  ml of elution solvent first, and then we rinse  again with a
second 10 ml aliquot which is then applied to the larger side.

      Now, the reason we use two sides for this apparatus is because of the way we do the
elution.  Part  of the simplicity factor of this is we are trying to make it easier for everything.
                                        126

-------
      One of the problem areas that we have found and that Craig described was the fact
that sodium sulfate inherently has some problems.  Even if it does not have any problems,
you still have to filter it out before you can do the measurement.

      What we have ended up doing is we designed a new device which basically traps
water and separates it from the organic phase. So, what we designed is an in-line process
where you rinse the bottle out and invert the bottle again. The elution solvent goes through
the particulates and the sorbent bed into what we call the aquaset, which separates the
organic from the aqueous phase.

       Then you end up with a separation of the aqueous phase up here and the organic phase
below it.  It is all done in-line, in a single step.  It is real simple as far as what happens.

      So, you have a dried extract without having to filter it. To dry it down, we use the
nitrogen blow-down  system and then weigh the vial for your determination.

      I think that is the end of the slides.  Now we are back to the overheads.

      This slide  is nothing more than  what Bill has already presented.  It shows the
industries  and the description of the samples that we were sent.  In a few minutes when the
lamp  brightens  up there, you will be able to see it.

      The bottom line is I can sit here and tell you that our system is the best thing since
sliced bread,  but the end performance is what  is  going to convince anybody about the
validity of the technique. So, I  would like to review some of the data that we have done
in this Freon replacement study.

      The direct answer right here is that all of these samples that you are going to see here
are from real-world samples.  By the way, if anybody gets picky, I know I misspelled
maintenance down there at the bottom.

      The studies that we did involved the use of a number of  potential replacement
solvents.  This chart, again, we can talk about this afterwards. I would not worry too much
about trying to read this, because I am  going to show displays of the data in a few minutes
differently.

      The table is just an indication that the solvents that we investigated were hexane,
cyclohexane, because, as Craig said, we had this little birdie here that said that might be
something to investigate. We also looked at pentane, acetone, dichloromethane, and ethyl
acetate.

      In order to keep this presentation at a reasonable length, I am only going to show the
pertinent data for hexane, but if anybody wants to discuss the other solvents and their
results, please see me after the session.  I will be glad to talk about it.


                                       127

-------
      As far as calibration is concerned for these studies, we used vegetable oil.  We did
not use the hexadecane/stearic acid standards.

       I can tell you that displaying the results of almost 30 samples at one time so that
everything is readable is somewhat challenging, so the presentation I finally decided
to use is simplified.  Since the results are method dependent on Freon, I decided to use
Freon as my standard.

      Now, what I have done here is this one is for less than 75 mg/L. The dark lines are
an arbitrary band around the Freon results.  The band  is plus or minus 25 percent.
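
       The check being plotted here is nothing more than asking whether a result falls inside
that band.  A minimal sketch (the band matches the arbitrary 25 percent choice above; the
example numbers are illustrative) is:

```python
# Flag whether an SPE or hexane LLE result lies within an arbitrary
# +/- 25 percent band around the Freon 413.1 value for the same sample.
def within_band(result_mg_l, freon_mg_l, band_fraction=0.25):
    low = freon_mg_l * (1.0 - band_fraction)
    high = freon_mg_l * (1.0 + band_fraction)
    return low <= result_mg_l <= high

print(within_band(52.0, freon_mg_l=60.0))   # True,  inside the band
print(within_band(80.0, freon_mg_l=60.0))   # False, outside the band
```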

      So, basically, the other two lines that are there are hexane done with liquid-liquid
extraction  and hexane used as an elution solvent for solid phase extraction.

       As pointed out this afternoon, anything that tends to make hydrophobic materials more
hydrophilic, such as alcohols or organic acids or surfactants, can affect recoveries negatively.
On the other hand, compounds which  are hydrophobic but not oils and greases,  such  as
waxes and soaps and hydrocarbon species,  et cetera, will be retained by SPE but may not
be extracted by Freon liquid-liquid.

      Now, there  is a series of six overheads here.  I think this is the third one.  The first
two had to do with all industries that were tested in this protocol, and then I broke it down
a little bit  into... and that is divided by less than 75 mg/L and over 75 mg/L, and I  broke it
down into petroleum  industries  and non-petroleum  industries.  This is based on the
information that we were given  as far as the breakdown is concerned.

      You can see that there are exceptions  that fall outside of this band, but, generally, the
trend is definitely there.   I think we can be  more  specific and say that the  general
conclusion is that we fall within  this plus or minus 25 percent band.

      Again, I will say that the  25 percent  is an arbitrary evaluation from my  standpoint.
You can see this area right here  is the solid  phase extraction, and this one over here is the
liquid-liquid hexane.

      By  the way, we know that certain  things, such  as glycerin  and some surfactant
compounds seem to clog the column, but, interestingly enough, we had more problems not
with the sample application as much as trying to elute the sample off the column. Many
times, the pentane or the cyclohexane just would not penetrate the organic  barrier that
basically had formed.

      The best technique that  we have been  able  to  find is to raise the sample pH  to
neutral. This seems to neutralize any of the ionized species that may have been there, and
it allows for better flow characteristics.
                                        128

-------
      However, the results that you are looking at here, these are results that did not use
that technique of increasing the sample pH.  These were all pH 2.

      One of the things that I am not going to present today is the results of the silica gel
clean-up that we did.  The data will be provided in a summary report that Bill will end up
with.

      Oh, I forgot this one. This is just simply a restatement of what you have heard from
Craig  and everybody else here,  that the current  Method 413.1 is a solvent dependent
method.  There are a lot of variations involved with all of these things to  be considered.

      As  far as the silica gel  results  are concerned,  I can  say that the results were
consistently low.  They did not vary up and down as  you might expect.

      If I had hexane values for 413.1 that varied up and down, I did not get similar results
when  I did the silica gel study.  I think  everything except for one sample  was lower. So,
they were consistently low.

      However, I did change the procedure. When I was using the solid phase extraction,
I tried to take advantage of the fact that I could stack two columns. So, that may very well
have changed the results and made them strange.

      I  am looking into why they are strange right now, but I still believe that a method can
be developed, and I think that Craig's results  indicate that if you simply take the residue
obtained from Method 413.1 and reconstitute it, you can get similar results.  I think it
is a given that we can do that.  Now,  the question  is, can I make it a simpler method than
what is currently being used?

      In conclusion, I think I am happy to state my belief that SPE is a viable alternative
to Freon liquid-liquid extraction.  Currently, the results indicate that hexane is suitable as
an elution solvent. Generally, consistent results are obtained from what you looked at here.

      What still needs to be done is solidifying the hydrocarbon fraction, but I do not think
it will take very long to finish that study.  Thank you for your time.
                                        129

-------
                       QUESTION AND ANSWER SESSION
                                     MR. TELLIARD: Any questions for Rex?

                                     MR. BANSAL:  Kris Bansal from Conoco.  I want
to find out what is the cost for these cartridges compared to the disks that Craig mentioned.

                                     MR. HAWLEY: Cartridges right now are running
at $12 list.  Craig, the list on yours is what, $5?

                                     MR. MARKELL: Well, we have not quite decided
that yet, but it will be somewhere in the neighborhood  of $4 or $5.

                                     MR. BANSAL:  The second  question is more in
terms of my understanding of the elution process itself.  The way you describe it, you have
the cartridge. You put the solution through.  Now you have a certain set volume of the
hexane which is used as an eluant.

      The way  I look at the column is that the initial part, as it is being eluted, that is going
to be adsorbed at the lower end.  The question is, if you are varying  the concentration of
the stuff which is going to be removed, how can you really be  sure that all of it is eluted
by a certain fixed volume of this solvent that you are using, the n-hexane?

                                     MR. HAWLEY: I think I will  have to refer back to
what I presented last year.  When we determined the 30 ml, or the two aliquots that total
30 ml, it was done by additional solvent rinses or elution steps where we went up to, I
think it was, 70 or 80 ml altogether.  We found that, on average, less than 5
percent was ever recovered after that 30 ml.  So, that is how we determined
that number.
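
       The logic Mr. Hawley describes amounts to a simple stopping rule on successive rinses.
One way to sketch it (assumed logic for illustration only, not Varian's actual procedure, with
made-up aliquot masses) is:

```python
# Keep counting rinse aliquots until an additional aliquot recovers less
# than 5 percent of the mass already collected.  Masses are hypothetical.
def aliquots_needed(aliquot_masses_mg, cutoff_fraction=0.05):
    total = 0.0
    for count, mass in enumerate(aliquot_masses_mg, start=1):
        if total and mass < cutoff_fraction * total:
            return count - 1            # the previous aliquots were enough
        total += mass
    return len(aliquot_masses_mg)

print(aliquots_needed([30.0, 10.0, 1.5, 0.5]))   # 2  (two aliquots suffice)
```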

                                     MR. TELLIARD: Yes, sir?

                                     MR. SLENTZ: My name is Kurt Slentz with Energy
Labs. Have you guys, either one of you guys, done any studies on the relationship between
total suspended solids and recoveries that you get off of these devices?

                                     MR. HAWLEY: I can give you some information
for what I have done, and Craig will have to speak for himself.  We have done a little bit,
I  guess you would call it, of an informal investigation, and from what we have seen, the
amount of solids and the type of solids are  two different things.

      The  amount of solids, if I get something that  is fairly granular, I have no problem
either segregating it so that I can filter it out, and I have no problem with recovery.  Again,


                                      130

-------
remember all of these numbers, when we talk about recovery, are based on comparison to
the Freon method.

      When you talk about something that is gelatinous or extremely fine particles... in fact,
some of the samples that Bill sent to us did not look that bad, but they are the ones that ran
slowly.  Some of the stuff that came up, I mean, literally, the bottle was half full of crud, ran
through without any problem at all.

      So, simply looking at  it or simply looking at total suspended solids  did not really
relate to how well the n-hexane ran through or how well it related to the overall recovery.

                                     MR. SLENTZ: The reason I asked the question was
that we seem to have a lot of difficulties that are, I guess you could call them, colloidal type
particles plugging those disks up.  I guess I am asking if you think it would be worth maybe
taking some  samples of those types of particles and spiking them up and seeing what effect
that has on your recoveries that you get.

                                     MR. HAWLEY: I would be interested in doing that.
We just have not done it to date.

                                     MR. MARKELL:  If I can just add to that answer,
you are absolutely right. The smaller particles  seem  to cause more problems than the larger
ones.

      We have not done a systematic study, but with some of the work which I will present
on Thursday, we  looked at Method  608, and we have looked at a lot of those kinds of
samples.  We have not seen a definite relationship between any recovery problems and total
suspended solids.

      On the other hand, clearly, there is going to be an upper limit where you have a
certain amount of suspended solids,  and if you  have analytes that are very non-polar like
the organochlorine pesticides or PAHs, they are going to be adsorbed to those suspended
solids, maybe even  incorporated into the body of the solid.

      There is, obviously, no way in the world you can do that without a fairly extensive
extraction like a Soxhlet.  So, somewhere, there is a relationship there; you are right.

                                      MR. SLENTZ:  Have you guys noticed any
channelization, when you have that, in your pre-filtering part that would affect your
recoveries at all when you eluted off your column?

                                     MR. HAWLEY: For the filter that we use, it is a
very loosely woven  material.  So, we do not  tend to  have the problems with channeling
                                       131

-------
because of the particle size.  It tends to be distributed pretty evenly throughout the whole
filter matrix.

                                     MR. TELLIARD:  Ditto.

                                     MR. SLENTZ:  Thank you.

                                     MR. TELLIARD:  Any other questions?

(No response.)

                                     MR. TELLIARD:  Thank you, Rex.



      (Slides for this presentation were not available at the time of publication.)
                                       132

-------
                                     MR. TELLIARD: Our next two speakers will be
talking about the infrared effort  that has been underway.   Jim  Vance is with Horiba
Instruments. Jim, again, is going to be discussing analysis performed  on the same sets of
samples that Craig and Rex described and that I described earlier. So, these are all the same
matrices from the same locations.
            NONDISPERSIVE INFRARED ANALYSIS OF OIL AND GREASE
                   AND TOTAL PETROLEUM HYDROCARBONS
                                     MR. VANCE: Hello. I am Jim Vance with Horiba
Instruments, and I am here to entertain you, unfortunately, with a pile of data.  Perhaps I
will enlighten you. I know I  will raise some questions and, hopefully, will answer some.

      Horiba had asked the EPA to be  included in their efforts to replace Freon in the oil
and grease test procedures, and the EPA graciously accepted, and  sent us some of the
dirtiest, the grungiest, the stinkiest  samples they could find.  Thanks a lot, Bill.

      I would like to discuss our  method of measurement.  The nondispersive infrared
technique is the same extraction procedure as in the gravimetric technique, but rather than
evaporate and weigh the residue, we take the extract, put it into the nondispersive infrared
spectrophotometer, and measure  the absorbance  in the infrared  compared  to  known
standards.

      These known standards are oil that has  been doped into the solvent. The EPA had
asked us this time to use the same mixture you have seen, the 50/50 mixture of hexadecane
and stearic acid.  We did that by  making a 40 ppm standard on our zero to 50.0  ppm
analyzer and adjusting the readout to read exactly 40.

      A midpoint was made  by  dilution of that standard to 20 ppm, and,  of course, zero
was tested.
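
       The calibration just described is effectively a two-point (zero and span) adjustment.
A hedged sketch of the arithmetic (the absorbance readings are invented, and this is not the
instrument's firmware) is:

```python
# Set the span so the 40 ppm hexadecane/stearic acid standard reads 40.0,
# then read the 20 ppm dilution and the zero back against that scale.
def calibrate_span(absorbance_at_40ppm):
    return 40.0 / absorbance_at_40ppm          # ppm per absorbance unit

def read_ppm(absorbance, span):
    return absorbance * span

span = calibrate_span(0.80)     # assume the standard reads 0.80 absorbance units
print(read_ppm(0.80, span))     # 40.0  (the standard itself)
print(read_ppm(0.40, span))     # 20.0  (midpoint made by dilution)
print(read_ppm(0.00, span))     # 0.0   (zero check)
```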

      The small aliquot method that we use  is to  put 30 ml in our extraction chamber
which consists of both solvent and sample.  The  extraction chamber  has a mixer that
vigorously extracts the hydrocarbons automatically.  When the extraction times out, the
water goes to the top, the solvent goes to the bottom, except in the case of emulsions, as
you are familiar with.
      Later I  will show graphically how we can treat emulsions and how the solvent to
water volume affects emulsions.  I will also present some data on residual measurement of
                                      133

-------
oil and grease, that is, the water that is left over after you have already done this extraction
and measurement, and data on another technique of sparging the sample.

      On this first overlay over there... I apologize to the people in the back that I did not
make it quite big enough, but let's see if I can use this pointer.  Episode it says there, and
these numbers relate to the sample collection process.

      Sample number is the next column, and you can see the same sample numbers that
have been presented before.  Lab ID, this is all Horiba.  We did the testing.

      The bottle number is identified in the next column, and then we start getting into
some significant things  like solvent.  We used Freon-113, Horiba's S-316  solvent, and a
special grade of perchloroethylene developed by J.T. Baker for this analysis.

      This column says O&G 1. We have got O&G 2, O&G 3, and an average for oil and
grease measurements.

       Next to that I have a residual. This is the residual I was talking about where we take
the water that has already been extracted and extract it again.  These numbers should be
zero, and  in  some cases, they are awfully close to zero, but  some are not.

      TPH 1, TPH  2,  TPH 3, and average  are the three columns for total petroleum
hydrocarbons. The total petroleum hydrocarbon is tested right after the oil and grease. We
have not done anything destructive to the oil and grease extract, so we simply drain it into
a clean catch beaker, treat it with the silica gel, and then filter it, and put it back into the
analyzer for measurement, and those are the TPH numbers.

      The next couple of columns read SPG O&G and TPH SPG. SPG stands for sparged.
What we did was take a small quantity, maybe 250 ml, of the water, raise it to 60 degrees
centigrade, bubble it with air to drive off low-boiling hydrocarbons.  In some cases, these
numbers  are much lower than the oil and grease numbers.

      The first one here is 60 point something.  This next sample is a couple tenths of a
ppm higher than the O&G number. It should not be higher than the O&G number, but that
is real world data.

      As we go down the list, we see some drastic changes down to 121 ppm sparged from
174  ppm oil and grease.

      This overlay is a continuation of the samples. We actually measured 30 samples so
far, and I have 4 backlogged to do.  These samples were measured with 25 to 30 analyses
per bottle, so you are looking at a total of 750 or 900 samples on this data sheet.  This is
including the sparged and the residual measurements.
                                       134

-------
      You might also notice that we tested these samples with three solvents.  We have
Freon-113, S-316, and perchloroethylene.  Not all of the samples were tested with the
perchloroethylene. We did not have copious quantities, unlimited supply, and there was
a period when I did not have it at all. I  have 30 samples measured with Freon and the S-
316 solvent, and 19 samples ran with the perchloroethylene. We plan to do additional tests
on the four backlogged samples and more tests with perchloroethylene.

      This overlay is an attempt to show the disparity of oil and grease  by solvent.  I have
the sample number indicated here, and we come down this column and there is a hole in
here, because there was not a sample number 24885.

      At any rate,  sample number, Freon-113. Now, what I am doing in this column is
comparing the solvents to Freon-113,  so  this Freon-113 column, by  definition,  is zero
disparity.

      The S-316 solvent  shows differences.   This is the percent difference, and some of
these percentages get  kind of high.  So, we decided also  to show the difference in ppm,
because at the low end of the analyzer,  perhaps 1 ppm difference can  represent as much
as 10 percent disparity.

      The Freon-113, again, for ppm difference is  going to be  zero.  S-316 shows a
maximum of 24 percent here, and perchloroethylene -25.  If you count the minuses and
the pluses, you will find that the S-316 had 13 samples that were high, 17 that were lower
than Freon-113.  Perchloroethylene had 12 samples that were higher than the Freon-113,
6 samples that were lower, and 1 that was right on.
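
       The disparity figures in these overlays are computed against the Freon-113 average for
the same sample, both as a percentage and in ppm.  A short sketch, using the sample 24873 oil
and grease averages (60.8 ppm by Freon-113, 58.4 ppm by S-316), reproduces the -3.9 percent
and -2.4 ppm entries in the table:

```python
# Percent and ppm disparity of an alternate-solvent average relative to
# the Freon-113 average for the same sample.
def disparity(alt_avg_ppm, freon_avg_ppm):
    diff_ppm = alt_avg_ppm - freon_avg_ppm
    diff_pct = 100.0 * diff_ppm / freon_avg_ppm
    return diff_pct, diff_ppm

pct, ppm = disparity(alt_avg_ppm=58.4, freon_avg_ppm=60.8)   # sample 24873, S-316
print(round(pct, 1), round(ppm, 1))                          # -3.9  -2.4
```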

      This overlay looks  similar to the last one, except it is total petroleum hydrocarbon
disparity by solvent.   The disparity seems to  be a little bit larger with  the TPH number.
Perhaps we needed to use more than 3 grams of silica gel.  I am not sure whether that
is an influence there or not, to tell you the truth.

      The differences are as high as -54 percent which means we should go back and look
at that sample on the oil and grease  to compare the difference.

      On this overlay I have a sample description, and sample number. Here is the ID by
type of sample, textile mill in this case, leather finishing, POTW, and so forth. The solvents,
Freon-113, S-316, and perchloroethylene are listed here.  Now I have listed in this column
the volumes of sample and solvent.

      We used no less than a 1:1 ratio of solvent to sample to assure maximum extraction
efficiency.  The first sample has 10 ml of water sample to 20 ml of solvent.

       The next column tells if we treated it with sodium sulfate.  Yes means we had to do
that; no means we did not have to use sodium sulfate.  This one I called coffee and

                                       135

-------
cream.  It turned our extraction chamber totally brown, but it settled out after a couple of
minutes. It did not require the anhydrous sodium sulfate treatment.

      The  residual  numbers  are also on this  sheet and the  extraction time.  Normal
extraction time was  1 minute except in some of the cases where we felt that the residual
was getting too high, and we tried 2 and 3-minute extraction times.

      The sample down at the bottom, again, I apologize that you probably will not be able
to read that, but this one was interesting, at least to me. I named it green volcano.  When
we did our sparge technique, we had a green volcano.

      The  next one  in this column was from the formulating  plant and, when sparged,
looked like a bubble bath.  That one, as you can see,  needed the anhydrous sodium  sulfate
treatment.

      This overlay is a continuation.  I could not put all the data on one page, so we have
two pages.  There is the sample description.  I do  not know that I  need to go over this
much, but I do have some that were very yellow, for instance.  Several of these did need
the treatment with sodium sulfate.  The  one from the olive plant, loved that one; Martini
time.  And the one from the bacon  plant, we named that one the breakfast smell.

      This slide shows the test set-up that we used, and you might notice some subtle hints
there as to which solvent to put in which analyzer.  We have perchloroethylene, S-316, and
Freon-113.  We have in the picture tetrachloroethylene, otherwise known as
perchloroethylene, and the S-316 bottle.  I do not have the Freon on the table top here.

      The funnels that you see there, this one is supported by a stainless steel rod, and it
is going directly into the  analyzer.  This can  be used  both for the treatment  with  the
anhydrous sodium sulfate and the silica  gel treatment.

      We do the extract.  If we have a  cloudy extract or an emulsion, we drain it into a
clean  catch beaker underneath, treat it with sodium sulfate, pass it through the funnel back
into the analyzer.  We have three funnels set up for the three  analyzers.

      You  can see a couple of burets set up to precisely measure the amount of solvent that
goes into each one of those analyzers.

       This slide is a work of art that I call tres niños malos, or three bad boys.  These had
to be  done in the fume hood because they smelled  so bad.

      On the next slide are the three good boys. This one is the bacon, the olive plant,
and the dye works. The dye works  was  kind of a paradise blue, so we have got breakfast,
lunch, and  paradise blue.
                                       136

-------
      This slide is an attempt to show what the extract is supposed to look like in the
extraction chamber. The extraction chamber holds 30 ml of sample, and you can see a line
here, kind of a dirty, filmy line. I have got about 5 ml of water from that point to that point,
and then the solvent,  25 ml,  is pretty clear.
      This slide shows the water sample a little greyer, but the solvent is still good and
clear.

      In this slide we have an emulsion down here.  This needs to be treated with sodium
sulfate.  The water phase is from here to here.

      I was not happy with the results of those slides, so we took three VOA vials, 40 ml
VOA vials, filled them with a total of 30 ml of solvent and sample from one of the bottles.
The one on the left has 20 ml of water, 10 ml of solvent.  The one in the center is 15 ml
and 15  ml.  The one on the  right is 25 ml solvent and 5 ml sample, and you can see the
difference there; the extract is clear.

      As they settle, you see the bottom starts to clear out a little bit on  these two.  Of
course,  the one on the right  is good and clear anyway.

      This slide shows that the higher water to solvent ratio still has not cleared.  The one
in the center has cleared, but look at the  levels.  The water level on the center sample is
slightly  lower than the other  two VOA vials.  What we did was take the solvent phase out
carefully with a syringe, treat it with the anhydrous sodium sulfate, and put it  back into the
VOA vials, now compare these two extracts for clarity.

      This slide shows a technician taking a water sample with the proper pipette and
pipette bulb. This is the buret that we used to carefully  measure the solvent.

      This is a guy who has the right equipment, lab coat, gloves,  safety  glasses, but he
seems to be drinking the water.

       In this slide, he has drunk it all; all done.  So, in conclusion, I can say that this has
been a lot of fun doing these tests.  Did we learn anything?  Well, we did learn that our
method using the small  volume aliquot is fast, 2 to 20 minutes per test, generally repeatable,
and still has an edge in this  industry in that we  can  look at samples on a  changing basis
with fast analysis.  You can take this machine to the field and add chemicals  to your influent
and look for reductions and that sort of thing.

      Are there any questions?
                                       137

-------
                       QUESTION AND ANSWER SESSION

                                      MR. BANSAL: Kris Bansal with Conoco. I want
to ask you two  questions.   One, you used perchloroethylene to extract.   Now,  my
understanding is that perchloroethylene has a capacity for water which makes it absorb
in the infrared.

                                     MR. VANCE: That is true. I  am glad you raised
that point. We had a special grade developed by J.T. Baker to do this analysis.   It seems
that perchloroethylene that is available on the market right now will not  work for the IR tests
because of just exactly what you are saying.

                                     MR. BANSAL: I would be interested in getting a
little bit more information on the special grade of perchloroethylene, because this is one of
the problems, I think, that we have to find a suitable IR solvent when we do go to hexane
as the preferred solvent for the gravimetric method.

      The second question I had was that you said that you used a small aliquot from the
entire sample.  What did you  do to make sure that your aliquot was a representative
sample?

                                     MR. VANCE: What we did to do that was to take
several samples.  I do realize that if there are a lot of particulates in the sample  that  this
method is a little more shaky.  Because  of the tendency for  the oil to adhere to  the
particulates,  shaking the bottle does not assure that you are going to get a homogeneous
number of particles in your small volume aliquot.

      We did do at least one full bottle extraction on one of those samples which was one
of the points I was going to bring up here and missed.  I measured 13.2 ppm on the light
blue bottle from the dye plant versus 15 by the oil and grease method, IR. And the  dye
plant sample had a lot of particulates in it. Any other questions?

                                      MR. MARKELL:  Craig Markell from 3M.  Jim, I
love the idea of naming the samples. That was great.  You have got to explain martini time
to us.

                                     MR. VANCE:  Oh, martini time.  That has to do
with the olives. Now, I know that could have been lunch, but  I figured  it was a double
martini lunch.

                                     MR. TELLIARD:  Anyone else?

(No  response.)
                                     MR. TELLIARD: Thank you,  Jim.


                                       138

-------
                             NON-DISPERSIVE INFRARED ANALYSIS
                                    OIL & GREASE/T.P.H.

[Data table, page 1 of 2: for each episode, sample, lab ID, bottle, and solvent
(Freon-113, S-316, or perchloroethylene), the triplicate oil and grease results and
average, residual, triplicate TPH results and average, and sparged O&G and TPH
values, all in ppm, for samples 24873 through 24888.  The column entries were
scattered in scanning and the individual values are not reproduced here.]
                                         PAGE 1

-------
                             NON-DISPERSIVE INFRARED ANALYSIS
                                    OIL & GREASE/T.P.H.

[Data table, page 2 of 2: the same columns (episode, sample, lab ID, bottle, solvent,
triplicate O&G results and average, residual, triplicate TPH results and average,
sparged O&G and TPH, all in ppm) for samples 24889 through 25101.  The column entries
were scattered in scanning and the individual values are not reproduced here.]
                     PAGE 2

-------
                      OIL & GREASE DISPARITY BY SOLVENT

               OIL AND GREASE % DIFF           DIFFERENCE IN PPM
SAMPLE      F-113    S-316     PERC         F-113    S-316     PERC
24873           0     -3.9      0.6             0     -2.4      0.4
24874           0      -30      -41             0     -9.3    -12.9
24875           0     -2.8      4.6             0      0.6        1
24876           0     -0.8     -2.8             0       -2       -7
24877           0      -23      -33             0      -13    -20.4
24878           0      -21      -14             0      -36      -23
24879           0      8.1                      0       27
24880           0      -20                      0       -9
24881           0     -6.2                      0       -4
24882           0     -2.6                      0       -2
24883           0     -8.8                      0     -2.9
24884           0      -13                      0      -17
24886           0      -15                      0     -9.6
24887           0     -4.5                      0       -7
24888           0      -44                      0    -35.6
24889           0       17                      0     14.3
24890           0       16                      0      2.4
24891           0     -9.9       19             0     -0.8      1.5
24892           0      8.5     -2.4             0      -21       -6
24893           0      7.1     -0.7             0        4     -0.4
24894           0     -4.1      -13             0       -8      -25
24895           0      5.6      7.2             0       24       31
24896           0       19      1.6             0     10.8      0.9
24897           0      2.3     -3.9             0      2.3     -3.9
24898           0     -3.9        0             0     -1.8        0
24899           0      2.7     -6.2             0        7      -16
24900           0       11     -0.8             0       28       -2
24901           0       19     -3.7             0       31       -6
24902           0      2.3       -9             0        7      -27
25101           0      9.4      4.6             0      6.5      3.2

(Blank PERC entries: sample not run with perchloroethylene.)
               141

-------
                     TOTAL PETROLEUM HYDROCARBON
                          DISPARITY BY SOLVENT

                   T.P.H. % DIFF                DIFFERENCE IN PPM
SAMPLE      F-113    S-316     PERC         F-113    S-316     PERC
24873           0      -17      -23             0    -10.9    -14.5
24874           0      -54      -51             0    -15.2    -14.4
24875           0       -8      6.9             0     -0.7      0.6
24876           0      -18      -18             0      -35      -37
24877           0      -23      -27             0    -14.1    -16.5
24878           0      -24      -10             0      -41      -17
24879           0      4.8                      0       14
24880           0      -41                      0       -9
24881           0     -5.9                      0     -3.8
24882           0      -10                      0     -7.8
24883           0      -21                      0     -4.4
24884           0      -25                      0    -18.4
24886           0      -20                      0     -2.1
24887           0       22                      0      8.4
24888           0       26                      0     43.7
24889           0       20                      0      2.6
24890           0       15                      0      0.7
24891           0      3.1      6.2             0      0.1      0.2
24892           0      -31       35             0    -31.3     35.4
24893           0      -12      -13             0       -3     -3.3
24894           0      -17       64             0     -9.9     35.5
24895           0       28      -22             0       43      -33
24896           0      -12      9.5             0     -2.8      2.3
24897           0      -10      -20             0     -9.9    -19.5
24898           0     -1.4      0.7             0     -0.2      0.1
24899           0      173      5.4             0     19.2      0.6
24900           0       24      -12             0       35      -17
24901           0     -2.6     -9.9             0       -4      -15
24902           0       -1      4.8             0     -0.6        3
25101           0      5.1     -4.3             0      3.5       -3

(Blank PERC entries: sample not run with perchloroethylene.)
            142

-------
SAMPLE DESCRIPTION

SAMPLE  ID                     SOLVENT  ML SAMP  ML SOLV  NA2SO4  RESID.  EXT. TIME  COMMENTS
24873   TEXTL MILL             FREON       10       20     YES    22.6       1
24873                          S-316       10       20     YES    32         1
24873                          PERC        10       20     YES               1
24874   LEATHR FINISH PLANT    FREON       15       15     NO      5.5       1       COFFEE & CREAM
24874                          S-316       15       15     NO      6.9       1
24874                          PERC        15       15     NO                1
24875   POTW                   FREON       15       15     NO      3         1       GREY W/ BLK PARTS
24875                          S-316       15       15     NO      4.1       1
24875                          PERC        15       15     NO                1
24876   DIE CASTING PLANT      FREON        5       25     NO     35         1
24876                          S-316        5       25     NO     34.5       1
24876                          PERC         5       25     NO                1
24877   METAL FINISH EFFL      FREON       15       15     NO      0.5       1       DISPERSE PARTICLES
24877                          S-316       15       15     NO      1.2       1
24877                          PERC        15       15     NO                1
24878   METAL FINISH PROCESS   FREON        5       20     NO      0.3       1       MURKY WHITE
24878                          S-316        5       20     NO      3         1
24878                          PERC         5       20     NO                1
24879   PUMP MFG               FREON        3       27     NO     13.2       1
24879                          S-316        3       27     YES    14.2       1
24880   BACON PROCESS          FREON       15       15     NO      4.2               WHITE
24880                          S-316       15       15     NO      5
24881   SHORE RECEPT           FREON       15       15     NO      1.5       1       GRAY
24881                          S-316       15       15     NO      1.9       1
24882   SHORE RECEPT           FREON       15       15     NO                1       MORE GRAY
24882                          S-316       15       15     NO      3.5       1
24883   CAN MFG PROC.WW        FREON       15       15     NO      2.3       1
24883                          S-316       15       15     NO                1
24884   CAN MFG PROC.WW        FREON       10       20     YES               1
24884                          S-316       10       20     YES    14.7       1
24886   DRUM HANDLN            FREON       15       15     NO     16         3       GREEN VOLCANO
24886                          S-316       15       15     NO     13.4       3
24887   FORMUL PLANT           FREON        2       28     YES               1       BUBBLE BATH
24887                          S-316        2       28     YES               1
24888   LEATHER TANNING        FREON       10       20     YES     3.1       1       BROWN COW
24888                          S-316       10       20     YES     3.6       1       SMELLY
24889   CHEMMFG EFFL           FREON       10       20     NO      7.8       1       SUSPEND COLLOID
24889                          S-316       10       20     NO      8.4       1

                                      PAGE 1
                                        143

-------
SAMPLE DESCRIPTION

SAMPLE  ID                      SOLVENT  ML SAMP  ML SOLV  NA2SO4  RESID.  EXT. TIME
24890   DYE PLANT               FREON       15       15     NO      1.5       1
24890                           S-316       15       15     NO      2.2       1
24891   CHEMMFG PRIME EFFL      FREON       15       15     NO      3.5       1
24891                           S-316       15       15     NO      3.7       1
24891                           PERC        15       15     NO                X
24892   PACKING PLANT EFFL      FREON        5       25     NO     31         2
24892                           S-316        5       25     NO     30         2
24892                           PERC         5       25     NO     39         2
24893   DRUM HANDLN EFF         FREON       15       15     NO      6.9       2
24893                           S-316       15       15     NO      6.6       2
24893                           PERC        15       15     NO      6.3       2
24894   MEAT PROCESS EFFL       FREON        5       25     NO     40.5       2
24894                           S-316        5       25     NO     50         2
24894                           PERC         5       25     NO     20         2
24895   EXTRUSN PLANT PROC.WW   FREON        2       25     NO     75         1
24895                           S-316        2       25     NO     80         1
24895                           PERC         2       25     NO     30         1
24896   OLIVE PACKIN EFFL       FREON       10       20     YES               1
24896                           S-316       10       20     YES    11.8       1
24896                           PERC        10       20     YES    16         1
24897   BUS MAINT EFFL          FREON       10       20     NO      5         1
24897                           S-316       10       20     NO      4.6       1
24897                           PERC        10       20     NO      3         1
24898   MEAT PROCESS EFFL       FREON       10       20     YES    12         1
24898                           S-316       10       20     YES    10.8       1
24898                           PERC        10       20     YES     8         1
24899   RENDER FACIL EFFL       FREON        3       25     NO     17         2
24899                           S-316        3       25     NO     30         2
24899                           PERC         3       25     NO     15.5       2
24900   INDUST LAUNDRY EFFL     FREON        5       25     NO      5         2
24900                           S-316        5       25     NO     13.2       2
24900                           PERC         5       25     NO      1.5       2
24901   RAILRD MAINT EFFL       FREON        5       25     NO     13.8       1
24901                           S-316        5       25     NO     42         1
24901                           PERC         5       25     NO     12.6       1
24902   BACON PLANT PROCESS     FREON        5       25     NO     10         1
24902                           S-316        5       25     YES    42         1
24902                           PERC         5       25     YES    12.6       1
25101   SHORE RECEPT OILYWW     FREON       10       20     NO      1         1
25101                           S-316       10       20     NO      1.2       1
25101                           PERC        10       20     NO      0.6       1

COMMENTS: LT. BLUE; FULL BOTTLE EXT 13.2; OL YELLER; GR/WHT/BRN; ORANGE;
MARTINI TIME; BRW/GRONG FLT STUFF; CRY NO DEBRIS; BRKFAST SMELL

                                      PAGE 2
                                        144

-------

                                     MR. TELLIARD: Our next speaker is going to also
be discussing the nondispersive infrared technique, Jerry DeMenna. Jerry worked on this
paper at home at night by the fireplace. Jerry's theory is that if you are going to do a paper
and you have to submit it, do not let anyone know where you are, because that way, you
do not have to worry about when to get it in, but, eventually, it did appear, and we are glad
to have him  here today.  Jerry?
             CURRENT ADVANCES IN OIL AND GREASE USING NDIR
                           WITH THE "NEW" SOLVENTS
                                     MR. DEMENNA:   I want to  thank  you all for
coming, and I want to thank Bill for giving me the clean-up spot again for the third time in
a row.

      I started out in analytical chemistry as a food scientist years ago, so when you say oil
and grease, that is sort of in my blood, both figuratively and  literally, unfortunately, based
on today's lunch.

      When  I started out with Bill Telliard and his group, I  learned that a significant
number of variables and  parameters for oil  and  grease  analysis  is really somewhat
unexplored territory.

      My background is in analytical spectroscopy and food technology, so this area where
practical applications do not usually follow the theory and where the instrumentation usually
is not compatible with the chemistry is somewhat an area of concern, because  it would be
nice to have everything coordinated so that you can use a chemistry that is applicable to
some instrumental technique or the theoretical mechanisms do predict how a sample should
be prepared.

      Current methodologies for oil and grease and TPH consist of the existing gravimetric
methods, the  modification  using  an infrared detection with  a non-hydrocarbon based
solvent, and some States are using gas chromatography with an FID detector for the TPH,
not so much for the oil and grease, but I figured  I would throw it in to be thorough.

      Using the classical extraction or whatever extraction, be it solid phase disks or solid
phase tubes, I tried  to focus on a detection method that would be very quick and  very
reproducible and save the time involved with the classic gravimetric analysis.

      We did a series of samples by separatory funnel, by Soxhlet, to judge the efficiency
of the recovery, and we followed up with gravimetric analysis, and gas chromatographic
analysis.


                                       157

-------
      Unfortunately, in a GC, you do not get a single number like you do off an infrared
at a fixed wavelength.   You either get a group of peaks which, in this case, it was a
kerosene sample from the State of Pennsylvania, or you get a big blob which is this.  This
is a grease sample coming off at 400 degrees.

      So, really, chromatography is really not the way to go, and that was something that
Bill  was mentioning last  year. So, I am glad to see no one else is talking about GC.

      The infrared techniques that we utilize can be done on both FTIR, fixed wavelength
nondispersive IR, or even scanning dispersive infrareds. As long as you can sit at the 3.42
micron hydrocarbon wavelength, you can get  a reading, do calibrations, and judge your
precision that way.
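
Whichever IR format is used, the finish is the same: read the absorbance at 3.42 microns and convert it to a concentration through a calibration curve built from oil and grease standards. A minimal sketch of that conversion, with invented standard concentrations and readings (this is an illustration, not data from the paper):

```python
import numpy as np

std_conc = np.array([0.0, 10.0, 50.0, 100.0, 250.0])     # ppm, hypothetical standards
std_abs = np.array([0.000, 0.028, 0.142, 0.281, 0.699])  # absorbance at 3.42 microns

# Fit A = m*C + b; the response is treated as linear over this range
m, b = np.polyfit(std_conc, std_abs, 1)

def conc_from_absorbance(a):
    """Invert the calibration line to report ppm oil and grease."""
    return (a - b) / m

print(round(conc_from_absorbance(0.105), 1))  # one sample reading -> ppm
```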

      Here is a standard 1 cm liquid cell in an FTIR unit.

      This is a fixed wavelength IR set at 3.42 microns,  again, with an open sample
compartment, and you can fit a variety of different sample cells in there to accommodate
the concentration or pre-concentration of the sample.

      Right now, the way I  understand it, again not coming from an environmental lab
environment,  is that Bill and his people are looking for a safe solvent to use for gravimetric
methods  or a less hazardous non-Freon solvent to use for both infrared and gravimetric
where you have good correlation and you do not damage the environment or ourselves too
much.

      What we did  is we looked at a variety of  materials using a couple of the solvents that
have been floating around.  Last year, it was mentioned that hexane  and what was called
the 80/20 hexane/MTBE  mixture was going to be used but that it was not under serious
consideration  because of its neurotox...neurotox...neurotoxicity feature that Bill had brought
up.  Thank you for wanting me, Bill.

      Our neighbors to the north had been using cyclohexane with some degree of success.
I do not  know how it has worked compared  to Freon in our situation.  Within the last
month or two, the perchloroethylene came up.

      So, we decided to look at all these materials compared to  Freon-113.

      Now, I am one of the few people from Rutgers University that got his Ph.D. and
failed p-chem, so I cannot tell you what any of these things mean except to show that there
are  differences  in  something called the dipole moment, dielectric constant,  magnetic
susceptibility.  Boiling point I know. I  can figure that one out myself.
                                       158

-------
       But trying to get an idea of why one solvent or solvent mixture gives good recovery
for certain oil and greases and does not for another is still undiscovered country.  Maybe
Gene Roddenberry's ghost will come back this fall and do an episode on that for us.

       What we did is try to modify the existing infrared version of the method so that we
could use less solvent, be it Freon which is quite expensive and, technically, not supposed
to be used, or whatever the new material would be and try to use instruments for detection
so that we could  maintain good detection limits and get a little bit faster turnaround than
classical gravimetric techniques and keep our overall precision in acceptable ranges.

       So, we prepared the samples by the standard procedure using the four solvents, the
three proposed and the Freon.  We downscaled the  size of the  sample and either the
volume or weight of solvent used by from a factor of 2 to a factor of 10 which falls in line
with the solvent reduction protocols from last year.

       We did the analyses by, again, the gravimetric, by liquid cell infrared with a standard
1 cm cell, quartz cell, and by something called the cavity cell  which we developed last year
to allow us to use hydrocarbon based solvent in an infrared. Normally, you cannot do that,
because a solvent has so much of an absorbance, it would swamp out the sample.

       The tests were done on an HC-404 fixed wavelength hydrocarbon analyzer set  at
3.42 microns. Again, the analyses were done with a standard 10 mm, 1 cm IR quartz liquid
cell and what we call the cavity cell with a 250 uL depression  in a quartz plate.

       The gist  of it was that we can utilize the gravimetric preparation with an infrared
finish to it.  So, we have the speed of an infrared determination, and we have the sensitivity
of an infrared determination compared to gravimetric, and we do not have the interference
of a  hydrocarbon based solvent that the infrared would have.

       This is the evaporation plate. It is a quartz plate with a depression of about 300 uL,
and  what we did is we followed through our extraction with some methylene blue, oil
soluble methylene blue in the oil and grease sample. Basically, that is our 250 uL in the
cell.

       The back of the unit, the 404 unit,  has a pair of heat fins that are about 45 degrees
C, So, we put the plate on the heat fins which is a very constant temperature, and within
about 30 seconds to 45 seconds, all of the solvents we evaluated were evaporated, leaving
a residue of your grease, and you can see the little blue stain down there.

       That whole cavity fits into the beam of the infrared.  So, again, most dispersives and
FTIRs have a fairly large beam,  and we designed these plates so  that  the beam would
encompass the  entire area of it.  So, whether the grease is on the bottom or on the side
makes no difference. It is all going to be in the path of the  infrared.
                                       159

-------
      You place it in the unit, and you can see the large white area encompassing the
depression in the cell.

      I  did not know Bill long enough to be blessed with some of his smelly, grungy
samples that everybody else was, so I  took some stuff that we had in New Jersey and did
the three preparations. No disrespect to New Jersey.  I am a native, but I used to be 6'5".

      Using the four solvents and the three methods, we just came up with a table of
correlations.

      This was solid waste from a settling  pond bed in Camden.  I do not think it is
Campbell Soup, but I  cannot be sure. The 80/20, the cyclohexane, the perchloroethylene,
and Freon.

      By gravimetric, we could not do the perchloroethylene, because the boiling point
was too  high, so we could not draw it off without drawing off the grease. We could not use
the 80/20 and the cyclohexane in the regular liquid infrared cell, because there is about
999,000 ppm of hydrocarbons in there. Likewise, the perchloroethylene we could not use
in the evaporation plate because of the higher boiling point.

      However, based on the classic method which would be Freon by gravimetric, we can
see that  the 80/20 and cyclohexane came out a little bit low, and the reproducibility on the
cyclohexane was  not that good.  The liquid  infrared by Freon came  out higher.  By
perchloroethylene, it  came out much higher.

      The gravimetric technique using the 80/20 and the cyclohexane, because they are
volatile, was not that  good.  The 80/20 came out significantly low, about 15 percent low.
The cyclohexane was not too bad. And the Freon came  out quite good compared to the
gravimetric.

      So, again, we  have an evaporation preparation, a gravimetric preparation, with an
infrared  detection.  So, you can combine the two technologies.

      This was a residential soil from  former farm land. There was a tremendous amount
of organics in  there from biomass decomposition and tilled manure and other material.
Even with the silica gel treatment, we found some significant discrepancies.

      Again, the classical value of  Freon gravimetrically of 11 ppm, the 80/20  and
cyclohexane again coming out low, but the evaporation by Freon coming out not too bad,
10 ppm versus 11, much closer than the 14 ppm by the  liquid IR.

      Again, you do  not lose, as someone else mentioned on the panel here, some of the
light organics that are extracted in a liquid-liquid extraction  and not volatilized. So, that is
a part of the problem.


                                       160

-------
       This is a solid sewage sludge that came from an area with a lot of petrochemical
activity.   Here, by the gravimetric  technique, we got about  4 ppm.   Our error was
significantly bad there, because the water was actually quite clean.  Excuse me, this is a
water sample, not the solid waste.

      By 80/20 and cyclohexane, we got nothing.  By perchloroethylene in the liquid cell,
we got about 5.6, a little bit higher, again, keeping in line with the other recoveries we saw.
Freon by the liquid cell, again, higher.

      Freon by the evaporation plate, because our detection limit by infrared is significantly
better than by gravimetric, we were able to quantitate this to about 3.5 ppm quite precisely,
and the gravimetric preparation with  the infrared finish for the 80/20 and the cyclohexane
were not too shabby.  At least we saw some good  reproducibility at the low levels.

      This is a discharge  process water that is cleaned up from a metal processing plant.
Again, the classic value would be 17.7 ppm.  The gravimetric with the 80/20 came out to
be fairly close.  Cyclohexane came out high. The perchloroethylene by the liquid cell came
out high,  again in keeping with all the other data.

      The infrared with the Freon in  the liquid cell came out significantly higher, showing
there were a  lot of volatile components that are maintained in the straight liquid running,
but the  correlation to the  evaporation  cell technique is quite good, 18.4 to 17.7 is a lot
closer than 21.7 to 17.7.

      This  is  a  situation where some aerospace parts  are  being cleaned to check
manufacturing QC for residual oil and grease, and here we have our classic value. Again,
excellent precision by gravimetric analysis. Because this is mostly high molecular weight
petroleum hydrocarbon greases, we see good correlation overall between the 80/20 and the
cyclohexane gravimetrically, between the perchloroethylene and the Freon in the liquid cell,
and even between the 80/20, the cyclohexane, and the Freon by the evaporation technique.

      So, what we did was just tabulate a typical solvent usage.  I think the hexane affected
my eyes, also.  Gravimetric techniques took about 100 ml of solvent, the liquid infrared
about 50 ml,  and the evaporation plate only about  25 ml of solvent.  So, we were able to
reduce our solvent usage and our costs there.

      The analysis time,  the  gravimetric, depending  upon the  material, took about 40
minutes on average.  The liquid  IR technique, once a sample is prepared,  takes only  3
minutes to transfer it to a cell, make sure it is filtered for no particulates because the liquid
cell will give an absorbance if there  are particulates that will scatter the light, and that  is
about 3 minutes.   The evaporation plate technique took about  10 minutes.

      So, here we have less solvent. It takes less time than gravimetric, a little bit more
time than  liquid, but the correlation to the gravimetric procedure  is much higher.
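
As a rough batch-level illustration of those per-sample figures (solvent volume and analysis time), assuming a hypothetical 20-sample batch:

```python
# Per-sample figures quoted above; the 20-sample batch size is an assumption.
methods = {
    "GRAV":    {"solvent_ml": 100, "minutes": 40},
    "LIQ-IR":  {"solvent_ml": 50,  "minutes": 3},
    "EVAP-IR": {"solvent_ml": 25,  "minutes": 10},
}

batch = 20  # samples
for name, m in methods.items():
    print(f"{name:8s} {m['solvent_ml'] * batch:5d} mL solvent, "
          f"{m['minutes'] * batch / 60:4.1f} h of analysis per {batch} samples")
```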


                                        161

-------
      So, based on some spike recoveries and using solvents with boiling points less than
81 degrees which is the 80/20, the cyclohexane, and the Freon, we calculated by running
a series of blanks and spikes a method detection limit of 1.5 ppm which gives a 0.0044
absorbance unit signal...our  sensitivity, rather, 1.5 ppm.

      That gives us a detection limit of 4 ppm with 10 percent reproducibility at the 0.01 A
or 0.01  absorbance unit level which is quite low, and we are linear up to 250 ppm. We
have pretty much a straight line at a 0.98 correlation.
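
The speaker does not spell out the calculation, but a replicate-based detection limit of this kind is normally computed as Student's t times the standard deviation of low-level spike results (the 40 CFR 136 Appendix B MDL procedure). A sketch with hypothetical replicate values:

```python
import statistics
from scipy import stats

replicates = [3.4, 4.1, 3.8, 3.6, 4.4, 3.9, 3.5]  # ppm, hypothetical low-level spikes
s = statistics.stdev(replicates)                  # sample standard deviation
t99 = stats.t.ppf(0.99, df=len(replicates) - 1)   # one-sided 99% Student's t
print(f"MDL = {t99 * s:.1f} ppm")
```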

      So, this evaporation cell, cavity cell, technique is really applicable to all infrared
units, filter infrareds, nondispersive, scanning dispersive,  and FTIRs that  have  an open
sample compartment to allow you to place a  cavity cell  in there.

      We are working at developing a cell with a deeper cavity so we can use a  larger
sample aliquot, improving our reproducibility, because when you try to micro-pipette 250
uL of a volatile solvent, you will have some error, and also you  can get a little bit  larger
residue which will give you, technically, a factor of  2 better  improvement in sensitivity.

      The current studies with the listed solvents and a variety of sample matrices, until Bill
sends me some of his  goops,  show  suitability for  universal applications with  minimal
procedural changes and equipment modifications. So, for those of us that have infrareds,
you know, let us not throw the baby out with the bath water,  so to speak, till we figure out
which solvent is going to be used.

      Regardless of what solvent will be used, there are ways to still use your infrared so
you can get the sensitivity and speed of the  analysis without having to worry about the
problems  with the hydrocarbon based solvents.

       Thanks.  Any questions?
                                        162

-------
                       QUESTION AND ANSWER SESSION
                                     MR. BANSAL:   I want to find out, does the
thickness of the residue on the IR cell contribute any problems?

                                     MR. DEMENNA: The thickness of the residue?

                                     MR. BANSAL: Yes.  I mean, is it to be uniform
on the entire face of the IR cell?

                                     MR. DEMENNA:  It does not make a difference.
As long as the entire deposit is exposed to the infrared, you will get an absorbance.  So,
whether it is spread  out 100 microns thick over 10 mm or 10 microns thick over 100 mm
makes no difference, because the whole mass... it is a molecular absorption, so as long as
all the molecules are hitting light, you will get the same absorbance.

                                     MR. BANSAL: I see. How do you ensure that the
residual, like if you  are using cyclohexane, that all cyclohexane is gone,  that it does not
create any problems in the signal that you are getting from the IR?

                                     MR. DEMENNA: Basically, just like gravimetric,
you would basically evaporate to constant weight.  Here, you just put the thing on the heat
fins,  stick it in the machine every 15 seconds until you have a  constant absorbance, and
then our studies have shown that that has drawn off all the solvent.
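
A sketch of that "re-read until the reading stops changing" check, using a simulated instrument so the logic is self-contained; the readings and the read_absorbance helper are stand-ins, not part of the HC-404 interface:

```python
import itertools
import time

# Simulated readings: absorbance falls as solvent flashes off, then levels out
# at the residue's value (numbers are made up).
_readings = itertools.chain([0.240, 0.120, 0.052, 0.031, 0.030],
                            itertools.repeat(0.030))

def read_absorbance():
    """Stand-in for an instrument read; not a real HC-404 call."""
    return next(_readings)

def evaporate_to_constant(tolerance=0.001, interval_s=0, max_reads=20):
    """Re-read until two successive readings agree within tolerance."""
    last = read_absorbance()
    for _ in range(max_reads):
        time.sleep(interval_s)        # roughly 15 s on the heat fins in practice
        current = read_absorbance()
        if abs(current - last) <= tolerance:
            return current            # solvent gone; reading is stable
        last = current
    raise RuntimeError("Reading never stabilized; solvent may remain.")

print(evaporate_to_constant())
```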

                                     MR. TELLIARD:  Yes, sir?

                                     MR. SLENTZ: I am Kurt Slentz with Energy Labs.
I have got a question for Bill of the EPA.

      We have a number of clients that are required by their permits to run  the 413.1
method, and if we phase out Freon, what is going to be their possibilities with your agency
for some of these other techniques we have seen?

                                     MR. TELLIARD: We are proposing to have 1664
proposed this summer  and final by the end of the year. So, there is going to have to be, at
least gravimetrically...I do not know  if we are  going to have  NDIR, but, certainly,  the
gravimetric procedure  has got to be ready by the end of the year.

                                     MR. SLENTZ: What about the solid phase stuff that
we were looking at? Are  you guys taking a look at allowing us to use that, too?
                                       163

-------
                                    MR. TELLIARD: Yes, we are.  We do not know
the answer yet, because, as you know, we have not all sat down and crunched the data.
We will put out a Phase II report which will contain all of the data you have heard here
today from API and from the industrial laundry folks and from the solid  phase people and
from the infrared folks.  It will all be in a final report that we will have  available, and we
are working on that for proposal, as I say, during midsummer.

                                    MR. SLENTZ: Then you are going to publish that
in the Register so it is...

                                     MR. TELLIARD: We will notice it in the Register.
We will not publish it.  It will be too expensive to publish it, but we will notice it, and you
can write for it.

                                    MR. SLENTZ: Then, do they have to  modify their
permits to use those methods?

                                    MR. TELLIARD:  We hope we can  handle that
through the way we notice the procedure.

                                    MR. SLENTZ:  Okay, thank you.

                                    MR. TELLIARD: You are welcome.
                                      164

-------
      ENVIRONMENTAL OIL & GREASE SESSION:

 Current Advances in Oil & Grease Analysis using NDIR
               with the "New" Solvents

      Gerald J. DeMenna,  Chem-Chek / BUCK
   44  Stelton Road,  Piscataway, NJ     08854
                 [908]  752-7793

  17th Annual EPA  Conference   //  3  May 1994
               Norfolk, Virginia
            CURRENT METHODOLOGIES:

   [1]   Extraction and Gravimetric Isolation
               of TPH materials

   [2]   Extraction and Liquid Cell IR Filter
Photometry of TPH  (C-H) Absorptions  at 3.42 uM
      [3]   Gas Chromatographic Separation
           with FID Detection  for  TPH
             STATUS OF METHODOLOGY:
      [1]   Use a "safe" hydrocarbon-based
    solvent for gravimetric  procedure only

    [2]  Use  a "less hazardous"  non-Freon,
    non-hydrocarbon solvent for both IR and
            gravimetric  techniques
             EVALUATION PROTOCOLS:

                       165

-------
      -=> Perform extractions on a variety of
           samples  with assorted  solvents
[80/20 Hexane-MTBE, Cyclohexane,  Perchloroethylene]

      -=>  Compare  recovery  with  the previously
            approved method using Freon
              [trichlorotrifluoroethane]
               EVALUATION PROTOCOLS:

      [1]  Prepare the sample and perform the
         extraction per standard protocols
               [SW-846 / #9070-9071]
[2]   Decrease solvent utilization  by downscaling the
      Volume or Weight of SOLVENT used for the
          extraction by a  factor of  2 to  10
             [dependent on  sample matrix]
      [3]   Perform the analysis  by Gravimetry,
        Liquid Cell IR and "Cavity Cell" IR
           [as proposed at 1993 meeting]
                  INSTRUMENTATION:
         Model HC-404 Hydrocarbon analyzer
      [Filter IR Photometer w/ 20 cm-1 bandpass
                  at 3.42 microns]

   Normal Analyses w/ 10mm.  IR-Quartz Liquid Cell
    Evaporative Analyses w/  250uL IR Cavity Cell

    Items purchased  from BUCK Scientific, Inc. ,
                   E. Norwalk,  CT
                    PRINCIPLES:
            Like the existing gravimetric
        procedure, the  "evaporation method"
    will  allow the chosen SOLVENT to volatilize
     and  leave the TPH, Oil & Grease as a  film
      residue  in an IR-transparent quartz plate.
                    166

-------
                BENEFITS of MODIFIED PROCEDURE:
          [1] Allows use of significantly lower volumes
                of regulated & proposed solvents.

          [2]  Allows user to achieve faster results,
              better D.L.s and overall precision.
                      EXPERIMENTAL SET-UP:
               Samples of solid wastes and liquid
       effluents were examined by the 3 defined methods
                     GRAV, LIQ-IR, EVAP-IR
           using the 4 existing or proposed solvents
                80/20,  cyclohexane, Perc & Freon
                 Sample #1:   Settling Pond Bed,
                Camden, NJ / discharge line

    Data is AVERAGE in PPM from triplicate preps w/  [%RSD]

             [1] GRAV          [2] LIQ-IR         [3]  EVAP-IR

  80/20      68  (4.7%)           n/a              70  (5.1%)
Cyclohex     75  (5.2%)           n/a              85  (4.3%)
  Perc     >250  (n/a)          94 (2.9%)         >250  (n/a)

 Freon       81  (3.4%)         86 (1.9%)           79  (3.2%)
                               167

-------
                 Sample #2:  Residential Soil,
            Piscataway, NJ / past farm usage, biomass

    Data is AVERAGE in PPM from triplicate preps w/  [%RSD]

             [1] GRAV         [2] LIQ-IR        [3]  EVAP-IR
  80/20      9.8 (6.3%)           n/a             9.5  (4.4%)
Cyclohex     8.7 (4.5%)           n/a             9.1  (3.7%)
  Perc      >100 (n/a)        12 (2.9%)          >100  (n/a)

 Freon       11 (2.9%)        14 (2.2%)           10  (3.1%)
          Sample #3:   Post-treatment Sewage Discharge,
            Metarie, LA / petrochemical activity

    Data is AVERAGE in PPM from triplicate preps w/  [%RSD]

             [1] GRAV         [2] LIQ-IR        [3]  EVAP-IR

  80/20     < 10 (n/a)           n/a             4.5 (5.6%)
Cyclohex    < 10 (n/a)           n/a             5.9 (4.2%)
  Perc      >100 (n/a)         5.6 (4.0%)         >100 (n/a)

 Freon       ~4  (6.8%)        5.1 (3.3%)         3.5 (3.6%)
             Sample #4:  Metal Finishing Facility,
            Skokie, IL / process water, recycled

    Data is AVERAGE in PPM from triplicate preps w/  [%RSD]

             [1] GRAV         [2] LIQ-IR         [3]  EVAP-IR
  80/20     16.8  (4.9%)           n/a             15.9  (5.3%)
Cyclohex    20.7  (4.4%)           n/a             21.0  (3.8%)
  Perc      >100  (n/a)        19.2  (4.8%)         >100  (n/a)

 Freon      17.7  (3.8%)       21.7  (2.9%)         18.4  (3.2%)
             Sample #5:   Aeronautical Parts Cleaning
          Schenectady, NY / manufacturing QC program

    Data is AVERAGE in PPM from triplicate preps  w/  [%RSD]

                          168

-------
             [1] GRAV         [2] LIQ-IR        [3]  EVAP-IR

  80/20      29 (3.4%)           n/a              30 (4.2%)
Cyclohex     32 (2.5%)           n/a              31 (3.8%)
  Perc      >100 (n/a)        34 (3.1%)          >100 (n/a)

 Freon       33 (2.4%)        35 (2.2%)           32 (2.9%)
                 EXPERIMENTAL METHOD COMPARISONS:

          METHOD           AVERAGE              AVERAGE
           Type          Solvent Use         Analysis Time

           GRAV            ~100 ml.            ~40 mins.
          LIQ-IR            ~50 ml.             ~3 mins.
          EVAP-IR           ~25 ml.            ~10 mins.
                     SOLVENT CHARACTERISTICS:

 SOLVENT      DIPOLE      DIELECTRIC      MAGNETIC      BOILING
  Type        Moment       Constant       Suscept.       Point

  80/20        0.77          2.13            64            65
  Cyclo        0             1.80            68            80
  Perc         0             3.89            82           121
  Freon        0.87          4.22            57            46
                      EVAP-IR PERFORMANCE:

          [based on 250uL aliquot in 5mm x  4»m cavity,
               for solvents w/ BP < 81 degrees C]

           Sensitivity:   1.5  PPM gives 0.004A signal

       Detection Limit:   4 PPM gives 10% reproducibility
                        at approx.  0.01A

         Linearity:  Correlation of 0.98 up to 250 PPM
                               169

-------
                      CONCLUSIONS:

This technique is adaptable to all IR Photometric units
  with  an  open  sample compartment to allow use of  the
            "cavity cell" evaporation plate.

  Current  studies with listed  solvents and a variety
         of sample matrices show suitability for
     "universal"  applications with minimal procedural
              and equipment modification.
                        170

-------
                                     MR. TELLIARD: At this point in the proceedings,
we are going to pass out the method for your review. There are two issues. First, there is
a questionnaire on oil and grease with it. It will take you three to four minutes to fill it out.
I would like your input, and we would like to get it, before you leave.  Unless you want to
spend the evening, you have got to fill out the form.

      Secondly, we are looking for a  few good labs to participate in a round-robin testing
of both the solid phase procedure and 1664 as it is presently written. We would like some
feedback after you folks get a chance to look at it. Tomorrow, I will give you my address,
phone number, and box number, and  if you want to drop me a line, I would appreciate it.

      If you work in a laboratory or  are associated with a laboratory and would like to
participate in a round-robin on oil and grease, the cutting edge of science, please stop me,
Dale, or Marion while you are here and give us your name and information. We will be
glad to contact you, make arrangements to ship you the samples, tell you the data recording
requirements, and so forth.

      I want to thank all the speakers for today.  I hope you  enjoyed it. I hope you learned
something. We will see you tomorrow morning, but you cannot leave until you fill out the
form.

      Thank you.
      (The Conference was recessed at 4:22 p.m., to reconvene the following day, May 4,
1994, at 8:45 a.m.)
                                       171

-------
(Blank Page)
    172

-------
                                                                   May 4, 1994
                                     MR. TELLIARD: Good morning. I would like to
get started with today's session, please.

      A couple of brief announcements. Last year, for those who attended, you remember
that we had a small hole in the program regarding the policy that relates to the application
of method detection limits and minimum levels and its relation to water quality based limits
and, more specifically, those limits that are below the method detection limit that are water
quality based limits.

      We have made available on the back table copies of the memo that went out on that
policy. It is a draft proposal. You are welcome to any of those copies. If, for some reason,
you  do not  get one, if you will  let the folks know at the table  outside, we will make
arrangements to mail you one.

      Also,  in addition to that, there are copies of the Martha Prothro memo floating around
back there, as well. This memo focuses on today's subject, which is the application of
trace metals  and the issue of the dissolved versus the total metals or available metals  as it
relates to the water quality standards.

      So, those documents are available. If, for some reason, you do not get one of those,
again, check with the folks at the table outside at the break, and we will make arrangements
to get you one.

      Our first speaker this morning is Jim Hanlon.  Jim is the Deputy Director of the Office
of Science and Technology in the Office of Water.  Jim has been with the Agency for quite
a while. In his former life, he was the Director of the Construction Grants Program upstairs.
For those of you who dealt with that program, you probably know Jim pretty well.

      Jim is going to speak to you this morning on an overview of what is happening in
the regulatory field as it relates to metals and a few other tidbits of information that he has
brought down from Washington.  Thank you.  Jim?
            REGULATORY BACKGROUND DETERMINATION OF METALS
                 AT AMBIENT WATER QUALITY CRITERIA LEVELS
                                     MR. HANLON:  Good morning. This is the 17th
Annual Analytical Methods Conference. As many of you may know, Bill has been involved,

                                       173

-------
I think, in all 17 of these, but I want to make sure you all know that he started this as a high
school project.  So, basically, even though he has been involved  in the whole series of
conferences,  he started at a very young age and is looking forward  to the next  17
conferences.

      We, at EPA as well as the rest of the Federal Government, have gone through a
political transition and turnover over the last 18  months.   Carol Browner  is now the
Administrator of the Environmental Protection Agency, and Bob Perciasepe is the Assistant
Administrator for Water. In government, as you go through these transitions, you observe
the list of themes  and directions that each administration brings in  with them.

      A theme, however, that has carried over, clearly, from the Bill Reilly tenure at EPA
to Carol Browner's tenure is that of sound science.  A very high  priority on an Agency-wide
basis is that our programs be founded in sound science, and I  think the subjects that you
are dealing with during this conference in terms of our ability to identify and measure not
only where we are at but where our objectives are at, where  we are going, has never been
more critical.

      Basically, the investments that our society makes in  pollution control activities is
continuing to increase as our knowledge about how pollutants and contaminants interact
in the environment  increases, and it is ever more  important that we are  able to measure
where we are at and, again,  where we are going.

      What  I am going  to  talk about this morning is the issue  of metals  and metals
measurement and metals policy issues in our water quality programs.

      Metals issues have been around since the beginning. Those of us who have been in
the water quality programs over the last 20 years basically recall that the focus early in the
'70s, after the passage of the Clean Water Act, was on the more conventional pollutants;
BOD and suspended solids.  Not to forget those favorite target  pollutants, I notice on your
agenda this afternoon, there is the ever-present session on BOD measurement.

      This morning, we want to focus on metals and how the Office of Water is dealing
with metals in our regulatory programs and where we are going from a policy perspective.

      A basic and fundamental responsibility of the Clean Water Act that is assigned to the
States is the management  of the water quality standards  program.  By  definition, water
quality standards include three components: designated uses  of individual water bodies; an
anti-degradation policy; and  numeric and narrative criteria.

      Water quality criteria, therefore,  are fundamental building blocks in the foundation
of the water quality program that is laid out within the structure of the Clean Water Act.
It is those criteria that EPA develops that are adopted by the States in their water quality
standards that are the basis  for point source permitting actions that are  taken under the

                                        174

-------
Clean Water Act through the National Pollutant Discharge Elimination System permitting
program, the NPDES program.

      Currently, all States are responsible for implementation of water quality standards,
and 38 States have the responsibility for issuing individual point source permits.

      Water quality criteria become  enforceable instruments when they are adopted by
States into their water quality standards and then are enforceable on a point source basis as
they are incorporated into permits.

      Currently, EPA has developed two types of water quality criteria.  Basically, each of
these criteria types measure water column concentrations of pollutants.

      The first type of criteria currently available is for the protection  of aquatic life. We
currently have 30  criteria issued that are aimed at the protection of aquatic life.  Another
set of criteria are designed to protect human  health.  The Agency has  issued 91 criteria
aimed at human health protection.

      Our future  or the work in progress, illustrates that we are headed towards several
other different types of criteria that you will be hearing about  in time to come. The first is
sediment criteria, basically, criteria documents that would allow the measurement and set
objectives for contamination in sediment.

      Secondly, we have criteria documents  that would again be water column criteria
aimed at the protection of wildlife.

      Criteria are developed in the laboratory setting with the objective of  determining
acceptable levels or protective  levels  of contaminants, in this case, in the water column.
When you  go through that protocol, it is possible and often the case that the concentration
that ends up in the criteria document is below our current  levels of detection. Basically, we
are not able to measure at the criteria levels.

       This brings us to the metals issue. Many of the metals fall into the category of criteria
levels being below the level of detection.  In addition, metals  management in the aquatic
environment is further  complicated  by the site-specific nature of metals  toxicity.

      We have found in our efforts to manage metals in water quality limited water bodies
that we are often calling for an ability to measure at almost 300 times lower than levels that
have historically been required in the context  of technology-based  limits.

      Metals  in  the environment  exist and  can   be measured  in total form,  in total
recoverable form,  and in a dissolved form.   Metals  also appear  in both organic and
inorganic forms. The bioavailability and the related toxicity of metals varies, depending on
all the variables we have just talked about including form and speciation.


                                        175

-------
      It is also often necessary to determine more than one form of a metal in a particular
setting. In doing so, it could require multiple procedures for sample handling, preservation,
preparation, and analysis.

      Essentially,  what we are doing here is setting the table in terms of the whole myriad
of complications that we are faced with and, I am sure, you  are faced with as you look at
the need to  assess concentrations and appropriate levels in samples representing conditions
in the environment.

      From a policy perspective, EPA's role was to determine from a metals management
standpoint,  first of all in the ambient environment, what form of metals are we concerned
about? Historically, the Agency's position  was...to base all criteria on  total  recoverable
metals.  That was the best information  available in  the mid  '80s  when those criteria
documents  were developed.

      In  January  of 1993, we convened a workshop of experts representing industry,
academia, States, other Federal agencies to  talk about metals management and to provide
recommendations  in  terms of  the aquatic  environment and what were  appropriate
approaches for metals  measurement.

      Based,  in large part, on  the advice of the assembled experts at that meeting  that EPA
issued a memo in  October of 1993 that expressed the policy preference of EPA's Office of
Water to  use the dissolved form of metals  in the measurement of metals in the  ambient
aquatic environment.

      That was a  change in policy that had been brewing for some time and was clearly
articulated in that  October 1993 memo.

      What the memo goes on to say, however,  is that, if you reflect back to the role of
the States in implementing the water quality program, this form of metals management is
a State decision.  The October  memo sets out EPA's recommendation, but States may
choose to use dissolved or total recoverable forms of metals in  establishing and adopting
their water  quality standards.

      The October memo is basically EPA's best advice to the States in terms of how those
standards should be set and what form of metal to measure.

      What we have  also covered in the October memo is an update of the conversion
factors. Recognizing that all the criteria documents that have been issued to date are based
on total recoverable metals, it is necessary to be able to convert those criteria documents
from total recoverable to dissolved.
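
Arithmetically, applying such a factor is a single multiplication per criterion; the criterion and conversion factor below are placeholders for illustration, not values from the October memo or its attachments:

```python
def dissolved_criterion(total_recoverable_ug_per_l, conversion_factor):
    """Dissolved criterion = total recoverable criterion x conversion factor."""
    return total_recoverable_ug_per_l * conversion_factor

# Hypothetical numbers: a 12 ug/L total recoverable criterion and a 0.96 factor.
print(dissolved_criterion(12.0, 0.96))
```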

      The attachments to the October memo laid out our best advice for those conversion


                                       176

-------
factors, and we expect to have out in midsummer, an updated set of conversion factors
based on some additional analysis we are currently doing.

      Going beyond water quality standards, it is also necessary to make a decision  in
terms of what form of metal you use for managing metals across a water body in the
development of total maximum daily loads (TMDL). Again, our best advice is that dissolved
be used, but you would need to be able, when you get  into the TMDL process, to go back
and forth between dissolved and total recoverable.  When you get into TMDLs, it is also
important to continue to recognize the variable nature  of metals and how they behave  in
the environment.

      However, when you get to the NPDES program, EPA's permit regulations require that
permit limits for metals be issued as total recoverable.  The  reason for that is the dynamic
nature of metals and their relationship to the chemistry of the effluent, the chemistry of the
ambient water the effluent would be discharged into, and the interrelationship of the effluent
and the ambient water  at the point of discharge, in the mixing zone, and then as the mix
moves down stream.   So it is, and continues to be,  EPA  policy that permits  use total
recoverable as the method of measurement for metals.

      We  have also recognized  that it is  necessary to go  beyond a recognition of or a
measurement of dissolved metals which takes into some account the site-specific  nature  of
metals toxicity. We have issued additional guidance that will further allow site-specific
calculations of criteria for metals  management.

      These guidance documents include recalculation procedures, and indicator species-
based procedure, also  known as the water effect ratio guidance that has recently been
updated.

      That  is where the National Program is.  Our challenge is,  as we discuss metals
management and  metals measurement throughout the day today, is to assess our ability  to
go beyond  current techniques to be able to approach  the capability of measuring at the
criteria level. That is where we would like to be.

      We  recognize that metals are  essentially ubiquitous in the environment.  We all
know there are many metals that are required as trace dietary elements.  It is extremely
important,  as we get into the clean  and  so-called  ultra-clean techniques, that we are
conscious of the potential for sample contamination and preclude the identification of false
positives.

      I think we  are aware of some of the incidents or examples that have come to light
in terms of historical data that has been subject to sample contamination. That has occurred
with  some  USGS  data.  We are also aware that it has been  the case with some  EPA data
where we have not been as sensitive as we should have been to sample contamination
potential.


                                       177

-------
      We need to improve laboratory capability.  Our current assessment is that our best
capabilities in terms of metals measurement exists in our marine research laboratories.

      We also are of the  opinion, and  correct us if we are  wrong, that there are no
laboratories out there that are currently able to reliably measure metals at criteria levels.

      So, what are we doing about this?  Our office, under the direction of Mr. Telliard,
is developing guidance that will describe sample handling and quality control procedures
necessary to avoid contamination. We are utilizing techniques capable of achieving MDLs
or method detection limits that are 1/10th the criteria levels.  That is the objective: to
demonstrate freedom from contamination.
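
A sketch of that screening check; the metal names, criteria, and MDLs below are placeholders, not EPA values:

```python
# Placeholder criteria and MDLs in ug/L; not EPA values.
criteria = {"metal A": 1.0, "metal B": 2.5, "metal C": 0.012}
mdls     = {"metal A": 0.08, "metal B": 0.5, "metal C": 0.005}

for metal, criterion in criteria.items():
    target = criterion / 10.0          # 1/10th of the criterion
    status = "adequate" if mdls[metal] <= target else "not yet adequate"
    print(f"{metal}: MDL {mdls[metal]} vs target {target:.4g} ug/L -> {status}")
```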

      We are describing the data reporting and data review requirements necessary to
define the quality of data prior to EPA's use of that data for guidance, policy, or regulatory
activities.

      The first of these documents addresses sampling methods and a QC supplement and
is currently in peer review within the Agency, and we project it to be available in June of
this year.

      Additional efforts are underway to develop new analytical methods  and  data
reporting and data review requirements for your use.

      A reasonable question at this time  is, why now?  Why metals in 1994?

      A quick review of the history of the water program would show that the metals
criteria that we have talked about for the protection of aquatic life were developed in the
mid '80s.  In  the  1987 amendments to the Clean Water Act,  Congress required the States
to adopt all available toxics criteria into their water quality standards.  Remember the criteria
to standard to permits relationship we talked about earlier.

       Given the State procedures for changing water quality standards, and recognizing that
Congress laid out a three-year window within which the States were to adopt these toxic
criteria into their standards, it was not until the early 1990s that metals limits began to
appear in many permits.

      A survey we did back in 1988 to take a snapshot of where the industry was at the
time showed that there were very few metals limits or toxics limits in permits that had been
issued to municipalities (POTWs).

       It was, then, in the early '90s, after the three-year window that Congress laid
out for adopting the criteria had expired, that EPA began a regulatory action to require States
to adopt, or to promulgate for the States, toxic criteria into their standards.

                                       178

-------
       In 1991 when we issued the proposed regulations, there were still 22 States that had
not adopted the full suite of criteria into their standards.  We issued the final regulation in
December of 1992, and in the final regulation, only 14 States were included. So, between
November of '91 and December of '92, an additional eight States had adopted toxic criteria,
the full suite of toxic criteria into their standards.

       With the promulgation of the final  regulation, all States had in their water quality
standards toxic criteria, the full suite of toxic criteria.  That included the 91 human health
and the 30 aquatic life criteria I talked about earlier.

       A 1994 survey performed by our permits office identified that, as of right now,
approximately one in three municipalities has toxic limits which  include metals in  their
permits.  So,  you  can  see between  '88, almost  none, and '94, up to a third of the
municipalities have toxic limits in permits.

       It is really at the point in time when a discharger receives a proposed permit in the
mail that says you have now a numeric limit for zinc or copper or cadmium or whatever
the pollutant of concern is in the effluent, that their attention is focused on the impact of the
limit.  What does it mean?  How do I measure it?  Am I  going to be  responsible for
additional control technologies?

       That is why, I think,  we are dealing with metals in 1994 in terms of the importance
of being able to accurately  measure where we are at.

       Another interesting fact is that under Section 304(1) of the Clean Water Act, the States
were required to list impaired water bodies.  That process  resulted  in some 680 water
bodies being listed as impaired,  and our count is that approximately 600 of those listings
were attached to metals contamination.  Therefore, 90 percent or so of the listed water
bodies are listed because of metals concerns.

       The next slide is an eye chart. Basically, what this shows down the left-hand column
is a list of metals and, in the right-hand column, the yes/no(s), indicating which of those
metals  we are currently able to measure at the criteria level, given approved EPA methods.
Of that total list,  there are  seven of those down the right-hand column that we are not
currently able to measure at criteria levels.

       That is our perspective in terms of where our measurement capabilities are today.

       Who do you call with  any follow-up questions?  This list outlines the folks at EPA
who have lead  responsibilities for issues associated with metals and  metals management.

       That summarizes where EPA's Office of Water is at with the metals management and
metals  issues. What I wanted to do for just a second this morning before I turn the podium


                                        179

-------
back to Bill is give you a quick update in terms of where the Clean  Water Act is going,
because, as we have discussed, it was the Clean Water Act requirements within Titles III and
IV that have set the framework for management of metals within our water quality program.

       Currently, there are two bills active on the  Hill.  In the Senate, Senator Baucus has
issued  S.B. 1114 which has been through committee  markup, and we  are expecting the
committee report to be out within the next week or so that outlines a rather sweeping set
of changes for the Clean Water Act.

        In the House, Congressman Mineta, who is chairman of the House Public Works and
Transportation Committee, has introduced H.R. 3948.  That bill is not as far along in the
legislative process.  There  are hearings scheduled on that bill on the 16th of this month, and
the process of the bill going to markup has been on again, off again, within the last month
or so,  but it is now expected that they will not  go to markup until after  the  May 16th
hearing.

       Also, the Administration has issued a publication that is the Clinton Administration's
vision  for Clean Water Act reauthorization.  It is in that document where  we lay out a
proposal for the water quality criteria and standards program. It would require the Agency
to prepare a five-year outline of where the Agency is going in terms of criteria development.

        It is that criteria development process that would drive our need for additional
laboratory and analytical support to assess where we are and more importantly, where we
are going with those criteria.

       In terms of the likelihood  for  either of the  bills in the Congress or elements of the
Administration's proposal to be  incorporated  into the Clean Water Act, the  bet now is
probably 50/50 whether, within this session of Congress, the Clean Water Act amendments
will make it through the legislative process.

       That concludes what  I wanted to say formally this morning.  Bill, do we have a
couple of minutes for questions if anyone has one?

                                      MR. TELLIARD: Sure.
                                       180

-------
                        QUESTION AND ANSWER SESSION
                                     MR. HANLON: Why don't we turn the lights back
up and the slides off.  Certainly, if anyone has any questions, I will attempt to answer them.
If I cannot,  I am sure Bill can.

                                     MR. TELLIARD: Please identify yourself and your
organization. There are mics around.

                                     MR. HANLON: Yes, ma'am?

                                     MS. ASHCROFT:  Navy Public Works Center in
Norfolk, Virginia.  I have a couple of questions, maybe not really directed at you guys,  but
we are getting ready to have our NPDES permit reauthorized or...

                                     MR. HANLON: Reissued, yes.

                                     MS. ASHCROFT:  And they have limits that  are
lower than drinking water.  Why are our permits being written that the water that goes into
a body of water has to be less toxic than what  we drink?

                                     MR. HANLON:  I will give you my answer to that,
and then we will ask for other clarifications or  Bill can get me out  of trouble.

      Did everyone hear the question? Okay.

      The  reason those numbers  can come out that way is that, when you go into a
laboratory and you run the  full suite of tests that are required to develop criteria for  the
protection of aquatic life, it is possible, and it does happen, that you  will get concentrations
of particular pollutants that may be demonstrated in the laboratory, to be toxic to aquatic
life that, in fact, may not harm you  or me if we  have it in the pitcher of water on the table.

      So, although, at first blush, there is an intuitive reaction that  says that doesn't make
any sense, how can I be less sensitive than the fish, I think, in  fact, what the folks who  run
the tests in the lab tell us is that that is exactly what can happen and does happen, and that
is the reason the numbers are different.

                                      MS. ASHCROFT: My concern is not only that it
is less toxic than what we are allowed to drink, but the fact is that the body of water to which it
discharges has higher concentrations than that normally.

                                     MR. HANLON: What is  important is that if  the
State, in your case, I assume, has sent  you the proposed reissuance of the permit, if those

                                       181

-------
permit limits are a result of a direct interpretation or lifting numbers directly out of EPA
criteria documents which the States have adopted  into their standards, it may be necessary
and appropriate to go through some of the site-specific protocols that we talked about in
terms of assessing what the appropriate criteria may be for the receiving water that you are
discharging into because of local water chemistry.

      When the criteria documents are developed, most of the fresh water criteria were
developed  in our lab  in Duluth.   The marine  criteria  were developed in our  lab at
Narragansett.

      The criteria are developed in relatively clean or pristine water samples so that if there
are background concentrations of solids or other elements  in the ambient environment that
you are dealing with, it may affect the toxicity of whatever the parameters are in your
permit, and if you  use directly the numbers that are in the criteria document, one could very
well  get limits that may be lower or more restrictive than  are necessary for the protection
of aquatic life at your site.

      The only way to do that is getting into a site-specific calculation  of what that local
toxicity is.  Your State  permit writer, if he is sitting in  Richmond, cannot do it.   Our
scientists in Duluth or Narragansett cannot do it. That has to be a local decision  based on
local chemistry.

                                     MS. ASHCROFT: What the problem is that they
look for this guidance from EPA, and, often, they  do pick these numbers  exactly as if the
body of water was a lake and we are discharging  to an ocean.

                                     MR. HANLON:  If you have not  had a chance to
look at the October '93 guidance or policy document that was issued, you now have a
copy, and I would suggest you take a look at that and  have a talk with your State permit
writer. That is the best guidance I  can give you this morning.

      One more  question?

                                     MS. ASHCROFT:  I have two more questions.

                                     MR. HANLON:  Okay.

                                     MS. ASHCROFT: On  the dissolved  metals  issue,
we are trying to wrestle with that, because that is in our permit now. One of the things I would
like to ask you is, would it be possible, rather than to do  field filtering for these dissolved
metals and  then preservation, to perhaps put these samples on ice, take them back to the
laboratory with a  six-hour holding time, and then filter under better conditions?
                                       182

-------
      The metals concentration really should not change that greatly in a six-hour period.
What are your feelings on this?

                                     MR. TELLIARD: That is something we are looking
at.  We  realize if you take water out of a well or someplace where you have an oxygen-
starved environment, there are going to be significant changes.  If you are taking it out of a
water body where everything is basically at equilibrium, there probably is a safe period of
time.  Whether it is 6 hours, whether it is 12 hours, whether it is 55 minutes we do not
know yet, but that is one of the things we are going to be  looking at this summer.

      Nobody likes to do it on the back of a truck or hanging from the corner of your car.
It is no fun.  If we can do it in the laboratory and that is a viable option, we are going to
let you do that, but we have to generate some data first.

                                     MS. ASHCROFT:   And what is  EPA's policy on
ICPMS?  Are they looking at that?  Are they forcing us all to buy these?

                                     MR. TELLIARD: ICPMS is an accepted technique.
You are going to hear some methods today that will revolve around the application  and
expansion of the ICPMS approach, and, yes, it is the coming thing. My sister does not have
one yet, but they are coming down the road.

                                     MS. ASHCROFT:  Okay, thank you.

                                     MR. HANLON: Yes, sir?

                                     MR. BLOOM: I have a question and  a comment.
My name is Nicolas  Bloom from Frontier Geosciences.  My comment is that just in the last
five minutes, I have been able to jot down seven laboratories that can measure all of the
EPA criteria pollutants and trace metals at ambient levels.

                                     MR. HANLON: Great.

                                     MR. BLOOM: I do not think it is that difficult to
do, and  it brings  up a question of why doesn't the EPA go to the people who have been
able to,  for the last  15 years, measure these metals at ambient levels to develop their new
rounds of methods  rather than trying to develop them internally?

                                     MR. TELLIARD: The application or development
of the methods, when we say internally, as you know, is done by contract support. So, we
would look to these laboratories for that support and effort.
                                      183

-------
      Where possible, we would still use our own research capability if it can be done in
a timely manner, and that is what we are doing. You are going to hear some papers from
our folks in Cincinnati today on some updated methods.

      There are some big holes in the table that Jim showed you, and, yes, for those areas
where we have to resolve some things, we are going to be out there soliciting  help from the
user community and the application community.

                                     MS.  DINSMORE:  Donalea Dinsmore  from the
State of Wisconsin. Some of the toxicity-based limits for human health are based on total
metals  data.  So,  we have total and total recoverable.

      What I am seeing is  that the current methods  have phased out some of the total
metals procedures, and preliminary indications I have from technical people are that those
two things  are equivalent.   I am trying  to  weigh what is  really true based  on your
presentation  of the three different forms.

      Are the toxicity people talking to the methods people, and what is going on there?

                                     MR.  HANLON: They sure should be talking.  In
most cases,  my understanding  is that total and total recoverable are not the same  and that,
if that is an issue, you can follow up with us or follow up with the people in your State or
region, and they  should be able to provide you some  advice on that.

      Okay, thank you. One thing I wanted to do when we started is to get a sense of who
is here and  maybe help the  audience get a sense of that also.

      How many folks are here from States?  (Show of hands.)

      From  municipalities?  (Show of hands.)

      From  consultants?  (Show of hands.)

      From  the laboratory folks, whether it is analytical?  (Show of hands.)

      Other Federal agencies other than EPA? (Show of hands.)

      Okay.  What I want you to do is look around now. All the people from EPA, stick
up your hand. These are resource people, to let you know who is here during the day. We
are all going to try to answer all your questions before you leave town. My plans are to be
here all day and participate in  the festivities this evening. Hopefully, the weather is going
to cooperate. Thank you.

                                     MR.  TELLIARD: Thanks, Jim.


                                       184

-------
                      Metals—Regulatory Background
                        and Policy Perspective

                            James A. Hanlon
                            Deputy Director
                USEPA Office of Science and Technology

-------
   Regulatory Background

   * States are required to adopt water quality standards that designate the
     uses of each water body and to adopt water quality criteria (WQC)
     necessary to protect those uses.

   * Water quality criteria are essential tools for implementing water quality
     standards.

   * CWA requires NPDES permits that contain an integrated approach to
     the control of toxic pollutants through technology-based controls and
     water quality-based controls.

   * WQC become enforceable when they are adopted in a State water
     quality standard.

-------
   Regulatory Background (Cont.)

   * Numeric WQC are set to protect aquatic life and human health.
     - WQC represent a scientific assessment of the ecological and human
       health effects associated with pollutants in surface water.
     - Because analytical detection limits are not related to actual
       environmental impacts, they are not a consideration when
       determining WQC.

   * Implementation of WQC for trace metals is highly complex due to:
     - The site-specific nature of metals toxicity
     - The need for measurement at levels as much as 280 times lower than
       those levels required by technology-based controls or obtainable by
       routine analyses in environmental laboratories.

-------
   Metals Forms and Speciation

   * Metals form and speciation varies by site, depending on the chemical,
     physical, and biological conditions of the site.

   * Metals exist in total, total recoverable, and dissolved forms.

   * Metals appear in both organic and inorganic forms, e.g., methyl
     mercury vs. mercury:
     - Organic forms can exist as one or more organo-metallic
       compounds, e.g., tributyltin vs. phenyltin;
     - Inorganic forms can exist in one or more oxidation states,
       e.g., chromium (VI) vs. chromium (III).

   * Bioavailability and toxicity of a metal varies, depending on its form
     and speciation.

   * Determination of more than one form may require multiple
     procedures for sample handling, sample preservation, sample
     preparation, and sample analysis.

-------
   Metals Forms and Speciation (Cont.)

   * A major issue in the implementation of metals criteria for protection of
     aquatic life is whether, and how, to use dissolved metals or total
     recoverable metals concentrations in setting State water quality
     standards.

   * EPA Office of Water policy recommends the use of the dissolved metal to
     set and measure compliance with water quality standards.
     - Policy reflects widely held belief that the dissolved metal more closely
       approximates the bioavailable fraction of metal in the water.
     - EPA will also approve State risk management decisions to adopt
       standards based on total recoverable metal, if those standards are
       otherwise approvable by law.

   * Because the currently approved WQC are articulated as total recoverable
     metals, EPA has issued guidance for translating the published total
     recoverable metals criteria to dissolved criteria.

-------
   Metals Forms and Speciation (Cont.)

   * Although EPA recommends the use of dissolved metals criteria to calculate
     Total Maximum Daily Loads (TMDLs) across a watershed or waterbody,
     EPA's NPDES regulations require that limits of metals in permits be stated
     as total recoverable metals.
     - This is because the chemical conditions in ambient waters frequently
       differ substantially from those in effluent, and
     - There is no assurance that effluent particulate metal would not dissolve
       after discharge.

   * NPDES regulations require permit writers to translate between different
     metals forms in the calculation of permit limits so that a total recoverable
     limit can be established.

-------
   Metals Forms and Speciation (Cont.)

   * EPA has also recognized that while the use of the
     dissolved form will correct some site-specific factors
     affecting metals toxicity, additional refinements may
     be necessary.

   * EPA has issued guidance describing three methods
     for development of site-specific criteria:
     - A recalculation procedure
     - An indicator species procedure (also known as the
       water-effect ratio or WER)
     - A resident species procedure.

-------
   Measurement Difficulties

   * Although the CWA does not require WQC levels to reflect analytical capability, our
     objective is to be able to measure at these levels.

   * Trace metals are ubiquitous in the environment and, therefore, measurements at
     WQC levels require extensive precautions to preclude false positives that may arise
     from contamination during sampling or analysis:
     - USGS recently discovered that some metals data in one of its major databases
       may be the result of contamination; similar concerns have been raised about
       data gathered during EPA's New York/New Jersey Harbor studies.
     - This suggests the need for EPA to take steps to ensure that similar results are
       not produced as EPA continues to measure metals at WQC levels.

   * Laboratory Capability
     - Expertise in metals determinations at the WQC levels currently exists only in
       marine research laboratories.
     - No laboratories are known to be capable of reliably measuring all metals at
       required WQC levels.

-------
   Requirements for Implementing
   Measurements at Ambient WQC Levels

   * To address these concerns, OW's Engineering and Analysis Division (EAD)
     is developing guidance documents and methods that:
     - Describe sample handling and quality control procedures necessary to
       avoid contamination of samples during collection and analysis;
     - Utilize techniques capable of achieving method detection limits (MDLs)
       that are one-tenth of WQC levels in order to demonstrate freedom from
       contamination;
     - Describe the data reporting and data review requirements necessary to
       define the quality of data prior to EPA use.

   * The first of these documents, a draft sampling method and a QC
     supplement to existing EPA analytical methods, are currently undergoing
     peer review within the Agency; release is scheduled for June, 1994.

   * Additional efforts to develop new analytical methods and data
     reporting/data review requirements are currently underway.

-------
   Summary of WQC Levels vs. Current Technology

   Metal     EPA WQC    EPA       Technique    MDL      ML       MDL Needed   WQC
             (µg/L)1    Method2                (µg/L)   (µg/L)   (µg/L)3      Achieved?
   Sb        14         200.8     ICP/MS       0.4      1        1.4          Yes
   As        0.018      200.9     STGFAA       0.5      2        0.0018       No
   Cd        0.32       200.13    CC/STGFAA    <0.016   0.05     0.032        Yes
   Cr (III)  57         None4     N/A          --       --       5.7          No
   Cr (VI)   10.5       218.6     IC           0.3      1        1.05         Yes
   Cu        2.5        200.10    CC/ICP/MS    0.023    0.05     0.25         Yes
   Pb        0.14       200.10    CC/ICP/MS    0.074    0.2      0.014        No
   Hg        0.012      245.7     CVAF         0.01     0.02     0.0012       No
   Ni        7.1        200.10    CC/ICP/MS    0.081    0.2      0.71         Yes
   Se        5          200.9     STGFAA       0.6      2        0.5          No
   Ag        0.31       200.8     ICP/MS       0.1      0.2      0.031        No
   Tl        1.7        200.8     ICP/MS       0.3      1        0.17         No
   Zn        28         200.9     STGFAA       0.3      1        2.8          Yes

   CC = Chelation/Concentration
   CVAF = Cold Vapor Atomic Fluorescence
   IC = Ion Chromatography
   ICP/AES = Inductively Coupled Plasma/Atomic Emission Spectroscopy
   ICP/MS = ICP/Mass Spectrometry
   STGFAA = Stabilized Temperature Graphite Furnace Atomic Absorption Spectrometry

   1 Lowest of freshwater, marine, and human health WQC promulgated at 40 CFR Part 131
     (57 FR 60848). Hardness-dependent freshwater criteria were recalculated at a hardness
     of 25 mg/L CaCO3, and all appropriate aquatic life criteria were adjusted for dissolved
     metals criteria.
   2 If multiple EPA methods provide the detection levels required to reliably measure at WQC
     levels for a given metal, only the method with the lowest detection level is cited.
   3 The MDL needed in order to achieve WQC levels must be at least one-tenth of the lowest
     WQC level for that analyte.
   4 None = No EPA method exists for analysis of the species of interest.

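      As a quick illustration of the rule in footnote 3, the sketch below (Python; the function
names are illustrative only and are not part of any EPA method) computes the MDL target as
one-tenth of the lowest WQC and checks whether a method's MDL meets it, using two rows
copied from the table above.

    # Minimal sketch of footnote 3: the MDL needed to demonstrate freedom
    # from contamination is one-tenth of the lowest WQC, and a method
    # achieves the WQC when its MDL is at or below that target.
    # Function names are illustrative only.

    def mdl_needed(wqc_ug_per_l):
        """Return the MDL target (ug/L): one-tenth of the lowest WQC."""
        return wqc_ug_per_l / 10.0

    def wqc_achieved(method_mdl_ug_per_l, wqc_ug_per_l):
        """True when the method MDL is low enough to measure at WQC levels."""
        return method_mdl_ug_per_l <= mdl_needed(wqc_ug_per_l)

    # Copper (Method 200.10, CC/ICP/MS): WQC 2.5 ug/L, MDL 0.023 ug/L
    print(wqc_achieved(0.023, 2.5))    # True  -- matches "Yes" in the table
    # Mercury (Method 245.7, CVAF): WQC 0.012 ug/L, MDL 0.01 ug/L
    print(wqc_achieved(0.01, 0.012))   # False -- matches "No" in the table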
-------
   Points of Contact

   Copies of the full Office of Water guidance released in October 1993 can be
   obtained by contacting the Water Resource Center at (202) 260-7786.

   General questions about the guidance should be directed to me at (202)
   260-5400.

   Specific questions should be directed as follows:

   Subject                      Contact             Phone
   Water quality criteria       Bob April           (202) 260-6322
   Water quality standards      Dave Sabock         (202) 260-1315
   Monitoring & data issues     Elizabeth Fellows   (202) 260-7046
   TMDL issues                  Don Brady           (202) 260-7074
   Permit issues                Jim Pendergast      (202) 260-9537
   Modeling and translators     Russ Kinerson       (202) 260-1330
   Analytical methods           Bill Telliard       (202) 260-7134

-------
(Blank Page)
    196

-------
                                      MR. TELLIARD: Now that you have the overview,
we will try to get into the nuts and bolts, and the first person I  have is Carlton Hunt.
Carlton is a Senior Research Scientist with Battelle.

       Carlton is going to speak today on some of the problems with trace analysis.

       Carlton?
                  TRACE METAL CLEAN TECHNIQUES: PROBLEM,
                     QUALITY ASSESSMENTS, COMPARISONS
                                      MR.  HUNT:   Thank you,  Bill, and also the
organizers of this for inviting me to give this talk.

      What I would like to do, if I can have the first slide, is talk a little bit about the
problem of trace metal analysis using clean and ultra-clean techniques. I think most of us
know what the problem is,  poor data quality.  What I really want to do is focus at a high
level of quality assessments, on those procedures that are necessary to achieve good metal
results in waters, and also provide you with some comparisons of recent results that we
have generated over the last year or so, as laboratories and people have asked us to, in fact,
apply clean technologies and compare our methods to their standard methods to see what
the problems might be in their labs.

      My objectives are four-fold. One is to discuss major roadblocks that the labs might
encounter in achieving accurate trace metal results, again, at a fairly high level.  The details,
having talked with Bill a little bit about what EPA is doing, will be included in some really
nice detailed guidance that  is coming out shortly.

      I would also like to convey required quality control assessments.  This  is really
critical to have good quality control early in your program.

      I will also show comparative results of some recent sampling and bottle comparisons.

      Finally, I would like  to start to focus, because of questions I have been asked over
the last year or so, on thresholds for initiating clean methods.  However, I do not want
people to leave and say there  is a point where you do  not have to  pay attention to
cleanliness in trace metal analysis, but I think there are places where you start to  begin to
apply a lot more stringent control.

      I have some data from our comparative studies to talk about.  It  is very preliminary.
It is also incomplete.  We need to do,  I think, a lot more work in terms of identifying when
we trigger these  really clean techniques.

                                       197

-------
      A couple of definitions.  First of all, I think achieving accurate trace metal results is
not necessarily application of new procedures.  I think the procedures are out there, and the
techniques are out there. It is a matter of appropriately implementing those technologies
and techniques.  It is execution more than it is new methods.

      In my own way of thinking, and this is an evolving type of definition, the definition
of clean methods is basically trying to  apply sampling and laboratory techniques that
accurately quantify contaminant levels at somewhere around 20 µg/L and down to
around the 0.1 µg/L level.

      Basically, that includes achievement of a consistent, low blank contribution from your
sampling and from  your analytical procedures.  So, the goal is  really blank control and
contamination control.

      The ultra-clean techniques which are really getting down into the sub-part per billion
and the part per trillion range, to me, are really a targeted zero contribution of contaminants
to your  sample.   It  is an intensive effort to identify where contamination gets into your
sample,  either at the sampling phase or at the analytical level and at the processing level.

      We all probably  know what the problems are in terms of high results, false positives.
There are sampling errors, containers can be a problem,  the reagents in the processing steps
can be a problem, and  the analytical interferences on the instruments can be a problem if
not properly controlled for.

      Sampling errors.   A lot of people are using improper sampling devices, devices that
are not  built for trace metal collection.

      Cleaning of sampling  devices, even if it is constructed of the proper materials, is a
critical issue.  You have to also be able to clean sampling equipment properly.

      Sampling and sampler deployment.  Back when  we first started doing a lot of trace
metals sampling,  people were deploying their sample  bottles off the  back end of a boat
where lead-tainted  gas was  exhausted right  into your bottles.   This caused major lead
contamination problems. So, how you deploy a sampler and where you deploy it must be
understood, and you must know how to control the inputs of contamination.

      Atmospheric  contamination. This is why people  are using clean rooms.  Labs, inside
and outside, are very dirty oftentimes because of the  types of materials that have come
through a lab.

      There  are  some horror stories  where people have been  using contaminated  soil
sediments in a lab where they are also trying to do water quality measurements. The two
are incompatible.
                                        198

-------
      Clothing and gloves are the critical things  I talk about in terms of contamination
control.  Gloves must be  non-talc.  Talc has a lot of zinc in it,  and  it immediately will
contribute zinc to your sample and result in  a false positive.

      Finally, some sample transfer and handling procedures.

      This is a recent comparison of a teflon  sampler against a stainless steel sampler used
for mercury collections.  You can see that the  stainless steel sampler that was used, this was
a side-by-side collection, contributed a significant  amount of mercury to the sample.

      I do not know if you can read the scale in the back. This is 30 ng.  This was 10 ng.
The teflon sampler, with proper acids and control,  resulted in a signature of about 2 or 2.5
ng/L, whereas we were  getting ten times that with the stainless steel sampler.  This is a
recent study within the last year.

      In terms of processing, we do much of our work  on  board ships.   Because of the
numbers of samples and the distance we have to ship samples, we, in fact, do process on
board, but we do implement stringent control techniques, including Class-100 clean benches
and non-talc gloves.  All labware used is cleaned on board the vessel after each  use.

      Sampling concerns.  Sampling up the  contaminant gradient is critical.  Do not start
at the effluent and work down into a receiving water that has lower metal concentrations.
Rather, work up the gradient especially if you have to reuse samples and sample bottles.
Common  sense  in terms of contamination control will go a long way towards reducing
contamination.

      Containers.   The basic rule of thumb here is to work with non-contaminating
materials.  In the movie Good-bye, Columbus a number of years ago, the key word was
plastics.  That still holds with metals  today.  Plastics are your first choice.

      Compatibility of the metal with the plastic material or the teflon material is critical.
The guys who do the really low-level mercury measurement do all of their work in teflon,
because it does not allow mercury to pass through the bottle walls, as is allowed by linear
polyethylene and high density polyethylene.

      Most of the other metals are very much compatible with the standard plastics that
you might use.

      You can buy commercially cleaned containers, but I advise everyone to check the
blank on those bottles,  and if you need to, institute a cleaning procedure  to clean the
bottles.

      Finally, one of the things that we find  very effective is to store cleaned bottles with
a dilute acid until you use the bottle.  You dump the dilute acid out in the field, do a quick


                                        199

-------
rinse of the bottle with sample, and then put your sample in it.  That goes a long way to
controlling contamination.

      This slide shows a recent comparison of preservation acids and collection containers.
 We were asked to compare one type of sampling equipment with acid preservation against
our clean techniques.

      The metals are mercury and zinc. The scale here is 200 ng/L. This is 50 ng/L.

      What you have in the green is reagent grade acid in a polyethylene container.  The
yellow is ultra-pure acids in a teflon container. Again, the metals are mercury and zinc.

      You can see, for the mercury, certainly, the reagent grade acid in the polyethylene
gives a very  high signal relative to the teflon.  The same thing holds for zinc.

      I think the critical point here is that the contamination that is evident here is the
difference between possibly being in compliance and out of compliance, if you had a
standard of 10 µg/L. At this level, a contaminated sample would kick you into an action,
I think, whereas a clean sample probably would pass if that were the standard you were
working against.

      Reagents and processing and use of high  purity reagents.  If you are chelating a
sample, if you are spiking with acids, the solvents you use to extract or otherwise work with
a sample must be clean.  High purity reagents are commercially available and should be
applied.  The caveat here is check  the blank, check the amount of metal that is in  that
particular reagent,  because some of them are a  little better  for some  metals than other
metals, but they are available, and if you need to go to the ultra level, sometimes you need
to, in fact, purify the acid  in your own lab.

      Routine use of highest purity deionized water is an absolute requirement.  The guys
who do the ultra-low trace level work, ultra-clean methods, in fact, sub-boil in quartz to get
the highest quality acids.

      However, this procedure is way down the  hierarchy for ultra-clean techniques that
need to be applied.  I think, generally, deionized water is adequate for most of the work we
do, but you do have to understand what is in the sample or in the deionized water.

      Identification of reagent and procedural blanks prior to conducting analysis. This is
critical to getting good numbers.  Before you run a sample in a lab, you need to do a
procedural blank whereby you identify where potential contamination is coming in and then
control for that.

      The procedural blanks, then, routinely check that your analysis is in control.
                                       200

-------
      Again, consistent blanks that are less than  10 percent of the lowest value that you
expect to get are really a requirement in order to get good numbers.
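      To make that 10 percent rule of thumb concrete, here is a minimal sketch in Python
(the function and argument names are illustrative only, not part of any published method)
of the check a laboratory might apply to its procedural blanks before reporting data; the
example numbers are the copper figures discussed below for the New York/New Jersey
Harbor work.

    # Minimal sketch of the blank-control rule of thumb described above:
    # a procedural blank is acceptable only if it is less than 10 percent
    # of the lowest value expected in the study.  Units must match (ng/L
    # here).  Names are illustrative, not from an EPA method.

    def blank_acceptable(blank_ng_per_l, lowest_expected_ng_per_l, fraction=0.10):
        """True when the blank is under the allowed fraction of the lowest
        value expected for the program."""
        return blank_ng_per_l < fraction * lowest_expected_ng_per_l

    # Total recoverable copper blank of 60 ng/L against a 400 ng/L lowest value:
    print(blank_acceptable(60, 400))   # False -- blank is 15% of the sample
    # Dissolved-method copper blank of 12 ng/L against the same value:
    print(blank_acceptable(12, 400))   # True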

      This is just a quick summary table of results that we generated during the New
York/New Jersey Harbor waste load allocation  program.  On the left are three analytical
methods that we were evaluating.  One was total recoverable, one was the draft acid
soluble technique that EPA had the potential to apply, and one was the dissolved method.

      This is the blank level in ng/L for four metals. The lowest measured value in the
program is listed on this line. I will focus on copper in this  case.

      This is the minimum  blank contribution to this  sample from the total recoverable
measurement.

      What you see is that the total recoverable method requires more manipulation of the
sample,  and the procedure allows open beaker digestion with 10 fold concentration of the
sample,  digestion down to about 20 ml from a 100 ml sample, followed by reconstitution.

      The acid soluble and  dissolved  methods have equivalent  steps.  The difference is
when you acidify the sample. For dissolved, you filter before you acidify; for acid soluble,
you acidify first and then filter.

      What you see is  basically the blank contribution  from these two steps. Using clean
technique, the blank is reasonably consistent for these metals for dissolved and acid soluble,
but the total recoverable blank is much higher, sometimes five to six to ten times. In terms
of contribution to the sample, this number right here is  in  error. This should be 440.  The
contribution to the sample from the total recoverable ran anywhere from 90 percent down
to 15 percent, depending on the element.

      This slide is a more recent study.  Again, this is total recoverable copper in a recent
site-specific copper criteria development that we did. We did 60 blank samples during the
total recoverable digestion.

      These samples were processed in a fume hood using normal procedures, and 16 of
those 60 samples had detectable copper contribution from the procedural blank above the
detection limit, which is right here.  The other 44 samples
were below the detection limit.

      The issue is, (a),  it is highly variable which means there was a problem controlling
for the contamination and understanding the contribution source.  If your sample only had
1 µg/L of copper in it, these blanks would contribute over 50 percent to that concentration. So, you
have a false number at  the low end.
                                       201

-------
      If you had 500 µg/L in terms of the spiked sample, there is really not a problem. You
have to make sure you are working at the right levels of concentration.

      Down at the bottom is a graph that just shows the same kind of procedures where
we worked in a clean room.  The bottom line is that these were extracted samples. The
detection limit was 0.05 µg/L, and, basically, we saw no contribution of copper to that
particular set of samples.

      Analytical interferences.  I think, again, the messages are basically that you need to
use the appropriate quantification techniques, because metals have notorious interferences
at the instrument level, depending on the instrument you are working with.  You have to
know what those are and how to control for them.

      I have jumped on somewhat of a  bandwagon that I get on in terms of standard curves
and standard additions and when  to apply them, and I will show you a couple data points
where we compared those methods.

      Basically, you need to know the  matrix you are working with, and you really cannot
mix a variety of matrices together when you load them on the instruments. A lake water,
a river water, and sea water do not necessarily have the same analytical interferences and
should not be run in the same batch of samples.

      Matrix modifiers are available and  should be applied in all cases, and appropriately
applied.

      This graph will take me  a  little  bit of time  to explain.   What you see  is  the
concentration of copper in six samples that were collected for a field comparison and
laboratory comparison.  The solid line  is samples that were collected  by Battelle for EPA
Region II and analyzed in our lab using extraction techniques and metal clean techniques.

      The comparison is with three other sets of samples. If they were perfect comparisons,
they would fall upon that solid line obtained by extraction using clean techniques.

      The triangles here are EPA samples that were collected and analyzed by New York
City for the required monitoring in New York Harbor. Basically, for the samples that we
collected  and  provided  to them in clean bottles,  they came  up  with a reasonable
comparison  of concentrations to ours.

      New York City also came in with a boat right beside the EPA's  OSV Anderson that
we used to collect the samples.   New  York City collected samples within 50 feet of the
Anderson, and we then exchanged bottles. They used their techniques, and we used ours.

      In this case, we see that there is a slight elevation in terms of the copper contribution
when the New York City samples were run by Battelle. When New York City ran their own


                                       202

-------
samples, there was somewhat of a reasonable comparison here but a big difference in terms
of the concentrations they were getting in the lab.  Those are basically lab interferences
because of the techniques they were using.

      So, we had a bit  of a problem determining whether or not they had a sample
collection issue with their numbers or lab issue, and the  lab issue  had to be dealt with
before we could really get to the field issue.

      This slide addresses my bandwagon in terms of instrument calibration  methods.  It
is the same type of a plot as before. The solid line is the extracted samples from a number
of samples from  New York Harbor.

      In this case, we quantified the samples with a standard curve. These data are shown
with the pluses; they fall on the solid line.  The stars represent samples that were
determined by standard additions.

      In this case, the metal is cadmium. When analyzed by standard curve, we achieved
very good agreement with the extracted  samples, but when we  ran  the samples using
standard additions, we started  seeing high results. That is a non-specific interference at the
instrument level.

      Recently,  I was looking at a report that was published in 1975 that  basically called
out this kind of a problem when you are working with standard additions and standard
curves.   So,  the  issue  has been  known  for a  long time.  We just  have not, I think,
appropriately dealt with them over the last 15 or 20 years.

      This is another instrument calibration method. This line is the extracted sample.  The
metal is copper.  In this case, we see that the standard curve gave us consistently low results
relative to the extracted sample where various  ions that interfered with the sample were
removed.

      This is the standard additions curve. Good agreement at the lower end of the curve,
and some disagreement  up to the top in terms of concentrations.  Basically,  we see a
negative comparison in terms  of concentration using the standard curve as opposed to an
extracted sample.
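      For readers unfamiliar with the two calibration approaches being compared here, the
following sketch (Python with NumPy; the data are invented purely for illustration) shows
the basic arithmetic: an external standard curve converts an instrument signal through a
line fitted to standards, while the method of standard additions fits signal against the
concentration spiked into the sample itself and takes the magnitude of the x-intercept, b/m,
as the native concentration.

    # Minimal sketch (invented data) of the two calibration approaches
    # compared in these plots: an external standard curve and the method
    # of standard additions.
    import numpy as np

    def standard_curve_conc(signal, std_conc, std_signal):
        """Fit signal = m*conc + b to external standards, then invert."""
        m, b = np.polyfit(std_conc, std_signal, 1)
        return (signal - b) / m

    def standard_additions_conc(added_conc, signals):
        """Fit signal = m*added + b for spikes made into the sample itself;
        the native concentration is the x-intercept magnitude, b/m."""
        m, b = np.polyfit(added_conc, signals, 1)
        return b / m

    # Hypothetical cadmium data (ug/L, arbitrary signal units):
    print(standard_curve_conc(0.042, [0.0, 0.1, 0.2], [0.002, 0.052, 0.102]))
    print(standard_additions_conc([0.0, 0.1, 0.2], [0.040, 0.090, 0.140]))
    # Both give roughly 0.08; in practice the two disagree when uncorrected
    # matrix interferences are present, which is the point of the comparison.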

      Let me quickly summarize what I have gone through here, and  then I am going to
step  into the threshold argument.  Successful application of clean  methods  requires (a)
systematic identification and control of your contamination sources  during sampling and
analysis.  Systematic means going in and identifying it before you run a sample or collect
a sample.

      It is an awareness in the application of specialized cleanup procedures for storage
containers and labware. This has been known and published 15 years ago in a number


                                       203

-------
of books, one by the National Bureau of Standards and one by Zief and Mitchell.  The NBS
book  is titled  Contamination Control in Trace Metal Analysis.  Contamination  control
techniques are also  in the literature and  have been around for the last 15 or 20 years.

       Use of high purity reagents, appropriate reagents, in cleanup steps is essential.

       Isolation of samples from atmospheric contamination is required.

       Control of analytical interferences is necessary.  You just cannot throw a sample on
an instrument without knowing the matrix that you are putting into the instrument and
controlling for those instrumental  interferences.

       Finally, as  I have said  repeatedly, early evaluation and application of appropriate
QA/QC techniques  to include the procedural blanks and analysis of standard  reference
materials are essential.  It is a performance-based type of comparison. You need to be able
to measure what is in a certified standard appropriately before you really start the procedure,
and then you need to apply that during the whole analytical train.

       It also  includes matrix spike recoveries and replicates.  I think we will hear a little
bit more about some of those things today.

       The next sequence of slides presents data for a number of inter-comparison studies
in a variety of types of effluents. To explain  the sequence of graphs, the x-axis plots samples
collected and run by Battelle using clean techniques.

       The y-axis is a ratio of the other participating lab to our numbers. If you have perfect
agreement, the ratio would  be 1 which,  on all these graphs, would be right here.

       The concentration range for this set of copper in municipal effluent is somewhere
between 55 and 100 µg/L. The samples were taken independently. Sample was placed into
our clean bottles, put into their bottles, sent to the labs, and the  numbers  provided for
comparison.
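      The construction of these comparison plots is simple enough to sketch (Python; the
numbers below are invented for illustration only): each split sample contributes one point
whose x value is the clean-technique result and whose y value is the ratio of the other
laboratory's result to it, so perfect agreement plots as a flat line at 1, and ratios well above
1 at low concentrations suggest contamination in the comparison laboratory.

    # Minimal sketch (invented numbers) of how the inter-comparison plots
    # are built: x = clean-technique result, y = other lab / clean result.
    # A ratio near 1 means agreement; ratios well above 1 suggest the
    # comparison laboratory is adding metal to the sample.

    clean_results  = [80.0, 60.0, 30.0, 8.0, 2.0]   # ug/L, clean technique
    other_lab_vals = [82.0, 63.0, 34.0, 14.0, 9.0]  # ug/L, comparison lab

    for clean, other in zip(clean_results, other_lab_vals):
        ratio = other / clean
        flag = "  <-- possible contamination" if ratio > 1.5 else ""
        print(f"{clean:6.1f} ug/L   ratio = {ratio:4.2f}{flag}")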

       Essentially, for copper in this  effluent... and these are total numbers...we see fairly
good  agreement  between the two  labs at this concentration  range.  There was not a
comparability  problem at that level.

       As we move to lower concentrations... this is another study of effluents... you can see
that at around 30 to 40 µg/L and up, the concentrations between the two labs are
comparable.

       However, as we move to the 30 µg/L and lower range... this is about 5 to 10 µg/L...
we start to see the ratio go up.  That means that, in our opinion, the other lab was
contributing metals to the sample in some way.


                                        204

-------
      The best  study that I  can present to you  is one  that Herb Allen  provided  the
comparative data for.

       You can see that at 6 to 8 µg/L copper, there was good agreement. The ratio was
about 1:1.  As we move to lower concentrations, between 6 and 1 µg/L, there is a very
strong increase in the ratio, indicating false  positives in the samples not processed using
clean techniques.

      This slide  shows an inter-comparison  for lead using  the same samples. This is lead
at about 40 to 100 µg/L.  Fairly good agreement was achieved, but we start to see noise
develop at the 40 µg/L level. We also see noise down below 10 µg/L.  Especially here at even
lower concentrations, there is a  very strong increase  of extraneous lead in some of the
samples,  and a lot of variability.  There is not a really good inter-comparison there.

       I do not have a comparison for zinc below 60 µg/L, but the labs were getting about
a 1:1 ratio above 60 µg/L.

      This slide shows another problem. In this  particular study, the reporting limit for
chromium in the effluent was 10 µg/L.  We were detecting chromium at around 0.5 to 1 µg/L.

      The smooth curve here is basically our result divided into 10, so you get a very
smooth curve, indicating that this is a detection limit problem. However, there were three
samples that showed high chromium and fell off the detection-limit-driven line, again,
indicating  analytical error or contamination error.

      We need to do a little bit more of this type of comparison. Actually, quite a bit
more of this type of work is needed in order to really understand the threshold. I think the
threshold for kicking in clean methods is probably somewhere in the 10 to 20 µg/L range and
down, and ultra-clean is, obviously, below 1 µg/L.

      So, the take-home messages from my talk today are that contamination can be
controlled through proper sample handling and analytical procedures.  The methods  have
been around  for 20 years.

      Clean  methods are required when metals concentrations, I think, are less than 20
ppb.  I think the  threshold may be metal-specific, and I think  more study is needed to
identify where those thresholds might be applied. However, I am not trying to imply that
you can get away without some serious attention to clean metals analysis regardless of the
actual level of metal in the sample.
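      One way to read those preliminary thresholds is as a simple tiering rule, sketched
below in Python (the cutoffs and the function name are illustrative only; as noted above,
the thresholds are probably metal-specific and contamination control matters at every level).

    # Rough tiering following the preliminary thresholds suggested in the
    # talk: clean methods below roughly 20 ug/L, ultra-clean below about
    # 1 ug/L.  Cutoffs are illustrative, not prescriptive.

    def suggested_technique(expected_conc_ug_per_l):
        if expected_conc_ug_per_l < 1.0:
            return "ultra-clean techniques"
        if expected_conc_ug_per_l < 20.0:
            return "clean techniques"
        return "routine methods, with continued attention to contamination"

    print(suggested_technique(0.5))    # ultra-clean techniques
    print(suggested_technique(5.0))    # clean techniques
    print(suggested_technique(50.0))   # routine methods, with continued attention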

      Clean  room benches and  closed processing of samples and digestion are needed to
control lab contamination.
                                       205

-------
      I  think your choice of samplers and sample containers is critical  to high quality
results.  Again, there is literature that tells you what you should use, and EPA is, I think,
bringing that together in some of the new guidance that they are developing.

      Finally, method detection limits must be lower than presently required for routine
monitoring.  I think that is a given we heard about a little bit ago.

      Finally, knowledge of matrix-specific interferences at the instrument level is critical
to achieving good quality trace metal results.

      Thank you.
                       QUESTION AND ANSWER SESSION

                                     MR. TELLIARD: Do we have any questions?

                                     MR. SLENTZ: My name is Kurt Slentz. I am with
Energy Labs in Rapid City, South Dakota.  I guess I  have one question.  We have a lot of
our samples that come through by UPS, and when you are talking ultra-low levels for
mercury, let's say, would you recommend a trip blank for those samples?

                                     MR. HUNT:  A properly identified trip blank,  I
think, would be appropriate. Again...and I think Nick Bloom is here and could give you a
lot more guidance in terms of specific  mercury handling procedures, but those sample
bottles need to be bagged properly, double bagged to control for any extraneous input, but
I think a trip blank would be advisable.

                                     MR. SLENTZ: Also, do you think that it would be
possible for people without chemistry backgrounds to successfully sample at those  levels?

                                     MR. HUNT:  I very much think people can be
trained as long as they are sensitized to the appropriate way to handle the sample.  It is not
a one-day training, however. It is going to require  some intensive effort to teach people
how to touch a sample, where to touch  a sample, but it is certainly possible.

                                     MR. HORNG: My  name is Albert Horng from
HTMA, Colmar, Pennsylvania.  You mentioned the importance of sampling. If you have to
do both organics and metals and you  have no choice but to use one container, the way we
do it now is we use glass, a glass container. The problem is boron.

      If we have no choice, do you know of any company that makes 2 or 3-gallon teflon
containers?
                                      206

-------
                                     MR. HUNT:   I know there are  2-liter bottles.
Whether or not you can get a 12-liter sample bottle I am not sure.

      It seems awfully expensive to me to have to sample that way.  I think you can use
a cheaper way of going about it.  I am not  sure exactly what that would be, though.  I
would need to sit down and think a bit about it.

                                     MR. HORNG: Sacrifice  boron?

                                     MR. HUNT:  Again, please?

                                     MR. HORNG: Sacrifice  boron?

                                     MR. HUNT:  I do not know.

                                     MS. ASHCRAFT:  I believe I can understand what
a clean room is, but I  am not sure what you meant by a clean bench.  Can you describe
what you would call a clean bench to us?

                                     MR. HUNT:  Yes, a clean bench  is simply a lab
bench that has, in most cases,  a HEPA filter that removes particles to a certain level.  Class
100 means, I think it is, 100 particles per cubic meter.

      There are two styles.  There is one where the HEPA filter  is on  top and  the air is
brought down to the working area of the bench and it exits the front.  The other one is
where the HEPA filter and the air blower is in the back side, and it blows out to the analyst.

                                     MS. ASHCRAFT:  So, it would be like biological
cabinets?

                                     MR. HUNT:  Yes, essentially that, and they are
commercially available.

                                     MR. EPSTEIN: Paul Epstein, National Sanitation
Foundation.  Have you noticed any problem with sample bottles now that there is an awful
lot of recycled plastic  out in the world?  We have had one case  where...these  were not
bottles. These were little molds that we were molding cement cubes in, and the plastic was
recycled from battery cases.  So, there was a severe lead problem in  these molds.

                                     MR. HUNT:  That is scary.   I do not  have any
evidence one way or the other, so I cannot say.  In our labs, we reuse bottles, and  we clean
them, so we may not have that problem.
                                       207

-------
      I can relate back in the early '70s, there used to be a polyethylene bottle that was
black or brown, and had a horrible cadmium problem, because that was the plasticizer that
was in the plastic.

      So, yes, it may be something we need to look at, but I do not know of any studies
that are really looking at this issue. That, again, is why bottle blanks should be run on the
bottles you have.

                                     MR. EPSTEIN:  Thank you.

                                     MR. BERNARD:  John Bernard, Alexandria
Sanitation. For Mr. Telliard, in your protocols, a question about calibration standards:
running them through the digestion process.  Could you comment on that?

                                     MR. HUNT:  Calibration standards through the
digestion process, normally, our calibration standards are not run through the digestion.  It
is an instrument calibration, and samples are run in the matrix that is the final matrix that
we extract from a sample.

                                     MR. TELLIARD: And that is similar to  what the
draft methods are right now.  They do not go through the digestion.

                                     MR. BERNARD: Would it make a difference?

                                     MR. TELLIARD: Probably.

                                     MR. HUNT: It might, but the critical thing there,
to me, is  running procedural blanks as well as spike recoveries.  Those are the techniques
that are designed to pick up any problems in the digestion.

                                     MR. BERNARD: But your whole, your instrument
response  is based on your calibration standards. Shouldn't they be treated the same as your
samples?

                                     MR. HUNT: No,  I don't think they need to be,
because if you are matching up your matrices properly and the chemicals you are using to
standardize your instrument, again, are traceable, you should not have a problem with that.

                                     MS.  DINSMORE:    Donalea  Dinsmore  from
Wisconsin.  Are your samples collected with composite techniques, or are these all grabs?
                                       208

-------
                                    MR. HUNT: That is usually specific to the various
programs that we are  working  with.  The municipal studies that I  showed you  were
generally 24-hour composites, but the work that we did in New York Harbor were grab
samples, and a lot of our work is based on grab samples.

                                    MS. DINSMORE: Thank you.

                                    MR. TELLIARD:  Thank you,  sir.
                                     209

-------
(Blank Page)
    210

-------
              Trace Metal Clean Techniques:
    Problems, Quality Assessments, and Comparisons

                 Carlton D. Hunt
                  Dion A. Lewis

                    Battelle
          . . . Putting Technology To Work
                  Ocean Sciences
                   Duxbury, MA
-------
             Objectives

 • Discuss major roadblocks to achieving
   accurate metals results
 • Convey required Quality Control assessments
 • Present recent comparative results
 • Identify preliminary thresholds for initiating
   clean methods
-------
Achieving accurate results using clean metals techniques
is primarily a function of the execution of appropriate
analytical techniques rather than the application of
new procedures!

           Definition of "Clean" Methods

The application of sampling and laboratory techniques
that are necessary to accurately quantify contaminant
concentrations in the low and sub part per million range
(approximately 20 µg/L down to 0.1 µg/L) in fresh and marine waters.
Includes the attainment of a consistently low, known
metals contribution from sampling and analytical procedures.
-------
        Definition of Ultra-Clean Methods

Targeted zero contribution of extraneous contamination
to the analytical result at sub part per billion
and part per trillion concentrations through the use
of clean room technologies and intensive contamination
control strategies at all stages of sample collection,
storage, processing, and analysis.

-------
NJ
O1
Causes of High Results

 • Sampling errors
 • Containers
 • Reagents/processing
 • Analytical interferences
                                             NKA/Hunl/19-13

-------
          Sampling Errors

 • Improper sampling devices
 • Poorly cleaned sampling devices
 • Improper sampler deployment
 • Uncontrolled atmospheric contamination
 • Improper clothing and gloves
 • Poor sample transfer and handling procedures

-------
    Mercury Field Blanks - Stainless vs. Teflon Samplers

    [Bar chart of mercury concentration by sampler type;
     A = Teflon Sampler, B = Stainless Steel Sampler.]
-------
                 Containers

   • Non-contaminating materials — plastics are
     first choice
   • Compatibility with metal — Teflon for Hg; LDPE,
     HDPE for other metals
   • Cleaned containers
     — Commercially available; independent bottle blanks
     — Additional hot acid cleanup for low level analysis
     — Shelf storage with high purity dilute acids
-------
            Preservation Acid and
      Collection Container Comparisons

    [Bar chart of mercury concentration by container and acid type;
     A = Polyethylene, Reagent Grade Acid; B = Teflon, High Purity Acid.]
-------
           Reagents and Processing

   • Use of high purity reagents: chelator, acids, solvents
     — Commercially available; blanks must be checked
     — Purification via extraction; blanks must be checked
   • Routine use of highest purity deionized water
     — Subboiling distillation for ultra-clean
   • Identification of reagent and procedural blanks prior
     to conducting analysis
   • Routine use of procedural blanks
   • Know the potential sources of the contamination and
     causes of blank variability
   • Consistent and low blanks relative to analytical signal:
     <10% of lowest value expected

-------
                    Procedural Blanks (ng/L)
            Ambient NY/NJ Harbor WLA Study (n = 6)

   Method                     Cu         Pb        Zn         Ni
   Total Recoverable        60 ± 60    9 ± 1    180 ± 90   220 ± 300
   Acid Soluble              9 ± 1     4 ± 3     80 ± 40    60 ± 40
   Dissolved                12 ± 6      <2       60 ± 20    60 ± 30
   Lowest Measured Value      400        40         44        250
   Blank Contribution (%)      15        23         41         88

-------
        Total Recoverable Copper Method Blanks

   [Two panels of copper concentration (µg/L) vs. replicate sample number:
    top, Routine Fume Hood Processed (16/60 samples detected above the
    0.2 µg/L MDL); bottom, Clean Room Processed (MDL = 0.05 µg/L).]

                                       222

-------
             Analytical Interferences

   • Application of appropriate quantification techniques
     to metal level expected
     — Graphite furnace, ICP/MS, flame AAS, etc.
   • Appropriate use of standard, standard addition, or
     standard addition calibration curves
     — Know the matrix being analyzed; calibration curves
       appropriate to the sample type
   • Use matrix modifiers to reduce interferences
     — Know when to use these and the applicable matrix
     — Don't assume matrix modifiers work for all matrices
       and instrument settings

-------
                         Analytical Comparison
                           Total Recoverable

   [Scatter plot of copper (µg/L) reported by each laboratory vs. the
    EPA/Battelle copper result; symbols: EPA/NYCDEP, NYC/Battelle,
    NYC/NYCDEP.]

-------
                   Instrument Calibration Methods
                         Total Recoverable

   [Plot vs. NYC/Battelle cadmium (µg/L), 0.00 to 0.20; symbols:
    NYC/Battelle SC (standard curve), NYC/Battelle SA (standard additions);
    1:1 = Extracted Sample.]
-------
        Instrument Calibration Methods
              Total Recoverable

   [Plot of NYC/Battelle copper; symbols: NYC/Battelle SC (standard curve),
    NYC/Battelle SA (standard additions); 1:1 = Extracted Sample.]
-------
        Successful Application of
        Clean Methods Requires

   • Systematic identification and control of
     extraneous contaminant sources during
     sampling and analysis
   • Awareness and application of specialized
     cleanup procedures for storage containers
     and laboratory ware
   • Use of high purity reagents, reagent cleanup
     procedures, or both
   • Isolation of samples from atmospheric
     contamination sources

-------
        Successful Application of
        Clean Methods Requires
              (Continued)

   • Control of analytical interferences
   • Early evaluation and application of
     appropriate QC techniques and procedures,
     including
     — Procedural blanks
     — SRMs
     — Matrix spike recoveries
     — Replicates

-------
        Municipal Effluent Copper Comparisons

   [Plot of the ratio of the comparison laboratory to Laboratory A
    vs. Laboratory A copper (µg/L), 50 to 90.]

-------
        Municipal TR Copper Comparisons

   [Plot of the ratio of the comparison laboratory to Laboratory A
    vs. Laboratory A copper (µg/L), 0 to 120.]

-------
KJ

(JO
      o

     'ts
     DC
      fc-



      1
      O
     m
     1
     O
     n
     co
            Industrial Effluent Copper Comparisons
         10
9-



8-



7-



6-



5-


4-



3-



2-



1-



0
                           3                6

                     Laboratory A Copper (ug/L)

-------
        Municipal TR Lead Comparisons

   [Plot of Laboratory E/Laboratory A ratio vs. Laboratory A lead (µg/L),
    20 to 180.]

-------
        Municipal TR Lead Comparisons

   [Plot of the ratio of the comparison laboratory to Laboratory A
    vs. Laboratory A lead (µg/L), 0 to 14.]

-------
        Municipal Effluent Zinc Comparisons

   [Plot of the ratio of the comparison laboratory to Laboratory A
    vs. Laboratory A zinc (µg/L), about 70 to 130.]
-------
        Municipal Effluent Chromium Comparisons

   [Plot of the ratio of the comparison laboratory to Laboratory A
    vs. Laboratory A chromium (µg/L), 0 to 8.]
-------
           Take Home Messages

1.  Contamination can be controlled through proper sample
    handling and analytical procedures.

2.  Clean methods are required when metals concentrations
    are <20 µg/L.
      - Threshold is metal specific
      - More study to identify metal specific thresholds

3.  Cleanrooms (benches) or closed processing and digestion
    are needed to control blanks.

4.  Choice of samplers and sample containers is critical to
    high quality results.

5.  Method detection limits must be lower than presently
    required for routine monitoring.

6.  Knowledge of matrix specific interferences at the
    instrument level is critical.
-------
U)
                           Table 1
  EPA WQC, Riverine Metal Levels, RLGs and MDLs (µg/L)

Analyte                          Cd       Cu      Hg       Pb      Zn
EPA Freshwater WQC               1.1      12      0.012    3.2     110
Ambient Riverine 1,2             0.03     2.76    0.001    0.13    3.33
Recommended RLG                  0.006    0.6     0.0002   0.03    0.67
ICP MDL                          1        3       7        10      2
GFAA/Hydride MDL                 0.05     0.5     --       0.3     0.5
ICP-MS MDL                       0.5      0.5     --       0.6     1.8
CVAAS MDL                        --       --      0.02     --      --
MDL with Preconcentration        0.005    0.02    0.0002   0.02    0.02

    1 Ottawa River Cd, Cu, Pb, and Zn (Canadian Research Council, 1990)
    2 Mobile River Hg (Battelle, unpublished)
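One way to read Table 1 is to ask, for each analyte, which technique has an MDL at or below the recommended RLG. The sketch below simply encodes the table values above and performs that comparison; it is an illustration of how the numbers relate, not part of the presentation.

```python
# Minimal sketch: compare the MDLs in Table 1 against the recommended RLG
# for each analyte. Values (ug/L) are transcribed from Table 1 above;
# None marks entries shown as "--" in the table.

TABLE_1 = {
    #       WQC     ambient  RLG     ICP  GFAA  ICP-MS CVAAS  preconc
    "Cd": ( 1.1,    0.03,    0.006,  1,   0.05, 0.5,   None,  0.005),
    "Cu": ( 12,     2.76,    0.6,    3,   0.5,  0.5,   None,  0.02),
    "Hg": ( 0.012,  0.001,   0.0002, 7,   None, None,  0.02,  0.0002),
    "Pb": ( 3.2,    0.13,    0.03,   10,  0.3,  0.6,   None,  0.02),
    "Zn": ( 110,    3.33,    0.67,   2,   0.5,  1.8,   None,  0.02),
}
TECHNIQUES = ("ICP", "GFAA/Hydride", "ICP-MS", "CVAAS", "With preconcentration")

for analyte, row in TABLE_1.items():
    rlg = row[2]
    mdls = row[3:]
    adequate = [name for name, mdl in zip(TECHNIQUES, mdls)
                if mdl is not None and mdl <= rlg]
    print(f"{analyte}: RLG {rlg} ug/L -> techniques meeting RLG: {adequate or 'none listed'}")
```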

-------
(Blank Page)
    238

-------
                                     MR. TELLIARD: Our next speaker is from the U.S.
Geological Survey. Tim Miller is the Assistant Chief of the USGS' Office of Water Quality.

      Over the last few  months, EPA and USGS have been looking at protocols both for
sampling and for analysis.  We felt... I  am sorry to say this...  that it probably was not
worthwhile reinventing the wheel.  Now, that goes against our previous policies. We have
a number of wheel factories that are downsizing. We have been... I know this again shocks
you... talking to our brothers at the U.S. Geological  Survey and trying to get  input from
them, again, both from the field and the analytical end.

      So, Tim is going to specifically address their efforts and, hopefully, shed  some light
on the problem.
             U.S. GEOLOGICAL SURVEY PROTOCOL FOR MEASURING
            LOW LEVELS OF INORGANIC CONSTITUENTS INCLUDING
                 TRACE ELEMENTS IN AMBIENT WATER SAMPLES
                                     MR. MILLER:  Thank you, Bill.

      I  am glad to be here.  This is the first time I have had a  chance to attend this
conference, and I  have got to say I  am very impressed.   For the first full  day  of
presentations, Bill arranged for the weather outside to ensure that  all of you would be in
here listening to the speakers, and  we appreciate that.

      I think the presentations so far have made my job a little easier this morning because
they have already discussed contamination sources and problems.   I am going to tell you
about a protocol for collecting inorganic surface-water samples that we have implemented
in the Geological Survey this year. For those of you looking at the abstract, you can change
the title.  It should say trace elements in ambient water  samples instead of waste samples.
The USGS protocol focuses really on working in ambient waters.

      Basically, Carlton Hunt has identified many of the contamination problems that we
were facing.  In addition, Jim Hanlon mentioned that the quality of some of the USGS data
has been questioned in the literature. Indeed, going back to 1987, there was an article by
Shiller and Boyle and then, subsequently, another by Flegal and Coale, and a recent article
by Herb Windom. We have discussed the critical comments with a number of the authors,
and found their concerns were justified.

       What we are talking about are problems that we have had in our operational
program, not  in our research program.  Sampling for trace elements in our operational
program probably includes more than 600 to 700 people.


                                      239

-------
       In some cases, we have had to convince people that this change is really necessary
and then provide the training so they understand what needs to be done; that is quite a
daunting task.  So, what we are talking about is essentially changing our operational
program culture.

      We are now in the process of convincing people that the culture needs to change,
the field process needs to change, and  the analytical process in our laboratory needs to
change as well.  What I will tell you about this morning are many of the changes that have
recently been put into place.

       We often work in river systems where, in contrast to marine systems, trace element
concentrations are highly variable in time and space because of suspended sediments and
organic carbon.  So, we typically collect depth- and width-integrated composite samples
that are flow weighted or volume weighted for the cross section.  Why have we introduced
a ppb protocol?  Many of the Federal water-quality criteria are at or near the ppb level.
Our intent is to upgrade our sampling program so that we can collect samples relatively
inexpensively and cleanly.

      We  have  several key messages.   I am going  to highlight these as  I  would to
individuals in our organization.  What are the key messages from the protocol?

      One is we want to make sure that people understand  that inorganic samples  can
easily be contaminated; and that  those sources of contamination can be  identified and
controlled through proper cleaning, proper sample equipment selection, proper analytical
techniques, much of what Carl was talking about just a few minutes ago.

      In addition to that, the next key message  is ensuring that you have adequate quality
control data.  Following the protocol is not enough;  quality control data is needed to
quantitatively determine  that  contamination is within  the desired  level  of control.
Historically, our  operational  program did not collect quality control data to demonstrate
contamination was adequately controlled.

      The protocol that we  have developed has been predicated on  about five years of
work, following much of the critical comments both from outside USGS and also internally
to our organization.  A series of experiments were employed to determine our level of
contamination problems, and to test various equipment and cleaning procedures.  The
decisions that we have made in designing and implementing this protocol have been based
on these experimental data.

       One of our major experiments was a 1990 inter-calibration study on the Mississippi
River, where we took our standard techniques that are used in the field by the operational
program and  compared results from that approach to results from what we were using in
our research program.
                                       240

-------
       We also invited along on that experiment Alan Shiller, who was one of our initial
critics in the literature and who is now at the University of Southern Mississippi.  We did
a complete cross-comparison of the techniques including sample collection, processing, and
analytical techniques; and the result from that comparison was the basis on which we
argued within our agency, that we needed to make major changes to the field protocols we
were then using.  All of the changes for our protocol have been substantiated by field tests,
by quality  control samples, and have  been  tested  in many different hydrologic and
atmospheric environments.

      The  protocol that we have released is targeted towards filtered samples.  Filtered
samples  present  more  of a  sampling  challenge  because ambient  trace  element
concentrations are low in these samples.  So, we focused on filtered samples to be sure our
procedures were adequate for easily contaminated samples.

       Even among unfiltered samples, we know there are samples that have very low to
moderate suspended sediment concentrations.  Unfiltered samples with low sediment
concentrations are susceptible to contamination, so the protocol is needed for unfiltered
samples too.  The protocol is also applicable when sediment concentrations are high; we
have shown in a number of our trials that there can be enough contamination to bias
samples with high sediment concentrations as well.  So, our argument is that even for
unfiltered samples, whether the sediment concentration is low or high, cleaner techniques
need to be used.

      The field protocol that we are employing has the following components:  first, we
have a certified list of equipment and supplies; second, there are four procedures  which I
will briefly go over in a few  minutes, two are for cleaning, one  for field rinsing, and one for
processing and preservation; third, we have a very substantial emphasis on training, and
although it does take a few days of training the protocol can be used even by relatively
inexperienced people; and finally, we are recommending a minimum level of quality control
data be collected, and we are providing guidelines for how those quality control data can
be used.

      The equipment that we suggest for this protocol  are sampling devices that will allow
us to  collect a depth and  width  integrated sample in  a  river system.   These are  D77
samplers.  I will  show you a slide at the end of the  overheads that illustrates what this
equipment  looks like. They basically have teflon or plastic internal  components that can
be adequately cleaned. There are a number of environments that we cannot sample right
now with this sampler.  For example, under ice sampling, and in some large rivers with
depths of 50 to 70 feet, and flow rates as high as 15 to 20 feet per second.

      The  next overhead  is a list of the supplies that we  have  recommended in the
protocol.  It  is similar to the  types of supplies Carl  mentioned to  you.   Gloves are a
requirement, the non-powdered type.  We are also specifying certain types of filters  in order
to simplify the filtering process and minimize contamination.  The list of supplies identifies

                                       241

-------
what has been tested to date and what we are certifying for our protocol which is set at 1
ppb right now.

       The protocol also emphasizes making sure that the equipment is adequately cleaned.
Preferably, we want people to clean the equipment in the office before it goes out to the
field.  We also have an acceptable approach for cleaning between sites; because we realize
in our operations, people are often out in  the field for a week at a time, and  it is not
convenient to go back to the office and clean the equipment.

       I will quickly run through the process for cleaning equipment in the office.  The
equipment is broken down, soaked, and cleaned with a detergent such as Liquinox, then
copiously rinsed with hot tap water, followed by a cleaning and soaking in 5 percent
hydrochloric acid and three rinses with deionized water; the equipment is then double
bagged and stored until it is used.
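For field offices that keep their standard operating procedures electronically, the office-cleaning sequence just described can be captured as ordered data so that checklists always print in the correct order. The following is only a sketch of that idea using the steps named above; it is not part of the USGS protocol document.

```python
# Minimal sketch: the office-cleaning sequence described above, captured as
# an ordered checklist so it can be printed or tracked consistently.
OFFICE_CLEANING_STEPS = [
    "Disassemble the equipment",
    "Soak and clean with a detergent such as Liquinox",
    "Rinse copiously with hot tap water",
    "Clean and soak in 5 percent hydrochloric acid",
    "Rinse three times with deionized water",
    "Double-bag and store the equipment until use",
]

def print_checklist(steps):
    """Print the steps as a numbered checklist."""
    for i, step in enumerate(steps, start=1):
        print(f"[ ] {i}. {step}")

if __name__ == "__main__":
    print_checklist(OFFICE_CLEANING_STEPS)
```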

      I want to digress  a  moment and  mention that  we  have modified some of our
equipment such as  a  compositing device to better shield samples from  atmospheric
contamination. We are suggesting that our field offices set  aside a dedicated vehicle for
water-quality  sampling; that will alleviate historical problems when vehicles are  used for
purposes incompatible  with water-quality sampling.  In addition, we have specified using
a processing  enclosure which is very inexpensive to construct; it  is made largely out of
plastic, but some people  have constructed them out of PVC  and wood.

       We are also specifying the use of a preservation chamber.  Our samples are
collected, processed, and preserved in the field because it is often 48 to 72 hours before
they can be mailed to the laboratory.  Preservation can be done using glass vials with ultra-
pure nitric acid at $5 each; however, the acid is contaminated by the glass for some
elements such as aluminum, boron, and sodium.  If contamination by those elements is a
problem, an alternative, at $16 each, is teflon vials with acid contamination well below
200 ng/L.
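The cost and contamination tradeoff described above reduces to a simple decision. In the sketch below, the prices and the affected elements are taken from the talk; the decision helper itself is illustrative and not part of the USGS protocol.

```python
# Minimal sketch of the preservation-vial choice discussed above.
# Prices and the glass-affected elements are as stated in the talk; the
# decision helper is illustrative only.

GLASS_AFFECTED_ELEMENTS = {"Al", "B", "Na"}   # elements contaminated by the glass
GLASS_VIAL_COST = 5.00                        # dollars each
TEFLON_VIAL_COST = 16.00                      # dollars each

def choose_preservation_vial(target_analytes):
    """Return the vial type and unit cost for a given set of target analytes."""
    if GLASS_AFFECTED_ELEMENTS & set(target_analytes):
        return "Teflon vial", TEFLON_VIAL_COST
    return "Glass ampule", GLASS_VIAL_COST

print(choose_preservation_vial({"Cd", "Cu", "Pb"}))   # ('Glass ampule', 5.0)
print(choose_preservation_vial({"Al", "Cu"}))         # ('Teflon vial', 16.0)
```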

      We also looked at the filtration process. We recommend use of a capsule filter.  The
product we have tested happens to be a Gelman Supore capsule.   It is very easy to clean,
and filtering is easier because of the large filter surface area.

       A capsule filter is relatively inexpensive. It does cost more than the plate-type filters
that we have been using in the past, but when you consider the amount of time and effort
it takes to clean plate filter holders, the capsule is really a cost-effective approach.  In
addition, the cost difference is more than compensated by the reduced potential for
contamination using a capsule filter.

       Now,  I want to return to the four field procedures that are included in the protocol.
I  mentioned the office preparation cleaning and briefly discussed the components of that.
In addition, we have a field cleaning procedure used between sites when you cannot return


                                       242

-------
to the office.  The field cleaning requires that the equipment be cleaned while it is still wet,
at the site, using hydrochloric acid and deionized water.  Both of the cleaning procedures
at the part per billion level have been shown in our field tests to work well.

       Next, we have a procedure for sample processing and preservation. Processing and
preservation is carried out in a work space with specified requirements; and the sequence
for preserving samples is also specified.  Finally, there is a procedure for field rinsing of the
sampling equipment prior to sample collection.  That  is, conditioning the equipment for
sampling at the site using  native water.

       The other major change for USGS is the protocol requires using two people. The
rationale is: probably one  of the major ways contamination is introduced into a sample is
from a person's hands.  At a site that  is difficult to sample,  it is easy for one person working
alone to contaminate a sample.   It  is too difficult for one person to keep their  hands
contaminant free with all  the equipment they touch.   Therefore, a procedure using two
people  has been deemed necessary by many  practitioners  of the ultra-clean method,
especially when collecting a depth and width integrated sample.

       At the site, one person is clean hands and the other dirty hands.  The person with
the clean hands changes their gloves frequently, and they are responsible for touching the
sample bottle, and anything that comes in close contact with the  sample.  The other person
who has dirty hands is responsible for all the rest of the work. With the types of sampling
equipment that we use, there is a fair amount of manipulation  that the person with dirty
hands is required to do.

       We have found that the process with two people works quite well. It does take a bit
of practice, but with one to two days of  training,  the comfort  level  is quite high.

       The protocol requires quality control samples. We want an equipment blank  run at
least annually or with every new crew that handles the equipment to demonstrate that they
have the capability of cleaning the equipment.   On each  run out in the field, then, field
blanks are taken. Those deionized blanks are run through the same equipment process and
used for the environmental sample so that we have an  understanding of the contamination
levels introduced in the field.

       In addition, we ask that  samples be split  periodically so that we have  better
information on laboratory precision. We also request concurrent samples be taken to
identify how much variability there is in  sample collection and  processing.

       Finally, in the laboratory, we have done a fair amount of work.  All of our trace
element samples, even for the ppb protocol, are handled in a Class 100 clean room.  We
do not use a concentration step in the methods that we are currently applying for the ppb
protocol.  We are using ICP/MS, and there are about 17 trace elements that are approved
in our ICP/MS method.
                                       243

-------
      We are doing two  things.   First, we  are using the ICP/MS on  deionized water
matrices down around 100 to 200 ng/L for quality control and blanking purposes. Then for
the protocol, we are  using the ICP/MS to analyze the environmental samples  with  a
reporting level of 1 ppb for most of the elements.  The exceptions, I believe, are zinc and
aluminum which are at 3 ppb.  We are confident right now that, at 1 ppb, we can handle
matrix interferences; but down  below that, we still have some additional work to do. So,
for deionized  water, we  will  go  down to the  100  to 200 ng/L level but  not  for the
environmental samples yet.
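Applying the reporting levels just described (1 µg/L for most elements, 3 µg/L for zinc and aluminum) is a mechanical step that can be sketched in a few lines. The function below is an illustration of that censoring logic only, not USGS reporting code.

```python
# Minimal sketch: apply the ppb-protocol reporting levels described above.
# Results below the reporting level are flagged as "less than" values.

REPORTING_LEVEL_UG_L = {"Zn": 3.0, "Al": 3.0}   # exceptions noted in the talk
DEFAULT_REPORTING_LEVEL_UG_L = 1.0

def report(element: str, measured_ug_per_l: float) -> str:
    """Return a reported value, censored at the element's reporting level."""
    level = REPORTING_LEVEL_UG_L.get(element, DEFAULT_REPORTING_LEVEL_UG_L)
    if measured_ug_per_l < level:
        return f"{element}: <{level} ug/L"
    return f"{element}: {measured_ug_per_l} ug/L"

print(report("Cu", 0.4))   # Cu: <1.0 ug/L
print(report("Zn", 2.2))   # Zn: <3.0 ug/L
print(report("Pb", 5.1))   # Pb: 5.1 ug/L
```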

      I will  take just a few minutes, to show some slides.  This is a D77 sampler; a 65-
pound fish made out of either aluminum or brass. It is coated in an epoxy paint, and that
is a nalgene  or teflon 3-liter bottle that fits inside. We have found that these samplers can
be adequately cleaned, but all  of the hardware is made out of aluminum and has to be
handled.  That is the responsibility of the person with the dirty hands.  So, the person with
the clean hands does not touch any of the metal parts of the bridge or sampling
environment.

      We also have a wading bottle sampler.  Again, in this case, many of the components
are all made out of teflon.  The rod is aluminum and is coated  in a teflon sheath.

      The compositing device  that I mentioned is made out of plastic. We call it a churn
splitter, because  it allows you to homogenize  samples.  It has a modification, a funnel on
top, in order to make it easier to introduce the  sample to the churn instead of removing the
entire churn  cover. There is much less area, then, exposed to the atmosphere. The churn,
then, is housed in a carrier which is all plastic. The reason  we have gone to this extent to
protect the compositing vessel  is that many of the environments in which we sample are
quite dirty.   We are either  on bridges or near roads.

       These efforts to clean up the environment in which we are doing the sampling
actually work quite well.  Contaminant levels are well below a ppb, which is our target.

      The chambers we use for field processing and preservation are constructed of half
inch plastic pipe covered with a clear plastic bag. The outside plastic covering is changed
whenever we change gloves or preservative, and we have found these chambers cost maybe
$3 to $4  to construct.  We have found that they work quite well.

      That is where we are right now. Where we are heading is development of a protocol
for the part  per trillion level.   We do not anticipate that that will be deployed any time
soon.  It will probably be another two years before we are ready to distribute that type of
protocol within our division.

      We have been talking with Bill Telliard  and others at  EPA on the development of the
protocols that they are working on. The protocol that we distributed in February is an
internal document, but if you are interested in  taking a look at it, you can write to me at the

                                       244

-------
Geological Survey.  If you look at the abstract, just insert below U.S. Geological Survey, 412
National  Center, and the zip code is 22092.  We will be happy to try and fulfill any
requests that you have.

      Thank you very much for listening.
                       QUESTION AND ANSWER SESSION

                                     MR. TELLIARD: Do we have any questions?

                                     MR. VARNELL: David Varnell with the Tennessee
Valley Authority Environmental Chemistry Lab.  I was  wondering if you could give an
estimate of how much additional time all of this preparation and sampling requires for the
field people.

                                     MR. MILLER:  That is an excellent question.

      If a site is normally sampled right now with only one person, then the cost increase
is fairly substantial to get a second person to the site.  In those cases, we  are looking at
probably somewhere on the order of 50 to 75 percent increase in cost because of the
second person and the added cleaning that goes along with the protocol.

      However, if a site is sampled with two people, for example, from a boat or from a
bridge now, then the increased costs are relatively minor.  They are on the  order of 20 to
30 percent increase for the cleaning and for the additional equipment that is required.

      Our simple answer to folks in our agency when there is a concern raised on the cost
issue is that we would rather have the 20 to 30 percent, even the 50 to 75 percent increase
to have data of known and adequate quality than to have data that are questionable.  So,
we see it as a reasonable cost to pay.
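The cost figures in this answer can be made concrete with a back-of-the-envelope calculation. The per-site base cost below is a hypothetical number used only to show how the 20 to 30 percent and 50 to 75 percent ranges translate into dollars; only the percentage ranges come from the discussion.

```python
# Back-of-the-envelope sketch of the cost increases quoted above.
# The $1,000 base cost per site visit is hypothetical; only the percentage
# ranges come from the discussion.

BASE_COST_PER_VISIT = 1000.00   # hypothetical current cost, dollars

def projected_cost(base, increase_fraction):
    """Apply a fractional cost increase to a base cost."""
    return base * (1.0 + increase_fraction)

for label, low, high in [("one-person site (add second person)", 0.50, 0.75),
                         ("two-person site (cleaning/equipment only)", 0.20, 0.30)]:
    print(f"{label}: ${projected_cost(BASE_COST_PER_VISIT, low):.0f} "
          f"to ${projected_cost(BASE_COST_PER_VISIT, high):.0f}")
```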

                                     MR. VARNELL: And, basically, the EPA relative
to NPDES sampling for compliance, ICP/MS is not presently acceptable.  Is that correct?

                                     MR. TELLIARD: I  do not know. I think Bill Potter
is here from  EMSL.  I cannot remember Part 136.

                                     MS. KNOX: Excuse me.  That was my question
also. My name is Robin Knox.  I am with  Geraghty & Miller, and I looked it up recently
in the Federal Register, and ICP/MS by Method 200.8 was not listed as an approved NPDES
method unless there has  been some update to the regulation that was not available to me.
                                      245

-------
      That was my question also, because there is a need to achieve those detection limits,
but yet that methodology is not approved for NPDES as far as I can tell.

                                     MR. TELLIARD: I will check on that and get back
to you.  I am sure that the update on Part 136, which only happens every eon probably is
not the answer, but we will check on it for you and get back to you.

                                     SPEAKER:  Depending on which of the methods
that will be used, it is going to be considered for the variance in each of the situations?

                                     MR. VARNELL: A variance.

                                     SPEAKER:  A variance.

                                     MR. VARNELL: Yes, you can always do  that.

                                     SPEAKER:  No, but it is a matter of simply  from
my reading experience or the writing of items...

                                     MR. TELLIARD: This is basically the 8.1.2 in the
metals method that says you can change the method or the 9.1.2 in the format that the
EMMC has used. It allows you to change the method on a site-by-site basis, but it is  not for,
quote, national approval.
      So, you can use it. It tells you basically you have to do the start-up tests over again
and what data you have to generate and put in your binder so that when we kick  in  your
door, you say yes,  I am  using this thing, and here is the data to  show  that  I  have
standardized on it.

                                     MR. VARNELL:  Okay, and if you do use ICP/MS,
are you using hydrochloric acid for preparation and solubilization of the samples? You are
only dealing with dissolved right  now, so  you do not  need to do the total  recoverable
digestion?

                                     MR. MILLER: No, in our case, hydrochloric acid
is used simply for cleaning the equipment.  Preservation  is done in nitric acid, the reason
for that being that we collect samples for nutrients and other constituents simultaneously.
So, the cleaning protocol specifies hydrochloric acid. The preservation for trace elements
is nitric acid.

                                     MR. VARNELL: Okay, thank you.

                                     SPEAKER: And I understand that a letter went out
to the regions where, on a regional level,  they are  allowed  the liberty of approving the
                                       246

-------
ICP/mass spec or something to that extent.  You might want to talk to the region that you
work within, because I believe the regions have the liberty to approve the ICP/mass spec.

      The USGS filtration method eliminates filtration artifacts or reduces them due to the
size of the filter that you are  using. Is EPA addressing that in the methods that you are
looking at?

                                     MR. TELLIARD:  Yes.  We basically  are using
USGS studies on the filters.  They had like four of them.  We are using that data to look at
the filters that we are looking  at.

      One of the things that you  noticed in the USGS presentation is that they have these
funny numbers, a BJ2642.  You have to go find out what that is.  Okay?  And that is one
of the things that we are doing, to clarify what Tim has put in his procedures.  Like he says,
that is a Gelman number or whatever.

                                     SPEAKER:  Thank you.   Is anybody looking at
another definition of dissolved metals because of the problems related to having a physical
separation to define a biochemical type parameter?

                                     MR. TELLIARD:  No.  It is still a 0.45 micron filter.
By definition and by God's law, that is a dissolved metal.

                                     SPEAKER:  Thank you.

                                     MR. TELLIARD:  You are welcome.  Shier?

                                     MR. BERMAN:  Shier Berman, National Research
Council of Canada in Ottawa.  I can  see the horrific tremors through the audience about the
increased cost of performing these new protocols.  I would suggest that you look at the use
of polypropylene as your containers in place of teflon in many cases.

      We have about two decades of experience with these kinds of containers, and except
for mercury, they are quite adequate to meet the requirements for ultra-trace level if properly
cleaned. You can just throw them  away and not worry about the expense.

      Especially, a case in point is the ampules of acid that you are  preparing.  They can
be adequately prepared quite well in polypropylene bottles that can be thrown away, and
they only  cost a few pennies rather than several dollars.

                                     MR. TELLIARD:  Thank you, sir.

                                     MR.  BERNARD:   John  Bernard,  Alexandria
Sanitation for Mr. Miller.  What sort of filters?  You  said  Gelman capsule filters?


                                       247

-------
                                     MR. MILLER:  Yes.  I cannot tell you what the
exact number is.  They are Gelman capsule filters, and they are the Supore filter material,
because I believe that is the only material that Gelman  is currently producing that can be
acid cleaned.

      In our case  for part  per billion,  however, we have found  that the capsules are
adequately  prepared by  simply  washing them with high quality  deionized water.
Somewhere around a liter  or liter and a half of water through the capsule filter will get you
well down in the 100 to 200 ng/L range.

                                     MR. HUNT:  Carlton Hunt. Are those the filters
that are about the size of your fist there?

                                     MR. MILLER:  Yes, and as a  matter of fact, we
were led to those type of filters by Herb Windom.  He has been using them  at Skidaway for
a number of years.

                                     MR. HUNT: I would like to add one thing.  With
the Nuclepore filters that a lot of people use, you have to be very careful, because there is
a lot of trace metal on those filters, and they do have to be cleaned, which is why you are
moving to capsule filters.  The cleaning for Nuclepore filters has to be done in a fairly
warm acid environment.

                                     MR. COMO: Joe Como from N K Testing Services.
I am interested in knowing if microwave digestion techniques have a role in preparation of
low level metals where you have fairly closed systems involved and teflon liners, you know,
to protect the sample.  Have you checked at all into that?

                                     MR. MILLER:  No, we have not.

                                     MR. HUNT: I thought that...and correct me, Bill,
if  I am wrong, but I thought microwave technology  had been approved for the total
recoverable.
                                     MR. TELLIARD: Yes.

                                     MR. HUNT: And it is available and it works quite
nicely.
                                     MR. BOURBON: I am John Bourbon from Region
II. Bill, I just want to let everybody else know about the ICP/mass spec. I talked to James
Lichtenburg about a month ago.  For those of you here, he is the head of the committee or
work group that proposes methods for NPDES updates, which do not happen very often, just like Bill said.
                                      248

-------
       The package that is up now that they are going to propose includes the 200.8, which
is the ICP/mass spec, and includes 200.9, which is a more advanced furnace procedure, ion
chromatography, and two or three other inorganic methods.

      I just wanted to at least give you that.  It should be a few months before it gets to the
proposal stage, and I think it is on a fast-track, where it will only be proposed for comment
less than the normal six  months.

      The other thing is the gentleman from Fisons is right, and the lady that just spoke
earlier. In case anybody is interested, any laboratory that wants to use the ICP/mass spec
method, should contact the region you are in, because each of the regions, for the ICP/mass
spec method, have flexibility on the  requirements.

      It is not going to be standard right now from region to region. That might be nice,
but we have the flexibility.

      So,  one  region might  require just that  you keep documentation  in the lab for
comparison, and another region may  require a full-blown alternate test procedure. So, you
should check within your region if you are interested in using the ICP/mass spec method.

      Thanks.

                                     MR. TELLIARD: Thank you. Tim, thanks so much.
                                       249

-------
(Blank Page)
    250

-------
 A PROTOCOL FOR THE COLLECTION
AND PROCESSING OF SURFACE-WATER
  SAMPLES FOR THE SUBSEQUENT
   DETERMINATION OF INORGANIC
 CONSTITUENTS IN FILTERED WATER

-------
     WHY A  MICROGRAM-PER-LITER
            PPB  PROTOCOL?

PPB is the concentration level at which most Federal
drinking water regulations have been established.

-------
    WHY NUTRIENTS AND MAJOR IONS
               ARE  INCLUDED

Several nutrients have 0.01 or 0.001 mg/L reporting
limits, which are actually µg/L levels. The cleaning,
QC, and other items in the protocol are necessary to
produce good-quality nutrient data at these levels.

Counterproductive to use separate equipment and
protocols for nutrients and major ions vs trace
elements.

-------
   KEY  MESSAGES  OF THE  PROTOCOL

Inorganic samples can be contaminated, but sources of
contamination can be reduced through proper planning,
use of tested equipment/supplies, proper cleaning, and
specified QA measures.

Collection of adequate QC data can identify whether
problems still exist.

-------
         KEY MESSAGES OF THE PROTOCOL,
                       continued
Development
-  All decisions made on supplies, procedures, and need
  for QC are supported by data from laboratory tests
  and actual field trials conducted in a number of
  different atmospheric and hydrologic environments.

-------
    HOW THE  PROTOCOL APPLIES TO
           UNFILTERED  SAMPLES

Sample collection in the old approach was a major
source of TE contamination.

Therefore, for samples having low to moderate
suspended sediment concentrations (fairly low total
concentration of TE), the protocol is necessary.

For high sediment concentration samples, might not
need the protocol (TE on sediment might swamp
contamination).

However, never certain of suspended sediment
concentration before sampling, so always use the
protocol for unfiltered samples.

-------
             FIELD  PROTOCOL

Certified list of equipment and supplies

Four procedures

Heavy emphasis on training

Recommended and minimally acceptable collection of
"field" QC data

Guidelines for use of QC data as basis to using and
interpreting the environmental data

-------
      ACCEPTABLE  SAMPLERS  FOR THE
         COLLECTION  OF  INORGANIC
   SAMPLES FOR LOW-LEVEL ANALYSES 1,2,3


                   D77 Teflon
                   D77 Frame
        D77 bag (Teflon or Reynolds Oven Bag)
               D77 standard (plastic)
         DH 81  (with "shrink-wrapped" handle)

  "•These samplers are acceptable only following
rigorous use of the specified cleaning procedures.
  2No through-the-ice sampler presently certified.
  3USGS is discussing development of a new
generation of flow representative, noncontaminating
samplers.

-------
 Equipment List for the ppb-Protocol


Item

Churn Splitter (8 or 14 L)

Concentrated Hydrochloric Acid

Wash Bottles for Acid and DIW

Liquinox

Non-Powdered Vinyl Gloves

Clear/White Plastic Wash Basins

Non-metallic Brushes

Sealable Plastic Bags w/o colored strips

Capsule Filters

142 mm 0.45-µm Cellulose Acetate Filters

142 mm Filtration System, preferably w/
white/clear plastic/teflon inlet/outlet valves

Peristaltic Pump for Filtration

Pump Tubing (C-Flex or silicone)

Non-metallic (ceramic) forceps

Non-metallic (Kel-F) forceps

Processing/Preservation chamber covers

Churn Splitter Carrier

Processing/Preservation Chamber Frames (plans available)
                         259

-------
     CLEANING OF  FIELD  EQUIPMENT

Preferable:  Separate sets of office-cleaned sample
collection and sample processing equipment for each
site.

Acceptable: Cleaning between field sites to prevent
cross contamination.

-------
    MODIFICATION/CONSTRUCTION  OF
             FIELD  EQUIPMENT

Funnel on churn splitter

Processing enclosure
-  To reduce/eliminate atmospherically derived
  contamination
-  Permanent in a dedicated vehicle; otherwise portable
-  Materials:  PVC or wood frame; disposable plastic
  cover

Preservation chambers
-  To prevent contamination during sample processing,
  cross contamination from preservatives, and
  atmospheric contamination
-  Same materials as processing enclosure

-------
                   FILTERS

Capsule filters are preferred because
-  Minimum precleaning
-  Less potential for atmospheric contamination
-  Large surface area to prevent filtration artifacts
-  No post-cleaning (one use)

Cost is ~6 times more expensive than plate filters
($12 vs $2)

The initial cost is more than compensated by
-  Labor savings from reduced field handling and
   no need to clean between sites
-  Need for fewer QC blanks because of the lessened
   potential for contamination
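The slide's claim that the capsule filter's higher purchase price is recovered through labor and QC savings can be tested with a simple comparison. In the sketch below, the filter prices come from the slide, while the cleaning time and labor rate are hypothetical placeholders.

```python
# Minimal sketch of the capsule-vs-plate filter cost comparison above.
# Filter prices ($12 vs $2) are from the slide; cleaning time and labor
# rate are hypothetical assumptions used only for illustration.

CAPSULE_FILTER_COST = 12.00        # dollars, single use, minimal precleaning
PLATE_FILTER_COST = 2.00           # dollars
PLATE_CLEANING_HOURS = 0.5         # hypothetical time to clean a plate filter holder
LABOR_RATE_PER_HOUR = 30.00        # hypothetical fully loaded labor rate

capsule_total = CAPSULE_FILTER_COST
plate_total = PLATE_FILTER_COST + PLATE_CLEANING_HOURS * LABOR_RATE_PER_HOUR

print(f"Capsule filter, per sample: ${capsule_total:.2f}")
print(f"Plate filter plus cleaning labor, per sample: ${plate_total:.2f}")
```

Under these assumed labor figures the two approaches come out roughly even before counting the reduced blank burden, which is the point the slide makes.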

-------
            "FIELD"  PROCEDURES

PROCEDURE 1.  Office Preparations and Cleaning of
               Equipment

PROCEDURE 2.  Field Rinsing of Equipment Prior to
               Sampling

PROCEDURE 3.  Sample Processing and Preservation

PROCEDURE 4.  Field Cleaning to Prevent Cross-
               Contamination Between Sites

-------
  Before cleaning the equipment, clean the four basins.  Each basin must be
  cleaned with: (a) detergent solution, (b) tap water, (c) dilute acid, and
  (d) DIW.  Read through this procedure and follow the appropriate steps for
  the basins, as if they were part of the sampling/processing equipment,
  before beginning to clean the equipment itself.

  Clean the processing chamber in the same way as the basins, following the
  four-step procedure.

  Disassemble all the equipment (sampling and processing), including any pump
  tubing you will be using, and immerse in the detergent solution.  Allow the
  equipment to soak in the detergent solution for at least 30 minutes.

  Put on a pair of disposable gloves and, using the appropriate brushes,
  thoroughly scrub all the equipment with the detergent solution.

  Once scrubbed, place the cleaned items in a second pre-cleaned,
  non-contaminating basin.

  Partially fill the churn splitter with the detergent solution and thoroughly
  scrub.  Pay particular attention to the paddle and the area around the
  nozzle.  Make sure that the spigot and cappable funnel are cleaned as well.

  Change gloves.

  Thoroughly rinse all the scrubbed items with warm tap water until there is
  no sign of any detergent residue (until the soap bubbles all disappear).
  Fill the churn about one-third full through the cappable funnel with the tap
  water and swirl it around to remove any detergent residues.  Make sure to
  allow some of the water to pass through the spigot.  Force the tap water
  through any tubing that has been cleaned with the detergent.  If necessary,
  use a wash bottle filled with tap water to clean out any hard-to-reach
  places.

  Change gloves.

  Place all the tap-water-rinsed items in a pre-cleaned, non-contaminating
  basin.  Immerse the equipment in the dilute (5%) acid and let it soak for at
  least 30 minutes.  Fill the churn splitter with the dilute acid and allow it
  to soak for the same amount of time.  At the end of the soak, remove the
  equipment and place it in a pre-cleaned, non-contaminating basin.  Drain the
  acid from the churn splitter through the spigot.

  Change gloves.

  Fill the basin and the churn splitter (through the cappable funnel) with
  DIW.  Using either a DIW faucet or a wash bottle, thoroughly wash down all
  the equipment with DIW.  Swirl the DIW in the churn splitter and drain it
  through the valve and nozzle.  Repeat this DIW rinse two more times, for a
  total of three rinses.

  All the parts, except the churn splitter, should be placed inside two
  sealable plastic bags.  The churn splitter with cappable funnel also should
  be double-bagged in plastic and placed inside the churn carrier.  Filtration
  gear should be reassembled and double-bagged in plastic.  If a fixed
  processing chamber is to be used, store the filtration equipment and
  assorted filtration supplies inside so that all the equipment and supplies
  for sample processing are available within the processing chamber prior to
  going out in the field.  All sample bottles, appropriately labeled, may be
  placed inside the processing chamber for transport to the field.  All pump
  tubing required for sample filtration/processing should be sealed in double
  plastic bags and placed inside the processing chamber.  If processing is to
  take place in a lab van/vehicle, then store all the gear inside prior to
  going to the field site(s).

                   FIGURE 6--PROCEDURE 1:  OFFICE PREPARATIONS AND
                                      CLEANING OF EQUIPMENT
                                            264

-------
                 Put on a pair of disposable gloves.
             Collect sufficient quantities of native water
               with the sampler to completely fill the
             bottle; shake; then empty bottle by pouring
                   the water through the nozzle.
              Collect aliquots of native water with the
              sampler and pour into the churn through
             the cappable funnel until the volume in the
                       churn is 2 to 4 liters.
         Remove the churn splitter, still contained within its
         inner plastic bag, from the churn carrier; leave the
         outer plastic bag inside the carrier. Move the churn
        paddle up and down several times so that the inside is
          thoroughly wetted, and then swirl the water in the
           churn so that the entire system has been rinsed.
         Force the churn spigot through the inner plastic bag
           and drain all the rinse water through the spigot.
         Once draining is complete, pull the inner plastic bag
        back over the spigot, rotate the churn so the spigot is no
        longer near the hole in the plastic bag, and replace the
         churn and inner bag inside the outer bag and churn
                            carrier.
FIGURE 7--PROCEDURE 2: FIELD RINSING OF EQUIPMENT PRIOR
                          TO SAMPLING
                             265

-------
  Park the field vehicle as far away from any nearby road(s) as possible and
  turn off the motor.  Entrained road dust and emissions from highway vehicles
  and/or the field van can contaminate trace-element samples for
  parts-per-billion analysis.  The vehicle should face toward the road because
  sample processing usually is done on the tailgate or in the back of the
  vehicle.

  Put on a pair of disposable gloves.

  Collect the whole-water sample, using an appropriate sampler and following
  whatever acceptable procedure is appropriate to the site and the flow
  conditions.  Even though one individual has been designated as 'clean hands'
  and another as 'dirty hands', it is still extremely important to pay
  attention while the sampling operation is in progress to limit, as much as
  possible, contact with any potential source(s) of contamination (keep your
  hands off any metal bridge parts; try not to touch the sounding weights).
  When operating from metallic structures, it may be useful to spread a large
  plastic sheet over the area where sampling is to take place.  If you make
  contact with a potential contaminant, dispose of your gloves and put on a
  new pair before you transfer any sample to the splitter.

  Fill the churn splitter with each collected aliquot by opening the plastic
  bags and pouring it through the cappable funnel in the lid.  Remember, only
  remove the cap when filling the churn splitter, and limit the opening as
  much as possible.  After adding the sample aliquot to the churn splitter,
  re-seal the plastic bags.  Place the open-lidded side of the churn carrier
  in such a way that it serves as a barrier to the prevailing wind and/or to
  turbulence caused by moving vehicles.  When sampling is complete, move the
  churn splitter, inside its carrier and plastic bag, and the sampling
  equipment back to the field vehicle.

  Remove the churn splitter, still housed inside its inner plastic bag, from
  the outer plastic bag and carrier and move it inside the field vehicle.

  Attach the pump tubing through the hole in the side of the processing
  chamber.  Keep your pump tubing as short as practical.

  Pass 1000 mL of DIW through the pump tubing and through the capsule filter.
  After passage of the 1000 mL, remove the tubing from the DIW reservoir and
  continue to run the pump to drain as much of the DIW remaining in the
  system as possible.  Discard all the DIW.

  Transfer one end of the pump tubing to the churn splitter through the
  cappable funnel and re-seal the plastic bag around the tubing.

  Remove the pump tubing from the filtration system, start the peristaltic
  pump, and pump sufficient sample to fill all the pump tubing; place the end
  of the tubing in the disposal funnel or 'toss' bottle to prevent spillage.

  Open a trace element sample bottle, place the outlet of the capsule over
  the opening, and filter 50 mL (fill the bottle to the top of the bottom
  lip).  Use this filtrate to rinse the bottle.  Process the dissolved trace
  element sample by filling the rinsed bottle to the top of the upper lip of
  the bottle.

  Process sufficient water to permit adequate rinsing of any remaining sample
  bottles, but no more than 100 mL.

  Complete any other requisite filtrations for any remaining water quality
  determinations.  If other inorganic parameters are to be determined, the
  order of collection must be: a) nutrients, b) major ions, and c)
  radiochemicals.

  Once all the filtrations are complete, remove each sample bottle from the
  processing chamber, one at a time, and place them in the preservation
  chamber.  Remove the bottles in the appropriate order, add the correct
  preservative to each bottle, and tightly cap the bottle.  Change
  preservation chamber covers whenever the preservation procedure calls for a
  change in gloves.

  *REMEMBER TO REMOVE ANY UNFILTERED SAMPLES PRIOR TO CARRYING OUT ANY
  FILTRATIONS

          FIGURE 8--PROCEDURE 3: SAMPLE PROCESSING AND PRESERVATION
                                      CAPSULE FILTER OPTION *
                                            266

-------
                               THIS PROCEDURE SHOULD BE CARRIED OUT AT THE FIRST
                               SAMPLING SITE WHEN THE EQUIPMENT IS STILL WET, AND
                                         BEFORE DRIVING TO THE SECOND SITE.

  Sampler

  Put on a fresh pair of disposable gloves.

  Disassemble the sampler into its requisite parts so that all of the pieces
  (e.g., nozzle, head, bottle) can be thoroughly wetted with the various
  rinses.

  Thoroughly rinse the sampler and parts with DIW; use a stream of DIW from
  the appropriate wash bottle, if required.

  Thoroughly rinse the sampler and parts with dilute acid; use a stream of
  dilute acid from the appropriate wash bottle, if required.

  Thoroughly rerinse the sampler and parts with DIW; use a stream of DIW from
  the appropriate wash bottle, if required.

  Thoroughly rerinse the sampler and parts with DIW a second time; use a
  stream of DIW from the appropriate wash bottle, if required.  Repackage the
  various parts in double plastic bags.

  Churn Splitter

  Remove the churn splitter from its plastic bags and discard the bags.
  Thoroughly rinse the churn splitter with DIW.  Fill the churn through the
  cappable funnel; swirl the DIW in the churn splitter, and drain some of the
  rinse through the spigot prior to discarding the remaining rinse water.

  Thoroughly rinse the churn splitter with dilute acid.  Fill the churn
  through the cappable funnel; swirl the dilute acid in the churn splitter,
  and drain some of the acid rinse through the spigot prior to discarding the
  remaining dilute acid rinse.

  Thoroughly rerinse the churn splitter with DIW.  Fill the churn through the
  cappable funnel; swirl the DIW in the churn splitter, and drain some of the
  rinse through the spigot prior to discarding the remaining rinse water.
  Repeat this DIW rerinse a second time.

  Repackage the churn splitter in 2 plastic bags, seal them with a clip, and
  place the entire unit back inside the churn carrier.

  Processing/Preservation System

  Place the end of the pump tubing, which connects to the filter, inside the
  disposal funnel ('toss bottle') in the bottom of the processing chamber.

  Pass one (1) liter of dilute acid through the system using the same pump
  and pump tubing used to filter the sample.

  Pass two (2) liters of DIW through the system using the same pump and pump
  tubing used to filter the sample.

  Remove the pump tubing from the hole in the processing chamber and
  repackage it in double sealable plastic bags.

  If the processing chamber is non-portable, swab down the inside with DIW to
  remove any spilled native water, suspended solids, or wash solutions
  spilled/dropped during removal of the filter, etc.  Remove the swab and
  discard.  If the processing chamber is a portable unit, discard the
  enclosure cover and replace it with a new one.

  Discard the last preservation chamber enclosure.  Do not replace it until
  ready to preserve additional samples at the next sampling site.

  Proceed to the next sampling site and conduct Procedures 2 and 3.

  *Items in bold letters are the only ones that differentiate this procedure
  from Procedure 3.

    FIGURE 9--PROCEDURE 4: FIELD CLEANING TO PREVENT CROSS-CONTAMINATION
                              BETWEEN SITES  -  CAPSULE FILTER OPTION
                                                           267

-------
           TWO-PERSON SAMPLING  CREWS
               AT ALL TIMES-RATIONALE
Aside from improperly cleaned equipment, use of one
person represents the greatest potential source of
contamination in sample collection and processing

Deemed necessary by developers/practitioners of all
clean protocols used to date

-------
 TWO-PERSON SAMPLING CREWS-JOBS

Clean Hands: All operations concerned with
- Sampler bottle
- Transfer of sampler bottle to churn splitter
- Actual sample processing

Dirty Hands
- Preparation of the sampler
- Actual collection of the sample

Not clear cut, requires coordination and practice
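The division of labor on this slide can also be written down as a lookup so that crews can quickly settle "who touches what" questions during training. The mapping below simply restates the slide and the talk (dirty hands handle the metal hardware); it is a training-aid sketch, not an addition to the protocol.

```python
# Minimal sketch: the clean-hands / dirty-hands division of labor from the
# slide above, written as a lookup for training checklists.

RESPONSIBILITIES = {
    "clean hands": [
        "All operations involving the sampler bottle",
        "Transfer of the sampler bottle to the churn splitter",
        "Actual sample processing",
    ],
    "dirty hands": [
        "Preparation of the sampler",
        "Actual collection of the sample",
        "Handling of metal hardware, bridge rails, and other potential contaminant sources",
    ],
}

def who_does(task_keyword: str) -> str:
    """Return the role responsible for a task containing the keyword."""
    for role, tasks in RESPONSIBILITIES.items():
        if any(task_keyword.lower() in t.lower() for t in tasks):
            return role
    return "coordinate between the two (not clear cut)"

print(who_does("sampler bottle"))   # clean hands
print(who_does("collection"))       # dirty hands
```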

-------
      SAMPLE PROCESSING SPACE
Specifically cleaned space is necessary
Dedicated vehicle is preferable

-------
         SAMPLE  PRESERVATION


Standard: highest quality nitric acid in borosilicate
glass ampules

- Found to have low levels of Al, Ba, B, Ca, Cr, Mg,
  Si, and Na from the glass (at MDLs of ICP/MS)
- Cost: $5 per ampule
- Tried other types of glass; same TE's at about
  the same levels

Alternative: highest quality nitric acid in Teflon vials
  - No contamination (at the MDL's of ICP/MS)
  -Cost: $16 per vial

-------
            Figure 1-Order of Sample Preservation*
             Put on a fresh pair of disposable gloves
            and preserve all samples that require the
            addition of acids such as nitric, sulfuric,
              or hydrochloric. For example, trace
             element samples, with the exception of
               samples for the determination of
                 mercury require nitric acid.
           Using the same gloves used to add any acid
            preservatives,  preserve any samples for
              mercury determinations  with nitric
                  acid/potassium dichromate.
            Change gloves and preserve the nutrient
             samples with mercuric chloride, after
                 which they should be chilled.
          Change gloves again, and complete any other
          preservation procedures, such as the addition
          of sodium hydroxide, zinc acetate, or copper
                   sulfate, that are required.
*Remember, change the preservation chamber cover every time
you change gloves.

Always store your preservatives in separate sealed containers,
preferably away from each other and away from any samples.

Preservative containers, once used, should be stored in separate
sealed containers such as screw-cap bottles until proper disposal
can be arranged.

Used gloves should also be stored in sealed containers, such as a
lidded pail, until proper disposal can be arranged.
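Figure 1's preservation order, including where gloves and chamber covers change, can be kept as ordered data so the sequence is easy to audit. The list below restates the figure; it is a sketch only, not part of the protocol text.

```python
# Minimal sketch: the preservation order from Figure 1, with glove changes,
# captured as ordered data.  Restates the figure above.

PRESERVATION_ORDER = [
    ("change gloves", "Acid-preserved samples (nitric, sulfuric, or hydrochloric); "
                      "trace elements except mercury get nitric acid"),
    ("same gloves",   "Mercury samples: nitric acid/potassium dichromate"),
    ("change gloves", "Nutrient samples: mercuric chloride, then chill"),
    ("change gloves", "Remaining preservations (sodium hydroxide, zinc acetate, copper sulfate)"),
]

for glove_action, step in PRESERVATION_ORDER:
    # Per the figure note, the preservation chamber cover changes with every glove change.
    cover_note = " (also change chamber cover)" if glove_action == "change gloves" else ""
    print(f"{glove_action}{cover_note}: {step}")
```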
                         272

-------
             FIELD QC  SAMPLES

Equipment blank. To test whether equipment is clean:

- Before crew goes to field first time to ensure crew is
  capable of cleaning equipment properly
- When new equipment is used
- At least once per year

Field blank. To test for contamination during sample
collection and processing
- At least one per sampling trip
- If a sampler is used several times during a trip,
  collect after the prescribed between-site cleaning
  and just before the last environmental sample

-------
       FIELD QC SAMPLES-continued

Split sample. To test for precision in shipping and
analysis
-  At least one per sampling trip
-  Split after the preservation step

Concurrent samples. To test the reproducibility of
sample collection
-  At least one per two sampling trips
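The minimum QC frequencies on these two slides can be expressed as a small scheduling rule, which makes it easy to check a planned trip for missing QC samples. The sketch below is built only from the frequencies stated above; the trip-description inputs are hypothetical.

```python
# Minimal sketch: minimum field QC samples implied by the slides above,
# given a planned sampling trip.  Frequencies come from the slides; the
# trip description inputs are hypothetical.

def minimum_qc_plan(new_crew_or_equipment: bool,
                    equipment_blank_done_this_year: bool,
                    trips_since_last_concurrent: int):
    """Return the minimum list of QC samples for the upcoming trip."""
    plan = []
    if new_crew_or_equipment or not equipment_blank_done_this_year:
        plan.append("Equipment blank (before field work)")
    plan.append("Field blank (at least one per trip, after the between-site "
                "cleaning and just before the last environmental sample)")
    plan.append("Split sample (at least one per trip, split after preservation)")
    if trips_since_last_concurrent >= 1:
        plan.append("Concurrent samples (at least one per two trips)")
    return plan

for item in minimum_qc_plan(new_crew_or_equipment=False,
                            equipment_blank_done_this_year=True,
                            trips_since_last_concurrent=1):
    print("-", item)
```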

-------
                              Figure 2--Equipment Blanks

  Collect, store in an appropriate bottle labeled 'Source Solution Blank',
  and adequately preserve an aliquot (at least 250 mL) of the IBW.  Record
  and keep on file, in your field notes, the date and lot # of the IBW and
  the preservative obtained from the NWQL.  Always use preservative from the
  same lot # for an entire sampling trip for both the actual and the quality
  control samples.

  Pour at least 5 liters (more may be required to fill the sampling container
  to capacity, at least once) of the IBW into the sampling device, then pour
  off an aliquot (at least 250 mL) into an appropriate bottle labeled
  'Sampler Blank', and adequately preserve it.  If the sampler container is
  smaller than 5 liters, it may have to be refilled several times.  The
  sample container may be filled with the cap and nozzle removed, but both
  must be in place when emptying the container.

  Pour the remainder of the IBW from the sampler into the churn splitter,
  then collect an aliquot (at least 250 mL) in an appropriate bottle labeled
  'Splitter Blank', and adequately preserve it.

  Pump an aliquot (at least 250 mL) of the IBW from the churn, by whatever
  means will be used in the field (vacuum, peristaltic), into an appropriate
  bottle labeled 'Pump Blank', and adequately preserve it.

  Finally, pump an aliquot (at least 250 mL) of the IBW from the churn
  through the appropriate filtration system (if using a plate filter), or
  through a pre-conditioned capsule filter, into an appropriate bottle
  labeled 'Equipment Blank', and adequately preserve it.

  Initially, only send the bottle labeled 'Equipment Blank' to the NWQL and
  have the water analyzed for all the constituents to be determined on normal
  field samples.

  If the data come back from the NWQL at acceptable levels, no further work
  is required to indicate an acceptable equipment blank, and the sequential
  samples can be discarded.

  If all or some of the data come back higher than acceptable levels, the
  previously collected sequential blanks (e.g., the bottles labeled 'Source
  Solution Blank', 'Sampler Blank', 'Splitter Blank') should be submitted to
  the NWQL for analysis.  The data from these sequential samples should be
  used to identify the source of the contamination detected in the
  'Equipment Blank', and remedial measures taken to eliminate it.  This
  process must continue until the 'Equipment Blank' is at acceptable levels
  and before any field samples are collected.
                                          275

-------
                                Figure 3-Field Blanks

1. Collect, store in an appropriate bottle labeled 'Source Solution Blank', and
   adequately preserve an aliquot (at least 250 mL) of the IBW. Record and keep
   on file, in your field notes, the date and lot # of the IBW and the
   preservative obtained from the NWQL. Always use preservative from the same
   lot # for an entire sampling trip for both the actual and the quality
   control samples.

2. Pour at least 5 liters (more may be required to fill the sampling container
   to capacity, at least once) of the IBW into the sampling device, then pour
   off an aliquot (at least 250 mL) into an appropriate bottle labeled 'Sampler
   Blank', and adequately preserve it. If the sampler container is smaller than
   5 liters, it may have to be refilled several times. The sample container may
   be filled with the cap and nozzle removed, but both must be in place when
   emptying the container.

3. Pour the remainder of the IBW from the sampler into the churn splitter, then
   collect an aliquot (at least 250 mL) in an appropriate bottle labeled
   'Splitter Blank', and adequately preserve it.

4. Pump an aliquot (at least 250 mL) of the IBW from the churn, by whatever
   means will be used in the field (vacuum, peristaltic), into an appropriate
   bottle labeled 'Pump Blank', and adequately preserve it.

5. Finally, pump an aliquot (at least 250 mL) of the IBW from the churn through
   the appropriate filtration system (if using a plate filter), or through a
   pre-conditioned capsule filter, into an appropriate bottle labeled 'Field
   Blank', and adequately preserve it.

6. Initially, only send the bottle labeled 'Field Blank' to the NWQL and have
   the water analyzed for all the constituents to be determined on normal field
   samples.

7. If the data come back from the NWQL at acceptable levels, no further work is
   required to complete the field blank, and the sequential samples can be
   discarded.

8. If all or some of the data come back higher than acceptable levels, the
   previously collected sequential blanks (e.g., the bottles labeled 'Source
   Solution Blank', 'Sampler Blank', 'Splitter Blank') should be submitted to
   the NWQL for analysis. The data from these sequential samples should be used
   to identify the source of the contamination detected in the 'Field Blank',
   and remedial measures taken to eliminate it on future sampling trips.
                                            276

-------
                            Figure 4-Split Field Samples

1. Start with a full bottle containing a filtered/processed/preserved sample.

2. Condition a second bottle with a small volume of filtered/processed/preserved
   sample.

3. Mix the full bottle thoroughly by shaking.

4. Transfer the entire contents of the first bottle to the second bottle, cap,
   and shake.

5. Pour half the contents of the second bottle back into the first bottle, and
   cap both bottles securely.
                       277

-------
                          Figure 5-Concurrent Field Samples

1. Starting with the first vertical, collect a sample for compositing and place
   it in the first churn splitter.

2. Reoccupy the first vertical, collect a second sample, and place it in the
   second churn splitter.

3. Go to the second vertical, collect a sample for compositing, and place it in
   the second churn splitter.

4. Reoccupy the second vertical, collect a second sample, and place it in the
   first churn splitter.

5. Continue sampling the remaining verticals, and continue to alternate the
   placement of the samples in the churn splitters.

6. After all the verticals have been occupied, there will be two churn splitters
   which contain two, as close to simultaneous as possible, representative
   samples from the cross section.

7. Process (filter) and preserve the first sample, and then split it (as
   described in the section on split samples) into two appropriate bottles,
   with one labeled 'Site x, Sample 1, Split A' and the other labeled 'Site x,
   Sample 1, Split B'.

8. Go through the field cleaning procedure for the entire filtration system, or
   clean the pump tubing and use a new capsule filter; process (filter) and
   preserve the second sample, and then split it (as described in the section
   on split samples) into two appropriate bottles, with one labeled 'Site x,
   Sample 2, Split A' and the other labeled 'Site x, Sample 2, Split B'.
                               278

-------
         LABORATORY  PROTOCOL

Class 100 clean room to:
- Prepare equipment and supplies
- Prepare samples

ICP/MS to measure concentrations of 17 TE's in DIW
blanks at MDL's of 0.1-0.5 µg/L

ICP/MS to measure 15 TE's in environmental samples
at reporting level of 1.0 µg/L

Standard lab QC
- Blanks
- Reference Samples
- Duplicates

-------
METHOD DETECTION LIMITS AND REPORTING LIMITS FOR
THE VARIOUS CONSTITUENTS COVERED BY THE PROTOCOL

Constituent                   Analytical     Schedule 172    Environmental
                              Instrument     MDL             Sample RL
Aluminum (µg/L)               ICP-MS         0.3             3
Ammonia (N, mg/L)             ASF            0.002           0.002
Antimony (µg/L)               ICP-MS         0.2             1
Barium (µg/L)                 ICP-MS         0.2             1
Beryllium* (µg/L)             ICP-AES        0.2             0.5
Boron (µg/L)                  ICP-AES        2               2
Cadmium (µg/L)                ICP-MS         0.3             1
Calcium (mg/L)                ICP-AES        0.002           0.002
Cobalt (µg/L)                 ICP-MS         0.2             1
Chromium (µg/L)               ICP-MS         0.2             1
Copper (µg/L)                 ICP-MS         0.2             1
Iron (µg/L)                   ICP-AES        3               3
Lead (µg/L)                   ICP-MS         0.3             1
Magnesium (mg/L)              ICP-AES        0.001           0.001
Manganese (µg/L)              ICP-MS         0.1             1
Molybdenum (µg/L)             ICP-MS         0.2             1
Nickel (µg/L)                 ICP-MS         0.5             1
Nitrate (N, mg/L)             ASF            0.001           0.001
Nitrite+Nitrate (N, mg/L)     ASF            0.005           0.005
Orthophosphate (P, mg/L)      ASF            0.001           0.001
Silver (µg/L)                 ICP-MS         0.2             1
Sodium (mg/L)                 ICP-AES        0.025           0.025
Strontium (µg/L)              ICP-MS         0.1             1
Thallium (µg/L)               ICP-MS         0.1             1
Uranium (µg/L)                ICP-MS         0.2             1
Zinc (µg/L)                   ICP-MS         0.5             3
Silica (mg/L)                 ICP-AES        0.02            0.02

* Be also can be determined by ICP-MS, but the RL will be 1 µg/L.
                           280

-------
                                     MR. TELLIARD: We are going to take a 15-minute
break; after the break, please return for Session 2.

      (A brief recess was taken.)
                                     DR. FIELDING: We are now ready to continue with the rest of
this morning's session. The next paper, entitled The Preparation of NRC Certified Reference
Materials, will be presented by Dr. Shier Berman.

      Shier was appointed director of the Environmental Measurement Science program or
EMS of the National Research Council Institute for Environmental  Chemistry, now the
Institute for Environmental Research  and Technology, in June of 1990.

      The EMS program is an internationally recognized center of excellence for analytical
chemistry, especially for trace analysis. The program is responsible for the inorganic aspects
of the NRC Marine Analytical Chemistry Standards program which is the world's foremost
producer of marine certified reference materials.

      Shier?
(Verbatim Transcript)
          THE PREPARATION OF NRC CERTIFIED REFERENCE MATERIALS
                                     MR. BERMAN: Thank you, Mr. Chairman, ladies
and gentlemen. As soon as I figure out modern technology, we will get going.

      I  guess I should give a little bit of background to who we are. We are a national
laboratory, and I guess the closest analogy to us in the United States is NIST, although there
are great differences between us and  NIST, but in the field of reference materials, we have
a lot in common.

      In fact, our institute has an informal memorandum of understanding with the standard
reference material program of NIST regarding cooperation and exchange in the preparation
of environmental certified reference materials.

      About 15, 16, 17 years ago... I have not done the arithmetic... the National Research
Council  was approached by what was then  the Canadian Committee on Oceanography to
look into what they called the  chaotic state  of analytical chemistry with respect to the
analysis  of marine materials.  Well, we soon discovered, in  an ad hoc committee that


                                      281

-------
looked into this, that it was not only a Canadian problem, but it was a worldwide problem,
and the National Research Council of Canada set up what became known as the Marine
Analytical Chemistry Standards program, a very successful program which has done some
very nice things, and I will tell you about some of them in this talk.

      One of the reasons for the concern... and I use this slide as one of my favorite ones
for introducing the topic... is that, as we know, as pollution was proceeding in this world...
and it is an old slide, but it serves the point... that the oceans seemed to be getting cleaner
and cleaner and cleaner and cleaner as we went through the decades from 1940 to the
1980s.

      Of course, that really was not happening, but we just did not have the protocols in
place in most of the laboratories which were described earlier this morning. This is a case
of water analysis where cleanliness is next to godliness, and that is the message that has
been given this morning.

      In fact, when we first got into it... and I  did not know a thing about marine things in
1975 or so... I  remember reading this  great treatise by  a  very prominent  physical
oceanographer on how wonderful  it was that no matter where you measured zinc in the
world's  oceans, the concentration was 5 ppb and how this had to be based on the physical
chemistry and the exchange in the chemistry  that was going on in the ocean.

      So, part of the problem is getting good  values in order to get rid of all this folklore.

      If you think it is the marine scientists who were in bad shape and those of you who
deal with inland waters or fresh waters can smirk at them, we had the same problem. This
is work compiled at the University of Michigan, and here we look at Lake Huron, how it
has been getting cleaner and cleaner and cleaner over the decades.

      By 1980, the people at the University of Michigan were reaching something which
was approaching the truth, and one has to give them a great deal of credit for being able
to do that some 14, 15 years ago. It also dispels the folklore that fresh waters have much
higher concentrations than seawaters. Pristine fresh waters come awfully close to seawaters
in concentration.

      As a result of our work over the last decade and a half in this, we have developed
what we call a quality assurance program for environmental analytical measurement of
which the marine analytical chemistry program is a part.

      We have four basic things that we do.  One is the provision of certified reference
materials for environmental  samples, and we stay only in the environmental field. In fact,
we are really limited to the aquatic environment which includes waters, sediments, and the
biota that live therein.
                                       282

-------
      We are very interested in  the intercomparison of laboratories.  This  is from an
enabling point of view.  We want to ensure that the laboratories are able to get the right
answer.

      There are two reasons for this.  One is that decisions are made on these answers
which involve the expenditure of millions and even mega-millions of dollars by our masters.
Two, we want to ensure that the laboratories get the work.  If you get the right answer, you
will get the contracts, and that is why we work with Canadian laboratories in this respect
but also with a lot of American and other international laboratories.

      Two other factors in our program are the development of reliable methodologies for
the analysis of environmental samples, and I am not going to talk too much about this today
save to say that  we have probably, in  the last decade, published about 250 papers on
methodologies in the peer-reviewed literature and that in our laboratories lives the guru of
electrothermal atomization atomic absorption, one Ralph Sturgeon, and down the hall from
him is one of the pioneers in ICP emission and ICP/mass spectrometry, Jim McLaren.

      So, we are well suited.  In fact, we  are very experienced in  these fields.  The
laboratory celebrated its 50th anniversary a few years ago. We are a product of World War
II, and we entered the war some two years before you people did.  We  have even an edge
on that.

      We also develop analytical instrumentation, and I will not talk about this at all today.

      I am going to talk about point 2 first, because it reflects back to point 1. The major
part of my talk is going to be about certified reference materials,  but the intercomparison
exercises have set our philosophy of what we think we have to do in order to improve the
quality of analytical  measurement.

      Over the last dozen years, we have conducted international studies, specifically, for
the International Council for the Exploration of the Sea. We are in our eighth year of a very
successful continuing program with NOAA where we share with NIST the responsibility for
their quality assurance program with respect to the National Status and Trends program, and
many USEPA laboratories participate in that study, and if you do not, I  would suggest you
get in touch  with NOAA to talk to them about it.

      And we even do some work in our home country of Canada.

      As an example of what we do with the intercomparisons,  I am using a  rather old
study, but it  is an extreme example of what we are after. In 1981, about the time we got
involved with the International Council, they were already carrying out an intercomparison
on the determination of trace metals in seawater, and in front of you are the  results for lead
in 1981.
                                       283

-------
      15 laboratories submitted results, and among these 15 were a good number of the
world's, quote, best laboratories.  They produced a consensus value. However, a sample
had been sent to Clair Patterson, the guru of lead analysis in this world, especially at that
time, and there is no doubt that lab 3 which was Patterson had a very good estimate of what
was the true value.

      The consensus value was wrong, and that is a lesson we learned early in the game.
Democracy may be fine for the United States and Canada and even South Africa at the
moment, but it has no place in analytical chemistry. If you are wrong, you are wrong even
if you are the majority. If you  are right, you are right even if you are a minority of one.

      That was lesson one. You had to have in an intercomparison exercise some way of
getting a look at the approximation to the truth in order to  estimate the  accuracy of the
laboratories, not only their ability to  reproduce each other or their variance between
laboratories.

      The other thing about this slide is if you look at the concentrations, even the true
value is about 0.15 ppb, and in open ocean water, we have never seen anything close to
that in all our experience.  So,  the sample was obviously contaminated during collection.

      Even if the consensus value and the true value had been close to each other, it was
a useless  experiment, because  you  never  analyze open  ocean  water at  such  high
concentrations.  So, we would have got no information about the ability of the laboratories
to perform the analysis at that point.

       At the time we got involved with the high seas, we ran two intercomparisons for
trace metals in seawater over the next seven years, with the feedback to the  laboratories that
it entailed.  By the end of  the second one, we had something like this.

       About 20 laboratories took part, with concentrations at true open ocean levels. You
see that about 15 of the 20 labs, seven years later, with proper consultation and feedback
and discussion, were now able to analyze the material, and the consensus value and the
true value are certainly not far  apart.

      That is the basis of it all.  You have to have good inter-laboratory consensus, but you
have to have accuracy also. Too many intercomparisons leave out the accuracy factor.

      Moving on to the reference material program, I just want to remind you what we
mean when we talk about reference materials and certified reference materials. There is
nothing magic about our reference material.  In fact,  most of you in your own  laboratory
prepare one.  It just has to be a homogeneous material, and it has to have a well-established
track record.
                                       284

-------
      Then you can use it for your own purposes, for your quality control charts, for your
own internal calibration, or for whatever purpose you want to use it.

      However, you notice that in the official international standards organization definition
of a reference material, there  is  no reference to accuracy.  That is,  it does not have to have
an accurate value attached to it.  It has to be homogeneous so that you can reproduce it,
but it does not have to be accurate.

      Now, a certified reference material is a subset of reference materials. NIST insists on
calling theirs SRMs; the rest of the world talks about CRMs. It is a reference material which
is accompanied by a certificate, that  is, somebody is putting his name on the dotted line.
The properties have been certified by a procedure which establishes traceability.

      It is like buying a dog. You have to establish its pedigree, its lineage. You have to
relate it  back to a national standard, an international standard, an SI unit, or what have you.

      That is not all that easy to do at times, because the international unit is the mole, and
who the hell ever measures  anything in moles?  So, we  have debates these days about
traceability, and it is a big word.

      But it has to be an accurate manifestation or realization of the  unit in  which  you
express  it. So, each certified value is also useless unless it has an uncertainty associated
with it at  a given  level of confidence.

       So, a number that does not have a plus or minus and is not traceable back to a national
or an international  standard or an international unit is not a certified reference material, and
they are a little tougher to make than the reference  materials themselves.
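
       To illustrate the form such a certified value takes, the sketch below multiplies a
combined standard uncertainty by a coverage factor k (k = 2 corresponding to roughly 95
percent confidence), a common metrological convention. It is offered only as an example of
the idea, not as the NRC's specific certification procedure, and every number in it is a
placeholder.

    # Illustrative only: expanded uncertainty U = k * u_c, with k = 2 giving
    # roughly a 95% level of confidence.  The values are placeholders, not
    # figures from any NRC certificate.
    def expanded_uncertainty(u_combined, k=2.0):
        """Expanded uncertainty U = k * u_c."""
        return k * u_combined

    certified_value = 0.085   # hypothetical certified concentration, ug/L
    u_c = 0.004               # hypothetical combined standard uncertainty, ug/L
    U = expanded_uncertainty(u_c)
    print(f"Certified value: {certified_value} +/- {U:.3f} ug/L (k = 2, ~95% confidence)")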

      We have manufactured  and produced  three types of  reference materials, natural
waters,  sediments, and marine  biota. Well, no, they are not  all marine; they are aquatic
biota.

      All the materials we produce  are taken  from  the environment.  We do not believe
in  spiking. We do not believe  in synthetic reference materials.

       We have four types of water. The NASS is an open ocean water taken several
hundred kilometers off the coast of Nova Scotia in the North Atlantic. The CASS is a coastal
water taken near Halifax Harbor. The SLEW is an estuarine water, and the SLRS is a river
water.

      So, we have essentially a range of salinities, ocean water from 35  parts per thousand
to  coastal  water at roughly 20 parts per thousand, estuarine about 12 or 13, and, of course,
the river water of zero salinity.  We cover the range,  and they each give  different problems
in  analysis, as many of you well know.


                                        285

-------
      The collection  is a problem.  It was alluded to this morning.  This was the first
research vessel we had at our disposal to collect open ocean water. You can see what one
of the problems is, because it had to keep its engines going to operate the winches for our
sampling.

      A closer view of the problem, scraping the rust off the deck. Now, we collect in 50-
liter carboys. One ug of iron into  any of those 50-liter carboys would increase our iron
content by 10 percent, and 1 ug of lead into any of our carboys would increase the lead
content by 100  percent.
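
       A back-of-the-envelope check shows what those two figures imply about the water
itself; the arithmetic below follows directly from the 50-liter volume and the 10 percent and
100 percent increases quoted above.

    # Back-of-the-envelope check of the carboy figures quoted above.
    # 1 ug spread through a 50-liter carboy is 0.02 ug/L; if that is 10% of the
    # iron content and 100% of the lead content, the implied ambient levels follow.
    carboy_volume_L = 50.0
    added_ug = 1.0
    added_conc = added_ug / carboy_volume_L       # 0.02 ug/L

    implied_iron = added_conc / 0.10              # about 0.2 ug/L iron
    implied_lead = added_conc / 1.00              # about 0.02 ug/L lead
    print(f"iron ~{implied_iron} ug/L, lead ~{implied_lead} ug/L")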

      So, how do you collect off a  terrible collection platform like that? I am not going to
take... we developed a system in which the water never saw the atmosphere.  We either
take samplers or now, if we can,  if we are not going too deep, we pump by  peristaltic
pumping through silicone tubing which is all very well cleaned, through the filter system,
through the automatic acidifying,  and into the  carboys  which have  been stored with
acidified water to the same pH as we acidify our seawater.  All is sealed in wooden crates,
and the carboys are wrapped in plastic bags.

      So, all we do on site is reach our hand in on top of the plastic,  loosen the  top of the
container, connect the hose to the spigot on the bottom of the carboy, open the spigot, and
pump backwards into the tube and expel the air that was  in there.

      It seems to work quite well,  but it took us  a  long time to get there. In fact, it took
us five years to decide that we were capable of storing seawater and of analyzing it with the
aim of certification.

      There is  a picture of the on-board  pumping apparatus, all enclosed.   Those are
Gelman filter cartridges, by the way, which were referred to in the last talk. They may not
be the same size and shape, but they are enclosed  capsules.

       We have four water samples, as I said before. The three seawaters are each certified
for about a dozen trace metals, and the river water with respect to 22 trace metals.

      It is not that we favor the river water in our work, but it is a lot easier to work with
a non-saline matrix. So, economically, we can afford to certify for more metals. Working
in a 3.5  percent saline  solution  for  analytical  chemistry at concentrations which are
extremely low is very challenging.

      This is a coastal seawater, not quite as low concentrations as an open ocean water,
but you see the concentrations that we are looking  at. Most are fractional and low fractional
parts per billion. In many cases, those  numbers are cleaner than the water quality in many,
many  laboratories in North America.  So,  the  blank problem becomes a very, very
significant part of the  analytical  procedure.

                                       286

-------
      All of these, of course, must be  separated from  the  salt matrix.  There  is no
instrument available today that you can shove seawater into at one end and get the answers
out the other end without both a separation from the saline matrix and a concentration
procedure.

      With respect to sediments, we have three of  them taken  at various parts of the
Canadian coastline, an east coast sediment, an Arctic Ocean sediment, and a west coast
sediment.

      The  west coast sediment is very special, because it comes  from a Canadian naval
dockyard.  It is contaminated to all hell, and it is a very practical one for those working in
very contaminated harbors, and it has a special feature which I will tell  you about in a
moment.

      Oddly enough, the geological matrix of the Arctic Ocean  sediment and the east coast
sediment are barely distinguishable.  Maybe it is not oddly enough. It is just the Canadian
shield being deposited by two river systems, the Mackenzie in the north and the St.
Lawrence in the southeast. It is probably just the same ground rock being taken out to sea.

      The  mucky mess you start with, and I am sure some  of you are familiar with this,
after freeze drying, the bits of plastic and the sticks that have to be taken out, and then it
is ground a bit and screened through 100  mesh and then homogenized and bottled and
certified.

      The three sediments, anywhere from 16 to 20 trace metals certified  in each.  As the
time goes on, we tend to be certifying more elements in our sediments.  The last designation
is the batch number, so we are already on our second generation of MESS, and  you see we
have added a few elements to the list.

      The others are the major and minor constituents such as aluminum and silica which
are the main features of these sediments, or phosphate and sulfur and the like.

      Another thing as time goes on, we find people are not interested in the major and
minor constituents as  much as in the trace metals. So, in that MESS-2, you see  there are
only six others. We have dropped some from the list for economic reasons.  We just cannot
afford to do everything all the  time, especially if we feel that people are not using the
numbers.

      A  typical...this  is the Beaufort Sea  sediment, but it is a typical aluminum silicate
concentration. It is fairly clean but not pristine. They are probably the easiest of the types
of materials that we certify to certify because of the relatively high concentrations.  These
are parts  per million.
                                       287

-------
       I have just listed 11  of the 20 certified metals.  These are, I thought, the ones you
might be more interested in than some of the other more obscure ones.

       The sediment from the west coast from the dockyard was a real eye opener, because
it had about 40 ppm tin in it, and because navies like to paint their ships with butyltins and
tributyltin as  an anti-fouling agent, we found there was enough butyltin in  this particular
sediment to certify it for the individual butyltin species.

       This is a unique material.   It was  the  first material ever  certified  for the  three
butyltins,  and it remains unique for the three.  There are not...there  is a Japanese fish
certified for tributyltin and dibutyltin which is available from NIAS in Japan.

       We are trying, as much  as possible, and the future wave is toward the speciation of
metals rather than the total, and we are trying to do all we can to move in  that direction.

       Our third set of materials are biological tissues, a dogfish muscle, a dogfish liver, two
types of lobster, and the difference between the two is the first one has been defatted so it
does not turn into peanut butter or look like peanut butter sitting in your jar.  The second
one is not defatted, but we have developed a process for stabilizing these very fat materials
and  issuing them as a homogeneous slurry which is very close  to what you do in  the
laboratory.

       The last one is our first foray in Ottawa, anyway, into the organic contaminant realm.
The carp sample was just released about three weeks ago, certified for furans and
dioxins and will soon be certified for a  range of PCBs.

       The dogfish liver and the muscle  come from the same fish, essentially, and it is just
a matter of homogenizing, spray drying, defatting... you must get the fat out, or it will  not
keep... and then blending, bottling, and then we radiation-sterilize at the end, or we
cannot ship them. Well, we could keep them for our own purposes, but then you need to have
the certificate for shipping to foreign countries.

       A typical concentration  in the dogfish liver,  because liver is about the only thing that
is monitored  around the U.S. coast.  They are  quite low concentrations.

       These  are parts per billion, and they are, indeed, a challenge. You will see that tin
does not  have a  plus or minus.  So, as far as tin is concerned, this is only a reference
material; it is not a certified reference material.

       We did  not have a big enough data base in order to consider certification of that
material.  The number is probably a fairly good one, but we did not assign a plus or minus
range to it.
                                        288

-------
      The range is the most important part of the number.  You do not even need the
number itself; you just need the range.

       The other thing to look at... we get complaints that our ranges are so narrow that
laboratories cannot seem to get within them, and that should not be what you are looking
at. You should be looking at whether the certified range of the reference material you are
analyzing falls within your own value plus or minus your uncertainty. Look at it from that
point of view and ask whether your plus or minus is too big for your purposes; but if the
certified range is within your plus or minus, your uncertainty is good enough for your
purpose, and you are probably accurate enough to do the job.
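
       A minimal sketch of that check is shown below, assuming the laboratory reports a
value with its own uncertainty and the certificate gives a certified value with its own range;
the function and the numbers are hypothetical and are not part of the NRC program.

    # Sketch of the acceptance check described above: rather than asking whether
    # your result lies inside the producer's narrow certified range, ask whether
    # the certified range lies inside your value +/- your uncertainty.
    # All names and numbers are hypothetical.
    def crm_check(lab_value, lab_unc, cert_value, cert_unc):
        """True if the certified range falls within the lab's own range."""
        lab_lo, lab_hi = lab_value - lab_unc, lab_value + lab_unc
        cert_lo, cert_hi = cert_value - cert_unc, cert_value + cert_unc
        return lab_lo <= cert_lo and cert_hi <= lab_hi

    # Example: certified 1.20 +/- 0.05 ug/L; lab reports 1.15 +/- 0.15 ug/L.
    print(crm_check(1.15, 0.15, 1.20, 0.05))      # True: fit for purpose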

      Most people tend to look at it the other way on and worry about the fact that they
cannot get within the uncertainty range of the producers.

      We ran into a  problem, and it is  a  philosophical  problem. We produce these
materials.  We defat them. We get a nice white  powder, and they dissolve  up easily,
because we have defatted them and everything, but that is not a real sample, and it bothers
us.

      The question is, how do we get the fish really into the bottle?  It  is a perplexing
problem. Of course, we  cannot really do that, because we cannot find a homogeneous
population of fish, or we would try it.

      So, we developed  a methodology with the people  at the  Technical University of
Nova Scotia where we homogenized the material, added  a bit  of antioxidant,  again
homogenizing but diluting it with a bit of water because that  is necessary for the process,
and then emulsification in the same kind of machine that is used to produce homogenized
milk or chocolate milk or ice cream so that you have a true emulsion that lasts forever, then
sealing, autoclaving to kill the enzymes that promote rancidity, and then packaging.

      It works, and we have produced a beautiful material which you see in the ampule,
and then it can be removed from the ampule, transferred to a volumetric flask  where the
suspension will stay homogeneous for as long as you desire.  You can pipette it.

      There is only one problem with  it.  Nobody buys it. It is something different. It is
a second generation reference  material, as we call it. We got great accolades from our
colleagues and international producers, but nobody  seems to want to use  it.

      That is one of the problems, how to transfer it to the client market. We do not know
how to overcome that one. Maybe we are not good enough salesmen.

      All our biological  materials  are certified  with respect to  methyl  mercury, again,
because we think that this is the way to go, and, again, we are unique in this business.

                                       289

-------
Some sediments are beginning to appear from  Europe certified with respect  to methyl
mercury.

      You can see in some cases in dogfish muscle that the methyl mercury is almost the
total mercury.  In other cases, it is only one third or one half.

      This is not trace metals, but I  am so proud of my guys for having been able to do
this, because these concentrations, these are ng/kg. That is about a thousand-fold less than
the PCB concentrations you may be working with.

       When you consider that they started from a 9-gram sample and they were following
less than half a total  nanogram of dioxins and furans through a horrendous clean-up and
separation procedure and could come out with plus or minuses as low as that in order to
certify it, along with the collaborating five or six Canadian labs that cooperated with us in
this certification, it was, indeed, a feat that we are rather proud of.

      I  am just going to... I will not  spend any time with this. I am just going to end up
with two slides which demonstrate what we think is the ability to comply with the new
MDLs which are being foisted upon you by EPA.

      The required MDL list may not be entirely accurate, because I understand there was
a revision to it recently following the list I had, but you see, the required MDLs for the six
metals on the left are in blue, and pneumatic nebulization by ICP/MS is quite able to meet
them for those six. Ultrasonic nebulization is even better but probably not necessary, and
on-line preconcentration certainly does  it also.

      Why, you might ask, do we need an on-line concentration for these materials?  Well,
even the pneumatic and ultrasonic nebulization requires a separation and a preconcentration
of the materials.  The  on-line concentration, if you do it directly into your ICP/MS, takes you
out of the clean room. It gets you away from that horrendous cost that has been scaring you
all morning.

      We do that on-line preconcentration on the 5 ml sample in the instrument room, and
I thought I would like to impart that  information to you.

      Now, there are other metals on  the list, and  here we  have some  problems.
Obviously, antimony, we can meet that just  by pneumatic nebulization into ICP/MS. The
arsenic... the red numbers mean we are getting close but not quite there yet. The yellow, we
are not there.  The white, we are there.

       When I last saw the arsenic number, it was 0.2 ng/L. We can use hydride
generation going directly into the ICP/MS. Now, no preconcentration is necessary here.
We can get down to 0.3 ng/L.  So, we are getting awfully close.
                                       290

-------
      We are getting close with mercury, and we are probably there already with selenium
and silver even with pneumatic nebulization, because these required MDLs are a bit hazy
and hairy on their own.  Certainly, with hydride generation, we are there for selenium.

      That is the talk there, Mr. Chairman.  I think I  have overstepped my time. Thank
you.
                       QUESTION AND ANSWER SESSION


                                     DR. FIELDING: Are there any questions? Before
you ask the questions, may I remind you we do have microphones, and you should try to
speak directly into the microphone and also give your name and affiliation before you start
asking questions.

      Are there any questions?

                                     MS. BURGESSER:  My name is Lisa Burgesser.   I
am with Environmental Research Associates. I was just wondering why you do not believe
in spiked samples.

                                     MR. BERMAN:  We are never sure that a spiked
sample behaves the same way in the analytical process as a sample that has been tied up
by Mother Nature, in God knows what speciation or combination. So, while you
might be able to easily extract 100 percent of your spike, you may not be able to extract
100 percent of the analyte of interest.

      There is no way of your getting it in there  in the right form.

                                     MS. ASHCRAFT: What is the price range  of the...

                                     DR.  FIELDING:  Please give your  name  and
affiliation.

                                     MS. ASHCRAFT:  Merrill Ashcraft, Navy Public
Works Center. What are the price ranges of these standard reference materials?

                                     MR. BERMAN: They range somewhere from $140
to $165 Canadian  dollars for substantially  sized  bottles.  The waters are 2-liter  bottles.
Multiply by 0.75 to get it into real money, and you  will see that you get a real bargain.

                                     MS. ASHCRAFT: Are you going to be leaving a
card or something or your full address so we can contact your agency to purchase these?

                                      291

-------
                                     MR. BERMAN:  Our full address is in the list of
attendees, but if you want, I will give you a card.

                                     DR.  FIELDING:  Are there any other questions?
(No response.)

                                     DR.  FIELDING:  Thank you, Shier.
(Slides for this presentation were not available at the time of publication.)
                                       292

-------
                                     DR. FIELDING:  Our next speaker is Dr. Diane
Blake who will present a paper entitled Enzyme Immunoassay to  Determine Heavy Metals
using Antibodies to Specific Metal-EDTA Complexes.

       Diane is an Associate Professor in the Departments of Ophthalmology and
Biochemistry at Tulane University School of Medicine.  She has worked as an assistant
professor at the Department of Biochemistry at Meharry Medical College, a research scientist
at Ames  Division of Miles  Laboratory,  and a lecturer for the Department of Biological
Chemistry at the University  of Michigan.

      She received her doctorate degree in biochemistry from the University of Michigan
and her B.S. degree in biochemistry from Ohio State.

      Diane?
             ENZYME IMMUNOASSAY TO DETERMINE HEAVY METALS
            USING ANTIBODIES TO SPECIFIC METAL-EDTA COMPLEXES
                                     DR. BLAKE:   This is  going to be  somewhat
different from what you have heard so far.

      I would like to thank Dale Rushneck and the other organizers for inviting me here
today, because what I want to describe to you is really an emerging technology that is not
yet out of the laboratory.  I was quite excited to be invited here so I could talk to the
ultimate end-users of this  technology and get some feedback about how we  should be
directing our experiments.

      Today, I would like to tell you about an assay we are developing in our laboratory
that will permit  rapid,  inexpensive, on-site analysis of specific heavy metal  ions  in
wastewater effluents and ambient water samples.

      This assay  uses antibodies and applies a technology called  ELISA technology.  I
would like to spend the first couple of minutes just reviewing the general features of ELISA
technology and then go on to  explain how we have adapted this technology for the assay
of heavy metals.

      ELISA technology requires an  antibody that binds tightly and specifically to the
material that you want to analyze, and from now on, I will  call  that material an antigen.
The antigen, which I have designated as the little triangle on top of the circle in Figure 1,
is immobilized on the bottom of a plastic plate.
                                       293

-------
      The antibody (inverted Y in Figure 1) is linked covalently to an enzyme (shown in
Figure 1 as an arrow) that provides the signal for the assay.  That signal is usually a color.

       When you add the antibody to this plastic plate that has the antigen immobilized on
the bottom surface, the antibody binds, and it binds tightly enough to remain bound through
several washes of the plate.

      When  you add a substrate to the solution for the enzyme, color will develop, and
the more antibody you  have bound to the plate, the more color that will develop. So, that
is your assay.  You measure color formation.

      We are using a variation of this particular ELISA technology called antigen inhibition
ELISA, and which is shown in the second part of Figure 2. In this variation,  in addition to
the immobilized  antigen on the plate, you also have soluble antigen in the solution, and the
antibody has a choice of binding either to the immobilized antigen or to the soluble antigen.

      When  the antibody binds to the soluble antigen, it will be removed during the wash
procedures.  So,  as you add more and more soluble antigen  in your solution, you generate
less and less color in the assay.

      In antigen inhibition ELISA's,  if you keep the antibody concentration constant and
increase the concentration of soluble antigen, you generate inhibition curves where the
color decreases in proportion to the  concentration of soluble antigen in the solution. You
can make standard curves and read  your unknowns off these standard inhibition curves.
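
       As a sketch of how such a standard inhibition curve might be fit and then read, the
code below uses a four-parameter logistic model, a common choice for ELISA calibration; it
is offered only as an illustration, not as the data reduction actually used in this work, and
the standard concentrations and absorbances are hypothetical.

    # Sketch only: fit a four-parameter logistic (4PL) inhibition curve to
    # hypothetical standards, then invert it to read an unknown.
    import numpy as np
    from scipy.optimize import curve_fit

    def four_pl(conc, top, bottom, ic50, slope):
        """Color signal as a function of soluble-antigen concentration."""
        return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

    std_conc = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 1000.0])  # ppb, hypothetical
    std_abs = np.array([1.95, 1.90, 1.60, 0.95, 0.40, 0.15])    # absorbance

    (top, bottom, ic50, slope), _ = curve_fit(
        four_pl, std_conc, std_abs, p0=[2.0, 0.1, 5.0, 1.0], maxfev=10000)

    def conc_from_abs(a):
        """Invert the fitted curve to estimate concentration from color."""
        return ic50 * ((top - bottom) / (a - bottom) - 1.0) ** (1.0 / slope)

    print(f"A = 1.2 corresponds to about {conc_from_abs(1.2):.2f} ppb")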

      Of course, these assays are only as good as your antibody.  Our first problem in
developing this technology was obtaining an antibody that would recognize  heavy metals.

      Heavy  metals,  as you probably know,  are  not  in themselves  antigenic, but we
discovered that if you bind them to a chelator like EDTA and then covalently link that metal
chelate complex to a carrier protein and inject it into a mouse, the complex is antigenic,
and it will elicit very specific antibody responses.

      This was  first done in the mid 1980s by a researcher in California named  Claude
Meares who first generated an antibody that was specific to indium-EDTA.

      I would first like to tell you about some of our model system studies with the indium-
EDTA antibody that we got from Dr. Meares; and these data have recently been published
in Analytical  Biochemistry.  Then I will go on to talk about a new anti-cadmium antibody
that we have generated in our own  laboratory,  and tell you some of the characteristics of
that antibody as  well.

      The first problem in developing  an ELISA, once you have an antibody, is to make
sure that your antigen can be immobilized to the bottom of the  plate.  Free metal chelates


                                       294

-------
will not stick to the bottom  of ELISA plates, so we solved  that problem  by covalently
conjugating the metal chelates to a protein. We used a carrier protein called bovine serum
albumin which  is abbreviated as BSA in  some of my figures.

      Once the  metal  chelate has been covalently attached to the protein, then the
conjugate sticks very nicely to commercially available ELISA plates. All of our ELISA plates
have been pre-washed in 3 molar HCl to make them metal-free.

       One of the first studies we did, shown in Figure 2, was to ask how much of this
indium-EDTA complex can be immobilized onto our BSA carrier molecule. Figure 2 shows
how different degrees of substitution of the BSA carrier protein affect color development
in the assay.

      We have plotted color on the y-axis, and reciprocal antibody dilution on the x-axis.
We started at a 1:1000 dilution of the antibody and we went up to  1:64,000 dilution.

      You can see that as we increased the extent of conjugation on the carrier protein, we
increased  the color the  assay produced  until we got up to a conjugate  with 31  percent
substitution.  At  53 percent substitution, the color formation starts to decrease.   We
interpreted this result to be due to steric hindrance to antibody binding at very high levels
of substitution.

      On the basis of these data, we used the 31 percent substituted material for the rest
of the studies I am going to tell you about today.

      So, we had a standard assay that  could detect indium-EDTA chelate  bound to the
bottom of the plate, and we could get adequate color formation in our assay.

       The next thing we wanted to do was to add soluble antigen and see if we could
generate standard  inhibition curves. The first standard curves we generated with the indium-
EDTA are shown  in Figure 3.  Indium-EDTA was the soluble antigen we used to inhibit
color formation and increasing indium, shown as ppm on the x-axis, decreased color in the
assay.

      You can  see that  in this particular assay, if we went up to 320 ppm of indium, we
got complete inhibition of color formation, that is, no antibody binding to the well.  This
particular assay detects indium down to about 0.075 ppm. The sensitivity of this assay,
which we defined as two standard deviations above the minimal detectable concentration,
was 0.6 ppm.

      This is a very simple assay. You dilute your sample into 5 mM EDTA, preincubate
the sample with the antibody, add the mixture to precoated ELISA plates, develop the color,
and read the indium concentration from  the standard inhibition curve.
                                       295

-------
      Now, this is a nice assay, but it is really not very sensitive, because it only detects
indium down to 0.6 ppm.  So, we next looked for ways that we could easily increase the
sensitivity of this assay.

      One of the best and cleanest ways to increase the sensitivity of an immunoassay is
to obtain an antibody with a very high affinity for its ligand.

      Well, we knew that this particular antibody had not been made just to indium-EDTA.
It had been made to a conjugate which consisted of indium-EDTA (shown as a little spider
web in Figure 4) up there, linked to a benzo group that was subsequently linked to a carrier
protein, in this case, keyhole limpet hemocyanin. This three-part conjugate (shown at the
top of Figure 4) is what we injected  into mice to get an immune response.

      So, we reasoned that indium-EDTA may be a very poor binder, but if we started
adding more of what was actually used as the antigen, we would increase the affinity of our
antibody for the antigen and, hence, make a more sensitive immunoassay.

      So we next prepared inhibition  curves  using indium-(p-nitrobenzyl)-EDTA as the
soluble antigen, as shown in Figure 5. You can see from the figure that we get about a five-
fold increase in sensitivity  in this assay  when we use indium-(p-nitrobenzyl)-EDTA as the
soluble antigen. Indium at a concentration of 120 ppm completely inhibited color formation
and the limit of sensitivity in this particular assay was 0.1 ppm, so we were getting our
limits of sensitivity into the high ppb range.

      To make the assay even more sensitive, we added the entire indium-(p-nitrobenzyl)-
EDTA-protein conjugate as the soluble antigen.  As shown in Figure 6, we now have a very
sensitive  assay. The sensitivity of this assay goes from 2000 ppb down to about 0.001 ppb.
The limit of the sensitivity  in this assay format was 0.005 ppb.

      These data demonstrate that we could  develop a very sensitive immunoassay for
indium.  We next wanted to study the specificity of the assay.  In Figure 7, we show the
specificity of the assay for indium.  Color formation is plotted on  the y-axis and metal
concentration (plotted on a log scale) is shown on the x-axis.  You can see that this assay
is about 100 times more sensitive for indium (shown as  the circles in  Figure 7) than it is for
either copper-EDTA (triangles) or manganese-EDTA (squares). These two metals were tested
because they were present in the growth media for a bacterium that we wanted to use in
our field  test of the indium ELISA.

      Indium is a rare metal present at very low concentrations in the environment and we
felt that using water samples or soil extracts to field test the assay would give only
negative results. We did, however, happen to have a bacterial  culture in the laboratory that
metabolized indium  arsenide and solubilized  the indium during its growth cycle.  We
therefore decided to test for indium solubilization during bacterial growth as a field test for
our new  immunoassay.  Figure 8 shows some of our solubilization data.


                                       296

-------
      On the x-axis we have plotted day of incubation of this particular bacteria with
insoluble indium arsenide.  On the y-axis we show the amount of indium solubilized  by
bacteria growth, assayed by both ELISA and atomic absorption spectroscopy.  As you can
see, the values for  indium were very similar by both methods of analysis.

      If you plot the AA data versus the ELISA data, you get a pretty good correlation.  If
it were perfect correlation, you would get a slope of 1 and an intercept through zero. Our
slope was 0.9, and we had a non-zero intercept.
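
       The comparison itself is just a straight-line fit of one method against the other; the
sketch below shows the idea with hypothetical paired results, where perfect agreement would
give a slope of 1 and an intercept of zero.

    # Sketch of the method-comparison regression described above, using
    # hypothetical paired results rather than the study's data.
    import numpy as np

    aa_ppm = np.array([0.5, 1.2, 2.5, 4.0, 6.3])     # atomic absorption results
    elisa_ppm = np.array([0.6, 1.3, 2.4, 3.8, 5.9])  # ELISA results, same samples

    slope, intercept = np.polyfit(aa_ppm, elisa_ppm, 1)
    r = np.corrcoef(aa_ppm, elisa_ppm)[0, 1]
    print(f"slope = {slope:.2f}, intercept = {intercept:.2f}, r = {r:.3f}")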

      Of course, we like to think that the AA data was  wrong, because bacterial solutions
had particulates in them. Both indium arsenide, the substrate, and indium arsenate, the
product of bacterial metabolism, are relatively insoluble, and although we had filtered our
solutions through a 0.45 micron filter, we could not be sure that we had gotten out all the
particulates.

      The ELISA procedure only assays soluble, dissolvable metals or metals that can  be
solubilized with 5  mM EDTA, whereas the atomic  absorption method would look at total
metals, including any particulates. So, we think that the slight difference that we saw in the
two methods might be due to particulates in our bacterial solutions.

       Now, as I said, these data have been recently published in Analytical Biochemistry,
and I will be happy to make reprints available to anyone who is interested.

      While we were doing these model system studies, however, we were also  in the
process  of making a monoclonal antibody to a metal that we knew was a priority pollutant.
We chose cadmium, because it has such a high  renal toxicity.

      So, the procedure for antibody production was very similar to that used for indium.
We covalently conjugated the EDTA to a  carrier protein, loaded  it with  cadmium, and
injected it  into mice.   Then  we screened the clones to get a  hybridoma that made a
cadmium-specific  monoclonal antibody.

      In Figure 10, I  show an antigen inhibition  curve similar to those generated for
indium, but now we are measuring a priority pollutant.  This curve was generated using a
cadmium-EDTA complex as the inhibiting antigen.

      It appears that the cadmium-specific monoclonal antibody gave better sensitivity than
the indium antibody.  We are down in the  ppb range, and this particular assay goes from
about 6000 down to 0.7 ppb.

      We  hoped to increase the sensitivity of the assay by changing the nature of the
soluble antigen, and in Figure 11 we have a p-nitrobenzyl EDTA derivative of cadmium as
the soluble antigen. We did not get the increase in sensitivity that we observed with the
indium antibody.  Again, the limit of sensitivity was about 0.7 ppb, but the shape of the

                                       297

-------
curve had changed, and I  will discuss this curve in relation to assay  sensitivity when I
present Table 1.

      When we added the cadmium-EDTA-protein  conjugate as the soluble antigen, the
sensitivity of the assay was  in the parts per trillion range, as shown in Figure 12.  This was
done in the laboratory  with atomic absorption grade reagents, so I am not making any
claims to how it is going to work in a field test, but the cadmium ELISA looks to be a very,
very sensitive test.

      Using cadmium-EDTA-protein as the soluble  inhibiting antigen, we could detect
cadmium from about 26 to  330 ppt in a very simple immunoassay format.

      When we began  generating  our anti-cadmium antibodies we did  a literature review
on  cadmium and  where  it occurs,  and we  discovered that it  often occurred as  a
contaminant in zinc ores and also as a contaminant in metal plating plants.

      So, when we screened hybridomas for this antibody production, we wanted to make
very sure that we did not get any cross-reactivity with zinc, because we thought the end
users might be very interested in distinguishing zinc versus cadmium.

      In Figure 13, I show metal ion specificity of our anti-cadmium antibody. This assay
is two to three orders of magnitude more sensitive for cadmium than it is for either zinc or
mercury-EDTA complexes.

      The final table (Table 1) shows a quick summary of the data we have obtained so far
with the anti-indium and the anti-cadmium antibodies in this antigen  inhibition ELISA. The
anti-indium antibody that we got from Claude Meares is actually inhibited by free indium
ions to some extent, but since we run our tests in 5 mM EDTA, that is really not a problem
for the assay as presently formatted.

      With indium-EDTA as the soluble antigen, the concentration required for 50 percent
inhibition of color  formation is around  20 ppm.  If you  use p-nitrobenzyl EDTA as the
soluble  antigen, 6.8 ppm is required for 50%  inhibition, and if you  add  the  complete
indium-EDTA-protein conjugate, only 0.6 ppb is required.

      Now, the anti-cadmium antibody which  was made  specifically  for metal  ion
immunoassays provides a more sensitive ELISA. The antibody does not react at all with free
cadmium. With cadmium-EDTA as the soluble antigen, we get a 50 percent inhibition at
about 83 ppb.  When we used cadmium-(p-nitrobenzyl)-EDTA, only  15 ppb were required
to inhibit color formation by 50 percent.  This is a 5-fold  increase in sensitivity  when we
changed the structure of the soluble antigen, just as we saw with the anti-indium  antibody.
The  limit of detection  was not lower  in this assay because when these assays were
assembled,  the antibody concentration exceeded the cadmium concentration. That is the
                                       298

-------
reason that we got a curved inhibition plot when cadmium-(p-nitrobenzyl)-EDTA was used
as the soluble antigen.

       Finally, when we used the cadmium-EDTA-BSA, we are down in the parts per
billion range here as well.

      Now, I am at the point where I would like to bring this  assay out of the lab and
actually start looking at real samples,  especially with the cadmium antibody. One of the
reasons  I was happy to be asked to speak here today was to see whether I could generate
some interest in people sending me some samples,  so I could test my new cadmium assay
and compare it with ICP or AA assays that are already available.

      We do not envision this test as replacing ICP/MS or AA, but we envision this as being
a useful  adjunct to those procedures that would allow you to do on-site analyses and get a
general  idea of how much metal contamination there is at  a site.

      It is a very simple assay to perform.  It is very portable. The test is performed in a
little plastic plate about 3 by 4 inches, and with an inexpensive plate reader. And it
is very quick.  It only takes a couple of hours to do these assays.

      At this point, I would really appreciate feedback from people who might be final
users of the technology about what we should do next to make this assay useful to you, the
end-users.

      I  would like to stop here. I would like to thank my collaborators, Drs. Chakrabarti,
Hatcher, and Blake who participated in this study and also thank the EPA who supported
this research.

      Thank you.
                                       299

-------
                       QUESTION AND ANSWER SESSION
                                     DR. FIELDING:  Are there any questions?

                                     MR. PLOSCYCA:  I  am Jim Ploscyca  with  IEA
Laboratories. What matrices do you consider this to be applicable to in the future? We are
talking, I guess, water at this point?

                                     DR. BLAKE:   Well,  yes.   We are thinking of
wastewater. The assay presently detects soluble metals or any metals that can be solubilized
with 5 mM EDTA.  You dilute your sample  into 5 mM EDTA.

      So,  I guess I need some help from you folks, about what matrices I should be testing.

      All of the inhibition curves I showed today were done in ultra-purified water with AA
standards.  That is what we used to generate these standard curves.

                                     MR. PLOSCYCA: Just so I make sure I understand
this, you need one of these for every particular metal you are testing for. Correct?

                                     DR. BLAKE:   Yes.  What metals do you think
would be  most applicable?  What  metals  do  you need quick and fast that go in  the
concentration ranges that I am able to detect?

                                     MR. PLOSCYCA:  Well, what I mean is one test
would test for, say, chromium, cadmium.

                                     DR. BLAKE: Yes.  It is not like an ICP where you
can get the whole spectrum.  You get one metal at a time.

                                     MR. PLOSCYCA:  Thank you.

                                     MR.  KIMBROUGH:   My  name  is  David
Kimbrough, and I  am with the California Environmental Protection Agency.

      I had a question for you.  I did not quite catch on the earlier slide, how much
selectivity do you have between these different metals?  Will they interfere with each other?
I did not quite get that off your slide  there.

                                     DR. BLAKE:  Well, we can  go back. It depends
on the antibody you are looking at. This is the one we did for cadmium, and this is just a
preliminary one, because if you put too many metals on one slide, it gets really complicated
                                      300

-------
to look at, but in this we have about a thousand-fold selectivity of cadmium over zinc or
mercury.

       So, at a concentration of 1 ppm here, we are down to 30 percent of total color
formation with 1 ppm of cadmium, and we have to go up to 100 ppm of either zinc or
mercury.

                                     MR. KIMBROUGH:  Those other metals do give
you color formation, though?

                                     DR. BLAKE:  Pardon me?

                                     MR. KIMBROUGH:  These other metals do give
you color formation?

                                     DR. BLAKE: They do inhibit color formation, but at
100- to 1000-fold higher concentrations than cadmium.
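
       The selectivity described here can be expressed as a cross-reactivity ratio of the
concentrations giving 50 percent inhibition (IC50).  A minimal sketch in Python, using
illustrative IC50 values consistent with the curves in Figure 13 rather than the speaker's
exact numbers:

    # Cross-reactivity relative to cadmium, as a ratio of 50% inhibition
    # concentrations.  Values below are illustrative, not measured.
    ic50_ppm = {"Cd": 1.0, "Zn": 100.0, "Hg": 100.0}
    for metal, ic50 in ic50_ppm.items():
        cross_reactivity = 100.0 * ic50_ppm["Cd"] / ic50
        print(f"{metal}: {cross_reactivity:.1f}% cross-reactivity vs. Cd")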

                                     MR. KIMBROUGH:  Have you considered using
different chelating agents instead of EDTA which might give you  more specificity  and
selectivity?

                                     DR. BLAKE: Yes, we have something going right
now on this idea.  One of the problems with EDTA is it does not bind some metals tightly
enough to give you a stable complex that survives when you inject it in the animals.  EDTA
is a first-generation chelator, and there are second-generation chelators that are more highly
specific for certain metals.  We are thinking about making monoclonals against those second-
generation chelators as well, which will give us higher specificity.

      Yes?

                                     MS. ASHCRAFT:  Merrill Ashcraft with the Navy
Public Works Center.   Just a word of caution to you.  You will not be able to look at
solubilized and dissolved as being equal if you use something like EDTA, because once you
introduce that chelating agent into the water, it will immediately solubilize some of the metals
that may be adhering to particulates and that would not be seen by a regular dissolved metals analysis.
You need to look at that.

                                     DR. BLAKE:  Okay, that is the sort of feedback  I
need when I am starting field tests.  Thank you.

      Yes?
                                      301

-------
                                    MR. HUNT: Carlton Hunt with Battelle. To follow
that up, I think what you are going to find is EDTA is going to compete with natural organic
ligands to release metals.

                                    DR. BLAKE:  So, we are going to get something
more like total metals?

                                    MR. HUNT: Right, so I do not know if you are
measuring this against the free metal, the Cd+2?

                                    DR. BLAKE: Yes.

                                    MR.  HUNT:   In  a natural  environment,  in
wastewaters particularly, you are going to have an awful  lot of competing reactions for that
EDTA, and you are going to have to take that into account at some point. That is just in the
dissolved phase let alone the particulate issue.

                                    DR. BLAKE: Now, if we have 5 mM EDTA in our
reaction, the association constant of cadmium for EDTA is up in the 10^24 range.

                                    MR. HUNT: You are probably going to pull it out,
but I think you are going to  have to look  at that fairly carefully to  see if you are getting,
quote, an absolute number. You are going to have to look at whether or not you are getting
20 percent, 80 percent, 90 percent out as sample.

                                    DR. BLAKE: Since these assays are not inhibited
by EDTA, we can even increase the concentration of EDTA if we need to.

                                    MR. HUNT:  Well, when you move  to natural
waters, I think you are going to have to do a sequence that looks at that issue.

                                    DR. BLAKE: Yes.  We  have not done anything
with natural waters yet.

                                    MR. HUNT: Thank you.

                                    DR. FIELDING:  Are there any other questions?
(No response.)
                                    DR. FIELDING:  Thank you.
                                      302

-------
Figure 1.  Antigen inhibition ELISA principle:  with no soluble antigen there is no inhibition
of color formation; with soluble antigen, inhibition of color formation is proportional to the
concentration of soluble antigen.
                                  303

-------
Figure 2.  Conjugate Substitution Influences Antibody Binding.  Absorbance versus
reciprocal antibody dilution (x 10^-3) for conjugates with 2.5%, 15%, 31%, and 53%
substitution.
                  304

-------
Figure 3.  Antigen Inhibition ELISA, In-EDTA.  Inhibition curve versus indium
concentration (ppm).
                  305

-------
Figure 4.  Ligand Affinity Decreases when Ligand Occupies Less of the Binding Site.
           306

-------
Figure 5.  Antigen Inhibition ELISA, In-(p-nitrobenzyl)-EDTA.  Inhibition curve versus
indium concentration (ppm).
                    307

-------
Figure 6.  Antigen Inhibition ELISA, In-EDTA-BSA.  Inhibition curve versus indium
concentration (ppb).
                308

-------
Figure 7.  Inhibition by In, Cu, or Mn.  Absorbance versus metal concentration (ppm) for
In-EDTA, Cu-EDTA, and Mn-EDTA.
                       309

-------
Figure 8.  Bacterial Solubilization of Indium Arsenide.  Indium (ppm) by AAS and by
ELISA versus days of incubation.
                        310

-------
Figure 9.  Comparison of Data from AAS and ELISA.  Indium (ppm) by AAS versus
indium (ppm) by ELISA.

-------
Figure 10.  Antigen Inhibition ELISA, Cd-EDTA.  Percent of control versus cadmium
concentration (5000 to 1.21 ppb).
                 312

-------
Figure 11.  Antigen Inhibition ELISA, Cd-(p-nitrobenzyl)-EDTA.  Percent of control
versus cadmium concentration (1125 to 1.1 ppb).
                   313

-------
Figure 12.  Antigen Inhibition ELISA, Cd-BSA-EDTA.  Percent of control versus
cadmium concentration (3320 to 12.96 ppt).
                314

-------
Figure 13.  Inhibition by Cd, Zn, or Hg.  Relative absorbance versus metal concentration
(0.1 to 1000 ppm) for Cd-EDTA, Zn-EDTA, and Hg-EDTA.
                    315

-------
   Table I
   Comparison of Anti-Indium and Anti-Cadmium
       Antibodies in Antigen Inhibition ELISA

   Antibody          Soluble Antigen       Concentration Required
                                           for 50% Inhibition (ppm)

   Anti-Indium       In                    > 60
   (CHA255)          In-EDTA                 20
                     In-(p-NBZ)-EDTA          6.8
                     In-EDTA-BSA              0.0006

   Anti-Cadmium      Cd                    > 500
   (2A81G5)          Cd-EDTA                  0.083
                     Cd-(p-NBZ)-EDTA          0.015
                     Cd-EDTA-BSA              0.00018
                          316

-------
                                      DR. FIELDING:  Our last paper this morning will
be presented by Billy Potter, entitled the Determination  of Total Mercury for the Water
Quality Based Approach.

       Billy  is a Research  Chemist in the Inorganic Chemistry  Branch of the Chemistry
Research Division of EPA's Environmental Monitoring Systems Laboratory in Cincinnati.

       He is responsible for the development of new methods  and the improvement of
existing methods for the analysis of parameters required by the Clean Water Act. This
involves the renovation of existing methods to meet regulatory demand for the reduction of
laboratory waste, lowering method detection limits, and increasing  method reliability.

       He received a B.S. degree in Chemistry and a B.A. degree in English from Central
State College.

       Billy?
                                       317

-------
(Blank Page)
    318

-------
(SLIDE 1)

TITLE:     Determination of Total Mercury for the Water Quality Based Approach.

SPEAKER:  B.B. Potter
AUTHORS:   Billy B. Potter, Inorganic Chemistry Branch, Chemistry Research
Division, Environmental Monitoring Systems Laboratory, U.S.
Environmental Protection Agency, Cincinnati, Ohio.

Winslow J. Bashe, Miguel D. Castellanos, Stephen E. Long and Jane A.
Doster, Technology Applications, Inc., Cincinnati, Ohio.

(SLIDES 2-3)

ABSTRACT:

The U.S. Environmental Protection Agency (USEPA) has developed the
proposed EPA Mercury Method 245.7 for the determination of total
mercury found in water, wastewater and sediment at the part per trillion
(ppt) level.  The total mercury method has an estimated method
detection limit (MDL) of 5 ppt to 20 ppt of mercury. The MDL is made
possible by digesting the sample using bromide/bromate reagent
followed by detection of elemental mercury by cold vapor atomic
fluorescence spectrometry at 253.7 nm.  Method 245.7 may be used for
the monitoring of NPDES permits established by Water Quality Based
Effluent Limits.
                                      319

-------
INTRODUCTION

     METHOD 245.7, DETERMINATION OF MERCURY BY AUTOMATED COLD
VAPOR, ATOMIC FLUORESCENCE SPECTROMETRY is written in the Environmental
Monitoring Management Council (EMMC) method format. The EMMC format consists of
the following sections:

(SLIDES 4-5)

1.0  SCOPE AND APPLICATION
2.0  SUMMARY OF METHOD
3.0  DEFINITIONS
4.0  INTERFERENCES
5.0  SAFETY
6.0  APPARATUS, EQUIPMENT, LABORATORY AND CLEANING REQUIREMENTS
7.0  REAGENTS AND CONSUMABLE MATERIALS
8.0  SAMPLE COLLECTION, PRESERVATION, AND STORAGE
9.0  QUALITY  CONTROL
10.0 CALIBRATION AND STANDARDIZATION
11.0 PROCEDURE
12.0 DATA ANALYSIS AND CALCULATIONS
13.0 METHOD PERFORMANCE
14.0 POLLUTION PREVENTION
15.0 WASTE MANAGEMENT
16.0 REFERENCES
17.0 TABLES, DIAGRAMS, FLOWCHARTS, AND VALIDATION DATA

Each section addresses the details of the method application, procedures and quality
control issues necessary for the proper execution of the method.

     Method 245.7 describes procedures for the determination of mercury (organic +
inorganic), total recoverable or dissolved (filtered, 0.45 µm), in drinking water, surface and
ground water, sea and brackish water,  industrial and domestic wastewaters. The
chemistry of sample digestion is based on the brominating reagent producing bromine
monochloride:

               KBrO3 + 2KBr + 6HCl  →  3BrCl + 3KCl + 3H2O

In the presence of excess bromide ions and acid, the bromine monochloride is converted
to free bromine1:

                      BrCl + excess KBr  →  Br2 + KCl
                                   320

-------
Inorganic mercury compounds are rapidly oxidized by bromine, and organomercury
species are degraded by the oxidizing properties of bromine, releasing mercury(II)2,3 as
follows:

                           HgR2 + Br2  →  RHgBr + RBr

                          RHgBr + Br2  →  HgBr2 + RBr

The excess Br2 reacts to oxidize mercury, forming a complex.  After the oxidation
reactions are complete, the excess bromine is removed by the addition of hydroxylamine
hydrochloride.

     Elemental mercury vapor is generated from the digested sample by reduction with
stannous (tin II) chloride in the presence of hydrochloric acid.4  High purity argon gas is
used to purge the mercury vapor from a gas/liquid separator, driving the equilibrium to
the right as follows:

                    Sn+2 + Hg+2  →  (Ar↑)  Hg0↑  +  Sn+4

The excess Sn+2/HCl solution is discharged to a waste container, and the mercury
vapor is carried by the argon flow to either a mercury concentrator/detector or directly
to the detector.  The liquid containing spent reagents and sample is flushed
continuously from the gas/liquid separator to a waste container.  This waste contains tin
and hydrochloric acid and does not contain mercury.  The elemental mercury vapor is
then  purged from solution by a carrier stream of argon through a semi-permeable dryer
tube5 that removes water vapor.  The vapor passes directly to the detector and is
measured  as a change in the rise (height) from the baseline.  The mercury vapor
concentration is determined by atomic fluorescence spectrometry at 253.7 nm.6,7,8

     The  key to the successful analysis  of mercury at the method detection limits (MDL)
is the control  of mercury contamination of reagents and  samples from laboratory sources.
Control of contamination sources requires that the method procedures should be
conducted in  an ultra clean laboratory environment.  Many laboratories have source
contamination from common  mercury reagents  such as the  reagents used in the Kjeldahl
method and chemical oxygen demand method. The mercury vapors generated by these
methods can permeate the air ventilation systems of the laboratory.  Method 245.7
requires that equipment, reagents and samples be isolated from laboratory facilities that
may  have mercury  contamination.
                                       321

-------
(SLIDE, Photographs, 6-7 not shown)

     Some laboratories have designed soft wall clean rooms using air filtration systems
meeting Federal Standard 209d:  Class 10,000 filtration with activated carbon filtration
for instrument and changing areas, with Class 100 zones.  Other laboratories use hard
wall, Class 100, "metal free" clean rooms with forced air activated carbon filtration.

     The Environmental Monitoring Systems Laboratory, Cincinnati, Ohio, evaluated
Method 245.7 using a metal free glove box/dry box purged by argon gas with activated
carbon filtration.  This approach for the control of mercury contamination from
laboratory sources provides a low cost alternative to clean room storage of reagents and
preparation of samples.

     The Method 245.7 procedure is based on a method used by the Yorkshire Water
Authority (YWA) in the United  Kingdom.9 The Method 245.7 procedure is simplified
and summarized as follows:

(SLIDES 8-9)

(1)   Add  5 mL (1+1) hydrochloric acid and 1  mL 0.1N potassium bromate/potassium
     bromide solution to a 50 mL conical vial.

(2)   Transfer the sample to the conical vial, filling to the 50 mL mark.

(3)   Allow samples to stand for at least  30 minutes before analysis.

(4)   Add 50 µL hydroxylamine hydrochloride solution to each conical vial.

(5)   Turn on the automated instrument/detector and allow it to stabilize.

(6)   The sample enters the gas/liquid separator with SnCl2 to form mercury vapor.

(7)   The vapor is analyzed by cold vapor atomic fluorescence spectrometry.

     Method 245.7 was optimized using a statistically-based experimental design or
chemometric approach as described by Deming  and Morgan (1987).10  The chemometric
experimental approach was applied to this mercury method to speed  the process of
method evaluation.  The chemometric approach is  dynamic (modifiable) and recursive
(experiments may be repeated). During the execution of the experiments  an  evaluation
of each  "phase" of an  experiment is required.  When a modification of  the experiment
was required, it was strongly supported by the statistical evidence.  The experimental
design consisted of the following phases:
                                       322

-------
(SLIDES 10-11)

            Phase 1 -  Familiarization Study.
            Phase 2 -  Automated Instrument Optimization Study.
            Phase 3 -  Automated Instrument Linearity Study.
            Phase 4 -  Mercury Precision and Recovery Study.
            Phase 5 -  Instrument Stability Study.
            Phase 6 -  Initial Interference Study.
            Phase 7 -  Sample Preservation Study.
            Phase 8 -  Single Laboratory Validation Study.
            Phase 9 -  Establish Instrument Control Charts.
            Phase 10 - Establish Clean Laboratory Protocol.

     In the familiarization and optimization phase of the experiments, the mercury
analyzer was optimized for maximum sensitivity and/or signal-to-noise ratio. The use of
Simplex optimization was  investigated using the carrier gas and sheath gas flow rates as
selected variables.  A range for optimized settings was found as described in Table 1.
These settings may be changed periodically to optimize the  instrument.  Small changes,
when they remain within the specified ranges, do not adversely affect the instrument's
performance.  However, one set of settings was selected, and procedures were held constant
for the remaining experiments.

(SLIDES 12-14)
INSTRUMENT CONTROL SETTINGS AND ARGON GAS FLOW SETTINGS

Fluorescence Instrument Parameters         Range of Settings
(PSA Merlin Series AFS)
Delay Time                                 5 to 15 seconds
Rise Time                                  20 to 30 seconds
Analysis Time                              30 seconds
Memory Time                                60 seconds

Argon Gas Control                          Range of Settings
Gas Regulator                              20 to 30 psi
Carrier Flow                               150 to 450 mL/minute
Drier Tube Flow                            2.5 to 3.5 L/minute
Sheath Flow                                150 to 250 mL/minute
                                       323

-------
     The automated instrument is generally configured as shown below.

(SLIDE 15)
                FIGURE 1.  PSA AUTOMATED MERCURY FLUORESCENCE SYSTEM
                (Diagram showing the autosampler, peristaltic pump, argon gas supply,
                fluorescence detector, waste line, and carbon filter vented to the hood.)
     After the experimental design (Phases 1-5) was completed, Method 245.7 was
tested for ruggedness.  Control charts were used to regulate the instrument and method
procedures.  It was necessary to find a sample container that would allow stable
transport and storage of samples containing mercury concentrations near the MDL (1 to
20 ppt).  A plastic container, polyethylene terephthalate copolyester (PETG), was
selected.  This plastic container's performance was measured at 10 ppt-Hg, preserved
in 1% HCl and sealed with Teflon™ tape.  The PETG containers maintained a mean
concentration of 10.6 ± 0.8 ppt-Hg over a period of 72 hours.  This type of container is
recyclable or disposable.
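
     The container check reported above is essentially a hold-time recovery test.  A minimal
sketch of that arithmetic in Python, using hypothetical replicate readings (the 10.6 ± 0.8 ppt
figure above is the study's result; the numbers below are not):

    import statistics

    # Hold-time check for a spiked container: mean, standard deviation, and
    # recovery against the nominal spike.  Hypothetical readings only.
    nominal_ppt = 10.0
    readings_ppt = [10.1, 11.3, 9.9, 10.8, 10.9]   # e.g., readings over 72 hours

    mean = statistics.mean(readings_ppt)
    sd = statistics.stdev(readings_ppt)
    recovery = 100.0 * mean / nominal_ppt
    print(f"{mean:.1f} +/- {sd:.1f} ppt Hg ({recovery:.0f}% recovery)")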

     An important part of the ruggedness testing involves Phase 6, interference testing.
The use of brominating digestion coupled with atomic fluorescence detection  overcomes
many interferences (chloride, sulfide) and molecular absorption interferences11,12 inherent
                                        324

-------
in previous methods.  No interferences were noted for sulfide concentrations below 24
mg/L.13  The only significant interferences observed are for samples containing gold,
silver and iodide.14

(SLIDE  16)
INTERFERENCES OBSERVED

Interferant              Level        % Hg Recovery
Gold (100 ppt Hg)        1 ppm            76.1
Silver (100 ppt Hg)      1 ppm           183.2
Iodide (10 ppt Hg)       5 ppm             9.0
DISCUSSION

     EPA Method 245.7 was developed to serve as a monitoring method for the
ecological assessment of the Florida Everglades.  Is this method adequate for a new role
of supporting water quality-based effluent limitations (WQBELs)?  To explore this
possibility consider the following scenario:

(SLIDE 17)

     (1)   Methylmercury poisoning of humans and cats in Japan, known as "Minamata"
           disease (1950-1970), was caused by the consumption of mercury-contaminated
           fish (10 to 24 parts per million [ppm]).  This environmental catastrophe
          marked the beginning of world-wide concern that mercury may  be a global
          pollution problem. The history, hazards, and concerns about mercury
          pollution have been documented since 1972.15

(SLIDE 18)

     (2)   The death of a Florida panther (110 parts per million [ppm]) in 1989 from
          mercury poisoning was the  impetus for the completion of a tissue survey in
          1990, of panthers,  raccoons, otters and alligators found in the southern
          Florida Everglades. The findings of this survey "MERCURY
          CONTAMINATION IN FLORIDA PANTHERS"16 documented widespread
          mercury contamination of wildlife in the southern part of the Everglades.
           A mercury-contaminated food chain (fish and raccoons) is the suspected cause
           of the death of the Florida panther.  The State of Florida has determined that
          many sport fish caught in  the South Florida Everglades are contaminated with
                                      325

-------
           high levels (0.5 to 4 ppm) of mercury.  This has caused a fish advisory to be
           issued for the Florida Everglades.

     (3)   The mercury is concentrated in the aquatic food chain from natural waters
           and sediment.17,18,19  For fish, this represents roughly a bio-magnification of
           1,000,000 times the concentration of mercury found in natural waters.  A
           comparison may be seen in the next slide.

(SLIDE 19)
                              ONE MILLION TIMES
                        BIO-MAGNIFICATION OF MERCURY
                                      BY FISH
                              FROM NATURAL WATER

           A.      Fish, 0.5 to 20 µg Hg/g (ppm, 10^-3 g/kg), 10^6 conc. factor,

           B.      Insects, 7 to 265 ng Hg/g (ppb, 10^-6 g/kg), 10^3 conc. factor,

           C.      Biological mass, 10 to 210 ng Hg/g (ppb, 10^-6 g/kg), 10^3 conc. factor,

           D.      Sediment, 34 to 753 ng Hg/g (ppb, 10^-6 g/kg), 10^3 conc. factor,

           E.      Water, 0.05 to 1 ng Hg/L (ppt, 10^-9 g/L).

(SLIDE 20)

     (4)   There are at least 32 States with existing fish consumption advisories for
          mercury.20 The screening value (SV) for mercury in fish is established at 0.6
          ppm. The SV is based on calculations that include fish consumption of the
          U.S. population and other risk factors such as fish density (population).
          These risk factors may  lead to  the issuance of a fish consumption advisory.21

     Using the above information, high concentrations of mercury (5 to 24 ppm) can
lead to the death of humans (Japan) and of predators (Everglades) that depend on the
aquatic food chain.  If the SV of 0.6 ppm Hg for fish is used and mercury is bio-magnified
1,000,000-fold in fish from the aquatic food chain, it is possible to back-calculate the
upper limit for the concentration of mercury acceptable in natural water.  The calculated
upper minimal limit (ML) would be roughly 0.6 ppt-Hg.
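
     A minimal sketch of this back-calculation in Python, using only the values quoted
above (the 0.6 ppm SV and the approximate 1,000,000-fold bio-magnification factor):

    # Back-calculate an acceptable water concentration from the fish screening
    # value, assuming the ~1,000,000-fold bio-magnification cited above.
    sv_fish_ppm = 0.6            # screening value for Hg in fish (ppm = µg/g)
    biomagnification = 1.0e6     # water-to-fish concentration factor
    water_limit_ppm = sv_fish_ppm / biomagnification
    water_limit_ppt = water_limit_ppm * 1.0e6    # 1 ppm = 1,000,000 ppt
    print(f"Upper limit in water is roughly {water_limit_ppt:.1f} ppt Hg")   # ~0.6 ppt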

     Using the mercury in fish scenario to set an effluent limit at 0.6 ppt-Hg may be
setting mercury at a concentration too high to protect the aquatic environment.  Ecological
risk factors and target populations  other than the consumption of fish by humans should
be considered  and included in the calculation.  Until all of the risk factors are  included,
                                       326

-------
another approach of setting an interim effluent limit using Method 245.7's method
detection limit (MDL)  is suggested.

     The MDLs as listed below also may be used for enforcement of water quality-based
effluent limitations (WQBELs) by establishing the interim minimum level (Interim ML) for
mercury. The MDLs are as follows:

(SLIDE 21)
METHOD DETECTION LIMITS FOR MERCURY (ng Hg/L)

                            EPA/EMSL            EPA/Region 4        S.E. Environ. Research,
                            Cincinnati, Ohio    Athens, Georgia     Florida International Univ.
MATRIX                      Glove Box           Hard Wall           Soft Wall
                                                Clean Room          Clean Room

Reagent Water               1.8                 0.3 to 1.0          0.3 to 0.6
Florida Marsh Water         3.3
Synthetic Sea Water         2.6
Sea Water                                                           1.4
Lake Water                                                          0.3
Waste Water                                                         0.4
        The Interim ML is calculated when a method-specific ML does not exist.  It is
calculated by multiplying the MDL by 3.18.  The factor of 3.18 is derived from the ACS
definition of the level of quantitation (LOQ), which is 10 standard deviations above the
average blank signal, divided by 3.14 (the Student's t value used for the MDL), i.e.,
3.18 = 10/3.14, for n = 7.  The calculated ML is then rounded up to 1, 2, 5, 10, 20, 50, etc.
The Interim ML for mercury would then range from 5 to 20 ppt-Hg depending on the water
matrix and the laboratory's skill.  Method 245.7 would satisfy the need for the Interim ML
for mercury, and the concentration value is near the levels found in natural waters.  The
Interim ML for mercury would be 10 to 20 times higher than most
ambient concentrations found in natural waters.
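
        A minimal sketch of the Interim ML arithmetic described above, in Python.  The
rounding is implemented as a simple round-up to the 1-2-5 series; that implementation
detail, and the choice of example MDLs from the table above, are assumptions rather than
part of the method text:

    import math

    def interim_ml(mdl):
        """Interim ML = 3.18 x MDL, rounded up to the 1, 2, 5, 10, 20, 50, ... series."""
        raw = 3.18 * mdl
        exponent = math.floor(math.log10(raw))
        for mantissa in (1, 2, 5, 10):
            candidate = mantissa * 10.0 ** exponent
            if candidate >= raw:
                return candidate

    for mdl in (1.4, 1.8, 3.3):      # example MDLs (ppt) from the table above
        print(f"MDL {mdl} ppt -> Interim ML {interim_ml(mdl):g} ppt")   # 5, 10, 20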

     To summarize, if the risk-based water quality criteria (based on fish consumption of
the U.S. population) are used to establish effluent guidelines, the minimum level (ML) of
0.6 ppt-Hg may be set near the Method 245.7 method detection limit (MDL) of 0.3 to 3.3 ppt-Hg.
If the risk-based approach includes the protection of the aquatic food chain  and other
                                       327

-------
endangered target populations, it is conceivable that the ML would be set below the
MDL.  To satisfy this need will require a revision of the method to include a
sample pre-concentration step to increase the sensitivity of the method.  Method 245.7 is
an acceptable method for monitoring of the Interim ML of 5 to 20 ppt-Hg.
ACKNOWLEDGEMENTS

(SLIDE 22)

     The following individuals are acknowledged for their contributions to this project:
Professor Peter Stockwell, Paul Stockwell and Dr. Warren Corns, (P.S. Analytical Ltd.,
Kent,  UK) and Jim Coates (Questron Corporation, Princeton, NJ) are thanked  for their
technical support. Dr. Ron Jones (Florida International University, Miami,  FL) is
gratefully acknowledged for providing the surface water sample from the Everglades
National Park, Florida, and for providing method detection limits.  M. A. Wasko, J. Scifres,
and W. H. McDaniel, Environmental Services Division, Region 4, USEPA, Athens,
Georgia, are also acknowledged for their contributions to the development of the method.

(SLIDE 23)

     A special acknowledgment is given to Lloyd Kahn and John Birri,  Region 2,
USEPA, Edison, New Jersey, and Jerry Stober, Region 4, USEPA, Athens, Georgia, for
providing funding through the Regional Applied Research Effort (RARE) program.
REFERENCES


1.   E. Schulek, K. Burger, Talanta, 1-2, 219, (1958).

2.   B.J. Farey, L.A. Nelson, M.G. Rolph, Analyst, 103, 656, (1978).

3.   L.A. Nelson, Anal. Chem., 51, 13, 2289, (1979).

4.   W.R. Hatch, W.L. Ott, Anal. Chem., 40, 14, 2085, (1968).

5.   W.T. Corns, L.E. Ebdon, S.J. Hill, P.B. Stockwell, Analyst, 177, 717, (1992).

6.   K.C. Thompson, G.D. Reynolds, Analyst, 96, 771, (1971).

7.   K.C. Thompson, R.G. Godden, Analyst, 100, 544, (1975).

8.   P.B. Stockwell, R.G. Godden, J. Anal. At. Spectrom., 4, 301, (1989).
                                       328

-------
9.   Yorkshire Water Methods of Analysis, 5th Ed. 1988.
     (ISBN 0 905057 23 6).

10.  S.N. Deming, S.L. Morgan; "Experimental Design: A Chemometric Approach,"
     Elsevier, Amsterdam,  1987.

11.  C.D. West, Anal. Chem., 48, 6, 797, (1974).

12.  J.F. Kopp, M.C. Longbottom, L.B. Lobring, J. AWWA, Vol. 64, No. 1, (1972).

13.  Methods for the Examination of Waters and Associated Materials, "Mercury in Water,
     Effluents, Soil and Sediments etc., additional methods 1985," 1987, (ISBN 0 11
     751907 3).

14.  E. Yamada, T. Yamada,  M. Sato, Anal. Sci., 8, 863, (1992).

15.  Selikoff, I.J. (Editor-in-Chief); "Hazards of Mercury," Environmental Research, An
     International Journal of  Environmental Medicine and Environmental Sciences, Vol.
     4, No. 1, Mar. 1971.

16.  Roelke, M.E.; Schultz, D.P.;  Facemire, C.F., Sundlof,  S.F., Royals, H.E., "Mercury
     Contamination In Florida Panthers," A Report of the Florida Panther Technical
     Subcommittee to the  Florida Panther Interagency Committee,  Dec. 1991.

17.  J.T. Dukerschein, J.G. Wiener, R.G. Rada, M.T. Steingraeber, Arch. Environ.
     Contam. Toxicol., 23, 109-116, (1992).

18.  G.E. Glass, J.A. Sorensen, K.W. Schmidt, G.R. Rapp,  Jr., Environ. Sci. Technol.,
     Vol.24, No. 7, (1990).

19.  J.A. Sorensen, G.E. Glass, K.W. Schmidt, J.K. Huber,  G.R.  Rapp, Jr., Environ. Sci.
     Technol., Vol. 24, No. 11, (1990).

20.  U.S. EPA Office of Wetlands, Oceans, and Watershed Protection Division, Fish
     Contamination Section,  NPS Information Exchange, Fish Consumption Database,
     NPS BBS Modem (301)  589-0205.

21.  Guidance  For Assessing Chemical Contaminant Data For Use in Fish Advisories,
     Vol. 1, EPA823-R93-002, August  1993.
                                      329

-------
                       QUESTION AND ANSWER SESSION
                                     DR. FIELDING: Do we have any questions?

                                     MR.  XIE:   My name is Jack Xie from  Water
Chemistry at Roanoke, Virginia.  I am just wondering whether we can get a copy of EPA
Method 245.7.

                                     MR. POTTER: It will be published probably about
midsummer.  I cannot release this  method at this  time, because it has not received its
second peer review which is a policy of the Office of Research and Development.  The
Method is in limited use at this time, and we are doing field testing  on  it.

                                     MR. XIE:  Okay, thanks.

                                     MR. PEIST: I  am Ken Peist, Region II laboratory
in Edison, New Jersey.  We have  been doing some  work with fluorescence detection
following digestion using the permanganate digestion, and we are going to be doing a study
this year where we compare the bromate/bromide versus the permanganate digestion, but
also, we are looking into the microwave digestion prior to the fluorescence detection.

      I was wondering if ORD was planning on doing any work with the microwave.  I
think a gentleman before had mentioned, you know, it  is a closed vessel system and it has
teflon liners, and it would probably be pretty good for the mercury contamination problems.

                                     MR. POTTER:  The only work we are doing right
now with the microwave is with fish, fish tissue. We are trying to  introduce disposable
containers so that you use it only once.  At the part per trillion (ppt) level, if you use a
bomb to do your digestion  in, you  will have cleaning problems, and you increase your
workload.

      I am trying  to get to the procedure where  we are  using inexpensive disposable
plastics so that we can cut the cost and reduce the  time.

       With the bromide/bromate method, for most samples the digestion is almost
instantaneous, that is, it occurs within 1 to 2 minutes.

       The sample in the procedure is digested for 30 minutes.  That is a conservative time
for the digestion, and it is meant to accommodate samples that contain high humic materials.

                                     MR. PEIST: I  think what we are hoping to see is
the recoveries on marine sediments improving with the microwave digestion being that you
can digest at higher temperatures and pressures.

                                       330

-------
                                     MR. POTTER:   Now,  on sediment, that  is a
different  topic  altogether, because  now  you  are  not  at  the  parts per trillion level.
Oftentimes, you are at the parts per billion  level, and cleanliness of the bomb is not as
critical as it would be at the part per trillion  level.

                                     MR. PEIST:  Exactly.  Thanks.

                                     MR. POTTER:  Yes, ma'am?

                                     MS. ALLEN: I am Linda Allen from the Minnesota
Department of Health. My question was I have heard a lot of talk today about clean rooms.
The USGS has one definition, and you now  have another definition of clean room.

       I think some kind of document describing exactly what an environmental clean room
means as opposed to...I am sure some of us are  familiar with biological clean rooms from
that aspect, but what, exactly, connotes an environmental clean room for analysis?

       Obviously, when you were saying $10,000 to $100,000, that is a lot of money to be
investing, and  we would kind of like to know what we need to be looking at.

                                     MR. POTTER:   At this  time,  I have  a  work
assignment with a contractor, Research Triangle Institute, to try to answer that question.  We
do not believe that you  have  to go to a $200,000 clean room to do a  lot of the metals
analysis.  What we are trying to do is bring it down to  an  affordable price of, let's say,
around $10,000.

       With that, let me say if you are running a whole bunch of different  kinds of samples,
your laboratory may be better off going to a  more expensive  clean  room  to increase
throughput.  These clean rooms are typically  polypropylene constructed  or soft-wall
constructed clean rooms.

       The hoods  have absolutely no metal parts in them. The electrical outlets don't have
any metal things sticking out.  You generally go through a Class 10,000 clean room to get to
a Class 100 metal free laboratory environment.

       For volatile metals, it also requires filtration with carbon or gold or some other type
of system.

       I am working on a "low-end budget" controlled laboratory room.  I am trying to get
the price down so your lab can afford to work in a laboratory that has some mercury
contamination.  The technology being evaluated is a glove box.

                                     MR. BERMAN:  I would just like to comment on
the remark about sediments.  It is true, if you look at a sediment, they seem to have about

                                      331

-------
100 times more mercury in them than a water sample. However, there is no way of getting
1 gram of sediment into 1 milliliter of solution.

      So, when you prepare a sediment, you generally are diluting your concentration by
about a factor of 50 to 100, and you are right back to where you started with the original
water samples.  So, you have to be just as diligent when you analyze sediments for mercury
as you do analyzing waters.

                                      DR. FIELDING:  Are there any other questions?
(No response.)

                                     DR. FIELDING:  If not, this morning's  session is
over.  The afternoon session is scheduled for 1:30.  It is now  12:30 which gives us barely
an hour for lunch.  Let's try to get  back by 1:45.

      Speaking of which, next door is  a very nice set of fast and good food places to eat,
and they  have, since last time  I was here, put in a nice covered walk over there so it is not
quite as bad as it might seem.

      We will try again about 1:45.


      (A luncheon recess was taken.)
                                       332

-------
                                      MR. TELLIARD: Our first speaker this afternoon
is Greg Cutter.  Greg  is an Associate  Professor and Assistant Chairman  and Graduate
Program Director for the Department of Oceanography at Old Dominion University,
which is here in town across the river, up the road, wherever. Greg is going to be talking
about, again continuing in the same frame, the metals concentration and also the issue of
speciation.

      Greg?
            DETERMINATION OF METALLOID CONCENTRATIONS AND
                        SPECIATION IN NATURAL WATERS
                                      MR. CUTTER: What I want to talk about today is,
in fact, a group of elements that are not quite metals and are not quite non-metals; they are
the metalloids.  The metalloids are the elements of groups IV, V, and VI on the periodic table
and include germanium, arsenic, antimony, selenium, and tellurium.

       Currently, most of the environmental interest in the metalloids centers on arsenic and
selenium, and today I will focus on these elements.  However, I would like to suggest
to you that the other metalloids are quite interesting environmentally, and actually have
some utility as tracers of different inputs  to the aquatic environment.

      What I will do first is review a little bit about the  chemistry and the environmental
behavior of these  metalloids so that you can understand, or help to choose, an appropriate
analytical technique.

      To begin this review let me talk a little bit about the chemical  forms  of dissolved
arsenic and selenium in natural waters. Selenium has four principal oxidation states, +VI,
+IV, elemental or zero, and -II.  In most natural waters at pHs between 5 and 8, you
would find that most of the Se(VI) will form selenate, because it is a strong Lewis acid. This
form is roughly equivalent to sulfate in the sulfur series.

      Next is Se(IV), primarily as selenite. You would find it in these two ionic forms (refer
to Figure 1). What I want you to note is that rather than  being cations such as mercury or
lead, these  trace elements are oxyanions, and, therefore,  behave very differently.

      The next selenium species is elemental selenium. Now, I am talking about dissolved
ions, and the operational definition of dissolved is something that passes through a 0.45 µm
filter.  Because this is an operational definition, we have to include the possibility that there
could be some elemental selenium, which is insoluble, in a colloidal form that passes through
your filter.  So, rigorously, we have to include elemental selenium in this dissolved
metalloid series.
                                       333

-------
       Finally, selenium in the -II oxidation state, roughly equivalent to sulfide, would exist
primarily in the form of organic selenides.  Examples would be the dissolved free amino
acids such  as selenomethionine, which would be just like sulfur's methionine, and could
be either free or bound in soluble peptides. There is another form of selenide that has been
of concern, especially to atmospheric chemists, and that is dimethyl selenide; this is a
volatile  liquid.

       If we go down to arsenic, and by analogy antimony, which is just below arsenic on
the periodic table, we have As(V).  It also exists as an oxyanion, arsenate.  There are also
some organic forms of As(V), methylarsonic acid and dimethylarsinic acid.  These are both
found pervasively in the environment.

       Finally, we have a reduced form of arsenic, As(III), which would be the arsenite
oxyanion.  There are some  other organic forms of arsenic including arsenobetaine and
arsenocholine which are produced by zooplankton and can be found in water samples.

      So, what we have with the metalloids is a very potentially complex chemistry.  We
have multiple oxidation states, and then within a given oxidation state, you have different
chemical forms.  Thus, the speciation of the metalloids is potentially quite complex.

      Now, superimposed upon this is the fact that, in natural waters, the interconversion
between these metalloid species, for example, from Se(VI) to Se(IV) and the reverse, tends
to be kinetically relatively slow.  What this means is that simple thermodynamic calculations
using, for example  the pH and the oxygen concentration, cannot be used to predict the
concentration of a given metalloid species.  Thus, you  have a lot of non-thermodynamic
processes affecting these elements.

      Finally, the other point to consider is the bioavailability or bioreactivity of these
elements.  It turns out, for example, that arsenate is almost chemically identical to
phosphate.  In fact, the basis of arsenic toxicity is its similarity to phosphate.

      Therefore, we would be interested in the environment to know the concentration of
arsenate as opposed to the concentration of total arsenic in water.

      Selenium has similar considerations. It largely follows the biogeochemistry of sulfur,
so it can get carried along in the natural sulfur cycle.

      So, we need to know the speciation from a geochemical  standpoint, that is, the
reactivity of these  different  elements in the absence of  biota and, as well,  how their
biological reactivity changes with the speciation. Overall,  speciation is very important for
the metalloids.
                                        334

-------
      What I have done is to put forth some of the considerations that you need to think
about when selecting appropriate analytical methods for determining the metalloids in
natural waters (refer to Figure 2).

       First of all, you have to consider the concentrations.  Now, what I have selected here
are natural concentration ranges:  selenium on the order of somewhere below 2 to 350 ng/L,
subdivided into at least three different major chemical forms, selenate,
selenite, and selenide.

      Arsenic has a very large concentration range, 50 to about 4800 ng/L, and you have
at least four major chemical forms with arsenic.  So, we have quite a large concentration
range, and we have a diversity of species.

      The other thing to consider is the natural cycling of these elements.  There are
processes  such  as selective biological uptake, that is, the conversion of dissolved selenium
or arsenic into solid phases, and then the reverse, remineralization or regeneration of this
organically-bound  selenium or arsenic back  into the water column.

      You have to consider that the species can interconvert and that you have multiple
inputs, atmospheric inputs, riverine inputs, streams, input from sediment pore waters, as
well as anthropogenic inputs, of which the largest is fossil fuel combustion.  It turns out the
metalloids are very enriched  in fossil fuels,  especially  in high sulfur coals.

      Then, considering all these things, the analytical methods must be able to accurately
and precisely determine the concentrations.

      Well, the accuracy goes without saying.  Shier Berman talked about that.  You need
to use standard reference materials, and you want to have the right number.

      You need to have good precision. The natural variability of these elements is on the
order of perhaps 20 to 30 percent.  Therefore, if you have precision that is worse than that,
you are not going  to see anything;  you are not going  to see changes.

      And, as I said, you have to understand the speciation of these metalloids, and we are
working in a wide concentration range and in  a variety of matrices.

      What I want to do, then, is go over some major types of techniques that you can use
to determine selenium,  and I want to then argue for  what I feel to be  the technique of
choice based on these criteria and go over the methodology.

      What I have selected is five different common methods for determining selenium in
natural waters  (Figure 3).  I have selected standard inductively coupled argon plasma, gas
chromatography with electron capture detection, fluorimetry, graphite furnace-atomic
absorption, and then hydride generation-atomic absorption.


                                       335

-------
      I  have put in  some of the analytical figures of merit that one needs to consider,
detection limits, precision, then the ability or not to do speciation, which species you can
determine, and then some of the kinds of laboratory details you like to consider, the analysis
and preparation time.  Finally, a very important issue is interferents, that is, analytical interferences
in natural waters.

       First of all, if you remember that the concentration range of selenium is from somewhere
below 2 ng/L up to about 300 ng/L, you can see right away that techniques such as argon plasma or
graphite furnace are going to be right on the edge.  So, they have some problems.  You can
also see that their precision is not so good.

      Finally, the other thing that you have to consider with these is that typical techniques
are incapable of doing speciation.  As I  say, they can do  total  selenium,  but they are
incapable of determining the VI, the IV, and the -II.

      So, that leaves us with  the ones to  consider, gas chromatography, fluorimetry, and
hydride generation-atomic absorption.  There are other hyphenated techniques that we can
modify.  You could do hydride generation with ICAP, for example, but I am going to try to
stick to the relatively straightforward techniques.

      If we look at the gas chromatography, fluorimetry, and hydride generation methods,
we see that the detection limits are extremely low using a sample volume on the
order of 100 mL, except for fluorimetry, that we have very good precision (relative standard
deviation), and that all are pretty much capable of doing the speciation for the IV  and the
VI.

      You start to see differences when you want to try to determine the organic selenium.
Fluorimetry  has  been developed to  do that and  hydride  generation,  but  not gas
chromatography.  If we want to start doing species such as  dimethyl selenide, only  hydride
generation has been  used.

      If we wanted to start looking at  speciation  on  the solid  phase (particles), the
speciation methods  have only been  exploited for  hydride generation.  However, the
speciation methods are applicable to these other techniques.

      Finally, we  have to look  at the kind of prep time versus  analysis time.  Hydride
generation has the longest analysis time,  but it has one of the shortest preparation times.
So, I would suggest to you that,  in fact, you can kind of even this out in terms of prep time
versus analysis time.

      Finally, in terms of interferents,  there have been very few studies, or none that I am
aware of, for the  gas chromatographic and fluorimetric techniques. There have been quite
a few studies on the hydride generation  method.  The worst interferent for that is free
chlorine, so it can  pose a problem with the analysis of drinking waters.


                                       336

-------
      Now, anybody who knows me is going to know that hydride generation is my
favorite technique, and that is what I am going to focus on.  If you want to quickly jot down
the references here, they are the ones that go with that table (Figure 4).  All of the methods,
gas chromatography, fluorimetry, and hydride generation-atomic absorption, have
been published in the peer review literature and are readily available to everyone.

      What I  want  to  now do is focus a little  bit on  the  hydride  generation-atomic
absorption  method.  This  is the hydride generation setup for  doing  selenium  speciation
(slide).  This is set up on  a Perkin &  Elmer instrument.   We  have set it up on It's and
Varian's.

       It consists of the hydride generator, which is what we call a gas stripper here.  It has
a Teflon septum into which you inject sodium borohydride. The sample is acidified, and
you generate hydrogen selenide, the gas, which is swept out with helium into a U-tube that
is immersed in -60 degree  isopropanol; that removes water vapor.  It is then swept into a
glass U-tube that is immersed  in liquid nitrogen which freezes  out the hydrogen selenide.

      This is a closeup of the burner  (slide).  What we do is use an  open quartz  tube
burner that is burning an air/hydrogen flame.  The hydrogen is coming in on the back side
of the burner.  The air is coming in here, and  the effluent from the liquid nitrogen trap is
coming  in right here, so you have an open flame here.

      Finally,  the data are processed on a chromatographic integrator.  Here is  a  little
selenium peak you might not be able to make out. What we want to do with this integrator
is to determine the peak area rather than the peak height.  The reason  for this  is that we
obtain a much  larger linear working range, and the precision  is infinitely improved  over
peak height.
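
       A minimal sketch of that peak-area quantitation in Python, using a synthetic trace
rather than real detector output (the integration window and baseline estimate are
illustrative choices, not the speaker's settings):

    import numpy as np

    # Integrate the detector trace over the peak window (area) instead of taking
    # the maximum above baseline (height).
    t = np.linspace(0.0, 60.0, 601)                      # time, seconds
    signal = 0.8 * np.exp(-((t - 30.0) / 4.0) ** 2)      # synthetic selenium peak
    baseline = float(np.median(signal[:50]))             # baseline from trace start

    window = (t > 20.0) & (t < 40.0)
    peak_area = np.trapz(signal[window] - baseline, t[window])
    peak_height = float(signal.max()) - baseline
    print(f"peak area = {peak_area:.3f}, peak height = {peak_height:.3f}")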

      The other question you may be wondering about, as a lot of people are familiar with
hydride generation, is  why do we  include this liquid nitrogen trap?  Well,  the liquid
nitrogen trap effects a preconcentration that  you do directly on the  instrument.  You
generate the hydrogen selenide.  A commercially available hydride generator will then sweep
it immediately  into the flame.   The problem  with  that is that at the same time you are
producing hydrogen in the  reaction of acid and sodium borohydride, and  you actually dilute
your hydride  signal with  that of hydrogen.   You  also have some  other non-atomic
interferences.

       By using the liquid nitrogen trap and isolating the hydrogen selenide uniquely, you
can remove these interferences.  You also effect the preconcentration and have much better
detection limits.

      Now, you  may ask  the  question, why do you want to have really good detection
limits?  Well, good detection limits,  first of all, are needed if you are analyzing a natural
water and you want to know the baseline.


                                       337

-------
       But the other advantage is that, even if you are analyzing something such as a refinery
effluent, which we have done, or something as contaminated as Kesterson Reservoir water
in California, you can use an extremely small sample size and dilute it, and you dilute out
all the interferences.  An important advantage to having low detection limits  is simply
diluting out  any  problems in your analysis.

       What we  have here (Figure 5) is a flow diagram for processing a water sample.  First
of all, in terms of the definition, we are going to filter the sample through a polycarbonate
filter; this operationally defines dissolved and particulate.

       The other step is to take that filter and use it for the analysis of total selenium and
the  selenium species.

       I would argue and recommend that you  do particulate analyses directly on the filter
rather than taking an unfiltered sample and a filtered sample and doing it by difference.  The
reason why  is because you have a small difference between two large numbers and your
errors are huge.  So, it is much better to take the filter and analyze it directly for selenium.
In addition,  you  can  do selenium speciation if you so desire.

       The filtrate is then placed  in  borosilicate  glass  bottles.   We  acidify  it using
hydrochloric acid to about pH 2, and we have found that it is stable for at least six
months under these storage conditions.

       We use borosilicate glass bottles, because the selenium species, as anions, behave
differently than  cations, and they tend to absorb.  For  example, if you  are doing metal
analysis, and Russ will be telling us about this, you want to use low density  polyethylene
bottles, for example,  which are very clean for trace metals.  It turns out they  are  very
absorptive for metalloids.

       So, you need to use a different container, and we use  borosilicate glass for selenium.
We have found it has the lowest absorptive loss during storage.  We use hydrochloric acid
because if you use nitric acid, it interferes with the hydride generation technique.

       The sample is acidified to  4M  with hydrochloric acid.  You add sulfanilamide.
Sulfanilamide reacts  with nitrite.  Nitrite is another severe interferent with the hydride
generation.  However, if you react it with sulfanilamide,  it removes the nitrite interference
very simply.

       You then add  sodium borohydride.  You  generate  the hydrogen selenide, liquid
nitrogen trap it,  sweep it into your AA, and you directly get Se(IV).

       You take another aliquot of the sample, you acidify it, you boil it for 15 minutes, and
then you follow  this  procedure again.  That gives you the concentration of Se(IV) + VI.
                                        338

-------
      You then take another aliquot, and you add an oxidant, potassium persulfate, boil it
for up to 1 hour.  You can experiment with your sample; you can do it in as little as 20
minutes, depending on how refractory the organics are. This oxidizes the -II selenium forms
up to IV and VI.  Then the persulfate is gone after about 20 minutes of boiling, and then
the acidic boiling reduces it all to IV.  You then do the analysis, and that gives you the total
selenium.

      By difference, the Se(VI) is the IV +  VI minus the IV fraction, and the -II + 0 is the
total minus the IV  + VI.
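
       A minimal sketch of this by-difference arithmetic in Python, using hypothetical
triplicate means rather than real data:

    # "By difference" selenium speciation, as described above.
    se_iv     = 12.0    # direct determination of Se(IV), ng/L
    se_iv_vi  = 45.0    # after acidic boiling: Se(IV + VI), ng/L
    se_total  = 60.0    # after persulfate oxidation and boiling: total Se, ng/L

    se_vi        = se_iv_vi - se_iv       # Se(VI)
    se_minus2_0  = se_total - se_iv_vi    # Se(-II + 0), mainly organic selenides
    print(f"Se(VI) = {se_vi:.1f} ng/L, Se(-II + 0) = {se_minus2_0:.1f} ng/L")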

       The reason why I included the Se(0) here is because there is a possibility, again, that
you have colloidal elemental selenium passing through your filter.  So, rigorously, for
practical purposes, this fraction is mainly selenide, but you have to include the possibility
that some colloidal elemental selenium passed through the filter, and that is why we defined
that fraction in such a fashion (Se -II +  0).

      The analysis time for this is 12 minutes.   The prep time is a maximum of 1 hour.

      The apparatus that I showed you had a single hydride generator with  a single liquid
nitrogen trap.  We have hooked up in our laboratory a three hydride generator setup with
a special  Valco valve,  and it allows us to have three samples run simultaneously, because
the instrument time, the chromatographic  time, and the time  that the AA  uses is only a
minute.  So, most of the time  is spent in generating and collecting the hydride so that you
can maximize or minimize the instrument time, depending on which way you look at it, by
having three samples  running simultaneously and then just switching and selecting each
hydride trap and  sweeping it into  your AA.  In this way, you  can process many more
samples in a day.

      The other thing I wanted to add is that we do all our analysis in triplicate. One must
rigorously assess what the precision  is.  Therefore, you need at least three determinations.

      And, we regularly use the  standard  additions method  of  calibration to assure
accuracy.  However, you also must include SRMs that are appropriate for the type of sample
that you are analyzing.
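
       For readers unfamiliar with the standard additions calibration mentioned here, a
minimal sketch of the extrapolation in Python, using hypothetical spike levels and peak
areas (not data from this work):

    import numpy as np

    # Standard additions: spike aliquots of the sample, fit response versus amount
    # added, and take the x-intercept magnitude as the sample concentration.
    added_ng_l = np.array([0.0, 1.0, 2.0, 4.0])      # hypothetical spikes (ng/L)
    peak_area  = np.array([0.52, 0.81, 1.10, 1.68])  # hypothetical integrated areas

    slope, intercept = np.polyfit(added_ng_l, peak_area, 1)
    sample_conc = intercept / slope                  # ng/L in the unspiked sample
    print(f"Se(IV) in sample is roughly {sample_conc:.2f} ng/L")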

      Now, we will go on to arsenic.  It is a similar type of thing that I did for selenium
(Figure 6).   I  have selected  the kinds  of standard  methods  that people  have in  their
laboratories, graphite  furnace; hydride generation, this time with argon plasma; HPLC;
hydride generation with atomic absorption, just like the selenium; and then a newer method
I want to try to sell which  is  hydride generation coupled with gas chromatography with
photoionization detection.  I will tell you in a minute why I think that is good.
                                       339

-------
      First of all, detection limit-wise, hands down it is the two hydride generation methods
in terms of  the detection  limits.  Again, this gives you  an  advantage of diluting out
interferences and analyzing any sample you want.

      The graphite furnace and argon plasma have other problems with precision, and they,
at the current level, are incapable of doing speciation. The analysis is a relatively short one,
however.

      HPLC is interesting,  because it has very poor detection  limits, but it is capable of
doing many  species.  In fact, it can do some of these complex organic arsenic species such
as arsenobetaine or choline.

      The only problem is that detection limits are not very good. This could be improved
if one coupled it to, for example, an ICP/MS.

      The hydride generation technique is capable of doing all the species, and the one
thing I want  to add here is the simultaneous determination of antimony.  Antimony is right
below arsenic on the  periodic table.  It is interesting to look at element pairs. The other
interesting thing to look at  with  antimony is that it is used as a plasticizer and has been
proposed as a good tracer for municipal waste incineration when you burn plastics.

      So, if you can get antimony at the same time as arsenic, you get more than twice as
much information.  The advantage here is that this hydride generation gas chromatography
technique gives us simultaneous antimony, so I would suggest that is an advantage.

      There is essentially  no prep time for  arsenic.   For the HPLC,  standard hydride
generation, and this hydride generation technique, there are effectively no interferences.

      Now, while you are  writing down references (Figure 7),  I will say something else.

      One thing to note: unlike selenium, whose species are generally stable during storage,
arsenic's are not.  Now, you will find conflicting information in the literature, but we have
been looking at this a lot, and it turns out that As(III) is not stable even if you rapidly freeze
the sample.  You cannot store it more than several weeks or you get speciation change.

      If you have speciation change, in other words,  if the As(III) oxidizes, you are going
to overestimate the concentration of arsenate.  This is usually the species that most people
are interested in because of its similarity to phosphate and its toxicity.

      So, one of the problems with arsenic is that you have species that are not stable with
storage, and it means immediate analysis.  Therefore, we developed a technique that is
portable or  relatively portable, and  you  do not have to lug  an atomic  absorption
spectrometer along with you.
                                        340

-------
      What you have is a very small, portable gas chromatograph with a photoionization
detector (slide).  You have your hydride generation setup much like the one for selenium,
with the hydride generator, a water trap, and a liquid nitrogen trap.  You have your
chromatographic integrator, and you have this little gas chromatograph here with a
photoionization detector.

      This is in a box here (referring to slide). Actually, this is out in the Black Sea.  We
have lugged this thing all over the world.  It was in Iceland this last summer, and it is pretty
sturdy.

      It is relatively inexpensive.  The whole setup is less than $8000.

      This is a closeup (slide).  The only thing you have to do here, because you have a
high-pressure gas chromatograph and a relatively low-pressure stripping apparatus, is to
interface the high and low pressure sides with this six-way valve, which you switch from
strip/trap, where you collect the hydride in the liquid nitrogen trap, to injecting it into the
column.  It is relatively simple, but the only trick is this six-way valve.

      Here we have the analytical scheme for arsenic (Figure 8).  We again have the
filtration step.  I recommend the filter itself be used for total arsenic and antimony and for
speciation in the solid phase, rather than doing it by difference.

      We take the filtered sample and we split it.  Because we have unstable As(III), we put
the sample in teflon bottles, we refrigerate it, and we do the determination no later than
24 hours after collection.

      Now, the determination involves adjusting the pH to 6 with a Tris buffer and adding
sodium borohydride.  That selectively volatilizes the arsenic and antimony (III) to their
respective hydrides.  They are trapped and determined simultaneously with the GC/PID
system.

      You then take another aliquot, add potassium iodide, acidify with HCl to pH 1.6, add
sodium borohydride, and you get the arsenic and antimony (III + V).

      Now, you could do this analysis sequentially, where you do the Tris analysis, get the
(III), then add the KI and the HCl and do the reduction step again.  What we have found is that
the determination of (V) comes out a little low.  What we believe is that some of the arsenic
and antimony (V) get reduced to elemental arsenic and elemental antimony and are then
not subject to being reduced to the hydrides.

      So, we recommend that you,  in fact,  do  a  (III), and  then you  do a (III  +  V)
determination rather than  doing a (III) and then sequentially a (V) determination.
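
      A short sketch of the bookkeeping this recommendation implies (hypothetical numbers;
the function name is an invention here): (III) comes from the Tris/borohydride aliquot,
(III + V) from the KI/HCl aliquot, and (V) is obtained only by difference, never by a second,
sequential reduction of the same aliquot.

    # As (or Sb) speciation from two separately treated aliquots:
    #   aliquot 1: Tris buffer, pH 6, NaBH4   -> measures (III)
    #   aliquot 2: KI, HCl to pH 1.6, NaBH4   -> measures (III + V)
    # (V) is then taken by difference, as recommended above.
    def pentavalent_by_difference(conc_iii, conc_iii_plus_v):
        """Both arguments in the same units (e.g., ng/L)."""
        return conc_iii_plus_v - conc_iii

    as_iii, as_iii_v = 0.9, 3.1    # hypothetical arsenic results, ng/L
    sb_iii, sb_iii_v = 0.2, 0.7    # hypothetical antimony results, ng/L
    print(f"As(V) = {pentavalent_by_difference(as_iii, as_iii_v):.1f} ng/L")   # 2.2
    print(f"Sb(V) = {pentavalent_by_difference(sb_iii, sb_iii_v):.1f} ng/L")   # 0.5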
                                        341

-------
      So, this  is the procedure you use with the gas chromatograph-photoionization
detection system.  You can do it in the field, or you can run it back to  your home
laboratory, but the important thing is that because of the instability of the species, you need
to do the analyses pretty fast.

      You can also take samples, acidify with HCl to pH 2, and then store them in polyethylene
or borosilicate glass for up to six months.  On the stored sample, you can do the arsenic and
antimony (III + V) determinations.  You can also go back to the more traditional hydride
generation-atomic absorption and add the same reagents to get the methyl species, and what
we did not include on this drawing is the As(III + V).  The interesting thing here is that by doing
the methyls and the (III + V) on your AA, you are performing a kind of semi-intercalibration
between your photoionization determination of (III + V) and the determination on a
completely different detector.

      So, if you want to determine the methyl species, you get the inorganic arsenic at the
same time, and then you can check your numbers from your field-determined (III  + V). So,
there is an advantage to this situation.

      Finally, the arsenic and antimony (V) is the difference between the (III + V) and the
unique (III) determinations.

      That is about it. What I  want to just  briefly conclude with is that I argue that the
hydride generation techniques are the best because of their wide application.  We have
analyzed refinery effluents, any kind  of nasty water you can imagine, down to  seawater,
Antarctic ice cores, atmospheric deposition samples, and aerosols and rainwater in urban
and pristine environments. We have applied these techniques to a wide variety of samples,
and we know they work. They have excellent precision, excellent accuracy, and they meet
all the analytical requirements listed previously.

      What is the future of this?  Well, everybody has  been talking a lot about ICP/MS
here. ICP/MS can be used for the metalloids.  There  are some problems, however.  The
first,  with arsenic, is  that it is mono-isotopic,  and  you have a problem  with isobaric
interferents with the argon oxide.   So,  arsenic is a  little  bit of a  problem  via hydride
generation with ICP/MS.

      The advantage with the  ICP/MS is  that we could in theory do all the metalloids
simultaneously.  However, you need to  get rid of these interferences, and the isobaric
interferences are a major problem with arsenic. You also have some isobaric interferences
with selenium.  The other thing is that the hydride generation conditions are somewhat
mutually exclusive.  What you need  to  do, then, is isotope  dilution  to correct your
recoveries.

      Thank you.
                                       342

-------
                       QUESTION AND ANSWER SESSION


                                     MR. TELLIARD: Questions? Yes, sir?

                                     MR. LOVETT:  Is it possible to go back to your
selenium slide a second?

                                     MR. CUTTER:  You mean the...

                                     MR. LOVETT:  The schematic.

                                     MR. CUTTER:  Okay.

                                     MR. LOVETT:  I am confused about something,
and I wondered if you would explain it to me.  If you have selenide there to start with, it
would seem that pH 2 would generate hydrogen selenide which I presume is the product
desirable from the borohydride reduction. If it does not generate hydrogen  selenide in that
step, what would prevent the subsequent hydrochloric acid step from generating hydrogen
selenide, and  why wouldn't the selenide actually be part of those other ones or is, in fact,
the sodium borohydride not producing selenide?

                                     MR. CUTTER:  I am a little fuzzy on the question,
but let me try and answer,  and you tell me if this answers your question.

      If you mean at...if you are referring to this first step here, first of all, it is not pH 2,
or are you referring to...

                                     MR. LOVETT:  In the filtered sample.

                                     MR. CUTTER:  Okay, in the filtered.  Okay, now
I understand.

                                     MR. LOVETT:  Wouldn't the acid generate H2Se
immediately?

                                     MR. CUTTER:  H2Se does  not exist  in aqueous
systems.  It is  a stronger reductant than water. So, H2Se does not exist in water unless you
have extremely strong reducing conditions such as you have in the hydride generator with
the sodium borohydride. In a natural water sample, you would not have hydrogen selenide.

      The selenide is  almost exclusively covalently bound to carbon.

                                     MR. LOVETT:  So, it is, in fact, an organic.


                                      343

-------
                                     MR. CUTTER: That is right.

                                     MR. LOVETT:  Okay.

                                     MR. CUTTER:  Yes, so you do not have hydrogen
selenide.

                                     MR. LOVETT: That was not clear exactly.

                                     MR. CUTTER: In fact, if you have a water sample
that goes anoxic...by the way, this is a hint for people who want to do water treatment...if
you make the sample anoxic, you precipitate elemental selenium.  Elemental selenium is
the stable form in anoxic conditions. It will not produce selenide.

                                     MR. LOVETT: Okay.  It just was not clear that it
was an organic determination.

                                     MR. CUTTER: Yes, sorry.

                                     MR. TELLIARD:  Other questions?

                                     MR. LOVETT: Another question.   Selenium  in
teflon bottles, you don't recommend it?

                                     MR. CUTTER:  Part of my concern is the
adsorption of the organics onto the teflon, but we have done it.  I just find that the
borosilicate bottles are a little more rigorous and cheaper.

                                     MR. LOVETT: Well, the concern was that we have
a bunch of samples in our lab that were collected for mercury that we retroactively were
asked to run selenium on. Would you recommend...

                                     MR. CUTTER: Part of the problem with selenium,
by the way, is that when  it adsorbs,  it is very hard to desorb. It is irreversible adsorption,
and once it goes on a surface, it is harder than heck to get off.  So, I guess, no, I wouldn't.

                                     MR. NELSON:  Would you  go  to your last
overhead, please?

                                     MR. CUTTER: Arsenic?

                                     MR. NELSON: The very last one.

                                     MR. TELLIARD:  Tell us who you are, please?

                                      344

-------
                                    MR. NELSON: John Nelson from Klohn-Crippen
Consultants, Vancouver, Canada.

                                    MR. CUTTER: That one?

                                    MR. NELSON:  Yes.   That  last  overhead to
determine the methyl arsenic for the stored sample, you add potassium iodide, HCl, and
sodium borohydride.

                                    MR. CUTTER: That is correct.

                                    MR. NELSON:  If you were  not to add the
potassium iodide, you would measure the inorganic plus the methyl arsenic. What does the
potassium iodide do to eliminate the measurement of the inorganic arsenic  species?

                                     MR. CUTTER:  The iodide is actually added for the
antimony.  The antimony (V) will not reduce without the addition.  You need the extra
reductant, so the KI is added to get the antimony.

      In other words, you will have incomplete recovery of the...oh, for the methyls, it does
not matter.  Is that your question?

                                    MR.  NELSON:    Won't you  also  get some
measurement of arsenate under those conditions?

                                    MR. CUTTER: Yes, you do.  I am sorry. I pointed
out that this slide was missing that.  With the methyl thing  here, the methyl should include
(III  + V) down here as well.

                                    MR. NELSON: So, you measure all three of them?

                                     MR. CUTTER:  Yes.  In fact, what you do...and I
did not say this, because this is someone else's technique...is that your liquid nitrogen trap
actually has a little bit of a chromatographic packing, OV-3, in it, and then you make a
poor man's gas chromatograph.  You wrap it with nichrome wire and then hook up a Variac
to it.  So, you pull it out of liquid nitrogen and heat it up, and the inorganic arsenic or
antimony comes off first, and then the methyls come out.

                                    MR. NELSON: Okay, thank you.

                                    MR. TELLIARD:  Thanks, Greg.
                                      345

-------
(Blank Page)
    346

-------
        THE CHEMICAL FORMS OF DISSOLVED
          METALLOIDS IN NATURAL WATERS

SELENIUM

   Se(VI)   Selenate (SeO4^2-)

   Se(IV)   Selenite (HSeO3^- + SeO3^2-)

   Se(0)    Elemental selenium (insoluble, but may be colloidal and
            pass through a 0.4 um filter)

   Se(-II)  Selenide, primarily in the form of organic selenides
            such as dissolved free seleno amino acids
            (e.g., selenomethionine, CH3Se(CH2)2CH(NH3+)COO-)
            or dissolved peptides, and dimethyl selenide ((CH3)2Se)

ARSENIC (and ANTIMONY)

   As(V)    Inorganic: Arsenate (AsO4^3-)
            Organic: Methylarsonic acid (CH3AsO(OH)2)
            Dimethylarsinic acid ((CH3)2AsOOH)

   As(III)  Arsenite (HAsO3^2-)
            Other organic forms:
            Arsenobetaine [(CH3)3AsCH2COOH]+ Cl-
            Arsenocholine [(CH3)3As(CH2)2OH]+ Cl-

-------
             FACTORS TO CONSIDER FOR SELECTING
            APPROPRIATE ANALYTICAL METHODS FOR
         DETERMINING METALLOIDS IN NATURAL WATERS

* TOTAL DISSOLVED SELENIUM CONCENTRATION RANGE IN
  UNCONTAMINATED WATERS: <2 - 350 ng/L; AT LEAST 3 MAJOR
  DISSOLVED CHEMICAL SPECIES

* TOTAL DISSOLVED ARSENIC CONCENTRATION RANGE IN
  UNCONTAMINATED WATERS: 50 - 4,800 ng/L; AT LEAST 4 MAJOR
  DISSOLVED CHEMICAL SPECIES

* THE BIOGEOCHEMICAL CYCLES OF THESE ELEMENTS INCLUDE:
  BIOTIC UPTAKE AND REMINERALIZATION; SPECIES
  INTERCONVERSIONS; INPUT FROM THE ATMOSPHERE, RIVERS AND
  STREAMS, SEDIMENTS, AND ANTHROPOGENIC SOURCES (FOSSIL
  FUEL COMBUSTION)

* THEREFORE, ANALYTICAL METHODS MUST BE ABLE TO
  ACCURATELY AND PRECISELY DETERMINE THE CONCENTRATIONS
  AND SPECIATION OF METALLOIDS OVER A WIDE RANGE OF
  CONCENTRATIONS AND IN A VARIETY OF MATRICES

-------
     Analytical Techniques for Selenium Determinations

Parameter                    ICAP      GC       Fluor.    GFAA      HAA
Detection Limit (ng/L)       32        0.8      0.16      2,000.0   0.16
Sample Volume (mL)           100.0     100.0    1000.0    0.1       100.0
Precision (RSD)              10.0%     2.4%     8.0%      6.7%      2.7%
ΣSe                          Y         Y        Y         Y         Y
Se(VI)                       N         Y        Y         N         Y
Se(IV)                       N         Y        Y         N         Y
Se(-II)                      N         N        Y         N         Y
Dimethyl Se                  N         N        N         N         Y
Part. Speciation             N         N        N         N         Y
Analysis Time (min.)         3         8        1         2         12
Preparation Time (hrs.)      5         12       8         0         <1
Severe Interferents          ?         ?        ?         MANY      Cl2
Reference                    1         2        3         4         5

Fluor. - Fluorimetry
GC - Gas Chromatography with Electron Capture Detection
GFAA - Graphite Furnace-Atomic Absorption Spectrometry
HAA - Hydride Generation-Atomic Absorption Spectrometry
ICAP - Inductively Coupled Argon Plasma

-------
                             References

1.  Goulden et al., Anal. Chem., 53, 2027-2020, 1981.
2.  Measures and Burton, Anal. Chim. Acta, 120, 177-186, 1980.
3.  Takayanagi and Wong, Anal. Chim. Acta, 148, 263-269, 1983.
4.  Kunselman and Huff, At. Absorpt. Newslett., 15, 29-32, 1976.
5.  Cutter, Anal. Chim. Acta, 98, 59-66, 1978, and 149, 391-394, 1983;
    Cutter, Science, 217, 829-831, 1982.

-------
                               Selenium

   Water Sample --> 0.4 um polycarbonate filter

      Filter (freeze up to 6 months) --> particulate ΣSe, Se(IV), Se(IV + VI)

      Filtered Sample:
         4 M HCl + NaBH4 + sulfanilamide --> Se(IV)

         Preserved Sample (pH 2 HCl, borosilicate glass, up to 6 months):
            4 M HCl, heat 60 minutes, + sulfanilamide --> Se(IV + VI)

      Se(VI) = Se(IV + VI) - Se(IV)
      Se(-II + 0) = ΣSe - Se(IV + VI)

-------
          Analytical Techniques for Arsenic Determinations

Parameter                    GFAA      HICAP    HPLC       HAA      HGC
Detection Limit (ng/L)       900.0     800.0    13,000.0   1.0      0.8
Sample Volume (mL)           0.1       5.0      0.1        50.0     50.0
Precision (RSD)              15%       ?        8%         9%       3%
ΣAs                          Y         Y        N          Y        Y
As(III)                      N         N        Y          Y        Y
As(V)                        N         N        Y          Y        Y
Methyl As                    N         N        Y          Y        N
Part. Speciation             N         N        N          Y        Y
Simultaneous Sb              N         Y        Y          N        Y
Analysis Time (min.)         2.0       2.0      27.0       10.0     12.0
Preparation Time (hrs.)      0         0        0          0        0
Severe Interferents          MANY      ?        -          -        -
Reference                    1         2        3          4        5

      HGC - Hydride Generation-Gas Chromatography with Photoionization Detection
      GFAA - Graphite Furnace-Atomic Absorption Spectrometry
      HAA - Hydride Generation-Atomic Absorption Spectrometry
      HICAP - Hydride Generation-Inductively Coupled Argon Plasma
      HPLC - High Performance Liquid Chromatography-ICAP, GFAA, etc. Detectors

-------
                          References

1.  Walsh et al., Anal. Chem., 48, 820-823, 1976.
2.  Thompson et al., Analyst, 103, 568-579, 1978.
3.  Irgolic and Stockton, Mar. Chem., 22, 265-278, 1987.
4.  Andreae, Anal. Chem., 49, 820-823, 1977.
5.  Cutter et al., Anal. Chem., 63, 1138-1142, 1991.

-------
                    Arsenic and Antimony

   Water Sample --> 0.4 um polycarbonate filter

      Filter (freeze up to 6 months) --> particulate As, Sb

      Filtered Sample (Teflon, refrigerated, < 24 hours):
         Tris buffer (pH 6) + NaBH4 --> As, Sb (III)
         KI + HCl + NaBH4 --> As, Sb (III + V)

      Stored Sample (pH 2 HCl, polyethylene, up to 6 months):
         HAA with KI + HCl + NaBH4 --> methyl As or Sb

      As, Sb (V) = As, Sb (III + V) - As, Sb (III)

-------
                                     MR. TELLIARD: Our next speaker is Russ Flegal,
Professor of Toxicology at the University of California at Santa Cruz and visiting scientist at
Lawrence Livermore National Laboratories.

      Russ  is going  to  be speaking on adaptation of ultra-clean techniques  for an
environmental monitoring program for establishing site-specific water quality criteria for San
Francisco Harbor.

      Russ?

(Verbatim Transcript)

     ADAPTATION OF  ULTRA-CLEAN TECHNIQUES FOR AN ENVIRONMENTAL
           MONITORING PROGRAM AND ESTABLISHING SITE-SPECIFIC
               WATER QUALITY CRITERIA IN SAN FRANCISCO BAY
                                     MR. FLEGAL:  I would like to thank Dale Rushneck
whom I  met at the EPA workshop on developing new methodologies for inviting me, and
I certainly thank all of you for sitting around to listen to me.

      I  have a couple of comments prior to my talk in response to some of the questions
raised in the previous talks, and my comments are also consistent with those brought up by
Greg Cutter in his talk just a minute ago.

      We use conventional polyethylene or low density polyethylene sampling bottles. For
several metals, they are actually lower in their trace element concentration than the teflon
bottles,  and the obvious acknowledged exception is for mercury.

      We reuse the bottles, because every time they are used, they come cleaner, because
the concentrations of trace metals in water are so low that actually by sampling the water,
if it is filtered water, you are actually cleaning the bottles before you use it the next time.

      We number each bottle on the side and on the cap so that if we get an outlier, we
can go back and we can trace and determine whether that bottle is actually the source of
contamination  for that sample.

      I  am going to be presenting data from several studies but concluding with the data
for San Francisco Bay, and this is in response to the questions on instrumentation.

      We use a 15-year-old Perkin-Elmer 5000 which the California Department of Fish
and Game surplused 10 years ago so that they could do better analyses at the ppm level
for trace metals in organisms.
                                       355

-------
      Then, in terms of the methodology, I was not able to come here Monday, because
I was being deposed in a class action suit on lead contamination in lead crystal beakers or
glasses,  and the attorneys representing these manufacturers are going to great lengths to
invalidate my data.  They have gone to extensive questioning for four hours and fifteen
minutes on why I did not use the certified acetic acid leach procedure to measure the lead
concentrations coming out of the lead crystal.

      I  told the lawyers that I did not know anyone that went out and bought lead crystal
so they  could drink acetic acid out of it and that the method that they were using was
certified in 1973, it had questionable value then, and it has no scientific validity now.

      If you look at Greg's slides and listen to his talk, the  important points are we do not
care what instrument is used.  We do not care what method  is used. We only care if the
data is accurate.

      In fact, historically, the way the initial accuracy of these samples was demonstrated
was by using intercalibrations with independent methods and different instruments.

      My co-author is Mike Carlin who provided the support of our work in San Francisco
Bay with the San Francisco Region of the California Regional Water Quality Control Board,
and I have several people that I would like to acknowledge: Ken Bruland, John Donat, who
is now at Old Dominion, Kathy Lau, who did the speciation studies on copper in San
Francisco Bay, and Hunt & Associates who did studies on the bioavailability of copper in
San Francisco Bay.

      As I indicated, prior to Shier Berman developing these standard reference materials
which, in the US  of A, Shier, we refer to as SRMs, the only way we could demonstrate
whether  the data that we were  reporting was accurate or  reasonably  accurate was to
determine whether or not the data  actually exhibited biogeochemical  consistency and
whether  or not the data  that we generated could be reproduced by someone  using a
different instrument and a different analytical methodology.

      These are the very first measurements of lead in the oceans.  The one on the far right
was the Schaule and Patterson reference that Berman reported in 1981.  The subsequent ones
were the ones that we made in the central and south Pacific and reported in 1983.

      Now, at this time, there had been no  previous data, so the only thing we could do
was we could look at the data and determine whether or not it was reasonable.  If you look
at it, these are profiles of concentration going down to 4 km in depth.  So, they would
go out to about 35 ng/kg in the Atlantic and to about 17 in the Pacific, and then they would
drop down to values of 0.08 ng/kg at depth in the Pacific and higher levels of about 3 ng/kg
in the Atlantic.
                                       356

-------
      What we were able to do, though, was we were able to look at the fluxes of lead in
the sediments during the Pleistocene period and calculate what the lead flux was, what the
lead residence time in the ocean was, and then what the contemporary eolian input of
industrial lead was, so the data were biogeochemically consistent.

      This was  how  we came  upon the idea that possibly we were using the right
procedures.  These analyses have since been corroborated by numerous researchers using
multiple methodologies and multiple instrumentations.

      Now, the common criticism of this work was that it was too expensive and too
difficult.  That certainly was the case.

      The first samples we collected in the middle of the Pacific Ocean, we would get in
a rubber raft, and we would row at least 5 kilometers  away from the ship, and then we
would take the person who had, theoretically, the clean hands, and they would put on the
long plastic gloves that vets use to  inspect horses, and  then they would put other plastic
gloves that had been cleaned  in acid on  top of those.

      Then, the person with the dirty hands would pour subboiling quartz distilled  acid
over that individual, and while he lay over the bow of the boat,  the one with the dirty hands
would row the boat forward.

      This was not an especially efficient way to collect samples, but it did provide the first
accurate measurements of trace metals in the Pacific Ocean and the Atlantic Ocean.

      I do not have a picture of the first generation deep water profiler. This is the second
generation. The  first generation was about the size of a German tank and had a few more
moving parts, but we could take...the insides were made out of teflon, and we were literally
only able to take a single sample out of each unit, and it cost about $10,000, and each unit
inside cost about $10,000, so it cost us 6 weeks at sea and hundreds of thousands of dollars
to make those initial samples that you saw.

      The samples were then  brought aboard the ship and processed in a trace metal clean
laboratory that had been loaded onto the ship.  The entire insides of these labs, because
we are looking at metals, not organics, are metal free.  They are HEPA filtered.  They
have an entry room, and the  water is passed into it with purified nitrogen gas and then
analyzed.

      The initial analyses were done by thermal ionization mass spectrometry using
isotope dilution.  This is the mass spectrometer at Cal Tech which was initially built by hand
in the 1950s, and we got so that we could actually run a sample a week on it.

      That is no longer the  case.  In San  Francisco  Bay now,  you can never row 5
kilometers away  from the ship without hitting land, so what we have is a teflon  peristaltic


                                       357

-------
pumping system where we extend the sampling hand over one side of the ship, process it
through a peristaltic filter, and it is collected, after going through this cartridge that everyone
is talking about, right here into the acid cleaned bottles.

      We  have determined that  that has been  sufficient.   We have never found any
substantial  sources of contamination from the sampling collection in  this  procedure.

      Again, as I said,  the only way we can demonstrate the accuracy of our data to the
level that we are satisfied is by independent intercalibrations using different methodologies
and instrumentation.  In this case, these are replicate samples from our group and then Ken
Bruland and John Donat.  John is now at Old Dominion with Greg.

      I do not have the standard deviations, but they are about 0.05 on each  of those
analyses.   So,  these are  not  replicate  samples.  These  are duplicate samples taken
concurrently rather than splits of bottles.

      When we finished the Pacific Ocean stuff, I was very interested in working in San
Francisco Bay.  This was the time that Greg Cutter was actually looking  at the selenium
problem in the  Central  Valley and the possibility of elevated levels of selenium in the San
Francisco Bay area.

      He told the groups that he was working with that it would be most appropriate to
have complementary trace metal data so that we could actually look at the biogeochemical
cycles of selenium and  normalized trace metals as one way of determining whether or not
his data was biogeochemically consistent.

      We intercalibrated with the groups that were doing the trace metal measurements at
that time.  We took duplicate samples...I mean, a split of a sample, so they are replicate
aliquots, and in that intercalibration, our  measurements of trace metals were 10 or 100 or
1000 or 10,000 times lower than that of the  other laboratory.

      So, we were encouraged to write a small grant to do these analyses, and then it was
determined that our analyses were too expensive, so they continued to fund the other group
that generated different data.

      The  other criticisms that we encountered were that our data were unnecessary. You
did  not  need trace metal clean techniques.  They were inaccurate. We were getting low
values because  we were missing the metals.  They invalidated existing water data,  and they
invalidated bioassays, because those had not been used with trace metal clean techniques
or accurate measurements of the trace metal  concentrations that they were measuring the
bioassays with. So, they also invalidated water quality criteria.

      My  favorite criticism of our proposed analyses in San  Francisco  Bay was by an
anonymous administrator who is now retired who stated  that so what, she did not care if


                                       358

-------
our data were...if the data that they were generating were inaccurate; she only cared that
they were reproducible.

      So, we bagged San Francisco Bay, and about the same time, a friend of mine, Jerome
Nriagu, who was then in Canada, asked me if I would be interested in looking at the trace
metal concentrations of lead in the Great Lakes, because he had some concerns about the
data that had been published.

      This is the data that had been published in 1987, values of -3 to 416 ng/L. Now,
conclusions, if you look at that data, you come to some very straightforward conclusions.

      The Great Lakes represent the ultimate solution to the lead problem.  They contain
anti-lead,  so we can throw  batteries  in  them and bring them up to  neutral.   The
concentration of lead in the Great Lakes varied by 1000 to 100,000-fold,  in contrast to the
two orders of magnitude we saw throughout the world's oceans.  In fact, the concentrations
of lead in  the Great Lakes were, depending on which data you used, if -26 actually counts
as minus two orders of magnitude or not, the values ranged from either 10 to 10,000 times
the concentrations  of lead in the world's oceans.

      When we went and measured the trace metal concentrations of lead in the Great
Lakes using the same methodology we had used in the oceans, we found that, in fact, in the
middle of Lake Ontario right here, you had concentrations...this is in picomoles, but that is
about 2 ng/L which is as low  as the middle of the north Pacific Ocean.

      Similarly, the data was geochemically consistent. You had low concentrations in the
middle of the  Great Lakes  where you have  high  levels  of primary  productivity  that
scavenged lead out of solution, and you have elevated levels of lead near primary industrial
sources in the Great Lakes.

      We also looked at isotopic composition of that lead, and we can actually fingerprint
the lead so that we can show the lead in this area was coming from the industrial lead used
by the Canadians, and the lead in the lower reaches of the Great Lakes was consistent with
the industrial lead used by the United States. The values in the intermediate levels of Lake
Erie then represent a confluence of industrial aerosols  from the United States and Canada.

      Now, this is another plot of the data that Shier Berman showed you before with the
reported baseline concentrations of lead in the Great Lakes going from 1965 through 1989.

      I initially tried to put this on a scale, but that does not work, because we go through
three orders of magnitude. So, if you see this, you do not see this.

      Oh, excuse  me.  It is important to note that at the same time that we were making
these measurements of less than 2 ng/L in Lake Erie, a publication  was picked  up and
published nationally and carried by the wire services about the dramatic decreases of lead


                                       359

-------
concentrations in  U.S. waters.  Unfortunately, the lead blank in those measurements was
1000 times higher than the concentrations of lead in the middle of Lake Erie.

      This is a recent report, a summary of the data we have for silver in San Francisco
Bay. Again, it is in picomolar, but the importance of this slide is to familiarize you with San
Francisco Bay and to illustrate some of the problems we have.

      San Francisco Bay, the estuary, consists of a positive estuary with the confluence of
the Sacramento and  San Joaquin Rivers here that drain,  I believe, 85 percent of the
watershed of California and go out through the Golden Gate.  The South Bay, then, is a
negative estuary.  Its primary sources of fresh water are the San Jose sewage treatment plant
and the Santa Clara sewage treatment plant.

      San Francisco Bay is also referred to as the urban estuary in contrast with what you
see out the window here. The only vegetation you see in San Francisco  is in the fern bars.

      When we looked at copper concentrations, what we found, by station, was elevated
levels in  the South Bay,  low  levels at the  Golden  Gate which  were consistent  with
oceanographic data, and intermediate levels at the confluence of the Sacramento and San
Joaquin Rivers.  So, this, again,  was very consistent with what I  showed you previously for
the distributions of silver.  It was also reproducible.

      Now, the thing with copper and most metals is that they are surface reactive which
means that they tend to be removed from the water column just as you remove trace metals
from a sewage treatment plant.  You have both a geochemical sink where the copper would
be adsorbed onto  particles coming down the rivers and then a biogeochemical sink further
on in the estuary  where the copper is adsorbed onto phytoplankton produced within the
bay.

      Unfortunately, you cannot see the x axis, but it is plotted against salinity. Again, this
is because we always normalize our data to biogeochemical parameters to see if it makes
sense.

      So, again, this is a plot of copper with salinity in two cases.  This is the Sacramento-
San Joaquin River coming down.  Instead of a sink, we see a  small source,  intermediate
source, within the estuary, and then the low levels you  see at the Golden Gate Bridge or
the open ocean.

      Conversely, if you go to the South Bay, the copper concentrations go up off scale,
much higher.  So, in fact, these levels  seasonally exceed the water quality criteria for
copper.

      By seasonality, here are three plots of copper. This is the dissolved copper at the top,
total copper at the bottom, and similar plots from the Sacramento-San Joaquin to the Golden


                                       360

-------
Gate. The white represents what we see in the South Bay, a negative estuary, where the
wastewater discharge is the primary source of fresh water to the system.

      It is highest during the low flow periods and lowest during the high flow periods.
That is high  flow out of the rivers, not the treatment plants.

      The other thing is that our samples were taken during the protracted drought, so what
we were able to do is we were able to plot the hydrographic data of the  flushing of the
system with  what we saw in the copper, and it was consistent.  The copper  concentrations
built up in the South Bay when the flushing out of the North Bay was lowest.

      Because we are able to accurately measure the metals, we can actually calculate what
the residence time of the metals in the water column is.  This illustrates, again, the problem
with the South Bay.

      Residence times of silver and lead in the South  Bay are only 13 days.  That is the
residence time of the metals in the dissolved  phase, whereas the hydraulic  residence time
in the South  Bay is four months during the high flow discharge periods and up to six months
and actually infinity during the low flow periods.
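
      The calculation itself is just the standard inventory-over-flux definition of residence
time; a sketch with purely hypothetical numbers (not the San Francisco Bay figures) looks
like this:

    # Residence time = standing stock of the dissolved metal divided by the rate
    # at which it is removed from (or supplied to) the dissolved pool.
    # The numbers below are hypothetical, for illustration only.
    inventory_kg = 130.0            # dissolved metal standing stock in the embayment
    removal_kg_per_day = 10.0       # combined removal rate (scavenging plus export)

    residence_time_days = inventory_kg / removal_kg_per_day
    print(f"dissolved-phase residence time ~ {residence_time_days:.0f} days")   # ~13 days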

      So, it is a real regulators' problem, you know: how do you get these metals out of
the bay when they stay in the South Bay essentially infinitely compared to the
water?

      The other thing we can do, then, with Mike Carlin  is that we can calculate what the
relative loadings of the South  Bay are by source.  Because we can actually measure what
is in the  river, we can figure out that the river loading is much smaller than previously
reported, and the storm  water loading is much higher than previously believed, and the
municipal and  industrial dischargers which have  historically been  blamed for all the
elevated levels of copper in the South Bay actually represent a distant second to what is
coming in from the storm water.

      The other thing pointed out by this is that the water quality criteria for copper in
drinking water is...what is that, almost three orders of magnitude greater than it is for the
invertebrates in the South Bay, and that is, of course, because copper is relatively non-toxic
to humans and is extremely toxic to some phytoplankton and some invertebrates.

      So, what they are doing now is that they  are using copper in the  drinking  water
systems of the Bay Area to keep the water clean of the phytoplankton  in it,  but in order to
reduce the copper loadings to the  South  Bay, they have realized that they can  actually
decrease the amount of copper put into the drinking water systems of the bay, and then that
would decrease the amount of copper coming out of the effluent.
                                       361

-------
      Again, here is the water quality criteria, the concentrations that we measure by
station relative to the existing water quality criteria.

      But we told them it does not  matter what your existing water quality criteria is,
because the concern you have  with  copper is the free copper.  That is copper that is
biologically available to the organisms.

      There have been studies by Ken Bruland that showed essentially 95 to 99 percent of
the copper in the ocean  is so tightly bound  to  organics that  it is not available to the
phytoplankton at any immediate term.

      Now, this is my failure to show the distribution of copper in aquatic systems,  but,
literally, you have inorganic complexes that are not available, you have free copper  ions
which are biologically available, and you have organic complexes of copper which are
generally not that available based on sundry studies.

      So, the plankton and filter feeders are getting copper only as the free ions, and they
represent a relatively, we believe, relatively small  fraction of the total dissolved copper in
the water column.

      Ken Bruland looked at the speciation of copper with John Donat using two different
methodologies. The  labile or relatively free copper ranged from 4 to 25 percent, depending
on what definition you used and what methodology you used, but substantially less than the
100 percent that had been used in the water quality criteria.

      So, new water quality criteria are being used based  on bioassays with the endemic
species, Mytilus californianus, and those toxicity studies were performed with trace metal clean
techniques where we actually measured the concentrations of copper in the bioassays using
those same techniques.

      In summary today, ultra-clean analyses are not too expensive.  They are scientifically
defensible.  They follow the Clean Water Act guidance, and they stand up in court.

      In conclusion, ultra-clean techniques are required for scientifically defensible water
quality management.

      Thank you.
(Slides and overhead transparencies for this presentation were not available at the time of
publication.)
                                        362

-------
                                     MR. TELLIARD:  Our next speaker is Nicolas
Bloom.  Nick was here last year to talk about one of his favorite subjects, mercury.  Nick
is the Senior Research Scientist and Vice President of Frontier Geosciences, and he is going
to ask the ever-abiding question, can mercury be routinely monitored at the parts per trillion
level?
                   CAN Hg BE ROUTINELY MONITORED AT THE
                           PARTS PER TRILLION LEVEL
                                      MR. BLOOM: I was here last year talking about
ultra-clean methods as they applied to mercury,  and I believe at that time  I voiced some
pessimism as to whether the part-per-trillion level could be realized in a routine monitoring
capacity based on the amount of effort and incentive that one has to have  to apply clean
techniques correctly.

      Since then, I have been working on a project with the Central Valley Regional Water
Quality Control Board in Sacramento, trying to do just that; monitor the rivers through the
City of Sacramento at ambient levels, and  I believe we have met with a large  degree of
success which has caused me to revise my pessimism, and I now believe that it is quite
possible to do this, given the incentive on the part  of the people who are collecting the
data.

      My co-workers on this project are Eva Butler from Brown and Caldwell, and Val
Conner from the Central Valley Water Quality  Control Board.  Both of them are field
workers for their respective organizations and have never had any past experience with trace
metal  work before embarking on this project.  I have never met either of them,  and all of
the information concerning techniques was communicated over the telephone.

      Before I  go on,  I might want to just ask the question why  is it  that  we want to
measure at ambient levels? Ambient mercury levels are typically in  the range of 0.5 to 5
ppt in water which is several orders  of magnitude lower  than  the  current standard
methodology and at least on a par or even lower than some of the  newer methodologies
that have been suggested, for example, the modified EPA method suggested by  Dr. Potter
earlier.

      It turns out that even  in very contaminated systems, mercury in the water is quite
low.  In a system that may have naturally had 1  ppt of mercury, elevation to  2  ppt could
represent a major environmental degradation.

      Unlike most metals, I believe evidence is now starting to build that mercury may be
exhibiting toxicity to organisms at  ambient levels, and  the ambient  levels  today are
                                       363

-------
enhanced by about a factor of 4 over pre-anthropogenic times, and that factor of 4 may be
enough to be pushing us today to chronic toxicity.

      These studies have been  wide-ranging.  They have included studies on  fishery
production  by Dr. Jim Wiener at the National  Biological  Survey.  They have included
human effects on fish eating populations in the Seychelles conducted through the World
Health Organization and Dr. Clarkson at the University of Rochester Medical School. They
have also included field correlational evidence linking mercury levels to degradation of loon
reproduction in the Midwest and the death of the Florida panthers in the Everglades.

      In all of these cases,  the  water bodies being  measured contained total mercury
concentrations of less than 3 ng/L of total mercury.

      I should also note before going on that this talk is going to be about total mercury.
It is likely that really what needs to be measured at ambient levels to  get at some of these
issues is  methylmercury. Dr. Cutter discussed speciation for arsenic and selenium, and
much the same  is true for mercury as well.  The  biologically active species for mercury is
methylmercury, and it is much more strongly correlated to biota concentrations than is total
mercury.

      When going out to measure mercury today, the only methods sensitive enough to
even begin  to come close to ambient mercury concentrations are based on the cold vapor
technique where mercury  is volatilized from the sample as elemental mercury and  then
detected by some form of atomic spectroscopy.

      I have listed here several  methods and  their  relative standard deviations.  The
technique that we have been using involves gold amalgamation. We are using an atomic
fluorescence detector currently. Previously, we  had used an atomic absorption detector.

      The  detection  limit based on 2 standard deviations of the blank noise is about 0.05
to 0.15 ng/L.  That has not changed in about 15 years.  I want to emphasize that these
methods  have been used for quite some time since they were developed mainly in Bill
Fitzgerald's lab at the University of Connecticut.
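
      A minimal sketch of that blank-based detection limit (the blank values below are
invented; only the two-standard-deviations convention comes from the talk):

    import statistics

    # Detection limit taken as 2 standard deviations of replicate method blanks.
    blanks_ng_per_L = [0.10, 0.16, 0.12, 0.08, 0.15, 0.09, 0.13]   # hypothetical
    detection_limit = 2 * statistics.stdev(blanks_ng_per_L)        # sample std. dev.
    print(f"method detection limit ~ {detection_limit:.2f} ng/L")  # ~0.06 with these values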

      The  difference between the use of AFS and the use of atomic absorption in this case
is  largely a function of how big of a sample volume you process, but in both cases, the
detection limits are limited by  mercury in the blanks.

      Another  technique  that has been used,  and this is  similar to the one that was
described earlier today, is to directly  purge the sample into an AFS or  an AAS detector
without preconcentration on gold. This gives you detection limits ranging from, for AA
using the standard EPA method, about 200 ng/L down to somewhere in the range of 0.2 to
1 ng/L using an atomic fluorescence detector.
                                       364

-------
      Then there is this somewhat unique method developed by Gary Glass for the EPA
which  uses AA  but  recirculates the gas stream through the bubbler, collecting all the
mercury, ultimately, into the analyzer, therefore eliminating the dilution effect of the purge
from the water.  We have actually intercalibrated with his laboratory,  and he has an
effective detection limit of about 0.3 ng/L.

      I should note that these methods that give you detection  limits of, say,  0.3 or so,
imply that the limit of quantitation is  maybe 1 to 2 ng/L, which  is too high to  accurately
measure mercury  at ambient levels.   Especially if you are going  to want to  measure
methylmercury, which exists at about 1  to 20 percent of the total mercury concentration,
then really the only method available currently involves  a preconcentration and AFS
detection.

      The method we use, just very briefly, is that the sample is oxidized with  bromine
monochloride,  which is  essentially the same reagent that was  discussed in the earlier
presentation, and then it  is pre-reduced  with hydroxylamine hydrochloride.

      The mercury is reduced to elemental mercury with stannous chloride and purged
onto a gold trap.  The mercury is collected by amalgamation, and then the trap is placed
in line with a second smaller trap, and by thermal desorption, the mercury is removed from
this trap onto the second trap.  From the second trap, a similar step passes it into the atomic
fluorescence  detector.

      Through the combination  of  these  steps and  the atomic fluorescence detection
method, the method is virtually interference free, and all of the typical  interferences that you
have with the direct flow systems related to chlorine or hydrocarbons and  so forth are
avoided.

      In addition, an interference that is not thought of too often but is quite  real in the
direct purge technologies is that the peak shape of the mercury being purged from the liquid
varies as a function of the matrix. Very often, in techniques where a direct purge technique
is used, it is necessary to  use a method of standard additions on samples of different types
to account for the different elution characteristics.

      Using this method, since we purge an  excessive amount of time and then have all
of the mercury released from essentially  the same matrix, all of the interferences related to
things such as iodide and other trace metals and so forth are completely eliminated.

      This is a diagram  of a  sampling  system  that was used by the California Regional
Water Quality people in their studies.  It was in place before we started the project. They
had routinely been measuring other trace metals using this sampling equipment and, in  a
couple studies before ours, had attempted to measure mercury in the same system.
                                        365

-------
      We were actually called  in after they got some bizarre results where, in one year,
they had very high and variable numbers from all of their sites, and then the next year, they
switched laboratories and got less than detects from all of their sites, and they were trying
to figure out whether those two data sets were consistent and, if not, where the problems
were.

      This sampling system just consists  of a polyethylene box with  an acrylic window.
It is a glove box.  These boxes are located at the sites along the rivers where samples are
to be collected, and they have rigid PVC tubing going out into the lake or into the river, and
each time they collect samples, they string a new piece of acid-cleaned PVC tubing out into
the water and sample that through a peristaltic pump.

      Early in the study, they filtered it in this chamber. Since then, we have filtered in the
laboratory.

      They then collect these bottles, these samples, into bottles of a type appropriate for
the metals that they are studying.

      Prior to our participation in the study, we had nothing  to do with the development
of this method, so our role was largely to assess how well this was working, and we will
have some  data to indicate how well it works so far and what might be done to improve
matters as time goes along.
      This is a photograph of the system in operation.
      As part of our study, we felt it was quite necessary to have a lot of QA, both in the
field and in the lab, and I will be reporting some of that information.

      I  want to note that this project was largely performed by people of relatively low
level of  technical expertise.  The sampling crews at the consulting firms had no expertise
in trace metal  work at all, and  the analytical technician  at our laboratory  essentially
analyzed these samples blind and relied purely on QA measures of this type to determine
whether the data quality was good or not.

      Later, actually much  later, I was given a key as to the sample and QA locations from
the people in California, and we went back and interpreted  the results geochemically.

      So, within this, we had method blanks.  We run at  least three per sample batch.
Filtering blanks:  since all the samples are filtered in our laboratory using a disposable
polycarbonate filtering device similar to the ones used for sterilizing blood serum, for each
sample batch we measured the blank by passing Milli-Q water through that filtering
device.
                                        366

-------
      Spike recoveries on all batches.  Duplicates, both lab duplicates and field duplicates.  Blank spikes
were done once so far.  We are not reporting those, because, apparently, there was an error
on the part of the people in the field in  spiking, and they spiked the samples with 100 times
more  mercury than they desired to, and it made that study a bit meaningless.

      Then we have done some specific studies on the field equipment and on the interlab
comparisons to verify the accuracy of  the method.

      One of the things that is really difficult with mercury is that at ambient levels, there
is no certified standard for mercury in water that is close to ambient concentrations. So, the
only real test of accuracy that you have  is intercomparison  with other reputable laboratories.

      Unfortunately, virtually all of those other laboratories use the same methodology that
we are  reporting  here.  So, that  intercomparison does  serve to compare standards and
technician skills and so forth, but it does not necessarily get at the root question of whether
this method is inherently accurate.

      To start off when I was asked to begin the project or, actually, before I was asked to
begin the  project, I was called  about sample containers.  As you  recall, I said the very first
data set was extremely high and  variable, and they went to another  lab, and the  values
dropped to less than detects in the second lab.

      One of the differences between the use of that first lab and the second lab was that
the first lab was using I-Chem polyethylene bottles for collecting the samples, and the
second  lab was using acid-cleaned teflon bottles.   It had  been  reported  long ago by
Robertson and Bothner that polyethylene bottles did, indeed, allow mercury to pass through
in the vapor phase and contaminate the samples, although that work was done at very high
room  concentrations of mercury.

      So, one of the first things that we did was actually try to replicate that work in our
clean  room to give a more rigorous test as to how well  polyethylene bottles were suited to
mercury work. We used some of these I-Chem type polyethylene bottles and rigorously
acid cleaned them, filled them with natural lake water, and we used teflon bottles as well.

      Then I kept them in our clean room bench where the average mercury concentration
in the air was known to be about  4 ng/m3 and monitored the concentration of mercury in
these  bottles over two months.  There is actually another data point out here.

      This line is the polyethylene bottle, indicating a strong increase with time, even at very
low atmospheric mercury concentrations, from about 1 ng/L at the initial time up to 15 ng/L.
After 60 days, it was up to 24, and we noted  a much smaller but statistically significant
increase in methylmercury in this  sample as well.

      These are actually replicates of...these are means of four replicates.


                                        367

-------
      On this bottom line right here are presented the results of the same sample stored
in teflon bottles with and without acidification.  In those cases, the results are essentially
identical over three months with no change whether the samples are acidified or not.

      So, the first conclusion was that yes, probably the first data set was compromised by
the use  of polyethylene bottles which I  would assert should never be used  for mercury
sampling. They will invariably give you bad results.

      Because we wanted to avoid the chance of contamination of the samples in the field
by inexperienced technicians adding acid to preserve them and filtering them in the field
and so forth, we decided to incorporate a procedure where the samples would be sent by
overnight mail at 4 degrees C  in teflon  bottles and filtered on the day of arrival in our
laboratory in  the clean room.

      We had used this method on other projects previously, and, anecdotally, it seemed
to work quite well  for oxic waters, although we had never really proved that in a rigorous
way.

      So, for the purpose of this project, we ran some tests on two different kinds of water,
Duwamish River water and Lake Union water...these are both western Washington waters,
because in order to get t = 0, we had to be close to the source, so we were using water
from close to our lab.

      Duwamish River water contained a lot of tertiary sewage effluent, and Lake Union
water is quite oligotrophic, although even this water is impacted by boating and shipping
activities.

      What we did, we had already established that total mercury was preserved  in these
samples for a long time period, so we looked at just the dissolved fraction over a period of
about six days in order to ensure that the dissolved-to-total ratio would remain similar in the
24-hour time involved in shipping the samples from California to our laboratory.

      For each of these two waters, we  have time zero and 18 hours, 36  hours, and 168
hours. These are the mean of four replicates of each of the dissolved species.

      The dissolved species are, in these samples, running about 0.5 ng/L which is quite
common for mercury in water that contains about 4 or 5.  So, that is 10 percent dissolved
which is pretty typical for mercury in water.

      Although the noise is rather high  in this scale, given how low these concentrations
are, essentially, this data  indicates that, for the purposes of our project, filtering in the
laboratory gives us sufficient data quality to meet the needs of the project.  In the actual
project data that I will show you later, typical total numbers are from 2 to 20 ng/L and dissolved
values are from 0.5 to 4 or so.
                                        368

-------
      I should note a caveat in this.  This, again, seems to work rather well for natural oxic
waters. We got a little too cocky once and applied this to groundwater samples, and it was
a disaster. All of the mercury went to the walls of these groundwater samples in 24 hours,
as did a lot of the manganese and iron.

      We have since repeated that study by going to the source of the well and collecting
dissolved samples at the wellhead, and in that case, the mercury in those samples was 100
percent dissolved at the wellhead, whereas in the shipped samples greater than 90 percent ended up on the walls.

      So, this kind of methodology may, in  fact, apply well to oxic surface waters, but I do
not think it can be universally applied.

      I have here now a couple quality control charts for this project, and I bring them up,
in part, to show the level of precision and accuracy that we are having in  our  project and
also to indicate the importance of keeping this kind of information.

      These are the method blanks for these two projects combined with another project
on which we use the same QA protocols, the National Mercury Deposition Program,
over a five-month period.  These are the method blanks from the bromine monochloride and
purging and so forth, and about right here, about where it says 20, this is a different batch
of reagents, and it is slightly cleaner.

      As you can see, excluding this bit of stuff right here, these values are quite tightly
constrained and enable us to have very low laboratory detection limits.
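
      The sketch below is not part of the talk; it is a minimal illustration, assuming a simple
3-times-the-standard-deviation definition of the detection limit, of how tightly constrained
method blanks translate into a low laboratory detection limit.  The blank values are
hypothetical, chosen only to be roughly consistent with the reported mean (about 0.14 ng/L)
and standard deviation (about 0.04 ng/L).

    # Minimal sketch (hypothetical blank values, not the laboratory's actual data):
    # estimate a 3-sigma detection limit from a set of method blank results.
    import statistics

    blanks_ng_per_L = [0.10, 0.14, 0.20, 0.11, 0.17, 0.12, 0.18, 0.10]  # hypothetical

    mean_blank = statistics.mean(blanks_ng_per_L)
    sd_blank = statistics.stdev(blanks_ng_per_L)   # sample standard deviation
    detection_limit = 3 * sd_blank                 # "3 s" detection limit convention

    print(f"mean blank = {mean_blank:.2f} ng/L")
    print(f"detection limit (3 x SD) = {detection_limit:.2f} ng/L")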

      What is perhaps of more interest is what happened  right there, and it turns out that
this event is correlated with a time when samples were sent to our lab that we had believed
contained about 100 ppm mercury in sediments from a contaminated sewage treatment plant,
and they actually contained 10 percent mercury.

      They were put in a drying oven and dried four rooms down from the analytical lab
where we do this work.   It raised the room air from about 10 to 2500 ng/m3 during the
course of that event, and it shut down our lab for over a month.

      These data were collected after that, not during the event.   We  were not crazy
enough to actually try to analyze samples during the event, but these data were collected
when we thought that we  were clean enough  to start back up routine operations,  and this
is a two-week period after room air concentrations had come down to about 40 ng/m3 and
we  thought we were safe, but we were not. There  is a lesson here.

      One lesson is always to measure your room air for mercury, because you never know
when somebody dropped  a thermometer in the room next door.
                                       369

-------
      This is a similar set of data for mercury spike recoveries during this
project. These spikes are at 8 ng/L, and they are matrix spikes, and they show the mean,
as you can see, is 99.6 plus or minus 6.3 percent over the entire range.

      Interestingly, there is this excursion right  here. That corresponds to the same time
period that we had contamination due to the other system, and there is actually a reason
why recoveries are low during the same time period that the contamination was high, and
it has to do with when we ran the samples compared to when we ran the spike recoveries,
which were separated by several days, and the level of contamination in the room  was
higher on the day that we ran the samples than on the day that we ran the spike recoveries.
Therefore, subtracting the two resulted in a dip  in the recovery.

      So, this kind  of event where you have  mercury contamination in your  air can
compromise your data in many ways, both in terms of blanks which affect your detection
limit and also in terms of recovery which, if you were to apply a correction factor using that
information, would be considerably in error overestimating the mercury in the samples.
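
      The following is a minimal numerical sketch, not taken from the project data, of the bias
mechanism described above; all of the concentrations are hypothetical and are chosen only to
show how a contamination-depressed spike recovery, used as a correction factor, inflates the
reported result.

    # Hypothetical numbers illustrating the recovery-correction bias described above.
    true_sample = 5.0          # ng/L actually in the sample
    blank_on_sample_day = 1.0  # ng/L of lab-air contamination on the day samples ran
    blank_on_spike_day = 0.2   # ng/L of contamination on the later spike-recovery day
    spike_added = 8.0          # ng/L matrix spike level used in the project

    measured_sample = true_sample + blank_on_sample_day
    measured_spike = true_sample + spike_added + blank_on_spike_day

    # Recovery appears low because the spike was run on a cleaner day than the sample
    apparent_recovery = (measured_spike - measured_sample) / spike_added

    # "Correcting" by this depressed factor overestimates the mercury in the sample
    corrected = measured_sample / apparent_recovery
    print(f"apparent recovery = {apparent_recovery:.0%}")
    print(f"corrected result = {corrected:.1f} ng/L versus true {true_sample:.1f} ng/L")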

      As  I said, we do not have any quality assurance materials for mercury in water at
ambient concentrations, so we have to rely on participating in intercomparison exercises,
and we try to intercompare with other state-of-the-art labs often.

      The data that is shown here is actually part of an intercomparison exercise that was
sponsored by the Electric Power Research Institute, and it involved samples being sent to
22 of the most sophisticated mercury labs in the country.  Each lab was sent three bottles
of water that was collected from the surface of a quiescent lake without filtration.

      We had already had enough experience to know that, on a given day, the surface
water of lakes does not change very much, certainly far less than the analytical variability
of laboratories which makes the  collection of an  SRM for mercury in water rather easy.  All
you do is  pump a bunch of bottles full, and they all have the same concentration.

      We pumped  120 consecutive bottles and randomly assigned them to each of the
laboratories, and then there were 30 bottles retained by our laboratory to get a measure of
the variability between bottles at the beginning of the experiment and four months later at
the end of the experiment.

      That is our mean right there. This is our laboratory, 30 bottles, 3 replicates on each
bottle, for a total of 90 determinations in this lake. The value for the total mercury in this
lake  water is about 1.29 ng/L. Excluding these three outliers, the consensus value is very
similar to  the value that we obtained.

      This is practically the only kind of evidence that you can have today for verifying the
accuracy of your methods,  given the absence of standards.
                                        370

-------
      This bar chart was completed quite recently, and it is our first investigation of sources
of contamination potentially involved  with the California Water Quality Control  Board
sampling devices. Essentially, the thrust of the thing is here, this is deionized water that was
sent to them in the  field from our laboratory in acid-cleaned glass carboys, and then they
ran this water  through their sampling equipment and sent the results back to us.

      Upon analysis, these items here labeled I#1, I#2, and I#3 represent the
concentrations of mercury in water that had just passed through their 25 feet of acid-cleaned
tubing, through the peristaltic pump, into the box, and directly collected into the sample
bottles.  So, that is sort of the blank that  is  associated with this pumping system.

      Then these ones that have a C attached  to them add another feature that they were
using. They were actually taking a  10-liter integrated sample before they would collect the
sample into the bottles.  They would actually collect it into a 10-liter polyethylene carboy.

      So, these two and this one which  was performed on natural lake water rather than
on deionized water represent the entire system blank.

      Given the magnitude of the numbers that they actually have in their rivers, these
blanks are actually quite reproducible.  Without the  integrating carboy, the  blank
contribution is about 0.5 ng/L, and with the carboy included, it is about 1 ng/L.

      They, I  believe, feel that this level of contamination is acceptable and not necessarily
worth changing their procedures which seem to work well for their other metals, although
I have suggested to them that they go to acid-cleaned teflon tubing and a longer flush time,
and that could probably drop these values considerably.

      However, one of the values of this kind of information is at least they know. They
actually can quantify what the  level of contamination on their samples is,  and that puts
bounds on their accuracy and interpretation.

      This is a summary of the QA statistics over the first five months of this project. We
have reagent  blanks, the mean at  0.14 ng/L and a standard deviation  of 0.04 over four
months.  This  is, again,  I want to emphasize, without sophisticated  world class scientists
making these measurements.

      Filtering blanks,  the  mean  is 0.00  plus  or minus 0.09 using this very simple
disposable filter holder that we have used,  and we have actually checked this with other
metals, and it  seems to be quite a clean technique for other metals as well.

      Spike recoveries, the  mean  is 99.6 with a small standard deviation of 6.3.  These
spikes are at levels that are typically twice the  concentration of mercury in the sample.
                                        371

-------
      Lab duplicates, the mean variation between the two  lab duplicates is 4.9 percent.
For field duplicates, the mean variation is 13.3 percent.  That includes the effect of both
contamination and different water, perhaps, that is passing the sampling at the time they do
their reps.
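
      A minimal sketch of the duplicate statistic follows; it is not the project's code, it
assumes the "variation" is a relative percent difference between paired results, and the
duplicate pairs shown are hypothetical.

    # Hypothetical duplicate pairs; "variation" assumed to be relative percent difference.
    def relative_percent_difference(a, b):
        return 100.0 * abs(a - b) / ((a + b) / 2.0)

    lab_duplicate_pairs = [(4.1, 4.3), (0.52, 0.49), (12.0, 11.4)]   # ng/L, hypothetical
    rpds = [relative_percent_difference(a, b) for a, b in lab_duplicate_pairs]
    print("individual RPDs:", [f"{x:.1f}%" for x in rpds])
    print(f"mean RPD = {sum(rpds) / len(rpds):.1f}%")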

      Then our intercomparison exercise, we intercompared with 13 labs with a deviation
of 4 percent.

      One other thing I did not mention about all this.  Not only is this all being done by
personnel who are not well-known geochemists, but it is also being done at a price...we won
the contract on a competitive bid against the other two companies that produced the odd
numbers.  So, this is economical. This is not unrealistic, by any means.

      I have to apologize for this rather odd map of San Francisco...I mean Sacramento.
The people from the Water Quality Board sent it to me.  I  picked it up at the airport.  I
never saw it until today, and I do not know why Sacramento is blotted out there, but it says
it right there, so they are not hiding anything.

      I am going  to present some results that we have found in the first five months of the
study  not so  much to tell you a lot about Sacramento but to give you an idea of the kind
of knowledge that you can gain  by getting numbers that are actual rather than numbers that
are less than detect or random  contamination events.

      This is the City of Sacramento right here. There are two rivers, the Sacramento River
and the American River, that meet in the center of the city and then form the Sacramento
River  which comes down into San Francisco Bay.

      There are sampling points up here near Folsom Dam  upstream of the city, assumed
to be  pristine by this study, and up here on the American River, and there is a sampling
point  in the middle  of the city and several sampling points downstream.

      The last sampling points here, at Greene's Landing and at what is called RM44 (River
Mile 44), are also downstream of the sewage treatment plants for the City of Sacramento and
a large suburban community.

      In  this slide, the data from Greene's Landing are presented. The scale in mercury is
now going from zero to 20 ng/L. In the very first study they had, all  of the numbers were
less than 200 ng/L.  In the more recent study, all of their numbers were noise up here in
the 20 to 50 range,  and then in the study immediately before this one, all of their values
were  reported back at less than  2 ng/L, and none of that seemed to make very much sense.

      So, upon applying ultra-clean techniques and high levels of QA and so forth, we have
now generated a mercury profile for the river at Greene's Landing which is traced by this
curve right here, and you will  notice it has these episodic peaks. These peaks correlate


                                       372

-------
almost with an R2 of 1 with increases in total suspended matter, which is directly correlated with
river flow.

      I do not think there is anything too surprising about that, but the point that is being
made here is that you can actually see this now if you have a good data set, and that gives
you geochemical confidence that your data is really saying something and it is not randomly
distributed.

      Finally, I  have  plotted here  the  mean  and  standard  deviation  of  mercury
concentrations in sampling sites above and below the City of Sacramento. The scale here
now goes up to...our highest mean is about 7.5 ng/L, well below the quantitation levels of
most methods.  Given that the water upstream of the  city is about 2  ng/L, this 7.5 could
represent a significant degradation of the water resource in terms of impacts on  fish and,
ultimately, on people who might eat those fish, if they do eat the fish from that river.

      Clearly, there is  a  definite impact of both the city here...this is the  city center
point...and, more importantly, downstream of the sewage treatment plants on the mercury
concentration in  that river.

      I was asked to note that the people in Sacramento were surprised that the
sewage could have an impact like this; they did not have any idea why they would have
mercury in their sewage, since they do not have a large industrial base.

      It turns out that  a  large fraction of the mercury in sewage comes from human
excrement as a result of emissions of mercury from dental amalgams.  In a non-industrial
city, that may be 50 to 70 percent of the total mercury in the sewage flow.

      I thought that was common knowledge, but they said they had never heard of that,
and they wanted me to mention it.  So, there it is.

      That is it.

                                      MR. TELLIARD: Thank you, Nick.
      Any questions?  (No response.)
                                       373

-------
(Blank Page)
    374

-------
Can Mercury be Routinely Monitored at the
        Parts per Trillion (ng/L) Level?
          Nicolas S Bloom, Frontier Geosciences Inc.
              Eva Butler, Brown and Caldwell
     Val Conner, Central Valley Water Quality Control Board
                     Presented at:

        17th Annual EPA Conference on the Analysis of
               Pollutants in the Environment

                 Norfolk, VA, May 3-5, 1994
                          375

-------
            Can Mercury be Routinely Monitored at the
                    Parts per Trillion (ng/L) Level?

  Nicolas S Bloom, Frontier Geosciences, 414 Pontius North, Seattle, WA 98109
  Eva Butler, Brown and Caldwell, 916 Micron Avenue, Sacramento, CA 95827
      Val Conner, CVRWQCB, 3443 Routier Road, Sacramento, CA 95827
Recent advances in ultra-clean sample handling techniques and analytical
methods have shown that environmental levels of aquatic mercury are far lower
than previously estimated by regulatory bodies. Rivers and lakes unpolluted by
direct point source emissions are now understood to contain Hg concentrations
in the range of 1-5 ng/L.  Methylmercury generation sufficient to adversely
impact commercial and sport fish tissue concentrations may occur in waters
containing 1-5 ng/L total Hg, depending upon other ancillary parameters such
as pH, DOC, and alkalinity.  Even grossly polluted water bodies may contain
water concentrations of only 10-50 ng/L Hg, far below currently regulated levels
(0.2-2.0 µg/L).  With these findings in mind, the City and County of Sacramento
and the Central Valley Water Quality Control Board (CA) sought to monitor
rivers in the region for Hg at ambient concentrations. Through careful attention
to clean technique in field sampling, especially in the use of pre-cleaned and
tested Teflon bottles, and overnight shipping to an ultra-clean Hg laboratory for
processing and preservation, reproducibility at the ng/L level was attained with
little additional cost to the program over standard protocols. Extensive field and
laboratory QA revealed the existence of a small (< 1 ng/L) additional source of
sample contamination due to the use of non-ultra-clean sample collection
equipment, which has not yet been rectified. Also, this work reveals that the use
of HNO3 as a preservative may result in the loss of Hg from the samples by
volatilization, and that polyethylene sample bottles are unacceptable for sample
storage, as at the ng/L level, dramatic increases in Hg are seen as a function of
storage time. Ambient aqueous Hg concentrations for Sacramento Valley rivers
are observed to be in the range of 2-20 ng/L (unfiltered) and 0.5-5 ng/L (0.2 µm
filtered).
                                376

-------
                     Options for Cold Vapor Hg Detection

     method             detector      3 s DL's (ng/L)       comments

  gold amalgamation     AFS or AAS      0.05-0.15       reagent blank limited,
                                                        almost interference free

  direct purge          AFS         0.2 (Pichet et al.)  potential quenching from
                                    1-2 (Merlin)         molecular species

  direct purge          AAS         0.3 (G. Glass)       modified EPA method
  (recirculation)

  direct purge          AAS             200              EPA standard method

Slide 1.  Comparison of detection limits for various methods of Hg analysis.
Note that typical concentrations of Hg in water are 1-10 ng/L, meaning that
less-than-detect results will always occur using the standard EPA method.
                                   377

-------
[Schematic diagram: He gas supply, soda lime pre-trap, gold sample trap, aqueous
sample + SnCl2 purge vessel, gas-phase syringe injection port, Hg-free He gas,
nichrome coils, gold analysis trap, quartz detection cell, photomultiplier tube,
Hg lamp, and signal to recorder/integrator]

      Slide 2.  Schematic diagram of the dual amalgamation/cold vapor atomic
      fluorescence spectroscopy (CVAFS) method. The aqueous sample is oxidized
      with bromine monochloride to release all Hg, and pre-reduced with
      hydroxylamine prior to analysis.
                                    378

-------
[Figure 1: field sampling system - metal sample bottle (double-bagged), 25 ft of
tubing, peristaltic pump, and PVC tubing]

Slide 3. Water sampling system employed on the Sacramento Rivers Project.
                                        379

-------
               Quality Assurance Measures

            parameter                     minimum frequency
          method blanks                    3 per sample batch
          filtering blank                    1 per sample batch
     spike recovery (4-11 ng/L)         1 per sample batch or 10 samples
       laboratory duplicates           1 per sample batch or 10 samples
          field duplicates                    1 per sample batch
        blind (field) spikes                  once during study
        storage experiments                  once during study
     field equipment blanking         as needed for method development
      interlab intercomparison         as needed for method development
Slide 4. Quality assurance measures routinely employed.
                            380

-------
[Graph: Stability of Total Hg in Water - Hg (ng/L) versus storage time (d);
series: teflon, unpreserved; teflon + 0.05 N acid; polyethylene + 0.05 N acid;
the EPA storage time limit is marked]

Slide 5. Stability of total Hg in river water as a function of bottle material and
preservation acid. Samples were stored in a cleanroom with an atmospheric Hg
concentration of 5 ± 2 ng/m3.
                                       381

-------
[Bar chart: Storage of Samples before Filtration (0.2 µm) - dissolved Hg (ng/L)
at 0, 18, 36, and 168 h; series: Lake Union, Hg(tot) = 4.26 ng/L and
Duwamish R., Hg(tot) = 4.55 ng/L]

Slide 6. Effect on dissolved Hg concentration of storage unpreserved in Teflon
bottles.  Two different Western Washington waters were employed in the study
(the Duwamish River contains 20-70% tertiary sewage effluent depending upon
season).
                               382

-------
[Control chart: Total Aqueous Hg Method Blanks - blank Hg (ng/L) versus
individual results (1-42); mean = 0.14 +/- 0.04 ng/L (n = 36 of 42)]

Figure 7. Control chart for total Hg method blanks, December 1993 to April
1994.  The excursion in January was the result of a lab-wide atmospheric Hg°
contamination from metallic Hg containing sediments.
                                   383

-------
[Bar chart: mean total Hg (ng/L, scale to 12.5) at the I-5 bridge, Folsom, Nimbus,
Discovery, and RM-44 sites]

Slide 13. Mean and variability of total Hg above and below the city of
Sacramento. The I-5 bridge site is on the American River and the Folsom and
Nimbus sites on the Sacramento River, upstream of Sacramento.  The Discovery
site is in the City of Sacramento, where the two rivers merge. The RM-44 site is
downstream of the city and its sewage treatment discharge, near the Greene's
Landing site detailed on the previous slide.  All values except the I-5 site (n = 1
event) are means of 5-7 biweekly events.
                           384

-------
[Time series: total Hg concentration at Greene's Landing versus date; legend:
[Hg] and [Hg]-QA]

Slide 12. Variability of observed total Hg in the Sacramento River downstream
of Sacramento, CA.  The upper line is the Hg concentration, while the lower line
is the total suspended solids, which correlate with total river flow (rainfall).
                                 385

-------
                          Summary QA Statistics

    Parameter          units     mean     SD       N         Conc. Range
  Reagent Blanks       ng/L      0.14     0.04     36
  Filtering Blanks     ng/L      0.00     0.09     15
  Spike Recoveries      %        99.6     6.3      60         4-11 ng/L
  Lab Duplicates        Δ%        4.9     6.6      49        0.2-20 ng/L
  Field Duplicates      Δ%       13.3    14.6      33        0.2-20 ng/L
  Intercomparison      Δ% (a)    -4.1    13.3    13 labs     1.2 ± 0.1 ng/L

  (a) Difference between lab means in ICE intercomparison samples

 Slide 11. Summary of Laboratory QA statistics for December 1993 to April 1994.
                              386

-------
[Bar chart: Hg (ng/L) in DDW, I#1, I#2, I+C#1, I+C#2, Lake (G), and
Lake (I+C) samples]

       Slide 10. Sampling induced contamination of water samples. Lab double
       deionized water (DDW) was passed through the sample tubing of the Isco
       sampler (I#), and then a 10 L polyethylene integrating carboy (C), as an ordinary
       sample. The results are means of three replicates conducted at different times
       throughout the project.  In the last pair, the Isco sampler was compared with sub-
       surface ultra-clean grab sampling at an upstream lake site. Overall, the tubing of
       the Isco sampler and the integrating carboy are seen to each contribute
       approximately 0.5 ng/L contamination to the sample. Over the first five months
       of the study, this level has dropped by about 50%.
                                           387

-------
                        Total Hg Laboratory Means

[Scatter plot: mean total Hg (ng/L, 0.5-3.5) reported by each participating
laboratory in the intercomparison exercise, with error bars]

                                        388
-------
[Control chart: Total Aqueous Hg Spike Recoveries - spike recovery (%) versus
individual results (1-65); mean = 99.6 +/- 6.3% (n = 60/65)]

Slide 8. Control chart for Hg spike recovery at the 8 ng/L level (December 1993
to April 1994). The downward excursion in January occurred during a lab-wide
atmospheric Hg° contamination from metallic Hg containing sediments.
                                   389

-------
(Blank Page)
    390

-------
                                    MR.  TELLIARD:     We   have  one   quick
announcement before you get to go get your coffee.

                                    MS. ROMNEY: Hi, I am Jackie Romney from the
Office of Wastewater Enforcement and Compliance at EPA headquarters, and I
wanted to just give you a brief status of the document that is on the table outside.  It is the
draft national guidance for the  permitting, monitoring,  and  enforcement of water quality-
based effluent limits  set below  the analytical detection  or quantitation level.

      Many of you  have received copies.  The document  was released by the Office of
Wastewater Enforcement and Compliance on March 22nd.  We sent that document out for
comment to the regions, State  NPDES directors, our work group, and headquarters.  We
ended the comment  period last Friday, April 29, 1994.

      Our plans are to finalize the document by the end of this summer.  If anybody has
any questions or if you would like to talk to me about the document, I will be here until the
end of the conference. My phone number is area code (202) 260-9528, or you can reach
Cathy Smith who handles the enforcement issues in that document, and her number is area
code (202) 260-0252.

      Thank you.

                                    MR. TELLIARD: Sure, Jackie.

      Okay, we are  going to take a 15-minute break, and let's try to make it 15 minutes.

      Thank you.
                                      391

-------
(Blank Page)
   392

-------
                                      MR. TELLIARD: This afternoon, we are going to
be talking a little bit about cyanide and BODs.

      Our first speaker this afternoon is Margaret Goldberg from Research Triangle Institute.
Margaret is going to speak on the effects of multiple interferences on the determination of
total cyanide. For all of those who have them in your permits and are being regulated by
them, this will probably make your day.
                                       393

-------
(Blank Page)
    394

-------
THE EFFECT OF MULTIPLE INTERFERENCES ON THE DETERMINATION
   OF TOTAL CYANIDE IN SIMULATED ELECTROPLATING WASTE
                     BY EPA METHOD 335.4
    Margaret M. Goldberg*, C. Andrew Clayton*, and Billy B. Potter+
    * Research Triangle Institute, Research Triangle Park, NC 27709
      + U.S. Environmental Protection Agency, EMSL, Cincinnati, OH
                             395

-------
INTRODUCTION

      The U.S. Environmental  Protection Agency approved test procedures for cyanide
determination are listed in the Code of Federal Regulations 40, Ch.1, Pt.136, Appendix B,
Table 1 B.  The test procedures are used for the reporting of results of analyses as required
by the National Pollutant Discharge Elimination  System (NPDES)  permits.  The methods
listed for total cyanide in water all share similar chemistries and interferences.  These
interferences have resulted in many modifications of the procedures that are included in
Standard Methods1, ASTM2, and the EPA methods.3,4  The application of these modifications
requires prior  knowledge of the interference.  Procedures to remove the interferences
generally work well when  a single interference is  present.   However, when multiple
interferences are present, the total cyanide methods produce questionable analytical results.

      The objective of this study was to evaluate the performance of the method when
applied  to simulated waste effluent that contained known  concentrations of cyanide and
multiple interferences.  Electroplating waste effluent was studied because there have been
many problems reported  by analysts for that waste, and because it was known to contain
a large  number of method interferences.  In  brief, the problems  reported included low
recovery of cyanide spikes, suspected false positive results, and poor precision for replicate
analyses.

OVERVIEW OF THE CYANIDE METHOD

      Method 335.4 consists of two discrete analytical steps: (1) MIDI  distillation  of an
acidified solution  into an  alkaline collector, and (2) colorimetric analysis.4  In the first step,
cyanide is converted to HCN at pH 1 by addition of sulfuric acid to the sample. The gas
is purged from the sample solution  into an alkaline absorber solution where it is stabilized
as the cyanide anion.  The purpose of this distillation step is  to remove cyanide from
method interferences that are present in the sample, and stabilize it in a clean matrix. The
second step is a colorimetric analysis procedure  using pyridine-barbituric acid  reagent to
form a colored adduct.

      The MIDI distillation and analysis procedures were used to the extent possible.  MIDI
distillation was performed with an automated, 10 sample, temperature-controlled, heating
block (Cyan-Ten,  Andrews Glass Co.).  Analysis  was performed with an automated  flow
injection analysis  colorimetric analyzer (QuikChem AE, Lachat Instrument, Inc.).

METHOD INTERFERENCES

      Electroplating industry waste contains  many interferences  that affect the cyanide
method. Table 1 contains a list of nine types of interferences that have been reported to be
problematic in electroplating waste.  Note that while some of the interferences listed are
discrete chemicals (e.g. sulfide, thiocyanate, carbonate), others include entire categories of
compounds (e.g.  oxidizers, surfactants, metals).   In  the design phase of this  study, we

                                        396

-------
performed preliminary experiments to identify five of the most significant interferences. We
did not include metals in this testing because they were being studied separately.  The five
interferences selected were sulfide, hypochlorite, bisulfite, formaldehyde, and thiocyanate.

      For some interferences, Method 335.4 recommends interference recognition tests
("spot tests") and/or interference removal methods. This information is summarized in Table
2, with"YES" listed if the method recommends a spot test for that interference, and "NO"
if no test is recommended. The active reagent in the interference removal method is also
listed if one is recommended by the method.   Note that thiocyanate and bisulfite do not
have spot tests.  However, the Standard Methods 4500-CN procedure recommends addition
of lead carbonate to the absorber tube of samples with sulfur-containing compounds, and
thus lead carbonate is added to samples with  thiocyanate and bisulfite.
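
      The mapping below is a compact restatement, not a reproduction of Table 2; the pairing
of each interference with its removal reagent follows the range finding experiments reported
later in this paper, and the presence or absence of a spot test is inferred from the discussion
in this paper.

    # Restatement of the spot-test/treatment pairings described in the text.
    treatment = {
        #  interference : (spot test recommended?, removal reagent applied)
        "sulfide":      (True,  "lead carbonate (in the absorber)"),
        "hypochlorite": (True,  "ascorbic acid"),
        "formaldehyde": (True,  "ethylenediamine"),
        "thiocyanate":  (False, "lead carbonate (per Standard Methods 4500-CN)"),
        "bisulfite":    (False, "lead carbonate (per Standard Methods 4500-CN)"),
    }

    for name, (has_spot_test, reagent) in treatment.items():
        action = "spot test, then treat if detected" if has_spot_test else "treat without a spot test"
        print(f"{name:12s}: {action}; reagent: {reagent}")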

RANGE FINDING STUDIES

      In order to design the multiple interference study, it was first necessary to perform
a series of range finding  experiments to identify the  most  significant interferences and
estimate the range of concentrations over which the interference had a measurable effect on
cyanide recovery.  The range finding studies were very limited tests of each  interference
individually. Typically, five solutions were prepared in duplicate with either 0 or 100 µg/L
CN and a series of concentrations of a single  interference; in some  cases, more than five
solutions were prepared and tested.  In order to determine if the recommended  interference
removal method  improved the recovery of cyanide, we treated  some samples with the
interference removal reagent and did not treat others.  Samples  were then distilled and
analyzed according to Method 335.4.   Results are  shown in Figures 1 to 5 for sulfide,
hypochlorite, formaldehyde, bisulfite, and thiocyanate, respectively. Samples designated as
"Treated" were treated with the interference removal method; those designated  "Untreated"
were not. Data presented  in the figures  represent measured cyanide based on instrument
calibration with undistilled KCN standards, and have not been blank-subtracted or otherwise
corrected.

      In the case of sulfide (Figure 1), samples contained either 0 or 100 ppb (µg/L) CN,
and 0, 7.8 x 10-5, 1.6 x 10-4, 3.1 x 10-4, or 6.3 x 10-4 M sulfide.  Treated samples contained
lead carbonate in the absorber tube;  untreated  samples  did not.  Results shown in Figure 1
illustrate that:

(1) reagent blanks (0 ppb CN, Untreated) contained very little cyanide (approximately 2 ppb
CN); (2) "Treated" blanks (0 ppb CN, Treated)  had high measured cyanide (approximately
15 ppb  CN); (3) In the absence of interference or treatment  (100 ppb CN, Untreated, 0
Sulfide), cyanide was recovered at approximately 80% of nominal, but when sulfide was
present (100 ppb CN, Untreated, 1.6E-04 Sulfide) the recovery decreased to approximately
50% of nominal; (4) Treated  samples  generally  had higher recovery of cyanide than
Untreated samples (even after blank subtraction); and (5) higher concentrations of sulfide
resulted in lower recovery of cyanide.
                                       397

-------
      As shown in Figure 2, hypochlorite had a much stronger effect on cyanide recovery
than did sulfide. At the lowest concentration of hypochlorite tested, 3.4 x 10-6 M, cyanide
recovery from a 100 ppb CN solution (100 ppb CN, Untreated, 3.4 x 10-6 M Hypochlorite)
was approximately 80%.  However, at all higher concentrations of hypochlorite tested, the
recovery of cyanide was approximately 0% whether the sample was treated with ascorbic
acid or not.

      Figure 3 shows that  for 100 ppb CN,  Untreated samples, formaldehyde had  little
effect on cyanide recovery at the lowest formaldehyde concentration tested (3.7 x 10-7 M),
but did substantially reduce recovery of cyanide from 77% to 60% when formaldehyde
concentration increased from 3.7 x 10-7 to 3.7 x 10-6 M.  Treatment of samples with
ethylenediamine was effective in removing the effect of the interference at a formaldehyde
concentration of 3.7 x 10-5 M (78% recovery), but not at 3.7 x 10-4 M (10% recovery).

      In the case of bisulfite, a threshold  response is observed  in  Figure  4 for the
effectiveness of the lead carbonate in removing the effects of bisulfite on cyanide recovery.
At 3.2 x 10-5 and 3.2 x 10-4 M bisulfite, cyanide recovery from 100 ppb CN solutions treated
with lead carbonate was approximately 45%. At higher concentrations of bisulfite (1.6 x
10-3 and 3.2 x 10-3 M), cyanide recovery was reduced to approximately 11%.

      Results  for  thiocyanate are shown  in Figure  5.   When  the  concentration of
thiocyanate was increased from 3.9 x 10-7 to 3.9 x 10-5 M, the recovery of cyanide in 100
ppb CN, Untreated samples  decreased from approximately 90% to 70%.  The high cyanide
concentration measured  in the blank solution treated with lead carbonate (21 ppb) makes
the interpretation  of  lead carbonate treatment uncertain, but in general,  lead  carbonate
treatment increased the recovery  of cyanide relative to untreated samples  at the  same
thiocyanate concentration.

SAMPLE HOLDING TIME

      A limited Sample Holding Time Study was also performed prior to the Multiple
Interference Study.  The sample  holding time is the time between sample collection (or
preparation of simulated samples) and analysis.  If interferences react with cyanide at room
temperature or refrigerated temperatures during this period, the final concentration  of
cyanide determined by the method will not accurately represent the initial cyanide content.
Method 335.4 recommends a 14  day sample holding time for cyanide analysis.  However,
we believed that  significant  sample alteration could occur within 48  hours, and  so
conducted a brief study of the effect of holding time on cyanide recovery.

      A set of simulated electroplating waste samples was analyzed on the same day they
were prepared (Day 0). Half of the samples were analyzed again after one day refrigerated
storage (Day 1), and  the  other half were analyzed after two days  refrigerated storage.
Percent recovery of the nominal cyanide was calculated for each sample analysis. Selected

                                       398

-------
results are shown in  Table 3.  The difference values  presented were calculated as the
difference between the percent recovery determined on Day 0 and that determined on either
Day 1 or Day 2, and are indicators of the stability of the sample over that period.  As can
be seen in Table 3, the results are highly dependent on the sample composition, and clearly
demonstrate that holding time has a large effect on cyanide recovery for some samples.
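
      As a minimal illustration (with hypothetical measurements, not those of Table 3), the
holding time comparison reduces to computing the percent recovery of the nominal cyanide
on each day and differencing the results:

    # Hypothetical numbers showing the Day 0 vs. stored-sample comparison.
    nominal_CN = 100.0   # ppb in the simulated sample

    def percent_recovery(measured_ppb, nominal_ppb=nominal_CN):
        return 100.0 * measured_ppb / nominal_ppb

    day0 = percent_recovery(92.0)   # measured on the day of preparation
    day2 = percent_recovery(55.0)   # measured after two days of refrigerated storage

    difference = day0 - day2        # positive values indicate loss of recoverable cyanide
    print(f"Day 0: {day0:.0f}%, Day 2: {day2:.0f}%, difference: {difference:.0f}%")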

MULTIPLE INTERFERENCE STUDY

      The Multiple Interference Study was a statistically designed study that enabled the
estimation of the effects of 6 factors simultaneously on  measured cyanide concentrations.
Five of the factors were the interference levels, and the  sixth factor was the actual cyanide
concentration.   In addition to statistical aspects,  the design also incorporated  chemical
considerations and results from the Sample Holding Time Study and the Interference Range
Finding Studies.

Chemical  Aspects of Study Design

      The concentrations  of  cyanide and  the five interferences  used in the Multiple
Interference Study were selected based on three factors that are discussed below: (1) the
current regulatory levels for cyanide; (2) the concentrations of interferences that were found
to interfere in cyanide analysis in the range finding experiments; and (3) the molar ratios of
interferences to cyanide.

      Cyanide concentrations were chosen to cover a range of current regulatory discharge
levels. For example, discharge to municipal  sewer systems requires analysis of cyanide at
low concentrations, typically 5-50 ppb CN, while permit levels for electroplating industry
discharge  are typically around 700 ppb CN.  As indicated in  Table 4, the study design
included cyanide concentrations from 0 to 1000 ppb (0 to 3.8 x 10-5 M).  As explained
below, the statistical design enabled the best predictions over the range 49 to 500 ppb (1.9
x 10-6 to 1.9 x 10-5 M).

      The range of  interference concentrations was determined from  the results of the
preliminary range finding experiments.  In general, cyanide recovery was affected by the
interference when the interference was at a concentration between 3 x 10-7 M and 3 x 10-4
M.  As indicated in Table 4, the range of interference concentrations used in this study was
0 to 1.14 x 10-4 M.

      The molar ratio of interference to cyanide was important because the extent of the
interference reaction depends on the molar ratio of interference to cyanide as well as on the
magnitude of their concentrations.   Thus,  the study  design   includes samples with  a
stoichiometric  excess of interferences, a  stoichiometric excess  of cyanide,  and  1:1
stoichiometry of cyanide to interference.  The molar ratios of interference:cyanide included
in the study ranged from 0.068:1  to 185:1.
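
      The unit conversions behind these ranges can be illustrated with a short calculation
(this is not part of the study design itself; the 26.02 g/mol molar mass of the cyanide ion
and the example ratio are supplied here only for illustration):

    # Convert CN concentrations in ppb (ug/L) to molarity and form an example
    # interference:cyanide molar ratio.
    CN_MOLAR_MASS = 26.02  # g/mol for the CN- ion

    def cn_ppb_to_molar(ppb):
        """Convert a cyanide concentration in ppb (ug/L) to mol/L."""
        return ppb * 1e-6 / CN_MOLAR_MASS   # ug/L -> g/L -> mol/L

    for ppb in (49, 160, 500, 1000):
        print(f"{ppb:5d} ppb CN = {cn_ppb_to_molar(ppb):.1e} M")

    # Example: the high interference level against the low cyanide level
    ratio = 1.14e-4 / cn_ppb_to_molar(49)
    print(f"1.14e-4 M interference vs. 49 ppb CN is about {ratio:.0f}:1")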
                                        399

-------
Experimental Methods

      The results  of the Sample Holding Time Study demonstrated the importance of
regulating the time between sample preparation and analysis. Thus, each group of samples
was prepared and analyzed within a period of approximately 24 hours.  On the first day,
ten samples (half of a study "block") were prepared, allowed to sit at room temperature for
one hour, and then stored refrigerated for approximately 16 hours. On the second day, the
samples were brought to room temperature, tested for interferences using the recommended
spot  tests, and  those  in  which interferences were  detected  were treated  using  the
recommended interference removal method.  Then all ten  samples were  distilled and
analyzed. The same process was then repeated for the second half of the study "block."

Statistical  Study Design

      The statistical study design consisted of 120 samples (trials) arranged in six blocks of
20. Each block had 18 samples containing cyanide and two blank samples.  Of the 72 non-
blank samples in blocks 1 through 4, 64 samples contained cyanide (factor Z1; Table 4) at
either 49 or 500 ppb, and each of the interferences (factors Z2, Z3, Z4, Z5, and Z6; Table 4)
at either 0 or 1.14 x 10-4 M. These 64 trials represent a complete 2^6 factorial arrangement
(i.e., all possible combinations of high and low levels of each of six factors). In addition to
the 64 factorial points  and the blanks, each  of the first four  blocks also contained two
"center points," with cyanide concentration of 160 ppb and interference concentration of
1.07 x 10"6 M for all five interferences.  When the mathematical transformation used in this
study  (i.e., a logarithmic transformation) is applied to these concentrations, the "center
points" fall near the center of the factorial design (i.e., about half way between the low and
high levels of each factor).  Blocks 5 and 6 each consisted of three center points, 6 pairs of
points in which each factor was varied about the center point one factor at a time, and three
points chosen to examine stoichiometric relationships.
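
      A brief sketch of how the factorial portion of such a design can be enumerated is given
below; it is illustrative only (the study's actual worksheets, blocking, and randomization are
not reproduced), and the factor names are invented for the example.

    # Enumerate the 2^6 factorial combinations of low/high levels for cyanide (Z1)
    # and the five interferences (Z2-Z6); factor names here are illustrative.
    from itertools import product

    low_high = {
        "Z1_CN_ppb":  (49, 500),
        "Z2_S_M":     (0.0, 1.14e-4),
        "Z3_OCl_M":   (0.0, 1.14e-4),
        "Z4_HCHO_M":  (0.0, 1.14e-4),
        "Z5_HSO3_M":  (0.0, 1.14e-4),
        "Z6_SCN_M":   (0.0, 1.14e-4),
    }

    factor_names = list(low_high)
    factorial_points = [dict(zip(factor_names, combo))
                        for combo in product(*low_high.values())]
    print(len(factorial_points), "factorial trials")   # 64 = 2**6

    # Center point used in blocks 1 through 4
    center_point = {"Z1_CN_ppb": 160, **{name: 1.07e-6 for name in factor_names[1:]}}
    print("center point:", center_point)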

      Two  deviations  from the experimental design were performed.   First, the actual
concentration of interferences in some of the block 5 and 6 samples was 2.6 x 10-6 M rather
than 1.07 x 10-6 M. Second, an additional block of samples was prepared and analyzed.
The extra block was a  re-run  of block 1 with the exception that  samples were  analyzed
immediately after preparation, rather than 24 to 48 hours later.  The effect of this difference
on the final model was negligible, so the additional samples were included in the data
analysis.

Statistical Analysis

      The goal  of the statistical analysis was to characterize how measured cyanide
concentrations were affected by actual cyanide concentrations and by the concentrations of
one or more of the interferences.  After considering a number of candidate model forms, we
selected the following class of models for detailed analysis:
                                       400

-------
                               ln[Y + 1] = ln[1 + A + BZ1] + e


 where Y denotes the measured cyanide concentration, Z1 is the KCN concentration, e is a
 random error term, and A and B are functions of the interferences having the form:

                               A = B0 + f1(Z2, Z3, Z4, Z5, Z6)


                               B = B1 + f2(Z2, Z3, Z4, Z5, Z6)

 where B0 and B1 are constants and f1 and f2 are polynomial functions of the interference
 concentrations that are zero when none of the interferences are present. The model allows
 each  interference (a)  to have an additive effect (i.e., to change the intercept) with the
 incremental amount being proportional to  its concentration; (b) to have a  multiplicative
 effect (i.e., to change the slope) with the increment being proportional to its concentration;
 and (c) to have both effects (a) and (b).  In addition, the interferences are allowed to interact
 with one another (e.g., by including cross-product terms in the f1 and f2 functions) and
 thereby jointly affect either A or B or both.

        Thus the objectives of the statistical analysis were first to determine "good" forms for
 the A and B functions and then to estimate the parameters of those functions.  Nonlinear
 least squares estimations were performed using the SAS NLIN procedure.
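
      The study's fits were done with the SAS NLIN procedure; the sketch below is only an
illustration of the same kind of nonlinear least squares fit in Python (scipy), using a reduced
model with a single interference (so that A = b0 + b2*Z2 and B = b1 + b12*Z2) and synthetic
data generated for the demonstration.

    # Illustrative nonlinear least-squares fit of ln(Y+1) = ln(1 + A + B*Z1);
    # reduced to one interference and fitted to synthetic data (not the study's data).
    import numpy as np
    from scipy.optimize import curve_fit

    def model(X, b0, b1, b2, b12):
        Z1, Z2 = X                 # Z1: true CN (ppb), Z2: interference (M)
        A = b0 + b2 * Z2
        B = b1 + b12 * Z2
        arg = 1.0 + A + B * Z1
        return np.log(np.clip(arg, 1e-9, None))   # clip guards intermediate iterations

    rng = np.random.default_rng(0)
    Z1 = rng.choice([49.0, 160.0, 500.0], size=60)
    Z2 = rng.choice([0.0, 1.07e-6, 1.14e-4], size=60)

    true_params = (2.0, 0.8, 0.0, -3000.0)             # arbitrary demonstration values
    lnY1 = model((Z1, Z2), *true_params) + rng.normal(0.0, 0.05, size=60)

    popt, pcov = curve_fit(model, (Z1, Z2), lnY1, p0=(1.0, 1.0, 0.0, 0.0), maxfev=10000)
    perr = np.sqrt(np.diag(pcov))
    for name, est, err in zip(("b0", "b1", "b2", "b12"), popt, perr):
        print(f"{name}: {est:.3g} +/- {err:.2g}")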


 Results

        Statistical analysis of the data resulted in the following representations for A and B:
      A = B0 + B2Z2 + B3Z3 + B4Z4 + B5Z5 + B6Z6 + B22Z2^2 + B33Z3^2 + B55Z5^2
             + B36Z3Z6 + B45Z4Z5

and

      B = B1 + B12Z2 + B13Z3 + B14Z4 + B15Z5 + B16Z6 + B123Z2Z3 + B133Z3^2


Thus the final model is given by:
                                         401

-------
 Predicted [CN] =

      B0 + B1[CN] + B2[S] + B3[OCl] + B4[HCHO] + B5[HSO3] + B6[SCN]

      + B12[CN][S] + B13[CN][OCl] + B14[CN][HCHO] + B15[CN][HSO3] + B16[CN][SCN]

      + B22[S]^2 + B33[OCl]^2 + B55[HSO3]^2 + B36[OCl][SCN] + B45[HCHO][HSO3]

      + B123[CN][S][OCl] + B133[CN][OCl]^2


      The estimated  parameters are given  in Table 5, along with an  estimate of their
standard  errors and approximate 95 percent confidence intervals.  Asterisks are used to
identify those coefficients which are statistically significant. The model results are presented
in 3-dimensional graphs in Figures 6 to 11, with each graph presenting the predicted percent
recovery of cyanide as a function of the actual KCN concentration and the concentration of
one  interference.  The other interferences  are  held at a  concentration  of 0 for these
simulations, except where noted.

      As  can be  seen in Figure 6, increasing the  concentration of sulfide  results  in
decreasing the  percent  recovery of  cyanide.   The effect is  most  dramatic  at  low
concentrations of KCN, and less so as the concentration of KCN increases.  This general
pattern is  repeated to a lesser extent for bisulfite (Figure 7), formaldehyde (Figure 8), and
thiocyanate (Figure 9).  In the case of hypochlorite (Figure 10), the effect of increasing the
hypochlorite concentration is a highly significant decrease in predicted percent recovery of
cyanide, such that  at the highest concentrations of hypochlorite and cyanide included, the
predicted  percent recovery approaches 0.

      One effect that was not originally anticipated was the combined effect of hypochlorite
and thiocyanate. As illustrated in Figure 11, for a fixed concentration of hypochlorite (0.1
mM) and variable concentration  of thiocyanate, the predicted percent recovery of cyanide
ranges from approximately 0 to 1000%. The explanation for this pattern is that, at low
concentrations of thiocyanate, hypochlorite rapidly oxidizes cyanide to carbon dioxide and
thus  very  little cyanide is present at the time of the analysis. At high concentrations of
thiocyanate, hypochlorite oxidizes thiocyanate to sulfate plus cyanide, and thus increases
the concentration of cyanide prior to analysis. Thus, the actual concentration of cyanide
present at the time of  analysis depends on the ratios of hypochlorite to thiocyanate and
hypochlorite to KCN as well  as the ratio of thiocyanate to KCN.

      During the Multiple Interference Study, we tested each sample for the presence of
each interference using the recommended interference recognition tests.  The results are
summarized in Table 6 and show that the interference recognition tests failed to correctly
identify the presence of sulfide, hypochlorite, or formaldehyde in  over half of the samples
containing those interferences.   In general,  the  presence of  more than one interference
caused each of the interferences to be  "masked"  during interference recognition testing.
                                        402

-------
CONCLUSIONS

      Several conclusions were drawn from these studies:

      1. As demonstrated in the Interference Range Finder Studies, individual interferences
caused a substantial reduction in recovery of cyanide in some cases, even after application
of the interference removal method.

      2. Sample holding time was an important parameter that led to an increase or a
decrease in cyanide recovery as a function of time.  The effect was a function not only of
the interferences present in the sample, but also of the concentration of each interference
and the  length of time the sample was held. The fact that significant sample alteration was
observed within 48 hours suggests that the 14 day holding time recommended in Table II
of Method 335.4  and 40 CFR Part 136.3 is excessive.

      3. In the Multiple Interference Study, it was observed that the interference recognition
tests worked  properly in less than 50% of the samples when multiple interferences were
present.

      4. The effect of the interferences on cyanide and on each other was complex. Not
only was the effect of each interference on cyanide recovery statistically  significant, but
there were also statistically significant 2-way interactions between sulfide and formaldehyde,
hypochlorite and thiocyanate, and formaldehyde and  bisulfite, and a statistically significant
3-way interaction  among cyanide, sulfide, and hypochlorite.

      5. Hypochlorite was not considered a method interference by itself. It caused a rapid
removal of cyanide prior to analysis, but if the excess hypochlorite was adequately removed
in the pretreatment stage, then the method did accurately determine the concentration of
cyanide present at the start of the  analysis.  However, in the presence of other oxidizable
interferences, such as thiocyanate, sulfide, or formaldehyde, the effect of hypochlorite was
more complex and time dependent, and the method did not provide reproducible or reliable
results.

      6. The word "Total" in the Total Cyanide Methods may be interpreted absolutely and
lead to  improper treatment of cyanide wastewater.  Clearly the word  "Total"  is  not
representative of the results produced by these cyanide methods when multiple interferences
are present and/or when interferences  are not identified and removed.


REFERENCES

1.    "Method 4500-CN Cyanide", Standard Methods for the Examination of  Water and
      Wastewater,  18th  Edition,  A.E.  Greenberg,  L.S. Clesceri, and  A.D. Eaton, eds.,
      American Public Health Association, American Water Works Association,  Water
      Environment Federation, 1992.

2.    Annual Book  of ASTM  Standards, Vol.  11.02, American Society for Testing and
      Materials, Philadelphia, PA.
                                       403

-------
3.     Methods for Chemical Analysis  of Water  and Wastes,  Environmental Protection
      Agency, Environmental Monitoring Systems Laboratory-Cincinnati, EPA-600/4-79-020,
      Revised 1983 and 1979.

4.     "Determination of Total Cyanide by Semi-Automated Colorimetry," Method 335.4,
      Methods for the Determination of Inorganic Substances in Environmental  Samples,
      U.S. Environmental Protection Agency, EPA/600/R-93/100, August 1993.
                                      404

-------
                       QUESTION AND ANSWER SESSION

                                     MR. LOEWE:  Jeff Loewe with Daily Analytical
Labs.  Is the holding time before or after the distillation?

                                     MS. GOLDBERG:  It is before the distillation.

                                     MR.  LOEWE:   Were  the  samples  held after
distillation?

                                     MS. GOLDBERG:  No, we did not.  We always
analyzed the samples the same days that they were distilled.

                                     MR. LOEWE:  Okay, thank you.

                                     MR. JOHNSON:  Mike Johnson with Dupont.
I am one of the poor people in the regulated community that has to use this method, and
I applaud you, because you  have found exactly what we have found.

      The question is right  now, that method is the only method EPA has approved for
cyanide, and what we are seeing is negative values which I do not mind as long as EPA  lets
us use it in our average, but that is another topic. We are seeing positive interferences. It
is just all over the map.

      Is there any recommendation on another method? We have been experimenting with
the weak acid dissociable test and getting a  lot better results from that.  It seems  to be
ignoring a lot of the interferences, but from a regional standpoint, they do not recognize that
as a method.  So, what is somebody to do?

                                     MS. GOLDBERG:  I think  we will  have two
responses to that.  I will tell you my answer, and then I will ask Bill Potter to speak for EPA.

      The first answer is that  we are working on method improvements to this, and  we
have actually seen some improvements by using weaker oxidants than the hypochlorite.

      The goal there is to  oxidatively decompose  the interferent but not oxidize  the
cyanide, and we have had very good success adding sodium vanadate as an  interferent
removal oxidant.  It does not work with the thiocyanate.  So,  it will remove the  other
interferences for you but not the thiocyanate. For that, we have no recommendation.

      We are currently exploring other methods,  and I cannot give you any results from
those yet. One of the studies we have  not even  started yet, and others are just in  the
process, so I cannot give you  that recommendation, but maybe Bill can tell you some more.
                                      405

-------
                                     MR. TELLIARD:  Bill?

                                     MR. POTTER:  In  response to coming up with
different methods, right now, we are not funded for looking or exploring other methods.
This particular project was designed  only to find out if the classical  method could be
improved or if the interferences could be identified and handled in some way.

      What this study has shown is that the method is dysfunctional as written.

                                     MR. TELLIARD:  If you have a suggestion as to a
method  or an  application, if you would send it to me, I would  do what I  can to get it
addressed. No promises, but we are certainly interested; for the industries that we
regulate and are writing regulations for, we have to have a method.  I have been waiting
for this paper all day.

                                     MR. TELLIARD: Thank you. The gentleman in the
back?

                                     MR. STRAKA: My name is Mike Straka, and I am
with Perstorp Analytical Environmental.  A couple of comments. First of all, I am very glad
to hear  that response to the request  for better  methods.   We,  as an  instrumentation
manufacturing company, are devoting a lot of energy to this specific problem,  that is, the
cyanide problem.

      We have, in cooperation with the University of Nevada-Reno Mackay School of
Mines, begun to commercialize a new weak acid dissociable chemistry that precludes the
need to  do a distillation and, therein, cures a lot of ills and  evils.

      The distillation plus colorimetric finish can take up to an hour or more, more like an
hour and a  half per sample.  With this new method, we can get results identical to a
distillation followed by  ion chromatography finish in two minutes per analysis,  and  I
promise you the interferences are what you would expect.

      So, my question actually to Bill  is I have the method that you are looking for.  Now,
tell  me how I can get it through the EPA or get  it  evaluated  rapidly.

                                     MR. TELLIARD: We would be glad to take a look
at it.

                                     MR. STRAKA: Anything that I can  do to help,  I
would appreciate it.

                                     MR. TELLIARD:  I understand.
                                       406

-------
                                      MR. STRAKA:    One comment,  actually,  to
Margaret.  Is it really fair to classify bisulfite and  hypochlorite and those other oxidizers as
interferences?  As you mentioned, they are clearly oxidizing the cyanide, and they would
only be expected to be in the sample because someone is trying to actually destroy the
cyanide before  it becomes an effluent.

                                      MS. GOLDBERG:  Well,  in  the  case of the
hypochlorite, that  is true.  The hypochlorite is  added to remove the cyanide to enable
discharge, and that was, I think, the comment that I made  at the end, that it is not really
considered an interference in itself, because  it really is removing the cyanide.

      The difficulty with the hypochlorite is that it is also oxidizing the other interferences
present and that there are very complicated reaction pathways proceeding. It  makes it very
difficult to regulate on a  method  where you can get between zero and  1000  percent
recovery.

      So, I think, in that sense, we do have to consider the interactions of the hypochlorite
with the other interferences as method  interferences.

      The bisulfite itself is not added as an  oxidant deliberately to remove the  cyanide,
typically.  Usually, that is present as a brightener or other component by the electroplaters.
So, that is in the sample and is not added as an  oxidant.

                                      MR. STRAKA:  In  some industries,  it actually is,
but my final comment, and I will let you go, is that it has been our observation that, by adding
lead sulfide to the acceptor or the scrubber solution, we can have some interesting
chemical kinetics going on there, too, which act on the cyanide and actually produce
thiocyanate, which may be a mechanism for giving rise to your lower recovery in the total
procedure.  That is to say that the sulfide plus cyanide yields thiocyanate.

      I just make that general observation, because I know  a lot of people traditionally do
that for the total distillation, whereas if you do an  amenable or WAD distillation, they do
not use that practice, and sometimes, more often than not,  you can end up getting WAD
cyanide results that are higher than your totals, and that may be a very real mechanism for
that interference.

      Thank you.

                                      MS. GOLDBERG:  You are right. Thank you.

                                      MR. TELLIARD: Thank you.

                                      MR. SAWYER: My name is Bernard Sawyer. I am
with the Metropolitan Water Reclamation District of Chicago.  We have done a lot of work

                                       407

-------
on cyanide analysis using a UV lamp with the thin film distillation, and it breaks down the
thiocyanate.  That method is approved in the ASTM manual, and it was sent in at one point
or another to EPA,  but it never made it, for whatever reason, as being an approved  EPA
method, but we have used it for many years, especially on industrial waste samples.

      It seems  to  eliminate a lot of these interferences, and it  totally eliminates your
thiocyanate problem, because you actually run the analysis twice. It is done on a Technicon
train, and you basically have the UV lamp turned on, and then you turn the lamp off, and
the difference in the two gives you your thiocyanate value which you then can subtract out.

      A lot  of work has been done with that at our labs, I know, to show the percent
recoveries of all the different thiocyanate complexes, et cetera, and it has been  published
in the Water Pollution  Control Federation Journal from many years ago.

      So, there is some information out there.

                                     MR. TELLIARD:  Yes. The other thing is that, of
course, there is a rule in ASTM that no  method can go final while the author is alive.

                                     MR. POTTER: Let me say something about that.
That was, I believe, Nebi Kelada's method, and the purpose of this particular paper was just
to look  at the method that was already approved.   So,  we are now, since we have
completed this experimentation, starting to look at Kelada's method, along with many other
techniques, some of them UV  techniques.  There are membrane separation techniques.

      With the remaining amount of money that we have in the contract, we may be able
to review some of those on a very cursory sort of a quick  look or snapshot.

                                     MR. TELLIARD:  Thanks, Bill.

                                     MR.  XIE:  Jack Xie  from Water Chemistry in
Roanoke, Virginia.  My question is, you have percent recoveries here from zero percent to 1000
percent.  How do you report your QA/QC data?  Some EPA methods require
that the percent recovery should fall into a certain range, like 60 percent to 120 percent, but
when you have a situation where your percent recovery is from zero to 1000, how do you
deal with that?

                                     MS. GOLDBERG: Well, I think, as Bill has said,
the fact that we had 1000 percent recovery really is just showing  that the method  is
dysfunctional.  There is no way to show that those are good values. In fact, they are fairly
non-reproducible. We can get 1000 percent recovery today, and we can get 1200 percent
recovery tomorrow.
                                      408

-------
      I did not speak at all about the quality control activities that we used in this lab. We
have used a lot of procedures to establish our quality control levels.

      Each day that we ran the cyanide distillation blocks with ten samples in them, we
also ran a whole quality  control block which  contained four blanks plus four  cyanide
samples dosed at 100 ppb. Half of each of those groups were with the lead carbonate and
the other half without, so  that we tracked  on a daily basis the performance of our entire
process.

      We kept quality control charts for that and for undistilled KCN just on a colorimetric
analyzer as well so that we were able to  distinguish on a  daily basis if there were any
problems with the distillation block or if there  were any problems with the colorimetric
analysis.

      We felt  that the quality control was  well under control for that study  and the
observed 1000 percent recoveries really were oxidative effects of the interferences.

                                      MR. XIE: Okay, thanks.

                                      MR. TELLIARD: Thank you.  Thanks, Margaret.
                                       409

-------
(Blank Page)
    410

-------
TABLE 1. POTENTIAL INTERFERENCES IN ELECTROPLATING INDUSTRY WASTE
                  Sulfide (S2-)
                  Thiocyanate (SCN-)
                  Carbonates (HCO3-, CO32-)
                  Nitrite (NO2-)
                  Oxidants (OCl-, O3, H2O2)
                  Bisulfite (HSO3-)
                  Formaldehyde (HCHO)
                  Surfactants
                  Metals
                                    411

-------
             TABLE 2.  ELECTROPLATING INTERFERENCES STUDIED

INTERFERENCE        SPOT TEST       REMOVAL METHOD
Sulfide                Yes          PbCO3
Hypochlorite           Yes          Ascorbic Acid
Formaldehyde           Yes          Ethylenediamine
Thiocyanate            No           None (PbCO3)
Bisulfite              No           None (PbCO3)
                                412

-------
                            TABLE 3. HOLDING TIME STUDY

          SAMPLE COMPOSITION (x 10^-6 M)                 PERCENT RECOVERY (%)
SAMPLE    CN      S     OCl   HCHO   HSO3    SCN       DAY 0   DAY 1/DAY 2   DIFFERENCE (%)
  A       1.9     0      0      0     114    114         79        72             -7
  B       1.9     0    114    114       0      0         40        29            -11
  C       1.9     0    114    114     114    114        989      1101           +112
  D       1.9   114      0    114       0    114         83        56            -27
  E       1.9   114      0    114     114      0         78        90            +13
  F       1.9   114    114      0       0    114         56      1089          +1033
  G      19.2     0      0    114       0    114         71        75             +4
  H      19.2     0    114      0       0    114        122        95            -26
  I      19.2     0    114      0     114      0         4.0       2.8           -1.2
  J      19.2   114      0      0       0      0         59        69            +10
  K      19.2   114      0      0     114    114         59        70            +11
  L      19.2   114    114    114       0      0         32        40             +8
  M      19.2   114    114    114     114    114         78       146            +68
  N      6.15  1.07   1.07   1.07    1.07   1.07         91        64            -27
  O      6.15  1.07   1.07   1.07    1.07   1.07         90        58            -32

-------
                              TABLE 4. STUDY DESIGN

                                       CONCENTRATION RANGES
CHEMICAL FACTOR           DESIGN                       PREDICTION
KCN                  0 - 1000 ug/L (ppb)          49 - 500 ug/L (ppb)
Sulfide              0 - 1.14 x 10^-3 M           0 - 1.14 x 10^-4 M
Hypochlorite         0 - 1.14 x 10^-3 M           0 - 1.14 x 10^-4 M
Formaldehyde         0 - 1.14 x 10^-3 M           0 - 1.14 x 10^-4 M
Bisulfite            0 - 1.14 x 10^-3 M           0 - 1.14 x 10^-4 M
Thiocyanate          0 - 1.14 x 10^-3 M           0 - 1.14 x 10^-4 M
                                    414

-------
                        TABLE 5. ESTIMATED MODEL PARAMETERS

                                                           95% CONFIDENCE INTERVAL
PARAMETER   ESTIMATE          ASYMPTOTIC STD. ERROR     LOWER            UPPER
B0           1.21697E+01***    9.28269E-01               1.03311E+01      1.40082E+01
B1           6.72741E-01***    2.82236E-02               6.16841E-01      7.28642E-01
B2          -1.56419E-03***    3.01774E-04              -2.16189E-03     -9.66490E-04
B3           9.26762E-04***    2.62156E-04               4.07529E-04      1.44600E-03
B4          -5.13696E-04***    1.80544E-04              -8.71286E-04     -1.56107E-04
B5          -8.85959E-04***    2.70073E-04              -1.42087E-03     -3.51046E-04
B6          -6.20955E-04*      3.53309E-04              -1.32073E-03      7.88193E-05
B22          9.85605E-09***    3.57131E-09               2.78263E-09      1.69295E-08
B33         -1.82170E-08***    2.44597E-09              -2.30615E-08     -1.33724E-08
B55          5.80259E-09**     2.71985E-09               4.15579E-10      1.11896E-08
B12         -5.79119E-07       1.81319E-06              -4.17036E-06      3.01212E-06
B13         -6.50880E-05***    2.93706E-06              -7.09053E-05     -5.92708E-05
B14         -9.21116E-07       9.47768E-07              -2.79829E-06      9.56058E-07
B15          1.13738E-06       8.57594E-07              -5.61196E-07      2.83595E-06
B16          6.67081E-07       2.67670E-06              -4.63447E-06      5.96863E-06
B24          8.09171E-08**     3.34711E-08               1.46234E-08      1.47211E-07
B36          4.48756E-06***    3.09200E-07               3.87515E-06      5.09997E-06
B45          7.55712E-08***    2.58539E-08               2.43644E-08      1.26778E-07
B123         3.27211E-09***    3.78161E-10               2.52312E-09      4.02111E-09
B133         5.11637E-10***    2.36651E-11               4.64766E-10      5.58509E-10
              415

-------
                  TABLE 6. INTERFERENCE RECOGNITION TESTS

INTERFERENCE                                     S2-      OCl-     HCHO
Number of Interference-Containing
Samples Tested                                    57       57       57
Number Correctly Identified                        2        9       24
Percent Correctly Identified                      4%      16%      42%
               416

-------
Figure 1. Sulfide: Treated samples contain PbCO3 in the Absorber Tube.
(Plot versus concentration of sulfide (mol/L); curves: 100 ppb CN treated and untreated,
0 ppb CN treated and untreated.)
                              417

-------
Figure 2. Hypochlorite: Treated samples contain Ascorbic Acid.
(Plot versus concentration of hypochlorite (mol/L); curves: 100 ppb CN treated and untreated,
0 ppb CN untreated.)
                    418

-------
Figure 3. Formaldehyde: Treated samples contain Ethylenediamine.
(Plot versus concentration of formaldehyde (mol/L); curves: 100 ppb CN treated and untreated,
0 ppb CN untreated.)
                               419

-------
Figure 4. Bisulfite: All samples treated with PbCO3 in each Absorber Tube.
(Plot versus concentration of bisulfite (mol/L); curves: 100 ppb CN treated, 0 ppb CN treated.)


                           420

-------
Figure 5. Thiocyanate: Treated samples contain PbCO3 in the Absorber Tube.
(Plot versus concentration of thiocyanate (mol/L); curves: 100 ppb CN treated and untreated,
0 ppb CN treated and untreated.)
                                421

-------
Figure 6. Sulfide: Predicted cyanide recovery.
          422

-------
                Figure 7. Formaldehyde: Predicted cyanide recovery.
                                   423

-------
Figure 8. Bisulfite: Predicted cyanide recovery.
            424

-------
          Figure 9. Thiocyanate: Predicted cyanide recovery.
                          425

-------
                    Figure 10. Hypochlorite: Predicted cyanide recovery.
                                426

-------
    Figure 11. Thiocyanate with 0.1 mM hypochlorite: Predicted cyanide recovery.
                                      427

-------
(Blank Page)
    428

-------
                                     MR. TELLIARD: Our next speaker is Bruce Logan.
He is Associate Professor of Environmental Engineering at the University of Arizona.  He is
going to talk about headspace analysis and BOD.
        THE HEADSPACE BIOCHEMICAL OXYGEN DEMAND (HBOD) TEST:
                   A NEW APPROACH  FOR MEASURING BOD
                                     MR. LOGAN: Well, isn't it amazing that, with all
the improvements we have in analytical methods and procedures and materials, we do
not do the BOD test a whole lot differently today than we did it 50 years ago?

      There are certainly some problems associated with the test.  It is labor intensive.  It
is  glassware  intensive  and  consumptive.   It  is bench space  consumptive,  incubator
consumptive, dishwasher consumptive, time consumptive.  I do not know if I have missed
anything.

      It is slow.  It can take at least five days, as you know, and perhaps one of my biggest
concerns, as an engineer who designs wastewater treatment systems, is that the results of
the BOD test, because of the way we have modified  it, no longer represent at all the
conditions that are going on  in the bioreactor.   So, in terms of  evaluating and changing
wastewater treatment processes, it does not provide us the kind of information that we really
need to evaluate the process.

      So, what do we do? Well, there are some alternatives that have come up over the
years.  Respirometers, for one.  Unfortunately, they are expensive and, because of their
cost, provide few replicates.

      There are on-line BOD measurements, but, again, those are also expensive and are
usually limited to only one or two points in the treatment train.

      There are automated BOD systems, and the next speaker will address the problems
associated with those.

      I am here today to present an  alternative to the BOD test which I have called the
HBOD test, H standing for headspace BOD test.  I think it overcomes the problems with the
BOD test.  It is not a dilution technique, and it is fast, simple, and relatively inexpensive.

      Let's just spend a minute  on  what I consider  the problems with the  dilution
technique. In diluting down the wastewater sample, we achieve less of an oxygen demand,
but what we also do is slow the whole rate of oxygen exertion, BOD exertion, and substrate
consumption down.


                                      429

-------
      The rate of substrate utilization or the rate of organic matter decay is proportional to
a couple of constants and two things: X, which is the concentration of cells in the BOD bottle,
and S, the concentration of substrate or the organic matter.  When we dilute those down,
we can slow down the reaction rate.

      In  a typical BOD test, if you are looking at BODs in the range of, say, 30  to 100
mg/L, you may dilute the sample as much as a factor of 10 to 100. So, you have really set
up the game to work against you.
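      As a rough illustration (these numbers are mine, not the speaker's): if both the organisms
and the substrate come from the sample itself, a 1:10 dilution cuts X and S tenfold each, so the
initial uptake rate, which is proportional to the product of X and S under the first-order
approximation shown in the slides, drops by roughly a factor of 100.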

      What do we do in the headspace BOD test?   Well, as I said, it is not a dilution
technique, so you do not need to change the concentration of the wastewater depending
on the BOD.

      The overall calculation of the headspace BOD is based on a mass balance, and the
principle  behind the test is that oxygen is replenished in the wastewater or the  water sample
in a sealed tube.

      I have one of those sealed tubes, and  unless you are sitting in the front row here, you
will not be able to see this, but I can assure you I will have a slide of this in a moment.

      The idea is that you take a tube, you fill it with a certain amount of wastewater, not
diluted or in any way altered, and you leave a part of the tube open and just containing air,
and then  you seal that tube and let the reaction go on with time.

      Now, one of the advantages of the test is, because I have not diluted it, the reaction
will plateau out very quickly, typically  within 24 to 36 hours,  and this could allow you to
take the sample on Monday morning and have your answer Tuesday afternoon.

      Let me run through the test with you and then we can discuss how and  why it works.

      This is what you need to run the test.  You need test tubes like this.  A 28  ml test
tube is the test  tube that I use. This  test  tube was  developed for work  with anaerobic
microorganisms, so  it is airtight, gas tight,  liquid tight.

      Within one test tube rack, you can fit about 40 of these test tubes compared to, say,
3 or maybe 4 BOD  bottles. You also need a cap for these tubes. We use teflon stoppers
which were not around 50 years ago.  And we use aluminum tops, and we crimp-seal them on
the tops of the teflon stoppers to create an airtight seal.

      So, how do you conduct the test? Get your wastewater, and put it in a bottle.  You
can put this bottle on a magnetic stirrer if you wish to keep everything well suspended. We
use a little digital pipette or dispensette. We just set this for whatever volume of wastewater
we want  to put in the tube, say, 15 ml and just inject right into the test tube 15 ml of
wastewater.
                                       430

-------
      We put on the teflon top, crimp seal it, and we set these test tubes on their sides.
This can be done by placing all the test tubes in this rack, putting a cover on to hold the test
tubes  in the rack, and  putting the rack on a shaker table.

      You  need to agitate this sample so that you continuously re-aerate the sample with
oxygen from the atmosphere.

      Now, what do we do when we are done? We can wait 24 hours or 36 hours or 5
days and let this run, but  how do we analyze oxygen in  the liquid?

      This is a standard YSI probe which most of you probably have in your labs.  This
probe will not fit down into that test tube.  Moreover, even if it did, it would have to go
through some sort of headspace at the top of the tube. So, the probe does not work for us.

      So, if you want to run this test, you need to get a new  DO probe.  We use the
Wheaton BOD analysis system, and this is the probe right here.  It is a non-consumptive,
non-stirring probe, so  you do not  need to stir the sample, and you do not consume any
oxygen while the analysis is being done.

      You  just set the dial for the temperature of the laboratory and for the saturation
concentration of oxygen.

      When you are ready to analyze the sample, we give it a quick stir on a vortexer to make
sure the water is in equilibrium with the air.  We then pour the sample out into a little
plastic sampling cap.  This is an incubator cap off a test tube.

      And  we insert the probe. The cap is actually sitting in the top of this BOD  bottle
and contains the wastewater sample. We insert the probe down into the wastewater sample
and let  it sit there about 60  seconds,  and then we can read the DO directly from the
machine.

      When we are all done, we use a computer program to calculate the HBOD, although
you can make the calculation on a calculator. To make the calculation you use the volume
of the container, the volume of the liquid, the air pressure when you sealed the test tube,
the temperature,  the DO at the start which is usually insignificant, and the DO when you
finish.

      You  input this data into the computer, and out pops your HBOD, in this case, 89.3
or 90  mg/L, and you can see that the DO at the start which is probably only 1 or 2  mg/L
is insignificant compared to the final BOD.

      Let me convince you that this analysis really works.  It  seems too simple.  You
probably think well, why  didn't somebody think of this 50 years ago?
                                       431

-------
      I  do not know the answer to that, but the method is based on  Henry's law.  It is
saying that the partial pressure of oxygen in the gas phase is related to the concentration in
the liquid phase by a constant.

      What this means is that, as the oxygen in the air is used, the partial pressure of oxygen
in the air will drop.

      Now, our intuition tells us that when the pressure changes in that test tube, the
partial pressure of oxygen changes, but our intuition is based on a system where volume
changes can  occur, for  example, injecting  air into the bottom of the tank.  You can
compress or expand air bubbles depending on the pressure.

      In this test tube, once it is sealed, the only way that the partial pressure can change
is by consumption of oxygen.  So, the total pressure in the tube, let's say the total pressure
starts out at 1 atm, and let's say that, through production of CO2, it ends up at 2 atm.

      Let's say no oxygen was consumed.  The mole fraction of oxygen went from being
1 to 0.5. So, the pressure went up, the mole fraction went down, and the partial pressure
stayed the same.

      So, the only way that the partial pressure of oxygen can change  is by consumption
of oxygen.  So, I can either measure the concentration of oxygen in the air or the liquid.
By measuring either one,  I know both.

      The oxygen consumption is calculated by first calculating the  mass of oxygen in the
air. That is calculated by nothing more complicated than the  ideal gas equation, pV  =
NRT.  We all remember  that from freshman chemistry.

      Then we calculate the moles of oxygen initially in the liquid phase.  That is just the
concentration times the volume.

      To start out, we need the total moles of oxygen.  We add up the moles in the liquid
and the  moles in the gas.  We then have total moles of oxygen at the start of the test, and
when we are done, we can go through the same calculation.

      We measure the dissolved oxygen concentration in the liquid when we are done but
we do not measure the concentration in the gas.  There  is no need  to.  We can calculate
that from the Henry's law constant.  So, we back-calculate the gas concentration.

      So, the full  calculation is that  the headspace  BOD is...well,  you  have this  big
equation, and I think if you look at this, you can appreciate why we put it on the computer,
but all we need to know is the  volume of  the container, a  couple of constants, the
temperature, the initial concentration in the liquid, and the final concentration in the liquid.
                                       432

-------
      You also need to know the saturation concentration of oxygen in the liquid.  We can
look that up in a book.  If we don't believe it, we take our wastewater sample, put it on a
stir plate for half an hour, and then we can go measure that in the exact sample that we
have.

      This is what the analogous dilution looks like for the headspace BOD concentration.
Let's say that we run a test, and we end up with 2  mg/L of oxygen in the liquid.  Well, if
we had, say, 10 ml of liquid in a 28 ml container, our BOD is 100 mg/L.  If we had 15 ml,
it is 200, and  so forth.

      So, the final DO concentration and our choice of the volume of headspace tells us
what the BOD was.
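      For readers who want to try the arithmetic themselves, here is a minimal sketch of that
mass balance in Python.  It assumes the tube is sealed under ambient air (oxygen partial
pressure of about 0.2095 atm) and infers the Henry's law constant from the saturation DO at the
laboratory temperature; the function and variable names are illustrative and this is a sketch of
the calculation described above, not the speaker's actual program.

# Sketch of the HBOD mass balance (illustrative only; assumes the tube is sealed
# under ambient air and that Henry's constant is inferred from the saturation DO
# at the laboratory temperature).

R = 0.0821          # universal gas constant, L-atm/(mol-K)
M_O2 = 32.0         # molecular weight of oxygen, g/mol
P_O2_AIR = 0.2095   # partial pressure of O2 in air at 1 atm, atm

def hbod(v_tube_ml, v_liquid_ml, do_initial, do_final, do_saturation, temp_c):
    """Headspace BOD (mg/L) from an oxygen mass balance on a sealed tube.

    DO values are in mg/L; do_saturation is the concentration in equilibrium
    with air at the laboratory temperature (e.g., Standard Methods tables).
    """
    t_kelvin = temp_c + 273.15
    v_gas_l = (v_tube_ml - v_liquid_ml) / 1000.0   # headspace volume, L
    v_liq_l = v_liquid_ml / 1000.0                 # liquid volume, L

    henry = P_O2_AIR / do_saturation               # Henry's constant, atm-L/mg (p = H*c)

    # Total moles of O2 at the start: gas phase (ideal gas law) + liquid phase
    n_start = (P_O2_AIR * v_gas_l / (R * t_kelvin)
               + do_initial * v_liq_l / (M_O2 * 1000.0))

    # Total moles at the end: gas-phase O2 is back-calculated from the final DO
    p_final = henry * do_final
    n_end = (p_final * v_gas_l / (R * t_kelvin)
             + do_final * v_liq_l / (M_O2 * 1000.0))

    # Oxygen consumed, expressed per liter of undiluted sample
    return (n_start - n_end) * M_O2 * 1000.0 / v_liq_l

# Example: 28-ml tube, 15 ml of sample, final DO of 2 mg/L at 20 C
print(hbod(28.0, 15.0, do_initial=9.1, do_final=2.0,
           do_saturation=9.1, temp_c=20.0))   # roughly 200 mg/L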

      Varying the volume of wastewater or the volume of our sample in a tube is
intrinsically simpler  than going  through  an extensive dilution procedure and perhaps
guessing it wrong.

      If you look at the DOC that is in a wastewater sample as a function of time, you can
see the DOC drop rapidly between 24 and 36 hours.  Sometime later, but before our next
sample,  the DOC dropped down to zero,  and the BOD exerted within that time came up
and, over several more days, continued to increase, probably due to endogenous decay of
microorganisms.

      Now, we have run this comparison of the HBOD test to the conventional BOD test
in  a number of ways.  We have  looked at just primary effluent for  domestic wastewater.
We looked at the HBOD in six different  experiments, on six different days at the waste
treatment plant, and we compared our HBOD to what the operators reported for their BOD
on that sample.

      Of course, there is some variability and that is pretty typical of any BOD analysis, but
the results are quite similar for both tests.

      We also determined the HBOD after one day, and we recorded that number, and
then, taking the ratio of the average one-day numbers to the average final numbers, we got
a fairly constant ratio with an average of about 0.48. So, we are trying to anticipate what the final
BOD is  going to be after five days.
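      (As a quick illustration of that factor, not a number from the talk: a one-day HBOD of
55 mg/L divided by 0.48 predicts a final value of roughly 115 mg/L.)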

      I  will show you those same numbers in this graph. In triangles are the BODs here,
and in boxes are the HBODs.  You would expect that at this treatment plant, things  should
be pretty constant with  time. All these  numbers should be about the same, and our
numbers, the HBOD numbers, compared  very well to the BOD numbers.
                                      433

-------
      The greatest outlier is the BOD, and it is a lot lower than these other ones.  In fact,
the operators thought that they had messed up, and our HBOD was right in line with all the
other ones.

      If we use the predictor, that one-day predictor, we divide by 0.48 and predict the
five-day headspace BOD and compare it to the BOD; this is what our plot looks
like.  Surprisingly, our number drops down and tends to agree with that  number, so it
suggests to us that something unusual was going on in that sample.

      We also compared the one-day HBOD prediction back against the HBOD itself.  We hoped the
agreement would be good and, indeed, it is, because this average is based on this data.

      We also examined what the effect of the seed was. We wanted to look at a standard
glucose:glutamic acid calibration procedure, so we looked at a variety of different seeds to
see if there would be any appreciable difference in our procedures.  We looked at some
trickling  filter  seeds, some activated sludge  seeds  from two different plants,  and a
commercially available inoculant (Polyseed).  We also looked at the effect of a nitrification
inhibitor.

      These are the HBOD and BOD results for the glucose:glutamic acid calibrations, and
you can see, of course, typical of these tests, a fair scatter in the data.  I think this represents
our own  relative inexperience in the  method, but, again, certainly the variability we have
with the  HBOD test is no worse than the variability with the BOD test.

      You can look at it this way.   This is  the BOD, and this is our HBOD  on  that
glucose:glutamic acid solution, and if our numbers agreed completely, they would lie on
this 45-degree line.  Since they are close to this line, there seems to be no statistical bias.

      One problem that we do have is there seems to be a little bit of a bias towards the
volume in the container.  This comparison was done earlier on. We have run some more
recent tests, and we do not see  quite this bias, but there is a little bit of experience that one
needs to go through with this test, also.

      There are some other uses that you can make of this  test.  For  instance,  if  you are
trying to improve a wastewater treatment process, you can  look at the effect of  nutrient
additions. Again, in the dilution test, the BOD test, you have stacked the deck against you.
You  have diluted out the sample, buffered it, added in lots of nutrients, and so forth. So,
you  really have no idea if your treatment  process is nutrient lacking or nutrient poor.

      pH changes  will  not occur in the BOD test; they will  occur  in the HBOD test,
although we generally have not seen a big pH  change.

       If you want to measure things like volatile solids and total solids as a function of time
in your HBOD bottle, you can  do that, because you are not diluting it out, whereas in the


                                       434

-------
BOD test, you have perhaps either swamped out the suspended solids with your inoculant,
or you are measuring a very small number.

      Some people have questioned the need for a nitrification inhibitor in the HBOD test,
because you have not diluted  out the nitrifiers, and what we found when we compared a
sample with no nitrification inhibitor to samples with nitrification inhibitor, was that for the
first five days, those numbers  were in good agreement, but by ten days, nitrification had
kicked in, and the oxygen demand had gone  up.  So, as long as you are running five-day
tests, there  is no problem.

      We also looked at comparing particulate versus soluble BOD.  We chose to  use a
5 µm filter (a Millipore 5 µm filter) because it actually passes particles about 1 µm in size
on average, and we were able to very easily measure soluble BOD.  Since we have a very
small volume of wastewater, we do not spend all day filtering it.  Again, we see the  same
effect  on additional oxygen  demand at 10 days.

      If you just put these results all on the same graph, you can compare them. This is
the soluble  BOD, this is the total BOD, and these both have nitrification  inhibitors, so you
can see what your fraction of the total is  in the soluble  form.

      In conclusion, the HBOD test is very simple, it is very rapid, and  it gives numbers
that are equivalent to the BOD test.

      The smaller test tubes that are used in the HBOD test can  really be a space saver in
your laboratory and allow you to run more replicates or just free up a lot of space.

      We have looked at wastewater samples, we have  looked at the calibration tests, and
we get comparable results in both those cases.

      The 1-day test may yield results that are comparable to the 5-day with some sort of
correlation factor that you probably have to put in for your own wastewater.

      I would just say to all of you here today, and to those people that you might know, that
I would encourage you to try using the HBOD test and see that it can actually update
a test that has been virtually unchanged for 50 or 60 years.

      Thank you.
                                       435

-------
                       QUESTION AND ANSWER SESSION
                                     MR. TELLIARD: Any questions?  Yes, ma'am?

                                     MS. DINSMORE: Donalea Dinsmore of Wisconsin
DNR. Have you tested very clean wastewaters? How sensitive is it? We have got effluent
quality around 5 and 10 mg/L.

                                     MR. LOGAN: Well, at 5 or 10 mg/L, you do not
need to dilute your wastewater anyway, so you can just fill up the test tube and seal it and
run your tests that way.  If you still want to investigate the effects of buffering and pH and
so forth, you can easily  do a 50/50 dilution and run the test.

      We have found fairly good replicates.  If we run triplicates, we get very comparable
results within the limitations of a BOD test, of course.

                                     MR. HORNG: Albert Horng from HTMA, Colmar,
Pennsylvania. My question was the opposite. How high did you go? I saw that 2250 was
your limit.  In some wastewaters, you have 5000 or 50,000.  Have you checked into that?

                                     MR. LOGAN: No, we have not gone to the other
extreme, but it would be a very simple...actually, it is a much  simpler approach at that
point. I would say you fill up, instead of that jar that we have with wastewater, you fill that
up with your dilution water, and you autodispense in 10 ml of that, and then you get a nice
100 µL pipette and put in your wastewater that way.

      So,  I think it would be very amenable to the higher concentrations, but if you have
too little volume of your wastewater, you are going to run into problems,  ultimately.

                                     MR. HORNG:  Another  question is in diluting
wastewater for the conventional method, you can find the toxicity from industrial waste very
easily.  Have you done  anything with this method?

                                     MR. LOGAN:  No.  Let me say two things. First
of all, the point is finding toxicity. Typically, if you just dilute down to what you expect,
you will dilute out toxicity problems,  but if you are really after finding out if you have one,
in this test, you could run one  that you do not dilute out, one you do dilute out, and then
you would have your answer very quickly.  In fact, you would be able to find ways to get
around that toxicity.

      I  do not know if I  am  being very clear on that, but the toxicity  question  is  an
important  point, and you do not always  address it when you immediately  dilute  it way
down.

                                      436

-------
                                    MR. TELLIARD: Yes? One more and we have got
to get going.

                                    MR.   SLENTZ:    Kurt  Slentz  with  Energy
Laboratories. I infer that you were incubating your samples at 20 degrees C.  Is that correct?

                                    MR. LOGAN:  That is correct.

                                    MR. TELLIARD:  Thank you, Bruce.
                                     437

-------
(Blank Page)
    438

-------
BOD TEST:  A DILUTION TECHNIQUE

Rate of substrate utilization, and therefore
oxygen demand, is decreased by dilution of
substrate:

             dS         µ X      S
             --  =  -  -----  -------
             dt          Y     K + S

where:    µ =  maximum uptake rate,
          K =  half-saturation constant,
          Y =  yield coefficient,
          X =  biomass concentration,
          S =  substrate concentration.

First order approximation:

             dS          µ
             --  =  -  -----  X S
             dt         Y K
         439

-------
DILUTIONS NECESSARY TO DETERMINE BODs
      FOR PREPARING A BOD CURVE

Volume (ml) in 300 ml     Range of BOD test
BOD bottle                (mg/L)
  300                     0-7
  100                     6-21
   50                     12-42
   20                     30-105
   10                     60-210
    5                     120-420
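(Note, added for illustration: each range is roughly a 2-7 mg/L usable oxygen-depletion window
scaled by the dilution factor 300/V; for example, 20 ml in a 300 ml bottle gives a factor of 15,
or 30-105 mg/L.)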
                440

-------
THE HBOD TEST:

•  Not a dilution technique, so composition of
   wastewater is not changed during determination
   of oxygen demand

•  HBOD test is based on mass balance

•  Oxygen in water is replenished by oxygen in air
   in a sealed tube

•  Plateau  at 24-36 h could allow a rapid (1 day)
   test
              441

-------

Concentration of oxygen in liquid and air are
related based on Henry's law:

               p = H c

where:    p =  partial pressure of oxygen [atm]
          H =  Henry's law constant [atm-L mg-1]
          c =  concentration of oxygen in the
               liquid phase [mg L-1]
Note:     pressure changes in the tube do not
          affect p

               p = y PT

where:    PT = total pressure
          y  = the mole fraction of oxygen in the
               air
              453

-------
At the start of a HBOD test, the moles of oxygen
in the gas phase, mg, can be calculated from the
ideal gas law as:

                      Pi Vg
               mg  =  -----
                       R T

where:    Vg = volume of air in the container [ml]
          R  = universal gas constant
               [0.0821 L-atm mol-1 K-1]
          T  = absolute temperature [K].

The moles of oxygen initially in the liquid phase
are:
                      ci Vl
               ml  =  ------
                      M 10^6

where:    M  = molecular weight of oxygen
          Vl = volume of the liquid phase
                             454

-------
The total moles of oxygen at the beginning of the
test are:
                      Pi Vg     ci Vl
               mi  =  -----  +  ------
                       R T      M 10^6

Similarly, the total moles of oxygen present in the
sealed container at the end of a HBOD test, mf, are:

                      H cf Vg     cf Vl
               mf  =  -------  +  ------
                        R T       M 10^6

where:    cf =  concentration of oxygen
                measured in the liquid phase (in
                equilibrium with the gas phase) at
                the end of the test
                    455

-------
            10^3 M Vg (Pi - H cf)
  HBOD  =  -----------------------  +  (ci - cf)
                (V - Vg) R T

(with V and Vg in the same volume units and concentrations in mg/L, the result is in mg/L)

where:    V    =   Vl + Vg , the total volume of
                   the container
          M    =   molecular weight of oxygen
          Pi   =   initial partial pressure of
                   oxygen (equal to H csat)
          R    =   universal gas constant
          T    =   temperature
          ci   =   initial concentration of oxygen
                   in solution
          cf   =   final concentration of oxygen
                   in solution
          csat =   saturation concentration of
                   oxygen (from Standard
                   Methods)
                              456

-------
The HBOD of samples for a 28-ml sample container containing 0, 5, 10 or 15 ml of
headspace, as a function of final dissolved oxygen concentration.
(Plot: HBOD versus final DO concentration, mg/l.)
                       457

-------
(Plot: HBOD, mg/l, and DOC, mg/l, versus time.)
                         458

-------
Comparison of HBOD and BOD tests using domestic wastewater (primary
clarifier effluent, Ina Rd. Wastewater Treatment Facility).

                                  Range of Values(a)
Exp.    HBOD(1)            HBOD               BOD                HBOD(1)/HBOD
 1      55-57  (n = 3)     106-130 (n = 3)    120-125 (n = 3)        0.47
 2      51-67  (n = 5)     95-111  (n = 6)    125-137 (n = 3)        0.60
 3      43-54  (n = 5)     103-115 (n = 5)    118-133 (n = 3)        0.44
 4      --(b)              96-107  (n = 5)    102-107 (n = 2)         --
 5      43-48  (n = 5)     109-126 (n = 4)    83-85   (n = 2)        0.38
 6      56-61  (n = 3)     103-114 (n = 5)    106-120 (n = 3)        0.53
Avg(c)                                                               0.48 (+/- 0.08)

(a) n is the number of tests performed, (b) data not taken, and (c) +/- standard deviation based
on averages of numbers in column.
                               459

-------
                Primary clarifier effluent HBOD and BOD values.
                (Plot: HBOD and BOD, mg/l, by experiment.)
                           460

-------
(Plot: BOD and HBOD(1)/0.48, mg/l, by experiment.)
                 461

-------
(Plot: HBOD and HBOD(1)/0.48, mg/l, by experiment.)
                         462

-------
 Comparison of HBOD and BOD tests using glucose:glutamic acid (150:150 mg/L)
                        and different microbial seeds.

                                      Range of Values
 Exp.           Seed               HBOD              BOD             Nitrification
                                                                      inhibitor

    7       Trickling Filter    142-194 (n=4)    114-155 (n=8)          Yes
              (Roger Rd.)

    8       Trickling Filter    143-179 (n=3)    108-126 (n=6)          Yes
              (Roger Rd.)

    9       Trickling Filter    109-149 (n=3)    144-156 (n=6)          Yes
              (Roger Rd.)

    10     Activated Sludge     170-194 (n=4)    124-142 (n=3)          Yes
            (Randolf Park)

    11     Activated Sludge     143-149 (n=3)    126-141 (n=4)          Yes
            (Randolf Park)

    12     Activated Sludge     126-132 (n=3)    132-162 (n=6)          Yes
            (Randolf Park)

    13     Activated Sludge     151-192 (n=4)    175-206 (n=4)          No
               (Ina Rd.)

    14         Polyseed         113-137 (n=2)    139-154 (n=4)          No
                                       463

-------
       HBOD and BOD measurements on a glucose:glutamic acid solution
       (150:150 mg/L).
       (Plot: HBOD and BOD, mg/l, by experiment, 7-14.)
                           464

-------
(Plot: HBOD, mg/l, versus BOD, mg/l.)
                    465

-------
                    HBOD versus TIME
                  For different volumes of liquid
                  (Plot: HBOD, mg/l, versus time.)
                           466

-------
OTHER USES OF HBOD TEST

• Test effects of nutrient additions

• Examine pH changes in undiluted samples mixed
  with wastewater

• Measure biomass concentrations (VSS and TSS)
             467

-------
             EFFECT OF NITRIFICATION INHIBITOR
             (Plot: HBOD, mg/l, versus days, 0-15; increase due to nitrification at later times.)
                      468

-------
             EFFECT OF NITRIFICATION INHIBITOR
             SAMPLE FILTERED THROUGH 5 µm FILTER
             (Plot: HBOD, mg/l, versus days, 0-15; increase due to nitrification at later times.)
             469

-------
          EFFECT OF FILTRATION THROUGH 5 µm FILTER
                  (Samples contain N-inhibitor)
          (Plot: HBOD, mg/l, versus days, 0-15, for unfiltered and < 5 µm fractions.)
                      470

-------
CONCLUSIONS

1.  HBOD test is a very simple and rapid test for
   determining oxygen demand.

2.  Smaller tubes  required  for  test can  reduce
   incubator space

3.  HBOD and  BOD results are comparable  for:
     - wastewater samples,
     - glucose/glutamic acid tests.

4.  1-day HBOD test may yield much faster results
   than  a conventional  BOD test, but it requires
   correlation  to a 5-day test.
                   471

-------
(Blank Page)
    472

-------
                                     MR. TELLIARD: Our final speaker is going to be
very quick, because  he is talking about high speed automated  BODs.   Greg Hill  is  a
Chemist with the Hampton Roads Sanitary District and is going to keep talking about BODs.

      Greg?


                    A HIGH SPEED AUTOMATED BOD  SYSTEM
                                     MR. HILL:  I will try to keep this short and sweet.
I know we have all had a long day, and let's say we saved the best for last.

      As Bill said, my paper or talk today will be on a high speed automated BOD system.
For those of you who may have heard the original paper,  I ask that you bear with me as I
cover some background information. We feel it is important information that will help you
in  understanding why we would choose to pursue this project.

      HRSD is a regional wastewater agency. We have 9 treatment plants with a combined
flow capacity of over 2 million gallons a day. We service 13 cities and counties which have
a total population  of about  1.4  million.   Currently, the CEL, Central  Environmental
Laboratory, is responsible for providing analytical support for these treatment plants as well
as  in-house programs and an aggressive industrial waste program.

      Right now, we run 600 BODs a day.  So, you can see that the BOD test, with its
large number of highly labor-intensive tasks, would be a primary candidate for automation.

      What we considered the prime tasks of automation were reading of the initial and
final DO concentrations, filling the BOD  bottles with dilution water,  adding the seed
material to BOD bottles, capping, uncapping,  calculating the BOD concentration, and
monitoring QC data.  You will notice this adds up to 8 man-hours per day.

      Based strictly on the salary for one  person for one 8-hour day, not counting any
benefits, that calculates to about $44,000 a year in savings.  At this point, it was financially
to  our advantage to investigate this possibility.

      After  doing some market research,  we found that automated systems could run
between $100,000 and $200,000, with the average system  cost being $150,000. We chose
a five-year time period as the time frame in which we  felt that whichever system we
purchased  should  be  able to  provide  us service  without  any  major  upgrades  or
modifications.

      With  that time frame in mind, you will notice that we could recover  the cost of a
$150,000 system in less than three and a half years.
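      (As a rough check on that payback figure: $150,000 divided by roughly $44,000 per year
in labor savings works out to about 3.4 years.)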

                                       473

-------
      At this point, we began investigating the market for systems that could do the tasks
we wanted. The problem was there was no such system on the market that could meet the
throughput demand that we had.

      That left us with three basic approaches. We could either modify an existing system,
we could have  a system custom-designed, or we could  do  a three-phase custom design
which was simply a detailed design study with a small working model with delivery of the
final  product.

      We investigated all three, and found that the modification of an existing system was
the best way to  go  for us.

      Once we chose the method we wanted to go with, our next concern was finding a
manufacturer willing to commit to this program as much as ourselves. We were looking for
a commitment both financially  and with technical resources.

      This commitment came in written form.  The manufacturer would provide us a
system, and if, at the end of a set, agreed-upon period, the system did not perform as we felt
it should or did not meet the written specifications, it would be taken back at no cost to the
district.  Obviously, this would provide some incentive for the manufacturer  to help us.

      The first  thing the manufacturer did was  suggest  that we go to multiple systems
instead of one unit  like we originally had thought of.  This would allow the manufacturer
to meet our system  throughput, and it would  give us the ability to have redundant systems
so that if one system went down, we still could perform the BOD analysis but  at a reduced
speed.

      One of the things we liked about  the system  was the simplicity of its mechanical
parts. This, you will notice, is the rack  movement system where it moves the  BOD bottles
to the probes.  It is a simple carriage with a one-driven gear system.

      We were doing so many bottles, it was very important to us that it be very simple
and easy to work with.

      The other main part was the manipulators.  They moved simply in x and y; there was no z
movement.  Again, the movement was simplified, thus, hopefully, giving us fewer problems.

      The next thing was,  what time frame would it take for us  to complete  this project?
You  will notice  side-by-side of  our original time table compared to our actual time table.
Originally, we felt the system would be delivered and the modifications completed in 60
days, with the system on-site setup in 7 days, system training in another 7 days, and 14 days
for a system evaluation, which was not  a method comparison  since the methods were
exactly the same.  It was simply  a performance evaluation to  make sure there was no
significant difference between the two  ways that the analysis was being done.

                                       474

-------
      We ended up with 90 days.  Needless to say, it took 90 days before
the system was actually delivered, 21 days  for system setup, 7 days for training...that
remained the same...and the system evaluation  took 70 days, approximately, giving us a
total of 188 which was basically double our initial estimates.

      The reason the evaluation took longer was because of some problems that occurred,
which I will address later.

      The system evaluation was composed of three parts; first, let us consider hardware.
Under hardware, we have four areas of concern.  First, the DO meter and probe.  They
were not what we had been using for years, so we had no feelings on how well they would
compare to what we had been using or how well they would hold up over time.

      Second was the stirring mechanism.  It was an adaptation by the manufacturer to
meet our specific needs. Thus, it had not been  field tested.

      The third was the bottle transfer system. Just because of the sheer volume of bottles
we were doing every day,  how well it would hold up over time.

      The last one was the transfer pump: would it aerate the water, causing it to become
unstable.

      We found that the third and fourth were  not a problem at all. Things worked out
much as we had hoped, but we did develop problems in the first and second
which will be mentioned as we go.

      Operational software, this is the second component.   What we basically  had to
decide was whether or not we wanted a central control of all three systems  through one
computer system or a separate system at each instrument. Originally, we wanted the option
to have both  to see which  one would work best for us.  Unfortunately, the manufacturer
could not  develop this quick enough and get it  into place before the system's delivery.

      We are currently running the instruments individually, with the manufacturer still working
on trying to give us the ability to control them all from one computer.

      The second one is the initial and  final DO operation steps.  We  consider BODs
basically in two parts: the initial steps before you place the bottles into the incubator, and then
the final readings once they come out.  We were concerned with how well these steps would work
and how well our people would interface with them.

      The third was the rack  identification.  This was another adaptation they did for us.
We felt that,  because  of the sheer volume of bottles  we did, we needed something that
would help allow us to prevent human error.
                                      475

-------
      The bar code tag on each rack would allow the bottles to be identified as they came
across the bar code reader, keeping track of the incubation time, and if the rack was not in
for the correct period of time, it would flag you to let you know that you had the wrong set
of bottles.  We also use color-coded bottles as a visual aid; between the two, we were
hoping to keep everything in order.

      The  last  one  was multitasking.  Because we were trying to save as much time  as
possible, we wanted the system to be multitasking, so that it could run while we used the
computer to generate tables, check QC, things of that nature.

      We wanted the ability to recalibrate. Again, this was a specialized item for us. We
felt, and we still do feel, that recalibration  is a very important process, that the meters will
definitely drift, probes will drift, depending on the sample type, the reagent in the probes
and a lot of other variables.  So, we wanted the ability to reset or recalibrate at a set period
of time.  This was something new for them to address.

      We entered the final, the major portion. This was the analytical performance. Again,
we were not thinking of it as a method comparison. This was strictly an automated version
of our same method compared to our manual version.

      What we were after was to check the precision and accuracy of the new method
compared to our current method.  The parallel study basically consisted of samples run on
both methods along with  some of the glucose:glutamic acid standards.

      Jumping right into the data evaluation: the first parallel study, which was performed
right after the instrument was delivered, gave us a 29 percent RPD and a 19 percent positive
bias.  Obviously, this was well above what we considered acceptable.

      Our in-house RPD  level right now ranges between 5 and 8 percent,  and we felt that
we needed  something under 10 percent from the  automated system to even consider  going
this way.
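      (For reference, the RPD figures quoted here are presumably the usual relative percent
difference between paired results, 100 x |x1 - x2| / [(x1 + x2)/2], computed for the automated
versus manual determinations on the same sample.)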

      The  manufacturer thought that  the recalibration  problem  was  caused by the
calibration vessel itself.  If you will notice, in the upper left is the calibration vessel for
the automated system, or it was.  The one at the bottom right is the probe manufacturer's own
calibration vessel.  They basically copied that calibration vessel.

      The  problem that came into play is that the stirring mechanism was not involved
when you are originally calibrating the probe.  Now that it was in the system and running,
the stirring mechanism was blocking the air flow from the water to the surface
of the membrane.

      Obviously, the membrane was not being  correctly air calibrated at this point.
                                       476

-------
      So, we added water to it to increase the level above the stirring mechanism so there
was nothing blocking the flow from the surface of the water to the membrane, and a second
run was done.  At this point, the RPD has dropped to 17 percent with a 17 percent bias.

      Again, this was very unacceptable, and we went back to  the manufacturer.  The
manufacturer at this point went back to the calibration vessel, and stated that they needed
to come up with a totally new calibration vessel.  The stirring mechanism was bringing in
water and taking out water as it went in and out, and the vessel was so small that those
changes in water would affect how much water was in it and whether or not it actually
covered the stirring mechanism.

      Amidst all this, they also wanted us to go away from recalibration.  They wanted us
to go to just a calibration check point at  which time we would go back to a known value
and just check for drift.

      We decided we could try that. As long as we were monitoring for drift and we had
the ability to calibrate, that was sufficient for us.

      This is the new vessel. Obviously, it is just a miniature BOD bottle.  It is filled to
the top with known dilution water.

      The instrument is brought up on line. At the check point, it goes back to  the
calibration bottle and monitors  for drift.  We can set the amount of drift that we feel is
acceptable.  If it does not meet that limit, it will flag itself, give an audible sound, and wait
for recalibration.

      We were hoping that this took care of our problem, and once it was brought on line,
we found  9 percent RPD but a 15 percent positive bias.  Well, a 9 percent RPD was great.
We were really happy with that, but we were  still concerned with the 15 percent positive
bias.

      We were really starting to get frustrated at this point.  What the system manufacturer did
at this point was go to the meter manufacturer and the probe manufacturer and show them the
stirring mechanism they were using, and the recommendation that came back was that the
tension on the membrane itself was too great and was causing it to warp.

      Excuse the artistry, and it is off the screen, but the top one has a solid band which
was clamped to the probe itself.  Depending on how tightly you had it pushed on, and whether
it struck a bottle going in, the warping of the membrane might change.  Thus, it would give
you erratic results.

      The second one is the modified version which is basically just a groove in the side
of the first one to relieve the tension, hopefully, eliminating the warping of the membrane.
                                       477

-------
      Well, with the new stirring mechanism, we still had 9 percent RPD and a 14 percent
positive bias.  At this point,  we called  the manufacturer back in.  We said obviously,
something is wrong.  We cannot accept this,  and they went off  in search of improved
stirring.  They really felt that the problem was in  the stirring.

      At the same time, we got together with our group of people, and we said okay,
obviously, there are only so many things that are different between what we are doing manually
and what we are doing automated.  It had to be something very simple.

      We came up with three possibilities.  It was either the calibration method itself, the
stirring mechanism, or the filling and seeding.  The filling and seeding proved not to be a
problem. There was no statistical difference between automated versus manual seeding and
filling.

      That pointed  back to the calibration or the stirring mechanism itself.

      This graph may be a little tough to see.  It shows the two meters. The middle one
with the red line is what we are  currently using with a Winkler calibration.  The upper line
was the new meter with this air calibration.

      What you will notice is that the difference between the two does not stay consistent.
It starts with a higher value, but because it is no  longer consistent  and the lines merge as
they go towards the lower end,  it would present  an automatic positive bias.

      The bottom line was the same  meter with a calibration  based on Winkler.  You will
notice this starts on the negative side,  a lower value, and it merges as  well toward the end.

      At this time, the manufacturer had gotten back with  us and said that they had found
some  real problems with the stirring mechanism,  and they  had developed a  new one.  We
said let's try a new stirring mechanism.  What it is, is that we simply dropped a stir bar in the
bottom of the bottle and put the probe in, and you will notice a great difference, a bigger
difference between the two meters, but the distance stayed consistent.  Because that difference
was consistent, it cancelled itself out.

      This gave us a 5 percent RPD and only a 1 percent positive bias. So, we were really
happy.  We thought we  were on track here.

      We had received the manufacturer's new stirrer, and it  is hard  to see  here, but I put
it in just to give you a visual  of what we are looking at.  The black item across the slide is
the probe itself.  The little piece on  the end is the stirring mechanism, and that band is
where the warping was occurring where it was attaching to the membrane  itself.
                                       478

-------
      I  will go back to my famous art work.  The lower one is the final version.  Notice
there are no guards on the front or the back.  It allows for better transfer of water through
it. The groove is still on the side to alleviate  any warping of the membrane.

      Once this was in place, we ran another study, and you will notice the lines became
very tight.  They tracked each other very well, and we found a 4 percent RPD and a 1
percent  negative bias at this point.

      At this point, it looked like we had really found the answer, and it was looking good.

      So, in conclusion what we found is that the project grew from a simple turn-key into
a joint system development. This was something we were not anticipating at all. We really
thought  that it was something that you brought in, set up, and should go since the methods
were the same.  So, it was very surprising that  we had to spend as much time as we did on
it.

      The next one was the investment of the personnel resources.  Again, this became
something that was almost frustrating to us in  that our 14 days became 70 days, and every
time we did a parallel study, we were talking hours upon hours of extra work that had to
be done in the normal working day. We had a 10 to 20 percent increase in our work load
without  increasing our personnel.

      The only thing that saved  us was that  it  was new and it was a challenge for our
people,  and we think that really helped them  cope with the extra stress.

      The third thing is that we may have entered into this with an overly optimistic view
of implementation of the automation into the  system.

      The final thing is that the project has shown the ability to accomplish our original
goals. The system has been on line. We have run it. It has  saved us the 8 man-hours we
were hoping it would.

      So, what I think it has shown is that we had a realistic view, or our goals were very
realistic  in saving those 8 man-hours. We are  hoping in the future to automate  as much as
we can, because,  obviously,  manpower is your greatest resource and  where  the most
expense is  in the lab.

      I  am sorry that I rushed through this. I  knew it was getting late. I have a  tendency
to do that.

      At this time, I will take any questions.
                                      479

-------
                       QUESTION AND ANSWER SESSION
                                     MR. HILL: Yes, sir?

                                     MR. STEVENS: Hank Stevens, Sacramento County
Regional Sanitation District.  I had a question on EPA's approval for this method and
whether or not the bottle tops that are used are approved, because I believe they
are Teflon coated and not ground-glass stoppered.

                                     MR. HILL: They are actually not teflon coated, no.
The method is approved.   We have a letter from EPA  stating that  it is an  approved
methodology.  There again, it is no different than the standard methodology.  It is exactly
the same.

      The stopper is a composite plastic, and it has a built-in water seal on the top. It is
tough to visualize. I wish I had brought one with me, but we do have a letter of approval,
yes.

                                     MR.  STEVENS:  And what was the net savings
overall on the...

                                     MR. HILL: The net savings will be difficult to tell
you at this moment,  because we have not completed it. I still project that it should come
out to be at least a $44,000 a year savings. Actually, I am expecting it to be more than that.
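
      For a rough sense of the payback implied by that figure, the cost recovery chart shown
with the slides can be approximated with a one-line calculation.  In the sketch below (Python),
only the $44,000-per-year savings comes from the talk; the three purchase costs are hypothetical
scenario values chosen to span the range on the chart.

      # Rough payback calculation; purchase costs are hypothetical scenarios.
      annual_savings = 44_000.0   # dollars per year, as projected in the talk

      for label, purchase_cost in [("low end", 80_000.0),
                                   ("average", 150_000.0),
                                   ("high end", 250_000.0)]:
          years = purchase_cost / annual_savings
          print(f"{label:>8} cost ${purchase_cost:,.0f}: payback in about {years:.1f} years")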

      Any more? (No response.)

                                     MR. TELLIARD: Thanks, Greg. Appreciate it.
      I would like to thank all the speakers today, and, if you would not mind, give them
a round of applause.

      Thank you.
      (The Conference was recessed at 5:15 p.m., to reconvene the following day, May 5,
      1994, at 8:45 a.m.)
                                      480

-------
    A HIGH SPEED AUTOMATED
        BOD SYSTEM

-------
              [Map: Interceptor and Plant Locations.  Legend: Existing Interceptors,
               Proposed Interceptors, District Boundary, Treatment Plants, Area
               Presently Served.]
482

-------
         TASKS TO BE AUTOMATED

         TASK                                         TIME (MANUAL) PER DAY
1.   Reading (initial and final) D.O. conc.           5.5 hours
2.   Filling BOD bottles with dilution water          45 minutes
3.   Adding seed material to BOD bottles              30 minutes
4.   Capping and uncapping bottles                    15 minutes
5.   Calculating BOD concentrations                   45 minutes
6.   Monitoring Q.C. data                             15 minutes
                         Total                        8 hours / Day
                     483

-------
                                                          Cost Recovery
          [Chart: dollar savings versus years (0.5 to 4.5), plotted against high end,
           average, and low end purchase costs ($50,000 to $250,000).]

-------
     APPROACHES
1. Modification of an existing system
2. Custom designed system
3. Three phase custom system
           485

-------
                  Application example -

                  Biochemical Oxygen Demand
                               Sequence of operation

-------
                 TIME TABLE COMPARISON
                                              Original    Actual
1.   System's modifications
     completed and delivered                  60 days     90 days

2.   System's on site set-up                   7 days     21 days

3.   System training for lab personnel         7 days      7 days

4.   System evaluation                        14 days     70 days (approx.)

Total time for automation conversion          90 days    188 days (approx.)

-------
         SYSTEM EVALUATION
Hardware
    1.   D.O. meter and probe
    2.   Stirring mechanism
    3.   Bottle transfer system
    4.   Transfer pumps
                 491

-------
Operational Software
    1.   Central control vs. instrument specific control
    2.   Initial and final D.O. operating steps
    3.   Rack I.D.
    4.   Multitasking
    5.   Recalibration
                      492

-------
Analytical Performance
    1.   Parallel study
         a.  precision
         b.  accuracy
    2.   Data evaluation

           FIRST PARALLEL STUDY
           -    29% RPD
           -    19% positive bias

           Potential causes
           -    Recalibration problem caused by calibration vessel
                   493

-------
MODIFICATION OF CALIBRATION VESSEL

         -    17% RPD
         -    17% positive bias

      Potential causes
         -    Calibration vessel and function
                    495

-------
 NEW CALIBRATION VESSEL

    -    9% RPD
    -    15% positive bias

 Potential causes
    -    Stirring mechanism causing membrane to warp
              497

-------
[Diagram: Original stirrer, Modified stirrer, Current stirrer]
                          498

-------
MODIFIED STIRRING MECHANISM

   -    9% RPD
   -    14% positive bias

   Potential causes
   -    Calibration method
   -    Stirring mechanism
   -    Filling and seeding
                499

-------
                                Meters and Calibration Comparison
               [Chart: D.O. versus number of results for YSI Winkler cal, WTW Winkler cal,
                and WTW Air cal; 9% RPD, 14% positive bias]

-------
                                          CEL Modified Stirring Mechanism
               [Chart: D.O. versus number of results for YSI Winkler cal and WTW Air cal;
                1% positive bias]

-------
[Diagram: Original stirrer, Modified stirrer, Current stirrer]
                          503

-------
                                       Manufacturer's Modified Stirring Mechanism
               [Chart: D.O. versus number of results for YSI Winkler cal and WTW Air cal;
                4% RPD, 1% negative bias]

-------
            CONCLUSIONS
Project size grew from a simple turn-key set-up
into a joint system development.

Large investment of personnel resources from the
manufacturer and the District.

Over-optimistic view of time frame needed for
implementing automation.

The project has shown the ability to accomplish
our original goal of saving 8 manhours per day by
successfully automating the repetitive, labor
intensive tasks found in  this analysis.
                  505

-------
(Blank Page)
    506

-------
                                                                   May 5, 1994
                                     MR. TELLIARD: Good morning. We would like
to start our session today. As you notice, we have it broken up into a number of different
areas.  For those of you who have been pestering me all week about the statistical papers,
you will have to wait a little while longer.  Try to hold it together.

      Our first speaker this morning is Bruce Colby.   Bruce has been coming to these
meetings probably about two meetings less than  George Stanko. So, he is really an old-
timer.  At that time, Bruce had hair.

      Bruce is going to be talking about the performance characteristics of isotope dilution
as it relates to the dipstick, as I affectionately refer to it, and the volatiles method. As you
know, over the years, Bruce has done a great deal of work for the Agency on the application
and introduction  of isotope dilution, so we welcome him back again this year.

      Bruce?
          PERFORMANCE CHARACTERISTICS OF AN ISOTOPE DILUTION
                       HRGC/LRMS METHOD FOR VOLATILES
                                     MR.  COLBY:   Thanks,  Bill.  I  am  glad to see
everybody was able to get up after the cruise  last night.

      Actually, what I am  going to be talking about is the incorporation  of capillary
columns into the method that the Office of Water uses internally and is available for people
to use externally for measuring volatiles in wastewater.  The method is isotope dilution
GC/MS, and it is Method  1624.

      The reasons...see that black thing forming in the middle of the slide? That is the film
melting.

                                     MR.  TELLIARD: I think we have a...

                                     MR.  COLBY:  We have a major meltdown  here.
We better shut that off,  Lee.

                                     MR.  TELLIARD:  These interactive graphics are
really neat.
                                      507

-------
                                     MR. COLBY:  Okay, here we go,  I  have got to
move these pretty quick, because it seems to melt them about as fast as they get up there.

      Well, okay, why are we interested in high resolution gas chromatography or capillary
columns? The real reasons are that wastewaters are pretty complex, usually, and they tend
to have a lot of interferences in them.  With the capillary columns, we can separate more
components, one from the other, so we can minimize the interferences. Also, because the
peaks are very narrow, typically, we will have better sensitivity. I will mention some more
about sensitivity as we go along, however.  In addition, the newer GC/MS instruments that
are available typically do not support packed column interfaces except as sort of afterthought
items, because the packed column technology seems to be fading into the background.  The
final thing is that the capillary column methods basically are more rugged than the packed
column methods.

      The goals of the work that we undertook were to retain as many of the Method 1624
procedural steps as we could.  We wanted to  keep the calibration solutions.  We wanted
to keep the QC procedures,  both initial and ongoing,  the same  as they were with the
packed columns as much as possible.  In order to do that, though, we had to back off on
one thing, and that was the Method 1624 requirement that specifies analyte retention times.
That was a requirement that was put into the method initially to make sure that people did
not run their GC temperature programs at warp 9, get the run over in 10 minutes, and have
all the compounds come out at one time.  The goal was just to make sure that there was
some decent chromatography going on.

      Well, the way we elected to hook up the columns...and I am going to talk about both
a narrow bore and a megabore setup and show you a comparison of the results from the
two...was to hook the purge and trap device up to the GC/MS instrument via a splitter, and
that can either be a splitter at the injection  port of the  instrument if you are hooking it up
through an injection port, or it could be a Swagelok T fitting with a needle valve on it or
something like that.

      This kind of a setup is pretty nice. It is very simple to work with. It works best with
newer instruments that are very sensitive, because we are going to throw away a fair amount
of the material going into the instrument.

      The actual hardware that we used was an OI purge and trap, a Hewlett-Packard 5890
GC, and a Fisons MD800 mass spec.

      The way we set up an instrument like this involves an optimization process whereby
we set the split ratio at some value.  Typically, we would set it at some low split ratio, so
we are not throwing away much of the material. We then run a standard and see what the
peaks look like. What we are looking for are nicely shaped peaks.
                                      508

-------
      One  of  the  problems  with  capillary  columns  and  purge and  trap desorption
instrumentation is that the flow rates from the purge and trap device typically are quite high,
whereas the flow rate through the GC column is relatively low. We have got to match these
things, so we use a  splitter to match the flows.

      Anyway, we  look at the peak shape, and if it is not acceptable, we split off some
more material, and we go through that cycle until we get  good peak shape.

      Then we run  a set of five replicates.  It could be a smaller number, but five is a good
number.  We look at the precision.

      If the precision is not good enough and the method has some specifications in it,
then we would increase the split again.

      The  reason that we increase the split is  that it allows us to throw away more water.
Basically, we are getting water out of the system by doing this. The more water we remove
from the  system, the more precise things are.

      The  water  interacts with the column and affects  retention time precisions.  When
water is co-eluted with analytes, it affects the precision.  So, we want to throw away a fair
amount of the water.
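
      The optimization just described is essentially a simple loop.  The sketch below (Python,
written as runnable pseudocode) lays it out; the two evaluation functions are stand-ins for the
analyst's judgment or measured peak widths and RSDs, not a real instrument interface, and the
thresholds simply echo the 4:1 and roughly 5 or 6:1 figures mentioned in this talk.

      # Schematic of the split-ratio optimization loop described above.
      # The checks are placeholders for inspecting a standard run and a
      # set of (e.g., five) replicate runs.

      def peak_shape_ok(split_ratio):
          # in practice: run a standard and inspect peak widths/shapes
          return split_ratio >= 4          # ~4:1 gave reasonable peak shapes

      def replicate_precision_ok(split_ratio, n_replicates=5):
          # in practice: run replicates and check area and retention time RSDs
          return split_ratio >= 6          # ~5-6:1 gave reproducible results

      split = 1                            # start low so little sample is discarded
      while not peak_shape_ok(split):
          split += 1                       # split off more (removes more water)
      while not replicate_precision_ok(split):
          split += 1

      print(f"selected split ratio: {split}:1")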

      Well, what we see if we  look at the peak width which is one of our key things,
basically, in looking at peak shapes  is  that at low splits...here I went all the way  down to
a 1:1 split...we get  fairly broad peaks,  and as we increase the split ratio, things kind of
narrow down.  They continue to get narrower as you go out, and we only went out to about
20:1 there,  I think, and things will get narrower as you continue on until you have split off
everything and can't see anything at all anyway.

      What is important here is  to see that you can actually get away with a fairly small
split.  A 4:1 split will  actually give you reasonable peak shapes.

      If you look at this in terms of what I call sensitivity, the peak heights are smaller per
unit material when the peaks are wide.  That is  what you expect. It is like a triangle.  When
it is wide at the bottom, it is not  very tall.

      So, as we  increase the split, things go out until you get to a relatively  constant
situation at  somewhere around 5 or 6:1 or so.  So, it is clear that we can work with fairly
small splits  if we need to,  but we can  use very large splits as well.

      If we look at the retention time precision that we get with a number of various splits,
one  thing we  see with a  1:1  split is that the precision is not particularly good for the
retention times.
                                       509

-------
      The  reason for that really goes back to the amount of water that is going on the
column.  The water interacts with the column material, and as you run and add more and
more water to the system, if you don't really bake it out for a long time between runs, then
the retention times start to shift a bit.

      As we go to more split, then  we start to see more precision in the retention time.
So, clearly, there is something that we need to be pretty careful with here.

      The  precision  of the areas that we get with these  different kinds of splits is shown
here.  With the 1:1 split or a small split ratio, the precisions are not very good.  Again, this
is pretty much a function of the water that is in the system.

      Now that we realize that, we could probably go back and put a long bake out cycle
in between each  run and improve the precision at a 1:1 split, but there are other things that
cause us not to want to use that kind of a split anyway.

      So, basically,  anything that has got a split of about 5 or 6:1 or more gives us nicely
reproducible results  both in the retention time sense and in the area sense.

      The conditions that we used for the narrow bore column are shown here.  Basically,
it is a 60-meter, 0.32 mm ID column, one that is geared towards volatiles analysis.  We use a
temperature program that starts out at 40. We hold that for 6 minutes, and then we use an
8 degree/minute ramp up to 160, and then ramp it up fairly quickly up to 250, and then we
hold it.

      We hold it for 10  minutes, again,  to bake out the water. We want to make sure that
water is out of the column. It is a really important thing to do.

      The  megabore setup is  shown in the next slide.  It is a bit more complex than the
narrow bore, because we now are adding a second splitter to the system between the GC
and the mass spec.   This is effectively an  open split interface type  thing.  In some
circumstances, we actually might want to add some makeup here or some makeup here to
adjust the flows around.

      Now we have got flow from  a purge and trap device that is reasonably compatible
with the GC, but if we put all the water that the purge and trap device puts out into the GC
column, we have reproducibility problems with the retention times and with the areas. So,
we do have to eliminate  some of the water and we do that using this first splitter, and then
the second  splitter is just to match the flow rate from the column with the gas load allowed
into the mass spec.

      The optimization  process for the megabore system  is a  bit more complicated.
Basically, we start out at some split and run a standard, check the peak shape, and if it is
                                       510

-------
not good enough, then we increase the split at the injector, the head of the column. We
are trying to get that peak shape right.

      Then we go and  we start to run some replicates,  and we  look at retention time
precision.  If the retention time precision is inadequate, then we increase the split at the
injector again.  So, we are trying to increase the split at this stage to get our chromatography
working well.

      Once we have satisfied the chromatography requirements, then we start looking at
the reproducibility of the areas that we get. If they are not reproducible enough, again, we
probably want to get rid of more water somewhere.

      We  can do that either at the injector or at the open split end.  It does not  really
matter too much.  Typically, we would increase the open split a little bit and throw some
material away there.

      If you throw away too much material on a megabore and cut the flow rate on the
column down, then the peak shapes starts to go away.  Again, I want to emphasize that if
too much water gets on the column, then we start to have a problem.

      Well, the GC conditions for the megabore data that I am going to be showing are for
a 75-meter, 0.53 mm ID column, same coating as the narrow bore.  The temperature program is
quite similar to the other one.  We have an initial 40 degree setup for 4 minutes, then
9 degrees to 200, and then we ramp it up fairly quickly to 250 and hold it, in this case, 20
minutes.

      The  megabore system gets a lot more material on column, so we have to hold that
final temperature longer to bake the water off the column.  It is quite a long hold.

      The  split ratios we ended up with were a 3:1  split at the injector and a 7:2 split at
the...in between the mass spec and the GC. Total split for that actually comes out to be
about 10:1,  I think 10.5:1.
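
      For reference, that overall figure follows from combining the two stages.  Treating each
x:y split as a factor of x/y reduction passed to the next stage (an assumption consistent with
the speaker's arithmetic rather than a statement from the method), the combined split is

      \[ \frac{3}{1} \times \frac{7}{2} \;=\; 10.5, \qquad \text{i.e., roughly a } 10.5\!:\!1 \text{ overall split.} \]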

      The chromatography that we get from these systems is shown  here. You can see that
the narrow  bore produces a very nice set of peaks, as does the megabore, compared with
the old packed column type runs.  The runs are shorter, but, again, keep in  mind that we
had a bake  out period  involved in these.

      Actually, this  last peak in  both of these is some hydrocarbon.  I don't know why it
is in there,  because it is not a compound  in the standard.

      So, your run actually can end at about this point here, but you have got to bake the
system out  and make sure the water is gone.
                                       511

-------
      One thing that is worth noting here is the water peak in the packed column is this.
This is how much material would go on the column if we put everything in, and this is
quite a bit of water.

      When we go to the megabore now, we have thrown away quite a bit of the water,
so we get a much smaller peak.  It starts to look like a real  GC peak, and with a narrow
bore, it is just a very small amount of water that is getting in there, because we split away
quite a bit of it at that point.

      So, you can see that the water goes away very effectively as we go to the narrow
bore arrangement.

      The cycle times for the runs look something like this.  I put a packed column run
up there. It is the little dotted line here. You can see that it runs out to 45 or 50 minutes,
normally, in terms of cycling from one run to the next.

      These long hold times are out here to get rid of water and, in some cases, to make
sure that some of the heavier material that is purged from the sample actually is run through
the system.  So, we do that to make sure that run number 2 is not impacted by some of the
chemicals that were in run number 1.

      You  can see that the narrow bore column does save some time, roughly 10 minutes,
so your analytical efficiency can go up a bit.

      The  acquisition actually  ends right about here where that higher ramp comes into
play.

      The  things that we note  in the chromatography in going to the capillary columns,
well, obviously, there are going to be a lot of changes in relative retention times.  Also, we
see things like separation of cis- and trans-1,2-dichloroethenes.

      There is a swap between co-elution for packed column ortho- and para-xylene to
meta- and para-xylene being co-eluted, and there are also a variety of separations that are
challenging, and we  have  to  pay attention to them.   We need to be careful  with
dichloropropene or 3-chloropropene and carbon disulfide.  They have to be separated.
They have some common masses that can cause problems.

      Chlorobenzene-d5 and  1,1,1,2-tetrachloroethane, these are  closely  eluted if the
chromatography is not  good enough.   Again,  we  run into some problems  with the
measurements.  Isobutanol and benzene-d6, and so on down the line.

      One, in particular, that was interesting was that vinylchloride and methyl ether-d6
were found to closely elute, and they come out very early in the run. The methyl ether-d6
turns out to be an impurity in the methanol that the labeled analogs are spiked into, and it

                                       512

-------
has a spectrum that, unfortunately, looks quite similar to the vinylchloride.  If one is not
careful, it could be confused quite readily.

       The  analytical precision that we get out of the systems is shown in the next slide
here. The solid line shows the required precision for the...these are the naturally abundant
materials. This is the requirement for the method.  You  have to be essentially below this
line to be producing acceptable data.

       You  can see that the data, both for  megabore and for narrow bore are all well within
the envelope.  There are a few that have less precision than others. These are typically the
more polar compounds that do not purge particularly well.  Consequently, there is not as
good a sensitivity for those compounds.

      The recovery of the analytes that we are dealing with looks something like this.  Here
I have got another envelope, the upper acceptable level,  and this blue line that is down in
here, a little harder to see perhaps, is the lower acceptable level.  Some of them, incidentally,
are down at zero, so if you do not see them, it is still okay. That is what the statisticians
told  us it was supposed to be.

       Anyway, the results that we get  are almost always right along the 100  percent
recovery line which is what we would  hope they would  be, with an occasional deviation.
Usually, those  have something to do with some minor  interference in a spectrum from some
other material.

      The next slide shows the recovery of the labeled analogs which, again, have acceptance
criteria associated with them.  Again, I have shown an envelope of what those recoveries are.

       You  can see that the labeled analogs that we have spiked in mostly come back at 100
percent.  There are a few gaps in here for analogs that were not actually in the
solution that we ran.  So, those are  not absent because we could not see them; they are
absent from the data because they were not really there.

       Again, everything is well within the  Method 1624 acceptance criteria.

      There are  also some minimum levels that  are identified for Method 1624, the
minimum level being, I think, 3.18  times the MDL,  and then you take  that number and
round it to some 2, 5, or 10 type number.
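
      As a sketch of that calculation (the rounding helper below is my illustration of the
"2, 5, or 10 type number" idea, not the official EPA rounding algorithm), the ML could be
computed along these lines:

      # ML = 3.18 x MDL, rounded up to 1, 2, or 5 times a power of ten.
      import math

      def round_to_125(x):
          """Round x up to the nearest 1, 2, or 5 times a power of ten."""
          exponent = math.floor(math.log10(x))
          for mantissa in (1, 2, 5, 10):
              candidate = mantissa * 10 ** exponent
              if candidate >= x:
                  return candidate

      def minimum_level(mdl):
          return round_to_125(3.18 * mdl)

      print(minimum_level(1.4))   # 3.18 * 1.4 = 4.45 -> 5
      print(minimum_level(6.0))   # 3.18 * 6.0 = 19.1 -> 20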

      Anyway, when we do the ML calculations, we  come up with MLs that are all below
the method requirements with the exception of one  compound here. This is actually 2-
chloroethylvinyl ether,  and  it is a fairly polar compound.  It was only a problem in the
megabore run, and I think that it is a consequence of some artifact in one of the runs.   I
believe if we re-ran it, it would fall inside with the  rest of them.
                                       513

-------
      We also have one compound that is quite high, but its ML does fall at the minimum
level that is required by the method. I  think that was dioxane, again, a polar compound
with polar compounds not purging quite as well.

      So, in conclusion with respect to capillary columns and Method 1624, it seems clear
that the capillary columns satisfy all the Method 1624 acceptance criteria, again, with the
exception  of absolute retention times  which, obviously, are going to  be substantially
different than they were with the packed column.

      The capillary columns are quite easy to use.  The narrow bore setup that we used
was easier than the megabore setup, because there were not as many splits to control and
fiddle around with.  We really prefer the narrow bore to the other.

      I should point out that neither of those systems required any cryogenics which was
something we were very much interested in avoiding.  That is one less thing to have to
deal with.

      The narrow bore column data, in general, was slightly better than the megabore data,
but they were really quite similar.

      I would like to thank Bill Telliard for supporting this work and answer any questions,
if there are any.
                       QUESTION AND ANSWER SESSION


                                     MR. COMO: Joe Como from HK Testing. What
about the column head pressures and the total flow rates on the megabore? What were they
and how did you deal with controlling  those?

                                     MR. COLBY: The head pressure on the megabore
was controlled really with a needle valve at  the splitter, and if we  needed more head
pressure, we would put makeup into it, just run it in with a regulator.

                                     MR. COMO: How was the control dealt with in
that situation?

                                     MR. COLBY:  I  don't recall  that right off hand.
This work was done a while back, but  it would be sort of normal for  megabore, and  you
can adjust how much flow goes through and how much of your sample goes through by
adding makeup if you need to. There is a lot of flexibility in there to control that.

                                     MR. COMO: In the narrow  bore?


                                      514

-------
                                     MR. COLBY: We just split more and more off and
jam more and more gas through in the other direction.

                                     MR. BOLT: Dan Bolt, Cambridge Isotope Labs.
      I just had a question.  I know that in 1624, there is a means to deal with labeled
surrogates which separate chromatographically from their native analytes, and I wondered whether
you saw more of that with the capillary columns.  Maybe we should go to more 13C instead of the
deuterium.

                                     MR. COLBY: There are more that are separated.
It does not really  seem to make much difference whether you  use  13C or whether we
separate them or not. If they are separated or not separated, it doesn't seem to be too much
of a problem, but there are some where a 13C might be  better, and  there are some where
none of the labels that we have available are really very attractive.

      Acetone is probably one example.  With acetone-d6, all the deuteria are alpha to a
carbonyl.  They tend to exchange, so you have to be pretty careful with acetone, but that
is really the only one that causes us much of a problem.

                                     MR. CORL: Ed Corl with the Navy Public Works
at Norfolk Naval Base.  I don't believe I  heard you say what type of trap you used in the
purge and trap.

                                     MR. COLBY: We were using the most recent trap
that Supelco is producing.  It seems to be a little more effective with water than the others.

                                     MR.   PRONGER:    Greg  Pronger,  National
Environmental Testing. How did you interface the transfer line to the column?  Did you go direct
to the column or through the injection port?

                                     MR. COLBY: We just hook them up with a
Swagelok T fitting.  We come out of the purge and trap with the existing tube into a T
fitting.  We actually try to run the end of the column up a little bit into the tube that comes
from the purge and trap and then put a needle  valve on the T sidearm.

                                     MR. PRONGER: Thank you.

                                     MS. KHALIL: Mary Khalil from the Metropolitan
Water District of Chicago.  In  my lab, I have the VOC analysis by megabore column,
capillary column, and the method jet separator, and I also do not use cryofocusing, and I
have very good recovery of all the gases and everything.  What is the advantage of the
split over what I do in my lab?  Just to avoid the jet separator?
                                      515

-------
                                     MR. COLBY:  We went with the split because it
is a very clean system to work with. The less material we get into the column and into the
mass spec, the fewer times we have to take it apart and clean it, and the fewer times we
have to replace the columns.  We just did not want to use the cryofocusing, and it allowed
us to avoid that.

      I think there are ways you could do it with megabore and not use cryofocusing, but
narrow bore gets pretty tricky if you do not use a split.

                                     MS. KHALIL: Yes, but I think the jet separator
method is doing the same thing.  Maybe the main disadvantage of the jet separator is that you
have to watch it very carefully when performing samples or something like that, but, still, it
gives very good separation.

      Thanks.

                                     MR. TELLIARD:  Thank you, sir.
                                       516

-------
                   Volatiles Analysis

                    Bruce Colby & Lee Helms
                     Pacific Analytical
                        Carlsbad, CA 92009
                         (619) 438-3100

-------
              • Retain calibration solutions and procedures
              • Retain initial QC acceptance criteria
              • Retain on-going QC acceptance criteria
              • Allow flexibility with absolute retention times

-------
               [Chart: peak width versus Split Ratio (x:1)]

-------
               [Chart: peak height versus Split Ratio (x:1)]

-------
       GC CONDITIONS
• 60 m x 0.32 mm, 1.8 u DB-624
• Temperature Program
  - 40 deg for 6 min
• 8:1 Split

-------
                 [Diagram: megabore setup - purge and trap to GC through a splitter (vent),
                  and GC to mass spec through a second splitter (vent)]

-------
                 GC CONDITIONS
               • 75 m x 0.53 mm, 3 u DB-624
               • Temperature Program
                 - 40 deg for 4 min
                 - 9 deg/min to 200 deg
                 - 25 deg/min to 250 deg
                 - 20 min hold
               • 3:1 Injector Split Ratio
               • 7:2 Open Split Ratio

-------
                                    CHROMATOGRAMS
                 [Figure: total ion chromatograms (Scan EI+, TIC) for the Narrow Bore,
                  Megabore, and Packed columns, 5.00 to 45.00 minutes]

-------
                 [Chart: cycle time comparison - Narrow Bore, Megabore, and Packed columns,
                  Time (min)]

-------
              RETENTION TIMES
               • Numerous RRT changes
               • Separation of cis- and trans-1,2-dichloroethene
               • m- and p-xylene co-eluted rather than o- and p-xylene

-------
• 3-Chloropropene and carbon disulfide
• Chlorobenzene-d5 and 1,1,1,2-tetrachloroethane
• Isobutanol and benzene-d6
• 2-Bromo-1-chloropropane and trans-1,3-dichloropropene-d4
• p-Dioxane and bromodichloromethane (labeled)
• Vinyl chloride and methyl ether-d6

-------
                 [Chart: analytical precision versus analyte (1-31), Narrow Bore and Megabore]

-------
                 [Chart: analyte recovery versus analyte (1-31), Narrow Bore and Megabore]

-------
                 [Chart: labeled analog recovery (%) versus analyte (1-31)]

-------
                 [Chart: minimum levels versus analyte (1-31) - Narrow Bore, Megabore, and
                  the Method 1624 requirement]

-------
              Capillary columns satisfy Method 1624
              QC criteria except absolute retention
              times.

              Narrow bore columns are more
              convenient to use than megabore
              columns.

              Narrow bore columns provide slightly
              better QC data than do megabore
              columns.

-------
               Acknowledgment

              The authors would like to thank
              William Telliard
            for his support and encouragement
                    in this effort.

-------
(Blank Page)
   540

-------
                                     MR. TELLIARD: Our next speaker, Mike Sepaniak,
is a Professor of Chemistry at the University of Tennessee, and his paper is entitled Micellar
Electrokinetic Capillary Chromatography: Application to Separations of Mycotoxins and
Polynuclear Aromatic Compounds.  There must be a shorter way to say that, but I think
it is all there.

      Mike?
           MICELLAR ELECTROKINETIC CAPILLARY CHROMATOGRAPHY:
             APPLICATION TO SEPARATIONS OF MYCOTOXINS AND
                          POLYAROMATIC COMPOUNDS
                                     MR. SEPANIAK:  I will add a few footnotes to that.
I am glad I  did not bring any expensive color slides.

      Let me start out by thanking the organizers for giving me the opportunity to talk
about some of our research at the University of Tennessee.   The technique that I  will be
talking about is a separation technique. It is a technique that probably many of you are not
that  familiar with.  It combines some of  the attributes  and characteristics of capillary
electrophoresis with chromatography, in fact, with reversed-phase LC.

      What I will do is talk a little bit about some of the principles and theory involved and
then get into a couple of environmental applications.

      I think the best way to understand the technique of micellar electrokinetic capillary
chromatography is really to look at the apparatus and explain what the experiment involves.

      The  heart of the system is a capillary. It is not megabore or narrow bore.  It is,  I
guess you would call it, microbore. We use columns, typically, that are about 50 um in ID,
less than a  meter  in length, typically,  70 cm in length.

      We will fill the capillary with an aqueous buffer solution, and place the ends of the
capillary in  reservoirs that contain that same solution.  We will put numerous other additives
in the system, too. We might add chelates, we might add soluble polymer, we might add
micelles...and that is what most  of this talk will be about...a variety of things we will put
into the system.

      When we apply a large voltage across the capillary, typically, 20,000 volts,  30,000
volts, what we observe is a transport phenomenon referred to as electro-osmotic flow.  That
is a flow of solvent from one end of the capillary to the other.  It is generally cathodic; that
is, it generally goes towards the negative side of the capillary.
                                       541

-------
      That electro-osmotic flow is fairly fast.  It will move solutes from one end  of the
capillary to the other end in a matter of a few minutes.

      So, imagine an experiment, now, where we place into this end of the capillary about
a 1 mm  plug of sample and turn  on the voltage. Neutral species will migrate from one end
of the capillary to the other due to this electro-osmotic flow.  The velocity  of the neutral
species  is  equal to the velocity  of the electro-osmotic flow. If we have cations, they are
going to have an electrophoretic component that adds to the electro-osmotic flow, so they
will move faster than the electro-osmotic flow.  If we have anions in our sample, they will
move slower, because they have, basically, a velocity component that is in  opposition to
the electro-osmotic flow. If we have two different cations with different mobilities, we may
be able  to separate them.  In fact, if they stay as narrow plugs, as narrow bands, that is a
likelihood.

      At one end of the capillary, then, we are going to  have  our detection scheme, and
I  am showing  laser-based detection.  Whenever possible, we  like to employ laser-based
fluorescence detection, but a lot  of the chromatograms that I will show in this talk are based
on absorbance detection.  The advantage of the laser is that its high intensity  leads to better
sensitivity  to compensate for the short path lengths that are involved. When you perform
on-column detection, the path length is essentially the diameter of the capillary.

      The problem is in separating neutrals.   Neutrals all move at the velocity of electro-
osmotic flow, so they are not separated at all.  But we can separate them if  they associate
in some manner with a charged  species, in other words, if we cause them in some way to
acquire  an effective electrophoretic mobility.

      The most common species that we will add to the  mobile phase for that purpose  is
SDS, sodium dodecyl sulfate, a C12 alkyl sulfate surfactant.  SDS forms micelles at
concentrations of about 8 mM.  The micelles have aggregation numbers of about 60, and
they have diameters of about 40 angstrom.  So, typically, we  will include  in the mobile
phase about 50 mM SDS  and  form micelles.

      Another type of micelle that we can add to our system involves bile salts.  We have
gotten a lot of mileage out of the bile salts.  They form smaller micelles.  Moreover, they
are more polar, which is an advantage in separating  hydrophobic molecules.  In addition
to that, they are chiral which allows us to separate certain enantiomers.

      Although they are  not charged additives, I will show some data  involving adding
cyclodextrins to the mobile phase.   Cyclodextrins are macrocyclic sugar molecules that
contain  a  hydrophobic cavity.

      You can buy cyclodextrins with different numbers of sugar units in the macrocycle,
and when you do, you have different cavity diameters, so you add a spatial  component to
                                       542

-------
interactions.  Since the cavity of the cyclodextrin  is hydrophobic,  it will interact with
molecules in a very dispersive manner.

      So, I am  going to  talk about using these things as additives  to separate neutral
species.

      Here is the MECC experiment.  That is a shorter title, so I  guess  I should have given
it to you when you were introducing me.  In this experiment, I have shown a section of the
capillary here.  Depicted is the rapid electro-osmotic flow.

      Notice it is plug-like, too. Unlike hydrostatic flow which is parabolic, electro-osmotic
flow is plug-like. That leads to better efficiency since there is less dispersion due to resistance
to mass transfer  in the mobile phase.

      This  represents the electrophoretic velocity of the  micelle.   We generally use
negatively charged micelles.  This SDS would  be  negatively charged.  And the little
snowflakes here represent those micelles.  So, they will have an electrophoretic component
in the opposite  direction.   Generally, it  is smaller than electro-osmotic flow, so the net
velocity of the micelle is in the same direction as electro-osmotic flow; however, it is much
slower.

      So,  we  have  a two-phase  system  that  we  have  created  here,  the  basis of
chromatography. The secondary phase is not stationary, though.  The secondary phase is
moving  slowly, and is composed of the micelles.

      The two phases are an aqueous mobile phase and the very hydrophobic interior of
the micelle.  So, it is a lot like reversed-phase LC.

      A solute that is neutral will distribute between the mobile  phase and the micelle; for
example if it is hydrophobic, it could largely dive into the hydrophobic core of the micelle.
If  it distributes  between  these  phases,  it  will   acquire a  velocity  that  is  somewhere
intermediate between electro-osmotic flow and the net velocity of the micelles. Moreover,
two solutes that have differing distributions between the micelle and the aqueous phase will
be separated  if they stay as narrow bands.

      That is all depicted in this slide.  V is the velocity of the  band.  This equation  is
general in nature, since it considers both charged and neutral species.

      k' is the familiar capacity factor.  Basically, in this technique,  it is the amount of
solute that is in the micellar phase divided by the amount that is in the mobile phase.

      There  is a mobility  term.  That term is the mobility (Mb) of the analyte, and if the
analyte is not charged, Mb will be zero.  E represents the field that we are applying.
                                        543

-------
      So, basically, what we have is a system that will allow us to separate neutral species
based on differences in  k' and charged species based  on differences in Mb.
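
      Written out, a plausible form of that relationship, using the usual MECC notation and
assuming the speaker's Mb corresponds to the electrophoretic mobility of the analyte (this is a
reconstruction from the description, not the slide itself), is

      \[ v \;=\; \frac{v_{eo} \;+\; \mu_{b}E \;+\; k'\,v_{mc}}{1 + k'},
         \qquad
         k' \;=\; \frac{\text{amount of solute in the micellar phase}}
                        {\text{amount of solute in the mobile phase}} \]

where v_eo is the electro-osmotic velocity, v_mc is the net velocity of the micelles, and E is
the applied field.  For a neutral solute, mu_b = 0 and the band velocity is simply a weighted
average of v_eo and v_mc.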

      Now, I know we do not like to look at equations unless we absolutely have to, but
I like to bring the equations on this slide to your attention, mainly because I want to bring
up that in achieving resolution, which is the object of any separation, there really are three
important factors.

      Efficiency, which we are all familiar with, is represented by the number of theoretical
plates.  With capillary electrophoresis, we can get efficiencies of greater than 1  million
theoretical plates for a 1 M length of capillary. For this MECC technique, phase transfer is
involved, so we never quite get to that efficiency, but in our best cases, we can  achieve
500,000 to 600,000 plates for a 1 M long column.  So, the efficiency is excellent.  I will
mention that a little bit  later on, too.

      Selectivity, the alpha factor in the illustrated equation, is the ratio of k' for adjacent
eluting peaks.

      System retention  is  incredibly important with this technique.  If you look here at the
inner parenthetical term,  you will  note that it does  not appear in the conventional
expression for resolution in chromatography.  It is not  present in the conventional form  of
chromatography, because  the stationary phase is just that.  It  is stationary.

      The T0 and TM represent the void time and the effective elution time of the micelles,
respectively. It is important to note that the micelles are incorporated into the entire system.
We are  not injecting them; nevertheless they have an  effective retention time,  tM.

      This system retention term has a dramatic effect on the technique, and I will illustrate
that shortly. You might notice that the first three terms in this equation are the same as in
conventional chromatography.
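
      For readers without the slide, the standard MECC resolution expression that this
description matches (a reconstruction, with t_0 and t_mc written for the speaker's T0 and TM,
N for the plate number, alpha for the selectivity, and k' for the capacity factors of the two
adjacent peaks) is

      \[ R_s \;=\; \frac{\sqrt{N}}{4}
         \left(\frac{\alpha - 1}{\alpha}\right)
         \left(\frac{k'_2}{1 + k'_2}\right)
         \left(\frac{1 - t_0/t_{mc}}{1 + (t_0/t_{mc})\,k'_1}\right) \]

The first three factors are the same as in conventional chromatography; the last parenthetical
term is the system retention, or elution window, term discussed here.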

      What we have here, then, since the micelles  are  eluting from the system, is an
elution window, and it is depicted here between T0 and TM. What you see with the MECC
technique is that hydrophobic molecules, the ones that would be retained the strongest,
completely associate with the micelle, and "pile-up" near the end of the elution range. That
is why this term, this elution range term, is so critically important.

      Sometimes, it is beneficial to extend the elution range.  It is also very important to
reduce  capacity factors to be  in the optimum range.  In conventional chromatography,
resolution improves with increasing capacity factor. It means longer analysis, but it is better.

      That is not true in MECC. With this technique,  there is an optimum k', generally in
the range of about 1 to  5.
                                        544

-------
      This slide demonstrates that effect. These are some derivatized n-alkylamines. I can
tailor a sample here to have molecules with a range of hydrophobicities. So, what you are
seeing is that the most hydrophobic molecules are piling up at the end of the elution range.
We cannot separate them.

      In the bottom chromatogram, we see easy separation of those compounds. What we
have done is to add about 22% alcohol, isopropanol, to the mobile phase. The isopropanol
has two effects. It reduces k' just like an organic modifier would do in reversed phase  LC.
It also extends the elution range by slightly reducing electro-osmotic flow.  You will notice
the last  peak is coming out much  later here.

      The end of the elution range was about 30 minutes in the original one, and in this
one, it is 80 minutes, but the last two peaks which are right on top of each other in the top
chromatogram,  now,  they are easily separated.  Actually,  we have too much baseline
between those compounds, which are labeled J and  K.

      So, that is the effect of the elution range. It is good to extend it and it is good to
reduce capacity factors to get in the optimum range of about  1 - 5.

      Here is another illustration of the separation power of this technique and some of
those same points. In fact, I will  bring up a  point about efficiency that is pretty critical as
well.

      This slide shows the separation of two binaphthyl compounds. This one happens to
be charged. What we have here is a charged molecule and a neutral molecule, and we are
separating them  using the MECC technique.  That  is one  of the advantages of MECC.
Capillary electrophoresis is also operative.

      These two binaphthyl molecules are chiral, so we are going to attempt a chiral
separation by using a chiral micelle in our mobile phase. We are going to use a bile salt,
sodium  cholate.

      In the first chromatogram shown, you do not see separation of the optical  isomers
of the neutral hydroxy binaphthyl or the charged phosphated binaphthyl compounds.  The
efficiency is rather poor. It is not bad, but it is not adequate to resolve the optical isomers.

      The reason efficiency is not high is because  we do not  have many micelles present
in this system.  The principal source of band dispersion with this technique results from the
polydispersity of the micelles. The micelles are not all the same size.  If they are not all the
same size, they are not all moving at the same velocity and that causes band dispersion.

      If you increase the concentration of surfactant, then the exchange of monomer with
surfactant becomes rapid. Basically, what it does is  it averages out the sizes of the micelles.
                                       545

-------
It is sort of like using a small diameter capillary or using small particles in conventional
chromatography to minimize the effects of resistance to mass transfer in the mobile phase.

      So, the difference between the  separations  labeled A  and  B is we doubled  the
concentration of surfactant.  In this case,  many more micelles are  present and there  is a
dramatic effect on efficiency. We observe nearly baseline resolution  of the optical isomers.

      Now, we increase the concentration for the separation labeled C even further. What
happens?  Because we have increased the phase ratio, we have increased the amount of
secondary phase and we pushed these components toward the back of the elution range.
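
      The behavior described here follows the well-known linear dependence of k' on surfactant concentration above the critical micelle concentration; a hedged sketch, with made-up constants rather than values from the talk, is:

```python
# In MECC, k' grows roughly linearly with (C_surf - CMC) because the micellar
# phase ratio increases; the CMC and slope below are illustrative placeholders.
def k_prime_vs_surfactant(c_surf_mM: float, cmc_mM: float = 8.0,
                          slope_per_mM: float = 0.12) -> float:
    """slope_per_mM lumps the water-micelle partition coefficient and micelle molar volume."""
    return max(0.0, slope_per_mM * (c_surf_mM - cmc_mM))

for c_mM in (25.0, 50.0, 100.0):  # doubling the surfactant roughly doubles the phase ratio
    print(c_mM, round(k_prime_vs_surfactant(c_mM), 2))
```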

      I  have nothing in here to mark T0 and TM, but TM would probably be somewhere
around there. So, these binaphthyls have excessive  k' values.  They are bunching up near
the end  of the elution range.

      We add some methanol to the mobile phase and separation is achieved, because we
have reduced the capacity factor.  It does not look like it.  They are coming out later,  but
we have reduced the capacity factor. If we were to look at the elution range now, it would
probably extend from there to someplace out  there.  So, we  have actually reduced  the
capacity factor even though retention time has  increased.

      What I should have done at this  point is to add an organic solvent like acetonitrile
which has a very small effect on the elution range but reduces the capacity factor.  We
probably could have achieved that same separation in much less time.

      Let me talk very quickly about a couple of applications. We tried to apply this MECC
technique to the separation of mycotoxins.  Collectively, the various mycotoxins shown in
this slide are present in  a lot of food samples.  They are naturally occurring as they are
produced by naturally occurring fungi.  Collectively, this group is very toxic.  Some of them
are carcinogenic, some are mutagenic, some are teratogenic.  So, it is not a particularly good
sample.  There are a lot of "bad actors" in this group of compounds.

      Notice that they are fairly polar, though.  They dissolve in water moderately well, the
aflatoxins not too well,  but some of them are fairly  polar.  One or two of the mycotoxins
even have acid or base functionalities and can  be charged.

      What I am showing you in this very  busy slide is one of the characteristics of  this
MECC technique.  You can change the separation  system incredibly fast.  It takes you a
mere 10 seconds to fill the capillary with a different solution.  Consequently, you  can
change the primary and secondary phases that you have in a matter of seconds.

      So, this experiment involves either SDS or sodium deoxycholate, the bile salt, under
changing acetonitrile concentration.  The effects that these changes have on capacity factor
are shown.


                                       546

-------
      Same thing down here.  We are changing both the pH and adding cyclodextrins in
the mobile phase.  All of that retention  data can be collected in half a day.

      I  know you cannot see it very well, but this slide shows two separations.  What we
did is to look at that data in the previous slide and  pick out two sets of conditions, one
involving, on top here, an SDS mobile phase with some organic modifier and gamma-
cyclodextrin. The bottom one is a separation using sodium  deoxycholate  micelles.

      Although you can barely see it, a lot of the lines here, these dashed lines, show how
the components are changing positions.

      Our goal was to pick two sets of conditions that yield rapid separations  of all ten
mycotoxins, which I do not think anybody has done previously, two sets of conditions that
exhibit unique selectivities to enhance our qualitative capabilities for this technique.

      The time axis is not shown, but both of these separations require about 12 minutes.
In fact, using exactly the same instrumental setup, just simply squirting a different mobile
phase into the capillary and allowing equilibration, both separations are possible in about
40 minutes.  You can cycle back and forth between mobile phases.  This enhances one's
qualitative capabilities.

      Reproducibility is not outstanding for this MECC technique because of the fact that
retention times  depend upon  electro-osmotic flow which you can adjust by adjusting the
field, but  it also depends upon the surface condition of the capillary,  and that is hard to
control.    Moreover, it depends upon  the phase  ratio  which  also depends on  many
interrelated parameters.  So, reproducibility in retention time is not outstanding.  Here we
are showing RSD values of 2% and 4%.
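
      For reference, a figure like those RSD values is just the percent relative standard deviation of replicate retention times; a minimal sketch with hypothetical replicate times, not data from the talk:

```python
from statistics import mean, stdev

def percent_rsd(values):
    """Percent relative standard deviation of replicate measurements."""
    return 100.0 * stdev(values) / mean(values)

replicate_t_r = [12.1, 12.4, 12.0, 12.6, 12.3]  # hypothetical retention times (min)
print(f"{percent_rsd(replicate_t_r):.1f} %RSD")  # about 2%
```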

      We randomly generated five samples of mycotoxins that contained the mycotoxins
that I showed and a number of polycyclic aromatic hydrocarbon interferences.   By injecting
standards using both sets of conditions and comparing sample and standard retention times,
we were able to identify  every mycotoxin in these artificially generated  samples with no
misidentifications or missed mycotoxins.

      So, the qualitative capabilities of this  technique are quite  good.   Moreover, the
technique is quite good at handling dirty samples.

      Another advantage of the technique is that it can be optimized for fast separations.
The separation of four aflatoxins shown on this slide is accomplished in 20 seconds.  So,
incredibly fast separations are possible with the MECC technique as well.
                                       547

-------
      However, I am misleading you a little bit. You have to use small diameter capillaries
in order to achieve speed. It is often necessary to employ laser-based fluorescence detection
with small capillaries.  These aflatoxins, fortunately, can be excited using a He-Cd laser.

      Switching gears very quickly, let me talk about some separations of, in this case,
molecules that are extremely hydrophobic. They are hard to separate with this technique
because of the fact that they like to dive into that micelle and stay  there.  In other words,
they co-elute near the end  of the elution range.  I  am going to mainly talk about some
anthracene and benzopyrene separations, both benzo-a and benzo-e,  as  well as some
separations of substitution isomers of these compounds.

      As you probably know, these PAHs,  especially benzo-a-pyrene, are carcinogenic, and
they are released into the environment when we burn fossil fuels.  So, they are important
pollutants.

      This is, again, an  interesting slide  in  that it  shows many of the  characteristics of
MECC  technique.     This  sample  contains  anthracene,  2-methylanthracene,   9-
methylanthracene, and benzo-a-pyrene.

      What we show in the first chromatogram on  this slide is everything co-eluting near
the end of the elution range. We have a short elution range.  In this case, we are using a
short column, high voltages, and all four compounds co-elute with k' values that are too large.

      When we  switch  to this system over here, we  have added approximately  15%
organic modifier to the mobile phase. Above about 30% organic  modifier, you lose the
micelles,  and everything falls apart.

      However, you notice here in the separation labeled b that we  have reduced the
capacity factors.   The capacity factors are near optimum,  but the  two methyl-substituted
anthracenes are still not separated.  We have optimized system retention, but we have not
achieved adequate selectivity.

      So, the difference between the separation b and c is that we have added cyclodextrin
to the mobile phase. The ability of the anthracene  to insert into the hydrophobic core of
the cyclodextrin  is determined by the position of that methyl group.

       Now,  what does the cyclodextrin  do? It is moving at the mobile phase's velocity.
Thus, it is functioning as an organic modifier except that  it is selective in the way that it
interacts with the solutes.

      This next slide shows a cyclodextrin  and  its interaction with benzo-a-pyrene  and
benzo-e-pyrene.  Notice the numbering of the carbons on these PAHs.  I am going to show
separations of these two  molecules, as well as a lot of substitutional isomers of benzo-a-
pyrene.

                                        548

-------
       Now, these molecules can then insert into the hydrophobic core of the cyclodextrin.
The slide depicts gamma-cyclodextrin, which is the largest common cyclodextrin.  When
solutes insert into the cyclodextrin they are prohibited from associating with the micelle.
The stronger the interaction  with the cyclodextrin, the earlier the PAH will elute.

       The mechanism for this separation is shown  in the next slide. Benzo-a-pyrene, can
interact with the micelle.  When it does, it forms an adduct that is moving slowly towards
the detector.  When it is in  the mobile  phase, associated with cyclodextrin, it is moving
rapidly towards  the detector.  Thus, the PAHs are distributing  between the micelle and
cyclodextrin phases.

       This slide shows separations of benzo-a-pyrene and benzo-e-pyrene.  The difference
between the separations labeled a, b, and c is we are adding increasing concentrations of
gamma-cyclodextrin to the mobile phase. By increasing the cyclodextrin concentration the
resolution between these compounds is improved.

       The separations of substitutional isomers of benzo-a-pyrene shown in this slide are
quite impressive. The separation of methyl substituted isomers is particularly impressive in
that high efficiency and excellent selectivity are observed.  The position  of the methyl
substitution influences the ease with which the isomer can insert into the cyclodextrin.

       The observed selectivity depends upon the type of cyclodextrin that is used.  Beta-
cyclodextrin is too  small.   Thus,  the k' values  shown  in this table are all  quite  large
indicating poor interaction with the cyclodextrin.  Conversely, the k' values obtained using
hydroxy-propyl-gamma cyclodextrin are all small. This large derivatized cyclodextrin does
not discriminate between the isomers.  However, by  using gamma-cyclodextrin, we observe
excellent selectivity.  The  values for the capacity factors vary greatly, and that is what is
really needed for adequate separation.

       I will entertain any  questions you have.

                                      MR. TELLIARD: Any questions?  (No response.)

                                      MR. TELLIARD: Thank you, Mike.
      (Slides for this presentation were not available at the time of publication for these
      proceedings.)
                                        549


-------
                                     MR. TELLIARD: Our next speaker is Dr. Bruce
R. Locke.  Bruce is employed by the Department of Chemical Engineering at the Florida
A&M/Florida State University College of Engineering in Tallahassee, Florida. His subject,
the analysis of kraft mill effluent using the non-purgeable total organic halide test, is very
pertinent to a new regulation recently written.

      So, this particular subject is very near and dear to our hearts at the present time and
certainly topical for our  meeting this year.

      Bruce?
              THE ANALYSIS OF KRAFT MILL EFFLUENT USING THE
                 NON-PURGEABLE TOTAL ORGANIC HALIDE TEST
                                      MR. LOCKE: I would like to thank the organizers
of this session for inviting us to present our work here.  I would like to acknowledge my co-
author, Dr. Geoffrey Watts, currently employed by GeoSolutions, Inc.

      Dr. Watts performed much of the work presented here while he was an administrator
for  site  investigation at the Florida  Department of Environmental Regulation, and he
subsequently used  some of this work as a master's thesis in chemical  engineering.

      First,  I would like to give an outline of the things we are going to cover today.  I will
begin with  an  introduction  to the  problem,  and show why we  are interested in this
particular problem.

      That  will lead to consideration of the  effluent water quality,  both upstream and
downstream of the  plant. This will lead to a discussion of the NPTOX,  non-purgeable total
organic halide,  analysis that we are considering here.

      I will thereafter discuss the development of an analysis for what we have termed
Fenextract, which is a precipitate formed at low pH from the kraft mill effluent.  We will talk
about the preparation of this precipitate, its distribution in the river, the determination of its
molecular weight, and some of its structural features.  In addition, we will discuss some
implications of this work for the chemistry of the bleaching process.  Finally, we will end
with conclusions and acknowledgements.

      The next slide shows a picture of the area that we are interested in.  There is  a city
called Perry, Florida, really a town, that is  about 20 miles upstream from the Gulf of Mexico
on the Fenholloway River shown  in red.  Upstream of the city is pretty much undeveloped
pine land and  swamp  forest.  There is  really no industrial or residential development
upstream of the town.


                                       551

-------
      However, as shown in this figure, there is a large kraft mill that discharges into the
river.  We will consider the effluent water quality analysis both downstream and upstream
of the discharge.

      Before we do that, I would like to talk a little bit about, for those who are not familiar
with it, the chemistry of the pulp process just to give you a feel for what it is we are trying
to analyze.

      The next slide shows the composition of the feed material that goes into a pulping
plant. As you can see, the primary components are cellulose, about 41  percent, a linear
polysaccharide, and lignin, about 29 percent, which is a highly colored brown aromatic
polymer.  In addition, there are some other compounds, including hemicellulose, and
extractives at lower levels.

      Of course, the product desired in  a pulping process is  cellulose.  The lignin  is
covalently bonded to the cellulose and the other hemicellulose compounds.  The next slide
gives an idea of what the lignin looks like. This is just a schematic developed by Adler  in
1977.

      The lignin is not a simple compound,  as you can see. There is a range of alkyl-alkyl
ether linkages, also aryl alkyl ether linkages, and a number of aromatic groups. There is no
repeating structure here, and it is a fairly complicated structure.  It also extends to three
dimensions.   This just gives some structural  features of what it looks like.

      The next slide shows the pulping  process.   In  a pulping plant, the wood  is
introduced, broken up, chipped, and the central heart of the process is a digester where,
under high pH conditions and high  temperature, the bonds are broken between the lignin
and the  cellulose  compounds.

      There are two streams coming out of the digester. One is the black liquor which is
primarily your waste  component containing large amounts of lignin. Much of the black
liquor can be recovered and various products made from that.

      However,  the  waste  stream we are interested in arises from the subsequent
downstream processing of the cellulose product. The cellulose product coming out of the
digester still contains about 10 percent of the lignin compound, which needs to be
removed.

      There is a series of processes in the bleaching plant, shown below, in which an alkali
extraction following a chlorine dioxide oxidation stage serves to remove the remaining
lignin.

      There are  other ways of oxidizing  the  material.  However, the plant  we are
considering  here uses chlorine dioxide.


                                       552

-------
      The next slide shows the locations of the sampling sites.  There is one sampling
station a couple of miles upstream of the plant, and another sampling station downstream
of the plant.  In addition, some samples were taken at the confluence of the river with the
Gulf of Mexico which is not shown on this slide.

      The next slide shows some historical data for the facility taken by the DER from 1980
to 1988.   I might mention that this  plant  has been  in  operation  since 1953 and has
essentially  been discharging roughly 50 million gallons per day into the river.   This is a
significant portion of the river flow when the weather is fairly dry.

      The top of the slide shows a color level right above the effluent discharge which is
about 500 pcu, and just below the discharge to the river, it is about 1600. So, you can see
there is a fairly significant increase in the color.

      I might also mention that the impetus for this study was several very dry years in the
late 1980s, when the residents in and around the town of Perry found high levels of color,
taste, and odor problems in their drinking water.  The motivation for this study was to
determine the source of these problems.

      Primarily what I will talk about is the chemical characterizations of the river. Other
work will  address the connection  between the well contamination and the river.

      Now, you might also note in this slide that the dissolved oxygen drops slightly, and
that the BOD level  is fairly low, although it rises somewhat at station 2 below the discharge
of the plant.  However, it is not a real  high level. Similarly, COD is not changed a whole
lot upstream and downstream of the discharge.

      You can see, of course, that due to the high salt content introduced in the plant, there
is a very large increase in the conductance in the water.  The pH level is approximately
neutral, and this is due to the fact that the streams from the bleaching process, the alkali
extraction and the  chlorine dioxide treatment, are mixed  together to give a fairly neutral
solution that is introduced into the river.

      Also shown are high levels of sulfate and chloride ions.

      The next slide just reiterates a little bit of this data that was taken at the beginning
of this study. There  is a fairly small change in temperature, the conductance is also very
high, and  the pH is very similar to  that given before.   Again, the high  levels of sodium,
sulfate, and chloride ions are what are important.

      Now, of course, it is very important to note that the inorganic parameters are only
going to get us so far.  Really, the focus of this study has to be on organic materials, although
we have already noted that the BOD  and  COD levels  were not really very indicative of
what is going on in the waste.


                                       553

-------
      The next slide shows the results for the TOC analysis.  The TOC upstream of the
plant discharge is about 73 mg/L, and downstream it is about 140 mg/L.  TOC gives us
some idea of what is going on; however, there is still a significant amount of TOC in the
river upstream due to the nature of the land above the plant.

      You might note that this value, the 73 mg/L, is fairly similar to values obtained in the
literature for undiluted streams.

      The next slide shows an attempt to determine some of the specific compounds that
might be present in the water.  This is the result of using EPA Method 625 to
analyze for extractable organics.

      There are many compounds present; however, they have very low concentrations.  We
are talking about 5 µg/L to 80 µg/L.  There is approximately 300 µg/L of unidentified
compounds.  This gives us a total of about 0.5 mg/L.

      Now, if you recall, in the last slide we had over 140 mg/L of TOC.  So, we are really
not making a lot of headway by trying to analyze for extractable compounds.
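
      The arithmetic behind that point is simple; using the figures just quoted, the identified and unidentified extractables account for well under one percent of the downstream TOC:

```python
extractables_mg_per_L = 0.5   # total extractable organics from the Method 625 analysis
toc_mg_per_L = 140.0          # downstream TOC
print(f"{100 * extractables_mg_per_L / toc_mg_per_L:.1f}% of TOC")  # ~0.4%
```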

      This leads us to try to look at a more global analysis of the waste, and this  is the so-
called NPTOX analysis that we are interested in here. This is a modification of EPA Method
9020 for total organic halogen.  The method has a detection limit of roughly 10 µg/L of
NPTOX as chlorine.

      The procedure involves a modification to the EPA method by purging the sample
with CO2 or, preferably, helium to remove any volatile organics.  I might mention that the
waste discharge from the plant goes through  an aeration lagoon with about a  one-day
residence time before it is discharged to the river, so  many of the volatile organics are
already  removed.   However,  it is  important to remove any remaining volatile  organics,
primarily chloroform, to have  a good comparison with  other samples.

      This material is then passed through two activated carbon beds which adsorb the
organic  compounds.  The column  is then washed with potassium  nitrate to remove any
inorganic chlorides.

      The columns are then combusted at 1000 degrees C, producing combustion gases
which can be precipitated with silver acetate to form silver halides.  Finally, the decrease
in silver is measured coulometrically.
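
      The conversion from measured charge to halide mass is ordinary Faraday arithmetic; this sketch is not the method's prescribed calculation, just the underlying relation, assuming one electron per silver ion and reporting the result as chlorine:

```python
FARADAY = 96485.0   # coulombs per mole of electrons
M_CL = 35.45        # g/mol; organic halide is reported "as chlorine"

def ug_cl_from_charge(charge_coulombs: float) -> float:
    """Micrograms of halide (as Cl) equivalent to the coulometrically measured charge."""
    return charge_coulombs / FARADAY * M_CL * 1.0e6

# ~10 ug of Cl; for a one-liter sample this corresponds to the stated 10 ug/L detection limit.
print(round(ug_cl_from_charge(0.0272), 1))
```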

      The results for the river water are shown in the next slide where the NPTOX level
upstream of the plant discharge is roughly 90 µg/L, and downstream of the plant, it is
roughly 15,000 to 16,000 µg/L.  So, you can see that there is a very significant increase in
the level of NPTOX from upstream to downstream.
                                       554

-------
      The next slide shows the ratio of NPTOX to TOC. One of the important things to
note is the elevated level for the upstream sample.  Now, this is in line with some literature
that appeared at about the same time that this work was performed.  There are a number
of works by Asplund and Grimvall in Sweden who reported adsorbable organic halide or
total organic halide, rather than the non-purgeable TOX, per TOC to be roughly in the range
of 1000 µg/g, as shown here for the unpolluted river upstream of the plant.
      So, we have roughly a factor of 100 increase in NPTOX to TOC ratio.
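
      The ratio itself is straightforward to compute from the reported concentrations: NPTOX in µg/L divided by TOC in mg C/L, times 1,000, gives µg of organically bound halide per gram of carbon:

```python
def nptox_per_toc_ug_per_g(nptox_ug_per_L: float, toc_mg_per_L: float) -> float:
    """NPTOX/TOC ratio in micrograms of organic halide per gram of organic carbon."""
    return 1000.0 * nptox_ug_per_L / toc_mg_per_L

print(round(nptox_per_toc_ug_per_g(90.0, 73.0)))        # upstream:   ~1,200 ug/g
print(round(nptox_per_toc_ug_per_g(16_100.0, 140.0)))   # downstream: ~115,000 ug/g
# The slide's 112,000 ug/g corresponds to a downstream NPTOX near 15,700 ug/L,
# i.e. toward the low end of the reported 15,000-16,100 ug/L range.
```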

      What we would like to do at this point is try to get an idea of the composition of
NPTOX.  This analysis was made somewhat simpler by the observation  of workers who
were trying to test for metals.  They found that in lowering the pH of the river water, a
precipitate was formed.

      The next slide shows the procedure here that is used to purify lignin compounds from
various sources. The river water is first filtered, then acidified to lower the pH, heated, and
centrifuged to produce a precipitated compound.  The compound is then redissolved and
washed in several cycles.

      So, what we have here is, finally, a redissolved and purified lignin compound.  I
might note at this point  that this procedure is  based  very much on procedures used  by
Fricke and  Martin and others for analyzing and purifying lignin compounds from black
liquor. However, this work was the first work to try to apply this kind of procedure to the
effluent from a bleaching plant.

      The next slide shows the NPTOX distribution in the Fenholloway River.  As we have
shown already, the total is 16.1 mg/L.  Now, the supernatant from the extraction and
purification process shown in the previous slide is about 11.1 mg/L, which represents about
70 percent of the total NPTOX in the water.

      About 31 percent of the NPTOX is therefore, by difference, contained in the
Fenextract.

      The next slide shows the molecular weight distribution of the supernatant from ultra-
filtration.  The species above 30,000 nominal molecular weight give an NPTOX value of
roughly 0.43 mg/L, and the species between 10,000 and 30,000 molecular weight give a
value of about 0.37 mg/L.  Finally, below 10,000 molecular weight, there is about 7.2
mg/L.

      The values on the right-hand side represent the recovery.  There is some loss of the
sample during the filtration, which is to be expected due to the nature of the compound.
However, that 7.2 mg/L represents approximately 45 percent of the total 16.1 mg/L of
NPTOX.
                                      555

-------
      So, we  have roughly about  45 percent of the NPTOX below  10,000 molecular
weight, and the Fenextract is roughly 30 percent of the total NPTOX.
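
      The mass balance just described is worked out below from the values on the slides (all in mg/L of NPTOX as chlorine):

```python
total_nptox = 16.1   # downstream river water
supernatant = 11.1   # remaining after extraction/purification
below_10k = 7.2      # ultrafiltration filtrate, < 10,000 nominal molecular weight

fenextract = total_nptox - supernatant  # Fenextract NPTOX, by difference
print(f"Fenextract: {fenextract:.1f} mg/L "
      f"({100 * fenextract / total_nptox:.0f}% of total NPTOX)")        # ~5.0 mg/L, ~31%
print(f"< 10,000 NMWL: {100 * below_10k / total_nptox:.0f}% of total")  # ~45%
```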

      I might note also at this point that most of the color in the river, approximately 65
percent of the color of the sample, could be attributed to the Fenextract itself.  I also will
note that  Fenextract could not be precipitated from water taken upstream of the plant.

      The next slide shows the elemental composition of Fenextract.  It can be seen to
consist primarily of carbon, hydrogen, oxygen, and nitrogen.  We also have elevated levels
of sulfur and chlorine, which indicate that we are getting chlorinated thiolignin compounds.
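
      One way to see what those sulfur and chlorine levels imply is to convert the weight percentages on the slide into approximate atom ratios per 100 carbons; this conversion is an editorial sketch, not an analysis reported in the talk:

```python
atomic_mass = {"C": 12.011, "H": 1.008, "O": 15.999, "N": 14.007, "S": 32.06, "Cl": 35.45}
weight_pct = {"C": 54.27, "H": 5.20, "O": 29.12, "N": 1.32, "S": 3.41, "Cl": 3.50}

moles = {el: wt / atomic_mass[el] for el, wt in weight_pct.items()}
per_100_c = {el: 100.0 * n / moles["C"] for el, n in moles.items()}
for el in ("C", "H", "O", "N", "S", "Cl"):
    print(f"{el}: {per_100_c[el]:.0f} per 100 C")
# Roughly C100 H114 O40 N2 S2 Cl2 on this basis, consistent with a chlorinated thiolignin.
```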

      The next slide shows the distribution of the different carbon types in the Fenextract
as obtained using C13 NMR.  The Fenextract is shown in the first column.

      In the second column is Indulin AT.  This is a purified native lignin which would
represent a lignin that would be found before any kind of bleaching process. Of course, the
Fenextract is what is found downstream of a bleaching plant.

      The important point to note here is that the aliphatic character of the Fenextract is
significantly  increased from the  42 percent in  the native lignin to 55  percent in  the
Fenextract.  So,  it is showing there  is a  significant  increase in aliphatic content and,
simultaneously, a decrease in aromatic groups.

      The carbonyl and carboxyl groups also show an increase relative to the native lignin.  The
methoxyl groups were not determined due to the fact that the signal on the NMR was not
completely resolved.  However, other methods found approximately 2  percent in  the
Fenextract.

      It is important to note that we have both an increase of carboxyl groups as well as
a decrease of the aromatic content.  This gives us some indication of what is happening in
the bleaching process.

      The next slide shows a chemical reaction scheme developed by Lindgren for various
model lignin compounds.  Compound 1 is a lignin guaiacol.  This compound is subjected
to chlorine dioxide treatment to yield a phenoxy radical, and this phenoxy radical, number 2,
can either go through a chlorite ester to form compound 6, which is a methyl muconate, or
it can form compound 4, which is a quinoid product.

      Given the information shown on the last slide, where the aromatic content was very
much decreased and the number of carboxylic acid groups was increased, we can see that
ring opening is very much what we would expect to be occurring in the bleaching plant.

      So, we feel that the evidence from this study of the Fenextract gives us some idea that
there is an increase of ring opening.

                                       556

-------
      Finally, in conclusion, I would like to say that the chemical characterization of the
water quality in the effluent from kraft mills has led to an improved measurement, termed
NPTOX, of non-purgeable total organic halide. This is just a modification of EPA Method
9020.  The ratio of NPTOX to TOC was elevated by a factor of approximately 100 from
upstream to downstream of the kraft mill.

      Of course, there are other sources of NPTOX, which are documented in the literature
and which were found upstream of the plant.  The NPTOX to TOC ratio alone is therefore
not a definitive indicator of the effluent.

      However, an acid insoluble precipitate, termed Fenextract, could only  be isolated
from  the downstream  part of the effluent.   This consists  of  large molecular weight
chlorothiolignin derivatives.

      Finally, the structure of this Fenextract, as obtained by the C13 NMR, indicates that
the chlorine dioxide oxidation reaction in this pulping process leads to aromatic ring
cleavage and the formation of smaller lignin fragments.

      Finally, in acknowledgements, I would like to acknowledge the support of the Florida
Department of Environmental Regulation Water Quality Assurance  Trust Fund for this work
and also thank the organizers of this conference.

      The work reported here has been published in Environmental Science & Technology,
Volume 27, Number  12, 1993, pp 2311-2317.

                                      MR. TELLIARD:  Any questions  out there?  (No
response.)

                                      MR. TELLIARD: Are you sure? Bruce, thank you
very much.
                                       557


-------
      THE ANALYSIS OF KRAFT MILL
  EFFLUENT USING THE NONPURGEABLE
  TOTAL ORGANIC HALIDE (NPTOX) TEST
                      by
 Geoffrey B. Watts+ and Bruce R. Locke [Speaker]
       Department of Chemical Engineering
       FAMU/FSU College of Engineering
           Tallahassee, FL 32316-2175
Paper presented at the EPA's 17th Annual Conference on Analysis
of Pollutants in the Environment, Norfolk, VA, May 3-5, 1994.
+ present address: GeoSolutions, Inc., P.O. Box 7638, Tallahassee, FL 32314.
                   559

-------
                OUTLINE

Introduction

Effluent Water Quality Analysis

NPTOX Analysis

Fenextract Analysis

      •  Preparation and elemental composition

      •  Distribution in River

      •  Molecular Weight Determination

      •  Structural Features

Implications for chemistry of bleaching

Conclusions

Acknowledgements
                      560

-------
[Map graphic not reproduced; it shows the town of Perry, the Fenholloway River, and the Aucilla Wildlife Management Area, with a scale in miles.]
Figure 1.1   Perry/Fenholloway River Location Map
                    561

-------
Composition of Pinus Strobus
            (Timell, 1967)

  Constituent                          %

  Cellulose                           41
  Lignin                              29
  Hemicellulose
      Arabino-4-O-Methylglucuronoxylan 9
      O-Acetyl-Galactoglucomannan     18
      Arabinogalactan                  1
  Extractives                        1-2
                  562

-------
[Structural formula graphic not reproduced.]
                                         Structural Features of Lignin (Adler, 1977)
                                        563

-------
[Process flow diagram graphic not reproduced. Labeled elements include: wood, barking drum, chipper, chip pile, digester (white liquor in; black liquor out), knotter and screening, unbleached pulp, filter, tall oil and turpentine recovery, and a bleaching plant with alternating chlorine dioxide (D) towers and alkali (E) towers producing bleached pulp. Source: Adapted from Rydholm, 1935.]
                                        564

-------
[Map graphic not reproduced; it shows the sampling station locations and staff gauge SC-4, with a scale in miles.]
                   Figure 3.1   Location of Fenholloway Sampling Stations
                                        565

-------
 Fenholloway River - Median Water Quality Data
            1980 - 1988 (FDER, 1990A)

                            Station 1       Station 2         Station 3
 Parameter                  US 27 Bridge    US 19/98 Bridge   Fishcamp

 Color (PCU*)                  499             1640              652
 Dissolved Oxygen (mg/L)       3.1              1.9              1.2
 BOD5 (mg/L)                   1.7             27.7             21.3
 COD (mg/L)                    413              422              195
 Conductance (µmhos/cm)        206             1993              931
 pH (S.U.)                     6.5              7.1              7.1
 Sulfate (mg/L)                5.0              110               49
 Chloride (mg/L)               8.6              345              205

 *PCU = Platinum Cobalt Units
                       566

-------
           Inorganic Parameters

                     Upstream         Downstream
Parameter            Station # 1      Station # 2

Temperature (°C)        26.2             25.2

Conductance             72               1780

pH (S.U.)               6.4              7.1

Chloride (mg/L)         10               590

Sulfate                 35               260

Sodium                  2.6              490

Iron (mg/L)             0.59             0.53

Manganese (mg/L)        0.008            0.19
                  567

-------
   Fenholloway River - TOC Analyses

               Station 1           Station 2
               US 27 Bridge        US 19/98 Bridge
Parameter      (6/18/91)           (1/19/90)

TOC (mg C/L)       73                  140
                     568

-------
   Extractable Organic Components**

Extractable Organic                       Concentration (µg/L)

2,4-Dichlorophenol                               8
2,4,6-Trichlorophenol                           24
2,3,4,6-Tetrachlorophenol                        6
Bis(2-ethylhexyl)phthalate                      26
Sulfonyl bismethane*                            30
Dimethylcyclohexene*                            40
Dimethyltrisulfide*                              5
Dimethylhexadiene*                               8
Trichlorodifluoroethane*                        80
Methoxyphenylpropanone*                         10
4-Hydroxy-3-methoxybenzaldehyde*                10
Tetrachloromethoxyphenol*                        5
Hexadecanoic Acid*                              10
Unidentified Components                        300 (7)

                                       Total  < 0.5 mg/L

*  Tentatively identified compound.
** Water sample taken from monitoring station 2.
                       569

-------
              NPTOX Analysis

•  Nonpurgeable total organic halide

•  Modification of EPA Method 9020 for Total Organic
   Halogen

•  Method detection limit of 10 µg/L NPTOX as
   chlorine

•  Procedure:

       1)  Purge with CO2 (or He) to remove volatile
           organics

       2)  Pass through two activated carbon beds

       3)  Wash columns with potassium nitrate

       4)  Combust columns at 1000° C

       5)  Precipitate combustion gases with silver
           acetate to form silver halides

       6)  Measure decrease in silver ions
           coulometrically
                        570

-------
Fenholloway River - NPTOX Analysis

Sampling Station         NPTOX (µg/L)

Station 1                     90
(upstream)
Station 2               15,000 - 16,100
(downstream)
                571

-------
      Total NPTOX / TOC Ratio

Station 1             1,200 µg/g
(upstream)
Station 2           112,000 µg/g
(downstream)
                 572

-------
    River Water
          |  Filtration
    River Water
          |  Acidification with
  Lignin Precipitate
          |  Heat
  Lignin Precipitate
          |  Centrifuge / Acid Wash / Centrifuge
  Lignin Precipitate
          |  Dissolve in NaOH
   Lignin Solution
          |  Acidification with
  Lignin Precipitate
          |  Heat
  Lignin Precipitate
          |  Centrifuge / Acid Wash / Centrifuge (Three Times)
  Lignin Precipitate
          |  Deionized Water Wash / Centrifuge
  Lignin Precipitate
          |  Oven Dry
   Lignin Sample

Figure 3.2  River Water Lignin Extraction/Purification Scheme
                          573

-------
             NPTOX Distribution

Sample                             NPTOX
                                   (mg/L)     Recovery

Fenholloway River Water             16.1
(downstream)
Supernatant                         11.1       (69%)
(from extraction / purification)
Fenextract (by difference)           5.0       (31%)
                        574

-------
      NPTOX Molecular Weight Distribution
           of Supernatant Ultrafiltration

                                   NPTOX
                                   (mg/L)   Recovery

Retentate > 30,000 NMWL             0.43      3.8%

30,000 > Retentate > 10,000
NMWL                                0.37      3.3%

Filtrate < 10,000 NMWL              7.2       65%
                                              72%
                   575

-------
Elemental Composition of Fenextract

      Element                   Weight %

      Carbon                     54.27
      Hydrogen                    5.20
      Oxygen                     29.12
      Nitrogen                    1.32
      Sulfur                      3.41
      Chlorine                    3.50
                    576

-------
          Carbon Distribution in Fenextract
             and Indulin AT by C-13 NMR

Carbon Type      Fenextract %             Indulin AT %
                 (after bleaching)        (before bleaching)

Aliphatic               55                      42
Aromatic                34                      56
Carboxyl                 5                       2
Carbonyl                 6                      ND
Methoxyl                NQ                      ~9

ND - Not detected   NQ - Not quantified
                         577

-------
[Reaction scheme graphic not reproduced. A guaiacol-type lignin unit (I) is oxidized by chlorine dioxide to a phenoxy radical (II), which reacts further either through a chlorite ester to a methyl muconate (VI) or to a quinoid product; polymer-bound lignin and ClO2/OClO species are indicated in the original figure.]
                Figure 5. Chlorine Dioxide Oxidation of Lignin (Lindgren, 1971)
                                         578

-------
               CONCLUSIONS

•  Chemical characterization of the water quality in the
outfall from a kraft mill has led to an improved
measurement, termed NPTOX, of the nonpurgeable
total organic halide.

•  The NPTOX / TOC ratio allows organochlorine
compounds derived from kraft mills to be
distinguished from naturally occurring organochlorine
compounds from blackwater rivers.

•  An acid insoluble precipitate, termed Fenextract, has
been isolated and consists of large molecular weight
chlorothiolignin derivatives.

•  The structure of Fenextract gives indication that the
chlorine dioxide oxidation reaction in pulp bleaching
leads to aromatic ring cleavage and polymerization of
smaller lignin fragments.
                    579

-------
        ACKNOWLEDGEMENTS

•  Support by the Florida Department of Environmental
Regulation WQ005 Water Quality Assurance Trust
Fund for this work is gratefully acknowledged.
                       580

-------
                                     MR. TELLIARD:   We would like to finish
somewhere close to the schedule today.  Some people have travel schedules to deal with.

      So, if we could take a break now and get back in here in  15 minutes, I  would
appreciate it.  Thank you.

      (A brief recess was taken.)

                                     MR. TELLIARD: We would like to get going.  Our
next speaker is Ileana Rhodes from Shell Development.  Ileana is a Research Chemist in the
Analytical Chemistry Group there.  I first met this young lady when she was working on
drilling muds with us, which was a few years ago.  She started this, as was pointed out, as
a high school project, so she was very young at the time.  As part of her senior year field
trip, she got to do drilling muds.

      We are going to get a case study today on the pitfalls of using conventional TPH
methods for source identification.

      Thank you and welcome.
 PITFALLS USING CONVENTIONAL TPH METHODS FOR SOURCE IDENTIFICATION:
                                 A CASE STUDY
           Ileana A. L. Rhodes, E. M. Hinojosa, D. A. Barker, Robin A. Poole

             Shell Development Company, Westhollow Research Center
                                  Houston, TX
                                    Abstract

A case study involving soil contaminated with used motor oil illustrates the problems in
using conventional TPH methods for identification of source.

                                  Background

There are several approaches for assessment of water and soil contamination. The terms "oil
and grease" and "total petroleum hydrocarbons" (TPH) are used to describe the extent of
contamination  in  water and soil.  However, the  actual  value determined  is method
dependent and is defined by the method used. Most of the methods have been adapted
from EPA methods that were originally developed for the determination of target analytes


                                      581

-------
at trace (ppb) concentrations in clean matrices. All methods involve some sort of extraction
procedure followed by analysis of the extracts using gravimetry, infrared spectroscopy and/or
gas chromatographic procedures. Gas chromatographic procedures either use selected
components or sum all components detected within a given range. There are numerous
publications documenting the problems and describing studies that emphasize the severe
limitations of all of the commonly used TPH methods1-3. The ability to interpret TPH data
is only as good as the knowledge of the type of hydrocarbon contamination2.  In addition,
results are entirely dependent on the method used. Table 1 includes a summary of the most
commonly used analytical methods for the  determination of TPH. With the exception of
ASTM Method 3328-90 (identification of waterborne oils), none of the methods listed have
any provisions  for  product type  identification4.  This  ASTM method  does  not  include
quantitation  since it  describes  characterization of separate phase hydrocarbons  and it is not
a TPH method.

Most of the investigations involving potential contamination by petroleum hydrocarbons are
related to underground storage tanks and pipeline releases. These types of investigations are
primarily regulated  by the states. Each state has  its own  criteria  and methodology for
determination of contamination  based on  analysis  of the potentially affected media using
methods such as those listed in Table 1. Some of the practical difficulties include the fact
that   notification/action/cleanup  levels  are different  depending  upon the  type  of
contamination5. Typically, gasoline range material is considered more hazardous to human
health and the environment than heavier distillates because of its monoaromatic content as
well as  its higher solubility and volatility. However, the methods specified by each state
have no real protocol for identification of product type. Rather, some states simply require
a combination of chromatographic techniques and define "gasoline" and "diesel" materials
based on carbon ranges. Whatever elutes within a selected range is called "gasoline range"
or "diesel range" whether or not the labels are applicable. As indicated in Table 2, there is
a great deal of overlap in carbon ranges for all petroleum products.

                               Practical Limitations

The most frequently used approach is to use two methods, one for the volatile range and
another for the semivolatile range. The volatile range analysis is usually done by analysis
of water samples or soil extracts by purge and trap GC with a flame ionization detector and
is often called the "gasoline range organics" (GRO) method: Modified EPA Method 8015,
California LUFT 1, Wisconsin GRO, Washington GRO, etc. The methods typically quantitate
anything eluting after C4, C5 or C6 and up to C10 or C12, depending on the state. The
semivolatile range is done by analysis of a concentrated extract by direct injection GC with
flame ionization detection. The method is often referred to as "diesel range organics" (DRO):
Modified EPA Method 8015, California LUFT 2, Wisconsin DRO, Washington DRO, etc.
Depending on the state, laboratory or analyst, the method may include a range from C9,
C10 or C12 up to C25 or C30.
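
As a hedged illustration only (the window limits, the peak list, and the helper names below are invented for this sketch, not taken from any regulatory method), the range-based quantitation described above amounts to assigning each detected peak a carbon-number equivalent from its retention time and summing areas within each state-defined window:

```python
GRO_WINDOW = (6, 10)    # e.g., a C6-C10 "gasoline range organics" window
DRO_WINDOW = (10, 28)   # e.g., a C10-C28 "diesel range organics" window

def sum_range(peaks, window):
    """Sum peak areas whose carbon-number equivalents fall inside the window."""
    lo, hi = window
    return sum(area for carbon, area in peaks if lo <= carbon <= hi)

# Hypothetical (carbon_number, area) pairs for a weathered product:
peaks = [(7, 120.0), (9, 310.0), (11, 450.0), (14, 600.0), (22, 180.0)]
print("GRO:", sum_range(peaks, GRO_WINDOW))   # 430.0
print("DRO:", sum_range(peaks, DRO_WINDOW))   # 1230.0
# A single product contributes to both windows, which is why "gasoline range"
# and "diesel range" labels alone cannot identify a source.
```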
                                       582

-------
Anything detected in the specified ranges is automatically called gasoline range or diesel
range.  In many cases the "range" qualifier is dropped or ignored and the final report often
identifies the contamination as "gasoline" and/or "diesel". This can result in erroneous
assessment of the nature and source of contamination as well as false assignment of liability.

Many laboratories attempt to determine a product type by matching the chromatograms to
other chromatograms obtained from analysis of reference products. This approach is referred
to as "fingerprinting". This approach can be quite useful if there is appropriate expertise in
recognizing different products' fingerprints. However, this can be difficult because the
fingerprint is segmented (obtained from two different analyses), the products are weathered,
there are bias due to sample preparation, evaporation, purging efficiency, and the great deal
of overlap among product types. For example, a portion of diesel is in the gasoline range
by the  ranges defined in the methods. Conversely, a portion of gasoline is within the diesel
range as defined in the methods.

A  better approach to identify product type is to use a single chromatographic analysis that
includes a  wide carbon range that encompasses gasoline as  well  as diesel ranges and
beyond. Such methods have been used primarily by the oil industry to optimize refinery
processes and for product characterization. The methods can be modified to analyze extracts
rather than neat materials6. Using a single chromatographic procedure, it is more convenient
to evaluate fingerprints. Figures 1-3 and Tables 2-3 illustrate the degree of overlap of gasoline
and diesel. Because different states have different carbon number cutoffs, results can vary
a great deal. The distribution becomes even more complex with weathering of gasoline, as
shown in Figure 4, where a severely weathered gasoline would result in reporting of a
significant concentration attributed to diesel. Without seeing and
understanding the "picture"  or chromatogram, it would be impossible to properly assess
source. The  common   practice  is to  simply report gasoline range and   diesel  range
concentrations  for TPH without accompanying chromatograms. This information is often
used incorrectly because it may  be interpreted simply as a mixed  gasoline and  diesel
release.

                             Additional Considerations

Even in cases where analysis identifies gasoline range material  with the proper fingerprint
for gasoline,  there still  could be  significant problems  in source identification without a
deeper understanding of the nature of contamination. This is illustrated with a case study
where  gasoline range material was properly identified in the sample. However, the source
of gasoline was not from an underground storage  tank release but rather from used motor
oil in soil.

                                    Case Study

An investigation report indicated that there was soil contamination in a site that had been
a service station for about 60 years and an auto repair shop for the last 20 years. There had


                                        583

-------
been  several generations of underground storage tanks on the site through  the years.
Gasoline as well as used motor oils were stored at the site. Since the auto repair shop did
not utilize the fuel tanks, they were removed. The contractor's report indicated that there
was no evidence of leaks but there was some overall soil contamination at the site.

Conventional Approach

Soil analysis was  done using California TPH methods where the gasoline range and the
diesel  range materials are determined  by gas chromatography. Numerical  TPH results
indicated that there was gasoline present in the soil (anywhere from not detected up to 3000
ppm). There was limited information on the diesel range. This information was interpreted
as a release from  leaking tanks.

Chromatograms were requested  and reviewed. Upon review of the chromatograms, it was
evident that a weathered gasoline fingerprint was present in the gasoline range method and
that material heavier than diesel fuel was also present in significantly higher amounts in the
diesel range method. Some type of oil was the likely source of the heavy material which
was not reported since the bulk of it is out of the diesel range. The fingerprint of this heavy
material suggests the presence of motor oil.

After review of chromatographic fingerprints, the site was classified as containing gasoline
range material as well as a heavy oil contamination. The  implication was that the previous
service station owner was liable since the current auto repair shop did not store fuels at the
site. Both operators used a waste oil tank. Most  of the analytical information was on the
gasoline range and aromatics (BTEX) composition since the investigation was centered on
finding gasoline. This is a typical situation where investigations are not conducted to
determine what is really present but rather to look  for specific components to support a
case.

Interpretative Single Analysis Approach

It is quite important to use information from many sources  when attempting to identify
contamination source.  Upon review  of  the contract laboratory data  (particularly the
chromatograms), it was evident that weathered gasoline range material  was present in the
soil. It was  also  evident that  the  bulk of contamination  was a heavy  oil whose
chromatographic fingerprint and carbon number range resembled a motor oil. The main
issue was whether leaky fuel tanks were the source of the gasoline range soil contamination
or whether there were other possible sources.

It is a well known and normal phenomenon that used motor oil becomes diluted with fuel
during engine operation. Gaseous fuel from engine blowby and liquid fuel washing down
the cylinder walls past the piston rings are  introduced into the crankcase of the engine. Oil
dilution is a factor in both gasoline and diesel engines,  and allowances are made for its
presence in oil formulations and engine performance testing. Blowby (combustion chamber

                                       584

-------
gases blowing past the piston rings) can be more pronounced in high mileage engines with
worn piston rings. In addition, under cold start and warm-up conditions more liquid fuel will
be transported past the rings and into the engine lubricant. As much as 10 percent of the
motor  oil can consist  of gasoline7.  After engine warm-up, some of the more volatile
components  of the gasoline will vaporize and be removed by  the  positive  crankcase
ventilation (PCV) system. The higher boiling fuel components will remain in the motor oil,
and the appearance of the gasoline portion of a GC trace from analysis of such an oil will
resemble heavily weathered gasoline.  Oil dilution is a factor not only in high mileage
vehicles and  cold start conditions, but also in new vehicles driven at highway speeds. A
study was initiated to obtain fingerprints of used motor oils and to understand  how used
motor oils could be interpreted using conventional GC TPH methods.

Used Motor Oil  Study

Since samples from the  site were not available, a study was initiated using new and used
motor oils and spikes into soil.  A protocol was developed to characterize the oils. Samples
of new oils and  used motor oil from different  sources were analyzed using conventional
gasoline  range purge and trap GC  methods  and  a single  analytical  method for  the
determination of petroleum hydrocarbons6. Figures 5-8 show the chromatograms from single
gasoline to diesel range analysis of new oils. Figures 9-12 show the results obtained from
analysis of used oils from Toyota, Honda, Ford and Chrysler engines. Clear evidence of
weathered gasoline is observed in all chromatograms.

The  four used motor oils and  a new motor oil were spiked in soil at  approximately  1%
concentration. The soils were extracted with methanol and analyzed using the conventional
volatiles method for gasoline range and using the single analysis for the entire gasoline to
diesel range (up  to  ~C30). The results from both types of analyses are listed in Table 4.
Figures 13-17 show the  clear fingerprint of weathered gasoline in the spiked samples. All
of the used oils contain  0.6 to  1.9%  gasoline  range material.
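
A rough, back-of-the-envelope expectation for such spikes (an editorial sketch, not a calculation from the study) is that a soil carrying about 1% used oil, with the oil itself carrying 0.6 to 1.9% gasoline range material, should show on the order of 60 to 190 mg/kg of gasoline range organics:

```python
def expected_gro_mg_per_kg(oil_fraction_in_soil: float, gasoline_fraction_in_oil: float) -> float:
    """Gasoline-range concentration expected in soil from an oil spike (mg per kg of soil)."""
    return oil_fraction_in_soil * gasoline_fraction_in_oil * 1.0e6

print(round(expected_gro_mg_per_kg(0.01, 0.006)))  # ~60 mg/kg
print(round(expected_gro_mg_per_kg(0.01, 0.019)))  # ~190 mg/kg
```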

Figure  18 shows the chromatogram obtained from analysis of one of the soil spikes (1%
used oil/Honda) using  a conventional purge and trap TPH method that  can  only give
information on the gasoline range. Gasoline range material is clearly evident; however, the
method does not provide any other pertinent information, such as the presence of any
additional material. The problem is that this is often the only type of analysis done. Even
if diesel range analysis is done, the presence of other heavier materials may be disregarded
because they may fall outside of the carbon number ranges defined by the methods. The
project engineer typically gets a report indicating numerical results which in
this case point overwhelmingly to a "gasoline" source.

Study Implications

This study clearly indicates that when used motor oil is present, one should also expect to
find the fuel that the particular engine operated on. This is obviously a case where finding


                                       585

-------
gasoline does not automatically imply a leaking underground storage tank or a pipeline
product release.

                                    Summary

It is clear that to properly identify a source, conventional TPH methods are inadequate and
can lead to serious misidentification of sources. This has implications beyond legal liabilities,
including an impaired ability to correctly establish the source of major contamination, to stop
the source, and to choose proper remediation technology. In this particular case study,
conventional methods indicated that soil contamination was due to a leaking storage tank
when in fact the source was used motor oil which had been improperly disposed of. Motor
oil cannot be readily determined using conventional chromatographic methods for TPH, but
useful information about its presence can be obtained from the chromatographic fingerprint.
                                   References

1.    T. L. Potter, "Analysis of Petroleum Contaminated Soil and Water: An Overview",
      Petroleum Contaminated Soils, Vol. 2, Chapter 10, E. J. Calabrese and P. T. Kostecki,
      Editors, Lewis Publishers, 1989.

2.    B. Sullivan, S. Johnson, "'Oil' You Need to Know About Crude: Implications of TPH
      Data for Common Petroleum Products", Soils, May 1993, pp. 8-13.

3.    G. S. Douglas, K. J. McCarthy, D. T. Dahlen, J. A. Seavey, W. G. Steinhauer, R. C.
      Prince, D. L. Elmendorf, "The Use of Hydrocarbon Analyses for Environmental
      Assessment and Remediation", Journal of Soil Contamination, 1(3), pp. 197-216, 1992.

4.    Standard Test Methods for Comparison of Waterborne Petroleum Oils by Gas
      Chromatography, ASTM D3328-90.

5.    T. Oliver, P. Kostecki, "State-by-State Summary of Cleanup Standards", Soils,
      December 1992.

6.    I. A. L. Rhodes, R. Z. Olvera, J. A. Leon, E. M. Hinojosa, "Determination of Total
      Petroleum Hydrocarbons by Capillary Gas Chromatography", Proceedings from the
      Fourteenth Annual EPA Conference on Analysis of Pollutants in the Environment,
      Norfolk, VA, 1991.

7.    K. Owen, T. Coley, "Automotive Fuels Handbook", Society of Automotive Engineers,
      Inc., 1990.
                                       586

-------
                        QUESTION AND ANSWER SESSION
                                      MR. WALLINGS:  E. Wallings from Roy Airquest
and Incorporated.  My question concerns tracing the source of the oil contamination.  We do
not have to identify whether it is diesel fuel or gasoline; all we have to do is look at the
whole pattern and compare it to the nearby potential contamination sources.

      So, to my knowledge, we should still be able to correctly trace the source of the
contamination.

      The second thing I would like to say is that the chromatograms sometimes have
other markers.  Now, you can try to identify the source.

      If you go back to your chromatograms, the C17 and C18 always have the phytane
and the pristane over there,  and the ratio of them is a characteristic of the source of oil. So,
I  don't know if you  have looked into that kind of a marker to trace the source of the
contamination?

                                      MS.  RHODES:  The point that I  am trying to
make...we use all those things that you refer to when you have a reference material that you
are trying to piece it back to, but in this particular case, in underground storage tank
investigations, you have a facility that has been in operation for 80
years.

      So, we do not  have the original material, so all you have to do is look at a fingerprint
and try to figure out what is there. The actual source, what I meant by source was, was it
a gasoline spill  or was it a motor oil spill?  In this particular case, it was  a motor oil  spill,
and the particular source, meaning the type of product, was misidentified because they were
not looking for the right thing with the methodology they were using.

      But, yes, I agree with  you.  If you have a reference material, we use  all kinds of
things to  try to piece it back to the source, but they are not part of any conventional TPH
methods  that  are used routinely.

                                      MR.  PEIST:  Ken  Peist, Region  II Laboratory in
Edison,  New Jersey.   I  am  aware of a few  other  people  who  have  been using the
fingerprinting techniques in our area of the country,  and  I was just curious  if you  were
aware of any like perhaps national data bases where people can compare fingerprints to sort
of eliminate a lot of the  leg work that has to be done.

                                      MS. RHODES: There are a lot of contractors that
do the work, and, frankly, when anybody at Shell, for example, calls me and says we need
an outside party to  look at fingerprinting,  I  do not look towards the environmental

                                       587

-------
laboratories.  I look at the laboratories that are doing characterization of petroleum products,
like Southern Petroleum Labs, CORE, and others, people that have been doing work
for the oil industry rather than for environmental purposes.  Those are the ones that I think
have a better chance of identifying what kind of material is present, rather than using an EPA-
based method that was developed for target compounds and then trying to find what the product is.

       But, yes, the fingerprints are not difficult.  The problem is that the information is
pieced together from a purge and trap method and a direct injection method that has gone
through an evaporation step.  The resulting information can be skewed.

                                     MR. PEIST:  Thank you.

                                     MR. TELLIARD: One more,  hold on.

                                     MR. VARNELL:  I am David Varnell with the
Tennessee Valley Authority.  I was wondering if you had any information concerning the
ability to differentiate between lubricating oil and what is commonly called mineral oil or
what we use in our  transformers which  I think is from a similar source.

                                     MS. RHODES:  Sometimes you can look at the
trace  metals and the additives but you  would have to know the different additives  that
different manufacturers  use.  There are ways to do it, but it is not something that you can
just go to a contract lab and get it done, to my knowledge.

                                     MR. VARNELL: Okay, thank  you.

                                     MR. TELLIARD: We came up with a method of
looking at the distinction between mineral  oil, diesel, and drilling muds, and we  have a
procedure that we had put together for the offshore oil and gas category when they insisted
on our regulating them.

       If you are interested, give me a buzz and I will send you a copy of that. It is not
directly applicable, but you might be able to...it is a decision  tree type of approach, and it
might fit in or at least give you a starting point.
                                       588

-------
        PITFALLS USING CONVENTIONAL TPH METHODS FOR
                    SOURCE IDENTIFICATION:

                         A CASE STUDY
                           Ileana Rhodes
                         Emiliano Hinojosa
                           Dave Barker
                           Robin Poole

                     Shell Development Company
                           Houston, TX
                       17th Annual Conference
                 Analysis of Pollutants in the Environment
                         Norfolk, May 1994

-------
                                     O&G/TPH ???
           The terms "OIL & GREASE" and "TOTAL PETROLEUM
           HYDROCARBONS" are used to describe the extent of contamination
           in water, soil and wastes. However, the actual value determined
           is method dependent and thus must be defined by the method used

              WHAT ELSE CAN BE  MEASURED AS O&G/TPH ???

           Any other organic compound (cleaning fluids, phthalates, mineral
           oils) and anything Freon soluble

                  WHAT IS NOT O&G / TPH ???
             It is not always "TOTAL" since heavy hydrocarbons
             are not always extracted, volatiles can be lost

             Not just petroleum

             Some methods underestimate aromatics

             Selected compounds are added in some methods
             (target compounds only)

             Limited information on product type which
             is often misinterpreted

-------
                                   O&G / TPH
                     INDICATOR METHODS WHICH PROVIDE INFORMATION
                    ON FREON EXTRACTABLE PETROLEUM HYDROCARBONS

           [Flow diagram: the Freon EXTRACT is measured either GRAVIMETRICALLY or by
            INFRARED to give Oil and Grease; treatment of the extract with SILICA gel
            (removal of polars) followed by GRAVIMETRIC or INFRARED measurement gives
            Total Petroleum Hydrocarbons]

-------
                                   Table 1
                      SUMMARY OF CONVENTIONAL TPH METHODS

 NO INFORMATION ON PRODUCT TYPE

 GRAVIMETRIC TECHNIQUES
  - EPA Methods: 413.1, 9070, 9071
  - Standard Methods: 5520B, 5520D, 5520E, 5520F

 INFRARED TECHNIQUES
  - EPA Methods: 413.2, 418.1
  - Standard Methods: 5520C

 LIMITED INFORMATION ON PRODUCT TYPE (JUST C# RANGES)

 GAS CHROMATOGRAPHIC TECHNIQUES

 • Direct Injection Methods
  - Mod. EPA Method 8015 (GC-FID), CDHS, WDNR, WTPH, etc.
  - EPA Method 8270 (GC/MS)/Selected components
  - ASTM D3328-90 (GC-FID)/"Fingerprint" only. No quant.

 • Purge & Trap and Headspace Methods (only Gasoline range)
  - Mod. EPA Method 8015 (GC-FID), CDHS, WDNR, WTPH, etc.
  - EPA Method 8020 (GC-PID)/Gives Only BTEX
  - EPA Method 8240 (GC/MS)/Selected Components
                                      592

-------
                                     TPH
                                 GC METHODS

          • Sample is extracted with a solvent
          • Extract is introduced into a gas chromatograph either by direct injection
            or by purge and trap techniques (the latter is only applicable for
            gasoline range organics)
          • The chromatographic column separates components in the sample
          • Total area of chromatogram is integrated and quantified by comparison

                   GC METHODS DO NOT INCLUDE A POLAR REMOVAL STEP

                      • SOME POLARS (S, N, O) WILL BE DETECTED
                      • SOME POLARS WILL NOT BE DETECTED (ACIDS)
                      • ONLY CHROMATOGRAPHABLE RANGE DETECTED
                        (USUALLY ...
-------
                               Table 2
    Petroleum Product Carbon Number Range - Approximate

 [Bar chart showing approximate carbon number ranges (C1 to >C25) for:
  Gasoline; Mineral Spirits/Stoddard Solvent; Jet Fuel; Kerosene;
  Diesel Fuel/Light Fuel Oil; Lube Oil, Motor Oil, Grease]

         TPH Method Carbon Number Range - Approximate

 [Bar chart showing approximate carbon number ranges covered by:
  TPH-g, Modified 8015-g, GRO, etc.; TPH-d, Modified 8015-d, DRO, etc.;
  418.1, Modified 418.1; Petroleum Hydrocarbon, PHC]
                                     594

-------
                          Table 3
 BOILING POINT DISTRIBUTION, CARBON NUMBER RANGES OVERLAP
                  Cumulative % Composition

 Boiling Point   Approximate Carbon #   Gasoline   Jet/Kerosene   Diesel
 ≤ 36°C          ≈C5                    13          0              0
 ≤ 69°C          ≈C6                    27          0              0
 ≤ 98°C          ≈C7                    43          0.2            0.05
 ≤ 126°C         ≈C8                    59          1.2            0.2
 ≤ 151°C         ≈C9                    62          3.6            0.5
 ≤ 174°C         ≈C10                   84          9.8            1.2
 ≤ 196°C         ≈C11                   93          23             8.9
 ≤ 216°C         ≈C12                   98          44             22
 ≤ 236°C         ≈C13                   99          63             30
 ≤ 253°C         ≈C14                               79             40
 ≤ 279°C         ≈C15                               91             57
 ≤ 287°C         ≈C16                               98             69
 ≤ 302°C         ≈C17                               99             78
 ≤ 316°C         ≈C18                                              85
 ≤ 329°C         ≈C19                                              90
 ≤ 343°C         ≈C20                                              94
 ≤ 358°C         ≈C21                                              97
 ≤ 369°C         ≈C22                                              98
 ≤ 380°C         ≈C23                                              99
 ≤ 391°C         ≈C24
 ≤ 402°C         ≈C25



                       595

-------
                           Table 4
                     SPIKED SOIL STUDY
 SUMMARY OF BTEX AND TPH RESULTS: Gasoline Range TPH (up to C12)
 Clean Soil Spiked with ~1% New and Used Motor Oil from Several Sources

 Motor Oil Source       Driving Conditions        Analysis   BTEX   TPH/Gasoline   % Gasoline
                                                  Type       ppm    Range, ppm     in Used Oil
 New Oil                Not used                  DI         <2     <20
                                                  P&T        <1     <10
 Toyota Used Oil        Suburban Driving          DI         16     170            1.7
                                                  P&T        15     160            1.6
 Honda Used Oil         Short Trips/City          DI         22     190            1.9
                                                  P&T        23     140            1.4
 Chrysler Used Oil      Suburban Driving          DI         16     160            1.6
                                                  P&T        17     140            1.4
 Chrysler Used Oil (R)  Suburban Driving          DI         16     160            1.6
                                                  P&T        17     130            1.3
 Ford Used Oil          Freeway, 600 miles/day    DI         12     80             0.8
                                                  P&T        13     60             0.6

 DI: Extraction followed by direct injection GC-FID
 P&T: Extraction followed by purge and trap GC-FID
                             596

-------
                          CHARACTERIZATION OF ORGANICS TO ~C30
               [Chromatogram: detector response versus time, 0 to 40 minutes]
                               Figure 1: Chromatogram of fresh gasoline

-------
                          CHARACTERIZATION OF ORGANICS TO ~C30
               [Chromatogram: detector response versus time, 0 to 40 minutes]
                               Figure 2: Chromatogram of fresh jet fuel

-------
                          CHARACTERIZATION OF ORGANICS TO ~C30
               [Chromatogram: detector response versus time, 0 to 40 minutes]
                               Figure 3: Chromatogram of fresh diesel fuel

-------
                          CHARACTERIZATION OF ORGANICS TO ~C30
               [Chromatogram: detector response versus time, 0 to 40 minutes]
                        Figure 4: Chromatogram of severely weathered gasoline (~98%)

-------
                          CHARACTERIZATION OF ORGANICS TO ~C30
               [Chromatogram: detector response versus time, 0 to 40 minutes]
                        Figure 5: Chromatogram of new Motor Oil A

-------
   IMPORTANT FACTS ABOUT TPH

 • The carbon number ranges of different
   products overlap.

 • The carbon number ranges of the various
   methods overlap.

 • Non-petroleum material may be measured
   as well.

 • Laboratories may automatically assign the
   name of a product type to anything that
   elutes within a given range.

 • Because information is fragmented,
   results may point to wrong source(s).

-------
                CASE STUDY:  CONVENTIONAL APPROACH

              BACKGROUND

              • Service station for ~60 years and then auto repair shop for the last
                20 years
              • Several generations of underground storage tanks through the years

              • Service station USTs contained gasoline only

              • Repair shop stored diesel only for a short time

              • Repair shop no longer used the tanks.  It was required to
                either permit the tanks or remove them

              INITIAL ASSESSMENT

              • Tanks removed.  No evidence of leaks found but there was overall soil
                contamination
              • Soil analysis:

                - 418.1 (TPH/IR)
                - Mod 8015: Gasoline Range and Diesel Range

                Results interpreted as Gasoline Range Organics and "Heavy" Oil

                INITIAL CONCLUSION

                  Soil contamination due to gas service station operation

                                        LAW SUIT

-------
                       CASE  STUDY

 REVIEW OF DATA FROM CONVENTIONAL TPH METHODS


As  previously stated...

• Information from  conventional TPH methods is segmented and often misused

• "Product type" is  simply as "defined by the method"

Problems...

• Results were reported as "gasoline range" and interpreted as "gasoline".

Chromatograms were requested. Review  indicated that...

• There was indeed a weathered gasoline "fingerprint" in the soils
• Limited data on a few of the samples for "diesel range organics" indicated
  that the main contamination was due to a heavy material present in the soils
• The heavy material  "fingerprint" resembled motor oil (Motor oil cannot be
  chromatographed in its entirety but sufficient "fingerprint" is obtained for
  potential identification)
                       QUESTIONS???

 Is a leaking UST the source of gasoline in the soil?
 Could there be other sources (there was  no evidence of leaks in the tanks)?
 Could the used motor oil be the source of gasoline?

-------
               HOW DOES FUEL GET IN THE MOTOR OIL?

             • Used motor oil is diluted with fuel during engine
               operation.  Fuel gets into the crankcase of the
               engine by

                 • Gaseous fuel from engine blowby
                 • Liquid fuel washing down cylinder
                   walls past the piston rings

             • Oil dilution takes place with gasoline and diesel
               engines.  Allowances are made for this in oil
               formulations and engine performance testing

-------
                      HOW DOES FUEL GET IN THE MOTOR OIL?

           • Blowby (combustion chamber gases blowing past
             the piston rings) can be more pronounced in high
             mileage engines with worn piston rings

           • Under cold start and warm-up conditions, more
             liquid fuel is transported past the rings and
             into the oil

           • After engine warm-up, some of the more volatile
             components of gasoline vaporize and are
             removed from the oil via the positive crankcase
             ventilation (PCV) system

           • Higher boiling components remain in the motor oil
             and will resemble heavily weathered gasoline.
             There can be 1 to 10% fuel in the used motor oil

           [Engine cross-section diagram labeling the intake valve, combustion
            chamber, injector nozzle, intake manifold, spark plug, piston, and
            crankcase]

-------
                                 CASE  STUDY

         CHARACTERIZATION OF  FRESH AND USED MOTOR OIL
                 There were no samples available from the site

               A study was conducted to...

               • Characterize new and used motor oil
               • Understand  impact of used motor oil composition on source
                 identification using conventional TPH methods

               Approach:
               • Neat new and used motor oil were analyzed using single GC TPH and
                 "fingerprinting" method
               • Spiked soil samples (1% motor oils) were analyzed using both
                     - Single GC TPH and "fingerprinting" method
                     - Conventional GC TPH methods for gasoline range

-------
                          CHARACTERIZATION OF ORGANICS TO ~C30
               [Chromatogram: detector response versus time, 0 to 40 minutes]
                                Figure 6: Chromatogram of new Motor Oil B

-------
                          CHARACTERIZATION OF ORGANICS TO ~C30
               [Chromatogram: detector response versus time, 0 to 40 minutes]
                               Figure 7: Chromatogram of new Motor Oil C

-------
                          CHARACTERIZATION OF ORGANICS TO ~C30
               [Chromatogram: detector response versus time, 0 to 40 minutes]
                                   Figure 8: Chromatogram of new Motor Oil D

-------
                CHARACTERIZATION OF ORGANICS TO ~C30
               [Chromatogram: detector response versus time, 0 to 40 minutes]
                Figure 9: Chromatogram of used motor oil. Toyota Camry

-------
                          CHARACTERIZATION OF ORGANICS TO ~C30
               [Chromatogram: detector response versus time, 0 to 40 minutes]
                               Figure 10: Chromatogram of used motor oil. Honda

-------
                          CHARACTERIZATION OF ORGANICS TO ~C30
               [Chromatogram: detector response versus time, 0 to 40 minutes]
                           Figure 11: Chromatogram of used motor oil. Chrysler

-------
                 CHARACTERIZATION OF ORGANICS TO ~C30
               [Chromatogram: detector response versus time, 0 to 40 minutes]
                 Figure 12: Chromatogram of used motor oil. Ford Taurus

-------
                         CHARACTERIZATION OF ORGANICS TO ~C30
               [Chromatogram: detector response versus time, 0 to 40 minutes]
                Figure 13: Chromatogram of soil extract. Soil spiked with 1% new motor oil

-------
                  CHARACTERIZATION OF ORGANICS TO ~C30
               [Chromatogram: detector response versus time, 0 to 40 minutes]
        Figure 14: Chromatogram of soil extract. Soil spiked with 1% used motor oil. Toyota

-------
                   CHARACTERIZATION OF ORGANICS TO ~C30
               [Chromatogram: detector response versus time, 0 to 40 minutes]
     Figure 15: Chromatogram of soil extract. Soil spiked with 1% used motor oil. Honda

-------
                            CHARACTERIZATION OF ORGANICS TO ~C30
               [Chromatogram: detector response versus time, 0 to 40 minutes]
                  Figure 16: Chromatogram of soil extract. Soil spiked with 1% used motor oil. Chrysler

-------
                   CHARACTERIZATION OF ORGANICS TO ~C30
               [Chromatogram: detector response versus time, 0 to 40 minutes]
   Figure 17: Chromatogram of soil extract. Soil spiked with 1% used motor oil. Ford Taurus

-------
                           GASOLINE RANGE ORGANICS TO ~C12: PURGE AND TRAP
               [Chromatogram: detector response versus time, 0 to 20 minutes]
               Figure 18: Chromatogram of soil extract using conventional TPH purge and trap method for
                          determination of gasoline range organics.  Soil spiked with 1% used motor oil.
                          Honda

-------
                            SPIKED SOIL STUDY
         SUMMARY OF BTEX AND TPH RESULTS: Gasoline Range TPH (up to C12)
           Clean Soil Spiked with ~1% Used Motor Oil from Several Sources

           [Slide reproduces the data presented in Table 4 above]

                     DI: Extraction followed by direct injection GC-FID
                     P&T: Extraction followed by purge and trap GC-FID

-------
                    CASE STUDY

           OBSERVATIONS/CONCLUSIONS

• As expected from knowledge of the fuel dilution phenomenon,
  used motor oil contains gasoline range material
• Used motor oil from diesel engines is also expected to contain
  diesel range material
• The source of the gasoline and/or diesel may be misidentified:
       - If only gasoline range analysis is done
       - Even if diesel range analysis is done, the bulk of
         motor oil falls out of the range and may not even be
         reported, thus totally missing its presence
• It is extremely important to "see" the whole picture when attempting
  to "name" a source of product and not to rely solely on a "range"
• Results from conventional TPH methods could result in improper
  selection of remediation technology
       - Gasoline range usually remediated through soil venting
       - Used motor oil usually removed for offsite disposal, or a
         risk assessment is needed

-------
                   SUMMARY

Conventional TPH methods as commonly applied are not
adequate for source identification.  This can lead to...

    • Incorrect liability allocation

    • Impaired ability to stop source

    • Improper selection of remediation technique

    • Failure to correctly identify source of contamination

-------
(Blank Page)
    624

-------
                                     MR. TELLIARD: Our next speaker is Steve Hinton.
Steve is a Research Engineer at the National Council of Air and Stream Improvement at Tufts
University.  He is going to be talking about the statistical analysis of environmental data sets
that contain non-detect observations.

      Not being a statistician, I  tried to interpret what this means. Is this like how many
non-detects will fit on the head of a pin, this sort  of thing?

      Steve?
         STATISTICAL ANALYSIS OF ENVIRONMENTAL DATA SETS WHICH
                   CONTAIN 'NON-DETECTED' OBSERVATIONS
                                     MR. HINTON: Thank you, Bill.

      Let me say,  first of all, that there are no differential equations  in this presentation
today, so there is no reason to evacuate the room.  I tried that a few times when I gave this
talk a couple years ago, and I  evacuated the room each time. So, I  have learned my lesson,
and could I have the first slide, please?

       Let me tell you a little bit about the motivation for this work.  As you may have heard,
there have been a few references in the last day or two to the rulemaking for the paper industry
that EPA is presently conducting.  We have just gone through the first phase, which concluded
with the end of the comment period, and are now moving into Phase II.  Understanding the
potential effects of this rulemaking's data handling procedures was the motivation for some
of my research.

       In particular, the motivation for what I would characterize as methods development
work for statistics was to help us understand what might occur when averages and standard
deviations were calculated from data sets that contained censored observations.  As I am
sure most of you  in this room are aware, there are times when it is essential that a number
be calculated,  even,  perhaps, when that does not make the best sense.  We wanted to
understand the consequences of that, and I am here today to alert you to some of the
material which has been developed to address this issue.

      As dischargers and regulatory agencies strive to reduce the concentrations of trace
organics in wastewaters, a greater fraction of sample measurements is being reported
as non-detect.  When this occurs, the data sets they are a part of are called left
censored data. Such data sets contain non-detected observations whose magnitude we do
not really know but for which we do know their frequency of occurrence, as well as fully
quantified measurements for which we know both their magnitude, or at least we have an
estimate of their magnitude, and their frequency of occurrence.


                                       625

-------
      The notion of left censoring arises because we do not have complete information
about the frequency distribution of the sample.  In the shaded area of Slide 2, we can
characterize the  frequency distribution  quite  well.   These  are  the fully quantified
measurements.  Below the X0 location, we simply do not know the shape of this curve.  I
should have drawn the curve below X0 with a dotted line, because we really do  not know
the curve's shape in that region.  About all that can be said  in this situation  is that we can
estimate the relative area on the left versus the right based on the numbers of non-detects
and fully quantified  measurements, respectively.  So, if we could back up one slide; I only
do that once in a presentation, so it is forward from here on.

       In a general context, there are really three types of observations (observations
represent measurement attempts).  These include: the non-detects, which we can count but
whose values we do not really know other than to say they are less than some censoring
threshold; the fully quantified measurements, which we can also count and for which we
have an estimate of their magnitude; and a third related type of measurement, which I
characterize as uncertain measurements and which we will not discuss today.  They are
similar to non-detected observations in that we know their frequency of occurrence, but we
do not have a precise estimate of their magnitude.  The most predominant type of such a
measurement attempt is the greater-than values that occur when you are above the linear
calibration range of an instrument.

      So, if we could go forward two slides to Slide 3, let's think about these left censoring
thresholds.  What could they be?

      There are many definitions. Here are a few: the level of detection and the USEPA's
minimum level values which are determined from subjective judgment about the  analytical
chemistry process.  Then there are more formalized mathematically described definitions
such as the method detection limit and the limit of quantitation which are determined from
precise statistical and analytical chemistry procedures.  There are many others.  In  fact,  I
believe later in today's program, you will hear a much more expanded discussion about this
topic, so I will just simply say  that it  is fortunate for me and for those trying to  apply the
simple techniques that we are going to discuss that the censoring threshold definition itself
does not really have an impact on the results of a statistical analysis calculation.

      The presence of censored observations in data sets does, however, make calculations
of means and standard deviations more difficult, because we simply cannot incorporate the
word non-detect (ND) into the common statistical procedures that we are all used to using
like summing up numbers and dividing by n.  This situation often leads people to  substitute
in a fixed value such as zero or the ND value for these missing observations and then to
proceed with simple techniques that they are familiar with.  This is not a sound practice.
It produces biased estimates, and it is unnecessary because there are simple,  unbiased, and
easy-to-apply statistical methods.
                                       626

-------
      So, if we could go forward to Slide 4, my objective today is to alert you to these
statistical techniques and to show you some of their properties and limitations. To do that,
we are going to delineate the problem setting, define some preliminary data analysis steps
that  will simplify  the  problem and  make  it easier  to  tackle,  review three common
approaches that can be used, and then show you the results of an evaluation of the bias and
root  mean square error properties of those three approaches.

      When data comes from the laboratory, it often  appears to be almost  a puzzle. In
Slide 5, you see four data sets for chlorophenolic compounds in chemically bleached pulp
mill effluents.  This is the way the data was received from the laboratory, and in its present
form, little can be surmised about what might be going on.

      In our next slide  (Slide 6), we show what is often a good first step in  data analysis
which is to rank the data from lowest to greatest in magnitude.  Then you can begin to see
some trends in  the data and  begin to assess the complexity of the statistical analysis
problem.

      In data  set 1, which represents  our simplest case, we have, basically, observations
or measurement attempts that were  censored at approximately 1. We have  four of those
that were less  than 1, and we have fully quantified measurements at 1 and above.  In this
case, we have a single censoring threshold.  We can say these measurements can be
characterized as either being below 1, or above 1  and fully  quantified.

      Data  set 2 is slightly more complex,  because there were actually three detection
limits reported by the chemist.  In a statistical  sense, however, it is possible to treat the  data
set as if there were a single censoring location; in other words, all of our  NDs are less than
12.  This is an important property which we can exploit when  performing statistical
calculations, because it makes the applicable statistical methods much simpler.

       In data set 3 of Slide 6, the chemists have done it to us because they have reported
this 500 ND, all these fully quantified observations between 56 and 281,  and  also a couple
NDs at 50. Well, what do we do here? On first inspection,  it is not immediately obvious
how to reduce this problem and  make it simpler.  However, simplification is possible
because we can eliminate the 500  ND  from the data set given its distance from the median
value of this data set which is 89;  i.e. 500  is roughly five times the median value.  In
addition to that, it is  more than a couple standard deviations away from  the average of these
observations which you would only  know had  you calculated  it with  and without
consideration of 500 ND.

      As a  good rule  of thumb, when you have ND observations which exceed the
maximum value by several times, and  if you  have some suspicion about the  credibility of
the values, it is proper to eliminate them  from the data set if your purpose is to calculate
means and standard  deviations.  In  this  instance, elimination of the 500  ND value from data
set 3, i.e., just ignoring it, would result in only a few percent difference between the


                                       627

-------
estimates for the means and standard deviations. The 500 ND occurred in this particular
data set because the sample was analyzed in several dilutions, and the analysis of the strongest
solution was lost.  So, when the chemists reported the values, it was only possible for them
to report the results for the dilution that made it through the analytical  chemistry process,
and that number was less  than 500.

      Saying something is less than 500 when the median  value  is 89 is equivalent to
saying that it is less than infinity which is not particularly useful information and which is
why ignoring the 500 ND does not have a big impact on the calculated means and standard
deviations. You can show  this mathematically by looking at the  likelihood functions. I will
not distract you at this time with that information.

      The final data set in  Slide 6 is the most complex and is not one that we can simplify.
The ND at  100 is wedged  in the middle of the fully  quantified observations, and this
requires a more powerful  statistical technique than the ones that we will discuss today.

      By performing the preliminary steps of ranking the data and determining the number
of unique censoring thresholds,  you can drastically  simplify the problem into one that
becomes more manageable; these are the first two preliminary data  analysis steps listed in
Slide  7. The remaining preliminary steps refer to testing for distributional properties and
transforming the data, if needed, to conform to normality.  The latter is necessary
for many of the statistical procedures because they were developed and originally conceived
for normally distributed data.

      So, how do we determine what might be a good  statistical distribution of our data?
Well,  a common and very  useful way is to construct a probability plot like the one  shown
here on the right of Slide 8. It is made by calculating the cumulative  probabilities based on
the ranks, i, of the ordered  information and then plotting the ranks  or the  cumulative
probability and the  data values on special  paper which  has  been constructed for that
purpose.  When  a  straight  line  is formed, then the data are not  inconsistent  with  the
distribution  assumption used to construct the paper.  What is nice about this technique is
it works for  both a completely, fully quantified data set which is what you see here, and it
also works when there are censored observations.  You simply plot the fully quantified
observations at their cumulative probabilities; some lower region of the curve is undefined.
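
       A minimal sketch of the plotting calculation just described, written in Python for
illustration (it is not from the presentation, and the particular plotting-position formula is
one common choice assumed here), is:

    # Compute probability-plot points for a data set with a single left-censoring
    # threshold: all n observations are ranked, but only the fully quantified
    # values are plotted at their cumulative probabilities.
    import numpy as np
    from scipy import stats

    def probability_plot_points(quantified, n_censored):
        """Return (z, x) pairs for the quantified observations."""
        x = np.sort(np.asarray(quantified, dtype=float))
        n = len(x) + n_censored
        # Ranks of the quantified values within the full sample of size n;
        # (i - 0.375)/(n + 0.25) is one common plotting-position formula (an assumption).
        ranks = np.arange(n_censored + 1, n + 1)
        p = (ranks - 0.375) / (n + 0.25)
        z = stats.norm.ppf(p)          # the inverse normal linearizes the probability scale
        return z, x

    # Example with data set 1 (four NDs below 1 and four quantified values):
    z, x = probability_plot_points([1, 1, 2, 3], n_censored=4)
    # Plotting x (or log x for a log normal check) against z should be roughly
    # linear if the assumed distribution is adequate.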

      Well, that  is  a nice textbook example; Slide 9 shows some real data.  The normal
distribution on the left is compared to the log normal distribution on the  right. I might point
out that these scales are switched compared to that previous slide in that the probability is
on the x axis and the data values are on the y axis.  Clearly, the log normal distribution
provides a more linear fit, and of these two choices, it would  be the preferred  statistical
distribution  model for this  data set.  This outcome is very typical of low level concentration
measurements of environmental quality data.
                                        628

-------
       One last point about probability plots is that it is possible to use the slope of this line,
as we will discuss in a minute, and the line's intercept with the 50th percentile to estimate
means and standard deviations.

       Going to Slide  10, suppose the data is log normally distributed.  Well, what do we
do?  For many procedures, this involves an additional two steps which are to transform  the
data by calculating the logged values of the observations  prior to applying the statistical
procedure and then, returning the estimates of mean and standard  deviation to the original
scale of measurement.
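
       A minimal sketch of that back-transformation step, written in Python for illustration
(the formulas are those shown on Slide 10), is:

    import numpy as np

    def untransform(y_bar, s_y):
        """Return estimates on the original scale from the log-scale mean and sd."""
        x_bar = np.exp(y_bar + 0.5 * s_y**2)
        s_x = np.sqrt(x_bar**2 * (np.exp(s_y**2) - 1.0))
        return x_bar, s_x

    # y = np.log(x) is applied before the chosen censored-data procedure, and
    # untransform() is applied to the resulting log-scale estimates afterwards.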

       What methods are available to us? Well, there are seven,  listed in Slide 11, that I
am aware of including:  the maximum likelihood estimators of which there are three
variations; regression  of order statistics which is a mechanistic probability plot; delta-log
normal; USEPA's D-log procedure which is an adaptation of the delta log normal statistics;
balancing using either trimming or Winsorizing; graphical techniques which are just
basically extracting the information you need from a manually constructed probability plot;
and, finally,  replacement  techniques  where  the missing values  are  estimated in a
probabilistic way and then conventional statistics are  calculated with the replacement
values. The replacement technique is different than the common practice of substituting in
a fixed value which really does not make much sense.

       The first of the three methods that we evaluated was Cohen's MLE method (Slide 12).
Here,  the  starred  quantities represent  statistics  calculated  from  the  fully quantified
observations.  These are corrected based on the  difference between the fully quantified
mean and the censoring threshold and a function which incorporates the fraction of non-
detected  observations  and the dispersion of the data represented by g. If we go back to  the
familiar picture shown in Slide 13, we can graphically observe  the calculation process and
what would happen if we calculated the mean only from the data in the shaded area; x  bar
is calculated for the shaded area. The true population  mean which includes the missing or
censored values has  to be to the left of the calculated  mean  of the  fully quantified
measurements.   In  other words,  the true mean has  to be smaller and the objective of
Cohen's method is to  predict a correction factor to reduce this x bar to where it correctly
estimates the population statistic.
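
       Cohen's published procedure obtains the correction factor from tabulated values of an
auxiliary function of the censored fraction and the dispersion ratio g.  The sketch below is an
illustration rather than Cohen's tabulated method: it obtains the same left-censored normal
MLE by maximizing the censored log-likelihood numerically in Python with SciPy; for log
normally distributed data it would be applied to the logged values and the results
back-transformed as described earlier.

    import numpy as np
    from scipy import stats, optimize

    def censored_normal_mle(quantified, n_censored, x0):
        """MLE of (mean, sd) when n_censored observations are known only to be below x0."""
        x = np.asarray(quantified, dtype=float)

        def neg_loglik(params):
            mu, log_sigma = params
            sigma = np.exp(log_sigma)                      # keeps sigma positive
            ll = stats.norm.logpdf(x, mu, sigma).sum()     # fully quantified values
            ll += n_censored * stats.norm.logcdf(x0, mu, sigma)  # each ND contributes P(X < x0)
            return -ll

        start = [x.mean(), np.log(x.std(ddof=1))]
        fit = optimize.minimize(neg_loglik, start, method="Nelder-Mead")
        return fit.x[0], np.exp(fit.x[1])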

       Regression of normal order statistics is just a mechanization of the probability plot
process.  The quantity  in the square brackets on Slide 14 is the cumulative probability based
on the ranking of the data.  The z function is the inverse normal function which linearizes
the probability plot scale.  Common regression techniques are applied to find the y intercept
and  slope of this  equation which  represent,  respectively,  the  mean  and the  standard
deviation of the parent population.  This is perhaps the  easiest  process of the three to
visualize.
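
       A minimal sketch of that regression calculation (Python; the plotting-position formula
is an assumption for illustration, not necessarily the one used in the study) is:

    # Regression of normal order statistics (RNOS): regress the ordered quantified
    # values on the inverse normal of their plotting positions; the intercept
    # estimates the mean and the slope estimates the standard deviation.
    import numpy as np
    from scipy import stats

    def rnos_estimates(quantified, n_censored):
        x = np.sort(np.asarray(quantified, dtype=float))
        n = len(x) + n_censored
        ranks = np.arange(n_censored + 1, n + 1)
        p = (ranks - 0.375) / (n + 0.25)        # plotting positions (one common choice)
        z = stats.norm.ppf(p)
        slope, intercept, *_ = stats.linregress(z, x)
        return intercept, slope                 # (estimated mean, estimated sd)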

       The third and final approach that we evaluated was USEPA's D-log procedure which
forms weighted estimates  for the mean and  standard  deviation  based  on the average


                                        629

-------
properties of the fully quantified observations, assuming they are log normally distributed
(that is, the term on the left of Slide 15), plus a point value for the non-detects.  The weighting
factor used  in this  approach is delta  which  represents the fraction  of non-detected
observations.  Inherent in this approach  is the  assumption that the observations or data
which you are trying to model, arise from two distinct statistical populations.
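
       A minimal sketch of the D-log calculation, written in Python from the formulas shown
on Slide 15 (an illustration only, not EPA's own code), is:

    # Delta-log normal (D-log) estimators: non-detects are treated as a point mass
    # at the detection limit D, and the detected values as log normal.
    import numpy as np

    def dlog_estimates(quantified, n_censored, D):
        """Return (mean, sd) from the delta-log normal formulas."""
        x = np.asarray(quantified, dtype=float)
        n = len(x) + n_censored
        delta = n_censored / n                 # fraction of non-detects
        y = np.log(x)                          # y_i = ln(x_i) for x_i > D
        ybar, s2 = y.mean(), y.var(ddof=1)
        m = np.exp(ybar + 0.5 * s2)
        mean = delta * D + (1.0 - delta) * m
        var = ((1.0 - delta) * np.exp(2.0 * ybar + s2) * (np.exp(s2) - (1.0 - delta))
               + delta * (1.0 - delta) * D * (D - 2.0 * m))
        return mean, np.sqrt(var)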

      Our research set about to  test how these  three statistical  techniques, since they are
probably the most commonly used ones around, work under a  variety of situations so we
might have some understanding of their bias and root mean square error when  applied to
real data.  We did this with a Monte Carlo simulation study which calculated average bias
and root mean square error for the mean and standard deviation estimators in 1000 trials.
We actually performed some  simulations on different numbers of trials up to,  I think,
100,000 and found no difference, so we felt that 1000 was sufficient to characterize the
statistical techniques. We used  three different  sampling sizes, 10, 15,  and 20.  This is
unique to this work, to my knowledge, because most of the literature on this topic
has used sample sizes of 20, 50, and even larger.  When samples cost $2000 apiece, you
rarely find people that are willing even to pay for 10, let alone 20.  So,  I think this is an
important issue since the results vary widely in  this range, and  it is important to consider
them in this context.

      Four distribution assumptions were used. We used a  log normal  distribution with
a coefficient of variation of 1 and a log normal  distribution with a coefficient of variation
of 0.3.  We chose this based on some preliminary analysis of paper industry data sets which
I  will describe in just a  second and which seemed to have characteristics  that were
bracketed by this range of coefficients of variations; i.e. from  1  to 0.3. We also used data
sets that contained 100 fully quantified observations and formed an empirical distribution
from them; i.e. we drew values from them in groups of 10, 15,  and 20 in order to test the
three techniques under more  realistic analysis conditions.  Five levels of censoring were
tested; these included censoring  at the 5, 20, 40, 60 and 80  percentile values.

       The simulation process described in Slide 16 works as follows:  The computer
program generates 10, 15, or 20 random  numbers, depending on what condition you are
simulating.  If censoring is to occur at some  fixed threshold, then the data set is scanned,
and any value below that threshold is marked as an ND.  The three statistical procedures
are applied, and then the bias resulting from that application  is stored.  The  process is
repeated 1000 times, and then the average bias and the average  root mean square error are
calculated.
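
       A minimal sketch of that Monte Carlo loop for the mean estimator, written in Python
for illustration (the log normal population with mean 1, the single censoring threshold, and
the estimator interface are assumptions made here, not details taken from the study), is:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(17)

    def simulate_bias(estimator, n=10, cv=1.0, censor_pct=40, trials=1000):
        """Average percent bias of the mean estimate over `trials` censored samples."""
        sigma2 = np.log(1.0 + cv**2)      # log normal population with mean 1 and CV = cv
        mu = -0.5 * sigma2
        x0 = np.exp(mu + np.sqrt(sigma2) * norm.ppf(censor_pct / 100.0))  # censoring threshold
        bias = 0.0
        for _ in range(trials):
            sample = rng.lognormal(mu, np.sqrt(sigma2), size=n)
            quantified = sample[sample >= x0]
            # estimator(quantified, number of NDs, threshold) returns (mean, sd);
            # rare all-censored samples are not handled in this sketch.
            est_mean, _ = estimator(quantified, n - len(quantified), x0)
            bias += est_mean - 1.0        # true population mean is 1
        return 100.0 * bias / trials

    # e.g., simulate_bias(dlog_estimates, n=10, cv=1.0, censor_pct=40)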

      An important  issue in evaluating simulation results  is  how  well distribution
assumptions such as the log normal model, characterize  real  data sets, because if they do
not properly characterize real data sets,  then you could  be misled when interpreting the
simulation results. One of the steps that has been taken, was  to do distributional testing
(Slide 17) for  the log normal  statistical  model  versus the bimodal feature of  the  D-log
procedure using the effluent variability study data base generated from a cooperative effort


                                        630

-------
between USEPA and the paper industry. Just briefly, it involved sampling at 8 facilities with
approximately 8 sampling locations per facility for approximately 18 events. So, we had a
large  pool  of data to look at.  You will hear more about this data base  from  the next
speaker,  I believe, during the in depth discussion of the effluent  guidelines rulemaking.
From  that large data base, we selected data sets that met a particular criterion.  In this case,
we chose data sets  that had at least two non-detected values and  at least three fully
quantified observations.  We then constructed probability plots and examined the correlation
coefficient values of those plots. When we did this test, we found that, by and large, the
log normal distribution was  superior to the  D-log assumption for modeling the paper
industry data sets and the EVS data base in particular.

      As a further reality check, we also attempted to validate the notion that the single log
normal distribution was the best choice for modeling paper industry data sets in a second
way.  We predicted the highest and the lowest fully quantified measurement  in all of the
data  sets  selected for regression analysis.   Basically, we found that the  log normal
distribution model had one-sixth the error of  the D-log assumption when applied  to real
paper industry data sets. So, we felt like, from this activity, that the log normal model used
in our simulations is representative of the real world.

      Slide 18 shows the results, at least for bias; time is short today, so I am going to skip
the root mean square error results. For bias, we have our three techniques listed across the
top; i.e. D-log, MLE, and RNOS.  We have different sample  sizes and different  levels of
population variability; CV = 1 being a more variable population than CV = 0.3.

      For sample sizes of 10, we found that bias can range from as high as 28 percent for
the D-log procedure to a high of 7 percent for the MLE procedure.  These results are for the
5 to 60 percentile censoring range. Above 60 percentile censoring, none of the techniques
worked particularly well, and they are really not recommended. Although with that caveat
I  mentioned at  the beginning of my talk, sometimes it  is necessary to  calculate a number
whether it makes sense  or not.   However, I only show here the bias for 5 to 60 percent
censoring.  The  regression technique  seems to be not nearly as effective  as  the MLE
technique, and the MLE technique is definitely superior to D-log as well.

      As sample  size increases  for the more variable samples with CV  = 1  condition, we
see that percent bias decreases with increasing sample size for both the MLE and  RNOS
procedures, whereas for the D-log procedure, the bias remains more or less constant. I am
not sure we can detect the difference between 24 and  25.

      When population variability is decreased, we see decreases in bias for all techniques.
However, the decrease in bias  seems to  be strongest for the MLE  and the regression
techniques which seem to go down by factors of 3  to 5 compared to the D-log technique
which only drops by a factor of 2.  When we get into large sample sizes, we see that the
bias is approaching zero quite nicely for the MLE and the RNOS techniques.  Another thing
                                       631

-------
to notice about this slide is that all the methods, with the exception of this one case right
here, overestimate the mean.

      How well can these three techniques estimate standard deviations?

      We find that, for the RNOS technique and sample size of 10, standard deviation
estimation is terrible with 700 percent error; don't even use it (Slide 19). In general, biases
are greater for standard deviation estimates; they are more difficult to estimate. Overall, the
MLE is superior at estimating standard deviations. However, we have a peculiarity in that
when population variability decreases, we see increasing bias for the D-log procedure and
decreasing bias for the MLE and the regression techniques.  Depending on the variability,
the regression and MLE techniques seem to swing from positive to negative biases, whereas
the D-log procedure always seems to underestimate.

      In summary  and conclusion (Slides 20, 21, & 22), all methods appear unreliable for
data censoring greater than 60 percent.  For estimating means,  all  approaches tend to
overestimate; the MLE appears superior to the RNOS and the EPA  D-log procedure with a
bias, in the worst case,  2.5 times less than the D-log procedure.   For estimating standard
deviations, the MLE approach tends to over or underestimate standard deviations, and so
does the regression technique while the EPA  D-log procedure consistently underestimates
standard deviations; in this case, the MLE was  a superior method to both the regression and
the D-log procedure with a bias that was 1.2  times less than the D-log procedure overall.
In terms of sample size  effect, the bias for the D-log procedure remained  approximately
constant while for the  MLE  and  RNOS techniques, we saw  a  decrease  in  bias  with
increasing sample size.

      In terms of population variability effect, for means, when you decrease the coefficient
of variation, there is a decrease in bias for all the techniques while  for standard deviations,
the same occurs for the MLE and the RNOS techniques. However, for the D-log technique,
we see an increase in standard deviation bias for a decrease in population variability.

      That concludes my  presentation today.


                        QUESTION AND ANSWER SESSION
                                      MR. TELLIARD: Any questions?

                                      MS. DINSMORE:  Donalea Dinsmore from State
of Wisconsin.  I would like to know, you talked about it not mattering where the values
are censored, whether it be at the MDL, the minimum level, or some other value.  Are you
talking about the absolute number that is censored? Did you deal at all with the uncertainty
                                       632

-------
around censoring when you are using an MDL and the numbers not being real numbers
until you get to something that is like a quantitation limit?

                                      MR. HINTON: The present statistical techniques
that are available in the literature do not presently contain the sophistication to deal with
that question which is why I said or should have said  that, at this time, it does not matter
which censoring threshold was used by the chemist.  However, you  are absolutely correct
in that the choice of the censoring level and its uncertainty would make a difference  if we
could incorporate that into our calculations.

                                      MR. TELLIARD: Thank you,  Steve.
                                       633

-------
           STATISTICAL ANALYSIS OF ENVIRONMENTAL DATA
         SETS WHICH CONTAIN 'NON-DETECTED' OBSERVATIONS

                        (An Abstract for)
                     1994  Norfolk Conference
                               By
                        Steven W. Hinton
                           March 1994

         National Council Of The Paper Industry For Air
              And Stream Improvement, Inc.  (NCASI)
                 Department Of  Civil Engineering
                        Tufts University,
                  Medford,  Massachusetts  02155

     Special considerations and analysis techniques are needed
to make rational decisions during regulation development or
compliance monitoring when 'non-detected' observations are
involved.  The paper describes approaches for analyzing such
data, limitations of the approaches, and strategies to use when no
'detections' occur.

-------
      Introduction

Increasing Occurrence of
  Left Censored Data

Types of Observations
- Non-detects (NDs)
- Fully Quant. Measurements (hits)
- Uncertain Measurements (GTs)

-------
        Censoring on the Left at x0

        [Slide 2: frequency distribution with the fully quantified region above x0
         shaded; the shape of the curve below x0 (the non-detects) is unknown]

-------
  Introduction Cont.

Left Censoring Thresholds
- Level of Detection (LOD)
- USEPA's Minimum Level (ML)
- Method Detection Limit (MDL)
- Limit of Quantitation (LOQ)
- Etc	

Statistical Calculation Difficulties

-------
        Objective

 Delineate Problem Setting

 Define Preliminary Analysis Steps

 Review 3 Common Approaches

 Evaluate Bias and RMSE Properties

-------
      As Received Data

Set 1      Set 2      Set 3      Set 4
   3     ND(11)     ND(50)       161
   1     ND(10)       281     ND(100)
   2        18     ND(500)       203
ND(1)    ND(11)       104        42
ND(1)    ND(12)       114        42
   1     ND(12)     ND(50)        37
ND(1)       16         89
ND(1)    ND(12)        61
         ND(12)        80
         ND(10)       134
            15         56

-------
        Ordered for Analysis

     Set 1       Set 2      Set 3      Set 4
     ND(1)    ND(10)    ND(50)        37
     ND(1)    ND(10)    ND(50)        42
     ND(1)    ND(11)        56        42
     ND(1)    ND(11)        61    ND(100)
        1     ND(12)        80       161
        1     ND(12)        89       203
        2     ND(12)       104
        3     ND(12)       114
                 15        134
                 16        281
                 18    ND(500)

                                          Slide 6

-------
Preliminary Data Analysis

  Rank Data

  Determine Number of Unique
  Censoring Thresholds

  Test Distribution Assumptions

  Transform Data to Obtain
  Normality

-------
     Probability Plot
     [Slide 8: plot of the ordered data values against cumulative probability,
      P(X <= x(i)), based on the ranks i]
-------
    Example Probability Plots

    [Slide 9: probability plots of the same data set on (a) Normal and (b) Log Normal
     scales; cumulative probability (%) on the x axis, data values on the y axis.
     The log normal plot is the more nearly linear of the two.]

-------
            Log-Normal Data

     Transform w/  y_i = Ln(x_i)

     Apply Statistical Procedure

     Untransform w/
       x̄ = exp(ȳ + 0.5 s_y²)
       s_x² = x̄² [exp(s_y²) - 1.0]

                                          Slide 10

-------
           Methods

 Maximum Likelihood Estimators (MLE)
   -  Cohen - Restricted - Hald
 Regression of Order Statistics
 Delta - Log Normal
 USEPA D-Log
 Balancing
   -  Trimmed & Winsorized Mean
 Graphical
 Replacement
Slide  11

-------
  Cohen's MLE Method

    m_ML = m* - λ (m* - x_0)

    s_ML^2 = s*^2 + λ (m* - x_0)^2

    δ = fraction of ND observations

    γ = s*^2 / (m* - x_0)^2
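
  As the method is usually presented, m* and s*^2 are the mean and variance of the
  quantified observations above the censoring point x_0, and λ is an adjustment
  factor tabled as a function of δ and γ.  The sketch below skips the table lookup
  and instead maximizes the left-censored normal likelihood directly, which yields
  the same maximum likelihood estimates; the data are hypothetical, loosely based
  on Set 2 of the example slides, with all non-detects treated as censored at a
  common threshold of 10 for illustration:

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      x0 = 10.0                              # assumed common censoring threshold
      detects = np.log([18.0, 16.0, 15.0])   # log of the quantified observations
      n_nd = 8                               # number of non-detect observations

      def neg_log_likelihood(params):
          mu, log_sigma = params
          sigma = np.exp(log_sigma)          # keeps sigma positive
          ll = norm.logpdf(detects, mu, sigma).sum()           # measured values
          ll += n_nd * norm.logcdf((np.log(x0) - mu) / sigma)  # left-censored values
          return -ll

      start = [detects.mean(), np.log(detects.std(ddof=1))]
      fit = minimize(neg_log_likelihood, start, method="Nelder-Mead")
      mu_ml, sigma_ml = fit.x[0], np.exp(fit.x[1])
      print(f"mu_ML = {mu_ml:.3f}, sigma_ML = {sigma_ml:.3f}")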

-------
      Censoring on the Left at x0
Slide 13

-------
            Regression of NOS

     x_i = ordered values of the quantified observations
     z   = the inverse normal function
     [ ] = plotting position

Slide 14
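
     One common way to carry out the regression is to plot the logs of the ranked
     detected values against the inverse normal of their plotting positions and fit
     a straight line; the intercept and slope then estimate the log-scale mean and
     standard deviation.  A minimal sketch, assuming a simple i/(n+1) plotting
     position and hypothetical counts, might be:

         import numpy as np
         from scipy.stats import norm

         detects = np.array([15.0, 16.0, 18.0])   # quantified observations (hits)
         n_nd = 8                                 # non-detects occupy the lowest ranks
         n = n_nd + detects.size

         ranks = np.arange(n_nd + 1, n + 1)       # ranks of the detected values
         p = ranks / (n + 1.0)                    # plotting positions, i/(n+1)
         z = norm.ppf(p)                          # inverse normal of plotting positions

         # Least-squares line through (z, ln(x)): intercept ~ log-mean, slope ~ log-s.d.
         slope, intercept = np.polyfit(z, np.log(np.sort(detects)), 1)
         print(f"log-scale mean = {intercept:.3f}, log-scale s.d. = {slope:.3f}")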

-------
       EPA's D-Log Approach

  M = δ D + (1-δ) exp(y_bar + 0.5 s^2)

  S^2 = (1-δ) exp(2 y_bar + s^2) [exp(s^2) - (1-δ)]
        + δ (1-δ) D [D - 2 exp(y_bar + 0.5 s^2)]

where:
  D = detection limit
  δ = fraction of NDs
  y_i = ln(x_i) for x_i > D

 Slide 15
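
  The two formulas above can be applied directly once the ND fraction and the
  log-scale statistics of the detected values are in hand.  The sketch below is a
  straight transcription of them, using hypothetical inputs loosely based on Set 3
  of the example slides with a single detection limit D = 50 assumed for all
  non-detects:

      import numpy as np

      detected = np.array([56.0, 61.0, 80.0, 89.0, 104.0, 114.0, 134.0, 281.0])
      D = 50.0                                   # assumed single detection limit
      n_nd = 3                                   # number of ND(D) observations
      delta = n_nd / (n_nd + detected.size)      # fraction of NDs

      y = np.log(detected)
      y_bar, s2 = y.mean(), y.var(ddof=1)

      # M   = delta*D + (1-delta)*exp(y_bar + 0.5*s2)
      # S^2 = (1-delta)*exp(2*y_bar + s2)*[exp(s2) - (1-delta)]
      #       + delta*(1-delta)*D*[D - 2*exp(y_bar + 0.5*s2)]
      M = delta * D + (1 - delta) * np.exp(y_bar + 0.5 * s2)
      S2 = ((1 - delta) * np.exp(2 * y_bar + s2) * (np.exp(s2) - (1 - delta))
            + delta * (1 - delta) * D * (D - 2 * np.exp(y_bar + 0.5 * s2)))

      print(f"D-Log mean = {M:.1f}, D-Log variance = {S2:.1f}")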

-------
Simulation Study Approach

> Average Bias and RMSE of x and s over
  1000 trials

> Three Sample Sizes: 10, 15, & 20

> Four Distrib. Assumptions: LN(cv=1),
  LN(cv=.3), 2346-TCP, 45-DCC

> Five Censoring Levels: 5, 20, 40, 60 &
  80 Percentile

  Slide 16

-------
Distribution Testing
    LN vs D-LOG
 EVS Data Base
 Data Set Selection Criteria
 Prob Plot R2 Values
 Relative Prediction Error

-------
 Mean Estimator Bias (%)

           DLOG        MLE        RNOS
N = 10
  CV=1     4 to 28     5 to 7     8 to 25
  CV=.3    3 to 15     .6 to 2    2 to 5
N = 15
  CV=1     2 to 25     3 to 4     5 to 12
  CV=.3    .8 to 15    .3 to .4   1 to 3
N = 20
  CV=1     .2 to 24    1 to 2     3 to 9
  CV=.3    .3 to 15   -.1 to -.3  .5 to 2

* For 5 to 60% Censoring

Slide 19

-------
   SD Estimator Bias (%)

             DLOG        MLE        RNOS
N = 10
  CV=1      -1 to 20    10 to 25   17 to 756
  CV=.3     -6 to -39   -2 to -7   -.5 to -2
N = 15
  CV=1      -5 to -26    6 to 14    9 to 49
  CV=.3     -6 to -39   -2 to -5   -.6 to -2
N = 20
  CV=1      -7 to -27    3 to 11    6 to 36
  CV=.3     -6 to -39   -2 to -3    .7 to 1

     * For 5 to 60% Censoring

-------
Summary & Conclusions
  All Methods Appeared Unreliable
  for Data Censoring > 60 Percentile
  Estimating Means
    - All Approaches Over Estimate
    - MLE Superior to RNOS &
      EPA-DLOG
    - Bias_MLE < 1/2.5 Bias_EPA-DLOG
      Slide 20

-------
Summary & Conclusions Cont.
   Estimating Standard Deviations
     - MLE Over/Under Estimates
     - RNOS Over/Under Estimates
     - EPA-DLOG Under Estimates
     - MLE Superior to RNOS &
       EPA-DLOG
     - Bias_MLE < 1/1.2 Bias_EPA-DLOG
     Slide 21

-------
      Summary & Conclusions Cont.
         Sample Size Effect
           - Bias_EPA-DLOG ≈ Constant
           - Bias_MLE & Bias_RNOS ↓ w/ n ↑
         Population Variability Effect
           - x: Bias_ALL ↓ w/ cv ↓
           - s: Bias_MLE & Bias_RNOS ↓ w/ cv ↓
                Bias_EPA-DLOG ↑ w/ cv ↓
      Slide 22

-------
                                      MR. TELLIARD:  Our before luncheon speaker is
one of our own.  Henry Kahn  is in the Office  of Water and, more importantly,  in the
Engineering and Analysis Division.

      Henry is going to speak on the statistics applied for developing the  regulations
covering the pulp and paper industry.
          DETERMINATION OF PROPOSED EFFLUENT LIMITATIONS FOR
                        THE PULP AND PAPER INDUSTRY
                  Henry D. Kahn and Maria D. Smith, U.S. EPA, and
           Amy S. Brockman, Science Applications International Corporation
Presented on May 5, 1994 at EPA's 17th Annual Conference on Analysis of Pollutants in
                                 the Environment
ABSTRACT

      Effluent guidelines regulations for the pulp, paper and paperboard industry were
proposed by the U.S. Environmental Protection Agency in October 1993.  The proposed
regulations contain numerical limitations on the amounts of pollutants in mill effluent. This
paper provides a description of the characteristics of the data used to support the proposed
limitations. This paper also includes a discussion of statistical methodology used in previous
regulation development and modifications that were required  to accommodate certain
characteristics of the pulp and paper data.  These modifications allowed for a mixture of
various types of censoring  in the data and for multiple detection limits.
INTRODUCTION

      This paper provides a summary of the data and the statistical methodologies that were
used in  developing the effluent limitations contained in the proposed effluent guidelines
regulations for the pulp, paper, and paperboard industry.  In particular, this paper describes
the data sources, the censoring of the data, aggregation of duplicate samples, calculation of
production normalized loadings, the statistical modeling of the data, and special cases where
the statistical methodologies were not used to develop the limitations. Detailed summaries
of the data and statistical methodologies are presented in the "Statistical Support Document
for  Proposed Effluent Limitations Guidelines and  Standards  for the Pulp, Paper,  and
Paperboard Point Source Category." [1]

                                       657

-------
DATA SOURCES

      The data used in developing the limitations were obtained from three sources: the
long-term  study,  short-term studies, and  self-monitoring  data.   These data  provided
information about the concentration levels of various pollutants in wastewater. Samples of
wastewater were analyzed for concentration levels of the following pollutants:  volatile
organic   compounds,  chlorinated   phenolics,  adsorbable  organic   halides   (AOX),
2,3,7,8-tetrachlorodibenzo-p-dioxin  (TCDD),   2,3,7,8-tetrachlorodibenzo-furan  (TCDF),
chemical oxygen  demand (COD), color, total  suspended solids (TSS), and  biochemical
oxygen demand (BOD5).

Long-term Study

      The long-term sampling study was undertaken as a cooperative effort between EPA
and the industry.  Representatives of the paper industry, the American Paper Institute (now
the American Forest and Paper Association  [AFPA]) and the  National Council of the Paper
Industry for Air and Stream Improvement, Inc. (NCASI), cooperated with EPA in obtaining
data to support EPA's effluent guidelines development. In this study, sampling data were
collected and analyzed from eight pulp and paper mills.

      The eight mills included in the long-term study were  selected because  they utilized
particular pulping or bleaching technologies, wastewater treatment, or fiber furnishes.  At
each mill, sampling points were selected to characterize the bleach plant effluent and the
final effluent. Samples were collected during one 24-hour period each week for nine weeks
in the summer of 1991 and each week for nine weeks in the winter of 1991-1992.  A total
of about 540 samples was collected.  These samples were chemically  analyzed for
chlorinated phenolics, chlorinated dioxins and furans, volatile organics, AOX, color, BOD5,
and TSS.  All of the measurements were analyzed statistically, and the appropriate subsets
of the data were used to develop the proposed  effluent limitations and standards.

Short-term Studies

      EPA conducted 13 short-term sampling episodes from  1988 through mid-1993.  Each
episode was either two or three days in length.  Mills  were selected  for participation in the
short-term  sampling program because  they utilized particular pulping or bleaching
technologies, wastewater treatment, or fiber furnishes.

      During these short-term episodes, samples were analyzed for chlorinated phenolics,
chlorinated dioxins and furans, volatile organics, AOX, color, COD, BOD5,  and  TSS.
Depending on the mill, sampling  location, and pollutant, 24-hour, two-day, or three-day
composite samples were collected.   The sampling points were selected to characterize
wastewater discharges from various processes and treatments, including bleach plant filtrates
and final effluent  streams.  All of the data  from the short-term episodes were statistically
                                       658

-------
analyzed, and the appropriate subsets of the data were  used to develop the proposed
effluent limitations and standards.
Self-monitoring Data

       Limitations for BOD5,  TSS, and COD are based, in part, on  self-monitoring data
collected from the 1990 National Census of Pulp, Paper, and Paperboard Manufacturing
Facilities.  In October 1990, this census was sent to all pulp, paper, and paperboard facilities
in the United States and the self-monitoring data base was developed from the responses.
In general, the questionnaire self-monitoring data base contains data provided by the mills
in an approximate daily format (a few skipped days, samples for Monday through Friday
only, etc.). These data were provided for time periods ranging from six months to one year
for the time span from 1989 through 1992.
CENSORING OF DATA

The pulp and paper analytical data base (from the long-term study, short-term studies, and
self-monitoring data) contained a mixture of measured values, non-detect measurements and
right-censored measurements.  These three different types of samples were delineated by
certain qualifiers in the data base:

       o      Non-censored (NC): a measured value.

      o      Non-detect (ND): samples for which analytical measurement did not yield  a
             concentration above a sample-specific detection limit (such measurements are,
             in effect, left-censored).

       o      Right-censored (RC): these samples were qualified with a greater than (>)
             sign, signifying that the reported value is considered a lower limit of the
             actual concentration.

      The pulp and paper effluent concentration data were characterized by a large number
of measurements reported as below the detection limit (ND).  These detection limits were
sample specific and, for many pollutants,  covered a wide range of values.

      The right-censored  values  occurred  in the data for AOX, volatile organics, and
chlorinated phenolics.  For the AOX data, break-through is determined by comparing the
results of two columns used in the chemical analysis  of AOX.  Ideally, all of the AOX is
adsorbed in the first column.  Break-through occurs when AOX  is adsorbed in the second
column.  For the volatile organics and chlorinated  phenolics data, right-censored values
were reported when the measured values were beyond the highest calibration points.
                                       659

-------
AGGREGATION OF DUPLICATE SAMPLES

      Both laboratory and  field  duplicate samples  were provided in the data sources.
Laboratory duplicates are samples that were divided at the laboratory, analyzed separately,
and had the same sample number. Field duplicates are two or more samples collected for
a particular sampling point at virtually the same time, assigned different sample numbers,
and flagged as duplicates for a single episode number.  For the statistical analysis, a single
value was needed for each sample or episode  number.  Therefore, duplicates  were
aggregated  using an  averaging procedure.  If a sample had  both laboratory and field
duplicates, the laboratory duplicates were averaged first.

      In some cases, this aggregation produced another type of censoring which was called
"mid-censoring."  When a non-censored (NC) sample and a non-detected sample  were
averaged, the resulting average was labeled "mid-censored" (MC), that is, a censored sample
whose true value lies between two non-zero bounds (lower and upper). For instance,  the
lower bound  of the average is not  zero (because one of the samples was detected at a
measurable concentration), but instead  would equal  the average of the NC and zero (the
lowest possible value of the non-detect).   Similarly, the upper  bound would  equal  the
average of the NC and the detection limit  of the  non-detect sample (the highest possible
value of the non-detect).  Thus, the lower and  upper  bounds for this type of mid-censored
data point are

      lower:                          NC/2
      upper:                          (NC  + ND)/2

where the value of ND is the detection  limit for the non-detected sample and the value of
NC is the observed concentration value. For  example, if one of the duplicate samples is
non-censored  with a concentration value of 44 ppq and the other duplicate sample is non-
detect with a detection limit of 10 ppq, then the bounds of the  mid-censored value would
be:
      lower = 44 ppq / 2 = 22 ppq
      upper = (44 ppq  + 10 ppq) / 2 = 27 ppq.
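
       A small sketch of the bounds calculation for one duplicate pair (one NC value and
one ND detection limit) follows; the function name is hypothetical and the numbers are
the example from the text:

    def mid_censored_bounds(nc_value, nd_detection_limit):
        # The non-detect's true value lies between 0 and its detection limit,
        # so the average of the pair lies between NC/2 and (NC + ND)/2.
        lower = nc_value / 2.0
        upper = (nc_value + nd_detection_limit) / 2.0
        return lower, upper

    # Example from the text: NC = 44 ppq, ND detection limit = 10 ppq
    print(mid_censored_bounds(44.0, 10.0))    # (22.0, 27.0)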
CALCULATION OF PRODUCTION NORMALIZED MASS LOADINGS

      After all laboratory and field duplicates were averaged, production normalized mass
loadings were calculated for each sample.  Three types of information were used with
appropriate conversion factors to calculate the production normalized mass loadings:  an
analytical  concentration, a wastewater flow rate,  and a  brownstock  flow rate.   All
subsequent calculations were computed using the production normalized mass loadings.
The censoring associated with the concentration values was assigned to the corresponding
production normalized value.
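
       The conversion factors themselves are not given here, but the structure of the
calculation is concentration times wastewater flow divided by production.  A sketch
under assumed units (ug/L for concentration, million gallons per day for flow, and
metric tons per day of brownstock production), with a hypothetical helper name, might
be:

    LITERS_PER_MILLION_GALLONS = 3.785412e6    # liters in one million U.S. gallons

    def production_normalized_loading(conc_ug_per_l, flow_mgd, production_tons_per_day):
        # mass loading (g/day) = concentration (ug/L) * flow (L/day) / 1e6
        # normalized loading   = mass loading / production  (g per ton of product)
        flow_l_per_day = flow_mgd * LITERS_PER_MILLION_GALLONS
        mass_g_per_day = conc_ug_per_l * flow_l_per_day / 1.0e6
        return mass_g_per_day / production_tons_per_day

    # Hypothetical sample: 20 ug/L, 15 MGD of effluent, 900 tons/day of brownstock
    print(f"{production_normalized_loading(20.0, 15.0, 900.0):.2f} g/ton")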


                                      660

-------
STATISTICAL MODELING OF DATA

      The remainder of this paper describes the statistical methodologies that were used
to develop the proposed  limitations for the pulp and paper industry.  The basic approach
used was to fit observed data to various modifications of the lognormal distribution. These
modifications were necessary to accommodate the different types of censoring present in the
data.  In certain cases, this basic approach was not suitable, and these special cases are also
described.

Lognormal Distribution

      The lognormal distribution is often appropriate for modeling effluent data (see figure
of lognormal  distribution) because such data are positively valued and the shape of their
distribution is positively skewed. The BOD5, TSS, and COD data were modeled using the
lognormal  distribution.   Limitations were then calculated  based on parameters of  the
lognormal distribution estimated from the data.

      The presence of censored measurements  in other pulp and paper effluent data sets
led, for several  reasons,  to the consideration of modifications to the  basic lognormal
distribution. These modifications allow for the modeling of such data as mixtures of positive
measurements that are lognormally distributed and measurements with values that are  not
known exactly ("censored" values).
Classical Delta-Lognormal Distribution

      To incorporate censored data into the model, two modifications to the lognormal
density  model have been  used  by EPA in past effluent guidelines rulemakings.  The first
modification is known as the classical delta-lognormal model or delta distribution (see figure
of classical delta-lognormal distribution), used in economic analysis to model income and
revenue patterns (see reference [2]). In this adaptation of the usual lognormal distribution,
the model is expanded to  allow for the presence in the data of zero amounts. To do this,
all positive (dollar) amounts are grouped together and fit  to a lognormal density.  Then all
zero amounts are segregated into another group of measurements representing a discrete
distributional "spike"  or  probability mass at zero.   The resulting mixed  distribution,
combining a continuous density portion with  a discrete-valued spike, is known as the delta-
lognormal distribution.  The delta in the name refers to the  proportion  of the overall
distribution contained in the spike at zero; that is, the proportion of observed zero amounts.
Adapted Delta-Lognormal Distribution

      EPA further adapted the classical delta-lognormal model ("adapted model") to account
for non-detect measurements in the same fashion that zero measurements were handled in
                                        661

-------
the original delta-lognormal.  Instead of zero amounts and non-zero, positive amounts, the
data  consisted  of non-detects  and  detects.   Rather  than  assuming  that  non-detects
represented a spike of zero concentrations, these samples were allowed to have a single
positive value (see figure of adapted delta-lognormal distribution and reference [3]). Because
each non-detect was assigned the same positive value, the distributional spike in this
adapted model  was located not at zero,  but at that single positive value.  In the adapted
delta-lognormal model, the delta again  refers to  those measurements contained  in the
discrete spike, this time representing the proportion of non-detect values observed in the
data  set.

      The adapted model was used in developing limitations for the Organic Chemicals,
Plastics,  and  Synthetic  Fibers  (OCPSF)  and  the Pesticides  Manufacturing  regulations
promulgated in  1987 and  1993, respectively. For most data sets for these two rulemakings,
the concentration data were fit to the adapted model (see references [4] and [5]). However,
the distribution  can also be used to model mass values as was done in two instances in the
pesticides manufacturing rulemaking.  Mass values and production-normalized mass values
are typically lognormally  distributed as are concentration data.
Modified Delta-Lognormal Distribution

      The modified delta-lognormal model contains several modifications of the adapted
model.  The modifications allow for changes in three key assumptions  underlying the
adapted delta-lognormal.  These assumptions relate  to the discrete probability mass of the
model, the continuous lognormal portion of the model, and non-censored values below the
detection limit.

      The first assumption is that the discrete spike portion of the adapted delta-lognormal
model is a fixed, single-valued probability mass associated (typically) with all the non-detect
measurements.  If all non-detect samples in the pulp and paper data base had roughly the
same reported detection limit, this assumption would be satisfied adequately.  However,
reported detection limits  among sample measurements in the pulp and  paper analytical
studies varied substantially, especially when the non-detect concentrations were converted
to "no detectable mass amounts" by multiplying the concentration detection limit by the
effluent flow rate associated with the stream from which the sample was taken. Because of
this variation in the reported concentration-based detection limits and "no detectable mass
amounts", a single-valued discrete probability mass could not adequately represent the set
of non-detect measurements observed in the pulp and paper data base and a modification
of the model was used.

      The second assumption of the adapted delta-lognormal  model is that all non-censored
values (i.e., measurements) reported below the detection limit (D) are set equal to the value
chosen to  represent non-detect measurements.  For example,  if this value for TCDD was 10
parts per quadrillion (ppq), then any non-censored samples reported below 10 ppq were set

                                        662

-------
to 10 ppq.  The adapted model was modified to incorporate the presence of non-censored
values below the detection limits.

      The third assumption of the adapted delta-lognormal model is that all of the detected
measurements comprising the continuous lognormal portion of the overall distribution are
known concentration (or mass) amounts. In the pulp and paper data base, however, not all
of the samples considered to be detects were associated with known numerical values.  As
an example, certain sample measurements within the AOX data base were known to have
a concentration at least as large as some lower bound L, but the exact value could not be
determined. In statistical terms, just as non-detect samples are referred to as left-censored
measurements because they are known to be between zero and  an upper bound (i.e., the
detection limit), these AOX measurements were referred to as right-censored samples.  In
effect, left-censored values are censored  on the  left side of the distribution and right-
censored values are censored on the right side. Another example occurred for mid-censored
samples. These samples were known to have a concentration (or mass value) between some
lower bound (L) and  some upper  bound (U)  but the exact value was  not known.   As
discussed previously, mid-censored values occurred due to averaging duplicates where one
measurement was non-censored and the other measurement was non-detect.

      The presence of measurements that are censored in some fashion,  so that the exact
values are indeterminate, makes it inappropriate to apply the adapted delta-lognormal model
without  further modifications.  One approach that could be taken without changing the
model would be to assign an "exact" measurement value to those samples that are censored.
However, this tactic leads to arbitrary measurement value assignments and would have an
uncertain and potentially arbitrary impact on the calculated estimates of the final model
parameters.  Instead of handling  uncertain  measurements in this fashion, the choice was
made to modify the adapted delta-lognormal model to accommodate censored samples as
well as non-censored samples (i.e., those detected measurements associated with "exact" or
known concentration/mass values).
Modification of the Discrete Spike

      To appropriately modify the adapted delta-lognormal model for the observed pulp
and paper data base, the first modification was made to the discrete single-valued spike
representing  non-detect  measurements  (see  figure  of  the modified  delta-lognormal
distribution).   In order to model these values as production-normalized mass values, a
production-normalized mass-based detection limit is defined as the reported concentration-
based sample-specific detection limit multiplied by the flow rate associated with that sample,
and divided by the corresponding production value. Because non-detect samples had wide
variation in production-normalized mass-based detection limits, the single spike of the delta-
lognormal model was replaced by a discrete distribution made up of multiple spikes.  Each
spike  in this modification is associated with a distinct production-normalized mass-based
detection limit observed in the pulp and paper data base. Thus, instead of assigning all non-


                                       663

-------
detects to a single, fixed value, as in the adapted model, non-detects can be associated with
multiple values depending on how the production-normalized mass-based detection limits
vary.

      In  particular,  because  the  production-normalized  mass-based  detection  limit
associated with a non-detect sample is considered to be an upper bound on the true value,
which could  range  conceivably from zero up to the detection  limit, the modified delta-
lognormal model used here assigns each non-detect sample to half its production-normalized
mass detection limit.

      This procedure of using half of the production-normalized detection limit was
modified when the concentration-based detection limit was much larger than the majority
of other  concentration-based detection limits  for that pollutant.  Using the  production-
normalized mass  loadings resulting from these high concentration-based detection limits
caused  instabilities in estimating  the parameters of the  distribution of  the  loadings.
Therefore, twice the mode (i.e., the most commonly reported concentration-based detection
limit) was substituted  for any concentration-based detection limits that were reported as
greater than the value of twice the mode of the set of detection limits for a pollutant. This
substituted value was then used in calculating the production-normalized  mass-based
detection  limit.  For example,  if one sample  has a concentration-based sample-specific
detection limit reported as 500 ug/l for pollutant XYZ and the mode of the set of detection
limits for XYZ was 20  ug/l, then the value of 40 ug/l (i.e., two times the mode) was  used in
calculating the  production-normalized mass-based detection limit.
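
       A sketch of that substitution rule, with a hypothetical helper and the worked
example from the text, could read:

    from statistics import mode

    def capped_detection_limit(sample_dl, all_dls_for_pollutant):
        # If the sample-specific detection limit exceeds twice the most commonly
        # reported detection limit for the pollutant, substitute twice the mode.
        cap = 2 * mode(all_dls_for_pollutant)
        return min(sample_dl, cap)

    # Example from the text: reported limit of 500 ug/L; mode of the set is 20 ug/L
    dls = [20.0, 20.0, 20.0, 25.0, 500.0]
    print(capped_detection_limit(500.0, dls))   # 40.0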

      The modified delta-lognormal used to model the production-normalized mass values
is, in effect, a generalization of the adapted model that allows for more than one  sample
specific detection limit. In the adapted model, the delta  portion represents the proportion
of non-detects.  In the modified model, the delta portion  represents the proportion  of non-
detects, but is divided  into  the sum  of smaller fractions,  each representing the proportion
of non-detects associated with a particular and distinct detection limit.  While replacing the
single discrete spike in the adapted delta-lognormal distribution with a more general discrete
distribution of multiple spikes increases the complexity of the model, the discrete  portion
with multiple spikes plays a role in limitations development parallel to that of the single-
spike case and offers flexibility for handling multiple observed detection limits.
Modification of the lognormal portion

      To accommodate detected  observations that are  censored in some fashion, the
lognormal  portion of the adapted delta-lognormal model  also  has been  modified.   A
lognormal distribution is still  used  to represent the set of  detected measurements, but the
manner of estimating the distributional parameters has been changed to allow for mid- and
right-censored observations and for non-censored values below the multiple detection limits.
In general, the method typically used to estimate the parameters of the underlying lognormal


                                        664

-------
distribution is known as maximum likelihood estimation (MLE). The MLE method is based
on assuming that a group of independent observations follow a particular distributional
model, in this case the lognormal distribution.  A mathematical function known as the
"likelihood" is constructed from the mathematical formula for the lognormal distribution fit
to the  observed data.  Data that are reported as either measured  or censored can be
incorporated into the likelihood function.  The values of the parameters of the distribution
that maximize the likelihood function for a given set of data are referred to as the maximum
likelihood estimates.
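
       The likelihood can be written so that each kind of observation contributes the
appropriate term: a density value for a measured result, and an interval probability for
a left-, mid-, or right-censored result.  The sketch below, an illustration rather than
the Agency's production code, maximizes such a likelihood on the log scale for a small
set of hypothetical observations (the mid-censored bounds reuse the 22 and 27 ppq example
from above):

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    measured = np.log([44.0, 61.0, 80.0])        # non-censored (NC) values
    left_upper = np.log([10.0, 12.0])            # ND: true value in (0, DL]
    mid_bounds = [(np.log(22.0), np.log(27.0))]  # MC: true value in (L, U)
    right_lower = np.log([300.0])                # RC: true value above L

    def neg_log_likelihood(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        ll = norm.logpdf(measured, mu, sigma).sum()
        ll += norm.logcdf((left_upper - mu) / sigma).sum()
        ll += sum(np.log(norm.cdf((u - mu) / sigma) - norm.cdf((l - mu) / sigma))
                  for l, u in mid_bounds)
        ll += norm.logsf((right_lower - mu) / sigma).sum()
        return -ll

    fit = minimize(neg_log_likelihood, x0=[np.log(60.0), 0.0], method="Nelder-Mead")
    mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
    print(f"log-scale mean = {mu_hat:.3f}, log-scale s.d. = {sigma_hat:.3f}")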
SPECIAL CASES

      The modified delta-lognormal was not used to model data sets that contained only
non-detect measurements.  For each of these data sets, the proposed effluent limitation is
non-detect at the minimum  level for the analytical  method.  EPA proposed non-detect
limitations for some of the chlorinated phenolics, volatile organics, and TCDD when the
data contained all non-detect measurements.
CONCLUSION

      With two basic modifications to the adapted delta-lognormal  distribution,  it is
possible to fit  a wide variety of observed effluent data sets to the modified model.  This
model can accommodate data sets that contain a mixture of multiple detection limits for
non-detects,  detected  samples  with  mid-  and  right-censored  measurements,  and
non-censored values below the multiple detection limits.  The same basic framework can
be used even if there are no non-detect values or censored data.  Thus, the modified delta-
lognormal model offers a large degree of flexibility in modeling effluent data.  This flexibility
was necessary in order to model the data available to support the proposed pulp and paper
rulemaking.
REFERENCES

[1]  U.S. Environmental Protection Agency (USEPA).  1993.
      Statistical  Support Document  for  Proposed Effluent Limitations Guidelines and
      Standards  for  the  Pulp,  Paper,  and  Paperboard  Point  Source  Category.
      EPA-821-R-93-023.  November 1993.

[2]  Aitchison, J. and J.A.C. Brown.   1963.  The Lognormal Distribution.  Cambridge
      University Press, New York.
                                       665

-------
[3]  Owen, W.J., and DeRouen, T.A.  1980. "Estimation of the Mean for Lognormal Data
      Containing Zeroes and Left-censored Values with Applications to the Measurement
      of Worker Exposure to Air Contaminants."  Biometrics.  Vol. 36:  707-719.

[4]  Kahn, H.D., and M.B. Rubin.  1989.  "Use of Statistical  Methods in Industrial Water
      Pollution Control Regulations in the United States." Environmental Monitoring and
      Assessment.  Vol. 12: 129-148.

[5]  U.S. Environmental Protection Agency (USEPA).  1987.  Development Document for
      Effluent Limitations Guidelines for the Organic Chemicals,  Plastics,  and Synthetic
      Fibers Point Source Category. Volume I, Volume II. Industrial Technology Division.
      EPA 440/1-87/009. October 1987.
                                      666

-------
                       QUESTION AND ANSWER SESSION
                                     MR. MADELONE:  Ray Madelone, TRW.
      What was the percentage of non-detects in the data sets?

                                     MR. KAHN: It ran the gamut.  There were so
many different data sets with different percentages of non-detects. You  saw the one with
100 percent non-detects. We had all the way from zero to  100 percent, depending on the
analyte.

                                     MR. MADELONE: Can you hazard a guess on
what the typical value would be?

                                     MR. KAHN:  No, I would not want to do that.

                                     MR. MADELONE: Okay.

                                     MR. TELLIARD: It depended on the analyte, Ray.
I  mean, for example, AOX  was always there.  2,3,7,8, as Henry pointed out, in  most
instances, was below the detection level.

                                     MR. MADELONE: In your process of determining
the true mean value of the data set, do you preserve the variability of the overall data set?
In other words, if I were to take the non-detects and just set them to some number and then
compute the standard deviation of that set, I  would probably decrease it, because I loaded
it, weighted it, with numbers that are all the same.

      In the process that you are using here, do you maintain the variability that the rest
of the true data set has in computing the numbers for the non-detect?

                                     MR. KAHN:  The short answer is yes.

                                     MR. TELLIARD:  Fellow in the back?

                                     MR. SLENTZ: My name is Kurt Slentz with Energy
Labs.   I guess I  had  the same question, maybe formed  a little bit differently, but if I
understand  it correctly, you are going to enforce their limits at non-detectable values. Is that
correct?

                                     MR. KAHN: The proposed  compliance level for
certain pollutants  is at the minimum level for the  analytical method which is the lowest
level for quantification.
                                      667

-------
                                    MR. SLENTZ: Have you taken into account the
precision of the analytical method at that value?

                                     MR. TELLIARD:  Yes.

                                    MR. KAHN: Bill says yes.

                                    MR. TELLIARD:  Yes.

                                    MR. SLENTZ:   You have accounted  for that
statistically?

                                    MR. TELLIARD:  Yes.

                                    MR. KAHN: Yes.  The variability inherent in the
data is inherent in the values, such as limitations, that we calculate from the data.

                                    MR. SLENTZ: Do you require reporting of data
that we produce that  are below the  detection  that we flag,  I mean, it is below  our
quantitation limit and we flag it as detectable?  Do you count that?

                                    MR. TELLIARD:  No.

                                    MR. SLENTZ:   If we  detect  something that is
greater than our method detection, lower than our practical quantitation limit, do you count
that as a number then?

                                    MR. TELLIARD:  No.

                                    MR. KAHN: We use a value that is at or above
the minimum level which is our minimum level for quantification.

                                    MR. TELLIARD:  Thank you, Henry.
                                      668

-------
              [Figure: Standard Lognormal Distribution (detects)]

              [Figure: Standard Delta-Lognormal Distribution (nondetects as a spike at zero; detects)]
                          669

-------
              [Figure: Adapted Delta-Lognormal Distribution (nondetects as a spike at a single positive value; detects)]

              [Figure: Modified Delta-Lognormal Distribution (nondetects, non-censored values, mid-censored lower and upper bounds, and right-censored values)]
                                 670

-------
                                   MR. TELLIARD:  Ileana has copies of her paper in
the back of the room for anybody who wants to take a hard copy home with them.

      It is lunch time.  If you will, please be back  here by  1:30.  For those who are
checking out and need to put your bags somewhere, feel free to bring them down, and we
will find spaces for them around the room.
                                     671

-------
(Blank Page)
    672

-------
                                     MR. TELLIARD: Good afternoon to all of you who
are back from the pizzeria, chocolate factory, or whatever else you were doing.

      Our first speaker this afternoon is Bob Runyon.  Bob is Chief of the Monitoring and
Management branch of the Environmental Services Division (ESD) in Region II.  Bob is also,
in his spare time when he has nothing else to do, Co-chairman of the Methods Panel for the
Environmental Monitoring Management Council, or, as  we like to call it  in government
because we cannot use words, EMMC.

      Bob is going to talk to us this afternoon about what efforts are underway on methods
consolidation and what is going on, in general, in the EMMC and give you  an overview of
what has happened.

      Thank you.
                        METHODS INTEGRATION IN EPA
         THE ENVIRONMENTAL MONITORING MANAGEMENT COUNCIL
                                     MR. RUNYON:  The Environmental Monitoring
Management Council came about in 1990 in response to several issues. In the late 1980s,
EPA faced a situation where the credibility and the quality of the EPA scientific data used
to make policy decisions came under criticism. Reports from the General Accounting Office
and the Science Advisory Board questioned the credibility of the science used to support
policy decisions in the Agency.

      There was also Congressional interest, with all the money that had been spent in
wastewater treatment facility construction, with improvements in the environmental area in
general, in what a national scope assessment would tell them about whether the waters are
getting better, the air is getting better, et cetera. They found that they were unable to make
national environmental assessments  that were meaningful.

      EPA's approach to analytical methods development was fragmented, and it was on
a program-by-program basis.  To address contaminants  of concern within each particular
program,  methods were developed independently.  This led to the creation of a number of
methods for the same analyte that may have been only  slightly different.

      There was a great deal of confusion in the regulated community, because they really
were not  sure which method they were supposed to use in a given situation.  In addition
to that, there was a great deal of difficulty for the production laboratories to continually shift
from using one method today,  another method tomorrow, analyzing for the same analyte.
                                      673

-------
      So, in March of 1990, the Deputy Administrator from the previous administration,
Hank Habicht,  endorsed the formation  of the Environmental  Monitoring  Management
Council.  The council was established to address all of the issues that I have just mentioned.

      The EMMC is a four-tiered organization. It has  a policy council that is made up of
assistant  administrators,  regional administrators,  chaired  by  the Region  III  regional
administrator and the AA for the Office of Research and Development (ORD).

      Under the Policy  Council, there is a steering committee that is composed of office
directors  and division directors from programs, regions, and ORD.

      Under the Steering Committee, there are ad  hoc panels composed of scientific and
engineering program and regional staff. In addition to the panel tri-chairs (program, region,
ORD) each panel has work groups with representatives from the  regions, the Office of
Research  and Development, and the program offices.

      The next overhead illustrates the organizational chart for the EMMC.  As you can see,
under the steering committee, the four panels  that  are currently in place are the methods
compendium panel which has led to the formation of EMMI, the methods integration panel,
the lab accreditation panel, and the regulation development panel.

      The methods compendium panel is charged with developing a readily available
methods  index for the Agency, and we are going to hear a little more about that later.

      The regulatory development panel is charged with assuring that, as regulations are
promulgated by the Agency, either in the development process or in the revision process,
if environmental measurements are involved, the panel would ensure  that there is an
appropriate method and  that there is appropriate quality assurance included in the method
so that the Agency would not promulgate a regulation with no ability to analyze for the
parameter that is being regulated.

      The lab accreditation panel is charged with investigating the feasibility of a national
lab accreditation program.

      The methods integration panel  has quite a large number of work groups. We have
the water, solids, air, biological, radiation, field methods (a new one), performance-based
methods  (a new one), and a QA/QC work group.

      The first action that EMMC took was to establish an infrastructure for addressing the
issues that it was charged to address. EMMC focused on the cross-program issues, because
they were the most critical areas within EPA.
                                       674

-------
      The first accomplishment was the development of a common method format. The
EMMC Format for presentation and documentation of methods is one of the major successes
of EMMC (in addition to EMMI, of course).

      All the new Agency methods are going into EMMC Format.  EMSL-Cincinnati has,
through the efforts of Tom Clark, started to incorporate every one of their new methods into
the EMMC Format, and the integrated  EMMC methods that have been completed are also
in this format.

      The EMMC Format allows for better assessment of comparability when you look at
methods, because the documentation  is consistent across all of the  methods.

      The EMMC Framework for Methods Development has also been completed. The
Framework is a  process for  integrating methods development needs with the status of
methods development within the Agency.  It promotes joint methods development and
funding, minimizes the overlap and the number of methods that are being developed for the
same analyte, and ensures that the EMMC  Format is going to be  incorporated for each
method.

      EMMI, or the Environmental Monitoring Methods Index, is, as all of you know, the
EPA compendium of methods.  For those of you that are interested, outside on the table,
there is some information as well as some demonstration disks.  EMMI is available to the
public through NTIS as well.  That is my promotional spiel here.

      This is a very good index of EPA methodology for anyone, and it is being updated
this year.

      The  methods  integration  panel is  charged with eliminating the  unnecessary
duplication of methods  that are out there, and the EMMC provides a consensus forum for
those methods that need integration. A priority list is developed for methods which should
be integrated first, and the panel serves  as a cross-program mechanism for methods and data
documentation.  Methods integration allows for the development of comparable data across
programs.

      Four methods, as of right now, have been integrated:  graphite furnace-AA; ICP
spectrometry; hot acid extraction for elemental analyses in those first two; and the fourth
one is purgeable organics by  capillary column GC.

      There are four others  that are  in the process right now, semi-volatiles,  dioxins,
halogenated pesticides,  and microwave digestion.

      I will talk a little bit more about  that integration effort in  a minute, but the next steps
will be to finish those methods that are in the process in terms of integration as well as then
put them into the Federal Register in the EMMC format so that they can be officially adopted.


                                      675

-------
      A discovery in the methods integration process has been that it is a very difficult
process to go back and have people integrate existing methods that are different.  There
were many turf battles.  Bill bears the scars of many of those battles.

      It is felt that, once we finish those methods that are in process in terms of integration,
the effort that would be required to integrate further methods may not be worth the return
that we would get on it.

      The  current thinking is that we need to evaluate the performance-based method
approach.   There are advantages to the performance-based  method approach, one being
that it would definitely minimize the regulatory modification work that would be involved
as technology and methods improve over time.

      Currently, with the specific methods being regulated and identified in the regulations,
to go back and modify a method based on new and  innovative technology that has come
on line may take a year and a half to two years to go through the regulatory revision process
to get that new technology on line.

      With the performance-based  approach, changes would  be much  more  rapidly
accommodated.

      The challenge is to encourage the technology, innovation, and method development
while preserving  data  integrity.  The  issues currently being discussed  involve what
constitutes adequate documentation if you choose to take the  performance-based approach,
what do you use as your criteria for selecting the reference methods that would be utilized
in any performance-based method approach, and what reference materials are going to be
available for you to be able to demonstrate the performance of any alternative technology
that someone would want to use aside from the reference method?

      The Office of Ground Water and Drinking Water came up with a pilot approach for
performance-based method implementation, and that currently is under internal review
within the Agency. They have developed a draft documentation package that will lay out
what the requirements would be in a laboratory for documentation in a performance-based
methods system.

      That particular documentation scheme has been distributed to all the members of the
steering committee on EMMC, and their comments are coming in now.  As you will see in
a minute on one of these overheads, the Deputy Administrator of EPA has charged EMMC
with presenting an option paper on the use of a performance-based methods approach by
the end of this calendar year.

      There are two other new work groups in the EMMC methods integration panel, the
field methods work group (proposed to deal with new methodology as it comes on line for
use in the field, i.e. portable in-the-field analytical methods), and due to the fact that we are


                                       676

-------
evaluating the performance-based method approach, we have adopted the performance-
based methods work group that was not a part of EMMC as a work group now under the
methods integration panel in EMMC.

      The quality assurance regulatory development panel is charged with, as I mentioned
before, ensuring that there are appropriate methods for any regulated analytes.

      EPA is currently in the process of revising its regulatory development process into a
tiered approach to try and speed up the regulation  implementation  process.  EMMC  is
trying to ensure that we get the same  level or better control over assuring that there are
methods available, if there are environmental measurements included in the regulation, and
that the quality assurance concerns are addressed in the regulation when it comes out under
this new three-tiered process.

      Another effort of this panel is to conduct an assessment of the use of performance
evaluation materials  across the Agency, how they are  utilized by the different programs
within EPA, and to try and establish what would be a stable source of funding to ensure that
there will be a continuing source of performance evaluation materials.

      Particularly, if we are going to a performance-based approach or we are going to be
using that to a greater or lesser extent, performance evaluation materials are even  more
critical to the Agency.

      The lab accreditation panel may, from the raising of hands  the other day as to the
people that were represented here, have the most impact on many of you. The national lab
accreditation evaluation  effort that is being put forth  by EMMC resulted from  the CNAEL
report.

      That report was submitted to the government and  currently, there are State EPA focus
groups that were formed that evaluated and analyzed the issues raised  in  the  report, and
have now prepared papers on each of the issues that were raised in terms of implementing
a national lab accreditation program. They have also developed options under each of the
issues in terms of the questions that would be needed to be answered.

      That is where it stands right now. The next step is a national conference to be held on the
national lab accreditation program.  The EPA Administration has endorsed  going forward
with that conference.

      The funding is being made available to develop that national conference which, the
latest information I have, is intended to be held sometime next spring, early in 1995.

      As Bill has said, we have not necessarily moved forward with the speed of light.  The
EPA Deputy Administrator, Bob Sussman, was  briefed on EMMC to ensure recognition and,
                                       677

-------
I guess, the blessing of the current administration to go forward in the directions we were
going.

      That occurred in March of this year, and he essentially has approved EMMC progress
made to date and endorsed the EMMC as the focal point for internal EPA and external contacts
with EPA on monitoring management issues; has endorsed the EMMC methods format and
the EMMC framework for  methods development; and has indicated that he wishes us to
continue  on the national lab accreditation development process.

      He directed EMMC to brief the Science Policy Council (which actually took place in
April) on  the lab accreditation activities that had come forth to date. At that point, it was
decided that the conference will  go  forward  and also that there will  be funding to
accomplish that.

      The performance-based method approach evaluation has been scheduled to be
completed by the end of this calendar  year.

      The EMMC has been identified  as  EPA's  internal  and  external focal  point on
monitoring management issues, and one of the major activities we are involved  with now
is serving as the EPA contact point for the Intergovernmental Task Force on Monitoring
Water Quality which Elizabeth Jester Fellows is going to talk about in the next talk.

      We will be the focal point  for interacting with ITFM on the issues of comparable
methodology, lab accreditation, methods  compendium  issues;   issues  that the ITFM  is
addressing on an inter-agency level that EMMC is addressing within EPA itself.

      I guess the question  we have to ask ourselves is "Why EMMC?" Really, it is an effort
to go across programs in trying to develop better scientific credibility and data comparability
within the Agency, and that will also interact with the ITFM across agencies.

      Essentially, with the watershed approach of EPA and the other geographically based
initiatives that are taking place, data sharing is becoming more and more critical.  So, when
you make cross-media decisions,  we need to have the  integration capability across the
Agency itself.

      It is simplifying lab procedures. The integrated methods have reduced large numbers
of  methods into more consistent methods and in a format that everyone will be able to,  I
think, understand. Since all methods will be in the same format,  it will allow you to make
comparisons much more easily.

      There will be cost reductions in  methods development, because, hopefully, EMMC
will result in more sharing of the efforts in  developing methods  and making the methods
more comprehensive, beyond specific  program needs.
                                       678

-------
      It will avoid the duplication of field, lab, and QC efforts.

      The national lab accreditation program has a major focus on the current lack of
reciprocity and the amount of time that labs spend doing multiple proficiency analyses and
going through multiple lab audits if they do business outside of one State.  EMMC is also
dealing with different criteria for accreditation across the nation.

      Hopefully, there will be consistency, and there will be a savings not only to agencies
but, as well, to the regulated community and the lab community.

      That is all  I have.  Do you  have questions?


                       QUESTION AND ANSWER SESSION


                                     MR. TELLIARD:  This gentleman?

                                     MR. BOWDEN: My name is Brian Bowden. I am
with Hach Company.  I have two questions for you, Bob.  The first is with regard  to the
EMMC format.  As you might know,  Hach Company is a vendor which  supplies water
quality testing analysis systems.

      My question is, when we take our methods and transcribe our format and procedures
into the EMMC method format...the procedures that we offer now, we feel and our users
feel, are very simple, very easy to use, very easy to understand. When we transform them
into the EMMC format, we get procedures that go from being two pages long to procedures
that are 20  pages long, and we get feedback saying that they are not as useful.

      So, my question for you is,  is this EMMC format expected to be standardized and
used by method suppliers?

                                     MR. TELLIARD:  You want me to take a shot at
that?

                                     MR. RUNYON:  Yes, if you will.

                                     MR. TELLIARD: As a non-supporter of the EMMC
format, I would say right now that it is the one on the table, and it is the one we are  using.
We agreed  to use this format, and we are converting all the 500, 600, and  1600  series
methods  into that format.
                                      679

-------
      We have not put that format out for comment, and I would like to do that, and that
is something that we probably ought to address if we are asking you, as a vendor, to use the
format.  We have not done that, but, then again, we have not asked you to do that.

      So, right now, the answer is you can do what you want, and the second part is if we
are going to ask you to do that, then it is only fair to put that format out for comment, and
I think that there are better ways to write a lab format.

      What you do with the EMMC  format is you write an SOP, so you can take the 20
pages and reduce it down to what you have where you add the blue stuff and it turns green,
count three minutes.  Okay?  We do not do that. We have a document.

                                     MR. BOWDEN:  Right now, the feedback we are
getting from EPA-EMSL is that they are asking us to put our methods  in the  EMMC format
procedure prior to submission for acceptance or approval review.

                                     MR. TELLIARD:  For ADP?  Yes, right.  If that is
what they want, that is what you will have to do.

                                     MR. BOWDEN:  Okay. My second  question is
with regard to colorimetric  chemistries  and methodologies.  I  noticed on your method
integration listing, you did not include colorimetry. I am wondering where colorimetry
stands in the EMMC committee's mind.

                                     MR. TELLIARD:  I think the colorimetry tests are
the same ones...you know, they are basically the oldest ones we have. They are impacted
pretty much by  matrix.

      The ones we picked for  integration, the volatile method,  semi-volatiles, the metals
methods, were ones that basically all the program offices were using anyhow, and if you
can get them all in a room and lock the door, you could come out with a kind of consensus
method  that says yes, you will use these surrogates and, yes, we will use these internal
standards, and we will run at such and such a rate.

      That was easy,  but when you get into a situation where you are doing solids versus
water versus air where the matrix is  a big player, we do not feel that it is economically
practical to tackle colorimetry for those areas.

      It is not to say  we do not like  it.  It certainly has its place and  its use, but as far as
an effort to combine those methods,  it is not afoot.

                                     MR. BOWDEN:  Thank you.
                                      680

-------
                                      MR.   GRIFFITHS:     David  Griffiths,  Olver,
Incorporated.  I, too, have two quick questions,  I think.  I  have a vested  interest in  a
commercial laboratory, and as such, I strongly endorse the initiative towards consolidation
of the various methods that we are obligated to use today.  When can we see the first
integrated methods published in the Federal Register?  That is really my first  question.

                                      MR. TELLIARD:  The metals method is probably
ready to go out for comment.  Bill, do you know? I think it is pretty close, isn't it?

                                      MR.  POTTER:   We expect  to  publish... this
summer.   It is real close.

                                      MR. TELLIARD: Yes, it has been badgered, beaten
up, and flogged. It is ready to go, so it will probably be mid-summer.

      1613,  the dioxin integrated method, is  due this summer, probably late summer.
524.2 which  is the volatile method  which is an abstract of the  water method is due this
summer.   We have just generated the method specs,  and we  are looking  at the tiered
approach, drinking water, wastewater, solids. So, there will be different levels and the QC
will be a little bit different, but that  is ready for this summer,  too.

      That is the best I can do and I do not know where the digestion is.  That is the one
I  have not kept track of, because it  is... the microwave digestion.  The last I saw,  it was
almost completed, and that was February. So, it would  be amazing, but we could shoot to
put all these  methods in one notice, but we don't want you to  stop reading the Federal
Register,  but  I am not about to promise that.

                                      MR. GRIFFITHS: I think I speak for many of us
in saying we  are looking forward to it.

                                      MR. TELLIARD:  Okay, thank you.

                                      MR. GRIFFITHS: Secondly, there are some other
issues worthy of consolidation, and I think we have heard two of them today or yesterday
and today. One is clean metals, and the other is methods for dealing with uncertain
analytical  data, namely, data at or near the method detection  limit or below.

      These cross over several programs, most notably,  drinking water for clean metals and
groundwater monitoring under the various solid waste programs.  Has this subject...
consolidation of methods for clean analytical protocols and of methods for statistically
analyzing data that may be uncertain... been brought up for discussion or consideration?
                                       681

-------
                                     MR. TELLIARD:  The metals issue certainly will.
The ambient and drinking water methods certainly are applicable to be combined, and we
will do that.  Ivan Deloach from Drinking Water is floating around here someplace. He and
I talked and we will make it fit.  Both of us will use those methods.

      The data integration issue is something that we are working with, at least in the
Office  of Water, with  Drinking Water and Permits and Enforcement to come  up with a
strategy.  We have a document floating around which we  affectionately refer to as the
pumpkin  book which lays out data review, data requirements, data information, how you
review data.

      That is going to be updated this year, and it will hopefully have in it MDLs and MDL
procedures.

      So, hopefully, it is going to be a busy summer and fall, because a lot of these things,
hopefully, will  be coming to fruition. It is not a question now of resources; it is  a question
of time.

                                     MR. GRIFFITHS:  Is solid waste represented within
the EMMC?

                                     MR. TELLIARD:  Yes, it is.

                                     MR. RUNYON:  Yes.

                                     MR. GRIFFITHS: Thank you.

                                     MR. THOMA: Jerry Thoma, Environmental Health
Laboratories. A multi-part question.  Number one, Bob,  are you willing to speculate just
a bit on what the content of the option report to the deputy administrator might be?  Maybe
more easily said, do you expect a pro position from the Agency on  performance-based
methods?

      Secondly, how  do you expect to integrate the criteria in the  performance-based
method guidelines into the existing methods?

      Thirdly,  is there a  time frame when you expect, assuming that the Agency has a
positive outlook, is there a time frame on when these criteria actually might be integrated
into the Agency framework?

      Then, I guess fourthly, in the performance-based method structure, are holding times
and sample preservation issues considered sacred?
                                       682

-------
                                     MR. TELLIARD:  The last part is yes, unless we
have data.  Okay?

      Performance-based methods are probably a positive position in the Office of Water;
they have always written performance-based methods.  All the 1600 series methods are
performance-based.

      The other thing is the application of 8.2.1 in the 600 series methods which says, you
can change the column  as long as you meet the method specs. In the new EMMC format,
it is 9.1.2, if you are  up on  your sections.

      Anyhow, if we really define this, and that is what this pumpkin book does, what you
need to do under 8.2.1 or 9.1.2, it takes away probably 90 percent of the changes you want
to make in the method anyhow. Now, it is not... that is to say, if  you want to change the
extraction procedure, if you want to change the temperature ramp, if you  want to change
the  column, if you want  to change  the detector, pretty much...no,  that won't go for a
detector, but these changes are all  covered, and  it tells  you what you need to do to
document a change which is equivalent or better. We are not against better, either.

      So, that is out  there now, and we are pushing it.

      Now, there  are  a  lot of folks who are  not real happy with  performance-based
methods. Don't  let me misrepresent that.  People who have to enforce this stuff do not like
to have to spend many hours trying to figure out whether you cheated or you were honest.

      Those people have the nasty part of working  in the real world and dealing with real
issues. We can sit here and proselytize all we want, because we do not have to do any of
that hard stuff.

      I think the Agency's position is that performance-based methods are probably good
and make our lives easier.  The working world may not be believers yet.

      So, when we hear back from the States, municipalities, and people who  have to
actually  work in  the trenches, we may want to change our mind a little bit.  I am not sure
yet, but what I have heard in this meeting and other meetings is when you say performance-
based methods and are waiting for the roar of thundering clapping, it is pretty quiet.

      So, there are differing views. I think we are for it, but I am not sure that the working
world is, and I am sure there will be some compromise.

                                     MR. RUNYON:   And it is  consistent with  the
innovative technology initiative, trying to get technology up to speed and not two years
down the road trying to change a specific method.
                                       683

-------
                                     MS. ASHCRAFT:  I am Merrill Ashcraft with the
Navy Public Works Center in Norfolk. I would like to ask you to give us a little glimpse
of what they are looking at in the national accreditation program, because that is a concern
to many of us here.  If you are going to hold a conference in the spring, will the
attendees that were at this conference be invited?  How are you going to get your mailing
list?

                                     MR. TELLIARD:  The last I  heard was that the
attendees were primarily going to be the States and certain organizations. It was not going
to be a, quote, public meeting.  It was going to be an organizational meeting where the
States and the regulated community, i.e., laboratories, would have representatives.  That is
the last I heard, and that is a year old.

                                     MR.  RUNYON:   Right, and  it  is still  in  the
formative  process, but I am  sure that everyone that is in the community,  the laboratory
community, will be aware that this is taking place.  There will be adequate notice that this
is going to be occurring.

                                     MS. ASHCRAFT: And the glimpse of what you are
seeing as part of that policy,  what do you  think it is going to involve?

                                     MR. RUNYON: Well, the State EPA focus groups
are picking up right where the CNAEL report left off. They have put together a proposal
as to how they thought the CNAEL report might be implemented and what the issues were,
et cetera.

      The EPA and State focus groups have now taken that and spent several meetings
hammering out the actual issues that need to be addressed, such as proficiency materials
and how the program will be administered.

      No final decisions have been made at this point, because EPA wanted to make sure
all the players would have an opportunity to provide input into any decision.  So, a series
of options has been presented on how to administer the program, what the scope would be,
and how reciprocity issues would be dealt with, and no decisions have been made in those
areas.

      The focus groups give options that range from one end of the spectrum to the other.
So, it is really impossible for me to tell you what  the final resolution will be.

      I presented some of the issues that prompted this whole initiative to take place which
are the lack of reciprocity, the inability to have consistent criteria across the country, et
cetera.  The national program is going to try to address some of those issues and minimize
the impact on the lab community and the costs that are associated with it from a time and
dollars aspect in trying to meet multiple requirements across the  board.

                                       684

-------
      So, there are issues with implementation that are going to be presented and grappled
with.  Just as with the performance-based methods issue, there are people on differing sides
of how this should be handled: whether you use a third-party accrediting body, whether the
States are required to be the accreditors, and whether they can add extra criteria to a
national lab accreditation program for their respective State accreditation.

      Those are issues that are going to be addressed in the conference. They have not
been resolved at this point in time.

      Any other questions?

                                      MR. TELLIARD: Thank you, Bob.
                                       685

-------
The Environmental Monitoring
Management Council (EMMC)

-------
  The EMMC was established in March
  of 1990 to:
•  Coordinate Agency-wide policies concerning
   environmental monitoring issues especially in
   the areas of analytical methods integration,
   laboratory accreditation, and QA

•  Address Congressional  concern  over our
   ability to make national environmental
   assessments, and

•  Respond to  needs of Administrator/Deputy
   Administrator  to make decisions based on
   credible scientific data.

-------
EMMC Organization:


   •   EMMC Policy Council is made up of
       AA/RA-level members and is chaired by
       the ORD AA and the Region III RA


   •   Steering Committee is comprised of
      Office Directors and Division  Directors,
      with scientific and engineering program
      and regional staff providing direction to
      the panels and work groups

-------
     The Environmental Monitoring Management Council (EMMC)

     [Organization chart: Policy Council; Steering Committee; panels for
     Methods, Methods Integration (MIP), Laboratory Accreditation, and
     Regulation Development**; Field*, Water, and Solids work groups.]

                     * Proposed
                    ** Activities may be combined to form a new QA Panel

-------
EMMC PROCESS FOR ADDRESSING
ISSUES -  Focus on cross-program issues
             to ensure real improvements:


•  EMMC Infrastructure  established

  • EMMC Format - For All New Agency Methods/builds
   consistency/comparability in documentation/facilitates
   assessment of methods

  • Framework for Development of New Methods  - Uses
   EMMC Format for consistency; based on existing methods;
   promotes joint methods development/funding; facilitates
   geographic, multi-media assessment processes.

  • Environmental Monitoring Methods Index (EMMI) - Agency
   compendium/facilitates methods selection for specific
   purposes/about 800 Agency users/public purchase available
   through NTIS.

-------
INTEGRATION OF MONITORING METHODS

The EMMC serves as a forum to determine which methods need
integration; provides for consensus of all offices, serves
as a  cross-program vehicle  for assuring  documentation of
methodology  and data, and comparability of data.

•  Integration of four monitoring methods completed:

    •   Graphite  furnace atomic absorption; Inductively  coupled
        plasma atomic emission spectrometry; Hot acid extraction
       for elemental analyses (as part of above); Determination of
       purgeable organic compounds by capillary column gas
       chromatography.

    •   These four may account for half of all lab monitoring
       procedures

    •   Others still in-process [Semi-volatiles; Dioxins;
       Halogenated pesticides; Microwave digestion]

    •   Next  step/publish integrated methods in the Federal
       Register.

-------
PILOT OF PERFORMANCE-BASED (PBM)
Approach to Methods

•  EMMC to bring recommendations for an
   Agency-wide approach to the Science Policy
   Council by end of year

•  The PBM approach minimizes regulatory
   modification workload as technologies change;

•  Encourages technology development and
   innovation;

•  Preserves data integrity through proper
   criteria and evaluation [Drinking Water
   Initiative]

-------
               QUALITY ASSURANCE/
            REGULATORY DEVELOPMENT
Currently working on assessment/design to
ensure environmental  measurement/quality
assurance issues are built into the new three
tier approach to regulatory process.
      Performance Evaluation samples/critical
      to enforcement of statutory requirements/
      currently on ad hoc funding basis/should be
      part of cross-program budget process.

-------
  DEVELOPING THE NATIONAL LABORATORY
    ACCREDITATION PROGRAM PROPOSAL
Voluntary national program to  encourage
reciprocity

Simplifies standards for federal, state, and local
laboratories

Regulated laboratories receive  better coverage
for same costs

Needs Agency resources to support National
Environmental  Laboratory  Accreditation
Conference (Issue to be presented to the
Science Policy Council on April 15, 1994)

-------
Deputy Administrator reviewed EMMC
activities on March 4, 1994:

   • Approved progress to date;

   • Made the following decisions:

     •  EMMC to be focal point for internal/
        external policy on monitoring
        information  activities;

     •  Endorsed EMMC  Methods Format
        and Framework for Methods
        Development;

     •  Encouraged EMMC to continue
        developing a national program for
        laboratory accreditation;

-------
   DECISIONS (CONT)
EMMC to brief the Science Policy
Council (SPC) on its activities
including :

*  the laboratory accreditation activities
   and associated near term resource needs;

*  options and recommendations for an
   EPA-wide approach for applying performance-
   based methods to monitoring activities;


Requested  Assistant Administrators
and Regional Administrators to
continue support of EMMC activities.

-------
       EMMC as Internal and External Focal Point
    for Policy on Monitoring Information Activities
       Major External Activity: Intergovernmental Task Force
       on Monitoring Water Quality (ITFM) requested formal
       link with EMMC. [ITFM consists of all federal/state
       governments that monitor water quality]
           EMMC/ITFM will address issues of comparable and
           performance-based methods, laboratory accreditation,
           and government-wide compendiums of methods, and

           The EMMC  to review specific products and provide
           formal EPA  responses in the areas  that are most
           important to achieving  comparability nationwide.

-------
WHY ARE EMMC ACTIVITIES IMPORTANT
  •    Better Science/Credibility
  •    Data Comparability for Sharing Information
  •    Required by Cross-media Decision  Making
      (Cross Programs and Cross  Agencies)
  •    Simplified Lab Procedures
  •    Cost Reductions from:
      •  Eliminating duplication of methods development
         efforts
      •  Avoiding duplication of field, laboratory and QC efforts
      •  Fewer lab evaluation programs

-------
                                     MR. TELLIARD: Following along on this, our next
speaker is Elizabeth Fellows.  She is going to be talking about a nationwide strategy for
improved water quality.

      Elizabeth  is Chief of the Monitoring Branch  in the Assessment and Watershed
Protection Division.  Also,  she is the  Co-chair of the Intergovernmental  Task  Force on
Monitoring Water Quality.

      This kind of ties in, we thought, with what Bob has just said, and Elizabeth is going
to give you an overview of what is going on in ITFM.
             A NATIONWIDE STRATEGY TO IMPROVE WATER QUALITY
                              IN THE UNITED STATES
                                     MS. FELLOWS: Can you hear me? Yes, I guess
so.

      As Bill Telliard said, I am the Chief of the Monitoring Branch for EPA's Office of
Water. That is ambient water.  As such, I have responsibility both for the computer systems
that  hold  ambient water  data  and, on the other  side, for the monitoring  protocols,
procedures,  reports, et cetera that we do in EPA.

      I have had the position about three years now,  and in sitting  down to do a game plan
for what we wanted to do in the new Office of Water reorganization, we  quickly realized
EPA itself collects very little data.  Our regional folks do a great job  for certain kinds of data,
but we rely enormously on the States and on other Federal agencies to give us data to
answer the questions that we are constantly asked, such as:

      "How clean is the water?"  "How and why is water  quality changing over time?"
"Are our programs effective in improving or preserving water quality?"

      So, the first thing we needed to do is go talk to our partners in the  States and other
Federal agencies to see if we can do  a better job of combining our data to answer clearly
identified questions. We went to USGS first, and they said yes, we confront that problem,
too.

      We then jointly went to other Federal agencies and to States, and  all of them said
yes, monitoring in this country is not working as well as it could.  There are a number of
reasons for that.  Let's sit down and try to  solve them.

      So, what we did was, of course,  form a task force. It  is the  Intergovernmental Task
Force on Monitoring Water Quality and it is a three-year task force designed to recommend

                                       699

-------
solutions to the monitoring problem by specific deadlines and then sunset in favor of
whatever was needed to implement the proposed solutions.  We wanted to get in, come up
with solutions to the problem, and then sunset into whatever would be needed to implement
them.

      The major problem we all face is that we cannot answer well the most basic question
about water quality, how  clean is our water and how is it changing over time?

      That is a simple question that  Congress and  everybody else asks  us all the time.
Obviously, it does not have a simple answer.  What do you mean by clean? What kind of
time period are you talking about?

      But however you define the question, the  key is better water quality monitoring,
assessment, and reporting, and we do not have a system that is good enough at this point
to answer nationwide questions.  At a specific site or in particular States, specific questions
can be answered, but on a national level, we cannot do it well enough.

      One of the reasons we cannot at this point is that our water programs are changing.
The Clean Water Act passed over 20  years ago.  We have learned a lot more about our
water resource, and many other things have considerably changed  including a  large
population increase along the way.

      EPA itself has changed. We  are,  of course, still a regulatory agency, but that  is no
longer our primary  mission. We are really moving into more holistic geographic programs
using risk reduction  principles.   New emphases, as Bob was talking about, include  a
watershed and ecosystem focus emphasizing biological, ecological, and habitat measures as
opposed to a specific chemical focus.

      Nonpoint sources... as we have solved our point source problems over the years, we
have uncovered the major nonpoint source problems we have.  Wetlands are disappearing
at an alarming rate.   Sediment, both clean and contaminated, is a problem.  It is monitoring
that shows us where and  how grave our problems are.

      Thousands of groups monitor, spending millions, even billions, of dollars annually
in their  monitoring for a  variety of purposes, and the roles of partners contributing to  a
nationwide strategic look  at monitoring  have never been clearly defined.

      Also, different agencies use different methods to monitor the same parameter.
Obviously, this is how this connects with this audience most directly.  In  spending three
years thinking on a nationwide level about our monitoring programs and of all the problems
we  have to  deal with, if I  had  to choose  one, it would be the  methods problem...
inconsistent methods for the same parameters when collected  for the same purpose.
                                       700

-------
      There are obviously reasons to have different methods where monitoring purposes
differ, but there are many cases where different methods are not necessary.  You can have
the best linked computer systems that you possibly can. You can talk all you want over a
table. But if you have collected your data with inconsistent, incomparable methods and you
have not documented how you have done it,  you have lost already.  You cannot combine
your data.

      So,  I think what you  are doing in dealing with methods  is probably the most
important thing in this whole complex monitoring  picture.

      As well as problems in the monitoring area, there are, of course, opportunities. First
of all, the spotlight is shining on monitoring  right now. It is shining on  monitoring for a
number of  reasons.

      The whole ecosystem/watershed approach, as Bob Runyon says, is targeting the need
for integrated data and the need for a variety of data, not just, say, chemical water column
data. And everyone is recognizing that.  Congress is, OMB is, States are, Federal agencies,
all the volunteer groups that are springing up to  do their own kinds of monitoring.

      We have many new scientific and computer techniques, including, obviously, GIS...
geographic information systems... which allow you to easily portray your data in an
integrated way and immediately point up whether you are trying to portray apples and
oranges together.  This, again, points out the need for comparability in methods and in data
here.

      EPA  and  USGS are modernizing our computer systems which are 20-plus years old.
It would be stupid not to modernize them so they can talk to each other better, and we are
using joint  design features to ensure easy data sharing.

      There is a lot of increased ancillary data that is being collected,  that  we can  obtain
once and share, rather than separately reinventing the wheel.

      So, all of this  led to the ITFM. The first meeting was in January of  '92. The final
recommendations are due in January of '95.  We are working on those now.

      It is a Federal-State partnership. Of the 20 members, the Feds
are the usual suspects, EPA, USGS, NOAA, USDA, Fish and Wildlife, et cetera. Of States
and Tribes, we have ten of them. They are geographically dispersed and have expertise  in
various water resource areas across the country.

      Over 140 Federal and State  staff sit on various working groups working on the
various problems, and we  have an advisory committee which includes municipalities,
industry,  academia, and volunteer groups.  We are trying to pull all the players together
here.
                                       701

-------
      I  am  the  Chair  of that group,  and USGS  is the Vice Chair and the Executive
Secretariat.

      We are not just talking, as I said, about traditional water column kinds of things.  We
are talking about a resource  that includes  surface and ground  waters, coastal waters,
associated aquatic communities,  habitat, wetlands, and  sediment.  So,  we have got the
whole range here.

      We are talking about protecting uses which gets back to State water quality standards
of human health, ecological health, and then the uses that are designated  through the State
standards. We are talking about physical, chemical, and biological  parameters here.

      When we say monitoring, we do not, again, just mean traditional monitoring, either.
We mean the whole range of activities that go from what is the program objective all the
way up  to giving the data to whomever needs  it, which includes  indicators, field data
collection and methods, lab, QA, data storage, data analysis and reporting.

      When we all sat down together to figure out how we are supposed to think about
the problem clearly and get our minds around an immensely
complex problem with lots of players, we formed ourselves into eight task groups that are
looking  at the specific problems.  One is institutional framework, obviously, who is doing
what where  and how can we do it better.

      One is environmental indicators.  If we can choose core indicators that  will answer
identified questions, and measure identified  goals, then  we can talk about commonality
among methods and data and  everything else for these indicators.

      Methods, obviously, I will go into some more. Data management... how do we  link
our systems, store data with descriptors so  we  know the QA/QC  used,  and have data
transfer  standards so we can transfer the data better.  Assessment and reporting... how  can
we tell identified audiences what they need to know about water quality.

      Another working group concentrates on groundwater. Obviously, we are all aware
of the difference between ground- and surface water, but  much of our attention is devoted
to the Clean Water Act kinds of programs, and we needed  to have a separate group that was
a groundwater expert to put the groundwater needs into the picture. So, we have a separate
group looking at groundwater  that will  make sure that  anything we come up with applies
or is varied according to the needs of groundwater.

      We are probably going to need to do that for coastal/marine waters and for wetlands
as well.

      Cost is obvious.  How much money can we save  by doing things better, and then
do we need  new money in addition  to that.

                                       702

-------
      Also, a pilot project, to do a nationwide aquatic biological integrity assessment of the
flowing surface waters of the country. Can we actually take the recommendations the ITFM
is making and see if they work on a particular project?

      The overall  recommendation we came  up with was to develop  an integrated
nationwide voluntary strategy.  As you may imagine, each of those words is freighted with
significance here.

      A strategy means an organized process using a range of monitoring approaches.  We
are way  past needing only a  fixed monitoring network across the country where  you
monitor all the same things with all the same methods.  We need to include fixed stations,
we need to include synoptic surveys, we need to include short-term studies, and a strategy
has to incorporate  all  of those.

      It needs to be nationwide, covering all the water resources I just mentioned.

      It needs to be integrated with a unified process using common  design guidelines,
comparable methods,  shared data, common reporting and training formats.

      To top it all off, it needs to be voluntary.  This is the only thing  that has generated
a lot of comment  here,  because with  an issue of such complexity  and so many people
playing, people ask, quite legitimately, how can a voluntary system  work.

      Well, it can work because there are incentives and because there are benefits. I think
this relates somewhat back to one of the questions that Bob Runyon discussed. Yes, a
method, a format, and a performance-based method are going to take more documentation
and produce maybe a little more paper, but part of that additional work may be critical if
a secondary user is going to use the data or the method that was thought up by somebody
else.

      So, we have got a tradeoff here, and that line of tradeoff is exceedingly difficult to
arrive at,  and it is at a different  place for different kinds of problems.  So, that is one of the
things that we are in the  middle of debating right now.  Where is that line where voluntary
works because it is beneficial to those who play?

      The ITFM also recommends that there should be a permanent monitoring council,
wherein all the public and private players sit together to come up with guidelines, plans,
and implementation approaches on all the things that I talked about before.

      In particular, training is one of those, and methods training is, I think, a big part of
that. Rather than trying to legislate or put out guidelines, if you train collaboratively among
agencies and entities, you are way ahead of the game there.
                                       703

-------
      ITFM spent their  first year coming to  consensus  about what the problems,
opportunities, and preferred strategy were. Actually, I was fairly amazed. The ITFM took
very little time to become a cohesive working body and arrive at consensus on what the
problem was and the ways we might approach it. There was very little turf protection and
posturing which, I think, was indicative of the critical need we share to design a better way
to get better water quality information.

      The second year of the ITFM, we homed in and said okay, if we are really going to
advance our recommendation, we need to produce building block products that would give
us the foundations to implement our national strategy.

      Those  products are:   a framework for monitoring programs... the steps that we
recommend that any monitoring program should go through  if it is really going to achieve
its objective.

      A charter for a permanent monitoring council to replace the ITFM; many parties have
recognized the need for a permanent collaboration mechanism.  It is in the Senate bill for
the  Clean Water Act that ITFM would just become a permanent body legislated under law.

      We have done a matrix of monitoring activities of all the Federal agencies.  We are
beginning to do one for the States, and then there is the whole private sector as well.

      We have got a selection criteria  sheet for environmental indicators and also a matrix
of environmental indicators that would best measure surface  and groundwater so we have
information on core parameters to measure our water quality.

      Methods is what you are most interested in.  Again, it was felt that there needed to
be a very specific part of a national monitoring council which  would be a national methods
and standards comparability council.  If you will, it  is an interagency counterpart to EPA's
Environmental Monitoring Management  Council (EMMC).

      I have to say... Bill Telliard is frank, and I will be frank, too... that it is kind of ironic
being the EPA Chair of this national body when EPA, I think, is one of the biggest problems
in terms of methods comparability and  consistency. Right?   I see some  knowing smiles
here.

      Other agencies and, obviously, the private sector are very concerned about this. That
is why we are so delighted to have an EMMC and, in particular, the strong backing  of the
new administration for EMMC, so we have a real  EPA attempt  to speak with a unified voice.

      Obviously, if even one agency has a comparable methods problem internally, you can
imagine the kinds of problems we have in trying to get methods comparability throughout
the  government. However, we are trying, and we do have some hopes here of success.
                                       704

-------
      One of the things is a policy on performance-based monitoring methods.

      The ITFM Methods Task Group began developing a performance-based methods
policy with EMMC input, and the States and the other Federal agencies who sit on the ITFM
methods group came to pretty much the same conclusion that EMMC did, and ITFM and
EMMC will be working on this issue very  closely together.

      There was some previous discussion in this session about whether States really buy
into performance-based methods. There are five States that sit on the ITFM methods group,
and it is co-chaired by South Carolina... and those States really have bought into it, as have
representatives of five or six Federal  agencies as well.

      Just to talk a  little bit more about the task  before the methods and standards
comparability council, the first one is really to set the agenda, to agree upon those classes
of methods which are a priority for methods comparability; another is to sit down and try
and figure out how to come to some commonality, again, much as EMMC did.

      The task group  is  trying to produce guidelines for how you compare methods,
develop a performance-based analytical system, establish some reference methods so the
system can work, come up with a minimum data set of data qualifiers that would allow you
to do intercomparison exercises, and  support a lab accreditation program. The ITFM is not
doing much on lab accreditation, but we are looking to the EMMC to have the lead on that.

      The Task Group is also  producing  a glossary, and investigating  the need for  pre-
laboratory certification. Certainly, as detection limits get more and more refined here, good
laboratory procedures are becoming increasingly important in this whole picture of
data and data quality.

      Biological methods... in thinking about what  methods  are  a  priority for us, what
monitoring entity arena to work on  together,  biological  methods come  out  as very
important, number one, because we are going into a watershed-based ecological approach
to environmental protection and need biological information in  many cases, and number
two, biological methods do not have so much historical baggage as the chemical methods
do... indeed we are still developing procedures,  and we figure  if we can nip some of the
confusion at the beginning, we will be far ahead.

      Therefore, we  held a workshop last June that had 12 Federal agencies as  well as
States and some other parties in it that looked at algal, benthic, fish, and habitat methods so
far and tried to compare the differences, and  we are  looking at the report comparing the
differences and figuring out where we can go from there in terms  of negotiating how we
might come closer before we get set in concrete on those methods.

      Nutrients. There was a question about sample preservation  time  here. That is one
of the  specific  areas  that we  have  started working on already.   USGS  uses  nutrient


                                       705

-------
preservation methods for ambient monitoring samples (not for regulatory monitoring) where
evidence suggests that you can just use a cooling method rather than using some of the
chemical preservatives.

      EPA's Cincinnati lab has now looked at that, analyzed the information, and said well
we may be able to agree on a common sample preservation method.

      ITFM did not want to just theorize about how to solve our monitoring problems. So,
we began a pilot project.   We have four, actually, but the  Wisconsin pilot project in
particular is trying out a number of the recommendations, including, most importantly, a lot
of methods comparison  where EPA  and USGS and the States and some other  Federal
agencies, where it is appropriate for the method, are going out into the field and sampling
using their own methods to try and  compare and see differences and opportunities  for
comparability.

      This is the  ITFM's third year.  In the first year, we built consensus, launched our
working groups, and made general  recommendations.   The second year we provided
building blocks.  The third year, we are really getting into the implementation aspects.

      What do we mean by a national strategy?  How will  EPA's EMAP monitoring
program, USGS' NAWQA program, USGS' fixed station design for NASQAN which they are
redesigning, all the States with their comparability problems, all of the information from the
private organizations, from the volunteer groups, how will it really fit together?

      And finally, how do compliance monitoring and ambient monitoring fit together?
Obviously, compliance monitoring is where a majority of monitoring money is spent, and
it is kind of a chicken and egg situation.  Good ambient monitoring is needed to set good
compliance limits, and good compliance monitoring can produce information to augment
the ambient data.

      Indeed,  a lot  of ambient monitoring goes on  in the compliance arena  by  sewage
treatment  plants, and by some industries, whether such monitoring is required  in  their
permit or not.  As good citizens, a lot of them have included some ambient monitoring...
monitoring beyond the mixing zone.

      Nobody is asking them to share that data, so there is a wealth of data that we
would benefit by capturing and sharing.

      We have taken the steps to do that. We have had a couple of preliminary meetings
with AMSA, the Association of Metropolitan Sewerage Agencies, the Water
Environment Federation, and AMWA, the drinking water suppliers as well as some industrial
associations such as NCASI. All of them are very interested in sitting down and really
talking through this issue and what we might do.
                                      706

-------
      It obviously has methodological parts to it as well, because if you are collecting
compliance data differently than ambient data on the same parameters, you cannot put it
together even if you manage to get it in the first place.

      So, we are just starting to deal with all those issues.  It is very important for us, and
that is going to be a real focus for us in the next couple of months so we can  have at least
a good start  on what part compliance monitoring should play  in this strategy.

      We will be holding a national conference on monitoring in the next year, and there
will be specific components sitting down and talking about our recommendations for each
one of these areas to which, obviously, you all would be invited.  And at any time,  we
would love to have comments on our activities and  recommendations to date.

      Funding is an enormously big issue. Suffice it to say we are working hard on  it.  As
usual, most of us are scientists, not economists, so we have linked with a series of
economists who can help us.

      We are doing a survey to try and capture the  amount of money that is being spent
on monitoring now, because nobody knows, except to say it is very large. What the Feds,
States, and municipalities  spend tends to be buried in the budget and not as a specific line
item.

      For instance, at EPA I may run the ambient water monitoring program, but my peers
in the  nonpoint source program  and  the combined sewer overflow  program and  the
wetlands program all have monitoring portions as  well, so trying to  figure out how  much
money is spent  is very difficult.

      I  would like to  just take two minutes more and kind of shift to my EPA hat for a
minute to show you again how important the EPA  part of the national strategy is and how
methods play into that. For the following overheads, you are not going to be able to read
every word,  but I just want to show you the principle, not the details.

      This is basically a pyramid which shows that we in EPA, in doing our strategic
planning for monitoring, realize that we need to have clearly identified goals, choose  the
indicators by which we can measure them, and then choose the methods by which we are
going to measure those indicators.

      So, what we have done is said our prime goal is human and ecosystem health.  If you
break that down, what  do you mean by human health? We mean safe drinking water, safe
fish consumption, et cetera.  On the other side, you  have the healthy ecosystem goal.

      In order to get to those goals, you need to improve ambient conditions.  How well
is the water doing for both toxics and conventionals, from both point and nonpoint sources?
                                       707

-------
      In order to improve ambient conditions, you have to reduce pollutant loads.  In order
to reduce pollutant loads, you have got to link it all to what your specific control programs
are doing.

      Take one of the goals of conserving and enhancing ecosystems.  For each subgoal,
we are going to choose an indicator, such as fish assemblage or benthic macroinvertebrates
or habitat or plankton or floral or faunal composition.  For each one of those
indicators, we show what the EPA data source is and what the other agencies' or private
sector data source is.

      So, we at EPA have got a scheme that tells us what our goal is, what the indicator
is to measure it, who has got the data, and the next question is, what method are they using
to get that data for that specific indicator?

      So, both from an ITFM interagency point of view and a specific EPA point  of view,
we end up with the importance of the method that is being used to get the data. Since the
data to measure our goals are coming from so many different agencies, how can the
methods be comparable enough to allow us to put the data together to come up with a
specific answer to the question we have asked?

      That is the quick overview of ITFM and of EPA's  goals and indicators  and how it
relates to methods. I want to close by reiterating what I started out with. If I had to choose
one thing that was of utmost importance in this entire, vastly complicated monitoring
picture, it would be methods and a way to determine the comparability and known quality
of data so you can figure out if you can use someone else's data.

      I put a brochure on the ITFM out on the table, and on it there is an address that you
can write to in order to get a copy of the ITFM report with details on ITFM
recommendations.

      Any questions?


                        QUESTION AND ANSWER SESSION
                                     MR. TELLIARD:  Yes, sir?

                                      MR. WINTERS:  I am Dave Winters from the
Arizona State Laboratory.  I have a question, since we have just heard two talks about
combining and being consistent with methods and getting consistent data, what is  the
movement within your Agency to get, for the laboratories, for us to get some consistency
when we call, say, different Regions?
                                       708

-------
                                     MS. FELLOWS: Right.  There is a big laboratory
study just going on in EPA now where that is one of the questions being asked. The study
is part of the EPA effort to make our products easier for people to use.

      There are other answers, and Bill  might want to give them.

                                     MR. TELLIARD:  If you want an SW846 answer,
there is an SW846 hotline.  If you want a water answer, I generally get it or Cincinnati gets
it, and we do the best we can.  I don't get any air questions. That goes to RTP.

      So,  we have got it all covered, kind of like with a shotgun. One of the issues  is how
do you disseminate this.  We now have a resource center that will mail  out methods and
all that sort of thing.  So, it is starting to come together.  Hopefully, within my lifetime, we
will see  that.

                                     MR. WINTERS:  As far as moving  ahead  on the
methods, especially in performance-based methods, if you are allowing laboratories to make
changes but  then require certain documentation, it has been my experience that  I have
gotten differing answers even within a Region on what is necessary as far as documentation
and what is acceptable.

                                     MR. TELLIARD:  I agree with you.  That is true,
and  you can begin  with  the  pumpkin  book which will  give you  a starting point for
documentation.

      If, for some reason, the Region needs additional information for some purposes,
enforcement, crucifixion, whatever, they will call you and let you know, but we can give
you the  bottom line.

      I am going to put my name up here and phone number and a fax number, and you
can send it to me, and I will send you a copy.

                                     MR. WINTERS: Okay, thanks.

                                     MR. TELLIARD:  You are welcome.

                                     MS. ALLEN: I am Linda Allen from the Minnesota
Department of Health. My question was, what do you do when your methods are missing
an analyte?  For example, your Methods 200.9 and 200.7 for metals do not contain titanium.  We
do ambient monitoring occasionally for titanium. I do not have an EPA method for that in
your new methodologies.

      What  I am concerned about is when you do these metals methods, are you going to
incorporate all the analytes or only a select few of the  analytes  and then we have to


                                      709

-------
scramble around finding other methods to cover these analytes that are not covered in your
methods?

                                    MR. TELLIARD: Yes, you have to scramble around
and...yes, in the methods that we are looking at right now, titanium, lithium, and some of
these others are going to be addressed. They are not on the high point. We are looking
at what Billy Potter talked about, mercury, arsenic, selenium, thallium, the popular choices.

      The lesser or secondary groups...molybdenum is a big one in sludge. We are working
on that now. Titanium and some of these others are  not something that we are moving on
right now. So, basically, what we can give you is our best guess.

      In your  case,  if you are going to use it, I would  recommend all you  do is keep
documentation of it.  We may say you were wrong, but we can't say you were dumb.  What
was it  the guy said earlier today?  We  didn't care if  it was right  as long as  it was
reproducible.  Somebody said  that, but I think document the heck out of it. Okay?
                                      710

-------
 INTERGOVERNMENTAL TASK FORCE
  ON MONITORING WATER QUALITY

                (ITFM)
We cannot answer well the most basic
question about water quality: how clean is
our water and how is it changing over time.
This simple question, often asked of us by
Congress, does not have a single simple
answer and the multiple answers must come
from many agencies and groups and cover
many different parameters.

Better water quality monitoring,
assessment, and reporting are essential to
understanding and managing our resources.
                   711

-------
 WATER PROGRAMS ARE CHANGING
 MONITORING NEEDS ARE CHANGING
                 TOO

The country is moving beyond single media
command-and-control programs into holistic
programs based on risk reduction.  New
emphases include:
   Watershed, ecoregion, and
   geographically-based programs

   Biological, ecological, and habitat focus

   Nonpoint source remediation programs

   Wetlands

   Sediment
                     712

-------
         OPPORTUNITIES

Recognition of the need for better water
resource information on the part of
Congress, OMB, Federal agencies,
States and Tribes, citizen groups

New scientific and computer
technologies, including GIS

USGS and EPA modernization of NWIS
and STORET computer systems; NBS
and EMAP beginning theirs

Increased ancillary data
                713

-------
            PROBLEMS

Many players spend millions annually
monitoring water quality for a variety of
purposes. Roles, objectives,  and
responsibilities are not always clearly
defined and, until the ITFM, no clear
leadership or intergovernmental strategy
linked these efforts.

Different agencies use different methods
to measure the same parameter, often
do not store information about the data
that would enable others to use it with
confidence, and keep the data in systems
that others find hard to access.

The resulting data are often not
comparable and fall short of supporting
effective management of water resources
on a nationwide basis.
                   714

-------
           AUTHORITY

In April 1991, EPA and USGS began
discussion of the need for better water
resources data.  Together, they
approached other Federal agencies and
States, and all agreed the time was ripe
to act.

In January 1992, OMB Memorandum
92-01 replaced Circular A-67, reiterating
the USGS lead in water data
coordination,  and setting up the Water
Information Coordination Program
(WICP) under which the ITFM then
began to operate.
                 715

-------
               ITFM

First meeting January 1992; final
recommendations January 1995.

Federal/State partnership of 20
members.

Federal:  USGS,  EPA, NOAA, USDA,
      FWS/NBS, Corps, DOE, OMB,
       TVA, NPS

State/Tribe:  Arizona, Florida, New
         Jersey, Ohio, Potawatomi
         Community, South Carolina,
         Washington, Wisconsin,
         Delaware River Basin
         Commission.

Over 140 Federal and State staff

Advisory Committee on Water Data for
Public Use; includes municipalities,
industry, academia, volunteer groups
                   716

-------
         SCOPE OF RESOURCE

The Resource:   Surface and ground
                waters, including coastal
                waters, associated aquatic
                communities and habitat,
                wetlands, and sediment.

Uses to Protect:  Human health
                Ecological health
                Uses designated through
                   State Water Quality
                   Standards

Parameters to   Physical
Measure:        Chemical/Toxicological
                Biological/Habitat
                   717

-------
         MONITORING SCOPE

Activities:   • Selection of program objectives
              • Selection of indicators
              • Field data collection
              • Laboratory analysis
              • QA/QC
              • Data storage, management, and sharing
              • Data analysis
              • Data reporting
                      718

-------
      EIGHT TASK GROUPS

Framework

Environmental Indicators

Data Collection Methods

Data Management and Sharing

Assessment and Reporting

Groundwater

Cost

Nationwide aquatic biological integrity
assessment
                719

-------
FIVE PURPOSES FOR MONITORING

• Status and Trends

• Emerging Problems

• Program Design

• Program Evaluation

• Emergency Response
                  720

-------
            ITFM VISION

Water quality monitoring will be fully
successful when all levels of government and
the private sector meet today's and
tomorrow's priority information
requirements, make the best use of
available resources and institutional
capabilities nationwide, and provide useful
information for the future.
                    721

-------
ITFM WATER QUALITY MONITORING
             PRINCIPLES

OBJECTIVES:
   Clearly stated

COORDINATION:
   Should be maximized

TIMELINESS:
   Timely information available for
   decision making

METHODS:
   Documented, scientifically accepted, and
   comparable

INFORMATION SHARING:
   Easy access, use, and sharing

ASSESSMENT/PRESENTATION:
   Information analyzed and provided in
   easily understandable and useable form
                    722

-------
    OVERALL RECOMMENDATION

Develop an integrated, nationwide,
voluntary strategy

STRATEGY:    An organized process
         using a range of monitoring
         design  approaches

NATIONWIDE:   Covering the country,
         including surface, ground, and
         coastal waters

INTEGRATED:   Developed through a
         unified process using common
         design guidelines, comparable
         field and analytic methods,
         shared data, and common
         interpretive, reporting, and
         training formats

VOLUNTARY:   Strategy will be built
         voluntarily from existing stations
         with modifications where needed
                    723

-------
       NATIONAL COMMITTEE
Would provide guidelines and support
   QA/QC
   Monitoring approaches
   Site selection guidelines
   Environmental indicators
   Comparable field and laboratory
   methods
   Data management/information sharing
   Ancillary data
   Interpretation techniques
   Reporting Formats
   Training
   Evaluation
                     724

-------
        REGIONAL STRUCTURE

Would implement data collection

• QA/QC
• Monitoring approaches
• Site selection
• Environmental indicators
• Sample collection and field analysis
• Evaluation
                    725

-------
           PILOT PROJECT

Pilot project in Wisconsin is applying the
ITFM recommendations to test and refine
them.
                     726

-------
           BUILDING BLOCKS

ITFM is producing "building block"
products useful to itself and others in
developing and reviewing monitoring
programs. These include:

Institutional

• Monitoring program framework
• Charter for permanent National
  Monitoring Council
• Matrix of monitoring activities of
  Federal agencies

Indicators

• Environmental indicator selection
  criteria
• Matrix of environmental indicators to
  measure designated uses for both surface
  and groundwater
                    727

-------
   BUILDING BLOCKS (CONTINUED)

Methods

• Charter for a Standards and Methods
  Comparability Council
• Policy on performance-based monitoring
  methods
                    728

-------
           NEXT STEPS
Detailed national strategy out for public
review in summer 1995. Will include a
national and regional component of a
strategy that recommends for the
nation's waters:

o  collaboration of Federal and State
   monitoring agencies

o  core indicators to answer national
   questions

o  comparable methods

o  ways to better share data, including
   common reference tables and linked
   systems

o  integrated reporting of core data
                 729

-------
               FUNDING

ITFM recommendations will be initially
refined and implemented within the base
program funding of the agencies and groups
involved.

Savings will be gained by better
collaboration; new needs will be estimated.
                      730

-------
                              OW Strategic Goals

[Pyramid diagram.  At the apex is the goal of Human & Ecosystem Health,
supported by:

   Protect & Enhance Public Health: safe drinking water, safe fish
   consumption, safe aquatic recreation.

   Conserve & Enhance Ecosystems: biologically healthy water resources;
   waters meet designated uses; State designated uses such as aquatic life
   support, fish consumption, shellfish harvesting, drinking water supply,
   primary and secondary contact recreation, and agriculture.

   Societal/Cultural Goals: pollution prevention, education, environmental
   equity, sustainable economic development.

These goals rest on improved ambient conditions (improved surface water
ambient concentrations of toxic and conventional pollutants, ground waters
meet water quality objectives, no net loss of wetlands, reduced extent of
sediment contamination), which in turn rest on reduced toxics and
conventional loadings.  At the base are the standards and source control
programs (stormwater, CSO, NPS 319, NPS/CZM, TMDL, fish/sediment
contamination, effluent guidelines, ocean dumping, drinking water standards,
NPDES, WQS and criteria, marine debris, sludge management, and wetlands 404)
and the resource-driven approaches (watershed protection, wellhead
protection, National Estuary Program, Clean Lakes, ground water protection,
habitat/wetlands protection, and near coastal waters).]

Office of Water
Environmental Indicators
                   21
                                                                February 3, 1994

-------
                    CONSERVE AND ENHANCE ECOSYSTEMS
   Biologically Healthy Water Resources Including Lakes, Rivers, Streams,
          Estuaries, Coastal Waters, Wetlands, and Ground Water

[Indicator matrix listing EPA and other data sources for each indicator:

   Waters Meet Aquatic Life Designated Uses (including ground water
   discharges to surface water): EPA sources 305(b), STORET/WBS; other
   sources USGS NAWQA, USFWS, State water programs.

   Fish (assemblage) or IBI-like Index: EPA sources 305(b), EMAP,
   BIOS/STORET; other sources NOAA ELMR, NS&T, and FSP, USFWS NCBP and
   BEST, USGS NAWQA, State water programs.

   Benthic Macroinvertebrates (assemblage): EPA sources EMAP, BIOS/STORET;
   other sources NOAA ELMR and NS&T, MMS, USGS NAWQA, State water programs.

   Habitat (physical structure): EPA sources EMAP, BIOS/STORET; other
   sources USDA Forest Service, USFWS BEST, USGS NAWQA, State water
   programs.

   Plankton & Periphyton Assemblages: other sources research institutions,
   State water programs, USGS NAWQA.

   Floral Composition: EPA source EMAP; other sources USFWS BEST, USGS
   NAWQA, States.

   Faunal Composition: EPA source EMAP; other sources USFWS BEST, States.

Legend: each source is marked as data available now but needing improvement,
limited data available now, or no data available now.  Boxed indicators can
be baselined and reported in FY94, either nationally or for certain regions,
specific geographic areas, or specific resource types.  Indicators are ranked
in a hierarchy from 1 (administrative) to 6 (true environmental).]

Office of Water
Environmental Indicators
                                      24
                                                    February 3, 1994

-------
                      PROTECT AND ENHANCE PUBLIC HEALTH

[Indicator chart]

Safe Drinking Water

Indicator: Waters Meet Drinking Water Supply Designated Use
    EPA Data Sources:  305(b), STORET/WBS
    Other Sources:  (none listed)

Indicator: Population Served by PWSs with Wellhead Protection
    EPA Data Sources:  Wellhead Protection Biennial Reports
    Other Sources:  State WHP programs

Indicator: Populations Served by Community Water Supply in Violation
    EPA Data Sources:  FRDS
    Other Sources:  (none listed)

Indicator: Blood Lead Levels in Children
    EPA Data Sources:  (none listed)
    Other Sources:  CDC

Indicator: Disease Outbreaks from Public Water Supplies
    EPA Data Sources:  (none listed)
    Other Sources:  CDC

Safe Aquatic Recreation

Indicator: Waters Meet Swimming and Secondary Contact Designated Uses
    EPA Data Sources:  305(b), STORET/WBS
    Other Sources:  NOAA: NS&T, USFWS: NCBP

Indicator: Beach Closures: Miles Closed and Organism Levels
    EPA Data Sources:  305(b), Regional
    Other Sources:  State health depts., NRDC

Indicator: Disease Outbreaks from Swimming
    EPA Data Sources:  Regional
    Other Sources:  CDC, State health depts.

Safe Fish & Shellfish Consumption

Indicator: Waters Meet Fish and Shellfish Consumption Designated Uses
    EPA Data Sources:  305(b), STORET/WBS
    Other Sources:  (none listed)

Indicator: Fish Advisories
    EPA Data Sources:  305(b), STORET/WBS, EMAP, OST: FAD
    Other Sources:  NOAA: NS&T, USFWS: NCBP, USGS: NAWQA

Indicator: Waters with Fish Contaminant Levels of Concern to Human Health
    EPA Data Sources:  305(b), STORET/WBS, ODES, EMAP, OST: NFTD
    Other Sources:  NOAA: NS&T, USGS: NAWQA, USFWS: NCBP

Indicator: Shellfish Bed Closures
    EPA Data Sources:  305(b), STORET/WBS
    Other Sources:  NOAA: NSR, NOAA: NS&T

Indicator: Disease Outbreaks from Fish and Shellfish Consumption
    EPA Data Sources:  ODES/STORET
    Other Sources:  CDC

Legend:
    • data available now, needs improvement
    > limited data available now
    O no data available now
    Boxed indicators:  We can set baseline and begin to report in FY94,
    either nationally or for certain regions, specific geographic areas, or
    specific resource types.
    Hierarchy of indicators 1-2-3-4-5-6:  1 = Administrative,
    6 = True environmental

Office of Water
Environmental Indicators
                                                                    25
                                                                                February 3, 1994

-------
                          IMPROVE AMBIENT CONDITIONS

[Indicator chart]

Ground Waters Meet Water Quality Objectives

Indicator: Ground Waters Water Quality
    EPA Data Sources:  CSGWPP Biennial Report, OPTS: PGWDB, NPSurvey,
    305(b), STORET, ERAMS
    Other Sources:  WIDB, USGS, USGS: NAWQA

Improved Surface Water Ambient Concentrations of Toxic & Conventional
Pollutants

Indicator: Selected Water Quality Parameters
    EPA Data Sources:  EMAP, BIOS/STORET
    Other Sources:  USGS: NASQAN Stations, USGS: NAWQA National Monitoring
    System Stations

Indicator: Water Quality Standards Attainment
    EPA Data Sources:  305(b), 303(d), 304(l), BIOS/STORET
    Other Sources:  USGS: NAWQA

Extent of Contaminated Sediments Is Reduced

Indicator: Extent of Contaminated Sediments
    EPA Data Sources:  305(b), Superfund, BIOS/STORET, CSSI
    Other Sources:  NOAA: NS&T, USGS: NAWQA

No Net Loss of Wetlands

Indicator: Loss or Gain of Wetland Acreage
    EPA Data Sources:  Regional
    Other Sources:  USFWS: NWI, NOAA: NCWI, USGS: NAWQA

Legend:
    • data available now, needs improvement
    > limited data available now
    O no data available now
    Boxed indicators:  We can set baseline and begin to report in FY94,
    either nationally or for certain regions, specific geographic areas, or
    specific resource types.
    Hierarchy of indicators 1-2-3-4-5-6:  1 = Administrative,
    6 = True environmental

Office of Water
Environmental Indicators
                                                                    26
                                                                                February 3, 1994

-------
                          REDUCE POLLUTANT LOADINGS
     Reduced Toxics Pollutant Loadings * Reduced Conventional Pollutant Loadings

[Indicator chart]

Indicator: Pollutant Loading to Ground Water from Underground Injection Wells
    EPA Data Sources:  TRI, STORET
    Other Sources:  (none listed)

Indicator: Point Source Toxics
    EPA Data Sources:  NPDES Permits, TRI, PCS, Needs Survey, STORET
    Other Sources:  (none listed)

Indicator: Selected Conventional Pollutants: TSS, BOD, Fecal Coliform & Nutrients
    EPA Data Sources:  Needs Survey, PCS, EMAP, STORET, NPDES Permits
    Other Sources:  NOAA: NCPDI, USGS: NAWQA

Indicator: Key Wetweather Conventionals from CSOs
    EPA Data Sources:  Needs Survey, PCS, TRI, NPDES Permits
    Other Sources:  (none listed)

Indicator: Number of State and Local Gov'ts Requiring Treatment of Stormwater
Runoff from Rural, Suburban & Urban Land Uses
    EPA Data Sources:  RCW Program, 319 Program, NPDES Stormwater Permit Program
    Other Sources:  USGS: NAWQA, NOAA: NCPDI

Indicator: Number of BMPs Implemented at State and Local Level
    EPA Data Sources:  RCW Program, 319 Program, NPDES Stormwater Permit Program
    Other Sources:  USGS: NAWQA, NOAA: NCPDI

Indicator: Key Wetweather Conventional Pollutants from Nonpoint Sources and
Stormwater
    EPA Data Sources:  EMAP, RCW Program, 319 Program, NPDES Stormwater
    Permit Program
    Other Sources:  USGS: NAWQA, NOAA: NCPDI, CZM Program

Indicator: Marine Debris
    EPA Data Sources:  EMAP
    Other Sources:  Center for Marine Conservation, NOAA

Legend:
    • data available now, needs improvement
    > limited data available now
    O no data available now
    Boxed indicators:  We can set baseline and begin to report in FY94,
    either nationally or for certain regions, specific geographic areas, or
    specific resource types.
    Hierarchy of indicators 1-2-3-4-5-6:  1 = Administrative,
    6 = True environmental

Office of Water
Environmental Indicators
                                                 27
                                                                                February 3, 1994

-------
                                             Hierarchy of Indicators

[Diagram]

ADMINISTRATIVE INDICATORS
    Level 1:  Actions by EPA/State Regulatory Agencies
    Level 2:  Responses of the Regulated Community
    Level 3:  Changes in Discharge/Emission Quantities

ENVIRONMENTAL INDICATORS
    Level 4:  Changes in Ambient Conditions
    Level 5:  Changes in Uptake and/or Assimilation
    Level 6:  Changes in Health, Ecology, or Other Effects

                                          Preferred Data For Measuring Environmental Results
          Office of Water
          Environmental Indicators
                                                             DRAFT Sept. 20, 1993
                                                              28

-------
                                                           Acronym  List
AWWA        American Water Works Association
BEST        Biomonitoring and Environmental Status and Trends,
            USFWS (Update of NCBP)
BIOS        Biological System Component of STORET, OWOW/OW
CDC         Centers for Disease Control
CSGWPP      Comprehensive State Ground Water Protection Programs
CSSI        Contaminated Sediment Sites Inventory
ELMR        Estuarine Living Marine Resource, NOAA
EMAP        Environmental Monitoring and Assessment Program, ORD
ERAMS       Environmental Radiation Ambient Monitoring System,
            Office of Radiation Programs
FAD         Fish Advisory Data Base, OST/OW
FRDS        Federal Reporting Data System, OGWDW/OW
FSP         Fisheries Statistics Program, NOAA
HWIW        Hazardous Waste Injection Well Database, OGWDW/OW
IBI         Index of Biological Integrity
ITFM        Intergovernmental Task Force on Monitoring Water Quality
LMR         Living Marine Resource, NOAA
MMS         Minerals Management Service
NAWQA       National Water Quality Assessment Program, USGS
NASQAN      National Stream Quality Accounting Network, USGS
NCBP        National Contaminant Biomonitoring Program, USFWS
NCPDI       National Coastal Pollutant Discharge Inventory, NOAA
NCWI        National Coastal Wetlands Inventory, NOAA
NEP         National Estuary Program, OWOW
NFTD        National Fish Tissue Data Base, OST (does not yet exist)
NPDES       National Pollutant Discharge Elimination System, OWEC
NPSurvey    National Pesticide Survey, OPP
NRDC        Natural Resources Defense Council
NRI         National Resources Inventory, SCS/USDA
NSR         National Shellfish Register, NOAA
NS&T        National Status & Trends, NOAA
NWI         National Wetlands Inventory, USFWS
ODES        Ocean Data Evaluation System
PCS         Permit Compliance System, OWEC
PGWDB       Pesticides in Ground Water Data Base, OPP
PWSS        Public Water Supply Systems
RBP         Rapid Bioassessment Protocols, OWOW
STORET      STOrage and RETrieval System, OWOW
TRI         Toxic Chemical Release Inventory System, Office of Toxic
            Substances
WBS         Waterbody System (for 305(b) Reports), OWOW
WIDB        Water Industry Data Base, AWWA
       Office of Water
       Environmental Indicators
                                                                         29
                                                                                                                February 3,1994

-------
(Blank Page)
    738

-------
                                      MR. TELLIARD: David Kimbrough is going to be
speaking on quality control levels, alternatives to detection  limits.  David  is with the
California Environmental Protection Agency, Department of Toxic Substances Control, and
Hazardous Materials Laboratory - Southern California,  and he is with us from southern
California to come out here and enjoy the rain.
           QUALITY CONTROL, AN ALTERNATIVE TO DETECTION LEVELS
                                      MR. KIMBROUGH: The title of my presentation
is  the  Quality  Control  Level, An Alternative to Detection Levels by myself  and my
supervisor, Janice Wakakuwa.  The experimental data that this presentation is based on is
derived from our work in the analysis of soils (1-2), but the principles that will be presented
here are general to all matrices.

      The first part of this presentation will be a critical examination of the Method
Detection Limit (MDL), the official method of the USEPA (3-5).  The basic question the MDL
asks is, what is the lowest concentration of analyte in a sample matrix that is not zero with
99 percent confidence (or, for which a particular reading has 99% confidence of not being a false
positive; using this procedure, an individual reading has a 50% confidence of not being a
false negative).  The theoretical assumptions of the MDL are 1) that you have an
interference-free matrix, 2) that your sample preparation procedure is 100% effective at all
concentrations, and 3) that you have a blank material with zero concentration of the analyte.
There is a considerable body of literature that has examined the theoretical (i.e., statistical)
validity of these assumptions (6-8).  I will leave statistical theory to the statisticians.  Rather,
our goal was to examine the MDL from an empirical perspective.  From an experimental
point of view, there are no matrices or analytical methods that meet all of these assumptions,
and very few meet any, especially in solids analysis.  So right from the start, the
MDL does not seem to be a very sound theory for empirical work.

      The approach we adopted was to apply the MDL to some real analyses, which in our
case was soils, and see if it worked.  Our criterion for assessing whether the MDL "worked"
was to follow the MDL procedure for five analytes on three instruments and determine the
frequency of false positives and false negatives as well as the accuracy and precision of the
results determined.

      It will be useful to review the MDL procedure, which is summarized in Table I.  The
term "MDL" is used a great deal to mean many different things by many different people.
Not only do most people not realize that the MDL has a detailed set of theoretical
assumptions, they also do not realize that it has a very specific empirical procedure for its
calculation.  It is actually a rather complicated procedure.  The first step is to make an
estimated MDL, and there are four different procedures as to how to estimate an MDL.
Depending on which one of these you choose, you will get a different estimated MDL.


                                       739

-------
      Then, having chosen one of these methods, you make an estimated MDL.  Then you
choose or make a material in the matrix of interest with one to five times the concentration of
the estimated MDL of the analyte of interest.  This material is analyzed seven times, and the
standard deviation of the replicate measurements of the analyte is calculated.  The calculated
MDL is equal to the Student's t value, which is about three, times the standard deviation.  You
have an option at the end of reiterating this procedure to validate your choice.  It is
important to note that there is no place for the accuracy of the method in this calculation.  The
MDL is entirely based on a standard deviation, the precision, irrespective of how accurate
the measurement ends up being.
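
      A minimal sketch of the single-pass calculation just described; the replicate readings
below are hypothetical, and the Student's t value of 3.143 (the "about three" mentioned above)
assumes seven replicates, i.e., six degrees of freedom at 99 percent confidence:

    from statistics import stdev

    T_99_6DF = 3.143  # one-sided 99% Student's t for 6 degrees of freedom

    def method_detection_limit(replicates):
        """MDL = t * S for exactly seven replicate results."""
        if len(replicates) != 7:
            raise ValueError("the MDL procedure calls for seven replicates")
        return T_99_6DF * stdev(replicates)

    # Hypothetical thallium readings (mg/kg) near an estimated MDL:
    readings = [46.1, 39.8, 44.0, 41.5, 38.9, 45.2, 42.7]
    print(round(method_detection_limit(readings), 1))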

      It must be emphasized that it can only be calculated on a matrix by matrix, method
by  method,  and  instrument  by instrument  basis.  There  are  very few laboratories that
actually go through this entire procedure. Most of the "MDLs" that one sees on reports are
actually determined on a "representative matrix"  such as Ottawa sand for solid waste
analysis and deionized water for any aqueous sample. These "canned" general purpose
MDLs are at best  Instrument  Detection Levels  (IDLs)  and are not  matrix specific.

      For our study we chose five regulated toxic elements: arsenic, cadmium,
molybdenum, selenium, and thallium, and analyzed them on three different instruments: a
sequential ICP-AES, a simultaneous ICP-AES, and a Flame Atomic Absorption Spectroscopy
(FAAS) instrument.  I am only going to show you one small set of that data, which is representative
of all the results.  Table II shows the results for thallium in soil by simultaneous ICP-AES.

      The first step is to make  an estimated MDL for thallium when analyzed by ICP-AES.
We determined four different estimated MDLs, one for each method  and there is quite a
range of estimates.  We prepared samples in the matrix of interest with concentrations
of one to five (1-5) times each of these estimated MDLs.  The soil
samples had concentrations of 500, 50, and 5 mg/kg of thallium, which covered all of the
estimated MDLs.  These soils were each digested seven times, and the mean values and standard
deviations were calculated.  The calculated MDL was determined for each soil.

      All of these results are shown in Table II.  If you use the 500 mg/kg soil, you get
a calculated MDL of 23.  Using the 50 mg/kg soil, you get a calculated MDL of 11.  Finally,
if you use the 5 mg/kg soil, you cannot get a calculated MDL because you cannot get a signal
at all; all you get are interferences.  Judging from these results, the actual MDL is somewhere
between 11 and 23.  It must be noted that determining MDLs in parallel at three different
concentrations is not required.  Technically, either of these MDLs is acceptable, as the
procedure is complete and the two results are not far apart.

      To  check the usefulness of these calculated MDLs we prepared  a sample with 20
mg/kg thallium in the same soil used for the other samples.  As can be seen, it does not
even give a signal on the ICP-AES.  Both of the calculated MDLs were completely
unrealistic in the soil matrix for ICP-AES.  The MDL procedure gives you an unrealistically
low number because the underlying theoretical assumptions are not met.


                                       740

-------
      Similar results were obtained for arsenic, cadmium, molybdenum, and selenium by
ICP-AES, both simultaneous and sequential.  ICP-AESs of course have no interference-free
wavelengths.  Every wavelength has some sort of interference on it, especially in solids
analysis.  FAAS has fewer  interferences, so it tended to have better results.  However,
arsenic and selenium are very difficult to analyze by FAAS and so long as you are looking
only at  precision and not accuracy, you are going to have problems like this. We  can
conclude that the assumption of an interference free matrix is not met in soil samples for
this analysis.

      These results are from a single laboratory and may represent  the  limitations of the
personnel or instruments of that laboratory (much as we may not like to think so).
In order to determine how generalized a phenomenon this is, an inter-laboratory study was
designed in conjunction with the Environmental Laboratory Accreditation Program in
California, ELAP.  As such, it was decided that this study would take the form of a
performance evaluation (PE) sample study.  We prepared 10 soil samples: five soils spiked
in various combinations with the same regulated toxic elements used in the single-laboratory
study above, and five more soils spiked with PCBs as Aroclor 1260, as shown in Table
III.  As can be seen, for each analyte except arsenic, there is a sample that is unspiked and
has a value significantly less than 1  mg/kg.  These samples were validated  first in-house and
then by  thirty  reference laboratories.  They were then sent out to 200 environmental
laboratories.

      All these laboratories selected were accredited by ELAP to perform these analyses in
solids.  Under California regulations  there  is  separate accreditation for drinking water,
wastewater, and solid waste.  So, all of these laboratories are accredited to analyze for
these compounds in this matrix.

      Figure I summarizes the data we got back for thallium.  Four measures are presented.
It is important to note that the X axis is a logarithmic scale in mg/kg, while the Y axis is not
logarithmic and has units of percent.  The first measure is the percent bias of the mean
result, where percent bias is defined as the measured mean value minus the true value,
divided by the true value, times 100, so it comes out in units of percent.  The second
measure is the inter-laboratory percent relative standard deviation (%RSD), which is the
standard deviation divided by the mean times 100, so the units are also in percent.  The
third measure is the percent quantitative errors, that is, the number of laboratories that were able
to correctly identify the presence or absence of the analyte but assigned a value that was beyond
the control limits for that particular sample, divided by the number of laboratories that turned
in positive results for that sample, times 100.  For the purposes of this study we established
the control limits as ± 50% of the spiked value.  Finally, the percentages of qualitative
errors, false positives and false negatives, are also presented.
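
      A rough sketch of how the first three of these measures might be computed; the reported
values and the 50 mg/kg spike below are hypothetical, and the ±50% control limits follow the
definition above:

    from statistics import mean, stdev

    def percent_bias(results, true_value):
        """(measured mean - true value) / true value * 100."""
        return (mean(results) - true_value) / true_value * 100.0

    def percent_rsd(results):
        """Inter-laboratory standard deviation / mean * 100."""
        return stdev(results) / mean(results) * 100.0

    def percent_quantitative_errors(results, true_value, limit=0.50):
        """Share of positive results falling outside true_value +/- 50%."""
        low, high = true_value * (1 - limit), true_value * (1 + limit)
        positives = [r for r in results if r > 0]
        outside = [r for r in positives if not (low <= r <= high)]
        return len(outside) / len(positives) * 100.0

    # Hypothetical reported values (mg/kg) for a 50 mg/kg thallium spike:
    reported = [48.0, 55.0, 20.0, 51.0, 90.0, 47.5, 44.0]
    print(round(percent_bias(reported, 50.0), 1),
          round(percent_rsd(reported), 1),
          round(percent_quantitative_errors(reported, 50.0), 1))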

      It is important to note that we used a different definition of false positive and negative
than is  normally used.   We  defined  a false positive  as a result above the  MDL  of  the
laboratory when in fact the sample had a concentration less than that. Conversely,  a false


                                        741

-------
negative is a result that is reported as less than the laboratory's MDL when the actual
concentration is larger than that reported MDL.

      The first thing you notice, which should not be a big surprise to anybody, is that at
high concentrations you have relatively few problems.  You get good precision, good
accuracy, and relatively few errors of either the qualitative or the quantitative type.  At
lower concentrations, the percentage of errors increases and both the precision and the
accuracy deteriorate.  It can be seen that accuracy, precision, and the number and
types of errors are concentration-dependent.  The thallium results presented here are typical
of the results for the other analytes.

   What  this study allows  us  to do is compare the claimed detection limits  of the
laboratories  with what they were actually able to perform in a real sample.  Consider a
laboratory with a claimed MDL of 1 mg/kg for selenium.  The laboratory analyzes a sample
with 5 mg/kg selenium in it but reports a value of 2 mg/kg. It is not hard to see that if that
same laboratory were to analyze a sample with 2 mg/kg selenium, it would be reported as
less than  1 mg/kg, a false negative. Such an MDL would be quite meaningless.  Obviously
a similar  problem could occur with positive biases and false positives.

      Figure II, a log/log chart, shows some common spike recovery  curves we got from
a number of laboratories for PCBs. There are other types of spike recovery curves, but these
are the ones we want to look at right now.  If you did an MDL study at 10 mg/kg, you
would come up with a detection limit of around 0.01 mg/kg. So using the assumptions  of
the MDL procedure, you would expect a straight line recovery  as shown in the figure.

      Unfortunately, using real samples, using real  extractions, you  get curves like this.
You  can  have positive interferences. You can have linear range effects.  You can have
extraction inefficiencies. As you can see, if a laboratory generated either one of these other
curves with the circles or the triangles, their MDL would be completely useless.  They
would either be biased high or biased low or have some other problem.  Their reporting of
results would be inaccurate based on their method detection limit study at 10 mg/kg.

      We went through and looked at  all the data  from a total of 177 laboratories that
returned data.  We calculated from results like this that about two-thirds of the reported
MDLs were inaccurate.  Either they were having problems with interferences or with
extraction inefficiency.  This study shows that the linearity assumptions of the MDL
procedure are not realized in the field and that the results obtained from the first single-
laboratory study were not unique to that laboratory.  Further, not only are interferences
usually most significant at lower concentrations, but they are not predictable and must be
determined empirically on a matrix-by-matrix basis.

      One of the questions that these first two studies raised for us was, is the MDL even
asking the right question?  The MDL theoretically can only answer the question of what
number is not zero, without identifying the quality of that number.  Quality assurance and


                                       742

-------
quality improvement have become the rallying cry in the USEPA, which presumably
includes the quality of laboratory results.  The MDL, however, is completely blind to the issues
of data quality.  The precision and accuracy of results near the MDL, the two most important
measures of data quality, are never determined, so the quality of these results is never
known.  Of what use is a laboratory result if all that is known about it is that it is not zero?

      So we decided  to take a closer look  at the relationship between the quality control
parameters of precision and accuracy versus concentration.  Since our main area of work
is with solids and toxic elements, this was the medium in  which we decided to work. We
prepared a series of spiked soils with a range of concentrations from 100 mg/kg to less than
0.5 mg/kg for sixteen toxic regulated elements, as shown in Table IV.  The elements were
silver, arsenic, barium, beryllium, cadmium, cobalt, copper, molybdenum, nickel, lead,
antimony, selenium, thallium, vanadium, and zinc.  As most of you know, almost any soil
will have most of these elements at concentrations in the range described.  So we prepared
an  artificial soil  from reagent  grade  chemicals which  would have  concentrations  of
aluminum,  calcium, magnesium, manganese, and other soil matrix elements in the same
proportions as the soils used in the first two studies.

      Using an acid digestion (Draft Method 3055) which uses 2 grams and a final volume
of 100 ml gives a 50-fold dilution from solid to liquid.  Each of these soils was digested and
analyzed in eight replicates.  You can see if you  divide all the values in mg/kg by 50, you
get the equivalent concentration in mg/L, also shown in Table IV.  For comparison, a series
of aqueous standards was also prepared: a 5 percent nitric acid aqueous
solution over a range of concentrations from 2 mg/L down to 0.001 mg/L, plus a double-
deionized water blank, all of which were also analyzed with eight replicates.

      Figure III shows the results for cobalt which is representative for all of the elements.
This is from a simultaneous ICP-AES, a Jobin-Yvon 50 P.  Each point represents the mean
result from the eight replicates.  The X axis is logarithmic with units of ug/mL (ppm w/w),
showing the results for both the aqueous standards and the acid digestates.  The Y axis has a
linear scale with units of percent for percent bias of the mean result and the %RSD from the
eight replicates.

      Both the digestates and standards have the same pattern.  Over about two orders of
magnitude, you have very reproducible results, very accurate results. All of a sudden, at
around 0.01  ug/mL,  you have  a sudden increase  in  bias and sudden increases  in
imprecision.  The mean values become very inaccurate and very unreproducible.  This is
using baseline correction, so you are correcting for some of  the interference of other
elements.  As you can see, the curves for both the liquid and the solid are very similar.  One
thing we did on both of these graphs was to normalize for the fact that the most negative bias
you can have is -100 percent, while the possible positive bias is unbounded.  So, just for the
purposes of presentation here, we set a maximum bias of 100 percent, positive or negative,
to normalize for this effect.
                                       743

-------
      There is a general relationship between precision, accuracy, and concentration.
There is a range of concentrations over which precision and accuracy are constant.  At some
concentration, both of these parameters deteriorate rapidly.  In Figure III this happens for
cobalt around 0.05 mg/L  (or 2.50  mg/kg)  for  both  the liquid  standards  and the acid
digestates. As it happens the MDL for the aqueous standards is 0.05 mg/L and for the acid
digestates  it  is 0.02 mg/L (1  mg/kg).  Let us  ignore the problems with the MDL already
discussed  and suppose that we can take the MDL at face  value and  say that these two
concentrations have a 99 percent confidence of not being zero.  These results, although not
zero,  have 100 percent bias and 300 %RSD.

      So results near the MDL,  whatever that may be, are going to be  very imprecise and
very inaccurate. One of the questions that comes up when this graph is presented is, how
much of this is an artifact of the way you determine relative standard deviation? After all,
the relative  standard  deviation is  a ratio of the standard deviation to the  mean,  the
numerator to the denominator, and if your denominator is constantly decreasing and your
numerator stays constant, you expect the ratio to increase at  lower concentrations solely as
an artifact of how the  precision  is being measured.

      On Figure IV we have plotted the standard deviation and the %RSD. Let us suppose,
just for the sake of argument, that we really  did have an interference-free matrix and an
instrument that had a linear dynamic range through "zero" and an acid digestion that was
100% efficient at all concentrations.  Then the %RSD would be the standard deviation
divided by the true value.  So for comparison, we created a curve with this artificial %RSD.
As you can see, all three curves look pretty much the same.  However you choose to
measure variance, the same pattern can be seen: a range of concentrations with constant
precision and a range where the precision deteriorates rapidly.  Sometimes there may be an
even lower range of constant precision  due to reproducible interference from the matrix.
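
      A small sketch of the comparison being made here, using hypothetical low-level
replicates: the ordinary %RSD divides the standard deviation by the measured mean, while
the "artificial" %RSD divides it by the true spiked value:

    from statistics import mean, stdev

    def rsd_vs_mean(results):
        """Ordinary %RSD: standard deviation over the measured mean."""
        return stdev(results) / mean(results) * 100.0

    def rsd_vs_true(results, true_value):
        """'Artificial' %RSD: standard deviation over the true (spiked) value."""
        return stdev(results) / true_value * 100.0

    # Hypothetical low-level replicate results, mg/L, for a 0.01 mg/L spike:
    low_level = [0.012, -0.004, 0.021, 0.009, 0.030, -0.011, 0.017, 0.002]
    print(round(rsd_vs_mean(low_level)), round(rsd_vs_true(low_level, 0.01)))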

      It might be tempting to conclude that this was some artifact of elemental  analysis of
solid wastes.  Figure V, however, is derived from data in a 1992 paper by Charles Hertz et al.
of the Philadelphia Suburban Water Company, presented in Montreal at the Water Quality &
Technology Conference (9).  Here you are seeing the same general relationship between
concentration, precision, and accuracy for lead in drinking water.  At high concentrations the
results are highly reproducible and very accurate.  The lower the concentration becomes, the
greater the imprecision and the greater the inaccuracy.

      Now, just in case anyone thinks this pattern is only associated with inorganics, Figure
VI is based on results from a 1991 paper by Yohe and Hertz, presented in Orlando, also at
the Water Quality & Technology Conference (10).  Here you see the concentration
versus precision results for five carbamates. At ten times the MDL, you are getting very
reproducible  numbers. At the MDL of all five  of these  carbamates,  the results are not
reproducible.  Similar  results were obtained by Dr. William Horwitz of the Food & Drug
Administration from inter-laboratory studies of food residues (11).
                                       744

-------
      Why do we analyze environmental samples? We want to learn enough about some
environmental situation so that useful decisions can be made, either to leave a situation
alone or to improve it. In order to do this we need to have a certain level of confidence
in the quality of the data.  Different situations will require different levels of quality. This
is, I suppose, what is meant by data quality objectives.

      Two of the most important measures of data quality are precision and accuracy, so
for any given situation the acceptable percentages of bias and %RSD will vary. What is the
highest level of bias that will give you the confidence that you need to make a decision?
What is the largest %RSD that tells you what you want to know?  If you are a data user and
your results have a 100 percent bias and 300 %RSD, of what use is that result to you? There
may be situations where that is useful, but not many.  In most situations, you are going to
want to know what is the lowest precision and lowest accuracy that meets your data quality
objectives.

      What I am really arguing for here is that, instead of looking at MDLs or other
statistical measures, we should determine the lowest concentration, in your matrix and by
your method, that meets your data quality objectives.  If 50 percent bias and 50 %RSD
is acceptable to you, then find the lowest concentration that gives you 50 percent bias and
RSD or less, and you will know that any higher concentration will have lower levels of both.
This concentration is what we call the Quality Control Level (the QCL).

      How would this be done on a routine basis?  The first step would be to determine the
instrument QCL (IQCL), which should be constant barring instrument deterioration.  In this
case, the IQCL for cobalt in the aqueous standard with baseline correction is about 0.05
mg/L.  From the IQCL, a good estimate of the method QCL (MQCL) would be 2.5 mg/kg, or
0.05 mg/L times 100 mL divided by 2 grams to correct for the dilution of the acid digestion.
Then you select the sample for which you wish to know the MQCL, spike in enough cobalt
to make a 2.5 mg/kg concentration, analyze that soil at least three times (although seven
times would be best), and determine the precision and accuracy at that concentration.  As
it turns out, the % bias is 4 and the %RSD is 15.  If this is too high, the process would be
repeated at higher concentrations until the quality control is acceptable.  Likewise, if 2.5
mg/kg is not a low enough concentration for other reasons, the process can be repeated at
still lower concentrations.
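
      A hedged sketch of this workflow; the function names and the replicate values are
illustrative only, not the authors' code.  It scales the instrument QCL by the 100 mL / 2 g
dilution of the acid digestion and then checks replicate results against bias and %RSD
objectives:

    from statistics import mean, stdev

    def estimate_mqcl(iqcl_mg_per_l, sample_g=2.0, final_volume_ml=100.0):
        """Scale the instrument QCL (mg/L) to the solid matrix (mg/kg)."""
        return iqcl_mg_per_l * final_volume_ml / sample_g

    def meets_dqo(results, spike_value, max_bias_pct=50.0, max_rsd_pct=50.0):
        """True if replicate results at the spike level satisfy the objectives."""
        bias = abs(mean(results) - spike_value) / spike_value * 100.0
        rsd = stdev(results) / mean(results) * 100.0
        return bias <= max_bias_pct and rsd <= max_rsd_pct

    print(estimate_mqcl(0.05))                       # 2.5 mg/kg, as in the cobalt example
    print(meets_dqo([2.6, 2.4, 2.5, 2.7, 2.3], 2.5)) # hypothetical replicate results

Repeating the check at higher or lower spike levels mirrors the iteration described above.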

      I would argue that for values less than the QCL, the results should read, "Analyte not
present in concentrations greater than QCL".  If a positive value is measured but it is less
than the QCL then it should either read the same as above  or "Analyte detected but with
unknown precision and accuracy".   It would also be very useful to know  if this less than
QCL determination could be confirmed by another method.
                                       745

-------
Literature Cited

1.   Kimbrough,  D.E. and Wakakuwa, J.R.; "A Study of Method Detection Limits  in
      Solid Waste Analysis"  Environmental Science and  Technology,  27, 1993, 2692 -
      2699.

2.   Kimbrough,  D.E. and Wakakuwa, J.R.;  "Quality Control Level: An Alternative  to
      Method Detection Levels" Environmental Science and Technology, 28, 1994, 338 -
      345.

3.    Glaser, J.A., Foerst, D.L., McKee, G.D., Quave, S.A., and Budde, W.L.,
      Environmental Science & Technology, 1981, 15, 1426 - 1435, December

4.    Appendix A, July 1982, to Methods for Chemical Analysis of Wastewater,
      EMSL-Cincinnati, USEPA, June 1982

5.    Appendix B to Part 136, 40 CFR, October 26, 1984, Federal Register Vol. 49,
      No. 209, Pg 198 - 204

6.  Gibbons,  R.D., Taylor, W.,  Jarke,  F.H., and Stoub, K.P.,  "Method Detection  Limits",
      Proceedings of  Fifth Annual USEPA  Symposium  on  Waste Testing &  Quality
      Assurance, July 1989.  USPO, Washington, D.C.

7.   Clayton, C.A., Hines, J.W., and Elkins, P.D., "Detection Limits with Specified
      Assurance Probabilities", 1987, Analytical Chemistry, 59, 2506-2514, October

8.   Keith, L.H., and Lewis, D.L., "Revised Concepts for Reporting Data Near Method
      Detection Levels", Proceedings of 203rd Meeting of the American Chemical Society,
      Committee on Environmental Improvement, San Francisco, June 1992

9.  C.D. Hertz, J. Brodovsky, L. Marrollo, R.E. Harper,
      "Minimum Reporting Levels Based on Precision and Accuracy for Inorganic
      Parameters in Water"; The Proceedings of the Water Quality & Technology
      Conference 1992, Toronto, Ontario, Canada

10. T.L. Yohe, C.D. Hertz, "Importance of PQLs in the Development of MCLs: A Water
      Utility Perspective"; The Proceedings of the Water Quality & Technology Conference
      1991, Orlando, Florida, USA

11. Horwitz,  W.,  Kamps, L.R.,  Boyer,  K.W.; "Quality  Assurance in the Analysis  of
      Foods  for Trace Constituents";  Journal of the Association of Official  Analytical
      Chemists, 63,  1980, 1344 - 1354
                                      746

-------
                       QUESTION AND ANSWER SESSION
                                     MR. MADELONE: Ray Madelone, TRW. The data
that you showed on your study that you ran in California, how was that pooled? Was that
a single operator within the laboratory pooled, or was it all the laboratories?

                                     MR.  KIMBROUGH:  It was all the  laboratories
using all the instruments.

                                     MR. MADELONE:  But did you use it as a single
operator pooled, or did you take all the data at separate data points and pool it as  an
inter-laboratory?

                                     MR. KIMBROUGH:  As an inter-laboratory. We
have an intra- laboratory,  the very first one  I showed, and then the second one was
inter-laboratory, and the third one is intra again.

                                     MR. STANKO: George Stanko, Shell Development
Company.   Could you go to the slide before this last one?

                                     MR. KIMBROUGH: Sure. This is the carbamates
from Charles Hertz' paper.

                                     MR. STANKO:  If there was ever a slide that made
my day, that is the one.   Industry has had the gospel  all  along that we should not  be
regulated nor measured at the MDL level. Your data says ten times MDL is the level where
the precision and accuracy is acceptable.

That equals PQL.  I have been sounding like John the Baptist, a voice crying out in
a desert.  I concur with your observations,  and I appreciate your paper.

                                     MR. KIMBROUGH: Well, thank you very much.
Although I would argue not necessarily assigning ten as being the magic number. That is
what it  is  here, but there  should...and I would not even argue  that it should be some
multiple of MDL but that it should be done from the other end in terms of precision and
accuracy, but yes,  I understand what you are saying.

      More questions? (No response.)

                                     MR. TELLIARD: Thanks, David.

                                     MR. KIMBROUGH:  Sure.
                                      747

-------
                              Table I
1) ESTIMATE THE MDL:
       a) The concentration that corresponds to an instrument signal to noise ratio
         of 2.5 to 5.
       b) The concentration value that corresponds to three times the standard
         deviation of replicate instrumental measurements for the analyte in
         reagent water.

       c) The concentration value that corresponds to the region where there is a
         significant change in sensitivity at low analyte concentrations.
       d) The concentration value that corresponds to the known instrument
         limitations.

2) PREPARE A SAMPLE WITH 1 TO 5 TIMES (BUT NOT MORE THAN 10 TIMES)
   THE ESTIMATED MDL

3) ANALYZE THE SAMPLE SEVEN TIMES

4) CALCULATE THE STANDARD DEVIATION (S)

5) CALCULATE THE MDL BY USING THIS EQUATION
                             MDL = t * S

6) REPEAT STEPS 2 - 5 USING THE CALCULATED MDL
* IF S1 IS 3.05 TIMES GREATER THAN S2, START AGAIN.
* IF S1 IS LESS THAN 3.05 TIMES S2, POOL THE RESULTS.
                S(pooled) = [(6*Sa^2 + 6*Sb^2) / 12]^(1/2)
                       MDL = 2.681 * S(pooled)
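
      A minimal sketch of the optional iteration in steps 5 and 6; the replicate values below
are hypothetical, and the 3.05 limit is assumed to apply to the ratio of the variances, as in the
Part 136 Appendix B procedure cited in the references:

    from statistics import stdev

    T_99_12DF = 2.681   # Student's t for the pooled, 12-degree-of-freedom case
    F_LIMIT = 3.05      # variance-ratio limit for pooling

    def iterated_mdl(first_seven, second_seven):
        """Pool two seven-replicate runs if their variances agree within 3.05."""
        s1, s2 = stdev(first_seven), stdev(second_seven)
        if max(s1, s2) ** 2 / min(s1, s2) ** 2 >= F_LIMIT:
            return None  # variances disagree: start the procedure again
        s_pooled = ((6 * s1 ** 2 + 6 * s2 ** 2) / 12) ** 0.5
        return T_99_12DF * s_pooled

    # Hypothetical replicate results from the initial and repeated determinations:
    print(iterated_mdl([5.1, 4.7, 5.6, 4.9, 5.3, 5.0, 4.8],
                       [5.4, 4.6, 5.2, 5.0, 4.9, 5.5, 4.7]))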
                               748

-------
                        Table II

The MDL for Thallium in ug/g for Simultaneous ICP-AES
                 STEP 1: ESTIMATE THE MDL
               Procedure            Estimated MDL
               a)                   120
               b)                   110
               c)                   5.0
               d)                   5.0

                STEPS 2-5: CALCULATE THE MDL

     Spiked Value    Mean Value    Standard Deviation    Calculated MDL
     500             452           7.4                   23
     50              42            3.6                   11
     5               <1            0                     NONE

                       REALITY CHECK

     Spiked Value    Mean Value    Standard Deviation    Calculated MDL
     20              <1            0                     NONE
                          749

-------
                               Table III

 Spike Concentrations of Analytes in Performance Evaluation Samples in ug/g

      Sample ID     A           B          C         D           E
      Arsenic       4,000       500        55        10          5
      Cadmium       500         50         5         -           5,000
      Molybdenum    30          5          -         5,000       500
      Selenium      5           -          5,000     500         50
      Thallium      -           4,400      500       50          5

      Sample ID     F           G          H         I           J
      PCBs          100         10         1.0       0.1         <0.01
                                 750

-------
                       Table IV

                     STUDY DESIGN
LIQUID CONCENTRATION    SOLID CONCENTRATION
        mg/L                              mg/kg
        2.00                                100
        1.50                                75
        1.00                                50
        0.50                                25
        0.20                                10
        0.15                                7.5
        0.10                                5.0
        0.05                                2.5
        0.02                                1.0
        0.015                              0.75
        0.010                              0.50
        0.005                              0.25
        0.002                              0.10
        0.001                               0.05
        <0.001                             <0.05
                          751

-------
                      FIGURE I

     INTERLABORATORY BIAS & PRECISION FOR THALLIUM

[Line chart: percent (Y axis) versus concentration in mg/kg (X axis,
logarithmic, 10 to 4,400) for four measures: percent false negatives,
percent quantitative errors, mean percent bias, and percent RSD.]
                        752

-------
                       FIGURE II

          Common Measured Recovery Curves

[Log/log chart: measured recovery versus spiked amount of Aroclor
(X axis, logarithmic, from <0.01 upward).]
                    753

-------

                            FIGURE III

                  BIAS & PRECISION FOR COBALT
                    Using Baseline Correction

[Chart: percent bias and percent RSD (Y axis) versus concentration in ug/mL
(X axis, logarithmic, from unspiked through 0.001, 0.01, 0.1, ...) for four
series: liquid % bias, liquid %RSD, digestate % bias, digestate %RSD.]
                            754

-------
                       FIGURE IV

   PRECISION AND ACCURACY FOR LEAD IN DRINKING WATER

                    Hertz et al. 1992

[Chart: % bias and % RSD (Y axis, percent) versus lead concentration in ug/L
(X axis, logarithmic, 0.3 to 50).]
                     755

-------
                       FIGURE V

      VARIANCE FOR CARBAMATES IN DRINKING WATER

                 Yohe and Hertz 1991

[Bar chart: %RSD (Y axis, 0 to 50) at concentrations of 0, 1, 5, and 10
times the MDL (X axis) for five carbamates: aldicarb sulfoxide, aldicarb,
aldicarb sulfone, carbofuran, and oxamyl.]
          756

-------
                                     MR. TELLIARD:   Our next speaker is Dr. Paul
Berthouex who is a Professor in the Department of Civil and Environmental Engineering at
the University of Wisconsin.

      Mac is  back.  Mac has been here  before and  is going to talk to us today about
reporting and interpreting data near the limits of detection.  It sounds like we have a roll
going on here.

      Mac?
                                       757

-------
(Blank Page)
    758

-------
        REPORTING

            AND

   INTERPRETING DATA

           NEAR

THE LIMIT OF DETECTION
          P. M. Berthouex
Department of Civil and Environmental Engineering
    The University of Wisconsin-Madison
              759

-------
      I believe most of you here are chemists. I am an engineer and I am going to view this
problem as an engineer, which is a little different from the view of many chemists.

      The limit of detection is a very shaky number on which to base any kind of important
decision. However, the concept has been around a long time and is widely accepted. It seems
that chemists like this concept, or at least accept it willingly. I have never quite understood why.
If I were expertly operating a tremendously expensive piece of equipment, I would not be happy
if somebody declared most of the numbers being produced as rubbish and said they should be
discarded and treated as if they are unknown.

      The process of disregarding certain measurements and recording the data values as
"unknown" is called data censoring. Censored data sets are a shaky basis for making important
decisions. Statisticians would prefer not to see data censored, but it must be admitted that
censored data sets provide them with a good deal of work. A little while ago you heard about
some complicated calculations that can be done on censored data sets. What those calculations
amount to is trying to replace values that existed at one time but were thrown away. These
calculations do not replace the lost information.

      Engineers tend to agree with the statisticians. We would prefer to see a number.


THE MDL, ML, AND REGULATORY DECISIONS

      We need to make a distinction between bias (sometimes called accuracy) and precision.
The method limit of detection (MDL) deals only with precision. It gives no information about
bias. Often bias in measurements, including measurements on trace quantities, is more of a
problem than poor precision.

      Figure 1 shows the hypothetical distribution of differences between a control blank and
some reference sample. If one makes seven replicate measurements (the minimum recommended
by the USEPA), the MDL will be equal to 3.14 times the standard deviation of the replicate
specimens. The MDL is supposed to be the minimum concentration of a substance that can be
measured and reported with 99 percent confidence that the analyte concentration is greater than
zero.
                                         760

-------
      The latest proposal is to use the minimum limit (ML) for purposes of judging compliance
with water quality and effluent standards. The ML is 3.18 times the MDL, or about 10 times the
standard deviation used to estimate the MDL.
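
      As a brief illustration (the readings below are hypothetical), the two quantities relate to the
replicate standard deviation as follows:

    from statistics import stdev

    def mdl_and_ml(seven_replicates):
        """MDL = 3.14 * s for seven replicates; ML = 3.18 * MDL (about 10 * s)."""
        s = stdev(seven_replicates)
        mdl = 3.14 * s
        return mdl, 3.18 * mdl

    # Hypothetical blank-corrected lead readings, ug/L:
    print(mdl_and_ml([1.2, -0.4, 0.8, 1.9, 0.1, 1.5, 0.6]))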



      Since both the MDL and ML are estimated from the standard deviation of measurements
on replicate specimens, both depend on the operational definition of this standard deviation.
There are different definitions and different ways to estimate the standard deviation, and the
MDL (or ML) obtained will depend on the number of replicates, the analyte concentration of the
test specimens, the background sample matrix, and perhaps many other factors. When all is said
and done, we don't know that we have established 99 percent confidence in anything. The
chance of the true analyte concentration being greater than zero might be 90 or 99.9 percent. In
short, our ability to estimate the standard deviation of a blank sample is so weak that we do not
know what the confidence level really is.

      Wisconsin has fifteen water quality limits that are currently set below the MDL of our
analytical procedures. One of the first proposals in Wisconsin was to call a discharger in
compliance so long as the analyte was "not detected" at the MDL and to declare non-compliance
on any occasion when the analyte was detected. That would have been an extremely unfair
policy. It guarantees that every discharger eventually will be found in violation, even those
dischargers whose effluent may truly be blank. The only question is how long your luck will
hold until the inevitable mathematical laws of probability wrongly declare an innocent man
guilty.

      The proposal to use the ML to judge compliance has some tremendously appealing
features as an administrative structure. All measured values below the ML are to be treated as
zeros for the purposes of calculating averages and judging compliance. This does give relief to
permittees who seek to avoid being falsely accused of violations. It also accomplishes the EPA's
stated second objective of providing a great deal of certainty to the regulatory agency that a
violation has indeed occurred when a measurement above the ML is reported.


SCIENTIFIC PROBLEMS WITH THE MDL AND ML

      Since many of us are engaged in investigating scientific problems rather than in making





                                          761

-------
regulatory decisions, it is worth reviewing some scientific problems with the MDL.

      Our business is trying to learn the truth about waste treatment systems. Is performance
getting better or worse? Have our interventions in the process been effective? What are the
trends? What is the level of performance? To know only that we are above or below a numerical
specification like the MDL or ML does not help in making these kinds of decisions. We need
numbers. Numbers that are useful for judging these kinds of questions may be attractive for
making regulatory decisions. The corollary is that disregarding certain values in one decision-
making setting does not require us to disregard them in all other settings. The problem with
censoring at the point of data generation (the laboratory) is that the decision maker is prohibited
from deciding the utility of the data for his specific purpose.

      The "detection limit" is a misnomer. What does it limit? It addresses only the probability
of false positives, and not false negatives. If an analyte is not detected, it does not mean that the
analyte is absent. It may just be hidden in the sample matrix. Not detected does not even mean
that the true concentration is below the MDL.

      The MDL is an elusive and fuzzy value to estimate and we cannot estimate it very
precisely. Its value may depend more on the statistical definition and the operational procedure
used to estimate the standard deviation than it depends on the intrinsic properties of the
analytical method.

      The MDL considers only measurement precision. Bias may be a more important
measurement problem.

      With all these weaknesses, why has the use of the MDL survived so long? (There were
papers published on the MDL at least as far back as 1968.) A common answer is that we want to
avoid reporting a positive concentration for a specimen that may be blank. But, is this really a
problem? As a scientist, have you ever expected that a specimen truly would be blank? Isn't the
MDL based on a statistical hypothesis that few of us really believe? Many hypotheses we
construct in statistics are like that. We hypothesize, for example, that the mean levels of two
procedures are the same and we set up a t-test to examine the hypothesis even while knowing in
our hearts that the two things are not the same. What we may be prepared to believe is that the
difference between the two
                                         762

-------
procedures is small enough to have no practical importance, just as we may be prepared to believe
that the concentration of an analyte is so low that we are indifferent to its presence or absence.
      The MDL is determined from replicate analyses, but it is intended to be applied to a single
routine measurement. Engineers object, as I think most scientists should, to having any important
decision made on the basis of a single measurement. Standards based on such measurements are
flawed and serve no proper scientific purpose. Such judgments should be based on a collection of
measurements, on trends and levels over a period of time.

RESULTS OF SOME LEAD MEASUREMENTS ON WASTEWATER
     We did a study in order to get some data on measurements at and below the MDL.  We chose
to do the study on lead because we had friends in laboratories who could measure lead without too
much extra trouble.  We made fifty test specimens for each participating laboratory.
      These were prepared by filtering effluent from an activated sludge treatment plant that
routinely produced 5-day BOD below 10 mg/L and received virtually no industrial wastewater. A
large volume of this background matrix was subdivided to give five bulk portions of identical
matrix. One subsample was the unspiked background matrix; four of the large subsamples were
spiked with 1.25 µg/L, 2.5 µg/L, 5 µg/L, and 10 µg/L. The spikes were in addition to the
background concentration of lead in the effluent.
      These test levels were determined after asking each participating laboratory their MDL for
lead. Most of them told us their MDL was 5 µg/L; all were between 2.5 µg/L and 10 µg/L. In
order to be sure of having a lot of measurements at or below the labs' stated MDLs, most test
specimens were at the 1.25 and 2.5 µg/L spike concentrations, and some were unspiked. The
laboratories were not told that there were five different lead levels, or that the highest spike
concentration was 10 µg/L. They were told the matrix was activated sludge effluent and that the
lead concentrations were "low".
      The labs were told to report a numerical value for each test specimen. We did not want any
results reported as "not detected" or "below the limit of detection." All the labs satisfied this request
but one. Fortunately this lab still had the raw instrument readings in computer files and could later

                                          763

-------
provide the wanted numbers.  Ironically this laboratory, which was originally willing to discard
about 85 percent of their measurements, turned out to perform the best.
     Figure 2 shows the data obtained from two of the laboratories.  The x-axis is the amount of
lead spike that was added and the y-axis is the measured concentration. The solid line is the true
concentration for a background matrix that contained zero lead (the concentration in this matrix was
non-zero). These laboratories had told us their MDL was 5 µg/L. The true lead concentrations in
the unspiked, 1.25, and 2.5 µg/L spikes were below the level these labs thought was their MDL.
In their normal course of work, these laboratories would have disregarded all measured values that
were below 5 µg/L.
     It is my opinion that the measurements on these specimens contain much useful information.
There is good consistency and the precision (variation) is good, even at these low levels. It would
be wasteful not to use these data. Censoring at the MDL would distort, rather than clarify, our
knowledge about the set of test specimens.
     We need to be careful how we think and talk about precision. I do not like to think about
precision as a percentage (i.e., as a relative standard deviation); I prefer to view precision as an
interval on the original metric scale of the measurement process. Viewed this way, the precision
(the spread or variation) is about the same at all five levels of lead. Precision has not deteriorated
with a decrease in concentration even down to the unspiked (background) concentration.
     On the other hand, it is true that the relative standard deviation (RSD) of the low
concentration specimens is larger than the RSD at the higher concentrations. But note that this is
entirely due to the change in concentration level and not because the absolute measurement errors
have increased.
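     (A small numerical sketch, with assumed numbers rather than the study data, of the distinction
being drawn: if the absolute standard deviation stays near 0.5 µg/L at every level, the RSD grows
as the concentration falls even though the measurement process has not become any less precise.)

    # Hypothetical illustration: constant absolute SD, rising RSD at low levels.
    levels = [1.0, 2.25, 3.5, 6.0, 11.0]   # illustrative concentrations, ug/L
    sd = 0.5                               # assumed constant absolute SD, ug/L

    for level in levels:
        rsd_percent = 100.0 * sd / level
        print(f"level {level:5.2f} ug/L   SD {sd:.2f} ug/L   RSD {rsd_percent:5.1f} %")
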
     Figure 3 compares the low level measurements from six laboratories (three municipal
wastewater treatment plants, two commercial, and one state lab). The true lead concentrations of all
the represented test specimens were less than 5 µg/L. The values measured on unspiked
specimens are the open boxes, the 1.25 µg/L spikes are the solid boxes, and the 2.5 µg/L spikes
are the open circles. You will note differences between the laboratories, but the differences are
notably in respect to measurement bias and not to precision. The range of variation is remarkably
                                          764

-------
similar at the three levels for all six laboratories. The differences in average concentration between
the three concentration levels are also remarkably consistent at just about the magnitude of the true
differences between the added spikes. I find this consistency impressive in view of the fact that
these laboratories ordinarily would have censored a great deal of useful data. Figure 4 is another
way to look at the data. It shows that there is consistency when the data are considered
collectively.

     The MDL for each laboratory could be estimated using the values measured on the unspiked
specimens, or on the 1.25 or 2.5 µg/L spiked specimens, or on these results pooled together. Any
of the estimated MDLs would be legitimate under the EPA definition and procedure. Depending on
the choice that is made, we get estimates of the MDL ranging from 0.4 µg/L up to about 6 µg/L.
(Note that these MDL results are independent of any bias in the measurements because the MDL
reflects only measurement precision.)  This is quite a range of MDL values. We do not know
which value in this range to pick. This is why I agree with the previous speaker that the MDL is an
imprecise number, perhaps so imprecise as to have little scientific merit. The notion that we can
determine an MDL that gives 99 percent certainty of making correct decisions regarding presence
or absence of an analyte is clearly not well supported by real data.
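     (For readers who want to see the arithmetic behind such estimates, the sketch below follows
the usual 40 CFR Part 136, Appendix B form of the calculation, MDL = t(n-1, 0.99) x s, applied to
replicate low-level measurements. The replicate values shown are hypothetical, not the study data.)

    # Sketch of an Appendix B style MDL calculation from replicate measurements.
    # Replicate values are hypothetical; t values are one-sided 99th percentile
    # Student's t for (n - 1) degrees of freedom, for 7 to 10 replicates.
    from statistics import stdev

    T99 = {7: 3.143, 8: 2.998, 9: 2.896, 10: 2.821}

    def mdl(replicates):
        """MDL = t(n-1, 0.99) * s, where s is the SD of the replicates."""
        return T99[len(replicates)] * stdev(replicates)

    unspiked = [0.9, 1.4, 1.1, 1.6, 1.2, 1.0, 1.5]       # hypothetical, ug/L
    spiked_1_25 = [2.1, 2.9, 2.4, 2.6, 2.2, 3.0, 2.5]    # hypothetical, ug/L

    print(f"MDL from unspiked replicates:   {mdl(unspiked):.2f} ug/L")
    print(f"MDL from 1.25 ug/L replicates:  {mdl(spiked_1_25):.2f} ug/L")
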







COMMENTS



     What should be concluded about the limit of detection? Is it helpful?  In this particular case
of lead in wastewater effluent, much useful data would have been thrown away if the measurements
were censored at some MDL. Using a laboratory's a priori MDL would have discarded about 85
percent of the measurements, with the remaining values including the most suspicious
measurements and large outliers.

     To illustrate how severe this would have been, I have used the data from laboratories E and F
at the unspiked, 1.25, and 2.5 µg/L spike levels to construct the series of 100 values shown in
Figure 5. Imagine this is a record of effluent quality for a treatment plant.  Censoring the data at 5
µg/L gives the picture shown in Figure 6.  Eighty percent of the values are disregarded. Now,
                                          765

-------
suppose this censored data record were given to a discharger or statistician who wanted to figure
out what to do about the data hidden under the grey bar. No matter what they do, they are not
going to get the right answer.
     This censored data presentation gives a sadly distorted view of effluent quality. You are left
with the impression that effluent quality was not very good, when in fact it was almost always at a
low level, about 2 µg/L. This will only be apparent if the chemist reports all the values measured.
As an engineer who designs and evaluates the efficiency of treatment plants, this is the impression
I want to convey because this is the impression that is relevant to judging the efficiency of the
process.
     Let us rearrange the data and look at it in a slightly different way.  Suppose that I had a
treatment plant or an industrial discharge, and that from time to time I made improvements to it. I
have reordered the data to construct Figure 6, which will represent these imaginary improvements
of the imaginary process. (The solid line in Figure 6 is the moving average.) At the beginning of
the record are samples that were spiked with 5 µg/L, followed sequentially by those with spikes at
lower levels. If the data were censored at 10 µg/L (one possible value for the ML), or at 5 µg/L (a
possible MDL), the substantial improvements would not be revealed. All the discharger's good
work is hidden.  This should not be done.
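     (As a rough illustration of this reordered-record idea, the sketch below uses hypothetical
numbers, not the constructed series of Figure 6, to compute a simple moving average and then
censor the same record at 5 µg/L; the apparent improvement disappears from the censored version.)

    # Hypothetical declining effluent record, its moving average, and the same
    # record censored at a 5 ug/L cutoff (values below the cutoff become "<5").
    record = [9.2, 8.1, 7.6, 6.4, 5.8, 4.9, 4.1, 3.2, 2.6, 2.1, 1.9, 1.7]  # ug/L

    def moving_average(values, window=3):
        return [sum(values[i:i + window]) / window
                for i in range(len(values) - window + 1)]

    censored = ["<5" if v < 5.0 else v for v in record]

    print("moving average:", [round(v, 2) for v in moving_average(record)])
    print("censored record:", censored)
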

SUMMARY
     In summary, analyses at and below the limit of detection can produce useful numerical data.
It is not necessarily true that measurements at low concentrations are less precise than those at high
concentrations. If, however, such properties exist in the data, they should be handled statistically
by the data user.  They should not be handled by censoring the data at the point of data generation.
Chemists should be encouraged to keep and report all measured numerical values. The ultimate data
user can make the appropriate manipulations to account for differences in precision, or to compute
statistics that are used for administrative purposes.
     The detection limit really is a troublesome invention. It causes a lot of problems, but I do not
see that it solves many.  It is a statistical (not a chemical) concept based on a hypothesis that many
                                         766

-------
of us do not believe. The MDL itself is a statistic (i.e. a value estimated from data) that cannot be
estimated precisely. It is defined only in terms of precision of measurements at low concentration.
Bias in these measurements is likely to be a more important problem.
     The Method Limit (ML) is a beneficial administrative tool. It will accomplish its stated
objectives. But, as a scientific tool it has all of the disadvantages of the MDL, some of which are
exaggerated when we are trying to generate and interpret data for the purpose of judging trends,
interventions, and process efficiencies. In fact, applying an MDL or ML at the point of data
generation may make it impossible to do these things.
     A final reason for not censoring data is that chemists may be better than they admit and are
producing a lot of numbers that do not deserve to be thrown away.
                                          767

-------
                       QUESTION AND ANSWER SESSION
                                     MR. AUSES: Jay Auses from Alcoa. It is not really
a question; it is just a comment.  Paul, I am a chemist, but I disagree with you on one part,
and I agree on the other. My disagreement is in that chemists, as a whole, agree with and
accept the MDL.

      I do not think that is necessarily the case in all cases.  I do agree with you, on the
other hand, that most of what you presented are very valid issues that need to be dealt with,
and thank you.

                                     MR. BERTHOUEX:  Thank you.

                                     MS. KNOX: I am Robin Knox with Geraghty and
Miller. I would like to make a comment about a similar type of approach where data below
MDLs could be useful.  You gave the example of looking at a wastewater treatment process.
It is also useful when you are  looking at natural processes in streams.

       It is very difficult when you are trying to develop permit limits that address the
differences between dissolved and total metals concentrations when most of your data is
getting discarded because it is below the detection limit. I think, in these cases, that data
that is being obliterated by the way the laboratories  are reporting could be very useful to
the permittees and the agencies.

      I think,  you know,  that the reason the MDLs are so important and the labs are
hesitant to report that on a lab report is because of all the legal concerns and the fact that
all these permits are in administrative processes where, you know, a  lot of questions are
asked about the validity of the data.

      That is something the agency has a role in doing, is to allow data to be discussed and
used in determining what is going on in natural processes without turning around and using
it against the permittee that collected it for compliance purposes.

      So, I think those are some  very valid things, and looking at that data could help solve
a lot of the real world  problems.

      Thank you.

                                     MS. ASHCRAFT: Merrill Ashcraft from the Navy
Public Works Center.  I have a comment also.  I want you to know that we are one of the
labs that do not throw  out that data. We had some problems initially,  and we did go back
to reporting non-detect, because our customers were so confused when we reported the real
numbers.

                                      768

-------
      I  wanted to report the  real numbers to give that valuable information  to  our
customers to use as a statistical data base. I reported also our method detection limit,  and
they were just totally confused.

      So, I resorted to putting less than,  but the actual data is in our data base and can be
gotten out.

                                      MS. ROMNEY:  I just wanted to make one point.
To address the issue of not having actual  data for trend analysis, we have recommended in
the draft document that in the comment part of the DMR (discharge monitoring report)  you
at least list or note the number of non-detect/non-quantifiable data that you actually have.

      We do not ask that you record all the  non-detect/non-quantified values, but we do
try to account for this data by keeping a record of the number of non-detects/non-quantifiables
that were observed.  The fact that you have a record of the non-detect/non-quantifiable data
allows you to explain the basis  for the trend  analysis.

                                      MR. TELLIARD:  Who said so? You did not tell
us who you were.

                                      MS. ROMNEY:  Jackie Romney from EPA.

                                      MR. TELLIARD: Those people over there are going
to get you, Jackie.  Thank you.

                                      MS. ROMNEY:  I really agree with what you are
saying in terms of censoring data. I do not think a lab should ever censor data.  It should
provide the data to the user with the proper information in terms  of the detection  limit.

      I  have a  little concern with some  of the statements  you  made about the data,
particularly at the zero, 1.25,  and 2.5.  On one of your charts, you showed the variability
associated with each one of  those data  points in a real sense in terms of  the standard
deviations, and earlier, you stated that  your interest as an  engineer was  to determine
whether one process was better than another process.

      Typically, how you do that is you do a comparison of means  based on the variability.
Using the data that you have there, if you try to  determine that 1.25 is better than 2.5, I do
not think it would pass the standard t-test.

      So, in fact...

                                      MR. BERTHOUEX: Oh,  it will.
                                       769

-------
                                     MS. ROMNEY:  The trends that you are pointing
out there may not be trends that you can prove in any statistical or even engineering sense.

                                     MR. BERTHOUEX:  I have done the statistics on
it, and I can assure you that lab after lab, the difference between the levels at 1.25 and 2.5
are different. Furthermore, that difference, which should be 1.25, is, within statistical limits,
1.25.

                                     MS. ROMNEY: But your data showed that the...at
least the  range of data points that you had there were on the order of almost  2  plus or
minus 1 it looked like.

                                     MR. BERTHOUEX:  That is right, but when you
average them together and do the t-test, it is very clearly different.

                                     MS. ROMNEY:  All right.

                                     MR. TELLIARD:  I would like to cut it off  now.
Could you talk to Dr. Mac at the break?  I am trying to get people  out of here on time so
they do not miss airplanes.

      If you could take a 10-minute break so we can kind of get back on schedule so folks
can get out of here on time, we  would appreciate it.

      Thank you very much, Dr. Mac.


      (A brief recess was taken.)
                                       770

-------
[Plot: graphical definition drawn along a concentration axis]
Figure 1. Graphical definition of the Method Detection Limit (MDL) and the Method Limit (ML)
                                771

-------
[Plot: measured concentration vs. added concentration (µg/L), one panel per laboratory]
Figure 2. Lead data produced by Laboratories A and D. The solid line is the true concentration
that would exist if the background matrix concentration were zero.
                                                772

-------
[Plot: measured lead concentration by laboratory (A-F); open boxes = unspiked matrix,
solid boxes = 1.25 µg/L spike, open circles = 2.5 µg/L spike]
Figure 3. Comparison of measurements from six laboratories at the three lowest lead levels
(spikes of 0.00, 1.25 and 2.50 µg/L Pb added to a matrix of filtered activated sludge effluent)
                                     773

-------
[Plot: measured concentration vs. hypothetical day of measurement]
Figure 4. Collective view of the data at all five lead levels from the six laboratories. The
measurements on the unspiked matrix are at the left-hand side of the graph, followed by the 1.25
µg/L spikes, and so on. The six values plotted for each hypothetical day are the results from the six
laboratories.
                                             774

-------
[Figure 5. Plot of the constructed series of 100 lead measurements (see text)]
-------
[Plot: measured concentration vs. observation number]
Figure 6. Series of measurements constructed from the lead data representing a hypothetical
effluent that has improved over time.
                                            776

-------
                                     MR. TELLIARD:  If you could sit down, we would
like to get started again.

      Our next speaker is a constant companion to this meeting, thank heavens. George
Stanko from Shell has been with us for, I think, maybe all but one or two meetings, all of
them here at Norfolk.

       As you know, there are those review forms.  Please fill them out to rank
the papers.  We have done this for a number of years, and out of 17 years, by the way,
George was selected as the best presenter.  So, with that in mind, I will introduce George.
                                      777

-------
(Blank Page)
    778

-------
     SHELL PERFORMANCE EVALUATION STUDY
                       OF
EPA METHODS 8270, 8020, and MODIFIED 8015 (TPH)
              Authors:  G. H. Stanko
                        T. L. Norton
                        R. A. Poole
              Shell Development Co.
                  Houston, Texas
 Presented at: 17TH Annual EPA Conference on Analysis
           of Pollutants in the Environment
                 Norfolk, Virginia
                  May 4-5, 1994
                      779

-------
                                   ABSTRACT
A  performance evaluation (PE)  study  of contract  environmental  analytical laboratories
currently being used by Shell was conducted. The study was designed to establish the level
of performance for 29 laboratories and the methods selected by Shell for the study were EPA
Methods 8270, 8020, and Modified 8015. The study was limited to selected polynuclear
aromatics and phenols by Method 8270, BTEX plus MTBE  by Method  8020, and total
petroleum hydrocarbons  (TPH) by  Modified  Method 8015.   A contractor,  Analytical
Standards Inc. (ASI), was hired to prepare the whole-volume water samples and  to perform
statistical analysis of the resulting data. Participation in the PE study was voluntary and each
of the participants received a report of their performance from ASI.  The results for the PE
study are presented in the paper.
                                       780

-------
                    SHELL PERFORMANCE EVALUATION STUDY
                                       OF
               EPA METHODS 8270, 8020, and MODIFIED 8015 (TPH)
                                 INTRODUCTION
A performance evaluation (PE) study of 29 contract environmental analytical laboratories that
are currently in the Shell Laboratory Accreditation Program (SLAP) was conducted in late
1993. The authors selected the methods for the PE study as well as the limited list of target
analytes. These selections  were based on the nature and volume of work being performed
at  SLAP laboratories.  A contract was negotiated with Analytical Standards Inc. to prepare
the whole-volume water samples, to collect all the resulting data, to statistically  analyze
these data, and  to prepare  a  report for  each of the laboratories  which showed their
performance for the study.   ASI also  prepared  a summary  report for Shell.   All the
laboratories that participated in the  PE  study were initially contacted  by Shell  and were
asked to participate at their expense. All laboratories  did volunteer  for the study.

The study was initiated in late November and samples arrived at  laboratories  in early
December. Results from laboratories were reported directly to ASI in late December and
ASI reports were sent directly to laboratories  in early January.  ASI also prepared  a report
for Shell which summarized the results for the PE  study. This paper  includes much of the
material in the ASI format and from the  ASI report for the PE study as well  as Shell's
interpretation of these results.
                    SLAP PERFORMANCE EVALUATION STUDY
The most recent Shell SLAP PE study(1) evaluated laboratory performance for volatiles by
EPA Method 8240 (GC/MS),  metals by ICP, and five general parameters - oil and grease,
BOD, pH,  COD,  and TOC.   The study was done blindly and represented  the level of
performance one could expect from commercial  laboratories for routine samples.  Such a
study required considerable effort and time, and was quite costly.  Since that study, the list of
contract laboratories being used by Shell changed considerably and it appeared timely to
conduct another PE study of SLAP laboratories to assess the performance for the current list
of laboratories.

Due to  cost constraints, it was decided the current study had to be more  limited and
focused. The costs associated with a blind study were prohibitive. While Youden pairs
were initially considered, it was decided not to use pairs of samples because  of costs and
the fact that laboratories knew they were participating in a PE study and resulting data might
not be truly representative of routine operations. However, it was decided to use whole-
volume water samples rather than concentrates to eliminate the possibility of laboratories
analyzing concentrates.

                                       781

-------
Study Design

A number of factors were considered in developing the study design.  Review of the nature
of the work being done for Shell at SLAP laboratories was the main criterion used for
selecting the methods to be studied.  It was also decided not to include any of the methods
previously studied. After EPA Methods 8270, 8020, and Modified 8015 were selected, the
lists of target analytes were selected.  Methyl tertiary butyl ether (MTBE) was included in the
list for Method 8020 because it is a component of most commercial  gasolines and has been
found in environmental samples. To make the PE samples more realistic, small amounts of
commercial gasoline were added to the Method 8020 sample and small amounts of turbine
fuel (Jet A) were added to the Method 8270 sample.

The PE study was also limited with respect to  the concentration levels selected for target
analytes.  The current study was not designed  to assess performance at or near detection
limits, but  to  assess  laboratory  performance well  above  quantification  (PQL) levels.
Basically, good laboratories should not have had any problems with any of the samples or
target analytes, with the possible exception of MTBE.  Not all laboratories may have had
much experience with MTBE.  Table 1. identifies the lists of target analytes and
concentrations used for the PE study.

                                    TABLE 1.

       Method              Parameter                  "True Value" (ug/L)

       Method 8270         Acenaphthene                      3.028
                           Acenaphthylene                   33.865
                           Anthracene                       71.000
                           2,4-Dimethyl Phenol              46.950
                           2-Methylnaphthalene              88.990
                           Naphthalene                      24.950
                           Phenanthrene                     42.900
                           Phenol                           65.070

       Method 8020         Benzene                          20.000
                           Ethyl Benzene                    35.000
                           Toluene                          42.000
                           Xylenes                          55.000
                           MTBE                             85.000

       Mod. Method 8015    Gasoline Range Organics         500.000
                                      782

-------
It should be noted that acenaphthene was not one of the initial target analytes, but was present
as an impurity in the acenaphthylene standard used to prepare the sample.  A number of
laboratories reported the presence and concentration of acenaphthene so it was decided to
include and list the compound as a target analyte.

Contractor

A contract was negotiated with Analytical Standards, Inc. (ASI) to do  most of the work for
the PE study.  Shell provided ASI with the list of  laboratories to be  included in the study and
ASI prepared and shipped the whole-volume water samples directly to the laboratories.
Laboratories were directed to report results for the study to ASI who would be doing their
usual statistics and who would return an individual report to each  laboratory which showed
their  performance.  In addition, ASI would prepare  a summary report  for Shell which
showed the performance of all  SLAP laboratories on  an individual and combined basis.
Most of the information included in this paper  was taken from the ASI summary report to
Shell and is shown  in  the ASI format.

Performance Evaluation Results

The  statistical summary  report from ASI for the SLAP PE study is shown in Table 2
(attached). Table 2. lists the "true" values; the statistical means for the study; the standard
deviations for the means; the highest/lowest reported values; and the upper/lower "warning"
and "control" limits for each of the parameters.  Table 2. provides a  general  overview for
the performances of all the laboratories for the methods/analytes included in the study.

ASI prepared control (Shewhart) charts for the PE study which illustrate laboratory
performance for each individual analyte. The control charts are shown in Figures 1 to  14.
Whenever possible, the  outliers  were left on  the  charts for identification  of  poor
performance. However,  if they greatly distorted the graphing scale, they were removed.
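
The warning and control limits in Table 2 appear to be consistent with the common convention of
the mean plus or minus 1.96 and 2.58 standard deviations (for example, 17.889 +/- 1.96 x 2.373
reproduces the benzene warning limits). The short sketch below is offered as an illustration of that
convention rather than as ASI's documented procedure; the benzene results used are hypothetical.

    # Sketch of Shewhart-style summary limits of the kind tabulated in Table 2.
    # The 1.96 and 2.58 multipliers are inferred from the tabulated values and are
    # an assumption about the contractor's procedure, not a documented fact.
    from statistics import mean, stdev

    def summary_limits(reported):
        m, s = mean(reported), stdev(reported)
        return {"mean": m, "std_dev": s,
                "upper_warning": m + 1.96 * s, "lower_warning": m - 1.96 * s,
                "upper_control": m + 2.58 * s, "lower_control": m - 2.58 * s}

    # Hypothetical benzene results (ug/L) reported by participating laboratories
    benzene = [17.0, 18.5, 19.2, 16.8, 20.1, 17.6, 18.9, 15.4, 21.0, 17.4]
    limits = summary_limits(benzene)
    outside = [v for v in benzene
               if not limits["lower_warning"] <= v <= limits["upper_warning"]]
    print({k: round(v, 2) for k, v in limits.items()})
    print("outside warning limits:", outside)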

The bottom line is that the performance for this group of laboratories was quite good.  One
has to look at the  performance for each laboratory to identify specific problems and/or
corrective action.  If one chooses  to assess overall performance using the EPA and ASI
"acceptable", "check for errors"  and "not acceptable"  categories  for all  laboratories,  most
laboratories performed very well. In addition, where the statistical mean and corresponding
control  limits indicated significant method  bias, laboratories were not penalized in this
assessment for being closer to the "true" value.  Table 3. shows such an assessment.  It
should be noted that not  all  laboratories reported results for all parameters.
                                       783

-------
                                     TABLE 3.

                          Method 8270      Method 8020      Method 8015

       Acceptable               22                25                25

       Check for Errors          1                 2                 0

       Not Acceptable            3                 2                 0


Specific Observations from SLAP PE Study Results

Lab #33 reported their result for TPH (Method 8015) as < 1,000 ppb. One wonders why
their quantification level is so high.  This problem requires some kind of explanation  and
perhaps corrective action.

Lab #8 did poorly on both benzene and ethyl benzene.  Both values were very low.  Lab
#8 needs to review their raw data and take corrective action.  They also need to
demonstrate they can run Method 8020.

Lab #59 was acceptable for all parameters except for 2-methylnaphthalene where it was a
factor of 2X high.  A bad standard or dilution error are possible causes.  Corrective  action
is in order to establish the cause for the problem.

Lab #159 missed all target analytes for Method 8270. They need to correct the  problem
then demonstrate they can run Method 8270.

Lab #162 missed phenol. They need to correct the problem then demonstrate they can run
Method 8270.

Lab #166 had an accuracy problem. They have a low bias problem of approximately 1.5.
Corrective action is in order.

Lab #167 was low by a 2X factor for toluene by Method 8020. Corrective action is needed
for the problem.

Lab #168 has a major problem with Method 8020 and corrective action is needed. They
need to correct the problem then demonstrate they can run Method 8020.

Modified Method 8015

Some general observations resulted for Modified  Method 8015. These observations apply
more to the TPH method rather than any or all laboratories.  The sample was prepared to
have a "true" value of 500 ug/L and was prepared using a commercial  unleaded  gasoline
purchased in ASI's local  area.   Review of the  statistical data showed there  were both

                                      784

-------
precision and accuracy problems.  The precision expressed as the standard deviation was
77 ug/L (S) which indicated a lot of analytical variability. Statistically, there were no outliers
because of the large variability. The mean for all laboratories was 268 ug/L which showed
poor accuracy (low bias). Even the highest reported value 440 ug/L represents a low bias.
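
Put in recovery terms (a simple check, not a statistic reported by ASI), the bias works out as
follows, using the values given above and in Table 2:

    # Recovery and bias arithmetic for the TPH (Modified 8015) results.
    true_value = 500.0    # ug/L, spike level of gasoline range organics
    study_mean = 268.037  # ug/L, mean of all laboratory results (Table 2)
    highest = 440.0       # ug/L, highest reported value (Table 2)

    print(f"mean recovery:    {100 * study_mean / true_value:.0f} percent")
    print(f"highest recovery: {100 * highest / true_value:.0f} percent")
    print(f"bias of the mean: {study_mean - true_value:.0f} ug/L")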

Some follow-up contact with laboratories revealed that there really is a method problem
with Modified 8015.  For example, laboratories that used a synthetic mixture of compounds
as a standard exhibited the lowest  bias.  Current plans are to attempt to  standardize the
method for all SLAP  laboratories in an effort to improve both precision and accuracy for
TPH.  Shell will prepare an SOP for SLAP laboratories to use for future Shell  samples. Some
additional PE studies are planned for the immediate future and will probably be the subject
of a future paper.

Acenaphthene

Acenaphthene was not added to the samples on purpose, but was an impurity in one of the
other analytes. Ten out of twenty-six laboratories reported values from 2 to 3 ug/L for the
compound.  The actual concentration of acenaphthene in the sample was below the EPA
listed PQL for the analyte in Method 8270 (PQL = 10 ppb).  Fifteen laboratories reported
the observation as < 10 ppb.  One laboratory reported the observation as < 11 ppb; another
laboratory reported < 50 ppb; and two laboratories reported < 5 ppb. There was just too much
good data for acenaphthene to ignore it.  No attempt was made to contact laboratories to establish
whether the  raw data indicated the presence or absence of acenaphthene. It is obvious that
laboratories  will report  such observations differently.
                                 CONCLUSIONS

The results from the PE study indicated that most of the laboratories currently being used
by Shell perform quite well for the three methods evaluated.

Unfortunately, there were some laboratories that did not adequately perform Methods 8020
or 8270.  There appeared to be other possible and known causes which resulted in data that
were not acceptable.  There is concern  for this observation since all laboratories knew they
were participating in a PE study. One would expect that bad standards and/or dilutions or
transcription errors would (should) have been caught if appropriate levels of QA/QC were
in place.  Corrective action is needed at a few of the laboratories and, in some  instances,
a demonstration that the laboratory  is now in  control and has the  capability to run  the
methods  is required.

The results from the SLAP PE study of Modified Method 8015 revealed there is a problem
with lack of standardization of the procedure. The results indicated that the most obvious
source of method  bias was the type of standard used.
                                       785

-------
The prime goal for the PE study, to assess the performance of the current list of contract
laboratories used by Shell, was met in a cost-effective manner.
                                  REFERENCES
1.     G. H. Stanko, "Performance Evaluation Study of Environmental Analytical Contract
      Laboratories,"  14TH  Annual EPA Conference on Analysis of Pollutants  in  the
      Environment, Norfolk, Virginia, May 8-9, 1991.
                       QUESTION AND ANSWER SESSION

                                     MR. TELLIARD:  Are there any questions?  (No
response.)

       Thank you, George.
                                      786

-------
           SHELL PERFORMANCE EVALUATION STUDY
                            OF
        EPA METHODS 8270, 8020, AND MODIFIED 8015 (TPH)
G. H. STANKO
T. L. NORTON
R. A. POOLE
                 17TH ANNUAL EPA CONFERENCE
                   ANALYSIS OF POLLUTANTS
                     IN THE ENVIRONMENT

                      NORFOLK, VIRGINIA
                        MAY 4-5, 1994

-------
DOUBLE BLIND
                788

-------
                                          Table 2
                        Shell Development Company SLAP Audit
                              Statistical Summary Report

                         Calculated  Statistical  Standard    Upper     Lower     Upper     Lower    Highest   Lowest
PARAMETER               "True" Value     Mean     Deviation  Warning   Warning   Control   Control  Reported  Reported
                                                              Limit     Limit     Limit     Limit

Gasoline Range Organics     500.000     268.037     77.458   419.855   116.219   467.879    68.195   440.000   110.000

Benzene                      20.000      17.889      2.373    22.540    13.238    24.011    11.767    23.900     5.900
Ethyl Benzene                35.000      30.507      5.261    40.819    20.195    44.080    16.934    42.500     5.300
Toluene                      42.000      38.846      5.654    49.928    27.764    53.433    24.259    53.200    17.000
Xylenes                      55.000      45.314      6.998    59.030    31.598    63.369    27.259    62.100    11.000
MTBE                         85.000      86.904     18.608   123.376    50.432   134.913    38.895   123.000    50.000

Acenaphthene                  3.028       2.479      0.388     3.239     1.719     3.480     1.478     3.000     2.000
Acenaphthylene               33.865      30.768      4.544    39.674    21.862    42.492    19.044    47.000    19.000
Anthracene                   71.000      58.504      7.859    73.908    43.100    78.780    38.228    77.000    45.000
2,4-Dimethyl Phenol          46.950      36.864      5.891    48.410    25.318    52.063    21.665    50.000    18.000
2-Methylnaphthalene          88.990      66.252     13.031    91.793    40.711    99.872    32.632   130.000    40.000
Naphthalene                  24.950      20.240      3.612    27.320    13.160    29.559    10.921    28.000    13.000
Phenanthrene                 42.900      33.580      4.685    42.763    24.397    45.667    21.493    95.000    24.000
Phenol                       65.070      35.009     13.315    61.106     8.912    69.362     0.656    55.000    15.000

-------
                                     Figure 1
[Control chart: Shell Development Company SLAP Audit, SDC1293 Gasoline Range Organics (8015).
Reported concentration (ug/L) by laboratory identification number, with mean, upper/lower warning
limits, and upper/lower control limits. Laboratory #33 reporting < 1000 ug/L; Laboratories #86 and
188 not reporting.]

-------
                                     Figure 2
[Control chart: Shell Development Company SLAP Audit, SDC1293 Benzene (8020). Reported
concentration (ug/L) by laboratory identification number, with mean, warning, and control limits.]
-------
                                     Figure 3
[Control chart: Shell Development Company SLAP Audit, SDC1293 Ethylbenzene (8020). Reported
concentration (ug/L) by laboratory identification number, with mean, warning, and control limits.
Laboratory #188 not reporting.]

-------
                                     Figure 4
[Control chart: Shell Development Company SLAP Audit, SDC1293 Toluene (8020). Reported
concentration (ug/L) by laboratory identification number, with mean, warning, and control limits.
Laboratory #188 not reporting.]

-------
                                     Figure 5
[Control chart: Shell Development Company SLAP Audit, SDC1293 Xylenes, Total (8020). Reported
concentration (ug/L) by laboratory identification number, with mean, warning, and control limits.
Laboratory #188 not reporting.]

-------
                                     Figure 6
[Control chart: Shell Development Company SLAP Audit, SDC1293 Methyl Tertiary Butyl Ether (8020).
Reported concentration (ug/L) by laboratory identification number, with mean, warning, and control
limits. Laboratories #08 and 188 not reporting.]

-------
                                     Figure 7
[Control chart: Shell Development Company SLAP Audit, SDC1293 Acenaphthene (8270). Reported
concentration by laboratory identification number, with mean, warning, and control limits.
Laboratories #16, 33, 46, 59, 68, 120, 166, 167, 169, 170 and 175 reporting < 10.0 ppb; Laboratory
#94 reporting < 11.0 ppb; Laboratory #159 reporting < 50.0 ppb; Laboratories #185 and 187
reporting < 5.0 ppb; Laboratories #161, 162, 172 and 188 not reporting; Laboratory #186 not
participating.]

-------
                                     Figure 8
[Control chart: Shell Development Company SLAP Audit, SDC1293 Acenaphthylene (8270). Reported
concentration (ug/L) by laboratory identification number, with mean, warning, and control limits.
Laboratories #161, 172 and 188 not reporting; Laboratory #186 not participating.]

-------
                                     Figure 9
[Control chart: Shell Development Company SLAP Audit, SDC1293 Anthracene (8270). Reported
concentration (ug/L) by laboratory identification number, with mean, warning, and control limits.
Laboratories #161, 172 and 188 not reporting; Laboratory #186 not participating.]

-------
                                     Figure 10
[Control chart: Shell Development Company SLAP Audit, SDC1293 2,4-Dimethyl Phenol (8270).
Reported concentration (ug/L) by laboratory identification number, with mean, warning, and control
limits. Laboratories #161, 172 and 188 not reporting; Laboratory #186 not participating.]

-------
                                     Figure 11
[Control chart: Shell Development Company SLAP Audit, SDC1293 2-Methylnaphthalene (8270).
Reported concentration (ug/L) by laboratory identification number, with mean, warning, and control
limits. Laboratories #161, 172 and 188 not reporting; Laboratory #186 not participating.]

-------
                                     Figure 12
[Control chart: Shell Development Company SLAP Audit, SDC1293 Naphthalene (8270). Reported
concentration (ug/L) by laboratory identification number, with mean, warning, and control limits.
Laboratory #159 reporting < 10.0 ug/L; Laboratories #161, 172 and 188 not reporting; Laboratory
#186 not participating.]

-------
                                     Figure 13
[Control chart: Shell Development Company SLAP Audit, SDC1293 Phenanthrene (8270). Reported
concentration (ug/L) by laboratory identification number, with mean, warning, and control limits.
Laboratories #161, 172 and 188 not reporting; Laboratory #186 not participating.]

-------
                                     Figure 14
[Control chart: Shell Development Company SLAP Audit, SDC1293 Phenol (8270). Reported
concentration (ug/L) by laboratory identification number, with mean, warning, and control limits.
Laboratory #159 reporting < 10.0 ug/L; Laboratories #161, 162, 172 and 188 not reporting;
Laboratory #186 not participating.]

-------
                   MR. TELLIARD: Our final speaker of this meeting is Craig Markell from
3M.  Craig is going to be talking about an evaluation that they have been running on the
use of SPE, solid phase extraction, on the application of Method 608 and the results thereof.
               AN EXTENSIVE EVALUATION OF AN SPE SAMPLE PREP
                                FOR METHOD 608
                                      MR. MARKELL: Thank you, Bill.

      You know, normally, I would be really ticked off at being placed as the last speaker
on the last day, but any time  you get to share the podium  with  professors Stanko and
Telliard, it is a great day, and I am highly honored. Also, thanks to all of you for staying.

      We first  introduced the  mighty Empore disk in 1989.  At that time,  we got initial
interest from the drinking water folks in Cincinnati who were having some problems getting
solid phase extraction to work on some of the waters and thought they worked pretty well.

      However, it has been five years now, and, still, on wastewater, I do not believe we
have too many approved methods. There are some  that were sort of slipped under the door
which  I will tell you about in a  minute,  but there is  a huge energy barrier to getting
wastewaters approved using solid phase extraction.

      So, what I want to do today is tell you a little story  about some of the efforts that
have gone on and also the latest effort we have.

      We were telling people this five years ago.  We will still tell you it,  we think, not
only for drinking water, but, certainly, for wastewaters, especially the finished effluents that
are the things you are regulated in your permits  for, the 600 series types of analytes.

      Now, you all know what a disk is, I think, by now.  Hopefully, you do.  If not, tell
me, and our marketing people  will get their hands slapped.

      The method is fairly straightforward. There are a few  little things you have to know,
but, basically, you filter your water sample through it, and then you elute it.  You have got
the analytes now in organic solvent  in concentrated form.

      Then there are some advantages in using  disks, and I will not go through all these,
especially in view of the lateness of the day and the surly nature of the attendees.
                                       805

-------
      Now, drinking water.  Lots of drinking water approvals.  There is no problem there
whatsoever. We are looking here at 1991, we got some approvals already in some of the
methods for semi-volatiles, 525.1, and so on and so forth.

      Supplement II was written in 1992. That is right now in the approval process, and
we expect Supplement II  perhaps to go final later this year.

      Finally, this is the one they slipped under the door.  I am not sure who did this, but
I think I know.  In 1993 for the pesticide  manufacture effluent guidelines, we actually saw
some of the drinking water methods approved for wastewaters which was rather historic,
considering that the drinking water folks wrote these methods, but the wastewater people
actually approved them first.  How did that ever happen, Bill?

                                      MR. TELLIARD: We had to use  them first.

                                      MR. MARKELL:  Now,  drinking water is a  no-
brainer for most people. There are no problems with drinking water.  It tends to be a pretty
clean matrix.

      What is  the  objection to the  dirty water  samples?  Well, there are a  couple of
objections.  One is very legitimate that if  you have suspended solids  in your water sample
and you filter it through any sort of bed  of particulates, whether it is in a tube or a disk
form, you will wind up plugging the matrix.

      The  plugging depends on the number of particulates you have,  the size  of the
particulates, the physical nature  of the particulates, and the pore size of your matrix. That
is really about all there is to  it.  Fairly straightforward, but if you  have got a liter of water
you have got to  filter, and you can only get 250 ml through, you have got a problem there.

      So, that is one objection.  The second is some chemical interactions of the analytes
with the matrix  and also the solid phase.  For example,  suppose  you have got an analyte
that somehow complexes with humic materials made into a water soluble complex that can
go sailing right  on by  the reverse phase/solid phase matrix.  People are objecting on the
grounds that maybe that can happen.

      It is  something  we have never actually seen good data on. In fact, the papers that
have been  published do not have compelling data.  They show a slight drop in recovery,
but there has never been a good paper published on that that I have seen.

      The  other thing that perhaps is a little  more real is if you have hydrophobic analytes
that are stuck to hydrophobic particles, organic particles in your sample, what is going to
happen to the analytes stuck to  that particle? That is a legitimate concern.
                                       806

-------
      At any rate, what I want to do now is tell you a little about some of the studies that
have been done on Method 608 and the analytes. These have been done ever since 1990,
'91, and have some really excellent results.

      These are just some of the things you can do to counteract the plugging.  I guess the
things I  would really recommend is you can always go to a larger disk, use  a little bit more
horsepower.  You can let the sample settle and decant most of it before you throw the
sediment on the disk.  That helps a lot.  That  is something we always do.

      A filter aid is nice.  So on  and so forth. You will see these slide copies in the
proceedings.

                                      MR.  TELLIARD:  And multiple disks.

                                      MR.  MARKELL: And multiple disks. Good point.
If you get to the point where you just cannot filter any more through a single disk, save the
water sample and put a  new disk on, and you can finish up.  That is a legitimate strategy,
especially with large volume samples.

      In fact, one guy just reported using five  disks to do 100 liters of water. It is the only
way you can legitimately do 100 liters of water.

      Now, here are the analytes stuck to particles, and symbolizing the  analytes are As
that you can see on the red particulates.  Okay,  you are doing this in  a liquid-liquid
extraction, separatory funnel. You are shaking  these particles up with the analytes on them,
and you have got a micro-emulsion of methylene chloride.  You have got  water-saturated
particles with the analytes on them, and you have got methylene chloride with which you
are expecting to hit those particles, wet them  out, and extract the analytes off.

      It just is not going to happen.  We have seen evidence of this.  In fact, the last  study
I am going to show you proves that this is, indeed, a  problem.  So, you cannot expect
liquid-liquid to work all the time in this type of format.

      If you catch these particles on the disk  and elute them, you can let them soak for a
while.  The water is all gone.  You can actually let  it dry if you like.  You  may well have
a better crack at getting those analytes off of the particles than you will in a liquid-liquid
extraction.

      So, here is the traditional  Method 608.  You all know it, but, basically, you take a
sample of water, you extract it with methylene chloride, shake it up, combine and dry them,
take it down to 1 ml, and take it up in hexane  to get rid of the methylene chloride for your
electron capture detector.
                                       807

-------
      Here is the first study that was done. This was really a good study.  It was done by
some of the folks at Waste Management. Anne O'Donnell was the one in particular. This
was presented at the  1991  Pittsburgh conference.

      Here  is the way she did it.  She took  a  disk.  This  was a  47 mm  C18 disk.
Conditioned  it,  ran  the water sample through, added  a  little  methanol...probably not
necessary but she did...extracted with a little ethyl acetate which is a nice solvent to use,
and I will tell you why in a minute, took  it up with ethyl  acetate, dried  it, and ran it by
electron capture detector.

      That is the method, fairly straightforward.

      The samples she did were RCRA types of samples.  Can  I use  that word, RCRA?
They were from ground water monitoring wells around their dump sites.

      For those  of you who do not know Waste Management, it is the biggest garbage
company in the world.

      At any rate, here are the recoveries and RSDs of the samples. Really nice recoveries.
I will not spend a lot of time on them, but these are for the single component analytes.
Nice looking  RSDs, mostly single digit, and when they went down to the MDL level, still
very nice data. There are a lot of data points in this study, and if anybody wants it, by the
way, I can send you a copy.

      I  am afraid to  use this slide.  These are the MDLs, and what I will tell you is they
compare very nicely to the standard liquid-liquid extraction method.

      I  will  let you  read  these conclusions,  but  to make a long story short,  she was
convinced it worked very nicely for their types of samples and actually recommended that
they approve  the method.  They never did, because it never became EPA approved.

      Now, here is another method.  I have got a couple of slides.  These are the folks at
Twin City Testing. This was actually reported here a couple of years ago by Merlin Bicking.

      What they looked at was a 90 mm disk.  Now, all of a sudden,  they were looking
at wastewaters, and there were problems with plugging. So, they used the larger disk, and
it worked very nicely.

      They also used a little glass fiber prefilter on top to catch some  of the chunks and
prevent some of the plugging. They did a 1 liter sample in this case, eluted with 3 x 15 ml
dichloromethane.

      Remember, the last study used ethyl acetate.  This one uses dichloromethane.  Then
they dried it and concentrated it.


                                       808

-------
      Here are the results.  There were four authentic wastewaters here:  a pesticide
manufacturer's effluent, a POTW effluent, a pulp and paper mill effluent (the worst matrix
of all, with all kinds of cellulosic material floating around in it), and a petroleum
refinery effluent.

      Now, they took the samples, spiked them, shook them up, and let them sit overnight
so that these hydrophobic analytes could come to equilibrium with the particles in the sample,
and that is an important point.  If you ran it right away, the analytes probably would not all
adsorb to the particulate in the sample, and we wanted that to happen.

      So, great recoveries for the most part, and great RSDs for the most part.  I am going
to point out a problem here, and it is the pulp and paper matrix:  68 percent recovery, which
is not a disaster, but it is a little lower than you would like to see, and the RSD is
correspondingly higher than you would like to see.

      That is a problem, and the problem was that the pulp and paper matrix had this
cellulosic garbage, saturated with water, onto which the analytes had equilibrated.  So,
a lot of the organochlorine pesticides were stuck to the cellulosic material, which was, in
turn, saturated with water.

      Well, here is the point.  This is a simplistic diagram  of what  is happening at the
molecular level.  You have got our base sorbent particle. You have got the C18 chains here.
The analyte comes  in and sticks to the C18 chain through a hydrophobic interaction.

      Obviously, when you are done with the extraction, you have got water everywhere.
It is saturating the internal pores of the sorbent. So, you have got water covering up that
site where the analyte is stuck.

      You come in with something immiscible with water like dichloromethane. Hexane
is even worse. It just cannot get through that water layer and get the analytes off efficiently.
You have to use large volumes of elution solvent, and it still may not work.

      Here is just an idea of the solubility of water in some of these solvents.  Hexane, of
course, will dissolve almost no water.  You get up to methylene chloride, a little bit, but,
really, you have got to get up to things like methyl t-butyl ether or ethyl acetate to really
get that water out of the way.  Acetone, of course, is miscible with water and a pretty good
solvent, too.

      Well, here is the latest work I really wanted to show you.  This was presented at
PittCon this year.  It is the latest and, certainly, the most extensive study of Method 608
using solid phase extraction.

      What they did here was a little different from the other two studies.  They
conditioned the disk and took a liter sample through it.  These are 90 mm C18 disks. They eluted
                                        809

-------
first by wetting the disk with 5 ml of acetone.  This gets all the water out of the way, and
it is no longer a problem even if you have some saturated particulate on top of your filter.

      Then they eluted with dichloromethane, two 15 ml portions, dried it, took it down
to volume, and shot it in the ECD.  So, the difference here is the acetone, really.

      You know what the disks are.  Here is what a 90 mm disk looks like if you have not seen one.

      What they also did was use some supplemental filter materials, a glass fiber filter
and/or some filter aid.  They did not have to use this in all samples.  I just wanted to show
you what the scheme is.

      You  put a little in situ prefilter on top of the disk, and that can help you in a number
of ways.  There is one fellow I talked to, Paul Marsden  from SAIC, who claims that they
have been experimenting with a filter aid material.  Not only does it make the filtration
faster, but it helps recoveries in most cases simply because it is spreading the suspended
solids out on a larger surface area, and it is easier to elute the analytes from there.

      It is hard to read this.  Basically, these organochlorine pesticides were spiked in at
about 0.2, about 1.0, and about 5 ppb in the sample.  Again, the sample was shaken and the
spike allowed to equilibrate.

      Here is what the samples looked  like. We had 10 samples which represented 5 SIC
codes.   There  were some  from  the chemical industry,  pulp  and paper  people,
pharmaceutical, refuse, and sewerage.

      What we are seeing here is a range of pHs, a range of suspended solids, a heck of
a range in dissolved solids, and that is characteristic of these samples.  Obviously, these
are authentic samples.

      Now, you cannot tell much from this.  The yellow bars show you the percent
recovery of the disks.  The blue bars are a side-by-side comparison done on the same samples
using a liquid-liquid extraction and separatory funnel, Method 608 in other words.

      What you are seeing is that, with a couple of exceptions which I am going to focus
in on and point out in a minute, the results are comparable.  They look very good. We are
getting results usually above 80 percent recovery, even at those low levels.  It is something
any reasonable analytical chemist would be proud to have.

      Standard deviation:  these are scattergrams, and what you are seeing is the standard
deviation of the disk plotted against the standard deviation of Method 608.  In fact, there
is certainly no trend here.  There is a bit of scatter but nothing consistent.  Both had
about the same RSDs.
                                       810

-------
      The MDLs, again, I am afraid to use this slide, but the MDLs are reasonably
equivalent.  Certainly, there is no high or low trend here.

      Now, everybody is afraid that if  you use a new technique, there  will be that
nightmare matrix out there waiting to come into your lab, and your new method  is not
going to work  on it.  Well, there was a nightmare matrix,  and it was the pulp and paper.

      What you are seeing here is, in the yellow, the traditional Method 608 result. The
blue is the disk result.

      It turns out, if you notice, that the nightmare matrices, 40 percent recovery or so,
were actually with Method 608.  The separatory funnel method simply did not work on these
samples.  We were getting about a 40 or 50 percent recovery, and I suspect the reason is
that the analytes had equilibrated with the water-saturated suspended solids, and the
methylene chloride just could not get in there and pull them off the particulate.

      Maybe continuous extraction would have been better.  We did not try it, but the
nightmare matrix in this case was for Method 608, not the disk modification, which actually
worked reasonably well.  We are looking at about 80 percent recoveries here.

      Again, what you can do there is actually let the methylene chloride soak into those
suspended solids and dig the analytes off.

      So, I will let you  read these comparisons.  The bottom line is this  is the most
extensive study that has been done yet.

      What we have done is submitted this to the EPA as an alternate test procedure.  We
are fairly confident that it will pass the review and that we will see equivalent results.
Equivalency is really the key here, and we certainly hope to see something by perhaps the
end of the year, maybe early next year.

      So, that concludes what I have. Thanks again for staying.
                                       811

-------
 SPE Disks Will Replace
LLE  for Water Extractions

-------
        Method for Using Empore Disks
 1) Pre-Wash Disk With the Final Eluting Solvent
 2) Pre-Wet Disk with Methanol
 3) Pass Water Sample Through Disk
 4) Elute Disk using an Appropriate Solvent
 5) Dry and Concentrate Eluate, if Necessary

-------
              Why Disks?

• Higher Flow Rates Through a Large
  Diameter Bed (πr²)
• Lower Back Pressure Through
  a Thinner Bed
• Smaller, More Efficient Particles (8 µm)
• Uniform Flow - No Channeling
• Inert, Clean

-------
     SPE Incorporation Into EPA Methods

1991 - Drinking Water (APPROVED)
   506      Phthalates
   525.1    SOCs
   550.1    PAHs

1992 - Supplement II (Published)
   7 SPE Mtds or Options

1992 - Wastewater, Pesticide Mfg. Effluent (APPROVED for Effluent)
   515.2    Acid Herbs
   525.1    SOCs (Plus Cmpds)
   548.1    Endothall
   553      Benzidines, ONPs
   555      Acid Herbs

1993 - Drinking Water (Proposed Approval)
   515.2    Acid Herbs
   548.1    Endothall
   549.1    Diquat/Paraquat
   552.1    Haloacetics/Dalapon
   555      Acid Herbs
   525.2    SOCs, Expanded

          Footnote: Allow Use of SPE for 507/508

-------
 Particulates
and Sediment

-------
  Slow Flows  With  "Dirty  Water"

Suspended Solids Can Plug Pores in Disk - Severity of Problem
Depends on Size and Concentration of Solids

Worst Problems Are Often With Water High in Biological Activity
(Ponds) or Fine Clay

 Symptoms
 • Flow Rate Drops Off Rapidly With Time

 Remedies
 • 90 mm Disk
 • Smaller Volume
 • In Situ Prefilter
 • Filter Aid
 • Settle and Decant Sample
 • Good Vacuum
 • Split Sample, Combine Eluates

-------
818

-------
 Method 608

  Extract 1 Liter Sample
  3X With 60 mL MeCl2
           |
  Combine and Dry Extracts
           |
  Concentrate Sample (K-D)
        to 1 mL
           |
  Add 50 mL Hexane and
    Concentrate (K-D)
           |
Make to 10 mL With Hexane
           |
       GC-ECD

-------
Evaluation  of Solid Phase Extraction Disks
    as a Replacement for Liquid/Liquid
Extraction in the Determination of Organo-
  chlorine Pesticides and PCB's in  Water

     Anne D. O'Donnell, Denise R. Anderson,
     Laura Bartoszek and John T. Bychowski
     WMI Environmental  Monitoring Labs,  Inc.

       Craig Markell and Donald F. Hagen
       3M Corporate Research Laboratories

-------
   Method 608 - Disk Modification

                 Condition Disk
                       |
              Extract 1 Liter Water
              Sample (0.5% MeOH)
                       |
               Elute Disk 2X With
                   5 mL EtOAc
                       |
            Make to 10 mL With EtOAc,
                Dry With Na2SO4
                       |
                    GC-ECD

-------
  Recovery and RSD (%)

                                 Ave.       Ave.
                               Recovery     RSD
Validation Level (~1 ug/L)
  Reagent Water                   92        3.1
  Average Groundwater             93        4.4
  High SS Groundwater -
    Best Case                     86       11.7
  High SS Groundwater -
    Worst Case                    63        8.6
MDL Level (~0.02 ug/L)
  Reagent Water                   93        5.7
  Average Groundwater             81        7.1

-------
Method Detection Limits (ug/L)

Analyte                Reagent   Reagent   Groundwater
                         LLE       LSE         LSE
Aldrin                  0.011     0.004       0.004
a-BHC                   0.009     0.001       0.002
b-BHC                   0.013     0.002       0.003
d-BHC                   0.006     0.002       0.002
g-BHC (Lindane)         0.004     0.002       0.002
4,4'-DDD                0.006     0.003       0.002
4,4'-DDE                0.007     0.002       0.002
4,4'-DDT                0.006     0.004       0.003
Dieldrin                0.007     0.006       0.007
Endosulfan I            0.005     0.002       0.016
Endosulfan II           0.006     0.004       0.006
Endosulfan Sulfate      0.020     0.006       0.017
Endrin                  0.006     0.003       0.003
Endrin Aldehyde         0.011     0.017       0.008
Endrin Ketone           0.015     0.002       0.016
Heptachlor              0.005     0.004       0.004
Heptachlor Epoxide      0.004     0.002       0.003
Methoxychlor            0.036     0.011       0.012
                          823

-------
                Conclusions

• 200-500 mL MeCl2 Replaced With 20 mL EtOAc
• Significant Time/Labor Savings
• MDL's Lower
• Recoveries and RSD's Equivalent to LLE
• Suspended Solids Not a Problem in
  Average Groundwaters
• Worst Case Suspended Solids Needed
  Pre-Filtration and Affected Recoveries

-------
      Method 608/8080

    • 90 mm C18 Disk + GF/A Prefilter

    • Extract 1 Liter Sample

    • Elute With 3 x 15 mL MeCl2

    • Dry and Concentrate

-------
       Method 608/8080

                             Average Recovery, %
Waste Water Type                 (Avg. RSD)*
Pesticide Manufacturer            91 (5.3)
POTW                              86 (7.4)
Pulp/Paper Mill                   68 (12.0)
Petroleum Refinery                80 (2.4)

     * Average of 18 Single Component Analytes, n=5

-------
   Elution Solvent/Water Miscibility

   [Diagram: sorbent particle with bonded C18 chains, retained analyte,
    and a surrounding water layer that the elution solvent must penetrate]

-------
  Solubility of Water in Solvents*
            (% By Weight)

Hexane                                 0.01
Toluene                                0.03
Methylene Chloride                     0.24
Ethyl Ether                            1.26
Methyl t-Butyl Ether                   1.50
Ethyl Acetate                          3.30
Acetone                              Miscible
Methanol                             Miscible

*Burdick and Jackson "High Purity Solvent Guide"

-------
             PittCon® 94

Validation Study of Liquid/Solid Extraction
       for the Analysis of Organochlorine
Pesticides and PCBs in Ground and Wastewaters

    A.D. Vo, S.T. Rodriguez, K.M. Hoffmann
         3M Environmental Laboratory
                C.G. Markell
          3M I&C New Product Dept.

 3M ENVIRONMENTAL LABORATORY

94NA5966-1

-------
           Method 608 Using 3M Empore

                   Condition Disk
                        |
                Extract 1 L of Sample
                        |
               Elute With 5 mL Acetone,
                2x With 15 mL MeCl2
                        |
                       Dry
                        |
                 Add 10 mL Hexane
                        |
               Concentrate (K-D) to 1 mL
                        |
              Make to 10 mL With Hexane
                        |
                      GC-ECD

  94NA5966-9

-------
      3M Empore™ Disks
   with Standard Apparatus

   [Photograph: Empore disks with standard filtration apparatus]

94NA5966-4

-------
   3M Empore™ Set-Up Schematic

   [Schematic: optional Empore Filter Aid layered over a Glass Fiber Filter
    on top of an Empore 90mm C-18 disk]

 94NA5966-12

-------
             Analyte Groups and Concentrations

Analyte                 Baseline    Fortification 1   Fortification 2
                          (FLB)          (FL1)             (FL2)
4,4'-DDE                   0.2            1.0               5.0
Aldrin                     0.2            1.0               5.0
Alpha-BHC                  0.2            1.0               5.0
Beta-BHC                   0.2            1.0               5.0
Delta-BHC                  0.2            1.0               5.0
Dieldrin                   0.2            1.0               5.0
Endosulfan I               0.2            1.0               5.0
Endrin Aldehyde            0.2            1.0               5.0
Gamma-BHC                  0.2            1.0               5.0
Heptachlor                 0.2            1.0               5.0
Heptachlor Epoxide         0.2            1.0               5.0
Methoxychlor               0.2            1.0               5.0
4,4'-DDD                   1.0            5.0              15.0
4,4'-DDT                   1.0            5.0              15.0
Endosulfan II              1.0            5.0              15.0
Endosulfan Sulfate         1.0            5.0              15.0
Endrin                     1.0            5.0              15.0
* Chlordane                2.0           10.0              50.0
* PCB 1254                 2.0           10.0             100.0
* Toxaphene               10.0           50.0             250.0

* Multicomponents

94NA5966-8

-------
    Physical Data on Samples as Collected

                                          mg/L
SIC    Industry          pH      TSS      TDS       TS
2869A  Chemical          7.8       3     2100     3100
2869B  Chemical         12.0      12     9100     9600
2621A  Paper             6.7      48     1300     1500
2621B  Paper             7.9       3      630      650
2833A  Pharmaceutical    6.5      18     1700     1800
2833B  Pharmaceutical    8.0      10      570      580
4953A  Refuse            3.1      14      360      560
4953B  Refuse            3.7     120    48300    50500
4952A  Sewerage          7.0      23      780      N/A
4952B  Sewerage          7.9      11     1000     1200

 94NA5966-7

-------
             Average Recoveries for All Analytes by Industry

   [Bar chart: percent recovery (0-140%) for Empore vs. Method 608 on samples
    A and B from the Chemical, Paper, Pharmaceutical, Refuse, and Sewerage
    industries; FLB, FL1 & FL2 levels, all analytes and replicates]

-------
                 Relative % Standard Deviations by Level

   [Scatterplots: relative standard deviation of Method 608 plotted against
    that of Empore (0-25% on each axis) for the FLB, FL1, and FL2 levels]

-------
                  Statistical MDL - ug/L

Analyte                   Empore       608
Alpha-BHC                  0.005      0.006
Gamma-BHC                  0.004      0.006
Beta-BHC                   0.021      0.015
Heptachlor                 0.020      0.015
Delta-BHC                  0.011      0.006
Aldrin                     0.008      0.011
Heptachlor Epoxide         0.010      0.008
Endosulfan I               0.008      0.005
4,4'-DDE                   0.022      0.013
Dieldrin                   0.008      0.006
Endrin                     0.068      0.028
4,4'-DDD                   0.083      0.071
Endosulfan II              0.043      0.032
4,4'-DDT                   0.071      0.039
Endrin Aldehyde            0.015      0.023
Endosulfan Sulfate         0.048      0.030
Methoxychlor               0.027      0.026
* PCB 1254                 0.26       0.21
* Chlordane                0.07       0.08
* Toxaphene                0.61       0.88

  MDL = 3.143 x Std Dev

94NA5966-11
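
      The 3.143 multiplier above is the one-sided Student's t value for seven replicate
measurements (six degrees of freedom) at the 99 percent confidence level, as in the EPA
MDL procedure at 40 CFR Part 136, Appendix B.  The Python sketch below only illustrates
that arithmetic; the seven replicate results are hypothetical numbers, not data from the study.

import statistics

def method_detection_limit(replicates, t_value=3.143):
    """MDL = t * s, where s is the standard deviation of replicate
    low-level spike results and t is the Student's t value for
    n - 1 degrees of freedom at the 99% confidence level
    (3.143 for n = 7 replicates)."""
    s = statistics.stdev(replicates)   # sample standard deviation
    return t_value * s

# Hypothetical seven replicate results (ug/L) for a low-level spike
replicates = [0.021, 0.019, 0.023, 0.020, 0.018, 0.022, 0.021]
print(f"MDL = {method_detection_limit(replicates):.3f} ug/L")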

-------
        Average Recoveries for All Analytes in Paper Industry

   [Bar chart: percent recovery (to 120%) for Empore vs. Method 608 on
    samples 2621A and 2621B at the FLB, FL1, and FL2 levels]

                                      mg/L
SIC    Industry      pH      TSS      TDS      TS
2621A  Paper         6.7      48     1300     1500
2621B  Paper         7.9       3      630      650

  94NA5966-13

-------
Comparison of Empore™ & LLE

     • Less Solvent
       - 1/3 Use in Extraction & Concentration
       - Less Contamination of Sample
       - Less Cost
       - Less Hazard to Personnel
       - Less Disposal and Emission = More Pollution Prevention
     • Less Time
       - 2x the Samples
     • Less Space Required
     • Less Labor Intensive
     • Better Recovery Without Emulsion

94NA5966-14

-------
(Blank Page)
    840

-------
                                                             CLOSING
                                     MR. TELLIARD: I would like to thank you for staying.
I would also like to thank a few other people.

      This is our first effort at joint sponsorship with the WEF.  I think it has gone well.
We had a few glitches, but, you  know, marriages have that, too.  We are  looking forward
to coming back next year, and we are looking forward to having another  joint session.

      I  would like to thank Bill  Nivens and Suzanne Shutty for their help here.  I would
like to thank Dale Rushneck, particularly, for working on the technical program which is,
I think, one of the strongest we have had in quite a few years. Now, we did have some hot
topics that made it easy for him, but Dale certainly did a heck of a good job.

       I would like to thank Cindy Simbanin from Viar, who helped a great deal with the
registration, Jan Kourmadas, who ran around and worked with the hotel people, and
Marion Thompson from my staff, who answered all the phone calls when you called and
asked what time it starts.  Another small glitch.

      We are hoping to be back here same time, maybe same station, but general area.
If you have any  suggestions on  things you would  like to see or hear about, we would
appreciate your giving me a call or dropping us a line.

       We have asked you to sign and evaluate the papers, for our purposes and everyone
else's, so that we can basically see how things are going.

      I appreciate your time, patience, and hope to see you next year. Thank you so much
for coming.
      (The conference was concluded at 4:30 p.m.)
                                       841

-------
(Blank Page)
    842

-------
                                                          SPEAKERS
S. S. Berman
National Research Council of Canada
Institute for Environmental Chemistry
Building M12, Room G12
Montreal Road
Ottawa, Ontario, Canada K1A OR6
Phone:  (613) 993-3520
FAX:  (613) 993-2451

P. M. Berthouex
Professor
Dept. of Civil and Environmental
 Engineering
University  of Wisconsin
1415 Johnson Drive
Madison, Wl  53706
Phone:  (608) 262-7248
FAX:  (608)262-5199

Diane A. Blake
Dept. of Ophthalmology
Tulane University School of Medicine
1430 Tulane Avenue
New Orleans, LA  70112
Phone:  (504) 584-2478
FAX:  (504) 584-2684

Nicolas S.  Bloom
Frontier Geosciences
414 Pontius North
Seattle, WA  98109
Phone:  (206) 622-6960
FAX:  (206) 622-6870
David L. Clampitt
Director of Environmental & Regulatory
  Affairs
Uniform & Textile Service Association
1730 M Street, NW
Suite 610
Washington, D.C.  20036
Phone: (202) 296-6744

Bruce Colby
President
Pacific Analytical, Inc.
6349 Paseo del Lago, Suite  102
Carlsbad, CA  92009
Phone: (619) 931-1766
FAX:  (619)  931-9479

Gregory Cutter
Department of Oceanography
Old Dominion University
Norfolk, VA  23529
Phone: (804) 683-4285
FAX:  (804)  683-5303

Gerald J.  DeMenna
President
Chem-Chek Corporation
44 Stelton Road, Suite 325
Piscataway,  NJ 08854
Phone: (908) 752-7793
FAX:  (908)  752-6973
                                     843

-------
Elizabeth Jester Fellows
Chief, Monitoring Branch
Assessment and Watershed Protection
  Division
USEPA Office of Wetlands, Oceans, and
  Watersheds
Mail Code: 4503
401 M Street, S.W.
Washington, D.C.  20460
Phone:  (202) 260-7062
FAX:  (202) 260-7024

A. Russell  Flegal
Environmental Toxicology
WIGS
University  of California/Santa Cruz
Santa Cruz, CA 95064
Phone:  (408) 459-2093
FAX:  (408) 459-3074

James Hanlon
Deputy Director
USEPA Office of Science and
  Technology
Mail Code:  4301
401 M Street, S.W.
Washington, D.C.  20460
Phone:  (202) 260-5377
FAX:  (202) 260-5394

R. E. Hawley
Market Development Manager
Varian  Sample Preparation Products
24201  Frampton Avenue
Harbor City, CA 90710
Phone:  (310) 539-6490
FAX:  (310) 539-8642

Greg Hill
Hampton Roads Sanitation District
1432 Air Rail Avenue
P.O. Box 5911
Virginia Beach, VA  23455-0911
Phone:  (804) 460-2261
FAX:  (804) 460-6586
Dr. Steven W. Hinton
Research Engineer
Department of Civil Engineering
Tufts University, Anderson Hall
Medford, MA 02155
Phone: (617) 627-3254
FAX:  (617) 627-3831

Carlton D. Hunt
Battelle Ocean Sciences
397 Washington Street
Duxbury, MA 02332
Phone: (617) 934-0571
FAX:  (617) 934-2124

Henry Kahn
Chief, Economic and Statistical Analysis
Branch
Engineering and Analysis Division
USEPA Office of Science and
  Technology
401 M Street, S.W.
Mail Code: 4303
Washington, D.C.  20460
Phone: (202) 260-5408
FAX:  (202) 260-5394

David Kimbrough
Public Health Chemist
California Dept. of Toxic Substances
  Control
Southern California Laboratory
1449 West Temple  Street
Los Angeles, CA  90026-5698
Phone: (213) 580-5795
FAX:  (213) 580-5706

Bruce R. Locke
Associate Professor
Department of Chemical Engineering
FAMU/FSU College of Engineering
Tallahassee, FL  32316-2175
Phone: (904) 487-6149
FAX:  (904) 487-6150
                                     844

-------
Dr. Bruce E. Logan
Associate Professor
Chemical and Environmental Engineering
University of Arizona
120 Harshbarger Building
Tucson, AZ 85721
Phone: (602) 621-4316
FAX:  (602) 621-6048

Craig Markell
Research Specialist
3M Corporation
3M Center
Building 209-1W-24
St. Paul, MN   55144-1000
Phone: (612) 733-2813
FAX:  (612) 736-6009

Timothy Miller
US Geological Survey
National Center MS 412
12201 Sunrise Valley Drive
Reston, VA  22092
Phone: (703) 648-6868
FAX:  (703) 648-5295

Billy B. Potter
Research Chemist
USEPA ORD  Environmental Monitoring
  Systems Laboratory
26 West Martin Luther  King Dr.
Cincinnati, OH 45268
Phone: (513) 569-7452
FAX:  (513) 569-7757

Harold Rhodes
RLT Consultants
585 Munsterman Place
Beaumont, TX  77707
Phone: (409)  866-5476
Dr. Ileana Rhodes
Staff Research Chemist
Shell Development Company
P.O. Box 1380
Houston, TX  77251-1380
Phone:  (713) 544-8215
FAX:  (713) 544-8727

Robert Runyon
Chief, Monitoring Management Branch
Environmental Services Division
USEPA Region II
Raritan Depot Building 10
2890 Woodbridge Avenue
Edison, NJ  08837-3679
Phone:  (908) 321-6645
FAX:  (908) 321-6788

Dr. Michael Sepaniak
Professor, Department of Chemistry
University of Tennessee
Knoxville, TN 37996-1600
Phone:  (615) 974-8023
FAX:  (615) 974-3454

Dr. G.  H. Stanko
Senior Staff Research Chemist
Shell Development Company
P.O. Box 1380
Houston, TX 77251-1380
Phone:  (713) 544-7702
FAX:  (713) 544-8727

William A. Telliard
USEPA Office of Science and
  Technology
Engineering and Analysis Division
Mail Code:  4303
401 M Street, S.W.
Washington, D.C.  20460
Phone:  (202) 260-7120
FAX:  (202) 260-7185
                                     845

-------
Jim Vance
Product Line Manager
Horiba Instruments
17671 Armstrong Avenue
Irvine, CA 92714-5583
Phone: (714) 250-4811
FAX: (714) 250-0924

Robert K. Wyeth
Senior Vice President and Principal
Recra Environmental, Inc.
10 Hazlewood Drive
Amherst,  NY 14228-2298
Phone: (716)691-2600
FAX: (716) 691-3011
                                     846

-------
                                            ATTENDEES
M. ANDERSON -ASHCRAFT
DIRECTOR LABORATORY SERVICE
NAVY PUBLIC WORK CENTER
9742 MARYLAND AVENUE
CODE 900
NORFOLK, VA 23511
AMOS ADAMS
CHEMIST
FLEET & INDUSTRIAL SUPPLY CTR
1968 GILBERT STREET
ATTN:  CODE 700
NORFOLK, VA 23511-3392
GREG D. W. AITKEN
COUNTY COURT REPORTERS
WINCHESTER, VA 22601
ALLISON ALBEE-GUNTER
ENVIRONMENTAL LAB SUPERVISOR
TROPICANA PRODUCTS INC
1001 13TH AVENUE EAST
BRADENTON, FL 34208
LINDA G. ALLEN
UNIT LEADER - METALS
MINNESOTA DEPT OF HEALTH
717 DELAWARE STREET SE
MINNEAPOLIS, MN 55440
HOPE ALMOND
CHEMIST
US GEOLOGICAL SURVEY WRD QWSU
4500 SW 40TH AVENUE
OCALA, FL 34474
JACKIE ANDERSON
LAB SUPERVISOR
DOW CHEMICAL COMPANY
BUILDING  1261
MIDLAND, MI 48667
KATHLEEN ANDERSON
CHIEF UTILITIES CHEMIST
PINELLAS COUNTY SEWER SYSTEM
14850 118TH AVENUE NORTH
LARGO, FL 34615
JEAN ANDREWS
LAB SUPERVISOR
AUGUSTA COUNTY SERVICE AUTH.
P.O. BOX 859
VERONA, VA 24482
STACEY ANELOSKI
INORGANIC SUPERVISOR
PDC LABORATORY INC
4349 SOUTHPORT ROAD
PEORIA, IL 61615
JOHN ANZALONE, III
ANALYTICAL CHEMIST
CTI ENVIRONMENTAL SERVICES
4643 BENSON AVENUE
BALTIMORE, MD 21227
STEPHEN ARPIE
TECHNICAL DIRECTOR
ABSOLUTE STANDARDS
P.O. BOX 5585
HAMDEN, CT 06518
DAVID E. ASHKENAZ
REGIONAL MANAGER
VARIAN SAMPLE PREPARATION PROD
388 FOREST KNOLL DRIVE
PALATINE, IL 60074
FEDERICO ASMAR
LABORATORY MANAGER
HIGH TECHNOLOGY  LABORATORY
P.O. BOX 3964
GUAYNABO, PR 00970
                             847

-------
JOHN P. AUSES
TECHNICAL SPECIALIST-ENVIRON.
ALCOA TECHNICAL CENTER
100 TECHNICAL DRIVE
ALCOA CENTER, PA 15069
LAWRENCE BAGWILL
CHEMIST IV
CITY OF HOUSTON -WW OPERATION
2525 MACARIO GARCIA DRIVE
HOUSTON, TX 77020
STEPHEN BAINTER
ENVIRONMENTAL SCIENTIST
U.S. EPA
REGION 6-6W-PT
1445 ROSS AVENUE, SUITE 1200
DALLAS, TX 75202
K. M. BANSAL
SENIOR STAFF ENGINEER
CONOCO, INC
DU-1008
P.O. BOX 2197
HOUSTON, TX 77252
THOMAS BARBER
MANAGER, ANALYTICAL CHEMISTRY
CIBA
410 SWING ROAD
GREENSBORO, NC 27409
MAGALENE BARBOUR
LAB TECHNICIAN
CAROLINA POWER & LIGHT
ROUTE 1
P.O. BOX 327
NEW HILL, NC 27562
HARRY W. BARRICK
PROGRAM MANAGER
ENVIRONMENTAL TECH GROUP, INC
1400 TAYLOR AVENUE
P.O. BOX 9840
BALTIMORE, MD 21284-9840
WERNER BECKERT
RESEARCH CHEMIST
US EPA, EMSL-LV
P.O. BOX 93478
LAS VEGAS, NV 89193
ROBERT G. BEIMER
LAB MANAGER
S-CUBED
8808 BALBOA AVENUE
SAN DIEGO, CA 92123
MARILYN BENNETT
SR WATER POLL CONTROL TECH
JEFFERSON COUNTY BARTON LAB
1290 OAK GROVE ROAD
BIRMINGHAM, AL 35209
SHIER BERMAN
NATIONAL RESEARCH COUNCIL
MONTREAL ROAD
OTTAWA, ON K1A OR6
CANADA
JOHN BERNARD
LAB MANAGER
ALEXANDRIA SANITATION AUTH.
P.O. BOX 1987
ALEXANDRIA, VA 22313
PAUL M. BERTHOUEX
PROFESSOR
UNIVERSITY OF WISCONSIN
DEPT OF CIVIL & ENVIRON ENG.
1415 JOHNSON DRIVE
MADISON, WI 53706
MARY LEE BISHOPP
PROJECT COORDINATOR/LEADER
EASTMAN KODAK CO
B-34, CQS, KODAK PARK
ROCHESTER, NY 14652-3708
                                       848

-------
CHRIS BLAKE
CHEMIST
NESTLE QUALITY ASSURANCE LAB
6625 EITERMAN ROAD
DUBLIN, OH 43017
                           DIANE A. BLAKE
                           DEPT OF OPHTHALMOLOGY
                           TULANE UNIV SCHOOL OF MEDICINE
                           1430 TULANE AVENUE
                           NEW ORLEANS, LA  70112
BEVERLY E. BLANCHARD
QA/QC CORRDINATOR
JAMES REED & ASSOCIATES
11864 CANON BOULEVARD
NEWPORT NEWS, VA 23606
                           NICOLAS S. BLOOM
                           SENIOR SCIENTIST
                           FRONTIER GEOSCIENCES,  INC
                           414 PONTIUS NORTH  #B
                           SEATTLE, WA 98109
RICK BOGAR
TEAM LEADER CHROMATOGRAPHY
WEYERHAESUER COMPANY
WTC 2F25
TACOMA, WA 98477
                           VENISE T. BOLDUC
                           MGR ENVIRONMENTAL  SERVICES
                           BOWSER-MORNER, INC
                           4518 TAYLORSVILLE  ROAD
                           DAYTON, OH 45424
CHRIS BOLLING
LAB SUPERVISOR
DEGUSSA CORPORATION
P.O. BOX 606
THEODORE, AL 36590
                           DAN BOLT
                           PRODUCT MANAGER
                           CAMBRIDGE ISOTOPE LABORATORIES
                           50 FRONTAGE ROAD
                           ANDOVER, MA 01810
TOM BOOCHER
QA/QC DIRECTOR
BELMONTE PARK ENVIRON. LABS
22 EAST MAIN STREET
DAYTON, OH 45426
DANA BOOTH
CHIEF, LABORATORY OPERATIONS
HENRICO COUNTY WTF
P.O. BOX 27032
RICHMOND, VA 23273
PAUL BOUIS
ANALYTICAL RESEARCH
J.T.  BAKER
222 RED SCHOOL LANE
PHILIPSBURG, NJ 08865
                           JOHN BOURBON
                           CHEMIST-QUALITY ASSURANCE
                           USEPA REGION 2
                           2890 WOODBRIDGE AVENUE
                           BUILDING 10
                           EDISON, NJ 08837
BRIAN K. BOWDEN
REGULATORY COMPLIANCE DIR
HACH COMPANY
P.O. BOX 907
100 DAYTON
AMES, IA 50010
                           JOSEPH BRACK
                           PROJECT MANAGER
                           CT & E ENVIRONMENTAL  LAB  SERV
                           4642 BENSON AVENUE
                           BALTIMORE, MD  21227
                                    849

-------
BETTIE BRADLEY
ENVIRON. PROTECTION SPECIALIST
NAVY PUBLIC WORK CENTER
9742 MARYLAND AVENUE
NORFOLK, VA 23511
PATRICK J. BRADLEY
ENVIRONMENTAL PROTECTION SPEC
103 WESTOVER AVENUE #301
NORFOLK, VA 23507
SANDRA F. BRADSHAW
CHEMIST
ORANGE WATER & SEWER AUTHORITY
P.O. BOX 366
400 JONES FERRY ROAD
CARRBORO, NC 27510
DON W. BROWN
ENVIRONMENTAL QUALITY SUPER.
CITY OF DANVILLE, WPCP
229 STINSON DRIVE
DANVILLE, VA 24540
GLENN D. BROWN
CITY OF EDMONTON-TRANS DEPT
15TH FL, CENTURY PLACE
9803 102A AVENUE
EDMONTON, AB T5J 3A3
CANADA
NANCY A. BROYLES
ADVANCED CHEMIST
UNION CARBIDE CORPORATION
3200 KANAWHA TURNPIKE
SOUTH CHARLESTON, WV 25303
BARBARA S. BRUMBAUGH
ENVIRONMENTAL INSPECTOR SENIOR
VA DEPT OF ENVIRON QUALITY
287 PEMBROKE OFFICE PARK
PEMBROKE II SUITE 310
VIRGINIA BEACH, VA 23462
LESLIE BUCINA
LABORATORY MANAGER
KEMRON ENVIRONMENTAL SERVICES
109 STARLITE PARK
MARIETTA, OH 45750
VIC BURCHFIELD
MGR OF WATER QUALITY MONITOR.
COLUMBUS WATER WORKS
1420 54TH STREET
P.O. BOX 1600
COLUMBUS, GA 31993
LISA BURGESSER
CHEMIST
ENVIRONMENTAL RESOURCE ASSOC.
5540 MARSHALL STREET
ARVADA, CO 80002
E.A. BURNS
VICE PRESIDENT
QUALITY ASSURANCE LABORATORY
6605 NANCY RIDGE DRIVE
SAN DIEGO, CA 92121
CARRIE BUSWELL
DYNCORP VIAR, INC
300 NORTH LEE STREET
ALEXANDRIA, VA 22314
THOMAS BYRON
SENIOR MARKETING SPECIALIST
PERKIN ELMER CORPORATION
50 DANBURY ROAD
WILTON, CT 06897
CRAIG CALDWELL
TECHNICAL DIRECTOR
ROSS ANALYTICAL SERVICES
16433 FOLTZ INDUSTRIAL PARKWAY
STRONGSVILLE, OH 44136
                                        850

-------
JASON CAPE
SALES ENGINEER ICP & ICP-MS
FISONS INSTRUMENTS
412 NORTH MADISON AVENUE
CLEARWATER, FL 34615
JOHN G. CAPITO
LAB SUPERVISOR
SAN DIEGO GAS & ELECTRIC
P.O. BOX 1831
MS SB-409
SAN DIEGO, CA 92112
BETSY A. CARBONE
WET CHEMISTRY SUPERVISOR
COAST-TO-COAST ANALYTICAL SERV
340 COUNTRY ROARD # 5
P.O. BOX 730
WESTBROOK, ME 04092
BILL CASTLE
LABORATORY DIRECTOR
CA FISH & GAME, OSPR
1995 NIMBUS ROAD
RANCHO CORDOVA, CA 95670
MARK CAVA
AUTOMATION PROGRAM MANAGER
ZYMARK CORPORATION
ZYMARK CENTER
HOPKINTON, MA 01748
ROBERTO CELIA
CHIEF CHEMIST
ENVIRONMENTAL SCIENCE CORP
12065 LEBANON ROAD
MT. JULIET, TN 37122
JACK CHAN
CHIEF CHEMIST
METRO TORONTO WORKS
30 DEE AVENUE
WESTON, ON M9N 1S9
SHIH-LING CHANG
DIRECTOR OF ANALYTICAL TESTING
COMMONWEALTH TECHNOLOGY INC
2520 REGENCY ROAD
LEXINGTON, KY 40875
ALLEN CHEESMAN
LAB SUPERVISOR
EC LABS, INC
P.O. BOX 569
FARMERSBURG, IN 47850
ROGER CLAFF
AMERICAN PETROLEUM INSTITUTE
1220 L STREET, NW
WASHINGTON, DC 20005
DAVID CLAMPITT
DIR OF ENVIRONMENT AFFAIRS
UNIFORM & TEXTILE SERVICE
1730 M STREET, NW
#610
WASHINGTON, DC 20036
ELLEN COBB
ANALYTICAL CHEMIST
UNION CAMP CORP
P.O. BOX 178
FRANKLIN, VA 23851
TRACY COLBERT
GROUP LEADER
NUS LABORATORY
5350 CAMBELLS RUN ROAD
PITTSBURGH, PA 15205
BRUCE N. COLBY
PRESIDENT
PACIFIC ANALYTICAL
6349 PASIO DEL LAGO
CARLSBAD, CA 92009
                                    851

-------
JOSEPH COMEAU
TECHNICAL DIRECTOR
INCHCAPE TESTING SERVICES
55 SOUTH PARK DRIVE
COLCHESTER, VT 05446
THOMAS G. CONALLY
LABORATORY MANAGER
CITY OF DURHAM
DEPT WATER RESOURCES
1900 EAST CLUB BOULEVARD
DURHAM, NC 27704
SANDRA CONLEY
ARLINGTON COUNTY WPC DIVISION
3401 SOUTH GLEBE ROAD
ARLINGTON, VA 22202
JERALD CONWAY
ASST SUPERINTENDENT/CHEMIST
MONTGOMERY WATER WORKS
22 BIBB STREET
MONTGOMERY, AL 36102
WILLIAM CORL,  III
SUPERVISOR CHEMIST
NAVY PUBLIC WORK CENTER
9742 MARYLAND AVENUE
NORFOLK, VA 23511
ROBIN COSTAS
USEPA
CENTRAL REGIONAL LAB
ANNAPOLIS, MD
SUSAN COSTIGAN
CHEMIST
QUINCY WASTEWATER TREATMENT
700 WEST LOCK & DAM ROAD
QUINCY, IL 62301
DONNA COX
COUNTY COURT REPORTERS
WINCHESTER, VA 22601
BRADLEY W. CRAIG
ENVIRONMENTAL COMPLIANCE COOR.
ACZ LABORATORIES, INC
30400 DOWNHILL DRIVE
STEAMBOAT SPRINGS, CO 80487
JACK CRISCIO
PRESIDENT
ABSOLUTE STANDARDS INC
P.O. BOX 5585
HAMDEN, CT 06518
GREGORY CUTTER
PROFESSOR
OLD DOMINION UNIVERSITY
DEPT OF OCEANOGRAPHY
NORFOLK, VA 23529
JOSEFINO S. DAKITA
ACTING CHIEF, LAB DIVISION
WASUA-BWT-LAB DIV
5000 OVERLOOK AVENUE, SW
WASHINGTON, DC 20032
DAREN DAMBOTAGIAN
AVERILL ENVIRONMENT LAB
100 NORTHWEST DRIVE
PLAINVILLE, CT 06062
BRAD DANIELS
HAZARDOUS WASTE RESEARCH
ONE EAST HAZELWOOD DRIVE
CHAMPAIGN, IL 61820
                                        852

-------
KATHY DAVIS
CHEMIST
BIRMINGHAM WATER WORKS
3600 1ST AVENUE NORTH
BIRMINGHAM, AL 35222
TERRY DAVIS
CHEMIST
CITY OF WYOMING WWTP
3059 CHICAGO DRIVE SW
GRANDVILLE, MI 49418
DR. THOMAS L. DAWSON
GROUP LEADER
UNION CARBIDE CORPORATION
TECH CENTER 770-144
3200 KANAWHA TURNPIKE
SOUTH CHARLESTON, WV 25303
MICHAEL DELANEY
LABORATORY SUPERINTENDENT
MASS WATER RESOURCES AUTHORIT
100 FIRST AVENUE
BOSTON, MA 02129
IVAN DELOACH
EPA
WASHINGTON, DC
JESSICA DELUNA
CHEMIST
HAMPTON ROADS SANITATION DIST
1432 AIR RAIL AVENUE
VIRGINIA BEACH, VA 23455
DR. GEORGE J. DEMENNA
CHEM-CHE/BUCK
44 STELTON ROAD
#325
PISCATAWAY, NJ 08854
DAVID L. DENTON
LABORATORY TECHNICIAN
ORNL ANALYTICAL SERVICES ORGANIZATION
P.O. BOX 2008
OAK RIDGE, TN 37831
FRANK DIAS
DIRECTOR OF TECHNOLOGY
WMX-EML
2100 CLEANWATER DRIVE
GENEVA, IL 60134
KATHY J. DIEN HILLIG
ECOLOGY ANALYTICAL SERVICES
BASF CORPORATION
1609 BIDDLE AVENUE
WYANDOTTE, MI 48192
DONALEA DINSMORE
AUDIT CHEMIST
WI DNR
P.O. BOX 7921
101 SOUTH WEBSTER
MADISON, WI 53707
KHANH K. DOAN
CHEMIST
US GEOLOGICAL SURVEY WRD QWSU
4500 SW 40TH AVENUE
OCALA, FL 34474
ANN DOEBROWSKI
ECAL COORDINATOR
ODU/APPLIED MARINE RESEARCH
1034 WEST 45TH STREET
NORFOLK, VA 23529
CHARLES N. DYER
QUALITY ASSURANCE/CERT. OFFICER
STATE OF NEW HAMPSHIRE
P.O. BOX 95
6 HAYDEN DRIVE
CONCORD, NH 03302
                                     853

-------
WILLIAM F. EBERHARDT
VICE PRESIDENT, LAB SERVICES
SCIENTIFIC CONTROLS LABS, INC
3158 SOUTH KOLIN AVENUE
CHICAGO, IL 60623
                        PAMELA J. G. ELDRIDGE
                        LAB MANAGER
                        MOORE ENVIRONMENTAL MGMT
                        407 WEST LINCOLN HIGHWAY
                        EXTON, PA 19341
GARY ENGELHART
ENVIRONMENTAL MARKETING MGR
THERMO SEPARATION PRODUCTS
3661 INTERSTATE PARK RD NORTH
RIVIERA BEACH, FL 33404
                        PAUL S. EPSTEIN
                        DIRECTOR LABORATORIES
                        NSF INTERNATIONAL
                        3475 PLYMOUTH ROAD
                        ANN ARBOR, MI 48105
MARIA L. ESPARZA
CHEMIST II
CENTRAL CONTRA COSTA SAN. DIST
5019 IMHOFF PLACE
MARTINEZ, CA 94553
DAVID EVANS
CHEMIST
NAVY PUBLIC WORKS CENTER
9742 MARYLAND AVENUE
NORFOLK, VA 23511
VALERIE EVANS
CLIENT SERVICES MANAGER
TRIANGLE LABORATORIES, INC
801 CAPITOLA DRIVE
RTP, NC 27709
                        STEVE FALATKO
                        STAFF CHEMIST
                        RADIAN CORPORATION
                        2455 HORSE PENN ROAD
                        #250
                        HERNDON, VA 22071-3426
JOHN P. FAULSTICH
OPERATIONS MANAGER
CHEMAX LABORATORIES, INC
P.O. BOX 21122
RENO, NV 89515
                        SUSAN FERREIRA
                        MGR. ENV. MONITORING PROGRAM
                        NARRAGANSETT BAY COMM
                        235 PROMENADE STREET
                        PROVIDENCE, RI 02908
TOM FIELDSEND
DYNCORP VIAR, INC
383 CANTERBURY DRIVE
RAMSEY, NJ 07446
                        CHRISTINE M. FLAJNIK
                        ATOMIC SPECTROSCOPIST
                        VARIAN ASSOCIATES
                        201 HANSEN COURT
                        SUITE 108
                        WOOD DALE, IL 60191
A. RUSSELL FLEGAL
UNIVERSITY OF CA-SANTA CRUZ
WIGS
ENVIRONMENTAL TOXICOLOGY
SANTA CRUZ, CA 95064
                        ANNA L. FLORES
                        SENIOR CHEMIST
                        LTV STEEL COMPANY
                        3001 SICKEY ROAD
                        DOOR 026
                        EAST CHICAGO, IN 46312
                                        854

-------
GARY FOLK
TECHNICAL DIRECTOR
IEA, INC
3000 WESTON PARKWAY
GARY, NC 27513
TOM FOWLER
TECHNICAL DIRECTOR
SEQUOIA ANALYTICAL LAB
680 CHESAPEAKE DRIVE
REDWOOD CITY, CA 94063
PETER FOWLIE
WTC
867 LAKESHORE ROAD
BURLINGTON, ON L7R 4A6
CANADA
ANGIE FRAME
ENVIRONMENTAL SERVICE LABS
P.O. BOX 2855
DECATUR, AL 35602
DREW FRANCIS
DIRECTOR
MICROBAC LABORATORIES
604 MORRIS DRIVE
NEWPORT NEWS, VA 23605
GREG FUNK
LAB TECHNICIAN
CITY OF WOOSTER
1123 OLD COLUMBUS ROAD
WOOSTER, OH 44691
CRIS GAINES
EPA
WASHINGTON, DC
H. JOSEPH GANNON, JR
PRESIDENT
ENVIROCORP, INC
14 COMMERCE STREET
HARRINGTON, DE 19952
CHUCK GARDNER
PRODUCT DEVELOPMENT MANAGER
BACHARACH, INC
625 ALPHA DRIVE
PITTSBURGH, PA 15238
EUGENE GASIEWSKI
LABORATORY MANAGER
PHILADELPHIA WATER DEPARTMENT
BUREAU OF LAB SERVICES
1500 EAST HUNTING PARK AVENUE
PHILADELPHIA, PA 19124
DENISE S. GEIER
ANALYTICAL SERVICES, INC
390 TRABERT AVENUE
ATLANTA, GA 30309
JOHN GEMOULES
LABORATORY MANAGER
AMERICAN BOTTOMS REGIONAL WTF
#1 AMERICAN BOTTOMS ROAD
SAUGET, IL 62201
JENNY GOEGLEIN
DYNCORP VIAR, INC
300 NORTH LEE STREET
ALEXANDRIA, VA 22314
MARGARET GOLDBERG
RESEARCH TRIANGLE INSTITUTE
P.O. BOX 12194
RES. TRIANGLE PARK, NC 27709
                                     855

-------
MARK GRABIGEL
CHEM LAB SUPERVISOR
THOMAS STEEL STRIP CO
DELAWARE AVENUE NW
WARREN, OH 44485
CALVIN L. GREEN, JR
TECHNOLOGY LEADER
PROCTER & GAMBLE
6110 CENTER HILL ROAD
FB2N26
CINCINNATI, OH 45224
DAVID R. GREENE
ASSISTANT LAB DIRECTOR
DUKE POWER COMPANY
13339 HAGERS FERRY ROAD
HUNTERSVILLE, NC 28078
SANDRA K. GREGG
CHIEF INORGANIC UNIT
MICHIGAN DNR ENVIRONMENTAL LAB
3500 MARTIN LUTHER KING  BLVD
LANSING, MI 48906
DAVID W. GRIFFITHS, PH.D.
PRESIDENT
OLVER INCORPORATED
1116 SOUTH MAIN STREET
BLACKSBURG, VA 24060
ANGIE M. GROOMS
LAB SUPERVISOR
DUKE POWER COMPANY
13339 HAGERS FERRY ROAD
HUNTERSVILLE, NC 28078
ZOE A. GROSSER
SENIOR MARKETING SPECIALIST
THE PERKIN-ELMER CORP
50 DANBURY ROAD
MS-259
WILTON, CT 06897
JOHN GUTE
SUPERVISOR
LA SANITARY DISTRICT
1965 WORKMAN MILL ROAD
WHITTAKER, CA 90611
YOLANDA GUTIERREZ
CHEMICAL ANALYST II
SAN ANTONIO WATER SYSTEM
517 MISSION ROAD
SAN ANTONIO, TX 78210
DAVID W. HADDAWAY
SENIOR CHEMIST
CITY OF PORTSMOUTH
LAKE KILBY WTP
105 MAURY PLACE
SUFFOLK, VA 23434
DONALD J. HAERTEL
LABORATORY MANAGER
CENTER FOR APPLIED ENGINEERING
10301 9TH STREET NORTH
ST. PETERSBURG, FL 33716
MICHELLE HAIN
LABORATORY MANAGER
M J REIDER ASSOCIATES,  INC
107 ANGELICO STREET
READING, PA 19611
JEFF HALVORSON
CHEMIST
BURDICK & JACKSON
1953 SOUTH HARVEY STREET
MUSKEGON, MI 49442
SHIRLEY HAMMOND
SENIOR CHEMIST
ARCO CHEMICAL COMPANY
P.O. BOX 30
CHANNEL VIEW, TX 77530
                                        856

-------
JAMES HANLON
DEPUTY DIRECTOR
USEPA SCIENCE & TECHNOLOGY
401 M STREET, SW
MAIL CODE:  4301
WASHINGTON, DC 20460
DAN L. HARP
SENIOR CHEMIST
HACH COMPANY
P.O. BOX 389
LOVELAND, CO 80539
PAUL HARVATH
TECHNICAL ENGINEER
GM CORP
902 EAST HAMILTON
BUILDING 85, M-S 85-07
FLINT, MI 48550-2085
DAVID HASKE
ROCHE ANALYTICAL LABORATORY
8040 VILLA PARK DRIVE
RICHMOND, VA 23228
CHUCK RASKINS
SALES DEVELOPMENT MANAGER
3M
3M CENTER
BUILDING 220-9E-10
ST. PAUL, MN 55144
ELAINE T. HASTY
SR. APPLICATIONS SPECIALISTS
CEM CORPORATION
P.O. BOX 200
MATTHEWS, NC 28106
R. E. HAWLEY
MARKET DEVELOPMENT MANAGER
VARIAN SAMPLE PREPARATION PROD
24201 FRAMPTON AVENUE
HARBOR CITY, CA 90710
GAIL HAYES
CHEMIST
BIONETICS
445 FIRST STREET
ARNOLD AFB, TN 37389-3400
LISA HEAGLE
PRODUCT SPECIALIST
HACH COMPANY
P.O. BOX 907
AMES, IA 50010
NATHAN HELDENBRAND
SENIOR CHEMIST
KOCH REFINING
P.O. BOX 64596
ST. PAUL, MN 55164
JOHN HENDERSON
SUPERVISOR-LAB PRETREATMENT
CITY OF CHATTANOOGA
ASS MOCCASIN BEND ROAD
CHATTANOOGA, TN 37405
MICHAEL HENIKEN
WASTEWATER CHEMIST
CITY OF COLUMBUS
SURVEILLANCE LAB
900 DUBLIN ROAD
COLUMBUS, OH 43215
HERB HERNANDEZ
R & D ENGINEERING MANAGER
ANTEK INSTRUMENTS, INC
300 BAMMEL WESTFIELD ROAD
HOUSTON, TX 77090
EDWARD HICKEY
SANITARY ENGINEER
RI DEPT ENVIRONMENTAL MGMT
DIVISION OF WATER RESOURCES
291 PROMENADE STREET
PROVIDENCE, RI 02908
                                     857

-------
ROCHELLE HICKMOTT
ENVIRONMENTAL CUSTOMER SERVICE
CAMBRIDGE ISOTOPE LABORATORIES
50 FRONTAGE ROAD
ANDOVER, MA 01810
GREG HILL
CHEMIST
HAMPTON ROADS SANITATION DIST
1432 AIR RAIL AVENUE
VIRGINIA BEACH, VA 23455
JUDY HINSHAW SMITH
LABORATORY TECHNICIAN
CITY OF ASHEBORO
146 NORTH CHURCH STREET
ASHEBORO, NC 27203
DR. STEVEN W. HINTON
RESEARCH ENGINEER
NCASI/TUFTS UNIVERSITY
COLLEGE AVENUE
ANDERSON HALL
MEDFORD, MA 02155
DENNIS D. HINTZ
CHEMIST
DAKOTA GASIFICATION CO
P.O. BOX 1149
BEULAH, ND 58523
RICHARD L. HOAG
PHY SCIENCE TECH
FLEET & INDUSTRIAL SUPPLY CTR
1968 GILBERT STREET
ATTN:  CODE 700
NORFOLK, VA 23511-3392
RENEE M. HOATSON
DEVELOPMENT CHEMIST V
EG & G ROCKY FLATS, INC
GEN LAB 881
P.O. BOX 464
GOLDEN, CO 80402
JILL C. HOGLUND
PRETREATMENT COORDINATOR
TEXAS NATURAL RESOURCE
CONSERVATION COMMISSION
P.O. BOX 13087
AUSTIN, TX 78711
SALLY HOH
CHEMIST
SPRINGETTSBURY TOWNSHIP WWTS
3501 NORTH SHERMAN STREET
YORK, PA 17402
PAMELA HOLBROOK
ASSOCIATE ENVIRON. AFFAIRS
TOYOTA MOTOR MANUFACTURING
1001 CHERRY BLOSSOM WAY
GEORGETOWN, KY 40324-9564
KEVIN HOLBROOKS
CHEMIST
CITY OF JACKSONVILLE
2221 BUCKMANN STREET
JACKSONVILLE, FL 32206
DAWN HOLDREN
SENIOR SCIENTIST
NASA
BUILDING 160
WALLOPS ISLAND, VA 23337
BEN HONAKER
EPA
WASHINGTON, DC
BRUCE E. HONTS
BOILER QA TECHNICIAN
PHILIP MORRIS, PARK 500
4100 BERMUDA HUNDRED ROAD
CHESTER, VA 23831
                                        858

-------
STEPHEN HOPKO
CHEMICAL ENGINEER
NAVAL SURFACE WARFARE CENTER
BUILDING 619, 2ND FLOOR
CODE 6223
PHILADELPHIA, PA 19112-5083
ALBERT HORNG
LABORATORY SUPERVISOR
HTMA
3200 ADVANCED LANE
COLMAR, PA 18915
LYMAN H. HOWE, III
RESEARCH CHEMIST
TVA
CORPORATE CENTER 1A
1101 MARKET STREET
CHATTANOOGA, TN 37402
GEORGE D. HOWELL
SUPVY CHEMIST
FLEET & INDUSTRIAL SUPPLY CTR
1968 GILBERT STREET
ATTN:  CODE 700
NORFOLK, VA 23511-3392
JOHN HSUEH
CHEMIST
CITY OF PHOENIX
2303 WEST DURANGO
PHOENIX, AZ 85009
SAMUEL A. HUBER
GROUP LEADER WATER QUALITY
LANCASTER LABORATORIES, INC
2425 NEW HOLLAND PIKE
LANCASTER, PA 17601
MIKE HUGHES
CHEMIST
EAST KENTUCKY POWER CO-OP
4758 WEST LEXINGTON ROAD
WINCHESTER, KY 40392
WILLIAM S. HUNLEY
ENVIRONMENTAL SCIENTIST
HAMPTON ROADS SANITATION DIST
1426 AIR RAIL AVENUE
VA BEACH, VA 23455
CARLTON D. HUNT
PROGRAM MANAGER
BATTELLE OCEAN SCIENCES
397 WASHINGTON STREET
DUXBURY, MA 02332
M.GHIALIOTTY IRIZARRY
ACTING CHIEF LABORATORY DEPT
PUERTO RICO AQUADUCT & SEWER
P.O. BOX 7066
BO OBRERO
SANTURCE, PR 00916
WILLIC ISOM
ENVIRONMENTAL CHEMIST
DYN MCDERMOTT
P.O. BOX 2276
FREEPORT, TX 77541
DENISE JEROME
MANAGER
COMMONWEALTH TECHNOLOGY INC
2520 REGENCY ROAD
LEXINGTON, KY 40503
ELIZABETH JESTER FELLOWS
CHIEF MONITORING BRANCH
USEPA-WETLANDS, OCEANS
ASSESSMENT & WATERSHED PROTECT
401 M STREET,SW MAIL CODE 4503
WASHINGTON, DC 20460
GEORGE JETT
EPA
WASHINGTON, DC
                                    859

-------
EARL H. JOHNSON
ENVIRONMENTAL SPECIALIST
DOW CHEMICAL COMPANY
734 BUILDING
MIDLAND, MI 48667
MICHAEL E. JOHNSON
ENVIRONMENTAL ENGINEER
DUPONT
P.O. BOX 347
LAPORTE, TX 77572
ROBERT JOHNSON
CEO
HORIZON TECHNOLOGY
8 COMMERCE DRIVE
ATKINSON, NH 03811
PHANIBHUSHAN B. JOSHIPURA
CHEMIST
FLEET & INDUSTRIAL SUPPLY  CTR
1968 GILBERT STREET
ATTN:  CODE 700
NORFOLK, VA 23511-3392
LARRY KAEDING
LABORATORY SERVICES SUPERVISOR
CITY OF CEDAR RAPIDS WPCD
7525 BETRAM ROAD, SE
CEDAR RAPIDS, IA 52403
HENRY KAHN
CHIEF ECON & STATS ANALYSIS
USEPA
OFFICE OF SCIENCE & TECHNOLOGY
401 M STREET, SW MAILCODE 4303
WASHINGTON, DC 20460
CHERYL KAMERA
SUPERVISOR, TRACE METALS LABS
MUNIC. OF METRO SEATTLE
ENVIRONMENTAL LAB
322 WEST EWING
SEATTLE, WA 98119
KABEW KASSEW
WASTE WATER LAB MANAGER
CITY OF LA, GENERAL  SERVICES
2319 DORRIS PLACE
LOS ANGELES, CA 90031
NANCY KELLER
LAB SUPERVISOR
CITY OF PUEBLO WTP
1300 SOUTH QUEENS AVENUE
PUEBLO, CO 81001
ELIZABETH KENNELLEY
P.O. BOX 1703
GAINESVILLE, FL 32602
DEBORAH L. KENNISON
TESTING SPECIALIST
EXXON CO, USA
P.O. BOX 551
BATON ROUGE, LA 70821
DR. MARY KHALIL
INSTRUMENTAL CHEMIST 3
METRO WATER RECLAMATION DIST
550 SOUTH MEACHAM
SCHAUMBURG, IL 60193
DR. MOHAN KHARE
PRESIDENT / CEO
ENVIROSYSTEMS,  INC
9200 RUMSEY ROAD
SUITE B102
COLUMBIA, MD 21405-1934
DAVID KIMBROUGH
PUBLIC HEALTH CHEMIST
CA DEPT TOXICS SUBSTANCE  CTRL
SOUTHERN CAL LABORATORY
LOS ANGELES, CA 90026-5698
                                        860

-------
JIM KING
DYNCORP VIAR, INC
300 NORTH LEE STREET
SUITE 500
ALEXANDRIA, VA 22314
CAROL KLEEMEIER
QA/QC COORDINATOR
JENNINGS LABORATORIES
1118 CYPRESS AVENUE
VIRGINIA BEACH, VA 23451
ROBIN S. KNOX
WATER QUALITY DIRECTOR
GERAGHTY & MILLER
2900 WEST FOLK DRIVE
BATON ROUGE, LA 70827
JAN KOURMADAS
OGDEN ENVIRONMENTAL
3211 JERMANTOWN ROAD
FAIRFAX, VA 22030
KELLY KRAFT
PRODUCT MANAGER
LABCONCO CORPORATION
8811 PROSPECT AVENUE
KANSAS CITY, MO 64132
JOE KUREK
CHIEF CHEMIST
HERITAGE ENVIRONMENTAL SERVICES
7901 WEST MORRIS STREET
INDIANAPOLIS, IN 46231
WAYNE LACROIX
ENVIRONMENTAL CHEMIST
BP CHEMICAL
P.O. BOX 659
HWY 185
PORT LAVACA, TX 77979
JERRY LANDRY
LAB MANAGER
SHEERY LABORATORIES
316 MECCA
LAFAYETTE, LA 70508
LYNN LANE
ENVIRONMENTAL COORDINATOR
ARCO PRODUCTS
1801 EAST SEPULVEDA BOULEVARD
CARSON, CA 90749
SALLY B. LANGE
SUPERVISOR
CITY OF PONTIAC WWTP
1631 STIRLING
PONTIAC, MI 48340
JOAN W. LAROCK
PRESIDENT
LAROCK ASSOCIATES, INC
801 PENNSYLVANIA AVENUE, NW
SUITE 1213
WASHINGTON, DC 20004
MICHAEL I. LESSER
SENIOR CHEMIST
NATURAL GAS PIPELINE COMPANY
ENGINEERING ANALYTICAL LAB
P.O. BOX 3399
JOLIET, IL 60434
MARK L. LESTER
ENVIRONMENTAL SPECIALIST
ALABAMA POWER COMPANY
P.O. BOX 2641
GSC 8
BIRMINGHAM, AL 35291
NATHAN LEVY
PRESIDENT
A & I TESTING
1717 SEABORO DRIVE
BATON ROUGE, LA  70810
                                     861

-------
DION LEWIS
PRINCIPAL RESEARCH SCIENTIST
BATTELLE OCEAN SCIENCES
397 WASHINGTON STREET
DUXBURY, MA 02332
MICHAEL LEWIS
PRETREATMENT COORDINATOR
HUNTINGTON SANITARY BOARD
P.O. BOX 1659
CHARLESTON, WV 25717
YI-HUA LIN
LAB MANAGER
ROY F. WESTON INC
42 DELTA COURT
NORTH BRUNSWICK, NJ 08902
MARK LINER
EPA
WASHINGTON, DC
ROGER LITOW
DYNCORP VIAR, INC
300 NORTH LEE STREET
ALEXANDRIA, VA 22314
BRUCE R. LOCKE
ASSOCIATE PROFESSOR
FAMU/FSU COLLEGE OF ENGINEERING
DEPT OF CHEMICAL ENGINEERING
TALLAHASSEE, FL 32316-2175
JEFFREY M. LOEWE
QUALITY ASSURANCE COORDINATOR
DAILY ANALYTICAL LABORATORIES
1621 WEST CANDLETREE DRIVE
PEORIA, IL 61614
BRUCE E. LOGAN
ASSOCIATE PROFESSOR
UNIVERSITY OF ARIZONA
306 CIVIL ENGINEERING BLDG
DEPT OF CHEM & ENV ENG
TUCSON, AZ 85721
RAYMOND J. LOVETT
PROGRAM MANAGER
NATION. RES. CTR COAL & ENERGY
EVANSDALE DRIVE
P.O. BOX 6064
MORGANTOWN, WV 26506
NORMAN LOW
PROJECT MANAGER
HEWLETT-PACKARD
1601 CALIFORNIA AVENUE
PALO ALTO, CA 94304
TED W. LUFRIU
PRESIDENT, LAB DIRECTOR
CHESAPEAKE ANALYTICAL LAB, INC
106 A ROCKEFELLER COURT
WALDORF, MD 20602
THEODORE B. LYNN, PH.D.
DIRECTOR OF RESEARCH
DEXSIL CORPORATION
ONE HAMDEN PARK DRIVE
HAMDEN, CT 06517
RAYMOND F. MADDALONE
PROJECT MANAGER
TRW
ONE SPACE PARK 01/2030
REDONDO BEACH, CA 90278
DEBBIE C. MAGIN
REG. LAB DIRECTOR
GBRA
P.O. BOX 271
SEGUIN, TX 78155
                                        862

-------
REMY MAGTOTO
CHEMIST
ALEXANDRIA SANITATION AUTH.
P.O. BOX 1987
ALEXANDRIA, VA 22313
JIM MAGUIRE
MANAGER ENVIRONMENTAL SERVICES
ROCHE ANALYTICAL LABORATORY
8040 VILLA PARK DRIVE
RICHMOND, VA 23228
BRAD MAHANES
ENVIRONMENTAL SCIENTIST
USEPA PERMITS DIVISION
401 M STREET SW
WASHINGTON, DC 20460
M. JASON MANNING
CHEMIST
GREENVILLE UTIL. COMMISSION
P.O. BOX 1847
GREENVILLE, NC 27835
SULEIMAN MANSARAY
LAB TECHNICIAN
CAROLINA POWER & LIGHT
ROUTE 1
P.O. BOX 327
NEW HILL, NC 27562
CRAIG G. MARKELL
SUPERVISOR
3M-I&C SECTOR LAB/NEW PRODUCT
209-1W-24
ST. PAUL, MN 55144
PAUL MATTHEWS
ENVIRONMENTAL SPECIALIST
ADMIRAL ENVIRONMENTAL SERVICES
2025 SOUTH ARLINGTON HTS ROAD
ARLINGTON HEIGHTS, IL 60005
SANDRA G. MAYS
INSTRUMENT SPECIALIST
ODU/APPLIED MARINE RESEARCH
1034 WEST 45TH STREET
NORFOLK, VA 23529
CRAIG MCCAFFREY
MARKETING MANAGER
OHMICRON CORPORATION
375 PHEASANT RUN
NEWTOWN, PA 18940
HARRY B. MCCARTY
SENIOR SCIENTIST
SAIC
HAZARDOUS WASTE METHODS SUPP.
7600-A LEESBURG PIKE
FALLS CHURCH, VA 22043
KARL MCCREA
ENVIRONMENTAL MANAGER
AMERICAN ASSAY LABORATORIES
1500 GLENDALE AVENUE
SPARKS, NV 89431
BARRY MCKENZIE
SENIOR RESEARCH CHEMIST
MALLINCKRODT SPECIALTY CHEM CO
P.O. BOX 800
PARIS, KY 40362
LISA MCMILLIAN
ANALYTICAL CHEMIST
DEPT OF ENVIRONMENTAL QUALITY
629 EAST MAIN STREET
RICHMOND, VA 23219
ED MESSER
USEPA
CENTRAL REGIONAL LABORATORY
ANNAPOLIS, MD
                                    863

-------
ALAN MESSING
DYNCORP VIAR, INC
300 NORTH LEE STREET
ALEXANDRIA, VA 22314
DON MILESTONE
SENIOR TECH SPECIALIST
S C JOHNSON & SON INC
1525 HOWE STREET
RACINE, WI 53403
TIMOTHY MILLER
ASSIST. CHIEF OFFICER OF WATER
US GEOLOGICAL SURVEY
12201 SUNRISE VALLEY DRIVE
MS 412
RESTON, VA 22092
RAYMOND MINDRUP
ENVIRONMENTAL MARKETING MGR
SUPELCO, INC
SUPELCO PARK
BELLEFONTE, PA 16823
DONALD K. MITCHELL
QA/QC MANAGER
ESCAMBIA COUNTY UTILITIES AUTH
401 WEST GOVERNMENT STREET
PENSACOLA, FL 32501
JEFFREY K. MITCHELL
MARKETING DEVELOPMENT MANAGER
3M
3M CENTER
BUILDING 220-9E-10
ST. PAUL, MN 55144
KIM MITCHELL
LAB TECH
HRWTF
P.O. BOX 969
231 HUMMELL ROSS ROAD
HOPEWELL, VA 23860
D. UNDERWOOD MITCHELL-WEST
LAB TECHNICIAN
HRWTF
231 HUMMEL ROSS ROAD
P.O. BOX 969
HOPEWELL, VA 23860
MARLENE O. MOORE
PRESIDENT
ADVANCED SYSTEMS INC
P.O. BOX 8090
NEWARK, DE 19714-8090
DAVID MORELEN
CHEMIST
VA POWER
P.O. BOX 5711
YORKTOWN, VA 23690
JOSEPH MORRIS
CHEMIST
NAVY PUBLIC WORK CENTER
9742 MARYLAND AVENUE
NORFOLK, VA 23511
KEN MOURA
DYNCORP VIAR, INC
300 NORTH LEE STREET
ALEXANDRIA, VA 22314
DAVID MURAWSKI
TECHNICAL MANAGER
CHURCH & DWIGHT
469 NORTH HARRISON STREET
PRINCETON, NJ 08540
DENNIS MURPHY
ENVIRONMENTAL Q.C.
THE DOE RUN COMPANY
P.O. BOX 500
VIBURNUM, MO 65566
                                       864

-------
DEBORAH NELSON
CHEMIST
HAMPTON ROADS SANITATION DIST
1432 AIR RAIL AVENUE
VIRGINIA BEACH, VA 23455
JOHN NELSON
KLOHN-CRIPPEN CONSULTANTS LTD
10200 SHELLBRIDGE WAY
RICHMOND, BC V6X 2W7
CANADA
GUENTER NIESSEN
PRODUCTION MANAGER
EM SCIENCE
480 DEMOCRAT ROAD
GIBBSTOWN, NJ 08027
WILLIAM NIVENS
WATER ENVIRONMENT FEDERATION
ALEXANDRIA, VA 22314
JAMES D. O'CONNER
LABORATORY DIRECTOR
INDUSTRIAL WATER SERVICES
P.O. BOX 43369
JACKSONVILLE, FL 32203
SUSAN O'NEILL
WATER ENVIRONMENT FEDERATION
ALEXANDRIA, VA 22314
DENISE OMOREGIE
LEAD CHEMIST
OLIN CORP
LAKE CITY ARMY AMMO PLANT
INDEPENDENCE, MO 64051
TIM ORGAIN
OPERATOR
HRWTF
P.O. BOX 969
HOPEWELL, VA 23860
C. MINERVA ORTIZ
ACTING LABORATORY DIRECTOR
PUERTO RICO AQUADUCT & SEWER
P.O. BOX 7066
BO OBRERO
SANTURCE, PR 00916
VERITI P. OVERBY
CHEMIST
FLEET & INDUSTRIAL SUPPLY  CTR
1968 GILBERT STREET
ATTN:  CODE 700
NORFOLK, VA 23511-3392
JAC L. PADGETT
VICE PRESIDENT
EC LABS, INC
US HWY 41 SOUTH
P.O. BOX 569
FARMERSBURG, IN 47850
BHAL V. PARANJAPE
CHEMIST
CITY OF SOLON POLLUTION CTRL
6315 SOM CENTER ROAD
SOLON, OH 44129
JERRY L. PARR
DIRECTOR OF TECHNOLOGY
ENSECO-RMAL
4955 YARROW STREET
ARVADA, CO 80002
JAY PATEL
LAB MANAGER
ROY F. WESTON INC/REAC PROJECT
2890 WOODBRIDGE AVE
BUILDING 209
EDISON, NJ 08837
                                     865

-------
KEN PEIST
ENVIRONMENTAL SCIENTIST
USEPA
2890 WOODBRIDGE AVENUE
BUILDING 209
EDISON, NJ 08837
GLENN PERRONE
GROUP LEADER
MCGINNERS LABORATORIES
4168 WESTROADS DRIVE
WEST PALM BEACH, FL 33407
DAVID PETERSON
LAB SUPERVISOR - METALS
CITY OF JACKSONVILLE
2221 BUCKMAN STREET
JACKSONVILLE,  FL 32206
WILLIAM F. PFEIFFER
PRESIDENT, DIR. OF OPERATIONS
GINOSKO LABORATORIES, INC
17875 CHEROKEE STREET
P.O. BOX 8
HARPSTER, OH 43323
GREGORY T. PHILIPS
CHEMIST
DC/DPW/BWT
5000 OVERLOOK AVENUE, SW
WASHINGTON, DC 20032
JAMES A. PLOSCYCA
QUALITY ASSURANCE DIRECTOR
IEA INC
3000 WESTON PARKWAY
GARY, NC 27513
LEE POLITE
RESEARCH CHEMIST
AMOCO CORPORATION
P.O. BOX 3011
MS F-7
NAPERVILLE, IL 60566
DONNA POPP
CHIEF ENVIRON. LAB SERVICES
WESTCHESTER DEPT-LAB/RESEARCH
2 DANA ROAD
VALHALLA, NY 10595
BILLY B. POTTER
RESEARCH CHEMIST
US EPA, EMSL-CINCINNATI
26 W. MARTIN LUTHER KING DRIVE
CINCINNATI, OH 45268
RICHARD V. PRIDDY
DIRECTOR BUSINESS DEVELOPMENT
ENVIRONMENTAL TECH GROUP,  INC
1400 TAYLOR AVENUE
P.O. BOX 9840
BALTIMORE, MD 21284-9840
WILLIAM R. PROKOPY
APPLICATIONS CHEMIST
LACHAT INSTRUMENTS
6645 WEST MILL ROAD
MILWAUKEE, WI 53218
GREGORY E. PRONGER
DIRECTOR, TECHNICAL SUPPORT
NET, INC
850 WEST BARTLETT ROAD
BARTLETT, IL 60103
CATHY PULLIZZI
PRE-TREATMENT COORDINATOR
JMEUC
500 SOUTH 1ST STREET
ELIZABETH, NJ 07202
NATALIE QUITS
WATER ENVIRONMENT
RESEARCH FOUNDATION
ALEXANDRIA, VA 22314
                                        866

-------
FLOYD W. QUILLEN, JR
SENIOR DEVELOPMENT CHEMIST
EASTMAN CHEMICAL COMPANY
P.O. BOX 511
EASTMAN ROAD
KINGSPORT, TN 37662
DR. GILBERTO QUINTERO
MANAGER
TVA
1101 MARKET STREET
CHATTANOOGA, TN 37402
DARLENE RAIFORD
CHEMIST
HAMPTON ROADS SANITATION DIST
1432 AIR RAIL AVENUE
VIRGINIA BEACH, VA 23455
MARGARET RAISGLID
GRAD STUDENT
UNIVERSITY OF ARIZONA
OLD CHEMISTRY BUILDING
TUCSON, AZ 85721
DAVE RAJESH
CORPORATE TECHNICAL DIRECTOR
LAB RESOURCES
100 HOLLISTER ROAD
TETERBORO, NJ 07608
DULCIE M. RANTA
TEAM LEADER, CONVENTIONAL LAB
WEYERHAEUSER COMPANY
WTC 2F25
TACOMA, WA 98477
KENNETH T. RAUM
ENVIRONMENTAL INSPECTOR SUPERVISOR
DEPT OF ENVIRONMENTAL QUALITY
287 PEMBROKE OFFICE PARK
PEMBROKE II SUITE 310
VIRGINIA BEACH, VA 23462
LISA M. REED
CHEMIST
CHEMICAL SCIENCE LABORATORY
KELLY AFB
508 SHOP LANE   ROOM 2
SAN ANTONIO, TX 78241
MIKE REEKS
TECHNICAL DIRECTOR
DOBER CHEMICAL
14461 SOUTH WAVERLY AVENUE
MIDLOTHIAN, IL 60452
HAROLD A. RHODES
CONSULTANT
RLT CONSULTANTS
585 MUNSTERMAN PLACE
BEAUMONT, TX 77707
DR. ILEANA A. L. RHODES
STAFF RESEARCH CHEMIST
SHELL DEVELOPMENT COMPANY
P.O. BOX 1380
HOUSTON, TX 77251-1380
JAMES K. RICE
CONSULTING ENGINEER
JAMES K. RICE CHARTERED
17415 BATCHELLORS FOREST ROAD
OLNEY, MD 20832
H. WAYNE RICHARDSON
VP RESEARCH & DEVELOPMENT
PHIBRO - TECH, INC
P.O. BOX 1979
SUMTER, SC 29150
LYNN RIDDICK
DYNCORP VIAR, INC
300 NORTH LEE STREET
ALEXANDRIA, VA  22314
MICHELE L. ROBERTS
ENVIRONMENTAL ANALYST
NEW CASTLE COUNTY
100 NEW CHURCHMANS ROAD
NEW CASTLE, DE 19720
DAVID J. ROBERTSON
QUALITY CONTROL ANALYST
HOECHST CELANESE CHEMICAL CO
9502 BAYPORT ROAD
PASADENA, TX 77505
KERI ROBERTSON
LABORATORY SUPERVISOR
F & R
P.O. BOX 27524
RICHMOND, VA 23261
PATTY ROLLINS
CHEMIST
HAMPTON ROADS SANITATION DIST
1432 AIR RAIL AVENUE
VIRGINIA BEACH, VA 23455
JACKIE ROMNEY
EPA
WASHINGTON, DC
JAMES R. ROTH
LABORATORY MANAGER
ALPHA ANALYTICAL LABS
8 WALKUP DRIVE
WESTBORO, MA 01581
DR. ANNA RULE
CHIEF LABORATORY DIVISION
HAMPTON ROADS SANITATION DIST
1432 AIR RAIL AVENUE
VIRGINIA BEACH, VA 23455
ROBERT RUNYON
CHIEF MONITORING MGMT  BRANCH
USEPA, ESD REGION II
2890 WOODBRIDGE AVENUE
EDISON, NJ 08837
DALE RUSHNECK
INTERFACE, INC
P.O. BOX 297
FT. COLLINS, CO 80522
MELISSA RUSSELL
QA/QC OFFICER
HYDROLOGIC
1491 TWILIGHT TR
FRANKFORT, KY 40601
MICHAEL W. SAMPLES
VICE PRESIDENT & TECHNICAL DIR
STANDARD LABORATORIES, INC
147 11TH AVENUE
SUITE 100
SOUTH CHARLESTON, WV 25303
BERNARD SAWYER
COORDINATOR OF TECH SERVICES
METRO WATER RECLAMATION DIST
6001 WEST 39TH STREET
CICERO, IL 60650
AISLING SCALLAN
MANAGER FIELD PRODUCTS
ENSYS INC
P.O. BOX 14063
RTP, NC 27709
ROBERT SCHAFFER
DIRECTOR ENVIRONMENTAL AFFAIRS
COYNE TEXTILE SERVICES
140 CORTLAND AVENUE
SYRACUSE, NY 13221
MARCIA A. SCHMELZER
LAB MANAGER
CITY OF BOISE PUBLIC WORKS
11818 JOPLIN ROAD
BOISE, ID 83704
JEFF SCHMIDT
COUNTY COURT REPORTERS, INC
124 EAST CORK STREET
WINCHESTER, VA 22601
GEORGE A. SCHMITT
BUSINESS DEVELOPMENT MANAGER
3M
3M CENTER
220-9E-10
ST. PAUL, MN 55144
RAY F. SCHMITT
ENVIRONMENTAL ENGINEER
DEPT OF THE NAVY
CARDEROCK DIVISION
NAVAL SURFACE WARFARE CENTER
BETHESDA, MD 20084-5000
TERRY SCHUCK
CHEMIST
LANCASTER LABORATORIES, INC
2425 NEW HOLLAND PIKE
LANCASTER, PA 17601
MICHAEL SEPANIAK
PROFESSOR
UNIVERSITY OF TENNESSEE
DEPT OF CHEMISTRY
KNOXVILLE, TN 37996
J. BRIAN SERBIN
ANALYTICAL CHEMIST
PHILADELPHIA WATER DEPARTMENT
BUREAU OF LAB SERVICES
1500 EAST HUNTING PARK AVENUE
PHILADELPHIA, PA 19124
STEPHEN SHANDOR
INORGANICS SUPERVISOR
PHILADELPHIA WATER DEPARTMENT
BUREAU OF LAB SERVICES
1500 EAST HUNTING PARK AVENUE
PHILADELPHIA, PA 19124
DR. PHIL SHANK
DIRECTOR OF RESEARCH & DEVELOPMENT
MALLINCKRODT SPECIALTY CHEM CO
P.O. BOX 800
PARIS, KY 40362
ABHA SHARMA
SENIOR ENGINEER
CHESTERFIELD COUNTY
P.O. BOX 40
CHESTERFIELD, VA 23832
YU-MIN SHI
PESTICIDE SUPERVISOR
ANALYTICAL TECHNOLOGIES, INC
9830 SOUTH 51ST STREET
SUITE B-113
PHOENIX, AZ 85044
SHELLY SHOOK
AVERILL ENVIRONMENT LAB
100 NORTHWEST DRIVE
PLAINVILLE, CT  06062
SUZANNE M. SHUTTY
WATER ENVIRONMENT FEDERATION
ALEXANDRIA, VA 22314
JERRY L. SIDES
SENIOR RESEARCH ASSOCIATE
TEXACO EPTD
P.O. BOX 425
BELLAIRE, TX 77401
JILL SIEGRIST
CHEMIST
LAW ENVIRONMENTAL
114 TOWNPARK DRIVE
KENNESAW, GA 30144
CINDY SIMBANIN
DYNCORP VIAR, INC
300 NORTH LEE STREET
ALEXANDRIA, VA 22314
ANN SIMS
LAB MANAGER
WESTERN CAROLINA REG SEW AUTH
P.O. BOX 5242
GREENVILLE, SC 29606
THEODORE R. SKINGEL
QUALITY ASSURANCE ADMIN
TALEM, INC
306 WEST BROADWAY AVENUE
FORT WORTH, TX 76104
RICK SLAGLE
PROGRAM MANAGER
MARTIN MARIETTA
Y-12 PLANT M.S. 8081
BUILDING 9769
OAK RIDGE, TN 37831
KURT R. SLENTZ
LABORATORY MANAGER
ENERGY LABORATORIES,  INC
P.O. BOX 2470
RAPID CITY, SD 57709
SHARON SLOAT
PRODUCT GROUP MANAGER
HACH COMPANY
P.O. BOX 907
AMES, IA 50010
JODY SMILEY
LAB DIRECTOR
ENVIROTECH MID-ATLANTIC
1861 PRATT DRIVE
BLACKSBURG, VA 24060
CHARLES D. SMITH
OPTICAL EMISSION SYSTEMS
THERMO JARRELL ASH
8E FORGE PARKWAY
FRANKLIN, MA 02038
GORDON T. SMITH
STAFF ENVIRONMENTAL CHEMIST
RHONE-POULENC AG CO
P.O. BOX 2831
CHARLESTON, WV 25330
JAMES A. SMITH
TECHNICAL DIRECTOR
1258 GREENBRIER STREET
CHARLESTON, WV 25311
JIM SMITH
PRESIDENT/CHEMIST
TRILLIUM, INC
7A GRACES DRIVE
COATESVILLE, PA 19320
KEVIN SMITH
GREENVILLE UTIL. COMMISSION
P.O. BOX 1847
GREENVILLE, NC 27835
DR. ROY-KEITH SMITH
ANALYTICAL METHODS MANAGER
ANALYTICAL SERVICES, INC
390 TRABERT AVENUE
ATLANTA, GA 30309
TERRY SMITH
ORGANIC SECTION MANAGER
USPCI ANALYTIC SERVICES
4322 SOUTH 49TH WEST STREET
TULSA, OK 74107
TOM SMITH
DYNCORP VIAR, INC
300 NORTH LEE STREET
ALEXANDRIA, VA 22314
JUDITH W. SNIDER
LABORATORY MANAGER
STANDARD LABORATORIES, INC
2315 GLENVIEW DRIVE
EVANSVILLE, IN 47720
ROBERT F. STALZER
PRESIDENT
LAB/MAN CONSULTING
P.O. BOX 257
MONTCHANIN, DE 19710
GEORGE H. STANKO
SR STAFF RESEARCH CHEMIST
SHELL DEVELOPMENT COMPANY
P.O. BOX 1380
HOUSTON, TX 77251-1380
HANK STEVENS
LABORATORY MANAGER
SACRAMENTO REG CNTY SANITATION
8521 LAGUNA STATION ROAD
ELK GROVE, CA 95798
BILL STORK
CHEMICAL ANALYST
ENVIRONMENTAL ANALYSIS, INC
3278 NORTH HWY 67
FLORISSANT, MO 63033
MICHAEL STRAKA
US BUSINESS DEVELOPMENT
PERSTORP ANALYTICAL
1256 STOCKTON STREET
ST. HELENA, CA 94574
PAUL STRICKLER
PRESIDENT
ENVIRONMENTAL EXPRESS
443 LONG POINT ROAD
MT. PLEASANT, SC 29464
ANN B. STRONG
CHIEF ENVIRONMENTAL CHEMISTRY
US ARMY CE/WATERWAYS EXP
3909 HALLS FERRY ROAD
VICKSBURG, MS 39180
ROBERT L. SULLIVAN
LAB ANALYST
CLACKAMAS CO DEPT OF UTILITIES
902 ABERNETHY STREET
OREGON CITY, OR 97045
RENDO SURENDRO
1401 MELROSE AVENUE  #B
CHESTER, PA 19013
CAROL SWANN
EPA
WASHINGTON, DC
S. REID TAIT
RESEARCH ASSOCIATE
DOW CHEMICAL COMPANY
BUILDING  1261
MIDLAND, MI 48667
H. SHERMAN TAN
SENIOR CHEMIST, PH.D.
COMMONWEALTH LAB
2209 EAST BROAD STREET
RICHMOND, VA 23223
ROBERT TEECE
WW LAB SUPERVISOR
PIMA COUNTY WW MANAGEMENT
TECH SERVICES LAB
7101 NORTH CASA GRANDE HWY
TUCSON, AZ 85743
WILLIAM A.  TELLIARD
USEPA
ENGINEERING & ANALYSIS DIV.
401 M STREET, SW MAILCODE 4303
WASHINGTON, DC 20460
JERRY J. THOMA
LABORATORY DIRECTOR
MAS TECHNOLOGY CORPORATION
110 SOUTH HILL STREET
SOUTH BEND, IN 46617
MARION KELLY THOMPSON
EPA
WASHINGTON,  DC
DAVID TOMPKINS
PRESIDENT
ETS ANALYTICAL SERVICES
1401 MUNICIPAL ROAD
ROANOKE, VA 24012
ALLAN M. TORDINI
PRESIDENT
QUALITY WORKS INC
8 STRAFFORD CIRCLE ROAD
MEDFORD, NJ 08055
DAN TREMBLAY
QUALITY ASSURANCE SUPERVISOR
ORANGE COUNTY SANITATION DIST
10844 ELLIS AVENUE
FOUNTAIN VALLEY, CA 92708
DAVID TRIMBLE
MGR OF ENVIRONMENTAL AFFAIRS
TEXTILE RENTAL SERVICES ASSOC.
1054 31ST STREET, NW
SUITE 420
WASHINGTON, DC 20007
FELICITAS G. TRINIDAD
ENVIRONMENTAL SUPERVISOR
HOFFMANN LA ROCHE
340 KINGSLAND STREET
NUTLEY, NJ 07110
JOHN J. URH
SALES & MARKETING MANAGER
CETAC TECHNOLOGIES
5600 SOUTH 42ND STREET
OMAHA, NE 68107
JIM VANCE
PRODUCT LINE MANAGER
HORIBA INSTRUMENTS INC
17671 ARMSTRONG AVENUE
IRVINE, CA 92714
JOHN A. VANDERHOFF
RESEARCH PHYSICIST-TEAM LEADER
ARMY RESEARCH LABORATORY
AMSRL-WT-PC
ABERDEEN PROVING GROUND, MD 21005
DAVID M. VARNELL
ANALYTICAL CHEMIST
TVA ENVIRONMENTAL CHEM LAB
401 CHESTNUT STREET
CHATTANOOGA, TN 37402
PAMELA O. VARNER
ANALYTICAL SERVICES, INC
390 TRABERT AVENUE
ATLANTA, GA 30309
LOSALYN VASQUEZ
CHEMIST-INORGANIC SECTION
NAVY PWC ENVIRON CHEM LAB
BUILDING 398
NAVAL STATION
SAN DIEGO, CA 92136
JOE VIAR
DYNCORP VIAR, INC
300 NORTH LEE STREET
SUITE 500
ALEXANDRIA, VA 22314
JOE VITALIS
EPA
WASHINGTON, DC
MICHELLE VODOPIA
LAB MANAGER
JMEUC
500 SOUTH 1ST STREET
ELIZABETH, NJ 07202
CHARLIE VOINCHE
MANAGER
PETROLEUM LABORATORIES,  INC
333 E. KALISTE SALOOM ROAD
LAFAYETTE, LA 70508
JACK S. WAHLSTROM
LAB MANAGER
GULF COAST WASTE DISPOSAL AUTH
10800 BAY AREA BOULEVARD
PASADENA, TX 77505
TONIE WALLACE
COUNTY COURT REPORTERS,  INC
124 EAST CORK STREET
WINCHESTER, VA  22601
CYNTHIA WALTERS
LABORATORY MANAGER
NARRAGANSETT BAY COMM
235 PROMENADE STREET
PROVIDENCE, RI 02908
BERNADINE L. WARDLAW
CHEMIST
CITY OF ASHEBORO
146 NORTH CHURCH  STREET
ASHEBORO, NC 27203
JOHN J. WATKINS
PRETREATMENT PROGRAM MANAGER
CITY OF CONYERS
1184 SCOTT STREET
CONYERS, GA 30207
JAN WAVERING
PRETREATMENT  COORDINATOR
QUINCY WASTEWATER TREATMENT
700 WEST LOCK &  DAM ROAD
QUINCY, IL  62301
MELISSA WEEKLY
CHIEF CHEMIST
COLUMBUS CITY UTILITIES
P.O. BOX 1987
COLUMBUS, IN 47202
PETER WEICKMANN
LAB COORDINATOR
THE BOEING COMPANY
P.O. BOX 3707
M/S 4H-26
SEATTLE, WA  98124
FRED WEIDMAN
VICE PRESIDENT
WALLE CORPORATION
600 ELMWOOD PARK BOULEVARD
JEFFERSON, LA 70123
RICHARD WEISS
SENIOR PRINCIPAL SCIENTIST
WESTINGHOUSE HANFORD CO
P.O. BOX 1970
H4-23
RICHLAND, WA 99352
LESLYE E. WERNER
GNAN/LABO/ENSV/EPA
25 FUNSTON ROAD
KANSAS CITY, KS 66115
RICHARD WHITNEY
ORGANICS DEPT MANAGER
ETS ANALYTICAL SERVICES
1401 MUNICIPAL ROAD
ROANOKE, VA 24012
MARISA WIECZOREK
ENVIRONMENTAL SPECIALIST
PRINCETON UNIVERSITY
PLASMA PHYSICS LABORATORY
P.O. BOX 451
PRINCETON, NJ 08540
PAUL V. WIEST
CHEMIST
US ARMY CORPS OF ENGINEERS
476 COLDBROOK ROAD
HUBBARDSTON, MA 01452
IDELIS Z.  WILLIAMS
QUALITY ASSURANCE OFFICER
SPL
8800 INTERCHANGE DRIVE
HOUSTON, TX 77225
RICK WILLIAMS
PROFESSOR OF CHEMISTRY
MIDWESTERN STATE UNIVERSITY
3410 TAFT BOULEVARD
WICHITA FALLS, TX 76308
ALLISON WILSON
CHIEF CHEMIST
HAMPTON ROADS SANITATION DIST
1432 AIR RAIL AVENUE
VIRGINIA BEACH, VA 23455
DEBORAH A. WILSON
LABORATORY DIRECTOR
BAW DIV/MICROBAC LABORATORIES
635-A PRESSLEY ROAD
CHARLOTTE, NC 28317
JANET N. WILSON
LAB SUPERVISOR
UNIFIED SEWERAGE AGENCY
16580 SW 85TH
TIGARD, OR 97224
JEAN WILSON
AVERILL ENVIRONMENT LAB
100 NORTHWEST DRIVE
PLAINVILLE, CT 06062
JOE WINFIELD
ASSOCIATE DIRECTOR
ODU/APPLIED MARINE RESEARCH
1034 WEST 45TH STREET
NORFOLK, VA 23529
CORNELIUS WINFREE, JR
LAB TECHNICIAN
WASTEWATER  (HRWTF)
231 HUMMEL ROSS ROAD
P.O. BOX 969
HOPEWELL, VA 23860
DAVID WINTERS
PROGRAM  MANAGER
ARIZONA DEPT OF HEALTH SERVICE
STATE LAB
1520 WEST ADAMS STREET
PHOENIX, AZ 85007
HUGH WISE
EPA
WASHINGTON, DC
ERIC E. WISTED
ADVANCED CHEMIST
3M-I&C SECTOR LAB NEW PROD  DEV
209 1C 30
ST. PAUL, MN 55144
SCOTT R. WOLFF
ENVIRONMENTAL ENGINEER
AQUALON
P.O. BOX 271
HOPEWELL, VA 23860
MICHAEL W. WOSTER
CHEMIST
US ARMY CORPS OF ENGINEERS
MRD LABORATORY
420 SOUTH 18TH STREET
OMAHA, NE 68102
ANN E. WRIGHT
PROJECT LEADER
DOW CHEMICAL COMPANY
1897 G ANALYTICAL SCIENCES
MIDLAND, MI 48674
ROBERT K. WYETH
SENIOR VICE PRESIDENT
RECRA ENVIRONMENTAL, INC
10 HAZELWOOD DRIVE
AMHERST, NY 14228
JACK Z. XIE
SENIOR TECHNICAL ANALYST
WATER CHEMISTRY,  INC
P.O. BOX 4273
ROANOKE, VA 24015
DAVID C. YAWORSKY
SENIOR CHEMIST
VA POWER SYSTEM LAB
11201 OLD STAGE ROAD
CHESTER, VA 23831
JOHN E. YOUNG
WESTINGHOUSE SAVANNAH RIVER CO
P.O. BOX 6809
AIKEN, SC 29804
JIM J. ZHU
RESEARCH CHEMIST
CETAC TECHNOLOGIES
5600 SOUTH 42ND STREET
OMAHA, NE 68107
U.S. GOVERNMENT PRINTING OFFICE: 1995 - 615-003/01098
