840R83003
                                            PROCEEDINGS OF


                                 USEPA EFFLUENT GUIDELINES DIVISION

                             ANNUAL SEMINAR FOR ANALYTICAL METHODS

                                       FOR PRIORITY POLLUTANTS
                                            March 16-17, 1983
                                             Norfolk, Virginia
                         Transcribed by:
County Court Reporters
Vinegar Hill Square
27 East Loudoun Street
Leesburg, Virginia 22075
                         Prepared by:
Whitescarver Associates, Inc.
P. O. Box 17088
Dulles International Airport
Washington, D.C.  20041
                         For:
William A. Telliard, Acting Chief
Office of Analytical Programs Branch
USEPA Effluent Guidelines Division (WH-552)
401 M Street, S.W.
Washington, D.C. 20460
                       EDITING NOTE:  Because of the scientific/technical nature  of
                       these proceedings,  the  preliminary verbatim  transcript  was
                       edited for clarity prior to distribution.   Seminar speakers were
                       responsible for editing individual  seminar presentations, which
                       appear herein in final form.

-------
                     Table of Contents
                      March 16, 1983

Title                                    Speaker                  Page

Introduction                             William A. Telliard         1
                                         EPA

Status of Analytical Methods             Robert B. Medz
for Wastewater Measurement,              EPA
CWA Subsection 304(h)

Validation Study of EPA                  Robert Graves              24
Methods 624 and 625                      EPA

Overview of Effluent Guidelines          William A. Telliard        45
Analytical Activities                    EPA

Revision A of Methods 1624               Dale R. Rushneck           61
and 1625                                 Interface, Inc.

Optimization of GC/MS Analyses           Bruce N. Colby             81
                                         S-CUBED

Round Robin Study of EPA Methods         George H. Stanko          108
624 and 1624 for Volatile Organic        Shell Development
Pollutants                               Company

Automated Identification of              Philip W. Ryan            159
Priority Pollutants from                 S-CUBED
GC/MS Data

Receipt and Transcription of             John Norris               173
Quantitative Data on Magnetic            Viar & Company
Tape at the EPA Sample Control
Center

-------
                  Table of Contents
                   March 16, 1983

Title                                    Speaker                  Page

Increased Confidence in                  Walter M. Shackelford
Spectrum Matching by Use                 EPA
of a Retention Time Library

Quality Assurance Decision               Paul E. Mills             204
Models for Hazardous Waste               Mead CompuChem
Analysis
                             ii

-------
                     Table of Contents
                      March 17, 1983

Title                                    Speaker                  Page

Introduction                             William A. Telliard       230
                                         EPA

Statistical Methods for                  Barrett P. Eynon          231
Effluent Guidelines                      SRI International

GC/FT-IR and GC/MS:  Which,              James Brasch              269
When and Why?                            Battelle Columbus
                                         Laboratories

Rapid Organic Analytical                 Andrew D. Sauter          299
Methods Development, The                 EPA
Triple Quadrupole Mass Spec-
trometry Potential

Evaluation of a New GC/MS                Robert G. Beimer          330
Direct Aqueous Injection                 TRW, Inc.
Interface for Volatile Organic
Analyses

Results of the U.S. EPA                  Robert Maxfield           350
National Validation Study of             Versar, Inc.
the Inductively Coupled Plasma
Method

A Survey of Precision and Bias           Ray F. Maddalone          372
Data for Methods of Analysis for         TRW, Inc.
Priority Pollutant Elements

Compliance Monitoring Methods            James K. Rice             397
for Priority Pollutant Elements          Consulting Engineer
in the Discharges from Steam
Electric Power Plants

                             iii

-------
                 LIST OF ATTENDEES
Jackie Anderson
Dow Chemical U.S.A.
2030 Dow Center
Midland, Ml  48640

Area Code 517  636-1000
Devereaux Barnes
Deputy Director, EGD
U.S. Environmental Protection Agency
401 M Street, S.W.
Washington, D.C.  20460

Area Code 202  382-7120
Jon Barney
Chemical Engineer
U.S. Environmental Protection Agency
Region V  (5WQP)
230 South Dearborn
Chicago, IL  60604

Area Code 312  886-6109
Lynn M. Beasley
Chemist/Project Officer
U.S. Environmental Protection Agency
401 M Street, S.W.  (WH-552)
Washington, D.C.  20460

Area Code 202  382-7162
Robert G. Beimer
Lab. Manager
TRW
One Space Park, Building 116
Redondo Beach, CA  90278

Area Code 213  536-3894
Bob Booher
Technical Consultant
Viar & Company
300 North Lee Street
Alexandria, VA  22314

Area Code 703  683-0885

                   iv

-------
James Brasch
Battelle Columbus Labs
505 King Avenue
Columbus, OH  43229

Area Code 614  424-5096
Carol Byington
Environmental Chemist
Envirodyne Engineers
12161 Lackland Road
St. Louis, MO  63141

Area Code 314  434-6960
Bruce N. Colby
Manager, Chemistry
S-CUBED
P.O. Box 1620
La Jolla, CA  92038

Area Code 619  453-0060
Deborah Danforth
Sample Control Center
Viar & Company
300 North Lee Street
Alexandria, VA  22314

Area Code 703  683-0885
Robert W. Dellinger
Acting Chief, Wood Products & Fibers Branch
U.S. Environmental Protection Agency
401 M Street, S.W.
Washington, D.C.  20460

Area Code 202  382-7137
Jeffrey D. Denit
Director, Effluent Guidelines Division
U.S. Environmental Protection Agency
401 M Street, S.W.
Washington, D.C.  20460

Area Code 202  382-7120

-------
Barrett P. Eynon
Statistician
SRI International
333 Ravenswood Avenue
Menlo Park, CA  94025

Area Code 415  859-5239
Thomas E. Fielding
Chemist
U.S. Environmental Protection Agency
401 M Street, S.W.
Washington, D.C.  20460

Area Code 202  382-7156
Paul C. Geiszler
Group Leader, GCMS
ESE, Inc.
P.O. Box ESE
Gainesville, FL  32602

Area Code 904  332-3318
John E. Gersbach
Water Control Engineer
Virginia State Water Control Board
P.O. Box 11143
Richmond, VA  23230

Area Code 804  257-6326
Robert L. Graves
Staff Scientist/Chemist
U.S. Environmental Protection Agency
26 W. St. Clair, Room 531
Cincinnati, OH  45268

Area Code 513  684-7325
William P. Gulledge
Manager, Environmental/Scientific Programs
Chemical Manufacturers Association
2501 M Street, N.W.
Washington, D.C.  20037

Area Code 202  887-1188


               vi

-------
Susan Hancock
Sample Control Center
Viar & Company
300 North Lee Street
Alexandria, VA  22314

Area Code 703  683-0885
Bill Hardesty
Research Coordinator
Science and Technology Consultants
701 Clear Spring Road
Great Falls, VA  22066

Area Code 703  430-1515
Thomas J. Hoogheem
Specialist
Monsanto Research Corporation
1515 Nicholas Road
Dayton, OH  45418

Area Code 513  268-3411  Ext. 257
William P. Huff
Andrew S. McCreath & Son, Inc.
P.O. Box 1453
Harrisburg, PA  17105

Area Code 717  238-9331
Richard A. Javick
Research Associate & Group Leader of
  Env. Anal. Services
FMC Corporation
Box 8
Princeton, NJ  08540

Area Code 609  452-2300  Ext. 4411
Rosemary C. Keane
County Court Reporters, Inc.
27 E. Loudoun Street
Leesburg, VA  22075

Area Code 703  777-8645
              vii

-------
Gary Keen
Research Group Leader
Conoco
1000 S. Pine
Ponca City, OK  74601

Area Code 405  767-2761
Richard Kinch, Project Officer
Effluent Guidelines Division  (WH-552)
U.S. Environmental Protection Agency
401 M Street, S.W.
Washington, D.C.  20460

Area Code 202  382-7159
George Kosko
Technical Specialist
Mead CompuChem
P.O. Box 12652
Research Triangle Park, NC  27709

Area Code 800  334-8525
William G. Krochta
Assistant Director Analytical Res.
PPG Industries
Box 31
Barberton, OH  44203

Area Code 216  848-4161  Ext. 505
A. F. Looker
Chemist
PETRO-CANADA
P.O. Box 50
Pointe-aux-Trembles, Quebec   H1B 5K2

Area Code 514  252-5844
Dr. Ray F. Maddalone
Program Manager
TRW
One Space Park  116/106
Redondo Beach, CA  90278

Area Code 213  536-3894

               viii

-------
Robert Maxfield
Manager, Inorganic Branch
VERSAR, Inc.
6850 Versar Center
P.O. Box 1549
Springfield, VA  22151

Area Code 703  750-3000
Kevin V. McConnaghy
MKTG Manager
Mead CompuChem
3308 E. Chapel Hill
Research Triangle Park, NC  27709

Area Code 919  469-9876
Frank McElroy
Staff Chemist
Exxon Research & Engineering Company
P.O. Box 121
Linden, NJ  07036

Area Code 201  474-3954
Robert B. Medz
U.S. Environmental Protection Agency, RD 680
401 M Street, S.W.
Washington, D.C.  20460

Area Code 202  382-5788
Paul R. Michael
Res. Group Leader
Monsanto
800 N. Lindbergh Boulevard
St. Louis, MO  63141

Area Code 314  694-4838
Paul E. Mills
Director, Quality Assurance
Mead CompuChem
P.O. Box 12652
Research Triangle Park, NC  27709

Area Code 919  549-8263

                 ix

-------
Danny Moss
Chemist, Oyster Creek Division
Dow Chemical Company
P.O. Box BB
Freeport, TX  77541

Area Code 713  238-9009
Bob Nicholson
Chemist, Analytics Laboratory
Roche Biomedical Laboratory
P.O. Box 25249
Richmond, VA  23260

Area Code 804  353-8973
James E. Norris
Group Leader, Analytical/Environmental Dept
CIBA-Geigy Corp.
P.O. Box 113
McIntosh, AL  36553

Area Code 205  944-2201  Ext. 2220
John Norris
Sample Control Center
Viar & Company
300 North Lee Street
Alexandria, VA  22314

Area Code 703  683-0885
Fred A. Pennington, Jr.
Andrew S. McCreath & Son, Inc.
P.O. Box 1453
Harrisburg, PA  17105

Area Code 717  238-9331
Allison Phillips, Project Officer
Effluent Guidelines Division   (WH-552)
U.S. Environmental Protection  Agency
401 M Street, S.W.
Washington, D.C.  20460

Area Code 202  382-7167
                 x

-------
William B. Prescott
Director, Scientific Services Department
American Cyanamid Company
1937 West Main Street, P.O. Box 60
Stamford, CT  06904-0060

Area Code 203  348-7331  Ext. 2647
Charles Puchalsky
Research Scientist (Analytical)
Uniroyal Chemical
Elm Street
Naugatuck, CT  06770

Area Code 203  723-3625
James K. Rice
Consulting Engineer
17415 Batchellors Forest Road
Olney, MD  20832

Area Code 301  774-2210
Richard J. Ronan
Vice President
Versar, Inc.
6850 Versar Center
Springfield, VA  22153

Area Code 703  750-3000
Dale R. Rushneck
Interface Inc.
P.O. Box 297
Ft. Collins, CO  80522-0297

Area Code 303  223-2013
Philip W. Ryan
S-CUBED
P.O. Box 1620
La Jolla, CA  92038

Area Code 619  453-0060
                xi

-------
Andrew D. Sauter
Research Chemist
U.S. Environmental Protection Agency
EMSL-LV
P.O. Box 15027
Las Vegas, NV  89114

Area Code 702  798-2144
Robert Schaffer
Vice President
Centec Corp.
11260 Roger Bacon Drive
Reston, VA  22090

Area Code 703  471-6300
Walter M. Shackelford
Athens Environmental Research Lab
College Station Road
Athens, GA  30613

Area Code 404  546-3186
John A. Sinsel
Senior Research Chemist
National Steel Corp.
Research & Development Dept.
Weirton, WV  26062

Area Code 304  797-2832
David N. Speis
Director of Mass Spectrometry
ETC Corp.
284 Raritan Center Parkway
Edison, NJ  08837

Area Code 201  225-6759
George H. Stanko
Staff Research Chemist
Shell Development Co.
P.O. Box 1380
Houston, TX  77001

Area Code 713  493-7702
                 xii

-------
David  Stewart
Vice President
Viar & Company
300 North  Lee Street
Alexandria, VA   22314

Area Code  703   683-0885
Ed Stigall
Chief,  Inorganic Chem., EGD
U.S. Environmental Protection Agency
401 M Street, S.W.
Washington, D.C.  20460

Area Code 202  382-7124
Paul A. Taylor
President
California Analytical Lab, Inc.
5895 Power Inn Road
Sacramento, CA  95824

Area Code 916  381-5105
William A. Telliard
Chief, Energy and Mining
Effluent Guidelines Division, WH552
U.S. Environmental Protection Agency
401 M Street, S.W.
Washington, D.C.  20460

Area Code 202  655-4000
Theodore F. Them
Supervisor, Analytical Laboratory
Roy F. Weston, Inc.
One Weston Way
West Chester, PA  19380

Area Code 215  692-3030
Kathleen E. Thrun
Senior Professional
Arthur D. Little, Inc.
Acorn Park
Cambridge, MA  02140

Area Code 617  864-5770  Ext. 2311

                 xiii

-------
Joseph Viar
President
Viar & Company
300 North Lee Street
Alexandria, VA  22314

Area Code 703  683-0885
Dr. B. Frank Vincent, Jr.
Associate Director, Regulatory Affairs
James River Corporation
1915 Marathon Avenue
Neenah, WI  54956

Area Code 414  729-8168
Tonie M. Wallace
President
County Court Reporters, Inc.
27 E. Loudoun Street
Leesburg, VA  22075

Area Code 703  777-8645
Gary Walters
Chemist
Jordan Laboratories
P.O. Box 2552
Corpus Christi, TX  78403

Area Code 512  884-0371
Keith Ward
Chemist
Analytics Laboratory/Roche Biomedical Labs
P.O. Box 25249
Richmond, VA  23260

Area Code 804  353-8973
Michael L. Webb
Chemist II
PA. Dept. Environmental Resources - Bureau of Labs
Evan Press Building, 3rd & Reilly Streets
Harrisburg, PA   17120

Area Code 717  787-4669

                 xiv

-------
Alan F. Weston
Support Manager
Occidental Chemical Corp.
Research Center
Grand Island, NY  14072

Area Code 716  773-8548
John Whitescarver
President
Whitescarver Associates, Inc.
11260 Roger Bacon Drive
Reston, VA  22090

Area Code 703  435-4444
Stuart A. Whitlock
Manager, Chemistry Department
ESE, Inc.
P.O. Box ESE
Gainesville, FL  32602

Area Code 904  332-3318
Bruce E. Wilkes
Project Scientist - Environmental Applications
Union Carbide Corp.
Technical Center, Bldg. 770-318
P.O. Box 8361
South Charleston, WV  25303

Area Code 304  747-4463
Tom Wilson
GC/MS Chemist
IT Corporation - Stewart Labs Division
5815 Middlebrook Pike
Knoxville, TN  37921

Area Code 615  588-6401
Nancy Wincentsen
Chief, Information Services
Whitescarver Associates, Inc.
11260 Roger Bacon Drive
Reston, VA  22090

Area Code 703  435-4444
                       xv

-------
Jack M. Winn
Environmental Supervisor
Rohm & Haas Tx Inc.
P.O. Box 672
Deer Park, TX  77536

Area Code 713  476-8152
Hugh Wise
Effluent Guidelines Division  (WH-552)
U.S. Environmental Protection Agency
401 M Street, S.W.
Washington, D.C.  20460

Area Code 202  382-7177
                   xvi

-------
             INTRODUCTION and WELCOME

                 William Telliard
       U.S. Environmental Protection Agency
           Effluent Guidelines Division
                   MR. TELLIARD:  Good morning.

There are a number of empty seats down here in

front.  I guess those were some of the people that

are now in the Washington Post.  So if you want to

move down you can use those seats.

   Welcome to Norfolk, our third visit here, subject

the same, Priority Pollutant Analysis and their

kindred.  I would like this morning to introduce

the Director of the Effluent Guidelines Division,

Jeff Denit, who would like to say a few things to

welcome you to this meeting.  Jeff.

                   MR. DENIT:  Thank you very much,

Bill.  It is a personal pleasure and a pleasure for

the Division to once again be able to sponsor our

conference on analytical methods, this being the 6th

annual.  We are looking forward to the exchange of

ideas and the dialogue as we mutually work to

advance the state of the art of analytical methods.

-------
 I  am  certainly  looking  forward  to  the conference.



 I  wish you all  well, and please, by all means if



 you have the opportunity, stop  by  and introduce



 yourself if I haven't had the chance to meet you




 or if you have the opportunity, stop by and introduce



 time  that we can spend together to discuss any of



 your  ideas.




   At this point I'll turn the program back over to



 Bill  for our morning session and,  once again, best of



 luck  and thank you so much for coming.



                  MR. TELLIARD:  Thank you, Jeff.



A couple of announcements;  one, as usual,  concern-



 ing this evening.  Right at the break Captain



Whitescarver will announce some arrangements we've



made  for our usual soiree out to dinner tonight



with  a cast of thousands.



   At this time I would like to introduce  our first



speaker, Bob Medz.  Bob has addressed this group

-------
on the same subject before and it's kind



of a continuing saga.  Bob has been developing



a certain package now for a number of years,




and it feels like trying to get a guideline out, doesn't



it,  Bob?



                  MR. MEDZ:   It does.



                  MR. TELLIARD:  Dr. Medz is



Associate Director of Water  and Waste Management,



Monitoring Research Division, Office of Research



and  Development, EPA.  Bob's subject this morning



is the approved analytical methods, affectionately



referred to as 304(h).  Bob.

-------
    STATUS  OF  ANALYTICAL METHODS  FOR  WASTEWATER
         MEASUREMENT,  CWA SUBSECTION  304(h)

                  Robert B. Medz
       U.S. Environmental Protection Agency
         Office  of Research  and Development
                  MR. MEDZ:   I  think the last time

 I addressed the group I  said  the regulations on

 analytical methods would probably come out within

 about three to six months and that was two years

 ago.  They still haven't come out, but they'll be

 out, and again I'll say, in three to six months.

 But this time I think we're closer than that.

    First of all, what are these analytical methods

 we're talking about?  In Subsection 304(h)  of the

 Clean Water Act (CWA) Congress said that the Admini-

 strator would develop and promulgate guidelines

 establishing test procedures  for the analysis of

 pollutants.  These test procedures are intended

 to be used in any certification made under Section

 401 of the Act or any permit application under

Section 402 of the Act.   Now, those requirements

-------
 have  quite  a  scope of  application  for  these  analy-



 tical methods.   The  analytical methods are to be



 used  in  any enforcement  and compliance action that



 is dependent  upon  any  of the regulations that go



 into permits.




  The 304(h)  regulations are promulgated at Part



 136 of Title  40  of the Code of Federal Regulations.



 The first regulations  under this Part 136 were



 published in  the Federal Register on October 16th,



 1973  (38 FR 17318).  They were first amended on



 December 1st, 1976  (40 FR 52780).  The basic thrust



 of this regulation is  to look at the available



 analytical methodologies that have been developed



 by consensus organizations, that have been devel-



 oped by industrial groups, or that have been re-



 ported in the analytical literature and to select



 these analytical methods that have been so described



 which meet the precision and accuracy requirements



and other criteria of the agency.



   In 1976 a new condition was placed on the agency.




The regulations that were intended to implement



the toxics regulations of Section 307 of  the  Act

-------
                        6
 had been  floundering.   The  new condition was es-



 tablished in a lawsuit  against the agency and in




 a subsequent consent settlement in 1976, which



 established the priority pollutants.  I think you



 are all familiar with the priority pollutants.




 At that point, the analytical methods by which



 trace organic pollutants could be analyzed in



 industrial discharges just were not available in



 the reported literature or the consensus manuals.



 There were a few analytical procedures that had



 been developed by single laboratories relative to



 measurements of pesticides.  EPA's Environmental



 Monitoring and Support  Laboratory in Cincinnati



 had developed a few of these analytical methods



 based on gas chromatography (GC) and thin layer



 chromatography (TLC), but the wide testing of



 these test procedures was just not available.




 The type of validation that you would expect of a



 consensus method was not available for the priority



 pollutants.  So the agency set out to develop




these methods.   On  December 3, 1979, the agency

-------
 proposed  these  analytical  methods  (44  FR  69464)



 which  had  been  developed to a point where, even



 though they hadn't been fully validated by multi-



 laboratory collaborative tests, the agency still



 felt they were sufficiently reliable for use in enforce-




ment and compliance measurements.  The methods




 that were  proposed included 12 methods based on



 gas chromatography or high pressure liquid chroma-



 tography (HPLC) and three methods based on gas



chromatography  with the mass spectrometric detec-



tion (GC/MS).




   Methods 601  through 612 were the GC or HPLC



methods.   Method 613 was a high performance GC/MS



method.  Method 624 and Method 625 were based on




the quadrupole  GC/MS.  The methods were proposed



and the performance criteria that were included



in the methods  were based on single laboratory



experience.  The GC/MS test protocol that was



proposed  wasn't significantly different from



the screening GC/MS test protocol that  had been

-------
                        8
used by the Effluent Guidelines Division  (EGD)  in



their industrial survey work.  So we had some in-




formation on the applicability of the GC/MS test



procedures to industrial wastewaters.  But the




experience with the GC/HPLC methods was restricted



to the experience of our contractors and some of



the work done in house at EMSL-Cincinnati and



some of EPA's regional laboratories.



    Proposed at the same time with the trace




organics pollutant test procedures was a test



procedure for carbonaceous biochemical oxygen



demand (CBOD).   The CBOD test procedure was being



proposed because it was becoming more and more



apparent that in many instances the traditional



five-day BOD might not be the most appropriate



measure of performance of control technology



for secondary treatment plants,  i.e., the biologi-



cal treatment plants.  Nitrification was becoming



a problem after treatment had been completed and



was giving abnormally high BOD5 readings because

-------
                         9
 of  the oxygen demand  of nitrifying  bacteria.



 So the CBOD five-day test procedure was proposed



 in  order  that a  nitrogen bacteria inhibitor could



 be  used to  prevent nitrification in the test that



 was being used to show  that the proper control



 technology  had been applied.



    Additionally, the inductively coupled plasma



 optical emission spectroscopic test procedure (ICP)




 was proposed  for trace metal analysis.  ICP is



 probably one  of the more powerful state-of-the-art



 instruments by which trace metals can be rapidly



 and reliably  analyzed.



    The final item that was included in the proposal



 was a  table of mandatory requirements addressing



 sample container material, sample preservation,



 and maximum allowable sample holding times.   The



 preservation  requirements are for samples which



are taken from industrial wastewaters to a labora-



 tory for the measurement at some later time.   The



holding times are the length of time that such

-------
                         10
 samples  can  be  held  before  analyses  without  losing



 their  integrity.




     Now, those  were  the  things that  were  included



 in the proposed regulations.  Everyone has been



 quite  anxious to know exactly when the final




 regulations  will come out.  The multi-laboratory



 validation studies that  were initiated in the



 1979-1981 time  frame for the GC and  HPLC  test



 procedures have been completed.  These valida-



 tion studies include Methods 601, 602, and 604



 through 612.  The validation of the GC/MS test



 procedure defined in Method 613 for dioxin is



 also completed.  Incidentally, the raw data from



these validation studies are available at Environ-



mental Monitoring and Support Laboratory in



 Cincinnati in case anyone has occasion to want to



 see the raw data.  The raw data has been reduced



and has been used for calculating the mean recover-



 ies of spikes, the standard deviation of recoveries



of spikes for the overall study,  and the standard

-------
                         11
 deviation  for  single analyst  performance.   The



 validation studies  were conducted in six differ-




 ent matrices:  distilled water, tap water,  one



 surface water  known  to be prone to contamination



 by organic  pollutants, and in three different




 industrial  wastewaters in which it was highly



 likely that priority pollutants in the particular



 categories covered by the methodology would be



 regulated.



    The validation test procedures were designed



 after the procedure developed by Youden.  Youden



 is a statistician and I think he still is at the



 National Bureau of Standards.  In his approach



 samples of  a given pollutant in closely matched



 concentrations are analyzed by the participants



 in the validation study.  These particular tests



were conducted under the auspices of EPA's Environ-



mental Monitoring and Support Laboratory in Cin-



cinnati (EMSL-Ci).  Incidentally,  the names that



are critical in these studies are Jim Longbottom

-------
                         12
and Jim Lichtenberg of that laboratory.  They have



been the principal investigators on these studies



together with Bob Graves who is here sitting right



next to me.  The Youden test was designed to have



the lowest concentration pair slightly above the



detection limit.  The second Youden pair was at a



mid-range concentration and the third Youden con-



centration was at the upper end of the concentra-



tion range that was being tested.



    These concentrations weren't the same for each



method and there were also some variations in



concentration ranges depending on the analyte.



The main conclusion of these tests was that the



means and the standard deviations, i.e.,  the over-



all study standard deviations and  the single



analyst standard deviations over the concentra-



tion ranges studied could be expressed as linear



regression equations.  Another conclusion that



came from these studies was that in the family



of linear regression equations for a given compound

-------
                        13
that were developed  for the six different matrices,



the statisticians couldn't find any statistically




significant differences in the slopes and the



intercepts of each of the regression lines appli-



cable to the different matrices.



   So the conclusion reached from these studies



is that in the methods themselves (and the methods



are being revised right now to include this new



information) the performance data for distilled



water is all that we need to require an analyst



to meet in order to establish that the analyst




can apply these methods within acceptable confi-



dence levels to industrial wastewater matrices.



Now, as chemists we feel or at least I feel



uneasy with that.   I feel there are going to be



matrix effects.  What the statisticians tell us



from these studies is that if you wanted to see



what these matrix  effects really  are, at least



in the scope of these studies you would probably



have to have many, many more data points in your

-------
                         14
 study  population  to  establish  the means  and the



 standard  deviations  with a much higher degree of




 confidence;  I guess  what the statisticians call



 the central limit theorem.  If you make a mea-



 surement  a sufficient number of times regardless




 of whether it's being measured at two or more



 significant  figures, if within the population



 that you  are measuring, the number of observations




 you make  is  large enough, the statisticians tell



 us that there are equations by which you can



 calculate the means  to many, many more significant



 figures for  that particular population than any



 of its individual measurements.  In other words,



 if your individual measurements have two signifi-



cant figures, if you had sufficient numbers of



measurements you could express the mean for that



particular population to as many significant




figures as you wanted to and it's statistically



valid.
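
As a minimal numerical sketch of this point (the values and the use of
Python are purely illustrative, not part of the studies being discussed):

```python
# Illustration: the standard error of a mean shrinks as 1/sqrt(n), so a
# large number of coarse (two-significant-figure) measurements can pin
# down the population mean more precisely than any single measurement.
# All values below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
true_value = 12.0            # hypothetical true concentration
measurement_sd = 1.5         # spread of an individual measurement

for n in (5, 50, 500, 5000):
    x = rng.normal(true_value, measurement_sd, size=n)
    x = np.array([float(f"{v:.2g}") for v in x])   # round to 2 sig. figs.
    std_err = x.std(ddof=1) / np.sqrt(n)           # uncertainty of the mean
    print(f"n={n:5d}  mean={x.mean():8.4f}  standard error={std_err:.4f}")
```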



    So for the number of data points that we have

-------
                         15
 in these studies (and there were from 15 to 20




 laboratories that participated in these studies),



the general conclusion is that we can use the



distilled water data to show that the analyst may



be competent to apply the methods to more compli-




cated matrices, such as treated industrial waste-



water effluents.  Now, this statement is only



restricted to treated wastewater effluents.



The statement does not apply to the raw influent



wastewaters.



    So now where are we within this particular




regulation and when is it going to come out?  Right



now we are in the process of incorporating the



texts of the analytical methods within the regula-




tions by reference.  Incorporation by reference is



an administrative term that is placed on us by the



Office of the Federal Register.  When you incorporate



any material into a regulation by reference it means



that the material being incorporated has the  full



effect of regulation as though it had actually been

-------
                         16
 printed in the Federal Register.  One of the



 restrictions that is placed on incorporating  by




 reference  is that the method is incorporated  into



 the  regulatory language in  exactly  the  text which



 is submitted to the  Federal Register.   Any alter-




 ations  in  that text  in the  future requires that



 a new Federal  Register notice  be made to  show



 the  alteration.




     If  you will  remember, I said that only methods



 601, 602,  and  604 through 613  had been  fully




 validated  and  the  performance  criteria  for those



 methods would  be  incorporated  within the  analytical



 method.  Multi-laboratory performance criteria



 for Methods  603,  624  and 625,  and Bob Graves  will



 tell us more about the  status  of those validation



 studies, are not  available  right now and  so the



 Methods 603, 624 and  625 will  be incorporated




 with the best  single  analyst data that  is cur-



rently available to the agency.  This means that at



such a time as the data from the multi-laboratory

-------
                         17
validation  studies  are  completed  for  Methods  624



and 625,  i.e.,  the  GC/MS test procedures, we



will have to go back with a Federal Register  Notice



to indicate that the criteria in those methods



have been changed.




    There are several reviews that have to be



made on this regulation once it leaves the agency.



The incorporation by reference by the Office of the



Federal Register is one of them.  There is another



review which comes under the Paperwork Reduction



Act which is performed by the Office of Management



and Budget (OMB).  Any regulation that requires



information to be provided by the community



which is affected by that regulation has to be



reviewed by OMB under the provisions of the



Paperwork Reduction Act.  In Part 136 we have two



provisions where we ask the affected community for



information.  One of these is in the equivalency



program where an instrument manufacturer might have



a new instrument system based  on a principle

-------
                         18
other than those that have been approved and



wishes to have it added to the approved list.




There we are asking for information that comes



under the Paperwork Reduction Act.  The amount



of time that this request for information requires



of the respondents has to be estimated and sub-



mitted for OMB review and approval.



   Another provision in the regulation is that




in the holding times and preservation techniques



there might be industrial discharges that because



they have limited interferents can have holding




times and preservation requirements other than



those that are being made mandatory in Table II.



For such discharges, the permittees can apply



for variances from the mandatorily prescribed



holding times and preservation requirements.



That also comes under the Paperwork Reduction




Act.  The Paperwork Reduction Act review takes



two to three months, but it will not hold up



the regulation.  Only those parts of the regulation

-------
                        19
which require information to be furnished may



be delayed.



    Then, we have another 20 days of review by



OMB for the regulatory reform review which every



regulation now has to undergo.



    So my best estimate for publication of the



final regulation is based on the completion date



of EPA's internal "Red Border Review" which




should be finished in the next several weeks.



We then submit this reviewed regulation to OMB



where they will have 20 working days for the



regulatory reform review which means (and I hate



to make this prediction again), that in another



two to three months we'll have the regulation




published.



                  MR. TELLIARD:  Thank you.  Any




questions?

-------
                         20
               QUESTIONS AND ANSWERS






                    MR.  VINCENT:   Frank  Vincent.



 This  has  to do  with the Table  II  in  the regulation.




 If  we can  show  that the sample arrives  at our



 laboratory in sufficient time  that preservation is



 not required, is it probably true that  we could



 establish  a variance on that situation?



                     MR.  MEDZ:  I would  say the answer



 is affirmative  on that.  You would have  to provide



 the data that shows  that this is the case.



                     MR. VINCENT:   Sure.



                     MR. MEDZ:  I would  say if your



 data  bears this out, there would be a variance



 to the preservation  requirement that you have



 demonstrated for your sample type.  That's the



 intent of  this  provision of the regulation.



                  MR. VINCENT:   Thank you.



                  MR. STANKO:  George Stanko, Shell



Development.   Bob,  could you identify the version

-------
                         21
of Methods  624  and  625  that  will  come  out  in  the



regulation; is  it the version that was in the



document from Cincinnati dated July, 1982?



                  MR. MEDZ:  There will be slight



changes from that.   Let me tell you where the



changes will be.




    Our general counsel tells us that regulations



have to stand on the text which is to be incor-



porated by reference.  The text that is incor-



porated by reference cannot include regulatory



requirements which at some later time would




automatically change the text of the method with-



out further notice in the Federal Register.  What



I'm talking about right now is the provisions in



Methods 624 and 625 that says, "Here are the



single laboratory performance criteria which you



will use until the Environmental Monitoring Sup-



port Laboratory establishes the multi-laboratory




performance criteria, and when these are available

-------
                        22
they will automatically be used  in this method."




Our general counsel tells us we can't do that;



that is saying that there is going to be future



automatic regulatory revision that is being



predicted in the text right now.  They say the



text has to be binding as of the date of the



promulgation.  So that kind of language is going



to be deleted.



                  MR. STANKO:  Could you tell me,




specifically, if Table 5 in the version published



in July, 1982 shows up in the final version?



                  MR. MEDZ:  Will you refresh my



memory as to which Table 5 is; is that the



performance criteria?



                  MR. STANKO:  That is the



performance criteria with respect to R and S.



                  MR. MEDZ:  That will remain the




same, but Bob Graves will be able to tell you more



about that.

-------
                         23
    Let me make  the  statement, myself.   The  per-



formance criteria that was in the July  1982  re-




lease was the best single and multiple  laboratory



data we have right now and that will stay in.



                  MR. STANKO:  Thank you, Bob,




that's what I needed to know.



                  MR. TELLIARD:  Any other



questions?



    Thank you, Bob, look forward to having you



back next year to tell us when they're coming



out.



     Our next speaker is Robert, commonly known



as Bob Graves from the Environmental Monitoring



and Support Laboratory in Cincinnati.  Bob is the



project manager  for the review and verification



of the GC/MS procedure, affectionately referred



to as 624 and 625.  Bob is fortunate to be here



today because he doesn't have to stay home in



Cincinnati for the EMSL program review.  Bob.

-------
                        24
  VALIDATION STUDY OF EPA METHODS 624 AND 625

                  Robert Graves
       U.S. Environmental Protection Agency
                 EMSL-Cincinnati
                     Abstract


     The Quality Assurance Branch, Environmental Moni-

toring and Support Laboratory, Cincinnati, Ohio, is

responsible for conducting interlaboratory method

validation studies for EPA analytical methodologies

as they pertain to measuring analytes in water media.

The study design and data analysis scheme used in

validating EPA's GC/MS Methods 624 (Purgeables) and

625 (Base/Neutrals and Acids) will be discussed.

General areas to be covered include:   selection of

participants, preparation of samples, selection of

water-types, rejection of outliers, calculation of

statistics, weighted linear least square regression

equations, correlation between surrogates and analy-

tes, false positives and false negatives, and com-

parison of method capability across water-types.

-------
                        25
                   Introduction






     The Quality Assurance Branch, Environmental Moni-




toring and Support Laboratory, Cincinnati, Ohio, is



responsible for conducting interlaboratory method



validation studies for EPA analytical methodologies




as they pertain to measuring analytes in water media.



The impetus that drives the Environmental Protection



Agency to promulgate guidelines establishing test



procedures for the analysis of pollutants comes from Sec-



tions 304(h) and 501(a) of the Federal Water Pollu-



tion Control Act of 1972 and the Clean Water Act of




1977.   The document that delineated which organic



pollutants were to be initially studied is the so-



called "Consent Decree".  On June 7,  1976, in the



United States District Court for the District of



Columbia, a settlement agreement was reached.  The



litigants were as follows:   Natural Resources De-




fense  Council, Inc.,  et al. versus Russell E. Train



(Civil Action No. 2153-73); Environmental Defense



Fund,  et al. versus Russell E. Train  (Civil Action

-------
                         26
No.  75-0172); Citizens  for a Better  Environment, et



al.  versus Russell E. Train (Civil Action No.



75-1698); and the Natural Resources  Defense Council,



Inc. versus James I. Agee, et al. (Civil Action No.



75-1267).






             METHOD VALIDATION STUDY






     Conducting interlaboratory studies involves




designing the studies and developing appropriate



data analysis techniques to process  the data.  The



study design and data analysis scheme specific to



validating EPA's GC/MS Methods 624 (Purgeables) and



625  (Base/Neutrals and Acids)  are presented.





STUDY DESIGN



     Participants, prior to their acceptance for this



study, were required to analyze a performance evalua-



tion sample.  The sample was designed so as to pre-



sent an analytical challenge to the analyst.  The



purpose of this preliminary study was to ensure that



the participants were familiar with the analytical

-------
                         27
methods and the sample handling procedures, and that



the  analysts  were  competent.  Previous studies have



shown  that a  screening process such as this leads



to more reliable data, and thus, the statistical



evaluation of the  method becomes more indicative



of the method's true capability.



     Samples were  prepared to conform with Youden's



plan for collaborative evaluation of analytical



methods.  The analytes were prepared as three You-



den pairs, one pair just above the detection limit,



one mid-level pair, and one pair near the upper



limit for use of these methods.   A Youden pair



consists of two samples with analytes at similar,



but distinctly different concentrations.   Analytes



were weighed out, dissolved in an organic solvent,



and shipped to participants as liquid concentrates



in sealed glass ampuls.  As many analytes as possi-



ble, within any one group of compounds that are



analyzed together,  i.e.,  purgeables,  base/neutrals,



acids,  and pesticides,  were placed within the same

-------
                        28
ampul.  Methods  624 and 625  require  that  surrogates



be dosed  into each water sample prior to  analysis.




Surrogates were  also supplied to each participant



as liquid concentrates.



     Caution was taken to prevent problems with



the simultaneous qualitative and quantitative mea-



surement of any of the compounds dosed within a



water-sample.  Purgeables were analyzed simultan-




eously; base/neutrals and pesticides were divided



into two sets; and acids were analyzed simultaneously.



     Prior to distribution to participants, ampuls



were analyzed to assure that the added constituents



were present at the intended levels, and they were



analyzed periodically thereafter to assure stability.



     Various water-types were studied, namely, rea-



gent water, drinking water, surface water, and in-



dustrial effluent.  Reagent water represents the



control; it represents the primary matrix, i.e.,



WATER.  The other water-types were chosen because



they affect the analytical procedures through other



impurities present in the water, causing matrix



effects due to considerations other than the water itself.



 Participants  secured  their own water samples,  thus



 allowing  each water-type  to be tested by the methods



 on a myriad of  individual waters each possessing




 varying interferences.  The participants then  pre-



 pared their own samples by dosing a specified  ali-



 quot of liquid  concentrate from the supplied ampuls



 into their  self-collected water-samples.






 TREATMENT OF DATA



     The primary purpose of conducting interlabora-



 tory validation studies is to document the method's



 capability, i.e., precision and accuracy, when a



 competent analyst, practicing appropriate quality



 control techniques, uses it.   The operative ideas



 here are competent analyst and appropriate quality



 control techniques; these two factors define what a



laboratory that will eventually conduct routine



analyses via this methodology must possess.  The



interlaboratory validation studies conducted by

-------
                         30
 EMSL-Cincinnati  are  intended  to define the accuracy



 and precision of a method when in fact these two



 items are operational.



     Spurious data points are always a part of any



 set of data collected.  Some objective technique



 must be performed to identify and to rid the data



 set of these spurious data points.  EPA, EMSL-



 Cincinnati applies Youden's laboratory ranking pro-




 cedure (1) at the 5% level of significance to iden-



 tify outlying laboratories.  However, the Youden



 ranking procedure requires a complete set of data



 from each laboratory within each water-type.   There-



 fore, missing data within a laboratory data set are



 replaced by taking the natural logarithms of the



 laboratory's available data and regressing it against



 the natural logarithms of the true concentration



 levels.   The missing values are then estimated by




e^Y, where Y is the predicted value from the regression



analysis corresponding to the missing value.   (2)
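
A minimal sketch of this imputation step, assuming the laboratory's
available recoveries and the corresponding true concentrations are held
in simple arrays (the data and the use of NumPy are illustrative only,
not the study's actual code):

```python
# Impute a laboratory's missing value by regressing ln(measured) against
# ln(true concentration) for that laboratory's available data, then
# taking e**Y at the missing concentration level.
import numpy as np

true_conc = np.array([5.0, 5.5, 20.0, 22.0, 48.0])   # hypothetical ampul levels
measured  = np.array([4.2, 5.1, 18.3, 20.9, 44.0])   # lab's available recoveries
missing_conc = 52.0                                   # level with no reported value

# least-squares fit of ln(measured) = a * ln(true) + b
a, b = np.polyfit(np.log(true_conc), np.log(measured), deg=1)

Y = a * np.log(missing_conc) + b      # predicted value on the ln scale
imputed = np.exp(Y)                   # e**Y replaces the missing value
print(f"imputed value at {missing_conc}: {imputed:.2f}")
```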



     After completing the laboratory ranking proce-



dure, zero, missing, "less than", and "nondetect"

-------
                        31
data are rejected as outliers (2).  The data remain-

ing are now checked for individual outliers at the

5% level of significance using the outlier rejection

test constructed by Thompson (3), and recommended by

the ASTM Committee D-19 on Water  (4).

     Summary statistics documenting the method's

capability are calculated using the retained data.

     Mean Recovery (\bar{X}):

          \bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i

     Accuracy (% Relative Error):

          \%RE = \frac{\bar{X} - \text{True Value}}{\text{True Value}} \times 100

     Overall Standard Deviation:

          S = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2}

     Percent Relative Overall Standard Deviation:

          \%RSD = \frac{S}{\bar{X}} \times 100

     Single-Analyst Standard Deviation:

          S_I = \sqrt{\frac{1}{2(m-1)} \sum_{i=1}^{m} (D_i - \bar{D})^2}

-------
                        32
Percent Relative Single-Analyst Standard Deviation:

          \%RSD_{SA} = \frac{S_I}{\bar{X}^*} \times 100

where:

    n        =  number of retained data points;

    X_i      =  value for the ith retained data point;

    m        =  number of retained Youden paired observa-
                tions;

    D_i      =  difference between the observations in the
                ith Youden pair;

    \bar{D}  =  average of the D values; and

    \bar{X}* =  average of the two mean recoveries corres-
                ponding to the two ampuls defining a par-
                ticular Youden pair.
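
These calculations reduce to a few lines of code.  The sketch below
assumes the retained data for one analyte and water-type are arranged
as the two ampuls of a single Youden pair, one row per laboratory; the
layout and values are hypothetical:

```python
# Summary statistics for one Youden pair of one analyte/water-type.
# Rows = laboratories; hypothetical retained recoveries.
import numpy as np

ampul_a = np.array([18.1, 19.4, 17.2, 20.3])   # true value 20.0
ampul_b = np.array([21.5, 22.8, 20.1, 23.9])   # true value 24.0
true_a, true_b = 20.0, 24.0

def per_ampul(x, true_value):
    mean_recovery = x.mean()                                     # X-bar
    rel_error = (mean_recovery - true_value) / true_value * 100  # %RE
    overall_sd = x.std(ddof=1)                                   # S
    rsd = overall_sd / mean_recovery * 100                       # %RSD
    return mean_recovery, rel_error, overall_sd, rsd

stats_a = per_ampul(ampul_a, true_a)
stats_b = per_ampul(ampul_b, true_b)

# single-analyst precision from within-laboratory pair differences
d = ampul_a - ampul_b
m = d.size                                            # retained Youden pairs
s_i = np.sqrt(np.sum((d - d.mean()) ** 2) / (2 * (m - 1)))
x_star = (stats_a[0] + stats_b[0]) / 2                # X-bar*
rsd_sa = s_i / x_star * 100                           # %RSD-SA

print(stats_a, stats_b, s_i, rsd_sa)
```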


     From the summary statistics EMSL-Cincinnati

develops statements, in the form of linear equations,

for each compound and each water-type delineating the

method's accuracy and precision across concentration

levels.  Mathematical equations of the form y = ax + b

are generated, where for precision y = overall or

single-analyst standard deviation and x = mean

recovery;  for accuracy y = mean recovery and x =

true value;  and the constants a and b represent the

slope and the intercept, respectively.

-------
                         33
      In order to get a good fit of the linear regres-

sions to the low concentration points it was decided

to minimize the sum of squares of the percent differ-

ence  between each of the points and the line.  To

accomplish this the traditional least-squares algo-

rithm was applied to a modified data set (y/x as the

dependent data set and 1/x as the independent data

set).

                            Dependent                    Independent

Accuracy:                   mean recovery /              1 / true value
                            true value

Single-Analyst Precision:   single-analyst               1 / mean recovery
                            std. dev. / mean
                            recovery

Overall Precision:          overall std. dev. /          1 / mean recovery
                            mean recovery
     The resulting regression is of the form y/x =

a + b(1/x) and can easily be converted to the desired

relationship y = ax + b.  The intercept (a) from the

equation y/x = a + b (1/x) becomes the slope (a) in

the equation y = ax + b, and the slope (b) from the

equation y/x = a + b (1/x) becomes the intercept (b)

in the equation y = ax + b.
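
As a concrete illustration of this transformation (the arrays are
hypothetical and NumPy is used only for convenience), an ordinary
least-squares fit of y/x on 1/x yields an intercept that becomes the
slope a, and a slope that becomes the intercept b, of y = ax + b:

```python
# Fit y = a*x + b by minimizing squared *percent* differences: regress
# y/x on 1/x with ordinary least squares, then swap the coefficients.
import numpy as np

x = np.array([5.0, 5.5, 20.0, 22.0, 48.0, 52.0])   # e.g., true values
y = np.array([4.6, 5.2, 18.9, 20.3, 44.1, 49.0])   # e.g., mean recoveries

slope_t, intercept_t = np.polyfit(1.0 / x, y / x, deg=1)

a = intercept_t   # intercept of y/x = a + b(1/x) is the slope of y = ax + b
b = slope_t       # slope of y/x = a + b(1/x) is the intercept of y = ax + b
print(f"y = {a:.4f} * x + {b:.4f}")
```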

-------
                        34
     The relationship (correlation) that exists be-



tween each surrogate and each priority pollutant



will be established.  The intent is to statistically



establish which surrogates should be monitored to



assure that the data submitted for the priority pol-




lutants are of high quality.
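
A minimal sketch of how such a surrogate-analyte correlation might be
computed from paired recovery data (the arrays are hypothetical; the
study's actual statistical treatment may differ):

```python
# Correlation between a surrogate's recoveries and an analyte's
# recoveries across laboratories/samples -- hypothetical percentages.
import numpy as np

surrogate_recovery = np.array([92.0, 88.0, 101.0, 75.0, 96.0, 83.0])
analyte_recovery   = np.array([89.0, 84.0,  98.0, 70.0, 95.0, 80.0])

r = np.corrcoef(surrogate_recovery, analyte_recovery)[0, 1]
print(f"correlation coefficient r = {r:.3f}")
```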



     The potential for these GC/MS methods to give



false-positives and false-negatives is assessed in



this study by requiring each analyst to analyze a



real-world sample laden with priority pollutants and



interfering compounds.  The output from this will be



a statement concerning the method's probability of



producing false-positives and/or false negatives.



     Comparison of method performance vis-à-vis




water-types tested is performed to determine if there



are differences in the method's capability across



water-types.  A formal analysis of variance test for



differences across water-types is performed.  The



test is based on a regression model that directly



compares each water-type to reagent water, which




serves as a control (5).

-------
                        35
The basic model used to describe the data is given

by the multiplicative form

     X_{ijk} = B_j \, C_k^{\gamma_j} \, L_i \, \varepsilon_{ijk},     j = 1, 2, \ldots, 6

where B_j and \gamma_j are the fixed effects due to water-

types, C_k is the true concentration for ampul k, L_i

is the systematic laboratory effect for lab i, and

\varepsilon_{ijk} is the random within-laboratory error.  This

model converts to a linear regression model on a

ln - ln scale.

     \ln(X_{ijk}) = \ln B_j + \gamma_j \ln C_k + \ln L_i + \ln \varepsilon_{ijk}

     The random error terms, namely L_i and \varepsilon_{ijk},

are assumed to be mutually independent and to follow

a lognormal distribution.  Therefore, \ln L_i and \ln \varepsilon_{ijk}

are normally distributed with constant variance.

Now by transforming the data within each laboratory

by a set of independent contrasts, the laboratory

bias term L_i can be eliminated.  The model now depends

only upon the water-types through the parameters

B_j and \gamma_j.

-------
                        36
     There is no difference across water-types if

\ln B_j - \ln B_1 = 0 and \gamma_j - \gamma_1 = 0 for j = 2, 3, 4, 5, 6,

where j = 1 refers to reagent water.  In this case

the parameters B_1, \ldots, B_6 are all equal and

\gamma_1, \ldots, \gamma_6 are all equal.

     The analysis of variance procedure tests at the

5% significance level the null versus alternative

hypothesis

     H_0: \ln B_j - \ln B_1 = 0 and \gamma_j - \gamma_1 = 0 for j = 2, 3, 4, 5, 6

versus

     H_1: \ln B_j - \ln B_1 \neq 0 and/or \gamma_j - \gamma_1 \neq 0 for
          some j = 2, 3, 4, 5, 6

using a standard F-test.
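
One way to carry out such a nested-model F-test on the ln-ln scale is
sketched below with simulated data for only two water-types; it omits
the laboratory-contrast step described above, so it illustrates the
idea rather than reproducing the study's procedure:

```python
# Nested-model F-test: does giving each water-type its own intercept
# (ln B_j) and slope (gamma_j) fit significantly better than one common
# line?  Simulated data, two water-types only; laboratory effects omitted.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
conc = np.tile([5.0, 5.5, 20.0, 22.0, 48.0, 52.0], 8)    # ampul levels
water = np.repeat([0.0, 1.0], conc.size // 2)            # 0 = reagent water
ln_c = np.log(conc)
ln_x = 0.1 + 1.0 * ln_c + rng.normal(0, 0.15, conc.size) # simulated ln(recovery)

def rss(design, y):
    # residual sum of squares and parameter count for a least-squares fit
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    return resid @ resid, design.shape[1]

ones = np.ones_like(ln_c)
reduced = np.column_stack([ones, ln_c])                    # common B and gamma
full = np.column_stack([ones, ln_c, water, water * ln_c])  # per-water-type B, gamma

rss_r, p_r = rss(reduced, ln_x)
rss_f, p_f = rss(full, ln_x)

df_num, df_den = p_f - p_r, ln_x.size - p_f
F = ((rss_r - rss_f) / df_num) / (rss_f / df_den)
p_value = stats.f.sf(F, df_num, df_den)
print(f"F = {F:.3f}, p = {p_value:.3f}")   # reject H0 at the 5% level if p < 0.05
```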
REFERENCES

1.   Youden, W.J., and Steiner, E.H., Statistical
     Manual of the AOAC, 1975.

2.   Outler, E.G., and McCreery, J.H., Interlabora-
     tory Method Validation Study:  Program Documen-
     tation, Battelle Columbus Laboratories draft
     report, funded by EPA Contract No. 68-03-2624,
     1982.

3.   Thompson, W.R., Annals of Mathematical Statistics,
     Vol. 6, p. 214, 1935.

4.   ASTM Designation:  D2777-77, Standard Practice
     for Determination of Precision and Bias of
     Methods of Committee D-19 on Water.

5.   Bishop, T.A., Brydon, F.E., and Outler, E.G.,
     Development of Appropriate Statistical Techni-
     ques to Compare Analytical Methods Across Waste-
     waters, Battelle Columbus Laboratories draft
     report, funded by EPA Contract No. 68-03-2624,
     1981.

-------
                         37
               QUESTIONS  AND ANSWERS






                   MR.  STANKO:   George  Stanko,




Shell Development.   Bob,  I  have a  couple  of




questions; one, deals  with  EMSL-Cincinnati



criteria.  Does Cincinnati  have a  criteria for




the EPA standards  priority  pollutants that can



be expressed in plus or minus a certain amount



from the true value?




                   MR.  GRAVES:   For the standards




themselves?



                   MR.  STANKO:   Yes.




                   MR.  GRAVES:   Not that I am aware



of, no.



                   MR.  STANKO:   My second question



is, you talked about calculating a standard



deviation and also a mean value.  Was the standard



deviation an estimate from the inner 50 percent



quartile of your data?



                  MR. GRAVES:  Absolutely not.



                  MR. STANKO:  Thank you,  I

-------
                         38
needed...



                  MR. GRAVES:  What you're refer-




ring to is Winsorization.



                  MR. STANKO:  That was my third



question.  Thank you.



                  MR. GRAVES:  That was not done




in this  case.  The standard deviations, the over-



all, and the single laboratory standard deviation,



were calculated with retained data in the normal



fashion  (see Youden's statistical manual of the



AOAC).



                  MR. VINCENT:  Frank Vincent.




I perceive a conflict and maybe it's because I



don't understand.  I wonder if you would clarify



it.  You said that it's up to the analyst to



decide if a method is applicable and I guess I



was under the impression that the published




methods  are pretty much mandated and they are



defined  to be applicable.  Is that...am I wrong



somewhere?

-------
                         39
                  MR. GRAVES:  Well,  I'm speaking



 strictly  for myself, okay.  The way  I  interpret it




 is that it's up to the analyst to show that the



 method works on the matrix to be analyzed.



    Now,  if one would pick a very messy matrix,




 then gas  chromatography, even when coupled to



 something like an electron capture detector, which



 is quasi-specific, may not allow you to identify



 or quantitate the analyte of interest.  There



 may be other interfering compounds in that par-



 ticular matrix that co-elute with the analyte.




 Now, if this is the case, then, obviously, the



 GC method is not applicable.



    Hopefully, the precision and accuracy state-



ments supplied in the method write-ups will help



 you in determining if, in fact, that's the case.



 You can do things such as standard addition to



help determine that the method is, in fact,



applicable to the wastewater under study; then from



that point on you have no problem.   However,  if

-------
                        40
in fact, you find interfering substances, you do




not go willy-nilly ahead.  You, in fact,  say,  okay,



let us try a different column or let us go to  a



more universal method such as GC/MS; namely,



methods 624 and 625.



                  MR. VINCENT:  What position



would we then be in as far as complying with the



regulations?  I guess that's what I'm really



wondering about in any kind of compliance enforce-



ment situation.  We have used a method which



basically is not in the regulations.  We can



validate that the method is applicable, that it



gives the right result.  I guess that's part of



where my confusion is right now.



                  MR. GRAVES:  Dr. Medz seems  to



want to answer this one.



                  DR. MEDZ:  Let me get him off




of the hook on this one.



                  MR. VINCENT:  Thank you.



                  DR. MEDZ:  The methods as they

-------
                        41
are  written  are  the  yardstick by  which  all dis-



charges are  being measured for compliance and



enforcement.  We recognize there  will be situations




where, because of interfering problems, even with



a treated wastewater, the applicability of the




method may be a little difficult.



    We have, in Cincinnati, a program which we are




funding at least through 1985 called the Correction



of Deficiencies Program.  This is designed to be



an interactive program with the regulated commu-




nity, wherein when the regulated community finds



problems with the analytical method they get on



the phone and then talk with people in Cincinnati.



Jim Lichtenberg would be a point of contact on



this one;  Jim Longbottom.



                  MR. VINCENT:   I didn't get the




name, I'm sorry?



                  DR. MEDZ:   Jim Lichtenberg or



Jim Longbottom would be the people.

-------
                         42
                  MR. VINCENT:  Lichtenberg or



Longbottom.




                  DR. MEDZ:  Or Longbottom; and,




the problem that is being  encountered would be



discussed, analytical chemist to analytical chemist, to try to resolve



where the difficulties were.   If the difficulty



can be resolved and if it is sufficiently



important, we would revise the method and repropose



the method.



                  MR. VINCENT:  Thank you.  Let



me congratulate you.  I think that is an excellent




idea because when the analytical chemists can talk



to each other frequently the problems seem to



evaporate.  So I'm very glad to hear that.  Jim



Lichtenberg and Jim Longbottom are the two indi-



viduals?



                  DR. MEDZ:  And Bob Graves for




the GC/MS.



                  MR. VINCENT:  All right.  That




name,  I think, I can get;  thank you.

-------
                        43
                  MR. MADDALONE:  Ray Maddalone




from TRW.  I'm interested in the 15 industrial




wastewaters.  Do you intend to test the variances



from each one of the individual wastewaters, each



one of the contractors' wastewaters to see if there




is any significant difference in the precision



obtained from each of the waters?



                  MR. GRAVES:   Right now we do




not.  The 15 wastewaters give you a broad selection



of the type of effluents that are out in the real



world.  The precision and accuracy for these 15



wastewaters as a group will give you an idea of



what other people performing these methods on



industrial wastewater get - regardless of the




water type that they are analyzing.  But, to ans-



wer your question, no, there is no intent at this



time to take each individual laboratory as they



present their data and compare it to the other



14 laboratories.



                  MR. MADDALONE:   So there will

-------
                        44
be a single operator and overall standard devia-




tion that will be pooled from those 15 different




wastewaters?




                  MR. GRAVES:  Yes.  The thought




right now, I think, is that GC/MS should be more




free of major interfering effects than the non-




GC/MS methods.



                  MR. TELLIARD:   Anyone else?




                  MR. GRAVES:  Well, thank you.



                  MR. TELLIARD:   Thank you, Bob.

-------
                        45
   OVERVIEW OF EFFLUENT GUIDELINES' ANALYTICAL
                    ACTIVITIES

                 William  Telliard
       U.S. Environmental Protection Agency
           Effluent Guidelines Division
    I just can't say enough about our next speaker.

Most of you know Bill Telliard as a serious,

studious, scientific chemist, but he can be a lot

more than that, isn't that true, Tonie?

    This morning we would like to talk a little bit

about EGD's analytical program.  It has changed a

little bit.  As you notice, I'm not as tall as I

used to be and I don't wear a sheep's scowl.  We

have changed a number of personnel over the last

couple of years and we have trimmed down.  We're

doing less with less and I would like to introduce

the people that are now in the program.  The first

one is Lynn Beasley.  Lynn, would you stand up

to provide a picture for these folks when you talk

on the telephone.  Next to her is Susan Hancock,

Sample Control Center - you know the one - you can

-------
                        46
never get the numbers you want from her; and, my new



Dean Neptune and Mike Carter all rolled into one,




Tom Fielding over here (indicating).  Tom is also




the project officer in organic chemicals in his



spare time.



    This morning I think it's important that we look



back to where we came from and where we're going.



As you know, we were assigned the small task of



writing regulations on a number of compounds, most



of which the engineers couldn't even pronounce,



and to come up with analytical methods that were



"useable" to develop a data base to do that.



    When we started out it was a kind of rough



decision-making process, but through the efforts,



really, of a few people in our Athens Laboratory,



Ron Webb and Larry Keith, we sold the Agency on an



idea of using GC Mass Spec for looking at garbage.




We've taken the mass spectrometer out of the



closet and only because of the work of Larry



Keith and Ron Webb through the long term R&D

-------
                         47
 project were we able to do that.  As Ron



 pointed out to his management, "We were sitting




 down  here  for years and  no one  asked us what



 to do.   Now they are actually going to use some



 of this  stuff that we did."




    We set out by using  a  mass  spectrometer to




 look  at  garbage, and we  took a  number of efforts,



 realizing that we were on kind of an experimental



 path, and  out of it came a group of industrial work



 groups,  as we like to call  them.  Pulp and paper



 people had one, Chemical Manufacturers' Association,



 the iron and steel industry put forward one, the



 American Petroleum Institute put one through.  I



 remember our first meeting  with them.  We sat at



 this big round table, the  kid in the crowd was



 about 68.  He looked at us and  said you all got a



big problem here, boy.  We  said, yes, we have to



be done in a year.   Since  then we have had numer-



ous meetings like this, which I think are important,



where all of us get together and lie to each other.

-------
                        48
    We set out by doing a number of different



phases in this development.  The first phase was



the one which we affectionately called screening,



which meant we didn't know enough to say how good



we were.   After that we went back and did some




verification studies called precision and accur-



acy after the fact, using what is now basically 624



and 625 Method.  What we found was that, basically,



in treated effluents, we really didn't have too much



of a problem.  Now problem is defined by degree -



I will preempt Stanko - realizing that by "degree"



is meant varying degree.



    Again, in an effort to get better quantification,



we went to a phase called verification, which used




surrogate spiking compounds to insure or try to do



some recovery corrections.  As we all know, recov-



eries on things like phenol, particularly in



untreated or raw discharges, can vary somewhat,



from 22 to 94 percent, which gives you a rather



large window when you're trying to do some calcula-



tions.

-------
                         49
     In  an effort  to  keep growing we  initiated  the




use  of  stable labelled  isotope dilution  for GC/MS



about two and one-half  years ago...and we used



this protocol particularly in looking at the



petroleum industry,  offshore oil and gas, and  some



work in the area  of  organic chemicals.  That's



kind of where we  are today.  There are some defi-



ciencies in the program that we're going to try




to correct.  One  was the...when we started out



on the  isotope dilution we were limited on the



available compounds  to  use, and we had to go through



the  internal standard approach for those compounds



that weren't available.



     In addition we expanded the use of the protocol.



For example, in the petroleum industry we looked
not only for the 129 compounds, but also for another



group of compounds affectionately referred to as



Appendix C.   For  those of you who aren't familiar



with Appendix C,  there is another group of com-



pounds that the Agency is committed to do something

-------
                        50
 about  some day,  and we  incorporated those compounds,



 both through the development of their analogs



 and the procedure for identifying these compounds.



    The third expansion was into an area which is



 now extinct, the Synfuel industry, where we picked




 out specific compounds, again, going through the



 same workups that we did for developing the original
 129; the same standard criteria for selection



 of compounds.  The Appendix C compounds that I



 spoke about are available now as spiking material.



    We have had this ongoing development program.



 What we're looking at here is a problem that



 keeps creeping up which was addressed a little



 earlier, the cookbook versus the protocol approach



 to analytical chemistry.  Being that most of us



 here are chemists, no one likes to be handed the



method and told, you don't deviate, you don't do



 it.  It says shake it ten minutes by the clock and



 stand under the clock and shake for ten minutes,



 not nine,  not seven; you do it that way.  Of course,

-------
                         51
 this  stifles  creativeness.   We  know  we  can get by



 with  five minutes  under that clock;  it  stops




 creative thinking.  Three degrees a  minute is



 fine,  15 is better.




    We have a small problem.  We have a protocol




 and we have takers, tuners,  tighteners, fixers,



 and we end up with some questions of what is the



 quality of the data we're generating with these




 protocols.  Now, we have built-in quality assur-



 ance  practices; that is to say, a written protocol




 that  we all know you won't deviate from.  We have



 check samples which we sneak in and  you run over



 and say that's a check sample, watch it.  You can



 tell that because they all look the  same.  We



 have lab visits and lab reviews; that's where we



 all show up and everybody has a new white lab



coat,  all of the samples are in a row,  all of the



 extractors are clean,  all  of the people are smil-



 ing.  Nowadays you even have some people in the



lab, I think that's a new addition.

-------
                        52
    So I think in this growth process our next step




is to try to quantitate and insure the quality of



the data we're getting:  in one, timeliness; and,



in two, reproducibility or comparability, I guess



is a better term.  Our group is now reduced both



in the engineering staff and in the analytical



staff, and we have to look at ways in which we




can expand this program under this handicap.  That's



what I really want to talk about today.



    We want to talk about a new program.  The rea-



son we do this is so that when CMA catches up with



one protocol we can come out with another; got to



stay about six months ahead of them.   Otherwise



Stanko would have to stay in the lab and work.



So what we're attempting to do here is to establish



a basic performance over an area of prerogatives




and we're going to take some prerogatives away;



I think it's important that we do so.

-------
                         53
     The  question  is,  what  is  this  new gimmick?




 Presently, we  have  a  system where  the  laborator-



 ies,  for all of those out  there who know, receive




 their samples  and their small amount of paper work



 that goes with it,  fill out these  wonderful track-




 ing forms, and mail them back to Susan who then



 makes copies of them  and sends and mails them to



 the project engineer, who  then makes copies of



 them and mails them to the engineering contractor.



 Meanwhile, the Sample Control Center punches up the



 data in its computer.  The engineering contractor




 punches up the data in his computer; and, of course,



 we have a few transpositions or errors.



    Then, a year later, after the study is done,




 we do a statistical package of all of the data



run on various industrial categories, Oyster and



Other Shuckers, for example, and we'll tell you



how wrong you were a year ago.  Of course,  we're



all busy mailing in tracking forms.

-------
                         54
     So what we're  proposing  to do  is to cut out the



 paperwork.  Now, you  know  what that means  in govern-




 ment, you just generate more.  The first thing



 we're suggesting is a revision of  1624 and 1625



 and  update to it,  which is going to require the



 laboratories to report their data  on nine  track



 mag  tape.  You would  mail  the tape to the Sample



 Control Center and they would punch up the data.




 We would call the  engineering contractor and say



 that the data is in the machine along with the




 package we put together to give us real time



 statistics.  Therefore, the statistics would be



 run on, say, an eight plant visit, or the affec-



 tionate term, episode, whatever the hell that



 is; and, at the same  time, this would give us an



 understanding that this data that we received is



 acceptable.  That  is  to say, you must report



 performance.  In the  new protocol there will be




 a requirement of performance; no, folks, if three



degrees is in the protocol, you can't run 15.

-------
                        55
 If  it  says a 43  hold,  you can't hold it 15.  If



 the eluting time is 12 minutes, it better come



out in 12 minutes because, guess what, if it



doesn't the data is not acceptable and you get to



do it over again.  More importantly, you don't



get paid; harsh.




    Now, I know you're all interested in this new



protocol and I think that what we're looking at




here is a step towards somewhat automating the



process; one, out of necessity, due to the lack



of people; and, two, out of the fact that  we can



no longer wait and find out what we've done;



and, three, so as not to wait until the develop-



ment document is half written to find out  that



the data is no good.



    In addition to the GC/MS workup, we are also



looking at doing a similar effort  for metals,



realizing that nine track tape for metals  is not

-------
                         56
all that common right  now.   We  have been  talking




to manufacturers and some of the laboratories on



putting this program out.  What we are trying to



do is to come up with  a system of generating



quality assurance data for people to look at



which is better than that in the presently avail-



able records, whether it be for the courts or for our




own people.  Real time information to the project



engineer thereby generates some effort and




real time statistics.  We are looking at, also,



some more flexibility.  We would like somehow or



other to automate the tracking forms,  and we are



not too sure how that will happen.



    Now, all of this effort has been going on for



a while, and I guess the question is where,



when, and how.  We are presently...no,  we can't



transpose the tape.  As of yesterday we have made



the final revisions in the new protocol, which



Dale Rushneck will address, and John Norris will

-------
                         57
 show  you  how  the  tape  program  is  going  to  work.




 Barry Eynon will  tell  you what the statistics are




 going to  look like.  So all of the first three



 parts as  they relate to GC/MS are almost in



 place, and we would hope that by  the beginning of



 the month we could have enough information to



 have a number of  laboratories try using the



method, which means that the next SAS (Special



 Analytical Services) Contract will require that



 the laboratory be able to perform these functions,



report on tape, and meet the statistical require-



ments.  You will look for this in your new con-



tracts.



    Also in the contracts you will find an under-




 standing that if these criteria are not met,



a whistle goes off and  a flare goes up and you



are told that it definitely impacts you where you



are most interested, in your pocket.   The new



contracts will also contain a penalty clause for



late arrivals.  Now, we all know that there are

-------
                         58
 instrument problems, deaths  in  the  family, more



 expensive samples to come in — therefore, I



 realize that while you're making it really fat



 on our dollar; every now and then you get an outside



 sample where you can make a  few more bucks and




 my sample sits in the corner of the bench growing



 hair.  I get a phone call that says you wouldn't



 believe what happened yesterday, that data all



 set to go out, and you know  that plane crash —



 did you see that in the paper this morning?  Your



 data was in that plane crash, but don't worry, it



 will get out next week; no problem.



    So there is going to be  a penalty clause for



 time and for performance.  I think we've come far



 enough in this process that we are able to do this.



 I don't think it's going to stifle the creative,



but somewhat blunt the crooked.   The sole purpose



here is to give all of us a better handle on



where the data is and where it is coming from.  I

-------
                         59
 think  it  is  important  that  we  are  the  ones  that



 are  going  to do  this.   I hope  that some of the



 other  programs in EPA  might want to use a similar




 system; and, of  course, it's open to them.  Again,



 since  we invented everything else, it's only




 right  that it should start in  EGD.  If there are



 any  questions on this  methodology, I'll be glad



 to address them  now.   I'm off  free; good.



     A  couple of  announcements  while I've got you.



 Out at the registration desk is a set of
 historical documents, the former Norfolk, Savannah,
 and Denver proceedings, for those of you who have a



 gap  in your lives or really need something.   If



 you  want any of these, there is a sign-up sheet



and  we will make them  available to you.



     In addition, on the back table are the proceed-



ings from the Hershey meeting of a year and  a




half ago,  which I did not attend, for anyone who



didn't get them because there was restrictive print-



ing on those  copies.




(WHEREUPON, a coffee break was taken.)

-------
                        60
                  MR. TELLIARD:  Our next speaker



is from Interface, Dale Rushneck, who has appeared



on a number of these programs before.  Dale is




going to talk about the revised 1624, 1625.  Those



of you who don't know the jargon, 624/625 is




the EMSL-Cincinnati GC/MS procedure.  1624/1625



is the EGD GC/MS procedure which is a thousand



times better, hence the number.  I just wanted to



bring you up to speed.



    At this time I would like to introduce Dale



and point out when Dale started in this program



he was only 5'1".

-------
                         61
        REVISIONS A OF METHODS  1624 AND 1625

                   Dale Rushneck
                  Interface,  Inc.
                      ABSTRACT

    Methods  1624  and  1625  are  protocols  for  analysis

of the volatiles  and  semi-volatiles fractions of

the organic  priority  pollutants by isotope dilution

gas chromatography-mass spectrometry (GCMS).  This

paper presents improvements in these protocols,

directed mainly at quality assurance and quality

control (QA/QC) of the data the methods produce.

Copies of Revisions A of Methods 1624 and 1625 are

available from the EPA Sample Control Center,

P.O.  Box 1407, Alexandria, Virginia  22313.


                   INTRODUCTION

    Isotope dilution methods employ an isotopi-

cally labeled analog of each compound of interest

to track that compound through the analytical

process.   Methods 1624 and 1625 employ stable iso-

topes of the organic pollutants for this purpose.

-------
                         62
 There  are  two major  advantages  to  isotope dilution




 methods over conventional analytical methods:  first,



 the labeled compound quantifies and compensates for



 pollutant  loss in the analytical process; second, a



 spike of the labeled compound into the sample obvi-




 ates the need for a spike of the pollutant itself



 in order to determine compound recovery.  The first



 advantage  results in a more accurate analysis; the



 second effects a cost savings by reducing the number



 of analyses required.
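    As a minimal sketch of the isotope dilution calculation in one common
 form (the exact equations of Methods 1624 and 1625 are not reproduced
 here), the native pollutant concentration follows from the area ratio of
 native to labeled compound, the known amount of labeled spike, and the
 native/labeled response ratio measured during calibration.

def isotope_dilution_conc(native_area, labeled_area, labeled_spike_ug_l, rr):
    """
    Concentration of the native pollutant by isotope dilution.
    native_area, labeled_area : peak areas at the quantitation masses
    labeled_spike_ug_l        : known concentration of the labeled analog spike
    rr                        : native/labeled response ratio from calibration
    """
    return (native_area / labeled_area) * labeled_spike_ug_l / rr

# Hypothetical values: native area 45,000, labeled area 60,000, 20 ug/L
# labeled spike, calibration response ratio 1.05  ->  about 14.3 ug/L.
print(isotope_dilution_conc(45000, 60000, 20.0, 1.05))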




     The original versions of Methods 1624 and 1625



 (1) were the first application of isotope dilution



 methods on a large scale for a large number of pol-



 lutants.  Such application was possible because of



 the availability of labeled analogs of the priority



 pollutants, and the widespread use of GCMS for pol-



 lutant analysis.  Revisions A of these methods



 incorporate more labeled compounds, not only of



 the organic priority pollutants, but also of the



Appendix C and Synfuel pollutants.   In addition,

-------
                         63
 Revisions  A  incorporate  the  experiences of  several



 laboratories  (2)  in  use  of isotope dilution methods.






         OVERVIEW OF METHODS  IMPROVEMENTS




    The major areas of methods improvement are listed



 below  and  detailed in the sections following.




    1.  Provisions for analysis of complex samples.



    2.  Allowance for computerized data reduction



        and reporting.




    3.  Use of fused silica capillary columns for



        semi-volatiles.



    4.  Standardized QA/QC.




    5.  Improved criteria for qualitative determi-



        nation.




These improvements are directed at obtaining more



precise and accurate data and at providing well



defined levels of data quality.






    Analysis of Complex Samples.   Many of the waste-



water samples analyzed in support  of  effluent guide-



lines contain large quantities of  dissolved  minerals,

-------
                         64
 suspended  solids,  polymeric  compounds,  and other



 materials  which  can  interfere with analysis of the



 pollutants.  Spike recoveries of the pollutants



 range from zero  to 400 percent in these samples



 because the spike can be dissolved irreversibly



 by the matrix, or the spike  can liberate a pollu-



 tant from  the matrix.  Use of the labeled pollu-



 tants permits these  effects  to be quantified.  The



 isotope dilution method works well when greater



 than ten percent of  the labeled compound is recov-



 ered.  In  the revised methods, data are acceptable



 if recovery of the labeled compound is greater



 than 10 percent of its recovery from reagent



 water.  If not, the  volatiles fraction is diluted



 by successive factors of ten and analyzed; the



 semi-volatiles fraction is diluted by a factor of



 100, then extracted  and analyzed.  As a result,




pollutants are accurately quantified in an analyti-



cal range in which the method is known to work.
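    The acceptance-and-dilution logic just described can be sketched as
follows; the recovery values are assumed to have been computed already,
and only the thresholds and dilution factors stated above are used.

def next_step(fraction, labeled_recovery_pct, reagent_water_recovery_pct):
    """
    Data are acceptable when labeled-compound recovery exceeds 10 percent
    of its recovery from reagent water; otherwise volatiles are diluted by
    successive factors of ten, and semi-volatiles are diluted by a factor
    of 100, re-extracted, and reanalyzed.
    """
    if labeled_recovery_pct > 0.10 * reagent_water_recovery_pct:
        return "accept result"
    if fraction == "volatiles":
        return "dilute 1:10 and reanalyze"
    return "dilute 1:100, re-extract, and reanalyze"

print(next_step("volatiles", 4.0, 95.0))        # dilute 1:10 and reanalyze
print(next_step("semi-volatiles", 55.0, 90.0))  # accept result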

-------
                         65
     Further  provisions  in  the  revised methods for



analysis of  complex  samples  are use of an alter-




nate quantitation mass  or  an internal standard



method if an  interference  is present at the pri-



mary mass, use of a  lower  GC column temperature



program rate  as  an option  to resolve overlapped GC




peaks, and dilution  of  the water or extract to



bring high concentrations  of pollutants within the



calibration range of the GCMS.






    Computerized Data Reduction and Reporting.  The



number of pollutants and labeled compounds in each



sample ranges from approximately 100 to 214, with



13 discrete pieces of information required for



rigorous quality assurance of each pollutant or



labeled compound.  Clearly, a computer is required



for storage and tracking of this information.  The



methods were revised to permit maximum utilization



of computerized GCMS data systems for repetitive



operations,  with the safeguard that all  results

-------
                        66
must be verified manually by a qualified spectro-



metrist.




    Modern GCMS instruments use a calibration curve




for those compounds which have a non-linear response,



or use an averaged calibration or response factor



for those compounds which respond linearly.  The



revised methods employ a five-point calibration



(most methods use three) to better define the curve




over the calibration range.   Once the curve is



defined,  it is verified at a single point on each




shift.  Other information stored for each pollutant



is the mass spectrum, retention time (absolute and



relative), quantitation mass, and peak area.  These



data are used to search and locate each compound and



to identify and quantify it properly.
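    The calibration handling described above can be sketched as follows:
compute a response factor at each of the five calibration levels, use the
averaged factor when the response is effectively linear, and otherwise
store a calibration curve; a single-point check verifies the calibration
each shift.  The relative-standard-deviation linearity test and the
20 percent verification tolerance used here are illustrative assumptions,
not requirements taken from the methods.

def response_factors(amounts, areas):
    """Response factor (area per unit amount) at each calibration level."""
    return [a / c for a, c in zip(areas, amounts)]

def mean_and_rsd(values):
    """Mean and percent relative standard deviation of a list of values."""
    m = sum(values) / len(values)
    sd = (sum((v - m) ** 2 for v in values) / (len(values) - 1)) ** 0.5
    return m, 100.0 * sd / m

amounts = [10.0, 20.0, 50.0, 100.0, 200.0]      # ng injected, five levels
areas   = [8100, 16400, 40300, 82500, 160900]   # integrated peak areas

mean_rf, rsd = mean_and_rsd(response_factors(amounts, areas))
if rsd <= 20.0:                                 # assumed linearity test
    print("linear response; use averaged RF = %.1f area/ng" % mean_rf)
else:
    print("non-linear response; fit and store a calibration curve")

# Single-point verification on a later shift (tolerance assumed):
check_amount, check_area = 50.0, 41800
if abs(check_area / check_amount - mean_rf) / mean_rf <= 0.20:
    print("calibration verified for this shift")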



    Quality control charts (3) are generated by the



GCMS computer to determine and assure a high level




of data quality.  These charts are updated each



working shift.

-------
                        67
    Data can be reported on magnetic tape to elimi-



nate transcription errors and reduce reporting




times.






    Fused Silica Capillary Column.  The advantages



of these columns for analysis of the semi-volatile



priority pollutants have been demonstrated by



Sauter, et al. (4).  Method 1625A mandates the use



of capillary columns for the acid and base/neutral




fractions of the pollutants.  The improved GC reso-



lution provided by a capillary column is required



for separation of the large number of compounds in



these fractions when the labeled compounds are



included.






    Standardized QA/QC.  The July 1982 Revisions of



the 600 series EPA Methods for analysis of priority



pollutants in waters (5) incorporate repetitive



analyses of the pollutants spiked into reagent



water for initial and on-going tests of laboratory



performance.   Methods 1624A and 1625A incorporate

-------
                        68
these tests, also.  Additional  tests required by

the revised Methods are summarized below:


1.  Each working shift

    1.1  Mass spectrometer

      1.1.1  Spectrum validity  and resolution

        1.1.1.1  Volatiles:  p-bromofluorobenzene (BFB)

        1.1.1.2  Semi-volatiles:  decafluorotriphenyl-
                 phosphine (DFTPP)

      1.1.2  Absolute response  (suggested)*

        1.1.2.1  Volatiles:  80,000 to 150,000 area
                 for 100 ng toluene

        1.1.2.2  Semi-volatiles:  20,000 to 50,000
                 area for 20 ng phenanthrene

      1.1.3  Relative response  (to isotopic diluent
             or internal standard)*

        1.1.3.1  Response ratios by isotope dilution:
                 ± 10 percent of initial calibration

        1.1.3.2  Response factors by internal stan-
                 dard:  ± 20 percent of initial cali-
                 bration

    1.2  Gas chromatograph

      1.2.1  Resolution

        1.2.1.1  Volatiles:  <10 percent valley height
                 between toluene and toluene-d8
* variables to be investigated in inter- and intra-
  laboratory studies

-------
                    69
    1.2.1.2  Semi-volatiles:  <10 percent valley
             height between phenanthrene and
             anthracene

  1.2.2  Absolute retention times*

    1.2.2.1  Volatiles:  chloromethane in 2-4
             minutes; ethylbenzene in >30 minutes

    1.2.2.2  Acids:  phenol resolved from sol-
             vent:  pentachlorophenol >20 minutes

    1.2.2.3  Base/neutrals:  N-nitrosodimethylamine
             resolved from solvent; benzo(ghi)-
             perylene >40 minutes

  1.2.3  Difficult compound detection*

    1.2.3.1  Volatiles:  100 ng 2-chloroethylvinyl
             ether, bromoform, and 1,1,2,2-tetra-
             chloroethane

    1.2.3.2  Acids:  50 ng pentachlorophenol;
             100 ng 2-methyl-4,6-dinitrophenol;
             250 ng hexanoic acid

    1.2.3.3  Base/neutrals:  50 ng benzidine;
             100 ng di-n-butylamine,  10 ng
             β-naphthylamine

2.   Each sample

  2.1  Internal standard peak area:  within a factor
       of two of area in standard

  2.2  Labeled compound recovery:  >10 percent
       of recovery from reagent water

-------
                        70
    3.  Each sample lot  (samples analyzed on a
        given 8 hour shift for volatiles; samples
        started through  the extraction process on
        a given shift for semi-volatiles, to a
        maximum of 20)

      3.1  Blank:  all pollutants <10 ug/L

      3.2  Recovery of standards spiked  into reagent
           water*

        3.2.1  Volatiles:  85-115 percent by isotope
               dilution; 60-140 percent by internal
               standard

        3.2.2  Semi-volatiles:  85-115 percent by
               isotope dilution; 40-160 percent by
               internal  standard

    4.  Miscellaneous

      4.1  Sample carry-over (volatiles only):
           <5 ug/L

      4.2  Recording of extraction and concentration
           variables (semi-volatiles only):   initial
           and final extraction and concentration
           volumes

      4.3  Manual examination of GC peaks greater
           than height of internal standard  peak(s)
The specifications above are being revised based on

inter- and intra-laboratory testing of the Methods,

so that final specifications will reflect performance

-------
                        71
actually achievable by analytical laboratories.



Where possible, these specifications are perfor-




mance based; i.e., they require that a laboratory



repetitively demonstrate the ability to analyze



the pollutants spiked into a reagent water matrix.
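    As an illustration of how the per-sample criteria above (items 2.1 and
2.2 of the list) might be checked automatically, a short Python sketch
follows; the function name, data layout, and example numbers are
hypothetical.

def sample_passes_qc(is_area, std_is_area, labeled_rec_pct, reagent_rec_pct):
    """
    Per-sample acceptance checks:
      2.1  internal standard peak area within a factor of two of the
           area measured in the standard;
      2.2  labeled compound recovery greater than 10 percent of its
           recovery from reagent water.
    """
    area_ok = (std_is_area / 2.0) <= is_area <= (std_is_area * 2.0)
    recovery_ok = labeled_rec_pct > 0.10 * reagent_rec_pct
    return area_ok and recovery_ok

# Hypothetical sample: internal standard area 30,500 versus 48,000 in the
# standard; labeled compound recovered at 42 percent versus 96 percent
# from reagent water.
print(sample_passes_qc(30500, 48000, 42.0, 96.0))   # True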






    Improvements in Qualitative Determination.  In



the original versions, methods 1624 and 1625 speci-



fied ± 20 percent relative abundances of one to



three spectral masses plus relative retention time



for pollutant identification.  Revisions A require



± a factor of two in relative abundances of 5 masses




minimum, and all masses having abundances greater



than 10 percent of the base peak plus relative



retention time for pollutant identification.  This



change is directed at reducing the number of false



positives reported and is based on the fact that



the presence or absence of a given mass is of




greater significance than its relative abundance.



The disadvantage to use of 5 masses is that some

-------
                        72
pollutants do not produce spectra with 5 masses



with 10 percent or greater relative abundance.



As a result, the detection limit for these pollu-




tants will be proportionately raised by the reduc-



tion in relative abundance below 10 percent.  (Ten



percent was chosen based on experience with pollu-



tant spectra.)  But it is better to know that the



actual pollutant was detected than achieve a low



detection limit with high risk of a false positive.



A further requirement by the revised methods is



that a qualified spectrometrist must decide if the



spectrum is that of the pollutant; therefore, an



interference at one or two of the five minimum



masses does not preclude identification.   A subse-



quent revision of the methods will specify mass-



relative abundance data for the pollutants and



labeled compounds based on inter-laboratory studies,



but the requirement shall remain that each labora-



tory must obtain authentic spectra of the pollutants



on each instrument used for pollutant analysis

-------
                         73
 under BFB  or  DFTPP  tuning  conditions.   Understand-



 ing these  spectra is  fundamental to pollutant



 identification.
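     The abundance test described above can be sketched as follows: every
 mass whose reference abundance exceeds 10 percent of the base peak must
 appear in the candidate spectrum within a factor of two, and at least
 five such masses must be checked.  The spectra below are hypothetical,
 the retention time test is handled separately, and, as the methods
 require, the final decision would still rest with a qualified
 spectrometrist.

def spectrum_matches(reference, candidate, min_masses=5):
    """
    reference, candidate: dicts of {m/z: relative abundance, percent of
    base peak}.  Each reference mass above 10 percent of the base peak
    must be present in the candidate within a factor of two.
    """
    checked = 0
    for mz, ref_ab in reference.items():
        if ref_ab <= 10.0:
            continue
        checked += 1
        cand_ab = candidate.get(mz, 0.0)
        if not (ref_ab / 2.0 <= cand_ab <= ref_ab * 2.0):
            return False
    return checked >= min_masses

# Hypothetical spectra (m/z: percent relative abundance):
ref  = {77: 45.0, 105: 100.0, 106: 12.0, 51: 30.0, 50: 15.0, 39: 8.0}
cand = {77: 52.0, 105: 100.0, 106: 9.0, 51: 25.0, 50: 20.0}
print(spectrum_matches(ref, cand))   # True: all five checked masses within 2x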




    With isotope dilution methods, one of the most



 important  criteria  for compound identification is




 the relative  retention time between the pollutant



 and its labeled analog.  This measurement is usu-



 ally more  accurate  than the scan resolution speci-



 fied for most GCMS methods.  Revisions A specify



 a retention time tolerance of ± 6 seconds for vola-
 tiles, and ± 2 seconds for semi-volatiles, based




 on relative retention time computations.  The



 exact tolerance in relative retention time will



 be specified  on a per compound basis as a result



 of inter-laboratory studies in a subsequent revi-



 sion to the methods.  Relative time tolerances



between internal standard(s) and the labeled com-



 pounds will also be specified as a result of inter-



laboratory studies, but are typically ± 30 seconds.
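     The retention time test can be sketched in the same spirit; the ± 6
 second (volatiles) and ± 2 second (semi-volatiles) tolerances are those
 stated above.  For illustration, the relative retention time is treated
 here as the difference in seconds between the pollutant and its labeled
 analog; the methods' exact computation may differ, and the retention
 times below are invented.

def rrt_within_tolerance(fraction, pollutant_rt, labeled_rt,
                         ref_pollutant_rt, ref_labeled_rt):
    """
    Compare the pollutant's retention relative to its labeled analog in the
    sample against the same quantity measured for the calibration standard.
    """
    tolerance = 6.0 if fraction == "volatiles" else 2.0
    observed = pollutant_rt - labeled_rt                 # seconds, sample run
    expected = ref_pollutant_rt - ref_labeled_rt         # seconds, calibration
    return abs(observed - expected) <= tolerance

# Semi-volatile example: the pollutant elutes 1.4 s after its labeled analog
# in the sample and 0.6 s after it in the calibration run; the 0.8 s
# difference is within the 2 s tolerance.
print(rrt_within_tolerance("semi-volatiles", 1244.6, 1243.2, 1240.1, 1239.5))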

-------
                        74
    Of concern in the identification of a pollutant



by its relative retention time is the effect of




column overloads by a large concentration of the



pollutant or by other compounds, especially when



capillary columns are employed.  The use of a




large window for the labeled compound, coupled



with a small window for the pollutant relative to



the labeled compound nearly precludes a false nega-




tive under these conditions.  In addition, the



spectrometrist is required to examine manually all




GC peaks with heights greater than the internal



standard(s).





                     SUMMARY



    The A revisions to Methods 1624 and 1625 have



been outlined above.  Further revisions will be



made based on advances in GCMS technology and on



feedback from laboratories performing analyses



using  the revised methods.  QA specifications



will be revised to reflect performance obtainable

-------
                        75
by analytical laboratories.  Revisions A reflect




the state-of-the-art in analysis of pollutants



by isotope dilution GCMS.

-------
                         76
                     REFERENCES






 1.  Methods  1624 and  1625,  USEPA, Effluent Guide-



    lines Division, WH-552, 401 M Street, S.W.



    Washington, D.C.  20460.






 2.  The laboratories  participating  in the original



    and/or revised methods  were:  Acurex, Enviro-



    dyne, IT Analytical (Knoxville), Radian, S-CUBED,



    and TRW.






 3.  "Handbook for Analytical Quality Control in



    Water and Wastewater Laboratories," EMSL,



    ORD, USEPA, Cincinnati, Ohio  45268, EPA-600/



    4-79-019.





4.  Sauter, A.  D.,  et al.,   "Fused Silica Capillary



    Column GC/MS for the Analysis of Priority Pol-



    lutants."  "Journal of HRC & CC," 4 (1981) 366.






5.  Longbottom, J.  E., and Lichtenberg,  J.  J., Ed.,



    "Methods for Organic Chemical Analysis of Muni-



    cipal and Industrial Wastewater," EMSL,  USEPA,




    Cincinnati, Ohio  45268, EPA-600/4-82-057.

-------
                         77
               QUESTIONS  AND  ANSWERS






                  MR. WALTERS:  Gary Walters from



 Jordan Labs.   The use of  five masses for compound




 identification may increase  the possibility of



 false positives  in complex samples, because the



 greater  the number of masses, the greater the



 chance of  interference at one of these masses.



 Do you agree?



                  MR. RUSHNECK:  I would not agree



 with that  as a general statement.  Under certain



 circumstances, what you say may be true, but these



 protocols  address the broadest number of circum-



 stances.



                  MR. WALTERS:   Well, it is my




 contention that a limited number of masses might



 permit measuring compounds which have one main



 base peak  with other peaks close to 10 percent



 relative abundance at lower levels.   For example,



the polynuclear aromatics.

-------
                        78
                  MR. RUSHNECK:  If you drive that



to its logical conclusion, only one mass would be



best.



                  MR. WALTERS:  I see your point.




You  want to save time, but not make the mistake of




not  seeing something that is really there.



     In using a dilute aliquot for semi-volatiles,



is the decision to use the dilute aliquot based



on prior experience with the sample?



                  MR. RUSHNECK:  Yes.  For example,



if we know that untreated effluents from a given




industrial category have caused extraction or con-



centration problems in the past, the dilute ali-



quot would be required.



                  MR. WALTERS:  How do you handle



complex samples?  For example, a sample which con-



tains free oil?



                  MR. RUSHNECK:  I think a number




of people here have been exposed to that type of



sample.  Several options are available:   the sample

-------
                         79
can be processed  as  if  it  were  water  to  see  if



it can be extracted  and concentrated; each phase



can be processed  as  a separate  sample; or it can



be stirred vigorously while a representative ali-



quot is withdrawn.



                  MR. WALTERS:  Thank you.



                  MR. DELLINGER:  Bob Dellinger,




Effluent Guidelines  Division.  What was the lower



end of acceptable recoveries for semi-volatiles?



                  MR. RUSHNECK:  For the labeled



compounds in any  sample?



                  MR. DELLINGER:  Yes.



                  MR. RUSHNECK:  It was 10 percent




for volatiles or  semi-volatiles when compared to



recovery from reagent water.



                  MR. TELLIARD:  Thank you, Dale.



    As you noticed,  for those of you who were for-



tunate enough to be here yesterday we just had



one hell of a good time.  Anybody want to talk



about how low we should scan?

-------
                        80
    Our next speaker is Bruce Colby.  Bruce  has



also been a continually appearing act on this




road show.  Bruce is from what we used to call



S-CUBED, now named SCUBED, and is going to talk



about something...what;  see, I knew he was



to remember.  Bruce.

-------
                         81
           OPTIMIZATION OF  GC/MS  ANALYSES

                    Bruce Colby
                      S-CUBED
                   DR.  COLBY:   The  topic  that  I am

going  to be  addressing was  initially described

as Optimization of GC/MS Analyses.  For  this

presentation this  will be a description  of some

of the kinds of information that has been devel-

oped and incorporated  into the revision  that

Dale was just speaking  about.  Historically, we

have called these acceptance criteria.    The main

thrust of these criteria has centered around the

precision with which measurements should be made

in a laboratory on a very routine basis.

     The first topic I  am going to address is

the question 'when does a retention time for a

compound detected in a data file agree  with the

retention time of a standard run previously?'

The major issue in this apparently trivial question

has to do with compound identification.   If the

-------
                        82
retention time is not in agreement, even though



the mass spectrum is the same, it's improper to



make a positive identification of a compound.



Further, the more precise the retention times, the
narrower the windows within which identification
can be made, and the less likely a false positive
identification will be generated.



    The second topic I'm going to address is the




reproducibility of responses on a nominal one-month



basis.  The data I'll show were acquired over



approximately a four-week continuous activity.



The specific data are from standards analyzed on



a daily basis.  The rest of the acquisition time



was devoted to analyses of samples from the



Synfuel industry.  The samples were particularly



dirty.  They were process waters, treated and



untreated effluents, and they contained percentage




quantities of some of the priority pollutants.



The impact that samples had on the data was



quite real and I'll try to point out some of

-------
                         83
 the manifestations  of  running  these  "dirty"  sam-



 ples on a routine basis as I go along.



    The first thing, then, that we will look at is a
 plot of the absolute retention time precision in
 terms of percent standard deviation as a function of
 retention time (top).



    Below that is relative retention time preci-



 sion, and finally the isotope dilution relative



 retention time precision.  What we desire for



 predicting further retention times are the most



 precise retention times.  In other words,  we have



 run standards, so we know the retention times.  How
 well can we predict what the retention time will be
 in a sample file when we inspect it?  Clearly we




can see some interesting things in this slide.



    Initially in the run, retention time is con-
 siderably less precise than it is later on in



the run.   This is true  both for relative and abso-



lute retention times.  I should point out  that

-------
                        84
this is fused silica data; we'll get to the



packed columns in a minute.  We believe the



thing that causes this is lack of precision in re-



setting or reestablishing the initial chromato-



graphic conditions, particularly oven temperature



and carrier flow rate.  We're near ambient temper-



ature, and most GC ovens don't control well in



that area unless they have a sub-ambient regulator.



In this situation we did not have that.



    With the relative retention times, where an




internal standard is present, we start to see one



other thing; a significant dip right at  the time



that the internal standard is eluted.  This is



followed by a slow rise (decrease in precision)



as we go out to longer retention times.   This



seems to say that if we put in more than one



internal standard which is something that cer-




tainly could be done, we could improve relative



retention time precision and consequently the



ability to predict them.

-------
                         85
     This  is consistent  with  short-term  precision



 data that Drew Sauter has published and is some-




 thing that the hazardous waste people are currently



doing.  The ultimate extension of using internal



 standards, if you will, would be isotope dilution




 where each compound has a labeled analog pre-



 sent.  The result is exactly what you would



 expect; it drops percent standard deviation right



 down onto the baseline pretty near.



     The predictability of isotope dilution rela-



 tive retention times, if you will, is extremely



 good.  I have taken those points and expanded the



 scale a little bit.  The scale here (Slide 2)




 instead of being 10 percent is now 1 percent and



 you can see that precision is essentially constant



 across the figure.  Right in the beginning it's



 not quite as precise, and I think this probably



 has to do with the fact that the peaks are very



narrow at that point and that it is very easy



 for a one scan difference to have a fairly signi-

-------
                         86
 ficant  impact  on  the  ability  to  calculate  the



 relative  retention  times.




    For packed-column work, this is the volatile



 methodology.   I have created  a similar set of



 plots (Slide 3).  We  see, roughly, the same be-



 havior as with fused  silica capillaries.  That



 is initial poorer precision compared to the pre-



 cision later on in  the run where the chromato-



 graphic conditions  become more reproducible.



 When we go to  internal standards and I have only




 plotted data for  two in order to keep it easy to



 see, again there  is a dip where  internal standards



 are eluted, with best precision right at the inter-



 nal standard elution point.   This is true for



 both internal standards, although the second dip



 is much broader than the first.  Again, with



 isotope dilution we have the  best case situation,



 a closely eluted reference for each compound and



 the retention times are quite precise.  The slight



decrease in precision at short retention times

-------
                         87
 we  suspect  has  to do with  peak  width,  the  peaks



 are more  narrow in the  beginning  and a one  scan




 difference at scan 200 has a more significant
 impact than a one scan difference at scan 1600.



    Now if  we take those precision  data  and use




 them  to identify  what we really want to  know, that



 is  the  range of  time within  which we should look



 for a spectral  pattern  in  the data  file, we get




 a plot  such as  that  in  Slide 4.  Here the search



 windows are plotted  as  plus  or minus three stan-



 dard deviations  in  seconds for each compound.
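     A sketch of how such a window could be derived from replicate standard
 runs follows; the daily retention times are invented for illustration.

def search_window(retention_times_sec, n_sigma=3.0):
    """Center and total width (seconds) of a +/- n_sigma standard deviation
    search window about the mean retention time from replicate runs."""
    n = len(retention_times_sec)
    mean = sum(retention_times_sec) / n
    sd = (sum((t - mean) ** 2 for t in retention_times_sec) / (n - 1)) ** 0.5
    return mean, 2.0 * n_sigma * sd

# Invented daily standard retention times for one compound, in seconds:
daily_rts = [612.0, 618.5, 609.2, 615.8, 611.4, 616.9, 613.7]
center, width = search_window(daily_rts)
print("search %.0f +/- %.0f s (total window about %.0f s)"
      % (center, width / 2.0, width))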



    With  the absolute retention times we have



 wider windows in  the beginning.  Then they become




 relatively  constant  somewhere around 35, 36




 seconds.  The times plotted  are the total win-



 dow width you would expect to find the spectral



pattern in.   When we go to relative retention



 times we see a very marked dip in the curve



right at the elution point of the internal stan-



dard.   Again,  we could expand the internal

-------
                        88
standard technique  to  improve  the retention



time or narrow the  retention time windows down




by adding more internal standards.  The ultimate



case, again, being  isotope dilution where we



have specific internal standards present for



each compound.




    The isotope dilution data, plotted on a scale
 expanded by a factor of 10 so that you can see more
 closely what they look like, are shown in the
 fifth slide.



    The windows for searching or accepting a spec-




 tral pattern for a positive identification in the
 volatile fraction look something like what I



have shown in these curves (Slide 6) with a 100



second full scale for the window width.  Absolute



retention times are not much of a problem ini-



tially.  I'm showing a dip and a rise in the curve,



but I may be overly optimistic in the shape and



may be weighing this last point a little too



heavily.   Nominally, however, it's fairly constant

-------
                         89
 across  the  retention  time  scale.   When  we  go  to



 the  internal  standards, we  see the dips at the



 elution points  for the internal standards.



 Clearly, more than one internal standard improves,



 i.e., narrows,  the windows.  We can reference the




 early ones  to correspond to the first internal



 standard and  the later ones to the second  internal



 standard.   Historically, our lab had been using



 the  internal  standard which is eluted most closely



 to the compounds we are attempting to identify.



 Now we know the cross-over is not as straight



 forward as  that, so we have moved the cross-over



 point down  to here, decreased the windows slightly



 and believe we are doing a more credible job.



 With isotope dilution, again, very, very narrow



 windows can be used due to the highly reproducible



relative retention times.



    In tabular form the plotted data looks some-



thing like this (Slide 7)  for fused silica preci-



sion.  The absolute retention times came out to

-------
                       90
be on the average about 2.27 percent deviation,



but keep in mind that there is a distribution



to these data which is not a normal random distri-



bution.  There are significant trends within the



result so this is just a ballpark kind of number




to fall back on.  Relative retention times are



roughly twice as precise using a single internal



standard and somewhere around 20 times as precise



using isotope dilution.  The effect of precision



on the search windows within which one would ac-



cept agreement of a spectral pattern results in



a nominal 40 second window for absolute retention



times, and about half that (20 seconds) for rela-



tive retention times.  This means that we have cut



down the quantity of data that must be processed



by a factor of two by using one internal standard.



We have also cut down the possibility that a similar



spectral pattern will exist in the window and re-



sult in a false positive identification.  We cut



the window down to a very, very small one by going

-------
                       91
 to  isotope dilution.  With a one second scan you



 are looking at one standard deviation on the



 order of a third of the scan or a little less.



 Of  course, we can't really make any technical



 sense out of a partial scan.



    For the packed-column VOA data, precisions are



 nominally the same as they were with the fused



 silica capillary data for absolute retention



 time and relative retention time.  Remember, how-



 ever, we had a factor of 10 improvement with the



 fused silica data.  With packed columns we're




dealing with peaks which are much wider, we're



 scanning slower,  and this results in somewhat



lower precision.




    The windows,  again,  are very similar to what we



had with fused silica,  nominally 40 and 20 seconds



but not quite as  narrow  for the isotope dilu-



 tion technique.   We pretty much settled on the



plus or minus three standard  deviation  numbers



for acceptance criteria  and,  in a short term, we

-------
                       92
would  expect  people  to be  able  to do  that.   The



windows are sufficiently small  so that even  a




three  sigma window is easy to work with from a



data handling standpoint.



    The second area  that we get into now has to do




with the response precision of  the standards that



were analyzed.  The  data we see first is for fused



silica runs of the base/neutral and acid fractions.



With an external standard situation we see preci-



sions which are nominally equivalent, averaging



about 34 or 35 percent standard deviation for



either fraction.




    When we go to a conventional internal standard,



in this case the difluorobiphenyl Dale mentioned



earlier, we see a considerable improvement, about



a factor of two for the base neutrals and not



quite that with the acids.   I'll get into some



of the difference that appears to exist here in



a moment.  This is sort of like the retention



time precision improvement.

-------
                        93
     With  isotope dilution we  see roughly another



 factor of  two  improvement in  terms of  the response




 reproducibility.  We expect standard analysis



 results to be  this  precise day-after-day-after-day;



 if it's not, something is wrong with the equipment,



 the  standard,  the analyst, or possibly all three.



     The interesting thing to  note is that there



 seems to be a  fairly substantial difference in




 the  internal standard precision for base/neutral



 and  acid fraction compounds.  The quick thing




 to note, of course, is that our internal standard



 is essentially a neutral compound and that perhaps



 there is a relationship here.  Consequently,  we



 took the data and re-normalized it to 2,4,6-tri-



 chlorophenol.  This compound is eluted right



 next to difluorobiphenyl so its retention time



 is the same.  Also, its quantitation mass is  with-



 in a few masses of  the mass used with difluorobi-



 phenyl so we expect no major spectral impact  of



using the  trichlorophenol.   When we do that

-------
                       94
renormalization  the  acid  fraction precision  im-



proves markedly, roughly  about a factor of 2.




    In looking at these data it was pretty clear



that there were  several other compounds that



seemed to behave more like the acids when we did




this renormalization.  When we took those compounds



which are more precise with the trichlorophenol



as the normalizing response then we have got,



again, an improvement in  the situation in both



cases.  Now, we've taken  some previously so-called



base/neutrals and treated them as if they were



acids and picked up  some  improvement.  The degree



of improvement is shown here where all eight of



the phenols present  improved on the average 37



percent by going to  the trichlorophenol reference.



All of the phthalates present improved by 35 per-



cent.   Isophorone improved 40 percent and nitro-



benzene improved 60 percent.   These are all



compounds that are somewhat more polar due to



functional groups "hanging out" of them that

-------
                       95
probably  tend  to behave differently  from the



difluorobiphenyl.  So there appears to be some



sort of a chemical consideration here and that



is quite  interesting.  What it seems to say is



that in dealing with retention times, one would




like to use the reference most closely eluted in



time as the reference but for quantitation, to



use some other compound present in the run to do



response normalization calculations.



    We also looked for a correlation between res-




ponse precision and the difference between quanti-




tation mass of the reference compound and the tar-



get compound,  for conventional internal standard



situation.  We found what may be a weak but not




a very convincing correlation.  We expected that



even if DFTPP criteria are met,  there is still



room for a great deal of latitude in terms of



the response factors one would expect to get and



that as the difference in mass between target and



internal standard increases, we  would expect

-------
                       96
response precision  to become poorer.



    From these data, it's not totally clear that




this is the case, but it is easy to argue that



it could.  I was surprised that something much



more dramatic didn't show up here, but I think



what it says is that tuning with DFTPP is useful.



    Overall, the use of an internal standard im-



proves precision, but you have to give up a piece




of information in the process in that normalization



takes place and if the instrument's performance is



degrading in an absolute sense this might not be



detected.




    When we go to isotope dilution there is about



a factor of three gain in precision and the pre-



cision for isotope dilution standards is extremely



good.



    Finally, I took a quick look at the packed ver-




sus fused silica capillary column data.  I'm not



sure this is really valid owing to differences in



method but I did it anyway.

-------
                        97
     When we look at  the retention  time windows  it



was  rather striking  to find  them so  close, both




for  absolute area or external standard and for  in-



ternal standard.  Isotope dilution was not nearly



as comparable, perhaps because we are in an area




where  scan rate and  peak shape become the limiting



factors affecting precision.  There was also fairly



good agreement in external standard, or raw area,
precision.  The same is true for the internal



standard data.  I was surprised to see these data,



but also encouraged to see them because I think



what it's saying is that we are looking at numbers



which represent what can be done with the methodo-



logy and not what a single operator or instrument




is doing.   With isotope dilution relatively the



same kinds of precision again.  It's interesting



to see that an external standard, an internal



standard,  or an isotope dilution approach will



yield results not highly dependent  upon whether



the approach is geared around packed columns or

-------
                       98
fused silica capillary columns.



    The general conclusion we come to, and I




brushed on these earlier, is that the data tend



to support the conclusion that one should calculate



relative retention times using the nearest eluted




internal standard, and that the relative response,



be it isotope dilution or internal standard, be cal-



culated using an internal standard which is chemi-



cally similar to the target compound.  Naturally,



in both of these cases the ultimate would be iso-



tope dilution.   One can also make use of this



information with internal standard approaches



and expect some improvements.
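    These conclusions can be sketched as a simple selection rule, assuming
a small table of available internal standards with retention times and a
rough chemical-class tag; the retention times and tags below are invented,
and the class tag is a simplification of the "chemically similar" judgment
made in practice.

# Illustrative reference selection following the conclusions above.
INTERNAL_STANDARDS = [
    # (name, retention time in seconds, simplified chemical class tag)
    ("difluorobiphenyl",      905.0, "neutral"),
    ("2,4,6-trichlorophenol", 912.0, "acid"),
]

def pick_rt_reference(target_rt):
    """Nearest-eluting internal standard, used for relative retention time."""
    return min(INTERNAL_STANDARDS, key=lambda s: abs(s[1] - target_rt))[0]

def pick_response_reference(target_class):
    """Chemically similar internal standard, used to normalize responses."""
    for name, _, cls in INTERNAL_STANDARDS:
        if cls == target_class:
            return name
    return INTERNAL_STANDARDS[0][0]          # fall back to the first standard

# A phenol eluting at 900 seconds:
print(pick_rt_reference(900.0))          # difluorobiphenyl (nearest in time)
print(pick_response_reference("acid"))   # 2,4,6-trichlorophenol (similar chemistry)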



    This brings me to the end of my data.  I would



be willing to address any questions that you may



have at this time.

-------
                                  98a
[Figure:  retention time precision, percent standard deviation (% SD)
 versus retention time (sec), shown for absolute, relative, and isotope
 dilution relative retention times.]

-------
                                  98b

[Figure:  retention time precision (% SD) versus retention time;
plot not legible in the transcription.]
-------
                                  98c

[Figure:  ±3 SD retention time windows (sec) versus retention time;
plot not legible in the transcription.]
-------
                                  98d

[Figure:  ±3 SD retention time windows (sec) versus retention time;
plot not legible in the transcription.]
-------
                     98e

                  BNA-FSCC
         MEAN TARGET COMPOUND
        RETENTION TIME PRECISION

                                        PRECISION     ±3SD WINDOW
  METHOD                                  (% SD)         (sec)

  Absolute Retention Time                  2.27           39.4
  Relative Retention Time                  1.21           23.9
  Isotope Dilution Relative
    Retention Time                         0.085           1.8

-------
                  98f

        VOA - PACKED COL.
    MEAN TARGET COMPOUND
    RETENTION TIME PRECISION

                                        PRECISION     ±3SD WINDOW
  METHOD                                  (% SD)         (sec)

  Absolute Retention Time                  1.30           39.1
  Relative Retention Time                  0.60           20.0
  Isotope Dilution Relative
    Retention Time                         0.25            8.4

-------
                                  98g

[Table:  BNA-FSCC mean mass spectrometer response precision (% SD)
by method and fraction (acid, base/neutral, total); values not
legible in the transcription.]
-------
                                  98h

[Table:  internal standard mean mass spectrometer response precision
(% SD) by fraction; values not legible in the transcription.]
-------
                                  98i

[Table:  response precision improvement obtained by changing the
internal standard assignment for selected compounds; values not
legible in the transcription.]
-------
                                  98j

[Table:  mean mass spectrometer response precision by method;
values not legible in the transcription.]
-------
              98k

     VOA - PACKED COL.
 MEAN MASS SPECTROMETER
   RESPONSE PRECISION

       METHOD                %SD

   External Standard         28.7
   Internal Standard         10.6
   Isotope Dilution           3.7

-------
                                  98l

[Table:  mean mass spectrometer response precision (% SD), packed
column versus fused silica capillary column, by method; values not
legible in the transcription.]
-------
                98m
         CONCLUSIONS
• Calculate RELATIVE RETENTION TIMES
  using NEAREST INTERNAL STANDARD

• Calculate RELATIVE RESPONSE using
  MOST SIMILAR INTERNAL STANDARD

-------
                        99
              QUESTIONS AND ANSWERS






                  MR. SAUTER:  Drew Sauter with




the Environmental Monitoring Systems Lab  in Las



Vegas.  The only thing I would like to point out,



Bruce, for those who are using multiple internal
standards, is that your internal standard data,
which made for an interesting presentation, are
somewhat biased towards the worst case of internal
standardization.  That is, there is only one
internal standard here; multiple internal standards,
which approach isotopic dilution in the infinite
extrapolation, would provide in many cases better
relative retention times.  I think that's really
the gist of what you're saying.



   So that internal standard quantification and



relative retention time precision information



that's been given should be considered as a worst-
case internal standard comparison.  Would that



be accurate?

-------
                       100
                  MR. COLBY:  One internal stan-




dard is the worst case of an internal standard




technique; yes, definitely.



                  MR. BEIMER:  Bob Beimer, TRW.



Did you try, in terms of the most chemically simi-
lar type of compound, to look at the effect of the
difference, the absolute delta in mass from the
quantitation mass of what you are analyzing to the
quantitation mass of the internal standard, to
determine if the magnitude of that difference has
an effect on the ability to quantitate; did I make
myself clear?



                  MR. COLBY:  I think so.  I




think the answer is, yes, but that's a difficult



question to address.  Probably what we ought to



do is sit down and I can describe in more detail



what we really did.



    We expected that a large delta between the two masses
used to calculate a relative response would lead
to an equally large delta on

-------
                        101
 a day-after-day basis; that is, slight changes in
 the tuning would yield measurable or noticeable
 changes in the relative response.  Is that what
 you're asking?



                  MR. BEIMER:  Well, let me put




 it in terms of an example.  If you used phenol as



 an internal standard for pentachlorophenol,  would



 you get a better number  than if you used a compound




 which had a mass closer to the pentachlorophenol
 quantitation mass of 266?



                  MR. COLBY:  The data that I




 have presented here supports the argument that



 you would be better to use phenol as a reference



 for pentachlorophenol even though the mass delta



 is very large.  I think if you made a point of



doing crummy DFTPP tunes so that you skewed in



 one direction one day and skewed in the other



direction the next day, this would not hold.



                  MR. TAYLOR:   Paul Taylor from



California Labs.   All of this is all in the recs

-------
                        102
because you have polarity problems and chromatography
problems, and it's not simple to separate them.  The
example of phenol and pentachlorophenol is a
particular example of that, where you are not only
dealing with a difference in mass, but a difference in
retention time and perhaps an affinity for the
column.



  So that's probably the reason you have a problem
with the correlation coefficient between response
factor precision and delta mass.



                  MR. COLBY:  In this particular




instance we had data and we just went back and




looked at them to see what we could do.  If we



set out to prove that delta mass can have an



effect, we certainly could establish it, but I agree
with you completely, there are a lot of things
that come into play in this, and we're only trying to



sort out the ones that seem to have enough impact

-------
                       103
that we can clearly identify them as significant



and try to make the best of that information.




                  MR. TAYLOR:  So basically your



most similar internal standard definition is



empirical, whatever works?



                  MR. COLBY:  It definitely is.



                  MR. TAYLOR:  And that's reason-



able.



                  MR. SAUTER:  Drew Sauter, again,




with EPA, Las Vegas.  I think what your data show,
Bruce, and we've discussed this before, is that the
instrument, once tuned, is tuned with some error.



   Another point, though:  it all depends on how
one wants to look at the data, and whether you are
looking at it from an inter- or intralaboratory
perspective.  I could see problems, and other people
have published on this.

-------
                        104
    We approached GC/MS a long time ago on this
point, and relative response factor precision does
seem to expand away from the internal standard.
I could see some chemical sense in some of the data
that you give; obviously, it supports the idea that
similar compound classes should be used as internal
standards, which is effectively isotopic dilution.
One might find a response factor precision problem
with phenol relative to pentachlorophenol, for
example, strictly because the mass difference is
large; but, within the limits of the DFTPP tune, I
think your data show that that difference is not
that great.



                  MR.  RUSHNECK:   Dale Rushneck,




Interface.   In those instances in which you have



compared pack and fused silica capillary, were



the labeled compounds  present as well  as the



unlabeled compounds in all of the samples you



analyzed;  that is,  was there a carrier effect?



                  MR.  COLBY:   I  am not totally

-------
                        105
 sure  I  understand  your question, but  I don't



 expect  much of a carrier effect with a few hundred



 nanograms of most  labeled compounds.  If we had



 put a microgram in, perhaps.




   Can  you rephrase the question if I didn't ans-



 wer it?



                   MR. RUSHNECK:  Yes.  When you



compare the capillary and the packed column and you inject




 the samples, were  the labeled compounds always



 present in those samples?



                   MR. COLBY:  Always, yes.




                   MR. RUSHNECK:  Well, I agree




 with you.  The carrier effect is most pronounced



 at the higher levels, but even 100 nanograms is



 sufficient, I think, and that accounts for what
 surprises me, which is how good the packed column
 data are.



                  MR. COLBY:  Well, it is volatiles




data;  if it were base/neutral fraction data,  I ex-



pect it would be much worse.

-------
                        106
                  MR. TELLIARD:  Any other ques-




tions?




    Thank you.  We'll break for lunch and we are



due to reconvene at 1:30.



(WHEREUPON, the lunch recess was taken.)



                  MR. TELLIARD:  I would like




to start the afternoon session of the continuing



saga, fun in the labs.   In the afternoon session,



our first speaker is George Stanko from Shell



Development.  George is  one of the rare people



who have been with this ongoing scenario since



the very beginning.  There was water, darkness



and George.  Darkness didn't seem to bother



George at all.



    When we started in this program, George was



one of the harder people to deal with because



he had data.  Most of the other members of the



committee were your basic coffee cup chemists,



but George went out and did some nasty things,



he actually got some numbers.   Therefore, it

-------
                       107
made it a little bit more difficult for me to



deal with him.  Today,  George is  also going  to




show us some of his new data on 624 and 1624 on



the volatile fraction and the GC/MS procedure.

-------
                        108
   ROUND ROBIN STUDY OF EPA METHODS  624 and 1624
          FOR  VOLATILE  ORGANIC  POLLUTANTS

                  George  H.  Stanko
            Shell  Development  Company
                      ABSTRACT


     A round robin  study of  EPA Methods  624 and  1624

 for  volatile organic  pollutants was conducted  at

 eight laboratories.   The study was designed to

 determine  interlaboratory and intralaboratory  preci-

sions and accuracies of EPA Methods 624 and 1624;

 to evaluate laboratory performance for EPA Standard

 Samples; and to explore the use of methods of stan-

 dard addition.  The precision, accuracy, variability,

 and uncertainty in the resulting data for the

methods studied are reported and discussed.

-------
                        109
                    INTRODUCTION






      Previous experience with Gas Chromatography/



Mass  Spectrometry (GC/MS) analytical methods for



priority pollutants in wastewater prompted a study




to evaluate the performance of a selected number of



industrial and contract laboratories that routinely



employ GC/MS methodology for the analysis of priority



pollutants in wastewater.  The study was limited to



EPA Methods 624 and 1624 for a selected number of



volatile organic pollutants, and was designed to



determine the precision and accuracy of the metho-



dology, to compare Method 624 with Method 1624,



and to explore the use of method of standard addi-



tion with GC/MS methodology.  The prime focus of



the study was directed toward the GC/MS methodology



by holding other variables associated with sampling,




sample preparation, preservation, and holding time



constant.  The resulting data and the statistical



analysis of the data reflect only the variability



and uncertainty associated with the GC/MS methodol-



ogy.

-------
                       110
     Arrangements were made with eight industrial



and contract laboratories to analyze identical sam-



ples within a specified time and report the data



within 30 days.  Each of the participating labora-



tories was furnished a set of nine samples, along



with instructions which outlined procedures and




goals of the study.  Participants were advised that



each sample had been spiked with nine deuterated




compounds.  The identities of the deuterated com-



pounds and the theoretical concentrations for each



of the components in a stock spiking solution were




provided, as well as a portion of the stock spiking



solution to facilitate calibration.

-------
                        111
                        TEXT






 Samples  for Study




     A total  of  nine  samples  were prepared for the



 study using an organic-free water (Super Q) or a




 chemical plant effluent.  Three of the samples



 (1, 2, and  9) were prepared with Super Q water, and



 six of the  samples (3-8) were prepared with portions



 of the same chemical  plant effluent which was col-



 lected as a single grab sample.  Table 1 lists the



 theoretical spike concentrations for the nine sam-



 ples used for the round robin study.



     Samples No. 1 and No. 2 were prepared in an



 organic-free water matrix using EPA Standard No. 2.



 The concentrations of the nine priority pollu-
 tants ranged from 22ug/l to 228ug/l.  These samples



 also were spiked with the nine deuterated compounds,



 all at the 100 ug/1 level.  Samples Nos. 3-8 were



 prepared using the same chemical plant effluent.



 Sample No. 8 was the chemical plant effluent that



was spiked only with the deuterated compounds.



Samples Nos. 3-7 were prepared with the same chemical

-------
                        112
 plant  effluent  and  by blind  spiking  with  eight



 selected  priority pollutants, at three concentra-



 tion levels ranging from 50  ug/1 to  200 ug/1.



 Samples Nos.  3  and  6  and Nos. 4 and  7 were  blind



 duplicates.   Sample No. 9 was prepared by spiking




 the eight priority  pollutants into an organic-free



 water matrix  at the 100-150ug/l level plus  spiking



 with the nine deuterated compounds.






 Purpose for Samples




     Samples No. 1 and No. 2 were prepared  to evalu-



 ate laboratory  performance using an  EPA standard



 (No. 2) and EPA Effluent Guidelines Division (EGD)



 criteria for acceptable performance.   The data for




 Samples No. 1 and No. 2 were also included  in the



 evaluation of between-laboratory (inter-laboratory)



 precision and accuracy.  The blind duplicate pairs



 (Samples Nos. 3 and 6 and Samples Nos.  4 and 7)



 were prepared to assess within-laboratory (intra-



laboratory) precision on an individual  laboratory



basis or on a pooled average basis.  Samples Nos.



 3, 4, 5, and 8 were used to evaluate  the method of

-------
                        113
standard addition.  The primary purpose  for Sample



No. 9 was to identify any particular matrix problems



with the chemical plant effluent.  Sample No. 8



represented the chemical plant effluent and the re-



sulting data were used to correct Samples Nos. 3-7



for background.  All nine samples were used to com-



pare the precision and accuracy of EPA Methods 624



and 1624.






Initiation of the Round Robin Study



     A single grab sample of a chemical plant efflu-



ent was collected and all samples were prepared at



Shell's Westhollow Research Center on July 19, 1981.



After preparation, the sample vials were divided into



sets of nine and each set was placed in one-pint



bottles with bakelite tops and aluminum foil liners.



The pint bottles were then packed in wet ice inside



of foam ice chests and sent by Federal Air Express



to the participating laboratories.  The samples




were received by all participants within 24 hours



and were analyzed within 7 days of being shipped.




All of the data were returned within 30 days.  The

-------
                        114
data  obtained  for  the  study  are  rather  massive  and




have  not been  furnished in this  report.  Engineering-



Science, Inc.  (3109 North  Interregional, Austin,




Texas  78722)  was  retained to statistically analyze



the resulting  data and to  prepare a technical report.




The information and results  from the study provided



in this paper  summarize the  information from the



Engineering Science Report.






Data Analysis  Methods



     Most of the calculations involved with the data




analysis were  performed on a WANG System 2200 compu-



ter.  Statistical  programs were written in BASIC to



calculate accuracy, interlaboratory and intralabora-



tory precision, and to check for outliers in sample



population distributions.   Programs were also



written to determine the points for plotting the



method of standard addition, and to perform linear



regressions.




     Compounds reported by the laboratories only as



"detected"  in any  sample were not given a quantita-



tive value, and therefore  were not  included in  any

-------
                        115
of the statistical calculations.



     The criterion for identification of outliers



was set at the 90 percent confidence level.  Checks



for outliers were made for interlaboratory precision



calculations with respect to the geometric mean con-



centration for a sample.



     Outliers were not removed from the data sets



in the calculation of mean accuracies and related



precision estimates.  It is believed the inclusion



of any outliers would present a more representative



picture of the average laboratory accuracy which



could be expected for the volatile pollutants analy-



zed in this study.



     Interlaboratory (between) precision with res-



pect to the geometric mean concentration was calcu-



lated for each compound in a sample.  The concen-



trations were assumed to follow a log-normal distri-




bution, as was assumed in previous and similar



studies.  The pooled interlaboratory standard devia-



tion (sp) for each compound was calculated in the

-------
                       116
following manner:

     sp = [ Σ νi si² / Σ νi ]^(1/2)      (sums over i = 1 to n)

where

       si = the standard deviation for sample i

       νi = the degrees of freedom associated with
            the mean for sample i

       n  = the number of samples
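
As an illustration only, this pooling can be written as a short
Python function (the numerical values below are arbitrary and are
not data from the study):

    import math

    def pooled_standard_deviation(stds, dofs):
        # sp = [ sum(v_i * s_i**2) / sum(v_i) ]**0.5, where s_i is the
        # standard deviation and v_i the degrees of freedom for sample i.
        numerator = sum(v * s * s for s, v in zip(stds, dofs))
        return math.sqrt(numerator / sum(dofs))

    # Arbitrary log-scale standard deviations for three samples
    print(pooled_standard_deviation([0.22, 0.31, 0.27], [7, 6, 7]))  # about 0.27
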
     Intralaboratory (within) precision estimates

were made for those samples having replicate analy-

ses.  Samples No. 3 and No. 6 were replicates

and Samples No. 4 and No. 7 were replicates.  In

addition, replicate analyses data were provided by

one laboratory for Sample No. 8 for deuterated com-

pounds, and in Sample No. 9 for both deuterated and

nondeuterated compounds.  The intralaboratory preci-

sion was calculated for each compound measured by a

laboratory in a sample, assuming a log-normal dis-

tribution.

-------
                       117
For any two replicate values (x1, x2) in a log-normal
distribution, the standard deviation s (loge base) is:

        s = | ln (x1/x2) | / √2

The pooled intralaboratory precision (sp) for n
replicate pairs is:

        sp = [ Σ si² / n ]^(1/2)      (sum over i = 1 to n)

where  si = the standard deviation of the replicate pair
            mean for a compound in sample i
Variability factors as used in this report define

the 95 percent confidence limits in relation to a

calculated mean from a data set.  When the mean is

a geometric mean calculated from a log-normal dis-

tribution, the variability factor is multiplicative

rather than additive as with an arithmetic mean.

The upper and lower variability factors  (Vjj, VL,

respectively) are defined for a geometric mean as

follows:

-------
                       118



       Vu = exp (t • s)

       VL = exp (-t • s) = 1 / exp (t • s) = 1 / Vu

where  t = the value of Student's t distribution at
           the 2.5 percent probability level (two-
           tailed distribution for the 95 percent
           confidence level) for the degrees of
           freedom associated with the sample mean

       s = the sample standard deviation

The upper and lower 95 percent confidence limits
(U, L) are then:

       U = x • Vu

       L = x • VL

where  x = the sample geometric mean.


     The meaning of upper and lower variability

factors is best illustrated by an example.  The

upper interlaboratory variability for a sample with

a known mean pollutant concentration of 100 ppb was

determined to be 1.28.  Based on this estimate of

interlaboratory variability,  95 percent of analyses

from all laboratories for this sample would fall

-------
                       119
between 78 ppb (100 ppb / 1.28) and 128 ppb
(1.28 x 100 ppb).



     The repeatability factors define the 95 per-
cent confidence interval for the difference between
two analyses when the mean of the sample is not
known.  The variance related to the difference
between two values (x1, x2) with the same standard
deviation s is:

     V(x1 - x2) = 2s²

Therefore, the standard deviation related to the
average difference between two values is √2 • s.
The repeatability factors for the upper and lower
95 percent confidence limits (Ru, RL, respectively)
relative to x as defined above are:

     Ru = exp (√2 • t • s)

     RL = exp (-√2 • t • s) = 1 / Ru





     To illustrate, a laboratory analyzes a sample



and reports a concentration of 100 ppb.  Based on



an interlaboratory repeatability factor (Ru) of 1.42,




95 percent of the values of a second analysis per-

-------
                        120
 formed  by any other laboratory  would  fall  within
70 ppb (100 ppb / 1.42) to 142 ppb (1.42 x 100




 ppb).   The  95  percent  confidence  range  as defined




 by  the  repeatability factors is a good  indication




 of  the  range of  values that can be expected when




 only a  single  analysis is reported.
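
For illustration, the two numerical examples above can be reproduced
with a few lines of Python (an editorial sketch; only the 1.28 and
1.42 factors are taken from the text):

    import math

    def confidence_limits(geometric_mean, factor):
        # Multiplicative 95 percent limits: lower = mean / factor,
        # upper = mean * factor.
        return geometric_mean / factor, geometric_mean * factor

    def factors_from_sd(t, s):
        # Variability and repeatability factors from Student's t and the
        # log-scale standard deviation s.
        return math.exp(t * s), math.exp(math.sqrt(2.0) * t * s)

    print(confidence_limits(100.0, 1.28))  # (78.1, 128.0) ppb, variability example
    print(confidence_limits(100.0, 1.42))  # (70.4, 142.0) ppb, repeatability example
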




     The recovery  (accuracy) of each laboratory was




 calculated  for each  compound in Sample Nos. 1 through




 9.  Accuracy was reported as a percent of the known




 concentration.   Interlaboratory and intralaboratory




 mean accuracies  were calculated as simple arithmetic




 means.  Standard deviations and pooled standard




 deviations were also calculated in the usual manner.




     Deuterated analogs of nine compounds were spiked




 into all the sample  solutions in order to compare




 Methods 624 and 1624 in the analysis of these vola-



 tile organic pollutants.   Method 1624 is similar




 to Method 624,  except that the  recovered fraction



of the deuterated analog  spike  of a compound is




 used to adjust  the analytical value.   Recovery cor-




rection incorporated in Method  1624 is illustrated

-------
                        121
by the following equation:

     C1624 = C624 / (CD624 / CDs)

where

     C1624 = the recovery corrected concentration by
             Method 1624

     C624  = the measured concentration of the compound
             by Method 624

     CD624 = the measured concentration of the deuterated
             analog of the compound by Method 624

     CDs   = the theoretical (spiked) concentration of the
             deuterated analog of the compound
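
As a sketch of this correction (the numbers below are invented for
illustration and are not results from the study):

    def method_1624_value(c_624, cd_624, cd_spike):
        # C1624 = C624 / (CD624 / CDs): divide the Method 624 result by the
        # recovered fraction of the deuterated analog spike.
        return c_624 / (cd_624 / cd_spike)

    # Example: 92 ug/l measured; the deuterated analog, spiked at 100 ug/l,
    # was recovered at 85 ug/l, so the corrected value is about 108 ug/l.
    print(method_1624_value(92.0, 85.0, 100.0))
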
     Deuterated analogs were not spiked for six of

the compounds analyzed in this study.  Only Samples

No. 1 and No. 2 contained these compounds; these sam-

ples were prepared from an EPA volatile pollutant

standard.  Deuterated analogs of all the compounds

analyzed in the effluent matrix samples, however,

were spiked into the sample solutions.  Therefore,

direct comparison of Methods 624 and 1624 relative

to the effluent matrix samples could be easily made.

-------
                        122
      Spiking of  the  samples with nondeuterated com-



pounds was designed  so that three spiked concentra-




tions of each compound were present in Samples Nos.



3, 4, and 5.  This spiking arrangement allowed the



interlaboratory  mean concentration for each spike




level of a particular compound to be plotted, ver-



sus interlaboratory mean measured concentrations in



the manner of the method of standard addition.  Linear




regression was then performed on the data set.



     The analysis of these plots can define:  (1)



the relative response to spike addition;  (2) the



expected range in measured values at a given confi-



dence level for  a given spike concentration; (3) the



base level of the compound in the unspiked sample



(extrapolation to zero spike addition);  and (4) the



expected range in values at very low concentrations



for a given confidence level.



     Samples Nos. 3,  4, and 5 also were spiked with



deuterated analogs of the sample compounds so that



Method 624 could be compared to Method 1624 rela-




tive to  the method standard addition.

-------
                        123
 Eight  compounds  were  analyzed  in  this manner.

 They were:

     Benzene
     1,1-dichloroethane
     1,2-dichloroethane
     1,2-dichloropropane
     Ethylbenzene
     1,1,2,2-tetrachloroethane
     Toluene
     1,1,1-trichloroethane


Laboratory Performance

     There are a number of criteria that may be used

to assess the performance of laboratories.  The EPA

EGD has considered a laboratory's performance ac-

ceptable for guideline development purposes when

standards in organic-free water are found to be

within the range of minus 50 percent and plus 100

percent of the true value.  Samples Nos. 1 and 2

were prepared with EPA Standard Solution No. 2.

The resulting data from all laboratories for Samples

Nos. 1  and 2 easily met the EPA EGD criteria for

volatile priority pollutants.  All of the observa-

tions for Samples Nos. 1 and 2 fell within minus 40%

to plus 25% of the true value.  If one considers

-------
                        124
 only  the  lowest  and  highest reported values  (extremes)



 from  the  eight laboratories for the nine components



 in Samples Nos. 1 and 2, the range of low values




 was from  9  -  40% with a mean of 23%, and the range



 of high values was from 0 - 25% with a mean  of 12%.



 Sample No.  9  was similar to Samples Nos. 1 and 2,



 since it  was  composed of standard compounds  in an



 organic-free  water matrix.  Again, all of the




 laboratories  met the EPA EGD criteria for acceptable



 performance for  Sample No. 9.
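
Stated as a simple check (an editorial sketch of the criterion, not
part of the study protocol):

    def meets_epa_egd_criterion(measured, true_value):
        # Acceptable when the result is within minus 50 percent and
        # plus 100 percent of the true value.
        return 0.5 * true_value <= measured <= 2.0 * true_value

    print(meets_epa_egd_criterion(60.0, 100.0))  # True  (minus 40 percent)
    print(meets_epa_egd_criterion(45.0, 100.0))  # False (below minus 50 percent)
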






 Precision of  Method 624



     The  interlaboratory precision of Method 624 for



 this study with  respect to each compound in  this



 study is presented in Table 2.   It should be noted



 that the data summarized in Table 2 assume that



 errors are independent of concentration.  Table 2



 also includes the calculated upper 95 percent confi-



dence level factors for variability (Vu) and repeat-



 ability (Ru) on  a compound specific basis.    Average



variability and  repeatability factors were  calculated

-------
                        125
 for  the deuterated  and  nondeuterated  compounds as




 independent groups.  The assumption made in calcu-




 lating such averages is that a homogeneous set of




 variance exists (i.e., all compounds have the same



variabilities or errors).




     The tabulated data show that the variability




 factors (Vu) for nondeuterated compounds ranged from




1.14 for bromoform to 1.76 for 1,2-dichloropropane.




Variability factors (Vu) for the deuterated com-




pounds ranged from 1.30 for both 1,2-dichloroethane-




d4 and 1,2-dichloropropane-d6 to 1.49 for 1,1,2,2-
tetrachloroethane-d2.  The 1.49 and 1.76 values both
fall outside the calculated 95 percent confidence
range, via the Student's t confidence interval
for a normal distribution.  The variabilities in
the analyses of 1,2-dichloropropane and 1,1,2,2-
tetrachloroethane-d2 appear to be significantly




larger than the rest of the compounds listed.  The




reason for this is not clearly understood.   The




mean interlaboratory variability factors (Vu's)
for nondeuterated and deuterated compounds are the

-------
                        126
 same, 1.35, although the standard deviation for
 deuterated compounds (0.063) is less than half
 of the standard deviation for nondeuterated
 compounds (0.145).




     The variability factors (Vu's) and repeatability




 factors  (Ru's) listed in  Table 2 define the inter-



 laboratory precision of Method 624 for the  compounds



 listed, with  the matrix  studied, and  as practiced



 by  the laboratories  included  in the study.  For



 example, the average mean variability factor (Vu)



 listed in Table  2  for nondeuterated compounds is




 1.35.  If the known  mean or true value of a compo-



 nent is 100 ppb, 95  percent of the results  for the
sample would fall in the range of 74 ppb (100 ppb /
1.35) to 135 ppb (1.35 x 100 ppb).  However, if



the true or mean value is not known, the 95 per-



cent confidence range to be expected relative to



a single observation can be calculated using the



average repeatability factor (Ru).  Using the



average repeatability factor listed in Table 2 of




1.54,  if the first determination yielded a value of

-------
                        127
 100 ppb,  95  percent  of  the  values  for  a  second



 determination would fall in the range of 65 ppb
 (100 ppb / 1.54) to 154 ppb (1.54 x 100 ppb).



     The  intralaboratory (within) precision of



 Method 624 with respect to each compound for which




 replicate analyses were available is presented in



Table 3.  Variability (Vu) and repeatability (Ru)



 factors for the upper 95 percent confidence limit



 are also  included in the table.



     The variability (Vu) factors for nondeuterated




 compounds ranged from 1.16 for 1,2-dichloroethane



 to 1.72 for 1,2-dichloropropane.  As with the



 interlaboratory variability, 1,2-dichloropropane



 was found to be outside of the 95 percent confi-



dence interval for the mean Vu for nondeuterated



 compounds (Student's t confidence interval for a



 normal distribution).  Intralaboratory Vu for deu-
 terated compounds ranged from 1.16 for chloroform-
 d1 to 1.28 for toluene-d8.  All of the deuterated



 compounds are within the 95 percent confidence



interval for the mean intralaboratory Vu.  As

-------
                       128
noted in Table 3, the variance associated with the
average difference between mean Vu's for non-
deuterated and deuterated compounds is 4.49 x 10^-3.
The difference between mean Vu's is 1.31 minus 1.22,
or 0.09.  Based on a variance of 4.49 x 10^-3, a
difference between mean Vu's as great as 0.09 can
be expected to occur only 10 percent of the time.
This indicates that the mean intralaboratory
variability of deuterated compounds for Method
624 is noticeably lower than that for nondeuterated
compounds.
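
The quoted probability is consistent with a simple one-tailed normal
calculation, sketched below for illustration (the report's exact
test procedure is not reproduced here):

    import math

    def one_tailed_probability(difference, variance):
        # Probability of a difference at least this large under a normal
        # distribution with the stated variance (one-tailed).
        z = difference / math.sqrt(variance)
        return 0.5 * math.erfc(z / math.sqrt(2.0))

    print(one_tailed_probability(0.09, 4.49e-3))  # about 0.09, roughly 10 percent
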


Precision of Method 1624

     The concentration values reported by all the

laboratories using Method 624 were recovery cor-

rected by the recovery percentages of the deu-

terated analogs to represent Method 1624.  Preci-

sion calculations on the resulting values were

then made.  The interlaboratory precision for
Method 1624 is summarized in Table 4, along with
the upper 95 percent variability (Vu) and repeat-
ability factors (Ru).  The Vu ranged from 1.19 for

-------
                        129
1,2-dichloroethane to 1.67 for 1,1,2,2-tetra-
chloroethane.  The average Vu over all compounds




for Method 1624 is  1.35, plus or minus 0.152.  In




comparison, the average Vu for the same nine com-




pounds with Method  624 is 1.37, plus or minus




0.171, indicating that variation between labora-




tories is not reduced by recovery correction with




Method 1624.  The largest difference between




Method 624 and Method 1624 on variability within




compounds is with 1,1,2,2-tetrachloroethane.  With




Method 624, Vu is 1.49; whereas, Vu is 1.67 for




Method 1624.  This  increase in variability is due




in part to the relatively high Vu for the deu-




terated analog, 1.49 (Table 2), as compared to the




other deuterated compounds.  Another source of in-



creased variability in Method 1624 with respect




to 1,1,2,2-tetrachloroethane results from the




fact that the recoveries of 1,1,2,2-tetrachloro-




ethane and its deuterated analog  vary inversely.



That is,  when 1,1,2,2-tetrachloroethane is mea-




sured at  a value higher  than the  true mean, its

-------
                       130
deuterated analog is recovered at less than 100



percent.



     As noted in Table 4, the variability (Vu) fac-



tor for 1,1,2,2-tetrachloroethane (1.67) falls



outside the 95 percent confidence interval  of the



mean variability factor for all compounds for



Method 1624.  Again, this implies that this value



is either an outlier (95 percent confidence) when




compared to the mean value for all compounds, or



that 1,1,2,2-tetrachloroethane may have a larger



error when analyzed by the method under the experi-



mental conditions used.



     The intralaboratory precision of Method 1624



is presented in Table 5.  The upper 95 percent



confidence level variability and repeatability



factors are included.  The average Vu for eight



compounds (chloroform had no replicates with



Method 1624) is 1.27, plus or minus 0.215,  not



greatly different from the mean intralaboratory



Vu for Method 624 for the same eight compounds



(1.31,  plus or minus 0.186).  The lowest intra-

-------
                        131
laboratory Vu with Method 1624 is 1.07 for ben-



zene; the highest is with  1,2-dichloropropane at



1.69.  This compound also  exhibited the highest



intralaboratory variability with Method 624.



The 1.69 Vu for 1,2-dichloropropane lies near



the  upper limit of the 95  percent confidence for



the mean intralaboratory Vu with Method 1624



(0.84 - 1.70).






Accuracy of Method 624




     The accuracy of Method 624 for the individual



compounds in this study is presented in Table 6.



The  average accuracy (recovery) for all nondeu-



terated compounds in this  study was found to be



94 percent, plus or minus nine percent.  The  range



of recovery was from 70 percent for 1,1,1-



trichloroethane to 106 percent for 1,2-dichloro-



ethane.  The accuracies of the 15 nondeuterated
compounds fall within the 95 percent confidence
interval (77-111 percent) of the average mean
accuracy for nondeuterated compounds, except
for 1,1,1-trichloroethane (70 percent).  The mean

-------
                        132
 accuracy for all deuterated compounds of Method



 624 was found to be  95 percent, plus  or  minus



 six percent.   The lowest  mean  recovery was  85




 percent for ethylbenzene-d10, and the highest



 mean recovery was 105 percent  for  1,2-dichloro-



 ethane-d4.   All  mean accuracy  values  fell within



 the 95  percent confidence interval of 83 to 107



 percent.



      The most recent version of EPA Method 624



 (Ref. 1) includes  a  section identified as "8.



 Quality Control".  In Section  8.2 one is directed




 "to establish the  ability to generate acceptable



 accuracy and precision, the analyst must perform



 the following operations".  The data  summarized




 in  Table 6 are virtually  what  one is directed



 to  obtain in Section 8.2.3 except it  represents



 an  average of eight  laboratories instead of a



 single laboratory.   Section 8.2.4 directs one to



 compare the results with those expected for the
 method for each parameter given in Table 5 of
 the method.  If two specified criteria cannot

-------
                        133
be met, one is directed  to review potential prob-



lem areas and repeat the test.  The data summarized



in Table 6 were subjected to the criteria listed



in 8.2.4.  Criteria were not met for 8 of the 15



compounds included in the round robin study.  The




eight laboratories which participated in the study



did not meet quality control conditions specified



in EPA Method 624, although they were able to meet



EPA EGD criteria.






Accuracy of Method 1624




     The accuracy of Method 1624 for the individual



compounds in the study is summarized in Table 7.



The average accuracy (recovery) for the nine EPA



organic pollutants included in the study was 100



percent, plus or minus 10 percent.   The range of



recovery was from 78 percent for 1,1,1-trichloro-



ethane to 109 percent for benzene.   The accuracies



for all nine compounds fall within the 95 percent



confidence interval (80-120 percent) of the aver-



age mean accuracy, except for 1,1,1-trichloroethane

-------
                        134
 (78 percent).   Compared  to  the  average mean accur-



 acy for Method  624  (93 percent, plus or minus 11



 percent) for the same nine  compounds, the average



 mean accuracy for Method 1624 (100 percent, plus



 or minus 10 percent) represents an improvement



 in the determination of the true concentration



 on the average.  The observed mean accuracy for



 Method 1624 was identical with the true mean,



 while the mean  accuracy for Method 624 was 7 per-



 cent less than  the  true mean.






 Method of Standard Addition



     The EPA has proposed (46 Federal Register



 3033,  January 13, 1981) quality control procedures



 (Section 8) for Method 624 which call for the



determination of actual recovery levels for prior-



 ity pollutants from a sample matrix.   This is  ac-



complished by first determining the background



 level  of a sample,  then fortifying (spiking)  the



same  at  two times the background level and reanaly-



zing.   After correcting for the background,  the

-------
                        135
 percent  recovery  can  be  calculated.   There  also



 has  been some  indication that the EPA has consi-



 dered recovery correcting observed values on the



 basis of the recovery data.  This procedure was



 attempted by the  EPA  during the long-term study




 (Ref. 2)  at Shell's Deer Park Chemical Plant,



 but  the  EPA data  indicated some serious problems



 with the  approach.  The percent recovery was



 quite variable and ranged from 0-576% for treated



 effluent  samples.  Because of the Agency's inter-



 est  in this area  of analyses, one portion of the



 study was designed to allow for the calculation



 of eight  priority pollutants by method of stan-



 dard addition using a three-point plotted curve.



 Samples  Nos. 3, 4, and 5 were fortified (spiked)



 at three different levels with the eight nondeu-



 terated  compounds in an attempt to quantify pollu-



 tants present in  the background sample (Sample



 No. 8) by method  of standard addition using both



 Methods  624 and 1624.  This part of the study was




only partially successful, primarily due to the

-------
                        136
 fact  that  Sample  No.  8  contained  virtually  no



 measurable concentration of priority pollutants.



      Plots were constructed in the method of stan-



 dard  addition  for  the eight compounds spiked into



 Samples Nos. 3, 4, and  5.  Two plots were made




 for each compound; one  for concentrations as mea-



 sured by Method 624,  and the other for concentra-



 tions as calculated by Method 1624.  The actual



 plots are included in Figures 1-4.  A summary



 table of the regression equations for each of the



 plotted lines  is presented in Table 8.  It should



 be noted that  the plots, as well as the regression



 equations, were derived from average measured con-



 centrations from all laboratories.  Normally, the



method of standard addition is done by a single



 laboratory, and individual plots and regression



 equations are prepared by the laboratory.   Plots



 and regression equations were prepared for the



data from each of the laboratories, and these



were compared with the plots shown in  Figures 1-4



and the regression equations in Table  8.   The result-

-------
                        137
 ing  plots  and  regression  equations  were  found  to



 be similar.




     The regression  equations resulting  from the




 plotted data for Samples  Nos. 3, 4, and  5 can  be




 extrapolated to the  point of zero spike  addition




 (x = 0) in order to  estimate the concentration




 level of the eight priority pollutants,  or the




 concentration  can be read directly  from  the y-axis




 intercept.  Also included in Table 8 are the




 numerical  uncertainties at one standard  deviation




 associated with the various components of the




 regression equations.  This information  can be




 used to derive the range of uncertainty associated



 with the y-axis intercept value.  For example, the




 regression equation for benzene by Method 624 is:






     y = (1.02 ± 0.21)x - (0.96 ± 32.10),






where 0.21 is one standard deviation relative to




the slope value, 1.02, and 32.10 is one standard




deviation relative to the y-axis intercept, -0.96.




When x is zero, y is equal to -0.96, plus or minus

-------
                        138
 32.10,  and  the  range  of  values  for  a  zero  spiking




 concentration is  from -33 to 31 ug/1.  The calcu-




lated ranges of values for all of the regression




 equations are also  included in Table  8.  It is




 immediately apparent  from Table 8 that there is




 a large uncertainty associated with measurements



 of  zero or near zero  concentrations of priority



 pollutants.
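
For illustration, the intercept and its standard deviation can be
obtained from an ordinary least-squares fit of mean measured
concentration against spike level; the sketch below uses invented
three-level spike data, not values from the study:

    import numpy as np

    def standard_addition_fit(spike, measured):
        # Fit y = a*x + b and return the slope, intercept, and their
        # one-standard-deviation uncertainties from the fit covariance.
        (a, b), cov = np.polyfit(spike, measured, 1, cov=True)
        return a, b, np.sqrt(cov[0, 0]), np.sqrt(cov[1, 1])

    spike    = np.array([50.0, 100.0, 200.0])   # spiked concentrations, ug/l (invented)
    measured = np.array([48.0, 103.0, 201.0])   # mean measured concentrations (invented)
    slope, intercept, s_slope, s_intercept = standard_addition_fit(spike, measured)
    # The y-axis intercept, plus or minus one standard deviation, brackets the
    # estimated background concentration at zero spike addition.
    print(intercept - s_intercept, intercept + s_intercept)
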




     Using the  method of standard addition for Sam-




 ple No. 8, it was determined that most of the y-axis




 intercept values were negative.  Negative concen-




 tration values  have little meaning;  however, they,




 along with zero, are well within the 95 percent



 confidence interval associated with the y inter-




 cept.  The ranges listed in Table 8  also indicate



 that an analysis producing a positive value even




as high as 31 ppb (benzene) may well, in fact,
have a true value of zero.  For all practical




purposes,  there  is a high probability that  the




compounds  with negative intercept values are at




 zero concentration,  or nearly so.   If one  assumes

-------
                        139
 that  the  observed  negative  values  are  indeed  zero,




 the method of standard addition indicates that Sample



 No. 8 contained virtually no  measurable concentra-




 tion of priority pollutants.  These data are con-



 sistent with the resulting  data from the eight




 laboratories for Sample No. 8 using Methods 624




 and 1624.   Only a  few compounds were detected in



 the sample, and those that  were found were not




detected by every  laboratory.




     The range listed in Table 8 can be used to




 assess the  relative  precision of Methods 624 and




 1624.  The  data indicate again that the overall




 precision of Method  1624 is not improved over



 Method 624.




     The results from the limited study show that




 the method of standard addition has some promise




as an alternate procedure for the EPA recovery




correction by fortification.  It should also be




recognized  that the  technique is more time-consuming




and costly, and might find  utility for critical




situations where the best estimate of true value



is required.

-------
                        140
      The  limited  study  demonstrated  that the method



of  standard  addition  was capable of  establishing




that  Sample  8  contained no measurable concentra-



tions of  priority pollutants.  No data resulted



from  this study to show how well or  how poorly



the method of  standard  addition works when a sam-



ple actually contains measurable levels of a num-



ber of priority pollutants.






Additional Observations



      Methylene chloride was detected in Samples



Nos.  1 and 2 by some  laboratories.   This compound



is often detected in  samples or standards prepared



in laboratories performing numerous  extractions



associated with EPA Method 625.  It is very likely



that  the source of the methylene chloride was the



EPA Standard Solution No.  2, which is common to



both  Samples Nos. 1 and 2.   No values were reported



for methylene chloride (methylene chloride was



only reported as detected,  "U"),  and it was, there-




fore,  not included in any  of the  data analyses.

-------
                        141
     Sample No. 8 was prepared as a matrix back-



ground for Samples Nos. 3 through 7, and was chemi-




cal plant effluent spiked only with the deuterated



compounds.  One of the laboratories detected the



presence of five nondeuterated compounds.  Four



other laboratories detected the presence of one



compound (1,2-dichloroethane), with three of these



laboratories having values listed.  Review of the



data suggests that 1,2-dichloroethane was probably



present at a nominal 10ug/l (ppb) level in Sample



No. 8.  Because only five of the eight laboratories



were able to detect its presence, one might con-



clude that the minimum detection limit (99 percent



confidence level) for this compound in the chemi-



cal plant effluent (matrix) is somewhat higher



than 10 ppb.



     It was very fortuitous that Sample No. 8,  the



background matrix for Samples Nos. 3 through 7,




contained virtually no measurable concentration of



the EPA volatile organic pollutants.   This allowed



the assessment of accuracy (recovery)  for the

-------
                        142
 spiked  nondeuterated  compounds  with  no  background



 correction.   Because  the values listed  for 1,2-



 dichloroethane  were so low  (approximately 10 ppb),




 and came from only three of the eight laboratories,



 these data were not included in any other data



 analysis.



     Sample No. 9 was prepared by spiking all of the



 deuterated and  nondeuterated compounds  in organic-




 free water.  All of the laboratories detected the



 presence of 1,1,2-trichloroethylene even though



 the compound was not spiked into Sample No.  9.



 Seven of the laboratories were able to quantify the



 level present, which appeared to be a nominal



20ug/l (ppb)  concentration.  Because all of  the



 laboratories were able to detect the presence of



 the compound, it was believed to be a true contami-



nant in the organic-free water.  Also, one might




conclude that 20 ppb is above the minimum detec-



 tion limit (99 percent confidence level) for this



compound in the organic-free water matrix.   Because



the values listed for  1,1,2-trichloroethylene were

-------
                        143
 much  lower  than  the 50  to 200ug/l spiking level



 of the compounds of interest, the values were  not



 included  in any of the  data analysis.



      Two  laboratories also reported the presence



 of chloroform in Sample No. 9.  Chloroform was



 also  detected ("D"), but not quantified, in  several



 other samples.  One laboratory reported a value of



 3ug/l in replicate Samples Nos. 3 and 6.  This



 value was considered not reliable, and was omitted



 from  the data analysis.  Review of the data  from



 all laboratories did not reveal strong enough  evi-




 dence to indicate that  chloroform was present  in



 any of the samples except Samples Nos. 1 and 2, in



 which case chloroform was an added component at



measurable concentrations.



     One of the major concerns with the analysis



 of priority pollutants  at or near detection  limits




 is false positive and negative identifications.



 These problems are difficult to define for a single



 laboratory, but become obvious when numerous labora-



 tories analyze the same sample and in particular,

-------
                        144
when known concentrations of compounds are spiked



into samples.  For this study, there was one case



of a false negative; that is, the compound should



have been detected but was not.  This occurred with



blind duplicate Samples Nos. 4 and 7.  1,2-dichloro-



propane was spiked into Samples Nos. 4 and 7 at a



50ug/l (ppb)  level, and all laboratories  except one



quantified its presence.   The reason that one



laboratory was not able to detect the presence of



the compound  has not been determined.

-------
                        145
                    CONCLUSIONS






     All  of  the laboratories in the round robin



study easily met the EPA EGD performance criteria



for the two  samples prepared with EPA Standard



Solution  No. 2 as well as the sample prepared using



Shell's standards and organic-free water.  This was



not surprising since all of the participating labora-



tories were experienced in the GC/MS methodology



studied.



     The calculations revealed that the interlabora-



tory and  intralaboratory precision for Method 1624



did not represent an improvement over the precisions



observed for Method 624.   This observation is con-



sistent with the results from a previously reported



study (Ref. 3).  The study demonstrated the uncer-



tainty in GC/MS data currently being generated by



qualified laboratories.  The resulting average inter-



laboratory variability factor (1.35) and repeat-



ability factor (1.54) for the compounds included




in the study define the actual level of uncertainty

-------
                       146
for the GC/MS methodology exclusive of any vari-



ability associated with sampling.  The range of



values one can expect when the true value of a pol-



lutant is 100 ppb is from 74 ppb to 135 ppb (95%



confidence interval).  When the first analysis of



an unknown sample yields a value of 100 ppb, the



range of values expected for a second analysis is



from 65 ppb to 154 ppb (95% confidence interval).



This degree of uncertainty associated with the



GC/MS methodology as well as that for sampling and



sample handling most certainly must be addressed



in NPDES Permit limitations, as well as in compli-



ance monitoring and enforcement action.



     The results of the study indicated that the



average mean recovery for Method 1624 (99.8% ± 9.9%)



represented a small (~ 7%) improvement in the



determination of true concentration as compared



with the average mean recovery for Method 624



(92.8% ± 10.8%).  The study of recovery also re-



vealed that the eight laboratories could meet EPA



EGD performance criteria but could not meet recovery

-------
                       147
criteria currently described in the July 1982 ver-




sion of Method 624 for approximately one-half of



the compounds studied.



     The results from the limited study show that



the method of standard addition has some promise




as an alternative procedure for recovery correction



by fortification, and might be useful for critical



situations where the best estimate of true  value



is required.  One important observation resulting



from the method of standard addition study  was the



definition of uncertainty associated with measure-



ments of priority pollutants at zero or near zero



concentrations.



     The problem of false positive and negative




identifications still persists.  It does not appear



to be a major problem particularly if one totally



ignores all data below 10 ppb and considers




these as unreliable.  The documented case of a



false negative at the 50 ppb level was startling; how-



ever, the reasons could not be  established.

-------
                       148
                   BIBLIOGRAPHY
1.   "Test Methods - Methods for Organic Chemical
     Analysis of Municipal and Industrial Waste-
     water", EPA-600/4-82-057, July 1982.

2.   Stanko, G. H., Szentirmay, R., "Analysis of
     Petrochemical Wastewater for Volatile Organic
     Priority Pollutants", EPA Effluent Guidelines
     Division Seminar for Analytical Methods for
     Priority Pollutants, Hershey, Pennsylvania,
     April 9, 1981, pp. 104-130.

3.   "Refinery Waste Priority Pollutant Study -
     Sample Analysis and Evaluation of Data",
     API Publication 4346, December 1981.

-------
                        149
                 ACKNOWLEDGEMENT






     The author would like to recognize the contri-




bution of Dianna Kocurek of Engineering Science,




who was responsible for preparation of the Report




for Shell.  I particularly appreciated her work and




help with the method of standard addition data.

-------
                        149a
ROUND ROBIN STUDY OF EPA METHODS 624 AND 1624
       FOR VOLATILE ORGANIC POLLUTANTS
                     BY
              GEORGE H. STANKO
          SHELL  DEVELOPMENT COMPANY

-------
149b




[Slide 149b: table of theoretical spike concentrations for the study samples; the tabular detail is not recoverable from the scanned transcript.]

-------
                                149c
VARIABILITY FACTORS

     Vu  =  EXP (T • s)

     VL  =  EXP (-T • s)  =  1 / EXP (T • s)  =  1 / Vu

REPEATABILITY FACTORS

     RU  =  EXP (√2 • T • s)

     RL  =  EXP (-√2 • T • s)  =  1 / RU

-------
                                  149d
                 RECOVERY SAMPLES NOS. 1 AND 2
         ALL WITHIN MINUS 40% TO PLUS 25% OF TRUE VALUE
 RANGE OF RECOVERY LESS THAN TRUE VALUE 9-40% WITH MEAN OF 23%
RANGE OF RECOVERY GREATER THAN TRUE VALUE 0-25% WITH MEAN OF 12%

-------
                                                        149e
[Slide 149e: precision data table by laboratory and compound; values not recoverable from the scanned transcript.]





-------
                                 149f
VARIABILITY FACTOR  =  1.35

     Vu  =  100 PPB  x  1.35    =  135 PPB

     VL  =  100 PPB  x  1/1.35  =   74 PPB

REPEATABILITY FACTOR  =  1.54

     RU  =  100 PPB  x  1.54    =  154 PPB

     RL  =  100 PPB  x  1/1.54  =   65 PPB

-------
                                 149g
[Slide 149g: table of standard deviation, variability factor, and repeatability factor by compound; values not recoverable from the scanned transcript.]

-------
149h








[Slide 149h: table of standard deviation, variability factor, and repeatability factor by compound; values not recoverable from the scanned transcript.]
*


-------
                               149i
                 COMPARISON OF PRECISION FOR
                  METHODS 624 AND 1624 (Vu)

                                      624        1624
     1,2-DICHLOROPROPANE             1.76*       1.51

     1,1,2,2-TETRACHLOROETHANE       1.49        1.67*

*OUTLIERS

-------
                               149j
[Slide 149j: table of standard deviation, variability factor, and repeatability factor by compound; values not recoverable from the scanned transcript.]
-------
                             149k
                 COMPARISON OF PRECISION FOR
                 METHODS 624 AND 1624  (Vu)
                          INTERLABORATORY     INTRALABORATORY
METHOD 624                1.37 + (0.171)      1.31 + (0.186)
METHOD 1624               1.35 + (0.152)      1.27 + (0.215)

-------
                               149l
[Slide 149l: accuracy (percent recovery) data table by compound; values not recoverable from the scanned transcript.]
-------
                               149m
[Slide 149m: accuracy (percent recovery) data table by compound; values not recoverable from the scanned transcript.]
-------
                       149n
          COMPARISON OF ACCURACY FOR
      METHODS  624 AND  1624  (%  RECOVERY)
METHOD 624                         92.8%  +  10.8%
METHOD 1624                        99.8%  +   9.9%

-------
                                          149o
[Figure (slide 149o): average of measured concentrations from all labs after spike addition, plotted against the added spike; details not recoverable from the scanned transcript.]

-------
                                                 149p
[Figure (slide 149p): average of measured concentrations from all labs after spike addition versus concentration of added spike (µg/l); details not recoverable from the scanned transcript.]

-------
                                             149q
[Figure (slide 149q): average of measured concentrations from all labs after spike addition versus concentration of added spike; details not recoverable from the scanned transcript.]

-------
                                          149r
[Figure (slide 149r): average of measured concentrations from all labs after spike addition; details not recoverable from the scanned transcript.]

-------
                          149s
[Slide 149s: method of standard addition regression summary; equations and confidence intervals not recoverable from the scanned transcript.]
-------
                       149t
Y  =  (1.02 + 0.21)X  -  (0.96 + 32.10)

-------
                        150
              QUESTIONS AND ANSWERS






                  MR. SAUTER:  Drew Sauter with



EPA, Las Vegas.  Did you allow...I was just



snowed by the presentation, excellent presenta-



tion, but I was snowed with the numbers.



    Did I understand you to say that you allowed



the laboratories to utilize their own standards?



                  MR. STANKO:  For the non-deu-




terated compound, yes.  For the deuterated com-



pounds we provided the stock solution that had



been used to spike each one of the samples.  Each



one of the samples actually was spiked at 100 parts



per billion level.



                  MR. SAUTER:  So in other words,



then, is it possible that what you saw was the



laboratories... in your isotopic dilution, saying



that...see, it just strikes me fundamentally



incorrect that isotope dilution GC/MS is not



better than internal standard GC/MS.



    I am wondering, could someone ask the

-------
                       151
question whether your study, while probably one of
the best I have ever seen presented at a meeting,
might have demonstrated that you can't trust
laboratories to make up their own standards?
                  MR. STANKO:  To answer your
question, I think if you look at the variability
factors that we saw in this particular study, in
particular the relationship of the intralaboratory
variability factors with the interlaboratory
variability factors, it tells you that these
laboratories are doing an excellent job.  I don't
think they could do any better.  I think they
have very good standards.  They were probably
not...
                  MR. SAUTER:  But it was outside
of your control, then?
                  MR. STANKO:  It was outside of
my control.
                  MR. SAUTER:  I have done a few
studies like this; I mean, in the area of organic

-------
                        152
 and  analytical  GC/MS  and  I  have done  probably 40



 or 50  laboratory  audits  in  regards  to the  hazard-



 ous  waste  program and  it  is my unequivocal experi-



 ence that  it  is not preferred to allow the labora-



 tories  to  make  up their own standards.



   It might be  worthwhile looking at  the data in



 that again; and,  there are some other  situations.



                   MR. STANKO:  In this  particular



 case, we were trying to study the methodology as



 applied by industrial and contract-type labora-



 tories on  samples  of real world wastewaters.



                   MR. SAUTER: I understand that.



                   MR. STANKO:  In that particular



 case, they do use  their own calibration standards



 and  curves to determine that and we thought it



 would be unfair for me to provide that portion



 of the standards as well.




   The purpose  for doing  it with the deuterated



compounds was to insure that all laboratories used



the same source of deuterated material.

-------
                       153
                  MR. SAUTER:  I still think,



although from the volume of the data you presented



it's difficult for me; that that study might have



shown that given the laboratory the freedom to



essentially make up their own standards,  1624 and



624 will give you approximately the same results.



   I'm really wondering if that's not a real...



really what was presented and from the volume...



like I said, it was an excellent presentation,



but from the volume of the data I can't really



digest it at this point.



                  MR. COLBY:   Bruce Colby,  S-CUBED.



George, let me go back to the Table 1 you showed,



I  didn't have this when you were telling  which



samples were which.   Is sample No.  8 your indus-



trial  wastewater?



                  MR. STANKO:  That's correct.




                  MR. COLBY:   Were there any of



the priority pollutants in there?



                  MR. STANKO:   To our knowledge,

-------
                        154
 there really was one compound  that  we think  was



 there below what we  call  an operational limit of



 detection;  that  particular compound was 1, 2-



 dichloroethane.   Four of  the eight laboratories



 said  they detected it; three tried to quantify




 it; four did  not even see it.  On that kind of a



 basis,  I would say that it was below  the method



 detection limit.




    In  one  particular sample, sample  No. 9, there



 was a contaminate that showed up that we don't



 know where  it came from.  All eight laboratories



 were able to detect  it; seven of them were able



 to quantify it,  and  if you want to take the



 average, it is somewhere between 20 and 22 ppb.



 So here, again,  I would say that's the limit of



 quantification,  which is above the operational



 limit of detection.



                  MR. COLBY:   Would it be  fair to



conclude, then,  that your conclusions are  based



primarily on the analysis of  reagent or very



clean water samples, rather than on typical

-------
                        155
 industrial  waste  samples?



                   MR.  STANKO:   In  this particular




 case, the chemical plant effluent  that we  had



 used was studied  before by the  EPA.  In our data



 it didn't show that  there was any  difference with



 respect to  precision or recovery with the matrix



 sample versus the distilled water  sample.



                  MR.  COLBY:  There was nothing




 in that sample?



                  MR.  STANKO:   There was



 nothing?  No, I am not saying that.  There




 were no priority pollutants in  that sample or



volatile priority pollutants.



                  MR. COLBY:   All  right, I think



my point has been made, George.  Thank you.



                  MR. GRAVES:   Bob Graves, EPA,



Cincinnati.   If I understood you correctly you




did say...well, you transformed the data to



log data.



                  MR. STANKO:   That's correct.

-------
                        156
                   MR.  GRAVES:  Can  I ask you why




 you  did  that?




                   MR.  STANKO:  I have given




 several papers before.  It has been a precedent




 used  by  Radian  in  the  report on the EPA screening




 phase  (API), and the CMA report on the screening




 phase.   In the  paper I presented at Hershey on




 the  five-plant  study,  it had also been used.  In




 this particular study, we preferred to go that



 way.




     If you looked  at the data, in a number of




 cases and on a  given sample, the data did look as




 if it were normally distributed.   If you use log-




 normal statistics, and you can try this on a nor-




mal distribution, you will end up with a standard



 deviation that  is  somewhat less than if normal




 statistics were applied.  So, using log-normal




 statistics on normally distributed data results




 in standard deviations that are conservative.




                  MR.  GRAVES:  Well,  if they




are less I would say they are standard  deviations




that are very...well,  I guess you're right,  okay,

-------
                        157
 because  from  what  we  found  is,  normally  log-normal



 data  comes  from  environmental samples.



    If you are taking samples from a stream with




 respect  to  time  they would  normally vary log-



 normally to get  them, you know, transform them




 back; but,  if you  take a standard and have 10



 labs  analyze  where there's  a set true value then



 that normally will follow a non-transformed distribution with



 just  random variation around the true value.



                  MR. STANKO:   In this particular



 study, if you took a given  sample that had nine




 compounds or  15  compounds and you looked at it on



 a compound  specific basis,  for one compound, the



 data  were normally distributed.  For several



 others,  it  would have been  log-normally distri-



 buted.   Even there, the difference was not all



 that great.



                  MR. GRAVES:  Thank you.



                  MR. STANKO:   I don't think we



over-estimated the variability because of using



log-normal statistics.  If  anything, we have



slightly under-estimated it.

-------
                        158
     Any further questions?  Thank you.



                   MR.  TELLIARD:   Thank  you,




 George, up to  your same old  more  data;  I  liked  it



 better  the other  way.



     Our  next  speaker...that was  a very good



 presentation.



                   MR.  STANKO:  Thank you, Bill.



                   MR.  TELLIARD:   And I like the




 way  you made it clear  enough that we couldn't see



 the  numbers.



                   MR.  STANKO:  They are definitely



 in the  paper,  though.



                   MR.  TELLIARD:   Yes, I understand,



 George.




     Our next speaker is Phil Ryan from S-CUBED in



 La Jolla.  Phil is a mass chromatographer with



 S-CUBED and is going to make his presentation




 now.  Phil was  fortunate enough to participate



 yesterday in that exciting review that  we went



over on all of the quality assurance stuff




and  I think he's recovered.

-------
                        159
        AUTOMATED IDENTIFICATION OF  PRIORITY
             POLLUTANTS  FROM  GC/MS DATA

              Philip  W.  Ryan,  S-CUBED
      I am going  to address the topic of automation

of data reduction from the point of view of a commer-

cial  laboratory  which has for several years faced

the necessity of automating in order to get its work

done  in a cost-effective manner.  In response to that

necessity, we have developed routines for automated

processing that really do work well and have allowed

us to operate with efficiency in a competitive field.

So I'm going to spend most of my time discussing our

particular routines, and also show a few comparisons

with some of the other automated reduction routines

that are available.

     The first slide summarizes the problem we face

in data reduction,  one which is particularly severe

in the case of isotope dilution work.   The old

needle-in-a-haystack analogy is sometimes used with

this problem, but it really doesn't serve quite

adequately.  With isotope dilution, we typically

-------
                        160
 have  more  like  100  needles  to  find,  and  for  each



 instrument being  used, we have to do it a dozen or



 so  times each day.



      This  slide is  designed  to emphasize the time



 constraints.  The rate at which we need  to process



GC/MS data leaves us with only 18 seconds allotted



 for each target compound, and that rate can  only be



achieved with extensive automation.  There is no



 chance of  ever coming close  to that rate with the



 traditional user-interactive routines that most of



 us  learned  in more  research-oriented contexts.



      There are a  number of reasons for insisting on



 such  short  times.   These are both scientific and



 economic,  particularly as Bill Telliard tightens up



his demands for timely reporting and quality control.



 The stringent time  requirements are derived from the



 imperative  of getting the greater part of the data



 reduction  done in time for the operator to see his



 results before he has moved on too far to make good



 use of it.   In practice,  that means we need to get




that part done during the succeeding GC/MS data

-------
                        161
 acquisition, and  that is where these numbers come



 from:  at 18 seconds per compound, the typical iso-



 tope dilution set of 100 or so target compounds can



 be handled in the 30 or so minutes available during



the next acquisition.
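
The arithmetic behind that pace is simple enough to restate in a line or two of code; the numbers are the ones quoted in this talk.

```python
# Sketch of the data-reduction time budget: the reduction of one run must
# finish inside the next run's acquisition window.

target_compounds = 100      # typical isotope dilution target list
window_minutes = 30         # time available during the succeeding acquisition

print(window_minutes * 60 / target_compounds, "seconds per target compound")  # 18.0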




     The next slide lists the component parts of a




GC/MS analysis.  The first part, automated data ac-



 quisition, is something you always get done rather



 well when you spend $200,000 for a GC/MS/DS instru-



ment.  The last point, report preparation, is gen-



 erally best done without too much reliance on com-



puters.  But the other three points, qualitative




 analysis, quantitative analysis and much of the



quality control activity, all ought to be done at a



rate which requires the kind of automated routine I



 am going to describe shortly, preferably within the



 30 minutes available during the succeeding acquisi-



tion.



     In order to provide the analyst the information



he needs in a time which allows him to make use of




it, we need to accomplish the steps detailed on the

-------
                        162
 next  slide,  and  we need  to complete  them  at  the



 rate  of  18  sec per compound.  For each of the 100



 or more  compounds,  we  want  to look through the




 GC/MS data  file  and  select  the appropriate portion



 of the file  for  more detailed inspection:  in other



 words, choose a  retention  time window in which to



 search for  the compound.   Then we want to look at



 the selected portion of the file in more detail to



 decide whether the compound is present, is not pre-



 sent, or might be present.  If it is, or might be



 there, we want to  take a very close look at  the mass



 spectrum to be sure  we can make a positive identifi-



cation of the compound.



      Then we want to take the first steps toward



quantitative analysis.  This means we want to measure



instrument response  as peak area or peak height for



the selected quantitation mass chromatogram.   Finally,




we want to present the analyst with all of the infor-



mation he needs  in order to know how his analysis is



proceeding, what his results are and  whether  he needs



to take care of some kind of instrumental  problem



or reanalyze a difficult sample.

-------
                        163
     The  scheme  I have just sketched is the tradi-



 tional one used  in semi-automated as well as in



 fully-automated  approaches with differences primarily



 in the order of  specific operations.  The usual



 sequence  of operations for automated reduction is



 depicted  in this slide.  The sequence describes the



 Finnigan/INCOS scheme as well as ours, and I am not



 aware of  any system which makes significant devia-



 tion from it.



     The program we use at S-CUBED follows the flow



 diagram shown on the next slide.  As you can see,




 there are several decisions and selections to be



 made, and this is where the difficulty comes in



 reliably automating a data reduction.  We must rely



 on a computer to make decisions and to recognize



 things which were formerly the responsibility of a



 human being, presumably one with extensive experi-



 ence and  training in the nuances of mass spectral



 data.  For example, a computer will have to decide



 whether the data justifies concluding that a com-



pound is, or is not, present,  and we must come up

-------
                        164
 with much better defined  criteria  for  the  computer



 than we  have  available  to  characterize the user-



interactive approach.




      To  see what's  involved  in  automation of such



 decisions,  let's look more closely at  the computer



 implementation  of some  of  them.  Specifically, let's



 look at  the process of  identifying a target compound



 on  the basis  of  GC/MS data.




      We  get some guidance  from  Method 625.  Everyone



 will  probably recognize the  contents of this slide,



 which are taken  directly from the method.  The cor-



 responding  criteria  for Method  1625 are in a state



 of  flux  right now,  but they  will be similar when



 the revised method  is published.  These instructions



 are acceptable identification criteria from a



 scientific  point  of  view,  and they fulfill an impor-



 tant function in  assuring  that consistent criteria



 are applied among various  labs.   They were formulated



 in  the earlier days of priority pollutant analysis



 when user-interactive software was all  we had to




 work with, and they are best  suited to  that semi-



automated approach.

-------
                        165
     The user  is given a library of identification



criteria, and  displays data in such a way that he



can see whether the criteria are met.  For example,



combining the  instructions in this slide with library



information of the type shown in the next (7th) slide



(also from Method 625), and displaying the data as



in the following (the 8th) slide, the operator can



see quickly that the target compound probably was



eluted at scan No. 925.  This slide portrays a



nearly ideal situation where the criteria are defini-



tive and nothing in the data is likely to confuse



the analyst's judgment.  As we all know, not all



data is so clean-cut.  The next slide shows  the same



type of data display for less ideal data.  A little



more thought is demanded of the analyst in this



case, and a little more chance of confusion  is intro-



duced.  This less ideal case will be used as an



example for the computer algorithm I'm going to dis-



cuss next.



     The Method 625 criteria are actually a  mechanism



for keeping the intuition of an expert analyst within

-------
                        166
 defined  bounds,  particularly  when  poor  data  such  as



 this  introduces  ambiguity and calls for exercise  of



 judgment.  As  such,  they can  be  inappropriate  for




 other  analytical  endeavors and are not  suitable for



 adaptation to  fully  automated identification algo-



 rithms.



     More useful  identification  algorithms for tar-



 get compound identification can  be derived from the



 conclusions of pattern recognition theory by using



 computed similarity  indices.  These utilize most



 of the information contained in  the spectrum and



can be shown to be the best possible indicators of



 similarity between reference mass spectra and sam-



 ple spectra.



     Library search  routines almost always generate



 similarity indices and use them  to rank possible



 matches.  Within  the INCOS system,  the fundamental



 indices are called FIT and PURITY,  and the strategy



 for target compound location involves  locating  the



mass spectrum for which the index is a maximum.  In




 this slide,  we take the same data as in  the previous



one but we plot FIT and PURITY rather  than char-

-------
                        167
acteristic  ion  intensity.



     While  the  similarity parameters alone do not



seem to be  strong indicators of compound presence,



the parameter plotted at the bottom is a pretty



clear marker of the correct elution time.  That



parameter is a product of FIT, PURITY and the



quantitation mass intensity and is the parameter



used in our software.  It typically displays very



sharp peaks with very good signal-to-noise even with



data for which other indicators are ambiguous.  The



computer locates target compounds by locating peaks



in this search parameter, and in our experience,



that is the most reliable, least ambiguous identifi-



cation criterion for automated GC/MS data process-



ing.
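
A minimal sketch of that idea follows. The FIT and PURITY computations here are simplified cosine-similarity stand-ins, not the proprietary INCOS or S-CUBED implementations, and all data are invented; the point is only that the product of the two similarity indices and the quantitation-mass intensity peaks sharply at the correct scan.

```python
# Sketch of an S-CUBED-style search parameter: for each scan in the search
# window, compute simplified FIT and PURITY indices against a reference
# spectrum, multiply by the quantitation-mass intensity, and take the scan
# where the product is largest.  Spectra are dicts mapping m/z -> intensity.

import math

def cosine(a, b, masses):
    num = sum(a.get(m, 0.0) * b.get(m, 0.0) for m in masses)
    da = math.sqrt(sum(a.get(m, 0.0) ** 2 for m in masses))
    db = math.sqrt(sum(b.get(m, 0.0) ** 2 for m in masses))
    return num / (da * db) if da and db else 0.0

def search_parameter(sample, reference, quant_mass):
    fit = cosine(sample, reference, reference.keys())                 # reference ions only
    purity = cosine(sample, reference, set(sample) | set(reference))  # all ions
    return fit * purity * sample.get(quant_mass, 0.0)

def locate_target(scans, reference, quant_mass):
    """scans: list of (scan_number, spectrum) inside the retention window."""
    values = [(search_parameter(spec, reference, quant_mass), n) for n, spec in scans]
    best_value, best_scan = max(values)
    return best_scan, best_value

# Illustrative data only: a three-ion reference and a small window of scans.
reference = {78: 100.0, 77: 25.0, 51: 18.0}
scans = [
    (923, {78: 5.0, 45: 80.0}),
    (924, {78: 60.0, 77: 14.0, 51: 10.0, 45: 30.0}),
    (925, {78: 95.0, 77: 24.0, 51: 17.0}),
    (926, {78: 30.0, 77: 8.0, 91: 120.0}),
]
print(locate_target(scans, reference, quant_mass=78))   # -> (925, ...)
```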



     Other criteria, based on other search parameters,



are used as well.   The next slide shows some of the




possibilities.  This data is a case where benzene



is eluted between two major interfering components



so that it doesn't even produce a peak in the total



ion chromatogram.   The third and fourth plots are

-------
                        168
 the options provided by the INCOS search program,



 and the last trace is the  one  we  use.



      This  case  illustrates the special  problems  of



 isotope dilution  GC/MS.   In  isotope dilution, there



 are always large  peaks due to  the labeled  analogues,




 which are  eluted  very close  to the target  compounds.



 The failure of  the INCOS options  is due  to their



 being weighted  with  total  ion  intensity, and that



 total ion  intensity  is dominated  by labeled analogues



 and other  interferences.   The  next slide summarizes



 some of the special  considerations imposed by isotope



 dilution.   The  negative aspects of using total ion



 weighted criteria  have been discussed.  Another



 special  problem with  software  which was not designed



 for  isotope  dilution  is the inflexibility of the



 reference  peak designation.  Isotope dilution re-



 quires  that  the data  system be able to use different



 peaks for  retention time reference and for quanti-



 tation  internal standard.



      In the context of target compound location al-




gorithms,  I've shown  some comparisons among our

-------
                        169
 software,  the  standard  INCOS  software and  the  tradi-




 tional user-interactive approach.  Now let's return



 to the consideration of data  reduction pace and make




 similar comparisons there.  The next slide summarizes



 the requirements for getting  the reduction done in



 time for the analyst to make  efficient use of the



 information.  Also shown are  the times required:



 150 minutes and 50 minutes, respectively, for user-



 interactive techniques and for standard INCOS proce-



dures.  Only our SRCHMX approach, which takes only



5 minutes, allows any time for the operators to




look at results and act on them during the succeed-



ing run.



     The last point I want to address is what kind



of results the operator gets, and what happens to



the data next.  The diagnostic information shown



in this slide is available to the operator after com-




pletion of the automated data processing.  This



happens to be data from a standard mixture, so it is



unacceptable that the 11th compound, pentachloro-



phenol, is not found.  This is the kind of feedback

-------
                        170
 the operator  needs immediately  because  he  has  to




 correct whatever problem has occurred before going




 on.   Remember,  this  diagnostic  information  is  dis-




 tilled from 2700  mass  spectra,  and only a fast




 automated data  reduction  could  do this  in time.




      There is a lot  of other diagnostic information




 here, too, including the  pattern recognition para-




 meters which are  indicative of  spectra quality, and




 retention time data  which reflect chromatography




 performance.  The operator using our software has




 25 minutes to look at  this reduced data and decide




 whether he has some problem to  correct.   With slower




 techniques, he may not know he  has a problem until



 the next day.




     My last slide indicates the next stage of data




 reduction as practiced in our laboratory.   The prob-




 lems identified by the diagnostic are reflected in




the reduced data included in this upper  quantitation




report.   We return to the user-interactive  philosophy



at this  point and  use our own data editing  software




to correct errors  in the reduced data files.  In

-------
                       171
this case, a tailing peak required re-integration



for compound No. 8 and an antiquated library was



responsible for not finding entry No. 11.

-------
                 171a
         Isotope Dilution
   Priority Pollutant Analysis
• 45 minute FSCC GC/MS analysis

• 2700 spectra recorded

• 100 target compounds to be
  determined

• 30 minutes per analysis for data
  reduction to keep up with data
  acquisition

• 18 seconds per compound

-------
                   171b
        For  Each Analysis
• Data acquisition and recording

• Qualitative analysis: Search through
  data file and identify target compounds

• Quantitative analysis: Calculate
  pollutant concentration in sample

• Quality Control and Quality Assurance

• Report generation

-------
                  171c
  For Each Target  Compound
• Select appropriate portion of data file

• Inspect selected portion to determine
  if target compound might be
  represented

• Analyze spectrum to make definitive
  identification

• Measure value of quantitation
  parameters

• Generate QC data (diagnostic
  information)

-------
                         171d
TARGET COMPOUND  DATA REDUCTION
       Get Target Compound Information
                 from Library
          Decide whether Entry is an
                internal  standard
            Define a search window
       Search through window to locate
               Target Compound
      Measure amount of Target Compound
           as quantitation mass area
     Write information into quantitation list
               and into scan list

-------
                        171e
   [Flow diagram of the target compound data reduction program:]

   START
     -> retrieve target compound info from library
     -> is target compound an internal standard?
     -> define search window
     -> locate all possible target compound peaks
     -> select best possibilities
     -> measure areas
     -> output results (quantitation list, scan list, diagnostic print-out)
     -> another target compound?
          yes: reset search window parameters and repeat
          no:  return to MSDS
-------
                         171f
          Qualitative Identification

   14.1.1  The characteristic ions for each compound
of interest must maximize in the same  or within one
scan of each other.

   14.1.2  The retention time must fall within ±30
seconds of the retention time of the authentic
compound.

   14.1.3  The relative peak heights  of the three ions
in the EICP's must fall within ±20% of the relative
intensities of these ions in a reference mass spectrum.
The reference mass spectrum can be obtained by a
standard analyzed in  the GC/MS system or from a
reference  library.

  EPA Method 625
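
Stated as code, the three checks on this slide might look like the sketch below. It is only an illustrative translation of the Method 625 wording, with invented data structures and with the ±20% tolerance read as percentage points of relative intensity; it is not software distributed with the method.

```python
# Sketch of the three Method 625 qualitative identification checks quoted
# above: (1) the characteristic ions maximize within one scan of each other,
# (2) retention time is within +/-30 seconds of the authentic compound, and
# (3) relative peak heights of the three ions are within +/-20 percentage
# points of the reference spectrum's relative intensities.

def identify(ion_max_scans, rt_seconds, rt_reference_seconds,
             peak_heights, reference_rel_intensities):
    """ion_max_scans: scan at which each characteristic ion maximizes.
    peak_heights / reference_rel_intensities: dicts keyed by m/z."""
    # 14.1.1 -- ions maximize in the same scan or within one scan
    if max(ion_max_scans) - min(ion_max_scans) > 1:
        return False
    # 14.1.2 -- retention time within +/-30 s of the authentic compound
    if abs(rt_seconds - rt_reference_seconds) > 30.0:
        return False
    # 14.1.3 -- relative peak heights within +/-20% of reference intensities
    base = max(peak_heights.values())
    for mz, ref_rel in reference_rel_intensities.items():
        rel = 100.0 * peak_heights[mz] / base
        if abs(rel - ref_rel) > 20.0:
            return False
    return True

# Illustrative use only (invented numbers):
print(identify(ion_max_scans=[925, 925, 926],
               rt_seconds=612.0, rt_reference_seconds=605.0,
               peak_heights={78: 950.0, 77: 240.0, 51: 160.0},
               reference_rel_intensities={78: 100.0, 77: 26.0, 51: 18.0}))
```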

-------
                                                     171g
[Slide 171g: table of characteristic ions for target compounds, of the type given in EPA Method 625; values not recoverable from the scanned transcript.]
-------
                                  171h
[Slide 171h: example data display (extracted ion current profiles) for a nearly ideal identification; not recoverable from the scanned transcript.]
-------
                                             171i
[Slide 171i: the same type of data display for less ideal data; not recoverable from the scanned transcript.]
-------
                        171J
          Identification Parameters

  [Three plots versus scan number (330-360): Purity, Fit, and the
   S-CUBED Parameter]

-------
                171k
      The  Search Parameter
   RIC
   QM
 PURxRIC
  FITxRIC
PURxFITxQM
                   SCAN

-------
                 171l
     Special Considerations
       For Isotope Dilution
• TIC is always too complex to be a
  useful search parameter factor

• Multiple internal standards in a single
  library are desirable

• Different references for retention time
  and for internal standard quantitation
  in a single library are desirable

-------
                  171m
     Data Processing Pace
• Data system must have the capability
  to process one data file while another
  is being acquired

• 30 minutes are available to process
  a file for 100 target compounds

• SRCHMX requires 5 minutes

• INCOS procedures require 50 minutes

• User-interactive technique requires
  150  minutes

-------
                                            171n
[Slide 171n: example diagnostic print-out from the automated data reduction; contents not recoverable from the scanned transcript.]
-------
                                  171o








[Slide 171o: example quantitation report containing the reduced data; contents not recoverable from the scanned transcript.]

-------
                       172
                  MR. RYAN:  I'll be glad to answer




questions if anybody has some questions on this



automated procedure.



                  MR. TELLIARD:   Questions?



Anyone; last chance, he gets off free?



   Thank you, Bill.




   Our next speaker is John Norris from Viar and



Company.  John is going to explain step two of



what we had discussed this morning about the



automated data system.  He will carry on from



where Dale left off and John has been the project



manager on this project for about the last year;



John.

-------
                        173
     RECEIPT AND TRANSCRIPTION OF QUANTITATIVE
            DATA ON MAGNETIC TAPE AT  THE
               EPA SAMPLE CONTROL CENTER

           John Norris,  Viar and  Company
                   MR.  NORRIS:   Good  afternoon

 ladies  and  gentlemen.   I  would  like  to  take  this

 opportunity to  briefly describe the  Effluent

 Guidelines  Division's  Program for  the receipt of

 quantitative data on magnetic tape.   I  kind of

 feel  like one of  Bruce Colby's  outliers standing

 up here  today because  all of the previous  speakers

 have  been pretty  much  chemically or  lab oriented.

 My area  of  expertise is the data processing field

 and in this  case  I guess  I'm representative of

 Bruce's  slide with its  single outlier.

    During  this presentation I'll  be covering the

 topics as shown on this slide.  First, we'll start

 off the  session with a  quick overview of the EGD's

 analytic process  and briefly describe the  role

 that  the Sample Control Center plays in it.  Next,

 we'll look at the actual collection and reporting

of quantitative data by the laboratories, how it has

been done in the past and more importantly how it

-------
                        174
 will  be  accomplished  in  the  future.   Since  the



 new media  will be magnetic tape we'll look at the



 elements of information  that would be on the tape



 and the  formats that  this data will be recorded



 in.   First let us look at the overall analytical



 process  that transpires  prior to the  institution



 of effluent regulations.



    The key players in this process are shown in



 this  slide.  The EPA  Project Officer  for the



 specific industry being  regulated has overall



 responsibility for developing the effluent regula-



 tion.  The Effluent Guidelines Division, Office



 of Analytic Support has  overall responsibility



 for the analytical process.   The Sample Control



Center assists the EGD's Office of Analytical



Support in carrying out  its responsibilities.



 We'll look at the role that the Sample Control




Center plays in this  process in detail later.



The laboratories provide the staff and equipment



necessary for sample  analysis and generally



perform under contract with the EGD.



   As can be seen in this slide, the analytical




process is initiated  by  the EPA Project Officer

-------
                        175
 when specific analyses are requested to be per-



 formed.   These requests  are  forwarded  to  the



 EGD's  Office  of Analytical Support  for processing.



     The  Office of  Analytical  Support in conjunc-



 tion with the Sample  Control  Center defines appro-




 priate tests  and selects the  particular laboratory



 best suited to perform them.   Samples  are then



 collected by  field  sampling  teams.   These  field



 sampling  teams are  frequently agency contractors.



 The  samples they collect are  shipped to the



 appropriate laboratory for analysis.   Once the




 laboratory has completed its  analysis  of  the



 samples,  it assembles  its  findings  into data



 packages  and  forwards  them to  the EGD  Sample



 Control Center.  The quantitative results  from



 the data packages are in turn presented to



the EPA Project  Officer  for review.




    What  I have  just described is a very simplis-



 tic view  of the  actual process that  transpires prior



 to regulation  implementation.  To reiterate, the



EGD's Office of  Analytical Support has  primary



responsibility for this  process.

-------
                      176
    The Sample Control Center assists the Office



of Analytical Support in this process and is




their primary arm for insuring that the process



works.  The current slide shows some of the major



functions that this Center performs for them.



It is through the Sample Control Center that



actual samples are scheduled for analysis at



the specific laboratories; sampling progress




is monitored; and sample scheduling problems are



resolved.



   Also, the Sample Control Center monitors labor-




atory progress and directs any technical and/or



scientific problems to appropriate EGD personnel



for resolution.  The Center also maintains an



inventory of chemical standards and spiking



cocktails for use by the laboratories in the



sample analysis.   The Sample Control Center also




functions as the EGD's focal point for



the receipt and management of data packages



from the laboratories.  They are responsible

-------
                       177
 for  entering  the  majority  of  this  quantitative



 data  into  the  EGD's data base.  They are also



 responsible for maintaining the data base



 itself and using  it to derive management infor-



 mation for the EPA Project Officer.  It is the



 last  three bullets on this slide, what I'll



 call  the collection and reporting of quantitative



 data  that  I would like to focus attention on




 at this time.  Let's look at how this process



 was performed  in  the past.



    Collection and reporting of quantitative data



begins at the  laboratory during sample analysis.



 The laboratories are responsible for transcribing



all quantitative results onto data sheets once



sample analysis is completed.   This is a very



time-consuming and exacting process for the



labs to perform.  The laboratories then assemble



these data sheets into data package organized



by sampling visit or episode and forward them



to the Sample Control Center.   Upon receipt of

-------
                       178
 the  data  packages,  the  Sample  Control  Center



 performs  initial  receipt and control procedures



 to insure completeness  of the  data.  If  the



 data sheets are missing, the laboratory is



 contacted and asked to  supply  the missing



 data.




    Once receipt  problems are  resolved,  the hard



 copy data is keyed  into machine-readable format.



 The data thus keyed is edited  and verified to




 insure exactness  of the entry  function.  Next,



 quality control checks are applied to the data




and any discrepancies that are found are resolved



with the laboratory.  Finally, the data is sum-



marized using various statistical routines and



presented to the EPA Project Officer for review.



    The overall process of transcribing data onto



hard copy data sheets and eventually entering




this data into machine-readable format has been



at times a very costly, time-consuming  and labor



intensive method for data collection.  The

-------
                      179
process is costly, not only in terms of the



additional dollars required to transcribe the



data and key it, but also in terms of the



additional time that these steps add to the



process.  In addition, this methodology presents



a high potential for injecting error into the



data that is being collected.   Each time a



laboratory copies data for a report onto a



data sheet or a data entry person keys the



results from the data sheet there is a chance



that error could be made.  Realizing these



deficiencies, the EGD looked at alternate



approaches for collecting and reporting of



this data.  I would like to now describe the



methodology that's been adopted by the EGD for



collecting this quantitative data.  This slide




shows what I call the analysis and confirmation



portion of the collection and reporting process.

-------
                       180
 It depicts the actual steps that are performed



 during the process, the center portion of the



 slide; and, the actual flow of data represented



 by the right-most portion of the slide.



   This process is performed by the laboratory



 for each blank, standard or sample fraction



 it is required to analyze.  The process begins




 with the analysis of the sample fraction by



 the laboratory.  The raw data generated by



 the GC/MS with computer interface is used to



 produce a quantitation report, or quantitation



 list as I have heard it called here today, for



each sample fraction analyzed.  This quantitation



report is then subjected to a compound  verifica-



 tion process where the report should be reviewed



by a chemist.   This review is necessary to insure



 that appropriate compounds were determined and



 to compare the mass spectrum for each compound

-------
                        181
 against the standard.   Once the quantitation



 report  is  reviewed  and  approved  by  the  chemist,



 then  a  final  report is  produced.  At  this  point,




 the lab should  have a quality assurance in-



 spector review  and  verify  the results on the



 quantitation  report.  The  QA Inspector  verifies



 that  the appropriate method protocol was followed



 and that the  quality assurance specifications



 were met during the analysis.  The quality assur-



 ance  inspector  then certifies the analysis by



a formal signoff procedure.  Once the sample




analysis has been certified then a magnetic



 tape copy of  the Quantitation report is made and



 sent to the Sample Control Center for processing.



    This slide depicts the SCC validation por-



tion of the collection and reporting process.



It begins when the data tape is received at the



Sample Control Center.  Each tape received  is



logged in and several checks are made against



the tape to ensure all data is  present.   First,

-------
                        182
 the  files  contained  on  the  tape  are  verified



 against a  transmittal received with  the tape.



 A file in  this case  is  the  same  as a single



 quantitation report.  Second, the actual com-



 pound data within a  file is read from the




 tape and edited for  completeness.  If the



 data on the tape passes these checks, it is



 then subjected to the same  quality control



 checks that were applied at the  laboratory.



 Any problems noted in processing of  the tapes



 result in a discrepancy report as shown down



 on the bottom right  of the  slide.  This dis-




 crepancy report becomes the basis for request-



 ing the laboratory to reanalyze  and/or resubmit



 the quantitative data.  All data that passes



 these checks is then merged with sample infor-



mation derived from  other sources and loaded



 into an EGD data base for subsequent statistical



 analysis and reporting.  Once that is completed,




 it is then summarized and presented to the

-------
                       183
 Project Officer for his or her review



 as  was done  in  the previous method.
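
   A minimal sketch of that validation flow is given below for
illustration only; the file and field names are assumptions, not part
of the Sample Control Center's actual system.

    # Illustrative sketch of the Sample Control Center tape-validation
    # flow described above.  Field and check names are invented.

    def validate_tape(transmittal_files, tape_files, read_compound_records):
        """Return (accepted_records, discrepancies) for one submitted tape."""
        discrepancies = []

        # 1. Verify the files on the tape against the transmittal document.
        missing = set(transmittal_files) - set(tape_files)
        extra = set(tape_files) - set(transmittal_files)
        for name in sorted(missing):
            discrepancies.append(("file missing from tape", name))
        for name in sorted(extra):
            discrepancies.append(("file not on transmittal", name))

        accepted = []
        for name in sorted(set(tape_files) & set(transmittal_files)):
            for record in read_compound_records(name):
                # 2. Edit each compound record for completeness.
                blanks = [f for f in ("sample_number", "egd_compound_number",
                                      "analysis_date", "amount")
                          if not record.get(f)]
                if blanks:
                    discrepancies.append((name, "incomplete record", blanks))
                    continue
                accepted.append(record)

        # 3. The same quality control checks applied at the laboratory
        #    would be re-run here before the data are merged and loaded.
        return accepted, discrepancies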



    Submission  of data  by magnetic tape has



 several advantages over the previous hardcopy



method of data  collection.  Some of the more



 significant  ones I have shown here (indicating).



    First, the  method streamlines the reporting



and collection  process by eliminating the




 transcription and data  keying steps.  This



elimination  significantly decreases the time



required to  place results into the hands of



the Project  Officer and, more importantly,



decreases the overall cost of the collection



process.



    Second, quality assurance is improved with



the certification process at the laboratory



and the automated quality control review of



the data at the Sample Control Center.



    Third, the accuracy of the data is  improved



with the automation of the collection function.

-------
                      184
Automation also allows additional elements of



information about the sample fraction analysis



to be collected at no additional cost.



    Let's now look at the types of information



that will be captured by this process.  This




slide gives a general idea of the categories



of data that will be captured on the quantita-



tion tapes that are submitted.  Sample number



and EGD compound numbers are some examples of




the types of data fields that we found in the



identification category.   Extraction date and



date of analysis are examples of the date



information that will be captured.  Fraction



type, that is, acid, base/neutral, or volatile,



and dilution or concentration factors are some



examples of those that we included under the



fraction category.




    The next category, analytic conditions,  would



include information about the column that was



used, the temperature information, and flow

-------
                       185
 or  velocity  rates.   Results  such  as  retention



 time, mass to  charge ratio,  or scan  time would



 be  examples  of  the data  fields that  would be



 reported  for each compound under  the results



 by  compound  heading.



    The library information  that  you use during



 your analysis  for the reference amount, response



 factor, or reference peak would be examples of



 data included under  the QA category.



    In summary, 26 unique elements of informa-



 tion or data fields  have been identified for




 collection purposes.  Some of these data fields



 will be presented only once on the quantitation



 report while some will be represented for each



compound that is determined.
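
   As an illustration only, that split between once-per-report fields
and per-compound fields might be represented as below; the field names
simply echo the categories mentioned above and are not the actual
26-element specification.

    # Hypothetical record layout: one QuantitationReport per sample
    # fraction, with one CompoundResult per compound determined.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class CompoundResult:             # repeated for each compound determined
        egd_compound_number: str
        retention_time: float
        mass_to_charge_ratio: float
        scan_time: float
        amount: float

    @dataclass
    class QuantitationReport:         # reported once per sample fraction
        sample_number: str
        extraction_date: str
        analysis_date: str
        fraction_type: str            # acid, base/neutral, or volatile
        dilution_factor: float
        column: str
        results: List[CompoundResult] = field(default_factory=list)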



    The last topic that I would like to cover is



 the actual format of the quantitative data on




magnetic tape.   The format that has been adopted



by the EGD is the quantitation report



 that is produced by the GC/MS now.  This format

-------
                       186
 is being developed for a variety of GC/MS



 machines.   For  purposes of  discussion, the




 report  has  been broken  down  into three main parts



 as  shown  on  the current slide.



     Let us  now  look at  the  format of each of these



 parts.  The  first part  is the header segment.



 The  top portion  of the  slide gives you a visual



 of  what the  data looks  like on the tape and also



 what the  report  looks like.  The circled



 numbers identify the data fields that are used



 from this segment of the report.  The names of




 the data  fields  are identified at the bottom of



 the slide.



    This slide gives a visual of the data seg-



ment of the tape.  The segment contains the



 analysis results for each compound  that was



 determined.  Notice item number 13 up there,




 it points to something new (indicating).



What we're asking each lab to do is to precede



the compound name in the library with the  EGD

-------
                       187
compound number  for  identification purposes.




The number beside the EGD number on the slide



is called a reference number and is used to




tie the data portion back to the actual name




and compound number  identification.  The com-




pound number is required to insure proper com-




pound identification and eliminate variances in



spelling.




    This is the third portion of the report.




It is called the F-2 segment.  This segment




basically provides library information from




the analysis.   It is presented for each com-




pound analyzed.
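
   Purely as an illustration of the conventions just described, the
sketch below shows how a reader of the tape might tie the reference
number in the data segment back to the EGD compound number and name;
the line layout and the example values are invented, not the actual
instrument format.

    # Illustrative only: parse a library-name entry that has been
    # prefixed with its EGD compound number, and link result records
    # back to it by reference number.

    def parse_name_entry(line):
        """e.g. '3  001V  VINYL CHLORIDE' -> (3, '001V', 'VINYL CHLORIDE');
        the numbering shown here is hypothetical."""
        reference, egd_number, name = line.split(None, 2)
        return int(reference), egd_number, name.strip()

    def link_results(name_entries, result_records):
        """Attach the EGD number and compound name to each result record."""
        names = {ref: (egd, name)
                 for ref, egd, name in map(parse_name_entry, name_entries)}
        linked = []
        for rec in result_records:          # rec carries the reference number
            egd, name = names[rec["reference"]]
            linked.append({**rec, "egd_number": egd, "compound": name})
        return linked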




    This concludes my discussion.  Are there




any questions?



    Thank you.



                   MR.  TELLIARD:   Thank you,




John.






(WHEREUPON,  a  recess was taken.)

-------
                     187a - 187o

[Presentation slides not legibly reproduced in the transcript.  The
recoverable content includes an overview of the Sample Control Center
collection and reporting process; a list of the advantages of magnetic
tape submission (streamlines reporting, less costly, eliminates
transcription of data and hard copy data entry, provides for quicker
decisions, provides more accurate information, expands the elements of
information collected, automates data collection); the categories of
data captured (identification, dates, fraction, analytical conditions,
results by compound, QA); and an example quantitation report listing
fields such as EGD compound number, mass to charge ratio, scan number,
absolute retention time, and reference compound for compounds including
vinyl chloride, chloroethane, methylene chloride, trichlorofluoromethane,
and 1,1-dichloroethylene.]

-------
                       188
                   MR. TELLIARU:  Our next



speaker is from our Office of Research and




Development, our Athens Laboratory, Walt




Shackelford.  Walt has been involved in a



number of parts of this program for the past few



years, and, in particular, the tape program



we have been carrying out on spectral



matching, which is what Walter is going




to speak about today.  I hope you can




understand him,  he has a funny sound in his



voice, I understand,  but...Walt.

-------
                        189
     INCREASED CONFIDENCE  IN SPECTRUM MATCHING
        BY USE OF A RETENTION TIME LIBRARY

              Walter M. Shackelford
       U.S. Environmental Protection Agency
                Athens Laboratory
                     ABSTRACT

     To successfully extract the maximum amount of

information, all dimensions of the gas chromato-

graphy/mass spectrometry (GC/MS) data from a sample

run must be used.  In this work, retention data

were combined with reference mass spectra for com-

puter-aided identification of organics in industrial

effluent.  Use of retention data proved to be a

great help in increasing the analyst's confidence

in compound identifications from low quality spec-

trum matches.  Even greater confidence will be

achieved when libraries that include capillary

column retention data and gas phase infra-red

spectra are available.

-------
                       190
     Increased Confidence in Spectrum Matching
        by Use of a Retention Data Library
     Introduction.  The data acquired in a scanned

gas chromatography/mass spectrometry (GC/MS) analy-

sis has three dimensions of qualitative information

(Figure 1).  Each dimension can provide the chemist

with varying degrees of confidence in identification,

but it is when these parameters are combined that

the power of GC/MS is evidenced.  The elution time

of a component taken by itself provides useful

qualitative information only if the sample has no

interferences.  Likewise, the masses recorded in a

given scan, while providing more qualitative infor-

mation than elution time, are of little value unless

coupled with intensities.  The detector's response

to a compound without elution time or specific mass

data provides little in the way of qualitative infor-

mation.

     Even if two dimensions are combined, the result-

ing qualitative information falls far short of the

total capabilities of GC/MS in qualitative analysis.

-------
                       191
For instance, an extracted ion current profile (EICP),



which combines mass and retention data, while nar-



rowing a chemist's search for compounds having a



certain characteristic mass, still requires manual



search of each occurrence of that mass for compound



identification.  Use of the full mass spectrum,



which includes mass and intensity for all masses



recorded, eliminates many of the ambiguities found



in using characteristic ions or retention times alone,



but requires that probability based matching (PBM),



an automated library search system, be used for



acceptable efficiency.



     The use of automated spectrum search and re-



trieval systems is a great aid to qualitative analy-



sis of large numbers of unknowns, but the reliability



of such systems is suspect.  For example, it is well



known that mass spectra, while highly characteristic



of a molecule, are not always unique.  Also, when



dealing with real world data, one often finds con-



taminated spectra and spectra skewed from instru-




mental problems.

-------
                        192
      In  this  work,  elution  time  information was added



 to  full-scan  mass spectra to  increase  the reliability



 of  automated  spectrum matching.  In this way, mass



 spectrum ambiguities were alleviated by requiring



 retention data matching.  Retention data overlaps



 were  overcome by use of mass  spectral  data.  A dy-



 namic historical library was managed that increased



 in  size as compounds were authenticated and more




 retention data were added.  The increase in confi-



 dence of automated spectrum matching gained through



 the use of retention data was measured.  In Figure 2



 the use of mass, intensity and retention data to



 narrow the choices for identification is depicted as



a set of filters.



     Experimental.  This work is the result of a



 study of wastewater from 21 industrial categories



and finished water from publicly owned treatment



works (POTW).   The study encompassed some 4000 sam-




ples that were analyzed at 14 contract laboratories



and EPA regional laboratories.  Details of the study,




computer system, and data reduction systems can  be



found elsewhere.1

-------
                       193
     To create the historical library, several homo-



logous series standards were analyzed to provide



retention data.  The classes of compounds used are



shown in Table 1.



     The logic of spectrum analysis is shown in



Figure 3.  The library2 of reference mass spectra,



which contained more than 40,000 spectra, was



searched first.  In this way, compounds for which no



retention data existed in this historical library



were not prematurely eliminated from the search



procedure.



     As compounds are tentatively identified using



spectra from different GC columns from different



laboratories, provision in the library must be made



to differentiate among retention data from different



columns.  In addition, care must be exercised to



differentiate among the internal standards used for




reference in retention data.



     A search was conducted  in three steps:



          1.  The Chemical Abstracts Service (CAS)




number of the best acceptable spectrum match for

-------
                        194
 the  unknown was recorded, along with the GC column



 identification number, relative retention time and



 internal standard.



          2.  The historical library (ordered on



 CAS  number) was searched for the candidate's CAS




 number.  If the numbers matched, the GC column and



 internal standard had to match as well.



          3.  The relative retention time of the




 candidate compound and the retention window allowed



 for  the library entry were compared.  If these two



 were in accord a match was recorded.  If the two




 did  not match, the next best acceptable spectrum



match was carried through.   If no other spectrum



matches were acceptable, the computer program



 flagged the spectrum so that a chemist could make



 an appropriate decision on its identification.
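
     The three-step search can be sketched as follows; the library
record fields and the retention-window representation are assumptions
made only for illustration.

    # Minimal sketch of the three-step search described above.

    def confirm_candidates(pbm_matches, hislib, unknown_rrt, gc_column,
                           internal_std):
        """pbm_matches: acceptable spectrum matches, best first,
        each carrying a CAS number."""
        for candidate in pbm_matches:
            # Step 1: take the CAS number of the best acceptable match.
            cas = candidate["cas_number"]

            # Step 2: look for that CAS number in the historical library;
            # the GC column and internal standard must also match.
            entries = [e for e in hislib.get(cas, ())
                       if e["column"] == gc_column
                       and e["internal_std"] == internal_std]

            # Step 3: compare the unknown's relative retention time with
            # the retention window allowed for the library entry.
            for entry in entries:
                low, high = entry["rrt_window"]
                if low <= unknown_rrt <= high:
                    return candidate      # spectrum and retention data agree

            # Otherwise carry the next best acceptable spectrum match through.

        return None   # flag the spectrum so a chemist can decide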



     The Probability Based Matching (PBM) system



 has been reported by Pesyna and coworkers.3  Figure




 4 depicts the matching parameters used in this study.



 The measure of overall match quality, K, is theore-



tically unbounded, but, in  practice depends on the

-------
                       195
number of peaks available for matching in a spectrum.



Thus, Kmax, the K for a perfect match, varies with



the number of fragments.  This means that rather



than relying on K alone, ΔK, the algebraic differ-



ence between Kmax and K, should be considered.  To



consider K and ΔK simultaneously, the ratio



ΔK/Kmax can be used as a matching parameter that



reflects both positive and negative match qualities.


     Confirmation of computer matches was accom-



plished by reanalysis of sample extracts using



capillary column GC/MS4.  More than 3000 computer-



matched identifications were studied in the confir-



mation process.



     Results and discussion.  In Table 2 are shown



the relations between the match parameters K and



K/Kmax and the precision of relative retention data.



The standard deviation of the relative retention



data is expressed in relative retention units and is




calculated using data from the 14 participating



laboratories.  The compounds shown here are repre-



sentative of commonly found compound classes for



each fraction or column.  The fused silica capillary

-------
                       196
column data are from compounds not commonly found

on the acid or base/neutral packed columns.

     In comparing the relative retention times with

K values for each compound, one can see that whereas

retention data variance is very small, the K value

range is a factor of 2-5.  The narrow retention time

windows allow greater confidence in poor spectrum

match parameters as will be shown below.

     Note that the standard deviations of the fused

silica capillary column data are much smaller than

those of the other columns.  Although a smaller num-

ber of laboratories is represented in the fused

silica data (only 2 compared to 14 with the packed
column data), later work5 has shown the interlab

precision of retention data using fused silica

columns to be excellent also.

     In Figure 5, the effect that retention data

has on matching confidence when using K as the

deciding match parameter is seen.  The upper curves

refer to data collected in this work, where the

lower curve refers to Atwater's previous study6

-------
                        197
 using  spectrum  data  alone.   As  can be  seen,  although



 the  curves merge  at  high K, much higher confidence



 can  be placed in  the matches  from the  present study



 that have retention  time corroboration at low K



 values.




     An anomaly can  be seen in  the two uppermost



 curves.  Whereas  the confirmation rate should in-



 crease with increasing K, one of the curves shows



 a decrease in confirmation rate at high K.  Examina-




 tion of the data  revealed that, although aliphatic



 carboxylic acids  were the largest group of compounds




 at K>100, only 32 percent were confirmed.  Likewise,



 aromatic acids had a confirmation rate of only 37



 percent.  Results of the carboxylic acids were



 deleted  from the data set and replotted to obtain



 the  topmost curve, which follows closely that of



Atwater6 at high K.




     Poor confirmation rates for the carboxylic acid



matches can be attributed to degradation of sample



extracts used for confirmation studies during storage.



To examine the confirmation characteristics of other

-------
                       198
compound classes, note Table 3.  Even though the



ambiguity of matching results among hydrocarbons



should be high, hydrocarbons evidence the highest



confirmation rates.  Perhaps storage stability ex-



plains this fact as well.  Note that carboxylic



acid esters had a much greater confirmation rate



than the acids.



     The data of Figure 5 show the fallacy of using



K value alone.  Since K is unbounded (Figure 4), a



molecule with many fragments will have a higher K



value in matching than a molecule with fewer frag-




ments.  Thus, the carboxylic acids of carbon length



>12 have a high K value even when the match is not



good simply because of the number of fragments



matched.  One must also look at the negative points



of the match.



     In Figure 6 the relation between ΔK and con-



firmation can be seen.  Note that again the use of



retention data improves match confidence greatly at



low match quality (high ΔK in this case).  Note



also that there is no anomalous behavior due to



the carboxylic acids.  In this case, since the dif-

-------
                        199
 ference between  the calculated match and a perfect



 match  is represented, the effect of increased frag-



mentation due to increased molecular weight is not



 seen.



     Observations such as this led to the use of




K/Kmax as the deciding match parameter.  In this



 way, both positive and negative match parameters



can be viewed in one term.  In Figure 7, the rela-



 tion of this parameter to match confirmation rate



is shown to be a function of the size of K/Kmax.



     Conclusions.  This study shows that the con-



 fidence of poor spectra matches can be greatly in-



creased by using retention data as a match parameter.



 The confidence of excellent spectra matches is not



affected by retention data — probably because



closely eluting compounds with very similar spectra



begin to interfere at this level of confidence.




The building of larger retention data libraries and




the construction of an FT/IR segment to the histori-



cal library management program (Figure 8) are the




next steps in improving the analyst's confidence in



automated identifications.

-------
                        200
                     REFERENCES
1.   W. M. Shackelford, D. M. Cline, L. Burchfield,
     L. Faas, G. Kurth, and A. D. Sauter, Advances
     in the  Identification and Analysis of Organic
     Pollutants  in Water, L.  H.  Keith, ed.,  527-554,
     Ann Arbor Science, 1981.

2.   EPA-NIH Mass Spectral Data  Base, John Wiley
     and Sons Registry of Mass Spectra, and  EHL
     Athens Master Data Base.

3.   G. M. Pesyna, R. Venkataraghavan, H. E. Day-
     ringer, and F. W. McLafferty, Anal. Chem.,
     1976, 1362-1368.

4.   EPA Contract Number 68-03-2867, Research Tri-
     angle Institute, Research Triangle Park, NC.

5.   A. D. Sauter and D. Betowski, HRC&CC, 4, 1981,
     366.

6.   B. L.  Atwater, Ph.D.  Thesis, Cornell University,
     Ithaca,  NY,  1980.

-------
                     200a

[Figure 1 graphic not reproduced: a three-axis sketch with axes labeled
INT, MASS, and RT, with regions marked PBM, HISLIB, and EICP.]

   Figure 1.  The three dimensions of GC/MS data.

-------
                     200b
   Mass      Intensity
RT
Figure  2.  Mass,  Intensity and  Retention Time
       Filters for  spectrum matching.

-------
                            200c

                   HISLIB  LOGIC

[Figure 3 flowchart not reproduced.  Boxes read: UNKNOWN SPECTRUM;
GOOD PBM MATCH?; PROCESS AS A MISSED SPECTRUM; ANY MORE RRT WINDOWS
TO COMPARE WITH?; ANY MORE "GOOD" PBM MATCHES?; LET CHEMISTS MAKE
DECISION ON THIS SPECTRUM; IS THIS A MATCH?; SPECTRUM TENTATIVELY
IDENTIFIED - STORE IN HISTORICAL DATABASE.]

Figure 3.   Logic of spectrum analysis in the
               historical library.

-------
                         200d

             MATCHING SYSTEM PARAMETERS

                        PBM

        0    K = Σ(Ui + Ai)

             THEORETICALLY HAS NO LIMIT

        0    ΔK = Kmax - K

        0    K/Kmax   MINIMIZES BIAS TOWARD COMPOUNDS
                      WITH MANY FRAGMENTS

Figure 4.  Probability based matching parameters.
Ui = empirically derived uniqueness; Ai = empirically
derived abundance value; D = dilution of spectrum by
impurity; W = the tolerance allowed for abundance
match.

-------
                                 200e

[Figure 5 graphic not reproduced: plots of confirmation rate versus K
for this study, with and without the carboxylic acids, compared with
spectrum matching alone.]
-------
                                  200f

[Figure 6 graphic not reproduced: plot of confirmation rate (percent)
versus ΔK; o = all compounds, * = data of Atwater.]

    Figure 6.   Plot of ΔK versus confirmation rate comparing
              this study with that of Atwater (5).

-------
                                      200g

[Figure 7 graphic not reproduced: plot of confirmation rate (percent)
versus K/Kmax from 0.3 to 1.0.]

            Figure 7.   Relation of K/Kmax to confirmation rate
                      with all compounds included.

-------
                        200h
                 FUTURE STUDIES
       0    BUILD RT LIBRARY WITH CAPILLARY COLUMN

            DATA


       0    ADD IR SPECTRA


       0    ADD SITE SPECIFIC DATA
Figure 8.  Planned improvements to the historical
        library for further selectivity.

-------
                                 200i

      HOMOLOGOUS    SERIES    STANDARDS


                        C6  -  C19 N-ALKANES

                        Clg - C^ N-ALKANES

                          C5 - C10 ALKENES

                         C8 - C22 ALKENES

                       C/! - C22 N-ALCOHOLS

                        C3  -  C16 ALDEHYDES

                     fy - C1£, PRIMARY AMINES

              C/j  -  C18 SECONDARY AND TERTIARY AMINES

                      BENZENOID HYDROCARBONS

                        DICARBOXYLIC ACIDS

              DIMETHYL ESTERS OF DICARBOXYLIC ACIDS

                       C3 - C18 FATTY ACIDS

                         C3 - C12 GLYCOLS

                      C3 -  C10 GLYCOL ETHERS

                        LOW BOILING ESTERS

                     C3 - Clg METHYL KETONES

                             PHENOLS

                          PHTHALATE ESTERS
Table 1.  Compound families used to initialize the historical library.

-------
                                                200j

[Table 2 not legibly reproduced: for representative compounds on the
acid, base/neutral, and fused silica capillary columns, the table lists
the spectrum match parameters K and K/Kmax and the standard deviation
of the relative retention data (in relative retention units) from the
participating laboratories.]


-------
                                   200k

[Table 3 not legibly reproduced: confirmation rates (percent) for
selected chemical classes.]

            Table  3.   Rate  of confirmation for selected
                          chemical classes.

-------
                        201
               QUESTIONS AND ANSWERS






                   MR.  STANKO:   George Stanko,  Shell



 Development.   Walt,  obviously,  this work was done




 with  some  of  the  data  and  some  of  the extracts




 that  had been sent  to  Athens as part  of  the



 screening  phase.



                  MR.  SHACKELFORD:  That's correct.



                  MR.  STANKO:   Could  you tell us



 what  has happened with that program; where are you



 in that particular program now?



                  MR.  SHACKELFORD:  Well, as far as



 Athens is concerned, we have finished all of the




 computer matching tests that we are going to do.




We have done the confirmation study and  we were



able to confirm 435 compounds.



    These are compounds that were found  at a high



frequency.   We also tried to confirm each com-



pound  at least once in every industrial effluent



in which it was found.   The library of unknowns




is presently being evaluated right now.  We have

-------
                        202
 some  55  or  60  candidates  presently  being  studied



 whose spectra do not exist in our reference



 library.  For  the  final end of  the  data you  have




 to refer to the  Effluent  Guidelines Division.



                   MR.  TELLIARD:  I  want to add




 some  toxics to  the list,  George, only in petro-



 leum.



    Thank you,  Walter.  We are  going to try  to




 continue this  program  with the  additional indus-



 tries of offshore oil  and gas and organic chemi-



 cals  that will be coming  up this year.  We will




 continue to use  the  tape  program and the extracts,



 with  the new quality assurance  built into the



 data  set, which  will make Walter's life easier.



 Of course, Walter is saying, "Who's going to pay



 me to do this".



    Our next speaker is Paul Mills from Mead




CompuChem.   Mead has spent some time in develop-




ing a quality assurance decision tree, I guess is



the best way to describe it, for real-time quality

-------
                       203
assurance.  This was developed primarily for




the garbage people, the solid waste people, but



I think a lot of these measurement decisions can



be made applicable to the work we are doing.  So



we have invited Paul to come today and explain



the system.

-------
                       204
        QUALITY ASSURANCE DECISION MODELS
           FOR  HAZARDOUS  WASTE  ANALYSIS

            Paul  Mills, Mead  CompuChem
                MR. MILLS:   Thank you.   I

have asked Nancy  to handle  some  transparencies

for me.

  Now, this will  be a multi-media show because

it deals not only with transparencies and slides,

but because it will also deal with soils, sludges,

solid and hazardous waste as well as water that you

are primarily interested in.

  Earlier in the  program we  have heard several

speakers talking about quality assurance and

what you do, for example, at the instrument, how

can an operator make decisions as to the quality

of the data that  has been produced.   I thought

for those people who may or may not  have some

familiarity with quality assurance I would put

the obligatory quality assurance and quality

control definition up there; is that focused

-------
                       205
 well?   (Indicating.)



     CompuChem  is  one  of  the  largest analytical




 facilities  in  the country.   We  have quite a  few




 GC/MS instruments and it poses  some unique prob-



 lems for me as Director, Quality Assurance,  some




 of which we will  get into which led me to help



 develop the model that I will be talking about.



 Some of the things down here that I would like



 to point out (indicating).   We do hundreds of



 samples a month,  by a variety of methods, both



 EPA and commercial methods for a variety of cus-



 tomers and industries.  We have three shifts,



 24-hours a day, we never close.  We have signi-



 ficant computer capability so that we can pro-



cess the data that is generated and turn it



around quickly.   We have a laboratory at Research



Triangle Park,  North Carolina, and one in Gary,




Illinois,  near Chicago.   We have 24 GC/MS



instruments and trying to keep track of the



data from  all of  those can be time-consuming.

-------
                      206
   The manner and the size, the scope of



CompuChem is set up so that samples come in



on what amounts to an assembly line; no one



person sees the entire job on the sample from



extraction to concentration through clean-up,




through GC/MS analysis through data reporting.



So we found that it is critical that each



person who does a piece of that sample as it



is passed along knows how well that job



has been done because they get immediate



feedback as to "Did I do my job correctly, or




did I screw it up, does it have to be done



again?"  The person next in line that gets



that sample to be able to do his part of the



job with it, like an auto assembly plant,



needs to know that that job was done correctly



so that his piece will have value when it




is added as the product goes down the line.



    So we must make sure that the quality



of the product that went out the door to the

-------
                       207
 customer  meets  the  standards  that  are  demanded,



 whether it  is by contract or  purchase  agreement.



 Also,  to  facilitate  intralaboratory transfers



 between departments  and between people of pro-



 ducts  of  known  quality, we started by  implement-




 ing a  system so that each person, each product,



 each lab  area was defined as  to the type of



 quality that was required.



    May I have  the next transparency,  please,



 Nancy.  This you should have  seen before, the



 elements  of quality assurance that are listed




 in the EPA quality assurance guidelines.  We



 started to look at how are we organized and



 who are responsible for what aspects of quality



 in the laboratory, what are the quality assurance



 objectives for the data in terms of these



parameters.



   In EPA  contracts these are very well spelled



 out in some regards with a number of definitive



criteria that are supposed to be applied:

-------
                      208
 Surrogate recoveries,  internal  standard areas,



 how well your spikes and duplicates are sup-



 posed to be recovered  and duplicated, things




 like that.  However, there are sections in the



 contracts which read such as, '...If these cri-




 teria are not met it is left up to the judgment



 of the analyst in order to take corrective ac-



 tions' ; it's not spelled out clearly what those



 should be or how those should be implemented.



    There are also procedures spelled out for how



 sample custody should be handled, how do you



 calibrate, how do you tell if your instruments



are properly calibrated, the methods that are



 to be used.   Some of the methods in the hazardous



waste program that we found have been developed



 in advance of validation data because of the



 urgencies for some of the data to be produced.




We find, not surprisingly, that the methods



don't work for all kinds of samples very well.

-------
                       209
 Some  they  will  work  for  very  well,  but  some  they



 won't.   Then, how  do you produce and validate



 your  data, what checks are performed within  the



 laboratory on how  well that data has been pro-



 duced, and the procedures that are  used; in




 particular, corrective actions and  reports to



 the management on  the corrective actions.



    May  I have the next  slide, please.  I went




 up to the mountain one day and came down with a



 stone tablet with  the Four Laws of Quality



 Assurance engraved on it, which are not my



 invention but they seem  to make some kind of a



 sense and at least the people that  I work with



 understand them.  The first law,  the most



 important one is,  'Do it right the first time'.



 If you are going to take the time to process



a sample and report it out, do it right the



 first time so there are no mistakes.  Secondly,



 'Detect errors as soon as possible'.  If you



know that there was a mistake made in the

-------
laboratory try and get that mistake rectified



or start the reprocessing of the sample; don't



wait until it is ready to be reported out the




door to say there was a problem.  You have lost



time and you have wasted a lot of energy.  Again,



this gets back to one of the things Phil Ryan



said earlier, you want to correct the error



as close as possible to its source.  If an



instrument operator can detect that there is a



problem with the surrogate recovery, that's



the time that something could be done about it.



It is also cheapest and quickest to do  it that




way; and, from a quality assurance standpoint



I demand that all of the actions that have been



taken for problem data be documented.  I want



to know what the corrective actions were, who



did them, what was their rationale, what was



the result.



   On the next transparency, we started to build



an example criteria for building a decision

-------
                       211
 model.   How do you apply those four laws of




 quality assurance so  that  you  would  apply



 some  sort  of a logical  or  hierarchical  frame-




 work  for making decisions  based  on  problems



 that  you might see?



    So  we  looked at,  first, what data can be



 examined by the analyst or someone  who  detects



 the error.   For example, you could  look  at it




 as a  GC/MS  operator:  Was  the  tune correct?



 Was the  blank  run okay? Was the standard with-



 in the criteria for calibration? Did all of




 the pieces  of  information  that were passed to



 him concerning  the preparation of that sample



 match what  it  was supposed  to  be for that pro-



 cedure?  Were  there other  samples in that data



 set,  say if  they came from a particular  case,



 that  have similar problems that could account



 for the problems  that are being seen?  Essen-



 tially, what was  the quality criteria for the



product and were  they met?

-------
                      212
    If some of these things are not correct then



in what order should you examine the possible



causes?  You could look and say certain things



like the tune, the blank, the standard must



have been acceptable or the analyst would not



have run the sample.   You can check internal



standard areas,  you can check the worksheets,



you can check response factors, things like



that,  and check to see whether there was



anything special about those samples.  Was



there any additional  data that may be necessary



to determine the source of the error?  For



example, the data from other sets of samples.



   In  our set up, a particular operator may




not have analyzed all of these samples from a



particular case.  They may have been prepared



at different times, they may have been done by




a different instrument, a different operator,



a different shift; but, the laboratory manager



in charge of that area can go back and

-------
                       213
 determine whether similar samples from the




 same set of samples show the same problems.



 Are there  additional people outside of that




 laboratory that it may be necessary to determine



 the source of the error, like the lab manager




 or the QC  Department?  And what are the options



 for taking corrective action?  What are  the ones



 that  are most prompt, likely to lead to the



 solution and elimination of the errors and saving



 costs, especially saving time in identifying and



correcting the problems?




    It may be possible that a calculation error



 was made in the information that was provided



 to the analyst.  If that is detected a calculation



correction can be made;  that is quick, that is



 simple, that does not affect the quality of the



data except to correct the mistake.   You may be



able to reinject the sample, in the  worst case



you may have to go back and reanalyze an entire



lot of samples.  Then,  how are the corrective

-------
                       214
actions documented?  They are supposed to



be documented on the worksheet associated



with the sample in the laboratory files by



a personal memo to me, to the files, and in



the report to the customer.




  The next slide, please.  These are some of



the advantages and disadvantages of the



implementation of this system, at least as



it applies within CompuChem.  It has shown



an improvement in the turn around time



because it will detect and correct problems



earlier and, I'm sure, avoid repetition.



It has improved the laboratory working



relationships.  If you have established with



each part of the laboratory that you



expect a certain quality of product from



them, all of the little pieces of paper




completely filled out, and you don't get it,



then you turn it back to them or don't



accept it, they tend to get the  message very

-------
                       215
 quickly  when  things  pile  up  on  them,  that



 it's got  to be  right or it won't be passed



 along.




     It has reduced rework and the associated costs



 because people are starting to do things right



 more often.  It has



 improved  our good will and our  prestige to



 the customers because you have decreased the



 turn-around time and it improved the



 quality of the product to the customer.  It



 tends to  free higher level staff for planning




 instead of problem-solving if things can get



 solved at lower levels.  You can document the



 accountability for quality, something I am



particularly interested in.  I am always trying



to establish that quality control and quality



assurance are really profit centers,  they are




not cost centers or overhead; they contribute to



the value of the products.  If you can document



what corrective actions were made and taken

-------
                       216
and  that  you can reduce costs,  you can  show  that



the  quality departments are paying their way.



     The detailed logic that goes into the



corrective actions for each area can be put into



the  computer so that eventually there will be




no human  intervention.  Data can go directly from



the  GC/MS instrument to a main-frame computer



that has  the logic of the corrective actions and



decisions built into it so that those data can



be rejected or accepted right there.  You save a



lot  of manual intervention.  We have this



currently in force for our biomedical area which



deals with much less complex samples than the



environmental ones.  We are a few months away



from implementing it entirely for the environ-



mental, but the concepts and what we have learned



in biomedical will apply in environmental sam-



ples.  Having the defined criteria we found



makes training of new staff quicker and more



effective.  They know what is expected of them

-------
                       217
 and  they  know  what  they  have  to do.



   The  system  for documentation as required by



 the  customer for this product is on the computer




 that allows ready access by managers, so if



 there is  a question:  "How good does this piece




 of information have to be?," it is spelled out



 and  it  is readily accessible.  It's nice to be



 able to know how much things cost so that you



 can bid on some of the new work, for example.



 It is an  excellent management tool for measuring



 performance.



    Some of the disadvantages are there are some



 costs associated with implementation because



 you have  to make changes in how the laboratory



does some things.  In the past, it has been



 the policy of CompuChem to use code numbers so



 that the analysts working in the laboratory do



 not know which samples are duplicates,  blanks,



or spiked samples.   This is so that we can

-------
                       218
 identify  how  well  the  laboratory does on all



 kinds of  samples.  In order to make sure certain



 kinds of  information are detected at the



 earliest  possible  level, the analysts need to



 know the  identities of those samples.  If




 someone thinks that that may distort the



 performance,  that  if they know it is a QC sample



 they are going to do even better than they




 normally would on regular samples, there are still periodic



 blind samples submitted that are doctored by



 our quality control and quality assurance




 department which come in as true "blinds" and



 will test how well people are doing.  Those are



 submitted for each of the analysts and operators



 every month.  As you will see later in the



 presentation that data is available for their



managers  to review, comment on, and correct if




 performance is under par.




    May I have the next slide, please.  This is



just a brief summary of the decision model

-------
                      219
steps as applied to, for example, if you



are looking for contamination in a blank



associated with a set of soil samples that



has been prepared.  You have to define the



product; usually that's defined by the



customer or in the contract.  What are the



attributes that you want to be determined?



How much contamination do you want or how



little?  How is the report to be delivered?



How do you make that product?  Is it on the



GC/MS or do you want it on the GC?  It is



usually defined by the contract for that



product.  What quality of performance is



desired?  For example, if something has failed



the criteria.  How do you measure the product



quality?  How often do you measure it?  Who



is responsible for measuring it?  Then,



this is where the managers and the actual



technical staff have to get heavily involved,



listing in detail all of the possible

-------
                      220
reasons why you might not be able to meet



that criteria; such as, contamination in



various parts of the laboratory.  Then, how




would you test and document those...or



eliminate those sources of problems in a



logical manner.  Describe the documentation,



corrective actions, train the staff, report



it, and then notify the customer if it is




necessary.



  Some of the changes that we have come up



with, now, for example in the processing of



blanks for the volatiles we have changed



hoods, we have changed types of impingers,



we have changed the location of sample



preparation based on the results of some



studies indicating there is some volatile



contamination in certain parts of the




laboratory.  We have changed certain times



when we do things to limit the  contamination



and  have  seen an overall improvement,

-------
                       221
 for  example,  in  the  quality of  the volatile



 blanks.




     Next  slide,  please.  This is a listing of




 the  desired product  quality; for example, from



 the  previous  slide,  our example of the volatile



 soil blank.   Most of this is taken straight



 from the  contracts that come from the Hazardous



 Waste Program office, but it is translated into




 saying "This  for our laboratory is what has to



 be produced."  You have to say the RIG of the



 sample doesn't end on an eluting peak, for



 example.  You have to have certain kinds of



 information, document control number, you have



 to label certain peaks, identify who did it,



 when, how, what standards they used;  all of



 these kinds of things go into making up the



 attributes of an acceptable product.   If there



is any qualifying data it is important to put



 footnotes in there so people understand it.

-------
                      222
   Next slide, please.  This is the example



for the volatile blank of what happens.  There



are three different blanks made up to be able



to determine for a particular set of 20



samples where a contaminant might occur.  If




you check the first blank and it is clean



then you move to check the second blank.  If



that is clean you check the third blank.



If any of the blanks have a contamination



problem in them you can then narrow down



where the source of the contamination might



be coming from, go back and correct it;



or, you prepare the sample and do it until



you have gotten a clean blank and clean



samples associated with it.  This is the kind



of logic that is being applied for other



types of samples, the spikes and the duplicates,



along with the blanks that are run across



the laboratory.
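
    A sketch of that three-blank logic, with the narrowing rules
stated only for illustration and not as the laboratory's actual
assignments:

    # Hypothetical mapping from which blank fails to where the
    # contamination is likely to have been introduced.

    def check_blanks(blank1_clean, blank2_clean, blank3_clean):
        """Check the blanks in order and suggest where contamination arose."""
        if not blank1_clean:
            return "contamination likely in sample preparation; re-prepare and rerun"
        if not blank2_clean:
            return "contamination likely during storage or handling; investigate"
        if not blank3_clean:
            return "contamination likely at the instrument; correct and reanalyze"
        return "all blanks clean; the associated samples may be reported"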



    Now, I would like to switch, if possible,

-------
                      223
back to the 35 millimeter slides and then



at the end of that I'll have about three more



transparencies.



    Could someone turn on that slide projector,




please.   In case you haven't seen a GC/MS



laboratory with a lot of instruments in it



that's what it looks like (indicating).  This



is, for the example that I was using, the




volatile bottles are on the top and packed in



nice little Styrofoam containers so that



they don't break.  There are four there, they



are contained in four volatile bottles.



That's CompuChem's...our EPA samples don't



come in nice containers like this.  If they



did we wouldn't have as many breakage problems



as we see with them; in these we don't lose



samples for our commercial customers, they



come in like this (indicating).



    As part of the laboratory quality control



some of these things...well, all of these

-------
                      224
things have to be met.  These are pieces of



information that are reported to the  customer;



for this instance, in the Hazardous Waste



Program of EPA.  I certainly echo,  I  nearly



stood up and applauded when I heard that the



Effluent Guidelines is trying to reduce the



amount of paper that is produced, they can



get more on tape because for several  years



we have been trying to get the Hazardous



Waste Program to do that.  All of our customers



are the regions so they are demanding



additional paperwork.  Some of the tests that



are done in order to  insure that there is no



contamination  in  the  glassware, for example,



for every  set  of  samples,  say,  for sample



containers that are prepared, a portion of them



are prepared for  tests to  determine if they




look  clean before they are used.  We have



storage  stability tests  in order to monitor



the  atmosphere in which...you walk in  a

-------
                      225
refrigerator in which the volatiles that



are prepared are stored to make sure that



there is no contamination occurring from




the storage of the samples.



  This is the purged water that is used




for the preparation of the samples so that



we can demonstrate that we are not



contaminating the samples with the water.




The purge and trap, GC/MS, is where they



get analyzed.  Some of the reports that we



put out to the managers document how



well the decision models are being followed and



how well the people that they have doing



the work are performing.  We have computer-



generated reports which will show surrogate



recoveries by matrix, by level, by individual



extraction person, by GC/MS instrument, by



operator, by shifts; all of the different



ways so that the manager can actually



determine if somebody is out of line and

-------
                      226
what needs to be done about that.  You have



this kind of information that is also useful



to determine how well the laboratory is




performing and all of that.  Quarterly we



will take a look at our recoveries and



determine whether or not they need to be



tightened based on what we are seeing on



the results.




    The thrust of what we are trying to do



is to establish, if we find a problem, is it



a problem with the laboratory's technique?



Is it a problem with the method and its



applicability for those samples so that we



can document for our customer it's our fault,



we screwed up, don't pay us for our mistakes?



Or, if it is not our fault we can present



that to the customer and say it is a problem




that either the method or the matrix causes



the data not to be acceptable to meet the



criteria as has been specified.

-------
                      227
    This is the mainframe computer, or it



is back behind there, that we use for processing




some of the reports (indicating).  I'll turn up



the lights again.  There are just a few more



transparencies that I wanted to show which will



show the form of some of the reports that we




get.



    This is an example of a pie chart that




essentially tells the managers, the people



that I work with how many repeat requests



for sample repreparation or, for example,




reinjection that were done during February.



The pie chart is divided into different



sections depending on the type of fraction



which is analyzed.  The volatiles over here,



acid, base neutrals, semi-volatiles is part



of the chart and that seemed to be where a



large number of the problems were.



    If you are interested to know how big a



problem this represents, this is less than

-------
                      228
four percent of the total number of samples



that we processed for the month that



represented things that had to be reprepared.



    The next one, please.  An example of some



of the information, say, for surrogate




recoveries for volatile samples.  A target



range is set up based on the criteria



that are in the EPA contracts that we have



saying, assuming a normal distribution of the




surrogate recoveries you would want to see



the same distribution up here (indicating).



This is the actual distribution that we are



seeing, to be able to see for the kinds of



samples that we are getting in, if we are



within the control limits for those types



of samples and on that particular indicator.



    The last one, please.  Here is an example




of tracking internal standard response



verifications.  Here is  an instance in which



suddenly something went way out of control

-------
                      229
down here, the instrument stopped, maintenance
was performed on the instrument and brought back
up (indicating); and, it's within the criteria.
That kind of information is available promptly
at the instrument, although the graph was not
made until later.  The operator at the instru-
ment has to make decisions if he sees that
happen; that's it.
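
A minimal sketch, in Python, of the kind of control-limit check being
described; the three-sigma limits and the numbers below are illustrative
assumptions, since the actual limits come from the contract criteria and
the laboratory's own charts.

    # Sketch of an internal standard response verification check.  Limits
    # here are the conventional mean +/- 3 standard deviations computed from
    # historical in-control data; the laboratory's actual limits may be set
    # differently (e.g., from contract acceptance criteria).
    from statistics import mean, stdev

    def control_limits(historical_responses):
        """Return (lower, upper) control limits from in-control history."""
        m, s = mean(historical_responses), stdev(historical_responses)
        return m - 3 * s, m + 3 * s

    def out_of_control(new_responses, lcl, ucl):
        """Indices of points outside the limits, so the operator can stop
        the instrument and call for maintenance."""
        return [i for i, r in enumerate(new_responses) if not lcl <= r <= ucl]

    history = [1.02, 0.98, 1.05, 0.97, 1.01, 0.99, 1.03, 1.00]
    lcl, ucl = control_limits(history)
    print(out_of_control([1.01, 0.42, 1.02], lcl, ucl))    # -> [1]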
    Thank you.  I would like to thank Nancy.
I'm sorry you only got to see the back of her
head while she was doing that; the rest of
her is nice, too.  That's the end of my
presentation.  I would be glad to answer any
questions that you may have.
                MR. TELLIARD:  Thank you,
Paul.
    A couple of announcements.
                   ******

-------
                                        229a
                       QUALITY ASSURANCE DECISION MODEL
                                      FOR
                           HAZARDOUS WASTE ANALYSIS

                            Paul E. Mills, Director
                            Quality Assurance
                            Mead CompuChem
                            P.O. Box 12652
                            Research Triangle Park, NC  27709
ABSTRACT:
Adopting a customer and service-orientation within a laboratory Quality
Assurance program provides better-defined product quality within laboratory
areas, and ultimately for the laboratory's customers.  This presentation descri-
bes a quality assurance decision model which is being installed at Mead
CompuChem.  The model defines the various products of the laboratory as
"analytical results" which are conveyed via paper or magnetic tape to each part
of the laboratory or to customers.  Each product must meet a pre-defined set of
criteria; for EPA hazardous waste analyses, these are based on contract-
specified deliverable documentation.  Each laboratory area is responsible for
the quality of its products.  The Quality Control and Quality Assurance
Departments monitor product quality, and check that documentation of corrective
actions is complete and consistent with the model.  The second generation of the
model will extend the concept to allow "quality-costing" to be applied to
finished and reworked products.  The advantages of this decision model are:
1)  It is accessible to each manager through a common file;  2) Corrective
actions are made consistently the same for similar problems;  3) Logic applied
can be used for automating review systems, and ultimately "networking" to elimi-
nate the need for manual interventions;  4) Cost of producing products of
defined quality can be determined and used in assessing laboratory performance.

INTRODUCTION:

This paper provides a description of a management concept for improved product
quality from an analytical chemistry laboratory.  I have provided a summary of
its development, and a description of the advantages of the system.  Some
examples are provided of the logic applied, and the system performance data on
portions installed and currently operating.

Because of the size of Mead CompuChem (24 GC/MS instruments at 2 locations) and
the scope of the company's business, the concept of manufacturing centers has
been used in planning and production.  Productivity is essential to warrant
capital investment.  Specialization is applied to tasks such as sample receipt,
extractions, analysis, and data reporting.  No one person does the "whole job"
on a sample.  Therefore, it is critical that information accompany the sample in
process down the "assembly line".  Each lab area must check its products'
quality prior to transferring samples to another lab area.

The original concept for this paper developed from attempts to quantitate the
contribution of QA and QC to profits.  Several references are included which
provided assistance in this effort.

-------
                                        229b
The productivity of a quality system can be measured by its contribution to
business profits.  It is desirable to develop a quality system to achieve and
maintain product quality and decrease variability.  To accomplish this, it is
necessary to define the product, its quality, and the quality of the perfor-
mance necessary to make the product.  The procedures or methods used in produc-
tion place constraints on the quality by specifying:  Limits of detection,
sensitivity, safety, cost, measurability, precision, accuracy, selectivity, and
specificity.  It is important to identify areas of responsibility to be effec-
tively managed to obtain control of product quality.  It is also important to
measure the performance of procedures and analysts for the samples being
analyzed.

The objective of CompuChem's system is to improve product quality for the
laboratory's customers.  I have suggested the definition that the lab's products
are "analytical results".  These products consist of information packaged in
customer-requested formats such as EPA deliverable paper, GC/MS magnetic tapes,
etc.  A sub-objective is to improve the quality of information exchanged within
the laboratory used to produce the customer-reportable data; i.e., each area of
the laboratory is a customer for intra-lab data; QA uses it to determine how
well those areas are performing.

To attain the objective, at least the following goals must be met; they are
stated in the form of Quality Assurance Laws for impact and for easy remembering.
The First Law of Quality Assurance is "Do it right the first time!"  The Second
Law is "Detect errors as soon as possible!" The Third Law is "Correct the error
as close as possible to its source!" The Fourth Law is "Document all actions
taken!"

The objectives and goals can be met if the following concepts are adopted for
the laboratory:  1) Each product of the laboratory must be of a defined quality.
2) Criteria are established by Marketing and the customer for the products to be
delivered; criteria are established by QA for those products remaining inhouse.
Specifications for finished products must define the desired attributes and sub-
components.  They must specify the inspection methods and frequencies and who is
responsible for inspections.  Specifications must be expressed as "targets" and
"ranges".  3) Each lab area is responsible for the quality of its products, or
the product is returned for rework or explanation.  Specifications for the
disposition of rejects (rework or scrap) must be made.  No one should have to
look at bad data from another part of the lab!  Each lab area should be viewed
as a "customer" for the products of the other lab areas.  Each lab area has the
right as a customer to demand that the quality levels be met and maintained.  4)
Make QC samples, such as blanks, spikes, and duplicates, known to the analysts,
in the laboratory, to allow for prompt detection of problems.  5) Before changes
in product specifications or procedures are made, the approval of the Director,
Quality Assurance, is necessary.  6) Training and documentation are critical
steps to ensure quality.  Figure 1 shows an example which summarizes the steps
in establishing the decision model.

While these system concepts were being developed for application at Mead
CompuChem, several EPA customers began requesting that the current set of hazar-
dous waste analytical contracts be modified to define more specifically the data
quality desired and corrective actions to be taken if acceptance criteria were
not met.  EPA contract requirements would seem to imply that, by using the
required procedures for analyzing hazardous waste samples, it is possible to
produce data of acceptable quality on most samples, as determined by specified

-------
                                      229c


quality indicators.  Unfortunately, insufficient data is available to  prove this
is true for all samples to which the methods are being applied.   For the
contracts, the nature of corrective actions has been left to the "judgment  of
the analyst" without specifying that the same problems (i.e., exceeding accep-
tance criteria) should be treated in a standardized fashion for  all  who
experience the same problems.  However, there may in fact be samples for which
the methods and therefore the quality criteria do not apply.  The lab  must
therefore demonstrate that the analytical  procedure and the techniques of ana-
lysts are in control, or that the problems are inherent in the method  or the
nature of the sample.  This can be established, for example, by  using  duplicates,
spikes, blanks, and other test samples to evaluate lab performance.

Using the EPA contract-specified deliverables list, I have produced  a  document
which defines the desired quality of the products (pieces of paper)  which make
up the EPA data package.  The criteria applied are either specified  in the  EPA
contract, or have been established by CompuChem in their absence in  the
contract.  It is the responsibility of the manager of each lab area  that his
products meet the quality criteria.  An example is provided in Figure  2 of the
criteria used in building the model.  Each manager is responsible for  rework
until the product is acceptable.  The system for detection and correction of
such problems is established within a lab area by its manager, who presumably
knows its capabilities and resources best.  Each manager goes through the logic
required to produce acceptable quality products.  Quality of product should be
considered as well as the constraints of productivity and resources.  Where
there are conflicts, top management must resolve them.  This will give some
options to management in producing certain products.  For example, if  the pro-
duct is a "screening analysis" to determine approximate amounts  of organics in  a
sample, it may be that the screening data can be acceptably produced either by  GC
or GC/MS.  The system must define how many and what types of errors are to be
monitored and corrected, the frequency of testing, and what kinds of corrective
actions are appropriate.  In addition, quality measures of performance are
required.  An example of the product quality, procedure for production, and flow
chart of the decision model is shown in Figures 3, 4, and 5.

It is the responsibility of each lab manager to monitor for errors within his
area, to implement corrective actions, and to report the problem, its  extent,
and the effectiveness of remedies, to QC and QA.  If quality control samples are
outside control limits, the manager is informed by the QC Department,  so that
the manager can correct the problems.  QC and QA can assist and advise on the
appropriate actions.  QC monitors the effectiveness of these actions,  and
reports this to QA.  Documentation of problems and actions must be made, either
by footnotes or written explanations within the body of the report.  This docu-
mentation should provide adequate detail to state the problem, actions taken,
and their effectiveness, what data was affected, what dates these things
occurred, and the names of parties responsible, should there be questions.  In
Figure 6 I have listed advantages and disadvantages of conversion to this
system.

The system described has demonstrated improved product quality and lowered costs
for those areas in which it has been installed.  The system is being expanded
into other lab areas, and its operation continues to be  refined with experience.
Figures 7, 8, and 9  show the type of management information generated.

-------
                                        229d
Appraisal of the effectiveness of the system will eventually be handled by
acceptance sampling at CompuChem during review processes.  Currently, several
levels evaluate all the data prior to release to other lab areas and to custo-
mers.  Acceptance sampling will be instituted as observed error rates fall.

I would like to acknowledge my colleagues who contributed their time, effort, and
study results to developing parts of the system:  Mrs. Patty Ragsdale; Mr.
Robert Meierer; Mr. Robert Whitehead.

-------
                                         229e
REFERENCES:

ASTM Standard E882-82:  "Standard Guide for Accountability and Quality Control
in the Chemical Analysis Laboratory"

Managing Quality for Higher Profits, Robert A. Broh, McGraw-Hill, 1982.

Quality Control in Analytical Chemistry, G. Kateman & F. W. Pijpers,
Wiley-Interscience, 1981.

-------
                                     229f
                                  FIGURE 1:


     SUMMARY OF QA DECISION MODEL STEPS:  EXAMPLE OF VOA BLANK, SOIL SAMPLE

 1)  Define product    (VOA blank)
 2)  Describe attributes to be determined (extent of contamination, form and  con-
     tent of report to be delivered)
 3)  Define how product is to be made (GC/MS output, contract  method)
 4)  Define quality of performance desired (no blanks fail  criteria)
 5)  Define product quality criteria  (specified in  contract, priority  pollutants
     less than half detection limits)
 6)  Determine measurements of product quality, frequency of measurement,  and
     responsibilities (each set of samples prepared, analyzed  by  GC/MS operator,
     within acceptance criteria)
 7)  List in detail all possible problems which could cause unacceptable product
     quality (contaminated standards, glassware, water, etc.)
 8)  For all problems, list tests to  determine source of problem  in a  logical,
     hierarchical order (operator, manager check, reanalysis).
 9)  Describe documentation of corrective actions to be reported  (reanalysis)
10)  Implement system with training for staff responsible
11)  Monitor and report on system effectiveness
12)  Modify as necessary, and document changes (e.g., change type of impinger,
     change location of sample preparation).

-------
                                        229g


                                   FIGURE 2:

              EXAMPLE CRITERIA APPLIED TO BUILD A DECISION MODEL


What data can be examined by analyst who detects error?  (For example, instrument
performance Tune, blank, standard data, worksheet, vials are all available for
inspection at the bench, as well as results of previous, related, samples, and
the quality criteria for the product)

In what order should it be examined?  (Tune, blank, and standard must have been
acceptable, or no samples could be run; check internal standard areas; check
worksheets for amount of sample used, volume of
concentrates, surrogate and spike standards used, any nonroutine actions taken
or problems encountered in prep.)

What additional data may be necessary to determine the source of error?  (Other
data from same set of samples)

What additional people may be necessary to determine the source of error?  (Lab
manager, QC, etc.)

What options for corrective actions are most prompt, likely to lead to elimina-
tion of errors, save costs?  (From least to most costly, identify and correct
calculation errors; reinject sample; reprepare and reanalyze samples.)

How are corrective actions documented?  (In report, in lab files, by memo, etc.)

-------
                                       229h


                                   FIGURE 3

                   DESIRED PRODUCT QUALITY:  VOA SOIL BLANK


Desired Product:

    The RIC must be normalized to the largest, non-solvent peak.

    The RIC must cover the range of Hazardous Substances List compounds.

    Internal and surrogate standards must be labelled on the RIC.

    There should be no tailing or elevated baselines (the latter portion  of the
    baseline should not rise by more than 4X the midrange level).

    The RIC must not end on an eluting peak; peaks must not be cut off by the
    end of a page.

    Contaminants must be less than 1/2 the detection limits for HSL compounds, and
    less than 25% of the peak height of the nearest internal standard for others.
    Contaminants must be accounted for.

    There must be a document control number on the RIC, representing the EPA
    case and sample numbers.

    The RIC scan starts before the first eluting HSL compound, and ends no sooner
    than the latest eluting HSL compound.

    File header information is included to identify the ID number, standards
    used, operator, shift, instrument, and time.

    Tabulated results (identification, quantity, scan number or retention time)
    of the specified HSL compounds must be submitted, validated and signed in
    original signature by the Laboratory Manager.

    On the EPA reporting form, the appropriate units and detection limit  factors
    must be circled and/or adjusted.

    Appropriate footnotes for qualifying data must be included.
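
A minimal sketch, in Python, of the two numeric contamination checks in the
list above (the remaining items are documentation checks); the data layout
and field names are illustrative assumptions, not the contract's format.

    # HSL contaminants must be below one-half their detection limits; other
    # contaminant peaks must be below 25% of the peak height of the nearest
    # internal standard.
    def blank_contamination_acceptable(peaks):
        """peaks: list of dicts describing contaminant peaks found in a blank."""
        for p in peaks:
            if p["is_hsl_compound"]:
                if p["amount"] >= 0.5 * p["detection_limit"]:
                    return False
            elif p["peak_height"] >= 0.25 * p["nearest_int_std_height"]:
                return False
        return True

    example = [
        {"is_hsl_compound": True, "amount": 2.0, "detection_limit": 10.0},
        {"is_hsl_compound": False, "peak_height": 30.0,
         "nearest_int_std_height": 200.0},
    ]
    print(blank_contamination_acceptable(example))    # -> True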

-------
                                      229i



                                   FIGURE 4

                         VOA SOLID SAMPLE PREPARATION


Three different laboratory areas are involved in preparation  and  analysis  of  VOA
solid samples:  Glassware preparation;  Inorganics lab hood  for sample
preparation; GC/MS lab for addition of  water, sample storage, and analysis.

Glass impingers are taken from the oven in Glassware preparation  area  and
transported to the Inorganics lab hood.  Samples are transported  to  the  lab area
for preparation, but kept outside the hood until each one's turn  for preparation.
Only one impinger and one sample at a time are in the hood  during preparation.

Water from the GC/MS lab (purged organic-free water) is  taken into the hood for
filling the designated "A" and "B" blanks.  A "C" blank is filled in the GC/MS
lab and makes the trip with the other samples, but is not opened  in  the  hood; it
is similar to a trip blank.

The "A" blank is prepared first, by filling the impinger with the GC/MS  water.
(This tests the hood area, to demonstrate it is clean before  preparing other
samples).

20 samples are prepared, one at a time.  Weighed quantities of samples are
transferred from jars into impingers with appropriate utensils, then capped.

After the 20th sample is prepared, the "B" blank is made, similarly to the "A"
blank.  (This tests that there has been no contamination introduced  into the
hood during sample preparation).

Prepared samples are taken into the GC/MS lab, filled with  aliquots  of GC/MS
water, and stored in the GC/MS lab refrigerator for VOA's only.  It is equipped
with a charcoal scrubber.

The order of analysis for these samples and blanks is described on the following
flow chart.

LOGIC:  The instrument blank shows that the internal surrogate standards were
not contaminated, and that the GC/MS lab air and water are  clean, and  that the
instrument is not contaminated.

The "A" blank will show is the hood area was contaminated prior to sample
preparation.

The "B" blank will show if there has been "cross-contamination during  transit or
staorage due to faulty impinger seals, lab air, etc.

The latter three blanks will also show if there is contamination  of  syringes  or
glassware.
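
The narrowing-down logic above can be summarized as a small decision
routine.  The sketch below, in Python, is an illustration of that logic
only; the actual order of analysis and corrective action is governed by the
flow chart in Figure 5.

    # Each argument is True if the corresponding blank showed contamination.
    def likely_contamination_source(instrument_blank, a_blank, b_blank, c_blank):
        if instrument_blank:
            return ("GC/MS lab air or water, internal/surrogate standards, "
                    "or the instrument itself")
        if a_blank:
            return "hood area contaminated prior to sample preparation"
        if b_blank:
            return ("contamination introduced during preparation, transit, or "
                    "storage (impinger seals, lab air, syringes, glassware)")
        if c_blank:
            return "contamination during transit or storage outside the hood"
        return "no contamination indicated; associated samples acceptable"

    print(likely_contamination_source(False, False, True, False))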

-------
                                       229j


                                    FIGURE 5

         [Flow chart of the order of analysis for the VOA blanks and samples
          described in Figure 4; the graphic could not be reproduced in this
          transcription.]
-------
                                        229k


                                   FIGURE 6

                         ADVANTAGES AND DISADVANTAGES
ADVANTAGES:
    Improvements in turnaround time
    Improved intralab working relationships
    Reduced rework and associated costs
    Improved goodwill and prestige with customers
    Correction of problems at earliest stages
    Higher-level staff are freed for planning, not problem-solving
    Accountability for quality can be well-established
    Detailed logic of corrective actions can be automated, "networked"
    Defined criteria makes training quicker, more effective
    Automated system allows prompt access, consistent responses
    Costs of errors can be documented
    Costs of corrective actions can be documented
    Costs data assist in bidding new work, measuring performance, etc.


DISADVANTAGES:
    Minor costs of implementation:  Changes in paperwork, work flow, training
    Allowing analysts to know identities of QC samples may distort true
    performance; can be corrected by submitting true "blinds"

-------
                               229l


                                    FIGURE 7

         [Pie chart of repeat requests for sample repreparation or reinjection
          during one month, broken down by analytical fraction; the graphic
          could not be reproduced in this transcription.]
-------
                                       229m


                                    FIGURE 8

         [Bar charts of surrogate recovery distributions for volatile samples
          (percent of sample population versus percent recovery), shown
          against the target distribution; the graphic could not be reproduced
          in this transcription.]
-------
                                                                 Page 229n

               INT. STD. RESP. VERIFICATION CONTROL CHART

         [Three control charts of relative response versus date, each plotted
          with its mean, upper control limit (UCL), and lower control limit
          (LCL):

              SEMIVOLATILE D3-PHENOL/D8-NAPHTHALENE
              SEMIVOLATILE D8-NAPHTHALENE/D10-PHENANTHRENE
              SEMIVOLATILE D10-PHENANTHRENE/D12-CHRYSENE

          Spaces are provided for date, analyst, and comments.  The plotted
          data could not be reproduced in this transcription.]

                                                                      FIGURE 9

-------
                     . 230








             PROCEEDINGS




                MR. TELLIARD:  Good morning.



Contrary to the program, we have made a few



minor changes, there is no break this morning



so that we can stop about 11:30 for lunch



and people can check out.



    Our first speaker this morning is Barry



Eynon from SRI.  Barry is part three of the



continuing saga of the new procedure that we




are trying to enact and which we discussed



all day yesterday and which we will touch on



again today.  Barry is going to discuss this



morning something we all want to listen to at



about quarter to 10, statistics; the joy and



fun of numbers.

-------
                      231
  STATISTICAL METHODS FOR EFFLUENT GUIDELINES

                Barrett P. Eynon
               SRI International
                MR. EYNON:  Good morning, I

am glad to see we are all still here or at

least partly.  Bill asked me to come down and

talk to you a little bit about what we at SRI

and other groups working with effluent guidelines

have been doing as far as the statistical

analysis of pollutant data for setting effluent

guidelines.

     The statistical analysis of industrial waste

pollution data is an important step in the

determination of effluent water guidelines.  A

number of different statistical techniques are

used to address the information needs of the

technical staff at EPA in setting limitations

guidelines.  There is not time today to talk

about all of the different methodologies, but

what I will try to do is give a general review

-------
                      232
of the circumstances, methods, and objectives



of some of these analyses.



    SRI and myself, personally, have been



involved in three major industrial categories



over the past three years on both conventional




and priority pollutants:  pharmaceutical, petroleum



refining, and organic chemicals manufacturing.



This work has been in cooperation with Effluent



Guidelines and the Office of Analysis and



Evaluation.



    The data that we use in these analyses is




usually voluntary data  submitted by the plants



and will consist of influent and effluent



treatment concentrations of pollutants.  The



plants are selected from among the voluntary



participants to be those which have well-designed



and operating treatment systems of the



appropriate  type for the regulatory package.



For instance, of the set of 22 pharmaceutical



plants which submitted  data,  13 were  judged  to

-------
                      233
have well-designed and operating biological



treatment systems and were designated as BAT/



BCT plants for the purposes of constructing



regulations.  A further subset of 10 plants



among the 13 were designated as NSPS plants for



setting NSPS limits.  The characterization



of these plants represents the engineering and



technical evaluations of the plants by EPA.



    The sampling and data handling for each study




proceeds through several stages to insure high



data quality.  The samples are usually taken by



the plant according to a pre-determined sampling



plan.  The sampling plan can be as straight-



forward as one sample taken each day or each



week at each sampling point; or, as extensive as



that in the first slide which is the sampling



design for the Organic Chemicals 5-Plant Study.



    In this study at each of the participant plants



approximately 30 days of sampling were performed.



On each sampling day, one sample was taken at the

-------
                      234
pre-treatment and the post-treatment sampling



points.  Each sample was analyzed by an EPA



contract laboratory for a specific set of



priority pollutants and the samples were



also analyzed by a Chemical Manufacturers



Association contract laboratory and also by



the participant plants.



    The pollutants that were analyzed for were



chosen from one or more of the analytical



fractions of the organic priority pollutants




so as to reduce the number of analyses needed



on each sample and also to satisfy the



confidentiality restrictions of each of the



plants.  In order to evaluate the accuracy



and precision of the priority pollutant



measurements several quality control measures



were included in the sampling design.



    Approximately two-thirds of the samples were



spiked with known concentrations of priority




pollutants after their analysis and reanalyzed

-------
                       235
 in order  to measure  the  percent  recovery  of



 the analytical methods.   The remaining one-third



 of the samples were  analyzed in  duplicate  in



 order to measure laboratory precision.  The



 samples were also spiked  with known amounts of




 "surrogate" chemicals known not  to be present



 in the waste stream, in order to aid in measuring



 the recovery of the  analytical methods.  Measure-



 ments were also made on blank samples of distilled



 water, some of which were shipped with the waste



 samples to check for contamination.



    Upon receipt of  the laboratory reports on the



 chemical analyses, the data from such studies



 are coded and entered into the computer data base.



As we have heard today, hopefully some of this



stuff will be obviated in the future, but for the



current work on this study we coded the data and




checked the data and then reviewed the data.



    The package that we have found to be very



effective for setting up data bases to handle

-------
                      236
complicated studies like this  is the Statistical



Analysis System package, SAS, which we have



available on EPA's IBM computer and it runs on



IBM main frames.  It is a very powerful and flexible



package for data processing that has data




management and reporting facilities and also



has the capabilities for sophisticated



statistical analyses.




  Once the data is stored in the computer, data




listings and plots of the data can be generated.



These are checked for unusual or extreme values



which may indicate coding or transcription



errors.  The values are reviewed with the



laboratory reports and with the laboratory to



correct any errors.  In addition, concentrations



which are confirmed by the laboratory,  but



which are attributable to known plant treatment



upsets, or deemed to show variation beyond that



associated with well-operated treatment systems,



can be removed from the analysis, in order to

-------
                      237
focus on the behavior of well-operating treatment




systems.



    Figure 2 shows plots of the Effluent Total



Suspended Solids concentration versus time for one




of the plants in the pharmaceutical data base



before and after removal of an extreme value.



In the top picture we can see that one point just



sticks out like a sore thumb and we went back



and checked it out.  I'm not sure exactly what



was going on in this one, it could have been a



typo or it was a value that just was an upset.



We reviewed the plant records and removed that



value from the data set and then we get...when



we replot the data and rescale it we get a much



more reasonable looking view of the situation



at that plant.



    So this is done for the set of data and the



final data set or edited data set is then



available for analysis by statistical methods.



So now we go into what is it that we are

-------
                       238
 trying to determine  from  this data  once  it  is



 in the computer.  There are two major




 quantities of  interest in all of these studies.



 The first is to measure the average concentration



 of each pollutant in the wastewater of each



 plant, before  and after treatment.



    Why don't  we put up the next slide.  This can



 be directly estimated from the data, using the



 arithmetic averages of the measured concentrations



 for each sample.  If the set of plants for which



 the data is available is deemed to be a



 representative set of the set of all plants with



 well-operating treatment systems for the



 industry, then the average pollutant concentrations



 can be taken across plants to estimate the



 average effluent concentrations for the industry.



 So we will start with an averaging, if we have




multiple analyses per sample we will start with



 an average and come up with a number for each



 sample.  Then, we would take an average across

-------
                       239
 those  samples  to  come  up  with  a  value  for  the



 plant  and  then  an average across  the plants to



 come up  with an overall concentration  value.
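
A minimal sketch, in Python, of that averaging hierarchy (the project work
itself was carried out in SAS); the nesting of the data is an illustrative
assumption.

    # Average replicate analyses within each sample, then samples within each
    # plant, then the plant means to get an industry-level estimate.
    def average(values):
        return sum(values) / len(values)

    def industry_average(plants):
        """plants: {plant_id: {sample_id: [replicate concentrations]}}"""
        plant_means = []
        for samples in plants.values():
            plant_means.append(average([average(reps) for reps in samples.values()]))
        return average(plant_means)

    data = {
        "plant_A": {"s1": [12.0, 14.0], "s2": [9.0]},
        "plant_B": {"s1": [20.0], "s2": [18.0, 22.0]},
    }
    print(industry_average(data))    # -> 15.5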



     In other situations similar  to the organics



 five-plant  study  where the plants were more on




 the order of case studies with specific



 pollutants  of  interest there we  want to actually



 review the  pollutants on  a pollutant-by-



 pollutant basis on a plant-by-plant basis to



 look for pollutants in each different kind of



 effluent.



    If both influent and  effluent data are



 available on a particular data base, a second



 quantity which can be calculated is the



 percentage reduction of the pollutant; and,



 that is given in  the format up there, influent



minus effluent divided by influent and




 multiplied by 100 to turn it into a percentage.



 This is used to quantify the effectiveness of



 the treatment system by the plant.
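
Expressed as a one-line Python function (the numbers are illustrative only):

    # Percentage reduction across the treatment system, as defined above.
    def percent_reduction(influent, effluent):
        return 100.0 * (influent - effluent) / influent

    print(percent_reduction(250.0, 10.0))    # -> 96.0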

-------
                      240
    The second main quantity of interest is




to characterize the day-to-day variability



in the concentrations of a pollutant in a




waste stream.  For regulatory purposes, the



quantity of interest known as the variability



factor is defined to be the 99th percentile



of the distribution of daily concentrations



divided by their long term mean.  This quantity



which is similar in concept to the usual



coefficient of variation, except we are aiming



at a different percentile; this is found to




be a reasonably stable measure of the amount of



day-to-day variation in a pollutant independent



of the overall level of the pollutant in the



effluent.



    If the appropriate variability for a pollutant




is determined, then it could be multiplied by a



designated long-term plant mean concentration



for that pollutant such that if the plant is



discharging overall at the designated long-term

-------
                       241
mean concentration,  then  the  rate of  exceedance



of the limitation will be one day in 100.  Long-



term mean effluent concentrations above the



designated mean level will show an exceedance



rate in excess of one in  100.
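
A minimal sketch of the daily variability factor in Python, using a simple
empirical (nearest-rank) 99th percentile; this is only reasonable with a
large number of observations, which is why the distributional estimators
discussed next are needed for smaller data sets.

    def empirical_percentile(values, p):
        """Simple nearest-rank percentile; p between 0 and 1."""
        ordered = sorted(values)
        rank = max(1, int(round(p * len(ordered))))
        return ordered[rank - 1]

    def daily_variability_factor(daily_concentrations):
        long_term_mean = sum(daily_concentrations) / len(daily_concentrations)
        return empirical_percentile(daily_concentrations, 0.99) / long_term_mean

    data = [5.0] * 98 + [9.0, 12.0]
    print(daily_variability_factor(data))    # -> about 1.76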



    In order to calculate the variability factor



from a set of data,  an estimate of the 99th



percentile of the distribution is necessary.



This is a more complex problem than the




estimation of mean concentrations, since the



data at hand often only consist of 30 to 50



points, or less.  Several statistical methods of



estimating the 99th  percentile have been examined



in  the course of these studies.  Figure 3 shows



the models used in the three main methods, super-



imposed on a hypothetical data histogram.



    If sufficient numbers of points are available,




nonparametric estimates of the 99th  percentile



can be calculated directly by looking at the



histogram.  These estimates  make no  parametric

-------
                      242
assumption about the shape of the distribution.



In particular, the specific estimator which was used in



work where we have sufficient data is the 50 percent



non-parametric tolerance estimator.   I have the



reference in the paper when it comes out.  This




requires at least 69 data points to be calculated.



  There is also another form of estimator known



as the tail-exponential estimator which makes a



parametric assumption about the upper tail



of the distribution.  That's the dotted curve up



there, and it assumes that beyond a certain



percentile, usually we take something like the 90th



percentile, that the tail of the distribution



falls off like an exponential distribution



(indicating).  Taking only the data in the tail



we can construct a smooth...we smooth that out



and use that to estimate the 99th percentile.



Again, this requires about 70 data points to be



an effective method of calculation.
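
A minimal sketch, in Python, of one common form of tail-exponential
estimator, assuming a 90th-percentile cutoff; the exact estimator used in
these studies may differ in detail.

    import math

    # Above the cutoff, the tail is modeled as exponential:
    #     P(X > x) = 0.10 * exp(-(x - q90) / theta)   for x > q90,
    # so the 99th percentile is q90 + theta * ln(10), with theta estimated by
    # the mean excess of the observations over the cutoff.
    def tail_exponential_p99(values, cutoff_fraction=0.90):
        ordered = sorted(values)
        q_cut = ordered[int(cutoff_fraction * len(ordered)) - 1]
        excesses = [x - q_cut for x in ordered if x > q_cut]
        theta = sum(excesses) / len(excesses)
        return q_cut + theta * math.log((1.0 - cutoff_fraction) / 0.01)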



   For cases with fewer data points, distributional

-------
                       243
models  are  necessary.   The  best  general



distributional model  that we have found



for low concentration pollutant  data  is the



log-normal distribution.  The log-normal



distribution is  the distribution of the




variable whose logarithm has a normal



distribution.  Log-normal distribution is



appropriate for  this type of data because



it does not assign any probability to




negative concentrations, and it has an



appropriate frequency distribution which




accords with the actual distributions



observed in the  sample data.



    The next figure shows a sample cumulative



distribution for an actual set of data along



with a fitted cumulative distribution of log-



normal.   As you can see, they fit each other very



well...I'm afraid that's a little light,  but



the jagged line  is the frequency distribution



of the actual data.  It's a rather  large data

-------
                       244
set in this case.  The  smooth line  is  the




fitted log-normal distribution (indicating).




So they do appear to  fit each other very well




and have the appropriate type of behavior.



To estimate the 99th percentile, the log-normal




distribution is fitted  to the data and then




we obtain the 99th percentile from  tables of the



fitted distribution.
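
A minimal sketch of that calculation in Python; whether the long-term mean
is taken as the fitted log-normal mean (as below) or as the arithmetic
sample mean is a convention of the particular study, so treat the details
as illustrative.

    import math
    from statistics import mean, stdev, NormalDist

    def lognormal_variability_factor(daily_concentrations):
        # Assumes all concentrations are positive; non-detects are the subject
        # of the delta log-normal model discussed later in this talk.
        logs = [math.log(c) for c in daily_concentrations]
        mu, sigma = mean(logs), stdev(logs)
        z99 = NormalDist().inv_cdf(0.99)              # about 2.326
        p99 = math.exp(mu + z99 * sigma)              # fitted 99th percentile
        long_term_mean = math.exp(mu + sigma ** 2 / 2.0)
        return p99 / long_term_mean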




    The concept of variability factor is also




applied to the situation of determining




limitations for average concentrations over




longer time periods.  For instance, for the




pharmaceutical and petroleum studies, variability



factors were calculated for averages over 30




consecutive measuring days.  The long-term



mean of 30-day averages is equal to that of the




daily values, but the averaging decreases the




variability in the resulting measure.




Therefore,  the appropriate variability factor




for 30-day average concentrations will be

-------
                       245
 smaller than  that  for daily concentrations.   If



 the  concentrations on each  day  were  completely



 independent,  the appropriate  formula for  a



 variability factor for these  averages  would  be



 as given  in the  first figure  there.   This comes



about through the central limit theorem of



statistics, which says that if we take X bar



here as the mean of the data which we are



investigating and S of X as the sample standard



deviation, and if we take averages of size 30



from a process with this mean and standard




deviation, they will tend to have the same mean



and a standard deviation which is reduced by a



factor of root 30.




    Actually,  in practice, we find that the



 concentration values  on successive days tend



 to be more similar  than that  which would  be



suggested by independence.  This is



presumably due to dependencies in the effluent



discharge from the plant and mixing and

-------
                       246
holding systems  in  the  treatment process.



     Figure 5 shows some sample graphs




of the autocorrelation  functions which we



calculate at...it doesn't...we can take



either half.  These were calculated on some




pharmaceutical data where we had long-term



data and we could calculate the autocorrelation



which is the correlation between values, a



particular fixed number of days apart.  So the



autocorrelation of lag one is the correlation



between concentrations one day apart, an



autocorrelation of lag 30 is the correlation



between values 30 days apart.  If we plot those



as a function of the lag going down,



correlations running between minus one and one,



we see that we have for each of these



situations we have positive autocorrelations



and they are positive and tail off, getting smaller and



smaller as we get a longer and longer lag.



Of course, we would expect as the distance

-------
                      247
between any  two measurements goes  towards



infinity, that the correlation between those



measurements would tend to go to zero.  The



effect of this is that the averaging process



on consecutive days reduces the variability,




but not by as much as would be suggested by



independence.



   Could we back up one slide; that's the one.



The calculated autocorrelation for lags up...



I guess we need lags 1 to 29 in order to



calculate a 30-day variability factor, then the




formula for the appropriate variability



factor is similar, but it has another term in



it which depends on the autocorrelation.  This



can be used to calculate appropriate variability



factors for 30-day averages for consecutive



days.   For 4-day averages such as have been



suggested for priority pollutant limitations,



the appropriate variability factor would be



what we have got on the bottom because those

-------
                      248
are being suggested for non-consecutive days



of measurement.  Also because when we look at



the priority pollutants we see less auto-



correlation than in the conventional pollutants.



This could be due to the effect of analysis



variability or just that priority pollutants



work differently, but our preliminary look



at priority pollutants shows that there is less



evidence of autocorrelation present.
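
The formula itself appears on the slide and is not reproduced in this
transcript; the Python sketch below only illustrates how autocorrelation
enters the variance of an n-day average, with a simple normal approximation
standing in for the actual variability-factor formula used in the studies.

    from statistics import NormalDist

    # Var(Xbar_n) = (s**2 / n) * (1 + 2 * sum_{k=1}^{n-1} (1 - k/n) * r_k),
    # where r_k is the lag-k autocorrelation; with all r_k = 0 this reduces to
    # the independent case (standard deviation shrinking by root n).
    def n_day_average_std(daily_std, autocorrelations, n):
        r = list(autocorrelations[:n - 1])
        r += [0.0] * (n - 1 - len(r))
        adjustment = 1.0 + 2.0 * sum((1.0 - (k + 1) / n) * r[k]
                                     for k in range(n - 1))
        return daily_std * (adjustment / n) ** 0.5

    def n_day_variability_factor(daily_mean, daily_std, autocorrelations, n=30):
        z99 = NormalDist().inv_cdf(0.99)
        sd_avg = n_day_average_std(daily_std, autocorrelations, n)
        return (daily_mean + z99 * sd_avg) / daily_mean

    # Positive autocorrelations make the 30-day factor larger than it would be
    # under independence, but still smaller than the daily factor.
    print(n_day_variability_factor(10.0, 4.0, [0.5, 0.3, 0.1] + [0.0] * 26))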



   So that's really where we're coming from on




objectives and how we would calculate these



numbers if data were perfect, perfect in the



sense of no reporting problems and no other



external considerations.  Data, of course, all



laboratory data always, of course, are expected



to have some variability in them.



    There are some special topics I would like




to mention.  On things that are particularly



applicable to priority pollutants and the way



that they effect our statistical analysis.  In

-------
                       249
 particular,  there  is  the  reporting and



 handling of  detection limit values.  When



 the concentration  of  the  sample is too



 small  to measure,  the laboratory will report



 not detected.  This is fine and very




 appropriate  as an  analytical tool, but just



 drives the statisticians  nuts because it



 is not a numerical value.  Somehow in order



 to do a calculation with  these values we



 have to come up with some sort of numerical



 value to use.  The first  cut on this would



 be to stick these values  in at a concentration



 of zero.   This is not bad, but it may



 under-estimate the concentration of the



 pollutant.



    So what we want to do is,  we would also



 like to explore the sensitivity of our analyses



 to the assignment of these values by also



assigning them to an upper value for the



concentration.  This works best if we know

-------
                      250
the detection limit for the methodology; and,



that's not always true in the data that we see.



If we calculate a statistic with the data



assigned at zero and then assigned to the



detection limit, we get a sensitivity type of




analysis which will tell us how much the



means, for instance, could change between



these two values.
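
A minimal Python sketch of that sensitivity check (the data are
illustrative):

    # Compute the mean once with non-detects set to zero and once with them
    # set to the detection limit, bounding how much the assignment matters.
    def mean_with_nondetects(measurements, nd_value):
        """measurements: list of (value, detected); non-detects get nd_value."""
        values = [v if detected else nd_value for v, detected in measurements]
        return sum(values) / len(values)

    data = [(12.0, True), (8.0, True), (None, False), (None, False)]
    detection_limit = 10.0
    low = mean_with_nondetects(data, 0.0)
    high = mean_with_nondetects(data, detection_limit)
    print(low, high)    # -> 5.0 10.0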



    There are also other more sophisticated



techniques for handling these quantities



that deal with the detection limit data as



missing information or censored data, not



in any pejorative sense; simply, that the



concentration would be known to be below a



certain level but would not be known



quantitatively further than that; and that



would be the kind of model.



    The appropriate handling of detection




limit data is a question that has to be



approached for each different technique,

-------
                       251
 statistical  technique  that  we  are  going



 to  use.   For the  calculation of variability,



 simply assigning  the values to a



 particular numerical concentration is not



 quite the right thing  to do.   We  have




 done a lot of work with what is called the



 Delta log-normal  model where we explicitly



 give these concentrations their own



 probability  mass  at zero and this  allows...



 and then we  model the data above the



 detection limit by a log-normal distribution.



 This seems to work fairly well.
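
A minimal sketch of the delta log-normal model in Python; the fitting
details in the actual studies may differ.

    import math
    from statistics import mean, stdev, NormalDist

    # Non-detects get their own probability mass at zero (delta = fraction not
    # detected); detected values are modeled as log-normal.
    def delta_lognormal_summary(detected_values, n_nondetect):
        n_total = len(detected_values) + n_nondetect
        delta = n_nondetect / n_total
        logs = [math.log(v) for v in detected_values]
        mu, sigma = mean(logs), stdev(logs)
        overall_mean = (1.0 - delta) * math.exp(mu + sigma ** 2 / 2.0)
        # Solve delta + (1 - delta) * Phi((ln q - mu) / sigma) = 0.99 for q
        # (valid as long as fewer than 99 percent of the values are non-detects).
        z = NormalDist().inv_cdf((0.99 - delta) / (1.0 - delta))
        p99 = math.exp(mu + z * sigma)
        return overall_mean, p99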



    For other types of considerations and



situations, it has to be a continuing factor in




 the statistician's mind as to how he is going



 to handle these detection limit values.   What



I would also like to say is that continued




attention by analytical chemists to the



definition of reporting of detection limits



will be an important step in clarifying  how

-------
                      252
these values should be used.  I have seen



some American Chemical Society publications



and some of the things that we talked



about in these conferences on clarifying



detection limits and defining them.  I




would like to suggest that that always can



be carried further in terms of standards



and reporting practices for all of the



laboratories.



    Another issue in the analysis of priority



pollutant data is inter-laboratory and



intra-laboratory variation.  The analysis



replication in the organic 5-plant study




allows an investigation of the sources of



variation in the concentration measurements



because the study includes multiple samples,



multiple laboratories analyzing each sample



and replicate analyses by laboratories on



at least a portion of the samples.  Using




statistical variance components estimation

-------
                       253
 techniques,  the  variability  in  these  samples



 can be broken down  into  four components.




    There is the  inter-sample variability which



 would be the natural variability of the true



 concentrations in each sample which we would




 see if there were no analytical errors.  This



 would be representative  of time or sampling



 variation in these  samples.  The second



 factor is the consistent inter-laboratory



 variability which is, if we take a set of



 samples and give them to a set of laboratories



 and look at the mean concentration that each



laboratory gives, each laboratory will vary



 slightly and the variation between laboratories



on that is another  factor that can come out.



This could be called the inter-laboratory



accuracy or lab bias.  The third factor is,




within-sample inter-laboratory variability



which has to do with the individual handling



of each sample by the laboratory;  and, if we

-------
                      254
took one sample and gave it to a bunch of



laboratories they would also vary.   This



could be thought of as the inter-laboratory



precision.  The fourth factor is the



intra-laboratory variability which would be




the variability between pairs of replicates



run at the same laboratory.  So this would



be the intra-laboratory precision.



    These four components were estimated in



the organic study for each pollutant for which



there was sufficient data and the model we



used was a slight modification on the ordinary



variance components model in that we applied




this to the log-normal model effectively



analyzing the logarithm to the concentrations



and what we end up with is a multiplicative



model rather than the ordinary additive model.



It seems to work fairly well with the log-



normal distribution and all of our  other



assumptions in  the analysis.
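
A minimal sketch, in Python, of a method-of-moments breakdown of this kind
for a balanced design (every sample analyzed by every laboratory, with the
same number of replicates); the actual analysis had to cope with unbalanced
data, so this is illustrative only.

    import numpy as np

    # Two-way random-effects ANOVA on the log scale (multiplicative model):
    # log concentration = overall + sample + lab + sample-by-lab + replicate.
    def variance_components(log_conc):
        """log_conc: array of shape (samples a, labs b, replicates n)."""
        a, b, n = log_conc.shape
        grand = log_conc.mean()
        sample_means = log_conc.mean(axis=(1, 2))
        lab_means = log_conc.mean(axis=(0, 2))
        cell_means = log_conc.mean(axis=2)

        ms_sample = b * n * ((sample_means - grand) ** 2).sum() / (a - 1)
        ms_lab = a * n * ((lab_means - grand) ** 2).sum() / (b - 1)
        inter = cell_means - sample_means[:, None] - lab_means[None, :] + grand
        ms_inter = n * (inter ** 2).sum() / ((a - 1) * (b - 1))
        ms_error = ((log_conc - cell_means[:, :, None]) ** 2).sum() / (a * b * (n - 1))

        return {
            "inter-sample (time/sampling)":    max(0.0, (ms_sample - ms_inter) / (b * n)),
            "consistent inter-lab (bias)":     max(0.0, (ms_lab - ms_inter) / (a * n)),
            "within-sample inter-lab":         max(0.0, (ms_inter - ms_error) / n),
            "intra-lab (replicate precision)": ms_error,
        }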

-------
                      255
    We did run into some problems with lots



of detection limit data on some of this data.



A lot of the effluent data was consistently



down to detection limit.  We can't really say



much about these sources of variability in




such cases.  That's why it's nice to have



studies like George's study with designed



levels of pollutant concentrations so you can



actually see what's going on there.   So these



kinds of factors can be quantified in cases



where we have sufficient data.



    The last issue that I wanted to mention was



spike sample analysis.  We had data in the



organic study for both priority pollutant



spiking and also surrogate chemicals, deuterated



or halogen substituted pollutants which were



added to the sample after the original analysis



and measured for their concentration.



    The last slide here shows...just gives the



ordinary formula for percent recovery and here



we have the spike...we take the raw sample

-------
                      256
concentration, C, the spike level as L and



the spike sample concentration as S; we



calculate the percent recovery this way.



For the surrogate chemicals, C would be fixed



at zero because we know these chemicals




would not be present in the sample and we can



calculate the recovery.
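
A minimal Python sketch of the recovery calculation (the numbers are
illustrative):

    # C = measured raw sample concentration, L = known spike level,
    # S = measured spiked sample concentration.  For the surrogate chemicals
    # C is taken as zero, since they are known not to be in the waste stream.
    def percent_recovery(spiked_result, spike_level, raw_result=0.0):
        return 100.0 * (spiked_result - raw_result) / spike_level

    # A single recovery is itself subject to analytical variation (both S and
    # C are measured), so recoveries are averaged over many samples.
    recoveries = [percent_recovery(s, l, c)
                  for s, l, c in [(45.0, 50.0, 0.0), (88.0, 100.0, 5.0),
                                  (60.0, 50.0, 12.0)]]
    print(sum(recoveries) / len(recoveries))    # -> about 89.7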



    Now, the important point here from a



statistician's point of view is that we can



assume fairly well that we know L because it



is a laboratory standard, but S and C are



both subject to analytical variation and,



therefore, for a single sample the estimate of



the recovery is subject to analytical



variation.  Therefore, when we calculate these



recoveries we like to take an average over



many samples in order to evaluate the overall



recovery of the method;  and, that's fine.



The other issue which comes up is the question



of correcting individual sample values.  We



decided that it was probably better not to do

-------
                       257
 that because  while  you  are  increasing  the



 accuracy, you are also  decreasing the  precision



 of the concentration measurement.  So  we




 decided it was better in this case not to do



 this on these samples.



    There was also  an additional consideration



 in that not all samples were spiked and so we



couldn't...we wanted to make sure we were doing



 everything the same on  all  samples.  So we



 evaluated that for  each method, for each



 chemical.  We evaluated recovery and that's



 part of our summary that we will be presenting



 to the agency.



    Hopefully, this has given an idea of some



of the types of techniques  and issues  in the



 statistical analysis of the data for effluent



guidelines, and I hope that the cooperation




between statisticians and analytical chemists



will continue.  I think it is



important in exploring all  of the facets of this



complex subject.   Thank you.

-------
                          257a
        Statistical Methods for Effluent Guidelines








                      Barrett P. Eynon




                     SRI International








I.  Introduction








The statistical analysis of industrial waste pollution data




is an important step in the determination of effluent water




quality guidelines. A number of different statistical




techniques are used to address the information needs of the




technical staff at EPA in setting limitations guidelines.




There is insufficient time today for a detailed discussion




of all of these methodologies; what will be aimed for in




this talk is a general overview of the circumstances,




methods, and objectives of some of these statistical




analyses.  SRI has been involved in the data analysis for




three major industrial categories over the past three years:




pharmaceutical manufacturing (1), petroleum refining (2)  ,




and organic chemicals manufacturing industries (3). This




work has been performed under the auspices of the EPA Office




of Analysis and Evaluation. The topics presented here are




drawn from our work on these projects, and are intended to




indicate some of the important concepts and methods in this

-------
                            257b
work.

II. Description of Data








The basic data used in these projects consists of




measurements of pollutant concentrations in water samples




taken at the treatment influent and effluent points, at a




set of representative plants from the industrial category in




question. The plants involved in the study are generally




voluntary participants from among the set of plants having




well-designed and operating treatment systems of the




appropriate type for the regulatory package. For instance,




of the set of 22 pharmaceutical plants which submitted data,




13 were judged to have well-designed and operating




biological treatment systems, and were designated as BAT/BCT




(Best Available Technology/Best Conventional Technology)




plants for the purposes of constructing regulations.  A




further subset of ten plants from among the 13 were




designated as NSPS (New Source Performance Standards) plants




for setting NSPS limits. The characterization of the plants




represents engineering and technical evaluations of the




plants by EPA.








The sampling and data handling for each study proceeds




through several stages, to insure high data quality. The




samples are usually taken by the plant according to a




predetermined sampling plan.   The sampling plan can be as

-------
                         257c
straightforward as one sample taken each day or each week at




each sampling point over the sampling period, or as




extensive as that shown in Figure 1, the sampling design for




the Organic Chemicals 5-Plant Study.

-------
                                 257d

        [Figure 1.  Sampling design for the Organic Chemicals 5-Plant Study.]
-------
                           257e
In this study,  at each of the participant plants,




approximately 30 days of sampling were performed.  On each




sampling day, one sample was taken at the pre-treatment and




the post-treatment sampling points.  Each sample was analyzed




by an EPA contract laboratory for a  specific set of priority




pollutants.  The pollutants analyzed  for were chosen from one




or more of the  analytical subsets of the organic priority




pollutants,  so  as to reduce the number of analyses needed on




each sample,  and to satisfy the confidentiality restrictions




of each of the  participant plants. Approximately one-fourth




of the samples  were also analysed by a Chemical




Manufacturers Association (CMA) contract laboratory, and the




participant plants were also encouraged to conduct their own




analyses of  the samples.








In order to  evaluate the accuracy and precision of the




priority pollutant measurements, several quality control




measures were included in the sampling design. Approximately




two-thirds of the samples were spiked with known




concentrations  of priority pollutants after their analysis,




then reanalyzed, in order to measure the percentage recovery




of the analytical methods. The remaining one-third of the




samples were analyzed in duplicate,   in order to measure




laboratory precision. Samples were also spiked with known




amounts of "surrogate" chemicals known not to be present in




the waste stream, in order to aid in measuring the recovery




of the analytical methods. Measurements were also made on

-------
                          257f
blank samples of distilled water, some of which were shipped




with the waste samples to check for contamination.








Upon receipt of the laboratory reports on the chemical




analyses, the data  are coded and entered into a computer




data base for processing.  An appropriate data base




structure is set up to incorporate the elements of  the study




design.  In our work at SRI, we have found the Statistical




Analysis System (SAS) computer package (4),  which is




available on EPA's NCC-IBM system, to be the most effective




system for flexible and efficient data processing,  because




it both  provides data management and reporting facilities,




and has  the capabilities for sophisticated statistical




analyses.








Once the data is stored in the computer, data listings and




plots can be generated. These are checked for unusual or




extreme  values, which may indicate coding or transcription




errors.  These values are reviewed with the laboratory




reports  and with the laboratory to correct any errors. In




addition, concentrations which are confirmed by the




laboratory, but which are attributable to known plant




treatment upsets,  or deemed to show variation beyond that




associated with well-operated treatment systems, can be




removed  from the analysis, in order to focus on the behavior




of well-operating  systems.  Figure 2 shows plots of the




Effluent Total Suspended Solids (EFTSS) concentration versus

-------
                           257g
time for one of the plants in the pharmaceutical data base,




before and after removal of an extreme value.  Note that the




plots of the data after correction have been rescaled, and




the data now exhibits much more homogeneous behavior.

-------
                                257h

        [Figure 2.  Plot of Effluent Total Suspended Solids (EFTSS, mg/L)
        versus time for pharmaceutical Plant 12097, before and after
        correction of an outlier; the corrected plot is rescaled.]

-------
                         2571
III. Statistical Analysis








 After the data set is cleaned and checked,  statistical




analyses are performed. There are two major  quantities of




interest in all of these studies. The first  is to measure




the average concentration of each pollutant in the




wastewater of each plant, before and after treatment.  This




can be directly estimated from the data,  using the




arithmetic average of the measured concentrations for  each




sample. If the set of plants for which data  is available is




deemed to be representative of the set of all plants with




well-operated treatment systems in the industry,  then  the




average pollutant concentrations for the industry can  be




estimated by taking the average across plants of  the average




concentrations for each plant. In other situations,  such as




the organics study, where there are only a few plants, each




analyzed for a different set of pollutants,  a case-by-case




analysis can be prepared for each plant,  focusing on the




pollutants found to be present in large concentrations in




the effluent streams of specific plants.

-------
                          257 j
If both influent and effluent data are available for a




pollutant at a plant, the percentage reduction of the




pollutant can be calculated by:








            percent reduction  =  100 x  (influent concentration - effluent concentration)
                                          -----------------------------------------------
                                                     influent concentration
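
A minimal sketch of this calculation (illustrative only; the Python form and
the function name are assumptions, not part of the original analysis):

    def percent_reduction(influent, effluent):
        """Percentage reduction of a pollutant across treatment.

        Assumes both concentrations are in the same units and the
        influent concentration is greater than zero.
        """
        return 100.0 * (influent - effluent) / influent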








The second main quantity of interest is a characterization




of the day-to-day variability in the concentrations of a




pollutant in a waste stream. For regulatory purposes, the




quantity of interest, known as the variability factor, is




defined to be the 99th percentile of the distribution of




daily concentrations, divided by their long term mean. This




quantity, (which is similar in concept to the usual




coefficient of variation,  the ratio of the standard




deviation to the mean),  is found to be a reasonably stable




measure of the amount of day-to-day variation of a




pollutant, independent of  overall level of the pollutant




concentration.  If the appropriate variability factor for a




pollutant is determined, then it can be multiplied by a




designated long-term plant mean concentration for that




pollutant at that plant, such that if the plant is




discharging overall at the designated long-term mean




concentration, then the  rate of exceedance of the limitation




will  be 1 day in 100.  Long-term mean effluent




concentrations above the designated mean level will show an

-------
                           257k
exceedance rate in excess of 1  in 100.








In order to calculate the variability factor from a set of




data, an estimate of the 99th percentile of the distribution




of daily values is necessary. This is a more complex problem




than the estimation of the mean concentrations, since the




data at hand often only consist of 30-50 points, or less.




Several statistical methods of  estimating the 99th




percentile have been examined in the course of these




studies.  Figure 3 shows the models used in the three main




methods, superimposed on a hypothetical data histogram.

-------
257 1
                                     1
                                     cs
                                        J
                                     1

-------
                         257m
If sufficient numbers of points are available, nonparametric




estimates of the 99th percentile can be calculated.   These




make no parametric assumption about the shape of the




distribution. In particular, the 50% nonparametric tolerance




estimator (5, pp 40-43), is a useful estimator, but  requires




at least 69 data points to be calculated.   Also, tail-




exponential estimators (6), which make assumptions about only




the shape of the upper tail of the distribution, have been




found to be effective, but require about 70 points to be




effectively calculated.  In cases with fewer data points




available,  distributional  models are necessary. The  best




general distributional model we have found for low




concentration pollutant data is the lognormal distribution.




The lognormal distribution is the distribution of a  variable




whose logarithm has a normal, or Gaussian distribution. The




lognormal distribution is  appropriate for this type  of data,




because it does not assign any probability to negative




concentrations, and it has a skewed frequency distribution,




which accords with the actual distributions observed in the




sample data.  Figure 4 shows a sample cumulative




distribution, along with the cumulative distribution of a




fitted lognormal .  To estimate the 99th percentile, the




lognormal distribution is  fitted to the data, and the 99th




percentile of the  fitted distribution, obtained  from tables




of the lognormal distribution, is used to calculate the




variability  factor.
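
A rough sketch of this fitting step, assuming strictly positive daily
concentrations and a simple fit on the log scale (the Python/NumPy form is
illustrative only; the original work used SAS and lognormal tables):

    import numpy as np
    from scipy.stats import norm

    def daily_variability_factor(daily_conc):
        """Estimated 99th percentile of a fitted lognormal, divided by the
        long-term (arithmetic) mean of the daily concentrations."""
        x = np.asarray(daily_conc, dtype=float)
        logs = np.log(x)
        mu, sigma = logs.mean(), logs.std(ddof=1)
        p99 = np.exp(mu + norm.ppf(0.99) * sigma)   # 99th percentile of the fit
        return p99 / x.mean()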

-------
                                                        257n

        [Figure 4.  Sample cumulative distribution of effluent concentrations,
        with the cumulative distribution of a fitted lognormal.]
-------
                         257o
The concept of the variability factor is also applied to the




determination of limitations for average concentrations over




longer time periods.  For instance,  for the pharmaceutical




and petroleum studies,  variability factors were calculated




for averages over 30  consecutive measuring days.  The long-




term mean of 30-day averages is equal to that of  the daily




values, but the averaging will decrease the variability of




the result.  Therefore the appropriate variability factor




for 30-day average concentrations will be smaller than that




for daily concentrations.  If the concentrations  on each day




were completely independent, the appropriate formula for the




variability factor would be

        ( X + 2.326 * S / sqrt(30) ) / X ,

(2.326 being the 99th percentile of the standard normal distribution)





where X and S are the sample mean and standard deviation of



the daily concentrations. (The numerator is the 99th



percentile of the asymptotic distribution of 30-day averages



according to the Central Limit Theorem of statistics).



However, examination of the data reveals that concentration



values on successive days tend to be similar, presumably due



to dependencies in the effluent discharge from the plant,



and mixing and holding systems in the treatment process.




Figure 5 shows sample graphs of autocorrelation functions



for effluent Biological Oxygen Demand (BOD) and Total




Suspended Solids (TSS), measured in concentration and mass




discharge units, at a representative pharmaceutical plant.

-------
                        257p
The autocorrelation a(l), l = 1,...,30, represents the




correlation between concentration values measured l days




apart. Note that, in the figure, the calculated




autocorrelations are all  positive and decrease with




increasing time lag, which is consistent with the physical




model proposed above. If a(l) is calculated for a plant, then




the appropriate formula for the variability factor for




30-day averages is

        ( X + 2.326 * S * sqrt( (1/30) * [ 1 + 2 * SUM from l = 1 to 29 of (1 - l/30) * a(l) ] ) ) / X .





See, for instance, Switzer (7).








For 4-day averages, as are under consideration for priority



pollutant limitations, the appropriate variability factor



would be

        ( X + 2.326 * S / sqrt(4) ) / X .

No autocorrelation correction would be used, because the




sets of four days are not consecutive. In addition,




preliminary analysis of priority pollutant data shows much




less autocorrelation than the conventional pollutants.
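
A sketch of these averaged-concentration variability factors, assuming the
standard variance formula for the mean of n equally spaced observations of a
stationary series (function names and the Python form are illustrative only):

    import numpy as np
    from scipy.stats import norm

    def sample_autocorrelation(x, max_lag):
        """Sample autocorrelations a(l), l = 1..max_lag, of a daily series."""
        x = np.asarray(x, dtype=float) - np.mean(x)
        denom = np.sum(x * x)
        return np.array([np.sum(x[:-l] * x[l:]) / denom
                         for l in range(1, max_lag + 1)])

    def averaged_variability_factor(daily, n_days, use_autocorrelation=True):
        """Variability factor for n-day average concentrations.

        Without autocorrelation the variance of the n-day mean is S**2 / n;
        with it, the correction 1 + 2 * sum((1 - l/n) * a(l)) is applied.
        """
        x = np.asarray(daily, dtype=float)
        mean, s = x.mean(), x.std(ddof=1)
        factor = 1.0
        if use_autocorrelation:
            a = sample_autocorrelation(x, n_days - 1)
            lags = np.arange(1, n_days)
            factor += 2.0 * np.sum((1.0 - lags / n_days) * a)
        sd_of_average = s * np.sqrt(factor / n_days)
        return (mean + norm.ppf(0.99) * sd_of_average) / mean

For the 4-day averages described above, use_autocorrelation=False reproduces
the uncorrected formula.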

-------
                                        257q

        [Figure 5.  Estimated autocorrelation functions (lags 1 to 30) for
        effluent BOD and TSS, in mg/L and in lb/day, at a representative
        pharmaceutical plant.]

-------
                        257r
IV. Exceptions and Special Topics








The objective of calculating means and variability factors




can be accomplished as described above, for any set of




standard data. However, in many cases, there are side issues




and complications which affect the data analysis.  Some of




them are particularly prevalent in the analysis of priority




pollutant data, due to the necessity of measuring




concentrations very near the limits of the measurement




technique.








One issue in particular is the reporting and handling of




detection limit values. When the concentration in a sample




is too small to measure, the concentration is reported as




"not detected" (ND). This is fine as a descriptive




statement, but causes problems for the statistician, because




statistical procedures must work with numerical




concentration values, and these measurements still reflect




valid samples and must be accounted for in calculations. For




even the usually straightforward process of calculating mean




concentrations, the handling of these values can be




approached  in several ways.  Using a zero concentration is a




reasonable  first approximation, but may understate the




actual average concentration.  If the analytical detection




limit for the particular method was supplied by the




laboratory,  or can be assumed  to be known,  a sensitivity




analysis can be performed by also calculating statistics

-------
                       257s
with ND values set to the detection limit, giving an upper




and lower limit to the "true" value.  Compromise solutions,




with ND values set to 1/2 the detection limit are also often




used.  However, all of these methods produce a somewhat




distorted estimate of variability, because all of the




detection limit values are being placed at the same point.




In our work, we have made extensive use of an extension of




the lognormal distribution, known as the delta-lognormal




distribution, in which the detection  limit data are placed




in a separate probability spike at zero, or the detection




limit value, and the concentrations above the detection




limit are modeled with the lognormal  distribution.  This




allows calculation of appropriate variability factors.
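
A sketch of the substitution sensitivity analysis and of the delta-lognormal
mean, assuming the non-detects are flagged separately and the probability
spike is placed at zero (names and the Python form are illustrative; the
bias-corrected estimators used in practice carry additional terms):

    import numpy as np

    def mean_with_nd_substitution(detected, n_nd, nd_value=0.0):
        """Mean concentration with each non-detect replaced by a fixed value
        (0, DL/2, or DL) for a simple sensitivity analysis."""
        values = np.concatenate([np.asarray(detected, dtype=float),
                                 np.full(n_nd, nd_value)])
        return values.mean()

    def delta_lognormal_mean(detected, n_nd):
        """Plug-in mean under a delta-lognormal model: a spike at zero for
        the non-detects plus a lognormal fitted to the detected values."""
        delta = n_nd / (len(detected) + n_nd)      # estimated P(non-detect)
        logs = np.log(np.asarray(detected, dtype=float))
        mu, sigma = logs.mean(), logs.std(ddof=1)
        return (1.0 - delta) * np.exp(mu + 0.5 * sigma**2)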








However, the appropriate statistical  handling of detection




limit data has to be addressed for each statistical




technique.  Continued attention by analytical chemists to




the definition and reporting of detection limits, and




standardization of reporting formats  and notation would be




of great aid in this task. Various recent American Chemical




Society papers and talks have addressed these questions




(8,9), but more needs to be done to implement standards in




practice.








Another issue in the analysis of priority pollutant data  is




inter- and intra-laboratory variation. The analysis




replication in the organics 5-plant study allows an

-------
                        257t
investigation into the sources of variation in the




concentration measurements, because the study includes




multiple samples, multiple laboratories analyzing each




sample, and replicate analyses by laboratories on some




samples. Using statistical variance components techniques,




the variability can be broken down into four components:








     - Inter-sample variability. Variation in the true




       concentration of each sample (time and sampling




       variation).








     - Consistent inter-laboratory variability.  Variation




       between the average concentrations measurements from




       each laboratory (laboratory bias, or inter-laboratory




       accuracy).








     - Within-sample inter laboratory variability. Variation




       between laboratories in the analysis of each sample.




       (inter-laboratory precision).








     - Intra-1aboratory variability. Variation between




       repeated measurements at the same laboratory (intra-




       laboratory precision).
For the organics study, these analyses were performed in




terms of a multiplicative effects model consistent with the

-------
                           257u
lognormal distribution.  These analyses were done for each




pollutant, at each sampling point at each plant, for all




situations  having sufficient data above the detection limit.
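
As an illustration only (the study's analyses were carried out in SAS, and a
proper variance-components fit would use a mixed-effects model), the raw
log-scale summaries behind such a breakdown might be tabulated as follows;
the column names 'sample', 'lab', and 'conc' are hypothetical:

    import numpy as np
    import pandas as pd

    def log_scale_summaries(df):
        """Descriptive log-scale spreads corresponding to the four components;
        these are raw summaries, not bias-corrected variance components."""
        d = df.assign(log_conc=np.log(df["conc"]))
        cell = d.groupby(["sample", "lab"])["log_conc"]
        return {
            # spread of sample means: inter-sample (time/sampling) variation
            "between_sample_means": d.groupby("sample")["log_conc"].mean().var(ddof=1),
            # spread of laboratory means: consistent inter-laboratory bias
            "between_lab_means": d.groupby("lab")["log_conc"].mean().var(ddof=1),
            # spread among labs within a sample: inter-laboratory precision
            "between_labs_within_sample":
                cell.mean().groupby(level="sample").var(ddof=1).mean(),
            # spread among replicates in one lab/sample: intra-laboratory precision
            "within_lab_replicates": cell.var(ddof=1).mean(),
        }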








The final issue I will mention is that of spiked sample




analyses.  The organics  study included both priority




pollutant and surrogate  chemical spiking of samples. The




calculated percent recovery for a sample is:

        percent recovery  =  100 x (S - C) / L ,

where C is the raw concentration measurement, L is the spike




level, and S is the spiked concentration measurement; for




surrogate chemicals, C is zero.  These quantities can be




computed, and then averaged across samples, to give a




measure of the average recovery for each chemical by each




method.
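
A minimal sketch of this computation (illustrative only; the function names
and Python form are assumptions):

    import numpy as np

    def percent_recovery(raw_conc, spike_level, spiked_conc):
        """Percent recovery for one sample, 100 * (S - C) / L; for surrogate
        chemicals the raw concentration C is taken as zero."""
        return 100.0 * (spiked_conc - raw_conc) / spike_level

    def average_recovery(triples):
        """Average recovery for one chemical and method over many samples,
        since any single recovery is subject to analytical variation."""
        return float(np.mean([percent_recovery(c, l, s) for c, l, s in triples]))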








Some consideration was given to the correction of individual




sample measurements according to the measured recovery in




that sample (10). While this method generally increases the




accuracy of the measurements, it also decreases the




precision of the measurements substantially. Because of




this, and because not all samples were spiked in the study,




it was decided to do all statistical analyses on the




uncorrected data.

-------
                           257v
V. Conclusions








Hopefully, this has given an idea of the types of techniques




and issues in the statistical analysis of data for effluent




guidelines. Continued cooperation between the statistician




and the analytical chemist is important in exploring all




aspects of this complex subject.

-------
                         257w
                         References

 1.  Eynon, B. P., Javitz, H. S., Valdes, A. J., Skurnick, J. H.,
     Gofer, R. L., Maxwell, C., and Rollin, J. D., "Pharmaceutical
     Effluent Data Analysis: Long-Term Pollutant Data Analysis,"
     Final Report on Task 1, EPA Contract 68-01-6062, for EPA
     Office of Water Regulations and Standards, SRI International
     (1982).

 2.  Eynon, B. P., Valdes, A. J., Maxwell, C., Walter, L., and Rollin,
     J. D., "Petroleum Refining: Self Monitoring Data Analysis,"
     Final Report on Task 6, EPA Contract 68-01-6062, for EPA
     Office of Water Regulations and Standards, SRI International
     (1982).

 3.  Eynon, B. P., "Organic Chemicals Manufacturing: EPA/CMA Long
     Term Study," Final Report on Task 7, EPA Contract 68-01-6062,
     for EPA Office of Water Regulations and Standards, SRI
     International (in preparation).

 4.  SAS Institute, SAS User's Guide, SAS Institute,
     Raleigh, NC (1979).

 5.  Gibbons, J. D., Nonparametric Statistical Inference, McGraw-Hill,
     New York (1971).

 6.  Breiman, L., et al., "Statistical Analysis and Interpretation
     of Peak Air Pollution Measurements," Final Report by
     Technology Service Corporation to Thomas Curan, U.S.
     Environmental Protection Agency, MD-14, Research Triangle
     Park, North Carolina (1978).

 7.  Switzer, P., "Variances and Autocorrelations for Time-Averaged
     Autocorrelations," SIMS Working Paper No. 18, Stanford
     University (1981).

 8.  Crummett, W. C., et al., "Guidelines for Data Acquisition
     and Data Quality Evaluation in Environmental Chemistry,"
     ACS Committee on Environmental Quality, Subcommittee
     on Environmental Analytical Chemistry (Final Draft, 1980).

 9.  Kagel, R. O., "Validation and Priority Pollutant Analysis,"
     Invited Plenary Address, American Chemical Society National
     Meeting, Division on Environmental Chemistry (1980).

10.  Eynon, B. P., "Percentage Recovery Information in Organics
     Long Term Data," Memorandum to R. Roegner, et al., US EPA,
     August 20, 1982.

-------
                      258
             QUESTIONS AND ANSWERS








                MR. MADDALONE:   Ray Maddalone,




TRW.  I actually have a comment and a question.



                MR. EYNON:  Sure.



                MR. MADDALONE:   We run into




the problem of detection limits and the problem



of trying to determine what value to put into




the data base, and some of the reports have a not-



detected value.  I don't think there are any



really good solutions when you  don't have a




base of data to go back on.



    My question is, when you don't have a base




of data, a historical base of data just a single



number without a detection limit reported,



what do you recommend?  Is it a zero, is it one-



half of the value where the report has a less-



than or a non-detected?



                MR. EYNON:  I'll tell you what

-------
                       259
we did  in  the organics data base and  that  is,



we stored  the number as a zero, but we also



kept a comment field associated with each entry.



Part of that comment field was the fact that



this was really a non-detected value.  Now,




the only way you could actually get a



zero concentration was to have a not-



detect or a "less than 10" or something like that



as the statement by the laboratory; but, we



kept the detection limit, if it was present.




We knew the fact that these were not-detected



values, and there is no easy answer to this



question.  I don't think there is any single



numerical value which is the correct value to



put in.   If we knew the correct value we



would have detected, you know, we wouldn't be



worried  about this detection question.  I



think that what has to be done is,  you have to



do some  exploratory statistics on the effect



of these values on your overall statistical

-------
                      260
analysis.
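
A sketch of the kind of record this implies, with the reported value stored
as zero alongside a flag, any reported detection limit, and the comment
field (the field names here are hypothetical, not from the actual data base):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ConcentrationRecord:
        """One reported measurement, keeping the not-detected fact and the
        detection limit (when supplied) instead of discarding them."""
        value: float                       # stored as 0.0 when not detected
        not_detected: bool = False
        detection_limit: Optional[float] = None
        comment: str = ""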



                MR. MADDALONE:  What about less




than numbers?  I asked, actually, a two part



question.  Less than, you have a number and that




really can't...



                MR. EYNON:  Well, let's say you




have a number;  I found that the difference



between saying  less than 10 and saying not



detected seems to be more of a phenomenon of the



laboratory reporting criteria than anything




else which is really going on with the data.



So I think it would be unfair to treat those




in any particularly different way.



   It would be nice to always have the detection




limit; and, if you have a not detect and you



have no other information about the upper limit



and you have to calculate a mean, I don't see



that you can have any argument for anything



other than a zero value to put in.  You



simply have to caveat your result and say, look,

-------
                      261
this is all we could do, we have no further



information about this value, we're going to



have to stick it into zero.  The thing you can't



do is, you can't ignore that value because you



are told something positive, although not exact




about the concentration.  You can't leave it



out of the calculation, you can't ignore it, you



can't put it as missing; so, you know, if it's a



total lack of any sort of information about the



level of that pollutant, or about the detection




limit, I would say, yes, you would have to put in a



zero.



                MR. MADDALONE:  My comment is,




that as part of the effort that we did, we



reviewed the various definitions of limit of



detection.  I think one definition that has been



grossly overlooked by agencies and the people



using or setting definitions for the limit of



detection is the ACS guidelines that



were published in Analytical Chemistry in 1980;

-------
                      262
it was Volume 52, page 2252.  Those are



extremely good because they set three



different levels.  They chose a three  sigma



detection limit and they said if the values



are less than your three sigma detection




limit you report them as not detected  with



the LOD.  Now, that would solve part  of



the problem that you get with these not




detected values.



    Then if you have a value above your three




sigma you report it as a real value,  but you




also, again, put the limit of detection in



parentheses so you know where you are in



relationship to that.  I think that some sort



of consistent use of a definition ought to



be assigned; and, then, some explanation of



what the risk is associated with that



definition, because it really varies with



whether you consider false positive or false




negative errors.

-------
                      263
                MR. EYNON:  I couldn't agree
with you more.  In fact, I think I'm talking
about the...when I mentioned ACS I think I'm
thinking of the exact same paper you are
talking about.  So, yes, I think that that's
an important question to standardize the
laboratory reporting practices on these issues.
                MR. DELLINGER:  Bob Dellinger,
Effluent Guidelines Division.  My question is on
the use of autocorrelation in establishing 30-
day variability factors.  I was wondering if you
checked your 30-day, 99th percentile estimates
on the data sets from which they were derived
to see if they were good estimators of the 99th
percentile?
                MR. EYNON:  We have in cases
where that's possible.  Sometimes we can't
because we don't have complete sample
information on every day.  Therefore, we cannot
construct 30-day running averages from the

-------
                       264
 daily  data  that  we  are given.   There  are



 many cases  in  which we could estimate the



 autocorrelation  function and come up  with



 the variability  factor without being able



 to go  back  and check that.




    However, there  are a few cases that we




 have,  we have  looked at some other smaller



 data sets on...I think we looked at the



 leather tanning  and  the iron industry and



 the cases we have checked out, yes, there was



 much better agreement than the central limit




 theorem value would give; much closer agreement



 with the actual empirical values.



    I mean, if we had years and years and



 years of data we could even think about



 constructing the 30 days, then actually



using some sort of non-parametric estimate



or model the 30-day averages directly,



 but that's well beyond the scope of any



data set we have.

-------
                      265
                MR.  DELLINGER:  Okay, because



we have checked the central limit theorem as a



predictor with biological treatment and it is



not very good at all.  We get something like



20 percent of the values from the data set from



which the number was derived at higher than



the 99th percentile.




                MR.  EYNON:   I'm not surprised.



The correction for autocorrelation can actually



make a noticeable difference in the variability




factor that you would get and, indeed, they



have used the central limit theory that assumes



independence; actually central limit theory



underlies both these.  The ordinary central



limit theory which is the independent one would,



indeed, give too small a variability factor.



If you went back and checked it, you would get



exactly what you're saying; which is, you would



get too many exceedances even in the data set



from which you calculated it.
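
A back-check of that kind might look like this rough sketch (illustrative
only), applied either to the daily values or to a series of 30-day averages:

    import numpy as np

    def exceedance_rate(values, variability_factor):
        """Fraction of values above VF times the long-term mean; an adequate
        variability factor should give a rate of roughly 0.01."""
        x = np.asarray(values, dtype=float)
        return float(np.mean(x > variability_factor * x.mean()))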

-------
                      266
                MR. DELLINGER:  Now, we have




used things like  taking the 30-day averages.




Let's say we had 12 or 14 or 16 30-day averages




on a set.



                MR. EYNON:  That's a different




question because  there if you are calculating




your 30 day averages on less than 30 days data,




you are also going to get a larger variation




because...




                MR. DELLINGER:  No, these would




be straight 30 day values.



                MR. EYNON:  See, I mean, if you




only measure 12 days out of the 30.




                MR. DELLINGER:  No, that's not




what I am...what I am saying is, we have used...




let's say we have had 12 sets  of 30 day



averages.




                MR. EYNON:  Right, okay, that




gives you 12 numbers.



                MR. DELLINGER:  That's right

-------
                      267
and we have checked for...



                MR. EYNON:  Just enough to look




at, to examine how many are exceedances.



                MR. DELLINGER:  And then we have




just checked it using parametric procedures and




established variability factors that way.



                MR. EYNON:  You could do that




also, although I think that this method will be



stronger because you are using more of the



information in the data to actually calculate



the autocorrelation.



                MR. DELLINGER:  You are using




each individual data point as opposed to using..



                MR. EYNON:  Right, rather than




combining each...



                MR. DELLINGER:  ...30-day




average.



                MR. EYNON:  That's a tough




question; I think that's true.  Yes, we have



done...we have our program that does this.

-------
                      268
I have been working...if you would like to



catch me later and you have your data set on



the PA System, I can talk to you about maybe



running our stuff through it, if you are interested in



seeing 30-day numbers based on it;



it's not too hard to do.



                MR. DELLINGER:   Sure.



                MR. TELLIARD:   Anyone else?



    Thank you, Barry.




    Our next speaker is from Battelle, Columbus.



Jim Brasch is going to talk about something that



we haven't utilized too much in this program,



but we have skirted it; that is, the utilization



of GC/FT-IR as it relates to priority




pollutant analysis.

-------
                      269
    GC/FT-IR and GC/MS:  WHICH, WHEN and WHY?

                   Jim Brasch
         Battelle's Columbus Laboratory
                MR. BRASCH:  Have you ever heard

the expression, as confused as the little farm

boy who dropped his chewing gum on the floor of

the chicken house and didn't know which one to

pick up?

   I know why I am here; why I, personally, am

here.  It's because Dale Rushneck called

Battelle and he started talking to people and

filtered down through the hierarchy.  By the

time he got down to my level to talk to me, I

had been told that I would give a talk on GC/FT-

IR if he asked me to.  I did respond positively

to his request.  Let me assure you, if I had

had any idea how big he was when  I was talking

to him, I would have responded much faster.

  Now, what continued to puzzle me was, why one

GC/FT-IR talk in a GC/MS Symposium?  Those

-------
                      270
of you attending the Hershey meeting in 1981



saw the same phenomenon; one GC/FT-IR



talk in a GC Mass Spec Symposium.  It was only




last night after an exquisite meal and a



delightful glass of wine that it became smash-




ingly clear to me; mass spec ceremonies



require the periodic sacrifice of a pristine



virgin.  Obviously, these qualities require they



go outside the mass spec community for their



victim.  I am complimented by Battelle's



recognition that I have these qualities.  This



is mitigated by the fact that they also,



obviously, consider me totally expendable.



   Nevertheless, I am here and I want to give



you a state of the art status report of GC/FT-IR



stressing its complementary nature with GC/MS.



How do you do GC/FT-IR?  You can obtain one of



the earlier generations of the system, such as



the DIGILAB instrument, shown in Figure 1, first



produced three or four years ago.  You can get

-------
                      271
one of the later generation, shown in Figure



2, again, a DIGILAB system which is much more



cosmetically nice and is configured so the



instrument is free for normal operation.   All



of the major manufacturers produce these now;



Nicolet, IBM and Analect.  Beckman and Bomem



are very hard at work on their systems.



    You can also do like we did at Battelle




where we are faced with two problems; one, the




equipment is expensive and sometimes we  can't



buy it; secondly, we are concerned only  with



selling the output, we don't have to sell



the instrument.  So we are somewhat less



concerned with aesthetics and cosmetics  and



you can do as in Figure 3, which shows our



interface, the chromatograph and our instrument.



What else is required?  Nothing particularly



profound as diagramed in Figure 4.  What you



do is just take the infrared beam from your

-------
                      272
instrument through the light pipe through which



the GC effluent is going.  Take the output from



an MCT detector and you can get the spectrum



that way; nothing particularly profound.  There



is a little technology in the light pipe, but



it is also not difficult as shown in Figure 5.



You just have some way to get the effluent in,



traverse it down the pipe and back out.  For



mid infrared spectroscopy, one generally uses



KBr windows.  There is a little technology



effecting a good seal at the windows and the



transfer lines so that you don't lose the sam-



ple.  You also need to heat the light pipe.



This can be done relatively simply as I'll show



you in just a moment, but the only other require-



ment then is some technology in the light pipe



coating.  I really hesitate to use the word tech-



nology; it's absolutely a black art.  We make



our own light pipes.  They are simply precision



bore glass tubes that we put a gold coating on

-------
                      273
ourselves.  Most of the manufacturers are doing



the same thing or else they have a sole source




of supply.  Nothing really profound.  It is just



difficult to get a really good gold coating on



it.



  The only other problem then is, how do you



heat it?  Again, in our system we enclose it



with a very simple aluminum block as shown in



Figure 6.  This is the end of the light pipe



here; ours is only about four inches long.  So




this is relatively compact.  We have a heated



transfer line here through which we are actually



bringing the fused capillary from the GC, rout-



ing it over to here so you can get the effluents



into the light pipe, traverse it down here, it



comes right back through the heated transfer



line back over to the FID of the GC.  So we get



infrared data, and FID traces after it has been



through the light pipe; really rather simple.



   What do you do with it then and why would I

-------
                      274
want to give you a comparison to mass spec?



(Figure 7).   I want to compare the information



content, the speed and ease of operation, the



sensitivity, and the chromatographic resolution.



I can actually dismiss the last one because




when that slide was made the first approaches



to doing capillary column work were being made.



There was considerable necessity to justify all



of the additional work and complexities for the



capillary work to show that you did get an



improvement in data that was worth the extra



trouble.



     Now, with the capillary ability, the



chromatography is the same.  So that has



become quite irrelevant.  The other three I do



want to talk about some more.  I will demonstrate



this to you by using what I will define only as



a "hazardous waste sample." What happened with



this was that in our laboratory we did GC/FT-IR



on the sample using a packed column.

-------
                      275
We also did the capillary column work and



this is where we first demonstrated that the



additional difficulty was well worth it.



A second laboratory was also running




capillary GC/FT-IR.  At Battelle we also



were doing the mass spec on it.  Another



laboratory, completely independent of all



of these was also doing mass spec on it.



So we had an excellent cross-check here;



from two laboratories using mass spec




and giving, for all practical purposes,



absolutely identical results, and two



different laboratories using GC/FT-IR and,



while they were not absolutely identical



because they did not use exactly the same



column, there was no difficulty in correlating



the two and seeing that they reproduced  each



other extremely well.



    So we had a very nice cross-check here,



not only of the two different techniques,

-------
                      276
but  the validity of the technique in our labora-




tory, and outside of our laboratory.  What do



you  get, then?  As I mentioned, as the sample



traverses the light pipe, in addition to trans-



forming the data and producing a spectrum every



second, if there is any absorption above the



baseline, the computer also takes a point and



stores it to reconstruct a gas chromatogram;



a Gram-Schmidt reconstructed gas chromatogram.



Then, the effluent goes on to the FID.   So, as



shown in Figure 8, we have an FID trace where



we can check the chromatographic resolution and



make sure we haven't degraded that.   We also



have a reconstructed gas chromatogram based on



the  infrared data so we can correlate those.  I



don't know if it is apparent from that slide,



but  there is very nice correlation there.  You



have no trouble whatsoever in correlating a



GC/FID peak with its corresponding infrared

-------
                      277
peak in the data bank.
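
   A rough sketch of that Gram-Schmidt reconstruction idea, assuming the
successive interferograms are available as rows of an array (the segment
choice, basis size, and NumPy form here are illustrative; the instrument
software performs this in real time):

    import numpy as np

    def gram_schmidt_chromatogram(scans, n_basis=10, segment=slice(50, 150)):
        """Reconstruct a GC/FT-IR chromatogram from successive interferograms.

        scans   : 2-D array, one interferogram per row, in elution order
        n_basis : number of early, analyte-free scans used as the basis
        segment : portion of each interferogram used, away from the centerburst
        """
        seg = np.asarray(scans, dtype=float)[:, segment]
        # orthonormal basis spanning the analyte-free background scans
        basis, _ = np.linalg.qr(seg[:n_basis].T)
        signal = []
        for scan in seg:
            residual = scan - basis @ (basis.T @ scan)   # remove background part
            signal.append(np.linalg.norm(residual))      # response of eluting analyte
        return np.array(signal)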



   How does that stack up with the mass spec




data?  Well, Figure 9 shows the RGC from mass



spec (a total ion count RGC) and what you just



saw, the RGC from the infrared.  Now, there



are differences here, but there are also great



similarities.  The differences are sensitivity



differences and I can point out instances



where, for example, mass spec saw a rather



intense peak that was missing in the infrared.



On the other hand there are instances where the



converse is true.  I don't want to spend too



much time on the whys of that; it has to do



with the absorptivity in the infrared which



determines whether it is going to see it or



not.  The major point I want to make is that



their differences are complementary.



  Again, on that slide I want to make the point



that we can correlate these data very well.  We



have no difficulty in correlating an infrared

-------
                      278
peak to a mass spec peak.  What sort of data




do we get in the infrared?  Well,  Figure 10 shows



three spectra that are pulled out almost at



random showing one of the strong peaks, a medium



peak, and a very weak peak.  The middle region




includes strong absorption from C02,  which we



do not purge out of our instrument and, indeed,



all of the search programs completely obliterate



this region in their searches; you do not use



this region.  You can see that as we  get to the




very weak peaks we have a much lower signal-to-noise



level and this is where we ultimately lose out.



If we do not have the discrimination of the sig-



nal there we can't get any useful spectrum infor-



mation.  The lower example is an excellent spec-



trum and this is from one of the very weak peaks



in that RGC.  What else do we see?  We see a



lot of structural information, functional group



information.  I'll mention that one again later.

-------
                      279
   The software programs that are available are
nice, getting nicer, getting faster,  getting
better all of the time.  Figure 11 illustrates
some software features.  This is a DIGILAB
slide, it is not of the data from this hazard-
ous waste sample.  I just wanted to show you
what you can do with this.  The lower trace is
a spectrum from a GC run; the other spectra are
the results of their search program through
their catalog of spectra.  They have a HIT
index listed.
   Now, another thing I wanted to point out;
if you get a very low number here, it's an
excellent identification, particularly if there
is a low first number here and then a wide gap
between the others; that is what you call a
positive ID.  Actually, what I have chosen
here, I don't know if you can read that number,
but  the HIT  index ranges  from .61 here to
 .69;  this means  that the  search really wasn't
sure  what this compound was.  It couldn't

-------
                      280
discriminate between these four or five



candidates.  One of the flexibilities you



have, you can tell it to look for the top



ten, the top five, or another number of your



choice.  The other point I wanted to make



here, that they are all chemically similar.



This particular one is an ester; and, while it



couldn't identify the particular ester this



was, the search program picked out all esters.



That is because of the nature of the infrared



information that's here; this carbonyl group



absorption is specifically characteristic of



esters.  From this you also could tell that it



is not terribly complicated.   So all of this



information is inherent in the infrared spectrum



and even if it doesn't come out  positively



identified, you will get excellent chemical



type information.
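
   As a toy illustration of the idea behind such a search (this is not the
vendor's algorithm; the distance metric and names are hypothetical), an
unknown spectrum can be ranked against a library by a hit index in which a
smaller value means a closer match:

    import numpy as np

    def hit_index(unknown, reference, mask=None):
        """Root-mean-square difference between two normalized spectra; a mask
        can exclude regions such as the CO2 absorption bands."""
        u = np.asarray(unknown, dtype=float)
        r = np.asarray(reference, dtype=float)
        if mask is not None:
            u, r = u[mask], r[mask]
        u = u / np.linalg.norm(u)
        r = r / np.linalg.norm(r)
        return float(np.sqrt(np.mean((u - r) ** 2)))

    def search_library(unknown, library, top=5, mask=None):
        """Return the `top` best-matching library entries, best first."""
        scores = [(hit_index(unknown, spectrum, mask), name)
                  for name, spectrum in library.items()]
        return sorted(scores)[:top]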



   Some of the results,  now,  from the hazardous

-------
                      281
waste sample are shown  in Figure 12, again,




to talk about this complementary nature.



Here an X means the compound was positively




identified, and zero means the compound type



was identified, but not specifically.  The



mass spec did not see the fluorinated alcohol;



the infrared not only saw it, it identified



it.  Why?  Carbon-fluorine stretching vibration



is one of the most intense infrared absorbers.



So if there is much there, the infrared is



going to see it.  Other things you might expect:



mass spec, certainly cannot discriminate ortho-



and para-chlorotoluene;  infrared did.



   Similar things here are seen in Figure 13.



Another isomer, mass spec typed it, infrared



identified it.   A case where mass spec  gave an



identification and infrared didn't even see



it; a very weak infrared absorber with  very few



bands for the infrared to key on.  So,  again,



the complementary nature of the two techniques.

-------
                      282
   Now,  I want to very quickly show Figure 14



where I have more recent tabulations of this



data.  This summarizes where we are with this



particular sample.  There were 44 components



in the GC FID trace.  By the infrared data, we




identified specifically 28 of them, and we got



information on 15 types.  By infrared alone we



got good chemical evidence on 43 of the 44



components.   Mass spec gave positive IDs on



13; and good information on 23 compound types;




a total of 36.



   At this point, now, I want to say something



with great caution.  This sample was probably



optimum to show the value of infrared.  We did



not choose the sample, however, to demonstrate



this point.   The sample came in totally blind.



We had no idea what it was.  It worked out that



infrared gave a lot more data than mass spec



on this sample.   I can suggest some other

-------
                      283
samples where the converse would be completely




true, namely, long chain hydrocarbons; then,



mass spec would shine, infrared would tell



you it was hydrocarbons, but it would not



give great definitive information.   This one



happened to work out to show the power of



infrared.



   Another point I want to make and I cannot



emphasize too much.   If we combine the two



sets of data we see the complementarity even




greater; of the 23 compound types identified



by mass spec, 19 of them are positively



identified by infrared.  There were only



seven overlaps in here; five of these, GC



mass spec identified that infrared did not



identify.  If we combine these two sets of



data, we get useful information on all 44 of



them.  We would have specific identifications



on 33 of the 44.  This impresses me.  I

-------
                      284
think this demonstrates beyond any question




that the two together are best.   What if you




can't do that?



     Let's compare them in Figure 15.



Sensitivity:  the gap is not as great as in



the past, and it is getting smaller.   But there



is no question, if you know what compound you



are looking for and where to look for it,



infrared will never compete with mass spec on



sensitivity.  The gap now is certainly one,



perhaps as much as three orders of magnitude.



Infrared is going to improve and I expect mass



spec will also.  So I think that ultimate gap



is going to remain there.



    Ease of operation:  the mass spec is



better.  That gap is closing also, but there



is one very important difference.  At our



laboratories, and I'm sure this is common



with other laboratories also, we can bring

-------
                       285
a kid with a decent high school education



into our mass spec lab and we can have him



getting reliable, useful data in a day or two.



It is just so well automated, so well soft-



wared that that is no problem.  Infrared GC



software is very nice, but it presently requires



an experienced spectroscopist to utilize the



system and to make sure it is doing what it



should be doing.  That will obtain for quite



awhile because of the different nature of the



data, the information that is coming out.



   The time element is not that much different



now.  The software programs for the GC/FT-IR



are becoming very fast now and just last week



at the Pittsburgh Conference there were some



very exciting new developments that are going



to make it even faster.  So I think that's



going to be quite comparable.  I've mentioned



the chromatography is identical (capillary

-------
                      286
column).   In  fact,  there have been several



laboratories, including ours that have



successfully  coupled a GC to an infrared and



then onto  a mass spec; that is super



powerful,  but it is going to be a few years



before that is routine.  Information Content:



infrared is the best; no question.



    Analysis time.  Again, they are just about



equal now; with the exception, again, that




the infrared  requires an experienced



spectroscopist and there are some manual



operations that help you out.



    In a pseudo-summary, on Figure 16, if your



problem is to detect a specific component and



you know exactly where it is, GC mass spec is



the way to go.  If you want to identify compon-



ents in an unknown sample,  far and away the



best thing to do is use both.  If you can only



use one, you will get more information by GC



infrared.

-------
                       287
   Now, at considerable  risk  I  am going  to



completely change subjects.  The risk is that



neither Bill nor Dale knew I would do this



and by so doing, I am following a philosophy



expressed by a  colleague, that  if you want to



do something, it is almost always easier to



obtain forgiveness than  it is permission.



In a very few minutes I want to give you an



abbreviated version of a development first



announced publicly only a week ago yesterday



at the Pittsburgh Conference.



  Ken Shafer has added another  important



member to the analytical alphabet soup;  SFC/



FT-IR.  He has  successfully coupled an FT-IR



system to the effluent of a supercritical



fluid chromatograph.  Why do you want to do



that?  What is  a supercritical  fluid?  (Figure



17)  It is one  that is above its critical tem-



perature and pressure.  It is neither liquid



nor gas.   It has properties intermediate.  The

-------
                       288
 one I want to talk about is CO2, which has a




 critical temperature of 31 degrees centigrade and



 a critical pressure of about 73 atmospheres.



    Why is this important or useful  to anybody?



 (Figure 18)   In  normal GC you have  lots of



 stationary phases, one mobile phase.   In  HPLC



 you have  a few stationary phases and  many



 mobile  phases.   SFC  has  those intermediate



 properties.  It can use all GLC and HPLC columns.



 It  can  use a  variety of  mobile phases.  The one




 I'll talk about today is CO2, but pentane, N2O



 and the Freons have  also been used.



    (Figure 19)  Common detectors:  GC uses



 FIDs, HPLC uses  UV;  SFC  can  use both  of them.



 I'll show  you data to support that.   (Figure



 20) GC  uses temperature programming to get



 your separation; in  HPLC  you use solvent pro-



 gramming; in SFC you use pressure programming.



 This is the major difficulty of it, but it's



not that hard  to do.   Figure 21 shows some

-------
                      289
separations and the use of two detectors.   This



is a mixture...of biphenyl,  isomers of terphenyl,



another phenyl, a triphenyl  benzene, and two



quaterphenyls.  These are highly condensed



ring compounds.  The separation here is very



good and what you see slightly displaced here



is a UV trace followed then later on by an  FID



trace (indicating).  So it is possible to use



both detectors, and you see here the



separation of these isomers  and some condensed



ring compounds of relatively high molecular



weight.



   This is one of the more exciting avenues for



this, the separation of higher molecu-



lar weight compounds.  But I want to show you



today that it can also be used for low molecular



weight materials that you would be interested



in.  Some other considerations in interfacing



FT-IR with various chromatographies are shown




in Figure 22.

-------
                       290
 In GC/FT-IR, you use a light pipe with a



 volume that, ideally, is the



 exact volume of the peak that is coming out.



 In HPLC you either have to get rid of the




 solvent or you have to use a flow through



 cell that is much, much less than the volume



 of the peak; and, one of the major handicaps



 of HPLC/FT-IR is this problem right here



 (indicating).  With SFC you can use a flow



 cell with the volume equal to or greater than



 the peak width.  What I'll talk about is using



 C02 as the mobile phase.  You can eliminate



 the solvent much more easily than you can with



HPLC, but with infrared you don't need to.   C02



is a beautiful infrared solvent.



    Figure 23 is the spectrum,  a transmis-




 sion single-beam spectrum of CO2; this band is



 at about 2400 wavenumbers.  There is another



 strong band out here about 3600; this is a

-------
                      291
little unfortunate because you would like to



look at some alcohols out here, but the only




thing that ever shows up in the 2400 cm-1



region is a few nitriles, C≡N compounds, and



you don't see those very often; otherwise,



it is an absolutely beautiful solvent.  This



cutoff here was caused by the window of



the cell, the only one that he had at this



moment that would take this pressure.  C02



remains a very good solvent for several



hundred wavenumbers.



   Now, the only problem with it is that it shows



changes with pressure as shown in Figure 24.



This is a relatively low pressure and at a



higher pressure you see some other bands



coming out here because of a Fermi resonance



interaction.  This is very easy to handle:  you can just



program your computer to use a particular



background of whatever pressure you are at

-------
                       292
 and these will subtract out.  It handicaps



 your  sensitivity  here  a little bit, but that



 can be handled  by the  software quite nicely.



 So it is a very good infrared solvent.
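   A minimal sketch of the background handling just described follows.  It is
an illustration only, not the speaker's actual software: the stored backgrounds
here are simulated arrays standing in for single-beam CO2 reference scans
collected at each pressure step of the program.

    import numpy as np

    wavenumbers = np.linspace(4000, 700, 1024)
    # Hypothetical single-beam CO2 backgrounds keyed by pump pressure (psi).
    backgrounds = {p: np.exp(-0.0001 * p) * np.ones_like(wavenumbers)
                   for p in (900, 1300, 1700, 2100)}

    def absorbance(sample_single_beam, pressure_psi):
        """Ratio a sample scan against the stored background collected at the
        pressure closest to the current point in the pressure program."""
        nearest = min(backgrounds, key=lambda p: abs(p - pressure_psi))
        return -np.log10(sample_single_beam / backgrounds[nearest])

    # Example: a scan acquired at 1450 psi is ratioed against the 1300 psi background.
    spectrum = absorbance(0.8 * backgrounds[1300], 1450)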



   How do  you do  it?   (Figure 25)  You have




 a Varian syringe  pump  for HPLC that nobody



 wanted.  Hooked up the CO2 tank to it, a



 simple pressure controller, through a preheat-



 ing coil and a valve loop injector into the



 conventional gas  chromatograph with a conven-



 tional capillary  column, went through the UV



 detector; and, that's  another neat thing.  All



 he did was run the capillary all of the way



 through here and just  scratch off the outer



coating and actually do the UV detection



 directly in the capillary.  Then he went to this



 FT-IR (in this case it was one of the small,



low cost Analect systems)  and on beyond that



 to the FID.  So you had the UV detection here,



the IR detection here, and the FID detection

-------
                      293
here (indicating); a very powerful combination.



    Did he get data?  Well, Figure 26 shows




the chromatography on it.  This is a mixture



of anisole, acetophenone and nitrobenzene.   You



see the differing sensitivities of the two dif-



ferent detectors, the UV detector here, the



FID here, the solvent peak from the chloroform



and the separation of those three materials.



Figure 27 shows the spectra he obtained, anisole,



acetophenone and nitrobenzene (indicating).



   This is brand new.  The slides were still wet



when Ken reported it at the Pittsburgh meeting



last week.  So these data are about two weeks



old.  The first public report is a week and one



day old; it is very exciting.  So in overall



summary, then, GC/MS or GC/FT-IR.  Which and



when are a matrix into which are factored the



nature of the sample, the information you



need, and the differing selectivities, speci-



ficities, and sensitivities of the two

-------
                      294
techniques.  Why?  Because the complementary




nature of these techniques effects a synergism



such that the whole is substantially greater



than the sum of the parts.



     Last, but very far from least, SFC/FT-IR




has been accomplished.  Its potential is truly



exciting because the chromatography is excep-



tionally versatile and the spectroscopy is



relatively simple.  That potential is further



enhanced by earlier but still recent demonstra-



tions of SFC/MS.  The very same complementary



nature I have stressed today will be evidenced



in this new field.  With this I have now ful-



filled my commitment to Battelle and I dutifully



submit myself to the remainder of the ceremony,



whether this be the leap into the flaming



volcano; or maybe questions.  Thank you.



                MR. TELLIARD:  Any questions?

-------
294a - 294aa

(Speaker's figures; only the following captions and labels are recoverable
from the original pages.)

FIGURE 1.  DIAGRAM OF 1/16" LIGHTPIPE AND FITTINGS (Battelle-Columbus):
KBr windows glued to 1/16" Swagelok fitting; Hewlett Packard fused silica
capillary line; 1/16" O.D. stainless steel transfer line; 1/16" graphite
ferrule; 1/16" O.D. gold-coated lightpipe.

RESULTS WITH HAZARDOUS WASTE SAMPLE (44 components):

METHOD USED          SPECIFIC ID's          COMPOUND TYPES
GC/IR                     28                      15
GC/MS                     13                      23

COMBINED DATA SETS:  33 specific ID's, 11 compound types.

SUPERCRITICAL CARBON DIOXIDE, 1 CM PATHLENGTH, EFFECT OF PRESSURE
(traces at 900, 1100, 1300, 1500, 1700, 1900, and 2100 psi; x-axis:
wavenumbers).

-------
                      295
             QUESTIONS AND ANSWERS






                DR. VINCENT:  Frank Vincent,




James River Corporation.  That was a fascinating




presentation.  I never heard of either one of




these methods before.  Are you talking about a



precision bore glass tube, gold coated?  I would




assume that the interior surface has to be pretty




smooth or you get so much breakup of the beam




that you don't really get much energy out the




other end.



    Also, I was wondering about the gold coating.




Is this something that is relatively simple to




get done?



                MR. BRASCH:   The answer to the




first is, yes, it must be quite smooth; and,




the answer to the second is, yes, in the sense



that the procedure itself is quite simple.



It is just a solution that is poured




into the tube to coat it and then it is heated.

-------
                      296
The technology, or the "black art," comes in



how to get the right thickness and the heating



rate,  to get uniformity of the coating.   And



that is, just from my point of view,  a black



art.  There is nothing profound or difficult to



it at all.  It is used in many laboratories;



but, it is just that there is an art  to it.



                DR. VINCENT:  This was done




at Battelle rather than some...



                MR. BRASCH:  Yes.



                DR. VINCENT:  So it is basi-




cally...it is something like silvering a Dewar



flask, except, apparently, somewhat more critical



in the way you handle it?



                MR. BRASCH:  Yes, very much so.



                DR. VINCENT:  Is the coating




critical?  The amount of coating and  the amount




of gold on the tube?



                MR. BRASCH:  I cannot give you




an answer, only that there is a lot of work

-------
                      297
going on on geometries and different coatings.




If there is a critical thickness, it must not




be too thick; an interesting phenomenon that




the physicist understands, but I don't.  It




must be relatively thin, but not transparent.



                DR. VINCENT:  Thank you.




                MR. TELLIARD:  Anyone else?




   For the presenters, for the proceedings we




would like to have copies of your slides




or your overheads;  if you could supply us




with xeroxes of them so that we can




incorporate them.  Otherwise, these two ladies




up front here will  come after you,  that may




not be bad.  It wasn't a very good  threat;




sorry.

-------
                      298
   Our next speaker is Drew Sauter from EMSL-



Las Vegas.  Now that we have all mastered the



use of a mass spectrometer, why not put them




in tandem?  If one is good, probably more is



better.  Is that true?  You will see.  Drew, come




on up.

-------
                       299
  RAPID  ORGANIC  ANALYTICAL  METHODS  DEVELOPMENT,
 THE  TRIPLE QUADRUPOLE MASS SPECTROMETRY POTENTIAL

                 Andrew Sauter
      U.S. Environmental Protection Agency
                EMSL-Las Vegas
                MR. SAUTER:  May I have the

first slide, please.  The original work for

Triple Quadrupole Mass Spectrometry that was

funded within the agency was done out of the

Athens Laboratory.

    The Triple Quadrupole was sold to the agency

because supposedly one could reduce sample work-

up.  What I hope to do today is give some idea

of the analytical utility of the instrument,

hitting probably too many areas and to demon-

strate why we feel that it does have great

utility to the hazardous waste programs and I

think in many specific areas to Effluent Guide-

lines or Priority Pollutant type programs.

   Just last year in Analytical Chemistry,

Burlingame said what is on that particular slide

-------
                      300
and I think that's effectively true.  Hopefully,



what we will do today is give you an idea from



about five or six areas why we think the Triple



Quadrupole does have some great potential and,



hopefully, demonstrate a little bit about it.



    The ion optical train of a Triple Quadrupole



is shown on this slide.  There are three Quad-



rupoles, you might focus on that.  The first



one can be used to select and/or scan.  The



second Quadrupole is the collision chamber




where the ions can be made to undergo collision-



induced dissociation, generally in the range



of a few volts.  The instrument that we currently



have is, by the way, a Finnigan Triple



Quadrupole Mass Spectrometer.  The third mass



filter can be scanned and/or set at a given



mass depending on the configuration.  Alterna-




tively, quadrupole one and quadrupole three



can be offset...both scanned and offset, to give



characteristic neutral loss or gain.
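   Because several of these scan modes come up again below, a schematic
summary may help.  The sketch is illustrative pseudocode only, not vendor
software; it simply records which quadrupole is fixed and which is scanned
in each mode.

    # Q1 = first mass filter, Q2 = collision cell (CID), Q3 = third mass filter.
    SCAN_MODES = {
        # mode                  Q1 setting           Q3 setting
        "daughter (product)": ("fixed on parent",   "scanned"),
        "parent (precursor)": ("scanned",           "fixed on daughter"),
        "neutral loss":       ("scanned",           "scanned, offset from Q1 by the loss"),
    }

    for mode, (q1, q3) in SCAN_MODES.items():
        print(f"{mode} scan: Q1 {q1}; fragmentation by CID in Q2; Q3 {q3}")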

-------
                      301
    So we have a variety of options with the



instrument.  Now, one of the things that most




people did when they discussed the analytical



utility, the potential of the Triple Quadrupole,



they resolutely ignored both source introduction



problems and problems which might occur from



introductions of large volumes of material due



to...for example, problems with source saturation



which are found in all types of mass spectroscopy.



So you can see that there is a fairly complex



set of choices that one could have and what



I'll do is take a few of these configurations



today and try to give you an example of why we



think the instrument is useful.



    We have published in Analytical Chemistry



in January a comparison of response factors



from GC/MS and GC/MS/MS.



This comparison I think firmly establishes



that one should be able to attain quantitative



data out of such instruments which is effectively

-------
                      302
identical to good GC/MS work.  Peter Dawson,



who is probably the most well-known gentleman



in ion optics of Quadrupoles, has described the



ion optics of Triple Quadrupoles as complex.



I submit that we should take his word there.



So such observations are not trivial and are



of some practical utility.



    One does not buy a Triple Quadrupole to do



GC/MS.  One would like to be able to do other



things and because the instrument costs approxi-




mately $350,000, one would like to be able to



do a lot of other things.



    This is a view of the Triple Quadrupole and



you will note that in the front of the instrument



is a moving belt, LC/MS interface.  While this



is a mechanically crude device, one can use this



device to rapidly introduce samples into the



ionization region and then perform a variety



of different experiments.  We have been doing



this with a variety of samples and mixtures

-------
                      303
and we believe that it will find great utility,




perhaps in screening analysis.



    Let me move on.  By simply placing an extract



on the belt in this crude fashion (for example,



neat transformer oils), one can screen



for a variety of different compounds.  One can



also do that fairly quickly.  This slide shows



25 determinations of Aroclor 1260;  it is essen-



tially a single level precision study that was



done in slightly over 1,000 seconds.  There



are 25 measurements of Aroclor 1260 at 50 nano-



grams.



    The precision, including all data there,



was approximately...16 percent relative standard



deviation, and if you will allow me to throw out



three outliers, the RSD improves to 12 percent.



This is a total ion current plot of a negative



daughter ion experiment introducing standards



of Aroclor 1260 into the ion source.
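   For reference, the relative standard deviation quoted here is simply the
standard deviation of the replicate responses divided by their mean.  The
sketch below uses made-up replicate values (the 25 individual responses were
not reproduced in the proceedings), so the printed numbers are illustrative
rather than the 16 and 12 percent figures above.

    import statistics

    # Hypothetical responses for 25 replicate 50 ng determinations.
    responses = [98, 104, 101, 95, 110, 92, 99, 103, 97, 118,
                 100, 96, 105, 102, 94, 131, 98, 101, 99, 97,
                 103, 100, 71, 96, 102]

    def rsd_percent(values):
        """Sample standard deviation over the mean, expressed in percent."""
        return 100.0 * statistics.stdev(values) / statistics.mean(values)

    print(round(rsd_percent(responses), 1))              # all 25 replicates
    med = statistics.median(responses)
    trimmed = sorted(responses, key=lambda v: abs(v - med))[:-3]
    print(round(rsd_percent(trimmed), 1))                # three outliers removed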

-------
                      304
    The next  slide shows  triplicate analyses




of Aroclor 1260 from five to 100 nanograms per



microliter and with the subsequent analyses



of eight transformer oils in triplicate.  These



particular transformer oils were diluted by, I




believe, a factor of two because the chlorinated



biphenyls identified in these samples were found



at relatively high concentrations.  In 27



minutes there were three times eight plus five



times three determinations of Aroclor 1260.



The ionization mode was methane chemical ioniza-



tion at approximately one torr.  We were doing



negative parent ion scanning and it's obvious



that analysis at this rate is of considerable



utility for a couple of reasons.



    Most of the environmental measurements that



are made,  are made on one sample.  They are not



made in triplicate.  It would be nice to have



triplicate measurements to examine sample related



precision.  This is a multi-level calibration

-------
                      305
curve of Aroclor 1260 using negative parent



ion with methane at 0.94 torr and argon at approximately




0.5 millitorr.



    Again, the methane is utilized to create




negative ions which are, in this case, then




selected and undergo collision-induced dissociation in



Q2 creating  ion current which is sensed at the



multiplier.  This is an example of a calibration



curve that we can currently get now and such



determinations can be done in the order of



minutes.  We think that is also useful.



    This is a complex sample workup scheme that



was utilized to obtain the data in the previous



slide.  Essentially, the transformer oils have



been taken out of the vial and placed directly



on the belt.  We have done this probably



eight or nine different times over the course



of approximately an hour, demonstrating that,



in fact, the system can take the abuse of



direct complex, mixture analyses of chlorinated

-------
                      306
biphenyls in transformer oils.  The fact that



the LC/MS Interface tends to throw away quite



a bit of the material itself is the reason this



system works.  We gain back the sensitivity



lost in that we are using negative ions.



    So using this introduction technique one



would take transformer oil and place it directly



on the belt.  A normal negative ion Q3 mass



spectra produces a complex mass spectrum.  Tak-



ing the same sample under the same ionization



and introduction conditions and doing a negative



parent ion scanning for the same sample, this



is the resulting spectrum.



    Most of the ion clusters are related to the



formation of molecular anions of chlorinated



biphenyls.  The nice thing about this type of



detection technique is, it takes a complex mix-




ture, chlorinated biphenyls, and makes it sim-



ple.  That is, I believe, of regulatory interest.



One would not want to use this type of technique

-------
                       307
 if one was  trying  to  study metabolism of given



 isomers, but for making regulatory decisions I



 think it is a valuable approach.  Aroclor 1260



 standard run under the identical conditions



is shown there.  So they are quite similar;



 in fact, Aroclor 1260, 1254,  1248 and 1242 and



 perhaps 1232 can be differentiated.  The mixture



mass spectrum in the negative parent ion scanning




mode is unique.  That is not to say that we



could differentiate mixtures of those given




mixtures, but under such analyses conditions



we seem to be able to unequivocally determine



that there are, in fact, molecular anions con-



taining chlorine of the molecular weight which



coincides with chlorinated biphenyls.  One can



do this quite rapidly, with effectively no



sample workup.



    We are interested, in our programs,  in



hazardous waste areas.  In our particular aspect



of the MS/MS Program we are particularly inter-

-------
                      308
ested in compounds which cannot be done by Gas
Chromatography,  Mass Spectroscopy.   This slide
shows a variety  of compounds,  many of which
cannot be done by Gas Chromatography, Mass
Spectroscopy, but can be directly  introduced in
the fashion discussed previously.
    We expect from our work that,  in fact, methods
for these given  compounds of regulatory interest
to RCRA could be rapidly developed.  One, in
fact, does need  quite a bit of manpower to develop
methods for many different molecules and while
this is a major problem with rapid analytical
method development, we feel quite  certain that
for a variety of molecules MS/MS coupled with
this crude introduction technique  can be exploited
to develop methods rapidly.
    This is an example of a positive daughter
ion spectrum of diethylstilbestrol.  This slide
presents an indication of the information con-
tent available in daughter ion spectra acquired

-------
                       309
 in  this  nature.   We  have  noticed  that  for many




 molecules  the  information content is sufficient



 to  identify  polar molecules  in hazardous waste



 extracts.



     Professor  Hunt at  the University of Virginia



 is  developing  priority pollutant methods.  This



 slide presents a  direct comparison of results



 done independently by GC/MS and MS/MS.  A gen-



 eral summary of the  work to date by Professor




Hunt, based on performance



 evaluation samples and a variety of hazardous



waste samples (very complex mixtures), is that




qualitatively the MS/MS scheme that he has



developed is quite promising.  In many cases,




qualitatively, the data is excellent; in a



quantitative sense it requires improvement.



    The interesting  thing about Professor Hunt's



work is that sample workup for the priority



pollutants and analyses and acquisition requires



on the order of 25 minutes, total.  Will that

-------
                      310
type of methodology apply to every sample in the




universe?  I could probably say unequivocally,



no.  Will it have great analytical utility for



certain industries and for certain waste indus-



trial effluents?  I believe it will.  In fact,



I had thought that his mission to develop analy-



tical methods which would compete with the eco-



nomics of fused-silica capillary column GC/MS



was a particularly difficult one.  Both the



qualitative and quantitative reliability of



the data that has been provided to date has



been good, but we anticipate further improve-



ments.  He is working under a cooperative agree-




ment with EMSL-Las Vegas and Dr. Don Betowski



at our laboratory is monitoring that program.



    We are not concentrating on MS/MS analysis



of priority pollutants at our lab, but as Bill



invited us to talk here about MS/MS and as we



were analyzing hazardous waste extracts, I




thought we should examine some priority pollutant

-------
                      311
data by MS/MS.  What you are looking  at right



now is a positive ion Q1MS of an actual extract




provided by Dr. Larry Straton at NEIC.   This



is a particularly clean hazardous waste extract.



This is positive ion methane CI with a full scan.




This is as if one would introduce a sample on



the LC belt directly into a single Quadrupole



Mass Spectrometer.  You will note that in many



cases fragments corresponding to molecular ions



of the priority pollutants which were spiked



into this mixture are obvious.  This  sample



was spiked at approximately 100 nanograms per




microliter.



    It is interesting to look at the  region...




where the pointer is (indicating).  The power



of MS/MS becomes apparent when one looks, for



example, at this region.  At mass 139 and mass



140 are the protonated molecular ions for



isophorone and 2-nitrophenol.  What one can




do is introduce this sample in the same fashion

-------
                      312
and instead of doing a full Ql scan, one can



ask for daughters of 139 or 140.   This is a



positive daughter ion spectrum of 140 and you



can see the protonated molecular ion and you can



also see loss of water and phenol and a variety




of other peaks which are quite characteristic



of nitrophenols.  In fact, the CID spectra of



positive ions are, in fact, very  similar to,



in many cases, low energy electron impact mass



spectra.  I guess in retrospect that really



shouldn't surprise anyone, but it is nice to




know that if you can interpret electron impact



mass spectra it is fairly easy to interpret



CID spectra.



    Having taken m/z 140, in the next few milliseconds



a scan for the positive daughters of 139 gives an



alternative identification of isophorone.



So going back again, selecting these



peaks and doing daughter ion analysis allows



one, despite the fact that they are only

-------
                      313
1 amu apart, to identify these compounds in



hazardous waste sample extracts.
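   The measurement sequence just described can be summarized schematically.
The sketch below is illustrative only; the instrument object and its methods
are invented for the example and are not the Finnigan control software.

    def daughter_scan(instrument, parent_mz, low=50, high=300):
        """Fix Q1 on the chosen parent ion, fragment it by CID in Q2 with a
        collision energy of a few volts, and scan Q3 over a mass window."""
        instrument.set_q1(parent_mz)            # hypothetical API call
        instrument.enable_cid(energy_volts=10)  # hypothetical API call
        return instrument.scan_q3(low, high)

    # The protonated ions at m/z 140 (2-nitrophenol) and m/z 139 (isophorone)
    # are only 1 amu apart, but each gets its own daughter-ion experiment:
    # spectra = [daughter_scan(tsq, mz) for mz in (140, 139)]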



    Other things can be done.  This is a nega-




tive Q3 CI mass spectrum, a full scan of another



complex hazardous waste extract.   Chlorinated




materials are present, someone will say maybe



that's a polynuclear.  In selecting the daughter



ions of m/z 182 from this sample, just with the



electronics of the instrument one gets a very



characteristic and clean spectrum for that



molecular anion of a dinitrotoluene.  It amazed



us that in many cases the instrument selects



ions out of incredible garbage and provides one



with reasonably clean spectra.  We have been




able to qualitatively verify a variety of



priority pollutants in hazardous  waste extracts



via this approach.  With proper quality con-




trol, we expect to attain good qualitative



results.  We had done some work with fused



silica capillary column along with a lot of

-------
                      314
other  people  here  and  the  acquisition  time  for



priority pollutant analysis was reduced to ap-




proximately 30 minutes.  What would happen if



we put all of the priority pollutants on the



belt at one time and performed a full  scan Ql



mass spectrum.  What one observes is 95 for



phenol, 124 for nitrobenzene...let's see,



185...anyone  that will give me help with that?



I believe that's benzidine.  And a variety of



other compounds can be identified through



appropriate daughter or parent ion scanning




techniques.  The information content, with



respect to the priority pollutants, in daughter



ion and other scanning modes is quite adequate



for qualitative identification.



    This is the negative ion CI Q1MS acquisition



for the priority pollutant standard.   So that



half a second later, taking negative ion



Q1MS data from the same sample that you saw



previously, you will note that where the

-------
                      315
sensitivity is low in positive ion spectra, the



negative ion CI is more sensitive.  You see



hexachlorobenzene, benzo(a)pyrene-d12 or



benzo(a)pyrene, or a molecular anion with that



weight, trichlorophenol and a variety of other




molecules at 25 nanograms are observed.  Doing



daughter ion experiments we have repeated this



at one nanogram using the belt introduction



technique, and you are able to observe signals for



most of the priority pollutants, including some



very low molecular weight compounds which sur-



prised us,  like dipropylnitrosamine and dimethyl-



nitrosamine also.  So MS/MS offers the possibi-



lity of a rapid screening procedure for



priority pollutants.   One could analyze, at



least theoretically,  on the order of 150 to



200 samples an hour.   It's not clear whether



one could do 800 a day; it's not clear that



one would want to do 800 a day, but one could



surely do triplicate analyses of the samples

-------
                      316
of interest.  One might be able, then, to screen



extracts for selected priority pollutants and



other compounds of interest which can be done



by GC/MS and in this fashion determine whether



one needs to do GC/MS analysis.



    A perfect example of this is the Missouri



dioxin problem.  The information that I have



indicates that approximately 80 percent times



4,000 times $400 per sample minus the cost of



a Triple Quadrupole screening scheme could be



saved in that program by an MS/MS dioxin screen-



ing scheme.  One of the reasons that we work



with the chlorinated biphenyls was related to



the interest in dioxin.  In fact, it appears...



there is every reason to believe that in actual



extracts one will be .able to have a very rapid



screening technique for this molecule.  The



reason for this talk is to present an idea



which has become obvious to me, though it is



still just an idea:  that analytical methods

-------
                      317
development can be structured such that, with



with people and perhaps a few automated instru-




ments, one can rapidly develop analytical methods in



crash problems.
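   To put numbers on the dioxin example mentioned a moment ago, the estimate
works out as below.  This is only a back-of-the-envelope calculation using
the figures quoted in the talk; the cost of the MS/MS screening itself is
left as an unknown to be subtracted.

    samples = 4000                      # samples in the program, as quoted
    cost_per_analysis = 400             # dollars per conventional analysis
    fraction_screened_out = 0.80        # samples expected to need no further work

    gross_savings = fraction_screened_out * samples * cost_per_analysis
    print(gross_savings)                # 1,280,000 dollars, before subtracting
                                        # the cost of the screening scheme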



    The Effluent Guidelines Division program has



evolved over a number of years now, but it is



still saddled in many cases with the matrix



problems.  When we were first told to write



methods for priority pollutants I remember a



lot of analytical organic chemists standing up



and saying you can't write methods for everything



in anything.  The progress that has been made



is really amazing, but there are reasons why



one might want to have a matrix specific approach




or a structured approach to methods development



using, for example, the 1600 methods as the



quality control check.  Using that as a model



and knowing a little bit about scanning options



and ionization processes at MS/MS, it is not too



difficult to say how one could go about making

-------
                      318
the development of a method routine.  Methods



Development costs quite a lot and I think it is



worthwhile for us to look into Rapid Analytical



Methods Development.



    To give you another idea of what you can do with




an MS/MS, we have a program to develop methods



for dyes.  There is a gentleman by the name of



Professor McGovern at the University of Pennsyl-



vania.  If you have read C&E News last February,



there was an article discussing instrumental



applications to archeology.  A dye that was



discussed in that article, called 6,6'-dibro-



moindigo was of particular interest to Professor



McGovern.  The dye apparently at one time was



used by the Greeks, Egyptians, Phoenicians and



Romans.  Work to date has not been able to confirm



that this dye is present in samples of archeo-



logical interest.  We have contacted Professor



McGovern and suggested another introduction



technique which is a FAB-like technique where

-------
                      319
one bombards a sample on a platform with ions.



This mass spectrum is a full negative ion scan



of a polar dye, bromocresol purple.  The struc-



ture of 6,6'-dibromoindigo does have some



structural similarity to this molecule and we



sent him the spectra and asked if he would



like to send us some sample.  He appears to be



interested in this application of MS/MS to his



problem.  We are going up to Philadelphia tomor-



row and will probably take a sample back from this



artifact, but based on the structure of the




molecule an MS/MS approach has become obvious.



If the dye is present and if there are not



ionization suppression problems with the matrix



we should be able to unequivocally conclude...if



the Phoenicians practiced unregulated dye dumping



in the year 1300 B.C. using DISIMS ionization




and negative ion daughter scanning techniques.



    Is screening important enough to warrant




purchase of an MS/MS?  The work of Dr. Shackelford

-------
                      320
at Athens in creating the data base on the



industrial effluents has indicated that,  in



fact, in industrial effluents the occurrence



of priority pollutants is relatively rare.  I



believe the compound found most frequently



was phenol, and the frequency was 5 percent;



that would indicate, then, that priority pollu-



tant screening methods via MS/MS would seem




to be viable.  It would seem to be economical



to screen samples by MS/MS and the data could



be produced for project officers in a more




timely fashion.



    For dioxin, let me repeat 80 percent of



4,000 times $400 could be saved minus the cost



of a TSQ mixture screening scheme.  Is a method



ready to go right now, off the shelf?  It



isn't.  Should you go out and immediately



purchase a Triple Quadrupole at $350K?  I would



probably wait.  However,  I believe you can see



why we are excited about  the technique and why

-------
                      321
we think it has advantages.  I think it will



eventually find its way into the programs




for programmatic as well as technical reasons.



    Any questions?

-------
321a - 321r

(Speaker's slides; only the following are recoverable from the original
pages.)

Multilevel Calibration, Negative Parents, Aroclor 1260, Introduced via LC
Belt (Neg. Parent Ions; CH4 CI 0.90 torr; Argon 0.5 mtorr; x-axis: ng
Aroclor 1260, 10-100 ng).

Summary of Results Obtained by the University of Virginia on
an EPA "Rag Oil" Sample

                              Radian GC/MS               MS/MS
Compound                         (ug/mg)                 (ug/mg)
C2-benzenes                        14                    12.4
toluene                             6                     3.7
C1-dibenzothiophenes                3.4                   8.0
C1-phenanthrenes                    3.2                   0.9
C3-benzenes                         3.1                   8.6
phenanthrene                        2.0                   1.0
dibenzothiophene                    1.5                   4.0
C1-naphthalenes                     1.5                   0.5
C4-benzenes                         1.2                   4.3
C2-phenanthrenes                    1.1                   0.3
C2-dibenzothiophenes                1.1                   4.8
benzene                             1.0                   2.0
C2-naphthalenes                     0.9                   0.2

-------
                       322
             QUESTIONS AND ANSWERS





                DR. COLBY:  Bruce Colby, S-CUBED.




Drew, I did a quick calculation here based on




your 200 samples per hour estimate as a through-



put.  If there were 90 compounds per sample that



you were interested in, you would ultimately



require or have available for any single compound



identification, quantification, whatever you



are going to do; one-fifth of one second in terms



of data reduction.
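   The arithmetic behind that figure: at 200 samples per hour and 90
compounds per sample, the time available per compound is
3600 / (200 x 90) = 0.2 seconds.

    samples_per_hour = 200
    compounds_per_sample = 90
    seconds_per_compound = 3600 / (samples_per_hour * compounds_per_sample)
    print(seconds_per_compound)   # 0.2 seconds per compound for data reduction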



   How is the agency addressing the data



manipulation problem that would appear to be



generated here?



                MR. SAUTER:   I think one of the




ways that we would address that is not to



address it, frankly.  The problem...what I see



in this meeting, for example, George Stanko's




talk I thought was particularly interesting.



Earlier on, what people have done in these methods

-------
                      323
and programs is, they have come in and have said



we can do anything in anything and a whole lot



of that.



    What you see now in the analytical community,



I think, is a concentration on very specific




issues.  I feel quite confident from data with



standards and samples to indicate that this type



of screening techniques would be of great



interest to people worried about the polynuclears



at given levels.  In the type of workup that I



am talking about for parts per billion analysis,



one does not gain the minimal sample workup; one



still needs the concentration factor.



    There is no reason to think on an average




that MS/MS will be more sensitive than a single



Quadrupole instrument.  So that I don't think



one would, in fact, want to look for every prio-



rity pollutant in a sample by the method that



we have discussed.  One might want to  screen



samples that way.  Based on Dr. Shackelford's

-------
                      324
work and on other areas of our work it strikes



me that screening in such a fashion has inter-



esting properties.



    This is a potential approach for limited




objective analytical strategies.  For example,




chlorinated biphenyls in transformer oils, dio-



xins in extracts, which have gone through some



workup, polynuclear aromatics in a variety of



different samples and one could go on and on.



To me, the analytical applications are obvious.



We have given a few examples of applications.




We have not shown that it is an equivocal method



to supplant GC/MS.  It will augment GC/MS.  It



will not supplant GC/MS.



                MR. RUSHNECK:  Dale Rushneck,



Interface, Inc.  Well, the answer to Bruce's



question seems pretty obvious, in addition to




the Triple-stage Quadrupole you need a Triple-



stage computer.

-------
                       325
    My question  is one concerning isotope dilu-



tion.  I noticed this belt technique, having worked with
detectors of that nature myself.  There is, of course, a
lot of variability in that technique from the standpoint
of getting the sample on the belt precisely the same way,
and it is hard to get reproducibility; and I have wondered
if you've tried isotope dilution in terms of the...



                MR. SAUTER:  That data that was




presented was isotopic dilution.



                MR. RUSHNECK:  Pardon?



                MR. SAUTER:  The data that was




presented, that multiple level concentration



curve was effectively isotopic dilution.



    We were using the C-13 labeled chlorinated biphenyl
standards which are now being used in the interlaboratory
PCB study.  We used the per-C-13 Cl-8 molecular anion
relative to the

-------
                      326
most intense molecular ion in the negative



parent ion scan of Aroclor 1260 which was from



the molecular anion of the heptachloro isomer,



of all heptachloro isomers.   So it was almost



isotopic dilution.  I think your point is well




taken; because of your work, Bruce's, and Bill's, the
labeled materials for priority pollutants are available.
It would not take much ingenuity, and that's what I meant
before, to take the material available because of the
work on the 1600 methods and lace that into some sort of
screening scheme.



    Your point about the belt is well taken, it




is mechanically crude.  I do  not personally



believe that the way we put material on the belt



was, in fact, the  best introduction method; but,



I think in many cases it can  work.



                MR. RUSHNECK:  Sure.




                MR. SAUTER:   I do believe very




strongly that, for example, newer developments

-------
                       327
 of  the  thermospray  LC/MS interface may provide a



 superior  introduction  technique.  I think the




 speed of  the belt is worth considering and I



 think if  one could  demonstrate, unequivocally,



 the analytical utility of that approach then



 someone would figure out a damn precise way to



 put it on that belt or some other type of



LC/MS interface.



                MR. RUSHNECK:  The second ques-



 tion I had was concerning the analysis of PCB's



and transformer oils.  Do you think with negative



 ion CI you could get sufficient results from a



 single stage instrument to be unequivocal?



                MR. SAUTER:   In many cases you




can't.  It depends on which regulation and pro-



bably which transformer oil.  The OTS regula-



tions are worried about 50 PPM,  I believe,



whether to incinerate or not.



    Many of the samples that were provided to



us could have been done in that fashion.   How

-------
                      328
well quantitatively and qualitatively it could



have been done, I can't really say; but, at



levels above 100 PPM, 75 PPM, one could use



a 4,000 in theory to do this.  I can tell you from a
certain amount of experience that one would like to have
the selectivity of a triple-stage instrument.



   Thank you.



                MR. TELLIARD:  Thank you, Drew.



                MR. KEEN:  Gary Keen with Conoco.  If I
may make one comment, Dale:  we do use a single-stage
instrument for PCBs with negative CI, but we use masses
35 and 37 rather than the molecular ion, and it works
very well.



                MR. RUSHNECK:  And no GC; it is



just a production sample?



                MR. KEEN:  No, we do have




capillary GC on it.  We find it works extremely



well, better than the specific GC techniques.

-------
                      329
                MR. TELLIARD:   Our last speaker




for this morning's session is  Bob Beimer from




TRW.  Bob, as you know,  has been on this program



before and as we know Bob can't speak to



metals analyses, but he  is here to talk about



some organic analyses which are, perhaps, more



in his area of expertise.



    Bob is going to talk about a direct injection



technique that EGD has been working on, on and



off, for the last year and a half.  It,



basically, is a selective little tubing.  So




Bob, now, is going to talk to  you about a hose




job; Bob.

-------
                        330
     EVALUATION OF A NEW GC/MS DIRECT AQUEOUS
INJECTION  INTERFACE FOR VOLATILE ORGANIC ANALYSES

                 Robert G. Beimer
                    TRW, Inc.
                MR. BEIMER:  There have been a

lot of comments out there about the length of

this morning's session.  I'm going to try to run

along pretty fast so that you won't miss the

rubber chicken and peas.

   At the request of Bill Telliard and others,

we have done some work on evaluating a DuPont

polymer called Nafion.  We evaluated this material

as a concentrator technique for the determination

of volatile organic compounds in water.  The

analysis is conducted by directly injecting the

water without any previous separation.  The

sample passes through the Nafion tube and right

onto the GC column where the analysis is

conducted.

   The interface consists of an injector block

that's the injector (indicating).  We were

-------
                        331
using an all-glass system  in order  to minimize



contamination problems.  The carrier gas is



pre-heated by winding through coils within that



injector block, which is maintained at 150



degrees centigrade, passed into the injector



port itself, and then the water sample is



injected through the septum and is  flashed in



this zone in the glass injector.  The material



is carried from the injector in a vapor state



into a six foot length of Nafion tubing.  I



have no idea of the chemical structure of this



stuff; but, basically, it is a material which is



at least permeable to water and at  the most



permeable to all polar organic compounds while



being impermeable to non-volatile species.



   The tube itself is this inner line here on



the drawing, you can see that there are two



lines there and the inner line is the Nafion



tubing (indicating).   The outer sheath is just

-------
                        332
a nylon tube through which one passes  a dry



gas in a counter-current direction to the flow



of the helium.  The countercurrent flow of dry



gas around the Nafion tubing is flowing in the



reverse direction carrying away the water that



is permeating through the Nafion tube.



  The reduction in relative humidity here is



substantial as shown by work that has been done



by Peter Simmons of International Science




Consultants in England, the person who came up



with this technique to begin with.  Basically,



the sample once injected at this point enters



the GC column as a dry gas.  There is no problem



with water buildup in the system.  We studied



how much water could be injected into this sys-



tem on a routine basis without detrimental



effects on the mass spectrometer system and/

-------
                        333
or  the  GC  column.




    Originally,  it  had  been reported that seven



microliters was the maximum water injection




which could be  tolerated when an electron



capture detector was used.  We felt that the



mass spectrometer  might be a little more




tolerant of water  than the electron capture



detector system, so we started at seven micro-



liters and worked  our way up.  Our determina-



tion was that 250  microliters or a quarter of



a milliliter of water could be injected into



this system on a routine basis.   You could do



that for at least  eight hours at one sample



each 45 minutes and not get an increase in the



water background in the mass spectrometer and



you could maintain your vacuum.



  We did, however, find that when you inject



a half a milliliter of water, the system

-------
                        334
 self-destructs.   If  you  will notice, we



 have got connectors  here connecting the Nafion



 to the injector; and, then, there is



 another connector down here where the Nafion



 is connected to the GC column.  We blew them



both apart.  There is quite a volume change when you go
from a half a ml of water as liquid to its equivalent
vapor state.  We ended up with most of that half a ml of
water on the end of the GC column, which we ruined.  It



 took the better part of the day to get the



mass spectrometer vacuum back;  but, basically,



a quarter of a ml was not a problem.   If in-



jected in a reasonable way I think a half a ml



could be done as well.  In other words, you



would have to inject it slowly, not trying to



 slug it in at a given instant because the way



the technique works is, you are maintaining



 the GC column at room temperature or below.



 While you are making the injection, at this

-------
                       335
point, if you made it slowly, you could




hold the GC for just a little longer at room



temperature before you programmed it up to do



your analyses.  Effectively, your samples will



be concentrating on the head of the GC column



anyway.



  The whole idea of this was to be able to run



particularly nasty samples without going through



the purging operation and the secondary trapping



operation.  Before we did that we had to deter-



mine whether or not this technique was compati-



ble, reasonable and similar to the purge and



trap operation.   In order to do that, we ran a



rather significant number of standards by both



the purge and trap technique and by the Nafion



interface technique using similar concentrations



of materials.



   This is just  a reconstructed ion trace of a 100



nanogram standard run, using the purge and trap



technique.  A number of the peaks are identified,

-------
                        336
but  that  is  really  not  important; more  impor-



tant is the  shape of what you see here and then



compared  to  the same standard run using the



Nafion (indicating).  Down at this end, you



will notice  the typical starting end of the GC



trace for a  volatiles analysis, the peaks are



broad and unresolved; maybe that's only me,



maybe the rest of you do better.  If you will



notice, with the Nafion injection there is much better
resolution at the low end (indicating).  A



reason, of course, that you have this is that



you don't hold the GC for a significant period



of time using the Nafion system like you do



with the purge and trap.  With the purge and



trap you are holding the GC at the low end while



you desorb the materials from the trapping col-



umn.   Here,  of course,  you start the GC at room



temperature  and you inject through the Nafion



and then you program the gas chromatograph.



So there isn't that lag time, the material

-------
                       337
doesn't have a chance to diffuse at the head of



the column at low temperature, and you get a much



sharper chromatogram.



   Well, that's fine for the beginning peaks,



but one might expect that the later peaks could



be a problem in that we are introducing a




significant amount of volume before the GC



column itself by having this six feet of tubing.



This is a comparative trace, mass 78, I hope;



I can't see all of the way over there, for




benzene (indicating).  The top trace is the



Nafion direct injection interface, the bottom



trace is the purge and trap; and,  I think to



anyone's satisfaction the GC resolution is



virtually identical in both cases.



   Now, this is all well and good, but assuming



that you can only inject 250 microliters of



water into this system,  you are limited to a



factor of 20 loss in sensitivity,  assuming that



five milliliters would normally be used in a

-------
                       338
purge and trap operation.  Therefore, we are



not proposing that this technique supplant purge and
trap.  We are only saying that in those samples where
you have a very high concentration of material, this may
offer an alternative to diluting and rediluting your
sample with water and purging it; you can just inject
different volumes of it into the system using this
technique and get some pretty good analytical results.
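   [Illustration:  a minimal sketch of the sensitivity
   trade-off described above, assuming a 5 mL purge and trap
   sample and the 250 uL maximum routine injection; the
   variable names are illustrative only.

       purge_and_trap_volume_uL = 5000      # 5 mL normally purged
       direct_injection_volume_uL = 250     # maximum routine Nafion injection
       factor = purge_and_trap_volume_uL / direct_injection_volume_uL
       print(factor)                        # 20-fold less sample, hence roughly
                                            # 20-fold higher detection limits
   ]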




     On this slide we have a response plot.



The bottom axis is the amount injected in nano-



grams and up this side is an arbitrary uncor-



rected area count measurement.  The idea here



is to show you that although there is a displace-



ment in the slope of the direct aqueous injec-



tion response curve which is this bottom line;




I took some liberties and dotted it down here



at the low end where it didn't have any sensi-

-------
                       339
tivity (indicating).  The purge and trap line is



the top one and basically they are the same,  in



the sense that you can get good linear response



over a broad range of concentration.   This con-




centration out here is about 1200; 1200 nano-



grams is this last data point injected into the



system (indicating).



     On this slide we have determined the limit



of detection based on a 250 microliter injection



volume into the Nafion system.  What I want to point out
here is that there are some compounds that



have poorer detection than others.  Simply put,



1,2-dichloroethane at a 2,000 nanogram detection



limit which is rather poor.  The bromodichloro-



methane also had a 2,000 nanogram detection limit;



I have no real good explanation for this since a



lot of this work is preliminary.  It may be that



those materials have some significant affinity



for the Nafion and, therefore, they are not



transmitted effectively at low concentration.

-------
                       340
However, benzene down here at 80 nanograms in



a 250 microliter injection...or excuse me,




this is 80 micrograms per liter based on a 250



microliter injection.  Toluene at 40.  Aromatic



hydrocarbons give very good transmission



through the Nafion.
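   [Illustration:  a minimal sketch of the unit conversion
   implied here, assuming the tabulated limits are
   concentrations referred to a 250 uL injection; the
   variable names are illustrative only.

       injection_volume_uL = 250
       benzene_lod_ug_per_L = 80            # from the detection limit table
       # mass actually delivered to the column at the detection limit
       mass_on_column_ng = benzene_lod_ug_per_L * injection_volume_uL / 1000
       print(mass_on_column_ng)             # 20 ng of benzene in 250 uL
   ]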



   In order to try to nail down the mechanism behind some
compounds being better performers than others, we
calculated the recovery, if you will, for various
different levels of standards injected through the
Nafion interface and run by the purge and trap
technique.  We assumed the purge and trap
technique was perfect; and, therefore, we



ratioed everything to the purge and trap data



at the same concentration.  This chart shows



the recovery of the materials that we studied.




At the top is chloromethane and needless to



say, a highly volatile gas which is not



trapped all that well on the tenax trap

-------
                       341
and lost somewhat through the GC column by migra-




tion.  It performs much better by the Nafion tech-



nique.  In fact, the ratio of the concentrations



of the same material injected was 360 percent.  So we



are getting almost a four-fold increase in sen-



sitivity of chloromethane by this technique;  but,



down the list the rest of them, for the most



part, are less than 100 which says that the



Nafion is not transmitting quite as well as the




purge and trap, with a couple of exceptions.  The
benzene, for example, is 160 percent.
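   [Illustration:  a minimal sketch of how those relative
   recoveries were formed, assuming matched standard
   concentrations and taking the purge and trap response as
   the 100 percent reference; the area counts below are
   hypothetical, not data from the study.

       nafion_area     = {"chloromethane": 7200, "benzene": 4800,
                          "carbon tetrachloride": 990}
       purge_trap_area = {"chloromethane": 2000, "benzene": 3000,
                          "carbon tetrachloride": 3000}

       for compound in nafion_area:
           relative_recovery = (100.0 * nafion_area[compound]
                                / purge_trap_area[compound])
           print(compound, round(relative_recovery))   # 360, 160, 33 percent
   ]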



   To get some idea of what effect the Nafion



has on this transmission, I have also put on




this chart boiling point.  We are dealing with



a 150 degree injector, we are taking that down



to room temperature, presumably the injection



has allowed the organic materials to move



quickly through the Nafion before the water



itself condenses.  So let's think about boil-



ing point as being the mechanism by which

-------
                       342
materials are transmitted or lost.  Well,



that didn't work because you go down this



list and you look at the boiling points and



in some cases the higher boiling points



have higher transmission efficiencies; in



other cases they don't.  So I think the



mechanism, or the thing that describes the
transmission efficiency, probably has more to



do with the polarity of the molecule than the



boiling point.  However, when you deal with




similar molecules (i.e., benzene, toluene and



xylene) with similar polarities, the transmis-



sion efficiency drops off as the boiling point



increases.



     The same is true for the chlorinated



organic molecules, chloromethane, methylene



chloride, chloroform and carbon tetrachloride



which show the same trend:  as the boiling point
increases, the transmission efficiency drops off,

-------
                       343
which says that there is some condensation




taking place in the Nafion tube.  The idea



here was also to determine how one can do




the analyses on complex or nasty samples.



We had a bunch of really nasty samples of



water from a low BTU gasifier and if any



of you have done any synfuel wastewater work



you know that it may only be about half



water and the rest of it is suspended garbage.



  This is a sample run through the Nafion



of the synfuel wastewater from a low BTU



gasifier.  The movements in the baseline down



here are benzene and toluene, the two priority pollutant
constituents that were



actually observed in this sample.  If you can



notice these peaks, that's phenol and those



are methyl phenols.  The phenol and the methyl



phenols injected by this technique not only



passed through the Nafion onto the GC column, but those

-------
                       344
rascals can actually be chromatographed very



nicely with the Carbowax on Carbopack column;



something I didn't realize would be the case.



      A sample was also run by purge and




trap; the phenols, obviously, were not




observed under those circumstances because phenol



itself is not purged.  We compared the two



pieces of data and there was a reasonable



correlation between the direct aqueous injection



analysis of the sample and the purge and



trap analysis of the sample, if one assumed that the low
concentration materials would not be seen by the Nafion
injection; that was the case because we have a
sensitivity cutoff about 20-fold higher.



    This is a standard which we ran three days



after we finished the low BTU gasification



study and surprise, surprise, we got phenols



coming out in our standard.  Well, obviously,



we didn't have any phenols in our standard

-------
                        345
 to begin with,  so  it became very  obvious  to



 us that one of  the problems we would have in



 running nasty samples through this Nafion




 interface is a  carryover or contamination within



 the tube itself.  When subsequent injections



are made, we get a steam distillation effect



 and the phenols come off in the next sample.



 Well, that's not very good.



  So we baked the Nafion tube at 100 degrees centigrade
overnight, put it back in the system, and ran the
standard again.  Now all



 of the phenol materials are gone.  So we



 learned something about the interface; in



 that when one is running dirty samples they



are going to have to clean it up, and you can



 clean it up by  thermally desorbing it and



purging it with an inert gas.  That's all I



have on the interface itself.   I would like



 to talk to you  just a little bit about where



 we are going from here.

-------
                       346
   It's obvious that the loss in sensitivity




of a factor of 20 can be debilitating.  It's



obvious that the contamination problem that



one has when you inject dirty samples onto the



interface is a shortcoming;  but, if one



assumes that the Nafion is a good system



for reducing the relative humidity of a gas



and based on what we have seen that the



Nafion can transmit non-polar halogenated



organics or non-halogenated  organics, non-



water soluble species very well, then the



thought that comes to mind is, why not use



the purging apparatus, purge the volatile



organics from the water, and then rather than



trapping them secondarily on a tenax trap which



is essentially a water-removing system, why



not trap them directly on the end of the GC



column, but remove the water by running this



effluent through the Nafion  tube.



     Hopefully, within the next few months we

-------
                       347
will have some data to show that this technique
works and we can remove the trap part of the
purge and trap and still maintain the sensiti-
vity that one gets using this technique.  If
this works then we can go from there because
there are many other applications that we are
looking at in terms of removing moisture prior
to analyses by GC/MS especially when one is
dealing with capillary columns where even small
amounts of water frozen on the end of the column
using subambient conditions will cause the
analyses to be totally useless.  Thank you
very much.
                MR. TELLIARD:  Any questions;
none?
   Thank you, Bob; and, that does it for
our morning session.  Checkout time  is  1
o'clock.  For those of you who want to go up and
bring your bags down and put them along  the

-------
                       348
back.  If the government people will put



their bags on one side and not mingle with the




industry bags.



   Lunch is next door.  So we will break until




this afternoon.






(WHEREUPON, the lunch recess was taken.)

-------
                       348a - 348h

     [Figures accompanying the preceding presentation; only the
      title of figure 348g, CHLOROFORM RESPONSE CURVE, is legible.]
-------
                               348i
 OBSERVED DETECTION LIMITS FOR NAFION DIRECT AQUEOUS INJECTION

 Reference
  Number    Compound                      Limit of Detection (ug/L)*

    1.      chloromethane                             40
    2.      bromomethane                             120
    3.      dichlorodifluoromethane                  400
    4.      vinyl chloride                           120
    5.      chloroethane                             200
    6.      methylene chloride                       160
    7.      trichlorofluoromethane                   160
    8.      1,1-dichloroethene                       200
    9.      1,1-dichloroethane                       120
   10.      trans-1,2-dichloroethene                 160
   11.      chloroform                               160
   12.      1,2-dichloroethane                      2000
   13.      1,1,1-trichloroethane                    160
   14.      carbon tetrachloride                     200
   15.      bromodichloromethane                    2000
   16.      1,2-dichloropropane                      200
   17.      trans-1,3-dichloropropene                120
   18.      trichloroethene                          160
   19.      dibromochloromethane                     160
   20.      1,1,2-trichloroethane                    160
   21.      cis-1,3-dichloropropene                  120
   22.      benzene                                   80
   23.      2-chloroethylvinyl ether                 N/A
   24.      bromoform                                120
   25.      1,1,2,2-tetrachloroethane                160
   26.      tetrachloroethene                        160
   27.      toluene                                   40
   28.      chlorobenzene                             80
   29.      ethylbenzene                             160

 * Based on 250 uL Injection Volume

-------
                              348J

DIRECT AQUEOUS INJECTION RECOVERY EXPRESSED RELATIVE TO PURGE
AND TRAP DATA FOR THE SAME STANDARDS OF VARYING CONCENTRATION

                                         Direct Aqueous Recovery
 Reference                               Relative to Purge and
  Number    Compound                     Trap (%)        B.P. (°C)

    1.      chloromethane                  360             -24
    2.      bromomethane                    62               4
    3.      dichlorodifluoromethane         --              --
    4.      vinyl chloride                  81             -14
    5.      chloroethane                    45              12
    6.      methylene chloride              --              --
    7.      trichlorofluoromethane          --              --
    8.      1,1-dichloroethene              --              --
    9.      1,1-dichloroethane              39              57
   10.      trans-1,2-dichloroethene       110              47
   11.      chloroform                      53              62
   12.      1,2-dichloroethane              --              --
   13.      1,1,1-trichloroethane           81              74
   14.      carbon tetrachloride            33              77
   15.      bromodichloromethane            --              --
   16.      1,2-dichloropropane             --              --
   17.      trans-1,3-dichloropropene       71             104
   18.      trichloroethene                 38              87
   19.      dibromochloromethane            --              --
   20.      1,1,2-trichloroethane           --              --
   21.      cis-1,3-dichloropropene         67             112
   22.      benzene                        160              80
   23.      2-chloroethylvinyl ether        --              --
   24.      bromoform                       61             150
   25.      1,1,2,2-tetrachloroethane       42             146
   26.      tetrachloroethene               36             121
   27.      toluene                         84             111
   28.      chlorobenzene                   38             132
   29.      ethylbenzene                    64             136

-------
                       349
        AFTERNOON  SESSION






                MR. TELLIARD:  Bob Maxfield is




from Versar.  About two years ago we had some




concern in two particular areas in the mining




industries, where we looked at some comparability




between ICAP and AA.  Versar, i.e., Bob, spent a




lot of time and effort putting together a study




which looked at the comparability, in a small




sense, within the matrix of a mining sample; and,




since then they have done the national validation




study for ICAP and Bob is here today to tell us




a little bit about it.

-------
                        350
   RESULTS  OF THE  U.S. EPA  NATIONAL  VALIDATION
  STUDY OF  THE  INDUCTIVELY COUPLED PLASMA METHOD

          Robert Maxfield, Versar, Inc.
                MR. MAXFIELD:  Good afternoon.

This afternoon I would like to briefly discuss

the Inductively Coupled Plasma Method and the

validation study that Versar is currently con-

ducting on this method for EMSL-Cincinnati.

    The method was originally published on

December 3rd, 1979 in the Federal Register and

has since been revised.  The method that we are

validating is method 200.7 which, as I said, is

a revised method based upon the December 3rd,

1979 Federal Register version.

    The method describes the requirements for ICP

in the analyses of water and wastewater and

details analytical procedures such as the

sample preparation, interference testing,

operating conditions, and quality control

procedures that are required for the analyses

-------
                        351
 of  water  and  waste  by  Inductively  Coupled



 Plasma  Emission  Spectroscopy, or ICP.  The



 objective of  the validation study  is to define



 the precision and accuracy of the  ICP method.



 As we heard yesterday, EMSL has come up with



 a standard validation procedure which they



 have used on  the 600 methods as well as the



 624, 625 GC/MS methods.  This is,  in fact,



 the same sort of validation procedure that is



 being used on the ICP method.



    My objective this afternoon will be to



 give you an idea of the study design and also



 to discuss some preliminary results of the



 study.  The study is not complete at this



 point and a final report is not expected until



 sometime this spring.  So,  therefore, any



comments I have with regard to the data are



 subject to further review and the EPA has



not yet reviewed any data at this point.

-------
                       352
    The overall study design defined by EPA



at the outset is shown on this slide and is



also in a handout that you have in front of



you.  I will be discussing the various points



of the overall design in some detail as I go




along.  This study design uses aspects of



Youden's unit block approach as well as the



ASTM method, Standard Practice for Determination



of Precision and Accuracy.  This approach has



been used, as I said, to validate other methods



and is currently being used by us, again, to



validate the ICP method.



    The first parameter included in the study design is
the elements; 27 were studied.  All of the priority
pollutants are



included with the exception of mercury.  I have



put an asterisk next to the metals which are



the priority pollutants.



    The second aspect of the study design is the water
types; there were six:

-------
                        353
laboratory pure  water, drinking water, surface




water and three  treated effluents from the chemi-



cal manufacturing industry, copper sulphate,




sodium hydrosulphate and chrome pigments manufac-



turing.  These particular effluents were selected



to present an analytical challenge and, indeed,



were rather difficult samples to handle with the



ICP instrumentation.




    The two digestion types that we studied are



termed the hard or total metal digestion, and




the soft or total recoverable metals digestion.



As the names imply, the hard digestion is a more



rigorous procedure requiring a greater degree of



refluxing and a greater degree of evaporation



during the process.  It is a somewhat longer



procedure than is the soft digestion.  These



procedures are similar, although not identical



to the methods that are included for the atomic



absorption procedures in "Methods for Chemical



Analyses of  Water and Waste," the EMSL method book.

-------
                       354
    Another variable that we have in our study




is the sample spikes.  All of the samples were



analyzed without any spike; that would be



the background analyses.  Then, we looked at



three concentration levels, at each concentra-




tion level we had a Youden pair of spikes; that



is, two spikes of similar concentration.  That



would total seven analyses per sample, background,



plus six individual spikes.



    The spike solutions were prepared in sealed




glass ampules.  All of the 27 elements were



included in three individual spiked solutions.



Very specific instructions were provided to the



participating laboratories on how to go about



spiking the water types with the elements



of interest.



    Again, the overall design  included 27 ele-



ments,  six water types, two digestion procedures,



six spike samples, plus the background



analyses and  12 participating  laboratories.

-------
                        355
 That  totals approximately  30,000 data  points


 for 12 participating laboratories.  The parti-


 cipating laboratories  involved are listed on the


 slide, there are  12 of them.  One is EMSL-Cin-


 cinnati.  The other 11 were selected by Versar


 through a selection process whereby we collected bids;
 these bids were evaluated, and the bidders deemed
 responsive were then included in a preli-


minary performance evaluation study where they


 received one sample which was treated  in a manner


 similar to that which would be used in the study


 later on.  They analyzed the sample, provided


 data  to Versar and based upon this data Versar


 selected the 11 participants that would be in-


cluded in the study.
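   [Illustration:  a minimal sketch of where the
   approximately 30,000 figure comes from, multiplying out
   the design factors listed above; the variable names are
   illustrative only.

       elements = 27
       water_types = 6
       digestions = 2
       analyses_per_sample = 7      # background plus six spikes (three Youden pairs)
       laboratories = 12

       data_points = (elements * water_types * digestions
                      * analyses_per_sample * laboratories)
       print(data_points)           # 27,216, i.e., approximately 30,000 data points
   ]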


    The 30,000 data points were then evaluated


using  a software program developed by EMSL,

termed IMVS,  and this data is treated such that


we produce measures of precision and accuracy


for each of the various permutations of water

-------
                       356
type, element, and digestion procedure.  As you



may well imagine, to digest the information




generated from this program is very difficult;



therefore, the summary plots are generated



as an easier way to visualize this vast




amount of data.  The plots that we have generated



summarize precision and accuracy under differ-



ent conditions.  Another plot we use is called



the scatter plot which is also termed a Youden




plot.  These various plots allow one to visualize



the vast amount of data and make some interpreta-



tions and comparisons between water types,  diges-



tion procedures and the like.



    This is an example of a precision plot  for



lab pure water for copper using a hard digestion.



Mean recovery is along the horizontal axis  in



micrograms per liter; and, on the vertical  I



have precision as S or overall precision, and



Sr, single operator precision.  The lower line



represents a linear regression of the individual

-------
                       357
operator precision.  The upper represents the regression



analysis of the overall precision for the



12 laboratories.  As is the case in most of my



plots, the individual laboratory precision,




the single operator precision is better than



the overall laboratory precision.  There are



some 300 of these precision plots.



    This next plot is an accuracy plot and what



we have along the horizontal axis is the true



concentration of the spiked samples and on the



vertical the mean concentration for the 12 labora-



tories.  This slide also represents data for



copper, laboratory pure water and the hard diges-



tion.  There are also 300 of these plots.



    The third type of plot, which I call a scatter plot
or a Youden plot, allows me to look at both



precision and accuracy in one diagram.  This



plot has concentration plotted along the horizontal axis
and along the vertical.  On the horizontal axis we have
one ampule from a Youden

-------
                       358
pair, on the vertical a second spike of the



Youden pair.  The crossed lines in the upper



right-hand portion of the plot are the true



values.  If we were to analyze these vials and



get exactly the true value in the vial the data



point should fall squarely in the center of that



crossed area.  As you can see, the plots are




somewhat scattered about that point.  The Xs I



have on the diagram indicate one laboratory's



data; a "Z" indicates that two laboratories' data



fall on top of one another (indicating).  This



particular data is for chromium in drinking



water for the hard digestion.  If I show the



next slide, I have an ellipse drawn around the same set
of data.  This ellipse lies at a 45-degree angle to the
plot and this is indicative of the larger systematic
error involved in the




analyses relative to the random error.  The



random error is made up of two possibilities;



that is, random error within the laboratory,

-------
                        359
or random error that may be a result of non-



uniform samples.   If the systematic error is



dominant, this elliptical pattern is the pattern that
one will get on a Youden or scatter plot.  This appears
to be the general case at this point in the study; most
of the plots seem to form this sort of elliptical
pattern.
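   [Illustration:  one way to see why a 45-degree ellipse
   points to systematic rather than random error is to work
   with the Youden pairs directly; a minimal sketch using
   hypothetical ampule results, not data from the study.

       import statistics

       # hypothetical (ampule A, ampule B) results from five laboratories, ug/L
       pairs = [(310, 325), (270, 280), (355, 370), (240, 250), (300, 310)]

       sums  = [x + y for x, y in pairs]   # spread along the 45-degree axis:
                                           # between-laboratory (systematic) error
       diffs = [x - y for x, y in pairs]   # spread across the 45-degree axis:
                                           # within-laboratory (random) error

       print(statistics.stdev(sums), statistics.stdev(diffs))
       # a much larger spread in the sums than in the differences stretches
       # the scatter into an ellipse along the 45-degree line
   ]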



    I would now like to go into a few examples



showing you some of the comparisons that we can



make using the precision, accuracy and scatter



type plots.   In my first comparison I have alumi-



num in laboratory pure water on your left and



aluminum in effluent number one on the right.



We are looking at precision; again, mean recovery



for all laboratories on the horizontal axis, and



precision on the vertical axis.  The axes are



identical on both plots.   So,  therefore, the



regression lines are comparable.   It would



appear, then, that the laboratory pure water, as



one might expect, exhibits better precision than

-------
                       360
in the case of the effluent; the effluent being



the more difficult matrix.




    In my next example, we're looking at the same



water types.  Again, aluminum for laboratory



pure water on the left, and aluminum for



effluent number one on the right (indicating).



These are accuracy plots, true concentration



on the horizontal axis and mean recovery for



all laboratories on the vertical axis.  If the



slope of the line approaches 1.0 that would




indicate 100 percent recovery or perfect agree-



ment between the true value of the sample and



the mean observed value by the laboratories.  As



you can see, the laboratory pure water, the



easier solution to analyze, has a slope of .93



approaching 1.0 which would indicate good re-



covery.  In the case of the effluent, .78 indi-



cates somewhat poorer recovery.  This is the



general case one would expect when analyzing a



more difficult sample, that is, poorer recovery and

-------
                       361
poorer precision than would be achieved with




the lab pure water.  Indeed, these plots for



this particular example do point that out.
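   [Illustration:  a minimal sketch of the recovery slope
   read off these accuracy plots, assuming a least-squares
   fit of mean recovered concentration against true spike
   concentration; the numbers are hypothetical, not data
   from the study.

       true_conc      = [25.0, 50.0, 100.0, 150.0, 200.0]   # ug/L spiked
       mean_recovered = [23.0, 46.0, 94.0, 139.0, 187.0]    # ug/L found (mean of labs)

       n = len(true_conc)
       mean_x = sum(true_conc) / n
       mean_y = sum(mean_recovered) / n
       slope = (sum((x - mean_x) * (y - mean_y)
                    for x, y in zip(true_conc, mean_recovered))
                / sum((x - mean_x) ** 2 for x in true_conc))
       print(round(slope, 2))   # about 0.94 here; a slope near 1.0
                                # indicates close to 100 percent recovery
   ]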



    My next comparison is between digestion



types.  Again, we had the hard digestion, a




more rigorous procedure, and the soft digestion,



a less rigorous procedure.  These are precision



plots, mean recovery along the horizontal and



precision along the vertical again.  In this



particular case, the scatter of points and the




linear regressions that result from these points



are inconclusive and I wouldn't like to say too



much about the differences in precision between



the hard and soft digestion.  If we look at the



accuracy plots for the same data, chromium in



effluent number one, hard digestion on the left;



chromium effluent number one,  soft digestion on



the right.  The accuracy of the two methods



appears to be very similar; that is, the recovery



for the hard digestion and the soft digestion

-------
                       362
appear to be about the same.  Now, if the hard



digestion is a more rigorous procedure and we



are having some recovery problems with this



effluent, one would think that the hard diges-



tion might produce better data.   In fact, that



doesn't appear to be the case.  The soft diges-



tion, a simpler or more economical procedure to use,
appears to be producing, for this particular example,
data with similar recovery.



    In my last example I have a pair of




Youden plots.  Again, for chromium on the left



in the lab pure water, the control; and, chromium



for effluent number one on the right.   These are



scatter plots, and the scatter of the points is



indicative of the precision with which these



laboratories were able to analyze the  sample.



Note the obviously better precision that one has



with the laboratory pure water.   The scatter is



much tighter; the elliptical pattern again is



there in both cases.   It does appear,  however,

-------
                       363
that the chromium data for the effluent is



skewed toward the lower left-hand quadrant of



the Youden plot.  This would be indicative of



low recovery, poorer recovery than in the case



of the laboratory pure water.   If the data



points were skewed toward the  upper right



quadrant, this would indicate  high recovery;



and, again, if the pattern were not elliptical



but more circular in nature one would expect



that the random error is more  dominant than




the systematic error or, at least, the errors



are somewhat equivalent.



    In conclusion I would like to say that the



ICP Validation Program is ongoing.  The data



has not been totally analyzed  at this point.



We are in the process of analyzing the data



and the report is not due until sometime in



the spring.  What this report  should do is,



allow us to quantitate the precision and



accuracy for the Inductively Coupled Plasma

-------
                       364
Method under a variety of conditions.  Specifi-



cally with six different water types, realizing



that that is not the universe, 27 elements and



the two digestion procedures that were used



in the study.  I thank you for your attention



and if there are any questions I would be more than
happy to try and answer them.

-------
                      364a
                                                           S1
                 VALIDATION
                  of ICP for
                 27 ELEMENTS
                      in
              WATER and WASTES
                METHOD 200.7
                 SPONSORED BY:
ENVIRONMENTAL MONITORING AND SUPPORT LABORATORY
      U.S. ENVIRONMENTAL PROTECTION AGENCY
                CINCINNATI, OHIO

-------
     364b



                                     S2
ELEMENTS
Al
Sb*
As*
Ba
B
Be*
Cd*
Ca
Cr*
Co
Cu*
Fe
Pb*
Li
Mg
Mn
Mo
Ni*
K
Si
Ag*
Se*
Na
Sr
Tl*
V
Zn*

-------
                            364c
                   WATER TYPES
                          1.  LAB PURE WATER
                          2.  DRINKING WATER
                           3.  SURFACE WATER
                                                           S3
   TREATED EFFLUENTS
         FROM
CHEMICAL MANUFACTURING
       INDUSTRY
4. COPPER SULFATE
5. SODIUM HYDROSULFATE
6. CHROME PIGMENTS

-------
               364d



                                                S4
      DIGESTION TYPES
      HARD DIGESTION




      "TOTAL METALS"









      SOFT DIGESTION




"TOTAL RECOVERABLE METALS"

-------
               364e
           SAMPLE SPIKES
           BACKGROUND
                                                 S5
CONCENTRATION LEVEL 1
YOUDEN
PAIR OF
SPIKES
SPIKE 1
SPIKE 2
CONCENTRATION LEVEL 2
YOUDEN
PAIR OF
SPIKES
SPIKE 3
SPIKE 4
CONCENTRATION LEVEL 3
YOUDEN
PAIR OF
SPIKES
SPIKE 5
SPIKE 6

-------
              364f
                                               S6
           OVERALL DESIGN
27  ELEMENTS
 6  WATER TYPES
 2  DIGESTION PROCEDURES
 6  SPIKED SAMPLES + BACKGROUND SAMPLE
12  PARTICIPATING LABS
TOTAL OF - 30,000 DATA POINTS

-------
                      364g




                                                        S7
           PARTICIPATING LABORATORIES









        WEYERHAUSER TECHNOLOGY CENTER




              HARRIS LABORATORIES




                    RALTECH




        MONSANTO RESEARCH CORPORATION




                   ANALYTICS




                     ERCO




         VETTER RESEARCH INCORPORATED




         BATTELLE COLUMBUS LABORATORY




               JOHNSON CONTROLS




              RADIAN CORPORATION




            GCA TECHNOLOGY DIVISION




ENVIRONMENTAL MONITORING AND SUPPORT LABORATORY

-------
                      364h - 364p

     [Slides S8 through S16; legible titles and axis labels:
        S8  (364h)  COPPER - LAB PURE, HARD DIGESTION; MEAN RECOVERY
        S9  (364i)  COPPER - LAB PURE, HARD DIGESTION; CONCENTRATION ug/l
        S10 (364j)  CHROMIUM - DRINKING WATER, HARD DIGESTION; AMPUL 5 ug/l
        S11 (364k)  CHROMIUM - DRINKING WATER, HARD DIGESTION; AMPUL 5 ug/l
        S12 (364l)  PRECISION AS S OR SR
        S13 (364m)  MEAN RECOVERY ug/l
        S14 (364n)  PRECISION AS S OR SR
        S15 (364o)  MEAN RECOVERY ug/l
        S16 (364p)  AMPUL 4 ug/l]
-------
                       365
              QUESTIONS AND ANSWERS

                FROM THE AUDIENCE:  I have one
for you.  It's my understanding and recollection,
and Bill you correct me if I am wrong, that the
original ICP, Effluent Guidelines Study on Mining
waste was conducted on field samples,  spiked and
shipped.
                MR. MAXFIELD:   That is correct.
                MR. TELLIARD:   That's right.
                FROM AUDIENCE:   I notice that
this study was conducted on ampules split and
received and diluted.
                MR. MAXFIELD:  In fact, it's a
little bit more complicated than that.  Could I
explain?
                FROM AUDIENCE:  Well, then, my
question is, and maybe you can cover this in your
explanation, too:  did you evaluate the
differences in errors that are introduced
by those two processes?

-------
                       366
                MR. MAXFIELD:  The answer to
that question is no.  In fact, what was done
is, effluent samples were collected by Versar
and tested, split and sent to the participating
laboratories.  Spiking solutions for all six
water types were prepared by Versar and sent
to all participating laboratories.  The three
other water types, the laboratory pure water,
surface water and the drinking water were, in
fact, collected at each of the participating
laboratories in the study.  So they are not the
same waters.
                MR. TELLIARD:  The industrial
samples, were those treated or untreated?
                MR. MAXFIELD:  Those were treated
wastes.
                MR. PRESCOTT:  I am Bill
Prescott, American Cyanamid Company.  I have a
question about the spiking solutions.  You
obviously had Youden pairs at each level; was that
correct?

-------
                       367
                MR. MAXFIELD:  That is correct.
                MR. PRESCOTT:  This implied...I
guess I'm having difficulty saying what I want
to say;  27 metals, the two Youden pairs that were
high for one metal were the same spikes for all
27 metals?
                MR. MAXFIELD:  Do you mean the
same spiked concentrations?
                MR. PRESCOTT:  In spike
concentration.
                MR. MAXFIELD:  No.
                MR. PRESCOTT:  Let's say you
have got vials A, B, C, D and E.  And vials A
and B for aluminum were the two low
concentrations.
                MR. MAXFIELD:  That's right.
                MR. PRESCOTT:  Were those vials
also the low concentrations for the other 26
metals?
                MR. MAXFIELD:  Not necessarily.

-------
                       368
There was some mixing involved.  For some metals
it would not have been the same.
                MR. PRESCOTT:  Thank you.
                MR. MAXFIELD:  In fact, there were
more than six spiking solutions because of the
various matrices involved; we had some effluent
that had very high background concentrations for
many of the metals.  So, therefore, if we took
something that would be an effective spike in,
say, drinking water and attempted to put that into
an effluent it would not be a reasonable spike
level.  There were, in fact, ten sets of spiking
solutions; or, ten spiking solutions, five sets.
Five sets of Youden pairs.
                MR. TONKIN:  Dave Tonkin, Centec.
I missed the beginning of your talk so maybe you
already addressed this, but were all of the
instruments used in the study simultaneous or
were there any sequential?
                MR. MAXFIELD:  There were 11

-------
                       369
direct readers and one sequential device.
                MR. TONKIN:  Is there any
conclusion about the precision and accuracy of one
versus the other at this point?
                MR. MAXFIELD:  There is none
at this point, and I doubt seriously whether we
will be able to draw any conclusion with regard
to direct reader versus sequential device with
only one sequential device included in the study.
                MR. TONKIN:  Would you anticipate
a need for this in the future?  It seems like the
instrumentation industry, in terms of ICAP, is
going toward the sequential.
                MR. MAXFIELD:  It would seem like
a very reasonable thing to do.  The problem I see
with that is the sequential devices operate very
differently, and the operator of the sequential
device can operate his particular device in so
many different ways, using so many different
lines and different procedures for background
correction, et cetera.

-------
                       370
    Any other questions?
                MR. MEDZ:  In the regulations
or in the write-up of the procedure, are there
going to be any changes to reflect that fact,
that you have more latitude with sequential
instruments in choosing your background correction
or moving to another line when there are
interferences?
                MR. MAXFIELD:  The method as it
is currently written, I don't believe, addresses
the sequential device to any great degree.  In
fact, the lines are not specified at this point.
There are some lines that are referred to in
the method, but lines are not specified for
individual elements; at least that's my
understanding at this point.
                MR. TELLIARD:  Thank you, Bob.
When did you say that report was going to be?
                MR. MAXFIELD:  The spring.
                MR. TELLIARD:  Direct draft,
interim draft; you and Bob Medz, I'm sorry, Bob.

-------
                       371
                MR. TELLIARD:  Our next speaker
is from TRW and Ray is going to talk about
precision.  I won't address the rest of his title
because bias is in the eye of the beholder.

-------
                        372
  A SURVEY OF PRECISION AND BIAS DATA FOR METHODS
    OF ANALYSIS FOR PRIORITY POLLUTANT ELEMENTS

            Ray F. Maddalone, TRW, Inc.

                 MR. MADDALONE:  Having sat through
a few days' worth of GC/Mass Spec and being a
person who is more attuned to inorganic analysis,
I'm going to try to prove that there are other
elements than carbon, hydrogen, oxygen, chlorine,
and fluorine.  I am going to talk about the
other parts of the periodic table, in particular
the 13 priority pollutant metals.
    What we have been listening to is what has
been going on at the forefront of technology.
What TRW has tried to do in a study for the
Electric Power Research Institute (EPRI) is to
develop a picture of what the people in the
trenches are actually doing and what they are
capable of doing.  What we have found in this
study is that the analysts in the field are not

-------
                        373
performing as well as the people on the forefront
of technology expect them to.
    Before I get into the actual presentation,
I want to give you a brief outline of the program
that TRW has with the Electric Power Research
Institute.  It is RP1851-1 and the EPRI program
manager is Winston Chow.  The project consists
of four primary tasks.  The first task is one on
data base development.  In this task, we took
data from a number of sources, in particular 100
of the most recent NPDES 2C permit forms, which
were coded and then put into our computer system
at TRW.  In addition, all of EPA/EGU's data and
information from open sources were included.  All
of this data was then computerized, statistically
evaluated for outliers, and used to calculate
the aqueous discharge concentrations for the
steam electric power industry.  We wrote a data
base report which is now in the hands of the
Project Manager and should be published this spring.

-------
                       374
    The second task, which is the main focus of
the program, concerns the review of the sampling
and analysis methods.  This task had two major
components, one of which was a precision and
bias data compilation effort which I will talk
about today.  The second subtask is the literature
review effort, which consists of reviewing
the chemical literature for the last ten years
with the intent of identifying interferences and
finding solutions for the problems that exist
with the NPDES approved methods for priority
pollutant metal analysis.
    The third task is a small effort to plan
for Phase II, which we believe will be a
validation study of the methods used for NPDES
priority pollutant metal analysis.  The fourth task
was a workshop.  At this workshop utility chemists
came to Los Angeles for formal presentations and
then broke up into working groups to discuss
sampling and analysis problems related to the

-------
                       375
utility industry.  There will be a proceedings
document from the workshop containing the formal
presentations and the consensus R&D ideas
that were recommended by the utility chemists.
    Today I am going to discuss the findings from
two major sources of precision data on the priority
pollutant metals analysis methods.  The first
source was the data tape from the DMR-QA-I study,
which was obtained through the good offices of
Bob Medz and Wayne Gueder in Washington, and
John Winters and Paul Britton of EMSL-Cincinnati.
DMR-QA stands for Discharge Monitoring Report,
Quality Assurance program.  The second source, or
rather sources, of precision and bias data was
compiled from the validation studies that we
could find in the open or governmental literature.
    The DMR-QA study we evaluated was conducted
in 1980 and consisted of distilled water ampules
containing 26 parameters, including 10 of the

-------
                       376
13 priority pollutant metals.  There were two
concentration levels for each parameter, which
varied depending on the element and parameter.
For the sake of this presentation I will simply
refer to them by their code names:  red and
white.  The data tape obtained from EMSL was
coded in a manner which permitted us to make
various data evaluations.  For example, we
could break out the EPA/State results and compare
them to the Permittee laboratory results.
The data tape contained results from all the
NPDES Permittees responding, so it wasn't
specific to the utility industry.  I want to define
two words.  When I say method, I'm referring to
a generic title such as Graphite Furnace Atomic
Absorption Spectroscopy (GFAAS), ICP, or Flame
Atomic Absorption Spectroscopy (Flame AAS).  When
I mention procedure, I'm referring to the protocol,
such as ASTM or Standard Methods, that was used to
perform the GFAAS or Flame AAS analyses.

-------
                        377
    The DMR-QA data reduction was done with
software developed by TRW, using our CDC computer
system.  Without going into great detail,
the first steps in the data reduction effort
consisted of an outlier test.  At the suggestion
of Paul Britton, we used a screening test to get
rid of the decimal point errors or obvious
recording errors.  We did that by excluding any
data point that was a factor of 5 higher or
lower than the true value.  The data that passed
through this initial screening test was then
tested with the ASTM D-2777-77 test (a one percent
double tail test).  The data that failed either
test were omitted from the final compilation.
We calculated the mean, the standard deviation,
and the relative standard deviation.  We also
calculated biases and differences.  Biases
being the mean of the EPA/State or Permittee
results as compared to the true value.  By
differences, I am referring to the EPA/State mean compared

-------
                       378
to the Permittee mean.  All of this was placed
with other details on a single page format for
each parameter.  If you are interested, I have
a copy of the report here in draft form and
I can show you the type of format that was
output.  Incidentally, all the data for the 26
parameters were reduced.
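
    A minimal sketch of this data reduction arithmetic is shown below.
It is an illustration only, not TRW's software: it applies the
factor-of-5 screening test and then computes the mean, standard
deviation, relative standard deviation, and bias against the true
value.  The ASTM D-2777-77 one percent double-tail outlier test is
only stood in for by a simple studentized-deviate check, since the
critical values are not given in the talk; the input data are
hypothetical.

    import statistics

    def reduce_dmr_qa(results, true_value, t_crit=3.0):
        """Screen and summarize one parameter from a DMR-QA result set.

        results    -- reported concentrations from all responding labs
        true_value -- spiked (true) concentration of the ampule
        t_crit     -- critical studentized deviate used as a stand-in
                      for the ASTM D-2777-77 one percent double-tail test
        """
        # Screening test: drop decimal-point and gross recording errors,
        # i.e. anything more than a factor of 5 above or below the true value.
        screened = [x for x in results
                    if true_value / 5 <= x <= true_value * 5]

        # Stand-in outlier test (the actual study used ASTM D-2777-77).
        mean = statistics.mean(screened)
        sd = statistics.stdev(screened)
        kept = [x for x in screened if abs(x - mean) <= t_crit * sd]

        mean = statistics.mean(kept)
        sd = statistics.stdev(kept)
        return {
            "n": len(kept),
            "mean": mean,
            "sd": sd,
            "rsd_percent": 100.0 * sd / mean,
            "bias": mean - true_value,   # mean compared to the true value
        }

    # Example with hypothetical zinc results (ug/l), true value 150 ug/l
    print(reduce_dmr_qa([148, 152, 160, 141, 155, 890, 149], 150.0))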



    This whole exercise was completed for the
individual method procedures.  The procedure
data was also compiled so that all of the
procedures for a given method were placed into one
data set.  We did that by taking the equivalent,
alternate procedures listed under 40 CFR 136.
    First, some general observations about the
DMR-QA data set.  The DMR-QA test concentrations
were compared to the non-cooling water discharge
(NCWD) concentrations calculated from the Task 1
data base.  The non-cooling water discharge
streams are all the power plant discharge
streams except the cooling water streams.

-------
                       379
Had we added the cooling water streams and computed
the average, it would have given us an
arbitrarily low number.  So we sequestered the
once-through cooling water data into a separate
group.  The NCWD concentrations represent the
nominal concentrations a plant chemist would
monitor.
    We found that the red sample set was
approximately five times higher than the non-cooling
water discharge concentration, and the white set
was generally higher by more than a factor of 15.
As a result, we are not sure that the precision data
that we saw in the DMR-QA data set is representative
of the actual samples that the utility
industry has to monitor.
    The methods that were used by the EPA/State
laboratories and the Permittees were primarily
the same.  The biggest difference was that the
Permittees used wet chemical analyses (WCA)
somewhere between two and nine percent of the time;

-------
                       380
whereas, the EPA/State laboratories never used
it at all.  The biggest difference in atomic
absorption usage was for the Graphite Furnace
AAS analyses of arsenic and selenium.  As you
can see by the data in the slide, the EPA/State
laboratories primarily used GFAAS for those two
elements; whereas, the Permittees used a
combination of gaseous hydride, wet chemical
methods, and GFAAS.
    The procedure selection was also very
interesting.  The EPA/State laboratories, as you would
expect, used the "Methods of Chemical Analysis
for Water and Wastewaters" (MCAW) most of the
time.  The Permittees only used it 57 percent of
the time and, very surprisingly, at least as far
as I was concerned, they used "Standard
Methods" as their second choice.  I think it's
very important that we, as a group, try to get
the message across to the users that the ASTM

-------
                       381
procedures are far better written than "Standard
Methods" or the MCAW.  There are no precision
and bias statements in "Standard Methods";
whereas, the ASTM methods have precision and
bias statements for each procedure.  Also, each
ASTM procedure is written from start to finish
for each metal and not grouped under a general
method as they are in "Standard Methods".
    One final general observation about the
DMR-QA data is that the EPA/State data set had
far fewer outliers than the Permittees'.  Many
of the data points were removed by the initial
screening test.
    The next two slides are histograms summarizing
the relative standard deviation data for
flame and graphite furnace atomic absorption.
The relative standard deviation is plotted on the
X axis with the number of elements falling in a
given range of relative standard deviation plotted

-------
                       382
on the Y axis.  The top two histograms are for
the Permittees' red and white concentration
test sets.  The bottom two are for the EPA/State
data for the red and white test concentrations.
So if you look at it in the vertical sense, you
can compare the two histograms for data
distributions.  For Flame AAS both distributions are
similar.  The EPA/State RSD's tend to cluster in
the 10 to 20 percent bracket.  The Permittee
Flame AAS data is slightly higher compared to
the EPA/State laboratory data.  In particular,
there are three bad elements (As, Se, Hg),
probably because they were determined by the
Permittees using gaseous hydride absorption.
    The next slide shows the same type of
histograms for GFAAS.  There is a much bigger
difference in RSD's between the two different
organizations when you look at the GFAAS data.  In this
case, you can see that the Permittees had a
much wider spread in their relative standard

-------
                       383
deviation versus the EPA/State, which was
clustered around or less than plus or minus 20
percent.  Clearly, there is a difference in how
these two groups are able to apply the
methodology.
    In addition to the DMR-QA data, we collated
precision data from a number of sources.  This
next slide shows a list of the documents that
we collected and reviewed.  During the course
of this review we found that the 1975 AOAC Manual
and "Standard Methods" used the same precision
and bias data.  The source for this precision
and bias data was a 1968 Public Health Service
study.  This fact reinforces my concern about
using "Standard Methods" as a procedure manual.
The best source for precision and bias data is
the ASTM, Part 31, Water.  In many cases we were
able to obtain the original research reports
used in ASTM, Part 31, Water.
-------
                        384
     We also collated data from Bill Telliard's
study on Mining Effluents and the Utility Water
Act Group (UWAG) inorganic analysis round robin
study.  These are the only two studies that used
samples that were collected, spiked, and split
in the field, and then sent to the participants.
     Even with all of these studies that were
performed, we found a lack of high quality data
for matrices that might be considered challenging.
If you look at this next slide, which shows the
matrices that were tested and the various methods
that were used, you can see the limited extent of
the validation data.  If you look upward from the
Ohio River water, you will see that there is a
lot of data collected for standards in either
distilled, tap, or surface waters.  Whereas as you
go down from there, you find the same elements
are being done, and only a few, maybe six or
seven, of the priority pollutant metals have
been tested in matrices that are challenging

-------
                         385
or representative of common SIC matrices.
    Now, what do we do when we collect this
precision data?  The idea was to have precision
data at three concentrations, so we could
calculate a regression equation of the single operator
and overall standard deviation obtained at the
mean concentration tested.  With these equations,
we could go back and calculate what the relative
standard deviation is at the specific non-cooling
water discharge concentration.  We could also
calculate the limit of detection using the
intercept of this equation.  Finally, we could use
this equation to calculate the limit of
quantitation using a specific relative standard deviation.
    Now, the idea with using the regression
equation to calculate the limit of detection (LOD)
is based on the idea that if you have a plot of
standard deviation versus the test concentration,
you then can extrapolate to the standard deviation
at zero concentration.  Some factor times the

-------
                         386
standard deviation at zero concentration is
defined as the LOD of a method.  In most cases
it is obtained by taking a distilled water blank
or your blank reagent sample and analyzing it a
number of times.  In this case, we are using
the actual data generated from these validation
studies to extrapolate to the standard deviation
at zero concentration.  LODs calculated in this
manner are fairly conservative (i.e., low)
estimates since, as you approach the limit of
detection, the absolute standard deviation tends to
reach a limiting value and not linearly decrease.
    This next slide shows the approach that we
took in calculating the limit of detection.
Generally, you can define the limit of detection
as the minimum concentration that produces
a specific relative standard deviation.  This
is pretty much what the ACS guidelines are and
the general approach that Lloyd Currie took in
his article on limits of detection and quantitation.

-------
                        387
Based on the RSDs calculated at NCWD
concentrations, the relative standard deviation for
routine analyses should be near plus or minus 20
percent.  As this slide shows, you can use the
linear regression equation to calculate a
concentration that would give you a relative
standard deviation of 0.2 (20 percent relative
standard deviation).  Taking this approach, we
calculated both the extrapolated three sigma limit
of detection and the calculated limit of
quantitation.
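
    As a sketch of this calculation (an illustration with made-up
coefficients, not the EPRI software), the regression of standard
deviation on mean concentration, SD = m*X + b, gives an LOD of three
times the standard deviation extrapolated to zero concentration (3b)
and an LOQ equal to the concentration at which the predicted relative
standard deviation falls to 20 percent.

    def lod_loq_from_regression(concentrations, std_devs, rsd_target=0.20):
        """Fit SD = m*X + b by least squares and derive LOD and LOQ.

        LOD: three times the standard deviation extrapolated to zero
             concentration (3 * intercept).
        LOQ: the concentration at which the predicted relative standard
             deviation equals rsd_target (solve SD(X) / X = rsd_target).
        """
        n = len(concentrations)
        mean_x = sum(concentrations) / n
        mean_y = sum(std_devs) / n
        sxx = sum((x - mean_x) ** 2 for x in concentrations)
        sxy = sum((x - mean_x) * (y - mean_y)
                  for x, y in zip(concentrations, std_devs))
        m = sxy / sxx                 # slope of SD versus concentration
        b = mean_y - m * mean_x       # SD extrapolated to zero concentration

        lod = 3.0 * b                 # "three sigma" detection limit
        # RSD(X) = (m*X + b) / X = rsd_target  =>  X = b / (rsd_target - m)
        loq = b / (rsd_target - m) if rsd_target > m else float("inf")
        return lod, loq

    # Hypothetical overall-precision data for one element (ug/l)
    conc = [10.0, 50.0, 200.0]
    sd = [2.5, 6.0, 22.0]
    print(lod_loq_from_regression(conc, sd))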



    Before I show you a table which compares those
numbers to the NCWD concentrations, I want to
give you an overview of the precision and bias
data that was obtained from these validation
studies.  We generally found that at the
non-cooling water discharge concentrations
Flame AAS produced poor precision.  This was
not totally unexpected because NCWD concentrations
are generally below its limit of detection.

-------
                       388
We also couldn't see any trends in precision
based on matrix effects, but this correlation
may have been obscured because we didn't have
the exact composition of a matrix to rank the
matrices.  Future validation studies should have
a spark source mass spectroscopic analysis and an
anion analysis of the matrix, so you have some
idea of why one matrix's precision is different
from another.  There is another problem:  there
was no single study that covered all of
the matrices, so we were trying to compare
different groups of people doing different matrices.
    The biggest single finding was that the
overall precision was two times higher than the
single operator precision for all the methods that
we studied (Flame AAS, GFAAS, and ICP).  This
has a major impact when you calculate the limit
of detection.  It will make a factor of two
difference when you use either the overall
precision or the single operator precision.

-------
                       389
    To illustrate calculation of precision based
on limits of detection, I took the data presented
by George Stanko using the standard addition
technique with Method 624 and compared it to the
detection limits listed in Method 1624.  I assume
that in Method 1624 two sigma detection limits
are reported.  I may be wrong, but just for the
purposes of discussion let's assume that is
the case.  If an LOD was listed as 10, we converted
that to a three sigma value, so it would be 15.  If
you compare that to a three sigma detection limit
calculated from the data that George presented,
you find that in some cases you have reasonably
good agreement, but in other cases you have
very poor agreement.  This is the same sort
of thing that we found with the trace metal data.
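
    The conversion he describes is just a rescaling of the reported
sigma multiple; a trivial illustration, with hypothetical values, is:

    def rescale_detection_limit(lod, sigma_reported=2, sigma_target=3):
        """Rescale a detection limit quoted at one sigma multiple to another."""
        return lod * sigma_target / sigma_reported

    # A 2-sigma LOD reported as 10 becomes 15 on a 3-sigma basis
    print(rescale_detection_limit(10.0))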



    On this slide there is a listing of the
non-cooling water discharge concentrations.  I set
a criterion that a method could detect or quantitate
an element if its limit of detection or

-------
                       390
limit of quantitation, based on overall precision,
was below the non-cooling water discharge
concentration.  This slide shows that ICP could
detect no more than six of the elements at their
non-cooling water discharge concentration.  GFAAS
did a bit better.  It was able to detect 10
of the 13, but unfortunately five of the data
points were actually based on single operator
precision.
    When we get to quantitating, that is, being
able to measure the element at plus or minus 20
percent relative standard deviation, we found
that ICP can quantitate only a few elements at
NCWD concentrations.  Only two of the metals
(cadmium and zinc) could be quantitated at the
non-cooling water discharge concentration.
GFAAS dropped down to five elements out of
the 13, but four of those elements are based
on single operator precision.  As we mentioned
earlier, the single operator precision is approximately

-------
                       391
a factor of two less than the overall
precision.  As a result, we are not exactly sure
whether the data that we have really does
indicate that the graphite furnace can be used to
detect priority pollutant metals at non-cooling
water discharge concentrations.
    So what we have done in this EPRI program
is to establish what the state of the art is for
the methods being used to analyze the 13 priority
pollutant metals.  What we expect to do in the
future is to extend the study to other parameters.
In fact, we have a recent add-on to the project
to do six more conventional and non-conventional
parameters.  We also hope to validate the
methods used to monitor these metals using power
plant discharge streams.
    If you have any questions, I would be happy
to answer them at this time.

-------
                                     391a - 391n
[Speaker's slides 391a through 391n: program overview; DMR-QA data reduction scheme (factor-of-5 screening test and ASTM D-2777-77 one percent double tail outlier test); relative standard deviation histograms for Flame AAS and GFAAS, Permittee versus EPA/State, red and white sample sets; sources of precision and bias data; regression of standard deviation on mean concentration used to calculate LOD and LOQ; matrices and methods covered by available validation data; and detection/quantitation summary for the priority pollutant elements at NCWD concentrations.  Graphics not reproducible in transcription.]
-------
                       392
           QUESTION AND ANSWER SESSION

                MR. RICE:  You might point out
the number of participants and the composition
in the DMR-QA1.
                MR. MADDALONE:  Generally, for
the metals the number was about 200.  I think
the maximum number that I saw was on the order
of 200 respondents reporting for a given metal.
Most metals had on the order of 40 or 50 people
reporting.  There is a correlation.  Since the
DMR-QA study allowed them to monitor all or none
of the parameters that were in the vials, depending
on what elements are required by their permits,
the number in the DMR-QA study related to the
number of people required to monitor a pollutant.
                MR. MEDZ:  Ray, in the 1980 DMR-QA
program we only had two states participating.

-------
                       393
We had one state that had the primacy, that was
Minnesota, and we had one state that did not have
the primacy, that was New Jersey.  The study on
which we based the number two study, which is
completed now, had almost a full 8,000 dischargers.
                MR. MADDALONE:  We would really
like to get a copy of that information for our
program.
                MR. RICE:  Bob, I had a question.
As far as I was concerned, we were led to believe
that the DMR-QA tape that was made available to
EPRI covered the round of the five or six major
SIC category industries and that this represented
all responses.  It wasn't just a two-state affair.
                MR. MEDZ:  I remember, in 1980
we had a pilot program.
                MR. RICE:  Well, there was a
pilot program prior to this, as far as I know,
but that was New Jersey; wasn't it?

-------
                       394
                MR. MADDALONE:  One problem with
the DMR-QA1 study was the number of procedure
codes available.  There was a large number of
procedures used that weren't expected
to be used.  There was a code "99" that lumped
all those responses together.  We would like to
recover that information.  I understand there are
more codes in the second study based on the
results from the first.
                MR. STANKO:  George Stanko, Shell
Development.  I think if you will check you will
find that there was a DMR-QA Study 1 with
approximately 7,000 permit holders and a DMR-QA Study
2 with approximately 7,000 to 8,000.  There was
also a pilot program before Study 1 or Study 2.
So there should have been a lot more data for
Study 1 than what you show.
                MR. MADDALONE:  Well, it depends
on the parameter.  If you look at the pH, we had
something like 2,000 respondents; but, then, in

-------
                       395
the metals you would end up with 40, 50, to 200
people responding on that particular element.
                MR. STANKO:  I would have thought
for zinc you would have had a lot more than what
you did.
                MR. MADDALONE:  I'll have to
look through...I don't have that data with me.
                MR. RICE:  I do; you will see it
on the slide I have, George.
                MR. STANKO:  Thank you.
                MR. MADDALONE:  Bob, one question
about that.  The tape that we received, was that
the pilot study?
                MR. MEDZ:  Well, when you said
that you only had 40 or 50 respondents...
                MR. RICE:  No, Bob, I'm almost
answering for Ray, and what George says is
true.  Our understanding was that this was the
first major round, it wasn't the pilot study,
that on pH and total suspended solids and common

-------
                       396
parameters such as that, there were thousands of
responses on that data tape.  The numbers that
I will show on the slide I have are for those
who had to run these elements.
                MR. TELLIARD:  Our next speaker
is Jim Rice.  Jim is a consultant
to the utility industry.  He and I have been
jousting over monitoring questions for the last
seven years, and today he would like to talk a
little bit about compliance monitoring and the
poisons being discharged from public utilities.

-------
                        397
     COMPLIANCE MONITORING METHODS FOR PRIORITY
     POLLUTANT ELEMENTS IN THE DISCHARGES FROM
           STEAM ELECTRIC POWER PLANTS

                James K. Rice, PE
                Consulting Engineer
              Olney, Maryland  20832

                     ABSTRACT

     The data presented in the report by the Electric
Power Research Institute, "Aqueous Discharges from
Steam Electric Power Plants:  Analytical Methods
Precision and Bias," November 1982, clearly supports
a concern of the Steam Electric Power Generating
Industry that insufficient interlaboratory precision
data exists for the compliance monitoring methods
for priority pollutant elements associated with power
plant discharges.  In addition, the potential for
greatly lowered NPDES permit limitations based on
water quality standards emphasizes the need for
validation at these concentrations in effluent matrices
as well as in fresh, estuarine and ocean water.
     In the absence of a national program for consensus
validation of environmental monitoring methods

-------
                       398
at appropriate concentrations and in representative
matrices, the Electric Power Research Institute has
been urged to undertake in cooperation with ASTM the
task of validating existing and future EPA methods
relevant to the power industry's discharges.


POLLUTANT PARAMETERS OF CONCERN


     Pollutants derived from the fuel being burned,
chemicals added for cleaning or for corrosion and
deposit control, as well as pollutants present in
the intake may appear in the process discharges from
the steam electric power industry.  The average
concentration of the priority pollutant elements in
such discharges is presented in a recent study of one
hundred steam electric power plant NPDES Application
Form 2C's by the Electric Power Research Institute
(1).  In a parallel study, EPRI determined the available
precision and bias data for the approved analytical
methods for the priority pollutant elements
(2)(3).  This latter study included an analysis of
the results of the performance sample program

-------
                       399
conducted by EPA in 1980 under Sec. 308 of the Clean
Water Act (DMR/QA-1).
     Table I summarizes the data on the major
pollutants in coal-fired power plant process discharges.
The parameters are shown ranked by mass discharge
rate normalized by plant name plate capacity.  It is
important to note that the priority pollutant
elements are present in the lowest two of the four
orders of magnitude spanned by the mass discharges
of all of the pollutants.  The average concentration
of the priority pollutant elements in coal-fired
plant discharges is used herein as the basis for
examining the adequacy of the compliance monitoring
methods approved for these elements.


REQUIREMENTS FOR COMPLIANCE MONITORING


     As per Sec. 304(h) of the Clean Water Act, EPA
has published analytical methods for use by permittees
to determine whether their aqueous discharges comply
with the terms of their NPDES permit.  Any determination
of the compliance of that result with the

-------
                       400
limitations in the permit must take into account the
precision of the method employed.  Since the result
is always subject to verification by the regulatory
agency, a minimum of two laboratories are involved,
expressly or implicitly, in making any compliance
determination.  Thus, determinations of compliance
with a permit limitation can be made properly only
in terms of the interlaboratory precision of the
method employed on the matrix in question.


AVAILABILITY AND QUALITY OF PRECISION DATA


     In view of the foregoing, the methods for the
priority pollutant elements, as contained in 40 CFR
Part 136, were examined by the EPRI study (2) to
determine both the single operator and the
interlaboratory precision.
     It is important to note that very little of the
interlaboratory precision data was found by the study
to have been collected in a manner that reflected
the errors introduced by the sample container, by
preservation, shipping and storage.  ASTM Committee

-------
                       401
D-19 has recently adopted a definition for a
multilaboratory operational precision that encompasses
all of these errors as well as the more common
within-the-laboratory errors.
     An additional point revealed by the study is
that the very largest portion of the precision data
available on the Part 136 methods was developed on
reagent water, or on fresh natural water, by employing
vials of concentrated standards that were diluted
by the recipient.  Only a few studies determined
precision data on specified effluent water samples,
none separately on estuarine or seawater.
     For the average concentrations in power plant
process discharges Table II shows the relative
standard deviation (RSD) as reported in the different
flame AAS procedures approved by EPA in Part 136.
It should be pointed out that the RSD's for the
ASTM procedures may not be applicable to the
concentrations shown since, except for Se and As, they
were developed over a concentration range much higher
than those in Table II.  Note that the mining

-------
                       402
industry's effluents are the only ones for which the
priority pollutant elements by flame AAS are
specifically validated.  In view of the many important and
varied matrices for which these methods are approved,
the amount of precision data available is clearly
inadequate.
     Table III shows data similar to that in Table II
except for furnace AAS.  Here there is even less
interlaboratory precision data than for flame AAS.
Even the 1979 MCAW does not contain any interlaboratory
precision statements for furnace AAS.  The only
study available, with one exception, on the furnace
AAS procedures for As, Cr, Cu, Ni and Zn as they
appear in the 1979 MCAW was performed by the power
industry on one ash pond effluent and on one river
water (4).


APPROVED ALTERNATIVE METHODS


     EPA faces numerous problems with validating the
Part 136 methods.  One underlying problem stems from
1973 when EPA accepted, a priori, that the differently

-------
                       403
written procedures for a given method, such as flame
AAS, for a given element as they appeared in several
widely employed standards publications produced
equivalent results.  That is, the same level of
confidence could be placed in the results produced by
any of these several procedures when used by
qualified operators.  Subsequent experience with the
methods concerned shows in hindsight that this
conclusion was incorrect.  The clarity, the preciseness
and the detail with which a method is written
greatly influences the manner in which that method
is carried out by different skilled, or unskilled,
operators.  Thus, the skill and care with which a
method is written greatly influences the closeness
with which one laboratory can verify the results of
another (one measure of which is interlaboratory
precision).
     Table IV illustrates the varying degree of
equivalence of two of the most widely used of the
alternative procedures sources, 1974 METHODS FOR THE
CHEMICAL ANALYSIS OF WATER AND WASTES (5) (MCAW) and

-------
                       404
the 14th Edition of STANDARD METHODS (6) (SM).  The
relative standard deviations (RSD) were obtained
from EPRI's analysis of the results for one of the
two sample sets furnished by EPA/EMSL on the
DMR/QA-1 program.  In the foregoing program, each of
the several thousand permit holders who received the
samples (vials) diluted them with reagent water and
then analyzed them for selected parameters (those
required by their permits plus any others they chose)
by employing the procedures they normally used for
obtaining their compliance monitoring data.  In
addition to the results of their analyses, each permittee
reported, according to a prescribed code, the specific
procedure that they employed.  EPRI examined the
data using this code.  It must be cautioned that
there is no way of knowing if each respondent
employed the procedures exactly as written.  Nonetheless,
the data in Table IV are very informative.
     Of the nine elements studied, six have RSD's
for the MCAW and the SM procedures that are
significantly different at the 99% level of confidence.

-------
                        405
Of these six elements, four (Cr, Cu, Ni, and Zn)
have RSD's that are significantly higher for results
determined using the procedure as it appears in
STANDARD METHODS than if the results were determined
following the procedure as written in the
1974 MCAW; two, Cd and Se, have lower RSD's following
the SM rather than the MCAW procedures.  It is
well to remember that these significant differences
in the performances of two widely used procedures
sources arose on reagent water.  What the performance
differences would be on actual effluent
matrices is not known.
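
     The paper does not state which statistical test EPRI applied to
reach the 99% conclusion; one common way to test whether two relative
standard deviations differ significantly, assuming roughly normal
errors, independent laboratory groups, and the same nominal
concentration for both groups (so that the ratio of squared RSDs
equals the ratio of variances), is an F-test on the two variances.
The sketch below uses hypothetical counts and RSD values.

     from scipy import stats

     def rsds_differ(rsd1, n1, rsd2, n2, alpha=0.01):
         """Two-sided F-test on the variances implied by two RSDs (percent)."""
         f = (max(rsd1, rsd2) / min(rsd1, rsd2)) ** 2    # larger variance on top
         df_num = (n1 if rsd1 >= rsd2 else n2) - 1
         df_den = (n2 if rsd1 >= rsd2 else n1) - 1
         p = min(1.0, 2.0 * stats.f.sf(f, df_num, df_den))  # two-sided p-value
         return p < alpha, p

     # Hypothetical: MCAW users, RSD 18% (n=60); SM users, RSD 35% (n=45)
     print(rsds_differ(18.0, 60, 35.0, 45))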






EXISTING VALIDATION REQUIREMENTS


     In 1978 EPA/EMSL made known a formal requirement
for applicants who proposed test procedures as
alternatives to the procedures approved in 40 CFR Part 136.
These requirements for nationwide approval of
equivalency specified comparative testing of representative
samples of the point source discharges from five
Standard Industrial Classification codes or

-------
                       406
gories.   It  would appear  from the EPRI  study that



none of the  Part 136 methods for the nine priority



pollutant elements discussed here has been so tested.



     Table V summarizes the applicability of approved methods for the
nine priority pollutant elements when these methods are evaluated by
comparing their detection and quantitation limits with the average
concentrations in power plant process discharges.  By this criterion,
approved methods are available to detect six of the nine for
compliance purposes, but to quantify only two, As and Se.
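
     The screening rule behind Table V, as footnote (1) of that table
defines it, can be written out explicitly: a method is credited with
the ability to detect when its detection limit (LOD) falls below the
lowest of the required concentrations, and to quantify when its
quantitation limit (LOQ) does.  The sketch below uses placeholder
LOD/LOQ values, not the limits used to construct Table V.

    # Sketch of the Table V screening criterion.  A method "detects" at a
    # required level if its LOD is below it, and "quantifies" if its LOQ is
    # below it.  The limits used here are placeholders, not Table V inputs.

    def capability(lod_ug_per_l, loq_ug_per_l, required_ug_per_l):
        return {
            "detect":   "Y" if lod_ug_per_l < required_ug_per_l else "N",
            "quantify": "Y" if loq_ug_per_l < required_ug_per_l else "N",
        }

    # A hypothetical element with an assumed LOD of 1 ug/L and LOQ of 10 ug/L,
    # screened against the lower of a 10 ug/L WQS and a 4.6 ug/L process level.
    required = min(10, 4.6)
    print(capability(1.0, 10.0, required))   # {'detect': 'Y', 'quantify': 'N'}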






POWER INDUSTRY CONCERNS






     The absence of interlaboratory precision data for all of the
matrices for each of the alternative procedures discussed herein would
probably not be of great concern to the power industry if compliance
with effluent limitations were to be enforced on only an
order-of-magnitude basis at technology-based effluent concentrations.
The Part 136 methods' precision data allow adequate confidence to be
placed in results under such circumstances.  However, EPA's major
effort under the Clean Water Act has now shifted from technology-based
effluent limitations to limitations based upon water quality
standards.  The changes proposed in October 1982 in the Water Quality
Standards regulations (7) are a major step toward implementing that
shift.



     The concentrations for the priority pollutant elements in the
present National Water Quality Criteria (8) are significantly lower
than the discharge concentrations shown in Table I for power plant
effluents.  The latest draft revisions of the National Guidelines for
Deriving Water Quality Criteria (9) will lower many of these
concentrations still further.  If future permits are likely to contain
effluent limitations for the priority pollutant elements that result
from stricter water quality standards, or from the waste load
allocation systems that may be put in place, a major effort to correct
the situation evident in Table V must begin soon.

VOLUNTARY VALIDATION






     The Electric Power Research Institute has been urged to begin a
program whereby it will conduct, in cooperation with ASTM and,
hopefully, with EPA, validation studies of selected EPA-approved
methods of concern to the power industry (10).  These studies would be
carried out on matrices representative of the industry's process
discharges and of the major receiving waters: fresh, estuarine and
sea.  Power plant and selected state and federal laboratories would be
the participants in the round robin studies.  The result of the
program would be precision and bias data acceptable to the EPA, to the
industry and to the courts as representative of the expected
performance of the EPA-approved methods on power plant discharges and
associated receiving waters.  It is possible that this effort could
begin before the end of 1983.

     Other industry associations may be interested in conducting
methods validation programs for their own members.  It is essential
that a solution be found to the present impasse.

BIBLIOGRAPHY

 1.   "Aqueous Discharges from Steam Electric Power Plants:  Data
      Evaluation," Interim Report RP 1851-1, Electric Power Research
      Institute, Palo Alto, CA, October 1982.

 2.   "Aqueous Discharges from Steam Electric Power Plants:  Analytical
      Methods Precision and Bias," Draft Report RP 1851-1, Electric
      Power Research Institute, Palo Alto, CA, November 1982.

 3.   Maddalone, R.F., "A Survey of Precision and Bias Data on Methods
      of Analysis for Priority Pollutant Elements," Sixth Annual
      Priority Pollutant Symposium, Norfolk, VA, March 1983.

 4.   "Round Robin Interlaboratory Inorganics Analyses," Utility Water
      Act Group, Hunton & Williams, Washington, D.C., January 1980.

 5.   Methods for Chemical Analysis of Water and Wastes, U.S.
      Environmental Protection Agency, Environmental Monitoring and
      Support Laboratory, Cincinnati, OH, 1974.

 6.   Standard Methods for the Examination of Water and Wastewater,
      14th Edition, APHA-AWWA-WPCF, Washington, D.C., 1975.

 7.   "Water Quality Standards Regulation," Proposed Rule, Federal
      Register, 47, 49234 et seq., October 1982.

 8.   "National Water Quality Criteria," Federal Register, 45, 79318
      et seq., November 1980.

 9.   Stephan, C.E., et al., "Guidelines for Deriving Numerical
      National Water Quality Criteria for the Protection of Aquatic
      Life and Its Uses," Draft, U.S. Environmental Protection Agency,
      Environmental Research Laboratory, Duluth, MN, February 1983.

10.   Rice, J.K., "Utility Perspective on Pollutant Analysis
      Requirements," EPRI Seminar on Sampling and Analysis of Utility
      Pollutants, Los Angeles, CA, February 1983.

                          TABLE I

                NORMAL CHARACTERISTICS OF
                    COAL-FIRED PLANT
                PROCESS WASTE DISCHARGES

    Param-     Mass        RSD      Conc.      RSD
    eter       Kg/day/GW    %       mg/L        %
    TSS        1870         87      32          96
    O&G         172         70       3.3        68
    Mn           53        183       1.1       173
    Fe           45         77       0.71       83
    P            18        295       0.22      246
    NH3          12        118       0.28      109
    Zn            4.4       68       0.075      76
    Cu            3.2      118       0.043     147
    As            2.8      130       0.050     112
    Pb            2.6       94       0.034      87
    Ni            2.2       84       0.041      88
    Cr            1.0      128       0.017     108
    Se            0.58     130       0.012     123
    Be            0.35      65       0.005      71
    Cd            0.34     156       0.005     147

    Source: Tables 4-11 and 4-17, (1)

                            TABLE II

        REPORTED FLAME AAS RELATIVE STANDARD DEVIATIONS
             BASED ON INTERLABORATORY PRECISION AT
        PROCESS WASTE DISCHARGE AVERAGE CONCENTRATIONS

                            RSD (%) by Data Source
                   ASTM 1981   AOAC      USGS     MCAW 1979   EPA/EGD
           Conc.   Reagent     Reagent   River    Natural     Mining
  Element  ug/L    Water       Water     Water    Water       Effl.
  As        41        8.5        -         -        41.2        -
  Be         5.2     41.1        -         -         -          -
  Cd         4.6   1134        121         -       128          -
  Cr        19       62.3       67.5      26.5      66.3       86.6
  Cu        45      274         38.9      27.1      34.6       27.6
  Pb        35      169        120         -        69.4      157
  Ni        39      240          -         -         -         70.1
  Se        16       16.9        -        26.7      50.4        -
  Zn        76       58.1       20.2      25.2      46.6       18.9

  Source: Tables 4-2 and 4-3, (2)

                            TABLE III

  REPORTED FURNACE AAS RELATIVE STANDARD DEVIATIONS BASED ON
     INTERLABORATORY PRECISION AT PROCESS WASTE DISCHARGE
                   AVERAGE CONCENTRATIONS

                            RSD (%) by Data Source
                   ASTM 1981   UWAG      UWAG      MCAW 1979   EPA/EGD
           Conc.   Reagent     River     Ash Pond  Natural     Mining
  Element  ug/L    Water       Water     Efflu.    Water       Effl.
  As        41        -        47.1       9.1        -         53.8
  Be         5.2      -          -         -         -           -
  Cd         4.6      -          -         -         -           -
  Cr        19        -        21.0      38.1        -           -
  Cu        45        -        17.9      13.4        -           -
  Pb        35        -          -         -         -           -
  Ni        39        -        19.1      25.6        -           -
  Se        16        -          -         -         -           -
  Zn        76        -        24.5      42.3        -           -

  Source: Tables 4-2 and 4-3, (2)

                            TABLE IV

    COMPARISON OF RELATIVE STANDARD DEVIATIONS BETWEEN
      ALTERNATIVE PROCEDURES FOR FLAME AAS BASED UPON
              EPA DMR/QA-1 PERMITTEE RESULTS

                   MCAW 1974         STANDARD METHODS
           Conc.                                                 F
  Element  ug/L    RSD%    (n)       RSD%    (n)        R2     [.01]
  As       235     28.5    (39)      35.3    (36)      1.54     No
  Be       235      6.95   (34)       7.94   (16)      1.30     No
  Cd        39     19.4   (171)      14.2   (111)      0.53     Yes
  Cr       261     13.9   (265)      18.2   (204)      1.72     Yes
  Cu       339      6.74  (310)       8.63  (177)      1.64     Yes
  Pb       435     15.0   (238)      13.5   (135)      0.81     No
  Ni       207     11.5   (210)      15.0   (128)      1.69     Yes
  Se        50.4   94.6    (27)      37.3    (34)      0.16     Yes
  Zn       418      7.95  (317)      12.4   (208)      2.43     Yes

  R = (RSD STANDARD METHODS) / (RSD MCAW)
  Source: Table 2-16, (2)

                            TABLE V

                   CAPABILITY TO DETECT OR
               TO QUANTIFY AT LOWEST REQUIRED
                      CONCENTRATION (1)

                   Proc.         Detect              Quantify
  Ele-     WQS     Waste
  ment     ug/L    ug/L      GF/AAS   F/AAS      GF/AAS   F/AAS
  As        50     41          Y       Y(2)        Y       Y(2)
  Be         5.3    5.2        -       Y           -       N
  Cd        10      4.6        N       N           N       N
  Cr        50     19          Y       N           N       N
  Cu        72     45          Y       N           N       N
  Pb        50     35          N       N           N       N
  Ni        13     39          Y       N           N       N
  Se        10     16          N       Y(2)        N       Y(2)
  Zn        47     76          N       N           N       N

  (1)  (Y) means that method LOD or LOQ is lower than either
  concentration shown; (N) means not lower than the lowest of the
  concentrations shown.

  (2)  Gaseous hydride method

  Source: Tables 5-19 and 5-20, (2)

           QUESTION AND ANSWER SESSION



                MR. TELLIARD:  And you say you looked at coal-fired
and gas-fired?

                MR. RICE:  Maddalone separated the effluent data on
steam electric plants into three categories:  coal-fired, oil-fired,
and gas-fired.

                MR. TELLIARD:  Any difference... you didn't sample
any hydro?

                MR. RICE:  We decided that the question of potential
pollution from hydro-power dams was better left to the courts.

                MR. TELLIARD:  That concludes this year's presentation.
Thank you for coming; I hope you enjoyed it.  Hope to see you next
March, same time, same station, same players, maybe a few more.
Thanks a lot.








(WHEREUPON, the hearing was concluded.)
