-------
-------
UNITED STATES ENVIRONMENTAL PROTECTION AGENCY
OFFICE OF WATER
INDUSTRIAL TECHNOLOGY DIVISION
THIRTEENTH ANNUAL EPA CONFERENCE ON
ANALYSIS OF POLLUTANTS IN THE ENVIRONMENT
MAY 9 & 10, 1990
-------
-------
UNITED STATES ENVIRONMENTAL PROTECTION AGENCY
WASHINGTON, D.C. 20460
OFFICE OF WATER
MEMORANDUM
TO: Attendees
FROM: William A. Telliard, Chief
Analytical Methods Staff
Engineering and Analysis Division
SUBJECT: The Thirteenth Annual Analytical Symposium
Proceedings
Enclosed please find a copy of the Proceedings of the
Thirteenth Annual Analytical Symposium sponsored by the USEPA
Office of Water, Industrial Technology Division (now known as
the Engineering and Analysis Division), held in Norfolk,
Virginia, May 9-10, 1990. We believe you will find these
Proceedings to be an invaluable reference for the analytical
methods discussed during the symposium.
Thank you for your continued interest and support in the
Division's Analytical Symposium.
-------
-------
FOREWORD
The Industrial Technology Division of the USEPA
Office of Water Regulations and Standards sponsors an
annual conference on the analysis of pollutants in the
environment. This symposium is attended by laboratory
and sampling personnel, data users, and regulatory
agencies. It is intended to provide a forum for the
discussion of current and proposed analytical methods
for the analysis of priority pollutants.
These proceedings document the presentations and
discussions from the Thirteenth Annual Analytical
Symposium. Topics this year encompassed discussions of
method validation, determination of volatile and semi-
volatile compounds, and the effects of new regulations
on laboratories. A special panel discussion was held
by Federal, State, and industry representatives on
laboratory certification and reciprocity.
This year's conference was very successful, with
over 270 people in attendance. We look forward to the
Fourteenth Symposium in May 1991.
W. A. Telliard
-------
-------
THIRTEENTH ANNUAL EPA CONFERENCE ON
ANALYSIS OF POLLUTANTS IN THE ENVIRONMENT
Office of Water Regulations and Standards
Industrial Technology Division
May 9-10, 1990
Norfolk, Virginia
TABLE OF CONTENTS
Wednesday, May 9, 1990
                                                              Page
Welcome and Introduction ..................................... 1
William A. Telliard, Chief
Analysis and Analytical Support Branch
USEPA, Industrial Technology Division

Revision and Updates to EPA's List of Lists .................. 3
James King
USEPA Sample Control Center
Viar & Company

A Toxicity-based Approach to Pollutant
Identification .............................................. 42
D. R. Mount
ENSR Corporation

Development of a HRGC/HRGC/LRMS System for
Determination of Chloro-dioxin/furan Isomers/
Congeners; Instrument Design and Performance ................ 78
L. L. Lamparski
Dow Chemical Company

Development of a MAGIC Interface for HPLC/FT-IR ............ 119
James A. de Haseth
Department of Chemistry
University of Georgia

New Capillary Column for the Determination
of PCDDs and PCDFs ......................................... 160
T. O. Tiernan
Wright State University

Single Laboratory Evaluation of EPA Method 200.8,
Determination of Trace Elements by Inductively
Coupled Plasma - Mass Spectrometry ......................... 209
Theodore D. Martin
Chemistry Research Division
USEPA-EMSL
-------
THIRTEENTH ANNUAL EPA CONFERENCE ON
ANALYSIS OF POLLUTANTS IN THE ENVIRONMENT
TABLE OF CONTENTS - 2
                                                              Page
ICP Performance for the Measurement of 14 Trace
Metals in Power Plant Waste Streams ........................ 235
James K. Rice, P.E.
James K. Rice Consulting

Quantitation/Detection Limits for the Analysis
of Environmental Samples ................................... 270
W. G. Krochta
PPG Industries, Inc.

Laboratory Determination of Diesel Oil
in Drilling Fluids ......................................... 301
Warren Haltmar
EPTD-Environmental Technology
Texaco, Inc.

Validation of a Method for the Determination
of Diesel Oil in Drilling Fluids ........................... 326
M. T. Stephenson
EPTD-Environmental Technology
Texaco, Inc.

Preparation and Analysis of Air Emission
Samples .................................................... 352
Larry D. Johnson
Source Methods Standardization Branch
Quality Assurance Division
Atmospheric Research and Exposure Assessment
Laboratory
USEPA-RTP

Chesapeake Bay Program - Experience with Nutrient
Analytical Methods in the Estuarine
Environment ................................................ 392
Bettina Fletcher
USEPA Region III CRL

Thursday, May 10, 1990

Determination of Semi-volatile Organic Compounds
in River Water at the Part-per-quadrillion (ppq)
Level by High Resolution Gas Chromatography/High
Resolution Mass Spectrometry ............................... 438
Yves Tondeur
Triangle Laboratories
-------
THIRTEENTH ANNUAL EPA CONFERENCE ON
ANALYSIS OF POLLUTANTS IN THE ENVIRONMENT
TABLE OF CONTENTS - 3
                                                              Page
EPA's RREL Sampling and Analysis Methods Database .......... 465
Lawrence Keith
Radian Corporation

Micro-extraction Isotope Dilution GC/MS Determination
of Volatile Organic Compounds .............................. 502
Bruce N. Colby
Pacific Analytical

Liquid-solid Extraction for Determination of
Acid Herbicides in Drinking Water .......................... 546
Jim Eichelberger
USEPA-EMSL-Cincinnati

Application of Multispectral Techniques to the
Identification of Aldehydes in a Combined
Sewer Overflow ............................................. 585
Susan D. Richardson
Environmental Research Laboratory
USEPA-Athens

A Laboratory Robotic Method for the Automated
Determination of Total Suspended Solids in
Environmental Water Samples ................................ 616
Joe C. Raia
Shell Development Company

Determination of Semi-volatile and Pesticide
Pollutants in Sewage Sludge by Soxhlet-Dean Stark
Extraction, High Performance Liquid Chromatography
Cleanup, Gas Chromatography with Selective
Detectors, and Isotope Dilution Gas Chromatography/
Mass Spectrometry .......................................... 648
D. R. Rushneck
ATI-Colorado

Federal Government Perspective on the Regulation
of Laboratories under the CLIA of 1988 ..................... 673
Rhonda Whalen
Office of Survey and Certification
U.S. Department of Health and Human Services

Management of a Clinical Laboratory Certification .......... 693
Dr. Gerald Hoeltge
Cleveland Clinic Foundation
-------
THIRTEENTH ANNUAL EPA CONFERENCE ON
ANALYSIS OF POLLUTANTS IN THE ENVIRONMENT
TABLE OF CONTENTS - 4
Page
Panel Discussion: Laboratory Certification
and Reciprocity 717
A. W. Tiedemann, Jr., Ph.D.
Commonwealth of Virginia 718
Mike Carter
USEPA, OERR 723
Arthur Perler
USEPA, Office of Drinking Water 728
Benjamin R. Tamplin
California Department of Health Services .....757
Margaret Prevost
State of New York 763
George Stanko
Shell Development Company 767
Question and Answers 774
Closing Remarks 784
List of Speakers 785
List of Attendees 789
-------
PROCEEDINGS
MR. TELLIARD: Good morning.
My name is Bill Telliard. I am from EPA, and I am here
to help you. Welcome to the 13th, our lucky 13th, annual
symposium on measurement of pollutants in the environment.
A couple of notes on housekeeping things before we get
started. We have a substitution at 9:00 o'clock. Dr.
Fielding will not be presenting. His paper has been dropped
due to a lack of interest on Dr. Fielding's part. And for
tomorrow's talk on the coastal marine environment by Gordon
Wallace, Larry Keith will be speaking in his place.
This morning and throughout the next couple of days, we
are going to talk about a myriad of different problems in
environmental measurement. Our people to our left are
County Court Reporters, Inc., who take down your every
single, dripping word that you might want to put forward.
If for some reason you don't want your words on the
record...and we all know what that means; it is like telling
your wife where you are...we will be happy to tell them to
stop recording and stop typing which they do real well.
Stopping is one of the best things they do. And we will be
glad to take it off the record.
Our first speaker this morning is Jim King from Viar &
Company...which reminds me of a story. Has nothing to do
with Viar & Company, but I had to lead in with something.
-------
This lawyer has a parrot on his shoulder, walking down
the street, walks into a bar, walks up to the bar, and the
parrot says give this guy a beer. Bartender says wow, this
is really kind of a neat bird, you know. So, he gives the
guy a beer, goes down the bar, and he turns to this other
guy and says that parrot is really kind of neat. So, he
comes back up the bar and he turns to the lawyer and he
says, where did you get him? And the parrot says
Washington, there's thousands of them there.
Our first speaker is Jim King, and Jim is going to talk
about one of my favorite subjects which is the list of
lists.
Jim is going to talk about a program that ITD has come
up with dealing with the use of a listing series to cover a
catalog of analytical methods and also those listed
compounds that appear on various agency lists throughout the
agency.
Jim King from Viar and Company.
-------
MR. KING: Thank you, Bill.
Hi. I am Jim King from Viar and Company. We operate
the Sample Control Center for Bill Telliard and EPA's
Industrial Technology Division (ITD).
In 1985, ITD project officers were faced with the
prospect of regulating a wide variety of compounds, not only
Priority Pollutants, but also this very strange list that
appeared in the 1984 Hazardous and Solid Waste Amendments
called the RCRA Appendix VIII list. There were a number of
compounds on this list that were unanalyzable by present
methods. Bill Telliard decided to develop an automated
catalog of these analytes and ideas and thoughts about the
methods that might work for them.
Over the years, this system has grown into a compendium
of all of the analytes of interest at the Agency contained
in regulatory as well as office-based lists. When I say
regulated, I am referring to environmentally significant
regulations rather than product regulations such as the
FIFRA/TSCA rules and others like that.
Now, I'll demonstrate the current system. I will talk
everyone through the system as we view it on the screen. If
anyone has any questions, feel free to ask them at the end.
This is the main menu for the system. It does work
with a Microsoft compatible mouse. When the device driver
-------
for the mouse is installed, you will see a highlighted bar
menu at the bottom of the screen.
The system runs on an IBM PC or compatible. It
requires about 256K of memory. A hard disk is required.
The database is indexed by these seven primary keys and a
number of other alternate keys that once you become familiar
with the system, you can utilize.
The primary key and the one on which we place the most
significance is the CAS number. When entering data into the
system, we go through and thoroughly research the CAS number
either using Chemical Abstracts on line or two hard copy
references in order to verify it, because we have found a
number of regulations and other listings of compounds with
CAS numbers that just were not correct.
The easiest way to view the data is by name, so we will
take a look at it by name. Place the cursor in the lookup
field and search for a compound. Here we have
acenaphthene, CAS Number 83329. Here we have the CAS
number, the regulatory origin and sequence, reportable
quantities if there are any associated with the regulation,
the analyte name as listed in the regulation, the
International Union of Pure and Applied Chemistry (IUPAC)
name, and then any synonyms would appear after the IUPAC
name.
-------
The first screen is the original screen developed in
1985, so we had relatively few locations for apparatus,
method, and a use code, but at that time, we were very
interested in sources for standards. So, here we have a
little catalog of sources of standards for true analytes.
We also have some physical properties data such as
whether the compound hydrolyzes or decomposes in water,
extracts or purges from water, whether it is possible to do
by gas chromatography or whether you have to resort to
liquid chromatography.
There is also a location for the EPA-NIH mass spectral
library page number.
This is a relational data base, and if we place the
cursor in one of these fields, we can scroll through these
regions, and handle a relatively large number of regulations
per compound.
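
King describes a one-to-many relationship: a single analyte record can carry any number of regulatory origin/sequence entries. System J itself is not described in detail in these proceedings, so purely as an illustration of that relational layout, here is a minimal SQLite sketch; the table names, columns, and sequence values are hypothetical and are not the actual LISTS schema.

```python
import sqlite3

# Hypothetical, minimal relational layout -- NOT the actual System J schema.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE analyte (cas_no TEXT PRIMARY KEY, iupac_name TEXT);
CREATE TABLE listing (cas_no TEXT REFERENCES analyte(cas_no),
                      origin TEXT, sequence TEXT);
""")
con.execute("INSERT INTO analyte VALUES ('83-32-9', 'acenaphthene')")
# One analyte, many regulatory origins (sequence values are placeholders).
con.executemany("INSERT INTO listing VALUES (?, ?, ?)",
                [("83-32-9", "P-POLL", "001"),
                 ("83-32-9", "CER_302", "012"),
                 ("83-32-9", "APP-C", "003")])
for row in con.execute(
        "SELECT origin, sequence FROM listing WHERE cas_no = '83-32-9'"):
    print(row)
```
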
If we flip over to page 2, we see the methods screen.
This was developed in early 1988, and the objective here was
to catalog as many methods as were available for a
particular compound and some information about the method.
We have the origin of the method here, the apparatus
utilized, the method number if there is one, the suffix
which is really a use code (the Contract Laboratory Program
uses that for low soils, medium soils, water, et cetera).
-------
6
We also have the detection limit appearing here,
expressed in the identical terms that it was expressed in
the method. In other words, we did not convert everything
into some common detection limit or common unit.
Some fields that we will be updating in 1990 include
precision expressed as standard deviation or coefficient of
variation and other methods information. We are going to
take the same approach that we took with the detection
limits. However precision is expressed in the method, that
is how it will appear here for that particular compound.
And also bias expressed as percent recovery.
We are also talking about adding an additional screen,
and we have a prototype of that. If you place the cursor on
a method number and then hit PF key 5, it will take you into
the Method Information Screen. This screen is in prototype
form right now. We are still experimenting with what sort
of data we want to display on this screen.
We often receive comments from people who say "well,
why don't you just put the whole method in there". We are
trying to keep the focus of the system very clear so that it
doesn't become unwieldy and you need a monster 80386 PC to
run it on. We have run this on an XT for a number of years
with no problem. It works quite well.
On the Methods Information Screen, we look up by method
number, the primary key. We have the revision, the date of
-------
the method, the status, whether it is draft, final,
proposed, or promulgated, the number of labs that
participated in the validation study, the type of analysis,
the matrix, the apparatus used, the appropriate
concentration range, a citation for the method, and a short
abstract of the method.
If we go back to the previous screen, one thing that is
real interesting, is that we can actually switch indices
here with the mouse just by clicking the right button on the
actual fields. So, here we are in the name index, and we
are viewing it by name. With a right click we will go over
to the method index, organization index, etc.
Let's go back to the name index here. The name index
contains a little over 10,000 entries. What appears in this
window is the compound name as it appears in the reg, the
IUPAC name, and any synonyms. So, there are actually just
under 1800 unambiguous analytes in the system, 150 methods
for their analysis, compiled from 26 regulatory lists.
We can move from A to Z down through an index of 10,000
in less than one second. Here we see zectran, our first
entry in the Z's, but let's look up...and basically what we
have here is a byte by byte search of the data base. So, if
we enter an "L", it will bring up the actual first
occurrence of an "L" compound.
-------
8
This particular index contains analyte names listed
multiple times, as they appear in the regulation, by IUPAC
name, and by synonym. We can look up lasso, and we can also
look it up as alachlor or metachlor or by the IUPAC name
which I won't even try to pronounce.
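
The proceedings do not show the lookup code itself. As a rough sketch of the behavior King describes (typing a letter jumps to the first matching entry in a sorted name index), the following assumes a small in-memory index and a binary search, which is not necessarily how System J implements it; the index entries are a tiny stand-in for the full 10,000.

```python
from bisect import bisect_left

# Hypothetical sketch of a lookup-by-prefix search over a sorted name index.
name_index = sorted([
    "ACENAPHTHENE", "ALACHLOR", "AROCLOR_1242",
    "LASSO", "LINDANE", "ZECTRAN",    # stand-in for ~10,000 entries
])

def first_match(prefix):
    """Return the first index entry at or after the typed prefix, else None."""
    i = bisect_left(name_index, prefix.upper())
    return name_index[i] if i < len(name_index) else None

print(first_match("L"))   # -> "LASSO", the first "L" compound in this toy index
```
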
The 1990 revision of the system is scheduled to be
complete by the end of the current fiscal year, and we
expect at that time to publish a new List of Lists document.
It is probably going to be a little unwieldy at 200 pages,
so to reduce this bulk, we are looking at distributing the
system software and the data on diskette, at a break-even
cost, outside of the Agency.
The diskette version is slated to start beta tests
within the next 60 days, and we would hope to send it out to
all current requesters. There were a number of people at
the Water Pollution Control Federation conference last year
that requested this system, and at that time, we were hoping
to have it available sooner, but we have maintained their
names on file, and they will be getting a copy of the system
software in this first round of distribution.
If anyone has any questions at this time or has any
favorite compounds that they would like to look up, feel
free to ask, and I will do my best to answer them.
MR. TELLIARD: I should point
out, for those of you who wanted a copy of this wonderful
-------
system, that we don't take green stamps, but we estimate the
price will probably be about $70 on the first issuance.
We were talking about handling it through NTIS. I
think, for the moment, we will handle it through Sample
Control Center. I think it is faster, and I think the
product is better if we can get it out to the folks.
We are putting it in all the EPA Regional libraries,
permits office, and ESD laboratories, and Region III is the
beta test area, so we will put it in all the State offices
there to see how the thing works and what, if any, problems
we have with it.
Since we have a working product, of course, the new
EMMC, Environmental Methods and Management Committee, has
put a work group together to figure a way that this can cost
more. So, we are working on that now. It is your
government in action: if it is working fine, let's fix it.
So, one of the areas that is totally lacking, as
pointed out by Jim, is the fact that we don't have a lot of
data on the precision and accuracy of the method, and we are
just going to extract that from the existing method write-
ups. Some of that is good, some of that is bad, some of
that is indifferent, but we are going to put it in there at
least as a starting point and see what type of feedback we
-------
10
receive from the other users in the other laboratories that
may have an interest in this particular program.
So, are there any questions? When you come to the
microphone, please identify yourself and tell us who you
are.
-------
11
QUESTION AND ANSWER SESSION
MR. RICE: Jim Rice. I am
a consulting engineer.
Bill, I have several questions. First, what is the
data base management system you are using?
MR. KING: The software
is written in System J. It is a relatively unknown system.
However, in 1985...
MR. TELLIARD: Obscure is
the word.
MR. KING: Yes, obscure.
MR. RICE: When you make
it available on diskette, will what comes with it include a
license to the data base management system as well as the
software?
MR. KING: No, it will
not. It will be a run-time-only module. So, in other words,
we will send you only a couple of files that are required to
run the system.
MR. RICE: Right. Well,
that is what I meant.
And now the other question, Bill, and a comment. You
mentioned about putting precision...I think it says
precision and bias on those columns up there.
-------
12
MR. KING: That is correct.
MR. RICE: But I would make a
plea to you not to use and put up and, in effect, put into a
data base for wide distribution data that you know is not
worthy to reprint. I think I would only put that which
is...that you consider really valid interlaboratory data on
a method that has been checked out by a large suite of
laboratories rather than put up junk, because people other
than this group, which is technically astute and can
understand that, get hold of that data, and then it causes
all kinds of mischief.
MR. TELLIARD: Thank you, Jim.
I agree with you, but, you know, there is a box to be filled
in.
(Laughter.)
MR. TELLIARD: It is kind of
like your census. Irrespective of how many live in the
house, you have to fill in the box.
We are aware of the problem that some of the data is
better than others, and I think for the first go-around, as
we mentioned, for the beta test, we are going to try to put
everything in and circulate it and have people pull it out
or revamp it based on their experience or their information.
-------
13
We haven't figured a way to flag what we call real good, not
so good, and mediocre, but I agree with you. I think we
ought to do that, but we haven't addressed that issue yet.
MR. FRAZIER: My name is
Bill Frazier, City of High Point, Central Lab Services.
In the first screen, you had information about chemical
and physical properties. Is that also going to have
toxicology data available with it?
MR. KING: We have talked
about it, but the problem is there is such a limited data
base of toxicological information.
MR. FRAZIER: Is that a
future plan for it?
MR. KING: That is
currently in the plans.
The agency has a data base called IRIS. I believe
there are only about 200 entries in the data base for
mammalian toxicity. So, that would be a very minor subset
unless we wanted to extend it to different salmonella type
toxicity tests or other toxicities that may or may not have
relevance to mammalian toxicity.
MR. FRAZIER: Thank you.
MR. TELLIARD: TSCA has a
similar list that they have generated for the Office of
Toxic Substances. We are not trying to combine them, but
-------
14
where their data interfaces with ours, what we are thinking
of doing is having a reference arrow that says see block
something or other.
This system will allow us to add additional screens
where we could add a toxicity screen or, one of the other
things that we are interested in, a production screen: how
much of 1,2-diphenol bat shit is made, exported, used
internally, and controlled, so that, from a permitting
standpoint, the guy who is writing the permit knows how much
he should be concerned about it.
That is an option that is probably a couple of years
away if we wish to implement it, but right now, we are
trying to get it out that says it is on somebody's list,
somebody is concerned about it, and we are monitoring for
it.
Now, when we say a list, it is not necessarily a
regulatory list. It is a list whether you are in the Office
of Solid Waste or you are CERCLA or you are inter-toxic
where someone is going to have to make a measurement, either
in a permit, in a monitoring mode, or somewhere. So, when
we say list, it is not the narrow list of just those
regulated compounds which, of course, is probably like 85
and we have, what, 1700 or something on here.
MR. KING: Right.
-------
15
MR. TELLIARD: So, when
we say list, our analogy is those compounds which you as a
user or a producer or a manufacturer may have to monitor
for.
MR. KING: In fact, if we
look at the on-line help screen, you can see where our
origins are derived, and these are the regulatory origins
here. We have various lists on there from the Appendix C
list, from the consent decree, California list of
pollutants, CERCLA 302, Clean Water Act 116, the Michigan
Petition list, the administrator's VTOX list which
ultimately became the SARA Section 313 list.
Also, there are a couple of office-based lists on
there. There is a list of fish tissue contaminants that are
monitored by the Office of Water Regulations and Standards
that we do have on here.
MR. TELLIARD: Good
morning, George.
MR. STANKO: George
Stanko, Shell Development Company.
I think I may have a solution to Jim Rice's problem and
also would ask a question.
Could we replace that precision field with something on
MSDS for these compounds, or was anything considered with
respect to MSDS?
-------
16
MR. KING: No.
MR. STANKO: Because a lot of
us have to handle these, ship these, and do that, and that
kind of information with this kind of a data base looks like
it would be very useful.
MR. TELLIARD: Well, we
certainly have the capability of adding that, George. Thank
you.
Any other questions?
(No response.)
MR. TELLIARD: Again, if you
will pick up this little blue card, we have had the format
and the obscure...obscure was the term?...obscure language
approved by the agency for circulation, and we are
going to try to get this thing out by mid summer to the
world.
There is also the possibility that we may rewrite the
program into a different format, that is to say, a different
language, based on what our computer folks tell us in RTP,
but right now, we are shooting to have the program available
by mid summer. So, if you would like a copy...as I say, we
are estimating $70 for the cost.
MR. TELLIARD: If you fill out
the little blue card, when we figure out what the real price
-------
17
is, we will send you a notice, and if you still want one, if
you send the check to my sister...
(Laughter.)
MR. TELLIARD: We will be
glad to send you a copy.
We would also like as you get this copy any feedback
that you can send to us on ways of making it better or
changing it or reformatting it. Since it is our first
effort at it, we would be glad to have some feedback from
you folks, particularly the user community like you. Also, as
was pointed out, you are probably more on top of this than a
lot of the other people who are going to be using it. So,
any information you can send back to us to make it better
would certainly be appreciated.
Thank you, Jim.
MR. KING: Thank you.
-------
THE ITD LIST OF LISTS SYSTEM
"A Catalog ofAnalytes and Methods"
Analytical Methods Staff
USEPA Office of Water Regulations and Standards
Industrial Technology Division
OWRS
-------
WHAT IS THE LIST OF LISTS?
The Office of Water Regulations and Standards Industrial
Technology Division has developed, maintained, and
distributed, since October of 1985, an automated
composite index of analytes listed by the Agency. This
master database is known as the "List of Lists" (LISTS).
ANALYTES
OWRS
-------
WHY WAS THE LIST OF LISTS CREATED?
PROLIFERATION OF LISTS AND ANALYTES
By 1985, there were 15 Agency lists and over 1,000
analytes. Analyte lists could no longer be managed
individually.
TO HELP ITD DESIGN MONITORING PROGRAMS
Tasked with promulgating industrial effluent limits for the
Priority Pollutants, RCRA Appendix VIII list of 385
compounds and compound classes, as well as CERCLA's
Hazardous Substances List, ITD created an automated
composite list of pollutants of concern to EPA as a resource
to support its monitoring programs.
OWRS
-------
WHY THE LIST OF LISTS IS IMPORTANT
TODAY
METHODS STANDARDIZATION
With the proliferation of field and laboratory measurement
methods, it is necessary to establish degree of
standardization among methods.
STUDY DESIGN
EPA monitoring program designers need analyte and
methods information to make correct choices:
Addressing cross-media contamination
Intersection of Regulatory lists
Selection of appropriate methods
SECTION 518 REPORT TO CONGRESS
The 518 Report recommends "establishment of a
computerized catalogue of the availability, applicability, and
degree of standardization of methods currently in use in the
Agency."
-------
SYSTEM DESCRIPTION
LISTS is a PC-based system developed using System J.
The List of Lists is distributed in both hardcopy and computer-
readable formats. Over 10,000 hardcopies and ASCII files
have been distributed since 1985.
Run-time (read-only) module created for system distribution to
user community.
LISTS database contains information on:
- 1,716 regulated analytes
- 26 statutorily-mandated and Office-based lists
- 150 analytical methods
OWRS
-------
THE LISTS SYSTEM
FEATURES
Menu Driven
Rapid Text Search Using Lookup Field
Indexed on Key Fields
Simultaneous Display of Key Data Elements
Optional use of pointing device ("mouse")

CAPABILITIES
Search, Retrieve, Display, and Print Data Sorted by Key Field
Database Add, Delete, and Modify
Output ASCII Text Files in PC or Mainframe Formats
Generate Standard Reports by Nine Key Fields

REQUIREMENTS
IBM PC XT, AT, or Close Clone
DOS Operating System
System J Software
Hard Disk and 256K RAM (640K RAM Recommended)
OWRS
-------
SYSTEM CHANGE CONTROL
OWRS MAINTAINS CENTRALIZED CHANGE CONTROL OVER
LISTS SYSTEM CONFIGURATION AND DATABASE FILES.
OWRS
ITD
Full System
Add/Delete/Modify
Capabilities
USER COMMUNITY
Run-Time Only Module
Read-Only Capability
OWRS
-------
WHAT DATA ARE IN THE LISTS SYSTEM?
ANALYTE DATA
Analyte Name (as appears on Agency list)
Common, trade, synonym, and IUPAC Names
CAS Number and Base CAS Number
Regulatory Origin/Agency List
Analytical Method(s)
EPA/NIH Mass Spectral File Reference
Physical Properties
Source for Standard

METHOD DATA
Method Number/Identification
Custodial Organization
Instrumentation/Technique (Apparatus)
Sample Matrix, Fraction, and Level (Suffix)
Detection Limit by Analyte
Precision and Bias by Analyte

LIST DATA
Regulatory Origin or Name of Agency List
Custodial Organization
Reportable Quantity
Analyte Names (as appear on Agency list)
OWRS
-------
DATA INTEGRITY
Integrity of LISTS system data is ensured by use of an
unambiguous identifier (CAS number) as a primary key and
stringent verification of each CAS number before entry into
the system.
Each CAS number is checked to verify that it is an
unambiguous identifier for a specific analyte.
- Check-sum algorithm applied to ensure that CAS
number is valid.
- CAS number checked against two published sources or
one published source and one on-line source to ensure
that it accurately identifies the listed analyte. If
necessary, the Agency office responsible for the list is
contacted for confirmation.
OWRS
-------
KEY FIELDS
LISTS data can be retrieved, displayed, and printed by
any of nine key fields:
CAS Number (PRIMARY KEY)
Analyte Name
Base CAS Number
Apparatus and Method
Method and Apparatus
Regulatory Origin/Agency List and Analyte Sequence in List
Custodial Organization
Standard Source and Analyte Name
Method, Suffix, Apparatus, and Analyte Name
OWRS
-------
WHAT ANALYTICAL TOOLS ARE AVAILABLE TO
LISTS USERS?
Data search on key fields
Data retrieval and display sorted by selected index (key
field)
Look up data for specific variable while in selected index
Print data sorted by selected index
Output data from selected index for loading to other
computers (create print file)
OWRS
-------
WHAT SYSTEM REPORTS CAN BE
GENERATED?
A user can print LISTS reports presenting data ordered by any of
the nine key fields. Report data is presented in alphanumeric
order by data element(s) in the key field.
CAS Number
Analyte Name
Base CAS Number
Apparatus and Method
Method and Apparatus
Regulatory Origin/Agency List and Analyte Sequence in List
Custodial Organization
Standard Source and Analyte Name
Method, Suffix, Apparatus, and Analyte Name
-------
SYSTEM ENHANCEMENTS
Addition of Analyte-Specific Precision and Bias
Information
Data fields for precision and bias information have been created and
entry of these data is ongoing.
Addition of Methods Abstract Screen
A methods abstract screen has been created to provide information on
method standardization and validation. Entry of method abstract data is
in process.
Development of User Documentation
A Draft User Manual has been written, which provides detailed
information on how to install the LISTS system, retrieve and display
data, and print standard reports. The Draft User Manual will be utilized
in the upcoming beta test and user input will be incorporated in the final
manual.
= OWRS
-------
EXAMPLE SCREENS AND
ON-LINE HELP FILE
1. Main Menu
2. Data Screen Indexed by Name - Pages 1 and 2
3. Methods Abstract Screen
4. Excerpt from On-line Help File
OWRS
-------
32
MAIN MENU
OW LIST OF LISTS MENU
The OW LISTS database contains environmentally
significant analytes and information necessary
to develop methods for their regulation.
=DISPLAY ANALYTE DATA=
F2 = By CAS number
F3 = By name (alphabetically)
F4 = By base CAS number
F5 = By apparatus and method
F6 = By method and apparatus
F7 = By origin and sequence
F8 = By organization
To PRINT ANALYTE DATA, use Alt
with any of these function keys.
=ORGANIZATION TO DISPLAY/PRINT=
=PRINT FILE=
=OTHER LIST OF LISTS FUNCTIONS=
F10 = Help for this menu
Alt-F10 = Rebuild database indices
=SYSTEM J ACCESS=
F1 = Exit menu to J() prompt
     Type I_HELP to get back
F9 = System J command menu
(Remaining function-key labels in the original screen image are illegible.)
-------
33
DATA SCREEN INDEXED BY NAME - Page 1
Lookup field: AROCLOR_1242
KEYS (name index excerpt): ANTIMONY_TRICHLORIDE, ANTIMONY_TRIFLUORIDE,
ANTIMONY_TRIOXIDE, ANTIMYCIN_A, ANTU, AQUACIDE, AQUA_FORTIS, ARAMITE,
ARASAN, ARGENTATE(1-),_DICYA, AROCLORS, AROCLOR_1016, AROCLOR_1221,
AROCLOR_1232, AROCLOR_1242, AROCLOR_1248, AROCLOR_1254, AROCLOR_1260,
ARSENATES
=OWRS ITD AASB                                OW LIST OF LISTS=
CAS No 53469219          Base 1336363
Created 09/26/87         Updated 02/14/88
Names and comments: PCB-1242; Aroclor 1242
ORIGIN/SEQUENCE: CAL, CER_302, CWA, P-POLL, RCRA
APPARATUS/METH/SUFFIX: CGCEC, GCEC, GCMS
STD: CIN, LV
Hyd/Dec          Ext/Prg E          GC poss Y          LC poss
EPA/NIH page
(F6 toggles between page one and page two; screen borders and
function-key prompts are omitted here.)
-------
34
DATA SCREEN INDEXED BY NAME - Page 2
Lookup field: AROCLOR_1242
KEYS (name index excerpt): ANTIMONY_TRICHLORIDE, ANTIMONY_TRIFLUORIDE,
ANTIMONY_TRIOXIDE, ANTIMYCIN_A, ANTU, AQUACIDE, AQUA_FORTIS, ARAMITE,
ARASAN, ARGENTATE(1-),_DICYA, AROCLORS, AROCLOR_1016, AROCLOR_1221,
AROCLOR_1232, AROCLOR_1242, AROCLOR_1248, AROCLOR_1254, AROCLOR_1260,
ARSENATES
=OWRS ITD AASB                                OW LIST OF LISTS=
CAS No 53469219
Names and comments: PCB-1242; Aroclor 1242
ORG    APPARATUS   METH     SUFFIX   DETECT LIMIT
ASTM   GCEC        D3534             EDL = 1 ug/L
CIN    GCEC        608               MDL = 0.065 ug/L
CIN    GCMS        625      BN
CLP    GCEC        PEST     LS       CRQL = 80 ug/kg
CLP    GCEC        PEST     MS       CRQL = 1200 ug/kg
CLP    GCEC        PEST     W        CRQL = 0.5 ug/L
ITD    CGCEC       1618              MDL = 0.31 ug/L
ODW    GCEC        505               PQL = 56 ug/L
OSW    GCEC        8080              PQL = 106 ug/L
OSW    GCMS        8250              EDL (value illegible)
USGS   GCEC        O-3104
(Page two of 11 records; F6 returns to page one. PREC and BIAS columns
are not yet populated. Screen borders and function-key prompts omitted.)
-------
35
METHODS ABSTRACT SCREEN
Lookup field: 1618
KEYS: 1613, 1618, 1624, 1625
=OWRS ITD AASB                         METHOD INFORMATION=
Method 1618
Created 05/09/90          Updated 05/09/90
Revision 0 dated 05/01/89
Status DRAFT              (Draft, Final, Proposed, Promulgated)
# labs 1 in validation
Analytes PESTICIDES
Apparatus GC
Matrix MULTI              (Water, Soil, Sludge, Biota, Air)
Level LOW                 (Low, Med, High)
Citation
"Chlorinated and Phosphorous Containing Pesticides by
GC with Selective Detectors", USEPA, OWRS-ITD WH552,
Washington, DC 20460, May 1989.
Abstract
The method is designed to meet the survey requirements
of the USEPA ITD. The method is used to determine the
chlorinated pesticides, PCBs, phenoxyacid herbicides,
and phosphorous containing pesticides amenable to GC.
(Screen borders and function-key prompts omitted.)
-------
36
EXCERPT FROM ON-LINE HELP FILE
ITD LIST OF LISTS DATABASE
Legend for Information in Data Fields.
I_RCRA.DSC
02/27/89
The Industrial Technology Division's List of Lists system is an
automated catalog of analytes of environmental concern and methods
for their analysis.
CAS NO The Chemical Abstracts Service (CAS) Registry Number for the analyte.
In certain instances, CAS has assigned a number to a compound class and
this number is used.
Note: where CAS has not assigned a number to an analyte or class, a
synthetic numbering system has been used. This number begins with a
digit(s) followed by an underscore or hyphen followed by three digits
(e.g., 1_001) and assures that an analyte can be unambiguously identi-
fied in relationship to the class from which it is derived. The three
digits following the underscore identify its position on the parent list
and match the ORIGIN SEQUENCE number. At present, the following leading
digits are used (definitions of acronyms and abbreviations are listed
under ORIGIN below):
0- identifies the Drinking Water Priority List
0_ identifies the RQ List
1- identifies analytes on ITD's List
1_ identifies the RCRA Appendix VIII List
2- identifies the AIR List
2_ identifies the RPAR List
3- identifies the SDWA List
3_ identifies the VTOX List
4- identifies the SEC_313 List
4_ identifies the OAG_SRB List
5- identifies the SEC_112 List
10- identifies the FTC List
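
Purely as an illustration, a synthetic number such as 1_001 can be split mechanically into its parent list and sequence using the leading-digit table above; the function and dictionary below are hypothetical helpers, not part of the LISTS software, and the list labels simply echo the acronyms above.

```python
# Hypothetical sketch: split a "synthetic" analyte number (e.g., "1_001")
# into its parent-list code and sequence per the convention described above.
LEADING = {
    ("0", "-"): "DWPL",    ("0", "_"): "RQ",
    ("1", "-"): "ITD",     ("1", "_"): "RCRA Appendix VIII",
    ("2", "-"): "AIR",     ("2", "_"): "RPAR",
    ("3", "-"): "SDWA",    ("3", "_"): "VTOX",
    ("4", "-"): "SEC_313", ("4", "_"): "OAG_SRB",
    ("5", "-"): "SEC_112", ("10", "-"): "FTC",
}

def parse_synthetic(number):
    """Return (parent list, sequence) for a synthetic number like '1_001'."""
    sep = "_" if "_" in number else "-"
    digits, sequence = number.split(sep, 1)
    return LEADING[(digits, sep)], sequence

print(parse_synthetic("1_001"))   # -> ('RCRA Appendix VIII', '001')
```
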
CAS Number Error Checking on the PC: When a CAS number is entered in
the CAS NO or BASE CAS NO field, the CAS error checking algorithm is run
at the instant the cursor leaves the field. If the CAS number is
incorrect, the message "CKSUM" will appear immediately above the CAS
number or below the BASE CAS number, indicating that an incorrect number
has been entered.
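
The help file does not spell out the check-sum algorithm itself. The standard CAS Registry Number check-digit rule, which is presumably what is applied here, multiplies the digits preceding the check digit by 1, 2, 3, ... working from right to left and compares the sum modulo 10 with the check digit. A minimal sketch follows, using two CAS numbers that appear elsewhere in these proceedings.

```python
# Standard CAS Registry Number check-digit rule (assumed to be the
# "check-sum algorithm" referred to above): weight the digits that precede
# the check digit by 1, 2, 3, ... from right to left; the weighted sum
# modulo 10 must equal the check digit, otherwise flag "CKSUM".
def cas_checksum_ok(cas):
    digits = cas.replace("-", "")
    body, check = digits[:-1], int(digits[-1])
    total = sum(int(d) * i for i, d in enumerate(reversed(body), start=1))
    return total % 10 == check

print(cas_checksum_ok("83-32-9"))     # acenaphthene -> True
print(cas_checksum_ok("53469-21-9"))  # Aroclor 1242 -> True
print(cas_checksum_ok("53469-21-8"))  # wrong check digit -> False ("CKSUM")
```
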
BASE The "base" CAS NO. If the analyte is derived from a compound class
(e.g., "Chlorobenzenes"; "Silver and Compounds, NOS"), and the analyte
can be traced to the class, the CAS NO of the class from which the
analyte is derived will appear in this field.
ORIGIN An acronym or abbreviation for the list from which the analyte is
derived. The following lists are used:
AIR Analytes of the air "List of 37"
APP-C Analytes listed in Appendix C of the Consent Decree.
APRIL Analytes added to the RCRA groundwater monitoring list by
Bob April of EPA.
CAL California List pollutants [40 CFR Part 268, Appendix III;
52 FR 25791].
-------
37
page 2 EXCERPT FROM ON-LINE HELP FILE (cont.) I_RCRA.DSC
02/27/89
CER_302 CERCLA Reportable Quantities List [40 CFR 302.4, Table 302.4].
CWA_116 Hazardous substances under Section 311(b)(2)(A) of the Federal
Water Pollution Control Act [40 CFR 116, Table 116.4A] and
Reportable Quantities [40 CFR 117, Table 117.3].
CWS_DIS List of analytes for which Community Water Systems and non-
transient, non-community water systems shall monitor at the
discretion of the State [52 FR 25715, 08 Jul 87].
CWS_REQ List of analytes for which Community Water Systems and non-
transient, non-community water systems shall monitor [52 FR
25715, 08 Jul 87].
DWPL Draft Priority List of Drinking Water Contaminants
[52 FR 25720]
FTC ITD's list of Fish Tissue Contaminants
ITD Additional metals, classical analytes, and dioxins that the
Industrial Technology Division monitors in its sampling and
analysis programs.
MICH The list of analytes proposed to be added to the RCRA Appendix
VIII List by the Michigan Petition [49 FR 49793, 21 Dec 84].
OAG_SRB Oil and gas, secondary recovery biocides: biocides, slimicides,
and molluscicides used on oil platforms.
P-POLL The priority pollutant list [NRDC vs Train, 8 ERC 2120 (DDC
1976)] as expanded to the 129 "Priority Pollutants", Appendix
C Pollutants, and High Priority Paragraph 4(c) Pollutants.
(The specific compounds on this combined list are given in
Methods 1624, 1625, plus the original Priority Pollutant list
of pesticides, metals, cyanide, and asbestos).
PARA-4C The list of 56 compounds detected in the 4(c) study.
PARA_4C The remaining 367 compounds detected in the 4(c) study.
RCRA RCRA Appendix VIII list [51 FR 28305, 06 Aug 86].
RCRA_IX The RCRA Appendix IX Groundwater Monitoring List [51 FR 26632,
24 Jul 86].
RPAR "Rebutable Presumption Against Registration" - compounds EPA is
considering removing from registration as pesticides.
SARA110 Hazardous substances most commonly found at facilities on the
CERCLA National Priorities List [52 FR 12866] under Section 110
of the Superfund Amendments and Reauthorization Act (SARA).
SDWA Safe Drinking Water Act Amendments of 1986 [House Report 99-575]
SEC_112 Pollutants listed as hazardous under the Clean Air Act.
SEC_313 The toxic chemicals subject to the provisions of Section 313 of
the Emergency Planning and Community Right to Know Act of 1986
TCL Superfund Target Compound List (current as of August 1987).
VTOX Compounds on the "Acutely Toxic Chemicals" List in EPA's
Chemical Emergency Preparedness Program [EPA OPTS-00066,
November 1985; 52 FR 13378], mandated under Section 302 of
the Superfund Amendments and Reauthorization Act (SARA).
SEQUENCE The sequence number on the respective list. In those instances in
which the list was unnumbered, a sequential number starting with 001
was assigned to each analyte, except for the HSL in which the number
used by the QA Formaster data base was used (because a sequential
reference number would change every time the HSL is revised), and
numbers beginning with Z to represent the atomic number of a given
-------
38
page 3 EXCERPT FROM ON-LINE HELP FILE (cont.) I_RCRA.DSC
02/27/89
element (e.g., boron is Z05).
REGULATORY NOTES (Column heading not given; found next to "SEQUENCE" field)
Location for encoding information pertinent to a given regulation.
RQ Reportable quantities under CERCLA and FWPCA (CWA).
NAMES AND COMMENTS Various names for this analyte and other unrestricted com-
ments. Names are listed in approximate order of common usage. Each
distinct name starts on a new line and has no leading spaces. Contin-
uation lines for this name should have four leading spaces. Blank lines
and comments can be entered anywhere between names. Comment lines
should have two leading spaces.
The I_JSXPORT program assumes the following about names:
1. The longest name is 168 characters not counting leading
spaces.
2. When making a continuation line back into a full name, a space
should be inserted if the previous line ends in a letter, digit,
colon, semicolon, close parenthesis, close bracket, or a comma
preceded by a dash.
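
Purely as an illustration of rule 2, the hypothetical helper below rejoins a continuation line to its name, inserting a space only when the previous line ends in one of the characters listed above; the example names are arbitrary CAS-style names, not entries taken from the database.

```python
# Hypothetical sketch of rule 2: when rejoining a continuation line, insert
# a space only if the previous line ends in a letter, digit, colon,
# semicolon, close parenthesis, close bracket, or a comma preceded by a dash.
def join_name(previous, continuation):
    needs_space = (previous[-1].isalnum()
                   or previous[-1] in ")];:"
                   or previous.endswith("-,"))
    return previous + (" " if needs_space else "") + continuation.lstrip()

print(join_name("Arsonic acid, methyl-,", "calcium salt"))
# ends in "-,": a space is inserted before "calcium salt"
print(join_name("Benzenamine, 4,4'-methylenebis-", "[2-chloro-"))
# ends in a plain hyphen: the lines are joined with no space
```
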
ORGANIZATION The organization originating the method, as follows:
ASTM  American Society for Testing Materials
CIN   EPA's Environmental Monitoring and Support Laboratory in
      Cincinnati, Ohio
CLP   EPA's Office of Emergency Response Contract Laboratory Program
ITD   EPA's Industrial Technology Division
ODW   EPA's Office of Drinking Water
OSW   EPA's Office of Solid Waste
STD   "Standard Methods" published by the American Public Health
      Association
USGS  US Geological Survey Techniques of Water Resources Investiga-
      tions
APPARATUS As derived from the METHOD. The following apparata are encoded:
BRIDGE Conductivity bridge.
CGCEC Combination method using gas chromatography with electron
capture detector.
CGCFPD Combination method using gas chromatography with flame
photometric detector.
COLOR Colorimetric determination
COUL Coulometric detector
CS2 Analysis of a carbamate by liberation of carbon disulfide.
CVAA Cold vapor Atomic Absorption Spectrometry
DICHROM Dichromate oxidation
EVAP Evaporation
FID Flame ionization detector
FILTER Filtration
FLAA Flame atomic absorption spectrometry
FURNAA Furnace atomic absorption spectrometry
GCAFD Gas chromatography with alkali flame detector
GCEC Gas chromatography with electron capture detector
-------
39
>age 4 EXCERPT FROM ON-LINE HELP FILE (cont.) I_RCRA.DSC
02/27/89
GCFID Gas chromatography with flame ionization detector
GCFPD Gas chromatography with flame photometric detector
GCHRMS Gas chromatography with high resolution mass spectrometry
GCHSD Gas chromatography with halogen specific detector (Hall,
O.I., microcoulometric, electrolytic conductivity)
GCMS Gas chromatography/mass spectrometry
GCNPD Gas chromatography with nitrogen-phosphorus detector
GCPID Gas chromatography with photoionization detector
GRAV Gravimetric
HPLC High performance liquid chromatography
HPLCUV HPLC with an ultra-violet detector
HYDAA Hydride atomic absorption spectrometry
ICP Inductively coupled plasma spectrometry
MICRODF Micro diffusion
NEPHELO Nephelometer
OXY-FID Oxidation/reduction followed by flame ionization detection
OXY-IR Oxidation followed by infra-red detection
PHMETER pH meter
RETORT Oil platform operator's apparatus for determining oil content
of a drilling fluid
SCINT Scintillation counter
SPECTRO Spectrophotometer
TITR Titration
WET Analysis by a classical wet method
WINKLER Incubation in airtight bottle
METHOD The Method number where it is known. Methods and method numbers are
associated with an ORGANIZATION.
SUFFIX The suffix to the METHOD. The suffix is specific to the sample
fraction, matrix, and level. Suffixes are as follows:
Suffix  Frac     Matrix  Level
AW      Acid     Water
BN      B/N
BNW     B/N      Water
CHS     Combine  Solids  High
HS               Solids  High
LS               Solids  Low
MS               Solids  Medium
S                Solids
W                Water
STD Sources for a standard for the analyte. Acronyms and abbreviations
are defined as follows:
ALD  Aldrich, Milwaukee, WI
ALF  Alpha
ATH  Athens ERL
CIL  Cambridge Isotope Laboratories
CIN  EMSL Cincinnati
DUP  DuPont
EPA  EPA repository at RTP
EXX  Exxon
-------
40
page 5
EXCERPT FROM ON-LINE HELP FILE (cont.)
I_RCRA.DSC
02/27/89
LV EMSL Las Vegas
NAN Nanogens, Watsonville, CA
NCI National Cancer Institute
PAB Pfaltz & Bauer
SCC Sample Control Center
SIG Signal
SUP Supelco
SYN Must be synthesized in the lab
ULT Ultrex
HYD/DEC The analyte hydrolyzes (H) as estimated by Athens-ERL, or decomposes
(D) as determined by Dr Beimer of S-CUBED.
EXT/PUR The analyte can be extracted from water as determined by Athens-ERL
or by S-CUBED. E=can be extracted; P=can be purged; N=can neither be
extracted nor purged.
GC POSS The analyte can be gas chromatographed as determined by Athens-ERL or
S-CUBED.
LC POSS W Roy Day of Waters Associates believes that the analyte can be deter-
mined by LC.
EPA NIH PAGE The page number in the EPA/NIH mass spectral file where the
reference mass spectrum for the analyte can be found.
*** Information Specific to Screen 2 ***
DETECTION LIMIT
EDL  Estimated Detection Limit
MDL  Method Detection Limit [49 FR 43234, (Appendix B)]
ML   Minimum Level - ITD's definition of the minimum level that must
     give recognizable mass spectra and acceptable calibration
     points (see footnote 2 to Table 2 of Method 1624, Revision B
     [49 FR 43234]).
PQL  Practical Quantitation Limit - EPA Office of Drinking Water
     definition of the lowest level that can reliably be achieved
     within specified limits of precision and accuracy during
     routine laboratory operating conditions [52 FR 25699], or
     the Office of Solid Waste definition of EPA's current best
     estimate of the practical sensitivity of the applicable method
     for RCRA groundwater monitoring purposes [52 FR 25945].
NOTES (Column heading not given; found next to "DETECTION LIMIT" field)
Notes pertaining to a given method for the analyte.
-------
41
MR. TELLIARD: Our next
speaker is Dave Mount who is going to talk about the joys of
a toxicity-based approach to pollutant identification.
-------
42
MR. MOUNT: Thank you, Bill.
I want to start with two disclaimers which should make
you all nervous. Could I have the first slide, please?
For those of you who are astute, you will notice that I
have made a one-word change in my title from "A Toxicity-
based Approach" to "The Toxicity-based Approach." The
reason I did this has to do with my first disclaimer.
Although I am presenting this information today, it is by no
means a lone effort on the part of our laboratory. It
really represents the efforts of a great many laboratories.
In particular, the EPA lab in Duluth, Minnesota, was heavily
involved in the development of some of the techniques I will
discuss today.
The other disclaimer I want to make is that I am not a
chemist. I am an aquatic toxicologist, and I think that is
an interesting point to make prior to this talk. As we have
entered the world of toxicity-based toxics identification,
the need for interdisciplinary communication and effort has
become apparent in order to most effectively deal with these
problems, because the knowledge that is needed really far
exceeds the knowledge of any one person.
I need to provide information as a toxicologist.
Analytical chemists and other types of chemists as well have
to provide me information from their disciplines. Engineers
-------
43
and all kinds of people have to become involved in order to
make this effective.
As many of you are aware, toxicity, as measured in
toxicity tests, is gaining fairly wide acceptance, at least
in some areas of environmental regulation. The most notable
of these is the NPDES effluent discharge program, but there
are other areas in which toxicity as a unit of measure is
becoming favored as well. "The hazardous waste
classification system is using it some places. EPA has
recently issued guidance on using biological assessment
techniques to look at hazardous waste sites.
And there are a lot of reasons why. There are several
advantages to using toxicity testing for environmental
monitoring. One is that measuring toxicity addresses a lot
of questions that aren't addressed by general analytical
techniques, such as questions of bioavailability. A
chemical analysis can tell you if a contaminant is present,
but it can't tell you whether it is in a form that will
cause adverse effect to organisms out in the environment.
Another consideration is matrix effects. Of course,
all analytical chemists are familiar with matrix effects,
but the same things happen in toxicity testing. A common
one we are probably all familiar with is the effect of
hardness on metal toxicity. There are considerations
relating to the physical/chemical environment in which a
-------
44
contaminant exists in the environment that will heavily
influence its toxicity.
Finally, there are interactions among toxicants. We
can study a single compound to death and know exactly how
toxic it is in and of itself, but when it is present with
other contaminants, how toxic will it be?
All of these considerations are addressed by toxicity
testing. As some of us are fond of saying, an organism
knows more than we do.
Another feature of toxicity testing is that it will
detect the presence of all toxic chemicals that are there,
if they are present in toxic amounts; and that beats any
analytical chemistry technique known. Every last one is
detected if it is present in toxic amounts. An interesting
sidebar to that quote is the toxic materials in toxic
amounts reference, which is in the Clean Water Act, and is
encompassed by toxicity testing as a monitoring approach.
However, if we are to regulate on the basis of
toxicity, we have an obligation to develop means to
determine sources of toxicity, so that control strategies
can be implemented. This gets into the focus of what I am
going to talk about today. This led to the buzz words, the
"toxicity reduction evaluation," or TRE as it is commonly
referred to. In the case of effluents, a TRE is a
systematic evaluation of both plant and effluent to
-------
45
identify sources of toxicity and, ultimately, to control
toxicity.
Now, people have been trying to do things like this for
a long time, and the traditional approach has been
analytically-based, using tools like a priority pollutant
analysis. If you have something toxic, what do you do? You
do a priority pollutant analysis and find out what is in
there. The problem is that in order for this to be
effective, you have to be measuring whatever it is that is
toxic. Another problem is that analytical approaches, at
least most of them, don't account for the questions I spoke
of earlier: bioavailability, matrix effects, and
interactions among toxicants. Hence, this analytical
approach generally falls short, in our experience, of
identifying toxicants in a complex matrix such as an
industrial effluent.
The other approach that has been used is the
engineering and treatability approach. Basically, you don't
worry about what is causing toxicity, you just find some
treatment strategy that will remove it.
There are some real disadvantages to this approach
also. First of all, you run a large risk of having your
treatment remove toxicity, but doing it in a way that you
didn't expect or that does not lie along the usual means of
that treatment technology. Another is that if you don't
-------
46
know what it is and you are just going to treat it, things
like source control aren't an option. If you don't know
what it is, you can't figure out how to stop putting it in.
This is a very simple schematic example I will use to
talk about a couple concepts (Fig. 1). Let's imagine that
the center box is a waste water treatment plant. We have
two influent waste streams in our waste water treatment
plant. If we were to measure the toxicity of Influent 1, it
would be moderately toxic, and it would have a toxicity due
to compound A, although we don't know what compound A is at
this point. Influent 2 would be extremely toxic. It has
toxicity due to compound B.
Both influents enter the waste water treatment plant.
Compound B is effectively treated. Compound A is resistant
to treatment, and we end up with a final effluent that is
moderately toxic, and it is toxic due to compound A.
Now, if we were to go and look for toxicity back up
through this system, we would say it looks like Influent 2
is the source of our toxicity when, in fact, it has nothing
to do with it.
If we were to take the analytical approach, we would
analyze the final effluent. Well, if compound A is a
priority pollutant or other commonly analyzed parameter, we
might pick it up. On the other hand, I have had lots of
people come up to me with a list of tentatively identified
-------
47
GC/MS compounds and say, "are any of these toxic?" And I
say, " yes, they are all toxic". But the question is, how
much? Furthermore, with all the other matrix effects and
interactions, you simply can't take a list like that,
compare it to toxicological benchmarks, and know whether or
not any of those compounds are causing your problem.
So, the goal of the procedures that I am going to talk
about today is to work on that final effluent, and using
toxicity-based procedures, find the identity of compound A.
Once we know what compound A is, then we can take analytical
approaches, go back through the system, and find out exactly
where compound A comes from. And with that all kinds of
control options become possible. In addition to just
treating the final effluent for compound A, we can treat
just that influent line, we can modify the source, we can do
all kinds of things that we couldn't do before.
So, the strategy for the toxicity-based approach is to
use simple separation chemistry techniques to separate toxic
components of the effluent from other components of the
effluent. In all cases, we use toxicity tests as our
analytical detector, rather than flame ionization or some
other standard analytical detector. We use a toxicity test
to find out, for that separation technique, where did the
toxicity go?
-------
48
This has led to the development of a subset of studies
under the TRE, the Toxicity Identification Evaluation or
TIE. As you might have guessed from my discussion earlier,
the objective of the TIE is to relate observed toxicity to a
parameter that has application to engineering solutions.
A lot of what I'll talk about today is heavily geared
toward effluent toxicology, because that is where the
majority of the toxicity limits have been placed. In the
case of hazardous waste sites, a toxicity test can also be a
very useful tool for determining whether or not a hazardous
waste presents a potential toxicological effect. But more
than just that, we then need to know what the toxic compound
is. And if we know what it is, then we can go about setting
cleanup criteria. So, there are all sorts of other
applications of this approach.
The phased approach to the TIE was developed largely by
the EPA Duluth lab and consists of three phases. I am not
going to go through each phase in depth, except to tell you
that they exist. There are guidance manuals that have been
issued by EPA that explain each in great detail. I am going
to talk specifically about some of the tests, some of the
information that you get out of these tests, and how
this information is applied.
The approach for a Phase I TIE may not seem very
unusual (Fig. 2). It is a lot like a traditional
-------
49
engineering treatability approach. We have a whole effluent
sample or any other sort of environmental sample, that can
be used in a toxicity test, a sediment, anything. We split
it. On the left side, we do a toxicity test on it. Down
the right side, we do some sort of physical/chemical
manipulation, and as you will see in a minute, there are a
great many of them that are done in parallel. We conduct
a toxicity test on the manipulated sample and then compare
the toxicity in both of the tests to determine whether or
not that physical or chemical manipulation had any effect on
the causative toxicant in the matrix.
I hate to get into methodological detail, but I want to
make sure we are on the same wavelength as we go through
some example data. These are some of the tests that are
used in this procedure. (Fig. 3). At the top, we have the
whole effluent test which is simply a standard toxicity
test. The whole effluent or other sample, is diluted and
tested for toxicity along a concentration series to
determine just how toxic that material is. This serves as
the comparison for all the other tests.
There are two pH adjustment tests, pH 3 and pH 11. The
sample is simply adjusted to that pH, left there for three
hours and then returned to the original pH. Then the
toxicity of the manipulated samples is measured using
toxicity tests.
-------
50
The pH adjustment tests primarily address compounds
that undergo some sort of physical transformation at those
pH's, an irreversible transformation like hydrolysis. They
also serve as the procedural blank for other tests that use
pH adjustment.
We also use aeration tests. In these, we use aliquots
of sample at the initial pH, and also at acidic and basic
conditions. These aliquots are aerated for 30 minutes,
returned to the original pH, and then tested for
toxicity. These tests obviously address volatile or easily
oxidizable compounds.
The filtration tests have a similar structure to the
aeration tests except that we filter the sample instead of
aerating at those three pH levels. This gets at materials
whose toxicity is associated with filterable solids or
toxicants whose solubility is affected at extremes of pH.
For example, a lot of heavy metals will precipitate at pH
11, and we can filter them out. If they were the cause of
toxicity, the sample will lose its toxicity.
Solid phase extraction is a very simple technique using
Sep-Pak C18 columns. We run the effluent through the
column, and it removes a great many non-polar organic
compounds. Like the aeration and filtration tests, this is
done at acidic, basic, and neutral conditions.
-------
51
We use an EDTA test in which we add EDTA over a range
of concentrations to determine if that has any effect on
toxicity. EDTA will reduce the bioavailability of many
heavy metals. Therefore, if those are the source of
toxicity, we will see a reduction in toxicity when we add an
appropriate amount of EDTA.
There is an oxidant reduction test using sodium
thiosulfate. This test has the same structure as the EDTA
test, except that we use sodium thiosulfate to reduce residual
chlorine and other oxidants.
Finally, there is a graduated pH test where we adjust
the pH to 6, 7, and 8, test the toxicity, and see if there
is a marked difference in toxicity at those different pH's.
Toxicity of many materials is greatly affected by pH.
Ammonia is a good and common example of a pH-sensitive
toxicant.
So, what kind of information do we get out of this? I
am going to show you some specific data in a moment, but
this is conceptually what we find out. These are results
from a wire coating facility in the Northeast, showing the
information from a Phase I characterization. (Fig. 4).
The solid phase extraction test, which is geared toward
non-polar organics, did not remove toxicity. EDTA chelation
did remove toxicity. With that, we begin to think metals
may be the cause in this case. Aeration did nothing at any
-------
52
pH. Acid and neutral filtration did nothing. However,
filtration at pH 11 did remove the toxicity.
So, we can say at this point we feel pretty confident
we are zeroing in on heavy metals. We did some more tests
to confirm that heavy metals were, in fact, the cause. I am
not going to go through those, but once we knew that it was
a metal, we turned to analytical approaches to determine
which metal it actually was.
These are actual data from a municipal wastewater
treatment plant effluent (Fig. 5).
-------
53
and test the toxicity of the post-column effluent. It is
not toxic. We assume, therefore, that whatever is toxic is
on that column.
Next we do a very simple elution of that column, using
a series of methanol concentrations. We use methanol
because methanol is not very toxic to the organisms, so we
can do toxicity tests with samples that are in a methanol
matrix, as long as they are appropriately diluted. Other
solvents like hexane, more common analytical solvents, are
too toxic to be of much use in this procedure.
At any rate, we elute the column with a series of
methanol fractions, ranging from 25 to 100 percent. Then we
test each one of these eight fractions for toxicity.
The resulting 48-hour LC50 values are shown here in
percent (Fig. 7). The LC50, again, is the theoretical
concentration of that sample that would cause 50 percent
mortality of the test organisms.
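(For readers unfamiliar with how such an LC50 value is obtained, here
is a minimal illustrative sketch, not part of the presentation: survival
is measured along a dilution series and the concentration giving 50
percent mortality is interpolated. The concentrations, survival figures,
and the simple log-linear interpolation are hypothetical; standard
statistical methods such as probit analysis would normally be used.)

    import math

    def lc50_from_series(concs_pct, survival_pct):
        # Rough LC50 (percent sample) by log-linear interpolation between
        # the two dilutions that bracket 50 percent survival.
        pairs = sorted(zip(concs_pct, survival_pct))
        for (c_lo, s_lo), (c_hi, s_hi) in zip(pairs, pairs[1:]):
            if s_lo >= 50 >= s_hi and s_lo > s_hi:
                frac = (s_lo - 50) / (s_lo - s_hi)
                return 10 ** (math.log10(c_lo) +
                              frac * (math.log10(c_hi) - math.log10(c_lo)))
        return None  # 50 percent survival never bracketed

    # Hypothetical dilution series (percent effluent) and observed survival
    print(lc50_from_series([6.25, 12.5, 25, 50, 100], [100, 90, 70, 40, 0]))
    # prints roughly 40 (percent effluent)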
Of the eight methanol fractions, only two showed
toxicity, the 85 and 90 percent fractions. The recovery
of this toxicity shows that we have the causative toxicant
out of the effluent matrix and have begun to separate what
is toxic from what is not toxic.
We can extend these results by conducting additional
separation procedures. And at every step along the way, we
-------
54
do toxicity tests to confirm that we still have the
causative toxicant isolated.
We combine the toxic fractions, concentrate them on a
C18 column, inject the concentrate into the HPLC, and split
those toxic fractions into 25 fractions. For illustration
purposes, we'll assume that compounds distribute themselves
randomly among all fractions. We took the original sample,
split it into 8 fractions, then threw out 6 and kept 2. Based
on this, we would be down to a quarter of the original
compounds in the sample. Then, we split the remaining
compounds into 25 more fractions. In theory, then, each of
these fractions contains only 1 percent of the compounds that
were in the original sample, if you will forgive the
simplistic assumptions.
After the HPLC separation, we test each of the 25
fractions for toxicity. In this particular case, we found
that fractions 21 and 22 were toxic. We then combined those
two fractions, concentrated them, and submitted them for
GC/MS analysis. This analysis turned up diazinon in
sufficient concentration to explain the toxicity of the
sample.
There are some analytical wrinkles that can arise from
this process. First, the question being asked of the
analytical chemist is not "How much of compound X is in this
sample?" Instead, we are asking "There is an unknown toxic
-------
55
compound in here; what is it?" As you no doubt realize,
those two questions are quite different, and require
different analytical thinking.
Another potential analytical difficulty is that not all
organic compounds are amenable to GC/MS analysis. Knowing
whether this is the case is impossible, a priori, since we
don't know what it is we are analyzing for. This kind of
problem also requires some innovative thinking, and
sometimes some innovative techniques. HPLC-MS and super-
critical fluid extraction are some alternative procedures
that are showing some promise for these situations. This
diagram illustrates some ongoing research that we are doing
in our laboratory (Fig. 8). One of the big problems we run
into is heavy metal toxicity, because the bioavailability of
heavy metals varies widely. Even if you measure total,
total recoverable, and dissolved concentrations, toxicity
will not necessarily relate to any one of those. If you
have a concentration below the toxicological benchmark, you
can feel confident that it is not a source of toxicity.
But, for example, a great many effluents in this country
have potentially toxic amounts of zinc in them, yet they
aren't toxic. The reason is that the zinc is not in a
bioavailable form.
-------
56
To identify the heavy metal responsible for toxicity,
we need to give the analytical chemist a boost. What we have
done is to combine tests using two different metal ligands.
Although the sodium thiosulfate test is intended to
remove toxicity due to oxidants, we discovered during the
course of one TIE that the sodium thiosulfate test would
remove copper toxicity. When we went back and asked the
chemists about this, they weren't surprised, as sodium
thiosulfate will chelate copper.
We decided to explore this further, by conducting both
EDTA and thiosulfate tests on a range of metals.
The results are presented here (Fig. 8) in a simple 2 by 2
contingency table, with toxicity removal by EDTA in the left
column, and no removal by EDTA in the right column. The
same applies to results of the thiosulfate test, positive
results in the upper row and negative in the lower. So, for
example, in the case of that original effluent, we noticed
that both the EDTA test and the thiosulfate test took out
toxicity. At the time we did that study, we didn't have
this table to look at. But if we had, we could have
pointed to the yes/yes box in the upper left, and would have
suspected that the toxicant was either copper, cadmium, or
inorganic mercury.
As another example, we recently completed some work on
a drain water from an historic mining operation. It was
-------
57
quite toxic and had a myriad of heavy metals in it. We
conducted both tests on the sample and found that EDTA
removed the toxicity and sodium thiosulfate didn't. By
using this table, we then narrowed the range of metals that
might be responsible for toxicity. So, even though there
were a tremendous number of heavy metals in the effluent, we
zeroed in on zinc as being the only metal in this category
that was present at potentially toxic concentrations.
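(To make the use of that table concrete, here is a minimal illustrative
sketch of the lookup being described; only the two quadrants actually
discussed in the talk are filled in, and the dictionary structure and
function name are assumptions for illustration, not part of the EPA
guidance.)

    # Candidate toxicants keyed by (EDTA removes toxicity, thiosulfate removes toxicity).
    # Only the quadrants described in the talk are filled in here.
    CANDIDATES = {
        (True, True):  ["copper", "cadmium", "inorganic mercury"],
        (True, False): ["zinc"],  # zinc falls here; other members not listed in the talk
    }

    def candidate_metals(edta_removes, thiosulfate_removes):
        return CANDIDATES.get((edta_removes, thiosulfate_removes),
                              ["not covered here"])

    # Mine drain-water example: EDTA removed toxicity, sodium thiosulfate did not.
    print(candidate_metals(True, False))  # ['zinc']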
I have a final example TIE to illustrate that it is
not always as simple as I have been showing. This effluent
had an LC50 of about 33 percent, and solid phase extraction
removed this toxicity.
In order to discuss the fine points of this
example, I have to introduce one more term, the toxic unit.
A toxic unit is 100 percent divided by the LC50 of the
solution. An effluent with an LC50 of 50 percent has 2 toxic
units. One with an LC50 of 25 percent would have 4 toxic
units. The number of toxic units also indicates the factor
by which you would need to dilute the whole sample to reach
50 percent survival.
As I said, the toxicity of this effluent was removed by
solid phase extraction. We then conducted the solid phase
(Fig. 9) extraction and elution that I showed you earlier,
with the following results. The whole effluent had an LC50
of 33 percent, corresponding to 3 toxic units. The toxicity
-------
58
tests on the fractions showed toxicity in the 75, 80, 85,
and 90 percent fractions. There is a concentration step
involved with the SPE elution, which is 5x with the
methodology that we use. Calculating toxic units and
dividing by 5, we have the toxic units listed on the right,
corrected to the whole effluent concentration. Adding the
four toxic fractions together, we had 1.3 toxic units, but
we had removed 3 from the whole effluent.
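(As an aside, the arithmetic behind those numbers can be laid out as a
small sketch; the values are those given for Fig. 9, and the function
and variable names are illustrative only.)

    def toxic_units(lc50_percent):
        # Toxic units = 100 / LC50, with LC50 expressed as percent sample.
        return 100.0 / lc50_percent

    whole_effluent_tu = toxic_units(33)  # whole effluent LC50 = 33% -> ~3 toxic units

    # 48-hour LC50s (percent) of the toxic methanol fractions, eluted at 5x concentration
    fraction_lc50 = {"75%": 71, "80%": 55, "85%": 62, "90%": 71}
    concentration_factor = 5

    # Correct each fraction back to whole-effluent concentration, then sum
    fraction_tu = {name: toxic_units(lc) / concentration_factor
                   for name, lc in fraction_lc50.items()}
    recovered_tu = sum(fraction_tu.values())

    print(round(whole_effluent_tu, 1), round(recovered_tu, 2))
    # ~3.0 toxic units in the whole effluent versus ~1.25 recovered
    # (about 1.3 when each fraction is rounded as in Fig. 9)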
Well, this kind of disturbed us, because it meant
either that something was still stuck on that column or
that we had very, very poor recovery for some reason. In
this case, it is hard just to live with poor recovery,
because of the risk that it actually indicates the presence
of a second toxicant.
The clue to this problem came in another sample (Fig.
10). We did a graduated pH test on this other sample, and
found no survival at pH 6, 100 percent survival at pH 7, and
no survival again at pH 8.
Now, there was ammonia in this sample, and ammonia is
more toxic at high pH, so we were expecting and could
explain the toxicity at pH 8. But we couldn't explain the
toxicity at pH 6. Because we had seen non-polar organic
toxicity in this effluent before, we did a solid phase
extraction on that sample and then did the graduated pH
test. This manipulation took out the toxicity at pH 6. We
-------
59
assumed, therefore, that the toxicant responsible for the
toxicity at pH 6 was now on the C18 column. Since ammonia
isn't affected by the column, there was still toxicity at
pH 8.
Having found that out, we did the solid phase elution
(Fig. 11).
As background information, the effluent pH was in the
high 7's, while the pH of the dilution water that was
originally used for testing fractions was about 8.5. When
we tested the toxicity of these fractions at pH 8, we saw no
toxicity at all. But when we tested them at pH 6, we saw
toxicity in the 75 and 80 percent fractions, and the pieces
began to fall together.
What had happened in the first sample was that the
effluent fractions were tested in dilution water with pH
just a few tenths higher than the whole effluent. Because
this material appears to be more toxic at lower pH than at
higher pH, we could thereby account for the difference in
toxicity that we observed originally.
With that, I would be happy to open it up for any
questions you might have.
-------
60
QUESTION AND ANSWER SESSION
MR. HACHIGIAN: Lee
Hachigian of General Motors.
Have you tried the guidance manual with regard to
chronic toxicity applying the same types of methodologies?
MR. MOUNT: Yes, we have.
For those of you who aren't familiar with toxicity testing,
there are two general types of toxicity tests, acute and
chronic. I have been talking about acute toxicity, which
generally deals with survival. Chronic toxicity is a longer-
term test, and deals not only with survival but with more
subtle endpoints.
Yes, we have worked on applying the TIE methods to
chronic toxicity. That is certainly an area of heavy
methods development right now. EPA Duluth is working on it.
We are working with a couple of dischargers who are being
regulated on chronic toxicity, and have to find ways to
control it.
It is definitely a trickier situation. There are a lot
of shortcuts that can be taken with acute toxicity that are
more difficult with chronic toxicity, but some of the
methods still apply. One of the problems is simply that the
chronic tolerance of organisms for substances like EDTA and
thiosulfate hasn't been worked out thoroughly yet. But
that information is coming, and I would imagine...well, I am
-------
61
not going to speak for when EPA will issue guidance, but
they will probably be technically ready to issue guidance
within the very near future.
Yes?
MR. HOWE: Stavros Howe with
the Molecular Ecology Institute.
I think there is a whole suite of assays that currently
exist that would complement your scheme there very well and
perhaps would be more sensitive in addressing some of the
chronic toxicity issues. Specifically, I am thinking about
stress protein assays, metal binding ligand assays, various
enzyme assays that would correlate much closer with the
mechanism of toxicity and give you a better handle on what
is causing the problem, basically.
I think the screening approach would be far more rapid
and much more cost effective.
MR. MOUNT: A lot of other
assays have been suggested. There are a couple of comments I
might make on the choice of assay.
For those of you who are familiar with toxicity testing
as a monitoring tool, you will know that they cost from
several hundred to a thousand dollars per test. When they
are done in the context of a TIE, however, they are much,
much less expensive, down on the order of $50 per test. So,
the cost effectiveness issue starts to go away.
-------
62
People get into this situation as the result of
compliance monitoring, which is whole organism monitoring.
Therefore, whole organism testing is a very appropriate
measure to use, because it is the measure by which the
problem is defined.
The danger in using other assays is that they may
respond to things other than what is causing the toxicity to
the original organism, other things that are present in the
effluent. For example, the Microtox assay has been
suggested for use, and it is an effective tool if it is
responding to the same material as is the original test
organism.
If you use something other than the original organism,
you need to do some background work to establish for certain
that there is a correlation between the alternate assay and
the test that is of regulatory concern.
Finally, I'll admit that this is a regulatory policy
constraint on the scientific process, and how you might
approach that problem depends on what you are doing the
study for.
MR. YOCKLOVICH: Steve
Yocklovich from Burlington Research.
While we use this type of analysis ourselves, I was
wondering what you would see if this was implemented
nationwide. What kind of cost of analysis could you see if
-------
63
it were put into regulation, and how many labs in the
country are equipped to do it?
MR. MOUNT: Let me get all the parts of your
question here. As far as implementation, it has been
implemented in many, many parts of the country. In our
region, Region VIII, and in most of the EPA regions in the
East, it is in place, and we are working with several
clients who are doing it not as a proactive stance, but who
are in it as a result of their permit monitoring.
In terms of cost, we have solved problems for a few
hundred dollars if it is real obvious. It can also cost a
lot more than that. But I would venture to say some of the
horror stories that are floating around about the potential
cost aren't realistic, if the studies are properly done.
There are stories running around about expenditures of
$200,000 or $250,000. We certainly haven't run up any bills
like that, and I don't think that it is necessary.
This is an emerging area, so there is varying expertise
out in the world regarding how effective labs are.
In terms of equipment, there is really very little
equipment needed aside from the analytical capabilities. Of
course, what analytical equipment you need depends on what
direction you end up going, metals, non-polars, or some
other compound. But in terms of the laboratory equipment to
do the separations and the toxicity tests, there really
-------
64
isn't much. A couple thousand dollars in equipment is all
that is required. Obviously, the analytical situation is
different.
Within our company, we have the full range of
analytical equipment. On-site, we have GC, HPLC, and AA.
If we run into GC/MS work, we often contract with a
laboratory that happens to be right across the street from
us.
Is there anything I didn't touch on you want to follow
up with?
MR. YOCKLOVICH: Thank you.
MR. MOUNT: Thank you.
MR. TELLIARD: Any more questions?
MR. FRAZIER: In the TRE/TIE approach, there
is not anything specifically addressing the diseases of
organisms. Are you aware of any body of knowledge on this,
either as affected by effluents or once you get the
organisms into the laboratory?
MR. MOUNT: There are diseases of organisms
that definitely influence test results. In terms of
diseases inherent to the laboratory, the laboratory
culturing and quality assurance practices should address the
presence of pathogens in the laboratory in general.
As far as pathogens in the effluent, we have never had
any experience with that. I do know of one example, but I
-------
65
haven't reviewed the data to know if I feel their
conclusions are accurate. But I do know of one example
where a municipal plant had an experience where they were
convinced that a bacterium growing inside their sampler was
causing disease in the toxicity tests.
And there are some things as a toxicologist you can do
to spot that kind of problem. For example, if you have
toxicity due to a pathogen, you essentially have a
toxicant that can multiply.
As a result, things like concentration response
relationships will likely fall apart. Frequently, you will
see higher mortality at lower concentrations, or other
irregularities that lead you to suspect that something else
may be involved.
Another option, if you suspect that, of course, is to
use ultrafiltration to try to physically sterilize the
sample, and remove at least bacterial pathogens from the
matrix.
MR. TELLIARD: Thank you, Dave.
MR. MOUNT: Thank you.
-------
66
Influent #1: moderately toxic (Compound A). Influent #2: extremely
toxic (Compound B). Both enter the WWTP, where Compound A is resistant
to treatment and Compound B is very amenable to treatment. The effluent
is moderately toxic (Compound A).
Figure 1
-------
67
APPROACH FOR A PHASE I TIE
The whole effluent sample is split: one portion goes directly to
toxicity testing; the other receives a physical/chemical manipulation
followed by toxicity testing. The two toxicities are then compared.
Figure 2
-------
68
Phase I - Toxicity Characterization
Physical/Chemical Manipulations
Baseline Toxicity
pH 3 Adjustment
pH 11 Adjustment
pH 3 Aeration
Initial pH Aeration
pH 11 Aeration
pH 3 Filtration
Initial pH Filtration
pH 11 Filtration
pH 3 Solid-Phase Extraction
Initial pH Solid-Phase Extraction
pH 9 Solid-Phase Extraction
EDTA Chelation
Oxidant Reduction
Graduated pH
Toxicity Testing
Compare toxicity of manipulated effluent samples
with toxicity of whole effluent
Figure 3
-------
69
Phase I TIE

Manipulation                              Reduce Toxicity?
                                          Yes       No
Solid-Phase Extraction
  (non-polar organics)                               X
EDTA Chelation                             X
Aeration (any pH)                                    X
Acid or Neutral Filtration                           X
pH 11 Filtration                           X

Figure 4
-------
70
Municipal WWTP
Phase I Results

Manipulation                 48-hour Survival (%)
Whole Effluent                        20
pH 3 Adjustment                       60
pH 3 Filtration                       20
pH 3 Aeration                        100
pH 3 SPE                             100
Initial pH Filtration                  0
Initial pH Aeration                  100
Initial pH SPE                       100
pH 11 Adjustment                      20
pH 11 Filtration                       0
pH 11 Aeration                       100
pH 9 SPE                             100
EDTA Chelation                      0-80
Oxidant Reduction                   0-20

Figure 5
-------
71
SPE Elution

One liter of effluent is passed through the SPE column; the column is
then eluted with methanol fractions of 25%, 50%, 75%, 80%, 85%, 90%,
95%, and 100%. Each methanol fraction is tested for toxicity (5x).

Figure 6
-------
72
SPE Elution Results

Fraction      Survival (%)
25%               100
50%               100
75%               100
80%               100
85%                 0
90%                 0
95%               100
100%              100

Figure 7
-------
73
Toxicity Removal by EDTA (Yes / No) versus Toxicity Removal by Sodium
Thiosulfate (Yes / No): a 2 x 2 contingency table of the metal salts
tested. Salts tested: Copper Chloride, Cadmium Chloride, Mercuric
Chloride, Zinc Chloride, Manganese Chloride, Lead Nitrate, Nickel
Chloride, Silver Chloride, Sodium Selenate, Iron Chloride, Chromium
[III] Chloride, Potassium Dichromate, Sodium m-Arsenite, Sodium
Arsenate, Sodium Selenite, Aluminum Chloride.

Figure 8
-------
74
Refinery Effluent
SPE Elution

Test                48-h LC50 (%)    Concentration    Toxic Units
Whole Effluent            33              1x               3
25%                     >100              5x               -
50%                     >100              5x               -
75%                       71              5x              0.3
80%                       55              5x              0.4
85%                       62              5x              0.3
90%                       71              5x              0.3
95%                     >100              5x               -
100%                    >100              5x               -
Total of Fractions                                        1.3

Figure 9
-------
75
Refinery Effluent
Graduated pH Test (24-hour survival)
pH 6 pH 7 pH 8
0% 100% 0%
After Solid-Phase Extraction
pH 6 pH 7 pH 8
100% 100% 0%
Figure 10
-------
76
Refinery Effluent
SPE Elution

Fraction      24-hour Survival (%)
              pH 6      pH 8
25%            100       100
50%            100       100
75%              0       100
80%             40       100
85%            100       100
90%            100       100
95%            100       100
100%           100       100

Figure 11
-------
77
MR. TELLIARD: Is the
coffee out back? Yes? Okay. We are going to take a 15-
minute break, 15, that is, a 10 and a 5...a 15-minute break
to get a cup of coffee, come on back in, and you can listen
to Dow tell us about dirty, dangerous, and deadly dioxin.
(WHEREUPON, a brief recess was taken.)
MR. TELLIARD: Our next
speaker this morning is Les Lamparski from Dow Chemical.
The agency at the present time is spending $1.7 trigabucks
looking at the impact of dioxins and furans in the
environment, in particular, as it relates to the pulp and
paper industry but also in refining, centralized waste
treaters, and so forth.
This morning, we have two people speaking on dioxin
analysis. Les is probably one of the more formal speakers
in the sense that he has been at it longer than most of the
average bears dealing with the analysis of dioxin and
furans, and we are very fortunate to have him this morning.
So, with no further ado, Les?
-------
78
MR. LAMPARSKI: Thanks, Bill.
Probably one of the few people who, as Bill said,
has been involved in this longer than I have is the
other speaker this morning, Tom Tiernan. So, I haven't been
involved the longest.
This morning, Terry and I were sitting in the bar
having a cup of coffee, and we decided that this talk was a
little bit too boring the way it had originally been
configured. So, we took the slides from two talks and
combined them, just sort of shuffled them up and put them
together. So, I am going to try to go through these slides.
Don't look at the numbers at the bottom of the slides,
because they jump all over. Basically, what I am trying to
do is describe the instrument and show some applications at
the same time, so it may be a little bit jumbled around.
Our work in the measurement of chlorinated dioxins is
driven by the fact that our laboratory at Dow has
responsibility basically for the analysis of chlorinated
dioxins and furans for all of Dow U.S.A. So, there are a
number of sites across the country for which we are responsible
for answering to the various State and EPA regional
regulations.
So, we have a wide variety of samples that have to be
analyzed for these types of compounds, and we are always
-------
79
looking for ways to make the analysis a little bit easier.
And by being easier, we mean faster and cheaper.
Currently, chlorinated dioxins require quite a
sophisticated cleanup procedure prior to the measurement,
generally by GC mass spec. These cleanup restrictions tend
to limit the size of studies that can be realistically
performed in an environmental survey. If you have to spend
a week to clean up a series of 5 or 10 samples, you are
naturally not going to want to invest the time to analyze or
to conduct a survey that would contain 1000 samples. So,
this is a very strong limitation that is placed on
environmental studies that are undertaken to analyze for
these compounds.
Because of this, there is always the possibility that any
data generated in an environmental study has been
compromised in some fashion...for instance, if the samples
have not been homogenized properly. Homogenization, or
compositing, is one method chosen to reduce the number of
samples that are analyzed: possibly we could collect 1000
samples and combine them in sets of 10 in some fashion and
only analyze 100.
Well, if this combination procedure is not truly
indicative of the samples in the environment, we may have
compromised the integrity of the final data.
-------
80
This type of approach may have a very serious effect on
any risk assessment that comes out of a study, whether it is
in a positive fashion and the risk is determined to be
greater than it is or if the risk is falsely determined to
be too low. Either of those possibilities can cause a
problem.
We decided that it might be an interesting approach for
an environmental study to focus on specific types of
compounds. Rather than looking for all of the chlorinated
dioxins and chlorinated furans that are possible, all 210
isomers, let's look specifically at only a couple of the
key...let's call them the most toxic isomers.
In this case, we could say 2,3,7,8-TCDD and 2,3,7,8-
TCDF. I will show this structure on a later slide for those
of you who aren't familiar with them.
By limiting the number of analytes and limiting the
sensitivity at which we are going to look for these compounds,
let's not go down to 1 part per quadrillion or whatever in a
soil sample. Let's set a realistic limit. If it is 10
parts per trillion or 1 part per billion, that limit will
determine how large a sample is going to be analyzed to
begin with.
Naturally, if you have a smaller sample to begin with,
you can use a cleanup...the cleanup procedure will be much
more effective, generally, if it only has to be applied to
-------
81
10 mg of sample rather than to 100 grams. So, it will allow
us to use more efficient methodology and simpler equipment.
What kind of simpler equipment are we talking about?
We decided to investigate the use of 2-dimensional gas
chromatography followed by a low resolution mass
spectrometer as a detection system. There were a number of
potential advantages that we saw when we decided to try to
construct this instrument, and they are shown here on this
slide.
Basically, they allow us to automate the system, and
the changes that had to be made to the system were
compatible with the instrumentation that we currently had
in-house. Naturally, if somebody has a low resolution mass
spectrometer and all of a sudden the EPA mandates that high
resolution mass spectrometry is the only way that a sample
can be analyzed, you are stuck with a $200,000 boat anchor.
So, we were looking for ways to extend the capability
of low resolution mass spectrometry and, in fact, in some
cases, to try to use low resolution mass spectrometry as an
alternative to the high resolution mass spectrometers that
are currently in vogue.
The system that we constructed shown here consists of a
5987 Hewlett-Packard low resolution quadrupole mass
spectrometer. It is a 7-year-old instrument, so it is a
well run instrument.
-------
82
The 2-dimensional system is contained in a 5890 GC
shown here. It was purchased from a company called
Analytical Controls, Incorporated (ACI), and the 5890
contains a Dean switch and a cryogenic trap for collecting
the analytes. It is configured presently with an FID and an
electron capture detector for monitoring pre-column
effluents.
The whole system is controlled by a single controller
provided by ACI which monitors all of the components. It
basically monitors all the components in the 5890. The only
electronic link between the 5890 and the 5880, the GC/MS
portion of the instrument, is by a remote start cable. That
is the only connection except for the analyte transfer line
which is that large black line going across the top of the
two instruments.
The analyte transfer connection is an approximately 40-
inch heated line. The transfer is through a fused silica
tube which can be changed to match the characteristics of
your analyte. So, if you want to have a coated fused silica
column containing DB-5 or Supelco 2330 or whatever, you can
change that. Presently, we are using DB-5 because it is
very rugged, and that is also the column that we have on the
pre-column instrument.
-------
83
As you can see, the system is also automatable. We
have an autosampler which can be and is routinely used, but
it can be removed for manual injections.
This is the slide that we showed to management to try
to sell them on the subject. It is very crude, and it is
totally non-scientific, but management bought it, so they
bought the instrument for us. But basically what happens is
the sample is injected into the prep GC, goes through the
capillary column, a switch is effected, and the analyte can
either go to the detector or the cryo trap where it is
collected by the CO2-cooled trap, and then it can be heated,
transferred into the second chromatographic column, and
finally into the mass spectrometer for detection.
As I said, that was for management. What really
happens looks something like this. This is the flow
schematic for a Dean switch that is used by the ACI
controller. I am going to show you four of these slides
that are essentially the same, and what we are doing is
looking at where the analyte goes during the various stages
of the separation.
In the initial pre-column separation, the analyte is
injected into the preparatory capillary column, and
separation is achieved in this column. Generally, the
analyte goes into the EC detector for detection and
monitoring of effluent.
-------
84
The system contains two basic carrier gas systems. The
flow controlled system controls the preparatory GC column,
and the pressure controlled system controls the Dean switch
which is the method of switching the analyte flow between
the monitor detector and the analytical column.
So, at the appropriate time when the analyte is eluting
from the prep column, the Dean switch is switched. Valve 3
is switched so that flow goes in the opposite direction and
essentially pushes the analytes onto the cryogenic trap
which is being cooled by the CO2.
The analyte is trapped out for a given period of time,
and that is pre-determined by analyses of standards. The
analyte can then be reinjected by stopping the cooling CO2
onto the cryogenic trap, and it is resistively heated in
roughly one minute from 50 to 250 degrees, and the analyte
is deposited into the heated zone through the transfer line
and onto the analytical column where separation can then be
made under the types of conditions that are best for the
analyte and the column that is used for the final
separation.
At this time during the reinjection onto the analytical
column, the flow on the prep column has stopped. So, if we
want to collect multiple fractions, it is possible to then
go back, start the flow again on the column, and collect
-------
85
another fraction for a second analyte or any number of
analytes.
After the analyte is injected into the analytical
column, separation is made in the normal fashion, and the
flow on the preparatory column is reversed. What this does
is elute off the very heavy, high-boiling garbage that normally
would stick on the front end of the column and cause
problems.
This is a very real advantage for this type of system.
I will show you later that we are looking at crude extracts
of waste treatment sludge, let's say. We have analyzed
through the single preparatory column which is a DB-5 column
over 800 injections. We have made over 800 analyses on this
column without having to change the column, and part of that
advantage comes from the fact that the high boiling
components are eluted backwards off the column during this
phase of the analysis scheme.
If we look very briefly at the types of control
sequences for various components of the system, basically,
the prep controller...that is the ACI controller that is the
dominant computer in the whole system...it looks for ready
signals from the various other components in the system, the
autoinjector, the prep GC, the mass spec computer, the mass
spec GC, and the mass spectrometer itself.
-------
86
When it gets ready signals from all of those
components, it initiates the start of the run by telling the
autoinjector to make the injection. Prep GC does the
separation, and, as you can see, over a period of time, all
of these control sequences are brought into play by the prep
computer. All of this is done. There is no operator
intervention at all. Once you load the samples into the
autosampler rack, you can just walk away and forget about
it.
If we look at some of the data that we have gotten out
of this system, this is 2,3,7,8-TCDD. We have analyzed this
on the top trace by GC/MS, and this is 40 picograms injected
onto a DB-5 column running under conditions optimum for
detection of that component. You see the retention time is
about 6.5 minutes.
The bottom trace is the GC/GC/MS separation on a Lee SB
smectic column which we have chosen at this time to be
useful for determination of 2,3,7,8-TCDD isomer-
specifically. We have yet to hear Tom's talk, so I can't
really say that we want to use that column, but I am sure
that is one that we are going to be investigating in the
future.
I didn't want to steal any of his thunder, but I
couldn't resist.
-------
87
As you can see here, the sensitivity of the two traces,
the signal to noise ratio for 40 picograms injected onto the
column are virtually identical. Now, what this tells us is
that we are getting good transfer through the Dean switch
which is just a series of T's in the gas flow stream, and
also, we are not seeing very drastic effects, if any, of
adsorption onto various components in the switch itself.
It is predominantly a fused silica system, but there
are places where, if the system is not put together
properly, there could be some stainless steel active sites
which would cause adsorption of TCDD, but we don't see any
of that as evidenced by this chromatogram.
We then looked at the reproducibility of the system
compared to GC/MS, and using a manual injection without the
autosampler, you can see that the reproducibility over a
period of time...this was 9 runs that were run over a period
of 2 days... is certainly comparable to anything that can be
done by GC/MS for a similar type compound.
Using the autoinjector where 10 runs were run over a
period of about 20 hours total time, you can see that the
reproducibility is essentially the same. There is some
evidence that there was a little bit of gradual drift of the
instrument sensitivity which is causing this number to be
probably a little bit larger than it would be if it were a
very short-term experiment, but each analysis takes roughly
-------
88
38 minutes, and when we put in blank runs in between, it
took quite a long time to do that precision study.
We looked at the linearity over a small range of
concentrations for TCDD again, and these are injections from
5 picograms on column to 200 picograms on the first column.
As you can see, the linearity is very good.
If we increase the concentration range up by a factor
of 10 up to 2000 picograms on column, the linearity does
skew a little bit. It is biased somewhat high at the higher
concentrations, but this is certainly usable for most
applications.
If we examine what a real sample looks like on
this system: here we have a complete electron capture gas
chromatogram of an extract of waste solid from a waste
treatment plant in our company. This is the crude extract,
a benzene extract, of the sludge out of the treatment plant.
As you can see, by electron capture, there are a lot of
compounds eluting off of this column.
This is a 30-meter DB-5 column programmed from 135 to
roughly 280. The fraction of interest...this is supposed to
be highlighted in grey, and I don't know if you can see it
in the back of the room...but we are looking at a roughly
half-minute wide window between 9.5 and 10 minutes.
A 1-nanogram response with TCDD is shown on the upper
right of the chromatogram. So, as you can see, some of
-------
89
these components that are eluting off are very large
compared to the TCDD that we are supposedly looking for.
We collected the 9.6-10.1 minute fraction and then put
it onto our analytical column, the Lee smectic column, and
what comes out by GC/MS...by the second GC/MS...is a single
peak with no interfering components. The amount observed is
5.3 ppb. The detection limit was set up in this study to be
of the order of .5 ppb. The regulatory level for this
compound in this matrix was 1 ppb.
So, we set up the analysis by using a 10 mg sample to
begin with. We extracted a large sample, took an aliquot
corresponding to 10 mg and injected that directly into the
GC/GC/MS, and we were able to determine the TCDD as shown
there.
The bottom trace is the internal standard that is added
prior to the analysis. We use this to monitor recovery
through any cleanup procedures that could be used. In this
case, we used the internal standard in a typical response
factor calculation. The internal standard is used to
determine the amount of analyte that is present.
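(A minimal sketch of the internal-standard response-factor arithmetic
being described; the areas and masses below are invented simply to land
near the 5.3 ppb result of this example, and the function names are
assumptions, not Dow's actual data-reduction code.)

    def response_factor(area_analyte, mass_analyte, area_is, mass_is):
        # Relative response factor from a calibration standard run.
        return (area_analyte / mass_analyte) * (mass_is / area_is)

    def analyte_pg(area_analyte, area_is, mass_is_added_pg, rf):
        # Amount of native analyte in the sample, via the spiked internal standard.
        return (area_analyte / area_is) * mass_is_added_pg / rf

    rf = response_factor(29000, 40, 80000, 100)      # hypothetical calibration run
    pg_tcdd = analyte_pg(36500, 76000, 100, rf)      # hypothetical sample run
    ppb = pg_tcdd / 10.0                             # 10 mg sample: pg per mg = ppb
    print(round(pg_tcdd), round(ppb, 1))             # ~53 pg, ~5.3 ppb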
What does all of this mean? Why would anybody want to
invest the time? This instrument costs roughly... the 5890
GC and the controllers cost about $50,000. This is if you
have a mass spectrometer that you are willing to devote to
-------
90
this type of system. Why would I want to spend $50,000 for
a single type of analysis like this?
First of all, we should make the point that this isn't
a single type of analysis. This is a very broad... there are
a lot of applications that are potential for this type of
instrument, not only in the field of trace analysis but also
in product analysis. If you are looking for a very minor
component in a mixed product or a waste stream from a
product, there is a very good likelihood that this type of
instrument would be applicable for that type of analysis.
But let's stick to the original topic, determination of
chlorinated dioxins and furans in environmental samples. We
currently have an environmental study which is being
mandated by the State of Michigan in which we are going to
look at a variety of sediment samples from nearby the plant.
At present, this study is slated to be 100 samples
large. Now, if we were to only look for 2,3,7,8-TCDD and
TCDF by our current method which involves a significant
amount of sample preparation prior to GC/MS measurement, we
are looking at a time period of 24 weeks for two people to
do the analysis, and that breaks down to a cost of roughly
$2400 per sample.
Now, I am sure a lot of you contract labs say that you
can analyze the samples for significantly less than that,
-------
91
but let's just look at the relative amounts that we're
talking about.
Using the 2-dimensional LRMS system, we project that
these 100 samples can be done in 3 weeks. This breaks down
into $240 per sample. Well, obviously, it won't take too
many studies of this size to pay for that instrument many
times over.
So, that is why we feel that it will be a useful tool
in a general analytical laboratory in the future. As I
say...this is the commercial...Analytical Controls is the
company that sells this GC instrument and controller, and
they will be more than happy to take your money.
MR. TELLIARD: Thank you,
Les. Any questions?
-------
92
QUESTION AND ANSWER SESSION
MS. KLATT: I am Kelly Klatt
with J&W Scientific.
The question I do have is, you evidently showed first
the GC ECD slide, the extract, and you had evidently spiked
in the 2,3,7,8-TCDD. Right?
MR. LAMPARSKI: In the extract
of the sample?
MS. KLATT: Right, right.
MR. LAMPARSKI: Yes.
MS. KLATT: And then you
showed the...
MR. LAMPARSKI: No, that is
not spiked in. This is naturally occurring TCDD.
MS. KLATT: Okay, right. And
then you showed this. Was this sample that you extracted
this from, did it also have the other tetra isomers?
MR. LAMPARSKI: Yes, it does.
MS. KLATT: It does. So, this
does separate all the other tetra isomers?
MR. LAMPARSKI: Yes. The Lee
column does separate 2,3,7,8-TCDD from the other TCDD's. It
doesn't separate the furans.
MS. KLATT: Oh, okay.
-------
93
MR. LAMPARSKI: And, to be
truthful, when we started this study, Tom's column wasn't
available yet.
MS. KLATT: Okay, thank you.
MR. TELLIARD: Any other
questions?
DR. SCHULTZ: Bill Schultz
from Eastern Kentucky University.
I don't suppose you have a trace of what it looks like
with selected ion monitoring from a single column separation
that compares to your ECD?
MR. LAMPARSKI: I don't have
it on an overhead, but I can show you a similar comparison.
DR. SCHULTZ: It seems to me
that the selected ion would give you some separation.
MR. LAMPARSKI: No, it is
worse than a picket fence. It goes up, and it is just
humpy.
MR. TELLIARD: A humpogram?
MR. LAMPARSKI: Yes. Well,
this is the saturated one. It is humptane.
(Laughter.)
MR. TELLIARD: Good morning,
John.
-------
94
MR. MCQUIRE: John McQuire from
EPA.
Have you, in the interest of trying to come up with low
cost...lower cost...low resolution mass spec investigated
any of the work that Don Hunt did a few years back, maybe
10, on negative oxygen CI? He claimed that you could get
isomer-specific breakdowns and all sorts of things. I was
wondering if you had done anything with that.
MR. LAMPARSKI: We haven't done
anything with that, but in review of his work, the
separation is not really that clear cut. You are looking at
fragment ions that are potentially formed in one case and
not in the other case. If the separation is not such that
the appropriate isomer is separated, then you gain very
little advantage.
Also, there are generally very real problems with
matrix effects for NCI analyses. So, if you inject a
standard, you get one response, and if you inject a dirty
sample matrix, you will get something totally different.
For that reason, we felt that the reliability of that
type of system was not what we were looking for. Basically,
what we tried to do is, by selling the EPA the concept of
running fewer samples, what we also try to say is we are
going to run fewer samples, but we are going to give you
more reliable data on those fewer samples.
-------
95
MR. LAMPARSKI: So, we are...
MR. MCQUIRE: Okay, understood. Thank you.
MR. LAMPARSKI: ...trying to balance things out that way.
I might add that we have run a number of comparison
studies between the 2-dimensional analysis and the GC/MS
analysis using our full 3-day-long cleanup procedure, and
the results are very comparable. So, the agreement between
the two systems is very good. The only difference is the
amount of time that it takes to do the analyses.
MR. MCQUIRE: Okay, thank you.
MR. TELLIARD: Anyone else?
I certainly like that $240 price, Les.
MR. LAMPARSKI: For you, it
will be a little bit different.
MR. TELLIARD: Yes, I know, you
are going to deal. Right.
Thank you very much.
-------
96
DEVELOPMENT OF AN HRGC-HRGC-LRMS SYSTEM:
INSTRUMENT DESIGN AND PERFORMANCE DATA
L. L. Lamparski and T. J. Nestrick
The Dow Chemical Company
Michigan Research and Development
Analytical Sciences, Special Analysis, 1602 Building
Midland, Michigan 48674 USA
Analytical Controls, Inc., can supply a Hewlett Packard Model 5890 high resolution gas
chromatograph that is appropriately configured to accomplish fully automated, single-oven,
2-dimensional gas chromatography. As a portion of an extended research program dedicated to
improving the capabilities of conventional instrumentation, we have developed and assembled
the necessary components to permit the linkage of such a unit to a Hewlett Packard Model 5987A
high resolution gas chromatograph-low resolution mass spectrometer (HRGC-LRMS). The
results of this project have produced an instrument that can routinely conduct fully automated,
dual-oven, 2-dimensional HRGC separations in conjunction with LRMS identification and detection
for solutes eluting from the secondary analytical gas chromatograph. A description of this new
analytical instrument and the techniques required to assemble it from commercially available
resources will be the subject of our presentation. Preliminary evaluation and testing of the
operational characteristics of the unit indicate that it may be a valuable tool for measuring a
variety of compounds at trace concentrations in extremely complex matrices.
-------
97
Development of HRGC-HRGC-LRMS
Instrument Design & Performance Data
Lester L. Lamparski & Terry J. Nestrick
The Dow Chemical Company
Michigan Research and Development
Analytical Sciences, Special Analysis Group, 1602 Building
Midland, Michigan 48674 USA
The Dow Chemical Company
EPA Conference, Norfolk, May 90 SLIDE # 1
-------
98
Current Situation for CDD/CDF Surveys
Prelim matrix preparations often required.
Variable extraction procedures for different matrices.
Specialized cleanup often required for analytes.
Cleanup simplification = Sophisticated instruments.
Expensive & time consuming projects.
The Dow Chemical Company
EPA Conference, Norfolk, May 90 SLIDE # 2
-------
99
END RESULT for Current Situation
Limited number of surveys actually performed.
Limited number of samples examined in a survey.
Often necessitates sample compositing.
Sample representativeness often questionable.
Statistics on resultant survey data less reliable.
Impact on Risk Assessment policies?
The Dow Chemical Company
EPA Conference, Norfolk, May 90 SLIDE # 3
-------
100
New Approach for Regulatory Surveys
Limit analytes (e.g., toxicological significance).
Limit sensitivity (e.g., >10 PPT).
Use larger sample cohorts to improve reliability.
Use efficient methodology & simpler equipment.
The Dow Chemical Company
EPA Conference, Norfolk, May 90 SLIDE # 4
-------
101
Why GC - GC - MS ?
Affords increased chromatographic efficiency.
Automation capability.
Compatible with current GC-MS equipment.
Extension of current cleanup capabilities.
Potential means to reduce/eliminate cleanup.
Broad applicability.
The Dow Chemical Company
EPA Conference, Norfolk, May 90 SLIDE # 2
-------
102
POTENTIAL
To extend the capability of LRMS?
Alternative to HRMS for trace analysis?
The Dow Chemical Company
EPA Conference, Norfolk, May 90 SLIDE # 3
-------
103
GC - ANALYTICAL
HP-5880 (LRMS)
Heated Transfer Line
Independent Oven
Secondary Injector
GC - PREPARATIVE
HP-5890 (FID & EC)
Deans Switching
Cryogenic Trap
ACI Controller
HP-7673A Autoinjector
The Dow Chemical Company
EPA Conference, Norfolk, May 90 SLIDE # 4
-------
104
Block diagram: the preparatory gas chromatograph is linked through a
cryogenic trap and heated transfer line to the analytical gas
chromatograph.
The Dow Chemical Company
EPA Conference, Norfolk, May 90 SLIDE # 5
-------
105
Precolumn Separation
Flow schematic of the precolumn separation stage. Components shown:
injector, flow controller, preparatory capillary column, helium inlet,
pressure regulator, CO2 inlet, and analytical capillary column.
Legend: solutes path, carrier gas, no flow.
The Dow Chemical Company
EPA Conference, Norfolk, May 90 SLIDE # 6
-------
106
Analyte Trapping
Flow schematic of the analyte trapping stage. Components shown:
injector, flow controller, preparatory capillary column, helium inlet,
pressure regulator, CO2 inlet, cryogenic trap («cool»), FID, EC, trap
effluent, heated transfer line, analytical capillary column, and LRMS.
Legend: solutes path, carrier gas, no flow.
The Dow Chemical Company
EPA Conference, Norfolk, May 90 SLIDE # 7
-------
107
Analyte Reinjection
Flow schematic of the analyte reinjection stage. Components shown:
injector, flow controller, preparatory capillary column, helium inlet,
pressure regulator, CO2 inlet, cryogenic trap («heat»), FID, EC, trap
effluent, heated transfer line, analytical capillary column, and LRMS.
Legend: solutes path, carrier gas, no flow.
The Dow Chemical Company
EPA Conference, Norfolk, May 90 SLIDE # 8
-------
108
Analytical Separation & Precolumn Backflush
Flow schematic of the analytical separation and precolumn backflush
stage. Components shown: injector, flow controller, preparatory
capillary column, effluent filter, helium inlet, pressure regulator,
CO2 inlet, cryogenic trap («heat»), FID, EC, trap effluent, heated
transfer line, analytical capillary column, and LRMS. Legend: solutes
path, carrier gas, no flow.
The Dow Chemical Company
EPA Conference, Norfolk, May 90 SLIDE # 9
-------
109
GC-GC-MS Control Sequence for TCDD
Timing diagram over a 0-36 minute run showing the Run/Ready states of:
Prep Computer, Autoinjector, Prep GC, Cryo valve, Cryo trap, Trap heat,
Backflush, Foreflush, Oven cool, Oven equil, Anal Computer, Analytical
GC, and LRMS.
The Dow Chemical Company
EPA Conference, Norfolk, May 90 SLIDE # 10
-------
110
Chromatographic comparison
Selected ion chromatograms (m/z = 322, 40 pg injected) for 2,3,7,8-TCDD:
upper trace GC-MS (5.0-9.0 min window), lower trace GC-GC-MS
(9.0-13.0 min window); ion intensity plotted 0-100%.
The Dow Chemical Company
EPA Conference, Norfolk, May 90 SLIDE # 12
-------
111
Manual Injection GC - GC - MS
TCDD Reproducibility Data for 40 pg

                        m/z 320     m/z 322
n (number of runs)          9           9
avg response            23489       29188
relative S.D.            3.5%        4.6%
The Dow Chemical Company
EPA Conference, Norfolk, May 90 SLIDE # 13
-------
112
Autoinjector GC - GC - MS
TCDD Reproducibility Data for 40 pg

                        m/z 322     m/z 324
n (number of runs)         10          10
avg response             7784        3774
relative S.D.             5.5%        3.8%
The Dow Chemical Company
EPA Conference, Norfolk, May 90 SLIDE # 14
-------
113
Autoinjector GC - GC - MS
Detector Linearity for TCDD (normal range)
Calibration plot: detector response (0 to 50,000 counts) versus
picograms TCDD injected (40 to 200 pg), for m/z 322 and m/z 324.
The Dow Chemical Company
EPA Conference, Norfolk, May 90 SLIDE # 15
-------
114
Autoinjector GC - GC - MS
Detector Linearity for TCDD (extended range)
Calibration plot: detector response (0 to 600,000 counts) versus
picograms TCDD injected (400 to 2000 pg), for m/z 322 and m/z 324.
The Dow Chemical Company
EPA Conference, Norfolk, May 90 SLIDE # 16
-------
115
Prep HRGC (ECD) for Waste Solids
Electron capture chromatogram of the crude waste solids extract on the
preparatory column (0-16 min). The response for ~1 ng of 2378-TCDD
injected is marked for scale, and the trapped analytes fraction is
highlighted.
The Dow Chemical Company
EPA Conference, Norfolk, May 90 SLIDE # 11
-------
116
Analytical HRGC-LRMS for Waste Solids
2378-TCDD = 5.3 PPB
Selected ion chromatograms (8.0-14.0 min): native 2378-TCDD at m/z 322
and m/z 324, and the internal standard at m/z 334.
The Dow Chemical Company
EPA Conference, Norfolk, May 90 SLIDE # 12
-------
117
Why should I bother with HRGC-HRGC-LRMS?
Dow perspective: 100 samples, 2378-TCDD & 2378-TCDF

Analytical Method       Analysis Time     Cost per Sample
Full Cleanup             ~24 weeks            $2,400
HRGC-HRGC-LRMS            ~3 weeks              $240

The Dow Chemical Company
EPA Conference, Norfolk, May 90 SLIDE # 15
-------
118
MR. TELLIARD: The next
speaker is going to talk about MAGIC. I always like that.
Jim de Haseth is with the University of Georgia. We
heard about MAGIC four years ago when Dr. Browner talked
about MAGIC as it related to particle beam, and today we are
going to talk about HPLC/FT-IR MAGIC. Anytime we get MAGIC,
it is fun.
-------
119
DEVELOPMENT OF A MAGIC INTERFACE
FOR HPLC/FT-IR
James A. de Haseth* and Richard F. Browner†
Department of Chemistry
School of Chemical Sciences
University of Georgia
Athens, GA 30602
†School of Chemistry
Georgia Institute of Technology
Atlanta, GA 30332
-------
120
ABSTRACT
The interfacing of separation methods with selective detectors has been a long sought after
goal. In some instances, such as gas chromatography (GC) interfaced with spectrometric methods
such as mass spectrometry (MS) or Fourier transform infrared (FT-IR) spectrometry, considerable
success has been achieved. Interfaces with liquid chromatography have been far less successful due
to the difficulty in eliminating the mobile phase. In the infrared, liquid mobile phases must be
eliminated as they are opaque at all wavelengths at even short pathlengths. The solutes are generally
in such low concentration that long pathlengths must be used to record their spectra in the presence
of the solvent. However, the solvent opacity precludes this option. Solvent elimination has been
possible for high volatility nonpolar mobile phases, yet the majority of liquid chromatography is
accomplished with polar (reverse phase) solvents. An interface has been developed that can
accommodate both normal and reverse phase solvent systems, as well as gradient elution systems.
This interface is the Monodisperse Aerosol Generation Interface for Combining (MAGIC) liquid
chromatography (LC) with Fourier transform infrared (FT-IR) spectrometry. This method provides
a sensitive interface between HPLC and FT-IR spectrometry. Detection limits below 1 ng are
possible, and the use of buffered solvent systems with the HPLC does not impair the performance
of the interface.
-------
121
INTRODUCTION
The increased identifying power that results from coupling separations techniques with
selective detectors has been dramatic. The value of such procedures arises because the process of
separation of a mixture into its components does not in itself identify the chemical or structural
nature of the compounds present. Such vital information is only provided by a detector that is
capable of responding in some unique manner to the chemical nature of the separated compounds.
Depending on the perspective of the researcher, the development of such analytical
techniques, loosely known as "hybridized techniques", may be seen either as a means of improving
the quality of analytical detectors for chromatography, or of simplifying clean-up procedures for
sample introduction to the detector of choice. In the recent past, when many of the selective
detectors available commercially were extremely expensive, the second approach was prevalent. The
advent of powerful but inexpensive detectors for mass spectrometry (MS) and Fourier transform
infrared (FT-IR) spectrometry, for example, has radically changed the situation.
GC VERSUS LC SEPARATIONS
The separation and identification of volatile components of materials is generally
accomplished by interfacing gas chromatography (GC) with sophisticated and highly selective
-------
122
physical analysis detectors. The most notable physical analysis techniques used for this purpose are
mass spectrometry (MS) and more recently, Fourier transform infrared (FT-IR) spectrometry. Both
of these techniques provide information about an analyte's molecular structure; mass spectrometry
provides structural information based on molecular fragmentation patterns, whereas FT-IR
spectrometry provides characteristic functional group and isomeric information. A key factor is that
the two techniques provide information that is very complementary in nature. By interfacing MS
or FT-IR spectrometry with GC, either individually (i.e. GC/MS or GC/FT-IR spectrometry), or
in combination (GC/FT-IR/MS), a wealth of information for the identification of gas chromato-
graphic eluites results. Appropriate combination of the data from both techniques can give rise to
a substantial increase in the identifying power available for unknown compounds. For example,
work with combined systems has shown that a much greater success rate is obtained when spectral
data from FT-IR and MS are used jointly, than when either technique is used alone (1).
The types of molecules that may be successfully separated by gas chromatography, while
extensive, are nevertheless severely limited by the dual problems of compound volatility and thermal
lability. While many procedures, such as derivatization, may help to circumvent these problems,
there is clearly a limit to the molecular weight and the polarity of molecules that may be separated
by gas chromatography. The only practical alternatives currently available for separation of high
molecular weight and high polarity molecules are liquid chromatography (LC) and supercritical fluid
chromatography (SFC). While SFC clearly has great potential for a large class of compounds of
intermediate molecular weight and low or intermediate polarity, effective separation procedures are
not yet available using SFC for either very large molecules (e.g. > 2000 daltons) or very polar
molecules. Many compounds of great biological and environmental importance are therefore not
presently amenable to SFC separation. SFC has an additional problem resulting from the limited
-------
123
sample mass that can be injected on-column. Consequently, while absolute detection limits may be
in the picogram range, concentration detection limits may be inadequate for trace analysis.
Interfacing liquid chromatography with MS and FT-IR detectors is a much more demanding
problem than that experienced with interfacing GC with these detectors. This results from the
much greater difficulty involved with removing the LC mobile phase. With GC separations, the
mobile phase, which is often helium, presents little difficulty for the detector. For example, in
GC/MS analysis with packed columns an enrichment device, such as a jet separator, is used. With
low-flow capillary columns, the pumping capacity of the mass spectrometer itself provides sufficient
enrichment. By these means, the carrier gas is reduced to a level that does not influence the
operation of the mass spectrometer. In GC/FT-IR spectrometry, no mobile phase elimination is
necessary, as the carrier gases typically used are optically transparent in this region of the
electromagnetic spectrum. On the other hand, liquid chromatographic mobile phases are neither so
readily eliminated nor are they optically transparent in the infrared region. As a consequence, there
are currently no commercially available or widely accepted methods for directly interfacing LC and
FT-IR spectrometry.
Two basic research approaches to LC/FT-IR interfacing have been followed to date: (1)
effluent flow-through systems and (2) solvent elimination procedures.
SOLVENT CONSIDERATIONS FOR DIRECT LC/FT-IR MEASUREMENTS
If a liquid chromatographic eluite is sufficiently concentrated, its infrared spectrum can still
be measured even in the presence of LC solvent. No solvent is totally infrared transparent,
however, even in the regions where there is no apparent infrared absorption band. The object of
the LC/FT-IR spectrometric experiment is therefore to record the spectrum of the eluite and ignore
or eliminate the spectrum of the solvent. In infrared spectrometry, as well as in many other
absorption spectrometries, the spectrum of a solute is recorded by ratioing a solution spectrum
against the spectrum of the pure solvent. This operation is valid only when the absorbance of the
solute is relatively high and the absorbance of the solvent low. If the solvent absorbance becomes
sufficiently great, the spectrum is opaque in the region of the absorption band and no solute
absorbance can be measured. If the concentration of the solute is too low, the absorbance due to
the solute will be in the noise level of the measurement, and again a spectrum cannot be recorded.
In normal practice, spectrometric absorbance can be controlled by pathlength. In an ideal situation
the solvent should absorb no more than approximately 37% of the infrared radiation. In this case
an optimum signal-to-noise ratio can be attained (2).
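As a point of reference (simple arithmetic, not part of the original text), 37% transmission corresponds to an absorbance of

    A = -log10(T) = -log10(0.37) = 0.43

which is the classical optimum for absorbance measurements limited by detector noise.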
Unfortunately typical liquid chromatographic solvents do not absorb uniformly across the
infrared region, and it is impossible to design an ideal cell for a complex absorber. A compromise
must thus be made, and generally pathlengths are chosen so that only a few bands are opaque and
most of the spectrum is transparent. The relatively transparent regions of the solvent spectrum form
windows in which the signal-to-noise ratio (SNR) is sufficiently high that a spectrum of the solute
can be recorded. The pathlengths vary depending upon the solvent (3). For example, a weak
infrared absorber such as carbon tetrachloride can be used with pathlengths of up to 1 mm. Very
strong absorbers, such as water, permit only very short pathlengths, up to about 0.1 mm. Other
solvents may fall into intermediate regions.
In a dynamic system, such as in LC/FT-IR spectrometry, the interface cell should be
designed for the maximum SNR of the eluite. At a first approximation this is when the entire
analyte band is trapped momentarily in the interface cell. In this situation, the maximum possible
analyte mass is measured spectrometrically, and so highest sensitivity will be achieved. The
practical problem this approach gives rise to is that of ensuring the chromatographic resolution is
not compromised. If the volume of the interface cell is too great, it is possible for two chroma-
tographically resolved eluites to be in the cell at the same time. To avoid this problem, the volume of
the interface cell must be the full width at half height volume of the eluite peak (4). An
appropriate diameter for an LC/FT-IR cell is on the order of about 4 mm, based on optical and
chromatographic flow considerations. In a conventional HPLC experiment (column i.d. of 4.6 mm)
a typical narrow peak is on the order of 250 µL. An ideal cell of 4 mm diameter (12.5 mm² area)
would then have a pathlength of 20 mm. This is more than an order of magnitude longer than the
maximum cell pathlength for the weakest infrared absorbing solvent. Cells of larger diameter
provide an ideal shorter pathlength at the cost of sensitivity; smaller diameter cells are more
sensitive because the ideal pathlength is longer, but the discrepancy between ideality and practice
is greater.
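The geometric argument above can be checked with a few lines of arithmetic. The sketch below is illustrative only; the 250 µL peak volume and 4 mm cell diameter are the values quoted in the text, and the helper function name is ours.

    import math

    def ideal_pathlength_mm(peak_volume_uL, cell_diameter_mm):
        # Pathlength of a cylindrical cell sized to hold one full peak volume.
        area_mm2 = math.pi * (cell_diameter_mm / 2.0) ** 2   # ~12.6 mm^2 for a 4 mm i.d. cell
        return peak_volume_uL / area_mm2                     # 1 uL = 1 mm^3

    print(ideal_pathlength_mm(250.0, 4.0))   # ~19.9 mm, i.e. the ~20 mm quoted above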
LC/FT-IR spectrometry, therefore, presents the researcher with an interesting dichotomy:
cell pathlength must be long for maximum sensitivity, but it must be short to provide adequate
solvent windows to record solute spectra. As long as the chromatographic eluite is sufficiently
concentrated, then short pathlength cells can be employed successfully. Where solute concentrations
are extremely low, cell pathlengths are in practice too short to record spectra at useful
signal-to-noise ratios.
FLOW-THROUGH LC/FT-IR SYSTEMS
The earliest LC/FT-IR spectrometry systems used flow-through cells (5-8). These prototype
systems suffered from severe problems, and were not really practical. Two related areas where
flow-through cells have proved to be of value, though, are those of gel-permeation chromatography
(GPC) and size-exclusion chromatography (SEC). In these chromatographic techniques, where
components are separated on the basis of molecular size rather than by polarity or adsorption
characteristics, chlorinated solvents can be used, which makes GPC/FT-IR spectrometry a
powerful technique. Chlorinated solvents, for the most part, do not absorb strongly across the
infrared spectrum except in the C-Cl stretch region and possibly in the C-H stretch region. These
regions are opaque even at short pathlengths. Vidrine (3) compiled a series of spectra of common
LC solvents at various pathlengths and showed that many of the solvents that are useful for GPC
can be used with reasonable pathlengths. Vidrine showed GPC/FT-IR spectra of the separation of
poly(butyl acrylate) and polystyrene on a 100 Å µ-Styragel column using THF (tetrahydrofuran) as
the solvent. Although THF is far from an ideal solvent because it is not transparent, some spectral
information was still recorded in the flow-through system. The regions 3000 to 2800 cm-1 and 1300
to 800 cm-1 are obliterated due to THF absorptions. Had a more polar solvent been used, however,
much less spectral information would have been recovered.
Flow-through systems are not restricted solely to GPC, and this technology has been applied
to both normal-phase (NP-) and reverse-phase (RP-) HPLC. Most of the solvents used in
NP-HPLC are less polar, and hence absorb IR radiation less strongly than THF, but overall
performance is often not as good as in GPC/FT-IR spectrometry. In GPC, chromatographic sample
loadings can be quite high, on the order of mg, but typical HPLC 4.6 mm i.d. silica gel columns
have capacities of only 2 to 50 µg. The concentration of most NP-HPLC eluites is on the order of
200 ppm. These low sample loadings therefore require long cell pathlengths in order to record
spectra with good SNR, but this contradicts the requirement for short pathlengths dictated by the
solvent system.
A series of fine studies involving flow cells with NP-HPLC has been completed under the
direction of Taylor (9-16). In the earliest of these studies Johnson and Taylor (9) compared the use
of semi-preparative, analytical and microbore columns for NP-HPLC/FT-IR spectrometry. In order
to improve the transmission of the cell and reduce the number of opaque regions, Freon-113 was
used as the mobile phase. Even so, it was necessary to use a 0.2 mm pathlength cell to minimize
the number of opaque regions. These workers showed that microbore HPLC (µHPLC) columns were
practical for flow-through cell FT-IR spectrometry. This was a consequence of the smaller volumes
of the microbore eluite peaks compared to the analytical or semi-preparative peaks.
In an analytical column the typical eluite peak volume is at least 250 µL, and a 0.2 mm x
4 mm diameter cell has a volume of only 2.5 µL. This means that no more than 1% of the eluite
peak will be in the cell at any given time. As the volume of the eluite peak decreases, a higher
percentage of the eluite will be present in the cell; but this is offset by the reduced sample loading.
Nevertheless, the tradeoff is in favor of the microbore column, hence, high SNR spectra can be
measured in the transparent regions of the solvent. In later studies, Brown and Taylor studied
material of intermediate polarity in coal-derived process solvents, using analytical NP-HPLC
columns (10) and also compared analytical and microbore columns (11). Both studies used deuter-
ated chloroform as the mobile phase. CDCl3 is transparent in the C-H stretch region of the infrared
spectrum, and hence has more useful windows than chloroform.
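The cell-fill argument above can likewise be reduced to a simple ratio; the microbore peak volume used below is a hypothetical figure chosen only to illustrate the direction of the tradeoff.

    cell_volume_uL = 2.5          # 0.2 mm pathlength x 4 mm diameter cell (from the text)
    analytical_peak_uL = 250.0    # typical analytical-column peak volume (from the text)
    microbore_peak_uL = 10.0      # hypothetical microbore peak volume

    print(cell_volume_uL / analytical_peak_uL)   # 0.01, i.e. ~1% of the peak in the cell
    print(cell_volume_uL / microbore_peak_uL)    # 0.25 under the assumed microbore volume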
Taylor and coworkers have investigated many other aspects of NP-HPLC/FT-IR spectro-
metry (12-15). Coal derived products have been investigated: on a polar amino cyano (PAC)
analytical column using a 98:2 CDCl3:CH3CN mobile phase (12), and on an amino (NH2) bonded
microbore column using a 70:30 CDCl3:CCl4 + 0.02% triethylamine mobile phase. Amateis and
Taylor (14) investigated the optimum experimental conditions for NP-HPLC/FT-IR spectrometry,
including spectrometric signal-averaging and flow cell pathlength. Column overload and system
detection limits were also investigated. Brown, Amateis and Taylor (15) determined detection limits
for phenols and amines in NP-µHPLC/FT-IR spectrometric systems. These detection limits ranged
from 300 ng to 12 µg, depending upon the eluite. These figures reflect injected quantities, not the
quantities in the cell. One interesting development was a cylindrical bore cell for flow cell
µHPLC/FT-IR spectrometry. This has been named a zero dead volume (ZDV) cell (16). This cell
provided a detection limit of less than 50 ng for 2,6-di-tert-butylphenol measured at the O-H
stretching frequency.
This cell, which is cylindrical, was designed to give a wide dynamic range of solvent:solute
ratios. Unfortunately, the absorbance across the spectrum is then, by necessity, nonlinear, although
some bands will exhibit linear behavior. Other workers have investigated flow cells for LC/FT-IR
spectrometry involving normal phase systems (17-19). An unusual set of studies has involved
interfacing a high speed planet coil countercurrent chromatograph (CCC) to an FT-IR spectrometer
(19,20). The CCC is a liquid-liquid chromatograph that has no stationary support, and hence is able
to operate at very high sample loadings. The eluites are then present in the system at high
solute-to-solvent ratios, making flow cell LC/FT-IR spectrometry feasible. Regrettably, the
chromatographic efficiency of this system is poor.
A few studies have involved reverse phase systems (21-23). These systems have fewer
spectral windows than the normal phase systems and analysis is severely restricted. Even when
deuterated mobile phases are used, the number of useful spectral windows is small. Although much
has been accomplished with flow-through LC/FT-IR spectrometric systems, these designs are hardly
applicable as a universal approach. Reverse phase systems are almost totally excluded from this
technology, and normal phase HPLC systems operate under the severe handicaps of poor solvent
transmission and very low eluite concentrations.
SOLVENT-ELIMINATION SYSTEMS
As stated above, the problem with flow-through systems is that the spectrum is always
obscured by absorption bands of the mobile phase. To avoid this problem the mobile phase, or
solvent, must be removed to leave the eluite for spectrometric study. The first attempts at
eliminating the solvent were applied to NP-HPLC systems and involved a two-step process (24,25).
The first step was to concentrate the eluite by about a factor of ten by evaporating some of the
solvent in a heated tube. The second step was to deposit the concentrated eluite onto a powdered
potassium chloride (KCl) substrate where the remainder of the solvent was evaporated. Spectra were
recorded by diffuse reflectance (DR) infrared spectrometry. DR spectrometry is well-suited to the
measurement of small quantities of a sample; but this two-step process is time-consuming and
necessitates long delays between elution and recording of the spectrum. This system has also been
adapted to microbore columns (26).
One major disadvantage of the DR solvent elimination technique is that polar (reverse phase)
solvents will dissolve the KCl DR substrate. A remedy for this shortcoming is to extract the eluite
into a nonpolar solvent for deposition onto the KCl powder (27). This has been achieved in an
extraction coil by mixing methylene chloride with reverse phase HPLC effluents, then physically
separating the polar and nonpolar phases (aqueous and CH2Cl2) in a separation tee. The nonpolar
phase is passed to a concentrator for deposition on a DR KCl substrate, as in the normal phase
HPLC system. This has the disadvantage that an additional, low extraction efficiency, step is added
to the process. An alternate method is to spray the HPLC effluent onto an insoluble substrate, such
as industrial-grade diamond powder, using an ultrasonic nebulizer (28).
In the case of an aqueous mobile phase, the water can be removed by reaction with 2,2-di-
methoxypropane (DMP), which produces methanol and acetone when the reaction is acid catalyzed
(29). The methanol, acetone, and excess DMP can be deposited on a DR substrate along with the
eluite. All components, except the eluite, can then be evaporated readily. This is an elegant system,
but it is restricted to simple mobile phases, such as H2O:CH3OH and H2O:CH3CN. If nonvolatile
ionic modifiers are added to the mobile phase, these modifiers can have concentrations exceeding
that of the eluite, and hence dominate the spectra.
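For reference, the acid-catalyzed reaction described above is

    (CH3)2C(OCH3)2 + H2O -> (CH3)2CO + 2 CH3OH
    (2,2-dimethoxypropane)   (acetone)   (methanol)

so the water is consumed and only volatile products remain to be evaporated from the substrate.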
A different solvent elimination method is the buffer memory technique developed by Jinno
(30-33). In this technique the effluent from a normal phase microbore HPLC column is deposited
onto a slowly moving KBr plate. The solvent is removed by evaporation, and spectra of the eluites
are recorded by transmission spectrometry. This technique, like the diffuse reflectance technique,
is essentially an off-line method that cannot be used easily with polar mobile phases.
Other methods have been devised to remove the solvent from HPLC systems and deposit the
solute for infrared spectral interrogation. Gagel and Biemann (34,35) have developed a continuous
recording reflection-absorption interface. The interface involves taking the effluent stream from
a chromatograph and mixing it with nitrogen gas. The nitrogen gas disperses the liquid effluent and
sprays it onto a reflective surface. The nitrogen then evaporates the solvent and leaves the solute
as a residue on the surface. The solutes are deposited on a spiral track on a reflective surface.
Spectra of the solutes are obtained as the track is moved into an infrared beam and the spectra are
obtained as reflection-absorption spectra. This system is capable of eliminating the mobile phase
when it contains up to 55% water (with methanol), but in gradient elution systems the temperature
of the nebulizing gas had to be computer controlled to evaporate the solvent as the mobile phase
composition changed.
Griffiths has developed a similar interface, except that the spectra are collected as
transmission spectra with an infrared microscope (36). The nebulizer for this apparatus is similar
to the one reported by Gagel and Biemann, but the deposition is done onto an infrared transparent
substrate, such as zinc selenide (ZnSe). The advantage to this system is that Griffiths realized that
the ultimate detection limits will be achieved by concentrating the solute into a minimum cross-
sectional area, then focussing the infrared beam onto that area. The general system has been
described (37,38), but these references reported only normal phase systems. Griffiths has shown
one spectrum of methyl violet from 100% water using his HPLC nebulizer when the system is
evacuated as opposed to the normal ambient pressure system (39).
All of the above solvent-elimination methods either require complex steps to effect solvent
removal, or cannot accommodate reverse phase systems on a routine basis. These
solvent elimination steps are sometimes time-consuming and are not always very efficient. As with
the flow-through systems, these solvent-elimination techniques work best with normal phase
systems.
MAGIC-LC/FT-IR INTERFACE
The Monodisperse Aerosol Generation Interface for Combining (MAGIC) Liquid
Chromatography (LC) with Fourier Transform Infrared (FT-IR) spectrometry provides an alternate
interface that solves many of the problems exhibited by the above-mentioned interfaces. The
MAGIC interface, in its generic form, is a transport based device that takes LC column effluent,
removes the solvent with high efficiency, and passes the desolvated solute to a detector. As this
device is a high efficiency desolvating interface, it has the property most desirable for an
LC/FT-IR spectrometric interface.
MAGIC-LC has three primary components: (1) a monodisperse aerosol generator (MAG), (2)
an atmospheric, ambient temperature, solvent evaporation chamber, and (3) a momentum-based
particle enrichment separator. A schematic of the MAGIC-LC/FT-IR spectrometric interface is
shown in Figure 1. The MAGIC-LC device has already been described in some detail in the
literature, in its application to mass spectrometry (40-42). Basically, the MAG is a device that
converts the effluent stream into highly uniform droplets. This is accomplished by pumping the
effluent through a fine bore fused silica capillary tube, typically 25 µm in internal diameter. The
jet formed at the end of the fused silica tube is unstable and forms an aerosol of uniform droplets,
as originally modelled mathematically by Rayleigh (43). The droplets are dispersed by a gas sprayed
perpendicular to the jet, to prevent the droplets colliding and forming a polydisperse aerosol. A
detailed schematic of the MAG has been given elsewhere (40).
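A back-of-the-envelope calculation (ours, not the authors') gives a feel for the aerosol produced by a 25 µm jet; the 1 mL/min flow rate is the upper limit quoted later in the text, and the factor 1.89 is the classical Rayleigh breakup ratio of droplet diameter to jet diameter.

    import math

    flow_m3_per_s = 1.0e-6 / 60.0                 # assumed 1 mL/min
    jet_diameter_m = 25.0e-6                      # capillary i.d. from the text

    jet_area = math.pi * (jet_diameter_m / 2.0) ** 2
    jet_velocity = flow_m3_per_s / jet_area       # ~34 m/s

    droplet_diameter_m = 1.89 * jet_diameter_m    # ~47 um, consistent with the ~50 um cited below
    droplet_volume = (math.pi / 6.0) * droplet_diameter_m ** 3
    droplets_per_second = flow_m3_per_s / droplet_volume   # ~3 x 10^5 droplets per second

    print(jet_velocity, droplet_diameter_m * 1e6, droplets_per_second)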
The dispersed droplets flow towards a nozzle at the opposite end of the desolvation chamber
to the MAG. The pressure in the desolvation chamber is near atmospheric and the pressure is
considerably reduced on the other side of the nozzle. Because the droplets are of uniform size
and rather small (on the order of 50 µm in diameter), the rate of evaporation of the
solvent is high. As the volume of the droplets decreases, the relative surface area increases, and
hence the evaporation of the droplet is rapid. The desolvation chamber serves the purpose of
evaporating the solvent and passing the remaining solute, in droplet form, into the momentum-based
separator. Both solute droplets and solvent vapor pass into the separator, but only the solute
droplets have sufficient momentum to pass through the separator to a collection plate where the
solute droplets are deposited. The momentum-based separator has been discussed in detail
previously (42).
The collection plate is translated during deposition so that each eluite is deposited in a
unique area on the plate. In this way each eluite is separated spatially for infrared analysis. Once
the HPLC separation is complete, the deposition plate is removed and placed in an infrared
spectrometer. Typically a beam condenser is used to focus the beam onto the eluite deposition. A
spectrum is collected by transmission spectrometry as the collection plate is an infrared transparent
window, such as KBr or KC1. An alternate approach is to use an infrared microscope instead of
a beam condenser. An infrared microscope will focus a beam to a much smaller area than a beam
condenser, and as long as an eluite deposition can be constrained to a small area, for example
100 µm in diameter, the microscope will give higher sensitivity than the beam condenser. The size
of the deposition is dependent upon the quantity of component injected and the configuration of
the MAGIC device. Depositions of approximately 100 µm have been achieved.
The MAGIC device as used for LC/FT-IR spectrometry differs somewhat from the device
used for LC/MS. These differences are to accommodate changes in the aerosol to produce good
depositions for FT-IR spectrometry, as opposed to producing an easily vaporized solute stream for
mass spectrometry. The aerosol considerations for different spectrometric techniques are described
in detail elsewhere (44).
Spectra have been collected successfully using the MAGIC-LC/FT-IR spectrometric device
for a variety of solvent systems and at varying concentrations of analytes (45,46). Figure 2
illustrates a series of spectra of erythrosin B from different mobile phase compositions. Figure 2a
is erythrosin B from 100% methanol; in 2b the mobile phase is 80:20 methanol:water; in 2c it is
60:40 methanol:water, and Figure 2d shows the spectrum when the mobile phase is 100% water. All
four spectra are good representations of erythrosin B and match the reference spectrum closely.
Figures 2a-c show some methanol trapped in the eluite depositions. This is residual methanol after
the solvent elimination process. In the original solutions of erythrosin B the solute concentration
was only a few parts per thousand, and after desolvation the solvent concentration is only a few
percent of the solute concentration. Clearly the desolvation is not 100% efficient for some solvents,
yet it is close to that efficiency. It is interesting to note that the spectra from 100% water mobile
phase show no trapped solvent. The reasons why some solvents are trapped are currently being
investigated in this laboratory, as well as methods to alleviate the problem. One immediate solution
is to remove the spectrum of the trapped solvent by spectral subtraction.
MAGIC-LC/FT-IR spectrometry can also work well with pure normal phase separations.
It can be argued that 100% methanol is not a normal phase solvent. Figure 3 shows the spectrum
of anthracene as deposited from 100% hexane. There is no hexane residue apparent in the spectrum.
Figure 4 is a spectrum of caffeine from 100% water, a reverse phase solvent, and again there is no
residual solvent apparent.
All the spectra shown thus far have been of rather large samples, on the order of a few tens
of micrograms per injection. This is typically greater than standard injections for HPLC
separations, and detection limits in the tens of nanograms or lower are desirable. The limiting
factor for the detection of eluites by the MAGIC-LC/FT-IR spectrometric device is not in the
solvent elimination step, but in the optics for the spectrometry. Figure 5 illustrates a spectrum of
100 ng of methyl red (injected) and analyzed using an infrared microscope. The spectrum is clearly
identifiable, despite the interference of atmospheric water vapor. The baseline is skewed due to
dispersion effects of the deposition; that is, the deposition is too thick. This spectrum was from an
earlier version of the MAGIC-LC/FT-IR device that was rather inefficient. Only five to ten
percent of the injected eluite was recovered at the deposition plate. Current models of the
MAGIC-LC/FT-IR device appear to have efficiencies of 30%, and further improvements are
anticipated. With microscopic optics instead of beam condensing optics, detection limits of less than
one nanogram (injected) can be projected.
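As a rough illustration of these efficiency figures (simple arithmetic, not part of the original discussion):

    100 ng injected x 0.05 to 0.10 = 5 to 10 ng deposited (earlier interface)
    100 ng injected x 0.30         = ~30 ng deposited (current interface)

so the projected sub-nanogram detection limits refer to injected mass, with proportionally less material actually interrogated by the infrared beam.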
The interface functions well in the presence of buffered solvents (47). Even with buffer
concentrations as high as 1 F, spectra of the chromatographic eluites can be collected. Figure 6
shows the spectrum of caffeine deposited when the solvent contained 1 F KHP buffer. Obviously
the spectrum of the caffeine contains extra bands, and these are attributable to the buffer. As the
buffer is involatile, it is deposited continuously. A spectrum of the buffer without the eluite is
measured and subtracted from the spectrum of the mixture. In the upper half of Figure 7, a
spectrum of the caffeine minus the KHP buffer is shown. A reference spectrum of caffeine is
shown in the lower half of the figure. The spectral differences are due to the protonation of the
caffeine.
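The buffer-subtraction step just described is ordinary scaled spectral subtraction. The sketch below is a minimal illustration in Python; the file names, the index window, and the choice of a buffer-only band are hypothetical.

    import numpy as np

    mixture = np.loadtxt("caffeine_plus_khp.txt")   # eluite + buffer deposit (absorbance)
    buffer_ref = np.loadtxt("khp_only.txt")         # buffer-only deposit on the same axis

    # Scale the buffer reference so that a band belonging only to the buffer is nulled;
    # indices 400:420 stand in for such a band here.
    k = mixture[400:420].sum() / buffer_ref[400:420].sum()
    corrected = mixture - k * buffer_ref            # approximate spectrum of the eluite alone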
The only present shortcoming of the MAGIC-LC/FT-IR spectrometric device is that it is
currently an "off-line" technique. The mechanics of making it an "on-line" technique are largely
engineering changes, and systems previously devised for LC/FT-IR interfaces could be
implemented (for example, those given in references 34-39). Clearly, MAGIC-LC/FT-IR
spectrometry is a method that holds great promise for the interfacing of HPLC with FT-IR
spectrometry. This is the first device that has been used successfully for both normal and reverse
phase separations, and also the first to work with buffered reverse phase systems (47). Both
KH2PO4 (potassium dihydrogen phosphate) and KHP (potassium hydrogen phthalate) buffers were
used successfully. Effluent flow rates of up to 1 mL/min can be accommodated by the interface.
One of the most beneficial aspects of the MAGIC-LC/FT-IR device is that the solvent elimination
is done at ambient temperature; no heating of the effluent stream or of the desolvating gas is
necessary. Not only can changes in the mobile phase composition be handled without changing the
desolvation parameters, but thermally labile components can also be analyzed with no concern for thermal
degradation.
ACKNOWLEDGMENTS
This work would not have been possible without the collaboration of Professor Richard F.
Browner, who originally developed the MAGIC device. This collaboration has led to a joint
research project under the direction of J. A. de Haseth and R. F. Browner, and the support of the
National Institutes of Health, under grant number 1 R01 GM40715-01 is greatly appreciated.
LITERATURE CITED
1. Wilkins, C. L.; Giss, G. N.; White, R. L.; Brissey, G. M.; Onyiriuka, E. C. Anal. Chem., 1982, 54,
2260-2264.
2. This is a well-documented phenomenon, see, for example, Christian, G. D. Analytical Chemistry,
3rd Ed., Wiley, New York (1980), p. 400.
3. Vidrine, D. W. "Liquid Chromatography Detection Using FT-IR", in Fourier Transform Infrared
Spectroscopy: Applications to Chemical Systems, Vol. 2, Ferraro, J. R.; Basile, L. J., Eds., Academic
Press, New York (1979), Chapter 4.
4. Griffiths, P. R. Appl. Spectrosc., 1977, 31, 284-288.
5. Kizer, K. L.; Mantz, A. W.; Bonar, L. C. Amer. Lab., 1975, 7(5), 85-97.
6. Vidrine, D. W.; Mattson, D. R. Appl. Spectrosc., 1978, 32, 502-506.
7. Vidrine, D. W. J. Chromatogr. Sci., 1979, 17, 477-482.
8. Shafer, K. H.; Lucas, S. V.; Jakobsen, R. J. J. Chromatogr. Sci., 1979, 17, 464-470.
9. Johnson, C. C.; Taylor, L. T. Anal. Chem., 1983, 55, 436-441.
10. Brown, R. S.; Taylor, L. T. Anal. Chem., 1983, 55, 723-730.
11. Brown, R. S.; Taylor, L. T. Anal. Chem., 1983, 55, 1492-1497.
12. Amateis, P. G.; Taylor, L. T. Chromatographia, 1984, 18, 175-182.
13. Amateis, P. G.; Taylor, L. T. Anal. Chem., 1984, 56, 966-971.
14. Amateis, P. G.; Taylor, L. T. LC Mag., 1984, 2, 854-857.
15. Brown, R. S.; Amateis, P. G.; Taylor, L. T. Chromatographia, 1984, 18, 396-400.
16. Johnson, C. C.; Taylor, L. T. Anal. Chem., 1984, 56, 2642-2647.
17. Combellas, C.; Bayart, H.; Jasse, B.; Caude, M.; Rosset, R. J. Chromatogr., 1983, 259, 211-225.
18. Combellas, C.; Bayart, H.; Jasse, B.; Rosset, R. Analusis, 1985, 11, 225-233.
19. Romanach, R. J.; de Haseth, J. A.; Ito, Y. J. Liq. Chromatogr., 1985, 8, 2209-2219.
20. Romanach, R. J.; de Haseth, J. A. J. Liq. Chromatogr., 1988, 11, 133-152.
21. Jinno, K.; Fujimoto, C.; Uematsu, G. Amer. Lab., 1984, 16(2), 39-45.
22. Fujimoto, C.; Uematsu, G.; Jinno, K. Chromatographia, 1985, 20, 112-116.
23. Tartar, A.; Huvenne, J. P.; Gras, H.; Sergheraert, C. J. Chromatogr., 1984, 298, 521-524.
24. Kuehl, D.; Griffiths, P. R. J. Chromatogr. Sci., 1979, 17, 471-476.
25. Kuehl, D. T.; Griffiths, P. R. Anal. Chem., 1980, 52, 1394-1399.
26. Conroy, C. M.; Griffiths, P. R.; Jinno, K. Anal. Chem., 1985, 57, 822-825.
27. Conroy, C. M.; Duff, P. J.; Griffiths, P. R.; Azarraga, L. V. Anal. Chem., 1984, 56, 2636-2642.
28. Castles, M. A.; Azarraga, L. V.; Carreira, L. A. Appl. Spectrosc., 1985, 40, 673-680.
29. Kalasinsky, K. S.; Smith, J. A. S.; Kalasinsky, V. F. Anal. Chem., 1985, 57, 1969-1974.
30. Jinno, K.; Fujimoto, C. J. High Res. Chromatogr.; Chromatogr. Commun., 1981, 4, 532-533.
31. Jinno, K.; Fujimoto, C.; Hirata, Y. Appl. Spectrosc., 1982, 36, 67-69.
32. Jinno, K.; Fujimoto, C.; Ishii, D. J. Chromatogr., 1982, 239, 279-286.
33. Jinno, K. Spectrosc. Lett., 1981, 14, 659-663.
34. Gagel, J. J.; Biemann, K. Anal. Chem., 1986, 58, 2184-2189.
35. Gagel, J. J.; Biemann, K. Anal. Chem., 1987, 59, 1266-1272.
36. Griffiths, P. R.; Fraser, D. J. J. "HPLC/FT-IR with Continuous Solvent Elimination and Eluate
Deposition", ACS 39th Annual Summer Symposium on Analytical Chemistry, "Chromato-
graphic/Spectroscopic Combinations", Salt Lake City, Utah, June 1986.
37. Griffiths, P. R.; Pentoney, S. L., Jr.; Pariente, G. L.; Norton, K. L. Mikrochim. Acta [Wien],
1987, 3, 47-62.
38. Fraser, D. J. J.; Norton, K. L.; Griffiths, P. R. "HPLC/FT-IR Measurements by Transmission,
Reflection-Absorption, and Diffuse Reflectance Microscopy", in Infrared Microspectroscopy:
Theory and Applications, Messerschmidt, R. G.; Harthcock, M. A., Eds., Marcel Dekker, New York
(1988), Chapter 14.
39. Griffiths, P. R. "Recent Advances in FT-IR Hyphenated Techniques", 40th Pittsburgh
Conference on Analytical Chemistry and Applied Spectroscopy, Atlanta, Georgia, March 1989,
paper 623.
40. Willoughby, R. C.; Browner, R. F. Anal. Chem., 1984, 56, 2626-2631.
41. Browner, R. F.; Winkler, P. C.; Perkins, D. D.; Abbey, L. E. Microchem. J., 1986, 34, 15-24.
42. Winkler, P. C.; Perkins, D. D.; Williams, W. K.; Browner, R. F. Anal. Chem., 1988, 60, 489-493.
43. Rayleigh, J. W. S. Proc. London Math. Soc., 1878, 10, 4-17.
44. Browner, R. F. Microchem. J., 1989, 40, 4-29.
45. Robertson, R. M.; de Haseth, J. A.; Browner, R. F. Mikrochim. Acta [Wien], 1988, 2, 199-202.
46. Robertson, R. M.; de Haseth, J. A.; Kirk, J. D.; Browner, R. F. Appl. Spectrosc., 1988, 42, 1365-1368.
47. Robertson, R. M.; de Haseth, J. A.; Browner, R. F. Appl. Spectrosc., 1990, 44, 8-13.
FIGURE CAPTIONS
1. Schematic diagram of the MAGIC-LC/FT-IR spectrometric interface showing the three major
components of the MAGIC device and the placement of the deposition plate for collection of the
eluites from an HPLC separation.
2. MAGIC-LC/FT-IR spectra of 50 µg injections of erythrosin B using various different mobile
phase compositions: (a) 100% methanol, (b) 80:20 methanol:water, (c) 60:40 methanol:water, and (d)
100% water. (Reproduced from Ref. (46).)
3. Spectrum of deposited anthracene using 100% hexane as the mobile phase.
4. Caffeine as deposited from the MAGIC-LC/FT-IR device, when 100% water was used as the
mobile phase.
5. Spectrum of 100 ng of methyl red (injected) from a 100% methanol mobile phase. (Reproduced
from Ref. (45).)
6. Spectrum of caffeine and 1 F KHP buffer. (Reproduced from Ref. (47).)
7. Spectrum of caffeine (upper) after spectral subtraction of the KHP buffer spectrum. A reference
spectrum of caffeine is shown below. (Reproduced from Ref. (47).)
[Figure 1 (schematic): He dispersion gas, LC effluent, desolvation chamber, momentum separator, and IR-transparent window for deposition of solutes.]
[Figure 2 (spectra): absorbance versus wavenumber, 3500 to 1000 cm-1.]
[Figure 3 (spectrum): anthracene, 100% hexane; absorbance versus wavenumber, 3000 to 1000 cm-1.]
[Figure 4 (spectrum): caffeine, 100% water; absorbance versus wavenumber, 3000 to 1000 cm-1.]
[Figure 5 (spectrum): percent transmittance scale, approximately 97 to 102%.]
[Figure 6 (spectrum): caffeine + KHP buffer, 80:20 methanol:water, pH 4, collection time 1.0 min; absorbance versus wavenumber, 3000 to 1000 cm-1.]
[Figure 7 (spectra): caffeine deposited from KHP at pH 4 (upper) and caffeine reference deposited from 100% methanol (lower); wavenumber 3000 to 1000 cm-1.]
Slide 2
Slide 3
Slide 4
Slide 5
LC/FT-IR Spectrometry
The mobile phase is the problem
In HPLC separations the compounds can have
concentrations as low as 100 ppm
Dynamic Range
FT-IR spectrometer sensitivity is limited
by the dynamic range of the ADC
Two Basic FT-IR Approaches
1. Use flow-through cells
(keep the solvent)
2. Eliminate the solvent
Aerosol Beam Separator
Needed Properties
High Solute Transport Efficiency
Effective Vapor Removal
Minimal Peak Broadening
No Memory Effects
No Adjustments
-------
Slide 10
MAGIC
Monodisperse Aerosol Generation
Interface Combining...
Slide 22
MAGIC-LC/FT-IR SPECTROMETRY CHARACTERISTICS
Elimination of all the solvent
Normal phase and reverse phase operation
Mobile phase can be 100% aqueous
LC flow rates to 1 mL/min
No heating of the effluent
-------
[Schematic of the MAGIC interface as applied to LC/MS: monodisperse aerosol generator, dispersion gas, LC effluent, desolvation chamber, aerosol beam separator (elements labeled N1, S1, N2, S2), and MS ionization source.]
-------
QUESTION AND ANSWER SESSION
MR. TELLIARD: Any
questions?
DR. MSIMANGA: My name is
Huggins Msimanga from Kennesaw State College in Marietta,
Georgia.
It looks like to me you have to time...you have to
specify the amount of time to allow for the deposition of
the solute analyte. Supposing two components come off from
the HPLC within a specified time of less than a minute, how
do you take that into account? Suppose it is two different
analytes on the window.
MR. DE HASETH: Well, the
plate itself translates at a reasonable speed, the
deposition plate. We can certainly separate two components
one minute apart. That is a long time. We can complete for
higher concentration components... by higher concentration,
we are talking a few hundred picograms.
It has been demonstrated through direct deposition
GC/FT-IR which is a parallel technique to this that you can
get reasonable spectra at less than 200 picograms in 3
seconds of data collection time. So, all you need is about
3 seconds of analysis, not a minute.
The one I showed was through a beam condenser, and
there, yes, we did need a minute, because that was a much
less efficient interface in terms of optics. With this new
system, we can do the same analysis in a much shorter time.
The depositions are confined to a radius of about 100
to 150 micrometers. So, they are very small depositions.
We can focus indirectly on that and collect our spectra.
MR. TELLIARD: Thank you very much, Jim.
-------
MR. TELLIARD: Our next
speaker is Tom Tiernan from Wright State University. Dr.
Tom has been working on the joys of dioxin for the last few
years and spent the last couple of years working on a new
column which is going to save me a lot of money. Right?
DR. TIERNAN: Probably.
MR. TELLIARD: Okay. Tom
is going to talk about the analysis of dioxins and furans.
DR. TIERNAN: You heard
earlier this morning about chlorinated dibenzodioxins and
chlorinated dibenzofurans from Les Lamparski.
Most of you who do environmental analyses . . .
and I know that includes many of you . . . know that there
are 75 polychlorinated dibenzodioxin (PCDD) isomers and 135
polychlorinated dibenzofuran (PCDF) isomers. No one, to my
knowledge, is even contemplating doing a routine type of
analysis for all 210 isomers of these, but in more recent
years, there has been a focus on determining the 2,3,7,8-
substituted isomers of the various chlorinated groups of
PCDD and PCDF, particularly the tetra- through
heptachlorinated congeners. The reason for this interest,
of course, is that these are thought to be the more toxic of
the PCDD/PCDF congeners. Clearly, the seventeen 2,3,7,8-
substituted isomers are more readily analyzed than all of
the 210 isomers.
Even though there are methods that purport to
measure the 2,3,7,8-substituted PCDD and PCDF isomers,
including some EPA methods, one of the problems with these
methods is that, in order to achieve optimum reliability and
specificity for the quantitation of the 2,3,7,8-substituted
isomers the use of multiple GC columns is required. There
is no single GC column that will resolve all of the 2,3,7,8-
substituted dibenzodioxins and dibenzofurans in a single
analysis.
We originally began the work which I will describe
today with the objective of trying to come up with some
better columns for resolving and quantitating the 2,3,7,8-
substituted PCDD/PCDF isomers. At the outset, we focused
primarily on just the tetrachlorinated isomers, because we
were involved with the U.S. EPA and the paper industry
(NCASI) in a program in which the primary objective was to
measure uniquely and specifically, only 2,3,7,8-TCDD and
2,3,7,8-TCDF in sludges, effluents, pulp and other paper and
pulp mill wastes and products.
The requirement in the latter case, of course, is
somewhat simpler than to separate all of the 2,3,7,8-
isomers, but, still, separating even those two
tetrachlorinated isomers absolutely uniquely is something of
a challenge. At the time when we began this work, there was
no single GC column that would separate both 2,3,7,8-TCDD
and 2,3,7,8-TCDF uniquely in a single analysis.
Of course, 2,3,7,8-TCDD had been routinely
analyzed on a number of different GC columns, specifically
and uniquely, including the DB-5, SP-2330 and other columns.
Also, there was purported to be a unique separation of
2,3,7,8-TCDF from all other TCDFs on the DB-225 GC column.
Thus, determining both 2,3,7,8-TCDD and 2,3,7,8-TCDF in a
given sample using these columns required two separate GC-MS
analyses.
Much of the early work which we did with the U.S.
EPA and the paper industry (NCASI) in connection with
developing a new
column for determining both 2,3,7,8-TCDD and 2,3,7,8-TCDF
focused on the concept of using serially connected capillary
GC columns. By that, I mean using two different portions of
two different columns which are simply connected together.
Typically, one does this using a dead volume fitting.
There is, in fact, in the literature a good bit of
information about modeling the performance of these kinds of
capillary columns, as well as columns which are coated with
mixtures of different stationary phases. There has been
little or no modeling, however, which would permit one to
predict theoretically the composition of a single stationary
phase or GC column coating containing multiple functional
groups which would achieve the kinds of separations of
interest here.
So, effectively, the initial goal of our work was
to come up with a modeling procedure that would permit us to
predict a stationary phase or coating for a capillary GC
column which would contain multiple functional groups, and
which would achieve the unique separations of these two
isomers (2,3,7,8-TCDD and 2,3,7,8-TCDF). The next phase of
this work was conducted largely by our partner in these
studies, J & W Scientific, and involved synthesis of a
specific polymer coating containing the desired functional
groups in the predicted amounts (as indicated by the
modeling results) and preparation of a bonded capillary GC
column coated with this new stationary phase. Finally,
testing of this column to determine its effectiveness for
separating 2,3,7,8-TCDD and 2,3,7,8-TCDF was accomplished at
Wright State.
I am going to describe to you today the outcome of
these studies.
One might well ask, what is wrong with coupled
GC columns if they achieve the desired separation? Indeed,
we previously developed a coupled column that would, in
fact, separate 2,3,7,8-TCDF uniquely. In addition, we
eventually developed a three-section capillary GC column
combination, that is, portions of three capillary columns
coated with different stationary phases which were coupled
in sequence, that would resolve both 2,3,7,8-TCDD and
2,3,7,8-TCDF uniquely in a single analysis.
However, it is difficult to convince analysts to
prepare and use these kinds of columns, and even more
difficult to get a manufacturer to make them. Such coupled
columns tend to be somewhat fragile, and the connections are
difficult to make and the junctions tend to leak.
Chromatographers are therefore reluctant to make and use
such columns for routine analyses.
These, then, were some of the reasons for
attempting to develop a single-phase coated column, using a
specially tailored phase, which could be manufactured and
supplied by a recognized gas chromatography specialty firm.
As we will see, what was required initially in
terms of developing a model such as that mentioned was to
measure GC retention times, capacity factors, and
temperature dependences for the gas chromatographic
separations of the tetrachlorinated dibenzodioxin isomers
and the tetrachlorinated dibenzofuran isomers using several
different capillary columns.
Among the columns that were evaluated here were
DB-WAX, SP-2250, DB-17, DB-5, SP-2200 and DB-1. The data
derived from isothermal measurements of the parameters cited
were then incorporated into a multidimensional computer
model, which I
am going to describe in rather general terms, to ultimately
predict optimum combinations of the various functional
groups which would be required as components of the
specially synthesized phase in order to obtain the desired
resolution.
J & W Scientific used the modeling predictions to
synthesize the new polymer or stationary phase, and prepared
a bonded capillary GC column coated with that phase.
Ideally, the first new column prepared would have
yielded the desired separations of 2,3,7,8-TCDD and 2,3,7,8-
TCDF. In reality, of course, scientific methods are rarely
that good, and some iteration of the modeling/synthesis
procedure proved to be necessary.
One of the reasons that the model for predicting
GC column performance did not work quite as well as expected
initially is that, even when one synthesizes accurately the
polymer or stationary phase containing the predicted
functional groups in the predicted amounts, there are likely
to be interactions between the functional groups on the
polymer that vary the activity and the separation capability
of the column coated with this polymer. We are unable to
accurately predict these interactions.
What we do, therefore, is take the retention time
and related data derived from the initially prepared column,
feed these data back into the model, obtain new solutions,
and come up with a second prediction of the optimum
stationary phase composition. If necessary, this procedure
is repeated still again.
Now, just to give you an indication of how
difficult and sensitive these kinds of isomer separations
are, I wanted to just show you some retention time data for
several closely-eluting TCDF isomers which was obtained on
two capillary GC columns at two not very different
temperatures (Slide 5). The columns here are the SP-2330
and DB-5
capillary columns. What you can see if you compare the
retention times for the several isomers that are shown there
is that as you go from 220 to 250 degrees with both of these
columns, you get some inversion in the order of elution of
the isomers.
Obviously, if you look at all 22 TCDDs and 38
TCDFs, this becomes even more severe. You have frequent
mixing, jumping, and shifting of positions in terms of the
order of elution.
So, it is no easy feat to reliably predict where
these numerous isomers are going to elute.
The next slides (Slides 6, 7 and 8) show flow
charts which give you an idea of the procedure used in
applying the model which I have mentioned. I am going to
discuss in greater detail, a little later, what are the
specific critical parameters here.
The first thing we have to do, of course, is write
the software and develop the model, and this was
accomplished for use with a PC type computer. The model was
actually written in Turbo Pascal for an IBM-compatible 386
PC with a 387 co-processor. The model will generate about
1000 to 2000 solutions, in terms of
predicted retention times, in a minute, and in an hour or
so, we typically get the bulk of the predictions we are
after.
In applying the model, we first enter the relevant
experimental parameters which include the column length,
radius, film thickness, inlet and outlet pressures, and the
dead times at a minimum of two temperatures. These are
entered into an equation or formula from which we ultimately
calculate capacity factors. Initially, we determine two
experimental parameters or constants which are called K1 and
K2. Of course, to get the K1 and K2 constants, we have to
measure isothermal retention times for the group of isomers
in which we are interested at different temperatures. Data
obtained at a minimum of two different temperatures are
required for each of the columns evaluated.
We then insert the retention time data obtained
for a given column at two different isothermal temperatures
into a set of simultaneous linear equations and solve them
to determine the constants, K1 and K2 (Slide 9).
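A minimal sketch of this fitting step is given below. It assumes a van 't Hoff-type form, ln k' = K1/T + K2, with T in kelvin and k' = (tR - t0)/t0; the actual expression is the one shown on Slide 9, which is not reproduced in this transcript, and the retention times used here are hypothetical.

    import math

    def capacity_factor(t_r, t_0):
        return (t_r - t_0) / t_0

    def solve_k1_k2(T1, kp1, T2, kp2):
        # Solve ln k' = K1/T + K2 from isothermal data at two absolute temperatures.
        K1 = (math.log(kp1) - math.log(kp2)) / (1.0 / T1 - 1.0 / T2)
        K2 = math.log(kp1) - K1 / T1
        return K1, K2

    kp_220 = capacity_factor(t_r=24.0, t_0=2.0)   # hypothetical data at 220 C (493.15 K)
    kp_250 = capacity_factor(t_r=10.0, t_0=2.0)   # hypothetical data at 250 C (523.15 K)
    print(solve_k1_k2(493.15, kp_220, 523.15, kp_250))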
Finally, we utilize these constants in the third
phase of the model which gives us the capability
to predict the effect on retention times of each functional
group that we wish to incorporate into the polymer backbone
that becomes the ultimate coating or stationary phase for
the column. Finally, of course, we synthesize the phase,
coat the column with this phase and test the column.
So here then, is the capacity factor relationship
(Slide 9), and you can see from equation number 3 at the
bottom, which you eventually work down to ... the input
data, of course, is again retention time, dead time,
isothermal temperature . . . one can see that by measuring
the retention time parameters for a given column at two
different T's, one can insert these data into equation (3)
and solve the two resulting simultaneous equations, and come
up with the K1 and K2 factors, which are the empirical
factors which we must obtain initially.
Now, these constants are then used in the
retention time relationship shown in Slide 10 (Equation 4A)
to come up with retention time predictions at different
temperatures, in other words, to predict the times at
which isomers are going to elute on a given column as a
function of temperature.
Once we have that data, we then go to the last
stage. We put these retention times into equation 5 shown
on Slide 11 and, again, by doing this at multiple
temperatures and solving simultaneously, we come up with the
K-prime factors which are characteristic of the actual
functional groups, that is, the pure functional groups, methyl,
phenyl, or whatever groups we want to incorporate into the
polymer.
On Slide 11, for example, are shown the applicable
equations for two different columns that we have tested in
this study. Column 1, of course, is actually the DB-5
column, which incorporates 95 percent methyl and 5 percent
phenyl components, and Column 2 is the DB-17 column which
incorporates about 50/50 methyl/phenyl components.
After we have obtained the K prime factors, that
is, the functional group factors, we get to the very end of
the procedure. We put these back into the retention time
relationship (Slide 12), and we determine, for any given
combination of these several substituent groups in a
polymer, what the predicted retention times for the various
isomers to be separated are going to be.
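One plausible reading of this prediction step, given that Slides 11 and 12 are not reproduced here, is a linear, composition-weighted mixing rule in which each column's constant is the fraction-weighted sum of pure-functional-group factors. The sketch below follows that assumption; the DB-5 and DB-17 compositions are the ones quoted in the talk, while the other phases and all numerical constants are hypothetical.

    import numpy as np

    # Test columns of known composition: fractions of methyl, phenyl, cyano, WAX.
    compositions = np.array([
        [0.95, 0.05, 0.00, 0.00],   # DB-5 (from the talk)
        [0.50, 0.50, 0.00, 0.00],   # DB-17 (from the talk)
        [0.00, 0.00, 1.00, 0.00],   # hypothetical all-cyano phase
        [0.00, 0.00, 0.00, 1.00],   # hypothetical all-WAX (PEG) phase
    ])
    # Column-level constants measured for one isomer on each column (hypothetical values).
    K_measured = np.array([8700.0, 9100.0, 9800.0, 9400.0])

    # Least-squares estimate of the pure-functional-group factors.
    K_group, *_ = np.linalg.lstsq(compositions, K_measured, rcond=None)

    # Predicted constant for a candidate blend, e.g. the 40:25:25:10 trial phase.
    candidate = np.array([0.40, 0.25, 0.25, 0.10])
    print(candidate @ K_group)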
So, in other words, we can now predict what a
totally new column containing entirely different
combinations of functional groups would yield in terms of
retention times for the PCDD/PCDF isomers.
Of course, the experimental verification of the
results requires actual measurements using a GC or GC-MS
(Slide 13). In this case, we are using a low to medium
resolution sector mass spectrometer, a Kratos MS-25. It is
operated in the selected ion monitoring mode, of course, for
monitoring TCDD and TCDF. Here, you see the sets of ions
that are typically monitored as indicators of both native
TCDF and the isotopically labeled TCDF and the same for
TCDD.
The chromatograph used here is a Carlo-Erba, and
in the development phase of this work, we used hydrogen as
the carrier gas. Subsequently, we have demonstrated that
the predictions are also applicable and the same column
performance can be achieved when we use helium as the
carrier gas.
Throughout this study, retention times were
measured on the various columns for at least three different
temperatures, typically 220, 235 and 250 degrees C.
Well, let's look now at some data obtained in the
initial stages of this work where we attempted to predict
the combinations of functional groups in a single stationary
phase column coating which would give us an adequate
separation of both 2,3,7,8-TCDD and 2,3,7,8-TCDF from all
other PCDD/PCDF isomers.
Clearly, you must have a separation of at least
one peak width of the 2,3,7,8 isomer from the nearest
eluting TCDD or TCDF congener which is adjacent to it. So,
here you see (Slide 14) some of the predicted separations
which resulted from the modeling calculations. It appears
that the very first combination of functional groups shown
on Slide 14 does, in fact, yield the optimum separation in
terms of peak width separation for 2,3,7,8-TCDD from the
other TCDD isomers, and this combination also results in a
quite good separation of 2,3,7,8-TCDF.
So, initially, we indicated to J & W that they
should synthesize a polymer that would contain the
40:25:25:10 proportions of the methyl:phenyl:cyano:WAX
functional groups. WAX simply denotes a PEG (polyethylene
glycol) functional group.
J & W Scientific synthesized a polymer substrate
containing these functional groups in what they believed to
be the proportions specified by us, and prepared a bonded
column coated with this polymer. When we tested this column
for its ability to separate 2,3,7,8-TCDD and TCDF, the
observed column performance was different than that
expected. When the experimentally observed retention times
were compared with the predictions of the model, it could be
seen that the column was acting as if the composition were
that shown on the second line (Result 1st synthesis) of
Slide 15. The first line on the slide shows the column
composition predicted by the initial calculation, which J &
W attempted to obtain. The difference in the theoretically
expected and experimentally observed behavior of the column
was undoubtedly due to the fact that there were
unpredictable functional group interactions in the newly
synthesized stationary phase coating.
We next inserted the experimentally observed
retention time data for the initial column back into the
model and calculated a new solution, that is, an iterative
prediction of the stationary phase composition.
Before looking at the new solution, let's just
look at what the initial column did in terms of separating
the isomers (Slide 16). Here you can see that the 2,3,7,8-
TCDD is not at all well separated from adjacent TCDD
isomers, and exhibits a strong overlap with 1,2,3,8- and
1,4,6,9-TCDD.
The separation of 2,3,7,8-TCDF is considerably
better (Slide 17), but there is a considerable valley
between 2,3,7,8- and 2,3,4,6-TCDF and the other co-eluting
isomers.
Well, the results of the second modeling
prediction obtained by reiterating the model calculations,
as described above, are shown in Slide 18 on the third and
fourth lines. The second prediction of the optimum
functional group composition of the column stationary phase
which we selected was the 43:26:22:9 combination, and when a
polymer with this expected composition was synthesized and a
bonded capillary GC column was coated with this polymer, the
performance of this column in terms of retention times of the
separated isomers, corresponded closely to the composition which
was originally predicted. The last column on Slide 18 (hori-
zontal) shows the calculated composition to which the experi-
mental behavior corresponded.
This is, in fact, the composition of the capillary
GC column that has now become known as DB-DIOXIN and is
currently being marketed by J & W Scientific under that
name.
The separation which this column achieves for
2,3,7,8-TCDD is illustrated in Slide 19, and as can be seen,
this separation is quite good. Slide 20 shows the
separation achieved with the DB-DIOXIN column for 2,3,7,8-
TCDF, again, well within desirable bounds.
You can see from the comparison shown in Slide 21
that the retention time predictions derived from the model
are quite close to the experimentally observed retention
times. The second vertical column here shows the expected
or calculated retention times for this set of isomers, while
the third column shows the actual experimental retention
times that were observed. You can see that, in most cases,
these are extremely close to the predictions.
The sensitivity of the DB-DIOXIN column is
excellent. The mass chromatograms shown in Slide 22, for
example, which were obtained for a 2.5 pg injection of
2,3,7,8-TCDF, exhibit about a 25:1 signal to noise ratio for
the responses at the ion masses which are indicators of both
native TCDF and the isotopically-labeled TCDFs that would be
monitored in a normal analysis run. The DB-DIOXIN column
exhibits similar sensitivity for 2,3,7,8-TCDD, as shown on
Slide 23.
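Assuming the signal scales roughly linearly with injected mass (a common first approximation, and not a claim made in the talk), the 25:1 signal-to-noise ratio observed for a 2.5 pg injection implies

    2.5 pg x (3/25) = 0.3 pg

as an estimated detection limit at a signal-to-noise ratio of 3.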
We have now used the DB-DIOXIN column very
extensively for analysis of extracts obtained from a large
variety of sample matrices. Slide 24 shows some of the
types of samples that have been analyzed with this column
during the past year or so, during which time a total of
about 500 samples have been characterized for 2,3,7,8-TCDD
and TCDF. In fact, the very same DB-DIOXIN column was used
for all of these analyses.
You can see that the samples analyzed with this
column range from relatively clean biologicals and human
tissues to very dirty chemical wastes. It turns out that
pulp and paper samples and wastes pose a particular
challenge here, partly because of the kinds and types of
interferences.
The DB-DIOXIN column also exhibits good, long-term
stability. You can see from the mass chromatogram shown in
Slide 25 that even after some 600 analyses with the column,
the resolution is essentially unchanged for the 2,3,7,8-
TCDD. The same long-term performance of this column is
observed for 2,3,7,8-TCDF, as seen from the results shown in
Slide 26.
The conclusions from this study are summarized on
Slides 27 and 28. The main conclusion is that the DB-DIOXIN
column is quite effective for the analyses of both 2,3,7,8-
TCDD and 2,3,7,8-TCDF in a single GC-MS run. Moreover, the
theoretical model developed in this study appears to be
capable of reliably predicting the combinations of
functional groups in a polymeric stationary phase coating
which will yield a capillary GC column capable of achieving
the desired separation of virtually any closely eluting
compounds.
Of course, we are not only interested in the
separation of tetrachlorinated dioxins and dibenzofurans,
but are also interested in developing capillary GC columns
that will be useful for separating the entire chlorinated
range of these compounds, and in particular, in uniquely
separating all of the other 2,3,7,8-substituted congeners.
We have done some preliminary work toward this end already.
Obviously, the first step in achieving this
objective is to see what isomers the DB-DIOXIN column which
we already have in hand is capable of resolving. We have
already determined that this column will, in fact, separate
all of the 2,3,7,8-substituted congeners of all the
chlorinated groups from each other. However, we do not yet
know that this column will separate all of the other isomers
of each chlorinated group from the 2,3,7,8-substituted
isomers.
Some overlap of the chlorinated group retention
time windows occurs with the DB-DIOXIN column. For example,
some of the TCDFs coelute with some of the PeCDFs, as
indicated by Slide 29. So the DB-DIOXIN column is somewhat
different than the DB-5 column, on which the respective
chlorinated congener groups are quite well separated from
each other, at least under the conditions that are usually
applied, and the DB-5 column is therefore more useful for
the total chlorinated congener group type of analysis.
Nevertheless, at this point, it appears that the DB-DIOXIN
column shows considerable promise for analyses of just the
2,3,7,8-substituted isomers.
-------
174
We are in the process of evaluating the separation
capability of the DB-DIOXIN column for the entire 210 isomer
PCDD/PCDF set.
The last slide (Slide 30) shows you the
separations that are achieved with this column for some of
the 2,3,7,8-penta CDDs and CDFs.
In conclusion, let me just say that the DB-DIOXIN
column is currently available from J & W Scientific, and, in
their most recent catalog, which has just been issued, some
very nice mass chromatograms are shown which exhibit the
separations obtained for all of the TCDF and TCDD isomers,
using both helium and hydrogen carrier gases.
So, here for the first time, we have developed a
working, stable, commercially-available capillary GC column
that will permit one to determine at least 2,3,7,8-TCDD and
2,3,7,8-TCDF in a single analysis and will, as Bill said,
possibly save the agency a little money.
MR. TELLIARD: A lot of
money. Thank you, Tom.
DR. TIERNAN: Thank you.
-------
175
QUESTION AND ANSWER SESSION
MR. TELLIARD: Are there
any questions?
MR. LEWIS: Michael Lewis
of TC Analytics.
What was the optimum column dimensions that you
are using for your . . .
DR. TIERNAN: Sorry, what
was the question?
MR. LEWIS: What was the
optimum column dimensions that you were using for your
development work? And conditions. Were they standard?
DR. TIERNAN: Oh, yes.
You mean like temperature programs and what not?
MR. LEWIS: A 30 meter or
a ...
DR. TIERNAN: Oh, well,
the original coupled column we put together was about 40
meters in length. But the present DB-DIOXIN column can be
anywhere from 30 to 60 meters. It just depends on what you
want to achieve here. But these are standard length columns
as compared to current technology, and the temperature
programs one would use here are very standard, too.
-------
176
I didn't mention it, but, typically, you would be
programming the column from about 180 to 220 degrees at
about 10 degrees C per minute and then holding at the higher
temperature. We know, however, that this column will go as
high as 250 to 270 degrees C with no degradation.
MR. LEWIS: Were you
attempting to look at the more standard megabore
dimensions right now?
DR. TIERNAN: No, this is
not a megabore column, and we probably are not going to do
that at this point. No, this is a standard 0.25 micron
dimension column.
MR. TELLIARD: Anyone
else?
Tom, what is the run time when you use the
hydrogen carrier roughly?
DR. TIERNAN: Actually,
you can adjust the head pressure of helium or hydrogen to
achieve the same effective separation with either carrier
gas, and the run time is on the order of about 30 minutes
for analyses of both 2,3,7,8-TCDD and 2,3,7,8-TCDF in the
same run.
MR. TELLIARD: That is
good.
DR. TIERNAN: Yes.
MR. TELLIARD: Thank you
very much.
DR. TIERNAN: Thank you.
-------
NEW CAPILLARY COLUMN FOR THE
DETERMINATION OF PCDD's AND PCDF's
T.O. Tiernan, J.H. Garrett, and J.G. Solch
Department of Chemistry
and Toxic Contaminant Research Program
Wright State University
Dayton, Ohio
and
R.M.A. Lautamo and R.R. Freeman
J & W Scientific
Folsom, California
-------
BACKGROUND
1. Separation of the 2378- substituted isomers of the DibenzoDioxins and
DibenzoFurans provides a difficult challenge for capillary GC columns.
2. Tetrachlorinated Isomers
Dioxins (TCDD)-22
Furans (TCDF)-38
3. Requirements:
Separate 2,3,7,8-TCDD from all other TCDDs
Separate 2,3,7,8-TCDF from all other TCDFs
4. While separation of 2,3,7,8-TCDD from all other TCDDs is routinely
accomplished on several capillary GC columns (DB-5, SP-2330,
SP-2340), no single liquid-phase coated column has been reported
which separates 2,3,7,8-TCDF from all other TCDFs and
2,3,7,8-TCDD from all other TCDDs in a single GC-MS analysis.
-------
BACKGROUND
5. Initial studies in our laboratory using retention time data obtained on
seven columns (DB-5, DB-225, DB-1701, DB-WAX, SP-2250, SP2340,
and SP2401) led to the development of two different columns
consisting of coupled serially-connected sections of single-phase
columns. A combination of two different phases (DB-5 and DB-225)
successfully resolved 2,3,7,8-TCDF from all other TCDFs. A
combination of three phases (DB-WAX, DB-225, and SP-2550)
simultaneously resolved 2,3,7,8-TCDD from all other TCDDs and
2,3,7,8-TCDF from all other TCDFs.
6. Coupled columns, while capable of the required isomer resolution, are
somewhat difficult to handle and reproduce.
-------
OBJECTIVES OF PRESENT STUDY
1. Measure GC retention times, capacity factors and temperature dependences
of separations for all TCDD and TCDF isomers using a variety of capillary
GC columns.
2. Utilize data obtained empirically in a multidimensional computer model to
predict optimum combinations of various functional group phases which
will yield complete resolution of both 2,3,7,8-TCDD and 2,3,7,8-TCDF.
3. Synthesize a single liquid phase polymer coating incorporating the
functional groups selected in the relative proportions predicted by the
model and construct a capillary column coated with this phase.
-------
OBJECTIVES OF PRESENT STUDY
4. Test the efficacy of the column for achieving
resolution of 2,3,7,8-TCDD and 2,3,7,8-TCDF
in a single run.
5. Use data derived from initial column to obtain
an iterative solution using the model, as necessary.
6. Synthesize new polymer coating based on revised
model predictions, and construct new column.
7. Demonstrate complete resolution of 2,3,7,8-TCDD
and 2,3,7,8-TCDF using finally developed column.
-------
EFFECT OF TEMPERATURE ON
TCDF ISOMER LOCATIONS

SP2330
ISOMER      220 C    250 C
1,2,3,9     0.938    0.965
2,3,4,7     0.950    0.963
1,2,6,9     0.984    0.996
2,3,7,8     1.000    1.000
2,3,4,8     1.012    1.009

DB-5
ISOMER      220 C    250 C
2,3,7,8     1.000    1.000
1,2,7,9     1.001    1.005
2,3,4,8     1.004    1.001
1,2,6,9     1.064    1.046
2,3,6,7     1.080    1.030
1,2,3,9     1.081    1.063

Retention times relative to 2,3,7,8
-------
183
[Flow chart, part 1 of 3: Create component list (isomer names).
Enter initial data: column length, radius, and film thickness;
inlet and outlet pressures; dead volume times at a minimum of two
temperatures. Enter temperature program and retention times.
Calculate capacity factors. Loop back if another column type is
to be entered.]
-------
184
[Flow chart, part 2 of 3: When all data have been collected on
single functional group columns, select an isothermal temperature
and a functional group to resolve. Solve simultaneous linear
equations to calculate new capacity factors that represent the
"pure" phase at the selected temperature. Repeat for any other
functional group or temperature to resolve, then select the
functional groups and the components to separate.]
-------
185
[Flow chart, part 3 of 3: Predict the new column - calculate the
% of each functional group required to produce maximum resolution
of the selected components. Synthesize the new column. Enter data
for the new column: column length, radius, and film thickness;
inlet and outlet pressures; dead volume times at a minimum of two
temperatures. Enter the temperature program and retention times
for the new column and calculate its capacity factors. If the new
column meets the required resolution: success; otherwise iterate.]
-------
EXPERIMENTAL CONDITIONS
(FOR DEVELOPMENT)
Kratos MS-25 mass spectrometer
DS-90 Version 5.00 software
Selected-Ion monitoring mode
Ions monitored:
TCDF m/z 304, 306, 312, 316, 318
TCDD m/z 320, 322, 328, 332, 334
Carlo-Erba Mega 5300 series Gas Chromatograph
Splitless Injection
Hydrogen carrier gas 1.20 kg/square cm and 2.40 kg/square cm
Inlet and exit carrier gas pressures measured to
+/- 0.07 kg/square cm
Retention times measured at 220 C, 235 C, 250 C
-------
Capacity Factor Calculations

Tr      - Retention Time
Td      - Dead Volume
T       - Temperature
k'      - Capacity factor at a given Temperature
k1 & k2 - Experimentally derived constants

    Tr = Td ( 1 + k' )                             (1)

    Ln ( k' ) = Ln ( k1 ) + k2 / T                 (2) a.

Substituting Equation 1 into Equation 2 yields:

    Ln ( ( Tr / Td ) - 1 ) = Ln ( k1 ) + k2 / T    (3)

Under experimental conditions Equation 3 contains two
unknowns ( k1 and k2 ). Values for k1 and k2 can be
calculated by solving simultaneous equations for data
collected at two or more isothermal temperatures.

a. Akporhoor, Vent, and Taylor, J. Chrom., 405 (1987)
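
As an illustration of the calculation on this slide, the following
minimal sketch (Python; the temperatures and retention/dead-volume
times are placeholders, not values from the study) solves Equation 3
at two isothermal temperatures for the constants k1 and k2:

    import math

    def capacity_factor(tr, td):
        # Capacity factor k' from retention and dead-volume times (Eq. 1)
        return tr / td - 1.0

    def fit_k1_k2(temp_a, tr_a, td_a, temp_b, tr_b, td_b):
        # Solve Eq. 3 at two isothermal temperatures (kelvin) for k1, k2
        ka = capacity_factor(tr_a, td_a)
        kb = capacity_factor(tr_b, td_b)
        k2 = (math.log(ka) - math.log(kb)) / (1.0 / temp_a - 1.0 / temp_b)
        k1 = math.exp(math.log(ka) - k2 / temp_a)
        return k1, k2

    # Placeholder data: 220 C and 250 C isothermal runs
    k1, k2 = fit_k1_k2(493.15, 26.0, 2.0, 523.15, 14.0, 2.0)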
-------
New Retention Time Calculations

Tr      - Retention Time
Td      - Dead Volume
T       - Temperature
k'      - Capacity factor at a given Temperature
k1 & k2 - Experimentally derived constants

    Ln ( k' ) = Ln ( k1 ) + k2 / T                 (2) a.

Equation 2 can be written as

    Tr = Td [ 1 + k1 exp ( k2 / T ) ]              (4) a.

Since k1 and k2 were determined during the capacity
factor calculation, Equation 4 permits calculation of
predicted retention times.

a. Akporhoor, Vent, and Taylor, J. Chrom., 405 (1987)
-------
Calculation of Functional Group Capacity Factors

Existing column phases:
Column_1: 95% Methyl, 5% Phenyl
Column_2: 50% Methyl, 50% Phenyl

Column_1:
    Tr = Td ( 1 + 0.95 * k'_Methyl + 0.05 * k'_Phenyl )   (5)

Column_2:
    Tr = Td ( 1 + 0.50 * k'_Methyl + 0.50 * k'_Phenyl )   (6)

- Equations 5 and 6 each contain two unknowns
  ( k'_Methyl and k'_Phenyl ). Effective capacity
  factors for each functional group can be calculated
  by simultaneously solving the above equations for
  data collected under the same experimental
  conditions.
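
The simultaneous solution of Equations 5 and 6 can be sketched as
follows (Python); the phase fractions are those of the two columns
above, while the retention and dead-volume times are hypothetical:

    import numpy as np

    def pure_phase_capacity_factors(fractions, tr, td):
        # One linear equation per column:
        #   sum_j fraction_ij * k'_j = (Tr_i / Td_i) - 1
        fractions = np.asarray(fractions, dtype=float)
        k_obs = np.asarray(tr, dtype=float) / np.asarray(td, dtype=float) - 1.0
        return np.linalg.solve(fractions, k_obs)

    # Column_1 (95/5) and Column_2 (50/50) measurements for one isomer
    k_methyl, k_phenyl = pure_phase_capacity_factors(
        [[0.95, 0.05], [0.50, 0.50]], tr=[25.4, 31.8], td=[2.0, 2.0])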
-------
PREDICTING A NEW COLUMN
Equation for retention time prediction for new combinations
of "n" functional groups:
    Tr = Td [ 1 + ( %_Group_1 * k'_Group_1 )
            + ( %_Group_2 * k'_Group_2 )
            + ... additional groups ...
            + ( %_Group_n * k'_Group_n ) ]

    where %_Group_1 + %_Group_2 + ... + %_Group_n = 1

After calculating the expected retention times, (Tr), for
each component in a specific combination of functional
groups, the isomer least resolved from the 2,3,7,8 isomer
is used for the resolution value for that phase combination.
The best solution must be selected from those combinations
giving a minimum resolution of one peak width.

Other factors that must be considered:
- number of functional groups contained in the prediction
- stability of the solution ( width of solution window )
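
A minimal sketch of this prediction step follows (Python). The
exhaustive search over blend percentages and the one-peak-width
criterion follow the description above; the 5% grid step, peak
width, and capacity-factor table are illustrative assumptions:

    import itertools
    import numpy as np

    def predict_rt(td, fractions, k_groups):
        # Tr = Td [ 1 + sum_j %_Group_j * k'_Group_j ]
        return td * (1.0 + np.dot(fractions, k_groups))

    def best_blend(td, k_table, target="2378", peak_width=0.05, step=0.05):
        # k_table maps isomer name -> four pure-phase capacity factors
        # (methyl, phenyl, cyano, wax).  Score each blend by the gap, in
        # peak widths, between the target isomer and its nearest neighbor.
        grid = np.arange(0.0, 1.0 + step / 2, step)
        best_frac, best_score = None, -np.inf
        for me, ph, cy in itertools.product(grid, repeat=3):
            wax = 1.0 - me - ph - cy
            if wax < -1e-9:
                continue            # fractions must sum to 1
            frac = np.array([me, ph, cy, max(wax, 0.0)])
            rts = {iso: predict_rt(td, frac, k) for iso, k in k_table.items()}
            gap = min(abs(rts[target] - rt)
                      for iso, rt in rts.items() if iso != target)
            score = gap / peak_width
            if score > best_score:
                best_frac, best_score = frac, score
        return best_frac, best_score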
-------
FORWARD PREDICTION FOR DB-DIOXIN

        % Functional Group             Separation (Peak Widths)
Methyl   Phenyl   Cyano   Wax             TCDD      TCDF
  40       25       25     10             1.19      1.59
  40       25       30      5             1.15      1.78
  40       25       20     15             1.09      1.42
  45       25       15     15             1.05      1.34
  40       30       20     10             1.03      1.32
  40       25       25     10             0.99      1.53
  45       20        5     10             0.95      1.11
  45       30        5     15             0.89      1.16
  45       25       10     15             0.83      1.32
  45       35        5     10             0.75      1.10
-------
PREDICTIONS FOR DB-DIOXIN COLUMN

                          % Functional Group
                       Methyl   Phenyl   Cyano   Wax
First Prediction         40       25       25     10
Result 1st Synthesis     45       32       13     10
-------
RESOLUTION OF TCDD ISOMERS - FIRST COLUMN
[Mass chromatogram; peaks labeled 1234, 1236, 1268 (coeluting),
1269, 1279, and 1267; retention time axis 23:50 to 29:50]
Wright State University and J & W Scientific
-------
RESOLUTION OF TCDF ISOMERS - FIRST COLUMN
[Mass chromatogram; peaks labeled 2346, 2348, 1239 (coeluting),
2347, and 3467; retention time axis 25:30 to 31:30]
Wright State University and J & W Scientific
-------
PREDICTIONS FOR NEW DIOXIN/FURAN COLUMN
DB-DIOXIN

                          % Functional Group
                       Methyl   Phenyl   Cyano   Wax
First Prediction         40       25       25     10
Result 1st Synthesis     45       32       13     10
Second Prediction        43       26       22      9
Final Synthesis          44       28       20      8
-------
RESOLUTION OF TCDD ISOMERS ON DB-DIOXIN COLUMN
[Mass chromatogram; 2378 resolved from peaks labeled 1246, 1249,
1238, 1268, 1234, 1236, 1469, and 1278; retention time axis
27:00 to 30:00]
Wright State University and J & W Scientific
-------
RESOLUTION OF TCDF ISOMERS ON DB-DIOXIN COLUMN
[Mass chromatogram; 2378 resolved from peaks labeled 2346 and
2347; retention time axis 30:00 to 33:00]
Wright State University and J & W Scientific
-------
EXPECTED vs ACTUAL RT's FOR DB-DIOXIN
(TCDD REGION)

Isomer    Expected Rt    Actual Rt
1478        24:40          24:40
1249        24:44          24:38
1246        24:46          24:40
1268        25:00          25:22
1234        25:32          25:25
1236        25:28          25:26
2378        26:02          26:06
1469        26:14          26:14
-------
2.5 PG OF 2378 TCDF ON DB-DIOXIN
[Selected-ion mass chromatograms at m/z 304, 306, 312, 316, 318,
and 241; retention time axis 30:00 to 30:51]
Wright State University and J & W Scientific
-------
2.5 PG OF 2378 TCDD ON DB-DIOXIN
[Selected-ion mass chromatograms at m/z 320, 322, 328, 332, 334,
and 257 for the 2378 peak; retention time axis 27:03 to 28:44]
Wright State University and J & W Scientific
-------
MATRICES ANALYZED ON DB-DIOXIN
Biologicals:
Beaver, fish, frogs, and mice
Human Tissues:
Blood and fat
Air Samples:
High volume ambient air and fly ash
Other Environmental Samples:
Soils, sludges, and waters
Chemical Wastes
Paper/Pulp Products and Wastes
500 Samples of Above Types Were Analyzed
Over a 12 Month Period Using This Column
-------
RESOLUTION OF 2,3,7,8-TCDD ON DB-DIOXIN AFTER 250
STANDARDS AND 350 SAMPLE ANALYSES
[Mass chromatogram; 2378 resolved from peaks labeled 1246, 1268,
1238, 1249, 1237, 1234, 1236, 1469, and 1278; retention time axis
25:47 to 29:10]
Wright State University and J & W Scientific
-------
RESOLUTION OF 2,3,7,8-TCDF ON DB-DIOXIN AFTER 250
STANDARDS AND 350 SAMPLE ANALYSES
[Mass chromatogram; 2378 resolved from peaks labeled 2346 and
2347; retention time axis 29:10 to 32:07]
Wright State University and J & W Scientific
-------
CONCLUSIONS
An analytical approach has been developed to solve
difficult separations of compounds. This approach
has been successfully utilized to produce a capillary GC
column capable of separating both 2,3,7,8-TCDD and
2,3,7,8-TCDF from all other TCDD and TCDF isomers in
a single GC-MS analysis.
-------
CONCLUSIONS
The approach developed takes into account variances in:
- temperature programming
- head pressure
- column length
- film thickness
- functional group interactions
The synthesis of a tailored bonded-phase column coating was
essential in order to obtain a column of sufficient durability
and reproducibility which would achieve the required separations.
-------
RETENTION TIME WINDOWS FOR CONGENER GROUPS
USING HELIUM CARRIER GAS
[Bar chart of retention time windows (20 to 60 minutes) for the
congener groups TCDF, TCDD, PCDF, PCDD, HxCDF, HxCDD, HpCDF,
HpCDD, OCDF, and OCDD, with the 2,3,7,8-substituted portion of
each window shaded. Axis: Retention Time in Minutes]
-------
SEPARATION OF SOME PCDD & PCDF ISOMERS ON DB-DIOXIN
[Mass chromatograms; PCDF's at m/z 338-340 (peaks labeled 13468,
12378, 23478, 12389) and PCDD's at m/z 354-356 (peaks labeled
12468/12479, 12367, 12389, 12378); retention time axis 32:50 to
47:21]
Wright State University and J & W Scientific
-------
208
AFTERNOON SESSION
MR. TELLIARD: Our next
speaker is going to be Tom Fielding.
MR. FIELDING: Thank you.
Bill asked me to speak for 15 minutes this morning, and
then he told me I was going to moderate the afternoon
session, and I figured that was my 15 minutes right there.
So, without further ado, let's start with our afternoon
session.
Our first speaker is Mr. Ted Martin of the Chemistry
Research Division, USEPA-EMSL who will talk about single
laboratory evaluation of EPA Method 200.8.
Ted?
-------
209
SINGLE LABORATORY EVALUATION OF METHOD 200.8, DETERMINATION OF TRACE
ELEMENTS BY INDUCTIVELY COUPLED PLASMA-MASS SPECTROMETRY
T.D. MARTIN, USEPA, EMSL-CINCINNATI AND
S.E. LONG, TECHNOLOGY APPLICATIONS, INC.
Inductively coupled plasma-mass spectrometry (ICP-MS) is currently
receiving attention as the newest spectrochemical technique for trace element
analyses of environmental samples. It is a multielement technique that
provides extremely low limits of detection and freedom from the potentially
complex interelement spectral interferences of atomic emission spectrometry.
It is not without limitations, but when these are properly recognized and its
strengths utilized, it is an extremely powerful tool for environmental analyses.
In this presentation certain procedural features of Method 200.8 and
specific requirements for its use will be discussed. Also presented will be
detection limit data, single laboratory data compared with inductively coupled
plasma-atomic emission spectrometry for the analysis of aqueous and solid
matrices and the status of a joint EPA/AOAC multi-laboratory study currently
being conducted.
-------
SINGLE LABORATORY EVALUATION OF METHOD 200.8
DETERMINATION OF TRACE ELEMENTS IN WATERS AND
WASTES BY ICP MASS SPECTROMETRY
Theodore D. Martin
Environmental Monitoring and Systems Laboratory
and
Stephen E. Long
Technology Applications Inc.
UNITED STATES ENVIRONMENTAL PROTECTION AGENCY
CINCINNATI, OHIO 45268
-------
211
SINGLE LABORATORY EVALUATION OF METHOD 200.8
DETERMINATION OF TRACE ELEMENTS IN WATERS
AND WASTES BY ICP-MASS SPECTROMETRY
T.D. MARTIN, USEPA-EMSL AND S.E.LONG, TAI
The information that will be presented in today's talk can be divided
into three topical or subject areas. The first portion will be a brief
functional description of ICP-MS, the second part will be a discussion of
certain procedural requirements of Method 200.8 followed by comparative data
to inductively coupled plasma-atomic emission spectrometry (ICP-AES) and the
final portion will be a status report of the joint EPA/AOAC multi-laboratory
study currently being conducted.
ICP-MS offers some unique and distinct advantages to environmental
analyses. Some of these are listed on the next slide (#2). The first item of
particular importance is a linear range of 5 orders of magnitude, and up to 8,
if the instrument is equipped with some type of extended range capability.
This wide dynamic range is very useful for analysis of dilute solutions
especially when viewed along with the low, PPT, detection limits. ICP-MS is
considered to be a simultaneous analysis technique. This is achieved by
repeated, rapid, sequential scanning of the ionized sample over the mass range
5-250 amu. This process provides multielement determinations of the entire
periodic table with a few noteworthy exceptions (C, N, O, F, Si, P and S).
By the very nature of the technique, isotope ratio information is available
with each analysis. This feature gives the analyst the capability of doing
very accurate determinations by isotope dilution. This same isotope
information supplies recognizable patterns, like fingerprints, which can
facilitate qualitative analyses. Sample throughput is rapid (5 min. per
sample) while generally providing, for some analytes, detection limits that are
equal to or lower than those of graphite furnace atomic absorption spectrometry (GFAA).
The next slide (#3) is a functional schematic of the VG PlasmaQuad
spectrometer. This is the instrument used in the single laboratory validation
study to be discussed later. The slide is self-explanatory. However, it may
be helpful to trace the analyte route through the system.
The sample solution is pumped to the nebulizer where the resulting
aerosol droplets of the analyte traverse the spray chamber and enter the
plasma through the injector tube of the torch assembly. The energy of the
plasma desolvates, dissociates and ionizes the analyte. A portion of these
ions enter the expansion chamber through a small hole in the sample cone
interface. The ions are then drawn into the spectrometer through an even
smaller hole in the skimmer cone and are focused by the lens supply past the
photon stop into the quadrupole. By virtue of their mass-to-charge ratios,
the ions sequentially pass through the charged rod assembly onto the detector.
The resulting signals from the detector are processed by the multichannel
analyzer and the data transferred to a dedicated computer.
It is important to note the most critical area of the instrument is the
expansion chamber or cone assembly. It is in this area where the oxides and
-------
212
polyatomic molecular ions are primarily formed and where analyte deposition
can occur at the cone orifice causing instrument drift and memory effects.
The following slide (#4) shows the actual instrument. The stand alone
unit on the right is the RF generator for plasma. The torch box assembly and
spectrometer interface are enclosed behind the dark door in the center of the
instrument. On the right are the gas controls for the plasma and on the left
are the controls for quadrupole, vacuum pumps and readout systems. The rotary
vacuum pumps and required cooling system are behind the instrument and cannot
be seen on this slide.
The next slide (#5) shows the torch lighted and with the torch box
assembly in the advanced or working position in front of the spectrometer
sample cone interface.
The following slide (#6) shows the appearance of the sample cone
interface of the spectrometer after a period of operation. The cone is made
of nickel and when new is very shiny. The heat of plasma is the main cause
for change of appearance. Two hours of operation can account for such a
change.
A new sample cone is shown in the next slide (#7). Although shielded by
the sample cone during operation, the heat also affects the skimmer cone shown
at the right.
With all the advantages offered by ICP-MS there are still some limiting
factors. On the next slide (#8) are listed the most common interferences that
affect the technique.
Although the inorganic mass spectrum is far simpler and less complex than
atomic emission, it is not free of spectral interferences. Spectral
interference occurs as an isobaric interference from either an isotope of
another element that forms singly or doubly charged ions of the same nominal
mass-to-charge ratio, or from polyatomic ions commonly formed in the plasma or
interface system from support gases and sample components. The mass range
from 12 to 75 amu is the region most affected by polyatomic interferences.
The analytes of this region include the first series transition metals -
scandium through zinc. Fortunately, the elemental and most of the common
polyatomic interferences have been identified.
In Method 200.8 only two of the recommended analytical isotopes
experience isobaric elemental interferences. Ruthenium interferes on Mo-98
and krypton on Se-82. However, eight of the analytes can experience a
polyatomic interference. These analytes are As, Cd, Cr, Cu, Mn, V and Zn with
a remote, but possible interference on Ag. Again, the majority of these
analytes are first series transition metals and the polyatomic interferences
are well documented and noted in the method. The third spectral interference,
abundance sensitivity, is a problem of resolution where the wing of a large
ion peak contributes to a small ion peak. These occurrences can be minimized
by selecting the proper instrument operating conditions.
The listed physical interferences also are best controlled by operational
measures such as: (1) using a peristaltic pump to control and provide a steady
rate of sample introduction to the nebulizer, (2) the use of a mass flow
-------
213
controller to provide accurate gas flow and uniform aerosol transport to the
plasma, (3) limiting dissolved solids to 0.2 % (w/v) to reduce the potential
clogging of the orifice of the interface cone assembly (4) the use of a water
cooled spray chamber to lower the water vapor content of the aerosol to reduce
oxide formation and (5) the use of internal standards to correct for matrix
suppression. Method 200.8 requires the use of all these procedural measures
except the water cooled spray chamber which is listed as a recommendation.
Memory interference results when isotopes of elements in a previous
sample contribute to the signals measured in a new sample. Memory effects can
result from sample deposition on the sample and skimmer cones and from buildup
of the analytes in the spray chamber and sample uptake system. Adequate
flushing of the system with a rinse blank reduces the effect. Method 200.8
requires as a minimum, the use of a one minute rinse time. Also, the presence
of a memory effect may be assessed during analysis by noting a consecutive
drop in replicate integrations.
On the next slide (#9) is a summary of Method 200.8 features. The method
is applicable to both aqueous and solid matrices for the analysis of 20
analytes. It should be noted that elements Ca, K, Mg, Na and Fe have not been
included in the method because they are normally present at relatively high
concentrations in environmental samples and can be determined accurately and
more effectively by other techniques. However, the elements thorium and
uranium are included because of the increasing interest of these elements in
groundwater and because of the relatively good sensitivity of these elements
by ICP-MS.
The method provides similar acid digestion procedures for aqueous and
solid matrices. This occurs when the remaining particulate material in an
evaporated aqueous sample is acid refluxed in the same manner as a solid
sample. A combination of HNO3 and HCl acids is used in both procedures. The
resulting analyses are considered to be "total recoverable" determinations.
Prior to ICP-MS analysis, internal standards are added to an aliquot of
the digestate. Method 200.8 requires the use of at least 3 internal
standards; however, the use of the 5 listed on the slide is recommended and
was used in the single laboratory validation study.
Outlined on the next two slides (#10 and #11) are the "total recoverable"
sample preparation procedures for aqueous and solid samples, respectively.
Specifics included in the outlines are the size sample used, the acid
addition, the reflux time and dilution volumes. Note that prior to ICP-MS
analysis an aliquot of the digestate is diluted further to limit the dissolved
solids to 0.2% (w/v) and to control the chloride concentration to a level of
0.4% (v/v). In each case, the analytical determinations are multiplied by a
dilution factor to obtain the sample analyte concentrations.
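
As an arithmetic illustration of this scheme (100 mL of aqueous sample
taken to a 50 mL digestate, then a 20 mL aliquot diluted to 50 mL,
i.e., the stated dilution factor of 1.25), the back-calculation can be
sketched as follows in Python:

    def sample_concentration(measured_ug_per_L,
                             sample_mL=100.0, digestate_mL=50.0,
                             aliquot_mL=20.0, final_mL=50.0):
        # Analyte concentration in the original sample from the
        # concentration measured in the diluted digestate aliquot.
        dilution_factor = (digestate_mL / sample_mL) * (final_mL / aliquot_mL)
        return measured_ug_per_L * dilution_factor

    print(sample_concentration(4.0))   # 4.0 ug/L measured -> 5.0 ug/L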
A combination acid "total recoverable" procedure was selected for sample
preparation because it requires less time, is less labor intensive, and past
studies have shown it to be equal or superior to the "hard digestion" (HNO3 +
H2O2) for solubilizing toxic and priority pollutant metals. With the "total
recoverable" procedure insoluble oxides are less likely to occur, while the HCl
has been shown to stabilize Ag in solution and to improve the solubilization of
oxides and certain analytes such as Sb and Cr.
-------
214
As a side note, it may be important to know, that in order to provide
uniform sample preparation for the various spectroscopy techniques these same
digestion procedures have been incorporated into the recently revised ICP-AES
Method 200.7 and the new stabilized temperature graphite furnace atomic
absorption methodology, Method 200.9.
On the next slide (#12) are shown the requirements of instrument
calibration. After a 30 min warm-up period, a tuning solution containing 5
elements (Be, Co, In, Mg and Pb) is used to check instrument resolution and
adjust mass calibration. The Mg isotopes 24, 25, 26 are used to check
resolution for the low mass range while the Pb isotopes 206, 207, 208 are used
for the high mass range. Mass calibration is adjusted if it has shifted by
more than 0.1 amu from unit mass. This solution is also used to check the
stability of the instrument.
After tuning is complete, the instrument is calibrated using a
calibration blank and two composite standard solutions. Before beginning
sample analysis, the calibration is initially verified for all analytes by
analyzing a QC sample obtained from an outside source. The analytical result
must not exceed ± 10% of the established values of the QC sample. The
calibration is then verified on a continuing basis by analyzing a standard
solution as surrogate sample at frequency of 10% with the determined value
being within the QC window of ± 10% of the true concentration.
As mentioned before, at least three internal standards must be used and
their response monitored. If their response falls outside the QC window of
+25% to -40% of the original response, the situation must be corrected. This
may require only sample dilution or may entail termination of the analysis to
clean the sampling cone and/or retune the instrument.
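
A minimal sketch of these two checks (the +/-10% continuing calibration
window and the +25%/-40% internal standard window) follows; the numbers
in the example are hypothetical:

    def calibration_ok(measured, true_value, tolerance=0.10):
        # Continuing calibration: result within +/-10% of the true value
        return abs(measured - true_value) <= tolerance * true_value

    def internal_standard_ok(response, original_response,
                             upper=0.25, lower=0.40):
        # Internal standard drift: within +25% / -40% of original response
        change = (response - original_response) / original_response
        return -lower <= change <= upper

    # A 60% drop in internal standard response flags the run:
    print(internal_standard_ok(40.0, 100.0))   # False -> dilute or stop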
On the next slide (#13) are given the quality control requirements of
Method 200.8. For initial demonstration of performance, method detection
limits (MDLs) and the linear calibration range must be determined.
Procedures for these two determinations are described in the method. These
two determinations must be completed every six months or whenever there is a
significant change in background or instrument response (e.g. changing the
detector).
For assessing laboratory performance, a reagent blank and a laboratory
fortified blank are analyzed with each batch of samples. If an analyte in the
reagent blank determination exceeds its MDL, laboratory or reagent
contamination should be suspected. Recovery of the analytes in the fortified
blank should be within the established control limits or 85 to 115%, if
control limits have not been developed. If recovery is outside the limits,
the analysis is out of control. The analysis should be terminated and the
problem identified and corrected before continuing the analyses.
To assess analyte recovery from the matrix, 10% of samples are fortified
with the same analyte concentration as that used in the fortified blank. If
recovery from the sample matrix falls outside the designated range of the
fortified blank, a matrix effect should be suspected.
The method detection limits for Method 200.8 are given on the next slide
(#14). The aqueous method detection limits are all 1 µg/L or less with the
-------
215
exception of As, Se, V and Zn. Method detection limits for solids are 1 mg/kg
or less with the exception of Se. These, along with instrument detection
limits, are included as data tables in Method 200.8.
The next slide (#15) shows the analytical scheme used in the single
laboratory validation study. A matrix set of five waters and three solids
were used in the study. The five waters consisted of drinking water, well
water, pond water, a sewage treatment primary effluent and an industrial
effluent. The three solid samples were NITS sediment #1645, EPA
electroplating sludge #286, and EPA hazardous waste soil #884. For each
matrix, a total of five replicate sub-samples were analyzed in order to make
an estimate of precision. Two further sets of duplicate sub-samples were
fortified with a multielement analyte mixture, one set at a low concentration (10
µg/L to 50 µg/L for aqueous and 20 mg/kg for solids), the other at high
concentrations (100 µg/L to 200 µg/L for aqueous and 100 mg/kg for solids) in
order to assess element recoveries in a given matrix. A total of nine samples
were therefore analyzed for each matrix with the total number of samples 72.
The digestate or aliquots of the digestate were analyzed by both ICP-AES and
ICP-MS.
On the next slide (#16) are given the instrument operating conditions
used in the single laboratory validation study. These are the same operating
conditions that appear in Method 200.8 and Method 200.7, respectively. There
is an error in the ICP-AES listed conditions. The time used for data
acquisition was a 16 sec integration period not 20 sec as shown. Also, of the
ICP-MS operating conditions given, only the minimum of 3 replicate
integrations is required by Method 200.8.
During the course of the single laboratory validation study, spectral
interferences were experienced by both techniques in the analysis of certain
samples. Those which were most significant are listed on the next slide
(#17). The analysis of all samples by ICP-MS required correction of the
interference on As, V and Cr from the polyatomic ions of argon-chloride and
chlorine-oxide. These corrections were anticipated and necessary because of
the sample preparation procedure used and the inherent concentration of
chloride in environmental samples. Cu-65 was used for copper analysis of the
industrial effluent sample because the high sodium concentration caused a
NaAr+ interference on Cu-63. Also, correction of MoO on Cd-111 was necessary
because the industrial effluent contained approximately 1 mg/L Mo.
The largest ICP-AES spectral interferences occurred in the analysis of
the sediment and sludge samples. The interference was of an interelement
direct spectral overlap nature from high concentrations of Cr (#1645-28,000
mg/kg and #286-7500 mg/kg) and Fe (#1645-89,000 mg/kg). The interference
effects in these two samples were so severe that they prevented the analysis of
Th and Se in the sediment sample and caused the reporting of inaccurate data
for Sb in both the sediment and electroplating sludge samples.
In the next slide (#18) are recovery data from the fortified primary
sewage treatment effluent. Except for Ba and Th the recoveries are between
85-115%. The low Ba recovery is attributed to the presence of sulfate.
However, the low Th recovery cannot be explained. In the other four aqueous
-------
216
samples, Th recoveries were acceptable. The formation of insoluble phosphates
during sample processing has been suggested, but not confirmed.
The next 5 slides (#19-23) are figures that contain ICP-MS/ICP-AES
comparison plots for the elements Cu, Mn, Ni, Ag, and V. In each case the
center bar represents the mean concentration of the five replicates and the
vertical bars represent two standard deviations of the replicates on each side
of the mean. The concentration of the analyte is noted on the y-axis, and the
sample matrix along the x-axis. The ICP-MS data are noted by the wider bars.
These five elements were selected for presentation because of the available
data. We will step through these slides for your observation without
additional discussion.
These five figures will not appear in the proceedings of this meeting
because they are being published elsewhere. However, these figures, along
with others, and a complete discussion of the single laboratory validation
study have been prepared as a draft report by Dr. Stephen Long of Technology
Applications, Inc. For those interested in obtaining a copy of the report see
me following this session or during the course of the conference.
Given in this slide (#24) is a brief summary of the single-laboratory
validation study. Approximately 3000 analytical determinations were
completed. The matrix analyte background data were used to compare the two
spectroscopic techniques using a paired t test. A null hypothesis approach
was used with the analyte means of the two techniques being compared at the 5%
level of significance. For an analyte to be included in this comparison, a
requirement was set that data must be available for at least two of the
matrices. For aqueous matrices, 11 analytes qualified; for the solid
matrices, 17 analytes qualified. Only Se, Tl and U did not meet the
requirements. These comparisons represented 860 determinations. A
significant difference between the two techniques exists if calculated p value
is < the level of significance set at 0.05. The lowest p value calculated was
0.08 for Zn in water. Therefore, it would appear that the two techniques
provide data that are equivalent in quality for the range of concentrations
applicable to both techniques.
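
As a sketch of the comparison described above (not the program actually
used in the study), a paired t test on hypothetical per-matrix
background means for one analyte could be run as:

    from scipy import stats

    def techniques_equivalent(icp_ms_means, icp_aes_means, alpha=0.05):
        # p below alpha would indicate a significant difference
        t_stat, p_value = stats.ttest_rel(icp_ms_means, icp_aes_means)
        return p_value, p_value >= alpha

    p, same = techniques_equivalent([12.1, 55.3, 8.7, 140.2],
                                    [11.8, 56.0, 9.1, 138.9])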
The summation of the precision determined in the study was the following.
Intra-sample precision of the two techniques was similar being 1 to 2% RSD.
Although the overall precision appears to be sample limited, for the matrices
tested, the mean RSD was < 10% except for the hazardous waste soil. Higher
RSDs for this sample were attributed to the inhomogeneity of the sample
material. Of the 298 spike duplicate determinations, only 8% had a relative
percent difference (RPD) that exceeded 10%. If the hazardous waste soil
duplicates were not included in the calculation, the number would drop from 8%
to < 6%.
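
For reference, the two precision measures quoted here can be computed
as follows; the replicate and duplicate values shown are hypothetical:

    import statistics

    def relative_std_dev(values):
        # Relative standard deviation (%) of replicate results
        return 100.0 * statistics.stdev(values) / statistics.mean(values)

    def relative_percent_difference(a, b):
        # RPD (%) between duplicate spike determinations
        return 100.0 * abs(a - b) / ((a + b) / 2.0)

    print(relative_std_dev([10.2, 10.4, 10.1, 10.3, 10.2]))   # about 1% RSD
    print(relative_percent_difference(21.0, 19.0))            # 10% RPD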
Recovery of the fortified analytes was very good. For the fortified
blank, recovery ranged from 94% to 107%. For the fortified matrices < 10% of
the recoveries were outside the 85% to 115% limits. As for specific analytes,
Sb gave good recovery in the aqueous matrices but low recoveries (55.4% -
81.2%) in the solid matrices. Poor recoveries obtained for Ba in the sewage
effluent, hazardous waste soil and electroplating sludge have been attributed
to precipitation as the sulfate. Low recoveries of Th in the sewage effluent
and hazardous waste soil cannot be explained at this time. However, all low
-------
217
recoveries appear to be matrix related and with the exception of these three
elements, recovery data indicate adequate method performance for the range of
elements and matrices studied.
On the last slide (#25) is a summary description of the joint USEPA/AOAC
multilaboratory ICP-MS study currently being conducted. This project was
started in early December, 1989 after concurrence by AOAC that a joint effort
multilab study of Method 200.8 would be of benefit to the analytical community
doing environmental analyses. The study design that you see here was
completed and agreed upon by both parties in early January, 1990. Samples and
ampul concentrates were shipped to the 18 participating laboratories on March
22, 1990, and as of May 3, seven of the participants had already forwarded
data packages.
In addition to determining sample background concentrations, sample
aliquots are to be fortified as Youden pairs with analyte concentrations
necessary to collect the data required. For determinations in reagent, tap
and ground water this is to be done at three levels of concentration. The
participants will use their own reagent water and a local drinking water for
sources of these two types of water. For the ground water determinations each
laboratory has received for analysis one of the waters collected from five
different sources. Besides the reagent, tap and ground water determinations,
eight of the laboratories will receive for analysis a wastewater digestate and
also will analyze a wastewater of their choice.
The data from this study will be compiled by EPA and presented to the
AOAC Executive Board for approval along with a copy of the method in the AOAC
format. The project and final report are scheduled to be completed by October
1990. The results of the study are to be presented as a poster paper on
September 12 at the annual AOAC meeting in New Orleans, LA.
For more detailed information concerning this study, please contact James
Longbottom of the Quality Assurance Division of EMSL-Cincinnati.
This concludes my presentation.
-------
SINGLE LABORATORY EVALUATION OF METHOD 200.8
DETERMINATION OF TRACE ELEMENTS IN WATERS AND
WASTES BY ICP MASS SPECTROMETRY
Theodore D. Martin
Environmental Monitoring and Systems Laboratory
and
Stephen E. Long
Technology Applications Inc.
UNITED STATES ENVIRONMENTAL PROTECTION AGENCY
CINCINNATI, OHIO 45268
-------
SLIDE # 2
INDUCTIVELY COUPLED PLASMA
MASS SPECTROMETRY
ADVANTAGES
1. LINEAR RANGE 5 - (8) ORDERS
2. PPT DETECTION LIMITS
3. "SIMULTANEOUS" DETERMINATION OF PERIODIC TABLE
4. ISOTOPE RATIO INFORMATION
ISOTOPE DILUTION CAPABILITY
5. FACILITY FOR QUALITATIVE ANALYSIS
6. RAPID SAMPLE THROUGHPUT
-------
SLIDE # 3
[Functional schematic of the VG PlasmaQuad ICP-MS spectrometer;
labeled components include the vacuum stages]
-------
221
SLIDE # 8
ICP-MS INTERFERENCES
SPECTRAL
1. ISOBARIC ELEMENTAL - Mo, Se
2. POLYATOMIC IONS - 8 analytes
3. ABUNDANCE SENSITIVITY - Resolution
PHYSICAL
1. VISCOSITY - Peristaltic pump
2. AEROSOL TRANSPORT - Mass flow control
3. DISSOLVED SOLIDS - Limit 0.2% (w/v)
4. OXIDE FORMATION - Cooled spray chamber
5. SUPPRESSION - Internal standards
MEMORY - ANALYTE BUILDUP
1. SAMPLE UPTAKE SYSTEM
2. CONE DEPOSITS
3. DROP IN REPLICATE INTEGRATIONS
-------
METHOD 200.8 : SUMMARY FEATURES
SLIDE # 9
SCOPE & APPLICATION : Ground, Surface and Drinking Waters;
                      Wastewater, Sludge and Solid Waste
ELEMENT COVERAGE    : Al, Sb, As, Ba, Be, Cd, Cr, Co, Cu, Pb,
                      Mn, Mo, Ni, Se, Ag, Tl, Th, U, V, Zn
SAMPLE DIGESTION    : Nitric, Hydrochloric acid
DISSOLVED SOLIDS    : 0.2% Limit
INTERNAL STANDARDS  : Sc, Y, In, Tb, Bi
-------
SLIDE # 10
SAMPLE PREPARATION - TOTAL RECOVERABLE
1. AQUEOUS
100 ml sample + 1 ml conc. nitric + 0.5 ml conc. HCl
Heat to reduce volume to 15 ml. Reflux for 30 min.
Cool and dilute to 50 ml.
Dilute 20 ml to 50 ml and analyze.
0.4% HCl, 0.8% nitric. Dilution factor 1.25.
-------
SLIDE # 11
SAMPLE PREPARATION - TOTAL RECOVERABLE
2. SOLIDS
1 g sample + 4 ml (1+1) nitric + 10 ml (1+4) HCl
Reflux for 30 min. Cool and dilute to 100 ml.
Centrifuge or allow to stand overnight.
Dilute 10 ml to 50 ml and analyze.
0.4% HCl, 0.4% nitric. Dilution factor 0.5
-------
SLIDE #12
METHOD 200.8 : CALIBRATION
DEMONSTRATION OF INITIAL CALIBRATION
Tuning Solution (mass calibration)
Quality Control Sample QCS
CALIBRATION VERIFICATION
Standard run as surrogate sample at 10% frequency
QC Window ± 10% of true value
INTERNAL STANDARDIZATION
Minimum of Three Internal Standards
QC Window +25% to -40% of Original Response
-------
SLIDE # 13
METHOD 200.8 : QUALITY CONTROL
1. INITIAL DEMONSTRATION OF PERFORMANCE
MDL, Linear Calibration Range
2. ASSESSING LABORATORY PERFORMANCE
* Reagent Blank (LRB) - 1 per batch
Laboratory Fortified Blank (LFB) - 1 per batch
3. ASSESSING ANALYTE RECOVERY
Laboratory Fortified Sample Matrix - 10% of samples
-------
SLIDE # 14
TOTAL RECOVERABLE METHOD DETECTION LIMITS

ELEMENT   MASS   AQUEOUS (ug/L)   SOLIDS (mg/kg)
Al          27        1.0              0.4
Sb         121        0.4              0.2
As          75        1.4              0.6
Ba         137        0.8              0.4
Be           9        0.3              0.1
Cd         111        0.5              0.2
Cr          52        0.9              0.4
Co          59        0.09             0.04
Cu          63        0.5              0.2
Pb         208        0.6              0.3
Mn          55        0.1              0.05
Mo          98        0.3              0.1
Ni          60        0.5              0.2
Se          82        7.9              3.2
Ag         107        0.1              0.05
Tl         205        0.3              0.1
Th         232        0.1              0.05
U          238        0.1              0.05
V           51        2.5              1.0
Zn          66        1.8              0.7
-------
SLIDE # 15
ICP-MS METHOD 200.8 SINGLE LAB VALIDATION
[Flow diagram: waters and solids; each matrix run as replicate
sub-samples plus spiked splits, with a reagent blank and LFB per
batch; digestate splits analyzed by ICP-MS (Method 200.8) and
ICP-ES for a technique comparison and a precision and accuracy
statement. Total samples: 72]
-------
SLIDE # 16
METHOD 200.8 VALIDATION
INSTRUMENT OPERATING CONDITIONS

                       ICP-MS                    ICP-ES
Instrument             VG PlasmaQuad Type I      Jarrell-Ash AC 1160
Forward power          1.35 kW                   1.1 kW
Coolant flow rate      13.5 l/min.               19 l/min.
Auxiliary flow rate    0.6 l/min.                0.3 l/min.
Nebulizer flow rate    0.78 l/min.               0.63 l/min.
Solution uptake        0.6 ml/min.               1.2 ml/min.
Repl. integrations
Data acquisition       320 us dwell,             20 sec integration,
                       2048 channels,            48 channels
                       85 sweeps
-------
SLIDE # 17
SPECTRAL INTERFERENCES 200.8 ELEMENT SUITE
ICP-MS
1. Chloride on As-75, V-51, Cr-52
2. Na(Ar) on Cu-63 (effluent)
3. Mo(O) on Cd-111 (effluent)
ICP-ES
1. Cr on Sb 206.8 nm (NIST 1645)
2. Cr on Sb 206.8 nm (EPA 286)
3. Cr on Th 283.7 nm (NIST 1645)
4. Fe on Se 196.0 nm (NIST 1645)
-------
SPIKE RECOVERIES - PRIMARY EFFLUENT
SLIDE # 18

ELEMENT   BKGRD CONC.   SPIKE    RECOVERY
            (ppb)       (ppb)      (%)
Sb           1.5          10       95.7
As          <1.4          50      104.2
Ba           202          50       79.2
Be          <0.3          10      110.5
Cd           9.2          10      101.2
Co          13.4          10       95.1
Pb          17.8          10       95.7
Ni          84.0          10       88.4
Se          <7.9          50      112.0
Ag          10.9          50       97.1
Tl          <0.3          10       97.5
Th          0.11          10       15.4
U           0.71          10      109.4
V           <2.5          50       90.9
Zn           163          50       85.8
-------
SLIDE # 24
VALIDATION STUDY SUMMARY

ANALYTICAL DETERMINATIONS - 3000

MATRIX COMPARISON ICP-MS / ICP-AES
1. Paired t test, significance level 0.05, for n >= 2 matrices
2. Aqueous (11) and Solids (17) - 860 determinations
3. Lowest p value: aqueous Zn, p = 0.08

PRECISION
1. Intra-sample RSD 1-2%
2. Matrices, mean RSD <10%
3. 8% of 298 spike duplicate determinations with RPD >10%

RECOVERY
1. Fortified blanks 94-107%
2. <10% of fortified matrices outside 85-115%
3. Exceptions: Sb, Ba and Th
-------
SLIDE # 25
JOINT USEPA/AOAC MULTILABORATORY STUDY
STUDY PERIOD: DEC 1989 TO OCT 1990

STUDY DESIGN
WATER SOURCES          YOUDEN PAIRS
REAGENT                     3
TAP                         3
GROUND (5)                  3
WASTE (Digestate)*          2
WASTE (of choice)*          1
(Background data determined for each source)

PARTICIPATING LABORATORIES 18 - (8)
-------
234
MR. FIELDING: Our next speaker is Jim Rice who will
talk about ICP performance for the measurement of 14 trace
metals in power plant waste streams.
Jim?
-------
EPRI
235.
13th Annual EPA Conference on
Analysis of Pollutants in the Environment
Norfolk, Virginia
May 9-10,1990
ICP PERFORMANCE FOR THE MEASUREMENT OF
FOURTEEN TRACE METALS
IN POWER PLANT WASTE STREAMS
By
Raymond F. Maddalone, PhD
TRW, Inc.
Redondo Beach, California
James K. Rice, PE
Consulting Engineer
Olney, Maryland
and
Winston Chow, PE
Electric Power Research Institute
Palo Alto, California
-------
236
ICP PERFORMANCE FOR THE MEASUREMENT OF
14 TRACE METALS IN POWER PLANT WASTE STREAMS
BY
Raymond F. Maddalone, PhD
TRW, Inc.
James K. Rice, PE
Consultant
and
Winston Chow, PE
Electric Power Research Institute
ABSTRACT
A 26 laboratory study by the Electric Power Research Institute determined the
performance of an Inductively Coupled Plasma Atomic Emission Spectroscopic (ICP) method
for measuring 14 elements (Al, Ba, Be, B, Cd, Cr, Cu, Fe, Pb, Mn, Mo, Ni, V, Zn) in typical
power plant waste streams. The resulting single operator and overall precision were used to
compute limits of detection and quantitation for each of the elements in each matrix. These
limits of detection are compared to published EPA detection limits. In addition, an
algorithmic approach was used to compare ICP precision and recovery results from the
utility sample matrices to those obtained in reagent grade water samples. From this
analysis the presence or absence of chemical matrix effects was investigated.
-------
237
BACKGROUND AND OBJECTIVES OF THE ANALYTICAL METHODS QUALIFICATION PROJECT
The utility power Industry 1s required under the Federal Mater Pollution
Control act to Monitor their discharges for a number of parameters. As a
result of the 1976 Consent Decree (National Resources Defense Council ver-
sus Train), the EPA was required to establish effluent limitation
guidelines, pre-treatment standards, and also new source performance stan-
dards for 65 pollutant classes (129 specific priority pollutants). In an-
ticipation of a requirement that utility discharges be monitored for some
or all of the priority pollutants, EPRI Project RP 1851-1 was initiated
both to collect concentration and frequency data about utility aqueous dis-
charges and to assemble a set of sampling and analysis guidelines for the
species of interest in those discharges. Utility chemists would then have
a firm basis to select methods to monitor the species of Interest In those
discharges.
The data bases produced in Phase I of EPRI Project RP 1851-1, Sampling and
Analysis of Utility Pollutants (SAUP), contain both the average steam elec-
tric power plant discharge concentrations for all conventional, nonconven-
tional, and priority pollutants (1) as well as a comprehensive compilation
of precision and bias data on the methods used to monitor the pollutants
(2). In developing this latter data base, it was found that the validation
data were often obtained for matrices and concentrations not representative
of the steam electric industry. The lack of an independent data base de-
rived from utility laboratory performance in typical discharge matrices
placed individual utilities at a disadvantage during siting and permit
negotiations. At the same time, there was a trend toward setting effluent
discharge limits at the end of pipe or edge of the mixing zone at the water
quality criteria concentration. Many of these water quality criteria are
at or below the detection limit of current analysis methods. In order to
determine whether current monitoring methodology is capable of detecting
pollutants at the EPA detection limit or the water quality criteria
concentration, EPRI instituted a project under RP 1851-1 entitled Utility
Aqueous Discharge Monitoring - Analytical Methods Qualification (AMQ).
The primary objective of the AMQ Project is to collect precision and bias
data for methods used to determine selected parameters and elements in util-
ity discharge streams. The required analytical data are collected through
collaborative testing using representative utility laboratories. The lim-
its of detection and quantitation derived from this project thus provide a
realistic estimate of the capabilities of EPA-approved methods In the com-
pliance monitoring situation under utility process stream and laboratory op-
erating conditions.
The specific AMQ project goals are to:
Establish both the single operator and overall precision that
can be expected from utility laboratories utilizing the se-
lected compliance methods.
Establish whether any statistically significant bias exists
in a given matrix for the methods validated.
-------
238
Determine the expected limits of detection and quantitation
of compliance methods when used by a utility analyst for util-
ity matrices.
Table I provides an overview of the entire AMQ project. It is divided into
four parts in order to minimize the impact on the participating industry
laboratories and to permit modification of the test design to reflect chang-
ing environmental issues or regulatory requirements. The elements and pa-
rameters in each part were selected on the basis of their importance to the
utility industry, regulatory interest, and a comparison of discharge and in-
take concentrations. Under the AMQ-III validation program, the subject of
this paper, Inductively Coupled Plasma (ICP) was validated for 14 metals in
6 different utility matrices.
AMQ-III TEST METHODOLOGY
The AMQ project is based on the premise that a method qualification project
should use matrices representative of those encountered by the analyst in
routine work. Furthermore, since the shipment and storage of samples is
part of the normal analysis procedure at most laboratories and comparison
of the results from different laboratories on split samples is often encoun-
tered in compliance monitoring, the test program should include spiking the
matrices and sending aliquots to each participant. A detailed review of
the AMQ test and analysis protocols can be found in references (1,1). A
summary of the test program protocols is provided in the following
paragraphs.
Laboratory Selection
The participation of utility laboratories was solicited through the offices
of EPRI and the Utility Water Act Group (UWAG). Twenty laboratories were
eventually enlisted to validate the AMQ-III elements.
Laboratories were contacted before the test to determine which labs had the
experience or interest in determining the elements in the seawater
samples. These laboratories participated in a pre-test study to assist in
selecting test concentrations, assessing sample stability, and providing
the operators with some experience with those difficult matrices. The
QA/QC vial was used as an absolute, independent measure of the bias associ-
ated with an individual laboratory.
Matrices Tested
A key component of this validation effort was the use of typical utility
matrices. To that end, sufficient quantities of river water, ash pond
overflow, seawater intake, seawater discharge (seawater with a proportional
amount of fireside wash added to simulate routine plant discharges), and
treated chemical metal cleaning wastes (TCMCW) were obtained from utility
sources. Following the overall objective of simulating typical NPDES
monitoring, the test matrices were homogenized, spiked and then split. In
addition to these samples, spiked reagent grade water and a QA/QC vial sup-
plied by the ERA (Environmental Research Associates) were also sent to
-------
239
participating laboratories. All laboratories were required to analyze the
reagent grade water and QA/QC vial while the analysis of the TCMCW sample
was optional.
Test Concentration Selection
Depending on the background concentration, every effort was made to have
the lower test concentrations near the expected detection limit of the
method. This requirement was difficult to meet in some of the matrices,
due to the high background. The remaining test concentrations were select-
ed to stay within the published (£) estimate of the optimum analytical
range, if possible. A total of 4 test concentrations (neat and three
spiked) were prepared for each matrix and element.
Sample Preparation
Samples were prepared (1,1) by filling a polyethylene 120 L churn (Figure
1) splitter with the acid (HNO3) stabilized test matrix. The volumes
were determined by weight and density measurements and were accurate to
±100 ml based on the calibrated accuracy of the scale. After the volume
was determined, weighed amounts of single element 1000 ug/L certified AAS
standards were added to the churn splitter using Teflon beakers. After the
addition of a spike, the churn was operated for 5 minutes before samples
were removed from a spigot at the bottom of the churn. Each sample consist-
ed of a 500 ml aliquot in a precleaned (HCl, HNO3, high purity water)
polyethylene bottle.
QA/QC Activities
A comprehensive QA/QC program was part of the sample preparation and dis-
bursement effort. Internally, the sample spiking and disbursement activi-
ties were guided by a test plan whose adherence was audited by personnel
not associated with the test program. In addition, test samples from each
matrix were selected at random and analyzed prior to shipment to ensure
their proper preparation. Samples were also analyzed by GFAAS to determine
the stability with time. Based on the seawater pre-test, it was found that
barium could not be stabilized in the seawater and TCMCW matrices at the
test concentrations. Consequently, barium data from those matrices are not
included in the results.
Test Procedures and Reporting Requirements
Each participating laboratory was sent a detailed test protocol describing
the sample digestion and analytical procedures. The test protocol con-
tained a copy of EPA Method 200.7 and detailed instructions on the report-
ing requirements. Each sample was analyzed in duplicate and the results re-
ported to the TRW test coordinator on standardized reporting forms or on a
hard-copy computer output from the ICP. A QA/QC survey, covering equipment
used, background/experience of the operator, and laboratory practices, was
included in this package.
-------
240
Statistical Analyses
The design of the validation program was based on ASTM D2777-85 using repli-
cate analyses. The data from the test program were reduced using a PC-based
statistical program, which implemented the data analysis and outlier test
protocols in ASTM D2777-85. All data received from the participants were
initially screened for values greater than 5x and less than 1/5 of the ex-
pected true values. Laboratories exceeding those limits were asked to
check their calculations and reports, but were not asked or permitted to
re-analyze their samples. On this error-free data set, the statistical pro-
gram performed the lab ranking and individual outlier testing to screen the
data. The program then computed the single operator and overall precision
and recovery at each test concentration along with its linear and curvilin-
ear regression equations for precision and recovery. Figure 2 is an example
of overall precision versus true value.
The true concentration was computed by taking the mean of the reported re-
sults for the lowest test level and then adding the known amounts of each
element added during the sample preparation effort. In most cases for a
given element, the lowest test level was the as-received matrix. For se-
lected elements and matrices, small quantities of the element were added to
the background concentration to raise their lowest test concentration near
the expected ICP detection limit. This approach was taken based on previ-
ous test experience to avoid data drop-out due to concentrations below an
element's detection limit.
CALCULATION OF THE LIMITS OF DETECTION AND QUANTITATION
Background
An important consideration in selecting an analytical method is its limit
of detection (LOD). Over the years, several definitions for limit of detec-
tion and its companion concept, limit of quantitation (LOQ), have evolved
(Table 2). Under Phase I of RP1851-1, a methodology for computing an LOD
was developed utilizing open literature precision data. That approach will
be discussed and its problems noted. An alternate approach utilizing the
greater amount of data that was derived from the AMQ validation studies
will be presented.
Definition of Limit of Detection
In reviewing the sources of LOD definitions (Table 2) several areas of
agreement are apparent:
1. The consensus is that the LOD can be defined as some factor
times the standard deviation of the blank.
2. The factor chosen depends on the risk level of making false
positive identification of zero values and false negative iden-
tification of nonzero detectable values.
3. The blank should be well characterized (>10 replicates).
-------
241
4. The blank sample must be processed exactly as the actual
sample.
The key questions remain: What is the value for the factor and what type
of standard deviation (single operator, So, or overall, St) should be
used? Based on the definitions in Table 2, the consensus value for the
factor appears to be at or near 3. The interpretation of this value in
terms of a confidence level depends on the point of view of the reader.
If one is only concerned about false positive identifications of zero
values, then the 3 sigma limit provides <1% risk of making a false positive
identification. The actual risk involved depends on the number of
replicates used to calculate it. Should one also be concerned with the er-
ror of not identifying detectable levels of a species, then the 3 sigma LOD
represents a 7% chance for both false positive and false negative
responses. The 7% risk is for a large number of blank replicates, and the
risk increases with a decrease in the number of replicates. It should be
noted that most regulatory limits consider only the chance for false posi-
tive errors. In those cases, 3 sigma represents a risk level of <1% of
falsely identifying a species as present when it is not.
The Environmental Monitoring and Support Laboratory (EMSL) has published an
approach to LOD calculation stressing the complete analytical procedure,
which includes specifying a matrix, an analytical procedure (calibration
through the evaluation of results), and a particular instrument. EMSL
proposes the concept of method detection limit (MDL) to describe processing
a sample through all the steps comprising an established analytical
procedure. The MDL is defined as the "minimum concentration of a substance
that can be identified, measured, and reported with 99% confidence that the
analyte concentration is greater than zero" (10). The MDL is determined
from replicate analyses of a sample in a given matrix containing the
analyte at concentrations 1 to 5 times the estimated MDL. Two approaches
are given (10); one employs seven replicate measurements and is expressed
as:

MDL = t(N-1, 1-α) × S     (1)

where N = 7, α = 0.01, and S is the standard deviation of the seven
replicates. Substituting the appropriate one-tail "t" value, equation 1
becomes:

MDL = 3.143 S     (2)

An alternative approach tests the reasonableness of the MDL estimate by using an iterative process (2 sets of seven replicates). The MDL is then defined as:

MDL = 2.681 S(pooled)     (3)

where S(pooled) is the pooled S for the two sets of seven replicates and
2.681 is the single-tail "t" value for 12 degrees of freedom at the 99% confidence
level. The MDL approach has been incorporated in promulgated regulations
(49 FR 43430, Friday, October 26, 1984).
-------
242
Implicit in the LOD discussions by all the quoted authors except Rice was
the assumption that single operator precision data are used to calculate
the LOD. The reasoning, as expressed by the ACS (9), is that the analyst
needs to report his/her detection limit. From a regulatory standpoint, interlaboratory (overall) precision data may be more applicable, since split
samples are involved. In our earlier studies (2), LODs were calculated using both single operator, So, and overall, St, data.
An estimate of the So and St values at zero concentration can be obtained from the y-intercept of the precision versus mean test concentration
linear regression equations. The drawback of this approach was twofold.
First, precision is not a linear function of concentration all the way to
zero (Figure 3). In practice, the precision tends to flatten to a value
that could be equated to the instrument noise and effects due to a given
matrix. A linear regression of precision data, especially with data points
far from zero, can underestimate the precision at zero and produce negative
intercepts. Negative intercepts were often obtained in earlier studies
(2, 3, 4) and rendered the computation of an LOD impossible.

Recognizing this problem, the statistical program used in the AMQ-I and -II
validation efforts was modified to compute curvilinear regression equations
in the form:

S = a·b^T     (4)

where T is the estimated true value at each test level. These curvilinear
equations were found to fit the data from the test program much better than
the linear regression equations.
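A minimal sketch of one way to fit the curvilinear model of equation 4 (an assumed approach, not the authors' statistical program): taking logarithms gives ln S = ln a + T ln b, which can be fit by ordinary linear regression.

import numpy as np

def fit_curvilinear(true_values, precisions):
    # Fit S = a * b**T by regressing ln(S) on T.
    T = np.asarray(true_values, dtype=float)
    lnS = np.log(np.asarray(precisions, dtype=float))
    slope, intercept = np.polyfit(T, lnS, 1)
    return np.exp(intercept), np.exp(slope)   # a, b

# Hypothetical precision (standard deviation) at four test levels (mg/L)
a, b = fit_curvilinear([0.05, 0.1, 0.3, 0.6], [0.006, 0.008, 0.015, 0.030])
print(a, b)   # predicted precision at any level T is a * b**T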
Calculation of the Limit of Detection
The first step in calculating the LOD is to use the precision versus true
value and recovery regression equations; they are used to compute a precision versus mean concentration equation. The resulting equations are used
to compute the limit of detection in the following manner.

Starting with the EPA definition of an MDL (10, 13), the regression equation for precision based on the mean concentration is substituted:

MDL = t·S     (5)

S = mx + b     (6)

where t is the Student's t, S the standard deviation, and x is the mean
concentration. Therefore:

MDL = t(mx + b)     (7)
-------
243
In order to satisfy the EPA requirement that the precision be obtained at a
concentration 1 to 5 times an estimated MDL, x is set equal to the MDL, or:

MDL = t(m·MDL + b)     (8)

LOD = tb/(1 - tm)     (9)

where t is the "t" value for the degrees of freedom of the precision versus
mean concentration regression (related to the number of labs for single operator precision or the total number of data points for overall precision).
Equation 9 can be solved directly to obtain an LOD. Using the same substitution approach, the curvilinear equation is:

LOD/b^LOD = t·a     (10)

Equation 10 cannot be solved directly; but, using an iterative approach
found in a commercial software package (TKSolver Plus), a root can be
obtained.
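The following Python sketch illustrates both solutions (hedged: the coefficients shown are hypothetical, and a simple fixed-point iteration stands in for the TKSolver Plus root finder):

def lod_linear(m, b, t):
    # Equation 9: LOD = t*b / (1 - t*m); no positive LOD if t*m >= 1.
    denom = 1.0 - t * m
    return t * b / denom if denom > 0 else None

def lod_curvilinear(a, b, t, tol=1e-9, max_iter=200):
    # Equation 10 rearranged as LOD = t * a * b**LOD and iterated.
    x = t * a
    for _ in range(max_iter):
        x_new = t * a * b ** x
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return None   # did not converge

print(lod_linear(0.05, 0.004, 3.143))      # hypothetical m, b, t
print(lod_curvilinear(0.004, 1.5, 3.143))  # hypothetical a, b, t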
Definition of Limit of Quantitation

In our review (2) of the literature definitions for limit of quantitation,
several areas of agreement were noted:

1. The LOQ is equal to a factor times the standard deviation of
the blank.

2. The factor is related to the expected/required precision at
the LOQ.

As with the LOD calculation, establishing the factor and choosing the type
of standard deviation are the key issues.

Compared to the LOD, the LOQ is established in a less rigorous manner. Currie (6) and the ACS (9) define the factor as inversely proportional to the
relative standard deviation (RSD) at the LOQ. The ACS (9) has chosen ±10%
RSD as the precision expected/required at the LOQ. The choice of ±10% is
arbitrary and does not accurately reflect the day-to-day precision that an
analyst finds at ppb levels. In TRW's review of the literature validation
data (2), RSDs were calculated at the primary drinking water standard and
at the average power plant discharge concentration using the overall precision linear regression equations for flame AAS and GFAAS. The average of
all the RSDs computed was 23.5%. This value represents the best generally
attainable precision at levels which are encountered in real-world
analyses. Based on those data, expecting an RSD of ±20% at trace levels
seems reasonable and defensible and will be used in the evaluation of the
AMQ data.
-------
244
Rice (12) has suggested that the LOQ be defined as "the lowest true concen-
tration for which the relative overall precision is 20%." This definition
parallels the EPA's Practical Detection Limit.
One approach to fulfill that definition is to use the computed linear regression equations for the precision versus mean. The regression equations
are in the form of a linear equation:

S = mx + b     (11)

Converting the S to relative standard deviation by dividing through by x:

S/x = m + b/x     (12)

Setting S/x = 0.2 and solving for x:

x = b/(0.2 - m)     (13)

or

LOQ = b/(0.2 - m)     (14)

Using the curvilinear precision equation, the resulting equation would be:

b^x/x = 0.2/a

or

b^LOQ/LOQ = 0.2/a     (15)

TKSolver Plus was also used to solve equation 15.
By taking this approach, the actual concentration which produces an RSD of
20% can be found. For comparison, both single operator and overall preci-
sion regression equations were used to produce a value for the LOQ. Nega-
tive LOQs are computed from the linear regression data when either the
slope exceeds 0.2 or when the intercept is negative and the slope is less
than 0.2. In the former case, a slope of >0.2 indicates that an RSD of
±20% was never obtained over the test range.
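A companion sketch for the LOQ (again with hypothetical coefficients; returning None mirrors the cases just noted where no positive LOQ can be computed):

def loq_linear(m, b):
    # Equation 14: LOQ = b / (0.2 - m); undefined if m >= 0.2 or b < 0.
    if m >= 0.2 or b < 0:
        return None
    return b / (0.2 - m)

def loq_curvilinear(a, b, tol=1e-9, max_iter=200):
    # Equation 15 rearranged as LOQ = (a / 0.2) * b**LOQ and iterated.
    x = a / 0.2
    for _ in range(max_iter):
        x_new = (a / 0.2) * b ** x
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return None

print(loq_linear(0.05, 0.004))       # hypothetical m, b
print(loq_curvilinear(0.004, 1.5))   # hypothetical a, b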
RESULTS
The linear and curvilinear precision (single operator and overall) versus
mean concentration equations were computed using the respective precision
versus true value and the recovery regression equations. Using the method-
ology described, the linear and curvilinear LODs and LOQs by precision type
and matrix were computed. Those preliminary results can be found in Table
3 together with the EPA detection limit and the WQC.
-------
245
Table 3 shows the problems associated with using a linear regression equation to estimate LODs. Most of the time the linear regression precision
equations produced negative LODs, probably as the result of the test range
being too far away from the detection limit. Even though every attempt was
made to have the lower test concentrations near the detection limit, the
background concentration of some elements in some matrices did not permit a
test range as low as we would have liked. The curvilinear precision regression equations were less sensitive to this problem and, in most cases, were
able to produce a positive LOD. Another problem with using the linear regression precision equations was negative slopes. In this case, the data
are probably poorly correlated, since it would be quite unlikely that the
precision (standard deviation of the mean) would increase with decreasing
concentration. Consequently, no LODs were computed from linear regression
precision equations having a negative slope.

Similar problems with the calculation of an LOQ from the linear regression
precision equations were also seen. If the slope was >0.2 (indicating that
an RSD of 20% was not obtained over the test concentrations) or the intercept was negative, LOQs could not be computed from the linear regression
equations. Once again, the curvilinear regression equations seemed to produce results much more reliably than the linear equations (Table 3).
As part of the final review effort, the LODs in Table 3 will be reviewed to
determine whether they meet EPA guidelines (13) for computing an MDL.
As such, the results in Table 3 should be considered preliminary pending
the formal review of the RP 1851-1 Project Advisory Committee.
Comparison of EPRI AMQ-III LOD and LOQ to the EPA Detection Limit
The preliminary single operator and overall precision based EPRI LODs and
LOQs from the curvilinear regression equations were ratioed to the EPA detection limit in Table 3. To further highlight this comparison, the ratios
were collated by matrix and plotted as a bar chart. Where there were no
data from the curvilinear regression equation, the linear regression result
was used. The overall precision based LODs and LOQs are presented here
since they represent the situation (comparison of multiple laboratory data
from split samples) found in most NPDES monitoring audits. In all cases, a
ratio less than one indicates that the EPRI LOD or LOQ was lower than the
EPA detection limit. A ratio greater than one indicates that the EPRI LOD
or LOQ was greater than the EPA detection limit. For plotting and interpretation convenience, the y-axis (ratio) is a log scale.
Reagent Grade Water. Reagent grade water should be the closest to the EPA
detection limits since the EPA ICP detection limits (5) are the concentrations which produced a net analytical signal three times the standard deviation of the background beneath the spectral line. In application of this
definition, the resulting detection limits were based on single operator reagent grade water data. However, except for cadmium, all the EPRI computed
LODs are above the EPA detection limit (Figure 4). Most of the EPRI LODs are
greater than a factor of 3 higher and, for beryllium, iron, and zinc, the
LODs were greater than 10 times higher than the EPA's. The EPRI LOQs followed the same pattern; but, since they were usually 2 to 4 times higher
than the LODs, most of them were 7 times higher than the EPA detection
limit.
-------
246
River Water. Based on the chemical characterization studies done on each
matrix during the test program, river water could be considered the next
most complex matrix. In this case (Figure 5), all of the EPRI LODs are
greater than 3 times the EPA's, and five elements (boron, beryllium,
manganese, nickel, and zinc) are 10 times larger than the EPA detection
limit. In most cases, the EPRI LOQs were 7 times higher than the EPA detection limit.
Ash Pond Water. The ash pond water sample was taken at the same plant as
the river water and consists of the overflow from the ash pond. It had a
higher TDS and sulfate content than the river water. Both cadmium and chromium were below the EPA detection limit (Figure 6). The remaining elements
were 2 times higher than the EPA detection limit. A substantial number of
those elements (barium, beryllium, boron, iron, manganese, nickel, and
vanadium) were 10 times greater than the EPA detection limit. All the EPRI
LOQs were greater than the EPA detection limit and most (except cadmium)
were 8 or more times higher than the EPA detection limit.
Seawater Intake and Discharge. Both seawater matrices, as expected, produced similar results (Figures 7 and 8). All EPRI LODs (except for copper
in the seawater intake) were 10 times higher than the EPA detection limit.
Most of the elements' LODs were 20 times greater than the EPA detection
limit. Boron and cadmium exhibited LODs 100 times higher than the EPA detection limit in one or both of the matrices. Comparison of the EPRI LOQ
to the EPA detection limit shows an even worse situation, as boron, beryllium,
iron, nickel, and zinc LOQs were 100 times higher than the EPA detection
limit.
Treated Chemical Metal Cleaning Wastes (TCMCW). The TCMCW sample was similar in composition (high sulfate and chloride) to the seawater samples, but
with added organic flocculating agents to remove iron and copper. The
LODs computed from the precision data taken from this matrix are not as bad
as the seawater samples, but show only 3 elements (cadmium, copper, and
vanadium) having an LOD less than 10 times the EPA detection limit (Figure
9). All elements having data were 4 times higher than the EPA detection
limit. Only the LOQ for aluminum was less than 10 times the EPA detection
limit.
Summary of EPRI LOD Results. Figure 10 shows the percentage of reporting
elements having an LOD between 0.1 to 1, 1 to 10, 10 to 100, and more than 100 times
the EPA detection limit. First, this three-dimensional bar
chart shows that, though different, the results from the reagent grade water (RGW), river water (RW), and ash pond overflow (APO) are comparable.
On the other hand, the seawater intake and discharge (SW-I and SW-D) and
the TCMCW have similar results. Secondly, the performance, based on increasing LOD, is much worse in the complex matrices. Thirdly, even though
the LODs in the RGW matrix were the lowest of the matrices studied, almost
70% of them were between 2 to 9 times higher than the EPA's and a significant fraction (23%) were 10 times higher than the EPA's.

At the minimum, these results show that detection limits will vary by
matrix, and that fact should be considered when a discharge or performance
limit is set.
-------
247
Comparison of EPRI AMQ-III LODs to EPA Water Quality Criteria

Water Quality Criteria (WQC) represent another benchmark that utility laboratories are being required to meet. A similar comparison of EPRI LODs for
those elements having a proposed or approved WQC published in the Federal
Register was performed. The results of those comparisons by matrix are
found in Figures 11 through 16. The WQC limits are generally higher than
the EPA detection limits and, for most of the elements having WQCs, the
EPRI overall LOD was within a factor of 1 to 10 of the WQC. However, the
EPRI LOD for lead is consistently a factor of 10 or 100 greater than the
WQCs in all matrices. In the APO, SW-I/D, and TCMCW matrices, the EPRI LOD
is 10 to 100 times the EPA WQC. The EPRI LOD for aluminum in the SW-D/I
and TCMCW matrices was consistently a factor of 10 or more above the WQC
limit. Finally, only one element, zinc, consistently had an EPRI overall
LOD lower than the EPA WQC in all the matrices.

These data indicate that most laboratories would have a difficult time determining an element at its WQC.
Matrix Effects
As part of the data analysis effort, the slopes of the precision and recovery linear regression equations for each discharge stream were compared to
the slopes for reagent grade water. Standard statistical tests were employed to determine whether the differences seen were significant. Table 4
summarizes the results of the analysis. No significant trends were seen
based on significant differences between the slopes of the single operator
and overall precision, and recovery equations. In fact, one would have expected more elements to have had significant slope differences for matrices
such as the two seawater matrices. This prediction did not turn out to be true.
Part of the problem with this sort of statistical comparison is that the error bands associated with the slopes are relatively large and get worse
with matrices such as seawater. As a result, differences tend to be obscured in the same matrices where one might predict there to be a matrix
effect. Additional development effort is required to devise a statistical
test to determine the presence of a matrix effect.
SUMMARY
The EPRI RP 1851-1 AMQ validation programs have developed a comprehensive
data base of precision and bias data from utility laboratories analyzing
typical utility discharge streams. These data have shown that utility laboratories analyzing real discharge matrices do not reproduce the EPA detection limits, which are based on single operator tests in reagent grade
water. In fact, detection limits 10 to 100
times the EPA detection limits have been seen. In light of these
differences, consideration should be given to setting detection limits that
are more representative of the real-world performance of NPDES monitoring
methods.
-------
248
REFERENCES
1. White, H.R., H.D. Powers, C.C. Shih, and R.F. Maddalone, "Aqueous Discharges from Steam-Electric Power Plants: Data Evaluation," EPRI
CS-3741, November 1984.
2. Maddalone, R.F., J.W. Scott, and H.D. Powers, "Aqueous Discharges from
Steam-Electric Power Plants: The Precision and Bias of Methods for
Chemical Analysis," EPRI CS-3744, November 1984.
3. Maddalone, R.F., J.W. Scott, and J. Frank, "Round-Robin Study of Meth-
ods for Trace Metal Analysis; Volume 1: Atomic Absorption Spectroscopy
- Part 1," EPRI CS-5910, Volume 1, August 1988.
4. Maddalone, R.F., J.W. Scott, and J. Frank, "Round-Robin Study of Meth-
ods for Trace Metal Analysis; Volume 2: Atomic Absorption Spectroscopy
- Part 2," EPRI CS-5910, Volume 2, August 1988.
5. U.S.E.P.A., "Methods for Chemical Analysis of Water and Wastes,"
EPA-600/4-79-020, March 1979 (updated March 1983).
6. Currie, L.A., "Limits for Qualitative Detection and Quantitative
Determination: Application to Radiochemistry," Anal. Chem., 40(3), 586
(1968).

7. Kaiser, H., "Quantitation in Elemental Analysis (Part 2)," Anal. Chem.,
42(4), 26A (1970).

8. Kaiser, H., Z. Anal. Chem., 209, 1 (1965).

9. Keith, L.H., et al., "Guidelines for Data Acquisition and Data Quality
Evaluation in Environmental Chemistry," Anal. Chem., 55(14), 2210
(1983).

10. Glaser, J.A., D.L. Foerst, G.D. McKee, S.A. Quave, and W.L. Budde,
"Trace Analyses for Wastewaters," Environ. Sci. Technol., 15(12), 1426
(1981).

11. Rice, J.K., "Analytical Issues in Compliance Monitoring," Environ.
Sci. Technol., 14(12), 1455 (1980).

12. Rice, J.K., Private communication, February 1987.

13. U.S.E.P.A., "Guidelines Establishing Test Procedures for the Analysis
of Pollutants," Federal Register, 49(209), Friday, October 26, 1984.
-------
249
Table 1
UTILITY AQUEOUS DISCHARGE MONITORING -
ANALYTICAL METHODS QUALIFICATION (AMQ)
OVERALL PROJECT PLAN

Project Title                                  Part   Round   Parameters                       Method
Sampling and Analysis of Utility Pollutants    *              As, Se                           GFAAS**
Analytical Methods Qualification               I      1       Ni, Pb, Cr, Cu                   GFAAS
Analytical Methods Qualification               I      2       Cd                               GFAAS
                                                              Hg                               CVAAS
Analytical Methods Qualification               II     1       Fe, Zn                           Flame AAS
Analytical Methods Qualification               III            Al, B, Ba, Be, Cd, Cr, Cu, Fe,   ICP
                                                              Mn, Mo, Ni, Pb, V, Zn

* Initial effort under EPRI Project RP 1851-1, which involved collection
and analysis of data on discharge rates and data on the analytical precision and bias for utility discharge species.
** Approximately 6 seawater laboratories determined As and Se by GHAAS.

GFAAS - Graphite Furnace Atomic Absorption Spectrometry
GHAAS - Gaseous Hydride AAS
CVAAS - Cold Vapor AAS
ICP - Inductively Coupled Argon Plasma Optical Emission Spectroscopy
-------
Table 2
LOD DEFINITION FROM VARIOUS SOURCES

Author         Title                   Definition      Mathematical Interpretation (Degrees of Freedom)
Currie (6)     Decision Limit          LC = 1.645σ     5% chance of reporting a zero value as detected (infinite).
Currie (6)     Detection Limit         LD = 3.29σ      5% chance of reporting a zero value as detected or not reporting a real value as >0 (infinite).
Kaiser (7,8)   Limit of Detection      LOD = 3σ(B)     Approximately a 5% risk of reporting zero values as detected for normal distributions, and as high as an 11% risk for asymmetric or broad distributions (infinite).
ACS (9)        Limit of Detection      LOD = 3σ        7% chance of reporting a zero value as detected or not reporting a real value as >0 (infinite).
EPA/EMSL (10)  Method Detection Limit  MDL = 3.143 S   1% chance of reporting a zero value as detected (6).
Rice*          Critical Level          LC = 2.576σ(B)  0.5% risk of reporting a zero value as detected or not reporting a real value as >0 (infinite).
Rice*          Limit of Detection      LOD = 4.652σ    1.0% risk of reporting a zero value as detected or not reporting a real value as >0 (infinite).

* Rice also suggests using overall precision data from interlaboratory studies on real, spiked samples.
-------
Table 3
Summary of AMQ-III ICP LOD and LOQ Data
(Linear and curvilinear LODs and LOQs by precision type and matrix, compared with the EPA detection limit and the Water Quality Criteria; tabulated values not reproduced.)
-------
256
Table 4
ELEMENTS HAVING A MATRIX EFFECT BASED ON
REGRESSION EQUATION SLOPE DIFFERENCES

River Water
Single Operator
Al, Cd
Overall
Recovery
Ni
Ash Pond Overflow
Be, Cd, Mo
Seawater-Intake
Treated Chemical Metal Cleaning Wastes
Ni
Fe
Al, Pb
-------
-------
258
Figure 2. Zn by ICP: Overall Precision vs. True Concentration (mg/L), shown for Reagent Grade Water, River Water, Ash Pond Overflow, Seawater Intake, Seawater Discharge, and TCMCW.
-------
Figure 3. Effect of Test Concentration on the Calculation of LODs (precision vs. true concentration, illustrating matrix "noise", instrument "noise", and near-zero vs. long-range linear extrapolation to zero).
-------
260
Figure 4. Ratio of EPRI LOD and LOQ to EPA ICP Detection Limit for Reagent Grade Water (bar chart by element: Al, B, Ba, Be, Cd, Cr, Cu, Fe, Mn, Mo, Ni, Pb, V, Zn; log ratio scale).

Figure 5. Ratio of EPRI LOD and LOQ to EPA ICP Detection Limit for River Water.
-------
261
Figure 6. Ratio of EPRI LOD and LOQ to EPA ICP Detection Limit for Ash Pond Overflow.

Figure 7. Ratio of EPRI LOD and LOQ to EPA ICP Detection Limit for Seawater Intake.
-------
262
Figure 8. Ratio of EPRI LOD and LOQ to EPA ICP Detection Limit for Seawater Discharge.
Figure 9. Ratio of EPRI LOD and LOQ to EPA ICP Detection Limit for Treated Chemical Metal Cleaning Wastes.
-------
263
Figure 10. Percentage of Reporting Elements Having a Given Ratio of LOD to EPA ICP Detection Limit, by matrix.
-------
264
Figure 11. Ratio of EPRI LOD and LOQ to Water Quality Criteria for Reagent Grade Water.

Figure 12. Ratio of EPRI LOD and LOQ to Water Quality Criteria for River Water.
-------
265
Figure 13. Ratio of EPRI LOD and LOQ to Water Quality Criteria for Ash Pond Overflow.

Figure 14. Ratio of EPRI LOD and LOQ to Water Quality Criteria for Seawater Intake.
-------
266
Figure 15. Ratio of EPRI LOD and LOQ to Water Quality Criteria for Seawater Discharge.

Figure 16. Ratio of EPRI LOD and LOQ to Water Quality Criteria for Treated Chemical Metal Cleaning Waste.
-------
267
QUESTION AND ANSWER SESSION
MR. HACHIGIAN: Lee
Hachigian of General Motors.
I assume the intent was to use this data to work
with your regulatory agency in establishing limitations on
effluents from the utility industry. Do you foresee that
during these negotiations, if you can call it that, you
would be required to do site-specific...the same type of
report site specific for each of the operations or you could
use this data that you have already determined?
MR. RICE: That is a very
good question. Obviously, the collection of the data was
directly related to the entire compliance monitoring and
permit negotiation effort.
Both things have happened. In some cases, the
published information has been able to be used by a member
utility in its negotiations of a permit limit.
In other instances, it has at least secured for
the utility the opportunity to run a site-specific MDL or to
obtain information on their recovery and reproducibility by
a round robin with a number of corresponding labs. We have
used the latter situation in connection with an MDL when
that was in question (no detectable discharge for a limit)
to get two or three additional qualified laboratories
-------
268
together with the utility laboratory in an approved round
robin simply on MDL.
MR. HACHIGIAN: As a followup
question, has it been effective?
MR. RICE: Pardon me?
MR. HACHIGIAN: Have your
efforts been effective?
MR. RICE: Yes, very much so.
The entire method validation program has cost the
Electric Power Research Institute close to $1.5 million to
date, including furnace AA and ICP on a range of metals and
matrices. EPRI is now continuing with additional elements
that were not run earlier but are now requested by members,
since they in turn are being asked by regulatory
agencies to monitor or to control parameters not previously
the subject of interest.
MR. HACHIGIAN: Thank you.
MR. FIELDING: Any
other questions?
(No response.)
MR. FIELDING: Thank you.
-------
269
MR. FIELDING: Our next
speaker is Mr. Bill Krochta. He will be speaking about the
quantitation detection limits in analysis of environmental
samples.
-------
270
QUANTITATION/DETECTION LIMITS FOR THE ANALYSIS
OF ENVIRONMENTAL SAMPLES
W. G. Krochta, PPG Industries, Inc.; L. I. Bone, The Dow Chemical Company;
T. L. Dawson, Union Carbide Chemicals & Plastics Company; K. J. G. Hillig,
BASF Corporation; N. E. Prange, 3M Company; B. A. Cuccherini, CMA;
R. A. Javick, FMC Corporation; Hughes, Monsanto Company; F. I. Saksa,
CIBA-GEIGY Corporation; G. H. Stanko, Shell Development Company
-------
271
QUANTITATION/DETECTION LIMITS
FOR THE ANALYSIS OF ENVIRONMENTAL SAMPLES
I. INTRODUCTION
Analytical technology continues its unrelenting pace to develop methodology to
lower the concentration limits at which analytes can be measured. Picogram
(10^-12 gram) quantities are commonly reported as new detector systems for gas
and liquid chromatography are developed. Advances in mass spectrometry are
leading to lower levels of quantitation. For example, ion trap mass
spectrometers and inductively coupled plasma-mass spectrometry (ICP-MS) are
highly sensitive techniques, which are becoming more commonly used for organic
and elemental determinations, respectively, and are capable of detecting subnanogram
(<10^-9 gram) quantities. The following statement depicts the situation that we
are encountering:

"... the number of compounds detected in a sample of
water is related to the detection level. As the
detection level decreases an order of magnitude, the
number of compounds detected increased an order of
magnitude. Based on the number of compounds detected by
current methods, one would expect to find every known
compound at a concentration of 10^-12 g/L or higher." -
Dr. William T. Donaldson (EPA Athens Laboratory)
As the regulated community is required to perform within increasingly
restrictive compliance limits, the analytical chemist must emphasize
to the public that all measurement data have an associated uncertainty
interval(1). This information becomes critical as measurements are made
approaching the lowest analytical capability of a given procedure.
-------
272
II. ANALYTICAL LIMITS OF MEASUREMENT - DEFINITIONS
In their regulatory programs, the USEPA uses a variety of procedures to establish
limits of measurement. In the following slide, definitions are given to present
an approach to defining analytical capability.

LIMIT OF DETECTION (LOD) - Lowest concentration level that can be
determined to be statistically different from a blank (7).

METHOD DETECTION LIMIT (MDL) - Minimum concentration of analyte that can be
determined with 99% confidence that the true value is greater than zero (2,3,13).

INSTRUMENT DETECTION LIMIT (IDL) - Smallest signal above background noise that
an instrument can detect reliably (7).

LIMIT OF QUANTITATION (LOQ) - Concentration above which quantitative
results may be obtained with a specified degree of confidence (7).

PRACTICAL QUANTITATION LIMIT (PQL) - Lowest level that can be reliably
achieved within specified limits of precision and accuracy during routine
laboratory operation conditions (8).
III. APPLICATION OF METHOD DETECTION LIMITS (MDL) SUBJECT TO MATRIX EFFECTS

The MDL is similar to the LOD except that the LOD is defined with a sample blank
whereas the MDL is defined with either a blank or in each matrix being
analyzed (2,3). In most cases, however, laboratories report MDLs determined at
one point in time and routinely based on reagent water. They do not normally
perform the MDL evaluation on the different matrices analyzed for regulations
development, compliance monitoring, or tested to determine permit requirements.

In their proposal to set enforceable maximum contaminant levels (MCLs) for
volatile synthetic organic chemicals in drinking water (5), EPA explains that
the MDL could not be used as the basis for quantitative maximum contaminant
levels: "The specification of such a concentration is limited by the fact that
MDLs are variables affected by the performance of a given measurement system.
MDLs are not necessarily reproducible over time in a given laboratory, even when
the same analytical procedures, instrumentation, and sample matrix are used."
-------
273
IV. PRACTICAL QUANTITATION LIMITS (PQL) AS A MEANS OF IDENTIFYING
MEASURABLE CONCENTRATIONS
Many observations for organic toxic pollutants are below the MDLs, thus creating
difficulties in developing effluent limitations guidelines and permit limits.
In such instances where analytical and effluent variability cannot be determined,
only those concentrations above quantifiable levels (17) should be considered.
It should also be recognized that there is a fundamental difference between
detection and quantitation limits. Unfortunately these terms are too often
misused as being synonymous. EPA has developed a method for establishing such
quantifiable numerical limits for its proposed drinking water standards (50 FR
46902) and for its proposed organic toxicity characteristic (51 FR 21652),
designated as the practical quantitation limit (PQL). EPA has developed this
concept of a PQL for specific analytical methods and lists of chemicals.
A. RECOMMENDED PRACTICAL QUANTITATION LIMITS COMPARED TO METHOD DETECTION
LIMITS

When it proposed MCLs for drinking water, EPA recommended PQLs set at 10 times
the MDL for selected volatile organic chemicals. The Agency states that:
"setting the PQLs in a range between 5 and 10 times the MDL achieved by
the best laboratories is a fair expectation for most state and commercial
laboratories" (50 FR 46907). At the PQLs chosen by EPA for this rulemaking, its
performance evaluation studies indicate that 80% of the EPA and State
laboratories in its water program evaluation studies could measure within ±40%
of the true concentration. This was the basis for setting the PQL at 10 times
the MDL. This is not a very high standard of performance, as admitted by the
Agency in the preamble to this proposed regulation. Thus, even at the PQLs,
over 20% of the "good" laboratories would not be expected to obtain results
within ±40% of the concentration of a specific component. At concentration
levels below the PQL, the performance of even the best of the "good" laboratories
deteriorates rapidly.
B. PRACTICAL QUANTITATION LIMITS IN REAL MATRIX SAMPLES REFLECT EFFECT OF
MATRIX INTERFERENCE

A recent presentation (12) described a study evaluating Method 8020, a
gas/liquid chromatography procedure in SW-846, "Test Methods for Evaluating Solid
Wastes, Physical/Chemical Methods," for the determination of low concentrations
of toluene, benzene, and xylenes in real matrix groundwater samples. The round
robin study involved 20 commercial laboratories. Method 8020 lists the practical
quantitation limits for all three compounds as 2.0 µg/L. The PQLs derived from
results achieved by the laboratories in this study are much higher. The PQLs
at which 80% of the laboratories could achieve a recovery within ±40% of a true
value from this study are 7.5 µg/L for benzene, >20 µg/L for toluene, and 18.5
µg/L for total xylenes. It is clear that the Method 8020 published PQLs are
seriously underestimated when applied to this groundwater matrix and for these
20 laboratories.
-------
274
The inability of these laboratories to perform within the method PQL criteria
should not be surprising, even to EPA. In the preamble to the final rule on the
federal primary drinking water standards for eight volatile organic compounds
(VOCs), EPA states that:
"PQLs for the VOCs were based on the MDL and surrogate test data ... The PQLs
based on these laboratory data are considered a two step removed surrogate for
actual laboratory performance, first because they are estimated from another
measurement (the MDL) and second, because they are derived from laboratory
performance under ideal circumstances. Therefore, they do not actually
represent the results of normal laboratory procedures, but are a model of what
normal procedures might achieve. Specifically:
(1) Laboratories receive performance evaluation samples in which a limited number
of concentrations are analyzed and the samples do not have matrix
interferences as might actual samples;
(2) PQLs are based on EPA and State laboratory data which are considered to be
representative of the best laboratories, but not all laboratories; and
(3) Samples are analyzed under controlled ideal testing conditions which may
not be representative of routine practices.
For these reasons, the PQL represents a relatively stringent target for routine
performance." (52 Federal Register 25699).
More specific to groundwater samples, EPA discussed the significance and
reliability of the PQLs that are included in Appendix IX of the rule: "The PQLs
listed were EPAs best estimate of the practical sensitivity of the applicable
method for RCRA groundwater monitoring purposes. However, some of the PQLs may
be unattainable because they are based on general estimates for the specific
substance. Furthermore, due to site-specific factors, these limits may not be
reached." 53 Federal Register 39721.
For solid wastes the matrix problem has also been demonstrated to be very
significant (21). Member companies of the Hazardous Waste Treatment Council
obtained initial information that showed 33 out of the 91 Best Demonstrated
Available Technology (BDAT) standards promulgated for the First and Second Third
Land Disposal Restrictions were set at levels below the PQLs. As a follow-up,
a formalized interlaboratory study using incinerator ash samples was performed.
In this matrix a range-finder study was conducted by six member companies to
determine appropriate spiking levels to determine MDLs for each of the
constituents. As part of this study a matrix spike was prepared at the BDAT
level and determined. The results of the study showed that 65 percent of the
volatile constituents, 73 percent of the acid extractable constituents, and 23
percent of the base neutral extractable constituents were not detected at the
spike performed at the treatment standard limit.
The PQLs and also MDLs published by EPA for its analytical methods are
based on reagent water spiked with the compounds of interest, so they do not
represent limits achievable where matrix interferences exist, as with actual
samples. EPA does identify in Method 8020 that the PQLs are highly matrix-
dependent and that they are listed only to provide guidance and may not always
be achievable. Unfortunately, these caveats or warnings are likely to be
ignored, particularly by some regulatory agencies, when permit limits or other
regulatory levels are set.
-------
275

V. PROPER TREATMENT OF THE DATA CAN AVOID MISREPRESENTATION OF THE FACTS

A. RULES FOR THE USE OF SIGNIFICANT NUMBERS
Despite the wide attention given to numbers for quantitative and qualitative
limits, the improper use of the rules for significant numbers goes virtually
unnoticed. As measurements are required more and more frequently to be made at
decreasing concentrations, the relative analytical variability and uncertainty
can increase substantially, and the need to understand and recognize significant
data is essential. Horwitz et al. (22) reviewed data from over 50 independent
Association of Official Analytical Chemists (AOAC) interlaboratory collaborative
programs covering numerous AOAC drug and pesticide studies. The analytical
methods covered were chromatography, atomic absorption spectrometry, absorption
spectrometry, polarography, and bioassay. In Figure 1 the % variation is
expressed as powers of 2 with the mean concentration expressed as powers of 10.
A convenient reference point is that at 1 ppm the variation is 16%. The %
variation was found to double for each decrease of concentration by 2 orders of
magnitude. It is important to note that this curve is independent of the
analyte or analytical technique that was used to make the measurements. These
relationships should apply to environmental levels of measurement as well.
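The Horwitz relationship just described can be written in closed form as RSD(%) = 2^(1 - 0.5 log10 C), with C the concentration as a mass fraction; the short sketch below (an illustration consistent with the 16% reference point, not taken from this paper) reproduces the doubling behavior.

from math import log10

def horwitz_rsd_percent(conc_mass_fraction):
    # Between-laboratory RSD in percent for a concentration expressed
    # as a mass fraction (1 ppm = 1e-6).
    return 2 ** (1 - 0.5 * log10(conc_mass_fraction))

print(horwitz_rsd_percent(1e-6))   # 1 ppm  -> 16%
print(horwitz_rsd_percent(1e-8))   # 10 ppb -> 32% (doubles per 2 decades)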
Analytical chemists must always emphasize to the users of the data that the
single most important characteristic of any result obtained from one or more
analytical measurements is an adequate statement of its uncertainty interval.
Often in legal judgments there is an attempt to dispense with uncertainty and
try to obtain unequivocal statements; therefore, an uncertainty interval must
be clearly defined in cases involving litigation and/or enforcement proceedings.
Otherwise, a value of 1.001 without a specified uncertainty, for example, may
be viewed as legally exceeding a permissible level of 1(7).
The analytical inclusion of only significant numbers is vital to the accurate
interpretation of data. Scientific personnel are not exempted from the tendency
to retain all values, no matter how divergent or suspect they may be. One of
the principles of handling the data of physical and chemical measurements is
that a numerical result by itself should give an approximate idea of the
precision of the value as indicated by the number of significant figures used
in expressing the value. An inaccurate representation of significant figures
may give one an impression nearly as erroneous as from an inaccurate value.
Misuse of significant figures can cause the reporting of violations when in fact the
measured value does not exceed the limit. Adherence to the proper expression of
significant numbers is especially important when permit limits are near the
limit of quantitation for the procedure and its relative uncertainties are
large.
The number of significant figures reported as a result of a scientific
measurement depends on establishing beforehand the relative precision with which
the measurement can be made, as shown in Table II (11). In considering the proper
use of significant figures for regulatory reporting, it is imperative that
significant figures start at the laboratory bench and be adhered to by anyone
who further treats or handles the data. Otherwise, false conclusions and
misunderstanding will develop and possibly lead to serious consequences.
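A small sketch of that practice (hypothetical helper functions, following the precision-to-significant-figures mapping of Table II):

def significant_figures_for(precision_percent):
    # Table II: the coarser the relative precision, the fewer figures.
    if precision_percent <= 0.01:
        return 5
    if precision_percent <= 0.1:
        return 4
    if precision_percent <= 1:
        return 3
    if precision_percent <= 10:
        return 2
    return 1   # +/-10% to +/-30%

def round_to_sig_figs(value, n):
    return float(f"{value:.{n}g}")

for p in (0.005, 0.05, 0.5, 5, 20):
    n = significant_figures_for(p)
    print(p, n, round_to_sig_figs(54.8149, n))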
-------
276
B. GUIDELINES FOR REPORTING DATA
EPA has recognized that data measured at or near the detection limit have
considerably more uncertainty associated with them than when significant
amounts are present(6). In this discussion EPA acknowledges the recommendations
by the American Chemical Society Report(7). A graphical illustration of the
relationship of LOD and LOQ is shown in Figure 2(7). The base scale is in units
of standard deviation, which is assumed to be the same for all the measurements
involved.

Confidence in the apparent analyte concentration increases as the analyte signal
increases above the LOD. The value LOQ = 10σ is recommended, where σ is
the standard deviation of the measurements. Assuming a large number of samples,
the LOQ then corresponds to an uncertainty of ±30% in the measured value (10σ
± 3σ) at the 99% confidence level. The LOQ is most useful for defining the lower
limit of the useful range of measurement methodology.
From these guidelines in Table III, if the measured value is less than the limit
of detection, one should report "not detected" together with the value for the
LOD. When the measured value is larger than the LOD but smaller than the limit
of quantitation (LOQ), report "detected but not quantifiable" together with
the value for the LOQ. If the measured value is greater than the LOQ, report
the value and its uncertainty.
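A minimal sketch of this reporting rule (illustrative only; the LOD, LOQ, and uncertainty values are inputs supplied by the laboratory, not prescribed here):

def report_result(measured, lod, loq, uncertainty):
    if measured < lod:
        return f"not detected (LOD = {lod})"
    if measured < loq:
        return f"detected but not quantifiable (LOQ = {loq})"
    return f"{measured} +/- {uncertainty}"

print(report_result(0.4, 1.0, 3.0, 0.3))   # below the LOD
print(report_result(2.0, 1.0, 3.0, 0.6))   # between the LOD and LOQ
print(report_result(8.5, 1.0, 3.0, 2.5))   # quantifiable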
VI. IMPACTING THE REGULATORY PROCESS
Data measured at or near the limit of detection may cause serious difficulty for
the user in developing valid conclusions from any study. Not only can the
amount of uncertainty approach and even equal the reported value, but also
confirmation of the species reported is virtually impossible as the
identification must depend solely on the selectivity of the methodology and
knowledge of the absence of possible interferents. As the concentrations
increase to measurable amounts these problems diminish. As stated previously,
quantitative interpretation, decision-making, and regulatory actions should be
limited to data at or above the limit of quantitation(7). The following
discussion graphically illustrates how analytical variability can impact the
regulatory process.
A. GRAPHICAL ILLUSTRATIONS OF THE IMPACT OF ANALYTICAL VARIABILITY
ON COMPLIANCE LIMITS
Figures 3 through 7 were developed in order to visualize the impact of
variability upon laboratory measurements of concentrations in plant effluents.
Figure 3 shows the general probability distribution function for random error
when the measured concentration is expressed in σ units. This curve can be
thought of as a frequency distribution when a large number of effluent samples
of the same concentration are analyzed repetitively. The x-axis is the
concentration that a laboratory may measure; the y-axis is proportional to the
frequency the laboratory measures a given concentration and is expressed in
probability units. The y-axis data have been normalized so that the total area
under the curve gives a value of 1.000. This curve applies to all analyses in
which only random error occurs.
-------
277
Several observations can be made regarding the probability distribution shown
in Figure 3.
Observation #1: Only a small percentage of the total analyses may give the best
estimate of the true value.
Observation #2: One-half the measurements are above the mean and one-half of
the measurements are below the mean. Therefore, if the mean is some effluent
trigger concentration above which a plant would be violating its permit, the
plant would be failing one-half the time, if these data were treated as having
no uncertainty.

Observation #3: For the measured concentrations shown in Figure 3, 99.7% of the
reported values would fall between plus or minus 3σ of the mean concentration;
therefore, it can be seen that the σ of a determination is a very fundamental
property of a distribution which must be used in evaluating data which contain
uncertainty.
B. THE APPLICATION TO REGULATORY LIMITS
In order to translate this general probability distribution to real-world
examples, Figures 4 through 7 were generated assuming different analytical
uncertainty in the random errors. All figures were generated for the
measurement of an effluent sample containing 100 µg/L of the target analyte.
Figure 4 shows the distribution of measured concentrations when the analytical
uncertainty produces a value of 1 µg/L for σ; Figure 5 shows the distribution of
measured concentrations when the analytical uncertainty produces a value of 10
µg/L for σ; Figure 6 shows the distribution of measured concentrations when the
analytical uncertainty produces a value of 30 µg/L for σ; and Figure 7 shows the
distribution of measured concentrations when the analytical uncertainty produces
a value of 100 µg/L for σ. The probability distribution for the last case has
been truncated at 0 µg/L since negative values of concentration are meaningless.
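A brief sketch of the same point (assumed normal random error; not from this paper): when the true concentration sits exactly at a 100 µg/L limit, half of all measurements will read above the limit regardless of σ, and even a true value well below the limit can read above it when σ is large.

from statistics import NormalDist

def prob_measured_above(limit, true_value, sigma):
    # Probability that a single measurement exceeds the limit.
    return 1.0 - NormalDist(mu=true_value, sigma=sigma).cdf(limit)

for sigma in (1, 10, 30, 100):                    # ug/L, as in Figures 4-7
    print(sigma, prob_measured_above(100, 100, sigma))   # always 0.5
print(prob_measured_above(100, 70, 30))           # ~0.16 false exceedance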
These four cases show clearly the impact of determinations which are carried
out with different amounts of analytical uncertainty. Unfortunately,
regulations are written as if data were being obtained with an uncertainty less
than that shown in Figure 4. Permits which give a specific limit for a certain
compound fall into this category. However, the analytical data which are being
obtained by a typical environmental laboratory for the analysis of reagent water
are most likely analytical data obtained with the uncertainty shown in Figures
6 or 7. Figure 6 describes most analytical data obtained using EPA Methods 624
and 625 when the measured concentration is ten times higher than the method
detection limit determined in reagent water. Figure 7 describes most analytical
data obtained using EPA Methods 624 and 625 when the measured concentration is
equal to the method detection limit which can be the case if the sample or
sample extract must be diluted due to interfering substances. The concern is
that the probability distribution summarized in Figure 6 is used by the
Environmental Protection Agency to characterize data obtained by analytical
laboratories for effluent analyses. However, these data represent a best case,
since method detection limits for Methods 624 and 625 are derived from the
analysis of reagent water. Reagent water data should not necessarily be used
to determine the random error associated with all plant effluents which may
contain relatively high levels of inorganic salts, and unregulated organic
compounds which may interfere with these methods.
-------
278
An additional important conclusion from this set of figures is that these case
studies apply not only to analytical data obtained using the classical
EPA Methods 624 and 625, but also to BOD, COD, toxicity, opacity,
etc. Any time a measurement is made which includes random error, that
measurement contains the same types of uncertainty as described above, and the
uncertainty is certainly accentuated as the required concentration limits are decreased.
Therefore, regulations should not be written with wording that implies that
Figure 4 uncertainties exist when in fact Figures 6 and 7 uncertainties are
typical representations of the analytical uncertainties associated with permit
violation data.

Evaluation of permit violations cannot be properly made without first knowing
the analytical uncertainty of the determination in the vicinity of the permit
trigger concentrations and for the exact matrix under study. This means that
the σ data which are published with EPA Methods for the analysis of spiked reagent
water should only be used as a guide. For application, however, this
information should be developed for each compound/parameter in each laboratory
performing these analyses.
C. THE APPLICATION TO BACKGROUND CONTAMINATION
As regulated limits go lower and lower, background contamination becomes an ever
increasing problem. However, this topic can be treated in much the same way as
were regulatory limits discussed above. The same analytical uncertainty must
also be applied to the analysis of method blanks. In this case, method blanks
are not defined as replicate injections of a sample extract, but are replicate
extraction and extract analyses when representative glassware, solvents,
instrumentation, etc. are used in the analyses. Using this same reasoning, an
analysis of the mean concentration of the background contamination and the σ for
that determination gives one the information needed to determine whether a
measured quantity in a plant effluent sample is actually different from the
quantity present in the method blank.
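A sketch of that comparison (an assumed approach, not the authors' procedure; the factor k and the data are illustrative):

from statistics import mean, stdev

def above_background(sample_result, blank_results, k=3.0):
    # Treat a result as distinguishable from background only if it
    # exceeds the mean method blank by k blank standard deviations.
    return sample_result > mean(blank_results) + k * stdev(blank_results)

blanks = [0.8, 1.1, 0.9, 1.2, 1.0, 0.7, 1.1]   # replicate method blanks, ug/L
print(above_background(1.3, blanks))   # False: within background scatter
print(above_background(4.0, blanks))   # True: clearly above background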
For commonly occurring background contaminants such as methylene chloride,
acetone, toluene, 2-butanone, and common phthalates no positive sample results
should be considered real which are within 10
-------
279
VII. RECOMMENDATIONS
There is a LOD or MDL which can be determined for every analyte in every matrix
below which it is not possible to reliably ascertain that an analyte is present
or absent. There is also a concentration range above the LOD or MDL where it is
possible to qualitatively establish the presence of an analyte, but the
concentration cannot be accurately and reliably quantified. It is also not
practical to determine precisely the LOD or MDL for all analytes, in every
matrix, and at all laboratories. All regulatory programs must recognize these
facts. As a practical solution to this problem, every method should have
published practical quantification limits (PQLs) which are at least media
(water/soil) specific. Many of these PQLs have been published by media, and for
most analytes these PQLs are representative of levels that can be achieved at
most commercial laboratories. However, there should also be procedures for
determining matrix specific detection and quantitation limits. Unfortunately
it is not possible to analyze a large enough universe of matrices to establish
generalized quantitation limits for comparison with regulatory levels. An
approach must be established which will preserve the utility of published PQLs
as guidance, while recognizing the significant number of compliance limits which
are below their respective PQLs and thus require a variance procedure.
If a laboratory determines that it cannot meet published detection and
quantitation limits in its sample matrix, it should be allowed to measure
these levels using established procedures which include mandated QA/QC
requirements. These levels would then be used as reporting limits. If the
quantitation limit, so established, is above the regulatory level, the compound
would be considered to be in compliance until such a time that a level above the
quantitation limit is measured. This assumption of compliance would apply
whether or not the quantitation limit were a published PQL or a measured
quantitation limit. EPA would also determine the frequency that these published
PQLs would be re-evaluated pending method and equipment improvement. In some
cases the Agency has suggested that a facility may petition for such a variance
(24).
We also recommend that the EPA establish uniformity among the various regulatory
programs for the determination of the method detection limit. Although the
definition is essentially the same, the number of replicates and blanks may be
different and, therefore, the calculation is affected. This can further compound
the current state of confusion in understanding and applying quantitation and
detection limits. The corresponding quantitation limit should be established
at five to ten times the MDL, or substantially higher as the matrix would dictate
(19). Such factors, however, must be used with extreme care, as the
method variability may well be underestimated by most laboratories (17).
recognized this need for consistency in its Report to Congress in CWA Section
518. It was reported that analytical methods are sometimes unnecessarily
different for similar sample matrices, target analytes and data quality
objectives. The Agency should move to greater method uniformity and more
consistency in the use of quantitation and detection limits and use these
concepts in regulatory compliance situations.
-------
280
VIII. REFERENCES
1. Rogers, L. B., et al., Eds., "Recommendation for Improving the Reliability
and Acceptability of Analytical Chemical Data Used for Public Purposes",
Chem. Eng. News, 1982, 60 (23) 44.
2. Glaser, J. A., et al., Environmental Science & Technology, Vol. 15, No.
12, pp 1426-1434 (1981).
3. 40 Code of Federal Regulations 136, Appendix B, 1987.
4. 40 Code of Federal Regulations 136, Appendix A, 1987
5. 50 Federal Register 46902, November 13, 1985.
6. 53 Federal Register 48849 December 2, 1988.
7. Keith, L.H., et al, Anal. Chem. 1983, 55, 2210-2218
8. 53 Federal Register 48840 December 2, 1988.
9. 53 Federal Register 48839 December 2, 1988.
10. Standard Methods for the Examination of Water and Waste Water, 15th ed.,
pp 16-18, American Public Health Association, American Water Works
Association, and Water Pollution Control Federation, (1980).
11. Private Communication.
12. Stanko, G. H., and R. W. Hewitt, "Performance Evaluation of Contract
Laboratories for Purgeable Organics", Presented at: 12th Annual EPA
Conference on Analysis of Pollutants in the Environment, Norfolk, VA,
May 10-11, 1989.
13. 40 Code of Federal Regulations 136.2 (f) 1987
14. USEPA Laboratory Data Validation Functional Guidelines for Evaluating
Organic Analyses. Prepared for the Hazardous Site Evaluation Div.,
USEPA. Washington, DC. Prepared by the USEPA Data Review Work Group,
July 1, 1986.
15. USEPA Laboratory Data Validation Functional Guidelines for Evaluating
Inorganic Analyses. Prepared for the Hazardous Site Evaluation Div.,
USEPA, Washington, D.C. Prepared by the USEPA Data Review Work Group,
July 1, 1988.
16. Koors, S. J., "Environmental Law Reporter News and Analysis", May, 1989,
p. 10213.
17. Koors, S. J., "Environmental Law Reporter News and Analysis", May, 1989,
p. 10219.
-------
281
18. USEPA "Statistical Analysis of Ground Water Monitoring Data at RCRA
Facilities, Interim Final Guidance" Office of Solid Waste Management
Division, February, 1989, Section 8.
19. "Test Methods for Evaluating Solid Wastes, Physical/Chemical Method"
Third Edition, 8010-10, USEPA Office of Solid Waste, Revision I,
December, 1987.
20. Parr, J., K. Carlberg, and G. Ward, "Reporting of Low Level Data for
U.S. Environmental Protection Agency Needs", Presented at: Third
Chemical Congress of North America Symposium in Honor of W. E. Harris,
June 8, 1988.
21. Method Detection Limits and Practical Quantitation Limits for Incinerator
Ash Matrices-Interlaboratory Study. Prepared for the Office of Solid
Waste, USEPA, Washington, D.C. Prepared by the Analytical Chemistry
Committee, Hazardous Waste Treatment Council, December 22, 1989.
22. Horwitz, W., Anal. Chem., 1982, 54(1), 67A-76A.
23. "Handbook for Analytical Quality Control in Water and Wastewater
Laboratories" EPA-600/4-79-019, Chapter 7, Environmental Monitoring and
Support Laboratory, USEPA Office of Research and Development, Cincinnati,
Ohio.
24. 54 Federal Register 26603, June 23, 1989.
-------
282
DR. WILLIAM T. DONALDSON
(EPA ATHENS LABORATORY)
"... THE NUMBER OF COMPOUNDS DETECTED IN A
SAMPLE OF WATER IS RELATED TO THE DETECTION
LEVEL. AS THE DETECTION LEVEL DECREASES AN
ORDER OF MAGNITUDE, THE NUMBER OF COMPOUNDS
DETECTED INCREASED AN ORDER OF MAGNITUDE.
BASED ON THE NUMBER OF COMPOUNDS DETECTED BY
CURRENT METHODS, ONE WOULD EXPECT TO FIND
EVERY KNOWN COMPOUND AT A CONCENTRATION OF
10-12 G/L OR HIGHER."
-------
283
DEFINITION OF
ANALYTICAL CAPABILITY
LIMIT OF DETECTION (LOD) - Lowest concentration
level that can be determined to be statistically different
from a blank.
METHOD DETECTION LIMIT (MDL) - Minimum
concentration of analyte that can be determined with 99%
confidence that the true value is greater than zero.
INSTRUMENT DETECTION LIMIT (IDL) - Smallest
signal above background noise that an instrument can
detect reliably.
LIMIT OF QUANTITATION (LOQ) - Concentration
above which quantitative results may be obtained with a
specified degree of confidence.
PRACTICAL QUANTITATION LIMIT (PQL) - Lowest
level that can be reliably achieved within specified
limits of precision and accuracy during routine
laboratory operating conditions.
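The MDL definition above is the one codified in 40 CFR 136, Appendix B (reference 3): a one-sided Student's t statistic at the 99 percent level multiplied by the standard deviation of replicate low-level spike analyses. As a rough illustration only, the following Python sketch applies that calculation; the seven replicate values are hypothetical.

from statistics import stdev
from scipy.stats import t

def method_detection_limit(replicates, alpha=0.01):
    """MDL = one-sided Student's t (99 % confidence) x standard deviation of replicates."""
    n = len(replicates)
    s = stdev(replicates)                 # sample standard deviation of the spiked replicates
    t_crit = t.ppf(1 - alpha, df=n - 1)   # one-sided t value at 99 % confidence
    return t_crit * s

# Seven hypothetical replicate spikes near the expected detection level (ug/L)
reps = [1.9, 2.3, 2.1, 2.4, 1.8, 2.2, 2.0]
print(round(method_detection_limit(reps), 2))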
-------
284
COMPARISON OF REPORTABLE SIGNIFICANT FIGURES AS A
FUNCTION OF RELATIVE PRECISION

Precision (%)        Significant    Example
                     Figures        Calculated    Reported
±0.001 to ±0.01           5         54.8149       54.815
±0.01  to ±0.1            4         54.8149       54.81
±0.1   to ±1              3         54.8149       54.8
±1     to ±10             2         54.8149       55
±10    to ±30             1         54.8149       5 x 10^1
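The rounding rule in the table can be expressed in a few lines of Python. This is only an illustrative sketch; the function names and the example value of 54.8149 simply restate the table above.

import math

def sig_figs_for_precision(rel_precision_pct):
    """Map relative precision (%) to reportable significant figures, per the table above."""
    limits = [(0.01, 5), (0.1, 4), (1, 3), (10, 2), (30, 1)]
    for upper, figs in limits:
        if rel_precision_pct <= upper:
            return figs
    return 1

def round_sig(x, figs):
    """Round x to the given number of significant figures."""
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))
    return round(x, figs - 1 - exponent)

value = 54.8149
for pct in (0.01, 0.1, 1, 10, 30):
    figs = sig_figs_for_precision(pct)
    print(pct, figs, round_sig(value, figs))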
-------
285
ACS GUIDELINES FOR REPORTING DATA

Analyte Concentration,
(St - Sb) in Units of sigma     Region of Reliability
Below 3                         Region of questionable detection (and therefore unacceptable)
3                               Limit of detection (LOD)
3 to 10                         Region of less-certain quantitation
10                              Limit of quantitation (LOQ)
Above 10                        Region of quantitation
-------
286
[Figure: interlaboratory coefficient of variation versus concentration, spanning
chemical product analysis down to drinking water limits.]
FIGURE 1. CURVE RELATING VARIABILITY BETWEEN LABORATORIES AND CONCENTRATION.
Reprinted with permission from Analytical Chemistry, Vol. 54,
No. 1, January, 1982. Copyright 1982 American Chemical Society.
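The curve in Figure 1 is the Horwitz relationship (reference 22), commonly summarized as an interlaboratory relative standard deviation of 2^(1 - 0.5 log10 C), with the concentration C expressed as a decimal mass fraction. A minimal Python sketch, assuming that form of the equation:

import math

def horwitz_rsd_percent(conc_mass_fraction):
    """Approximate interlaboratory RSD (%) as a function of concentration
    expressed as a mass fraction (e.g., 1 ppm = 1e-6)."""
    return 2 ** (1 - 0.5 * math.log10(conc_mass_fraction))

for label, c in [("1 %", 1e-2), ("1 ppm", 1e-6), ("1 ppb", 1e-9)]:
    print(label, round(horwitz_rsd_percent(c), 1), "% RSD")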
-------
[Figure: total signal (St) axis running from zero through the blank signal Sb,
Sb + 3(sigma) at the LOD, and Sb + 10(sigma) at the LOQ, in units of sigma,
marking the region of high uncertainty, the region of less-certain
quantitation, and the region of quantitation.]
FIGURE 2. RELATIONSHIP OF LOD AND LOQ TO SIGNAL STRENGTH.
Reprinted with permission from Anal. Chem. 1983, 55, 2210-2218. Copyright 1983,
American Chemical Society.
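Figure 2 places the LOD at Sb + 3(sigma) and the LOQ at Sb + 10(sigma), where Sb and sigma are the mean and standard deviation of the blank response (reference 7). A minimal sketch of that arithmetic, using hypothetical blank readings:

from statistics import mean, stdev

def lod_loq(blank_measurements):
    """ACS convention shown in Figure 2: LOD = Sb + 3*sigma, LOQ = Sb + 10*sigma,
    where Sb and sigma come from replicate blank responses."""
    sb = mean(blank_measurements)
    sigma = stdev(blank_measurements)
    return sb + 3 * sigma, sb + 10 * sigma

# Hypothetical blank responses (instrument units)
blanks = [0.8, 1.1, 0.9, 1.0, 1.2, 0.9, 1.1]
lod, loq = lod_loq(blanks)
print(round(lod, 2), round(loq, 2))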
-------
288
Figure 3. Normal Curve of Random Error (x in sigma-units from mean)
Figure 4. Normal Curve of Random Error (Mean = 100 ug/L; Sigma = 1 ug/L)
Figure 5. Normal Curve of Random Error (Mean = 100 ug/L; Sigma = 10 ug/L)
Figure 6. Normal Curve of Random Error (Mean = 100 ug/L; Sigma = 30 ug/L)
Figure 7. Normal Curve of Random Error (Mean = 100 ug/L; Sigma = 100 ug/L)
[Each panel plots probability density versus measured concentration (ug/L).]
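Figures 3 through 7 simply evaluate the normal (Gaussian) error curve for a true concentration of 100 ug/L and increasing values of sigma. The short sketch below, assuming nothing beyond the stated means and sigmas, reproduces the point of the figures: as sigma grows, the spread of measured values widens until it swamps the quantity being measured.

import math

def normal_pdf(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

mu = 100.0  # true concentration, ug/L
for sigma in (1, 10, 30, 100):
    # Peak density at the mean, and the +/- 2 sigma band of expected measurements
    print(f"sigma = {sigma:>3} ug/L: peak density = {normal_pdf(mu, mu, sigma):.4f}, "
          f"2-sigma range = {mu - 2 * sigma:.0f} to {mu + 2 * sigma:.0f} ug/L")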
-------
Figure 3. Normal Curve of Random Error
(x in sigma-units from mean)
[Probability density versus sigma-units from the mean.]
-------
Figure 4. Normal Curve of Random Error
Mean = 100 ug/L; Sigma = 1 ug/L
[Probability density versus measured concentration (ug/L).]
-------
Figure 6. Normal Curve of Random Error
Mean = 100 ug/L; Sigma = 30 ug/L
[Probability density versus measured concentration (ug/L).]
-------
Figure 5. Normal Curve of Random Error
Mean = 100 ug/L; Sigma = 10 ug/L
[Probability density versus measured concentration (ug/L).]
-------
Figure 7. Normal Curve of Random Error
Mean = 100 ug/L; Sigma = 100 ug/L
[Probability density versus measured concentration (ug/L).]
-------
294
(1) Laboratories receive performance evaluation samples in which
a limited number of concentrations are analyzed and the
samples do not have matrix interferences as might actual
samples;
(2) PQLs are based on EPA and State laboratory data which are
considered to be representative of the best laboratories,
but not all laboratories; and
(3) Samples are analyzed under controlled ideal testing
conditions which may not be representative of routine
practices.
For these reasons, the PQL represents a relatively stringent
target for routine performance. (52 Federal Register
25699).
-------
295
QUESTION AND ANSWER SESSION
MR. ENWEZE: I am Tony
Enweze from EBASCO.
My question is, having looked at all these different
values of method detection limits, don't you think that
one of the primary reasons EPA is using what they have now
is something related to the risk level that has been
documented, that particular contaminants could cause certain
kinds of diseases at certain concentration levels? You see,
in order for them to maintain that every laboratory report a
value within a certain range, they have established a value
such that it will be compatible with that risk level for
that particular compound.
Do you see that as a possible reason why they have
these random values for detection limits?
MR. KROCHTA: I am not
sure if I understand that question.
MR. ENWEZE: What I am
trying to focus on here is that detection limits sometimes
as established by EPA often are related to the risk level of
a particular contaminant or any compound in the environment.
It isn't necessarily that it might be coming from a
practical approach, in other words, how this can be analyzed
or how this can be found, but EPA values have been
-------
296
established to give the limit above which the toxicity
effect of a particular compound may impact a person who
comes into contact with it.
So, do you think that those kinds of reasons is a major
impact on why EPA has established their limits to be such in
a random fashion?
MR. KROCHTA: Well, I am not
sure if I understand the complete question. That may be the
reason, but our primary concern is that EPA will need to
establish PQLs for these compounds that we should use only
as guidance, because published PQLs are set primarily in
specific matrices that are not necessarily site specific.
We should only use them as guidance, as they may not be
achievable, whether for risk assessment purposes or routine
practical applications.
As I mentioned earlier, work was done evaluating solid
matrix effects, when they did a study on incinerator ash.
It was found that about 33 percent of the EPA designated
limits were actually below what could be measured in that
particular matrix, in that site matrix.
Now, EPA has agreed that they do have a problem, and
they are making an effort to establish more realistic method
detection limits, but we have to recognize that they are
only to be used as a guidance. Unfortunately, they are too
often used, when they are setting regulations, as a level
-------
297
that can be achieved, and that is not the case. They cannot
be achieved.
One more comment: in our own operation, we had a
regulation limit set which was about 100 times below what we
could actually measure in that particular matrix.
MR. ENWEZE: This is
exactly what I am trying to get to. Don't you then think
that for EPA to try to make it sort of arbitrary,
wouldn't that put the environment...the people at some kind
of risk of exposure to contaminants? If the detection
limits...the results that are reported by labs often are
reported as less than the detection limit, which is
considered from a risk point of view as not relevant, how
would you then control that factor of the fact that...
Let me give you an example. Use the method detection
limit of 5 ppb. It is clearly established that at 5 ppb, a
particular contaminant can cause cancer to anybody who
inhales it at this certain dose. So, how can you then
control the public or the industries from polluting above
those limits that could cause the risk that we are all
worried about?
That is where my concern comes in, that what you are
saying, leaving the public to decide where the method
detection limit will be based on the analytical approach
-------
298
might be risky to the public as a whole because of that
factor of risk that is associated with each compound.
MR. KROCHTA: I can see your
point. It is a problem. However, if they are going to
establish a regulation, it should be at a level that can
actually be measured. To be controlled, you have to be able
to measure it.
You want to have a regulation that can actually protect
the public, but if you can't measure it, it is very
difficult to operate under it.
MR. ENWEZE: So, do you think
then that the risk level that is established in risk
assessment books are probably not valid or those values are
reported with a method that was not capable of detecting
that level?
MR. KROCHTA: I am not sure.
I don't think we can answer that question right here.
MR. ENWEZE: Well, okay. I
think I made my point that the values that we are talking
about in detection limits, it is a little bit hard to leave
the public exposed because the industry would like to see
EPA compromise at a certain level that they consider
practicable when in actuality it has been documented
through research that below that level that can be detected
-------
299
there is a high risk of possible diseases that could be
inflicted on the public.
So, I find it difficult to let the system go to a
higher detection limit when considering the risk level that
might be encountered.
Thank you.
MR. FIELDING: Any other
questions?
(No response.)
MR. FIELDING: All right.
Let's take about a 15-minute break and be back about 3:15 or
3:20.
(WHEREUPON, a brief recess was taken.)
-------
300
MR. TELLIARD: Over the last
eon, the Office of Water Regulations and Standards has been
working on a regulation for the offshore oil and gas
industry. We feel that this regulation will outlast all of
us. It is the longest known regulation in history. It has
taken so long to write, most of the panel members have died.
(Laughter.)
MR. TELLIARD: The youngest
kid on the committee is 63, and he started out fresh out of
school.
In this continuing story today, we have two presenters,
both of them from the man who wears the star at Texaco, and
we are going to talk about some applications and concerns
about analysis of drilling fluids and drilling muds and
cuttings.
Warren who is our first one up, followed by Mike
Stephenson, are both involved in this program which we are
trying to implement in relation to both offshore oil and gas
and also, in the next year or 2 or 500, onshore oil and gas
regulations.
So, Warren, would you like to lead off?
-------
301
MR. HALTMAR: Thank you.
Good afternoon. Today, I will be talking about a
method for determining diesel oil in drilling fluids, Method
1651, Revision A. This is a proposed method for determining
the total oil and identifying and determining diesel oil in
drilling fluids which include muds and cuttings.
Proposed methods, I always believe, are not actual
facts but are expected to undergo modifications.
The detection limits for this method are 200 mg/kg of
total oil and 100 mg/kg of diesel oil in our samples. This
amounts to, if we are taking a 20 gram sample, 2 mg of diesel
oil.
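In other words, the concentration-based limits translate into absolute analyte masses once the sample size is fixed. A small sketch of that arithmetic (the function name is just for illustration):

def mass_at_detection_limit(limit_mg_per_kg, sample_mass_g):
    """Convert a concentration-based detection limit (mg/kg) into the absolute
    mass of analyte (mg) it represents for a given sample size."""
    return limit_mg_per_kg * sample_mass_g / 1000.0

print(mass_at_detection_limit(100, 20))   # diesel oil, 20 g sample -> 2.0 mg
print(mass_at_detection_limit(200, 10))   # total oil, 10 g sample  -> 2.0 mg (abstract figures)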
Again, this is a proposed method. I got involved in
this last year in validating the GC portion of this method.
This year, I became a participant in a round-robin study to
determine how effective this method is.
Today, I will talk about only the high points of the
method, retorting or distillation of the sample, the
gravimetric determination for total oil, and then the
capillary gas chromatography method for determining diesel
oil.
Then I am going to talk about possibly a few
modifications to this method. Some of them are drastic
modifications. I really originally thought they might be a
-------
302
pie in the sky type thing, but maybe they are more down to
earth than I originally thought.
Finally, I am going to bring you my conclusions about
the method.
For those of you who are not familiar with drilling
fluids, the definition of a drilling fluid is that it encompasses
all of the compositions used to aid the production and
removal of cuttings from a borehole.
For our case, we are looking at a water-based mud, and
this is made up of three major components: water, clays,
and additives. The additives can be weight enhancers,
viscosity enhancers, viscosity reducers, fluid loss agents
and lubricating agents. Diesel oil is not supposed to be
added to these drilling fluids.
Occasionally, a driller will add it because he has
problems. The problem is he is getting a drill bit stuck in
a hole, he left his lubricating agents back on shore, he is
off shore, he only has a finite time to do something. So,
he sees diesel oil is handy, and he mixes it in the drilling
fluid.
This first step of the procedure is to retort the
sample. Retorting is a simple distillation procedure, which
is semi-quantitative at best. The device has been around
the oil industry for a long time and consists of a metal
sample cylinder, an aluminum condenser and a graduated
-------
303
cylinder to collect the distillate. The drilling fluid
sample is retorted until liquid stops coming from the
sample.
Once we have our distillate, total oil is determined
gravimetrically. We extract the distillate with methylene
chloride. The methylene chloride removes the hydrocarbon
material, which includes total oil and diesel oil from the
distillate.
The methylene chloride is removed using a Kuderna-
Danish concentrator to give a gravimetric determination of
total oil. I was unsuccessful in removing all of the
methylene chloride from the sample using this device. The
Kuderna-Danish Apparatus is a concentrator not a device to
bring to dryness.
What I finally did with this apparatus is I took the
receiver off the bottom after I had removed as much
methylene chloride as I could. I put it in a sand bath,
warmed it gently, and blew dry hydrogen on it. I knew when
I did that I was probably going to lose some of the light
ends of the diesel oil, and, indeed, we did.
One of the interesting things that I did with the
Kuderna-Danish is that once I had removed as much methylene
chloride as I could, I still had methylene chloride in my sample. I
took a sample of that, injected it into my GC, and I got a
chromatogram very similar to what I was expecting from my
-------
304
diesel oil. So, it really didn't look like the retort was
having any adverse effect on the diesel oil.
For the gas chromatography section of the procedure,
we redissolve the sample in methylene chloride and add an
internal standard, 1,2,5-trichlorobenzene. We inject this
into our GC. We essentially fingerprint the sample, to
identify the diesel oil by looking at the alkanes. We are
looking at ten alkanes eluting after the internal standard.
Gas chromatography conditions, we are using a DB-1
column or equivalent, 30 meters long, 0.25 mm inside
diameter with a film thickness of 1.0 micron.
The method calls for a split ratio of 0 to 120. When I
did the verification on the calibration method, I found out
when we use the high split ratios, we came up with the old
bugaboo of discriminating distillation. If we reduced the
split ratio down to 40:1 or less, then we eliminated the
discriminating distillation.
Our detector and injector temperature is 275 degrees C. Our oven
temperatures are programmed from 90 to 250 degrees C at 5
degrees per minute. This can change. There is a criterion
in the procedure that calls for relative retention times to
elute at a particular time. The internal standard must
elute at 8 minutes.
I had some difficulties making my G.C. meet the
retention time requirements of the method. Because of the
-------
305
many variables associated with G.C. retention times, many
manufacturers' test mixes would be a better check for G.C.
and column operating efficiency. This is especially true
when analyzing complex samples.
Finally, a data system is needed that can store and
process data from the G.C.
A typical chromatogram of #2 diesel is this right here.
We have our internal standard. We are looking at these 10
alkane peaks. We start with C12. The method does allow
that if we run into some interferences with these alkanes,
say, from a crude oil, that we can use some of the lesser
peaks, possibly some of the aromatics here. The aromatics
are really the compounds that could be harmful to our
environment.
The data we obtained on our calibration samples is
shown here. The first three standard samples have relative
retention times and response factors in agreement with the
procedure. The last sample is our lowest standard and
failed to meet the requirements for response factors.
There is a caution here that I think everybody needs to
be aware of when using data systems. Data systems can have
problems picking the correct baseline for samples ranging
over a wide range of concentrations. The expanded
chromatograms show an alkane region in and out of the
-------
306
calibration range. Large differences do exist because of
baseline differences.
Taking a look at the data generated from the method, it
was found that the light ends of the diesel oil were lost.
This occurs when the methylene chloride is evaporated to
dryness. This data was obtained by spiking mud samples with
known amounts of diesel. As can be seen, 40 to 50 percent
of the diesel oil is lost during the gravimetric analysis.
Earlier, I stated that the retort device is a
semiquantitative device. We ran a blank down here of just
water, and lo and behold, we find out that there are 61 mg
of diesel oil in that water blank.
To see what effect this loss has on our chromatogram,
we took two samples, spiked with known weights of diesel
oil. One of them was dissolved in methylene chloride; one
of them we carried through our retort procedure. What we
want to see is differences in the chromatograms of the two
samples.
Here we have our internal standard, and you can see
this chromatogram is very, very similar to what I showed you
earlier.
Now, if we carry diesel through the retort, the
extraction, and the concentration, this is our diesel. We
have lost all the light ends. This is our internal
standard. Before, on the other chromatogram, the maximum
-------
307
alkane was peaking about 17 minutes retention time. Here,
we have pushed our maximum alkanes out to 20 to 24 minutes.
So, we have drastically altered the diesel oil.
If we look at the aromatic fraction in here and compare
the two different samples, we see that we have also lost
part of our aromatics.
Some of the modifications that could improve the
method. We need larger samples. Two mg of diesel is a
rather small sample to work with. The retort evidently is
retaining some of the diesel oil. So, possibly, a glass
distillation device would be better.
The split injection mode of the gas chromatograph
reduces sensitivity. Performance and sensitivity could be
improved by reducing the split ratio or by splitless or on-
column injection. If additional sensitivity is needed then
widebore or megabore columns could be used.
No problems were encountered with one internal
standard, but in complex matrices it is advisable to use
multi-internal standards.
Working through this method, it became apparent that
the internal standard should be added to the drilling fluid.
This would allow tracking the sample all the way through the
procedure, not just the G.C. portion.
-------
308
The method uses only an FID detector. Because the
aromatics are the compounds of concern, it is suggested that
a PID detector be used along with the FID detector.
One of the major modifications would be to use FT-IR.
FT-IR's, with the latest in software techniques like
partial least squares, could be used to determine solvent
(methylene chloride), crude oil and diesel oil. This
procedure would still need a suitable extraction technique.
Another possible method would be fluorescence. This
would monitor only fluorescing compounds (aromatics); a
non-fluorescing solvent would be needed for the
extraction.
Finally, a method that I am not too familiar with is
SFE/SFC. This method appears to be ideal for this type of
analysis. The method could be one continuous procedure
using on-line extraction. The chromatography could give
chromatograms similar to the G.C. or it could give only
total saturates and total aromatics.
In conclusion, I was unable to determine how effective
the retort separated the hydrocarbons from the drilling
fluids (gravimetric analysis). Chromatograms of diesel oil
versus retorted diesel oil were similar.
Methylene chloride is a problem with the gravimetric
analysis. A little bit of error can cause a large error in
total oil and diesel oil.
-------
309
The gas chromatography proved satisfactory with one
internal standard with low split ratios.
If you all have any questions now, I will be glad to
entertain them. Thank you.
MR. TELLIARD: Thank you, Warren. Thank you
very much.
-------
310
LABORATORY DETERMINATION OF DIESEL OIL IN DRILLING FLUIDS
BY: WARREN C. HALTMAR
ABSTRACT
Drilling muds and/or drill cuttings occasionally become contaminated with diesel oil, which
contains toxic compounds and could be harmful to the environment if discharged in an
improper manner. The Environmental Protection Agency (EPA) has proposed a method
(Method 1651) for determining total oil and diesel oil in drilling fluids (cuttings and muds).
This method involves the retorting, gravimetric determination of total oil, and the capillary
gas chromatography determination of diesel oil.
Detection limits are estimated to be 200 mg/kg for total oil and 100 mg/kg of diesel oil
in the drilling fluids. This translates to 2 mg of total oil or 1 mg of diesel oil per 10 gram
sample. After retorting, the distillate (water and oil) is extracted with methylene chloride
to separate oil from the water coming from the drilling fluid. The methylene chloride is
evaporated to give a gravimetric determination of total oil, which includes diesel oil.
The dried extract (oil) is dissolved into methylene chloride with an internal standard and
analyzed by split injection capillary G.C.
This is a proposed method and is expected to experience modifications before final
acceptance. The merits and possible modifications of this method will be discussed.
-------
METHOD 1651 REVISION A
-------
DEFINITION
-------
315
-------
317
-------
G. C. CALIBRATION DATA FOR #2 DIESEL

        760 STD          300 STD          150 STD          60 STD
PEAK    RELRET    RF     RELRET    RF     RELRET    RF     RELRET    RF
ISTD     1        --      1        --      1        --      1        --
1        2.64    0.013    2.62    0.016    2.62    0.019    2.79    0.370
2        4.99    0.013    4.97    0.016    4.96    0.019    5.13    0.393
3        7.48    0.024    7.45    0.030    7.43    0.033    7.60    0.253
4        9.98    0.018    9.94    0.021    9.92    0.023   10.09    0.407
5       12.41    0.018   12.36    0.020   12.35    0.023   12.53    0.458
6       14.98    0.018   14.73    0.014   14.71    0.023   14.88    0.512
7       17.01    0.012   16.97    0.009   16.96    0.016   17.13    0.792
8       19.15    0.009   19.12    0.009   19.11    0.012   19.26    0.988
9       21.2     0.005   21.18    0.005   21.18    0.009   21.36    1.467
10      23.16    0.003   23.15    0.002   23.15    0.004   23.34    2.768
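The relative retention times and response factors in the table are the usual internal-standard quantities: RRT is a peak's retention time divided by that of the internal standard, and RF relates peak area to mass against the internal standard. The Python sketch below is a generic illustration of those relationships, not the exact formulas written into Method 1651; the peak areas and masses are hypothetical.

def relative_retention_time(rt_peak, rt_istd):
    """RRT relative to the internal standard (the ISTD itself has RRT = 1 by definition)."""
    return rt_peak / rt_istd

def response_factor(area_peak, mass_peak, area_istd, mass_istd):
    """Internal-standard response factor from a calibration standard."""
    return (area_peak * mass_istd) / (area_istd * mass_peak)

def quantify(area_peak, area_istd, mass_istd, rf):
    """Mass of analyte in an unknown, using a calibration response factor."""
    return (area_peak * mass_istd) / (area_istd * rf)

# Hypothetical calibration point and unknown
rf = response_factor(area_peak=5200, mass_peak=760, area_istd=120000, mass_istd=10)
print(round(rf, 5))
print(round(quantify(area_peak=2100, area_istd=118000, mass_istd=10, rf=rf), 1))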
-------
SAMPLE            mg DIESEL OIL ADDED    mg DIESEL OIL RECOVERED    % RECOVERY
1                       408.7                    205.4                 50.2
2                       384.3                    230.7                 60.0
3                       403.0                    228.8                 56.8
4                       404.0                    262.6                 65.0
5 (water blank)           0                       61.2                  --
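The recovery column is simply the recovered mass expressed as a percentage of the spiked mass. A one-function sketch that reproduces the table (within rounding):

def percent_recovery(mg_added, mg_recovered):
    """Spike recovery for the gravimetric diesel determination."""
    return 100.0 * mg_recovered / mg_added

for added, recovered in [(408.7, 205.4), (384.3, 230.7), (403.0, 228.8), (404.0, 262.6)]:
    print(f"{added:6.1f} mg added -> {recovered:6.1f} mg recovered = "
          f"{percent_recovery(added, recovered):4.1f} %")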
-------
[Chromatogram: SAMPLE D-1. Detector response versus retention time (minutes),
0.00 to 40.00 minutes.]
-------
[Chromatogram: SAMPLE D STD. Detector response versus retention time (minutes),
0.00 to 40.00 minutes.]
-------
325
MR. TELLIARD: In the
same drilling mode, Mike Stephenson is going to come up and
explain how nice the method is and how he really likes it.
-------
326
MR. STEPHENSON: Warren got up
and told you something about the nuts and bolts of this
Method 1651. What I want to talk about is the validation of
the method.
In order to get to where we can talk about the
validation study that we are undergoing, I want to go
through a little bit of history that is involved with this
elusive Method 1651.
The purpose of the method is to identify the presence
of diesel oil in drilling fluid and to quantify the amount
of diesel oil present. Diesel oil is a highly aromatic
fuel, and aromatic compounds are known to be bad for the
environment. One of the most famous ones is benzene.
Naphthalene is a known poison. They use it in moth balls.
So, we don't want to put these in the ocean. The
agency saw the wisdom of doing that, and banned the
discharge of drilling muds that contain diesel in them.
Very logical.
The next thing we had to do was say, how do we know
whether there is no diesel in it? That means you have to
have a way of analyzing for it.
So, back in...I think the first official proposal of
this method was in August of 1985. It was unofficially
proposed some time before that, and we have been sort of at
-------
327
cross purposes with Bill on it since the first day he
mentioned it.
One of the things that brought us to cross purposes was
the method that we were talking about using. We were
analyzing a highly aromatic fuel by using aliphatic
compounds as the labels. The agency's recommended
substitute for using diesel fuel was to use mineral oil
which has very low to no aromatic content. Since it doesn't
have any aromatics, it has a lot of aliphatics which makes
it look almost the same as diesel.
In fact, one of the studies that we conducted as an
industry showed that from the chromatograms, you couldn't
tell the difference between diesel and mineral oil in some
cases.
MR. TELLIARD: If you
were blind, you couldn't. That is true.
(Laughter.)
MR. TELLIARD: There was
an incident.
MR. STEPHENSON: Okay,
there were two.
After the method was originally proposed, one of the
things that we as an industry tried to convince Bill and the
rest of the agency, was that we needed to be able to use
diesel for periodic operational requirements.
-------
328
It costs a lot of money to move these drilling rigs
around out there, so we like to park in one place and drill
two or three holes from one location. In order to do that,
you do a lot of bending of your pipe and drilling off at
angles. You can imagine, if you will, a bent shaft with
something that is inside it turning, that it ends up binding
periodically.
In order to reduce that binding, historically, one of
the best lubricating agents that we knew as an industry was
diesel oil. It would unstick our pipe, and we could get on
with drilling pretty effectively.
If we put a slug of diesel in the mud to unstick our
stuck drilling pipe, then if we recovered all of that
diesel, we should end up with no diesel left in the mud.
And it turned out that we ended up having to recover a large
slug of mud.
At any rate, I am not here to talk about the diesel
pill monitoring program. That was a long winded program
that did a lot of analyses, and it is the source of a lot of
the operational history with Method 1651.
That method came out and was published. As an
industry, we had a few problems with it. There were
typographical errors in a couple of the formulas and...
MR. TELLIARD: Conclusions.
-------
329
MR. STEPHENSON: Yes,
conclusions.
Also, one of the things that it didn't do was address,
from our viewpoint, the possibility of interferences. If we
were using a recommended substitute such as a mineral oil in
our drilling fluid and we happened to get lucky and drill
through an oil bearing zone, then we ended up with a little
bit of oil in the drilling fluid.
How can you tell that what you measure isn't diesel?
Or, how can you tell if there is any diesel there or how
much is there? You end up with an interference problem from
the formation oil.
If you drill a dry hole and don't find oil, it becomes
a pretty simple system, but if you find oil or you put a
little diesel oil and a little mineral oil in the mud, it
becomes very complicated. And that issue is not resolved
yet.
At any rate, we made our comments on the first
publication, and they came out with Revision A. When it
came out in October of 1988, we had 14 rigs that were
out in the Gulf of Mexico drilling; drilling not using
diesel and not using oil based muds.
So, we went out and took some samples. We took a
sample before they hit an oil bearing formation, a sample
after they hit an oil bearing formation, and a sample at the
-------
330
end of the well. We were going to analyze all those samples
using Method 1651 to see how much total oil was in the mud
and if there was any problem with interferences with diesel.
We were trying to convince Bill that we needed to do
something else.
When we started out, we had difficulty with the
method. Our contract laboratory spent about $30,000 just
trying to do the calibration according to the method. At
that point, we decided he could stop trying to calibrate.
We had an emergency meeting with Bill and Dale
Rushneck and the contractor that was doing the work. We got
a little advice from Dale on how to get on the right track,
and our contractor went back and tried some more. We
finally decided to stop.
At that point, I said, okay, we have a problem. I knew
I had a good analytical chemist. So, I called Warren, and
I gave him the method. I asked him to go down to the local
Texaco station, get some diesel, and do the calibration.
However, he couldn't calibrate following the letter of the
method, either.
So, I said okay, that is wonderful. Then I asked him
to go back and see what had to be done to make the method
calibrate. So, he did.
Then we had another meeting. We are good at meetings.
We meet all the time. We had another meeting, and we made
-------
331
some modifications to the method. At that point we decided
that with those modifications, we needed to do a study by
several laboratories of this whole method. The purpose was
to determine just exactly how good the method was and what
it was going to accomplish, and we weren't even going to
talk about such things as interferences at this time. We
were just going to see if we could do straight analyses.
So, what I am talking about in this validation study is
the purpose of the study, the laboratory samples, the
control of the study, and what results we intend to get.
As you see, we had a joint study, industry and
government. You can see Bill is the guy over there in the
blue coat and the white beard.
The purpose of the study was to determine the
difficulty of the method. One of my colleagues, Dan Caudle,
and I have maintained for a long time that this was a very
difficult method. So, one of the things we wanted to find
out, because Bill wanted to show that it really wasn't all
that difficult, was just what the degree of difficulty of it
was.
We also wanted to find out what the reproducibility of
the method was.
And finally, we wanted to get some rough idea of the
detection limits, because we weren't comfortable with the
limits that were set forth in the proposed method.
-------
332
And Bill said, okay, I will let you do that. I will
even donate one laboratory to do that. Very magnanimous of
me.
MR. TELLIARD: Nice guy, nice guy.
MR. STEPHENSON: Yes, nice guy. He
is here to help us.
The laboratories that we are using for the study
include, of course, Warren Haltman of Texaco as one of the
laboratories. Conoco Research Center up in Oklahoma is
another one of the laboratories. Since we already have
laboratories that have good analytical capability, and since
we are not going to be reporting these results, and we are
just doing some research, we figured that we might as well
use them to do this little study.
We did use an independent laboratory, CORE Laboratories
in Lafayette which used to be known as Weintritt Testing
Labs. That was the laboratory that did the bulk of the
testing in the diesel pill monitoring program, and that was
the laboratory that was trying to do the calibration in the
14 rig study.
Then, of course, we have Bill's laboratory, Dave
Thompkins at ETS Analytical Services.
And then as a matter of interest, there is a new
technology that has arisen at Ruska Instruments. They call
it thermal chromatography, but it is controlled distillation
-------
333
through a detector. It is an interesting device, and he has
some data that shows good results. In fact, it is so good,
Bill wants to use it for another method.
Since Dan Caudle from Conoco and I had been arguing
with Bill about this for several years, we decided that we
were a little prejudiced in what was going on; and since
Texaco and Conoco had provided two of the laboratories to do
the testing, we ought not be involved as being the control
officers. That would mean we would know what the unknown
samples were, and we might prejudice our own people.
So, we asked Joe Raia from Shell Development Company to
act as the industry control officer. Our good friend here
acts as the other one.
The next topic is the results that we intend to
achieve. Each of the laboratories, when they turn in their
analysis package, are supposed to turn in the chromatogram
of each run, the oil content from each retort, the diesel
content as a result of doing the GC runs and doing the
calculations as set forth in the method, and the QA and QC
requirements of the method. That was the way we set up the
program with triplicates and so forth, to make sure we got
sufficient data. This program provides 23 retorts and 23
G.C. runs.
-------
334
At $250 a run, it costs the industry about $5750 for
CORE Laboratories. I would hate to see what Warren has
spent on the testing.
What about the data received to date? Well, one
laboratory, CORE Labs, has turned in its data. The others
are still working on it. I saw Dave Thompkins the other
day, and ETS is still working on it. Texaco still has not
finished all of their analyses. Conoco hasn't finished
theirs. We have heard from Ruska and they are still working
on it.
It is taking a little longer to do this than we first
envisioned. But, when we get all the data in to Bill and
Joe, we are going to get back together again, probably at
Texaco, to discuss all of this data and what it means.
Hopefully, we will come to some way of either modifying the
method to make it suitable to all of us and something we can
live with or inventing some new method. Who knows?
At any rate, I want to thank Bill for inviting me to
come speak and for being our control officer. I want to
thank Dan Caudle who dragged me, kicking and screaming, into
this debate; Joe Raia from Shell who worked with Warren a
great deal on some of our comments and on the recommended
revisions to this method; Warren who has done a lot of work
and consultation with me; the API for paying for Weintritt
Laboratories; and, of course, the EPA for paying for ETS.
-------
335
That is basically all I wanted to say, except I wanted
to know if you had any questions, because we have a bunch.
-------
336
QUESTION AND ANSWER SESSION
MR. HOPPE: My name is Eric
Hoppe. I am with Battelle.
I am kind of wondering if I am missing a point here. I
am not sure I understand what the difficulty of the method
is. I think there are a lot of commercial laboratories that
have done this for years and years, although I don't think
there has been any check as far as their recovery limits or
anything.
MR. TELLIARD: We have not met
before the meeting, right?
MR. HOPPE: No, we have not.
(Laughter.)
MR. TELLIARD: And before you
went to Battelle, you worked in which part of the EPA? Just
kidding.
MR. STEPHENSON: Actually, the
basis of one of our sets of comments describing the
difficulty of this method was a Battelle report, just to
keep things in perspective.
It turns out that we end up doing things like
evaporating to dryness in a Kuderna-Danish. A Kuderna-
Danish, you know, is not a dryness device. It is not
designed for that. It doesn't do that well. Hence, it
-------
337
doesn't do it. And that is one of the problems we have with
the method.
Another problem is the fact that the retort is not a
fully quantitative distillation procedure. The recoveries
that we get here, the recoveries that Weintritt got in their
work were low.
So, you know, there are some problems with it.
One of the reasons we went to doing the retort...maybe
this will help clarify it and why it becomes such a problem
to us...we started out initially...and Bill had this idea
for doing this...and we wanted to have a method that we
could do out on this platform bobbing up and down on the
ocean.
MR. TELLIARD: You didn't
like my first idea. I recommended we wouldn't have this
difficulty if they just used GC/MS. They didn't think it
was a good idea to put a mass spectrometer on an oil rig. I
thought they had a very narrow window to look through. You
had room for a Coke machine, but you didn't have room for a
mass spectrometer.
(Laughter.)
MR. STEPHENSON: What can
I say? It is a matter of priorities. We didn't want to do
this in the first place.
-------
338
And we may end up going back to GC/MS. I can see it
coming.
MR. TELLIARD: Five years.
Go ahead. It is your show. I shouldn't be heckling
you from the side.
MR. STEPHENSON: That is all
right. I heckle you.
So, we chose what would be a method we could do out on
this rig that is bobbing up and down out there in the ocean.
One of the things that the companies that sell us our
drilling muds and all the components have is this retort.
It was designed as a means of measuring the amount of solids
that were in the mud. All they did was put a weighed mud
sample in this device, and they heated it up, and drove off
all the liquids. Hence, they could weigh what was left,
because it was a tared flask in the beginning, and knew what
their solids content was.
We decided we would pervert this method and use it as a
method of collecting the oil and water. The water we didn't
care about, but the oil was important. So, we tried to use
that as our distillation, our oil recovery method from the
mud, and it has its problems.
The other thing is we wanted a simple GC method. GC is
a pretty simple thing. Graduate students use them;
undergraduate students use them. I know of a trucking
-------
339
company that monitors what comes in on the
18-wheelers off the road. They have some guy who didn't
finish high school yet who takes a sample off of this truck
and goes and runs it in a GC, and he interprets the data,
and they spend thousands of dollars based on his
interpretation.
So, we figured a GC method was a good idea.
Howsomever, by the time we got through with all the QA, QC,
and everything else that we cranked into it to make it a
good, reliable EPA method, it was no longer a field method.
The calibration restrictions are also extremely tight.
So, we are shipping the samples back to shore to be
analyzed anyway. I think Bill is going to win.
Did I cover it? I still haven't answered it?
MR. HOPPE: I still don't
know what the problem is.
(Laughter.)
MR. TELLIARD: You can't
make up for the weaker players is the problem.
MR. STEPHENSON: We have
been trying to compensate for Bill, but...
Maybe at the break, you can talk to Warren. He has
been doing this, and he can maybe discuss some of the
details. As I say, he is the nuts and bolts man. I am the
guy who has to go meet with Bill.
-------
340
Are there any other questions?
(No response.)
MR. TELLIARD: Thank you,
Mike. Thanks a lot.
-------
VALIDATION OF A METHOD
FOR THE DETERMINATION OF
DIESEL OIL IN DRILLING FLUIDS
M. T. STEPHENSON
TEXACO INC.
CHAIRMAN
TECHNOLOGY WORK GROUP
API OFFSHORE GUIDELINES STEERING COMMITTEE
-------
PURPOSE OF METHOD
IDENTIFY THE PRESENCE OF DIESEL OIL
IN DRILLING FLUIDS
QUANTIFY THE DIESEL OIL PRESENT
TEXACO INC.
-------
HISTORY OF THE METHOD
ORIGINAL PROPOSAL - AUG 26, 1985
DIESEL PILL MONITORING PROGRAM
API COMMENTS
REVISION A
14 RIG STUDY
TEXACO RESEARCH
TEXACO INC.
-------
VALIDATION STUDY
PURPOSE
LABORATORIES
SAMPLES
CONTROL
RESULTS
TEXACO INC.
-------
PURPOSE OF STUDY
DETERMINE DIFFICULTY OF METHOD
DETERMINE REPRODUCIBILITY OF METHOD
DETERMINE "ROUGH" DETECTION LIMITS
TEXACO INC.
-------
PARTICIPATING LABORATORIES
TEXACO E&P TECHNOLOGY DIVISION
CONOCO RESEARCH CENTER
CORE LABORATORIES - LAFAYETTE, LA.
ETS ANALYTICAL SERVICES
RUSKA INSTRUMENTS
TEXACO INC.
-------
STUDY CONTROL
INDUSTRY CONTROL OFFICER:
J. C. RAIA
AGENCY CONTROL OFFICER:
W. A. TELLIARD
TEXACO INC.
-------
STUDY RESULTS
CHROMATOGRAMS OF EACH GC RUN
OIL CONTENT ANALYSES
DIESEL CONTENT ANALYSES
QA/QC
DATA RECEIVED TO DATE?
TEXACO INC.
-------
ACKNOWLEDGMENTS
W. A. TELLIARD, USEPA
D. D. CAUDLE, CONOCO INC.
J. C. RAIA, SHELL DEVELOPMENT CO.
W. C. HALTMAR, TEXACO INC.
AMERICAN PETROLEUM INSTITUTE
ENVIRONMENTAL PROTECTION AGENCY
TEXACO INC.
-------
350
-------
351
MR. TELLIARD: Our next
speaker is from EPA and our lab at RTP, Larry Johnson who
has spoken with us before. It has been a while since he has
been back. Larry is going to talk to us on some air source
methods they have been working on at RTP.
-------
352
MR. JOHNSON: As Bill
mentioned, we are the air methods people. Of course, we
don't really talk all that much with the water people except
when we get water in our air and they get air toxics coming
out of the water.
Some of the push for looking at air samples comes from
a couple of different directions. There is lot of interest
now because of the new Clean Air Act that is coming through,
air emissions from Superfund sites, air emissions from
lagoons. The sewage sludge incineration regs are well into
coming out, and we have been looking at the emissions from
incinerators.
What I am going to try to do today is just a quick
look at some of the kinds of samples that you might see
coming into your laboratories related to air.
Just like with water samples, air samples have three
major phases that have to be dealt with. The sampling part
of the operation is greatly different, of course, than
taking samples out of water or sludge.
The preparation steps, you will find as we get into
them, are going to look fairly familiar to you with just a
few differences. You will be getting new kinds of samples,
but you are going to use old familiar techniques, although
on different matrices.
-------
353
After the sample is extracted, the analysis methods are
really quite similar except in fairly rare cases.
I am going to go through some of the sampling methods
just briefly to show you what kind of equipment these
samples are taken with and, therefore, where some of these
samples are coming from. I am skewing this discussion quite
a bit towards stack emissions mostly because that is our
specialty.
In the interest of time, I am going to talk mostly
about organics, and only the more frequently used sampling
trains. There are a lot of other trains available,
especially if you start trying to look for a wide variety of
compounds.
So, I am not going to make any pretense that this is
complete. We are just going to hit the high spots,
basically.
Most of the work has been done on stack emissions from
combustion sources. We started out working on power plants.
We worked on a whole variety of combustion sources, and a
lot of our funding and, therefore, our focus in the last few
years has been hazardous waste incineration related.
The new Clean Air Act is going to focus attention on
combustion stacks, but also on vents and similar sources.
Where are these methods provided? In fact, somebody
asked me a very good question out in the hall before the
-------
354
talk. Is there a compendium somewhere of all of the air
methods? The answer is not quite. A list of the better
references will help.
Most of the Federal Register methods that are used in
relation to the air part of our program are in 40 C.F.R 60
and 61. There are air and stack methods in SW-846, and we
have a number of them in various stages of review and
clearance for inclusion in SW-846. There are also other
sources. We have a number of guidance documents and other
references. In the hard copy of the paper, I will list a
few of those.
Also, there is an operation called EMTIC, Emission
Measurements Technical Information Center, that has been set
up at RTP, and its function is primarily to supply methods
information to the States and to the regions. The number
is (919) 541-1059.
One of the methods I am going to talk about is Method
0010, and that is an SW-846 method.
We are also going to talk about 0030. A lot of stack
samples and some ambient samples are taken in Tedlar bags.
Metal cylinders are very popular, and sometimes they are
great and sometimes they are terrible. Usually, for stacks,
they are terrible, but we will talk a little bit about their
use for ambient sampling, too. Likewise, sorbent tubes are
used for source and for ambient sampling.
-------
355
A lot of the time we will spend on 0010. This is a
very flexible method, so a lot of the samples will come in
from this hardware. It is also one of the more complicated
methods, so you will get more subsamples to deal with.
It is the same thing as Modified Method 5. The term
Modified Method 5 became so ambiguous that we had to quit
calling it that.
It is also the same train as the so-called Semi-VOST
which is less ambiguous but a poor name also.
This method is used for compounds with boiling points
greater than 100 degrees C. So, we are talking about what I
call the semi-volatiles and the non-volatiles. It doesn't
necessarily correspond exactly to what you would call those
things, based on water analysis terminology.
This is a diagram of the Modified Method 5 train. If
you haven't seen one of these stack sampling trains before,
it looks like a glass blower's nightmare, but, believe me,
you need all of these pieces there to get the samples.
From your standpoint, the important thing is what
samples will be generated from this device. A probe rinse
sample comes from this part of the train, and a sorbent or a
filter sample is also generated there. The sorbent is
usually XAD-2. A condensate sample is also produced.
We will talk a little about how each of those sample
types is handled in the laboratory.
-------
356
Solvent extraction is used to recover organic compounds
from the samples. Usually, it is methylene chloride, almost
a universal solvent, which, sometimes, we do have to follow
up with another solvent. In the vast majority of cases,
methylene chloride is the solvent of choice for all the
usual reasons.
The probe rinse has two sub-samples to it, in effect.
The particulate material, and the solvent that the probe was
rinsed with. The rinse is filtered and the two sub-samples
dealt with separately. Usually, they are combined in with
some of the other subsamples. The solvent is combined with
an extract, and the particulate is combined with the filter.
Depending on the information needed the combination might be
different.
The sorbent and the filter are where most of the sample
is collected in this train. If you have any luck at all, it
will be distributed between those two, and you should never
try to figure out what this means in terms of what was in
the vapor state and what wasn't in the vapor state in the
stack. It is totally meaningless if you do try that, and
unfortunately, a lot of people do.
Again, the samples are Soxhlet extracted. We went out
of our way to misspell Soxhlet here. I think about four of
senior scientist types looked these things over and didn't
see the mistake until I started packing the slides. The
-------
357
sorbent, which is usually XAD-2, is well-behaved compared to some
solids like soils and sludges. XAD-2, assuming it has been
cleaned and treated properly, is consistent from batch to
batch, and you know what to expect from it.
The particulate is another matter. It is a little bit
like a soil sample in that respect. It may be very docile
for months and months and then turn around and bite you by
not giving back some of the material that you are interested
in recovering.
Very much like your water sample, once you have
extracted the sample, you have to concentrate the extract,
and then you have to do an analysis. We almost always go
with GC/MS or GC, and when we can't, we go with HPLC. It is
very much like water or sludge analysis in that respect.
The condensate has passed through the filter, and has
been through XAD-2, so most of the organics are stripped
out. It is relatively clean. It is not something you want
to drink, but compared to a dirty water sample with a lot of
organics in it, it is relatively clean, and you analyze this
material just to prove that nothing broke through. It is
almost a QC sample.
In some cases, if you have enough history, you can quit
running this sample. Unfortunately, a lot of people have
quit when they shouldn't, and we are tightening up
requirements on running it.
-------
358
A separatory funnel extraction is required very much
like a water sample, and like with a water sample,
sometimes another extraction with a second solvent may be
required. Once the extract is obtained, it is
concentrated in a K-D and analyzed with one of the usual
analytical methods.
Method 0030 is the VOST, however you say it; the
pronunciation is like tomayto or tomahto. It is designed
for compounds that boil between 30 and 100 degrees C. We
have found that most of the compounds that boil between 100
C and 132 C will also work, but we don't allow the train to
be used above 132 C, because things up in that range can be
involved with the particulate, and the particulate is
discarded in this method.
Here is the location of the particulate filter. You
throw it away. The sample stream passes through a cooler,
then into a tenax cartridge. There will be condensate,
which is usually minimal. This is a low volume train which
is why it will catch the volatiles. There is a back-up
tenax tube and charcoal which we included in sheer
desperation. Most of us have wished we hadn't put it in
there for years now. We are going to replace it one of
these days.
Most of the sample typically collects on the front
tenax tube, but you have to analyze the condensate to prove
-------
359
it contains no compound of interest, and you always analyze
the second Tenax tube, because it is not unusual for it to
contain some of the compound of interest.
You only have two subsamples here, two sets of sorbent
tubes. In fact, a sampling run usually consists of
approximately six sorbent tubes.
The condensate must be dealt with, even though some
people discard it when it should be analyzed. Again, it is
more of a QC sample just to prove that nothing got through
the rest of the train. It protects against breakthrough,
channeling, things like that.
This is what the tenax tubes look like. We actually
adopted the biggest ambient air sampling tubes that we could
find. There is about 2 grams of tenax in the one, and the
other has about a gram of tenax and a gram of charcoal in
it.
Method 5040 which, again, is a SW-846 method is used to
analyze these tubes. It is just a heat desorption method
with a twist. You heat desorb into a purge and trap
chamber, because these tubes are going to have maybe a
milliliter of water on them sometimes. You get rid of the
water by running it into water.
We have tried a lot of more elegant techniques, and so
far, none of them have worked very well for us. That is not
-------
360
to say they wouldn't if we spent enough money to really make
them work.
We have a draft Method 5041. 5040 is in the book, and
it is a packed column. In fact, it is the old 624 method.
We didn't want to develop an analytical method. We were
busy with the sampling method, so we just adopted the
analytical method.
5041 is a megabore version of 5040, and it will give
you a lot better resolution. It is under review, and in a year
or less it will be in SW-846.
The condensate is analyzed by purge and trap whenever
possible. If you have something like acetonitrile that
doesn't purge and trap well, you may have to use direct
injection which I know all of you love.
Tedlar bags are a viable way to sample, but one problem
with these bags is that they are deceptively simple, and
there is a multitude of ways to use them wrong. You use
them only for low boilers because you don't want things
condensing out in the bag. It is necessary to run through a
water trap if there is any moisture in the stack. You
usually do the analysis by GC/MS, and if there is any
condensate in the trap, you have to analyze that however you
can, usually purge and trap.
Some of the sampling trains that are available for bags
right now don't have the condenser, but most of the time,
-------
361
you need that if there is any moisture in the stack.
Moisture can't be allowed in the bag, because if any
condensation occurs in the bag, the sample is invalid.
Metal cylinders are very popular. The Summa polished
cylinders are used a lot for ambient air. Method TO-14 uses
these devices. The Summa polishing is just a passivation
technique that is applied to the cylinder. Cryo-focused
GC/MS is usually used to do the analysis, sometimes just GC.
Method 25 is an attempt to obtain a total organics
concentration. A metal cylinder is used, and then an FID
analysis is performed after running the sample through some
treatment to convert it into methane.
All of these metal cylinder techniques only really work
when you don't have high levels of acid gases like HCl and
SO2 and other corrosive agents. High moisture levels
interfere with the cryo-focusing analysis. Most of the time
canisters don't work well for stack sampling.
Sorbent tubes, you can use those same tenax tubes to do
ambient air samples from around dumpsites and off of
lagoons. The technique is very similar to Method 0030.
You heat desorb, but don't usually run through the purge and
trap procedure, because the water isn't present.
The QA/QC is very important for all of these methods,
and we are just realizing how truly important it is. We are
-------
362
being forced to tighten up on the sampling QC and on the
laboratory workups.
Because we had clean sample matrices in a lot of cases,
we would typically demonstrate that the technique worked and
then only have minimal safeguards against analysts
performing poorly in the laboratory. We have found that
that is not enough. Poor performance has occurred all too
often.
We have recent guidance on QA, especially related to
hazardous waste incineration. I will put the references to
that in the written version of the paper. We are going to
tighten many of the QC requirements and use more labeled
spikes. There is going to be less combining of subsamples
allowed, and I think where we can, we are going to have to
utilize isotope dilution.
Isotope dilution procedures are not yet developed for
stack samples, but if funding allows, we would like to work
on them. We have typically fewer things to look for in a
given sample, so it is not quite as big a cost burden as if
you are looking for 200 or 300 things at once.
Audit samples are available for a lot of these methods.
VOST audit samples are available at RTP. They have to be
requested by an EPA person. So, if you want to be audited,
ask an EPA person to call RTP and request the audit
samples.
-------
363
There is audit sample development and validation work
going on right now on the TO-14 canister method for use
around Superfund sites. There are two different groups in a
real laboratory that are doing that work. So, there will be
audit samples available.
Are there any questions?
MR. TELLIARD: If there
are no questions, thanks a lot, Larry. We appreciate it.
-------
PREPARATION AND ANALYSIS
OF
AIR EMISSION SAMPLES
-------
SAMPLING
PREPARATION
ANALYSIS
-------
SAMPLING METHODS
STACK EMISSIONS
ORGANIC
MOST FREQUENT
COMBUSTION CONNECTION
-------
WHERE?
40 CFR, PARTS 60 & 61
SW-846
OTHER SOURCES
-------
METHOD 0010
METHOD 0030
TEDLAR BAGS
METAL CYLINDERS
SORBENT TUBES
-------
METHOD 0010
MODIFIED METHOD 5
B.P. GREATER THAN 100 °C
-------
[Diagram: Modified Method 5 train, showing the probe, reverse-type pitot tube,
temperature sensor, filter holder, sorbent trap, thermometers, check valve,
recirculation pump, vacuum line, airtight pump, and dry gas meter.]
Modified Method 5 train.
-------
PROBE RINSE
FILTER
SORBENT
CONDENSATE
-------
SOLVENT EXTRACTION
-------
PROBE RINSE
PARTICULATE/SOLVENT
-------
SORBENT
FILTER
SOHXLET EXTRACTION
-------
SORBENT CONSISTENT MATRIX
PARTICULATE MAY BE INCONSISTENT
-------
EXTRACT CONCENTRATION
ANALYSIS
-------
CONDENSATE
CLEAN, PREDICTABLE
SEP. FUNNEL EXTRACTION
SECOND SOLVENT
-------
EXTRACT CONCENTRATION
ANALYSIS
-------
METHOD 0030
VOST
B.P. 30 °C TO 100 °C
RANGE EXTENDED TO 132 °C
-------
[Diagram: Volatile organic sampling train (VOST), showing the heated probe from
the stack (or test system), glass wool, particulate filter, isolation valves,
thermocouple, condenser, sorbent cartridge, condensate trap impinger, backup
sorbent cartridge, silica gel, carbon filter, vacuum indicator, pump, dry gas
meter, rotameter, and exhaust.]
Volatile organic sampling train (VOST).
-------
SORBENT TUBES
CONDENSATE
-------
SORBENT TUBES
ANALYZE BY METHOD 5040
DRAFT METHOD 5041
-------
CONDENSATE
PURGE AND TRAP
-------
TEDLAR BAGS
LOW BOILERS
ANALYSIS BY GC/MS
ANALYZE CONDENSATE
-------
385
[Schematic: Bag sample collection apparatus — exhaust from process, cooler/condenser, condensate collector, critical orifice, activated charcoal filter, pump, to ambient.]
BAG SAMPLE COLLECTION APPARATUS
-------
METAL CYLINDERS
METHOD 25
FID ANALYSIS
METHOD TO-14
CRYOFOCUS GC/MS
-------
SORBENT TUBES
SIMILAR TO METHOD 0030
HEAT DESORB
-------
QA/QC
VALIDATION LIMITED
RECENT GUIDANCE
MORE LABELED SPIKES
ISOTOPE DILUTION?
-------
AUDIT SAMPLES
-------
QUESTIONS?
-------
391
MR. TELLIARD: Our last
speaker today...which means this will also be on the
final...is going to talk about the Chesapeake Bay and the
joys thereof. Tina Fletcher from Region III and the Bay
Office is going to talk about the nutrient measurements that
the folks have been spending a great deal of time on in the
last couple of years.
-------
392
The Chesapeake Bay Program: Experience with Nutrient
Analytical Methods in the Estuarine Environment
Five years into the mainstem monitoring program for nutrient,
demand, and physical parameters, the program is involved in a re-
evaluation of the project design and implementation. This test of
our assumptions has involved an analysis of the data available for
the detection of trends in nutrient levels which have a direct
bearing on the availability of oxygen and thus the survival of our
living resources in the Chesapeake Bay waters. Concurrent with
this trend estimation is a power analysis to determine the
theoretical sensitivity of the current database with respect to its
temporal and spatial coverage and the detection limits available
from the existing field processing and analytical methodologies.
Were the initial assumptions which set us out on this first five
years of perhaps a twenty year journey correct? But I am getting
ahead of my story.
Here in Norfolk, we are close to the mouth of this, the largest and
most productive estuary in the continental U.S. The Bay, as we
presumptuously call her, stretches 195 miles along a mainstem with
7,000 miles of shoreline, 18 trillion gallons of water, and an
average depth of less than 30 feet. Formed 10,000 years ago as
melting glaciers flooded the Susquehanna River Valley, she is home
to 200 species of fish along with 2,700 other plant and animal
species, 12 million people and their hundreds of thousands of
pleasure boats. The resource is too valuable to let go and we at
-------
393
the Chesapeake Bay Program, a consensus organization of
representatives from federal, state and local governments along
with citizen groups, and universities from Maryland, Virginia,
Pennsylvania and the District of Columbia take that responsibility
very seriously.
The drainage basin for the Chesapeake spans 4,500 square miles
from the states of Maryland, Virginia, Pennsylvania, Delaware, West
Virginia, New York and New Jersey. Data are received from
environmental organizations in each of these regions. The quality
of these data must be understood so that they may be effectively
used in the management decisions.
From before 1950 through the 1970s, many state and university groups
conducted research-oriented studies, some of which operated over a
protracted period but most of which were both spatially and
temporally very limited. Between 1978 and 1983 the Chesapeake Bay
Program as coordinated by the Environmental Protection Agency was
in its research phase where it sought to characterize the Bay's
water quality, sediment quality and resources.
This period was devoted to determining what data were available
from historical sources and to identifying the issues at hand for
the Chesapeake. The infamous "Appendix F - A Monitoring and
Research Strategy to Meet Management Objectives" defined the
Chesapeake Bay Water Quality Monitoring Program Objectives as:
-------
394
Characterizing existing water quality
Baywide
Determining trends in water quality that
might develop in response to management
actions or additional sources of
pollution
Integrating the analyses of various
monitoring components with a view toward
achieving a more comprehensive
understanding of processes affecting
water quality and the linkage with living
resources
In 1984 the Chesapeake Bay Program entered the Implementation Phase
in which the designs of the Research Phase took form. Within the
committee structure of the Chesapeake Bay Program, the Monitoring
Subcommittee attempted to translate the research questions into an
EPA-funded mainstem monitoring program for physical, nutrient,
demand and biological parameters. The Mainstem Monitoring Program
was based upon the segmentation scheme developed by researchers to
encapsulate the variable portions of the Chesapeake's central trunk
and near tributaries. The MMP was designed as a 50 station network
which would be sampled 20 times per year. As with any monitoring
effort, the scope of the program was heavily linked to the
budgetary constraints. The extent to which existing laboratory
capabilities and configurations could be utilized would be pivotal
in moderating the costs of the program. Data from the tributaries
would be taken from the monitoring programs within each of the
states vastly increasing the reach and complexity of the database.
-------
395
The MMP measures more than 25 forms of nutrient, demand, physical
and biological parameters. The Mainstem Monitoring Program is not
a regulatory program and as such there is no EPA mandated set of
methods. The EPA water chemistry manual, "Methods for Chemical
Analysis of Water and Wastes" describes most of its methods as
suitable for the analysis of saline waters. However, the degree
to which this has been evaluated should not be overestimated.
There were no Agency guidelines for the selection of analytical
methods. The Chesapeake Bay Program was on its own to identify its
needs and to determine the analytical methods which would satisfy
those requirements.
With strong input from EPA and those organizations which would be
implementing the analytical work, it was decided by the Monitoring
Subcommittee that EPA water methods and procedures would be adopted
for this program. There was a sense that this would provide a
continuity with the historical database and would provide a linkage
with the tributary data which was generated by the principal state
laboratories which were driven by these same EPA methods required
under the National Pollutant Discharge Elimination System, the
workhorse of those labs at that time.
While the Chesapeake Bay Program is a consensus organization, this
consensus is not instantly achieved. A series of laboratory and
field consensus workshops were held to thrash out the program
design beyond the work done in the Monitoring Subcommittee. A
-------
396
strong objection to the analysis of these estuarine waters with
TKN as the measure of total nitrogen was made by Chris D'Elia and
the University of Maryland, representing a strong linkage with
standard oceanographic methods. Intense challenge was brought to
the analysis of the crucial particulate phase of nutrients by
difference between the total and dissolved determinations.
Detection limits were said to be unacceptably high. However, the
analytical program was in place and it was determined that
"comparability" would have to be demonstrated before a change
would be made.
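The detection-limit objection to a by-difference particulate determination follows from simple error propagation: the random errors of the total and dissolved measurements add in quadrature, so the difference is noisier than either measurement alone. The short sketch below is illustrative only; the 0.01 mg/L figure is an assumed single-determination detection limit, not a Program value.

    import math

    # Illustrative error propagation for a by-difference determination,
    # e.g., particulate phosphorus PP = TP - TDP.  Assumed values only.
    mdl_single = 0.01                 # assumed detection limit of one determination, mg/L
    sd_single = mdl_single / 3.0      # treat the MDL as roughly three standard deviations

    sd_difference = math.sqrt(2.0) * sd_single   # errors of TP and TDP add in quadrature
    mdl_difference = 3.0 * sd_difference

    print(f"Effective detection limit of the difference: {mdl_difference:.4f} mg/L")
    print(f"Inflation factor versus a single determination: {mdl_difference / mdl_single:.2f}")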
Change, the enemy of trend analysis. Nonetheless, change is
inevitable in any monitoring program. A learning curve is present
initially and then with each new analyst, each new piece of
equipment and so forth. Trend analysts may argue that consistency
is more important than "truth". However, new instrumentation is
inevitable and should be embraced. Improvement in system
efficiencies which lead to lower detection limits are always
desirable as long as the cost equation is not shifted. It is the
role of the formal comprehensive quality assurance program to
provide the documentation and management of that change so that the
needs of the data users can still be met.
-------
397
In 1986, two years into the monitoring program, special studies
outside the normal monitoring program were initiated to accommodate
the recommendations for change while maintaining a linkage to the
on-going monitoring program. These special studies addressed
recommendations that freezing be allowed as a method of
preservation and that 0.45-micron membrane filters be replaced by
0.8-micron effective pore size glass fiber filters.
Further, special studies were initiated to evaluate the
comparability between the EPA methods and those coming from the
oceanographic protocols. It was determined that the recommended
changes would benefit the goals of the Mainstem Monitoring Program
for a variety of reasons from improved detection limits, greater
analytical throughput, improved recoveries and even the lessening
of hazardous waste materials. Field analytical procedures did not
change, however, many changes were adopted in the laboratory
analytical methods.
As a reflection of the Appendix F concerns and with the strong
realization that there was no natural barrier at the tributaries,
a Baywide Quality Assurance Program was initiated in 1986. Work
was begun to bring the tributary monitoring programs in line with
changes underway in the Mainstem Monitoring Program. As a
beginning a Coordinated Split Sampling Program was designed to
identify the variability associated with the processes used to
collect, process, analyze and report the data which are stored in
the Chesapeake Bay Program database.
-------
398
The Coordinated Split Sampling Program utilizes four organizational
components to collect and distribute split samples to a varied
group of federal, state and university laboratories who routinely
produce data from the mainstem and tributaries. The attempt is to
establish the comparability between organizations so that the data
will be of known quality and apparent differences can be evaluated
for possible organizational artifacts. Triplicate samples,
spiking, duplicate samples and standard reference materials all
facilitate the evaluation of sources of variability. An
established statistical protocol is used to evaluate the data
produced in this split sample program and rapidly turn it back to
the organizations for evaluation and implementation of changes as
indicated.
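As a rough illustration of the statistics such a protocol yields, the sketch below derives field precision from triplicate aliquots, laboratory precision from replicate analyses, and accuracy as percent recovery of a spike. The numbers and names are hypothetical, not Coordinated Split Sampling Program data.

    import statistics

    def percent_rsd(values):
        # Relative standard deviation, in percent, of a set of results.
        return 100.0 * statistics.stdev(values) / statistics.mean(values)

    # Hypothetical split-sample results for one parameter (mg/L).
    triplicate_aliquots = [0.021, 0.024, 0.022]   # three aliquots of one sample -> field precision
    replicate_analyses = [0.022, 0.023]           # one aliquot analyzed twice -> lab precision
    background, spiked_result, spike_added = 0.022, 0.041, 0.020

    print(f"Field precision (RSD): {percent_rsd(triplicate_aliquots):.1f}%")
    print(f"Lab precision (RSD):   {percent_rsd(replicate_analyses):.1f}%")
    print(f"Spike recovery:        {100.0 * (spiked_result - background) / spike_added:.0f}%")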
Some of the data which have come out of the Coordinated Split
Sampling Program clearly indicate the need for change. In the
case of Total Dissolved Phosphorus, it was quite apparent that the
university laboratories who are responsible for the analysis of the
Mainstem Monitoring Program were reporting data which were nearly
an order of magnitude lower than the principal state laboratory.
This is not surprising considering the bulk of the sample load for
that laboratory is from point source and hazardous waste sampling
locations. State management used the data from the Coordinated
Split Sampling Program to leverage changes in laboratory
arrangements which would allow the production of data which were
not a function of the analyzing laboratory.
-------
399
Conversely, it has been heartening to see how tightly bunched the
data are for field and lab precision for TDP overall. Silica split
sample results are hard to separate lab to lab. Each of the other
parameters has its own characteristics and trends point to a
tightening of limits. Organizations are using these data to track
and implement changes in their field and laboratory operations.
Split sample data provide a measure of the comparability between
the different sampling and analytical organizations which have been
crucial in the evaluations of the trend. The role of the
Coordinated Split Sampling Program and the Baywide Quality
Assurance Programs can only be expected to expand as the CBP
attempts to utilize more effectively the data from myriads of
historical and on-going monitoring programs throughout the basin
as identified in the Chesapeake Bay Basin Monitoring Program Atlas.
The Living Resources Monitoring Plan seeks to forge a link between
habitat requirements of the biota and water quality. The quality
of data throughout all of these programs must be known to permit
their use in the decision making process.
The Chesapeake Bay Monitoring Program represents an evolving data
collection network and a process of providing information necessary
for management of the Chesapeake Bay's Resources. If the support
of management decisions is the reason for data collection, then a
database is only as good as its match with the needs of those
decision makers. Five years into the monitoring program an
-------
400
intense process of re-evaluation is under way to be certain that
the data produced to date are of known quality. To determine if
the monitoring program is able to detect a trend if one develops
is the task at hand. Will the existing monitoring program be able
to track the 40% reduction in nutrients by the year 2000 as
stipulated by the Governors in the Chesapeake Bay Agreement? The
users of this database need to be identified and their concerns
prioritized. Then, they must stipulate explicitly the quality of
data they require to meet the needs of the sophisticated computer
models or to satisfy the monitoring of the Living Resources Habitat
Criteria.
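One way to frame that question is as a statistical power calculation: simulate a record with the network's sampling frequency and variability, impose the hypothesized decline, and count how often a trend test would flag it. The sketch below is purely illustrative; the variability, record length, and the simple first-third versus last-third comparison are stand-ins for the Program's actual power analysis.

    import random
    import statistics

    def detects_decline(n_years=10, samples_per_year=20, total_decline=0.40,
                        mean_conc=0.05, rsd=0.30):
        # Simulate one monitoring record with a linear decline and test whether
        # the last third of the record is significantly lower than the first
        # third (a crude stand-in for a formal test such as a seasonal Kendall).
        n = n_years * samples_per_year
        series = [random.gauss(mean_conc * (1.0 - total_decline * i / (n - 1)),
                               rsd * mean_conc) for i in range(n)]
        third = n // 3
        early, late = series[:third], series[-third:]
        se_diff = statistics.pstdev(series) * (2.0 / third) ** 0.5
        return (statistics.mean(early) - statistics.mean(late)) > 2.0 * se_diff

    trials = 1000
    power = sum(detects_decline() for _ in range(trials)) / trials
    print(f"Approximate power to detect the assumed 40% decline: {power:.0%}")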
The configuration which is represented by the Mainstem Monitoring
Program in the future will be a combination of the original
perspectives of the Research Phase with clear direction from the
users of this evolving database. Analytical systems will be honed
to produce data of the quality required by the program, not just
by the goal of "how low can you go?"
What have we learned? In the area of monitoring, consensus may be
an impossible form of management but it is essential to the success
of the multi-jurisdictional nature of estuarine monitoring. The
success of a monitoring program is directly related to the clarity
of the needs of the decision makers who will use these data.
Program objectives and the specific questions must be understood
and translated into a network design. Trends resultant from
management actions will rarely be seen in two to five years.
-------
401
Natural variability in rainfall and other cycles has a much longer
period. Monitoring of this type is there for the long haul and
must have a long term stable source of funding to provide
consistent implementation of a program of the quality required.
A comprehensive quality assurance program is crucial for the
success of any monitoring program. A coordinated split sampling
program is necessary to establish the comparability between
multiple organizations involved in the monitoring effort. Data
must be statistically evaluated and returned to the participants
for action. The technical consensus of experts must be utilized
in the design and implementation of the monitoring program.
Data management cannot be ignored. A well designed, thoroughly
documented and consistently implemented data management system must
be in place to facilitate the accurate and efficient acquisition
of data and to effectively make the data available to the user
communities. Close attention must be paid to the handling of
historical data sets where the quality is often unknown.
Data analysis must not be an afterthought. Rather, it should be
leading the process. Data which sit on the shelf often are of
inappropriate quality since they have not had the benefit of
feedback from the data analyst. The strongest organizations are
clearly correlated with those operations that routinely utilize the
data generated by the Mainstem Monitoring Program for their own
purposes.
-------
402
The Chesapeake Bay Program experience with nutrient analytical
methods has been or is being repeated in some form in each of more
than a dozen estuarine programs nationwide. The isolation of the
technical communities of the Agency's estuarine programs is clear.
Regions II and III have been involved in a year long effort to
identify the analytical needs of the estuarine and marine
communities. Last week we held with support from the Office of
Marine and Estuarine Protection the first annual Estuarine and
Marine Analytical Methods Workshop in an attempt to establish a
network of technical experts in the estuarine and marine matrices.
The Estuarine and Marine Analytical Methods Committee was formed.
Analytical needs were coordinated under four workgroups who are
charged with the compilation of candidate methods, as well as the
evaluation of existing methods and SRMs. Round-robins which can
be organized within the community will be encouraged and the
interest and support on the part of the Office of Research and
Development is clear. It is an idea whose time has come as we seek
to promote each estuarine program through the experience of
the other in this most challenging matrix.
-------
403
I would like to offer special thanks to Peter Bergstrom and Rodney
Buckingham of CSC and Rich Batiuk of the Chesapeake Bay Liaison
Office for their assistance in the preparation of these materials.
OVERHEAD
-------
THE CHESAPEAKE BAY PROGRAM
Experience with Nutrient Analytical Methods
in the Estuarine Environment
-------
405
CHESAPEAKE BAY BASIN
[Map of the Chesapeake Bay basin; state labels shown include NY, PA, WV, MD, and NJ.]
-------
406
CHRONOLOGY OF THE CHESAPEAKE BAY
MONITORING PROGRAM
1950s-1970s Independent data collection efforts, often more research
oriented and short-term in nature
1978-1983 Chesapeake Bay Program Research Phase
Characterization of the Bay's water quality, sediment
quality and resources
Compilation of historical water quality data
Chesapeake Bay Synoptic Survey
Chesapeake Bay Segmentation Scheme
"Appendix F - A Monitoring and Research Strategy to
Meet Management Objectives"
-------
407
CHRONOLOGY OF THE CHESAPEAKE BAY
MONITORING PROGRAM (cont'd)
1984 - Present Chesapeake Bay Program implementation Phase
1984
1985
Monitoring Subcommittee established
Chesapeake Bay Water Quality /Biological Resource
Monitoring Program implemented in July 1984
Technical Monitoring Program Consensus Workshops
Expansion of definition of the Chesapeake Bay
Monitoring Program to include non-tidal data collection
programs
First Biennial State of the Chesapeake Bay Report
published
Increased emphasis on coordinated data management
-------
408
CHESAPEAKE BAY PROGRAM BASIN SEGMENTS
Figure 2.
[Map of the Chesapeake Bay Program segmentation scheme; segment labels shown include CB-1, ET-1, ET-2, ET-10, EE-3, TF-5, RET-2, and WT-1 through WT-5.]
-------
409
MARYLAND CHESAPEAKE BAY MAINSTEM
WATER QUALITY MONITORING STATIONS
[Map of mainstem station locations (Washington, D.C. shown for reference); scale bar in miles.]
-------
410
VIRGINIA CHESAPEAKE BAY MAINSTEM
WATER QUALITY MONITORING STATIONS
[Map of mainstem station locations; scale bar in miles.]
-------
411
MARYLAND CHESAPEAKE BAY WATER
QUALITY MONITORING PROGRAM: TRIBUTARY
CHEMICAL /PHYSICAL STATIONS
[Map of tributary chemical/physical station locations (Baltimore and Washington, D.C. shown for reference); station codes shown include MWT8.1-MWT8.3, PXT0402, XFB2470, and XFB1433; scale bar in miles.]
-------
412
VIRGINIA CHESAPEAKE BAY TIDAL TRIBUTARY
WATER QUALITY MONITORING STATIONS
[Map of tidal tributary station locations; scale bar in miles.]
-------
413
WATER QUALITY PARAMETERS MEASURED IN
TIDAL MONITORING PROGRAMS
Temperature
Dissolved Oxygen
Specific Conductivity
Salinity
pH
Secchi Depth
Total Phosphorus
Total Dissolved Phosphorus
Particulate Phosphorus
Dissolved Inorganic Phosphorus
Dissolved Organic Phosphorus
Total Nitrogen
Total Dissolved Nitrogen
Particulate Organic Nitrogen
Dissolved Inorganic Nitrogen
Dissolved Organic Nitrogen
Nitrate
Nitrite
Ammonia
Total Organic Carbon
Dissolved Organic Carbon
Particulate Organic Carbon
Dissolved Silica
Total Suspended Solids
Chlorophyll a
Phaeophytin
-------
414
CHRONOLOGY OF THE CHESAPEAKE BAY
MONITORING PROGRAM (cont'd)
1986
Comprehensive nutrient analysis and sample storage
comparison studies funded by the Monitoring
Subcommittee
Baywide Quality Assurance Program initiated
1987
Second Biennial State of the Chesapeake Bay Report
Published
Expansion of program coordination to include
fisheries/shellfish resource monitoring
1987 Chesapeake Bay Agreement signed by the states
and the EPA
1988
Sediment processes monitoring expanded to support
Time Variable Chesapeake Bay Water Quality Model
development
Baywide Coordinated Split Sample Program implemented
-------
LABORATORY ANALYTICAL METHODS
VARIABLE              EPA METHOD                        CBP METHOD
Silica, dissolved     EPA 370.1 (AA)                    EPA 370.1 (AA)
TOC                   EPA 415.1 (Infrared)              PC + DOC
DOC                   EPA 415.1 (Infrared)              EPA 415.1 (Infrared)
Particulate Carbon    TOC - DOC = PC                    High Temp. Oxidation/Combustion
TSS                   Standard Methods (Gravimetric)    Standard Methods (Gravimetric)
TKN                   EPA 351.2                         Not Analyzed
TDN                   TKN(f) + NO3 + NO2                Alk. Pers. Dig., EPA 353.2
PN                    TKN(w) - TKN(f)                   High Temp. Oxidation/Combustion
-------
LABORATORY ANALYTICAL METHODS
VARIABLE              EPA METHOD                          CBP METHOD
Ammonia               EPA 350.1 (AA), Automated Phenate   EPA 350.1 (AA), Automated Phenate
NO2 + NO3             EPA 353.2 (AA), Cad. Red./Diaz.     EPA 353.2 (AA), Cad. Red./Diaz.
NO2                   EPA 353.2 (AA)                      EPA 353.2 (AA)
TP                    EPA 365.1                           TDP + PP
TDP                   TP(w) - TP(f)                       Alk. Pers. Dig., EPA 365.4 (AA)
OrthoP                EPA 365.1 (AA)                      EPA 365.1 (AA)
PP                    TP(w) - TP(f)                       High Temp. Oxid. Dig., EPA 160.2-1
Chlorophyll "a"       Spectrophot.                        Spectrophot.
Phaeophytin "a"       Spectrophot.                        Spectrophot.
-------
417
FIELD VARIABLES AND ANALYTICAL METHODS

Variable              Analytical Method
Temperature           CTD
Depth                 CTD
Dissolved Oxygen      DO Sensor w/ Winkler Cal.
Conductivity          CTD
pH                    pH Probe
Secchi Depth          20 cm disk, after Welch
Salinity              Calculated from Conductivity
-------
Figure 6. Chesapeake Bay Coordinated Split Sample Program (Revision No. 3, 3/19/90).
[Organizational flow chart showing the Tributaries/Fall-Line component (station varies), the Potomac River component, the Mainstem/Tidal Tributaries component (CB5.3), and the Virginia Mainstem/Tributaries component, with the participating field/program agencies and analytical laboratories (SRBC, PADER, OWML, USGS Towson, USGS Denver, EPA CRL, MDE, MDDHMH, VWCB, DCLS) and a lead agency designated for each component. Key: VWCB - Field/Program Agency; DCLS - Analytical Laboratory.]
-------
419
Figure 4. Schematic of the Operational Flow of Analyses, Coordinated Split Sample Program (Revision No. 3, 3/21/90).
[Flow chart: a large vessel is split into triplicate aliquots sent to each laboratory and analyzed for routine parameters (an in-matrix estimate of field precision); replicate analyses of an aliquot provide an in-matrix estimate of lab precision; a spike sample is analyzed for percent recovery (an in-matrix estimate of accuracy); normal laboratory quality control procedures apply throughout. An EPA Standard Reference Material, prepared both as a deionized/distilled water dilution and as a matrix water dilution (saline only), is analyzed for the SRM parameter.]
-------
[Figure: Split sample results at four laboratories for Total Dissolved Phosphorus (mg/L), split aliquots sampled quarterly at station CB5.3; plotted letters show the analysis laboratory (A, B, C, D); x-axis shows sampling date.]
-------
423
COMPONENTS OF THE CHESAPEAKE BAY
MONITORING PROGRAM
Mainstem Chesapeake Bay Water Quality Monitoring Program
Maryland, Virginia and District of Columbia Tributary Water Quality
Monitoring Programs
Citizen Monitoring Program
Fall Line (River Input) Monitoring Programs
Baywide Phytoplankton and Zooplankton Monitoring Programs
Baywide Benthic Monitoring Programs
Fisheries Independent Shellfish Monitoring Programs
Fisheries Independent Beach Seine and Trawl Survey Programs
-------
424
COMPONENTS OF THE CHESAPEAKE BAY
MONITORING PROGRAM (cont'd)
Submerged Aquatic Vegetation Aerial and Ground Survey Program
Sediment Toxicant Monitoring Programs
Finfish and Shellfish Tissue Monitoring Programs
Baywide Waterfowl Survey Programs
Sediment Processes Monitoring Programs
-------
425
OTHER CHESAPEAKE BAY BASIN
MONITORING PROGRAMS
State Ambient Water Quality Monitoring Networks
State Shellfish Bacteriological Monitoring Programs
Utilities Monitoring Programs
USGS NASQAN Program
USGS Streamflow Gauging Network
State Groundwater Observation Network
Nonpoint Source Watershed Monitoring Programs
-------
426
OTHER CHESAPEAKE BAY BASIN
MONITORING PROGRAMS (cont'd)
Point Source Compliance Monitoring
Radiological Monitoring Programs
NOAA NWS Meteorological Data Networks
State Air Quality Monitoring Networks
State Acid Deposition Networks
NOAA Tidal Height Network
-------
427
CHESAPEAKE BAY LIVING RESOURCE
MONITORING PROGRAM OBJECTIVES
Document the current status of living resources and their habitats in
Chesapeake Bay
Track the abundance and distribution of living resources and the
quality of their habitats over time
Examine correlations and relationships between water quality, habitat
quality and the abundance, distribution and integrity of living resource
populations
-------
THE CHESAPEAKE BAY
MONITORING PROGRAM
An Evolving Data Collection Network
and Process Providing Information
Necessary for Management of the
Chesapeake Bay's Resources
-------
429
MONITORING PROGRAM
DESIGN AND IMPLEMENTATION
Chesapeake Bay Monitoring Program Experience:
Institutionalized the program through an effective committee/technical
workgroup structure
Planned adequately for water quality network design but not for living
resource components; monitoring of toxics still not fully addressed
Recommendations:
Establish a multi-jurisdictional monitoring committee
Clearly state the program objectives using them in developing Data
Quality Objectives and the network design
Continually seek long-term, stable funding sources
Integrate existing monitoring programs into the design of a
coordinated monitoring program
Consider future modeling needs during network design
-------
430
DATA MANAGEMENT
Chesapeake Bay Monitoring Program Experience:
Did not adequately plan up front for data management needs
* Centralized computer data base networked with other data bases
Working with numerous data submitting organizations demanded
specific data submission formats and data management
requirements
Recommendations:
Plan adequate resources for data management prior to monitoring
program implementation
Seek consensus on and require adherence to specific data
submission requirements
Clearly state objectives for data base development up front and
adhere to them when structuring the data base
Target acquisition of key historical data sets early on
Establish procedures for quality assurance of all data entered onto a
common data base
-------
431
QUALITY ASSURANCE
Chesapeake Bay Monitoring Program Experience:
More than 15 laboratories analyzing water quality samples alone
eventually contributing to the centralized computer data base
Significant effort required to ensure use of comparable sample
collection and analysis methods across component programs
Routine analysis and interpretation of QA data critical to future quality
of data
Recommendations:
Establish quality assurance as an integral part of all monitoring
program components (QAOs, audits, QA plans)
Set up a coordinated split sample program between analytical
laboratories
Seek technical consensus on sample collection and analysis
procedures in view of program objectives and implementation of
resultant consensus
-------
432
DATA ANALYSIS AND INTERPRETATION
Chesapeake Bay Monitoring Program Experience:
Insufficient resources were devoted to data analysis
* Direct connections between information resulting from the program
and management decisions were limited until adoption of the
Baywide Nutrient Reduction Strategy and setting of Baywide
restoration goals
Establishment of consensus on data analysis priorities up front and
sharing of data management and data analysis resources between
agencies was necessary
Recommendations:
Dedicate resources for analysis and interpretation of monitoring data
Establish a tiered reporting system to force routine analysis
synthesis of data targeted toward various levels of agency managers
and the public
* Create a dependence on using results from the monitoring program
for management decision-making
-------
433
ANALYTICAL METHODOLOGY
Chesapeake Bay Program Monitoring Experience:
* Adopted EPA Water and Wastewater Analytical
Methods
* Hostage to continuity with the "historical database"
* CBP special studies to establish comparability
* Some data usability compromised
Recommendations:
* Develop performance data for estuarine
and marine analytical methods through
round robin and validation studies
* Evaluate and develop SRMs
* Develop a network among technical
estuarine/marine professionals
-------
ESTUARINE
AND
MARINE
ANALYTICAL METHODS
COMMITTEE
-------
WORKGROUPS
-------
ORGANICS
NUTRIENTS
METALS
BIOLOGICS
-------
437
MR. FIELDING: Good
morning.
I have one announcement to make. EMSL-Cincinnati is
soliciting labs for two interlaboratory method validation
studies. The first is a joint EPA/AOAC effort to validate
NPS Method 6 for ethylene thiourea in water by GC/NPD. The
second is a joint ASTM/EPA study announced in the Federal
Register for Cr(VI) by ion chromatography.
If anyone is interested in participating in either of
these two method studies, information will be provided at
the table in the rear of the hall.
I hope everybody survived last evening's excursion on
the HMS Sinkfast. If you didn't, let us know.
We have several interesting papers today. We would
like to start off with a paper by Yves Tondeur of Triangle
Laboratories who will discuss the determination of semi-
volatile organic compounds in river water at the ppq level
by high resolution GC and high res mass spec.
Yves?
-------
438
MR. TONDEUR: Good morning.
Sometime last fall, the Canadian Government, which is
engaged in a study of pollutants in the Niagara River,
established a list of some 23 organic chemicals for which
methodologies capable of achieving parts per quadrillion
detection limits were necessary, and the U.S. Environmental
Protection Agency, in the spirit of international
cooperation, has offered to help and subcontracted the work
to our lab.
The objectives (Fig. 1) of the study were to develop an
analytical procedure that would be able to detect and
quantify 21 target compounds in river water sample extracts
at the parts per quadrillion level. Originally, the list
was for 23 compounds, and we had to reduce it to 21,
primarily because one of the compounds, the trichlorotoluene
could not be obtained as a standard at the time of the
study, and the other one was a series of chemicals called
toxaphene for which we didn't think we could develop a
procedure at the parts per quadrillion level within the
required 21-day turnaround.
So, during that short period of time, we were involved
in the evaluation of short- and long-term reproducibilities
of the response factors for the various analytes relative to
internal standards and also in evaluating the procedure
-------
439
on simulated matrix spike or laboratory control spike
extracts.
The method itself calls for isotope dilution mass
spectrometry, in which a group of 16 labeled standards,
mostly deuterium labeled, were used. The mass spectrometer
is operated in the selected ion monitoring mode with a
resolving power of 10,000 for selectivity.
The samples are introduced inside the system through a
capillary GC column. The samples were supposed to be
obtained from the extraction of a 40-liter sample size and
analyzed following a concentration step down to 25
microliters. In other words, a million-fold concentration,
and that was the requirement set for us.
The target analytes (Fig. 2) represent a broad range of
compounds, from acenaphthene and benzidines to a group of
halogenated aliphatics and aromatics such as chloroethylether
or bromophenylphenylether, and also chlorinated and
brominated benzenes. We also have a
series of nitrosamines (n-nitrosodimethyl, dipropyl, and
diphenylamines), DEHP and di-n-octylphthalate, along with
some phenolic compounds such as tetrachlorophenol and
dinitrophenol.
So, the very first thing we did was to prepare a set of
calibration solutions whose composition is summarized on
-------
440
Figure 3. Two types of compounds: 1) A series of 21
unlabeled analytes representing the target compounds, and
2) A group of 16 labeled internal standards.
The analyte concentrations along these five calibration
solutions vary from 25 picograms per microliter in
concentration all the way to 300 picograms per microliter,
while the concentrations of the internal standards remain
constant at either 100 picograms per microliter or 200
picograms per microliter, depending on the compound.
Now, if we consider a sample size of 40 liters and a
final extract volume (if achievable) of 25 microliters, then
the concentration domain represented by this set of
calibration solutions covers approximately the 15 ppq level
up to about 200 ppq level, which was part of the requirement
of the study.
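The arithmetic behind that mapping is simple mass bookkeeping, restated in the sketch below with the stated 40-liter sample and 25-microliter final extract (taking 1 ppq in water as 1 pg/L).

    SAMPLE_VOLUME_L = 40.0
    FINAL_EXTRACT_UL = 25.0

    def extract_to_ppq(pg_per_ul):
        # Total analyte mass in the extract, spread back over the original water volume.
        return pg_per_ul * FINAL_EXTRACT_UL / SAMPLE_VOLUME_L   # pg/L = ppq

    concentration_factor = SAMPLE_VOLUME_L * 1.0e6 / FINAL_EXTRACT_UL   # uL of water per uL of extract
    print(f"Concentration factor: {concentration_factor:,.0f}-fold")
    print(f"25 pg/uL  -> {extract_to_ppq(25):.1f} ppq")    # lowest calibration point
    print(f"300 pg/uL -> {extract_to_ppq(300):.1f} ppq")   # highest calibration point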
Since I am going to be showing quite a few tables this
morning, I will just show a few selected ion current
profiles obtained on as little as 5 picograms of selected
compounds, such as the chloroethylether, for which a 5-
picogram injection gives you a very reasonable and
respectable signal-to-noise ratio (Fig. 4). The same can be
said for the chlorinated and brominated benzenes (5
picograms of material injected) (Fig. 5).
Acenaphthene and its internal standard, d10-
acenaphthene (Fig. 4), eluting slightly before, or the
-------
441
chlorophenylphenylether with the M and M+2 responses,
the M+2 becoming kind of a noisy thing at the 5 picogram level.
Another final example is the one obtained on
tetrabromobenzene. Figure 6 represents 5-picograms of
material injected on the GC column for which a noisy signal
is obtained.
I am not saying here that all the target compounds
could be analyzed at the 5 picogram injection level. These
examples are the only ones for which a reasonable signal
could be obtained. They reveal that the methodology itself
could very well exceed the required detection limits
requested by the Canadians.
So, an aliquot from each of these calibration solutions
were then injected once and the response factors of the
various analytes calculated for each of the calibration
points. The means of those five points were calculated
along with the relative standard deviations of the means.
(Fig. 7).
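The bookkeeping behind those numbers is the usual isotope-dilution relative response factor, RRF = (A_analyte x C_IS) / (A_IS x C_analyte), averaged over the five calibration points; the same percent-difference formula reappears in the continuing-calibration checks below. The sketch uses made-up areas, not the study's data.

    import statistics

    def rrf(area_analyte, conc_is, area_is, conc_analyte):
        # Relative response factor of an analyte against its labeled internal standard.
        return (area_analyte * conc_is) / (area_is * conc_analyte)

    # Hypothetical areas over the five calibration solutions
    # (analyte at 25/50/100/200/300 pg/uL, internal standard constant at 100 pg/uL).
    cal_points = [(2100, 100, 9800, 25), (4300, 100, 10100, 50), (8400, 100, 9900, 100),
                  (17200, 100, 10000, 200), (25500, 100, 10050, 300)]

    rrfs = [rrf(*point) for point in cal_points]
    mean_rrf = statistics.mean(rrfs)
    print(f"Mean RRF = {mean_rrf:.2f}, %RSD = {100.0 * statistics.stdev(rrfs) / mean_rrf:.1f}%")

    # Continuing calibration: percent difference of a later, single-point RRF from the I-cal mean.
    daily_rrf = rrf(8700, 100, 9950, 100)
    print(f"Con-cal percent difference = {100.0 * (daily_rrf - mean_rrf) / mean_rrf:+.1f}%")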
As you can see from this summary table, with the
exception of acenaphthene, the standard deviation remained
below the 20 percent mark, and most of the time, it is even
less than 10 percent, which is quite remarkable considering
the levels at which we are working. The results are in
agreement with expectations from the use of isotope dilution
techniques.
-------
442
There are a few analytes for which we were not able to
measure a response factor. Benzidine and dinitrophenol:
Those two compounds were never seen at those levels in the
chromatogram. We suspect that the benzidine, being a basic
compound, and nitrophenol, an acidic compound, reacted
together and, therefore, make their detection very
difficult.
We also believe that at those very low concentrations,
picogram per microliter range, that the reaction is much
more noticeable. We are able to see benzidine and
nitrophenol simultaneously in the solution when their
concentration in the solution exceeded the nanogram per
microliter level.
For other compounds, such as DEHP, we were not able
to generate a response factor because of a problem with
the mass spec operator, who did not cover the right retention
time window. This was, in fact, corrected afterwards during
the matrix spike studies.
Nitrosodiphenylamine constitutes another exception.
Since we use a splitless injection technique, we observed
almost quantitative conversion of nitrosodiphenylamine into
diphenylamine.
As far as the relative retention time for these
analytes, the mean relative retention times are summarized
in Figure 8. They were determined using the internal
-------
443
standards that were used during the computation of the
response factors.
As you can see, over the 5-point calibration curve,
these standard deviations are extremely small, which makes it
evident that isotope dilution will provide pretty reliable
results as far as qualitative characterization of the
analytes in the samples.
We then looked into the continuing calibration (con-
cal), that is, after 48-hour and 4-day periods following
the initial calibration. Figure 9 contains the summary
results obtained after 48 hours.
The middle point of the calibration curve was used to
determine the relative response factors for each of the
analytes. We then compared those response factors to the
means that were obtained during the initial calibration (I-
cal) and calculated the percent difference between the
con-cal and the I-cal response factors.
As you can see, for most of the analytes for which we
could measure a response factor, the percent difference
remained within 20 percent of the I-cal, which is quite
remarkable.
Then we also looked at the response factors obtained
for these various analytes four days following the initial
calibration (Fig. 10). Four days may not be a lot, but when
-------
444
you have a 21-day period to develop and evaluate a method, 4
days is a significant amount of time.
So, again, the daily response factors calculated here
from the middle point of the calibration curve were compared
to the I-cal response factors and then the deviation from
the I-cal summarized in the last column. And with the
exception of acenaphthene, dichlorobenzidine and
diphenylamine, the deviation remained within 20 percent of
the I-cal.
Even though some of these analytes exceeded 30 percent
deviation, we believe that the initial calibration can be
used for routine analysis for at least 4 days after it is
established.
We then evaluated the overall procedure of high-
resolution GC and high resolution mass spectrometry on what
we called matrix spike analysis (a laboratory control
spike). The sample is supposed to be obtained from the
extraction of 40 liters of water, then concentrated down to
1 ml methylene chloride and sent out to the lab in vials of
1 ml. The lab is then required to add the internal
standards to that 1 ml solution before blowing it down to 25
microliters.
So, since we were not able to obtain actual river
samples, we simulated a matrix spike by taking
1 ml of dichloromethane and spiking known quantities of the
-------
445
unlabeled compounds shown here along with the internal
standards. The spike level is expressed here in parts per
quadrillion and varies from some 30 ppq to 63 ppq, and for a
couple of analytes, the concentration of the spike is 125
ppq (Fig. 11).
The second column shows the result obtained from the
triplicate set of matrix spike studies. These are the mean
and the relative standard deviation of the mean.
The last column, which is called here percent accuracy,
is in fact the recovery (amount found in the sample
relative to the amount spiked in the sample), and offers an
idea of how accurate the procedure is.
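Put another way, the accuracy figure is just spike recovery; a one-line calculation, shown here with two made-up values in the range reported:

    def percent_accuracy(found_ppq, spiked_ppq):
        # Percent accuracy (recovery): amount found relative to amount spiked.
        return 100.0 * found_ppq / spiked_ppq

    print(f"{percent_accuracy(31.0, 31.3):.0f}%")    # a well-behaved analyte, near 100 percent
    print(f"{percent_accuracy(117.0, 31.3):.0f}%")   # a strongly biased analyte, near 374 percent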
There are a few compounds, acenaphthene and the
phthalates along with tetrabromobenzene, tribromobenzene and
diphenylamine, for which these percent accuracies were very
high, exceeding what we would call an acceptable limit of
150 percent. For the remaining compounds, the percent
accuracies were close to 100 percent.
With the exception of the phthalates, for which an obvious
explanation can be offered simply because of background
contamination, we cannot come up with any reasonable
explanation for the acenaphthene observation. For the
others, one can always assume that tetrabromobenzene and
tribromobenzene are not actually measured using isotope
-------
446
dilution, and that their measurement is therefore more susceptible to
instabilities during the GC/MS run.
In conclusion, the procedure developed here of high-
resolution GC and high-resolution mass spectrometry could
definitely offer parts per quadrillion detection limits on
most of the target compounds, that is, provided that 40
liters of extract can be blown down to 25 microliters
without any form of a cleanup.
Some of the limitations we encountered were primarily
associated with chemical reactivity between some of the
components but also background contamination.
As far as the precision and accuracy of the method (as
measured on matrix spike studies for 18 analytes), the
precision was in the neighborhood of 15 percent (mean),
ranging from 3.5 to 50 percent, which is kind of remarkable
considering the low levels at which we were working.
Finally, the accuracy gained here from matrix spike
studies gave us a mean of 157 percent, ranging from 86 to
374. Of course, that mean is biased high because of a
couple of extremely high data points originating from
tetrabromobenzene and the phthalates.
Finally, we recommend strongly that: 1) Separate
analyses be done for the acidic compounds. It is not a very
good idea to mix the acid and the base neutral fractions of
the extraction; 2) the testing of actual river water
-------
447
samples be evaluated directly so that one can really
determine the applicability of the method at the parts per
quadrillion level; and, 3) finally, as we go to lower and
lower levels, special precautions will have to be taken to
minimize or control the contribution from background such as
phthalates. Otherwise, the method will not be applicable to
analytes for which we have some background contribution.
As far as the determination of nitrosodiphenylamine and
diphenylamine goes, they will have to be analyzed separately
at this time, perhaps by using other GC conditions (we found
that nitrosodiphenylamine was degrading into diphenylamine
using cold on-column injection).
Thank you for your attention, and if you have any
questions, I would be glad to answer them.
-------
448
QUESTION AND ANSWER SESSION
MR. FIELDING: Are there any questions?
MR. STANKO: George Stanko, Shell
Development Company.
I would like to point out that one of the definitions
of practical quantitation limit is that 80 percent of the
labs can get within plus or minus 40 percent of the true
value. I don't think you got there.
Thank you.
MR. FIELDING: Any other questions?
(No response.)
MR. FIELDING: Thank you, Yves.
-------
449
DETERMINATION OF SEMI-VOLATILE ORGANIC
COMPOUNDS IN RIVER WATER AT THE
PART-PER-QUADRILLION (PPQ) LEVEL BY
HIGH-RESOLUTION GAS CHROMATOGRAPHY
/
HIGH-RESOLUTION MASS SPECTROMETRY
Yves Tondeur et al.
-------
450
OBJECTIVES
1. Develop Methodology (21 target compounds,
river water sample extract, ppq level)
2. Demonstrate an I-Cal with Reproducible
Response Factors
3. Demonstrate Con-Cal with Reproducible
Response Factors
4. Demonstrate Matrix Spike Recoveries after
Spiking and Concentration Steps
-------
451
METHOD
Isotope-Dilution Mass Spectrometry
Fifteen Deuterated or Carbon-13 Labeled
Selected Ion Monitoring MS
Electron Ionization
Positive Ion Mode
Resolving Power of 10,000
for Selectivity
Capillary GC Column (60-m DB-5)
to Insure Adequate GC Resolution
Extraction to be from 40-L River Water
Concentration of the Final Extract to 25 uL
Greater than a Million-Fold Concentration
-------
452
ANALYTES of INTEREST
Acenaphthene
Benzidine
Bis(2-chloroethyl)ether
Bis(2-ethylhexyl)phthalate
4-Bromophenylphenylether
4-Chlorophenylphenylether
3,3'-Dichlorobenzidine
Di-n-Octylphthalate
N-Nitrosodimethylamine
N-Nitrosodipropylamine
N-Nitrosodiphenylamine
Diphenylamine
1,2-Diphenylhydrazine (or decomposition product azobenzene)
1,2,3,4-Tetrachlorobenzene
1,2,3,5-Tetrachlorobenzene
1,2,4,5-Tetrachlorobenzene
2,4,5-Trichlorotoluene
2,3,4,5-Tetrachlorophenol
2,4-Dinitrophenol
1,3-Dibromobenzene
1,3,5-Tribromobenzene
1,2,4,5-Tetrabromobenzene
-------
453
INITIAL CALIBRATION SOLUTIONS

Concentration (pg/uL)(a), Solutions 1 through 5

Unlabeled Analytes (each at 25, 50, 100, 200, and 300 pg/uL in
Solutions 1-5, respectively):
Acenaphthene; Benzidine; Bis(2-chloroethyl)ether;
Bis(2-ethylhexyl)phthalate; 4-Bromophenylphenylether;
4-Chlorophenylphenylether; 3,3'-Dichlorobenzidine;
Di-n-Octylphthalate; N-Nitrosodimethylamine;
N-Nitrosodipropylamine; N-Nitrosodiphenylamine; Diphenylamine;
1,2-Diphenylhydrazine; 1,2,3,4-Tetrachlorobenzene;
1,2,3,5-Tetrachlorobenzene; 1,2,4,5-Tetrachlorobenzene;
2,4,5-Trichlorotoluene; 2,3,4,5-Tetrachlorophenol;
2,4-Dinitrophenol; 1,3-Dibromobenzene; 1,3,5-Tribromobenzene;
1,2,4,5-Tetrabromobenzene

Internal Standards (labeled analogs, each held constant across all
five solutions at 100 pg/uL, or at 200 pg/uL for labeled benzidine,
3,3'-dichlorobenzidine, N-nitrosodipropylamine, and
2,4-dinitrophenol):
Acenaphthene; Benzidine; Bis(2-chloroethyl)ether;
Bis(2-ethylhexyl)phthalate; 4-Bromophenylphenylether;
4-Chlorophenylphenylether; 3,3'-Dichlorobenzidine;
Di-n-Octylphthalate; N-Nitrosodimethylamine;
N-Nitrosodipropylamine; N-Nitrosodiphenylamine; Diphenylamine;
1,2-Diphenylhydrazine; 1,2,4,5-Tetrachlorobenzene;
2,4-Dinitrophenol; 1,4-Dibromobenzene

(a) Based on a 40-L sample size and a 25-uL final extract volume,
25 pg/uL represents 15.6 ppq. (Calibration Range: 15.6 ppq to
187.5 ppq)
-------
455
[Selected ion current profiles from 5-picogram injections: bis(2-chloroethyl)ether, tetrachlorobenzenes (m/z 215), and 1,3-dibromobenzene (m/z 235.0053), acquired 10-SEP-89 in selected ion monitoring mode.]
-------
456
[Figure 6: Selected ion current profiles for 1,2,4,5-tetrabromobenzene (m/z 391 and 393) from a 5-picogram injection, acquired 10-SEP-89 in selected ion monitoring mode.]
-------
457
INITIAL CALIBRATION RESULTS

Analyte                              Mean RRF     RRF % RSD
Acenaphthene (e)                       1.69          25
Benzidine                              rj (a)        --
Bis(2-chloroethyl)ether                0.84           4
Bis(2-ethylhexyl)phthalate              (b)          --
4-Bromophenylphenylether               0.70           8
4-Chlorophenylphenylether (e)          1.31           6
3,3'-Dichlorobenzidine                 1.44          12
Di-n-Octylphthalate (e)                0.89           5
N-Nitrosodimethylamine                 0.79           4
N-Nitrosodipropylamine                 0.85           6
N-Nitrosodiphenylamine                  (c)          --
Diphenylamine (d,e)                    1.04          12
1,2-Diphenylhydrazine (e)              1.28          18
1,2,3,4-Tetrachlorobenzene             1.04           5
1,2,3,5-Tetrachlorobenzene             0.91           5
1,2,4,5-Tetrachlorobenzene             1.15           6
2,3,4,5-Tetrachlorophenol              0.04           1
2,4-Dinitrophenol                      rj (a)        --
1,3-Dibromobenzene                     0.36           2
1,3,5-Tribromobenzene                  0.11           0.3
1,2,4,5-Tetrabromobenzene              0.10           0.8

(a) rj - rejected data point (fails to meet acceptable initial
calibration criteria).
(b) The signal corresponding to DEHP was not detected due to
improper selection of retention time window.
(c) N-Nitrosodiphenylamine is quantitatively converted into
diphenylamine inside the GC injector and on the GC column.
(d) Contains a contribution from N-Nitrosodiphenylamine.
(e) Corrected for isotope impurities using equations 6-9.
-------
458
INITIAL CALIBRATION RESULTS

Analyte                              Mean RRT     RRT % RSD
Acenaphthene                           1.005         0.00
Benzidine                               --            --
Bis(2-chloroethyl)ether                1.013         0.09
Bis(2-ethylhexyl)phthalate              --            --
4-Bromophenylphenylether               1.002         0.03
4-Chlorophenylphenylether              1.002         0.00
3,3'-Dichlorobenzidine                 1.001         0.04
Di-n-Octylphthalate                    1.001         0.00
N-Nitrosodimethylamine                 1.012         0.55
N-Nitrosodipropylamine                 1.015         0.11
N-Nitrosodiphenylamine                  --            --
Diphenylamine                          1.002         0.00
1,2-Diphenylhydrazine                  1.003         0.00
1,2,3,4-Tetrachlorobenzene             1.054         0.00
1,2,3,5-Tetrachlorobenzene             0.998         0.00
1,2,4,5-Tetrachlorobenzene             1.001         0.00
2,3,4,5-Tetrachlorophenol              1.228         0.07
2,4-Dinitrophenol                       --            --
1,3-Dibromobenzene                     0.832         0.01
1,3,5-Tribromobenzene                  1.101         0.05
1,2,4,5-Tetrabromobenzene              1.034         0.01
-------
459
CONTINUING CALIBRATION RESULTS
(48 hours following the initial calibration)

Analyte                            RRF      RRF       Percent
                                   i-cal    con-cal   Difference
Acenaphthene (f)                   1.69     1.15       -21.4
Benzidine (a)                       --       --          --
Bis(2-chloroethyl)ether            0.84     0.87         3.5
Bis(2-ethylhexyl)phthalate (b)      --      5.08         --
4-Bromophenylphenylether           0.70     0.73         4.6
4-Chlorophenylphenylether (f)      1.31     1.49        14.3
3,3'-Dichlorobenzidine             1.43     1.60        11.5
Di-n-Octylphthalate (f)            0.89     0.77       -13.5
N-Nitrosodimethylamine             0.79     0.83         4.4
N-Nitrosodipropylamine             0.85     0.77        -9.6
N-Nitrosodiphenylamine (c)          --       --          --
Diphenylamine (d,f)                1.04     1.12         7.1
1,2-Diphenylhydrazine (f)          1.28     1.43        11.7
1,2,3,4-Tetrachlorobz              1.04     1.04         0.1
1,2,3,5-Tetrachlorobz              0.91     1.02        11.5
1,2,4,5-Tetrachlorobz              1.15     0.99       -14.3
2,3,4,5-Tetrachlorophenol          0.04     0.05        20.4
2,4-Dinitrophenol (a)               --       --          --
1,3-Dibromobenzene                 0.36     0.43        20.9
1,3,5-Tribromobenzene              0.11     0.12         3.7
1,2,4,5-Tetrabromobenzene          0.10(e)  0.11         6.2

(a) Benzidine and 2,4-dinitrophenol are not detected at low
levels.
(b) No response factor is available for DEHP in the initial
calibration.
(c) Thermal decomposition.
(d) Contribution from the decomposition of N-nitrosodiphenylamine.
(e) Ion-abundance ratio was outside the expected QC limit.
(f) Corrected for isotope impurities using equations 6-9.
-------
460
CONTINUING CALIBRATION RESULTS
(4 days following the initial calibration)

Analyte                            RRF      RRF       Percent
                                   i-cal    con-cal   Difference
Acenaphthene (f)                   1.69     0.93       -35.9
Benzidine (a)                       --       --          --
Bis(2-chloroethyl)ether            0.84     0.92         9.0
Bis(2-ethylhexyl)phthalate (b)      --      4.60         --
4-Bromophenylphenylether           0.70     0.77        10.6
4-Chlorophenylphenylether (f)      1.31     1.49        14.2
3,3'-Dichlorobenzidine             1.43     1.90        32.7
Di-n-Octylphthalate (f)            0.89     0.94        -4.8
N-Nitrosodimethylamine             0.79     0.76        -4.2
N-Nitrosodipropylamine             0.85     0.79        -6.5
N-Nitrosodiphenylamine (c)          --       --          --
Diphenylamine (d,f)                1.04     1.40        33.9
1,2-Diphenylhydrazine (f)          1.28     1.11       -13.0
1,2,3,4-Tetrachlorobz              1.04     1.04        -0.4
1,2,3,5-Tetrachlorobz              0.91     1.03        12.4
1,2,4,5-Tetrachlorobz              1.15     1.12        -2.9
2,3,4,5-Tetrachlorophenol          0.04     0.05        20.7
2,4-Dinitrophenol (a)               --       --          --
1,3-Dibromobenzene                 0.36     0.35        -2.1
1,3,5-Tribromobenzene              0.11     0.11         0.0
1,2,4,5-Tetrabromobenzene          0.10(e)  0.09        -6.2

(a) Benzidine and 2,4-dinitrophenol are not detected at low
levels.
(b) No response factor is available for DEHP in the initial
calibration.
(c) Thermal decomposition.
(d) Contribution from the decomposition of N-nitrosodiphenylamine.
(e) Ion-abundance ratio was outside the expected QC limit.
(f) Corrected for isotope impurities using equations 6-9.
-------
461
MATRIX SPIKE ANALYSES

Analyte                         Spike Level   Found Conc.    RSD     Accuracy
                                   (ppq)         (ppq)        %         %
Acenaphthene                       31.3          63.4        23.1      203
Benzidine                          31.3           --          --        --
Bis(2-chloroethyl)ether            31.3          31.4         3.4      100
Bis(2-ethylhexyl)phthalate         31.3         113.7         --       364
4-Bromophenylphenylether           62.5          81.8        13.8      131
4-Chlorophenylphenylether          31.3          80.2        13.4      257
3,3'-Dichlorobenzidine            125            108          7.8      86.2
Di-n-Octylphthalate                31.3          67.1        17.2      215
N-Nitrosodimethylamine             62.5          66.9        16.2      107
N-Nitrosodipropylamine             62.5          58.9        10.1      94.3
N-Nitrosodiphenylamine              --            --          --        --
Diphenylamine                      62.5         352          40.8      563
1,2-Diphenylhydrazine              31.3          29.0         9.9      92.7
1,2,3,4-Tetrachlorobenzene         31.3          40.2         7.6      129
1,2,3,5-Tetrachlorobenzene         31.3          36.5         3.9      117
1,2,4,5-Tetrachlorobenzene         31.3          34.9         4.8      112
2,3,4,5-Tetrachlorophenol         125            293         14.1      235
2,4-Dinitrophenol                   --            --          --        --
1,3-Dibromobenzene                 31.3          31.0         7.5      99.3
1,3,5-Tribromobenzene              31.3          53.0         3.6      169
1,2,4,5-Tetrabromobenzene          31.3         117          51.1      374
-------
462
CONCLUSIONS
1. PPQ Detection Limits Achievable for
Most of the Target Analytes
2. Chemical Reactivity Limitations
3. Background Contamination
4. Precision:
18 Analytes (MS Studies)
Mean: 14.3%
Range: 3.4% to 51.1%
5. Accuracy:
18 Analytes
Mean: 157%
Range: 86.2% to 374%
-------
463
RECOMMENDATIONS
1. Separate Analysis for Acidic Substances
2. Testing on Actual River Water Samples
3. Special Precautions to Control Background
Contamination
4. Different Approach Necessary for
Reporting Separately
N-Nitrosodiphenylamine
&
Diphenylamine
-------
464
MR. FIELDING: There is a
substitution next. Gordon Wallace from MIT was unable to
get here, so we are substituting a very nice paper by Larry
Keith of Radian Corporation who will talk about his sampling
and analysis methods data base.
-------
465
MR. KEITH: Thanks, Tom.
Good morning. The sample and analysis methods data
base I am going to talk about a little bit this morning
originated in Cincinnati at EPA. The overall objective is
very similar to Jim King (Viar) and Bill Telliard's (EPA)
List of Lists. In some places, the data are similar, and in
other places they are different.
Jim King and I have talked about these programs and our
conclusion is that, basically, these are complimentary
programs. They use different searching techniques, and
there are some significant differences between them. Why
are we interested in databases of methods and analytes? The
answer is because the world has become a jungle of analytes
and methods, and it is growing all the time. I can remember
back not too long ago when life was a lot simpler, and there
were relatively few pollutants that people looked for. In
the 1970s there were some lists of pesticides, and we didn't
have very many methods. We had GC with electron capture
detectors, but we didn't have mass spectrometry back in the
late 1960s and early 1970s.
However, since that time, people have become interested
in more and more different kinds of organic compounds, and,
with that interest, we environmental analytical chemists
have become craftier and smarter and developed many more
techniques.
-------
466
The result is a proliferation of methods that we have
today.
A compounding factor is that many analytes have
multiple methods that can be used for their analysis. So,
the question is, what method should be used?
There are some obvious things, of course, that method
selection will depend upon (Figure 1). First of all, one
must consider the available instrumentation. If the method
of choice uses a GC/MS method and your lab doesn't have a
GC/MS, you can't use that method. The same analogy may be
made using an I-cap method for elemental analysis. If you
only have atomic absorption spectrometers available, then
you cannot use I-cap methods.
Another important factor to consider is the
environmental matrix. A third factor to consider is whether
or not there are interferences present. Different methods
and techniques may introduce or be prone to different
interferences. So, sometimes you may need to select the
best technique for your purposes by considering what the
interferences may be in the samples.
A fourth factor to consider is that various methods
have different detection levels. When the need for the
greatest sensitivity is paramount, this may be the deciding
factor in selecting a method.
-------
467
Not very often, but sometimes, you may also run into
the problem of maximum holding time. Some of the methods
have different holding times and in these cases this may
influence method selection.
Some examples will illustrate the influences that the
above factors can have. Take, for example, 1,2-
dichlorobenzene. If you want to do the analysis by GC with
a packed column and a photoionization detector, there are at
least three different methods that use that particular
technique (Figure 2).
However, if you wish to use an electroconductivity
detector, then there are at least three other GC methods
that use packed columns with that detector.
On the other hand, if you wish to use a capillary
column instead of a packed column with an
electroconductivity detector, then you may use Method
502.2. But, if you wish to use an electron capture detector
with a capillary column, then Method 8120 is applicable.
Finally, if you wish to use a mass spectrometer for a
detector then there are at least six other methods with
packed or capillary columns.
Consider the same example influenced by its
environmental matrix (Fig. 3). If you want to analyze for
1,2-dichlorobenzene in drinking water, EPA's 500 series of
methods will apply. If you want to analyze for 1,2-
-------
468
dichlorobenzene in wastewater, then EPA's 600 series of
methods will apply. If you want to analyze for 1,2-
dichlorobenzene in sludges, soils, groundwater, and various
other matrices, then EPA's 8000 series of methods will
apply.
Each of these methods differs in required
instrumentation, sample preparation, detection levels, etc.
Next consider the problem of interferences with method
selection. If a non-selective detector (e.g. an electron
capture detector) is used then phthalate esters may
interfere with the analysis. Or taking elemental analyses
using I-cap as another example, some elements will cause
interferences with that technique that would not exist if
atomic absorption methods were used and vice versa.
Thus, knowledge of potential interferences can also be
very important in making method selections.
Next, consider the influence of detection levels on
method selection (Fig. 5). Using I-cap to analyze for total
chromium, one can select EPA Methods 200.7 and 6010 which
are essentially identical; they both have a method detection
level of 7 ug/L.
I-cap instrumentation is expensive and not all
laboratories have such instruments. However, most
laboratories do have atomic absorption (AA) instruments.
Direct aspiration AA methods have a detection level of 50
-------
469
ug/L. On the other hand, if you use a graphite furnace
technique (EPA Method 7191), then the detection level for
total chromium is 1 ug/L. In this example graphite furnace
AA provides the most sensitive detection and also a less
expensive instrumentation than the I-cap.
The above example is just one of hundreds of variations
where method detection levels may influence method
selection.
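
To make this kind of decision concrete, the selection logic can be sketched as a simple filter over published detection levels. The chromium method numbers and MDLs below are the ones quoted above; the function and variable names are invented purely for illustration and are not part of any EPA database or software.

```python
# Hypothetical sketch of detection-level-driven method selection for
# total chromium, using the MDLs quoted in the text above.
CHROMIUM_METHODS = [
    {"method": "200.7/6010", "technique": "ICP",                  "mdl_ug_per_L": 7.0},
    {"method": "7190",       "technique": "AA direct aspiration", "mdl_ug_per_L": 50.0},
    {"method": "7191",       "technique": "AA graphite furnace",  "mdl_ug_per_L": 1.0},
]

def candidate_methods(available_techniques, required_mdl_ug_per_L):
    """Return the methods a lab can actually run that are sensitive enough."""
    return [m for m in CHROMIUM_METHODS
            if m["technique"] in available_techniques
            and m["mdl_ug_per_L"] <= required_mdl_ug_per_L]

# A laboratory with only AA instruments that must report down to 5 ug/L:
print(candidate_methods({"AA direct aspiration", "AA graphite furnace"}, 5.0))
# -> only Method 7191 (graphite furnace AA) qualifies
```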
Thus, the overall problem becomes how do you find and
select the most appropriate method for sampling and analysis
without having to be an expert on all of these or without
reading through mountains of EPA reports, the Federal
Register and the literature?
Although environmental chemists may be familiar with
this detailed information, the people who have to work with
environmental chemists (i.e. engineers, regulators, etc.) do
not often have the knowledge to make the best decisions
involving sampling and analytical methods. How can we help
them?
One solution is to provide method summaries from a
sampling and analysis database that can be searched by the
criteria of interest in order to find applicable method
summaries (Fig. 6). This is essentially what we have done.
In 1987 and 1988, Bill Mueller and David Smith started
compiling a sampling and analysis database at EPA's Risk
-------
470
Reduction Engineering Lab in Cincinnati, Ohio. (Fig. 7).
They began compiling this database primarily for engineers
and contractors who worked with EPA so that the best methods
for sampling and analysis of a large variety of pollutants
in many matrices could be selected.
The records were compiled using d-Base III and, as the
database got larger and larger, the searches got slower and
slower.
At the ACS meeting in Los Angeles in September of 1988,
I suggested that we use a software program written in C
language that we had developed at Radian. It is a "free-
text" searching program that may be used to search ASCII
text files.
In October of 1988, I received an ASCII dump of the d-
Base files. In December, 1988, we did some trials with the
free-text searching program and made minor changes.
Then in March of 1989, I sent final copies of the
program to EPA and, during the next nine months, we
essentially doubled the size of the database and used a
revised format. Finally, in March of 1990, the program was
sent to Lewis Publishers for publication in the summer of
1990.
A private publishing firm was used so that the
publication would get wide distribution and be made easily
accessible to everybody.
-------
471
This publication is the first-of-its-kind, electronic
reference book (a book on diskettes) with environmental
sampling and analysis methods (Fig. 9).
Each method and analyte summary is self-standing and it
is about one page long. What I mean by self-standing is
that all of the information is there to allow you to make a
selection for a method and an analyte combination without
having to go to any other reference.
There are 150 methods and 660 method-analyte
summaries. The EPA List of Lists program has about 1700
method-analyte summaries.
One of the main differences between EPA's Sampling and
Analysis Methods Database and EPA's List of Lists is that
the former has fewer analytes but contains more information,
presented in a different format than the List of Lists. The
Sampling and Analysis Methods Database is completely menu
driven, so it is very easy to run.
The program performs "and/or" searches.
It has a hypertext "feel" to its use. It is not a
hypertext program, but it has a hypertext "feel" to it. The
menu has a highlight bar, which is moved to the analyte-
method summary of interest. When the "Enter" key is
pressed, the selected summary is instantly displayed.
-------
472
Once the files have been searched and the appropriate
summaries have been found any of them may be displayed
easily and very rapidly in any order desired.
Once a summary is displayed, you can page up and down
and browse through it. The length of each summary is about
two or three computer screens.
Any or all of the summaries may be printed either to a
disk file or to your printer.
Within the text on the screen, the key words that were
used for searching are reversed video, so they are
highlighted. This immediately locates in the text the key
words that were of interest. The highlighted keywords do not
appear any differently from the other text when printed.
Key word highlighting is only used with video screen
presentations.
The publication comes complete with an illustrated
tutorial. In addition, it has a printed manual.
The publication is provided in three volumes (Fig. 9).
Volume I covers industrial chemicals. It has three
diskettes of data, plus a systems diskette. Within Volume I,
there are files on chlorinated aliphatic volatile organics;
all the other halogenated volatile organics (e.g.,
aromatic, bromo, chloro, and fluoro compounds); non-
halogenated volatile organics; and selected semi-volatile
compounds.
-------
473
The second volume is on one diskette. It contains
method-analyte summaries of pesticides, herbicides, PCBs,
and polychlorinated dioxins and furans.
The third volume also is on one single diskette. It
contains one file with elemental methods and another file
with water quality parameters (e.g. BOD, suspended solids,
pH, etc.).
The information was grouped into three volumes because
some people may only be interested in selected groups of
analytes and, therefore, do not want or need all of the data.
Figure 10 shows the menu for the searching program. It
is very simple. When you select "Search" the program
searches the active file.
"File" is an option that allows you to move from one
file to another without going back through the menu.
When you select the option "menu", you are returned to the
main menu.
When you select the "Print" option, you may print
either to diskette files or to paper.
The "criteria" option is used to enter key words that
are to be used to search the files selectively. Five lines
of search criteria may be entered. (Fig. 11). If
criteria are placed on each line, they are additive during a
search. Thus, if "sludge" is on line 1 and "water" is
placed on line 2, the program will only find methods that
-------
474
have both sludge and water as key words somewhere in the
summary. In this example, both sludge and water would have
to be present for a method summary to be selected.
As a second example, if you want to do "or" searches,
where one or another key word is in a summary (for example,
aromatic or aliphatic), then the piping character (|) is used to
separate those key words on the same line. Thus, if
"aromatic|aliphatic" was placed on line 1, "water" on line 2
and "sludge" on line 3, then the searching program would
only find those method summaries that had the words
"aromatic" or "aliphatic" and "water" and "sludge" in their
text.
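
The AND-across-lines, OR-within-a-line behavior described above can be illustrated with a short sketch. This is not the publisher's C searching program; the function names and the in-memory list of summaries are assumptions made only to show the logic.

```python
# Hypothetical sketch of the search criteria logic described in the talk:
# criteria on separate lines are ANDed together, terms joined by "|" on a
# single line are ORed, and matching is a case-insensitive substring test
# (so a partial word such as "vol" matches "volatile", "volume", etc.).
def matches(summary_text, criteria_lines):
    text = summary_text.lower()
    for line in criteria_lines:                                # AND across lines
        alternatives = [term.strip().lower() for term in line.split("|")]
        if not any(term in text for term in alternatives):     # OR within a line
            return False
    return True

def search(summaries, criteria_lines):
    """Return every summary that satisfies all of the criteria lines."""
    return [s for s in summaries if matches(s, criteria_lines)]

# Example from the talk: "aromatic|aliphatic" AND "water" AND "sludge"
hits = search(["... aromatic solvent ... water ... sludge ...",
               "... aliphatic ... groundwater only ..."],
              ["aromatic|aliphatic", "water", "sludge"])
# hits -> only the first summary is returned
```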
Each summary contains the primary name of a chemical,
its CAS number and the applicable EPA method number (Fig.
12). It also contains a brief description of the EPA
method; a short one-paragraph description of the method. In
addition, each summary contains the applicable matrices that
the method may be used to analyze. A very important section
of each summary includes common interferences and solutions
for removing those interferences if they are known.
Another part of each summary lists the instrumentation
required: not just the major instrumentation, but,
very importantly with respect to the organics, the gas
chromatographic columns that are needed.
-------
475
The sections involving quality control include the
published concentration range and the published method
detection level (MDL) for the method analyte combination.
In addition, practical quantitation limit factors are listed
when those are available for the RCRA methods.
The method-analyte precision and accuracy data are also
included. Another part of the QC information summarizes
sampling and preservation instructions, including the maximum
holding time (M.H.T.). Quality control requirements for
performing the method are also summarized in this section.
The final information presented lists the EPA reference source.
An example of the data is illustrated with Aroclor 1242
using Method 8080. This is shown in Figures 13-15.
There are also some limitations with this publication
(Fig. 16). First, not all of EPA's method-analyte summaries
are included yet. Specialized analytes (e.g.,
organophosphorus pesticides and many of the semi-volatile
compounds with GC/MS methods) are not yet summarized.
These may be added later if there is sufficient
interest in this publication format.
Second, the publication is not available in the
Macintosh format. However, people with no computers or who
have Macintosh computers will be able to get a printed
version by the end of 1990. Of course, that can't be
searched.
-------
476
Lastly, you can't perform the analyses using only the
summaries. There is not sufficient detailed information
included in the summaries to perform the analyses, but that
is not the purpose of this publication. Its purpose is to
enable one to make an informed decision as to which EPA
method to select for any given situation.
-------
477
QUESTION AND ANSWER SESSION
MR. FIELDING: Does
anybody have any questions?
MR. PRONGER: Greg
Pronger from National Environmental Testing.
I was wondering if this program or the List of Lists
programs allows the user to append compounds or methods to
it.
DR. KEITH: I will have
to let Jim King from Viar Corporation answer with respect to
the List of Lists. Jim, will you answer that?
MR. KING: The List of
Lists in the form that we use actually allows us to enter
the information. A number of EPA program offices also can
enter information if they have programming capabilities with
System J language. However, the average person could not
add information to this program.
DR. KEITH: With EPA'S
Sampling and Analysis Methods Database, you would be able to
add compounds or methods to it because, essentially, it is
in two pieces. The searching program is a run-time program
that can't be changed, but that searching program goes out
and loads an ASCII file into the computer's memory. One can
easily append information to the ASCII files so, yes, it
would be possible.
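
Because the summaries are plain ASCII records that the run-time program reads into memory, adding a record amounts to appending text to the data file. A minimal sketch follows; the file name and record layout shown are assumptions, not the product's actual format.

```python
# Hypothetical sketch: append a new method-analyte summary to one of the
# ASCII data files so the searching program finds it on its next run.
# The file name and field layout here are invented for illustration.
NEW_RECORD = """
PRIMARY NAME: Example Analyte                       Method 9999
MATRIX: Groundwater
APPLICATION: One-paragraph description of the method goes here.
"""

with open("volume1_semivolatiles.txt", "a", encoding="ascii") as data_file:
    data_file.write(NEW_RECORD)
```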
-------
478
MR. PRONGER: Thank you.
DR. KEITH: Any other
questions? Yes?
MR. LEWIS: Mike Lewis.
How much is the software?
DR. KEITH: Well, I hate to
get into costs in a technical meeting. Let me just say that
it is...
MR. LEWIS: Free?
DR. KEITH: No, it is not
free, of course, because it is produced by a commercial
publisher, so they hope to make a profit on it. One never
knows whether they will or not. Volume 1, the industrial
chemicals, will be in the same range of cost as the List of
Lists will be. It will be under $100. The other volumes
will be even cheaper than that.
I brought some information sheets, and for those of you
who may be interested further in this publication, I will
put the information sheets back on the back table. If you
are interested, you can pick them up, and they will give you
more details.
Any other questions? Yes?
MR. GILLENWATERS: Bill
Gillenwaters, Newport News Shipbuilding.
-------
479
Any thought about including the spectra to be tagged in with
it when you call the compound?
DR. KEITH: No, there is
so much trouble to do it with words, that spectra would just
be beyond the intended scope. I will never put spectra in
it. It is just too damn much trouble and not worth it.
MR. NEIN: John Nein from
the Naval Supply Center.
Have you considered correlating any non-EPA methods
such as ASTM, USGS standard methods or AOAC into this
database, or do you feel that would just add to the
confusion?
DR. KEITH: Yes.
Actually, I have thought about doing that, and, again, it
depends on how much interest there is in this type of thing.
Of course, this data base only has EPA methods in it,
because it was compiled by EPA chemists in Cincinnati. So,
that is what their interest was.
But there are many other methods, the ASTM methods and
Standard Methods, etc., all of which could be quite readily,
with a slight (well, massive) amount of work, be added to this
database.
So, I have thought about it. Really, this publication
is an experiment. The List of Lists is also an experiment,
-------
480
and the key is how helpful these databases will be to the
technical community. Are people really going to find these
databases to be useful? If so, then we are on the right
track, and, of course, we can add other data to them. If
people don't need these types of database summaries, then
there is no point in adding to them in the future.
Looking at it from my own point of view, since I don't
know the details of all these methods, I find it to be
pretty useful. In fact, let me digress and tell you the
first time I ever used EPA's Sampling and Analysis Methods
Database.
A lady called me to inquire about inorganic methods. I
am an organic chemist; therefore, I don't know the details
of the many inorganic methods. She told me that she had a
laboratory...and it wasn't Radian Corporation...perform some
analyses for chromium, and the results were inconsistent.
So, I had my computer on, and I quickly accessed the
EPA inorganic method summaries. I searched all the chromium
methods while I was talking to her. Of course, she didn't
know that I had this information available at my fingertips.
I asked what method the lab used and she told me. I quickly
displayed the summary of that method and I observed it was a
method for direct aspiration atomic absorption for chromium.
However, when I asked about the type of matrix, the samples
were in, she told me that they were solid samples.
-------
481
The interferences section of the summary warns that
high solids content can cause significant interferences.
So, I told her that the lab used the wrong method.
When I searched for chromium and for solids, the
computer program found a different method that used graphite
furnace atomic absorption. Thus, the lab did use atomic
absorption spectroscopy, but it didn't use a graphite
furnace. It used the direct aspiration technique and
suffered from interferences.
The client was appreciative of this information and
said they were going to resample and, this time, send them
to Radian. So I said okay, that sounds good.
(Laughter.)
So, the client thought that I knew whereof I was
speaking about elemental analyses, and, as I have told you,
being an organic chemist, I didn't know anything about it.
But that was the first time that I personally used EPA's
Sampling and Analysis Methods Database to try to help
someone find correct methodology.
Yes?
MR. RICE: Jim Rice.
I have a question. I noticed that you stated that all
of the methods contained in the database were EPA methods
since that was the original objective.
DR. KEITH: Yes.
-------
482
MR. RICE: My question
involves information that you show for method detection
limit and for precision and accuracy and performance, in
effect, of the methods in various matrices. Have you
limited yourself to just using the information furnished by
EPA, or have you gone further than that to other major study
results?
DR. KEITH: No, the data has
been limited to just the information that was published in
the EPA methods.
MR. PRONGER: If a user would
buy one of the early versions, is there going to be some way
for him to get an update if there would be further updates
without having to rebuy the entire package?
DR. KEITH: I really don't
know, but let me philosophize. The publisher publishes
books, and their concept is, instead of treating this
publication like a software program and selling it for, you know,
a lot of money, to instead treat it like a book and make it
cheap so that everybody will buy it.
Therefore, I suspect that like when a new edition of a
book comes out, you rebuy the book, right? And I suspect
that that is probably what they will have to do because the
price is so low. That is what they will probably have to do
to recover their advertising and production costs.
-------
483
So, my guess is that if there are updates, they will
probably be not at the full price but not free either,
probably at some reduced price in between. But that is just
a guess. That would be the logical way to do it if I were
the publisher.
Okay, thank you very much.
MR. FIELDING: Thank you,
Larry.
-------
484
SUGGESTED READING
1. Johnson, L.D., "Detecting Waste Combustion Emissions,"
Environmental Science and Technology, 20, 223 (1986).
2. Johnson, L.D., James, R.H., "Sampling and Analysis of
Hazardous Wastes" in Standard Handbook for Hazardous
Waste Treatment and Disposal, H.M. Freeman, Editor,
McGraw Hill, New York, New York, 1988.
3. Test Methods for Evaluating Solid Waste, Physical/
Chemical Methods. SW-846 Manual, 3rd ed. Document No.
955-001-0000001. Available from Superintendent of
Documents, U.S. Government Printing Office, Washington,
DC, November 1986.
4. J.H. Margeson, J.E. Knoll, M.R. Midgett, D.E. Wagoner, J.
Rice and J.B. Homolya, An evaluation of the semi-VOST
method for determining emissions from hazardous waste
incinerators, J. Air Pollut. Control Assoc., 37(9)
(1987) 1067.
5. R.G. Fuerst, T.J. Logan, M.R. Midgett and J. Prohaska,
Validation studies of the protocol for the volatile
organic sampling train, J. Air Pollut. Control Assoc.,
37 (4)(1987) 388.
6. Johnson, L.D., "Trial Burns: Methods Perspective,"
Journal of Hazardous Materials, 22, 143, (1989).
7. "Handbook, Hazardous Waste Incineration Measurements
Guidance Manual", EPA-625 / 16-891021, June 1989.
8. "Handbook, Quality Assurance/Quality Control (QA/QC)
Procedures for Hazardous Waste Incineration, EPA-
625/6-89023, January 1990.
-------
Selection of best method depends on
Analytical instrumentation;
Environmental matrix;
Interferences present;
Detection levels; and
Maximum holding time.
-------
Analytical Instrumentation
Example: 1,2-dichlorobenzene
GC, packed column, PID
- Methods 503.1, 602, 8020
GC, packed column, HECD
- Methods 502.1, 601, 8010
GC, capillary column, HECD
- Method 502.2
GC, packed column, ECD
- Method 8120
GC/MS, packed and capillary columns
- Methods 524.1, 524.2, 624, 625, 1624, 1625
-------
Matrix
Example: 1,2-dichlorobenzene
Drinking water
- Methods 502.1, 502.2, 503.1, 524.1, 524.2
Waste waters
- Methods 601, 602, 624, 1624, 625, 1625
Soils, sludges, groundwater, wastes
- Methods 8010, 8020, 8120
Methods differ in sample preparation,
instrumentation, detection limits, holding times, etc.
-------
Interferences Present
Example: PCBs, pesticides, chlorinated organics
Methods using nonspecific detectors have
problems when phthalate esters, other oxygen-
and sulfur-containing compounds are present.
- Method 8120, GC with ECD
-------
Detection Levels
Example: total chromium
ICP
- Methods 200.7 and 6010 : MDL = 7 ug/L
AA direct aspiration
- Method 7190 : MDL = 50 ug/L
AA graphite furnace
- Method 7191 : MDL = 1 ug/L
-------
Problem: How to find and select most appropriate
method for sampling and analysis
without being an expert or reading
massive volumes of information
scattered throughout Federal Register,
EPA reports, and the literature.
Solution: A Sampling and Analysis Database in a
convenient widely accessible format.
-------
Background
1987-1988: William Mueller and David Smith
           (EPA-Cincinnati RREL) compile S & A Data Base.
Oct. 1988: Larry Keith obtains ASCII dump.
Nov. 1988: Revise presentation format.
Dec. 1988: Trial with "free text" searching.
Mar. 1989: Provide EPA with copies.
Dec. 1989: Double size of database in new format.
Mar. 1990: New version sent to Lewis Publishers.
Jul. 1990: Sampling and Analysis Methods Database published.
-------
Features
First of its kind electronic reference book on
sampling and analysis methods;
Each method/analyte summary "self standing";
150 methods;
650 method/analyte summaries;
Menu-driven program;
Performs "and/or" searches of multiple key
words;
Hypertext "feel" to program use;
Page up/down browsing of each summary;
Print any or all summaries to file or paper;
Highlights the searched text in each summary;
Tutorial with illustrations on disk; and
Printed manual.
-------
Volume I - Industrial Chemicals (3 diskettes)
Chlorinated Aliphatic Volatile Organics
Other Halogenated Volatile Organics
Nonhalogenated Volatile Organics
Semivolatile Organics
Volume II - Pesticides, Herbicides, PCBs, Dioxins
and Furans (1 diskette)
Volume III - Elements and Water Quality Parameters
(1 diskette)
-------
Menu Selections
Search
Criteria
File
Print
Menu
-------
You can search for five different criteria by entering in the 5 lines
below.
The criteria placed on each line are additive - sludge on line 1 and
water on line 2 will only find entries that have BOTH sludge AND
water in them.
To search for two or more criteria separate them on a line by PIPING
(|). Thus, aromatic|aliphatic on a line will find EITHER aromatic OR
aliphatic.
Partial words may be searched: vol will find volatile, volume, volt, etc.
Press Function Key F10 when finished; then select SEARCH from
the main menu.
1.
2.
3.
4.
5.
-------
Each one page summary contains:
Primary name and CAS number
EPA Method No. and its description
Applicable matrices
Interferences and solutions (if known)
Instrumentation required
Concentration range and MDL
Practical quantitation limit factors
Precision and accuracy
Sampling and preservation instructions
Maximum holding time
Quality control requirements
EPA reference source
-------
PRIMARY NAME: Aroclor 1242 (PCB-1242) Method 8080
TITLE: Organochlorine Pesticides & PCBs
MATRIX: Groundwater, soils, sludges, water miscible liquid wastes,
and non-water miscible wastes
CAS #: 53469-21-9
APPLICATION: This method is used for the analysis of 19 pesticides
and 7 Aroclor (PCB) mixtures. Samples are extracted,
concentrated and analyzed using direct injection of
both neat and diluted organic liquid into a gas
chromatograph (GC).
INTERFERENCES: Solvents, reagents and glassware may introduce
artifacts. Other interferences may come from
coextracted compounds from samples.
Phthalate esters are common interferences
when using an electron capture detector (ECD)
so all plastics must be strictly avoided.
Exhaustive cleanup of reagents and glassware
may be required to eliminate phthalate
contamination. Use of a halogen specific
microcoulometric or electrolytic conductivity
detector will eliminate phthalate interference.
-------
INSTRUMENTATION: GC capable of on-column injections and an
ECD or a halogen specific detector (HSD). Column 1: 1.8 meter by
4 mm with 1.5% SP-2250 / 1.95% SP-2401 on Supelcoport.
Column 2: 1.8 meter by 4 mm with 3% OV-1 on Supelcoport.
RANGE: 8.5 to 400 ug/L MDL: 0.065 ug/L (in reagent water)
PRACTICAL QUANTITATION LIMIT FACTORS FOR MULTIPLYING
TIMES THE MDL VALUE:
Multiplication
Matrix Factor
Groundwater 10
Low-level soil by sonication with GPC cleanup 670
High-level soil and sludge by sonication 10,000
Non-water miscible waste 100,000
PRECISION: 0.21X + 1.52 ug/L (overall precision)
ACCURACY: 0.93C + 0.70 ug/L (as recovery)
-------
SAMPLING METHOD: Use 8 oz. widemouth glass bottles with Teflon
lined caps for concentrated waste samples, soils, sediments and
sludges. Use 1 or 2 1/2 gallon amber glass bottles with Teflon
lined caps for liquid (water) samples.
STABILITY: Cool soil, sediment, sludge and liquid samples to
4 deg. C. If residual chlorine is present in liquid samples add 3 mL of
10% sodium thiosulfate per gallon of sample and cool to 4 deg. C.
M.H.T. = 14 days for concentrated waste, soil, sediment or sludge.
M.H.T. = 7 days for liquid samples. All extracts must be analyzed
within 40 days.
QUALITY CONTROL: A quality control check sample concentrate
containing each analyte of interest is required. The QC check
sample concentrate may be prepared from pure standard materials
or purchased as certified solutions. Use appropriate trip, matrix,
control site, method, reagent and solvent blanks. Internal, surrogate
and five concentration level calibration standards are used. The
quality control check sample concentrate should contain Aroclor
1242 at 50 ug/mL in acetone.
REFERENCE: Method 8080, SW-846, 3rd ed., Nov 1986.
-------
Limitations:
"Specialized" analytes and many semivolatiles
using GC/MS methods are not included yet. May
be added if user interest is sufficient.
Not available with Macintosh formats. Printed
version, Compendium of EPA's Sampling and
Analysis Methods will help people with no
computers or only Macintosh computers, but it
can't be searched.
Can't perform analyses from summary
information - only select the best method.
-------
501
MR. FIELDING: Let's take
about 15 minutes for some coffee. Can we get back at about
10:20?
(WHEREUPON, a brief recess was taken.)
MR. FIELDING: Before we
start, I would like to remind people if they have a
question, give your name and the company name so the young
ladies on the side there can get it correctly. "Voice from
the audience" doesn't look quite as good as it might, and
you don't get credit for your comments.
Our next speaker is Bruce Colby of Pacific Analytical
who will talk about microextraction isotope dilution GC/MS
determination of volatile organic compounds.
Bruce?
-------
502
MR. COLBY: Good morning.
Since we are running about half an hour behind
schedule, I am going to stretch my talk out so that we can
catch up. I knew that would excite you all.
I am going to talk this morning about a technique that
has been around for a long, long time. It was used, at
least, I recall people using it probably about 10 years ago
when the organic chemicals industry was being screened by
the Effluent Guidelines Division at the time. It is called
microextraction, and we, I guess, kind of resurrected it
recently to help solve a problem that we encountered where
no other solution seemed viable.
The problem we encountered was a series of samples that
contained relatively high levels of some ketones, relatively
high being in the 1 percent or just under 1 percent type
level, and we were looking for the volatile organic species
in the samples. If we tried purge and trap on these, the
ketones would absolutely wipe out any chance of making a
reasonable determination of the other targets.
The options we felt we had with respect to dealing with
this problem were to approach the situation by two possible
routes. One was direct injection of the aqueous sample, and
the other was to resurrect this microextraction technology.
Well, we decided before we were to strike off on things
that we should establish a few goals for ourselves. They
-------
503
are pretty standard sorts of goals for what, in effect,
would be a method development effort, not a big one, just a
little one.
We have to determine these volatile organics in the
presence of what I call semi-purgable organics, fairly polar
but nevertheless volatile species. The ketones fall in that
category. Phenol, we know, falls in that category. If you
have a sample with a tremendous amount of phenol in it, it
will get in the purge and trap device and cause trouble for
days to come.
And we wanted to have a system that was sufficiently
precise, accurate, and with enough sensitivity that it would
be close to the purge and trap technology.
Direct injection, we pretty much decided, wouldn't
provide us with enough sensitivity. Purge and trap allows
one to put the equivalent of a 4 or a 5 ml water sample in
the instrument. The water does not go in but all of the
volatile organics in the water presumably do.
Clearly, you can't inject 5 ml. A few microliters is
more like it. So, this is roughly a factor of 1000 in loss
of sensitivity with direct injection.
Microextraction falls somewhere in between, and it
provides a couple of interesting things. One is that it
provides a mechanism to generate a little bit of additional
chemical selectivity. By properly selecting the solvents,
-------
504
we can discriminate against things that are more polar and
keep them out of our extracting solvent and leave them in
the water. That was the big key for what we were up to.
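
A rough, back-of-the-envelope way to see where microextraction falls, using the volumes described later in the talk (35 mL of water extracted into 1 mL of hexadecane, 1 uL of extract injected). The calculation below is only an illustration of this argument, not part of the published method.

```python
# Rough, illustrative comparison of the "equivalent water volume on column"
# for the three approaches discussed (volumes taken from the talk).
purge_and_trap_uL = 5_000      # roughly 5 mL of water effectively purged
direct_injection_uL = 5        # only a few microliters injected directly

sample_mL, solvent_mL, injected_uL = 35.0, 1.0, 1.0
# Assuming the analyte partitions essentially completely into hexadecane,
# 1 uL of extract represents (35 mL / 1 mL) = 35 uL of the original water.
microextraction_uL = injected_uL * (sample_mL / solvent_mL)

print(purge_and_trap_uL / direct_injection_uL)   # ~1000x, as stated above
print(purge_and_trap_uL / microextraction_uL)    # ~140x: in between
```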
The other thing is it is relatively simple, but when
you are trying to solve a problem that is tough, even if you
have to go at it a tough way, that seems okay.
The problem with microextraction historically has been
that it is not really as sensitive as purge and trap and it
doesn't really have the accuracy and precision that you
expect to get with purge and trap or direct injection, for
that matter.
The sensitivity problem we thought we had a decent shot
at by using one of the newer GC/MS instruments that is on
the market. They seem to have gotten much more sensitive in
the last year or so. We used a VG Trio 1 on this job. I
suspect that there are several other instruments out there
that could do the job just as well.
Our older equipment, we know, cannot do the job. So,
it really is confined to the newer GC/MS instruments.
We thought we could beat the old problems of the
accuracy and precision that are associated with
microextraction by going to isotope dilution. The problems
with accuracy and precision, incidentally, are associated
with the fact that there can be some pretty significant
matrix effects. We heard something about matrix effects
-------
505
yesterday, and isotope dilution does help us get around
these to a reasonable extent.
The method is outlined in my first slide. We take
35 ml of the water sample, add the labeled compounds to it.
We are using a 40 ml VOA vial, we call them.
We then add 1 ml of hexadecane, cap the vial, shake it
up, let the layers separate, pull off 1 ul of the
hexadecane, shoot it onto a 30 m DB-624 megabore column that
is directly coupled to the GC/MS ion source through a
restrictor, use a reasonable sort of temperature program,
acquire full scan mass spec data.
Hexadecane elutes after all of the target analytes.
So, at that point, we shut down the filament and let the
solvent through, getting the targets out before the solvent
rather than after it which is the more conventional way.
The target analytes are identified and quantified,
basically, according to the methodology described in 1624.
Well, a blank looks something like this which I guess,
might look a little bit horrifying, but these peaks are
actually very, very small, as you will see in the next slide
or two.
We see the air come through. Principally, it seems to
be argon, the CO2 staying behind. This blank is a water
sample that we know to be clean extracted with the
hexadecane and just shot.
-------
506
We see some silicon compounds. The silicon compounds
are not reproducible. We suspect they probably are
associated with the analyst not properly cleaning or
handling the teflon caps on the vials. We don't know that
for a fact, but that is what we suspect, and we haven't been
back to verify it. They don't cause a problem in the actual
analytical work, so we haven't made a big issue of it.
There appears to be a little bit of hexane in the
hexadecane as well.
A chromatogram for a standard. This is the high point
on the calibration curve, 10 ng/ul injection. It gives us
some pretty nice looking peaks. I cut the chromatogram off
just about at the point where we shut the instrument down
and the hexadecane starts to come out.
Low concentration standard. Now you can just start to
see the silicon compounds in there. So, you can see that
even though they looked really huge in the previous one,
when we get down to injecting 100 picogram per analyte, that
is still a pretty small quantity of that silicon compound.
Looking for targets in the sample, we use a software
that comes with the instrument. This is looking at a 1 ng
spike into a water sample. Here you are looking at 1,2-
dichloropropane. Just picked one at random.
-------
507
The top trace is the total ion chromatogram. You see a
little bit of a blip in that. Again, it is only 1 ng, so
quite a small quantity.
Quantitation mass area is shown in the center drawing.
The bottom one I won't bother with.
So, the targeting that we used was all automatic. We
didn't go back and edit any of the data. We did take a look
at it, but whatever was there we accepted, because we just
didn't have enough time to fool around with it, basically.
Spectra, again, looking at that dichloropropane that we
had before. It is an unbackground subtracted spectrum or
raw data spectrum on top. You have a lot of peaks from
background within the equipment.
If you do a background subtraction, you get the center
kind of a spectrum. It certainly has all the major peaks.
It even picks up the minor isotope peaks. There is a set at
97 and one at 112. There are a few other peaks that show up
as well.
The bottom spectrum is a spectrum from the NBS library.
So, there seems to be plenty of sensitivity for what we
are up to.
What we do now is calibrate the instrument at a very
low level compared with conventional GC/MS technologies. We
are starting at a high level of 10 ng/ul and going down to
100 picograms per microliter using a 5 point calibration.
-------
508
That is roughly equivalent to the Method 525 sort of
calibration system. I think we are going to get a little
bit more information on that from another speaker later on.
Basically, it is a drinking water method. Here, we are
using the kind of instrument sensitivity you would use for
that drinking water method, but we are applying it to
volatile analytes in a wastewater; the key is the
sensitivity.
The calibration compounds are spiked into a water
sample. They are prepared in methanol, then extracted from
the water sample with the hexadecane, and we do our
calibration of the target analytes, again, by isotope
dilution, and the labeled compounds we do by a mean response
factor calculation.
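
As a hedged sketch of the isotope dilution bookkeeping referred to here, the quantitation comes down to ratioing the native compound's response against its labeled analog, so extraction losses cancel because both isotopes are lost together. This is the general Method 1624-style idea, not this laboratory's actual software, and the function and variable names are assumptions for illustration only.

```python
# Illustrative isotope dilution quantitation (general Method 1624-style idea).
# The labeled analog is spiked before extraction, so any recovery loss
# affects native and labeled compound alike and cancels in the ratio.

def response_factor(area_native, conc_native, area_labeled, conc_labeled):
    """Response factor from a calibration standard carried through the extraction."""
    return (area_native * conc_labeled) / (area_labeled * conc_native)

def quantitate(area_native, area_labeled, spike_conc_labeled, rf):
    """Concentration of the native analyte in the sample extract."""
    return (area_native / area_labeled) * spike_conc_labeled / rf

# Calibration point: 1 ng/uL native and 1 ng/uL labeled analog
rf = response_factor(area_native=9.0e5, conc_native=1.0,
                     area_labeled=1.0e6, conc_labeled=1.0)

# Sample with the same labeled spike; extraction only 50% efficient for both
conc = quantitate(area_native=4.5e5, area_labeled=5.0e5,
                  spike_conc_labeled=1.0, rf=rf)
print(conc)   # -> 1.0 ng/uL, unaffected by the 50% recovery
```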
Target analyte calibration curves were generally very
good. What I have plotted here for about...I think it is 23
compounds... is the correlation coefficient that we get for
that 5-point calibration curve.
The two on the far right actually were correlation
coefficients of 1.0000. I don't know what the fifth decimal
place would have shown, but that was pretty impressive.
We have a couple that don't look quite as good on the
far left. Those turn out to be tetrachloromethane and
toluene. I went back and checked those, and it turns out
that each of those had a bad measurement, and we just left
-------
509
it in the data because we were taking the philosophy we
would do it all automated.
So, we have a couple of calibration curves that are a
little bit suspect. They, nevertheless, are pretty good
calibration curves.
We wanted to look at calibration a little bit more
carefully, because we are pushing down into a very low
concentration regime with this method. So, we also look at
the data with respect to higher order regression fits to the
line. This is an allowable part of the Method 525
technology.
In order to compare the quality of calibration curves
in these higher order fits, we can't really use correlation
coefficient. We have to go to some other statistic to
evaluate the quality of the curve.
What we use is a statistic called the mean residual.
Basically, what we do is take the calibration data and
generate a regression line of some type through the
calibration data, in this case, a linear fit to the data.
We then plug those calibration data back into that
regression line and calculate concentrations.
Now, we know the concentrations, because they are our
standards. We made them up, and we are now plugging the
data back into the calibration curve, and we get some
-------
510
concentrations that are slightly different than the true
values.
It is the difference between the true value and the
determined value as having plugged it back into the curve
that is the residual, and what I have plotted here is that
correlation coefficient from the previous slide on top
versus the mean residual.
The mean residual also has the advantage that it can be
expressed in terms of a measurable quantity, in this case,
ng/ul.
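
A minimal sketch of the mean residual statistic as it is described here: fit the calibration line, back-calculate the standards, and average the absolute differences in concentration units. The implementation details and the example responses are assumptions, but the logic follows the description.

```python
import numpy as np

def mean_residual(true_conc, response):
    """Fit a linear calibration, back-calculate the standards, and return
    the mean absolute difference in concentration units (e.g., ng/uL)."""
    slope, intercept = np.polyfit(true_conc, response, deg=1)
    back_calculated = (np.asarray(response) - intercept) / slope
    return float(np.mean(np.abs(back_calculated - np.asarray(true_conc))))

# Five-point curve spanning the range described (0.1 to 10 ng/uL);
# the instrument responses below are invented for the example.
true = [0.1, 0.5, 1.0, 5.0, 10.0]
resp = [0.09, 0.52, 1.05, 4.90, 10.10]
print(mean_residual(true, resp))   # mean residual, in ng/uL
```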
For the isotope dilution measured compounds, we have a
mean residual of approximately 0.128 ng/uL. So, our low end
of the calibration curve, on the average, is probably going
to be roughly plus or minus 100 percent. So, we are pushing
it kind of hard down there.
Now, that average includes those two bad actors up
there, and if we take those two out, then the mean residual
is quite a bit nicer.
Well, we looked at a second order calibration curve in
addition to the linear one, and we got what I would call a
pretty substantial improvement in the statistic known as the
mean residual. You can see that on here. Essentially, all
of the second order fit residuals were lower than the linear
regression residuals.
-------
511
It is tempting, in that sense, to want to use a second
order calibration curve, but we will look at some
implications of that later on that may change your minds.
It turns out that we had a couple of compounds in there
that we calibrated by internal standard because we didn't
have the labeled analogs for them. I just throw them up
here because we did make the measurements on them, and we do
have some information on them. The mean residual on the
internal standard compounds is really fairly similar to that
of the linear regression isotope dilution values with the
exception that they literally had to be done by a second
order fit, and I suspect that has something to do with
matrix effects associated with spiking of methanol into the
samples and the quantities of solvents used.
We also had to quantitate the labeled analogs in order
to get some kind of an evaluation of recovery. So, I have
again plotted the mean residual for the labeled analog
measurements here. Again, these were done by mean response
factor, and the mean residual on this is on the order of
about 0.13 ng/ul.
In a quick summary on the calibration, isotope dilution
looked pretty good by linear regression. Internal standard
data looks like it is going to have to be done by second
order regression, a second order fit. And labeled analogs, as they
are in there at a constant amount and we only want...our
-------
512
interest in them was fairly basic and single point
calibration. Our mean response factor was all we were after
there.
The next thing after the instrument is calibrated is to
back off and say okay...I almost hate to bring up the
term...what are the method detection limits? In this
calculation, we used the equation that is in the Federal
Register, basically, the student's t value times the
standard deviation. So, it is roughly three times the
standard deviation of...I guess we used eight replicate
measurements, and these were prepared and measured
separately.
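
For reference, a small sketch of that Federal Register style MDL calculation: the one-sided Student's t value at the 99 percent level for n - 1 degrees of freedom, multiplied by the standard deviation of the replicate results (about 3 for eight replicates). The replicate values below are invented purely to show the arithmetic.

```python
import statistics
from scipy import stats

def method_detection_limit(replicate_results, confidence=0.99):
    """MDL = t(n-1, one-sided 99%) * standard deviation of the replicates."""
    n = len(replicate_results)
    t_value = stats.t.ppf(confidence, df=n - 1)   # ~2.998 for n = 8
    return t_value * statistics.stdev(replicate_results)

# Eight invented spiked-replicate results (ug/L), just to show the math
replicates = [5.1, 6.8, 4.9, 7.2, 6.0, 5.5, 7.9, 6.3]
print(method_detection_limit(replicates))   # roughly 3 times the std. dev.
```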
When we look at these, now we can see the difference
between linear regression and second order calibrations. It
is pretty clear that there is not much difference between
the two, and if you try to sort out where the lines go on
the screen, it is just about next to impossible, but you can
only see what, in effect, is one set of data.
So, even though the calibration curve looked better
with the second order fit, when we start applying it to
actual measurements of spiked samples, we don't see that
improvement. I don't think that we are at a stage where we
can definitely say that a second order fit isn't needed, but
the implication is at this point that a linear fit will
probably be okay.
-------
513
The few compounds we did by internal standard. The
mean detection limits there were about roughly a ballpark
factor of 2 to 3 higher than they were by isotope dilution.
And we could calculate a detection limit for the
labeled analogs. I don't know quite what it means, but at
least it gives you some sense for the reproducibility of the
data at the level at which they were spiked. Again, they
are roughly the same as the internal standard measured
numbers.
The method detection limits I broke out in a summary
slide here with the gases and the non-gases separately. It
turns out that the compounds that are generally considered
gases, chloromethane, vinylchloride, and so forth, they were
on the method detection limit slides, those were the
compounds on the left. They are harder to handle. They are
less reproducible, and using this method detection limit
calculation, they then have a higher detection limit.
It appears that the method detection limit for most of
the volatile compounds, i.e., the non-gas compounds, is
about 6.4 ug/1 according to the calculation. The gases are
substantially worse. I think we probably could improve that
some by fine-tuning our sample handling a little bit and
spiking. For what we were involved in, they were not
terribly relevant, however.
-------
514
The internal standard and labeled analog values, again,
are very similar to one another.
I put the standard deviation up there so you could see
how much variation there was involved in the mean method
detection limits that I put up there. The isotope dilution
values are reasonably tightly packed around that 6.4 value.
So, most of them were behaving quite well.
We also looked at the recoveries of the spiked
compounds. Keep in mind now that isotope dilution has an
inherent recovery correction associated with it. We did the
isotope dilution measured compounds using both linear and
second order polynomial fits on the calibration curve. As
we saw with the detection limit numbers, the way we
calibrate for isotope dilution doesn't appear to be very
significant when it comes to measuring values in the
samples.
The percent recoveries we would expect to average out
around 100 percent for the non-gases. They were about 109
percent using a linear fit and about 108 percent using a
second order fit. So, there is effectively no real
difference.
For the gases, they weren't quite as good. There
appeared to be a positive bias that may have been associated
with the order in which the compounds were spiked into the
sample. I am not sure.
-------
515
Internal standard values, again, as we are spiking our
standards into a water and extracting them out, there is an
inherent recovery correction going on here, and, again, we
would expect to see the recoveries showing up at around 100
percent. There appear to be some biases in this, three
compounds being quite high and three quite low, but I don't
really personally believe there is enough data there to say
much. Certainly, there is a lot more spread in these
recoveries than there was in the isotope dilution values.
The labeled analogs, like the internal standard
measured compounds, had recoveries over a fairly wide range
from roughly 50 to 250, but what we are interested in the
labeled analogs is can we get a decent signal from it, and
if we can, we are generally quite happy. If we can't detect
the labeled analog, then we are pretty darn sure we can't
detect the naturally abundant compound and we know we have a
problem with the analysis.
So, we are basically at this point interested in seeing
the labeled analog, not so much concerned about whether we
got 47 or 82 percent recovery.
Percent recoveries, in summary, we are looking at
isotope dilution non-gases again, about 109 percent, pretty
tightly grouped. Standard deviation of 7 percent.
-------
516
Isotope dilution, gases, again, there appears to be
some positive bias, but there is quite a bit of variation in
that, a 16 percent standard deviation.
Internal standards, even more variation.
Interestingly, quite similar to the labeled analogs and some
bias with the labeled analogs, again, possibly having to do
with the order in which things are spiked in.
In summary, we have a method that for most of the
volatile organic compounds that are not highly polar, we
have pretty decent detection limits with this technology
using the new GC/MS instrumentation that is available to us.
For the gaseous compounds, not quite as good. We know they
are tougher to handle.
Recoveries are in the vicinity of 100 percent. So, it
starts to look like a fairly viable technique for dealing
with some kinds of samples. We believe the technology could
be pushed into applications where other kinds of complex or
interference situations are an issue, and we expect to be
trying a little bit with the base neutral and acid fractions
in the future.
I might add that the base neutral and acid fractions
may have an additional benefit, and that is that by working
with relatively small samples and, in particular, smaller
quantities of solvents, we are addressing a pollution
reduction problem which is, for some of us in the analytical
-------
517
community, becoming more and more of an issue, the
chlorinated solvents. We know where they go, and the
regulators are figuring it out, and I think EPA is probably
coming under a little pressure on all this, too.
That takes me to the end of the data that I had. I
think I am doing a little bit in helping catch up, but I
would be more than happy to try to answer a few questions.
-------
518
QUESTION AND ANSWER SESSION
MR. FIELDING: Does anybody
have any questions?
MR. MCCARTY: Harry McCarty,
Viar and Company.
Bruce, presumably, your calibration data, since you are
spiking your standards into the reagent water, you are
spiking varying volumes to get the different concentrations,
did you look at making up an equivalent volume of methanol
so that all of them end up with the same amount of methanol?
Because that may explain some of the problems.
We have done a couple of studies, and it seems to help
some of the calibration data. I would assume with isotope
dilution, it would just make it even better.
MR. COLBY: I don't think it
would affect the isotope dilution data at all. I believe it
would probably have some effect on the internal standard
compounds, but they were not of real interest to us in this.
They were in there just because they happened to be there,
and we looked at them as a consequence, but yes, I am quite
sure you are right. That would be an improvement.
MR. MCCARTY: It is a
relatively simple thing, you know, when people are doing it
in practice, and it may make a significant difference there.
MR. COLBY: Quite true.
-------
519
Anybody else awake?
(No response.)
MR. COLBY: If not, okay, thank you.
MR. FIELDING: Thank you, Bruce.
-------
520
MICROEXTRACTION ISOTOPE DILUTION GC/MS
DETERMINATION OF VOLATILE ORGANIC
COMPOUNDS
by
Dante J. Bencivengo and Bruce N. Colby
Pacific Analytical
and
James S. Smith
Trillium, Inc.
-------
521
EXPERIMENTAL GOALS
Determine volatile organics in the presence of large
quantities of semi-purgable organics.
Provide detection limits equivalent to purge & trap
GC/MS.
Provide accuracy and precision equivalent to purge &
trap GC/MS.
-------
522
MICROEXTRACTION
Advantages:
Chemical selectivity
Simplicity
Disadvantages:
Sensitivity
Accuracy
Reproducibility
-------
523
METHOD SUMMARY
Spike 35 mL sample with Method 1624 labeled com-
pounds and internal standards.
Add 1 mL hexadecane, shake and wait for layers to
separate.
Inject 1 uL onto a 30 m DB-624 Mega-bore column
directly coupled to the GC/MS ion source through a 1
m 0.25 mm restrictor.
Program from 40 to 180 °C at 8 °C/min with initial and
final holds of 3 and 20 min respectively.
Acquire full scan data until just before hexadecane is
eluted (filament turned off while solvent is eluted).
Identify targets and quantitate via isotope dilution.
-------
BLANK SAMPLE CHROMATOGRAM
-------
STANDARD CHROMATOGRAM (10 ng/uL)
-------
STANDARD CHROMATOGRAM (0.1 ng/uL)
-------
TYPICAL TARGETING (1 ng/uL) - 1,2-Dichloropropane
-------
TYPICAL MASS SPECTRUM AND LIB SEARCH - 1,2-dichloropropane (NBS library match)
-------
529
INSTRUMENT CALIBRATION
Calibrate at 0.1, 0.5, 1, 5 and 10 ng/uL.
Prepare working standards in methanol.
Spike into water and extract into hexadecane.
Calibrate Labeled Analogs using mean response fac-
tors.
Calibrate Target Analytes by isotope dilution.
-------
Correlation Coefficient, by compound (target analyte calibration curves)
-------
ISOTOPE DILUTION CALIBRATION - Correlation Coefficient and Mean Residual, by compound
-------
ISOTOPE DILUTION CALIBRATION - Linear Regression, by compound
-------
INTERNAL STANDARD CALIBRATION (2nd Order Fit), by compound
-------
LABELED ANALOG CALIBRATION, by compound
-------
535
CALIBRATION SUMMARY

Category             Calibration                Mean Residual
Isotope Dilution     Linear Regression          0.128 ng/uL
Internal Standard    Second Order Regression    0.172 ng/uL
Labeled Analogs      Mean Response Factor       0.129 ng/uL
-------
ISOTOPE DILUTION DETECTION LIMITS - Linear Regression and 2nd Order Fit, by compound
-------
INTERNAL STANDARD DETECTION LIMITS, by compound
-------
LABELED ANALOG DETECTION LIMITS, by compound
-------
539
METHOD DETECTION LIMITS SUMMARY

Category                        MDL (ug/L)    SD
Isotope Dilution, non-gases     6.4           2.7
Isotope Dilution, gases         38.4          30.9
Internal Standard               12.1          13.2
Labeled Analogs                 12.0          10.2
-------
ISOTOPE DILUTION RECOVERIES, by compound
-------
INTERNAL STANDARD RECOVERIES, by compound
-------
LABELED ANALOG RECOVERIES, by compound
-------
543
PERCENT RECOVERY SUMMARY

Category                        %R     SD
Isotope Dilution, non-gases     109    7
Isotope Dilution, gases         119    16
Internal Standard               96     44
Labeled Analogs                 118    44
-------
544
METHOD SUMMARY

Category                        MDL (ug/L)    %R
Isotope Dilution, non-gases     6.4           109
Isotope Dilution, gases         38.4          119
Internal Standard               12.1          96
Labeled Analogs                 12.0          118
-------
545
MR. FIELDING: Our next
speaker will be Mr. Jim Eichelberger of EMSL-Cincinnati who
will discuss the liquid-solid extraction for determination
of acid herbicides in drinking water.
-------
546
MR. EICHELBERGER: Good
morning.
Actually, what I am going to try to do today is give
you a status report of two projects we have going on in the
laboratory in Cincinnati, and one of them is the subject on
the agenda. The second is a project we have going on to
develop a cleanup procedure for sludge extracts.
The reason they are status reports is because neither
one of them is completed yet, and we hope to have them
completed later this year.
The first project I would like to talk about is a
drinking water project that is an attempt to use disk
technology, solid phase disk technology, for the extraction
of the chlorinated acids, the herbicides, from groundwater
and finished drinking water. This project is being
conducted by Jimmie Hodgeson in our laboratory in EMSL-
Cincinnati, and he couldn't be here.
For those of you who would like some more details on
this work, you might want to take down his phone number. I
know he will be happy to talk to you.
Actually, what we are trying to do is develop a
procedure that is better than what we have on the books
right now for doing chlorinated herbicides. That procedure
is Method 515.1 which is currently in the drinking water
methods manual that was published in December of 1988.
-------
547
A summary of that method is the following: it uses a 1
liter sample in which the analyst adjusts the pH to 12 with
sodium hydroxide and shakes it for an hour to hydrolyze the
derivatives. Then he does a methylene chloride extraction,
which is a solvent wash, to remove all possible
interferences, then readjusts the pH to 2 with sulfuric
acid. He then does the conventional three serial
extractions with ethyl ether, concentrates that extract, and
does a solvent exchange to methyl tertiary butylether in
methanol. Methylation is achieved with diazomethane, and
the separation and determination are done by capillary
column GC with an electron capture detector.
Folks in the lab tell me that this procedure for a set
of five to seven samples takes approximately eight to ten
hours.
So, what we have is a fairly cumbersome method that
uses a liquid-liquid extraction with an undesirable solvent,
and is time consuming. So, we are trying to eliminate some
of the undesirable features of Method 515.1.
This is the set of analytes for 515.1, and we are using
that same set in our study to come up with a better method.
You have your garden variety phenoxyacetic acids, some
phenols, and some other types of compounds, all being
herbicides.
-------
548
Next, the disk technology. The disks are manufactured
by the 3-M Corporation. Currently, there are only two types
of disks available commercially, and they contain the C8 and
the C18 adsorbents.
This is one of the sets of equipment that you can use
to employ this technology. I don't think it is a good idea
to grab the disk with your fingers, but you could put it on
with tweezers and seal the funnel to keep the apparatus
from leaking, and the entire thing sits on a suction flask
to which you apply a certain amount of vacuum, and put your
samples through the disk. It's a fairly simple,
straightforward technology.
I showed this slide last year, but for those of you who
weren't here, this is a picture of one of those disks. This
is a C18 disk, if I am not mistaken, shown from the side.
You are looking at the thickness of it.
What you see are the silica particles coated with the
C18 and interspersed in a teflon matrix. I don't know what
the degree of magnification is, but it is a whopper. I like
to show this slide.
This is another picture of a disk from the side showing
the adsorptive capacity. We used Disperse Dye #1 as a
surrogate. It demonstrates the disk technology and how
compounds are going to behave. The red layer there is
adsorbed right at the very top of the disk. So, you can see
-------
549
the concentration right there on the top. It doesn't
penetrate down into the disk far at all.
So, after some experimentation...and bear in mind that
this isn't final yet, this is where we were about three
weeks ago...we have decided to use a 100 ml sample. We tried
liter samples, but we saw some breakthrough. So, the ideal,
and probably the volume that we will decide on, is between a
liter and 100 ml. I think it is going to be around 0.5
liter.
You adjust the pH of the sample to 2, but before you
put the sample onto the disk, you have to pretreat the disk.
This is standard operating procedure for solid phase
extractions.
First, you elute the disk with 20 ml methyl tertiary
butylether. That is going to be the final eluting solvent.
So, the attempt here is to remove any interferences that
possibly could end up in the final elution.
Then you pull a little air through it for a few minutes
and put some methanol through which is supposed to activate
the disk, and then introduce some reagent water before
putting the sample through.
Right now, they are using a throughput time of 5
minutes for 100 ml. The 3-M guys tell us that they
routinely put 100 ml a minute through the disk and find no
detrimental effects using that quick throughput.
-------
550
Then you air dry the disk after the sample has passed
through. That really doesn't remove all the residual water,
but it removes a great portion of it. Then you elute with
the eluting solvent which is 10 percent methanol in methyl
tertiary butylether.
You dry that over a little sodium sulfate just to
remove the residual water, and methylate with gaseous
diazomethane. The separation is done on a GC capillary
column, and the determination with an electron capture
detector.
Now, the sample prep time here...by the way, that line
is supposed to be at the bottom of the slide, but my Harvard
Graphics expertise isn't good.
The sample prep time for the same size set of samples
is now cut to two hours. So, we think we have a fairly
simple, straightforward procedure with little concentration
and no solvent exchange. You methylate in the eluting
solvent, and it looks as though we have a simple,
straightforward procedure.
This is a little graph I put together on some results
testing the C8 and the C18 disks. The third bar on the right
of each group will be discussed in a second or two. You can
see the C18, in general, seems to be a better matrix for the
adsorption. The C8 gave a little bit lower recoveries for
the compounds on the x-axis than the C18 did.
-------
551
We noticed when we were doing the initial work that
when we used metal funnels, the old millipore type systems
that we had around the lab for a long time, and some of the
metal had peeled off, we would get reduced recoveries, and
we would find a little green precipitate in the bottom of
the eluate.
It must be that the low pH is reacting with those metal
funnels. So, in our final report, we are going to recommend
that glass funnels and glass filter holders be used, and not
the metal which will eliminate that problem.
Again, we tried the old salting out trick. We used a
15 percent ammonium sulfate added to each 100 ml, and we
studied the compounds on the x-axis at 5 ug/1. That is not
very focused, but the lighter bars on the left of each group
are the salted results, and the grey or striped or whatever
they are on the right of each group are the unsalted.
You will notice, except for one compound, dichlorprop,
that the unsalted results are always higher. In solid phase
extraction work, that is common, and the only thing I can
think of is the compounds are salted out of the water before
they get to the disk. They are plated out on the glassware
and whatever else they come in contact with. So, salting
out in solid phase technology just doesn't work.
This is a list of some comparisons of recoveries and
relative standard deviations of some of the compounds that
-------
552
are analytes in Method 515.1. The column on the left of the
recovery section is from the disks, and on the right is from
liquid-liquid extraction.
You can see there are still a few problems. Bentazon
still is lower on the disk. Jimmie assures me that these
aren't insurmountable problems and we will overcome all
this.
In some cases, we are better off with the disk than we
are with liquid-liquid extraction. For the DCPA, we have
come quite a ways, and dalapon is another problem with the
disk. So, we are working on those problems right now, and
we should have them resolved when the final report is
published.
In the relative standard deviation portion of that
slide, the disks look much, much better than the liquid-
liquid extraction. So, it looks like the disks will work
for these compounds. The jury is still out on a few of
them, but I think we can resolve the problems.
I know there are some of you out there interested in
looking at alternate forms of methylation. We have been
looking for the last year or two, at some of the different
methods, and they are listed here.
I have data today on the one that looked most
successful out of those four. The boron trifluoride-methanol
-------
553
is just not an efficient methylating agent for the number of
compounds that we have in the method.
Sulfuric acid-methanol looked promising, and I will
show you some results of that in a minute.
The bottom two we tried, and the results were pretty
miserable. So, we really haven't come up with anything
great in those three areas.
Now, the sulfuric acid...and, by the way, this was done
by taking a small amount of methanol to which a 5 percent by
volume sulfuric acid solution had been added, spiking in
these compounds, and heating the solution to 70 degrees C
for two hours. Then the solution was diluted with reagent
water, and a microextraction was done with methyl tertiary
butylether.
For these compounds, this is the good news. Everything
worked really well here. The phenoxy acids seemed to
esterify real well with this form of methylation, sometimes
better than the diazomethane. This is the bad news. The
phenols are terrible, and some of the other compounds just
don't methylate under those conditions.
So, we still haven't found a suitable replacement for
diazomethane for all the compounds on the list.
This project should be completed sometime this summer.
Jimmie is going to write up the results. We will publish it
in one of the journals, and we will put a method together
-------
554
that should be available to the Office of Drinking Water
maybe in the fall.
The second topic I wanted to talk about a little bit is
a cleanup procedure that we have been working on for sludge
extracts. Back in September of last year, we delivered a
method to the Office of Water for extracting priority
pollutants from municipal sludges, mostly digested sludges
and filter cakes.
The method uses 200 ml of sample. We were trying to
come up with an extraction procedure that would allow us to
handle highly contaminated sludge samples so we didn't have
to dilute them, and end up with method detection limits in
the parts per million range.
So, our method consisted of taking 200 ml of sample or
a 10 g equivalent of dry solids, centrifuging that sample,
separating the liquid from the solids, and doing a
continuous liquid-liquid extraction on the liquid and a
sonication sample preparation on the solids.
The sonication consisted of three steps, first using
methanol followed by 1:1 methanol:methylene chloride, and
the third was the methylene chloride step.
Then the methanol was separated from the solution. All
the analytes were contained in the methylene chloride, and
we tried to concentrate that down to 10 ml.
-------
555
Well, what we did was create a bit of a monster. We
created a procedure that extracted the analytes more
efficiently, but it also extracted all the unwanted gunk
that is found in a sludge sample just as efficiently. So, we
immediately realized that we had a problem and that we
needed an efficient cleanup procedure if this extraction
procedure were ever going to work. So, we did some work
with silica gel adsorption, gel permeation chromatography
(GPC), and solid phase cartridges.
The two that showed the most promise were the silica
gel and the GPC. Since those techniques were already being
used in analytical laboratories that were doing sludge
samples, we pursued those two approaches.
We tried to determine if varying the deactivation state
of the silica gel could improve the efficiency of cleanup of
sludge extracts. We varied the deactivation state up to 30
percent water, and found that 10 percent deactivation was
about the optimum condition for silica gel cleanup of sludge
extracts.
This slide just demonstrates on three different sludges
the amount of cleanup we can get with silica gel. The three
sludges were collected in Cincinnati on three different days
to show you the variability in the sludges from one source.
The second column shows the mg/mL of residue contained
in the sludges. After the silica gel cleanup, which
-------
556
consisted of 65 g of 10 percent deactivated silica gel
eluted with 425 mL of methylene chloride, you can see the
percent cleanup in the column on the far right.
The percent cleanup is the difference between column 3
and column 2 divided by column 2. So, in the case of sludge
A, it was almost not worth the effort. In sludge C, it did
a fairly decent cleanup job.
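To make that arithmetic concrete, here is a minimal sketch in
Python of the percent cleanup calculation, using the sludge A
values from the slide (the function name is ours, not part of
the method):

    def percent_cleanup(crude_mg_per_mL, clean_mg_per_mL):
        # Percent cleanup is the reduction in residue relative to
        # the crude extract: (crude - clean) / crude x 100
        return (crude_mg_per_mL - clean_mg_per_mL) / crude_mg_per_mL * 100.0

    # Sludge A from the slide: 64.4 mg/mL crude, 59.9 mg/mL after cleanup
    print(round(percent_cleanup(64.4, 59.9)))   # about 7 percent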
So, we figured that, at this point, we had a technique
that could possibly, when used with another technique, give
us a decent cleanup.
We tried coupling this with gel permeation
chromatography. We studied it with the silica gel process
prior to the GPC, and the GPC prior to the silica gel.
Results showed no difference. It makes no difference which
one you do first. You end up with the same amount of total
cleanup, about 30 percent, which wasn't enough to give us
the ability to concentrate the sludge extract to get decent
method detection limits.
Then we studied a number of different bio-beads. We
had done all the original work with SX-3. We thought maybe
SX-2 or SX-4 would be more efficient in the GPC process. It
turns out that no matter how we studied it, silica gel first
or second, the bio-beads gave us no improvement in overall
recovery.
-------
557
By the way, the GPC that we had been using up to now
was done with all methylene chloride as the mobile phase.
Due to the nonpolar nature of the types of materials
that we wanted to remove from the sludge extract, we thought
that it might be advantageous to add a less polar solvent.
So, we experimented with different amounts of normal
hexane in the mobile phase in the methylene chloride, and
this slide shows you what we found. The bar to the left of
each group is the gel permeation chromatography alone, and
the shaded bar is the silica gel plus the GPC result.
As we increased the hexane in the methylene chloride,
our cleanup was increasing dramatically. If we added 90
percent hexane to methylene chloride, we were getting almost
100 percent removal of all the gunk in the sludge extract.
However, that presented problems.
In the GPC, when you add that much hexane, you have
memories in the column, and you have to rinse the column
before putting the next sample on, and this is too time
consuming. So, we had to back off, and the work that we are
doing right now is with hexane in methylene chloride at
about 65 to 75 percent.
We put a standard set of analytes together, and
evaluated the method...which is typical in a method
development...backwards. We looked at the analytical
determinative step first, GC/mass spec. Then we looked at
-------
558
how efficiently we were K-D'ing all the samples, the GPC
recoveries, and the silica gel/GPC recoveries. Then we did
the entire procedure with a fortified sludge extract.
That extract, before we cleaned it up, contained almost
78 mg/mL of residue. After the silica gel and the GPC, it
was down to 22.9 mg/mL, which is a 71 percent cleanup.
This enabled us to concentrate our extracts down to 1
ml with no trouble. The silica gel step that we used here
was 60 g of 10 percent deactivated silica gel, and we eluted
with 425 ml of methylene chloride. The GPC was done with
the SX-3 bio-beads, and eluted with 75 percent hexane:25
percent methylene chloride.
These are some of the results we got for what we felt
would be a representative batch of analytes. We were
pleasantly surprised with the amount of recovery that we got
for most compounds.
There is a problem with isophorone and another with
benzyl alcohol. It turns out the problem is coming from the
silica gel step. We think we can solve that problem. So,
presently we are evaluating this procedure on a large number
of analytes and with a number of different types of sludges.
We hope to have this method completed by late summer, and
available for anybody who would like to try it.
There is one question which might come to mind. What
happens to the GPC calibration when you add this much
-------
559
hexane? Well, this picture here is a typical UV trace of
what happens when you have pure methylene chloride as the
mobile phase.
The analytical envelope starts right before
dioctylphthalate and ends right before sulfur. When you
put 40 percent hexane in the methylene chloride, it doesn't
change at all. It stays exactly the same. The sulfur is
still a useful peak.
But when you increase the amount of hexane to 60
percent hexane and 40 percent methylene chloride, the
analytic envelope spreads out, and the sulfur is
incorporated into the extract. We haven't found any
problems with this so far. The sludges that we have been
looking at haven't had much of a sulfur background.
However, it could be a problem, and we are working on that.
So, both of those products should be done by fall,
optimistically, late summer and will be available in
publications and methods.
I would be glad to answer any questions anybody may
have.
-------
560
QUESTION AND ANSWER SESSION
DR. WILLIAMS: Dan Williams,
Kennesaw State College.
Have you looked at alternate techniques such as
something like silylation as opposed to methylation, something
like hexamethyldisilazane or whatever?
MR. EICHELBERGER: I think
Jimmie has been doing some work on that, but we don't have
any results yet. Yes, that is a good point.
MR. LEWIS: Do you see any
changes in your recovery between the difference in your
detection source when you GPC versus RI or fluorescence
detection, or did you limit it to UV?
MR. EICHELBERGER: I think all
they used was UV. I never was really involved in either one
of these projects, but I think they used UV solely.
MR. FALLICK: I am Gary
Fallick from Waters.
A question for you on the first procedure. Have you
also looked at conventional SPE cartridges as an alternative
to the disks?
MR. EICHELBERGER: We haven't
yet, but it is a good point. We are going to do that.
DR. ARMSTRONG: I am David
Armstrong from Southern Research Institute.
-------
561
Have you looked at any other solvents but hexane? Let
me give you a little more information.
When I was at S-CUBED, we did a study on hazardous
wastes using butylchloride. Are you familiar with that? I
don't know if you remember any of that work. It turned out
we were using about...I think it was 50 percent
butylchloride, and we were getting much better recoveries in
GPC of the analytes out of hazardous wastes. So, that is an
option you might consider.
MR. EICHELBERGER: How
about your ability to remove the extraneous interfering
materials?
DR. ARMSTRONG: It worked
pretty well. Again, this isn't exactly the matrix you are
looking at, but it seemed to work pretty well.
MR. EICHELBERGER: That
is a good point, too. This procedure might just work for
sludges. Hopefully, it will work for all sludges, but at
this point, we don't know. It is tough to try out all
sludges.
MR. YOCKLOVICH: I am
Steve Yocklovich from Burlington Research.
Concerning 515, there was a noticeable salt effect, and
I imagine there is a big effect when there is a large amount
-------
562
of some of the analytes. Will that be clearly stated in the
interferences section of the method?
The reason I ask that is regulators in our State are
starting to use waste water methods for groundwaters and
everything.
MR. EICHELBERGER: I never
thought about that. When you are in the methods production
mode, you just can't do everything that everybody expects,
but, yes, we could put that in there. I mean, we could
recommend no salting out, because it is detrimental to our
recoveries.
MR. YOCKLOVICH: Well, salting
out and also, you know, just a clear statement in the
interferences that the people that are writing regulations
can see that it is inappropriate in certain situations.
MR. EICHELBERGER: Well, you
see, at this point, this technology has not been proven for
anything but waters with no particulate matter in them. And
if somebody wants to try to use this technology for samples
other than drinking water, they might run into big problems.
So, I don't think States are going to be able to do that.
They are not going to be able to make you use a method that
won't work.
Of course, I don't know. States make you do a lot of
things.
-------
563
(Laughter.)
MR. FIELDING: Are there
any other questions?
(No response.)
MR. FIELDING: Thank you,
Jim.
-------
EXTRACTION OF CHLORINATED ACIDS FROM
GROUND WATER AND FINISHED DRINKING WATER
USING DISK TECHNOLOGY
Researcher: Dr. Jimmie Hodgeson
Environmental Monitoring Systems Laboratory
Cincinnati
(513) 569-7311
-------
LIST OF ANALYTES
Acifluorfen
Bentazon
Chloramben
2,4-D
2,4-DB
Dalapon
Dicamba
3,5-Dichlorobenzoic acid
Dichlorprop
DCPA-AM
Dinoseb
5-Hydroxydicamba
4-Nitrophenol
Pentachlorophenol
Picloram
2,4,5-T
Silvex
-------
METHOD 515.1 (CURRENTLY)
1 Liter sample pH 12 with NaOH
Shake 1 hour to hydrolyze derivatives
Solvent wash with CH2Cl2
Adjust pH 2 with H2SO4 and
3 serial extractions with ethyl ether
KD add MTBE + MeOH
Methylate with diazomethane
Determine with capillary column GC-EC
Sample prep time: 5-7 samples in 8-10 hours
-------
METHOD 515.1 USING DISK
LIQUID-SOLID EXTRACTION
100 ml sample pH < 2
Pretreat disk-20 mL MTBE
-air dry 5 min.
-20 mL methanol
20 mL reagent water
Sample throughput time = 4 min.
at 5 in. Hg vacuum
Air dry 10 min.
Elute with two 2.5 mL 10% MeOH in MTBE
Dry over anhydrous Na2SO4
Sample prep time: 5-7 samples in 2 hours
Methylate with gaseous diazomethane
Determine with cap column GC/ECD
-------
DISK LSE PRECISION AND ACCURACY
100 ML 5 UG/L C-18 DISK

                            % RECOVERY        %RSD
ANALYTE                     Disk    LLE       Disk   LLE
Acifluorfen                 96      90        7      4
Bentazon                    56      90        8      22
Chloramben                  65      55        1      4
2,4-D                       89      94        9      13
2,4-DB                      115     87        8      15
Dalapon                     21      100       9      20
Dicamba                     75      79        8      4
3,5-Dichlorobenzoic acid    114     88        9      6
2,4,5-T                     85      73        7      4
Dichlorprop                 66      90        8      12
DCPA-AM                     79      23        8      74
Dinoseb                     35      74        8      10
Silvex                      88      77        1      6
-------
ALTERNATIVE METHODS OF
METHYLATION
BF3 - METHANOL
SULFURIC ACID - METHANOL
AMBERLITE RESINS - METHANOL
IN SITU ALKYLATION ON SOLID SORBENT
WITH NUCLEOPHILE - EXAMPLE
CH3I + RCOOH(ADSORBED) -> RCOOCH3 + HI
-------
SULFURIC ACID METHYLATION

Analyte          % Recovery
Dalapon          117.4
35DCBA           114.0
Dichlorprop      110.8
2,4-D            86.2
Chloramben       84.2
Silvex           97.2
2,4,5-T          87.0
2,4-DB           97.4
-------
100 ml Fortified Reagent Water - 5 ug/L
SULFURIC ACID-METHANOL METHYLATION

Analyte              % Recovery
Dicamba              2.0
Acifluorfen          37.0
Pentachlorophenol    0.0
DCPA-AM              0.0
Dinoseb              0.0
Bentazon             0.0
-------
100 ml Fortified Reagent Water - 5 ug/L
AG1-X8 Resin Bed, 0.11 mL Volume

Analyte              % Recovery   %RSD
Acifluorfen          80           7
Bentazon             70           4
Chloramben           53           16
2,4-D                90           13
2,4-DB               98           5
Dalapon              68           13
Dicamba              75           6
3,5-DCBA             81           19
Dichlorprop          72           28
Dinoseb              18           19
4-Nitrophenol        51           1
Pentachlorophenol    83           12
Picloram             66           11
Silvex               78           3
2,4,5-T              77           2
-------
A METHOD FOR THE DETERMINATION OF ORGANICS
IN MUNICIPAL SLUDGES
200 mL sample (10 g eq. dry solids)
Centrifuge
Liquid-CLLE
Solids-sonication-MeOH
- 1:1 MeOH:CH2Cl2
- CH2Cl2
Concentrate extract to 10 mL ???
Attempted cleanup
Silica gel
GPC
Solid phase cartridges
-------
DEACTIVATED (10%) SILICA GEL CLEANUP
OF CRUDE SLUDGE EXTRACTS

Sample   Crude Ext.   Clean Ext.   % Cleanup
         mg/mL        mg/mL
A        64.4         59.9         7
                      62.2         4
                      61.9         4
                      60.8         6
B        82.4         71.6         13
                      66.7         19
                      70.0         15
                      66.8         19
C        59.0         47.0         20
                      47.6         21
                      42.0         29
                      44.9         24
                      47.5         19

65 g SG, 425 mL MeCl, 10 mL
-------
Comparison of DSGF/GPC and GPC/DSGF with a
Single Sludge C Crude Extract - 59 mg/mL

DSG/GPC
Rep   DSG mg/mL   GPC mg/mL   Total Cleanup (%)
1     47.0        39.7        33
2     46.7        41.0        31
3     42.0        35.8        39

GPC/DSG
Rep   GPC mg/mL   DSG mg/mL   Total Cleanup (%)
1     42.7        38.0        36
2     42.0        37.6        36
3     42.0        38.3        35
-------
Cleanup of Sludge Extracts Using Silica
Gel and Different Bio-Beads
Silica gel - SX-3
Silica gel - SX-2
Silica gel - SX-4
SX-3 - Silica gel
SX-2 - Silica gel
SX-4 - Silica gel
No improvement in GPC or overall cleanup was observed
by either of the alternate stationary phases, SX-2
or SX-4
Methylene chloride used as the mobile phase
-------
Average Recovery and RSD From Fortified
Sludge Extract - 100 ug/mL - 5 Reps

Analyte                  %rec    RSD
2-Cl phenol              75.2    9.6
2-Nitrophenol            74.6    7.2
Aniline                  23.6    25.0
Naphthalene              73.8    8.4
Isophorone               4.9     36.7
2-Fluorobiphenyl         93.2    1.6
Acenaphthylene           83.2    3.6
Diethyl phthalate        60.6    5.6
N-Nitroso DPA            92.6    8.9
Benzyl alcohol           5.6     32.1
Dieldrin                 88.5    14.0
4,4'-DDT                 70.2    9.1
Benzo(a)pyrene           89.0    6.9
Benzo(b)fluoranthene     87.8    1.8
2,6-Dinitrotoluene       80.6    18.3
Hexachlorobenzene        78.5    5.3

GPC: 60 g SX3 75% Hex 25% MeCl
-------
SAMPLE AND ANALYTICAL CONDITIONS
Each analytical step individually evaluated
GC/MS KD GPC DSGF/GPC FORTIFIED EXTRACT
Residue before cleanup 77.9 mg/mL
Residue after GPC cleanup 23.6 mg/mL (66%)
Residue after DSGF/GPC cleanup 22.9 mg/mL (71%)
All extracts KDed to 1 ml before analysis
DSGF: 60 g 10% deactivated silica gel
425 mL MeCl elution solvent
GPC: SX-3 bio beads
75% hexane 25% MeCl mobile phase
-------
579
GPC CHROMATOGRAMS (UV)
Mobile Phase - MeCl2
Stationary Phase - SX-3
[Figure: GPC chromatograms of the analyte cocktail and the GPC
calibration solution; analytical fraction collected from 139 mL
to 225 mL. A = Polystyrene (MW = 280,000), B = Corn Oil,
C = Dioctylphthalate, D = Sulfur.]
-------
580
SLUDGE CLEANUP BY GPC
[Figure: GPC chromatograms of analytes, GPC standard, and sample.
Stationary phase - SX3; mobile phase - 40% hexane and 60%
methylene chloride v/v; start and stop of collection marked;
analytical fraction to 243 mL. A = Polystyrene, B = Corn Oil,
C = Dioctylphthalate, D = Sulfur.]
-------
581
SLUDGE CLEANUP BY GPC
[Figure: GPC chromatograms of analytes, GPC standard, and sample.
Stationary phase - SX3; mobile phase - 60% hexane and 40%
methylene chloride v/v; start and stop of collection marked.
A = Polystyrene, B = Corn Oil, C = Dioctylphthalate, D = Sulfur.]
-------
COMPOUND EXTRACTION FROM REAGENT WATER
USING DISKS
[Figure: Bar chart of % recovery (0 to 120) for 2,4-D, Chloramben,
Silvex, 2,4,5-T, 2,4-DB, and Bentazon extracted with C18 disks,
C8 disks, and C8 disks with metal funnels.]
-------
Salted vs. Unsalted 15% (NH4)2SO4
Reagent Water C18 Disks 5 ug/L
[Figure: Bar chart of mean ug/L recovered, salted vs. unsalted,
for Dalapon, Chloramben, Dinoseb, Dichlorprop, Bentazon, and
2,4,5-T; 100 mL sample.]
-------
584
MR. FIELDING: Our next
speaker is Susan Richardson of Environmental Research
Laboratory, USEPA at Athens, who will talk about application
of multispectral techniques to the identification of
aldehydes in a combined sewer overflow.
-------
585
MS. RICHARDSON: My talk
today will cover the application of multispectral techniques
which are a combination of mass spectral and infrared
techniques for the identification of organic compounds in
environmental samples. Specifically, I will focus on the
identification of straight-chain aldehydes in a combined
sewer overflow sample.
This work has recently appeared in an EPA report, and I
have listed the report number here for anyone who is
interested in reading more about this work. I also have
some flyers on a back table with more information about how
to obtain this report.
The use of multispectral techniques for identifying
organic compounds in environmental samples has been a major
part of what I have been involved with at EPA's
Environmental Research Laboratory in Athens, Georgia.
The multispectral techniques that were used for this
specific study, the identification of the aldehydes, are
shown in the red blocks. Each technique incorporated the
use of gas chromatography with it.
The techniques that we used for this study were, first
of all, low resolution electron-impact mass spectrometry,
EI-MS; high resolution EI-MS; low resolution chemical
ionization mass spectrometry, CI-MS; high resolution CI-MS;
and Fourier transform infrared spectroscopy.
-------
586
The techniques shown below that you can probably barely
see in the green block are fast atom bombardment mass
spectrometry and thermospray LC/mass spectrometry. These
techniques are currently available on our instrumentation,
but they weren't used for the study that I will talk about
today.
The techniques shown at the very bottom are currently
being developed, and we would like to incorporate those
techniques into our multispectral techniques analysis
program as they are developed. Those are supercritical
fluid chromatography coupled with nuclear magnetic resonance
spectroscopy and supercritical fluid chromatography coupled
with infrared spectroscopy.
The instrument that was used for the mass spectral work
is shown on this slide, and it was a VG 70SEQ High
Resolution Hybrid mass spectrometer. It has a double
focusing sector composed of electrostatic analyzer plates
and an electromagnet. This double focusing sector allows us
to do the high resolution work that I will talk about later.
It also has, in tandem, a quadrupole analyzer which allows
us to do MS/MS work as well.
I want to mention that I was primarily involved with
the mass spectral work, as were John McGuire and Al Thruston,
who are also listed on your program. Tim Collette was
-------
587
responsible for the infrared work that I will talk about
later.
The purpose of this slide is to emphasize just how few
compounds are typically targeted by common analysis methods.
I have shown here specifically EPA Method 1625, which I am
sure most of you are familiar with. It uses low resolution
electron impact mass spectrometry to target less than 200
analytes.
Just this year, Chemical Abstracts Service has gone
over the 10 million mark for the total number of chemicals
it has given a registry number to. So, it is clear that we
are typically only targeting a very small percentage of the
total number of compounds that could be found in
environmental samples.
Part of my laboratory's mission since the Consent
Decree of 1976 has been to analyze environmental samples for
compounds other than those targeted by current EPA methods.
There are three main obstacles to obtaining
identifications of organic compounds in environmental
samples using common analysis methods that generally involve
an extraction of the sample into organic solvent followed by
low resolution GC/MS using El conditions and, finally,
library data base matching.
First of all, since the sample is generally extracted
into an organic solvent such as methylene chloride, there
-------
588
will be compounds that will not be extracted and will be
missed at this stage in the identification process.
Compounds that are highly water soluble would fall into this
category.
Of those compounds that are extracted, there will be
those for which GC/MS is not applicable. High molecular
weight compounds or thermally labile compounds would fall
into this category.
Of the compounds which are GC/MS applicable, there will
be those that won't have spectra in the library data base,
inhibiting a structural assignment by library data base
matching.
Part of my work is involved in using multispectral
techniques to, first of all, identify those compounds which
do not show spectra in the library data bases and then enter
that spectrum into a library data base to make the next
identification a little easier. We also use multispectral
techniques to add confidence in the structural assignment
for compounds which may have an entry in a library data
base.
Many times, particular compounds will show similar
library fits from library data base matchings. I am sure
all of you are aware of that, and there is not always a
clear choice available. So, using these multispectral
techniques which complement each other, we are able to
-------
589
obtain very precise identifications on compounds which would
not have been possible just using low resolution electron-
impact mass spectrometry and library data base matching.
As I mentioned earlier, this sample was taken from a
combined sewer overflow, and it was extracted into methylene
chloride. Bill Telliard of ITD was responsible for the
program which provided this sample for us, and Jim King of
the Sample Control Center was responsible for storing the
sample for us.
This slide shows the GC/MS chromatogram obtained under
low resolution El conditions. Among the many compounds that
we identified in this sample, the peaks labeled 1 through 8
represent the eight straight-chain aldehydes that we
identified in the sample.
There were four saturated aldehydes: n-hexanal, n-
heptanal, n-nonanal, and n-octanal; and four unsaturated
aldehydes: 2-heptenal, 2-octenal, 2-decenal, and 2-
undecenal. The low resolution El data led us to believe
that we could possibly have these assignments that I have
shown you. However, in this case, there were many similar
library fits, and there was no clear choice available from
library data base matching.
So, at this point in the identification process, these
assignments were only tentative.
-------
590
Also, for those four unsaturated aldehydes, there were
typically only two possible isomer choices in the library.
For instance, for the compound 2-heptenal, 2-heptenal was in
the library and 4-heptenal was in the library, but other
possible isomer choices such as 3-heptenal, 5-heptenal, and
6-heptenal were not.
So, we could not be sure of a correct structural
assignment from library data base matching.
Part of the reason that the low resolution El data was
so inconclusive was that there was very little molecular
weight information present in the El spectra. I have listed
here the relative abundances of the molecular ions obtained
under El conditions for the aldehydes listed at the left.
You can see that most of these aldehydes show extremely
small molecular ions.
Shown here are n-hexanal at 0.1 relative abundance; for
n-heptanal, 0.2; n-octanal, 0.1; for 2-octenal, 0.3. The
one exception was 2-heptenal which did show a molecular ion
at a reasonable relative abundance. The three aldehydes
listed at the bottom, n-nonanal, 2-decenal, and 2-undecenal,
did not show a molecular ion at all.
So, in general, the low resolution El data was void of
molecular weight information.
This slide shows an example of one of the El spectra
obtained, in this case, for nonanal which has a molecular
-------
591
weight of 142. You can see that there are no ions in this
molecular weight region.
Generally, the highest mass ion present for most of
these aldehydes in the El spectra was due to the loss of
water from the parent compound, shown here at m/z 124 for
nonanal.
In order to determine the molecular ion for these
aldehydes, first of all, low resolution chemical ionization
mass spectrometry was used. The reason low resolution was
used before high resolution was to prevent the interference
of a calibration compound such as PFK (perfluorokerosene) that would have been
necessary for the high resolution experiment.
So, we first of all determined to a nominal mass what
the molecular ion was for each of these compounds using low
resolution CI, and then we used high resolution CI to
determine the accurate mass.
I have listed here the observed masses obtained
experimentally for the (M+H) ions under high resolution CI
conditions. You can see that these observed masses do
compare very favorably with the calculated masses based on
the assignments given at the left.
So, this high resolution CI data did support our
structural assignments. However, we haven't yet shown
clearly where the location of the carbon-carbon double bond
-------
592
is for the unsaturated aldehydes, 2-heptenal, 2-octenal, 2-
decenal, and 2-undecenal.
In order to determine the location of that carbon-
carbon double bond, we used high resolution electron impact
mass spectrometry. I am choosing to use an example here, in
this case, 2-octenal, to show you how we used high
resolution El mass spectrometry to determine the location of
the double bond, but we used the same procedure for each of
the other three unsaturated aldehydes as well.
The premise that we used was this: carbon-carbon
double bonds will rarely fragment across the carbon-carbon
double bond under El conditions. So, if we could show we
had a fragment ion owing to cleavage at each of these other
carbon-carbon bonds along the hydrocarbon chain and if we
see an absence of an ion owing to cleavage at this
particular location, then we could determine that this must
be the location of the carbon-carbon double bond.
The reason that high resolution was necessary for this
work over the corresponding low resolution experiment was
that under low resolution El conditions, many of the
fragment ions obtained could be represented by more than one
possible empirical formula and, thus, by more than one
particular site of cleavage. So, in order to determine that
site of cleavage accurately, we did need high resolution El
mass spectrometry.
-------
593
This slide shows an example of that. Using low
resolution El mass spectrometry conditions, we obtained an
ion at m/z 83 for the compound that I showed you on the
previous slide, 2-octenal. This ion at 83 could be
represented either by C6H11, which corresponds to a particular
site of cleavage along the hydrocarbon chain and also a
possible location for the carbon-carbon double bond, or that
ion at 83 could be represented by C5H7O corresponding to yet
another particular site of cleavage along the hydrocarbon
chain and another possible location for the carbon-carbon
double bond.
High resolution electron impact mass spectrometry
provided us with the accurate mass that determined that the
C5H7O assignment was the correct assignment for the ion at
83. Thus, we can assign that specific site of cleavage at
the location shown here.
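To illustrate why the accurate mass settles the choice, the two
candidate formulas at nominal m/z 83 differ by roughly 0.036 u,
which is well within the reach of a high resolution instrument.
A minimal sketch in Python, using standard monoisotopic atomic
masses (the function name is ours, not part of the study):

    # Monoisotopic masses (u) of the most abundant isotopes
    MASS = {"C": 12.0, "H": 1.00783, "O": 15.9949}

    def exact_mass(formula):
        # formula is a dict of element counts, e.g. {"C": 6, "H": 11}
        return sum(MASS[element] * count for element, count in formula.items())

    c6h11 = exact_mass({"C": 6, "H": 11})         # about 83.086
    c5h7o = exact_mass({"C": 5, "H": 7, "O": 1})  # about 83.050
    print(round(c6h11, 3), round(c5h7o, 3), round(c6h11 - c5h7o, 3))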
We used the same methodology for each of the other
fragment ions obtained under high resolution El conditions,
and we showed that we did have cleavage of each of these
carbon-carbon bonds along the hydrocarbon chain, but we did
see an absence of an ion owing to cleavage at this location.
So, we concluded that the double bond must be located at
this particular position, which is at the carbon-2 position.
Incidentally, this particular location for the carbon-
carbon double bond allows it to be in conjugation with the
-------
594
aldehyde carbonyl group, making it a more thermodynamically
stable compound, something you might expect for a compound
found in the environment.
Keep in mind that this double bond is in conjugation
with that aldehyde carbonyl group. That will become
important in the next slide.
This slide shows the infrared spectrum of 2-octenal.
Again, I am choosing to use 2-octenal to show you how we
used infrared spectroscopy to identify all of these
aldehydes. I use this as one particular example.
Infrared spectroscopy was a nice complement to mass
spectrometry, because it, first of all, confirmed some
structural assignments that we had made using mass
spectrometry, and it also provided new information that mass
spectrometry was not able to provide.
First of all, the peaks shown here which are due to the
stretching fundamental of the hydrogen atom attached to the
carbonyl carbon and also due to a corresponding overtone
peak, those peaks, along with the peak due to the stretching
of the carbonyl shown here at 1715 cm-1 are clear evidence
for an aldehyde group.
So, first of all, IR spectroscopy confirmed the
existence of the aldehyde group for all eight straight-chain
aldehydes we identified. Secondly, infrared spectroscopy
-------
595
confirmed the existence of the carbon-carbon double bond for
the four unsaturated aldehydes.
The peak shown here at 1634 cm-1 is clear evidence of
a carbon-carbon double bond.
The third piece of information that infrared
spectroscopy provided was the location of that carbon-carbon
double bond along the hydrocarbon chain for those
unsaturated aldehydes. If this carbon-carbon double bond
had not been in conjugation with that carbonyl group, this
particular frequency shown here at 1634 cm-1 would have been
shifted to a much higher frequency, to about 1650 cm-1.
Also, the carbonyl frequency would have been shifted to
about 1742 cm-1.
So, because these frequencies are located where they
are at 1634 and 1715 cm-1, that is clear evidence for
conjugation of that carbon-carbon double bond with the
aldehyde carbonyl group, thus confirming that the double
bond is located at carbon-2, as high resolution electron-
impact mass spectrometry had suggested.
Finally, infrared spectroscopy allowed an exact
isomeric determination for the four unsaturated aldehydes,
that is, whether we had a cis or a trans double bond. This
was something that mass spectrometry was not able to
provide.
-------
596
The peak shown here at 976 cm-1 is clear evidence for a
trans double bond. So, we did determine that we had a trans
double bond for each of those four unsaturated aldehydes.
To summarize, using the multispectral techniques, which
were low and high resolution electron-impact mass
spectrometry, low and high resolution chemical ionization
mass spectrometry, and Fourier transform infrared
spectroscopy, we were able to precisely identify these
compounds, which could not have been identified using the
common analysis methods of low resolution
electron-impact mass spectrometry and library data base
matching.
First of all, low resolution chemical ionization mass
spectrometry allowed us to accurately determine the
molecular ion since it was either very small or wasn't
present at all in the El spectra for most of these
aldehydes.
Secondly, high resolution CI-MS provided us the
accurate mass which allowed us to determine the exact
empirical formula of the molecular ion.
Thirdly, infrared spectroscopy confirmed the existence
of the aldehyde group for all eight aldehydes.
Fourthly, both high resolution El mass spectrometry and
infrared spectroscopy together determined the position of
the carbon-carbon double bond for the unsaturated aldehydes.
-------
597
Finally, infrared spectroscopy allowed an exact
isomeric determination, that is, whether we had a cis or a
trans double bond. We did determine that we had a trans
double bond, and this, again, was something that mass
spectrometry was not able to provide.
You can see how nicely these techniques complemented
each other and how they worked very well together to allow
us to make these precise identifications.
I would like to conclude by saying that the observation
of these straight-chain aldehydes in a combined sewer
overflow sample was very unexpected. What we have typically
found in those types of samples are primarily fatty acids
and fatty acid methyl esters. So, the observation of these
aldehydes in that particular type of sample was unusual from
that standpoint.
It was also unusual from the standpoint that there are
very few reports of similar straight-chain aldehydes in the
literature where they were found in the environment at all.
So, we did feel that this finding was significant.
There are many other types of aldehydes that are
commonly observed such as benzaldehyde, acetaldehyde, and
formaldehyde, but, evidently, straight-chain aldehydes are
not that common.
I do want to mention, however, that I have been
recently told that there have been some tentative
-------
598
identifications of some similar straight-chain aldehydes in
Superfund samples, but these identifications are only
tentative at this point.
I would like to thank you for your attention. If there
are any questions, I would be happy to try to answer them.
MR. FIELDING: Does anybody
have any questions?
(No response.)
MR. FIELDING: If not, we
thank you very much, Susan.
-------
Application of Multispectral Techniques
To the Identification of Aldehydes
In a Combined Sewer Overflow
EPA/600/4-90/002
NTIS No. PB 90 160 995/AS
-------
MULTISPECTRAL TECHNIQUES
[Diagram: Techniques used in this study - Low Resolution EI-MS,
High Resolution EI-MS, Low Resolution CI-MS, High Resolution
CI-MS, and Infrared Spectroscopy. Available but not used -
Fast Atom Bombardment MS and Thermospray LC/MS. Under
development - SFC/NMR and SFC/IR.]
-------
[Diagram: VG 70SEQ hybrid mass spectrometer - source with GC,
solid probe, FAB/FD, and thermospray LC/MS inlets; source slit
(fixed); 1st field free region gas cell; electrostatic analyzer
(ESA) plates and electromagnet; quadrupole collision cell;
quadrupole analyzer; multiplier stack.]
-------
[Diagram: Method 1625 uses low resolution EI-MS to target fewer
than 200 analytes; other chemicals are not targeted.]
-------
[Diagram: Aqueous sample -> solvent extraction. Three points
where compounds can be missed: 1. not extracted; 2. extracted
but GC/MS not applicable; 3. GC/MS applicable but spectrum not
in library file.]
-------
[Figure: Low resolution EI GC/MS total ion chromatogram of the
combined sewer overflow extract, scans 300 to 1000 (about 5 to
17.5 minutes). Labeled peaks: 1. n-hexanal, 2. n-heptanal,
3. 2-heptenal, 4. n-octanal, 5. 2-octenal, 6. n-nonanal,
7. 2-decenal, 8. 2-undecenal.]
-------
Electron Impact Relative Abundance
Of Molecular Ions

Compound       Relative Abundance
n-hexanal      0.1
n-heptanal     0.2
2-heptenal     10
n-octanal      0.1
2-octenal      0.3
n-nonanal      not observed
2-decenal      not observed
2-undecenal    not observed
-------
[Figure: EI mass spectrum of nonanal. Molecular weight = 142;
no molecular ion present. Fragment ions include m/z 57, 71, 114,
and 124 (loss of water).]
-------
High Resolution CI Accurate Masses

Compound       Empirical    Observed    Calculated
               Formula      Mass        Mass
n-hexanal      C6H13O       101.096     101.097
n-heptanal     C7H15O       115.111     115.112
2-heptenal     C7H13O       113.097     113.097
n-octanal      C8H17O       129.130     129.128
2-octenal      C8H15O       127.113     127.112
n-nonanal      C9H19O       143.143     143.144
2-decenal      C10H19O      155.144     155.144
2-undecenal    C11H21O      169.158     169.159
-------
608
-------
Low Resolution EI-MS
Ion at m/z 83
C6H11 or C5H7O
By High Resolution EI-MS
-------
610
-------
[Figure: Infrared spectrum of 2-octenal, absorbance vs.
wavenumbers (cm-1), 3500 to 1000. Labeled bands: aldehyde
(O)C-H stretch, conjugated C=O at 1715 cm-1, and trans C=C
at 1634 cm-1.]
-------
Summary
O Accurate Determination of Molecular Ion ......... Low Resolution CI-MS
O Exact Empirical Formula of Molecular Ion ........ High Resolution CI-MS
O Confirmation of Aldehyde Group .................. Infrared Spectroscopy
O Position of Carbon-Carbon Double Bond ........... High Resolution EI-MS,
                                                    Infrared Spectroscopy
O Exact Isomeric Determination
  (Cis or Trans Double Bond) ...................... Infrared Spectroscopy
-------
613
MR. FIELDING: It is time
for lunch. Can we try to get back about 1:00 o'clock? We
have a full afternoon, and I know some people have to leave
on early flights. We will see you at 1:00 o'clock.
-------
614
AFTERNOON SESSION
MR. TELLIARD: We would like
to get going for this afternoon's session, please.
One small announcement. We routinely take the
proceedings down and put it out in a book called
"Proceedings." As I alluded to before, last year's
proceedings are sitting waiting to be printed. I called my
management, I called Washington, and they said yes, it is
sitting here. The word is that we won't be able to print it
until September when we get new money. So, when it is
available, I will mail it to you, he said.
That also means that these proceedings that we are
taking down now will hopefully be out in September when we
get the new budget. New money comes in in a big truck and
they dump it in the office. We roll around in it for a day
or two and then spend it.
So, I am sorry about this. It is kind of embarrassing,
but the Office of Water took about a $1 million intramural
cut this year. Apparently, we made somebody unhappy up on
the Hill, and now they have made us unhappy. It works.
So, because of that intramural cut, we have not been
able to print almost anything. So, on those cards you
filled out, on some of those documents, you will get a
little notice that says "in printing" which means there is
-------
615
no money right now, and we will get those in September.
Those that are available we will mail out to you.
So, our first speaker this afternoon is Joe Raia. Joe
is going to speak about one of his favorite persons, a
robot, and the joys of doing suspended solids using a
robotic method.
-------
616
MR. RAIA: Good afternoon.
At Shell Development Company at the Westhollow Research
Center, I have the Environmental Wastewater Analysis group,
and in this laboratory we process a large number of samples
for the environmental research at this facility. One of the
goals and focuses of our group is to try to automate methods
where it makes sense to do so in order to perform these
analyses as cost effectively as possible.
The total suspended solids method was one of these
methods that we thought was a real good candidate for
automation. It was highly repetitive, we did relatively
large numbers per year, on the order of about 2000, and it
really wasn't that much fun to do.
We got together with our systems development
group...and the co-author up there on the slide, Al
Telfer...and began thinking about how we could automate this
procedure. The result of that effort is what I will be
talking about this afternoon.
Automation in the laboratory is really nothing new. It
has been around for a number of years. We have had
laboratory information management systems for identifying
the sample when it comes into the laboratory and to do
sample tracking.
We can skip over the sample preparation part shown here
on the slide for a minute, and looking at the analysis side,
-------
617
there are all sorts of automated instrumentation with
autosamplers, continuous flow type systems, and
instrumentation which are run by microprocessors. Then, in
the results and reports section of the analysis, we know
computerized data reduction has been around for years.
One important step in this whole process has been
sample preparation. About, I guess, 1982 or so, a company
out of Hopkinton, Massachusetts, the Zymark Corporation,
took a look at that and decided that it was a spot where we
really needed to do some work in terms of automation and
time savings efforts. So, they have come up with their
Zymate system for primarily the sample preparation part of
the analysis procedure.
The TSS analysis robotic system that I am talking about
here in this presentation is one that measures total
suspended solids in water. We developed the method to
follow the standard EPA protocol, Method 160.2, which
essentially is a filtration of the sample through a 1 micron
filter and then followed up by a gravimetric finish after
removal of the water at 105 degrees in an oven.
The procedure will also allow us to measure the
volatile residue according to EPA Method 160.4 with what we
call a minimum of operator intervention. As I go through
and describe the lab robot...and many of you here have
already seen them around for quite a few years, you will see
-------
618
that the hands of the robot would not be able to stick the
sample into a 550 degree furnace that is used for the
volatile residue part of the test. So, we intervene at
that point and place the samples in a rack and manually
insert them into that type of a furnace.
Now, in the beginning of our story here, this is a
photograph of the way we used to do suspended solids in the
lab. We typically had our setup to do eight samples at a
time, batchwise. At that time, we tried to automate at
least the end part of the procedure by linking the balance
to a computer so that you could start the analysis, get your
tare weight, store that weight in the data system, then
perform the filtration, the drying, the weighing,
reweighing, and then from the data system memory of the
initial and final weight, the computer would calculate the
suspended solids that were captured on the filter.
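The gravimetric arithmetic behind that calculation is simple; a
minimal sketch in Python (the function name and example weights
are ours, not Shell's data system):

    def tss_mg_per_L(tare_g, final_g, sample_volume_mL):
        # Residue captured on the filter, in mg, divided by the
        # sample volume in liters, gives TSS in mg/L.
        residue_mg = (final_g - tare_g) * 1000.0
        return residue_mg / (sample_volume_mL / 1000.0)

    # Example: 5.2 mg of solids from a 100 mL sample -> 52 mg/L
    print(tss_mg_per_L(25.1000, 25.1052, 100.0))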
This slide shows the system that we developed to
perform that analysis. This is the Zymark Zymate system
that we purchased from Zymark Corporation, and this is their
controller to automatically operate the instrumentation.
It is not shown here, but we have linked this system to
an IBM PC for report generation. I have another schematic
that I will put up here in a second to show you the layout
of the system.
-------
619
This whole apparatus is on about a 7 foot by 4 foot
table top, and the electronics for performing part of the
analysis are housed below here.
The sample starts out in this sample rack. Typically,
we use 100 ml of sample. This is the oven for drying at 105
degrees C, a desiccator for conditioning before weighing in
the balance, and these items here are the stands to hold the
Gooch crucible for the filtration.
To simulate our original earlier days with this method,
we stayed with a batch of 8 samples at a time, although we
can actually do 16 in two batches of 8.
This next slide shows schematically what the photograph
is trying to show there.
The feedback sensor is an item that I want to point
out. This turned out to be the real challenge in terms of
trying to automate this procedure; the reason being how
to determine when the filtration had finished. In order to
do that, we ended up using a capacitance type sensing device
which allowed you to essentially tell the liquid level in
the Gooch crucible.
The oven and desiccator were built in our shops at
Shell in Westhollow, and they are pneumatically operated.
The balance is a Mettler balance with a pneumatic door, and
I have already mentioned the data system.
-------
620
This next slide shows the step by step operation that
the robot does to perform the test. I am going to read
through here as we go. The sequence is as follows:
The robot gets the Gooch crucible with filter from the
dessicator and moves it to the balance where the tare weight
is obtained. Next, the robot takes them to the rinse
station where a small volume of water is placed on the
filter. This helps seal the filter to the Gooch crucible.
The robot then places them in the filter station and
backs away. The vacuum is turned on to that filter station,
and the capacitance electronics system takes a reading of
the base capacitance. That is the base point that will be
compared to determine when the sample is finished filtering.
This is accomplished using the switched outputs
available to the power and event controllers as well as the
A/D input. The output of the capacitance electronics is a 4
to 20 milliampere signal. This is converted to a voltage
at the PEC switches using a 50 ohm resistor. This voltage
reading is stored in the robot controller's memory for
further use.
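That current-to-voltage conversion is plain Ohm's law: the 4 to
20 mA loop signal across a 50 ohm resistor spans 0.2 to 1.0 V. A
minimal sketch in Python (the function name is ours):

    def loop_current_to_voltage(current_mA, shunt_ohms=50.0):
        # V = I * R, with the loop current converted from mA to A
        return (current_mA / 1000.0) * shunt_ohms

    print(loop_current_to_voltage(4.0))    # 0.2 V at the bottom of the range
    print(loop_current_to_voltage(20.0))   # 1.0 V at the top of the range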
The robot then gets the proper sample flask from the
storage rack and inverts the flask into the crucible. The
robot leaves the flask in a holder on top of the filter
station with the neck of the flask below the top of the
crucible.
-------
621
This allows us to filter up to 100 ml of sample into a
40 ml Gooch crucible. You can kind of think of this as a
chicken feeder principle where the Gooch will not overflow
because of the hydrostatic pressure that won't allow the
rest of the 100 ml volume to fill until some of it filters
through.
The robot backs away, and a second capacitance
measurement is made. Now, if the filtration is complete,
this second reading will approach the base reading and the
vacuum will automatically be shut off for that filter
station. If not, the robot proceeds with loading the next
sample and rechecks all loaded stations for capacitance
until filtration is complete.
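In outline, the completion check is a polling loop that compares
each loaded station's current reading with its stored baseline; a
minimal sketch of that logic in Python under our own naming (the
actual control program runs in the Zymate controller, and the
tolerance shown is only illustrative):

    def filtration_complete(reading_V, baseline_V, tolerance_V=0.05):
        # Filtration is judged complete when the liquid-level reading
        # has returned to (near) the dry-crucible baseline.
        return abs(reading_V - baseline_V) <= tolerance_V

    def poll_stations(baselines, read_capacitance, shut_off_vacuum):
        # baselines: dict mapping station id -> stored baseline voltage
        for station_id, baseline_V in baselines.items():
            if filtration_complete(read_capacitance(station_id), baseline_V):
                shut_off_vacuum(station_id)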
The rest of the procedure operates in exactly the same
way, and we rinse three times according to the protocol.
The sample can sit for some time without affecting
performance as long as the vacuum is off. We found that if
the filter gets too dry, some samples seem to get a
waterproof like coating that may hinder a rinse procedure
that follows. Conversely, the filter must not be too wet,
particularly when the crucible is placed in the oven.
For this reason, the program pauses after the final
rinse and before the crucible is loaded into the oven while
vacuum is applied to all occupied filter stations.
-------
622
After drying at 105 degrees, the crucible is moved to
the desiccator to condition and then be weighed. Constant weight
is verified by redrying the crucible for a half an hour and
then reweighing.
If the volatiles are measured, the crucible and dry
solids are moved by the analyst to a 550 degree furnace, and
after 45 minutes, the crucibles are allowed to cool to
ambient conditions and then placed in a desiccator before
being weighed as before.
What I have here is a series of photographs to show
some of the steps that the robot is taking to carry out the
procedure that I have just described. This one shows
getting the crucible out of the desiccator at the beginning
of the analysis before placing it in the balance.
The door of the balance is activated and pneumatically
opened, and the crucible is placed on the pan. The arm
moves back, and the door is closed, and a weighing is made.
The average of 10 weighings is done.
The arm goes back to get the crucible and moves it to
the filtration station. The arm then goes to get the sample
of water that is in a 100 ml volumetric flask and moves it
to pour the sample into the Gooch crucible.
Programming that step was a little tricky to make sure
that no water was dripping during the pouring process so
that we didn't have any loss of sample. You can see how the
-------
623
filtration stations were designed to hold the flask in place
inverted as the water passed from the flask into the Gooch
crucible, and this is the place where the 100 ml of sample
is poured into a 40 ml volumetric Gooch crucible without
overflowing.
The neck of the flask is just below the top part of the
Gooch crucible. The filter has already certainly been put
in place early on in the procedure.
Then the arm goes to get the flask and moves it over to
a rinse station where the rest of the sample is
quantitatively transferred back onto the filter.
This shot was taken with a pen light put on the robotic
arm and a time delay photograph to show all the different
motions that are involved in carrying out this procedure by
the robot to do eight samples.
This next slide shows the linearity with sample volume.
It turns out that you have to select samples that are real
high in solids. You use a reduced volume of sample for
those type samples, and we wanted to see what sort of
linearity we were getting down in the lower sample volume
range and felt comfortable that it was linear down in that
range.
This slide shows comparison data using the robotic and
manual methods for a couple of samples that were generated
in our research in the Environmental Science Department, in
-------
624
benchscale biotreaters, at two different TSS levels. The
mean, standard deviation, and coefficient of variation at
these two levels were essentially about the same.
We feel that we don't see any bias in the robotic
versus the manual method. At the 100 mg/1 level, there
seemed to be a lower robotic recovery for this particular
set of data than for the manual, but that did not turn out
to be the case in later results comparison work.
This data here shows percent recovery using samples
provided to us by the Analytical Products Group out of
Belpre, Ohio. This vendor will supply proficiency testing
type samples to various laboratories who participate.
So, we had a comparison here of about 40 different labs
around the country who were also getting the TSS proficiency
sample, and we were able to compare it by the robotic
procedure and the manual method performed in our laboratory,
and compare it to the manual procedure in these other
laboratories... and we felt satisfied with this recovery
comparison.
The benefits of lab automation, as I conclude here, are
first the obvious one of unattended operation. The sample
can be run at night or on weekends.
Reduced time for analysis. We have been operating this
system for about three years in our laboratory and have been
able to demonstrate a cost savings over that period of time.
-------
625
Typically, you can get improved precision. With the
TSS system however, I think the nature of the sampling
itself is such that we really don't see any benefit of
improved precision over the manual method if the manual
method is done real carefully.
Finally, there is what we call gained "new" time for
R&D that we end up with for doing more interesting and
challenging tasks in the environmental area. A lot of work
needs to be done.
I believe that is it. I will take any questions, if
you have any.
-------
626
QUESTION AND ANSWER SESSION
MR. TELLIARD: I have a
question.
When you take the sample and you have a lot of solids,
do you shake it and then take 100 ml of that, or do you just
get it from...when you get to your 100 ml volume? What do
you start with, 2 liters or a liter that comes out of a
sampler or something like that?
MR. RAIA: That is right. So,
you take that sample, and you shake it initially to get that
aliquot.
MR. TELLIARD: And then you
put that in the volumetric and transfer it to the system?
MR. RAIA: Right.
MR. TELLIARD: Any problems
with solids like sticking to the walls?
MR. RAIA: We haven't seen
any, but that is something to be concerned about.
MR. TELLIARD: Any other
questions as I am up here hogging the microphone?
MR. COLLAMORE: Martin
Collamore, City of Tacoma.
How do you handle larger sample sizes than 100 ml?
MR. RAIA: We don't. We don't
run a sample size larger than 100 ml.
-------
627
MR. COLLAMORE: But you
can handle lower samples? You can handle smaller samples?
MR. RAIA: Yes, we can
handle smaller samples. We do handle more concentrated
samples by diluting down if we need to, but in terms of
total volume, we have only used up to 100 ml. You are
limited to some extent there by the weight of that flask
with the water. We really have not gone to larger sample
sizes, and I don't know what maximum volume we could end up
with, but this particular setup has been able to handle the
type of samples that we typically deal with.
Some of our manufacturing locations also have a similar
type of robotic system for doing TSS, and they are using a
100 ml sample volume also.
MR. TELLIARD: Thank you
very much, Joe.
-------
628
A LABORATORY ROBOTIC METHOD FOR THE AUTOMATED DETERMINATION
OF TOTAL SUSPENDED SOLIDS IN ENVIRONMENTAL WATER SAMPLES
by
Joe C. Raia and Al Telfer
Shell Development Company
Houston, Texas
For Presentation at
The Annual U.S. EPA Conference
on
Analysis of Pollutants in the Environment
Norfolk, Virginia
May 9-10, 1990
-------
629
A LABORATORY ROBOTIC METHOD FOR THE AUTOMATED DETERMINATION
OF TOTAL SUSPENDED SOLIDS IN ENVIRONMENTAL WATER SAMPLES
by
Joe C. Raia and Al Telfer
Shell Development Company
Houston, Texas
ABSTRACT
This paper presents a new laboratory robotic procedure which automates the
standard method for the determination of Total Suspended Solids (TSS) in
environmental water and wastewater samples. The method also determines
Volatile Total Suspended Solids (VTSS) with a minimum of operator
intervention. The automated equipment, robotic procedure, and results are
presented. The benefits of automation in the environmental analysis
laboratory are discussed.
ACKNOWLEDGEMENT
The authors wish to acknowledge P. J. Drymala and R. A. Balderas of Shell
Development Company for their assistance in this project.
-------
630
LABORATORY ROBOTIC METHOD FOR THE AUTOMATED DETERMINATION
OF TOTAL SUSPENDED SOLIDS IN ENVIRONMENTAL WATER SAMPLES
by
Joe C. Raia and Al Telfer
Shell Development Company
Houston, Texas
INTRODUCTION
Automation has been used in the analytical chemistry laboratory for many
years. This has primarily included microprocessor controlled analytical
instrumentation with dedicated autosamplers, continuous flow systems, and
computerized data collection, calculation, and report generation. In
recent years, laboratory automation has been extended by the use of
robotics, combined with programmable computers, to new tasks which include
sample preparation and even entire analytical determinations.
This article presents a laboratory robotic system for the
determination of total suspended solids (TSS) in environmental
water samples. The method adheres to the standard protocol
that is specified for the manual procedure (US EPA Method 160.2) (1).
It also performs Volatile Total Suspended Solids (VTSS) with a minimum
of operator intervention. The robotic TSS procedure required the
development of new robot-friendly modules and sensors not yet commercially
available. These components and the automated analysis
method are described. Results are presented which compare the robotic and
manual procedures. The method has been validated and is
currently in routine operation in our environmental analysis laboratories.
a) A portion of this paper is to be published in American
Environmental Laboratory
-------
631
THE TSS ANALYSIS LABORATORY ROBOTIC SYSTEM
The TSS analysis laboratory robotic system and associated equipment are
shown in Figures 1 and 2. The system consists of the following major
components:
1. A laboratory robot and controller (Zymark Corporation, Zymate I
upgraded to a Zymate II system)
2. Three Zymate Power and Event Controllers (PEC)
3. A printer for the Zymate controller
4. A computer (IBM-XT) and printer with programs for communication
with the Zymate, data calculation, and report generation
5. A balance (Mettler AE163) with electronic interface and a remote
pneumatically controlled door
6. An oven, thermocouple controlled, with a pneumatically controlled door
7. A desiccator with a remote pneumatically controlled door
8. Eight filtration stations
9. Capacitance sensing devices for eight positions
10. A pump and reservoir system to supply rinse water
11. A rinse dispenser station with pneumatic valve
12. A storage rack to hold 16 polypropylene volumetric flasks
(100 ml) used for the samples
13. A custom built table top support frame with castors and leveling
jacks to hold the robot table and hardware
14. Various actuators, sensors, and controls required by the system
-------
632
One of the challenges encountered with the design of the robot system was
the problem of how to sense that filtration had been completed. This
problem also exists for sensing rinse completion. These problems were
solved by detecting the amount of water in the Gooch crucible using a
commercially available capacitance electronic package usually used for
level sensing in tanks and vessels. The holder for the Gooch crucible in
the filter station is made of two stainless steel parts. They are
electrically separated by a rubber gasket, and form two plates of a
capacitance measurement system. If a small amount of water is present
within the Gooch crucible, a significant change in the capacitance output
signal occurs.
In operation, the sequence of events is as follows: The robot gets the
Gooch crucible with filter from the desiccator and moves it to the balance
where the tare weight is obtained. Next, the robot takes them to the
rinse station, where a small (2-3 ml) quantity of water is placed on the
filter. This helps seal the filter to the crucible. The robot then
places them into the filter station, and backs away. The vacuum is
turned on to that filter station, and the capacitance electronics system
takes a reading of the base capacitance. This is accomplished using the
switched outputs available at the Power and Event Controllers, as well as
the A/D input. The output of the capacitance electronics system is a 4-20
milliampere signal. This is converted to a voltage at the PEC switches
using a 50 ohm resistor. This voltage reading is stored in the robot
controller's memory for further use. The robot then gets the proper sample
flask from the storage rack and inverts the flask into the crucible. The
robot leaves the flask in a holder on top of the filter station, with the
neck of the flask below the top of the crucible. This allows up to 100 ml
of sample to be filtered using a crucible of about 40 ml capacity. It
will not overflow since the neck of the flask is sealed below the liquid
surface. The robot backs away and a second capacitance measurement is
made. If the filtration is complete, this second reading will approach the
base reading, and the vacuum will be shut off for that filter station. If
not, the robot proceeds with loading the next sample and rechecks all
loaded stations until filtration is complete. The rinse procedure
operates in exactly the same way. The samples can sit for some time
without affecting performance as long as the vacuum is off. If the filter
gets too dry, some samples seem to develop a waterproof-like coating that may
hinder the rinse procedure. Conversely, the filter must not be too wet,
particularly when the crucible is placed in the oven. For this reason the
program pauses after the final rinse and before the crucible is loaded
into the oven, while vacuum is applied to all occupied filter stations.
After drying at 105 C, the crucible is moved to the desiccator to
condition and then weighed. Constant weight is verified by redrying the
crucible for a half hour and reweighing. If volatile suspended solids are
measured, the crucibles and dried solids are moved by the analyst to a 550
C furnace. After 45 minutes, the crucibles are allowed to cool ambiently,
and then placed in the desiccator before being weighed as before.
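To make the completion check above concrete, the following is a minimal
sketch (in Python, with hypothetical helper names; the actual Zymate program
is not reproduced here) of the polling logic: convert the 4-20 mA sensor
output dropped across the 50 ohm resistor to volts, compare each loaded
station's reading to its stored base reading, and shut off the vacuum when
the two agree within an assumed tolerance.

    # Sketch only: read_station_ma() and set_vacuum() stand in for the actual
    # Power and Event Controller (PEC) calls; TOLERANCE_V is an assumed margin.
    RESISTOR_OHMS = 50.0   # 4-20 mA dropped across 50 ohms -> 0.2-1.0 V
    TOLERANCE_V = 0.05     # reading "approaches the base reading" (assumed)

    def to_volts(current_ma):
        return (current_ma / 1000.0) * RESISTOR_OHMS

    def check_stations(stations, read_station_ma, set_vacuum):
        """Poll loaded filter stations; stop the vacuum when filtration (or rinse) is done."""
        for s in stations:
            if s["loaded"] and not s["done"]:
                reading = to_volts(read_station_ma(s["id"]))
                if abs(reading - s["base_volts"]) <= TOLERANCE_V:
                    set_vacuum(s["id"], on=False)
                    s["done"] = True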
The Zymate II system is interfaced with an IBM-XT-PC to perform the data
calculation and report generation of the TSS and VTSS results.
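For reference, the TSS and VTSS values reported by the PC follow the usual
gravimetric arithmetic of Methods 160.2 and 160.4; a minimal illustration
(our own function names, not the actual IBM-XT program) is:

    def tss_mg_per_l(tare_g, dried_g, sample_ml):
        """Total suspended solids: residue dried at 105 C, per liter of sample."""
        return (dried_g - tare_g) * 1.0e6 / sample_ml   # g -> mg and mL -> L

    def vtss_mg_per_l(dried_g, ignited_g, sample_ml):
        """Volatile suspended solids: weight lost on ignition at 550 C, per liter."""
        return (dried_g - ignited_g) * 1.0e6 / sample_ml

    # Example: a 100 mL sample with a crucible tare of 25.0000 g, 25.0123 g after
    # drying, and 25.0050 g after ignition gives TSS = 123 mg/L and VTSS = 73 mg/L.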
-------
633
RESULTS
The robotic procedure has been validated for unattended operation, and for
precision and accuracy. Comparison data of the robotic and manual
procedures are shown in Tables 1 and 2. Results in Table 1 show that the
precision of both methods is good and the values are comparable for the
biotreater aeration basin samples tested. Any precision gains offered by
robotics are likely obscured by variability in the sampling of these
wastewaters. The robotic procedure resulted in a 7% lower mean TSS value
than did the manual method for the aeration basin samples tested. No
consistent bias in the data, however, has been found for either procedure.
Results in Table 2 compare the recovery of the robotic and manual
procedures for TSS in standard samples prepared at various concentration
levels for round-robin testing. Data for an in-house control are also
given. The robotic and manual procedures both showed comparably good
recovery for these round-robin samples. The robotic method values were in
the 94% - 107% range, and the manual method values were in the 83% - 97%
range. The mean recovery of the fifty labs which participated in this APG
sample set was in the 91% - 95% range. Linearity of TSS with volume of
sample filtered is shown in Figure 3 for the robot and manual methods.
Typically, the volume of sample taken for analysis is such that it can be
filtered in a reasonable time period and yet be enough to yield sufficient
solids for accurate weighing.
Analyst time required for the robotic procedure is less than one-third
that for the manual procedure. The capital investment required for the
robotic TSS method has been recovered in about the first 200 days of
operation with the current sample throughput demand. The analyst has
welcomed the automated robotic TSS procedure. The responsibility
for operating the robotic instrumentation is viewed as a more interesting
task than the manual procedure, and gained "new" time can now be focused
on more creative method development and special problem solving challenges
in the environmental analysis area.
REFERENCES
1) Methods for Chemical Analysis of Water and Wastes, EPA-600/4-79-020,
Revised March 1983.
-------
634
TABLE 1. COMPARISON OF TSS RESULTS BY ROBOTIC AND MANUAL METHODS

SAMPLE:     BENCH SCALE BIOTREATER       AERATION BASIN
METHOD:     ROBOTIC TSS   MANUAL TSS     ROBOTIC TSS   MANUAL TSS
            MG/L          MG/L           MG/L          MG/L
            10670.        9670.          122.4         139.2
            10130.        10590.         115.2         152.0
            11631.        10250.         125.6         127.6
            11219.        11130.         123.2         130.0
            10820.        10540.         118.4         128.8
            9981.         10320.         122.8         128.4
            10761.        11530.         116.8         126.4
            10200.        10640.         141.2         124.8
MEAN        10676.        10584.         123.2         132.2
S.D.        566.          562.           8.1           9.1
C.V.(%)     5.3           5.3            6.6           6.9
-------
635
TABLE 2. RECOVERY RESULTS FOR TSS BY ROBOTIC AND MANUAL METHODS

SAMPLE              TRUE TSS   ROBOTIC      MANUAL       MANUAL APG
                    MG/L       % RECOVERY   % RECOVERY   MEAN % REC.***
APG # 1 *           36.5       107.         83.0         92.5
APG # 2 *           52.7       94.9         96.8         90.7
APG # 3 *           77.0       93.5         92.2         90.7
APG # 4 *           287.0      107.         94.8         94.2
APG # 5 *           316.8      98.7         97.2         94.6
APG # 6 *           498.6      94.9         96.3         94.2
KAOLIN CONTROL **   100.       104.         95.6

* Proficiency Environmental Testing Program Samples: prepared and
provided by Analytical Products Group (APG), Inc., Belpre, Ohio
** In-house control sample
*** Mean Recovery of 50 labs participating in the APG Program
-------
636
Figure 1. Robotic TSS Equipment
1. Zymark Robotic Arm
2. Balance
3. Desiccator
4. Oven
5. Flask Rack
6. Filtration Stations with Capacitance Sensors
7. Zymate Controller
8. Capacitance Sensor Electronics
9. Pneumatic Valves for Vacuum and Pressure
-------
637
Figure 2. TSS Robot
(Diagram of the robot layout: robot arm with a second hand having special
fingers, rinse station, eight filter stations, microprocessor, sample input,
and printer.)
Figure 3. TSS Linearity with Sample Volume
mg Weighed vs. Volume, Manual and Robot
(Plot of sample volume, ml, versus mg weighed, showing actual and predicted
values.)
-------
A Laboratory Robotic Method for
the Automated Determination of
Total Suspended Solids in
Environmental Water Samples
Joe C. Raia and Al Telfer
Shell Development Company
Westhollow Research Center
Houston, Texas
-------
in the
Identification
Preparation
Analysis
Results
Report
LIMS
Robotics
Instrumentation
Microprocessors
Computerized Data Reduction
-------
The TSS Analysis
Laboratory Robotic System
Measures Total Suspended
Solids in Water
Follows the Standard Procedure
(EPA Method 160.2)
Measures Volatile Residue
(EPA Method 160.4)
-------
641
-------
Robotic TSS Procedure
1. Precondition System
2. Remove Crucible from Desiccator
3. Weigh Dry Crucible (Initial Weight)
4. Moisten Filter in Crucible
5. Place Crucible in Stand
6. Invert Flask in Stand
7. Detect Conclusion of Rinse Water (3 Times)
8. Return Empty Flask to Rack
9. Reapply Vacuum to Crucible to Remove Any Remaining Water
10. Return Crucible to Oven and Heat
11. Cool Crucible in Desiccator
12. Re-Weigh Crucible
13. Heat Crucible Again, Cool in Desiccator, and Weigh Again
14. Calculate Difference Between Initial and Final Weights
-------
Comparison of TSS Results by Robotic and Manual Methods

           Bench Scale Biotreater, Aerator 61786    Aeration Basin
Method     Robotic TSS,   Manual TSS,               Robotic TSS,   Manual TSS,
           mg/l           mg/l                      mg/l           mg/l
           10670          9670                      122.4          139.2
           10130          10590                     115.2          152.0
           11631          10250                     125.6          127.6
           11219          11130                     123.2          130.0
           10820          10540                     118.4          128.8
           9981           10320                     122.8          128.4
           10761          11530                     116.8          126.4
           10200          10640                     141.2          124.8
Mean       10676          10584                     123.2          132.2
S.D.       566            562                       8.1            9.1
C.V. (%)   5.3            5.3                       6.6            6.9
-------
Recovery Results for TSS by Robotic and Manual Methods

                    True TSS,   Robotic WRC   Manual WRC   Manual APG
                    mg/l        % Recovery    % Recovery   Mean % Rec. c)
APG #1 a)           36.5        107.0         83.0         92.5
APG #2 a)           52.7        94.9          96.8         90.7
APG #3 a)           77.0        93.5          92.2         90.7
APG #4 a)           287.0       107.0         94.8         94.2
APG #5 a)           316.8       98.7          97.2         94.6
APG #6 a)           498.6       94.9          96.3         94.2
Kaolin Control b)   100.0       104.0         95.6

a) Proficiency Environmental Testing Program Samples: prepared and provided by Analytical
Products Group (APG), Inc., Belpre, Ohio.
b) In-house control sample.
c) Mean recovery of 50 labs participating in the APG Program.
-------
TSS Linearity with Sample Volume
mg Weighed vs. Volume, Manual and Robot
(Plot of mg weighed versus sample volume, showing actual and predicted values
for the manual and robotic methods.)
-------
Benefits of
Lab Automation
Unattended Operation
Reduced Time/Analysis
Improved Precision
Gained "New" Time for R&D
-------
647
MR. TELLIARD: Our next
speaker was supposed to be Gary Jackson, but I found out
they had some real technical work to do at the lab, so
instead of him, they sent Dale. Dale is going to talk about
some pesticide analysis using the Dean Stark extractor and
isotope dilution GC/MS.
-------
648
Determination of Semivolatile Pollutants in Sewage Sludge by
Soxhlet/Dean-Stark Extraction, High Performance Liquid
Chromatography Cleanup, and Isotope Dilution Gas Chromatography/
Mass Spectrometry
Gary B. Jackson and D.R. Rushneck, Analytical Technologies, Inc.,
225 Commerce Drive, Fort Collins CO 80524, and John Tessari,
Colorado Epidemiological Pesticide Study Center, Colorado State
University, Fort Collins CO 80523.
ABSTRACT
This paper gives the results of the determination of semi-
volatile pollutants in sewage sludge using Soxhlet/ Dean-Stark
(SDS) extraction, preparative scale high performance liquid
chromatography (HPLC) cleanup, and isotope dilution gas
chromatography/mass spectrometry (GCMS).
Large scale (70 gram) sludge samples were extracted by SDS,
continuous liquid/liquid, Soxhlet, and ultrasonic extraction
techniques. All of these techniques resulted in large amounts of
interferences being co-extracted with the compounds of interest.
Results showed that SDS extraction is more efficient at
extracting the compounds of interest than the other techniques,
but it is also more efficient at extracting interfering compounds.
HPLC improves cleanup slightly, but not sufficiently to make
this technique desirable as a standard cleanup procedure.
-------
649
The conclusion from these tests is that large-scale sewage
sludge samples will require further, alternative cleanup
techniques before reliable measurements of the semi-volatile
pollutants at the part-per-billion level in the solid phase of
the sludge can be made.
BACKGROUND
EPA has been attempting to measure the semi-volatile and
pesticide pollutants in sewage sludge at the part-per-billion
(ppb) and sub-ppb level since the early 1970's. Those familiar
with the determination of these analytes in sewage sludge will
expound at length on how difficult these determinations are,
mainly because of the large concentrations and large number of
interfering substances in the sludge. A variety of different
approaches have been tried in attempts to improve the precision
and accuracy of measurements, and to lower the detection limit for
the pollutants of interest.
The use of isotope dilution gas chromatography/mass
spectrometry (GCMS) and gel permeation chromatography (GPC) have
been helpful in improving the measurement of pollutants in sludge.
Further improvements in cleanup are limited by the desire to
measure analytes with a wide variety of chemical species. Thus,
the normal cleanup techniques that may be effective for a small
group of similar compounds (e.g., sulfuric acid cleanup of sludge
extracts for determination of polychlorinated biphenyls) cannot be
used for cleanup of normal and polynuclear aromatic hydrocarbons,
amines, phenols, ethers, and other species because of the
destruction or removal of many of these species by the cleanup
-------
650
technique.
Against this background, this study attempted to employ
Soxhlet/Dean-Stark (SDS) extraction and preparative scale high
performance liquid chromatography in addition to GPC and isotope
dilution GCMS as techniques for selective extraction and for large
scale cleanup of sludge extracts. The study objectives are shown
in Figure 1.
SLUDGE
The sewage sludge used for all measurements was filter cake
obtained from a nearby publicly owned treatment works (POTW). The
characteristics of this sludge are shown in Figure 2.
The sludge contained 14 percent solids. EPA Method 1625, the
isotope dilution method for determination of semi-volatile
pollutants, employs either continuous liquid/liquid extraction
(CLLE) or ultrasonic (sonic) extraction, depending on the solids
content of the sample. If the percent solids is less than one,
the sample is extracted directly. If the percent solids is in the
range of 10 - 30, the sample is diluted with reagent water to one
percent solids (10 grams in one liter) and extracted using CLLE.
If the percent solids is 30 percent or greater, the sample is
extracted using the sonic technique.
In order to test SDS and other extraction techniques, a mass
of 10 grams of solids was chosen. At 14 percent solids, the
weight of sludge sample was therefore 70 grams. The 10 gram mass
of solids is at the break point between the use of CLLE and sonic
extractions; i.e., this mass would be required if the sludge
contained 30 percent solids.
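Restating that sizing logic compactly (a sketch only; the solids ranges are
those quoted above, and the function names are ours):

    def choose_extraction(percent_solids):
        """Select the extraction technique from the percent-solids ranges given above."""
        if percent_solids < 1:
            return "direct extraction"
        if 10 <= percent_solids < 30:
            return "dilute to 1 percent solids (10 g/L) and extract by CLLE"
        if percent_solids >= 30:
            return "ultrasonic (sonic) extraction"
        return None  # intermediate percentages are not addressed in the text above

    def wet_sample_mass(target_solids_g, percent_solids):
        """Wet sludge mass needed to supply the target mass of solids."""
        return target_solids_g / (percent_solids / 100.0)

    # 10 g of solids at 14 percent solids -> wet_sample_mass(10, 14) = 71.4 g,
    # roughly the 70 gram sample used here; choose_extraction(14) -> CLLE.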
-------
651
Method 1625 requires the use of a "dilute aliquot" when
interferences are known or suspected. For the dilute aliquot, the
sample is diluted by a factor of 10 and this diluted sample is
then analyzed. The dilute aliquot was not employed in this study
because it is well known that one gram of sludge solids can be
successfully analyzed by Method 1625 as written.
OVERVIEW OF EXTRACTION/CLEANUP
In order to determine the effectiveness of SDS extraction, the
SDS technique was compared to several other techniques, as shown
in Figure 1. The solvent systems used are given in the figure.
An overview of the extraction and cleanup processes is shown
in Figure 3. For each extraction technique, an aliquot of sludge
was extracted after spiking with the stable isotopically labeled
compounds, a second aliquot was extracted after spiking with the
labeled compounds and pollutants, and a third aliquot was
extracted unspiked. The spiking level was 100 ng for each
component.
Extracts were concentrated to 10 mL final volume and processed
through the GPC. In this process, 50 percent of the extract is
recovered. Each 5 mL GPC eluate was concentrated to 0.5 mL
(to compensate for the 50 percent loss). The 0.5 mL concentrated
eluate was split in two. One of these splits was spiked with
internal standard and a one microliter aliquot was injected into
the GCMS. This aliquot permitted comparison of the extraction
techniques.
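As a quick accounting of where the 100 ng spike ends up in this scheme
(a sketch assuming complete extraction recovery; the volumes are those
quoted above):

    spike_ng   = 100.0   # spiked per compound
    extract_ml = 10.0    # concentrated extract loaded onto the GPC
    eluate_ml  = 5.0     # 50 percent of the extract recovered
    final_ml   = 0.5     # GPC eluate re-concentrated to 0.5 mL

    ng_recovered = spike_ng * (eluate_ml / extract_ml)   # 50 ng
    final_conc   = ng_recovered / final_ml               # 100 ng/mL
    on_column_ng = final_conc * 0.001                    # 1 uL injected -> 0.1 ng

Splitting the concentrate in two changes only the volume available, not this
concentration, so each one microliter injection carries on the order of
0.1 ng (100 pg) of each spiked compound.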
The remaining halves of the concentrated GPC eluates from the
SDS extraction technique were subjected to HPLC cleanup. The
-------
652
concentrated GPC eluate from the unspiked sludge was spiked with
the pollutants and labeled compounds so that losses associated
with the sludge matrix during HPLC cleanup could be quantified.
In addition, a standard containing the pollutants and labeled
compounds was processed through the HPLC to measure losses in the
absence of the sludge matrix.
After HPLC cleanup, the HPLC eluates were re-concentrated to
250 uL, the internal standard was added, and one microliter
aliquots were injected into the GCMS.
SOXHLET/DEAN-STARK EXTRACTION
SDS extraction employs a moisture trap in combination with a
Soxhlet extractor, as shown in Figure 4. The successful
application of this technique to determination of chlorinated
dioxins and furans has been reported (reference 1), and the
technique is part of EPA Method 1613 for dioxin/furan
measurements.
The apparatus uses a lighter than water solvent that forms an
azeotrope with water for the extraction. Most commonly, the
solvents employed have been benzene and toluene. The azeotrope is
condensed and falls into the moisture trap where the water settles
to the bottom of the trap. The water can be drained and measured
either gravimetrically or volumetrically. The percent moisture in
the sample can then be calculated based on the sample weight.
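The moisture calculation referred to here is straightforward; a minimal
sketch (our variable names), assuming the trapped water is measured
gravimetrically:

    def percent_moisture(water_collected_g, sample_g):
        """Percent moisture from the water drained out of the Dean-Stark trap."""
        return 100.0 * water_collected_g / sample_g

    # Example: collecting about 60 g of water from the 70 g sludge sample gives
    # roughly 86 percent moisture, consistent with the 14 percent solids cited
    # earlier.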
In applying the SDS extraction technique, we believed that the
use of a slightly polar solvent (benzene) would reduce the
quantity of interferences co-extracted from the sludge, yet would
permit extraction of the components of interest.
-------
653
HPLC CLEANUP
This cleanup employed a preparative scale column with the
characteristics shown in Figure 5. The column was operated
isocratically with acetonitrile as the eluting solvent in a Waters
HPLC system with an ultra-violet (UV) detector operated at 254 nm.
RESULTS
Chromatograms
Figures 6-10 show reconstructed ion current (RIC)
chromatograms comparing the various extraction techniques. These
chromatograms are scaled to the largest peak. The fine structure
in these chromatograms gives an indication of how much material is
being extracted and how readily the components of interest can be
measured.
The small, well defined peaks at the beginning of these
chromatograms are the labeled compounds and can be used to compare
the total amounts of material extracted, and to indicate the
degree of difficulty in locating the labeled and native compounds
in the sample matrix.
Chromatograms of the base/neutral and acid extracts are
analyzed separately in Method 1625. As shown in Figures 6 and 7,
keeping these extracts separate provides an improvement over the
bulk simultaneous extraction of the base/neutral and acid
analytes, as evidenced by Figures 8-10.
The chromatogram in Figure 10 gives an indication that much
more material is extracted using the SDS extractor than with the
other extraction techniques. Nearly all of the analytes are
-------
654
masked by the large amount of material extracted from the sample
matrix. The two large peaks in this chromatogram, at or near
the same retention time in all of the chromatograms, are
hexadecanoic and octadecanoic acids.
Comparison of Extraction Methods
Figure 11 compares the concentrations of pollutants detected
by the various extraction methods. In reducing the GCMS data, a
formidable problem is presented in attempting to identify the
pollutants. Spectra are heavily contaminated by interfering
compounds, and the quantitation m/z is frequently inflated by
a contribution from one or more of these compounds. It must be
reiterated that the sample size used for these analyses is larger
than normally attempted, so the lack of complete success in
rigorously determining the concentration and identity of the
pollutants of interest is attributable to this large sample size.
Further, the seeming lack of success of identifying pollutants
from the SDS extraction can be attributed to the large amounts of
interfering materials co-extracted from the matrix.
HPLC Cleanup
Figures 12 and 13 allow comparison of the HPLC chromatogram of
the 100 ug/mL pollutant standard with that of the spiked SDS
extract. The similarities in the chromatograms exist because the
UV detector responds primarily to the polynuclear aromatic
compounds in these solutions and because there is a lack of naturally occurring
polynuclear aromatics in the sludge. The detector is attenuated
to bring the chromatograms on scale. Even so, saturation of the
-------
655
detector occurs in the beginning of the HPLC run for the SDS
extract.
Figure 14 compares the number of compounds detected before and
after HPLC cleanup with that of a standard containing all 74
labeled and 82 native compounds. As can be seen, HPLC cleanup
improves the number of compounds detected in the SDS extract, but
HPLC cleanup also removes some of the analytes of interest. The
analytes removed consist mainly of the normal hydrocarbons that
elute on the tail of the chromatogram (see Figures 12 and 13).
CONCLUSIONS
From the test results presented in this report, it can be
concluded that:
(1) Soxhlet/Dean-Stark extraction is more efficient at extracting
substances from the sludge matrix. Unfortunately, the greater
numbers and amounts of substances extracted mask the compounds of
interest.
(2) HPLC provides a slight improvement in cleanup of sludge
extracts. Unfortunately, some analytes of interest are lost in
this cleanup.
(3) Large amounts of sludge (10 grams of solids) cannot be
effectively extracted and cleaned up by the techniques tested in
this work. Based on experience with other sludge samples, smaller
amounts of sludge (1-3 grams of solids) can be effectively
extracted and cleaned up using continuous liquid/liquid extraction
and gel permeation chromatography.
-------
656
REFERENCE
Lamparski, L.L., and Nestrick, T.J., "Novel Extraction Device for
the Determination of Chlorinated Dibenzo-p-dioxins (PCDDs) and
Dibenzofurans (PCDFs) in Matrices Containing Water", Chemosphere,
19:27-31, 1989.
-------
SDS EXTRACTION AND HPLC CLEANUP OF SEWAGE SLUDGE
OBJECTIVES
COMPARE SDS EXTRACTION WITH OTHER METHODS
ASSESS CLEANUP PROVIDED BY HPLC
ACHIEVE LOW DETECTION LIMITS IN SLUDGE
EXTRACTION METHODS COMPARED

METHOD                 SOLVENT
SOXHLET/DEAN-STARK     BENZENE
SOXHLET                CH2CL2
CONTINUOUS             CH2CL2
SONICATION             CH2CL2/ACETONE
FIGURE 1
STUDY OBJECTIVES
-------
SDS/HPLC
SLUDGE USED
TYPE: FILTER CAKE
PERCENT SOLIDS: 14
SAMPLE SIZE: 70 GRAMS
14 % SOLIDS x 70 g = 10 g SOLIDS
10 g SOLIDS IS CLLE/SONIC BREAKPOINT IN 1625
FIGURE 2
SLUDGE CHARACTERISTICS
-------
OVERVIEW FLOW CHART

WEIGH (70 grams)
SPIKE (unspiked / labeled / native, 100 ng each)
EXTRACT
CONC (10 mL FV)
GPC (5 mL)
SPLIT > INT STD > GCMS
HPLC
CONC (250 uL) > INT STD > GCMS

FIGURE 3
OVERVIEW OF EXTRACTION AND CLEANUP
-------
FIGURE 4
SOXHLET/DEAN-STARK EXTRACTOR
-------
HPLC CLEANUP
COLUMN
TYPE: µBONDAPAK (WATERS)
DIMENSIONS: 2.5 X 10 CM
MATERIAL: C18; 125 Å; 10 µm
MOUNTING DEVICE: RADIAL MODULE
SOLVENT & FLOW RATE
ACETONITRILE, ISOCRATIC @ 5 ML/MIN
FIGURE 5
-------
FIGURE 6. CLLE ACID EXTRACT AFTER GPC
SAMPLE: CLLE ACID - LABELED
(Reconstructed ion current chromatogram; scan number / retention time on the
horizontal axis.)
-------
FIGURE 7. CLLE B/N EXTRACT AFTER GPC
SAMPLE: CLLE B/N - LABELED
(Reconstructed ion current chromatogram; scan number / retention time on the
horizontal axis.)
-------
FIGURE 8. SOXHLET EXTRACT AFTER GPC
SAMPLE: SOX - LABELED
(Reconstructed ion current chromatogram; scan number / retention time on the
horizontal axis.)
-------
FIGURE 9. SONIC EXTRACT AFTER GPC
SAMPLE: SON - LABELED
(Reconstructed ion current chromatogram; scan number / retention time on the
horizontal axis.)
-------
FIGURE 10. SDS EXTRACT AFTER GPC
SAMPLE: SDS - LABELED
(Reconstructed ion current chromatogram; scan number / retention time on the
horizontal axis.)
-------
COMPOUNDS DETECTED - CONC IN SOLIDS (mg/kg)

Compounds: PHENOL, n-C10, 1,3-DICHLOROBZ, p-CYMENE, NAPHTHALENE, n-C12,
BIPHENYL, ACENAPHTHENE, n-C16, DIBUTYLPHTH, n-C20, BIS(2-ETHEX)PH, n-C30,
PHENANTHRENE, DIOCTYLPHTH

Values by extract:
ACD:  1, -, -, -, -, -, -, -, 140, 31
B/N:  -, 1, 1, 3, 1, -, -, -, 5, 3, 1, 12, 3
SOX:  7, 43, 4, 24, 2, 2, 26, 1, 32, 0.5, 57, 150, 8
SDS:  -, 7, 1, 8, 2, 2, 16, 0.1, 3, -, 14, -
SON:  24, 1, 6, 1, 1, 4, 1, 9, -, 0.5, 1500
SDS:  -, -, -, 2, -, 24, -, 0.7, 9, 112, (7), 2, 3

FIGURE 11
RESULTS OF EXTRACTION METHODS
-------
668
HPLC OF SDS EXTRACT (AFTER GPC)
FIGURE 12
-------
669
HPLC OF 100 ug/mL NATIVE STANDARD
FIGURE 13
-------
SDS + HPLC RESULTS

             SDS EXTRACT     EXTRACT AFTER HPLC     SPIKED
             LBL    NAT      LBL    NAT             LBL    NAT
NUMBER*      29     42       41     43              52     58
AVE % REC    13     103      31     97              81     106

*OF 74 LABELED & 82 NATIVE

FIGURE 14
EFFECT OF HPLC CLEANUP
-------
671
MR. TELLIARD: Are there
any questions?
(No response.)
MR. TELLIARD: No
questions?
MR. RUSHNECK: No
questions. Everyone can go home and do sludge now.
Wonderful.
MR. TELLIARD: Thank you,
Dale. Thank you very much.
-------
672
MR. TELLIARD: A couple of
notes. The following speakers are going to be talking about
one of our favoritest subjects, laboratory certification and
reciprocity. We have beat this horse so many times it is
wounded and dying, but we are going to do it again. We are
going to do it until we get it right.
The agency has done something between our last
discussion on the subject and today in that a committee was
formed which is called the Methods Management Committee which
is part of the agency that is looking at monitoring and
methods across all media and in all program offices.
One of the ad hoc committees that has been formed under
that is a committee to look at the issue of certification.
Whether the agency is going to do it...as you know/ part of
our agency does do that, Drinking Water...but what the other
program offices are going to do and what form it might look
like and take. So, you will hear some of that kicked around
today, I hope.
So, our next speaker will be Rhonda Whalen of the
Department of Health and Human Services who will talk about
the Federal Government perspective on the regulation of
laboratories under CLIA of 1988. She will explain that.
-------
673
MS. WHALEN: Good
afternoon.
I am Rhonda Whalen. I am with the Health Care Financing
Administration with the Health Standards and Quality Bureau,
and we are charged with the responsibility of implementing
the Clinical Laboratory Improvement Amendments of 1988.
Basically, the law that was enacted October 31, 1988
pertains to all entities that test human specimens. To set
this in context, I am going to give you a little background
information on what we currently do, what our current
framework is, and then what it is we are going to be doing
in the future.
At present, the Health Care Financing Administration
regulates laboratories under two federal programs, the
Medicare/Medicaid programs and the Clinical Laboratories
Improvement Act of 1967. Medicare and Medicaid are
reimbursement or payment programs, Medicare for
beneficiaries over 65 and Medicaid for recipients through
the State program. The Clinical Laboratories Improvement
Act of 1967 regulates laboratories that test specimens that
cross State lines.
At present, we have approximately 4900 Medicare-
approved independent laboratories, and we have approximately
6600 Medicare-approved hospital laboratories. Of the 6600
Medicare-approved hospital laboratories, about 5200 are
-------
674
accredited by either the Joint Commission on Accreditation
of Health Care Organizations or the American Osteopathic
Association.
What this means is that the Federal Government does not
directly inspect these hospitals. Rather, we deem the
accrediting organization's inspection programs, and the way
we conduct inspections is that we have agreements with all
50 States, and the States do the direct inspections, and the
Health Care Financing Administration has regional offices
that oversee the State operations.
There are about 2900 laboratories subject to the
Clinical Laboratories Improvement Act of 1967, that is, they
test human specimens in interstate commerce.
So, the total universe of regulated laboratories by the
Health Care Financing Administration is approximately
12,000. As I mentioned, for half of those, we deem the
accreditation programs of JCAHO or the American Osteopathic
Association to inspect the hospital laboratories.
Beginning in 1987, a series of newspaper and magazine
articles were published on the quality of laboratory
testing, and, simultaneously, television programs were aired
concerning the number of laboratories that were not subject
to either Federal or State regulations.
Congress held hearings in 1988 and heard testimony from
victims of faulty laboratory testing. Specific concerns
-------
675
were raised about the validity of cholesterol screening and
the accuracy of pap smear results.
Now, in that environment, we currently are responsible
for regulating laboratories based on location. I mentioned
independent laboratories, hospital laboratories. We also
have laboratory services in nursing homes and dialysis
facilities, et cetera.
In order to run the regulatory program for
laboratories, we had interagency agreements with the Public
Health Service. We have a memorandum of understanding with
the Centers for Disease Control for provision of scientific
and technical expertise on questions related to advances in
instrumentation, new technology, proficiency testing, and
cytology services. We also have a memorandum of
understanding between the Health Care Financing
Administration and the Food and Drug Administration for the
provision of technical assistance concerning blood bank
services.
So, at present, we work very closely with the Public
Health Service and both CDC and FDA. This is based on long-
standing arrangements, because prior to 1979, the Centers
for Disease Control had responsibility for regulating the
interstate laboratories. In 1979, the Health Care Financing
Administration became responsible for licensing and
-------
676
inspection of laboratories that test specimens in interstate
commerce.
In 1980, the Health Care Financing Administration
assumed responsibility for the inspection of approximately
4500 registered blood establishments that also participate
in Medicare. These transfusion facilities, which are
primarily located in private hospitals, either collect or
transfuse whole blood or packed cells or collect other blood
components in emergency situations.
Both of these arrangements were generated on the
premise that we were trying to reduce inspections. The
Department of Health and Human Services had simultaneous
programs and, per FDA, CDC, and the Health Care Financing
Administration, was conducting inspections on an annual
basis. So, what we were attempting to do was reduce the
numbers of inspections.
Therefore, we negotiated agreements with both the Centers
for Disease Control and FDA under which we took over the
inspection responsibility, leaving the Centers for Disease
Control and FDA with the responsibility for
providing HCFA with the technical expertise necessary to run
those two programs while we continued to carry on our
primary responsibility which is to inspect facilities for
approval for payment under Medicare and Medicaid.
-------
677
Now, there is an historical basis which was occurring
during this timeframe prior to the enactment of CLIA. One
piece of legislation was the Omnibus Budget Reconciliation
Act of 1987. This was Congress' first attempt to say that
the laboratories located in physicians' offices that we
currently reimburse for services provided to Medicare
beneficiaries should be subject to regulations.
Essentially, what Congress said was that a physician's
office laboratory that performs more than 5000 tests per
year should be subject to regulation, and they charged the
Department with regulating those laboratories.
In the meantime, this omnibus law that we are charged
with implementing was enacted on October 31, 1988, and it is
bigger than physician office laboratories. It says that all
entities, certainly including physician office laboratories,
that perform laboratory testing would be subject to federal
regulations.
The main impact of the law is that it affects all
testing entities, dramatically increasing the Health Care
Financing Administration's role from regulating the current
12,000 laboratories to include an estimated 100,000
physician office labs and perhaps another 200,000 to 300,000
or 400,000 laboratories.
A unique aspect of the Clinical Laboratory Improvement
Amendments of 1988 is that the law specifies the regulation
-------
678
of laboratories according to testing performed. I mentioned
a few minutes ago that we are responsible for regulating
laboratories based on location. It is the current basis of
our regulation.
This law says we should regulate based on the testing
performed. It certainly is a reasonable basis for
establishing regulation, but it is somewhat difficult
because we have no data. We have no information on adverse
impact of incorrect laboratory results, and we are being
asked to establish regulations on the basis of the
complexity of laboratory testing, meaning that if the
testing is more complex, we should have more strenuous
regulations.
Therefore, if a particular test requires more
interpretation, more objectivity, more judgment, then that
type of regulation or that type of testing should be more
heavily regulated, and we are without data to establish that
type of regulatory program.
Now, one thing that Congress did in December of last
year is it withdrew the OBRA '87 law which said we would
regulate physician office labs that do over 5000 tests.
This was extremely helpful to us, because, number one, this
OBRA '87 law was in conflict with CLIA '88. CLIA '88
says you regulate all entities. It doesn't specify
physician office labs, and it doesn't talk about volumes of
-------
679
tests performed in a laboratory. It talks about the kinds
of tests performed.
So, we were faced with a program that regulates by
location, being asked to implement a law that designates
physician office labs to be regulated and specifies volume,
simultaneously with a law that says no, you should regulate
on the basis of testing. So, we were in somewhat of a
dilemma.
Congress helped us out by withdrawing the Omnibus
Budget Reconciliation Act of 1987 in December of 1989.
Therefore, we have one omnibus law that we must implement.
The other thing that the Omnibus Budget Reconciliation
Act of 1989 did is tie in Medicare payments with CLIA. What
that means is that in order for a laboratory to be paid
under Medicare, that laboratory will have to be certified
under CLIA.
That has always been true for Medicare-approved
independent labs or hospitals or nursing homes but has not
been true for physician office labs. Presently, physician
office labs are paid simply on the basis of the physician's
name and receiving a billing number. The physician's lab
does not have to meet any standards.
So, CLIA '88 changes that and says that, number one,
those labs will be regulated, and then the Omnibus Budget
Reconciliation Act of 1989 came along and said not only were
-------
680
those labs to be regulated, but in order to be paid, those
laboratories will have to have a certificate.
Now, as I mentioned, at present, we are regulating
12,000 labs under Medicare and have the potential of
regulating over 300,000 labs under CLIA. Therefore, the
CLIA program will be the largest program and Medicare will
then be a smaller subset of the CLIA program.
During this time period in which Congress was hearing
testimony on the faulty laboratory testing practices, the
Health Care Financing Administration, in conjunction with
the Centers for Disease Control and the Food and Drug
Administration, was attempting to revise the current
regulations that we have for laboratories. We had a Notice
of Proposed Rulemaking which was published August 5, 1988,
and we stated our proposed regulations to revise our current
requirements for personnel, quality control, record keeping,
proficiency testing, and we were attempting to establish new
quality assurance requirements.
The closing date for the comments was November 3, 1988.
CLIA was enacted October 31, 1988. So, we had somewhat of a
dilemma, that dilemma being would we throw out our proposed
rule and ignore the over 1600 comments we received and just
start with the implementation of CLIA '88.
We determined that what we had proposed as a revision
to our current requirements was in keeping with the
-------
681
requirements that we would have to implement under CLIA '88.
Therefore, we considered the 1600 comments, we analyzed
them, and we responded to them in the preamble to the
regulations that we published in final on March 14, 1990.
Those regulations will be effective September 10, 1990.
They will affect those laboratories currently regulated
under Medicare and those laboratories that test specimens in
interstate commerce. What this rule does is it establishes
the basis or the framework for the regulations that we will
implement under CLIA '88.
This regulation published in final on March 14
establishes uniform proficiency testing requirements, a
grading system, criteria for approving proficiency testing
programs, the types of samples that should be sent by an
approved proficiency testing program to participating
laboratories, the kinds of challenges that should be
included, and the frequency of the testing events.
We revised our quality control requirements to be
current with new methodology and changes in technology. We
established quality assurance requirements that would
encompass the outcome measurements of laboratory quality,
and the self-implementing provisions of CLIA '88, that is,
the language in the statute that is so clear it doesn't
require interpretation or rulemaking we put in this rule.
We just lifted out the statutory language, and we put those
-------
682
requirements into the March 14 rule so that we could go
forward and implement those provisions of CLIA '88 that are
self-implementing.
Now, we determined that the Clinical Laboratory Improvement
Amendments of 1988 are so far reaching and so extensive that
we could not propose one rule to implement CLIA '88.
Therefore, we have five rules all of which will require
comment period and evaluation of the comments and then
publication of a final rule.
The first rule is the most controversial rule, and I
called this morning to see where we were. There is a strong
possibility that that rule is going to be published the end
of next week. We have just about achieved the Office of
Management and Budget clearance needed for publication in
the Federal Register.
This rule will establish standards based on the
complexity of tests performed. It will propose a list of
tests for waiver or exemption from the requirements, that
is, those tests that are so simple, accurate, and low risk
that they pose no reasonable risk of harm if performed
incorrectly; if laboratories do only those tests, the
laboratory would be exempt from meeting Federal standards.
We are also going to propose personnel requirements
based on complexity of testing. This is also quite
controversial since heretofore we have not done that, and
-------
683
anytime you specify personnel requirements, that always
generates a lot of comments. That rule will have a 90-day
comment period, and one of the most difficult things about
this whole activity is that we don't know the number of labs
we need to regulate, we don't know the testing performed in
those laboratories, we do not know the types of personnel
employed in those laboratories.
So, that brings me to the second rule that we are going
to attempt to implement. It will probably be published in
June as a proposed rule and will have a 60-day
comment period.
That rule is an interim procedure for provisional
registration of laboratories. That is where we will sign
laboratories up. They will pay a registration fee, and we
will find out how many laboratories there are in the United
States, what kinds of testing the laboratories conduct, and
the types of individuals employed in these laboratories.
Also, this is a self-implementing law, that is,
Congress is not appropriating any budget, any funds, to run
this program. Therefore, we have to charge the laboratories
a fee, and we determined it would be most appropriate to
charge each laboratory the portion of the amount necessary
to determine compliance. That is, if the lab is smaller,
less complex, fewer tests are performed, that laboratory
would pay a smaller fee. Ultimately, what we are proposing
-------
684
to do in this regulation is to set forth our fee schedule
methodology.
That rule will have a 60-day comment period, and that
is probably the rule that we will implement first. It is
not quite as controversial and, obviously, we do need to get
the laboratories registered, and we do need to start
collecting money in order to maintain the program.
Now, the third rule that we need to publish as a
proposed rule is the criteria for recognition of
accreditation programs. The law specifies that we can
recognize private non-profit organizations' accreditation
programs and State programs with equivalent standards.
So, what we want to do is propose the criteria for
recognition of those programs, the kinds of information we
are going to request, the kinds of data that we are going to
need on an ongoing basis, and then as soon as we establish
our standards, then we can accept applications from
interested accreditation programs and State programs. That
rule is probably going to be published later on this summer.
The fourth rule is the proposed regulations that would
implement the intermediate sanction and adverse action
procedures, that is, the penalties that will accrue to a
laboratory that does not meet the requirements. There are
monetary sanctions and various penalties that laboratories
-------
685
are subject to that do not meet the requirements. That rule
will also be published later on in the summer.
The important thing about the hearings attached to CLIA
are that in Medicare, if a laboratory does not meet the
requirements, we terminate that laboratory's approval at
that time. The laboratory then is allowed an opportunity
for a hearing.
Under CLIA '88, any adverse action against a
laboratory, if it went to conclusion, would prohibit that
laboratory from conducting testing in this country; if you
lose your certificate, then you can't conduct testing in the
United States. Therefore, the laboratory is allowed an appeal process
prior to any action on the certification.
So, we have to propose the hearing process, the types
of evidence that will be allowed. In particular, it pertains to
the judicial proceedings.
Along with that rule, there is a particular statutory
provision that says that the department will make available
to the public lists of the laboratories that are subject
to intermediate sanctions, Medicare approval terminations,
and loss of certification. So, that information is going to
be available to the public.
The fifth and final rule will set forth the
responsibilities and functions of the State survey agencies
that we employ to do the direct surveys for us and also the
-------
686
responsibilities of the Health Care Financing Administration
in carrying out the activities of CLIA '88.
There continues to be a great deal of interest from
individual laboratories, laboratorians, physicians,
laboratory equipment manufacturers, and the media in the
implementation of CLIA. For four nights in February,
Channel 4 TV news in Washington, D.C. aired a program called
"Deadly Mistakes: Promises Broken" which continued to focus
on individuals who have died or were irrevocably harmed due
to incorrect laboratory results. There is a continued
emphasis by the media and a focus on the need for
implementing these requirements. We have felt a good deal
of pressure to get these regulations out, the comment period
allowed, and then publish these rules in final to start
regulating laboratories under CLIA.
Congress held hearings on March 7, and our
administrator testified, and the thrust of the hearings,
basically, from the Senate was, why don't you have these
regulations published? Why aren't you regulating
laboratories under CLIA? We enacted these laws because we
wanted to protect the American public, and we don't see the
Department responding.
So, essentially, we are about ready to at least be able
to say that we are doing something, but while the media and
Congress are concerned that we have not acted, the soon to
-------
687
be regulated entities are very concerned that we are moving too
fast.
There are five studies that are mandated under CLIA '88
all of which would be very useful in establishing
regulations. Those studies were supposed to be out in May.
They are not completed. That is the charge of the Public
Health Service. In fact, they are in their very embryonic
stage.
So, in the meantime, we are going to establish
regulations in the absence of the information that would be
gained from these studies.
Thank you.
MR. TELLIARD: Thank you.
-------
688
QUESTION AND ANSWER SESSION
MR. TELLIARD: Are there any
questions?
MS. ORDONA: I have a
question. My name is Alicia Ordona. I am from the
Commonwealth of Virginia, Division of Consolidated
Laboratory Services.
My next question is, is the State laboratory going to
be included in this regulation?
MS. WHALEN: Is the State
laboratory what?
MS. ORDONA: Going to be
included under this regulation.
MS. WHALEN: If the State
laboratory does tests on human specimens, yes.
MS. ORDONA: Really?
MS. WHALEN: Yes, and all
Federal laboratories.
MS. ORDONA: Thank you.
MR. RUSHNECK: Rhonda, Dale
Rushneck of ATI.
I am curious about the fee schedule methodology. I
know you don't have to have the rule out until late summer,
but can you give us your thinking? Is it going to be based
on the size or dollar volume of the laboratory, on the number of
-------
689
tests performed, is it going to be split and a fee
associated with each test? What is the current thinking?
MS. WHALEN: What we did,
we really did look at what we currently...our current costs
related to the inspection of laboratories and the
determination of laboratory compliance. It is somewhat
difficult right now, because we are budgeted an overall
amount, and we negotiate with every State survey agency a
different hourly rate, because the States are our
contractors, so to speak.
The part that makes it complicated is that our data
right now is somewhat limited because it is by State, not by
laboratory, and it is also based on our current regulatory
program which we are attempting to change and certainly
strengthen the requirements and the amount of time that will
be necessary to determine compliance.
So, with all of that, then I am going to tell you that
we used our current information and attempted to establish
fees. It is not a particular secret, but the fees will be
biennial, every two years, because that is specified by law.
If the laboratory is eligible for a certificate of waiver,
it is a nominal fee, and the nominal fee that we came up
with for the proposed rule is around $150.
The certificate fee, that is, the administrative cost
for issuing the piece of paper, will be between $200 and
-------
690
$300 for a certificate, a provisional certificate, or an
accreditation certificate. Now, the fees that we are
proposing for determining compliance...and remember again
this is for every two-year period...runs from around $800
to...it seems like it is up around $2000.
Now, we have set what we believe to be our base costs.
If it turns out that it costs us more to determine a
laboratory's compliance, meaning that the laboratory was not
ready for an inspection, had a lot of deficiencies, we had
to go back, we had to conduct a follow-up inspection, the
laboratory will get a second bill.
So, the first bill is going to be based on the extent
of services and the volume of services, and that will set a
base cost, and it is the minimum fee. If we incur
additional costs, then the laboratory will get a second
bill.
And as we gain more information on our costs relative
to determining compliance under CLIA '88, we will upgrade
our fees or downgrade if it turns out that we should be
charging less, although I kind of doubt that, because CLIA
'88 is going to require many more things. Therefore, I
think that fees will go up, not down.
MR. TELLIARD: All right,
thank you very much.
Baldev, did you have a question?
-------
691
MR. BATHIJA: A quick
question. Will you be allowed to keep that fee you collect,
or does it revert back to the...
MS. WHALEN: Federal
Treasury, and then we get budgeted out of that money, we
hope.
MR. TELLIARD: I would
like to have the panelists come up during the break that we
are about to have so that you can go get your Coke and Pepsi
and come back in, and if the panelists would come up and
take a chair up here so that you are a sitting target for the
rest of the audience, if you would do that right after the
break, I would appreciate it.
Thank you. Please get back in here in 15 minutes.
(WHEREUPON, a brief recess was taken.)
-------
692
MR. TELLIARD: Our next
speaker is Gerald Hoeltge who is going to carry on with the
program of talking about program and laboratory
certifications. He comes from our great town, Cleveland,
and is connected with the Cleveland Clinic.
Gerald?
-------
693
DR. HOELTGE: Thank you
very much.
In the next few minutes, I want to present the College
of American Pathologists' laboratory accreditation program
as an example of what the private sector can accomplish in
terms of laboratory certification.
This particular program began about 1965 quite
informally when a group of pathologists got together and
realized that there was no good mechanism for
interlaboratory inspection directed towards improvement.
They agreed to inspect each other's laboratories to offer
constructive criticism and helpful suggestions. That
activity has grown over the years to the point at which now
it has a fairly substantial structure. We accredit now just
under 4100 clinical laboratories.
I want to talk about some of the characteristics of the
program, to give you an idea of the philosophy behind it so
that the operational aspects will be a little more
understandable.
First of all, this is a voluntary program. Now, when I
say voluntary, I am talking about the inspectors for the
program. They are all practicing laboratorians. They are
volunteering their time and expertise to the program.
They are recompensed only for meals and mileage. It is also
voluntary for the laboratory directors who choose to be
-------
694
involved in this program. I say that with a little bit of
caution because, in fact, many directors feel compelled to
get their laboratories accredited. It comes from several
different directions. Ms. Whalen talked about CLIA '67.
Those laboratories which are in interstate commerce have to
be licensed.
The inspection agency for such laboratories, of course,
is the Centers for Disease Control or one of its contract
organizations. However, a laboratory which is interstate
licensed and accredited by the College can apply to the
Health Care Financing Administration for a waiver from
regular CDC inspections, and that means that the laboratory
has one fewer inspection to suffer through on each cycle.
Also, the Joint Commission on Accreditation of
Healthcare Organizations recognizes the CAP program for hospital
laboratories. As Ms. Whalen pointed out, the Joint
Commission is one of two private agencies that can qualify a
provider for Medicare reimbursement. So, that means that if
a hospital laboratory is certified by the College, that
certification will be accepted by the Joint Commission. If
the laboratory uses that Joint Commission accreditation to
qualify for Medicare reimbursement, then the CAP program is,
in effect, qualifying the laboratory for Medicare reimbursement. Some
laboratories find that to be a compelling reason.
-------
695
Thirdly, we are seeing increasing numbers of private
carriers who are offering provider contracts for
competitive bidding, and they are including CAP laboratory
accreditation among the bidding specifications. Points are
given to those laboratories that have chosen to be measured
by the accreditation criteria. That has brought a lot of
commercial laboratories into the program who had never
expressed an interest before.
So, those external characteristics aside, it is a
voluntary program, and it certainly is voluntary on the part
of the inspectors.
Secondly, it is a peer review program. The inspectors
are all practicing laboratorians, and the management of the
program, the commissioners for laboratory accreditation, are
all practicing laboratorians. I run the blood bank and the
transfusion service at the Cleveland Clinic Foundation.
About an hour or two of my day, however, is devoted to CAP
activities, and we manage the accreditation for about 450
laboratories in the Ohio, Indiana, Michigan, and Ontario
area.
Thirdly, the inspection covers the entire laboratory.
We will not accept an application from a facility that wants
only part of the laboratory inspected. We are going to look
at the entire facility, even those areas for which we have
no technical expertise (such as in vitro fertilization or
-------
696
environmental water quality). But we can look at any
laboratory at least in terms of safety requirements.
Fourthly, it is based upon the Standards for Laboratory
Accreditation. I will talk about that a little bit more in
a moment.
Fifth, it combines on-site inspection with proficiency
testing. Both of them go together. Neither one suffices
for accreditation by itself.
Now, the Standards. This is a document that is the
basis for our program. It has about a five-year review
cycle. It is written generally to address the general
case. It is the only part of the program that the Board of
Governors of the College reviews very carefully. (All the
operational aspects are managed by the Commission on
Laboratory Accreditation.)
There are only five standards, each one of which is
about a paragraph long, and then there is explanatory
material that goes along with each. I will mention a few of
the highlights.
The first is that it vests the responsibility for the
management of the laboratory with the director. We have
very specific personnel requirements for the director.
A director must be a physician or a doctoral-level
clinical scientist, an individual whose education and
laboratory training is appropriate for the span of
-------
697
disciplines that are represented in that particular
laboratory.
And the director must discharge a whole series of
responsibilities that you would find appropriate for any
director to discharge: ensuring that there are sufficient
numbers of people and the proper equipment and the quality
control and the safety requirements and the educational
aspects of laboratory medicine.
What we find is that in a number of commercial
laboratories, for example, or in hospital laboratories where
the medical director is not an employee of the laboratory
but is a consultant, the director does not have sufficient
authority to discharge those particular responsibilities.
Those laboratories do not meet the Standards.
We also have personnel standards for employees other
than the director. They are not as stringently defined.
They are, in fact, performance standards.
Quality assurance is an important standard. We make a
distinction between quality assurance and quality control.
Quality control we define as those intralaboratory practices
that contribute to the accuracy and timeliness of the
clinical laboratory data. Quality assurance includes
quality control, but it also includes the pre-analytic
issues and the post-analytic issues, all of which together
-------
698
will impinge upon accuracy and timeliness in the healthcare
system.
There is a standard that describes the need for
adequate facilities and operational procedures. The
laboratory must participate in the laboratory improvement
programs of the College, that is to say, on-site inspection
and proficiency testing.
That is a picture of the cover of the book in its
current edition.
Now, our organization. At the top is the Board of
Governors of the College, and there are four councils. The
council to which the laboratory accreditation reports is the
Council on Clinical Pathology. There are two commissions
within that council. The Commission on Scientific Affairs
includes 16 scientific resource committees each of which is
devoted to a specific clinical laboratory discipline.
A second commission is the Commission on Laboratory
Accreditation. The Commission includes 14 regional
commissioners and 4 additional special commissioners. To
support us are 61 State commissioners. Some States, such as
New York, California, Ohio, and Pennsylvania, have more than
one State commissioner, and the State commissioners have
approximately 1,800 inspectors working for them.
Now, there are about 17,000 pathologists in the
country. About 11,000 belong to the CAP. So, that means
-------
699
that somewhere around 10 percent of all pathologists or
about 15 percent of our membership serve as inspectors for
the program.
Our inspection cycle is a two-year cycle. It begins
with the date of the first successful inspection.
In alternate years, the laboratory will undergo an on-
site inspection and a self-inspection. The self-inspection
is a very serious inspection. The director uses exactly the
same tools as an on-site inspector would use. The self-
inspection is followed by the same post-inspection data
entry routines to record the results of that inspection.
The accuracy of the data is verified by the next year's on-
site inspector. If the laboratory does not complete the self-
inspection with appropriate thoroughness, it is a very
egregious deficiency indeed.
About 120 days before the lapse of accreditation, the
laboratory director is sent a reapplication packet of
materials. He or she has 30 days to fill that out and to
return it to the program office in Northfield, Illinois.
That gives us about 90 days, then, to find an inspector, for
that inspector to get together an inspection team, and for
the inspection to be scheduled and completed.
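As a rough illustration of that timeline, the arithmetic can be sketched as follows; the lapse date used here is a hypothetical example, not a real laboratory's date.

    from datetime import date, timedelta

    # Sketch of the reapplication timeline described above (120 / 30 / ~90 days).
    lapse_date = date(1991, 6, 1)                    # hypothetical lapse of accreditation
    packet_sent = lapse_date - timedelta(days=120)   # reapplication packet mailed to the director
    reply_due = packet_sent + timedelta(days=30)     # director has 30 days to return it
    days_left = (lapse_date - reply_due).days        # time remaining to assign a team and inspect
    print(packet_sent, reply_due, days_left)         # roughly 90 days remain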
I can't emphasize enough how important it is for all
deadlines to be met for this whole process to stay on
schedule. Most of the problems that we have had running
-------
700
this program have been due to delays, and we have had to
adopt a "get tough" policy to stay on the inspection
schedule.
The application packet that the director completes
includes an extensive questionnaire. Specific information
is requested. A list of all equipment, all of the tests
that the laboratory is doing, and the volume of those tests
have to be appended.
We ask the director to insert a description of the
quality control and the quality assurance programs that are
operative in that laboratory.
We will get a floor plan of the facility, and we ask
for a diagram of the table of organization that depicts the
reporting relationships. The latter is particularly
important in trying to identify sites in which there is a
titular director only. The credentials of the key
personnel (the department heads, section heads, and the
technical supervisors) are included.
To that information, the Central Office in Northfield
adds the results from the last two inspections (last year's
self-inspection and the previous year's on-site inspection),
all the computer commentary that was generated from those
inspections, all relevant correspondence, and a statement of
surveys participation.
-------
701
By "surveys", we are talking about the proficiency
testing program of the College. The laboratory must
participate in this program. It is interesting to note that
about 10 or 20 percent of all of the subscribers to our
proficiency testing program never send in their results for
evaluation. So, it is important that the inspector know
which surveys the laboratory has enrolled in and has
data for, even if those data were not submitted to the program
office for evaluation.
The whole packet is shipped to the inspector. We will
try to choose an inspector who is a peer of the laboratory
director. That means that we will try to identify an
individual from a facility of a similar size who can relate
to the kinds of problems and patient care issues that the
laboratory director is facing, somebody whose scope of
practice is comparable. We will not, for example, send a
pathologist who limits his practice to cytopathology to a
clinical laboratory that is involved only in toxicology and
therapeutic drug monitoring.
One walks a fine line, however, because some of the
peer review considerations make it difficult to preserve
objectivity, especially when one must worry about
competition. For example, to inspect a proprietary
laboratory, peer review considerations would suggest that
the most appropriate inspector would come from another
-------
702
proprietary laboratory; but to do so might require
selecting an individual from a very long distance away so
that competitive interests will not interfere with
objectivity.
Once the director has applied for accreditation, all
the materials have been received, and the inspector has been
assigned, it is then up to the inspector to schedule an
inspection. He or she will do so at a date that is mutually
convenient with the director.
If it is a very large laboratory, the inspector will
want to bring a team of individuals. A typical team is
about 5 or 6 people. There may be a chemist, a
microbiologist, a couple of pathologists, and perhaps a
computer expert.
We have very specific rules for the qualifications of
the inspector, but the selection of the inspection team
members is entirely at the discretion of the inspector.
This allows as much flexibility in the program as possible.
The individuals who come are appropriate to the special
needs of that particular laboratory. At a very big facility
like Mayo Medical Laboratories or Metpath, we will have to
take 12 or 15 people to do the laboratory well. Most of us
who do inspections try to get the whole thing done in one
day simply to minimize the amount of time that we take away
-------
703
from our own practice. So, doing so often requires a
fairly large team.
Then the inspector completes the checklists and returns
them to the Central Office. We have a checklist of
questions that right now numbers around 2200 items, and they
are enclosed in about 15 different booklets each of which
focuses upon a different discipline within the laboratory.
The checklists are what give our inspection its
structure. They standardize the program from region to
region, from State to State, and from inspection to
inspection.
Each of the questions has a value, and we talk about
Phase I and Phase II items. A Phase I item is the less
serious of the two. In some ways, you can think of it as a
recommendation. A Phase II item is a requirement for
accreditation. In fact, the laboratory must document
correction of any Phase II deficiency before accreditation
will be conferred.
We do sometimes deny accreditation on the basis of
Phase I deficiencies if there is a very large number of them
or if they have been recurrent and the director has shown no
interest in correcting them.
Each item can be answered as "yes", as "no", or as "not
applicable". It is the "no" items that drive the next part
of the process: the generation of a checklist commentary.
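A minimal sketch of the Phase I and Phase II logic just described may help; this is not CAP's actual scoring system, and the cutoff used for "a very large number" of Phase I deficiencies is a hypothetical placeholder.

    # Hypothetical sketch of the checklist decision logic, assuming each item is a dict
    # with 'phase' (1 or 2), 'answer' ('yes'/'no'/'na'), 'corrected', and 'recurrent'.
    def accreditation_outcome(items, phase1_cutoff=25):
        deficiencies = [i for i in items if i["answer"] == "no"]   # "no" items drive the commentary
        uncorrected_phase2 = [d for d in deficiencies
                              if d["phase"] == 2 and not d["corrected"]]
        recurrent_phase1 = [d for d in deficiencies
                            if d["phase"] == 1 and d["recurrent"] and not d["corrected"]]
        phase1_count = sum(1 for d in deficiencies if d["phase"] == 1)

        if uncorrected_phase2:          # Phase II corrections must be documented first
            return "deny", deficiencies
        if phase1_count > phase1_cutoff or recurrent_phase1:   # hypothetical threshold
            return "deny", deficiencies
        return "accredit", deficiencies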
-------
704
Each question has a commentary. The commentary, which is
generally quite a bit longer than the question, explains the
meaning of the question. Appropriate journal references are
included.
The specific set of checklist commentaries then is sent
back to the director who must respond in writing to each
deficiency, documenting correction of the Phase II
deficiencies. The director has 30 days to prepare this
reply. It is reviewed then by the regional commissioner
who may decide that there is sufficient information there to
render a decision or may ask for supplementary information.
Once there is sufficient information, the recommendation can
be to accredit the laboratory, or the recommendation may be
to deny accreditation.
The Commission as a whole meets three times a year.
Our first order of business is always to accredit all those
laboratories for which accreditation has been recommended.
Then we spend the rest of our time talking about the
individual facilities for which denial has been recommended
by the regional commissioner.
There is another document that is very important to the
process. That is the inspector's supplemental report. This
is a narrative, confidential document. The inspector can
use this to amplify upon any items in the checklist. And if
the inspector senses conflict between, let's say, the
-------
705
director and the medical staff or between the laboratory and
the hospital administrator, the issue should be described in
the inspector's supplemental report.
The director will not see the information in this
particular report, as a general rule. The regional
commissioner may extract information from the report for
inclusion into a personal letter that goes to the director.
Especially when the commission is considering denial of
accreditation, the feelings that are expressed in the
supplemental report can be very important in shaping the
course of the discussion.
Now, let us discuss proficiency testing. (We call that
the "surveys program" or the CAP Interlaboratory Comparison
Program). At the present time, there are over 90 different
packages that are offered, including clinical chemistry,
hematology, microbiology, toxicology, therapeutic drug
monitoring, histocompatibility testing, cytogenetics, and
forensic pathology. New ones are being produced in
molecular biology and in andrology.
Each one includes a set of analytic tests. There are
more than 300 analytes available. An accredited laboratory
must participate in the surveys program according to its
repertory and complexity. A large laboratory will have to
subscribe to many surveys, and a sophisticated laboratory
-------
706
will have to choose the comprehensive versions of certain
surveys.
We had mentioned the resource committees briefly before
as being within the Commission on Scientific Affairs. You
will remember that they are independent of the Commission on
Laboratory Accreditation. The resource committees write the
survey specifications, and they evaluate the results. Each
reported result can be scored as acceptable or not
acceptable.
The unfavorable results are printed in a report, called
the Surveys Results Exception Report, that goes to the
regional commissioner once every three months. In it, we look
for evidence of a possible systematic problem. All
laboratories will have certain problems from time to time,
but if it is a systematic problem, it will be recurrent.
We will often ask the laboratory director to check into
a particular analytic test. In fact, I send out about 100
such letters every three months.
You can see how important it is that the laboratory
participate appropriately, because, in fact, the commission
is using the PT program as its mechanism for monitoring the
performance of laboratories between on-site inspections.
Now, each of these resource committees is composed of
nationally recognized experts. They are the scientific base
for the College, and all sorts of scientific questions are
-------
707
referred to them. For purposes of this discussion, they
write the survey specifications and evaluate the results,
and they also develop the technical portions of the
checklists that we are using out there in the field, a very
important contribution to the program.
We have an appeals process. The appeals process, as
you might expect, will follow denial of accreditation. The
director has 30 days to file the appeal. The appeal board,
in fact, is the Commission on Laboratory Accreditation as a
whole. The director is invited to come to the next meeting
of the commission to plead his or her case.
If the commission affirms its original decision, then
that particular appeal can be extended to the Board of
Governors of the College, which is the final appeal board.
What are the qualifications for our inspectors? The
inspector must be a Fellow of the College. Now, since
fellowship in the College is limited to board-certified
pathologists, that means that the inspector will be a board-
certified pathologist.
It is also required that the inspector be affiliated
with an accredited laboratory, and that he or she has
undergone appropriate training.
Now, the training is, in most cases, on-the-job. The
experienced inspectors will take a prospective inspector
along on two or three inspections to learn some of the
-------
708
operational aspects, but we are really not depending upon
the inspector for operational support. We are depending
upon the inspector for expertise in evaluating a laboratory.
We also conduct new-inspector workshops where we train
inspectors from scratch. I almost did not make this
particular meeting, because I was originally scheduled to
host one inspector's workshop in Cincinnati this evening.
One of my fellow commissioners very graciously is covering
for me at that particular event.
We also have update workshops that we hold periodically
throughout the country to refresh and revitalize the
inspectors who have been doing these inspections for many
years.
Like any big program, it has a set of policies and
procedures that codify all of the operational rules. Having
such policies in writing ensures fairness and uniformity.
Our policies and procedures are on a three-year review
cycle.
Computer support is essential to the program. I cannot
emphasize enough how important computers are to us in this.
For example, we must hold to the inspection cycle. If I had
to keep track of all of these laboratories in my own region,
I would fall way behind. That would not serve the
inspection process well at all. The computers keep the
process moving.
-------
709
The whole surveys program is run on computer. Most of
the survey results are analyzed by computer. We talked
about the survey results exception report by which the
survey results are monitored by the commission. What I did
not mention is that the report is not a simple
listing of the unacceptable results. There is a fairly
sophisticated algorithm that selects for us those
laboratories that appear to have a higher probability of a
systematic bias.
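The selection algorithm itself is not described in the talk; as a purely hypothetical illustration of the idea that systematic problems recur, a screen of the quarterly exception data might look like this:

    from collections import Counter

    # Hypothetical recurrence screen; NOT the CAP algorithm referred to above.
    def flag_recurrent_problems(unacceptable_results, min_quarters=2):
        """unacceptable_results: (lab_id, analyte, quarter) tuples from quarterly reports."""
        quarters = Counter()
        for lab_id, analyte, quarter in set(unacceptable_results):
            quarters[(lab_id, analyte)] += 1
        # A lab/analyte pair that is unacceptable in several quarters suggests a systematic bias.
        return [pair for pair, n in quarters.items() if n >= min_quarters]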
Of course, all of our inspector lists and other
demographics are kept on computer. Each of the regional
offices has a terminal to access the Chicago computer. The
checklists and the commentaries are all on computer in a
desktop publishing type of a format. This allows us to
update these rules every 6 to 12 months to keep them current
with technologic change. That can be a real challenge as
new instrumentation and methodology comes onto the market.
Lastly, let me just make a couple of comments about
finances. This is a self-supporting program. The revenue
comes from the annual subscription fees that the
laboratories pay. Now, the surveys program also generates
revenue for the College, but we also incur significant
expenses in the manufacture of the PT materials.
The expenses that are covered by these revenues are
primarily for personnel support. We have about 75 full-
-------
710
time employees in the program office in Chicago who are
supporting the lab improvement programs of the College.
There are also part-time staff, such as the secretaries in
the regional office.
There are direct expenses that one incurs with each
inspection (travel, lodging, meals, and such), and those
are all charged to the program.
What is not charged to the program is the time of the
individuals who are working on its behalf, and this is
substantial. We have some solo practitioners, for example,
who work as inspectors, and when they go out once a year or
twice a year to do an inspection, they must hire a locum
tenens pathologist to come in and cover their practice for
them. They pick up those costs themselves.
So, our inspectors and our team members are not paid,
at least not by the College. They may very well, of course,
be paid by the sponsoring organization that recognizes that
these people bring back to their laboratories more new
information than they can confer onto the inspected
laboratory. So, it is a two-way laboratory improvement
program.
The voluntary contribution to the program really cannot
be overestimated. It is immense when you include 1800
inspectors, 75 commissioners, approximately 200 scientists
-------
711
that are on these resource committees, and then all the
officers of the College. All are volunteers.
So, in those few words, I hope I have conveyed kind of
an overview of the whole program. In the panel discussion,
I will be happy to follow up on any items that you care to.
Thank you very much for your attention.
-------
712
Laboratory Accreditation Program
College of American Pathologists
325 Waukegan Road, Northfield, Illinois 60093-2750, 708/446-8800
Characteristics
voluntary
peer-review
must cover entire laboratory
based on Standards for Laboratory Accreditation
combines inspection with proficiency testing
Standards for Laboratory Accreditation
Vests the responsibility in the Director
Personnel standards
Quality assurance
Adequacy of facilities and procedures
Participation in CAP Laboratory Improvement Programs
Organization
Council on Clinical Pathology
Commission on Scientific Affairs
16 Resource Committees
Commission on Laboratory Accreditation
* 13 Regional Commissioners
60 State Commissioners
~4,100 laboratories
~1,800 inspectors
-------
713
Inspection Cycle
Begins with date of first successful inspection
On-site and self-inspections in alternate years
Reapplication begins 120 days before date of expiration
All deadlines must be met to stay on schedule!
Application materials
questionnaire
list of all equipment and scope of analytic testing
statement of quality control and quality assurance methods
floor plan
diagram of table of organization
credentials of key personnel
Application packet also includes
All results from the last two inspections
Statement of Surveys enrollment
Relevant interim correspondence
Choosing an inspector
Peer-review considerations
facility of similar size
similar scope of practice
Objectivity must be maintained
non-competitive practice
-------
714
Process
Director applies for accreditation
Application materials are reviewed for completeness
Inspector is assigned by state commissioner
Application packet is sent to the Inspector
Inspection is scheduled, arranged, and conducted by the
Inspector and his/her team
Inspection materials are returned to the Program Office
Process (cont'd)
Checklist Commentary is printed and mailed to the Director
Director responds to each deficiency in writing with
appropriate documentation
* Reply is reviewed by Regional Commissioner
Accreditation may be recommended or not recommended
Commission votes on recommendation
Inspector's Supplemental Report
confidential document
may amplify individual checklist items
appropriate location for sensitive information
-------
715
Surveys Program (CAP Interlaboratory
Comparison Program)
> 90 proficiency testing surveys
> 200 analytes
Participation is required for laboratory accreditation.
Monitoring of Proficiency Testing
Resource Committees evaluate results
quarterly reports go to Regional Commissioners (Surveys
Results Exceptions Reports)
importance of appropriate participation
Resource Committees
Each disciplinary committee is composed of nationally
recognized experts.
Write survey specifications
Develop checklists
Inspector Training
Required for all new inspectors
Recommended periodically for all inspectors
Conducted throughout country
-------
716
Policies and Procedures
codification of all operational rules
* special importance in a distributive program
three-year review cycle
Appeals
Follows denial of accreditation
Director has 30 days to file
* Appeal heard by
entire Commission on Laboratory Accreditation
* Board of Governors
Computer support
inspection cycle
Surveys monitoring
inspector lists
updating checklists and commentary
Financial
self-supporting
revenues come from
annual subscription
* Surveys program
expenses include
full-time staff in Program Office
part-time staff in regional offices
direct expenses incurred in performing inspections
Inspectors and team members are not paid. The voluntary
contribution to the Program is immense!
-------
717
MR. TELLIARD: What we
would like to do now is there are five panelists. The sixth
one is at National Airport in the fog.
Each of the panelists is going to make a brief
presentation. Some of them you have heard before. Then
when they are done with each presentation, we will open it
up to the panel.
So, I guess Al is the first one.
-------
718
PANEL PRESENTATIONS
DR. TIEDEMANN: I am Albert
Tiedemann. I am the director of the Virginia Division of
Consolidated Laboratory Services which is a central State
laboratory system, providing most of the State Laboratory
Services. Except for Highways and Transportation, none of
the agencies have their own labs. We are their
laboratories.
We feel we are in the middle where we can recognize
both sides of problems in approving laboratories. We are
subject to receiving approval from several Federal agencies.
We are also responsible for some different types of programs
for which we must give approval within the State, drinking
water being the biggest. But these programs include
venereal disease, driving under the influence of alcohol and
drugs, and commercial blood banks. So, we feel we see both
sides of the picture.
Since we are primarily in an environmental meeting
here, I'm talking about the drinking water program.
States actually are EPA surrogates if they have
primacy, that is, they conduct the laboratory approvals in
the States for EPA. I am a little disappointed to look at
the attendance list. There must be only about half a dozen
people from the States here today, because, basically, as
-------
719
acting for EPA, we are sort of under the gun. The
laboratories in the States look to us for information as
well as the piece of paper that says you are approved.
Internally in Virginia, we divide the
laboratories into two classes. One is the government-owned
laboratories, and most of these are water works. The large
ones which produce more than 3 million gallons a day have to
have at least an on-site approved microbiology lab. We do
not charge fees for government laboratory approvals. That
is funded by the general fund.
The others are the commercial profit-making
laboratories for which we are not funded by taxpayers'
money. Therefore, we do charge a fee. We break the fee
into four categories: microbiology, inorganic chemistry,
organic chemistry, and radiology. Depending on how many
categories the lab wants to be approved for, the fees will
range from $200 to $680 every three years. It is a three-
year cycle.
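Only the endpoints of that fee schedule are given above ($200 for one category, $680 for all four, per three-year cycle); the even increments in this sketch are a hypothetical interpolation, not the published Virginia schedule.

    # Hypothetical linear fee schedule reproducing the stated $200-$680 range.
    CATEGORIES = ("microbiology", "inorganic chemistry", "organic chemistry", "radiology")

    def virginia_fee(num_categories):
        assert 1 <= num_categories <= len(CATEGORIES)
        return 200 + (num_categories - 1) * 160   # per three-year approval cycle

    print([virginia_fee(n) for n in range(1, 5)])  # [200, 360, 520, 680]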
In running the approval program, the basic requirement
is we have to be at least as stringent as EPA. But within
that overall guideline, we try to take an attitude or
approach of being flexible, practical, and realistic. That
is, we don't try to mandate a lot of specifics. Do
what you have to do with some flexibility in the way you do
it. Keep good records, but we don't tell you how.
-------
720
Our on-site visit analysts do not have to come back
from a visit having found something that they can report as a
deficiency. I am very pleased when I can sign a report that
says no problems, no deficiencies, no recommendations, lab
is doing excellent work.
You do find deficiencies, of course. We class them
major and minor, and we base an approval for the lab overall
on an evaluation of all of its pluses and minuses, its past
performance, and the results it has obtained on proficiency
evaluation samples. Basically, if we approve a lab, we want
to know that that lab has shown it can produce valid
data. That is the bottom line.
Reciprocity. We do participate by giving reciprocity
to out of State laboratories. We have three conditions.
The first is they must show us that they have a need for
certification because of the business they have in the
State. We found that too many labs just wanted something on
their letterhead, and it takes a lot of time to keep the
records. So, they have to have business in Virginia.
Secondly, they have to provide us with the
documentation of their approval by EPA or by another primacy
State.
Third, because the number seems to keep
increasing...several years ago, we had to hire some part-
time clerical people to help with paperwork and had other
-------
721
costs...they pay an annual fee of $100 per category, that
is, the four categories I mentioned. I don't think there
are probably many that pay more than $200 or, at the most,
$300.
Because of the various time cycles in other states, we
do require that reciprocal certification be renewed each
year.
We have some problems with terminology. We talk about
certifying labs, but we are not really certifying labs. If
we do, we are in trouble, because if we take the word
certification in its essence and say that lab is certified,
that means we are guaranteeing that that lab's work is
always accurate, we are assuring that it is absolutely
correct.
So, what we are really doing is saying we approve the
lab as being able to do good work or we are accrediting the
lab. So, really, this term certification needs to be
examined, because it could possibly, the way the lawyers get
into things these days, come back to haunt the certifier.
Let me mention a few problems. I've noted that there
are only a few state professionals here. I've gotten data
from a number of states.
A key problem is information that EPA is considering
discontinuing supplying Performance Evaluation (PE), quality
control, and standard reference samples. I say
-------
722
"considering: because program people in other states and
others are concerned over the lack of positive, direct
information from EPA Washington. We are receiving only
rumors or bits and pieces of information from EMSL, the
regions, and/or third-parties.
The term used is privatization. EPA has yet to define
officially what they mean by privatization. We've received
no plan, no details.
We hear that EPA will stop providing these samples. We
hear laboratories will have to purchase the samples. Will
we be able to buy the samples from any vendor? Who will
certify the quality?
One rumor says that EPA will designate a sole source
vendor. Many state procurement laws require competitive
bidding.
One third-party source, which seems to be a valid
source, stated that in addition to being a sole source
vendor, that vendor would pay EPA a royalty on each sample.
Therefore, EPA would reduce costs and also generate revenue.
What is the true story? We don't know. But someone in
authority in EPA Washington needs to provide direct, factual
information to the states. The states then need to transmit
this information to the laboratories for which the states
have approval responsibility.
-------
723
Here is a typical problem of failure to coordinate that exists
today. If you are in the drinking water area, you are
probably well aware by now that EPA several months ago
approved the new COALERT procedure for microbiology testing
for coliform. We can't approve anybody to use that
procedure. We can't use it ourselves, because although EPA
says the method is approved, EPA hasn't come out and said
what the laboratory criteria are, what your laboratory has
to have in the way of everything from QA to detailed
procedures, before EPA will accept the data.
This is like the other things that pop up. There is a
new requirement, a new parameter, something new to be
regulated, but the analytical procedure doesn't exist, at
least not in an approved form.
Last July the Journal of Environmental Laboratories
published a series on laboratory certification. Bill says
this certification problem keeps coming up. It keeps coming
up because the problems are still there. They are not
really changing.
MR. CARTER: I am Mike
Carter. I work for EPA in the Superfund program. More
specifically, I am one of the managers of the contract lab
program.
-------
724
A lot of you know more about that program than you
might want to. For those of you who are somewhat unfamiliar
with it, we are the primary source of analytical data for
the Superfund program. That means we are responsible for
the analysis of 80,000 to 100,000 samples a year. Our
budget is, in round numbers, $45 million per year for
analytical services. That includes both quality assurance
activities and the actual payment for the analyses.
We are frequently referred to as a de facto
certification program, and I suppose that is an apt term.
Before contracts are awarded to laboratories, there is an
accreditation step. Normally, we do not award a contract to
a laboratory without an on-site visit and a finding that
they have adequate personnel, facilities, procedures, et
cetera.
I said normally. The Small Business Administration can
choose to issue a certificate of competency which we are
obligated to accept. Our experience says that, in many
cases, the laboratory was actually done a disservice when
it got that certificate, because they do have lots of
trouble performing.
During the course of the contract, we have a great deal
of oversight. That includes, normally, four performance
evaluation samples per year. They are intended to be double
-------
725
blinds. Most people recognize them right away. So, at
best, they are single blinds.
And we do have a certification activity. The former
speaker's concern about nomenclature we happen to share. We
accredit a laboratory, but then we exert fairly extensive
and intensive efforts in product certification. By that I
mean virtually every data package is inspected at least once.
We have a great deal of in-house activity. We have computer
programs that are checking for compliance with our
deliverable requirements. We utilize close to $3 million a
year in computing services just maintaining data bases and
cranking through these inspections.
We would like to see and hope to move toward more self-
certification in which we do less and the laboratories do
more so that we get out of the kind of vicious circle where we
get data, find some defects, notify the lab, give them a
chance to rectify it. This is all taking time and effort.
In the lab's case, efforts cost.
Another critical part of any credible accreditation
program was mentioned earlier by Ms. Whalen, and that is for
a program to be...and Dr. Hoeltge...for a program to be
credible, there has to be some provision for a
deaccreditation or a decertification. We have historically
had administrative type actions that were, in essence, at
least an interim deaccreditation. We would stop sending
-------
726
samples to laboratories that were having certain problems,
technical or throughput.
Some of those decertification efforts have recently
become a whole lot more critical, sensitive, attention-
getting. At the moment, we have three different audits
underway by EPA's Office of Inspector General and somewhere
in excess of ten investigations underway by the Inspector
General's office.
There was an article in a journal that I had never
heard of before. Most of you have probably never heard of it before,
either. It is called the Legal Times. The week of April
23rd of this year, it was on the front page. It wasn't at
the top of the page, but it was on the front page. The
headline said, "Superfund Effort Jeopardized by Suspect
Data." It goes ahead to quote our Assistant Inspector
General's investigation that says at least ten contractors
are under investigation for potential fraud.
There have been three formal actions taken to date with
only one of them being final. The final action was
regarding Roy F. Weston Company. Basically, they signed a
settlement agreement with the government in which they
agreed to pay $750,000, and one of their labs was then
voluntarily suspended for 4 to 12 months from doing any work
for Superfund.
-------
727
I must note here that the company has stated that they
did not and do not admit any wrongdoing. It was strictly a
settlement.
Subsequent to that, we had two laboratories, companies,
that were suspended under suspension and debarment
activities which are part of contract regulations. A
suspension means that that entity is not allowed to compete
for any Federal Government work from any Federal agency,
and it probably extends to any cooperative agreement with a
State in which Federal money has been passed on to a State.
An important little consideration here. Suspension is
intended to be an interim action and may lead to debarment
which is a longer-term, even more final type determination.
The other important note is that the suspension does not in
any way hold up the investigations, which also proceed.
We now have inspectors general come to a lot of our
program meetings just to make sure everybody knows the
environment we are operating in. The fact of the matter is
that if data are falsified or changed, there is the
potential that three felonies have been committed. The
falsified data that is submitted to the government is
considered a false statement. To then send a bill to the
government in which you are asking for payment for these
falsified data is a false claim. And this one is a nice
-------
728
catch: if you send in your invoice or you get your check
back through the mail, that is mail fraud.
Each one of those felonies is punishable, as they tell
us, by up to five years in federal prison and a $10,000
fine. So, falsify some data and, potentially, somebody is
looking at 15 years in jail and $30,000.
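For what it is worth, the arithmetic behind those figures is simply three potential felonies at the stated maximums:

    # Worked arithmetic for the maximum exposure described above.
    FELONIES = ["false statement", "false claim", "mail fraud"]
    max_years = len(FELONIES) * 5        # up to 15 years
    max_fine = len(FELONIES) * 10_000    # up to $30,000
    print(max_years, max_fine)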
The IG does tell us that indictments for felony charges
are in the works and probably will be applied to both
companies and individuals. So, we are in an entirely
different world in terms of decertification. That is a
fairly stringent decertification of a company or an
individual.
So, we are still considered a de facto certification
program, and I guess it would be fair to say that a lot of
elements that are part of the certification program are
active in our program. Certainly, it is a whole new world
for us and for a lot of people that have been working for
us.
MR. PERLER: I am Arthur
Perler from the Environmental Protection Agency Office of
Drinking Water. I suppose it is the folks that bring you
laboratory certification for drinking water. I don't know
if that is popular or not in this room. I have never spoken
to this particular crowd before.
-------
729
I just want to review briefly the authorities and the
status of our program and some of the directions we are
thinking about moving in that I feel you all might be interested
in.
Our basic program of laboratory certification derives
from two elements of the Safe Drinking Water Act. The first
one you see, we feel, gives us the basis for establishing
analytical methods and all things related to doing the
analyses concerning drinking water compliance, compliance
under the Safe Drinking Water Act, and the key words there,
of course, are "including quality control and testing
procedures to ensure compliance."
Recently with the passage of the Lead Contamination
Control Act, an additional kicker was added that we must, in
a way, provide a program to ensure that the laboratories
testing for lead are providing reasonably good results.
From that flow certain national primary drinking water
regulations, which apply to the public water supplies that
have to provide the compliance analyses, either in their own
labs or through commercial laboratories or State laboratories,
and certain requirements also on State offices for the
approval of those laboratories.
EPA's lab certification program and requirements are
basically, as most of you know, first of all, specified in
-------
730
the laboratory certification manual, the third edition of
which is at the printers right now and would have been
available at this meeting, but I think it is just maybe a
week or two behind schedule. Some of you have advance
copies of that.
This manual is used as the official policy of EPA
within its own part of the decentralized program that we
have for operating laboratory certification. That is,
EMSL-Cincinnati, the EPA's headquarters laboratory program,
certifies EPA regional laboratories. EPA regional
laboratories then certify a principal State lab in each
State, and then it is up to those State labs either to do
all the analyses or to certify the commercial laboratories.
So, EPA is not in the business of certifying
laboratories. For the business that we are in which is
certifying regional labs and principal State labs, we use
the lab certification manual as our official policy, and the
States, we understand, for the most part pick up most
elements of that manual as they certify commercial labs or
. the labs with which they do business or do business in the
States.
State requirements, then, are again pretty much use of
the lab cert manual, utilization of the EPA PE program, and
then on-site evaluations either through State auditors or
third-party auditors.
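A minimal sketch of that decentralized chain, with hypothetical laboratory names standing in for any particular region or State, might look like this:

    # Hypothetical representation of the tiered certification chain described above.
    CERTIFICATION_CHAIN = {
        "EMSL-Cincinnati": ["EPA regional lab"],
        "EPA regional lab": ["principal State lab"],
        "principal State lab": ["commercial lab A", "commercial lab B"],
    }

    def certifier_of(lab):
        """Return the entity that certifies a given laboratory, if any."""
        for certifier, certified in CERTIFICATION_CHAIN.items():
            if lab in certified:
                return certifier
        return None

    print(certifier_of("commercial lab A"))   # principal State lab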
-------
731
Let me say just quickly in response to what Al said
about PE samples, I will add to the rumor mill and tell you
what I know about it. On the privatization or the program
under which people may have to pay for a more limited number
of PE samples, it is not expected to apply, I don't believe,
to PE samples in its initial stages. What is under
consideration now are basically the QC samples and the check
samples that EPA provides, and I don't believe that PE
samples in what is known as the water supply programs will
be covered for quite a while. There is just too much
backlash against that, and I believe that the program that
we will be looking at won't be a sole source or a program
that drives up the cost.
Let me point out also that there is nothing in the
drinking water regulations that requires that PE samples be
provided by EMSL, and there are a number of private entities
and some States that have contracted with those private
entities to provide samples except, let me say, for the
trihalomethane regulations. The rest of the national
primary drinking water regulations allow you to use a
privately generated PE sample.
Our biggest problem and really what I wanted to focus
on, although time is running fast, are the problems as we
see them with laboratory certification. First of all, since
it is a decentralized program, there are divergent State
-------
732
requirements which we know now lead to a difference in
acceptability and a difference in reliability in the results
generated by laboratories in different States.
I don't want to call it a difference in quality,
because I am sure everybody is out there trying to generate
the highest quality data, but because we all in this room
understand reliability in terms of, let's just say,
precision and accuracy, there is a difference in that
because of the different ways that States implement their
programs.
These differences have led to limited reciprocity,
differing fee structures, we believe higher costs, and
limited availability of labs. That bothers us somewhat,
especially since the number of drinking water regulations,
the number of required analyses, and the number of drinking
water systems required to perform them is going up and will
be growing exponentially over the next 10 or 15 years.
The PE and audit requirement portions of the program
provide us with a little heartburn also. Simply performing
on a PE sample acceptably once a year is no measure of how a
laboratory does on routine samples.
I point your attention to an article in the April-May,
I guess it is, issue of Environmental Lab, the cover story
by Stan Blacker, which represents a position
which EPA and, I believe, the Office of Drinking Water may
-------
733
be moving towards. In that article, he points out that
laboratories that perform perfectly acceptably on PE samples
and that even have reasonably good precision and bias still
have relatively high probability of making a false positive
or a false negative compliance determination because of the
interval that precision implies and the statistics that are
involved.
And the result of that is exactly what Blacker says.
We believe that especially when environmental results are
close to standards, there is probably an unacceptably high
frequency of compliance determination errors, either false
positives or false negatives being made.
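A hedged numerical illustration of that point, assuming an unbiased laboratory, a normal measurement model, and hypothetical values for the standard, the true concentration, and the relative standard deviation:

    from math import erf, sqrt

    def prob_measured_exceeds(limit, true_value, rsd):
        """P(single measurement > limit) for an unbiased, normally distributed measurement."""
        sigma = rsd * true_value
        z = (limit - true_value) / sigma
        return 0.5 * (1 - erf(z / sqrt(2)))

    # A sample truly at 9 when the standard is 10, measured with 20 percent RSD,
    # is reported above the standard almost 30 percent of the time.
    print(prob_measured_exceeds(limit=10.0, true_value=9.0, rsd=0.20))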
Now, we were criticized also fairly severely for
allowing a program to proceed which does have such limited
reciprocity and without any ability for the private sector
to prosecute the same kind of a program. So, we have had on
the books a number of years, although it has been largely
ignored by most States, policies which do encourage
reciprocity and third-party auditing, and I am here to try to do that
some more. I will take the opportunity to do that whenever
I can.
The benefits of that, of course, are that there will be
shared capacity, and many States have deficient numbers of
laboratories. There will be increased competition. We
think it will reduce costs, especially since we are getting
-------
734
into regulating things which have quite expensive analytical
methods associated with them.
And, also, with the possibility of reciprocity
and some other program changes which I don't want to address
today, we feel that laboratories will be able to adopt new
methodologies more quickly. Let me say that it is not that
the laboratories will be able to adopt them; EPA will accept
laboratories' adoption of newer methodologies more
quickly.
These are not meant to be alternatives. These are just
various things that we are considering and some
possibilities of changes in the program.
We would hope that any policy that we would put forward
would allow more uniform State requirements...would end up
in more uniform State requirements in order that we have
more uniform reliability and probability of making correct
compliance decisions in the drinking water program.
We would encourage those States that have...and I
understand there are not many States here to listen...the
many States that have changed their statutes and their
regulations to provide programs which are less amenable to
reciprocity, to change those.
We are considering a complete overhaul of the system,
as I said. We expect to come out with proposed regulations
which would more likely provide evidence that laboratories
-------
735
are performing within acceptable standards on a routine
basis. Again, I would direct your attention to the general
revisions that Stan Blacker discussed in that article,
revisions that would attach to each piece of data the
appropriate quality information, and those kinds of changes,
by the way, could very well end up with environmental
regulations, especially under the Safe Drinking Water Act,
no longer even specifying the required analytical
methodology, that kind of requirement being unnecessary
within that type of program.
Now, that certainly is a program that only assesses
quantitative analysis, and there are qualitative aspects of
that program as well as on-site inspections and audits which
I think we will maintain.
As Bill and others have talked about, the agency has
gone into high gear on its EMMC which is an attempt to unify
quality assurance and quality control in all of the programs
and an attempt to unify, where possible, analytical
methodologies, and that is something that I hope you will
all stay aware of, and as different solutions are proposed
or put somehow into either the Register or put forward in
meetings like this that you will comment on those.
Let me skip to the last bullet and say that all of
these changes we are trying to move as quickly as possible
on and we look for data and support from the laboratory
-------
736
community, because I think that the last thing that we both
want to happen, both within the Office of Drinking Water
Programs and from, I think, the laboratories' side is some
direct Congressional action which would alter the program,
because, actually, we don't want to end up where we feel
that we are being forced to make changes. The last changes
that were forced on us that way resulted in drinking water
regulations which we think probably went a little bit too
far.
So, that is the end of what I have prepared, and I
think we have questions at the end.
-------
737
THE 13th ANNUAL EPA CONFERENCE
on
ANALYSIS OF POLLUTANTS IN THE ENVIRONMENT
Laboratory Certification & Reciprocity
Arthur H. Perler
Baldev L. Bathija
Office of Drinking Water
-------
738
LABORATORY CERTIFICATION & RECIPROCITY
DRINKING WATER LABORATORY CERTIFICATION
STATUTORY REQUIREMENTS
CURRENT SDWA REGULATIONS
LABORATORY CERTIFICATION REQUIREMENTS
EPA
STATE
ADVANTAGES OF INTERSTATE LAB CERTIFICATION
DIFFICULTIES WITH RECIPROCITY
POSSIBLE SOLUTIONS
NRFK-002
-------
739
CURRENT STATUTORY REQUIREMENTS
SAFE DRINKING WATER ACT - Section 1401(1)(D)
" .... criteria and procedures to assure a supply
of drinking water which dependably complies with
maximum contaminant levels, including quality
control and testing procedures to insure
compliance with such levels ..."
LEAD CONTAMINATION CONTROL ACT - Section 4
"EPA shall assure that programs for certification of
testing laboratories which test drinking water supplies
for lead contamination certify only those laboratories
which provide reliable accurate testing."
NRFK-003
-------
740
CURRENT SDWA REGULATIONS
Section 142.10(b)(4)
"Assurance of the availability to the State of
laboratory facilities certified by the Administrator
and capable of performing analytical measurements
of all contaminants specified by the State primary
drinking water regulations."
Section 141.28 - Approved Laboratories
(a) "For the purpose of determining Compliance ....
samples may be considered only if they have
been analyzed by a laboratory approved by
the State ...."
NRFK-004
-------
741
CURRENT SDWA REGULATIONS - Cont.
Section 142.10(b)(3)(i)
"The establishment and maintenance of a State
program for the certification of laboratories
conducting analytical measurements of drinking
water contaminants ..."
"The requirements of this paragraph may be
waived by the Administrator for any State
where all analytical measurements .... are
conducted at laboratories operated by the
State and certified by the Agency."
NRFK-005
-------
742
EPA LAB CERTIFICATION REQUIREMENTS
Specified in Lab Cert Manual
Third Edition - Published - June 1990
Manual is Official Policy for:
EMSL Certification of EPA Regional Labs
Regional Certification of Primary State Labs
Manual is Guidance for:
State Certification of Local Labs
NRFK-006
-------
743
STATE LAB CERTIFICATION REQUIREMENTS
EPA Lab Cert Manual
PE Samples
EPA, State, Commercial
On-Site Evaluations
State Auditors
Third-Party Auditors
NRFK-007
-------
744
INTERSTATE LABORATORY CERTIFICATION
GAO Report Criticized Lack of Reciprocity Among
States Resulting in High Costs of Multi-State
Certification
ODW Policy Encourages Reciprocity Among States
ODW Policy Encourages Use of Third-Party Auditors
Benefits of Inter-State Certification
Shared Capacity
Increased Competition
Reduced Costs
Faster Adaptation of Latest Methodologies
NRFK-008
-------
745
INTERSTATE LABORATORY CERTIFICATION
(Perceived Hindrances)
State's Rights - DW Primacy Regs
Acceptable On-Site Evaluations
Qualifications of Other State Auditors
Qualifications of Third-Party Auditors
Restraint of Trade
Selection of Third-Party Auditors
* Differing State Laws - Due Process
NRFK-009
-------
746
RECIPROCITY AMONG STATES
(Limited Information Available)
Reciprocity Possible
AL, GA, IL, MI, MN, MO, MT, NO, ND,
NV, NM, RI, SC, TN, TX, WA, WI, WV
Reciprocity Not Permitted
AR, CA, FL, LA, KS, KY, MA, ME,
NH, NJ, OH, OK, OR, PA, UT
NRFK-010
-------
747
POSSIBLE SOLUTIONS
Uniform State Requirements - ODW Survey of States
Third-Party On-Site Evaluations Sent to States -
State Lab Cert Officer Decides if On-Site by State
Team is Necessary
Encourage States to Change Statutes
National Federal Regulations
Complete Overhaul of ODW System
New QAMS Approach to QA/QC
EPA Environmental Monitoring Management Council (EMMC)
Consortium of Quality Environmental Data (CQED)
Congressional Action
NRFK-011
-------
INDUSTRY PERSPECTIVE
FOR A NATIONAL PLAN
FOR LABORATORY ACCREDITATION
by
GEORGE STANKO
SHELL DEVELOPMENT CO.
HOUSTON, TEXAS
-------
CONCERNS WITH CURRENT SYSTEM
OF ACCREDITATION
NUMEROUS STATE ACCREDITATION PROGRAMS
A. COSTLY FOR LABORATORIES
B. CUSTOMER PAYS THE PRICE
C. REDUNDANCY BECAUSE OF NUMEROUS PROGRAMS
D. LABORATORIES SUBJECTED TO FAR TOO MANY AUDITS
E. IMPROVEMENTS IN DATA QUALITY ARE MARGINAL AT BEST
F. GENERAL LACK OF EFFECTIVE PERFORMANCE EVALUATION
PROGRAMS
G. LABORATORY CUSTOMERS HAVE LITTLE UNDERSTANDING
FOR CURRENT ACCREDITATION PROGRAMS
H. EXPERIENCES WITH SOME ACCREDITED LABORATORIES
WERE DISAPPOINTING IN SPITE OF ACCREDITATION
-------
CONCERNS WITH CURRENT SYSTEMS
OF ACCREDITATION
INDUSTRY (CUSTOMERS) EXCLUDED FROM
CURRENT ACCREDITATION PROGRAMS
A. ACCREDITATION PROGRAMS FUNDED BY INDUSTRY
B. NO CONTACT WITH INDUSTRY
1. PROBLEMS
2. HOW PROGRAMS COULD BE IMPROVED.
3. EFFECTIVENESS OF PROGRAMS.
-------
WHO SUPPORTS NATIONAL ACCREDITATION OF
ENVIRONMENTAL LABORATORIES?
1. UNITED STATES ENVIRONMENTAL PROTECTION AGENCY
2. AMERICAN COUNCIL OF INDEPENDENT LABORATORIES
3. MOST COMMERCIAL LABORATORIES
4. INDUSTRY:
A. CHEMICAL MANUFACTURERS ASSOCIATION
B. AMERICAN PETROLEUM INSTITUTE
C. OTHERS
-------
ELEMENTS OF STATE CERTIFICATION PROGRAMS
1. MANDATED BY STATE LEGISLATION
2. REQUEST FROM LABORATORIES FOR CERTIFICATION
3. CERTIFICATION REQUIRES FEES FROM LABORATORIES
4. ON-SITE INSPECTION BY STATE AUDITORS
5. DEMONSTRATION OF LABORATORY PERFORMANCE WITH
EPA WP/WS PE SAMPLES
6. MUST MEET CRITERIA FOR PE SAMPLES
7. PERIODICALLY RECERTIFIED
8. SUBJECT TO STATE ENFORCEMENT ACTION
9. SOME PROVISIONS FOR RECIPROCITY BUT
SELDOM PRACTICED
-------
MAJOR ROADBLOCKS TO NATIONAL ACCREDITATION
OF ENVIRONMENTAL LABORATORIES
1. STATES RIGHTS
A. CERTIFICATION AUTHORITY
B. ENFORCEMENT ACTION
C. STATE PROGRAMS (PEOPLE)
2. RECIPROCITY
A. ON-SITE VISIT/AUDITING
B. CERTIFICATION FEES
C. CERTIFICATION
-------
SOLUTION TO CURRENT ROADBLOCK FOR NATIONAL
ENVIRONMENTAL LABORATORY ACCREDITATION
1. FORGET ABOUT EC92 AS THE DRIVING FORCE.
2. CONSIDER ONLY ENVIRONMENTAL LABORATORY ACCREDITATION
3. IDENTIFY THOSE ELEMENTS OF CURRENT PROGRAMS THAT ARE
ONEROUS
A. CERTIFICATION FEES
B. MULTIPLE ON-SITE VISITS/AUDITS
C. INCONSISTENCIES BETWEEN STATE PROGRAMS
D. GENERAL LACK OF RECIPROCITY
E. OTHERS
-------
SOLUTION TO CURRENT ROADBLOCK FOR NATIONAL
ENVIRONMENTAL LABORATORY ACCREDITATION
4. FORM A NATIONAL COALITION FOR ENVIRONMENTAL
LABORATORY ACCREDITATION
A. REPRESENTATIVES FROM USEPA, STATE AGENCIES,
ACIL, INDUSTRY
B. PREPARE A LABORATORY ACCREDITATION GUIDANCE
DOCUMENT BASED ON ASTM/ISO PRACTICES
a. ASSESSOR CHECKLIST
b. QUALIFICATIONS OF ASSESSORS
c. PROTOCOL FOR ACCREDITATION RECOGNIZED AND
ACCEPTED BY ALL
C. LET STATES RETAIN THE RIGHTS FOR PROGRAMS
a. COLLECT CERTIFICATION FEES
b. RIGHT TO VERIFY ASSESSORS' REPORTS AND
ACCREDITATION
c. RIGHT OF ENFORCEMENT ACTION
d. RIGHT FOR ADDITIONAL PERFORMANCE EVALUATION
e. MODIFY STATE LEGISLATION TO ALLOW FOR
ALL THE ABOVE
-------
OTHER CONSIDERATIONS
1. IF WE DON'T DEVELOP A WORKABLE PROGRAM,
CONGRESS WILL BE FORCED INTO MANDATING
ONE STATES WILL HAVE TO ACCEPT AND WE
MAY NOT LIKE.
2. BEFORE NATIONAL ENVIRONMENTAL LABORATORY
ACCREDITATION IS POSSIBLE, THERE IS A NEED
FOR STANDARDIZATION OF METHODOLOGY AND
ANALYTICAL PRACTICES ACROSS ENVIRONMENTAL
PROGRAMS.
-------
757
MR. TAMPLIN: We have
been asked to keep this short, so no slides, no overheads.
We will try to keep you all bright-eyed and bushy-tailed.
I am Ben Tamplin from the State of California,
Department of Health Services, Division of Laboratories. I
am Chief of the Sanitation and Radiation Laboratory. We,
also, are a centralized laboratory service, although there
are a few laboratories in other agencies.
We have a long history of environmental laboratory
accreditation going back to the late 1940s when the Porter-
Cologne Water Quality Control Act was passed. In 1951,
drinking water laboratories came under the purview of the
Health Department, and this laboratory approval program grew
at the request of Regional Water Quality Control Boards to
include wastewater laboratories as well. Wastewater
laboratories were dropped in 1981, because the State Water
Resources Control Board could no longer afford to support
their part of the program.
In 1985, hazardous materials laboratories were required
to become accredited in a separate program. A third
program exists in California, but it is really a program of
registration by the California Department of Food and
Agriculture for laboratories doing pesticide analyses on raw
produce.
-------
758
California began consolidating these programs several
years ago with the passage of two bills in our legislature,
one which brought together drinking water laboratories,
wastewater laboratories at the request of the State Water
Resources Control Board, and hazardous materials
laboratories. Just last year, a bill was passed that
brought the Department of Food and Agriculture's
registration program under our purview, but full regulation
of these laboratories won't really occur until about 1992.
The old drinking water and the hazardous materials
laboratories' regulations stay in effect in California until
the Environmental Laboratory Accreditation Program
regulations are in place. We expect this to occur by the
end of the year, but recognize that it may require more
time.
It should be no surprise that this combined program
follows the systems that were used by the drinking water and
hazardous materials programs, that is, that the
certification was granted upon approval of an application,
upon approval of a quality assurance plan, upon successful
participation in a performance evaluation study, and
following an uneventful site evaluation.
One other thing: there is the payment of a fee, and
that check is the very first step in the accreditation
-------
759
process. When one sends in an application, the check comes
right off the top and in it goes to the cashier.
Currently, the basic fee in California is $913.28. For
each field of testing a fee of $411.34 is added. These fees
are adjusted on July 1 or shortly after July 1 of each year
by the Department of Finance. It is, in a sense, a COLA.
Incidentally, we levy this fee every year.
The main problem before us right now is completion of
our regulations package. We are getting help here...and we
think we have done the right thing...from an ad hoc committee
that is made up of representatives of the American Council
of Independent Laboratories, the Association of California
Testing Laboratories, the California Association of Public
Health Laboratory Directors, the California-Nevada Section
of the American Water Works Association, and the California
Water Pollution Control Association.
We are ready for public hearings now, and we expect
these regulations will be in effect by January 1, 1991.
The lack of regulations has forced us to use existing
regulations and has resulted in a very large backlog of
inspections. We have to address this backlog in the next
three months. We can give interim accreditation without a
site evaluation so long as we get the money.
But we need to simplify our process and consolidate our
fields of testing. We presently have 23 fields of testing.
-------
760
That is absolutely ridiculous. We used to have the same 4
that Virginia has, but in the writing of our statute, people
got carried away, and we now have, as an example, separate
fields of testing for organics by GC/MS for drinking water,
for wastewater, and for hazardous materials. Now, that is
really ludicrous, because, as you know, once you get the
lumps out of wastewater, the analysis is just as easy as
that for drinking water.
We have to find a way to simplify accreditation, to get
back to where we were. That may be difficult.
We also feel we have to have an effective system of
reciprocity. The requirements for analyses in California
are soon going to outstrip the State's laboratory resources.
We need to accredit out-of-California laboratories, non-
California laboratories.
We need to accredit them in order to provide a resource
for some of the programs that are in the wings. We have an
omnibus pesticide bill, a pesticides in food bill, which
will add perhaps 40 people to our laboratory staff looking
for pesticide residues in raw and processed foodstuffs. I
don't know how many analyses this is going to take State-
wide, but our share will be immense.
Also, we have something going in California now called
the Big Green Initiative. It is the environmental plank of
our Attorney General's candidacy for Governor. It is an
-------
761
all-encompassing environmental protection act. It covers
food safety and pesticides, agricultural worker safety,
greenhouse gas reduction, ozone layer protection, commercial
and residential tree planting, estuarine and ocean water
protection, water quality protection, and marine resources
and human health standards.
We don't have enough laboratories in California to do
the monitoring that will be required under this initiative.
We have an initiative system in California in which, if you
don't like what the legislature is doing, you may go to the
ballot box through the initiative process. This particular
initiative came into the Secretary of State's office with
two times the required number of signatures. It is
estimated in the polls to be running 4 to 1 in favor. It is
expected to pass and we will need laboratory resources, so
we will have to go outside.
Also, some of our California laboratories are divisions
of companies with laboratories in other States, and some of
them have different capabilities. At the present time,
there is no way they can trade samples back and forth to get
the biggest bang for their buck. That is to say, both
laboratories must be approved in California before the
movement of samples back and forth can occur, and that is
all wrong. We are not supposed to be restraining trade.
-------
762
Many California laboratories are accredited in other
States, some in 12 or more States, and I am certain there is
a very high cost involved for them to comply with all the
regulations. It is particularly costly in the area of
proficiency testing. Here we really need an effective
centralized system of performance evaluation.
Our current regulations allow us to recognize other
programs, and we have told other laboratories if you are in
the EPA program and you are analyzing Water Supply or Water
Pollution Evaluation Samples or both, that is fine. We will
accept those data. So far our legislature has gone along
with this approach.
The reason we have recognized EPA PE Studies is that we
really don't want to make our own PE samples. We don't want
to create an empire. We think it is wasteful when a program
like EPA's already exists, a program that really does work
pretty well. We do criticize a lot, but we also recognize
that this performance evaluation program is a good one, and
it needs to stay in place.
We are ready in California to join other States and EPA
in establishing some sort of nationwide centralized PE
system. We have authority to charge laboratories for these
samples, and even if we didn't, we could finance it
ourselves out of the fees that we charge for accreditation.
-------
763
We are happy that EPA has reversed at least part of its
stand on the phase-out of some of its PE activities, and we
call upon you at EPA to take a leadership role and help us
establish a national program. If you want to come out to
California to discuss it, we would be happy to have you.
Just bring rain.
MS. PREVOST: I am
Margaret Prevost from the New York State Health Department,
the Environmental Laboratory Approval Program which is the
program that certifies all environmental labs in New York
State and out of the State.
I think you just heard the West Coast. You are going
to hear the East Coast.
In our program, we certify labs in four categories,
drinking water, wastewater, solid and hazardous waste, and
air emissions. We have a little under 850 certified labs.
Close to 300 of those are out of State labs.
We make our own proficiency samples. We have never
used EPA's, probably for a lot of reasons. One of them is
that our program really started, although it had a
history...we have certified drinking water labs since the early
1950s...but the ELAP program, as it is called for short,
became official in late 1985, and at about that
time, there was already a little bit...I wasn't with the
-------
764
program then, so excuse me if I am making a mistake...but
there was concern that perhaps EPA was going to cease
making samples. So, we do make our own proficiency samples.
A laboratory in our program is inspected once a year
and proficiency tested twice a year for what they are
certified for.
I spent most of the day trying to get here. You had
already started when I got here, so I don't know what the
newest rumor is about the PT samples and maybe it has all
been clarified now, but we received a number of calls from
other States when the announcement first came that the
wastewater samples were no longer going to be available. We
are able to provide them to other States if that is
necessary.
It sounds like now that isn't going to be necessary,
but we could, because we have the whole system in place and
it has been in place for five years.
I think we could go toe to toe with California trying
to explain our fee system. If there are any New York labs
here, they would agree with me.
We had one fee system. It changed as of April 1, the
beginning of our permit year. We now have a base fee of
$500 plus an analyte fee for every analyte you are certified
for plus a volume fee paid on the volume of your analysis of
those analytes you are certified for on New York State
-------
765
samples only. The legislature came up with this one; we
didn't.
But none of our laboratories are in revolt yet, because
the New York State budget has not passed. It is now 41 days
overdue, and I can't send out any bills until they pass the
budget.
I think one of our concerns...and we hear it from our
laboratories. We have very active laboratory associations
in New York. One of the things is we are very aware that a
large lab that is certified in many States spends a great
deal of time and a great deal of money. Actually, they were
less concerned about our fees...they didn't like our fees,
but they were less concerned about that than the idea of the
time and money that is spent on doing proficiency testing
and inspections, with people coming to inspect.
They would like that resolved. We understand that. In
fact, the legislature just passed a bill saying that we are
going to have an advisory board, a 7-member advisory board,
and its first mission, among other things, is to discuss
reciprocity and come up, six months after it is formed,
with a report to the legislature.
One of the problems...and I think this is worthy of
discussion...is the fact that as a northeastern State where
we have a lot of pollution...and I think it is true that
different regions have different problems... our legislature,
-------
766
our Governor, and our Commissioner of Health have concerns.
So, the question is how it would be done under a national
certification program or with the PT samples.
For instance, with EPA, it is 7 VOCs. In our State, a
lab has to be certified for 52 VOCs.
So, I don't think it is that easy to put together
something that would be universal, because then you would
still have these subprograms, because you know how States
are. You never go backwards. You only add. We don't have
the Big Green coming down which sounds awful, but they never
stop. I mean, you just keep testing for more and more.
It is the argument that we have to be this stringent
and we go along with anything that is as stringent or more
stringent. I am not quite aware of any other State...there
probably is...that does annual inspections, but we do, and I
don't think the Governor or the Commissioner of Health is
going to back down on that.
So, there are a lot of issues, but we see the need for
some type of reciprocity.
I think it is very important for your contiguous States
especially, even if it is almost sort of a regional approach. So
many labs are on borders; large labs go right over.
I don't know the answer. I do think that if one of the
things is the proficiency test, if that can be addressed.
-------
767
My concern is on having some sort of uniform proficiency
test.
What do you do about the individual State requirements
like ours where you have to do 52 VOCs? How is that going
to be addressed? How will that prevent the labs from having
just two levels, three levels, four levels? Other States
probably have specific things they want tested. I don't
know how that will be addressed.
Thank you.
MR. STANKO: I would like
to present to you the industry perspective towards national
laboratory certification. We will go through some of the
concerns we in industry have.
I am not going to elaborate on any one of these
specific things. The main thing is we have concerns with
the numerous State accreditation programs. Other people
also have concerns, and these are some of the items that
lead to that concern: costly for laboratories; the customer
pays the price.
We are still bottom-line people. Essentially, all the
certification fees that the States are charging have to be
paid by the laboratory user, and we are the laboratory user.
Another major concern that we have is that industry is
left out of the entire system completely. We, industry, are
-------
768
the customers of these contract labs more or less getting
data for regulatory purposes, and nobody from the State
agencies ever asked us: are there problems with the program,
how could the program be improved, and are the programs
effective? In other words, has it done anything for data
quality?
The first question I am going to ask is who really
supports national accreditation for environmental
laboratories? I think the United States Environmental
Protection Agency does and the American Council of
Independent Laboratories does. I think most of the contract
laboratories do and certainly industry does. The Chemical
Manufacturers Association and the American Petroleum Institute
have gone on record as supporting national accreditation of
laboratories, and there are others.
On item number 5, I needed 50 question marks, but they
wouldn't quite fit. But in fairness to the States, I think
you have to look at what are the elements of all the State
programs, and this could be difficult to summarize. There
is no way anyone can go through all 50 certification
programs and do a good job of it.
What I have done is I have isolated some of the things
that are common to all of them. Most of them are mandated
by some sort of State legislation. They also require a
request from the laboratories for certification. In
-------
769
other words, the State requires that the laboratories ask
for certification in that State so that certain requirements
have to be met.
Number 3 you have heard a lot about. Certification
requires fees from the laboratories. No new taxes, huh?
On-site inspection by State auditors, demonstration of
laboratory performance with the EPA WP and WS samples, must
meet criteria for PE samples, periodic recertification, as
we have heard, and they are subject to State enforcement.
There are some provisions in all of these certification
programs for the big R word, reciprocity, but it is seldom
practiced. If it is practiced at all, most of the State
programs rely on WP and WS samples, and that is about where
the reciprocity is.
Now I would like to show you major roadblocks to
national accreditation of laboratories. Number 1 is State
rights. Once you give States the right to do something, you
can never get it back. They have this certification
authority. Someone has granted that authority, and now it
is difficult to take it back.
A lot of States feel very strongly on enforcement
action, and here I agree that the States should enforce this
type of a program.
Number 3, the State programs. Most people think that
if you have a national accreditation program, all the State
-------
770
people involved in their own certification programs lose
their jobs.
The big R word. On-site visiting and auditing. Every
State feels that they have a need to send their auditors to
that laboratory to inspect it. Some of the laboratories
tell me that half of their staff is involved in nothing but
working with auditors. The other half do the samples.
We have heard a little bit about certification fees to
five significant figures at one point, and then who grants
certification?
These are some of the major roadblocks, and now I am
going to do something unusual. Instead of being critical
and criticizing which Bill Telliard tells me I am good at, I
am going to try to see if I can give some solutions to our
current roadblocks.
Many of us thought that EC92 would be the driving force
for a national accreditation of laboratories. Many of us
thought that the environmental labs that we deal with could
go in on the coattails of those product testing laboratories
that are going to be affected or impacted by the EC92
regulations.
The first thing we have to do is forget EC92. Just as
clinical laboratories are different from environmental
laboratories, the product testing laboratories associated
with EC92 are entirely different from the environmental
-------
771
laboratories. That is one of the major flaws we have to
address.
Consider only environmental laboratory accreditation.
That goes along with divorcing ourselves from EC92. Groups
like CQED and IAETL have tied themselves with EC92. I think
they are going to have to step back and divorce themselves
from EC92.
What can we do to identify those elements of the
current program that are really onerous to us? We don't
like certification fees, but just like I said, no new taxes.
Have you ever seen anybody give anything back yet? I don't
think we stand a chance on that.
Multiple site visits and audits. I think this is one
area where if we have reciprocity we can really do away with
some of this nonsense of having 50 States going to audit
each single laboratory. I think this is one area where we
can have reciprocity.
There are lots of inconsistencies in the State
programs. Here again, I think we have to look to the
national EPA to try to come up with some kind of a guidance
so that we can have consistency between the State programs.
There is general lack of reciprocity. We have heard
that R word enough times from a number of speakers. I don't
have the solution to that.
-------
772
I think we need to form a national coalition for
environmental laboratory accreditation and environmental
laboratory accreditation only. I think we need
representatives from the USEPA, the State agencies, ACIL,
and the industry as well.
I think we need to prepare a laboratory accreditation
guidance document based on ASTM and ISO guidelines. They
are out there. We don't have to make these up. They are
available.
We need to come up with an assessor checklist that is
uniform and acceptable to all State agencies as well as the
Federal EPA. We need some sort of guidance and some sort of
qualifications for the assessors so that no matter which
assessor is used, all States and the Federal EPA and
industry and the laboratories will recognize the assessor as
being qualified and being able to conduct these audits.
I think we need a protocol for accreditation recognized
by all and accepted, and this is where I think we look to
the national EPA to look at the State programs and see what
we have in common and come together with something that all
the States can accept.
The politics involved says we are going to let the
States retain the rights for some of these programs. The
collection of certification fees, I don't see any answer to
that. The right to verify assessor reports and
-------
773
accreditation. Here, again, I think the States should have
that right.
I think the States ought to maintain the right for
enforcement action, and I think the States ought to have the
right for additional performance evaluations if they are not
willing to accept WP and WS or, like the last speaker said,
they have their own program. Shell has its own program. I
think it has to be recognized that the States can do
something additional or accept WP and WS.
The last thing, I think, is to modify the State
legislation to allow for all of the above.
This slide has nothing to do with the actual
accreditation. There are other considerations. If we don't
develop a workable program, Congress will be forced into
mandating one that the States will have to accept and we may
not like at all. I also am giving a pitch for the EPA EMMC.
We also need to address the 518 report issue. Before a
national environmental laboratory accreditation program is
possible, I think there is a need for standardization of
methodology.
Thank you.
-------
774
QUESTION AND ANSWER SESSION
MR. TELLIARD: Thank you,
George.
Now, for any of you who would like to ask questions or
make comments, all of the panel is available at this time.
Would anybody like to start?
MS. ORDONA: I am Alicia
Ordona from Virginia, and I want to address this one to the
California representative. What is your main objection to
reciprocity? I mean, at least New York has said something about VOCs,
but what is the State of California's main objection to
reciprocity in certification for drinking water?
MR. TAMPLIN: We have no
objection.
MS. ORDONA: You don't?
MR. TAMPLIN: No. We have the
authority in the statute. We don't have a mechanism in the
regulations yet. That will probably be about the first of
the year.
MS. ORDONA: In that case, I
think I am going to recommend to my boss that we are not
going to write reciprocal certification for laboratories
from States that don't write reciprocal certifications from
Virginia.
-------
775
MR. LEVY: Nathan Levy
with A&E Testing in Baton Rouge.
George, thank you. I wish that everybody in the agency
and in the State governments had your feelings. I think
most of us contract labs do have your feelings, and I guess
my question becomes to anybody who wants to take it.
Why not? Why can't we do it? Why can't we take one
person from every State and a few from the EPA and lock them
in a room and tell them they can't come out until they have
a program? Why not?
MR. FARRELL: Well, that
gentleman just stole my thunder. I am Jack Farrell from
Enseco.
George, I would also like to extend my thank you. I
don't think it could have been said better.
I don't really have a question. I have a couple of
comments I would like to make.
The first one is, very few of the programs that were
talked about and very few of the programs that I know of,
excluding the CLP, deal with more than a capability
discussion of what a laboratory can do. We need to add
ongoing performance to whatever program or programs come up.
The second comment I would like to make is while we
have half a dozen approval programs up there, there has to
be at least 100 of them out there. There is also starting
-------
776
to be a lot of different groups that are focusing on this
accreditation issue, and I applaud that.
The request that I have is...and, Bill, maybe you can
help or somebody can help...let's bring it all together so
that it is one group that is addressing this issue and
coming up with one consensus standard. If there is any way
that IATA or CQED or anybody can help do that, that is what
they are there for.
Fifty groups looking at fifty different certification
programs is only going to cost us a lot more.
MR. TELLIARD: Anyone else?
MR. HOLT: I am Phil Holt from
Occidental Chemical.
I would like to preface my remarks by saying I am in
favor of lab accreditation. I think it has accomplished a
lot, but as a matter of curiosity...and I will address this
to New York since that is a program I am familiar with...I
would like to know what we are accomplishing with the
ongoing programs.
We started five years ago. What percentage of
laboratories did we weed out as being not acceptable five
years ago, and on an ongoing basis, are we continuing to
weed out bad laboratories, or has it kind of leveled out now
and only good laboratories are participating?
-------
777
MS. PREVOST: Very
honestly, only because I got a question a week ago from the
New York Times or I wouldn't really be up on this, you are
right. Everybody is suddenly getting interested in this.
Just out of the blue, the New York Times wants to do a
series on environmental laboratories. They haven't done it
yet, and I wonder how they will manipulate what information
I gave them. It is always a matter of concern.
Since the program won't be five years old, actually,
until November of 1990, 342 labs have...records have not
actually been kept of why 342 labs that were in the program
are no longer in. I think I should preface this by saying
before this, except for the drinking water labs which was a
voluntary program...
By the way, New York State's laboratory ELAP program is
a voluntary program in that wonderful way they like to
define voluntary. The fact is that no laboratory can do
work for the State of New York or any of its political
subdivisions which means anything that is paid for by public
monies must be done in a certified lab, but if you want to
limit yourself to the private sector or just to drinking
water in private homeowners' wells, you don't have to be
certified. Obviously, most people are certified because the
public sector money is very important.
-------
778
Now, this was a group of laboratories that had never
been regulated before, the environmental labs in New York
State. Actually, the mission of the program at the time was
to ensure there were competent labs for environmental
analysis. It wasn't really to take labs in and throw them
out.
The whole point of it was to sort of put on a layer of
standards or regulations and then...and I do believe they
have been made somewhat more stringent over the last four
and a half years...the mission really is to try to bring
laboratories in compliance. We don't really like to throw
them out.
What does happen is that many labs have withdrawn. In
New York, we certify by specific analysis, whatever you want.
If you fail your proficiency test enough times, you know,
finally some laboratories actually shrink down to being
certified for so few analyses that they withdraw.
But the real mission of the program was not to throw a
lot of labs out. It was to bring them up to a level of
competency and, yes, it has leveled out. It has leveled
out. There are still labs that become withdrawn, as we like
to call it. We withdraw them from the program or they
choose to be withdrawn, but certainly from the earlier, say,
the first two years, it has leveled out.
-------
779
Of course, you cannot say that every lab in our program
is doing very well...you do a proficiency sample. They are
the best samples that can be done. I mean, you know, you
only have so much.
We do have unannounced inspections in New York, so
nobody can kind of gear up for our inspections.
MR. CARTER: I would like
to make kind of a response to what you said and pick up on
something Jack Farrell said.
In our program, we do accreditation, and I guess the
best way to say this is good labs have bad days. In some
internal meetings that we have had to discuss some of these
implications of accreditation, one thing I would say that
most of us that have thought about it within EPA have a
concern about is that the purchasing public, industry, or
whomever really should understand that it is up to them to
verify that this capable laboratory performed on their
particular work.
Like I say, the best labs and the best people have bad
days. So, being accredited or certified does not assure
that the individual piece of data you got is exactly what
you wanted.
MR. HOLT: I would like
to ask one other piece of information of New York. Since
things have sort of leveled out, is it time to start
-------
780
thinking about maybe annual testing instead of biannual, or,
if you are participating in both drinking water and
wastewater, quarterly testing?
MS. PREVOST: Did you say
quarterly testing?
MR. HOLT: Yes. The drinking
water and the wastewater were on opposite three-month
cycles, so you were getting samples in every quarter.
MS. PREVOST: Oh, yes, right.
No, there has been absolutely no move by the Commissioner to
go to that type of...to reduce the number of proficiency
tests.
Actually, if you will notice, we keep adding things
that we proficiency test for. That seems to be, very
honestly, the way the direction goes in New York. I am
being honest.
MR. TELLIARD: Anybody else?
MR. PRONGER: I am Greg
Pronger of National Environmental Testing.
The first thing I would like to say is that there seems
to be an irony in the situation. For the first time, the
USEPA has somebody coming up to them asking them to be
regulated, basically, and the USEPA is having huge problems
trying to figure out how to do this. It is probably the
first time in history.
-------
781
Normally, everybody is complaining about the EPA coming
in and regulating them. Here is somebody looking to be
regulated and having trouble doing that.
What seems to me one of the problems with getting
anything implemented is the level of discussion going into
it. If something would be promulgated just to get a program
started, if the CLP instead of...a problem that a lab has is
that if you perform on a CLP sample but your bid isn't low
enough, you technically aren't part of the program.
If there were a two-tier level where you could pass a
performance sample and be a part of the program without
receiving samples, but if your bid is not low enough you do
not become part of it and have to pay for that, the site
visits and such, that would at least be a step in the right
direction, where there is a mechanism already established for
going out and checking on laboratories, taking care of that.
If you could then say you are a so-called CLP lab and
just not receiving samples, it would address some of the
problems that the environmental labs are seeing, because the
CLP program kind of initiated the problem in that the
laboratory was caught in the situation that you could pass
every PE sample, keep bidding on them, getting PEs, and
never become part of the program because your dollar bid is
too high.
-------
782
Then, if a client would call and ask, are you a CLP
lab, you would have to say no. Even though you could pass
the samples forever, pass any type of inspection, you never
get any place like that.
If something like that were started just to initiate
that and then build from that on trying to get reciprocity
from the States, trying to get some reciprocity between the
different programs, it would be a building block, and I
don't think anything will occur with this until there is
just some type of initial action taken.
Thank you.
MR. CARTER: One of the
problems with the de facto CLP certification is that it is
not just a PE and accreditation system. The fact that there
is this ongoing, literally daily, inspection of deliverables
means that we don't just do the quarterly PE samples. In
essence, there is, like I say, almost a daily assessment of
lab performance, and we make decisions on lab eligibility
for samples under our program on a weekly basis.
So, performance on periodic PE samples is a relatively
small part of the program.
We have problems right now with requests from media,
various places, under the Freedom of Information Act for PE
sample results. What do you do about the lab that has
passed the most recent one but they failed the last four?
-------
783
Do we say yes, they passed the last one and they have done
one right out of four? Is that really an accurate
assessment of the laboratory?
The other problem is funding and authorization. We
don't have any authority to sell our PE samples. We are
prohibited by statute from giving away anything from
Superfund.
So, as I say, there is a kind of an inherent problem in
our passing on our de facto certification outside our
program, and we find ourselves in a position where, due to
the various scrutinies we are under, we can't really cut it
back and accept other accreditations.
I guess the best thing I can say is we understand the
problem.
Recently, the Office of Waste Programs Enforcement
indicated they were considering making part of their consent
agreements with responsible parties a requirement that they
use CLP laboratories. We assured them that was improper and
at most what they could require is adherence to CLP
protocols and CLP reporting requirements.
So, as far as I know, we have prevented them from
further contributing to the problems inherent in a CLP de
facto certification.
MR. TELLIARD: I would
like to thank the panel.
-------
784
MR. TELLIARD: I want to thank
you all for coming. I would like to thank Jan Sears in the
blue dress in the back who makes all this happen every year.
Jan, would you like to stand up?
I would like to thank the court reporters whose
proceedings we can't print until September, but we will.
And I would like to thank you all for coming. I hope
you have had a good meeting. I hope to see you back here
next year, probably same time, same station. Anybody who
has any ideas on subject panels or papers, please give me a
call or drop me a line. We are always looking for new
material.
Thank you so much for coming and thanks for your
attention.
-------
785
13th ANNUAL EPA CONFERENCE ON ANALYSIS
OF POLLUTANTS IN THE ENVIRONMENT
LIST OF SPEAKERS
Mike Carter
USEPA, OERR
401 M Street, SW (WH-548A)
Washington, DC 20460
202-382-7909
Bruce N. Colby
Pacific Analytical
1989 B. Palomar Oaks Way
Carlsbad, CA 92009
619-931-1766
James A. deHaseth
Department of Chemistry
University of Georgia
Athens, GA 30602
404-542-1968
Jim Eichelberger
USEPA, EMSL
26 W. St. Clair Street
Cincinnati, OH 45268
513-569-7278
Thomas E. Fielding, Ph.D.
USEPA, ITD
401 M Street, S.W.
Washington, DC 20460
202-382-7156
Bettina Fletcher
USEPA, Region III CRL
839 Bestgate Road
Annapolis, MD 21401
301-224-2740
Warren Haltmar
Sr. Chemist
Texaco, Inc.
5901 S. Rice
Bellaire, TX 77401
713-432-2279
Dr. Gerald Hoeltge
Cleveland Clinic Foundation
9500 Euclid Avenue
Cleveland, OH 44195
216-444-2830
Larry D. Johnson, Ph.D.
Research Chemist
Source Methods Stand. Br. (MD-77A)
USEPA
Research Triangle Park, NC 27711
919-541-7943
Lawrence H. Keith
Senior Program Manager
Radian Corporation
P.O. Box 201088
Austin, TX 78720
512-454-4797
Jim King
Sample Control Center
Viar and Company
300 North Lee Street, Suite 200
Alexandria, VA 22314
703-557-5040
W. G. Krochta
PPG Industries
440 College Park Drive
Monroeville, PA 15146
412-325-5183
-------
786
L. L. Lamparski
Midland Applied Science & Tech. Lab
Dow Chemical Company
Midland, MI 48640
517-636-2352
Theodore D. Martin
Chemistry Research Division
USEPA, EMSL
26 W. St. Clair Street
Cincinnati, OH 45268
513-569-8423
D. R. Mount
ENSR Corporation
1716 Heath Parkway
Fort Collins, CO 80524
303-493-8878
Arthur Perler
USEPA, ODW
401 M Street, SW (WH-550D)
Washington, DC 20460
202-382-3022
Margaret Prevost
Environmental Lab Approval Prg,
New York State Health Dept.
P. 0. Box 509, ELAP-Room 299B
Albany, NY 12201-0509
518-474-8519
Joe C. Raia
Sr. Res. Chemist
Shell Development Co.
P.O. Box 1380
Houston, TX 77251-1380
713-493-7693
James K. Rice
President
James K. Rice Chartered
17415 Batchellors Forest Rd,
Olney, MD 20832
301-774-2210
Dr. Susan Richardson
USEPA
College Station Road
Athens, GA 30613
404-546-3199
Dale R. Rushneck
ATI - Colorado
225 Commerce Dr.
Ft. Collins, CO 80524
303-490-1511
George H. Stanko
Sr. Staff Res. Chemist
Shell Development Company
P.O. Box 1380
Houston, TX 77251-1380
713-493-7702
M. T. Stephenson
EPTD-Environmental
Texaco, Inc.
P. 0. Box 425
Bellaire, TX 77401
713-432-3329
Benjamin R. Tamplin, Ph.D.
Chief, Sanitation & Radiation Lab
California Dept. of Health Serv.
2151 Berkeley Way, Room 465
Berkeley, CA 94704
415-540-2201
William A. Telliard
Chief, Analytical Methods Staff
USEPA, ITD
401 M Street, SW (WH-552)
Washington, DC 20460
202-382-7131
A.W. Tiedemann, Jr., Ph.D.
Division Director
VA Div. of Consolidated Labs
1 N. 14th Street
Richmond, VA 23219
804-786-7905
-------
787
Dr. Thomas O. Tiernan
Director, Toxic Contam. Res. Prg.
Wright State University
175 Brehm Lab, 3640 Col. Glenn Hwy.
Dayton, OH 45435
513-873-2202
Dr. Yves Tondeur
Triangle Laboratories
P.O. Box 13485
Research Triangle Park, NC 27709
919-544-5729
Rhonda Whalen
Office of Survey & Certification
U.S. Dept. of Health & Human Serv.
2D2 Meadows East, 632 Security Blvd
Baltimore, MD 21207
301-966-6801
-------
788
-------
789
13th ANNUAL EPA CONFERENCE ON ANALYSIS
OF POLLUTANTS IN THE ENVIRONMENT
LIST OF ATTENDEES
Greig Aitken
County Court Reporters, Inc.
124 Cork Street
Winchester, VA 22601
703-667-0600
David J. Armstrong, Ph.D.
Southern Research Institute
P.O. Box 55305
Birmingham, AL 35255-5305
205-581-2000
Steve Azar
Supv. Env. Engineer
Atlantic Div. NFEC
Bldg. IAA Code 1811
Norfolk, VA 23511
804-445-1929
Clifford J. Baker
Laboratory Director
Continental Analytical Serv. Inc.
1304 Glendale Road
Salina, KS 67401
913-827-1273
Donald P. Ballard
Water Treatment Plant Leader
Naval Supply Center
Fuel Department, Code 700
Norfolk, VA 23512
804-484-6430
Louis B. Barber
Chief Chemist
Public Utilities
1400 Brander St.
Richmond, VA 23224
804-780-5338
Michael E. Barber
Laboratory Manager
Core Laboratories
1300 South Potomac St., Suite 130
Aurora, CO 80012
303-751-1780
Thomas Barber
Group Leader, Analytical Chemistry
CIBA-GEIGY Corp.
410 Swing Rd.
Greensboro, NC 27409
919-632-7297
John Barr
Lab Manager
City of Indianapolis, DPW
2700 South Belmont Ave.
Indianapolis, IN 46221
317-633-5429
Baldev Bathija
USEPA, ODW
401 M Street, SW (WH-550D)
Washington, DC 20460
202-382-3039
Bruce Bauman
American Petroleum Institute
1220 L St. NW
Washington, DC 20005
202-682-8345
Laura 0. Beach
Chemist
Acurex Corporation
4915 Prospectus Drive
Durham, NC 27713
919-541-1014
-------
790
Mary Ann Becker
Chemist
USEPA, Region I
60 Westview Street
Lexington, MA 02173
617-860-4630
Robert G. Beimer
Program Manager
S-CUBED
3398 Carmel Mt. Rd.
San Diego, CA 92121
619-453-0060
Daniel W. Berisko
Vice President
Weerts Energy Associates
P.O. Box 3227
Johnstown, PA 15904
814-535-2992
Mark R. Bero
CLP Manager
IEA, Inc.
1901 N. Harrison Avenue
Cary, NC 27513
919-677-0090
Russ Bisping
Quality Assurance Office Code 130
Norfolk Naval Shipyard
Portsmouth, VA 23709-5000
804-396-9305
Daniel Bolt
Environmental Products Manager
Cambridge Isotope Laboratories Inc.
20 Commerce Way
Woburn, MA 01801
617-938-0067
Zvi Blank, Ph.D.
CHMM
E.C.R.A. Laboratories, Inc.
273 Franklin Rd.
Randolph, NJ 07869
201-361-4252
Howard Boorse
QA/QC Manager
Pacific Environmental Laboratory
9405 S.W. Nimbus Ave.
Beaverton, OR 97005
503-644-0660
Wanda Boyd
Louisiana State University
P.O. Box 20931
Baton Rouge, LA 70893
504-388-8521
Joel Bradley
President
Cambridge Isotope Laboratories Inc.
20 Commerce Way
Woburn, MA 01801
617-938-0067
Chris Bremer
Supervisor, Chromatography
Twin City Testing Corp.
662 Cromwell Ave.
St. Paul, MN 55043
612-641-9489
Anthony Bright
Laboratory Certification Coord.
OWRB
P.O. Box 53585
Oklahoma City, OK 73152
405-271-2580
Nancy Broyles
Chemist
Union Carbide Chem. & Plas. Co. Inc
P.O. Box 8361
South Charleston, WV 25303
304-747-4707
Douglas M. Brubeck
Analytical Chemist
Environmental Laboratories, Inc.
9211 Burge Ave.
Richmond, VA 23237
804-271-3440
-------
791
Eugene A. Burns
Maxwell, S-CUBED Division
P.O. Box 1620
La Jolla, CA 92038
619-453-0060
Linda Carter
Sr. Sanitary Chemist
New York State Dept. of Health
Wadsworth Cntr. Empire State Plaza
Albany, NY 12201
518-474-0404
William H. Chambers
NEA, Inc.
10950 S.W. 5th Street, Suite 260
Beaverton, OR 97222
503-643-4661
David Chang, Ph.D.
Manager
Burlington Research, Inc.
P.O. Box 2481
Burlington, NC 27215
919-584-5564
Paul H. Chen, Ph.D.
Staff Scientist
Environmental Science & Eng., Inc.
P.O. Box 1703
Gainesville, FL 32602
904-332-3318
Elizabeth Chisholm
Lab Manager
ECO LOGIC
143 Dennis St., Rockwood
Ontario, Canada NOB2KO
519-856-9591
Joe Chou
Organic Chemist
IL State Geol. Surv.
615 E. Peabody Dr.
Champaign, IL 61820
217-244-2744
Ellen W. Cobb
Analytical Chemist
Union Camp Corp.
P.O. Box 178
Franklin, VA 23851
804-569-4885
Sterley B. Cole
Supelco, Inc.
Supelco Park
Bellefonte, PA 16823
814-359-3441
Martin K. Collamore
Lab Supvr., Tech. Support
City of Tacoma, Public Utilities
2201 Portland Ave.
Tacoma, WA 98421
206-591-5582
Lee Collier, Ph.D.
Paracel Laboratories Ltd.
2319 St. Laurent Blvd. Unit 100
Ottowa, ON Canada K1G 4K6
613-731-9577
Linda Crawford
BRAUN Environmental Laboratory
6800 S. TH-169, P.O. Box 35108
Minneapolis, MN 55435
612-992-4811
Robert E. Creekmur, Jr.
Analytical Chemist
Froehling & Robertson, Inc.
3015 Dumbarton Road
Richmond, VA 23228
804-264-2701
Mark Crews
Viar & Co./Sample Control Center
300 North Lee Street, Suite 200
Alexandria, VA 22314
703-557-5040
-------
792
Jack Crissman
Supelco, Inc.
Supelco Park
Bellefonte, PA 16823
814-359-3441
Michael D. Crouch
President
ETC/TOXICON
3213 Monterrey Blvd.
Baton Rouge, LA 70814
504-925-5012
Brenda A. Cuccherini, Ph.D.
Associate Dir., Environmental Div.
Chemical Manufacturers Assoc.
2501 M Street, N.W.
Washington, DC 20031
202-887-1174
Amelia DaCruz
Chemist
Solutions Laboratories
814-H Greenbrier Circle
Chesapeake, VA 23320
804-420-0467
Linda Darington
Organics Lab Manager
General Engineering Laboratories
P.O. Box 30712
Charleston, SC 29417
803-556-8171
Sandra Daussin
Viar & Co./Sample Control Center
300 North Lee Street, Suite 200
Alexandria, VA 22314
703-557-5040
Rajesh Dave
Laboratory Director
Environmental Science & Engineering
217 Long Hill Crossroads
Shelton, CT 06484
203-926-9081
James E. Davis
Roy F. Weston, Inc. ESAT
Bldg. 209 Woodbridge Ave.
Edison, NJ 08837
201-548-1024
Tom Dawson
Group Leader R/D
Union Carbide Chem. & Plas. Co. Inc
P.O. Box 8361, Bldg. 770, Rm. 144
South Charleston, WV 25303
304-747-5711
Sue Deegan
ETS Analytical Services
2160 Industrial Drive
Salem, VA 24153
703-387-3995
Ivan B. DeLoatch
Chemist
USEPA Office of Drinking Water
401 M Street, SW
Washington, DC 20460
202-382-2273
Jessie Deluna
Chemist
Hampton Roads Sanitation District
1436 Air Rail Avenue
Virginia Beach, VA 23455
804-460-2261
Violeta Deluna
Water Chemist II
City of Norfolk/Dept. of Utilities
6040 Waterworks Road
Norfolk, VA 23502
804-441-5678
Kathy J. Dien Hillig
Manager, Ecology Analytical Serv.
BASF Corp.
1419 Biddle Ave.
Wyandotte, MI 48192
313-246-6334
-------
793
Pam Dilsizian
Department of Labs & Research
Westchester County
Hammond House Rd.
Valhalla, NY 10595
914-524-5575
Jeffrey A. Dodd
S-CUBED, Div. of Maxwell Labs, Inc.
1800 Diagonal Rd., Suite 420
Alexandria, VA 22314
703-838-0220
Dr. Willard Douglas
Manager, Environmental Labs
Sverdrup Corp.
Bldg. 24-23
Stennis Space Ctr, MS 39529
601-688-3155
Rolla M. Dyer
Professor of Chemistry
University of Southern Indiana
8600 University Blvd.
Evansville, IN 47712
812-464-1712
Dwight Easty
Laboratory Manager
James River Corp.
904 N.W. Drake St.
Camas, WA 98607
206-834-8318
Ronald J. Edgar
Chemist
Spokane County APCA
W. 1101 College Ave., Room 230
Spokane, WA 99201
509-456-4727
Kenneth W. Edgell
Section Chief
The Bionetics Corporation
16 Triangle Park Drive
Cincinnati, OH 45246
513-771-0448
Nariman El Fino
Lead Organic Chemist
Froehling & Robertson
3015 Dumbarton Rd.
Richmond, VA 23228
804-264-2701
Chuck Emnett
General Lab Supervisor
Aptus Environmental Services
21750 Cedar Ave. South
Lakeville, MN 55044
612-469-3475
Anthony N. Enweze
EBASCO
2111 Wilson Blvd., Suite 1000
Arlington, VA 22201
David Evans
Quality Assurance Office Code 130
Norfolk Naval Shipyard
Portsmouth, VA 23709-5000
804-396-9305
Dave Fada
Metro-Seattle
322 W. Ewing St.
Seattle, WA 98119
206-684-2303
Gary Fallick
Waters
34 Maple Street
Milford, MA 01757
508-478-2000
Mike Filigenzi
Senior Scientist
Enseco-Cal Labs
2544 Industrial Blvd.
West Sacramento, CA 95691
916-361-6168
-------
794
Edgar E. Folk, IV
Technical Officer
IEA, Inc.
1901 N. Harrison Ave.
Cary, NC 27513
919-677-0090
Jim Forbes
Lab Director
Law Environmental, Inc.
112 Town Park Drive
Kennesaw, GA 30144
404-421-3310
Peter Fowlie
Chief, Laboratory Division
Wastewater Technology Centre
P.O. Box 5050, Burlington
Ontario, Canada L7R4A6
416-336-4633
Drew Francis
Chemist
Hampton Road Sanitation District
1436 Air Rail Avenue
Virginia Beach, VA 23455
804-460-2261
William D. Frazier
Analytical Chemist
City of High Point, Central Lab
P.O. Box 230
High Point, NC 27261
919-883-3410
Candace D. Friday
QA/QC Manager
Keystone Lab - Houston
3911 Fondren
Houston, TX 77063
713-266-6800
Robert E. Fuchs
President
Environmental Consultants, Inc.
391 Newman Avenue
Clarksville, IN 47130
812-282-8481
Harry Gearhart
Conoco
P.O. Box 1267
Ponca City, OK 74603
405-767-4315
Peter Georges
Marketing Director
Environmental Science & Engineering
217 Long Hill Crossroads
Shelton, CT 06484
203-926-9081
Noshi Gerges
Philadelphia Naval SY
Philadelphia Bldg. 121
Philadelphia, PA 19112
215-897-3284
William E. Gillenwaters
Chemist
Newport News Shipbuilding
Dept. 031, 4101 Washington Ave.
Newport News, VA 23607
804-688-2475
Ray Graves
Senior Chemist
GNB, Inc.
P.O. Box 2165
Columbus, GA 36867
404-689-1701
Keith Greene
Chemist
American Analytical Labs, Inc.
840 South Main Street
Akron, OH 44311
216-535-1300
John P. Gute
Lab. Supervisor Method, Res. & QA
L.A. County Sanitation District
1965 S. Workman Mill Road
Whittier, CA 90604
213-699-8903
-------
795
Lee Hachigian
Manager, Water Pollution Control
General Motors Corp.
30400 Mound Road
Warren, MI 48090-9015
313-947-1656
Clarence Haile
PACE, Inc.
1710 Douglas Dr. North
Minneapolis, MN 55422
612-525-3404
Guy J. Hall
President
Environmental Testing Services, Inc
816 Norview Ave.
Norfolk, VA 23509
804-853-1715
Philip Hamlin
ITT Rayonier Inc.
409 E. Harvard
Shelton, WA 98584
206-427-8232
Bryant Harrison
Acurex Corporation
4915 Prospectus Drive
Durham, NC 27713
919-544-4535
Lee Helms
Curtis & Tompkins
1250 S. Boyle Ave.
Los Angeles, CA 90023
213-269-7421
Rob Henry
VG Instruments
14513 Spotswood Furnace Road
Fredericksburg, VA 22401
703-786-5153
Michael Herbert
Baxter Health Care
Rt.#120 & Wilson Road
Round Lake, IL 60073
Geoff Hinshelwood
Organic Chemist
Jennings Lab
1118 Cypress Ave.
Virginia Beach, VA 23451
804-425-1498
Paula Hogg
Chemist
Hampton Roads Sanitation District
1436 Air Rail Avenue
Virginia Beach, VA 23455
804-460-2261
Dr. Philip Holt
Occidental Chemical
2801 Long Road
Grand Island, NY 14072
716-773-8538
Ben Honaker
Chemist
USEPA, ITD
401 M Street, SW (WH-552)
Washington, DC 20460
202-382-2272
Henry H. Hook
Deputy Director
Naval Supply Center
Fuel Department, Code 700
Norfolk, VA 23512
804-484-6140
E.W. Hoppe
Sr. Research Scientist
Battelle NW Labs
P.O. Box 999, Mail Stop P7-22
Richland, WA 99352
509-376-2126
-------
796
Robert M. Houser, Ph.D.
Technical Director
TCT - St. Louis
1908 Innerbelt Bus. Ctr. Dr.
St. Louis, MO 63114
314-426-0880
Dr. Lyman H. Howe, III
Research Chemist
Tennessee Valley Authority
150 401 Chestnut St. (CC IN 150A-C)
Chattanooga, TN 37402-2801
615-751-3711
Stavros R. Howe
Lab Manager
Molecular Ecology Institute
1250 Bellflower Boulevard
Long Beach, CA 90840
213-985-4019
George D. Howell
Supv. Chemist
Naval Supply Center
Fuel Department, Code 700
Norfolk, VA 23512
804-444-2761
Dr. Francis Y. Huang
Environmental Science & Eng., Inc.
11665 Lilburn Park Road
St. Louis, MO 63040
314-567-4600
Greg Hudson
Oldover Corp.
P.O. Box 228
Ashland, VA 23005
804-550-2644
Frank Hund
USEPA
438 N. Armistead St. #304
Alexandria, VA 22312
202-382-7182
Mary M. Husted
Environmental Manager
Husted & Associates
P.O. Box 5256
High Point, NC 27262
919-869-3097
Nang Huynh
Laboratory Manager
National Laboratories Inc.
3210 Claremont Ave.
Evansville, IN 47712
812-422-4119
Richard Javick
Research Associate
FMC Corporation
Box 8
Princeton, NJ 08543
609-520-3639
Ellen E. Jenkins
Section Manager
DataChem Laboratories
960 W. LeVoy Drive
Salt Lake City, UT 84123
801-266-7700
Debra Johnson
Chemist
Aptus Environmental Services
21750 Cedar Ave. South
Lakeville, MN 55044
612-469-3475
Dr. Phanibhushan B. Joshipura
Supervisory Chemist
Naval Supply Center
Fuel Department, Code 700
Norfolk, VA 23512
804-484-6430
Jeffrey T. Keever
Research Analytical Chemist
Research Triangle Institute
P.O. Box 12194
Research Triangle Park, NC 27709
919-541-7460
-------
797
R. Michael Kennedy
Lab Supervisor
City of Rock Hill, Env. Man. Lab
P.O. Box 11706
Rock Hill, SC 29731-1706
803-329-5504
A. M. Kettner
Mobil Oil Corporation
P. 0. Box 1027
Princeton, NJ 08543-1027
Mary S. Khalil
Instrument Chemist III
MWRD of Greater Chicago
550 S. Meacham Road
Schaumburg, IL 60193
708-529-7700
Alan D. King
Director of Env. Services
Sherry Laboratories
2203 S. Madison Street
Muncie, IN 47302
317-747-9000
Dewey R. Klahn
Environmental Science Corp.
1910 Mays Chapel Road
Mt. Juliet, TN 37122
615-758-5858
Kelly Klatt
Technical Support Chemist
J & W Scientific
91 Blue Ravine Road
Folsom, CA 95630
916-985-7888
Herman J. Kresse
Lab Director
M.B.A. Labs
340 South 66th Street
Houston, TX 77011
713-928-2701
Mark Kromis
Vice President
Bionomics Laboratory, Inc.
4310 E. Anderson Road
Orlando, FL 32812
407-851-2560
Beth Kummling
ECO LOGIC
143 Dennis St., Rockwood
Ontario, Canada NOB 2KO
519-856-9591
An Lai
Chemist
City of Garland, Duck Crk. WWTP Lab
750 Duck Creek Way
Sunnyvale, TX 75182
214-226-7626
James D. Lamb, Jr.
Management Analyst
Naval Supply Center
Fuel Department, Code 700
Norfolk, VA 23512
804-444-5322
Rebecca A. LaRue
Assistant Professor of Chemistry
Cooper Union, Advan. of Sci. & Art
Sch. of Engr., 51 Astor Place
New York, NY 10003
212-353-4372
Peter A. Law
Laboratory Manager
Tighe & Bond, Inc.
53 Southampton Road
Westfield, MA 01085
413-562-1600
Nathan Levy
President
A & E Testing, Inc.
1717 Seabord Drive
Baton Rouge, LA 70810
504-769-1930
-------
798
James W. Lewis
Laboratory Projects Manager
Bionetics Analytical Laboratories
18 Research Drive
Hampton, VA 23666
804-547-8935
Michael D. Lewis
T.C. Analytic, Inc.
1200 Boissevain Ave.
Norfolk, VA 23507
804-627-0400
Ron Lewis
Quality Assurance Office Code 130
Norfolk Naval Shipyard
Portsmouth, VA 23709-5000
804-396-9305
Dr. Albert A. Liabastre, Ph.D
USAEHA - South
2489 King Arthur Circle
Atlanta, GA 30345
404-752-2826
James Longbottom
Research Chemist
USEPA, EMSL-Cin.
26 W. Martin Luther King Dr. MS-525
Cincinnati, OH 45268
513-569-7325
Lazaro Lopez
Asst. Director
Suburban Labs
4140 Lite Drive
Hillside, IL 60162
708-544-3260
Dr. Raymond J. Lovett
Professor
West Virginia University
Department of Chemistry
Morgantown, WV 26506
304-293-3068
Norman Low
Chemist
Hewlett Packard
1601 California Ave.
Palo Alto, CA 94304
415-857-7381
Curtis Lueckenhoff
Chemist
Missouri DNR
2010 Missouri Blvd.
Jefferson City, MO 65101
314-751-7930
A.J. Malanowicz
Mobil Oil Corporation
P.O. Box 1027
Princeton, NJ 08543-1027
Douglas B. Manigold
Supervisor Chemist
U.S. Geological Survey, WRD
5293-B Ward Road
Arvada, CO 80002
303-236-5345
Mark Marcus
Director, Analytical Programs
Chemical Waste Management
150 W. 13th Street
Riverdale, IL 60627
708-841-8360
Michael J. Martin
Research Manager
BASF Corp.
1419 Biddle
Wyandotte, MI 48192
313-246-6878
Dr. Thomas D. Mathews
S.C. Wildlife & Marine Res. Dept.
P.O. Box 12559
Charleston, SC 29412-2559
803-762-5083
-------
799
Harry McCarty
Viar & Co./Sample Control Center
300 North Lee Street, Suite 200
Alexandria, VA 22314
703-557-5040
Frank McCullough
Applications Chemist
ABC Labs
P.O. Box 1097
Columbia, MO 65201
314-474-8579
Cheryl McGuire
Chemist
Solutions Laboratories
814-H Greenbrier Circle
Chesapeake, VA 23320
804-420-0467
Neal A.A. McNeill
Chemist
Newport News Shipbuilding
Dept. 031, 4101 Washington Ave.
Newport News, VA 23607
804-380-7744
Carol Meyer
NY State Dept Health
Wadsworth Cntr. for Labs & Research
Albany, NY 12201-0509
518-486-5670
Ann G. Miller
S-CUBED, Div. of Maxwell Labs, Inc.
1800 Diagonal Rd., Suite 420
Alexandria, VA 22314
703-838-0220
Dr. Deborah S. Miller
Union Carbide Chem. & Plas. Co. Inc
P.O. Box 8361, Bldg. 770, Rm. 318
South Charleston, WV 25303
304-747-4463
Harold W. Miller
Atlantic Div., NFEC
Bldg. IAA Code 1811
Norfolk, VA 23511
804-445-1929
Ray Mindrup
Supelco Inc.
Supelco Park
Bellefonte, PA 16823
814-359-5414
Karen L. Mixon
GC Supervisor/Chemist
Analytical Technologies, Inc.
560 Naches Ave. SW, Suite 101
Renton, WA 98055
206-228-8335
Jack Morgan, Jr.
Lab Manager
E.I. DuPont
P.O. Box 27001
Richmond, VA 23261
804-383-2968
Dr. Huggins Z. Msimanga
Kennesaw State College
Chemistry Department
Marietta, GA 30061
404-423-5088
John R. Nein
Chemist
Naval Supply Center
Fuel Department, Code 700
Norfolk, VA 23512
804-444-2761
Gordon Nelson
Quality Assurance Office Code 130
Norfolk Naval Shipyard
Portsmouth, VA 23709-5000
804-396-9305
-------
800
Becky Newman
County Court Reporters, Inc.
124 Cork Street
Winchester, VA 22601
703-667-0600
Henry B. Ojeniyi
President
JENLABS, Env. & Con. Serv. Co., Inc.
P.O. Box 116
Westville, NJ 08093
609-848-7227
Beth Olsen
Naval Supply Center
Fuel Department, Code 700
Norfolk, VA 23512
804-444-5137
Alicia P. Ordona
QA Analyst
DGS-DCLS
One North 14th Street
Richmond, VA 23219
804-786-3411
Martha C. Orr
Chief Chemist
HRSD, North Shore Lab
101 City Farm Road
Newport News, VA 23602
804-874-1287
Michael Palmer
Organic Chemistry Manager
PACE, Inc
5460 Beaumont Center Blvd.
Tampa, FL 33634
813-884-8268
Wen Pan
Chemist
Sherry Laboratories
2203 S. Madison Street
Muncie, IN 47302
317-747-9000
Trikam R. Patel
Associate Chemist - I
New York City DEP
Adm. Bldg. Rm. 316 Wards Island
New York City, NY 10035
212-860-3636
Michael N. Petterelli
OBG Laboratories, Inc.
5000 Brittonfield Prkwy, Suite 300
Syracuse, NY 13221
315-437-0200
William F. Pfeiffer
Ginosko Laboratories, Inc.
17875 Cherokee
Harpster, OH 43323
614-496-4571
Eugene Pier
Business Development Manager
Dohrmann/Rosemount
3240 Scott Blvd.
Santa Clara, CA 95054
408-727-6000
Alfredo Pierri
Week Laboratories Inc.
14859 E. Clark Ave.
Industry, CA 91745
818-336-2139
Marvin Piwoni
Lab Manager
Hazardous Waste Center
One E. Hazelwood Drive
Champaign, IL 61820
217-333-8724
Roy W. Plunkett, Jr.
Analytical Chemist Supervisor
Commonwealth of VA, DGS/DCLS
1 N. 14th St., Room 264
Richmond, VA 23219
804-225-4007
-------
801
Gregory E. Pronger
Technical Director, Organic Labs
NET Midwest
850 W. Bartlett Road
Bartlett, IL 60103
708-289-7333
Bob Pullano
Quality Control
General Engineering Laboratories
P.O. Box 30712
Charleston, SC 29417
803-556-8171
Gil Radolovich
Section Head
Midwest Research Institute
425 Volker Boulevard
Kansas City, MO 64110
816-753-7600
Katharine M. Raynor
Director, Quality Assurance Div.
Naval Supply Center
Fuel Department, Code 700
Norfolk, VA 23512
804-444-2761
Leah Reed
Sr. Analytical Chemist
Viar & Company
300 North Lee St., Suite 200
Alexandria, VA 22314
703-684-5678
Brenda Reeves
Conference Planner
ERCE
11260 Roger Bacon Drive
Reston, VA 22090
703-471-5550
Dennis G. Revell
USEPA - Athens
College Station Road
Athens, GA 30613
404-546-3387
Lynn Riddick
Viar & Co./Sample Control Center
300 North Lee Street, Suite 200
Alexandria, VA 22314
703-557-5040
Debbie Rindfleisch
Sr. Lab Analyst
Newport News Waterworks
3629 George Wash. Memorial Hwy.
Newport News, VA 23602
804-867-9171
Ed Ritter
NJ Institute of Technology
323 King Blvd.
Newark, NJ 07102
201-596-5605
Roxanne M. Robinson
Scientific Officer
American Assoc. for Lab. Accredit.
656 Quince Orchard Rd. #704
Gaithersburg, MD 20878
301-670-1377
Dr. Peter D. Robison
Group Leader, Environ. Analysis
Texaco, Inc.
P.O. Box 509
Beacon, NY 12508
914-838-7692
David Roques
Research Associate
L.S.U. Institute for Env. Studies
Louisiana State Univ., 42 Atkinson
Baton Rouge, LA 70803
504-388-8521
Ann Rosecrance
Project Manager
ICF Technology
9300 Lee Highway
Fairfax, VA 22031
703-218-2587
-------
802
Robert N. Rosenfeld
TBD Analysis, Inc.
2261 Federal Ave.
Los Angeles, CA 90064
213-478-4050
Dr. James R. Roth
Laboratory Manager
Alpha Analytical Labs
8 Walkup Drive
Westborough, MA 01581
508-898-9220
Richard Rozene
Manager, Business Development
ABB Environmental Services
261 Commercial St., Box 7050
Portland, ME 04112
207-874-2400
Anna M. Rule
Chief Laboratory Division
Hampton Roads Sanitation District
P.O. Box 5000
Virginia Beach, VA 23455
804-460-2261
Joseph H. Rule
Associate Professor
Old Dominion University
Geological Sciences
Norfolk, VA 23529
804-683-4301
Dr. Eric G.S. Rundberg
Deputy Projects Mngr. & QA Officer
Bionetics Corp.
18 Research Drive
Hampton, VA 23666
804-865-0880
Mark Rusler
Bionomics Laboratory, Inc.
4310 E. Anderson Road
Orlando, FL 32812
407-851-2560
Jeffrey V. Ryan
Chemist
Acurex Corporation
4915 Prospectus Drive
Durham, NC 27713
919-544-4535
William Schnute
Finnigan MAT
355 River Oaks Parkway
San Jose, CA 95134
408-433-4800
Alan Schoffman
Vice President
U.S. Testing Company
1415 Park Avenue
Hoboken, NJ 07030
201-792-2400
Dr. William D. Schulz
Dept. of Chemistry
Eastern KY University
Moore 337, EKU
Richmond, KY 40475
606-622-1456
Janice Sears
Conference Planner
ERCE
11260 Roger Bacon Drive
Reston, VA 22090
703-471-5550
Steven M. Shatkin
Organic Chemist
E.S. Babcock & Sons Inc.
P.O. Box 432
Riverside, CA 92502
714-684-1881
Edwin F. Shaw, Jr.
Division Operations Manager
Bionetics Corp.
18 Research Drive
Hampton, VA 23666
804-865-0880
Peter Shen
President
Quality Assurance Laboratory
6555 Nancy Ridge Drive, Suite 300
San Diego, CA 92121
619-566-1060
Lawson E. Sherman
Project Chemist
Texaco, Inc.
P.O. Box 509
Beacon, NY 12508
914-838-7531
Kate Simmons
Laboratory Director
Tighe & Bond, Inc.
53 Southampton Rd.
Westfield, MA 01085
413-562-1600
Husein Sitabkhan
Lab Director
ASAP Technical Services, Inc.
19701 South Miles Road
Warrensville, OH 44128
216-663-0808
Dorothy S. Small
President
Solutions Laboratories
814-H Greenbrier Circle
Chesapeake, VA 23320
804-420-0467
Ronald B. Smart
Professor
West Virginia University
Department of Chemistry
Morgantown, WV 26506
304-293-3068
Michael Smith
Environmental Supervisor
SD Dept. Health Lab.
500 East Capitol
Pierre, SD 57501
605-773-3368
Nancy Souter
Chemist/Project Manager
Twin City Testing Corporation
662 Cromwell Avenue
St. Paul, MN 55114
612-649-5517
Margaret E. Wickham St. Germain
Mass Spectrometrist
Midwest Research Institute
425 Volker Blvd.
Kansas City, MO 64110
816-753-7600
Sally S. Stafford, Ph.D.
Senior Applications Chemist
Hewlett Packard
P.O. Box 900
Avondale, PA 19311-0900
215-268-2281
Eric Steindl
Chemical Standards Chemist
Restek Corporation
110 Benner Circle
Bellefonte, PA 16823
814-353-1300
Roger E. Stewart
Technical Consultant
Webb Technical Group, Inc.
4320 Delta Lake Drive
Raleigh, NC 27612
919-787-9171
G. Edward Stigall
Technical Section Chief
USEPA Chesapeake Bay Liaison Off.
410 Severn Ave. Suite 109-110
Annapolis, MD 21403
301-266-6873
Melanie Stoner
Program Administrator
ENSECO, Inc.
7440 Lincoln Way
Garden Grove, CA 92641
714-898-6370
Kathleen Stralka
Statistician
SAIC
8400 Westpark Drive
McLean, VA 22102
703-734-2553
Mark Strangler
Chemist
Fairfax Cty. Health Dept.
10777 Main Street
Fairfax, VA 22030
703-246-3218
Dr. Chih-Wu Su
R & D Center
US Coast Guard
Avery Point
Groton, CT 06340
203-441-2720
Charles Sueper
Scientist V
Twin City Testing
662 Cromwell Avenue
St. Paul, MN 55114
612-649-5520
Roy Sutton
Development Chemist
Compuchem Inc.
P.O. Box 12652
Research Triangle Park, NC 27709
919-248-6468
Joseph Szlachciuk
Environmental Tech.
Texas Instruments, Inc.
34 Forest Street
Attleboro, MA 02703
508-699-1343
Jerry Thoroa
MAS Technology Corporation
110 South Hill St.
South Bend, IN 46617
219-233-3272
Frank H. Thorn
Sr. Laboratory Technician
Newport News Shipbuilding
Dept. 031; 4101 Washington Ave.
Newport News, VA 23607
804-688-4181
Samuel To
USEPA
401 M Street, S.W. EN-338
Washington, DC 20460
202-475-8322
James C. Todaro
Laboratory Director
Water Control Laboratory
106 South St.
Hopkinton, MA 01748
508-435-6824
Susanne F. Tomajko
BP Research
4440 Warrensville Road
Cleveland, OH 44128
216-581-5939
David Tompkins
President
ETS Analytical Services
2160 Industrial Drive
Salem, VA 24153
703-387-3995
Allan Tordini
V.P., Technical Services
NET
220 Lake Drive East, Suite 301
Cherry Hill, NJ 08002
609-779-3373
Dr. David S. Trimble
Analytical Chemist
Union Camp Corp.
P.O. Box 178
Franklin, VA 23851
804-569-4596
Felicitas Trinidad
Hoffmann La Roche
340 Kingsland St.
Nutley, NJ 07110
201-235-3131
Jonathan S. Tschritte
Environmental Chemist
DuPont Company
P.O. Box 27001
Richmond, VA 23261
804-320-5398
F. Joseph Unangst
Laboratory Director, Vice President
Galson Laboratories
6601 Kirkville Road
East Syracuse, NY 13057
315-432-0506
Peter Unger
Vice President
American Assoc. for Lab. Accredit.
656 Quince Orchard Road #704
Gaithersburg, MD 20878
301-670-1377
Joe Viar
Chairman
Viar & Co., Inc.
300 North Lee Street, Suite 200
Alexandria, VA 22314
703-557-5040
Joseph S. Vitalis
Chemical Engineer
U.S. EPA, OWRS, ITD
401 M Street, S.W., E-908
Washington, DC 20460
202-382-7172
Dr. Dallas Wait
Gradient Corporation
44 Brattle St.
Cambridge, MA 02138
617-576-1555
Tonie M. Wallace
President
County Court Reporters, Inc.
124 Cork Street
Winchester, VA 22601
703-667-0600
Gary Walters
Principal Scientist
ENSECO-RMAL
4955 Yarrow Street
Arvada, CO 80002
303-421-6611
Randy D. Ward
Senior Chemist
Environmental Science Corp.
1910 Mays Chapel Rd.
Mt. Juliet, TN 37122
615-758-5858
Dr. Walter C. Weimer
Battelle NW Labs
P.O. Box 999, Mail Stop P7-22
Richland, WA 99352
509-376-3995
Stuart Whitlock
Martel Laboratory Services Inc.
1025 Cromwell Bridge Rd.
Baltimore, MD 21204
301-825-7790
Robert Wichser
Manager, Chemical Services
Froehling & Robertson, Inc.
3015 Dumbarton Road
Richmond, VA 23228
804-264-2701
Dr. Daniel J. Williams
Associate Professor of Chemistry
Kennesaw State College
P.O. Box 444, Dept. of Chemistry
Marietta, GA 30061
404-423-6174
Allison Wilson
Chief Chemist
Hampton Roads Sanitation District
P.O. Box 5000
Virginia Beach, VA 23455
804-460-2261
Hugh Wise
USEPA, ITD
401 M Street, SW (WH-552)
Washington, DC 20460
Mark Yancey
Res. Scientist
Battelle
505 King Ave.
Columbus, OH 43201
614-424-4654
Thomas Yawaraski
Laboratory Manager
University of Michigan
181 Engineering IA
Ann Arbor, MI 48109
313-763-5686
Steve Yocklovich
Burlington Research, Inc.
P.O. Box 2481
Burlington, NC 27215
919-584-5564
Dr. Demetri Zadelis
LA Co. Sanit. Districts
1965 S. Workman Mill Rd.
Whittier, CA 90601
-------