UNITED STATES ENVIRONMENTAL PROTECTION AGENCY
OFFICE OF WATER
INDUSTRIAL TECHNOLOGY DIVISION
TENTH ANNUAL ANALYTICAL SYMPOSIUM
NORFOLK, VIRGINIA
MAY 13 & 14, 1987
-------
FOREWORD
The Industrial Technology Division (ITD) of the EPA's Office of Water sponsors an
annual symposium on the analysis of pollutants in the environment so that chemists,
biologists, engineers and other members of the scientific community may gather to exchange
new ideas and discuss advances in analytical methodologies.
These proceedings document the presentations and discussions from the Tenth Annual
Analytical Symposium. Topics at the Tenth Symposium ranged from methods for
determination of pollutants in adipose tissue to drilling fluid toxicity tests. The ITD supports
this symposium with the aspiration that it will augment knowledge of methods for the
determination of environmental pollutants and spark new interest in the field of analytical
chemistry.
-------
TENTH ANNUAL ANALYTICAL SYMPOSIUM
Office of Water Regulations and Standards
Industrial Technology Division
May 13-14, 1987
Norfolk, Virginia
TABLE OF CONTENTS
May 13, 1987

Presentation/Speaker                                        Page

WELCOME AND INTRODUCTION
William A. Telliard, Chief
Energy and Mining Branch
USEPA, Industrial Technology Division

Outlook on EPA Methods Development and Integration
Robert Booth 5
USEPA, Environmental Monitoring and Support Laboratory

Analysis of Digested Sludge, Filter Cake, and Compost
by Isotope Dilution GC/MS
Bruce Colby 21
Pacific Analytical, Inc.

Determination of Pesticides and Other Semi-Volatile
Pollutants in Adipose Tissue by GC/MS
Michael Aaronson 60
Colorado Pesticide Center
Colorado State University

Effect of Number of Calibration Points on Precision
and Accuracy of GC/MS
Dale Rushneck 108
Interface, Inc.
Barrett Eynon
SRI International

Particle Beam LC/MS: An Emerging Practical Technique
for Characterization of Involatile and Semi-volatile
Environmental Pollutants
Andrew Sauter 130
Andrew D. Sauter Consulting
-------
TABLE OF CONTENTS
May 13, 1987

PRESENTATION/SPEAKER                                        PAGE

Determination of Volatile Organic Pollutants in the
Mid-Part-Per-Trillion Range by GC/ITDMS
James Poppiti 190
Finnigan MAT

Analysis of Semi-Volatile Organic Compounds by Robotics
R. H. Ode 234
Mobay Corporation

Analyses of Water and Soils for Trace Organic
Contamination via Headspace and Purge and Trap
Techniques Using Robots
Michael Markelov
Sohio Research

Computerized Data Reduction and Reporting in a
Large Scale Environmental Laboratory
R. Lee Myers 267
CompuChem Laboratories, Inc.

On-Line Automated NPDES Monitoring Using Robotics
Spencer Smith 299
Ciba-Geigy Corporation

Automated Acquisition, Reduction, and Quality
Assurance of ICP Data
Laurence Penfold 328
Thermo Analytical/Norcal
-------
TABLE OF CONTENTS
May 14, 1987

Presentation/Speaker                                        PAGE

Determination of Organic Compounds in Wastes and TCLP
Extracts of Wastes From Oil and Gas Extraction
Operations Using Isotope Dilution GC/MS
C. Lee Helms 359
S-Cubed

Pollutants in Drilling Mud Pits: Comparison of Total
Analyses with Toxicity Characteristic Leaching
Procedure and Lysimeter Results
Michael Phillips 399
Enseco, Inc.

Organic Chemical Characterization of Diesel and
Mineral Oil Used as Drilling Mud Additives
John Brown 443
Battelle NEMRL

Naturally Occurring Biomarkers for Identifying
Hydrocarbon Contamination
Timothy Snow 491
Conoco, Inc.

On Site Determination of Volatile Priority Pollutants
in Soil by Headspace Gas Chromatography/Ion Selective
Detection
Harry Gearhart 520
Conoco, Inc.

A Wide-Bore, Capillary Column GC Method for
Organochlorine Pesticides and PCB's
Paul Marsden 563
S-Cubed
-------
TABLE OF CONTENTS
May 14, 1987
Presentation/Speaker
PAGE
Occurrence of Organic Non-Priority
Pollutants in the Rubber Industry
J. M. McGuire 602
USEPA, Athens ERL
Method Detection Limits, or How Low Can
You Go?
John Koehn 635
Shell Development Company
Produced (Formation) Water from Oil and Gas
Production: Test Method Development and
Preliminary Toxicity Test Results
Richard Montgomery 663
USEPA, ERL Gulf Breeze
Toxicity Identification and Evaluation:
A Permits Case History
Robert Schaffer 684
CENTEC Corporation
Variability in Drilling Fluid Toxicity
Test Results
Jim O'Reilly 713
Exxon Production and Research Company
CLOSING REMARKS 752
Roster of Attendees 753
-------
PROCEEDINGS
MR. TELLIARD: Good morning.
My name is Bill Telliard. I'm with the Environmental
Protection Agency and I'm here to help you.
I'd like to welcome you this morning to the
tenth annual semi-respectable conference on environmental
measurement. Over the next two days, I think you'll
find that we're going to present some very interesting
papers.
In the back of the room you will find a number
of publications that we are making available. One of
them is affectionately referred to as the list of
lists. The list of lists is a compilation of the
poison of the week club. It is the agency's, shall
we say, target analytes or analytes of concern from
the various program offices put together in a compendium
so that if things are dull around the office, you can
look up a poison to upset your management with, or
certainly at least find trace amounts of in some area
of your plant.
The other publication is the ITD list of analytes.
This particular publication deals with those compounds
which are a subset of the list of lists, that the Office
-------
of Water Regulations and Standards is routinely
monitoring and analyzing for at various plant sites
and in the ambient environment over this year, 1987.
It again gives you a feel for what the Office of
Water is using as its target analytes. It's primarily
made up of, again, the priority pollutants, the
Michigan list, which is a compilation primarily of
pesticides, an extracted set of compounds from the
RCRA Appendix 8, now I guess Appendix 11, and other
and sundry compounds that we know we can measure.
Also, there are copies of previous years'
proceedings which many of you may want to pick up and
use as doorstops; they're very handy for that: nice,
big, and thick.
We've been doing this now for ten years, and of
course, this will probably be the last year because
we'll now be able to answer all the questions.
Anything that takes ten years, you certainly ought to
be able to have all the questions answered by then.
But just in case we have to run over it next year,
we'll let you know on that.
For those people who are going to the Pumpkin
Float and boat ride tonight, we'll give you a little
-------
bit more history on where you can pick up the vessel
out here in front. I would point out we have never
lost an attendee on this boat ride, and would like to
keep our record. I think that's all of the announce-
ments.
I'd like to introduce our first speaker, our
keynote speaker. He's been to these meetings before,
it goes with his job, attending meetings like this.
He's been involved in analytical chemistry for a long
time, that's partially because he's a chemist. That
didn't use to be the case though in this agency. In
this agency everything used to be engineers. When we
first started out in this agency, if you weren't an
engineer you had to do small things, like go for
coffee and stuff. Chemistry was not considered a
real necessary science unless you were running BOD's.
That's changed a little bit now. We're having classes
for most of the engineering staff so they can pronounce
most of the compounds we're analyzing now.
Bob has been with the agency a good number of
years, and has been working primarily with measurement
and has been the deputy director of the environmental
monitoring support lab, and now of course is the director.
-------
Bob has spent a great number of years worrying about
things like turbidity and suspended solids, and now
1,2-biphenol bad stuff. Bob is going to talk a little
bit this morning on where measurement is going in the
agency, and we'd certainly all like to find out about
that. Bob?
-------
5
MR. BOOTH: Good morning.
Thank you, Bill. When my staff heard that I was
going to be giving the opening address, the almost
unanimous response was, you're a fool to follow
Bill Telliard. Actually, when Bill asked me to give
this keynote address, I was flattered. I arrived at
the hotel yesterday evening to find that he and some
of our mutual friends had me placed in one of the
suites. So I had the opportunity last night to write
my notes for this talk at a real nice conference
table. I came down this morning really feeling good,
until I ran into one of the people that I know quite
well and he said, "Look out, you're now taking the
place of Bob Medz." For those of you that were here
last year and in previous years, you know the meaning
of that remark.
There are roughly three or four things I'd like
to touch upon in my keynote address to you. For those
of you that are in the Agency, you're probably not
going to learn anything new. For those of you that
have worked closely with the agency, this may be old
hat also. But I thought I'd try to bring you up
to date as to where we are in the Section 304(h) acti-
-------
vities, how the Agency plans to respond to Section 518
of the Clean Water Act, where the Agency is headed in
terms of trying to get its act together concerning
methods development, and finally, something that
you're probably all keenly interested in, the proposed
fee structure the Agency has published in the Federal
Register.
First, as far as the 304(h) activities are
concerned, there is a start-action request working
its way through the Agency right now in which we will
be proposing in the Federal Register for public
comment an amendment to 40 CFR Part 136.
For those of you that are interested in toxicity
testing, biological monitoring and the way the Agency
hopes to respond to the monitoring requirements of a
water quality-based approach, this should be of keen
interest to you.
What we're planning on doing in this proposal is
to propose certain methods for measuring the toxicity
of pollutants in point source discharges, in drilling
muds, and also in the receiving waters. The tests
that we're thinking of will include both short-term
methods for doing the acute as well as the chronic
-------
-------
8
Turning now to something that is brand new, in
Section 518 of the Clean Water Act which was recently
passed over the President's veto, Congress has asked
the Agency to take a long hard look at the test
procedures being used under Section 304(h), and to
report back to them by next January on three key
items. First of all, to see if the methods are
adequate in terms of doing what is needed for the
permits program. Secondly, to make sure they have
been properly standardized. A third item, which is
going to be probably very time-consuming, is to take
a look at the methods we have in 304(h) and compare
them to other methods that are used as part of the
Agency's regulatory program.
So what we're going to be doing is comparing the
Section 304(h) methods to the methods that are
currently being used by CERCLA, RCRA, the Office of
Drinking Water and the work that our contract
laboratories are doing. Any regulatory type
environmental monitoring method will be compared as
part of this Section 518 study.
Concurrently, we're being charged to take a look
at what other agencies are doing, what's being done
-------
by the private sector, what's being done by other
methods-setting groups. So it looks like this
will probably be the most complete review of comparable
methodology that has been made under the Section 304(h)
program.
Then finally, we are to make some basic
recommendations to Congress as to what changes should
be made in methodology. This report is due to Congress
next January, and the powers that be in Washington have
gone on record as saying we will meet that deadline.
So it's something that you might mark your calendars,
because it's a report I'm sure you'll all be interested
in reviewing.
The way we're going to do it is through a combi-
nation of in-house and extramural-type activity.
Again, because of the strong emphasis that is starting
to be placed on biological monitoring, our staff is
going to be looking at the biological test procedures.
John Winter, whom you know quite well, I'm sure, and
his staff will be looking at QA procedures and comparing
the quality control procedures that are now becoming
part of the methodology throughout the Agency, and
seeing if there isn't some common ground there.
-------
10
Concurrently, as a number of you know, we have
what's known as the equivalency program, in which one
may make application for alternate test procedures to
show comparability to the methods that have been
published in the Federal Register. We will also plug
that into the system review, too.
Then we have people at headquarters who are
going to be talking to the program offices and to the
regions, so that all of this information will be
funnelled into the contractor who concurrently is
going to be reviewing the literature, studying the
laws, looking at the various methods available and
coming up with a comparability statement.
All of that will be packaged together as a draft
document to the Agency that is scheduled to be available
to us for review sometime at the start of fiscal year
'88. So we're looking at about October of this year.
It will be available to the public, I'm sure, after
it goes to Congress in January of 1988.
What is the Agency doing in terms of grappling
with the very momentous problem of coming up with
methods that can be used on a multi-media basis, and
are more generic in nature? The Agency, like most
-------
11
governmental groups, felt the best way to do this was
to form a committee, and just about a year ago formed
what was known as the environmental methods development
steering committee.
This was a unique group, in that for the first
time we brought together representatives from all the
program offices that were involved in environmental
monitoring, whether it be air, water, solid waste,
what have you. People from the regional laboratories,
people from the ORD laboratories, representatives
from the regions and states, we all got together in
one room. I think, if we did nothing else, for the
first time we had together all of the people that had
something to do with providing the environmental
monitoring regulations.
Out of that meeting we formed two work groups,
one that was to look at the short-term methods and
needs of the agency — that was chaired by our host,
Bill Telliard — and a long-term methods group which
was chaired by a person whom I'm sure a number of you
know, David Friedman from the Office of Solid Waste.
Those two groups had a task force composed of
about eight to ten technical people, again representing
-------
12
the key program offices and the key laboratories.
They issued draft reports. These draft reports have
now been put into a final white paper. The white
paper is currently being reviewed throughout the
Agency. Final comments are due in the end of this
month to the AA for ORD, Dr. Vaun Newill.
This final report contains a very basic recommen-
dation, namely, to form a permanent Agencywide
committee that will be reporting directly to the
Administrator or to the Deputy Administrator as the
case may be, on a number of key areas. First of all,
to coordinate and to prioritize on a multi-media basis
the methods needs of the Agency, and to try to
coordinate on an Agency-wide basis the activities
that would be done to respond to the Congressional
mandates. This will provide a focal point that
can serve as a screen to make sure there is a
totally coordinated effort throughout the Agency on a
multi-media generic basis to come up with the methods
needed.
Ideally then, what we would be looking for would
be methods that could be used to meet the needs, not
only of Section 304(h) for the permits program, but
-------
13
also for the Office of Solid Waste, Superfund, toxics,
and drinking water.
In the meantime, a number of people are already
trying to make this happen. For those of you who are
involved in the superfund program, you're probably
keenly aware the solid waste/superfund methods are
being merged as much as possible, the Section 304(h)
methodology is being incorporated where possible,
and the methods we have for meeting the requirements
of the revised Safe Drinking Water Act are drawn
primarily from the 600 series of the permits
program. So there's already an effort being made to
come out with methods that, where possible, have a
base of commonality.
Secondly, this method group would be charged with
putting together a prioritized listing of what the
Agency would be doing in terms of methods development,
quality assurance development, and related activities.
Finally, as part of that, there would be at least a
goal of trying to come up with a generic quality
assurance/quality control section for the methods.
Again, for those of you that are involved in the
permit monitoring requirements, you well know that
-------
14
the section concerning quality assurance is a proposed
section right now. As near as we can tell, it's been
favorably received by the monitoring community. We
hope it will be finalized, with some minor modification,
and will serve as the basis for what the Agency will
be doing.
My final topic has to do with the Agency's
proposed fee structure. For those of you that read
the Federal Register, last September we published a
notice concerning proposed user charges for certain
quality control and performance evaluation samples
under the Clean Water Act and the Safe Drinking Water
Act.
Basically, what the proposal called for is that
the Agency would start charging a fee for all of the
quality control samples, i.e., the known calibration
standards, and also for the performance evaluation
samples, the sample unknowns we send out to the
regulated community.
The fee structure would be based on a sliding
scale. The minimum charge would be 15 dollars up to
about a max of 75 to 80 dollars, I believe it was,
depending upon the analyte and the sample concentration
-------
15
in question.
That proposal was subjected to public comment.
All told, we received 128 public comments, 53 from
state agencies. As you can well imagine, this is
something that they are vitally concerned about.
Eighteen came from consultants, 17 from
municipalities, 17 from independent laboratories, nine
from universities, seven from cities and five from trade groups.
After all of these comments were reviewed throughout
the Agency and submitted to a red-border review, it
was agreed by the Agency we would go forward
with a proposed fee structure. So the Assistant
Administrator for the Office of Research and Development,
Dr. Vaun Newill, has written an official memorandum
to Lee Thomas, our ultimate boss on the 12th floor,
recommending that the Agency go forward with this fee
structure. If he approves it, it will go into effect
on October 1 of this year.
Because of this proposed action, we have had a
number of requests for multiple sets of samples. (Laughter)
It's good you're listening, that's good. So as a
part of my preparing for this talk, as of May 6, we
are now limiting requests to a single set of samples
-------
16
for any type of QC/PE sample. We just had to go to
that drastic measure.
If this is approved by the Agency, I think what
you can expect to see happen is that there will also
be a charge for the repository standards we currently
provide free of charge. There is currently underway
a study for determining whether we should charge a
fee for the PE studies themselves.
A number of you in the audience probably take
part in the annual DMR QA study for the major
dischargers. We send out about 7500 to 8,000 requests
and from that we get about 5,000 to 6,000 participants
taking part in this annual study.
The Office of Water Enforcement feels very
strongly that they want to continue this as one of
their basic QA efforts. So we're working closely
with them right now, trying to work out an arrangement
whereby the Agency will continue to provide this as a
service rather than as a fee. Other than that,
however, we suspect that if from a policy perspective
the fee structure is indeed approved, then we very
quickly will probably move into these other areas.
In summary, I would say we have made a lot of
-------
17
progress in the last ten years. We certainly haven't
solved all the problems. I think the Agency is more
keenly aware of the need for us to be looking at
things on a multi-media basis, of coming up with
methods that can be used on all sample types, of
working more closely with you as part of the monitoring
community to develop these methods, and to provide
you with quality assurance techniques that will not only
meet the Agency needs, but will also not prove a
hardship to you.
Bill started out by saying he was here to help
you. He probably didn't mean that. (Laughter) But I
am here to help you, and to prove it, I am listed in
the back of the book. The phone number that is there
is indeed the correct phone number. So if there's
any way we can help you in these areas of analytical
methods, quality assurance, biomonitoring, please
call me.
Thank you very much.
-------
18
Question and Answer Session
MR. TELLIARD: Thank you,
Bob. Are there any questions?
Rules on questions. There are microphones on
each aisle. We have stenographers taking the
proceedings down so we can mail it to you so that you
can really remember what you said later on after this
thing is over with. So if there are questions, go
to the microphone, identify yourself and ask your
questions. I can't believe you're going to let him
off this easy.
One small note. For those people who, you'll notice,
get up and run out tomorrow when the postman comes,
that's because my procurement is due. All you
bidders figuring out prices for the samples, I
noticed you working at the table. That's good. Get
them down cheap.
Also relating to what Bob has said, the Office
of Water is going out...in fact, I think it's already
been noticed in the CBD, for six laboratories to
support the permit program on biomonitoring. This
is our, as I affectionately refer to it, our first
critter contract. This is killing critters, surviving
-------
19
critters, maiming critters. It's primarily fresh
water monitoring. Again, it's for whole effluent
monitoring, primarily for permit activities, both for
the states and for the Agency.
We also envision shortly coming out with a procure-
ment for additional marine saltwater testing. I
guess we're going to crunch some sea urchins and a
few other things in that procurement.
But this is about the first effort that
the Office of Water has made to actually come up with
an IFB to support this type of activity. So referring
back to what Bob has said, biomonitoring is certainly
the tool of the day or the tool that Enforcement is
looking at down the road, for what that's worth.
Our next speaker is a constant attendee at this
meeting, despite our efforts. Dr. Colby has been here
a number of times, making presentations. Bruce has
been intimately involved in development of the isotope
dilution methodology that the Office of Water Regulations
and Standards is presently using, and that the
Industrial Technology Division uses routinely for
effluent monitoring.
Bruce has taken on a contract to help us come up
-------
20
with a proposed methodology for the analysis of
municipal sludge. This was in support of the proposed
regulations that will be coming out on the land ban
program for the disposal of municipal sludge. Bruce
is going to talk about one of his favorite subjects,
municipal sludge.
-------
21
DR. COLBY: First, I'm not
entirely sure my slides are in the carousel back
there, so...ah, they are. Great, Bob, thanks.
The interest in sludge and in analyzing the A list
of compounds in the sludge revolves around the cost
associated with disposing of the sludge on land,
spreading it out on perhaps some farmer's field as a
form of fertilizer or something of that nature. The
sludge cannot be disposed of in great quantities
if it contains chemicals at levels which are of
environmental concern.
The analytical methods we apply to wastewater
treatment sludges are challenging because of the
complexity of the sludge. The need to do a good job
of analyzing sludge relates to the cost issue. If
we can push the detection limits down then, assuming
there are no chemicals of concern present, and for
the most part we don't really find many of the targeted
compounds present, then it's easier to justify disposing
of larger quantities of sludge in smaller areas.
This results in a very large cost factor driving
the detection limit issue for sludge.
There are basically two methods for determining
-------
22
volatile organics in solid samples. One, the 8240
GCMS method, is in the SW846 manual. According to
its documentation, it is good for about a thousand
micrograms per kilogram for the majority of the analytes.
I personally believe that this is pretty pessimistic.
I think this methodology can produce lower detection
limits.
The second method is similar to 8240 but it
has a different stated detection limit, and it is
used by the "Superfund people" for the Contract Lab
Program. Here the detection limits for most of
the analytes are in the vicinity of ten micrograms
per kilogram. I think if anything, for samples as
complex as sewage sludge, this may be a bit optimistic.
Based on the above, we picked a goal of five micrograms
per kilogram for detection limits for the sludge
analysis. Given this target detection limit as a
goal, we established a tentative analysis scheme.
With this scheme the first thing we do, if we can
look at this flow diagram, is to determine the percent
solids for the material we're dealing with.
Sludge is far from a uniform material. Some
sludges pour quite easily; others are hard enough
-------
23
to run a truck over. They may have greatly different
quantities of water in them.
Once we know how much solid is present, we carry
out the analytical process based on how much the
sample appears to be like a water or a solid. If we
have less than one percent solids, then we take
five grams of sample and carry it through
a normal isotope dilution GCMS purge and trap
technique.
If we have high solids, we take five grams of
the sample, dilute it in five milliliters of water,
and then back into this other scheme. We try to make
the solids seem like a water. Presumably the organics
which are adsorbed to the solids will be pulled off
to some degree into the water and then removed via
purging.
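To make that branch point concrete, here is a minimal sketch in Python; it is our illustration of the routing he describes, not code from the method, and the function name and return strings are invented:

```python
def route_voa_sample(percent_solids):
    """Route a sludge sample for volatiles analysis by percent solids.

    Follows the branch described above: low-solids samples are purged
    directly; high-solids samples are first made to look like a water.
    """
    if percent_solids < 1.0:
        # Low solids: take a five gram aliquot and run the normal
        # isotope dilution GC/MS purge and trap procedure.
        return "5 g aliquot -> spike -> purge and trap GC/MS"
    # High solids: dilute 5 g of sample into 5 mL of water so that
    # organics adsorbed to the solids partition into the water and
    # can be removed by purging.
    return "5 g into 5 mL water -> spike -> purge and trap GC/MS"

print(route_voa_sample(0.5))   # a pourable digester sludge
print(route_voa_sample(16.0))  # a filter cake
```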
The purge and trap parameters include a three-
phase trap. Gas chromatography is based on a 2.8
meter long by two millimeter ID packed column
containing one percent SP-1000 on Carbopack B, helium
carrier and a normal sort of temperature program rate.
Here we're looking at a 45 degree start temperature,
a three minute hold followed by eight degrees a
-------
24
minute to 240, where we hold again.
The mass spec uses 70 eV ionization, scanning
a mass range from 20 to 250 amu in two to three
seconds.
The identification of detected compounds is
based on their mass spectra and their retention
times. The reference mass spectra are based on the
five most intense peaks plus any other peaks which
are greater than ten percent of the base peak. All
peaks must agree within a factor of two with the
reference material we used in calibrating the GCMS.
For retention time comparisons, the naturally
abundant compounds which have labeled analogs that
can be used as a retention time reference are required
to be found within plus or minus two scans or about a
12-second window. By looking in this very small window
we cut down on the possibility of false positives.
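The two acceptance checks he describes translate directly into code. The sketch below is ours, with a hypothetical spectrum representation (a dictionary mapping m/z to abundance relative to the base peak); the criteria themselves are as stated in the talk:

```python
def spectrum_matches(reference, unknown, factor=2.0, threshold=0.10):
    """Abundance criterion: the 5 most intense reference peaks, plus any
    reference peak above 10% of the base peak, must be present in the
    unknown within a factor of 2 of the reference abundance."""
    by_intensity = sorted(reference, key=reference.get, reverse=True)
    required = set(by_intensity[:5])
    required |= {mz for mz, rel in reference.items() if rel > threshold}
    for mz in required:
        rel = unknown.get(mz, 0.0)
        if not (reference[mz] / factor <= rel <= reference[mz] * factor):
            return False
    return True

def retention_time_ok(rt_sec, reference_rt_sec, window_sec=6.0):
    """Retention time criterion: within +/- 2 scans of the labeled
    analog, about +/- 6 s (a 12-second window) at 2-3 s per scan."""
    return abs(rt_sec - reference_rt_sec) <= window_sec
```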
The basic differences between the sludge and the
water isotope dilution GCMS methods are, first, that
the calculations are based on micrograms per kilogram
dry weight, just to try to put things onto a constant
basis for what is of interest from the standpoint of
its solid content. We determine our initial sample
-------
25
amounts gravimetrically. We take five grams of sample
in both of the two different approaches we used. We
use what's called a solids purge tube for all the
samples, even if the sample poured and looked like a water
and we took roughly five milliliters of it. We still
use the solids purge tube to hold that part of the
methodology constant.
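Because results are reported per kilogram of dry solids, the found amount must be scaled by the solids content. A small worked example, with illustrative numbers of our own:

```python
def conc_dry_weight(ng_found, sample_mass_g, percent_solids):
    """Concentration in ug/kg on a dry-weight basis.

    ng_found: nanograms of analyte measured in the aliquot
    sample_mass_g: wet sample mass, determined gravimetrically
    percent_solids: solids content of the sludge, in percent
    """
    dry_mass_kg = sample_mass_g * (percent_solids / 100.0) / 1000.0
    return (ng_found / 1000.0) / dry_mass_kg

# 1 ng of analyte in 5 g of a 2% solids digester sludge:
# 0.1 g dry solids, so 0.001 ug / 0.0001 kg = 10 ug/kg dry weight.
print(conc_dry_weight(1.0, 5.0, 2.0))
```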
Then we also add spikes to the purge tube, rather
than to a syringe. If sludge samples get put into a
syringe, even if they're the less than one percent
solids type, oftentimes we find that particles will
plug up the syringe needle or jam the barrel.
The quality control used for the isotope dilution
volatiles includes several items. Once per group of
samples analyzed, we do an initial calibration. This
is a five-point calibration curve for each analyte
and it has a series of specifications for linearities
and so on. There's no point in going into the detail.
Then, at the beginning of a shift, the instrument
tune is checked to assure that it satisfies the
BFB tuning specification in the method. We then
analyze a lab blank to demonstrate that the system is
clean, and we verify that the GC is providing adequate
-------
26
chromatographic resolution by evaluating the resolution
of toluene and toluene-d8. Those are done once per
shift.
Every sample has QC associated with it. We
look at the internal standard. Its retention time
has to fall in a window, and its response also has to
fall within a window. If it does not, it is assumed
that there is something wrong, and things must be
corrected. Further, the labeled analogs that are
present are checked to determine percent recovery
and this becomes part of the record.
We applied this methodology to a series of
samples that we collected locally. These were a digester
sludge which contained about two percent solids,
and a filter cake. The digester sludge poured like
water; the filter cake was kind of a crumbly black
material that had 16 percent solids. We also looked
at something called composted sludge. It was about
60 percent solids and it was a material commercially
available in the Southern California area, called
"Topper." It's normally applied to a new lawn to help
it grow. It's got bits of flaked wood and nutrients
added to it.
-------
27
Using the above samples we attempted to determine
how the methods would work when applied to different
sludges. Method detection limit, incidentally, was
calculated as specified in the Federal Register for
the water methods, using the five replicates and the
95 percent confidence interval for the Student's t
statistic.
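That MDL procedure multiplies the standard deviation of replicate spiked measurements by a Student's t value. A minimal sketch, assuming five replicates and the one-sided 95 percent t value for four degrees of freedom (2.132); the replicate values are invented:

```python
import statistics

def method_detection_limit(replicates, t_value):
    """MDL = t * s, where s is the standard deviation of replicate
    low-level spiked measurements."""
    return t_value * statistics.stdev(replicates)

# Five replicate results in ug/kg dry weight
results = [8.1, 9.3, 7.7, 8.8, 9.0]
print(method_detection_limit(results, t_value=2.132))
```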
The result of carrying out a detection limits
study is shown graphically here. I have plotted
detection limit on this axis for the 32 compounds
that we looked at across this axis. I've made
no real attempt to pull out which compound is which.
However, there are clearly a lot which have very low
detection limits. Most of them are below 20 micro-
grams per kilogram. Some are very much below that.
There are a few compounds which clearly have much
higher detection limits. These include methylene
chloride, acetone and MEK, all of which were present
in the samples at fairly high levels. We also have
1,4-Dioxane which is a compound that doesn't purge
particularly well, and consequently the precision of
the analytical work is not very good. It also
is not being analyzed by isotope dilution. So we
-------
28
have a few bad actors in there.
But all in all, if we try to pull that together
another way and look at the isotope dilution compounds,
compounds determined by isotope dilution, 25 of the
32 fall in that category. We had an average of nine
micrograms per kilogram for those. The compounds
which were determined by the internal standard method,
in other words, we had no labeled analog available
for them, there were three of these. They had an
average detection limit of 60 micrograms per kilogram.
I broke out the compounds which were present at
high background levels in the sample because we were
not spiking these at levels which were really appro-
priate to test the methodology near the detection
limit. We did have three that fell into that category,
and the detection limits there are probably not
really representative of the methodology.
On the average, we had about 35 micrograms per
kilogram, even counting the bad actors.
The first slide that I showed had a goal of five
micrograms per kilogram. Clearly, we didn't make
that. The compounds which we got down to the average
of nine micrograms per kilogram I felt pretty good
-------
29
about. Those which had a high background or which
were done by the internal standard method,
well, they're about what we expected. I suspect that
the CLP methods, if applied to sewage sludge, would
probably produce detection limits in the vicinity of
the 60 microgram per kilogram number.
Since I am running a little bit behind, we'll
move on to the base/neutrals. Existing methods
again, the SW 846 method for base/neutrals and acids,
lists 1000 micrograms per kilogram as its detection
limit. The CLP method claims around 330 for most
targets. Consequently we set a goal for ourselves of
50 ug/kg. The analytical scheme we tried to stick
to was to first determine the percent solids in the
sample. There were then three paths. If there was
less than one percent solids, we would skip right
down, take a one liter aliquot, spike it and carry it
through an analytical scheme essentially identical to
that of the water isotope dilution methods.
If we had from one to 30 percent solids, we
would dilute the sample with water so that we would
have one liter total containing one percent solids.
We would then carry this through the analytical scheme.
-------
30
If we had greater than 30 percent solids, we would
take a 30 gram aliquot of the solids, run it through
a sonication extraction procedure with acetone and
methylene chloride, and then carry that through the
GCMS scheme.
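The middle branch is simple dilution arithmetic: how much wet sludge gives one liter at one percent solids. A sketch, assuming a sludge density of roughly 1 g/mL (our simplification):

```python
def medium_solids_dilution(percent_solids, target_percent=1.0,
                           final_volume_mL=1000.0):
    """Grams of wet sludge, and mL of dilution water, needed to
    prepare 1 L of sample at 1% solids."""
    dry_solids_g = final_volume_mL * target_percent / 100.0
    sample_g = dry_solids_g / (percent_solids / 100.0)
    water_mL = final_volume_mL - sample_g  # treats 1 g sludge as ~1 mL
    return sample_g, water_mL

# A 16% solids filter cake: 62.5 g of cake diluted to one liter
print(medium_solids_dilution(16.0))
```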
I should point out that the two branches of this
flowchart which run down through here result in both
an acid and a base/neutral GCMS run. The branch that
runs down through the sonic extraction results in a
single extract for GCMS analysis.
In all cases, however, we come together here and
require a GPC cleanup of the sample prior to doing
the GCMS work. If the GPC cleanup is left out, a
capillary column will last approximately three or
four runs before it becomes completely useless.
The instrumentation set up for the base/neutrals
uses a fused silica capillary column, helium carrier,
and the same temperatures, ion source and scan
conditions in the mass spectrometer as the water
method.
Compound identification is very similar to
that for volatiles. We look at the five most
intense peaks in the spectrum plus any other peaks
-------
31
greater than ten percent of the base peak. These have
to agree within a factor of two of the reference
material. Compounds which are determined by isotope
dilution must have retention times which fall
within a plus or minus six-second window. Compounds
not determined by isotope dilution have a wider
window, in this case, plus or minus 15 seconds, twice
as wide.
The QA, again, is very similar to that for the
volatiles. We do have an initial calibration once
per grouping of analytical workup. Then we verify
tuning with DFTPP. We look at the anthracene/
phenanthrene GC resolution to verify that the column
is functioning properly. Then on every sample analyzed,
we verify that the internal standard retention time
and response fall within acceptance criteria windows,
and we calculate the recovery of the labeled analogs
which are present in the sample.
For the high solids method (30 percent solids
or greater), where we're using the sonic extraction
technique, we're getting on average some
compounds with high detection limits. These are
associated either with high background levels;
-------
32
phenol and the phthalates were high in the Encina
materials. I don't recall what the Topper was like
right offhand.
All of the other compounds which are high
are hydrocarbons which are being determined by non-
isotope dilution techniques, and they are present at
substantial levels in the sample. Over in here we
have a group of nitrophenols which are tough to
get through the chromatograph, and they're not quite
as good.
In a tabular form, perhaps we can get a better
idea of what these data look like. We have, for the
isotope dilution the base/neutral fraction, a
detection limit of about 48 micrograms per kilogram
on the average for 52 of the 92 compounds we looked
at. For the acid fraction, 134 micrograms per kilogram,
and there were ten compounds in that value. The
compounds determined by the internal standard technique
came out to be 530 micrograms per kilogram. We had
20 compounds there. Then where we had very high
background levels, the detection limits are not
representative of the methodology. We had numbers
that were basically out of sight, 1600 micrograms per
-------
33
kilogram. We had five of those. Those were the
hydrocarbons and the other compounds I mentioned.
We did fail to detect five compounds by
this method. They could have been completely
interfered with or they could have decomposed.
Some of them, we're quite sure they decomposed because
we were dealing with compounds we know that do that.
The medium solids method produced a slightly
different graphical picture of detection limits.
Again, we have a problem with phenol, which was present
at a high level. We have the hydrocarbons, although
here they're looking a little bit different. This is
a "dilute to one percent solids" sample.
In a tabular form, 53 base/neutral compounds
averaged 58 micrograms per kilogram; acid compounds,
about 150 micrograms per kilogram, internal standard
compounds 250, and then the high background ones were
900, a little bit more than that.
In this method we only failed to detect three.
Those were compounds which we are quite sure
decompose.
Comparison of these detection limits with our
goal of 50 micrograms per kilogram comes out
-------
34
reasonably well for the compounds which worked well.
The acids in both cases have somewhat higher detection
limits, and for the compounds not determined by isotope
dilution we did not meet our goal in any case.
Most of the compounds did fall into the
isotope dilution areas, and it appears that we have
probably pushed the detection limits down by about
a factor of three to five, say, for the base/neutral
and acid compounds by going from an internal standard
methodology to an isotope dilution methodology.
Since I have pushed slightly past my time, I will
cut it off there. Are there any questions?
-------
35
Question and Answer Session
MR. MILLER: Michael
Miller from Enviresponse. I'm wondering what compounds
you did not detect in your...especially with your
volatiles, what compound was not detected there?
DR. COLBY: We detected all
compounds in the volatile fraction.
MR. MILLER: I noticed on
the chart it said, did not detect one. There was one
compound that was missing on your chart.
DR. COLBY: I thought we detected
them all. I guess I'd have to go back and look at that.
I have the list with me. You're right, there is
one on here, and if you pigeonhole me afterwards,
I'll look it up for you.
MR. TELLIARD: Yes.
DR. COLBY: Microphone, please.
MRS. KHALIL: My name is
Mary Khalil. I am working for Metropolitan Kinetic
Institute in Chicago. I have a small guestion of...who
are the main suppliers for these isotope senders for...
DR. COLBY: There are
several commercial suppliers. Our standards were
-------
36
obtained from EPA. Can I mention names?
MR. TELLIARD: Sure.
DR. COLBY: Just to your
right is Joel Bradley of Cambridge Isotopes; they're
one supplier. Another is Merck Isotopes, and I believe
Myra Gordon is representing them here.
MRS. KHALIL: Thank you
very much.
MR. NOUTH: My name is
Chantha Nouth, I am from West-Paine Laboratories in
Baton Rouge, Louisiana. I have a little question
about the DFTPP.
DR. COLBY: I don't know a
thing about DFTPP.
MR. NOUTH: You are not
going to answer that, or...
DR. COLBY: Go ahead.
MR. NOUTH: May I?
MR. TELLIARD: Sure.
MR. NOUTH: Obviously you
are familiar with the method. We have 8270 and its
counterpart 625. For DFTPP we have a slight
-------
37
difference from isotope dilution.
DR. COLBY: That's right.
MR. NOUTH: Especially for
the GCMS values. Now, do we have to say that if you
do 1625 you have to adhere to the 1625 specification
for isotope dilution, or could the specification of
the 1625 also be valid when running 625?
DR. COLBY: My personal
opinion is that it doesn't make any difference which
one you use. It will have no impact on the quality
of the data whatsoever.
MR. TELLIARD: If you have
a 1625 contract you use 1625 tuning.
MR. NOUTH: Yes, that's
what I'm afraid of, if you adhere to that method.
But what I'm trying to get at is, what is the impact?
The different criteria of the 1625 are somewhat more
practical and more logical. I wouldn't say that the
1625 specification is disproved, I wouldn't want to
say that. But assuming that your specification is more
logical and practical, it does not mean that the GCMS
would not perform well under that specification. I
-------
38
agree a hundred percent with that.
DR. COLBY: Earlier this
week the Agency had a meeting in Washington which
was attended by EPA regional people and manufacturers
of mass spectrometers used in the CLP program. They
were addressing the issue of DFTPP, and are putting
some considerable effort into determining what should be
done. Whether or not any changes come about because
of this is not yet clear. But it is something that
is being addressed, so your concerns are also concerns
that other people have.
MR. NOUTH: Another very
short question. What was the factor of two of the
mass spectrum? I couldn't...
DR. COLBY: To identify
a compound, you select peaks in the mass spectrum of
your reference material which correspond to the five
most intense peaks, plus any peaks greater than ten
percent of the base peak. Those peaks then have to
be present in the unknown within a factor of two in
abundance when compared with the reference spectrum.
MR. NOUTH: Meaning that if
one is ten percent, a factor of two is 20 percent?
-------
39
DR. COLBY: Right.
MR. TELLIARD: Thank you,
Bruce. Our next speaker is going to talk about one
of our...we always feel duty-bound to have a pesticide
paper. It's a requirement for this meeting. To meet
that specification, we've asked Mike Aaronson to come
in and talk about the analysis of adipose tissue,
which to some of us is a very, very sensitive matter.
Mike is going to make his presentation.
-------
40
EXISTING METHODS FOR VOLATILES

Method    Det. Limit (ug/Kg)
8240      1000
"CLP"     10
GOAL      5
-------
41

VOA SLUDGE METHOD

Determine % Solids
  If <1% solids: take 5 mL aliquot; spike w/ IS's and
    labeled analogs; Purge & Trap GC/MS analysis
  If >1% solids: 5 g aliquot into 5 mL H2O, then into
    the same purge and trap scheme
42
VOA INSTRUMENT PARAMETERS

Purge & Trap
  Three Phase Trap

Gas Chromatography
  2.8 m x 2 mm ID packed with 1% SP-1000 on Carbopack B
  Helium carrier at 40 mL/min
  45°C for 3 min; 8°C/min to 240°C; hold for 15 min

Mass Spectrometry
  70 eV ionization
  20 to 250 dalton scan range
  2 to 3 sec scan time
-------
43
VOA COMPOUND IDENTIFICATION CRITERIA

Mass Spectrum
  The 5 most intense peaks plus all peaks greater than
  10% of the base peak must agree with the reference
  spectrum within a factor of 2.

Retention Time
  Compounds with labeled analogs must be within ± 2
  scans or ± 6 sec, whichever is greater
  Compounds without labeled analogs must be within
  ± 7 scans or ± 20 sec, whichever is greater
-------
44
DIFFERENCES BETWEEN SLUDGE AND WATER METHODS

• Calculations are based on Percent Dry Weight
• Initial sample amounts are established gravimetrically
• Solids Purge Tubes are used for all samples
• Spikes are added to the sample in the Purge Tube
-------
45
VOA QA/QC REQUIREMENTS

• Initial Calibration                  1 per group
• GC/MS Tune (BFB)                     1 per 12 hr
• Lab Blanks                           1 per 12 hr
• Toluene/Toluene-d8 GC Resolution     1 per 12 hr
• Int. Std. Ret. Time                  every sample
• Int. Std. Response                   every sample
• Labeled Analog Recovery              every sample
-------
46
MDL TEST SAMPLES

Sample             Percent Solids    Source
Digester Sludge    2                 Encina WPCF
Filter Cake        16                Encina WPCF
Composted Sludge   60                Topper
-------
47
VOA METHOD DETECTION LIMITS

(Bar chart of method detection limit, ug/Kg, for each of
the 32 compounds studied.)
48

VOA MDL SUMMARY

Type                 MDL (ug/Kg)    Compound Count
Isotope Dilution     9              25
Internal Standard    60             3
High Background      230            3
Not Detected         -              1
Mean                 35             Total 32
-------
49
VOA RECOMMENDATION

Evaluate relative compound amounts found with high
solids content samples using MeOH extraction followed
by P&T vs the 5 g sample into 5 mL H2O P&T method.
-------
50

EXISTING METHODS FOR SEMI-VOLATILES

Method    Det. Limit (ug/Kg)
8270      1000
"CLP"     330
GOAL      50
-------
51
BNA SLUDGE METHOD

Determine % Solids
  If <1% solids: spike 1 L aliquot w/ labeled analogs;
    aqueous continuous extraction 3X at pH 12, then
    3X at pH 2
  If 1-30% solids: dilute to 1% solids, then extract
    as above
  If >30% solids: spike 30 g aliquot w/ labeled analogs;
    sonic extraction w/ acetone-MeCl2
  All branches: GPC cleanup of the organic extract;
    K-D to 1 mL; add IS; FSCC GC/MS analysis
-------
52
BNA INSTRUMENT PARAMETERS

Gas Chromatography
  30 m x 0.25 mm ID DB-5 FSCC
  Helium carrier at 30 cm/sec
  30°C for 5 min; 8°C/min to 280°C; hold for last PNA

Mass Spectrometry
  70 eV ionization
  35 to 450 dalton scan range
  1 sec scan time
-------
53
BNA COMPOUND IDENTIFICATION CRITERIA

Mass Spectrum
  The 5 most intense peaks plus all peaks greater than
  10% of the base peak must agree with the reference
  spectrum within a factor of 2.

Retention Time
  Compounds with labeled analogs must be within ± 6
  scans or ± 6 sec, whichever is greater
  Compounds without labeled analogs must be within
  ± 15 scans or ± 15 sec, whichever is greater
-------
54
BNA QA/QC REQUIREMENTS

• Initial Calibration                     1 per group
• GC/MS Tune (DFTPP)                      1 per 12 hr
• Lab Blanks                              1 per 12 hr
• Anthracene/Phenanthrene GC Resolution   1 per 12 hr
• Int. Std. Ret. Time                     every sample
• Int. Std. Response                      every sample
• Labeled Analog Recoveries               every sample
-------
55

BNA HIGH SOLIDS METHOD DETECTION LIMITS

(Bar chart of method detection limits for each compound
analyzed by the high solids method.)
-------
56

BNA HIGH SOLIDS MDL SUMMARY

Type                   MDL (ug/Kg)    Compound Count
BN Isotope Dilution    48             52
A Isotope Dilution     134            10
Internal Standard      529            20
High Background        1610           5
Not Detected           -              5
Mean                   244            Total 92
-------
57

BNA MEDIUM SOLIDS METHOD DETECTION LIMITS

(Bar chart of method detection limits for each compound
analyzed by the medium solids method.)
58

BNA MEDIUM SOLIDS MDL SUMMARY

Type                   MDL (ug/Kg)    Compound Count
BN Isotope Dilution    58             53
A Isotope Dilution     149            10
Internal Standard      250            10
High Background        927            5
Not Detected           -              3
Mean                   144            Total 81
-------
59
BNA RECOMMENDATION

Evaluate relative compound amounts found with medium
solids content (1-30%) using acetone/MeCl2 & sonication
vs the dilute to 1% solids followed by continuous
liq-liq extraction.
-------
60
DR. AARONSON: I would first
like to say that it is a pleasure to be here and to
publicly thank those people who put out an effort to
get me here.
The presentation is concerned more with what is
involved in generating the data on the GC/Mass Spec,
as opposed to the levels of the semi-volatile
pollutants in the adipose tissue. Perhaps next year
I will be invited back and I can give a talk on the
levels we found in the adipose tissue. But that work
is ongoing right now.
My talk is The Determination of Pesticides and
Other Semi-Volatile Pollutants in Adipose Tissue by
GC/Mass Spec. Before I begin, I would like to just
recognize two coworkers, John Tessari and Sharon
Chaffey, who are with me at Colorado State
University, Department of Environmental Health. The
project is being funded by the EPA's Office of Toxic
Substances, Field Studies Branch, Exposure Evaluation
Division, and the original method was first
developed by Midwest Research Institute, Kansas City,
Missouri.
The objective of the project was to detect, in
human adipose tissue, organochlorine pesticides,
-------
61
PCB's, chlorobenzenes, polynuclear aromatic
hydrocarbons, phthalate esters and phosphate
triesters and, once these compounds were detected,
to then quantitate them.
We started out with 20 grams of adipose tissue
that was extracted using methylene chloride and a
tissue homogenizer. Five grams (after the
extraction) was then removed and placed in storage
for any future analysis, and the 15 grams remaining
was then put through gel permeation chromatography to
remove the bulk lipid material.
Vitamin E acetate was used to calibrate the GPC.
The Vitamin E acetate simulated what bulk lipid
material would look like, and as you can see, after
29 minutes, the Vitamin E acetate was eluted off the
gel permeation chromatograph.
Then we took some genuine lipid material and
placed it on the GPC, and at the end of the 29
minutes, for the most part the lipid material had
been removed, but it still had not come completely
down to baseline. But for the most part, the 29
minutes applied for the lipid material.
The remaining 15 grams that was just collected
through the GPC was then reduced in volume and placed
-------
62
on a florisil column for fractionation and cleanup.
There were two fractions used on the florisil column,
a six percent fraction and a 50 percent fraction.
The six percent fraction is six percent diethyl
ether in hexane, and the 50 percent fraction is 50
percent diethyl ether in hexane. The resulting two
fractions were then analyzed by GC/Mass Spec.
The six percent florisil fraction should contain
the organochlorine pesticides, the PCB's, the
chlorobenzenes and the polynuclear aromatic
hydrocarbons. The 50 percent fraction then should
contain the phosphate triesters, the phthalate esters
and two pesticides, dieldrin and endrin, which elute
in the 50 percent fraction.
Now to talk about what most of you came here to
hear, and that is how we analyzed these samples by
GC/Mass Spec, now that the wet chemistry portion has
been taken care of very briefly.
This is the internal standard method, and we
utilized three internal standards, naphthalene-D8,
anthracene-D10 and benzo-(a)-anthracene-D12, all at
the level of ten nanograms per microliter. The
internal standards were added just prior to injection
on the mass spec.
-------
63
Eleven surrogate compounds were also added. The
surrogate compounds were added to the adipose tissue
samples being placed on the GPC. The samples arrive
at Colorado State University, already having been
extracted in the methylene chloride, so the
surrogates are not added during the extraction step.
They are added just prior to going on to the GPC.
The eleven surrogates that we used were
trichlorobenzene-D3, chrysene-D12,
tetrachlorobenzene-13C6, hexachlorobenzene-13C6, a
monochlorobiphenyl-13C6, tetrachlorobiphenyl-13C12,
octachlorobiphenyl-13C12, decachlorobiphenyl-13C12,
diethyl phthalate-D4, di-n-butyl phthalate-D4, and
butyl benzyl phthalate-D4. They are all in solution
at the levels that are indicated on the slide.
The surrogates are used as a monitoring
technique for accuracy and precision, and we keep
recovery data on these surrogate compounds throughout
the entire analytical method.
We are utilizing a 15 point calibration curve.
The five levels of calibration standards are 100, 50,
10, 5 and 1 nanogram per microliter. Each one of the
calibration standards was analyzed in triplicate.
Each time we analyzed them we generated response
-------
64
factors. So we are generating 15 response factors
for each of our target analytes. In this initial
run, I am talking about 11 surrogate compounds and 57
target analytes that were chosen. We are generating
an incredible amount of data. So we have 15 relative
response factors that are generated. These 15
response factors are placed in a response list, and
then the average of the 15 response factors is placed
in our library. It is these response factors that
are used to quantitate our unknown samples.
Just briefly, for those of you that are not
familiar with relative response factors (RRF), the
response factors are calculated using this formula:
RRF = (As x Cis) / (Ais x Cs), where As is the area
of the quantitation ion for the compound to be
measured, Ais is the area of the quantitation ion for
the internal standard, Cs is the concentration of the
compound to be measured, and Cis is the concentration
of the internal standard.
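In code form, the formula and its inversion for quantitating an unknown look like this; the variable names and example areas are ours:

```python
def rrf(area_s, conc_s, area_is, conc_is):
    """Relative response factor: RRF = (As * Cis) / (Ais * Cs)."""
    return (area_s * conc_is) / (area_is * conc_s)

def quantitate(area_s, area_is, conc_is, mean_rrf):
    """Invert the formula for the unknown: Cs = (As * Cis) / (Ais * RRF)."""
    return (area_s * conc_is) / (area_is * mean_rrf)

# Calibration point: analyte and internal standard both at 10 ng/uL
r = rrf(area_s=52000, conc_s=10.0, area_is=48000, conc_is=10.0)
# Quantitate an unknown injection against the library mean RRF
print(quantitate(area_s=13000, area_is=50000, conc_is=10.0, mean_rrf=r))
```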
Now, for each concentration range, for the 100,
the 50, the 10, the 5 and the 1, we are generating
three response factors. The relative standard
deviation across those three response factors is very
low, in the neighborhood of one to five percent, and
-------
65
more towards the one percent. However, according to
our QA and QC protocol for this particular project,
we were asked to try and maintain a 20 percent
relative standard deviation across the 15 point
calibration curve. Our experience shows that the 20
percent across the 15 point calibration curve can be
shown for most of the compounds, except we had
problems with the phthalate esters and the phosphate
triesters.
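The 20 percent criterion across the 15 response factors is a percent relative standard deviation check; a minimal sketch with illustrative RRF values:

```python
import statistics

def rrf_within_criterion(rrfs, max_rsd_percent=20.0):
    """Return (pass/fail, %RSD) for a set of calibration RRFs."""
    rsd = 100.0 * statistics.stdev(rrfs) / statistics.mean(rrfs)
    return rsd <= max_rsd_percent, rsd

# Three RRFs at each of the five calibration levels (invented values)
rrfs = [1.02, 1.00, 0.99, 1.05, 1.04, 1.06, 0.97, 0.98, 0.96,
        1.01, 1.00, 1.02, 0.95, 0.94, 0.96]
print(rrf_within_criterion(rrfs))
```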
Getting all of that out of the way, we come in
the morning and now we are getting ready to shoot an
adipose sample. We have to have some verification of
our calibration, and the first thing we use is FC 43
to calibrate the mass spectrometer. The second thing
we do is analyze the one microliter injection of a 50
nanogram per microliter DFTPP standard. We have to
meet DFTPP criteria. For those of you who are not
familiar with GC/Mass Spec, this is the DFTPP
criteria that we use, and I believe it is from method
1625. Mass 51 has to be between 8 and 82 percent of
the mass 198 ion; 69 has to be 11 to 91 of the 198
ion; 127, 32 to 59 percent of the mass 198; 198 is
the base peak; 199 has to be 4 to 9 percent of the
198 ion; 275 is 11 to 30 percent of the 198 ion; 441
-------
66
is 44 to 110 percent of the 443 ion; 442 is 30 to 86
percent of mass 198, and 443, is 14 to 24 percent of
mass 442. This is the criteria that we meet every
day.
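Criteria like these lend themselves to a table-driven check. The sketch below encodes the ranges exactly as read into the record; the function and the spectrum representation (m/z mapped to abundance) are our own:

```python
# Each entry: target m/z -> (reference m/z, low %, high %)
DFTPP_CRITERIA = {
    51:  (198, 8, 82),
    69:  (198, 11, 91),
    127: (198, 32, 59),
    199: (198, 4, 9),
    275: (198, 11, 30),
    441: (443, 44, 110),
    442: (198, 30, 86),
    443: (442, 14, 24),
}

def dftpp_ok(spectrum):
    """True if mass 198 is the base peak and every listed ion falls
    within its stated percentage range of its reference ion."""
    if max(spectrum, key=spectrum.get) != 198:
        return False
    for mz, (ref, lo, hi) in DFTPP_CRITERIA.items():
        ref_ab = spectrum.get(ref, 0.0)
        if ref_ab == 0.0:
            return False
        pct = 100.0 * spectrum.get(mz, 0.0) / ref_ab
        if not (lo <= pct <= hi):
            return False
    return True
```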
The last thing we do before shooting an actual
sample is a daily calibration standard. By that, I
mean we take one of our calibration standards, the 1,
the 10, the 5, the 50 or the 100. First thing is, we
shoot it, and I just used as an example the ten
nanogram per microliter standard.
We analyze that standard to make sure that the
response factor that we have just generated on that
particular run is within that 20 percent across the
15 point calibration curve. Then at the end of the
day, after we finish shooting our samples, we use a
high end standard, the 50 nanogram per microliter
standard, and make sure that we are still within our
20 percent across the 15 point calibration range.
This gives us a handle on bracketing our calibration
range throughout the whole day, by plugging the two
standards, one at the beginning and one at the end.
This slide is the ten nanogram per microliter
calibration standard. This is the daily standard
that we inject for analyzing all samples. It is hard
-------
67
to see, but right over there is the location of the
internal standard naphthalene-D8 which is covering
from zero to 800 scans. We are scanning at one
second per scan, so we are talking about a time of
zero to approximately 13 minutes. Anything within
that range is calculated using the internal standard
naphthalene-D8.
Over here is the anthracene-D10, and that is
used to quantitate any of the samples from scan 800
to scan 1350 or from 13 minutes to 23 minutes. This
is the benzo-(a)-anthracene-D12, which is used to
quantitate from scan 1350 out to 2,000 or from 23
minutes to approximately 33 minutes.
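The assignment of analytes to internal standards by retention window reduces to a lookup on scan number, using the boundaries he quotes at one scan per second:

```python
def internal_standard_for(scan):
    """Pick the quantitation internal standard by scan number."""
    if scan < 800:      # roughly 0 to 13 minutes
        return "naphthalene-D8"
    if scan < 1350:     # roughly 13 to 23 minutes
        return "anthracene-D10"
    return "benzo(a)anthracene-D12"  # roughly 23 to 33 minutes

print(internal_standard_for(1000))  # -> anthracene-D10
```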
We are using a DB-5 column. It is temperature
programmed from 60 to 280 degrees at eight degrees
per minute. We are holding at the 60 degree mark for
two minutes on the front end. We are holding on the
back end at the 280 degree mark for five minutes. It
is 70 electron volts and the injectors are at 220
degrees. This chromatogram contains 71 different
compounds, the 11 surrogates, 57 target analytes and
the three internal standards.
This is a chromatogram of the reagent blank.
These are all the reagents used in the analysis,
-------
68
taken through the entire analytical procedure. What
you are really looking at here is basically the
internal standards and the surrogates that get put
into all the samples.
This is a chromatogram of the 50 percent elution
fraction of the reagent blank. Basically you are
looking at the same thing; the internal standards and
the surrogates. There is a lot more junk in this
chromatogram because the 50 percent elution mix is a
much more polar solvent.
Welcome to the world of the "hump-a-gram". This
is what an adipose tissue looks like coming off the
mass spec. This is a six percent fraction of our
control adipose tissue. By that I mean the only
thing that it should contain are the internal
standards and the 11 surrogate compounds that we put
into it, and whatever semi-volatile pollutants are
associated with adipose tissue. You really get to
appreciate these "hump-a-grams".
This is the 50 percent fraction of that control
adipose tissue. What this should contain is the
surrogates, the internal standards, the phthalates,
the phosphate esters and the dieldrin and endrin and
whatever else might be in the 50 percent fraction.
-------
69
Just another example to show you that the
"hump-a-gram" is for real. This is a spiked adipose
tissue. By spiked, I mean it contains once again the
three internal standards, the 11 surrogates and 34
representative target analytes that have been taken
through the entire analytical procedure. At the end
of the talk I will give you the results on the
recoveries of these compounds. But this is the six
percent fraction of the spiked adipose tissue.
Here is the 50 percent fraction of the florisil
column of that spiked adipose tissue. The phosphate
esters that we spiked in, phthalate esters, dieldrin
and endrin, are in this fraction. This is just
thrown in as another example to really try to
convince you that we were not kidding around about
the "hump-a-grams" and that the adipose tissue gives
you that kind of chromatogram. They all come out
that way.
This is another spiked adipose tissue containing
those same 34 analytes, and the 50 percent fraction
associated with that.
This slide shows the area of the internal
standards that we are using, the naphthalene, the
anthracene and the benzo-(a)-anthracene. Their area
-------
70
in thousands over the 17 day period that they were
injected is shown. These are the internal standards as they
are in the calibration standards, not in real adipose
tissue yet, but in the calibration standards on a
daily basis. There are a couple of things to note on
this slide.
One, the internal standards are placed in each
sample at a constant concentration. They are in
there at ten nanograms per microliter. But as you
can see, the area varies on a daily basis, and that
can be due to instrument conditions, who is doing
the injection, or differences in the injection.
But the point is that, number one, the "up and
down" trend for all three internal standards
basically is the same. When one goes up they all go
up, when one goes down they all go down. But we are
really not concerned about that, and the reason we
are not concerned is because we are assuming...and we
have evidence, but we are assuming that the area of
response for our target analytes would do the same
thing that our internal standard is doing. That in
fact is borne out when we shoot our
daily calibration standard: we remain within this 20
percent, because, based on our formula for calculating
-------
71
RRFs, if the internal standard area goes up, and the
sample area goes up, then the RRF should remain the
same and stay within that 20 percent, which indeed it
does, because that is one of our calibration
verifications. If it does not, we have to stop and
find out why.
So the samples are doing the same exact thing as
the internal standard, so we do not have to worry
about this "up and down" trend, as long as our
samples are reacting the same exact way.
This is looking at the individual standard right
now, the internal standard naphthalene-D8. The
first part of the graph shows the area of
naphthalene-D8 in thousands that took place in the
calibration standards. This is the area of
naphthalene-D8 as it appears in the six percent
elution mix, when we analyzed an adipose tissue that
came off the florisil column in the six percent.
On the end here is the naphthalene-D8 in the 50
percent elution mix from the florisil column.
Basically what we are trying to show here is that the
areas of the two fractions are within the range that
we have seen on the daily calibration standards.
-------
72
However, for anthracene-D10, once again, the
first part of the graph shows the areas obtained in
the daily calibration standards. The second set of points
shows the area for anthracene-D10 established in the
six percent elution mix. However, look at the
dropoff in the 50 percent elution mix: the
deterioration of anthracene-D10 in the 50 percent
elution mix yields very low areas for the internal
standard. Low internal standard area, once again
going back to our formula...the internal standard area
is on the bottom of our fraction, so if the internal
standard area is low, the relative response factor
that is generated is going to be higher, and utilizing
a higher response factor is going to end up giving us
higher results. So this became a problem, one easily
noticeable when looking at the results and seeing
recoveries in the 200 percent range.
Benzo-(a)-anthracene-D12, the internal standard,
basically showed the same thing as the
anthracene-D10. Once again, the beginning part of
the graph shows the benzo-(a)-anthracene associated
with the daily calibration standards. The
benzo-(a)-anthracene for the most part in the six
-------
73
percent held...got a little high right there, but for
the most part is okay. But here in the 50 percent
fraction we see again another dropoff in the internal
standard area.
We attempted to calculate, to quantitate those
target analytes that came off in the 50 percent
fraction using four different techniques to see which
one would give us the best results. The first
technique, of course, was using the average response
factor from the 15 point calibration curve, which
yielded very high results because of these low
areas.
The second method that we attempted was,
instead of using the 15 point calibration curve, to
take whatever area the unknown produced...for
instance, let us just say for argument's
sake it was an area of 10,000 area units...go back
to our 15 point calibration curve and find which one of
the concentration levels was closest to the 10,000
area units of our unknown. Let's just say for
argument's sake it came out to be the ten nanogram
standard.
So we took the average of the three response
factors for our ten nanogram standard and tried to
-------
74
calculate the 50 percent fraction using the average
of the three points rather than the average of the 15
points.
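A sketch of this nearest-level technique (Python; the data layout and names are hypothetical, for illustration only):

    # Quantitate against the mean of the three replicate response factors
    # from the calibration level whose area is closest to the unknown's.

    def nearest_level_rf(unknown_area, calibration):
        """calibration maps level (ng/uL) -> (mean area, [rf1, rf2, rf3])."""
        level = min(calibration,
                    key=lambda lv: abs(calibration[lv][0] - unknown_area))
        replicate_rfs = calibration[level][1]
        return sum(replicate_rfs) / len(replicate_rfs)

    # An unknown at 10,000 area units whose closest level is the 10 ng/uL
    # standard is then quantitated with that level's average RF.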
The third method that we attempted to use was
the response factor generated in the daily standard
that we shoot in the morning to calculate the target
analytes in the 50 percent fraction. The fourth method
that we attempted was to use the internal standard
naphthalene-D8, which showed no effect in the 50
percent elution mix. The only problem with the
naphthalene-D8 is that it is in the beginning of the
chromatogram, and you should have the internal
standards spread throughout the whole range, but we
attempted to do that, to use the naphthalene-D8.
It turns out that the best results that we were
able to obtain were by using the naphthalene-D8 as the
internal standard, even in the 50 percent elution
mix.
Now, this is a slide, and it has a lot of
information, and I am going to pass quickly by it,
but this is all the 34 target analytes that were
spiked in. What I have done is taken some out to
talk about the recoveries rather than all of them,
-------
75
and brought that slide along in case anybody had a
specific question about one of the other compounds.
We chose for the protocol that anything above 50
percent recovery was acceptable. So for the
chrysene-D12 we got 71 percent recovery; the
hexachlorobenzene, 13C labelled, 56 percent recovery;
decachlorobiphenyl, 77 percent; and then di-n-butyl
phthalate at 87 percent; and in a minute I will tell
you the levels that these were spiked in at. The
surrogates you saw before. Most of them were at the
ten nanogram per microliter level.
Some target analytes. Representing the
polyaromatic hydrocarbons is fluoranthene at 93
percent recovery from adipose; hexachlorobenzene at
75 percent; hexachlorobiphenyl at 64 percent;
dimethyl phthalate at 95 percent. Just to show you
that I was not lying about presenting poor data, we
have a five percent recovery of a phosphate ester. I
do not want anybody to shudder out there as to why is
this man presenting a five percent recovery in front
of all of you people, but our explanation for the
tributyl phosphate is, if you remember the slide
showing the lipid material coming off the GPC in the
29 minute time period, we really believe that...and
-------
76
we are looking into it right now, but the phosphate
esters are coming out very close to that 29 minute
time. Perhaps on the lower end of the 29 minutes,
and it is coming out with the lipid material. We
start collecting our samples after the lipid material
has been removed at the 29 minute mark.
So we really believe that these phosphate esters
are tied up into that lipid material, and we are
going to cut down on our elution time off the GPC to
a lower time period, and do some experimental work to
see if we can get the phosphates out of that, or
there might be too much interference with all that
lipid material. Hopefully that is the explanation
for the low recovery of the tributyl phosphate.
Some other target analytes. O,p-DDT, 82
percent; p,p'-DDE at 139 percent, and we really do
not believe that we are looking at 139 percent
recovery. The spiking level of the DDE, we believe,
was at the same level of the concentration of DDE in
adipose tissue. We have not analyzed any adipose
tissue that does not have DDE in it. What we are
looking at here is, we did not even see the spike on
top of what already existed in there. So what we are
looking at is a real level of DDE; DDD at 98 percent;
-------
77
beta-BHC, and we did do all four BHC isomers, 85
percent. Aldrin at 109 percent; mirex at 95 percent;
dieldrin and endrin...now, dieldrin and endrin have
arrows next to them. Dieldrin goes from 196 percent
recovery down to 97 percent recovery. Once again,
this goes back to the problem of the internal
standards in the 50 percent elution mix. The 196
percent recovery for dieldrin represents the
calculation based on the deteriorated internal
standard in that 50 percent elution mix. The 97
percent represents the calculation based on using the
naphthalene-D8 as the internal standard in the 50
percent elution mix.
The same thing applies for endrin. It dropped
from 235 percent down to 164 percent, and I am sorry,
but at this point I have no explanation for the high
recovery of endrin at 164 percent. There was nothing
in our reagent blank at least in the area of endrin
that should have interfered. But we are looking more
carefully at it. But at this time I do not have an
explanation for that.
The levels that they were spiked at: all the
compounds were at the ten nanogram per microliter level,
except dieldrin, endrin, decachlorobiphenyl and the
-------
78
tributyl phosphate, which were at the 50 nanogram per
microliter level. The pentachlorobiphenyl through
nonachlorobiphenyl were at the 20 nanogram per
microliter level.
That is the end of the data. I would like to
acknowledge one more person before I step down. He
is going to be the next speaker. That is Dale
Rushneck. Not only is Dale a friend, but he is a
very patient teacher.
We at Colorado State University are committed to
protecting this kind of environment. Here in Norfolk
you have a very pretty environment, being on the
water, but in Colorado we have this kind of
environment to protect, and we are committed to
protecting that.
Thank you.
-------
79
Question and Answer Session
MR. TELLIARD: Questions?
QUESTION: I just
wondered if you had an explanation for that odd
behavior of those internal standards in the 50
percent mix.
DR. AARONSON: At this
point, no. The only thing that...I really believe
that it is not just the fact that it is a fifty
percent elution mix, because we did do the studies
where we took the internal standards and placed them in
the 50 percent elution mix, and injected them. We had
a little bit of deterioration, but not as much. I
think it is a problem of the fifty percent elution
mix along with the adipose matrix. I am really not
exactly sure what is happening to it. It is being
"eaten away" by something. I can not give you a
better answer than that.
QUESTION: Is it some
sort of solubility problem, do you think?
DR. AARONSON: I do not
think so. I think it is just being "eaten away" by
some reaction that is going on in the adipose tissue
with the fifty percent elution mix.
-------
80
QUESTION: It is very
odd, because those things are awfully steep.
MR. TELLIARD: Would you
identify yourself, please?
DR. LACONTO: Yes, Paul
Laconto from Nanco Laboratories, Wappinger Falls, New
York. Concerning your GPC procedure, number one, do
you use a manual method or automated method? And
number two, what would be your total dilution volume
per sample that was prior to when you slotted in with
the surrogates?
DR. AARONSON: It's a
totally automated GPC, and there are three GPCs that
are being used, because each sample requires 24 hours
to elute...20 grams onto the GPC requires a 24 hour
time period. The volume at the end, after collecting
through all the loops, is about 3000 milliliters.
That is dried down to approximately ten
milliliters...yes, about five to ten milliliters
before putting it on the florisil columns.
DR. LACONTO: The
tributyl phosphate, is that a surrogate or internal
standard?
-------
81
DR. AARONSON: No, the
tributyl phosphate was one of the target analytes.
DR. LACONTO: Target
analytes, which was put in...
DR. AARONSON: Prior to
GPC.
DR. LACONTO: Prior to
GPC. Thank you.
DR. AARONSON: You are
welcome.
MR. TELLIARD: We're
scheduled for a break. Is the coffee out there? It
says 15 minutes. Let's try to do it in 15 minutes
and get back in here. Thank you very much.
(WHEREUPON, a brief recess was taken.)
MR. TELLIARD: Our next
speakers are Barry Eynon and Dale Rushneck, one from
the West Coast and one from Colorado, just to show
that we're EEO.
They're going to talk on an esoteric subject of
how small is small. Who's up first? Dale's up
first.
-------
82
DETERMINATION OF PESTICIDES AND OTHER
SEMI-VOLATILE POLLUTANTS IN ADIPOSE TISSUE
BY GCMS
MICHAEL J. AARONSON, JOHN TESSARI AND SHARON
CHAFFEY, DEPARTMENT OF ENVIRONMENTAL
HEALTH, COLORADO STATE UNIVERSITY,
FORT COLLINS, COLORADO
-------
83
TARGET ANALYTES
1. ORGANOCHLORINE PESTICIDES
2. PCBs
3. CHLOROBENZENES
4. PAH
5. PHTHALATE ESTERS
6. PHOSPHATE TRIESTERS
-------
DETERMINATION OF PESTICIDES AND OTHER
SEMI-VOLATILE POLLUTANTS IN ADIPOSE TISSUE
BY GCMS
MICHAEL J. AARONSON, JOHN TESSARI AND SHARON
CHAFFEY, DEPARTMENT OF ENVIRONMENTAL
HEALTH, COLORADO STATE UNIVERSITY,
FORT COLLINS, COLORADO
PRIMARY OBJECTIVE
TO EVALUATE THE COMPARABILITY OF RESULTS
FOR SELECTED ORGANOCHLORINE PESTICIDES
AND PCBs
BETWEEN THE PGC/ECD
AND HRGC/MS METHODS
-------
85
SECONDARY OBJECTIVE
PROVIDE ADDITIONAL INFORMATION ON THE
LEVELS OF OTHER TARGET
SEMI-VOLATILE ORGANIC ANALYTES
TARGET ANALYTES
1. ORGANOCHLORINE PESTICIDES
2. PCBs
3. CHLOROBENZENES
4. PAH
5. PHTHALATE ESTERS
6. PHOSPHATE TRIESTERS
-------
86
Extraction
Tissue Homogenizer
Methylene Chloride
GPC
Bulk Lipid Removal
15 g
Florisil
Fractionation
-------
87
FLORISIL FRACTIONATION
6% FRACTION
ORGANOCHLORINE PESTICIDES
PCBs
CHLOROBENZENES
PAHs
FLORISIL FRACTIONATION
50% FRACTION
PHOSPHATE TRIESTERS
PHTHALATE ESTERS
DIELDRIN
ENDRIN
-------
88
HRGC/MS
INTERNAL STANDARDS
1. NAPHTHALENE-D8 10 NG/UL
2. ANTHRACENE-D10 10 NG/UL
3. BENZO-(A)-ANTHRACENE-D12 10 NG/UL
-------
89
SURROGATE COMPOUNDS
1,2,4-TRICHLOROBENZENE-D3            10 NG/UL
CHRYSENE-D12                         10 NG/UL
13C6-1,2,4,5-TETRACHLOROBENZENE      10 NG/UL
13C6-HEXACHLOROBENZENE               10 NG/UL
13C6-4-CHLOROBIPHENYL                10 NG/UL
13C12-TETRACHLOROBIPHENYL            20 NG/UL
SURROGATE COMPOUNDS
13C12-OCTACHLOROBIPHENYL
(2,2',3,3',5,5',6,6')                30 NG/UL
13C12-DECACHLOROBIPHENYL             50 NG/UL
DIETHYL PHTHALATE-3,4,5,6-D4         10 NG/UL
DI-n-BUTYL PHTHALATE-3,4,5,6-D4      10 NG/UL
BUTYL BENZYL PHTHALATE-3,4,5,6-D4    10 NG/UL
-------
90
CALIBRATION STANDARDS
100 NG/UL
50 NG/UL
10 NG/UL
5 NG/UL
1 NG/UL
CALIBRATION VERIFICATION
1. FC43
2. DFTPP CRITERIA 50 NG/UL
3. DAILY CALIBRATION STANDARD 10 NG/UL
-------
91
RELATIVE RESPONSE FACTOR (RRF)
RRF = (As)(Cis) / (Ais)(Cs)
RELATIVE RESPONSE FACTOR
As=AREA OF THE QUANTITATION ION FOR
COMPOUND TO BE MEASURED
Ais=AREA OF THE QUANTITATION ION FOR THE
INTERNAL STANDARD
Cs=CONCENTRATION OF THE COMPOUND TO
BE MEASURED
Cis=CONCENTRATION OF THE INTERNAL
STANDARD
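As a minimal sketch (Python, with illustrative names; not the laboratory's software), the definition above and its inversion for quantitation are:

    def rrf(a_s, a_is, c_s, c_is):
        """RRF = (As)(Cis) / (Ais)(Cs)."""
        return (a_s * c_is) / (a_is * c_s)

    def concentration(a_s, a_is, c_is, mean_rrf):
        """Invert the definition: Cs = (As)(Cis) / (Ais)(RRF).
        A low internal standard area Ais inflates the computed Cs, the
        effect described in the talk for the 50 percent elution fraction."""
        return (a_s * c_is) / (a_is * mean_rrf)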
-------
92
DFTPP MASS INTENSITY SPECIFICATIONS
MASS    INTENSITY REQUIRED
51      8-82% OF MASS 198
69      11-91% OF MASS 198
127     32-59% OF MASS 198
198     BASE PEAK
199     4-9% OF MASS 198
275     11-30% OF MASS 198
441     44-110% OF MASS 443
442     30-86% OF MASS 198
443     14-24% OF MASS 442
AVERAGE RECOVERIES OF SURROGATES (%)
SURROGATE                   ADIPOSE RECOVERY
CHRYSENE-D12                71
HEXACHLOROBENZENE-13C6      56
DECACHLOROBIPHENYL-13C12    77
DI-n-BUTYL PHTHALATE-D4     87
-------
93
AVERAGE RECOVERIES OF TARGET ANALYTES (%)
TARGET ANALYTE        ADIPOSE RECOVERY
FLUORANTHENE          93
HEXACHLOROBENZENE     75
HEXACHLOROBIPHENYL    64
DIMETHYL PHTHALATE    95
TRIBUTYL PHOSPHATE    5
AVERAGE RECOVERIES OF TARGET ANALYTES (%)
TARGET ANALYTE    ADIPOSE RECOVERY
O,P'-DDT          82
P,P'-DDE          139
P,P'-DDD          98
beta-BHC          85
ALDRIN            109
MIREX             95
DIELDRIN          196 -> 97
ENDRIN            235 -> 164
-------
[Pages 94-106: figures. Chromatograms of the 10 ng/uL daily calibration
standard, the reagent blank (6% and 50% Florisil fractions), and the
control and spiked adipose tissue fractions, followed by plots of the
internal standard areas (naphthalene-D8, anthracene-D10, and
benzo(a)anthracene-D12; areas in thousands) in the daily calibration
standards and in the 6% and 50% elution fractions. The figures themselves
are not recoverable from this copy.]
-------
AVERAGE RECOVERIES OF SURROGATES AND TARGET COMPOUNDS (PERCENT)
107
COMPOUND NAME                      ABBREVIATION    ADIPOSE
SURROGATES
Chrysene-D12                       CHRY-D12        71
1,2,4-Trichlorobenzene-D3          TrCB-D3         38
1,2,4,5-Tetrachlorobenzene-13C6    TeCB-C13        54
Hexachlorobenzene-13C6             HCB-C13         58
4-Chlorobiphenyl-13C6              CBP-C13         84
Tetrachlorobiphenyl-13C12          TCBP-C13        130
Octachlorobiphenyl-13C12           OCBP-C13        57
Decachlorobiphenyl-13C12           DCBP-C13        77
Diethyl Phthalate-D4               Diethyl-D4      130
Di-n-Butyl Phthalate-D4            Dibutyl-D4      87
Butyl Benzyl Phthalate-D4          Butylbenz-D4    67
TARGET COMPOUNDS
Fluoranthene                       Fluoranthene    93
p,p'-DDT                           p,p-DDT         214
o,p'-DDT                           o,p-DDT         82
p,p'-DDE                           p,p-DDE         139
o,p'-DDE                           o,p-DDE         69
p,p'-DDD                           p,p-DDD         98
o,p'-DDD                           o,p-DDD         84
Alpha-BHC                          a-BHC           93
Beta-BHC                           b-BHC           85
Lindane                            g-BHC           101
Delta-BHC                          d-BHC           65
Aldrin                             Aldrin          109
Dieldrin                           Dieldrin        196
Endrin                             Endrin          235
Heptachlor                         Heptachlor      96
Heptachlor Epoxide                 Hept. Ep.       98
Oxychlordane                       Oxychlordane    110
Mirex                              Mirex           95
trans-Nonachlor                    tr-Nonachlor    71
Gamma-Chlordane                    g-Chlordane     82
Hexachlorobenzene                  HCB             75
1,2,3-Trichlorobenzene             TrCBenz         53
2-Chlorobiphenyl                   MoCB            71
Dichlorobiphenyl                   DiCB            79
Trichlorobiphenyl                  TrCB            87
Tetrachlorobiphenyl                TeCB            79
Pentachlorobiphenyl                PeCB            67
Hexachlorobiphenyl                 HxCB            64
Heptachlorobiphenyl                HpCB            77
Octachlorobiphenyl                 OCB             73
Nonachlorobiphenyl                 NoCB            97
Decachlorobiphenyl                 DeCB            96
Dimeth. Phthalate                  Phthalate       95
Tribut. Phosphate                  Phosphate       5
-------
108
EFFECT OF NUMBER OF CALIBRATION POINTS ON PRECISION AND
ACCURACY OF GCMS RESULTS
D R Rushneck, Interface, Inc, PO Box 297, Ft Collins CO
80526 (303-223-2013), Cary B Jackson, Support Systems,
Inc., 3249 Silverthorne Drive, Ft Collins CO 80526,
Michael Aaronson and Sharon Chaffey, Colorado State Uni-
versity, Ft Collins CO 80523, and Barrett P Eynon, SRI
International, 333 Ravenswood Av, Menlo Park CA 94025
ABSTRACT
In 1977, the gas chromatography-mass spectrometry (GCMS)
"Screening" protocols required a single point calibra-
tion and a single point verification. In protocols for
analysis of pesticides and other semi-volatile pollu-
tants by GCMS, and in certain dioxin analyses, EPA has
required 15-point calibrations. This trend to more cal-
ibration points assumes that improved precision and
accuracy can be obtained if more calibration points are
used.
This paper presents the results of a study that
attempts to quantify the improvement in precision and
accuracy of GCMS analyses as the number of calibration
points is increased. Calibration precision is measured
by the variation between calibration sets. Analytical
precision is measured by the variation in results
between daily calibration verifications. In combina-
tion, these form the overall precision of the analytical
-------
109
process. Overall accuracy is measured by the average
measured concentration or response factor over calibra-
tion sets and daily calibration verifications.
Results of this study show that precision and accu-
racy are improved by a factor of approximately two each
time the number of calibration points is doubled. This
result leads to the further conclusion that replicate
analysis of a sample may produce a more significant
improvement in precision and accuracy than does a large
number of calibration points.
INTRODUCTION
This paper gives the results of a study of the
effect of the number of calibration points on the preci-
sion and accuracy of GCMS results. The data used for
this study were collected at Colorado State University
in the determination of pesticides and other substances
in adipose tissue. The analytical method used for this
investigation requires a 15 point calibration curve and
daily verification of this curve at one or more levels
as part of the quality control (QC).
The objectives of this study were to:
* Quantify the relationship between the number of
calibration points and the precision and accuracy of the
analytical results
* Evaluate the effect of the number of calibration
points on the costs of GCMS analyses
-------
110
DATA COLLECTION
Compounds Studied and Concentrations
The compounds studied are listed in table 1. For
those familiar with pollutant analyses using EPA analyt-
ical methods, these compounds are in the semi-volatile
(base/neutral and pesticide/PCB) fractions of the pollu-
tants usually determined by GCMS and GC with an electron
capture detector (GC/ECD). Stock solutions of these
compounds were combined and diluted to produce calibra-
tion solutions at concentrations of 1, 5, 10, 50, and
100 ug/mL.
For determination of the compounds in table 1,
three internal standards were maintained at a constant
concentration of 10 ug/mL in each calibration solution.
In addition, 11 surrogates were maintained in each solu-
tion at a constant concentration of 10 ug/mL, except for
tetra-, octa-, and deca-chlorobiphenyl, which were main-
tained at concentrations of 25, 40, and 50 ug/mL,
respectively. Because the surrogates were at constant
concentration, they were not studied further. The
internal standards and surrogates are listed in table 2.
In calibrating the GCMS instrument for this study,
two of the chlorinated pesticides were not detected at a
concentration of 1 ug/mL, and 22 compounds saturated the
mass spectrometer at a concentration of 100 ug/mL. As a
result, these compounds were eliminated from further
study. The final list of compounds used for subsequent
tests is shown in table 3.
-------
111
Table 1. Candidate Compounds for This Study
1,3,5-TRICHLOROBENZENE
1,2,4-TRICHLOROBENZENE
NAPHTHALENE
1,2,3-TRICHLOROBENZENE
1,2,4,5-TETRACHLOROBENZENE
1,2,3,4-TETRACHLOROBENZENE
1,2,3,5-TETRACHLOROBENZENE
ACENAPHTHYLENE
DIMETHYL PHTHALATE
ACENAPHTHENE
2-CHLOROBIPHENYL
PENTACHLOROBENZENE
FLUORENE
DIETHYL PHTHALATE
TRIBUTYL PHOSPHATE
DICHLOROBIPHENYL
ALPHA-BHC
HEXACHLOROBENZENE
TRICHLOROBIPHENYL
GAMMA-BHC
BETA-BHC
TRIS(2-CHLOROETHYL) PHOSPHATE
PHENANTHRENE
DELTA-BHC
TETRACHLOROBIPHENYL
HEPTACHLOR
DI-N-BUTYL PHTHALATE
ALDRIN
HEPTACHLOR EPOXIDE
OXYCHLORDANE
FLUORANTHENE
GAMMA-CHLORDANE
O,P'-DDE
PYRENE
TRANS-NONACHLOR
PENTACHLOROBIPHENYL
P,P'-DDE
DIELDRIN
O,P'-DDD
ENDRIN
HEXACHLOROBIPHENYL
P,P'-DDD
O,P'-DDT
TRIS(2,3-DICHLOROPROPYL)PHOSPHATE
BUTYL BENZYL PHTHALATE
P,P'-DDT
TRIPHENYL PHOSPHATE
TRIBUTOXY ETHYL PHOSPHATE
HEPTACHLOROBIPHENYL
CHRYSENE
OCTACHLOROBIPHENYL
MIREX
TRITOLYL PHOSPHATE
-------
112
Table 2. Internal Standards and Surrogates
NAPHTHALENE-D8 (INTERNAL STANDARD)
ANTHRACENE-D10 (INTERNAL STANDARD)
BENZO(A)ANTHRACENE-D12 (INTERNAL STANDARD)
1,2,4-TRICHLOROBENZENE-D3 (SURROGATE)
1,2,4,5-TETRACHLOROBENZENE-13C6 (SURROGATE)
4-CHLOROBIPHENYL-13C6 (SURROGATE)
DIETHYL PHTHALATE-D4 (SURROGATE)
HEXACHLOROBENZENE-13C6 (SURROGATE)
DI-N-BUTYL-PHTHALATE-D4 (SURROGATE)
TETRACHLOROBIPHENYL-13C12 (SURROGATE)
BUTYLBENZYL PHTHALATE-D4 (SURROGATE)
CHRYSENE-D12 (SURROGATE)
OCTACHLOROBIPHENYL-13C12 (SURROGATE)
DECACHLOROBIPHENYL-13C12 (SURROGATE)
Table 3. Compounds Used in This Study
1,3,5-TRICHLOROBENZENE
1,2,3-TRICHLOROBENZENE
1,2,3,5-TETRACHLOROBENZENE
DICHLOROBIPHENYL
ALPHA-BHC
HEXACHLOROBENZENE
TRICHLOROBIPHENYL
GAMMA-BHC
BETA-BHC
TRIS(2-CHLOROETHYL) PHOSPHATE
DELTA-BHC
TETRACHLOROBIPHENYL
ALDRIN
HEPTACHLOR EPOXIDE
OXYCHLORDANE
GAMMA-CHLORDANE
O,P'-DDE
TRANS-NONACHLOR
PENTACHLOROBIPHENYL
P,P'-DDE
DIELDRIN
O,P'-DDD
ENDRIN
HEXACHLOROBIPHENYL
P,P'-DDD
O,P'-DDT
TRIS(2,3-DICHLOROPROPYL)PHOSPHATE
TRIPHENYL PHOSPHATE
HEPTACHLOROBIPHENYL
OCTACHLOROBIPHENYL
MIREX
DI-N-OCTYL PHTHALATE
NONACHLOROBIPHENYL
DECACHLOROBIPHENYL
-------
Data Processing
GCMS runs were processed using the target compound
software provided by the instrument manufacturer.
Data were moved from the GCMS instrument through a
series of computers to the IBM mainframe computer at
EPA's National Computer Center as follows:
* Transfer raw GCMS data from GCMS computer to
stand-alone data system via 5 Mbyte hard disk.
* Compute averages of response factors for multi-
point calibrations on stand-alone data system.
* Process individual quantitation reports against
average response factors.
* Create ASCII files of processed quantitation
reports.
* Download ASCII files to IBM Personal Computer.
* Transfer ASCII files to 360 Kbyte floppy disks.
* Ship floppy disks to SRI International.
* Transfer ASCII files from floppy disks to Apple
Macintosh personal computer.
* Assign variable names using "Reflex" software on
Macintosh.
* Upload data via modem to Statistical Analysis
System (SAS) data set on IBM.
After the data were transferred to the IBM main-
frame, statistical analyses were performed using the SAS
software package.
113
-------
114
Calibrations and Verifications of Calibrations
The method used for determination of the compounds
listed in tables 1-3 requires a 15 point calibration.
The calibration is performed by triplicate analyses of
each calibration solution beginning with the 100 ug/mL
solution and proceeding downward to the 1 ug/mL solu-
tion. Upon completion of calibration, the average
response factor and the relative standard deviation
(coefficient of variation) of response factor over the
15 points is computed. For most compounds, the coeffi-
cient of variation (CV) must be less than 20 percent.
For some compounds that have highly variable responses,
the CV must be less than 35 percent.
Daily verification of the response is required for
days on which sample analyses are to be performed. This
daily verification consists of injecting either the 10
or 50 ug/mL solution and determining the deviation of
the response from the average in the initial calibra-
tion. Calibration is verified if the response in the
verification deviates less than 20 percent from the
average in the initial calibration.
Data Sets
In collecting data for this study, two 15-point
calibrations were performed. However, before the data
could be transferred from the GCMS to the stand-alone
data system, a head crash occurred and the triplicate
analyses at 100 ug/mL in one of the calibrations were
-------
115
lost. Therefore, the data in this report are based on
one 15-point calibration and one 12-point calibration.
The calibration data were split to produce the cal-
ibration data sets listed in table 4. Each of these
data sets entails a calibration (at 1, 2, 3, 4, 5, 12,
or 15 points) and verifications of these calibrations.
Verifications employed 4 through 9 points at either 10
or 50 ug/mL. These combinations permit evaluation of
the improvement in precision and accuracy to be gained
by using more calibration points because the deviations
of the calibration verifications can be measured against
varying numbers of calibration points.
Table 4
Calibration Data Sets
12-point Data Set (Data Set "C")
One 12-point calibration [(@ 1, 5, 10, and 50
ug/mL) x 3]
Three 4-point calibrations (@ 1, 5, 10, and 50
ug/mL)
Three 3-point calibrations (@ 5, 10, and 50
ug/mL with combinations chosen at random)
Three 3-point calibrations at constant concen-
tration (@ 5, 10, and 50 ug/mL)
Three 1-point calibrations @ 10 ug/mL
Four verifications of each calibration (@ 10
ug/mL); 40 calibration verifications
total
15-point Data Set (Data Set "D")
One 15-point calibration [(@ 1, 5, 10, 50, and
100 ug/mL) x 3]
Three 5-point calibrations (@ 1, 5, 10, 50,
and 100 ug/mL)
Three 3-point calibrations (@ 5, 10, and 50
ug/mL with combinations chosen at random)
Three 3-point calibrations at constant concen-
tration (@ 5, 10, and 50 ug/mL)
Nine 1-point calibrations (3 each @ 5, 10, and
50 ug/mL)
Five verifications of each calibration (3 @ 10
ug/mL; 2 @ 50 ug/mL); 99 calibration ver-
ifications total
-------
116
RESULTS
Statistical Model for Calibration
Statistical models for calibration were fitted to
these data sets. The simplest such model relates the
ratio of the concentration of the compound and its ref-
erence to the corresponding extracted ion current pro-
file (EICP) area ratio, in terms of the response factor:
RF = [A / A(ref)] / [C / C(ref)]
This model applies to both internal standard data
and to isotope dilution data because the
EICP's for the labeled and unlabeled compound do not
overlap. To form a statistical model, we rearrange
terms and add an error term to yield:
A / A(ref) = K [C / C(ref)] (1 + epsilon)
where epsilon has mean zero, has variance sigma
squared, and is independent over instrument runs; and K
is a fixed proportionality constant depending on the
particular compound and reference compound involved.
The properties of this model are that the instru-
ment response is proportional to the concentration ratio
for a given pair of compounds (pollutant and internal
standard), and that the relative standard deviation of
the response factor is approximately constant over the
calibration range of the instrument.
To calculate the measured concentration of a com-
pound using this model, a series of N instrument cali-
brations are performed, and for each calibration the
response factor is computed. For an N-point calibra-
-------
117
tion, and assuming the model given above, the arithmetic
average of the individual response factors is the stat-
istically best estimate of the calibration constant K.
It has statistically optimal properties of being
unbiased and minimum variance. It has a relative stan-
dard error of sigma / sqrt(N). (Since the constant K
can vary every time the instrument is set up or changed,
a calibration must be performed to estimate this con-
stant for each setup of the instrument.) Once the
instrument is calibrated, the concentration of a pollu-
tant in a sample is calculated as:
C(measured) = [A / A(ref)] * C(ref) / K(calibration)
To first order approximation, the relative standard
error of this measured concentration around its true but
unknown value would be:
sigma * sqrt(1 + 1/N)
where 1/N is the portion that is contributed by
the calibration.
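A minimal numeric sketch of this calibration model (Python; the response factors and area ratio below are invented for illustration, not study data):

    import statistics

    def estimate_k(response_factors):
        """Arithmetic mean of per-run response factors: the unbiased,
        minimum-variance estimate of K under the model above."""
        return statistics.mean(response_factors)

    def measured_concentration(area_ratio, c_ref, k):
        """C(measured) = [A / A(ref)] * C(ref) / K."""
        return area_ratio * c_ref / k

    # Three calibration runs giving RFs of 0.98, 1.04, and 1.01 yield
    # K ~= 1.01; an area ratio of 0.55 against a 10 ug/mL reference then
    # gives a measured concentration of about 5.4 ug/mL.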
Calibration Sets
Table 4 lists the calibration sets. From the 12
point calibration (identified with the letter C),
smaller calibration sets were formed. Each of these
sets was verified with four daily verifications at 10
ug/mL. From the 15 point calibration (identified with
the letter D), another selection of smaller calibration
sets was formed, and verified with five daily verifica-
tions: 3 at 10 ug/mL, and 2 at 50 ug/mL.
-------
118
Evaluation of Model Assumptions
Constant Relative Standard Deviation
The first quantity of interest is sigma. This is
estimated by the relative standard deviation of the
response factors among samples of the same type. Figure
1a plots the estimated value of sigma from each calibra-
tion set versus the concentration of aldrin. The points
marked C and D represent the two calibration sets, and V
and W indicate the two verification sets. Figure 1b
shows the same plot for di-n-octyl phthalate, and Figure
1c shows the same plot for hexachlorobenzene. Similar
figures were generated for each compound. The response
factors appear to be fairly constant as a function of
concentration. The dashed horizontal line is the root
mean square average of all the points plotted. The
largest variations occur for calibration sets based on
the fewest number of calibration points. The calibra-
tion samples, which were all done on the same day,
tended to show smaller variation than the verification
samples, which were done one per day over several days.
Overall, the observed statistical variation is as would
be expected in sets of this size, and fairly good agree-
ment was found with the constant standard deviation
assumption.
Accuracy (recovery)
In order to examine the accuracy, each calibration
was used to compute a measured concentration for each
-------
119
calibration verification. In labeling the results, the
calibration set (12 or 15 point) is identified by a C or
a D, followed by two digits indicating the number of
calibration points in the set, followed by (where neces-
sary) three digits indicating the calibration level.
For instance, C01 is a single point calibration from the
12-point set (at 10 ug/mL), D01005 is a single point
calibration taken from the 15-point set at 5 ug/mL, and
D03 is a 5/10/50 ug/mL calibration taken from the
15-point set. For each calibration, the average recov-
ery was computed over all of the appropriate verifica-
tions. Figure 2a shows the results for aldrin, plotted
versus the number of calibration points. Figure 2b
shows the same plot for di-n-octyl phthalate, and Figure
2c shows the same plot for hexachlorobenzene. The val-
ues are well centered around 100% recovery, and the
spread decreases as would be expected as the number of
calibration points increases.
Precision.
Precision was measured using the three replicate
calibrations and computing their relative standard
deviation on each verification sample, then averaging
across the verification samples. Figure 3a plots this
precision value versus the number of calibration points,
for aldrin. Figure 3b shows the same plot for di-n-
octyl phthalate, and Figure 3c shows the same plot for
hexachlorobenzene. The plotted curve shows the expected
-------
120
theoretical curve
sigma / sqrt(N)
which compares reasonably well with the experimen-
tal results for 1, 3, 4, and 5 points. Since replicate
calibration sets were not available for the 12 and 15
point calibrations, they are not shown.
The major decrease in the error occurs in changing
from one calibration point to three calibration points,
with a continuing decrease, but nowhere near as much
gain in going to four or five points. A further gain
would be achieved by using 15 calibration points, giving
an error approximately half as large as with 4 points.
Total Precision
The calibration precision is only one component of
error; the remainder comes from the innate amount of
variability in the measurement of any analyte. For the
experimental values, we can add the average variation
among the verification samples for a given calibration
to the average variation among different calibrations
for a given verification, to obtain an estimate of total
precision. Total precision versus the number of cali-
bration points is plotted in Figure 4a for aldrin, along
with the expected theoretical relationship:
sigma * sqrt(1 + 1/N)
Figure 4b shows the same plot for di-n-octyl
phthalate, and Figure 4c shows the same plot for hexach-
lorobenzene. The experimental values bracket the theo-
-------
121
retical values fairly well, and we observe good agree-
ment with theory. As above, the major improvement is
going from 1 to 3 calibration points.
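As a numeric illustration of the two expressions above (a sketch assuming a per-run relative standard deviation sigma of 20 percent; not part of the study's software):

    import math

    sigma = 0.20
    for n in (1, 3, 4, 5, 12, 15):
        calibration_rsd = sigma / math.sqrt(n)    # calibration component only
        total_rsd = sigma * math.sqrt(1 + 1 / n)  # calibration plus measurement
        print(f"N={n:2d}: calibration {calibration_rsd:.3f}, "
              f"total {total_rsd:.3f}")

    # N=1 gives a total RSD of 0.283 (sigma times sqrt 2); N=15 gives 0.207,
    # approaching the floor of sigma = 0.200, which is the 1/sqrt(2) limit
    # on improvement noted in the conclusions.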
CONCLUSIONS
We have demonstrated and tested a statistical model
which predicts the effect of the number of calibration
points on measurement precision. The most we could ever
expect increasing numbers of calibration points to
decrease the total relative standard deviation by would
be a factor of 1 / sqrt(2), so we are left with a cost-
benefit comparison. More calibration points increase
both the precision and accuracy of the measurement, and
protect against stray values. But beyond a medium num-
ber of calibration points (3 - 5), the costs may be bet-
ter allocated to multiple measurements of the actual
samples, if the objective is improved final accuracy of
results.
QUESTIONS AND ANSWERS (paraphrased and expanded for
clarity):
Mr. Robertson: Gary Robertson, Lockheed, Las
Vegas. Given the variabilities of the GCMS, which
appears better to use for daily quantitation, the ini-
tial calibration data or the single point verification.
Dr. Eynon: I assume that you are asking about the
daily verification sample. I believe that there are
some tradeoffs as to which is most appropriate. At pre-
-------
sent, the daily verifications are used as a check on the
initial calibration. They are compared with QC control
limits to assure that the instrument has not drifted.
I believe that there might be some profit to inves-
tigating whether or not some combination of the initial
calibration and the verification data might provide
additional precision.
122
Mr. Robertson: In addition, the Contract Labora-
tory Program requires in some methods that only the
daily, single-point standard be used as the reference
for all analyses performed on that day, once the initial
multi-point calibration has been verified.
Dr. Eynon: I believe that to give up the informa-
tion contained in the multi-point calibration makes it
unclear as to why you're doing a multi-point calibration
at all. You're effectively doing a single point cali-
bration each day. As I mentioned above, some combina-
tion of the multi-point initial calibration and the sin-
gle-point verifications might provide the most precise
and accurate results.
But I think that relying solely on a single point
on any given day is too subject to the vagaries of
the individual measurement. I don't think that costwise
a three-point calibration on any given day is going to
be supportable, so averaging the multi-point and the
single points may be the best compromise.
-------
123
Mr. Nouth: Chantha Nouth, West-Paine Laboratories.
Regarding the number of calibration points (i.e., 1, 3,
5, or 15), what is the level that we can rely on? Are
three okay, or 5, or what is the merit of a 1-point cal-
ibration?
Mr. Rushneck: It depends on the work you are
doing. If there is a contractual requirement from EPA
under the Contract Lab Program or other program, you
must adhere to that contractual requirement. If, how-
ever, you are making an independent decision in your own
laboratory, these data show that the precision can be
improved by a factor of approximately two each time the
number of calibration points is doubled. Therefore,
increasing the number of calibration points from one to
four increases the precision by approximately a factor
of four. To gain another factor of four would require a
16-point calibration. The precision is measured as a
deviation from zero coefficient of variation (relative
standard deviation). So for example, increasing the
number of calibration points from one to four would
lower the coefficient of variation attributable to cali-
bration from 20 percent to 5 percent. Increasing the
number of calibration points from four to 16 would
reduce the CV attributable to calibration to approxi-
mately 1.3 percent. This decrease would not be worth
the cost of the extra 11 calibration points, given the
error in the measurement of the analyte in the sample
(probably on the order of 20 percent for this example).
-------
124
Mr. Nouth: What is the improvement in going from
three to five points?
Dr. Eynon: The square root of five thirds, or
1.29.
Mr. Telliard: Anyone else?
Barry, I'm glad that you put this microphone back
down where I can reach it.
A couple of years ago, we had Dr. Browner from
Georgia Tech come and talk about the LCMS interface that
he was working on. The LCMS technique offers some
advantages to us, in that a particular group of com-
pounds can be determined that cannot be done by GCMS.
Our next speaker, Drew Sauter, is going to be talk-
ing about the use of the LCMS to look for the Appendix
8, 9, 12, or whatever they call it, to generate data for
RCRA.
Drew used to have a real job before his present
one. He used to work at our Las Vegas laboratory. Then
one day his MSMS died, and he went home and never came
back, and he showed up here this morning with his slides
in his hand, so we put him on the program. Come on
over, Drew.
-------
[Pages 125-129: figures for the preceding paper (Figures 1a-1c through
4a-4c: relative standard deviation of response factor versus concentration,
recovery versus number of calibration points, calibration precision versus
number of calibration points, and total precision versus number of
calibration points, each for aldrin, di-n-octyl phthalate, and
hexachlorobenzene). The plots are not recoverable from this copy.]
-------
130
MR. SAUTER: It's real nice
to be respected. I don't know why it is; every time
I give a talk here I feel like the Rodney Dangerfield
of this measurement thing.
My slides should be on behind Bruce's. In 1976
Bill Telliard came to Midwest Research Institute, and
I worked there. He said, listen, these guys are
telling us to measure these compounds. Can you do it?
I ran that project along with a fellow who's here in
the audience, Dr. Marcus, who is now director of
analytical chemistry for Chemical Waste Management.
I don't know if any other MRI chemists are here, but it
was kind of funny at that time. It was said, you've
got to develop analytical methods for these priority
pollutants and you've got to look at all these samples.
I said, but, Bill, I just got done doing a coal
gasification process water GC/MS study. The process
waters are very different chemically and physically.
At that time, though, all of the preliminary work was
"just for screening purposes."
So we said, it's an interesting problem, and we
thought we could do something, so let's get started.
I show here some of the first slides that Bill saw
-------
131
in the first priority pollutant project meeting
in 1976 in Kansas City at Midwest Research Institute.
That was the list of priority pollutants. When we
got the first list of priority pollutants, it was
a list of compound classes! After some work, we
established an extraction scheme shown here and
much of this got accepted and some rejected.
A lot of work was going on with the Athens Environ-
mental Laboratory; personnel that were working on
the methods development included Walt Shackleford,
Dr. McGuire, Ron Webb and others. This slide shows
the first volatile organic analysis standard which
was acquired on a Varian MAT CH4-B. That's a 50 parts
per billion standard. We were so proud of that
chromatogram because at that time people were having
great difficulty resolving all the compounds, and it
was not clear that adequate resolution and so forth
could be achieved. So in ten years we've come an
awful long way, Bill. You're to be congratulated for
the central role you played in the development of
national pollution monitoring GC/MS and other
methods. (Applause)
But when I was doing the first few samples
-------
132
we had some difficulties with many aspects of the
work, including floaters. A decision was made to
filter the samples, as solids were not addressed
in the Consent Decree. Nevertheless, we were aware
from this and other studies that involatile organic
compounds could represent the largest fraction of
the TOC in real world samples. We really haven't had
the measurement technology to look at involatile
organic compounds in environmental work. Our topic,
particle beam LC/MS, has an environmental perspective,
but I think it has a much larger perspective with
respect to the entirety of analytical methods
development for involatile organic compounds in
environmental, forensic, defense and other applications.
This is an early calibration curve.
Look at those RSDs on those babies. That was
a multi-day composite curve for 1,2-dichloroethane.
But this was measured on a Varian MAT CH4-B,
with a crank. Our scan times were seven seconds
in duration and we used to get trapezoids out instead
of peaks. Seriously. That's what we had to deal
with. We actually had a 311-A that was much better
that we used for the base neutrals, and we got a lot
-------
133
better data. But these slides can give perspective
to what things were like in 1976.
The presentation deals with where we are at
with particle beam LC/MS. There are regulatory require-
ments to look at thermally labile organic compounds,
of which we were aware. About a year and a half or
so ago, I met a gentleman by the name of Dr. Ross
Willoughby, who took his Ph.D. under Dr. Browner at
Georgia Tech. Dr. Willoughby is co-patent holder on
the MAGIC LC/MS interface, which is another particle
beam type interface. Ross came up to us and said,
read my paper. I read it and I said, boy, that has
great potential. We talked to the company president
there and some other guys and they said, gee, we
could do something with that which could have major
impact on environmental analysis related to Appendix
VIII and Appendix IX. It has some great potential
applications. I was aware of some need for this
technology at EPA. Lo and behold, these guys did
some work and did some prototypes. What you're going
to see is...literally, almost all of this is first-pass
results. Of course, Ross had about six years in the
MAGIC and maybe about another year and six months in
-------
134
the...what's now called therma-beam. Great name.
Talking about labile organic compounds, and it's given
the name therma? Enough of this marketing discussion.
We approached Dr. Paul Friedman and said, look, this
has potential. Paul is with the Office of Solid Waste,
or he was. He's just changed to the Office of Program
Management; he's into policy now, so that means he'll
be able to help get the LCMS accepted by everybody.
Anyhow, what we're going to do is show the
results of a project that we did for Dr. Friedman.
I'm kind of excited about the technology, and I'm
very proud to know Dr. Ross Willoughby, who now works
with Extrel, and who I think has really done something
special in analytical organic chemistry. Nice guy,
good scientist, and he's made a helluva contribution
to our science.
These are the conclusions of the project. I'll
talk about the interface a little bit later on. This
is not going to be a commercial.
These are the LC/MS parameters that we worked
under when we did the LC/MS project. We did both flow
injection type work as well as reverse phase
gradient elution PB/LC/MS.
-------
135
These are the conditions. Reverse phase,
gradient elution, using a C18 column, about 90 percent
methanol to water. I'll show you the detail later.
We're going to make a couple of major points here.
Particle beam LC/MS, it's called therma-beam, produced
spectra that matched the NIH/EPA library for volatile
and involatile compounds.
That's the mass spectrum of DFTPP. Do you want
to do the DFTPP? You can do it, but you might want
to think about the specs because we believe they're
going to be revised. Nevertheless, PB/LC/MS can
meet the old 1975 criteria. That's an electron
impact mass spectrum. Why electron impact? We'll
talk a little bit about that, too.
This is a particle beam mass spectrum, as well.
All the spectra that I'm going to show will be just
to give you that idea. I will show the acquired and
then the difference mass spectra as compared to the
NIH/EPA mass spectral library where possible. Just
look at the difference and then try to look at the
structures and mentally figure out what that has to
mean with respect to analytical environmental organic
chemistry as a whole.
-------
136
This is ortho-toluidine, which can be
difficult to get through GC columns reproducibly.
This is the mass spectrum of diethylstilbestrol.
It could kind of be done by GC, but it also has exchangeable
protons on it which can complicate GC analysis.
That's a pretty good match.
Look at the bottom spectrum. This is purple.
Is that thing focussed? Would you please focus that?
There we go. That's a polar group with ketone functionality
on it and nitrogens, although they're sort of aromatic.
Good library match.
This is caffeine. We can sort of do caffeine by
GC, but there's the electron impact spectrum of caffeine.
This is 2,4-D. People like to derivatize this analyte
if you do it by GC. Of course, if you derivatize a
real sample you've got the analyte plus unknown matrix
components reacting with the agent, and you don't always
know what happens in that reaction all the time. I
tend not to like doing derivatizations in real samples,
in terms of analytical methods development, as a
principle.
This is Brucine, an alkaloid. It matches the
EPA mass spectral database pretty well. Effectively
-------
137
exactly. This is a neat molecule called Lasiocarpine.
All these are Appendix 8 listed analytes.
This is by the way a very good match. Look at the
fragility of that molecule. There are all sorts of
bonds that can be broken. There's ketone functionality
in it. There's hydroxy groups up there, secondary
alcohol, tertiary alcohol. It's a fragile molecule.
Try to get that through a GC column.
This is a dye called Auramine-O that we're
going to talk about a little bit later on. That's an
EI spectrum from that, which matches the mass spectral
database, and it brings up the issue of why we're
interested in doing electron impact LC/MS as opposed
to something like thermospray or other ionization
techniques in the high pressure regimes.
Recently, Bill McFadden, who works with the Finnigan
Company, has started writing a newsletter, a thing
called the TSP newsletter, and in that Bill says...I've
got the exact quote back there, but it's effectively
that you really don't get a whole lot of structural
information on average using a thermospray. Plus
there's lots of other complications. We're going to
use this molecule to try to show an example of why
-------
138
EI techniques are superior to other LC/MS ionization
techniques later on.
Injected weight, that's another major thing, but
this slide shows that injected weight is highly
correlated with observed ion current. As such the
response factor equation probably can be utilized
in PB/LC/MS work. Sometimes if you do a thermo-spray
experiment you can get ions and sometimes you can't.
There's a lot of reasons for that. It's because
outside the ion source there is both gas phase
chemistry going on, liquid phase chemistry, and
there may be some solid stuff. In fact, Finnigan
in their LC/MS product is now doing what they call
CID MS outside the ion source and taking ions in, and
then they indicated that if you had a triple quad,
then you could do CID MS-CID MS-CID MS...it's about
this long, that is, the alphabet soup.
So it's good that the response factor equation
can be utilized or appears as if it can be, based on
some of this information. All very preliminary.
Everything I'm showing is first run, non-filtered.
There's no internal standard. This is all absolute
areas where the relative standards of deviations are
-------
139
calculated.
So for saccharin, we RSD'd it. At 500 nanograms
injected, the RSD ran from 5.6 percent, and at 50
nanograms it went to 56.1 percent. That's absolute
area counts, not corrected against an internal standard.
Typically, if you use internal standards as opposed to
absolute areas, I find that precision would improve
somewhere by a factor in the area of two to four.
So just based on saccharin, where the number of
determinations was eight, we can see that there is
a linear range between 500 and 100 nanograms.
Precision ranges from 5.6 to 18.6 percent RSD, and when
you get to 50 nanograms in this case, the RSD blew up.
With an internal standard we could correct that. So
done precisely at a single level for that one compound.
When we looked at a bunch of compounds that were there,
propylthiouracil, diethylstilbestrol, etc., there's a
pretty diverse range of functionality, and the various
N determinations are there. Typically the concen-
tration range is from 50 to 500 nanograms in these
determinations. The correlation coefficients are
shown on the right, and that is not with internal
-------
140
standard, that is with absolute area, so it's a
worst-case presentation of precision and linearity.
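A sketch of how those two statistics can be computed (Python 3.10+ for statistics.correlation; the numbers are placeholders, not the study's data):

    import statistics

    def percent_rsd(areas):
        return 100.0 * statistics.stdev(areas) / statistics.mean(areas)

    areas_500ng = [10300, 9800, 10900, 10100]   # replicate areas at one level
    print(percent_rsd(areas_500ng))             # roughly 4 to 5 percent here

    weights = [50, 100, 250, 500]               # ng injected
    areas   = [1100, 2050, 5200, 10300]         # absolute area counts
    print(statistics.correlation(weights, areas))  # Pearson r; near 1 if linear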
So typically we found that ion current
correlated with injected weight, which is
kind of like FIDs and EI GC/MS systems, things of
that nature.
Absolute sensitivity is an issue. Somewhere,
in the best cases, 10 to 20 nanograms, up to 75 to 125.
It's compound dependent. But all of this data was
done on two different prototypes. The actual product
that's been introduced is supposedly better, but I
haven't done anything on the product, so I don't
know. If you're interested in one of these things,
check it out.
So if you're in the 10 to 20 nanogram range and
you get 75 to 125 nanograms, you're somewhere let's
say between 20 and 75 PPB. Of course, with doing
LC you can make all sorts of injection sizes, right?
You can use column switching techniques. You can get
huge injection volumes. So it's clear to us that
parts per billion analytical methods are possible.
In fact, I think really you can do parts per trillion.
It would be interesting, looking at that adipose
-------
141
tissue with LC/MS, I think.
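As a minimal sketch of that arithmetic (the injection volume is assumed for illustration): a detection limit stated as injected mass becomes a concentration only once an injection volume is fixed, which is why large-volume LC injections bring it into the ppb range.

    def detection_limit_ppb(mass_ng, injection_volume_mL):
        # In water, 1 ng/mL is approximately 1 ppb by weight.
        return mass_ng / injection_volume_mL

    # With a hypothetical 1 mL large-volume injection, a 20 to 125 ng
    # instrumental limit corresponds to roughly 20 to 125 ppb:
    for mass_ng in (20, 75, 125):
        print(mass_ng, "ng ->", detection_limit_ppb(mass_ng, 1.0), "ppb")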
These are estimated detection limits, first
pass. Absolutely first pass. For Propylthiouracil
we said 75 ng and for 1,3,5-Trinitrobenzene we said
125 ng. This is on the prototype. For DES it was 20
to 30 nanograms, ball park figures. 1,3,5-Trinitro-
benzene was another compound we did. But some of
this data has some error in it. For example, the
1,3,5-trinitrobenzene, which is at the bottom, and
it's explosive, (try getting that through a GC column),
we estimated 125 nanograms. Subsequently Ross, that is,
Dr. Willoughby, or Roscoe, as he's called, did another
experiment and found that it was a solution stability
problem. His estimated detection limits for that
explosive are now on the order of 10 nanograms. So
that was a standards problem there.
That's the kind of error that's in this data. That's
how new the technology is. This is actually about
five or six months old data.
Also, PB/LC/MS can be utilized in what I consider
to be the most general and powerful LC configuration,
which is reverse phase gradient elution LC. This is
-------
142
pretty much normal bore LC data. This is a total ion
current that shows a variety of Appendix VIII
compounds. I forget what the absolute amounts were in
here, but it was probably around 100 nanograms. Some
were detected better than others.
In the course of doing this, we found something
called the Auramine-O ketone; that's the tenth
peak out, the little one out at the end. If you
remember when I showed you the structure of the
dye, it had an imine bond. That bond can hydrolyze to
the ketone real easily. So we found that in the
standard, not surprisingly. We got the standards
from a given repository for NADA, and this was a
small project. We were trying to demonstrate feasi-
bility of doing EI mass spec at low nanogram levels.
I hope you will forgive our standards problem.
At any rate, in this slide I show two papers.
One is by Dr. Vestal, and another one by Parker, et
al. They talk about what goes on in the thermo-spray.
You say, why don't I just do a thermo-spray experiment?
There's a lot of reasons; principally, you're in a
high pressure regime. If you're in a high pressure
regime and if you have ions, they will be collisionally
-------
143
stabilized. They'll transmit their internal energy
to the gas molecules around them. So they're stabi-
lized, the gas molecules keep them kind of together.
That's one thing.
Another thing is, it's complicated as hell.
You've got gas phase acid-base reactions. You've got
liquid phase acid-base reactions and others. It's a
complex soup. Sometimes you get neutralization.
That's one of the reasons why, when you squirt something
into the thermo-spray, you may see something and you may
not, which, if you're running a lot of complex samples
in the environmental game, can be catastrophic, and
it's complicated! But at the same time, the work of
both Parker and clearly Dr. Vestal has been legendary
in this regard, and probably still has a lot of great
application in the biomedical fields.
But I tend to think that for the environmental
game, a particle beam technology is going to be the
preferred route, because you can get a whole lot of
structural information and based on our experience,
one always gets ionization.
This is a thermo-spray spectrum of Auramine-O.
The fact of the matter is, I think that's M plus H,
-------
Due to an inadvertent skip
in page numbers this page
is intentionally omitted
-------
145
or that was the hydrolysis product; that instead of
having NH, which is 15, it had C double bond O, so
that's 16. I don't know. You see what I mean. The
molecular weight of the compound is 267, and in the
thermo-spray we see 268. But there's a 267 there as
well. Knowing the hydrolysis products, I don't really
know if that's M plus H from the thermo-spray,
something that's going on in that complex mixture
outside the analyzer, or if in fact it is really the
ketone. Or maybe it's both!
This shows why electron impact spectra are
better than other techniques. I'm sure we all know,
for example, there are no common methods utilized for
chemical ionization, despite the fact that it has a
lot of utility and it's good technology. I don't
believe it's as powerful as electron impact mass
spectrometry. So that is one of the strong advantages
of PB/LC/MS technology.
This kind of shows just a demonstration of the
information content that's available. What is it?
The information content is in the fragments, and the
fragments allow you to put together what the molecule
is, even when you have unknown things happen to your
-------
146
standards, as we showed for the hydrolysis of
Auramine-O. In fact, that just goes to show that
that's the same representation, but the one peak
to the left is the imine and the one peak to the
right is the ketone of the same starting material,
that compound. We could never have identified
what the heck that was with anything if we had not
used electron impact LC/MS. That's just one
example and one dye. I think the benefits of electron
impact (EI) MS in the projects of EPA and such are
really worthwhile, as opposed to things that just
form pseudo molecular ions or molecular ions.
So we said, we produced matchable EI spectra.
Ion current is highly correlated with injected weight.
Detection limits are on the order of 10 to 20 or 75 to 125
nanograms. They're getting lower. I think they're
probably around 10 to 20, or 30 to 40, really. That's
for one microliter, so you can shoot 20 microliters.
So you get to the PPB levels theoretically real well.
What else? It's usable with reverse gradient
elution LC, which it has to be if EPA is ever going
to use it on real samples.
But there's practical considerations as well.
-------
147
It's qualitatively and quantitatively similar to GC/MS
methods. You can use the NIH/EPA library, all that sort
of thing. It fits existing QA/QC structures. That's
because you can use response factors, which we talked
about. It can be retrofit to existing systems,
presumably. It also could help environmentalists to
eventually minimize the target compound mentality,
because the fact of the matter is, you may not need
to look...as an industry you may not need to look for
a lot of compounds, but there may be others that you
need to look for that are oxidation products of
various starting materials or whatever. Or maybe
you're in a completely different game. Obviously
this has a lot more to say than just pollutant
measurement. I think it's a fairly major deal, in
terms of analytical methods in its entirety.
We wrote this paper on response factors, and the
statistics aren't response factors, but we think
they're correlated with...in this paper we say they're
correlated with fundamental properties of mass
spectrometry. That's our position in the literature,
and we've given estimated or observed values.
-------
148
So now, if you can get away from this and just
think of what this means, if somebody was analyzing a
drug or drug metabolite in anything, in human plasma
or whatever, and you didn't have a standard, you could
use the LC/MS technology and this paper to provide a
quantitative estimate of the unknown and be able to
quantitate something that you don't have a standard
for, using some formalism. You have to be careful
doing this. If you don't have a standard, you don't
have a standard. Your other option may be to hire a
Ph.D. to synthesize it for a year or two.
So in other words, this could provide, coupled
with our paper on response factors, a way to quantitate
a lot of really interesting compounds of biological and
other significance.
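As a minimal sketch of that idea, with hypothetical numbers and an assumed response factor estimate (nothing here is from the paper):

    def estimate_amount_ng(area_unknown, area_istd, amount_istd_ng, est_rf):
        # Inverts RF = (A_x * m_istd) / (A_istd * m_x) to solve for the
        # unknown mass m_x, using an estimated rather than measured RF.
        return (area_unknown * amount_istd_ng) / (area_istd * est_rf)

    # An unknown at 82 percent of the internal standard's area, with an
    # assumed RF of 0.9, comes out at roughly 91 ng:
    print(estimate_amount_ng(8200, 10000, 100.0, 0.9))

The caveat in the text stands: without a standard, the estimated response factor carries all the uncertainty, so this yields a quantitative estimate, not a measurement.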
This is kind of like what we're talking about
here. What happens in this interface...we don't want
to be a commercial; we're not being paid for this
talk, which thrills us to death. We have LC effluent
coming in, and there's thermal nebulization happening at
atmospheric pressure. At atmospheric pressure you've
got to have a gas to be able to get rid of the solvent,
because otherwise, if you're at a high vacuum, you have
-------
149
a lot of problems there. There's no way to get thermal
energy to the solvent.
It goes into an expansion chamber, and there's a
two-stage particle beam separator. If you know
anything about jet separators, it's easy to envision
what could happen there. This is kind of like what
it looks like. It's really not that big. In fact,
you can pull the LC line out of the instrument,
and it still sits at 10 to the minus fifth torr. When
you think about that, as Dr. McGuire told me, he
said, that's not real surprising because it's only
about three orders of magnitude difference in mass
flux coming into the source.
So it appears, based on those data, that EPA
may be able to start to look at involatile compounds.
Our information, based on discussions we've had
with ORD people, is that both Superfund and the Office
of Solid Waste and the Office of Research and
Development have taken the position that there will
be programs developed to do this. I predict that
probably within the next year to year and a half
you're going to have LC/MS methods that can be run
on GC/MS machines.
-------
150
This technique will evolve. The important
aspect of it is that it's unequivocally required for
many...I've reviewed 26 remedial investigation/feasi-
bility studies for the Corps of Engineers in their
Superfund studies. In about half of them, thermally
labile or involatile molecules can be major site
pollutants. So it's required, it's good technology,
and I really recommend that you look at it.
I want to thank Dr. Paul Friedman for having the
vision to support this research and I want to commend
Dr. Ross Willoughby for putting about eight years of
his life into particle beam LC/MS. I'm very thrilled
to have been a little bit of a part of this. I think
the real question is, have we arrived where...is LC/MS
where GC/MS was in 1976, Bill? My general inclination
is, yes, it's at least that good. It might be better,
and in fact it probably is, given all the standardiz-
ation that has come out of a lot of programs that
Bill and other people in EPA have started.
So it's technology I recommend you all look
into. It's really fun to run samples with it, too.
I think it will have some impact. Any questions?
-------
151
Question and Answer Session
MR. TELLIARD: Questions?
MR. CHANG: I'm James Chang
from Galson Technical Services. I have a couple of
questions. The first one is, what's the pressure in
your source area?
MR. SAUTER: Where the ion
gauge is located is away from the source area. But
by all indications, by the very fact that we get EI
GC/MS, it's got to be below the millitorr region. For
general information purposes, the ion gauge reads 10
to the minus fifth torr, just like it does pretty much
if you're doing packed column GC.
You want to understand, there is very, very
little solvent getting into the ion source. That
makes life easy.
MR. CHANG: You mentioned
solvent getting to the source. So you still have a
trace amount of solvent that's... if you analyze
something like 2,4-D on HPLC, you need a...
So if you still have a very small amount of solvent
getting into that,...
MR. SAUTER: I would
-------
152
anticipate that involatile buffers would give you a
problem, although there's some indication to the
contrary on that. How rugged it is is something you
need to think about.
I've been doing this now for a long time. My
general information to you is that the particle beam
technique appears to be very practical. You may run
into some problems with very involatile buffers like
phosphates, things of that nature. But I think you
can get around that with other buffers.
MR. CHANG: Do you recommend
the differential...
MR. SAUTER: The fact of
the matter is that, there's a lot of this stuff that's
been done on a singly pumped instrument, which I find
amazing. I don't know exactly what the conductance
is in...it's 500 liters per second. Jim Buchner is
here from Extrel, and if you want to learn more about
that, I recommend you speak with Jim. Jim, do you
want to stand up? Jim is here and Jim is co-author
of the paper and he's somebody I've been working with
on and off for a little bit. I think Jim probably
knows a lot more about the state of their instrumen-
-------
153
tation right now than I do.
MR. TELLIARD: Anyone else?
MRS. IRIZARY: My name is
Maria Irizary from EPA and PRASA...I have two practical
questions. One, when do you think this technology
will be available for us morons, and two,
MR. SAUTER: For what?
MR. TELLIARD: Us morons!
MRS. IRIZARY: Two, what is
the cost, or what is an estimate of the cost?
MR. SAUTER: It's competitive
with other LC/MS interfaces out there. If you want to
read a little bit about it, you can read the Pittcon
review which was in C&E News. They talk a little bit
about it in there. It was introduced formally in the
Pittsburgh conference, and it was fairly well received,
it appears. But I recommend that you could contact
Jim. In fact, Jim is sitting right in front of you.
MR. TELLIARD: Anyone else?
Let's reconvene at 1:45. For those of you who are
new here, lunch is of course available in the hotel,
and there are beaucoups fast food outlets right next
door in the waterside area. You can go over and just
-------
154
blow your mind on fudge if you want to. But there's
a whole bunch of restaurants available, and we'll see
you back here at a quarter to two. Thank you.
(WHEREUPON, a luncheon recess was taken.)
MR. TELLIARD: Could we get
the folks in here, please?
The first speaker on our afternoon session was
supposed to be Dan Deferro from Petroleum Labs. Dan
had some illness in the family and had to cancel. So
our first speaker this afternoon is Jim Poppiti from
Finnigan. We want to thank Finnigan, because we were
really interested in having the paper on the program.
With Dan having to cancel because of the illness of
his wife, we were really concerned that we would lose
the paper. We talked to Bob Finnigan, and he was
kind enough to throw Jim into the breach for us and
have him here today to make the presentation. So,
Jim?
-------
155-189
[Slides for the preceding presentation, reproduced as figures in the
original proceedings. Most of the slide artwork did not survive scanning.
Recoverable content includes: PB/LC/MS EI mass spectra of test compounds,
among them a slide titled "PB/LC/MS Mass Spectrum of Cyclophosphamide";
thermospray spectra shown for comparison; precision and linearity tables
based on absolute peak areas; a response factor table for analytes
including cyclophosphamide, silvex, thiosemicarbazide, propylthiouracil,
and thioacetamide; estimated detection limits; and summary slides noting
that response is highly correlated with injected weight.]
-------
190
DR. POPPITI: That's a
pretty good assessment. I get thrown into the breach a
lot, it seems like, when Bob Finnigan calls me up and
says, can you go here and go there and talk to
different people.
I've had actually the opportunity to use...my
talk is going to be about high sensitivity applications
with an ion trap detector. I have myself used the
ion trap detector under a couple of different regimes,
and I will discuss that in the paper.
But by way of introduction, I just wanted to
make two points, two basic points. First, if you
consider the types of compounds you find most frequently
in the environment (by frequently, I mean just the
number of times they occur), you find out that it's
generally VOAs. You tend to find those kinds of compounds
quite a bit, especially in groundwater, surface water,
those kinds of things. Generally speaking, the
compounds that you tend to find the most of are
related to production; that is, compounds that have
high production volumes you tend to find more of,
which makes sense. Also, compounds that you tend to
find the most of are usually pretty water soluble.
-------
191
Although you don't think of them as being terribly
water soluble, their solubility is appreciable compared
to other things like some of the base neutrals, et cetera.
So the first thing is, we wanted to focus on VOAs.
The second point I'd like to bring up before I actually
get into the body of the paper, if you examine the
regulations and the way EPA has handled certain types
of compounds in regulating them, different approaches
have been used in the past to set regulatory limits.
Best available technology has been used for many
years. But in certain areas of the agency, they are
moving into risk-based levels, health-based levels,
those kinds of limits.
So there's a combination of techniques that have
been used to arrive at regulatory limits. The
interesting thing is that, if you look at it from a
purely health-based or risk-based number, the limits
tend to be usually quite low, very low. If that
trend were to continue or receive more serious
consideration than in the past, then you need techniques
which have lower detection limits, basically, than
are currently routinely available.
So combining the two facts, the fact that VOAs
-------
192
are found quite a bit in the environment and the fact
that it may be necessary at some time to go much
lower in detection limits, there's a perfect area
for the ion trap detector. That's basically what my
talk today is going to be about.
So basically the title of the paper is analysis
of volatile organics at part-per-trillion levels, using
a GC with an ion trap detector. The method that we
used for this work is Method 524.2. Originally the
method was proposed as 524.1 under safe drinking
water. It was proposed back about a year and a half
ago as a packed column VOA procedure.
The proposal wanted to get very low detection
limits. In order to achieve very low detection
limits, they did purge and trap, of course, but
instead of using the typical five mil sample volume,
it was a 25 mil volume in order to get a little better
sensitivity. Also, they went to a heated purge
because they wanted to analyze some compounds which
were a little higher in molecular weight. So in
order to get these compounds up out of the aqueous
phase some heat would be applied.
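As a minimal sketch of the scaling involved (all else assumed equal):

    # A 25 mL purge delivers up to five times the analyte mass to the
    # trap compared with the typical 5 mL, which is the point of the change.
    typical_mL, proposed_mL = 5, 25
    print(f"potential sensitivity gain: {proposed_mL / typical_mL:.0f}x")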
-------
193
The basic method is pretty much like the 624.
It's a purge from water; you trap on an adsorbent
and then desorb. In 524.2, the method that came out
after comments, which is a revision of 524.1, allowed
capillary columns instead of the packed column.
Now, there are three capillary columns that are
allowed under 524.2. There's the 30 meter DB-5 at
.32 millimeter ID, and two wider columns are allowed.
The Supelco VOCOL column, which is a megabore, is allowed.
It runs about .7 millimeter ID or something like that.
The other column is a J&W DB-624 column, which
is about .52, I think, or .53 millimeter ID.
The column that we used for this work is the
DB-624 column. Basically we desorbed onto the
capillary and went through our GC separation for mass
spec detection. We used the ion trap detector.
Compound identification is obviously based on
retention time and the mass spectra that one gets, and
quantitation with internal standards. Now, the data
that I will be showing you is full scan data. So
when we talk about doing parts per trillion levels (on
the order of 100 or 200 parts per trillion), this is
all done full scan. None of this is MID. Then we
-------
194
did standard quantitation against an internal standard.
Benefits of the ITD: the ITD I think is probably
the most sensitive, or at least one of the most
sensitive, if not the most sensitive, mass spectrometer
on the market. You can achieve detection levels that
I think it would be very difficult to achieve using
any other type of a mass spec. That has to do with
the way the ion trap works. It's a trap, it doesn't
operate like a quadrupole or a magnetic sector where you
have a continuous stream of material coming in and a
continuous stream of material as ions going out.
You can actually trap ions, and by trapping, if you
trap for longer periods of time you can get much
better sensitivities because you're more or less
integrating ion signals and then generating the mass
spectra from that.
As far as data analysis, the data system that's
used in conjunction with the ITD is an IBM...the ones
that we currently offer...IBM, XT or AT. It operates
as fast as, if not faster than, current INCOS data
systems, especially things like library searches,
chromatographic interpretation, that sort of thing.
Low cost of ownership. The ITD is a low cost
-------
195
mass spectrometer. The basic ITD costs about $35,000.
With a GC inlet, it's around $50,000 to $60,000. So
with a total system that includes a computer, the
cost would be somewhere around $50,000
or $60,000 to do a VOA analysis.
This is a characteristic ion signal from toluene,
again using mass 91. This is full scan data where we
just reconstructed the ion chromatogram for 91, just
to show what kind of detection levels we were able to
achieve. This is for 200 parts per trillion, and
I'll show some data where we actually went a little
lower than this. But from that chromatogram you can
see we've got a signal-to-noise there of about maybe 10
to 1, maybe it's a little less than that. But that's
approximately the lower end of the range that we went
down to. You may be able to get a little lower than
that, but for this compound that's where we sort of
ended up.
Just to show a few calibration plots for some of
the compounds that we determined in 524.2, ethyl
benzene at the top with a correlation coefficient of
.9, down to...that's 200 parts per trillion, is how
far we went down on that one. You can see, it's
-------
196
pretty linear all the way down, it's got a good
correlation coefficient.
The reason why we went to the log log plot was
to be able to just show the magnitude of the curve,
and to show that it is linear over several orders of
magnitude. Again, we didn't go down quite as low on
the benzene. But again, it's a very good correlation.
For benzene, again, down to 200 parts per
trillion, several orders of magnitude from top to
bottom. Excellent correlation there. Again, the
same sort of thing for trichloroethane, 10 calibration
plots, log log.
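As a minimal sketch of what those log-log plots summarize (the concentrations match the range discussed, but the areas are invented): fit log area against log concentration and report the correlation coefficient.

    import math

    def pearson_r(xs, ys):
        # Ordinary Pearson correlation coefficient.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        syy = sum((y - my) ** 2 for y in ys)
        return sxy / math.sqrt(sxx * syy)

    conc_ppt = [200, 2000, 20000, 160000]   # 200 ppt up to 160 ppb
    areas = [1.1e3, 1.0e4, 1.05e5, 8.1e5]   # hypothetical ion-count areas
    r = pearson_r([math.log10(c) for c in conc_ppt],
                  [math.log10(a) for a in areas])
    print(f"log-log correlation coefficient: {r:.4f}")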
How does it work? Why can you achieve such good
detection limits and linear calibration over such a
wide compound concentration range in the ITD? It's basically
because of the way the ITD operates, the ion trap.
What you have in the ion trap, the basic trap itself
is a...looks like a doughnut, almost, it's a metal
doughnut that's about maybe four inches in diameter.
What happens is, there's two caps, there's a top
and bottom cap on it, and on the top you have a heated
filament just the way you do in any other mass
spectrometer. There's a gate electrode, which is
-------
197
sort of similar to what used to be used in time
of flight instruments, where you would gate the
electrons in at certain times, and you would have
ionization, and then you would go through detection.
Well, the ion trap works a little bit like that
as far as the ionization step. What happens is, the
gate electrode is held at a high negative potential
to keep electrons out until you're ready to do the
ionization step. Then the potential on the gate
electrode is switched, it allows electrons into the
trap, ionization occurs, and then the ions are trapped
inside the ion trap detector. Then they are swept
out by changing the RF voltage on the ring electrode,
the doughnut itself, and out into an electron
multiplier, and the signal goes out to the data system.
The GC effluent is coupled directly right into the
trap.
There's improved sensitivity over the original
version of the ITD. We introduced, I guess, about
nine months ago, about a year to nine months ago,
somewhere in that time frame, something called AGC,
which is automatic gain control. This gives you
longer ionization times. What that means is, you get
-------
198
better detection limits at low sample concentration
and also high sample concentration.
I want to take a couple of minutes and basically
explain how this works, and show why you can get
lower detection limits and still get full scan data
at very, very low levels because of the gain control
and the way it works.
Now, the way the ion trap works is, as I said,
it gates some electrons in, you make some ions and
then you sweep those ions out and detect them. Under
the AGC, we added an extra little section to this
function, and what we do is, we let the electrons come
in for a very short period of time, and then scan the
mass spectrometer out and detect the signal, and take
that integrated signal and calculate how long would
you need to leave the filament on in order to make
ten to the fifth ions in the ion trap.
As it turns out, in an ion trap of say four
inches in diameter, it can accommodate only so
many ions. The right number of ions for the ion trap
is about ten to the fifth. So what happens then is,
if you calculate how much time you need to leave the
filament on to get about ten to the fifth ions, two
-------
199
things happen. Your low-end detection limit goes down.
In other words, you get much better sensitivity at
the low end and you get much more of a linear dynamic
range on the top end. I explain it basically this way.
If you have a situation where a small amount of
compound is coming in, on the order of a few picograms,
you leave the filament on longer and create more
ions. You get ten to the fifth ions, even though
there's a small amount of compound present, and you
trap them, and you get a good spectrum.
If you have a lot of compound coming in, you
leave the filament on less time and still then make
ten to the fifth ions. If you weren't operating with
the automatic gain control, it would be linear over,
say, two orders of magnitude or three orders of
magnitude. With automatic gain control, you extend
that curve downward by, say, another order of magnitude
or so, and it also extends the curve upwards. So if
you get more compound in, you're still linear because
you're not ionizing the same amount of time, you
ionize less time. If you have a very small amount of
compound you ionize longer and still generate a good
spectrum.
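Here is a minimal sketch of that gain-control logic; the target of ten to the fifth ions is from the talk, while the prescan time and the clamp limits are assumed for illustration.

    TARGET_IONS = 1.0e5          # about the right fill for the trap
    MIN_MS, MAX_MS = 0.1, 25.0   # assumed bounds on ionization time

    def agc_ionization_time_ms(prescan_counts, prescan_ms=0.2):
        # A short prescan measures the ion production rate; the analytical
        # scan's ionization time is then scaled to land near the target fill.
        rate_per_ms = prescan_counts / prescan_ms
        return max(MIN_MS, min(MAX_MS, TARGET_IONS / rate_per_ms))

    print(agc_ionization_time_ms(2.0e3))   # weak signal: ionize longer
    print(agc_ionization_time_ms(5.0e5))   # strong signal: ionize briefly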
-------
200
So that basically is the way AGC works. Because
of the automatic gain control we've gotten this
really enhanced sensitivity out of the ITD.
This is a standard run, a VOA run, the purgeables A
and B. This is just to demonstrate that we were
running the trap in such a way that we were doing all
the A and B purgeables. We didn't do the gases in
this one because we weren't set up to...we felt that
what we really needed to have was cryofocusing here
to be able to get the gases. So they weren't included
on this particular run. But you can do gases with
This is using purge and trap at 20 parts per
billion of the chloromethane. What we're showing
here is, this is the spectrum that came off the trap.
The top view is the spectrum that came off the trap,
and the bottom two plots are the library search
result of that spectrum to show that you do get
pretty good agreement between the library and the
sample as it's generated from the ion trap.
A question about whether or not the ITD would meet
BFB requirements. This is a scan of BFB, and it
basically shows that the ion abundance criteria are
-------
201
met according to EPA requirements.
Now, on this slide...just while we're waiting I
also want to point out that the interface to the
ion trap detector from the GC column, the standard
interface that we provide, is an open split interface.
So when you view this data, keep in mind that with
the megabore column, the .53 column, that megabore
column, the flow rate through the column is about 15
mils a minute, thereabouts.
The ion trap can take about 1.5 mil per minute
into the trap. So with the open split interface,
what you end up doing is, you send about 90 percent
of the effluent out the open split interface and
about ten percent of the column effluent into the ion
trap, at least under the set of experiments that we
did here.
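The split arithmetic, as a minimal sketch using the flows just quoted:

    # Megabore column flow versus what the trap can accept.
    column_flow, trap_flow = 15.0, 1.5   # mL/min
    to_trap = trap_flow / column_flow
    print(f"to trap: {to_trap:.0%}, vented at the open split: {1 - to_trap:.0%}")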
The point is that if we were using the DB-5
column we could have taken all the effluent into the
mass spectrometer. There wouldn't have been anything
split off. With that you could probably have seen another
order of magnitude lower. We wanted to try 524.2
actually and use the megabore just to see how it
would work out. We didn't go back to repeat the
-------
202
experiment, but it seems that in light of lowering
detection limits it would be an interesting thing to
go back and try to use just the straight DB-5 column
and see if in fact we couldn't get an extra order of
magnitude lower detection limit.
OFF THE RECORD
What's on this slide is basically the uniformity of the
spectrum; in other words, from 200 parts per trillion
to 160 parts per billion you're getting the same
spectrum for toluene.
Again, looking at toluene from 160 parts per
billion down to 200 parts per trillion. Full scan
data over that concentration range. You can see, it
really is quite linear.
Here is a basic kind of report that you get out
of the ion trap. The auto quantitation software on
the ion trap is a little bit different from what's
handled on INCOS. The search algorithms used on the
ion trap are identical to the ones used on the INCOS
system, if you're familiar with the INCOS system. It
uses all the same mathematical manipulations, but
basically the way the auto quantitation is set up on
the ion trap, it varies a little bit from the standard
-------
203
INCOS format. But basically the same sorts of things
are handled. You go out and your first step is
basically to identify the peak in a given window,
make sure that the peak is the correct peak, and then
go on with your quantitation. This is the result of
the quantitation report.
This is one of the windows, and it flashes on
the screen. As the auto guantitation is going, you
are getting a continual update on the screen as to
what the system is doing and what peaks it is' selecting
and identifying and. integrating. Again, this is just
another slide on the dynamic range for toluene.
So in summary, the ion trap detector is an
excellent mass spectrometer for applications like VOA
analysis at very, very low levels, where you go to sub-
part-per-billion levels. You get full scan data, and
it has really an excellent dynamic range over that
range. It can be used for methods like 524.2.
-------
204
Question and Answer Session
MR. TELLIARD: Questions?
This was aimed primarily at drinking water. What
type of result could we expect if we got one of our
favorite samples that kind of rattle in the jar,
where you can expect to find a lot of material?
DR. POPPITI: I would say
you could probably expect a similar type of situation
that you would with, say, a base neutral run where
you've got other things present. Those samples tend
to be...although I don't have a lot of experience
with a lot of different VOA samples, but the VOA
samples that I've seen tend to be a little cleaner,
generally speaking, than say base neutrals.
But it really depends. The fact that you've got
the capillary in there instead of the packed column
is going to give you some advantage there. But if
you've got a lot of hydrocarbons and other things in
there, you're probably going to be up against the
same sort of problems you were up against with, say,
a dirty base neutral. It would probably be very
similar, I would expect.
-------
205
DR. SAUTER: My name is
Drew Sauter. If I were to have 100 parts per billion
...and I were to have under that like a one in ten
parts per billion, have you done any work to demonstrate
if there is ionization or ion detection suppression
in the ion trap?
DR. POPPITI: You mean
because of the automatic gain control?
DR. SAUTER: No, I mean ion
molecule reactions.
DR. POPPITI: There are ion
molecule reactions which take place in the ion trap.
There's no question about that. It's compound
dependent. It tends to be compounds
which tend to be acidic or basic. In other
words, for example, compounds that have amine functions,
you may tend to see some protonation there.
So things like that, things that have functionalities
that would...for example...
DR. SAUTER: How about acetone?
DR. POPPITI: Acetone, I
don't believe...again I don't recall acetone
specifically, but as far as I remember, acetone is not
-------
206
a problem. Compounds that would tend to be good for CI
kinds of things would be the kinds of things that you
would expect to have those types of auto-protonation
reactions taking place.
DR. SAUTER: If you analyze
the kind of things that, as Bill says, rattle in a jar.
DR. POPPITI: It's compound
specific. It doesn't have any...
DR. SAUTER: Could it be
sample specific?
DR. POPPITI: I don't think
so. It has to do with...
DR. SAUTER: I'm just
curious. I think if the damn thing works, I think
you ought to...
DR. POPPITI: From all the
experience that I've had with it and Finnegan has had
with it, it's compound specific. It doesn't seem to
matter what else...in other words, if water was
entering the trap at the same time another compound
was entering the trap, and that compound was not prone
to protonation in the first place, you would not see
protonation. You wouldn't see it even if there
-------
207
were water present. You see what I'm getting at?
DR. SAUTER: No.
DR. POPPITI: For me to run
chemical ionization, if that's all I want to run,
chemical ionization, forget the ITD for a minute, I
want to run chemical ionization, I put methane...
DR. SAUTER: You're talking
about ion molecule reaction?
DR. POPPITI: That's ion
molecule reaction. I put methane in the source in
the mass spectrometer, and I introduce my compound.
I go through and I get some ion molecule chemistry.
If I take a compound and don't put the methane in the
source mass spectrometer, if I get that concentration
up high enough, I may at some concentration begin to
see some ion molecule reactions take place. With me
so far?
That would vary depending on the compound and
the concentration of the compound in the ion source,
certain compounds are more prone to give you ion
molecule reactions at certain levels.
DR. SAUTER: Because of the
concentration?
-------
208
DR. POPPITI: Precisely.
That's what I'm saying is true in the ion trap. In the
first case, you don't see it; at least at the levels
at which we have adulterants entering the trap, it's not
high enough to give you, from an adulterant, the methane
kind of pressure you need to protonate things. So
you're relying on the sample concentration to give
you enough of a concentration to give you some
protonation reaction.
If you put a compound in that isn't prone to do
that at that concentration in the first place, you
can't add enough of another analyte, let's say, to
get that reaction to take place.
DR. SAUTER: You can't get an
ion molecule reaction...
DR. POPPITI: You can do
chemical ionization in the ion trap by adding...if I
added, let's say, instead of using helium as my
carrier I used methane, and then change a few other
parameters around, you can do chemical ionization in
the ion trap at a relatively low pressure. But again,
keep in mind that the concentration of methane is many,
many orders of magnitude higher than the concentration
-------
209
of the analyte in the trap.
DR. SAUTER: ...it strikes
me that if you have real complicated samples, that
probability that we're seeing does happen and could
happen. I don't know if that's true or not, but it
strikes me...
DR. POPPITI: I think you're
right. Even though you have very high concentrations,
the concentration one would need would be even higher
than...the sample would probably have to be in the
percent level, or maybe even the ten percent level.
DR. SAUTER: You don't know
that for sure.
DR. POPPITI: I don't know
that for sure, no, but in order to get...what I'm
saying is, if we want CI to take place, we have to
put something like methane or something else in the
trap, at many, many, many orders of magnitude higher
pressure than the sample itself for that to happen.
MR. SNEERINGER: My name is
Paul Sneeringer, I'm with the Army Aberdeen Proving
Ground in Maryland. I wanted to ask about the trap,
the standards of approval of it by EPA for use in
-------
210
drinking water and the contract laboratory...
DR. POPPITI: The trap
meets the VOA protocols. Because of the way the ions
are stored in the trap, the spectra for DFTPP do not
meet current CLP requirements and method 625, et
cetera. So under current tune specifications, it's a
self implementing approval. In other words, if it
meets the criteria then it's approved. You don't
have to go in for a special approval for an ion trap
or a quadrupole or whatever. If it meets the criteria
then it's approved.
So for BFB or VOA analyses, you can use it. For
DFTPP, under current tune criteria it doesn't meet
the DFTPP tune criteria currently.
MRS. KHALIL: My name is
Mary Khalil with the Metropolitan Sanitary District of
Chicago. I'd like to ask about the... conditions
used for this separation.
DR. POPPITI: I don't
remember exactly. They were the conditions that were
specified in Method 524.2. As I recall, you cool the
oven to sub-ambient. You cool it to...I think it's
about ten degrees. It's either ten or minus ten, I
-------
211
don't remember. But then you go through your desorb
mode and trap the compounds on the head of the column;
again, the column sitting at like ten degrees. Then
you go up in temperature to...I'm guessing now, I
think you end up at around 18 or thereabouts, and
it's programmed at about eight degrees a minute, I believe.
MRS. KHALIL: Thank you
very much.
MR. TELLIARD: Anybody else?
Dale referred to the fact that at our first meeting
we had the instrument suppliers leave. They loved
it. We want to thank Jim and Finnigan for making Jim
available.
Our next set of papers will be talking about the
application and use of robotics, everything from data
reduction to lab cleaning, I guess. Someone was
explaining to me at lunchtime that robotics are a
real blessing, because what it does is take all your
low level people and gets rid of them, so that when
you go in for a fixed fee contract your overhead is
so high you can't compete. Definitely a government
contract.
So our first speaker is Dr. Ode, who's going to
-------
212
be talking about robotics. He's from Mobay Corporation.
-------
213-216
[Slides for the following presentation by Dr. Poppiti: a purge-and-trap
GC/ITD chromatogram of a purgeable standard with labeled peaks, including
the internal standards 1,4-dichlorobutane and 4-bromofluorobenzene along
with benzene, chlorobenzene, tetrachloroethene, 1,1,2-trichloroethane,
1,1-dichloroethane, and cis/trans-1,3-dichloropropene, followed by
excerpts of the automated quantitation report (quan mass, retention time,
peak height, internal standard entries, calibration file, 20-second
search window, PPB units, automatic detection mode). The figure artwork
itself did not survive scanning.]
-------
217
Characteristic Masses for Purgeable Organics

                                    EPA*      EPA*       Ion Used for
                                    Primary   Secondary  Quantitation
Compound                            Ion       Ion        with ITD
1,1-Dichloroethene                  96        61         61
Trichlorofluoromethane              101       103        101
Dichloromethane                     84        49         49
trans-1,2-Dichloroethene            96        61         61
1,1-Dichloroethane                  63        65         63
Chloroform                          83        85         83
1,1,1-Trichloroethane               97        99, 117    97
Carbon Tetrachloride                117       119        117
Benzene                             78        --         78
1,2-Dichloroethane                  98        62         62
Trichloroethene                     130       95         130
1,2-Dichloropropane                 112       63         63
Bromodichloromethane                127       83         83
2-Chloroethylvinyl ether            106       63         63
cis/trans-1,3-Dichloropropene       75        77         75
Toluene                             92        91         92
cis/trans-1,3-Dichloropropene       75        77         75
1,1,2-Trichloroethane               97        83         97
Tetrachloroethene                   164       129        164
Dibromochloromethane                127       129        127
Chlorobenzene                       112       114        112
Ethyl benzene                       106       91         91
Bromoform                           173       171        173
1,4-Dichlorobutane                  55        90         55
4-Bromofluorobenzene                95        174, 176   95
1,1,2,2-Tetrachloroethane           168       83         83

*EPA Federal Register, Vol. 49, No. 209, October 26, 1984.
Finnigan 7380-10
-------
218-233
[Slides continued: a BFB ion-abundance criteria table (intensity required
versus intensity found); "Mass Spectrum and Library Search Result of
Bromochloromethane (20 ppb by P&T/GCMS)", with the library search
returning chlorobromomethane, formula CH2ClBr, molecular weight 128,
rank 1, purity 884, fit 977, rfit 898; "Uniform Mass Spectra" comparing
purge-and-trap spectra of toluene in water (C7H8, MW 92) at 200 ppt and
160 ppb; and log-log calibration plots of area ratio (sample/ISTD)
versus concentration. The remaining figure artwork did not survive
scanning.]
234
DR. ODE: This will be one
of the few papers where there are no tables, no charts,
no chromatograms, just a videotape. I thought I'd
just send that, but I figured I'd get a little bit of
negative feedback later from EPA.
I thought I'd give you a little bit of history
of how we got into robotics before we look at the
tape. About four years ago, my boss went to the
Pittsburgh conference, and about three weeks afterwards
he gave me a call and said, come down to the research
conference room. We have Zymark Corporation here to
talk about laboratory robotics. I thought, I'll go
and see it, but I'm not too enthused. I've got
everybody and their uncle after data, including
Hugh Wise who is calling me, wanting to know where
the numbers are. I said, I don't need R2D2 to babysit
here. He said, come and see anyhow.
So I went down and looked, and was convinced
that maybe robotics might be helpful for us. About
two months later our executive vice president came
down from his ivory tower and sat down and talked to
us little people for a little while. He says, have
-------
235
you got any questions? I said, yes, I need $25,000 to
$30,000 to look at robotics. He never came back and
talked to us again. He did ask for a proposal on why
I really wanted to do it and how I was going to do
it. So we wrote it up, and about two months later we
had our robot. Everybody was anxious to see it going.
We unpacked it and got it all set up. Called the
Zymark representative, said, what do we do now? It's
ready to go, I think. He said, first of all you want
to take the six screws that are at the base of the
robotic arm, take those out. He says, lift up her
skirt, remove the retaining bolts and play with her.
That sounded a little kinky, but we went out
and did it. We had her operating within a couple of
weeks. We assumed it was a she since she was wearing
a skirt. After about three weeks or so of finding
out what kind of maneuvers she was able to do, we
came up with the name Norma Rae for her, since she
had definite union tendencies. I guess about six
months later we had our application going, thanks to
Bill Hornbrook, one of the technicians I assigned to
this project.
The tape as we have it now shows our application,
-------
236
and if we can get it going here, assuming the airport
didn't screw it up with their metal detectors.
(WHEREUPON, the film commenced.)
VOICEOVER: The Environmental
Research Group is located in New Martinsville, West
Virginia. The group is responsible for all of the
non-routine environmental testing for the corporation
and as such performs analyses for all the plant
locations.
In addition, a portion of the program is dedicated
to the evaluation of new treatment technology such as
carbon containing polyurethane foam for biological
reactor systems. It is in the area of biological
treatment that we saw a need for a simple routine
method for sample preparation which would provide us
with a means of evaluating reactor performance based
on the removal of priority pollutants.
We chose to use a micro-extraction technique.
In 1983, with the introduction of robotics by the
Zymark Corporation, we decided to adapt
the procedure to robotics.
The basic system consists of a robotic arm and
several hands, a controller, a master lab station for
-------
237
solvent dispensing, a power and event controller, an
air-powered crimper modified to accept 8 millimeter caps,
several different racks and various other devices.
Prior to actual running of the procedure, certain
steps must be taken to prepare this system. First, a
50 millimeter aliguot of sample must be made acidic
or basic, depending upon the analysis desired. The
syringes in the master lab station and a micro pump
are run to purge any air bubbles from the system.
Five milliliter disposable centrifuge tubes must be
put in their rack, and disposable pipette tips packed
with a small amount of anhydrous sodium sulfate are
put in their rack, and the rack is placed above the
centrifuge tubes. An appropriate number of 0.5
milliliter auto sampler vials, two for each sample,
are placed in the rack with the caps loosely attached.
Finally, the robot must be told how many samples are
being processed.
The procedure begins with the robot arm attached
to the hand with the large gripping fingers. The hand
picks up the sample and positions it under the
dispensing station. While this is taking place, a
five milliliter syringe on the master lab station is
-------
238
filling with the extraction solvent, methylene chloride,
which contains the internal standard napthalene. To
insure that an accurate volume is dispensed, the
syringe is overfilled, the volume adjusted to three
milliliters and then discharged into the sample.
The tube is lowered into the vortexing station.
Since the extraction sequence is quite vigorous, a
Teflon plug and holder are moved into position.
As you can see from the color change in the
aqueous phase, the extraction is rapid and virtually
no color remains in the water. Also, even though the
plug is not washed between samples, no carryover has
been experienced.
After one minute, the procedure is reversed and
the next sample is processed. Since we are doing
only one sample, the hand is parked. A blank hand
equipped with a 9.5 inch cannula is attached. The
cannula is lowered into the tube and the extract is
withdrawn using a syringe on the master lab station.
In the case of base neutral extracts which usually
give emulsions, after the extract is withdrawn the
cannula is positioned just under the surface of the
water, and the extract is slowly discharged.
-------
239
We have found that by repeating the process
three times, the emulsion is broken. For the purposes
of this demonstration, however, the emulsion breaking
step is not employed and the extract is simply held
in the tubing.
The cannula is raised above the tube and a small
amount of air is drawn into the tubing. This prevents
the extract from dripping out as it is moved to the
drying tube rack. Once in position, the syringe is
slowly closed, discharging the extract into the drying
tube. The dried extract is collected in the centrifuge
tube.
After the extract has been transferred, the
cannula is placed in the wash station and methylene
chloride is flushed through the tubing and up the
outside of the cannula. This procedure is repeated
for each sample.
To gain access to the dried samples, the drying
rack is moved out of the way. A last step is to fill
the auto sampler vials. These particular vials are
for a Perkin-Elmer AS100B auto sampler, and are one
of the smallest, if not the smallest vial, being
handled by a Zymark robot. The first step is to remove
-------
240
the cap from the vial. To accomplish this procedure,
the vial is picked up and moved to the decapping station.
The grip is repositioned and the vial is moved to a
position below the cap holding device. A small vacuum
pump is turned on and the vial is moved into place.
The vial is then moved slowly straight down, and the
cap is held in place by the vacuum. To determine if
the cap has been successfully removed, the top of the
vial is placed in a fiber optic beam. If the beam is
broken, indicating that the cap is still on, the
process is repeated. Otherwise, the vial is moved to
the filling station.
A syringe hand is attached to the arm and moved
to the wash station. A small micropump is turned on
and a stream of methylene chloride is pumped to the wash
station. The syringe is filled and emptied five
times to ensure that the syringe is clean, as well as
filling the airspace with methylene chloride vapor
which prevents dripping during sample transfer.
The hand is moved to the centrifuge rack, lowered
into the dried extract, and 700 microliters of extract
is removed. The syringe hand is raised above the
tube and the plunger is opened further, drawing in a
-------
241
small amount of air. This step also helps prevent
dripping during sample transfer, as well as aiding in
complete discharge of the extract.
After washing, the syringe hand is moved to the
filling station and the extract is slowly discharged
into the vial. The hand is parked and the vial
removed from the filling station. The cap is retrieved
and crimped.
After readjusting the position of the fingers on
the vial, the sealed vial is returned to the rack.
The process is repeated for the second vial, except
that the syringe is not washed since the extract is
the same.
After completion of the final vial, the syringe
is washed, parked, and the arm is returned to the
original starting position and the procedure stops.
As shown, the procedure is designed to perform
extractions using solvents which are heavier than
water. However, with only slight modifications in
the sample removal step and the use of the Zymark cut
and paste software procedure, solvents which are
lighter than water can also be used.
Currently, the extraction, drying and vial filling
-------
242
operations are performed in batch operations and
require a number of hand changes. With the introduction
of the Prepsep Zymate sample preparation system, the
procedure could be performed in a serial mode. Also,
a dual function hand having both a syringe and a
gripper would reduce the number of hand changes
required. Both items would reduce the overall process
time.
The ultimate goal of this kind of analysis would
be, of course, to combine the extraction procedure
with GCMS. Recently, Randolph and Poole of Hewlett-
Packard described the coupling of the HP-5890 gas
chromatograph equipped with the 7673A automatic
injector to a Zymate system. By modifying our
procedure to accept the larger vials required by the
injector and using the 5890A coupled to a mass
selective detector, the characterization of wastewater
could be completely automated, providing mass spectral
verification and quantitation of the components of
interest.
In conclusion, robotics has been successfully
applied to the preparation of wastewater samples and
is being used routinely in our laboratories. The
-------
243
preparation time required for the samples has been
reduced by 50 percent, and the quality of the data is
equivalent to or better than comparable manual
procedures.
We wish to acknowledge the assistance of Bob
DeBolt, Bob Hunt and Roger Frame from Mobay, and
Bruce Jamison from Zymark Corporation.
(WHEREUPON, the film was concluded.)
DR. ODE: That's the first
time I heard the music. Questions?
-------
244
Question and Answer Session
DR. MARKELOV: Michael Markelov
from Sohio Research. Did you do a cost evaluation,
time savings and availability of the studies...
DR. ODE: Details will be
published in the Journal of Chromatographic Science I
think in May. We do have some charts of data; I
didn't want to put those in.
Time studies. We have dropped it down to about
12 minutes. A technician takes about 35 minutes. If
you count pitstops and telephone calls and whatever,
it's more like 45 minutes. So we see a good time
reduction.
Precision of data. We've looked at both phenols
and base neutrals covering the typical materials that
we see in our plant effluents. Relative standard
deviations on a number of runs have been consistently
around one percent. For our reactors we're talking
in the low parts per million level. We can go down
to about 100 parts per billion, if we tie in mass
spec maybe lower, I don't know. We have not really
taken the time to try and reduce the detection limits,
for instance, or something like that. Our concern is
-------
245
relative differences between influent and effluent to
see if our reactors are doing what they're supposed
to do.
If the EPA would give a little bit and not look
for every molecule that's out there, we might be able
to use it as a screening tool for other things. But
we've not been that fortunate yet.
MR. TELLIARD: Anyone else?
MR. CHANG: James Chang
from Galson Technical Services again. I have a
question about...how can you solve that problem? If
you assemble overnight and come back tomorrow, by any
chance you can miss one or another batch; then you don't
know which one is which.
DR. ODE: You can go into
bar code reading and the like. We've not had to do
that. Reliability on this operation, and we do run
overnight, is better than 95 percent. The only thing
that we've encountered is, we've missed a cap which
is lying somewhere on the floor when we come in in
the morning. But as far as the samples running in
proper sequence, we've never experienced that in the
three and a half years or so of running it. But through
-------
246
bar code reading I'm sure you could keep track of
which sample is which. That's not a major problem.
MR. CHANG: A major concern
when you go to the...what's the reliability that you
tell the people, if there's a legal case.
DR. ODE: Oh, we wouldn't
use it for legal. We're not using it for legal. EPA
won't recognize the procedure. We're using it just
for in-house monitoring.
MR. MARKELOV: Why doesn't
EPA recognize the procedure?
DR. ODE: Why doesn't EPA
recognize the procedure? We can't meet the detection
limits, basically, I think is the problem at this point.
MRS. KHALIL: How about big
volumes? Can this system handle big volumes?
DR. ODE: We can run up to
about 100 milliliters. Beyond that, unless you can
find a better vortex or some other way of doing the
extraction sequence...
MRS. KHALIL: For wastewater
samples you've got to extract one liter, so I don't
know...
-------
247
DR. ODE: For what we're
doing 50 milliliters is more than enough. There's
plenty there to see.
MRS. KHALIL: After that
basic...just the pH and that sort of thing?
DR. ODE: Just by pH
adjustment. We either get an acid neutral or base
neutral extract, and using capillary it really doesn't
make too much difference.
MRS. KHALIL: You just get
one at a time? Does it do more steps or just...
DR. ODE: You either adjust
the pH to very acidic or very basic at the beginning
and run it through. If you wanted to get both base
neutral and acid neutrals, you would have to put two
samples side by side. That's no problem.
MRS. KHALIL: There's no
set...for cleanup or something like this...that should
go to GCMS.
DR. ODE: Right.
MR. TELLIARD: Anyone else?
Our next speaker is from Standard Oil of Ohio and
will continue with the robotic theme. Mike?
-------
248
ANALYSES of WATER and SOILS for TRACE ORGANIC CONTAMINATION
via HEADSPACE and PURGE and TRAP TECHNIQUES USING ROBOTS
by
Michael Markelov*, Bruce R. Seitz
BP America R&D
4440 Warrensville Center Road, Cleveland, Ohio 44128
SUMMARY
A robotic system performing trace organic analysis in soil and water
matrices using "purge and trap" GC/MS and headspace gas chromatography is
described. The system is shown to greatly extend the capabilities of the
existing commercial equipment in respect of the number and type of samples that
can be analyzed automatically. The effects of moisture content on the results
of the headspace analysis for soils were investigated using this system. The
comparative study of conventional and robotic "purge and trap" analyses was
also conducted. The study showed that the robotized analysis had advantages in
terms of quantity, timeliness, quality and cost effectiveness. It was
estimated that the system paid for itself in less than four months.
INTRODUCTION
There are two major techniques for analyses of volatile organic pollutants
present at trace levels in soil and water:
1. Static headspace technique.
2. Dynamic headspace or "purge and trap" technique.
The schematics for dynamic and static headspace techniques are depicted in
Figure 1. The static headspace analysis makes use of the equilibrium
established between the condensed (liquid or solid) phase and the gaseous
(vapor) phase in a sealed headspace vial. One can visualize the static
headspace analysis as an extraction process (and it is) where a gas (air in the
vial) acts as an extraction solvent. An aliquot of this extract is then
introduced into a gas chromatograph for an analysis.
-------
249
The purge and trap technique involves purging the water sample with an
inert gas, usually He or N2, and trapping the volatile pollutants from the gas
stream onto a column which contains an appropriate sorbent (usually porous
polymers, such as Tenax or Porapak). The content of this column is then
thermally desorbed into a gas chromatograph for an analysis. This process
again can be considered as an "extraction" but with practically infinite volume
of extracting solvent (gas).
The analytical portions of both headspace and purge and trap techniques
are already substantially automated and the corresponding instrumentation is
available on the market. There are automatic headspace samplers available from
Perkin-Elmer and Hewlett-Packard (Dani). There is also purge and trap
equipment available from Tekmar Company and CDS.
In contrast to the high levels of automation available for the analytical
steps in both dynamic and static headspace procedures, the corresponding
automated equipment for the sample preparation steps does not yet commercially
exist.
This paper will describe an environmental robotic system that performs the
sample preparation and analyses using both static headspace and "purge and
trap" methodologies applied to water and soil samples.
It will also show that the system is capable of automatic method
development and method optimization. In addition, it will be demonstrated that
the system substantially reduces the cost of QC and QA required by regulatory
agencies and subsequently reduces the cost of environmental analyses in
general.
-------
250
EXPERIMENTAL
Purge and Trap
The schematic of an automated purge station is presented in Figure 2.
The robot brings the purge tube, containing the sample from the refrigerated
rack (Figure 3), and puts the tube into the purge tube holder. This holder can
be heated (heating jacket) depending on a particular EPA method used (601, 602,
603, 624, etc.). According to EPA requirements, the sample must be
thermostated for about five minutes prior to purging. The robot controller
then activates the "purge and trap" apparatus and the pneumatic cylinder, which
forces the "moving bracket" upward along the guide until the septa on both ends
of the purge tube are pierced by the needles. From this moment, the purging
process begins and the volatile chemicals present in the samples are being
delivered to the trap. When the purging, desorption, draining and trap baking
steps are completed, as prescribed by a given regulatory method, the robot
controller again activates the pneumatic cylinder forcing the "moving bracket"
downward. The spring on the top needle ensures that the purge tube will stay
in the purge tube holder after the downward motion, otherwise the purge tube
may hang onto the top needle. The robot then removes the purge tube from the
purge tube holder and puts it back into the rack. When the chromatographic
cycle is completed and the trap column has cooled down to room temperature, the
robot repeats the operation with a new sample. The synchronization of the
robot's movements, with the ready status of the gas chromatograph and the
"purge and trap" apparatus, can be accomplished in two ways: through timing of
the analytical cycles or by monitoring the temperatures of the GC oven and the
trap column using temperature sensors (thermocouple, thermistors, etc.)
connected to the A/D converter of the robot's power/event controllers.
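
As a rough sketch of the temperature-based synchronization, assuming hypothetical
reader functions for the thermocouple channels (the actual controller interface is
not described here):

import time

# Ready thresholds are assumed example values, not from the paper.
OVEN_READY_C, TRAP_READY_C = 40.0, 30.0

def wait_until_ready(read_oven_temp, read_trap_temp, poll_s=10):
    """Poll the GC oven and trap-column sensors until both have cooled,
    then let the robot start the next purge cycle."""
    while True:
        if read_oven_temp() <= OVEN_READY_C and read_trap_temp() <= TRAP_READY_C:
            return
        time.sleep(poll_s)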
-------
251
The sample preparation step in the purge and trap analyses of water
samples is not complicated. It only requires the spiking of the purge tubes
with a standard solution. This spiking can easily be performed using a robotic
system that includes a master laboratory station (autodilutor) or automatic
repipetting station [4]. The analysis of soil samples is more involved. The
samples must be weighed and extracted with glycol or methanol in the ratio of
10 ml of the solvent per 4 g soil. An aliquot of this extract (5 to 100 µl) is
injected into the purge tube containing 5 ml of distilled water. A surrogate
chemical (e.g., trifluorotoluene) is added to the extracting solvent to begin
with or directly into the soil sample prior to extraction.
All the operations above can be easily automated using the standard
robotic stations available from Zymark. However, we do not see any reasonable
way to teach the robot to take a representative soil sample (avoiding the
sampling of rocks, leaves and live objects). The robotized procedure consists
of several steps. First, the robot weighs the empty vials and stores their
weights in memory. Then, a human "eyeballs" a scoop (approximately 4 g) of the
soil sample and places it into a preweighed vial. After that, the robot takes
over; it weighs the vial with sample, calculates the sample weight, adds a
proportionate amount of methanol (4:10), crimps the vials and then shakes them
to facilitate the extraction. After the extraction is completed and the soil
particles have settled down, an aliquot (5 to 100 µl) of the extract is
withdrawn from the vial and is introduced into the purge tube containing 5 ml
of water. The purge tube is then capped and placed into a refrigerated rack
where it is ready for introduction into the robotic "purge and trap" station
depicted in Figure 2.
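
The proportionate methanol addition is a simple calculation; a minimal sketch,
with the vial weights as assumed example values:

def methanol_volume_ml(vial_with_sample_g, empty_vial_g,
                       ratio_ml_per_g=10.0 / 4.0):
    """Proportionate methanol addition: the EPA ratio of 10 ml solvent
    per 4 g of soil, scaled to the actual sample weight."""
    sample_g = vial_with_sample_g - empty_vial_g
    return sample_g * ratio_ml_per_g

# e.g. a 4.37 g "eyeballed" scoop gets 10.9 ml of methanol
print(round(methanol_volume_ml(27.52, 23.15), 1))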
Static Headspace Analysis
Earlier, we described a robotized sample preparation procedure for
headspace analysis of waters, soils and polymers [1]. This procedure is now
extended to include the final analyses. The robot picks up the prepared
headspace vial and puts it into an automatic headspace sampler (Hewlett-
Packard). In order to synchronize the robot with the status of this
autosampler and to provide equal equilibration times for the vials, the direct
communication between the robot controller and the autosampler was established
-------
252
via the "inputs" on the power and event control I module. A simple program was
written that permits the robots controller to read the position of the
autosamplerJs carousel and to advance the tray to a desired position. The
headspace autosampler has 24-sample positions. In order to extend this
capacity, we built a simple device that permits automatic removal of the
analyzed vials from the autosampler tray. A pneumatic cylinder with a
hypodermic needle located on the plunger's end was secured over the vial
introduction opening of the autosampler. The robot actuates the pneumatic
cylinder which forces the needle through a headspace vial septum. The robot's
controller then actuates the upward motion of the cylinder's plunger. The
robot then removes the analyzed vial from the plunger's needle and (if
necessary) puts a new vial into the sampler for thermal equilibration and
subsequent analysis.
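
A minimal sketch of the carousel bookkeeping this program performs; the reader
and advance functions stand in for the power/event inputs and are assumptions,
not the actual Zymate calls:

CAROUSEL_SIZE = 24

def advance_to(read_position, pulse_advance, target):
    """Step the 24-position tray forward until `target` is in place."""
    while read_position() != target:
        pulse_advance()

def steps_needed(current, target):
    """Forward steps required on a circular 24-position carousel."""
    return (target - current) % CAROUSEL_SIZE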
SUMMARY OF CHROMATOGRAPHIC CONDITIONS
Purge and Trap
The transfer line from Tekmar purge and trap apparatus was connected to
two capillary columns* via split/splitless injection system. One of the
columns was directly interfaced with a mass spectrometer (MSD) and the effluent
from the other column was split between flame ionization (FID) and
photoionization (PID) detectors. The chromatograph (HP-5890) was equipped with
cryogenic capabilities. The chromatographic conditions used for the separation
of aromatic and chlorinated organics are summarized in Figure 4A.
* 30 m fused silica DB-5 columns from J&W Scientific Company.
-------
253
Static headspace
The transfer line from the HP headspace autosampler was directly connected
to a capillary column via split/splitless injector. The effluent from the
column was split between FID and nitrogen-phosphorus (NPD) detectors. The
chromatographic conditions are summarized in Figure 4B.
RESULTS and DISCUSSION
The incorporation of robots in the environment of industrial analytical
laboratories opens new possibilities for a statistical approach to the generation
of analytical data that otherwise would be (and were) cost and time
prohibitive. This approach is demonstrated in the examples below.
1. Evaluation of static headspace gas chromatography for analyses of
aromatics and methyl ethyl ketone (MEK) in soil.
The irregularities in soil compositions, even for the same sampling spot,
require statistical evaluation of the analytical results. Figure 5 shows the
scattering and linearity of the results obtained from the spiking of blank
soils with various levels of aromatic compounds. A similar curve was generated
for MEK (Figure 6). This curve shows substantially more scattering. To
verify that this scattering is due to the variations in soil compositions
rather than due to the analytical procedure used, the following special
experiment with uniform matrix (water) was set up. Aliquots of water were
spiked with various amounts of benzene, toluene and xylenes. These spiked
solutions were analyzed in the same fashion as soils. Figure 7 is a graphical
representation of the results generated. This figure shows substantially less
scattering than the similar spiking results for soils (Figure 5). The reduced
scattering of the results obtained for a uniform sample matrix indicates that
indeed the scattering shown in Figure 5 is due to variations in soil matrices.
-------
254
However, this experiment does not explain the larger scattering of the
results for MEK than the scattering observed for aromatics. We speculated that
there is an additional statistical variable for MEK compared with the
aromatics. In search for this variable, the following experiments were
conducted using the robotic system described above. The soil was dried to a
constant weight at 120°C. This dried soil and distilled water were mixed in
different proportions, in such a fashion, that the total weight of the mixture
in a headspace vial would be constant (2.5g). This mixture was then spiked
with 100 µl of water solution containing MEK and aromatics. The results of
this experiment are presented in Figure 8. This figure shows a dramatic effect
of the moisture content on the sensitivity of the headspace analysis to MEK
while no appreciable influence is shown for aromatics. This effect is probably
due to higher solubility of MEK in water as compared with the solubility of
aromatics. The higher solubility of a compound in water reduces the partial
vapor pressure of the compound and therefore results in lower sensitivity of the
headspace analysis to the more soluble chemicals. The moisture content is
probably the additional variable that is responsible for the larger scattering
of the results for MEK.
Over 200 samples were analyzed during this study of moisture content
effects on the sensitivity and reproducibility of soil analyses. The
analytical work was performed by the robot within a week and involved only
about four hours of lab personnel's time.
2. Analyses of soil and water using purge and trap GC/MS
The evaluations of the spread of contamination in soil and groundwaters
often result in the simultaneous generation of a large number of samples to be
processed in a very short time. The cost of these analyses constitutes a
substantial portion of the overall cost of the area cleanup. The high analytical
cost as well as the time involved in the analyses often lead to a compromise as
to the certainty of the contamination boundaries (minimal number of locations
sampled) and the quality of analytical data (minimal number of duplicates,
spikes, standards and blanks that are run). The introduction of robotics to
EPA regulatory analysis substantially reduces the need for such sacrifices.
Indeed, Table I shows the advantages of robotized "purge and trap" analyses
with respect to cost, time and analytical quality. This table resulted from a
-------
255
project that involved the analysis of soil samples for the presence of aromatic
and chlorinated compounds. The data used in the table for an independent
laboratory was based on our experience with contracting out soil samples to
several independent laboratories for "purge and trap" GC/MS analyses.
CONCLUSIONS
1. The introduction of robotics to an analytical process extends the power of
existing analytical instrumentation. In the case of the static headspace
analyzer, the robot permitted the automatic control of equilibration
times. These equilibration times may be varied for method development or
may be maintained constant for samples where equilibrium is difficult to
achieve. Moreover, the headspace sampler is no longer limited to 24
samples (number of positions in its carousel) but rather is established at
the user's discretion.
In the case of "purge and trap" analyses, the robot controlled purge
station (Figure 2) permits similar expansion of the automatic "purge and
trap" apparatus to the user specified number of samples (40 in our case).
The integrity of the samples to be analyzed is ensured by the design of
the purge tube which is sealed and stored in a refrigerated rack prior to
analysis.
2. The cost effectiveness and timeliness of the described robotized
environmental analytical system permits a more rigorous quality control of
generated analytical data as well as automatic statistical investigations
and evaluations of the various parameters of an analytical procedure in a
fast and inexpensive manner.
REFERENCES
1. M. Markelov, M. Antloga "Robotization of Multiple Analytical Procedures,"
in "Advances in Laboratory Automation Robotics 1985" edited J.R.
Strimaitis and G.L. Hawk.
-------
256
FIGURE 1
[Schematics of the dynamic ("purge and trap") and static headspace techniques; the diagram labels are not legible in this copy.]
-------
FIGURE 2
ROBOTIC STATION FOR EPA PURGE AND TRAP METHOD
257
[Diagram of the purge station; legible labels include the trap column, sample line, pneumatic linear cylinder, stationary bracket, "purge and trap" apparatus, heating jacket and drain.]
-------
258
FIGURE 3
[Diagram of the robotic system layout; legible labels include the vortexer and a headspace station. Other labels are not legible in this copy.]
-------
259
FIGURE 4
[Panels 4A and 4B: chromatographic conditions for the purge and trap (4A) and static headspace (4B) analyses. Legible annotations include cryofocusing, flow stabilization, a temperature ramp, warm up, and the chromatographic separation; specific times and temperatures are not legible in this copy.]
-------
FIGURE 5
LINEARITY PLOT FOR AROMATICS
(soil spikes - total 108 points)
260
[Plot of observed ppb versus spiked ppb, 100 to 700 ppb; fit through all calibration points: y = 1.015207 x.]
FIGURE 6
LINEARITY PLOT FOR MEK
(soil spikes - total 22 points)
[Plot of observed ppb versus spiked ppb, 100 to 700 ppb; fit through all calibration points: y = 0.788338 x.]
-------
261
FIGURE 7
[Linearity plot for aromatics spiked into water; the results show substantially less scatter than the soil spikes of Figure 5. Axis values are not legible in this copy.]
-------
262
FIGURE 8
[Effect of soil moisture content on headspace response (y-axis: log peak area): the MEK response changes dramatically with moisture while the aromatics are essentially unaffected. Axis values are not legible in this copy.]
-------
263
TABLE I
[Comparison of robotized and independent-laboratory "purge and trap" GC/MS analyses with respect to cost, time and analytical quality; the tabulated values are not legible in this copy.]
-------
264
Question and Answer Session
MR. TELLIARD: Do we have
some questions? Can you identify yourself, please?
MR. HUNTINGTON: I'm John
Huntington, I'm a consultant...you mentioned soils.
Have you used this same system to do soils?
DR. MARKELOV: Soils and
sludges.
MR. HUNTINGTON: You didn't
describe that.
DR. MARKELOV: I didn't. A
robot takes an empty vial, puts it on the balance,
weighs all the rack of empty vials, and stores the weights
in a memory array. After that, a human comes in, eyeballs
about four grams of soil. After that, you have a
choice. You either can maintain the EPA required
ratio of four grams of soil to ten milliliters. Then
you will use the master lab station, which will calculate
how much to add, depending on what weight the robot
took, or you might have a pretty good scoop and forget
about that and just dispense a constant volume of ten
milliliters.
After you put your soil in, the robot takes over,
-------
265
shakes it, and here is a dangerous point. For
reasons which escape me, EPA requires two minutes
shaking, two minutes extraction for the soil analysis,
I believe for the sludge. It's physically impossible,
because the emulsions which you form do not separate
within ten minutes. You have to do something about
it. You have to do it by hand. You might immediately
take it and filter it through the filter. That
becomes very difficult to robotize. We have a problem
with that.
I don't understand why two minutes. I would
understand two minutes minimum, but I don't quite
understand...maybe somebody will be able to answer
that. I don't know. If that obstacle can be forgotten
and we can use minimum two minutes for extraction,
the robot has no trouble with that. After that,
everything is settled.
Now, after everything is settled, you can take
your aliquot either way you want. You just see on
the movie how the aliquot can be taken easily, or you
can do it by hand with a syringe. It's no problem.
Our system allows you to do both.
MR. HUNTINGTON: Another
-------
266
question. How much did it cost you to set this up?
DR. MARKELOV: It's a good
question and a true answer on this question will be
unfair to many people. It didn't take us very long,
but it didn't take us very long because it was about the
eighth system we set up. To set up the
purge and trap system took us about two months.
However, the first system which we tried to set up
took us about half a year.
MR. TELLIARD: Anyone else?
Thank you.
We have a break scheduled, and it says 15.
Let's keep it to 15. Go on and get your Coke and
come back in. We'll keep moving.
(WHEREUPON, a brief recess was taken.)
MR. TELLIARD: We'd like to
reconvene, please.
We have a slight program change. Lee Myers from
CompuChem is going to have to go catch an airplane
and fly to someplace else. So we're going to move
him up in order, and then Spencer Smith will follow
Lee. Again, continuing with the joys of robotics or
thereabouts, Lee's going to talk on data reduction.
-------
267
DR. MYERS: What I hope to
do this afternoon is discuss data management and the
lab management system at CompuChem Laboratories. The
lab management system at CompuChem is like those at
most other laboratories. The major functions it
provides are sample tracking, laboratory management,
a repository for results, linkages for the accounting
functions, and assists with our quality assurance/
quality control program.
CompuChem has had a lab management system since
1980. Our first lab management system was a Data
General C350 Eclipse, and the software was written in
FORTRAN, using fixed arrays to do the sample tracking.
The next version was a COBOL based system. In 1982
we moved to Hewlett-Packard 3000 based system. The
reasons for doing this is, lab management information
systems primarily involve database manipulations, and
we find the Image/Query system on the HP 3000 to be
the most satisfactory. There's little mathematics
involved and not a lot of number crunching, so that
writing the programs in COBOL or a fourth-level
language (Protos) and the using Image/Query database
provides the kind of support we need for our programs.
-------
268
In 1982 we also introduced into the lab a local
area network, which was one of the original versions
of the Ungermann-Bass Net One system. As will be
discussed later, the purpose of the network is
to connect all of the Finnigan mass spectrometers
into the Hewlett-Packard 3000 in real time.
In 1986, we introduced some of the reporting
software that I'm going to talk about this afternoon,
which is an environmental site profile system and
remote reporting. In 1987 we're also currently
completing what we call our network at the lab, which
in a certain sense is something which is never
completed due to continuing changes in instruments
and procedures.
The functional elements of the lab information
management system were discussed earlier. The items
that I want to talk particularly about today are
results and reports. The sample information, again,
is almost generic, the date the sample was received,
what analyses were requested, schedules, extractions,
mass spec, dry weight, QC, the normal items. Quality
control aspects are both in terms of the samples
themselves as well as scheduling and then looking at
-------
269
the data and making sure that the data meets all the
quality control checks that it must meet, such as
surrogate recoveries, internal standard area, and
response factors.
The accounting links that I mentioned are, first
and foremost, the samples must get invoiced once
the results are mailed, and secondly, cost accounting
to verify the profitability of individual product
lines.
At the risk of being too technical about our
particular system, the key building block of CompuChem's
lab management system and subsequently, the results
reporting, is the notion of a compound list. The
definition is a key one. It's a list of one or more
target compounds for which an analytical procedure is
performed which has stated units and detection limits.
The attributes of compound lists are equally
important. A compound list is unchangeable over
time. Essentially this means that if EPA changes
detection limits or changes compounds, a new compound
list must be created and given a new number. The
system then will accurately reflect that two years
ago, when you analyzed that sample, that was the
-------
270
set of target compounds analyzed. The only other
alternative is to track compound lists through
revision levels, which we found to be somewhat more
cumbersome than actually changing the list and giving
it a new number.
The definition of a compound list consists of
a list number, units, a description, compounds and
detection limits.
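In modern terms, the compound list might be sketched as an immutable record;
the field names below are illustrative, not CompuChem's actual schema:

from dataclasses import dataclass, field

@dataclass(frozen=True)   # frozen: a compound list is unchangeable over time
class CompoundList:
    list_number: int
    units: str
    description: str
    detection_limits: dict = field(default_factory=dict)  # compound -> limit

# Changing a limit means issuing a NEW list under a NEW number:
bna_v1 = CompoundList(101, "ug/L", "Base/neutral target compounds",
                      {"naphthalene": 10.0})
bna_v2 = CompoundList(102, "ug/L", "Base/neutral targets, revised limits",
                      {"naphthalene": 5.0})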
The next key building block in the lab management
system is the analysis code. An analysis code is one
or more lab procedures performed with defined quality
control, within defined time limits, and yielding
results for a specified compound list. An analysis
code is also unchangeable over time, for if it were
not you could not recreate history.
The analysis code completely specifies a protocol
to be used in analyzing any client sample. The
analysis code itself is any given number. As an
example it can specify the organic priority pollutant
analysis: acids, base neutrals and volatiles.
We also have a status code which could for
example indicate that a particular procedure is not for
sale. An example of that is confirmation of
-------
271
pesticides by mass spec, which automatically follows
the GC procedure. It could indicate that the analysis
code is obsolete or that the analysis is sold only to
EPA.
The matrix denotes the specific matrix to which
the analysis code applies. The queue denotes the
various lab stations. When we generate work lists,
the samples that have that procedure automatically
show up on the work list for that laboratory area.
The procedure refers to the standard operating
procedures in the laboratory. Negative numbers are
always sample preparation procedures; positive numbers
are instrument procedures.
The rule code basically specifies the length of
time allowed to perform a procedure.
These times are based on the requirements of the
procedure, such as holding times, or on customer requirements.
Compound list specifies the compound list that
will be used. Note that extraction procedures don't
have compound lists because they perform the same
extraction independent of what the target compound
list is going to be at the mass spectrometer.
-------
272
The QC counter specifies the quality control
appropriate to this procedure. In our system,
when samples are received at the back door, they are
essentially logged against quality control counters,
such that every 19th sample becomes a matrix spike,
the 20th a matrix duplicate. Thus every sample
becomes part of one or more quality control batches.
Each QC counter also has a time limit, so if we don't
get 20 samples for a particular analysis within 30
days, the quality control samples are automatically
generated.
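A minimal sketch of the QC-counter logic as described, with invented class
and method names:

from datetime import datetime, timedelta

class QCCounter:
    def __init__(self, batch_size=20, max_age_days=30):
        self.count, self.batch_size = 0, batch_size
        self.opened = datetime.now()
        self.max_age = timedelta(days=max_age_days)

    def log_sample(self):
        """Return the QC role of the next received sample, if any."""
        self.count += 1
        if self.count == self.batch_size - 1:
            return "matrix spike"        # every 19th sample
        if self.count == self.batch_size:
            self.count, self.opened = 0, datetime.now()
            return "matrix duplicate"    # the 20th closes the batch
        return None

    def expired(self):
        """True if 30 days pass before the batch fills, so that the
        quality control samples must be generated automatically."""
        return datetime.now() - self.opened > self.max_age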
The last item you need to specify for the analysis
are the internal standards and the surrogates that
will be used. The lab management system, then, is
basically all derived from these two fundamental building
blocks.
The laboratory areas that we have, that are
automated or in the process of being automated are GC
mass spec, GC, inorganics and other. The other area includes some
of the wet chemistries, and in CompuChem's particular
case, though I'm not going to stress it today, since
this is an environmental conference, we have another
side of our business which does drugs of abuse analysis
-------
273
in which the sample volumes are significantly higher
than they are on the environmental side. Consequently
the automation, laboratory management, and moving of
samples around are even more critical in that area.
When I discuss the Hewlett-Packard 3000, we
actually have two Hewlett-Packard 3000/70s that run the
lab management system. There are 13 400 megabyte
disk drives, more than a hundred terminals, and ten
modems that are specifically allowing clients access
into the database.
In the first part of the laboratory network
the GC mass spec laboratory, we have 23 Finnigan OWA
1020s which are connected to the Ungermann-Bass Net
One LAN. Last year we added four Hewlett-Packard
MSD's, which are used in the drug side of the business.
These were interfaced to the network from the
Quicksilver data system through a personal computer,
in this case an IBM PC AT, which does a translation
routine and essentially converts the MSD quant reports
into Finnigan lookalikes. So as a result, the network
protocol and the HP3000 software remain unchanged.
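The translation routine is conceptually a reformatter; a purely illustrative
stub follows, since the actual MSD and Finnigan quant-report layouts are not
given in the talk:

# Hypothetical field layout; only the idea of translating one report
# format into another, so the downstream software is unchanged, is real.
def msd_to_finnigan(msd_lines):
    """Reformat hypothetical MSD quant lines into a Finnigan-style report."""
    out = []
    for line in msd_lines:
        compound, amount, units = line.split("|")
        out.append(f"{compound:<30s}{float(amount):>10.2f} {units}")
    return "\n".join(out)

print(msd_to_finnigan(["naphthalene|12.40|ug/L"]))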
An archival storage block is shown because we
also send raw data up the network as well as quant
-------
274
reports. These kinds of data are stored off line
on essentially a stand-alone INCOS system.
In the GC lab, we have approximately 22 gas
chromatographs right now, which are a mixture of
Perkin-Elmer, Hewlett-Packard, and Varian, and these
are all connected to a Hewlett-Packard 3357 lab
automation system, which we're running through a
standard DS line, which is the Hewlett-Packard
standard interconnect between 1000 and 3000 computers.
The inorganics lab is the one that's the least
developed. Basically we have an ICP which has a
Digital Equipment MicroVAX, which we then connected
to a personal computer running Walker Richer & Quinn's
Reflection software. Reflection's ability to emulate
a VT100 terminal and also a Hewlett-Packard terminal
makes the bridge quite nicely between the two essen-
tially foreign systems.
Similarly, we're starting to use PC's to interface
between the AA's and the MicroVAX. We are definitely
finding in our laboratory applications that using
PC's as essentially $1200 interface boxes has turned
out to be the most cost effective way of getting a
lot of different equipment communicating with the
-------
275
Hewlett-Packard 3000, and with
minimum time in getting it done.
This basically concludes the part about how
the lab areas are integrated with this laboratory
management system.
From the databases we basically produce two
kinds of reports. The first type is written
reports. The second type of report is what we call
our client database and environmental site profile,
and also includes reporting data via telecommunica-
tions to the PC's at the clients' sites.
In order to prepare a report the system must
first verify the data in the database against the
compound list that was ordered. Results must be
present for everything that was ordered. It is
also important not to report anything that wasn't
ordered. It is also important to verify that all
factors are present, such as dry weight and pH,
data items that may be manually entered from other lab
areas.
One of the most important parts of the reporting
software is what we call a report formatter. Consider
that the semi-volatiles appear in Method 625, the contract
-------
276
lab program, 8240, or Appendix IX. The compound
list is essentially the same in all these cases;
however, the output the client expects is completely
different. The lists are sorted differently. In
some cases they're alphabetic, and in other cases
they're in elution order. The purpose of the report
formatter is to provide a mapping function from the
database to the specified order that that particular
client wants.
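A minimal sketch of the mapping idea, with invented compound names and
sort orders:

# The stored results are the same; only the emitted order differs.
RESULTS = {"phenol": 12.0, "naphthalene": 3.4, "anthracene": 0.8}

FORMATS = {                       # hypothetical client formats
    "alphabetic": sorted(RESULTS),
    "elution":    ["phenol", "naphthalene", "anthracene"],
}

def format_report(results, order_name):
    """Map database results into the compound order the client wants."""
    return [(c, results[c]) for c in FORMATS[order_name]]

print(format_report(RESULTS, "alphabetic"))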
The report can be printed on the laser printer;
in this case we are using a commercially available
package that we load from the database system. Using
commercially available software has the theoretical
advantage of the vendor being responsible for updates.
As an aside, one of the downsides of developing a
management system from scratch is code maintenance.
Currently CompuChem's systems staff is maintaining
something on the order of three quarters of a million
lines of code that we've written over the last eight
years.
Having produced the reports, the next thing we
do is, we set up a separate, much smaller database,
results only, for client access. This database is
-------
277
also re-indexed: our clients are oriented to their
sites, sample points and sample identifiers, and not
toward CompuChem's sample numbers. Therefore, the
database was re-indexed in terms of what our clients
like to see, such as sites, points and samples. It
makes the database more useful as an analytical tool.
This doesn't help in processing for the clients, but
is done after the analyses are completed, as the
analytical process is based on CompuChem sample
numbers.
In summary, the CompuChem system is built on
a primary local area network supplemented by PC
links to transfer the data into the central computer.
The analysis codes and the compound lists are the
building blocks of the lab management system.
Efficient laboratory management and flexible reporting
are some of the benefits derived from this system.
-------
278
Question and Answer Session
MR. MARKELOV: Michael
Markelov, Standard Oil. What's involved in the
maintenance of this system, peoplewise, timewise, and
how long does it take you to write communications
software through the PC feeding to other pieces of
the...
DR. MYERS: Our systems staff
is currently a programming staff of eight with a
staff of four in computer operations. The computers
are run 24 hours a day, five days a week with weekend
coverage as well. So we've got four people to do
nothing but operate the computers.
Maintenance on the system is per se very little.
The code has been debugged. Adding a new analysis
code or new compound list is a ten-minute exercise and
can be done by people in our marketing department or
quality control department.
PC interfaces typically only take a week or two.
DR. MARKELOV: One person?
DR. MYERS: One person.
But again, that's a person who's spent the last five
years of his life doing nothing but working with PC's
-------
279
in PASCAL and C and understands certainly more than I
do how to make them work in real time applications.
But again, even that varies. As
an example, an atomic absorption instrument has
a microprocessor and produces RS-232 output, which
is ASCII characters. The data must be captured by
the PC and reformatted before transmission to the VAX.
As I indicated earlier, the Reflection software package
has two terminal emulation modes. Once the software
is loaded, it can be a VT 100 one minute and an
HP 2622 the next minute. It's very flexible moving
files back and forth and was a lot easier than solving
a VAX to 3000 direct problem. You're essentially
just buffering it.
MRS. IRIZARY: What volumes
of samples do you process?
DR. MYERS: We are typically
running...I'd probably say at this point in time
maybe 150 CLP type samples a week. In other environ-
mental work we process the samples as fractions, e.g.,
volatiles, base neutrals, metals, etc. We're probably
running a thousand environmental sample fractions
a week. On the drug side of the business we're
-------
280
running more than two thousand samples a day.
MR. WALKER: Bob Walker,
Lancaster Labs. Could you explain the process of
taking a volatile sample and sending that data over,
for example, when you have an incorrect target compound
identification, what the analyst does and how the
file is sent over, or what kind of...
DR. MYERS: When the
analysis is finished the GC/MS chemist reviews the
quant report. Once the chemist is satisfied with the
results, the quant report is sent over the network to
the database.
For each GC/MS run the database contains a verify
flag. The verify flag requires that an assistant
manager or a data reviewer examine the chemist's
results and then set the verify flag. Thus we have
assurance that compounds have been correctly
identified. If there are errors, the reviewers can
make the adjustments prior to setting the verify flag.
No data can be reported until the verify flag is set.
Furthermore, we don't move that data into that final
results database until after the report has actually
been sent, just in case there should be something in
-------
281
what we call our final review process that may affect
those results.
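A minimal sketch of the verify-flag gate described here, with invented
names; the point is simply that reporting is blocked until a reviewer,
not the bench chemist, sets the flag:

class GcmsRun:
    def __init__(self, quant_report):
        self.quant_report = quant_report
        self.verified = False

    def verify(self, reviewer):
        """Assistant manager or data reviewer approves the identifications."""
        self.verified = True
        self.reviewer = reviewer

    def report(self):
        if not self.verified:
            raise RuntimeError("verify flag not set; data cannot be reported")
        return self.quant_report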
MR. TELLIARD: Anything
else? Thank you, Lee.
Our next speaker is from Ciba-Geigy, who's going
to talk on automation of NPDES permits, which sounds
like a real interesting subject, particularly if you
don't have to put any data in it.
-------
[The slides for this presentation and the opening of the talk (through page 299) are not legible in this copy.]
-------
300
5 day test.
This is an overview of a portion of the
system. You can see the Technicon auto analyzer
systems. In the corner of the slide is a Tracor
GC that's used for the Diazinon. You can see the
extraction portion of that process here. We do
a five to one extraction/concentration in hexane, and
then a direct injection onto a gas chromatograph.
The series of valves seen here are used to differentiate
between samples from the two different streams.
Our two discharge streams from the plant are
located about 350 to 400 yards apart. This analyzer
building is located in the center of those points.
Sample from both discharge streams is continuously
pumped to the building. It circulates continuously
through the building, and these valves switch on a
timed basis between the two streams. As a result,
the Technicon auto analyzers operate continuously
and alternate between the two streams.
This is another overview of the system. You can
see the BOD instrument and the robotic arm that's
been discussed at length already this afternoon.
-------
301
This is a better view of the robotic system.
These are the crucibles used for suspended solids,
and these are the tubes used for total Kjeldahl
nitrogen. With the use of this robotic system, the
Zymark, we've totally automated both suspended solids
and total Kjeldahl nitrogen.
In this slide you can see the crucible that the
robot has in its hand prior to taking a tare weight.
This is a closeup of the robot actually setting the
crucible on the balance. The robot then moves the
crucible over to a filtration stand and sets the
crucible on the stand. At the same time the vacuum
system is automatically turned on, and a vacuum is
pulled through the tube.
Located above the vacuum tube is a cylinder
which is filled to overflowing with sample from the
stream that we're analyzing at this time. The slide
comes down to dispense. You can see the tube that the
sample will travel down into the crucible.
If you look closely, you can see three thin
wires that are used as conductivity probes. Their
purpose is to keep the filter crucible from over-
flowing on slow filtering samples.
-------
302
We have three probes, two long ones and one
short one. The solution starts to fill. When it
reaches a certain level and we get conductivity
between one of the long probes and the short one,
the valve closes and stops the sample flow,
allowing the filtration to continue. After the
sample is filtered or we lose conductivity between
the two longer probes, we then open the sample
valve and go through the process again.
This process is continued until either there's
no more sample in the cylinder, or the process times
out. Since this is part of the total Kjeldahl
nitrogen and suspended solids analysis, we cannot
allow unlimited time.
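A rough sketch of that fill-and-filter control loop, with the probe readers
and valve object assumed rather than taken from the talk:

import time

def filter_sample(level_high, level_low, valve, sample_left, timeout_s=600):
    """Fill the crucible until the high-level probe conducts, filter
    down until the low-level probes lose conductivity, and repeat until
    the cylinder is empty or the process times out."""
    start = time.time()
    while sample_left() and time.time() - start < timeout_s:
        valve.open()                      # admit sample to the crucible
        while not level_high():           # short-to-long conductivity
            if not sample_left():
                break
            time.sleep(0.1)
        valve.close()                     # stop flow; let filtration run
        while level_low():                # wait for long-to-long to clear
            time.sleep(0.1)
    valve.close()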
As soon as the filtration is complete, the
cylinder moves up and the hot air gun that you see starts.
The vacuum stays on, and we pull the hot air through
the crucible drying the filter pad.
The next two slides show a comparison between
results from the on-line analyzer and the environ-
mental lab. As you can see, the correlation is
really very good between the drying based on the hot
air vacuum system and a laboratory oven set at 103 to
-------
303
105 degrees C.
The next step that the robot performs is the
total Kjeldahl nitrogen digestion portion. It picks
up the digestion tube and moves the tube over and
adds the sample that we're analyzing, it then moves
over and adds digestion reagent. In this slide you
can also see a crucible that is sitting on the filtra-
tion stand drying.
The robot system analyzes two streams for suspended
solids and Kjeldahl nitrogen at the same time. The
sample digestion for Kjeldahl nitrogen is the rate
determining step.
In this slide the robot has taken the sample
out of the digester and moved it over and is starting
to put it in the cooling rack.
After the sample has cooled, it is diluted
with distilled water and mixed well to dissolve any
solids. The sample is then poured into the cup.
This cup is attached to a technicon auto analyzer for
the analysis of the ammonia in the digested solution.
The sponge that you see is used to remove the
drop of sample that always seems to remain on the
lip of the digestion tube.
-------
304
This slide lists a comparison between the Kjeldahl
nitrogen data from the on-line analyzers as well as
the environmental laboratory. As you can see, there
is excellent correlation between both high and low
levels.
The data that is generated by the process
analyzer is automatically transferred to the effluent
area computer. Flow transmitters are in the discharge
lines and are used to totalize the daily discharge.
The next slide shows a copy of the computer printout
of the 24 hour summary which is generated daily.
As you can see, it contains the average value for
each of our permit parameters. It also includes
the high and low values and the times that they
occurred. We also have the number of readings for
each analyzer. This is for use in trouble shooting
to see if there were any problems with data trans-
mission.
We calculate the total pounds discharged based
on the flow rate and the average sample result.
Alarm limits are included so that the operating
personnel can readily look at the printout and
detect any problems.
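The mass-loading arithmetic behind that printout is straightforward; a
minimal sketch, taking water as 8.34 pounds per gallon:

def pounds_discharged(total_flow_gal, avg_conc_ppm):
    """Daily mass load from totalized flow and average concentration.
    1 ppm is 1 pound per million pounds of water."""
    return total_flow_gal * 8.34 * avg_conc_ppm / 1_000_000

def alarm_status(pounds, alarm_limit):
    """Compare the computed load against the printed alarm limit."""
    return "ALARM" if pounds > alarm_limit else "ok"

# Example: 3,133,169 gallons at an average 12.6 ppm TKN
tkn_lb = pounds_discharged(3_133_169, 12.6)
print(round(tkn_lb, 1), alarm_status(tkn_lb, 3_900.0))

With the totalized flow and average TKN from the stream D10 summary slide
reproduced later, this gives roughly 329 pounds, consistent with the printed
total.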
-------
305
The next slide shows a printout which is generated
daily which lists the total discharge amounts for the
various NPDES parameters. Also included are the
actual permit limits for easy comparison.
In addition to the 24 hour composite, we furnish
the area operating personnel with an hourly summary
of the totalized discharges per each parameter. This
can be used to make adjustments throughout the day,
so that no parameters are exceeded.
If there's any question of whether or not this
system actually works, it was installed in the fall
of 1985 and has been in operation since that time.
We have had no parameter excursions during this time.
Purchase price for the analyzers in this system was
approximately $250,000. That does not include any of
the costs associated with installation.
Approximately six hours of daily maintenance are
required for a system of this type. The Kjeldahl
nitrogen flasks are cleaned and the filter pads
changed on the suspended solids crucibles on a 24
hour cycle. Reagent preparation is required also.
Calibration and standardization are performed as
necessary along with minor repairs to the equipment.
Any questions?
-------
306
Question and Answer Session
MR. BURCH: Garrett Burch,
Smith Kline Chemical. Do you run standards as a part
of your testing regime?
DR. SMITH: Yes, we run standards.
They run on a regular schedule to maintain calibra-
tion of the analyzers. The schedule is based on
our experience as to how often standardization is
needed. These analyzers are really very stable and
don't require calibration daily.
DR. MARKELOV: Do you still
maintain the capability to do the analysis manually,
or do you cut down on manual analysis in general when
you use this system?
DR. SMITH: The environ-
mental lab maintains the capability to run all of the
analyses and in fact continues to do so for reportable
data. However, all of the intermediate and grab
samples that were run in the lab have been eliminated.
We also have plans to use this data for reporting
purposes in the future.
MR. TELLIARD: Anyone else?
Thanks so much.
-------
307
Our last speaker today is from TMA. He's going
to be talking on reduction and quality assurance of
ICP data. Let's go with it.
-------
NPDES Parameter Monitoring

TEST                        ON-LINE METHOD
Total Suspended Solids      Filtration and Drying
Total Kjeldahl Nitrogen     Digestion - Colorimetry
Ammonia                     Colorimetry
Iron                        Colorimetry
Cyanide - Amenable          Colorimetry
Cyanide - Total             Colorimetry
Diazinon                    Extraction - Gas Chromatography
-------
[Slide photographs of the analyzer system (several pages, including page 312) are not legible in this copy.]
-------
[Suspended Solids; the slide title and the stream labels are only partly legible in this copy]

Environmental laboratory    On-line Analyzers
D10                         D10
47 ppm                      43 ppm
36 ppm                      35 ppm
51 ppm                      57 ppm
68 ppm                      63 ppm
57 ppm                      56 ppm
42 ppm                      46 ppm
-------
Suspended Solids

Environmental laboratory    On-line Analyzers
S10                         S10
69 ppm                      74 ppm
75 ppm                      68 ppm
88 ppm                      95 ppm
112 ppm                     110 ppm
[not legible]               54 ppm
[not legible]               51 ppm
-------
[Slide photographs (pages around 320) are not legible in this copy.]
-------
Total Kjeldahl Nitrogen

On-line Analyzers           [column header not legible]
D10                         D10
10                          12
16                          15
14                          12
18                          19
16                          17
12                          10

[By analogy with the S10 slide that follows, the unlabeled column is presumably the environmental laboratory.]
-------
Total Kjeldahl Nitrogen

Environmental laboratory    On-line Analyzers
S10                         S10
90                          98
165                         155
122                         119
146                         149
118                         129
96                          90
-------
Stream D10                    Total flow = 3,133,169 gallons

Parameter               Avg.   BGVAL  Time   SMVAL  Time   #Readings
Iron, ppm               17.5   22.5   17:03  10.4    7:01  12
Ammonia, ppm            12.7   18.2    7:01   9.9    9:02  12
Total cyanide, ppb      19.2   52.2    5:03   5.1   13:02  12
Amenable cyanide, ppb    7.1    9.5    5:03   4.3    9:02  12
Diazinon, ppb            0.4    3.3    9:02   0.1   19:03  12
TKN, ppm                12.6   14.4   13:02   8.9    7:01  6
TSS, ppm                43.4   56.3    6:31  16.1    7:01  5
TOC, ppm                53.4   61.9    7:07  45.6    5:51  [not legible]
pH, S.U.                 6.1    7.9   17:06   4.5   16:40  2,880

[Reconstructed from a partially legible slide; values are matched by position, and one readings count is not legible.]
-------
Total pounds to effluent for selected compounds

            Stream S-10                 Stream D-10                 Stream C-10
Compound    Total pounds  Alarm Limits  Total pounds  Alarm Limits  Total pounds  Alarm Limits
Iron        51.9          240.0         474.1         400.0         [not legible]
Ammonia     689.8         1,800.0       338.9         1,000.0       [not legible]
Diazinon    0.02          0.06          0.01          0.08          [not legible]
TKN         745.8         3,900.0       329.6         1,670.0       [not legible]
TSS         223.4         800.0         1,029.6       2,086.0       [not legible]
TOC         1,971.7       8,000.0       1,394.7       2,500.0       [not legible]

[The Stream C-10 column and an apparent cyanide row are only partially legible; scattered values include 90.5, 7.8, 263.1, 600.0, 450.0, 52.2, 50.0 and 9.5.]
-------
[Slide: total pounds to effluent for all streams compared with alarm totals. Legible totals include 526.0, 1,028.7, 0.024, 1,075.4, 1,253.1 and 3,629.5 pounds; legible alarm totals include 640.0, 2,800.0, 0.140, 11,100.0 and 500.0. The compound labels (Ammonia, TKN, TSS, TOC, CN) cannot be reliably matched to the values in this copy.]
-------
328
DR. PENFOLD: Good afternoon.
The purpose of this project has been to develop a
tool for the analyst to aid in the organization,
evaluation and documentation of ICP data.
The system that was developed focused on the
processing needs that occur after the elemental
concentrations have been established for individual
samples analyzed by ICP.
For the spectroscopy lab that's facing more
rigorous QA/QC requirements these days, daily ICP
data quality is evaluated on the basis of thousands
of interrelationships between hundreds of element
concentration values. Typically, this task is
handled by a group of clerical technicians working
with some sort of off-line computer assistance.
The effort can easily be 20 percent of the total
work required to produce ICP data of documented high
quality. Our lab was beginning work on a scheme of
connecting our ICP directly to our central lab mini-
computer when Dale Rushneck put me in touch with Joel
Karnofsky.
Joel and I hashed out the problems involved
here, and came up with what I think is a cheaper and
-------
329
more effective solution to this problem.
The specific tasks that we're trying to take
care of were to allow the analyst, unassisted by the
support staff, to know immediately that the instrument
is in operating specs and continues to perform within
operating parameters; to know by the end of the
analytical session that all the samples are in control
in terms of precision and accuracy; and further, to
maintain quality control charts locally and automatically,
with printed summaries of all of these things at the
same time.
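As a minimal sketch of the in-control test, assuming a recovery window of
the usual EPA sort (e.g. 90 to 110 percent for a calibration verification;
the actual limits used by the system are not stated here):

def in_control(measured, true_value, low=0.90, high=1.10):
    """Flag a QC standard whose recovery falls outside the window."""
    recovery = measured / true_value
    return low <= recovery <= high, recovery

qc_chart = []                  # local control chart, one point per check
ok, rec = in_control(measured=9.6, true_value=10.0)
qc_chart.append(rec)
print(f"recovery {rec:.0%}", "in control" if ok else "OUT OF CONTROL")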
To better understand why the software is necessary,
it might be helpful to consider a comparison between
the nature of ICP analysis and another multi-analyte
instrument method. I just happened to pick GCMS here,
and the data processing needs are quite different.
Clearly, the analysis of ICP spectral intensity
data to calculate instrument elemental concentrations
is a much easier job than analyzing and interpreting
time variant mass spectral data. The ICP requires
less front end computer power.
The ICP can test 30 elements in two to ten
minutes, and the GCMS semi-volatile run takes on the
-------
330
order of an hour. So the ICP sample throughput is
approximately six times higher, depending on the
instrument that's used and the settings. Ultimately
you're producing twice as many analyte concentrations.
Also, the ICP external quality controls required
by EPA protocol make up on average about 40 percent
of all the solutions tested. I believe it's roughly
half that number for the GCMS protocol (strictly the
external controls).
One last factor that changes the nature of ICP
analysis and the post-analysis processing problem is
that despite ICP's larger dynamic working range as
compared to GCMS, re-analysis of ICP samples is
generally much more common than GCMS. I think that
this is partly because of the shorter time for analysis.
It's also due to the need for verification of ICP
results by other techniques, such as AA.
So all these factors, particularly the time factor,
create a much different working environment for ICP
than GCMS. Whereas the GCMS chemist has greater
processing needs at the instrument as compared to ICP,
an ICP chemist with a modern instrument has more need
of computer assistance in making sense of this massive
-------
331
intermediate result that the ICP has produced.
So to that end, the first graph, please. I won't
spend too long talking about this one, because it's
simple. These are the functional parts of the ICP
hardware system as you can see from the diagram here.
The ICP/QA system, as it's called here, is embodied
in a standard IBM PC/XT, plain vanilla IBM. It
does not control the spectrometer here; it does not
control the auto sampler; it has no algorithms
for doing peak identification, inter-element corrections
or any of the other functions that are handled right
here by the processors and controllers that came
with the original equipment.
Where the main ICP computer sends data to its
printer, the same data are captured by the PC. The
link between these two computers is a program resident
in the PC which captures the data from an RS-232
interface while the data is being sent and while the
PC is running other programs.
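To make the capture scheme concrete, here is a minimal sketch of the idea in
Python. It is an illustration only, not the resident program that ran on the
PC/XT, and the port name, baud rate, and log file name are assumptions:

    # Sketch only: mirror whatever the ICP sends its printer to a disk file.
    import serial  # pyserial; the real program was a memory-resident PC utility

    def capture_printer_stream(port="COM1", log_path="icp_capture.log"):
        link = serial.Serial(port, baudrate=9600, timeout=1)  # assumed settings
        with open(log_path, "ab") as log:
            while True:
                chunk = link.read(256)   # returns b"" when the line is idle
                if chunk:
                    log.write(chunk)     # the same bytes the printer receives
                    log.flush()          # data survives if the PC goes down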
The only ICP data that's required by this ICP/QA
system are the header texts which indicate the start
of the run, sample identification, QC type codes and
dilution volumes if they exist. All the other
-------
332
information is either calculated, entered by the
analyst or obtained from the main lab computer.
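The header scan itself can be as simple as pattern matching on the captured
text. A hedged sketch follows; the header layouts here are invented for
illustration and do not match any particular ICP's print format:

    import re

    # Hypothetical header formats; a real ICP's printout would differ.
    BEGIN_RUN = re.compile(r"BEGIN RUN\s+(?P<method>\S+)")
    SAMPLE_HDR = re.compile(r"SAMPLE\s+(?P<lab_id>\S+)\s+QC=(?P<qc_type>\S+)")

    def scan_headers(captured_text):
        """Yield (kind, fields) for each recognized header line."""
        for line in captured_text.splitlines():
            m = BEGIN_RUN.search(line)
            if m:
                yield "begin_run", m.groupdict()
                continue
            m = SAMPLE_HDR.search(line)
            if m:
                yield "sample", m.groupdict()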
I'm going to demonstrate the logic of the system
to some extent by working through examples and showing
some of the video screens that show up in the order
that they normally appear.
You'll notice as the screens come up that they
get busier and busier, more complicated, more data.
Bear in mind that they're designed for a full time
ICP operator who wants a lot of information. An
operator really only needs very small cues to understand
what's going on.
Towards the end, there will be simpler forms
designed for the end users of the data. The screens
are being shown here mainly to demonstrate what issues
are involved in this quality assurance effort,
rather than to try to explain all of the intricacies
of the program.
So this first one is called the sample description
screen. Over here we have a list of associated files
one can scroll through, and as each file is read the
window on the right is updated with data. The top
line, the sample I.D., has considerable latitude as
-------
333
to type of lab descriptors that can be used. It can
easily be an EPA sample number, case number, statement
of work number, or an aliquot batch which associates
the sample with a set of quality control samples produced at
the same time. These fields of information are
supplied by the client and will show up on the final
report. The remaining information mostly relates to
the preparation of the sample and affect calculations.
This is a facsimile of the screen that the
operator would see most frequently. It's the screen
that fills up with data as the ICP sends it over.
This column here fills with information from the
sample description files that we were just looking
at.
Two terms on this page that I want to pay special
attention to are up here: run QC is a shortened name
for run time quality control, and the other, post QC,
post run quality control. Those two terms are
essential to the logic of the whole system. The run
time quality controls refer specifically to the
controls related to instrument performance, like
instrument calibration standards, verification
standards, calibration blanks and so on. The post
-------
334
run quality controls relate to all the quality control
samples that might be affected by the preliminary
sample preparation chemistry, matrix spikes, sample
duplicates and so on.
The effect of this division is that the run time
controls are calculated and the results presented
immediately to the operator. He can tell if the
instrument is performing properly as soon as the data
are acquired. The post run controls are calculated
after all the solutions have been tested and we're
sure that we have all data that might affect the
calculation of those control results.
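In rough outline, the division might be coded as a dispatch on QC type. The
type codes below are the ones visible in the report figures (STND, ICVS, CCVS,
CALBLK, ICS, SPIK, DUP, PBLK); their grouping here is an assumption made for
illustration, not the program's actual table:

    # Run time QC: instrument performance, checked as the data arrive.
    RUN_TIME_QC = {"STND", "ICVS", "CCVS", "CALBLK", "ICS"}
    # Post run QC: preparation chemistry, held until the run is complete.
    POST_RUN_QC = {"SPIK", "DUP", "PBLK"}

    def dispatch(qc_type, result, run_time_check, post_run_queue):
        if qc_type in RUN_TIME_QC:
            run_time_check(result)          # evaluated immediately, on screen
        elif qc_type in POST_RUN_QC:
            post_run_queue.append(result)   # deferred to the post run step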
The example here is data just acquired from the
instrument. It happens to be a calibration standard.
By way of example, assume that for
thallium at 190 nanometers the lower control limit for
the emission on the calibration standard is 100,000.
The effect of an out of control value of less than
100,000 would be to cause the field to blink. There's
a little error counter at the bottom that gets
incremented when errors occur.
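Reduced to its essentials, that check is a comparison against a stored limit
plus a counter. A minimal sketch, using the thallium emission shown in Figure 3
and the assumed limit of 100,000 from the example:

    error_count = 0  # the on-screen error counter

    def check_emission(emission, lower_limit):
        """Return True (and count an error) if the value is out of control."""
        global error_count
        if emission < lower_limit:
            error_count += 1
            return True   # the caller sets the field blinking
        return False

    assert check_emission(95222, 100_000)  # TL 190 in the example fails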
Many of the other parameters on the screen can be
controlled in the same fashion. The internal standard
-------
335
ratio here, and this SD/CV field for the standard
deviation of replicates of the same sample analyzed
multiple times. The controls are defined in terms of
the QC type, the element and analyte line, and the
particular QC method. In this example it was the
emission value.
When all the essential information for a particular
solution tested at the instrument is available, that
is, the operator information has been entered and the
data acquired from all the disk files, the data
are printed and stored on disk. This is the cover
page for the run time report as it's called. The top
block is a list of the working limits for each of the
element lines, and the bottom, which is very hard to
see, is the flags, the qualifiers that are associated
with the results to follow, based on arithmetic
comparisons with these working limits.
So this is a copy of a printed run time report.
Again, this happens to be sample data. It's essentially
everything that appeared on the screen that we looked
at.
This is a run time report for a sample spike.
You'll see that the concentrations or the amount of
spike added to the sample have been filled in here,
-------
336
based on the initial volume of spike added in the
first screen I showed. The percent recovery on the
spikes is not shown. That comes at the end of the
process, as I was saying.
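The recovery itself is the usual matrix spike calculation, computed once the
unspiked result is also in hand. A sketch, assuming the standard definition,
checked against the zinc line of the post run QC report in Figure 10:

    def percent_recovery(spiked_answer, sample_answer, conc_spiked):
        """Standard spike recovery; assumes the usual definition."""
        return 100.0 * (spiked_answer - sample_answer) / conc_spiked

    # ZN213 in Figure 10: sample 0.0355, spiked answer 1.09, spiked 1.25 ug/mL
    print(round(percent_recovery(1.09, 0.0355, 1.25)))  # -> 84 percent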
When the program receives a new begin run header,
indicating the end of the previous run, or the operator
pushes the appropriate function key, the run time
process ends and the post run process as it's termed
here begins. The first step is to evaluate all the
multiple attempts to analyze a solution, or multiple
dilutions, whichever. The selection process is done
automatically in cases where the choice is obvious.
For example, you have one result within the working
range and the other considerably above or below.
When the results are contradictory as in this
example, the computer will stop and allow the operator
to determine which are the better answers. In this
case, the initial analysis, the bottom one, was a
solution for thallium. Again, it was below the
instrument detection limit as indicated over here.
Because another element needed a dilution, a one to
ten dilution was made, analyzed, and we get a small
signal just above the instrument detection limit here.
-------
337
So in this case, the computer program stops, the
operator chooses one of the two or selects none, as
one of the choices. A point I want to make here is
that we're not discarding data. The work that's done
on these two solutions is fully documented before the
selection and after. The choice here is merely which
data are deemed usable as final results to present to
our client. In a different situation, a client may
care to see all the attempts to run samples, in which
case the following reports will supply that need.
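The selection rule can be pictured as follows. This is a sketch of the
obvious-case logic only, with the working range standing in for whatever tests
the real program applies:

    def pick_injection(answers, low, high):
        """Pick automatically when exactly one answer is in the working range."""
        usable = [a for a in answers if low <= a <= high]
        if len(usable) == 1:
            return usable[0]   # obvious case: selected without operator input
        return None            # none or several qualify: the operator decides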
As I say, they get busier and busier. The next
screen here is the result of the calculation of a
sample spike, in this case. The out of control values
are underlined and annotated, and the right-hand
margin indicates the disposition of these results
relative to the control files. Out of control data
are stored just as the in control data are, and the
control limits are listed here.
The way this particular implementation of the
system has been defined, duplicate pairs below detection
limits are not included in the control files, and a
similar thing has happened here.
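A sketch of that duplicate control, with the below-detection exclusion just
described; the relative percent difference formula is the standard one, and the
rest is illustration:

    def duplicate_rpd(a, b, detection_limit):
        """RPD of a duplicate pair; pairs below detection are not charted."""
        if a < detection_limit and b < detection_limit:
            return None                    # "Too small": kept out of control files
        mean = (a + b) / 2.0
        return 100.0 * abs(a - b) / mean   # relative percent difference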
-------
338
This report is the last of the reports that
result immediately because of the end of the analysis.
This is probably the most useful to the analyst. It
lists all attempts to run samples for a particular
element, in this case, iron, in the same order that
they were analyzed.
The value here is that one is able to assess
quality control problems on an element-specific basis
as related to the chronology of events. Keep in mind
that there's another roughly two dozen of these pages
related to the other elements. I think that exemplifies
as well as any of these why the data processing task
is so complex.
The right-hand margin indicates which of these
results have been selected. You've got multiple
dilutions for a relatively small number of samples
for iron.
At the end of the whole process the computer
prompts the operator with a flag indicating if any
quality control charts are waiting to be printed.
This chart is for zinc duplicates. With the chart is
a tabular summary of these data; I'll spare you that.
In this case, the lower line is the mean of these 22
-------
339
points. The next line up is the control limit. The
third line up is the three standard deviations of
these values, and the top line is...I'm sorry, two
standard deviations and three standard deviations.
So in this case we have three out of control
values that made their way to the chart. All of the
control limits are set manually. However, here and
on the table that I mentioned is all the statistical
information that should be necessary to determine
what are appropriate control limits.
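Those chart statistics are ordinary ones. A sketch, using the zinc duplicate
figures quoted for this chart (22 points, mean 10.55, standard deviation 13.01):

    import statistics

    def chart_lines(points):
        """Mean and the 2- and 3-standard-deviation lines for a control chart."""
        m = statistics.mean(points)
        s = statistics.stdev(points)
        return {"mean": m, "2sd": m + 2 * s, "3sd": m + 3 * s}

    # With mean 10.55 and SD 13.01, the 3 SD line sits at 10.55 + 3 * 13.01 = 49.58.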
This last graph I have to show is the final
report. It's simply a listing of the data that's
most interesting to the client, sample descriptors,
results per element. There's a cover page that I
won't show that defines all the symbols and methods
used. There's a final page that lists all of the
method detection limits.
The system as implemented here is obviously
tailored to the style of operation of one lab. But
many of the design decisions that were made were
selected on the basis of other requirements. For
instance, a network of ICP's working together with
this program. Ability to print EPA CLP forms was a
-------
340
prime consideration, and the ability to work with
other various types and models of ICP's.
Those features are not implemented, but the point
is that the design features allow for that as a possible
future development.
I've shown a lot of forms with many features, more
than really can be explained in a talk like this, but
they're meant to illustrate two things. One, that
the nature of ICP data quality work is significantly
different than it is for some other heavily instrumented
methods. The recognition of this has led to a consider-
ably different sort of data processing solution.
The key is that we've got a local computer system
that's under the control of the analyst and allows
him to assess and document data quality without
interrupting the operation of the ICP. The operator
can do this whole process basically in one sitting.
Alternatively, as is frequently done, the operator
could load up the auto sampler, let the ICP run, come
in in the morning and go through this process in
about 10, 20 minutes, and have produced all of this.
Thank you.
-------
341
Question and Answer Session
MR. TELLIARD: Any questions
for Larry?
MR. MILLER: Mike Miller,
Enviresponse. I was wondering, does this allow the
operator to...this has flags, but can the operator
restandardize in the middle so they don't lose all the
standards if the thing has drifted off?
DR. PENFOLD: Yes, they
can. There's a feature I didn't mention. Some of
the controls, particularly the run time controls
would be categorized as fatal QC. A continuing
calibration could be categorized that way. Take that
thallium example I was showing, where the emission
value is out of control: if that were to occur on a
continuing calibration check, all the thallium data
back to the last successful effort would be discarded,
but all the other element data would still be usable.
But the operator can still recalibrate whenever
they care to. What will happen is that a new method
header will come across and the ICP will recognize that
as essentially a break in data and the start of a new
run.
-------
342
MR. REDDY: My name is
Shekar Reddy from Advanced Chemistry Labs in Atlanta.
We have a problem with high aluminum values and high
iron in the samples. The arsenic and selenium are
showing high. Did you get a chance or opportunity to
check those...
DR. BURCH: The answer to
that is that, first of all, the solution to the
problem that you're talking about is first handled by
proper inter-element correction factors, assuming
we're talking about a simultaneous ICP.
The second solution is, we've got this host of
quality controls, particularly the dilutions, which
indicate that same type of inter-element spectral
interference. It's those controls that are going to
tell you that things are not going right. Particularly
the serial dilutions, which is the one run time test
that is not related directly to instrument performance
but is run at run time, so you know instantly that
there's a problem there.
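For reference, a serial dilution check of the kind described can be sketched as
follows; the fivefold dilution factor and the 10 percent agreement window
follow common EPA contract-laboratory practice and are assumptions here, not
this laboratory's published limits:

    def serial_dilution_ok(undiluted, diluted, factor=5.0, window_pct=10.0):
        """The diluted result, scaled back, should agree with the original."""
        expected = undiluted / factor
        return abs(diluted - expected) <= expected * window_pct / 100.0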
MR. REDDY: Thank you.
MR. TELLIARD: Anyone else?
Thank you.
-------
343
For those attending this cruise of the H.M.S.
Sinkfast out here, the departure is at 6:15 since
we're running late today, to give everybody an hour
before we depart. It's behind the building. There's
a little picture of it on your card. It's both
inside and outside. It's a little bit chilly out
there, so you may want to bring something to stay
warm. We will leave at 6:15 instead of at 6:00.
The ship is called the Molly B, you can't miss it.
It will be right behind the building here.
Thank you very much. We'll see you at
9:00 tomorrow.
-------
344
[Figure 1: ICP/QA hardware (TMA/Norcal). Block diagram: the original ICP
equipment (autosampler, autosampler controller, ICP spectrometer, ICP main
computer, and graphics printer) is tapped over simplex and duplex serial
communications by the ICP/QA equipment, an IBM PC/XT computer with 512Kb
memory, 20Mb hard disk, floppy disk drive, magnetic tape drive, monochrome
monitor, and RS-232 serial interface, which in turn has offline or serial
communications to the central lab computer.]
-------
345
[Figure 2: Screen display for ICP_SAMP, the sample description screen
(TMA/Norcal, 10/17/86 08:31). The left panel is a scrollable list of sample
keys (0479-075-001 through 0479-086-009); the right window shows one sample's
fields: lab ID 479-75-1, aliquot batch, received 04/21/86, created and updated
05/10/86, sample comment "Well water", set comment, client Sandia National
Laboratory, detection limit, DWF, matrix (water), level, QC type, aliquot
100 mL, spike ID WS15, and spike amount 2 mL.]
-------
346
[Screen display for ICP_6500: the blank data-entry form, with fields for run
date, instrument method, run QC, post QC, instrument/analyst, remark, QC test,
analysis method, subtract-blank and ND-limit flags, lab ID, QC type, spike ID
and amount, aliquot, volume, dilution, and DWF, plus an element table of
emission, instrument concentration, units, SD/CV, PO, and QC.]
-------
347
[Figure 3: The same ICP_6500 screen filled in for a calibration standard: run
date 5/6/87, instrument method CAM, QC test Calibration Standard, run QC UV,
analysis method Trace Metals in Water, instrument/analyst ICP1/JH, lab ID
0-0-3, QC type STND 3, reps 3, ISR 0.95-1.05. Emission counts and instrument
concentrations (ug/mL) are listed for BE 313 (146811, 1.0), CD 214 (142971,
1.0), CD 226 (127866, 1.0), PB 220 (110299, 5.0), SB 206 (133723, 10.0),
TL 190 (95222, 10.0), and ZN 213 (148611, 10.0).]
-------
[Figure 4: ICP working range report (TMA/NORCAL, 05/09/87, page 1). For each
element line, AG328 through ZR339 (instrument ICP1, instrument method CAM,
units ug/mL, run date 05/06/87 18:13, run time QC UV, version 1.23 of 12/1/86),
the report lists the detection limit, reporting limit, over-calibration limit,
and blank factor. The flag legend follows.]

FLAGS USED WITH INSTRUMENT CONCENTRATIONS
<  Not detected. The detection limit is shown.
P  Not detected due to peak offset. The reporting
   limit is shown if all replicates were peak offset.
?  Less than the reporting limit.

FLAGS USED WITH LAB ID'S ON ELEMENT REPORTS
+  Selected as injection with best answer for this element.
*  Same as +, but selection made by operator.
-  Injection not used because of a later run time QC failure.

FLAGS USED WITH ANSWERS ON POST RUN QC AND ELEMENT REPORTS
<  Not detected. The scaled detection limit is shown.
P  Peak offset. The scaled reporting limit is shown.
R  Not detected. The scaled reporting limit is shown.
?  Less than the scaled reporting limit.
(  Less than the scaled quantitation limit.
{  Too small to use for control chart data.
[  Less than the amount in the prep blank.
B  Less than the blank factor times the prep blank amount.
}  Unspiked answer is bigger than the biased spike amount.
)  Unspiked answer is too big to use in a control chart.
>  From over calibration data -- answer may be too small.
-------
[Figure 6: ICP run-time data report (TMA/NORCAL, 05/09/87, page 6). Two sample
blocks, lab IDs 4018-27-10 (injection 8) and 4018-27-11 (injection 9):
instrument method CAM, instrument ICP1, run date 05/06/87 18:13, run time QC
UV, post run QC WATER, subtract blank Y, analyst JEH, aliquot 150 ml, volume
25.0 mL. Each block lists flagged instrument concentrations (ug/mL) for the
element lines AG328 through ZN213.]
-------
[Figure 7: ICP run-time data report (TMA/NORCAL, 05/09/87, page 14). Spike
sample 4018-27-9 SPIK (injection 20, dilution 1/10, ISR 1.028, spike ID ICP1,
amount 2.50 ml): spiked amounts of 6.25 to 62.5 ug and flagged instrument
concentrations for AG328 through ZN213. Below it, the start of duplicate sample
4018-27-10 DUP (injection 21, dilution 1/10, ISR 1.036).]
-------
351
[Figure 8: Screen display for ICP_PICK, multi-injection data for one sample and
element (lab ID 4018-27-9, element TL 190, analyst JH, method CAM, run date
5/6/87, instrument ICP1, run QC UV). The competing results (e.g., ? 0.235 and
* <0.160 ug/mL) are listed with standard deviation, total dilution,
concentration in initial volume, units, reps, PO, and QC. Function keys:
F3 select first, F4 select last, F5 select next, F6 select previous, F7 select
none, F8 save the current selection, F9 print screen, Alt-F1 suspend this
program, Ctrl-Break abort to ICP_6500.]
-------
[Figure 10: ICP post run QC report (TMA/EAL, 05/09/87, page 18), sample
4018-27-9. Water duplicate controls: sample and duplicate answers, RPD against
a computed RPD limit and scaled quantitation limit, with several pairs marked
"Too small" and others "Failed QC". Water spike controls: concentration spiked,
spiked answer, and percent recovery against low/high limits for each element
line; e.g., ZN213: sample answer 0.0355, spiked 1.25, spiked answer 1.09,
84 percent recovery, limits 75-125.]
-------
[ICP element report (TMA/EAL, 09/17/86, page 79): per-element report for one
element, instrument ICP1, run date 09/17/86. Standards block (run time QC
method UV): STND2 (10000.0 ug/L, emission 125340), STND4 (500.0 ug/L, 6090),
and BLANK (gain 521.0, 135), flagged "QC ok". Samples block (post run QC method
WATER): injections 7 through 36 (calibration blanks, CCVS, ICVS, ICS, prep
blanks, spikes, and samples 1777-144-0 through 1777-144-22), each with QC type,
percent recovery, aliquot, volume, dilutions, DWF, instrument concentration,
and blank-corrected answer, flagged "QC ok", "QC/CC ok", "No pair", or
"Too small". A "+" in the margin marks the injection selected as the best
answer for the element.]
-------
[ICP control chart (TMA/EAL, 05/09/87, page 2; figure number illegible): water
duplicate controls for element ZN213 on instrument ICP1, QC method WATER, QC
type DUP. The chart plots the RPD of each duplicate pair, in standard
deviations, against run date and time for 22 points, with lines for the mean
and the QC limits. Tabulated statistics: 22 data points, mean 10.55, standard
deviation 13.01, mean + 3 SD = 49.58; current test limits QC 20.0, CC 50.0.]
-------
TMA/NORCAL ANALYSIS REPORT          ELEMENTS RESULTS          Page 3

Client:       TMA/ARLI              Report Date:        May 9, 1987
Set Comment:  CAM metals            Samples Received:   May 4, 1987

Sample:          MW-1E 4/30 1415    MW-2E 4/30 1717    MW-3F 5/01 1130
Matrix:          H2O                H2O                H2O
Lab #:           4018-27-9          4018-27-10         4018-27-11
Aliquot/Vlm:     150ml/25.0mL       150ml/25.0mL       150ml/25.0mL

                 Answer (mg/L)      Answer (mg/L)      Answer (mg/L)     Method
Antimony         <0.01              <0.01              <0.01             A
Arsenic           0.10              <0.02              <0.04             A
Barium            0.23               0.075              0.082            A
Beryllium        <0.0004            <0.0005            <0.0002           A
Cadmium          <0.001             <0.002             <0.001            A
Chromium         <0.01               0.015              0.012            A
Cobalt            0.0073             0.0071            <0.003            A
Copper           <0.01               0.017              0.013            A
Lead             <0.01              <0.01              <0.01             A
Molybdenum       <0.004             <0.004             <0.004            A
Nickel           <0.01              <0.009             <0.007            A
Selenium         <0.03              <0.03              <0.03             A
Silver           <0.003             <0.003             <0.003            A
Thallium         <0.03              <0.03              <0.03             A
Vanadium          0.024              0.035             <0.01             A
Zinc              0.036              0.72               0.31             A

[Some "<" qualifiers are reconstructed from garbled characters in the source.]
-------
356
May 14, 1987
MR. TELLIARD: Good morning.
We'd like to start today's agenda. We found out from
the Coast Guard that no one was lost last evening.
We've kept our record at least for this year.
A number of the papers today are going to deal
with a rather bizarre subject which is called drilling
fluids or drilling muds or drilling stuff. Over the
last 18 to 24 months, the agency has been involved in
a number of efforts dealing with onshore and offshore
oil and gas exploration, extraction and production,
both in the ITD division as far as the regulatory
approach is concerned, that is to say, writing a
national standard, and also in the permit program
where certain permits both in the Gulf of Mexico and
in Alaska are under review, adjudication.
As a result of this and also some efforts that
we have initiated on behalf of the Office of Solid
Waste, which involved going out and looking at the
application of the definition of hazardous as it
relates to drilling or pit fluids, we've generated a
bunch of data and a bunch of information. We're going
to talk about that today.
-------
329
more effective solution to this problem.
The specific tasks that we're trying to take
care of were to allow the analyst, unassisted by the
support staff, to know immediately that the instrument
is in operating specs and continues to perform within
operating parameters and to know by the end of the
analytical session that all the samples are in control
in terms of precision and accuracy, and further to
maintain quality control charts locally, automatically,
and printed summaries of all of these things at the
same time.
To better understand why the software is necessary,
it might be helpful to consider a comparison between
the nature of ICP analysis and another multi analyte
instrument method. I just happened to pick GCMS here,
and the data processing needs are quite different.
Clearly, the analysis of ICP spectral intensity
data to calculate instrument elemental concentrations
is a much easier job than analyzing and interpreting
time variant mass spectral data. The ICP requires
less front end computer power.
The ICP can test 30 elements in two to ten
minutes, and the GCMS semi-volatile run takes on the
-------
330
order of an hour. So the ICP sample throughput is
approximately six times higher, depending on the
instrument that's used and the settings. Ultimately
you're producing twice as many analyte concentrations.
Also, the ICP external quality control as required
by EPA protocol make up on average about 40 percent
of all the solutions tested. I believe it's roughly
half that number for the GCMS protocol (Strictly the
external controls).
One last factor that changes the nature of ICP
analysis and the post-analysis processing problem is
that despite ICP's larger dynamic working range as
compared to GCMS, re-analysis of ICP samples is
generally much more common than GCMS. I think that
this is partly because of the shorter time for analysis.
It's also due to the need for verification of ICP
results by other techniques, such as AA.
So all these factors, particularly the time factor,
create a much different working environment for ICP
than GCMS. Whereas the GCMS chemist has greater
processing needs at the instrument as compared to ICP,
an ICP chemist with a modern instrument has more need
of computer assistance in making sense of this massive
-------
331
intermediate result that the ICP has produced.
So to that end, the first graph, please. I won't
spend too long talking about this one, because it's
simple. These are the functional parts of the ICP
hardware system as you can see from the diagram here.
The ICP/QA system, as it's called here, is embodied
in a standard IBM PC/XT, plain vanilla IBM. It
does not control the spectrometer here; it does not
control the auto sampler; it has no algorithms
for doing peak identification, inter-element corrections
or any of the other issues that are handled right
here came with the original equipment processors and
controllers.
Where the main ICP computer sends data to its
printer, the same data are captured by the PC. The
link between these two computers is a program resident
in the PC which captures the data from an RS-232
interface while the data is being sent and while the
PC is running other programs.
The only ICP data that's required by this ICP/QA
system are the header texts which indicate the start
of the run, sample identification, QC type codes and
pollution volumes if they exist. All the other
-------
332
information is either calculated, entered by the
analyst or obtained from the main lab computer.
I'm going to demonstrate the logic of the system
to some extent by working through examples and showing
some of the video screens that show up in the order
that they normally appear.
t
You'll notice as the screens come up that they
get busier and busier, more complicated, more data.
Bear in mind that they're designed for a full time
ICP operator who wants a lot of information. An
operator really only needs very small cues to understand
what's going on.
Towards the end, there will be simpler forms
designed for the end users of the data. The screens
are being shown here mainly to demonstrate what issues
are being involved in this quality assurance effort,
rather than to try to explain all of the intricacies
of the program.
So this first one is called the sample description
screen. Over here we have a list of associated files
one can stroll through, and as each file is read the
window on the right is updated with data. The top
line, the sample I.D., has considerable latitude as
-------
333
to type of lab descriptors that can be used. It can
easily be an EPA sample number, case number, statement
of work number, or an aliquot batch which associates
the sample with a set of quality control produced at
the same time. These fields of information are
supplied by the client and will show up on the final
report. The remaining information mostly relate to
the preparation of the sample and affect calculations.
This is a facsimile of the screen that the
operator would see most frequently. It's the screen
that fills up with data as the-ICP sends it over.
This column here fills with information from the
sample description files that we were just looking
at.
Two terms on this page that I want to pay special
attention to are up here, run QC is a shortened name
for run time quality control, and the other, post PC,
post run quality control. Those two terms are very
essential to the logic of the whole system. The run
time quality controls refer specifically to to the
controls related to instrument performance, like
instrument calibration standards, verification
standards, calibration blanks and so on. The post
-------
334
run quality controls relate to all the quality control
samples that might be affected by the preliminary
sample preparation chemistry, matrix spikes, sample
duplicates and so on.
The effect of this division is that the run time
controls are calculated and the results presented
immediately to the operator. He can tell if the
instrument is performing properly as soon as the data
are acquired. The post run controls are calculated
after all the solutions have been tested and we're
sure that we have all data that might affect the
calculation of those control results.
The example here is data( just acquired from the
instrument. It happens to be a calibration standard.
By way of example, if you were to assume that for
thalium at 190 nanometers the lower control limit for
the emission on the calibration standard was 100,000.
The effect of an out of control value of less than
100,000 would be to cause the field to blink. There's
a little error counter at the bottom that gets
incremented when errors occur.
Many of the other parameters on the screen can be
controlled in the same fashion. The internal standard
-------
335
ratio here, this SD CV for the standard deviation of
replictes of the same sample analyzed multiple times.
The controls are defined in terms of the QC type.
The elements are analyte line and the particular QC
method. In th^s example it was the emission value.
When all the essential information for a particular
solution tested at the instrument, when all the data
is available, operator information has been entered,
data is acquired from all the disk files, the data
are printed and stored on disks. This is the cover
page for the run time report as it's called. The top
block is a list of the working limits for each of the
element lines, and the bottom which is very hard to
see, is the flags, the qualifiers that are associated
with the results to follow, based on arithmetic
comparisons with these working limits.
So this is a copy of a printed run time report.
Again, this happens to be sample data. It's essentially
everything that appeared on the screen that we looked
at.
This is a run time report for a sample spike.
You'll see that the concentrations or the amount of
spike added to the sample have been filled in here,
-------
336
based on the initial volume spike added in the first
screen I first showed. The percent recovery on the
spikes is not shown. That comes at the end of the
process, as I was saying.
When the program receives a new begin run header,
indicating the end of the previous run, or the operator
pushes the appropriate function key, the run time
process ends and the post run process as it's termed
here begins. The first step is to evaluate all the
multiple attempts to analyze a solution, or multiple
dilutions, whichever. The selection process is done
automatically in cases where the choice is obvious.
For example, you have one result within the working
range and the other considerably above or below.
When the results are contradictory as in this
example, the computer will stop and allow the operator
to determine which are the better answers. In this
case, the initial analysis, the bottom one, was a
solution for thalium. Again, it was below the
instrument detection limit as indicated over here.
Because another element needed a dilution, a one to
ten dilution was made, analyzed, and we get a small
signal just above the instrument detection limit here.
-------
337
So in this case/ the computer program stops, the
operator chooses one of the two or selects none, as
one of the choices. A point I want to make here is
that we're not discarding data. The work that's done
on these two solutions is fully documented before the
selection and after. The choice here is merely which
data are deemed usable as final results to present to
our client. In a different situation, a client may
care to see all the attempts to run samples, in which
case the following reports will supply that need.
As I say, they get busier and busier. The next
screen here is the result of the calculation of a
sample spike, in this case. The out of control values
are underlined, annotated, and on the right-hand
margin as indicated the disposition of these results
relative to the control files. Out of control data
are stored as the in control data and the control
limits are listed here.
The way this particular implementation of the
system has been defined, duplicate pairs below detection
limits are not included in the control files, and a
similar thing has happened here.
-------
338
This report is the last of the reports that
result immediately because of the end of the analysis.
This is probably the most useful to the analyst. It
lists all attempts to run samples for a particular
element, in this case, iron, in the same order that
they were analyzed.
The value here is that one is able to assess
quality control problems on an element-specific basis
as related to the chronology of events. Keep in mind
that there's another roughly two dozen of these pages
related to the other elements. I think that exemplifies
as well as any of these why the data processing chart
is so complex.
The right-hand margin indicates which of these
results have been selected. You've got multiple
dilutions for a relatively small number of samples
for iron.
At the end of the whole process the computer
prompts the operator with a flag indicating if any
quality control charts are waiting to be printed.
This chart is for zinc duplicates. With the chart is
a tabular summary of these data; I'll spare you that.
In this case, the lower line is the mean of these 22
-------
339
points. The next line up is the control limit. The
third line up is the three standard deviations of
these values, and the top line is...I'm sorry, two
standard deviations and three standard deviations.
So in this case we have three out of control
values that made their way to the chart. All of the
control limits are set manually. However, here and
on the table that I mentioned is all the statistical
information that should be necessary to determine
what are appropriate control limits.
This last graph I have -to show is the final
report. It's simply a listing of the data that's
most interesting to the client, sample descriptors,
results per element. There's a cover page that I
won't show that defines all the symbols and methods
used. There's a final page that lists all of the
method detection limits.
The system here is implemented as obviously
tailored to the style of operation of one lab. But
many of the design decisions that were made here were
selected on the basis of other requirements. For
instance, a network of ICP's working together with
this program. Ability to print EPA CLP forms was a
-------
340
prime consideration, and the ability to work with
other various types and models of ICP's.
The features are not implemented, but the point
is that the design features allow for that as a possible
future development.
I've shown a lot of forms with many features,more
than really can be explained in a talk like this, but
they're meant to illustrate two things. One, that
the nature of ICP data quality work is significantly
different than it is for some other heavily instrumented
methods. The recognition of this has led to a consider-
ably different sort of data processing solution.
The key is that we've got a local computer system
that's under the control of the analyst and allows
him to assess and document data quality without
interrupting the operation of the ICP. The operator
can do this whole process basically in one sitting.
Alternatively, as is frequently done, the operator
could load up the auto sampler, let the ICP run, come
in in the morning and go through this process in
about 10, 20 minutes, and have produced all of this.
Thank you.
-------
341
Question and Answer Session
MR. TELLIARD: Any questions
for Larry?
MR. MILLER: Mike Miller,
Enviresponse. I was wondering, does this allow the
operator to...this has flags, but can the operator
restandardize the middle so they don't lose all the
standards if the thing has drifted off?
DR. PENFOLD: Yes, they
can. There's a feature I didn't mention. Some of
the controls, particularly the run time controls
would be categorized as fatal QC. A continuing
calibration could be categorized that way. In that
thalium example I was showing, where the emission
value is out of control. If that were to occur on a
continuing calibration check, all the thalium data
back to the last successful effort would be discarded.
But all the other element data would be marked unusable,
But the operator can still recalibrate whenever
they care to. What will happen is that a new method
header will come across and the ICP will recognize that
as essentially a break in data and the start of a new
run.
-------
342
MR. REDDY: My name is
Shekar Reddy from Advanced Chemistry Labs in Atlanta.
We have a problem with high value aluminum and high
iron in the samples. The arsenic and selenium are
showing high. Did you get a chance or opportunity to
check those...
DR. BURCH: The answer to
that is that, first of all, the solution to the.
problem that you've talking about is first handled by
proper inter-element correction factors, assuming
we're talking about a simultaneous ICP.
The second solution is, we've got this host of
quality controls, particularly the dilutions, which
indicate that same type of inter-element spectral
interference. It's those controls that are going to
tell you that things are not going right. Particularly
the serial dilutions which is the one run time test
that is not related directly to instrument performance
but is run time, that you know instantly that there's
a problem there.
MR. REDDY: Thank you.
MR. TELLIARDs Anyone else?
Thank you.
-------
343
MR. TELLIARD: Anyone else?
Thank you.
For those attending this cruise of the H.M.S.
sinkfast out here, the departure is at 6:15 since
we're running late today, and give everybody an hour
before we depart. It's behind the building. It's
a little picture like on your card. It's both
inside and outside. It's a little bit chilly out
there, so you may want to bring something to stay
warm. We will leave, instead of at 6:00 at 6:15.
The ship is called the Molly B, you can't miss it.
It will be right behind the building here.
Thank you very much. We'll see you at
9:00 tomorrow.
-------
TMA/Norcal
344
ICP/QA Hardware
Simplex serial communications
Duplex serial communications
Offline or serialcommunications
Autosample
Controller
Autosampler
ICP
Spectrometer
ICP Main
Computer
Graphics
Printer
Original
— ICP
Equipment
IBM PC/XT
Computer
ICP/QA Equipment
II
N
Central Lab
Computer
PC/XT Hardware
512Kb memory
20Mb hard disk
floppy disk drive
magnetic tape drive
monochrome monitor
RS-232 serial interface
Figure 1
-------
KEYS
0479-075-001
O479
0479
0479
0479
O4 79
0479
0479
0479
0479
0479
0479<
0479-
O479-
0479-
04.79.
0479-
0479-
0479-
-075-002
•075-003
•075-004
•075-005
•075-011
•075-012
•075-013
075-014
075-015
086-001
086-002
086-003
086-004
086-005
086-006
086-007
086-008
086-009
SCREEN DISPLKt FOR ICP_SAMP
TMA./ Uorcal 1O/17/86 08:31-59
HOP SAMPLE DESCRIPTIO!
Lab ID 479-75-1 Received Q4/21/&6
Aliquot batch B# 186 ..~~" Created oF/10/66
Updated OS/10/as
345
Saaple ID »6-3/17/86
Sample comment Well water
Set comment
Client Sandia National Laboratory
Hot Det. limit J{I/R) Matrix
DWF Level
Water
QC type
Spike
Aliquot
Amt Un
100 niL
Volume
(BL) Spike id
100 WS15
Spike
Amt Un
2 iriL
Figure 2
-------
DISP1XH PCR IGPJ5500
Run dmte
Xnmt «eth
Run QC
Post QC ...
In»t/An«l
subt blk _(Y/M)
MD limit _ (I/R)
ICP_6500
Renark
__ QC test
Anal Beth
346
L»b ID
QC typ«
Spike ID
Spike
Aliquot
Voluae
•L
From To
Dilution
DWF
CER GET
El«»«nt E»i««lon ln«t cenc Unit*
-------
SCREEN DISPLMf FOR ICPJ5500
ICP 6500
347
Run date 5/6/87 Remark .
'Inst meth CAM QC test Calibration Standem!
Run QC UV Anal meth Trace tfetals in Mater bv 1C
.Post QC Water
Irist/Anal ICP1 JH
Subt blk £ (Y/N)
ND limit I (I/R)
Lab ID 0-0-3
QC type SEND 3
Spike ID
Soike
Aliquot
Volume mL
From To
Dilution
DWF
SER GET
Eleaent
BE 313
CD 214
CD 226
PB 220
SB 206
TL 190
ZN 213
Reo 3
P ISR 0.95 1.05
Emission Inot cone Unit* SD/CV PO OC
146811
142971
127866
110299
133723
95222
148611
1.0
1.0
1.0
5.0
10.0
10.0
10.0
W/mL __
yg/ftL
vcr/iixL
viq/mL
va/mL
va/mL
UT/rriL
t
Figure 3
-------
THA/NORCAL
ICP WORKING RANGE -- 05/09/87 13:44
Page 1
Inst iethod
CAH
Element
A6328
All
AL309
AL396
AS193
AS197
B249
BA455
BE313
CA1
CA317
CD214
CD226
C0228
CR267
CU324
FE1
FE238
GE209
HG1
HG280
Instrunent Units
ICP1
Detection
liiit
0.0160
5.00
0.110
0.110
0.110
0.110
0.0600
0.00320
0.00140
5.00
0.0740
0.00890
0.00890
0.00840
0.0180
0.0190
5.00
0.0210
0.200
5.00
0.0820
ug/iL
Run date
05/06/87 18:13
Reporting Over cali Blank
liiit
0.0530
10.0
0.370
0.370
0.370
0.370
0.200
0.0110
0.00470
10.0
0.250
0.0300
0.0300
0.0280
0.0600
0.0630
10.0
0.0700
0.600
10.0
0.270
liiit factor
1.15
115
11.5
11.5
11. S
11.5
5.25
1.15
1.15
115
11.5
1.15
1.15
1.15
1.15 :
1.15
115
11.5
11.5
'115
11.5
Run tiie
UV
Element
HN1
HN257
H0202
NI231
P214
PB220
PT203
SB206
SE196
SN189
SR407
TE214
TI334
TL190
TL267
V290
W207
Y371
ZN1
ZN213
2R339
QC ISR
Detection
li.it
5.00
0.0660
0.0240
0.0360
0.300
0.0790
0.240
0.0880
0.180
0.0760
0.0100
0.220
0.00500
0.160
0.160
0.0320
0.0600
0.0110
5.00
0.00630
0.0400
QC liiits
-
Reporting
liiit
10.0
0.220
0.0800
0.120
3.00
0.260
0.800
0.290
0.600
0.250
0.0330
0.730
0.0500
0.530
0.530
0.110
0.600
0.0370
10.0
0.0210
0.400
Version
Ver 1.
Over cali
liiit
11.5
11.5
5.25
5.25
11.5
5.25
11.5
11.5
11.5
11.5
1.15
11.5
1.15
11.5
11.5
1.15
11.5
11.5
115
11.5
22.0
23 12/1/86
Blank
factor
FLAGS USED WITH INSTRUMENT CONCENTRATIONS
< Not detected. The detection liiit is shown.
P Not detected due to peak offset. The reporting
liiit is shown if all replicates were peak offset.
? Less than the reporting liiit.
FLAGS USED WITH LAB ID'S ON ELEMENT REPORTS
+ Selected as injection with best answer for this eleient.
* Saie as +, but selection iade by operator. *
- Injection not used because of a later run tine qc failure.
FLAGS USED HITH ANSMERS ON POST RUH PC AND ELEMENT REPORTS
( Not detected. The scaled detection liiit is shown.
P Peak offset. The scaled reporting liiit is shown.
R Not detected. The scaled reporting liiit is shown.
? Less than the scaled reporting liait.
( Less than the scaled quantitation Unit.
{ Too snail to use for control chart data.
[ Less than the amount in the prep blank.
B Less than the blank factor times the prep blank aiount.
} Unspiked answer is bigger than the biased spike a«ount.
) Unspiked answer is too big to use in a control chart.
> Froa over calibration data -- answer nay be too stall.
Figure 4
CO
£•
00
-------
THA/NORCAL
ICP RUN-TIME DATA — 05/09/87 13:46
Page 6
' _ _ . — ._ — , .
Inst aethod Instrument Run date Injection Run time'QC Post run OC Subt blank
CAM ICP1 05/06/87 18:13 8 UV HATER Y
Lab ID 40J8-27-10 Reos ISR 1. 154 - 1 154
Dilutions
Eleaent
AG328
AS193
BA455 •>•
BE313
CD214
CD226
C0228
CR267
CU324
M0202
NI231
PB220
SB206
SE196
TL190
V290
ZN213
Aliauot 150 al Volume 25.0 aL DWF
Inst cone Units Std dey PP OC
< 0.0160 ug/aL
< 0.110 ug/aL 1
0.454 ug/aL
? 0.00310 ug/aL
? 0.0126 ug/aL
? 0.0185 ug/aL
0.0424 ug/aL
0.110 ug/aL
0.103 ug/aL
< 0.0240 ug/oL
.? 0.0539 ug/aL
< 0.0790 ug/aL :
< 0.0880 ug/aL ; .
< 0.180 ug/aL i .
< 0.160 ug/aL '-. 1
0.209 ug/aL
4.39 ug/aL
Inst method Instrument Run date Injection Run time QC Post run QC Subt blank
CAM ICP1 05/06/87 18:13 9 UV WATER Y
Lab ID 4018-27-11 Reps ISR 1.143 - 1.143
Dilutions
Eleaent
A6328
AS193
BA455
BE313
CD214
CD226
C0228
Aliquot ISO tl Volume 25.0 mL DUF
Inst cone Units Std dev PO Op
< 0.0160 ug/fflL 1
? 0.229 ug/aL x
0.498 ug/aL
< 0.00140 ug/aL /
< 0.00890 ug/aL 1
? 0.0131 ug/aL
? 0.0182 ug/aL
Analyst
JEH
Analyst
JEH
£»
Figure 6
-------
THA/NORCAL
ICP RUH-TIHE DATA — 05/09/87 13:54
Page 14
lost nethod
CAN
Instrument
ICP1
Run date
05/06/87 18:13
Injection
20
Run tine OC
UV
Post run QC
HATER
Subt blank
Y
Analyst
JEH
Lab ID 4018-27-9. SPIK
Dilutions 1/10
Reps _ ISR 1.028 - 1.028
Aliquot 50.0 nl Voluue 25.0 aL DHF
Spike ID ICP1
Ant 2.50 •!
Element uq
AG328
AS193
BA455
BE313
CD214
CD226
C0228
CR267
CU324
M0202
NI231
PB220
SB206
SE196
TL190
V290
ZN213
Inst method
CAM
Spiked Inst cone Units Std dev PO OC
<
62.5 ?
6.25
6.25
6.25 ?
6.25
6.25 ?
6.25 ?
6.25 ?
31.3
31.3 ?
31.3 ?
62.5 ?
62.5 <
62.5
6.25 ?
62.5
0.0160 ug/iL
0.311 ug/«L
0.0748 ug/iL
0.0226 ug/nL
0.0243 ug/«L
0.0326 ug/iL
0.0226 ug/flL
0.0209 ug/iL
0.0234 ug/tL
0.116 ug/«L
0.116 ug/nL
0.129 ug/nL ; :
0.205 ug/*L i
0.180 ug/iL j
0.564 ug/fflL :
0.0374 ug/»L i
0.269 ug/«L
Instrument Run date Injection Run tiae QC Post run QC Subt blank Analyst
ICP1
05/06/87 18:13 21 UV WATER Y JEH
Lab ID 4018-27-10. PUP
Dilutions 1/10
Reps _ ISR 1.036 - 1.0
Al,iquot 50.0 Hi Volume
DUF
Element
AG328
AS193
BA455
BE313
CD214
CD226
C0228
Inst cone Units Std dev Pp_ OC.
0.0160 ug/nL
0.127 ug/iL
0.0160 ug/mL
0.00140 ug/mL
0.00890 ug/mL
0.00890 ug/«L
0.00840 ug/mL
CO
01
o
Figure 7
-------
Analyst
Slnst meth
Run date
JH
CAM
5/6/87
SCREEN DISPIA* FOR ICPJPICK
IICP INJECTION DATJ
Inst ICP1
Run QC UV
Lab ID 4018-27-9
351
Post QC
QC type
Element
water
TL 190
1
JLTI-INJECTION DATA FOR ONE SAMPLE AND ELEMEJ
Total Cone in Re
Inst cone std dev dilution initial vol Units ps PO OC
? 0.235 Q.Q53 10. ? 2.35 yg/faL 3
* <0.160
0.058
<0.16
F3 « .Select first F5
F4 = Select last F6
F9 «= Print screen F7
Select next
Select previous
Select none
F8 » Save the current selectio
Alt-Fl « Suspend this program
Ctrl-Break » Abort to ICP 6500
Figure 8
-------
THA / EAL
ICP POST RUH QC -- 05/09/87 14:00
Page 18
Elenent
PB220
SB206
SE196
TL190
V290
ZN213
Lab ID
4018-27-9
Elenent
AS193
BA455
BE313
CD214
CD226
C0228
CR267
CU324
M0202
NI231
PB220
SB206
SE196
TL190
V290
ZN213
Inj
t
8
8
8
8
8
8
Dup
1
11
11
11
11
11
11
QC
Answer
ug/nl
< 0.0132 (
< 0.0147 <
< 0.0300 <
< 0.0267 (
{ 0.0348 ?
0.723
type
SPIK
Inj
1
ill !•
7
16
7
7
7
7
7
7
7
7
7
7
7
7
7
7
Spk
10
10
10
10
10
10
10
10
10
10
10
10
10
10
10
10
Sanp ans
{ 0.103
} 0.234
? 0.000367
< 0.00148
? 0.00278
{ 0.00727
? 0.00987
? 0.00992
< 0.00400
? 0.0108
< 0.0132
< 0.0147
< 0.0300
< 0.0267
{ 0.0242
{ 0.0355
Duplicate
answer
0.0395
0.0440
0.0900
0.0800
0.0281
0.754
Inst nethod
CAH
Cone
spiked
1.25
0.125
0.125
0.125
0.125
0.125
0.125 {
0.125 {
0.626
0.626
0.626 {
1.25
1.25 {
1.25 {
0.125 {
1.25
Computed Scaled
RPD
0.0
0.0
0.0
0.0
21.3
4.2
RPD lin quan lin Hater duplicate controls Ctrl chrt
200.
200.
200.
200.
174.
20.
Instrunent
ICP1
Spiked
answer
1.00
0.324
0.109
0.0980
0.101
0.109
0.114
0.119
0.520
0.510
0.505
1.08
0.995
0.785
0.130
1.09
Pet
recov
22.
72
87
78
: 79
81
83
87
83
80
81
86
80
63
85
84
0 0.130
0 0.145
0 0.300
0 0.265
9 0.0550
0 0.0105
Run date
05/06/87 18
: low I high
**
Post run QC
:13 HATER
Scaled
liiit Jj«it auan lin Hater spike
80 120
63 137
80 120
80 120
80 120
80 120
68 132
66 134
80 120
80 120
72 128
80 120
68 132
72 128
41 159
75 125
0.185 Failed QC
0.00550
0.00235
0.0150 Failed OC
0.0150 Failed QC
0.0140
0.0300
0.0315
0.0400
0.0600 Failed QC
0.130
0.145
0.300
0.265 Failed QC
0.0550
0.105
Too snail
Too snail
Too snail
Too snail
Too snail
Subt blank Analyst
Y JEH
controls Ctrl chrt
Too snail
Too snail
Too snail
Too snail
Too snail
Too snail
Too snail
Too snail
Too snail
Figure 10
01
to
-------
TMA / EAL
E 1 e m e? n t.
ICP ELBCNT REPORTS — 09/17/86 17:29
I n ^.t t in e t- h o d
LJV SGAM
I ri s t. r u m e n t:
ICF»i
Page 79
Run date
OSS'/X^/ShS. O^zS'S"'
Standa»-ds: Run ti«e QC method UV ItC
Inj
«_ QC Type Cone/Gain Lhits Emission
Re
CV / SD PS PO
2 STND2 10000.0 ug/L 125340
4 STND4 500.0 ug/L 6090
6 BLANK 521.0 Gain 135
Samples: Post run QC method WATfR Analyst f_H_
Inj ug spkd/ Aliquot Volume Dilutions
Lab Id » QC Type % recov Amt Uh aL From To
+
4
+
+
+
+
+
+
+
+
0-0-7
0-0-8.
0-0-9
o-o-io
0-0-11
1777-144-0
1777-144-0
1777-144-0
1777-144-19
1777-144-19
1777-144-19
1777-144-19
0-0-19
0-0-20
1777-144-20
1777-144-20
1777-144-20
1777-144-20
1777-144-20
1777-144-20
1777-144-20
1777-144-20
0-0-29
0-0-30
1777-144-21
1777-144-21
17X7-144-21
1777-144-21
1777-144-22
'4777-144-22
7
8.
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
3 6
ICVS
10X
CALBLK
CCVS
ICS
PBLK
SPIK
SPIK
CALBLK
CCVS
SPIK
SPIK
SPIK
SPIK
CALBLK
CCVS
98 1.00
100 1.00
1.00
99 1 . 00
1.00
i.oo:
150 :
150 I
150 :
150 '
150
150
1.00
96 1.00
150 :
150
150
150
150
150
150
150
1.00
97 1.00
150
11)0
150
150
150
150
mL
fflL
mL
fflL
fflL
bl
ml
ml
ml
ml
ml
ml
diL
mL
ml
ml
ml
ml
ml
ml
ml
ml
mL
IRL
ml
ml
ml
nil
ml
ml
1.00
1.00
1.00
1.00
1.00
25.0'
25.0
25.0
25.0
25.0
25.0
25.0
1.00
1.00
25.0
25.0
25.0
25.0
25.0
25.0
25.0
25.0
1.00
1.00
25.0
25.0
25.0
25.0
25.0
25.0
1.00
1.00
1.00
1.00
1.00
1.00
1.00
1.00
1.00
1.00
1.00
1.00
1.00
1.00
1.00
10.0
1000
100
10.0
1000
100
10.0
1000
100
10.0
\
1000
100
10.0
1000
100
QC ok
QC ok
Inst cone Re
_DWF UQ/L Std Dsv DS PO
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
?
>
.000
.000
.000*.
.ooq'. <
.000,
.000
.000
<
.000 ?
.CiOO
.000
.000 >
.000 ?
.000
.000
.000 >
(
.000 ?
.000
.000
.000 >
.000
.000
4392.0
80.0
20.0
4963.0
18386.0 1
527.0
465.0
4395.0
20.0
108.0
1037.0
9194.0
20.0
4822.0
40.0
368.0
3565.0
20103.0
40.0
351.0
3744.0
19096.0
20.0
4826.0
40.0
430.0
4402.0
19180.0 1
376.0
3831.0
Blank corrected
ArtsMer Lhits
4.89 ug/raL
0.0800 ug/fflL
0.0200 ug/fflL
4.96 ug/B)L
18.4 ug/fflL
13.2 ug/bl
0.775 ug/al
0.732 ug/al
3.33 ug/ffll
1.80 ug/ffll
1.73 ug/al
1.53 ug/ful
0.0200 ug/«L
4.82 ug/sL
6.58 ug/ffll
6.13 ug/«l
5.94 ug/al
3.35 ug/al
6.58 ug/al
5.85 ug/al
6.24 ug/al
3.18 ug/al
0.0200 ug/aL
4.83 ug/«L
6.58 ug/al
7.17
QC ok
QC/CC ok
QC ok
No pair
QC ok
Too snail
QC ok
7.34 ug/al
3.20 ug/el
62.
.7
63.8
ug/9il
ug/ffll
-------
[TMA/EAL ICP Control Chart, 05/09/87 16:16, page 2: element ZN213, QC method WATER, QC type DUP, instrument ICP1 (water duplicate controls, analysis by ICP). Chart statistics: 22 data points, mean 10.55, std dev 13.01, mean + 3 SD = 49.58; current test limits QC 20.0, CC 50.0. Plotted control chart not reproduced.]
-------
TMA/NORCAL ANALYSIS REPORT
ELEMENTS RESULTS
Page 3
Client: TMA/ARLI
Set Comment: CAM metals
Report Date: May 9, 1987
Samples Received: May 4, 1987

Sample: MW-1E 4/30 1415, H2O
TMA/NORCAL Lab #: 4018-27-9
Aliquot/Vlm: 150mL/25.0mL

Element        Answer (mg/L)   Method
Antimony       <0.01           A
Arsenic        0.10            A
Barium         0.23            A
Beryllium      =0.0004         A
Cadmium        <0.001          A
Chromium       =0.01           A
Cobalt         0.0073          A
Copper         =0.01           A
Lead           <0.01           A
Molybdenum     <0.004          A
Nickel         =0.01           A
Selenium       <0.03           A
Silver         <0.003          A
Thallium       <0.03           A
Vanadium       0.024           A
Zinc           0.036           A

Sample: MW-2E 4/30 1717, H2O
TMA/NORCAL Lab #: 4018-27-10
Aliquot/Vlm: 150mL/25.0mL

Element        Answer (mg/L)   Method
Antimony       <0.01           A
Arsenic        <0.02           A
Barium         0.075           A
Beryllium      =0.0005         A
Cadmium        =0.002          A
Chromium       0.015           A
Cobalt         0.0071          A
Copper         0.017           A
Lead           <0.01           A
Molybdenum     <0.004          A
Nickel         =0.009          A
Selenium       <0.03           A
Silver         <0.003          A
Thallium       <0.03           A
Vanadium       0.035           A
Zinc           0.72            A

Sample: MW-3F 5/01 1130, H2O
TMA/NORCAL Lab #: 4018-27-11
Aliquot/Vlm: 150mL/25.0mL

Element        Answer (mg/L)   Method
Antimony       <0.01           A
Arsenic        =0.04           A
Barium         0.082           A
Beryllium      <0.0002         A
Cadmium        <0.001          A
Chromium       0.012           A
Cobalt         =0.003          A
Copper         0.013           A
Lead           <0.01           A
Molybdenum     <0.004          A
Nickel         =0.007          A
Selenium       <0.03           A
Silver         <0.003          A
Thallium       <0.03           A
Vanadium       =0.01           A
Zinc           0.31            A
-------
356
May 14, 1987
MR. TELLIARD: Good morning.
We'd like to start today's agenda. We found out from
the Coast Guard that no one was lost last evening.
We've kept our record at least for this year.
A number of the papers today are going to deal
with a rather bizarre subject which is called drilling
fluids or drilling muds or drilling stuff. Over the
last 18 to 24 months, the agency has been involved in
a number of efforts dealing with onshore and offshore
oil and gas exploration, extraction and production,
both in the ITD division as far as the regulatory
approach is concerned, that is to say, writing a
national standard, and also in the permit program
where certain permits both in the Gulf of Mexico and
in Alaska are under review, adjudication.
As a result of this and also some efforts that
we have initiated on behalf of the Office of Solid
Waste, which involved going out and looking at the
application of the definition of hazardous as it
relates to drilling or pit fluids, we've generated a
bunch of data and a bunch of information. We're going
to talk about that today.
-------
357
The impact of most of this data is the fact that
somewhere down the road in the next six to eight
months, the Office of Solid Waste has to report to
Congress on an existing exemption that covers the
disposal of drilling muds and cuttings, and a
clarification as to how they're going to regulate it.
At the same time in the next eight to 12 months, the
Office of Water Regulations and Standards is proposing
a regulation on offshore oil and gas drilling muds
and cuttings disposal as it relates to water.
Meanwhile, we have some ongoing work with a
number of the states dealing with onshore oil and gas
extraction, covering both muds, cuttings and produced
water. So the ramifications of this data are rather
far-reaching in the sense of the regulatory applications.
Our first speaker today is from S-CUBED, and is
going to be talking about one of our favorite subjects
at this meeting, the TCLP methodology, which of course
is the test method where you take a container, apply
the eye of a newt and a bat's wing, shake it for 38
seconds and then proceed to analyze it. Lee is going
to talk on some data that he has generated over the
last year to support the Office of Water and the
-------
358
Office of Solid Waste in this review. Lee Helms.
-------
359
MR. HELMS: Thank you. I
see that the crowd is a little bit smaller this
morning. I trust that most of you survived the booze
cruise, and for those of you that haven't survived
and are here, I'll try and talk a little low. I
don't have that many slides, and the numbers and
letters are real big, so you'll be able to read them
very easily.
I'd like to begin by presenting some background
information relating to the onshore oil and gas study
that was undertaken by the industrial technologies
division of the EPA. As I'm sure you're aware, the
EPA monitors and regulates the oil and gas extraction
industry under several major environmental statutes.
Some of these statutes include the Clean Water Act,
the Safe Drinking Water Act and the Resource Conservation
and Recovery Act.
To fulfill some of the requirements of these
acts, a study was undertaken by the EPA to identify
and quantify major waste constituents associated with
the oil and gas extraction industry.
This first slide summarizes the topic of my
discussion this morning. It is the determination of
-------
360
organic compounds in waste and TCLP extracts of waste
from oil and gas extraction operations, using isotope
dilution GCMS.
At this point, I'd like to point out that this
study included metals and conventionals, but they
won't be discussed this morning. I'm only going to
be discussing the organics.
The second slide pretty much summarizes the
major objectives of the project, and they are to
provide data to be included in the report to Congress
that went out this January, was it? It will be going
out? Then how come there's such a big rush that you
bodily threatened me?
MR. TELLIARD: We talked to
the court and they're going to give us an extension.
MR. HELMS: I can't tell you
the threats this man used, and begged.
The second objective is to identify and quantify
major waste constituents, characterize the complexity
and diversity of the wastes, and finally to provide a
representative overview of the waste characteristics.
Upon the completion of this study, all of these
objectives were met. Samples were taken from four types
-------
361
of sites. These include drilling sites, production
sites, centralized pits and finally, centralized
treatment facilities.
Sampling sites were randomly selected and
distributed throughout the United States. Specifically
selected sites were chosen to substitute for those
randomly selected sites that could not be accessed
due to travel restrictions and accessibility, the
weather, what have you.
A total of 49 sites were sampled throughout the
United States, the majority of which were either
drilling sites or production sites. From these sites
a total of 101 samples were taken, 59 of which can be
classified as liquids and 42 that were classified as
soils or solids. For this presentation I use the
terminology soils, sludges, solids, they're all
interchangeable.
The analytes chosen for this particular project
were selected from various regulatory lists. These
include the priority pollutant list, the priority
pollutant Appendix C list, the RCRA Appendix 8 list,
the Michigan list, many of which were pesticides which
I will not be talking about this morning, the superfund
-------
362
hazardous substances list, the paragraph 4C list and
finally, the ITD list.
While editing and compiling the analyte list for
this particular project, a number of criteria were
taken into consideration. The three primary criteria
that were considered were the availability of suitable
analytical technique, the availability of an analytical
standard of known purity or concentration and finally,
the solubility of the analyte in water.
534 analytes were included in this study, 434 of
which are classified as organics. S-CUBED was directly
involved in the analysis of 231 of these organic
compounds which can be sub-classified into the
volatiles and the semi-volatiles. Other organics
that were searched for under this project were dioxins,
furans, herbicides and pesticides.
Waste samples were analyzed using established
EPA methods. For the volatiles analysis, Method
1624B was used. For the semi-volatiles, Method 1625B
was used.
These two methods are the isotope dilution
methods. For those of you that are not familiar
with the isotope dilution methodologies, they are very
-------
363
similar to Methods 624 and 625, with one major
difference which I feel is an important advantage.
That is, prior to the sample extraction or purging of
the sample in the case of volatiles, the sample is
spiked with a solution which contains the labeled
analogs of the analytes that are being tested for;
the concentration of these analogs is known.
The sample is then sent through either the
purging step or the extraction step. After the sample
is analyzed using GCMS, the concentration of the
analytes that are tested for is calculated, using a
response factor relative to its labeled analog,
which is in turn calculated using the internal standard
method.
Using this methodology, you compensate for the
variability of the analytical technique. Examples of
this would be the purging efficiency, the extraction
efficiency, any loss of analyte during the concentration
step through the three ball Snyder column or loss of
analyte during the drying step.
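To make the quantitation step concrete, here is a minimal sketch of the labeled-analog calculation, assuming a single measured response factor; the variable names are illustrative, not taken from the text of Methods 1624B/1625B.

```python
# Hedged sketch of isotope dilution quantitation. Assumes the labeled
# analog was spiked at a known concentration before purging or
# extraction, and that a response factor (rf) relative to the analog
# was measured during calibration.

def native_concentration(area_native, area_labeled, labeled_spike_conc, rf):
    """Concentration of the native analyte.

    Because analyte and analog suffer the same purging, extraction,
    concentration, and drying losses, those losses cancel in the ratio.
    """
    return (area_native / area_labeled) * labeled_spike_conc / rf

# Example: equal-sensitivity pair (rf = 1.0), analog spiked at 100 ug/L,
# native peak half the analog peak -> 50 ug/L regardless of recovery.
print(native_concentration(5.0e5, 1.0e6, 100.0, 1.0))  # 50.0
```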
For the analysis of the solids samples, slight
variations of the 1624B and 1625B methods were in-
corporated. Two of these variations, and these
-------
364
are included in 624 and 625, are the use of a gel
permeation chromatography or GPC cleanup procedure,
and the substitution or utilization of a sonication
extraction procedure in lieu of the liquid-liquid
continuous extraction.
Getting back to the GPC cleanup, a number of the
soils that were analyzed, I would have to say well
over 90 percent, would not even concentrate to the
final volume needed for GCMS analysis until after
they were sent through the GPC cleanup procedure. So
it was definitely a necessary step.
In addition to the direct analyses of the soil
or solid samples, approximately one half of these
were leached using the EPA's Toxicity Characteristic
Leaching Procedure, or TCLP, and the leachate was
then analyzed using Methods 1624B and 1625B.
Many of you are familiar perhaps with the
extraction procedure, or EP. I think it would be a
good idea to compare the EP to the TCLP. This next
slide summarizes some of the major differences involved
in the leaching steps themselves. You can see that
EP allows for the use of only one buffer solution which
is an acetic acid buffer solution, a 0.5 normal solution.
-------
365
The TCLP allows for the use of two buffer or leaching
solutions. Both of these are also acetic acid
solutions. One is at a pH of 4.93 and the second at
a pH of 2.88.
The second solution is used for the leaching of
highly alkaline soils or solids which is not covered
under the EP. Other differences would be the filter
media, and the units here should be microns, by the
way, filtration pressure, the length of extraction
and the extraction temperature.
An important extension of the EP procedure which
is not listed here is the inclusion of organic analytes
in the leaching and analysis process. The organic
analytes include the volatiles and the semi-volatiles.
Also I might mention that the TCLP includes analysis
of metals. Again, I will not be talking about those
this morning.
When a waste sample is analyzed for volatile
analytes, a procedure referred to as the zero headspace
extraction or ZHE, is used. Here I have listed the
steps of the ZHE. The first step would be to load
the sample into the extraction vessel. The next slide
will show you this. Then you have an initial solid
-------
366
liquid separation. Any liquid that is drawn off at
this point is referred to as the expressed liquid.
Next you introduce the extraction or leaching
fluid into the vessel. This is again the acetic acid
buffer solution. For the ZHE this would be the pH 4.93
solution only; you're not allowed to use the pH 2.88.
Next you would leach or rotate the sample for 18
hours. After this you do the final liquid-solid
separation. The liquid that is drawn off here is
referred to as the leachate, and the leachate can or
cannot be combined with the express liquid, depending
on what type of numbers you're concerned about
achievinq. If you do combine these two, a weiqht
averaqe concentration can be determined either
mathematically or empirically.
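A minimal sketch of the mathematical route, assuming the expressed liquid and the leachate have separately measured volumes and concentrations; the function name is illustrative.

```python
def weighted_avg_conc(v_expressed_l, c_expressed, v_leachate_l, c_leachate):
    """Volume-weighted average concentration of the combined liquids.

    Volumes in liters; concentrations in any consistent unit.
    """
    total_mass = v_expressed_l * c_expressed + v_leachate_l * c_leachate
    return total_mass / (v_expressed_l + v_leachate_l)

# Example: 0.2 L expressed liquid at 40 ug/L plus 1.8 L leachate at
# 10 ug/L -> 13 ug/L for the combined liquid.
print(weighted_avg_conc(0.2, 40.0, 1.8, 10.0))  # 13.0
```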
During the entire extraction procedure, it is
imperative that a vessel which effectively precludes
headspace is used. Here is a diagram of the zero
headspace extraction vessel. This type of vessel
allows for the initial liquid solid separation,
leaching and the final extract filtration without
ever having to open the vessel to atmosphere.
-------
367
Along with the inclusion of the organic analytes
in the TCLP, other advantages of the TCLP over the EP
would include less technician interaction with the
sample extraction, simplification of the sample
extraction procedure, and standardization of the
equipment and procedure.
Now I'd like to discuss the types of analytes
that were typically detected in the analysis of the
oil and gas waste samples. This slide tabulates the
three most frequently detected groups of compounds.
These are the aromatic hydrocarbons, the aliphatic
hydrocarbons and the organic acids.
The predominant group of analytes that was seen
in both the liquid and solid samples were the aliphatic
hydrocarbons, and these are the n-alkanes ranging
from n-decane to n-triacontane. As you can see, they
were detected in approximately 50 percent of the
liquid samples and 70 percent of the solid samples,
and less than five percent in the TCLP extract samples.
The aromatic hydrocarbons were detected in
approximately 50 percent of the solid and liquid
samples and 30 percent of the TCLP samples. Typical
of these aromatic hydrocarbons that were detected would
-------
368
be benzene, toluene, ethylbenzene, all of which are
volatiles, and polynuclear aromatics such as naphthalene,
2-methylnaphthalene, fluorene.
Some heterocyclic compounds were also found in a
few samples. These would be dibenzofuran and
dibenzothiophene. They were found in less than ten
percent of the samples.
Typical of the organic acids that were detected
would be phenol, 2,4-dimethyl phenol, o and p cresol,
m-cresol was not included in this study, benzoic
acid and hexanoic acid. As you can see, they were
detected in approximately 30 percent of the liquid
samples and less than five percent of the solid and
TCLP samples.
This next slide summarizes a general comparison
of the results determined by the direct analysis of
solid samples and the TCLP analysis of those same
solid samples. Aromatic hydrocarbons were detected
at medium concentrations at both the direct and the
TCLP analysis. For this discussion, medium concentration
is a concentration relative to the final volume of the
extract, which is one milliliter. The
concentrations would be ranging from, say, 50 micrograms
-------
369
per milliliter to 150 micrograms per milliliter.
Again, if you want to get some feel for what that
would be for the sample itself, most of these solid
samples were analyzed at sample sizes of 30 grams, 3 grams,
0.3 grams, and on down by factors of ten. So when I talk
about a medium concentration of 100 micrograms per
milliliter, to change those units to micrograms per
kilogram at that sample size, you simply multiply by one
thousand divided by 30, and that should get it for you.
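A small sketch of that conversion, assuming the one-milliliter final extract volume stated above; the default values are only the example's numbers.

```python
def extract_to_sample_basis(conc_ug_per_ml, final_vol_ml=1.0, sample_g=30.0):
    """Convert an extract concentration (ug/mL) to a whole-sample
    concentration (ug/kg), given the final extract volume and the
    mass of sample extracted."""
    micrograms_in_extract = conc_ug_per_ml * final_vol_ml
    return micrograms_in_extract / sample_g * 1000.0  # grams -> kilograms

# 100 ug/mL in a 1 mL extract from a 30 g sample, i.e. multiply
# by one thousand divided by 30:
print(extract_to_sample_basis(100.0))  # ~3333 ug/kg
```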
The aliphatic hydrocarbons were detected at high
concentrations in the direct analysis, and low
concentrations in the TCLP analysis. One reason for
this might be the low solubility of the aliphatic
hydrocarbons in water.
The organic acids were detected at low concentrations
in the direct analysis, and this would be less than
50 micrograms per milliliter relative to the final volume
of the extract, and in the TCLP analysis at medium
concentrations. Two possible explanations for this
might be the pH of the leachate, which is in all
cases less than five, or the higher solubility of the
organic acids in water.
Finally, I'd like to relate some of the analytical
-------
370
problems and difficulties that were encountered during
the waste sample extraction and analysis steps.
To begin with and as was expected, there was a
high concentration of aliphatic hydrocarbons in most
of the samples, and especially the solid samples.
This necessitated a very, very small sample size to
be used during the extraction and analysis step. For
many of the solids, we went down to .03 grams.
This next slide is a reconstructed ion chromatogram
of, believe it or not, a relatively clean volatile
analysis. Somewhere buried in there are the three
internal standards, but there are no discrete peaks
that relate to them; you just cannot see them.
They're just dwarfed. In
fact, this particular display was normalized so that
half of the peaks at least would not be off the
chromatogram.
Here is a chromatogram of a typical sample
extract. Again, there is no way that you can correlate
one of these peaks to the internal standard. It is
buried underneath all of that mess. Again, this one
was also normalized so I could get everything on scale.
If I'm not mistaken, this is a 0.3 gram of...and I
-------
371
use this next term very, very loosely, water sample.
This sample more closely resembled crankcase oil, or if
any of you have ever changed your manual transmission
oil, it's that thick, and we're talking about 80 to
90 weight oil. So these samples posed quite a few
analytical problems.
Also, you can see from the chromatogram that
there are a number of peaks. These peaks more than
likely correspond to at least two to three to four
compounds. There are a vast number of compounds
involved in the analysis here, most of which we were
not looking for.
Also observed in a number of the soil or solid
samples, and I really don't have much of an explanation
for this, we observed tremendous retention time shifts,
occurring sporadically. There was no correlation
between the type of sample, weight of the sample,
time of day, what have you. There was no problem
with the GC oven itself. That was very carefully
checked.
Without the advantage of the isotope dilution
methodology, analysis of samples of this nature, as
you probably can guess, would have been very, very
-------
372
difficult, especially when retention time shifts were
experienced.
The advantage of the isotope dilution methodology
is, if you are looking for a particular analyte and
it's not where it should be, you can simply find its
labeled analog, and coeluting or one or two scans
after it will be the compound of interest, if it is
present.
Needless to say, these samples created a lot of
problems and at best were a nightmare. Please send
us wellwaters, please, I beg you. In fact, I doubt
if these samples could have been analyzed by any
other method as accurately and as quickly.
That concludes my presentation for today. If
there are any questions, I'll try and answer them.
-------
373
Question and Answer Session
MR. TELLIARD: Questions?
MR. PRESCOTT: I'm Bill
Prescott. I would like to ask what precautions you
took when you used the .03 gram sample to be sure it was
representative of the...
MR. HELMS: Of the sample
itself? Well, you have to take into account that
when these samples were initially taken,...I'm probably
slightly exaggerating here, but if you can envision a
basketball court of sludge, I feel that a 30 gram
sample or a .03 gram sample is equally representative
of the sampling site itself.
Again, most of these samples were very, very
similar to crankcase oil. They were relatively
homogeneous in appearance.
MR. TELLIARD: Bill, there
were two samples taken at the pit, the drilling pit.
There was the bottom sample taken, and that was a
composite sample taken with an auger and composited
on site. The supernate in the pit was taken on a
grid pattern at different depths and then composited
on site. So all the samples, whether they're the
-------
374
supernate in the pit or the pit bottom, were all
composites.
The produced water samples that were taken
offline were generally grabs. You let the system run
awhile and then you take a number of grabs off of the
...those were generally taken under a pressure system.
MR. CHANG: You referenced
the gel permeation cleanup, and you mentioned also
that it doesn't help much, right?
MR. HELMS: No, it helps
very, very much. Analysis of the solid samples that
we encountered, for the vast majority of them would
not have been possible to analyze without the cleanup
procedure. It was a great help.
MR. CHANG: Have you evaluated
how much oily component has been removed by this system?
MR. HELMS: By the GPC
cleanup procedure? Well, the cleanup procedure
generally will take out compounds of molecular weight
of approximately a thousand, I believe, or higher,
none of which we are looking for in the study.
MR. CHANG: I get that
problem also, a kind of oily stuff, and it does not
-------
375
really remove...the window of elution. If we base
on that acid...when we take that window to collect
the sample then we have to check the oily component
at the same time. We cannot exclude it because it will
be eluted at the same time and will be then...I have
monitored so far the solvent...I just spike them in...
MR. HELMS: Using corn oil?
MR. CHANG: No, no, it's
crude Louisiana oil. So I could not really remove
more than half of it because half of it really elutes
into the aliquot that I was taking out. So as far as
that oil is concerned, I still have a nightmare trying
to...because the target is to get better and better
sensitivity because of the...I cannot load a GCMS
system with a lot of oil junk.
MR. HELMS: I did not state
that the samples were spiked with the labeled analogs
prior to the GPC cleanup procedure. Therefore, any
analyte of interest that might have been lost;
correspondingly, its labeled analog would have been
lost. Again, that is the advantage of the isotope
dilution methodology. It accounts for any of these
variabilities or percent recoveries.
-------
376
The GPC cleanup procedure, if not used...again,
some of these extracts solidified during the
concentration step. It's kind of hard to imagine a
three gram sample once you've sonicated and extracted.
Then concentrating this approximately 300 mils of
methylene chloride, and once you get down to about 50
mils you have a solid.
There's basically no way around that. You could
not do a GPC cleanup procedure on this. You would
basically destroy your instrument for a few days,
cleaning it up. The only way around this is to
decrease the sample size. Again, every soil was sent
through...almost every soil was sent through GPC.
The one that I had displayed was a .03 gram, and it
was labeled as a water, but it more closely resembled
crankcase oil. Even that would not concentrate to a
final volume of one mil until after the GPC cleanup
procedure.
MR. CHANG: Another question,
a short one. Have you used the...this is out of
methodology that you used for this project in previous
S-CUBED methodology? You mentioned about...the three
fractions? Do you recall that?
-------
377
MR. HELMS: No, I'm sorry,
I don't. I don't follow your question.
MR. CHANG: ...elution
pattern using a chromatography column, silica gel.
Do you remember that?
MR. HELMS: No, I wasn't
involved with that at all.
MR. CHANG: I'm just
wondering if you have an approach to that, because
that required refraction...
MR. HELMS: No, I'm not
familiar with that one.
MR. TELLIARD: Michael?
MR. MARKELOV: Michael
Markelov with Standard Oil. First question, is it
necessary to preconcentrate the sample if you have a
high concentration already?
MR. HELMS: I'm sorry, I
don't...
MR. MARKELOV: In order not
to overshoot your system you took a very low sample
weight, right?
MR. HELMS: Yes.
-------
378
MR. MARKELOV: If you would
take a normal sample weight...
MR. HELMS: Okay, now I
understand. What we did was, judging from the
appearance of the sample, we decided what level or
weight of sample to first try. The levels that we
started with were 30 grams, 3 grams, 0.3 grams, on down.
Now, if we looked at a particular sample and
thought that it would cause a lot of problems due to
the high concentrations of alkanes, we would
analyze the three gram sample. If nothing was seen,
then we would go to the 30 gram sample and run that
secondly. You can see what we were doing was trying
to avoid the situation of running the 30 gram sample
and basically shutting down our instrument for a day
or two while we bake out all the hydrocarbons.
MR. MARKELOV: Would you
shut it down?
MR. HELMS: Without a doubt.
Now, not always on the 30 gram, but the way that we
went at this is that it is much more cost effective
to attempt to do the lower levels first, and if you
-------
379
don't see anything, work your way up. All you've done
is one or two additional runs. We're talking two
hours here versus shutting down the instrument for
one or two days, which is normally required.
MR. MARKELOV: Would you
consider this action to be...to the dilution of your
extract?
MR. HELMS: I'm having a
very difficult time with your accent, I'm sorry.
MR. MARKELOV: Would you
consider taking a small amount of sample to be equivalent
to taking a normal amount of sample and after that
diluting the extract?
MR. HELMS: If you were to
extract a 30 gram sample and then take that extract
and do a dilution and analysis, what you've done
there, if you end up doing a very large dilution of
the final extract, you've essentially thrown out the
isotope dilution methodology and now you have to
quantitate everything using the internal standard
method. This is because the labeled analogs are
spiked in at a certain level, a hundred micrograms
per milliliter relative to the final volume of the
-------
380
extract, one mil. If you do anything greater than one
to a hundred dilution, you've just diluted out the
labeled analogs. The whole advantage of doing this
work was to incorporate the isotope dilution methodology.
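A back-of-the-envelope sketch of that constraint, assuming the 100 ug/mL analog spike stated above; the 1 ug/mL usability floor is an assumption for illustration, not a number from the method.

```python
# Hedged sketch: how far an extract can be diluted before the labeled
# analogs (spiked at 100 ug/mL relative to a 1 mL final extract) fall
# below a usable level. The floor is an illustrative assumption.

SPIKE_UG_PER_ML = 100.0
USABLE_FLOOR_UG_PER_ML = 1.0  # assumed working floor for the analog peak

def analog_conc_after_dilution(dilution_factor):
    return SPIKE_UG_PER_ML / dilution_factor

for df in (10, 100, 1000):
    c = analog_conc_after_dilution(df)
    status = "usable" if c >= USABLE_FLOOR_UG_PER_ML else "diluted out"
    print(f"1:{df} dilution -> analog at {c:g} ug/mL ({status})")
```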
MR. MARKELOV: Your retention
times, you mentioned that they were changing from sample
to sample. Was it fluctuation or it was...
MR. HELMS: What occurred
here was...again, this hasn't occurred except about
once every ten or 20 GCMS runs. What I noticed was
midway through the GCMS run, a retention time shift
of, say, 2000 scans would occur, and it would always
jump forward. In other words,...
MR. MARKELOV: Shorter
retention times?
MR. HELMS: Yes, exactly.
In other words, labeled naphthalene normally elutes
at time X, it was time X minus about three minutes.
Again, I have no possible explanation for this
retention time shift, except as simply to explain
away in the vague terms of sample matrix effects.
I'd like to point out that there was no malfunction
in the GC oven itself, because as soon as we noticed
-------
381
this for the first time we kept an eye on it
to make sure that was not the cause.
MR. MARKELOV: So it always
was the time.
MR. HELMS: Yes, it always
seemed to jump forward, yes. Again, there was no
correlation between the...one thing I did note was
that it always occurred with the solid samples, but
there was no correlation with sample weight.
MR. MARKELOV: This is the
last one. Zero headspace extraction; why it's necessary,
zero headspace, and what would be the effect of air
present in the vessel in your determination?
MR. HELMS: Why it was
necessary was, we were paid to do the analysis. As
far as errors, I'm not up here to defend the method,
for one thing. I've got to choose my words carefully
on this one.
MR. TELLIARD: The method
was designed specifically to look at the application
of the leachate, and the zero headspace analysis was
an effort...I've got to choose them, too...was an
attempt at coming up with an approach to getting the
-------
382
volatile constituents off the solids. The design of
the headspace analyzer and extraction setup is
something we inherited from the Office of Solid Waste.
As part of our response to them, we used their
equipment as prescribed.
We're not here to defend the method, and I think
at this time they're still talking about revisions to
it and so forth.
MR. MARSDEN: I just wanted
to clarify a little about the GPC having to do with...
MR. TELLIARD: Who are you?
MR. MARSDEN: Paul Marsden,
S-CUBED. Just to clarify mostly about the GPC from
the previous question, S-CUBED some years ago did
promulgate a method on cleanup of oil with silica
pellets. The major disadvantage of that methodology
is that you throw away all your...highly polar
materials.
The GPC allows us, based on work in progress, to
get very good recovery of the full Appendix 8 list of
the organics. The big problem with GPC, I think, as
it's applied in most labs, is people tend to overload
the columns...it's the same way you did your old
-------
383
columns when we were slurry packing things. If you
overload the thing you won't get any separation.
So besides the problem of diluting all the isotopes,
in many cases if you're careful about choosing your
sample size and extract size that goes into the loop,
you will improve the performance of that technique
dramatically.
MR. HELMS: Thanks, Paul.
MR. TELLIARD: Paul, maybe
you could get with him and chat a bit.
MR. LEVY: I'm Nathan Levy
with A&E Testing in Baton Rouge. This is a question
about the specific method with ZHE. Exactly what
technique did you utilize to maintain zero headspace
from the extractor all the way into the purge and
trap apparatus?
MR. HELMS: There are valves
on each side of the extractor, one for the leachate
introduction to the sample, the other side which is
used to...there's a piston with some Viton O-rings.
MR. LEVY: I understand that.
You've got to get it out and you've got maybe two
ways, your pre-extraction and your post-extraction.
-------
384
If they're...you've got to put them together on a
weighted average. You've got to do all that while
you're maintaining zero headspace...
MR. HELMS: The expressed
liquid can be stored either in a Tedlar bag, or we
used VOA vials with no headspace. Again, the leachate
was collected as quickly as possible, transferred to
these VOA vials with the minimum of opening to
atmosphere. During storage, the expressed liquid and
the leachate were stored separately, and in VOA
vials with no headspace, in the refrigerator.
MR. LEVY: It's not your
problem, I think it's a method problem. What you're
saying to me is that you're really not maintaining
zero headspace...
MR. HELMS: During the
leaching procedure, yes, but during the collection of
the leachate, no.
MRS. KHALIL: Mary Khalil...I
would like to have an idea, please, about the conditions
of the...cleanup you used and how you avoided
overloading your column, and have you ever checked
recovery of your standard isotopes?
-------
385
MR. HELMS: There is a study
currently underway to observe the recoveries of the
labeled analogs through the GPC cleanup procedure.
Again, I'd like to stress that these soils were spiked
prior to the cleanup, so knowing a percent recovery
of a particular analyte is not needed whatsoever in
this instance.
In other words, if you spike in labeled phenol
at X concentration, it goes through the GPC cleanup
procedure, you do the GCMS analysis and you come up
with half X. If you find any phenol, your computer
programs will use that and calculate the concentration
of phenol in that soil, using the percent recovery,
50 percent recovery in this example.
Again, we don't care for this study whether or
not we lost some of the labeled analytes through the
GPC cleanup procedure or the concentration step or even
the drying step. All we are concerned with is that
final concentration, which is calculated using the
internal standard method. Once we know that, it is
easy to calculate the analytes that are tested for.
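A minimal sketch of that recovery bookkeeping, with illustrative names; it assumes, as in the example just given, that the native result is corrected by the analog's measured recovery.

```python
# Hedged sketch of the recovery correction described above. The labeled
# phenol is spiked at a known amount; whatever fraction survives GPC
# cleanup, concentration, and drying defines the recovery, and the
# native phenol result is corrected by the same fraction.

def corrected_native(native_measured, labeled_spiked, labeled_measured):
    recovery = labeled_measured / labeled_spiked  # e.g. 0.5 for "half X"
    return native_measured / recovery

# Spike labeled phenol at 100 units, measure 50 back (50 percent
# recovery); a native phenol reading of 20 is reported as 40.
print(corrected_native(20.0, 100.0, 50.0))  # 40.0
```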
Now, I know you had another part of the question,
but I'm sorry.
-------
386
MRS. KHALIL: How do you avoid
overloading the column using this high concentration
of oil?
MR. HELMS: Unfortunately,
a few times we did overload. Again, this was a visual
type of situation where...for one thing, I think the
method calls for diluting the extract up to seven or
nine milliliters. Well, if our sample extract could
not even concentrate down to that level, we went down
to the next level of weight and cleaned that particular
extract up and ran.
In other words, we just eyeballed it to extend
the column life. If you could not even concentrate
your extract down to a reasonable final volume or it
solidified, we didn't even attempt to go any further.
MRS. KHALIL: You used it
manually on the column?
MR. HELMS: It was a fully
automated system, yes.
MRS. KHALIL: What was the
condition used...retention time or...
MR. HELMS: I'm sorry, I
just don't know some of the operating parameters.
-------
387
I knew that we did not deviate or come up with
anything off the wall.
MRS. KHALIL: Thank you.
MR. SHALALA: My name is
Tom Shalala from the Environmental Ground Waters. My
question deals with TCLP. The scenario for the TCLP
I thought was sanitary landfill with five or ten
percent industrial waste. Did you make that assumption
that you could use the drilling pit environment to
get the same...
MR. HELMS: No, as municipal
waste mixed with the hazardous waste?
MR. TELLIARD: No. What we
made as an assumption was, the Office of Solid Waste
wanted us to run the test. So we ran it. It was not
our call.
MR. HELMS: Nor mine.
MR. TELLIARD: We look
across the field and we saw these other guys with
blue jerseys on. We had green ones, but they were
paying, so we ran the test.
Now that we're really getting into mud, our next
speaker is going to talk on a similar subject wearing
-------
388
a different set of jerseys. Rocky Mountain Analytical
was responsible for running duplicate samples with
us, and they were contracted from some organization
called the American Petroleum Institute, whoever they
are. So they're here to talk about their data. Of
course, their samples were much easier than Lee's,
because they were duplicates.
-------
DETERMINATION OF ORGANIC COMPOUNDS IN
WASTES AND TCLP EXTRACTS OF WASTES
FROM OIL AND GAS EXTRACTION OPERATIONS
USING ISOTOPE DILUTION GC/MS
C. Lee Helms
Robert G. Beimer
-------
PROJECT OBJECTIVES
Provide data to be included in report to Congress
Identify and quantify waste constituents
Characterize complexity and diversity of wastes
Provide representative overview of waste characteristics
-------
SELECTION OF ANALYTES
Priority Pollutant list
Appendix C List
RCRA Appendix VIII List
Michigan List
Superfund Hazardous Substances List
Paragraph 4(c) List
ITD List
-------
392
-------
[Slide: steps of the zero-headspace extraction (per the talk: sample into extractor; initial liquid/solid separation; pump in extraction fluid; rotate 18 hours; final liquid/solid separation). Text printed rotated in the original and not otherwise recoverable.]
-------
394
ZERO-HEADSPACE
EXTRACTION (ZHE) VESSEL
[Diagram: vessel body with top and bottom flanges; liquid inlet/outlet valve and pressure relief valve at one end; pressurizing gas inlet/outlet valve and pressure relief valve at the other.]
-------
-------
COMPARISON OF ANALYTE CONCENTRATION

                          Direct Analysis    TCLP Analysis
Aromatic hydrocarbons     Medium             Medium
Aliphatic hydrocarbons    High               Low
Organic acids             Low                Medium
-------
397
-------
398
-------
399
DR. PHILLIPS: Rocky Mountain
Analytical Laboratory, which is now a part of ENSECO,
has been involved in the analysis and characterization
of freshwater mud pits since 1984.
We've been involved in helping to develop
monitoring strategies, parameter selection, sample
collection and preservation techniques, analytical methods
and QC methods. We've been analyzing muds for organic
priority pollutants, various sublists of Appendix 8
compounds such as the petroleum refinery waste compound
list, Appendix 8 metals, various inorganic parameters,
major cations and anions and other water quality
parameters.
The current study as Bill mentioned is really
concerned with looking at the environmental impact
from surface runoff and leachate from drilling mud
pits. Fortunately, Lee has gone into a lot of the
background of the project, so I won't repeat any of
that. As Bill mentioned, we were on the other side
of the fence working for the API.
Today what I'd like to do is emphasize the
methodology primarily and discuss some preliminary
results. I'll start off with the first slide. This
-------
400
is just a little background information for those of
you who may not be familiar with drilling muds. Some
of the components that are in a typical mud are fresh
or saline water; clay, which is used to increase the
viscosity and help create a gel, barium sulfate which
is used as a weighting agent, chromium lignosulfonates
to control viscosity and finally, lime and caustic
soda to increase the pH.
Waste drilling muds are typically disposed
of in an on-site pit. The waste mud will contain
some of these basic components, also drill cuttings
and other additives and materials.
We took a two-tiered approach, I guess you could
say, to this analysis. The first part was to analyze
the sample for total content of a variety of inorganics,
metals, and organics. The second part was to begin
to test the mobility of these materials in the environment
and we took two approaches to that. One was the
TCLP, the Toxicity Characteristic Leaching Procedure
that Lee described, and the other was lysimeter
leaching techniques.
The next series of slides shows the methodologies
that were used. For organics, liquids, we used
-------
401
EPA Methods 624 and 625. For the solids, we used SW 846,
Method 8240 for the volatiles and 8270 for the
semivolatiles with preparation by Method 3550,
which is a sonication procedure, and 3530, which is an
acid base cleanup.
The list of compounds that we were looking for
was the 1624 and 1625 list of compounds, plus we were
screening for some other selected organics from the
Industrial Technology Division list of analytes.
We did not use isotope dilution. Lee mentioned
one of the disadvantages of isotope dilution is that
one can't dilute the sample. If one does, then the
labelled analogs are diluted out. This forces one to
take a very small and possibly not representative sample,
We preferred to take a little different approach,
that is take a larger sample and then dilute the extract
as necessary, because the limiting factor in many cases
was the concentration of the target compounds, not the
extraneous organic material.
The next slide shows the methods for the metals
analyses that were used. For liquids, the digestion
procedure was an SW 846 method, 3010, which is a nitric/
hydrochloric acid digestion, and also 3020 which is a
-------
402
nitric acid digestion. Solids were digested using
Method 3050, which is nitric acid and hydrogen peroxide.
Analyses were performed using inductively coupled
argon plasma for a list of 25 metals. In addition
we were looking for mercury using cold vapor AA, and arsenic,
selenium and thallium by furnace AA.
In addition to the organics and metals, we also
looked for some conventional parameters and RCRA criteria.
What I have listed here are just the sources of methods
that were used. For liquids we took methodologies
out of Methods for Chemical Analysis of Water and Wastes,
Standard Methods for the Examination of Water and
Waste Water. For solids, we used techniques from Methods
for Soil Analysis and also SW 846 methods. The
parameters that were of interest here were pH, TSS,
chloride, BOD, COD, TOC, oil and grease, cyanides, oil/
water/solids, ammonia, nitrate and nitrite.
That's a summary of the methodologies that were
used for the total analyses. The next slide shows
the parameters or groups of parameters that we
looked for in the TCLP extracts. We used the TCLP
procedure as proposed on June 13 of 1986 to replace
the existing EP toxicity test. Lee described that
-------
403
fairly completely so I won't go over that again.
We used the TCLP to look for the same metals,
volatiles and semivolatiles that we looked for in
the total analyses. That is, we used Methods
624, 625, and ICP for the metals with the exception of
mercury, arsenic, selenium and thallium.
The other method that was used to begin to test
mobility of the target parameters was lysimeter
leaching. This slide is a diagram of the apparatus
that was used. Our goal here, and what made this an
interesting project, was that we had to collect enough
sample to be able to analyze for organics at reasonably
low detection limits. This proved to be quite a
challenge.
It required a glass column that was large, 12
inches in diameter, I.D., and 18 inches tall. The glass
column was sandwiched in between stainless steel end
plates. The bottom end plate was bevelled to allow
the effluent to collect. On top of the bottom end
plate was a support plate on legs, covered by a
fine stainless steel mesh. The plate had holes
drilled in concentric circles around it to allow
eluent to pass through.
-------
404
The purpose of this plate was to support the
column material and also to provide a space for the
eluent to collect in. On top of the support plate,
although it's not shown here for simplicity, was a
thin layer of sand. The purpose of the sand was to
allow the eluent to drain and also to prevent soil and
mud particles from clogging the pores of the steel
mesh.
On top of that was a two inch soil layer, and
this was a composite soil that was taken and was
representative of the soils that are found in the
areas of the mudpits that were being studied.
On top of that was a 12 inch layer of mud, and
above that, of course, was the eluent itself which was
contained in a reservoir set up to provide a hydraulic
head of five feet. The eluent, which was a sodium
chloride solution, was blanketed with nitrogen to
minimize biological activity and oxidation losses.
Underneath the lysimeter were sample collection
reservoirs. One was a volatile reservoir, consisting
of a glass vial in a V shape, a picture of
which we'll see in a minute. The idea was to be able
to collect a volatile sample without headspace and
-------
405
to make it easily removable.
Downstream from the volatile container was
another reservoir which the eluent passed into.
This contained the sample that was used to analyze
for semivolatiles, metals and any inorganic parameters
that we were interested in.
Both the volatile and the semivolatile containers
were kept in an insulated box that was chilled in
order to minimize volatilization losses. Volatile
samples were taken immediately upon startup of the
columns, after 48 hours, after 72 hours, and then once every
time a liter sample was collected. The volatile
sample was taken when the one-liter sample was
approximately half full.
What do these things actually look like? The
next slides show some examples. I chose this one
because it shows the layers quite well. Again, you
can see it's just a large glass column sandwiched in
between two steel plates, the whole thing held together
by threaded rods. One can see the sand layer, the
soil and the mud.
The next slide is another column. This gives
you an example of the differences in the physical
-------
406
appearance of the muds; this one looks quite a bit
different. The soil and the mud are not very
distinguishable in this one. You can see the sand
layer, but it's very difficult to see the interface
between the mud and the soil.
This is the volatile container, which was used
to collect a volatile sample while minimizing headspace.
For the lysimeters, several parameters were
monitored continuously. One was the time interval.
Some of these columns have been running for over 100
days at this point. Another was the volume of the
filtrate collected, the hydraulic head and the mudcake
thickness.
From these parameters we were able to calculate,
using standard equations in Methods of Soil Analysis,
parameters such as pore volume, hydraulic conductivity,
permeability, flux and mass flux.
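As a rough illustration of the kind of equations involved, here is a hedged sketch of a constant-head (Darcy) conductivity estimate and a simple solute mass flux; these are standard soil-physics relations, not necessarily the exact forms used in the study, and the example numbers are taken from the column dimensions described above.

```python
# Hedged sketch, not the study's actual code.

def hydraulic_conductivity(flow_cm3_per_s, length_cm, area_cm2, head_cm):
    """Constant-head form of Darcy's law: K = Q * L / (A * h), in cm/s."""
    return flow_cm3_per_s * length_cm / (area_cm2 * head_cm)

def mass_flux(conc_mg_per_l, flow_cm3_per_s, area_cm2):
    """Solute mass flux in mg/(cm^2*s): concentration times Darcy flux."""
    conc_mg_per_cm3 = conc_mg_per_l / 1000.0
    return conc_mg_per_cm3 * (flow_cm3_per_s / area_cm2)

# Example: 12-inch (30.5 cm) diameter column, 14-inch (35.6 cm)
# mud-plus-soil layer, 5-foot (152 cm) head, assumed 0.01 cm^3/s flow.
area = 3.14159 * (30.5 / 2) ** 2
print(hydraulic_conductivity(0.01, 35.6, area, 152.0))  # ~3e-6 cm/s
print(mass_flux(1.0, 0.01, area))                       # mg/(cm^2*s)
```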
In addition, we did chemical analyses of the
leachates, looking at parameters such as pH, chloride,
TOC, sulfate, and the same list of metals, volatile
organics and semivolatiles that we looked at in
the total analyses and also in the TCLP analyses.
-------
407
In addition, we leached the composite soils
themselves separately in order to determine what kind
of contribution they were making to the soil/mud
combination.
The next series of slides will show some of the
analytical results, not comprehensive, of course. We'll
be looking only at a couple of muds. One is a diesel
mud and the other is a chromium lignosulfonate mud,
to get a little bit of a flavor for the types of
components that were generally found in these muds.
First is a list of ten metals. All of the
slides are set up in much the same way. The first
column on the left is the parameter. Next are the
results of the total analyses in milligrams per
kilogram. Next is the TCLP result in milligrams
per liter, and then the lysimeter results in
milligrams per liter also.
The lysimeter concentrations that are listed
are the maximum concentrations that were found during
the time period that the samples were collected.
The items to notice on this slide...one other thing
I should mention is that the numbers in parentheses
are the soil results, to give you an idea of what
-------
408
kind of contribution the soils are making to the mud-
soil combination.
Things to point out here are the high total
barium concentrations. This result is not surprising
because of the use of barium sulfate. These
concentrations did fall off pretty rapidly going to
the TCLP and also to the lysimeter testing.
Again, not surprising, we found the same elements
in the leachates, both the TCLP and the lysimeter
leachates that were found in the muds themselves.
The contribution by the soil to the lysimeter results
were fairly significant for arsenic, potassium and
barium.
One final thing to note on this slide is that
the TCLP and the lysimeter results are mixed relative
to each other. What I mean by that, for some metals
the lysimeter results were higher, more concentrated,
and for others the TCLP concentrations were higher.
The next slide shows the same results for
chromium lignosulfonate mud. Again, fairly high barium,
in fact, higher than last time. The chromium is also
a factor of five higher in this mud. It's a chromium
lignosulfonate mud, so this is not a surprising result.
-------
409
The soil contribution to the lysimeter results
in this case were significant for arsenic and potassium,
but less so for barium.
Let's look at some TOC and volatile results
next. The top row is the TOC results. The volatile
compounds that were found were primarily benzene,
toluene, ethylbenzene and xylene. The other item
to note from this slide is that the concentrations
were higher in the lysimeter leachates than in the
TCLP leachates.
The next slide shows the same thing for a chromium
lignosulfonate mud from site 13. Again, benzene,
toluene, ethylbenzene and xylene were found. The
contribution by the soil to the TOC was significant.
Again, the concentration in the lysimeter results
were higher than in the TCLP.
Next are the semivolatiles. What I've done here
is group the C-10 through C-30 hydrocarbons...these
are saturated hydrocarbons...together and shown a
range rather than individual concentrations. We
found polynuclear aromatic hydrocarbons, phenolics, and
branched and straight chain hydrocarbons in the wastes
themselves. However, in the lysimeter leachate, the
-------
410
polynuclear aromatic hydrocarbons for the most part
and saturated hydrocarbons were not found. They did
not elute from the mud, with the exception of naphthalene.
We did find some naphthalene in the lysimeter leachate,
and, in addition, we found methylphenol and
2,4-dimethylphenol.
Overall, the polynuclear aromatic hydrocarbons
and the saturated hydrocarbons were more concentrated
in the TCLP than they were in the lysimeter leachates.
On the other hand, the phenolic compounds were more
concentrated in the lysimeter leachate than in the TCLP.
The next slide shows the same results for the
chromium lignosulfonate mud. Again, primarily polynuclear
aromatic hydrocarbons and saturated hydrocarbons were
found in the waste. But, in this case, they were
found at lower concentrations than in the diesel mud.
This slide shows some hydraulic conductivities
and also some mass fluxes that were calculated from
the results. The units for the hydraulic conductivity,
by the way, are centimeters per second. Typical hydraulic
conductivities were 10^-6 to 10^-7 after one to two days,
dropping to 10^-7 to 10^-8 after seven to fourteen days.
-------
411
Typical mass fluxes in the diesel mud were
barium at 1.6 times 10^-4, benzene at 1.2 times 10^-6, and
naphthalene at 1.5 times 10^-6. There is a change in the
next series for the chromium lignosulfonate mud. The
barium number there is listed as 7.9 times 10^-5. The
result actually is 2 times 10^-6. The benzene result
is 1.3 times 10^-7 and naphthalene less than 2 times
10^-8. Let me summarize some of this. First, the total
analyses versus the lysimeter results for metals.
The same elements were found in the leachates that
were found in the wastes. At lower concentrations,
of course, but most of the metals that were found
in the wastes also seemed to appear in the leachates.
The soil contribution was significant for arsenic,
potassium and barium in some cases.
On the volatiles side, primarily benzene, toluene,
ethylbenzene and xylene were found both in the wastes
and the leachates. For semivolatiles, polynuclear
aromatic hydrocarbons, phenolics and saturated hydrocarbons
were found in the wastes. However, only naphthalene,
4-methylphenol and 2,4-dimethylphenol were found in
the lysimeter leachates. Therefore, most of the
polynuclear aromatic hydrocarbons and the saturated
-------
412
hydrocarbons did not come through in the lysimeter
leachate.
Next is a summary of the TCLP versus the lysimeter
results for metals. The TCLP results were mixed
relative to the lysimeter results. For volatiles,
the concentrations were generally higher in the
lysimeter leachate than in the TCLP. Finally, for the
semivolatiles the polynuclear aromatic hydrocarbons and
the saturated hydrocarbons were more concentrated in
the TCLP than the lysimeter. On the other hand, the
phenolic compounds were more concentrated in the
lysimeter results than in the TCLP.
What's going to be done next with this data is
some modelling work. Currently, three sites are
being looked at in detail. Mass fluxes are being
used to model the movement and time of travel through
the unsaturated zone, adding retardation and biode-
gradation factors as they are appropriate, and also
looking at the movement into the saturated zone by
looking at the distance from the source at different
discrete times and also using steady state assumptions.
We've just gotten started really at reviewing
and interpreting this data. There's quite a bit more
-------
413
to be done. But I think I'd probably better stop now
and try to answer any questions, if there are any.
Thank you.
-------
414
Question and Answer Session
MR. LEVY: Nathan Levy with
A&E Testing. We have a particular problem in South
Louisiana with these drilling wastes. We've run
across a few problems and I want to see if your study
addresses them.
The first one was the barium content and the
total barium. Using some typical SW 846 protocol,
generally in drilling waste, you're going
to use too much sample. Given that the barium exists
as barium sulfate and everybody knows that's insoluble
in everything except for real concentrated sulfuric
acid, did you guys look at that? Did you guys take
various sample sizes like .1 gram size instead of 1
gram size in order to evaluate your total barium?
DR. PHILLIPS: In some
cases we did take two different sample sizes in order
to check for that, yes.
MR. LEVY: Second question
I had, I couldn't read the numbers from here in the
back on your TCLP result for the drilling waste.
Were any of the constituents, organic in particular,
greater than the proposed limits for RCRA?
-------
415
DR. PHILLIPS: Not on that
particular mud, no.
DR. RUSHNECK: Dale Rushneck.
You said in order to obtain a more representative
sample, in contrast to what Lee Helms did, you used a
full sample and then diluted the extract.
DR. PHILLIPS: That's correct.
DR. RUSHNECK: The first
question is, that means you used all of the sample
that you received in the jar, and you extracted the
total amount, is that correct?
DR. PHILLIPS: No, that's
not correct.
DR. RUSHNECK: So you took
a subset of the total sample.
DR. PHILLIPS: Correct.
DR. RUSHNECK: So in fact
you used a small...
DR. PHILLIPS: That is correct.
One can argue whether a 30 gram sample is more representative
than a .03 gram sample.
DR. RUSHNECK: I understand.
For volatiles, did you do the same thing? That is, did
-------
416
you take the total volume sample and then dilute it?
DR. PHILLIPS: No, we subsampled.
DR. RUSHNECK: So you did
the same thing Lee did with volatiles, that is, you
took smaller and smaller aliquots.
DR. PHILLIPS: Right. But
not less than 1 gram.
DR. RUSHNECK: Then, at
what point in that analytical process did you add the
surrogates, presuming you added surrogates?
DR. PHILLIPS: The surrogates
were added prior to extraction.
DR. RUSHNECK: So they were
added after dilution but prior to extraction?
DR. PHILLIPS: No, after
subsampling but prior to extraction.
DR. RUSHNECK: The same for
volatiles. After you took a subset you added a
surrogate, is that correct?
DR. PHILLIPS: After the
subset was taken, before the extraction, yes.
DR. RUSHNECK: So they
don't reflect the dilution process?
DR. PHILLIPS: They don't
-------
417
reflect the subsampling process.
DR. RUSHNECK: Thank you.
DR. MARKELOV: I'm not very
familiar with the leach test. I have a couple of
questions. The first one, if there is any effect of
the flow rate on the leach rate of, let's say, benzene.
DR. PHILLIPS: The flow was
controlled by gravity. It was not a forced flow, so
we did not vary the flow rate. As the hydraulic
conductivity of the mud...the muds will compress, and
as that happens then of course the flow rate will
slow down.
DR. MARKELOV: Could you
give a physical meaning of hydraulic conductivity?
DR. PHILLIPS: It's the
ability of the liquid to pass through the mud or the
soil or the combination of the mud and soil, essentially.
DR. MARKELOV: So there is
a correlation between hydraulic conductivity and leach
rate?
DR. PHILLIPS: Yes.
DR. MARKELOV: Direct,
square? What is the function?
-------
418
DR. PHILLIPS: I believe it's
just a direct relation.
DR. MARKELOV: Second
question. Why did you have to chill this vessel for
volatile analysis, the vessel where you collected the
sample for volatiles? Why do you have to chill it if
it doesn't have any headspace?
DR. PHILLIPS: That was
just an added precaution. Probably it did not make a
significant difference. We didn't try it both ways,
but just to be on the safe side, we went ahead and
chilled it.
The other reason was that we had the other
reservoir that was downstream from the volatile
container. Since these samples were taken over a
long period of time, we wanted to make sure that we
minimized the possibility of having volatilization
from that reservoir also.
MR. SHALALA: Tom Shalala,
Environmental Ground Water Institute. In your
lysimeter, in your soil column, the soil that you
used, you said, was representative of the pit area.
Did you take a general composition of it? Was it
-------
419
clay or silt or sand or...
DR. PHILLIPS: We looked at
a variety of soils. Some were mostly clay and others
were more of a sand...and others were more of a
loam. Quite a variety, actually.
MR. SHALALA: Did you see a
difference in the attenuation factor with the different
soils that you used?
DR. PHILLIPS: That did not
seem to be the limiting factor. This is in the soil-
mud combination you're referring to?
MR. SHALALA: Right.
DR. PHILLIPS: The major
factor is the mud itself rather than the soil underneath.
MR. RAYBURN: Steve Rayburn
with Guilford Laboratories. In the lysimeter when
you were collecting volatile sample, you said that
the semivolatiles container was downstream from the
volatiles. How was there an allowance for mixing the
two samples in case there was any chromatographic
separation going on through the lysimeter?
DR. PHILLIPS: The volatile
sample was taken out of line. Then a new container
-------
420
was put into line. The volatile sample at the time
it was taken would not necessarily be representative
of the composite volatile material in the one liter-
container because that was taken over a longer period
of time. So we took multiple volatile samples over
time in order to try and better define the volatile
material in the one liter container.
MRS. IRIZARY: My name is
Maria Irizary. Several questions. Number one, how
do your results compare with those of S-CUBED? Number
two, what was the purpose of the comparison of the
different methods? Are you going to propose an
alternate method to EPA? Number three, what is your
position with respect to the TCLP method? Why don't
we do total analysis?
DR. PHILLIPS: Your first
question was, how did the results compare. Although
a complete comparison has not taken place yet, my
preliminary information is that the results compare
well.
Your second question about our plans on proposing
an alternative method to EPA, that is something that
-------
421
may happen. It's not, however, currently in the plans.
The third question, let's see, was, why don't we
analyze for total rather than using TCLP. That's why
we did both. We wanted to get a comparison. There
are some limitations on the TCLP procedure. So we
wanted to look at the samples in several different
ways so that we would have the ability to compare.
MR. TELLIARD: Part of the
program is...the data that we're generating for the
Office of Solid Waste is going into, as was pointed
out, a number of risk assessments that will be run
using the data and the TCLP, whatever you want to
call it, mobility, permeability, whatever you want to
use as a number. So it goes into this report to
Congress as a risk assessment and economic impact.
All of this is factored in.
One of the issues that we did not do was the
lysimeters, although it was talked about before the
study started. The agency decided not to do it. The
industry feels, and maybe rightly so, that that data
is going to be important in doing a real risk
assessment, and therefore, they funded their contractor
to do the lysimeters.
-------
422
MRS. DE NAGY: Susan De
Nagy, EPA. For your lysimeter study, your drilling
mud that you used, was that collected from the shale
shaker or was that the sludge samples collected from
the bottom of the pit?
DR. PHILLIPS: I think it
was from the pit, although I'm not sure of that. As
Bill mentioned, these samples were taken in duplicate
along with the EPA. On that particular question,
I'll have to defer to Bill who I believe was there.
MR. TELLIARD: The ones in
California, when we were out there, they took all of
those after the shale shaker and in the pit.
MRS. DE NAGY: So they took
them both, but from the slide those were really clean.
MR. TELLIARD: They were prettier.
Thank you, Mike.
I'm getting a signal that the coffee is back
there, so why don't we break now and do our ten
minute...because this ran a little bit longer this
morning. Let's get the coffee and stuff and get back
in here, because people have planes to catch today and
so forth. We want to keep the program moving.
-------
423
(WHEREUPON, a brief recess was taken.)
MR. TELLIARD: I'd like to
get started, please. Our next speaker is going to
be talking on the analysis for diesel or mineral
oil in drilling muds, cuttings and fluids.
This is rather important, because the agency has
proposed a regulation which requires that an operator
offshore, if he has used diesel in association with
his drilling operation either in his mud or on his
cuttings, would be required to barge that material to
shore rather than discharge it over the side, a
somewhat, I've been told, expensive operation.
The question of the presence of diesel or the
question of the presence of mineral oil in the drilling
matrix is therefore important both for regulatory and
enforcement purposes. Battelle has been doing a
number of efforts over the last two years, two and a
half, to come up with some methodology both in support
of the offshore operator's committee and EPA. So
with that point...John?
-------
424
COMPONENTS OF DRILLING MUDS
• Fresh or saline waters
• Clay (bentonite)
• Barium sulfate
• Chromium lignosulfonates
• Lime/caustic soda
-------
425
ANALYTICAL APPROACH
• Analyze for total content
• Test mobility - TCLP and Lysimeter
-------
426
METHODOLOGY - ORGANICS
• Liquids - EPA Methods 624 & 625
• Solids - SW 846 Methods 8240 & 8270 (Prep by 3550, 3530)
-------
427
METHODOLOGY - METALS
• Liquids - Digestion using SW 846 Methods 3010 (Nitric/HCl) and 3020 (Nitric)
• Solids - Digestion using SW 846 Method 3050
• Furnace AA (As, Se, T...)
-------
428
METHODOLOGY - CONVENTIONAL PARAMETERS & RCRA CRITERIA
• "Methods for Chemical Analysis of Water and Wastes"
• "Standard Methods for the Eiamination of Water and Wastewater"
• "Methods of Soil Analysis"
• SW 846 Methods
-------
429
TCLP ANALYSES
• Semi & Volatile Organics
-------
430
[Figure: Lysimeter apparatus - an insulated box holding the lysimeter column (nitrogen inlet; nitrogen outlet and fill tube; mud space over soil; vent), with a VOA reservoir and a downstream reservoir for BNA and other analyses.]
-------
431
LYSIMETERS - PARAMETERS TO MONITOR
• Time Interval
• Volume of Filtrate Collected
• Hydraulic Head
• Mud Cake Thickness
-------
432
LYSIMETERS - CALCULATED PARAMETERS
• Pore Volume
• Hydraulic Conductivity
• Permeability
• Fluid Mass Flux
-------
433
LEACHATE ANALYSES
• pH, Chloride, TOC, and Sulfate
• Metals
• Volatile Organics
• Semivolatile Organics
-------
434
ANALYTICAL RESULTS - METALS
DIESEL MUD - SITE #4 (SOIL COMPOSITE #5)

PARAMETER     TOTAL (mg/kg)    TCLP (mg/L)       LYSIMETER (mg/L)
ALUMINUM      2500             3.3               0.11
ARSENIC       2.2 (10)         <0.008 (0.006)    0.031 (0.020)
BARIUM        4700 (240)       3.5 (1.0)         1.6 (0.73)
CHROMIUM      4.3              0.04              0.034
COBALT        3.6              0.023             0.014
COPPER        5.5              0.018             0.02
LEAD          15 (30)          0.36              <0.02
NICKEL        5.1              0.047             0.25
POTASSIUM     470 (3100)       9.6 (13)          39 (23)
ZINC          48               2.5               0.05
-------
435
ANALYTICAL RESULTS - METALS
CHROMIUM LIGNOSULFONATE MUD - SITE #13 (SOIL COMPOSITE #1)

PARAMETER     TOTAL (mg/kg)    TCLP (mg/L)       LYSIMETER (mg/L)
ALUMINUM      1900             1.2               0.8
ARSENIC       1.8 (3.3)        <0.004 (0.004)    0.002 (0.004)
BARIUM        6900 (260)       3.8 (0.80)        [illegible] (0.13)
CHROMIUM      21               0.19              0.95
COBALT        3.8              0.050             0.005
COPPER        3.7              <0.006            0.10
LEAD          24 (13)          0.32              <0.02
NICKEL        5.3              0.060             0.26
POTASSIUM     590 (1900)       11 (6.7)          14 (6.2)
ZINC          78               6.6               0.05
-------
436
ANALYTICAL RESULTS - TOTAL ORGANIC CARBON & VOLATILES
DIESEL MUD - SITE #4 (SOIL COMPOSITE #5)

PARAMETER                  TOTAL (mg/kg)    TCLP (mg/L)     LYSIMETER (mg/L)
TOC                        30500 (20000)    NA              268 (200)
trans-1,2-DICHLOROETHENE   8.1              <0.005          <0.005
ETHYLBENZENE               3.8              0.024           0.033
TOLUENE                    3.1              0.042           0.074
o,p-XYLENES                13               0.10            0.15
m-XYLENE                   12               0.079           0.13
ACETONE                    <10 (3.6)        <0.05 (0.22)    1.2 (4.7)
BENZENE                    <1               <0.005          0.012
-------
437
ANALYTICAL RESULTS - TOTAL ORGANIC CARBON & VOLATILES
CHROMIUM LIGNOSULFONATE MUD - SITE #13 (SOIL COMPOSITE #1)

PARAMETER       TOTAL (mg/kg)   TCLP (mg/L)   LYSIMETER (mg/L)
TOC             11000 (6300)    NA            270 (5.7)
ETHYLBENZENE    0.28            <0.005        0.006
TOLUENE         0.41            0.017         <0.005
o,p-XYLENES     0.85            0.018         0.026
m-XYLENE        0.96            0.016         0.023
ACETONE         <1.5 (0.39)     <0.05         0.32
BENZENE         <0.15           <0.005        0.013
-------
438
ANALYTICAL RESULTS - SEMIVOLATILES
DIESEL MUD - SITE #4 (SOIL COMPOSITE #5)

PARAMETER                  TOTAL (mg/kg)    TCLP (mg/L)    LYSIMETER (mg/L)
ANTHRACENE                 2.8              <0.002         <0.002
BIPHENYL                   18               0.021          <0.01
nC10 - nC30 HYDROCARBONS   16-540 (0.26)    <0.01-0.025    <0.01
p-CYMENE                   8.2              <0.01          <0.01
FLUORENE                   15               0.008          <0.01
NAPHTHALENE                74               0.32           0.015
PHENANTHRENE               25               0.007          <0.002
4-METHYLPHENOL             0.21             0.002          0.043
2,4-DIMETHYLPHENOL         <10              <0.01          0.013
-------
439
ANALYTICAL RESULTS - SEMIVOLATILES
CHROMIUM LIGNOSULFONATE MUD - SITE #13 (SOIL COMPOSITE #1)

PARAMETER                    TOTAL (mg/kg)    TCLP (mg/L)   LYSIMETER (mg/L)
BIS(2-ETHYLHEXYL)PHTHALATE   5.2              <0.01         <0.01
nC10 - nC30 HYDROCARBONS     3 (0.17-0.24)    <0.01         <0.01
FLUORENE                     0.91             0.003         <0.002
NAPHTHALENE                  2.4              0.024         <0.002
PHENANTHRENE                 1.5              0.004         <0.002
2-METHYLNAPHTHALENE          9.7              0.041         <0.002
4-METHYLPHENOL               0.058            <0.01         0.045
-------
440
ANALYTICAL RESULTS
• Hydraulic Conductivity
  • 1E-06 to 1E-07 after 1-2 days
  • 1E-07 to 1E-08 after 7-14 days
• Mass Flux
  • Diesel Mud, Site #4
    Barium: 1.6E-04 ug/min/cm^2
    Benzene: 1.2E-06 ug/min/cm^2
    Naphthalene: 1.5E-06 ug/min/cm^2
  • Chromium Lignosulfonate Mud, Site #13
    Barium: [illegible] ug/min/cm^2
    Benzene: 1.3E-07 ug/min/cm^2
    Naphthalene: <2E-08 ug/min/cm^2
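Mass flux figures of this form follow from the monitored quantities in a straightforward way: the analyte concentration in the collected leachate times the volume collected, divided by the collection time and the column cross-section. The Python sketch below shows only that arithmetic; the inputs are invented for illustration and are not values from the study.

    # Mass flux sketch: micrograms of analyte per minute per cm^2 of
    # lysimeter cross-section. All inputs are illustrative, not study data.

    def mass_flux(conc_ug_per_L, volume_L, minutes, area_cm2):
        """(concentration x collected volume) / (collection time x area)."""
        return conc_ug_per_L * volume_L / (minutes * area_cm2)

    # e.g., 1.6 mg/L barium, 0.1 L collected over 7 days, ~100 cm^2 column
    flux = mass_flux(1600.0, 0.1, minutes=7 * 24 * 60, area_cm2=100.0)
    print(f"barium flux ~ {flux:.1e} ug/min/cm^2")   # ~1.6e-04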
-------
441
SUMMARY - TOTAL ANALYSES VS. LYSIMETER RESULTS
• Metals
  • Generally, the same elements found in wastes and leachates
  • Soil contribution significant for arsenic, potassium, and barium in some cases
• Volatiles
  • BTEX found both in wastes and leachates
• Semivolatiles
  • PAHs, phenolics, and hydrocarbons found in wastes
  • Only naphthalene, 4-methylphenol, and 2,4-dimethylphenol found in leachates
-------
442
SUMMARY - TCLP VS. LYSIMETER RESULTS
• Metals
  • TCLP results mixed relative to lysimeter results
• Volatiles
  • Concentrations higher in lysimeter leachate than in TCLP
• Semivolatiles
  • PAHs and hydrocarbons more concentrated in TCLP than in lysimeter leachate
  • Phenolic compounds more concentrated in lysimeter leachate than in TCLP
-------
443
MR. BROWN: Last year I
came here and presented some of the preliminary
results of the study. So if the beginning sounds a
little familiar, just bear with me for a little while
and I'll get to the new results of the later
investigation.
In recent years, concern over the impact of
toxic diesel oil components on benthic and pelagic
communities surrounding offshore drilling operations
has prompted government agencies to issue a ban on
the ocean discharge of diesel containing drilling
muds.
In response, the offshore oil industry has
resorted to the use of alternative lubricants such as
mineral oil in drilling muds destined for ocean
disposal. The selection of mineral oil was based on
a presumed decreased toxicity due to a lower aromatic
hydrocarbon content relative to diesel oils.
In anticipation of new federal regulations
banning the use of diesel oil in drilling muds,
Battelle New England was contracted by the Offshore
Operators Committee to initiate a multi-phased study
aimed at developing an analytical method capable of
-------
444
differentiating between diesel and mineral oils in
drilling muds, as well as capable of distinguishing
between diesel and mineral oil additives based on
differences in their organic content.
One method proposed by the EPA to analytically
measure diesel oil in drilling mud formulations,
known as the top ten method, involves a GC/FID analysis
of the drilling mud extract and subsequent quantification
based on the concentration of the ten major peaks in the
sample chromatogram relative to the same ten peaks in
a reference diesel oil.
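To make the quantification step concrete, here is a minimal, hedged sketch of a "top ten" style calculation: sum the areas of the ten largest peaks in a reference diesel chromatogram and ratio the matching peaks in the sample extract against them. The nearest-retention-time matching and all names here are my own simplification for illustration, not the EPA procedure itself.

    # Hedged sketch of a "top ten" style GC/FID quantification.
    # Peaks are (retention_time_min, area) pairs; matching by nearest
    # retention time is a simplification of the actual EPA method.

    def top_ten_ratio(sample_peaks, reference_peaks, rt_window=0.05):
        """Ratio of sample to reference area summed over the reference's
        ten largest peaks; scales a known reference amount to the sample."""
        top = sorted(reference_peaks, key=lambda p: p[1], reverse=True)[:10]
        ref_area = sum(area for _, area in top)
        samp_area = 0.0
        for rt, _ in top:
            hits = [a for s_rt, a in sample_peaks if abs(s_rt - rt) <= rt_window]
            samp_area += max(hits, default=0.0)
        return samp_area / ref_area

    # usage: top_ten_ratio(...) times the reference diesel amount gives an
    # estimate of diesel in the sample extract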
This method appears to be analytically sound and
capable of yielding quantitative results as is
evidenced by the following four slides of representative
diesel and mineral oils. These slides are RICs,
which are essentially the same as gas chromatograms.
This first slide is an RIC of mineral oil A.
ortho-Terphenyl is the added internal standard. This
one represents another mineral oil, mineral oil C.
Now, this one is representative of the California
diesel, and this one is representative of the low
sulfur diesel.
As evidenced by the last four slides, there appear
-------
445
to be compositional differences between the mineral
and diesels which should easily be distinguished by
the top ten method.
However, in certain cases a potential ambiguity
exists, as shown by this slide of mineral oil B on
top and the Alaskan diesel on the bottom. A comparison
of the chromatographic pattern exhibited by each oil
reveals that they're nearly identical.
In fact, really the only difference here is the
relative concentration of pristane in each sample.
Aside from that, the distributions are almost exact.
Once again, ortho-terphenyl is the internal standard.
In fact, our results show that if the top ten method
is applied in this instance, mineral oil B is mistaken
for a diesel oil.
These potential ambiguities emphasize the need
to develop tracers of specific additive types beyond
chromatographic pattern matching, which can serve as
quantitative indicators of the presence of diesel oil
in drilling muds.
Nine oils representative of those used in
different offshore regions were analyzed by the best
available methods for the following targeted compound
-------
446
classes: organic sulfur content, total nitrogen and
sulfur, sulfur-, nitrogen- and oxygen-containing PAH or
PACs, carboxylic acids, phenolic acids, aldehydes and
ketones, phenol and its alkyl homologues up to C-4,
individual aromatic hydrocarbons, and total aromatic
content.
The polynuclear aromatic hydrocarbon, or PAH,
analyses were performed using a modified version of
standard method D 3239, and the results presented
here show the total aromatic content of the nine oil
samples.
The mineral oils were found to have a significantly
lower aromatic content with mineral oil A exhibiting
the highest value of 10.2 percent. Among the diesel
oils, the EPA number two fuel oil had the highest
value of 35.6 percent, while the remainder ranged
from 11.7 to 29 percent.
The total concentration of the individual PAH
and their alkylated homologues in general mirror the
total aromatic content of each respective oil.
However, we did find distinct differences among the
distributions of individual aromatics within the oil.
The mineral oils were found to have significantly lower
-------
447
concentrations of naphthalene, benzene and their alkyl
homologues than the diesel oils. The differences
both in individual and total aromatic contents of the
oil indicated the potential to use these parameters
in future studies for distinguishing between diesel
and mineral oils.
The organic sulfur and total dibenzothiophenes
were determined by GC/Hall ECD with dibenzothiophene
peak identifications being made by GCMS. The results
show the organic sulfur, total dibenzothiophene and
percent dibenzothiophene in the diesel and mineral
oils we analyzed. It is evident from this that the
mineral oils generally exhibit lower sulfur and
dibenzothiophene levels than the diesel oils with the
exception of the low sulfur diesel.
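The percent-dibenzothiophene column in the table presented with this talk is simply the total dibenzothiophenes expressed as a fraction of the organic sulfur content. The check below is my own arithmetic, using the high sulfur and Gulf of Mexico diesel values read from that table; it reproduces the tabulated percentages.

    # %DBT = total dibenzothiophenes / organic sulfur content * 100
    # (values in ug/g oil, read from the organic sulfur table)
    organic_s = {"High S Diesel": 670.0, "Gulf of Mexico Diesel": 1900.0}
    total_dbt = {"High S Diesel": 670.0, "Gulf of Mexico Diesel": 760.0}
    for oil in organic_s:
        print(f"{oil}: {100.0 * total_dbt[oil] / organic_s[oil]:.0f}% DBT")
    # prints 100% and 40%, matching the table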
This figure represents Hall ECD chromatograms
of four of the oil samples we analyzed. Figures A
and C, corresponding to mineral oils A and California
diesel, (this is A and this is C), are representative
of the two extremes of organic sulfur content with
mineral oil A essentially devoid of peaks and the
California diesel containing many peaks including
a broad unresolved hump. Thianthrene, this peak,
-------
448
is the internal standard.
The majority of the samples exhibited a distribution
between these two extremes which are represented by
the mineral oil B and the Gulf of Mexico diesel,
which looked quite similar.
The total sulfur was determined by the oxygen
bomb method, which was standard method D 129 modified.
Total nitrogen was determined by chemiluminescence.
The sulfur contents of the three mineral oils is low
and differs by an order of magnitude when compared to
the diesels. The mineral oils also exhibit a lower
nitrogen content than the diesel oils, although the
differences here are not as great.
As with the aromatic hydrocarbon compositions,
total sulfur was a desirable parameter to use in
future analytical programs designed to distinguish
between diesel and mineral oils in actual mud
formulation. However, because the differences in
nitrogen are not so great, we do not recommend
nitrogen analysis for future use.
The phenol and alkylphenol concentrations of the
nine oil samples were determined by GCMS analyses.
The important feature of this data is the presence of
-------
449
phenolic compounds exclusively in the diesel oils.
No phenols were found in any of the mineral oils.
This is a highly significant finding which suggests
that the alkylated phenols are another suitable
compound class for distinguishing between diesel and
mineral oils in actual drilling mud formulations.
This figure shows some mass chromatogram maps of
the alkylphenols in two of the diesel samples we
analyzed. This distribution is analogous to those
found for the aromatic hydrocarbons in which the
alkylated homologues are prevalent over the parent
compound in any one series.
I don't know if you can all see this, but in
this sample phenol is absent; in this one it's at
very low levels in comparison to the C-2,
C-3 phenols and the cresols.
To sum up the results of phase two, it was found
that quantitative differences between diesel and
mineral oils are evident in total aromatic, total
sulfur and organic sulfur as well as in the concentrations
of individual PAH and their alkylhomologues. In
addition, phenol and its alkylhomologues have been
identified as a compound class which has the potential
-------
450
to, by its presence or absence, denote the occurrence
of diesel oil in drilling mud formulations.
The phenolic acids, aldehydes and ketones were
not found in measurable quantities in any of the oil
samples, while the carboxylic acids and the sulfur,
nitrogen and oxygen containing PACs were detected in
most of the oil samples, but their compositional
differences did not allow us to distinguish between
mineral and diesel oils.
The purpose of phase two of this study was to
evaluate the efficiency of two different extraction
techniques, one retort distillation and the other,
solvent extraction, in isolating the diesel oil
tracers from actual drilling mud formulations. In
addition, we hoped to validate the analytical techniques
from phase one when applying them to drilling muds.
The compound classes from phase one which were
considered to have the greatest potential to
differentiate between the diesel and mineral oils
were the alkylphenols and the individual PAH. Total
sulfur and organic sulfur were not chosen for phase
two due to possible matrix interferences when analyzing
lignosulfonate drilling muds.
-------
451
These next two figures represent the analytical
approach we used in phase two. Briefly, the drilling
mud samples were acidified to pH 1 and mixed with
sodium sulfate, after which the internal standards
were added. The mud formulation was then extracted
on a shaker table at ambient temperature, two times
with methanol and two times with a nine-to-one
methylene chloride:methanol mix.
The combined organic extracts are then partitioned
versus one normal hydrochloric acid and the organic
phase is isolated and the aqueous phase extracted,
again with methylene chloride and ethyl ether to
isolate the phenols. At this point, the methylene
chloride ethyl ether extract was dried over sodium
sulfate and a five percent aliquot removed for total
extractable weight determination and subsequent PAH
analysis. The remaining extract was partitioned versus base,
which was then acidified and extracted with ethyl
ether to isolate the alkyl phenols.
Both the phenols and the aromatic hydrocarbons
are analyzed by the same electron impact GC/MS procedure
that we used in the earlier phase. The retort
distillates, which we received as probably about 10
-------
452
or 15 mils of water with an oily layer on the surface,
contained the same internal standards as the solvent
extracts and were introduced into this analytical
scheme just prior to the acid water partitioning
step, and carried out through the remaining procedure
in a manner identical to the solvent extracts. So
the retorts were actually introduced just before this
slide.
This figure illustrates the problems we encountered
when analyzing the phenolic fraction of the drilling mud
retort distillates. As you can see, the phenols are
present in high enough concentrations to saturate the
GCMS ion collector. This particular sample was a mud
without any oil additives.
We speculate that the phenols found in the retort
samples are thermal degradation products of other
organic compounds present in the drilling muds, in
particular the lignosulfonates. To compound our
problems, the phenolic fractions of the solvent
extracted drilling muds evidenced significant matrix
interferences which contaminated the GCMS sample
inlet and the chromatography column, making quantitative
analysis impractical. We could maybe run two or three
-------
453
samples before the column became useless.
The conclusions of phase two indicated that the
phenol analysis by high temperature retort was flawed
due to artifact formation and should not be considered.
It was found that considerable concentrations of
phenols were produced in the drilling mud matrix,
even with a modified low temperature retort distillation.
It was determined that the solvent extraction
should be adopted as a method for isolating the
phenols and hydrocarbons in the drilling mud matrix,
since our preliminary analyses showed that this method
yielded accurate and internally consistent data which
compared favorably with the composition determined
previously for the neat Alaskan diesel.
The objectives of phase three of the study were
first to develop a method of purifying the phenolic
solvent extracts, thereby removing the matrix
interferences encountered in phase two and allowing
direct instrumental analysis, and second, to analyze
a variety of drilling muds with a mixture of additives,
including crude oil.
In addition, we wanted to investigate the
possibility of artifact formation in high temperature
-------
454
hot rolled drilling muds which we hoped would
approximate the temperatures encountered in actual
drilling mud operations.
In an attempt to enhance the response of the
phenols, the solvent extracts were derivatized prior
to analysis to form trimethylsilyl ethers. The initial
results as illustrated by this slide were quite
promising. On top is an RIC of an underivatized
number eight mud with Alaskan diesel, which exhibits
some very poor chromatography, to say the least.
The bottom figure shows the same sample after
derivatization, and as you can see the chromatographic
response was greatly improved. However, repeated
analysis of the derivatized samples evidenced the
same instrument contamination we encountered earlier,
although this time maybe we could get six or seven
runs in before the GC/MS was contaminated.
In a further attempt to eliminate the matrix
interferences we developed a ten-gram, five percent
deactivated silica gel cleanup column. The results
of the cleanup proved quite favorable, as demonstrated
by this slide, which shows mass chromatograms of the
phenol and alkyl phenols of the same number eight mud
-------
455
with Alaskan diesel but after the column chroma-
tography. You can see, we've effectively removed any
matrix contamination.
These are the recovery results of the silica gel
column which we spiked with phenol standard mixtures.
We had quantitative recoveries of all the phenol
analytes, both in terms of absolute and relative
recoveries. In addition, the precision between
replicates was quite good.
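Two recovery bases are being distinguished here: relative recovery is computed against the internal standard carried through the column, and absolute recovery against an external standard added just before GC/MS. A minimal sketch of the statistics, using hypothetical triplicate values rather than the study's raw data:

    # Mean +/- SD recovery from triplicate determinations (hypothetical data).
    from statistics import mean, stdev

    # hypothetical triplicate relative recoveries for phenol, in percent
    phenol_relative = [94.0, 96.0, 98.3]
    print(f"phenol: {mean(phenol_relative):.1f} +/- {stdev(phenol_relative):.1f} %")
    # prints 96.1 +/- 2.2, the form of the entries in the recovery table
    # presented with this talk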
Once we had solved the problems with the phenol
extracts, we could concentrate on some different
drilling mud formulations. The next three slides
present data from some of the drilling muds we
analyzed.
This first table shows the aromatic hydrocarbon
and alkyl homologue compositions of mineral oil B,
Alaskan diesel, number eight mud with no oil additives
and two muds with mineral oil and diesel added. As
you can see, the PAH analytes were absent from the
mud with no oil, at least the naphthalenes and
benzenes, while the aromatic concentrations
of the muds with mineral oil B and Alaskan diesel
agree fairly closely with those of each respective neat
-------
456
oil. In other words, this mud sample with mineral
oil B, the numbers match pretty closely with the neat
mineral oil B here, and the same for the mud with the
Alaskan diesel. The numbers match quite nicely with
the Alaskan diesel neat.
Consequently, by looking at the alkylbenzenes
and naphthalenes of these last two muds, we were able
to distinguish between diesel oil and mineral oil
additives.
This table shows the PAH data for a light crude
oil and some drilling muds with different mixtures of
additives and light crude. The aromatic concentrations
of the mud containing the light crude correspond well
to those of the neat crude oil in the first two
columns. This is the neat crude oil and this is the
mud with light crude.
As you can see, the concentrations of the
alkylbenzenes and the naphthalenes in the crude oil
were sufficiently high to dominate the composition
of mud samples containing mixtures of light crude and
other oil additives, as shown by column three here.
This mud mixture contained light crude, Alaskan diesel
and mineral oil, and basically the composition of the
-------
457
benzenes and naphthalenes looks pretty much similar to
just the crude oil alone.
Columns four and five represent data from samples
with a mixture of diesel and mineral oil, and diesel
alone at levels of less than one percent. What you
notice here is that both samples have concentrations
of benzenes and naphthalenes which are proportional to
the neat diesel. The neat diesel was on the other
slide, but for the most part, those naphthalene and
benzene concentrations compare with the diesel oil.
This slide presents the phenol data for the same
samples we looked at in the last two slides. Really,
the first thing that stands out here is that the mud
sample with no added oil contains relatively high
concentrations of phenol and cresols.
What this indicates is that the phenols and
cresols are probably analytes that are present in the
drilling mud matrix. These values may seem dispro-
portionately high in comparison to some of the other
phenol and cresol values. That's because of the low
extractable weight content of a mud without oil
additives. In other words, most of the extractable
weight was the added oil.
-------
458
The mud sample with the added mineral oil exhibits
a phenolic profile comparable to the mineral oil with
the exception of the phenol and the cresols which once
again we can assume are contributions from the drilling
mud matrix.
This is the mud with mineral oil and this is the
mineral oil itself. If you don't look at these two
numbers, essentially you get ND's all the way down in
each column.
Similarly, the distributions of the C-2 through
C-4 phenols in the samples containing diesels and
mixture of diesel and mineral oil compare favorably
to the composition of the neat diesel. Here's a
drilling mud with diesel compared to the neat diesel.
This is a drilling mud with a mixture of mineral oil
and diesel, which also compares to the neat diesel.
The concentrations of the phenols in the light
crude are on the average an order of magnitude higher
than the levels determined for the Alaskan diesel.
The mud containing seven percent crude oil exhibits
phenol concentrations which generally agree with the
levels calculated for the light crude. In this
particular case, mud with the light crude and the light
-------
459
crude itself.
As was the case with the PAH analytes, the
phenolic content of the crude oil was sufficiently
high to dominate the composition of mud samples
containing mixtures of additives and crude oil, as
you can see here. This sample contains diesel,
mineral oil and crude oil, and the composition looks
almost the same as the neat light crude.
To summarize the results of this study, we found
that solvent extraction with a silica gel cleanup
column for phenols is capable of quantitatively
isolating PAH and phenolic analytes from drilling mud
samples. Based on the C-2 through C-4 phenols and
the alkylbenzenes and naphthalenes, it is possible to
distinguish between diesel and mineral oil additives
in drilling muds.
However, the presence of crude oil in drilling
muds may limit the usefulness of the phenols and the
alkylbenzenes and naphthalenes as indicators of diesel
due to contributions of these analytes from the crude
oil matrix. Our preliminary results indicate that
high temperature hot rolling may produce some alkyl
phenols as thermal degradation products of other
-------
460
compounds present in the drilling mud. However, the
concentrations of the PAH were not significantly
affected.
I think the data from this study shows the
inherent difficulties in analyzing for organic tracers
in the complex drilling mud matrix. We believe that
this methodology can be a useful tool, perhaps used
in conjunction with chromatographic pattern matching
in compliance monitoring programs.
Finally, we recommend that future studies more
extensively analyze high temperature hot rolled drilling
muds, and in addition "wild" or field muds should be
investigated to determine if the pressure and
temperature conditions encountered in drilling
operations will significantly contribute to the
aromatic and phenolic compositions we documented in
this study.
If future studies of this kind prove favorable,
we would hope to have this methodology independently
validated and incorporated into a standard method for
monitoring offshore drilling operations.
Thank you very much.
-------
461
Question and Answer Session
MR. TELLIARD: Any mud
questions? Any questions?
MR. SNEERINGER: Paul
Sneeringer with the Army Aberdeen Proving Grounds.
With regard to those RICs where you had the large hump,
could you comment on what the chromatography is and
what the instrument is seeing?
MR. BROWN: Sure. In both
the diesels and mineral oils, that unresolved hump,
or UCM, is an unresolved complex mixture. What that
is, the peaks you see above the hump are straight
chain n-alkanes, nC-10 to nC-34 or whatever.
What's underneath there are mostly branched alkanes
and highly substituted alkanes, which the GC and the
GC/MS cannot sufficiently resolve into single peaks.
So what it does is, it just raises the baseline as
those components elute from the chromatographic column.
MR. TELLIARD: Thanks, John.
We now have a twosome from Conoco. Our first speaker
is going to be speaking on identifying hydrocarbon
contamination.
-------
462
ORGANIC CHEMICAL CHARACTERIZATION
OF DIESEL AND MINERAL OILS USED
AS DRILLING MUD ADDITIVES
J.S. BROWN
and
P.D. BOEHM
-------
463
OBJECTIVES OF PHASE I
TO DEVELOP AN ANALYTICAL METHOD CAPABLE OF DIFFERENTIATING BETWEEN
INDIVIDUAL DIESEL AND MINERAL OILS, BASED ON DIFFERENCES IN THEIR
ORGANIC COMPOSITION.
-------
[Figure: Reconstructed ion chromatogram (RIC) of mineral oil MO-A-6-84-3; o-terphenyl is the added internal standard (IS).]
-------
[Figure: Reconstructed ion chromatogram (RIC) of mineral oil MO-C-6-84-19; o-terphenyl is the added internal standard (IS).]
-------
[Figure: Reconstructed ion chromatogram (RIC) of the California diesel; o-terphenyl is the added internal standard (IS).]
-------
[Figure: Reconstructed ion chromatogram (RIC) of the low sulfur diesel; o-terphenyl is the added internal standard (IS).]
-------
468
[Figure] GC/MS/DS RECONSTRUCTED ION CHROMATOGRAMS (RIC) OF THE MINERAL OIL MO-B-6-84-7 AND THE ALASKAN DIESEL. NOTE THE SIMILARITY IN THE CHROMATOGRAPHIC PATTERN EXHIBITED BY EACH. o-TERPHENYL IS THE ADDED INTERNAL STANDARD (IS).
-------
469
[Figure] SUMMARY OF ANALYTICAL RESULTS: ORGANIC CHARACTERIZATION OF DIESEL AND MINERAL OILS, PHASE I. Compound classes charted: organic sulfur content; total nitrogen and sulfur; S-, N-, and O-polycyclic aromatic compounds (PAC); carboxylic acids; phenolic acids, aldehydes, and ketones; alkylated phenols; individual aromatic hydrocarbons (PAH); total aromatic content.
-------
470
PERCENT AROMATIC CONTENT OF DIESEL AND MINERAL OIL ADDITIVES

Sample                      Percent
Mineral Oil
  MO-A-6-84-3               10.2
  MO-B-6-84-7               2.1
  MO-C-6-84-19              3.2
Diesel Oil
  Low S Diesel              16.1
  High S Diesel             29.0
  Gulf of Mexico Diesel     23.8
  Alaska Diesel             11.7
  California Diesel         15.9
  EPA-API No. 2 Fuel Oil(a) 35.6 ± 3.9

(a) Triplicate determinations
-------
471
ESTIMATES OF ORGANIC SULFUR CONTENT AND TOTAL DIBENZOTHIOPHENES
CONCENTRATIONS (PARENT COMPOUND TO ...) USING GC/HECD

                          Organic Sulfur    Total Dibenzothiophenes    %
Sample                    Content           (DBT)                      DBT
                          (ug/g oil)        (ug/g oil)
Mineral Oil
  MO-A-6-84-3             ND                ND
  MO-B-6-84-7             790               370
  MO-C-6-84-19            3.0               ND
Diesel Oil
  Low S Diesel            3?0               25                         7
  High S Diesel           670               670                        100
  Gulf of Mexico Diesel   1900              760                        40
  Alaska Diesel           1600              900                        56
  California Diesel       3500              1200                       32
  EPA-API No. 2 Fuel Oil  3000              2100                       70
-------
472
[Figure] GC/HECD CHROMATOGRAMS OF TWO MINERAL OILS (MO-A-6-84-3 AND MO-B-6-84-7) AND TWO DIESEL OILS (CALIFORNIA DIESEL AND GULF OF MEXICO DIESEL). PEAKS CORRESPONDING TO DIBENZOTHIOPHENE (DBT) ALKYL HOMOLOGUES HAVE BEEN SHADED AND LABELED. THIANTHRENE IS THE ADDED INTERNAL STANDARD (IS).
-------
473
NITROGEN AND SULFUR CONTENTS OF THE DIESEL AND MINERAL OIL ADDITIVES

Sample                       Sulfur (ppm)    Nitrogen (ppm)
Mineral Oil
  MO-A-6-84-3                <20             39
  MO-B-6-84-7(a)             59              28
  MO-C-6-84-19               <20             21
Diesel Oil
  Low S Diesel               270             57
  High S Diesel              720             84
  Gulf of Mexico Diesel      2140            130
  Alaska Diesel(a)           1055            50
  California Diesel          3580            410
  EPA-API No. 2 Fuel Oil(a)  1005            112

(a) Average of duplicate determinations
-------
CONCENTRATION OF ALKYLATED PHENOLS IN THE DIESEL AND MINERAL OIL ADDITIVES
(ug/g additive)

Sample                     Phenol   o-Cresol   m+p-Cresol   C2 Phenols   C3 Phenols   C4 Phenols
MO-A-6-84-3                ND       ND         ND           ND           ND           ND
MO-B-6-84-7                ND       ND         ND           ND           ND           ND
MO-C-6-84-19               ND       ND         ND           ND           ND           ND
Low S Diesel               0.8      0.9        1.8          0.6          ND           5.1
High S Diesel              3.3      18         3.3          ND           6.0          7.6
Gulf of Mexico Diesel(a)   9.6      23         11           1.7          1.2          0.2
Alaska Diesel              0.1      2.3        3.3          0.7          ND           1.4
California Diesel          ND       27         71           6.9          ND           1.2
EPA-API No. 2 Fuel Oil     ND       5.4        6.2          ND           --           --

(a) Mean concentrations of triplicate determinations. (-- = illegible in the original.)
-------
475
[Figure] GC/MS/DS MASS CHROMATOGRAM MAPS CORRESPONDING TO THE ALKYLATED PHENOLS FOUND IN THE GULF OF MEXICO DIESEL AND THE CALIFORNIA DIESEL. (Labeled peaks include phenol, o-cresol, m+p-cresol, and C2-C3 phenols; phenol is absent from the California diesel.)
-------
476
CONCLUSIONS OF PHASE I
QUANTITATIVE DIFFERENCES BETWEEN DIESEL AND MINERAL OILS ARE EVIDENT
IN TOTAL AROMATIC, TOTAL SULFUR, AND ORGANIC SULFUR CONTENTS, AS
WELL AS IN THE CONCENTRATIONS OF INDIVIDUAL PAH (BENZENE,
NAPHTHALENE, BIPHENYL, FLUORENE AND PHENANTHRENE ALKYL HOMOLOGUE
SERIES).
THE IDENTIFICATION OF ALKYL PHENOLS AS A COMPOUND CLASS WHICH HAS
THE POTENTIAL TO, (BY ITS PRESENCE OR ABSENCE), DENOTE THE
OCCURRENCE OF DIESEL OIL IN MUD FORMULATIONS.
-------
477
OBJECTIVES OF PHASE II
TO EVALUATE THE EFFICIENCY OF TWO EXTRACTION TECHNIQUES (RETORT
DISTILLATION AND SOLVENT EXTRACTION) IN ISOLATING THE ORGANIC
TRACERS (IDENTIFIED IN PHASE I) OF DIESEL OILS FROM DRILLING MUDS.
-------
478
[Flow scheme] Drilling Mud
  1. Acidify with 14 mL 6N HCl
  2. Mix with 50 g Na2SO4 (3:1 wet weight:Na2SO4)
  3. Extract with methanol (2x, 100 mL each)
  4. Extract with 9:1 methylene chloride:methanol (2x, 100 mL each)
  5. Centrifuge after each extraction (3000 rpm, 10 min)
     -> Solids: discard
     -> Combined Methanol/Methylene Chloride Extracts
        (drilling mud retorts are introduced into the scheme at this point)
  6. Partition vs. 100 mL 1N HCl
  7. Isolate organic phase
  8. Extract aqueous phase with methylene chloride (50 mL) and diethyl ether (50 mL)
-------
479
[Flow scheme, continued]
  1. Remove 5% aliquot; dry over Na2SO4; weigh aliquot on electrobalance; GC/MS
     -> Total Extractable Content / Aromatic Hydrocarbons
  2. Partition remainder vs. 100 mL 1N NaOH (organic phase -> archive; spent phase discarded)
  3. Acidify with 6N HCl; extract 3x with diethyl ether (50 mL each)
  4. Dry over Na2SO4; concentrate; GC/MS
     -> Alkyl Phenols
-------
480
[Figure] Mass chromatograms (m/z 94, 108, 122) of phenols in No. 8 Mud (no added diesel) retort. The phenol signal is saturated; cresols (C1-phenols) and xylenols (C2-phenols) are labeled.
-------
481
CONCLUSIONS OF PHASE II
1. THE ANALYSIS OF INDIVIDUAL PHENOLS BY HIGH TEMPERATURE RETORT
DISTILLATION IS FLAWED DUE TO ARTIFACT FORMATION AND SHOULD NOT BE
CONSIDERED. PRELIMINARY EVALUATIONS OF THE FEASIBILITY OF A LOWER
TEMPERATURE RETORT APPARATUS INDICATE THAT THESE ARE SIMILARLY
SUBJECT TO ARTIFACT FORMATION.
2. SOLVENT EXTRACTION SHOULD BE ADOPTED AS THE METHOD FOR ISOLATING
PHENOLS AND HYDROCARBONS FROM DRILLING MUD SAMPLES. PRELIMINARY
ANALYSES HAVE SHOWN THAT THIS METHOD YIELDS ACCURATE, INTERNALLY
CONSISTENT DATA WHICH COMPARES FAVORABLY WITH THE COMPOSITION
DETERMINED PREVIOUSLY FOR THE NEAT ALASKAN DIESEL.
-------
482
OBJECTIVES OF PHASE III
1. CONDUCT FURTHER DEVELOPMENT WORK TO SUFFICIENTLY PURIFY THE PHENOLIC
ISOLATES OBTAINED FROM SOLVENT EXTRACTS TO PERMIT DIRECT GC/MS
ANALYSIS.
2. ANALYZE DRILLING MUDS WITH A MIXTURE OF ADDITIVES (INCLUDING CRUDE
OIL). ANALYZE DRILLING MUDS WHICH HAVE BEEN "HOT-ROLLED" AT HIGH
TEMPERATURE (TO APPROXIMATE "DOWN HOLE" CONDITIONS).
-------
483
[Figure] MASS CHROMATOGRAMS OF THE PHENOLIC EXTRACT OF NO. 8 MUD WITH 7.0% ALASKAN DIESEL, BEFORE (UPPER; NOTE POOR CHROMATOGRAPHY) AND AFTER (LOWER; NOTE IMPROVED CHROMATOGRAPHY) DERIVATIZATION TO FORM TMS ETHERS.
-------
484
[Figure] MASS CHROMATOGRAMS OF THE PHENOLIC FRACTION OF NO. 8 MUD WITH 7.0% ALASKAN DIESEL AFTER THE 10-g SILICA GEL COLUMN CHROMATOGRAPHY CLEAN-UP. (Labeled peaks: phenol, o-cresol, m+p-cresols, C2 phenols.)
-------
485
RECOVERY RESULTS FOR PHENOLIC COMPOUNDS ELUTED THROUGH 10-g
5% DEACTIVATED SILICA GEL CHROMATOGRAPHY COLUMN WITH 23 mL
10% ETHYL ETHER IN ...

Compound                 Percent Relative Recovery(a)   Percent Absolute Recovery(b)
                         (Mean ± SD)                    (Mean ± SD)
Phenol                   96.1 ± 2.2                     75.6 ± 10.7
o-Cresol                 118.0 ± 13.0                   91.9 ± 0.5
p-Cresol                 102.9 ± 1.2                    80.0 ± 8.4
2,6-Dimethylphenol       119.6 ± 15.4                   92.4 ± 1.2
Ethylphenol              121.5 ± 15.6                   94.9 ± 1.2
3,4-Dimethylphenol       109.1 ± 4.4                    85.2 ± 6.7
2,3,5-Trimethylphenol    120.7 ± 14.6                   93.9 ± 1.0
d8 p-Cresol (IS)         100                            78.1 ± 9.2

(a) Recovery based on triplicate determinations relative to Internal Standard (IS), d8 p-cresol.
(b) Recovery based on triplicate determinations relative to External Standard, 2-isopropylphenol added prior to GC/MS analysis.
-------
486
CONCENTRATIONS OF AROMATIC HYDROCARBON ALKYL HOMOLOGUES IN EXTRACTS OF DRILLING
MUD FORMULATIONS, ALASKAN DIESEL, AND LIGHT CRUDE OIL (mg/g Extract)

              Mineral   Alaskan   No. 8 Mud           No. 8 Mud       No. 8 Mud w/7.0%
Analyte       Oil B     Diesel    (No Diesel Added)   w/7.0% M.O.B.   Alaskan Diesel
Benzene       NC        NC        NC                  NC              NC
C1B           NC        NC        NC                  NC              NC
C2B           ND        0.577     ND                  ND              0.604
C3B           ND        0.946     ND                  ND              0.937
C4B           ND        0.509     ND                  ND              0.816
C5B           ND        0.085     ND                  ND              0.363
C6B           ND        ND        ND                  ND              0.496
Naphthalene   ND        0.467     ND                  0.01            0.665
C1N           ND        2.63      ND                  ND              3.76
C2N           0.06      5.60      ND                  0.08            8.63
C3N           0.22      7.96      ND                  0.33            12.3
C4N           0.43      5.47      ND                  0.38            3.96
C5N           0.16      2.13      ND                  ND              2.22

(M.O.B. = Mineral Oil B)
-------
487
CONCENTRATIONS OF AROMATIC HYDROCARBON ALKYL HOMOLOGUES IN EXTRACTS OF DRILLING
MUD FORMULATIONS AND LIGHT CRUDE OIL (mg/g Extract)

              50° Light   No. 8 Mud     No. 8 Mud w/7.0% 50°   No. 8 Mud w/0.7%   No. 8 Mud
              Crude       w/7.0% 50°    Light Crude + 0.7%     Alaskan Diesel +   w/0.7%
Analyte       Oil         Light Crude   Alaskan Diesel +       0.7% M.O.B.        Diesel
                                        0.7% M.O.B.
Benzene       NC          NC            NC                     NC                 NC
C1B           NC          NC            31.9                   NC                 NC
C2B           34.7        61.0          44.5                   0.245              0.20
C3B           19.5        33.6          26.7                   0.412              0.57
C4B           8.26        13.6          11.5                   0.378              0.91
C5B           3.64        5.58          5.28                   0.187              0.11
C6B           1.38        1.41          2.38                   0.175              ND
Naphthalene   5.76        10.1          7.88                   0.272              0.38
C1N           11.3        19.5          15.6                   1.65               2.47
C2N           9.72        16.7          14.6                   3.66               7.64
C3N           6.47        10.2          10.9                   5.14               12.2
C4N           2.04        3.44          4.47                   1.69               6.09
C5N           0.291       ND            1.67                   0.989              ND
-------
488
CONCENTRATIONS OF ALKYL PHENOLS IN EXTRACTS OF DRILLING MUD FORMULATIONS,
ALASKAN DIESEL, MINERAL OIL B, AND CRUDE OIL (ug/g Extract)

Sample                                  Phenol   o-Cresol   m+p-Cresol   C2 Phenols   C3 Phenols   C4 Phenols
Mineral Oil B                           ND       ND         ND           ND           ND           ND
Alaskan Diesel                          0.299    0.066      0.027        1.11         2.87         1.03
No. 8 Mud, No Oil                       31.8     1.44       6.77         ND           ND           ND
Lime Mud w/7.0% M.O.B.                  2.47     0.196      ND           ND           ND           ND
No. 8 Mud w/7.0% Alaskan Diesel         3.15     0.195      0.385        2.23         3.16         1.63
50° Light Crude Oil                     24.4     31.0       30.1         62.4         42.5         23.2
No. 8 Mud w/7.0% 50° Light Crude        49.6     44.1       46.3         88.2         41.4         12.2
No. 8 Mud w/0.7% Alaskan Diesel,
  0.7% M.O.B., 7.0% 50° Light Crude     30.4     28.3       31.4         60.9         30.8         9.23
No. 8 Mud w/0.7% Alaskan Diesel,
  0.7% M.O.B.                           13.0     0.544      1.38         1.10         2.27         0.833
Lime Mud w/0.7% Alaskan Diesel          25.1     0.958      5.56         4.04         7.43         0.660
-------
489
CONCLUSIONS OF PHASE III
1. THE SOLVENT EXTRACTION METHOD COMBINED WITH A SILICA GEL CLEAN-UP FOR
PHENOLS IS CAPABLE OF QUANTITATIVELY ISOLATING PAH AND PHENOLIC
ANALYTES FROM DRILLING MUD FORMULATIONS.
2. BASED ON THE C2-C4 ALKYL PHENOL HOMOLOGUES AND AROMATIC HYDROCARBON
COMPOSITIONS OF THE DRILLING MUDS, IT IS POSSIBLE TO DISTINGUISH
BETWEEN DIESEL AND MINERAL OIL ADDITIVES.
3. THE PRESENCE OF CRUDE OIL IN DRILLING MUDS MAY LIMIT THE USEFULNESS
OF THE PHENOLS, ALKYL BENZENES AND NAPHTHALENES AS INDICATORS OF
DIESEL OIL DUE TO CONTRIBUTIONS OF THESE ANALYTES FROM THE CRUDE OIL
MATRIX.
4. PRELIMINARY RESULTS INDICATE THAT HIGH TEMPERATURE "HOT-ROLLING" MAY
PRODUCE SOME ALKYL PHENOLS AS THERMAL DEGRADATION PRODUCTS OF OTHER
ORGANIC COMPOUNDS PRESENT IN DRILLING MUDS. THE CONCENTRATIONS OF
PAH DO NOT APPEAR TO BE SIGNIFICANTLY AFFECTED BY HIGH TEMPERATURE
HOT-ROLLING.
-------
490
FUTURE WORK
EXTENSIVELY ANALYZE HOT-ROLLED AND "WILD" DRILLING MUDS TO DETERMINE
IF THE HIGH TEMPERATURES AND PRESSURES ENCOUNTERED DURING DRILLING
WILL SIGNIFICANTLY CONTRIBUTE TO THE AROMATIC AND PHENOLIC
COMPOSITIONS OF DRILLING MUDS THAT WERE DOCUMENTED IN THIS STUDY.
-------
491
DR. SNOW: It's important
to find a source of petroleum hydrocarbon contamination,
not so much to place blame, but to expedite the
control of the source. Normally samples of this
nature are a complex mixture of compound types and
widely varying boiling ranges. Although the composition
varies widely, it's surprising how samples from the
same source can look very different, and conversely,
samples from different sources can sometimes look
very related.
This presentation examines the use of petroleum
biomarkers as detected by GCMS in the determination
of correlation between hydrocarbon samples found in
the environment. I'll give two environmental
applications and one with a little more industrial
flavor to show their use in this application.
Biomarkers have been successfully used for quite
some time in the petroleum industry for oil exploration.
We've found them to be quite useful as well in
environmental applications. Where the geochemist
uses the information to relate an oil to a source rock,
the same information can be used to relate a sample
to an isolated contaminant. Biomarker information is
-------
492
also useful in determining thermal maturity of the
oil or rock sample. However, this really has no
environmental application.
The relative amounts of certain biomarkers within
a certain petroleum sample give us information as to the
extent of the biodegradation that the sample has
undergone. This is important when comparing a fresh
sample to a more biodegraded sample.
Finally, the last application is that relative
amounts of specific biomarkers can be used in
determining oil migration distance parameters. It
becomes quite complicated when water transport is
involved, however. In any case, a detailed hydrocarbon
migration study using biomarker data is a potentially
powerful tool in locating unknown sources of hydrocarbon
contamination.
What is a biomarker? The best definition for a
biomarker is an organic compound found in a petroleum
sample whose carbon skeleton suggests an unambiguous
link to some natural product in that sample's history.
We hope it's ancient history, although I'll show
where these work quite well when the sample has been
contaminated with not-so-natural biomarkers in its more
-------
493
recent history, such as phthalate esters.
This is a cross section of biomarkers typically
used in the petroleum industry. Normal paraffins are
easily detected by capillary GC and are useful in
determining the boiling range of the hydrocarbon
sample. Pristane and phytane are isoprenoidal branched
paraffins, C-19 and C-20. The ratio of pristane and
phytane to their corresponding normal paraffin is a
good indication of how much biodegradation the
environmental sample has undergone. This again is
detectable very easily with capillary GC.
The next broad class of biomarkers are those of
the ringed paraffin material related to sterane and
triterpane structures. They're indigenous to just
about all crude oils in different magnitudes and
distributions. I won't go too much in detail about
them right now because we'll be talking about them in
guite some detail later.
The substituted aromatics have been quite useful
in the petroleum exploration field, however they're
not very useful in environmental applications since
they're quite susceptible to water washing. Finally,
a new class of biomarker that's been employed are
-------
494
petroleum porphyrins. They're not normally used for
environmental applications either, since they require
a complex isolation scheme and special mass spectral
considerations.
With all these choices of biomarkers available,
which one do you use? Well, there are two primary
criteria to be met. First of all, your biomarker
needs to be in your sample at a high enough concentration
that you're able to detect it with your method.
Secondly, it needs to be distributed uniquely enough
in different samples so you can use it as a correlation
tool.
With those two primary criteria being met, you
need to be sure that the biomarker that you use is
relatively non-susceptible to selective retention
during the oil's migration process, unless of course
you're using it in a migration study.
The biomarker needs to be thermally stable. In
environmental analyses, however, the criteria for
that are not quite as stringent as they are in petroleum
exploration. The requirement need only be that the
biomarker that you use be relatively non-volatile
under normal atmospheric conditions.
-------
495
Finally, the biomarker that you use needs to be
relatively stable to biological activity.
With that in mind, this is a normal progression
of a biodegraded petroleum type hydrocarbon. Initially
you'll see a net loss of abundant n-alkanes followed
by a loss of light n-alkanes. In the more moderate
range, you'll start to see some loss of the isoprenoidal
and branched paraffins and some attack on some of the
monocyclic paraffins.
In the more extensive to severe range, you'll
start to see some severe alteration of the ringed
compounds and loss of the bicycloalkanes. In the
extreme biodegradation you'll finally see the loss of
multiring material that we use for biomarkers. The
point of this slide is that the sterane and triterpane
material is the biomarker of choice for environmental
samples.
These compounds are found in the saturate fraction
of a modified ASTM separation. After the separation
the sample is analyzed by GCMS. The amount of material
sometimes is a problem. In a water sample, sometimes
we cannot isolate enough hydrocarbon to run the ASTM
-------
496
separation. However, we've had quite a bit of luck
running the whole sample with enough sensitivity; in
quite a few samples we've gotten a good fingerprint.
If concentration is a problem, however, we have
used methods such as splitless injection and single
ion monitoring, and you could even revert to on-column
injection just to get enough sample to the detector.
This is a typical triterpane-sterane fingerprint.
Essentially it is a single ion chromatogram for the
elution window for these compounds. The triterpane
mass spectrum is dominated by these two fragments.
Since the 191 is characteristic of all triterpanes we
use that for the single ion chromatogram.
The distributions represented here are due to
carbon number differences. They range from C-27 all
the way out to as far as you can get off a GC column.
Some of the distributions within a carbon number are
due to different isomers within this ring system.
The methyl isomers and some of the hydrogen isomers
are separable by the GC. We use that information.
From this point on, I think you'll note that
there's a series of doublets in the triterpane region,
-------
497
and that represents an isomerization that occurs at the
22 carbon of the molecule. Initially, all these
materials are deposited in their natural R conformation.
During the course of thermal maturity the S conformation
is formed until you get some sort of distribution as you
see here. There is an equilibrium, and this particular
sample is about at equilibrium.
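The maturity measure implied by these doublets is usually written as the ratio 22S/(22S + 22R) for each carbon number. The sketch below shows only that arithmetic; the peak areas are invented, and the equilibrium value near 0.6 is as commonly cited in the geochemical literature rather than a number from this talk.

    # 22S/(22S+22R) isomerization ratio from the areas of each hopane doublet.
    # Peak areas are invented for illustration; equilibrium is commonly
    # cited near ~0.6 in the geochemical literature.

    def isomer_ratio(area_22s, area_22r):
        return area_22s / (area_22s + area_22r)

    doublets = {"C31 hopanes": (5900, 4100), "C32 hopanes": (3100, 2200)}
    for name, (s, r) in doublets.items():
        print(f"{name}: 22S/(22S+22R) = {isomer_ratio(s, r):.2f}")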
This information is also useful to the petroleum
chemists in determining the thermal maturity of an
oil sample. However, it gives us an added fingerprint
variable to use in matching hydrocarbons of this
nature. The bottom trace is the sterane single ion
chromatogram, which is due to a dominating 217
fragment. These particular materials are related to
the sterols in the original material, and they range
from C-27 to C-29.
It's surprising how many of these compounds have
been synthesized and identified and reported in the
geochemistry literature. It's getting so if you look
at enough of these samples you can do a lot of the
identification using the relative retention time in the
single ion chromatogram.
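One way to make this sort of visual comparison less subjective is to reduce each m/z 191 (or 217) single ion chromatogram to a vector of areas for the matched peaks and score the pair numerically. The cosine-similarity sketch below is my own illustration of that idea, with invented areas; it is not a procedure described in this talk.

    # Cosine similarity between two biomarker fingerprints, each given as
    # peak areas at a common set of matched retention times (invented data).
    import math

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    oil_1 = [1.00, 0.80, 0.35, 0.60, 0.25]   # m/z 191 triterpane peak areas
    oil_2 = [0.90, 0.75, 0.40, 0.55, 0.20]
    print(f"fingerprint similarity = {cosine(oil_1, oil_2):.3f}")  # near 1.0 = close match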
This is an example of two crude oils that we
-------
498
received in our lab. There was an interest in whether
or not they were related to one another. I think
you'll agree, the TICs really do not look that close
to one another and I think you'll find this by just
about any other analytical method that you use.
The question again was whether these two were
related. Looking at the 191 single ion chromatogram,
you can see, in fact, the relationship between the two
samples. This can be verified by the 217 single ion
chromatogram, which is very similar.
This is a set of samples that we received where
we monitor local well water to determine any introduction
of hydrocarbon from local drilling activity. In one
of these wells we found an abnormally high total
organic carbon, and we received two extracts from these
water samples, the top one being the production water
itself. The ominous peak in the center is elemental
sulfur, that's how it elutes from a DB-1 capillary
column. The bottom well water sample has several ill-
shaped chromatographic peaks as well, that turn out to
be associated with naturally occurring fatty acid esters
and phthalate esters.
We have submitted this sample to an outside lab,
-------
499
and they were comparing samples using an infrared
method. There were so many carbonyl compounds, they
were unable to make a match between the samples using
their method. However, using our 191 single ion
chromatogram, I think you can see the good comparison
between the two samples.
Finally, in the industrial application we had an
overseas refinery that was having a problem with a
residue being found in its propane stream. The bottom
trace is the GCMS of the residue, and you can see
that there are several components in that trace. We
related several of them to additives that were used
in the industry.
However, there's a broad distribution of
hydrocarbon distillate. This was only thought to be
derivable from a series of compressors along the
propane stream. The compressors use a seal oil, and
they sent us several samples of the seal oil. Using
the single 191 chromatogram, we can see a fairly good
correlation between the two samples. Fortunately,
the samples were different enough to allow us to
determine from the best fit which of the compressors
was introducing the hydrocarbon contamination into
-------
500
the stream. We substantiated that using the 217
single ion chromatogram. The point of this application
is that it also has applications in refined streams.
I'd just like to say that although some of these
comparisons might seem slightly subjective, in this
short presentation there are several criteria which
constitute a fingerprint match. The more unique the
biomarker content the more confidence you have in
that match. Although I have given several examples
where biomarker analyses have been very useful in
correlating hydrocarbon samples, there have been
several cases where the comparison has not worked or
where it has been inconclusive.
I would like to stress that this type of analysis
should be used in conjunction with other analytical methods
used for the same ends.
In conclusion, we've found that biomarker analysis
has a place in the correlation of environmentally
derived hydrocarbon samples. I'll take any questions.
-------
501
Question and Answer Session
MR. HENNICK: My name is
Mike Hennick, Columbus, Ohio. Could this be used,
let's say after things are produced, you have oil or
diesel fuel facilities in your city and you suddenly turn
up some sort of petroleum product in your wastewater.
Could you use this type of technique to trace it back to
a certain manufacturer or a certain facility?
DR. SNOW: I think you'd be
able to use it if you could determine that there was
only one source. If it was in a large multi-user
system, it would be awfully difficult to do this with
several sources.
MR. HENNICK: Are oils or
gasoline products or diesel fuels, coming from
different areas of the country, do they have different
markers that can be used?
DR. SNOW: A particular
batch of, say, a mineral oil might have a biomarker
that could have a fingerprint that you could use in
whatever application you'd want to use in it, however,
a lot of times they're mixtures.
So again, you can compare mixed source materials,
-------
502
but only so long as you're comparing to one mixed
source.
MR. HUNTINGTON: My name is
John Huntington. One question I have is, all of the
things you show basically showed a good match between
the suspected oil and the biomarkers. Are there
cases that you've seen where you really don't get a
match? In other words, all of these things look
pretty similar to me. Do most oils have the same
biomarkers in the same ratios, or are there significant
differences?
DR. SNOW: There are certain
biomarkers that are ubiquitous to just about all
petroleum samples. But again, the relative ratios
and some of the more unique biomarkers are generally
useful enough to be able to determine whether or not
the samples are in fact related to one another.
However, samples where you get matches as good
as these are fairly rare. I've given some of the best
cases for this presentation. Usually at best you'll
be able to determine that they are not matches, but
there is always an element of error. Since biomarkers
in a lot of cases are very similar in distribution,
-------
503
often these are the cases where it's inconclusive.
MR. HUNTINGTON: One more
question. Is there a good summary or book about the
use of these biomarkers in chromatography...
DR. SNOW: The one that I've
gotten most of my information from is a book that
Paul Philp has put out just recently from the University
of Oklahoma Department of Geosciences. I'm not sure
of the publisher, but if you check on his name in an
author index, I'm sure you'll find it somewhere.
MR. HUNTINGTON: Thank you.
MR. TELLIARD: Our next
speaker is going to talk on the analysis of volatile
priority pollutants on site. Also from Conoco.
-------
504 - 519
[Figures: slides accompanying Dr. Snow's presentation. Recoverable labels include the sterane and triterpane region single ion chromatograms (m/e = 191 and m/e = 217, time in minutes) and the propane-stream residue chromatogram.]
-------
520
DR. GEARHART: The title
for my presentation is actually slightly different
than you might find in the latest version of the
program. It's On Site Analysis of Volatile Priority
Pollutants in Subsoil by Headspace GC Ion Trap Mass
Spectrometry.
I'd also like to acknowledge contributions of my
co-author, Mary Beth Ford.
Conoco R&D is presently evaluating a new
methodology for subsoil analysis which will hopefully
provide an efficient, accurate and rapid method to
screen a site expected to have subsurface contamination.
The purpose of this is basically twofold. First of
all, using a grid pattern and sampling at various
depths, we hope to be able to define the plume or
three dimensional profile of subsurface contamination
that might be present in an area, contributed by
specific target analytes, and ultimately then to use
that data to more accurately pinpoint the location of
groundwater monitoring wells which would be used in
subsequent cleanup operations.
The focus of my presentation this morning will
be in the analytical phase, that is, the use of headspace
-------
521
gas chromatography coupled with an ion trap mass
spectrometer to rapidly identify and quantitate four
target analytes in subsoil samples in the concentration
range from ten parts per billion to one part per million.
It was predetermined that the target analytes
would include this list: chloroform, benzene,
1,2-dichloroethane and 1,1,2-trichloroethane. Our
analytical solution to this problem involved several
method development steps, including developing a
sampling protocol, selection and setup of instrumentation
and development of quantitation procedures. We were
also faced with a remote site location so our
instrumentation solution had to be relatively portable.
The sampling process in this case begins by
drilling to expose subsoil at varying depth. This
picture shows a technician removing a drill auger
which is approximately five and a half inches in
diameter, from the ground. When you've uncovered
the subsoil at the desired depth, this drill bit has
to be removed and a special coring tool, which is
shown in this picture, is inserted down in the bore
and a sample core is removed. It may vary in length
from two to three feet, and it's about three and
-------
522
one half inches in diameter. The core is actually
removed from that coring tool by hydraulic pressure.
This picture shows you a cross section of an
actual core sample that was taken from the site. Our
sampling procedure involved taking several separate
aliquots from a given core sample using a cut-off five
milliliter plastic syringe barrel. This was simply
pressed down into the core, which normally had a
consistency of moist clay. The barrel was then
removed, containing the aliquot. That was placed
inside a 30 milliliter preweighed headspace vial, and
that was sealed on the spot, using a Teflon coated
silicone rubber septum in an aluminum cap.
Then the vial containing the aliquot was taken
back to the lab where it was weighed to get a net
weight for the sample. This picture shows an actual
clay aliquot together with the headspace vial, seal
and the cap.
The instrumentation setup which we selected
included a Hewlett-Packard headspace analyzer interfaced
to a capillary gas chromatograph which was in turn
interfaced to a Finnigan ion trap mass spectrometer.
The headspacer was selected for sample introduction
-------
523
basically because it offered a minimum pre-analysis
sample handling procedure. Some of the operating
conditions are shown here. We selected a
ten-minute equilibration time at the 75 degree Centigrade
bath temperature for screening purposes. But for
quantitation, a longer equilibration time is really
required, and we found that 30 minutes was minimum.
The gas chromatograph was used for analyte
separation. We selected a fused silica megabore
column with a DB 1 phase from J&W Scientific. The
primary purposes for selecting a megabore column over
a narrow bore or wide bore capillary are twofold.
First of all, the ion trap operation is basically
compatible only with capillary column flows.
Second of all, and equally important, is the fact
that we wanted to maintain an optimum split ratio in
order to get an efficient transfer of sample material
from the headspacer to the GC column.
In the case of a megabore, you can do that very
easily with a low split flow. The column flow rate
was about eight milliliters a minute, split flow about
40 milliliters a minute.
The overall GC analysis time was about 20 minutes.
-------
524
The Finnigan ion trap mass spectrometer was used to
provide selective detection and quantitation at trace
levels by operating it in a multiple ion mode. We
also used it with an open split interface between the
GC and the mass spec.
The narrow mass ranges that you see in this
slide were chosen for each of the target analytes to
give us a high degree of selectivity in the detection
process, as well as good sensitivity. It was noted
yesterday in one of the talks that the ion trap was
capable of extremely high sensitivity, and we found
this to be true. It was not uncommon to be able to
detect on the order of 500 picograms of material in
the trap when using multiple ion detection. However,
unlike the talk yesterday, our ion trap did not have
the new auto gain control software installed. We
plan to do that and presume we will see an increase
in sensitivity when that's done.
The IBM XT microcomputer, which comes as part of
the ion trap package, was used for peak integration and
data logging. This particular system, as we have it
interfaced, is also capable of doing identity
confirmations, if you desire to do that on target
-------
525
analytes, by operating in the full mass scan mode
instead of the multiple ion detection mode. In case
unknowns do appear, it is also possible to identify
them.
This is a mass spectrum of dichloroethane, and
I've included it to show you the rationale for the
selection of the mass windows. You remember from the
previous slide, we used the mass range 62 to 64 to
select and detect dichloroethane.
This picture shows the gas chromatograph, the
headspacer and the ion trap mainframe installed on a
lab bench. As you can see, it does present a minimal
footprint on the lab bench and is quite easily installed
in a truck or van if you do need to do a remote
operation.
This is a picture of the IBM XT which hosts the
operating system software.
This chromatograph shows the elution order for
the four target analytes, plus two additional
components, which are 1,1,1-trichloroethane and
trichloroethene.
This is an expanded chromatogram. The expansion
factor vertically is about tenfold, but I've included
-------
526
it so that you can see the chromatographic resolution
and minimal baseline noise which we achieved using
the megabore column-ion trap interface system.
I didn't mean to imply in the previous slide
that we were limited to such a small number of
analytes. Actually we can chromatographically resolve
many more. I put this list together on the basis of
compounds which might be at the site. In other
words, they were either probable or possible. As you
can see, they are resolvable chromatographically.
I've also listed the multiple ion detection modes
which we used in determining their retention times and
sensitivities.
We did anticipate rather severe matrix effects
with moist clay and sand samples using the headspace
sampling technique. So we did many experiments to
evaluate those matrix effects. This is an example of
one of the experiments. Basically, we plotted
response versus volume of spiked analyte. Note
the point here at 2 microliters, that would then
correspond to 2 micrograms of...in this case,
1,1,2-trichloroethane per gram of moist clay matrix.
As you can see, these response curves are linear
-------
527
through at least two microliters or the equivalent of
two parts per million of this target analyte per gram
of the matrix material.
We used this type of response curve to validate
our application of standard addition as a quantitation
procedure, since you do need to verify that you have
a linear response over the range in which your
calculations are done.
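[A minimal sketch of this linearity check, fitting a least-squares line to a spiked response curve; the spike volumes, peak areas, and 0.995 cutoff below are hypothetical, not data from this study:]

    import numpy as np

    # Hypothetical response-curve data: spike volume (uL) vs. peak area
    spike_ul = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    area = np.array([0.0, 210.0, 405.0, 630.0, 820.0])

    # Least-squares fit and coefficient of determination (R^2)
    slope, intercept = np.polyfit(spike_ul, area, 1)
    residuals = area - (slope * spike_ul + intercept)
    r_squared = 1.0 - (residuals ** 2).sum() / ((area - area.mean()) ** 2).sum()

    # Require near-linearity before quantitating by standard addition;
    # the 0.995 threshold is an assumed example, not a method requirement.
    if r_squared < 0.995:
        raise ValueError("response is not linear over the spiked range")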
Minimum detection limits were measured for all
of the analytes, and those are about ten parts
per billion or less, depending on the individual
analyte.
This is a similar response curve for the same
analyte, 1,1,2-trichloroethane, measured in moist
sand. The moisture content was observed to greatly
influence the extent of partitioning of the given
analytes between vapor phase and solid matrix.
This data summarizes results of an experiment
we did to measure reproducibility between three
aliquots from a given clay core sample.
The average relative deviations are shown in
this column here. Five percent or less was typical
for other experiments which we did like this to
-------
528
investigate the reproducibility of values. Most of
the source of this variance is due, we feel, to the
inhomogeneity that you find in soil samples. In
addition, the water is a major contributor to the
matrix absorption effect. You should also expect
humic material to drastically affect the partitioning.
The matrix effect was one of the major factors that led
us to select standard addition as the method of
quantitation. Standard addition, as you know, does
compensate for matrix effects, and I've included this
slide to summarize the steps involved for headspace
analysis.
As I said before, we use a stock solution which
contains ten micrograms per microliter for each one
of the target analytes. We collect for quantitation
two aliquots in the field from each core sample. One
of those is then treated as the unknown sample. The
second aliquot is spiked to act as a standard.
The analytes then are identified on the basis of
retention time matching from the chromatogram. But
remember we're doing multiple ion detection, so the
opportunity for coelution of an interferant is greatly
minimized.
-------
529
We use a 30 minute equilibration time for the spiked
standard and the sample when the quantitation process
is desired. We feel that's a minimum time and have
measured a 40 percent reduction in peak area over 30
minute intervals for various analytes. So the extent
of partitioning that does occur between vapor and
matrix is very significant in these types of samples.
It precludes an external standard type of calculation.
The quantitation process is somewhat lengthy,
but we have a BASIC language program installed on the
IBM microcomputer that takes care of it. It prompts
the operator to input the appropriate data which
involves peak areas and weights for the sample aliquot
and standard aliquot, and also the weight of the
target analyte in the spike.
The computer then carries out the calculation,
archives it on floppy disk or hard disk, whichever
you prefer. We also have it write a report for
the logbook.
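[A sketch of the standard addition arithmetic such a program might perform, assuming equal response per unit mass in the unspiked and spiked vials; the function name and the example areas, weights, and spike mass are illustrative, not values from the method:]

    def standard_addition_ug_per_g(area_unknown, weight_unknown_g,
                                   area_spiked, weight_spiked_g,
                                   spike_mass_ug):
        """Analyte concentration (ug/g) from one unspiked and one spiked aliquot."""
        # A_u = k * c * W_u ; A_s = k * (c * W_s + m_spike)  =>  solve for c
        denom = weight_unknown_g * area_spiked - weight_spiked_g * area_unknown
        if denom <= 0:
            raise ValueError("spiked response must exceed the unspiked response")
        return area_unknown * spike_mass_ug / denom

    # Example: ~5 g aliquots and a 20 ug spike (2 uL of the 10 ug/uL stock)
    print(standard_addition_ug_per_g(1500.0, 5.0, 4400.0, 5.1, 20.0))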
The method that I've described for you this
morning is an ongoing project, so we don't yet have a
large database to share with you for evaluating
the correlation of analyses of water samples
-------
530
from a contaminated area with this screening
technique, although that is our ultimate intention.
I have brought some data here from a
test site to give you an idea of what we're typically
seeing when water analyses are finally compared
with the screening procedure involving the rapid analysis
of subsoil samples taken in the grid pattern.
Look first of all at the right-hand column of
numbers. These represent the results which were taken
from subsoil analysis as I've just described, in a
test bore. The samples were clean down to 65 feet.
In other words, we didn't detect anything until we
reached 65 feet. Dichloroethane then appeared in the
soil samples at .1 parts per million.
As you look down the same column,
you can see that we actually drilled through the
contaminated layer. It happened to be a rather narrow
sand layer that was water bearing, bridged on either
side by moist clay. The maximum concentration that
we found was at 73 feet, and that was 1.03 parts
per million.
At a later time a water well was installed within
a 15-foot radius of the original test bore, in which
-------
531
we had done the subsoil headspace sampling. A screen
was installed at 65 to 75 feet, so the water sampling
would actually be done at that depth only. After the
well was properly purged and so on, a water sample was
taken and analyzed by purge and trap GC mass spectrometry
using Method 624. They identified and quantitated
dichloroethane at three parts per million.
The results that I'm showing there in the right-
hand column were not corrected for water content in
the sample. If you do that and assume that most of
the analyte will be in the water, then the results
become virtually coincident.
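[The correction described here amounts to dividing the whole-soil result by the water mass fraction, on the assumption that essentially all of the analyte resides in the pore water; the 0.34 moisture fraction below is an assumed example, not a reported value:]

    soil_ppm = 1.03         # dichloroethane in whole soil at 73 feet
    water_fraction = 0.34   # g water per g wet soil (assumed for illustration)
    water_ppm = soil_ppm / water_fraction
    print(round(water_ppm, 1))  # ~3 ppm, comparable to the Method 624 result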
In summary, screening of subsoil samples offers
a rapid method to determine the extent of underground
contamination which might be present at a site due to
the analytes which we've described here. It provides
a high degree of selectivity for definitive analysis
of the target analytes at trace levels. Its potential
for pinpointing the location of groundwater monitoring
wells to be installed later is presently being evaluated.
However, it appears based on the correlations that we
have so far that it does show a lot of promise. Are
there any questions?
-------
532
Question and Answer Session
MR. MILLER: Mike Miller
from Enviresponse. Have you taken this ion trap into
the field?
DR. GEARHART: We have it
at a remote site. It's not located in a truck, but
it is in a remote lab site.
MR. MILLER: Is it in an
actual laboratory environment?
DR. GEARHART: Yes, it is.
We have mobile trailers for environmental analysis
which are also air conditioned, and we don't anticipate
a major problem in moving it to one of those.
MR. WALKER: Bob Walker.
I saw your curves were very linear, and I was
interested in how you spiked the sample. Do you
spike it into the vial containing the soil? Do you
spike it actually into the soil by putting the needle
into the soil, or do you sort of squirt it on top of
the soil?
DR. GEARHART: We use a
needle volume syringe, first of all, to measure the
standard solution. It is deposited on the inner wall
-------
533
of the headspace vial, so that capillary action
assists in getting all of it into the vial.
MR. WALKER: So you're not
really testing what's...in the soil.
DR. GEARHART: You mean,
we're not touching the soil sample itself inside the
vial?
MR. WALKER: Right.
DR. GEARHART: No.
MR. WALKER: You're
not...can't really predict the efficiency of the
extraction, the efficiency of the headspace extractor
because you don't know what's going down to the soil,
depending on the... .
DR. GEARHART: No. We're
presuming equilibration occurs in partitioning between
the vapor and the sample. To add some credence to
that assumption, we have done time studies to monitor
the change in concentration in the gas phase in the
vial. As I said, we've seen that decrease as much as
40 percent, depending on what analyte it happens to
be and what the matrix is. But there is a significant
change as the partitioning equilibrium is approached.
-------
534
Thirty minutes appears to be sufficient time to get
most of that accomplished.
DR. MARKELOV: I'm not
familiar with the data system...and I have a question
about collection of data. While data acquisition is in
progress, can you process the data from a previous run?
DR. GEARHART: A qualified
yes. The way to do it with the IBM
system is to network a second PC to...the master PC
would be the one actually doing acquisition. By
using a simple third-party commercially available
network software package and board, you can access
files that are stored on the primary disk of the
master PC. In that way you can do calculations or
massaging of data from previous runs. You can't do
true foreground background, however. You can't
process the current run.
MR. PILLIS: I'm Lewis
Pillis from CAS. When you test soils, how do you
determine the amount of free headspace in the vial?
DR. GEARHART: We don't,
but we maintain it constant. You're probably anticipating
that the free headspace has to be either measured or
-------
535
constant.
MR. PILLIS: I didn't see
the slide that well on how you...
DR. GEARHART: We use an
identical aliquot for the sample in the spike standard,
so the PV relationship is maintained constant in the
vial. That's assisted by using the syringe technique
for acquiring the sample from the core in the first
place. They're very regularly shaped.
MR. TELLIARD: Thanks. One
quick note. The TV in your room says that you can
check out at 1:00. It's lying to you. Checkout time
is supposed to be noon.
At lunchtime, if you want to check out and bring
your bags in here we can litter them around the back
of the room. Those of you who are not staying at the
hotel tonight, please make an effort to check out.
They've informed us that the hotel is booked again
tonight, so they'd like to get their hands on your room.
Let's get back here at 1:30 and get at it again.
That hopefully will give you enough time to have a
baloney sandwich and check out.
(WHEREUPON, a luncheon recess was taken.)
-------
536
MR. TELLIARD: Can we get
started, please? Paul, John, John. I've got two
Johns, a double John, John squared.
Our first speaker this afternoon is Paul Marsden
from S-CUBED. Paul is going to talk on GC methodology,
primarily megabore columns, for pesticide
analysis. We've adapted a good deal of what Paul is
going to be talking about to our methodology presently
in use, and you'll probably see it down the road in a
few months in a procurement package that will
be out looking for bidders, for those people who do
that sort of thing and are interested in money. Paul?
-------
[Pages 537 through 562: presentation figures; the scanned content is not legible in this transcription.]
-------
563
DR. MARSDEN: Thanks, Bill.
It is a GC/ECD method, and so this afternoon I'm
going to talk about a wide-bore capillary column
method for organochlorine pesticides and PCB's.
Those of you out of the Superfund arena may think
you've heard this talk before, but we've made some
additional refinements, and it might look considerably
different from the last time I gave the talk. So be
prepared for a new talk.
The Superfund Office, the office which is now
part of the Office of Solid Waste and Emergency
Response, saw that they needed a new pesticide method
in order to satisfy the data quality objectives and the
method throughput requirements of the contract lab
program, CLP. There is a method in place, it's been
used for years, and as problems have developed with
the...in operation of the method, it's been fixed by
a series of Band-aids. At this point, it's now as
much Band-aid as old method, and the idea was to
build a new method with three
items included. The first was to have
the QC requirements of the method built in
as an integral part of the method.
-------
564
Second, it was to utilize the best available
analytical methodology as part of the protocol, and
then, third and finally, to bring in through the
method development consultations with a number of
people so that it wouldn't just be written by one
person.
As I said, there is a present method, but some
issues have been identified as needing correction in
the pesticide protocol that's now used. The first of
these is the performance of the method surrogate,
dibutylchlorendate, DBC. Second, it was noticed that
the retention time windows specified in the present
IFB are a little bit too narrow, and there are cases
where lindane and other BHC isomers are non-detects
when they clearly are present.
There are problems with the use of the alumina
cleanup, and there are the inherent limitations of
using packed columns for GC analysis of environmental
samples. Also, there's a need for an Aroclor
specific method.
Finally, the idea was to build a method that would
increase sample throughput over what can be achieved
in the present Superfund method.
-------
565
Initially, when we were going to give this talk,
there was going to be somebody from EPA who was going
to identify these issues and then I was going to indicate
how we answered these technically. But that didn't
happen, so on with how we solved the problems with
the method.
The first thing we tackled was the idea of a new
surrogate. Dibutylchlorendate is a diester, so it's
prone to acid or base hydrolysis. It co-elutes with
di-octylphthalate and in the operation of the CLP
pesticide method, gives recoveries anywhere from zero
to 3,000 percent. So that was the first thing that
was recognized that would have to be changed in the
method.
So we've come up with a dual surrogate method
that uses isodrin on the left side and hexabromobenzene
on the right side of the chromatogram. These things
also serve as retention time standards in addition to
being surrogates.
Isodrin is the endo isomer of aldrin. It
elutes in the middle of the GC run, and even though
it is an Appendix 8 analyte, it was never produced
commercially. Shell had made something less than 500
-------
566
pounds of it. It didn't have anything to recommend
it over aldrin, so they never proceeded with
commercializing it.
The other surrogate is hexabromobenzene. It's a
reasonably stable molecule. It resists sulfuric acid
and permanganate digestion. It comes out at the end
of the GC run. In our experience the recoveries of
these compounds range anywhere from 60 to 100 percent
out of water, and 60 to 80 percent out of clean soil.
Clean meaning what comes from the CLP
program as opposed to what Bill sends us.
The next issue that was addressed was some of
the cleanups. The present method requires the use
of a small alumina column. Alumina seems to be
a fairly non-reproducible material from lab to lab
and even within laboratories. So what we've gone
with is the little cartridge columns. This is a
2,3-dihydroxypropyl silyl ether; they're available
commercially from a number of manufacturers; they
are more rapid and reproducible than alumina; and they
have behavior much like florisil. You can get endrin
aldehyde through the Diol column where that wasn't
possible with the alumina. We're specifying stainless
-------
567
steel frits or Teflon frits in order to reduce sample
contact with plastic. The laboratory has to demonstrate
acceptable performance of each lot of cartridges,
using pesticide standards, and the samples themselves...
the analytes are eluted through the columns with a
mixture of nine to one hexane/acetone.
This is a picture of one manufacturer's manifold.
This is the original design or what was originally
available as manifold. There have been some
significant improvements in these. You can get them
now where they'll handle more than ten samples. You
should insist on getting one of those allowing you
to adjust the flow rate for individual tubes.
Sticking into the top, for those of you who haven't
used these before, the cartridges are what are held
in the syringe bodies there. The one on the left is
a half a gram, the one on the right is a two-gram
cartridge. They're washed with solvent. You lift
the black top off, place a rack of ten ml volumetrics
inside, replace the top and turn on the vacuum (the
gauge over on the right is almost a necessity for
this). Wait until you pull up the vacuum, put your
sample on, and elute them off with nine mLs of the
-------
568
hexane/acetone solvent. This replaces the alumina
cleanup in the present CLP method.
Now, the next issue we addressed was that of the
GC column to use. Packed glass columns have been
around for a long time. I can remember early on in
school, once you were clever enough that you didn't
break a metal column, then they said, "now you can use
one of these fancy new things we just came up with."
Nowadays we're probably looking at moving beyond
glass columns altogether.
The wide-bore capillaries, as opposed to the
narrow bore variety, offer significant advantages. They
allow you to keep the resolution that you can have
with a capillary column, and because of the small amount
that will flow through the column, you can temperature
program with the capillary even with the ECD.
Because of their small size, you can put two columns
into a single injector port, so those of you stuck
with Varian auto samplers don't need to buy two auto
samplers.
With the wide bore columns, you have a column
capacity comparable with that of a packed column, so
they're really difficult to overload. Actually, make
-------
569
that not difficult to overload, but they work quite
well on a routine basis with environmental samples.
Now, a list of things is one thing. I've got
a couple of sample chromatograms of standards here.
I told Neil I would tell you this. The DB-5 is
there because it's what we use. Supelco makes an equally
good product.
This mixture here is the 17 CLP target pesticides
with the surrogates, and out here you've got isodrin,
hexabromobenzene, and dibutylchlorendate, which was
just put in there for comparison. Under this new
method, dibutylchlorendate will no longer be used at
all.
The next one is the same view on the DB-608, so we
have columns of two polarities. This is a somewhat
more polar column, somewhere between an OV-13 and an
OV-17. A little bit better resolution of DDE and
dieldrin is possible on this column as is reversal
of a couple of the BHC's.
Just as a matter of comparison, this is as good as
packed column chromatography gets. You have a great
deal of overlap here. You've got methoxychlor
unresolved from endrin ketone, DDD and endosulfan
-------
570
lying on top of each other, and a somewhat complex
series right over there.
Mechanically, these things can be fit into a
single quarter-inch injector port. You can talk to
Neil about where to buy one of these. But this is
simply a glass tee available from Supelco. It runs
right into the regular injection port.
The amount of the split between the two columns
seems to be very reproducible over time. We've never
really made an effort to see if it is a 50-50 split.
As long as that split is really reproducible it
doesn't really make much difference because you
calibrate each column separately.
My feeling is there's more difference between
the two ECD's than there is between the amount going
on at the two columns. That's mounting the injector
side. On the detector side, because you now have a
megabore column, those of you in the front
maybe can see this, there's the light yellow
column there going into the detector port.
You do need a makeup gas, and that's the line
right there, the silver one. The makeup gas does allow
you a real advantage. You can use helium as a carrier
-------
:; "•:>. •;.•-;. ' ••':'•:>•:,-.\ : "' • . •"••' :.'••' . ' . 571 '
gas, and you can use your argon-methane as makeup.
This will significantly improve the chromatography.
That's enough on columns. The next point
that was raised was that one of the problems with the
present CLP method is that too much time is spent
running check samples and standards and not enough
time is spent running actual samples. This really
slows down the acquisition of data. At present you
can run for 72 hours off of an initial calibration
and then you have to recalibrate.
So in the new method, we're going back to a
608-type analytical scheme where a three point initial
calibration of the single component pesticides is required.
But that initial calibration is used until there is
an unacceptable PEM, that's a performance evaluation
mix run. The retention time windows for
identification have been changed slightly. There is
a plus or minus 2.5 percent window for early eluters, and
that goes down to a 1.5 percent window for
the second half of the chromatographic run.
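[A minimal sketch of applying these two-tier windows; the heptachlor boundary follows the method slide shown later, and the retention times used here are hypothetical:]

    HEPTACHLOR_RT = 21.3  # boundary retention time in minutes (assumed example)

    def in_rt_window(observed_rt, expected_rt):
        """True if an observed peak falls inside its identification window."""
        tolerance = 0.025 if expected_rt < HEPTACHLOR_RT else 0.015
        return abs(observed_rt - expected_rt) <= tolerance * expected_rt

    print(in_rt_window(13.2, 13.5))   # early eluter, 2.5 percent window
    print(in_rt_window(33.0, 33.5))   # late eluter, 1.5 percent window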
Then finally, background subtraction is no longer
allowed. But all analytes must be present at less
than 0.5 times the low quantitation limit in the
-------
572
instrument blanks.
This is the 12-hour evaluation mix. Like I
said, you continue to run your initial calibration as
long as you pass on this 12-hour mix. It isn't an
indefinite process. You do eventually lose your
initial calibration. So probably you can run about
a week on a calibration. It depends on the type of
sample. But we have compounds in there at low,
medium, and high levels. We've got two BHC's in there
as resolution checks, and endrin and DDT continue to
be used to check column breakdown or breakdown of
analytes on the column.
The acceptance criteria for this evaluation mix
are the endrin and DDT breakdown, column resolution,
that all of the analytes on that list be present in the
identification windows, and that the responses be within
20 percent of what was calculated for the initial
calibration. Should the first injection fail, you
are allowed a second injection. If that fails, then
the initial calibration has to be rerun.
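[A sketch of that acceptance logic for the response check alone; the compound names, calibration factors, and data structures are illustrative:]

    def pem_passes(measured_cf, initial_cf, limit=0.20):
        """True if every calibration factor is within `limit` of the initial value."""
        return all(abs(measured_cf[c] - initial_cf[c]) <= limit * initial_cf[c]
                   for c in initial_cf)

    initial = {"gamma-BHC": 1.00, "Aldrin": 0.95, "HBB": 0.80}
    first = {"gamma-BHC": 1.25, "Aldrin": 0.99, "HBB": 0.82}   # fails on gamma-BHC

    if not pem_passes(first, initial):
        # one repeat injection is allowed before recalibration is forced
        second = {"gamma-BHC": 1.08, "Aldrin": 0.97, "HBB": 0.81}
        if not pem_passes(second, initial):
            print("rerun the initial calibration")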
This slide then pulls together a comparison
between the two, the present Superfund CLP method and
the proposed method. There's no change in the extraction
-------
573
technique; the sonication technique for soils and
sediments, or the liquid-liquid technique for water
samples.
In sample cleanup, the GPC previously has been
allowed as an option on soils, which meant that nobody
did it, or very few people did it. It is now required
for soils. The alumina column has been replaced with a
Diol cartridge. The analysis itself is different because
wide-bore capillaries are used and because there is a
three point calibration now on the single component
pesticides. In method QC there are now dual surrogates
of isodrin and HBB, and your run sequence is for an
indefinite period as long as you can hit the 12-hour
tests. You can run more often than every 12 hours. Matter
of fact, it's probably a good idea if they're real
bad samples.
These are sample recoveries from the average of
six recoveries over the low level quantitation limit to 120
times the quantitation limit. I'm not going to argue the
statistics about that, but this is just to get all
this on one slide.
Recoveries generally are quite good. They bounce
around 80 percent. The one where it is low is the
-------
574
endrin aldehyde. But in the previous method that one
isn't even recoverable.
Still in development or being written is an
Aroclor specific protocol. The need for this Aroclor
specific protocol came about in response to regional
requests or regional concerns that the new high
concentration IFB is a GCMS only. There is no real
straightforward way to get Aroclors out of that
method, and they're on the hook to deliver a lot of
Aroclor data. So the Aroclor specific protocol will
involve a regular pesticide type extraction with
liquid-liquid or sonication with methylene chloride
acetone, an exchange of the solvent to hexane and then
follow ,it with a sulfuric acid and then a permanganate
cleanup. This is an adaptation of the transformer oil
method. Run that through a Diol column and shoot it
through a GC/ECD.
The Aroclors come through with fine recoveries.
As a matter of fact, the only pesticide you lose by
doing this is aldrin.
Getting ready to wrap things up, Bill mentioned
in the introduction that a lot of this work is being
adapted or has been adapted by Dale Rushneck into
-------
•;•' •••'• '•;•.-.••... ..... -575
Method 1618. Thanks to Bill', we were able to get
some real samples, as opposed to the fairly clean ones
that CLP sends us to torture test this method, and to do
some testing with a number of different analytes. So
all."of the chlorinated ITD/RCRA compounds can be analyzed
by using a very similar method, and the only difference
with the method is that your final GC hold time is
longer because you've got rayrex on that list.
You can take the same extract, shoot it into an
FPD or NPD, and get your organophosphates, and then finally
the phenoxyacid herbicides are derivatized and
analyzed using the same two GC columns. But that is
a separate extraction.
I don't have the recovery sheet on this because
there are too many compounds. But they are similar.
Some of the ITD/RCRA compounds are hydroxylated, so
they tend to give lower recoveries because of the
Diol cleanup.
Then to actually conclude, we've got some
acknowledgements here. As I said, this work has been
going on for quite a while. We've had regular
consultations and a lot of guidance from Joan Fisk in
the Superfund Office, and Dave Bottrell, who went over
-------
576
to EMSL Las Vegas during this time, has been involved
with it. I want to mention that Gareth Pearson at Emsel,
at EMSL^LV and Fai Tsang of S-CUBED did a lot of the
analyses themselves so I could go on these trips;
Bill Loy from Region 4 wrote the present pesticide
protocol, and he keeps"us honest a lot on QA and
calibration. I appreciate the help provided Armand
Lang at the QA lab in Las Vegas; and by Tom Gilfoil from
EMSCO. That concludes the presentation. Any questions?
-------
577
Question and Answer Session
MR. SAUTER: Drew Sauter.
Could you calculate, Paul, what the percentage
improvement in analysis statistics is? Is there a
number?
DR. MARSDEN: Well, it's
really matrix dependent. In a clean matrix, they're
real comparable, but then as you get to these real
filthy samples you can...if anything it probably
looks like your recoveries go down, because in a
filthy sample with an ECD, you tend to identify a lot
of noise analytes.
So we definitely never get over a hundred percent
recovery, although with a complex matrix and packed
column method you can get up to 3,000 percent recovery.
MR. SAUTER: I didn't state
it very well. What I meant was, if I have one of
these on the old method and your method or the method
that you have...and I understand it's a matrix problem
in...samples, are you saying there's no general
improvement in analysis, like how many samples you
can get through the...
-------
578
DR. MARSDEN: Because this is
going out for bid, this begins to be company
confidential. I think since my two bosses are over
there, I'm going to duck that one.
MR. LACONTO: Paul Laconto,
Nanco Laboratories, Wappinger Falls, New York. In
your initial comments concerning, let's start from the
bottom up and build a new method for...
DR. MARSDEN: No, no, no.
Build QC considerations into the new method. This
method is very close to Method 608 with a megabore column.
MR. LACONTO: Why didn't we
go a step further? I know you're using ICP...some of
their product line also includes the C18 reversed
phase silica. Enlarging the scope on this, it would
seem to me that perhaps a solid phase technique which
is gaining quite a bit of popularity...
DR. MARSDEN: I've done a
little bit of the solid phase. Again, there's a
matrix problem. You can get very good extraction,
very good efficiency out of clean water. But you put
a water in there like one of the ones Bill sent us
for the flash point of 80 degrees...you saturate that
-------
579
C-8; there's no recovery at all from that one.
So this one will do just about every sample.
The solid phases will work with the clean ones only.
MR. LACONTO: In your
experience with this...what is your optimum injection
volume which you're using for that...
DR. MARSDEN: We're using
just the position one on the HP. It's closer to a one.
MR. LACONTO: One microliter?
DR. MARSDEN: Yes. I would
imagine you could go up to the three.
MR. MOSESMAN: Neil Mosesman,
Supelco. Paul, you made a comment about the frit in
the solvent extraction, that you're recommending
stainless steel. What about the cartridge itself as
a plastic? The extract could come in contact with
the plastic syringe barrel. Have you found any
contamination?
DR. MARSDEN: We've never
found any contamination. We haven't even found
contamination from the frit. There seems to be
swelling again in some of these really high organic
liquids or high-organic extracts where you've got a
-------
580
lot of...in these things you can smell benzene or
toluene coming off of them. Those actually swell and
deform those frits. It degrades what little chroma-
tography happens on those columns.
MR. MOSESMAN: But the
organic content of the extract doesn't affect the
cleanup procedure, either? It doesn't all go into
vials?
DR. MARSDEN: You're allowed
to go anywhere from half a gram up to two gram Diol.
There is some operator choice in there. If you've
got very heavily contaminated samples, you have to go
to the larger columns. So, yes, that will be a
problem if you try to go with 100 percent half
gram columns.
MR. O'DELL: George O'Dell
from Nanco Labs. I'm wondering, what are you using
for confirmatory analysis of those? Are you using
two different electron...
DR. MARSDEN: Yes, that's it,
dual column only, or you can do two injections, yes.
MRS. JONES: E.B. Jones
from...Canada. You mentioned the...can you tell me
-------
581
the way you obtained it?
DR. MARSDEN: It's available
through the EPA repository. Actually, I guess Shell
Agchem no longer exists. Maybe you can get it from
Sittingbourne. It was made as a standard and as a
test compound but it was never disseminated in the
environment outside of test plots.
MR. PLAICE: Bob Plaice
with the McGruder Corporation. Can you give me the
specifics on the...
DR. MARSDEN: It's a half micron
thickness. I'm sure if you go back and talk to him,
he'll give you the catalog number and everything.
MR. TELLIARD: Thank you,
Paul. Our next speaker is going to be John McGuire
from our Athens Laboratory. For those of you who are
routine attendees of this conference, you know John
has spoken before. We've tried to keep him off the
program, but he keeps coming back. John is going to talk
about one of our programs that we've been continuing
on with, which is additional data review and
classification that the Athens Laboratory is doing in
support of the Office of Water. John?
-------
A WIDE-BORE CAPILLARY COLUMN METHOD FOR
ORGANOCHLORINE PESTICIDES AND PCB'S
By
Paul Marsden
S-CUBED
582
-------
APPROACH TO METHOD DEVELOPMENT
• Identify OSWER requirements for a new CLP pesticide/PCB's method.
• Build QC requirements into method from the "ground up".
• Utilize best available analytical methodology as part of the protocol.
• Conduct regular meetings with individuals from OSWER, EMSL-LV,
S-CUBED, and other pesticide analytical chemists.
583
-------
ISSUES IDENTIFIED IN THE
OPERATION OF THE PRESENT
CLP PESTICIDES/PCB'S PROTOCOL
• DBC surrogate performance.
• RT windows are too narrow.
• Alumina column cleanup.
• Limitations of packed column GC analysis.
• Need for an Aroclor specific method.
• Sample throughput using the present method.
584
-------
NEW METHOD SURROGATE/RRT STANDARDS
Isodrin
Endo-isomer of Aldrin
Elutes at mid-GC run
Never produced commercially
HBB (hexabromobenzene)
Stable in H2SO4 or KMnO4
Elutes at end of GC run
Recoveries are:
60 - 100% from clean water
60 - 80% from clean soil
585
-------
DIOL CLEANUP
Prepacked cartridges are used to minimize interlaboratory differences.
More rapid and reproducible than alumina.
Allows determination of Endrin aldehyde lost on alumina.
Requires stainless steel frits to reduce sample contact with plastic.
Acceptable performance of each lot of cartridges must be demonstrated.
Samples are eluted with 9:1 hexane/acetone.
586
-------
ADVANTAGES OF
WIDE-BORE CAPILLARIES
• Increased column resolution improves peak identification and
quantitation.
• Temperature programming is possible with the ECD.
• Two columns can be installed in one packed column injector.
• Column capacity comparable to packed column.
587
-------
30 m DB-5 MEGABORE ANALYSIS
588
-------
[Chromatogram; legible peak labels in elution order: gamma-BHC,
beta-BHC, Heptachlor, delta-BHC, Heptachlor epoxide (24.37 min),
Endrin ketone, Dieldrin, Endrin, Endosulfan II, DDT, Endrin aldehyde,
Endosulfan sulfate, Dibutylchlorendate (32.13 min), Methoxychlor,
Hexabromobenzene.]
589
-------
1.5% OV-17/1.95% OV-210 PACKED COLUMN
[Comparison chromatogram on a packed column.]
590
-------
CHANGES IN ANALYTE IDENTIFICATION/
QUANTITATION PROCEDURES
• Three point initial calibration of single component pesticides required.
• Initial calibration is used until an unacceptable PEM is run.
• Retention time windows are changed:
± 2.5% before heptachlor.
± 1.5% past heptachlor.
• Background subtraction no longer allowed - all analytes must be < 0.5
CRQL in instrument blanks.
591
-------
12 HOUR PERFORMANCE
EVALUATION MIX (PEM)
1. gamma-BHC   5.0 ng/mL   CRQL
2. Aldrin      50 ng/mL    10x CRQL
3. 4,4'-DDT    500 ng/mL   50x CRQL
4. beta-BHC    5.0 ng/mL   CRQL
5. Endrin      100 ng/mL   10x CRQL
6. Isodrin     50 ng/mL
7. HBB         200 ng/mL
592
-------
PEM ACCEPTANCE CRITERIA
• DDT and Endrin breakdown < 15%.
• All peaks 100% resolved.
• All RRT's in the identification windows.
• All Calibration Factors within 20% of initial calibration.
• Second injection of PEM is allowed.
593
-------
PROPOSED SUPERFUND PESTICIDE PROTOCOL

             Present Method                  Proposed Method
Extraction   Sonicator for soil/sediment;    No change
             continuous liquid/liquid or     No change
             separatory funnel for waters
Cleanup      GPC optional;                   GPC required for soil;
             alumina required                Diol cartridges required
Analysis     Dual packed column GC/ECD;      Dual wide-bore capillary;
             one point calibration           three point calibration
QC           Dibutylchlorendate as           Isodrin and HBB as
             surrogate/RT standard;          surrogate/RT standards;
             72 hour run sequence with       indefinite run sequence with
             checks every five samples       12 hour PEM's
594
-------
RECOVERY OF SINGLE COMPONENT
ANALYTES AND SURROGATES

                      Matrix                                   Matrix
Compound            Water  Soil     Compound                 Water  Soil
alpha-BHC             80    80      Endosulfan II              79    77
beta-BHC              62    77      4,4'-DDD                   87   112
delta-BHC             63    60      Endosulfan sulfate         68    81
gamma-BHC             87    95      4,4'-DDT                   91   104
Heptachlor            69    74      4,4'-Methoxychlor          73    71
Aldrin               104   118      Endrin ketone              68    77
Heptachlor epoxide    74    98      Endrin aldehyde            55    49
Endosulfan I         101    85      alpha-Chlordane            82    78
Dieldrin              79   104      gamma-Chlordane            85    75
4,4'-DDE              73    67      Isodrin                    97    94
Endrin               119    94      HBB                        63    69

Average of six recoveries CRQL to 120x CRQL.
595
-------
AROCLOR-SPECIFIC PROTOCOL
• Regular extraction procedures are used:
Liquid/liquid or separatory funnel for water.
Sonication for soil/sediment.
• Solvent exchange to hexane.
• Vortex with 1:1 H2SO4.
• Vortex with 5% KMnO4.
• Diol cleanup.
• GC/ECD analysis.
596
-------
APPLICABILITY OF THIS METHOD
TO ADDITIONAL ANALYTES
• Additional chlorinated ITD/RCRA compounds can be analyzed using a
very similar method (proposed Method 1618).
• Organophosphorus pesticides can be analyzed by substituting FPD (or
NPD) for the ECD (Method 1618).
• Phenoxyacid herbicides are analyzed using the same GC columns in
Method 1618.
597
-------
ACKNOWLEDGEMENTS
Joan Fisk        OSWER
Dave Bottrell    QAD, EMSL-LV
Gareth Pearson   EAD, EMSL-LV
Siu-Fai Tsang    S-CUBED
Bill Loy         Region IV
Armand Lang      ERG, Las Vegas
Tom Gilfoil      Lemsco, Las Vegas
598
-------
[Pages 599 through 601: figures; the scanned content is not legible in this transcription.]
-------
602
DR. MCGUIRE: Thank you, Bill.
A program has been reinstituted to identify organic
chemical compounds in industrial effluents, using
mass spectra and GC retention data. The first portion
of the program has the objective of identifying and
determining the distribution and relative concentrations
of organic compounds in industrial wastewater samples.
It utilizes mag tapes of spectral and chromatographic
data collected over the years by contractor laboratories
of EPA's Office of ITD.
Based on an analysis system that was assembled
in the late '70s at the Environmental Research Lab
in Athens, an extensive suite of computer programs
locates the best spectrum for each compound, identifies
the most likely compound, determines its historical
frequency and estimates its concentration based on
internal standards. Now, at the time we first whipped
that up in the 70s, that was very, very intelligent.
Nowadays, you would expect that almost any halfway
intelligent GCMS computer system would do the same
thing, except for that one item of "determining
historical frequency." We think that is still a
unique feature of this set of programs.
-------
603
The results of the tape study are essentially
two files consisting of identifications and unidentified
spectra respectively. In a study using the earlier
version of the system, 1565 specific organic compounds,
the vast majority of which were not regulated, were
identified. (Slide 1)
2500 additional compounds (I call them compounds
although not identified, they had reproducible spectra
and reproducible retention times) were detected five
or more times. Today, I will provide a progress
report on efforts to update the original programs to
take advantage of improved technology. I'll also
discuss application of the program as a tape study to
identify organic compounds in effluent data from the
rubber industry, collected for ITD as part of the
consent decree verification study.
Finally, as part of the second portion of the
programs, I will outline a multi spectral approach
to the identification of unidentified spectra from
the MIS file. This multi spectral approach involves
the re-analysis of retained extracts corresponding
to the particular contract laboratory GCMS data run.
Confirmations and identifications of selected
-------
604
compounds will be made using multi-spectral identifi-
cation techniques such as GC/FTIR, GC/high resolution MS,
and GC/PPINICI.
Now, for those who aren't familiar with what
we're talking about, I'd like to discuss what a tape
study is. (Slide 2)
In order to provide guidelines for regulating
effluent discharges and establishing treatment
standards, the ITD division of the Office of Water
has had a large number of GCMS analyses conducted
by contract labs. These contract analyses typically
cost in the millions of dollars and had specific
and limited target analytes such as the priority
pollutants.
By the way, for the contract lab people, that
doesn't mean each one of you got millions of dollars,
but in toto.
In recognition that more information would be
generated in the analyses than would be required to
meet the limited objectives, the office had the raw
GCMS data stored on mag tape.
At the top of this slide, it says "tape study
flow chart", and underneath that is mandate. That
-------
605
is the way we got into most of this work. Congress
decided that EPA needed to regulate certain compounds,
or the courts decided it. At any rate, there was a
mandate for EPA.
That mandate came down to the program office.
The program office set up contracts to conduct the
GC/MS analyses, which included in the broadest sense
taking the samples from a sample site, bringing
them back, working them up, and actually performing
the GC/MS analyses.
When that was done, the data were stored as
raw data on mag tape. At the same time, there were
also outputs from the contract labs for the target
compounds. The stored raw data were sent to our lab
where they became part of this tape study. The first
thing we did was to plug the raw data into our computer
and regenerate chromatograms.
Next, was to have the computer recognize the
presence of GC peaks as discrete compounds in those
chromatograms. The programs were sophisticated
enough that they also extracted the very best mass
spectrum for each of the blips identified as a GC peak.
When I say they were "sophisticated enough", we
-------
606
actually have 542 routines that take part in this
suite of programs, so they are quite sophisticated.
The computer then identifies the compounds,
and having done that, then proceeds to check against
an historical library database to see if for example
dimethyl chicken wire is actually coming out at the
right relative retention time. If it is, then the
computer is happy and it will go down on the right-hand
column of this chart to say that the identification
is reasonable. If on the other hand the computer
says, "no, no, no, no, dimethyl chicken wire can't
possibly come out at that point", it will determine
that that identification is unreasonable and it will
store the spectrum in our MIS database.
If, as is more typically the case, the computer
scratches its head and says, "gee, it might be dimethyl
chicken wire, but I'm not sure", then we have a
chemist who sits and looks at the data; the data at
this point consisting of the massaged GC/mass spec
output, that is, the mass spectrum, and the best
matches that the computer has been able to generate
from our rather extensive collection of mass spectra.
-------
607
The chemist then makes the decision "yea" or
"nay". At this point, the flows go on the same as if
the computer had made them; that is, either where
the identification is reasonable and it goes into a
hit list or it is not and it goes into the MIS file.
The identified compound list is then used
to go back up and modify the lists so that
they reflect not simply what mandate comes
from Congress or a court, but actually the compounds
that ought to be identified in the environment.
On the left-hand side of the list is the MIS
list, those compounds where we cannot perform a
solid identification. At this point, the spectrum
along with all sample identification is stored.
The tape study then is a retrospective analysis
of all the stored data by a suite of programs.
Costs of a tape study, even for a large one such as
was done in the past and the one that I showed on
the first slide, are generally well under a million
dollars. The amount of information produced by
such a study can be expected to extend the value of
generated information by a very significant amount.
This information consists not only of the
-------
608
identity and approximate concentration of each compound
in each sample, but also a breakdown of the
frequencies with which the compounds are found in each
industry survey. As a valuable extra, the outputs
of the programs also include the MIS file of spectra
that have been encountered in the course of the
study, but which have not been identified by either
the computer or the chemists who oversee the programs.
This MIS list can be processed in the same manner
as the hit list of identified compounds. That is,
the final reports of the analyses are able to
provide statistics on the frequency with which the
unidentified spectra are found. The frequencies
found for these MIS spectra are useful measures to
highlight some spectra that need to be identified
in order to characterize a waste stream or process.
We have found, for example, in some of our old
work that the same spectrum was found as often as
200 times, but it wasn't identified. Compounds
like that are obvious targets for us to identify
them.
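[A sketch of the frequency bookkeeping that flags such spectra; the spectrum identifiers, sample names, and review threshold are hypothetical:]

    from collections import Counter

    # One entry per detection event: (unidentified spectrum id, sample id)
    detections = [("MIS-0412", "run-01"), ("MIS-0412", "run-02"),
                  ("MIS-1077", "run-01"), ("MIS-0412", "run-03")]

    frequency = Counter(spectrum for spectrum, _ in detections)
    for spectrum, count in frequency.most_common():
        if count >= 3:  # assumed threshold for follow-up work
            print(spectrum, "found", count, "times - candidate for identification")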
Information such as that is essential to
responsible regulation, since it replaces guesstimates
-------
609
of what's present and at what levels with hard
data. In the original EGD tape study for the non-
priority pollutants, the 114 organic pollutants were
the targeted compounds. Many more compounds were
found and identified in the 22 industries covered,
as well as 25 times as many that couldn't be
identified.
A new tape study has begun for the ITD during
the past year. It will be comparable in size to
the original EGD study and will cover more than a
dozen new industries. The first industry to be
studied was the rubber industry, which was sampled
and the samples analyzed for the contractors in '81
and '82.
The tapes were logged into our PDP-11/70 computer
database in '82, but they weren't processed until
the present work began. The original computer
programs, which were designed to operate on the PDP-
11/70, have been revised and modified for operation
on the VAX 785. (Slide 3)
A significant part of the modification was
-------
610
based on preliminary work that Walt Shackleford
carried out on the 1170 in changing the PBM library
and method of search to provide for a faster search
time. He's discussed this in the past at this
meeting and given due credit to McLafferty's group
at Cornell, who were quite helpful in our implementing
the Most Significant Peak sort on the 1170.
The programs used in the 1976 to '81 tape
study utilized data stored on mag tape, and were
revised for operation on the VAX 785. The library
of reference spectra used in that study also was
made operational.
Benchmark tests were run without any problem
in order to check the performance of the programs
on the VAX with those of the 1170. At that time,
which was about eight months ago, the new ITD tape
study began.
The suite of new programs performs well in
identifying GC peaks and matching the best spectrum
for each against a database or spectral library. A
new and very extensive mass spec library of over
100,000 compounds has been acquired from John Wiley
and is being prepared for use. Many tapes of GCMS
-------
611
data from the ITD contractors have been received
and processed at our lab.
A staff of computer personnel and chemists was
assembled to support the new tape study. However,
just after we got through training a key chemist, he
accepted another job and we're now in the spot of
retraining his replacement.
At the present time, our contract group working
directly on tape analysis consists of one chemist
and two senior programmers. Interviews are currently
being held for a junior programmer and a technical
writer. This last position is felt to be critical
because of the need for documenting exactly what
the programs do.
Bringing the old suite of programs up to speed
on the VAX, when they had last been used on the 1170,
was much more difficult than I thought it would be
because of inadequate documentation on the original
programs.
One of the results of the earlier study was
that a need was seen for better quantitation. In
order to provide the quantitation, isotope dilution
MS was used by many of the contractors, and because
-------
"~ 612
of the extensive use of isotope dilution duterated
spikes in Methods 1624 and 1625, it was felt critical
to use the large Wiley mass spec library with many
duterated compounds represented for the new work.
When this was attempted, a totally unanticipated
problem became evident in preparing the library for
its use. The older programs had run on the 1170,
which had the peculiarity of counting to its maximum
count of 32,767 and then counting backward from the
negative of 32,768 to zero. This meant that the
1170 could handle a library of more than 65,000
spectra and each would be assigned a discrete
number even though some would be negative.
The VAX does better. When we first tried to
read in a spectrum with a number higher than 2^15,
the machine informed us in no uncertain terms that
that number was outside the range. This has limited
us to using the library that was used for the greater
part of the earlier phase of the project, while our
programmers have been working on getting the new
library into a proper I*4 format for handling
spectrum numbers in excess of 100,000. We expect to
have that work finished by the end of this month.
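[An illustration of the underlying 16-bit arithmetic, using numpy's fixed-width integers; this is a general demonstration, not the original FORTRAN:]

    import numpy as np

    idx16 = np.int16(32767)       # largest signed 16-bit value
    print(idx16 + np.int16(1))    # wraps to -32768 (numpy warns of overflow)

    idx32 = np.int32(32767)       # 32-bit (I*4) index
    print(idx32 + np.int32(1))    # 32768: room for >100,000 spectra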
-------
613
At that time, we plan to implement the use of
the large library to reprocess all runs from both
the POTW and rubber industries. When the programs
were demonstrated on the VAX, the first set of
tapes to be processed was the verification phase
analyses for the rubber industry.
Slide 4 is the chromatogram of a base neutral
fraction from the rubber industry. The base neutral
fractions from the rubber industry, at least those
from the butadiene plants have that characteristic
"humpogram". It is, as might be expected, just a
mess of aliphatic hydrocarbons. Nonetheless, this
profile is always found in base neutral butadiene
rubber plant chromatograms. Slides 5 and 6 are
typical examples of chromatograms of acid and VOA
fractions.
Because the rubber industry runs have been
obtained on a packed column, this might lend itself
well to establishing a real comparison with the results
of the earlier tape study. Results were gratifying,
in that the success rate for identifying the recognized
peaks was essentially the same, using the same library
of spectra as the overall rate for the other work.
-------
614
The summary reports produced included both the
earlier work and the runs processed in the more recent
phases. In slide 1, the two top frequency distributions
here show the old hit (on the top) and miss frequency
distributions. Those are typical distributions. The
vertical axis is the number of times the particular
compound or spectrum has been found, the horizontal
is simply a marking place.
Including all samples received in both phases
of the work, we've logged in approximately 500 samples
including blanks and standards for the rubber industry.
Processing was significantly slower than anticipated
because we were expecting the use of
1,4-dichlorobutane as one of the internal standards.
It turned out that the contractor had used deuterated
toluene. It took us several days of running to
realize that although the 1,4-dichlorobutane was
usually present, it was not reliably present at the
same level.
The bottom frequency plot shows the frequency
distribution of identified compounds in the rubber
industry. The one identified most frequently was
found 32 times.
-------
615
The identities of the top 50 compounds, ranked
by both frequency and median concentration, are
shown in this table (slide 8). I doubt if it's
particularly legible; there were not very many
compounds. Our grand total was only a little longer
than this list.
The compounds that were found are the compounds
that one might expect to be found. Now, the MIS
report is functioning well and will be used to select
candidates for the multi-spectral identifications
when enough runs have been processed for this to be
meaningful. I had hoped to discuss the newer POTW
data today, but we had a small problem. The weather
in our area of the country had been running very
much below normal in temperature recently, so it was
decided that it was an ideal time to shut down the
air conditioning system for some much-needed repair
work.
That was done, and the first day the air
conditioner was shut down, the temperature hit 93.
So, the computer system ran for about one hour that
day, and the individual in charge of the computer
-------
616
turned it off for the balance of the week. As a
result, I don't have my data out. I apologize for
it.
Slide 9 is a listing of the compounds found in
POTW during the last survey that we made. We were
just beginning to get into the POTW category at the
time we stopped the runs before.
An interesting sidelight of preparing for the
study of the rubber industry was the observation of
the fingerprint characteristics of the particular
contract labs. I'm not going to show you any of the
data, but the compounds noted for each lab were all
logical contaminants such as C-6 or C-7 hydrocarbons,
chlorinated compounds or phthalates. But it appears
that certain laboratories have a much greater chance
of showing particular impurities in their blanks than
do other labs. On the other hand, the other labs
have a greater chance of showing their own fingerprints.
Now, whether this is due to impurities in
solvents or whatnot isn't assignable at this time.
We do know that we went through a big witch hunt in
my own lab a few years back on phthalate contamination.
We finally found it was coming from a final cleanup
-------
617
step we were giving, because it made the glassware
look prettier. Once we stopped giving that cleanup
step with reagent grade acetone, the phthalates
essentially disappeared.
I see I'm running overtime, so I'm just going
to jump in and say a few words about multi-spectral
identification (slide 10). We checked to see whether
multi-spectral identification on some of the retained
sample extracts from the first study was practical.
We chose some of the very trace level unidentified
compounds from the first phase of this work. We
requested 24 of the retained extracts from the
Sample Control Center. They were able to furnish
us four of them, and of the four, two of them
unequivocally had the spectra we were looking for.
So we knew that it was feasible on those.
One of them was a total bomb. There was no
correlation at all between what we were looking for
and the compounds we found in that extract. For the
fourth one, we had to make a very discouraging
assumption: that the contract lab's mass spec
wasn't in very good tune. If we made that assumption
and hence were able to say
-------
618
that the M/Z 58 peak we saw on our old MIS file
was really a M/Z 57 peak and that the M/Z 72 was
really a M/Z 71, and a few other assumptions like
that, then we were able to find the spectrum we were
looking for.
That sounds like a lousy assumption. However,
I checked the quality control for that particular
lab, and found they had a much, much higher percentage
of repeat runs for quality control checks than did
other labs. So I think it was a good assumption.
As the MIS spectra accumulate during the tape
study, we're going to apply GC/FTIR, GC/high resolution
mass spec and other techniques to the corresponding
extracts to identify the high priority unknowns.
The concept of using more than one analytical
instrument to determine the identity of a particular
organic is certainly not new. What I think is
new is working in batches, and the idea of recycling,
as this flow shows: taking a number of unknown
spectra, concentrating on trying to identify those,
putting the identified spectra into our databases,
whether they be GC, IR, or mass spec, recycling
-------
619
the unidentified ones, and continuing to do this as
we bring in more and more batches.
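
As a sketch of that recycle loop (ours, not the
speaker's; the identification step below is a random
stand-in for the actual GC/FTIR, mass spec, and
retention-time matching), in C:

#include <stdio.h>
#include <stdlib.h>

#define N_SPECTRA 100            /* unknown spectra awaiting work */

/* Stand-in for the multi-instrument identification step; here it
   succeeds at random, while the real test is spectral matching
   against the reference databases. */
static int identify(int spectrum)
{
    (void)spectrum;
    return rand() % 4 == 0;      /* assume ~25% success per pass */
}

int main(void)
{
    int pending[N_SPECTRA], n_pending = N_SPECTRA, n_identified = 0;
    for (int i = 0; i < N_SPECTRA; i++) pending[i] = i;

    for (int pass = 1; n_pending > 0 && pass <= 5; pass++) {
        int n_next = 0;
        for (int i = 0; i < n_pending; i++) {
            if (identify(pending[i]))
                n_identified++;              /* goes into the databases */
            else
                pending[n_next++] = pending[i];  /* recycled to a later batch */
        }
        n_pending = n_next;
        printf("pass %d: %d identified, %d recycled\n",
               pass, n_identified, n_pending);
    }
    return 0;
}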
Obviously, if you're trying to identify one
unknown compound, your probability of doing it isn't
all that great. If you're trying to identify 20
unknown compounds, your probability of getting
something out of that batch of 20 is much higher.
This is where we're really hoping we're going to
do well. I think we're going to end up with a lot
of good identifications.
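
To put a rough number on that batch effect (our own
illustration; the ten percent figure is assumed, not
from the talk): if each unknown independently has
probability p of being identified, then

    P(at least one of n) = 1 - (1 - p)^n

so with p = 0.10 and n = 20, the chance of getting
something out of the batch is 1 - 0.90^20, or about
88 percent, versus 10 percent for a single unknown.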
We have come along quite far, but not necessarily
"us". The scientific community has come along quite
far in increasing the sensitivity of GC/FTIR. The
highest priority in my own group right now is
buying a state-of-the-art GC/FTIR. I think using
that in combination with the GC/MS, and of course with
retention times, which we all now know are very,
very important, we're going to come up with a system
that will provide quite a bit of interesting information
to present at next year's meeting.
I'm going to sign off now. Thank you.
-------
621
SUMMARY OF FIRST TAPE STUDY

NUMBER OF GC/MS RUNS                 20,000
TYPES OF SYSTEMS                          5
NUMBER OF COMPOUNDS FOUND              1565
NUMBER OF SPECTRA NOT IDENTIFIED       2500
PARAGRAPH 4(C) COMPOUNDS SELECTED         6
-------
622
AERL TAPE STUDY
TAPE STUDY FLOW CHART

MANDATE
  -> PROGRAM OFFICE
  -> CONTRACT GC/MS ANALYSES  (TARGET COMPOUNDS <- MODIFY LIST)
  -> STORED RAW DATA
  -> RE-GENERATED CHROMATOGRAMS
  -> COMPUTER RECOGNITION OF COMPOUNDS
  -> COMPUTER IDENTIFICATION OF COMPOUNDS
  -> COMPUTER CONFIDENCE CHECK
       UNSURE -> CHEMIST DECISION
       IDENTIFICATION UNREASONABLE -> MIS LIST OF UNIDENTIFIED SPECTRA
                                       -> UNIDENTIFIED SPECTRA
       IDENTIFICATION REASONABLE -> HIT LIST OF IDENTIFICATIONS
                                       -> REPORTS ON FREQUENCY OF
                                          OCCURRENCE OF ALL
                                          IDENTIFIED COMPOUNDS
-------
623
PLANNED DATA TREATMENT UPDATES
COMPARED TO EGD TAPE STUDY

o FASTER COMPUTER
o DOUBLE PRECISION ARITHMETIC
o PORTABLE SOFTWARE
o UP-TO-DATE REFERENCE LIBRARY
o MULTIPLE STANDARDS AND SURROGATES
o IMPROVED PEAK RECOGNITION
o BASE PEAK COMPARABILITY
-------
[Slide 4 - chromatogram]
RUN NAME: 5623    FRACTION NO.: B/N
EPA SAMPLE #:    GC COLUMN: 3% SP-2250 DB
INDUSTRIAL CODE: RUBBER PROCESSING
TREATMENT STAGE: BASE SCREENING    START MASS: 35
VERIFICATION    VERSION 02-3
[Abundance vs. time (min), 0 to 52 min]
-------
[Slide 5 - chromatogram]
RUN NAME: 5319    FRACTION NO.: ACID
EPA SAMPLE #:    GC COLUMN: 1% SP-1240 DA
INDUSTRIAL CODE: RUBBER PROCESSING
TREATMENT STAGE: ACID SCREENING    START MASS: 35
VERIFICATION    VERSION 02-3
[Abundance vs. time (min), 0 to 23.33 min]
-------
[Slide 6 - chromatogram]
RUN NAME: 5564    FRACTION NO.: VOA
EPA SAMPLE #:    GC COLUMN: 0.2% CARBOWAX 1500 ON CARBOPACK C
INDUSTRIAL CODE: RUBBER PROCESSING
TREATMENT STAGE: UNDILUTED - NO SURROGATE    START MASS: 35
VERIFICATION    VERSION 02-3
[Abundance vs. time (min), 0 to 19.33 min]
-------
627
[Slide 7 - frequency bar charts, top one hundred compounds]
FREQUENCIES OF THE HIT LIST DATA
FREQUENCIES OF THE MIS LIST DATA
[Vertical axis: number of times found, 0 to 1,500;
horizontal axis: compounds]
RUBBER INDUSTRY
-------
628
[Slide 8 - top 50 compounds identified, rubber industry]

COMPOUND                                            CAS REG NO  FREQUENCY  MED CONC
METHYLBENZENE (TOLUENE)                                 108883         32        40
BENZENE                                                  71432         22       109
N-OCTADECANE                                            593453         22        31
DICHLOROMETHANE                                          75092         20         8
STYRENE                                                 100425         19       446
N-HEXADECANE                                            544763         19        17
o-CRESOL (2-METHYLPHENOL)                                95487         18        54
2-(2-BUTOXYETHOXY)ETHANOL                               112345         17       140
9-FLUORENONE                                            486259         17        87
BENZOTHIAZOLE                                            95169         16       120
BENZOIC ACID                                             65850         15        37
2,6,10,14-TETRAMETHYLPENTADECANE                       1921706         14        25
BENZALDEHYDE                                            100527         14       228
PARA-CRESOL (4-METHYLPHENOL)                            106445         12        22
CAPRYLIC ACID                                           124072         12        37
DI-(2-ETHYLHEXYL)PHTHALATE                              117817         12        10
HEXANOIC ACID (CAPROIC ACID)                            142621         11        25
N-HEPTADECANE                                           629787         10        94
METHYL-PHENYL KETONE                                     98862         10       202
M-CRESOL                                                108394          9        33
DIACETONE ALCOHOL                                       123422          8        93
TRICHLOROMETHANE (CHLOROFORM)                            67663          8        97
PARA-ETHYLPHENOL                                        123079          7        33
DIOCTYL PHTHALATE                                       117840          6       325
2,4-DIMETHYLPHENOL                                      105679          5        99
PROPANENITRILE, 3-(DIETHYLAMINO)-                      5351042          5       353
1,2-DIMETHYLBENZENE (O-XYLENE)                           95476          4         9
2-ETHYL-1-HEXANOL                                       104767          4       178
PALMITIC ACID                                            57103          4       142
4-PHENYLCYCLOHEXENE                                    4994165          3       500
ALPHA-TERPINEOL                                          98555          3       259
STEARIC ACID                                             57114          3       164
FENCHYL ALCOHOL                                        1632731          2      5088
1-METHYL-4-ISOPROPYLBENZENE                              99876          2      1833
2,6-DI-TERT-BUTYL-4-METHYLPHENOL                        128370          2     12629
PHENOL, 2,3,5-TRIMETHYL-                                697825          2      1723
CAMPHENE                                                 79925          2       831
PHENOTHIAZINE                                            92842          2      4107
AR,ALPHA-DIMETHYLSTYRENE                               1195320          1      1459
TRICYCLENE                                              508327          1      1235
ISOPROPYL ALCOHOL                                        67630          1      4527
3,4,5-TRIMETHYLPHENOL                                   527548          1      2170
3-HYDROXYBENZOIC ACID, METHYL ESTER                   19438109          1      1477
2,2,4-TRIMETHYLDIHYDROQUINOLINE                         147477          1      1098
P-NONYLPHENOL                                           104405          1      1617
2-MERCAPTOBENZOTHIAZOLE                                 149304          1      1272
LIMONENE                                                138863          1      2850
CYCLOHEXANOL                                            108930          1     17854
BENZENE, 1-ETHYL-2,3-DIMETHYL-                          933982          1      1346
1-HYDROXY-3,4-DIMETHYLBENZENE (3,4-DIMETHYLPHENOL)       95658          1      1351
-------
629
Twenty Most Frequently Found Compounds in
Phase 1 Tape Study of POTW's

Rank  Compound                 # Observations   Concentration Range (ppb)
  1   Tetrachloroethylene            705            .1  - 24,000
  2   Methylene chloride             498            .6  -  3900
  3   Butyl carbitol                 448           1.8  -  2500
  4   Toluene                        434            .3  - 18,000
  4   Di(i-octyl)phthalate           434            .8  -  7200
  6   Chloroform                     420            .5  -  2300
  7   Ethyl benzene                  249            .5  -  2700
  8   Benzene                        245            .7  -  5000
  9   Butyl phthalate                240           1.   -  4100
 10   Phenol                         151           1.3  -  3500
 11   Dioctyl phthalate              136           2.   -  5100
 12   p-Cresol                       134           2.6  -  4200
 13   1-Methyl-naphthalene           122            .6  -  2200
 14   Naphthalene                    118           1.1  -  1800
 15   n-Pentadecane                  112            .9  -  2000
 16   Palmitic acid                  106           1.4  -  7400
 17   p-Xylene                       101           2.   -  2600
 18   Biphenyl                        89            .9  -  1800
 19   Phenylacetic acid               88           5.6  -  9900
 20   Benzoic acid                    84           1.3  -  5900
-------
630
MULTI-SPECTRAL IDENTIFICATION

ASSUMPTIONS:
1. TAPE STUDY HAS GENERATED SPECTRA OF COMPOUNDS THAT HAVE
   NOT BEEN IDENTIFIED BY THE STUDY.
2. IDENTIFICATIONS WILL BE HANDLED IN BATCHES OF CONVENIENT
   SIZE.

STEPS:
1. RE-DO THE GC/MS ANALYSES TO CHECK THE PRESENCE OF THE
   UNIDENTIFIED COMPOUND IN THE SAMPLE EXTRACTS.
2. OBTAIN POSITIVE ION AND NEGATIVE ION CHEMICAL IONIZATION
   SPECTRA TO DETERMINE MOLECULAR WEIGHT AND COMPOUND TYPE.
3. OBTAIN HIGH RESOLUTION MASS SPECTRA TO LEARN EMPIRICAL
   FORMULAE OF MAIN FRAGMENTS.
4. OBTAIN GC/FTIR SPECTRA TO DISCOVER IMPORTANT SUB-STRUCTURES.
5. POSTULATE IDENTITIES WHERE POSSIBLE AND CONFIRM WITH STANDARDS.
6. UPDATE REFERENCE DATA BASES TO INCLUDE THE NEWLY IDENTIFIED
   SPECTRA.
7. RESERVE BALANCE OF BATCH FOR FUTURE CONSIDERATION.
8. SELECT ANOTHER BATCH AND REPEAT THE ABOVE STEPS.
9. RECONSIDER ALL RESERVED SPECTRA CONSIDERING THE INFORMATION
   FROM LATER BATCHES.
10. OBTAIN NMR SPECTRA FOR BETTER SUB-STRUCTURE INFORMATION ON
    HIGHEST PRIORITY COMPOUNDS NOT IDENTIFIED.
11. POSTULATE AND CONFIRM IDENTITIES.
12. UPDATE REFERENCE DATA BASES AND RE-CYCLE THROUGH ABOVE STEPS.
-------
631
Question and Answer Session
MR. TELLIARD: Any questions?
Our next speaker is from Shell. I thought it was a
misprint when I first saw it, that it wasn't George Stanko.
But we were talking in the back before, at lunchtime.
For some of you who haven't been here before or are
kind of newcomers, this group meeting has been
going on for ten years. We know that the described
purpose of this meeting is to get together, eat
seafood, lie to each other and drink a lot.
But it was conceived to do something else
originally. It came out of the consent decree
lawsuit against the agency, when the agency was
told to come out with national standards on a group
of new compounds called priority pollutants. We did
this through this medium primarily, as an attempt to
get the industries, our laboratories, our contract
laboratories, together to discuss problems. It was
never designed as a research or a researchy type of
meeting. It was designed to talk about nuts and
bolts, and generally bitch.
The early proceedings from this sound a little
bit like the Bickersons. There was a lot of name
-------
632
calling, insinuations about parentage, things like
that. But as we look back over ten years, I think
it's important that you remember that, as we all
came off the block, as Drew had alluded to, we
turned around one afternoon and said, we're going
to use GCMS to do this, a lot of people said tish,
tish, tish, tish. If a couple of people had
listened, they'd be wealthy people now.
Of course, we made that decision really on a
very thin string. The only thing we had going for
us was the support from our Athens lab on the
semivolatiles through Ron Webb and Larry Keith and
John McGuire. Then we had the folks on the volatile
end with Lichtenberg and Bellar in Cincinnati, and we
tried to merge this thing.
Now, there were some people who didn't think
our first methods had everything in them, Stanko
being one of them. He pointed out there were some
slight deficiencies, and after ten years there are
still some slight deficiencies.
But I think the most important thing that this
meeting has done, the function of this meeting was,
it brought the industry and the regulated community
-------
••'.:' 633
and the agency together to sit down and make one
agreement, that we may fight and argue about what
the data means, whether it was 3, 12 or 15. But
that we would make a concerted effort to come up
with the best methods we could have and at least
the best science.
If you look at the early proceedings, you can see
a great deal of personal commitment by industrial
groups, by specific companies, of manpower, money
and time to make this happen. GCMS is common today,
but it wasn't ten years ago. The reason it's a
common tool today is because the agency and the
regulated community made a concerted effort to make
it happen.
We still argue, we still bitch about what the
detection limit is, and what can you really quantify
by, and you're really looking at this bag of crap
with all these peaks. We argue about that. But
the commitment that was made by the industry and
the regulated community is very, very significant,
and that's the real function of this place. It's not just to
talk about nuts and bolts.
I think that as George was saying at lunch, we
-------
634
promised ourselves we'd have, when we got done
with this, good methods, cleaner water and dirtier
women, or something like that. But I think it's a
significant contribution that the agency has made a
commitment to continue working at this. We've got
three offices: we've got Superfund, we've got Solid Waste,
we've got Circle, we've got us, all using GCMS.
For you people in the laboratory business, you just
love us, because you can't figure out which method
you're supposed to use.
Bob Booth alluded to the fact that the agency
is now looking at that. We're looking at formulating
a committee to sit down and address these issues.
Should we have 14 different quality assurance
programs for a similar method? We're looking at that.
So it's still a progressive thing.
But I think after ten years when GCMS was...as
Drew pointed out, you had two magnetics and the
rest of the elements they were giving me off a
borax feed test, we've come a long way. We certainly
still have a long way to go.
Our next speaker is going to talk about method
detection limits. John?
-------
635
METHOD DETECTION LIMITS,
OR HOW LOW CAN YOU REALLY GO?
Estimation of Analytical Method Reporting Limits
by Statistical Procedures
Authors:
J. W. Koehn and A. G. Zimmermann
Shell Development Company
Houston, Texas
Presented at:
U.S. EPA Conference
on
Analysis of Pollutants in the Environment
Norfolk, Virginia
May 13-14, 1987
-------
636
ABSTRACT
METHOD DETECTION LIMITS,
OR HOW LOW CAN YOU REALLY GO?
Estimation of Analytical Method Reporting Limits
by Statistical Procedures
by
J. W. Koehn and A. G. Zimmermann
Shell Development Company
Houston, Texas
Detection and quantification levels have commonly been determined by
empirically judging signal-to-noise ratios. This is done by correlating
standard deviations of blank measurements and a single standard
concentration level just above the background. Evaluating by this
classical approach typically provides limited information as to analytical
method response and is often obtained under ideal conditions which do not
reflect real world matrices.
A second approach to experimentally determining that an analyte
concentration is greater than zero is described by calibration curve
regression theory for multiple concentration levels. This approach is
argued by Hubaux and Vos and developed by the US Army Toxic and Hazardous
Materials Agency (USATHAMA) into a procedure for estimating an analytical
method reporting limit. The approach is based on a careful choice of
standards and various procedural enhancements that lead to a narrow
confidence band with high probability for predicting the reporting limit
for the method and distinguishing it from the background.
Values above the method reporting limit are quantified, without
attempting to define an area between the limit of detection (LOD.MDL) and
limit of quantification (LOQ.PQL). USATHAMA designated the values
estimated by this procedure as certified reporting limits (CRL) for their
methods. The term method reporting limit (MRL) is coined to emphasize the
use of this procedure for other than regulatory purposes. No values below
the method reporting limit are reportable for the analytical procedure.
This procedure was used to estimate MRL's for two analytical methods.
-------
637
METHOD DETECTION LIMITS,
OR HOW LOW CAN YOU REALLY GO?
Estimation of Analytical Method Reporting Limits
by Statistical Procedures
Introduction , ,
Establishing detection and quantification levels for analytical
methods is an optimization of both experimental procedure and statistical
manipulation of the data. This is especially true when correlating
analytical method performance across more than one instrument, analyst, or
laboratory. Being in the regulated community, the current authors have had
many concerns with the EPA MDL values listed in the 600 Series Methods.
These MDL values are obtained under ideal situations, and in many instances
cannot be achieved in real world matrix samples. We became aware of
procedures for estimating detection limits which were derived in matrix
samples and did not require determining signal-to-noise ratios.
This paper will describe the procedure used to calculate estimated
detection levels for analytical methods employed by the US Army Toxic and
Hazardous Materials Agency (USATHAMA). These are referred to as
certified reporting limits by USATHAMA and are the minimum quantification
levels for parameters USATHAMA contract laboratories may report.
In developing an understanding of the USATHAMA certified reporting
limit, it was necessary to review the literature for its relationship to
other accepted detection limit estimation procedures. As Currie said,
"Examination...revealed a plethora of mathematical expressions and
widely-ranging terminology". His use of the word "plethora" was
appropriate, though the current authors did not initially know the
definition (overly full). The word at once represented both the unknown
and crowded world of detection limits.
Detection Limits
Basically there are two approaches to experimentally determining that
an analyte concentration is greater than zero. The classical approach has
been to correlate standard deviations of blank and sample responses. This
idea has been developed by IUPAC, ACS, EPA, and ASTM. Several
researchers provided further discussion of this direct
comparison of signal and noise. The second approach to establishing a
limit for reporting concentration is described by calibration curve
regression theory. The advantage of this procedure is that one gains an
understanding of the performance of an analytical method in a region that
extends both below and above the reporting limit. The USATHAMA reporting
limit is derived from measurements at multiple concentration levels rather
than multiple measurements at one level just above the background. We
concur with the regression approach and feel it is superior to the narrow
range of information obtained in the classical approach. The principles
are described by Hubaux and Vos and by Mitchell and Garden, and are
employed by USATHAMA in the determination of their certified reporting
limit (CRL).
-------
638
This paper provides discussion and examples of the estimation of
reporting limits as described by USATHAMA. However beyond their
references, use will not be made of the term certified. This word, and a
similar one, validated, are often used in conjunction with testing,
evaluating, and assuring performance based on direct regulatory agency
requirements. The procedure described below can be used by a laboratory
both for its own in-house testing and method development work as well as
reporting data to customers (regardless of whether or not the data will be
subsequently reported to a regulatory agency). Additionally, the procedure
as used by the current authors allows adjustment of the analytical method
reporting limit based on chosen confidence limits. Therefore the more
generic phrase, method reporting limit (MRL), will be used in the
discussion and the examples.
Classical Approach
In the classical approach to determining a detection limit, blank and
standard samples are analyzed and the standard deviations of the analytical
response values are compared. This comparison is developed on several
levels and is reviewed below.
The first level is variously called the Critical Level or Decision
Limit by Currie, the Criterion of Detection by ASTM, and the Limit of
Detection by IUPAC. This level is dependent upon the specific experimental
result, and is the minimum true signal capable of being observed by a
laboratory. It establishes the maximum acceptable Type I error (false
positive) for a blank. Currie and ASTM set the level at 1.645 times the
standard deviation of the blank (or standard deviation for the working
range of an analytical system), establishing a 5% probability of making a
Type I error. IUPAC defines the level at 3 standard deviations from the
blank. This entire discussion is predicated upon errors in the analytical
process being random and the standard deviation being nearly uniform and
independent of the signal level (homoscedastic).
The next level and the one following are defined by the capabilities
of the measurement process itself. For ASTM, Currie, and IUPAC the second
level is set where the probability of a Type I error equals probability of
a Type II error (false negative) for a given analytical procedure. ASTM
calls this the Limit of Detection, Currie the Detection Limit, and IUPAC
the Limit of Identification. For ASTM and Currie, the value is established
at twice their Critical/Criterion levels from above. This corresponds to
3.29 times the blank standard deviation (assuming the probability of a Type
II error does not exceed 5%). On the other hand, IUPAC requires 3 standard
deviations from their Limit of Detection, or 6 standard deviations from the
blank. ACS recommends a Limit of Detection (LOD) as 3 standard deviations
above the blank. Similarly, EPA defines a Method Detection Limit (MDL) at
2.326 - 3.143 standard deviations from replicate standard analyses of
concentrations near the blank.
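
As a concrete sketch of that last definition (our illustration; the
replicate values below are invented), the classical MDL is simply
Student's t times the replicate standard deviation, in C:

#include <stdio.h>
#include <math.h>

/* Classical EPA-style MDL from n replicate analyses of a low-level
   standard: MDL = t * s, where s is the replicate standard deviation
   and t is Student's t at the 99% level for n-1 degrees of freedom
   (3.143 for n = 7; 2.326 is the limiting value for large n). */
int main(void)
{
    double x[] = { 1.9, 2.3, 2.1, 2.6, 1.8, 2.2, 2.4 };  /* ug/L, assumed */
    int n = sizeof x / sizeof x[0];
    double mean = 0.0, ss = 0.0;

    for (int i = 0; i < n; i++) mean += x[i];
    mean /= n;
    for (int i = 0; i < n; i++) ss += (x[i] - mean) * (x[i] - mean);

    double s = sqrt(ss / (n - 1));     /* sample standard deviation */
    double t99 = 3.143;                /* t(6 df, one-sided 99%) */
    printf("s = %.3f, MDL = %.3f\n", s, t99 * s);
    return 0;
}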
-------
639
The third level calls for a measured value to be satisfactorily close
to the true value and with a small relative standard deviation. ACS refers
to this as the Limit of Quantification (LOQ) and establishes it at 10 (±3)
standard deviations from the blank at a 99% confidence level. Note that
this complete description is necessary to establish the certainty with
which a result may be reported. Currie also sets his Determination Limit
at 10 standard deviations from the blank. EPA set its Practical
Quantitation Level (PQL) at 5-10 times the MDL. A great deal of
arbitrariness surrounds the placement of a minimum quantification level.
Depending on which procedure is chosen, this "gray area" could be as close
as 6 standard deviations from the blank (IUPAC) or as far as 23 standard
deviations (EPA).
In order to remove the arbitrariness of the signal-to-noise
measurement and quantification process, the USATHAMA procedure was examined
and is described in this paper. This procedure is applicable to any
analytical method which yields instrument responses for prepared standard
concentrations for a linear range of calibration. This estimation for a
method reporting limit is applicable to a single analyte response or to a
summation or group of responses that correspond to several analytes. The
calculated value is the quantification limit for the analytical method. A
reliable estimate is obtained only if the method is executed without bias
for each standard in the tested range.
This procedure for estimation of a detection limit was applied to data
collected for the gas chromatographic analysis of an organic compound and
for the reverse flow injection analysis of an inorganic. The definition of
terms, mathematical equations, and the procedure to be followed for
calculating reporting limits according to USATHAMA are outlined below.
Hubaux and Vos
Hubaux and Vos argue that the sensitivity of an analytical method,
and hence the detection limit, is influenced by a judicious choice of analyte
standard concentrations. For a given set of standards and corresponding
response signals, a best fit regression line can be found. If a new
standard is measured, its response can be predicted to be near the line,
but may not necessarily fall exactly on the line because (1) response
signals are not fixed values but are randomly distributed in an unknown
fashion around some average value, and (2) the fitted regression line is
based on very few observations and is only an estimate of the true
calibration line. Confidence limits for the regression line can be
calculated and drawn on both sides of the regression line at any chosen
level of confidence. The width of the resulting confidence band depends on
(1) the dispersion of the responses for a given standard, (2) knowledge of
that dispersion, and (3) the concentration. It is important to note that
the confidence band does not represent the dispersion of known responses,
but rather allows one to predict responses for standards not yet measured.
Figure 1 shows a hypothetical example of instrument responses plotted
against a series of standards and a confidence band constructed about the
regression line for these observations. For a given signal Y of a standard
or sample of unknown concentration, the range of concentration values
-------
640
possible can be predicted as Xmin to Xmax. For the response Yc, the upper
limit of concentration is X'max. That same response could come from a
standard with a content as low as X'min (zero). The lowest concentration
distinguishable from zero can be no lower than X'max, or else the
corresponding response could be lower than Yc, and hence interpreted as a
blank (zero concentration).
In this example, the X'max concentration level is equal to Xd, which
Hubaux and Vos define as the detection limit of the method. This detection
limit is an estimate of the minimum detectable or guaranteed analytical
response for that method. For the same analyte, a second series of
standards could yield response signals that differ randomly from the true
values and hence lead to a different estimate of the detection limit. The
same analytical method used at a second laboratory or with a second
instrument would also be expected to yield a different estimate of the
detection limit. It is important to acknowledge that a detection limit is
not a fixed value for a given method, but is a variable. The prepared
standards and corresponding response values directly influence the
confidence limits. To lower the detection limit, Xd, for an analytical
method, it is necessary to decrease the width of the confidence band. This
may or may not be possible.
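
A minimal numerical sketch of that definition (ours, not from the
paper), assuming the usual homoscedastic straight-line model: given the
fitted slope b1, residual standard deviation s, N calibration points
with mean Xbar and Sxx = S[(Xi - Xbar)^2], and Student's t for N-2
degrees of freedom, Xd solves b1*Xd = w(0) + w(Xd), where w(x) is the
prediction-band half-width. The intercept cancels, and a bisection
search finds Xd:

#include <stdio.h>
#include <math.h>

/* Half-width of the prediction band at concentration x for a fitted
   line with residual SD s, N calibration points, mean xbar, and
   Sxx = sum((xi - xbar)^2).  t is Student's t for N-2 degrees of
   freedom at the chosen confidence level (supplied by the caller). */
static double band(double x, double t, double s, int N,
                   double xbar, double Sxx)
{
    return t * s * sqrt(1.0 + 1.0 / N + (x - xbar) * (x - xbar) / Sxx);
}

/* Hubaux-Vos detection limit: the lowest concentration Xd whose lower
   prediction limit clears Yc, the upper prediction limit at x = 0.
   Solves b1*x = band(0) + band(x) by bisection. */
double hubaux_vos_xd(double b1, double t, double s, int N,
                     double xbar, double Sxx)
{
    double lo = 0.0, hi = 1.0;
    while (b1 * hi - band(0.0, t, s, N, xbar, Sxx)
                   - band(hi, t, s, N, xbar, Sxx) < 0.0)
        hi *= 2.0;                     /* expand until the root is bracketed */
    for (int i = 0; i < 60; i++) {     /* bisect to high precision */
        double mid = 0.5 * (lo + hi);
        double f = b1 * mid - band(0.0, t, s, N, xbar, Sxx)
                            - band(mid, t, s, N, xbar, Sxx);
        if (f < 0.0) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi);
}

int main(void)
{
    /* Illustrative numbers only (not from the paper): slope 2.0,
       residual SD 0.5, 6 standards with mean 5.0 and Sxx = 70,
       t(4 df, one-sided 95%) = 2.132. */
    printf("Xd = %.3f\n", hubaux_vos_xd(2.0, 2.132, 0.5, 6, 5.0, 70.0));
    return 0;
}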
The ability to predict responses for a given concentration is most
reliable in the range being tested. As one moves away from the range of
repartitions (distribution) of the tested standards, the confidence band
becomes increasingly nonparallel with the regression line. Responses
become increasingly unpredictable when there is no statistical connection
to the range of tested standards, and as one approaches the zero or blank
sample.
Hubaux and Vos describe several ways one can attempt to enhance the
detection limit (sensitivity) of an analytical method.
1) Improve the precision -- This includes lowering and refining the
residual standard deviation (s). By improving and controlling the
analytical technique, the scatter of the data is reduced. Also,
analyzing independently prepared replicates of each standard
repartition improves the ability to estimate the true regression line.
2) Increase the number of standards (N) analyzed -- The influence of N is
an important consideration in the Student's t-value and the subsequent
standard deviation calculations. For example, a significant gain in
sensitivity and confidence is achieved by using 6 standards to
establish the regression line rather than using 3 standards. However,
the gain is small if more than 10 standards are analyzed. There is a
balance between the analytical cost of additional standards and the
improvement in predictability of the regression line, with the
potential for lowering the detection limit.
3) Increasing the range ratio (R) of the standards -- The range ratio is
defined as

   R = (Xn - X1)/X1                                              (1)

where Xn is the highest concentration within the series of standards
and X1 is the lowest concentration standard. The ability to predict
responses is extended to a larger region of concentrations when R is
increased. To estimate the detection limit, Hubaux and Vos indicated
-------
641
that the range ratio should be 10 or greater, but there is no need to
exceed 20. It should be recognized that X1 cannot be zero in equation
(1).
4) Optimize the repartitions of the standards within the range -- To
: . obtain maximum information on the linearity, the standards should be
equidistant and as far as possible from each other. On the other
hand, Hubaux and Vos concluded that the best approach to calculating
detection limits is to have some standards with the smallest
concentration feasible, other standards at the maximum level of
interest, and one midway between the two extremes. They referred to
this as the "three values repartition". Hubaux and Vos also studied
an equidistant or linear repartition, a parabolic arrangement of
standards, and a two values arrangement.
5) Attempt to have the mean of the standard set near the estimated
value of the detection limit -- According to Hubaux and Vos, this is
most closely approximated by the three values repartition. When the
calculated detection limit is near the mean of the standards set, the
confidence limit lines will be nearly parallel with the regression
line in the region of the detection limit. This is the point of
greatest reliability for predicting the responses and calculating the
detection limit.
USATHAMA
USATHAMA developed their own application of the possibilities
presented by Hubaux and Vos. In their procedure, arriving at a detection
limit is a two-step process. The first step is to construct an instrument
response or calibration curve for an analyte at concentrations through the
anticipated testing range, not including a blank. The second step, method
certification, involves the preparation and analysis through the entire
analytical method of a specified repartition of spiked standard samples
over several days. These are prepared from the same master stock as the
calibration standards.
The detection limit for a method, the certified reporting limit (CRL)
according to USATHAMA, method reporting limit (MRL) according to the
current authors, is a value obtained graphically or mathematically from the
data generated in the method certification step. This procedure is
described in the following pages. The equation for calculating the
reporting limit is given in equation (9).
The calibration curve data includes the standard analyte preparations
(X) and the corresponding instrument responses (Y). These data are fit
using a least squares linear regression with the usual assumptions. That
is, the errors in the measurements are independent and normally distributed
with zero mean and constant standard deviation. The error in preparation
of the standards is small compared to the error in the measurements. The
estimated' slope (b-) and intercept (b_) are calculated by
SffXi - XUYi - YV1
S (Xi - X)2 ;
bQ - Y -
X)
(2)
(3)
-------
642
where Xi and Yi are the standard concentration and response value
respectively, Xbar and Ybar are their respective means, and S denotes
summation from 1 to N. The correlation coefficient (r) for the fit of the
data to the regression line is calculated by

   r = S[(Xi - Xbar)(Yi - Ybar)] /
       {S[(Xi - Xbar)^2] * S[(Yi - Ybar)^2]}^(1/2)               (4)
According to the USATHAMA procedure, each of the calibration curve
standards should be prepared and analyzed at least in duplicate. The
calibration data is then subjected to Lack-of-Fit (LOF) and Zero Intercept
(ZI) tests at the 95% confidence level. See the Appendix. If these tests
show no lack of fit for the line with a zero intercept, then the calculated
calibration regression line is assumed to be an adequate description of the
data.
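
A minimal C sketch of equations (2)-(4) (ours; the standards and
responses below are invented for illustration):

#include <stdio.h>
#include <math.h>

/* Least-squares calibration fit per equations (2)-(4): slope b1,
   intercept b0, and correlation coefficient r for N standards with
   concentrations x[] and instrument responses y[]. */
static void calibrate(const double *x, const double *y, int N,
                      double *b1, double *b0, double *r)
{
    double xbar = 0.0, ybar = 0.0, sxy = 0.0, sxx = 0.0, syy = 0.0;

    for (int i = 0; i < N; i++) { xbar += x[i]; ybar += y[i]; }
    xbar /= N; ybar /= N;
    for (int i = 0; i < N; i++) {
        sxy += (x[i] - xbar) * (y[i] - ybar);
        sxx += (x[i] - xbar) * (x[i] - xbar);
        syy += (y[i] - ybar) * (y[i] - ybar);
    }
    *b1 = sxy / sxx;                   /* equation (2) */
    *b0 = ybar - *b1 * xbar;           /* equation (3) */
    *r  = sxy / sqrt(sxx * syy);       /* equation (4) */
}

int main(void)
{
    /* Duplicate standards at five levels; responses are invented. */
    double x[] = { 1, 1, 2, 2, 5, 5, 10, 10, 20, 20 };
    double y[] = { 2.1, 1.9, 4.2, 3.8, 10.3, 9.8, 20.6, 19.5, 40.9, 39.2 };
    double b1, b0, r;

    calibrate(x, y, 10, &b1, &b0, &r);
    printf("b1 = %.4f  b0 = %.4f  r = %.5f\n", b1, b0, r);
    return 0;
}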
For method certification, USATHAMA adopted a modified parabolic
repartition of the standards, with a range ratio of 19. The advantage of
this approach is that it allows one to look at the linearity of the
analytical method over the series of tested concentrations, as well as
providing a reliable estimate of the reporting limit.
Their repartition uses a series of standards spiked in the
concentration series 0X (method blank), 0.5X, X, 2X, 5X, and 10X, where X
is the concentration of the analyte that corresponds to the reporting
limit. Because the actual value for X is not known prior to preparing the
standard series, one must arbitrarily select a concentration, a target
reporting limit (TRL), that is suspected to be near the final estimated
reporting limit. Prior experience with the analytical method allows one to
make a reasonable estimate of this value and avoid repeats of standard set
preparations.
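For example (an illustration of ours, not the paper's), a TRL of
2 ug/L would call for spikes at 0, 1, 2, 4, 10, and 20 ug/L.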
To account for all variability in the analytical method, each spiked
standard concentration in the repartition range is individually prepared
and analyzed over at least four separate days. Analysis here means
performance of the entire analytical method. USATHAMA protocol calls for
the preparation of these standards in ASTM grade water or specially
provided standard soil. As a result, estimates of the reporting limit and
accuracy of the method are optimistic because interferences found in
natural samples will be absent. Standard samples can be prepared in a
solvent or matrix system that represents the natural matrix of real world
samples. For example, if the analysis and determination of reporting
limits are based on environmental groundwater samples, the water used to
prepare the standards should come from an upgradient well that contains
water free of the analyte of interest, but with all of the other matrix
interferences that affect the procedure. In this way the recovery of the
analyte and detection limit of the method are truly measured for the
natural matrix.
All of these prepared target concentrations (X) are introduced into
the instrument to obtain area count responses (Y'). Found concentrations
(Y) are obtained by entering these response values into the calibration
curve equation and reading the corresponding concentration, assuming
linearity through the analytical method. See Figure 2 and equation (5).
-------
602
DR. MCGUIRE: Thank you, Bill.
A program has been reinstituted to identify organic
chemical compounds in industrial effluents, using
mass spectra and GC retention data. The first portion
of the program has the objective of identifying and
determining the distribution and relative concentrations
of organic compounds in industrial wastewater samples.
It utilizes mag tapes of spectral and chromatographic
data collected over the years by contractor laboratories
of EPA's Office of ITD.
Based on an analysis system that was assembled
in the late '70s at the Environmental Research Lab
in Athens, an extensive suite of computer programs
locates the best spectrum for each compound, identifies
the most likely compound, determines its historical
frequency and estimates its concentration based on
internal standards. Now, at the time we first whipped
that up in the 70s, that was very, very intelligent.
Nowadays, you would expect that almost any halfway
intelligent GCMS computer system would do the same
thing, except for that one item of "determining
historical frequency." We think that is still a
unique feature of this set of programs.
-------
603
The results of the tape study are essentially
two files consisting of identifications and unidentified
spectra respectively. In a study using the earlier
• version of the system, 1565 specific organic compounds,
the vast majority of which were not regulated, were
identified. (Slide 1) '
2500 additional compounds, (I call them compounds
although not identified they had reproducible spectra
and reproducible retention times, were detected five
or more times. Today, I. will provide a progress
report on efforts to update the original programs to
take advantage of improved technology. I'll also
discuss application of the program as a tape study to
identify organic compounds and effluent data from the
rubber industry, collected for ITD as part of the
consent decree verification study.
Finally, as part of the second portion of the
programs, I will outline a multi spectral approach
to the identification of unidentified spectra from
the MIS file. This multi spectral approach involves
the re-analysis of retained extracts corresponding
to the particular contract laboratory GCMS data run.
Confirmations and identifications of selected
-------
604
compounds will be made using multi-spectral identifi-
cation techniques such as GC/FTIR, GC, high res MS
and GC/PPINICI.
Now, for those who aren't familiar with what
we're talking about, I'd like to discuss what a tape
study is. (Slide 2)
In order to provide guidelines for regulating
effluent discharges and establishing treatment
standards, the ITD division of the Office of Water
has had a large number of GCMS analyses conducted
by contract labs. These contract analyses typically
*
cost in the millions of dollars and had specific
and limited target analytes such as the priority
pollutants.
By the way, for the contract lab people, that
doesn't mean each one of you got millions of dollars,
but in toto.
In recognition that more information would be
generated in the analyses than would be required to
meet the limited objectives, the office had the raw
GCMS data stored on mag tape.
At the top of this slide, it says "tape study
flow chart", and underneath that is mandate. That
-------
605
is the way we got into most of this work. Congress
decided that EPA needed to regulate certain compounds,
or the courts decided it. At any rate, there was a
mandate for EPA.
That mandate came down to the program off ice;,
The program office set"up contracts to conduct the
GC/MS analyses, which included in the broadest sensie
taking the samples from a sample site, bringing
them back, working them up, and actually performing
the GC/MS analyses.
When that was done, the data were stored as
raw data on mag tape. At the same time, there were
also outputs from the contract labs for the target
compounds. The stored raw data were sent to our lab
where they became part of this tape study. The first
thing we did was to plug the raw data into our computer
and regenerate chromatograms.
Next, was to have the computer recognize the
presence of GC peaks as discrete compounds in those
chromatograms. The programs were sophisticated
enough that they also extracted the very best mass
spectrum for each of the blips identified as a GC peak.
When I say they were "sophisticated enough", we
-------
606
actually have 542 routines that take part in this
suite of programs, so they are quite sophisticated.
The computer then identifies the compounds,
and having done that, then proceeds to check against
an historical library database to see if for example
dimethyl chicken wire is actually coming out at the
right relative retention time. If it is, then the
computer is happy and it will go down on the right-hand
column of this chart to say that the identification
is reasonable. If on the other hand the computer
says, "no, no, no, no, dimethyl chicken wire can't
possibly come out at that point", it will determine
that that identification is unreasonable and it will
store the spectrum in our MIS database.
If, as is more typically the case, the computer
scratches its head and says, "gee, it might be dimethyl
chicken wire, but I'm not sure", then we have a
chemist who sits and looks at the data; the data at
this point consisting of the massaged GC/mass spec
output, that is, the mass spectrum, and the best
matches that the computer has been able to generate
from our rather extensive collection of mass spectra.
-------
607
The chemist then makes the decision "yea" or
"nay". At this point, the flows go on the same as if
the computer had made them; that is, either where
the identification is reasonable and it goes into a
hit list or it is not and it goes into a misfile.
The identified compound list which is then used
to go back up and modify the modified lists so that
the lists reflect not simply what mandate comes
from Congress or a court, but actually the compounds
that ought to be identified in the environment.
On the left-hand side of the list is the MIS
list, those compounds where we cannot perform a
solid identification. At this point, the spectrum
along with all sample identification is stored.
The tape study then is a retrospective analysis
of all the stored data by a suite of programs.
Costs of a tape study, even for a large one such as
was done in the past and the one that I showed on
the first slide, are generally well under a million
dollars. The amount of information produced by
such a study can be expected to extend the value of
generated information by a very significant amount.
This information consists not only of the
-------
608
identity and approximate concentration of each compound
in each sample, but also a breakdown of the
frequencies with which the compounds are found in each
industry survey. As a valuable extra, the outputs
of the programs also include the MIS file of spectra
that have been encountered in the course of the
study, but which have not been identified by either
the computer or the chemists who oversee the programs.
This MIS list can be processed in the same manner
as the hit list of identified compounds. That is,
the final reports of the analyses are able to
provide statistics on the frequency with which the
unidentified spectra are found. The frequencies
found for these MIS spectra are useful measures to
highlight some spectra that need to be identified
in order to characterize a waste stream or process.
We have found, for example, in some of our old
work that the same spectrum was found as often as
200 times, but it wasn't identified. Compounds
like that are obvious targets for us to identify
them.
Information such as that is essential to
responsible regulation, since it replaces guesstimates
-------
609
of what's present and at what levels with hard
data. In the original EGD tape study for the non-
priority pollutants, the 114 organic pollutants were
the targeted compounds. Many more compounds were
found and identified in the 22 industries covered,
as well as 25 times as"many that couldn't be
identified.
A new tape study has begun for the ITD during
the past year. It will be comparable in size to
the original EGD study and will cover more than a
dozen new industries. The first industry to be
studied was the rubber industry, which was sampled
and the samples analyzed for the contractors in '81
and '82.
The tapes were logged into our PDF 11/70 computer
database in '82, but they weren't processed until
the present work began. The original computer
programs, which were designed to operate on the PDF
1170 have been revised and modified for operation
on the back 785. (Slide 3)
A significant part of the modification was
-------
610
based on preliminary work that Walt Shackleford
carried out on the 1170 in changing the PBM library
and method of search to provide for a faster search
time. He's discussed this in the past at this
meeting and given due credit to McLafferty's group
at Cornell, who were quite helpful in our implementing
the Most Significant Peak sort on the 1170.
The programs used in the 1976 to '81 tape
study utilized data stored on mag tape, and were
revised for operation on the VAX 785. The library
of reference spectra used in that study also was
made operational.
Benchmark tests were run without any problem
in order to check the performance of the programs
on the VAX with those of the 1170. At that time,
which was about eight months ago, the new ITD tape
study began.
The suite of new programs performs well in
identifying GC peaks and matching the best spectrum
for each against a database or spectral library. A
new and very extensive mass spec library of over
100,000 compounds has been acquired from John Wiley
and is being prepared for use. Many tapes of GCMS
-------
611
data from the ITD contractors have been received
and processed at our lab.
A staff of computer personnel and chemists was
assembled to support the new tape study. However,
just after we got through training a key chemist, he
accepted another job arid we're now in the spot of
retraining his replacement.
At the present time, our contract group working
directly on tape analysis consists of one chemist
and two senior programmers. Interviews are currently
being held for a junior programmer and a technical
writer. This last position is felt to be critical
because of the need for documenting exactly what
the programs do. '
Bringing the old suite of programs up to speed
on the Vax, when they had last been used on the 1170,
was much more difficult than I thought it would be
because of inadequate documentation on the original
programs. .
One of the results of the earlier study was
that a need was seen for better guantitation. In
order to provide the guantitation, isotope dilution
MS was used by many of the contractors, and because
-------
612
of 'the extensive use of isotope dilution duterated
spikes in Methods 1624 and 1625, it was felt critical
to use the large Wiley mass spec library with many
duterated compounds represented for the new work.
When this was attempted, a totally unanticipated
problem became evident in preparing the library for
its use. The older programs had run on the 1170,
which had the peculiarity of counting to its maximum
count 32,764, and then counting backward from the
negative of 32,764 to zero. This meant that the
1170 could handle a library of more than 65,000
spectra and each would be assigned a discrete
number even though some would be negative.
The VAX does better. When we first tried to
read in a spectrum with a number higher than 215
the machine informed us in no uncertain terms that
that number was outside the range. This has limited
us to using the library that was used for the greater
part of the earlier phase of the project, while our
programmers have been working on getting the new
library into a proper 1*4 format for handling
spectrum numbers in excess of 100,000. We expect to
have that work finished the end of this month.
-------
613
At that time, we plan to implement the use of
the large library to reprocess all runs from both
the POTW and rubber industries. When the programs
were demonstrated on the VAXs, the first set of
tapes to be processed was the verification phase
analyses for the rubber industry.
Slide 4 is the chromatgram of a base neutral
fraction from the rubber industry. The base neutral
fractions from the rubber industry, at least those
from the butadiene plants have that characteristic
"humpogram". It is, as might be expected, just a
mess of aliphatic hydrocarbons. Nonetheless, this
profile is always found in base neutral butadiene
rubber plant chromatograms. Slides 5 and 6 are
typical examples of chromatograms of acid and VGA
fractions.
Because the rubber industry runs have been
obtained on a packed column, this might lend itself
well to establishing a real comparison with the results
of the earlier tape study. Results were gratifying,
in that the success rate for identifying the recognized
peaks was essentially the same, using the same library
of spectra as the overall rate for the other work.
-------
614
The summary reports produced included both the
earlier work and the runs processed in the more recent
phases. In slide 1, the two top frequency distributions
here show the old hit (on the top) and miss frequency
distributions. Those are typical distributions. The
vertical axis is the number of times the particular
compound or spectrum has been found, the horizontal
is simply a marking place.
Including all samples received in both phases
of the work, we've logged in approximately 500 samples
including blanks and standards for the rubber industry.
Processing was significantly slower than anticipated
due to the fact that we were expecting the use of
1,4-dichlorobutane as one of the internal standards.
It turned out that the contractor had used deuterated
toluene. It took us several days of running to
realize that although the 1,4-dichlorobutane was
usually present, it was not reliably present at the
same level.
The bottom frequency plot shows the frequency
distribution of identified compounds in the rubber
industry. The one identified most frequently was
found 32 times.
-------
615
The identity of the top 50 compounds identified
based on both frequency and median concentration is
shown in this table (slide 8). I doubt if it's
particularly legible; there were not very many
compounds. We had a grand total that was only longer
than this list.
The compounds that were found are the compounds
that one might expect to be found. Now, the MIS
report is functioning well and will be used to select
candidates for the multi-spectral identifications
when enough runs have been processed for this to be
meaningful. I had hoped to discuss the newer POTW
data today, but we had a small problem. The weather
in our area of the country was running very, very
much below normal recently, and because the weather
was running below normal in temperature, it was
decided that it was an ideal time to shut down the
air conditioning system for some much-needed repair
work.
That was done, and the first day the air
conditioner was shut down, the temperature hit 93.
So, the computer system ran for about one hour that
day, and the individual in charge of the computer
-------
616
turned it off for the balance of the week. As a
result, I don't have my data out. I apologize for
it.
Slide 9 is a listing of the compounds found in
POTW during the last survey that we made. We were
just beginning to get into the POTW category at the
time we stopped the runs before.
An interesting sidelight for preparing for the
study of the rubber industry was the observation of
the fingerprint characteristics of the particular
contract lab. I'm not going to show you any of the
data, but the compounds noted for each lab were all
logical contaminants such as C-6 or C-7 hydrocarbons,
chlorinated compounds or phthalates. But it appears
that certain laboratories have a much greater chance
to show particular impurities in their blanks than
do other labs. On the other hand, the other labs
have a greater chance of showing their own fingerprint.
Now, whether this is due to impurities in
solvents or whatnot isn't assignable at this time.
We do know that we went through a big witch hunt in
my own lab a few years back on phthalate contamination.
We finally found it was coming from a final cleanup'
-------
617
step we were giving, because it made the glassware
look prettier. Once we stopped giving that cleanup
step with reagant grade acetone the phthalates
essentially disappeared. '
I see I'm running overtime, so I'm just going
to jump in and say a few words about multi spectral
identification (slide 9). We checked to see whether
multi-spectral identification on some of the retained
sample extracts from the first study was practical.
We chose some of the very trace level unidentified
compounds from the first phase of this work. We
requested 24 of the retained extracts from the
Sample Control Center. They were able to furnish
us four of them, and of the four, two of them
unequivocally had the spectra we were looking for.
So we knew that it was feasible on those.
One of them was a total bomb. There was no
correlation at all between what we were looking for
and the compounds we found in that extract. The
fourth one we had to make a very discouraging
assumption. We had to make the assumption that the
contract lab's mass spec wasn't in very good tune.
If we made that assumption and hence were able to say
-------
618
that the M/Z 58 peak we saw on our old MIS file
was really a M/Z 57 peak and that the M/Z 72 was
really a M/Z 71, and a few other assumptions like
that, then we were able to find the spectrum we were
looking for.
That sounds like a lousy assumption, however,
I checked the quality control for that particular
lab, and found they had a much, much higher percentage
of repeat runs for quality control checks than did
other labs. So I think it was a good assumption.
As the MIS spectra accumulate during the tape
study, we're going to apply GC/FTIR, GC/high resolution
mass spec and other techniques to the corresponding
extracts to identify the high priority unknowns.
The concept of using more than one analytical
instrument to determine the identity of a particular
organic is certainly not new. What I think is
new is working in batches. I believe the idea of
recycling is new as this flow shows we will be
doing: taking a number of unknown spectra, concen-
trating on trying to identify those, taking the
identified spectra and putting them into our data-
bases, whether they be GC, IR, mass spec, recycling
-------
619
the unidentified ones, and continuing to do this as
we bring in more batches and more batches.
Obviously, if you are. trying to identify^ one
unknown compound your probability of doing it isn't
all that great. If you're trying to identify 20
unknown compounds, your probability of getting
something out of that batch of 20 is much higher.
This is where we're really hoping we're going to
do well. I think we're going to turn out with a lot
of good identifications.
We have come along quite far, but not necessarily
"us". The scientific community has come along quite
far in increasing the sensitivity of GC/FTIR. The
highest priority in my own group right now is
buying a state-of-the-art GC/FTIR. I think using
that in combination with the GC/MS and of course with
retention times, which we all now know are very,
very important, we're going to come up with a system
that will provide quite a bit of interesting information
to present at next year's meeting.
I'm going to sign off now. Thank you.
-------
-------
621
SUMMARY OF FIRST TAPE STUDY

NUMBER OF GC/MS RUNS                   20,000
TYPES OF SYSTEMS                       5
NUMBER OF COMPOUNDS FOUND              1565
NUMBER OF SPECTRA NOT IDENTIFIED       2500
PARAGRAPH 4(C) COMPOUNDS SELECTED      6
-------
622
AERL TAPE STUDY
TAPE STUDY FLOW CHART

MANDATE -> PROGRAM OFFICE -> CONTRACT GC/MS ANALYSES -> STORED RAW DATA
  -> RE-GENERATED CHROMATOGRAMS
  -> COMPUTER RECOGNITION OF COMPOUNDS
  -> COMPUTER IDENTIFICATION OF COMPOUNDS  <- TARGET COMPOUNDS <- MODIFY LIST
  -> COMPUTER CONFIDENCE CHECK
       UNSURE -> CHEMIST DECISION
       IDENTIFICATION UNREASONABLE -> MIS LIST OF UNIDENTIFIED SPECTRA
                                      -> UNIDENTIFIED SPECTRA
       IDENTIFICATION REASONABLE -> HIT LIST OF IDENTIFICATIONS
  -> REPORTS ON FREQUENCY OF OCCURRENCE OF ALL IDENTIFIED COMPOUNDS
-------
623
PLANNED DATA TREATMENT UPDATES
COMPARED TO EGD TAPE STUDY

o FASTER COMPUTER
o DOUBLE PRECISION ARITHMETIC
o PORTABLE SOFTWARE
o UP-TO-DATE REFERENCE LIBRARY
o MULTIPLE STANDARDS AND SURROGATES
o IMPROVED PEAK RECOGNITION
o BASE PEAK COMPARABILITY
-------
[Figure: reconstructed gas chromatogram. RUN NAME: 5623; GC COLUMN: 3% SP-2250 DB; INDUSTRIAL CODE: RUBBER PROCESSING; TREATMENT STAGE: BAS SCREENING; START MASS: 35; VERSION 02-3 VERIFICATION. Total ion abundance plotted against scans / time (min).]
-------
[Figure: reconstructed gas chromatogram. RUN NAME: 5319; FRACTION NO.: ACID; GC COLUMN: 1% SP-1240 DA; INDUSTRIAL CODE: RUBBER PROCESSING; TREATMENT STAGE: ACID SCREENING; START MASS: 35; VERSION 02-3 VERIFICATION. Total ion abundance plotted against scans / time (min).]
-------
[Figure: reconstructed gas chromatogram. RUN NAME: 5564; FRACTION NO.: VOA; GC COLUMN: 0.2% CARBOWAX 1500 CARBOPAK C; INDUSTRIAL CODE: RUBBER PROCESSING; TREATMENT STAGE: UNDILUTED - NO SURROGATE; START MASS: 35; VERSION 02-3 VERIFICATION. Total ion abundance plotted against scans / time (min).]
-------
627
[Figure: FREQUENCIES OF THE HIT LIST DATA - TOP ONE HUNDRED COMPOUNDS (bar chart of identification frequency for the 100 most frequently identified compounds).]

[Figure: FREQUENCIES OF THE MIS LIST DATA - TOP ONE HUNDRED COMPOUNDS (bar chart of frequency of identified compounds, rubber industry).]
-------
COMPOUND                                              CAS REG NO   FREQUENCY   MED CONC
METHYLBENZENE (TOLUENE)                               108883       32          40
BENZENE                                               71432        22          109
N-OCTADECANE                                          593453       22          31
DICHLOROMETHANE                                       75092        20          8
STYRENE                                               100425       19          446
N-HEXADECANE                                          544763       19          17
p-CRESOL                                              95487        18          54
2-(2-BUTOXYETHOXY)ETHANOL                             112345       17          140
9-FLUORENONE                                          486259       17          87
BENZOTHIAZOLE                                         95169        16          120
BENZOIC ACID                                          65850        15          37
2,6,10,14-TETRAMETHYLPENTADECANE                      1921706      14          25
BENZALDEHYDE                                          100527       14          228
PARA-CRESOL (4-METHYLPHENOL)                          106445       12          22
CAPRYLIC ACID                                         124072       12          37
DI-(2-ETHYLHEXYL)PHTHALATE                            117817       12          10
HEXANOIC ACID (CAPROIC ACID)                          142621       11          25
N-HEPTADECANE                                         629787       10          94
METHYL-PHENYL KETONE                                  98862        10          202
M-CRESOL                                              108394       9           33
DIACETONE ALCOHOL                                     123422       8           93
TRICHLOROMETHANE (CHLOROFORM)                         67663        8           97
PARA-ETHYLPHENOL                                      123079       7           33
DIOCTYL PHTHALATE                                     117840       6           325
2,4-DIMETHYLPHENOL                                    105679       5           99
PROPANENITRILE, 3-(DIETHYLAMINO)-                     5351042      5           353
1,2-DIMETHYLBENZENE (O-XYLENE)                        95476        4           9
2-ETHYL-1-HEXANOL                                     104767       4           178
PALMITIC ACID                                         57103        4           142
4-PHENYLCYCLOHEXENE                                   4994165      3           500
ALPHA-TERPINEOL                                       98555        3           259
STEARIC ACID                                          57114        3           164
FENCHYL ALCOHOL                                       1632731      2           5088
1-METHYL-4-ISOPROPYLBENZENE                           99876        2           1833
2,6-DI-TERT-BUTYL-4-METHYLPHENOL                      128370       2           12629
PHENOL, 2,3,5-TRIMETHYL-                              697825       2           1723
CAMPHENE                                              79925        2           831
PHENOTHIAZINE                                         92842        2           4107
AR,ALPHA-DIMETHYLSTYRENE                              1195320      1           1459
TRICYCLENE                                            508327       1           1235
ISOPROPYL ALCOHOL                                     67630        1           4627
3,4,5-TRIMETHYLPHENOL                                 527548       1           2170
3-HYDROXYBENZOIC ACID, METHYL ESTER                   19438109     1           1477
2,2,4-TRIMETHYLDIHYDROQUINOLINE                       147477       1           1098
P-NONYLPHENOL                                         104405       1           1617
2-MERCAPTOBENZOTHIAZOLE                               149304       1           1272
LIMONENE                                              138863       1           2850
CYCLOHEXANOL                                          108930       1           17854
BENZENE, 1-ETHYL-2,3-DIMETHYL-                        933982       1           1346
1-HYDROXY-3,4-DIMETHYLBENZENE (3,4-DIMETHYLPHENOL)    95658        1           1351
628
-------
629
Twenty Most Frequently Found Compounds in
Phase 1 Tape Study of POTW's

Rank   Compound                 # Observations   Concentration Range (ppb)
 1     Tetrachloroethylene           705          0.1 - 24,000
 2     Methylene chloride            498          0.6 - 3900
 3     Butyl carbitol                448          1.8 - 2500
 4     Toluene                       434          0.3 - 18,000
 4     Di(i-octyl)phthalate          434          0.8 - 7200
 6     Chloroform                    420          0.5 - 2300
 7     Ethyl benzene                 249          0.5 - 2700
 8     Benzene                       245          0.7 - 5000
 9     Butyl phthalate               240          1.0 - 4100
10     Phenol                        151          1.3 - 3500
11     Dioctyl phthalate             136          2.0 - 5100
12     p-Cresol                      134          2.6 - 4200
13     1-Methyl-naphthalene          122          0.6 - 2200
14     Naphthalene                   118          1.1 - 1800
15     n-Pentadecane                 112          0.9 - 2000
16     Palmitic acid                 106          1.4 - 7400
17     p-Xylene                      101          2.0 - 2600
18     Biphenyl                       89          0.9 - 1800
19     Phenylacetic acid              88          5.6 - 9900
20     Benzoic acid                   84          1.3 - 5900
-------
630
MULTI-SPECTRAL IDENTIFICATION

ASSUMPTIONS: 1. TAPE STUDY HAS GENERATED SPECTRA OF COMPOUNDS THAT HAVE
                NOT BEEN IDENTIFIED BY THE STUDY.
             2. IDENTIFICATIONS WILL BE HANDLED IN BATCHES OF CONVENIENT
                SIZE.

STEPS:  1. RE-DO THE GC/MS ANALYSES TO CHECK THE PRESENCE OF THE
           UNIDENTIFIED COMPOUND IN THE SAMPLE EXTRACTS.
        2. OBTAIN POSITIVE ION AND NEGATIVE ION CHEMICAL IONIZATION
           SPECTRA TO DETERMINE MOLECULAR WEIGHT AND COMPOUND TYPE.
        3. OBTAIN HIGH RESOLUTION MASS SPECTRA TO LEARN EMPIRICAL
           FORMULAE OF MAIN FRAGMENTS.
        4. OBTAIN GC/FTIR SPECTRA TO DISCOVER IMPORTANT SUB-STRUCTURES.
        5. POSTULATE IDENTITIES WHERE POSSIBLE AND CONFIRM WITH STANDARDS.
        6. UPDATE REFERENCE DATA BASES TO INCLUDE THE NEWLY IDENTIFIED
           SPECTRA.
        7. RESERVE BALANCE OF BATCH FOR FUTURE CONSIDERATION.
        8. SELECT ANOTHER BATCH AND REPEAT THE ABOVE STEPS.
        9. RECONSIDER ALL RESERVED SPECTRA CONSIDERING THE INFORMATION
           FROM LATER BATCHES.
       10. OBTAIN NMR SPECTRA FOR BETTER SUB-STRUCTURE INFORMATION ON
           HIGHEST PRIORITY COMPOUNDS NOT IDENTIFIED.
       11. POSTULATE AND CONFIRM IDENTITIES.
       12. UPDATE REFERENCE DATA BASES AND RE-CYCLE THROUGH ABOVE STEPS.
-------
." ' • • • ' ' ••;•'• , • , • • " ' 631
Question and Answer Session
MR. TELLIARD: Any questions?
Our next speaker is from Shell. I thought it was a
misprint when I first saw that it wasn't George Stanko.
But we were talking in the back before, at lunchtime.
For some of you who haven't been here before or are
kind of newcomers, this group meeting has been
going on for ten years. We know that the described
purpose of this meeting is to get together, eat
seafood, lie to each other and drink a lot.
But it was conceived to do something else
originally. It came out of the consent decree
lawsuit against the agency, when the agency was
told to come out with national standards on a group
of new compounds called priority pollutants. We did
this through this medium primarily, as an attempt to
get the industries, our laboratories, our contract
laboratories, together to discuss problems. It was
never designed as a research or a researchy type of
meeting. It was designed to talk about nuts and
bolts, and generally bitch.
The early proceedings from this sound a little
bit like the Bickersons. There was a lot of name
-------
632
calling, insinuation about parentage, things like
that. But as we look back over ten years, I think
it's important that you remember that, as we all
came off the block, as Drew had alluded to, we
turned around one afternoon and said we're going
to use GCMS to do this, and a lot of people said tish,
tish, tish, tish. If a couple of people had
listened, they'd be wealthy people now.
Of course, we made that decision really on a
very thin string. The only thing we had going for
us was the support from our Athens lab on the
semivolatiles through Ron Webb and Larry Keith and
John McGuire. Then we had the folks on the volatile
end with Lichtenburg and Bellar in Cincinnati, and we
tried to merge this thing.
Now, there were some people who didn't think
our first methods had everything in them, Stanko
being one of them. He pointed out there were some
slight deficiencies, and after ten years there are
still some slight deficiencies.
But I think the most important thing that this
meeting has done, the function of this meeting was,
it brought the industry and the regulated community
-------
633
and the agency together to sit down and make one
agreement, that we may fight and argue about what
the data means, whether it was 3, 12 or 15. But
that we would make a concerted effort to come up
with the best methods we could have and at least
the best science.
If you look at the early proceedings, you can see
a great deal of personal commitment by industrial
groups, by specific companies, of manpower, money
and time to make this happen. GCMS is common today,
but it wasn't ten years ago. The reason it's a
common tool today is because the agency and the
regulated community made a concerted effort to make
it happen.
We still argue, we still bitch about what the
detection limit is, and what can you really quantify
by, and you're really looking at this bag of crap
with all these peaks. We argue about that. But
the commitment that was made by the industry and
the regulated community is very, very significant,
and that's the function of this place. It's not to
talk about nuts and bolts.
I think that as George was saying at lunch, we
-------
634
promised ourselves we'd have...when we got done
with this, good methods, cleaner water and dirtier
women, or something like that. But I think it's a
significant contribution that the agency has made a
commitment to continue working at this. We've got
three offices: Superfund, we've got Solid Waste,
we've got CERCLA, we've got us, all using GCMS.
For you people in the laboratory business, you just
love us, because you can't figure out which method
you're supposed to use.
Bob Booth alluded to the fact that the agency
is now looking at that. We're looking at formulating
a committee to sit down and address these issues.
Should we have 14 different quality assurance
programs for a similar method? We're looking at that.
So it's still a progressive thing.
But I think after ten years when GCMS was...as
Drew pointed out, you had two magnetics and the
rest of the elements they were giving me off a
borax feed test, we've come a long way. We certainly
still have a long way to go.
Our next speaker is going to talk about method
detection limits. John?
-------
635
METHOD DETECTION LIMITS,
OR HOW LOW CAN YOU REALLY GO?
Estimation of Analytical Method Reporting Limits
by Statistical Procedures
Authors:
J. W. Koehn and A, G. Zimmermann
Shell Development Company
Houston, Texas
Presented at:
U. S. EPA Conference
on
Analysis of Pollutants in the Environment
Norfolk, Virginia
May 13-14, 1987
-------
636
ABSTRACT
METHOD DETECTION LIMITS,
OR HOW LOW CAN YOU REALLY GO?
Estimation of Analytical Method Reporting Limits
by Statistical Procedures
by
J. W. Koehn and A. G. Zimmermann
Shell Development Company
Houston, Texas
Detection and quantification levels have commonly been determined by
empirically judging signal-to-noise ratios. This is done by correlating
standard deviations of blank measurements and a single standard
concentration level just above the background. Evaluating by this
classical approach typically provides limited information as to analytical
method response, and the data are often obtained under ideal conditions which
do not reflect real-world matrices.
A second approach to experimentally determining that an analyte
concentration is greater than zero is described by calibration curve
regression theory for multiple concentration levels. This approach is
argued by Hubaux and Vos and developed by the US Army Toxic and Hazardous
Materials Agency (USATHAMA) into a procedure for estimating an analytical
method reporting limit. The approach is based on a careful choice of
standards and various procedural enhancements that lead to a narrow
confidence band with high probability for predicting the reporting limit
for the method and distinguishing it from the background.
Values above the method reporting limit are quantified, without
attempting to define an area between the limit of detection (LOD, MDL) and
limit of quantification (LOQ, PQL). USATHAMA designated the values
estimated by this procedure as certified reporting limits (CRL) for their
methods. The term method reporting limit (MRL) is coined to emphasize the
use of this procedure for other than regulatory purposes. No values below
the method reporting limit are reportable for the analytical procedure.
This procedure was used to estimate MRL's for two analytical methods.
-------
637
METHOD DETECTION LIMITS,
OR HOW LOW CAN YOU REALLY GO?
Estimation of Analytical Method Reporting Limits
by Statistical Procedures
Introduction
Establishing detection and quantification levels for analytical
methods is an optimization of both experimental procedure and statistical
manipulation of the data. This is especially true when correlating
analytical method performance of more than one instrument, analyst, or
laboratory. Being in the regulated community, the current authors have had
many concerns with the EPA MDL values listed in the 600 Series Methods.
These MDL values are obtained under ideal situations, and in many instances
cannot be achieved in real world matrix samples. We became aware of
procedures for estimating detection limits which were derived in matrix
samples and did not require determining signal-to-noise ratios.
This paper will describe the procedure used to calculate estimated
detection levels for analytical methods employed by the US Army Toxic and
Hazardous Materials Agency (USATHAMA) [1,2]. These are referred to as
certified reporting limits by USATHAMA and are the minimum quantification
levels for parameters USATHAMA contract laboratories may report.
In developing an understanding of the USATHAMA certified reporting
limit, it was necessary to review the literature for its relationship to
other accepted detection limit estimation procedures. As Currie said [3],
"Examination...revealed a plethora of mathematical expressions and
widely-ranging terminology". His use of the word "plethora" was
appropriate, though the current authors did not initially know the
definition (overly full). The word at once represented both the unknown
and crowded world of detection limits.
Detection Limits
Basically there are two approaches to experimentally determining that
an analyte concentration is greater than zero. The classical approach has
been to correlate standard deviations of blank and sample responses. This
idea has been developed by IUPAC [4], ACS [5,6], EPA [7], and ASTM [8]. Several
researchers [9-12] provided further discussion of this direct
comparison of signal and noise. The second approach to establishing a
limit for reporting concentration is described by calibration curve
regression theory. The advantage of this procedure is that one gains an
understanding of the performance of an analytical method in a region that
extends both below and above the reporting limit. The USATHAMA reporting
limit is derived from measurements at multiple concentration levels rather
than multiple measurements at one level just above the background. We
concur with the regression approach and feel it is superior to the narrow
range of information obtained in the classical approach. The principles
are described by Hubaux and Vos [13] and Mitchell and Garden [14] and are
employed by USATHAMA [1,2] in the determination of their certified reporting
limit (CRL).
-------
638
This paper provides discussion and examples of the estimation of
reporting limits as described by USATHAMA. However beyond their
references, use will not be made of the term certified. This word, and a
similar one, validated, are often used in conjunction with testing,
evaluating, and assuring performance based on direct regulatory agency
requirements. The procedure described below can be used by a laboratory
both for its own in-house testing and method development work as well as
reporting data to customers (regardless of whether or not the data will be
subsequently reported to a regulatory agency). Additionally, the procedure
as used by the current authors allows adjustment of the analytical method
reporting limit based on chosen confidence limits. Therefore the more
generic phrase, method reporting limit (MRL), will be used in the
discussion and the examples.
Classical Approach
In the classical approach to determining a detection limit, blank and
standard samples are analyzed and the standard deviations of the analytical
response values are compared. This comparison is developed on several
levels and is reviewed below.
The first level is variously called the Critical Level or Decision
Limit by Currie, the Criterion of Detection by ASTM, and the Limit of
Detection by IUPAC. This level is dependent upon the specific experimental
result, and is the minimum true signal capable of being observed by a
laboratory. It establishes the maximum acceptable Type I error (false
positive) for a blank. Currie and ASTM set the level at 1.64 times the
standard deviation of the blank (or standard deviation for the working
range of an analytical system), establishing a 5% probability for making a
Type I error. IUPAC defines the level at 3 standard deviations from the
blank. This entire discussion is predicated upon errors in the analytical
process being random and the standard deviation being nearly uniform and
independent of the signal level (homoscedastic).
The next level and the one following are defined by the capabilities
of the measurement process itself. For ASTM, Currie, and IUPAC the second
level is set where the probability of a Type I error equals probability of
a Type II error (false negative) for a given analytical procedure. ASTM
calls this the Limit of Detection, Currie the Detection Limit, and IUPAC
the Limit of Identification. For ASTM and Currie, the value is established
at twice their Critical/Criterion levels from above. This corresponds to
3.29 times the blank standard deviation (assuming the probability of Type
II errors does not exceed 5%). On the other hand, IUPAC requires 3 standard
deviations from their Limit of Detection, or 6 standard deviations from the
blank. ACS recommends a Limit of Detection (LOD) as 3 standard deviations
above the blank. Similarly, EPA defines a Method Detection Limit (MDL) at
2.326 - 3.143 standard deviations from replicate standard analyses of
concentrations near the blank.
-------
639
The third level calls for a measured value to be satisfactorily close
to the true value and with a small relative standard deviation. ACS refers
to this as the Limit of Quantification (LOQ) and establishes it at 10 (±3)
standard deviations from the blank at a 99% confidence level. Note that
this complete description is necessary to establish the certainty with
which a result may be reported. Currie also sets his Determination Limit
at 10 standard deviations from the blank. EPA set its Practical
Quantitation Level (PQL) at 5-10 times the MDL. A great deal of
arbitrariness surrounds the placement of a minimum quantification level.
Depending on which procedure is chosen, this "gray area" could be as close
as 6 standard deviations from the blank (IUPAC) or as far as 23 standard
deviations (EPA).
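Because each of these classical thresholds is a fixed multiple of a
standard deviation, they are easy to compare side by side. The following
minimal sketch (ours, with hypothetical blank replicates; not part of any
cited procedure) computes the levels discussed above in Python:

    import numpy as np
    from scipy import stats

    # Hypothetical replicate blank measurements (concentration units)
    blanks = np.array([0.11, 0.14, 0.09, 0.12, 0.10, 0.13, 0.12])
    s = blanks.std(ddof=1)        # blank standard deviation
    n = len(blanks)

    levels = {
        "Critical level (Currie/ASTM, 1.64 s)":   1.64 * s,
        "Detection limit (Currie/ASTM, 3.29 s)":  3.29 * s,
        "Limit of Detection (IUPAC/ACS, 3 s)":    3.0 * s,
        "Limit of Identification (IUPAC, 6 s)":   6.0 * s,
        "Limit of Quantification (ACS, 10 s)":    10.0 * s,
        # EPA MDL: one-tailed 99% Student's t for n-1 df, times s
        "MDL (EPA, t(0.99, n-1) s)": stats.t.ppf(0.99, n - 1) * s,
    }
    for name, value in levels.items():
        print(f"{name}: {value:.3f}")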
In order to remove the arbitrariness of the signal-to-noise
measurement and quantification process, the USATHAMA procedure was examined
and is described in this paper. This procedure is applicable to any
analytical method which yields instrument responses for prepared standard
concentrations for a linear range of calibration. This estimation for a
method reporting limit is applicable to a single analyte response or to a
summation or group of responses that correspond to several analytes. The
calculated value is the quantification limit for the analytical method. A
reliable estimate is obtained only if the method is executed without bias
for each standard in the tested range.
This procedure for estimation of a detection limit was applied to data
collected for the gas chromatographic analysis of an organic compound and
for the reverse flow injection analysis of an inorganic. The definition of
terms, mathematical equations, and the procedure to be followed for
calculating reporting limits according to USATHAMA are outlined below.
Hubaux and Vos
Hubaux and Vos argue that the sensitivity of an analytical method and
hence the detection limit, is influenced by a judicious choice of analyte
standard concentrations. For a given set of standards and corresponding
response signals, a best fit regression line can be found. If a new
standard is measured, its response can be predicted to be near the line,
but may not necessarily fall exactly on the line because (1) response
signals are not fixed values but are randomly distributed in an unknown
fashion around some average value, and (2) the fitted regression line is
based on very few observations and is only an estimate of the true
calibration line. Confidence limits for the regression line can be
calculated and drawn on both sides of the regression line at any chosen
level of confidence. The width of the resulting confidence band depends on
(1) the dispersion of the responses for a given standard, (2) knowledge of
that dispersion, and (3) the concentration. It is important to note that
the confidence band does not represent the dispersion of known responses,
but rather allows one to predict responses for standards not yet measured.
Figure 1 shows a hypothetical example of instrument responses plotted
against a series of standards and a confidence band constructed about the
regression line for these observations. For a given signal Y of a standard
or sample of unknown concentration, the range of concentration values
-------
640
possible can be predicted as Xmin to Xmax. For the response Yc, the upper
limit of concentration is X'max. That same response could come from a
standard with a content as low as X'min (zero). The lowest concentration
distinguishable from zero can be no lower than X'max, or else the
corresponding response could be lower than Yc, and hence interpreted as a
blank (zero concentration).
In this example, the X'max concentration level is equal to Xd, which
Hubaux and Vos define as the detection limit of the method. This detection
limit is an estimate of the minimum detectable or guaranteed analytical
response for that method. For the same analyte, a second series of
standards could yield response signals that differ randomly from the true
values, and hence lead to a different estimate of the detection limit. The
same analytical method used at a second laboratory or with a second
instrument would also be expected to yield a different estimate of the
detection limit. It is important to acknowledge that a detection limit is
not a fixed value for a given method, but is a variable. The prepared
standards and corresponding response values directly influence the
confidence limits. To lower the detection limit, Xd, for an analytical
method, it is necessary to decrease the width of the confidence band. This
may or may not be possible.
The ability to predict responses for a given concentration is most
reliable in the range being tested. As one moves away from the range of
repartitions (distribution) of the tested standards, the confidence band
becomes increasingly nonparallel with the regression line. Responses
become increasingly unpredictable when there is no statistical connection
to the range of tested standards, and as one approaches the zero or blank
sample.
Hubaux and Vos describe several ways one can attempt to enhance the
detection limit (sensitivity) of an analytical method.
1) Improve the precision -- This includes lowering and refining the
residual standard deviation (s). By improving and controlling the
analytical technique, the scatter of the data is reduced. Also
analyzing independently prepared replicates of each standard
repartition improves the ability to estimate the true regression line.
2) Increase the number of standards (N) analyzed -- The influence of N is
an important consideration in the Student's t-value and the subsequent
standard deviation calculations. For example, a significant gain in
sensitivity and confidence is achieved by using 6 standards to
establish the regression line rather than using 3 standards. However,
the gain is small if more than 10 standards are analyzed. There is a
balance between the analytical cost of additional standards and the
improvement in predictability of the regression line, with the
potential for lowering the detection limit.
3) Increasing the range ratio (R) of the standards -- The range ratio is
defined as
    R = (Xn - X1)/X1                                                  (1)

where Xn is the highest concentration within the series of standards
and X1 is the lowest concentration standard. The ability to predict
responses is extended to a larger region of concentrations when R is
increased. To estimate the detection limit, Hubaux and Vos indicated
-------
641
that the range ratio should be 10 or greater, but there is no need to
exceed 20. It should be recognized that X1 cannot be zero in equation (1).
4) Optimize the repartitions of the standards within the range -- To
obtain maximum information on the linearity, the standards should be
equidistant and as far as possible from each other. On the other
hand, Hubaux and Vos concluded that the best approach to calculating
detection limits is to have some standards with the smallest
concentration feasible, other standards at the maximum level of
interest, and one midway between the two extremes. They referred to
this as the "three values repartition". Hubaux and Vos also studied
an equidistant or linear repartition, a parabolic arrangement of
standards, and a two values arrangement.
5) Attempt to have the mean X̄ of the standards set near the estimated
value of the detection limit -- According to Hubaux and Vos, this is
most closely approximated by the three values repartition. When the
calculated detection limit is near the mean of the standards set, the
confidence limit lines will be nearly parallel with the regression
line in the region of the detection limit. This is the point of
greatest reliability for predicting the responses and calculating the
detection limit.
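The influence of N noted in point (2) can be seen directly in the
Student's t-values. This brief sketch (an illustration of ours, not from
Hubaux and Vos) prints the two-tailed 90% t-value for N standards (N-2
degrees of freedom); the drop from 3 to 6 standards is large, while the
gain beyond 10 is small:

    from scipy import stats

    # Two-tailed 90% confidence = upper 95% point of t with N-2 df
    for n_std in (3, 6, 10, 20):
        t_val = stats.t.ppf(0.95, n_std - 2)
        print(f"N = {n_std:2d} standards -> t = {t_val:.3f}")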
USATHAMA
USATHAMA developed their own application of the possibilities
presented by Hubaux and Vos. In their procedure, arriving at a detection
limit is a two-step process. The first step is to construct an instrument
response or calibration curve for an analyte at concentrations through the
anticipated testing range, not including a blank. The second step, method
certification, involves the preparation and analysis through the entire
analytical method of a specified repartition of spiked standard samples
over several days. These are prepared from the same master stock as the
calibration standards.
The detection limit for a method, the certified reporting limit (CRL)
according to USATHAMA, or method reporting limit (MRL) according to the
current authors, is a value obtained graphically or mathematically from the
data generated in the method certification step. This procedure is
described in the following pages. The equation for calculating the
reporting limit is given in equation (9).
The calibration curve data includes the standard analyte preparations
(X) and the corresponding instrument responses (Y). These data are fit
using a least squares linear regression with the usual assumptions. That
is, the errors in the measurements are independent and normally distributed
with zero mean and constant standard deviation. The error in preparation
of the standards is small compared to the error in the measurements. The
estimated slope (b1) and intercept (b0) are calculated by

    b1 = Σ[(Xi - X̄)(Yi - Ȳ)] / Σ(Xi - X̄)²                            (2)

    b0 = Ȳ - b1·X̄                                                    (3)
-------
642
where Xi and Yi are the standard concentration and response value
respectively, and X̄ and Ȳ are their means. All summations are
from 1 to N. The correlation coefficient (r) for the fit of the data to
the regression line is calculated by

    r = Σ[(Xi - X̄)(Yi - Ȳ)] / [Σ(Xi - X̄)² · Σ(Yi - Ȳ)²]^(1/2)        (4)
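As an illustration, equations (2) through (4) can be applied directly to
the calibration raw data listed later in Table 1. The sketch below (our
code, not USATHAMA's) should closely reproduce the regression reported
there:

    import numpy as np

    # Calibration standards (ppb) and replicate area counts from Table 1
    X = np.repeat([11.20, 5.00, 1.00, 0.50, 0.20, 0.10, 0.05], 3)
    Y = np.array([278310, 271126, 283830, 132009, 127161, 124991,
                  20937, 21006, 19885, 11342, 10253, 11062,
                  4264, 3534, 4276, 2692, 2861, 2680,
                  1351, 1453, 1378], float)

    Sxx = np.sum((X - X.mean())**2)
    Sxy = np.sum((X - X.mean()) * (Y - Y.mean()))
    b1 = Sxy / Sxx                         # slope, equation (2)
    b0 = Y.mean() - b1 * X.mean()          # intercept, equation (3)
    r = Sxy / np.sqrt(Sxx * np.sum((Y - Y.mean())**2))   # equation (4)
    print(f"Y = {b1:.2f} X + ({b0:.2f}), r = {r:.4f}")
    # Table 1 reports Y = 24996.95 X + (-818.27), r = 0.9995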
According to the USATHAMA procedure, each of the calibration curve
standards should be prepared and analyzed in duplicate at least. The
calibration data is then subjected to Lack-of-Fit (LOF) and Zero Intercept
(ZI) tests at the 95% confidence level. See the Appendix. If these tests
show no lack of fit for the line with a zero intercept, then the calculated
calibration regression line is assumed to be an adequate description of the
data.
For method certification, USATHAMA adopted a modified parabolic
repartition of the standards, with a range ratio of 19. The advantage of
this approach is that it allows one to look at the linearity of the
analytical method over the series of tested concentrations, as well as
providing a reliable estimate of the reporting limit.
Their repartition uses a series of standards spiked in the
concentration series 0X (method blank), 0.5X, X, 2X, 5X, and 10X, where X
is the concentration of the analyte that corresponds to the reporting
limit. Because the actual value for X is not known prior to preparing the
standard series, one must arbitrarily select a concentration, a target
reporting limit (TRL), that is suspected to be near the final estimated
reporting limit. Prior experience with the analytical method allows one to
make a reasonable estimate of this value and avoid repeats of standard set
preparations.
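For example, with a TRL of 0.02 ppb the spiked series would be 0, 0.01,
0.02, 0.04, 0.10, and 0.20 ppb; this is exactly the required range used
for the gas chromatographic example in Table 2.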
To account for all, variability in the analytical method, each spiked
standard concentration in the repartition range is individually prepared
and analyzed over at least four separate days. Analysis here means
performance of the entire analytical method. USATHAMA protocol calls for
the preparation of these standards in ASTM grade water or specially
provided standard soil. As a result, estimates of the reporting limit and
accuracy of the method are optimistic because interferences found in
natural samples will be absent. Standard samples can be prepared in a
solvent or matrix system that represents the natural matrix of real world
samples. For example, if the analysis and determination of reporting
limits are based on environmental groundwater samples, the water used to
prepare the standards should come from an upgradient well that contains
water free of the analyte of interest, but with all of the other matrix
interferences that affect the procedure. In this way the recovery of the
analyte and detection limit of the method are truly measured for the
natural matrix.
All of these prepared target concentrations (X) are introduced into
the instrument to obtain area count responses (Y'). Found concentrations
(Y) are obtained by entering these response values into the calibration
curve equation and reading the corresponding concentration, assuming
linearity through the analytical method. See Figure 2 and equation (5).
-------
643
    Y = (Y' - b0)/b1                                                  (5)
Conversion to found concentrations allows a direct calculation, in
concentration units, of standard deviation for the target concentration
responses.
The estimate of the residual standard deviation (s, USATHAMA's
standard error of the least squares regression, Sy·x) is obtained by

    s = [ Σ(Yi - (Ȳ + m(Xi - X̄)))² / (N - 2) ]^(1/2)                  (6)
In this equation, m is the slope of the regression line of the target and
found concentrations and is calculated as is the slope of the calibration
line in equation (2). Yi and Xi are found and target concentrations
respectively, and Ȳ and X̄ are the means of all the found and target
concentrations, respectively.
& &
Upper and lower confidence limits (Y ucl and Y Id) at a particular
X-- X are calculated by
Y*ucl and Y*lcl - Yo + mX* (+/-) ts |l + 1/N + (X* - X)2
Sxx
1/2
(7)
where Sxx = Σ(Xi - X̄)², and t is the two-tailed Student's t-value (tabulated
in most statistics texts) for N-2 degrees of freedom at a 90% confidence
level. In this equation, Yo is the intercept of the regression line of the
target and found concentrations and is calculated in the same manner as the
intercept of the calibration line in equation (3). The correlation
coefficient (r) is also calculated for the target and found regression line
in the same manner as in equation (4) for the calibration line data.
The estimated reporting limit for a method (Xd) is the value of X
(X'max in Figure 1) that corresponds to a point on the lower confidence
limit curve where the value of Y (Yc in Figure 1) equals the value of Y on
the upper confidence limit curve at X = 0 (X'min in Figure 1). The
intersection of the upper confidence limit curve with the y-axis (Yc) is
similar to Yc in Figure 1 and is calculated as
    Yc = Yo + t·s·[1 + 1/N + X̄²/Sxx]^(1/2)                            (8)
The reporting limit is calculated by

    Xd = X̄ + [m(Yc - Ȳ) + t·s·{(Yc - Ȳ)²/Sxx
              + (1 + 1/N)(m² - t²s²/Sxx)}^(1/2)] / (m² - t²s²/Sxx)    (9)
When X̄ falls near Xd, the confidence limit lines will be near their
closest point to the regression line. The calculated reporting limit will
therefore be in the middle region of known responses. This is the region
of highest predictability and "best estimate" for the reporting limit.
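As a numerical check of equations (8) and (9), the summary statistics
printed with Table 1 can be substituted directly; this short sketch
(ours, using only values reported in the paper) does the arithmetic in
Python:

    import numpy as np

    # Summary statistics from Table 1 (continued)
    N, t = 36, 1.697
    Xbar, Ybar = 2.24, 2.36
    m, s, Sxx = 1.14, 0.52, 402.41
    Yo = Ybar - m * Xbar               # intercept of target vs. found line

    # Equation (8): upper confidence limit response at X = 0
    Yc = Yo + t * s * np.sqrt(1 + 1/N + Xbar**2 / Sxx)

    # Equation (9): the method reporting limit
    denom = m**2 - (t**2 * s**2) / Sxx
    Xd = Xbar + (m * (Yc - Ybar)
                 + t * s * np.sqrt((Yc - Ybar)**2 / Sxx
                                   + (1 + 1/N) * denom)) / denom
    # Prints Yc of about 0.71 and Xd of about 1.57; Table 1, carrying the
    # rounded value Yc = 0.72, reports Xd = 1.59 ppb
    print(f"Yc = {Yc:.2f}, Xd = {Xd:.2f} ppb")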
-------
644
The calculated reporting limit will have its lowest value after
testing to see if truncation of target concentration levels is possible.
The reporting limit may be lowered by reducing X. This is accomplished by
making the range of repartitions smaller for the spiked standards.
Successively removing one concentration level at a time in decreasing order
effectively lowers the point where the confidence limit curves are closest
to the regression line. According to USATHAMA, with each truncation, the
slope of the linear regression line of the target vs. found concentrations
must not change by more than 10% from the original data set of 0X through
10X. When truncations are completed and the minimum estimated reporting
limit has been achieved, at least three target concentrations in addition
to the blank must be used in calculating the reporting limit. Also the
reporting limit cannot be less than the lowest tested concentration or
higher than the highest tested concentration. It should be noted that the
calibration curve data is not truncated or altered in the reporting limit
estimation procedure. Truncation is only for the purpose of estimating
realistic method reporting limits. It is noted that statistically, a blank
could not yield a value at or above the MRL. This is by definition in the
procedure. Also, false positives cannot occur. The method reporting limit
is estimated in an area of analytical response (with specified range and
confidence) significantly above the error-prone region of MDL, yet below
PQL values.
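The truncation rule reduces to a small loop. The following is a rough
sketch of the decision logic only (our illustration, not USATHAMA's
software):

    import numpy as np

    def fit_slope(x, y):
        # Least-squares slope of found (y) vs. target (x) concentration
        x, y = np.asarray(x, float), np.asarray(y, float)
        return np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)

    def truncate_range(targets, founds, tol=0.10):
        # Drop the highest spiking level one at a time while the slope
        # stays within +/-10% of the full-range slope and at least three
        # non-blank levels remain
        t, f = np.asarray(targets, float), np.asarray(founds, float)
        full_slope = fit_slope(t, f)
        while True:
            keep = t < t.max()                 # trial: drop the top level
            if len(np.unique(t[keep][t[keep] > 0])) < 3:
                break                          # too few non-blank levels left
            if abs(fit_slope(t[keep], f[keep]) - full_slope) > tol * abs(full_slope):
                break                          # slope changed by more than 10%
            t, f = t[keep], f[keep]
        return t, f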
Organic Gas Chromatographic Data
As an example, Table 1 lists the raw data for a gas
chromatographically analyzed organic compound and follows these example
calculations through to the estimation of the reporting limit (CRL, MRL).
Graph 1 shows the instrument response calibration curve. Graph 2 presents
the spiked target concentrations and corresponding found concentrations for
the entire data set. This graph demonstrates the method response linearity
for the example data set through the range of interest. Graph 3 is a plot
using the required data set through 0.2 ppb (10X) for estimation of the
reporting limit. Table 2 shows that by lowering the repartition range of
the target concentrations the reporting limit can be lowered, until the
regression line slope changes by more than 10% from the required data set.
In this example, the original estimate of the detection limit (TRL=0.02
ppb) was slightly inaccurate compared to the final calculated value
(CRL, MRL = 0.06 ppb at 90% confidence level). Even though the statistical
tests for linearity and zero intercept indicated good overall fit of the
calibration data, extending the range of this data to lower values might
lead to improved found concentration calculations and reporting limit.
This calculated reporting limit can be confirmed by repeating the
preparation of the target concentrations through the method and repeating
the measurement series.
Inorganic Flow Injection Analysis Data
As mentioned earlier, data from an additional analytical method was
applied to this statistical procedure for estimating reporting limits.
This data set is from the analysis of an inorganic by a reverse flow
injection analysis (FIA) method.
-------
645
Results from the FIA method are shown in Table 3 with a graphical
presentation of the reporting limit given in Graph 4. The statistical
tests indicate good linearity in the calibration data to 0.5 ppm. However,
the intercept is statistically different from 0. As a result, the 0.0 ppm
target concentration (OX) shows a negative found value. Though,
apparently, there is nonlinearity in the method near zero, the reporting
limit might be lowered if additional standards were analyzed below 0.5 ppm.
Truncation of the target values down to 4.0 ppm, while allowed, does not
lower the reporting limit. The calculated reporting limit of 0.7 ppm was
very close to the TRL of 1.0 ppm.
Conclusion
Analytical method reporting limits can be estimated by the use of
calibration curve regression theory. This approach is superior to the
classical signal-to-noise approach. It provides information on method
responsiveness above and below the reporting limit and it establishes a
specific level of quantification with real matrices and over a specified
concentration range. There is no attempt to separate detection from
quantification. This avoids problems associated with the gray area between
LOD and LOQ for both regulators and the regulated community. No
values below the method reporting limit are reportable for the analytical
procedure. One cannot statistically distinguish between blanks and samples
below the MRL. We feel the use of calibration curve regression theory for
estimating detection limits for analytical methods has considerable merit
and should be considered by EPA for analytical methods required for
compliance monitoring under the CWA, SDWA, and RCRA regulations.
-------
646
REFERENCES
1. Department of the Army, US Army Toxic and Hazardous Materials Agency,
   Sampling and Chemical Analysis Quality Assurance Program, April 1982.
2. Department of the Army, US Army Toxic and Hazardous Materials Agency,
   Installation Restoration Program Quality Assurance Program, December
   1985.
3. L. A. Currie, "Limits of Qualitative Detection and Quantitative
   Determination, Application to Radiochemistry", Anal. Chem., 40, 586
   (1968).
4. International Union of Pure and Applied Chemists, Analytical Chemistry
   Division, "Nomenclature, symbols, units and their usage in
   spectrochemical analysis - II. Data interpretation", Spectrochim.
   Acta B, 33B, 242 (1978).
5. The American Chemical Society, "Guidelines for Data Acquisition and
   Data Quality Evaluation in Environmental Chemistry", Anal. Chem., 52,
   2242 (1980).
6. The American Chemical Society, "Principles of Environmental Analysis",
   (1983).
7. U.S. Government, "Definition and Procedure for the Determination of
   the Method Detection Limit", 40 CFR 136, Appendix B, 504 (1985).
8. American Society for Testing and Materials, Standard Practice
   D4210-83, "Intralaboratory Quality Control Procedures and a Discussion
   of Reporting Low-Level Data".
9. J. B. Roos, "The Limit of Detection of Analytical Methods", Analyst,
   87, 832 (1962).
10. J. A. Glaser, D. L. Foerst, G. D. McKee, S. A. Quave, and W. L. Budde,
    "Trace analyses for wastewater", Environ. Sci. and Tech., 15, 1426
    (1981).
11. G. L. Long and J. D. Winefordner, "Limit of Detection, A Closer Look
    at the IUPAC Definition", Anal. Chem., 55, 712A (1983).
12. D. T. E. Hunt and A. L. Wilson, The Chemical Analysis of Water, 2nd
    ed., 287-299 (1986).
13. A. Hubaux and G. Vos, "Decision and Detection Limits for Linear
    Calibration Curves", Anal. Chem., 42, 849 (1970).
14. D. G. Mitchell and J. S. Garden, "Measuring and Maximizing Precision
    in Analyses Based on Use of Calibration Graphs", Talanta, 29, 921
    (1982).
15. N. R. Draper and H. Smith, Applied Regression Analysis, John Wiley and
    Sons, New York, 2nd ed., 1981.
16. R. H. Myers, Classical and Modern Regression with Applications,
    Duxbury Press, Boston, 1986.
-------
647
APPENDIX
Introduction
This appendix describes the Lack-of-Fit (LOF) and Zero Intercept (ZI)
tests. Standard texts on regression analysis [15,16] also contain this
information.
In this appendix we will designate our N data pairs as (X,Y), where Y can
be either the instrument response or found concentration, depending on
whether the data is for calibration or finding the detection limit. The
variable X is the target concentration in either case.
The LOF test breaks the residual sum of squares from the regression into
two pieces. One piece is the pure error from the replication on Y at the
same value of X (USATHAMA's Sum of Squares Total Error). The other piece
is the lack of fit of the Y values to the regression (USATHAMA's Sum of
Squares Lack-of-Fit). If there is nonlinearity, the lack of fit is large
compared to the pure error. An excellent discussion of this is found in
reference 15.
The ZI test is to verify that the intercept for the calibration line is not
significantly different from zero.
Review
For convenience, we repeat here the linear regression model, its
assumptions, and estimated slope, intercept and variance. The model is

    Y = β0 + β1·X + e

where the e are independent and have a normal (Gaussian) distribution with
zero mean and constant standard deviation. The estimates for β0 and β1 are
given by b0 and b1,

    b1 = Σ[(X - X̄)(Y - Ȳ)] / Σ(X - X̄)²

    b0 = Ȳ - b1·X̄

The standard deviation estimate based on the regression is s, where

    s² = Σ[Y - (Ȳ + b1(X - X̄))]² / (N - 2).

(Unless otherwise noted, all summations are from 1 to N.)
-------
648
Lack-of-Fit Test
The procedure for carrying out the LOF test is as follows.
1. Let there be n replicated analyses (Y) at each target
   concentration (X). Compute the sum of squares of each set of n
   replicated points as

       Σ(Y - Ȳ)²,

   where the summation is from 1 to n. The degrees of freedom (df)
   associated with this sum of squares is n-1. In USATHAMA this sum
   of squares (SS) is called random error SS or individual error SS.

2. Add these sums of squares and their associated degrees of freedom to
   find the sum of squares for pure error (SSPE) and its degrees of
   freedom (dfPE).

3. For the non-zero intercept case, compute the F statistic

       F = [((N-2)s² - SSPE)/(N-2-dfPE)] / [SSPE/dfPE].

   Or in the zero intercept case, compute the F statistic

       F(ZI) = [(RSS - SSPE)/(N-1-dfPE)] / [SSPE/dfPE],

   where RSS is the residual sum of squares from the regression with
   no intercept,

       RSS = ΣY² - (ΣXY)²/ΣX².

   In the non-zero intercept case, if F is greater than the upper 5%
   point of the F distribution with numerator df = N-2-dfPE and
   denominator df = dfPE, then there is significant lack of fit.
   Or in the zero intercept case, if F(ZI) is greater than the upper
   5% point of the F distribution with numerator df = N-1-dfPE and
   denominator df = dfPE, then there is significant lack of fit.
Zero Intercept Test
The procedure for carrying out the ZI test is as follows.
Compute

    F = b0² / [s²(1/N + X̄²/Σ(X - X̄)²)].
If the computed test statistic F is greater than the upper 5% point of the
F distribution with 1 df in the numerator and N-2 df in the denominator,
then the intercept is significantly different from zero. The F
distribution is tabulated in most statistics texts, as well as the
regression analysis texts previously referenced.
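Both tests can be written compactly; the sketch below (ours, assuming
replicated standards so that dfPE > 0 and N-2-dfPE > 0) follows the
formulas above:

    import numpy as np
    from scipy import stats

    def lof_and_zi_tests(x, y, alpha=0.05):
        # Lack-of-Fit and Zero Intercept F tests for replicated
        # calibration data, per the appendix formulas
        x, y = np.asarray(x, float), np.asarray(y, float)
        N = len(x)

        # Pure error from replicates at each target concentration
        sspe, dfpe = 0.0, 0
        for xv in np.unique(x):
            grp = y[x == xv]
            sspe += np.sum((grp - grp.mean())**2)
            dfpe += len(grp) - 1

        # Ordinary regression residual sum of squares, (N-2)*s^2
        b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
        b0 = y.mean() - b1 * x.mean()
        ss_resid = np.sum((y - (b0 + b1 * x))**2)

        f_lof = ((ss_resid - sspe) / (N - 2 - dfpe)) / (sspe / dfpe)
        lof = f_lof > stats.f.ppf(1 - alpha, N - 2 - dfpe, dfpe)

        # Zero intercept test
        s2 = ss_resid / (N - 2)
        f_zi = b0**2 / (s2 * (1/N + x.mean()**2 / np.sum((x - x.mean())**2)))
        zi = f_zi > stats.f.ppf(1 - alpha, 1, N - 2)
        return f_lof, lof, f_zi, zi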
-------
649
[Figure 1. Hypothetical calibration line with its confidence band: response (Y) vs. content (X). Marked are the intercept Yo; the response Yc where the upper confidence limit crosses X = 0; X'min (zero) and X'max, the latter equal to Xd (the detection limit; CRL, MRL); and the interval Xmin to Xmax of contents consistent with a given response. The center line is the average predicted response for a specific target content.]
-------
650
[Figure 2. Flow diagram: a calibration curve (calibration instrument response vs. calibration standard concentration; slope b1, intercept b0) converts the response value (Y') obtained for each prepared method target concentration (X) into a found concentration (Y); the found concentrations are then plotted against the target concentrations in the method reporting limit graph (slope m, intercept Yo).]
651
TABLE 1
GAS CHROMATOGRAPHIC METHOD
CALIBRATION CURVE CALCULATIONS

**********Raw Data***********

Prepared Standard
Concentrations (X, ppb)    Experimental Area Counts (Y')
      11.20                278310   271126   283830
       5.00                132009   127161   124991
       1.00                 20937    21006    19885
       0.50                 11342    10253    11062
       0.20                  4264     3534     4276
       0.10                  2692     2861     2680
       0.05                  1351     1453     1378

Best Fit Least Squares Linear Regression Equation Derived From Raw Data:
    Y = 24996.95 X + (-818.27)        r = 0.9995

For LOF with intercept test at 95% confidence level:
    Calculated F-Ratio = 2.51     Tabulated value = 2.96
For LOF through the origin test at 95% confidence level:
    Calculated F-Ratio = 2.30     Tabulated value = 2.85
For the zero intercept test at 95% confidence level:
    Calculated F-Ratio = 0.90     Tabulated value = 4.38
-------
652
TABLE 1. (Cont'd)
FOUND CONCENTRATIONS AND METHOD REPORTING LIMIT

********Raw Data*********

Target     Experimental              Found                    Upper        Lower
Conc       Area Counts (Y')          Conc (Y, ppb)            Confidence   Confidence
(X, ppb)                                                      Limit (ppb)  Limit (ppb)
10.00      260963  277565  311146    10.47  11.14  12.48      12.14        10.22
 8.00      222205  239144  245078     8.92   9.60   9.84       9.84         7.97
 5.00      104676  118391  122029     4.22   4.77   4.91       6.40         4.59
 2.00       29692   27332   31711     1.22   1.13   1.30       2.99         1.19
 1.00       16013   17253   18263     0.67   0.72   0.76       1.85         0.05
 0.50        8302    9005    8872     0.36   0.39   0.39       1.29        -0.52
 0.20        3105    4137    4143     0.16   0.20   0.20       0.95        -0.86
 0.10        2718    2153    2398     0.14   0.12   0.13       0.83        -0.98
 0.04        1090    1399    1341     0.08   0.09   0.09       0.76        -1.04
 0.02         680     551     535     0.05   0.05   0.05       0.74        -1.07
 0.01         293     357     332     0.04   0.05   0.05       0.73        -1.08
 0.00         299     352     309     0.04   0.05   0.05       0.72        -1.09

X̄ = 2.24    N = 36    Ȳ = 2.36    r = 0.9912
t = 1.697   Sxx = 402.41   s = 0.52   m = 1.14   Yc = 0.72

Entering these values in equation (9) gives CRL, MRL as Xd = 1.59 ppb.
-------
653
TABLE 2
COMPARISON OF DATA RANGE AND REPORTING LIMIT

Target     Entire Range    Required Range    Reduced Range
Values     of Target       of Target         of Target
           Values (ppb)    Values (ppb)      Values (ppb)
0X           0.00            0.00              0.00
1/2 X        0.01            0.01              0.01
TRL X        0.02            0.02              0.02
2 X          0.04            0.04              0.04
5 X          0.10            0.10              0.10
10 X         0.20            0.20
             0.50
             1.00
             2.00
             5.00
             8.00
            10.00

Slope                        0.7227            0.8900
(±10% range)                (0.6504-0.7950)
r                            0.9742            0.9793
CRL (ppb)
  @90%                       0.06              0.03
  @95%                       0.08              0.04
  @99%                       0.10              0.05
-------
654
TABLE 3
DATA FOR FIA ANALYSIS

*****Calibration Curve Data*****

Concentration    Instrument Response
(ppm)            (AU @ 480 nm, x1000)
  0.5              9    9
  1.0             13   13
  2.0             22   23
  4.0             39   41
  6.0             58   59
  8.0             76   77
 10.0             95   94

For LOF with intercept test at 95% confidence level:
    Calculated F-Ratio = 0.21      Tabulated value = 3.97
For LOF through the origin test at 95% confidence level:
    Calculated F-Ratio = 26.61     Tabulated value = 3.87
For the zero intercept test at 95% confidence level:
    Calculated F-Ratio = 235.66    Tabulated value = 4.75

******Method Reporting Limit Data******

Target           Instrument             Calculated Found
Concentration    Response               Concentration
(ppm)            (AU x1000)             (ppm)
  0.0             0   0   0   0         -0.47  -0.47  -0.47  -0.47
  0.5             9   9   9  10          0.53   0.53   0.53   0.64
  1.0            14  14  14  13          1.08   1.08   1.08   0.97
  2.0            23  23  22  23          2.08   2.08   1.97   2.08
  4.0            41  39  39  39          4.07   3.85   3.85   3.85
  6.0            59  57  56  58          6.07   5.85   5.73   5.96
  8.0            77  75  76  76          8.06   7.84   7.95   7.95
 10.0            96  91  92  93         10.17   9.61   9.72   9.83
-------
TABLE 3 (Cont'd)
COMPARISON OF DATA RANGE AND REPORTING LIMIT
655

Target     Required      Reduced       Reduced       Inadequate
Values     Range of      Range of      Range of      Range of
           Target        Target        Target        Target
           Values (ppm)  Values (ppm)  Values (ppm)  Values (ppm)
0 X          0.0           0.0           0.0           0.0
1/2 X        0.5           0.5           0.5           0.5
TRL X        1.0           1.0           1.0           1.0
2 X          2.0           2.0           2.0           2.0
4 X          4.0           4.0           4.0
6 X          6.0           6.0
8 X          8.0
10 X        10.0

Slope        0.9999        1.0129        1.0415        1.2077
(±10% range)(0.8999-1.0999)
r            0.9985        0.9958        0.9907        0.9821
CRL (ppm)
  (90%)      0.7           0.7           0.7           0.6
-------
[Graph 1. Calibration Curve: instrument response (thousands) vs. standard concentration (ppb); regression line with measured response values.]
-------
[Graph 2. Entire Data Set, 90% Confidence Level: found concentration (ppb) vs. target concentration (ppb); regression line, found concentrations, and upper and lower confidence limits.]
-------
[Graph 3. 0X to 10X Data Set, 90% Confidence Level, MRL = 0.06 ppb: found concentration (ppb) vs. target concentration (ppb); regression line, found concentrations, MRL, and upper and lower confidence limits.]
-------
[Graph 4. 0X to 10X Data Set, 90% Confidence Level, MRL = 0.7 ppm: found concentration vs. target concentration (ppm); regression line, found concentrations, MRL, and upper and lower confidence limits.]
-------
660
MR. TELLIARD: We're running
late, as usual. What I'd like to do is take a five
minute break so everybody can go out and get a cup of
coffee, a soft drink and your cookies or whatever is
out there, and come in and sit down so we can keep
the program going.
(WHEREUPON, a brief recess was taken.)
MRS. DE NAGY: Sorry, I
have to repeat this. My name is Susan De Nagy, I'm
-------
661
with the EPA in Washington, D.C. No, that belongs to
my boss.
This last session is on toxicity methods,
variability and the application of toxicity testing.
One of the reasons why Bill has slowly over the years
consented to start bringing toxicity testing into
this program is to start educating the chemists that
will be interfacing with the biologists as EPA ventures
into the world of requiring toxicity tests on NPDES
permits or on Superfund sites. They have yet to
really get into RCRA testing, but that will probably
be the next step.
ITD has been involved in toxicity testing for
the last several years. Over the last couple of
years the drilling fluids toxicity test had been
presented. Some of the results associated with that
test have been presented. We decided to have a followup.
One of the areas is produced water testing and
methods development, which is a spinoff of the drilling
fluids toxicity test. Versus drilling fluids, now
it's produced water.
Our first speaker this afternoon is Richard
Montgomery. He's a research biologist employed by
-------
662
Technical Resources, Inc. on contract to EPA's
Environmental Research Laboratory in Gulf Breeze,
Florida.
He's been working on the drilling fluids program
since about '82 and has just I guess in the past year
initiated this work on produced water testing.
-------
663
Produced (Formation) Water from Oil and Gas Production:
Test Method Development and Preliminary Toxicity Test Results
by
R.M. Montgomery1, P.R. Parrish2, and S.D. Friedman1
1Technical Resources, Inc., US EPA, Sabine Island, Gulf Breeze, FL 32561
2US EPA, Sabine Island, Gulf Breeze, FL 32561
-------
664
INTRODUCTION
Produced water (also called formation water or brine) is an effluent generated
from crude oil and gas production. It is often a highly saline, hydrocarbon-
saturated wastewater and is usually discharged into the surrounding marine
or estuarine environment.
As part of the U.S. Environmental Protection Agency (EPA) field sampling
program during 1986, the Environmental Research Laboratory in Gulf Breeze,
Florida, was asked to provide toxicity data for acute and chronic toxicity
tests with several produced water samples and mysids, Mysidopsis bahia.
Mysids are commonly used for different toxicity tests with effluents that
are discharged into the marine and estuarine environments. Mysids,
important food-chain organisms, are abundant in many marine and estuarine
environments. Adult mysids are approximately 1 cm in length; juveniles
are approximately 2 mm when released from the brood pouch of the female.
Mysids reach maturity in approximately 10 days with a reproducing adult
no later than 20 days post release. Brood releases occur slightly less
than once a week for the remainder of the life-cycle.
The goals and objectives of this project were to: (1) determine the acute
toxicity of six produced water samples under current effluent toxicity test
methods; (2) determine if the standard effluent test methods were applicable
to produced water testing; (3) develop basic methods or adapt existing
methods for acute testing if necessary; (4) develop chronic toxicity
test procedures and determine any long-term effects of specific produced
water samples; and (5) compare the 28-day life-cycle toxicity test (Lussier)
results with the 7-day mysid survival, reproduction, and growth test (US EPA)2.
-------
665
MATERIAL AND METHODS
Test Animals
The age of mysids commonly used for 96-h acute tests is 1 to 6 days; our
animals were 5 (±1) days old. The 7-day test was conducted with 7-day-old
mysids that were exposed to produced water for 7 days. The 28-day test
was begun with 24-h-old mysids exposed for 28 days.
Sample Methods for Produced Water
Sampling methods (Figures 1-4) follow: (1) a random discharge was selected
by cooperating state personnel; (2) appropriate volumes were taken directly
from the discharge pipe; (3) volumes were dictated by the types of tests
that were to be conducted; (4) samples were collected in appropriately
cleaned glass carboys; (5) samples were cooled with wet ice and transported
immediately to the Gulf Breeze Laboratory; (6) samples were stored in a
cold room at 4 (±1)°C overnight; and (7) acute tests were started within 24 h.
Test Conditions
General. Toxicity tests conducted with mysids and produced water were
static 96-h acute tests; the 7-day mysid reproduction, survival and
growth test (US EPA 86); the 28-day mysid life-cycle toxicity test
(ASTM 87); and additional static 96-h acute tests, which consisted of an
"on-site" and laboratory comparison and an aging study. The endpoints
for the acute tests were measured as the 96-h LC50s (concentration lethal
to 50 percent of the test animals in 96 h). Acute tests were also used
to examine any possible change in toxicity with transportation
and change in test location for the "on-site" and laboratory comparison,
and to examine the possibility of change in toxicity with time over an 8-week
period. The 7-day test was to measure reproductive success of the females,
-------
666
growth measured by dry weight and survival of mysids exposed to produced
water. Reproduction was measured by the presence and/or absence of the
eggs/embryos in the females, not by the viability of these eggs/embryos.
The 28-day test was similar to the 7-day test in that the measurements
were reproduction, growth as dry weight and survival. However, reproduction
was measured as the number of young produced as well as the viability of
the young.
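Since the LC50 is the central endpoint throughout, a minimal illustration
may help chemists new to toxicity data. The sketch below (hypothetical
data and a simple interpolation estimator; the paper does not state which
estimator was used) interpolates 50 percent mortality on a
log-concentration scale:

    import numpy as np

    # Hypothetical 96-h results: percent produced water vs. percent mortality
    conc = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
    mortality = np.array([0.0, 10.0, 40.0, 80.0, 100.0])

    # Linear interpolation of 50% mortality against log10(concentration)
    lc50 = 10 ** np.interp(50.0, mortality, np.log10(conc))
    print(f"Estimated 96-h LC50 = {lc50:.1f}% produced water")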
Acute Tests. The static acute toxicity tests followed ASTM3 and US EPA4
guidelines for effluent testing. The tests were conducted using 1 L of
test mixture for each replicate (3 replicates of 10
animals per concentration). When needed, a brine control was used to
measure any possible high-salinity effects (osmotic stress) of the
produced waters on the mysids. The brine control consisted of sea salt
saturated in deionized water and then mixed with standard seawater to
obtain the appropriate salinity of the highest produced water concentration.
Aeration was provided for each replicate. Tests were conducted in an
incubator under controlled temperature and photoperiod (25 ± 1°C, and
14-h light to 10-h dark). Test mixtures were prepared in each dish;
physical parameters (dissolved oxygen and pH) were measured, mysids
added, and test containers then placed in the incubator (Figure 5).
Daily observations were made for DO and pH as well as survival. Additional
acute toxicity tests were conducted: (1) concurrent "on-site" and laboratory
tests, and (2) a series of tests with one produced water sample over an 8
week period. The "on-site" test was started as quickly as possible after
sampling (approximately 2 h) as close to the sample site as possible at a
State University Laboratory in Louisiana. The laboratory test was set up
the following day at the Gulf Breeze Laboratory. The aging study tests
-------
667
were conducted at 1 week post sample, 4 weeks post sample and 8 weeks
post sample. All tests followed the same acute methods previously described.
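The brine control described in the acute test conditions above is a
simple two-component mixing problem. The Python sketch below solves for
the volume fraction of saturated brine needed to match a target salinity;
the brine and seawater salinities are assumed values for illustration,
since the paper does not state them.

    # Hypothetical values; the paper does not give these salinities.
    S_BRINE = 260.0      # assumed saturated sea-salt brine, ppt
    S_SEAWATER = 30.0    # assumed standard seawater, ppt

    def brine_fraction(s_target, s_brine=S_BRINE, s_seawater=S_SEAWATER):
        # Solve f*s_brine + (1 - f)*s_seawater = s_target for f.
        return (s_target - s_seawater) / (s_brine - s_seawater)

    # Example: a highest test concentration of 10% of a 138-ppt produced
    # water (cf. sample 2, Table 1) diluted in 30-ppt seawater:
    s_target = 0.10 * 138.0 + 0.90 * 30.0          # 40.8 ppt
    print(round(s_target, 1), round(brine_fraction(s_target), 4))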
7-day Test. The 7-day mysid survival, growth, and reproduction study
used methods developed by US EPA2. There were 200 ml of test mixture
per replicate, with 10 replicates of 5 animals per concentration
(Figure 6). Test concentrations were prepared from one stock; 200-mL
aliquots were measured and poured into each replicate before animals
were added. Tests were conducted in a controlled incubator (26 ± 1°C and
14-h light to 10-h dark); 150 mL of day-old test media was siphoned off daily, and 50
mL of day-old test media was retained to ensure that the animals were not
stressed during solution changes. DO and pH measurements were taken on
three replicates per concentration, but daily survival observations were
made for all replicates. The test was terminated by identifying the sex
of each animal and determining the presence or absence of eggs or embryos
within the females.
28-day Test. The 28-day test followed the new ASTM Proposed Guidelines
for Conducting a Mysid Life-Cycle Toxicity Test1. The three replicate
tanks (74-L glass tanks) for each concentration contained 18 L of test
water and 24 test animals (Figure 7). Partial renewals of 6 L per tank
were made daily during the 28 days of exposure. Mysids were placed in
exposure baskets (nylon screen silicone-glued to a 15 cm petri dish bottom)
located in the tanks. These baskets were used to prevent accidental
removal of animals during renewal and also to aid survival counts.
Temperature was controlled by a water bath at 25 ± 1°C, and the photoperiod
was set at 14-h light and 10-h dark.
-------
668
Chemical Analysis
Chemical analysis was conducted by EPA contract laboratories. ERL/GB
examined TVOC (total volatile organic carbon), TOC (total organic
carbon), oil and grease, and salinity to determine if any correlations
existed between LC50 values and major components of the produced water.
RESULTS
Results of the 96-h static acute toxicity tests conducted with six produced
water samples from five sites were expressed as 96-h LC50s (Fig. 8). One
site (#5) was tested as the "on-site" (OS) and laboratory (GB) comparison
and was resampled one month later (#5A). The LC50s for all tests conducted
ranged from 1.3 to 9.3 percent produced water.
Results of the aging study with one produced water sample were also
expressed as 96-h LC50 (Fig. 9). These tests were conducted at time 0
(immediately after sample), time 0 plus 1-day post sample, 1-week post
sample, 4-weeks post sample, and 8-weeks post sample. The 96-h LC50s
ranged from 1.3 to 9.8 percent produced water.
Concentrations of four major components from four produced water samples and
the corresponding 96-h LC50s indicate no apparent relationship between the
toxicity of these produced water samples and these components (Table 1).
The results of the 7-day and 28-day tests will be reported elsewhere.
-------
669
CONCLUSIONS
Initial acute toxicity test results indicate a small range in the
96-h LC50 values, roughly a factor of seven among all tests, which we
feel is quite good for the small data base. Also, the standard test
guidelines (ASTM3, US EPA4) can be used for produced water testing with
few modifications.
The overall trend shows that little change occurs in toxicity with time.
Comparing the 4- and 8-week test results of the aging study (4.8 and
5.0%) with that for the initial test (5.1%) indicated a factor of
roughly one, that is, essentially no change in toxicity with sample age.
-------
670
References
1. Lussier, S. "Proposed New Standard Practice for Conducting Life-
cycle Toxicity Tests with Saltwater Mysids." Draft 12, American
Society for Testing and Materials, Committee E-47.01; U.S. Environmental
Protection Agency, Narragansett, RI, 1987.
2. U.S. EPA. Aquatic Toxicity Testing, Seminar Manual: Guidance
Manual for Conducting a Seven Day Mysid Survival/Growth/Reproduction
Study Using the Estuarine Mysid, Mysidopsis bahia. Draft, ERL-
Narragansett Contribution No. X106, 1985.
3. Abrahamsen, T.A. "Proposed New Standard Guide for Conducting Acute
Toxicity Tests on Aqueous Effluents with Fishes, Macroinvertebrates
and Amphibians." Draft 7, American Society for Testing and Materials,
Committee E-47.01. Tennessee Eastman Company, Kingsport, TN, 1986.
4. U.S. EPA. Methods for Measuring the Acute Toxicity of Effluents to
Freshwater and Marine Organisms. EPA/600/4-85/013. U.S. EPA,
Environmental Monitoring and Support Laboratory, Cincinnati, OH,
1985, p. 216.
-------
671
Fig. 1. Typical production platform (tank battery) with final oil and
produced water separation pit. This pit was used to collect any excess
crude oil carried by the produced water before the produced water
was discharged into the surrounding estuarine habitat.
Fig. 2, 3 & 4. Views of the produced water sampling procedures used by
ERL/GB personnel. The discharge was sampled directly from the pipe,
poured into glass carboys, and transported to ERL/GB as quickly as
possible.
Fig. 5. Typical static acute test with airlines and test containers.
Fig. 6. The 7-day mysid test; note size of test container, with no aeration
supplied.
Fig. 7. Replacement (renewal) of 6 L of test media for the 28-day mysid
life-cycle test. Baskets (left) contained the test organisms.
-------
[Figures 1-7: photographs of the production platform, the sampling
procedures, and the test systems described in the captions above.]
-------
679
Fig. 8 Histogram of 96-h LC50 (concentration lethal to 50% of the test
animals in 96 h) of six produced water samples from five sites.
Produced water sample 5 was tested twice in one week for the "on-
site" (OS)-laboratory (GB) comparison and was resampled and tested
one month after the initial sampling (5A). All 96-h LC50 values
are expressed as percent of produced water. (Vertical lines
through graphs represent 95% confidence limits for each LC50 value.)
Fig. 9 96-h LC50 (concentration lethal to 50% of the test animals in
96 h) values of the weekly test with one produced water sample. The
"on-site" laboratory comparison is represented at time 0 and
time 0 plus 1 day. All 96-h LC50 values are expressed as percent
of produced water. (Vertical lines through points represent 95%
confidence limits for each LC50 value.)
-------
680
Figure 8. [Histogram of 96-h LC50 (% produced water) by site: 1, 2, 3, 4,
5-OS, 5-GB, and 5A; vertical lines are 95% confidence limits.]
Figure 9. [96-h LC50 (% produced water) versus time (weeks) for the aging
study; vertical lines are 95% confidence limits.]
-------
681
Table 1. Comparison of four major components: total volatile organic
carbon (TVOC), total organic carbon (TOC), oil and grease,
and salinity, and the 96-h LC50 (concentration lethal to 50% of
test organisms in 96 h). TVOC, TOC, and oil and grease were
measured in mg/L and salinity in ppt. All LC50 values are
expressed as percent of produced water. (Values in parentheses
represent the 95% confidence limits for each LC50 value.)
-------
682
SAMPLE    TVOC      TOC    OIL & GREASE    SALINITY    LC50
   #     (mg/l)   (mg/l)      (mg/l)        (ppt)       (%)

   1       1.2      768         17            36      3.7 (2.5-4.7)
   2       0.6       32         35           138      3.1 (2.4-3.9)
   3       8.0      370        300            98      1.9 (1.3-2.9)
   4       1.6      108        460            34      9.3 (7.4-11.9)
-------
683
Question and Answer Session
MR. MONTGOMERY: Any questions?
MRS. DE NAGY: The next
speaker is Bob Schaeffer with CENTEC. The Office of
Water Enforcement and Permits was supposed to be here
to give this next presentation. Unfortunately, some things beyond
our control came up, and Bob consented to
stand in on very little notice.
The purpose of the next presentation is to sort
of walk the lines between chemistry and biology in
the whole toxicity identification and evaluation, and
discuss the TRE principle.
-------
684
DR. SCHAFFER: Thank you,
Susan. This also proves the contractor will do
anything.
As you may be aware, there's a trend by state
and federal water quality control agencies to set
effluent toxicity limits based on water quality
standards and requirements that have been designed to
protect the biological community in the receiving
water body.
First slide. This regulatory approach differs
significantly from the one pursued over the last
decade, in which effluent limits were based on the level
of treatment achievable with the best available technology
in use in an industrial category.
Implementation of the water quality based approach
can lead to the need to upgrade established treatment
technologies and efficiencies, and may at times drive
technology innovation. Once water quality based
toxicity limits have been set, it is the discharger's
responsibility to determine if the facility is in
compliance, or if not, to determine how to remedy
the situation.
The first question is usually addressed through an
-------
685
effluent characterization program in which the
magnitude and variability of effluent toxicity is
estimated.
Guidelines for performing such a program are
presented in some detail by the US EPA in its technical
support document for Water Quality Based Toxics
Control. If a toxicity problem is identified, it is
then necessary to identify the sources and causes of
the toxicity so that appropriate treatment can be
planned and tested. This phase is commonly addressed
through the implementation of a toxicity reduction
evaluation or TRE.
Currently, the EPA is preparing detailed guidance
on the procedures for conducting these TREs. It's my
intent today to present an overview of how one such
TRE was designed and implemented, followed by a
discussion of tentative conclusions concerning the
relative toxicity of the various fractions of the effluent.
The refinery in which this test was run produces
refined petroleum products, primarily gasoline and
diesel fuel from crude oil. Principal process units
are distillation, cracking, reforming and alkylation.
During the time this study was being conducted, the
-------
686
refinery had an average crude run throughput of about
100,000 barrels per day, and generated an average of
3.1 million gallons per day of processed wastes,
including cooling tower blowdown, sanitary wastes,
storm water runoff and other wastes from a sulfuric
acid plant which also operated on the site.
These wastewaters were treated in the refinery
wastewater treatment system, and then discharged into
the ocean through a diffuser which provides at least
a ten to one dilution. The layout of the wastewater
treatment system and its major components are diagrammed
in the next slide.
The quality of the effluent discharge from the
refinery is regulated by an NPDES permit. The
conditions of this permit are designed to insure
compliance with the applicable federal effluent
guideline limitations and state water quality standards.
Permit conditions include effluent limits for a number
of specific chemical and physical parameters as well
as one for whole effluent toxicity. The toxicity
limit requires weekly 96-hour flowthrough bioassays using
the three-spined stickleback, with a minimum survival
of 50 percent in undiluted effluent. In other words,
-------
687
a 96 hour LC-50 equal to or greater than 50 percent.
As in most of these endeavors, the toxicity
reduction evaluation described in this case study
started out with clearly stated objectives outlined
in the next slide.
They were: to characterize the final effluent toxicity;
to determine the toxicity reduction through the
treatment system; to identify the major process
streams that exhibited toxicity and to characterize
this toxicity; to determine how these particular
waste streams were treated and how components were
degraded through the treatment system; then to
evaluate any additional steps that could be taken;
and to synthesize all of the data.
Next slide. The first step in a TRE is to
understand what type of toxicity is to be reduced.
This requires implementation of a characterization
program which is designed to identify both the nature
and source of the final effluent toxicity. In
this study, the characterization program consisted of
the following three elements: to select a cost
effective toxicity monitor and routinely screen the
effluent, perform chemical fractionation to identify
-------
688
classes of toxic constituents in the final effluent,
and to perform chemical analyses to identify specific
toxic elements and/or compounds in the final effluent.
We'll describe how each of these elements was defined,
present the results and try to generate some conclusions
from the data that was generated.
Next slide. The NPDES permit for this facility
required that the toxicity be evaluated using the
fish bioassay described earlier. Therefore, to insure
that all toxicity evaluations performed would provide
adequate results, it would be best to use that
particular compliance test for all of the analytical
purposes. However, due to its slow response time,
96 hours, and relatively high costs, it was deemed
impractical to use this test to process the large
number of samples required in this in-depth evaluation.
Therefore, a search was conducted to find a more
rapid and cost effective substitute toxicity monitor
for screening purposes. At first, the three commonly
measured parameters, COD, BOD and TOC were considered
as possible surrogates for the fish bioassay.
This was evaluated by using available data in
making side by side comparisons of fish bioassay
-------
689
results and concurrently determined values for each of
those parameters. However, the comparisons indicated
that there was no significant correlation using those
parameters, and therefore they were rejected as viable
alternatives.
Second, the use of a short term biological
monitoring system was evaluated. A review of the
literature indicated that Microtox might be a good
choice as a surrogate bioassay system for screening
purposes. This test system, which used bioluminescent
bacteria as the test species, provides results in
approximately one hour, and has been shown to respond
in a sensitive manner to refinery effluents.
However, before the test could be used in this
study, it was necessary to demonstrate that the
Microtox system would produce results which were at
least qualitatively similar to the three-spined
stickleback test. This validation was obtained by
performing side by side evaluations of the two tests.
Next slide. The results of this comparison
indicate that the Microtox bioassay serves as an
adequate screening tool for determining the relative
toxicities of process and treatment plant waste
-------
690
streams. In this comparison, the Microtox test endpoint,
which is the 20-minute EC-50, was not an exact
predictor of the fish bioassay endpoint, the LC-50.
However, it was felt that the Microtox was adequate
for screening the effluent toxicity, because in all
cases the Microtox identified toxicity if toxicity
was also identified by the fish bioassay and the
Microtox always indicated at least as much toxicity
as the fish bioassay, and often more.
The EC-50 that I mentioned, the 20 minute EC-50,
is defined as the concentration of a substance which
causes a 50 percent effect in 20 minutes. In this
case, it is the percentage of effluent which causes a
50 percent reduction in the light emitted by the
luminescent bacteria following a 20 minute exposure.
Based on the results of this evaluation, the
Microtox test was selected for characterizing the
magnitude and variability of final effluent toxicity.
This was accomplished by analyzing effluent samples
over a four month period. On a routine basis, 24
hour composite samples were collected and immediately
monitored by Microtox.
The results of this monitoring effort indicate a mean
-------
691
toxicity, expressed as the 20-minute Microtox EC-50, of 29 percent
effluent with an associated standard deviation of
11.7 percent. These Microtox 20 minute results can
be expressed in terms of the fish bioassay LC-50
results. In conversion, the effluent was estimated
to have a mean 96 hour LC-50 of 59.2 percent and a
standard deviation of 29.8 percent.
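The conversion appears to rest on the regression shown on the
correlation slide later in this talk, Y = 9.9 + 1.7X with R = 0.84,
where X is the Microtox EC-50 and Y the fish 96-hour LC-50, both in
percent effluent. A minimal Python check, assuming that equation is the
one used here, reproduces the 59.2 percent mean; the quoted standard
deviation of 29.8 percent evidently reflects more than a simple linear
rescaling and is not reproduced by this sketch.

    # Minimal check, assuming the slide regression is the conversion used.
    def fish_lc50_from_microtox(ec50_pct):
        # Y = 9.9 + 1.7X from the correlation slide (R = 0.84).
        return 9.9 + 1.7 * ec50_pct

    print(fish_lc50_from_microtox(29.0))   # 59.2, matching the text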
This was sufficient to pass the effluent toxicity
limit of equal to or greater than 50 percent in their
initial permit, but it was considerably below the new
permit limit of an LC-50 of 100 percent, which was to
become applicable upon issuance of the new permit.
Next slide. Toxicity in the final effluent can
be caused by one or more of a wide variety of chemical
compounds, which may be the products and/or byproducts
of the refinery process. Due to the large number of
constituents often found in complex effluents, and
the limited toxicological data on most compounds, it
is often very difficult if not impossible to clearly
identify a specific toxic agent via chemical analysis
alone.
However, if the number of possibilities can be
reduced, chemical analysis efforts could be more focused,
-------
692
and the chances for identification would be greatly
enhanced. In addition, even if specific toxic
chemicals could not be identified, knowledge of the
classes of chemical causing toxicity could lead to
possible treatment alternatives to reduce the toxicity
levels in the effluent from the treatment system.
In order to provide more understanding of complex
effluent toxicity, a fractionation procedure was
developed to identify the number and types of chemical
classes responsible for final effluent toxicity.
Next slide. In this procedure, the effluent was
separated into organic and inorganic fractions, and
each tested for toxicity. If the organic fraction
proved toxic, it was further separated into base
neutral and acid fractions, and each of these was
further tested for toxicity.
If the inorganic fraction proved toxic it was
further separated into cationic and anionic fractions
and each of these in turn were tested. Results of
this procedure would not identify the actual chemicals
causing toxicity, but would identify the classes of
compounds to which the toxic agents belonged. If
further characterization was desired, in-depth chemical
-------
693
analysis could then be performed. However, the
magnitude of this effort would be limited only to
those chemical classes showing toxicity which would
result in lower costs and less confusion in
interpretation.
The specifics of the fractionation procedure are
as follows. On a weekly basis, composite samples of
the final effluent were collected. Each sample was
analyzed for toxicity using the Microtox. Then 50
milliliters were passed through a column packed with
five ml of XAD-4 polystyrene resin. The water elutriate,
which contained the inorganic constituents of the
wastewater sample, was then analyzed for toxicity.
The column was then eluted with ten ml of acetone.
The acetone elutriate containing the organics was
evaporated on a hot bath to half a ml, resuspended
in 50 ml of distilled water, and then analyzed for
toxicity using the Microtox test.
If the inorganic fraction exhibited toxicity it
was treated with anionic and cationic exchange resins.
The resulting subfractions were assayed also, indicating
whether either of these fractions were responsible for
-------
694
any toxicity. The organic fraction, if it exhibited
toxicity, was subjected to methylene chloride-water
partitioning under basic and acidic conditions. The
resulting subfractions were assayed for toxicity,
indicating whether neutral, basic and/or acidic
compounds were responsible for organic toxicity.
The results of this fractionation effort indicated
that final effluent toxicity was almost always, 11 out
of 12 times, attributable to the organic constituents.
In addition, the most toxicologically active organics
appeared to be the neutral and to a lesser extent,
the acidic compounds.
Two approaches were used in an attempt to identify
specific chemicals which might be responsible for
final effluent toxicity. The first was a comparison
of GCMS results with maximum no observable effect
levels that were reported in the toxicological
literature. The second was a review of routine
effluent monitoring data collected over the years by
refinery personnel and stored in the refinery computer
system. These data were analyzed for significant
positive correlations between toxicity and any of the
commonly measured chemical parameters.
-------
695
As previously described, the fractionations
indicate that final effluent toxicity was routinely
associated with the organic fraction. Therefore, the
in-depth chemical analysis was keyed to detecting
organic constituents using GCMS scans. On three
occasions, final effluent samples were analyzed for
volatile and semivolatile organic compounds using
Methods 624 and 625. These analyses are designed to
identify all the priority pollutants as well as any
major non-priority pollutant organic compounds that
might be present.
On the three dates under consideration, a number
of compounds were identified in the final effluent
sample. There was considerable variability between
samples, both in the compounds and in the concentrations
at which they were found.
A review of the toxicological literature failed
to identify any of the detected constituents as a
probable cause of final effluent toxicity. For
several of these compounds, mostly ketones, virtually
no data could be found concerning their aquatic
toxicity. For those compounds for which there was
significant data available, such as isophorone,
-------
696
acetone, and toluene, the observed concentrations were
well below known effect concentrations.
Next slide. Three study elements were performed
in order to characterize the toxicity found in the
refinery effluent. Synthesis of the results suggests
the following. The final effluent exhibited variable
toxicity with a mean Microtox EC-50 of 29 percent
effluent and associated standard deviation of 11.7
percent.
Next slide. Final effluent toxicity was primarily
caused by neutral organic constituents and to a lesser
extent by acidic organic compounds.
Next slide. Specific organic compounds
identified in the final effluent which might be
associated with toxicity included several ketones and
a few phenolics. Additional evaluations were performed
to shed light on the possible sources of toxicity in
the final effluent.
The wastewater treatment system of the refinery
removes about 85 percent of the influent toxicity and
is about equally effective on all of the major process
streams that feed into the treatment plant. However,
sufficient quantities of neutral and acidic organic
-------
697
compounds seem to pass through the system either
unaltered or slightly rearranged to produce measurable
toxicity in the final effluent. The ultimate source
of these toxic compounds which are found in the final
effluent are most likely wastewaters from the foul
water strippers and the ammonia recovery unit in the
plant. This was determined by going back up and
running toxicity tests at each of these units. Also
there were other minor sources that were found to
contribute to the toxicity.
The available data failed to identify any compound
identified by GCMS as the probable cause. In fact,
the most likely situation is that compounds are acting
additively to cause the observed toxicity. The mean
level of effluent toxicity observed in this study was
within the applicable limits at that time, but would
have been out of compliance based on the state's more
stringent limits, which became operative in a new
permit. However, based on the results of this TRE
and the followup pilot studies that were performed at
the refinery on the treatment streams, the facility
is now making modifications to its treatment system and
believes it will be able to come into compliance with
-------
698
the new permit limits.
Thank you.
-------
699
Question and Answer Session
DR. SCHAFFER: No questions,
please. I presume there are no questions. I'm not
sure I could give you a good answer.
MRS. IRAZARY: Maria Irazary.
I am curious; did you conduct individual screening
tests on each of the fractions?
DR. SCHAFFER: Yes.
MRS. IRAZARY: So you
actually tested the toxicity of every one of the fractions?
DR. SCHAFFER: Yes.
MRS. DE NAGY: We're now
down to the last speaker of the day. I'd like to
thank everybody who is staying here to listen to Jim.
The last speaker takes what is actually the last
step. You first have your development of a method of
identification and evaluation, then it gets put into
a permit, and then there are arguments as to whether
or not the numbers generated are valid or real or
whatever. Jim's talk will present some information
on the variability on the drilling fluids toxicity
test, and he's here to take potshots at us wherever
possible.
-------
TOXICITY REDUCTION
EVALUATION AT AN
OIL REFINERY
A CASE STUDY
-------
OBJECTIVES OF TRE
IDENTIFY SOURCES AND CAUSES
OF EFFLUENT TOXICITY
EVALUATE TREATMENT OPTIONS TO
REDUCE EFFLUENT TOXICITY
-------
[Slide: layout of the refinery wastewater treatment system -- FWS and
ARU influents, primary canal, #1 and #2 aerated ponds, rotating
biological contactors (RBC), clarifiers and multimedia filter, DAF, and
APE; sampling locations for toxicity evaluations are marked, along with
the compliance point E-001 and the diffusion line.]
-------
TRE APPROACH
1. CHARACTERIZE FINAL EFFLUENT TOXICITY
2. DETERMINE TOXICITY REDUCTION
THROUGH TREATMENT SYSTEM
3. IDENTIFY MAJOR INFLUENT PROCESS
STREAMS
4. CHARACTERIZE PROCESS STREAM TOXICITY
5. DETERMINE PROCESS STREAM
DEGRADABILITY
6. SYNTHESIZE DATA
-------
CHARACTERIZATION OF
FINAL EFFLUENT TOXICITY
QUESTION:
• HOW TOXIC?
• HOW VARIABLE?
• CAUSATIVE AGENTS?
TOOLS:
• TOXICITY MONITOR
• CHEMICAL ANALYSIS
-------
TOXICITY MONITOR
1. USE COMPLIANCE TEST
(96-HR LC50 - 3-SPINED STICKLEBACK)
• HIGH ACCURACY
• SLOW
• EXPENSIVE
2. USE SURROGATE TEST (e.g., MICROTOX)
• FAST - 30 MIN./SAMPLE
• INEXPENSIVE - $50/SAMPLE
• ACCURACY - GOOD, FEW FALSE
NEGATIVES
-------
CORRELATION BETWEEN 96-HR LC50 (STICKLEBACK)
AND 15-MIN EC50 (MICROTOX)
[Scatter plot of fish LC50 versus Microtox EC50 with
regression line: Y = 9.9 + 1.7X, R = 0.84]
-------
CHEMICAL ANALYSIS
1. FRACTIONATION - CHEMICAL CLASSES
2. GC/MS - SPECIFIC COMPOUNDS
-------
WASTE-WATER SAMPLE
  XAD-4 RESIN:
    WATER ELUTION --> INORGANIC FRACTION
      pH > 10, 1-X8 RESIN --> ANIONIC FRACTION
      pH < 4, 50W-X8 RESIN --> CATIONIC FRACTION
    ACETONE ELUTION --> ORGANIC FRACTION
      pH > 11: AQUEOUS LAYER --> ACIDIC FRACTION
               ORGANIC LAYER --> (pH < 2)
                 ORGANIC LAYER --> NEUTRAL FRACTION
                 AQUEOUS LAYER --> BASIC FRACTION
-------
FINAL EFFLUENT
CHARACTERIZATION RESULTS
• TOXICITY (EC50 IN % EFFLUENT) (N=34)
  MEAN ± S.D. = 29.0 ± 11.7
• FRACTIONATION (EC50 IN % EFFLUENT) (N=12)
             MEAN
  WHOLE       28
  INORGANIC   NT
  ORGANIC     39
  ANION       NT
  CATION      NT
  ACID      >100
  BASE        NT
  NEUTRAL     91
-------
FINAL EFFLUENT
CONCLUSIONS
1. MEAN EC50 = 29%
   = COMPLIANCE 96-HR LC50 OF 59%
   GOAL: 96-HR LC50 OF 100%
2. TOXICITY DUE TO ORGANIC
   CONSTITUENTS
   • PRIMARILY NEUTRALS - 4-8 CARBON
     KETONES
   • SOME ACIDICS - PHENOL
-------
FINAL EFFLUENT
CHARACTERIZATION RESULTS
GC/MS (ug/l)
AROMATICS: TOLUENE (2)
KETONES:   ISOPHORONE (12)
           2-BUTANONE (18)
           2,3,4-TRIMETHYL-2-CYCLOPENTEN-
           1-ONE (120)
PHENOLS:   2,6-DIMETHYLPHENOL (85)
AMINES:    1,2-DIMETHYLPIPERIDINE (62)
-------
TOXICITY REDUCTION
IN TREATMENT SYSTEM
QUESTIONS:
• HOW MUCH REDUCTION IN EACH
  TREATMENT COMPONENT?
• WHAT TYPES OF TOXIC CONSTITUENTS
  ARE REMOVED?
• ARE TOXIC CONSTITUENTS FORMED?
TOOLS:
• MICROTOX
• FRACTIONATION
• GC/MS
-------
713
VARIABILITY IN DRILLING FLUID TOXICITY TEST RESULTS
J. E. O'REILLY
EXXON PRODUCTION RESEARCH COMPANY
P.O. BOX 2189
HOUSTON, TX 77252-2189
AND
L. R. LAMOTTE
UNIVERSITY OF HOUSTON
COLLEGE OF BUSINESS ADMINISTRATION
4800 CALHOUN ROAD
HOUSTON, TX 77004
The available literature on drilling fluid toxicity tests conducted according
to the EPA toxicity test protocol was compiled and grouped to allow estimation
of three main types of variability: intra-laboratory (Assay), intra-laboratory
with differences between "batches" of drilling fluid prepared using the same
"recipe" (Assay + Batch), and combined intra- plus inter-laboratory variability
(Assay + Lab). The ratio of highest to lowest LC50 (H/L) and the total vari-
ance (based on the log of the LC50 values) were calculated for each of these
categories. The total variance was then used to calculate the coefficient of
variation (CV) and 95% Confidence Multiplier factors. Three separate estimates
of variability for the combined intra- plus inter-laboratory category were
possible with the available data (H/L: 4.2, 1.0 to 8.8, and 12.4 to 14.3; CV:
53.2%, 76.7%, and 156.4%; 95% Confidence Multiplier: 3.39, 4.10, and 21.93).
Such levels of variability are quite similar to variability reported for
toxicity tests on pure chemicals using either mysids (H/L: 1.7 to 6.1; CV:
19.7% to 59.5%) or other test species (H/L: 2.5 to 104.2; CV: 2.8% to 124%).
Variability in drilling fluid toxicity tests is a serious concern to operators
in the Gulf of Mexico (GOM) because the NPDES general permit requires that the
discharged mud meet a compliance toxicity limit. The high level of variability
in toxicity tests, however, makes them undependable for determining compliance or
non-compliance with a given toxicity limit. In contrast, the general permits
in other OCS areas use pre-approval of additives to regulate mud discharges.
Industry feels that the pre-approval approach (to control what goes into a mud)
combined with a monitoring toxicity test (to detect problems) is more protec-
tive of the environment than the present method of control in the GOM.
-------
714
INTRODUCTION
Variability inherent in drilling fluid toxicity test results is an extremely
important issue to industry. Industry's concerns about variability result from
the manner in which EPA addressed the variability issue in both the current
Gulf of Mexico (GOM) NPDES Permit and the proposed BAT/NSPS Guidelines. This
paper will:
o Review estimates made by both the EPA and Industry for the drilling
fluid toxicity test results available at the close of the GOM Permit
record.
o Review the literature which became available after the close of the
GOM Permit record.
o Use all of the presently available toxicity data (Appendices 1 through
4) to estimate variability.
Information available at close of the GOM Permit record
Variability of toxicity test results has only recently become an issue as EPA
and state regulatory agencies have begun to use toxicity limitations as a
compliance "tool" to control effluent discharges. Prior to this, toxicity
tests were used only to point out effluents of concern, so that some corrective
action could be taken by the discharger to reduce the level of toxicity. In
the relatively few instances where variability was examined, the commonly used
variability estimate was the ratio of the highest to lowest LC50 (H/L ratio).
Sample statistics, such as the coefficient of variation (CV) calculated using
the arithmetic mean, were used to estimate variability in only a few reports.
The variability associated with the EPA drilling fluid test protocol was never
examined using an appropriate round-robin validation procedure. Industry was
concerned that in order to fully comply with the toxicity limitation, it would
need to know the variability associated with the test protocol.
O'Reilly (1985) reviewed the available literature to estimate the variability
that industry might encounter in following either the proposed EPA or the API
drilling fluids toxicity testing protocol. Eight of the references cited in
this study contained information that could be used to estimate variability for
the EPA test protocol alone (Duke et al. 1984, ERCO 1984a-c, and ERCO
1985b-e). Each of these studies examined the toxicity of a lignosulphonate mud
(Generic mud #8) formulated using the exact same "recipe" and containing 0, 5,
or 10% of one mineral oil.
These data demonstrated high levels of variability, including variability
within labs (assay), between labs (inter), and between different formulations
of the same "recipe" (batches). For generic mud #8 without oil, the coeffi-
cient of variation (CV) was 68% and the ratio of highest to lowest LC50 (H:L
ratio) was 6.7. Similar variability was observed in muds containing mineral
oil. With 5% mineral oil, the coefficient of variation was 74%, with an H:L ratio
of 3.0. These results confirmed industry's fear that it was being asked to use
a highly variable test to comply with the toxicity limitation in the permit.
-------
715
Information available after close of the GOM Permit record
In commenting on the proposed Offshore Effluent Limitations Guidelines, the
industry GOM permit comments on variability (O'Reilly 1986a) were updated to
include some new studies which had become available. These comments were
further expanded in a reply to EPA's response to industry's GOM permit comments
(O'Reilly 1986b). These data completely support Industry's previous conclu-
sions on variability.
Several new reports which address variability have become available only
recently (Bailey and Eynon 1986, Parrish and Duke 1985, 1986; Diesel Pill
Monitoring Program 1986, 1987a, 1987b; Shell Offshore 1986; and Amoco 1986).
Importantly, EPA's own reports support industry's position that variability in
test results will affect industry's ability to comply with the existing toxici-
ty limitation (Bailey and Eynon 1986, Parrish and Duke 1985, 1986). A brief
summary of these studies is included as Appendix 5.
"New" EPA Studies
In a recent paper released by EPA after the close of the GOM Permit record,
Bailey and Eynon (1986), used more sophisticated statistics to calculate two
additional variability estimates: the coefficient of variation calculated
using the geometric mean and something which they referred to as a "95% Confi-
dence Multiplier".
Some of the limitations of the EPA round-robin study include:
o Individual range-finding tests were not conducted by each laboratory,
o All LC50's were calculated by EPA and not by each laboratory,
o Intra-laboratory variability only measured at one highly experienced
laboratory, and
o Variability resulting from differences in "batches" of the same mud
prepared following the same "recipe" were not examined.
In addition, LaMotte (1987) identified several statistical limitations to this
study:
o The experimental design was inadequate for obtaining a "true" estimate
of variability amongst the population of all drilling fluid toxicity
testing laboratories because the test laboratories were not chosen by
means of a random sample from the population of all such laboratories.
EPA's variability estimate represents only the variability between the
ten commercial laboratories that responded to EPA's request for bids,
which can not be construed to represent a random sample.
o EPA used Maximum Likelihood Estimates (MLE) of variance components
(Bailey and Eynon 1986, Table 7) which are biased estimators and
depend heavily on distribution assumptions. Analysis of variance
(ANOVA) estimates of the variance components are more appropriate,
since they are unbiased estimators which do not rely as heavily on
distribution assumptions.
-------
716
o Bailey and Eynon (1986) treated the EPA lab as an estimate of interlab
variability when it clearly was not chosen at random. The effect of
the EPA lab should be regarded as fixed and not included in the
estimate of interlab variability.
o Bailey and Eynon (1986) calculated the "95% confidence multipliers"
as:
95% confidence multiplier = exp(1.96 x std. dev.)
= exp(1.96 x 0.4313)
= 2.33
This assumes that the standard deviation which they estimated is the
true standard deviation (fixed) and has no variability associated with
it. This condition would rarely be met since their estimate of the
standard deviation is based on a small number of samples (7). For
such small sample sizes, a table of "Student-t" values must be con-
sulted. A more defensible "95% confidence multiplier" can be
calculated as:
95% confidence multiplier = exp(t0.025,df x std. dev.)
Thus, for a sample size of seven, which EPA used to calculate the
standard deviation of 0.4313, the t-value becomes 2.447 instead of the
1.96 which EPA used. The "95% Confidence Multiplier" then becomes
2.87 rather than 2.33.
If the variability data are being used to develop an Alternative
Toxicity Limit however, a still different "95% Confidence Multiplier"
must be calculated, since one would be looking at the difference
between two determinations of toxicity, both of which are variable.
This "95% Confidence Multiplier" should be calculated as:
95% confidence multiplier = exp(√2 x t0.025,df x std. dev.)
For developing an Alternative Toxicity Limit and taking all of the
above factors into consideration, LaMotte (1987) calculates a "95%
confidence multiplier" of 5.62 for the EPA dataset, instead of 2.33 as
reported by Bailey and Eynon (1986).
In his review of the Bailey and Eynon (1986) paper, LaMotte (1987) noted that
although the methods which they used to analyze variability are essentially
correct, he felt that some of the basic assumptions required for use of these
methods were not met and hence the "95% confidence multipliers" which they
calculated would only be correct in very limited circumstances (e.g. when their
estimate of variance is equal to the "true" variance).
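As an illustrative aside, the Python sketch below reproduces the
competing multiplier calculations discussed above from Bailey and
Eynon's quoted standard deviation (0.4313) and sample size (7): 2.33
for the normal-quantile version, 2.87 with the Student-t correction,
and about 4.45 once the √2 factor for comparing two variable
determinations is added. LaMotte's 5.62 additionally re-treats the EPA
laboratory as fixed, which is not reproduced in this sketch.

    # Sketch of the competing "95% Confidence Multiplier" calculations.
    import math
    from scipy.stats import t

    sd = 0.4313      # Bailey & Eynon's estimated std. dev. of ln(LC50)
    df = 7 - 1       # sample size of seven behind that estimate

    cm_epa = math.exp(1.96 * sd)                              # ~2.33
    cm_t = math.exp(t.ppf(0.975, df) * sd)                    # ~2.87
    cm_alt = math.exp(math.sqrt(2) * t.ppf(0.975, df) * sd)   # ~4.45
    print(round(cm_epa, 2), round(cm_t, 2), round(cm_alt, 2))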
Variability estimates based on all available data
The available data (Appendix 1) were compiled and grouped into three main types
of variability:
-------
717
o Intra-laboratory (Assay) - Appendix 2,
o Intra-laboratory with differences between "batches" of drilling
fluid prepared using the same "recipe" - Appendix 3, and
o Combined intra- and inter-laboratory variability - Appendix 4. In
previous variability discussions (O'Reilly 1985, 1986a, 1986b) this
type of variability was incorrectly referred to as inter-laboratory
variability alone, but it does not alter any of the conclusions on
the magnitude of variability.
Five estimates of variability were calculated for these data (Tables 1 to 3):
o H/L ratio.
o Coefficient of Variation based on the ln(LC50).
o "95% Confidence Multiplier" as calculated by EPA (included even
though it is not appropriate):
"95%CM" = exp(1.96 x sd)
o "95% Confidence Multiplier" as calculated by LaMotte (more correct
than EPA's equation):
"95%CM" = exp(t0.025,df x sd)
o "95% Confidence Multiplier" as calculated by LaMotte for calculat-
ing an Alternative Toxicity Limit:
"95%CM" = exp(√2 x t0.025,df x sd)
Intra-laboratory variability
(Assays)
Intra-laboratory variability, calculated using all of the intra-laboratory data
on drilling fluid toxicity (Table 1) is considerably higher than that estimated
by EPA. The H/L ratio for all of the datasets ranged from about 1.1 to 14.3,
with a mean of 3.5. Sample statistics were calculated for each of the individ-
ual datasets. Note that in some cases, extremely high variability was ob-
served, giving rise to some rather absurd values for the "95% Confidence
Multipliers".
LaMotte (1987) used Analysis of Variance (ANOVA) and the data from Appendix 2
to make two estimates of the intra-laboratory (assay) variability. A variance
(σ²) of 0.1057 was calculated using the data from EPA's Gulf Breeze Laboratory
in which the LC50 was calculated using the probit model and corrected for
control mortality. This is somewhat lower than that calculated by Bailey and
Eynon (1986) for the same dataset using Maximum Likelihood Estimation (σ² =
0.1200). The second estimate was made using all of the available data (Appen-
dix 2) except for the two EPA datasets which used the moving average LC50
calculation. This estimate (σ² = 0.4260) is 3.6 times higher than the variance
reported for EPA's Gulf Breeze Laboratory by Bailey and Eynon (1986).
-------
718
The variances were then used to calculate the coefficient of variation and the
"95% Confidence Multipliers". The coefficient of variation calculated using
sample statistics ranged from 7.2 to 579.1%. The CV calculated for the com-
bined data was 72.9%, twice that reported by Bailey and Eynon (1986) for EPA's
Gulf Breeze Laboratory (CV = 35.7%).
As previously discussed, Bailey and Eynon's "95% Confidence Multiplier" is only
correct if their measured variance is the "true" variance for all drilling
fluids. The more correct "95% Confidence Multiplier" factor calculated as
suggested by LaMotte (1987) is provided in Table 1 as "95% CM" - #2. For these
datasets, the "95% CM" - #2 ranged from 1.36 to over 900 ("95% CM" - #2, Table
1). The intra-laboratory "95%CM" - #2 calculated for the combined dataset
(3.87) was twice that reported by Bailey and Eynon (1.97).
Intra-laboratory variability with differences
between "batches" of the same drilling fluid
(Assays, "Batches", and Assays + "Batches")
Intra-laboratory (assays) variation and "batch" variation can be estimated from
the "intra-lab plus batches" dataset (Appendix 3). Variability was calculated
using both sample statistics and a two-way, random, nested ANOVA model (LaMotte
1987) as shown in Table 2. Six separate estimates of intra-laboratory vari-
ability are available which include the additional variability due to testing
different "batches" of drilling fluids prepared according to the same "recipe"
(Table 2). Three of these studies contained replicate analyses which allowed
an estimate of intra-laboratory variability for three "batches" of mud. These
replicate analyses, however, represent a subset of the intra-laboratory dataset
(Appendix 2). Hence, the intra-laboratory (assay) variability estimate made
from this subset is made from fewer toxicity tests conducted at only one
experienced laboratory and is lower than the estimate made using all of the
data.
The variability estimates based on individual sample statistics do not appear
to be very different from those measured for intra-laboratory variability
alone, mainly because of the broad range covered by all of these estimates.
The H/L ratio for this category of variability ranges from 2.2 to 11.3.
LaMotte's (1987) estimation of the "95% Confidence Multiplier" (#2 in Table 2)
ranged from 5.36 to 13.00.
The effect of testing different "batches" of drilling fluid prepared according
to the same formulation is more apparent on those water base muds not contain-
ing oil. Note that for generic mud #1 without oil, the intra-laboratory
variability was 1.36 ("95% CM" - #2, Table 1) and increased to 5.36 when addi-
tional "batches" are included in the analysis. The same general trend is
apparent for generic mud #8 without oil added. The intra-laboratory variabili-
ty ranged from 1.37 to 2.14 and increased to 13.00 when different "batches" are
included.
The total variance calculated for the combined "intra-lab plus batch" dataset
(Appendix 3) using ANOVA was 0.5957 (Table 2). These data allowed individual
estimation of the variance due to assays (0.2930) and the variance due to
"batches" (0.3027).
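This partition can be illustrated with a simplified one-way,
method-of-moments calculation: the within-batch mean square estimates
the assay variance, and the between-batch mean square, less the
within-batch mean square, divided by the number of assays per batch,
estimates the "batch" component. The paper's actual model is a two-way
nested ANOVA; the Python sketch below is a simplified version using
invented ln(LC50) values, not the Appendix 3 data.

    # Simplified one-way variance-component split on ln(LC50).
    from statistics import mean

    # Invented data: 3 "batches" of the same recipe, 3 assays each.
    batches = [[10.1, 10.5, 9.8], [11.2, 11.0, 11.5], [10.2, 10.9, 10.4]]
    n = len(batches[0])          # assays per batch
    k = len(batches)             # number of batches
    grand = mean(v for b in batches for v in b)

    msw = sum((v - mean(b)) ** 2 for b in batches for v in b) / (k * (n - 1))
    msb = n * sum((mean(b) - grand) ** 2 for b in batches) / (k - 1)

    var_assay = msw                        # within-batch (assay) variance
    var_batch = max((msb - msw) / n, 0.0)  # between-"batch" component
    print(round(var_assay, 4), round(var_batch, 4),
          round(var_assay + var_batch, 4))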
-------
719
Combined Intra- Plus Inter-laboratory Variability
(Assays + Lab)
Variability in the combined intra- plus inter-laboratory dataset (Appendix 4) is
considerably higher than estimated in the EPA study. The H/L ratio ranged from
1.0 to 90.9 with a mean of 3.4. Variances calculated using individual sample
statistics were highly variable, ranging from 0.0223 to 10.1694. The wide
scatter in these estimates probably results from differences in levels of
experience between laboratories. It is this level of variability that is
important in comparing the monthly or end-of-well toxicity test result to a
permit limit.
These data were broken into three main categories (based on the design of the
original studies): The EPA "round-robin" study using data analyzed by probit
analysis and moving average with and without correction for control mortality;
the Diesel Pill Monitoring Program (DPMP) data; and the Shell Offshore (1986)
data. Amoco (1986) also supplied some data on two muds tested at two laborato-
ries, but the results for one of these toxicity tests were deleted due to high
control mortality, leaving only one mud tested at two labs.
o EPA "round-robin" on one mud type - Bailey and Eynon (1986):
- Laboratories #5,9, and 10 were deleted. Same group as used by
Bailey and Eynon (1986). Sample size = 8.
- Laboratories #5,9, 10, and ERLGB were deleted. Sample size = 7.
o DPMP (1987a) data, plus ERLGB test results (DPMP 1986), plus results
for kit #206 (DPMP 1987b), eight kits, eight basic mud types with
samples before and after spotting a diesel pill analyzed by up to
three laboratories; sample size = 37 toxicity tests.
o Shell Offshore (1986) - 2 muds tested twice at each of
two laboratories; sample size = 8 toxicity tests.
Coefficients of variation for these datasets were 47.8, 53.2, 76.7, and 156.4%,
respectively (Table 3). The "95% Confidence Multiplier's" for each of these
datasets (Figure 1 and Table 3) were much higher than the factor calculated by
Bailey and Eynon (1986). Figure 1 compares the variability which EPA took into
consideration in setting the GOM permit limit (Duke et al.'s 1984 data for a
single sample of generic mud #1 tested once at EPA's highly experienced Gulf
Breeze Laboratory) to the new information. Particularly note the very tight or
small error bars around the mean LC50 for the Duke et al. (1984) dataset.
These error bars are approximately representative of a coefficient of variation
of 9.1%, or a "95% Confidence Multiplier" of 1.10. In contrast, the EPA study
estimated a "95% Confidence Multiplier" of 2.33 (Bailey & Eynon 1986), twice
that accounted for by the Duke et al. data, and even that is an underestimate.
The correct value for that data is closer to 3.4 (LaMotte 1987) or over three
times as much. The value estimated for two and three experienced labs for the
DPMP is 3.5 times higher.
EPA Region VI recognized variability as an important factor; however, variabil-
ity is only incorporated into the permit when an operator files an Alternative
-------
720
Toxicity Request. The question remains as to whether an operator would be
found out of compliance if, drilling with a generic mud without specialty
additives, he obtains an LC50 of, say, 25,000 ppm SPP. EPA Region VI has been
using Bailey and Eynon's (1986) estimate of variability to develop these
Alternative Toxicity Limits. In relation to Figure 1 note that using their
"95% Confidence Multiplier" of 2.33, the lower 95% confidence limit becomes
14,200 ppm SPP rather than the original 30,000 ppm SPP.
Embedded in the 2.33 factor is the assumption that Bailey and Eynon's estimate
of the total variance is the "true" variance. As discussed previously this is
not the case. Taking this into account the calculated "95% Confidence Multi-
plier" becomes 2.87. It increases still further (3.39) when the EPA Gulf
Breeze data is considered "fixed" and not random. Thus a more "correct"
estimate of the lower 95% confidence limit based on the EPA dataset now becomes
9,700 ppm SPP.
Two additional estimates of combined intra- plus inter-laboratory variability
are available from the Diesel Pill Monitoring Program and from the Shell
Offshore (1986) studies. The DPMP provides a dataset based on numerous mud
samples but tested at either two or three laboratories. The "95% Confidence
Multiplier" for these data was 4.10. The Shell Offshore data provides a more
limited database but includes a contrast between a highly experienced laborato-
ry and a fairly "new" laboratory. As expected, the estimated "95% Confidence
Multiplier" is higher still (21.93). The lower 95% confidence limit based on
the original mean LC50 of 33,000 ppm SPP then becomes 1500 ppm SPP. This is
quite different from the 30,000 ppm SPP cited by EPA in the GOM permit.
Thus we are left with four estimates of variability ("95% Confidence Multipli-
ers"), none of which includes variability resulting from differences due to
different "batches" of the same drilling fluid formulation prepared by differ-
ent mud companies. Which estimate is the "best" estimate? We have already
shown that EPA's factor of 2.33 is more "correctly" calculated as 3.39. We must
keep in mind that this value also represents an underestimate because of the
original experimental design. We also have an estimate from the Diesel Pill
Monitoring Program (4.10) which represents variability between only two and
three laboratories for numerous mud types. Keep in mind though that this value
may also be an underestimate, since the labs participating in this study were
preselected for their experience in conducting these tests. The final estimate
is for the Shell Offshore data (21.93). Our confidence in this estimate of
variability being representative of the "true" variability is not as high as
for the other estimates since it is based on only two muds tested at two
laboratories, but it is an estimate nonetheless.
If these variability estimates are to be used in setting new toxicity limits,
then variability due to different "batches" of drilling fluid needs to be
accounted for. This is accomplished by adding the estimated variance for
different "batches" of 0.3027 from Table 2 to the estimated variance for
combined intra- plus inter-laboratory variability. The three estimates for
total variance then become 0.5516, 0.7657, or 1.5401 for the EPA, DPMP, and
Shell datasets, respectively. LaMotte (1987) recognized that combining these
sources of variability into a single estimate can be somewhat arbitrary.
However, he estimated a total variance of 0.820 using all of the available
information. Note that this lies within the range predicted by the individual
-------
721
studies. Thus, if EPA is going to use a "95% Confidence Multiplier" to develop
a toxicity limit then a value of 12.95 should be used in place of 2.33 which
has been applied in the past (LaMotte 1987).
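As an illustrative aside, the Python sketch below performs this
combination step. The degrees of freedom behind the 12.95 figure are
not restated in the text, so the t-quantile is assumed here to be 2.0;
under that assumption, LaMotte's pooled total variance of 0.820
reproduces approximately 12.95.

    # Sketch: fold the "batch" variance into each combined intra- plus
    # inter-laboratory total, then form the Alternative-Toxicity-Limit
    # multiplier exp(sqrt(2) * t * sd).
    import math

    VAR_BATCH = 0.3027
    for label, var_total in [("EPA", 0.5516), ("DPMP", 0.7657),
                             ("Shell", 1.5401)]:
        print(label, "intra+inter:", round(var_total - VAR_BATCH, 4),
              "total:", var_total)

    T_ASSUMED = 2.0                  # assumed t(0.025, df); df not given
    sd_total = math.sqrt(0.820)      # LaMotte's pooled total variance
    print(round(math.exp(math.sqrt(2) * T_ASSUMED * sd_total), 2))  # ~12.95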
Importance of these variability estimates to the operator
What impact does such variability have on an operator? It can be quite
severe as demonstrated by the following hypothetical example. Let's assume
that an operator has a drilling fluid which he is using to drill many of his
wells. Let's further assume that the operator knows the "true" LC50 for this
mud to be 50% higher than the GOM permit limit, or 45,000 ppm SPP, which he
thinks will allow him to discharge this mud offshore without violating the
permit limit. To satisfy permit requirements however, the operator must
conduct a monthly toxicity test plus an additional test once maximum well depth
is reached. As discussed above, measurement of the LC50 introduces a certain
amount of variability or uncertainty.
Presently, we have four estimates of this variability (Table 4). Assuming
the variability reported by Bailey and Eynon (1986) to be correct, then about
17% or one out of every six toxicity tests conducted on this drilling fluid
would be "out of compliance" with the 30,000 ppm SPP toxicity limit in the GOM
permit. If the Shell Offshore (1986) data is the "correct" estimate, then
nearly 36% or one out of every three toxicity tests would be "out of compli-
ance" with the GOM permit limit (LaMotte 1987).
This implies that the operator will be out of compliance with the toxicity
limit once every three to six months assuming he is drilling only one well at a
time. This subjects the operator to a $750,000 civil penalty every three to
six months. These data also suggest that if an operator is drilling six or
more wells at any one time, that he may be out of compliance at one of these
sites every month and thus subject to a $750,000 penalty every month! This is
a high cost to the operator for discharging "innocent" drilling fluids, which
have a "true" LC50 of 45,000 ppm SPP (50% above the allowable toxicity limit).
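A minimal Python check of this example, assuming measured ln(LC50)
values scatter normally about the true value: with Bailey and Eynon's
standard deviation of 0.4313 the probability of a result below 30,000
ppm SPP is about 17%, and with a standard deviation of about 1.11,
taken here as the square root of the Shell intra- plus inter-laboratory
variance of 1.2374 implied by the previous section, it is about 36%,
matching the text.

    # Probability that one measured LC50 falls below the 30,000 ppm SPP
    # limit when the "true" LC50 is 45,000 ppm SPP.
    import math
    from scipy.stats import norm

    TRUE_LC50 = 45000.0
    LIMIT = 30000.0
    # The Shell sd is an assumption: sqrt(1.5401 - 0.3027).
    for label, sd in [("Bailey & Eynon (1986)", 0.4313),
                      ("Shell Offshore (1986)", math.sqrt(1.2374))]:
        z = math.log(LIMIT / TRUE_LC50) / sd
        print(label, format(norm.cdf(z), ".0%"), "out of compliance")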
-------
722
REFERENCES CITED*
Amoco, 1986. Results of toxicity tests.
APHA. 1985. Standard Methods for the Examination of Water and Wastewater.
16th Edition. A. E. Greenberg, R. R. Trussell, L. S. Clesceri, and M. H.
Franson, editors. American Public Health Association, Washington, D. C.
Bailey, R.C. and B.P. Eynon. 1986. Toxicity testing of drilling fluids:
Assessing laboratory performance and variability. Submitted to Papers on
Symposium on Chemical and Biological Characterization of Sludges, Sedi-
ments, Dredge Spoils, and Drilling Muds, ASTM.
Breteler, R. J., P. D. Boehm, J. M. Neff, and A. G. Requejo. 1984. Acute
toxicity of drilling muds containing hydrocarbon additives and their fate
and partitioning between liquid, suspended, and solid phases. Report
prepared for American Petroleum Institute by Battelle, New England Marine
Research Laboratory, Duxbury, Massachusetts.
DPMP. 1986. USEPA - API Diesel Pill Monitoring Program. Preliminary results
of diesel pill acute toxicity tests with mysids (Mysidopsis bahia) con-
ducted at ERCO (industry testing laboratory), ESI (EPA contract laborato-
ry), and Environmental Research Laboratory -- Gulf Breeze. Table handed
out at DPMP Oversight Committee Meeting - October 14, 1986.
DPMP. 1987a. USEPA - API Diesel Pill Monitoring Program Report Number 3.
Prepared for the third meeting of the DPMP Oversight Committee. February
17, 1987.
DPMP, 1987b. USEPA - API Diesel Pill Monitoring Program. Results from
qualification test of bioassay lab for DPMP. Letter from Weintritt
Testing Laboratories dated March 17, 1987.
Duke, T. W., P. R. Parrish, R. M. Montgomery, S. D. Macauley, J. M. Macauley,
and G. M. Cripe. 1984. Acute toxicity of eight laboratory-prepared
generic drilling fluids to mysids (Mysidopsis bahia). Environmental
Research Laboratory, Gulf Breeze, Florida. EPA-600/3-84-067.
ERCO. 1984a. Acute toxicity of drilling fluid no. 8. Conducted for Exxon
Production Research Company by ERCO/A Division of ENSECO, Cambridge,
Massachusetts. December 1984.
ERCO. 1984b. Acute toxicity of drilling fluid no. 8 with 5% Mentor 28.
Prepared for Exxon Production Research Company by ERCO/A Division of
ENSECO, Cambridge, Massachusetts. December 1984.
ERCO. 1984c. Acute toxicity of drilling fluid no. 8 with 10% Mentor 28.
Prepared for Exxon Production Research Company by ERCO/A Division of
ENSECO, Cambridge, Massachusetts. December 1984.
-------
723
ERCO. 1984d. Acute toxicity of suspended particulate phase of drilling fluids
containing diesel fuels. Prepared for U. S. Environmental Protection
Agency by ERCO/A Division of ENSECO, Cambridge, Massachusetts. May 1984.
ERCO. 1985b. Acute toxicity of generic mud #8. Prepared for Exxon Research
and Engineering Company by ERCO/A Division of ENSECO, Cambridge, Massa-
chusetts. March 1985.
ERCO. 1985c. Acute toxicity of drilling fluid #4. Prepared for Exxon Re-
search and Engineering Company by ERCO/A Division of ENSECO, Cambridge,
Massachusetts. March 1985.
ERCO. 1985d. Acute toxicity of drilling fluid #5. Prepared for Exxon Re-
search and Engineering Company by ERCO/A Division of ENSECO, Cambridge,
Massachusetts. March 1985.
ERCO. 1985e. Acute toxicity of drilling fluid #6. Prepared for Exxon Re-
search and Engineering Company by ERCO/A Division of ENSECO, Cambridge,
Massachusetts. March 1985.
ERCO. 1985f. Interlaboratory and intralaboratory comparison of the API
drilling fluids bioassays protocol. Final report prepared for American
Petroleum Institute by ERCO/A Division of ENSECO, Cambridge, Massachu-
setts. June 1985.
ERCO. 1986. Acute toxicity of water-based drilling fluids containing mineral
oils. Interim data report prepared for American Petroleum Institute by
ERCO/A Division of ENSECO, Cambridge, Massachusetts. June 30, 1986.
ERCO. 1987. Acute toxicity of water-based drilling fluids containing mineral
oils. Final data report prepared for American Petroleum Institute by
ERCO/A Division of ENSECO, Cambridge, Massachusetts. In preparation as
of April 1, 1987.
Herricks, E. E., D. J. Schaeffer, and J. C. Kapsner. 1985. Complying with
NPDES permit limits: when is a violation a violation? Journal of Water
Pollution Control Federation 57(2): 109-115.
LaMotte, L. R. 1987. Comments on variability among toxicity assays.
Comments prepared for API's Permit Modification Request.
O'Reilly, J. E. 1985. Variability in drilling fluids bioassay test results.
Comments submitted for the Gulf of Mexico NPDES Permit.
O'Reilly, J. E. 1986a. Variability in drilling fluid toxicity tests. Com-
ments submitted for the BAT/NSPS Guidelines.
O'Reilly, J. E. 1986b. Variability in drilling fluid toxicity tests - A reply
to EPA's response to industry's comments. Comments submitted for Indus-
try's December 1986 Stay Request.
Parrish, P.R. and T.W. Duke. 1985. Acute toxicity of a laboratory-prepared
generic drilling fluid to mysids (Mysidopsis bahia), and evaluation of
test results from ten commercial laboratories. U.S. Environmental
-------
724
Protection Agency, Environmental Research Laboratory, Sabine Island, Gulf
Breeze, FL.
Parrish, P.R. and T.W. Duke. 1986. Variability of the acute toxicity of
drilling fluids to mysids (Mysidopsis bahia). To be published in:
Proceedings of the Symposium on Chemical and Biological Characterization
of Municipal Sludges, Sediment, Dredge Spoils, and Drilling Muds, ASTM,
Philadelphia, PA 19103.
Rue, W. J., J. A. Fava, and D. R. Grothe. In press. A review of inter- and
intralaboratory effluent toxicity test method variability. To be pub-
lished in ASTM's Aquatic Toxicology and Hazard Assessment: 10th Symposium.
Shell Offshore, Inc. 1986. Summary of toxicity data for mud types 1 and 2 -
split sampling project. Comments submitted by Shell Offshore, Inc. for
the Industry's December 1986 Stay Request.
* To minimize confusion, the references are cited with the same naming
convention as used in O'Reilly 1985, 1986a, and 1986b.
-------
TABLE 1
INTRA-LABORATORY VARIABILITY OF DRILLING FLUID TOXICITY TESTS
CONDUCTED ACCORDING TO THE EPA PROTOCOL
(ASSAY)

                                                      VARIABILITY ESTIMATES                  95% CONFIDENCE MULTIPLIERS(i)
        OIL                         LC50          MEAN
GEN.    CONC.   REF.   TOXICITY     CALC. CONTROL      LC50(f)    VARIANCE     H/L
MUD #   (VOL.%) NO.(a) LAB          (b)   CORR.  n(c)  (ppm SPP)  (s^2)        RATIO(g)  CV(%)(h)   #1       #2        #3
1         0      16    ERCO         MA    NO     3       21,762   0.0051        1.2        7.2      1.15     1.36      1.55
1         5      16    ERCO         MA    NO     3        1,000   1.3308       10.0      126.1      9.59   143.15   1.1E+03
8         0      15    ERCO         MA    NO     3      153,300   0.0312        1.4       17.8      1.41     2.14      2.93
8         0      16    ERCO         MA    NO     3      603,900   0.0053        1.1        7.3      1.15     1.37      1.56
8         2      19    ERLGB        MA    NO     6       26,500   0.0418(k)     1.8       20.6      1.49     1.69      2.10
8         2       2    ERLGB        MA    YES    6       29,200   0.0610        1.9       25.1      1.62     1.87      2.45
8         2       2    ERLGB        P     YES    6       29,300   0.1057        2.1       25.1      1.89     2.31      3.26
8         2      15    ERCO         MA    NO     3        3,400   0.1200(j)     2.1       35.7      1.97     2.44      3.52
8         5      16    ERCO         MA    NO     3          600   0.2370        2.5       51.7      2.60     8.12     19.35
8        10      15    ERCO         MA    NO     3        1,200   0.0865        1.8       30.1      1.78     3.55      5.99
SW_GYP    0      20    LAB A        MLW(r)  NO   2      204,900   0.3583        3.2       65.6      3.23    13.14     38.18
SW_GYP    0      20    LAB B        MA,P(r) NO   2       49,900   0.2904        2.1       58.1      2.88   941.47   1.6E+04
SW_GYP    3      20    LAB A        MLW(r)  NO   2        2,700   1.0479        4.3      136.1      7.44  4.5E+05   9.7E+07
SW_GYP    3      20    LAB B        MA(r)   NO   2        2,600   0.0702        1.5       27.0      1.68    28.97    116.85

ALL COMBINED     17                                               3.5420       14.3      579.1     40.00  2.4E+10   4.9E+14
  FROM "INTRA + BATCH" WEIGHTED AVERAGE
  (BASED ON SUBSET OF DATA USED FOR ABOVE ESTIMATE):
                                                                  0.4260(k) (d.f. = 22)
                                                                                --        72.9      3.59     3.87      6.78
                                                                  0.2930(k,q) (d.f. = 14)
                                                                  0.3740 (d.f. = 36)
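The n = 2 rows above admit a quick internal consistency check (an observation, not a statement from the paper): for two measurements, the sample variance of ln(LC50) is fixed by the H/L ratio alone,

    \[
      s^2 = \tfrac{1}{2}\,[\ln(H/L)]^2 ,
      \qquad\text{e.g., SW\_GYP, 3\%, Lab A:}\quad
      \tfrac{1}{2}\,[\ln 4.3]^2 = 1.064 \approx 1.0479 ,
    \]

with the small discrepancy attributable to rounding of the tabulated H/L.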
-------
Notes for Tables 1-3

(a) References cited:
     1. Amoco (1986)                 11. ERCO (1985c)
     2. Bailey and Eynon (1986)      12. ERCO (1985d)
     3. DPMP (1986)                  13. ERCO (1985e)
     4. DPMP (1987a)                 14. ERCO (1985f)
     5. DPMP (1987b)                 15. ERCO (1986)
     6. ERCO (1984a)                 16. ERCO (1987)
     7. ERCO (1984b)                 17. LaMotte (1987)
     8. ERCO (1984c)                 18. Parrish and Duke (1985)
     9. ERCO (1984d)                 19. Parrish and Duke (1986)
    10. ERCO (1985b)                 20. Shell Offshore, Inc. (1986)
(b) Method of calculating LC50: MA = Moving Average; P = Probit Model;
    MLW = Modified Litchfield-Wilcoxon Procedure.
(c) Number of samples of same "batch" of drilling fluid tested.
(d) Number of different "batches" of drilling fluids tested - all prepared
    using same "recipe."
(e) Number of laboratories testing same "batch" of drilling fluid.
(f) Geometric mean.
(g) H/L Ratio - Ratio of highest to lowest measured LC50 values.
(h) CV(%) - Coefficient of variation based on the ln(LC50):
    CV = 100 x sqrt[exp(s^2) - 1], where s^2 is the tabulated ln-scale
    variance. These are different from those reported in O'Reilly (1985,
    1986a, 1986b), which were based on the arithmetic mean.
(i) "95% Confidence Multipliers" calculated using three methods:
    #1 - As calculated by Bailey & Eynon (1986): "95% CM" = exp(1.96 x s)
    #2 - Correction for small sample size: "95% CM" = exp[t(0.025,d.f.) x s]
    #3 - For use in developing Alternative Toxicity Limit:
         "95% CM" = exp[sqrt(2) x t(0.025,d.f.) x s]
(j) As calculated by Bailey & Eynon (1986) using Maximum Likelihood Estimation.
(k) As calculated by LaMotte (1987) using Analysis of Variance (ANOVA).
(l) Using only 7 outside labs as suggested by LaMotte (1987).
(m) Using 7 outside labs plus ERLGB as fixed.
(n) Using 7 outside labs plus ERLGB as random, as in Bailey & Eynon (1986).
(o) Omitting two EPA datasets which used moving average calculation.
(p) d.f. = Number of degrees of freedom.
(r) One lab used the modified Litchfield-Wilcoxon method; the other lab used
    the moving average method on one sample and probit on another.
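For reference, footnotes (h) and (i) are mechanical to evaluate from a tabulated ln-scale variance and its degrees of freedom. A minimal sketch (Python, assuming SciPy for the t quantile; this is not code from any of the cited studies):

    # Hedged sketch: evaluates footnotes (h) and (i) above.
    import math
    from scipy.stats import t  # Student's t quantile, used by methods #2 and #3

    def cv_percent(var):
        """Footnote (h): CV(%) of a lognormal LC50 from the ln-scale variance."""
        return 100.0 * math.sqrt(math.exp(var) - 1.0)

    def cm1(var):
        """Method #1 (Bailey & Eynon 1986): exp(1.96 * sigma)."""
        return math.exp(1.96 * math.sqrt(var))

    def cm2(var, df):
        """Method #2, small-sample correction: exp(t(0.025, d.f.) * sigma)."""
        return math.exp(t.ppf(0.975, df) * math.sqrt(var))

    def cm3(var, df):
        """Method #3 (Alternative Toxicity Limit): exp(sqrt(2) * t(0.025, d.f.) * sigma)."""
        return math.exp(math.sqrt(2.0) * t.ppf(0.975, df) * math.sqrt(var))

    # Check against Table 1's weighted-average row (variance 0.4260, d.f. = 22):
    print(round(cv_percent(0.4260), 1),   # 72.9
          round(cm1(0.4260), 2),          # 3.59
          round(cm2(0.4260, 22), 2),      # 3.87
          round(cm3(0.4260, 22), 2))      # 6.78

These reproduce the tabulated 72.9 / 3.59 / 3.87 / 6.78.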
-------
TABLE 2
INTRA-LABORATORY VARIABILITY OF DRILLING FLUID TOXICITY TESTS
INCLUDING DIFFERENCES BETWEEN "BATCHES" OF MUD
CONDUCTED ACCORDING TO THE EPA PROTOCOL
(ASSAY + "BATCHES")

                                                            VARIABILITY ESTIMATES                95% CONFIDENCE MULTIPLIERS(i)
        OIL                           LC50          MEAN
GEN.    CONC.   REF.        TOXICITY  CALC. CONTROL      LC50(f)    VARIANCE   H/L
MUD #   (VOL.%) NO.(a)      LAB       (b)   CORR.  n(d)  (ppm SPP)  (s^2)      RATIO(g)  CV(%)(h)   #1      #2      #3
1        0      15, 16      ERCO      MA    NO     3       22,250   0.1541      2.2       40.6      2.15    5.36   10.73
1        5      15, 16      ERCO      MA    NO     6        3,450   0.5440      7.3       72.4      4.25    6.70   14.74
1       10      15, 16      ERCO      MA    NO     5        4,240   0.6566      5.3       93.6      4.92    9.47   24.05
8        0      6, 9, 10,   ERCO      MA, P NO     5      194,870   0.8546     11.3      116.2      6.12   13.00   37.62
                15, 16
8        5      7, 12,      ERCO      MA, P NO     8        1,790   0.6059      8.1       91.3      4.60    6.30   13.49
                15, 16
8       10      8, 13,      ERCO      MA, P NO     7        1,550   0.4937      7.6       79.9      3.96    5.59   11.39
                15, 16

ALL COMBINED    17          ASSAY + "BATCHES"   0.5957*                         90.3      4.54      5.00    9.75
                            ASSAY               0.2930
                            "BATCHES"           0.3027
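The ALL COMBINED entry decomposes additively on the ln scale, and the tabulated components check exactly:

    \[
      \sigma^2_{\text{assay+batches}}
        = \sigma^2_{\text{assay}} + \sigma^2_{\text{batches}}
        = 0.2930 + 0.3027 = 0.5957 .
    \]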
-------
TABLE 3
COMBINED INTRA- PLUS INTER-LABORATORY VARIABILITY OF DRILLING FLUID TOXICITY TESTS
CONDUCTED ACCORDING TO THE EPA PROTOCOL
(ASSAY + LAB)

                                                              VARIABILITY ESTIMATES                  95% CONFIDENCE MULTIPLIERS(i)
                OIL               LC50          CONTROL       MEAN
GEN.            CONC.    REF.     CALC.         CORR.   n(e)  LC50(f)     VARIANCE      H/L
MUD #           (VOL.%)  NO.(a)   (b)                         (ppm SPP)   (s^2)         RATIO(g)  CV(%)(h)   #1       #2        #3
8                2.0      17      MA            NO      7       34,800    0.2558(k,l)    4.1        54.0     2.69     3.45      5.76
8                2.0      17      MA            YES     7       38,200    0.2411(k,l)    4.0        52.2     2.62     3.33      5.47
8                2.0      17      P             YES     7       38,100    0.2489(k,l)    4.2        53.2     2.66     3.39      5.62
8                2.0      17      P             YES     7      184,927    0.2061(k,m)    4.2        47.8     2.43     3.04      4.81
8                2.0       2      P             YES     8       42,400    0.1860(j,n)    4.2        45.3     2.33     2.87      4.44
101-MB           0.5      3, 4    P             NO      3       14,250    0.2655         2.4        55.1     2.75     9.18     23.00
101-MA           1.5      3, 4    P             NO      3       14,455    1.5100         8.8       187.8    11.12   197.87   1768.40
106-MB           ?        3, 4    P             NO      3       46,330    0.8366         6.1       114.4     6.01    51.20    261.41
106-MA           ?        3, 4    P             NO      3        8,530    0.7442         5.3       105.1     5.42    40.94    190.52
107-MB           ?        3, 4    P             NO      3       17,040    0.0223         1.3        15.0     1.34     1.90      2.48
107-MA           ?        3, 4    P             NO      3        5,900    0.0366         1.4        19.3     1.45     2.28      3.20
115-MB           4.0      3       P             NO      2       40,200    0.1305         1.7        37.3     2.03    98.45    658.89
115-MA           6.0      3       P             NO      2        8,760    0.0006         1.0         2.4     1.05     1.36      1.54
118-MB           0.5      3       P             NO      2        2,370    0.9472         4.0       125.6     6.74   2.3E+05  3.9E+07
118-MA           2.5      3       P             NO      2        2,400    0.0661         1.4        26.1     1.65    26.20    101.36
119-MB           2.5      3       P             NO      2        7,940    0.2200         1.9        49.6     2.51   387.36   4572.50
119-MA           5.5      3       P             NO      2        5,240    0.0690         1.5        26.7     1.67    28.17    112.29
120-MB           1.5      3       P             NO      2        5,400    0.1067         1.6        33.6     1.90    63.51    354.47
120-MA           4.0      3       P             NO      2         ...     1.0976         4.4       141.3     7.79   6.0E+05  1.5E+08
206-MA           5.0      5       P             NO      2         ...     0.3592         3.3        65.7     3.24    13.18     38.37
DPMP COMBINED             17      (d.f.(p) = 22)                  ...     0.4630(k)      ...        76.7     3.79     4.10      7.35
SW_GYP           0.0      20      MA,P,MLW(r)   NO      2      100,900    1.1151        12.4       143.2     7.92    28.79    115.81
SW_GYP           3.0      20      MA,P,MLW(r)   NO      2        2,632    1.2042        14.3       152.8     8.59    32.84    139.50
SHELL COMBINED            17      (d.f.(p) = 4)                   ...     1.2374(k)      ...       156.4     8.85    21.93     78.82
2 w/Prod. A      ?         1      P             NO      2      104,900   10.1694        90.9     1.6E+04   518.19  4.0E+17  7.7E+24
-------
Table 4
Importance of Variability Estimates to the Operator (LaMotte 1987)

                                 ESTIMATED VARIABILITY     PROBABILITY OF NON-COMPLIANCE
                                 Variance    Std. Dev.     WITH 30,000 PPM PERMIT LIMIT
DATA                             (s^2)       (s)           (% of Samples)  (# of Samples)
EPA - Bailey & Eynon (1986)      0.1860      0.4313        17.4            1 out of 6
EPA - LaMotte (1987)             0.2490      0.4990        20.8            1 out of 5
DPMP (1986)                      0.463       0.6800        27.6            1 out of 4
Shell Offshore (1986)            1.2374      1.1124        35.8            1 out of 3

Assumptions:
  Permit limit of 30,000 ppm SPP
  Drilling fluid has a "true" LC50 of 45,000 ppm SPP
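Table 4's percentages follow from a lognormal measurement model: assume a single measured ln(LC50) is normally distributed about ln(45,000) with the tabulated standard deviation, and compute the probability that the measurement falls below the 30,000 ppm SPP limit. A hedged sketch (Python with SciPy; not the authors' code) that reproduces all four rows:

    from math import log
    from scipy.stats import norm

    LIMIT = 30_000      # permit limit, ppm SPP
    TRUE_LC50 = 45_000  # assumed "true" LC50, ppm SPP

    for label, sd in [("EPA - Bailey & Eynon (1986)", 0.4313),
                      ("EPA - LaMotte (1987)",        0.4990),
                      ("DPMP (1986)",                 0.6800),
                      ("Shell Offshore (1986)",       1.1124)]:
        p_fail = norm.cdf(log(LIMIT / TRUE_LC50) / sd)
        print(f"{label}: {100 * p_fail:.1f}% chance a single test fails")
    # Prints 17.4, 20.8, 27.6, and 35.8 -- the Table 4 percentages.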
-------
FIGURE 1
COMPARISON OF "95% CONFIDENCE MULTIPLIERS"
ESTIMATED FOR COMBINED INTRA- PLUS INTER-LABORATORY VARIABILITY

[Bar chart, not reproduced: 96-hour LC50 (ppm SPP) on a log scale from 100 to
1,000,000, showing the upper and lower 95% confidence limit bands(a) around a
mean LC50 of 33,000 ppm SPP for each estimate. The widest band spans roughly
1,500 to 723,700 ppm SPP.]

REFERENCE / DATA USED:
  Duke et al. (1984) / ERLGB
  Bailey & Eynon (1986)(b) / Labs 1-4, 6-8, ERLGB
  LaMotte (1987)(c) / Labs 1-4, 6-8, ERLGB
  LaMotte (1987)(c) / Labs 1-4, 6-8
  LaMotte (1987)(c) / DPMP(d)
  Shell Offshore (1986)

NOTES: (a) Confidence limits calculated by dividing or multiplying the mean LC50
           (33,000 ppm SPP) by the "95% Confidence Multiplier."
       (b) "95% Confidence Multiplier" calculated by Bailey & Eynon (1986) as:
           exp(1.96 x s).
       (c) "95% Confidence Multiplier" calculated by LaMotte (1987) as:
           exp[t(0.025,d.f.) x s].
       (d) Diesel Pill Monitoring Program (December 1986 plus Kit Number 206).
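As a worked instance of note (a), the Shell Offshore multiplier from Table 3 (method #2, CM = 21.93) gives

    \[
      U = 33{,}000 \times 21.93 \approx 723{,}700\ \text{ppm SPP},
      \qquad
      L = 33{,}000 / 21.93 \approx 1{,}500\ \text{ppm SPP},
    \]

which are the extreme band values printed on the figure.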
-------
KEY TO APPENDICES

SMPL_NO    Sample Number - Unique number to identify each observation.

REF_NO     Reference codes:
            1   Duke et al. (1984)
            2a  ERCO (1984a)
            2b  ERCO (1984b)
            2c  ERCO (1984c)
            3   ERCO (1984d)
            4a  ERCO (1985a)
            5b  ERCO (1985b)
            5c  ERCO (1985c)
            5d  ERCO (1985d)
            5e  ERCO (1985e)
            6   ERCO (1985f)
            7   Breteler et al. (1985)
            8a  OOC (1986) - conducted by ERCO
            8b  OOC (1987) - conducted by ERCO
            9   ERCO (1986b)
            10  Shell Offshore, Inc. (1986)
            11  Parrish and Duke (1986)
            12  Bailey and Eynon (1986)
            13  DPMP (Dec. 1986)
            14  Amoco (1986)

SOURCE     Source of Mud - Can be either LAB or FIELD.

GENMUDNO   Generic Mud Number:
            1 through 8 - Generic Muds #1 through #8
            SW_GYP      - Seawater Gyp Mud
            2/prod_a    - Product "A" tested in generic mud #2
            2/prod_b    - Product "B" tested in generic mud #2

MUDPREP    Mud Prepared by:
            1   Chromalloy
            2   IMCO Services
            3   Hughes
            4   Newpark Drilling Fluids
            5   NL Baroid
            6   Milchem
            7   Magcobar Dresser
            8   Dowell
            9   Jones/ERCO
            10  Weintritt Testing Labs
            11  Centec Analytical Services, Inc.

MUD_BATCH  "Batch" of drilling fluid.

DPMP_KIT   Diesel Pill Monitoring Program (DPMP) Kit Number.

PILL_NO    Number of Diesel Pills "spotted."
-------
TYPE       For DPMP samples only - Can be either MB (Mud Before) or
           MA (Mud After).

DATE_IN    Date Mud Prepared (Lab Mud) / Sampled (Field Mud).

DATE_TOX   Date Toxicity Test Initiated.

OILCONT    Volume Percent Mineral/Diesel Oil Added.

OILUSED    Type of Oil Added:
            NONE = None
            LSD  = Low Sulfur Diesel
            HSD  = High Sulfur Diesel
            OIL A through OIL E and MENTOR 28 are mineral oils.

TOXLAB     Toxicity Testing Laboratory:
            ABL     = Aquatic Bioassay Laboratories
            EHA     = Espey Huston & Associates
            ERLGB   = Environmental Research Lab, Gulf Breeze, FL
            ERLNARR = Environmental Research Lab, Narragansett, RI
            EPA_1 through EPA_10 = EPA-coded labs
            ESI     = Envirosystems, Inc.

TOXTEST    Test Protocol Followed: EPA or API.

TEST_NO    Number of Tests on Splits of Same Sample at Same Lab.

LC50CALC   Method Used to Calculate LC50:
            MOVAVG   = Moving Average
            PROBIT   = Probit
            BINOMIAL = Binomial
            MOD_L_W  = Modified Litchfield-Wilcoxon Procedure

CONTMORT   Control Mortality (%).

CONTCORR   Control Correction Used? (NO = not used; YES = used)

For the drilling fluid:
LC50_96    96-Hour LC50, ppm SPP
L95CI_96   Lower 95% Confidence Interval, ppm SPP
U95CI_96   Upper 95% Confidence Interval, ppm SPP

For the standard toxicant:
LC50_ST    96-Hour LC50, mg/L
L95CI_ST   Lower 95% Confidence Interval, mg/L
U95CI_ST   Upper 95% Confidence Interval, mg/L
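The fields above define, in effect, a record layout for the appendix data listings. As an illustration only, one way to carry such a record in code (field names follow the key; the type choices and the subset of fields shown are assumptions):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class BioassayRecord:
        smpl_no: int                     # SMPL_NO: unique observation number
        ref_no: str                      # REF_NO code, e.g. "2a" = ERCO (1984a)
        source: str                      # "LAB" or "FIELD"
        genmudno: str                    # "1".."8", "SW_GYP", "2/prod_a", "2/prod_b"
        oilcont: float                   # OILCONT: volume percent oil added
        oilused: str                     # "NONE", "LSD", "HSD", "OIL A".."OIL E", "MENTOR 28"
        toxlab: str                      # e.g. "ERLGB", "EPA_1".."EPA_10", "ESI"
        toxtest: str                     # protocol: "EPA" or "API"
        lc50calc: str                    # "MOVAVG", "PROBIT", "BINOMIAL", "MOD_L_W"
        contcorr: bool = False           # control correction used?
        lc50_96: Optional[float] = None  # 96-hour LC50, ppm SPP (drilling fluid)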
-------
APPENDICES

[Tabulated individual toxicity test observations, formatted per the Key to
Appendices above; the data listings are not legible in this copy.]
-------
VARIABILITY IN DRILLING FLUID
TOXICITY TESTS

J. E. O'REILLY
AND
L. R. LAMOTTE
-------
PREPARATION OF SUSPENDED
PARTICULATE PHASE (SPP)

• MIX WITH SEAWATER IN 1:9 RATIO
• ALLOW TO SETTLE 1 HOUR
• DECANT

[Diagram: 1 part mud + 9 parts seawater -> stir 5 minutes ->
settle 1 hour -> decant -> 100% SPP]
-------
• SPP IS DILUTED TO MAKE TEST CONCENTRATIONS
• TWENTY 3-6 DAY OLD MYSIDS PLACED IN EACH DISH
• TEST RUN FOR 96 HOURS

[Diagram: test dishes at 100%, 65%, 25%, and 15% SPP, plus a control]
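The protocol above reduces each test to a concentration-mortality series from which an LC50 is computed. As a minimal sketch of the idea only -- log-linear interpolation, NOT the moving-average, probit, binomial, or Litchfield-Wilcoxon procedures actually used in the studies cited here, and with hypothetical mortalities:

    from math import exp, log

    def lc50_interp(concs, mortalities):
        """concs: ascending %SPP test concentrations; mortalities: fraction dead."""
        pairs = list(zip(concs, mortalities))
        for (c_lo, m_lo), (c_hi, m_hi) in zip(pairs, pairs[1:]):
            if m_lo <= 0.5 <= m_hi:
                frac = (0.5 - m_lo) / (m_hi - m_lo)
                return exp(log(c_lo) + frac * (log(c_hi) - log(c_lo)))
        return None  # 50% mortality not bracketed by the tested range

    print(lc50_interp([15, 25, 65, 100], [0.10, 0.35, 0.80, 1.00]))
    # ~34.4 %SPP; multiply by 10,000 for ppm SPP (100% SPP = 1,000,000 ppm)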
-------
COMPARISON OF VARIABILITY ESTIMATES

[Bar chart, not reproduced: 96-hour LC50 (ppm SPP) on a log scale from 100 to
1,000,000; the EPA-1 estimate is plotted with a mean LC50 of 35,400 ppm SPP
against the 30,000 ppm SPP limit.]
-------
COMPARISON OF VARIABILITY ESTIMATES

[Same chart with PASS/FAIL regions marked at the 30,000 ppm SPP limit:
mean LC50 35,400 (PASS) vs. the 30,000 limit; EPA-1.]
-------
COMPARISON OF VARIABILITY ESTIMATES

[Same chart adding the EPA-2 (8 LABS, MLE) band, whose lower limit of
14,200 ppm SPP falls in the FAIL region below 30,000.]
-------
COMPARISON OF VARIABILITY ESTIMATES

[Same chart adding the EPA-2 (7 LABS, ANOVA, SMALL # SAMPLES) band of
9,700 to 111,900 ppm SPP, alongside the EPA-2 (8 LABS, MLE) band of
14,200 to 76,900 ppm SPP.]
-------
COMPARISON OF VARIABILITY ESTIMATES

[Same chart adding the DPMP band, with an upper limit of 135,300 ppm SPP.]
-------
CHANCE OF FAILING DUE TO "VARIABILITY"?

ASSUME MUD HAS "TRUE" LC50 OF 45,000 PPM SPP

            VARIABILITY ESTIMATE    CHANCE OF FAILURE
EPA                 2.33                 1 IN 6
DPMP                4.10                 1 IN 4
OPERATOR           21.93                 1 IN 3
-------
MR. TELLIARD: Thank you
all for staying. I'd like to thank Jan...where's Jan?
This lady was the one who put this all together, Miss
Sears. Jan, thank you.
I'd like to thank all the speakers for their
time and effort and your time and effort in coming.
I'd like to thank the County Court Reporters and
Pickers Restaurant for supplying the team. I'd like
to thank you for ten years of a lot of laughs and a
lot of fun, and I hope some good science. God willing,
maybe we'll see you all next year. Thank you very
much. It's been a great week.
-------
Dr. Michael Aaronson
Assistant Professor
Colorado State University
Dept. of Environmental Health
Fort Collins, CO 80523
(303)491-5776
Richard Albert
GC/MS Manager
ETC Corporation
284 Raritan Cntr. Parkway
Edison, NJ 08837
(201)225-6782
Raymond W. Alden III
Applied Marine Research Laboratory
Old Dominion University
Norfolk, VA 23508
(804)440-4195
James H. Alexander
Norfolk Naval Shipyard
Code 1342
Portsmouth, VA 23709
John J. Austin
Chemist
USEPA-Region III
839 Bestgate Road
Annapolis, MD 21401
(301)224-2740
James W. Bailey
Development Support
Ciba-Geigy Corporation
P.O. Box 113
McIntosh, AL 36553
Charlie Banks
Virginia State Water Control Board
2107 N. Hamilton Street
Richmond, VA 23228
(804)257-6694
Robert Beimer
Manager, Chemistry Program
S-Cubed -
3398 Carmel Mountain Road
San Diego, CA 92121
(619)453-0060
-------
Elizabeth Betz
Supervisory Chemist
Env Chem & Microbiology Sctn, NREAD
Camp Lejeune MC Base, PO Box 475
Sneads Ferry, NC 28460
(919)451-5977
Theodore J. Bohlk
Senior Environmental Chemist
County of Westchester
Dept of Labs & Res, Hammond House Road
Valhalla, NY 10595
(914)524-5588
Larry I. Bone
DOW Chemical
14002 Woodland Ridge
Baton Rouge, LA 70816
(504)292-6591
Robert L. Booth
Lab Dir, Envir Monitoring & Spprt Lab
USEPA
26 W. St. Clair Street
Cincinnati, OH 45268
(513)569-7301
Paul Bradford
Laboratory Manager
Law Environmental
112 Town Park Drive
Kennesaw, GA 30144-5599
(404)421-3400
Dr. Joel C. Bradley
President
Cambridge Isotope Laboratories, Inc.
20 Commerce Way
Woburn, MA 01880
(617)938-0067
Parry Bragg
Chemist
James R. Reed & Associates, Inc.
813 Forrest Drive
Newport News, VA 23606
(804)599-6750
-------
Mamie S. Brouwer
Program Manager
Clayton Environmental Consultants, Inc.
1252 Quarry Lane
Pleasanton, CA 94566
(415)426-2645
John Brown
Marine Chemistry Researcher
Battelle NEMRL, Ocean Science Center
397 Washington St., Box AH
Duxbury, MA 02332
(617)934-5682
Jim Buchner
Vice President, Applications & Development
Extrel Corporation
240 Alpha Drive
Pittsburgh, PA 15238
(412)963-7530
Ray V. Buhl
Senior Research Chemist
EDI Engineering & Science
611 Cascade W. Parkway, SE
Grand Rapids, MI 49506
(616)942-9600
Garrett Burch
Laboratory Supervisor
Smith Kline Chemical Co.
900 River Road
Conshohocken, PA 19428
(215)270-7033
Eugene A. Burns
VP/Manager Chemistry Group
S-Cubed
PO Box 1620
LaJolla, CA 92038
(619)453-0060
Mary R. Cannon
Conference Coordinator
CENTEC Corporation
11260 Roger Bacon Drive
Reston, VA 22090-5281
(703)471-6300
-------
Donald Casteel
Physical Science Technician
Pine Bluff Arsenal
Attn: SMCPB-PCT
Pine Bluff, AR 71602-9500
(501)543-3072
James Chang, Ph.D.
Chemist
Galson Technical Services
6601 Kirkville
East Syracuse, NY 13057
(315)432-0506
Dale Chappelow
Technical Supervisor
Litton-Core Lab
8210 Mosley Road
Houston, TX 77075
Robert R. Claeys
James River Corporation
904 NW Drake Street
Camas, WA 98607
(206)834-8317
David Clemens
Chemist
PA Dept. of Environmental Resources
Third and Riley Streets - PO Box 1467
Harrisburg, PA 17102
(717)787-4669
Bruce N. Colby
President
Pacific Analytical, Inc.
1989 B Palomar Oaks Way
Carlsbad, CA 92009
(619)931-1766
Richard Cole
Analytical Chemist
State of Virginia, DCLS
1 North 14th Street
Richmond, VA 23219
(804)786-8312
-------
Robbie Comer
Program Manager
Technical Resources, Inc.
3202 Monroe Street
Rockville, MD 20852
(301)231-5250
Kathryn Conko
Applied Marine Research Laboratory
Old Dominion University
Norfolk, VA 23508
(804)440-4195
Patrick A. Conlon
Laboratory Director
Laboratory Resources, Inc.
363 Old Hook Road
Westwood, NJ 07675
(201)666-6644
Michael D. Crouch
President
ETC/Toxicon
3213 Monterrey Blvd.
Baton Rouge, LA 70814
(504)925-5012
Kathryn Crouch
Organic Chemistry Supervisor
Environmental Analysis, Inc.
3278 N. Hwy 67
Florissant, MO 63033
(314)921-4488
Johanna Culver
Chemist
Norfolk Naval Shipyard
Code 1342
Portsmouth, VA 23709
(804)396-3779
Maria Da Rocha
Manager, Analytical Services
Sun Chemical Corporation
441 Tompkins Avenue
New York, NY 10301
(718)981-1600 ext. 215
-------
Michael Daggett
Lab Chief
USEPA, Region VI
5508 Hornwood
Houston, TX 77074
(713)954-6766
David A. Danner
Organic Lab Supervisor
NUS Corporation
5350 Campbells Run Road
Pittsburgh, PA 15205
(412)788-1080
Seyed Dastgheyb
Manager
United States Testing Corporation
1415 Park Avenue
Hoboken, NJ 07030
(201)792-2400
Susan deNagy
USEPA - ITD
401 M Street, SW, (WH-552)
Washington, DC 20460
(202)382-7141
Alfred J. Deorae
Manager, GC-MS
Alliance Technologies Corporation
213 Burlington Road
Bedford, MA 01730
(617)275-9000
Robert DiRienzo
Organic Section Supervisor
Brown & Caldwell laboratories
1255 Powell Street
Emeryville, CA 94608
(415)428-2300
James Dunaway
Chemist
Bionetics Corporation
20A Research Drive
Hampton, VA 23666
(804)865-0880
-------
Jane Dunn
Manager
United States Testing Corporation
1415 Park Avenue
Hoboken, NJ 07030
(201)792-2400
Dr. Rolla Dyer
University of Southern Indiana
8600 University Boulevard
Evansville, IN 47712
(812)464-1701
Bob Edmondson
EPA/NEIC
Bldg. 53, Box 25227
DFC Denver, CO 80225
(303)236-5132
Anna Emery
Graduate Student
American University
4400 Massachusetts Ave
Washington, DC 20016
(202)885-1775
Janet S. Emry
Dept. of Geological Sciences
Old Dominion University
1034 W. 45th Street
Norfolk, VA 23508
(804)440-4301
Scott R. Emry
Old Dominion University
1034 W. 45th Street
Norfolk, VA 23508
(804)440-4195
Roger K. Everton
Applied Marine Research Laboratory
Old Dominion University
Norfolk, VA 23508
(804)440-4195
-------
R. Michael Ewing
Applied Marine Research Laboratory
Old Dominion University
Norfolk, VA 23508
(804)440-4195
Barrett P. Eynon
Statistician
SRI International
333 Ravenswood Avenue
Menlo Park, CA 94025
(415)859-5239
Denis Foerst
Hewlett Packard
1601 California Avenue
Palo Alto, CA 94304
Jim Forbes
Laboratory Director
Law Environmental
112 Town Park Drive
Kennesaw, GA 30144-5599
(404)421-3310
Russell D. Foster, Jr.
Technical Director
Resource Analysts, Inc.
P.O. Box 778
Hampton, NH 03842
(603)926-7777
Peter Fowlie
Waste Water Technology Center
PO Box 5050
Burlington, Ontario Canada L7R486
(416)336-4633
Andrew Francis
Hampton Roads Sanitation District
P.O. Box 5000
Virginia Beach, VA 23455
(804)874-1287
Robert E. Fuchs
Vice President
Environmental Consultants, Inc.
391 Newman Avenue
Clarksville, IN 47130
(812)282-8481
-------
Robert C. Gardner
Project Leader
Dow Chemical, Agricultural Products Dept
P.O. Box 1706
Midland, MI 48640
Harry L. Gearhart
Senior Research Scientist
Conoco, Inc.
PO Box 1267
Ponca City, OK 74074
(405)767-5461
Elwood Gibbs
Chemist
Norfolk Naval Shipyard
Bldg. 184, Code 1342
Portsmouth, VA 23709-5000
(804)396-4502
Myra Gordon
MSD Isotopes
612-1209 Richmond Street
London, Ontario Canada N6A 3L7
Frederick Grabau
Quality Engineer
McDonnell Douglas Electronics Co,
2600 N. 3rd St.
St. Charles, MO 63302
(314)925-6409
Thomas E. Gran
Mgr., Analytical Lab
O. H. Materials
P. O. Box 551
Findlay, OH 45839
(419)424-4925
Dr. David B. Greenburg
Chemical & Nuclear Engineering Dept.
University of Cincinnati
Cincinnati, OH 45221
(513)475-2714
-------
John Gresbach
Virginia State Water Control Board
2107 N. Hamilton Street
Richmond, VA 23228
(804)257-0383
J. G. Grimes
Laboratory Supervisor
Newport News Shipbuilding
4101 Washington Avenue
Newport News, VA 23607
(804)380-7744
Anthony Haga
Environmental Engineer
Ford Motor Company
15201 Century Drive, Suite 608
Dearborn, MI 48120
(313)845-1649
Guy J. Hall
Applied Marine Research Laboratory
Old Dominion University
Norfolk, VA 23508
(804)440-4195
Sam Hamner
Organics Manager
Versar
9200 Rumsey
Columbia, MD 21405
(301)964-9200
Judith C. Harris
Arthur D. Little, Inc.
Acorn Park
Cambridge, MA 02174
(617)864-5770 EXT.2311
Don Harvan
Triangle Laboratories, Inc.
4915 F Prospectus Drive
Durham, NC 27713
(919)544-5729
-------
David Haske
Analytical Chemist
State of Virginia, DCLS
1 North 14th Street
Richmond, VA 23219
(804)786-8312
George Havalias
Chemist
State of Missouri, DNR
P. O. Box 176, 2010 Missouri Blvd.
Jefferson City, MO 65702
(314)751-7930
Ken Hayes
Lab Director
Aqua Survey, Inc.
P. O. Box 46
Rosemont, NJ 08556
(609)397-0666
C. Lee Helms
Staff Scientist
S-Cubed
3398 Carmel Mountain Road
San Diego, CA 92121
(619)453-0060
Mike Henikeal
Waste Water Chemist
City of Columbus
900 Dublin Road
Columbus, OH 43215
(614)222-7016
John R. Heuser, PhD
Analytical Chemist
National Food Processors Association
1401 New York Ave, NW
Washington, DC 20005
(202)639-5971
Joe Hnatow
Organics Manager
Thermo Analytical Inc.
117 N. First St.
Ann Arbor, MI 48104
(313)662-3104
-------
Ben Honaker
USEPA - ITD
401 M Street, SW, (WH-552)
Washington, DC 20460
(202)382-7193
Sara Hopper
The Mitre Corporation
7525 Colshire Drive
McLean, VA 22102
(703)883-7810
Tracy Hunter
Analytical Chemist
State of Virginia, DCLS
1 North 14th Street
Richmond, VA 23219
(804)786-8312
John Huntington
Consultant
9679 Yukon Ct.
Broomfield, CO 80020
(303)422-7231
Nang Huynh
National Laboratories, Inc.
3210 Claremont Avenue
Evansville, IN 47712
(812)422-4119
Maria Margarta Irizary
Spec. Coordinator
PRASA
604 Barbosa Avenue
San Juan, PR 00916
(809)758-6725
Peter Isaacson
Chemist
Viar and Company
300 N. Lee St., Suite 200
Alexandria, VA 22314
(703)683-0885
-------
Richard A. Javick
Research Associate
FMC Corporation
Box 8
Princeton, NJ 08543
(609)520-3639
Maurice Jones
Vice President
ENSECO Houston
2400 West Loop South, Suite 300
Houston, TX 77019
(713)960-9411
Yvonne Jones
Ministry of Environment
125 Resources Rd.
Rexdale, Ontario, Canada M9W 5L1
(416)235-5760
Lin Kempe
Chemist
Reed & Associates
813 Forrest Drive
Newport News, VA 23606
(804)599-6750
Mary Khalil
Inst. Chemist III
Metropolitan Sanitary Dist. of Chicago
550 South Meacham
Schaumburg, IL 60193
(312)529-7700
Dr. Mohan Khare
Technical Director
E.A. Engineering, Science & Tech,
15 Loveton Circle
Sparks, MD 21152
Peggy Knight
Analytical Chemist
Weyerhaeuser
WTC 2P2S
Tacoma, WA 98477
(206)924-6002
-------
John Koehn
Chemist
Shell Development
P.O. Box 1380
Houston, TX 77251-1380
(713)493-7651
Herman J. Kresse, Jr.
Director of Laboratory
MBA Labs
P.O. Box 9461
Houston, TX 77261
(713)928-2701
Jana L. Krottinger
Group Supervisor
Conoco, Inc.
P.O. Box 1267
Ponca City, OK 74603
(405)767-2954
Suzanne Kupiec
Laboratory Chemist
Enviresponse, Inc.
GSA Raritan Depot, Bldg. 209, Bay F
Edison, NJ 08837
(201)548-9660
Norman J. Labhart
Administrator
Clark County Health Department
P.O. Box 69, 1220 Missouri Avenue
Jeffersonville, IN 47130
(812)282-7521
Dottie Lane
Western Research Institute
Box 3395
Laramie, WY 82071
(307)721-2267
Kenneth Lang
Chief Analytical Branch
US Army, Toxic & Haz Materials Agency
ATTN: AMXTH-TE-A
Aberdeen Proving Ground, MD 21010-5401
(301)671-3133
-------
Robert E. Lea
Entek Laboratories
12th & Marshall, Room 281
Little Rock, AR 72201
(501)375-0249
H. Nathan Levy, III
President
A&E Testing, Inc.
1717 Seabord, Suite 103
Baton Rouge, LA 70810
(504)769-1930
James W. Lewis
Laboratory Manager
Bionetics Corporation
20A Research Drive
Hampton, VA 23666
(804)865-0880
Harris A. Lichtenstein, Ph.D
Vice President/General Manager
Spectrix Corporation
3911 Fondren, Suite 100
Houston, TX 77063-5821
(713)266-6800
Lois Lin
Environmental Engineer
duPont de Nemours & Co., Inc.
P.O. Box 6090
Newark, DE 19714-6090
(302)366-4649
Dr. Paul Loconto
R & D Manager
Nanco Labs
RD #6 Robinson Lane
Wappingers Falls, NY 12533
(914)221-2485
Pat Logon
Biologist
Froehling & Robertson
3015 Dumbarton Road
Richmond, VA 23228
(804)264-2701
-------
Mia T. Lombardi
Environmental Chemist
Standard Oil Company
4440 Warrensville Ctr. Rd.
Cleveland, OH 44128
(216)581-5931
Dr. R. J. Madden
Extrel Corporation
240 Alpha Drive
Pittsburgh, PA 15238
(412)963-7530
Dean Marbury
Northrop Services, Inc.
2 Triangle Drive
Research Triangle Park, NC 27709
(919)549-0611
Mark Marcus
Director, Analytical Programs
Chemical Waste Management, Inc,
150 West 137th Street
Riverdale, IL 60627
(312)841-0360
Michael Markelov
Sohio Research
4440 Warrensville Center Road
Warrensville Heights, OH 44128
(216)581-5780
Paul Marsden
S-Cubed
P. 0. Box 1620
LaJolla, CA 92038
(619)453-0060
Michael Martin
Analytical Chemist
State of Virginia, DCLS
1 North 14th Street
Richmond, VA 23219
(804)786-8312
-------
Harry B. McCarty
Quality Assurance Chemist
Viar and Company
300 North Lee Street, Suite 200
Alexandria, VA 22314
(703)683-0885
J. M. McGuire
USEPA, Athens ERL
College Station Road
Athens, GA 30613
(404)546-3185
Randy McKinna
Analytical Chemist
Memphis Environmental Center
2603 Corporate Avenue, Suite 100
Memphis, TN 38132
(901)345-1788
Richard (Dick) E. Means
Senior Scientist
Northrop Services, Inc.
2 Triangle Drive
Research Triangle Park, NC 27709
(919)541-5387
Kathy Meyers-Schulte
Research Specialist
Computer Sciences Corporation
4045 Hancock Street
San Diego, CA 92110
(619)225-8401
Dr. Deborah S. Miller
Union Carbide Corp.
P.O. Box 8361, B770-318
S. Charleston, WV 25314
(304)747-4463
Dr. Herbert C. Miller
Director
Southern Research Institute
2000 Ninth Avenue South
Birmingham, AL 35255
(205)323-6592
-------
Michael W. Miller
Group Leader
Enviresponse, Inc.
GSA Raritan Depot
Edison, NJ 08837
(201)906-6843
Raymond F. Mindrup, Jr.
Supelco, Inc.
Supelco Park
Bellefonte, PA 16823-0048
(814)359-3441
Richard Montgomery
Associate Scientist
Technical Resource, Inc.
Sabine Island
Gulf Breeze, FL 32561
(904)932-5311
Sandra L. Mort
Laboratory Director
Pollution Control Systems Inc.
County Road 550 South, Box 17
Laotto, IN 46763
(219)637-3137
Sandra A. Moser
Vice President
County Court Reporters, Inc.
30 South Cameron Street
Winchester, VA 22601
(703)667-0600
Neil H. Mosesman
Supelco, Inc.
Supelco Park
Bellefonte, PA 16823-0048
(814)359-3441
C.M. Mueller
Cosa Instrument Corporation
70 Oak Street
Norwood, NJ 07648
-------
R. Lee Myers
Senior Vice President
CompuChem Laboratories, Inc.
P. O. Box 12652
Research Triangle Park, NC 27709
(919)248-6405
Gordon M. Nelson
Chemist
Norfolk Naval Shipyard
1432 Watercrest Place
Virginia Beach, VA 23464
(804)396-3373
Constantinos Nicolaou
Research Scientist
Sun Chemical Corporation
441 Tompkins Avenue
Staten Island, NY 10305
(718)981-1600 ext. 264
Chantha Nouth
Quality Assurance Supervisor
West-Paine Laboratories, Inc.
7979 GSRI Avenue
Baton Rouge, LA 70820
(504)769-4900
Jim O'Reilly
Exxon Production and Research Company
P. O. Box 2189
Houston, TX 77252-2189
(713)965-4367
R.H. Ode
Sect. Mgr Analytical/Enviro Research
Mobay Corporation
Route 2 North
New Martinsville, WV 26155
(304)455-4400 EXT. 2690
George Odell
Vice President, Production
NANCO Labs Inc.
RD #6, Robinson Lane
Wappingers Falls, NY 12533
(914)221-2485
-------
Brett Organ
Chemist
Eagle-Picher Research Laboratory
200 9th Avenue, NE
Miami, OK 74354
(918)542-1801
Steven M. Ortel
Senior Substation Test Chemist
Potomac Electric Power Company
500 Kenilworth Ave, NE
Washington, DC 20019
(202)388-2551
Mohan Palat
Chemclear
992 Old Eagle School Road
Wayne, PA 19087
(215)687-8990
Susan C. Paul
Research Chemist
BASF Corporation
1419 Biddle Avenue
Wyandotte, MI 48192
(313)246-6588
Laurence E. Penfold
Operations Mgr. - Inorganics
Thermo Analytical/Norcal
2030 Wright Avenue
Richmond, CA 94804
(415)235-2633
Bruce Petersen
President
ENSECO/CLE
2240 Dabney Road
Richmond, VA 23230
(804)359-1900
Michael P. Phillips
Director GC/MS Dept.
ENSECO - Rocky Mountain Analytical Lab
4955 Yarrow Street
Arvada, CO 80002
(303)421-6611
-------
Sharon L. Picksel
Assistant Chemist
Tidewater Coal Inspection Bureau, Inc.
1200 Boissevain Avenue
Norfolk, VA 23507
(804)627-0400
Lewis Pillis
Centec Analytical Services
P.O. Box 956
Salem, VA 24153
(703)387-3995
Ray Plunkett
Analytical Chemist
State of Virginia, DCLS
1 North 14th Street
Richmond, VA 23219
(804)786-8312
James Poppiti
Finnigan MAT
355 River Oaks Parkway
San Jose, CA 95134
(408)433-4800
Richard Posner
Vice President
United States Testing Corporation
1415 Park Avenue
Hoboken, NJ 07030
(201)792-2400
William B. Prescott
Consultant
724 Hawthorne Avenue
Bound Brook, NJ 08805
(201)469-1198
Andrew Procko
Chemist
CENTEC Corporation
11260 Roger Bacon Drive
Reston, VA 22090
(703)471-6300
-------
Steve Rayburn
Lab Manager
Guilford Laboratories
2748 Patterson Avenue
Greensboro, NC 27407
(919)854-2320
Dr. Ruth Reck
Asst. Manager, Painting Technology
General Motors Research Lab
Warren, MI 48090
(313)986-2087
Shekar Reddy
Senior Chemist
Advanced Chemistry Labs, Inc,
P.O. Box 88610
Atlanta, GA 30356
(404)455-1266
Donald Reihard
Lederle Labs
North Middleton Rd.
Pearl River, NY 10965
(914)735-5000
Dryden Reno
Organic Chemist
Froehling & Robertson
3015 Dumbarton Road
Richmond, VA 23228
(804)264-2701
Dennis Revell
USEPA Region 4 - ESD
College Station Road
Athens, GA 30613
(404)546-3387
Gary Robertson
Lockheed Corporation
1050 E. Flamingo Road, Ste. 120
Las Vegas, NV 89109
(702)734-3326
-------
Sandra Rocafort
Laboratory Department
PRASA
604 Barbosa Avenue
San Juan, PR 00916
(809)758-6725
Steve Rock
Sample Control Center Analyst
Viar and Company
300 North Lee Street, Suite 200
Alexandria, VA 22314
(703)683-0885
Dr. Peter Rogerson
Analytical Chemist, NWQL
U.S. Geological Survey
5293 B Ward Rd.
Arvada, CO 80002
(303)236-5345
Dr. K. Rollins
Applications Manager
VG Masslab Ltd.
3, Tudor Road, Altrincham
Cheshire, England WA14 5RZ
44-061-941-3552
Richard A, Rozene
Laboratory Manager
E. C. Jordan Co. Environmental Labs
261 Commercial Street, P.O. Box 7050
Portland, ME 04112
(207)775-5401
Anna M. Rule
Chief Chemist
Hampton Roads Sanitation District
1436 Air Rail Avenue
Virginia Beach, VA 23455
(804)460-2261
Dr. Joseph H. Rule
Associate Professor
Old Dominion University
Geological Sciences Department
Norfolk, VA 23508
-------
Dale R. Rushneck
Interface, Inc.
P.O. Box 297
Ft. Collins, CO 80522-0297
(303)223-2013
A. D. Sauter
Andrew D. Sauter Consulting
2356 Aqua Vista
Henderson, NV 89015
(702)454-7884
C. G. Sawyer
Environmental Engineer
Newport News Shipbuilding
4101 Washington Avenue
Newport News, VA 23607
Robert B. Schaffer
Vice President
CENTEC Corporation
11260 Roger Bacon Drive
Reston, VA 22090-5281
(703)471-6300
Allen Schinsky
Environmental Chemist
Clayton Environmental
22345 Roethel Drive
Novi, MI 48050
(313)344-1770
John M. Schreiber
JTC Environmental Consultants
4 Research Pl., Suite L-10
Rockville, MD 20850
(301)921-9790
Janice L. Sears
Project Administrator
CENTEC Corporation
11260 Roger Bacon Drive
Reston, VA 22090-5281
(703)471-6300
-------
Dr. James Seeley
Chief, Nat. Water Quality Lab
U.S. Geological Survey
5293 B Ward Rd.
Arvada, CO 80002
(303)236-5345
Tom Shalala
Environmental Ground Water Institute
1713 Dakota
Norman, OK 73069
(405)321-3155
Judith G. Shaw
Mgr., Env. Sci. & Tech.
American Petroleum Institute
1220 L Street, NW
Washington, DC 20005
(202)682-8322
Michael Sheely
US Army, Environmental Hygiene Agency
ATTN: OECD (Dr. Sneeringer), HWOOD Area
Aberdeen Proving Ground, MD 21010
(301)671-3739
Kenneth A. Simon
President
Envirosystems, Inc.
P. O. Box 778, 1 Lafayette Road
Hampton, NH 03842
(603)926-3345
Nan Simon
Chemist
Occidental Chemical
2801 Long Road
Grand Island, NY 14072
(716)773-8655
Margaret S. Sleevi
Operations Manager
Enseco/CLE Laboratories
2240 Dabney Road
Richmond, VA 23230
(804)359-1900
-------
James S. Smith Ph.D
Chemistry Manager
Walter B. Satterthwaite Associates
720 N. Five Points Road
West Chester, PA 19380
(215)692-5770
Spencer Smith
Development Support
Ciba-Geigy Corporation
P.O. Box 113
McIntosh, AL 36553
Paul Sneeringer
US Army, Environmental Hygiene Agency
ATTN: OECD (Dr. Sneeringer), HWOOD Area
Aberdeen Proving Ground, MD 21010
(301)671-3739
Timothy A. Snow
Research Chemist
Conoco, R&D
R&D East, PO Box 1267
Ponca City, OK 74603
(405)767-5563
Steven W. Sokolowski
Applied Marine Research Laboratory
Old Dominion University
Norfolk, VA 23508
(804)440-4195
C. G. Spear
Laboratory Supervisor
Newport News Shipbuilding
4101 Washington Avenue
Newport News, VA 23607
(804)380-2418
David N. Speis
Manager, Chromatography
ETC Corp.
PO Box 7808, 284 Raritan Pkwy
Edison, NJ 08818-7808
(201)225-6759
-------
David E. Splichal
Chemist
Wilson Laboratories
525 North 8th Street, P.O. Box 1884
Salina, KS 67401
(913)825-7186
George H. Stanko, Jr.
Staff Res. Chemist
Shell Development Co.
P.O. Box 1380
Houston, TX 77251-1380
(713)493-7702
David H. Stewart
Vice President
Viar and Company, Inc.
300 North Lee Street, Suite 200
Alexandria, VA 22314
(703)683-0885
Dr. Emilio Sturino
Hazleton Laboratories America, Inc.
3301 Kinsman Blvd.
Madison, WI 53704
(608)241-4471
Judith H. Suzurikawa
Chemist
Cincinnati Metro Sewer District
1600 Gest Street
Cincinnati, OH 45204
(513)352-4821
Michael Szelewski
Support Engineer
Hewlett Packard
Rt. 41, Starr Road
Avondale, PA 19311
(215)268-2281
John Takle
Associate Engineer
General Motors - Envir. Activities Staff
30400 Mound Rd.
Warren, MI 48090-9015
(313)947-1866
-------
Alexandra G. Tarnay
Environmental Engineer
USEPA - OWRS/MDSD
401 M Street, SW
Washington, DC 20460
(202)382-7036
Hait Tawari
Biologist
Froehling & Robertson
3015 Dumbarton Road
Richmond, VA 23228
(804)264-2701
John H. Taylor, Jr.
Vice President/General Manager
Analytical Technologies, Inc.
5550 Morehouse Drive
San Diego, CA 92121
(619)458-9141
Paul A. Taylor
Executive Vice President
Enseco, Inc.
2544 Industrial Blvd.
W. Sacramento, CA 95691
(916)371-9017
William A. Telliard
Chief, Energy & Mining Branch
USEPA - ITD
401 M Street, SW, (WH-552)
Washington, DC 20460
(202)382-7131
Linda K. Tersegno
Chemist
Eastman Kodak
Building 34 Kodak Park
Rochester, NY 14650
(716)722-3355
Francis Thomas
Chemist
USEPA
536 S. Clark
Chicago, IL 60604
(312)886-5482
-------
F. H. Thorn
Sr. Laboratory Technician
Newport News Shipbuilding
4101 Washington Avenue
Newport News, VA 23607
(804)688-4181
C. Erinn Tisdale
Graduate Student
Old Dominion University
812 S. Buckingham Court
Virginia Beach, VA 23462
(804)490-0358
Samuel To
Quality Assurance Officer
USEPA - Washington, DC
401 M Street, SW, (EN-338)
Washington, DC 20460
(202)475-8322
David F. Tompkins
Director of Analytical Services
CENTEC Analytical Services
2160 Industrial Drive
Salem, VA 24153
(703)387-3995
Allan M. Tordini
Century Laboratories, Inc.
1501 Grandview Avenue
Thorofare, NJ 08086
(609)848-3939
Donald P. Trees
Sample Control Ctr. Manager
Viar and Company
300 N. Lee St., Suite 200
Alexandria, VA 22314
(703)683-0885
Dr. S.N. Tsoukalas
President
Advanced Chemistry Labs, Inc,
P.O. Box 88610
Atlanta, GA 30356
(404)455-1266
-------
Michael Tuday
Supervisor, GC/MS Laboratory
EMSI
4765 Calle Quetzal
Camarillo, CA 93010
(805)388-5700
Scott Underwood
Analytical Chemist
State of Virginia, DCLS
1 North 14th Street
Richmond, VA 23219
(804)786-8312
Marge Ventura
Environmental Engineer
duPont deNemours & Co., Inc.
P.O. Box 6090
Newark, DE 19714-6090
(302)366-4649
Allen Vergakis
Chemist
Norfolk Naval Shipyard
QAO Code 1342
Portsmouth, VA 23709
(804)396-3799
Joseph Viar, Jr.
President
Viar and Company
300 N. Lee St., Suite 200
Alexandria, VA 22314
(703)683-0885
Rock J. Vitale
QA/QC Manager
ERM
999 West Chester Pike
West Chester, PA 19382
(215)692-8606
Robert B. Walker
Lancaster Laboratories, Inc,
2425 New Holland Pike
Lancaster, PA 17601
(717)656-2301
-------
Robert H. Walker
President
Corpex, Inc.
3830 High Court
Wheat Ridge, CO 80033
(303)421-7776
Tonie M. Wallace, R.P.R.
President
County Court Reporters, Inc.
30 South Cameron Street
Winchester, VA 22601
(703)667-0600
Bruce K. Wallin, Ph.D.
Technical Director
E.C. Jordan Co.
261 Commercial St., P.O. Box 7050
Portland, ME 04112
(207)775-5401
Kelly West
Associate Scientist
Versar Environmental Systems
9200 Rumsey Road
Columbia, MD 21045
(301)964-9200
Dr. Charlie Westerman
Vice President
ETC/Toxicon
3213 Monterrey Blvd.
Baton Rouge, LA 70814
(504)925-5012
Brodie Whitehead
Analytical Chemist
State of Virginia, DCLS
1 North 14th Street
Richmond, VA 23219
(804)786-8312
Stuart A. Whitlock
Associate Vice President
ESE, Inc.
P.O. Box ESE
Gainesville, FL 32605
(904)332-3318
-------
Bob Wichser
Manager Chemical Services
Froehling & Robertson
3015 Dumbarton Road
Richmond, VA 23228
(804)264-2701
Bruce E. Wilkes
President
Environmental Analytical Consulting, Inc
5176 Crystal Drive
Cross Lanes, WV 25313-1940
(304)776-2730
Idelis Z. Williams
QA/QC Manager
Spectrix
3911 Fondren, Suite 100
Houston, TX 77063-5821
(713)266-6800
Hugh Wise
Environmental Scientist
USEPA-ITD
401 M. St., SW, (WH-552)
Washington, DC 20460
(202)382-7177
Caryn Wojtowicz
Organics Supervisor
Ecology & Environment, Inc.
P.O. Box D, 195 Holtz Drive
Cheektowaga, NY 14225
(716)631-0360
Mark W. Wood
Associate Research Chemist
PPG Industries, Inc.
P. 0. Box 1000
Lake Charles, LA 70605
(318)491-4450
Dr. Donald C. Wright
Lab. Mgr., GC/MS
Langston Laboratories, Inc.
2005 West 103rd Terrace
Leawood, KS 66206
(913)341-7800
-------
David Young
Sr. Drilling Fluid Specialist
Standard Oil Co.
8404 Ester
Irving, TX 75063
(214)929-2905
Cy Zaneski
940 Gates Avenue, Apt.C-5
Norfolk, VA 23517
-------