United States Environmental Protection Agency
Office of Water Regulations and Standards
Industrial Technology Division
EIGHTH ANNUAL ANALYTICAL SYMPOSIUM
Norfolk, Virginia
April 3-4, 1985
-------
-------
FOREWORD
The Industrial Technology Division of the USEPA Office of Water
Regulations and Standards sponsors the Annual Analytical Symposium to
provide a forum where scientists and other interested parties can present new
ideas and advances in methodology for the analysis of Priority Pollutants. The
topics addressed during the Eighth Annual Analytical Symposium have deviated
from strict analytical water chemistry; we have considered the role that
quality assured biomonitoring techniques will play in compliance monitoring,
as well as advances in high performance liquid chromatography/mass spec-
trometry analysis of environmental samples. In concert with the Division's
responsibility to promulgate industrial effluent regulations, we feel that it is
in the common interest to promote new technologies and ideas that will serve
as analytical tools and assist us in this increasingly complex task.
W. A. Telliard
-------
-------
EIGHTH ANNUAL ANALYTICAL SYMPOSIUM
Office of Water Regulations and Standards
Industrial Technology Division
April 3-4, 1985
Norfolk, Virginia
INDEX
April 3, 1985
Presentation/Speaker                                                    Page

WELCOME AND INTRODUCTION ................................................. 1
William A. Telliard, Chief
Energy and Mining Branch
and
Devereaux Barnes, Deputy Director
Industrial Technology Division

WATER QUALITY BASED TOXICS CONTROL ........
Stephen L. Bugbee
USEPA Office of Water Enforcement and Permits

TOXICITY TESTS AND BEST AVAILABLE TECHNOLOGY (BAT)
DETERMINATIONS FOR DISCHARGE FROM OFFSHORE
OIL AND GAS PLATFORMS ................................................... 23
Thomas W. Duke
USEPA Environmental Research Laboratory - Sabine Island

BIOLOGICAL ANALYSES OF COMPLEX EFFLUENTS ................................ 38
Teresa Norberg-King
USEPA Environmental Research Laboratory - Duluth

MAGIC-LC/MS: A POWERFUL NEW TOOL FOR
ENVIRONMENTAL ANALYSIS
Richard F. Browner
School of Chemistry, Georgia Institute of Technology
(Paper will be available in open technical literature.)

APPLICATIONS OF THERMOSPRAY HPLC/MS
FOR MONITORING THE ENVIRONMENT .......................................... 79
Robert D. Voyksner
Research Triangle Institute

RECENT ENVIRONMENTAL APPLICATIONS OF
THERMOSPRAY LC/MS ...................................................... 122
Marvin Vestal
Chemistry Department, University of Houston

ii
-------
INDEX
April 3, 1985
Presentation/Speaker
Page
DETERMINATION OF DYES BY THERMOSPRAY
IONIZATION AND MS/MS
John M. Ballard
Lockheed Engineering and Management Services Co., Inc.
PROBLEM SOLVING WITH MASS SPECTROMETRY AND FTIR 19*
Walter M. Shackelford
USEPA Environmental Research Laboratory - Athens
PROGRESS REPORT ON DMR QA STUDIES: QUALITY ASSURANCE
PROGRAM FOR NPDES SELF-MONITORING DATA
Samuel To 232
USEPA Office of Water Enforcement and Permits
and
Paul Britton 242
USEPA Environmental Monitoring and Support Laboratory, ORD
INTER- AND INTRA-LABORATORY ASSESSMENT OF SELECTED
SW-846 METHODS FOR ANALYSIS OF APPENDIX VIII COMPOUNDS
IN GROUNDWATER 282
George H. Stanko
Shell Development Company
iii
-------
INDEX
April 4, 1985
Presentation/Speaker
Page
UTILITY ROUND ROBIN RESULTS FOR THE DETERMINATION
OF ARSENIC AND SELENIUM BY GRAPHITE FURNACE AAS ......... 355
Judith Scott
TRW Energy Development Group
COMPARISON OF METHODS FOR ANALYSIS OF PCBs ..'..... 384
Mitchell D. Erickson
Midwest Research Institute
ANALYSIS OF VOLATILE WATER SOLUBLE COMPOUNDS ............ 436
Denis C.K. Lin
Environmental Testing and Certification Corporation
DETERMINATION OF FIVE-DAY CARBONACEOUS BOD
IN WASTEWATER 448
James C. Young
Department of Civil Engineering, University of Arkansas
The Battleship USS IOWA 449
ANALYSIS OF PRIORITY POLLUTANTS BELOW FIVE
NANOGRAMS (ON COLUMN) IN MARINE SEDIMENTS
BY ISOTOPE DILUTION GCMS 471
Peggy Knight
Scientific Services, Weyerhaeuser Company
USES OF ION CHROMATOGRAPHY FOR INORGANIC
ANALYTES IN WATER 495
John D. Pfaff
USEPA Environmental Monitoring and Support Laboratory, ORD
DIRECT ANALYSIS OF PHENOLS BY HPLC 519
Suzanne Lesage
Environment Canada, Environmental Protection, Wastewater
Technology Centre
CLOSING REMARKS 546
Roster of Attendees 547
iv
-------
-------
PROCEEDINGS
MR. TELLIARD: Good
morning. My name is Bill Telliard. I'm from EPA
and I'm here to help you. Welcome to the Eighth
Annual...boy, that's old...Analytical Meeting of
the Office of Water. It's the first one for the
Industrial Technology Division or the eighth one
for the Effluent Guidelines Division, depending
on how people score this one.
I'd like to open this morning's session with
a few words from Dev Barnes, who I don't see.
Dev? Dev is the Deputy Division Director for the
Industrial Technology Division and I'd like to have
him say a few words.
MR. BARNES: You found
my hiding place, Bill. I'd like to welcome every-
body here. It's a pleasure for the division to
sponsor this every year. We look forward to this
annual event and we think that it's paid a lot of
dividends in terms of just getting everybody together
and ironing out differences in various meetings that
take place concurrently with this, dealing with
things that our division directly deals with in
terms of establishing guidelines. So I think this
-------
meeting has a great deal of value to us. We hope
that you also feel that way and we look forward to
your continuing participation in it. So, I hope
you all have a nice time today and tomorrow and
we'll look forward to seeing you again next year,
hopefully. Thank you.
MR. TELLIARD: A couple
of announcements. Concurrent with last year's open
sea disaster, we're repeating it again this year.
We're going to try to get everyone out of here on
time to catch the boat.
As the tradition holds, we have Tonie and
Sandy from the County Court Reporters taking down
every gem of knowledge that you're about to drop on
us today. When you go to the microphone, again,
please state your name, otherwise you will be beaten
brutally about the knees; all except George Stanko
because...just tradition.
In the back of the room you'll find some copies
of previous years proceedings for all of you who
really want to remember, and some copies of the new
1625, 1624 GCMS methodology with the QC stuff in
it. They're available in the back of the room.
There's also copies of 304(h); you remember that.
Breaking with tradition, our first speaker
-------
this year is not Bob Medz. Bob didn't come because
he's home being sued, but then again, it just shows
he did a real good job.
To show that we're an open minded group this
year, this morning will be spent discussing critters,
both crawling and floating types, and most analytical
chemists will find this a great eye opener. I
particularly think it will make you feel warm and
fuzzy when you look at standard deviations among
star measurements and some of these guys.
So our first speaker this morning is Steve
Bugbee from the Permits Office and Steve is going
to talk about water quality type things. Steve.
-------
STEPHEN L. BUGBEE
UNITED STATES ENVIRONMENTAL PROTECTION AGENCY
OFFICE OF WATER ENFORCEMENT AND PERMITS
WATER QUALITY BASED TOXICS CONTROL
MR. BUGBEE: Good morning.
The use of biological techniques to identify and
control toxicity is not new, but as you will hear,
it has been a very long and tortuous road to get
where we are today.
The control of toxic pollutants is one of the
Nation's most critical remaining water quality
problems. For example, localized water quality
standard violations or impairments of water uses
have been widely reported by states in their
biennial water quality reports to EPA. Because of
the tendency of some toxics to accumulate in fish
tissue, fishing bans and fish consumption warnings
are in effect in many of the Nation's waters.
For a number of reasons, however, implementing
controls for toxic pollutants has always been a
difficult problem. These reasons include the sheer
number of toxic compounds which are involved;
insufficient monitoring data because of the high
cost of laboratory analyses for toxics; and
-------
uncertainties about the fate and effects of these
chemicals in the aquatic environ-
ment. EPA has developed a new approach which,
together with advances in the field of biological
monitoring, should help us deal more effectively
with toxic pollution problems.
The Clean Water Act established two types of
regulatory requirements to control pollutant dis-
charges: technology-based effluent limitations
which reflect the best controls available, consider-
ing the technical and economic achievability of those
controls; and water quality-based effluent limita-
tions which must be met. Discharge permits reflect
the more stringent of the two whenever there are
differences. So, we have on one hand the technology-
based approach which you're most familiar with, and
then we have the water quality-based approach.
The technology-based requirements for discharges
are currently being issued and will have a substantial
effect in reducing toxic discharges. However, in
some cases these controls will not be sufficient to
eliminate water quality impacts and enable water
quality standards to be met. In these cases, water
quality-based controls are needed. Two technical
-------
approaches are available for developing water
quality-based effluent limits, the pollutant-specific
approach and the biomonitoring approach.
EPA and the States have traditionally used the
pollutant-specific approach. Pollutant-specific
techniques are best used where effluents contain a
few well quantified pollutants and the interactions
and the effects of these pollutants are known.
Thus, they have worked well for pollutants such as
oxygen-demanding loads and nutrients and heavy
metals. In addition, pollutant-specific techniques
must be used where health hazards are a concern
and/or if bioaccumulation is suspected.
In the case of toxic pollutants and complex
effluents, however, it may be difficult in some
cases to determine the attainment or the nonattain-
ment of specific water quality uses or water quality
standards, and to set the appropriate limits, because
of the complex chemical interactions which affect
the fate and ultimate impact of these toxic sub-
stances in the receiving water. In many cases,
chemical methods cannot easily be used to identify
all potential toxic pollutants. Developing numeri-
cal water quality criteria and determining allowable
loadings for all the wide variety of pollutants
-------
found in effluents is also very time-consuming and
resource intensive. In such situations, it would
be very desirable to examine overall toxicity
directly without identifying and analyzing every
pollutant individually.
Biological methods, based on the direct
measurement of the impact of whole effluents on
biological test organisms and communities, provide
this capability. Therefore, EPA is integrating
this whole effluent biomonitoring into our programs
to control toxic pollutants. The result is a two-
fold approach. In certain situations, an example
would be where potential human health impacts are a
concern, we must rely upon the chemical-specific
approach, measuring the individual toxicants and
evaluating their specific toxic properties. In
other situations, however, especially where we have
complex effluents, multiple discharges, it is more
appropriate to examine the harmful effects, or
toxicity, of the whole effluent rather than trying
to attempt to individually identify all the poten-
tial toxicants and understand the inherent chemical
reactions. In other words, we're looking at both
the whole effluent toxicity and the receiving water
body toxicity.
-------
This toxicity testing approach relies on newly
developed methods and laboratory testing procedures
while the two-fold approach employs both the chemi-
cal-specific and biological monitoring. The biologi-
cal component deserves a special focus because it is
the area that is developing most rapidly, both in
scientific and programmatic terms. In fact, the
scientific basis for using biological techniques
has advanced so significantly in recent years that
it is now an important aspect of the water quality-
based approach for controlling toxic pollutants in
the EPA's permit program.
The importance of biological monitoring, both
as an ambient monitoring tool and as a potential
regulatory tool, has long been recognized. We can
point to a long history of field studies that have
employed some type of biomonitoring activity. How-
ever, the success of biomonitoring as a regulatory
tool has until now been limited because it has
often proved difficult to translate the biological
instream effects into effluent requirements that
will be incorporated into a permit.
This is a very important point. Up until
approximately last year, the Chemical Manufacturers
Association, among others, had been very critical
-------
of our toxicity-based approach in NPDES permitting.
One of their major criticisms was the fact that we
were unable to make the transition from the end of
the pipe into the receiving water. In other words,
if you limit the effluent using some type of toxici-
ty limit, that limit would be very difficult to
enforce because of the difficulty of defining
receiving water impacts.
So to deal with this problem, we've conducted
research over the last few years which has provided
us with more effective biological monitoring tools
that can help us screen for problems, assess impacts,
set toxicity-based limits, and help identify ways
to reduce toxicity. These toxicity tests are an
outgrowth of those used in the laboratory to estab-
lish aquatic life criteria for specific chemicals.
These tests include acute toxicity tests which
measure short term exposure effects and chronic
toxicity tests which measure long term exposure
effects.
Through the work of EPA's Office of Water
Regulations and Standards, the Office of Water
Enforcement Programs, the Office of Research and
Development, the EPA Regional Offices, the States
and academia, we have taken these methods into the
-------
field, employed them in site-specific situations,
especially those involving complex effluents. We're
primarily addressing the fresh water environs at
the present time. However, we do have a similar
program being developed which will deal with
marine and estuarine toxicity.
The results of these applications have been
promising. Not only do we have a better understand-
ing of the behavior of toxics in the ambient con-
ditions, but we also have learned that the various
methods can be used in concert to complete the pic-
ture of the water quality impact. It now appears
that effluent toxicity can be evaluated in conjunc-
tion with chemical and ecological data and can be
very useful in developing regulatory requirements.
We intend to continue these field applications of
biological methods in FY86 and translate their
results into useful guidance for the States to
help in issuing water quality-based permits.
We are committed to a balanced and integrated
approach; that is, one that applies both the chemical
and biological techniques. This isn't to alarm you
that we're suddenly shifting over to just the
biological approach. I think one way of better
illustration is the policy EPA published in the
10
-------
Federal Register in March, 1984, that essentially
underwrites this whole water quality-based approach
for toxics control. So I'll run through a few of
the basics of the policy for you.
SLIDE 1
The policy was published in the Federal Register
March 9th, 1984.
SLIDE 2
Basically, the policy states, one, that we're
going to be controlling pollutants beyond BAT, and
EPA will use an integrated strategy consisting of
biological and chemical methods. Secondly, through
Section 308 of the Clean Water Act, EPA or a State
may require a discharger to provide chemical,
toxological, or ambient biological data to assure
compliance with standards.
Third, through Section 402 of the Clean Water
Act, EPA may develop NPDES permit limits based on
effluent toxicity and require a toxicity reduction
evaluation to eliminate unacceptable toxicity.
The fourth point is that the EPA Regional Admini-
strators will assure that each region has the
capability to conduct both chemical, biological
assessments and provide technical assistance to the
States. This is an ambitious goal but we have been
11
-------
12
-------
-------
building this into our work plans and budgets, and
we've seen considerable success and progress in the
Regions as well as the States in terms of developing
toxicity testing capability.
SLIDE 3
Other issues addressed in the policy; one,
the importance of whole effluent toxicity is stressed
as an evaluation and control parameter, particular-
ly for complex effluent permitting situations. In
other words, toxicity can be used as a parameter
just as BOD is used in NPDES permits. Second,
effluent toxicity can provide a valid indication
of receiving water impact.
The policy then goes on to respond to some of
the issues raised in the NPDES permit regulations
regarding the toxicity-based limits, in addition
to outlining the benefits and the disadvantages
of the chemical-specific versus the whole effluent
approach. Obviously, when you're dealing with
complex mixtures of chemicals, the whole effluent
toxicity-based approach may be the way to go. The
policy also stresses the importance of using toxi-
city testing when evaluating large publicly owned
sewage treatment plants. Some of those effluents
are extremely toxic.
-------
-------
The policy also asserts that toxicity testing
methods are currently available. EPA is confident
that there are adequate test methods which are
reliable and reproducible. With these test
methods available, we don't see any reason why
this should prevent regulatory agencies from
using this approach.
In addition to the policy, we also have put
out a technical support document which essentially
supports the policy and provides technical guidance
to the permit writers and water quality specialists.
Thus it is clear that in addition to the more
traditional chemical-specific approach to evaluat-
ing water quality, whole effluent toxicity tests
and other biological measures must now be systema-
tically employed throughout the various steps of
the regulatory process such as screening, impact
assessment, setting limits and measuring compliance.
These tests and measures must be related to instream
factors such as zones of initial dilution, flow
variability and sediment contribution so that the
instream behavior of toxics can be fully understood
as part of the decision-making process. Finally,
the selection of tests and their application must
be tailored to the specific site in question.
16
-------
I would like to conclude with this observation.
The history of the Nation's chemical and biological
monitoring activities has experienced several sig-
nificant shifts in emphasis. As you recall, in
the early 60's the emphasis was on biological
monitoring. We were looking at water quality
standards, water quality wasteload allocation
models, dealing with more of the traditional pol-
lutants such as BOD and dissolved oxygen. Then
in the '70's, with the advent of the Clean Water
Act, the shift was from out of the stream to the
end of the pipe and we were following a technology-
based approach which is essentially where we are
today.
Now we are beginning to realize the real
advantages of biomonitoring in dealing with complex
effluents. However, chemical monitoring continues
to have many advantages over biomonitoring, such as
controlling human health hazards and in providing
direct measures of operating treatment processes and
evaluating compliance. Therefore, we must continue
to view both chemical and biological monitoring not
as separate entities but rather as integrated
components of a balanced and effective water quality-
based toxicity control program. Thank you.
17
-------
MR. TELLIARD: We're going
to do questions now. We're not going to let you off,
Any questions?
18
-------
QUESTION AND ANSWER SESSION
MR. DELLINGER: Bob
Dellinger, Industrial Technology Division. It
seems to me that probably the most critical thing
in biomonitoring is selection of a test species or
organism. I'm familiar with pulp and paper effluents
and there's a wide range of effect depending on the
test species chosen. Some are more sensitive than
others. What would be the policy in that regard?
Do we pick a very sensitive species or a relatively
insensitive species, one in the middle, or how...
MR. BUGBEE: There really
is no such thing as the most sensitive species.
What we try to do is use several species to bracket
the range of sensitivity which can occur.
The pulp and paper example is a good one
because right now they're running acute tests
using rainbow trout and people would think that
rainbow trout would be the most sensitive species.
Well, it turns out that these rainbow trout really
do quite well in the kraft mill effluent.
The States of Oregon and Washington are now
looking for more tests to use in order to regulate
this toxicity. There is a chronic toxicity
19
-------
associated with the kraft mill effluent, but the
rainbow trout in the tests that they're using
does not pick this up. So if you would happen to
use, let's say, an oyster spat or a Ceriodaphnia
test, these tests would show toxicity.
This is one of the reasons we're promoting
the use of more than one species, preferably three
to five species, and perhaps using both the acute
and the chronic test combined.
MR. DELLINGER: Thanks.
MR. BUGBEE: Yes, sir.
Jim?
MR. RICE: Jim Rice. I
wanted to ask, Steve, you made some mention about
part 136, I think. My recollection is that none of
the tox tests methods that you're using are in part
136 now. I was wondering what your plans were in
terms of publishing for notice and comment and the
like, for these tox methods that you have under
development now?
MR. BUGBEE: I'm glad you
asked that question, Jim. We started in 1978, trying
to get the acute toxicity tests for fathead minnows
and daphnia into part 136, and the latest information
I have, it will be March of 1986 when they propose
20
-------
to get these acute methods into 136.
Corny Weber of ORD, Cincinnati, has put out
the third edition of his Biological Methods Manual,
which is a very fine manual for acute tests. He's
going to also put out a separate manual just for
some of the chronic tests. But that's what we're
using right now, and also, I believe, in Standard
Methods, there are some routine acute tests that
can be used.
MR. TELLIARD: Thank you,
Steve.
MR. BUGBEE: You're welcome.
21
-------
MR. TELLIARD: Our next
speaker, Dr. Duke, is going to talk a little bit
about critters that I like better, which are shrimp.
Due to the fact that Industrial Technology
Division always likes to stay ahead or behind,
depending on which view you take, we are, this year,
going to for the first time, establish an effluent
limitation guideline based on an LC50 killing
critter number. This is for the offshore oil and
gas industry. For those industries feeling slighted,
we will do our best to get you an LC50 number as
soon as possible.
Much of the work on our offshore oil and gas
program has been supported by our agency laboratory,
Gulf Breeze, and particularly by Dr. Duke. So Tom,
this morning, is going to talk to you something
about the biological monitoring program that
they're conducting. Tom.
22
-------
THOMAS W. DUKE, PH.D.
UNITED STATES ENVIRONMENTAL PROTECTION AGENCY
ENVIRONMENTAL RESEARCH LABORATORY
TOXICITY TESTS AND BEST AVAILABLE TECHNOLOGY (BAT)
DETERMINATIONS FOR DISCHARGE
FROM OFFSHORE OIL AND GAS PLATFORMS
(Revised presentation submitted.)
23
-------
Toxicity Tests and Best Available Technology
Determinations for Discharges from
Offshore Oil and Gas Drilling Platforms
T.W. Duke and P.R. Parrish
The Environmental Protection Agency is charged with regulating the
discharge of materials from offshore oil and gas drilling platforms.
This is accomplished through Best Available Technology (BAT) and through
Section 403(c) of the Clean Water Act. BAT is promulgated on a national
basis and Section 403(c) includes guidelines for testing indigenous
species and for more complex chronic and community tests. The proposed
BAT guidelines for drilling fluid discharge from offshore platforms
contain a requirement for a toxicity test to estimate potential adverse
biological effect of a particular drilling fluid.
This paper discusses the toxicity test specified in the proposed
guidelines and how test results can be applied to regulatory activities.
Drilling Fluid Used and Characteristics
Before discussing the toxicity test, it is important to describe
drilling fluids (muds) and how the fluids are used before discharge.
Drilling fluids are a complex mixture of chemicals and clays that are
forced through the drill pipe, through the rotating bit, and are returned
to the surface through the space between the drill string and casing.
During this process, drilling fluids lubricate the bit, coat the bore
hole with an impermeable cake to prevent fluid loss, reduce corrosion,
transmit hydraulic power to the bit, and remove cuttings (Ayers, 1981).
Four basic components — barite, bentonite, lignite, and lignosulfonate —
comprise about 90 percent of the materials used in drilling fluids (Table 1).
-------
Barite, the most common weighting agent, is used to increase the
density of the fluid. Bentonite clay is used to thicken the drilling
fluid; lignite and lignosulfonates are used to ensure that the fluid
does not become too viscous. Specialty additives, including diesel or
other oils to increase lubricity and biocides to control microorganisms,
also may be added as the drilling process requires (Perricone, 1980).
Discharge of Fluids
After a drilling fluid is pumped from the well to the drilling
platform, it is passed through solids-control equipment and the separated
solids (cuttings) are discharged into the sea. This discharge occurs
continually, as long as drilling is in progress. Fluid passing through
the solids-control equipment is returned to a holding tank for possible
treatment and recirculation into the well. When the fluid becomes too viscous
for use or if it no longer functions as desired, a portion is intermittently
discharged into the sea. All of the used drilling fluid is discharged at
the completion of drilling of exploratory wells. The volume of solids
continuously discharged varies from 3,000 to 6,000 barrels per well (one
barrel equals 42 U.S. gallons), and the volume of used drilling fluid
intermittently discharged varies from 5,000 to 30,000 barrels per well
(National Research Council, 1983).
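For a rough sense of scale, the barrel figures above convert to metric volumes as in the
following sketch (illustrative arithmetic only, using the stated 42 U.S. gallons per barrel
and the standard 3.785 liters per U.S. gallon):

    # Rough volume conversion for the discharge ranges quoted above.
    # Assumes 1 barrel = 42 U.S. gallons and 1 U.S. gallon = 3.785 liters.
    GAL_PER_BBL = 42
    L_PER_GAL = 3.785

    for label, (low_bbl, high_bbl) in [
        ("continuous solids discharge, per well", (3000, 6000)),
        ("intermittent used-fluid discharge, per well", (5000, 30000)),
    ]:
        low_m3 = low_bbl * GAL_PER_BBL * L_PER_GAL / 1000.0    # cubic meters
        high_m3 = high_bbl * GAL_PER_BBL * L_PER_GAL / 1000.0
        print(f"{label}: {low_m3:,.0f} to {high_m3:,.0f} cubic meters")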
The fate of drilling fluids discharged into the sea is determined by
physical, chemical, and biological processes. A portion of discharged
drilling fluids contains larger particles that sink to the bottom relatively
near the well site, depending upon their density and upon current velocities
and other environmental factors. A lesser amount of solids, as well as
soluble components, remains in the water column and can be transported
away from the well by ambient currents (Ayers et al., 1980a; Ayers et
25
-------
al., 1980b; Brandsma et al., 1980). Thus, there is potential interaction
between discharged drilling fluids and biota in the water column and
on the bottom.
Toxicity Test
The proposed guidelines for BAT include a toxicity test, the purpose
of which is to obtain a general idea of the toxicity of a specific drilling
fluid or fluid component to marine organisms. This static, acute toxicity
test may lead to more complex tests, but it is not intended to provide
detailed toxicological data as may be required under 403(c). Rather, the
test is intended to provide a basis for comparing the toxicity of drilling
fluids.
In order to be an effective test for BAT purposes, methodology should
be simple, the test organism should be relatively sensitive and readily
available and transportable, and results should be amenable to statistical
analysis. For these and other reasons, a 96-hour test with mysids,
Mysidopsis bahia, was chosen as the BAT toxicity test. Mysids have been
used as a reliable and sensitive test organism for test materials other
than drilling fluids and a substantial data base exists. In addition,
the mysid test has been used to evaluate the toxicity of many drilling
fluids; therefore, new results can be compared with the existing drilling
fluid data base. Although the test described here is only four days in
duration, mysids can be used for long-term tests to study the effects of
materials on partial or full life-cycle stages (Nimmo et al., 1977).
Mysids can be cultured by those performing the test or purchased
from a supplier. All mysids used in the drilling fluid test should be b+1
days old at the beginning of the test and fed Artemia salina nauplii
during culture and testing.
26
-------
Details of the test method are documented by Petrazzuolo (1983) and
Duke et al. (1984). Natural or artificial seawater can be used to prepare
the suspended particulate phase (SPP) of the fluids. The SPP is prepared
by adding one part fluid to nine parts seawater (volume/volume) and stirring.
The mixture is allowed to settle for one hour and the material that
remains in suspension, the SPP, is the test material. It is added to
seawater (volume/volume) to prepare the test solutions. Dissolved oxygen
(DO) and pH of the SPP are controlled during preparation. For range
finding tests, 10 mysids are added to each of four concentrations —
100%, 50%, 10% and 1% SPP and a seawater control, none of which is
replicated. For definitive tests, 20 mysids are added to a seawater
control and each of five concentrations that are based on the results of
the range-finding tests. Three replications give a total of 60 animals
per treatment. All treatments are aerated during the 96-hour exposure.
Water quality (DO, pH, and salinity) are measured at 24-hour intervals;
temperature is measured continuously.
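As an illustration of the dilution arithmetic only, the layout of SPP test solutions can be
sketched as follows; the volumes and concentrations below are hypothetical, and the
controlling procedure remains that of Petrazzuolo (1983) and Duke et al. (1984):

    # Illustrative layout of SPP test solutions; volumes and concentrations
    # are hypothetical, not taken from the proposed guidelines.

    # SPP stock: one part drilling fluid added to nine parts seawater (v/v).
    fluid_ml = 100.0
    seawater_for_spp_ml = 9 * fluid_ml

    # Definitive test: a seawater control plus five SPP concentrations
    # (percent v/v), chosen here arbitrarily for the example.
    test_volume_ml = 1000.0
    for spp_percent in (0.0, 3.0, 6.0, 12.0, 25.0, 50.0):
        spp_ml = test_volume_ml * spp_percent / 100.0
        dilution_seawater_ml = test_volume_ml - spp_ml
        print(f"{spp_percent:5.1f}% SPP: {spp_ml:6.1f} ml SPP "
              f"+ {dilution_seawater_ml:6.1f} ml seawater")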
After 96 hours, the number of live organisms is determined in each
drilling fluid concentration and in the control. Mortality data from the
drilling fluid test and a reference toxicant test that must be conducted
at the same time are subjected to statistical analyses. A 96-hour LC50
(the concentration lethal to 50% of the test animals after 96 hours of
exposure) and its 95% confidence limits are calculated for each drilling
fluid (if the mortality data are amenable) by using probit analysis
(Finney, 1971) or other suitable methods (Stephan, 1977).
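A minimal sketch of the probit calculation is given below, with entirely hypothetical
concentrations and mortality counts and without the weighting and confidence-limit steps of
a full Finney (1971) analysis; it is not the analysis code used in any of the studies cited:

    # Simplified probit estimate of a 96-hour LC50 from hypothetical data.
    import numpy as np
    from scipy import stats

    conc_pct_spp = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # % SPP, hypothetical
    dead = np.array([3, 12, 30, 48, 58])                   # dead of 60 mysids, hypothetical
    n_per_treatment = 60

    # Probit-transform observed mortality and regress against log10(concentration).
    mortality = dead / n_per_treatment
    probits = stats.norm.ppf(mortality)
    log_conc = np.log10(conc_pct_spp)
    fit = stats.linregress(log_conc, probits)

    # The LC50 is the concentration at which the fitted probit equals zero
    # (i.e., 50 percent mortality).
    lc50 = 10 ** (-fit.intercept / fit.slope)
    print(f"Estimated 96-hour LC50: {lc50:.2f}% SPP")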
Application of Toxicity Test Results
The purpose of the toxicity test is to obtain general information on
the comparative toxicity of a specific drilling fluid or component. The
27
-------
test is considered a first-tier test because other tests may be required
to determine the sensitivity of other organisms or communities if the
toxicity of the tested material warrants.
An example of how the tiered system works and how the results of
toxicity tests can be used to protect the environment through product
substitution resulted from a recent evaluation of drilling fluids by the
Environmental Research Laboratory at Gulf Breeze (ERL/GB). A cooperative
program (Duke and Parrish, 1984) was conducted to determine the effects
of 11 used drilling fluids on selected marine organisms. The fluids were
collected from operating platforms that were drilling wells of various
depths at different locations in the Gulf of Mexico. Drilling fluids
were tested at ERL/GB to determine their effect on mysids and were analyzed
for specific metals and hydrocarbon content by other laboratories. The
tests with mysids indicated that several of the 11 drilling fluids were
more toxic than fluids tested in the past (Petrazzuolo, 1981 and Ayers et
al., 1983). Toxicity of whole muds to mysids ranged from 26 to >1,500 µl/l
(ppm) and toxicity of the SPP phase from 726 to >50,000 ppm (Gaetz et
al., in press).
Subsamples of the fluids were provided to other laboratories to
determine toxic effects on several marine organisms. The toxicity pattern
of whole muds to grass shrimp (Palaemonetes pugio) was similar to that for
mysids. Grass shrimp were not as sensitive as mysids, but the relative
sensitivity of grass shrimp and mysids to the 11 fluids was similar. The
96-hr LCSOs for grass shrimp ranged from 142 to >100,000 ppm (Conklin and
Rao, 1984).
The results of the mysid and grass shrimp tests suggested the need
to test other organisms to confirm the effect of the petroleum hydrocarbons
28
-------
in the used fluids. Fertilized eggs of hard clams (Mercenaria mercenaria)
were exposed to the SPP phase of the fluids for 96 hr. The number of
clams exposed to the drilling fluids that reached straight-hinge or "D"
stage larvae was compared with the number of clams that attained this stage
that were maintained in sea water without the fluids. The 96-hr EC50s
(the effective concentration that prevented 50 percent of the clams from
reaching the straight-hinge stage) were from 64 to >3,000 ppm (New England
Aquarium, 1984).
A comparison of the diesel content of the 11 used drilling fluids
and their toxicity to mysids, grass shrimp, and clam embryos indicated
that the higher the diesel content, the greater the toxicity. The
correlation between toxicity and diesel content was significant, according
to the Spearman rank order method (Simpson et al., 1960). Subsequent
results obtained at the University of West Florida by testing the toxicity
of drilling fluids to larval grass shrimp, P. intermedius, before and
after addition of diesel oil and mineral oil also confirmed that addition
of petroleum hydrocarbons increased toxicity of the muds tested (Conklin
and Rao, 1984).
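A minimal sketch of such a rank-order check is shown below, using made-up diesel contents
and LC50s rather than the study's data:

    # Spearman rank-order correlation between diesel content and toxicity,
    # using hypothetical values (a lower LC50 means a more toxic fluid).
    from scipy import stats

    diesel_content_pct = [0.1, 0.4, 1.0, 2.2, 3.8, 6.5]     # hypothetical
    lc50_spp_ppm = [48000, 9500, 3100, 900, 310, 120]        # hypothetical

    rho, p_value = stats.spearmanr(diesel_content_pct, lc50_spp_ppm)
    print(f"Spearman rho = {rho:.2f} (p = {p_value:.4f})")
    # A strongly negative rho indicates that higher diesel content
    # is associated with lower LC50s, that is, greater toxicity.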
Thus, the results of toxicity tests that began with a modification
of the first-tier mysid test described in this paper revealed a component
in the used drilling fluids that contributed greatly to the toxicity of
the fluids to marine organisms. Under use conditions, results such as
these would justify excluding the toxic material from a drilling fluid
and adding a less-toxic substitute that would also perform the necessary
function.
29
-------
TABLE 1. Some chemical ingredients in drilling fluids1

Ingredient                                           Use

Barite, Bentonite, Attapulgite                       Weighting Agents and Viscosifiers
Sodium Tetraphosphate, Modified Tannin,              Dispersants and Thinners
  Chromium Lignosulfonate, Calcium
  Lignosulfonate, Lignite
Starch, Cellulose                                    Fluid Loss Reducers
Detergents, Non-ionic Emulsifier,                    Lubricants and Emulsifiers
  Processed Hydrocarbons, including Diesel Oil
Aluminum Stearate, Paraformaldehyde                  Defoamers, Bactericides
Sodium Chromate                                      Corrosion Inhibitor
Sodium Hydroxide                                     pH Control
Potassium Hydroxide                                  pH Stability
1 After Perricone (1980)
30
-------
REFERENCES
Ayers, R.C., Jr., T.C. Sauer, Jr., D.O. Steubner, and R. P. Meek. 1980a.
An environmental- study to assess the effect of drilling fluids on
water quality parameters during high rate, high volume discharges to
the ocean. In: Proceedings of Symposium on Research on Environmental
Fate and Effects of Drilling Fluids and Cuttings, Vol. I, pp. 351-
391. American Petroleum Institute, Washington, D.C.
Ayers, R.C., Jr., T.C. Sauer, Jr., R.P. Meek, and G. Bowers. 1980b.
An environmental study to assess the impact of drilling discharges
in the mid-Atlantic. I. Quantity and fate of discharges. In:
Proceedings of a Symposium on Research on Environmental Fate and
Effects of Drilling Fluids and Cuttings. Vol. I, pp. 382-418,
American Petroleum Institute, Washington, D.C.
Ayers, R.C., Jr. 1981. Fate and effects of drilling discharges in the
marine environment. Proposed North Atlantic OCS Oil and Gas Lease
Sale 52. Statement delivered at public hearing, Boston, MA, November
19, 1981. Bureau of Land Management, U.S. Department of the Interior.
. .C. Sauer, ., an
mud concept for offshore drilling for
ling Conference Proceedings,
Drilling
11399.
pp
NPDES.
327-330.
1983.
T.C. Sauer, JR., and P. Anderson. .
In: IADC/SPE 1983
The generic
PE 1983
Paper No. IADC/SPE
Brandsma, M.G., L.R. Davis, R.C. Ayers, Jr., and T.C. Sauer, Jr. 1980.
A computer model to predict the short-term fate of drilling discharges
in the marine environment. In: Proceedings of a Symposium on
Research on Environmental Fate and Effects of Drilling Fluids and
Cuttings, Vol. II., pp. 588-610, American Petroleum Institute,
Washington, D.C.
Conklin, P.J. and K. R. Rao. 1984. Comparative toxicity of offshore and
oil -added drilling muds to larvae of Palaemonetes intermedius.
Archives of Environmental Contamination and Toxicology 13:685-690.
Duke, T.W. and P.R. Parrish. 1984. Results of the drilling fluids
research program sponsored by the Gulf Breeze Environmental Research
Laboratory, 1976-1984, and their application to hazard assessment.
EPA-600/4-84-055, Environmental Research Laboratory, Gulf Breeze,
FL. 94 pp plus appendices.
Duke, T.W., P.R. Parrish, R.M. Montgomery, S.D. Macauley, J.M. Macauley, and
G.M. Cripe. 1984. Acute toxicity of eight generic drilling fluids
to mysids (Mysidopsis bahia). EPA-600/ 3-84-067, Environmental Research
Laboratory, Gulf Breeze, FL. 11 pp.
Finney, D.J. 1971. Probit Analysis, 3rd Ed. Cambridge University Press,
London. 333 pp.
31
-------
Gaetz, C.T., R.M. Montgomery, and T.W. Duke. In Press. Toxicity of
component phases of used drilling fluids to mysids (Mysidopsis
bahia). Environmental Toxicology and Chemistry.
National Research Council (U.S.). 1983. Drilling discharges in the
marine environment. Panel on Assessment of Fates and Effects of
Drilling Fluids and Cuttings in the Marine Environment. National
Academy Press, Washington, D.C. 192 pp.
New England Aquarium (Edgerton Research Laboratory). 1984. A survey of
the toxicity and chemical composition of used drilling fluids. EPA-
600/3-84-071, Environmental Research Laboratory, Gulf Breeze, FL. 109 pp.
Nimmo, D.R., L.H. Bahner, R.A. Rigby, J.M. Sheppard, and A.J. Wilson, Jr.
1977. Mysidopsis bahia: an estuarine species suitable for life-cycle
toxicity tests to determine the effects of a pollutant. In: Aquatic
Toxicology and Hazard Evaluation, F.L. Mayer and J.L. Hamelink, Eds.,
pp. 109-116. ASTM STP 634, American Society for Testing and Materials,
Philadelphia, PA.
Perricone, C. 1980. Major drilling fluid additives. In: Proceedings
of a Symposium on Research on Environmental Fate and Effects of
Drilling Fluids and Cuttings, Vol. I., pp. 15-29. American Petroleum
Institute, Washington, D.C.
Petrazzuolo, G. 1981. Preliminary report on environmental assessment of
drilling fluids and cuttings released onto the outer continental
shelf. Vol. 1: Technical assessment. Vol 2: Tables, figures and
Appendix A. Prepared for Industrial Permits Branch, Office of Water
Enforcement and Ocean Programs Branch, Office of Water and Waste
Management, U.S. Environmental Protection Agency, Washington, D.C.
Petrazzuolo, G. 1983. Proposed methodology: Drilling fluids toxicity
test for offshore subcategory; oil and gas extraction industry.
May 19, 1983. 45pp.
Simpson, G.G., A. Roe, and R.C. Lewontin. 1960. Quantitative Zoology,
revised edition. Harcourt, Brace & World, Inc., New York. 440 pp.
Stephan, C.E. 1977. Methods for Calculating an LC50. In: Aquatic
Toxicity and Hazard Evaluation, F.L. Mayer and J.L. Hamelink, Eds.,
pp. 65-84. ASTM STP 634, American Society for Testing and Materials,
Philadelphia, PA.
32
-------
INTRALABORATORY VARIATION

SAMPLE        96-HOUR LC50        95% CONFIDENCE LIMITS
1             2.9% SPP            2.4-3.5% SPP
2             2.5%                2.3-2.8%
3             1.3%                0.5-2.2%
4             2.1%                1.7-2.4%
33
-------
SOURCES OF VARIATION
1. MYSID CONDITION
2. TREATMENT OF SPP
   A. DEFINITION
   B. SEPARATION FROM SOLID PHASE
   C. AERATION TO INCREASE DO
-------
BASICS OF METHOD
*  1:9 DILUTION OF DRILLING FLUID
*  pH ADJUSTMENT
*  SUSPENDED PARTICULATE PHASE
*  POSITIVE CONTROL INCLUDED
35
-------
SENSITIVITY OF MYSIDS

TOXICITYa (96-HOUR LC50)

                              GENERIC MUD
MUD ALONE                     51.6        29.3
MINERAL OIL ADDED:
   1%                         13.5        7.1
   5%                         1.8         0.90
  10%                         0.49        0.76

a CONCENTRATIONS GIVEN AS PERCENT (v/v) OF SUSPENDED
PARTICULATE PHASE IN SEAWATER.
36
-------
INTERLAB RESULTS

TEST LAB        GENERIC MUD    TOXICITY (96-HOUR LC50)    95% CONFIDENCE LIMITS
GULF BREEZE     #5             2.7% SPP                   2.5-2.9% SPP
                #1             NO MEDIAN EFFECT
NARRAGANSETT    #5             2.3% SPP                   2.5-3.0% SPP
                #1             NO MEDIAN EFFECT
37
-------
TERESA NORBERG-KING
UNITED STATES ENVIRONMENTAL PROTECTION AGENCY
ENVIRONMENTAL RESEARCH LABORATORY
BIOLOGICAL ANALYSES OF COMPLEX EFFLUENTS
MRS. NORBERG-KING:
Actually, Norb told me to pass a message on to
Bill. He told me that you owe him one. I got
to come here today and explain some ideas of bio-
logical methods to chemists.
When Dale Rushneck asked me to present the
paper on fresh water toxicity tests in relation to
the testing approach that's being used for the
permitting process that's currently underway, I
said a tentative yes and it depends on who can pay
my travel. Then he explained it was for an ana-
lytical symposium and I was pretty sure he wouldn't
get me to go. Finally he said I didn't have to
discuss policy because Steve Bugbee was going to,
and in that case, I said sure. I don't like to
talk about policy. Biologists never like to.
First, what I want to discuss is how EPA got
involved in the toxicity testing developed for
complex effluents. Then quickly describe the test
38
-------
methods that we're using, and then to take some of
our test results and compare them to the biological
survey results.
First I just want to give you some quick back-
ground that explains and again reiterates some of
what Steve Bugbee was saying. Under the Public Law
92-500, the procedures for regulating pollutants
were laid out and a national pollutant discharge
elimination system, the NPDES permit system, was introduced.
Permits were originally written to incorporate
standards for toxicants. Yet these were not success-
ful and EPA chose to establish effluent limitations
for the "priority pollutants." The limitations
were to incorporate the Best Available Technology
(BAT) that was economically feasible and to do so
on a chemical-by-chemical basis.
Toxicity information on many compounds is
limited, if not nonexistent, and the interactions of
mixtures are poorly understood. Consequently, a
move to effluent toxicity testing in order to assess
the overall toxicity of effluents has been under
way.
The priority pollutant approach has not been
useful in effluent testing because most priority
pollutants are just not a concern to aquatic organisms. With the
39
-------
exception of a few metals and pesticides, barely
any priority pollutants are in effluents above the
detection level. In fact, a recent article by
Staples et al (1985) addressed the issue of priority
pollutants in effluents. The authors evaluated
EPA's STORET computerized data base of water quality
information that's collected by the states and the
regions. Their review pointed out that 80 percent
of organic priority pollutants are not detectable
in industrial effluents and that 78 percent of the
priority pollutants are not found in the average
waterway.
Effluent testing for aquatic organisms has
been going on for years, but it's only recently
with the water quality-based permit policy (EPA,
1985) that the renewed interest has been displayed.
With this new-found interest, the emphasis for sub-
lethal or subchronic tests to supplement the short-
term acute toxicity has increased.
This is where the research efforts at the
Environmental Research Laboratory at Duluth (ERL-
Duluth) Minnesota, have been expended for the last
three to four years. The goals of the complex
effluent program are to validate the ability of the
laboratory toxicity tests to predict the community
-------
impact of the industrial and municipal effluents.
To do so, we developed two short-term, subchronic
tests which are both run for seven days. We use a
cladoceran, Ceriodaphnia dubia, and a small fish,
the fathead minnow, Pimephales promelas. We run a
set of various effluent dilutions and we run samples
of the receiving water. Both tests that we run are
static with daily renewal of the test solutions.
Most often, these tests are conducted on-site.
The Ceriodaphnia test end-points are reproduc-
tion and survival to estimate the effect of the
level of the effluents. The fathead minnow test
end-points are growth and survival to estimate the
effluent effect levels. We have generally found
that the mean number of young produced per female for
the Ceriodaphnia and the mean weights for the
fathead minnows are more sensitive end-points than
survival in each seven day test.
Briefly, the Ceriodaphnia test uses newborn
young (0-4 hours old) to start the test. The young
are placed in individual test cups containing
15 ml of test solution each. As an aside, most
people are familiar with Daphnia magna. So, the
first thing people always ask me is how small is
the Ceriodaphnia compared to the Daphnia. I can
-------
see it with my naked eye; some people like to use a
microscope, and we don't have to use electron microscopy
to read these results. Ceriodaphnia produce
three broods in seven days which is an end-point
for toxicity tests following the OECD guidelines.
They produce an average of 18 to 30 young, depending
on what source of water is used in the test.
The fathead minnow test is a daily renewal
static test and is initiated using less than 24
hour old post-hatch larvae. In seven days we have
found they typically increase their weight five to
seven times that of their initial weight. This is
a quick overview of what the test methods are.
If you want more information on the methods, see
Mount and Norberg, 1984; and Norberg and Mount,
1985.
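For readers more used to chemical data, a minimal sketch of how the two 7-day endpoints
might be summarized from raw counts and weights is given below; all values are hypothetical
and this is not the ERL-Duluth data reduction procedure:

    # Hypothetical summary of the two 7-day test endpoints described above.
    import statistics

    # Ceriodaphnia: total young produced per female over three broods.
    young_per_female = [22, 25, 19, 28, 24, 21, 26, 23, 20, 27]
    print(f"Mean young per female: {statistics.mean(young_per_female):.1f}")

    # Fathead minnow: mean individual dry weight (mg) at day 7, per replicate.
    initial_dry_weight_mg = 0.10
    final_dry_weights_mg = [0.55, 0.61, 0.58, 0.49]
    mean_final = statistics.mean(final_dry_weights_mg)
    print(f"Mean final dry weight: {mean_final:.2f} mg "
          f"(about {mean_final / initial_dry_weight_mg:.0f}x the initial weight)")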
In addition to the effluent dilution tests, a
set of tests that we have termed the ambient toxicity
tests, are run (Mount et al, 1984). These are run
with water that is collected directly from the
stream. Typically the locations of these stations
are established above and below each discharger and
the water collected for the ambient tests allows us
to evaluate what's actually happening when the
effluent and the receiving water mix. The results
-------
of the Ceriodaphnia and fathead minnow ambient
tests are then used for comparisons with the stream
community data.
For these ambient tests, the concentrations
of effluents in the stream are not important, but
rather whether the water in the stream has an
observable effect in the toxicity tests and on the
stream community. So what you are actually getting
with these ambient tests is a measure of toxicity
of various stream stations which we can then use
to compare to the condition of the biological
community at the same stations.
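A minimal sketch of that kind of station-by-station comparison is shown below, using
invented numbers rather than any of the site data discussed later:

    # Hypothetical station-by-station comparison of an ambient toxicity
    # endpoint with a field community measure; all values are invented.
    stations = ["2", "3", "5", "7", "8"]
    mean_young_per_female = [24.0, 23.0, 9.0, 4.0, 5.0]    # Ceriodaphnia ambient test
    fish_taxa_count = [14, 13, 6, 4, 5]                    # field survey, same stations

    upstream_reference = mean_young_per_female[0]
    for st, young, taxa in zip(stations, mean_young_per_female, fish_taxa_count):
        pct_of_upstream = 100.0 * young / upstream_reference
        print(f"Station {st}: {pct_of_upstream:5.1f}% of upstream young production, "
              f"{taxa} fish taxa")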
Steve Bugbee mentioned the work that has been
going on for these sites. It can be intensive
laboratory and field work. We have completed eight
site studies around the country covering Connecticut
(Mount et al, in press), Alabama (Mount et.al, 1985),
Oklahoma (Norberg-King and Mount, in press), Mary-
land (Mount et al, in press), West Virginia (Mount
and Norberg-King, in press; and Mount et al, in press),
and Ohio (Mount et al, 1984; Mount and Norberg-King,
1985).
Along with the ambient tests, the field surveys
of the biological and hydrological conditions are
done. These consist of fish collection, benthic
-------
sampling, periphyton and zooplankton sampling or
artificial substrates where appropriate.
What I want to discuss now, in the remainder
of this talk, is how these biological analyses of
the effluents and the ambient•toxicity tests compare.
This discussion will present the analysis of effluents
by means of the ambient toxicity tests compared to
the stream community for just three out of eight
field sites.
The first site I want to discuss was conducted
on a very small prairie stream in Oklahoma called
Skeleton Creek. It has a very low gradient and the
stretch of river studied covered approximately 27
kilometers. There are three dischargers that are
located on this small creek directly. They are a
Publicly Owned Treatment Works (POTW), a refinery
and a fertilizer plant.
SLIDE 1
What you see on this slide, along the bottom,
are the stream stations that we plotted, and along
the left hand side is the mean fathead minnow
weights. The data is the mean individual weight at
the end of the seven day test, which is a dry weight.
Please note these stations were not equal distances
apart on the stream as they are depicted here.
-------
Again, just to quickly explain what the ambient
tests are, they are the daily renewal tests with a
grab or a composite sample from each river station,
that are tested to determine if chronic toxicity
occurs instream. Note with this site, we saw
substantial toxicity at Station 5.
SLIDE 2
Next are the results of the Ceriodaphnia and
the fathead minnow ambient tests plotted together.
These show that there were similar responses of
each species up to Station 5 and yet there was a
dramatic drop in the Ceriodaphnia young production
evident at Stations 7 and 8. Irrigation flow
entered the creek at various points, but the first
known inflow entered above Station 7. In fact, the water
from Stations 7 and 8 had chronic toxicity levels
that were three times those of upstream stations.
We have data that show the Ceriodaphnia to be much
more sensitive to salinity than the fathead minnows,
which may account for this. You may also note that
the slight inhibition of young production and
weight values at the most upstream station, which
was Station 2, was removed after the POTW and the
refinery effluents were mixed in the stream.
-------
SLIDE 3
Next is the fathead minnow weight data and the
dotted line is the number of benthic taxa. The
results of the benthic taxa show us they were
unaffected over the entire study area. If toxicity
observed in the ambient tests was due to ammonia
from the fertilizer plant, or from one discharger—
I'm trying not to point fingers here—the benthic
population may not have been affected due to their
lack of sensitivity to the ammonia.
SLIDE 4
Next, the dash line is the number of fish taxa
plotted against the mean fathead minnow
weights, and these data match extremely well. You
can see that the number of fish taxa show a response
at Stations 7 and 8 that were not displayed in the
fathead minnow ambient test.
SLIDE 5
But when we plot the Ceriodaphnia data together
with the fish taxa as the dash line, you can readily
see more similar responses. For this site, an
invertebrate test organism is much more predictive
of the stream fish population as the Ceriodaphnia
test results correlate extremely well with the
number of fish taxa collected in the biological
-------
survey. This all is aimed at trying to answer the
question of which species to use and whether it's a
sensitive species, or how do we choose which species
we're testing for effluents.
The second site I want to discuss is Five Mile
Creek in Alabama. It is a foothill, spring-fed
creek with a high gradient in the upper end. The
study area covered a 42 kilometer stretch of river.
In this area there are two coke plant dischargers
in the upper stretch and a POTW discharge farther
downstream, and a few tributaries entered the
stream.
SLIDE 6
What you see here are the mean weights of
the fathead minnows plotted against the stream
stations for Five Mile Creek. Substantial toxicity
was observed at Station 5. Again, to reiterate,
what I'm showing you are chronic test results and
it's important to keep that in mind. We feel we
are able to make better predictive value judgments
based on chronic data than we can on acute data.
SLIDE 7
Here is one of my favorite slides. Tom had one
of his favorite slides and I have one of mine. I'd
like to just leave this one up for a while and figure
-------
out something else to talk about so you can stare
at this one because this is one of our better
slides. The values for the fathead minnow weights
and the number of the fish taxa at every station,
show the response is the same, with the sharp
decline of the taxa correlating extremely well with
the measurable toxicity of the fathead minnow
ambient toxicity test. Even the slight upstream
effect that was observed at Station 1 in the ambient
toxicity test was also observed in the number of
fish taxa. This is better agreement than we had
expected or we had hoped for in our comparisons.
SLIDE 8
So, if we take the same fathead minnow ambient
test data and we plot the number of benthic inver-
tebrates, there's good agreement. At the upper
three stations the agreement was not as good, but
because of those stations at the upper end, what
actually happens is that we would under-predict the
impact at those stations.
SLIDE 9
The Ceriodaphnia toxicity test results also
show some of that same inhibition at the upper end,
yet when the tributary flow entered at Station 3,
just after 2A, and after the coke plant's effluents
-------
entered, what we saw was increased young production.
Only after the POTW discharge entered the stream did
we see additional toxicity in Ceriodaphnia young
production.
These dramatic result differences between the
fathead minnow weights and the Ceriodaphnia young
production indicate the value of running toxicity
tests with more than one test species. We may have
missed something with just running one species, but
we gained a lot with running two species. Now, how
to make them mesh is another question.
SLIDE 10
We took that same Ceriodaphnia data and we
plotted the zooplankton density in the stream with
the mean number of young per female for the Cerio-
daphnia at each ambient station. We shifted the
zooplankton response in the stream upstream to
match the upstream stations and allow for the drift
that would be occurring in the stream. There is
really good agreement and the same decrease in the
upper reaches is there with the zooplankton in the
stream that we saw with the Ceriodaphnia ambient
test.
So with this site, the Five Mile Creek site,
we can sum the results of the toxicity testing and
-------
the field survey as follows. The fathead minnow
tests and the fish test results correlate extremely
well. My favorite slide pointed that out. The
Ceriodaphnia and the zooplankton tests correlate
extremely well when one allows for the drift. The
benthic taxa do not agree well with the fathead
minnow tests at a few stations and the reasons for
this are too numerous to postulate here. However,
completely different responses of the two laboratory
toxicity tests again point to the value of testing
more than one species.
The last site I am going to discuss is the
Naugatuck River in Connecticut. It is a very high
gradient stream with large rubble boulders in the
upper reaches. This was a complex site and it
covered 64 kilometers of river. There were also
numerous tributary inputs that occurred. A total
of 27 discharge outfalls are located on the stream
and on the tributaries. Most of the industrial
discharges are metal plating operations with similar
plant designs and the effluents contain mostly the
same heavy metals.
SLIDE 11
Again, please note on this slide, I've plotted
where the tributary input enters the stream across
50
-------
the top as well as the POTW inputs. Again, on this
slide as on the other one, the stream stations are
not equal distances apart as the slide appears to
indicate. Most of the plating operations and the
other industrial discharges went directly into two
main tributaries. Also, there was a reservoir
above Station 4.
This same slide shows the results of the fat-
head minnow ambient test, and this site, being so
much more complex, is difficult to interpret.
The toxicity at Stations 10 and 11 was not chronic,
but rather was known to be due to a spill in the
river that occurred early during the testing. When
such an event occurs, it results in an overestimate
of the toxicity at those two stations. We are not
able to start the test again after a spill happens
because then the animals are older or a different
age group or something.
The two single data points shown by the arrows
are the results of fathead minnow ambient tests
with the source of that test water being collected
directly in the mouth of the tributary with the
large number of dischargers on it. Essentially
what was measured was an estimate of the combined
effluent effects in each tributary prior to mixing
51
-------
with the river. We saw toxicity in the tributary
sample above Station 6. We saw some decrease in
growth in ambient Station 6 water. We also saw
toxicity in terms of reduced growth above Station
8 with the tributary sample, and we saw a sharp
decline in the ambient toxicity test weights of the
fathead minnows at Station 8.
SLIDE 12
If you take the periphyton diversity and plot
it against the fathead minnow test results, there's
good agreement on these two curves. There is a
problem here in that periphyton diversity was used
rather than taxa, but this is done because periphyton
have not been identified as to species as well as
many other groups such as the fish or the benthic
macroinvertebrates.
SLIDE 13
The plot of the number of fish taxa with the
fathead minnow ambient test data does not show any
definite trend except possibly for lesser numbers
of fish taxa downstream. We feel we could not
discern any clear area of impact.
SLIDE 14
The benthic invertebrate taxa plotted against
the fathead minnow ambient test shows a general
-------
decline of taxa from the headwaters with a substantial
and consistent drop over Stations 3 through 7. It
appears to recover at Station 9, as the number of
benthic taxa increases and increased weights were
obtained at Station 9 in the ambient toxicity test.
SLIDE 15
The Ceriodaphnia ambient toxicity data are
plotted as the dotted line, and the dashed line is the
zooplankton density in the stream. When one corrects
for the drift of the zooplankton and moves the
response of the Ceriodaphnia upstream to account
for drift, again we see good agreement.
So, what this all sums up for the Naugatuck
River is that there is a good agreement from the
periphyton, the benthic macroinvertebrates, the
zooplankton to the laboratory ambient toxicity test
data, while the fish taxa data did not show any
real clear trend of impact with the toxicity
tests in this instance.
From these studies and from this approach to
complex effluent testing, the following conclusions
can be made. One, in every study toxicity
was shown by one or both test species, and it correlated
with an adverse impact on the community. This says
that no one test species represents one group of
-------
the community, but rather some group in the commun-
ity, be it fish, benthic macroinvertebrates or the
periphyton, and that the impacted component or com-
ponents of the community correlate with the toxicity
found in the lab. Two, laboratory test results
differ from site to site. Therefore, the test
species cannot be considered a surrogate for any
one component of the community.
Finally, I will just say that this has been a brief
summary of the lab-to-field data. We have other
projects we are considering now such as negative
interactions, species sensitivity, and the fre-
quency and cause of reverse toxicity curves. This
data is only for three out of the eight sites. We
have reports on all of the sites in near final
form. It appears that toxicity tests will be used
as permit limits soon. We know of at least two
states that already have implemented toxicity
testing into their state policies. Also, we may
need to start looking for the toxic components in
an effluent in order to advise industries where
their problem is. However, when that happens, we
feel, as biologists, that we really must in-
corporate toxicity testing, in that the analytical
methodology cannot provide the whole picture.
-------
REFERENCES
Environmental Protection Agency. 1985. Technical
Support Document for Water Quality-Based Toxics Control.
Office of Water, Washington, D.C. 20460.
Mount, D.I. and T.J. Norberg. 1984. A Seven-Day Life-
Cycle Cladoceran Toxicity Test. Environmental Toxicology
and Chemistry, Vol. 3: 425-434.
Mount, D., N. Thomas, M. Barbour, T. Norberg, T. Roush,
and R. Brandes. 1984. Effluent and Ambient Toxicity
Testing and Instream Community Response on the Ottawa
River, Lima, Ohio. EPA-600/2-84-080. August.
Norberg, T.J. and D.I. Mount. 1985. A New Fathead
Minnow (Pimephales promelas) Subchronic Toxicity Test.
Environmental Toxicology and Chemistry, Vol. 4: 711-718.
Norberg, T.J. and D.I. Mount (editors). (In press).
Validity of Effluent and Ambient Toxicity Tests for
Predicting Biological Impact, Skeleton Creek, Enid,
Oklahoma. U.S. Environmental Protection Agency, EPA
600/in preparation.
55
-------
Staples, C.A., A. Frances Werner, and Thomas J. Hoogheem.
1985. Assessment of Priority Pollutant Concentrations
in the United States Using STORET Database. Environmental
Toxicology and Chemistry, Vol. 4: 131-142.
Mount, D.I. and T.J. Norberg (editors). (In press).
Validity of Effluent and Ambient Toxicity Tests for
Predicting Biological Impact, Kanawha River, Charleston,
West Virginia. U.S. Environmental Protection Agency,
EPA 600/in preparation.
Mount, D.I. and T.J. Norberg (editors). 1985. Validity
of Effluent and Ambient Toxicity Tests for Predicting
Biological Impact, Scippo Creek, Circleville, Ohio.
U.S. Environmental Protection Agency, EPA/600/3-85/044,
June 1985.
Mount, D.I., et al. (editors). (In press). Validity of
Effluent and Ambient Toxicity Tests for Predicting
Biological Impact, Back River, Baltimore Harbor, Maryland.
U.S. Environmental Protection Agency, EPA 600/in preparation.
Mount, D.I. et al. (editors). (In press). Validity of
Effluent and Ambient Toxicity Tests for Predicting
Biological Impact, Five Mile Creek, Birmingham, Alabama.
56
-------
U.S. Environmental Protection Agency, EPA 600/8-85/015.
Mount, D.I., et al. (editors). (In press). Validity of
Effluent and Ambient Toxicity Tests for Predicting
Biological Impact, Naugatuck River, Waterbury, Con-
necticut. U.S. Environmental Protection Agency, EPA
600/in preparation.
Mount, D.I., et al. (editors). (In press). Validity of
Effluent and Ambient Toxicity Tests for Predicting
Biological Impact, Ohio River, Wheeling, West Virginia.
U.S. Environmental Protection Agency, EPA 600/in pre-
paration.
57
-------
[Slide figures: mean fathead minnow weight (mg) by river station, with POTW, refinery, and fertilizer plant discharges marked; the same fathead minnow data plotted against Ceriodaphnia mean young/female, number of benthic taxa, and number of fish taxa.]
[Slide figures: mean fathead minnow weight (mg) by station, with Coke Plant #1, Coke Plant #2, and POTW discharges marked; fathead minnow data plotted against number of fish taxa and number of benthic invertebrate taxa; Ceriodaphnia mean young/female plotted against zooplankton samples (number/liter).]
[Slide figures: mean fathead minnow weight (mg) by station, with tributary and POTW inputs marked; fathead minnow data plotted against periphyton diversity, number of fish species, and number of benthic invertebrate taxa; Ceriodaphnia mean young/female plotted against zooplankton density (number/m3), with tributaries, POTWs, and a dam marked.]
-------
QUESTION AND ANSWER SESSION
MR. PRESCOTT: I'm Bill
Prescott. Could you amplify just a little bit what
this correction for drift business was? Why and how?
MRS. NORBERG-KING: Sure.
One of the feelings we have is that since the
zooplankton in the stream are drifting organisms,
we don't expect them to be stationary. If you
collect them at Station 5, they're probably reflect-
ing the population response of an upstream station
as they move downstream with the water. What we
are trying to do is predict how fast they're moving
with the water and then move the response of the
animals upstream to where they would have been in
terms of what exposure they would have received.
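A minimal sketch, in Python, of the drift adjustment described in that answer. The station locations, stream velocity, drift time, and densities below are all invented purely for illustration; the actual studies estimated the travel of water and organisms site by site.

    # Attribute each zooplankton sample to the upstream station whose water
    # the drifting organisms were actually exposed to (illustrative values only).

    stream_velocity_km_per_day = 10.0       # assumed mean water velocity
    drift_time_days = 2.0                   # assumed drift time before sampling
    drift_distance_km = stream_velocity_km_per_day * drift_time_days

    station_km = {1: 0.0, 2: 8.0, 3: 20.0, 4: 35.0, 5: 55.0}    # hypothetical
    zooplankton_per_m3 = {1: 120, 2: 95, 3: 40, 4: 15, 5: 30}   # hypothetical

    def exposure_station(sample_station):
        """Station closest to where the sampled organisms started drifting."""
        origin_km = station_km[sample_station] - drift_distance_km
        return min(station_km, key=lambda s: abs(station_km[s] - origin_km))

    for s in sorted(station_km):
        print(f"sample at station {s} (density {zooplankton_per_m3[s]}/m3) "
              f"reflects exposure near station {exposure_station(s)}")

Shifting the field response upstream in this way is what lines the in-stream zooplankton curve up with the Ceriodaphnia ambient test curve in the slides.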
MR. PRESCOTT: Thank you.
MR. HUNT: I'm Gardner
Hunt with the Maine Department of Environmental
Protection. Have you had any experience with the
diversity index?
MRS. NORBERG-KING: Yes,
we've done a number of things with the diversity
indices. Right now we present indices in our
reports, but we don't feel that the biological
73
-------
indices are the best measure for us. They're not
giving us the actual number of taxa present in the
stream. We may find that with an index there
may not be a true representation of the species
at that station.
I'm not the one to explain the diversity
indices. We have a contractor that does all the
work for our biological field work, doing the
biological indices. Don Mount, for instance, is a
much better person to describe what we're doing with
the biological indices, as he does the integration
chapter in our report. I do a lot of the field
work, a lot of the toxicity testing, and am just
starting to try to make heads or tails out of
integrating the approaches. I'm sorry I don't have
a better answer.
MR. TUROSKI: Vic Turoski,
James River. What two states have instituted
biomonitoring with respect to permitting?
MRS. NORBERG-KING: New
Jersey and Virginia.
MR. TELLIARD: Any other
questions? I have one, Teresa.
MRS. NORBERG-KING: Yes.
MR. TELLIARD: If I was
-------
fortunate enough to have one of these permits with
biological monitoring requirements, what would it
cost me to run these...
MRS. NORBERG-KING: For a
Ceriodaphnia test?
MR. TELLIARD: Yes.
MRS. NORBERG-KING: Given
that you have the laboratory...or say that you
would have to go after a contractor to run one of
these tests, our estimates for a Ceriodaphnia test
are anywhere from $500 to $1,000 to run one seven-
day Ceriodaphnia test. That's extraordinarily low.
The fathead minnow test probably runs up to $1,000
to $1,500. It's just a little more labor-intensive.
MR. TELLIARD: Thank you.
I'd like to thank this morning's speakers—Steve,
Teresa and Tom. This is our first time talking
about critters and I think it was very good. Can
we have a hand for them?
We're going to break now for coffee and please
get back in here in 15 minutes. We've got a long
day ahead of us. Thank you.
(WHEREUPON, a 15 minute break was taken.)
-------
MR. TELLIARD: For those
people who struggled in late, there's a lot of room
in the front pews and they don't cost any more.
Continuing with this morning's session, we
have two speakers that are going to be
addressing...probably if you had a theme for a
meeting, we had critters and probably LC/MS. LC/MS
is a new initiative for the industrial technology
division. We, of course, feel that it's only fair
to stay ahead of API and CME by grabbing new
analytical tools when they're not looking and
running off and using them.
We have, this year, started looking seriously
at LC/MS which gives us a lot more flexibility in
some of our areas of concern. The folks at our
Athens Laboratory, the industrial technology
division's lab, have been working hard at some LC/MS
work. John McGuire has been working on some analyses
of some of our pharmaceutical samples and so forth.
We see that down the road this is going to be a valuable
tool for investigating compounds that we know
we can no longer beat through a standard GC.
So, that is a brief introduction of why
we're concerned about it: it is a new tool,
it does cover new compounds, and we want to keep
76
-------
your life interesting, particularly the industries
who feel somehow we've slighted them by missing
certain compounds.
We'd like to start this morning's presentation
and we're going to talk about LC/MS as a new and
powerful analytical tool. .
77
-------
RICHARD F. BROWNER
GEORGIA INSTITUTE OF TECHNOLOGY
SCHOOL OF CHEMISTRY
MAGIC-LC/MS: A POWERFUL NEW
TOOL FOR ENVIRONMENTAL ANALYSIS
(Dr. Browner's paper will be available
in open technical literature.)
78
-------
ROBERT D. VOYKSNER, PH.D.
RESEARCH TRIANGLE INSTITUTE
APPLICATIONS OF THERMOSPRAY HPLC/MS
FOR MONITORING THE ENVIRONMENT
DR. VOYKSNER: Thank you,
Bill. My name is Robert Voyksner and I'd like to
talk about thermospray LC/MS application to environ-
mental analysis.
In today's presentation, I'd like to talk a
little bit about what thermospray is, what the
interface consists of, talk about optimization, and
go into some applications and analysis of pesticides,
dyes and mycotoxins.
First of all, I'll talk a little bit about the
type of samples we like to analyze by LC/MS. The
technique is still not routine and if you can do it
by GCMS or GC, it's still best to do it that way.
Samples can be derivatized or, if you don't want to
run by GC/MS, HPLC serves as a good method for
separation. We use the mass spec as a detector; the
conventional LC detector doesn't offer the speci-
ficity or the sensitivity.
Of the combined HPLC techniques available,
79
-------
thermospray is an ideal technique for a couple of
reasons. First of all, the interface can accept
the full HPLC flow rate, so conditions that sepa-
ration chemists developed don't have to be changed
and it's very easy to transfer from a normal LC
detector to mass spec detection. Secondly,
there are no reverse phase solvent restrictions. If
we're using thermospray ionization, we can use any
reverse phase solvent. More recently, the thermo-
spray interface is being equipped with a filament
so we can do essentially water type CI and in that
case...or solvent CI, excuse me. In that case, we
really have no salt restrictions at all. Finally,
either ionization mode is soft, providing molecular
weight information.
SLIDE 1
The schematic of the interface is shown here.
The HPLC effluent enters this directly heated coil
(A) to form an aerosol, which we see in area F.
This aerosol contains the analyte as well as some
of the solvent. The aerosol droplets shield the analyte
from the heating to prevent any thermal decomposi-
tion. To ionize the compounds I'm looking at,
primarily we get gas phase ionization with the
80
-------
volatile buffer, in my case, ammonium acetate. So
we're effectively getting ammonia CI type spectra.
For more ionic compounds where ions are actually
formed in solution, the integrity of the ion is
maintained in this aerosol and by means of the
repeller (L) and first lens, the ions are extracted
through this ion exit cone (E) into the quadrupole
rods. The neutral gases or solvent basically go
out into the auxiliary pumping and the liquid nitrogen
cold trap (I).
Probably the main critical parameter in
thermospray is the monitoring of temperatures or
controlling the temperatures, and we monitor the
temperatures of the source block (D), the aerosol
(F) slightly beyond the ion exit cone, as well as the
vaporizer (B).
SLIDE 2
First of all I'd like to talk about parameters
that affect the thermospray results. In order to
develop a good analytical scheme, we should have a
good understanding of the factors influencing
thermospray spectra and sensitivity. Most of these
parameters listed do affect thermospray sensitivity
and a few affect the type of spectra recorded.
I'd like to go through each parameter and show you
-------
briefly their optimization and their effects.
The first thing I'm going to talk about is
the aerosol temperature. You control the aerosol
temperature primarily by the vaporizer temperature
and the source block temperature. The aerosol
temperature, which is read just beyond the ion exit
cone, affects analyte sensitivity.
SLIDE 3
You can see that the optimal aerosol tempera-
ture for this triazine is between 120 degrees C and
130 degrees C. If you're off by 10 or 20 degrees you
lose a significant amount of sensitivity. Also I
should point out this plot here is very dependent
on solvent composition. As you increase the percent
water, your optimal aerosol temperature also in-
creases.
SLIDE 4
Next, I'd like to show you the effect that
changing the percent of water, which you do in
gradient elution type HPLC analysis, has
on sensitivity. You can see here, I looked
at three different percentages of water in methanol.
As we increase the percent water, we gain signifi-
cant factors in sensitivity. If we compare the
high and the low end, we gain about, close to two
82
-------
orders of magnitude in sensitivity.
SLIDE 5, SLIDE 6
To compensate for this effect, we developed a
scheme to add water post-column and this way, even
though you're making a dilution of your sample,
you can gain over a factor of four in sensitivity.
It's very simple to do. We're using a coaxial
tee with which we can add our water or buffer post-
column and mix it with the HPLC effluent going into
the detector while minimizing band broadening, in effect
enhancing our sensitivity.
SLIDE 7
Also, the percentage of water slightly affects
the type of spectra we generate. As I show you
here, we do see different types of ions, either
(M+H)+ or (M+solvent)+ cluster ions, depending on
the percentage of water. At a lower percentage of
water in methanol, we see more clustering and more
acetate addition. At a high percentage of water,
we see more protonation. The same roughly holds
true for acetonitrile in water. In general, though,
the change in solvent composition doesn't drastic-
ally affect the spectra and really there's no gain
or loss as one goes from one extreme to another.
S3
-------
SLIDE 8
Another factor that plays an important role in
thermospray is the selection of a buffer. For ther-
mospray ionization to occur, a volatile buffer
should be present. We evaluated five volatile buf-
fers: triethylamine and ammonium carbonate, bicarbon-
ate, formate, and acetate. For this organo-
phosphorus pesticide, you can see that ammonium
acetate produced the best response. In some cases,
ammonium formate was comparable to ammonium acetate,
but for most of the work I've been using acetate be-
cause it gave the best response.
SLIDE 9
As for the concentration level of the acetate, we
found, doing a plot of concentration versus (M+H)+
intensity, that it plateaus near about 0.08 M and
any further addition really gains very little in sensi-
tivity. For people who worry that the acetate will
degrade the separation, an ammonium acetate solution
can be added post-column. For example, you can
add a 0.3 M solution post-column so that your final
concentration might come out to be 0.1 M. So actual-
ly the acetate or the buffer requirements for
thermospray don't really affect the analysis from the
viewpoint of a chromatographer.
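A small Python helper for the arithmetic behind post-column addition (ideal volume-weighted mixing is assumed, and the specific flow rates below are illustrative rather than the exact conditions used):

    def post_column_mix(col_flow, col_pct_water, col_buffer_M,
                        add_flow, add_pct_water, add_buffer_M):
        """Volume-weighted water content and buffer molarity after tee-ing an
        additive stream into the column effluent (flows in mL/min)."""
        total = col_flow + add_flow
        pct_water = (col_flow * col_pct_water + add_flow * add_pct_water) / total
        buffer_M = (col_flow * col_buffer_M + add_flow * add_buffer_M) / total
        return total, pct_water, buffer_M

    # Example: 1.2 mL/min of buffer-free effluent at 30% water, plus 0.6 mL/min
    # of 0.3 M ammonium acetate in water added post-column.
    flow, water, acetate = post_column_mix(1.2, 30.0, 0.0, 0.6, 100.0, 0.3)
    print(f"{flow:.1f} mL/min, {water:.0f}% water, {acetate:.2f} M ammonium acetate")

With the added stream at half the column flow, a 0.3 M addition lands at 0.1 M in the combined flow, which is the kind of final concentration mentioned above.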
-------
SLIDE 10
Next, I'd like to show some examples of analy-
sis of pesticides, primarily carbamate pesticides
in soil and water by thermospray. Now that we
have a handle on the optimal conditions which we hope
will give us the best sensitivity in thermospray, we
tried to design an analysis scheme keeping these
points I showed you in mind, to get the best possi-
ble results both chromatographically and by mass spec-
trometry.
For sample preparation for water samples, we
stuck with literature-type extraction techniques.
We compared two extraction techniques, Sep-Pak and
methylene chloride extractions. We found out
that methylene chloride extraction came out the
best with a 98 percent recovery and Sep-Pak was
reasonable with 50 percent recovery.
One advantage to the LC/MS technique would be
to inject the entire volume of sample on column,
which is something you can't do with GC/MS unless
you dried down to a very low volume.
SLIDE 11
The conditions we used were developed for the
carbamate pesticides and are shown here: Zorbax
ODS (25 cm), with a multi-step solvent
-------
gradient program. You can see we're
working at normal flow rates (1.2 mL/min). I have
a UV detector in line for comparison purposes.
The mass spec was scanning (m/z 160-600). Primarily I
do most of my work in positive ion mode. The vaporizer,
as I showed you before, is temperature dependent.
We compensate for the loss or change in sensitivity
away from the optimal temperature by programming our vaporizer,
so this way we can always maintain optimal sensiti-
vity with varying solvent composition. The vapori-
zer is programmed down in sequence with the gradient.
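A sketch, in Python, of what programming the vaporizer down with the gradient might look like. The linear mapping from percent water to setpoint and all of the temperatures are assumptions for illustration; in practice the optimum is found empirically for each solvent system.

    def vaporizer_setpoint(pct_water, t_low_water=100.0, t_high_water=130.0):
        """Illustrative linear map: more water in the mobile phase calls for a
        higher vaporizer setpoint (deg C), less water for a lower one."""
        return t_low_water + (t_high_water - t_low_water) * pct_water / 100.0

    # Hypothetical 15-minute gradient running from 50% down to 15% water.
    for minute in range(0, 16, 5):
        pct_water = 50.0 - (50.0 - 15.0) * minute / 15.0
        print(f"t={minute:2d} min  water={pct_water:4.1f}%  "
              f"vaporizer setpoint={vaporizer_setpoint(pct_water):5.1f} C")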
SLIDE 12
Here's an analysis, or one example, where you
have the HPLC UV trace and selected ion chroma-
tograms for about five pesticides spiked into lake
water. This water sample contained one part per
billion of each pesticide. This is under full
scan conditions. You can see here in the selected
ion chromatograms, each peak was easily detected
with very little noise, very little other ion inter-
ferences. If you look in the UV chromatogram,
they're very difficult to pick up where these
peaks would be detected.
We can go much lower. We can see at one part per
billion there is very little noise in most of these
86
-------
channels. As to detection limits on these types
of samples, we found for most of the carbamates we
could go down close to about 10 parts per trillion.
SLIDE 13
Typically the thermospray spectra for these
samples are very simple, consisting of an (M+H)+
or (M+NH4)+ ion, or a combination of both. This
is one of the examples, propoxur, which showed in
the selected ion chromatogram. Very simple spectra.
For most of the spectra, I usually just map out
the most intense or base peak for each of the compounds.
The big disadvantage for these simple spectra is
there's really no structural information gained
from fragmentation so it's very difficult to make
a qualitative identification of an unknown.
SLIDE 14
In summary, for the carbamates we worked in
the concentration range from 100 parts per billion
to 10 parts per trillion and it proved quite linear
over that range. We looked at about 15 pesticides
and they did show a significant variation in
detection limits. For the ones that were able to
go down to 10 parts per trillion, we used that range
for determining the linear correlation coefficient (0.99). So
for the 8 or 10 that did go down to about 10
-------
parts per trillion, we did have a linear range of
about 10 to the fourth.
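A Python sketch of how that kind of linearity check can be run. The concentrations and peak areas below are invented; only the form of the calculation (a least-squares fit and correlation coefficient over roughly four orders of magnitude) reflects what was described.

    import numpy as np

    # Hypothetical calibration points from 10 ppt to 100 ppb (four decades).
    conc_ppt = np.array([10, 100, 1_000, 10_000, 100_000], dtype=float)
    peak_area = np.array([220, 2_300, 21_800, 236_000, 2_250_000], dtype=float)

    slope, intercept = np.polyfit(conc_ppt, peak_area, 1)
    r = np.corrcoef(conc_ppt, peak_area)[0, 1]
    print(f"slope = {slope:.1f} area/ppt, intercept = {intercept:.0f}, r = {r:.3f}")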
SLIDE 15
This method worked very well for soil analysis.
One hundred parts per trillion of pesticides was spiked
into soil, and as observed in the HPLC UV, it's
pretty hopeless. I'm not sure you can make any
sense out of that chromatogram; however, in the
ion chromatograms you can pick up the pesticides
quite easily, looking at the base peak for each
representative pesticide. Again, most of them
show very little interference. I think propoxur
shows the most channel interference with some
other components showing common ion interference.
But in most cases, thermospray seems to be trans-
parent to a number of biological or environmental
matrix interferences.
Next, I'd like to talk a little about another
example, analysis of dyes. In this example, we're
looking at dyes in gasoline. Dyes in general are
very difficult. There's really no mass spec methods
available. Dyes are added to gasoline for a few
reasons. One, they're a fingerprint for the
manufacturers. If you have four service stations
and there's a ditch nearby with gasoline leaking
88
-------
into it, somebody could point a finger at the
proper service station that is leaking. Also they
use dyes to differentiate between leaded and un-
leaded gasoline.
Typically, TLC has been the main way to analyze
dyes in gas. TLC shows that the supposedly pure
dye component contains a multitude of components.
What is bought from a manufacturer with a given
structure is not very pure. The only other thing
we know about dyes is that they're spiked into gas
at about the part per million level. Since the USA
uses millions of gallons of gas, we are talking
about kilograms of dye being used or thrown into
the environment after burning.
To get an idea of what degradation products
are formed, we have to have a good handle on what
the initial products are to begin with. So we try
to characterize some of the dyes initially just to
see what's in the parent dye to get a better handle
on what will come out after burning.
SLIDE 16
An HPLC UV trace and a TIC trace for a commercial red
dye are shown here. There's really not too much in
the UV— multitude of peaks, very broad peaks,
indicating probably a multitude of components. In
89
-------
thermospray, quite a few peaks there.
SLIDE 17
We knew the dye had this basic structure with
the R-group being a methyl group. When we looked
at the selected ion plots, we found that R-groups
ranging from H all the way to C10H21 were
present, and in many cases many isomers for each
R-group were present. So this supposedly pure
dye, or being sold as a pure dye, is a multitude
of alkyl homologs of about 20 to 30 components.
SLIDE 18
Not only that, we also detected some other
colored dyes in the sample. We determined the
structure of this dye by purchasing the standard.
I think the standard was the methyl R-group again
and showed the same type of spectra as the sample.
This orange pigment was found in this commercial
red dye and again, a whole range of alkyl substi-
tuents was found.
So in this one dye, we had to deal with
upwards of 30 to 40 components, making detection
of a specific component very difficult. We are
looking at concentrations in the lower parts per
billion now, instead of in the parts per million
level.
90
-------
SLIDE 19
To show a real example of a gasoline extract,
this was supposedly spiked with one part per million
red dye. We did a Sep-Pak silica gel extract. There
is nothing in the HPLC UV trace where the dye
should elute. However, we looked at HPLC selected
ion chromatograms. The red pigment (m/z 381) and
the orange pigment (m/z 249) were detected. You
can see the two very neat peaks way above the noise
level. This we estimated, assuming
that each alkyl substitution has the same response,
was about 10 to 20 parts per billion of dye in
gasoline.
So in this way, if we wanted to characterize
the red dye, we might choose two or three of the
major components found in the red dye, measuring
the area ratios as a fingerprint for a particular
dye. The multitude of components in the dye make
it very difficult to detect each with adequate
sensitivity.
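A minimal Python sketch of the area-ratio fingerprint idea. The m/z values are loosely based on the homolog series discussed (each homolog 14 mass units apart); the peak areas are invented, and a real comparison would of course use the ions actually chosen for a given dye.

    def fingerprint(areas_by_mz):
        """Normalize selected-ion peak areas of a few major homologs to the
        largest one, giving a pattern that can be compared between samples."""
        top = max(areas_by_mz.values())
        return {mz: round(area / top, 2) for mz, area in areas_by_mz.items()}

    reference_dye = fingerprint({353: 4.1e5, 367: 9.8e5, 381: 6.3e5})
    leaked_sample = fingerprint({353: 3.9e4, 367: 1.0e5, 381: 6.0e4})
    print(reference_dye)
    print(leaked_sample)   # similar ratios point back to the same dye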
I'd like to go through one other example.
This is a study of mycotoxins and this was a big
issue last fall with Afghanistan and the Yellow
Rain issue. I'd also like to talk about highly
toxic contaminants in feeds and grains.
91
-------
The methodology for these toxins is rather
limited, primarily GC/MS techniques using
sample derivatization. However, GC/MS techniques
have a few major disadvantages. In EI conditions
one detects low mass fragments below m/z 200. In that
mass range it is very easy to have interferences from
blood and serum samples. NCI overcomes some of
this problem by providing molecular anion infor-
mation. However, to get reasonable sensitivity,
we have to do at least two derivatives to analyze
these mycotoxins. There are extreme differences in
sensitivity of the different toxins with deriva-
tives.
On the other hand, thermospray offers some
advantages. First of all, there's no sample
derivatization. We can bypass that step completely
so your samples don't have a short shelf life.
Thermospray does provide molecular weight information
for all components. Primarily the spectra consist
of (M+H)+ or (M+NH4)+ ions with very few
fragments. If there are fragments, they're very
simple fragments like the loss of water.
The common interferences found in blood and
urine are pretty well transparent to thermospray
ionization, so the examples I'll show you, there's
92
-------
very few background problems. Again, like I said
before, there's no splitting of HPLC effluents, so we
can inject an entire sample into the HPLC and maintain
good sensitivity.
SLIDE 20
For those not familiar with the structures of
these toxins, they're shown here. Primarily T2,
HT2, DAS, and T2 tetraol have this basic structure with
varying R1 through R5 groups. F2 has the structure
shown below. As for molecular weights, you're dealing
with a range from 296 up to 466.
SLIDE 21
The first example I have here is a variety of
toxins in the urine. We have a quick isocratic
HPLC analysis. In about 10 minutes we can do a
complete assay of these four or five toxins. Again,
we have good sensitivity for these toxins (two
nanograms injected). You can see in the chromato-
gram there's a high background, making F2 a little
bit more difficult to detect.
SLIDE 22
Here's another example of detecting for HT2
in serum. HPLC UV chromatogram shows the retention
time of HT2. (See arrow.) Here's the actual
thermospray selected ion chromatogram for the
-------
standard and sample. The standard represents
one nanogram of HT2. One can see detection limits
below one nanogram for HT2 are achievable. It
also shows you the advantage of specificity of
the mass spectrometry over HPLC UV.
SLIDE 23
One more example, the detection of T2 in
serum. The serum sample again shows very few
interferences. A one-nanogram standard chroma-
togram is also given. The mycotoxin work proved
to be quite linear from about 0.1 nanograms
up to about 50 nanograms.
One other thing I should mention. Our detec-
tion limits can be reduced by about another factor
of 2 or 3 by performing post-column buffer addi-
tion. We did some experiments where we added an
aqueous solution post-column (0.3 mL/min of
100 percent water). This increased our overall
percentage of water because we were operating with
a high percentage of methanol in a range where
thermospray sensitivity is not optimal.
SLIDE 24
This is an older slide comparing detection
limits for toxins using various methods.
-------
Recently, we've been able to reduce thermospray
detection limits down to 0.05 to 0.01 nanograms. You
can see how it compares with other techniques,
including NCI, EI, GC, and HPLC/UV. I believe HPLC/MS is
the method of choice for analysis of toxins.
In conclusion, I think thermospray has pretty
well revolutionized the technique of LC/MS. It
offers a new combination of sensitivity and
specificity and is very versatile for a variety of
compounds. I think all these reasons have made it
one of the most popular techniques for LC/MS.
It's at the point now where it's just becoming
routine for some assays.
I'd like to thank you for your attention and
EPA Cincinnati, Ohio, for their support.
MR. TELLIARD: Questions?
-------
[Slide: Schematic of Thermospray Interface. HPLC in at 0.8 to 2 mL/min through 0.006-inch I.D. stainless steel tubing. Labeled components:
A  Directly heated vaporizer
B  Vaporizer thermocouple
C  Jet chamber
D  Source thermocouple
E  Ion exit cone
F  Aerosol thermocouple
G  Lenses
H  Quadrupole assembly
I  Liquid nitrogen trap and forepump
J  Source block heater
K  Vaporizer heater
L  Repeller]
-------
[Slide: The Effect Solvent Composition Has on Thermospray Spectra. Relative intensities of [M+H]+ versus [M+acetate]+ or [M+H+CH3CN]+ ions at 20%, 50%, and 80% water in methanol and at 20%, 50%, and 80% acetonitrile in water.]
-------
[Slide: Comparison of base peak intensities for simazine in each buffer ((NH4)2CO3, NH4HCOO, NH4HCO3, NH4OOCCH3); intensities ranging from 360 to 52,000.]
-------
-------
[Slide: HPLC/MS Analysis
Column:                Zorbax C18, 25 cm x 4.6 mm
Solvents:              Methanol/water (0.1 M ammonium acetate)
Gradient:              50/50; 15 min; 70/30 (5 min) to 85/15 (10 min); 5 min
Flow:                  1.2 mL/min
UV:                    254 nm
MS:                    Scan m/z 160-600 in 2.0 s (positive ion detection)
Thermospray interface: Vaporizer 106° C, jet 270° C, 96° C]
-------
[Slide: Analysis for Pesticides in Water (1 ppb). HPLC/UV trace and selected ion chromatograms (including BPMC, Linuron m/z 266, and Benzopropethyl m/z 366) from 5:00 to 35:00 min.]
-------
-------
[Slide: Analysis of Pesticides in Soil, 100 ppt. HPLC/UV trace and HPLC/MS selected ion chromatograms (Carbaryl m/z 219, BPMC m/z 225, Propoxur m/z 227) from 5:00 to 35:00 min.]
-------
[Slide: Commercial Diazo Red Dye. HPLC/UV and HPLC/MS total ion current traces from 6:50 to 34:10 min.]
-------
[Slide: HPLC/MS Ion Chromatograms for the Various Alkyl-Substituted Azo Benzene-Azo Naphthols; m/z 353 corresponds to R = H.]
-------
[Slide: Various Alkyl-Substituted Phenyl-Azo-Naphthols (orange pigments) Found in the Red Dye; m/z 249 (R = H) and m/z 263 (R = CH3).]
-------
[Slide: Analysis of 1 ppm of Red Dye in Gasoline. HPLC/UV trace and selected ion chromatograms for m/z 381 (red pigment) and m/z 249 (orange pigment) from 6:50 to 34:10 min.]
-------
[Slide: Mycotoxin structures (substituents R1-R5 on the common skeleton):

Compound     R1    R2     R3     R4    R5           Molecular Weight
T2           OH    OAc    OAc    H     i-C4H9CO2    466
HT2          OH    OH     OAc    H     i-C4H9CO2    424
DAS          OH    OAc    OAc    H     H            366
DON          OH    H      OH     OH    =O (a)       296
T2 tetraol   OH    OH     OH     H     OH           298

F2: molecular weight 318]
-------
[Slide: Spiked Urine. Selected ion chromatograms (m/z 319, m/z 384) and total ion current for DON (2 ng) and DAS (2 ng) from 1:30 to 10:30 min.]
-------
-------
QUESTION AND ANSWER SESSION
MRS. LAURIN: Any idea
what kind of sensitivity moving belt LC/MS has
for carbamates?
DR. VOYKSNER: From other
comparisons, I would say it's probably not too
good. I think the closest comparison is carbamate
pesticides analyzed by moving belt LC/MS, which was about
a factor of 10 to 20 less sensitive than thermo-
spray HPLC/MS.
MR. TELLIARD: It's lunch
break. Lunch is being served next door. We're due
back here at 1:30. I'd like to have you all come
in for the afternoon session on time so that we can
get out on the HMS Sinkfast at our due date. Thank
you very much.
(WHEREUPON, a lunch break was taken.)
120
-------
MR. TELLIARD: Our first
speaker this afternoon is not Bruce Hidy, but Marv
is going to move up and fill in. Again, continuing
the discussion that we started before lunch, he'll
be discussing LC/MS and its application for
environmental measurement. Marv.
121
-------
MARVIN VESTAL
UNIVERSITY OF HOUSTON
CHEMISTRY DEPARTMENT
RECENT ENVIRONMENTAL APPLICATIONS
OF THERMOSPRAY LC/MS
MR. VESTAL: I'm afraid
my title may be a little bit misleading because I
had thought we might get around to actually doing
some real environmental applications. We've been
involved in developing the thermospray technique
for about 10 years, but the definition of an
academic analytical chemist probably applies to
me even though I don't really consider myself one.
The definition I heard is that an academic analytical
chemist is one who never runs any real samples.
What I would like to do is to tell you a
little bit about the thermospray technique, how and
why it works, without going into all the details.
I could spend a couple of hours telling you every-
thing I know and some things I suspect, but I'm
not sure that would be productive. I'll just give
a little bit of an overview of how the technique
works, some of the instrumentation that's involved,
and then give some examples of the various
122
-------
alternatives that you have available.
Bob Voyksner gave, I thought, an excellent
talk on thermospray this morning and with some
real examples. I'm envious of some of his results.
However, you have to realize that he was doing all
that on one leg. He was using the technique which
is sometimes called thermospray ionization, some-
times called direct ion evaporation, in which the
primary ions are produced directly by ion evapora-
tion from charged liquid droplets.
This technique requires that you have a
substantial concentration of ions in solution,
on the order of 0.1 M for it to work well. That
doesn't mean your sample has to be in anything like
that concentration, but you have to have something
that's ionized in solution, and in many cases of
real liquid chromatography that is not what you
have. Furthermore, if you're using something like
ammonium acetate, that technique requires that your
sample have a higher proton affinity, for example,
in the positive ion mode than ammonia, and there
are a great many environmental samples for which
that is not true.
So, from the beginning in our thermospray
systems, we have had an alternative method of
123
-------
ionization, such as a hot filament, which all of you
doing mass spectrometry are very familiar with.
It's a nuisance, but it does provide a very effi-
cient way of producing ions when you don't have
them by some surprising new technique. We find
that external ionization is extremely important for
many applications as is the ability to do both posi-
tive and negative ions. I will try to give some
examples of where one of these modes works and
others do not.
First of all, I'd like to tell you a little
bit about what thermospray is, and since we made up
the word, I guess our definition should count.
If you put the words thermo and spray together, you
get something like the production of a jet of fine
liquid particles by heating. That's basically
what we're doing. We force the liquid through a
capillary tube, typically on the order of 100 to
150 microns in internal diameter, and supply enough
heat to that tube to vaporize almost all of the
liquid. It is important that we not quite reach
100 percent vaporization. Out of this capillary we
produce a supersonic jet of vapor which contains
some entrained liquid droplets or solid particles.
Since the samples that we are interested in are
-------
generally somewhat less volatile than the solvent,
these sample molecules tend to be contained in the
droplets that survive this initial process.
Now, it is also true that if one has ions
present in solution, then you can produce ions by
ion evaporation from the charged liquid droplets or
solid particles. This process is in fact quite
analogous to ordinary field desorption. The
difference is we don't apply any electrical field.
The electrical field is self-generated simply by
the charge on the particles and their small size.
If you start off with particles on the order of one
micron in diameter, and put something on the order
of 10^5 elemental charges on it, you can calculate
that the electrical field at the surface is in the range of
10^7 volts per meter, which is the range you need
for field desorption of ions. So the mechanism of
this ion production, to say it very quickly and
very simply, is field-assisted evaporation of ions
from these charged liquid droplets.
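A back-of-the-envelope Python version of that estimate, treating the droplet as a uniformly charged sphere. The droplet size and charge are just the rough figures from the talk; depending on the exact size assumed, the field comes out in the 10^7 to 10^8 V/m range, so this is an order-of-magnitude number only.

    import math

    EPSILON_0 = 8.854e-12      # F/m
    E_CHARGE = 1.602e-19       # C

    def surface_field(radius_m, n_charges):
        """E = q / (4*pi*eps0*r^2) at the surface of a charged sphere."""
        q = n_charges * E_CHARGE
        return q / (4.0 * math.pi * EPSILON_0 * radius_m ** 2)

    # Droplets on the order of a micron across carrying ~1e5 elementary charges.
    for diameter_um in (1.0, 5.0):
        field = surface_field(diameter_um * 1e-6 / 2.0, 1e5)
        print(f"{diameter_um:.0f} um droplet: E ~ {field:.1e} V/m")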
As we discovered some years ago, if the fila-
ment burns out in your thermospray mass spectrometer,
you still have ions present with no visible means
of producing ions. When we first discovered this,
we were quite excited about it. We're still quite
125
-------
excited about it, but it's not the only way of
using thermospray. The very nice thing about this
ionization mechanism is it is an extremely soft
ionization technique and it does not require a
neutral in the gas phase as an intermediate between
the sample in solution and the sample in the mass
spectrometer.
That is a very important distinction because
you can define a non-volatile molecule essentially
as one which does not have a vapor pressure. That
means it doesn't exist in the gas phase as a neutral,
Basically, if a sample doesn't have any significant
vapor pressure, then you're not going to be able to
analyze it by conventional electron impact no
matter what sort of interface or other technique
you use, because that requires having molecules in
the gas phase. Thermospray does, in fact, work for
many kinds of molecules, for example, peptides,
nucleotides and so forth, which do not have any
appreciable vapor pressure; that is, they do not
exist as neutral molecules in the gas phase. We're
producing ions in the gas phase directly from ions
in the solution and this allows a whole host of
things to be done by LC/MS and by thermospray
LC/MS that you couldn't do otherwise.
126
-------
But thermospray LC/MS is also applicable to a
whole range of molecules which are not nearly as
demanding as this, which may be either neutral or
ionic in solution, but are smaller, more volatile,
et cetera. Those are the kinds of cases that I'm
going to talk about primarily today. I will not
show any slides of peptides even though that has
been one of our favorite systems for studying by
this technique and has turned out to be quite
fruitful.
The last point on the previous slide was that
thermospray is not the same thing as direct liquid
introduction. This slide shows what the jet (very
'similar to what Rich Browner showed this morning)
looks like if you don't heat the liquid. There's
actually a solid jet here which is not visible
because of the way the lighting was done and it
breaks up into droplets by the processes that he
discussed this morning.
This is not the way we do thermospray, but
rather we increase the heat so that we get a super-
sonic jet of vapor containing droplets. When you
reach the transition point, you can see a rather
dramatic difference. The particle sizes are much
smaller and determined by the aerodynamics now
127
-------
rather than the size of the nozzle. This is at
something like 50 percent vaporized. If we increase
the heat still further, we get over to a jet
containing fine droplets on the order of one micron
or smaller in diameter. We still don't have good
measurements of the actual distribution, although I
think it is fairly close to a monodisperse aerosol
as well.
This slide shows a schematic of a thermospray
system. This is somewhat different than what Bob
Voyksner showed this morning, but the principles
are very similar. Many of the details are different.
As he mentioned, it's very important to monitor the
appropriate temperatures in order to control the
degree of vaporization. The vaporizer is a capillary
tube mounted into a probe which can go through, for
example, the conventional solids probe inlet that
is available on many mass spectrometers. The
capillary is heated by passing current through the
capillary. The power input to that capillary is
controlled by a thermocouple attached near the
entrance end. By doing that we can compensate for
flow rate fluctuations by automatically adjusting
the power in the feedback system to keep this tem-
perature constant. We also monitor the temperature
128
-------
at the exit of the vaporizer and downstream of the
sampling point, as well as monitoring the temperature
of the block itself. In fact, we use two feedback
systems; one feedback system to control the power
into the vaporizer, and another one which controls
the heater input to the block.
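A minimal Python sketch of the kind of feedback loop just described, reduced to a simple proportional controller. The gain, setpoint, power limits, and toy thermal model are all invented for illustration; the actual system drives a triac-controlled power supply from the thermocouple readings.

    def control_step(temp_c, setpoint_c, power_w, gain_w_per_c=2.0, max_power_w=400.0):
        """One pass of a proportional controller: raise heater power when the
        thermocouple reads below the setpoint, lower it when above."""
        error = setpoint_c - temp_c
        return min(max(power_w + gain_w_per_c * error, 0.0), max_power_w)

    # Toy run: a flow surge at step 1 cools the vaporizer; the loop restores it.
    power, temp = 150.0, 200.0                    # watts, deg C (illustrative)
    for step in range(5):
        temp += 0.05 * (power - 150.0)            # crude thermal response
        temp -= 5.0 if step == 1 else 0.0         # momentary flow surge
        power = control_step(temp, setpoint_c=200.0, power_w=power)
        print(f"step {step}: temp = {temp:6.2f} C, power = {power:6.1f} W")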
If we reproduce the two temperatures and
flow rates, then we can quite reliably reproduce
conditions and obtain the same results from one day
to the next. It now is quite routine to set this
system up and run it every day, all day and all
night, for that matter, if you want to.
The other thing that I want to mention is the
position of the electron beam. What we normally do
in all the systems that I've had anything to do
with, is to supply an electron beam perpendicular
to the direction of the jet in the region, well
upstream from the normal position for an electron
beam in an ion source.
We find that the sampling efficiencies in this
ion source are very similar to what you get in a
conventional chemical ionization ion source, even
though we have a very large flow superimposed on
this. After all, we're vaporizing up to two milli-
liters a minute of liquid, which in the case of
129
-------
water corresponds to something like two standard
liters per minute going directly into the ion
source. The reason we're able to accomodate this?
as Bob mentioned this morning, is we pump on the
other side of this ion source off to a cold trap
and a mechanical vacuum pump.
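The liquid-to-gas arithmetic behind that figure, as a quick check in Python (ideal gas at standard conditions is assumed; the exact number depends on what is taken as "standard"):

    # How much vapor does 2 mL/min of water become once it is fully vaporized?
    liquid_ml_per_min = 2.0
    water_density_g_per_ml = 1.0
    water_molar_mass_g = 18.0
    molar_volume_l = 22.4                 # ideal gas at standard conditions

    moles_per_min = liquid_ml_per_min * water_density_g_per_ml / water_molar_mass_g
    gas_l_per_min = moles_per_min * molar_volume_l
    print(f"about {gas_l_per_min:.1f} standard liters of vapor per minute")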
All that's required to convert a conventional
mass spectrometer to a thermospray system is to
replace the ion source assembly with one designed
for thermospray. We bring the vaporizer probe
in through a probe lock or equivalent, depending on
what's available on the instrument, and we provide
an additional mechanical pump and trap to pump away
the excess vapor. Otherwise, conventional GC/MS
systems can be used with no modification and the
interfaces we've developed don't require any pertur-
bation of the GC connections at all. The GC is not
connected to the ion source when you're doing
thermospray, but it's sitting there so that when
you put your conventional source back in, it's
ready to do GC/MS or what other kinds of measure-
ments you want.
This slide shows a schematic diagram of the
control system where we sense the temperature, feed
it back to a triac control power supply and supply
130
-------
current to the capillary here for the vaporization.
It's all quite straightforward, and now works quite
routinely.
This slide shows an ion source for a Hewlett-
Packard instrument being installed. This assembly
goes in place of the standard ion source and the
optics are identical to the ones that are normally
present on the mass spectrometer. The only part
that's really changed is the source block. The
control system and pumping system are in the back-
ground.
This slide shows the probe which has the vapor-
izer in it, going in through the standard probe
lock into the ion source.
This slide shows the whole system including the
control for automatic recycling of the cold trap.
The vacuum pump is inside the cabinet. The refrig-
erator is outside. Of course, you can use dry
ice or liquid nitrogen if you like to fiddle around
filling traps every half an hour or so, but this
system can run unattended for up to 16 hours and
will recycle itself overnight while you're sleeping.
I mentioned our excitement over this new ioniza-
tion technique and I'm not going to say much more,
but just show one example of the reasons for getting
131
-------
excited. This slide shows some results on adenosine
back in the early days when we first discovered
this effect. With the filament on, the total ion
current shows a rather high background. Two
injections of one microgram each of adenosine show
a reasonable molecular ion, MH+, and significant
amount of a fragment ion on top of some background
at 136. We then turned off the filament and did
the same injections again.
There are two things that are initially surpris-
ing about the results. First, the molecular ion
leapt up by nearly an order of magnitude in total
area when the filament was turned off, and second,
the fragment ion almost disappeared along with most
of the background. Now, that's not always the
case. Some people have concluded from this that
there's no reason to have a filament present; in
fact, it even may be a disadvantage.
Of course, everybody knows what a nuisance it
is to keep a filament running, particularly in an
atmosphere of water vapor and methanol and nasty
things like that. But in fact, if you use a filament
that's designed for hostile environments such as the
thoriated iridium filaments used in non-burnout
ionization gauges, this is not a serious problem.
132
-------
Also, you must put enough voltage on the filament
to get the ions to penetrate into this high pressure
region; we typically use a kilovolt for that purpose.
The characteristics of the direct ionization,
without the use of a filament, are summarized in
this slide. If you have multiply charged ions in
solution, you very often observe the multiply
charged ions in the gas phase, particularly in the
positive ion mode. We've seen up to quadruply
protonated peptides. You don't see these for small
ions because presumably they're too reactive in the
gas phase. But we do produce abundant molecular
ions, generally with little fragmentation. This
can be good or bad depending on whether you're
interested in structure or in getting the maximum
intensity at a particular ion. Of course, the
performance depends rather critically on the
temperature, the capillary dimensions, flow rate
and so forth. We now understand all of those
factors in much more detail and can now control
them rather well, but it still is rather more
critical than running with the filament present.
One of the questions that always comes up
concerns calibration of the mass spectrometer.
With polar solvents such as water and methanol,
133
-------
PFK or perfluorotributylamine do not give good
spectra, so you need some other calibration com-
pound. We have found that the standard mixtures of
compounds such as polypropylene glycol and poly-
ethylene glycol both work extremely well by thermo-
spray and give very reproducible regular spectra
for calibration. This slide shows a mixture of two
cuts of polypropylene glycol. This is a mixture of
1,000 molecular weight and 2,000 molecular weight.
Paul Goodley from HP provided me with this slide.
It was run on one of the new HPs with the 2,000
mass range, and you can see that we have two over-
lapping Gaussian envelopes, one centered near
2,000 and one at about 1,000. Unfortunately, the
tail of the higher mass one is off the scale of the
mass spectrometer, but I'm quite sure those ions
are there if you look for them.
This gives you a regular sequence of ions
spaced 58 units apart. You can use any molecular
weight cut of these glycols and they give a pre-
dictable spectrum which you can use to calibrate
the mass spectrometer. The ions observed are all
ammonium adducts to the individual molecules of
the polypropylene glycol present.
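A Python sketch of the expected calibration ladder, assuming the observed ions are ammonium adducts of individual polypropylene glycol oligomers. Nominal integer masses are used here just to show the 58-unit spacing; a real calibration table would use exact masses.

    # Nominal m/z for [M + NH4]+ ions of H-(OC3H6)n-OH oligomers:
    # repeat unit 58, end groups (H + OH) 18, ammonium 18.
    def ppg_ammonium_adduct_mz(n_monomers):
        return 58 * n_monomers + 18 + 18

    ladder = [ppg_ammonium_adduct_mz(n) for n in range(15, 21)]
    print(ladder)                                        # successive reference ions
    print({b - a for a, b in zip(ladder, ladder[1:])})   # spacing: {58}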
The next slide shows a few examples of some
134
-------
compounds which have some environmental significance
and are of considerable biological importance.
These were all done in the filament-off mode and are
typical of the kind of spectra one sees. The MH+
ion is observed if the proton affinity of the
molecule is higher than ammonia; if not, you may
see the M+ ammonia. Cortisol is a good test case
for a thermally labile compound because it is quite
sensitive to conditions, and people who have done
DLI have sometimes had difficulty getting molecular
ions on this. In this spectrum, obtained on 50 ng,
the molecular ion is the base peak in the spectrum.
One interesting thing that you can do with an
electron beam that you can't do without it, is
electron capture CI, which may be a very powerful
tool both in ordinary CI analysis and in thermo-
spray. If you have compounds present that capture
electrons, then electron capture can give you quite
high sensitivity and specificity for these kinds of
compounds. In the direct thermospray ionization,
there are apparently no free electrons produced, so
one does not see M- ions in thermospray by the
direct ionization. Only if you have the electrons
present using the filament do you actually produce
these. There are many examples of cases which work
135
-------
extremely well with the filament on by electron
capture and don't work particularly well by some of
the other ionization techniques. Chlorinated
benzophenone is one example, and another is this
rather large floppy sugar, avermectin, molecular
weight 874, which gives the M- ion as the base peak
in the spectrum, with some structurally significant
fragments. The masses around the 500 region corres-
pond to breaking off the major pieces around the
central branch.
Another interesting case is the organic acids.
I know Bob Voyksner and others have worked on these.
They don't work well in positive ions and they don't
work particularly well in negative ions filament-off,
but with filament-on in the negative ions, they give
quite high sensitivity. In one example, in the
positive ion mode, 10 micrograms of this sample was
injected and not detected. On the other hand, two
nanograms was injected in the negative ion filament-
on mode, and one sees a very nice response, many,
many times noise. I think we could probably detect
this at around the one picogram level in this case,
in selected ion monitoring.
You can also do hydrocarbons, and I mention
that because some people think you can't do hydro-
136
-------
carbons by thermospray. You can't do hydrocarbons
by thermospray if your solvent has a higher proton
affinity than the sample. If you use methanol or
water, you probably wouldn't do hydrocarbons anyhow
for other obvious reasons, but we've done quite a
bit of normal phase work recently using hexane as
a solvent. So, you can do hydrocarbons, filament-on.
Filament-off, you don't do hydrocarbons because
they just aren't ionized.
This slide shows some of the advantages of
having the filament. This is a pyrethrin pesticide
mixture. The upper trace is the LC/MS total ion
current trace for an injection of a mixture of five
of these at the 50 ng. level. As you can see for
the first three, we get quite a nice response, but
these other two, you would not even know were there
in the filament-off mode. On the other hand, at an
order of magnitude lower concentration with the
filament-on, we see all five components.
One reason that the sensitivity is low for the
filament-off mode in this case is that this was a
gradient from 70 percent acetonitrile to 90 percent
acetonitrile. At the higher concentration of organic
modifier, the direct ionization does tend to fall
off rather rapidly. So when you get up to about
137
-------
90 percent acetonitrile, the filament-off ionization
is not working very well at all.
I just wanted to show one final example from
some work that Al Yergey and Dan Liberato are doing
at NIH. Al generously loaned me these next few
slides. The question comes up quite a bit, "Well,
it's okay for qualitative work but can you do
quantitation?" In fact, yes, I think the answer is
now that you can do quantitation and you can begin
to approach what you can do by GC/MS. That may be
a rather bold statement because there certainly
hasn't been even .001 percent as much work done as
yet, so we still have some things to learn.
This particular example is a mixture of sugars.
They're interested in monitoring glucose in plasma
samples from patients. This is the LC/MS chromato-
gram of these five sugars, done by the technique
that Bob mentioned this morning of using post-column
addition. The separation for these sugars involves
pure water. In this case they were using 0.6 mL/min
of water in the column and added 0.4 mL/min of
ammonium acetate downstream of the column in order
to get ionization without the filament.
The spectrum of glucose shows mainly M+ ammonia
and loss of water; a very simple spectrum. They're
138
-------
interested in doing isotope dilution experiments in
patients, so they wanted to determine how well they
could measure the isotope ratios for something like
this on column.
This slide shows their standard dilution curve.
You can see the sort of scatter they have, but you
have to realize this is down at the .1 percent
level so that in fact, they really are doing quite
well in terms of the standard curve. They are now
applying this to actual measurements of glucose
metabolism in patients at NIH and I'm sure will be
reporting on this work before very much longer.
This is a real application although not really an
environmental one.
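A highly simplified Python sketch of the isotope dilution arithmetic behind that kind of measurement. It assumes the labeled and unlabeled ions do not overlap and respond identically, which real work corrects for; the spike amount and areas are invented.

    def isotope_dilution_amount(spike_amount_mg, area_unlabeled, area_labeled):
        """Idealized isotope dilution: endogenous amount equals the spike amount
        times the measured unlabeled/labeled ion-area ratio."""
        return spike_amount_mg * area_unlabeled / area_labeled

    # Hypothetical: 5 mg of labeled glucose spiked into a plasma aliquot.
    print(isotope_dilution_amount(5.0, area_unlabeled=8.4e5, area_labeled=4.2e5))
    # -> 10.0 mg of endogenous glucose under these idealized assumptions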
I would like to summarize by giving you an
update as to where the technique stands, at least
in my view. Detection limits, of course, depend a
great deal on what kind of sample and what kind of
conditions and even what kind of mass spectrometer,
but our experience with a wide range of not too
difficult samples is that we can usually detect
them in the range of 1 to 10 picograms per second.
I put the per second in there because, of course,
in terms of the absolute amount, it depends very
much on what kind of chromatography you're doing
139
-------
and how wide your peaks are, but if you use some of
these short columns with small particles, getting
peaks a second or two wide is not all that hard.
In these cases, detecting on the order of 1 to 10
picograms of a great many samples is quite feasible.
Getting full spectra on one nanogram is also quite
feasible for a very wide range of compounds.
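As a back-of-the-envelope illustration (not from the talk itself), the
absolute detection limit follows from multiplying the 1 to 10 picogram-per-
second flux by the peak width; the 2-second peak is the case mentioned above,
while the 20-second peak is an assumed value for a wider conventional peak.

    # Rough illustration only: a flux detection limit (pg/s) times the
    # chromatographic peak width (s) gives the absolute detectable amount.
    # The 20 s peak width is an assumption, not a figure from the talk.
    for flux_pg_per_s in (1, 10):
        for peak_width_s in (2, 20):
            amount_pg = flux_pg_per_s * peak_width_s
            print(f"{flux_pg_per_s} pg/s over a {peak_width_s} s peak "
                  f"-> {amount_pg} pg on column")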
The linear dynamic range is at least 10^4.
We can accommodate virtually any liquid flow,
typically for the pumping systems that are available,
we work in the .5 to 2 milliliters per minute range.
Going to lower flows is no problem if you make some
minor changes in the vaporizer, but the standard
commercial systems that are available work best in
this sort of flow rate range. The time constant in
fact is considerably less than one second, so there
really is no detectable peak broadening unless
there's something seriously wrong with the way the
system is being operated.
We can handle any buffer. That is, the thermo-
spray vaporizer will vaporize anything, including
non-volatile buffers in solution. However, these
tend to get deposited somewhere, often in your ion
source or in your pumping line or whatever, so we
really don't recommend using non-volatile buffers.
-------
The other thing they do is clutter up the mass
spectrum by giving you lots of high mass cluster
ions which are a nuisance. Buffers like ammonium
acetate, ammonium formate, trifluoroacetic acid, and
HCl can all be used to produce a buffer at almost
any pH you want, I think, without using things
like phosphates and sodium salts. But it may take
some minor changes in your chromatographic proce-
dures. If you can't do without them, you can in
fact run non-volatile buffers and pay the price by
having to clean the system a little more frequently.
Thermospray can analyze any sample. When I
say any, that's obviously a bit of an exaggeration
because we haven't tried everything yet, but I
would say that a very high percentage of the samples
that we have tried do in fact work by one ionization
technique or another, provided you have the right
match between solvent and sample. You obviously
can't do nonpolar samples of low proton affinity in
polar solvents of high proton affinity if you're
looking at positive ions. It's the same as chemical
ionization in that respect. There's nothing myster-
ious about it. Everything as far as what works and
what doesn't work, is understood in principle,
although, of course, in practice we may not know
-------
enough about the individual molecule to predict
what's going to happen, but we can explain anything.
Finally, thermospray is gradient compatible,
and now we do have an automatic control system
which automatically compensates for both flow
fluctuations and solvent composition and maintains
the fraction vaporized constant throughout the
whole range.
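The automatic control just described is the instrument maker's own; the
fragment below is only a minimal sketch of the general idea (a proportional
correction of heater power toward the heat demand implied by the measured
flow and solvent composition), with made-up constants and hypothetical
function names.

    # Minimal sketch of a vaporizer control loop, NOT the commercial controller:
    # nudge heater power toward the heat needed to vaporize a fixed fraction of
    # the incoming liquid, whatever the flow and water/methanol composition.
    def heat_demand_j_per_g(frac_methanol):
        H_WATER, H_MEOH = 2260.0, 1100.0           # approximate latent heats, J/g
        return (1.0 - frac_methanol) * H_WATER + frac_methanol * H_MEOH

    def update_power(power_w, flow_ml_min, frac_methanol,
                     target_fraction=0.95, gain=0.2):
        flow_g_s = flow_ml_min / 60.0              # density taken as about 1 g/mL
        needed_w = target_fraction * flow_g_s * heat_demand_j_per_g(frac_methanol)
        return power_w + gain * (needed_w - power_w)

    power = 30.0                                    # arbitrary starting point, watts
    for flow, meoh in [(1.0, 0.1), (1.5, 0.5), (0.8, 0.9)]:
        power = update_power(power, flow, meoh)
        print(f"flow {flow} mL/min, {meoh:.0%} MeOH -> heater power {power:.1f} W")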
This slide of remaining problems is one that I
made a year or two ago. Some of them are still
with us, I think, although many of these have in
fact been solved, either in principle or in practice.
The first one I just mentioned, I think now is in
fact finished. We do know how to control the
vaporizer to routinely maintain the proper operating
conditions. If we want to go to lower flows and
maintain the same kind of performance, we do need
to go to smaller diameter capillaries. In principle
that's easy, in practice it presents some problems.
In particular, the smaller the diameter of the
capillary, the more prone it is to plugging, par-
ticularly if you're not careful in keeping the
'In, .
dirt out of your system. I'm afraid we're not very
careful because with the 150 micron diameter capil-
laries we're using, it generally is not a problem
142
-------
and we are really quite sloppy in terms of worrying
about this. We don't put filters in line because
they plug up much faster than the vaporizers them-
selves plug up, so they tend to be more of a nuisance
than a help.
Item #3 is something that we're still studying.
We really do need to have a better understanding of
the downstream vaporization process and how to
maximize the performance for really non-volatile,
really labile molecules. We can do it fairly
routinely, but as we get to the larger things like
the peptides, the sensitivity is quite a lot lower
than it is for smaller molecules. Of course, you're
taking a beating here in many ways, some of which
you have no control over, but I feel that we still
have the potential for improving that. Now we
require on the order of a nanomole, for example,
of a peptide of molecular weight 1,000 or so, which
of course is about a microgram, to get a really good
spectrum from that molecule, while we may require
three orders of magnitude less for a smaller molecule.
I feel that there's still the potential for getting
the sensitivity down into the picomole or even
lower range for these larger molecules, but that
still remains to be worked out in detail.
143
-------
Thermospray is now available for magnetic ana-
lyzers. It's lagging a little behind the quadrupoles
because it is somewhat more difficult, but the basic
problems of dealing with the high voltage, et cetera,
are all well in hand now. Kratos has a system
commercially available for magnetic instruments, and
systems will be commercially available, I think,
for all the standard magnetic instruments very
soon.
Obtaining a better fundamental understanding of
the mechanism of thermospray is something I plan on
spending at least the next 15 years studying. At
that point I probably will retire, and I suspect
there will still be some things to be done. That's
the state of the art as it stands now. Thank you
very much.
MR. TELLIARD: Any
questions?
-------
QUESTION AND ANSWER SESSION
MR. KROCHTA: Bill
Krochta of PPG Industries. You mentioned plugging
due to dirt in your system and dealing with these
small orifices. How about the fact that you have a
lot of salts in that system and you're constantly
vaporizing them, do you have a problem during the
course of a day with this?
MR. VESTAL: No, provided
the vaporizer is properly controlled. That is, if
one runs for an extended period of time with complete
vaporization before the liquid exits from the capil-
lary, then of course you will get plugging rather
quickly if you have non-volatile salts present.
However, the way that the system is normally control-
led is to keep it at, say, only 95 percent vaporized
it comes out of that capillary. The non-volatile
salts then are carried away in the droplets. We
have run, for example, with sodium salts of ion
pairing reagents and things like this for days, in
fact, until we plugged up the ion source. The
vaporizer would not plug, but in fact they would
sit down either in the ion source or the pumping
line. I've actually filled an ion source
-------
completely full of solid material, say five grams
or so. The vaporizer was still fine, but the ion
source was no longer working. In fact, you could
see it happen. Everything is working and all of a
sudden, the ion beam quits and the pressure in the
vacuum system goes down. What happens is that you
actually plugged up the ion exit from the ion
source. In some cases, you couldn't even see
across the ion source, the whole thing was full of
solid. But you just take it out, hold it under the
water faucet, and it's all water soluble so it goes
away, put it back in and it runs.
MR. KROCHTA: If you
don't completely vaporize, would this give you a
problem when you do quantitative work unless you
use internal standards?
MR. VESTAL: It can.
If you really want to get quantitative performance,
it is important, I think, to completely vaporize
before you get to the ion sampling point. This is
why I was talking about the downstream heating.
It is essential that you not completely vaporize
within the capillary tube itself, but it's certainly
desirable for getting the best performance to
complete the process between there and the sampling
-------
orifice, and as Rich Browner mentioned this morning,
that is not a trivial problem at reduced pressures
in the very short residence time. We do fairly
well at doing that in the present versions because
we've learned in an artistic way how to accomplish
that, although I have to admit there are some
details about it that we don't really understand
yet.
But you're right. It is important, if you
really want to do the best quantitative work, to
get the sample completely or essentially completely
vaporized. I think that's one of the sources of
variation that have been observed by some people
where some samples worked and some didn't. I think,
in fact, the way they were operating, that the ones
that didn't work weren't actually getting vaporized.
That's partly a matter of the design of the ion
source.
MR. TELLIARD: Thank you,
Marv.
147
-------
MR. TELLIARD: Our next
speaker is John Ballard. John is going to tell you
how to get two for the price of one, then you can
put a thermospray on it. Everyone should have one
of these. It probably costs $30, $40, I imagine.
John.
148
-------
JOHN M. BALLARD, PH.D.
LOCKHEED ENGINEERING AND
MANAGEMENT SERVICES COMPANY, INC.
DETERMINATION OF DYES
BY THERMOSPRAY IONIZATION AND MS/MS
DR. BALLARD: Dyestuffs
are of environmental interest because of their
large scale production and their potentially hazard-
ous synthetic intermediates and by-products. Just
to give you some idea of the scale and complexity
of the dye manufacturing industry in this country,
the annual production runs at about 350 million
pounds—I think that's 200,000 tons or thereabouts,
at something like 87 different manufacturing loca-
tions. This results in a product line of about
1,000 commercially available products. Forty-eight
to fifty chemical processes are needed to adequately
describe the chemical inputs into this industry,
and the resulting dyes can be classified into 24
different structural groups according to the Colour
Index, which tends to be the Bible of the dye and
organic pigment industry.
The toxicity and/or carcinogenicity of dye-
stuffs and some of their synthetic intermediates
-------
has been well documented. Compounds such as 2-
naphthylamine, 4-aminobiphenyl, benzidine, Basic
Violet 14, Solvent Yellow #2 and Citrus Red #2 have
all been identified as actual or potential carcino-
gens, but there is some evidence that in fact meta-
bolites of some of these compounds are the true
carcinogenic agents. The EPA and the government
have acted to reduce the number of colorants that
can be used as additives in the food, drug and
cosmetic industries in recent years, and of the
compounds that are still permitted, the permissible
amounts have been greatly reduced.
Because of the need for sensitive methods of
detection or identification of these materials in a
variety of matrices, HPLC coupled with UV, visible
or spectrofluorometric detection has been widely
used. Up to the present time, mass spectrometry
has not been routinely applied because of the
volatility requirement for generating electron
impact or chemical ionization spectra. With the
recent developments over the last 5 to 10 years of
the desorption techniques such as californium-252
particle-induced desorption, fast atom bombardment,
secondary ion mass spectrometry and field desorp-
tion, a whole new range of molecules, initially
150
-------
the biomolecules e.g. carbohydrates, peptides,
proteins, nucleosides, have become accessible to
mass spectrometry. We have now applied thermo-
spray ionization to the analysis of dyestuffs.
I will also mention briefly in passing, that
the organic pigments, a specialized group of dyes,
so far have not been accessible to us. We've
obtained about a dozen of these organic pigments
designed for use in plastics and textiles, and
they're generally intended to be impregnated into
the substrate rather than applied in solution.
We've yet to find a suitable solvent that will
enable us to look at these. Of the dozen or so
pigments we've examined, only three could be solu-
bilized in DMSO; they were two sulfonated azo com-
pounds and a polychlorophthalocyanine. Having
achieved solubilization, we thought we were okay,
but in our hands, DMSO was not a good solvent for
generating thermospray spectra, either in the positive
or negative ionization modes.
As a generalization, it became apparent to us
that in contrast to the normal analytical situation
using standards of very high purity, for the dye-
stuffs we examined that situation was the exception
rather than the rule. Even with the dye "standards"
151
-------
we're using, it's obvious that there's a lot more
in there than just the stated dye. Sometimes even
the dye itself isn't present, and I'll show you
an example of that.
In general terms, we do see a lot of peaks
other than the protonated molecular ion and these
other peaks are variously due to solvents, dis-
persants and other compounds which the manufacturers
put in. It's very difficult to find out what these
are because the industry tends to be somewhat
secretive and protective. So we just have to accept
these materials, which are formulations designed to
meet a specific color and use application.
SLIDE 1
The first slide is just to give you some idea
of the equipment we use. The most expensive item
is the Finnigan TSQ instrument, which has the
capability of doing MS/MS analyses. It's been
modified for thermospray ionization by Marvin Vestal
here, using the Vestec thermospray probe. We use a
Waters 6000 pump as a delivery system just to get
the sample and solvent into the mass spectrometer.
We use the Rheodyne injector valve with a 10 micro-
liter loop. At the moment, we're not doing any
chromatography, but we do have a short chromatographic
152
-------
column in-line between the pump and the injector
to damp out pulsations of the pump which, if not
removed, cause severe fluctuations in ionization
in the ion source.
SLIDE 2
The operating conditions are pretty standard,
i.e., a small percentage of methanol in a large amount
of 0.1M ammonium acetate. We find a flow rate of
about 1.3 mL per minute is optimum for us with our
trapping system. As Marvin and others have
mentioned, the control of various temperatures,
particularly the tip and vaporizer temperatures,
are critical. We have put a range up there because
no two days tend to be the same, depending on
which analyte we're looking at.
For mass calibration, we use one of the poly-
ethylene glycol mixes, typically that of average
molecular weight 400, which in combination with
the lower mass thermospray ions gives us good
calibration from about m/z 30 up to say m/z 500-
600, which is adequate for our work. As "standard"
conditions for the collision-activated dissociation,
we start with a collision energy of 20 volts and
argon is the collision gas at one millitorr pressure.
The term "standard" just refers to what we do in
153
-------
our laboratory. We find that those conditions
usually generate good CAD spectra.
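For reference, the conditions quoted above can be gathered in one place; the
sketch below simply restates them as a plain configuration dictionary. The key
names are arbitrary, and the 97:3 (v/v) mobile phase composition is taken from
the operating-conditions slide.

    # The operating conditions stated above, collected as a configuration
    # dictionary. Key names are arbitrary; values are those given in the talk
    # and on the operating-conditions slide.
    operating_conditions = {
        "mobile_phase": "0.1 M ammonium acetate / methanol, 97:3 (v/v)",
        "flow_rate_mL_min": 1.3,
        "mass_calibration": "polyethylene glycol 400 plus low-mass thermospray ions",
        "calibration_range_mz": "about 30 to 500-600",
        "cad_collision_energy_V": 20,
        "cad_collision_gas": "argon",
        "cad_pressure_mtorr": 1.0,
    }
    for key, value in operating_conditions.items():
        print(f"{key}: {value}")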
SLIDE 3
This is yet another view of a typical interface
design. I think maybe my colleague lifted this from
one of Marvin's publications. He may recognize it.
The pump-out line at the bottom here, is at right
angles to the line of flow of the solvent and in
our system it goes to a large cold-finger trap
which is cooled with an FTS cryo-refrigerant unit.
We get a temperature of about -80 degrees C to -90
degrees C using methanol in the trap. From just
doing rough measurements and calculations of what
goes in and what comes out of the trap at the end
of the day, we think that about 95 percent of the
solvent gets caught in the trap. You may wonder
where the other five percent goes, and the answer,
of course, is into the mechanical pump on the
other side of the trap. What comes out of the
mechanical pump when you change the oil is really
a hazardous waste in its own right.
SLIDE 4
This is to refresh people's knowledge of what
the triple quadrupole mass spectrometer is and how
we can use it. In comparison to gas-chromatography
154
-------
where you separate the components of a mixture and
ionize them separately, with the triple quadrupole
you can ionize all constituents of a mixture simul-
taneously and then use the first mass filter to do
a separation. Suppose you select a particular
ion, it may be a molecular ion or a fragment ion,
you don't know which it is before you start. You
allow it to pass into the collision cell which is
pressurized with an inert gas, in this case argon,
at something like one millitorr. The molecular ion
is fragmented and typically you generate a spectrum
which is qualitatively similar to a standard El
mass spectrum and you analyze the fragments that
are formed using the third quadrupole mass filter.
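One way to picture the MS/MS experiment just described is as a filter over
everything ionized at once: the first quadrupole passes only the chosen
precursor m/z, the argon-filled collision cell fragments it, and the third
quadrupole scans the daughters. The sketch below is only a toy; the fragment
list for m/z 344 echoes the Basic Red 14 ions discussed later in this talk,
while the other entries are invented placeholders.

    # Toy picture of a daughter-ion (MS/MS) scan on a mixture ionized together.
    # Fragments listed for m/z 344 are those discussed later in this talk;
    # the other entries are invented.
    mixture = {
        344: [329, 314, 303, 289, 274, 173, 158],   # Basic Red 14 CAD ions
        200: [185, 156, 91],                        # invented second component
        172: [157, 130],                            # invented third component
    }

    def daughter_scan(mixture, precursor_mz, tolerance=0.5):
        daughters = []
        for mz, fragments in mixture.items():
            if abs(mz - precursor_mz) <= tolerance:   # Q1: precursor selection
                daughters.extend(fragments)           # Q2: CAD; Q3: mass analysis
        return sorted(daughters, reverse=True)

    print(daughter_scan(mixture, 344))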
SLIDE 5
This is just to give you an idea of what
actually comes out of the mass spectrometer in
terms of printout. You see there are quite a lot
of spikes in this RIC trace on the bottom. Those
are due to the pulsations of the pump which were
particularly bad when this was recorded. This is
quite a high loading of material in there, and so
you can see that there's a reasonable amount of
spreading from the injector loop and through the
flow line. With our configuration, we have a flow
155
-------
time of about 12 seconds from the injector into the
ion source.
I'm going to divide my material dealing with
dyes into three areas. The first'will be generaliza-
tions on what you might expect to see in the spectrum,
together with detection limits that we've established
for some of these materials. The second area will
be characterization of two or three dye wastes.
The third area will be use of the TSQ to do the
structural determination of an unknown dye.
SLIDE 6
The figures here were selected from a table of
about 16 dyes which we worked up; these were
deliberately chosen to show respectively the best
case (at top), a typical case in the middle (Dis-
perse Orange 13), and the worst case I could find
(Solvent Red 23), which has a large number of
peaks other than the protonated molecular ion in
the spectrum. As I say, that's a typical case,
the Disperse Orange 13. You might also notice
that these two dyes have the same molecular weight
and obviously the same MH+ ion. The question is:
what ARE all of these ions here? And it's a good
question.
From our observations of the purest dyes that
156
-------
we've looked at, I would tend to say that they're
not fragment ions. This is based on our observa-
tions and those of others published in the litera-
ture, that thermospray is a soft ionization process.
Also, we noticed that typically the spectrum we
generate from the molecular ion when we do CAD
experiments, does not contain any or all of the
lower mass ions that are seen in the full-scan
spectrum of the dye sample.
SLIDE 7
We wanted to look at this in a quantitative
way and these are results obtained with dye
standards from a well known chemical supplier.
These are the percent dye content as stated on the
label, tabulated with the percentage of the total
ion current that was actually carried by the pro-
tonated molecular ion. This slide shows that, in
general, there's a reasonably good correlation.
For the Disperse Orange 3, I bracketed the 100
percent simply because no information was given
and so reading the manufacturer's handbook, you
were led to believe that the material is of 99
percent or better purity.
157
-------
SLIDE 8
This second slide is a continuation of the
first. You can see here that we've gone from
reasonable correlation to virtually no correlation,
especially in the case of Solvent Red 23, in which
there's very little of the molecular ion compared
with what's supposed to be in there. Solvent
Red 3 is good. For the first sample of Basic
Yellow 11 we looked at, also from this well known
supplier, the dye content was listed as about 60
percent. So we ran its mass spectrum. Even
expecting the worst, we were a little surprised to
find that there was absolutely none of the expected
molecular ion in there. Experiments showed that
it's not even of the correct dye class. It does
have the correct color, it is yellow, but that's
just about all you can say for it. So this is one
of the pitfalls for the unwary. Don't believe
everything you read on the label.
SLIDE 9
Each one of these seven dyes is from a different
chemical class. We've had a look at detection
limits and we find, like others have reported for
pesticides and other compounds, that even within
the same chemical class, detection limits can vary
158
-------
somewhat widely, say over an order of magnitude.
The detection limits were obtained by scanning the
mass spectrometer from about m/z 150 to just above
the molecular ion region. We weren't doing selected
ion monitoring because in a real life situation,
if you're scanning a sample and you don't know
what's in there, you've got to scan for every
possibility, you can't just go looking for one
particular ion.
The amount we get in the best case, I think,
is 15-20 ng. Those figures are comparable to
numbers I've seen in the literature for pesticides
and other compounds. The boxes that are highlighted
here refer to negative ion sensitivity and you can
see that the detection limit is an order of magni-
tude higher than for the same dye in the positive
ion mode.
Another parameter the EPA uses is called the
method sensitivity, and it gives you a feel for
what you get out for what you put in. Essentially,
it's the slope of the straight line obtained by
plotting the area counts versus the weight of
material. One other point I should make about
this, of course taking the best case, is that if
we can see 15 ng., that's on a 10 microliter
159
-------
injection, then for this particular compound, Basic
Green 4, we can detect it in solutions of 1.5 ng/
µL. I don't care to do the mental gymnastics
to convert that to ppb or ppm, but I think it's
pretty reasonable.
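The conversion the speaker passes over is simple enough to write down;
assuming the sample solvent has a density of about 1 g/mL, 15 ng on a
10 microliter injection works out to roughly 1.5 parts per million.

    # Worked conversion of the stated best-case detection limit.
    mass_ng = 15.0                     # detected amount, from the talk
    volume_uL = 10.0                   # injection volume, from the talk
    conc_ng_per_uL = mass_ng / volume_uL           # 1.5 ng/uL
    conc_mg_per_L = conc_ng_per_uL                 # 1 ng/uL = 1 ug/mL = 1 mg/L
    print(f"{conc_ng_per_uL} ng/uL ~= {conc_mg_per_L} ppm "
          f"(~{conc_mg_per_L * 1000:.0f} ppb), assuming ~1 g/mL density")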
SLIDE 10
Moving along to the analysis of waste streams
from dye manufacture, we obtained some waste stream
samples. These were not chemically pretreated.
The only thing we did to them was to filter the
solutions to make sure that we didn't get anything
solid into the system. I imagine from what Marvin
said, that it wouldn't destroy the LC pump, rather
it would block the vaporizer. From a knowledge of
the manufacturing process, you can look up the
relevant books, find out what the expected inter-
mediates are, and then you can look for them. In
only one case did we find the expected compound.
That's good and bad, I guess. It means that maybe
the manufacturer is doing a good cleanup job and
he's getting most of the impurities out, especially
the dichlorobenzidine. I think most people are
aware of the carcinogenic properties of these
benzidine compounds.
Pigment Yellow 12. We didn't see any of the
160
-------
expected intermediates in the second sample and we
didn't see the same by-product that we'd found in
the first sample, but we did find these acylated
toluidine derivatives and again, from the literature,
you can deduce that in fact this was a waste stream
from Pigment Yellow 14 or another pigment, not
from Pigment Yellow 12.
In some cases, again the expected intermediates
are not present in the waste stream, but there are
certainly some organics present. Again, if I just
put the number there, it means we couldn't identify
them and that is partially because with something
like 1,000 chemical intermediates used in the dye
industry, it is rather difficult to get hold of
all of these and run standards and generate CAD
spectra of all of them. We can make guesses, but
we'd rather not.
Acid Red 114. We did have a sample of the
authentic dye standard and it was interesting that
the dye itself also contained the two by-products
that we detected in the dye waste. These are
condensation products formed at one stage of
the synthesis. P-Toluenesulfonyl chloride reacts
with phenol which is left over from a previous
stage. Some of these compounds go back two or three
161
-------
stages in the synthesis. If you're prepared to put
a fair amount of time and a little imagination into
it, you can track some of these reaction sequences
backwards.
SLIDE 11
The picture here doesn't look quite as rosy.
These two direct dyes were in our waste streams.
You can see that, again, they don't contain any of
the expected intermediates, but they do contain a
large number of other by-products, solvents, et
cetera. It's very difficult to work out what some
of these are, even with the CAD capability.
SLIDE 12
Now, you may remember in one of the first
slides, the typical and worst cases of dyes. Two
of them had the same molecular weights. Well,
here they are with their structures. You can see
that they're very similar. If you only have a
single stage of mass spectrometric analysis, then
because of the soft ionization, and assuming a
pure compound, all you see is 353, the MH+ ion.
The MS/MS capabilities of triple quadrupole mass
spectrometry allow you to differentiate these two.
You can see that although they do have some ions
in common at very low mass and the mid-mass range,
162
-------
these two molecules are clearly differentiated by
their CAD spectra which were recorded under identi-
cal conditions.
SLIDE 13
The third area that we've looked at using the
thermospray and triple quadrupole combination are
some structural determinations on unknown dyes.
This slide shows the full scan thermospray spectrum
from m/z 120 up to 400, of Basic Red 14. You'll
notice the presumed molecular ion at m/z 344.
There's another peak of about 40 percent abundance
at 346 and also these two ions at m/z 189, 174.
SLIDE 14
This spectrum shows the daughter ions generated
by CAD of the 344 ion. The important ions...well,
they're all important usually, but some of the more
important ions are those at 329, 314, 303, 289, 274.
If you look at 344 and wonder what goes to 289,
you'd probably go crazy trying to work out what the
loss of 55 is. Well, it turns out that it's not
55, it's a loss of 40 and 15; 289 is the loss of 40
from 329. These ions here, 173 and 158, are also
important and I'll come back to those in a minute.
SLIDE 15
The ion at 173 was tentatively identified from
163
-------
its CAD spectrum by comparison with the El spectrum
which we had in an old publication from Analytical
Chemistry, of a compound called Fischer's base, which
has this structure. It's a trimethylindoline. It
happens that this compound is a very well known
intermediate. It's used a lot in the dye industry
in the manufacture of cationic polymethine dyes
which are used for dyeing polyester and other syn-
thetic fabrics. We confirmed that this was the
structure by obtaining an authentic specimen of
this and running the CAD of its M+1 ion. That was
identical with the previously generated CAD spectrum.
SLIDE 16
This is the daughter ion spectrum of the ion
at 189. You see there's a loss of 41 to 148 and
other subsequent losses. The loss of 41 and/or 40
which we observed previously in the daughters of
344, is very characteristic of a 2-cyanoethyl
group, and the structure of the dye starts to become
more apparent when you look at the literature and
you see that they can make these polymethine cationic
dyes by condensing Fischer's base with, usually,
substituted aldehydes.
SLIDE 17
So if you get out your calculator and add up
-------
the numbers, this has the right molecular weight,
188. We obtained a sample of that compound and
generated its M+1 ion in the thermospray interface,
did the CAD and again obtained an identical CAD
spectrum, therefore confirming our guess that
this was the second half of the synthetic process.
SLIDE 18
So we have the two molecules there, and you do
the condensation and get this quaternized nitrogen-
containing molecule. There are probably hundreds
of these with different aryl moieties and some
with an extended carbon-carbon linkage as well.
SLIDE 19
Going back to the 344 ion. That's its struc-
ture, and it is the molecular cation. Now the
losses of 15 and 40 are well explained by consecu-
tive losses of methyl groups, losses of the CH2CN
and/or CH3CN, and you can get all the way down to
274. Considering the 173 and 158 ions, this mole-
cule is mass symmetric about that carbon-carbon
double bond and initially it wasn't obvious to
us which half retained the charge and formed the
m/z 173 ion. In fact, it's the right hand side,
and 158, I think, is due to loss of the methyl
from the 173.
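The bookkeeping behind those assignments is just subtraction; the fragment
ions below are the ones quoted earlier from the CAD spectrum, and the loss
labels in the comments follow the speaker's interpretation (15 for methyl,
40/41 for CH2CN/CH3CN).

    # Neutral-loss arithmetic for the m/z 344 CAD spectrum of Basic Red 14.
    precursor = 344
    daughters = [329, 314, 303, 289, 274]
    for mz in daughters:
        print(f"344 -> {mz}: apparent loss of {precursor - mz}")

    # The 'loss of 55' to m/z 289 is really two consecutive steps:
    print(344 - 15)     # 329, after loss of a methyl (15)
    print(329 - 40)     # 289, after loss of CH2CN (40)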
165
-------
SLIDE 20
These are the daughter ions of the 346 peak
that we saw associated with 344 in the material when
we first looked at it. From the approximately 3:1
ratio, we wondered at first if this was an indication
of one chlorine being present. However, none of
the CAD spectra we looked at showed any losses
corresponding to elimination of small chlorine-
containing compounds and so the possibility of
chlorine being present was ruled out. In fact,
there are some similarities in the low mass end of
the spectrum, but the high mass end is very, very
different from that of the 344 CAD spectrum which
again is evidence that we're not just looking at
an isotopic variant. We think that the 346 is a
product that's formed by reduction of the 344 ion
somewhere along the line in the manufacturing
process. Either the indoline ring or maybe the
carbon-carbon double bond linkage is reduced.
SLIDE 21
We looked at another one of these cationic
dyes of known structure. This one is Basic
Orange 21 which has a molecular weight of 315.
We wanted to see if we could be sure that the 173
ion which we said was derived from the right hand
166
-------
side of the Basic Red 14 molecule, was in fact
correctly ascribed. This is the full scan spectrum.
This is one of the higher purity dyes, although I
imagine that m/z 315 is probably only about 30 to
40 percent of the total ion current. We obtained
CAD spectra of these ions at 180 and 198, but we
can't identify them. We think they're probably
solvents or dispersants that are used in these
proprietary formulations. A lot of the information
on these dyes is contained in the patent literature
which is pretty notorious for being ambiguous and
vague and sometimes misleading, so we really could
not identify these.
SLIDE 22
This is the CAD spectrum of m/z 315 from Basic
Orange 21. That's the molecular ion, and the ions
at 300, 285, 270 are due to the expected consecutive
losses of methyl groups from that molecule. 144
is the only other ion at low mass. If you do the
numbers game, you'll find that it's cleavage at the
carbon-carbon double bond, with a hydrogen transfer
and charge retention on the right hand side of the
molecule. So, we conclude that our assumption in
assigning the 173 peak from Basic Red 14 is correct.
I've run out of slides, so that's more or less
167
-------
it, except that I would just like to make a few
conclusions. I think for the analysis of dyes,
you've got to be sure that you're looking at the
dye that you think you're looking at. If possible,
you'd like to have pure dye standards and that's
rather difficult. At the moment we're not in a
position to do quantitative analyses of dye wastes.
It means we've got to get our lab coats on and
actually do some recrystallizations or chromato-
graphy to purify these. We have taken delivery of
a brand new, nice, complicated Spectra-Physics LC
recently, which somebody is learning how to use,
and we eventually want to get on line with the
mass spectrometer and do some chromatographic
separations followed by MS/MS experiments.
In theory, the ability to use the first
quadrupole of a triple quadrupole instrument for
separation of ionized species means that you can
ignore chromatography, but I think you probably
would do that at your peril. It's better to take
advantage of all the parameters and cleanups
available.
That more or less ties it up. I would like
to acknowledge Don Betowski of EPA, who is my
co-worker on this project. Thank you.
168
-------
MR. TELLIARD: Questions?
No questions. Thank you very much.
169
-------
SLIDE CAPTIONS
1. Instrumentation
2. Operating Conditions
3. Schematic of Thermospray Interface
4. Mixture Analysis by Triple Quadrupole Mass Spectrometry
5. Thermospray LC/MS Elution Profile
6. Typical Thermospray Mass Spectra of Dyes
7. Dye Content vs. % Total Ion Current carried by (M+H)+
8. Continuation of Slide 7
9. Detection Limits/Sensitivities for Dyes
10. Characterization of Dye Manufacture Waste Streams
11. Continuation of Slide 10
12. Differentiation of Isomeric Azo Dyes by CAD of their (M+H)+ ions
13. Full-scan Mass Spectrum of Basic Red 14
14. CAD Mass Spectrum of m/z 344 from Basic Red 14
15. Structure of Fischer's Base
16. CAD Mass Spectrum of m/z 189 from Basic Red 14
17. Structures of Synthetic Precursors of Basic Red 14
18. Synthesis of Cationic Methine Dyes
19. Structure of Basic Red 14 and CAD Mass Spectrum of m/z 344
20. CAD Mass Spectrum of m/z 346 from Basic Red 14
21. Full-scan Mass Spectrum of Basic Orange 21
22. CAD Mass Spectrum of m/z 315 from Basic Orange 21
170
-------
(Slides 1 through 22, reproduced as figures on pages 171-192 of the original proceedings, are not rendered in this text; see the Slide Captions list above.)
-------
MR. TELLIARD: Our next
speaker is Walt Shackelford from our Athens
laboratory. He is going to talk about problem
solving and since he's from R&D, this ought to be
novel. Walter.
193
-------
WALTER M. SHACKELFORD
UNITED STATES ENVIRONMENTAL PROTECTION AGENCY
ENVIRONMENTAL RESEARCH LABORATORY-ATHENS
PROBLEM SOLVING WITH MASS SPECTROMETRY AND FTIR
MR. SHACKELFORD: Bill,
I'm glad you put us back in R&D. You had us in
your division just a few minutes ago, and a few
long minutes ago, I'll tell you.
What I'd like to talk about today is essentially
an extension of work that we've carried on in Athens
for the past several years. It has to do primarily
with understanding or characterizing industrial
effluents. This sort of work has gone on in Athens
for at least the last 12 years. Initially, indus-
trial effluents were studied one plant at a time,
and work was done with kraft paper mills, organo-
phosphorus pesticide manufacturing and in textile
dye works.
With the advent of the consent decree, indus-
trial wastewater studies took on a much different
profile in that a nationwide study was undertaken
to look at a specific group of compounds in 21
industrial classes plus the publicly-owned
194
-------
treatment works.
Our contribution to that in Athens has been,
first of all, to make use of our experience in
earlier work and help determine simply through data
that we had in data base management systems, which
compounds among the priority pollutants were worthy
of study in resolving the original court list of
compound classes. Since then, we have aided the
Effluent Guidelines Division by matching spectra
in reference libraries to EGD contractor's GC/
mass spec analyses, to determine what compounds
other than priority pollutants might be in indus-
trial wastewaters that at some later time might
become important.
The third part of this study has been to try
to determine some of those compounds that were not
determined by spectrum matching and to perhaps see
the problems or design a protocol for identifying
unknown compounds in some sort of efficient manner.
I'm sure that if we checked in the audience, every-
one would have a good idea about how to get an
identification of a compound for which there was
no standard nor any reference spectrum, and
probably you'd come up with something very similar
to what we came up with. If you look in the
195
-------
literature, that's exactly what you'll see.
However, in making this efficient, we seemed to
have had to learn by trial and error.
Before I go any further, I would like to
acknowledge that some of the data that will be
presented here was acquired at Research Triangle
Institute. Dr. Joan Bursey was the project officer.
The GC high resolution mass spec work was done by
Mike Harvey, at that time at Harvey Laboratories.
I would also like to acknowledge the influence of
Leo Azarraga at our laboratory, who first made a
good interface for a light pipe, making GC/FTIR a
possibility; and John McGuire and Don Betowski who
off and on worked on one of the unknowns I will
show you, for several years.
SLIDE 1
This is an overall picture of the frequency
of occurrence of the most frequently occurring 50
compounds that we saw in our computer survey of
the screening phase of the Effluent Guidelines
work. As you notice here, the priority pollutants
are certainly some of the most frequently occurring
compounds. However, if you'll look again, you'll
see—and I'll do this for you—that only 12 out of
the first 50 compounds are priority pollutants.
196
-------
This has something to say about the need, perhaps,
for surveying and characterizing industrial effluents
beyond just looking at priority pollutants. I
don't think it's any surprise to anyone that's
done any analysis that priority pollutants aren't
the whole story.
SLIDE 2
This second slide gives you an indication of
just how many possible compounds are there that we
don't identify. In our data base management system,
we logged approximately 3,000 spectra that we
believe we observed five times or more within a
tenth of a retention time unit. What this means
is, at least as far as we're concerned, there are
a lot more compounds that do not match our spectrum
library than do. Of these, we chose some 55 for
further study to try to identify them.
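A minimal sketch of the bookkeeping behind that "five or more times within a
tenth of a retention time unit" rule is shown below; the field names and the
toy observations are hypothetical, and only the two thresholds come from the
text.

    # Sketch of flagging recurring 'unknowns' by relative retention time.
    # Real spectrum matching is reduced to a placeholder key; the data are
    # invented, only the 5-occurrence / 0.1-RRT-unit rule is from the talk.
    from collections import defaultdict

    observations = [                    # (spectrum_key, relative retention time)
        ("unknown_A", 12.31), ("unknown_A", 12.32), ("unknown_A", 12.33),
        ("unknown_A", 12.30), ("unknown_A", 12.34), ("unknown_B", 7.02),
    ]

    def recurring_unknowns(observations, min_count=5, rrt_window=0.1):
        groups = defaultdict(list)
        for key, rrt in observations:
            groups[key].append(rrt)
        return [key for key, rrts in groups.items()
                if len(rrts) >= min_count and max(rrts) - min(rrts) <= rrt_window]

    print(recurring_unknowns(observations))     # -> ['unknown_A']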
SLIDE 3
This brings us to the subject of the talk:
problem solving with GC/MS and GC/FTIR. I recall
here at this group in the last eight years we've
had only one talk that concerned itself with GC/
FTIR. I believe it was done by Jim Brasch of
Battelle.
As we've seen from the earlier slides,
197
-------
characterization of any sample is going to require
identification of unknowns. Next, if spectrum-
matching fails, barring any secondary information,
we're going to have to do manual interpretation.
The gist of this talk will be that the use of
alternate instrumental techniques may lower your
overall cost and may even increase your success
rate. There are no promises here, but that seems
like a reasonable tack to take.
SLIDE 4
To give you some of the limitations of spectrum-
matching, in a study that was done by Bill Budde
and Steve Heller of EPA several years ago, it was
found that if you matched the compounds known to be
manufactured in industry with the compounds in the
EPA/NIH reference spectrum library, you'd find less
than a 20 percent overlap. This means that there
are a lot of compounds manufactured that we do
not have reference spectra for. Also, if you
total up the gas phase infrared spectra that are
in libraries, you probably will not reach the
10,000 level. One thing to be said for the present
collection of gas phase infrared spectra is that
they are quite pertinent to water analyses. They
were chosen for that reason.
198
-------
Finally, and this is an important point as far
as we're concerned, if you don't have some sort of
corroborative data, your confidence levels for
everything but the best spectrum match are going to
be very poor.
SLIDE 5
I'd like to illustrate with this slide. This
is some work we did in evaluating the efficiency
of spectrum-matching. The bottom curve here is
data that we took from the literature. It came
out of Fred McLafferty's laboratory at Cornell.
The X axis here is the match quality increasing,
and along the Y axis, the number of hits that were
confirmed. As you can see, at low levels of match
quality, we have low levels of confidence and if
we have a high degree of match quality, we're
going to see something on the order of 80 percent
confidence in our match.
The two upper traces, however, involve data
that we collected in Athens. This data all has
retention time corroboration. Not only did the
spectrum match, but the retention data matched
with that of our data base management system. As
you see, even down for very low levels of K, we're
talking about something on the order of 60 percent
199
-------
confidence we could have in a match.
This point is even further confirmed when you
see the divergence of these two curves. The bottom
curve is for all compounds and the top curve is
that of the compounds except carboxylic acids.
When we first plotted the data and saw the downward
curve, we felt like there was something wrong.
It turns out that our retention data for
carboxylic acids of chain length greater than
14—this is all packed column data—was so impre-
cise, that we really were only guessing as to
whether or not a carboxylic acid was within a two
carbon length. The mass spectra of all these
are fairly close. When we take the carboxylic
acids out, we're left with essentially the same
sort of confidence at the high end of matching
parameters as the earlier study in McLafferty's
lab.
If our spectrum-matching program hasn't helped
us, the next thing we probably will try is spectrum
interpretation. The only problem is that in most
laboratories, this is going to be very expensive
because it requires time from your most expert
people. I realize that in some industries, espe-
cially where there's a lot of synthesis being done,
200
-------
it probably is cost effective to keep one or more
people in the laboratory who do interpretation
regularly. In most of our laboratories doing
water analysis, however, we're so busy trying to
get the samples through that we really can't spare
any of this expert help to sit down and interpret
spectra. Again, unless there's some confirming
evidence, the confidence levels even for a spectrum
interpretation, are going to be fairly low.
The solution to this cost problem, to us at
least, seems to be to target for identification only
those compounds that are deemed to be high priority.
Another thing is to build a data base management
system of identifications for later reference.
Our experience has been that a local library or a
historical library, comes into play very, very
often in determining the identification of com-
pounds.
Another solution is to acquire data from more
than one independent technique. This may be, in
the long run, cheaper than having a person on board
to do interpretation. Our experience was that we
could contract out high resolution mass spec runs,
for instance, much cheaper than we could buy a
high resolution mass spectrometer for ourselves.
201
-------
The criterion we use to determine whether or
not a compound is of high priority is frequency
of occurrence. Again, you're not going to know
that if you don't keep your local data base and
record compounds that you have identified. The
second criterion would be high concentration.
This is the type of situation that would cause the most
perturbation to the environment. Finally, does
the compound have an adverse effect on the system?
This is the case where a compound may only occur
once and there might not be much of it, but it
may cause easily identified problems in the
environment. In this case, you've got to be there
with an identification.
At the present time, we have been working with
the Industrial Technology Division on a case of an
industrial effluent that had an abnormally high COD
but yet the effluent was passing all of the priority
pollutant standards. This involves a series of
compounds of very high concentration but found
only in one effluent. They're not occurring with
great frequency. They are in high concentration
and they certainly have an adverse effect on the
COD from that particular industrial effluent.
That work, incidentally, will be reported on later.
202
-------
The compounds that we chose for
study all had a frequency of occurrence greater
than five. This was an arbitrary cutoff but one
that we felt, from our experience, gave us enough
retention data corroboration that we felt like we
indeed had a compound present. The apparent con-
centration had to be greater than 50 parts per
billion. The apparent concentration was determined
by comparing a response of the unknown with the
internal standard. Now, since we didn't have a
standard for the unknowns, the apparent concentra-
tions could be off by as much as a factor of 100
from the actual concentration. You can even look
at the priority pollutant phenols and see that
much variation in response factor.
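A minimal sketch of that "apparent concentration" estimate, assuming (as the
text implies) a response factor of one for the unknown; the function name and
the numbers are hypothetical.

    # Apparent concentration of an unknown, quantified against the internal
    # standard with an assumed response factor of 1 -- which is exactly why the
    # estimate can be off by as much as a factor of 100.
    def apparent_concentration_ppb(area_unknown, area_internal_std, conc_is_ppb,
                                   assumed_response_factor=1.0):
        return (area_unknown / area_internal_std) * conc_is_ppb / assumed_response_factor

    # Hypothetical numbers: internal standard at 50 ppb, unknown peak twice its area.
    print(apparent_concentration_ppb(120000, 60000, 50.0))   # -> 100.0 ppb apparent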
Finally, as far as effects on an environmental
system, since these compounds were found in indus-
trial effluents, their treatability is in question.
The additional techniques that we believe to
be important to get a determination were high
resolution GC/high resolution mass spec to develop
a possible chemical formula; chemical ionization
mass spec in both positive and negative ion mode
for molecular weight confirmation, when that might
be needed; and finally, GC/FTIR for functional
203
-------
group determination.
SLIDE 6
This is a flow chart that we worked out for
identification of unknowns. The first step is to
rerun the sample extract using fused silica capil-
lary column GC/MS and check again for spectrum
match. In the case that we might have had two
closely eluting compounds that in none of the
samples that we looked at were ever resolved, we
would see the sum of those two spectra consistent-
ly. So, if we could separate them by a fused
silica capillary column, then of course we could
separate them into their components rather than
trying to waste our time determining a mixture.
If indeed we did find a spectrum match due to
our greater resolution, we then checked with our
data base management system to see if we have
found anything like that before. If we have, and
if it matches the characteristics of something
that is already in our data base, we then look for
the standard to try to confirm the identification.
That did not happen but twice in the ones we looked
at. If there's no spectrum match or if the data
base management system does not bring up a match
for that compound, we then have to decide on
204
-------
additional data needed.
Some of the inputs to this are whether one has
a good idea if the molecular ion is present in the
sample for that particular compound, how closely
related are compounds that we found in the spectrum
matching program, and finally, do we think that we
really need to go the expense of high resolution
mass spec to try to determine that molecular formu-
la. If indeed the case is that we need all of
them, then we get all that data, try to propose a
structure, and get a standard.
SLIDE 7
Our confirmation criteria were that the reten-
tion with a co-injected standard must be identical
on the capillary column and the mass spectrum
identical with that of the standard. Also, if one
had FTIR available, you would certainly want to
get spectral confirmation there as well.
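Read together, the flow chart and the confirmation criteria amount to a small
decision procedure. The sketch below writes it out; every analytical step is a
stub returning a canned answer, and all of the names are hypothetical, so only
the control flow (which mirrors Slides 6 and 7) is meaningful.

    # Sketch only: the identification workflow of Slide 6 plus the confirmation
    # criteria of Slide 7, with every analytical step reduced to a stub.
    def match_reference_library(spectrum):  return None   # no library hit
    def match_local_dbms(spectrum):         return None   # not seen before
    def molecular_ion_uncertain(spectrum):  return True
    def run_cims(extract):                  return "pseudo-molecular ion"
    def run_gc_hrms(extract):               return "candidate elemental formulas"
    def run_gc_ftir(extract):               return "functional-group information"
    def propose_structure(spectrum, data):  return "proposed structure"
    def standard_confirms(candidate):       return False  # RT and spectrum must both match

    def identify_unknown(extract, spectrum):
        hit = match_reference_library(spectrum) or match_local_dbms(spectrum)
        if hit is None:                                # decide on additional data
            data = {"HRMS": run_gc_hrms(extract),      # possible chemical formula
                    "FTIR": run_gc_ftir(extract)}      # functional groups
            if molecular_ion_uncertain(spectrum):
                data["CI"] = run_cims(extract)         # molecular weight check
            hit = propose_structure(spectrum, data)
        return hit, standard_confirms(hit)

    print(identify_unknown("extract", "unknown spectrum"))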
SLIDE 8
This is the first example of one compound that
was identified. This is a good example of why not
to trust your spectrum matching program or data
base unless you have some confirming information
along with it. If you take a look at this spectrum,
we have a peak at 199 and then a loss of 15 and a
205
-------
loss of 44. Also we have a big peak at 91, indi-
cating that perhaps we have an alkyl group on a phenyl
ring. We did not get a match in the reference
library, although the library indicated that there
were compounds of similar spectrum that were
alkylphenylsulfonamides.
SLIDE 9
The first thing to do was check and see if we
indeed had the correct molecular ion. We ran iso-
butane CI for this and found the pseudo-molecular
ion at 200. Thus, we felt like we had
the correct molecular weight. Then comparing the
actual spectra that are listed in the EPA/NIH
library, we found that the two sulfonamides which
are actually isomeric with this compound, had very
poor spectra. As a matter of fact, they only had
six peaks in the whole spectrum. So the matching
program could not generate a good answer. Never-
theless, when we checked the retention time of
one of those compounds we were able to get as a
standard, it was incorrect. We believed this to
be a different isomer from those listed in the
reference.
SLIDE 10
The second unknown that I'd like to go through
206
-------
that illustrates some of our points is this com-
pound. It had a molecular weight of 164, which was
confirmed by chemical ionization. This compound
was not only seen in the effluent guidelines study
but was seen in an effluent study some three or
four years before that. In that study, we got
GC/FTIR data and made what we thought to be a
reasonable determination of the structure. It
turns out, after getting high resolution data,
that we were completely wrong in the earlier
study.
SLIDE 11
The next step, after determining our highest
mass was indeed the molecular weight, was to get
high resolution data. Of course, the computer will
calculate every possible formula that it can, given
the elements. The more elements you give it, the
more possibilities it can come up with. But if you
look at this one closely and have the benefit of
examining the spectrum closely, you'll see that
these two that have sulfur in them are probably
incorrect because we don't observe any sulfur
isotopic peaks. These two that both contain fluor-
ine were later eliminated with the FTIR data.
This one with phosphorus is fine, but you can't
207
-------
find an analogue to it in any of the daughter ions
down at 149 or 131. It turns out that our proposed
molecular formula for that is C10H12O2. We took
our infrared data, compared it to the infrared li-
brary and there was no exact match for it. Never-
theless, a close match, or one that seemed to
be close to us, was a dihydrobenzofuran.
SLIDE 12, SLIDE 13
Take a look at the actual FTIR spectrum of
the unknown. You can see similarities in the
region at and in the region at. The region has
some extra alkyl character that we felt belonged
in an ethyl group which, as you look at the mass
spectrum, you see the corresponding M - 29 loss. This
peak, at about 3,600 wavenumbers, is due to the OH
stretch, but the OH, to give a sharp peak like
this in the gas phase, cannot be intramolecularly
bonded. That is why we have decided the oxygen
would be on the opposite side of the ring from the
hydroxyl group.
We were unable to find a standard to confirm
this compound. In a later edition of the NIH/EPA
spectrum library, there is an isomer of this com-
pound. It is a dimethyl isomer rather than being
the ethyl, however, and it has the hydroxyl and
208
-------
the oxygen in the furan ring adjacent. We believe
this to be incompatible with the infrared spectrum
showing the free hydroxyl group.
SLIDE 14
Here's a summary of some selected analysis
results and to give you some idea of what I feel is
the power of the added information from FTIR. The
only unknown in this group that we were able to get
infrared data on is one of two that we were able to
assign a structure to. The high resolution MS data
gave us the ability to come up a reasonable formula,
but only with the FTIR were we able to assign a
good structure.
As you can see from that table, we didn't get
FTIR data on every unknown compound. One of the
problems with FTIR is that only a small reference
library is available. The lack of sensitivity,
though, in our case, seemed to be our biggest
problem. The way we were able to get around that
in the few cases where we were successful with
FTIR, was to use the Unicon sample concentrator as
a front end for the GCIR. This way we were able to
inject up to 50 microliters and thus
increase the amount of injected material
by a factor of 10 to 50.
209
-------
SLIDE 15
There are quite a few interesting references
in the recent literature combining GC/mass spec
with GC/FTIR and also some applications that I
offer to you here. Interestingly enough, in all
the recent applications of FTIR, you'll always see
it either combined with GC/mass spec in the same
instrument or at least the data combined with
independently taken GC/mass spec data. The reason
for this is that the two forms of data are
so complementary that summed together, they form
a much bigger total information package than taken
separately.
Wilkins was really the first person to inter-
face a GC/mass spec/FTIR system. He was at Nebras-
ka at that time. He has since gone to UC Riverside
and has built a GC FT/MS FTIR system. Richard
Crawford at Lawrence Livermore had a paper in
which he identified, using the same type of scheme
we're talking about here, some unknown compounds
using GC/mass spec and GC/FTIR. Gurka and Betowski
did some work with packed column GC/mass spec, pack-
ed column GC/FTIR and then later, Shafer and co-
workers at Battelle did the same samples using
capillary columns. In that work they found FTIR
210
-------
to be superior to mass spec in identifying compounds.
However, the results are somewhat skewed since
they were looking at alkyl benzenes in which IR,
of course, is much stronger confirmatory tool than
mass spec would be. Finally, the report from which
some of this data is taken, Joan Bursey of RTI in
the EPA report dated 1984.
MR. TELLIARD: Questions?
211
-------
[Slide (graphic): overall compound counts; legible labels: "Overall," "1500," "Priority Pollutant," "Compound."]
212
-------
UNKNOWN COMPOUNDS
3000 SPECTRA SEEN 5 OR MORE TIMES WITHIN
±0.1 RRT UNIT
55 CHOSEN FOR FURTHER STUDY
213
-------
PROBLEM SOLVING WITH GC/MS AND GC/FTIR
o CHARACTERIZATION OF A SAMPLE REQUIRES THE IDENTIFICATION
OF UNKNOWN COMPOUNDS.
o IF SPECTRUM MATCHING FAILS, MANUAL INTERPRETATION IS
REQUIRED.
o THE USE OF ALTERNATE INSTRUMENTAL TECHNIQUES MAY LOWER
OVERALL COSTS AND MAY INCREASE SUCCESS RATE.
-------
SPECTRUM MATCHING LIMITATIONS
o LESS THAN 20% OF MANUFACTURED COMPOUNDS ARE IN
COLLECTIONS OF REFERENCE MASS SPECTRA (BUDDE AND HELLER).
o ONLY ABOUT 10000 COMPOUNDS ARE IN COLLECTIONS OF GAS-
PHASE IR SPECTRA.
o WITHOUT CORROBORATIVE DATA, CONFIDENCE LEVELS FOR ALL BUT
THE BEST SPECTRUM MATCHES ARE POOR.
215
-------
-------
[Flow chart (graphic): problem-solving scheme. Legible boxes: BEGIN; FSCC GC/MS; GC/HRMS; SPECTRUM MATCH?; YES / NO; DECIDE ON ADDITIONAL DATA NEEDED; GC/FTIR; CIMS; PROPOSE STRUCTURE; GET STD; DBMS MATCH. Connections not recoverable from the source.]
217
-------
CONFIRMATION CRITERIA
1. RETENTION WITH CO-INJECTED STANDARD IDENTICAL
ON CAPILLARY COLUMN
2. MASS SPECTRUM IDENTICAL WITH STANDARD
218
-------
219
-------
-------
221
-------
Figure 5.160. Accurate mass measurement data, extract 5855
(acid), pesticides manufacturing industry.
High Resolution Mass Data

  M/E           C    H    O    F    S    P     MMU
  164          10   12    2    0    0    0    -4.8
  (164.0885)    7   13    3    1    0    0    -3.6
                8   11    0    3    0    0    -7.2
                7   16    2    0    1    0    -1.4
                7   17    2    0    0    1     8.1
                7   17    0    0    1    1    -9.6
  149           9    9    2    0    0    0     3.9
  (149.0563)    6   10    3    1    0    0     5.1
                7    8    0    3    0    0     1.5
                6   13    2    0    1    0     7.3
                4   12    0    3    1    0     4.9
                9   10    0    0    0    1    -4.3
                6   11    1    1    0    1    -3.2
                6   14    0    0    1    1    -0.9
                3   15    1    1    1    1     0.2
  131           9    7    1    0    0    0     1.1
  (131.0486)    6    8    2    1    0    0     2.2
                4    7    0    4    0    0    -0.2
                6   11    1    0    1    0     4.5
                3   12    2    1    1    0     5.6
                6    9    0    1    0    1    -6.0
                3   10    1    2    0    1    -4.9
                3   13    0    1    1    1    -2.6

  Measured masses: 164.0885, 149.0563, 131.0486
222
-------
[Figure (graphic): spectrum related to the dihydrobenzofuran discussed in the text; caption illegible in source.]
-------
[Figure (graphic): spectrum with an absorbance scale; caption illegible in source.]
224
-------
[Table (Slide 14): summary of selected analysis results for the unknown compounds; entries illegible in source.]
-------
REFERENCES
A. COMBINING GC/MS WITH GC/FTIR
CHARLES WILKINS, ET AL, UC RIVERSIDE, ANALYTICAL CHEMISTRY,
1981, 113; 1982, 2260; 1984, 1163.
RICHARD CRAWFORD, LAWRENCE LIVERMORE, ANALYTICAL CHEMISTRY,
1982, 817.
B. APPLICATIONS
DON GURKA AND DON BETOWSKI, EMSL-LV, ANALYTICAL CHEMISTRY,
1981, 1819.
KEN SHAFER, ET AL, BATTELLE COLUMBUS, ANALYTICAL CHEMISTRY,
, 237.
JOAN BURSEY, RTI, EPA-600/S4-84-072, 1984.
226
-------
QUESTION AND ANSWER SESSION
MR. GORE: Bill Gore,
American Cyanamid. There have been some recent
reports of dramatically improving FTIR's sensitivity
and resolution with cryogenic trapping techniques.
Have you had a chance to do some experiments there
and do you have any experience to draw on or comment?
MR. SHACKELFORD: No, I
have no experience whatsoever with cryogenic
trapping. I would say that the sensitivity that
has been reported in the literature has been much
better than what we've experienced, but it could be
that in many of our cases we're talking about a
small peak and a large background, and that is
perhaps an interference problem rather than a sensitivity problem.
MR. YOUNG: I'm Jim Young
from the University of Arkansas. A few years ago
we were working with complex samples, or samples of
complex mixtures, and we found that the relative
retention time of a compound would shift if you
pass a sample through treatment so that you removed
a lot of the junk. Did you find this to be a
situation in your samples? I think you mentioned
that you used relative retention time as a key
227
-------
identifier, but it was a problem for us because it
would shift significantly with treatment.
MR. SHACKELFORD: One of
the things that we did do was to go through some
cleanup stages to try to get a clean peak to go to
the GC/FTIR and yes, after cleanup especially, when
we're talking about the same compound but a much
different matrix injected, we did have significant
changes in the relative retention times.
MR. YOUNG: Do you see
that as a major problem in using this particular
confirmation technique?
MR. SHACKELFORD: In the
way we use it, no, because our samples in our data
base management system would be kept separate from
that of any later run acquired under different
conditions.
DR. LESAGE: Suzanne
Lesage, Environment Canada. You're talking about
your data base management system. Is there any way
of sharing with the world this kind of information?
I think everybody doing industrial work has the
same problem of the unknown spectrum and maybe the
NBS library should contain unknowns rather than
knowns.
228
-------
MR. SHACKELFORD: Our
system is built on a commercially available data
base management system called Inform. That's not
the most up-to-date data base management system.
One of our problems is that so much of our data is
packed column. Fortunately, there seems to be a
consensus now on fused silica capillaries and
perhaps in building a new data base, that would be
more appropriate. But I certainly do think it
would be worthwhile to perhaps discuss that, but I
don't know that you would ever be able to get some
kind of uniform system that everyone would feel
like was appropriate for their own laboratory.
DR. ERICKSON: Mitch
Erickson, Midwest Research. A kind of follow-up to
the first question. Walt, as you know from this
study, the throughput on GCIR tends to be maybe
half or less, maybe a quarter, of the throughput
you see on GCMS just because the data systems aren't
nearly as facile as what we're experiencing on
GCMS. With the interface on the cryogenic trapping,
my understanding from their salespeople is that the
throughput goes way down. The second problem with
trying it out is that the interface alone is
$115,000, so you have to want to invest in it.
229
-------
MR. SHACKELFORD: If Bill
had not moved us back into R&D, we'd probably have
that kind of money, but...
DR. ERICKSON: I can
write you a proposal.
MR. TELLIARD: Thank you,
gentlemen. It's time for our school break. There's
cookies and milk in the hallway. We're running
late, as usual, so to get to the HMS Sinkfast,
would you please get your cookie and so forth and
get back in here in the allotted time? Thank you.
(WHEREUPON, a break was taken.)
230
-------
MR. TELLIARD: Our third
speaker, due to the departure of the HMS Sinkfast,
will open tomorrow morning's session and in so
doing, we'd like to start 15 minutes early which is
a quarter to nine for those of you who can tell time.
One quick announcement. The HMS Sinkfast will
be located at Waterside, which is down near Phillips,
and be ready to leave at about 5 o'clock, plus or
minus a couple of minutes. We will then break
here. It's definitely a dress-down affair. The
issuance of life rafts and so forth will then take
place and you can then get on the boat. The best
way to get to the boat would be to go downstairs
either through the lobby bar or around behind it
and out by the swimming pool and then just walk
down the side. Captain John will be there. He'll
be the derelict standing almost in front of the
boat.
Now, to get our afternoon session going, I'd
like to have Samuel To open with his presentation.
231
-------
SAMUEL TO,
UNITED STATES ENVIRONMENTAL PROTECTION AGENCY
OFFICE OF WATER ENFORCEMENT AND PERMITS
PROGRESS REPORT ON DMR QA STUDIES: QUALITY
ASSURANCE PROGRAM FOR NPDES SELF-MONITORING DATA
(Revised presentation submitted.)
232
-------
Progress Report on DMR QA Studies
A Quality Assurance Program for NPDES Self-Monitoring Data
Samuel To
Municipal and industrial wastewater treatment facilities are
regulated under the NPDES as mandated by the Clean Water Act.
Each direct discharger is usually regulated by a unique NPDES
permit, which specifies certain limits on pollutants in the discharge,
The permittees are required to routinely sample and analyze their
discharges and report the data in Discharge Monitoring Reports
(DMRs). The validity of the NPDES Program hinges on the quality of
these DMRs.
Description of the Program
Through EMSL-Cincinnati, the Office of Water Enforcement and
Permits has been conducting a Quality Assurance (QA) program to
assure the quality of NPDES self-monitoring data. The program is
designed to evaluate the major NPDES permittee laboratories' ability
to analyze and report accurate NPDES self-monitoring data.
In 1979, DMR QA pilot studies were conducted in two States.
Responses were good. As a result, a national study was initiated
to include all 7500 major permittees. Since 1980, four national
studies have been completed.
Major permittees under NPDES are sent performance evaluation
samples containing constituents normally found in industrial and
municipal wastewaters. The samples are then analyzed using the
methods employed for reporting NPDES self-monitoring data.
Responding permittees subsequently receive an evaluation of their
data, and are given guidance for checking error sources, advice
for taking voluntary remedial action, and requests to communicate
corrective actions in writing to the State or EPA Regional QA
Coordinators.
Evaluation of Data Quality
This program has provided valuable data in assessing the
quality of DMRs. As illustrated in Figures 1 and 2, improvements
in the DMR QA data have been significant.
Since permittees in each study were largely the same, we can
conclude that the quality of NPDES self-monitoring data has improved.
Data analyses at the permittee level also made possible the
identification of about 3000 permittees that have participated in
all four studies. This group showed slightly higher success rates
than the general population. (Figures 3 and 4)
233
-------
- 2 -
Identification of the Standard Industrial Classification (SIC)
code made possible the tracking of improvements by individual
industries. Tables 1 and 2 summarize the performance of major
industries (i.e., with over 10 permittees in Study 4).
Other Uses of the Data
Besides measuring the quality of DMR data, this program also
encourages the use of proper QA procedures. When permittees receive evaluation
reports with unacceptable data, they are asked to check for sources
of errors. An examination of follow-up information indicated that
a substantial portion of the errors was due to data management problems,
such as transcription, calculation, erroneous units, or misplaced
decimals. In Study 2, almost half of the errors were of this type.
(Figure 5)
Understanding the source of errors is the first step to improve-
ment. Data management errors are relatively easy to correct compared
to analytical problems. It is, however, a very important, yet often
neglected, part of a QA program. Such errors can be minimized by
instituting proper data handling procedures. A comparison of sources
of errors between studies (Figure 5) confirms this, as there was a
much smaller percentage of data management errors in Study 3.
The DMR QA data were also useful in planning inspections and
directing other follow-up. Since there are a large number of permittees,
this program enables EPA and States to concentrate corrective actions
on permittees with greater needs. This results in increased efficiency
of NPDES compliance monitoring.
234
-------
[Figure 1 (graphic): percent of analyses acceptable; details illegible in source.]
235
-------
[Figure 2 (graphic): permittees; details illegible in source.]
236
-------
[Figure 3 (graphic): percent acceptable; details illegible in source.]
237
-------
[Figure 4 (graphic): permittees; details illegible in source.]
-------
>- uj in
SS5
(O
: uj
i en
a
l-
i- o <
in z z
SIC
ESCRIPTION
u a
M O
(0 (J
. — — " . • . i, - "—. -_-~ -— -- -— —rf -^* —v "-v «^j >^- i-i f-i r i r"i r
-------
[Table 2: performance of major industries by SIC code; entries illegible in source.]
-------
[Figure 5 (graphic): sources of errors; details illegible in source.]
-------
PAUL BRITTON
UNITED STATES ENVIRONMENTAL PROTECTION AGENCY
ENVIRONMENTAL MONITORING AND SUPPORT LABORATORY
PROGRESS REPORT ON DMR QA STUDIES: QUALITY
ASSURANCE PROGRAM FOR NPDES SELF-MONITORING DATA
MR. BRITTON: During my
time here today, I want to cover two major topics.
First, some special summary results that provide a
better basis for direct comparison from one study
to another; then some simulation results allowing
comparison of several statistical estimation pro-
cedures when applied to data with known character-
istics. Limits in DMR QA studies have been and
will continue to be based upon a statistical
estimation procedure equal or similar to one of
these.
On the study summaries, Sam To gave the official
study results and the percentage of acceptable data
for each study. However, the basis for limits in
each one of these studies differs. We wondered
whether going to a standardized basis for limits
and reevaluating the study results would lead to
some difference in the percentage of acceptable
data from study to study. Certainly it would lead
242
-------
to percentages which could be more validly compared
from one study to another.
For those who are not familiar with the basis
for our limits, they are developed from two different
kinds of information. First, regressions based on
statistics from the six previous studies are de-
veloped directly from data generated by EPA and
state laboratories in these studies on samples
just like the ones in the DMR QA study. Statisti-
cal estimates are then developed directly from the
study data; again, data produced by EPA and state
laboratories analyzing the DMR QA samples.
By comparing limits developed directly from the
study statistics against limits developed from re-
gression-based statistical estimates, the limits to
be used for each study are selected. In essence,
the two limits are compared and if the limits from
study statistics are completely enclosed within the
regression-based limits, then the regression-based
limits, the broader of the two, are used. If in
any way, the regression-based limits fail to enclose
the study statistics-based limits, then the study
statistics-based limits are used directly, because
there's no question about the relevancy of those
limits to the study samples.
243
-------
Obviously, from one DMR QA study to another,
the six most recent studies would have changed and
therefore, the regressions would have changed. In
comparing and deciding on the broader of either the
regression-based limits or the study statistics-
based limits, there would also be a considerable
change from one study to another in how the limits
were developed. Consequently, we decided to go
back and reevaluate all the study results using one
set of fixed regressions to establish all appro-
priate limits for each of the studies. These
regressions are based upon the six most recent
studies which are related to the DMR QA samples.
Overhead 1 provides an example of one set of
fixed regressions. It happens to be the set for
aluminum, the first analyte, but gives you some
idea of what the regressions are like. Each set
includes relationships between the true concen-
tration (T) of a sample and the mean (X) and
standard deviation (S) that can be expected for •
analytical responses from multiple laboratories
operating within control. The R2 values for
these two fits, R2 being the percent of the total
variability in the data that is explained by the
regression, are also given. For the relationship
-------
between X and T, R2 = 99.7 which is very high,
showing that this relationship fits the data very
well. Standard deviation being a little more
imprecise in nature, only 89.5 percent of the
total variability in the data were explained by
the regression, but that is still a solid linear
relationship and a reasonable basis for estimation.
The columns of residuals in Overhead 1 show
how the estimates from the regressions relate to
the observed statistics, sample by sample, again
indicating that the standard deviation is more
variable and that there are a few individual sta-
tistical estimates which vary somewhat from that
line, but none which vary to any significant extent.
This suggests good relationships and is character-
istic of these regressions.
There are 26 analytes and two concentrations
involved in each DMR QA study, so there were 26
sets of relationships like this which were applied
to two different concentrations for each study.
There were a total of four studies to go back and
look at.
OVERHEAD 2
Using limits for each study that were strictly
based upon the fixed regressions, we obtained the
-------
results that are given in Overhead 2 for the DMR QA
studies. The second column shows the time the study
was actually run—basically an estimate of when the
analysis was done. The next column shows the total
number of reported values received in each study,
which has been fairly consistent. There were more
respondents in DMR QA Study 4 than in any of the
other studies—probably the main reason why the
total number of reported values was highest.
In the fourth column are the percentages of
acceptable data using the special fixed regression
limits. Surprisingly, they are not that much
different from the official percent of acceptable
data results, but the potential is certainly there
for a difference. This apparently shows that the
regression relationships have been fairly consistent
for several years; not changing as much from one
year to another as they did in the early years.
The next column shows the change in percentages
using the first study as a base, and that in DMR
QA studies, the percentage of acceptable data has
increased by 10.4 percent between the first and
fourth studies.
The last column attempts to show the percentage
of improvement in performance since the first study,
246
-------
showing the DMR QA participants have reduced the
amount of unacceptable data they reported by about
40 percent over these four studies. The EPA and
state laboratory data from analyses of the same
samples was similarly analyzed to give some idea
of relative performance by these groups. For EPA
laboratories, the percentage of data acceptable
was 94 percent in the first study related to the
DMR QA Study 1, and 96 percent for Study 4; improve-
ment not unlike the 40 percent seen for the DMR
QA studies. For state laboratories, the percent-
age starts out at 86.2 and goes to 91.4; a
similar 37.7 percent improvement.
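(For example, working from the percentages in Overhead 2, an arithmetic step that is implied rather than stated here: the unacceptable fraction for DMR QA participants fell from 100 - 74.3 = 25.7 percent in Study 1 to 100 - 84.7 = 15.3 percent in Study 4, and (25.7 - 15.3)/25.7 is roughly 0.40, the approximately 40 percent reduction cited. The state laboratory figures, 13.8 percent falling to 8.6 percent, give the 37.7 percent improvement the same way.)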
OVERHEAD 3
Overhead 3 is a graphic representation of the
performance over the three and a half year period
by those three laboratory groups. Again, it's
fairly obvious that all show improvement with time,
which is pretty much what was expected, and that
they are all improving at about the same relative
rate, which was somewhat surprising.
The second topic I want to discuss has to do
with some simulations done on a series of different
statistical estimation procedures, any of which
could potentially have been used as the basis for
247
-------
the background statistics on data from EPA and
state laboratories, which become the basis for all
limits.
OVERHEAD 4
Overhead 4 attempts to briefly depict the major
difference between the four techniques. If the X-
axis is the order of the ranked data, then the first
procedure, the traditional calculation without any
outlier removal, would allow every observation to
have an equal weight of one in the statistical cal-
culation. Where you have traditional estimation
but do some outlier testing before you do the
calculation itself, some of the observations, the
extreme observations on either end, would have a
weight of zero in the calculation. All other
observations would have a weight of one.
If you use a robust estimation procedure—a
procedure that does not use all of the observations
in a data set in order to develop estimates of the
mean and standard deviation of that set—the calcu-
lations are different, compensating for the fact
that not all of the data are used.
The robust estimation procedure we are using
involves outlier testing first, then a robust
calculation on the middle 70 percent of the
248
-------
retained data. In other words, any observations
beyond the 15th and 85th percentiles of the retained
data have weight zero, while data within this inter-
val have full weight. The influence of observations
transcending either percentile is prorated on the
portion falling within the interval.
The fourth procedure is called the biweight
estimator. It involves development of a weighting
factor for every observation such that more extreme
observations have less weight and, therefore, less
influence on the biweight estimates.
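A minimal sketch of the four procedures and the mixture simulation, in Python, may make the comparison concrete. It assumes common textbook forms: a simple three-sigma trimming rule stands in for the formal outlier tests, numpy's percentile interpolation approximates the 15th/85th percentile rule, and a one-step Tukey biweight with MAD scaling is used for the biweight estimator. The procedures actually used in the DMR QA work differ in detail, so the numbers it prints will only roughly track the overheads.

import numpy as np

def traditional(x):
    return x.mean(), x.std(ddof=1)

def trimmed(x, z=3.0):
    # stand-in for the outlier tests: drop the most extreme point while it
    # lies more than z sample standard deviations from the mean
    x = np.sort(np.asarray(x, float))
    while x.size > 3:
        m, s = x.mean(), x.std(ddof=1)
        i = int(np.argmax(np.abs(x - m)))
        if abs(x[i] - m) > z * s:
            x = np.delete(x, i)
        else:
            break
    return traditional(x)

def robust70(x):
    # mean from 15-percent Winsorized data, SD from the middle 70 percent
    p15, p85 = np.percentile(x, [15, 85])
    return np.clip(x, p15, p85).mean(), (p85 - p15) / 2.0729

def biweight(x, c_loc=6.0, c_scale=9.0):
    # one-step Tukey biweight location and biweight midvariance scale
    x = np.asarray(x, float)
    m = np.median(x)
    mad = np.median(np.abs(x - m))
    u = (x - m) / (c_loc * mad)
    w = np.where(np.abs(u) < 1, (1 - u ** 2) ** 2, 0.0)
    loc = m + np.sum(w * (x - m)) / np.sum(w)
    u = (x - m) / (c_scale * mad)
    keep = np.abs(u) < 1
    num = np.sum(((x - m) ** 2 * (1 - u ** 2) ** 4)[keep])
    den = abs(np.sum(((1 - u ** 2) * (1 - 5 * u ** 2))[keep]))
    return loc, np.sqrt(x.size * num) / den

# 1,000 samples of 50: 40 values from N(100,10) plus 10 from N(130,10)
rng = np.random.default_rng(1)
procs = {"traditional": traditional, "outlier-trimmed": trimmed,
         "70% robust": robust70, "biweight": biweight}
sums = {name: np.zeros(2) for name in procs}
for _ in range(1000):
    x = np.concatenate([rng.normal(100, 10, 40), rng.normal(130, 10, 10)])
    for name, f in procs.items():
        sums[name] += f(x)
for name in procs:
    mean_est, sd_est = sums[name] / 1000
    print(f"{name:16s} mean of mean estimates {mean_est:6.1f}, mean of SD estimates {sd_est:5.1f}")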
We are currently investigating the effect of
these estimation techniques on data from studies
like ours and will probably continue to look for
and investigate alternative ways in the future.
This investigation is still in progress, so final
conclusions are not available on whether we will
change from the 70 percent robust procedure current-
ly in use, however, some early results are presented
in the following overheads.
OVERHEAD 5
The first set of simulation results in Over-
head 5 involves a thousand samples taken from a
normal population with mean 100, standard deviation
10, and 20 observations in each sample. The mean
249
-------
and standard deviation of the underlying population
were estimated from each of the thousand samples
using the four estimation techniques.
The two columns under "EST. OF MEAN" show the
traditional arithmetic mean and estimate of var-
iability (the standard deviation) to give a general
basis for comparison of the 1000 estimates of the
population mean obtained using each of the four dif-
ferent estimation procedures. In general, the only
difference among means produced by the four proce-
dures involved their variability; as expected, the
traditional estimates had slightly less variability.
Estimates from the traditional procedure should
always have the least variability when the popula-
tion is normal. For practical purposes, you get
the same estimate by all these procedures.
The last two columns characterize estimates
of the standard deviation developed using the four
procedures. As before, traditional has less varia-
bility, but there is really little practical differ-
ence, except for an apparent 5 percent high bias in
the biweight estimates. Origins of the bias in bi-
weight estimates of the standard deviation are un-
clear, but if used in its current form when the un-
derlying population is a normal, it would slightly
250
-------
overestimate the standard deviation and therefore
lead to slightly broader limits.
Overhead 5 also contains results from 1,000
samples of size 50. As you would expect with a
larger sample size, the variability of estimates is
reduced for all procedures. The only real differ-
ence seems to be the bias on the estimate of the
standard deviation from the biweight procedure;
as before, it's about five percent high.
OVERHEAD 6
Okay, that shows what happens with normal dis-
tributions. What happens when you do not have a
perfect underlying normal population? Certainly,
we do not always see data from a perfect normal
distribution during our studies. Overhead 6 shows
results for simulations involving 1,000 samples
again, but the data is a mixture of two underlying
populations. For the first set of results, each
sample of 50 contained 40 observations from a nor-
mal population with mean 100 and standard deviation
10. The other 10 observations in each sample came
from another normal, with mean 130 and standard
deviation 10. So, 20 percent of each sample re-
flected another distribution with a 30 percent
higher mean.
251
-------
We are really interested in estimating the
characteristics of the 80 percent of each sample
which represents the good data, and we want to
look past the bad data which is represented by
the data with a 30 percent bias. Therefore we
would prefer our estimation procedure to give mean
estimates of 100 and standard deviation estimates
of 10, which reflect the capability represented in
the good data here.
The mean estimate from traditional calculation
techniques is 106.8 and is somewhat better with the
outlier testing. Mean estimates from the two
robust procedures are slightly better yet, although
the difference is not dramatic. The estimates of
the standard deviation are all similar; instead of
being 10, they are all over 15. As a result,
where we should have estimates of the mean of 100
and estimates of the standard deviation of 10,
we've got estimates of the mean that run 105 or
better, and estimates of the standard deviation
that exceed 15. Obviously, evaluation limits
developed from these estimates will be much broader
than they should be to represent the bulk of the
data.
The second set of results in Overhead 6 show
252
-------
the effect of slightly increasing the bias of the
20 percent bad data in our simulation. The estimate
of the mean from the traditional calculation has
gone up to 112, whereas the biweight has already
recognized all the bad data and has dropped the
mean estimate down to 100.7. The other two proce-
dures have both partially recognized the bad data,
but haven't successfully ignored all of it. The
standard deviation for the traditional calculation
is over 26, for the next two is over 16, and for
the biweight is down to 13. They should all be 10.
In the last set of results in Overhead 6, the
bias of the bad data has been increased to 90
percent and the estimate of the mean from the tra-
ditional calculation has increased to 118. The
other procedures have recognized virtually all 20
percent of the bad data and are estimating 100.
Obviously, there are some residual effects on the
mean estimates of the standard deviation, but by
far the worst is the 38 produced by the traditional
calculation as a mean estimate of the standard
deviation.
A tremendous mixture of data comes in during
these studies. As could be expected when anywhere
from 15 to 25 percent of the data are considered
253
-------
not acceptable, errors involving factors of 10, 100
or 1,000 are not unusual. Perhaps five percent of
the data will reflect a decimal placement error of
this kind. Factors of five, two and somewhat
lower biases are also quite common. Although 20
percent of a particular kind of biased data is not
a frequent occurrence in our studies, these simu-
lations certainly demonstrate the effect of such
data when you are really interested in the charac-
teristics of the 80 percent that represent good
performance.
OVERHEAD 7
We have seen what happens when the bias
changes, but what happens when the standard
deviation changes? In Overhead 7, 80 percent of
each sample comes from a normal distribution with
mean 100 and standard deviation 10, while for the
first set of results, the remainder comes from a
normal with mean 100, but a standard deviation of
30. The mean estimates, of course, are not influ-
enced one way or the other, but this shows how the
standard deviation estimates are affected. The
estimate of the standard deviation from the tradi-
tional calculation is up to 16.2. For the others
it is better, but still varies between 11.5 and
254
-------
12.8.
Of course, you can have an alternative distri-
bution which represents a very, very high quality
group of performers. Under such circumstances, we
were concerned that statistical estimates might be
dramatically affected, causing evaluation limits to
be much too narrow. Therefore, we looked at the
effect when 20 percent of the data represented a
normal distribution with a much smaller standard
deviation of 3.3. The means of the standard devia-
tion estimates from all procedures are between 8.5
and 9.1, which is reasonably close to 10, the
standard deviation of the general population repre-
senting average performance.
We plan to make comparisons of these procedures
using real data. The disadvantage of using real
study data, of course, is that its true nature is
unknown. Its advantage, however, is in showing how
much results from the estimation procedures would
differ in actual use. We also plan to look at
simulations using other mixtures of data, perhaps
some more complex, but the ultimate is going to be
how it performs on the real study data.
I'm not at all unhappy with results for the
70 percent robust procedure, the one we currently
255
-------
use to produce the statistical estimates used as a
»i ' • ' •
basis for acceptance limits in DMR QA studies. If
we do decide to change our statistical estimation
procedure, it seems unlikely there would be a
noticeable effect on limits in DMR QA studies.
That's basically the end of my presentation.
I guess it's time for questions for Sam or me.
256
-------
REGRESSION EQUATIONS FOR ALUMINUM

  X = .982 T + 43.1        R2(X) = .997
  S = .0527 T + 20.4       R2(S) = .895

  TRUE      NUMBER OF      MEAN        RESIDUAL     STD.     RESIDUAL
  CONC.     OBS. REPORTED  RECOVERY    FOR EST.     DEV.     FOR EST.
                           (X)         OF X                  OF S
  1960          46         1830          -3.6       128.3       9.90
  1310          46         1293         -11.2        95.5       6.07
  1260          45         1186         -69.7       105.9      19.87
   938          37         1003          27.9        82.7      11.26
   930          43          925.1       -13.0        46.7     -22.69
   662          42          697.3        17.1        57.0       1.68
   607          43          614.7       -12.6        43.5      -8.93
   503          46          487.3       -39.9        51.0       4.08
   340          35          397.5        27.2        50.3      11.94
    98.1        38          134.3        -3.2        22.9      -2.75
    94.6        35          145.4        11.3        21.4      -4.02
    43.0        32           82.6        -1.8        23.9       1.17
257
-------
PERFORMANCE IN DMR-QA
AND RELATED STUDIES

                     TOTAL
                     NO. OF       %         CHANGE SINCE    % IMPROVEMENT
  STUDY    TIME      RESPONSES    ACCEPT.   FIRST STUDY     SINCE FIRST STUDY
  DMR-1    10-11/80   35,056       74.3        ---              ---
  DMR-2    5-6/82     33,208       78.0        3.7              14.4
  DMR-3    4-5/83     33,730       83.9        9.6              37.4
  DMR-4    4-5/84     39,324       84.7       10.4              40.5

  FOR EPA LABS:
  WP006    10/80         515       94.0        ---              ---
  WP008    3/82          590       96.6        2.6              43.3
  WP010    3/83          582       96.0        2.0              33.3
  WP012    3/84          595       96.0        2.0              33.3

  FOR STATE LABS:
  WP006    10/80       3,312       86.2        ---              ---
  WP008    3/82        2,952       90.3        4.1              29.7
  WP010    3/83        2,960       91.3        5.1              37.0
  WP012    3/84        2,872       91.4        5.2              37.7
258
-------
[Overhead 3 (graphic): percent of acceptable data over time for the DMR QA, EPA, and state laboratory groups; plot illegible in source.]
259
-------
[Overhead 4 (graphic): relative weights given to ranked observations under the four estimation procedures; plot illegible in source.]
260
-------
SIMULATIONS - EACH INVOLVING 1000 SAMPLES

                                   EST. OF MEAN        EST. OF STD. DEV.
  POP.         N   EST. PROCED.     MEAN     VAR.        MEAN     VAR.
  N(100,10)   20   TRAD.           100.1     2.21        10.0     1.66
                   W/O. T.         100.1     2.23        10.0     1.73
                   70% R.          100.1     2.27        10.0     2.08
                   BIWEIGHT        100.1     2.30        10.5     1.95
  N(100,10)   50   TRAD.           100.0     1.48        10.1     1.03
                   W/O. T.         100.0     1.46         9.9     1.09
                   70% R.           99.9     1.53        10.0     1.37
                   BIWEIGHT        100.0     1.52        10.5     1.18
261
-------
SIMULATIONS - EACH INVOLVING 1000 SAMPLES

                                       EST. OF MEAN        EST. OF STD. DEV.
  POP.          N    EST. PROCED.       MEAN     VAR.        MEAN     VAR.
  N(100,10)    40    TRAD.             106.0     1.40        15.9     1.30
  N(130,10)    10    W/O. T.           105.7     1.54        15.3     1.53
                     70% R.            105.1     1.65        15.8     2.27
                     BIWEIGHT          104.4     1.71        15.9     1.65
  N(100,10)    40    TRAD.             112.0     1.44        26.5     1.38
  N(160,10)    10    W/O. T.           104.7     4.64        16.8     6.67
                     70% R.            103.5     4.27        16.3     8.43
                     BIWEIGHT          100.7     2.25        12.9     3.37
  N(100,10)    40    TRAD.             118.0     1.40        38.1     1.38
  N(190,10)    10    W/O. T.           100.1     1.80        10.0     2.02
                     70% R.            100.1     1.76        10.1     2.24
                     BIWEIGHT          100.0     1.59        11.5     1.45
262
-------
SIMULATIONS - EACH INVOLVING 1000 SAMPLES

[Overhead 7: samples of 50 mixing 40 observations from N(100,10) with 10 from
N(100,30), and 40 from N(100,10) with 10 from N(100,3.3); estimates of the mean
and standard deviation by the four procedures (TRAD., W/O. T., 70% R., BIWEIGHT).
Mean estimates from all procedures remain essentially 100. Most of the standard
deviation entries are illegible in the source; the accompanying discussion gives
16.2 for the traditional procedure and 11.5 to 12.8 for the others with the
N(100,30) mixture, and roughly 8.5 to 9.1 for all procedures with the N(100,3.3)
mixture.]
263
-------
QUESTION AND ANSWER SESSION
MR. RICE: Jim Rice.
Paul, all of those examples that you put up there
could just as well have represented two different
analytical methods, which is the case, for example,
for where there are alternative methods given, say,
for chromium or copper or whatever, one of them
flame and one of them furnace. For instance,
these two procedures or methods have considerably
different precision statements when operating on
the same metal sample. I mean, you could construct
them so that everything you said there would work
for that. Why not recognize that? Have you attemp-
ted to look and separate out and examine each method
by the appropriate statistics on the data base
, 1" i £ I;",,::'',
separated into its components as you can identify
them.
MR. BRITTON: Yes. You
know that we did collect the information on the
method that was used when these studies were run,
and you also know that we do generate statistics by
method. There would certainly be a policy issue
involved in setting limits method-by-method, but I
really think that's the only alternative, assuming
264
-------
that indeed there is a significant mixture of
methods and significantly different statistical
characteristics for the various methods. I think
that in general, we're not seeing that great a
mixture of methods, and in general, we're not seeing
tremendously different statistical results either.
I think one major exception is the group of
nutrients where in the DMR QA studies, we observed
that the percentage of unacceptable data in the
nutrient group was characteristically about twice
the average unacceptable data for the other groups.
We started considering that issue and decided that
it was probably caused by the fact that two-thirds
of the EPA and state laboratories use automated
methods for nutrients. We looked at the DMR QA
group, and they use manual methods about 2:1. So
we decided we had to make an adjustment, and from
now on, in fact I think with Study 4, the limits
for nutrients are to be based on statistics from
all labs rather than the EPA and state laboratory
group. This change has brought the percentage of
unacceptable data for nutrients in DMR QA studies
right down in line with all the other analyte
groups, so things are balanced better now than
they were.
265
-------
In general, I don't anticipate looking at
methodology and somehow compensating for statistical
differences from one method to another. We really
haven't had a chance to study the difference between
methods in a rigorous sense, and I'm not real sure
what I would do with the information. EPA policy,
as I understand it, is that approved methods are
all equivalent methods and therefore should be
judged against the same general criteria.
MR. YOUNG: Jim Young
from the University of Arkansas. I'm a little
concerned that this approach might tend to cause
people to think that this is the test precision
that you are measuring, when in fact the precision
within a particular laboratory should be much better
than from the data you have collected and are
analyzing. Is there any plan to look at intra-
laboratory precision versus interlaboratory
precision in this program?
MR. BRITTON: No, there
isn't any plan to look at within-lab standard devia-
tion in this program. That would require partici-
pants to do replicate analyses on the same sample,
or analyze two samples that were identical or nearly
so. But to do that, we would probably have to give
266
-------
up having samples at two concentrations, in order
to keep the analytical effort from increasing. I
believe having information at two concentrations is
more helpful for identifying where potential problems
may be. If there appears to be similar bias at both
concentration levels, follow-up personnel can look
for a systematic error. If there are dramatically
different errors at the two concentrations, then
some kind of a precision problem is more likely.
In general, we have not tried to get into within-
laboratory standard deviation estimates in these
studies. There are other sources for those esti-
mates.
MR. STANKO: It would be
difficult for me to look at the DMR QA program and
criticize it on the fact that it has improved the
quality of the data. It certainly has improved the
quality of the data for the NPDES permit labora-
tories that are generating these data.
Last year at this conference, I gave a paper
that was titled, Industry's Experience with the
DMR QA program. The very specific point that I
made last year was that the acceptance criteria
limits identify too many industrial and contract
laboratories as bad performers. I would like to
267
-------
show one slide for DMR QA Study 4, if I might.
SLIDE 1
This slide takes the EPA laboratory data for
DMR QA Study 4, for the 52 parameters. The second
column is the true value, the third column is the
acceptance criteria limit that was used on industry
for Study 4. The next column identifies the
number of federal and state laboratories that are
using the program. The column after that is the
number of outliers that were identified by the EPA
statistical procedures. The next column identifies
the additional number of EPA laboratories that fell
outside of the 99 percent criteria. The last
column identifies the percent of EPA laboratories
that fall out of the 99 percent confidence interval.
Statistics dictate that only approximately one
percent of the laboratories should fall out of the
99 percent confidence limits.
If you will look at some of the bottom values,
for total Kjeldahl nitrogen, there was one identified
outlier out of 53 observations, there were seven
additional EPA laboratories that fell out of EPA
criteria, for a total of 13.2 percent of EPA's own
laboratories falling out of the 99 percent con-
fidence interval. If I work my way up that column,
268
-------
and what this column should be is 1 percent, we
get 6.8, 9.1, 13.2, 9.4, 10.6, 6.1, 8.1, 8.1 and
so on.
There is something wrong with the statistical
procedures. I really don't care if you Winsorize,
mesmerize, or hypnotize, but when you apply your
criteria limit to your data set, only one percent
of that data should be excluded. Thank you.
MR. BRITTON: You're
right, George, something is wrong. The problem is
your assumption that only one percent of the EPA
and state data should fail our acceptance limits.
We know that some of these data are not good and
our limits are supposed to pass 99 percent of all
good data. We would like all the bad data to
fail, and to make this more likely, we are willing
to pay by accepting rejection of one percent of
the good data as well. Clearly, EPA and state
laboratories are not perfect yet. They did generate
some bad data in these studies. If they hadn't,
then the failure rates you calculated would indeed
have been one percent.
269
-------
Revised Presentation
270
-------
STATISTICAL BASIS FOR LABORATORY PERFORMANCE EVALUATION LIMITS
Paul W. Britton * and Daniel F. Lewis **
1. INTRODUCTION
The effectiveness of the water regulations enforced by the U.S. Environmental
Protection Agency (USEPA) depends upon the quality of data generated by USEPA,
state, local government, industrial, commercial and other non-USEPA
laboratories. In an effort to improve the quality of data, USEPA conducts
collaborative studies evaluating the ability of laboratories to analyze water
samples and to produce data within specific evaluation limits. At present,
five formal studies are conducted each year for USEPA by the Environmental
Monitoring and Support Laboratory at Cincinnati, two for the drinking water
laboratory certification program, two for point and non-point source discharge
monitoring, and one for major NPDES permit dischargers. In addition, special
performance evaluation studies are conducted for the Superfund and Solid Waste
activities, and other USEPA programs involving contract laboratories.
Acceptable performance during these studies demonstrates a well-managed
laboratory operating competently, while unacceptable performance indicates a
laboratory which is likely to be having problems generating quality data,
either for a particular analyte or in general. Unacceptable performance
results in an investigation of the circumstances, and if appropriate, remedial
action by USEPA. It should be mentioned, however, that even the best of
laboratories will periodically develop analytical problems. It is perfectly
reasonable for a quality laboratory to occasionally produce analytical data
that are outside of evaluation limits, due either to an unfortunate result of
random chance or to an actual analytical error.
During these performance evaluation (PE) studies, the Quality Assurance Branch
of the Environmental Monitoring and Support Laboratory - Cincinnati sends
participating laboratories a set of stable sample concentrates in sealed glass
ampules, a data reporting form and appropriate instructions. Each laboratory
produces the study samples by diluting a measured quantity of specific
concentrates to volume with reagent water, then analyzes them using its
routine procedures. The completed form is sent to USEPA for evaluation and a
fully-detailed report is returned to each laboratory. The responsible state
or USEPA office follows up with laboratories that demonstrate potential
problems.
The effectiveness of these studies depends upon two things: first, whether
the evaluation limits properly indicate data resulting from substandard
analytical work; and second, the commitment of all participants to ensure that
laboratories performing poorly receive appropriate follow-up work so that
substandard laboratories either improve their performance or are excluded from
producing further data. It should be noted that these studies serve only as an
indicator of potential substandard ability and should not be used as the sole
basis for a final decision as to the quality of a laboratory.
*  USEPA Quality Assurance Branch, Environmental Monitoring and Support
   Laboratory - Cincinnati
** Computer Sciences Corporation
Draft 1/11/85
271
-------
2. DEVELOPMENT OF EVALUATION LIMITS
EPA's objective in developing evaluation limits is to be able to distinguish
between data from analytical systems operating within state-of-the-art
capabilities and data representing substandard performance. A statistical
prediction interval at an appropriate significance level is clearly suitable
if acceptable data have a normal distribution and if proper estimates of the
arithmetic mean and standard deviation are available. (Note that a prediction
interval is similar to a confidence interval except that where a confidence
interval uses an assumed mean, a prediction interval uses an estimated mean.
A prediction interval, therefore, is calculated the same as a confidence
interval except that it requires an estimate of the standard deviation that
has been adjusted to account for the variability of this estimated mean.)
Since there are no independent sources of such information, it is necessary to
investigate the data from the past or present studies themselves to establish
a basis for the desired limits. Since even the best of data sets will contain
an unpredictable and often substantial portion resulting from substandard
analyses, it is necessary to use outlier tests to exclude extreme data points
from the statistical process before producing estimates of the mean and
standard deviation that will be robust to the effects of any outliers that may
remain.
2.1 Estimating the Study Statistics Using Data From the Current Study
For most analytes in a sample, basic descriptive statistics are estimated from
the data submitted by the participating USEPA and state laboratories, which
usually number over 100. The statistics are estimated from these data because
USEPA is more familiar with these laboratories and is confident that they will
produce results generally representative of the current analytical procedures
when properly applied. The statistical report for each analyte includes the
following:
1. Identification of obvious outliers in the data set;
2. Traditional and robust estimates of the mean and standard deviation from
the retained data;
3. Tests of normality for the retained data;
4. A histogram of the retained data; and
5. A complete listing of the USEPA and state data with the obvious outliers
noted.
272
-------
Outliers are identified by a two-stage procedure. The first stage involves
the rejection of values larger than 500 percent or less than 20 percent of the
true value and is intended to remove all data resulting from a decimal
placement error. Data retained through this first stage are used to calculate
traditional estimates of the mean and standard deviation for the second stage
tests. If there are 25 or fewer values left, the second stage involves
successive Grubbs' tests at the two-tailed 1 percent significance level until
a suspected value is retained. As described in Grubbs (1950 and 1969) and
Standard Practice E-178 of the American Society for Testing and Materials,
this test involves ordering the data, identifying the largest or smallest
value as the suspect (the value farthest from the mean of all the
data), and calculating a statistic T = (suspect value - X)/S, where X is the
traditional estimate of the mean and S is the traditional estimate of the
standard deviation. The suspected value is discarded as an outlier if the
absolute value of T exceeds the appropriate critical value; then X and S are
recalculated from the remaining data prior to testing the next most extreme
value. For sets of more than 25 retained values, any value beyond a 99
percent confidence interval, generated using the Student's t, is considered
an outlier.
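A short sketch of this two-stage screen, in Python with scipy for the t quantiles, is given below. It is an illustration of the procedure described above, not the program actually used: the Grubbs critical value is computed from the Student's t distribution in the standard way, and the separate branch for more than 25 retained values is omitted.

import numpy as np
from scipy import stats

def screen_outliers(values, true_value, alpha=0.01):
    # Stage 1: drop results above 500% or below 20% of the true value.
    x = np.asarray(values, dtype=float)
    x = x[(x >= 0.20 * true_value) & (x <= 5.00 * true_value)]

    # Stage 2: successive two-sided Grubbs' tests at the 1% level, stopping
    # as soon as the most extreme remaining value is retained.
    x = np.sort(x)
    while x.size > 3:
        m, s = x.mean(), x.std(ddof=1)
        i = int(np.argmax(np.abs(x - m)))          # value farthest from the mean
        n = x.size
        t = stats.t.ppf(1.0 - alpha / (2.0 * n), n - 2)
        g_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t ** 2 / (n - 2 + t ** 2))
        if abs(x[i] - m) / s > g_crit:
            x = np.delete(x, i)                    # discard and retest
        else:
            break
    return x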
Final estimates of the mean and standard deviation are calculated from the n
values retained through both stages of outlier testing. The traditional
estimates are:

(1)  X = ( Σ Xi ) / n

(2)  S = sqrt[ Σ (Xi - X)² / (n - 1) ]

where Xi is the ith retained value.
Robust estimates are calculated from an ordered set of the retained data,
using a 15 percent Winsorizing procedure to estimate the mean as described by
Dixon and Massey (1969), and standard normal deviates to estimate the standard
deviation.
273
-------
After the retained data have been arranged in ascending order, the 15th and
85th percentiles are calculated as:

(3)  P15 = X(j) - (j - .15n - .5)(X(j) - X(j-1))

where j = 1 + (the greatest integer portion of (.15n + .5)), and

(4)  P85 = X(k) - (k - .85n - .5)(X(k) - X(k-1))

where k = 1 + (the greatest integer portion of (.85n + .5)).
After the percentiles have been calculated, the Winsorized estimate of the
mean is calculated as:

(5)  X = [ h·X(h) + (n+2-k)·X(k-1) + Σ X(i), for i = h+1 to k-2 ] / n

where h = j - (the greatest integer portion of (j - .15n - .5)).
Note that X(h) is the smallest observation that is greater than or equal
to P15, and X(k-1) is the largest observation that is less than or
equal to P85.
The robust estimate of the standard deviation is calculated from the
distance across the middle 70 percent of the ordered retained data.
Assuming an underlying normal distribution, this calculation is:
(6)  S = (P85 - P15) / 2.0729

where 2.0729 is the number of standard deviations between the
15th and 85th percentiles of a normally distributed population.
This value is obtained using an inverse normal function.
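The percentile, Winsorized-mean, and robust standard deviation calculations in equations (3) through (6), together with the adjustment in equation (7) below, can be written compactly. The following Python sketch follows those equations directly (1-based ranks are converted to 0-based indexing) and assumes the data have already passed the outlier screen; it is an illustration, not the production code.

import math

def robust_estimates(retained):
    # 'retained' is the post-screening data set; assumes a reasonably large set
    x = sorted(retained)
    n = len(x)

    j = 1 + int(0.15 * n + 0.5)                      # 1-based rank for P15
    k = 1 + int(0.85 * n + 0.5)                      # 1-based rank for P85
    p15 = x[j - 1] - (j - 0.15 * n - 0.5) * (x[j - 1] - x[j - 2])   # eq. (3)
    p85 = x[k - 1] - (k - 0.85 * n - 0.5) * (x[k - 1] - x[k - 2])   # eq. (4)

    h = j - int(j - 0.15 * n - 0.5)                  # eq. (5) where-clause
    # eq. (5): X(h) carries the weight of the low tail, X(k-1) of the high tail
    mean_w = (h * x[h - 1] + (n + 2 - k) * x[k - 2] + sum(x[h:k - 2])) / n

    s = (p85 - p15) / 2.0729                         # eq. (6); 2.0729 = 2 x z(0.85)
    s_adj = math.sqrt(s ** 2 + s ** 2 / n)           # eq. (7), defined below
    return mean_w, s, s_adj

# example use: mean_w, s, s_adj = robust_estimates(list_of_retained_results)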
274
-------
In order to produce prediction intervals, this estimate for standard deviation
must be adjusted for the variability in the estimate of the mean, as follows:

(7)  Sadj = sqrt( S² + SX² ) = sqrt( S² + S²/n )

where Sadj is the adjusted estimate of standard deviation to be used
to construct a prediction interval, S is estimated using equation 6, and
SX is an estimate of the standard deviation of the mean and is equal to
the square root of S²/n for means calculated from n observations.
To provide information regarding normality, Kolmogorov-Smirnov (K-S) and
Anderson-Darling (A-D) statistics are calculated for the full set of retained
data and a K-S statistic is calculated for the middle 70 percent of the
retained data. The K-S statistic is used because it is a highly regarded
distance statistic that can be used to test normality of the middle 70 percent
of the retained data as well as the full set. The A-D statistic is calculated
for the full set of retained data because it is more sensitive to deviations
from normality in the distribution tails, and was found to be one of the most
powerful and easily calculated distribution tests studied by Stevens (1974)
and Green and Hegazy (1976). Suitable critical values are provided at 1 and 5
percent significance levels using the approximations provided by Stevens.
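As an illustration only, equivalent checks can be run today with library routines; note that scipy's Anderson-Darling implementation uses its own critical-value tables rather than the Stevens approximations, and the array "retained" below is a hypothetical set of retained values.

import numpy as np
from scipy import stats

retained = np.random.default_rng(0).normal(100, 10, 60)   # hypothetical retained data

m, s = retained.mean(), retained.std(ddof=1)
ks_full = stats.kstest(retained, "norm", args=(m, s))      # K-S statistic, full set
lo, hi = np.percentile(retained, [15, 85])
middle = retained[(retained >= lo) & (retained <= hi)]
ks_mid = stats.kstest(middle, "norm",
                      args=(middle.mean(), middle.std(ddof=1)))   # K-S, middle 70%
ad_full = stats.anderson(retained, dist="norm")             # A-D statistic, full set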
For analytes with rugged analytical procedures or with which the laboratories
have considerable experience, there is no practical difference between the
traditional and robust estimates of the mean and standard deviation. However,
there are analytes for which a number of the USEPA and state laboratories
still do not have their system fully under control, and so are generating data
that do not represent competent performance. Under such conditions, the
robust estimates are clearly superior to the traditional estimates since the
robust estimates are not as likely to be influenced by persistent outliers and
the outlier scheme will fail to identify all the outliers whenever analytical
problems become too common. Because they are universally appropriate, the
robust estimates of the mean and standard deviation are used to characterize
acceptable performance.
The remainder of the statistical report is a histogram and an ordered listing
of all the data from USEPA and state laboratories with outliers followed by an
asterisk. The histogram provides a convenient visual impression of how the
retained data are distributed and the ordered listing is necessary to document
the actual data that the statistics were developed from. Figure 1 is an
example of the report.
275
-------
2.2 Estimating Study Statistics Using Regressions on Previous Study Statistics
When appropriate background statistics are available, estimates of the mean
and standard deviation for the current study can be made from linear
regressions generated from historical data. If these historically based
estimates are universally beneficial to the participants, and they appear to
be reasonable, they will be used as a basis for evaluation limits instead of
the estimates generated from the current study.
These linear regressions, for X versus true value (TV) and S versus TV, are
achieved by using the customary least-squares algorithm to fit:

(8)  X/TV = a + b(1/TV)  and

(9)  S/TV = c + d(1/TV)

After the coefficients have been estimated, these equations may be simplified
by multiplying through by TV, giving the desired forms:

(10)  X = a(TV) + b  and

(11)  S = c(TV) + d
(Note that S is an estimate of standard deviation before it has been adjusted
for the variability in the estimate of the mean. This means, of course, that
the historical data used to generate the regression equation for S will be
unadjusted standard deviations from past studies.)
Using this regression technique leads to a minimization of the sum of the
squares of the residuals as a percentage of the true value associated with
each residual.
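As a sketch (not the original program), the fit can be done with ordinary least squares using 1/TV as the predictor; the names true_values, mean_recoveries, and std_devs below are hypothetical arrays of historical statistics such as those in the aluminum example.

import numpy as np

def fit_relative(tv, y):
    # fit y/TV = intercept + slope*(1/TV), i.e., the form of equations (8) and (9)
    tv = np.asarray(tv, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.column_stack([np.ones_like(tv), 1.0 / tv])
    coef, *_ = np.linalg.lstsq(A, y / tv, rcond=None)
    a, b = coef
    return a, b                      # so that y is estimated as a*TV + b (eq. 10/11)

# hypothetical use:
# a, b = fit_relative(true_values, mean_recoveries)   # regression for the mean
# c, d = fit_relative(true_values, std_devs)          # regression for the std. dev.
# predicted_mean = a * tv_new + b
# predicted_sd   = c * tv_new + d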
The chosen regression procedure was selected after a comparison of results
with traditional linear, quadratic, cubic and power curve fits to the
historical statistics. The nonlinear alternatives were rejected because they
were more complex and did not generally produce regressions that fit the data
better. The traditional linear least-squares alternative, i.e., directly
fitting a linear relationship between X or S and TV, was rejected because the
statistics for samples with high true values would tend to dominate the
regressions, causing regressions which might provide misleading estimates of
the mean and standard deviation for samples with low true values. The
propriety of the chosen regression procedure is also consistent with
theoretical expectations since variability in results tends to increase for
most analytes in direct proportion to increasing concentration, and this
regression model was designed to deal with that problem. Currently, residual
randomness tests are performed and coefficients of determination are
calculated for every regression equation as it is produced. In this way the
regressions are continually monitored to assure they are appropriate and
effective.
276
-------
To produce the prediction intervals defined in the introduction to Section 2,
the estimates of the standard deviation must be adjusted to account for the
variability of the estimate of the mean. This is done as follows:

(12)  Sadj = sqrt( S² + SX² )

where Sadj is the adjusted estimate of standard deviation to be used
to construct a prediction interval, S is the regression estimate
of standard deviation from equation 9, and SX is the standard
deviation of the estimate of X obtained from equation 8.
2.3 Determining the Evaluation Limits
The first step in determining evaluation limits for an analyte is to calculate
trial 95 percent confidence limits using the estimates from the current study
as well as the historical estimates, both after the stated adjustments.
Wherever the limits from the historical estimates do not completely encompass
those from the current study estimates, the current estimates are used as a
basis for calculating the final evaluation limits. Otherwise, the historical
estimates are used as a basis for final limits. In recent studies limits
from current and historical statistics are generally quite similar; however,
limits based upon the historical statistics usually predominate.
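In code form, the selection rule just described amounts to a containment check. The following is a minimal sketch, not the production logic; each argument is a (lower, upper) pair of trial limits.

def choose_limits(current, historical):
    # Use the historical (regression-based) limits only if they completely
    # enclose the current-study limits; otherwise fall back to the current study.
    c_lo, c_hi = current
    h_lo, h_hi = historical
    if h_lo <= c_lo and c_hi <= h_hi:
        return "historical"
    return "current"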
There are several reasons why the estimates produced from regressions on
historical statistics might be inappropriate. The true value for an analyte
may be outside of the concentration range from which the regression was
developed, the true value for the analyte may be in error, there may have been
a significant change in the way the current sample was made, or one of the
regressions may not represent the true relationship between the true value and
the estimates of a statistic generated from past studies. Where there are no
historically based estimates available for use in setting limits, either
because the samples for previous studies were made differently or did not
include particular analytes, limits can only be calculated using estimates
made from the data of the current study.
To insure that these or other problems do not find their way into actual
laboratory evaluations, the final limits as well as the estimates and
procedures that lead up to them are inspected manually before any laboratories
are evaluated. Suspected problems are thoroughly investigated.
277
-------
3. PERFORMANCE EVALUATION REPORTS
Once appropriate standard deviations and mean recoveries are available, it is
a simple matter to develop the 95 and 99 percent prediction limits (explained
in the introduction to Section 2) to be used as evaluation limits, and proceed
with evaluation of the study data. An individual report is generated for each
participating laboratory showing the data reported, the related true values
and limits, and an evaluation judgment for each reported value based upon its
relationship to the appropriate limits. Each laboratory receives a copy of
its report and a copy is sent to the responsible USEPA or state office for
follow-up contact, as necessary, to resolve unacceptable performance.
4. DISCUSSION AND CONCLUSIONS
To date, USEPA has successfully completed 30 studies involving a total of 108
water analytes and thousands of participating laboratories. The results of
these studies have been well received by the participating laboratories, the
USEPA regional and participating program offices, and the states which depend
on these reports to highlight laboratories requiring priority attention. When
states have a parallel interest in an environmental regulation, they often
request that their laboratories be included in the USEPA studies rather than
developing a system of their own.
278
-------
279
-------
REFERENCES
Grubbs, Frank E., 1950. Sample Criteria for Testing Outlying
Observations, Annals of Mathematical Statistics, Vol. 21, pp. 27-55.
Grubbs, Frank E., 1969. Procedures for Detecting Outlying
Observations in Samples, Technometrics, Vol. 11, No. 1, Feb.,
pp. 1-21.
American Society for Testing and Materials (ASTM), Subcommittee
E-11. E178 - Recommended Practice for Dealing with Outlying
Observations, Annual Book of ASTM Standards, Part 41.
Dixon, W. J. and F. J. Massey, Jr., 1969. Introduction to
Statistical Analysis, Third Ed., McGraw-Hill Book Co., New York, NY,
pp. 330-331.
Stephens, M. A., 1974. EDF Statistics for Goodness of Fit and Some
Comparisons, Journal of the American Statistical Association, Vol.
69, No. 347, Sept., pp. 730-737.
Green, J. R. and Y. A. S. Hegazy, 1976. Powerful Modified EDF
Goodness-of-Fit Tests, Journal of the American Statistical
Association, Vol. 71, No. 353, March, pp. 204-209.
280
-------
MR. TELLIARD: Thank you,
Paul. Our next speaker is George Stanko. George
is going to talk about somebody else's methods for
a change.
281
-------
GEORGE H. STANKO
SHELL DEVELOPMENT COMPANY
INTER- AND INTRALABORATORY ASSESSMENT OF SELECTED
SW-846 METHODS FOR ANALYSIS OF APPENDIX VIII
COMPOUNDS IN GROUNDWATER
MR. STANKO: I'd like to
share with you the results of a Chemical
Manufacturers Association study that evaluated some
selected SW-846 methods for the analysis of Appendix
VIII compounds in groundwater. I'd also like to
point out that this is really a Chemical Manufactur-
ers Association paper or project, and that I am co-
author with Peter Fortini of the American Cyanamid
Company.
On October 1st, 1984, the EPA proposed in the
Federal Register to make the use of SW-846 methods
mandatory under Subtitle C, 40 C.F.R., Parts 260
through 271. In 1983, they had published the second
edition of SW-846. In May of 1983, the Chemical
Manufacturers Association hired a contract labora-
tory, Environmental Testing and Certification
Corporation, to do an evaluation of the effective-
ness of SW-846 as a methods manual. The American
Petroleum Institute also hired the Radian
282
-------
Corporation to conduct a similar study on the same
document. Both of these reports revealed that SW-
846 was not adequate to guide an analytical lab due
to the lack of sufficient information and details,
technical inaccuracies and inconsistencies, and
also, pointed to numerous problems with the com-
pounds on the Appendix VIII list.
To verify the findings of the CMA study on the
second edition, a project was developed in which
three prominent laboratories participated. The
prime objective of the project was an assessment of
the inter- and intralaboratory precision and bias
of some selected methods out of SW-846. A second
objective was to determine how well the resulting
data from this study would in fact define the
contamination problem.
The study was designed to look at a simulated,
but realistic, environmental situation. We have
an abandoned waste site. Up gradient from this
waste site is a service station that's been closed.
Several years earlier there was a known leak of
product, gasoline, and it would be reasonable to
expect that it made its way into the groundwater
and compounds such as benzene, isobutyl alcohol,
toluene, might be expected to be found in the
283
-------
groundwater, up gradient of the dump site.
Also up gradient from this dump site is a
subdivision that uses septic tanks. They have been
using liquid drain cleaners, 1,1,1-trichloroethane
primarily, and one characteristic of this simulated
environmental situation is that all the organics
that went into this dump site are absolutely known.
It is reasonable to expect after 25 years that all
of the drums would have been leaking and the
compounds we know we put there probably showed up
in the water table. For a person who would have
to assess this environmental situation, it would
be reasonable to put one up gradient well and
three down gradient wells, collect water samples
and analyze these water samples.
CMA environmental monitoring task group pre-
pared a list of some 34 compounds from the Appendix
VIII list, and also prepared a list of distribution
across these wells in this environmental situation
I just described. CMA sent requests for proposals
to four prominent laboratories and each of the
laboratories was asked to maintain the integrity of
the list and the samples. CMA task group suggested
some nine methods that were listed in SW-846 as
possible candidates to define this environmental
-------
situation. Three bids were received from the four
laboratories and all three of those laboratories
concurred that method 8240, 8270, which are GC/MS
methods, and 8330, which is an HPLC method, would
be suitable to identify all the compounds on the
list. This was not surprising since the list was
prepared, more or less, with that in mind. Many of
us wanted to test the best procedures we thought
were in SW-846, and we arbitrarily picked one HPLC
procedure; no one had any experience with HPLC
for the Appendix VIII list type compounds.
CMA accepted proposals from all three of these
laboratories and they selected one of these
laboratories to prepare the simulated environmental
situation for analysis by all laboratories. The
list that we originally sent out of the RFP was
altered—some of it. Also, the concentrations that
were initially listed had been changed a little
bit.' Each lab was advised to use the three methods
that they themselves indicated would be suitable,
and each laboratory was also advised that there
were some Appendix VIII compounds not listed in the
two methods, but amenable to the procedures,
according to the EPA. Three laboratories analyzed
these samples independently and submitted the data
285
-------
to CMA. One of the laboratories was selected to
collect the data, compile it and do statistical
analysis.
The actual water used for this sample was a
groundwater sample collected in the coastal plain
region of Texas. It was provided by one of the
member companies of the CMA environmental monitoring
task group.
SLIDE 1
This particular slide identifies the physical
and chemical properties of that groundwater. We
also analyzed a portion of this water to verify
that none of the organics of interest for this
study was indeed present in the sample. It turns
out that the water in this well had no organics.
The study had four samples, and because we
were interested in inter- and intralaboratory
precision and bias, we elected to run these samples as
blind triplicates. We used a random number generator,
my wife. We assigned a number to each one of the
four samples, and each of the three laboratories
received 12 samples with 12 different numbers. The
only identification on the sample was a number.
The samples that were prepared for this study
followed procedures similar to those described in
286
-------
Section 7, "Calibration" of Method 624. Sample 1
was a background and contained only four volatile
organics; benzene, toluene, 1,1,1-trichloroethane—
and the slide will tell us what the next one is.
SLIDE 2
Isobutyl alcohol. The three other samples were
prepared to actually simulate what might be expected
in the down gradient wells. The slide shows the
compounds that were used for the volatile list.
Sample 1 contained four volatiles in a concentration
that varied from 25 parts per billion to 50 parts per
billion. Sample 2 contained approximately half of
the volatiles. Sample 3 contained the other half
of the volatiles, and Sample 4 was the one that was
loaded up and contained each one of the volatile
organic compounds.
SLIDE 3
This is a list of the semi-volatile compounds
or the compounds that were amenable to Method 8270.
As you can see, Sample 1 was the background. It
only contained volatile materials. Sample 2
contained the lowest concentration and about half
of the semi-volatile compounds. Sample 3 contained
about the other half, and here again, Sample 4
contained all of the compounds. The concentrations
287
-------
were highest in Sample 4.
SLIDE 4
This slide identifies the three compounds that
were taken off of the 8330 list, more or less amen-
able to this procedure. Sample 1, again, did not
contain any. Samples 2, 3 and 4 contained various
concentrations ranging from 100 µg/L to 500 µg/L.
The data from the study was statistically
analyzed to assess precision and bias. The assess-
ment of bias procedure was taken from the quality
assurance section of SW-846, Methods 8240, 8270.
In this procedure, one calculates the percent
recovery of spiked compounds; you calculate the
standard deviation of the percent recovery, assuming
a normal distribution; and you calculate the percent
relative standard deviation. Mean percent recovery
and percent relative standard deviation define bias
and precision as stated by the EPA for a given
compound.
SW-846 Method 8240 and 8270 also have the
performance criteria to be met in Section 8.2.4.
The mean recovery must be greater than 20 percent
for all compounds to be measured, greater than 60
percent for all surrogate compounds, and the percent
relative standard deviation must be less than 20
288
-------
percent for all compounds to be measured, including
the surrogates.
Section 8.1.1 of the quality assurance section
states, before performing any analyses, the analyst
must demonstrate ability to generate acceptable
accuracy and precision with this method. Ability is
established in Section 8.2 that I just described.
Calculations from the study were made for each
of the laboratories and each of the compounds.
SLIDE 5
This is a summary table that was prepared
because you wouldn't be able to read all the small
numbers. What this does is an assessment of how
well these prominent laboratories met EPA criteria
stated in Method 8240 and 8270. For Method 8240,
there were 17 compounds. Laboratory A did not meet
them for 29 percent, B for 29 percent. Laboratory C
did not meet them for 94 percent of the compounds.
For 8270—31, 69 and 81. For Method 8330, there were
three compounds studied. The laboratories, all three,
100 percent, did not meet the criteria.
It's difficult to explain why such a high
percentage of the data did not meet the EPA criteria
since these were experienced labs, experienced
operators and used very good equipment. I'd also
289
-------
like to point out that there is no guidance listed in
either one of the analytical procedures to tell you
what to do when you do not meet the criteria.
An assessment of precision that we selected to
use was different from the EPA's. We did not
use percent relative standard deviation. We used
lognormal distribution theory to assess the precision.
As a measure of precision, we selected to use
variability factors and repeatability factors.
These define 95 percent confidence intervals. This
defines variability factor, Vu, and as an example,
if you have a variability factor of 1.68 for an organic
compound and the true or the mean concentration is
known to be 100 parts per billion, the 95 percent
confidence interval of all observations should fall
between 60 parts per billion and 168 parts per
billion, using these calculations.
When you don't know the true value or the mean
value, and you have a single observation, and you're
trying to define the distribution of a second
observation, you use a repeatability factor. Here
again, I use the same example. The range is expanded;
208 parts per billion, 48 parts per billion. That
describes, more or less, how variability and
repeatability factors are used to define precision
290
-------
of the method.
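For readers who want to reproduce the speaker's numbers, a small sketch (hypothetical Python, not part of the CMA study) that converts a variability or repeatability factor into its 95 percent interval:

    def factor_interval(value, factor):
        # 95 percent interval implied by a variability (Vu) or repeatability (Ru) factor
        return value / factor, value * factor

    print(factor_interval(100.0, 1.68))   # Vu example: about 60 to 168 ppb
    print(factor_interval(100.0, 2.08))   # Ru example: about 48 to 208 ppb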
In our statistical analysis of the data, we
considered looking at the nondetected observations—
we put the stuff in the samples and it should have
had a value—and we elected not to include non-
detects in our data. It would have made things
look a lot worse if we had.
There were two unusual values for diphenylamine
on different samples by one of the laboratories,
and a single value for methyl ethyl ketone by a
different laboratory. We excluded three outliers
out of somewhere between 500 and 600 observations.
I don't think we overdid it. The standard deviations
from the triplicate analyses were pooled over the
laboratories and samples on a compound-specific basis.
I have a number of copies available of this paper
and there is a Table 9 in there. If you're interested
in finding out for each one of the compounds, it's
in there. I will only show a summary of the intra-
laboratory data on this slide.
SLIDE 6
This is a summary based on the Method 8240,
8270, for the three particular laboratories. You
can look at the Vu values or the Ru, whatever you
care to, the number of degrees of freedom are
291
-------
listed. You can see that for the different com-
pounds, the laboratories did not perform equally
as well. For 8240, Laboratory B had the smallest
Vu value or variability factor, which means it has
the best precision. For the 8270 methodology,
Laboratory A had better precision. If you want to
look at 8330, one laboratory didn't see anything,
and I would really disregard all the results, the
Vu and Ru; there's just not enough values there.
We also took the data and handled it on the
inter-laboratory basis. In other words, we pooled
all the data and these are the results. Here again,
in a copy of the paper you can look at individual
compounds on a laboratory basis. This is a summary
of values in the paper itself. You can see for
Method 8240, the variability factor is 2.31; for
8270, 2.40, exclusive of a couple of outliers—and
there's not a whole lot of difference between the
precision of these two methods on an average. I
would like to direct your attention to the Ru
range. For an Ru value of 3.27, the actual range
of Ru values is 1.70 to 9.01. It is very sensitive
to the laboratory as well as to the specific compound.
There were other problems with the SW-846
methods. There were 11 compounds which one or
292
-------
more laboratories failed to detect in any of the
samples.
This brings up my favorite topic—false
positive, false negative observations. These are
two important properties to actually describe the
qualitative aspects of an analytical method. The
CMA study was designed to allow a very good assess-
ment of false positive and false negative observa-
tions. We knew what compounds we had spiked into
water. We looked at the data.
SLIDE 7
These are the false positive, false negative
observations for Method 8240 volatile compounds.
For Sample 1, we had seven false positive observa-
tions by two laboratories. It's interesting, if you
look at the actual raw data, which is also included
with the report, that the total concentrations of
false positives exceeded what we put in the sample
initially.
The false negative observations from the list
of compounds that we used for the study, are listed
on this slide. Three laboratories could not find
isobutyl alcohol at a concentration range between
50 and 250 µg/L in all 12 of the samples. That's a
false negative. Three laboratories also could not
293
-------
detect the presence of 1,4-dioxane in the six
samples that it had been spiked in. The concentra-
tion level here was 200 to 400 parts per billion.
Here again, I'm not going to go through all these,
but of the 17 compounds that were spiked into two
or more samples, there were seven cases of false
negatives.
SLIDE 8
This is the tally sheet of false positives and
false negatives for Method 8270. There are eight
false positives by one or more of the laboratories.
There was an interesting situation where two of the
laboratories had a false positive of N-nitrosodi-
phenylamine in six samples. I would hate to put
somebody in jail on that kind of evidence and
knowing that compound was not spiked in the sample
and not present. Two laboratories confirmed it was.
One interesting thing was noted in the data.
One laboratory did not experience any false negatives
with Method 8270. The tally sheet shows what the
other two laboratories did on a compound specific
basis. Some had problems with methyl methacrylate,
acetophenone. The one surprising thing, one of
these is a priority pollutant, chlorobenzene. One
laboratory could not find it in six samples at a
294
-------
concentration between 100 and 200 µg/L.
A second objective of the study was how well
the resulting data defined the groundwater situation
that I described earlier. The results from the
study were reviewed from the perspective of those
conducting an environmental assessment and not from
an analytical point of view. Because the true
concentrations and identities were known, it was possible
to evaluate the data on how well groundwater
contamination at the hypothetical situation was
really defined.
The analytical results were arbitrarily clas-
sified for the spiked components only and did not
deal with false positives. You either failed to
detect the compound, which is a false negative...
If you detected a compound, we had two categories.
You detected it and it was within a factor of two
of what was actually spiked into the samples, or
you detected it and it differed by more than a factor
of two from the actual spiked amount.
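A sketch of that three-way classification (hypothetical Python; false positives, i.e. compounds reported but never spiked, are tallied separately):

    def classify_result(spiked, found):
        # spiked: true spiking level (ug/L); found: reported value, or None if not detected
        if found is None or found == 0:
            return "false negative"
        ratio = found / spiked
        if 0.5 <= ratio <= 2.0:
            return "detected within a factor of two"
        return "detected, off by more than a factor of two"

    print(classify_result(100.0, None))    # false negative
    print(classify_result(100.0, 140.0))   # within a factor of two
    print(classify_result(100.0, 15.0))    # off by more than a factor of two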
SLIDE 9
Table 9 in the handout lists this information
for all of the compounds. This is a summary table
right here. What this tells you is that we do
better for some compounds than we do for others.
295
-------
This actually identifies agreement between the
analysis run by two laboratories for Samples 2, 3
and 4 only. It shows the percent of the compounds detected
by the first laboratory that will also be found by the
second laboratory.
For the 8240 compounds, the non-chlorinated,
you can see that for 49 percent...the bulb went?
I'll have to ad lib from here on out without slides.
Essentially what this says, is that if a first
laboratory found a compound, the chances of a second
laboratory...you have one out of two chances that a
second laboratory would indeed find it. I can't go
through all of the data on that basis. It is
included in the report.
I think I'll just drop down to the conclusions
and the recommendations from CMA. The results from
the CMA SW-846 Assessment Study revealed that even
the best GC/MS methods contained in SW-846 are
somewhat inadequate for the analysis of compounds
reportedly amenable to Methods 8240 and 8270 from
the Appendix VIII list. Method 8330 was found to
be completely inadequate for the three compounds
included in the study. This observation was
confirmed by all three laboratories. The statement
that SW-846 contains analytical methods for all
296
-------
375 compounds from the Appendix VIII list simply is
not true.
Review of the results from the assessment of
bias indicated that the three laboratories were not
able to meet the EPA criteria to demonstrate one's
ability to generate acceptable accuracy and precision
for a large percentage of the compounds included in
the study. This observation was unexpected since
three prominent laboratories—and these were
experienced laboratories—took part in the study,
and the samples contained only pure compounds in a
relatively simple groundwater matrix.
There were a large number of false positive
and negative observations by all laboratories to
varying degrees. This observation is of particular
concern since GC/MS methodology was employed for
rather simple samples in a rather simple groundwater
matrix. With less specific detectors such as FID
(GC) or UV (HPLC), the problem of false positive
and negative observations would be expected to be
even more severe. The resulting data for Method
8330 supports this conclusion.
The calculated Vu and Ru values, the variability
and repeatability factors, which express intra- and
interlaboratory precision, revealed that precision varies
297
-------
from laboratory to laboratory for different methods
and for specific compounds. These facts must be
considered when analytical data are being interpreted
to assess an environmental situation.
It is reasonable to conclude that the numerous
problems observed for Method 8240, 8270 and 8330
for the samples included in the CMA SW-846 Assessment
Study, may be expected for other methods in SW-846
not evaluated in this study. SW-846 certainly has not
reached the level of development whereby its use
should be mandated by law. Until all of the methods
contained in SW-846 are adequately validated, SW-
846 is nothing more than a collection of methods
that may or may not work. It is only suitable as a
reference document. Even as a reference document,
SW-846 has limitations.
If one reviews the resulting data from the CMA
study from a data user's perspective, one would
conclude that the analytical community could indeed
identify groundwater contamination, but not very
well, either qualitatively or quantitatively. The
problem of false positive and false negative
observations would be of major concern in determining
intermediate corrective action, as well as the
potential for future problems.
298
-------
CMA recommends that the EPA should not
promulgate the mandatory use of SW-846 for testing
for Appendix VIII compounds under Subtitle C of
RCRA. The methods have only been validated for a
few of the Appendix VIII compounds, and further validation
is required before promulgation.
The EPA should concentrate its efforts initially
on the validation and development of GC/MS Methods
8240 and 8270. The list of Appendix VIII compounds
amenable to these procedures needs to be established,
along with supporting precision and bias data.
For Appendix VIII compounds not amenable to
8240 and 8270, EPA needs to validate other methods
in SW-846 to confirm they actually work for some
matrix with a reasonable expectation that they may
work in other matrices.
Multilaboratory studies of all methods similar
to the CMA SW-846 Assessment Study would be advisable
prior to including a method into a manual such as
SW-846, and prior to considering promulgation of
a method for mandatory compliance testing. Results
from such studies should be subjected to the analyti-
cal peer review process and appropriate statistical
evaluation, both within and outside the Agency.
I'd like to give an acknowledgement to the CMA
299
-------
Environmental Monitoring Task Group who contributed
greatly to this paper, and particularly to CMA
Staff Executive, Sharon Kneiss, who happens to be
in the audience, and to Becky Wilson, the CMA
Administrative Assistant.
Thank you.
MR. TELLIARD: I'm glad
you're behind us again, George. Any questions?
300
-------
Inter- and Intralaboratory Assessment of
Selected SW-846 Methods for Analysis of
Appendix VIII Compounds in Groundwater
Chemical Manufacturers Association
2501 M Street NW
Washington, DC 20037
Authors: George H. Stanko
Shell Development Company
Peter E. Fortini
American Cyanamid Company
08611-1
301
-------
Physical and Chemical Properties of Groundwater
CMA Assessment Study
Property                          Results

Appearance                        Very Turbid, Sandy Solids
pH                                6.9
Total Suspended Matter            1060 mg/L
Total Dissolved Solids*           580 mg/L
Chloride, Cl                      70 mg/L
Hardness, as CaCO3                318 mg/L

*Filtered thru 0.45 µm membrane.
08611-2
302
-------
Sample Code Used for
CMA Assessment Study
Samples        Assigned Numbers
Sample 1       10, 7, 6
Sample 2       8, 11, 2
Sample 3       5, 1, 9
Sample 4       3, 12, 4
08611-3
303
-------
CMA SW-846 Assessment Study (8240)
Sample Identity and Concentration Information
Compounds                        Samples (µg/L)
                                 1      2      3      4
Acetone                          -    200      -    400
Acetonitrile                     -    200      -    400
Benzene                         50     50     50    100
Carbon Tetrachloride             -      -    200    300
Chloroform                       -     25      -     50
1,1-Dichloroethane               -     25      -     50
1,2-Dichloroethane               -    100      -    200
1,2-Dichloropropane              -    100      -    200
Ethylbenzene                     -      -    200    200
Tetrachloroethene                -      -    200    300
1,1,1-Trichloroethane           25     25     25    325
Trichloroethene                  -      -    200    300
Toluene                         25     25     25     75
1,1,2,2-Tetrachloroethane        -      -    200    300
Methyl Ethyl Ketone              -    200      -    400
Isobutyl Alcohol                50     50    250    250
1,4-Dioxane                      -    200      -    400
08611-4
304
-------
CMA SW-846 Assessment Study (8270)
Sample Identity and Concentration Information
Compounds                        Samples (µg/L)
                                 1      2      3      4
Aniline                          -     25      -    250
p-Chloro-m-Cresol                -    100      -    200
2-Chlorophenol                   -      -    300    350
Di-n-Octyl Phthalate             -      -    300    500
Diphenylamine                    -    100      -    250
Methylmethacrylate               -      -    300    500
Naphthalene                      -    100      -    250
4-Nitrophenol                    -      -    300    500
Phenacetin                       -      -    300    500
2,4,6-Trichlorophenol            -      -    300    350
2,4,5-Trichlorophenol            -      -    300    350
4-Chlorophenol                   -      -    300    350
Acetophenone                     -     25      -    250
1,3-Dichlorobenzene              -     25      -    200
Chlorobenzene                    -    100      -    200
08611-5
305
-------
CMA SW-846 Assessment Study (8330)
Sample Identity and Concentration Information
Compounds                        Samples (µg/L)
                                 1      2      3      4
Thiourea                         -      -    300    500
N-Phenylthiourea                 -    100    300    500
1-Acetyl-2-Thiourea              -    100    300    500
08611-6
306
-------
Percent of Recovery Observations
Not Meeting EPA Criteria
SW-846 Method       Laboratory A    Laboratory B    Laboratory C
8240 (17)*               29              29              94
8270 (16)*               31              69              81
8330 (3)*               100             100             100
*Total number of compounds included in calculations.
08611-7
307
-------
Variability Factor
Vu= Exp (2 S) for the Upper Limit
VI = Exp (-2 S) for the Lower Limit
Example
Factor = 1.68
True or Mean = 100 ppb
Upper Limit = 100 ppb x 1.68 = 168 ppb
Lower Limit = 100 ppb / 1.68 = 60 ppb
08611-8
308
-------
Repeatability Factor
Ru = Exp (2√2 S) for the Upper Limit
Rl = Exp (-2√2 S) for the Lower Limit
Example
Factor = 2.08
First Analysis = 100 ppb
Upper Limit = 100 ppb x 2.08 = 208 ppb
Lower Limit = 100 ppb / 2.08 = 48 ppb
08611-9
309
-------
CMA SW-846 Assessment Study
Intralaboratory Precision: Within Compound and Sample Pooled
Method    Laboratory    Degrees of Freedom     Vu      Ru
8240          A                 70            1.29    1.44
8240          B                 60            1.21    1.32
8240          C                 53            2.14    2.94
8270          A                 64            1.63    2.00
8270          B                 43            3.34    5.50
8270          C                 50            2.40    3.45
8330          A                  3            1.63    2.00
8330          B                  6            2.43    3.52
8330          C                  0              -       -
08611-10
310
-------
CMA SW-846 Assessment Study
Interlaboratory Precision, Combined*

Method       Vu         Ru        Ru Range
8240        2.31       3.27       1.70-9.01
8270        2.40**     3.46**     1.63-7.63
8330          -          -            -

* Based on average variance.
** Excluding 4-nitrophenol, 2,4,5-trichlorophenol.
08611-11
311
-------
False Positive and Negative Observations
Method 8240 (Volatiles)
7 False Positives, Sample 1, 2 Labs

False Negatives
3 Labs, Isobutyl Alcohol             50-250 µg/L (12)
3 Labs, 1,4-Dioxane                  200-400 µg/L (6)
2 Labs, Acetone                      200-400 µg/L (6)
2 Labs, Acetonitrile                 200-400 µg/L (6)
2 Labs, Methyl Ethyl Ketone          200-400 µg/L (6)
1 Lab, 1,1,1-Trichloroethane         25 µg/L (9)

Of 17 Compounds Spiked into 2 or More Samples,
There Were 7 Cases of False Negatives
08611-12
312
-------
False Positive and Negative Observations
Method 8270 (Semi-Volatiles)
8 False Positives, 1 or More Labs
2 Labs, N-Nitrosodiphenylamine in Same 6 Samples
1 Lab, No False Negatives
2 Labs, Methylmethacrylate           300-500 µg/L (6)
2 Labs, Acetophenone                 25-250 µg/L (5)
1 Lab, Diphenylamine                 100-250 µg/L (6)
1 Lab, Phenacetin                    300-500 µg/L (6)
1 Lab, Chlorobenzene                 100-200 µg/L (6)
08611-13
313
-------
-------
Acknowledgements
CMA Environmental Monitoring Task Group
Sharon H. Kneiss, CMA Staff Executive
Becky Wilson, CMA Administrative Assistant
08611-16
316
-------
QUESTION AND ANSWER SESSION
MR. BRITTON: What were
the outlier criteria that you used in that study,
George?
MR. STANKO: It failed
all of them. Grubbs was the primary one. Those
three values were so far out that they were out by
a factor of 10 to 15 from the rest; very isolated
cases. We didn't chase down what caused the outliers.
MR. BRITTON: So you
basically ignored only those observations which
were roughly a factor of 10 larger than they should
have been?
MR. STANKO: We applied
the Grubbs test, but that's how far out they were,
to give you some idea of the magnitude of how far
out they were.
MR. BRITTON: Of course
it depends on the confidence level.
MR. STANKO: Right.
MR. BRITTON: That's one
of the reasons why there'd be a great difference
between results from your analysis and results from
our analysis, even on the same data. We have a bit
317
-------
of a difference in philosophy, I think, as to
whether we are concerned with characterizing the
data that would reflect in general use of the
method, or whether we're interested in character-
izing how the method data would look when performed
properly. That philosophical difference is
tremendous. It's one of the reasons why the log
transformation...log transformation is appropriate
if you're trying to encompass virtually all the
data that would be generated by use of that method
in practice but would basically accommodate all of
the data that represents inadequate performance.
But if you are interested in what the method would
look like, what data from the method would look
like when performed properly, then you have to try
and look past that data. It's a very big trick
cause none of us know exactly which numbers are bad
and which numbers aren't. We want to look past
those bad numbers.
MR. TELLIARD: John.
MR. McGUIRE: John McGuire,
EPA. George, one question. You said...probably I
wasn't paying attention. You said that you had
filtered the samples. Was that before or after
spiking.
318
-------
MR. STANKO: No, the
samples were not filtered.
MR. McGUIRE: I thought
the groundwater was filtered.
MR. STANKO: No. That
groundwater sample contained solids. But we looked
at the data to see if the solids had any effect on
the analysis of the compounds and in all honesty, I
could not detect any irreversible absorption of the
compounds we used in this study.
MR. McGUIRE: Okay.
There was no attempt made to extract the solid
itself, then?
MR. STANKO: The solids
were extracted, yes.
MR. McGUIRE: They were?
MR. STANKO: With the
water, in situ.
MR. McGUIRE: Okay, but
I mean totally extracted?
MR. STANKO: Total
extraction.
MR. McGUIRE: Thank you.
I certainly hope I can get a copy of that.
MR. TELLIARD: In the back.
319
-------
Any other questions? All right, folks, thank you
very much.
Point of interest. Tomorrow at a quarter to
nine here. The boat will be leaving about 5:30-ish.
That gives you about 20 minutes to get your snuggies
on and get out there. Waterside, down there by
Phillips. Thank you.
(WHEREUPON, the 4-3-85 session was adjourned.)
320
-------
Revised Presentation
321
-------
INTER- AND INTRALABORATORY ASSESSMENT OF
SELECTED SW-846 METHODS FOR ANALYSIS OF
APPENDIX VIII COMPOUNDS IN GROUNDWATER
CHEMICAL MANUFACTURERS ASSOCIATION
2501 M Street, N.W.
Washington, DC 20037
Authors:
George H. Stanko
Shell Development Company
Peter E. Fortini
American Cyanamid Company
Presented at
U.S. EPA Symposium on the
Analysis of Pollutants in the Environment
Norfolk, Virginia
April 3-4, 1985
322
-------
ABSTRACT
EPA Methods 8240, 8270 (GC/MS), and 8330 (HPLC) from SW-846 were
evaluated at three prominent laboratories for a select list of 36 compounds
that were spiked into a relatively simple groundwater matrix. Sample
preparation was designed to simulate a groundwater contamination problem at
an abandoned waste site and samples were prepared as blind triplicates to
facilitate statistical analyses for precision and bias. Results from the
assessment of bias (accuracy, recovery) indicated that the three
laboratories that took part in the study were not able to meet EPA's
quality assurance criteria for a large percentage of the 36 compounds
included in the study. The assessment for precision revealed precision
varies from laboratory to laboratory, for different methods, and for
specific compounds. A large number of false positive and negative
observations resulted at all laboratories to varying degrees. The list of
Appendix VIII compounds reportedly amenable to Methods 8240 and 8270 was
found to be somewhat less than reported by EPA. Method 8330 was found to
be completely inadequate for the detection and quantification of the three
compounds included in the study. The statement by EPA that SW-846 contains
the analytical methods for all (375) Appendix VIII compounds (excluding
exotics and water reactive compounds) simply is not true.
RLL8506605
323
-------
INTER- AND INTRALABORATORY ASSESSMENT OF SELECTED SW-846 METHODS
FOR ANALYSIS OF APPENDIX VIII COMPOUNDS IN GROUNDWATER
1. INTRODUCTION
The U.S. Environmental Protection Agency (EPA) published "Test Methods
for Evaluating Solid Wastes, Physical/Chemical Methods" (SW-846) to serve
as a methods manual for the sampling of groundwater or leachate and
analyses of 375 parameters listed in Appendix VIII of Title 40 Part 261 of
the Code of Federal Regulations (40 C.F.R. 261). On October 1, 1984 (Fed.
Reg. pp. 38786-38809), the EPA proposed to amend its hazardous waste
regulations under Subtitle C of the Resource Conservation and Recovery Act
(RCRA) to make SW-846 methods mandatory for all testing and monitoring
activities required under Subtitle C, as specified in 40 C.F.R. Parts 260-
271. The EPA identified the principal issue of the proposed rule as how
testing under RCRA should be carried out, and the proposed rule specif-
ically directs that SW-846 be used to "ensure accurate, consistent, and
comparable testing results—year to year, facility to facility, and region
to region".
In May 1983, Environmental Testing and Certification Corporation (ETC)
prepared a report1 for the Chemical Manufacturers Association (CMA)
evaluating the effectiveness of SW-846 (2nd Edition) in assuring that users
would be guided to select consistent analytical and sampling procedures for
each parameter. The American Petroleum Institute had a similar report
prepared by Radian Corporation. Both of these reports revealed that SW-846
was not adequate to properly guide an analytical laboratory due to lack of
sufficient information and details, technical inaccuracies, and incon-
sistencies. The evaluations by ETC and Radian also pointed to numerous
problems associated with compounds identified by the Appendix VIII list.
To verify these findings, CMA conducted a limited study at three prominent
laboratories involved in environmental analyses. The prime objective of
the CMA study was an assessment of inter- and intralaboratory precision and
bias (accuracy) of selected analytical methods from SW-846. A second
objective was to determine how well the resulting data would define
groundwater contamination where the identity and levels of all pollutants
were known.
2. STUDY DESIGN
The CMA limited study was designed to simulate an environmental
situation that could very well exist, and someone would have to install
wells; collect and analyze groundwater samples; and finally assess the
nature and magnitude of the problem. The environmental situation was as
follows:
There is an old waste disposal site where groundwater contamination is
suspected. The direction of flow of the groundwater is known and a
decision was made to install four monitoring wells. One of these wells
is located as a background well and three are down gradient from the
disposal site. Up gradient from the background well is a gasoline
service station that experienced a leaking storage tank in the past;
RLL8506605
324
-------
however, groundwater contamination from the leak was never assessed. A
small subdivision up gradient from the waste site uses septic tanks.
Chlorinated drain cleaner (1,1,1-trichloroethane) has been used for
years. All of the organic compounds that were disposed of at the site
(unlined) are known. It is reasonable to assume that the drums have
been leaking for some time, and it would be reasonable to believe that
all of the compounds have migrated into the groundwater.
Four samples were designed for the CMA study to simulate the
environmental situation described. A list of 34 organics of interest from
Appendix VIII was developed and the distribution and approximate concen-
trations for the four samples were prepared for a request for proposal (RFP)
from four prominent contract laboratories. The laboratories were contacted
about the CMA RFP and advised to maintain the integrity of the compound
list as well as the sample list. As part of the RFP, nine SW-846 methods
were suggested to be used to conduct the analyses. CMA received bids from
three laboratories for the study. All three laboratories concurred and
recommended that SW-846 methods 8240 (GC/MS for volatiles), 8270 (GC/MS for
semi-volatiles), and 8330 (HPLC method, sulfur compounds) be used to
analyze for the select list of compounds. This concurrence was not sur-
prising since the compound list was originally designed to test the GC/MS
methods which were considered the best and most reliable in SW-846. Method
8330 was randomly selected for evaluation of an HPLC procedure from SW-846
and compounds listed in the method were included in the compound list.
CMA accepted proposals from three laboratories to analyze the samples
for the study and selected one laboratory to prepare the samples and to
prepare a report which included data from all laboratories. To maintain
the integrity of the study, the CMA list of compounds was slightly altered
with respect to compounds listed as well as concentrations in various
samples. In the case of the laboratory that prepared the samples, special
precautions were taken to make sure the confidential information was not
available to the analysts. Each of the laboratories was instructed by CMA
to analyze each sample by SW-846 Methods 8240, 8270, and 8330. The lab-
oratories were also instructed that there were other Appendix VIII
compounds not listed in Methods 8240 and 8270, but were amenable to these
methods according to EPA. Three laboratories performed the analyses
independently and submitted the data to CMA. When all data were reported,
CMA sent the data to the one laboratory selected to prepare a report.
Statistical analysis of all the data was performed to determine inter- and
intralaboratory precision and bias. These statistical results from the
contract laboratory were included and were the basis for this report.
3. EXPERIMENTAL
3.1 Groundwater Matrix
Groundwater for the study was provided by a CMA member company. The
water was collected from a monitoring well located in the coastal plain
region of Texas. Some physical and chemical properties for the water are
listed in Table 1.
RLL8506605
325
-------
Table 1
Physical and Chemical Properties of Groundwater - CMA Assessment Study
Property                            Results

Appearance                          Very turbid, sandy solids
pH                                  6.9
Total Suspended Matter              1,060 mg/L
Total Dissolved Solids a)           580 mg/L
Chloride, Cl                        70 mg/L
Hardness, as CaCO3                  318 mg/L

a) Filtered thru 0.45 µm membrane.
A portion of the groundwater was also analyzed prior to preparation of
samples to establish the presence or absence of organics. The results
demonstrated the well sample to be free of organics and specifically the
list of organics of interest from the Appendix VIII list. Since the
groundwater was free of organics, method bias (recovery, accuracy) could
easily be assessed from the spiking levels (true value = spiking value).
3.2 Sample Identification
Four samples were used for the CMA study. To obtain the necessary
data, these four samples were analyzed as blind triplicates. Sample
numbers were randomly generated and assigned to the samples. Table 2
identifies the sample numbers employed.
Table 2
Sample Code Used for CMA Assessment Study
Samples        Assigned Numbers
Sample 1       10, 7, 6
Sample 2       8, 11, 2
Sample 3       5, 1, 9
Sample 4       3, 12, 4
Each laboratory received 12 samples identified only by a single number.
For each sample number they received two one-liter containers and two VOA
vials. The study was coordinated in a way that analysis of samples was
initiated on the same day.
3.3 Sample Preparation
Table 3 reveals the identity and spiking levels of compounds in each of
the four samples. Complete details for the sample preparation have not
been included in this report. The groundwater matrix was spiked following
procedures described in Section 7, "Calibration" of EPA Method 624. The
samples were prepared to simulate what might be expected for the four
hypothetical monitoring wells described in the environmental situation.
Sample 1 (identified by numbers 6, 7, and 10) was prepared to represent the
RLL8506605
326
-------
Table 3
CMA SW-846 Assessment Study - Sample Identity and Spiking Levels (µg/L)
(Compound-by-compound spiking concentrations for Samples 1 through 4.)
327
-------
Table 3 (continued)
328
-------
background well and was spiked with benzene, toluene, isobutyl alcohol
(leaking tank), and 1,1,1-trichloroethane (septic tank). These same
compounds were spiked in all samples. Samples 2, 3, and 4 were prepared to
simulate the three down gradient wells at the hypothetical waste site. As
noted in Table 3, samples 2 and 3 contained approximately half of the list
of 36 compounds in each sample. Sample 4 contained all compounds at the
highest concentration levels. The study was designed so each of the 36
selected compounds was present in a minimum of two samples.
3.4 Analytical Results
The analytical results from the three contract laboratories were
summarized into the five tables included in Appendix A. The data were put
in a format to facilitate statistical analysis. The tables identify
laboratories by code (A, B, C), the coded samples, compounds spiked into
samples, and the reported values for the study. Reviewers of Appendix A
are cautioned that the purpose for the study was not to compare the
performance of a given laboratory with respect to other laboratories, but
to assess analytical methods contained in SW-846 for the analysis of
Appendix VIII compounds in a real groundwater matrix. It should also be
noted that the three laboratories are prominent laboratories that employ
modern instrumentation, have years of experience with GC/MS methodology,
and certainly have qualified and experienced analysts. There is every
reason to believe that the analytical results obtained for the study
represent the current state-of-the-art with respect to low level environ-
mental analyses.
3.5 Statistical Analysis of Data
The analytical data were statistically analyzed to assess the two most
important criteria of analytical methods, precision and bias (accuracy,
recovery). There are numerous acceptable ways that this could be done.
One of the procedures employed was taken from procedures described in the
EPA methods as part of the quality assurance sections. Every effort was
made not to distort the statistical results, but to be conservative and
reasonable where arbitrary decisions or assignments were required.
3.5.1 Assessment of Bias (Recovery, Accuracy)
The statistical procedures used to assess bias were taken from the
quality assurance section of SW-846 Methods 8240 and 8270. One is directed
to calculate percent recovery of spiked compounds, then calculate standard
deviation for the percent recoveries assuming a normal distribution for the
recovery values. The resulting standard deviation is then used to calcu-
late a percent relative standard deviation (%RSD). The mean percent
recovery and %RSD of the percent recovery defines the bias for a given
compound by a specific method. For the CMA study, these calculations were
done for each laboratory and for all laboratories combined. It should be
noted that the %RSD defines the dispersion of values of percent recovery
with respect to the simple mean percent recovery. The %RSD also represents
one measure for precision as well. While the %RSD values were calculated
RLL8506605
329
-------
Table 4
CMA SW-846 Assessment Study - Bias (Mean Percent Recovery and %RSD)
(Results for each compound, by laboratory, by method, and for all three laboratories combined.)
330
-------
and included in the report (Table 4), an alternate procedure was employed
to assess precision.
Table 4 was prepared to summarize all the statistical information for
bias (accuracy) following the procedures specified by EPA in Section 8.2.4
of SW-846 Methods 8240 and 8270. Table 4 shows the bias results for each
compound included in the CMA study by laboratory, by method, and for all
three laboratories combined. The number of observations included in each
recovery calculation was also listed. Some discretion was used in prep-
aration of Table 4. No mean recovery value for all laboratories was
calculated when only one laboratory reported values for a compound. There
were several instances where a laboratory reported values for one set of
samples but did not find the compound in a second set. For these cases,
the zero percent recovery observations were not included in calculations
since they would have influenced bias in a way that did not appear
justified nor useful.
The performance criteria set forth in Section 8.2.4 of the Quality
Control section of SW-846 Methods 8240 and 8270 are identical. "The average
percent recovery must be greater than 20 (%) for all compounds to be
measured and greater than 60 (%) for all surrogate compounds. The percent
relative standard deviation (%RSD) must be less than 20 (%) for all
compounds to be measured and all surrogate compounds." Section 8.1.1
states, "Before performing any analyses, the analyst must demonstrate the
ability to generate acceptable accuracy and precision with this method
(8240, 8270). This ability is established as described in Section 8.2."
Table 4 was reviewed to determine the "ability" of the three laboratories
to "generate acceptable accuracy and precision" using the EPA criteria.
The results are summarized in Table 5.
The data summarized in Table 5 indicate that these three laboratories
did not meet the criteria for a large percentage of compounds to demon-
strate their ability to generate acceptable data. Considering that the
three laboratories are prominent laboratories with considerable GC/MS
experience, it is doubtful that any laboratory can meet the criteria for
all compounds amenable to the three SW-846 Methods. It should also be
noted that the methods give no guidance or instructions on what is to be
done when the criteria are not met.
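The screening described above can be expressed compactly. The following Python sketch is an illustration only, with hypothetical function names and invented example values; it computes the mean percent recovery and %RSD for one compound and applies the Section 8.2.4 criteria:

    import statistics

    def recovery_statistics(spiked, found_values):
        # Percent recoveries for replicate analyses of one spiked compound
        recoveries = [100.0 * v / spiked for v in found_values]
        mean_rec = statistics.mean(recoveries)
        rsd = 100.0 * statistics.stdev(recoveries) / mean_rec
        return mean_rec, rsd

    def meets_epa_criteria(mean_rec, rsd, surrogate=False):
        # Section 8.2.4: recovery > 20% (60% for surrogates) and %RSD < 20%
        floor = 60.0 if surrogate else 20.0
        return mean_rec > floor and rsd < 20.0

    mean_rec, rsd = recovery_statistics(100.0, [92.0, 105.0, 88.0])
    print(mean_rec, rsd, meets_epa_criteria(mean_rec, rsd))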
TABLE 5
Percent of Recovery Observations NOT Meeting EPA Criteria
SW-846 Method          Laboratory
                      A      B      C
8240 (17) a)         29     29     94
8270 (16) a)         31     69     81
8330 (3) a)         100    100    100
a) Total number of compounds included in calculations.
RLL8506605
331
-------
3.5.2 Assessment of Precision
The resulting data from the CMA study were statistically analyzed using
lognormal distribution theory to assess precision for each method (i.e.,
8240, 8270, 8330). As measures of intralaboratory and interlaboratory
precision, variability factors (V) and repeatability factors (R) were
calculated.
Upper and lower variability factors are defined by equations (1) and
(2), where S is a pooled standard deviation of logarithms of amounts found by
analysis.

(1) Vu = exp(2S) for the upper limit
(2) Vl = exp(-2S) for the lower limit

These variability factors define 95% confidence limits for the ratio of an
observed to a known mean or true value. If the true or mean value is
expressed as (X), then 95% confidence limits on an observed value are
calculated using equations (3) and (4).

(3) X · Vu = upper limit
(4) X / Vu = lower limit

For example, for a mean or true value of 100 ppb and a Vu value of 1.68,
95% of the results for a sample would fall in a range of 60 ppb (100/1.68)
to 168 ppb (100 · 1.68).

Upper and lower repeatability factors are defined by equations (5) and
(6). These define 95% limits for an observation when the true or mean
value is not known but only estimated from a single previous observation
(X).

(5) Ru = exp(2·√2·S) for the upper limit
(6) Rl = exp(-2·√2·S) for the lower limit

The 95% prediction limits for the second observation are given by equations
(7) and (8).

(7) X · Ru = upper limit
(8) X / Ru = lower limit

Thus, if the first analysis of a sample gives a value of 100 ppb and the Ru
value is 2.08, then with 95% probability, the results for the second
analysis will fall in the range of 48 ppb (100/2.08) to 208 ppb (100 · 2.08).
Variability and repeatability factors expressing intralaboratory
precision for each compound in the CMA study are given in Table 6.
RLL8506605
332
-------
Reported results were transformed to the logarithm scale and the standard
deviations of log values (amount found) were calculated for each laboratory,
compound, and sample from results reported for the (up to) three triplicates.
Non-detected observations were omitted from statistical analysis. Two unusual
values for diphenylamine on different samples reported by laboratory B, and
the single value for methyl ethyl ketone by laboratory B, were also excluded
as outliers. Standard deviations from triplicate analyses were pooled over
laboratories and samples on a compound-specific basis, and Vu and Ru were
calculated from equations (1) and (5) using the pooled standard devi-
ations. Also included in Table 6 are the number of samples in which the
compound was detected, divided by the number of samples in which it was
present; the degrees of freedom in the estimate of (S) used to calculate Vu
and Ru; the geometric mean level; and the percent recovery. The geometric
mean levels were calculated using equation (9).

(9) Geometric mean X = exp((1/n) Σ log X)
The percent recovery reported is this geometric mean divided by the
geometric mean of actual levels in the samples where the compound was
detected.
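As an illustration of these calculations (hypothetical Python with invented replicate values, not data from the study), the sketch below pools standard deviations of log(amount found) over samples, forms Vu and Ru from equations (1) and (5), and computes the geometric mean recovery from equation (9):

    import math

    def intralab_precision(replicate_sets, true_levels):
        # replicate_sets: list of lists of found values (non-detects already removed)
        # true_levels: matching list of spiking levels
        logs = [[math.log(v) for v in reps] for reps in replicate_sets]

        # Pool standard deviations of log(amount found) over samples
        ss, df = 0.0, 0
        for sample_logs in logs:
            mean_log = sum(sample_logs) / len(sample_logs)
            ss += sum((x - mean_log) ** 2 for x in sample_logs)
            df += len(sample_logs) - 1
        s_pooled = math.sqrt(ss / df)

        vu = math.exp(2.0 * s_pooled)                    # equation (1)
        ru = math.exp(2.0 * math.sqrt(2.0) * s_pooled)   # equation (5)

        # Geometric mean found and percent recovery, equation (9)
        all_logs = [x for sample_logs in logs for x in sample_logs]
        geo_found = math.exp(sum(all_logs) / len(all_logs))
        true_logs = [math.log(t) for t, reps in zip(true_levels, logs) for _ in reps]
        geo_true = math.exp(sum(true_logs) / len(true_logs))
        return vu, ru, 100.0 * geo_found / geo_true

    print(intralab_precision([[95.0, 110.0, 88.0], [240.0, 205.0, 260.0]], [100.0, 250.0]))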
The Vu and Ru calculated in Table 6 represent somewhat of an average
within laboratory variability for the three laboratories. These Vu and Ru
values can be used to estimate the range of values one might expect (95%
confidence interval) for any of the laboratories with respect to a true or
mean value (Vu) or first observation (Ru). For example, the value of Ru
listed in Table 6 for 1,2-dichloroethane is 2.02. If a first determination
for that dichloroethane was 100 ppb, then the range of values for a second
observation would be 50 ppb (100/2.02) to 202 ppb (100 · 2.02) when the same
laboratory analyzed the sample a second time. Precision does vary from
compound to compound, and this variability must be considered when
interpreting the analytical data.
The Vu and Ru values shown in Table 7 quantify the variability each
laboratory experienced for the group of compounds using a specific
method. Standard deviations for triplicate samples were pooled over
compounds detected by each method for each of the three laboratories and Vu
and Ru values calculated for each method and laboratory using equations (1)
and (5). Using Ru values it can be seen that Laboratory B had the best
precision with Method 8240 and the poorest for Method 8270. Laboratory A
had the best precision for Method 8270. The precision observed for
Laboratory C was fairly consistent for both Methods 8240 and 8270.
Although the values for Ru and Vu were calculated for Method 8330 and
included in Table 7, they are of very limited value due to the low rate of
detection for all laboratories.
Interlaboratory estimates of precision were also computed for the CMA
Study and expressed as Vu and Ru values in Table 8. This was done on a
method and compound specific basis. Actual reported values were
transformed to the logarithm scale. For each compound, the root mean
square difference (in this scale) between determinations of different
laboratories on equivalent samples was calculated using equation (10).
RLL8506605
333
-------
Table 6
CMA SW-846 Assessment Study - Intralaboratory Precision by Compound
(For each compound: samples detected/samples present, degrees of freedom, Vu, Ru, geometric mean level, and percent recovery.)
334
-------
Table 7
CMA SW-846 Assessment Study - Intralaboratory Precision by Method and Laboratory
(Vu and Ru values pooled over compounds for each laboratory and method.)
335
-------
TABLE 8
CMA SW-846 ASSESSMENT STUDY
Interlaboratory Precision of Methods 8240, 8270, 8330

Method, Compound        Spike Range (ppb)        Labs Reporting
8240
Acetone
Acetonitrile
Benzene
Carbon Tetrachloride
Chloroform
1,1-Dichloroethane
1,2-Dichloroethane
1,2-Dichloropropane
Ethyl Benzene
Tetrachloroethene
1,1,1-Trichloroethane
Trichloroethene
Toluene
1,1,2,2-Tetrachloroethane
Methyl Ethyl Ketone
Isobutyl Alcohol
1,4-Dioxane
8270
Acetophenone
Aniline
p-Chloro-m-Cresol
Chlorobenzene
2-Chlorophenol
4-Chlorophenol
1,2-Dichlorobenzene
1,3-Dichlorobenzene
Di-n-Octylphthalate
Diphenylamine
Methyl Methacrylate
Naphthalene
4-Nitrophenol
Phenacetin
2,4,6-Trichlorophenol
2,4,5-Trichlorophenol
8330
Thiourea
N-Phenylthiourea
1-Acetyl-2-Thiourea
200-400
200-400
50-100
200-300
25- 50
25- 50
100-200
100-200
200
200-300
25-325
200-300
25- 75
200-300
200-4CO
50-250
200-4CO
25-250
25-250
10C-2CC
100-20D
300-350
300-350
25-200
25-200
300-500
100-250
300-500
100-250
300-500
300-500
300-350
300-350
300-500
100-500
100-500
.1 i, ;'!" , r, ,1, ,,i
A.C
• A
A.B'.C
A',B'
A.B.C
A.B.C
A.B.C
A,E,C
A.B'.C
A.B.C
A.B.C
A.B.C
A.B.'C
A.B.C
A.C
-
C
A.B
A.B.C
A.B.C
A,C
A.B.C
A.B.C
A.B.C
A.B.C
A.B.C
A,B
A
A.B.C
A.B.C
A,C
A'.B/d
A.B.C "
-
B
A '"
8330
Vu
4.73
-
1.51
2.63
1.72
2.12
2.06
1.73
2.80
1.6B
2.05
1.70
1 .'45
3.3E
2.59
"
4.21
1.61
1.75
3.4?
1.39
2.07
2.54
2.58
2.65
2.20
™
1.65
22.33
3.22
1.41
7.30
—
—
—
Ru
9.01
" -
1.79
3.92
2.15
2.69
2.77
2.17
4.30
2.09
2.75
2.11
1.7C
5.61
3.E4
~
7.63
2.31
2.21
5.67
1.59
2.e:
3.74
3.83
' 3.97
3. 05
"
2.02
80.84
5.22
1.63
16.64
—
-
—
8240 Combined , based on average variance
8270 Combined, excluding 4-nitrophenol, 2,4,5 TCP
8330 Combined
2.31
2.40
3.27
3.46
336
-------
(10)  Sd = [ SUM (Xij - Xi'j')^2 / M ]^(1/2)

where the summation is over

     o  samples containing the compound
     o  laboratories i and i' = A, B, C with triplicates j, j' = 1, 2, 3

and M is the number of pairs (Xij, Xi'j') of nonzero values reported by
independent laboratories on equivalent samples.

Equation (10) is equivalent to the estimate of [2(sigma_L^2 + sigma_z^2)]^(1/2)
derivable using standard analysis of variance techniques when no values are
missing. Here sigma_L^2 and sigma_z^2 are, respectively, the between- and
within-laboratory components of variance of log (concentration found). Sd^2 is
an unbiased estimate of 2(sigma_L^2 + sigma_z^2) regardless of the imbalance
caused by missing values.
For three 8240 compounds, one 8270 compound and all 8330 compounds, no
laboratory or only one laboratory reported values on any sample.
Therefore, no estimates for between laboratory variability could be
derived. The single value for methyl ethyl ketone from Laboratory B was
excluded from the analysis. Two unusually low values for diphenylamine on
different samples by Laboratory B were also excluded as outliers.
The interlaboratory variability (Vu) and repeatability (Ru) factors
shown in Table 8 were calculated as follows using equations (11) and (12).
(11)  Ru = exp(2 Sd)
(12)  Vu = exp(2 Sd / sqrt(2))
The Vu and Ru values are used as previously illustrated, but the 95%
confidence intervals are for an observation from one laboratory with
respect to a second observation at a different laboratory.
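As a minimal sketch of the calculation described by equations (10) through
(12), assuming the reported results for one compound are supplied as a list
of samples, each a list of (laboratory, value) pairs (the data structure and
function name are illustrative, not part of the study):

import math
from itertools import combinations

def interlab_factors(results):
    """results: for one compound, a list per sample of (lab, value) tuples
    with nonzero reported values.  Returns (Sd, Vu, Ru) following equations
    (10)-(12): Sd is the root mean square log-scale difference between
    determinations by different laboratories on equivalent samples."""
    sq_diffs = []
    for sample in results:
        for (lab_i, x), (lab_k, y) in combinations(sample, 2):
            if lab_i != lab_k:                      # pairs from different labs only
                sq_diffs.append((math.log(x) - math.log(y)) ** 2)
    Sd = math.sqrt(sum(sq_diffs) / len(sq_diffs))   # equation (10)
    Ru = math.exp(2 * Sd)                           # equation (11)
    Vu = math.exp(2 * Sd / math.sqrt(2))            # equation (12)
    return Sd, Vu, Ru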
Ru values for 8240 and 8270 compounds indicate that, for most compounds
analyzed by these methods, results from independent laboratories can be
expected to agree to within factors of two to four. Problems with these
methods are indicated by the 11 compounds which one or more laboratories
failed to detect in any samples and by unusually high Ru values for
4-nitrophenol and 2,4,5-trichlorophenol. These appear to be due to factors
influencing all determinations within laboratories and not to isolated bad
analyses.
337
-------
3.6 False Positive and Negative Observations
Calculations for precision and bias are used to define the quantitative
aspects of an analytical method. There are two important properties of an
analytical method which are used to assess the qualitative attributes of a
method. These are the number of false positive and number of false
negative observations that result when a method is used. The experimental
design for the CMA study allows for such an assessment since the identity
of every compound in every sample was known.
The following summarizes the qualitative properties for the analytical
methods with respect to the compounds included in the CMA study and for the
groundwater matrix used for all samples.
3.6.1 Method 8240 (Volatiles)
Only four compounds were spiked into Sample 1; however, two of the
laboratories found a total of seven additional compounds in one or more of
the blind triplicates (6, 7, 10). These are all false positive
observations. This is a particular problem since Sample 1 was prepared to
represent an up-gradient well that was supposed to establish background
contamination levels both quantitatively and qualitatively. Review of the
data reported in Appendix A, Table A-l also indicates that one gets a
different picture for background contamination depending upon which
laboratory analyzed the sample and what sample from the blind triplicate is
used. Methylene chloride was reported to be present in most samples by all
three laboratories. For Sample 1 (6, 7, 10), Table A-2 in Appendix A shows
the level of methylene chloride exceeded the total of all other (four)
compounds spiked into the sample. It would be reasonable to assume that
methylene chloride was present as a contaminant even though the compound
was not spiked into the sample. Methylene chloride would not be
considered a false positive observation. The compound was not included in
the precision and bias calculations since the true concentration was not
known.
There were a number of false negative observations with Method 8240.
None of the laboratories were able to detect the presence of isobutyl
alcohol in any of 12 actual samples (4 samples in triplicate) at
concentrations that ranged from 50 to 250 µg/L. The same was true for 1,4-
dioxane in six samples where the concentration range was 200-400 µg/L. Two
of the laboratories did not detect the presence (6 samples) of acetone
(200-400 µg/L), acetonitrile (200-400 µg/L), methyl ethyl ketone (200-400
µg/L), and 1,1,1-trichloroethane (25 µg/L). This represents a serious
problem of false negative results. Of the 17 compounds spiked into two or
more samples, there were 7 cases of false negative observations for one or
more laboratories. This indicates that the list of compounds actually
amenable to analysis by Method 8240 is somewhat shorter than EPA has
reported.
338
-------
3.6.2. Method 8270 (Semi-volatiles)
There were a total of eight false positive observations reported by one
or more laboratories. In one instance, two laboratories reported the same
false positive observation for N-nitrosodiphenylamine, and both reported it
present in the same six samples analyzed. This is an interesting situation:
two independent laboratories concurring on the presence of a component not
actually present in the samples.
There were 16 compounds spiked into the various samples. Sample 4
contained all 16; Sample 3 contained 8; and Sample 2 contained 8. Only
one laboratory was able to detect and quantify all compounds in all
samples. Two of the laboratories did not detect the presence of methyl
methacrylate (300-500 µg/L) in any of six samples, nor acetophenone (25-250
µg/L) in five of six samples. Additional false negative observations
included diphenylamine (100-250 µg/L), phenacetin (300-500 µg/L), and
chlorobenzene (100-200 µg/L). These false negative observations indicate
that the list of compounds actually amenable to analysis by Method 8270 is
somewhat shorter than EPA has reported.
3.6.3. Method 8330 (HPLC, Sulfur Compounds)
False positive observations were a common problem at all three
laboratories when the method was followed. One laboratory used a dual
wavelength detector and employed absorbance ratios to reduce the number of
false positive observations (interferences); however, even this
modification did not solve the problem completely.
None of the laboratories were able to detect the presence of thiourea
in any sample. There were some reported values for the other two
compounds, but for all practical purposes the quantification was so poor it
would be better to consider these observations as false negatives, as well.
4. DISCUSSION
A second objective for the CMA study was to determine how well the
resulting data would define groundwater contamination. The results from
the study were, therefore, reviewed from the perspective of those using the
data for an environmental assessment. Because the identity and true
concentration level of every organic contaminant (except methylene
chloride) is known for each sample, it is possible to evaluate how well the
resulting data defined groundwater contamination for this hypothetical
situation. This evaluation is strictly applicable only to groundwater
samples resembling those used in the study; however, it provides a basis
for what might be expected in other, similar situations where the true
concentration levels and identities are not known and other available
information is factored into the overall assessment. In most situations,
the qualitative and quantitative quality of the data base will be much
poorer, since one does not usually have blind triplicate analyses done at
three laboratories.
339
-------
In Table 9, analytical results for all the spiked compounds for all
four samples are classified. For each compound spiked, a laboratory either
failed to detect its presence (generated a false negative result) or
detected the compound. Detections are further classified according to
whether the reported value is within a factor of two of the spiked amount,
our minimal standard for quantitative accuracy.
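A sketch of the classification rule used for Table 9, assuming each reported
result is compared with the known spiked amount (the factor-of-two threshold
is the minimal standard stated above; the function name is illustrative):

def classify_result(reported, spiked):
    """Classify one laboratory result for a spiked compound."""
    if reported is None or reported == 0:
        return "false negative"          # compound not detected
    if spiked / 2 <= reported <= spiked * 2:
        return "detected, quantitative"  # within a factor of two of the spike
    return "detected, non-quantitative"  # detected but outside a factor of two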
Of the background compounds (also EPA priority pollutants), benzene and
toluene were quantified in all samples by all laboratories, 1,1,1-tri-
chloroethane by most, and isobutyl alcohol (Appendix VIII compound) by no
one. Methods 8240 and 8270 detected and quantified non-background
compounds not containing chlorine in about half of the analyses; they
detected and quantified compounds containing chlorine in most analyses.
Method 8330 for sulfur-containing organic compounds performed poorly on the
three spiked compounds; no reported values fell within a factor of two of
the true value.
The user of these data is less concerned with how well the analytical
method performs on compounds which may or may not be present as groundwater
contaminants than with the validity of the particular list of compound
names and concentrations reported for his sample. From this point of view,
reported results for the simulated down gradient samples (Samples 2, 3, and
4) have been tabulated in Table 10.
A total of 531 analytical detections other than background species or
methylene chloride are reported for these samples. 63% (336 of 531) are
for species actually spiked at between half and double the reported
amount. An additional 19% of reported values represent compounds present
but at levels not within a factor of 2 (non-quantitative) of those
reported. Most of these reported values are underestimates. 18% of the
reported values are for compounds not spiked into the samples. Table 10
is also broken down by analytical method, as well as by chlorinated and
non-chlorinated compounds. Using Methods 8240 and 8270, results for
chlorinated compounds are generally trustworthy, while results for non-
chlorinated compounds include 16% and 25% false detections, respectively.
Three quarters of the detections by Method 8330 are false positives.
An evaluation of the degree of agreement between independent analyses
is also of interest to a user of the data. Table 11 was prepared by
examining all possible pairs of samples of the same type submitted to
different laboratories. The number of such pairs is 162 (9 x 3 = 27
choices for the first analysis, 2x3-6 for the second) For each set of
values reported by the first laboratory, results of the second laboratory
were classified as either (1) not confirming the presence of the compounds,
(2) confirming the presence of the compound by an amount differing by a
factor of 2 or more from the original amount, or (3) confirming with
quantitative agreement.
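A sketch of how the pairwise comparison behind Table 11 can be tabulated,
assuming each analysis is represented as a dictionary mapping compound names
to reported concentrations (the structure and names are illustrative):

def compare_pair(first, second, factor=2.0):
    """Classify the second laboratory's results against the first's.
    first, second: dicts of {compound: reported concentration}."""
    outcome = {"not confirmed": 0,
               "confirmed, quantitative disagreement": 0,
               "confirmed, quantitative agreement": 0}
    for compound, value in first.items():
        other = second.get(compound)
        if other is None:
            outcome["not confirmed"] += 1
        elif value / factor <= other <= value * factor:
            outcome["confirmed, quantitative agreement"] += 1
        else:
            outcome["confirmed, quantitative disagreement"] += 1
    return outcome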
Based on the tabulation in Table 11, the submitter of samples to two
laboratories can expect that results reported from the first laboratory
will be confirmed quantitatively (to within a factor of 2) by the second
340
-------
[Tables 9, 10, and 11 (classification of analytical results for the spiked
compounds, breakdown of reported detections by method and by chlorinated
versus non-chlorinated compounds, and agreement between independent analyses
of paired samples) appear on pages 341-344 of the original; the scanned text
on these pages is not legible and could not be reproduced here.]
341-344
-------
laboratory for 49% of the compounds, confirmed qualitatively for an
additional 20%, and not confirmed at all for 31%. One in 200 quantitative
confirmations and one in 11 qualitative confirmations (with quantitative
disagreement) will be for compounds not actually present in the sample.
5. CONCLUSIONS
The results from the CMA SW-846 Assessment Study revealed that even the
best of the analytical procedures contained in SW-846 (the GC/MS methods)
are somewhat inadequate for the analysis of compounds reported by EPA to be
amenable to Methods 8240 and 8270 from the Appendix VIII list. Method 8330 was found
to be completely inadequate for the three compounds included in the
study. This observation was confirmed by all three laboratories. The
statement that SW-846 contains the analytical methods for all (375)
Appendix VIII compounds (excluding exotics and water reactive compounds)
simply is not true.
Review of results from the assessment of bias (accuracy, recovery)
indicated that the three laboratories were not able to meet the EPA
criteria to demonstrate one's ability to generate "acceptable accuracy and
precision" for a large percentage of compounds included in the study. This
observation was unexpected since three prominent and experienced
laboratories took part in the study, and the samples contained only pure
compounds in a rather simple groundwater matrix.
There were a large number of false positive and negative observations
by all laboratories to varying degrees. This observation is of particular
concern since GC/MS methodology was employed for rather simple samples in a
rather simple groundwater matrix. With less specific detectors such as FID
(GC) or UV (HPLC), the problem with false positive and negative obser-
vations would be expected to be even more severe. The resulting data for
Method 8330 support this conclusion.
The calculated Vu and Ru values, which express intra- and
interlaboratory precision, revealed that precision varies from laboratory to
laboratory, from method to method, and from compound to compound. These facts
must be considered when analytical data are being interpreted to assess an
environmental situation.
It is reasonable to conclude that the numerous problems observed for
Methods 8240, 8270, and 8330 for the samples included in the CMA SW-846
Assessment Study may be expected for other methods in SW-846 not evaluated.
SW-846 certainly has not reached the level of development whereby its use
should be mandated by law. Until all the methods contained in SW-846 are
adequately validated, SW-846 is nothing more than a collection of methods
that may or may not work, and is only suitable as a reference document.
Even as a reference source, SW-846 has limitations.
If one reviews the resulting data from the CMA study from a data user's
perspective, one would conclude that the analytical community could indeed
identify groundwater contamination, but not very well both qualitatively
-------
and quantitatively. The problem with false positive and negative
observations would be of major concern in determining immediate corrective
action, as well as the potential for future problems.
6. RECOMMENDATIONS
The EPA should not promulgate the mandatory use of SW-846 for testing;
validation is required before promulgation.
The EPA should concentrate its efforts initially to the validation and
development of GC/MS Methods 8240 and 8270. The list of Appendix VIII
compounds amenable to these procedures needs to be established, along with
supporting precision and bias data.
For Appendix VIII compounds not amenable to Methods 8240 and 8270, EPA
needs to validate other methods in SW-846 to confirm they actually work for
some matrix with a reasonable expectation that they may work in other
matrices.
Multilaboratory studies of all methods, similar to the CMA SW-846
Assessment Study, would be advisable prior to including a method in a
manual such as SW-846 and prior to considering promulgation of a method for
mandatory compliance testing. Results from such studies should be sub-
jected to the analytical peer review process and appropriate statistical
evaluation, both within and outside the Agency.
Acknowledgement:
The authors gratefully acknowledge the work of CMA's
Environment Monitoring Task Group,
Sharon H. Kneiss, Staff Executive, and
Becky Wilson, Administrative Assistant
7. REFERENCES
1. Environmental Testing and Certification Corporation, 1983, "Evaluation
   of EPA SW-846 Test Methods for Evaluating Solid Waste, Physical/
   Chemical Methods, for Appendix VIII Parameters," Report to Chemical
   Manufacturers Association, Washington, D.C.
2. Radian Corporation, 1983, "Evaluation of the Feasibility of Analyzing
   Groundwater for the Appendix VIII Hazardous Constituents," Report to
   American Petroleum Institute, Washington, D.C.
346
-------
8. APPENDIX A
Summary of Data for CMA SW-846
Assessment Study
347
-------
[Appendix A data tables (reported concentrations by compound, sample, and
laboratory for the CMA SW-846 Assessment Study, including Tables A-1 and A-2
referenced in the text) appear on pages 348-352 of the original; the scanned
text on these pages is not legible and could not be reproduced here.]
348-352
-------
MR. TELLIARD: This
morning we have a number of speakers. The way
we're going to work this program to get you out
of here relatively on time is two very important
things. One, we have a boat break and the boat
break will take precedence over your other break.
The boat break happens to be that we found out last
night that the USS Iowa is coming up the channel
this morning, and there ain't many of us who have
ever seen a real battleship, so we will take our
coffee break when the battleship comes up the
channel. In between times, if you have other needs,
you're on your own. We can't train you to do
everything.
353
-------
MR. TELLIARD: Our first
speaker this morning is yesterday's speaker. Judith
is going to talk to us a little bit about the
analysis that EPRI has been working on in their
round robin on metals analysis. Judith.
354
-------
JUDITH SCOTT
TRW ENERGY DEVELOPMENT GROUP
UTILITY ROUND ROBIN RESULTS FOR THE DETERMINATION
OF ARSENIC AND SELENIUM BY GRAPHITE FURNACE AAS
MRS. SCOTT: Good morning.
Neither Winston Chow, Program Manager at EPRI, nor
Ray Maddalone, our Program Manager at TRW, could be
here today. Ray was hoping as late as last Friday
that he'd be able to make it, but he's being detained
in Los Angeles against his will. He is foreman of
a jury on a major trial which has kept him more
than a month longer on jury duty than anyone ever
dreamed.
My name is Judy Scott and I have been with the
project team since the beginning of the Analytical
Methods Qualification Project. Contrary to rumor,
I did not bribe a juror to remain a holdout, but I
am happy to be here.
EPRI is sponsoring the Analytical Methods
Qualification Project at TRW to examine the methods
available to determine priority pollutants in
utility discharge streams. We are concentrating
primarily on the trace metals at this point.
355
-------
There has been a trend in recent years toward
basing discharge limits on water quality criteria,
and there is concern over whether or not the methods
at hand are capable of determining these low levels
in utility matrices. Therefore, we are working
right now on a round robin using real world spiked
and split samples to measure precision and bias of
the analytical methods in utility streams.
The Analytical Methods Qualification Project
has four phases. We are now in the first phase
which has two rounds, and we have just completed
the first round. In the first round we are deter-
mining arsenic and selenium by graphite furnace
atomic absorption. In the second round, we will be
looking at the determination of copper, nickel,
chromium and lead by graphite furnace. Today, I will
• ,". :.:?J'I;J (:.'^:':'-\'L
be giving you a progress report on round one.
FIGURE 1
Our title was prepared at a time when we were
very optimistic about the length of time it would
take for the participating utilities to return
" ' 't .; ' '''" ' "
their results. Unfortunately, we were overly
optimistic and the results are just now coming in
and being collated. Although I will not have
any hard and fast precision and bias data to give
356
-------
you, I will present an overview of our activities
to date.
FIGURE 2
Initially we sent out 198 requests to utilities
to participate in the project and as their responses
came back, we screened them. We were looking for
utilities which would be representative of the steam
electric industry. We also were looking for utilities
that had positive experiences with DMR-QA samples in
the determination of these elements. In addition,
we wanted utilities that routinely use graphite
furnace analytical techniques.
Our goal was to have two groups of 30 labs
each, one group being fresh water matrices primarily,
and one group being saltwater. We were successful
on the freshwater labs. However, on the saltwater
side, we did not have as many labs responding. In
addition, we found that the saltwater labs are
using a greater diversity of methods to handle
their problems. Therefore, in round one for the
arsenic and selenium determination, we had labora-
tories using three different methods: graphite
furnace with matrix modification or matrix match-
ing, graphite furnace with Zeeman background correc-
tion, and gaseous hydride. We realize that for the
357
-------
seawater labs, we'll have at best a qualitative
comparison when the results are all in.
Just a brief word about the matrices. You'll
notice here, the first two matrices on the freshwater
side, river water and ashpond overflow, represent
an influent and effluent at a coal-fired utility.
Similarly on the seawater side, the first two
matrices, seawater and then seawater plus fireside
wash, represent also influent and effluent at an
oil-fired utility on the west coast.
The next matrix, treated chemical metal cleaning
waste (TCMCW), was selected because it's a matrix
with disposal problems and a matrix which has
typically been difficult to analyze.
The reagent grade water is included so that we
can compare the results of this study with those of
previous studies and it also is a measure of what
utilities can do in an interference-free matrix.
Then, finally, the QA/QC sample. We were fortu-
nate in obtaining vials of concentrate from EPA/EMSL,
Cincinnati, which will be used as a measure of abso-
lute bias.
FIGURE 3
The activities have fallen into the categories
shown. Once we decided on the matrices and found
358
-------
the laboratories, we then collected the test
samples. The first two samples, riverwater and
ashpond overflow, were acquired at an east coast
utility, sampled by utility personnel in bottles
that we had pre-cleaned and sent to the utility and
then they returned to us at TRW. The other three
matrices listed were acquired by TRW personnel at
a utility on the west coast.
For the spiking and disbursement activity, we
used a churn splitter, which was an original design
of the USGS scaled up to meet the project needs.
We acquired six different churn splitters to handle
each of the matrices.
Finally, in the process of anticipating the
kind of data we would want from this study, we
devised a laboratory data reporting form, which
I'll be showing you in a few minutes, as well as a
QA/QC questionnaire to help us develop a laboratory
profile so that we would have some way, perhaps,
when we discovered outliers, of examining labora-
tory practices in effect during analysis of these
samples.
Finally, the analysis of the data will follow
the ASTM D2777-77 standard, and we will be deriving estimates
of limits of detection and limits of quantitation.
359
-------
FIGURE 4
This is a picture of the 25 liter polyethylene
carboys used to acquire the samples and the shipping
crate used. The cleaning that was used for the
carboys is the same as that which was used for the
bottles on the project; 10 percent hydrochloric
acid, soak for 48 hours; 10 percent nitric acid,
soak for 48 hours. Then the bottles were filled
with reagent grade water until just before use.
This is similar to an NBS procedure which we had
established would be the best for our purposes
during an earlier literature survey. The cleaning
method is efficient in removing any trace element
contamination, and at the same time, doesn't overly
sensitize the surface so that later on you get
adsorption of the trace elements that you're trying
to add.
FIGURE 5
This is a picture of our disassembled churn
splitter which we used on the project. You see the
container, the churn disc and then the cover with a
hole for the shaft. You'll notice that the spigot
is at the bottom. It's modeled after the USGS 14
liter design. The one you're looking at here is
120 liters. We had them fabricated by Bell Arts.
360
-------
The mixer is extremely efficient. You get a
strong suction downward and then upwelling, and
USGS knew that the 14 liter size mixing was very
efficient. Our next step was to determine whether
or not our scaled-up model would be just as
efficient.
FIGURE 6
The next slide shows you the result of a test
to determine mixing efficiency. We used a water
matrix and we spiked into it a manganese solution
which we knew would have an ultimate mixed concen-
tration of five parts per million. The churn handle
was operated at about seven inches a second. The
first minute, samples were withdrawn every 10
seconds and then after that, every 30 seconds until
the end of the test. Initially, you can see that
there's actually a very strong downward surge,
because remember, we're withdrawing the samples at
the bottom of the churn mixer. Mixing is essentially
complete within two to three minutes. We selected
five minutes as the mix time for use on the project.
FIGURE 7
Once the mixing efficiency was verified, we
then proceeded to the spiking and disbursement of
actual samples. This is a picture of the churn
361
-------
splitter on the digital scale, which is accurate
to a tenth of a kilogram. We added the contents
of the carboys to the churn splitter, mixed them
for five minutes, and then withdrew samples for
density measurements. By measuring both weight and
density very accurately, we were able to determine
volume at any given point within a few tenths of a
percent.
FIGURE 8
This slide shows the volumetric flasks used for
measuring density in triplicate.
FIGURE 9
This is a copy of our spiking data worksheet.
On this we kept all the relevant data in one place,
including the density data and target spiking
levels. The target spiking levels were chosen
based on the background levels in the samples as
well as the methods' recommended use range. We
sent each participating laboratory four samples of
each matrix: the background, plus three spike levels
to cover the recommended use range. At the bottom
of the sheet are the formulas used for calculating
the amounts of subsequent spike additions.
FIGURE 10
The next slide shows the actual preparation of
362
-------
the spiking solution. The spikes themselves were
1,000 ppm certified standards which we in turn
checked against an independent standard. We used a
syringe to deliver the approximate spiking solution
that we desired, anywhere from say 2 to 10 milli-
liters, and then we made an accurate weighing of
that amount. We had determined earlier that in
the short time spent on the balance, evaporation
was not a problem. Then by knowing the weight of
the solution and the density, we were able to
calculate the exact amount which had been added.
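As a rough sketch of the bookkeeping just described (the 1,000 ppm standard
concentration is from the talk; the variable and function names are
illustrative, not the project's actual worksheet formulas):

def spike_increment_ug_per_l(spike_mass_g, spike_density_g_per_ml,
                             standard_conc_mg_per_l, churn_mass_kg,
                             churn_density_g_per_ml):
    """Concentration increment (ug/L) from one weighed spike addition."""
    spike_volume_l = spike_mass_g / spike_density_g_per_ml / 1000.0
    analyte_ug = standard_conc_mg_per_l * spike_volume_l * 1000.0  # mg/L * L * 1000
    churn_volume_l = churn_mass_kg / churn_density_g_per_ml        # kg / (g/mL) = L
    return analyte_ug / churn_volume_l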
For each sample, we added all six test elements:
copper, nickel, chromium, and lead, plus the arsenic
and selenium. This was done so that the two rounds
would be consistent in terms of any potential inter-
elemental interferences which could bias the results,
FIGURE 11
The standards that were prepared were then
combined in a teflon beaker for addition to the
churn mixer. You see here the teflon beaker and
top of the disc. The mixing action tips it and
rinses it and disperses the spike throughout the
sample. The churn was operated five minutes before
any samples were withdrawn and then continuously
during withdrawal of the samples.
363
-------
FIGURE 12
The next slide is our churning team in action.
Sample preparation takes place in an isolated
laboratory off-limits to non-project personnel.
The bottles being filled with samples are 500
milliliter low-density polyethylene which, as I
mentioned, were stored with reagent grade water and
emptied just prior to filling. I should mention
that the churn master you see on the right reported
a marked improvement in his golf game after this
activity, undoubtedly due to the large number of
bottles that we filled.
We filled approximately 640 bottles for shipping
to participating utilities. We have another couple
,'!'"' 'I!!' » „",', .1 '' ,
hundred for spares and QA purpose, such as the
measurements being made to determine that indeed
the samples are stable in the concentrations that
we prepared.
FIGURE 13
The bottles shown here have what we call the
pretest labels. These labels were designed to
minimize the amount of clerical work needed during
the filling operation. When you're talking about
this many bottles, you want to streamline the
filling activity as much as possible. The labels,
364
-------
which were prepared in advance, contain a code
indicating the matrix, the concentration level and
the filling order for later traceability. Bottles
were removed from storage in groups, a particular
spiking level filled, and then the bottles returned
to their places on the shelves as a group. They
were then stored in locked cabinets until prepara-
tion of the boxes to go to the laboratories.
FIGURE 14
This is a bottle with the final test label
attached. These labels were also prepared in
advance, with the exception of the fill order number.
FIGURE 15
The code of the label has several parts. I
can tell from the label the lab number, the round
number, the matrix identification, concentration
level, and fill order number.
The bottles were then bagged, boxed in card-
board boxes, and shipped via two day air to the
participating utilities. With the bottles went a
set of instructions, a copy of the procedures from
the 1983 EPA Methods for Chemical Analysis of Water
and Wastes, and a lab survey questionnaire designed
to determine the overall capability of a partici-
pating laboratory.
365
-------
I should note that we used special packaging
for the reference samples. Those come in, I believe,
five percent nitric acid which is considered a
hazardous material, and EMSL has a waiver from DOT
for this kind of shipment. I was told I probably
wouldn't live long enough to get a waiver for this
project since EMSL had had so much work getting
theirs. The vials were shipped separately as
hazardous material, and I was pleased to discover
that they, too, usually arrived within two to
three days.
FIGURE 16
I have here a copy of our laboratory data
reporting form which was a four-part carbonless
package. There's actually perforation on the top
of the first page, which doesn't show here, which
allowed us to use this top part as a label for the
bottle. The second and third pages went to the
utilities for recording results and then one sheet
was returned to us; the final page was kept at
TRW for our records. You notice here, that we have
places to record all the requisite analytical data.
In addition, we asked for other information about
the conditions of the analysis. Again, we're
trying to anticipate our data analysis needs
366
-------
downstream when certain data points turn out to
be outliers. Our goal is to be able to examine
factors which might have biased the results, so
we've requested furnace conditions, temperatures,
times, et cetera.
FIGURE 17
This is a copy of the first page of our general
laboratory questionnaire in which we ask for infor-
mation about the equipment used, sources of standards,
the level of operator experience, and calibration
ranges that were used for the tests.
FIGURE 18
I want to just briefly summarize what our
approach to the data analysis will be. Once the data
are collated, we will first perform a screening
test in which we look for any values which are
less than one-fifth or greater than five times the
average value calculated by our laboratory. At
that point, these values will be flagged and we
will contact the participant to investigate the
possibility of transcriptional or decimal errors.
I was very interested in what Samuel To had to say
yesterday when he reported that as much as 50
percent of DMR-QA errors were due to transcriptional
or decimal errors.
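A sketch of the screening test just described, assuming the reference value
is the average concentration determined by our own laboratory (the function
name is illustrative):

def flag_for_followup(reported, lab_average):
    """Flag values less than one-fifth or more than five times the average
    value determined by our laboratory, so the participant can be contacted
    about possible transcription or decimal-point errors."""
    return reported < lab_average / 5.0 or reported > lab_average * 5.0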
367
-------
If we find that an error is due to what was
referred to as a data management error, that value
will be corrected for purposes of this data analy-
sis and will be included in its corrected form in
the data collection. Any data points which we can't
reconcile by that analysis will remain in the
data base but will be flagged.
The next analysis of the data will be for
systematic errors and we will be using the ASTM
five percent two-tail ranking, which is called out
in D2777-77. The remaining data will then be
examined for outliers using a one percent T-test.
I believe the 1985 revision of ASTM D-2777 permits
one repeat cycle through this test and that will
be the maximum that will be used. I think earlier
versions did not specify how many times you could
go through this kind of an outlier process.
FIGURE 19
After rejection of outliers, we will then
calculate both single operator and overall precision
and bias for each element and each matrix. For
the percent recovery calculation, the accepted
background concentration will be the mean of all
reported concentrations after outliers are removed.
Other concentration levels will then be calculated
368
-------
from the background plus added spikes.
Precision data will be plotted against
concentration and linear regression used to extra-
polate back to zero concentration. This value
then, will enable us to develop an estimate of the
limit of detection using the standard definition
of three times the precision at zero concentration.
In the real world, the precision tends to flatten
out as you approach zero so, if anything, we'll be
coming up with a bit of an optimistic estimate of
the method's capability. Finally, any observed
biases will be tested for significance at the one
percent level.
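A minimal sketch of the limit-of-detection estimate described here: fit
precision (standard deviation) against concentration by ordinary least
squares, take the intercept as the precision at zero concentration, and
multiply by three. This assumes a simple linear fit; the project's actual
regression details are not specified beyond what is stated above.

def estimate_lod(concentrations, std_devs):
    """Extrapolate precision to zero concentration and return 3 * s0."""
    n = len(concentrations)
    mean_x = sum(concentrations) / n
    mean_y = sum(std_devs) / n
    slope = (sum((x - mean_x) * (y - mean_y)
                 for x, y in zip(concentrations, std_devs))
             / sum((x - mean_x) ** 2 for x in concentrations))
    s0 = mean_y - slope * mean_x          # intercept = precision at zero conc.
    return 3.0 * s0                       # standard definition of LOD used here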
Currently the data that we've collected are
being entered in a dBase III program so that we
can spin out appropriate files as needed for the
data analysis. The summary of the results from
round one will be reviewed with EPRI in the begin-
ning of May, and if all goes well, we will be
preparing samples for round two in May. The summary
report of these two rounds will be prepared and
issued during the August/September time frame.
A brief word about phase two for the
Analytical Methods Qualifications Project. We
are tentatively scheduled to look at arsenic and
369
-------
selenium by gaseous hydride, cadmium by graphite
furnace, mercury by cold vapor, and iron by flame
AAS. Again, I apologize for not having hard data
for this presentation, but I thank you for your
attention.
MR. TELLIARD: Any
questions? Thank you.
370
-------
Analytical Methods Qualification

Utility Round Robin Results for the Determination of
Arsenic and Selenium by Graphite Furnace AAS

Raymond F. Maddalone, Ph.D. (TRW Inc.)
Judith W. Scott (TRW Inc.)
Winston Chow, P.E. (EPRI)
371
-------
Utility Round Robin Results for the Determination of
Arsenic and Selenium by Graphite Furnace AAS

Raymond F. Maddalone, Ph.D. (TRW Inc.)
Judith W. Scott (TRW Inc.)
Winston Chow, P.E. (EPRI)

Figure 1
Figure 2

AMQ-I, Round 1: Four Concentration Levels

"Freshwater" (30 Labs)              "Seawater" (13 Labs)
GFAAS                               Z-GFAAS, GHAAS

Matrices                            Matrices
  River water                         Seawater
  Ash pond overflow                   Seawater + fireside wash
  TCMCW                               TCMCW
  Reagent grade water                 Reagent grade water
  QA/QC                               QA/QC
AMQ-I Round 1 Effort

Collect Test Samples     Spike and Disburse     Collate Data     Analyze Results
  River water              Churn splitter         LDRF             S0, St, x-bar
  Ash pond overflow                               QA/QC            Regression equations
  TCMCW                                                            LOD, LOQ
  Fireside wash
  Seawater

Figure 3
372
-------
Figure 4
Figure 5
373
-------
USGS CHURN-SPLITTER MIXING EFFICIENCY TEST
[Plot of Mn concentration with time; x-axis: time after addition, min]

Figure 6
374
-------
Figure 7
Figure 8
375
-------
Figure 9
Figure 10
376
-------
Figure 11
Figure 12
377
-------
" :•.•,'„(I I,!,;'
Figure 13
Figure 14
378
-------
ANALYTICAL METHODS QUALIFICATION
UTILITY AQUEOUS DISCHARGE MONITORING

XXX - XX - XX - XXX

CODE:                                     USE:
Laboratory Identification Number          Identical on all samples received by a
(three digits)                            participating laboratory; used on data
                                          reporting forms

Test Round Identifier (two digits)        Identification of stored samples by
                                          test round

Matrix Identification (two digits)        Permits laboratory identification of
                                          interferences specific to matrix

Bottle Code Number (three digits)         Number used as identifier of
                                          concentration level and bottle
                                          filling order

Figure 15
379
-------
EPRI AMQ-I, Round 1 Laboratory Data Reporting Form

Matrix types: Freshwater (01), Ash Pond Overflow (02), Seawater (03),
Seawater with Metal Wastes (04), Reagent Grade Water (05), QA/QC Ampule (06),
Treated Chemical Metal Cleaning Wastes (07)

Fields on the form include: furnace conditions (deg C, sec) for char and
atomization; method of preparation; matrix modification; whether standard
addition was used; measured aliquot concentrations (µg/L) for As and Se in
triplicate; dilution factors; calculated concentration of the original
sample; sample condition on arrival; and comments/problems.

Blue - Participating Laboratory's Copy
Buff Card - Disbursement Copy

Figure 16
380
-------
GENERAL LABORATORY EQUIPMENT AND PRACTICE SURVEY

LABORATORY:
KEY CONTACT/PHONE:
MATRICES ANALYZED:

Apparatus
1. Atomic Absorption Spectrometer (AAS):
   Instrumentation Laboratory ___
   Perkin-Elmer ___
   Varian ___
   Jarrell-Ash ___
   Fisher Scientific ___
   Beckman ___
   Hitachi ___
   Sargent-Welch ___
   Other: ___
   Model Number: ___
2. Type of AAS:
   Single beam ___
   Double beam ___
3. Type of background correction used during AMQ project:
   None ___
   Deuterium arc ___
   Smith-Hieftje ___
   Secondary line ___
   Zeeman ___
   Other ___
4. Graphite Furnace manufactured by:
   Instrumentation Laboratory ___
   Baird Atomic ___
   Perkin-Elmer ___
   Varian ___
   Jarrell-Ash ___
   Beckman ___
   Sargent-Welch ___
   Other ___
   Model No. ___

Figure 17
381
-------
Highlights of Data Analysis Approach
Follows ASTM D2777-77 (1985 version)
Screens for transcriptional errors
• Does not remove data unless it is
a reporting error
Ranks laboratories (5% two tail limit)
Tests for individual outliers (1% t-test)
• Permits one repeat of t-test
Figure 18
Analytical Methods Qualification
Expected Data Outputs

Figure 19

By matrix for each element
• Mean, S0, St, RSD (S0, St)
  - Distribution plots
• % recovery
  - % recovery versus concentration
• Regression (linear) analysis
  - x-bar versus S0
  - x-bar versus St
• Bias (tested for significance at 1% level)
382
-------
MR. TELLIARD: Our next
speaker is going to talk a little bit about one
of our favorite subject, pesticides. Many of you
here are living off of them. I understand Mr.
Whitlock's story where he has this little can and
goes around every Thursday and spots a small piece
of ground and then calls for the contamination.
I forgot to announce earlier that there are
some limited copies of Paul Britton's talk in the
back on the table with the other publications. If
you'd like one, feel free to pick one up.
383
-------
MITCHELL D. ERICKSON, Ph.D.
MIDWEST RESEARCH INSTITUTE
KANSAS CITY, MISSOURI
COMPARISON OF METHODS FOR ANALYSIS OF PCBs
MR. ERICKSON: I would
like to present my talk in three segments this morn-
ing. The first segment reviews why we are concerned
about PCBs. Second, I will talk about philosophy
of methods that one would use for analysis. Finally,
I will present one answer which is what we've been
doing which we think has many advantages over some
of the other methods that are available. I'm going
to focus more on the broader scope of method develop-
ment and less on the data, although I should empha-
size that there is a substantial body of data back-
ing up all of the work that we've done. I'm going
to be comparing different methods and talking about
them today.
SLIDE 1 & 2
Just as a quick review, that's a PCB. There
are 209 PCBs. The group as a whole, or the individ-
ual members, are called congeners. If you refer to
them in terms of the degrees of chlorination, you
38*
-------
have 10 different homologs. For instance, tetra-
chlorobiphenyl is one homolog. Within each homolog
group, there are anywhere from 1 to 46 isomers.
SLIDE 3
The commercial production of PCBs. Through
1976, 90 percent of the world's PCBs were manufac-
tured by Monsanto, sold under the trade name of
Aroclor, with a number associated with it, 1221
through 1260. The percentage composition of these
aroclors is listed by homolog. The point to be made
here is that these are complex mixtures. They range
from 30 to 60 individual congeners per mixture.
That really is what separates PCBs from most other
analytes.
PCBs are treated generally as a class, they
are regulated as a class, and analysts are asked to
do an analysis and then report on a class of com-
pounds rather than on a specific compound. It
would be much analogous to being asked to do an ICP
analysis and then report total metals, or to report
total PAH's. PCBs are complex mixtures in the
environment and in virtually any sample. This
creates all sorts of analytical problems, which is
why PCBs are a little bit different in terms of
the approach to a method.
385
-------
SLIDE 4
The reason that we are concerned about PCBs is
that they are present in virtually any environmental
compartment. The one sample that I can think of
right off hand that I can be reasonably sure that
would not have PCBs in it, would be the Hope Diamond,
but anything else—you, me, anyone else—has PCBs
in them.
SLIDE 5
As I have summarized here, PCBs are stable com-
pounds, and they do biomagnify through the environ-
ment, so they are of concern from that respect.
They are ubiquitous; they are found at least 5,000
meters depth in the ocean; they are found in the
upper atmosphere; they are found in the arctic snow.
They are found everywhere. They are toxic, according
to some of the old data that prompted their inclusion
in TSCA, yet there's a lot of evidence that they're
nontoxic. It's a very controversial area. Regard-
'"' V1,,1,'1:"1, • :' :
less, TSCA essentially banned their production, use,
manufacture, et cetera, in 1976.
However, EPA has seen fit for a number of
uses to continue and so we are still, at this
date, faced with a lot of analytical problems.
There are a lot of samples to be analyzed because
386
-------
of disposal by the utilities, because of environmental
contamination, because of spills from PCB capacitors
and transformers in your backyard that rupture, and
we're also faced, as I mentioned earlier, with a
complex analysis.
SLIDE 6
Basically, when one develops a method, one has
to keep in mind what the objectives of the various
aspects are. For instance, with the extraction,
one wants to remove the PCBs from the sample matrix
into a solvent. There are several general consider-
ations in selecting an extraction technique. Like-
wise, with cleanup, you're removing interferences.
Another way to look at it is that you are trying to
enrich the PCB content at the expense of the other
things and there are several different things that
one wants to consider. There are many excellent
cleanup techniques out there that probably would
remove the PCBs. You need to have good recovery.
SLIDE 7
There are three categories of cleanup. There's
a gross interference removal. For example, removing
fat. One example of a way to do that is with GPC.
Removing trace interferences like organochlorine
pesticides can be done with column chromatography.
387
-------
Some of them can also be selectively chemically
degraded with chromium trioxide. The last category
is not widely applied by people who are simply
looking for total PCBs in the sample, but it is
very important for people who are trying to do
congener-specific analysis. In this category, one
fractionates the PCB groups by some of their subtle
functional differences. You can use carbon columns
to separate these compounds according to the number
of ortho-chlorines. So, there are different levels
of cleanup.
SLIDE 8
• ','• "l;fi"'lM
These are the three major instrumental tech-
niques that are used to determine PCB content of a
sample: packed column GC/ECD, high resolution
GC/ECD, and high resolution GC/EIMS. There are
many, many others, of course. Quantitation is
where we get into the real difference between
regular pesticide analysis and PCB analysis. The
difference is that we're trying to get a single
number out of a very complex chromatographic pro-
file. It depends on which instrumentation you1re
using how complex that profile gets, but the Webb-
• ' , . >'• !i f :' 1'1"'1 "j ! I.
McCall and the total areas and the Aroclor are the
standard way s of doing pa eked col_umn quan tl tat ion.
388
-------
The Aroclor quantitation is, to my way of thinking,
the worst choice of all because you're trying to
say that the sample contains a commercial product
and you're trying to say that it contains no other
PCBs and that in fact the product hasn't changed.
That works okay if you've got a fresh transformer
oil, but it falls down very rapidly when you start
having weathering of the samples. There's a lot of
data that indicates that you can get as much as an
order of magnitude error by trying to quantitate as
a specific Aroclor and ignore peaks that might be
from another Aroclor or might be from other PCB
sources or where you don't take into account the
degradation.
Webb-McCall is tried and true. It's been
studied in many round robins and proven time and
again to give very decent data. With GCY EIMS,
one wants to utilize the information available.
For instance, one can report the data by homologs.
One can also do congener-specific analysis if there
is a need to know, for instance, the presence or
absence of say 3,3',4,4'-tetrachlorobiphenyl,
which is purportedly the most toxic of the PCBs.
So there are a variety of ways to quantitate.
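As a hedged sketch of what homolog-level reporting by GC/EIMS involves,
assuming congener peak areas, an internal standard, and per-homolog response
factors are already in hand (this is a generic internal-standard calculation,
not necessarily the draft method's exact arithmetic; names are illustrative):

def total_pcb_by_homolog(peaks, response_factors, internal_std_area,
                         internal_std_ng):
    """peaks: list of (homolog_level, area); response_factors: {level: RF}.
    Returns nanograms of PCB per homolog plus a 'total' entry."""
    amounts = {}
    for level, area in peaks:
        rf = response_factors[level]
        ng = (area / internal_std_area) * internal_std_ng / rf
        amounts[level] = amounts.get(level, 0.0) + ng
    amounts["total"] = sum(v for k, v in amounts.items() if k != "total")
    return amounts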
389
-------
SLIDE 9
Another thing that one has to consider when
they're doing a method development is what kinds of
QC is going to be required. This is just a list of
topic headings, any one of which has a dozen differ-
ent items to consider when planning QC. For instance,
under method execution, one wants to have provisions
in there for blanks, duplicates, maybe standard
addition. One wants to have proper data recording
in notebooks and so forth. All of those kinds of
things would be under just that one heading of
method execution.
SLIDE 10
Now, given that background, we see that there
are many methods available for PCBs. In fact, this
is only a partial listing. I have a table with 45
different methods. One might call them standard
methods. In some of the cases, that's a pretty far
stretch of the imagination, since the methods have
not been endorsed by any standard-issuing organiza-
tion. Buried in this list and not explicitly brought
out are several methods that are near and dear to
this group's heart. For instance, in the water
category, there are the methods that we've been
talking about off and on here, 608, 625. Under the
390
-------
solid waste category are 8080, 8250, 8270. Under
the sludge category, one of them is 625S. So there
are a number of methods that this group is familiar
with, but others probably not. The methods differ
widely in terms of the specificity. Some of them
are highly specific in terms of what options are
allowed; some of them are quite general. With some
of them, the PCBs are piggy-backed with say, prior-
ity pollutants or organo-chlorine pesticides. In
other cases, the PCBs are the only analyte that's
addressed. Some of these have been highly tested
and validated with round robins and so forth, and
others basically just got written up and have no
validation data behind them.
So that is the background. The overall assess-
ment at this point is that there are a lot of meth-
ods out there for PCBs; not enough to do every
matrix, though. There are several matrices that are
not covered and time and again I've seen people
trying to do an analysis and shoehorn in a method
that doesn't exactly fit the matrix and was not
validated for that matrix. That's part of where we
were coming from and why we developed our method.
SLIDE 11
Stepping back for a second, I want to talk a
391
-------
little bit about the philosophy of writing a method.
There are two general approaches that one can take.
One is to specify all of the procedures throughout
the method. An example of that is the EPA 600 series
where there is a fairly explicit set of directions.
One of the positive aspects of this is that it gives
you an off-the-shelf method. You can go to the
manual, grab it, send somebody to the lab and they
can do the analysis. Some of the negative aspects
are shown there. Basically it means that if you
have a method that's validated for a given matrix
and a new matrix comes along, you have to make
revisions and so forth. Also, it doesn't allow a
lot of flexibility for laboratory preferences.
The other option is to specify that the method
meets certain performance criteria. An example of
that is the method I'll show you in a minute. There
are other methods out there that also utilize this
general approach to a greater or lesser extent.
The options allowed can be numerous and you can
cover more analytes in some cases. You can certain-
ly cover more matrices. The key thing is that when
you specify that the method must meet certain perfor-
mance criteria on a per-sample basis, you end up
with known data quality on each sample. The downside
392
-------
of this option is that when you go through the list
of options that might be presented, you have to
write an in-house protocol and that does require
some up-front work on the part of an individual
laboratory. So those are the two extremes of
method philosophy.
SLIDE 12
Now, changing to what we've done. We've got
in draft form at this point, a method that's titled,
"Analysis of PCBs' in Liquids and Solids." The
analytes that can be measured are any mixture of
PCBs. The matrices include water, solids, liquids
—you name it. It does utilize GC/EIMS. It has
options for packed or capillary column GC. The quan-
titation is based upon 10 individual congeners, one
for each homolog. They were selected after an
extensive study of all of the available PCB congeners
which determined response factors and retention times.
We selected the congeners from each homolog class
that had average responses. The QC that's involved is that
on every sample, our four 13C-labeled PCBs are added
to the sample and the recoveries of those PCBs are
measured and reported along with the values for the
native PCBs. There are also instrumental perform-
ance criteria, blanks, and so forth, as with any
393
-------
other method, say with 625.
SLIDE 13
The four 13C PCBs that were selected are shown
in the bottom half of this slide. The asterisks in
the center of the ring denote the 13C labeling.
The exact isomers that were picked were picked for
expediency in synthesis. We were given six weeks
to generate a gram of each of these. Incidentally,
an interesting story aside, buying 70 grams of 13C
benzene turned out to be quite a trick and somewhat
expensive also. That constituted the starting
material and was the entire U.S. supply at that
time.
The rationale for choosing these recovery sur-
rogates is shown at the top. We wanted compounds
that are chemically similar so that there would be
no real question about the accuracy of the recovery
assessment. We also wanted something that could be
differentiated from the analyte. Clearly, these
compounds are no good for ECD work because they
have no different ECD response than the native
PCBs, but in mass spectrometry, a separation of 12
mass units gives you a very nice clear channel to
work with. They were not commercially available,
so we did a custom synthesis. The four compounds
394
-------
are now available in mixture solutions from EMSL,
Cincinnati. I don't have any specifics but I
understand that there are at least three commercial
labeled isotope manufacturers that are in some
stage of preparation of these compounds for sale,
so that if for some reason you don't want to avail
yourself of the free ones through Cincinnati, you
will also be able to purchase them.
SLIDE 14
The general flow scheme of the method is not a
lot different than in many methods. One physically
homogenizes the sample—breaks up chunks, stirs it,
whatever necessary. Then one adds the surrogates
before doing any kind of chemical processing which
could change the PCB content. At that point, the
analyst has a decision to make: which extraction and
cleanup should be used. Pretty much any technique
that gives decent recoveries is allowed. We have
provided some examples, and I'll get to those in a
minute.
After the sample is ready for GC/EIMS, you add
an internal standard. The original internal standard
was d6-tetrachlorobiphenyl, but that's no longer
commercially available, so we've also added d12-
chrysene and iodonaphthalene as options, depending
395
-------
upon laboratory preference. Conduct the GC/EIMS
analysis, do qualitative and quantitative data
reduction and report as is required by the client,
either by homologs or by individual peaks, or just
the total number. There is flexibility in the re-
porting.
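To make the data reduction concrete, a minimal sketch of the homolog-based quantitation is given below. It is not the method's own software: every peak area, response factor, and amount is a hypothetical illustration, and only the structure of the calculation -- per-homolog RRFs referenced to the internal standard, with the 13C surrogate recovery reported alongside the native results -- follows the description above.

    # Hypothetical sketch of homolog-based PCB quantitation with surrogate
    # recovery reporting; none of the numbers are from the actual method.

    # Relative response factors from the ten calibration congeners, one per
    # homolog class (three homologs shown for brevity).
    RRF = {"tetra": 1.05, "penta": 0.97, "hexa": 0.88}

    IS_NG = 500.0      # internal standard added before GC/EIMS, ng
    IS_AREA = 4.2e5    # internal standard peak area

    def to_ng(area, homolog):
        """Amount in ng from a peak area, with RRF defined as
        (A_analyte / A_is) / (m_analyte / m_is)."""
        return (area / IS_AREA) * IS_NG / RRF[homolog]

    # Native PCB peaks found in the sample, grouped by homolog (hypothetical).
    peaks = [("tetra", 8.1e4), ("tetra", 3.3e4), ("penta", 6.7e4), ("hexa", 2.0e4)]

    by_homolog = {}
    for homolog, area in peaks:
        by_homolog[homolog] = by_homolog.get(homolog, 0.0) + to_ng(area, homolog)

    # Report by homolog, by individual peak, or as a grand total.
    total_ng = sum(by_homolog.values())

    # 13C surrogate recovery, reported with every sample.
    SPIKED_NG = 500.0
    surrogate_ng = to_ng(3.5e5, "tetra")
    recovery_pct = 100.0 * surrogate_ng / SPIKED_NG

    print("ng by homolog:", {k: round(v, 1) for k, v in by_homolog.items()})
    print("total PCB, ng:", round(total_ng, 1))
    print("13C surrogate recovery, %:", round(recovery_pct, 1))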
SLIDE 15
I mentioned that we allowed a number of extrac-
tion techniques. These are just the general categor-
ies that are mentioned. Literature references are
supplied. The choice of the technique depends upon
the sample. For instance, with a chlorinated
organic still bottom, your only option might be
dilution. In another case, evaporative concentra-
tion may be appropriate. If your sample is a vola-
tile solvent and you're trying to find the PCB
content, you can simply put it on a rotary evaporator
or Kuderna-Danish and achieve, say, an 8500:1 concentra-
tion.
Liquid-liquid extraction is applicable for water
samples and many other samples. Liquid-solid extrac-
tion is used for soils, et cetera, and others. Matrix
destruction might be utilized for a commercial product
such as an acid chloride, where you hydrolyze it to
form the acid and then rinse away the acid with water.
396
-------
SLIDE 16
We've also recommended the specific extraction
techniques listed here for a variety of matrices.
Most of these should be familiar. The reference to
Watts for the blood and the adipose is the old HERL
pesticide manual which was formally authored by
Randy Watts. "MOG" is Mills Onley Gaither from .the
classic JAOAG publication in 1963. The AOAC refers
to their Official Methods of Analysis manual. FDA
refers to the FDA Pesticide Analytical Manual.
Those methods are given as specific options. If
people want to use those, that's fine.
SLIDE 17
Cleanup options are listed here. Many of
these are taken from the oil method of Bellar and
Lichtenberg.
SLIDE 18
Quality control is the heart of the method.
There are laboratory QC requirements. One must
achieve certain performance criteria. It's not
precisely specified in our method, but the labora-
tory has to demonstrate to the client or the
regulatory agency or whatever, that they're achieving
performance.
397
-------
SLIDE 19
GC and MS performance are similar to what's
found in 625. In addition, one must perform routine
qualitative and quantitative calculation checks to
make sure that calculation transcription errors are
not occurring, much along the line of what was
mentioned yesterday by Samuel To. Sample QC—the
recoveries must meet specified percent recoveries.
The acceptable recovery is something one sets up
with a client before you start up. For instance,
you might specify that any recovery between 50 percent
and 150 percent is acceptable. There is some guidance
on mass spectral data quality and it's also important
to see that you have an internal standard response
that's consistent from sample to sample. Last but
not least, the typical blanks, replicates, and
standard addition are all mentioned in the QC
section.
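As a sketch of how that per-sample screen might be applied, the fragment below compares surrogate recoveries against a client-specified window. The 50 to 150 percent limits are only the example quoted above; the surrogate names and recovery values are hypothetical.

    RECOVERY_WINDOW = (50.0, 150.0)   # percent, agreed with the client up front

    # Hypothetical recoveries for the four 13C-labeled surrogates in one sample.
    recoveries = {"13C-dichlorobiphenyl": 84.0, "13C-tetrachlorobiphenyl": 67.0,
                  "13C-hexachlorobiphenyl": 142.0, "13C-octachlorobiphenyl": 38.0}

    low, high = RECOVERY_WINDOW
    flagged = {name: r for name, r in recoveries.items() if not low <= r <= high}

    if flagged:
        print("Re-examine extraction/cleanup; out-of-window surrogates:", flagged)
    else:
        print("Surrogate recoveries acceptable; report native PCB results.")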
SLIDE 20
Now I'd like to present some application slides.
This is a fish oil. I believe it's bluefish from
this area, the Chesapeake Bay. This is the total
ion chromatogram. The PCB region is labeled. They
don't really stand out. The big peak at the right
is labeled just because everybody wants to know
398
-------
what it is, and it's good old cholesterol.
SLIDE 21
Same fish sample. These are ion plots for
tetra, penta, hexa, and heptachlorobiphenyls. You
can see that the PCBs are standing out nicely. I
don't have a slide, but recoveries of the surrogates
were adequate and worked very nicely. This is a
typical routine type of sample. People have been
determining PCBs in fish since maybe 1969. No big
deal.
SLIDE 22
Now, let's move on to something that's a
little bit more complex and a little tougher, and
that's a by-product PCB problem. By-product PCBs
are regulated under TSCA and there are a couple of
rules out in the Federal Register.
This is what one sees. This is a chlorinated
organic still bottom. We ran it; several other
labs ran the same sample in a CMA round robin. The
top two traces are the M+2 and the M+4 for octa-
chlorobiphenyl, and you see we have a good match
for six isomers. The third trace is the 13C octa-
surrogate that had, I believe, in this case, 67
percent recovery. This is an extremely complex
sample. There's everything from mono- through
399
-------
decachlorobiphenyls in it; somewhere on the order
of 80 congeners total in the sample; somewhere
around 300 to 500 parts per million aggregate PCBs
in the sample. Very, very complex. There are lots
of other chlorinated compounds. An ECD chromatogram
of this sample just totally obscured the PCBs.
In fact, you can see from the RIC the complexity
of this particular sample. A tough sample.
SLIDE 23
We have some performance data. This is a
table that's in the method writeup. I really don't
want to direct your attention so much to the top--
that's a tabulation of the possible range of LOQs.
Right here we have some intralaboratory data that
we generated in our lab. We feel that with good
samples and good technique, you should be getting
recoveries close to 90 percent. Depending upon the
type of sample, we've seen recoveries as low as 22
percent. Those are worst cases. Typically, though,
we tend to see 70 to 130. Some of that error is
from the MS, which has its own errors associated
with it.
An interlaboratory study was conducted about a
year and a half ago. The recoveries observed by
the different laboratories ranged quite widely, as
-------
you can see. There were some very, very bad data
in there. As was mentioned yesterday, some of this
was clerical errors. Some of it we patched up. For
example, where decimal places were wrong, and so
forth, we fixed them. In the worst case column, with
the 690 percent recovery, one laboratory failed to
correctly identify the internal standard. If you
don't identify your internal standard, everything
goes downhill rapidly. They were given a chance to
change their report and took a look at it and said,
"Yes, those are the numbers we believe," so we had
to take it.
After throwing out some of those really terrible
results, we feel that +/-60 percent precision is
achievable, and this is for the very low levels in
very complex samples. One of the samples in this
sample set was in fact that chlorinated benzene
waste that I talked about earlier.
SLIDE 24
In summary, the advantages of this type of a
method are presented on this slide. As with isotope
dilution, although not quite as all-encompassing,
we have data of known quality on each sample.
That's important. The surrogates, we feel, can
reduce the validation time. The idea is where you
401
-------
have a matrix that you've not encountered before and
you really don't know the precise way to handle it
in the extraction and cleanup, you simply try your
best guess. If the surrogate recoveries are good,
you report it. If they're not good, you know you
did something wrong and you can go back and do
something else, but you don't have to do a $50,000
validation project before you get started.
We eliminated Aroclor as a calibration mixture.
It eliminates the temptation to try to make every
sample with PCBs in it be an Aroclor. Most of the
PCBs in the environment did start out as Aroclors,
but they're weathering and as the years go by, the
profiles are going to look less and less like the
original Monsanto products. We are quantitating
all the way from mono through deca. We have incor-
porated flexibility that accommodates laboratory
preferences.
SLIDE 25
It's not a perfect method. The surrogates must
be incorporated into the matrix. If you've got a hard
plastic, for instance, and you throw the PCBs on the
top and don't get them down inside, you're going to
get good surrogate recoveries, but you really haven't
incorporated them into the matrix.
402
-------
We are extrapolating from 77 to 209 congeners
and there's some error associated with that. We
have complex matrices and complex analytes, and there
are a lot of sources of error. That holds true
for PCBs regardless of which method you're talking
about. It's always going to have inherently more
error than a single analyte method. Finally, keep
in mind that any method is only as good as the
person who's performing the work.
Thank you. Are there questions?
403
-------
[Slide: PCB levels up the food chain -- plankton, 1,830; codfish, 11,580; herring gull, 3,530,000.]
410
-------
[Slides: method development considerations, quality control planning topics, available standard methods for PCBs, and the draft method "Analysis of PCBs in Liquids and Solids" -- graphics not legible in the source transcription.]
-------
RECOVERY SURROGATES
Criteria
1. Chemical similarity (identical liquid chromatography,
identical behavior under acid cleanup)
2. Able to be differentiated from analyte
Best Choice
13C labeled PCB
But: Not commercially available
Solution: Custom Synthesis
(Enough for thousands of analyses)
[Structures of the four 13C-labeled PCB surrogates shown.]
420
-------
ANALYTICAL METHOD
Obtain Sample
I
Physical
Preparation
Add Surrogates
Extract, Cleanup,
Concentrate
I
Add Internal
Standard
I
GC/EIMS
Analysis
I
Analyst
Decisions
RRFs*
RT Windows
Identify PCBs*
Quantitate*
Report by Peaks,
Homologs or Total
*QC criteria provided
421
-------
[Slides: extraction technique options, recommended extraction procedures by matrix, cleanup options, quality control requirements, example chromatograms for the fish oil and chlorinated still bottom samples, performance data, and advantages of the method -- graphics and tables not legible in the source transcription.]
-------
PROBLEMS
• Surrogate must be incorporated. If not, surrogate recovery ≠ analyte recovery.
• Must extrapolate RRFs from the 77 of the 209 congeners available for RRF measurement.
• Complex matrices and complex analytes add sources of interpretation error.
-------
QUESTION AND ANSWER SESSION
MR. KROCHTA: Bill Krochta,
PPG Industries. Mitch, you mentioned that the PCBs
were stable, however, there are reports that some
of the congeners are extremely sensitive or degrade
with radiation or light. Do you take any precautions
or have you noticed this to be a problem?
MR. ERICKSON: Well,
there's no question that PCBs do have a half-life.
Some of the monos have a very short biological half-
life in, say, aquatic systems. The one thing that
I've heard where people have worried over the years
about degradation is a concentrated sulfuric acid
cleanup. There are some reports in the literature
that the lower PCBs can be degraded in a sulfuric
acid cleanup. Those reports are for hot sulfuric
acid, say 50 degrees for 15 minutes. There are
also many reports in the literature where people
have done sulfuric acid at room temperature with
monochloro- and dichlorobiphenyls and had no
losses.
That's the one thing I know of where people
have actually done some studies. There's some
conflicting literature, but basically if you don't
433
-------
heat it up, it seems like you can get away with
sulfuric acid treatment.
With regard to photochemical degradation in the
laboratory, I'd say that's a minimal problem. I
don't know of any data to support that.
MR. TELLIARD: Any other
questions? Thank you.
-------
MR. TELLIARD: Our next
speaker is Denis Lin. Denis Lin is from some
organization whose letters I can never remember.
Denis is going to talk about some volatile analysis
435
-------
DENIS C. K. LIN, PH.D.
ENVIRONMENTAL TESTING AND CERTIFICATION CORPORATION
ANALYSIS OF VOLATILE WATER SOLUBLE COMPOUNDS
DR. LIN: Good morning,
ladies and gentlemen. By all means, get up, get
a cup of coffee; you probably need it to hear me
speak. Before I start getting into this particular
presentation, contemplating what transpired yesterday
in terms of the presentations and statistics, et
cetera, perhaps I should change my title to "In
Search of Our Experience in a Preliminary Study of
Analysis of Volatile Water Soluble Compounds."
Before I really start again, I'd like to acknowledge
my co-workers, Faith Dees and Dave Lessing, who
have worked hard in generating the data and some
of these viewgraphs.
As you have heard, there is this list called Appendix
VIII compounds. We, as an analytical service lab,
always have some clients that come in to request
something somewhat different, like dioxin in combat
army boots, phthalates in baby plastic panties, and all
Appendix VIII compounds in groundwater. We have
our approaches to Appendix VIII compounds and
-------
today, I'm not going to talk about all of them.
I'm specifically interested in a certain segment of
it, which is seemingly very simple. You say, how could
someone analyze formic acid in water or isobutanol
in water. The first thing you think of is, I drink,
I worry about alcohol in blood, so you think, how do
you analyze ethanol in blood. Clinical tests have
shown that when you directly inject the serum, you
get very good results. You always heard John
McGuire preaching about direct aqueous injection on
some of these so-called water soluble compounds.
Certainly direct aqueous injection is an approach
to doing these kinds of analyses.
However, some of the regulators don't like
the detection limits because they are somewhat
high. In that respect, there has been some work
done, I believe contracted by Athens,
on the master analytical scheme.
There is this animal called heated purge and trap
which has been suggested to us to tackle some of
these Appendix VIII compounds. So today, what I'm
going to talk about is our experience of doing
these compounds via that method compared to say,
perhaps direct aqueous injection.
The objective is to make sure that heated purge
437
-------
and trap worked in those compounds of interest, and
also, in the same process, we would like to know
the linear range of sensitivity and somewhat of an
off-the-cuff error range, not a very rigorous
Winsorized, whatever, statistical approach.
In this particular study, what we have done is,
because we want to compare it with direct aqueous
injection, we have deliberately chosen columns that
can be used for direct aqueous injection. If
everything works, the heated purge and trap technique
should have a thousand-fold better sensitivity,
mainly because in direct aqueous injection normally
you inject microliter quantities and in our case,
we inject about 5 microliters. When you do heated
purge and trap, you use 5 mL but the absolute
amount of analyte is the same. So the concentration
factor if one considers it that way, is a thousand-
fold.
The data I'm going to show is going to talk
about absolute amount. So whenever the heated
purge and trap technique works, you can factor in a
thousand-fold better sensitivity. You have to
compare apples with apples, I guess.
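The arithmetic behind that thousand-fold factor can be written out directly; the 5 microliter and 5 mL volumes are the ones quoted above, and the 100 ng amount is an arbitrary example.

    dai_volume_uL = 5.0            # direct aqueous injection volume
    pt_volume_uL = 5.0 * 1000.0    # 5 mL purged in heated purge and trap

    print(pt_volume_uL / dai_volume_uL)   # 1000.0, the concentration factor

    # For the same absolute amount, the water concentration that delivers it
    # is a thousand times lower for purge and trap than for direct injection.
    amount_ng = 100.0
    print(amount_ng / (dai_volume_uL * 1e-3), "ng/mL for direct injection")  # 20000
    print(amount_ng / (pt_volume_uL * 1e-3), "ng/mL for purge and trap")     # 20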
SLIDE 2
As George said, there are 375 compounds listed
-------
in the Appendix VIII list. Some of these are water
soluble compounds and here's the list. The approach
that we tried to tackle first is either direct aqueous
injection or heated purge and trap. The absolute
amount range that we looked into is between 5 ng.
and 5,000 ng. We don't know what the right number is,
so we explored the whole range. The column, as
stated, is down at the bottom. There's a spelling
error in the nitrosopyrrolidine.
SLIDE 3
The next list...since we're doing this, we
said we might as well take a look at other things
because other things are of interest also. These
are very commonly found alcohols. Methanol to
butanol. If you are in clinical analysis, you know
that every time you do blood alcohol you use methanol
and isobutyl alcohol just to check up. The
column again is at the bottom.
SLIDE 4
The third list we have is a list of C1-C4
acids. The clinical chemists that have
done this for a while...I listen to them and they
always seem to have very good results. I'd like to
try it and if you look at a catalog,
they always also show very good results.
-------
When you get into the nitty-gritty, things don't
seem to be as rosy as the literature says.
So this is the list of acids we looked at and the
column we used, which was recommended for
direct aqueous injection of C1-C8 acids.
SLIDE 5
I'm going to make the corrections on the first
one. The rest I'm not going to make any corrections
there. Dave Lessing is a young man who's the technician
in our group and who made all these slides and I
just didn't have the heart to make him go back and
do it all over again. The Y axis is actually the
absolute area per nanogram, so it is not relative
response factor. It is response factor. The black
asterisk is the average response factor of triplicate
analysis. We bootlegged this project. We didn't
have the time or the luxury to do 7 out of 10.
We used all the data in triplicate. The red marks
are basically the standard deviation error range,
if you want to consider it that way. At the bottom of
the X axis is the absolute amount analyzed. In
other words, 5 mL contain 100 ng. of the analyte or
100 ng. of the analyte in 5 mL of the water,
assuming that you have 100 percent purge efficiency.
440
-------
In the ideal situation, if the world is perfect,
it should be a straight line across the graph. As
I said, this is not, so, there are situations where
one thing is better than others, and in this particu-
lar case, this is the direct aqueous injection of
1,4-dioxane compared to the next one, which is the
heated purge and trap technique.
In almost all the cases, the 5,000 ng. absolute
amount is just off range as far as linearity goes. Depending
on the compound, for those that are successful, it
is somewhere between 50 and 5,000 ng. absolute
amount. We did not use any internal standard in
this study. I'm sure you are probably thinking
what 13C internal standard can you use in this
study to improve it.
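A sketch of how each of these plots is built may help: the response factor is the absolute area per nanogram at each spiking level, averaged over the triplicates, with the standard deviation as the error range. The areas below are invented for illustration; a flat trend across the levels would be the ideal straight line just described.

    from statistics import mean, stdev

    # {absolute amount analyzed (ng): triplicate peak areas}, hypothetical values
    calibration = {
        50:   [4.9e3, 5.3e3, 5.1e3],
        500:  [5.2e4, 4.8e4, 5.0e4],
        5000: [3.9e5, 4.2e5, 3.7e5],   # the high level often falls off the line
    }

    for amount_ng, areas in calibration.items():
        rf = [a / amount_ng for a in areas]          # area per nanogram
        print(f"{amount_ng:>5} ng  RF = {mean(rf):.1f} +/- {stdev(rf):.1f}")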
SLIDE 6
We'll move on to the next one that will be
fairly repetitive now, in terms of some of these
charts. The next one, if I remember correctly,
should be ethylcyanide gas. This is the direct
aqueous injection of ethylcyanide.
SLIDE 7
The next one is the heated purge and trap.
That may be an old photo of that chart. It
looks very straight. It looks as though it's
441
-------
almost the ideal case for the heated purge and trap
as far as linearity goes, from 50 to about 5,000
ng. absolute amount.
SLIDE 8
This is allyl alcohol. It looks like 50 ng.
doesn't seem to be appropriate. It should start
somewhere higher for direct aqueous injection.
The heated purge and trap of this alcohol seems to
be better in terms of linearity compared to the
direct aqueous injection.
SLIDE 9
As you remember, George demonstrated amply
yesterday, isobutyl alcohol if you use 8240 as a way
to test without any modification is a miserable
failure. We have experienced that and we do recom-
mend either using direct aqueous injection, which
gives a better result, or heated purge and
trap for isobutyl alcohol.
SLIDE 10
Then we move on to the next series of the
compounds as a continuation in terms of alcohol.
Methanol. Next one is the heated purge and trap
for methanol. I believe that there are a couple more
alcohols that we have done. This is one.
442
-------
SLIDE 11
The next one is butanol using heated purge and
trap. It's very linear in terms of response factor
across the range. The heated purge and trap. I
believe the next set is the acids.
SLIDE 12
I started with reversed order, somewhat, for
the acids. Acetic acid was the only one that we
have some success in using the heated purge and
trap technique. The other acids that we looked into
simply failed. We did not have any response, but
that doesn't deter us from looking into the direct
aqueous injection technique, because you have to
have some way, no matter how poor, in some sense of
sensitivity. If you are pushed, you at least can
report some detection limit for the acids. The next
three or four are all acids, direct aqueous injection.
I'm not impressed with the failure, but at least
we got a response.
SLIDES 13, 14, 15
This is acetic acid with direct aqueous
injection, and there's a couple more. Direct acid,
crotonic acid. That's all the acids. If I recall
correctly, formic acid is in Appendix VIII list and
-------
our experience with formic acid has been miserable
in terms of getting any consistent result. We
tried direct aqueous, we tried to extract with
ether. Every now and then we get response and
then all of a sudden, for no reason you don't have
any response at all. So, the formic acid, if you
look into the biomedical community where they worry
about the acid in urine, there seems to be
literature asserting that it can be done, and I
always thought water is simpler than urine, for
whatever reason. Maybe the urine is essential to
have successful formic analysis to do something
with water.
SLIDE 16
In our opinion, as I said—this is a pre-
liminary study, using the heated purge and trap
technique, the range that we think is reason-
able is represented in the second column from the
right. The percentage standard deviation is
calculated basically by forming the so called,
the one outside range, an average response factor.
Maybe George and Paul can argue the validity on
using this. As I said, this is just some estimation
how good, bad or indifferent the method is.
The third column, just for interest's sake,
-------
what we did is assume the direct aqueous injection
is 100 percent recovery and compare absolute response
of the heated purge and trap to the direct aqueous
injection.
It looks very easy indeed. The direct aqueous
injection represents 100 percent recovery. These
are for the Appendix VIII compounds. I don't know
why Dave didn't have the chance to calculate out
the nitrosopyrrolidine.
SLIDE 17
These are for the alcohols in terms of heated
purge and trap. Quickly, go down and we don't know
why for 2-propanol the purge efficiency is 159
percent, but I believe in reporting what you see.
Maybe somebody else may not, maybe John McGuire
has a lot of experience in these functions here.
So we don't know why it was 159 percent, but if...
other than that, things look very normal.
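The comparison behind those purge-efficiency columns amounts to the small calculation sketched below, in which the direct aqueous injection response is treated as 100 percent recovery. The areas are hypothetical, except that the 159 percent result for 2-propanol echoes the figure just mentioned.

    # (direct aqueous injection area, heated purge-and-trap area) for the same
    # absolute amount of analyte; hypothetical values
    responses = {
        "2-propanol":  (1.0e4, 1.59e4),
        "isobutanol":  (2.2e4, 1.3e4),
        "acetic acid": (6.0e3, 9.0e2),
    }

    for compound, (dai, hpt) in responses.items():
        efficiency = 100.0 * hpt / dai
        print(f"{compound:<12} purge efficiency ~ {efficiency:.0f}%")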
How can we improve this? This is a bootleg
project and we still have to look further. There
are a number of things. Obviously, if you decide
heated purge and trap is a viable way to go, then
we should really dig deep into the perhaps master
analytical scheme which has a section on heated
purge and trap, and try to optimize the material
-------
or the purge and trap conditions. I'm sure addition
of internal standards, perhaps the 13C-labeled ones,
will improve the precision and accuracy. The
chromatographic conditions, as I said, because we
want to compare apples with apples, we did not
think in terms of...for those series of compounds
we are thinking in terms, how can we have comparable
results and how can we improve the purging efficiency.
The three items that we have thought of that
might just be the change, and I'd welcome any other
ones that you can help me on, are solvent, pH adjustment,
and saponification.
As far as the preliminary study goes, that's
all I have to say, and I welcome any suggestions
and questions.
MR. TELLIARD: Any
questions? You going to let him off?
-------
MR. TELLIARD: Now a
completely different subject. We'd like to keep
this meeting still somewhat resembling industrial
waste chemistry, so our next speaker is going to
talk on one of our...this is for all you old shit-
chemists, so many of you people out there with
GC/MS don't even know what one of these things is
let alone what the letters mean. But our next
speaker is going to talk about the basic shit
chemistry, commonly called a BOD.
447
-------
JAMES C. YOUNG
UNIVERSITY OF ARKANSAS
DEPARTMENT OF CIVIL ENGINEERING
DETERMINATION OF FIVE-DAY CARBONACEOUS BOD
IN WASTEWATER
MR. YOUNG: Good morning.
I feel a little bit out of place...
VOICE FROM AUDIENCE: The
ship is coming!
(WHEREUPON, a 10 minute break was taken to view the
battleship.)
MR. YOUNG: This is the
first time I've ever been in competition with a
battleship. I guess I should have known who'd win.
When you stop and think about it, that battleship is
probably older than most of us here. I'm not sure
when it was first built; it was a few years ago.
Any historians here that know? I expect it was
built in the late 30's or early 40's.
(Revised presentation submitted.)
-------
449
-------
BIOCHEMICAL OXYGEN DEMAND:
MEASUREMENT AND INTERPRETATION
By
James C. Young
Professor and Head of Civil Engineering
University of Arkansas, Fayetteville, AR 72701
Prepared for presentation at the
Eighth Annual Analytical Symposium on the
Analysis of Pollutants in the Environment,
Norfolk, VA
April 3-4, 1985
450
-------
BIOCHEMICAL OXYGEN DEMAND:
MEASUREMENT AND INTERPRETATION
By
James C. Young
Professor and Head of Civil Engineering
University of Arkansas, Fayetteville, AR 72701
INTRODUCTION
The Biochemical Oxygen Demand (BOD) test is probably the least
understood and most frequently misused of any analytical measurement in the
field of water pollution control. Yet the proper measurement and
interpretation of BOD data is highly essential to the evaluation of the
performance of wastewater treatment plants and to determining the impact of
waste loads on receiving streams.
Not only must the wastewater analyst know how to measure BOD properly,
but plant operators and others involved in pollution control must know how
to use the results of BOD tests. They must have a basic knowledge of the
factors affecting the accuracy and precision of the measurement technique
and must understand the basic factors causing the test results to deviate
from anticipated or normal values.
The purpose of this paper is to review the BOD test procedure, to
discuss factors affecting its accuracy and precision, and to present recent
developments in analytical procedures, specifically nitrification control,
that affect the interpretation and use of BOD data.
-------
THE BOD REACTION
The basics of the BOD test are simple: organic materials are
decomposed in the presence of oxygen to form carbon dioxide and water
(energy and products) and to synthesize new cells (Figure 1). These
biological cell solids in turn require oxygen for respiration and can decay
and release free organic matter that is recycled through the
oxidation/synthesis process. This biological process is termed
heterotrophic growth and the resultant oxygen uptake represents Carbonaceous
Biochemical Oxygen Demand (CBOD).
Reduced inorganics—specifically, ammonia (NH3), nitrite (NO2-), and
sulfide (S2-)—also are oxidized biologically to create energy for
synthesis of carbon dioxide to biological cell solids (Figure 2). This
latter, or autotrophic, reaction can occur simultaneously with the
carbonaceous reaction, separate from the carbonaceous reaction, or may not
occur at all in a given sample. The BOD test provides a measure of the
amount of oxygen consumed in these biochemical reactions and can, if
measured properly, provide a reasonably accurate indication of the amount
of biodegradable organic matter and reduced inorganics present in the water
sample at the beginning of a test.
The oxygen uptake reaction is time dependent as illustrated in Figure 3
and the accuracy of BOD measurement is a function of the time at which the
measurement (reading) is taken and the rate and extent of completion of the
two major biological reactions. The challenge of BOD measurements is to
control or manage the factors contributing to the rate and amount of oxygen
consumed up to the time the oxygen uptake is measured, and to make sure
that tests conducted at different times have a common basis of comparison.
452
-------
[Figure 1 diagram: Organic Materials undergo Oxidation (yielding Energy and End-Products) and Synthesis of New Cells; Biological Cell Solids undergo Endogenous Respiration and Decay, releasing Organic Material.]
Organic Material + O2 -> CO2 + H2O + Cell Solids
Figure 1. Schematic diagram and equation expressing components
of heterotrophic growth.
-------
[Figure 2 diagram: Reduced Inorganics (NH3, NO2-, S2-) are oxidized to Oxidized Inorganics (NO2-, NO3-, SO4 2-), yielding Energy used for Synthesis of Carbon Dioxide into Biological Cell Solids.]
a) NH3 + 3/2 O2 -> HNO2 + H2O
b) HNO2 + 1/2 O2 -> HNO3
Net: NH3 + 2 O2 -> HNO3 + H2O
Figure 2. Schematic diagram and equations expressing
components of autotrophic growth, specifically
for the nitrification reaction.
-------
[Figure 3: oxygen uptake versus time (days); the carbonaceous demand (CBOD) develops first and the nitrogenous demand (NOD) adds to it, approaching the ultimate oxygen demand (UOD).]
Figure 3. Schematic representation of a typical BOD curve showing
carbonaceous and nitrogenous oxygen demand reactions
455
-------
THE STANDARD BOD TEST
The standard (traditional) procedure for measuring BOD is the dilution
method (Standard Methods, 1985). In this method, a sample of wastewater is
diluted with a standard mixture of distilled water and known amounts of
nutrients and buffering agents. The purpose of dilution is to reduce the
total concentration of oxygen demanding material in the diluted sample to
below about 10 mg/L so that the amount of dissolved oxygen exceeds the
5-day oxygen uptake capacity of the diluted sample. The diluted sample is
then placed in a "standard", usually 300 mL, bottle, sealed or capped, and
incubated at 20°C in a dark room for five days. The reduction of dissolved
oxygen concentration in the five day-test period is then multiplied by the
dilution ratio to give a measure of 5-day biochemical oxygen demand (BOD).
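A minimal sketch of that calculation follows; the seed correction is omitted for brevity, and the dissolved oxygen readings and sample volume are illustrative only.

    sample_mL  = 6.0      # wastewater placed in the bottle
    bottle_mL  = 300.0    # "standard" BOD bottle
    do_initial = 8.6      # dissolved oxygen at day 0, mg/L
    do_final   = 4.3      # dissolved oxygen at day 5, mg/L

    dilution_ratio = bottle_mL / sample_mL
    bod5 = (do_initial - do_final) * dilution_ratio
    print(f"BOD5 = {bod5:.0f} mg/L")   # (8.6 - 4.3) * 50 = 215 mg/L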
The plague of the BOD test is that the largest sources of variability
are not analytical. Almost any experienced analyst can prepare an
acceptable dilution water, can make accurate dilutions, can measure
dissolved oxygen relatively precisely and accurately, and can multiply by a
dilution ratio. The problem is that the major sources of variation are
biological and largely uncontrollable by the analyst. And while these
variations are natural, that is they represent inherent characteristics of
the sample, it is difficult to compare the results of one test to the
results of another. The need, then, is to understand the factors
contributing to variability so that the resulting test data are more easily
understood and provide a more consistent measure of a known biological
reaction (see Young, 1984).
The goal of committees assigned to improve the standard BOD test has
ji " ' i ' ; ; ?;., ',: '• :. . .,.'..' ^ •. ;','.•
been to establish procedures that minimize variability and maximize the
456
-------
uniformity between test procedures. Over the last 40 years (about 1940 to
1980) the Standard Methods BOD Task Group has made relatively minor
refinements in the method of preparing dilution water, in the buffer
formulation, dilution procedure, the seeding procedure and the precise time
of starting and ending a test (Young, McDermott and Jenkins, 1982).
However, the largest source of test variability — the nitrogenous oxygen
demand (NOD) reaction — has been largely ignored. The effect of the
nitrification reaction was acknowledged in the 1930's in studies of the
impact of waste loads on receiving streams, and its impact on BOD reactions
was well documented in the late 1940's (Hurwitz, et al, 1947). Methods
such as acidification or pasteurization were even proposed for the
inhibition of the nitrogenous oxygen demand reaction in BOD tests (Sawyer
and Bradney, 1946).
The impact of nitrification on the variability of test data is
illustrated in Figure 4. Shown here is a correlation of 5-day BOD as
measured by electrolytic respirometer (EBOD) and 5-day dilution BOD. In
Plant A, nitrification was inhibited in both test procedures; at Plant M,
nitrification control was not used in either test. A major part of the
variability was attributed to the fact that the nitrification reaction was
at different stages of completion in each test at the end of the 5-day test
period.
Failure to separate carbonaceous and nitrogenous BOD can lead to gross
errors in calculating the impact of wastewater discharges to receiving
streams. For example, consider Figure 5. Shown here is a BOD reaction
exhibiting both carbonaceous and nitrogenous oxygen demands. With
nitrification inhibited, the carbonaceous BOD reaction would exhibit a
457
-------
[Figure 4: electrolytic respirometer BOD5 plotted against dilution BOD5 (mg/L) for raw influent, primary, aerator, filter, and final effluent samples at two plants; Plant A (with nitrification control) and Plant M (no nitrification control). Reported fits include slope = 1.17 and slope = 1.06, B = 18.41, R-squared = 92.44.]
Figure 4. Comparison of electrolytic and dilution BOD
measurements showing the effect of nitrification
on the variability of test data.
458
-------
[Figure 5: BOD (mg/L) versus time (days) for a wastewater sample with and without nitrification inhibition.]
Figure 5. Comparison of measured and calculated BOD curves and
reaction coefficients for a wastewater sample with and
without nitrification inhibition.
459
-------
curve represented by the solid line which would have a relatively high
first-order rate and a reasonably low ultimate carbonaceous BOD (CBODU).
With nitrification, the best-fit, first-order equation (dashed line) would
show an apparent decrease in rate coefficient and an increase in ultimate
oxygen demand (CBODU + NOD). Obviously the use of these two different sets
of reaction coefficients to represent the BOD reaction for the same
wastewater would have a dramatic effect on the results of an oxygen
depletion model using uptake rate and measured 5-day BOD as input
parameters. The impact of nitrification in BOD tests on treatment plant
compliance is well-documented by Hall and Foxen (1983).
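For readers who want to reproduce the comparison in Figure 5, a first-order sketch is given below. The two coefficient pairs are invented solely to show how including nitrification lowers the apparent rate coefficient while raising the apparent ultimate demand; they are not fitted values.

    import math

    def bod(t_days, bod_ultimate, k_per_day):
        # First-order BOD model: BOD_t = BOD_u * (1 - exp(-k * t))
        return bod_ultimate * (1.0 - math.exp(-k_per_day * t_days))

    carbonaceous_only  = dict(bod_ultimate=150.0, k_per_day=0.35)  # hypothetical
    with_nitrification = dict(bod_ultimate=200.0, k_per_day=0.20)  # hypothetical

    for t in (2, 5, 10, 20):
        print(t, round(bod(t, **carbonaceous_only), 1),
                 round(bod(t, **with_nitrification), 1))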
The reasons for not including nitrification control in BOD tests in the
past seem to have been both technical and non-technical. The methods
available were cumbersome and did not improve the accuracy of the test
since the carbonaceous BOD was adversely affected by pasteurization and
acidification, and reseeding was required. This changed both the chemical
and biochemical nature of the test sample. But perhaps the largest reason
for not separating carbonaceous and nitrogenous BOD in tests is related to
the purposes for which BOD was used, especially in treated effluents.
Prior to 1972, when major amendments to the Clean Water Act (PL-92-500)
were passed, levels of treatment were based largely on process technology.
That is, communities were required to have primary or secondary treatment
based on size and location of the city, and secondary treatment was defined
as a "biological process". Hence, trickling filters, activated sludge,
rotating biological contactors, lagoons, etc.—were essentially equivalent
and the choice of one over the other was largely based on economics and
operability. PL-92-500 led to the definition of secondary treatment in
terms of effluent quality, that is, 30 mg/L BOD and suspended solids on a
monthly average basis, 45 mg/L for a peak 7-day period. Too few people
460
-------
realized at the time that so called "secondary processes" could not
consistently provide "secondary effluent" quality unless the measure of
effluent quality was carbonaceous BOD.
However, the standard method for measuring BOD—as developed by
Standard Methods and accepted by the U.S. Environmental Protection
Agency—did not allow for separation of the carbonaceous and nitrogenous
components of the BOD reaction. Thus began a 10-year debate of the merits
of and need for basing treatment performance on carbonaceous BOD alone or
carbonaceous plus nitrogenous BOD. The arguments were based largely on the
premise that nitrogenous oxygen uptake is in fact biochemical oxygen demand
and the fact that the multi-decade base of BOD data did not include only
carbonaceous BOD measurements; and decisions for granting discharge permits
were based almost entirely on calculations using past BOD records.
After considerable debate, numerous lawsuits, and significant
expenditures of funds for construction of treatment plants to provide
"secondary" or better quality effluent, EPA has accepted the use of
carbonaceous BOD as a measure of permit compliance, although in restricted
situations and subject to state approval. Consequently, we now are where we
should have been 10 to 12 years ago in BOD measurement and application
technology.
The challenge now is to proceed as rapidly as possible to using
carbonaceous and nitrogenous BOD as independent performance references for
biological treatment plants, thus requiring a thorough understanding of how
to conduct BOD tests involving separate measurements of carbonaceous and
nitrogenous BOD.
461
-------
NITRIFICATION CONTROL METHODS
The accepted method for measuring carbonaceous and nitrogenous BOD is
to inhibit the nitrogenous oxygen uptake reaction in one set of samples
thereby leaving only the measurement of carbonaceous demand. The
nitrogenous oxygen demand is measured by setting up a parallel set of
samples so that nitrogenous oxygen uptake is the difference between
inhibited and uninhibited samples. An alternate method is to measure the
amount of reduced nitrogen in a sample—either as ammonia-nitrogen or Total
Kjeldahl Nitrogen (TKN)—and calculating the anticipated NOD using the
equations stated in Figure 2. This latter method is considered to be as
accurate as direct measurement in BOD bottles and much simpler and faster.
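The two NOD estimates can be sketched as follows. The measured value is the difference between the uninhibited and inhibited bottles; the calculated value applies the Figure 2 stoichiometry (NH3 + 2 O2 -> HNO3 gives 64/14 = 4.57 g O2 per g NH3-N, and HNO2 + 1/2 O2 -> HNO3 gives 16/14 = 1.14 g O2 per g NO2-N). The total and carbonaceous BOD values echo one of the Table 1 samples; the nitrogen concentrations are hypothetical.

    total_bod5        = 297.0   # uninhibited bottle, mg/L
    carbonaceous_bod5 = 95.0    # nitrification-inhibited bottle, mg/L
    nh3_n             = 42.0    # ammonia nitrogen, mg/L as N (hypothetical)
    no2_n             = 3.0     # nitrite nitrogen, mg/L as N (hypothetical)

    nod_measured   = total_bod5 - carbonaceous_bod5
    nod_calculated = 4.57 * nh3_n + 1.14 * no2_n

    print(f"measured NOD   = {nod_measured:.0f} mg/L")    # 202 mg/L
    print(f"calculated NOD = {nod_calculated:.0f} mg/L")  # about 195 mg/L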
The presently accepted method of inhibiting the nitrogenous reaction in
BOD tests is to use chemical inhibitors of the nitrification reaction.
While pasteurization or acidification have been used sporadically in the
past, these methods are cumbersome and are not as reliable as is chemical
inhibition. Two chemicals can be used: allylthiourea (ATU) or
trichloromethyl pyridine (TCMP). Allylthiourea seems to be the favored
chemical in Europe, and while effective it does have some disadvantages.
First, it is biodegradable and its oxygen demand can be a significant part
of the total BOD of samples having large dilution ratios. Secondly, since
it is biodegradable, it loses its effectiveness after a few days and must
be replenished periodically in long term tests.
The favored chemical in U.S. methods is 2-chloro-6-(trichloromethyl)
pyridine. This chemical is highly specific in inhibiting the ammonia to
nitrite nitrogen conversion and does not adversely affect carbonaceous
demand reactions (Young, 1973, 1983). It is quite stable in water and it
is effective for up to 30 days in BOD tests. One disadvantage of TCMP is
that it is not soluble in water and must be added to samples as a powder.
462
-------
This is not a major problem, however, because commercial dispensers are
available to provide accurate doses to individual BOD bottles.
The effectiveness of TCMP as a nitrification inhibitor has been
demonstrated in both dilution and respirometer tests (Young, 1973, 1983).
Typical results are shown in Figure 6, and nitrogen and oxygen balances are
shown in Table 1 for a number of test cases. In all cases, TCMP has been
shown effective for nitrification control and there is no evidence that it
interferes with the carbonaceous BOD reaction.
ACCURACY AND PRECISION OF BOD AND NOD TESTS
One problem facing BOD measurements is that it is not possible to know
the stoichiometry or completeness of the reaction at any time so that there
is no standard for establishing accuracy. Standard Methods (APHA, 1980)
gives a procedure for checking the dilution procedure and for identifying
problems with seeding or measurement technique. This involves adding 5 mL
of a solution containing 50 mg/L each of glucose and glutamic acid to
seeded dilution water and measuring the 5-day BOD. The theoretical oxygen
demand of this mixture is 308 mg/L. If the measured BOD5 falls within a
range of 220 ± 20 mg/L after seed correction, the analyst can feel with
some confidence that his procedure is correct and that there is no toxic
substance in the dilution water. This, however, does not establish a
method for determining the accuracy of the test.
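The check itself can be sketched in a few lines; the measured value and seed correction below are hypothetical, and the acceptance window is the 220 ± 20 mg/L range quoted above.

    measured_bod5   = 236.0   # mg/L, glucose-glutamic acid standard, seeded
    seed_correction = 12.0    # mg/L, oxygen demand attributable to the seed

    corrected = measured_bod5 - seed_correction
    acceptable = 200.0 <= corrected <= 240.0   # 220 +/- 20 mg/L

    print(f"seed-corrected BOD5 = {corrected:.0f} mg/L, acceptable: {acceptable}")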
Test precision reflects the ability to repeat a measurement in a set of
replicates or among tests conducted under a given set of conditions. It is
important when establishing precision that a common or known basis of
reference is used. For example, it should be made clear whether the
analysis of precision includes sampling, seeding, and transfer errors or
simply reflects the variability between replicates after samples have been
-------
[Figure 6: BOD versus time (days) for raw influent, primary effluent, and final effluent, each measured with and without TCMP addition.]
Figure 6. Examples of BOD measurements with and without
nitrification inhibition (From Young, 1972)
-------
Table 1. Measured total and carbonaceous BOD and measured and calculated
nitrogenous oxygen demand (NOD). (From Young, 1973).

                                    BOD, mg/L                NOD, mg/L
Sample                          Total   Carbonaceous   Measured(a)  Calculated(b)
Primary effluent                 328        224            104           96
Trickling filter effluent         80         25             55           56
Activated sludge effluent         75         16             59           56
Trickling filter effluent        297         95            202          224
Trickling filter effluent        100         38             62           58
Trickling filter effluent        135         84             51           51

(a) Measured NOD = Total BOD - Carbonaceous BOD
(b) Calculated NOD from the measured reduced nitrogen using the equations in Figure 2.
465
-------
transferred, seeded and diluted. Measurement precision for a given analyst
can be consistently below 7 percent if sampling, dilution and transfer
errors are eliminated and nitrification is controlled. This variability
would include natural biological variability between samples plus errors in
measuring dissolved oxygen concentration. Another contributing factor is
that the error in DO measurement is multiplied by the dilution ratio so
that considerable error may be introduced when measuring the BOD of a
high-strength sample that requires a large dilution.
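A short illustration of that multiplication, using an assumed dissolved oxygen reading error:

    do_error_mg_L = 0.1   # assumed error in a single DO reading

    for dilution_ratio in (10, 50, 150):
        print(f"1:{dilution_ratio} dilution -> BOD uncertainty of about "
              f"{do_error_mg_L * dilution_ratio:.0f} mg/L per DO reading")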
A common technique for monitoring BOD test methodology is to conduct
interlaboratory analyses of mixtures containing known amounts of organic
materials (typically glucose-glutamic acid). Such samples are sent to a
number of different analysts who are asked to measure the dilution BOD using
their normal analytical procedure at 2, 3, 5, 6-day or similar intervals by
using a stated method for seeding. Typical results of this type of
analysis are shown in Tables 2 and 3. While interlaboratory testing does
not give a good indicator of precision of BOD measurements in a given
laboratory or treatment plant environment, it does give an indication of
the variability of analyses from plant to plant.
THE FUTURE OF THE BOD TEST
The BOD test has been in use in essentially its present form for about
60 years. Throughout this period, there has been considerable criticism
about the soundness of the test procedure and its use as a pollution
control measure. So why does the test continue to be used? Basically, the
reason is that no acceptable substitute test has been developed that
responds to and provides a measure of the amount of biodegradable matter
present in a wastewater.
466
-------
Table 2. Precision of BOD5 measurements.

                           Method Research Study 3,     Ballinger and    Young and
                           EPA, 1971 (Dilution test)    Lishka, 1962     Baumann, 1976
Parameter                  Low-level     High-level     (Dilution test)  (Respirometer)
Theoretical value, mg/L    2.2 (BOD5)    194 (BOD5)     308 (THOD)       350 (THOD)
Mean of measured
  BOD5 values, mg/L        2.12          175            214              296
Recovery, %                96            90             69               85
Coefficient of
  variation, %             33.2          15.0           19.5             5.8
No. of analyses/
  No. of laboratories      74/56         73/56          34/34            4/4
467
-------
Table 3. Interlaboratory analysis of 5-day BOD as conducted by the U.S.
Environmental Protection Agency (From Britton, 1985).

        Added Glucose/    EPA & State                      Statistics
        Glutamic Acid     Laboratories    Mean         Std. Dev.    Coef. of
Study   (50/50), mg/L     Reporting       (X, mg/L)    (S, mg/L)    Var.(a), %
WP003        37.0             98            25.47        4.299        16.9
            340              102           229.9        37.06         16.1
WP004         6.0             85             4.270       0.8710       20.4
            325               85           218.7        33.36         15.3
WP005         4.0            102             2.734       0.6672       24.4
            176              102           118.8        17.05         14.4
WP006       101.3             90            68.97        9.785        14.2
             24.2             95            16.81        2.131        12.7
WP007         6.0             94             4.09        1.0662       26.1
            144.0             95           100.0        12.603        12.6
WP008        39.1             68            26.7         3.377        12.6
             89.6             70            61.5         6.609        10.7
WP009         4.0             65             2.91        0.9528       32.7
            231.0             68           146.0        27.257        18.3
WP010       114.0             69            75.4        10.408        13.8
            161.0             69           103.6        17.850        17.2
WP011         5.0             74             3.56        0.5500       15.4
            192.0             75           125.2        14.714        11.8

(a) Coef. of Var. = Std. Dev./X, %

From these data (WP007-WP011) the following linear relationships may be calculated:
    X = 0.665 (added level) + 0.225, with an R2 = 0.99+
    S = 0.0998 (added level) + 0.430, with an R2 = 0.99+
468
-------
Consequently, until a better and more acceptable biochemical-based test
is developed, we no doubt will have to continue to use the BOD test as one
measure of water quality. Our objective, then, should be to learn as much
as possible about the factors affecting the BOD test and to learn to
control those factors that interfere with proper analysis, interpretation
and application. Presently, the factors contributing to nitrification in
BOD tests are known and methods for nitrification control are available.
We should use this technology!
REFERENCES
Britton, P. W., Unpublished results of EPA Laboratory Performance
Evaluation Studies, Analytical Quality Control Laboratory, U.S.
Environmental Protection Agency, Environmental Monitoring and Support
Laboratory, Cincinnati, Ohio (March 1985).
Hall, J.C. and Foxen, R.J., "Nitrification in BOD5 Test Increases POTW
Noncompliance," Journal Water Pollution Control Federation, 55, 1461-1469
(Dec. 1983).
Hurwitz, E., Barnett, G.R., Beaudoin, R.E. and Kramer, H.P., "Nitrification
and BOD," Sewage Works Journal, 19, 995-999 (1947).
Sawyer, C.N. and Bradney, L., "Modernization of the BOD Test for
Determining the Efficacy of Sewage Treatment Processes," Sewage Works
Journal, 18, 1113-1120 (1946).
Standard Methods for the Examination of Water and Wastewater, 16th Edition,
American Public Health Association, New York (1985).
U. S. Environmental Protection Agency, "Method Research Study 3. Oxygen
Demand Analysis." Analytical Quality Control Laboratory, Cincinnati, Ohio
(1971).
Young, J.C., "McDermott, G.N. and Jenkins, D., "Alterations in the BOD
Procedure for the 15th Edition of 'Standards Methods for the Examination of
Water and Wastewater," Journal Water Pollution Control. 53. 1253-1259 (July
1981).
Young, J.C., "The Electrolytic Respirometer," Water Research. 1.0.
1031-1040, 1141-1149, (1976).
Young, J.C., "Chemical Methods for Nitrification Control," Journal Water
Pollution Control Federation. 45. 637-646 (1973).
Young, J.C., "Comparison of Three Forms of Trichloromethyl Pyridine for
469
-------
Nitrification Control," Journal Water Pollution Control Federation. 55.
415-416 (April 1983).
Young, J.C., "Waste Strength and Water Pollution Parameters,"
in Water Analysis - Vol 3.; Organic Species, edited by
R. Minear and Keith. Academic Press, 1984.
470
-------
PEGGY KNIGHT, PH.D.
WEYERHAEUSER ANALYTICAL TESTING SERVICES
ANALYSIS OF PRIORITY POLLUTANTS
BELOW FIVE NANOGRAMS (ON COLUMN)
IN MARINE SEDIMENTS BY ISOTOPE DILUTION GC/MS
DR. KNIGHT: I've been
spending a lot of time the last few weeks trying to
figure out why I should give this talk. When Dale
Rushneck called last year to ask if I could give a
talk about this project, I was, of course, flattered.
I told Dale that there were other labs, much larger
labs, out there which had used the method and who
had participated in the development of the technique,
which we had not. That they also had better
precision and accuracy, and I was certain they had better
QA/QC practices. Dale asked anyway and I guess I'm
here.
I think probably the advantage to my giving
this talk is to show that a small lab used to
handling a variety of analyses can effectively use
the isotope dilution technique for the analysis of
priority pollutants in complex matrices.
Our lab is relatively small. Our chromatography
group consists of four full time people and one
471
-------
part time consultant. We have one Finnigan GC/MS
and a battery of GCs. Like many industrial labs,
we analyze a plethora of compounds in about as
many matrices as we can come up with. Seldom do we
see very many samples which develop into what might
be called routine analysis.
The hundred or so samples in this particular
project consisted of marine sediment samples to be
analyzed ultimately for the Washington DOE through
a contractor for the Port of Tacoma. Our lab was
subcontracted for these analyses. The sediments
were from the Blair and Milwaukee waterways, which
impinge on Commencement Bay in the Puget Sound.
Commencement Bay, as many of you know, is a Superfund
site. Part of the requirements for the analysis
were the detection limits should be as low as
practical and at least five micrograms of compound per
extract.
SLIDE 1
The method was based on the EPA isotope dilu-
tion method 1625 for priority pollutants. It was
modified to use soxhlet extraction of the wet
sediment and to add a few other procedures to clean up
the sample extract before analysis. This modification
was developed jointly by Tetra Tech and California
472
-------
Analytical Laboratories for the Superfund Project
analysis of Commencement Bay sediment.
One hundred grams of wet sediment were weighed
directly into a soxhlet extraction thimble. The
soxhlet had been previously cleaned by detergent
washing, extensively rinsing it with water, rinsing
it with solvent and then running the thimble and
the glassware overnight with the extraction solvent.
Five micrograms of each label were spiked into the
sediment, and the sediment stirred and the extraction
started. The sediment was stirred after the first
hour and twice more within the next eight hours in
an attempt to minimize channeling.
The extract was transferred to a separatory
funnel and half-saturated aqueous sodium sulfate
added. The aqueous phase was acidified and then
extracted. The aqueous phase was then made basic
and re-extracted with methylene chloride. We
had a great problem here with emulsion formation
and it would have been a heck of a lot easier if we
had been able to simply ignore the bases, but they
wouldn't let us get away with it.
Acid/neutral and base extracts were combined
and Kuderna-Danish evaporated with a steam bath.
Recoveries of all compounds were good to this stage.
473
-------
We had taken blank spikes through the procedures to
find out where the recovery losses would be.
The sample was shaken with mercury to remove
sulfur. We seemed to lose benzidine at this stage
at the five microgram level. To get rid of the
mercury sulfide fines, which were extensive, the
extract was repeatedly centrifuged and decanted.
Fines which remained collected at the top of the
GPC column.
Gel used for the GPC was Biobeads SX-3 from
Biorad. The columns were nitrogen pressurized and
manually operated. The gel was optimized for
separation of corn oil lipid from phthalate and
recovery of pentachlorophenol, and the phthalate.
The columns were reused, but if an exceedingly
sloppy sample was analyzed, the column was discarded
or the column was washed.
Part of the extract was split out for pesti-
cides, but since it's not an isotope dilution
technique, I will not be discussing them. I under-
stand now that several of the labelled pesticides
are available.
The solvent was exchanged and the extract run,
with solvent washings, through a disposable C-18
solid phase extraction column.
-------
As some of the higher labeled PAHs fluoresce
very well, we watched the fluorescence throughout
the entire procedure. This gave an excellent visual
check of the progress. By this aid, it seemed that
we lost a bit of higher PAHs on the SPE column,
however, they were not removed even with methylene
chloride. The extract was concentrated again, the
injection internal standard, dichlorobiphenyl added,
and sample injected on the GC/MS.
The instrument used was a Finnigan 4000 GC/MS
with a Nova 3 data system. We used standard columns,
DB-5. It could be useful to say that a standard of
4 ug/mL of phenanthrene gave an area of about 50,000.
At these instrument settings we found that we some-
times had a lot of problems with disc space. Some-
times the more concentrated samples would run off
the end of the disc, which would be discon-
certing.
Numbers were developed from a program developed
by Dale Rushneck and Joel Karnofski, made available
modified, from Finnigan, which reverse-searches for
labels. After finding these, it searches for the
corresponding non-labeled compounds using a window
around a linear least squares fit of current times
compared to reference retention times.
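A rough sketch of that matching step follows (Python; this is not the actual Rushneck/Karnofski program, and the 15-second window and all names are assumptions):

    # Fit current label retention times against reference times by linear least
    # squares, then look for each native compound inside a window around its
    # predicted current retention time.
    import numpy as np

    def fit_rt_map(ref_label_rts, current_label_rts):
        """Least-squares line mapping reference retention times (s) to current run times (s)."""
        slope, intercept = np.polyfit(ref_label_rts, current_label_rts, 1)
        return slope, intercept

    def find_native(peaks, ref_rt, rt_map, window_s=15.0):
        """Return (rt, area) peaks falling inside the predicted window (window width assumed)."""
        slope, intercept = rt_map
        predicted = slope * ref_rt + intercept
        return [p for p in peaks if abs(p[0] - predicted) <= window_s]

    # Example with three labels found by the reverse search (reference vs. current times, s)
    rt_map = fit_rt_map([600.0, 900.0, 1200.0], [612.0, 915.0, 1221.0])
    print(find_native([(916.0, 4.1e4), (1100.0, 2.0e3)], ref_rt=898.0, rt_map=rt_map))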
475
-------
The lower limits were calculated on a compound-
by-compound basis for each individual sample using
instrument detection limits and adjusting this by
the sample weight, recovery of the corresponding
label, and the variability of the area of the
injection internal standard, DFB.
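The talk does not give the exact formula, so the following is only a plausible sketch of that per-sample adjustment; every name, unit, and the form of the correction are assumptions:

    # Assumed form: scale an instrument detection limit to a sample-specific lower
    # limit using extract size, sample weight, labeled-compound recovery, and the
    # scatter of the injection internal standard (DFB) area.
    def sample_detection_limit(instrument_dl_ng_on_column, injection_volume_ul,
                               extract_volume_ul, sample_weight_g,
                               label_recovery_fraction, is_area_rsd_fraction):
        """Return an adjusted lower limit in ug/kg of wet sediment."""
        # smallest detectable amount in the whole extract, ng
        extract_dl_ng = instrument_dl_ng_on_column * extract_volume_ul / injection_volume_ul
        # correct for losses (label recovery) and inflate by internal-standard scatter
        adjusted_ng = extract_dl_ng / max(label_recovery_fraction, 1e-6)
        adjusted_ng *= (1.0 + is_area_rsd_fraction)
        return adjusted_ng / sample_weight_g     # ng/g is the same as ug/kg

    # e.g. 2 ng on column, 1 uL injected from a 500 uL extract, 100 g of sediment,
    # 60 percent label recovery, 20 percent relative scatter in the DFB area:
    print(sample_detection_limit(2.0, 1.0, 500.0, 100.0, 0.60, 0.20))   # 20.0 ug/kg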
The calibration curve consisted of five points
ranging nominally from 0.5ug/mL to 25 ug/mL. For
the phenols, the range was usually slightly higher.
I've picked out a few to show. They are fairly
typical. Some of them were as straight as strings.
SLIDES 2&3
The precision and accuracy data are based on
duplicate spike pairs; that is, a sample was
taken in triplicate, analyzed in duplicate, and
the remaining replicate spiked with five micro-
grams of the non-labeled materials. It was carried
through the entire procedure. This was done on
approximately 10 percent of the samples.
SLIDE 4
Again, I've picked out a few to show as a
complete table would be absolutely indigestible.
These precision and accuracy figures are in percent
recovery of the spike. These are fairly typical.
In general, the PAHs are a little better and the
-------
phenols, of course, are worse. Butylbenzylphthalate
has no label standard to recover.
Somehow, when I proofed the slides, I managed
to miss the big blank area there. It doesn't mean
that they weren't recovered except for 4-nitrophenol
which usually is not. I count it among those
compounds which don't really work.
In comparison with Table 8 in Method 1625,
most of these figures correspond fairly well. The
accuracy figures are usually well within the ranges.
The standard deviations sometimes are a bit higher
and sometimes within the ranges. The recovery of
the labels over on this side usually were within
the range. The label recovery figures represent
a summary of the data over all samples as well as
the duplicates and spikes.
To take a few of these examples, 2-chlorophenol
from 1625, the range for accuracy is 79-135 and the
standard deviation is +/-13 in 1625. Label recovery
ranges from 23 to 255 percent. Dichlorobenzene is
63-201, the standard deviation is 43, and a lower
limit for the recovery is not specified although it
should be present, and up to 550 percent.
To fill in these numbers that I'm missing, our
recoveries here for acenaphthene were somewhere
477
-------
between about 30 and 150 percent. 4-Nitrophenol
was not recovered in the sample much of the time.
It just happened to be for some reason with our
spiked samples. Fluorene ranged from 40 to 180.
Most of the compounds that are analyzed by
this method are really very similar to this. If
the PAHs seem more highly represented than is warranted,
it's because those compounds were the compounds
with which I was dealing. I should point out that
in comparison with the precision and recovery samples
in 1625, 1625 figures were developed using what
Dale referred to as a "real world reagent water
spike," whereas these are actual sediment samples
and, of course, have to be corrected for the amount
that was in the sample itself.
There were a few compounds for which the method
seemed ultimately to fail, failure being determined
by non-recovery of the label. As was mentioned
earlier, benzidine was lost even on a blank spike
at five micrograms at the mercury stage. Other
compounds did not seem to survive the procedure,
also at this level; 2,4-Dinitrotoluene, 2-Methyl
4,6-Dinitrophenol and 4-Nitrophenol.
Hexachlorocyclopentadiene may or may not
survive the procedure. We seemed to have problems
478
-------
getting that compound through the column. On a
fresh column or a regenerated column, there are no
difficulties. However, after a few of the sediment
extracts have been injected on the column, the
response rapidly degrades. A rather similar effect
was exhibited with acenaphthene. On a new or
regenerated column, the response was twice what it
would have been on a column which had been exposed
to sediment extracts. Consequently, after the sedi-
ment extracts, we reexamined the calibration curve.
SLIDE 6
This is a pictorial representation of the
replication of numbers. A two sample plot. The
amount at 50 would represent five micrograms of
compound. The PAHs are also.
SLIDE 7
This is just blowing up that particular slide.
The phenols are, as expected, more scattered.
SLIDE 8
These are the people in the lab that also
participated in the study: Ed Barnes, Candy McFaul,
Jim Leong and Mike Grove. Two of them have spent
a lot of time doing extractions and...well, three
spent a lot of time doing extractions and the rest
of us were involved in data manipulation and keeping
479
-------
the lab going. You learn a lot about your lab when
you have a project like this; how well the group
functions as a unit, how well the normal workload
is handled with the additional sample load, and
where some of the faults in normal practices are.
We had two major faults which came to light
during the project, one caused by the other. All
the data had to be manually transferred at least
twice before the final report. Sometimes this can
act as a prompt to double check the data. In our
case, we did have transcription errors which
developed and it also took so much time to transfer
the data that we allowed some false positives
through. The compounds which come to mind are
benzidene and N-nitrosodiphenylamine. Fortunately,
a QA/QC review practice outside caught these errors
before the data was entered into the data base.
With a small number of samples, the errors
would not have occurred, but with the bulk of work--
the sheer number of numbers— they did occur. As a
result, one of the highest priorities in our lab-
oratory has become the computerized transfer of
data, and I might add that no report is issued
until it has been doubly rechecked, no matter how
loudly the client screams or the lab manager.
Are there any questions?
480
-------
[Slide 1: sample preparation scheme, developed jointly by Tetra Tech and CAL
for the Superfund analysis of Commencement Bay sediment; CH3OH/CH2Cl2
extraction of the wet sample.]
481
-------
[Slides 2 and 3: calibration curves of area ratio versus amount, with 10%
error limits, for Fluorene (Ref: D10-Fluorene) and Benzo(ghi)perylene
(Ref: D12-Benzo(ghi)perylene).]
483
-------
COMPOUND                     PRECISION/ACCURACY     LABEL RECOVERY

2-Chlorophenol                    94 +/- 9.8            16-75
1,3-Dichlorobenzene              128 +/- 56              2-63
Acenaphthene                     104 +/- 20             42-83
4-Nitrophenol                     95 +/- 27             0-250
Fluorene                          80 +/- 38            47-154
Pentachlorophenol                107 +/- 45            18-180
Phenanthrene                     106 +/- 28            13-147
Fluoranthene                     105 +/- 26            32-150
Butylbenzylphthalate              80 +/- 15              --
Chrysene                         100 +/- 30            40-250
Benzo(a)pyrene                    91 +/- 46            12-300
Benzo(ghi)perylene               102 +/- 41            14-400
-------
COMPOUNDS GENERALLY NOT RECOVERED
• Benzidine
• *Hexachlorocyclopentadiene
• 2,4 - Dinitrotoluene
• 4 - Nitrophenol
• 2 - Methyl - 4,6 - Dinitrophenol
485
-------
[Slides 6 and 7: two-sample plots of replicate results; the phenols show more
scatter than the PAHs.]
-------
QUESTION AND ANSWER SESSION
MRS. VOLLMERHAUSEN:
Jill Vollmerhausen, Martin Marietta. You said you
had a lot of trouble with emulsions. I was wondering
what method you used to handle the emulsions?
MRS. KNIGHT: We used
centrifugation.
MRS. VOLLMERHAUSEN: Did
that usually take care of it?
MRS. KNIGHT: Well, it
got us to about 80 percent of recovery of the
extract after repeat centrifugation, but it
sometimes took a long time.
MRS. VOLLMERHAUSEN: I
also notice you use soxhlet and sep funnel extraction.
MRS. KNIGHT: That's right.
.MRS. VOLLMERHAUSEN: What
was the reason for that?
MRS. KNIGHT: The methods
were dictated by a client and previous studies had
indicated that this was necessary...
MR. TAYLOR: The answer
for that is when you extract 100 grams of wet
sediment—and methanol is used in this—you have
489
-------
a horrendous amount of water which you then have to
separate. That's also the reason for the sodium
sulfate.
MR. STANKO: George
Stanko, Shell Development. In your spiking procedure,
you spiked the labelled compounds into the wet
sediment?
MRS. KNIGHT: That's right.
MR. STANKO: Did you
conduct any experiments on taking the sediment,
drying it first and comparing your spiking of
labelled compounds onto a dry sediment, then add
the water and do your extraction, versus spiking
labelled compounds into a wet sediment?
MRS. KNIGHT: No, we
didn't undertake that study. The methods, as I said,
were totally specified. Whether that study has
been done, perhaps Paul would like to comment.
MR. BARRICK: I have an
answer for that. The major reason that wet sediments
were used is because there have been several studies
that show with marine sediments and other kinds of
things like this, that if you do a drying procedure
on them, unless it's very careful freeze drying,
you get development of artifacts, active sites,
490
-------
things like that. When we were dealing with having
to recover acids, bases, neutrals, pesticides,
everything all over the place, you simply couldn't
afford to do a modification of the matrix through
drying, so we did a wet sediment extract and then
removal of the water through the separatory technique.
There are so many different phase associations
of all these chemicals that you can't expect the
isotope dilution spike to mimic the recovery out
of the matrix of all those things. There are
surface cuttings, things that are engulfed and
locked up in matrices and clay lattices and things
like this. So the real purpose of this was to get
your spikes in there, get them mixed up, get them
as equilibrated as possible, and then pull them out
and primarily use the isotopes as an analytical
recovery spike so you could see what was being lost
in the lab and what wasn't.
You were making the tacit assumption that you
were starting out with a given amount of extract
from your matrix, but they weren't geared in a true
sense of isotope dilution to say that it was mimick-
ing the total recovery of every single compound out
of every single one of the sum matrices within the
samples.
491
-------
MR. TELLIARD: Bob, would
you tell the ladies who you are?
MR. BARRICK: I'm sorry.
Bob Barrick, Tetra Tech.
MR. STANKO: I have one
final question on that. When you actually calculated
the concentrations of the analyte, did you use the
isotope dilution calculation or did you use the
isotope dilution compounds, or were you spiking
only to give you an indication of quality assurance?
In other words, how did you use the labelled spiking
data? Was it incorporated into calculations?
MRS. KNIGHT: Yes, it
was. That was also specified in the method that we
were to use.
MR. STANKO:, I think I
would have some concerns based on the information
you just presented.
MRS. LESAGE: Suzanne
Lesage, Canada. I would like to know if you had to
look for other unknown compounds in those marine
sediments, and if so, did the addition of all
those labelled compounds make the chromatograms
so complicated that you would have a hard time.
MRS. KNIGHT: We did not
•492
-------
have to look for other unknown compounds. It would
make the identification of those materials perhaps
more complex, but as has been noted before, the
masses for the labelled compounds that we're using
are fairly unique, and I believe it would have helped
quite a bit in making sure that these were the labels.
Also, we knew what the labels were supposed to be
and what labels were involved and where they
occurred.
MRS. LESAGE: Except in
coelution problems.
MRS. KNIGHT: Well,
coelution problems happen.
MR. TELLIARD: Thank you,
Peggy.
493
-------
MR. TELLIARD: Our next
speaker is going to talk about some ion chromato-
graphy. Jack is from Cincinnati—that other lab.
Jack.
494
-------
JOHN D. PFAFF
UNITED STATES ENVIRONMENTAL PROTECTION AGENCY
ENVIRONMENTAL MONITORING AND SUPPORT LABORATORY
USES OF ION CHROMATOGRAPHY
FOR INORGANIC ANALYTES IN WATER
MR. PFAFF: We've heard
so many talks throughout the last day and a half
on HPLC, I thought maybe what I ought to do is
back up a little bit here and tell you what the
difference between ion chromatography and HPLC is.
Basically, it's very simple. HPLC came first,
IC, or ion chromatography, was a later addition to
HPLC. The original design for ion chromatography
came from Dow Chemical in the early 70's and it
was originally designed to do inorganic ions
using aqueous solutions and using conductivity
detectors. Throughout the years, it has grown up
to a certain extent, and now it can be used for
some of the smaller organic materials and it cer-
tainly is using more different types of eluants
than just the aqueous types.
SLIDE 1
What I've tried to do here is give you a very
basic schematic of an ion chromatographic system.
495
-------
There are basically two types of ion chromatography,
if you want to call it that, and this is due to
patents more than anything else. When Dionex came
out with the first commercial type of ion chromato-
graph, they used what's known as a suppressor—I'll
get into that in just a minute—and they patented
the use of the term suppressor and the use of a
suppressor column following the separator column.
Consequently, anyone else who wanted to get into
the field, could not use a suppressor column. So
now you have what is called a suppressed and non-
suppressed, or as some of them like to call it,
electronically suppressed pieces of equipment.
If you start at the top of the slide where
the eluent is coming down, you get to the injec-
tion port. The injection port is a simple straight-
forward thing except when you try to modify it and
put what is called a preconcentrator column in
there. In the Dionex instrument, which most of my
work has been done on, the injection port goes
into a replaceable sample loop so that you can
change the sample loop to put in any size from
approximately 10 microliters to somewhere in the
neighborhood of one milliliter. Of course, you
get corresponding peak broadening depending on
496
-------
the size that you use.
Another way of injecting larger amounts of
samples into the instrument is to put in a concen-
trator column, which is very similar to the separa-
tor column that is being used. It is placed behind
the sample injection port and up to 100 milliliters
can be injected through this concentrator column.
The ions of interest are held on the concentrator
column while the matrix is allowed to go to waste.
The instrument is then switched over and the eluent
is passed through the concentrator column and goes
through the separation mode. So when you start
talking about detection limits, you usually have
to specify what size sample loop you're using
and/or whether you're using preconcentration of
the sample.
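A small sketch of why the loop size (or the use of a concentrator) has to accompany any quoted detection limit; the concentrations and volumes below are illustrative only:

    # The mass reaching the separator column scales directly with the injected
    # volume, so a detection limit quoted in mg/L means little without the loop size.
    def on_column_mass_ng(conc_mg_per_l, injected_volume_ul):
        """Mass delivered to the column, in ng (1 mg/L equals 1 ng/uL)."""
        return conc_mg_per_l * injected_volume_ul

    # The same 0.1 mg/L standard through a 10 uL loop, a 1 mL loop, and a
    # hypothetical 10 mL pass through a concentrator column:
    for volume_ul in (10.0, 1000.0, 10000.0):
        print(volume_ul, "uL ->", on_column_mass_ng(0.1, volume_ul), "ng on column")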
The next thing in the line is usually what's
called a guard column. The guard column is simply
put there to protect the investment that you put
into the separator column. You simply don't want
to ruin your separator column, so you put something
that can be sacrificed at the head of the column.
Let me say here that the instrument itself cannot
tolerate solids. Depending on whether you're
talking about the eluent or the sample, you're
497
-------
talking about particles in the neighborhood of
about 0.02 microns that have to be taken out.
They are taken out to a certain extent by the guard
column, but then you have to replace this more
often. The separator column, obviously, as in any
kind of chromatography, is the heart of the system.
It does the separation so that you isolate one of
your components from the other. I want to discuss
a little bit later the advantages of suppressed
and non-suppressed, so let's just continue on
down the slide.
The dash lines that I've put in here are simply
to differentiate between the suppressed and the non-
suppressed pieces of equipment. Anything between
the dashed lines then, would be suppressed and if
you sort of surgically remove that section of the
slide, you would have., in essence, a non-suppressed
piece of equipment. So in the Dionex equipment,
you go through a suppressor.
Now, what's the sense of the suppressor? The
suppressor is going to take the eluent, which for
anions is usually some ratio of carbonate/bicar-
bonate solution, and change it into a non-conduct-
ing species so that your background is extremely
low, and in turn, it changes your anions to the
498
-------
acid form so you're looking at things like HCl or
HF, which are highly conductive.
Now, this does one thing for the Dionex
equipment. It does increase the sensitivity levels
which you can go down to. Usually a rule of thumb
is that you're talking somewhere in the neighborhood
of one order of magnitude difference in sensitivity
between suppressed and non-suppressed equipment.
So, you go through the suppressor. If you'll
notice, you have two outlets on your suppressor.
You have a regenerative going in and out as well
as an eluent in and out. This is a relatively new
piece of equipment. The original suppressors were
packed bed suppressors. I don't know how many of
you have ever used ion chromatography, particularly
with packed bed suppressors. The packed bed sup-
pressors were good for let's say 12 hours of use
before they just simply quit on you. This neces-
sitated approximately two hours of regeneration
before you could put the instrument back into
operation.
The introduction of the fiber suppressors
did away with the drawbacks of packed bed suppres-
sion completely. The suppressor is not a packed
column but rather it's more of a hollow fiber
499
-------
packed with beads, simply to cut down on the dead
volume, and it is constantly being regenerated by
a flow of sulfuric acid counter-current to the
flow of the sample.
After the suppressor, the sample enters the
detector; whichever kind of detector you're using.
Probably the most used type of detector is the
conductivity detector, but as I'll show you later,
you have many different choices now that you can
use.
SLIDE 2
This is a separation of the common anions.
For those of you who can't read the slide, fluoride,
chloride, nitrite, orthophosphate (not total phosphate),
bromide, nitrate and sulfate are separated.
eight minute separation time is going to be very
dependent on the type of separator column you're
using and your eluent. It can vary anywhere from
8 to let's say 20 minutes, depending on the column.
This particular column here is the proprietary
column for Dionex that is referred to as an AS-4
column. I know it's a matter of choice, however,
I don't particularly care for the AS-4 column. It's
very easily overloaded and, secondly, if you change
any of these ratios of solutions that you see here,
500
-------
you tend to get a lot of interference. My personal
preference is that I would rather stretch it out
and have a larger concentration that I can look
at.
We did most of our work with what's called the
AS-3 column. It's a little older column, it's more
loosely packed and consequently, you can put more
material on it, but your separation times are
going to be at least three times as long. For
sulfate, we're talking somewhere in the neighbor-
hood of let's say about 20 minutes for elution.
Sulfate is your last elutor from this column.
I mentioned that I would later differentiate
between suppressed and non-suppressed pieces of
equipment. Sensitivity is better for the suppressed
equipment. What are we talking about? As I
mentioned, about an order of magnitude. The cost
is significantly different. Your non-suppressed
pieces of equipment are going to be considerably
cheaper than your suppressed pieces of equipment.
Eluents and columns that can be used with the
two different kinds of pieces of equipment tend to
be more numerous for the non-suppressed pieces of
equipment. Why? Number one, you have many more
companies actually involved in the production of
501
-------
non-suppressed equipment. You even have a few
people who are producing columns now that really
don't produce the equipment themselves and I think
this is a healthy situation. It's the way I think
gas chromatography went in its infancy. If you do
tend to get these people, I think you're going to
get away from proprietary columns and there's going
to be more competition and I think there's going to
be a more successful introduction of columns.
I mentioned the fact that our lab has a Dionex
piece of equipment, a 2120, which is the latest in
their series. Let me point out one thing here,
when you start looking at people's slides. If you
notice this negative peak right here, this is what's
known as the water dip. This is the thing that
causes a lot of gray hair in people who are trying
to do anionic separation on ion chromatography. If
you see the water dip coming out in your standard
anion separation after fluoride, you can sort of
guess that the chromatogram that you're looking
at was done on a packed bed suppressor. When you
get to the fiber suppressor, the negative peak
occurs in front of fluoride, and then it starts
to cause more and more interferences in the
separation of fluoride.
502
-------
One other thing with packed bed suppressors.
The retention times changed drastically as they
tended to exhaust themselves, and you really had
to watch what you were doing because even the water
dip would migrate into an area where it would become
an interference.
SLIDE 3
Talking about the migration of retention times,
one of the drawbacks of ion chromatography seems to
be the fact that the retention times can vary
considerably with concentration. This work was
some early work that we did. I picked out the
worst and best possible cases, obviously. Fluoride
is an early eluter, so as you can see, if you
jump from 100 ppm to .5 ppm, you're only talking
two-hundredths of a minute change, which you might
as well forget. There's really no difference there.
For nitrate using that same ratio of 100 parts to
one-half a part, you'll notice now that you're
talking in the neighborhood of two minutes variation
with concentration. In sulfate, which is the
worst, going from 500 ppm to 1, you're changing in
the neighborhood of four minutes in retention time.
I think the one thing here that you've got to
say right at the beginning of any kind of a
503
-------
methodology using ion chromatography is, you've
got to do an awful lot of standard injection into
the instrument so that you can get some kind of an
idea of what it is that you're actually looking
at. There's just no way around it.
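One way to act on that advice, sketched below with invented retention times, is to build an identification window for each anion from repeated standard injections across the working concentration range:

    # Derive identification windows from many standard injections at different
    # concentrations, since retention times (sulfate especially) shift with
    # concentration.  All retention times below are invented for illustration.
    standard_runs = {
        "fluoride": [1.98, 2.00, 1.99, 2.00],   # min; nearly constant
        "nitrate":  [7.9, 8.4, 9.1, 9.8],       # drifts about 2 min over the range
        "sulfate":  [14.2, 15.5, 16.9, 18.1],   # worst case, about 4 min of drift
    }

    windows = {ion: (min(rts), max(rts)) for ion, rts in standard_runs.items()}

    def identify(ion, observed_rt_min):
        lo, hi = windows[ion]
        return lo <= observed_rt_min <= hi

    print(windows["sulfate"], identify("sulfate", 16.0), identify("sulfate", 12.0))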
SLIDE 4
This is a slide that I borrowed as a figure
from Small who published a paper in Analytical
Chemistry and that paper introduced the hollow
fiber suppressor for Dionex. If you're looking at
the first one...the only difference between the
first and the second is the size of the packed bed
suppressor...you'll notice the two dips; number one,
the water dip and secondly, the carbonate dip. The
second one shows that as the column gets longer,
the two peaks seem to change in retention time.
The third one actually shows that with the hollow
fiber suppressor, they migrate together and you
think you only have one peak, but in essence you
don't.
Let's treat them separately. The water dip
peak, although it is something that gives you a
problem, is easily gotten around and that is simply
by matching the ionic strength of your sample to
the eluent that you're using. That sounds
504
-------
difficult, but if you take and make up an eluent
at 100 times the concentration that you're using
in the instrument, and simply add 1 mL of that
to 100 mL of your sample, you really haven't changed
much in the form of concentration, but your water
dip will level out and you don't have to worry
about interference from the water dip.
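The arithmetic behind that trick, as a short sketch (the working eluent strength below is illustrative; the 100x concentrate and the 1 mL into 100 mL addition are the figures given above):

    # Adding 1 mL of a 100x eluent concentrate to 100 mL of sample raises the
    # sample's eluent background to roughly the working eluent strength while
    # diluting the analytes by only about one percent.
    eluent_mM = 3.0                       # working carbonate/bicarbonate strength (assumed)
    concentrate_mM = 100 * eluent_mM      # 100x concentrate
    sample_ml, spike_ml = 100.0, 1.0

    background_mM = concentrate_mM * spike_ml / (sample_ml + spike_ml)   # about 2.97 mM
    analyte_fraction = sample_ml / (sample_ml + spike_ml)                # about 0.99

    print(f"background {background_mM:.2f} mM vs eluent {eluent_mM:.2f} mM; "
          f"analytes retained at {100 * analyte_fraction:.1f}% of original strength")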
The carbonate dip, although Small showed it
as a negative dip, this negative dip shows that you
have depletion of the carbonate ion; that it's lower
in concentration than the eluent. What happens if
you have carbonate present in your sample at a
higher level? What happens is you get a positive
peak. For a long time with the system that we
were using, we were trying to determine fluoride
and the reproducibility of it was not good.
Eventually we weakened the eluent so much trying
to determine it that our fluoride peak in natural
water samples evolved into two peaks. What happened
was that carbonate elutes at the same time as fluo-
ride and is a severe interference as far as fluoride
determination is concerned.
The carbonate is one of the replaceable ions
in the separator column. It is the ion that the
anions replace throughout the separation. So if
505
-------
you have a sample that is very high in concentration
of total ionic strength—add everything together,
you tend to get a rather high carbonate peak due
to the separator column, and somehow or another,
you can't quite do without that. So you're always
going to have it. We originally started off look-
ing at this as hardness, but later on we found out
that it has been coming off the column.
How did we get around it? Actually, we
got a column made by another company, other than
Dionex, and put it into the suppressed equipment.
It will actually separate the carbonate from the
fluoride, so we're hoping that we can now do this
quantitatively.
SLIDE 5
I mentioned the fact that there are other
detectors, and so I included this slide so that
you could see that you can do things other than
the standard anions. What you're looking at, the
bottom chromatogram, is the common six anions
done by a conductivity detector while the top
chromatogram—this is the separation here--is done
by an electrochemical detector.
If you'll notice the same sample here, you've
got your separation down in here of your standard
506
-------
anions, but now we're getting sulfite, cyanide and
bromate coming out on the electrochemical detector.
These detectors can be run in series since neither
are destructive, and you can get considerable
separation. Since cyanide is one of the materials
that is most commonly sought after for separation,
we're hoping that this particular separation can
be used to do some of the separations for cyanide
in place of some of the wet methodology that is
used at the current time.
In addition to electrochemical detectors,
ultraviolet detectors, fluorescence detectors can
be used and there's even an adjunct, if you will,
detector known as a post column reactor in which a
chemical is pumped into the system to react with
the eluent and the anions, or whatever it is you're
trying to separate, that will usually give you a
reaction that can be determined using ultraviolet
detectors.
Getting back to some of the interferences for
the common anions, aluminum interferes, obviously,
with the fluoride, but that has nothing to do with
the ion chromatograph. With fluoride, which is
first off the column, you're really interfering
with what we call the garbage peak—everything
507
509
-------
have trouble in zeroing the baseline. This is
part of the reason for this order of magnitude
difference.
In way of summary, let me say that I think
ion chromatography has a place in the analytical
scheme of things. I want to say one other thing,
and that's to caution you to run a lot of standards
and spikes because of this migration of retention
time, in order to be sure of what you're doing. I
think it is probably one of the better and more
promising techniques to come along, in inorganic
chemistry anyway, since the advent of atomic
absorption.
Thank you. Any questions?
MR. TELLIARD: Any
questions? Thank you.
NOTE: Figures are reprinted with the permission
of the Dionex Corporation.
510
-------
511
-------
512
-------
513
-------
[Figure: chromatograms illustrating the position of the water dip relative to
the point of sample injection.]
-------
515
-------
516
-------
517
-------
MR. TELLIARD: Our last
speaker this morning is Suzanne with Environment
Canada. She's going to talk about some HPLC with
electroconductivity detection for phenols.
518
-------
SUZANNE LESAGE, PH.D.
WASTEWATER TECHNOLOGY CENTRE
ENVIRONMENT CANADA
ENVIRONMENTAL PROTECTION SERVICE
DIRECT ANALYSIS OF PHENOLS BY HPLC
DR. LESAGE: Thank you
for inviting me here. The little ship break this
morning reminded me much of the situation that we
have. Our offices are located on Lake Ontario
between Toronto and Niagara Falls, right on the
lake and beside a lift bridge, very similar to
the one that is here. Whenever my guests stop
listening to me, I'll turn around and inevitably
there's a ship going by. Fortunately, we've got
winter when we manage to work because then there
are no ships going through.
Before I start, I will acknowledge the people
I work with. The data that I will present today
was mostly produced by Mrs. Sharon Hay-Pole and
Mrs. Sandra Abbott. Mr. Ken Conn is the supervisor
of the lab and is the person who allowed me to do
this kind of work, and Mr. Fowlie is our quality
control chemist. Maybe I should say the data
519
-------
didn't quite pass his criteria, but I told him,
"It's organic analysis, you have to tie understand-
ing."
I work for Environment Canada, the Canadian
Federal Department of the Environment, more speci-
fically for the Environmental Protection Service
in the Wastewater Technology Center. We are a
small research facility, mostly staffed with en-
gineers, and our role is to study industrial dis-
charge problems and try to solve them by developing
appropriate control technology.
Our laboratory works in support of those
research programs, and also acts as a national
center for GCMS analysis of industrial wastes.
We've done all kinds of wastes—tanneries, textiles,
wood preservation, steel mills, coal conversion
wastewater, resin manufacturing, pulp and paper.
We're only one lab, so we do everything.
Whenever possible, we use the EPA methods.
To us it only makes good sense; why reinvent the
wheel? Well, of course, like everybody else, we
get the urge once in a while to try, and our
attempts at reinventing the wheel for phenol
analysis is what I will be describing now.
This program was started because we needed
520
-------
to turn out a lot of phenol results in coal conver-
sion wastewater. We have a GCMS, which we prefer
to use where we can. Unfortunately, we only have
one, and when you have several pilot plants running
continuously, we have to be able to provide data
relatively quickly, and the engineers with whom we
work are more used to the BOD, TOCs type of turn-
around time and they don't have the patience to
wait for six months for their samples to be analyzed
by GCMS. Of course, their needs are to run the pilot
plants, thus they require results much faster.
The situation we had here, was we were looking
at the effluent from a pilot plant, consisting of a
biologically active fluidized bed treating coal
conversion wastewater. As an aside, the coal
conversion wastewater used actually was imported
from the H-coal Pilot Plant in Catlettsburg,
Kentucky. We don't seem to have enough Canadian
problems, so we import some.
We looked at the alternate method 604, the GC
method for phenols, so we could turn out results a
little bit faster than with GCMS. Unfortunately,
coal conversion wastewater contains a lot of other
chemicals. Typically it's full of heterocyclic
nitrogenous compounds—aniline, quinoline, carbazole
521
-------
and the like. So we got a tremendously complex
chromatogram. The only way really we could handle
it was GCMS. The other problem we had with this
as with all GCMS analysis, is that typically with
this kind of matrix, our recoveries from phenol
ran from very poor to simply pathetic. That means
30 down to about 5 percent, if we found any. We
knew from the conventional phenol, 4-amino-
antipyrene method, that there was a substantial
amount of phenols in there, the only problem is that
the GCMS just couldn't find it.
SLIDE 1
So we turned ourselves to HPLC as a possible
substitute. The method we tried is in essence very
simple. We used a five micron ultrasphere ODS
column that can be purchased anywhere, and the
mobile phase is acetonitrile and water with
acetic acid and phosphoric acid as modifiers at
1.5 mL/min. We used a BAS amperometric detector
running at 1.2 V and 20 uA current. That is
really middle range for that particular detector,
but we find that although the label says it goes
down to .1, the actual life situation is different.
We also have to use a UV detector to follow the
response of nitrophenols.
522
-------
For PCP analysis, the percentage of aceto-
nitrile has to be increased to 60 percent, other-
wise it takes something like an hour and a half to
elute PCP with detection that would be very poor.
We find that usually it's a lot more practical to
do PCPs one day and the rest of them the other
day. In coal conversion, PCP was of no concern
whatsoever. We were actually looking more at
phenol, methylated phenols and dihydric phenols,
and when we do PCP, it's usually for wood preser-
vation plants where tetra and pentachlorophenol
are most important. It would be possible to do
all of priority pollutant phenols by solvent
programming, but the electrochemical detector does
not like any kind of programming whatsoever. The
response of the detector depends on the mobile phase,
on its electroconductivity. When you increase the
percent of acetonitrile, the baseline drifts down
and the detector does not recover fast enough to
enable you to do a lot of analysis in a day, so
it's easier to change mobile phase over night.
SLIDES 2&3
This is a typical standard chromatogram.
There's not really much to see looking at this
standard chromatogram, other than everything is
523
-------
nicely separated. There is response to dinitro-
phenol, and 2- and 4-nitrophenol on the electro-
chemical detector. The amount injected on column
here is representative of 100 to 500 micrograms per
liter solution. That represents 2 to 10 ng. injected
in a 20 microliter loop. We also followed the
analysis with the U.V. and visually checked the
presence of nitros from both detectors. You can
see that for 4-nitrophenol one actually is better
off using the electrochemical detector rather than
the U.V. detector. For the 2,4 nitrophenol and
4,6 dinitrocresol, the response is not sufficient
in the electrochemical detector. But as I'll
show you a little bit later, we have done some work
and it is possible to enhance the response by
addition of specific buffers. Unfortunately, it
doesn't really quite work.
SLIDE 4
These are the retention times of all the
compounds of interest. As you can see here, we can
complete analysis within 25 minutes. One thing I
might not have stressed enough here is that we're
doing direct analysis. We get a sample, filter it
through to a .45 micron filter and inject. That is
the end of sample preparation. This is not quite
524
-------
the type of work that we're used to. In a normal
organic lab, if you haven't done three extractions
and five cleanups, you haven't done analysis. But
it has the advantage of being able to do duplicates,
triplicates, spikes and all.
SLIDE 5
We analyzed the standards in mobile phase and
we did five replicate analyses. This is the typical
kind of recovery we got. Now, looking at 97.8
percent recovery for phenol, and really, I don't
know that anybody can honestly report more than 50
percent, when you're doing extractions, I got quite
excited, especially with really rather low standard
deviation, nothing above 10 percent. This is not
the kind of numbers I'm used to in the lab.
The nitrophenols and dinitrophenols can be done
either way. For the 2- and the 4-nitrophenols, we
usually get slightly better standard deviation using
the electrochemical rather than the U.V. detector.
SLIDE 6
To prove that this really worked, we used the
standards which are most generously provided by the
EPA for quality control. These standards are pro-
vided for quality control under method 604. That's
the GC method with or without derivatization. The
525
-------
first column is the true value of the standard.
These are the results obtained with Method 604
and provided with the standards. This is the kind
of number that we got in our method. Again, where
we got most excited was finding that with a true
value of 75 microgram per liter of phenol, we were
actually reporting 78 rather than 34. To us that
is most crucial. The other thing is that this
really isn't very much work. It's just basically
get the sample, filter and inject.
For the other phenols, we think that the results
either are equivalent or better than ones obtained
with Method 604, with the exception of PCP. PCP
at 70 micrograms per liter, that's below the
detection limit. Now, I must stress that is for
direct analysis. We could get better by extracting
the sample and injecting the extract, which is
obviously a very easy way to go. We had very little
interest in actually doing that because when we
have to do PCPs, we get samples that are high
rather than low. We get levels that would be
running around 50 or 60 milligrams per liter, so
dilution is more of a problem than actual detection
limits.
526
-------
SLIDE 7
Now, in this next slide we decided to try a
method in real life and to do so, we selected two
typical, or maybe not so typical, wastewaters
that we deal with. These are not nice wastewaters.
They were very yellow, fairly high solids content.
I don't have the exact TOC's, but it's fairly high.
The other reason we chose those is that we had GCMS
data to back up our analysis and we knew roughly
how much phenol was in there to start with. For
instance, the conversion wastewaters I'm talking
about here, by GCMS we were reporting a figure of
6 micrograms per liter. My supervisor got a bit
upset because the 4-AA method was reporting some-
thing to the order of 400 micrograms per liter in
total phenols, and he said well, what are these
other phenols? I said, I don't know. When we ran
this, phenol itself accounted for over 100 micro-
grams of the total and there were a substantial
number of methyl phenols and other analytes. So
this method usually gives a better correlation with
the total phenols.
These numbers here are not the measured values
but actually the percentage of recovery. The water
data is basically what was presented in the previous
527
-------
slide. Everything is in fairly good standing and
would pass except for some of the nitros. We all
know, of course, the problem with 2-nitrophenol.
I must say that we did not throw away any data
whatsoever. So, what happens here when you see
193 +/-111 on triplicate, we found it in two samples
and didn't find it in the other. At 150 micrograms
per liter with 2-nitrophenol, that's basically at
the detection limit, so it is not that surprising
that there is such a variation.
SLIDE 8
We repeated the experiment at twice the level to
see whether the method is linear. The percentage of
recovery in water again is very similar to what we
had obtained at the lower level and in the two
samples, everything is more or less above 50 percent
recovery except for the nitrophenols, and I think
that's one of the nightmares of the analytical
chemists.
OVERHEAD 1
Briefly I'll touch on the effect of adding
buffers, one interesting fact we found here. At
the top part here, that's the chromatogram that I
showed you in the slides previously (Slide 2).
That's our normal standardization chromatogram.
528
-------
Here, below, is a chromatogram we obtained where we
added .01M sodium acetate buffer to the solution.
If you look at the 4-nitrophenol response here
compared to the 4-nitrophenol here, you can see
that there's a substantial difference. The same
case with the 2-nitrophenol which is substantially
enhanced by the addition of buffer.
You will probably ask why on earth we didn't
use the buffer all the time. Well, the problem is
that not only is the 2 and the 4-nitrophenol
response enhanced, but so is the 2, 4-dinitrophenol
and the 4, 6-dinitrocresol. Their response is
enhanced and the retention time shifts. What we
think is happening in this case is that the species
we're actually measuring is not nitrophenol itself,
but an ion-pair.
OVERHEAD 2
There are the typical detection limits that
one gets, based on 20 microliters analyzed directly.
It isn't great for dinitrophenol, but neither is
the GC method. If there is a need for lower detec-
tion limits, I think one would have to go back
to the extraction with all the problems it entails.
It isn't as good as one would want, maybe, for
some clean effluents, but is usually sufficient in
529
-------
a real life situation for the monitoring of an
industrial discharge.
SLIDE 9
Finally, here are what we believe are the
advantages of the method. It has all of the advan-
tages of direct analysis. There is no sample
preparation. The method would be very easily
automated because there is no need for an operator.
It's low cost. We estimate that getting a single
pump with the BAS detector and a column would cost
you less than $10,000 U.S. For us, of course, it's
about $20K Canadian.
It is a very rapid method and the reproduci-
bility of standard deviation is usually better
than one would obtain having to extract samples,
especially very dirty samples. It's also fairly
free of interferences. The coal conversion waste-
water that we used, for instance, we really could
not use anything else but GCMS and this provided
us an alternative. So we believe this is ideally
suited for routine monitoring of effluents when
the samples are already characterized. Thank you.
MR. TELLIARD: Any
questions?
530
-------
ANALYTICAL CONDITIONS
COLUMN: 5 urn ULTRASPHERE ODS
MOBILE PHASE: CH3CN: WATER:ACETIC ACID:PHOSPHORIC ACID
40 : 59.7 : 0.1 : 0.2
FLOW RATE: 1.5 ML/MIN
DETECTOR A: BAS AMPEROMETRIC
1.2 VOLT
20 uA CURRENT
DETECTOR B: FIXED U.V. AT 254 nm
ATTENUATION: 0.005 A.U.
PCP: CHANGE TO 60% ACETONITRILE
531
-------
[Slide 2: chromatogram of standards by electrochemical detection, 2 to 10 ng
per component (20 uL loop); labelled peaks include 2-nitrophenol,
2,4-dimethylphenol, p-chloro-m-cresol, 2,4-dichlorophenol and
2,4,6-trichlorophenol.]
532
-------
[Slide 3: chromatogram of standards, 10 ng each, by U.V. detection at 254 nm;
labelled peaks include 2-nitrophenol, 2,4-dinitrophenol, 4-nitrophenol and
4,6-dinitrocresol.]
533
-------
COMPOUNDS OF INTEREST

COMPOUND                      RETENTION TIME, MIN
PHENOL                               3.77
4-NITROPHENOL                        4.46
2-CHLOROPHENOL                       5.87
2,4-DINITROPHENOL                    6.63
2-NITROPHENOL                        7.54
2,4-DIMETHYLPHENOL                   8.24
P-CHLORO-M-CRESOL                    9.56
2,4-DICHLOROPHENOL                  11.7
4,6-DINITROCRESOL                   14.6
2,4,6-TRICHLOROPHENOL               22.2
PENTACHLOROPHENOL                   13.3*

* USING 60% ACETONITRILE - 40% WATER WITH ACETIC ACID
534
-------
ANALYSIS OF STANDARDS IN MOBILE PHASE ON 5 REPLICATES

                            TRUE        MEAN       STD.      REL. STD.
COMPOUND                    VALUE ug/L    X        DEV.        DEV.

ECD
PHENOL                        100        97.8       2.6         2.7
4-NITROPHENOL                 500       451        23.1         5.1
2-CHLOROPHENOL                100        95.5       5.0         5.2
2-NITROPHENOL                 500       474        25.7         5.4
2,4-DIMETHYLPHENOL            250       238         9.2         3.9
P-CHLORO-M-CRESOL             250       244        13.9         5.7
2,4-DICHLOROPHENOL            250       246         6.3         2.6
2,4,6-TRICHLOROPHENOL         500       516        35.8         7.0

UV
4-NITROPHENOL                 500       427        31.9         7.5
2,4-DINITROPHENOL             500       534        28.2         5.3
2-NITROPHENOL                 500       619        86.0        13.9
4,6-DINITROCRESOL             500       351        20.6         5.9
535
-------
EPA STANDARD

                             TRUE        METHOD 604         OUR METHOD
PARAMETER                    VALUE       x        S          x

PHENOL                        75.0      34.1     11.1        78.1
2,4-DIMETHYLPHENOL           100        61.4     21.2        92.0
2-CHLOROPHENOL                75.0      60.3     13.6        73.8
4-CHLORO-3-METHYLPHENOL      125       106       19.0       125
2,4-DICHLOROPHENOL           100        79.6     17.0        99.4
2,4,6-TRICHLOROPHENOL        150       124       20.4       170
PENTACHLOROPHENOL             70.0      60.1     13.9        --
2-NITROPHENOL                150       120       22.8       176
4-NITROPHENOL                160        73.0     31.2       118
2,4-DINITROPHENOL            250       205       67.8       221
536
-------
RECOVERY OF SPIKED EPA STD
HIGH LEVEL

                            TRUE VALUE     WATER        SAMPLE A      SAMPLE B
COMPOUND                       ug/L      % +/- S.D.    % +/- S.D.    % +/- S.D.

ECD
PHENOL                         150        94±1.0       158±4.0       109±0.9
4-NITROPHENOL                  320        84±1.7        40±44        124±17
2-CHLOROPHENOL                 150        89±3.3        60±3.3       103±2.0
2-NITROPHENOL                  300       107±12         62±0.5        71±1.2
2,4-DIMETHYLPHENOL             200        99±10         85±2.0        82±6.2
P-CHLORO-M-CRESOL              250       107±18         86±2.7        89±18
2,4-DICHLOROPHENOL             200       130±35        100±2.9       119±27
2,4,6-TRICHLOROPHENOL          300       144±17         69±7.5       110±20

UV
2,4-DINITROPHENOL              500        86±36         93±3.5        92±0.7
4,6-DINITROCRESOL                     NOT PRESENT IN STD.
537
-------
RECOVERY OF SPIKED EPA STD
USED FOR METHOD 604

                            TRUE VALUE     WATER        SAMPLE A      SAMPLE B
COMPOUND                       ug/L      % +/- S.D.    % +/- S.D.    % +/- S.D.

ECD
PHENOL                          75       104±.3        155±14        221±192
4-NITROPHENOL                  160        74±33         12±11         84±13
2-CHLOROPHENOL                  75        86±9.1        39±34        191±17
2-NITROPHENOL                  150       117±27        193±111        86±2.2
2,4-DIMETHYLPHENOL             100        92±16         92±6.5        91±2.9
P-CHLORO-M-CRESOL              125        82±13         86±2.6       101±4.9
2,4-DICHLOROPHENOL             100        86±45        110±4.5        79±34
2,4,6-TRICHLOROPHENOL          150       119±63        118±61         90±14

UV
2,4-DINITROPHENOL              250        89±8.2        61±33
4,6-DINITROCRESOL                     NOT PRESENT IN STD.

SAMPLE A: COAL CONVERSION WASTEWATER
SAMPLE B: EFFLUENT FROM A WOOD PRESERVING PLANT
538
-------
[Overhead 1: chromatograms of standards by electrochemical detection, run with
and without 0.01M sodium acetate buffer; labelled peaks: phenol, nitrophenol,
2-chlorophenol, 2-nitrophenol, 2,4-dimethylphenol, p-chloro-m-cresol,
2,4-dichlorophenol and 2,4,6-trichlorophenol.]
539
-------
DETECTION LIMIT
Based on 20 ul analyzed directly

                               ug/L
Phenol                           10
4-Nitrophenol                   200
2-Chlorophenol                   20
2-Nitrophenol                   100
2,4-Dimethylphenol               25
p-Cl-m-cresol                    25
2,4-Dichlorophenol               25
2,4,6-Trichlorophenol            50
Pentachlorophenol               150

U.V.
4-Nitrophenol                   300
2-Nitrophenol                   300
4,6-Dinitrophenol               300
540
-------
ADVANTAGES OF THE METHOD
DIRECT ANALYSIS
NO SAMPLE PREPARATION
EASILY AUTOMATED
LOW COST
RAPID
IMPROVED REPRODUCIBILITY
IDEALLY SUITED FOR ROUTINE MONITORING OF EFFLUENTS
-------
QUESTION AND ANSWER SESSION
MR. STANKO: George
Stanko, Shell Development. I have one question.
Can you explain the difference in your procedure
and Method 604 with respect to why the 604 method
only has approximately 50 percent recovery of
phenol itself and yours gets just about 100 percent
recovery? Do you have an explanation or a reason?
DR. LESAGE: We do not
extract. That's why. Method 604, I believe, is a
methylene chloride extraction, and typically phenol
doesn't seem to extract out of water. In this
case, we just take a sample and inject it directly.
So these recoveries are not really recoveries. The
.it i1'. ']h r
measurement is a direct measurement.
MR. STANKO: Then it's
really nothing in the chromatography? In other
words, there's nothing unique to your column or
your chromatographic system, it is all in the
extraction versus no extraction?
DR. LESAGE: I would tend
to believe so, yet the chromatography on the GC
column would obviously be better to get more
theoretical plates than one gets with HPLC. The
542
-------
detector is also more specific, so that there are
fewer interference problems.
MR. McMAHON: Wayne
McMahon, Martin Marietta. Did you do any comparison
studies to compare the data from the 4-AAP method
to your HPLC data to see how well there was cor-
relation between the two methods?
DR. LESAGE: We always
compared the data. One of the problems with com-
parison with the 4-AAP method is that of course,
it is for total phenols, whichever ones are present,
the color is the total. To be able to compare, it
is necessary to know which species are present and
what is their respective response for the 4-amino-
antipyrine because as you know, that method is
based on total phenols as phenol. The color response
from other phenols is not equivalent. It would be
something like 40 to 80 percent depending on the
phenol. Anything that is substituted in the para
position will not respond to the 4-aminoantipyrine.
So when we correct for that and in samples that are
not typically too bad, that is they don't have a
lot of other interferences, the correlation is
really quite good. Typically, we can usually
account for about 80 percent of the total phenols.
543
-------
There was a case where there was a lot of penta-
chlorophenol in the sample where the correlation
fell through completely.
Now, when we try to do the same correlation
with GCMS, there is usually absolutely no correlation,
it never makes any sense. So in this case, it's a
little bit better—quite a bit better, actually. At
least the curves follow the same slopes.
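A sketch of that correction, with placeholder concentrations and response factors (only the statements above, that responses run roughly 40 to 80 percent and that para-substituted phenols do not respond, are taken from the talk):

    # Predict the 4-aminoantipyrine "total phenols" result from speciated HPLC data
    # by weighting each phenol with its colour response relative to phenol itself.
    hplc_ug_per_l = {"phenol": 110.0, "2,4-dimethylphenol": 60.0, "4-nitrophenol": 30.0}
    aap_response  = {"phenol": 1.0,   "2,4-dimethylphenol": 0.6,  "4-nitrophenol": 0.0}
    # 4-nitrophenol is para-substituted, so its assumed response factor is zero.

    predicted_4aap = sum(c * aap_response[name] for name, c in hplc_ug_per_l.items())
    print(f"predicted 4-AAP total: {predicted_4aap:.0f} ug/L as phenol")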
MR. EDELMAN: Dave
Edelman, Crown Zellerbach. When you do your
percent recoveries on your real world effluent
samples, how do you know you had positive inter-
ferences beforehand when you make your corrections
for what's initially in there before you spike the
samples?
DR. LESAGE: We did first
an analysis of the sample as is, then spiked at the
two levels. I just reported here the percent of
recovery, not the amount that was found. There
were some phenols that were not found at all and
typically, phenol itself was running about 100
micrograms a liter as well as a few others that
were present in there, and we corrected to
percentage recovery.
544
-------
MR. EDELMAN: Right. I
understand that, but how do you know you don't have
positive interferences initially that peak your
chlorophenol before you spike it, as phenol,
not something else, but one of the other peaks
is...
DR. LESAGE: Yes, that's
what I said earlier. A method like this has the
same problem as Method 604. For instance, it's
like any GC or HPLC method. You have to be able
to characterize your sample by an alternate method
and check for those kind of possible interferences.
We usually do not have very much problem with that
and we have run very, very dirty samples. There is
usually very few peaks in the electrochemical
chromatogram.
One thing is that, basically because you're
running a very specific buffer, something like
aniline will not be in the same ionization state.
It will be a positive ion in the solution, and
be easily separated so you don't have a problem.
It is more specific because you only elute in a
certain window. So it's not completely free of
interferences, I'm not going to say that, but if
545
-------
you have something that you have characterized, you
can fairly safely use it.
MR. TELLIARD: I'd like
to thank Suzanne and Peggy and Jack for their
presentations this morning. I'd also like to thank
Dale Rushneck and Todd Fielding for working so much
in getting this thing together, and, of course,
Whitescarver Associates and County Court Reporters, Inc.
for their work here.
This ends the eighth. We hope to see you for
the ninth. Until we figure out where else we're
going to have it, we might assume it's going to
be here, unless somebody comes up with a real
winner. It's rough to follow up on a battleship,
but we'll try to figure out something. Thank you
very much for coming; enjoyed having you here.
546
-------
ROSTER OF ATTENDEES
547
-------
J. H. Alexander
Supv. Chemist
Norfolk Naval Shipyard
Code 134.2, Bldg. 184
Portsmouth VA 23709
804-396-3373
B. A. Allen
Manager of Environmental Services
Dow Chemical U.S.A.
Building B-1226
Freeport TX 77541
409-238-5208
Sue Apter
United States Testing Co., Inc.
1415 Park Avenue
Hoboken NJ 07030
201-792-2400
Gerald Babski
Environmental Chemist
Westchester County Dept. of Labs.
Hammond House Road
Valhalla NY 10595
914-347-3155
John M. Ballard, Ph. D.
Lockheed EMSCO
944 E. Harmon Ave.
Las Vegas NV 89114
702-798-2230
Devereaux Barnes
Deputy Dir., Ind. Tech. Division
USEPA - Washington, DC
401 M Street, SW,
Washington DC 20460
202-382-7120
Robert C. Barrick
Senior Chemist
Tetra Tech, Inc.
11820 Northup Way, NE, Suite 100
Bellevue WA 98005
206-822-9596
548
-------
Richard F. Browner
School of Chemistry
Georgia Institute of Technology
Atlanta GA 30332
404-894-4002
Stephen L. Bugbee
OWEP
USEPA - Washington, DC
401 M Street, SW, (EN-336)
Washington DC 20460
202-382-5596
Keith L. Cherryholmes, Ph.D.
Coordinator of Environ. Studies
Hyg. Lab., The University of Iowa
Oakdale Campus
Iowa City IA 52242
319-353-5990
Bruce N. Colby
Manager, Chemistry
S-CUBED
Box 1620
La Jolla CA 92038
619-453-0060
Carol Colclough
IT Corporation
312 Directors Drive
Knoxville TN 37923
615-690-3211
Donald A. Cooper
Technician
E. I. du Pont de Nemours & Co., Inc.
Engineering Test Center
Wilmington DE 19898-7104
302-366-4674
Thomas F. Cullen, Jr.
Div. Manager, Analytical Services
Environmental Research Group, Inc.
117 North First Street
Ann Arbor MI 48104
313-662-3104
549
-------
Mary Lou Daniel
Laboratory Director
S. Florida Water Management Dist.
P. O. Box V, 3301 Gun Club Road
West Palm Beach FL 33402
305-686-8800
Susan de Nagy
Project Officer, Ind. Tech. Div.
USEPA - Washington, DC
401 M Street, SW,
-------
Mitchell D. Erickson, Ph.D.
Principal Chemist
Midwest Research Institute
425 Volker Boulevard
Kansas City MO 64110
816-753-7600
Barrett P. Eynon
Statistician
SRI International
333 Ravenswood Avenue
Menlo Park CA 94025
415-859-5239
Paul Farrow
VG Masslab
Tudor Road
Altrincham, England WA14 5RZ
44 61 969 9222
Thomas E. Fielding
Chemist
USEPA - Washington, DC
401 M Street, SW,
-------
Gail S. Goldberg
Permits Division
USEPA - Washington, DC
401 M Street, SW,
-------
Earl M. Hansen
Laboratory Manager
Weston
256 Welsh Pool Road
Lionville PA 19353
215-525-0180
J. Ronald Hass, Ph.D.
President
Triangle Laboratories Inc.
P. O. Box 13485
Research Triangle Park NC 27709
919-544-5729
John C. Hendricks
Chemist
American Electric Power Serv. Corp.
1 Riverside Plaza
Columbus OH 43215
614-223-1238
Frank H. Hund
Industrial Technology Division
USEPA - Washington, DC
401 M Street, SW,
-------
Sharon H. Kneiss
Chemical Manufacturers Association
2501 M Street
Washington DC 20037
202-887-1180
Margaret M. Knight, Ph.D.
Weyerhaeuser Company
WTC 2B25
Tacoma WA 98477
206-924-6002
William G. Krochta
Manager, Analytical Research
PPG Industries
Box 31
Barberton OH 44203
216-848-4161 Ext. 505
Marcia Kuehl
Donohue and Associates
4738 N 40th St.
Sheboygan WI 53081
414-458-8711
Barbara A. Larka
Mass Spectroscopist
Twin City Testing & Engineering Co.
662 Cromwell Ave.
St. Paul MN 55114
612-645-3601
Suzanne Lesage, Ph.D.
Organic Chemist
EPS/Wastewater Technology Center
867 Lakeshore Rd., P. O. Box 5050
Burlington, Ontario, Canada L7R 4A6
416-637-4504
Denis C.K. Lin, Ph.D.
Envr. Testing and Certification
284 Raritan Center Parkway
Edison NJ 08837
201-225-6707
551
-------
John M. McGuire
USEPA - Region IV
College Station Road
Athens GA 30613
404-546-3185
L. Wayne McMahon
Prog. Mgr. - Environmental Analysis
Martin Marietta Energy Syst. - ORGDP
P. O. Box P, MS 443
Oak Ridge TN 37922
615-574-9701
Ronald A. Michaud
Principal Chemist
Connecticut Health Laboratory
P. O. Box 1689
Hartford CT 06101
203-566-3802
Raymond F. Mindrup
Market Development Specialist
SUPELCO, Inc.
Supelco Park
Bellefonte PA 16823
814-359-3441
Lee Myers
CompuChem Laboratories
P. O. Box 12652
Research Triangle Park NC 27709
919-549-8263
Robert Nicholson
Laboratory Manager
Cal Lab East
P. O. Box 11106
Richmond VA 23230
804-359-1900
Teresa Norberg-King
Env. Research Laboratory/ORD
USEPA - Region V
6201 Congdon Boulevard
Duluth MN 55804
218-727-6692 Ext. 528
555
-------
John D. Pfaff
Monitoring and Support Lab., ORD
USEPA - Region V
26 W St. Clair St.
Cincinnati OH 45268
513-684-7372
Joseph A. Poland
Senior Chemist
CT State Dept. of Health Services
10 Clinton St.
Hartford CT 06106
203-566-2787
Richard Posner
Vice President - Metals Division
United States Testing Company, Inc.
1415 Park Avenue
Hoboken NJ 07030
201-792-2400 Ext. 320
David H. Powell
Manager GC/MS
Environmental Science & Engineering
P. O. Box ESE
Gainesville FL 32602
904-332-3318
William B. Prescott
Consultant
724 Hawthorne Avenue
Bound Brook NJ 08805
201-469-1198
James K. Rice
James K. Rice Chartered
17415 Satchellors Forest Rd.
Olney MD 20832
301-774-2210
Russ Roegner
USEPA - Washington, DC
401 M Street, SW,
Washington DC 20460
202-382-5410
556
-------
Richard Ronan
Vice President
VERSAR, INC.
6850 Versar Center
Springfield VA 22151
703-750-3000
Ann E. Rosecrance
Laboratory Director
JTC Environmental Consultants, Inc.
4 Research Place
Rockville MD 20850
301-921-9790
Dale R. Rushneck
Interface, Inc.
P. O. Box 297
Ft. Collins CO 80522-0297
303-223-2013
Sam R. Sax
Union Camp Corporation
Technical Center
Franklin VA 23851
804-569-4614
Robert B. Schaffer
Vice President
CENTEC Corporation
11260 Roger Bacon Drive
Reston VA 22090-5281
703-471-6300
William Schnute
Environmental Marketing Manager
Finnigan MAT
355 River Oaks Parkway
San Jose CA 95134
408-946-4848
Noel Schwartz
Vice President
United States Testing Company, Inc.
1415 Park Avenue
Hoboken NJ 07030
201-792-2400
557
-------
Judy Scott
TRW
One Space Park 01/2030
Redondo Beach CA 90278
213-536-2451
Walter M. Shackelford
ERL/ORD
USEPA - Region IV
College Station Road
Athens GA 30613
404-546-3186
John C. Sheats
Head of Envir. Sciences Lab.
NC State Lab. of Public Health
P. O. Box 28047
Raleigh NC 27611
919-733-7308
Nannette Simon
Chemist
Occidental Chemical
Long Road
Grand Island NY 14072
716-773-8655
James S. Smith
Director Weston Analytics
Roy F. Weston, Inc.
Weston Way
West Chester PA 19380
215-692-3030
George H. Stanko
Staff Research Chemist
Shell Development Company
P. O. Box 1380
Houston TX 77001
713-493-7702
Edward Stigall
Chief, Inorganic Chem. & Ser. Br.
USEPA - Washington, DC
401 M Street, SW, (WH-552)
Washington DC 20460
202-382-7124
558
-------
Paul Taylor
California Analytical Labs., Inc.
2544 Industry Blvd.
W. Sacramento CA 95691
916-372-1393
William A. Telliard
Chief, E & M Ind. Branch
USEPA - Washington, DC
401 M Street, SW,
Washington DC 20460
202-382-7131
Kurt Theurer
Allied Corporation
Box 1021 R-CRL
Morristown NJ 07960
201-455-2141
Samuel To
Quality Assurance Officer
USEPA - Washington, DC
401 M Street, SW, (EN-338)
Washington DC 20460
202-475-8319
David F. Tompkins
Director of Analytical Services
CENTEC Corporation
2160 Industrial Drive
Salem VA 24153
703-387-3995
Allan Tordini
Asst. Vice President - Metals Div.
United States Testing Company, Inc.
1415 Park Avenue
Hoboken NJ 07030
201-792-2400 Ext.
Victor Turoski
Supervisor of Analytical Laboratory
James River Corporation
P. O. Box 899
Neenah WI 54956
414-729-8064
559
-------
Alexandra Uleeka
Chemist
Rahway Valley Sewerage Authority
Foot of East Hazelwood Avenue
Rahway NJ 07065
201-388-0033
Marvin Vestal
Chemistry Department
University of Houston
Houston TX 77004
713-749-2675
Joseph F. Viar, Jr.
President
Viar and Company, Inc.
300 North Lee St., Suite 200
Alexandria VA 22314
703-683-0885
Jill Vollmerhausen
Martin Marietta Environmental Syst.
9200 Rumsey Road
Columbia MD 21045
301-964-9200
Robert D. Voyksner, Ph.D.
Research Triangle Institute
P. O. Box 12194
Research Triangle Park NC 27709
919-541-6697
Dr. Dallas Wait
Laboratory Director
ERCO/Division of ENSECO
205 Alewife Brook Parkway
Cambridge MA 02138
617-661-3111
Tonie Wallace
President
County Court Reporters, Inc.
27 E. Loudoun Street
Leesburg VA 22075
703-777-8645
560
-------
Bruce Wallin
Tech. Dir., Environmental Lab. Svcs.
E. C. Jordan Company
92 Oak Street
Portland ME 04101
202-775-5401 Ext. 467
John P. Whitescarver
Presi dent
Whitescarver Associates, Inc.
P. O. Box 17088, IAD
Washington DC 20041
703-661-8800
Stuart Whitlock
Environmental Science & Engineering
P. O. Box ESE
Gainesville FL 32602
904-332-3318
Bruce E. Wilkes
Project Scientist
Union Carbide Corporation
P. O. Box 8361, Technical Center
South Charleston WV 25303
304-747-4463
Ed Winstead
Chemist
The Bionetics Corporation
20 Research Drive
Hampton VA 23666
804-865-2686
Hugh E. Wise
Industrial Technology Division
USEPA - Washington, DC
401 M Street, SW,
-------
Lauren Yelle
Mass Spectrometrist
Arthur D. Little, Inc.
^5 Acorn Park
Cambridge MA 02140
617-864-5770 Ext. 2586
James C. Young, Ph.D., P.E.
Professor and Head, Dept. Civil Eng.
University of Arkansas
340 Engineering Bldg.
Fayetteville AR 72701
501-575-4954
Jerry Zweigenbaum
Eastman Kodak Company
B-34 Kodak Park
Rochester NY 14650
716-722-3205
562
-------