-------
UNITED STATES ENVIRONMENTAL PROTECTION AGENCY
OFFICE OF WATER
OFFICE OF SCIENCE AND TECHNOLOGY
ENGINEERING AND ANALYSIS DIVISION
SIXTEENTH ANNUAL EPA CONFERENCE ON
ANALYSIS OF POLLUTANTS IN THE ENVIRONMENT
MAY 5-6, 1993
-------
FOREWORD
The Sixteenth Annual EPA Conference on Analysis of Pollutants in the
Environment was held at the Norfolk Marriott Waterside Hotel in Norfolk, VA on May
5th and 6th, 1993. The Conference was attended by over 300 scientists from regulated
industry, commercial environmental laboratories, state and Federal regulators, and
environmental consultants and contractors. The Conference provided the attendees with
the opportunity to discuss all aspects of environmental analytical chemistry with a
particular focus on analytical methods and related regulatory issues.
These proceedings document 23 technical and policy presentations on the
following subjects: herbicide, dioxin, and PCB analysis, detection levels and laboratory
accreditation, metals and organo-metallics, radiochemistry and drilling muds, unusual
matrices, matrix interferences and sample collection, performance-based methods, and
pollutants in soil.
We would like to thank Jan Sears Kourmadas of Ogden Environmental for
coordinating the conference, Dale Rushneck of Interface, Inc. and Cindy Simbanin of
DynCorp Viar, Inc. for their assistance in arranging the technical program, the speakers
for their outstanding efforts, and all the others who helped make the Sixteenth Annual
Conference a success. We are looking forward to your attendance at the Seventeenth
Annual EPA Conference in May of 1994.
W. A. Telliard
-------
SIXTEENTH ANNUAL EPA CONFERENCE ON
ANALYSIS OF POLLUTANTS IN THE ENVIRONMENT
USEPA, Office of Water
Office of Science and Technology
Engineering and Analysis Division
May 5-6, 1993
Norfolk, Virginia
TABLE OF CONTENTS - 1
Wednesday, May 5, 1993
Welcome and Introduction
William A. Telliard
Engineering and Analysis Division
Office of Science and Technology
USEPA, Office of Water
Herbicides. Dioxin and PCBs
Development of GC/ECD Analysis Method for Herbicide
Acids and Pentachlorophenol in House Dust 3
Marielle Brinkman
Battelle
Results of the Interlaboratory Validation Study of
USEPA Method 1613 for the Analysis of Tetra- through
Octachlorinated Dioxins and Furans by Isotope
Dilution GC/MS 49
Dr. Harry B. McCarty
Science Applications International Corporation
Determination of Coplanar and Total Homolog PCBs
by HRGC/HRMS 87
E. A. Marti
Triangle Laboratories of RTP, Inc.
Detection Levels
Compliance Monitoring Detection and Quantitation Levels 125
James K. Rice, Consultant
-------
TABLE OF CONTENTS - 2
Metallo-organic Compounds
Combination of SFE with Capillary GC and Atomic Emission
Detection for the Determination of Organotin Compounds
in Environmental Samples 161
Yan Liu
Midwest Research Institute
Metals
Determination of Metals in Water by Ultrasonic
Nebulization, Inductively Coupled Plasma/Atomic
Emission Spectroscopy 191
Theodore D. Martin
Environmental Monitoring Systems Laboratory
US EPA, Office of Research and Development
Ultra-Clean Sampling, Storage, and Analytical
Strategies for Accurate Determination of Trace
Metals in Natural Waters 211
Nicolas S. Bloom
Frontier Geosciences, Inc.
Trace Metals Analysis - Clean and Ultra-Clean
Techniques 237
Bob April
Ecological Risk Assessment Branch
US EPA, Office of Water, Office of Science and Technology
Radiochemistry and Drilling Muds
Minimization of Production Costs and Waste Generation in
Radiochemical Analysis 247
David L. Demorest
Core Laboratories, A Division of Western Atlas International
Determination of Radium-226 and Radium-228 in Naturally
Occurring Radioactive Material (NORM) Solids by
High Purity Germanium Gamma Spectroscopy 277
Richard Rivera
Shell Development Center
-------
TABLE OF CONTENTS - 3
Radiochemistry and Drilling Muds
Methods for the Determination of Diesel, Mineral, and Crude Oils
in Drilling Muds from Offshore Drilling Operations 307
Joseph C. Raia, Consultant
Thursday, May 6, 1993
Unusual Matrices
Stormwater Sampling and Analysis 335
G. H. Stanko
Shell Development Company
Extracts of National Sewage Sludge Samples 395
John M. McGuire
Environmental Research Laboratory
US EPA, Office of Environmental Processes and Effects Research
Pollutants in Soil
Determination of Selected Organochlorine Pesticides in Soils 415
Ileana Rhodes, Ph.D.
Shell Development Company
Matrix Interferences and Sample Collection
Elimination of Matrix Interferences in Solid Phase Microextraction 487
Zhouyao Zhang
University of Waterloo, Department of Chemistry
A Generalized Approach to Solving Matrix Problems 521
Bruce N. Colby
Pacific Analytical
Anion Exchange Resins for Collection of Phenols in Air and Water 567
Hazel Burkholder
Battelle
-------
TABLE OF CONTENTS - 4
Performance-Based Methods
Performance-Based Methods: An Opportunity for Basic Science 605
James M. Conlon
Drinking Water Standards Division
US EPA, Office of Water, Office of Ground Water and Drinking Water
Environment Canada's Approach to Performance-Based Methods 641
Richard Turle
Environment Canada, Chemistry Division
Laboratory Accreditation
The AIHA Environmental Lead Laboratory Accreditation Program 667
Kenneth T. White
Consultive Services
Pollutants in Soil
An Integrated Application of Field Screening To Environmental
Site Investigations: A Case Study 699
Alex Tracy
Woodward-Clyde Federal Services
Unusual Matrices
QA/QC Guidance for Dredged Material Evaluations 741
Mike Kravitz
US EPA, Office of Water, Office of Science and Technology
Closing Remarks 749
List of Speakers 751
List of Attendees 753
-------
PROCEEDINGS
May 5, 1993
MR. TELLIARD: Good morning. I would like to
welcome you to the 16th annual meeting on the measurement of pollutants in the environment.
This meeting is sponsored by the Environmental Protection Agency Office of Water.
We would like to maintain an open forum, and want you to express your opinions.
We ask you to do one thing: when you come to the microphone, please identify yourself by
name and organization.
If, for some reason, you don't want your statement kept in the proceedings, just
state that, and we will make sure that it doesn't appear. Otherwise, it is a free-flowing system,
and we want you to feel free to ask questions, express opinions and, all in all, try to feel relaxed
at this meeting.
So, with that housekeeping information, I would like to start the program. Our
first speaker this morning is Marielle Brinkman, and she is going to be talking on herbicides and
pentachlorophenol in house dust. This was an interesting paper when we first heard about it, and
I think you are going to enjoy it.
-------
MS. BRINKMAN: Good morning. Today I am
going to be talking about pesticides. More specifically, I will be talking about acid herbicides,
the types of post-emergent herbicides most often applied to residential lawns.
I will discuss one way these acid herbicides migrate indoors through a
phenomenon that we call "track-in." When we enter our homes wearing the same pair of shoes
with which we just walked across our pesticide-treated lawn, do we track in these pesticides?
In our effort to document and then quantify human track-in, we encountered some
challenging sample matrices.
These statistics were reported in the U.S. EPA non-occupational pesticides
exposure study: 90 percent of the people polled in that study used pesticides. Only half of those
90 percent read the instructions prior to use, and only 10 percent responded that they used these
pesticides with caution.
The National Academy of Sciences, in their report entitled, "Urban Pest
Management," indicated that the lawn care industry has grown to a $2.8 billion per year industry
and that homeowners apply pesticides at a rate four to ten times greater than farms.
Battelle conducted a telephone survey of randomly selected lawn-care companies.
These companies were geographically well-distributed within the greater Columbus, Ohio area
and represented both nationally- and locally-owned companies.
Of the 21 lawn-care companies polled, 11 responded to our questions. As you can
see, dicamba, mecoprop, and 2,4-D were the most popular post-emergent herbicides applied.
We also surveyed 7 retailers in the greater Columbus area. The 6 that responded
represented a variety of store types, including a nursery, a garden center, hardware store, discount
department store, and a hardware/department store hybrid. Again, 2,4-D, mecoprop, and dicamba
were the most popular post-emergent herbicides stocked by these retailers.
What is the impact of this use on our indoor environment? The primary issue is
whether or not these pesticides are tracked into the home from treated turf. Then, are
dislodgeable residues, or what gets picked up off of the treated lawn, related to track-in? Do the
amounts of dislodgeable residues change with time? What is the effect of environmental
weathering? Do entryway mats, or the welcome mats that we have in front of our front doors,
limit track-in?
To begin to assess the human exposure in the indoor environment, we need to
simulate track-in under controlled conditions at various times after application and then develop
analytical methods in order to measure the transfer of these pesticides.
-------
For this study, a commercially available mixture of three herbicide acids was
applied to the turf. The structures are shown along with their measured application rate.
A section of residential turf that had not been treated with lawn chemicals in at
least ten years was used. 20x20-foot plots were roped off with approximately 10-foot borders
separating the plots. The application was made at time zero, and a simulation of track-in into
the home was conducted at the times shown here for each 20x20-foot plot, 4 hours, 8 hours, 1
day, 4 days, and 6 days.
The study also included a field blank, shown in the lower left of the slide. At the
conclusion of the study, this plot actually became an evaluation of spray drift, since our field
blank levels were 3 percent of our sprayed plot measurements.
Collection of dislodgeable turf residues was done using a mechanical device, which
I will refer to as the "PUF roller", at 11 different times between 4 hours and 14 days after
application. This was done on the plot marked, "PUF 4 hour-14 day." There was no human
traffic on this plot.
In the upper right is a weather control plot which was used to assess the effect of
a minor amount of rain that fell in the evening following application. The lawn plot in the lower
left was used to assess herbicide stability in the carpet after track-in.
We will focus now on a single track-in experiment. The treated turf is shown in
light green. The area reserved for collection of turf dislodgeable residues with the PUF roller
is shown as it was marked off in the middle of the plot. This area received no human traffic,
and collection of this sample occurred prior to human subjects walking on the plot.
Two platforms were placed just beyond the perimeter of the treated turf. Both had
standard nylon residential carpeting, and one included a standard home entryway mat.
The black arrows indicate the path taken by a human subject during one cycle of
walking to simulate track-in. Starting at the white "start here" box, the subject would walk
across the left side of the track-in plot, step onto the walk-off mat, and wipe each foot once,
proceed along across the carpeting, turning right, circling back around, and then proceeding to
walk across the right half of the treated lawn plot, then walking directly onto the carpeting,
circling back around to the left to complete one cycle.
A complete track-in experiment included 100 of these cycles. Each experiment
had five participants. Thus, each participant walked 20 cycles.
The counters at the end recorded the number of footfalls in the lawn plot and
directed participants to a specific quadrant of the carpeting.
-------
Now we are looking at the platform with carpeting at the top and the entryway mat
at the bottom. The carpeting was divided into four equal areas, as shown. This was done so that
dislodgeable residues collected with the PUF roller could be collected from half of the carpet,
and dust- or dirt-bound residues collected using the high volume surface sampler vacuum, which
is called the HVS3, could be collected from the other half of the carpet.
We developed a track-in sequence to ensure that each participant walked the same
number of times in each area to avoid track-in bias from participants' weight, shoe size, and
stride.
The samples collected for each track-in experiment included dislodgeable residues
from the turf, which resulted in a PUF sleeve to be extracted. Dust- or dirt-bound residues from
the carpets, one carpet with the home entryway mat preceding it and one with no mat, were collected
with the HVS3 and resulted in two containers of dust to be weighed and extracted.
Dislodgeable residues from both carpets also resulted in a set of PUF sleeves
to be extracted.
The PUF roller is shown here. A removable polyurethane foam sleeve was
cleaned and then mounted at the front prior to sampling. After sampling, the PUF sleeve was
placed in an extraction bag.
This slide shows the PUF roller collecting turf dislodgeables on the lawn plot that
was reserved for PUF roller measurements only. No track-in by human subjects was done on
this lawn. You can see that the lanes are marked off.
This slide shows the collection of dust or dirt-bound residues using the HVS3.
The dust or dirt sample container is white and is connected to the HVS3 cyclone in the middle
of the slide.
This slide shows our human subjects simulating track-in. A new pair of shoes and
safety apparel were provided to each participant prior to every track-in experiment.
And this slide shows the area of a track-in lawn plot reserved for the PUF roller.
It is roped off down the middle of the plot.
Our analytical method development effort was directed toward not only the applied
herbicide acids but other herbicides, including MCPA, 2,4,5-T, 2,4,5-TP, and pentachlorophenol.
These methods were developed not only for use in this program but also for other programs in
which we are analyzing residential dust samples for lawn-applied pesticides.
Pentachlorophenol has been used extensively in the past as an insecticide,
fungicide, and herbicide; and, notably, as a wood preservative. Although pentachlorophenol is
-------
less acidic than the herbicide acids, we felt that it would be feasible to develop a single analysis
scheme for both herbicide acids and pentachlorophenol.
The development of analysis methods for herbicide acids, particularly in house
dust, requires the following considerations: First, co-extractable neutral and acidic species must
be removed through clean-up steps.
Second, detection levels for this particular study needed to cover both high and
low analyte levels.
Third, for purposes of quality assurance and quality control, appropriate recovery
standards and internal standards must be chosen.
The three analysis approaches considered are shown here: first, GC/MS with little
or no sample clean-up, perhaps the most costly way to go; second, GC with minimal clean-up,
derivatization, and two-dimensional chromatography; or GC with clean-up using either dual
derivatization and/or dual GC column analysis and/or dual GC detection for confirmation of
peaks.
Derivatization is essential for these analytes to ensure accurate detection and
quantification by GC techniques. The options considered included: methylation with
diazomethane; pentafluorobenzylbromide derivatization; or derivatization with 2-cyanoethyl-
dimethyl-diethyl-aminosilane, the latter derivative being detectible by both ECD and NPD
detectors.
Our analytical scheme shown here is scaled in volume to accommodate extraction
of either dust, soil, or PUF sleeves. The recovery standard, 3,4-D, is spiked to the matrix prior
to extraction with an acetonitrile:phosphate buffer.
Soil and dust samples are sonified, PUF samples are squeezed manually in the
transport bag.
Partitioning of the extract at high pH removes neutral compounds. Solid phase
extraction with a C18 cartridge removes acidic and humic acid interferences.
After addition of the internal standard, the extract is split for multiple analyses,
with PFBBr derivatization possible for samples not detectible with methylation.
A comparison of the general sensitivities that can be achieved with methylation
and PFBBr are shown here for representative herbicide acids and pentachlorophenol. Note in
particular that for monochlorinated herbicide acids such as mecoprop and MCPA, a 100-fold
increase in concentration over the concentration of 2,4-D, a dichlorinated herbicide acid, still
produces a signal only one-tenth as large. This indicates a differential response factor of 1000.
-------
However, with PFBBr derivatization, the signals for 50 pg of mecoprop, MCPA,
and 2,4-D are approximately equal. Relative to methylation, PFBBr derivatization provides a
40-fold increase in sensitivity for 2,4-D and a 20,000-fold increase in sensitivity for mecoprop.
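To make the arithmetic behind these comparisons explicit, the differential response factor of 1000 quoted above follows from the ratio of signal per unit concentration (the symbols S for peak signal and C for injected concentration are introduced here for illustration, not taken from the slide):

$$\frac{RF_{2,4\text{-}D}}{RF_{\text{mecoprop}}} = \frac{S_{2,4\text{-}D}/C_{2,4\text{-}D}}{S_{\text{mecoprop}}/C_{\text{mecoprop}}} = \frac{1/1}{(1/10)/100} = 1000$$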
This detection issue will become increasingly important in the future as lawn-care
companies switch from using 2,4-D and use instead the monochlorinated acid herbicides,
mecoprop and MCPA.
As shown at last year's conference, we have developed a virtually artifact-free
PFBBr derivatization procedure which allows analyses by GC/ECD rather than GC/MS. A
standard literature-reported method was applied to three herbicide acids with the resulting top
GC/ECD chromatogram. Our PFBBr derivatization of four herbicide acids resulted in the lower
chromatogram.
House dust appears to be a long-term reservoir for pollutants entering the home.
Young children crawling on carpets with moist hands appear to transfer milligram quantities per
day to their mouths.
House dust is also an extremely complex matrix but will continue in importance
as scientists recognize its significance in exposure within the home.
In the lowest panel is shown the GC/ECD chromatogram of a representative house dust extract
without any clean-up. Above it is that same house dust extract with SPE clean-up. The top panel shows
the house dust extract with both SPE and liquid partitioning clean-up steps.
Discrete analytes are now detectible with baseline resolution and, as shown in the
following slide...they are labeled up there on the top...the internal standard 2,6-D, 2,4-D and PCP
are now identifiable. The presence of these analytes in the extract was confirmed using negative
chemical ionization GC/MS. No interferences were detectible by GC/MS.
This slide shows 1 gram dust extracts, spiked and unspiked, that were carried
through our analytical procedure and subsequently methylated with diazomethane. The analytes
are highlighted in pink, and the internal standard and recovery standard are highlighted in blue
and black.
As you can see from the bottom chromatogram, the house dust we collected had
measurable quantities of dicamba, 2,4-D, and PCP already in it.
This slide shows 0.1 gram dust extracts, spiked and unspiked, that were carried
through our analytical procedure and derivatized with PFBBr. Now we are beginning to detect
MCPA, a monochlorinated herbicide acid, in the unspiked dust extract.
And this slide shows our PUF extracts, spiked and unspiked.
-------
Our method recoveries were, in general, quite good, greater than 80 percent. The
method can also be used for herbicide acid salts, the form in which they are applied to turf.
As shown for PUF, 2,4-D salt had similar extraction and recovery to the 2,4-D free
acid. Shown at the bottom of the slide, recovery of our surrogate recovery standard, 3,4-D, in
numerous dust, soil, and PUF field samples analyzed to date is also quite good.
Before I show you some results, this slide is just to refresh your memory about
the different samples collected for each track-in experiment. Dislodgeable residues are picked
up from the turf via the PUF roller. Carpet dust is collected from one-half of each carpet.
Dislodgeable residues are collected via the PUF roller from one-half of each carpet.
What is not shown here are the turf dislodgeable residue samples taken at various
times after application from the lawn plot that had no human traffic.
This slide shows the temporal profile of 2,4-D turf dislodgeable residues that were
collected from the single lawn plot using the PUF roller. The error bars indicate duplicate PUF
samples and show relatively good reproducibility for the technique.
The trace of rain that fell on the first evening after application removed about 60
percent of the dislodgeable residues. The second rain, which was 0.9 inches in 2 hours, removed
approximately 99 percent of the dislodgeable residues, but residues were still detectible even
without PFBBr derivatization out to 14 days after application.
An interesting phenomenon should be noted here: 1 day after rain, dislodgeable
residues increase, suggesting that the turf herbicide acids are retained in the top layer of soil.
As water displaces the herbicide acids from highly retentive sites, more is available as a
dislodgeable residue. This phenomenon has been reported by Seibert and coworkers for volatile
pesticides as well.
The temporal profile for dicamba is also very similar to that for 2,4-D, including
the increases after the rain.
This slide shows the temporal profile of 2,4-D dislodgeable turf residues collected
from the lawn plots used for human track-in. Again, the profile is very similar to that seen in
the previous slide and is also similar to the profile of herbicide acids in the carpet matrices as
seen in the following slides.
These are the levels of dust and dirt-bound 2,4-D tracked from the turf onto the
carpeting and collected with the HVS3 vacuum cleaner. Clearly, 2,4-D is found in carpet dust
well after the heavy rain episodes. The entryway mat does little to remove these residues, 50
percent at most and, at later times, showing minimal effect.
-------
Dislodgeable 2,4-D residues on the carpet collected with the PUF roller are shown
here. Once again, the same temporal profile is observed with about 25 percent removed by the
entryway mat.
We have calculated here the relative transfer of pesticides from turf to the PUF
roller, to the dust in the carpet, and to the carpet surface. The numbers in the very top row are
listed as ppm numbers, and below them are the percentages.
Imagine we had a million pieces of confetti strewn about on the lawn. For
dicamba, 1,600 of those pieces of confetti were picked up by the PUF, 30 of them were in the
dust, and 3 of them went to the carpet surface.
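Those confetti counts are simply the parts-per-million values from the table read as fractions of the amount applied; for dicamba, for example:

$$1600\ \text{ppm} = \frac{1600}{1{,}000{,}000} = 0.16\,\%,\qquad 30\ \text{ppm} = 0.003\,\%,\qquad 3\ \text{ppm} = 0.0003\,\%$$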
Dislodgeable turf residues appear to be about 0.1 percent of that applied. For
industry standard application rates that are four to five times greater than what was used in this
study, residues in the home can potentially build up over the years. Most vacuum cleaners have
a removal efficiency of about 35 percent for small particles.
This slide shows quite good correlation between the mechanical measure of
dislodgeable residue and that amount actually tracked in by our human subjects. Correlation of
turf dislodgeables with carpet dust levels was 0.9, and with the dislodgeable carpet surface
residues was 0.99.
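As an illustration of the kind of calculation behind those correlation coefficients, the short sketch below computes a Pearson r for paired turf and carpet measurements; the numbers are invented for the example and are not the study data.

    import numpy as np

    # Hypothetical paired 2,4-D measurements (arbitrary units), one pair per
    # track-in experiment; the actual study values appear only in the slide.
    turf_dislodgeable = np.array([9.8, 7.1, 3.2, 1.1, 0.6])   # PUF roller on turf
    carpet_dust       = np.array([4.0, 3.1, 1.5, 0.6, 0.3])   # HVS3 carpet dust
    carpet_surface    = np.array([2.6, 1.9, 0.8, 0.3, 0.15])  # PUF roller on carpet

    # Pearson correlation coefficients, analogous to the r values quoted above
    r_dust = np.corrcoef(turf_dislodgeable, carpet_dust)[0, 1]
    r_surface = np.corrcoef(turf_dislodgeable, carpet_surface)[0, 1]
    print(f"turf vs. carpet dust:    r = {r_dust:.2f}")
    print(f"turf vs. carpet surface: r = {r_surface:.2f}")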
This would indicate that simple mechanical devices may be used to discern levels
that can be tracked into a home. What are the implications for us personally? Possibly, we
should become more like the Japanese and Scandinavians and leave our shoes outside the door.
Any questions?
MR. TELLIARD: Thank you.
-------
QUESTION AND ANSWER SESSION
MR. TELLIARD: Questions?
MR. COCHRAN: Jack Cochran, Hazardous Waste
Center.
I was wondering if you monitored the vapor of the home at all, the air of the home
for vapors.
MS. BRINKMAN: Well, the track-in experiments
were conducted outside. So, no, we did not. That is a concern, and that is something that we
will be looking at in the future.
MR. COCHRAN: And as far as the vacuum cleaner,
would you think that it would tend to throw these particles up in the air?
MS. BRINKMAN: Well, the HVS3 is designed
specifically for the experiments that we did.
MR. COCHRAN: I am talking about a home
vacuum cleaner.
MS. BRINKMAN: Yes, that is why we used the
HVS3, because a home vacuum cleaner does, in fact, throw the particles up into the air.
MR. COCHRAN: I am just suggesting that maybe
they will be breathed by the occupants.
MS. BRINKMAN: Yes.
MR. TELLIARD: Anyone else?
(No response.)
MR. TELLIARD: I have one quick question. Does
this mean my son's room is now a national hazard?
MS. BRINKMAN: I think yes.
MR. TELLIARD: Not to mention what four paws
will do. Thank you.
10
-------
[Slide] Development of GC/ECD Analysis Method for Herbicide Acids and Pentachlorophenol in House Dust
Marcia G. Nishioka, Marielle C. Brinkman, Hazel M. Burkholder
Battelle
Robert Lewis
AREAL/US EPA
-------
[Slide] U.S. households: 90% use pesticides; 50% read instructions; 10% use with caution.
Lawn care industry: $2.8 billion/year
Home application rate: 4-10x that of farms
-------
[Slide] Post-emergent herbicides applied by the lawn-care companies surveyed (2,4-D, mecoprop, and dicamba reported most often)
-------
[Slide] Post-emergent herbicides stocked by the six responding retailers (A-F):
2,4-D 6/6; Mecoprop 5/6; Dicamba 2/6; Glyphosate 1/6
-------
[Slide] (photograph; no recoverable text)
-------
[Slide] Lawn plot layout: herbicide application at time zero, with track-in simulation at various times after application
-------
[Slide] Lawn plots and applied herbicide acids (dicamba, mecoprop, 2,4-D): structures and measured application rates
-------
[Slide] Track-in sampling plan: dislodgeable residues from turf (section of turf with no traffic); dislodgeable residues from carpet (with mat) and carpet dust (with mat); dislodgeable residues from carpet (no mat) and carpet dust (no mat); "Start Here" marks the walking path
-------
[Slides, pages 22-26: photographs of the PUF roller, HVS3 dust collection, and human subjects simulating track-in]
-------
[Slide] Target analytes: herbicide acids and pentachlorophenol (structures)
-------
[Slide] Considerations for herbicide acid analysis in house dust:
Co-extractables to be removed --
Neutrals: alkanes, PAH, lipids, combustion source compounds
Acids: phenols, aliphatic carboxylic acids, humic acids
Detection levels --
High - Application Zone: 1-100 ppm in extract
Low - Indoor Track-in: 10-1000 ppb in extract
QA/QC -- Recovery Standard and Internal Standard
-------
[Slide] Analysis approaches considered:
GC/MS with little or no sample clean-up
GC with minimal clean-up, derivatization, and two-dimensional chromatography
GC with clean-up: dual derivatization and/or dual GC column analysis and/or dual GC detection
-------
[Slide] Derivatization options and GC detectors:
Methylation - methyl ester - ECD
PFBBr - pentafluorobenzyl ester - ECD
CEDMSDEA - silyl ester - ECD and NPD
-------
[Slide] Analytical scheme:
Add recovery standard (3,4-D)
Extract with acetonitrile:phosphate buffer
Partition at high pH to remove neutrals
SPE with C18 cartridge
Concentrate
Add internal standard (2,6-D)
Divide extract (0.8 mL and 0.1 mL portions) for GC/ECD analysis:
Methylation, 2 - 0.05 ppm
Dilute 1:10, methylation, 20 - 0.5 ppm
Dilute 1:10, PFBBr, 0.2 - 0.01 ppm
-------
[Slide] GC/ECD chromatograms of methylated vs. PFBBr derivatives; peak identities (1-10) and relative concentrations:
2,4-diClphenol 10, Dicamba 0.5, Mecoprop 100, MCPA 100, 2,6-D (Int Std) 1,
2,4-D 1, 3,4-D (Rec Std) 1, PentaClphenol 0.05, 2,4,5-TP 0.2, 2,4,5-T 0.2
-------
[Slide] GC/ECD chromatograms comparing a literature-reported derivatization method (top) with the PFBBr procedure developed here (bottom)
-------
[Slide] (no recoverable text)
-------
[Slide] GC/ECD chromatograms of a house dust extract: no clean-up; SPE clean-up; SPE and liquid partitioning clean-up steps
-------
[Slide] GC/ECD chromatograms: methylated derivatives of spiked and unspiked dust extracts
-------
[Slide] GC/ECD chromatograms: PFBBr derivatives of spiked and unspiked dust extracts
-------
[Slide] GC/ECD chromatograms: derivatives of spiked and unspiked PUF extracts
-------
[Slide] Method recoveries (percent, mean ± SD; "field recovery" is the 3,4-D recovery standard in field samples):
Analyte                 Dust (n=17)   Soil (n=19)   PUF (n=41)
Dicamba                 87 ± 2        75 ± 3        84 ± 6
Mecoprop                89 ± 11       112 ± 13      100 ± 1
2,4-D                   84 ± 9        93 ± 4        105 ± 4
2,4-D (salt)            --            --            89 ± 4
MCPA                    81 ± 1        99 ± 5        106 ± 4
3,4-D                   93 ± 8        93 ± 8        105 ± 3
PCP                     98 ± 6        95 ± 4        97 ± 2
3,4-D field recovery    95 ± 10       90 ± 7        90 ± 15
-------
[Slide] Samples collected for each track-in experiment: dislodgeable residues from turf (section of turf with no traffic); dislodgeable residues from carpet (with mat) and carpet dust (with mat); dislodgeable residues from carpet (no mat) and carpet dust (no mat); "Start Here" marks the walking path
-------
[Plot] Temporal profile of turf dislodgeable residues, 4 hr to 2 d after application
-------
[Plot] Temporal profile of turf dislodgeable residues, 4 hr to 14 d after application
-------
[Plot] 2,4-D dislodgeable residues from turf, 4 hr to 6 d after application; rain events (<0.1 in. and 0.9 in.) marked
-------
[Plot] Dust/dirt-bound 2,4-D tracked onto carpeting (walk-off mat vs. no mat), 4 hr to 6 d after application
-------
[Plot] Dislodgeable 2,4-D residues on carpet collected with the PUF roller, 4 hr to 6 d after application
-------
[Slide] Relative transfer from turf, ppm of amount applied (percent in parentheses):
                                  Dicamba         2,4-D
To PUF roller                     1600 (0.16%)    900 (0.09%)
To carpet dust                    30              15
To carpet surface (dislodgeables) 3               2
-------
[Plot] Comparison of Human and Mechanical Dislodgeable Turf Residues: 2,4-D in carpet dust and on the carpet surface (µg) plotted against turf dislodgeable 2,4-D (µg); correlation coefficients as discussed in the text
-------
[Slide] Acknowledgment: Battelle acknowledges financial support for this research from EPA on Contract No. 68-DO-0007
-------
MR. TELLIARD: Our next speaker is Harry
McCarty. Harry is going to be speaking on the round robin that was carried out on Method 1613
for the determination of dioxin and furans tetra through octa and the application of dirty,
dangerous, and deadly.
Harry?
MR. MCCARTY: Thanks, Bill.
I expected to get a lot more grief from Bill. This study was undertaken while I
was still an employee of Viar & Company at the Sample Control Center.
We presented this work in Finland, and I am sure a large number of you in the
audience were at the dioxin meeting in Finland. The presentation from the dioxin meeting has
been published in Chemosphere (Vol. 27, Nos. 1-3, pp 41-46, 1993). I see a lot of heads
nodding off. You have probably heard about this study in the past, either at this meeting or at
others.
To give you a little bit of background very quickly, in 1990, a little over three
years ago, Bill's office started this interlaboratory validation study. The analysis was for dioxins
and furans, the tetra through octa chlorinated dioxins and furans, by high resolution GC/mass
spectrometry.
It is an international study. We got everybody we could find at the time who was
willing to participate in the study, and ended up with 22 laboratories in 6 countries. As Lynn
Riddick can tell you, the trouble of shipping standards and samples to 6 different countries, 5
of them outside the U.S., is not trivial.
The idea was to gather data to support the promulgation of Method 1613 under
Section 304(h) of the Clean Water Act. That is the section, as you well know, that requires the
Agency monitoring methods to be proposed and, ultimately, promulgated after public comment.
The method was proposed for use in February of 1991. This study began before
that promulgation, and the method has not gone final yet. So, the results of this study will be
put into the final promulgation.
To give you a little bit of background on the method itself, it is not as exciting
as oil and grease analysis, obviously, but we extract the water samples. The focus of the method
is aqueous samples, and we extract the water samples after filtering them first, removing the
small particles which are believed to contain most of the dioxin in any case.
The particulates on the filter are extracted with toluene, and the remaining aqueous
filtrate is extracted with methylene chloride in a traditional separatory funnel shake-out.
49
-------
There is a solvent exchange and, ultimately, the two extracts are combined for
cleanup and analysis. So, you are only talking about one shot on the mass spec, but two different
extraction procedures are applied to the two different parts of the matrix.
The extraction device is something that came from Nestrick and Lamparski's work
at Dow. They have worked on this for quite some time. There had also been some other
development prior to their working on it.
It is a combination of a Soxhlet extraction with a Dean-Stark water trap. It is
known now as a Soxhlet-Dean-Stark or an SDS. It is widely applicable to soils and other solid
matrices, including particulates out of a water sample.
This is a picture of it for those of you who aren't familiar with it. It is basically
a glass dogleg which collects the water out of the sample.
Toluene and water form an azeotrope, and when you boil the toluene up through
here into the Soxhlet extractor, the vapor that comes off is an azeotrope. The boiling point is
85°C. The water condenses out and drops back down in this dogleg. The toluene will float on
top of it and, ultimately, be returned to the Soxhlet body.
With a Dean-Stark trap with a stopcock at the bottom, you can take off as much
water as you need to for a very wet matrix.
The advantage of using the Soxhlet-Dean-Stark is that you no longer have to add
the drying agent, sodium sulfate, to the sample. There are two concerns with using sodium
sulfate. The first thing that you do with reagent grade sodium sulfate is to bake it at 400 degrees
in a muffle furnace to get rid of all the residual carbon. That gives a grey cast to the sodium
sulfate.
Nestrick and Lamparski have done a fair amount of work to show that that grey
cast is, of course, activated carbon, which picks up dioxins and furans out of the matrix and can
absorb them.
The other issue is that as you hydrate the sodium sulfate it seals off pores in a
porous matrix such as soil or fly ash. You seal off the cavities in the matrix, and you never get
the solvent in contact with the small pores in the solid matrix, and therefore may not extract the
analytes of interest.
Nestrick and Lamparski have done a fair amount of work on SDS, and we
confirmed it with samples as nice as sewage sludge to show that Soxhlet-Dean-Stark does at least
as well as using Soxhlet along with a drying agent.
The cleanup steps are fairly common for dioxin methods. There is an acid-base
back extraction of the extract to remove the water extractable interferences. Gel permeation
50
-------
chromatography (GPC) can be applied. It is not essential to perform GPC on any of the cleaner
samples.
The alumina column, silica column, and carbon columns are all fairly traditional
for dioxin analysis, and they remove a fair number of the co-eluting interferences, the co-
extracted materials.
There is also a procedure in the method for doing HPLC as a cleanup step as well.
We add a number of standards to these samples, the first one being 37Cl-labeled
2,3,7,8-TCDD. This compound is not found among the environmentally occurring dioxins. It
is completely chlorine labeled, and it is used as, in this method, what is called a cleanup
standard. It is added after extraction so that you can determine the efficiency of the cleanup
steps separately from the recoveries of the other standards.
We also used carbon-labeled 1,2,3,4-tetrachlorodibenzodioxin and 1,2,3,7,8,9-
hexachlorodioxin as what are called in this method internal standards. Other methods call them
recovery standards. These are added immediately prior to injection, and they allow you to
measure the recovery of the 15 carbon-labeled standards that are added immediately prior to
extraction.
This is an isotope dilution method. The 15 isomers of the dioxins and furans that
have labeled analogs added prior to extraction are quantitated against those labels by isotope
dilution. The two isomers without labeled analogs are quantitated against an analog at the same
level of chlorination.
The octafuran label produces an interference with the native octadioxin, so it is
not added. As a result, you quantitate octafuran by comparing it to the peak areas of the labeled
octadioxin.
One of the hexadioxins, as I showed, is used as an internal standard immediately
prior to injection. So, it is quantitated against the labels for the other two hexadioxins that are
added prior to extraction. It is a minor variation on isotope dilution, if you will.
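In schematic form (a simplified, generic isotope dilution relation written here for illustration, not the exact equation printed in Method 1613), the native concentration follows from the area ratio of native to labeled analog, the amount of labeled analog spiked, the relative response determined in the initial calibration, and the sample volume:

$$C_{\text{native}} = \frac{A_{\text{native}}}{A_{\text{labeled}}} \cdot \frac{m_{\text{labeled, spiked}}}{RR \cdot V_{\text{sample}}}$$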
The study design, in general, was to supply the laboratories with concentrated
extracts derived from bulk extraction of large volumes of industrial wastewaters and sludges.
In fact, we did not ship the sludges to the labs in this portion of the study. They are being used
for other purposes.
The extracts were prepared by the EPA laboratory in Bay St. Louis. Some of them
were fortified with additional analytes. By and large, most of the samples had some 2,3,7,8-
TCDD and some 2,3,7,8-TCDF, but many of the other isomers were not found or not found in
significant concentrations in those original samples, so the extracts were fortified prior to being
sealed in ampules.
51
-------
The extracts were submitted to the laboratory as traditional single blind samples.
In fact, they were submitted in duplicate, and each lab did not know that they had duplicate
samples at the time that they received them.
The extracts were used to prepare what we are calling simulated effluent samples.
It is not unlike what is done during PE studies from Cincinnati. You concentrate the extract
down a little bit and mix it with a water miscible solvent, spike it into a liter of reagent water,
let it equilibrate, and then move on from there.
Again, each lab received two extracts. The statisticians tell me that the study
design was an incomplete block. We were trying to get a large variety of information out of a
single study, so we were looking at different concentration levels across different laboratories.
The extracts were shipped over a period of about four months as additional people
wanted to join in the study. Again, the troubles of shipping things overseas also slowed down
the process for us a little bit.
There were some alternatives to the use of concentrated extracts that we
considered at first, but given the time frame and the budget for the project, they were not
really feasible. We would have had to obtain large volumes of
wastewaters that we knew contained dioxins and furans, divide them into replicate aliquots using
total suspended solids as a key to whether or not there was replication amongst the aliquots, and
then ship them to over 20 laboratories in 6 countries.
This really wasn't feasible at the time of the study. I am not sure it would be
feasible now, the biggest issue being not the homogeneity, but coming up with a matrix you
know has the analytes of interest at the levels you are really interested in testing.
There were a significant number of quality assurance requirements associated with
the study. The bulk of them come out of the method.
Each of the samples, again, is spiked with the 15 isotopically labeled standards prior
to extraction. That is done prior to the filtration as well so that there is some partitioning
between the water and solid phase, certainly.
There is the cleanup standard that is spiked at the completion of the extraction step
in order to track the efficiency of the various cleanup steps.
Each of the labs was requested to run what the 1600 and 600 series EPA
methods traditionally call the "start-up tests," or the initial demonstration of capability: four
reagent water aliquots spiked with all 17 of the target analytes and then run through the entire
process. Each laboratory was expected to prepare and extract method blanks. There was a single
additional spiked reagent water aliquot that was extracted with each group of samples, the so-called
ongoing precision recovery standard in these methods.
52
-------
There was a 5-point initial calibration and a single-point calibration verification
carried out at the frequencies described in the method.
We had some data requirements we put on the laboratories. We were originally
looking for information content, not the specific presentation. Again, we were dealing with
laboratories that weren't used to dealing with this method or with EPA programs. We didn't want
to lock them into a specific set of paperwork.
But we wanted to get information on the concentration of each analyte that was
detected, the recoveries of all the labeled standards, and then we wanted all the supporting raw
data, including the selected ion current profiles and the quantitation reports. We wanted to see
exactly what the lab had done and how they had done it, the idea being that anyone with a little
bit of knowledge could go back and calculate the same result that the laboratory came up with,
and if we couldn't, we were going to contact the laboratory.
We did receive data from a large number of the laboratories. Of the 22 that
originally agreed to participate, 19 submitted data by August of 1991. Again, the study started
in February of '90. So, a year and a half later overall, we had received data from 19 laboratories.
Some of the labs received samples as late as August of 1990.
So, it was still taking on the order of a year for some of these laboratories to submit data.
One lab finally submitted their data in June of 1992. This was after six or seven
broken promises as to when it was going to get done. They were having some severe instrument
difficulties, and we said well, we would rather you ran it on an instrument that worked, but it
was taking an inordinately long time to get moved along.
Two labs failed to submit any data at all, and this was somewhat troublesome,
given that we had sent them a significant volume of standards for the study.
Of the 19 labs that submitted data by August of '91, one lab admitted that it was
unable to perform the method. They had some horrendous problems. They were not able to
dedicate the time to going back and figuring out what they were. They did give us a fair number
of useful comments on the approach in general, but their data, by their own admission and by
our review, was not terribly useful.
Two labs submitted only summary results, a cover page saying, you know, sample
number, analyte result. No labeled compound recoveries that were particularly useful and,
certainly, no raw data.
That was a very difficult situation for us. We made numerous attempts to get the
data from both of those laboratories.
53
-------
We reviewed these data laboriously. We looked at GC resolution, mass spec
signal-to-noise ratios, the labeled compound recoveries, retention times, ion abundance ratios, and
all the other method specifications. All of these review parameters are requirements of the
method. This is the way that we typically review all data for Bill Telliard's program, but dioxin
data in particular, given the sensitivity.
Where we did not have anything but summary data, we could not review the
majority of these parameters. We could look at the concentrations they found for things, but that
was about it.
We did use a fair amount of these data to develop information for the revised
method performance specifications, looking particularly at initial calibration, calibration
verification, and the initial precision and recovery test, that is, the four spiked reagent water aliquots.
Each of the 1600 series methods has a set of specifications for the recovery and
the scatter, the standard deviation of the recoveries for the IPR tests. There is also a specification
for ongoing precision and recovery analyses which are conducted with each batch of samples
extracted on a routine basis.
The approach was to use algorithms that had been used previously to develop 95
percent confidence intervals for each performance measure.
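A minimal sketch of that idea, assuming a simple normal-theory interval on pooled percent recoveries (the actual algorithms and any outlier screening used for the study are not reproduced here):

    import statistics

    def recovery_limits(recoveries, z=1.96):
        """Illustrative only: set acceptance limits for mean recovery from a
        pooled set of initial precision and recovery (IPR) results as roughly
        mean +/- z standard deviations (about a 95 percent interval)."""
        mean = statistics.mean(recoveries)
        sd = statistics.stdev(recoveries)
        return mean - z * sd, mean + z * sd

    # Hypothetical percent recoveries of one labeled standard from several labs' IPR tests
    pooled = [78, 85, 92, 88, 74, 95, 81, 90]
    low, high = recovery_limits(pooled)
    print(f"acceptance window for mean recovery: {low:.0f}% to {high:.0f}%")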
By May of 1991, we had collected enough data that we made a first attempt at
what we called the "interim method performance specifications" for the IPR/OPR, and calibration
verification. The interim specifications were put together and sent out under a memo from Bill
Telliard asking the laboratory community that we deal with routinely for comment on these
specifications. They were the reality test, if you will.
A number of the laboratories had concern that the specifications were what they
considered too restrictive. The existing method specifications in the method that had been
proposed and were sent out with the study were based on data from roughly two, almost three
laboratories, a fair number of data from two labs and a small amount from the third lab.
Those numbers had always been viewed as preliminary, and we were hoping to
come up with specifications that were more rational from the standpoint of the analysis itself but
also allowed the laboratory some time to run samples instead of just running calibrations or check
standards.
When the labs responded with their concerns, we went back and looked at the
specific data again. It became very quickly apparent that the data set was censored.
Laboratories had attempted to meet the specifications in the original method. They
submitted the one set of results that seemed to pass. We heard from a number of laboratories
54
-------
that they had run some of these analyses several times in order to meet the specs. They sent in
the one that worked, thinking that was what we were looking for.
As a result, the censoring produced very little variability in the data. They were
trying to hit a preconceived target, and the confidence intervals, the method specifications, are
derived on the basis of the variability that exists in the overall data set.
So, we found that the performance specs were still considerably tighter than was
deemed reasonable by most people.
At the 11th International Dioxin Symposium in Research Triangle Park two years
ago, we sat down with the labs and talked about the problem. They were asked to submit any
additional data they generated during the course of the original study. We did not want them
going out and generating new data outside of the context of what they had done; we simply
wanted the results that they said they had generated.
Unfortunately, no additional data showed up. Everybody said they had it, but it
didn't appear in Bill's office or mine at any point.
So, then we did what we really didn't want to do, which was sit down with the
statisticians. Generally, when you have to sit down with statisticians, you have to start talking
in terms of things like root mean square deviations and things like that that most of us don't
understand as just dumb old chemists.
But we went through and figured out that the approach that we were going to use
in dealing with the censoring problems was to expand what is called the quantile range so that
we were rejecting fewer data as outliers. Outlier tests were performed on the data set prior to
developing the method specs.
The outlier tests make an assumption about how wide you want to go in terms of
your range of what you will and won't consider outliers. The tests that had been used had a
relatively narrow quantile range. We expanded that in a step-wise fashion and looked at the data
after each adjustment.
We also expanded the confidence interval from a 95 percent confidence interval
to a 99 percent confidence interval. On a censored data set, we felt this was still a reasonable
approach.
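The effect of widening the quantile range can be pictured with a small sketch; the window used here is an interquartile-style screen chosen for illustration, not the statistic actually used in the study:

    import numpy as np

    def screen_outliers(values, k):
        """Keep points inside [Q1 - k*IQR, Q3 + k*IQR]; increasing k widens the
        accepted range so fewer points are rejected as outliers."""
        x = np.asarray(values, dtype=float)
        q1, q3 = np.percentile(x, [25, 75])
        iqr = q3 - q1
        keep = (x >= q1 - k * iqr) & (x <= q3 + k * iqr)
        return x[keep], x[~keep]

    # Hypothetical percent recoveries; the last value looks extreme
    data = [96, 102, 99, 88, 110, 104, 97, 128]
    for k in (1.5, 3.0):
        kept, rejected = screen_outliers(data, k)
        print(f"k={k}: rejected {rejected.tolist()}")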
Again, every time we made an adjustment, we printed it out, one or two of us sat
down and looked at the data and said this looks like a reasonable approach. We didn't throw out
any data a priori and say "oh, that has got to be an outlier", but we looked and said "geez, you
don't have to expand the range very far and that one data point becomes an outlier, and these
three which looked reasonable to the naked eye were still considered as part of the normal
population."
55
-------
We really couldn't do much with the data from the two labs that didn't submit any
raw data. We really were concerned about using them on the basis of good faith in the
laboratory.
What we did was wherever those data fell within the range that was not considered
an outlier, we kept the data point in. But, if we came up with an outlier that was from one of
these laboratories where we could not substantiate the data, we made no further attempts to see
if it could be included.
In the process of reviewing the hard copy data from the other laboratories, we did
come up with about three calculation errors, simple things like they forgot to multiply something
by 2, so the recovery was, in fact, twice what they thought it was originally. In one case, a value
came well within the specifications, and we confirmed the correction with the laboratory and
documented all of that.
The end result was we came up with a third revision of the method specifications
and, again, we circulated these through the same laboratory community and some other people
within the Agency, and the answer came back that these numbers seemed quite reasonable.
There were one or two people who said, "well, I wish this number were 17.5, not 18," or
something like that, but by and large, they were considered a more reasonable approach to the
specifications.
The third revision specifications are currently being used by Bill's office in his
contracts with commercial laboratories, and they will be formally incorporated into the next
revision of the method.
In terms of evaluating the data from the sample results, we had three different
extracts. The first one contained relatively few analytes, just the 2,3,7,8-TCDD and TCDF and
perhaps a little bit of octadioxin at low part per quadrillion concentrations.
Yesterday, we were talking about mg/L of oil and grease. We are now down nine
orders of magnitude lower than that.
The second extract had been fortified with most of the 2,3,7,8-substituted isomers
that weren't already present. The concentrations ranged from 100 to 500 ppq. And the third
extract was fortified such that it ended up in the 250 to 1000 ppq range.
15 labs submitted data that were considered in the study by the time we got done.
Actually, I think that number is closer to 19 now.
Two labs had very large differences between the results that they didn't know were
blind dupes, and we removed some of those from consideration after we went through the raw
data.
56
-------
Basically, we classified these as low, mid, and high concentration samples. You
will see that in the transparencies Lynn is going to put up.
There wasn't a large number of sample data points, so we didn't try to do outlier
tests on these data. Independent of the study, there was a single laboratory out at Bay St. Louis
that had done some analyses by a completely different procedure on the large volume extracts,
and we used those as reference values, not as absolute true values, but as a reference for what
the performance ought to be.
These are the results for what we are calling the low part per quadrillion sample.
The only two analytes really of concern are the 2,3,7,8-TCDD and TCDF.
The mean concentration for the tetradioxin was 49.4; the RSD was 26.2 percent.
The 2,3,7,8-TCDF mean concentration was 268; the RSD was considerably tighter.
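Throughout these results the relative standard deviation is the between-laboratory scatter expressed as a percentage of the mean; for the tetradioxin value just quoted, for example, the implied standard deviation is about 13 ppq:

$$\text{RSD} = 100\,\frac{s}{\bar{x}} \quad\Longrightarrow\quad s = \frac{26.2}{100}\times 49.4 \approx 12.9\ \text{ppq}$$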
There are different numbers of samples, the N column on the end there. The
tetrafuran requires what is called a second column confirmation in most dioxin methods. It is
not completely separated from all of the tetrafurans on a DB-5 GC column. Several of the
laboratories did not perform that second column confirmation test, so we did not throw in their
unconfirmed results into this pile and use it for statistics.
In the mid ppq sample, all 17 analytes were present. The mean concentrations
were within expectations. Again, we don't believe the reference concentrations are absolute true
values necessarily. They were simply two analyses by a completely different method.
If you look at the relative standard deviation column on the right there, the relative
standard deviations are, by and large, below 30 percent. Most of them are also below 25 percent.
The worst case overall is the octafuran at the bottom of the slide. The relative
standard deviation of that mean concentration is 45.5 percent. This is one of the two analytes
that is not quantitated by absolute isotope dilution. It is quantitated against the labeled
octadioxin. The lack of a true isotope dilution analysis here, we think, is evident in the
variability in the data.
The hexadioxin that is also quantitated against two other labeled standards is in
the middle there. The RSD is 10.6, with an N of 9. That falls much more closely in line with
the other data for the other hexadioxins, again, we believe, because that is quantitated against the
response of the other two labeled hexadioxins, so it tends to correct there.
The concentration range for the high sample was approximately 250 ppq to about
1000 ppq. The agreement is quite good for things like the tetradioxin, with 18.8 RSD. Again,
we had one lab that did not confirm the tetrafuran results, so the N there is smaller.
57
-------
The pentafuran results are quite tight, 6.8 percent relative standard deviation. If
you get down to the octafuran, again, you have a much larger standard deviation by and large.
Again, this is, at most, four laboratories and eight samples.
One of the concerns we had at the time the study was started was that most of the
labs had no direct experience with the method. We had gone beyond the existing EPA contract
laboratory community to 22 labs in 6 different countries. Most of them were not familiar with
Method 1613, although they were all versed in high resolution dioxin-furan analysis.
One of the problems with the international labs is they are not used to quite the
level of rigid reporting requirements that EPA requires even under a program such as Bill's.
There is still that "trust me, we know what we are doing" attitude when you contact these people.
The labs in this country have learned painfully that that is not the best approach,
certainly. So, the U.S. labs gave us more data and were more cooperative in providing it in a
format we could utilize, certainly, but they also have more experience with the programs in
general.
For the tetradioxin, the RSD of the mean concentrations for those three samples
generally decreases as the concentration increases. This is as you would expect. As you get to
higher concentrations above the sensitivity limits of the method, you get more precise analyses.
This isn't unexpected.
To a certain extent, this slide goes back to an earlier run of the data, but it still
holds true here. For the tetrafuran, the RSD is lowest at the lowest concentration level. We are
not sure if that is due to the second column confirmation requirement or if it is an artifact of
something else in the data.
Again, there are also some concerns about octadioxin and octafuran and one of the
heptadioxins. There was one lab that seemed to have high levels of background contamination.
None of these results are background corrected either by the laboratory or by us, so we believe
that some of the scatter in that data may be due to the fact that they are having some background
contamination situations.
This, again, is not uncommon at all with octadioxin or the octa and hepta furans.
Again, the results for one of the hexadioxins, the one that is not done by absolute isotope
dilution, also seem to be skewed by the results from one laboratory that seemed to be a little bit
out, but again, we did not perform any outlier tests, because this was a relatively small data set.
Our first conclusion is you get what you pay for. This is a volunteer study. It
took, as you saw, two and a half years to get all the data back. If we had paid for it, we
probably would have gotten it in a year and a half in some cases.
58
-------
It is very difficult to run a study like this with anything other than volunteer
laboratories, because you then end up with people who are willing to do it for money, and you
are not sure that they are really the only people you want to play with as well.
We did go through and try to summarize some of the method performance out of
this data. If you set a data quality objective of 30 percent accuracy, plus or minus 30 percent,
then, basically, half of the analytes in all three samples met this criterion, looking at the RSD
results. Most of them were well within that, and, in fact, more than half would be within the
window if you look at something other than the octadioxin and octafuran results.
Certainly, the two isomers of highest toxicological concern, the 2,3,7,8-TCDD and
TCDF, met this criterion and, in fact, are under 25 percent in the majority of these samples.
Octadioxin contamination appears to be a problem across most labs, as we had
expected. The octafuran and heptadioxin contamination are also a concern in some cases.
If we were to go back and pull out outliers, we believe that the picture would
improve in general. Again, we didn't do that for this, because we didn't feel there was quite a
large enough data set. If we get statistically more sophisticated, we may be able to do
something.
Since this study, EPA has used this method on hundreds of other real-world
samples. Many of you have heard about the study that was done in the past year and a half in
cooperation with the pulp and paper industry.
There have been a lot of results that have come out of that study that may
ultimately be used to develop method performance specifications as well.
We are not set in concrete in terms of what numbers are going to be out of this
study necessarily. We want to pull together as much information as possible.
There are some other issues involved in terms of this method in particular. You
have probably heard, if you were here last year, about the EMMC committee that Bill sits on,
the Environmental Methods Management Council, trying to consolidate EPA methods.
Method 1613 is the choice of that group for an Agency high resolution dioxin
method. It will presumably supplant all other Agency high resolution dioxin methods with time.
The Office of Drinking Water and Groundwater has adopted it as their high
resolution dioxin method, as opposed to their original proposal of a Method 513, which was
going to be a single analyte high resolution method.
There are efforts underway to get together with the Office of Solid Waste and deal
with the differences between 1613 and 8290 so there can be a consolidated version there as well.
59
-------
Also, Bill's office is looking at testing method performance in other matrices; this
study again was focused on water samples. We could go back and look at sludges from sewage treatment
facilities. We could look at soils. There is interest in looking at animal tissues and things like
that.
I would like to thank you for your attention. If there are any questions, glad to
answer them. Thank you.
60
-------
QUESTION AND ANSWER SESSION
MR. TELLIARD: Any questions? Oh, oh, he has
got a note pad.
MR. NEIN: I have three questions.
MR. TELLIARD: Would you state your name and
affiliation, please?
MR. NEIN: Oh, I am sorry. John Nein. I am with
Chesapeake Paper Products Company.
Have you looked at the results of this study compared with the round robin study
done by Larry LaFleur at the National Council and compared any of those results?
MR. MCCARTY: We don't have Larry's round
robin study that I know of. Maybe I am wrong, Larry, and you can correct me, but we haven't
looked at it in any formal context.
MR. NEIN: And when the method is promulgated,
how will the existing data generated by NCASI 551 be incorporated as far as a data base, or do
you anticipate doing that?
MR. MCCARTY: I don't know that any data from
551 are going to be incorporated at all. We certainly have been working with Larry and his
people on things like...
MR. NEIN: The 104-mill study? Any of that data?
MR. TELLIARD: Yes.
MR. MCCARTY: Yes, the 104-mill study, some of
that. Certainly, the variability study data is more likely to be used in developing the final method
for promulgation.
MR. NEIN: I have also been told by some labs that
the 551 methodology is acceptable for 1613, that they are basically equivalent. Do you agree
with that?
MR. MCCARTY: They are not equivalent. Larry
and I could get into a lengthy debate as to which works and which doesn't for certain matrices.
In some regions, it is my understanding that individual discharge permit writers have allowed the
61
-------
use of 551, at least for the time being. I know Bill's office had endorsed the possibility of
Method 551 until the final method was promulgated.
Once 1613 is promulgated, I don't know where that is going to stand. That is
certainly a policy question.
MR. NEIN: Thank you.
MR. HART: Jerry Hart, VG Analytical.
Is there going to be any further clarification in the protocol concerning the data
processing, particularly with regard to the effect on things like smoothing and background
subtraction and whether that is allowable within the program?
MR. MCCARTY: Bill and I have chatted only
briefly since you and I talked at the Pittsburgh conference. The final version of the method is
in the process of being written, if you will. It does contain a lot of improvements, clarifications,
simple language changes that came out of this study as well as other work that has been done
in the past year or so.
One of the issues that we are still talking about is how to handle the specifications
for what data processing is allowable and what isn't. The traditional approach is to limit the
amount of manipulation, post-acquisition manipulation, of the data to a great extent.
Right now, my personal feeling is that we are going to come up with something
that is not probably going to please everybody, but it is at least going to give an indication of
how much smoothing is going to be considered acceptable.
The whole issue of background subtraction is something that we are going to have
to discuss at length.
MR. HART: Additionally, is there going to be some
description regarding the calculations for the diphenylether channels?
MR. MCCARTY: Yes, the whole issue of
diphenylether interferences has been strengthened in there, and there will be a considerably
greater focus on it in terms of when you find a signal in the channel for one of the
diphenylethers, you cannot report that result as a positive result for the dioxin or furan without
doing further work.
MR. HART: Thank you.
62
-------
MR. TELLIARD: While he is coming up, Harry
pointed out that this was a volunteer program and, therefore, saved the taxpayers a lot of dollars.
It only cost me $100,000 for standards.
So, when some of those labs didn't send any data, their names are on a list.
MR. MCCARTY: We also specifically did not put
up the names of the people participating in the study so we didn't have to answer the question
of who are the two that didn't and who are the ones that didn't perform well.
MR. WAGNER: My name is Bruce Wagner, and
I am with IT Corporation.
Several questions. When do you estimate the final promulgation of 1613?
MR. TELLIARD: We are planning, due to the fact
that the pulp and paper industry is insisting on having their regulations proposed by the end of
the year or end of the summer, they are feeling very left out that they don't have any regs. So,
hopefully, the final promulgation would be close to that time period, in the October window.
One of the issues that we have to resolve is whether we are going to have to
re-propose the method because of the changes resulting from this study and from the activities
of the variability study, as it is titled, which was carried out last year with the industry.
Also, we are going to include into the method a number of other options, for
example, solid phase extraction which was not in the proposed method. Also, the application of
the method will probably be proposed for fish tissue.
It was also envisioned that we would propose the method for solid phase, for,
basically, OSW-SW846 type. Due to financial problems with our budgets, the Office of Solid
Waste hasn't been able to generate the data to do that. So, that is still in question, but we will
probably propose it for domestic sewage sludge, fish tissue, and then, hopefully, go final on it
for aqueous samples.
MR. WAGNER: Okay, you answered one of my
other questions about solid phase extraction. However, the method you mentioned, 513,
specifically allows solid phase extraction for drinking water.
Is there any consideration in 1613, having made a comment about most of the
dioxins and furans being bound to the particulate matter in aqueous samples, of doing away with
the separate extraction, just filtering and doing the Dean-Stark on the particulates, or are you
still going to be requiring...
63
-------
MR. MCCARTY: We had talked about that at one point. In fact,
we had tried earlier on a novel extraction technique that had been developed at Dow where you
extracted the sample in the bottle and threw away the sample and the bottle after you took the
extract off.
That didn't fly with most of the regulatory community. From my point of view,
I suspect we are still going to have to do something with the aqueous phase.
Last year, Sarah Barkowski, who is sitting a couple rows in front of you, had
presented her work on solid phase extraction using the disk extractors, and as Bill said, that is
something that his office is looking at extensively.
Whether or not that ends up in the version of the method that is promulgated first
off is a timing question, I think, as much as anything else.
This method, in particular, is not likely to be static. Bill is certainly anxious to
improve it as time goes along, not only in terms of use in the laboratory but also, even lower
sensitivity where people are concerned about it so we can write nastier and nastier regs for
people who want to be regulated.
But I think you are going to see several iterations of this in the next couple of
years, and these sorts of improvements will be included as the time and the budget permit.
MR. WAGNER: One more question. Did any of
the laboratories report trouble with achieving adequate separation for either the 2,3,7,8-TCDD
or the 2,3,7,8-TCDF? In particular, there are references in the literature about the 225 column
not being isomer-specific for the TCDF. Did any of the laboratories report that problem?
MR. MCCARTY: We didn't have any specific
reports of it. Again, we did go back and look at that, the resolution checks in every case where
we got raw data, and by and large, there were no specific problems with those.
The method, in traditional EPA language, specifies DB-225 or equivalent. I think
there was at least one lab that used a different column. So, we can use that as a judgment of
how other columns are working, but we didn't have any specific problems with that in this study
at least.
MR. WAGNER: I have one more question. There
is a section in 1613 as it is written now that says if you have a peak that meets the signal-to-noise
criteria and it is in the retention time window but the ion abundance (IA) ratio does not meet the
criterion, you have to go analyze that extract on a different column. Is that going to stay?
MR. MCCARTY: Well, that was an approach to
making the labs do something when things didn't meet the ion abundance ratio criterion.
64
-------
For those of you who aren't intimately familiar with dioxin analysis, one of the
identification parameters, in addition to retention time information, is what is called the ion
abundance ratio. You monitor two specific masses for each of the analytes you are looking at,
and the ratio of the two peaks produced by each analyte has to meet some method-specified QC
limit.
The limit currently is plus or minus 15 percent of a theoretical ratio. That number,
as best as I have been able to determine in the past three years, was agreed upon at a meeting
somewhere because they needed a limit.
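As a rough illustration of that check (the theoretical M/M+2 ratio of about 0.77 for a tetrachlorinated congener is assumed here for the example; it is not quoted above), the test amounts to:

```python
def ion_ratio_ok(area_m, area_m2, theoretical=0.77, tolerance=0.15):
    """Flag a peak as positively identified only if the measured ion
    abundance ratio falls within +/-15% of the theoretical ratio."""
    measured = area_m / area_m2
    return abs(measured - theoretical) <= tolerance * theoretical

# example: measured areas for the two masses monitored for 2,3,7,8-TCDD
print(ion_ratio_ok(area_m=10500.0, area_m2=14200.0))
```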
We presented another paper at the meeting in Finland last year with an alternative
to that approach which is being considered for inclusion in the method along with several other
possibilities.
But I believe that the final method will remove that, from the laboratory's
standpoint, onerous burden of having to re-prep and reanalyze the sample. I think what is going
to happen is there is going to be another approach to clarifying when something is positively
identified in terms of ion abundance ratio.
It is going to be a little bit of extra calculation, but it shouldn't be any extra
analytical work, but that is one of the things that is being considered. The decision rests, to a
certain extent, on Bill's head.
MR. WAGNER: Well, you have my vote to drop
it. Thank you.
MR. TELLIARD: Thanks. Sorry, I didn't see you.
As long as it is an easy one.
MR. MCCARTY: It is Sarah. It is never easy.
MS. BARKOWSKI: Sarah Barkowski, Boise
Cascade.
Could you describe the approach that you are using to look at lower sensitivity?
MR. TELLIARD: Can't hear you, Sarah.
MS. BARKOWSKI: Could you describe the
approach that you are using for your investigation of a lower sensitivity or lower reporting limit,
whatever?
MR. TELLIARD: Lower detection limits?
65
-------
MR. MCCARTY: Yes.
MR. TELLIARD: We are getting beat up by various
and sundry groups who are looking at increasing the volume of the sample again. You know,
if you put the Missouri River through your extraction setup, you will get a lower detection limit,
and that is basically what they are doing.
Historically, what we have found is that you increase the sample volume, you also
increase the interferences. Surprise, surprise. But one of the issues down the road is we are
going to probably be asked as to what will this...and it is not going to be something we are going
to do overnight.
You know, in other words, how far can you push this methodology as far as
extraction is concerned to get to what level? Is it 1 part, is it 0.01, is it 0.001? We don't know.
But it is one of the issues that we are looking at, because most of the public
meetings we have had on the pulp and paper study, one of the comments we are always getting
is, you know, Fred said he can get to 0.02. What the hell is wrong with you?
So, in lieu of that, what we are looking at is as this method progresses...our first
effort is to get the method finalized as it stands with a detection limit of 10 in water and 1 in
solids, that is, 10 parts per quadrillion and 1 part per trillion, and then move on and see what the
ruggedness of the method is.
That is something, as Harry pointed out, this isn't going to be a fixed method. It
will probably evolve as new techniques come out, and that is going to be dependent on how well
we can clean it up in the extraction procedures.
So, we don't have a window yet to say it is going to be 0.0006.
MR. MCCARTY: Or how we are going to get there.
MR. TELLIARD: Right.
MR. MCCARTY: As Bill pointed out, one of the
issues is to get this method promulgated, because the existing method in Section 304(h) of the
Clean Water Act for monitoring is Method 613 which is specific only to the 2,3,7,8-TCDD, and
its best guess at sensitivity is on the order of 2000 parts per quadrillion, 2 parts per trillion.
To groups arguing we are not sensitive enough, we keep saying, "But if we don't
get this one in place, if we keep making it a moving target, there isn't going to be anything that
is on the books and promulgated for formal monitoring."
66
-------
So, our first step at this point is to get this one done once and then go through,
maybe every couple of years there will be improvements that come along.
MR. TELLIARD: We don't know where we are at,
but we are working on it. Thanks, Harry.
67
-------
68
-------
Results of the Interlaboratory
Validation Study of
USEPA Method 1613
for the Analysis of
Tetra- through Octachlorinated
Dioxins and Furans
by Isotope Dilution GC/MS
Harry B. McCarty
Environmental and Health Sciences Group
Science Applications International Corporation
Falls Church, Virginia
Lynn S. Riddick
Sample Control Center
Dyncorp/Viar, Inc.
Alexandria, Virginia
-------
Background
February 1990—U. S. Environmental
Protection Agency, Office of Water
Regulations and Standards (now the
Office of Science and Technology) began
interlaboratory validation study of
Method 1613.
Analysis of PCDDs and PCDFs by high
resolution gas chromatography/mass
spectrometry.
Background (cont'd)
International study, ultimately involving:
- 22 laboratories,
- in 6 countries.
Purpose was to gather data to support
the promulgation of Method 1613 for
compliance monitoring under the
authority of Section 304(h) of the Clean
Water Act (CWA).
70
-------
Study Design
Laboratories supplied with concentrated
extracts derived from the extraction of
large volumes of industrial wastewaters
and sludges.
Some extracts were fortified with
additional analytes.
Extracts were submitted to the laboratory
as traditional single blind samples.
Study Design (cont'd)
Extracts were used to prepare the
simulated effluent samples.
Each laboratory received two extracts.
The study design formed an incomplete
block diagram.
Extracts were shipped to laboratories over
a period of four months as additional
laboratories agreed to participate.
71
-------
Alternative to Use of Extracts
Alternative to preparing simulated effluent samples
would have involved:
- Obtaining large homogeneous wastewater samples,
- Ensuring that they contained appropriate levels of
PCDDs/PCDFs,
- Dividing them into replicate aliquots,
- Shipping them to over twenty laboratories in 6
countries.
Given the difficulties associated with this alternative, the
use of simulated effluent samples was judged by USEPA
to be a suitable compromise for the purpose of the study.
Quality Assurance Requirements
of the Study
In the process of analyzing the simulated effluent
samples according to the protocol, the
participating laboratories were required to:
- Spike 15 isotopically labeled standards into
the samples prior to extraction
- Spike one additional standard into the extract
before cleanup.
72
-------
Quality Assurance Requirements
of the Study (cont'd)
Analysis of four reagent water aliquots spiked with
the 17 2,3,7,8-substituted PCDDs/PCDFs and 15
isotopically labeled standards
Method blanks
An additional spiked reagent water aliquot
extracted with each group of samples
Five point initial calibration
Single point calibration verification.
Data requirements
For each of the sample and quality control analyses,
the laboratories were to provide:
• Concentration of each analyte detected.
• Recoveries of each of the labeled standards.
• All supporting raw data, including:
- Selected ion current profiles,
- Quantitation reports.
73
-------
Data received
Of 22 laboratories agreeing to participate:
- 19 submitted data by August 1991
- 1 laboratory submitted data in June 1992
- 2 failed to submit any data at all.
Of the 19 submissions received by August 1991:
- 1 laboratory admitted it was unable to perform
the method
- 2 laboratories submitted only summary results,
no raw data.
Data Review
The data from each laboratory were thoroughly
reviewed, including the evaluation of:
• Gas chromatographic resolution,
• Mass spectral signal-to-noise ratios,
• Labeled compound recoveries,
• Retention times,
• Ion abundance ratios, and
• All other method specifications.
Summary data received for 2 laboratories could not
be reviewed in this fashion.
74
-------
Data Evaluation
Data from calibration standards and quality control
samples were used to develop revised method
performance specifications for:
• Initial calibration,
• Calibration verification (VER),
• Initial precision and recovery (IPR), and
• Ongoing precision and recovery (OPR).
Results were processed using algorithms designed
to develop confidence intervals for each
performance measure.
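The slides do not spell out those algorithms; purely as a generic sketch, a confidence-interval-based window around a performance measure such as labeled-compound recovery could be computed as follows (hypothetical recoveries):

```python
import statistics
from scipy.stats import t

def acceptance_window(values, confidence=0.95):
    """Two-sided confidence window (mean +/- t * s) around a performance
    measure; a generic illustration, not the study's actual algorithm."""
    n = len(values)
    mean = statistics.mean(values)
    half_width = t.ppf(0.5 + confidence / 2.0, df=n - 1) * statistics.stdev(values)
    return mean - half_width, mean + half_width

# hypothetical labeled-compound recoveries (%) from replicate IPR analyses
print(acceptance_window([78.0, 85.0, 92.0, 81.0, 88.0]))
```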
Data Evaluation (cont'd)
All data received and reviewed by USEPA by
May 1991 were used to generate interim
method performance specifications.
Interim specifications were circulated among
the USEPA contract laboratory community for
comment.
Concern expressed that specifications were
too restrictive.
75
-------
Data Evaluation (cont'd)
Apparent cause was censoring of the data set.
- Laboratories attempted to meet the specifications
in original method
- Submitted one set of QC results
- Did not submit analyses which may have come
close, but were still outside specifications.
Censoring reduced variability in data, and narrowed
confidence intervals predicted for performance
measurements.
Data Evaluation (cont'd)
Based on comments received at the 11th
International Dioxin Symposium, laboratories were
asked to submit any additional data generated during
the course of the original study.
When no additional data were received, two aspects
of the data evaluation algorithms were adjusted.
- Expanded quantile range to reject fewer data as
outliers.
- Expanded confidence interval to 99% from 95%.
Adjustments were made in steps, with manual
review of the results each time.
76
-------
Data Evaluation (cont'd)
When results from two laboratories that did not
submit raw data were within the revised
specifications, they were included in the database.
If they fell outside the variability exhibited by the
other laboratories, they were not included.
Third revision of the specifications circulated again,
and generally deemed reasonable.
Third revision specifications are currently being
employed by Office of Science and Technology for
their contracts with commercial laboratories, and
will be formally incorporated into next revision of
the method.
Sample Data Evaluation
Data received from the laboratories represented
three different sample extracts.
- One sample contained relatively few analytes at
low part per quadrillion concentrations.
- The second extract was fortified with most of
the 2,3,7,8-substituted isomers not already
present, and had concentrations in the 100 to
500 ppq range.
- The third extract was fortified in the 250 to 1000
ppq range.
77
-------
Sample Data Evaluation (cont'd)
15 laboratories submitted sample data that were
considered in the study.
Of those, two laboratories had such large differences
between the two simulated samples that their data
were removed from consideration.
Remaining 13 laboratories represented:
- 5 low ppq samples
- 5 mid ppq samples
- 3 high ppq samples.
Sample Results
For each analyte detected in each of the three sample
types (low, mid, high), the mean concentration and
relative standard deviation were calculated.
Given the small number of results for each analyte,
no attempt was made to determine outlier values or
to exclude them.
Independent of this study, a single laboratory
analyzed the large volume extracts before they were
divided into ampules and sealed. The results from
duplicate analyses by that laboratory were used as
reference values.
78
-------
Sample Results (cont'd)
Low ppq sample

Analyte           Reference Conc.   Mean Conc.   RSD (%)   N
2378-TCDD         60                50.6         27.0      10
2378-TCDF         300               277.1        10.2      10

Mid ppq sample

Analyte           Reference Conc.   Mean Conc.   RSD (%)   N
2378-TCDD         140               111.1        22.2      10
2378-TCDF         480               427.7        22.1      10
12378-PeCDD       90                71.4         17.8      9
12378-PeCDF       120               100.0        26.1      10
23478-PeCDF       170               158.4        23.8      9
123478-HxCDD      70                72.6         20.4      10
123678-HxCDD      70                72.6         20.4      10
123789-HxCDD      150               207.4        106.1     10
123478-HxCDF      210               195.6        24.8      10
123678-HxCDF      150               155.9        25.2      10
123789-HxCDF      110               86.4         40.5      7
1234678-HpCDD     120               197.0        122.7     10
1234678-HpCDF     180               146.3        35.3      10
1234789-HpCDF     110               104.3        23.6      10
OCDD              460               2325.1       206.8     10
OCDF              120               189.3        114.9     9
79
-------
High ppq sample

Analyte           Reference Conc.   Mean Conc.   RSD (%)   N
2378-TCDD         270               243.3        18.8      8
2378-TCDF         800               651.8        26.6      8
12378-PeCDD       250               192.3        39.3      7
12378-PeCDF       320               274.0        32.0      7
23478-PeCDF       480               425.7        26.2      7
123478-HxCDD      240               175.9        39.9      8
123678-HxCDD      240               175.9        41.2      8
123789-HxCDD      250               167.6        49.7      8
123478-HxCDF      600               507.4        42.2      8
123678-HxCDF      430               161.7        40.3      8
123789-HxCDF      310               129.5        128.1     5
1234678-HpCDD     240               203.4        42.4      8
1234678-HpCDF     500               356.3        44.3      8
1234789-HpCDF     310               287.6        32.0      7
OCDD              700               734.6        38.1      8
OCDF              320               235.1        51.7      8
Sample Results (cont'd)
Fewer than one third of the laboratories participating
in the study had direct experience with Method 1613
prior to this study.
For 2,3,7,8-TCDD, the RSD of the results from all of
the laboratories decreases as the concentration
increases.
In contrast, for 2,3,7,8-TCDF, the RSD is lowest at the
lowest concentration tested.
80
-------
Sample Results (cont'd)
For the mid ppq sample type, the results for
1,2,3,4,6,7,8-HpCDD, OCDD and OCDF are highly
skewed by one laboratory where background
contamination appears to be a problem.
For the same sample, the 1,2,3,7,8,9-HxCDD
results are skewed by the results from one other
laboratory.
Conclusions
You get what you pay for.
81
-------
Sample Data Evaluation (cont'd)
• 19 laboratories submitted sample data that were
considered in the study, representing:
7 low ppq samples
7 mid ppq samples
5 high ppq samples
• One of the two laboratories that did not submit any
raw data had summary results for a high ppq
sample which were markedly different from one
another. Without raw data to evaluate, these data
were dropped from further consideration.
Sample Results (cont'd)
Low ppq sample
Analyte           Reference Conc.   Mean Conc.   RSD (%)   N
2378-TCDD         60                49.4         26.2      14
2378-TCDF         300               268.0        8.9       10
82
-------
Mid ppq sample

Analyte           Reference Conc.   Mean Conc.   RSD (%)   N
2378-TCDD         140               108.9        22.6      14
2378-TCDF         480               453.8        18.7      8
12378-PeCDD       90                64.6         31.4      12
12378-PeCDF       120               93.4         32.4      13
23478-PeCDF       170               157.2        22.8      10
123478-HxCDD      70                76.1         14.4      11
123678-HxCDD      70                77.9         7.6       10
123789-HxCDD      150               146.3        10.6      9
123478-HxCDF      210               203.5        12.7      11
123678-HxCDF      150               161.4        15.1      11
123789-HxCDF      110               99.3         9.9       6
1234678-HpCDD     120               110.5        14.4      10
1234678-HpCDF     180               144.1        4.9       10
1234789-HpCDF     110               111.1        7.8       11
OCDD              460               757.8        31.4      12
OCDF              120               125.0        45.5      10
High ppq sample

Analyte           Reference Conc.   Mean Conc.   RSD (%)   N
2378-TCDD         270               243.3        18.8      8
2378-TCDF         800               588.7        25.6      6
12378-PeCDD       250               192.3        39.3      8
12378-PeCDF       320               306.3        6.8       6
23478-PeCDF       480               466.5        6.8       6
123478-HxCDD      240               175.9        39.9      8
123678-HxCDD      240               175.9        41.2      8
123789-HxCDD      250               167.6        49.7      8
123478-HxCDF      600               507.4        42.3      8
123678-HxCDF      430               401.3        49.7      8
123789-HxCDF      310               243.8        48.4      6
1234678-HpCDD     240               203.4        42.4      8
1234678-HpCDF     500               356.3        44.3      8
1234789-HpCDF     310               322.3        4.2       6
OCDD              700               734.6        38.1      8
OCDF              320               235.1        51.7      8
83
-------
Conclusions (cont'd)
Accepting a data quality objective for accuracy of
+/- 30%, half of the analytes in all three samples
meet this criterion judged by either %RSD across
all laboratories or as a percent bias from the
reference data set.
2,3,7,8-TCDD and 2,3,7,8-TCDF meet this criterion
in all samples.
OCDD contamination appears to be a problem
across most laboratories, with lesser problems
with OCDF and HpCDD
Conclusions (cont'd)
If outlying values are removed from the data set, the
precision and bias improve for all analytes.
Since this study, USEPA has undertaken studies
where this method has been applied to hundreds of
real-world samples. Data from those studies are
being evaluated relative to method performance
issues as well.
84
-------
Other Issues
In response to concern that there is a proliferation of
USEPA methods for environmental contaminants,
the Agency is evaluating the consolidation of
methods from various Program Offices.
Method 1613 is one method being considered for
such consolidation. Efforts are underway to
incorporate matrices from the Office of Solid Waste
Method 8290 into Method 1613, such that the
consolidated version of Method 1613 would replace
Method 8290.
Testing of method performance in other matrices will
benefit from the lessons learned in this study.
85
-------
86
-------
MR. TELLIARD: Staying with high resolution
GC/MS, Mr. Marti is going to talk about looking at coplanar PCBs. If you can't find dioxins,
you have got to have something to look for.
MR. MARTI: Currently, there is no move by the
EPA to either develop or promulgate a method for coplanar PCBs. Customer demand, on the
other hand, has forced the laboratories to develop their own method which is interesting, because
we are going to end up with different types of methods.
This is one lab's adventure into the coplanar PCB analysis.
I find it useful to define the terms used in PCB analysis, because they aren't used
consistently by many people. When I use the term congener, I am speaking of any one of the
209 possible PCB compounds. They are just similar in structure.
I have shown on this slide the biphenyl rings with the para, meta, and ortho
positions, along with the numbering system used for PCBs. Those are the positions at which
chlorine can be substituted.
PCBs with the same number of chlorines are considered a homolog group. So,
the mono PCBs are one homolog group, and deca PCB which is one compound comprises the
entire homolog group.
Coplanar PCBs, on the other hand, have chlorine substitution in both the 4 and 4'
positions as well as in at least two of the meta positions, 3 or 5, such that a 3,3',4,4'-
tetrachlorobiphenyl is coplanar, and a 3,3',4,4',5,5'-hexachlorobiphenyl is considered coplanar.
PCBs with chlorines substituted in the 2 or the 6 position are considered
substituted in the ortho positions.
Isomer refers to compounds with the same molecular formula, for example, any of
the tetra PCBs or any of the penta PCBs within a homolog group.
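To make these definitions concrete, the standard congener counts per homolog group (generally accepted values, not quoted in this talk) can be tabulated; they sum to the 209 possible congeners:

```python
# Number of possible congeners in each PCB homolog group (standard values,
# not taken from this talk); they sum to the 209 possible PCBs.
CONGENERS_PER_HOMOLOG = {
    "mono": 3, "di": 12, "tri": 24, "tetra": 42, "penta": 46,
    "hexa": 42, "hepta": 24, "octa": 12, "nona": 3, "deca": 1,
}
assert sum(CONGENERS_PER_HOMOLOG.values()) == 209
```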
There have been a lot of methods in the literature using ECD and low resolution
mass spectrometry (LRMS). Doug Kuehl and his group at the EPA lab in Duluth developed a
method using carbon column isolation similar to the dioxin analysis using isotope dilution, high
resolution GC/high resolution MS (HRGC/HRMS) for the analysis of dioxins and the coplanar
PCBs.
The advantages of the technique are the sensitivity, the specificity, and, more
importantly, that the data would be compatible with the historical data base for the national
dioxin study.
87
-------
The coplanar PCBs have the same mechanism of toxicity as 2,3,7,8-TCDD. The
relative toxicity may be a lot less, but PCBs can be present in the environment in much higher
concentrations. So, there is an interest in quantitating coplanar PCBs in environmental matrices.
Our objective was to evaluate EPA-Duluth's method, and what we want to do is
report the total homolog PCBs along with the coplanar PCBs.
PCB congeners 77, 126, 169, and 105 were the four that the EPA Duluth
lab originally included in their method. These are three true coplanars and one mono-ortho
substituted PCB. Customers have asked us to add two other mono-ortho substituted congeners,
118 and 156.
There are several reasons why a special method is required for coplanar PCB
analysis. The problem is, unlike the dioxins, the PCB homolog groups overlap in retention times.
You have the hexas and the pentas eluting with the tetras and so on.
There are GC columns out there on which you can run a 4-hour temperature program
and separate all 209 congeners, but not too many of our customers have $5,000 per sample to
spend for that type of analysis.
What we want to have is a reasonable GC temperature program, about an hour,
in order to make it cost effective. This means that congener specificity by GC, by itself, is not
sufficient.
Even using the high resolution mass spectrometer is not sufficient. What happens
is you have a hexa PCB that co-elutes with the 3,3',4,4'-tetra PCB, and the fragmentation of the
hexa PCB is such that the loss of two chlorines will contribute to the exact mass of the tetra
PCB. You need to have greater than 16,000 resolution of the mass spectrometer in order to
separate the masses. The normal practical operating resolution is 10,000.
Congener specificity by high resolution mass spectrometry is not possible for some
congeners.
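A quick back-of-the-envelope check, using the exact masses quoted in the accompanying slides, shows where the 16,000 figure comes from:

```python
# Resolving power needed to separate the tetra-CB ion from the co-eluting
# hexa-CB fragment that has lost two chlorines (exact masses from the slides).
m_tetra = 289.9223           # tetra-CB ion, m/z
m_hexa_fragment = 289.9038   # hexa-CB fragment after loss of two chlorines, m/z
required_resolution = m_tetra / (m_tetra - m_hexa_fragment)
print(round(required_resolution))  # about 15,700, i.e. roughly 16,000
```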
This leaves us with very simple wet chemistry. We do this by using carbon
column fractionation to isolate the coplanar PCBs from the other, non-coplanar PCBs.
There is an extraction step (usually Soxhlet) that can use a variety of solvents.
A carbon column cleanup is then done, similar to the dioxin cleanup, in which one fraction
is collected for the dioxins and the other is collected for the PCBs. Recovery standards are added
and two analyses are done, one for the coplanar PCBs and one for the dioxins.
Basically, for the carbon column cleanup, you elute with several solvents. The
important part is that the EPA-Duluth method elutes with 27.5 ml of methylene chloride/benzene
in a 1:1 volume-to-volume mixture to get the coplanar PCB fraction.
-------
Now, right away, one would have to ask why 27.5 ml seems so exact, and what we
decided to do was some spiking experiments: just spike the coplanar PCBs onto this
column and see how well we could recover them.
We were unable to recover the labeled internal standard hexa-PCB. It is just not
present. The 3,3',4,4',5,5' was staying on the carbon column, and as you can imagine, the analyte
itself, of course, also didn't show up, although the mono-ortho substituted hexa did come through
the carbon column. We were unable to quantitate it, because we lost the internal standard. Just
looking at peak heights, we estimate that the recovery was around 100 percent for the
mono-ortho substituted hexa, but we lost the coplanar hexa-PCB.
We modified the method several ways. We took additional fractions and did more
analyses, and after several replicates, we kept on getting these same results.
So, what I did at this point was call Brian Butterworth at EPA saying, you know,
what is going on here? Am I not doing something correct? Maybe something is not in the
method.
He explained he was finding the same problems with reproducibility. So, as
it turns out, EPA in Duluth has abandoned this method at this point and is looking at alternative
methods, which kind of left me high and dry.
Triangle Laboratories then decided to modify the method. We decided that since
the dioxin carbon column cleanup is designed for coplanar compounds, it wouldn't discriminate
between a coplanar dioxin and a coplanar PCB. We spiked in the coplanars and mono-orthos along
with the labeled internal standards and, lo and behold, we got good recoveries for all of the
compounds of interest. We then decided to do an experiment on an actual matrix.
We went to the grocery store and bought some catfish, and we ground them up,
and we added all of our standards, internal, alternate and surrogate standards. We then did a
Soxhlet extraction with methylene chloride.
We like to take out our percent lipid from the extract itself. Then we simply split
the extract, one fraction just going through what we call the Big Fish Column (acid Celite),
adding our recovery standards, and simply doing the high resolution GC/high resolution MS
for the total homolog PCBs.
Since we want to see all the PCBs for the total homolog, we don't need any fancy
cleanups. We just need to get rid of the lipid using the Big Fish Column.
We took the other half of the extract and did the carbon column cleanup, and then
analysis by high resolution GC/MS for the coplanar PCBs.
89
-------
Now, to do the dioxins, you can simply split the extract yet again for the dioxins,
or since they go through practically the same cleanups, use the same extract for the coplanar
PCBs and dioxin analyses.
Each of the congener groups has one isomer that is used for calculating a response
factor. In order to get a response factor for the total mono PCB we use the response factor from
the 2-monochlorobiphenyl analyte.
In the case where there are a couple of tetras and a couple of pentas, we average
the response factors for the two tetras to get a response factor for the entire homolog group.
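As a minimal sketch of that averaging step (the laboratory's exact quantitation equation is not given here, so the isotope-dilution form below is an assumption), the homolog total can be estimated from the averaged response factor:

```python
def average_response_factor(isomer_rfs):
    """Average the response factors of the calibrated isomers in a homolog
    group (e.g., the two tetra-PCB calibration isomers)."""
    return sum(isomer_rfs) / len(isomer_rfs)

def homolog_total_ng(native_area_sum, labeled_area, labeled_amount_ng, rf):
    """Isotope-dilution style estimate of the total homolog amount; assumes
    RF is defined as (native response / labeled response) per unit amount."""
    return (native_area_sum * labeled_amount_ng) / (labeled_area * rf)

rf_tetra = average_response_factor([1.05, 0.97])             # hypothetical values
print(homolog_total_ng(250000.0, 120000.0, 25.0, rf_tetra))  # ng of total tetra-PCB
```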
The results are reported in nanograms. I have not reported these in ppt. It is a
25 gram sample size if you want to do the calculation.
Well, interestingly enough, there are PCBs present in the store-bought catfish. The
blank was essentially clean, but what is interesting is that if you look at the distribution of those
specific PCBs in the sample, they don't look like an Aroclor. So, very likely, if these fish were
analyzed for PCBs for any sort of State health concern, which normally means Aroclor methods,
they would be reported as not detected as Aroclors even though there are PCBs present.
We also did some Aroclor spikes previous to this experiment. We spiked some
Aroclors and measured the recovery, and we got 85 to 105 percent recovery for the Aroclors just
looking at it by total homolog concentrations.
Of course, what we are really interested in is how well the coplanars did, and if
you look down the percent recoveries, you will see that the 2,2',5,5'-tetra PCB was essentially
removed from the carbon column as you would expect, since it is not coplanar, and we got good
recoveries of the coplanars of interest.
It is interesting that the mono, di, and tri also come through the carbon column
for good recoveries.
What is interesting is that the 2,2'-substituted octa PCB had very good recoveries
through the carbon column cleanup, as did the diortho-substituted hepta, which indicates that
the carbon column may not be as congener specific as we had hoped, in that it seems to be
pretty selective in the PCBs it lets through but not necessarily based on the ortho substitution.
The octa which has a 2,2',6,6' substitution was not recovered, but the octa with just
the 2,2' substitution was recovered. So, it indicates there is some variability here in the congener
specificity, and I think the carbon column is useful, but I don't think it demonstrates congener
specificity for the coplanars and the mono-ortho substituted compounds.
90
-------
Conclusions. Well, in talking with Brian Butterworth, it is apparent that the
Duluth method is not rugged enough. They had problems with reproducibility with the carbon
column and getting highly variable recoveries.
Our method shows that you can do both the total homologs and the coplanar
PCBs, but the coplanar PCBs are reported as maximum concentrations. I do not think any
laboratory at this point can demonstrate congener specificity by this method.
We estimate our detection limits at this point would be about 1 ppt based on
method detection limit studies. That will be somewhat variable, depending on the specific
congener of interest.
Future studies. Well, Duluth is very interested in our work right now; they sent
us their reference fish samples so that we could compare our results with theirs and
determine how well we are measuring the coplanar PCBs.
The other issue still remains, though, and it is a question that I can't answer today,
how are we going to demonstrate congener specificity, or are we even interested in doing that?
Maybe maximum concentrations are sufficient for what we want to do.
But if we are interested in doing congener specific analysis, we can do some things
like using carbon-labeled close eluters or trying to operate the high resolution mass spectrometer
at 20,000 resolution, which may be conceivable with today's new high resolution equipment, but
neither one of these is really a practical solution at this point.
There is one more elegant way of doing it: we can look at the M-1 and M-2 ions
from the molecular ion cluster to see if there is a contribution from a higher homolog group to
the specific isomers we are looking at. We have done some preliminary experiments that look
very good, and we can at least tell if there is a higher homolog interfering with the
quantitation of the coplanar PCB of interest.
So, in conclusion, the question remains, is congener specificity important for this
analysis? And I have got a feeling that there are going to be different answers depending on who
you talk to, but this is the state of the art as it stands.
I think it is important that we resolve this issue, because we are already starting
to accumulate a historical data base on coplanar PCBs, and it would be nice for the data being
generated now to be comparable five years from now.
91
-------
QUESTION AND ANSWER SESSION
MR. TELLIARD: Questions? Microphones?
There is a PCB seminar being held next week in Washington by the Office of
Water, primarily looking at coplanar congener PCBs in tissue. The subject of this talk will be
be the subject of two days of hand wringing. For those of you who are going to attend, it looks
like the party afterwards is supposed to be pretty good.
But are there any questions?
(No response.)
MR. TELLIARD: Thank you very much for your
attention. It is break time. There will be coffee outside. Get your coffee and strawberry and
get back in here so we can stay on schedule.
Oh, we have a question. I am sorry.
MR. MITZEL: Robert Mitzel from ALTA.
When you did the coplanars and the dioxins/furans, we have noticed on the
dioxin/furan analysis that you are required to run on essentially three columns because of the
interference with the 13C-labeled penta dioxin isomer. Did you guys find that same problem?
MR. MARTI: We haven't done the experiment yet,
but you are correct. I think what we will have to do is probably take the coplanar PCB fraction
and run it back through additional cleanups for the dioxin at this point, unless we separate the
extract, you know, split it into thirds and just pull it through the entire thing by itself.
It really depends on the detection limit requirements.
MR. MITZEL: Right.
MR. TELLIARD: Okay, thank you very much.
(WHEREUPON, a brief recess was taken.)
92
-------
Determination of Coplanar
and
Total Homolog PCBs
by
HRGC/HRMS
E.A. Marti, G.D. Marbury,
N.L Bragg and B.P. Rueda
Triangle Laboratories of RTP, Inc.
801 Capitola Drive
Durham, NC 27713
-------
Definition of Terms
PCBs: Polychlorinated Biphenyls
3 "> V 3'
m
Congeners:
PCBs are composed of 209 congeners. Similar
in structure with different levels of chlorination.
Homologs:
PCBs with the same number of chlorines. The
mono through deca PCBs represent 10
homolog groups.
-------
Definition of Terms
PCBs: Polychlorinated Biphenyls
Isomer:
Any PCB compound with same molecular
formula. All tetra-PCBs are isomers.
Coplanar:
Compound is dimensionally flat. Substitutions
in both para (4,4') plus at least two meta
positions (3 or 5) but no ortho (2,2',6,6').
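A minimal sketch of this rule as a check on a substitution pattern (the list-of-positions representation is assumed for illustration):

```python
def is_coplanar(substitutions):
    """Coplanar by the rule on this slide: chlorines in both para positions
    (4 and 4'), at least two meta positions (3, 5, 3', 5'), and none in the
    ortho positions (2, 6, 2', 6')."""
    para = {"4", "4'"}
    meta = {"3", "5", "3'", "5'"}
    ortho = {"2", "6", "2'", "6'"}
    s = set(substitutions)
    return para <= s and len(s & meta) >= 2 and not (s & ortho)

print(is_coplanar(["3", "3'", "4", "4'"]))        # PCB #77 -> True
print(is_coplanar(["2", "3", "3'", "4", "4'"]))   # mono-ortho #105 -> False
```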
-------
Introduction
There are many methods of analysis for coplanar
PCBs including various extraction techniques and
modes of analysis (e.g., GC-ECD and LRMS). Kuehl
et al. use carbon column isolation with isotope
dilution HRGC/HRMS for the analysis of dioxins and
coplanar PCBs. The advantages of this technique
are sensitivity, specificity and compatible data to
the U.S. EPA National Dioxin Study.
-------
Introduction
Coplanar PCBs are shown to have the same
mechanism of toxicity (receptor-mediated) as
2,3,7,8-TCDD. The relative toxicity of coplanar
PCBs compared to TCDD may be 10 to 1000
times less but PCBs can be found in much higher
concentrations in the environment.
-------
Objective

Evaluate and develop a method for the analysis
of coplanar PCBs using the Kuehl et al. methodology.
The analysis will report total homolog PCBs,
congener-specific coplanar and mono-ortho PCBs,
and dioxins.
-------
Objective
Coplanar:
3,3',4,4'- Tetra-CB (#77)
3,3',4,4',5- Penta-CB (#126)
3,3',4,4',5,5'- Hexa-CB (#169)
Mono-Ortho:
2,3,3',4,4'- Penta-CB (#105)
2,3',4,4',5- Penta-CB (#118)*
2,3,3',4,4',5- Hexa-CB (#156)*
* Added by TL-RTP
-------
Why Go Through All This Trouble?
O Unlike dioxins, the PCB homolog groups
overlap in retention times (e.g.,
2,2',3,3',6,6'-hexa-CB and 2,3,3',4',6-
penta-CB coelute with 3,3',4,4'-tetra-CB).
Congener specificity by GC resolution is
not practical.
-------
Why Go Through All This Trouble?
Even using high resolution mass
spectrometry, the M+2 ion from the loss
of two chlorines from the hexa-CB has an
exact mass of 289.9038. The exact mass
of the tetra-CB is 289.9223. A mass
resolution of 16,000 would be needed to
separate the two. Congener specificity
by mass spectrometry is not possible.
-------
Why Go Through All This Trouble?
Separation of coplanar PCBs from
noncoplanars is possible by carbon
column fractionation.
-------
Duluth Method Summary

Sample
  -> Spike 13C-labeled Dioxin/PCB Standards
  -> Extraction
  -> Carbon Column Cleanup (Dioxin Fraction collected separately)
  -> Spike Recovery Standards
  -> Analyze for Coplanar PCBs by HRGC/HRMS
-------
Duluth Method Clean-Up
350 milligrams (mg) of 10% carbon
(AX-21) / 90% silica gel (60 mesh)
Load sample extracts (0.5 mL volume in
hexane) onto column, rinsing with two
0.5 mL hexane washes
Rinse column with 18 mL hexane, discard
-------
Duluth Method Clean-Up
Elute column with 27.5 ml of methylene
chloride/benzene (1:1, vol:vol)
(Coplanar Fraction)
Reverse column, elute with 25 ml
toluene (Dioxin Fraction)
-------
High Resolution Coplanar PCB Recovery
Duluth Method

I. Unlabeled Analytes              Ave Found (ng)   Spike Level (ng)   Ave % Rec.   RSD
3,3',4,4'-tetraCB (#77)            19.0             20.0               95           0.9
2,3',4,4',5-pentaCB (#118)         46.5             20.0               232          12.6
2,3,3',4,4'-pentaCB (#105)         40.4             20.0               202          14.8
3,3',4,4',5-pentaCB (#126)         18.9             20.0               94           1.7
2,3,3',4,4',5-hexaCB (#156)        NQ*              20.0               NA           NA
3,3',4,4',5,5'-hexaCB (#169)       ND               20.0               NA           NA

II. 13C12-Labeled Internal Standards
13C-3,3',4,4'-tetraCB              14.0             10.0               140
13C-3,3',4,4',5-pentaCB            7.5              10.0               75
13C-3,3',4,4',5,5'-hexaCB          ND               10.0               NA

* NQ = Not quantitated due to loss of IS during cleanup
  ND = Not detected
-------
High Resolution Coplanar PCB Recovery
Duluth Method

                                        Ave Found (ng)   Spike Level (ng)   Ave % Rec.   RSD
13C12-Labeled Alternate Standards:
13C-2,2',4,4',5,5'-hexaCB               N/A              10.0               N/A

IV. 13C12-Labeled Surrogate Standards:
13C-3,3',5,5'-tetraCB                   12.2             10.0               122
13C-2,2',4,5,5'-pentaCB                 2.6              10.0               26
13C-2,2',3,4,4',5-hexaCB                4.5              10.0               45
-------
High Resolution Coplanar PCB Recovery
Triangle Laboratories of RTP Method

I. Unlabeled Analytes              Ave Found (ng)   Spike Level (ng)   Ave % Rec.   RSD
3,3',4,4'-tetraCB (#77)            20.2             20.0               101          2.4
2,3',4,4',5-pentaCB (#118)         17.4             20.0               87           2.3
2,3,3',4,4'-pentaCB (#105)         17.8             20.0               89           2.8
3,3',4,4',5-pentaCB (#126)         19.6             20.0               98           3.3
2,3,3',4,4',5-hexaCB (#156)        19.7             20.0               99           4.3
3,3',4,4',5,5'-hexaCB (#169)       20.1             20.0               101          3.7

II. 13C12-Labeled Internal Standards
13C-3,3',4,4'-tetraCB              5.9              10.0               59
13C-3,3',4,4',5-pentaCB            6.3              10.0               63
13C-3,3',4,4',5,5'-hexaCB          5.1              10.0               51
-------
Triangle Laboratories of RTP
PCB Method

25 grams tissue sample
  -> Spike with: 25 ng CoPCB-IS, 25 ng CoPCB-AS, 25 ng CoPCB-SS
  -> Soxhlet extract with MeCl2, 16 hrs
  -> Concentrate extract to 25 mL, then split:

  5.0 mL for Percent Lipid Determination

  10.0 mL for HR-PCB Analysis (Total Homolog)
    -> Acid Celite column clean up
    -> Final extract volume 100 µL
    -> Spike with HR-PCB Recovery Standards (13C-2,2',5,5'-Tetra; 13C-2,2',3,3',4,4'-Hexa)
    -> Analyze HRGC/HRMS for total homolog PCB

  10.0 mL for Co-PCB Analysis (6 Specific Congeners)
    -> Acid Celite column clean up
    -> Modified Carbon/Silica Gel column clean up
    -> Final extract volume
    -> Spike with HR-PCB Recovery Standards (13C-2,2',5,5'-Tetra; 13C-2,2',3,3',4,4'-Hexa)
    -> Analyze HRGC/HRMS for Coplanar PCB
-------
Triangle Laboratories of RTP
Method Clean-Up
O 350 milligrams (mg) of 10% carbon (AX-21)
/ 90% silica gel (60 mesh)
O Load sample extracts (0.5 mL volume
in hexane) onto column, rinsing with two
0.5 mL hexane washes
O Rinse column with 18 mL hexane, discard
O Reverse column, elute with 25 mL toluene
(coplanar PCB and dioxin fraction)
-------
High Resolution PCB Analysis Summary
Total Homolog Analysis

I. Unlabeled Analytes         Ave Found   Spike Level   Ave      RSD    Catfish Blank
   and Totals                 (ng)        (ng)          % Rec.          Level, ng
2-monoCB                      22.5        25.0          90.0     3.4    ND
4,4'-diCB                     26.4        25.0          105.6    5.1    1.3
2,4,4'-triCB                  27.8        25.0          111.2    4.4    2.4
2,2',5,5'-tetraCB             48.2        50.0          96.4     10.6   1.8
3,3',4,4'-tetraCB (#77)       48.9        50.0          97.8     4.3    0.14
2,3',4,4',5-pentaCB (#118)    52.9        50.0          105.8    5.0    2.7
2,3,3',4,4'-pentaCB (#105)    51.0        50.0          102.0    4.8    0.89
-------
High Resolution PCB Analysis Summary
Total Homolog Analysis

I. Unlabeled Analytes              Ave Found   Spike Level   Ave      RSD     Catfish Blank
   and Totals                      (ng)        (ng)          % Rec.           Level, ng
3,3',4,4',5-pentaCB (#126)         50.1        50.0          100.2    5.4     *0.002
2,3,3',4,4',5-hexaCB (#156)        **41.0      50.0          82.0     13.6    ND
3,3',4,4',5,5'-hexaCB (#169)       **48.2      50.0          96.4     6.0     ND
2,2',3,4,4',5,5'-heptaCB           68.4        75.0          91.2     4.7     1.8
2,2',3,3',4,4',5,5'-octaCB         77.6        75.0          103.5    7.5     *0.01
2,2',3,3',4,4',5,5',6-nonaCB       133.0       125.0         106.4    6.5     0.27
2,2',3,3',4,4',5,5',6,6'-decaCB    117.0       125.0         93.6     2.8     0.25

*  (EMPC)
** HexaCB values from 1:25 dilutions, required due to Ql
-------
High Resolution
Coplanar PCB Analysis

I. Unlabeled Analytes         Ave Found   Spike Level   Ave      RSD    Catfish Blank
   and Totals                 (ng)        (ng)          % Rec.          Level, ng
2-monoCB                      23.1        25.0          92.4     4.2    *0.006
4,4'-diCB                     24.8        25.0          99.2     4.9    1.3
2,4,4'-triCB                  26.5        25.0          106.0    5.1    2.5
2,2',5,5'-tetraCB             2.1         50.0          4.2      52.5   0.33
3,3',4,4'-tetraCB (#77)       47.7        50.0          95.4     0.75   ND
2,3',4,4',5-pentaCB (#118)    45.7        50.0          91.4     4.3    2.5
2,3,3',4,4'-pentaCB (#105)    46.3        50.0          92.6     2.2    0.77

* EMPC
-------
High Resolution
Coplanar PCB Analysis

I. Unlabeled Analytes              Ave Found   Spike Level   Ave      RSD    Catfish Blank
   and Totals                      (ng)        (ng)          % Rec.          Level, ng
3,3',4,4',5-pentaCB (#126)         48.2        50.0          96.4     2.5    ND
2,3,3',4,4',5-hexaCB (#156)        45.4        50.0          90.8     3.4    0.25
3,3',4,4',5,5'-hexaCB (#169)       50.1        50.0          100.2    0.88   ND
2,2',3,4,4',5,5'-heptaCB           76.9        75.0          102.5    2.2    2.0
2,2',3,3',4,4',5,5'-octaCB         72.5        75.0          96.7     3.6    ND
2,2',3,3',4,4',5,5',6-nonaCB       5.1         125.0         4.1      62.7   ND
2,2',3,3',4,4',5,5',6,6'-decaCB    ND          125.0         NA       NA     ND
-------
High Resolution PCB Recovery Summary
Coplanar PCB Analysis

13C12-Labeled Internal Standards        Ave % Rec.   % RSD
13C-4-monoCB                            76.5         10.6
13C-4,4'-diCB                           72.2         10.5
13C-2,4,4'-triCB                        76.8         8.3
13C-3,3',4,4'-tetraCB                   82.0         14.8
13C-3,3',4,4',5-pentaCB                 83.7         15.9
13C-3,3',4,4',5,5'-hexaCB               81.4         20.0
13C-2,2',3,4,4',5,5'-heptaCB            71.5         18.0
13C-2,2',3,3',4,4',5,5'-octaCB          79.7         21.5
13C-2,2',3,3',4,4',5,5',6,6'-decaCB     ND           NA
-------
High Resolution PCB Analysis Summary
Coplanar PCB Analysis

                                        Ave % Rec.   % RSD
13C12-Labeled Alternate Standards
13C-2,2',4,4',5,5'-hexaCB               54.6         29.2

13C12-Labeled Surrogate Standards
13C-3,3',5,5'-tetraCB
13C-2,2',4,5,5'-pentaCB
13C-2,2',3,3',5,5',6,6'-octaCB
-------
Conclusions
1. The Duluth Method is not rugged enough
for environmental matrices. EPA- Duluth
looking at alternatives.
2. TL-RTP method works for the analysis
of total homolog PCB and coplanar PCBs
(as maximum concentrations). Dioxin analysis
can be done from the coplanar PCB extract.
3. Coplanar PCB detection limits around 1 ppt,
MDL study underway.
-------
Future Studies
Evaluate Triangle Labs of RTP method using
reference fish samples from U.S. EPA
Duluth Laboratory.
Demonstrate congener specificity by:
1. Carbon labeled close eluters
2. HRMS resolution of 20,000
-------
Reference
Kuehl, D.W., B.C. Butterworth, J. Libal and P. Marquis,
"An Isotope Dilution High Resolution Gas
Chromatographic High Resolution Mass
Spectrometric Method for the Determination of
Coplanar PCBs: Application to Fish and
Marine Mammals." Chemosphere, Vol. 22, Nos 9-10,
pp 849-858. 1991.
-------
TRIANGLE LABORATORIES OF RTP, INC.
TL-RTP Project: 91085E            Polychlorinated Biphenyls Analysis
Client Sample: CoPCB IPA #5       Analysis File: X930928

Sample Matrix:   CATFISH       Date Received:               Spike File:   SPPCBO25
Client Project:  COPL PCB      Date Extracted:  04/14/93    ICAL:         XPC4213
TLRTP ID:        IPA #5        Date Analyzed:   04/22/93    CONCAL:       X930923
Sample Size:     25.020 g      Dilution Factor: n/a         GC Column:    DB-5
Dry Weight:      n/a           Blank File:      X930880     % Lipid:      6.9
                               Analyst:         WG          % Solids:     n/a

Specific Analytes                        Amt (ng)   DL      EMPC    Ratio   RT      Flags
2-MonoCB                                 23.7                       3.19    12:57
4,4'-DiCB                                23.7                       1.52    19:32
2,4,4'-TriCB                             25.4                       1.00    21:38
2,2',5,5'-TetraCB                        2.7                        0.76    23:15
3,3',4,4'-TetraCB (#77)                  47.4                       0.71    29:14
2,3',4,4',5-PentaCB (#118)               44.9                       0.56    30:27
2,3,3',4,4'-PentaCB (#105)               46.5                       0.55    31:44
3,3',4,4',5-PentaCB (#126)               48.0                       0.56    33:14
2,3,3',4,4',5-HexaCB (#156)              45.1                       1.20    35:11
3,3',4,4',5,5'-HexaCB (#169)             50.1                       1.22    36:55
2,2',3,4,4',5,5'-HeptaCB                 75.6                       1.03    35:56
2,2',3,3',4,4',5,5'-OctaCB               71.4                       0.88    40:09
2,2',3,3',4,4',5,5',6-NonaCB             7.1                        0.77    41:50
2,2',3,3',4,4',5,5',6,6'-DecaCB          ND         0.001

Total MonoCB                             24.9                       3.08
Total DiCB                               26.1               26.2    1.55
Total TriCB                              30.0               30.1    1.00
Total TetraCB                            65.4               65.6    0.75
Total PentaCB                            147                148     0.58
Total HexaCB                             106                107     1.22
Total HeptaCB                            77.0                       1.01
Total OctaCB                             71.4                       0.88
Total NonaCB                             7.1                        0.77

Page 1 of 2
PCBO_PSR v:1.00, LARS 5.06
Triangle Laboratories of RTP, Inc.
801 Capitola Drive • Durham, North Carolina 27713
Phone: (919) 544-5729 • Fax: (919) 544-5491
120
Printed: 14:02 04/27/93
-------
TRIANGLE LABORATORIES OF RTP, INC.
TL-RTP Project: 91085E            Polychlorinated Biphenyls Analysis
Client Sample: CoPCB IPA #5       Analysis File: X930928

Internal Standards                       Amt (ng)   % Recovery   Ratio   RT      Flags
13C-4-MonoCB                             21.4       85.6         3.23    14:38
13C12-4,4'-DiCB                          20.0       79.8         1.55    19:32
13C12-2,4,4'-TriCB                       21.6       86.3         1.04    21:37
13C12-3,3',4,4'-TetraCB                  22.1       88.4         0.79    29:14
13C12-3,3',4,4',5-PentaCB                22.7       91.0         0.63    33:14
13C12-3,3',4,4',5,5'-HexaCB              20.4       81.5         1.27    36:54
13C12-2,2',3,4,4',5,5'-HeptaCB           18.2       72.9         1.03    35:55
13C12-2,2',3,3',4,4',5,5'-OctaCB         19.1       76.3         0.89    40:08
13C12-2,2',3,3',4,4',5,5',6,6'-DecaCB                                            Interference

Surrogate Standards (Type B)             Amt (ng)   % Recovery   Ratio   RT      Flags
13C12-3,3',5,5'-TetraCB                  22.5       89.9         0.77    26:14
13C12-2,2',4,5,5'-PentaCB                9.8        39.3         0.63    27:26
13C12-2,2',3,3',5,5',6,6'-OctaCB         8.1        32.5         1.28    32:48

Alternate Standard                       Amt (ng)   % Recovery   Ratio   RT      Flags
13C12-2,2',4,4',5,5'-HexaCB              16.6       66.5         1.30    31:29

Data Reviewer                            04/27/93

Page 2 of 2
PCBO_PSR v:1.00, LARS 5.06
Triangle Laboratories of RTP, Inc.
801 Capitola Drive • Durham, North Carolina 27713
Phone: (919) 544-5729 • Fax: (919) 544-5491
Printed: 14:02 04/27/93
121
-------
TRIANGLE LABORATORIES OF RTP, INC.
TL-RTP Project: 91085E            Polychlorinated Biphenyls Analysis
Client Sample: CoPCB IPA #5       Analysis File: X930928

Sample Matrix:   CATFISH       Date Received:               Spike File:   SPPCBO25
Client Project:  COPL PCB      Date Extracted:  04/14/93    ICAL:         XPC4213
TLRTP ID:        IPA #5        Date Analyzed:   04/22/93    CONCAL:       X930923
Sample Size:     25.020 g      Dilution Factor: n/a         GC Column:    DB-5
Dry Weight:      n/a           Blank File:      X930880     % Lipid:      6.9
                               Analyst:         WG          % Solids:     n/a

Specific Analytes                        Amt (ng)   DL   EMPC   Ratio   RT      Flags
3,3',4,4'-TetraCB (#77)                  47.4                   0.71    29:14
2,3',4,4',5-PentaCB (#118)               44.9                   0.56    30:27
2,3,3',4,4'-PentaCB (#105)               46.5                   0.55    31:44
3,3',4,4',5-PentaCB (#126)               48.0                   0.56    33:14
2,3,3',4,4',5-HexaCB (#156)              45.1                   1.20    35:11
3,3',4,4',5,5'-HexaCB (#169)             50.1                   1.22    36:55

Internal Standards                       Amt (ng)   % Recovery   Ratio   RT      Flags
13C12-3,3',4,4'-TetraCB                  22.1       88.4         0.79    29:14
13C12-3,3',4,4',5-PentaCB                22.7       91.0         0.63    33:14
13C12-3,3',4,4',5,5'-HexaCB              20.4       81.5         1.27    36:54

Surrogate Standards (Type B)             Amt (ng)   % Recovery   Ratio   RT      Flags
13C12-3,3',5,5'-TetraCB                  22.5       89.9         0.77    26:14
13C12-2,2',4,5,5'-PentaCB                9.8        39.3         0.63    27:26
13C12-2,2',3,4,4',5-HexaCB               8.1        32.5         1.28    32:48

Alternate Standard (Type B)              Amt (ng)   % Recovery   Ratio   RT      Flags
13C12-2,2',4,4',5,5'-HexaCB              16.6       66.5         1.30    31:29

PCBO_PSR v:1.00, LARS 5.06
Triangle Laboratories of RTP, Inc.
801 Capitola Drive • Durham, North Carolina 27713
Phone: (919) 544-5729 • Fax: (919) 544-5491
122
Printed: 13:56 04/27/93
-------
MR. TELLIARD: The next session is going to deal
with a thing warm to our hearts which is detection limits, level of quantitation, level of
enforcement, and, of course, the reliable imprisonment level which is a number that the Agency
is developing to make sure that no one escapes their wrath.
Our first speaker this morning is Jim Rice. Jim has been coming to these meetings
since the old meetings, back when we used to fight. He was at the first one and has been coming every year
since, and he is going to keep coming, he says, until we get this right.
So, I would like to have Jim address you on detection limits and compliance.
123
-------
124
-------
Compliance Monitoring Detection and Quantitation Levels
By
James K. Rice
Raymond F. Maddalone
Ben C. Edmondson
Babu R. Nott
Judith W. Scott
There are many uses of detection and quantitation levels, for example, instrument evaluation,
laboratory quality control, laboratory certification and compliance monitoring. The problem
for the chemist, and particularly the non-chemist, is that current definitions in general do not
specify their intended use and consequently are applied inappropriately. For example, there
should be a fundamental distinction made between measurement limits that are developed for
quality control purposes in a single lab and measurement levels that apply to compliance
monitoring. Quality control measurement limits are used by single laboratories to ensure that
a particular analytical procedure is in control at a particular time. In compliance monitoring,
measurements are made, often over an extended period and by different laboratories, to
determine if enforceable standards (e.g., National Pollutant Discharge Elimination System
(NPDES) permit limits under the Clean Water Act) are being attained. The compliance
measurement level needs to reflect the greater variability inherent in this situation, as
compared to the intra-lab Quality Assurance/Quality Control (QA/QC) situation.
In this paper, we present definitions that we have developed for a Compliance Monitoring
Detection Level (CMDL) and a Compliance Monitoring Quantitation Level (CMQL) that
properly take into account the analytical variability associated with compliance monitoring
125
Copyright © 1993 Electric Power Research Institute
-------
situations (1). CMDLs and CMQLs derived from interlaboratory standard deviation are
shown for several analytes, along with comparisons with detection and quantitation levels
derived employing alternative definitions.
Background
Definitions for the detection limit abound. Over the years chemists have used "2-sigma" and
"3-sigma" detection limits without precise definition or meaning. The United States
Environmental Protection Agency (USEPA) continued this practice when they published
detection limits (DL) without definition in the Methods for Chemical Analysis of
Water and Wastes (MCAW) (2). Unfortunately, these limits are widely used in the
regulatory environment, where they are neither intended nor appropriate, in part because
levels specific to compliance monitoring had not been introduced (3,4,5).
If compliance standards, such as NPDES permit limitations, are set at levels at which it is
not possible to make reliable measurements, industries and municipalities may be subjected to
harsh civil and criminal enforcement consequences entirely as a result of analytical
variability, as opposed to an unacceptable concentration of pollutants in their effluents. That
is because compliance is gauged solely on the basis of the analytical results of a permittee's
effluent, not on the pollution control measures employed. Thus, unless appropriate detection
and quantitation levels are developed and applied, permittees will experience compliance
problems, notwithstanding their best efforts to select and apply effective pollution control
measures.
126
-------
Numerous authors and organizations (Table 1) have defined detection and quantitation levels
(6,7,8,9,10,11,12). Currie (6) and Kaiser (7,8) defined detection levels in the context of
confidence intervals and the probability of seeing false positive and negative errors. The
common thread among all the definitions was the use of a factor times the standard deviation
of the blank or of a sample containing analyte at a concentration near the expected detection
limit. The resulting definitions (Table 1) varied in their selection of the factor and
consequently in the probability of seeing false positive and negative errors. Some authors
acknowledged that detection and quantitation levels would vary by matrix, while others only
mentioned using reagent water to compute the standard deviation. The USEPA in 40 CFR
Part 136, Appendix B, acknowledges the importance of the matrix and the overall procedure
(preparation and analysis) in their definition of the Method Detection Limit (MDL) (11,12).
The Electric Power Research Institute (EPRI) (13) review of the literature of definitions for
limit of quantitation (LOQ) noted two common themes: the LOQ is equal to a factor times
the standard deviation of a well characterized blank; the factor is related to the
expected/required precision at the LOQ. The American Chemical Society (ACS) (10) chose
±10% relative standard deviation (RSD), with the factor inversely proportional to the RSD
at the LOQ. EPRI (13) defined the LOQ as the lowest true concentration for which the RSD
equals 20%. Later, Kempic of the USEPA presented detailed procedures (14) using
interlaboratory studies for calculating acceptance limits (AL) and the practical quantitation
limit (PQL), which the USEPA (15) defined as the lowest true concentration for which greater
than 75% of the laboratories can measure within ±AL. The latter was based on the 95%
confidence limit at the maximum contaminant level goal (MCLG) or, where the MCLG was
127
-------
zero, at a concentration five times the MDL. Britton (16) of the USEPA expressed three
years ago an alternative definition for PQL - the lowest true concentration at which one could
be confident that a single value would not be reported below a minimum level. This
definition was similar to one proposed earlier by Currie (6) except that the Kempic definition
of PQL as well as that of EPRI and the alternative expressed by Britton utilize the
interlaboratory standard deviation and interlaboratory recovery expressions for their
derivation.
Definitions of CMDL and CMQL
Our definitions for CMDL and CMQL are based on the fact that compliance monitoring
inherently involves interlaboratory performance. Previous definitions for detection levels
have been based on either single operator, single laboratory performance, or on a pooled
(average) of the former obtained by several laboratories. It is well established that single
measurements by different laboratories using the identical method on the same
traceable standard sample differ, and that measurements made by one laboratory on a
standard sample differ randomly over an extended period of time. The variation in results
between laboratories on a known standard is the interlaboratory standard deviation of the
method for the particular analyte or property in the given matrix. The difference between
the mean of these interlaboratory measurements on a single standard sample and the true
value of the standard is the bias of the method at that level in the given matrix (17). There
are both systematic and random components of this observed bias. The random components
relate to random errors in the calibration curves resulting in part from the absence of
traceable standards at concentrations near the MDL as well as from random ambient
128
-------
contamination both external and internal to each laboratory. The interlaboratory variance
reflects these random errors while the pooled single operator variance does not.
Compliance monitoring is inherently an interlaboratory issue for these reasons:
• Split samples - In the regulatory environment samples can be split between
laboratories (permittee and regulator) in situations where doubt exists about meeting
or exceeding a discharge limit.
• Extended life of the permit - Permits that are issued for discharges to the
environment cover an extended period of time (5 years for NPDES permits).
During that time, many different qualified contract laboratories may be employed by
the permittee or the regulatory agency to make measurements and to monitor
compliance.
Also, we note that compliance monitoring is dominated by two types of permits:
• Those written as "no-detectable" discharge, and
• Those where discharge limits (pollutant concentration not to exceed a numeric
value) are set.
These permits require that a detection and/or quantitation level be set for compliance
monitoring.
129
-------
The following definitions meet the needs that we have outlined for compliance monitoring
applications:
• Compliance Monitoring Detection Level (CMDL) - The lowest true concentration at
which there is at least a 95% level of confidence that 99% of the future analyses for
a specific analyte at that concentration in a common sample by any laboratory in
control will be reported as greater than zero (or a blank) for a given method and
matrix.
• Compliance Monitoring Quantitation Level (CMQL) - the lowest true concentration
at which there is at least a 95% level of confidence that 99% of the future analyses
for a specific analyte at that concentration in a common sample by any laboratory in
control will be reported as greater than the CMDL for a given method and matrix.
The CMDL should be used for determining detection levels in compliance monitoring. The
CMQL is the lowest level recommended for quantitative decisions based on a single analysis,
and it should be used for determining quantitation levels in compliance monitoring.
Figures 1, 2, and 3 summarize our proposed approach. Since the standard deviation is a
function of the concentration, our proposed definition first computes the standard deviation at
the defined level from a regression equation of interlaboratory standard deviation versus true
value (Figure 1). The computed interlaboratory standard deviation is used to establish a
130
-------
tolerance interval at the CMDL, such that there is 95% confidence that there will be only a
1% chance of a false negative with respect to zero (or a blank) among all future
measurements at the CMDL (that is, not reporting an analyte as present when it is) (Figure 2).
We believe that the experimental designs recommended by the ASTM D2777, Standard
Practice for Determination of Precision and Bias of Applicable Methods of Committee D-19
on Water (17), for interlaboratory round-robin studies are the proper basis for determining
the standard deviation used to compute detection levels for compliance monitoring. The
ASTM D2777 consensus standard is well known and accepted in scientific circles. ASTM
D2777 studies provide ample safeguards for data integrity through lab performance
assessment by lab ranking and individual outlier rejection. A round-robin study as specified
in ASTM D2777, which 1) requires several concentration levels, 2) provides the ability to
reject data from laboratories that can't perform, and 3) screens for outlier data which might
affect precision estimates, is most appropriate for meeting compliance monitoring needs. A
study based upon ASTM D2777, if properly implemented, produces an experimental situation
that closely mimics that of compliance monitoring.
The Electric Power Research Institute (EPRI) has funded extensive interlaboratory validation
studies of selected elements in aqueous matrices for graphite furnace atomic absorption
spectroscopy (GFAAS), flame AAS, and inductively coupled plasma-atomic emission
spectroscopy (ICP-AES) (Table 2). The data obtained from these studies (13,18,19,20) have
led to an evaluation of the detection levels (DL) contained in the USEPA MCAW (2) and are
131
-------
the basis of our proposed approach. It is important to note that when establishing the
performance of a given method-matrix combination, EPRI has striven to assure that all low
level observations including those at "zero" concentration are retained in the data to be
subjected to analysis by ASTM D2777. We believe that such retention for subsequent
analysis is essential to derive a valid estimate of "detection." In a method performance
study, no values should be censored a priori either by instrument set-point adjustments or by
the application of arbitrary "detection limits." Estimates of detection derived from such
censored data are artifacts of the censoring process.
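The effect of such censoring is easy to demonstrate. The short sketch below is a purely hypothetical illustration (it is not part of the EPRI study design): low-level results are simulated for many laboratories, and the standard deviation, together with a 3-sigma style detection estimate derived from it, is compared before and after negative results are discarded.

```python
# Illustrative only: shows how censoring low/negative results biases a
# detection estimate derived from the censored data (see discussion above).
import numpy as np

rng = np.random.default_rng(0)
sigma_true = 1.0                               # true sd at "zero" concentration (arbitrary units)
results = rng.normal(0.0, sigma_true, 500)     # uncensored results reported by many labs

censored = results[results > 0.0]              # common (improper) practice: drop negatives/non-detects

for label, data in [("uncensored", results), ("censored", censored)]:
    s = data.std(ddof=1)
    print(f"{label:11s}: n={data.size:3d}  sd={s:.2f}  3*sd detection estimate={3*s:.2f}")
# The censored sd is markedly smaller than the true sd, so any "detection
# limit" computed from it is an artifact of the censoring, not of the method.
```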
We establish the CMDL by using a tolerance interval (single-sided) for the normal
distribution with high confidence: that is, we want to be 95% confident that 99% of the
measurements of a constituent having a concentration equal to the CMDL will be greater
than zero. In calculating CMDLs, it is not necessary for us to assume that the standard
deviation estimated at a spiked level is an approximation for the standard deviation that
would be obtained with a blank. We assume, however, that measurement errors follow a
normal distribution and that the standard deviation at the CMDL is properly represented by
the regression expression employed. The CMDL is calculated:
CMDL = S • K(0.95, 0.99, n)
where S is the estimated interlaboratory standard deviation and the single-sided tolerance
factor K(0.95, 0.99, n) depends on the number of measurements, n, used to produce that
estimate.
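As a numerical illustration only (a sketch, not the procedure used to generate the tables in this paper), the single-sided tolerance factor can be computed from the noncentral t distribution rather than read from published tolerance-factor tables such as those cited later in the paper (22); for sample sizes of 48 to 72 this gives values close to the K values of roughly 2.75 to 2.88 quoted later for the EPRI data, while tabulated values for very small n (e.g., 4.625 for n = 7) may differ slightly. The standard deviation S used below is simply the Table 4 interlaboratory value for cadmium at 100 µg/L, chosen to make the arithmetic concrete; in the proposed approach S would come from the regression equation evaluated at the CMDL itself.

```python
# Sketch: one-sided normal tolerance factor K(0.95, 0.99, n) and CMDL = S * K.
# Computed from the noncentral-t distribution; tabulated handbook values may
# differ slightly for small n.
from math import sqrt
from scipy.stats import norm, nct

def k_factor(n, confidence=0.95, coverage=0.99):
    """One-sided tolerance factor: 'confidence' that 'coverage' of the
    population lies above (mean - K*s) when s is estimated from n results."""
    delta = norm.ppf(coverage) * sqrt(n)          # noncentrality parameter
    return nct.ppf(confidence, df=n - 1, nc=delta) / sqrt(n)

S = 9.2   # example interlaboratory sd, ug/L (Cd at 100 ug/L, Table 4); illustrative only
for n in (7, 48, 72):
    K = k_factor(n)
    print(f"n={n:2d}  K={K:.3f}  CMDL = S*K = {S * K:.1f} ug/L")
```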
132
-------
Comparison of CMDL and CMQL with Recently Proposed Definitions
Recently the American Chemical Society Committee on Environmental Improvement (ACS-
CEI) (21) proposed definitions for method detection level (MDL), reliable detection level
(RDL), and reliable quantitation level (RQL) which are based on the US EPA method
detection limit calculation defined in 1984 (12). The ACS-CEI levels are set for false
positives from zero (method detection level), for false negatives from the MDL (reliable
detection level), and an arbitrary factor times the RDL (reliable quantitation level). The
ACS-CEI computational approach relies on multiplying the single operator standard deviation
near zero (MDL) by different factors to set the RDL and RQL. The ACS-CEI MDL
definition describes the recommended MDL as being based on the single operator standard
deviation, but mentions that published MDLs for a method should be based upon
interlaboratory (which they define as pooled single operator) data. In general, the RDL and
RQL are two and four times the MDL, respectively, and the RQL is described as the
"recommended lowest level for quantitative regulatory action based on individual
measurements." The ACS-CEI acknowledges that these levels would be matrix dependent.
The ACS-CEI definitions are inappropriate, in our opinion, for compliance monitoring,
where two parties need to agree in a legal sense on the presence, absence or magnitude of a
toxic constituent, for the following reasons:
• Pooled single operator standard deviation is used incorrectly as a surrogate for
interlaboratory standard deviation when both are defined by ASTM D2777 as very
different measures of method performance (see Table 3).
133
-------
• Estimates of the standard deviation near zero concentration are used to compute the
base ACS-CEI MDL from which the RDL and RQL are computed far upscale from
zero.
• Standard deviation estimates used to derive the ACS-CEI MDL are based on too few
observations, as few as seven, and are based on observations at a single analyte
concentration, and do not preserve the proposed false positive or false negative rate.
Interlaboratory Standard Deviation. It is well established that the interlaboratory standard
deviation is considerably larger than the standard deviation derived from the results of
replicate analysis by an operator within a laboratory. As noted earlier, the present inability
to employ for calibration commonly available externally prepared traceable calibration
standards at concentrations near the detection level generates a laboratory bias that appears to
be normally distributed among laboratories. The variance of this calibration bias appears to
constitute one of the major error terms contributing to the difference between inter- and
intra-laboratory variance. Since samples for compliance monitoring originate outside the
laboratory, random ambient contamination error also contributes to the observed difference
(field blanks allow an estimate of this component). Within laboratory contamination error is
a component of both intralaboratory and interlaboratory standard deviation.
The foregoing points are illustrated using USEPA data from Method Study 27 for 6 trace
metals in surface waters by ICP-AES using the soft digestion preparation (Table 4). In this
134
-------
case the regression equations describing the relation of pooled single operator and
interlaboratory standard deviation versus the mean concentration for river water were used to
compute the standard deviation at 100 µg/L. The results clearly show that the
interlaboratory standard deviation is larger and thus includes more error terms than the
pooled single operator standard deviation. Previous EPRI RP1851 validation studies
(18,19,20) with 20-40 laboratories participating also consistently showed that the
interlaboratory standard deviation was 2 to 3 times larger than the pooled single operator
standard deviation (Table 4 and Figure 4). Definitions for detection and quantitation based
on single operator standard deviation, whether pooled or not, are therefore too optimistic
about the ability of equally qualified laboratories to confirm one another's results, especially
detection, on identical samples. In addition, such definitions are truly inappropriate for use
in compliance monitoring as they do not take into account calibration errors at trace levels
nor errors that originate outside the laboratory yet influence the reported value of a
parameter in a sample collected for determination of compliance with a permit limit.
Precision Estimates Near Zero Concentration. As the concentration increases, the standard
deviation increases. For any concentration selected, the measurements will have a
distribution related to the standard deviation at that concentration (Figure 1). The proposed
ACS-CEI definition is given in terms of σ but in fact uses an estimate of the standard
deviation near zero to predict the confidence intervals at the higher concentrations associated
with an RDL or RQL. Since the standard deviations at the RDL and RQL, respectively, are
substantially greater than that near zero, the confidence intervals at the RDL and RQL
135
-------
inferred by ACS-CEI are inaccurate and, in fact, much smaller than the actual confidence
intervals.
Unsuitability of MDL-Based Definitions. The third reason that the ACS-CEI proposed
definitions are inappropriate for compliance monitoring applications is that they are based on
the estimated standard deviation of the measurement error distribution and, in addition, are
computed from too few measurements to provide any reasonable confidence in the false
positive error rate.
The ACS-CEI definitions, which have been proposed for compliance monitoring, are based
on the MDL under the erroneous assumption that the MDL provides a 1 percent false
positive rate. This error rate for false positives can only be maintained by referring
measurements to the 99th percentile of the measurement error distribution. An MDL, using
an estimated standard deviation from repeated measurements within a single laboratory,
yields an "error rate" of 1 percent false positives under very restrictive and subtle conditions:
for each measurement to be compared to an MDL, the MDL must be recomputed. That is,
to maintain a prescribed error rate of false positives, a minimum of seven measurements
must be taken to check the eighth for compliance. This, in fact, is not done in practice and
is one of the reasons that neither the MDL nor any multiple of the MDL that purports to
maintain a false positive rate is appropriate for compliance monitoring. Moreover, the "error
rate" of false positives produced by using an MDL in compliance monitoring, even if eight
measurements are used to test for compliance, cannot be predicted for any particular stream,
136
-------
effluent or outfall. Since the MDL itself is a chance or random variable, produced from a
calculation based upon a random sample, compliance or non-compliance for any specific test
cannot be predicted. The 1 percent error rate ascribed to the MDL is an average rate that
belongs to all MDLs computed with at least seven measurements. This average rate is
comprised of monitoring situations yielding false positives above 1 percent and those below 1
percent. It can be shown that more than 20 percent of all tests for compliance using an
MDL exactly as prescribed will have false positive rates much greater than 1 percent, while
more than 70 percent of those tests will have false positive rates below, perhaps considerably
below, 1 percent.
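This behavior can be checked with a short simulation. The sketch below assumes normally distributed blank-level errors and the seven-replicate MDL recipe (MDL = 3.143 × s); it reproduces approximately the split described above, with roughly a fifth of the computed MDLs giving false positive rates above 1 percent and the remainder giving rates below it, even though the average rate is 1 percent.

```python
# Sketch: Monte Carlo check of the realized false positive rate of an MDL
# computed as 3.143 * s, where s comes from 7 blank-level replicates.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
sigma = 1.0                      # true sd of blank-level measurements (arbitrary units)
n_rep, t99 = 7, 3.143            # Student's t (6 df, 99th percentile) used in the MDL recipe
n_trials = 100_000

s = rng.normal(0.0, sigma, size=(n_trials, n_rep)).std(axis=1, ddof=1)
mdl = t99 * s                                   # one MDL per simulated laboratory/occasion
fp_rate = 1.0 - norm.cdf(mdl / sigma)           # chance a true blank exceeds that MDL

print("fraction of MDLs with false positive rate > 1%:", np.mean(fp_rate > 0.01))
print("fraction of MDLs with false positive rate < 1%:", np.mean(fp_rate < 0.01))
# Expect roughly 0.2 and 0.8, respectively, even though the average rate is near 1%.
```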
There is, however, an appropriate statistical procedure for computing with confidence
tolerance limits for the percentiles of the error measurement distribution. Under the
assumption of normal error distribution, tolerance limits require using the factor 4.625 (not
3.143) times the estimated standard deviation (based on seven replicates) which gives a
detection level that preserves a 1 percent false positive rate with 95% confidence (22).
This issue of too few observations can be addressed by employing data from interlaboratory
studies. The USEPA and EPRI have conducted interlaboratory method validation studies in
numerous matrices and for many methods and analytes. The data have been expressed as
regression equations that can be used to compute the standard deviation at any point in the
test range.
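To show how such a regression equation translates into the levels defined earlier, the sketch below assumes a simple linear model s(T) = a + b·T for the interlaboratory standard deviation, unbiased measurements, and a fixed tolerance factor K; under those assumptions the CMDL and CMQL definitions reduce to closed-form expressions. The coefficients shown are hypothetical and are not taken from any of the studies cited here.

```python
# Sketch: closed-form CMDL and CMQL when the interlaboratory sd is modeled
# as a straight line s(T) = a + b*T and measurements are assumed unbiased.
# The coefficients below are hypothetical, for illustration only.
def cmdl_cmql(a, b, K):
    """CMDL solves T = K*s(T); CMQL solves T - K*s(T) = CMDL."""
    if K * b >= 1.0:
        raise ValueError("K*b must be < 1 for a finite solution")
    cmdl = K * a / (1.0 - K * b)
    cmql = (cmdl + K * a) / (1.0 - K * b)
    return cmdl, cmql

a, b = 3.0, 0.05       # hypothetical regression coefficients (ug/L intercept, unitless slope)
K = 2.8                # single-sided tolerance factor for the study's sample size
cmdl, cmql = cmdl_cmql(a, b, K)
print(f"CMDL = {cmdl:.1f} ug/L, CMQL = {cmql:.1f} ug/L")
```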
137
-------
Table 5 compares the pooled single operator, single point computational approach suggested
by the ACS-CEI with our proposed CMDL/CMQL approach which uses standard deviation
regression equations to compute the interlaboratory standard deviation at the defined level.
EPRI RP1851 ICP-AES validation data (20) were used to compute both sets of values shown
in Table 5. For the ACS-CEI values, the pooled single operator standard deviation at the
USEPA estimated detection limit (USEPA Method 200.7) was used with a t value computed
for the number of laboratories reporting (approximately 18). For EPRI RP1851 data given
for the six elements in Table 5, the sample sizes ranged from 48 to 72, giving single-sided K
values from 2.7529 to 2.8760. If the true standard deviation, σ, had been known exactly
instead of estimated, we would have used 2.326 which is the 99th percentile of the normal
distribution.
The ACS-CEI definitions use an estimate of standard deviation at one point (at or near zero),
while our approach effectively uses the standard deviation at the defined detection and
quantitation levels. Since our method of computation uses the standard deviation regression
expressions to determine the concentration that meets the conditions of the respective
definition, the full power of all the valid data at all concentrations by all laboratories is used
in the estimation, not just that from a single concentration, possibly far from the defined
level.
The concept behind the proposed ACS-CEI definitions might be pursued as "fall-back"
definitions for the situation where only single laboratory and no interlaboratory performance
138
-------
information is available and there are neither sufficient resources nor time to obtain data in
accordance with ASTM D2777 requirements. In developing such definitions, however, the
ACS-CEI approach needs to be modified to satisfy the following technical criteria:
• The levels developed for compliance monitoring should be based on an estimate of
interlaboratory standard deviation at an acceptable level of confidence.
• The computational approach should recognize the change in standard deviation with
concentration and the effect of this change on the calculation of the confidence
intervals.
• The terms and definitions for the levels to be used for compliance monitoring
should be readily discerned by users as different from existing definitions that are
based on single operator or pooled single operator standard deviation.
Our CMDL/CMQL computational approach using standard deviation regression equations
addresses a number of shortcomings with the ACS-CEI MDL, RDL, and RQL, as presently
proposed, as well as with other existing definitions. In particular, using the standard
deviation calculated from the range of concentrations regressed allows for the interpolation or
limited extrapolation needed to calculate the CMDL and CMQL. The latter can be viewed
with much more confidence since they are based on the demonstrated abilities of many
laboratories to perform the measurements.
139
-------
The authors recognize that other researchers (23,24,25,26) have proposed alternative
approaches for determining detection and quantitation levels involving joint consideration of
probabilities of false positive and false negatives, including simultaneous tolerance intervals
for weighted regression. These fundamentally single operator based statistical approaches
should be evaluated to determine their applicability to interlaboratory compliance monitoring
situations.
Of special interest for the compliance monitoring situation is the work in progress by the
ASTM Sub-committee D-19.02, Task Group on Detection and Quantitation, Chaired by
Nancy Grams. The Task Group (composed of Federal, State, and industry representatives) is
currently conducting its first Sub-committee ballot of the Task Group's draft definition for
detection estimate that is based on interlaboratory standard deviation and is to be used in
compliance monitoring.
Summary
The compliance monitoring detection/quantitation levels, CMDL and CMQL, presented in
this paper:
• Emphasize the interlaboratory nature of compliance monitoring.
• Benefit from the use of the full range of available statistical data (regression
equations used to interpolate the concentration that exactly meets the definition).
140
-------
• Provide unambiguous guidance to users regarding their intended application.
• Compute detection and quantitation levels based on the estimated standard deviation
associated with each level.
The alternative ACS-CEI definitions as presently proposed have limitations in that they:
• Use the term "interlaboratory standard deviation" when they mean "pooled single
operator standard deviation".
• Use single operator based levels for compliance monitoring, a circumstance
requiring interlaboratory based levels.
• Use single point estimates of standard deviation near zero to compute confidence
intervals far up-scale from zero.
• Do not take full advantage of published validation data as could be done by using
regression expressions of standard deviation versus true concentration.
One very important problem with the MDL, and any levels computed from it, is that it is based
upon too few observations (seven) and also upon a single analyte concentration level in the
MDL's simplest form. There are, of course, multi-laboratory single operator replication
141
-------
studies with more observations. These studies, however, involve essentially a single
concentration level in samples prepared by the participant and do not address between
laboratory error. Thus, the study data are not suited for addressing detection/quantitation
issues in a compliance monitoring situation.
In this paper we have sought to explain the need to use estimates of variability derived from
the interlaboratory validation of compliance methods. In doing so, we have provided a
powerful and useful method to arrive at compliance detection and quantitation levels.
Acknowledgment
The authors wish to acknowledge the support of Mr. James Stine (Pennsylvania Power &
Light) and Mr. Steven Koorse, Esq. (Hunton and Williams) for their review and guidance.
142
-------
REFERENCES
1. Maddalone, R.F., J.K. Rice, B.C. Edmondson, B.R. Nott, and J.W. Scott, "Defining
Detection and Quantitation Levels," Water Environment & Technology, 5(1), 41 (1993).
2. USEPA, "Methods for Chemical Analysis of Water and Wastes," USEPA-600/4-79-
020, March 1979 (updated March 1983).
3. Koorse, S.J., "False Positives, Detection Limits, and Other Laboratory Imperfections:
The Regulatory Implications," Environmental Law Reporter, Volume 19, 1989, 10211-
10222.
4. Koorse, S.J., "MCL Noncompliance: Is the Laboratory at Fault," Journal of the
AWWA, February 1990, pp. 53-58.
5. Petition of James River Corporation, et al. (IH-90-18, 90-17, 91-05).
6. Currie, L.A., "Limits for Qualitative Detection and Quantitative Determination:
Application to Radiochemistry," Anal. Chem., 40(3), 586 (1968).
7. Kaiser, H., "Quantitation in Elemental Analysis (Part 2)," Anal. Chem., 42(4), 26A
(1970).
8. Kaiser, H., Z. Anal. Chem., 209, 1 (1965).
9. Rice, J.K., "Analytical Issues in Compliance Monitoring," Environ. Sci. Technol.,
14(12), 1455 (1980).
10. Keith, L.H., et al., "Guidelines for Data Acquisition and Data Quality Evaluation in
Environmental Chemistry," Anal. Chem., 55(14), 2210 (1983).
11. Glaser, J.A., D.C. Forest, G.D. McKee, S.A. Quane, and W.L. Budde, "Trace
Analysis for Wastewaters," Environ. Sci. Technol., 11(12), 1426 (1981).
12. USEPA, "Appendix B to Part 136 - Definition and Procedure for the Determination of
the Method Detection Limit - Revision 1.11," Federal Register, 49(209), 43430,
Friday, October 26, 1984.
13. Maddalone, R.F., J.W. Scott, and M.D. Powers, "Aqueous Discharges from Steam-
Electric Power Plants: The Precision and Bias of Methods for Chemical Analysis,"
EPRI CS-3744, November 1984.
143
-------
14. Kempic, J.B., "Use of Water Supply Performance Evaluation Data to Calculate
Laboratory Certification Criteria and Practical Quantitation Limits for Inorganic
Contaminants," 12th Annual USEPA Conference on Analysis of Pollutants in the
Environment, May 10-11, 1989, Norfolk, VA.
15. USEPA, 52 FR 25690, July 8, 1987.
16. Britton, P.W., Statistician, Development and Evaluation Branch, Quality Assurance
Research Division, Environmental Monitoring Systems Laboratory, U.S. Environmental
Protection Agency, Cincinnati, Ohio, letter to Rick Brandes, Chief, Enforcement
Support Branch, Office of Water Enforcement and Permits, USEPA, Washington,
DC, October 22, 1990.
17. "Standard Practice for Determination of Precision & Bias of Applicable Methods of
Committee D-19 on Water," D2777-86, ASTM Standards of Precision and Bias for
Various Applications, Third Edition, 1988, pp. 47-60.
18. Maddalone, R.F., J.W. Scott, and J. Frank, "Round-Robin Study of Methods for Trace
Metal Analysis; Volume 1: Atomic Absorption Spectroscopy - Part 1," EPRI CS-5910,
Volume 1, August 1988.
19. Maddalone, R.F., J.W. Scott, and J. Frank, "Round-Robin Study of Methods for Trace
Metal Analysis; Volume 2: Atomic Absorption Spectroscopy - Part 2," EPRI CS-5910,
Volume 2, August 1988.
20. Maddalone, R.F., J.W. Scott, and N.T. Whiddon, "Round-Robin Study of Methods for
Trace Metal Analysis; Volume 3: Inductively Coupled Plasma-Atomic Emission
Spectroscopy," EPRI CS-5910, May 1991.
21. Keith, L.H., "Revising Definitions: Low-Level Analyses," Environmental Lab,
June/July 1992, p. 58.
22. Natrella, M.G., "Experimental Statistics," National Bureau of Standards Handbook 91,
October 1966, pp. 2-14, 2-15 and Table A-7, p. T-15.
23. Gibbons, R.D., F.H. Jarke, and K.P. Stoub, "Detection Limits: For Linear Calibration
Curves with Increasing Variance and Multiple Future Detection Decisions," Waste
Testing and Quality Assurance: Third Volume, ASTM STP 1075, C.E. Tatsch, Ed.,
American Society for Testing and Materials, Philadelphia, 1991.
24. Hubaux, A. and G. Vos, "Decision and Detection Limits for Linear Calibration
Curves," Anal. Chem., 42(8), 849 (1970).
144
-------
25. Clayton, C.A., J.W. Hines, and P.D. Elkins, "Detection Limits with Specified
Assurance Probabilities," Anal. Chem., 59(20), 2506 (1987).
26. Grant, C.L., A.D. Hewitt, and T.F. Jenkins, "Experimental Comparison of USEPA
and USATHAMA Detection and Quantitation Capability Estimators," American
Laboratory, 15, February 1991.
145
-------
Table 1
Detection Level Definitions From Various Sources
Author
Title
Definition
Authors' Statistical
Interpretation
Degrees
of Freedom
Currie (5)
Decision Limit
Detection Limit
Kaiser (6,7) Limit of Detection
Rice (8)* Critical Level
Limit of Detection
ACS-CEI (9) Limit of Detection
USEPA/EMSL (10, 11)   Method Detection Limit
Lc = 1.645σB
LD = 3.290σB
LOD = 3σ
Approximately a 5% risk of reporting zero values as detected for normal distributions, and
as high as 11% risk of reporting zero values as detected for asymmetric or broad
distributions. (Degrees of freedom: infinite)
0.5% risk of reporting zero value as detected or not reporting a real value as > 0.
(Degrees of freedom: infinite)
0.5% risk of reporting a true concentration as not detected. (Degrees of freedom: infinite)
7% chance of reporting zero value as detected or not reporting a real value as > 0.
(Degrees of freedom: infinite)
1% chance of reporting zero value as detected. (Degrees of freedom: 6)
* Rice also suggests using overall standard deviation data from interlaboratory studies on real, spiked samples.
146
-------
Table 2
EPRI RP 1851-1 Interlaboratory
Validation Studies
PART/ROUND      METHOD       ELEMENT
I. Round 1      GFAAS        As, Se
I. Round 2      GFAAS        Ni, Pb, Cr, Cu
II. Round 1     GFAAS        Cd
                CVAAS        Hg
                Flame AAS    Fe, Zn
III. Round 1    ICP-AES      Al, Ba, Be, B, Cd, Cr, Cu, Fe, Pb, Mn, Mo, Ni, V, Zn
147
-------
Table 3
Calculated Standard Deviations from Laboratory Results
                    Laboratory Results
Replicate #        Lab A    Lab B    Lab C    Lab D
1                   11       15       17       14
2                   13       14       16       15
3                   12       13       18       16
Lab Mean            12       14       17       15
Lab Std. Dev.        1        1        1        1

Pooled Standard Deviation = 1
Overall Mean = 14.5
Interlaboratory Standard Deviation = 2.1
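The two summary statistics above can be reproduced directly from the tabulated replicates. The sketch below is one plausible reading of the table, not the full ASTM D2777 estimation procedure: it pools the within-laboratory variances to obtain the pooled single operator standard deviation, and takes the spread of all twelve individual results about the overall mean as the interlaboratory standard deviation.

```python
# Sketch: reproducing the pooled single operator and interlaboratory standard
# deviations shown in Table 3 (simple reading of the table, not full ASTM D2777).
import numpy as np

results = {                      # three replicates per laboratory, from Table 3
    "Lab A": [11, 13, 12],
    "Lab B": [15, 14, 13],
    "Lab C": [17, 16, 18],
    "Lab D": [14, 15, 16],
}

lab_vars = [np.var(v, ddof=1) for v in results.values()]
pooled_sd = np.sqrt(np.mean(lab_vars))            # pooled single operator sd

all_values = np.concatenate(list(results.values()))
overall_mean = all_values.mean()
interlab_sd = all_values.std(ddof=1)              # spread of all single results about the grand mean

print(f"Pooled single operator sd = {pooled_sd:.1f}")        # 1.0
print(f"Overall mean = {overall_mean:.1f}")                   # 14.5
print(f"Interlaboratory sd = {interlab_sd:.1f}")              # 2.1
```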
148
-------
Table 4
Comparison of Pooled Single Operator
v. Interlaboratory Standard Deviation

Standard Deviation, µg/L*

ELEMENT    POOLED SINGLE OPERATOR    INTERLABORATORY
Cd         3.2                       9.2
Fe         54.2                      56
Mn         6.9                       12.9
Mo         6.6                       9.1
V          2.9                       10.0
Zn         10.0                      16.6

* USEPA Method Study 27, Surface Waters, soft digestion
standard deviation data at 100 µg/L for ICP-AES
149
-------
Table 5
Comparison of ACS-CEI Computational Approach
with Proposed EPRI Compliance Monitoring
Detection and Quantitation Levels
Detection and Quantitation Levels in River Water, µg/L*

           ACS-CEI (single operator)      EPRI (interlaboratory)
ELEMENT    MDL     RDL     RQL            CMDL    CMQL
Cd         15      30      60             14      31
Fe         28      56      112            28      61
Mn         12      24      48             17      36
Mo         15      30      60             59      123
V          15      30      60             29      61
Zn         23      46      92             48      111

* ACS-CEI values computed from EPRI RP 1851-1 pooled single operator data
at the USEPA estimated DL for ICP-AES; EPRI values from the interlaboratory
data from the same study (20).
150
-------
[Figure: measured value versus true concentration, showing the typical distribution of
measured values at each concentration; to calculate a CMDL and CMQL, the precision at
each level must be known and used.]
Figure 1. Distribution of Measured Values at Different Concentrations
151
-------
[Figure: interlaboratory standard deviation distribution and expected normal distribution of
measured values, each plotted against true concentration, with the CMDL and CMQL
indicated.]
Figure 2. Illustration of CMDL and CMQL Definitions
152
-------
[Figure: standard deviation (precision) versus true concentration, showing matrix "noise"
and instrument "noise" at low concentration and comparing a linear fit with a curvilinear
fit.]
Figure 3. Comparison of Linear and Curvilinear Fits to
Standard Deviation versus True Concentration Data
153
-------
[Figure: standard deviation (precision) versus true concentration for interlaboratory and
pooled single operator results; the interlaboratory standard deviation is typically about 2X
the pooled single operator value.]
Figure 4. Typical Relationship between Interlaboratory and
Pooled Single Operator Precision
154
-------
QUESTION AND ANSWER SESSION
MR. TELLIARD: Questions?
MR. SCHREINER: My name is Dave Schreiner, City of Phoenix.
For example, let's say you are running beryllium, and your detection limit is like
500 parts per trillion. We have been going around and around with QC about rounding. Let's
say your analysis shows you that you have got 450 ppt and your MDL is, you know, 500. Do
you round that up and say you report a number, or do you report less than or what?
MR. RICE: When one compares a number that you
get with a permitted value, you obviously have to establish some ground rules. In this context,
EPA is now, and I think we may hear in the next paper some discussions of this, trying to
establish some very real ground rules for how you get real numbers for compliance purposes
when you come up with values that are less than the detection level.
Whatever you do may be quite arbitrary, but everybody has to play by the same
ground rules. That is all I can say.
MR. SCHREINER: Okay, thanks.
MR. STANKO: George Stanko, Shell Development Company.
Jim, I understand everything you presented here, but would you give some
guidance as to how the data should be reported? Should a laboratory report any value below
CMQL? And what if the observation falls below that but above CMDL?
MR. RICE: The first step is getting some agreement
on a rational definition for a CMDL and a CMQL. That is all I really addressed in my paper.
The next stage is as I said earlier. EPA is addressing the reporting issue, and I
think it is going to be out for public comment at some point in the near future based on a
minimum level concept. Then the question is how do you define a minimum level (ML).
From my viewpoint, I think there is a strong rationale to define the ML as
equivalent to our CMQL, the quantitation level. The minimum level, as it appears will be
proposed together with its proper usage, will serve as a sort of Rosetta stone - it serves as an
acceptable way out of the box that occurs when you come up with health or aquatic risk based
limits for a permit that are less than what can be detected by the agreed methods of measurement.
155
-------
How do you handle a less-than value on a discharge monitoring report? That is what
the convention will be about.
How do you arrive at that convention that is your transition or transformation point
between a less than and a value?
EPA is proposing that anything less than the ML is zero, and that is what you
report. You don't report the other numbers. You report the zero. Then, anything that is at the
ML or above you report as a number, and that is quantified.
Therefore, when you make averages, or whatever statistics you are calculating, you
are using real numbers, because zero is a real number.
But that is not what this paper of ours was addressed to. It is first things first
which is to find out how you can come to a rational detection and a rational quantitation limit
in a compliance context where interlaboratory differences on split samples is a real problem.
MR. TOM: Otoyo Tom from University of
Waterloo.
You mentioned that as the concentration increases, the standard deviation will also
increase in some cases. Can you explain more specifically why this would happen?
MR. RICE: Why does the standard deviation
increase with concentration?
MR. TOM: Yes.
MR. RICE: I start out with the easy answer. It
does. The why is simply that the major error terms that make up the interlaboratory
standard deviation are themselves a function of concentration. The method itself generally has
an ability to measure a parameter with an error of measurement that is a percentage of the total. Then,
as the value of the parameter increases, so does the actual standard deviation that is involved.
Now, that is kind of a circular argument. I have always found, at least in my
experience, that the standard deviation for a method increases with the increase in concentration,
and this is true especially for interlaboratory values.
MR. TOM: Thanks.
MS. MOORE: Marlene Moore with Advanced
Systems.
In your definitions, you refer to true concentration. What is true concentration?
156
-------
MR. RICE: Well, that is a good question. I can
explain it best by example in the EPRI round robins dealing with metals what it is we had to do
to answer that question. I think we have come about as close practically as you can at this time.
Everyone thinks his own reagent water is Simon pure, but as we know, that is not
true. When you are measuring metals at trace levels, it is not, and it is certainly not for lots of
other parameters.
What we did was to send everyone the best reagent water we could make, and
send it blind. The participants calibrated using their own reagent water and their calibration
standards that they made up. We asked them to report the answer they got - don't censor. If you
got a negative value, report it. We got negatives, as you would expect.
Actually, in a real situation, if I did that and I had the true zero and I sent it out
there, I am very likely to get as many negatives as I get positives if I have enough laboratories
in my sample.
MS. MOORE: Okay, but in the calculation when
you are actually calculating true...
MR. RICE: The next step is what do you do when
you are trying to deal with a real world situation? For the reagent waters that went out, we used
the consensus value of 20 or 30 labs on that reagent water as the true concentration of the
particular analyte.
MS. MOORE: So, we are continuing with the fact
that we are going to use a consensus value as our true concentration?
MR. RICE: Well, only for the blank, in effect.
MS. MOORE: What about the knowns that were
sent out when you spiked? Do you use a true...
MR. RICE: Well, the knowns we used were the
value of the spike, which was calculated by the control lab by weighted additions and verified
by subsequent analysis of samples that were taken randomly from lots of the split samples. The
true value for a spiked sample was the sum of the spike and the consensus blank concentration.
MS. MOORE: Currently, with performance
evaluation samples, though, when they are analyzed, we don't look at what the true value is. We
just use a statistical calculation to determine what our deviations and variances are based on what
all 600 labs or however many labs are associated with the study. Is that what you are trying to
do here with this is to just see what everybody can get in the same method?
157
-------
Because you do also look at method- and matrix-specific requirements. I guess,
in looking like this, what I am trying to decide is, do you take a given method and say therefore,
if I am running the ICP-AES method, I will get a true concentration being that I have spiked at
10 parts per billion and, therefore, this is what I should get when I do that?
How method- and matrix-specific are you? I mean, now we can run five different
methods for cadmium, and we might get different values, but we all have to hit the same number.
MR. RICE: Well, the value that-first, you have to
start out with a traceable standard in the first place in making up the standards that you are
sending out. So, you have to view those that you are sending...they are split equally to
everybody...as the standards. And you have verified by one method or another alternatively and
by other control systems that you have confidence that that is, and you call it the true value.
Now, the only thing that is changed that we have done is to use the consensus
value for the reagent water that we sent out, which was the matrix that was used, and then add
the spike to that and call the total the true.
All of the standard deviations and the like and all of the reported measured values
are then reported against and analyzed with that as the true.
MS. MOORE: Okay, thank you.
MR. KEITH: Larry Keith from Radian Corporation.
Jim, my understanding of the ACS definition is it is a definition that allows you
to expect certain parameters from the data that you get. In that respect, it is more of a generic
kind of definition. It doesn't, to my understanding, specify the protocol which means that it
doesn't say that you have to use single operators.
If you used an interlaboratory protocol, then I think the definition would end up
being quite close to what you have proposed here.
MR. RICE: Well, I might say, Larry, there is a
problem with a definition if it can have alternative bases and the same acronym describes it. It
is really not very precise, and any time you see, an RDL or an RQL, unless somebody footnotes
it, you won't know what that number really means, what the significance is. It could be based
on one guy, one lab. It could be an average of a number of labs. It could be a full
interlaboratory.
What we are trying to do is to get a clear, unambiguous definition. I don't care
what acronym you give it, but the definition is what counts, not the acronym. The definition of
how that number was derived is clear, unequivocal, unambiguous; everybody understands
it.
158
-------
And you can't do that with the way you have divined...put up...divined, that was
a slip...with the way you have constructed your proposed RDL and RQL, because it could be any
number of things to different people.
MR. KEITH: Well, I agree that is exactly correct.
The protocol would have to be specified along with the definition which is what you have done
in this case.
MR. RICE: But we only have one protocol; you
have several. That is the difference.
MR. KEITH: Yes, that is true. One could have
different protocols.
MR. TELLIARD: We have one more, and then we
are calling it. It is the hard question now.
MR. MCCARTY: Yes. Harry McCarty, SAIC.
Jim, one comment real quick. In this day and age, is it really appropriate to talk
about a single operator precision?
I don't know any meaningful measurement that is being made for compliance
monitoring done by one person. There is an extraction guy for organics. There might be a
cleanup guy, and there is an instrument operator, and the lab manager thinks he has something
to do with it as well.
Every one of those people thinks they are the most critical in the process, and
many lab managers are finding out it is the guy who received the sample and forgot to put it in
the cooler who was probably the critical one.
A comment. It might help to get away from the concept of single operator.
MR. RICE: I would love it.
MR. MCCARTY: If you call it a single lab
measurement, it might be more meaningful.
MR. RICE: That is what it really should be, and
that...right now, ASTM's D2777 is in the process of revision, and some of the internal serious
discussions go to that issue. I think there is general consensus there now that we ought to
abandon the word operator. It is laboratory, single laboratory. It is intralaboratory precision that
you are talking about.
159
-------
MR. MCCARTY: One further suggestion. You
mentioned the minimum level, which is a concept that came out of Bill Telliard's methods in
general. One of the things that would be nice to see with your definition or whichever one is
ultimately adopted is to take what, in fact, the ACS committee in 1983 suggested which is that
you don't make measurements outside of the calibration range of a methodology, that when you
establish your quantitation level that you also specify that the method run a standard at the
concentration equivalent to that in the sample.
Your standards are made up, you know, in pure solvent or clean water or
something like that, but at some point, if you could then tie this concept back to running a
standard at that level so at least you have some concept that the instrument can see it. For mass
spec, it becomes critical in terms of evaluating the ability of the instrument to really identify that
compound.
The quantitation aspect of it you have covered here very well, but Bill's methods
at least specify...the minimum level is specified as that concentration in a sample equivalent to
what is in that standard following the procedure, and if you add that to it, I think you have got
the qualitative and the quantitative pieces tied off to a great extent, and I would like to suggest
that for the rewrite of D2777.
MR. RICE: A very good comment.
MR. TELLIARD: Thanks, Jim. Appreciate it.
MR. RICE: Thank you.
160
-------
MR. TELLIARD: I would like to introduce our next
speaker. We are going to move into metals now, and we are going to talk about some organo-
metals.
Our next speaker is from Midwest Research. Yan Liu is going to talk on organo-
metal analysis. After lunch, we will continue on the metals train which, right now, is a very
large issue for the Agency and has a lot of implications.
MR. LIU: Thank you.
I think some of us have had enough with MDLs. I am going to talk about
determination of organotin compounds by SFE with GC and atomic emission detection. The co-
authors for the paper are Dr. Viorica Lopez-Avila of MRI and Dr. Werner Beckert of EPA-Las Vegas.
We know that the industrial use of organotin compounds has increased
significantly over the last few decades. These compounds are commonly used as stabilizers for
polymers as wood preservatives and also as agricultural fungicides. They are also used in anti-
fouling paints.
Lots of these compounds are toxic. Their increased use and their subsequent
discharge into the environment have contributed to environmental pollution.
A number of analytical methods have been developed for determination and
isolation of these compounds. A typical procedure for determination and speciation of these
compounds usually consists of these steps:
First, the organotin compounds are extracted from the sample matrix with a solvent
extraction procedure. Because most of these organotin compounds exist as the ionic species in
the sample, a complexing agent is often used in the solvent extraction procedure.
Then, the extracts are often derivatized with a Grignard reagent to convert the
ionic organotin compounds into tetra alkyltin compounds. Finally, the derivatized extracts are
analyzed by GC with tin specific detectors such as AA or atomic emission detectors.
We know that the solvent extraction procedures are often time-consuming and
labor-intensive. Also, the use of toxic organic solvents in relatively large amounts can contribute
to environmental pollution.
In recent years, SFE has gained great popularity as a sample preparation technique.
SFE can be selective because the solvation power of a supercritical fluid can be controlled by
changing pressure and temperature.
161
-------
SFE is also relatively fast and efficient, because the supercritical fluid has high
diffusivity and low viscosity, compared to the common organic solvents.
SFE is also suitable for extraction of thermally labile analytes, because the
extractions can be carried out at relatively low temperatures.
In addition, SFE reduces the use of toxic organic solvents. Carbon dioxide, the
most commonly used supercritical fluid, is nontoxic, nonflammable, non-polluting, and relatively
inexpensive.
A number of organic pollutants have been successfully extracted from different
sample matrices by SFE. We at the Midwest Research Institute, California Operations have been
developing SFE-based sample preparation methods over the last few years under a contract with
EPA-Las Vegas. Recently, we developed a SFE/GC-AED procedure for determination of
organotin compounds in environmental samples.
In the procedure we developed, we use supercritical carbon dioxide or carbon
dioxide modified with a small amount of methanol, usually 5 percent, to extract organotin
compounds from solid samples, and then we use pentylmagnesium bromide to derivatize the
extract. Finally, we use a GC with atomic emission detector to analyze the extract.
This slide shows the block diagram for the SFE system that we used. We used
carbon dioxide or carbon dioxide modified with 5 percent methanol as the extraction fluid. A
syringe pump operating at a flow rate of 1 to 2 ml/min was used to deliver the extraction fluid.
The volume of the extraction vessel was 10 mL. It was kept in a vertical position,
and we used a fused silica capillary tubing as the restrictor.
The extracted analytes were collected with 5 ml of hexane or methanol.
For those of you who are not familiar with the atomic emission detector, I will go
through the system very briefly. It basically consists of an HP-5890 GC that is coupled to an
HP-5921A atomic emission detector (AED). This detector uses microwave induced helium
plasma as the excitation source.
The GC effluent enters the AED cavity, where the analytes are broken down into
atoms, and their atomic emissions are monitored using a grating spectrometer with a photodiode
array sensor.
The GC-AED conditions that we used are shown here. We used a HP-5 column
to separate the organotin compounds. The transfer line that interfaced the GC with the AED was
also a HP-5 column, and we monitored the atomic emission from carbon, tin, and hydrogen. The
carbon response and tin response were monitored with one sample injection, and the hydrogen
response was monitored with another sample injection.
162
-------
This slide shows the GC-AED chromatogram of an organotin standard solution.
The concentration of these organotin compounds was 2 ug/ml. These ionic tin compounds were
pentylated before the GC-AED analysis.
The top chromatogram shows the carbon response, the middle one shows the
hydrogen response, and the bottom one shows the tin response.
In this standard, we also added hexadecane and docosane. The responses of these
alkanes were observed with either carbon channel or hydrogen channel, but their responses were
not observed with the tin channel.
This specific response provided by the tin channel is very useful in analysis of the
SFE extracts generated from the environmental samples, because these extracts often contain many
other organic compounds. The use of a tin-specific response will obviously facilitate the
identification and the quantitation of these organotin compounds.
One example is shown here. These are the chromatograms for an SFE extract.
This topsoil sample was spiked with seven ionic tin compounds, and the carbon response,
hydrogen response, and tin response are shown. It is clear that we have difficulty in identifying
these tin compounds by using either carbon or hydrogen response. On the other hand, the
identification and quantitation can be achieved easily with the tin-specific response.
The GC-AED calibration data are shown here. We found that the calibration
curves were usually linear within this concentration range from 10 to 2500 ng/ml. The detection
limit for these compounds was about 5 ng/ml.
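As a side note on how such calibration data are typically used, the sketch below fits a straight line to hypothetical response-versus-concentration points spanning the stated 10 to 2500 ng/mL range and inverts the fitted line to quantify an unknown extract; the data points and response values are made up for illustration and are not the authors' actual calibration.

```python
# Illustrative sketch of a linear calibration fit (hypothetical data points);
# the talk reports linear GC-AED calibration from 10 to 2500 ng/mL.
import numpy as np

conc = np.array([10, 50, 250, 1000, 2500], dtype=float)   # ng/mL standards (hypothetical)
resp = np.array([21, 103, 512, 2040, 5110], dtype=float)  # tin-channel peak areas (hypothetical)

slope, intercept = np.polyfit(conc, resp, 1)               # least-squares straight line

unknown_response = 850.0
unknown_conc = (unknown_response - intercept) / slope      # back-calculated concentration
print(f"estimated concentration: {unknown_conc:.0f} ng/mL")
```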
Now I am going to show you some SFE results. We first investigated the
extraction of tetraalkyltin compounds from the spiked topsoil samples. These samples were
spiked with 2 ug/g of these six tetraalkyltin compounds.
We carried out extractions at different pressures and temperatures, using carbon
dioxide as the extraction fluid. The pressure ranged from 100 to 350
atmospheres, and the temperature ranged from 40 to 80° C.
We carried out dynamic extractions of 10 to 40 minutes.
With the exception of this experimental condition: 100 atmospheres, 10 minutes,
and 80 degrees, the recovery of tetraalkyltin compounds from spiked topsoil samples was
basically quantitative.
Again, more data are shown in this slide. These results indicate that the
tetraalkyltin compounds can be extracted with relatively low pressure and temperature and a
short extraction time. For example, with a 10-minute extraction at 100 atmospheres and 40
degrees, we can get quantitative recovery of these compounds.
163
-------
We then carried out experiments to extract ionic organotin compounds. The SFE
conditions that we used are shown here. The extraction pressure was 450 atmospheres, and the
temperature was 80° C.
We performed a 30-minute static extraction followed by a 20-minute dynamic
extraction. Carbon dioxide or carbon dioxide modified with 5 percent methanol was used as the
extraction fluid.
We also used a complexing agent: sodium diethyldithiocarbamate. We added the
complexing agent to the spiked topsoil samples to complex the ionic organotin compounds to see
if the use of the complexing agent has any effect on the recovery of these compounds.
This slide shows the recovery of ionic organotin compounds from a spiked topsoil
sample by SFE without using the complexing agent. We can see that the recovery for the
trialkyltin compounds, including trimethyltin chloride, triethyltin bromide, and tributyltin iodide,
ranged from about 50 percent to about 75 percent.
On the other hand, the recoveries for dialkyltin compounds and monoalkyltin
compounds like dimethyltin, dibutyltin, diphenyltin, and butyltin trichloride were very low. They
ranged from less than 10 percent to about 20 percent.
These results were found with either carbon dioxide as the extraction fluid or
carbon dioxide modified with 5 percent methanol as the extraction fluid.
On the other hand, when we mixed the spiked samples with a small amount of
complexing agent: sodium DDC, prior to the SFE extraction, we observed a significant
improvement in the recovery of these ionic organotin compounds from spiked topsoil samples.
These data were obtained with carbon dioxide modified with 5 percent methanol as the extraction
fluid.
For these trialkyltin compounds, we can see that the recovery improved by about 20
to 30 percent. For these dialkyltin compounds and also the monoalkyltin compounds, we observed
significant improvements in the extraction efficiency. For example, for the dibutyltin compounds,
the recovery improved from about 10 percent to over 90 percent.
We also observed some improvement in the recovery for the diphenyltin and the
butyltin. These are somewhat less dramatic.
This slide compares the effect of the extraction fluid. This compares the
recoveries that we obtained with either carbon dioxide alone or carbon dioxide modified with 5
percent methanol. We didn't observe any significant difference using these two extraction fluids
with the exception of butyltin trichloride.
164
-------
In conclusion, our study indicated that the tetraalkyltin compounds can be extracted
quantitatively from spiked topsoil samples by SFE under moderate pressure and temperature
conditions.
Secondly, the ionic organotin compounds can be extracted by complexation SFE,
that is, with the use of complexing agents.
We believe that SFE followed by GC-AED is a promising analytical technique for
determination of organotin compounds in environmental samples.
Finally, I would like to acknowledge the financial support from EPA-Las Vegas
for this research. I would also like to thank Marcela Alcaraz from MRI and Joe Tehrani from Isco,
who provided the SFE system that we used in this investigation.
Thank you. Any questions?
MR. TELLIARD: Questions? Any questions?
(No response.)
MR. TELLIARD: Thank you, Yan, very much.
165
-------
COMBINATION OF SUPERCRITICAL FLUID EXTRACTION WITH
CAPILLARY GAS CHROMATOGRAPHY AND ATOMIC EMISSION
DETECTION FOR THE DETERMINATION OF ORGANOTIN
COMPOUNDS IN ENVIRONMENTAL SAMPLES
YAN LIU AND VIORICA LOPEZ-AVILA, Midwest Research Institute, California
Operations, 625-B Clyde Avenue, Mountain View, CA 94043
WERNER F. BECKERT, U.S. Environmental Protection Agency, Environmental
Monitoring Systems Laboratory, 944 East Harmon Avenue, Las Vegas, Nevada 89119
In the analysis of environmental samples for organic compounds, sample
preparation methods involving conventional solvent extraction are usually time- and
labor-intensive. Also, the use of organic solvents, often in relatively large amounts,
is of environmental concern because of fugitive emissions and solvent waste handling
and disposal. Supercritical fluid extraction (SFE) has gained popularity in recent
years as a sample preparation technique. The commonly used supercritical fluid
carbon dioxide is nontoxic, nonflammable, nonpolluting, and relatively inexpensive.
In addition, SFE is relatively fast, when compared to conventional solvent extraction,
and its selectivity can be easily controlled. SFE techniques have been used
successfully to extract a variety of organic pollutants from solid matrices.
A number of organometallic compounds have found industrial use, and their
releases into the environment are contributing to environmental pollution. However,
the isolation of organometallic compounds from environmental samples and their
determination often present problems. We have successfully extracted organotin
compounds from spiked topsoil samples by SFE and analyzed the extracts by gas
chromatography with atomic emission detection. A novel approach was used
involving the use of complexing agents in the supercritical fluid to improve the
extraction of ionic organotin compounds. The effects of pressure, temperature,
extraction time, and modifier on the recovery of the organotin compounds from spiked
soil samples will be discussed.
NOTICE: Although the research described in this paper has been funded wholly by
the U.S. Environmental Protection Agency through Contract No. 68-C1-0029 to
Midwest Research Institute, it has not been subjected to Agency review. Therefore,
it does not necessarily reflect the views of the Agency. Mention of trade names or
commercial products does not constitute endorsement or recommendation for use.
166
-------
Combination of Supercritical Fluid Extraction with Capillary
Gas Chromatography and Atomic Emission Detection
for the Determination of Organotin Compounds
in Environmental Samples
Yan Liu1, Viorica Lopez-Avila1, and Werner F. Beckert2
1Midwest Research Institute, California Operations, 625-B Clyde Avenue, Mountain View, CA 94043
2U.S. Environmental Protection Agency, 944 East Harmon Avenue, Las Vegas, Nevada 89119
-------
Industrial Applications of Organotin Compounds
• Stabilizers for PVC Polymers (e.g., diorganotin compounds)
• Wood Preservatives (e.g., triethyltin hydroxide)
• Agricultural Fungicides (e.g., triphenyltin hydroxide)
• Antifouling Paints (e.g., bis(tributyltin) oxide)
-------
Typical Procedures for Determination of Organotin
Compounds in Environmental Samples
• Extraction of Organotin Compounds from Sample Matrices by
Complexation Solvent Extraction.
• Derivatization of Extracts with Grignard Reagent (RMgX).
• Analysis of Derivatized Extracts By GC with Atomic Absorption or
Atomic Emission Detectors.
-------
Advantages of Supercritical Fluid Extraction
Selective — Controllable Solvation Power
Fast and Efficient — High Diffusivity and Low Viscosity
Suitable for Thermally Labile Analytes
Reduced Use of Toxic Solvents — CO2 is Nontoxic,
Nonflammable, Nonpolluting, and Relatively Inexpensive
-------
SFE/GC-AED Procedures for Determination of Organotin
Compounds in Environmental Samples
• Extraction of Organotin Compounds from Sample Matrices by
Supercritical CO2 or CO2 Modified With Methanol.
• Derivatization of Extracts with Grignard Reagent (C5H11MgBr).
• Analysis of Derivatized Extracts By GC with Atomic
Emission Detection (GC-AED).
-------
SUPERCRITICAL FLUID EXTRACTION SYSTEM BLOCK DIAGRAM
[Block diagram components:]
Pump — Syringe pump (100 mL), flow rate of 1 - 2 mL/min
Extraction vessel — 10 mL vessel (1.5-cm ID x 6-cm length), vertical position
Restrictor — Deactivated fused-silica capillary tubing (50-µm ID, 375-µm OD, 40-cm length)
Collection solvent — 5 mL hexane or methanol
-------
GC-AED Block Diagram
[Block diagram components: injection port; gas chromatograph; He gas; reagent gas; microwave generator; microwave cavity with plasma; spectrometer.]
-------
GC-AED OPERATING CONDITIONS
(GC Parameters)
Injection port temperature    250°C
Injection port                Splitless
Injection volume              1 µL
Splitless time                60 sec
Column                        HP-5, 15-m length x 530-µm ID x 0.88-µm film thickness
Carrier gas flow rate         6.0 mL/min helium
Temperature program           50°C (3-min hold) to 230°C (4-min hold) at 20°C/min
-------
GC-AED OPERATING CONDITIONS
(AED Parameters)
Transfer line                       HP-5 column
Transfer line temperature           250°C
Cavity temperature                  250°C
Solvent vent begin                  0.01 min
Solvent vent end                    4.0 min
Spectrometer window purge           2 L/min nitrogen
Helium makeup gas flow              220 mL/min
Makeup and reagent gas pressures    70 psi helium, 65 psi hydrogen, 25 psi oxygen
Element wavelength                  270.651 nm for carbon (reagent gases: hydrogen and oxygen)
                                    247.857 nm for tin (reagent gases: hydrogen and oxygen)
                                    656.302 nm for hydrogen (reagent gas: oxygen)
-------
GC-AED Chromatograms of Organotin Standards
[Chromatograms shown on the C-248 (carbon), H-656 (hydrogen), and Sn-271 (tin) emission channels; numbered peaks identified below.]
1 Trimethyltin chloride
2 Tetraethyltin
3 Triethyltin bromide
4 Dimethyltin dichloride
5 Tetrabutyltin
6 Tributyltin iodide
7 Dibutyltin dichloride
8 Butyltin trichloride
9 Diphenyltin dichloride
176
-------
Chromatograms of an SFE extract of a topsoil sample
spiked with seven ionic organotin compounds
[Chromatograms shown on the C-248 (carbon), H-656 (hydrogen), and Sn-271 (tin) emission channels; numbered peaks identified below.]
1 Trimethyltin chloride
2 Tetraethyltin
3 Triethyltin bromide
4 Dimethyltin dichloride
5 Tetrabutyltin
6 Tributyltin iodide
7 Dibutyltin dichloride
8 Butyltin trichloride
9 Diphenyltin dichloride
177
-------
GC-AED CALIBRATION DATA FOR ORGANOTIN COMPOUNDS

Compound                              GC retention   Calibration curve      Calibration curve
no.    Compound name                  time (min)     linear range (ng/mL)   correlation coefficient
1      Trimethyltin chloride          7.10           10-2,500               0.998
2      Tetraethyltin                  7.39           10-2,500               0.999
3      Triethyltin bromide            9.21           10-2,500               0.999
4      Dimethyltin dichloride         9.54           10-2,500               0.998
5      Tetrabutyltin                  11.15          10-2,500               0.999
6      Tributyltin iodide             11.58          10-2,500               1.000
7      Dibutyltin dichloride          11.98          10-2,500               0.999
8      Butyltin trichloride           12.39          10-2,500               0.999
9      Diphenyltin dichloride         16.59          20-2,500               0.997
-------
Recovery of Tetraalkyltins by Supercritical Fluid
Extraction with CO2 from Topsoil Samples
(Topsoil Samples spiked with 2 ug/g Tetraalkyltin Compounds)
[Bar chart of percent recovery under five sets of experimental conditions:
100 atm, 10 min, 40 °C; 350 atm, 10 min, 40 °C; 100 atm, 40 min, 40 °C;
350 atm, 40 min, 40 °C; 250 atm, 20 min, 60 °C.
Compounds: Tetraethyltin, Triethylpentyltin, Dimethyldipentyltin, Tetrabutyltin,
Tributylpentyltin, Butyltripentyltin.]
-------
Recovery of Tetraalkyltins by Supercritical Fluid
Extraction with CO2 from Topsoil Samples
(Topsoil Samples spiked with 2 ug/g Tetraalkyltin Compounds)
[Bar chart of percent recovery under four sets of experimental conditions:
100 atm, 10 min, 80 °C; 350 atm, 10 min, 80 °C; 100 atm, 40 min, 80 °C;
350 atm, 40 min, 80 °C.
Compounds: Tetraethyltin, Triethylpentyltin, Dimethyldipentyltin, Tetrabutyltin,
Tributylpentyltin, Butyltripentyltin.]
-------
SFE Conditions for Extraction of Ionic
Organotin Compounds
Pressure:            450 atm
Temperature:         80 °C
Time:                30 min static followed by 20 min dynamic
Fluid:               CO2 or CO2 + 5% methanol
Complexing Agent:    with or without sodium diethyldithiocarbamate (NaDDC)
-------
Recovery of Ionic Organotin Compounds From Spiked
Topsoil Samples By SFE
[Bar chart of percent recovery with carbon dioxide and with carbon dioxide + 5% methanol
for: trimethyltin chloride, triethyltin bromide, tributyltin iodide, dimethyltin dichloride,
dibutyltin dichloride, diphenyltin dichloride, and butyltin trichloride.]
-------
Effects of Adding Complexing Agent (NaDDC) on Recovery
of Ionic Organotin Compounds by SFE
[Bar chart of percent recovery with and without complexing agent for: trimethyltin chloride,
triethyltin bromide, tributyltin iodide, dimethyltin dichloride, dibutyltin dichloride,
diphenyltin dichloride, and butyltin trichloride.]
-------
Recovery of Ionic Organotin Compounds from Spiked
Topsoil Samples by Complexation SFE
[Bar chart of percent recovery with carbon dioxide and with carbon dioxide + 5% methanol
for: trimethyltin chloride, triethyltin bromide, tributyltin iodide, dimethyltin dichloride,
dibutyltin dichloride, diphenyltin dichloride, and butyltin trichloride.]
-------
Conclusions
• Tetraalkyltin Compounds Can be Extracted Quantitatively from Spiked
Soil Samples by Supercritical CO2 at Moderate Pressures and
Temperatures (e.g., 100 atm and 40 °C).
• Ionic Organotin Compounds Can be Extracted by Complexation SFE from
Spiked Soil Samples with Recoveries Ranging from 70 to 90 Percent.
• SFE Followed by GC-AED is a Promising Analytical Technique for
Determination of Organotin Compounds in Environmental Samples.
-------
Acknowledgement
• U.S. EPA-Las Vegas (Contract 68-C1-0029)
• Marcela Alcaraz of Midwest Research Institute
• Joe Tehrani of Isco Inc. (Lincoln, Nebraska)
-------
MR. TELLIARD: I would like to thank all of our
morning speakers. I think we can give them a round of applause, don't you?
It is lunch time. Please get back here at 1:30 so we can start the afternoon
session. There are a lot of restaurants across the way and in the hotel. Thanks for your attention
this morning. See you after lunch.
(WHEREUPON, a luncheon recess was taken.)
187
-------
188
-------
MR. TELLIARD: We would like to start this
afternoon's session, please, so if we could, take some seats. It is kind of like church. There are
a lot of open pews down front for those of you who are hanging out in the back trying to leave
early.
We would like to start off this afternoon's program with a discussion of metals and
what we are going to call trace determinations or low levels or ultraclean or clean or whatever
those terms mean. I am not too sure. Anyhow, the media is metal, and most of the media we
are going to be talking about is water.
Our first speaker is Ted Martin from EMSL-Cincinnati. Ted has been at this
laboratory under many names, but the present one is the Environmental Monitoring Systems
Laboratory, and Ted has been dealing with metals analysis for, I guess, most of his career which
has been very long and very fruitful.
He is going to talk a little bit to you today about using ultrasonic nebulization for
determining metals by ICAP.
MR. MARTIN: Thank you, Bill.
189
-------
190
-------
DETERMINATION OF METALS IN WATER
BY ULTRASONIC NEBULIZATION
ICP-ATOMIC EMISSION SPECTROMETRY
Theodore D. Martin, Carol A. Brockhoff
and
John T. Creed
U.S. ENVIRONMENTAL PROTECTION AGENCY
OFFICE OF RESEARCH AND DEVELOPMENT
ENVIRONMENTAL MONITORING SYSTEMS LABORATORY
CINCINNATI, OHIO 45268
Disclaimer Notice: The mention of trade names or commercial products in this
presentation does not constitute endorsement but is only given as information relevant to this
presentation.
For more than ten years now, inductively coupled plasma-atomic emission
spectrometry (ICP-AES) has been recognized as a mature analytical technique for the
determination of trace elements in environmental samples. It provides rapid, reliable data and
can be used for both analyte screening and quantitative analysis. Many laboratories consider
ICP-AES an essential tool necessary for providing complete analytical service at reasonable cost.
The inductively coupled plasma is a unique source of high energy. The resulting
analyses are virtually free of chemical interference with analytical linearity that covers five orders
of magnitude. However, one drawback facing complete acceptance of ICP-AES has been the
higher limits of detection when compared to graphite furnace atomic absorption.
Current EPA ICP-AES methodology used for compliance monitoring does not
specifically state that pneumatic nebulization must be used in all analyses, but from the direction
given in Method 200.7, its intended use is obvious. Pneumatic nebulizers of various designs have
proven to be very rugged and useful in the analysis of complex matrices. Unfortunately, they
are also one of the limiting factors in achieving lower detection limits. When using pneumatic
nebulization, a general approach to lowering detection limits in the analysis of aqueous samples
has been to preconcentrate the sample by evaporation prior to analysis. This has been useful,
particularly in the analysis of drinking water, but the procedure is considered time consuming,
especially when the nature of the sample does not require digestion prior to analysis.
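To put numbers on that trade-off, a minimal sketch of the detection-limit gain from evaporative preconcentration is shown below; the starting MDL and the volumes are illustrative assumptions.

    # Effective detection limit after evaporative preconcentration:
    # concentrating the sample by a factor F lowers the working detection
    # limit by the same factor (illustrative numbers only).
    def effective_mdl(instrument_mdl_ug_L, initial_volume_mL, final_volume_mL):
        factor = initial_volume_mL / final_volume_mL
        return instrument_mdl_ug_L / factor

    # e.g., evaporating 100 mL of drinking water down to 20 mL gives a 5x gain.
    print(effective_mdl(10.0, 100.0, 20.0))  # 2.0 ug/L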
A more direct approach to lowering ICP-AES detection limits is the use of a more
efficient nebulizer so more analyte will be transported to the plasma. The continued development
work on ultrasonic nebulization has been one response to this need. Although both pneumatic
and ultrasonic nebulizers generate an aerosol, their functional aspects are somewhat different.
191
-------
In pneumatic nebulization, a pressurized flow of argon is used to aspirate and
break up the liquid into small droplets, while in ultrasonic nebulization, as illustrated on Slide
1 (a schematic of the CETAC U-5000), liquid is peristaltically pumped through the sample inlet
and flows over a chemical-resistant plate surface covering a piezoelectric transducer that is
generating an acoustical wave perpendicular to the flow of liquid. The energy from this
acoustical wave fragments the liquid into a fine mist aerosol. The transducer in the CETAC U-
5000 is powered by a small auto-tuned radio-frequency generator operating at 35 watts incident
power at a frequency of 1.4 MHz. Heat from the transducer is dissipated through the heat sink
which is air cooled. Argon entering the spray chamber at the gas inlet carries the fine mist
aerosol through the heating tube where most of the water is stripped from the droplets by
evaporation. As the dry aerosol and water vapor proceed into the condenser, the water vapor
condenses and is removed at the drain allowing a relatively dry aerosol of analyte to be
transported to the plasma. Removal of water from the aerosol is essential to reduce solvent
loading of the plasma and to stabilize its performance.
The ultrasonic nebulizer has four operational parameters: sample flow rate, argon
flow rate, desolvation temperature, and coolant temperature. Of the four, the argon flow rate and
desolvation temperature are considered the two most critical parameters affecting performance.
Prior to the development of Method 200.15, it was decided to incorporate the same
uniform sample preparation procedure used in other EMSL-Cincinnati spectrochemical methods.
This decision dictated that the calibration standards be prepared in acid solution containing 2%
(V/V) nitric and 1% (V/V) hydrochloric acids.
This was readily accomplished with an appropriate dilution of the calibration
standards normally used with pneumatic nebulization. These standard solutions had been
previously verified for accuracy and were of an analyte combination known to be free of
interelement spectral interference. Using pneumatic nebulization they showed excellent
agreement to a quality control check solution purchased from SPEX Industries. However, when
verification was attempted using ultrasonic nebulization, the analysis data for arsenic, chromium,
and selenium in the SPEX QC solution were elevated by 80, 46, and 35 percent,
respectively (see Slide 2).
From the investigation of this elevated response, it was concluded that the +5
valence state of arsenic and the +3 valence state of chromium must form a more stable dry
aerosol during desolvation. This allows more analyte to reach the plasma, thus giving greater
signal intensity.
In the case of selenium, the explanation of enhancement is probably similar.
However, the cause is a concomitant effect from other analytes in the QC solution.
To determine that the ultrasonic transducer was not a factor in the enhancement,
the transducer assembly was removed from a BAIRD Corporation UDX unit and replaced with
the THERMO JARRELL ASH fixed crossflow nebulizer spray chamber assembly. This
192
-------
convenient change allowed desolvation to be applied to the aerosol generated from the pneumatic
nebulizer.
It should be noted that the heating tube on the UDX was intended to be operated
at a temperature of 250 degrees centigrade and was not altered for this investigation.
Given on Slide 3 is a brief data summary of that work. Please note the four
operational conditions under which intensity counts were collected: heat on, chiller on; heat on,
chiller off; heat off, chiller on; and heat off, chiller off. (The chiller supplies the coolant to the
condenser.) Except for these operational changes, all other analytical conditions were held
constant.
It is apparent from the data that the intensity counts for As+3 and hexavalent
chromium, for the conditions heat on, chiller on and heat on, chiller off are much lower than the
corresponding counts for As+5 and trivalent chromium, collected under the same conditions and
listed in the same column. Note the greatest difference occurs under the conditions heat on,
chiller on which is the normal operating condition of an ultrasonic nebulizer. Also, the data
given in the right-hand column for the conditions where the heat is off show excellent agreement
between the two valence states of each element.
The elevated response of the SPEX QC solution is attributed to this difference in
valence state response, since it is known that As+5 and trivalent chromium compounds were used
by SPEX in the preparation of the QC solution, while As+3 and hexavalent chromium
compounds were used in the preparation of the EMSL-Cincinnati calibration standards.
To eliminate the difference between valence states, 50 percent hydrogen peroxide
was added to 1 mg/L solutions of each analyte and analyzed using the CETAC U-5000 ultrasonic
nebulizer. The intensity counts collected are given on Slide 4.
The As+3 and the Cr+6 intensity counts for the solution without hydrogen
peroxide, are again much lower than the intensity counts for As+5 and Cr+3 in the corresponding
single-element solutions. However, when hydrogen peroxide is added, the As+3 is oxidized to
As+5, and the hexavalent chromium is reduced to trivalent chromium. The analysis of these
solutions now provides the same level of intensity counts as the As+5 and trivalent chromium
single-element solutions. Also, it is worth noting that the addition of hydrogen peroxide to the
As+5 and Cr+3 had no effect on the signal response.
The actual phenomenon that occurs during desolvation causing the two valence
states of arsenic and chromium to give different signal intensities cannot be explained at this
time. Two possible answers for the lower intensity counts may be that a portion of the analyte
is either plating onto the walls of the heating tube or recombining with the condensed water
vapor and lost to waste.
193
-------
In any event, it appears that the chemical nature of these analytes and the dry
aerosol resulting from desolvation can affect analyte transport to the plasma. As stated
previously, this comparative work was done in a mixed acid solution of nitric and hydrochloric
acids. A similar phenomenon was also observed in single-acid and no-acid solutions.
To determine if this same effect occurs in environmental waters, two aliquots from
four different water sources were fortified to a concentration of 1 mg/L with either As+3 or
As+5. These aliquots were then analyzed in the same manner using the CETAC U-5000. The
intensity counts for those analyses are given on Slide 5.
It is obvious from the data given in the right-hand column that As+5 responds the
same in environmental waters as in the standard solution. The same is true for As+3 samples
in the 1st column, except for the Cincinnati, Ohio tap water. Because it was known that of the
four waters, only the tap water was chlorinated, a sample was collected at the filtration plant
prior to chlorination. When this sample (Tap - No Cl2) was fortified with As+3, similar data to
the other non-chlorinated waters were obtained. From other studies, it is known that chlorine will
oxidize As+3 to As+5 in waters of neutral pH. Therefore, it is not surprising to find the same
occurrence in an acidified chlorinated water.
Since acid preservation alone does not oxidize As+3 to As+5, aliquots from two
of the environmental waters fortified to a concentration of 2 mg/L with either As+3 or As+5 were
subjected to the total recoverable digestion. These digested aliquots were then analyzed along
with non-digested aliquots taken from the same solutions. Following analysis, hydrogen peroxide
was added to the analysis solutions, which were then reanalyzed. The instrument was calibrated using
an As+5 standard solution.
Data from these analyses are presented on Slide 6. It is apparent from the data
that either sample digestion or the addition of hydrogen peroxide can be used to convert As+3
to As+5 prior to analysis.
On Slide 7 analyses data are given for the same environmental waters where
aliquots had been fortified with 2 mg/L with either hexavalent or trivalent chromium. The
instrument was calibrated using a trivalent chromium standard solution. Note that the analysis
data for the single element hexavalent standard without peroxide gave low recovery, as would
be expected. However, when a cation is added such as sodium (5 ppm) that will bind with the
negative chromate ion and form a chromate compound, a larger percentage of hexavalent
chromium reaches the plasma. When an additional amount of sodium (80 ppm) is added,
complete recovery is obtained.
Although it is evident that there is equal recovery of both valence states of
chromium from environmental waters, the addition of peroxide would be helpful, because less
washout time is required for higher levels of trivalent chromium, thereby reducing possible
memory effects.
194
-------
From the data presented, it must be obvious that the preparation of arsenic and
chromium calibration standards is critical and that these analytes must be in the same valence
state in standards and samples alike. For this reason, the addition of hydrogen peroxide to both
samples and the calibration standards containing arsenic and chromium will be a requirement of
Method 200.15.
The enhancement of selenium is a concomitant effect from other analytes present
in solution. Normally, a concomitant effect is recognized as an interelement interference where
the interferant is present at much higher concentration than the analyte of concern.
If a ratio is calculated between the sum of the other analytes to selenium in the
multianalyte standard given on Slide 8, the ratio is only 6.4. Also, the ratio between the other
analytes and selenium in ICP-19 is only 18. However, from the data given on Slide 8, it is
apparent that when the instrument is calibrated using the multianalyte standard and the three
standard solutions listed, each containing 1 mg/L of selenium, are analyzed as samples,
significantly different selenium concentrations are determined for ICP-19 and the single-element
selenious acid solution.
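As a check on those ratios, a short sketch of the arithmetic for the multianalyte standard is given below, using the concentrations listed on Slide 8.

    # Ratio of the summed concomitant analytes to selenium in the multianalyte
    # calibration standard (concentrations in mg/L, from Slide 8).
    multianalyte_std = {
        "Ag": 0.2, "As": 2.0, "B": 0.2, "Ba": 0.2,
        "Ca": 2.0, "Cd": 0.4, "Cu": 0.4, "Sb": 1.0,
    }
    se_conc = 1.0  # mg/L selenium in the same standard

    ratio = sum(multianalyte_std.values()) / se_conc
    print(round(ratio, 1))  # 6.4, as quoted in the text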
To determine which of the other analytes cause the enhancement, 1 ppm single
element solutions of the other 18 analytes were prepared and fortified with 1 ppm of selenium.
The instrument was calibrated using a single element solution of selenious acid.
As indicated by the effect from arsenic listed at the top of Slide 9, responses
ranged from no effect from elements such as arsenic, antimony, cadmium, and thallium to
significant enhancements like that from aluminum and iron. When the concentration of the
interferant was increased, very little or no change occurred as shown for the 10 ppm solutions
of aluminum and iron. However, when the analytes were combined as shown by the 20 ppm
solution of aluminum and iron, there is additional enhancement. But this increase from
combining analytes also has limitations, as can be seen from the 60 ppm solution of aluminum,
beryllium, chromium, iron, titanium and vanadium. These analytes were the elements that
provided the greatest enhancement as single element solutions.
At the bottom of Slide 9 are the determined concentrations for ICP-19 which
contains 1 ppm selenium and ICP-19 fortified with an additional 1 and 2 ppm selenium. These
data confirm that the enhancement effect is nearly linear. Unfortunately, a concomitant matrix
effect can only be circumvented either by matrix matching the calibration standard to the sample
or analysis by method of standard additions.
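For readers unfamiliar with it, a minimal sketch of the method of standard additions is given below; the spike levels and instrument responses are illustrative assumptions, not data from this work.

    # Minimal method-of-standard-additions sketch (illustrative numbers only).
    # Known amounts of analyte are spiked into aliquots of the sample, the
    # response is regressed against the added concentration, and the line is
    # extrapolated back to zero response to recover the original concentration.
    import numpy as np

    added = np.array([0.0, 0.5, 1.0, 2.0])              # mg/L analyte added to the sample
    response = np.array([410.0, 615.0, 820.0, 1230.0])  # instrument counts (assumed)

    slope, intercept = np.polyfit(added, response, 1)
    original_conc = intercept / slope  # magnitude of the x-intercept = concentration in sample
    print(round(original_conc, 2))     # ~1.0 mg/L for these made-up numbers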
Slide 10 shows the enhancement effect on selenium from calcium, sodium, and
magnesium, the major cation constituents present in all ambient waters. The enhancing effect
from calcium is most prominent up to 50 mg/L, while the maximum effect from sodium would
appear to occur at a concentration above 300 mg/L.
195
-------
For magnesium, it could be estimated that the maximum effect occurs at a
concentration between 10 and 50 mg/L. Also, the selenium response in the 300 mg/L magnesium
solution would indicate that a possible reverse trend in enhancement will occur when these
constituents are present in very high concentrations.
To avoid methods of standard addition for the analysis of drinking water and
ambient waters, a more practical approach is to prepare a calibration standard in a matrix of the
major constituents. The concentration of the major constituents in the standard should
approximate that of the sample matrix.
After analyzing combination solutions of calcium, sodium, and magnesium where
the concentrations of all three analytes were varied, a compromise combination of 40 mg/L
calcium, 10 mg/L magnesium, and 20 mg/L sodium was selected as the preferred matrix.
Analysis of this standard as a sample is displayed at the top and right-hand side of Slide 11.
Also, on Slide 11 are the comparative data for the analysis of 1 mg/L selenium
in environmental waters using both a single element and mixed standard calibration. The
enhanced selenium data obtained from the single element calibration are given in the 1st column,
while the data from the mixed standard calibration are in the 2nd column. To the right of the
selenium data are the analyses data of the major constituents in these waters. The concentrations
listed for the rain water are attributed to storage in a concrete cistern. Please note that when ICP-
19 was analyzed using the mixed standard calibration, the determined value for selenium was
within 5 percent of the stated true value of 1 mg/L.
Given on Slide 12 is a partial listing of ultrasonic ICP-AES MDLs and MDLs
determined by ICP-mass spectrometry and stabilized temperature or platform graphite furnace
atomic absorption. All values are expressed in ug/L.
For the first two groups of seven analytes, all respective MDLs between the
techniques appear to be within a factor of 5 and are at a concentration that is sufficiently low to
ensure that the methods will provide accurate and reliable determinations at the listed MCL
concentration.
In the bottom group of the ultrasonic ICP-AES MDLs, only selenium appears to
meet the old criterion that the method MDL should be no greater than 1/5 of the MCL for use in compliance
monitoring of drinking water. Although MDLs can always be lowered by preconcentrating the
sample prior to analysis, the purpose of using ultrasonic nebulization is to avoid the extra time
and cost of sample processing.
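A quick sketch of that 1/5-of-the-MCL screen is shown below, using the µg/L values listed on Slide 12; the assignment of Pb, Sb, Se, and Tl to the bottom group is assumed from the discussion above.

    # Screen ultrasonic ICP-AES MDLs against the old "MDL <= 1/5 of the MCL"
    # rule of thumb, using the ug/L values listed on Slide 12.
    bottom_group = {
        # analyte: (MCL, ultrasonic ICP-AES MDL)
        "Pb": (15.0, 4.0),
        "Sb": (6.0, 3.0),
        "Se": (50.0, 10.0),
        "Tl": (2.0, 6.0),
    }

    for analyte, (mcl, mdl) in bottom_group.items():
        meets = mdl <= mcl / 5.0
        print(f"{analyte}: MDL {mdl} ug/L vs MCL/5 = {mcl / 5.0} ug/L -> "
              f"{'meets' if meets else 'does not meet'} the criterion")
    # Only Se (10 <= 10) passes, as noted above.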
Since the Office of Ground Water and Drinking Water now uses acceptance
criteria to determine adequate laboratory performance, the listing of ultrasonic ICP-AES as an
approved drinking water method may depend more on meeting the required performance
acceptance criteria than on reaching a particular MDL.
196
-------
The following is a brief summary of Method 200.15. The method is in draft form
in the EMMC format and is applicable to the analysis of 32 analytes. It is proposed for the
analyses of clean waters and currently is not intended to be used in the analysis of NPDES
discharge samples.
The Method 200.2 sample preparation procedure is included for total recoverable
analysis of samples containing particulate material and provides for direct analysis of drinking
waters where turbidity is less than 1 NTU.
As stated earlier, the addition of hydrogen peroxide to all samples and the
calibration standards containing arsenic and chromium will be required. Also, a specific
preparation of the selenium calibration standard will be required to compensate for matrix
enhancement.
The method will include the same required quality control common to all other
EMSL-Cincinnati spectrochemical methods. MDL, precision, and recovery data on waters
collected from five different sources will be provided as statements of performance and for
comparative use by other analysts in assessing their own laboratory performance. The method
will be included in a supplemental manual to Methods for the Determination of Metals in
Environmental Samples. This supplemental manual is also in draft form. However, it should
be available for review sometime this summer.
This concludes my presentation.
REFERENCES
1. FASSEL, V.A. and BEAR, B.R., Spectrochimica Acta, 41B, No. 10, 1089-1113, 1986.
2. OLSON, K.W., HAAS, W.J. Jr., AND FASSEL, V.A., Analytical Chemistry, Vol. 40,
No. 4, 632-637, April 1977.
3. TARR, M.A., ZHU, G., AND BROWNER, R.F., Applied Spectroscopy, 45/9, 1424-1432,
1991.
4. TARR, M.A., ZHU, G., AND BROWNER, R.F., Journal of Analytical Atomic Spectrometry,
Vol. 7, 813-817, September 1992.
MR. TELLIARD: Thank you, Ted. Wait, wait,
don't escape. They are slow getting to the mic. Any questions? Are you out there? (No
response.)
MR. TELLIARD: Thanks, Ted.
197
-------
198
-------
[Slide 1. Schematic of the CETAC U-5000 ultrasonic nebulizer.]
-------
ULTRASONIC NEBULIZATION
DESOLVATION EFFECT

SPEX ICP-19      RECOVERY, mg/L    PERCENT INCREASE
As               1.80              80%
Cr               1.46              46%
Se               1.35              35%

As & Cr - Attributed to Different Stable Valence States
Se - Concomitant Effect from Other Analytes

SLIDE 2
-------
PNEUMATIC NEBULIZATION
W/WO 250°C DESOLVATION
SINGLE ELEMENT SOLUTIONS

                                  INTENSITY COUNTS
                                  HEAT ON    HEAT OFF
ARSENIC - 2 ppm
  CHILLER ON      As +3              970       2900
                  As +5             1790       2930
  CHILLER OFF     As +3             1030       2350
                  As +5             1790       2400
CHROMIUM - 1 ppm
  CHILLER ON      Cr +3             4090       4250
                  Cr +6             1660       4150
  CHILLER OFF     Cr +3             4440       3400
                  Cr +6             1930       3350

SLIDE 3
-------
DESOLVATION EFFECT
SINGLE ELEMENT STANDARD SOLUTIONS

VALENCE STATE    1 mg/L STD            INTENSITY COUNTS
+5               H3AsO4                12830
                 H3AsO4 + H2O2         12760
+3               As2O3                  7950
                 As2O3 + H2O2          12620
+3               Cr(NO3)3              43270
                 Cr(NO3)3 + H2O2       43140
+6               CrO3                  30740
                 CrO3 + H2O2           43300

SLIDE 4
-------
ARSENIC
DESOLVATION EFFECT
ENVIRONMENTAL WATERS

SOLUTIONS            INTENSITY COUNTS
1 mg/L               As +3      As +5
STD                   8080      11000
CINTI TAP            11550      11970
TAP - No Cl2          9190      12280
RAIN WATER            8520      11800
WELL WATER            8960      11500
POND WATER            8990      11740

SLIDE 5
-------
ARSENIC
ENVIRONMENTAL WATERS
Fortified - 2 mg/L
Calibration - As +5

SAMPLE            NOT DIGESTED         TOTAL RECOVERABLE
SOLUTION          As +3    As +5       As +3    As +5
STD               1.25     2.19        -        -
RAIN WATER        1.20     2.10        2.12     2.21
WELL WATER        1.44     2.02        2.02     2.04

H2O2 Added
STD               2.08     1.98        -        -
RAIN WATER        2.04     2.04        1.99     2.08
WELL WATER        1.94     2.08        1.90     1.93

SLIDE 6
-------
Chromium
Environmental Waters
Fortified - 2 mg/L
Calibration - Cr +3

Solution                    2% HNO3 /     Acid +
                            1% HCl        H2O2
Cr +6 Std                   1.53          2.04
Cr +6 + 5 ppm Na            1.88          -
Cr +6 + 80 ppm Na           2.06          -
Cr +6 - Cinti. Tap          2.01          1.99
Cr +3 - Cinti. Tap          1.94          1.95
Cr +6 - Rain Water          2.01          1.99
Cr +3 - Rain Water          2.08          1.96
Cr +6 - Well Water          1.93          1.93
Cr +3 - Well Water          2.00          1.94
Cr +6 - Pond Water          1.90          1.99
Cr +3 - Pond Water          1.93          1.95

SLIDE 7
-------
SELENIUM
DESOLVATION EFFECT

CALIBRATION
MULTIANALYTE STD., mg/L
Ag 0.2    As 2.0    B 0.2    Ba 0.2    Ca 2.0
Cd 0.4    Cu 0.4    Sb 1.0   Se 1.0

ANALYSIS SOLUTIONS       Se, mg/L
CALB. STD.               1.01
EPA QC ICP-19            1.35
1 ppm KSeCX              0.73

SLIDE 8
-------
SELENIUM
CONCOMITANT - DESOLVATION EFFECT
Calibration - 1 mg/L H2SeO3

1 ppm Se Solutions                 Se, mg/L    Percent Increase
1 ppm As                           1.01        1%
1 ppm Al                           1.46        46%
1 ppm Fe                           1.55        55%
10 ppm Al                          1.57        57%
10 ppm Fe                          1.47        47%
20 ppm Al, Fe                      2.04        104%
60 ppm Al, Be, Cr, Fe, Ti, V       1.98        98%
ICP-19                             1.96        96%
ICP-19 + 1 ppm Se                  3.80        90%
ICP-19 + 2 ppm Se                  5.62        87%

SLIDE 9
-------
SELENIUM
EFFECT OF Ca, Na, & Mg
CALIBRATION - 1 mg/L H2SeO3

1 ppm Se Solutions      Se, mg/L
Ca    1 ppm             1.12
      10 ppm            1.46
      50 ppm            1.81
      100 ppm           1.88
      300 ppm           1.89
Na    1 ppm             1.14
      10 ppm            1.25
      50 ppm            1.52
      100 ppm           1.64
      300 ppm           1.75
Mg    1 ppm             1.29
      10 ppm            1.77
      50 ppm            1.97
      100 ppm           2.00
      300 ppm           1.88

SLIDE 10
-------
SELENIUM
ENVIRONMENTAL WATERS
Fortified - 1 mg/L Se

                         Concentration, mg/L
                    Se                                  
Solutions           H2SeO3 Calb.   Mixed Std. Calb.   Ca      Mg      Na
H2SeO3 Std.         1.02           -                  -       -       -
Mixed Std           -              1.01               40.3    10.0    20.0
ICP-19              1.92           1.05               1.01    1.01    -
Cinti. Tap          1.65           1.03               26.6    6.90    10.7
Rain Water          1.57           0.94               10.3    1.10    4.70
Well Water          1.87           1.03               78.9    21.4    30.6
Pond Water          1.92           1.04               42.9    9.20    15.5

SLIDE 11
-------
COMPARISON OF MDL'S
DRINKING WATER CONTAMINANTS
CONCENTRATION, µg/L

                    ULTRASONIC    200.8      200.9
ANALYTE    MCL      ICP-AES       ICP-MS     STGFAA*
As           50     3             1.4        1.0
Ba         2000     0.2           0.8        -
Be            4     0.05          0.3        0.04
Cd            5     0.4           0.5        0.1
Cr          100     2             0.9        0.2
Cu         1300     2             0.5        1.4
Ni          100     0.7           0.5        1.2
Pb           15     4             0.6        1.4
Sb            6     3             0.4        1.6
Se           50     10*           7.9        1.2
Tl            2     6             0.3        1.4

* DETERMINED IN TAP WATER
(1) BASED ON 1X DETERMINATION

SLIDE 12
-------
MR. TELLIARD: Our next speaker is going to be
talking about ultra-clean sampling, storage, and analytical strategies for accurate determination
of trace metals in natural waters. Nick Bloom is going to be going over a lot of issues that right
now we are addressing in the Agency.
Nick is here under the auspices of EPRI, I think, and I hope you enjoy his talk.
MR. BLOOM: I am going to be talking today about
a historical perspective of methods and things that we have learned about ultra-clean sampling,
storage, and analysis of trace metals. It doesn't represent a specific research project but, rather,
represents an evolution over the last 15 years in our laboratory of pushing detection limits to the
point where we can measure trace metals in ambient waters at their actual concentrations.
The concentrations that we look at are typically much, much lower than EPA-
mandated monitoring criteria, but I think there is a trend toward lowering those values. For
example, in Minnesota, I believe, they have lowered the monitoring requirements for mercury
in ambient waters; they have to be monitored at the ambient level, and that may be a wave of
the future.
I would like to credit EPRI who has funded most of the work over the last 10
years that we have undertaken to develop these methods and also give some credit to open ocean
oceanographers such as Bill Fitzgerald and Claire Patterson and Gary Gill who have developed
a lot of these methods for doing very low level work in the open ocean.
It was only somewhat of a surprise to us to find that, when these techniques are applied to
terrestrial systems, waters in the terrestrial environment are typically as low in trace metals as
they are in the open ocean.
The first slide (Figure 1) is probably my most interesting. This is a plot
representing the mean concentration of mercury in pristine surface water samples taken by simply
going through my filing cabinet containing hundreds of reprints on the subject of mercury in the
environment and grouping them in two-year intervals or so, depending on how many reprints I
had per year, and taking the mean value for these unpolluted samples and the standard deviation
of that value. Typically, for each data point, there are about 12 different research papers.
It should be noted that this represents the best science at the time. This isn't
routine compliance monitoring. This is university open ocean research and so forth.
As you can see, the axis starts from 1970 and goes up to 1990. There is a
dramatic decrease in both the mean concentration and the standard deviation of those values for
water which we now understand should have all roughly the same concentration which is
indicated by the leveling out of the curve after 1985 or so at about 1 ng/L.
211
-------
I should note that the error bars which are the two lighter lines on the top and the
bottom are the standard deviation of the mean, and we had to plot them in that manner in order
to fit them on the scale. If we plotted just the standard deviation, then we would have scales that
were ten times higher.
This is actually quite an interesting graph. One's first hope would be that this
indicated how the world is becoming cleaner and cleaner as environmental laws come into place,
but it turns out that is not the case. It represents how the laboratory researchers have become
cleaner and cleaner in implementing their sampling, largely their sampling strategies.
This isn't only the case for mercury. Most of my discussion today will focus on
mercury, because we have the largest amount of this type of QA information for mercury, and,
if anything, it is the most difficult of the elements to correctly obtain ambient values for, but the
same trend has been observed for virtually all of the trace metals.
In this case (Figure 2), the graph on the left indicates a sort of three-point curve
of the same type showing 1965, '75, and '85, and the mean open ocean concentration for copper,
zinc, iron, and mercury. As you can see, there is this same general trend, dramatically decreasing
from the '60s to the '80s, typically by a factor of 100 or so.
The graph on the right is a graph for one lake in northern Wisconsin. We started
doing most of this work for EPRI on this lake in a process-oriented study trying to look at the
fate of mercury in the environment. The first bar on that graph, I believe, is 198...I can't see it
now...looks like 1983, and it represents the accepted value of mercury in that lake as conducted
by the Wisconsin Department of Natural Resources.
The second bar represented a sort of a hybrid. It was a grab sample collected by
the Department of Natural Resources but sent to the University of Connecticut to Bill Fitzgerald's
lab for analysis in 1985.
The following bar which is too small to even fit on the graph represents the values
that we typically find in that lake now that we employ clean techniques from the beginning to
the end.
I think this picture is quite typical. While the oceanographers have traveled the
path down this slope to being able to measure ambient levels quite reliably for most metals, I
think that many compliance monitoring and State labs of hygiene and so forth are still at the
other end of that graph as was supported by the fights that we had when we presented this data
before we won the project.
The State of Wisconsin simply wouldn't believe it and ultimately resorted to the
claim that "well, they had 20,000 data points that showed they were right, and we only had 12
that showed we were right."
212
-------
So, thinking about this type of trend and my experience in analytical chemistry for
trace metals led me to develop this simple schematic diagram for the downward spiral in
concentrations of mercury in the environment (Figure 3). Essentially, it is an interactive loop
where the top one is discoveries near the detection limit. Because I think that the tip-off is if
you are measuring some constituent and all of your determinations are near the detection limit
and you talk in a language that emphasizes whether or not you have "hits," then I think you are
at a point where you are probably not getting very real data. Typically, you are overestimating
the mean value of that constituent.
This puts a pressure, then, to develop, at least in the case of trace metals, cleaner
sampling techniques. I should note that typically in trace metal work, instrumental detection limit
has not been the problem. The detection limit has been a function of the variability of the blank.
And measuring things at the method detection limit required improvements in
clean sampling techniques which then led to the disappearance of all of the hits which then, in
turn, mandates improvements in analytical detection limits. And then you are ready for another
crank of this turn until you get to a point where you have consistent results that are
geochemically meaningful.
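Because the working detection limit in this kind of trace-metal work is set by the blank rather than by the instrument, a minimal sketch of estimating a blank-limited detection limit from replicate procedural blanks is given below; the blank values are illustrative assumptions.

    # Estimate a blank-limited detection limit as three standard deviations of
    # replicate procedural blanks (illustrative values, in ng/L).
    import statistics

    blank_results_ng_L = [0.12, 0.08, 0.15, 0.10, 0.09, 0.13, 0.11]

    mean_blank = statistics.mean(blank_results_ng_L)
    sd_blank = statistics.stdev(blank_results_ng_L)
    detection_limit = 3.0 * sd_blank

    print(f"mean blank = {mean_blank:.3f} ng/L")
    print(f"detection limit (3 * SD of blanks) = {detection_limit:.3f} ng/L")
    # Lowering the blank and its scatter, not the instrument noise, is what
    # drives this number down.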
So, I am mostly going to talk today about clean sampling techniques, but I will
talk a little bit about detection limits just in the case of mercury to give you an idea of how far
we have come.
This table (Figure 4) indicates seven different common analytical methods for
mercury and the absolute peak detection limit in picograms. You can see they vary by seven
orders of magnitude.
In the '60s in this country and still in most of the world today, the colorimetric
technique was the standard technique for mercury, and you see it has a detection limit of about
1 microgram. By the mid '60s and almost to the present day, cold vapor atomic absorption has
become the standard for mercury. It has a detection limit much better, about 20 pg, although
most laboratories and, I believe, the EPA methodologies still typically see this method as one
where the sample is purged directly into the analyzer.
An enhancement of the method detection limit by up to a factor of 1000 is
routinely obtained in research labs by preconcentrating the mercury first onto a gold trap before
admitting the mercury into the analyzer. This allows much more mercury, a much larger sample,
to be effectively input into the detector.
In the mid '80s, improvements were made in atomic fluorescence detection, the
second one on the list, which improved the detection limit over atomic absorption by about a
factor of 50 to 100. At the latest mercury meeting I attended, it seems like almost all research
laboratories in the world have now gone over to atomic fluorescence in place of atomic
213
-------
absorption, although atomic absorption is still certainly the technique of choice for a routine
monitoring lab.
So, the evolution of the detectors has trailed the evolution in clean handling
techniques to keep pace. Very sensitive detectors are also necessary if you then begin to look
at mercury speciation, where each component is only a fraction of an already small total. This
is a schematic of the technique we use for measuring total mercury in the sample (Figure 5). It
is digested and placed into a bubbler, reduced with stannous chloride, and then the gaseous
mercury is purged onto a gold trap.
The gold trap is then thermally heated into an atomic fluorescence detector, and
this can give us detection limits down in the parts per quadrillion range.
This setup (Figure 6) shows a technique for determining mercury speciation. In
this case, an ambient sample is ethylated with sodium tetraethylborate in the aqueous phase which
forms volatile ethyl analogs for methylmercury and divalent mercury. Dimethylmercury and
elemental mercury remain in their original volatile state.
These can all be purged out and collected onto a Carbotrap™ column which is
then thermally desorbed into a GC unit, and the separated peaks are determined by atomic
fluorescence.
For mercury, in particular, but also for several of the metals such as arsenic and
selenium, speciation is very, very important to understanding toxicity and biogeochemical fate.
As we move into a time period where speciation is recognized as necessary for monitoring metals
in the environment, extremely sensitive detectors will be required, because often these species
may be present at only a fraction of 1/1000th of the total metal there, but they may actually be
more biogeochemically important than the dominant, relatively inert species.
So, in our work with mercury, we typically analyze eight different fractions for
mercury, and this table (Figure 7) indicates the detection limits in the second column which
range, in the case of elemental mercury, from close to the parts per quintillion up to total
mercury which is reagent limited at about 50 parts per quadrillion.
All of these concentrations are typically well below...the detection limits are well
below the actual environmental level which can then allow you to feel relatively certain that the
numbers you are getting are accurate. If you are detecting a species at the same level that it
exists in the environment, then, certainly geochemically, your results are meaningless.
So, having said that about detectors, I will just go briefly through the clean
techniques that we have employed in the field and laboratory and some of the things of concern
that you have to have.
214
-------
When we are in the field, we start with, in this case, water sampling with an all
plastic boat. The boat has been acid cleaned previously by washing down with acid, and then
it is stored in an area which is away from, say, automobile exhaust and so forth and so on.
Whenever it is taken out to the field, it is washed with lake water and sponged down and rinsed
prior to each use. We never use a metal boat in our research in these lakes.
The sample bottles are prepared in a clean lab, and they are all teflon bottles.
They are cleaned by boiling in concentrated nitric acid for 48 hours and then rinsed and filled
with ultra high purity water and low mercury hydrochloric acid.
The lids are sealed with a wrench to minimize diffusion, through the threads, of
gaseous mercury, and then they are double bagged in the clean room into polyethylene bags to
keep dust and dirt out and to provide somewhat of a mini-clean room environment in the field
for the field crew to be able to use the bottles.
In the field, there are typically three people in a sampling crew, one person who
takes notes and then the other two people are called "clean hands" and "dirty hands." "Dirty
hands" is the person who grabs the bag with the sample bottle out of the box and opens the
ziplock bag. That is all that person can do. Then, "clean hands," wearing clean gloves, reaches
into the bag, pulls out the bottle, collects the sample, replaces the lid, puts it back into the bag,
and then "dirty hands" can then close the bag back up.
In this way, the sample bottle has never contacted anything except clean room
environment and ambient air.
This is a photo showing collection of the water sample by "clean hands." In this
case, the water is being pumped through a teflon tube from depth in the lake.
After the sample is collected, the lid is put back on, and screwed back down with
a wrench. There has been documented evidence which indicates that if the lids are not screwed
on tight enough, that in storage, mercury vapor can diffuse up through the threads into the bottle
and give you inflated answers.
Now, of course, once you get the samples back to the laboratory, you have the
question of storage to consider. I should note that all of our work on geochemical problems has
been done using teflon bottles or, in the worst case, glass bottles with teflon lids.
However, it is standard practice, I believe, to use polyethylene bottles, and for all
other metals these are documented to work quite well, but it is a well known and documented
fact that they can't be used for mercury, although they are still accepted for collecting mercury
samples, I believe, by EPA protocols.
215
-------
This is a test that was done in 1975 by Dave Robertson (Figure 8) where an
acidified sea water sample is placed into standard ultra-clean polyethylene bottles used for
oceanographic work and then stored in the laboratory for a period of time.
The line on the bottom, the results indicated by the red line that is relatively level,
was stored in a plywood box out of doors. The blue line was stored on a bench top, and the line
reaching up to 200 after only 20 days was stored on the laboratory floor.
These dramatic increases in mercury concentration in these polyethylene bottles
are due to diffusion of mercury through the polyethylene. This has been documented at lower
levels in our own lab where we have collected water samples in an ultra-clean EPA type
sampling bottle and stored them in one laboratory where the room air concentration was very
low, 2 ng/m3, and witnessed an increase in concentration of 1 ng/L over three weeks which is
small, but that did double the ambient concentration in the bottle.
Then we moved it to a laboratory which had 16 ng/m3 in the air which is still very,
very low for a laboratory, and the concentration jumped to 20 ng/L in three days.
The opposite can also happen in an unpreserved sample. If you have high levels
of mercury in the bottle, it can diffuse out, and you can get lower results.
So, one of the things, obviously, that you have to do if you are going to do
mercury is you have to pay attention to the mercury concentration in the air. Unlike all your
other metals where you can have a clean room that essentially uses HEPA filters to remove the
particulate metals, in the case of mercury, most of the mercury is not on the particulates, but in
the gas phase.
So, a standard clean room can be the dirtiest place in the whole building as far as
mercury goes. So, you have to develop some way to remove the mercury from the room air, and
at the very least, know what the concentration is.
This is just an example of what we have done in one laboratory of ours that had
high mercury levels when we started, 320 ng/m3 (Figure 10). The first thing we did was bring
in large volumes of outside air which is typically quite low in mercury. That dropped the
concentration to 80 ng/m3.
Our investigation indicated that the walls of the lab were painted with a latex paint
that contained mercury, and we covered those by painting them with a paint that had sulfur added
to it. That reduced Hg levels somewhat.
Then we also discovered that laboratory sinks were contaminated from historic
spills of mercury or dumping of reagents or something in the past. We removed the sinks, and
that dropped the concentration to a reasonably low level, 15 ng/m3, which would be acceptable
for doing mercury at ambient levels.
216
-------
Finally, then, we took the incoming air from outside which is in the city of Seattle
and typically averages about 10 ng/m3, and we added gold filters to the incoming air to remove
the mercury that is coming into the laboratory air.
These gold filters are made by impregnating cloth with metallic gold using wet
chemistry. That reduced the air in the incoming clean hood to about 1 ng/m3, and it reduced the
room air ultimately to 2 to 10 ng/m3. Typically, it was about 4 ng/m3.
So, these are the types of considerations that you have to take.
Some other just really quick kind of checks that we have done that some people
might be surprised about. We looked at the magnitude of various potentially contaminating
activities in the laboratory (Figure 11). The first four bars represent a bottle of water, ambient
water, that was left open for one day in various places in the building, and you can see that the
concentration increased by anywhere from a factor of two to a factor of seven in one day.
The next two bars represent the incremental contamination that could occur by
breathing once on the sample. Typically, people have mercury amalgams in their mouths, and
that could cause about a doubling of the concentration from one misplaced breath.
That very tall bar there represents what happened when the sample was touched
by the sampler's finger, an uncleaned finger. Just to give you an idea that it is not really coming
out of the person, that it is really because of, you know, spit and so forth on people's hands as
they work throughout the day, we used a gloved finger and got no contamination using a clean
room glove, and then we used an acid-cleaned finger, believe it or not. After washing my hands,
I then dipped them in 10 percent HC1 for a bit and rinsed it off and stuck that finger, the entire
finger, into the bottle, in fact, and it gave a very small increase in concentration.
This type of contamination has been documented although not...I mean, has been
shown but not published for other metals as well, when I was at Battelle.
Here are some even more surprising findings (Figure 12). This is levels of
mercury in various water and acids that you find in the laboratory.
The first set of bars are water, and the very first bar there is Milli-Q water,
of course, which is too low to detect. Tap water is very, very low in mercury, and when there
is a doubt, it is always better to use tap water than some other so-called purified water in the
laboratory, as the next three bars will show.
Those are deionized laboratory water, including the tallest one there which is
14,000 ng/L compared to, say, the tap water which was about 0.2 ng/L. That is the same tap
water going into the deionizing system that came out at 14,000, so you can see the efficiency of
that system for cleaning up for mercury.
217
-------
It turns out, after much investigation, the reason we learned for this very strange
behavior is that deionizing columns are recharged with sodium hydroxide, and apparently the
only really good kind of sodium hydroxide for this purpose is a kind made in a mercury cell
electrolytic plant, and they said that, basically, that is tough luck. You know, they are not going
to change their system for us. So, this is very common for deionized water.
Then, the next set of numbers with the hatch marks are various laboratory acids
in ng/mL. The big point, I think, to make is that the two highest bars there are the ultrex acid,
and the cleanest acids that we find typically are the regular old reagent grade stuff that, you
know, costs $10 a case.
I think it is quite common to buy the ultrex stuff at $100 for 500 ml and just
accept it as clean and use it to preserve samples and so forth when, in fact, not only is it more
expensive, but it can be considerably dirtier.
In the case of mercury, it is always dirtier. The sub-boiling distillation technique
that they use to purify it for iron and sodium and so forth actually results in contamination for
mercury.
This slide (Figure 13), then, is...well, it is hard to read, but it is trying to show for
four metals...this story is actually true for all of the EPA priority pollutants, but for these four,
arsenic, lead, mercury, and cadmium, the first bar in the graph shows the detection limit by the
EPA ICP methodology.
The second bar...this is a logarithmic scale, incidently. The second bar is the
detection limit for the EPA graphite furnace technology.
The third bar represents a detection limit that we had obtained, back when I was
at Battelle, by using sophisticated preconcentration techniques and then suitable detection.
Typically, it was graphite furnace AA for most metals. For mercury, it was atomic absorption
at that time.
Then the last bar is the actual typical value for these metals in the ambient water
sample.
As you can see, the current accepted methodologies will never allow you to detect
these metals at ambient levels, often by several orders of magnitude. If the goal is ever to
ultimately use the data that is collected for anything more than compliance, for example, the data
that goes into data bases like the USGS data base, is going to be used for any geochemical or
long-term monitoring or something, then the current methods are unacceptable, because they will
give you essentially the same number no matter where you take the sample.
In fact, the detection limits are such that you would get the same number for many
polluted sites or contaminated sites as you would for pristine.
218
-------
To give you an example, research that we have done at Onondaga Lake in Syracuse
which may be about as mercury polluted a site as you can imagine (it had a chlor-alkali plant
for a long time discharging 100,000 pounds or so of mercury into it). Even now, the mercury
levels in the water there are about 25 ng/L, which is about 25 times higher than a clean lake in
the region would contain. Still, it would be a non-detect by the current methodology.
Finally, just to sum up, I put together this slide to indicate how obtaining ambient
level information can actually change our perceptions of the magnitude and importance of given
pollution-related problems. Here we have 1970 and 1990. This is, again, for mercury.
In 1970, the accepted belief for mercury in pristine water was about 100 to 1000
ng/L. There was speculation about the presence of methylmercury, because it was known that
fish contain methylmercury, but, basically, nobody knew anything about what it might be in
water.
For polluted water such as Onondaga Lake, you got exactly the same answer. So,
it was apparent that even lakes receiving high discharges of mercury showed no ill effects from
having that mercury, because they had the same concentrations as a pristine lake would.
Total mercury in a fish was easy to measure at the time; it was about 1 ppm,
and it was known that that was mostly methylmercury.
This led to a belief that the bioconcentration factor for mercury was somewhere
around 10³ or 10⁴, which is rather low, and the expected impact of a discharge to a
lake, therefore, would be none. You know, it wouldn't be seen as a problem.
Today, on the other hand, we see that the water from that lake probably should
contain about 1 ng/L of total mercury and about a tenth of that as methylmercury. By contrast,
a polluted lake such as Onondaga Lake might contain somewhere in the range of 5 to 25 ng/L of
mercury, with a similar proportion in the methylated form.
Now we see, however, since we understand more about the food chain transport
and so forth that is happening in a lake, that the active species in the lake is methylmercury.
This results in a bioconcentration factor for that same fish of 10⁶ to 10⁷, which is quite dramatic.
We can see, then, that if we were going to allow a discharge of 1 µg/L into this
hypothetical lake, we might then see that as a severe impact rather than a nonexistent impact.
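As an illustration of the arithmetic behind that shift (an editor's sketch, not part of the presentation; the fish and water values are the ones quoted above):

# Illustrative sketch: bioconcentration factor (BCF) = fish concentration divided
# by water concentration, using the values quoted above and assuming 1 L of
# water weighs 1000 g.

def bcf(fish_ug_per_g: float, water_ng_per_L: float) -> float:
    fish_ng_per_g = fish_ug_per_g * 1000.0      # ug/g -> ng/g
    water_ng_per_g = water_ng_per_L / 1000.0    # ng/L -> ng/g
    return fish_ng_per_g / water_ng_per_g

# 1970 view: ~1 ug/g Hg in fish, 100-1000 ng/L assumed in water -> BCF ~1e3-1e4
print(f"1970-style BCF: {bcf(1.0, 1000):.0e} to {bcf(1.0, 100):.0e}")

# 1990 view: same fish, ~1 ng/L total Hg (or ~0.1 ng/L as methylmercury) -> ~1e6-1e7
print(f"1990-style BCF: {bcf(1.0, 1.0):.0e} to {bcf(1.0, 0.1):.0e}")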
That is all. Any questions?
MR. TELLIARD: Any questions?
(No response.)
MR. TELLIARD: Thanks, Nick. Appreciate it.
219
-------
220
-------
Ultra-Clean Sampling, Storage, and Analytical
Strategies necessary for Accurate Determination
of Trace Metals in Natural Waters
May 5, 1993
Nicolas Bloom
Frontier Geosciences
414 Pontius North
Seattle, WA 98109
221
-------
[Plot: Hg concentration vs. Julian year]
2. Decrease in the observed Hg concentration of pristine waters, 1970-1990.
222
-------
FIG 1: A HISTORICAL PERSPECTIVE OF TRACE METAL CONTAMINATION
[Two bar-chart panels of reported metal concentrations over time]
1. MEAN OCEANIC CONCS. ESTIMATED IN REVIEWS BY GOLDBERG (1965), BREWER (1975) AND BRULAND (1983).
2. MEAN WATER CONCS. MEASURED IN VANDERCOOK LAKE, WI SURFACE WATERS DURING 3 DIFFERENT STUDIES (AFTER FITZGERALD AND WATRAS 1989).
3. Decrease over time in observed metals concentrations.
-------
4. Mechanism for downward spiral in observed metals concentrations.
224
-------
method          species detected    peak D.L. (pg)
CVAAS           Hg°                 20
CVAFS           Hg°                 0.1
AES             Hg(g)               0.5
PAS             Hg°                 10
GC/ECD          RHg-X               50
Colorimetric    Hg(II)(aq)          10⁶
ICP/MS          Hg(aq)              10
Resistance      Hg(g)               500
NAA             Hg                  500
Piezoelectric   Hg(g)               10⁶
5. Absolute detection limits for various types of Hg detectors.
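For orientation only (an editor's sketch, not from the slide), an absolute detection limit in picograms can be converted to a concentration detection limit once a purge volume is assumed; the 100 mL volume below is an assumption for illustration.

# Illustrative sketch: convert an absolute Hg detection limit (pg reaching the
# detector) to a concentration detection limit for an assumed sample volume.
# The 100 mL purge volume is an assumed example, not a value from the slide.

def conc_dl_ng_per_L(absolute_dl_pg: float, sample_volume_mL: float) -> float:
    return (absolute_dl_pg / 1000.0) / (sample_volume_mL / 1000.0)  # ng / L

for method, dl_pg in [("CVAFS", 0.1), ("CVAAS", 20), ("ICP/MS", 10)]:
    print(f"{method}: {conc_dl_ng_per_L(dl_pg, 100.0):.3f} ng/L for a 100 mL sample")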
225
-------
[Schematic, dual amalgamation: (1) Purge: purge gas, soda lime pre-trap, aqueous sample + SnCl2, gold sample trap. (2) Analyze: Hg-free He gas, gold sample trap, gold analysis trap, nichrome coils, detector.]
6. Schematic for dual amalgamation Hg analytical method.
226
-------
[Schematic, GC/AFS: helium in; 0-1000 volt DC power supply; oven with GC column at 350°C; 110°C zone; current-to-voltage converter]
7. Schematic for Hg speciation technique using GC/AFS.
227
-------
TYPICAL DETECTION LIMITS FOR MERCURY SPECIATION IN WATER

Mercury Species             Sample Size    Detection Limit,   Typical Level in
                                           ng·L⁻¹             Uncontaminated Waters, ng·L⁻¹
Total                       100 mL         0.05               1.0
Acid labile                 100 mL         0.002              0.05
Hg°                         1,000 mL       0.0001             0.02
Particulate                 On filter      ~0.02              0.2
Complexed Organic           100 mL         0.08               0.8
Total methylmercury         50 mL          0.004              0.05
Dimethylmercury             1,000 mL       0.0001             <0.0001
Labile Methylmercury        50 mL          0.002              0.02
Particulate methylmercury   On filter      ~0.005             0.04
8. Detection limits for 8 typically studied Hg species in water.
-------
9 - 13. Illustrations of various laboratory and field techniques.
229
-------
[Plot: Hg concentration (ppb) vs. storage time (days, 0-120)]
14. Increase in [Hg] when water is stored in polyethylene bottles
(Bothner and Robertson, 1975).
230
-------
CLEAN-UP OF LAB AIR FOR LOW LEVEL Hg ANALYSIS
Initial Condition: [Hgt], ng·m⁻³
Closed room, old paint,
used lab benches 320
Ventilate with outside air 79
Cover old paint containing Hg 60
Remove contaminated sinks 15
Final Condition
Gold filters on clean air benches,
venting outside air
Room air (varies) 2-10
Clean hood air 1-2
Outside air, Seattle: 3-15
15. Effectiveness of various steps in the clean-up of high Hg lab air.
231
-------
Magnitude of Various Potentially Contaminating Activities
[Bar chart; axis labels not recoverable]
-------
Mercury Levels in Lab Water and Acids
[Bar chart of Hg concentrations; individual bar labels not recoverable]
17. Hg levels in various supposedly "clean" laboratory waters and acids.
-------
18. Comparison of true observed levels for As, Pb, Hg and Cd in
uncontaminated waters with detection limits by EPA sanctioned methods.
234
-------
parameter                                   1970             1990
total Hg (pristine water)                   100-1000 ng/L    0.5-3 ng/L
methyl Hg (pristine water)                  ?                0.03-1 ng/L
total Hg (polluted water)                   100-1000 ng/L    5-25 ng/L
methyl Hg (polluted water)                  ?                0.5-10 ng/L
total Hg (pike)                             1 µg/g
methyl Hg (pike)                            0.8 µg/g
BCF (pike)                                  10³-10⁴          10⁶-10⁷
expected impact of 1 µg/L Hg
  discharge to pristine lake                none             severe
19. Change in scientific perception of anthropogenic Hg impact, with data
resulting from using ultra-clean techniques.
235
-------
236
-------
MR. TELLIARD: Our next speaker is from EPA's
Office of Water. Bob April is going to be talking about, again, clean and ultra-clean techniques
as it relates to trace metals.
MR. APRIL: After that presentation by Nicolas
Bloom, I am very tempted to just get up here and say ditto. I am also very chastened to consider
my graduate work with mercury, some years back. I was very proud to do work in the subparts
per billion range, and now I think about what I just heard and realize that almost all my results
were no doubt contamination.
My intent up here is somewhat different. I am going to give a regulatory
perspective from an EPA regulator. You folks are the real experts in this business, and my
agenda is to hear from you. I am going to try and move right along with my presentation, pose
a few questions, and hopefully get some feedback from the floor, which is basically why I
twisted Bill's wrist to let me come here.
I would also like to give thanks to my two co-authors, Charles Delos of EPA and
Carlton Hunt of Battelle. Because of the format of this conference, somewhat informal, and the
timing of my presentation, they haven't had a chance to review in detail what I am going to say.
So, any mistakes are my own, and the good stuff I stole from them.
Trace metals have been receiving increased regulatory attention, I would say, in
the past two to three years. The reason for this is that many more State water quality
standards for toxic metals were promulgated, and then we started writing NPDES
permits with these limits. Particularly for dischargers such as POTWs, where we hadn't looked
very much at the toxic metals in the discharges before, this caused controversy.
Also accelerating the trend were the 1987 Clean Water Act, in which Congress
expressed its impatience with the pace of water quality regulation, and the resulting 1992 National
Toxics rule. Under the rule's impetus, many States promulgated water quality standards, and the
toxics rule itself finished the job.
As metals got this attention, controversy surfaced over the water quality criteria
and their validity, and that is my end of the business right now. For a time, these discussions
were very contentious and not very fruitful.
When we really started to think about it, we thought that we were running
into matrix effects, effects of the matrix on metals toxicity, and that still is an important factor.
A metal's toxicity is very much subject to the matrix that it is in, something you need to consider.
But we made a mistake, I guess, in deciding that matrix effects were really the
central issue and focusing on that.
237
-------
When we looked at data, some of the data that you just saw, we saw very high
levels of metals everywhere. One of the authors of this paper, Charles Delos, drafted a paper,
"Metals Criteria Excursions in Unspoiled Watersheds."
Now, this paper may hold a record for a paper that has never been formally
published, actually never been formally submitted, and actually has never gone beyond a draft.
It may be the most cited paper in that category anywhere. What it said was that in a variety of
unspoiled watersheds of all sorts of chemistries such that you would expect that the matrix effects
would be small, the values that we saw for these toxic metals were well in excess of the criteria.
We did not think that there were severe impacts. At least, there were so many of
these watersheds that we did not think that there were severe impacts everywhere.
The paper listed possible reasons for this, and one of the possible reasons
listed was that maybe the data was wrong. But it really wasn't prominently listed as one of the reasons,
because we didn't really believe that the data could be that bad.
Well, we were wrong; it was. But when the paper came out, there was even more
controversy, even more doubt cast on the criteria. After all, EPA itself had written this paper
which said that the criteria were exceeded virtually everywhere.
And this is where we were when two, I would say, very useful things happened.
Through the 1980s, thanks to work such as that done by Dr. Bloom, there was a recognition that
some of the data was bad. And another event was some of our own work that we did in New
York Harbor.
One thing that is interesting is, as was said, marine chemists basically went
through this in the 1970s, and they went through it in a very major way, publishing papers,
giving presentations to conferences and the like. I guess one of the surprising things to me is
that it has taken so long for the fresh water people to make use of that information to the
extent that they now have. Another of the authors of the paper, Carlton Hunt, did some of that
work in oceanography.
One thing that came up was the USGS data. Now, USGS tends to get mentioned
a lot when we talk about this topic, and I really think that one of the reasons is that they have
been very, very proactive in bringing this issue to the forefront and making it clear what the
problems with the data are; and then proceeding very proactively to try and address these
problems in a thoughtful and substantial way.
Several papers came out talking about the USGS data. Of course, we became
aware of it, and, of course, this was the data on which Charlie Delos had based his paper,
"Metals Criteria Excursions in Unspoiled Watersheds."
238
-------
Now, looking at some of the papers that we have seen on the USGS data, it
appears that most, if not all, of the seeming excursions in unspoiled watersheds were based on
analytical errors.
There is still some doubt about iron and aluminum. I am not sure exactly why
iron, but as I am sure everybody knows, aluminum chemistry is very complex in a number of
ways, including fate and transport and toxicology. So, we are still not quite sure about what is
going on there.
Another thing that happened, and it is illustrative both from the standpoint of
figuring out the problems and understanding what the problems are in a regulatory context, was
work that we did in New York Harbor. EPA, in developing its water quality approach, examined
various watersheds to decide which pollutants were causing the problem and then went after
those pollutants.
The vehicle for this was the so-called 304(L) list. We would look at watersheds
and put them on the 304(L) list for various pollutants. We did that for New York Harbor.
This is another case of the 20,000 data points versus the 20 data points. We
looked at many, many data points, and most of them were very high for a lot of metals. A few
of them were pretty low, but we went with the majority of the data.
So, we put New York Harbor on the 304 list for several toxic metals. And what
this triggered was a process called the TMDL process, total maximum daily loads.
What it means is you go into the watershed, and you model that pollutant very
carefully. You look at what the sources are. You look at the transport. And you try and come
up with a plan to control the sources and to figure out what control measures will bring this
water into compliance with water quality standards.
When we tried to do that for New York Harbor, it didn't work, and the reason
it didn't work was that the data didn't make any sense. It didn't have the right kind of spatial variability
that we would expect, given the hydrology and the sources and the variations in salinity. It just
didn't match up.
What we then thought was well, maybe the data set that had low concentrations
of most of these metals was the right one. So, we went back out, and we did some more work.
When we did that work...I will give you the bottom line first...basically, everything
dropped out except copper, and copper changed from being a very serious exceedance problem
to a more marginal exceedance problem.
239
-------
What we did was redo some of the sampling work very carefully and also the
analytical work. We split samples between two different labs. One lab came out high; one lab came
out low. It turns out we are now convinced the low lab is correct.
This leads us to some conclusions. First of all, for EPA, toxic metal contamination
is far less widespread than we thought. It is not as big a problem as we thought.
When we look at good data and when we do things such as make appropriate
corrections for site-specific factors, we see that matrix effects are still important. Matrix effects
on metals toxicity are not inconsequential, but when you look at good data, a fair number of
these problems really go away.
We now have a lot more faith in the underlying validity of the criteria.
Another lesson is that it is not just sampling. The work we have just seen
emphasized sampling problems and some lab contamination of a fairly esoteric nature, but the
work that we did in New York Harbor showed that more gross lab contamination can occur very
easily.
Also in New York Harbor, it turned out that appropriate extraction techniques to
handle saline samples were crucial in getting good data.
The bottom line here is that we have to be careful from start to finish. It is fairly easy,
I think, to focus on sampling, and we need to be very careful to cover the whole of the process.
This is very, very serious business for EPA. Two important results have happened,
which I have talked about here.
First of all, we are uncertain about some of the priorities that we have set for
looking at watersheds and whether we are, in fact, looking at the most important problems.
Secondly, we have gone through a period where enormous doubt was cast over
criteria which are actually among the most scientifically based and thoroughly developed criteria
that EPA puts out.
So, as I say, this is very serious business for us. We have outstanding problems.
One of them is, how bad is the historical data?
That is hard to determine, because in many, many instances, all that is stored
is the final result. We don't have the QA/QC data that goes with it, and that makes it essentially
impossible to evaluate.
At a metals conference that we had earlier this year to discuss this whole issue,
a well-respected scientist got up and said all of the historical data is junk, and the people who
240
-------
disputed that were only disputing it in a very modest sense; what they questioned was the
intensity of his statement, not its overall thrust.
I have some questions, and I am really hoping that people will feel that they can
respond to them. My questions are...and people should feel free to address these or anything else
about this issue that they think is important...first of all, how widespread is this
problem, particularly in the compliance monitoring business, not the scientific community?
Because that is my end of the world.
My impression, from everything that I have heard, is that it is very, very
widespread. Is that true?
Second, there has been a lot of talk about this in the past couple of years, as I say,
much of it spearheaded by USGS. Is the problem going away? Are the natural effects of the
scientific community really solving this problem?
The reason I am asking that question is that we are interested in what EPA
needs to do.
So, the third question is, what does EPA need to do? Do we need to do anything?
What would be useful?
One of the questions that brings to mind is, are clean and ultra-clean techniques
basically just good laboratory practices? Does everybody really understand pretty much what is
going on, and it is just a matter of being careful?
Can we just collect people's experience and put it together, or are there still some
arcane things that we don't understand or maybe only a very few of us understand that are going
to be difficult to come up with?
That being said, I would like to open the floor for comments.
241
-------
QUESTION AND ANSWER SESSION
MR. RICE: Bob, Jim Rice.
First, I might like to say I didn't think I would live to see the day. I mean that
from the heart, and I find this absolutely refreshing, and I do congratulate you for doing it.
I hate to add a note of caution to what you have said by questioning some
of the underlying criteria that are involved for the level of risk or the chronic toxicity or what
have you, but I would suggest you put the same searchlight on the data that were used to create
the chronic and acute toxicity values. Those biological tests are subject to all the same variabilities
that we are talking about analytically.
So, this is going to go around, and I go back to Nick Bloom whom I also
congratulate. I think that is just wonderful. I love him. But this circle goes around, and here
you will see another part of it.
You have set criteria that are based on the health of an organism or of people, and the
means used to get them are in question also.
It is like a lot of jobs. Until you anchor down as best you can the method of
analysis and take care of the error, you don't really understand the other problems that you have.
You don't understand the sampling problems until you know how to measure as error-free as you
can be and so on and so on.
To get to some very specific things that you had, though, how wide is the problem
in compliance? Very wide, in my experience. Very wide.
Is the problem going away? No, it will not go away. And the reason it won't go
away is that new methods keep coming, and as you move to an ever lower level, there is another
method, and it generates all the same problems all over again, and as you go around, you find
out facts that you had thought were right are not any longer.
Finally, what does EPA need to do? I think it needs to do the same kind of
searching that you are doing right here, talking about all the bases for what is happening.
I believe that if, somehow, all of us who have been involved and those who are
currently involved could work on a level playing field on a lot of this, we could really exchange
information much better than we have in the past when it has been largely adversarial.
Thank you.
MR. APRIL: Thank you. I understand the point
that you were making about the derivation of the criteria.
242
-------
There is one thing that has come up as a result of this that I would like to make
clear to everyone here. When we do the criteria, we have all kinds of checks on the analytical
data of the metals in terms of how we run the experiments, how we make up the solutions, where
we are going.
We know what we are trying to do. So, we are reasonably confident of the
magnitude of the actual metals levels in the criteria experiments themselves.
Others?
MR. BLOOM: I have a comment.
MR. APRIL: Yes?
MR. BLOOM: I just might say that from my
experience previously at Battelle Northwest and then with my own company that I think it is
going to be a very, very difficult task ahead of you to try to get as much data as you need for
compliance monitoring at the level that university researchers can produce it. And I think the
reason for that is the incentive for the people that are doing the work.
I think scientists, research scientists who are doing this kind of thing because they
are going to discover new facts and processes and publish the results have a very strong incentive
to do it right. My observation at Battelle Northwest as they converted from that type of
operation to a very large and sophisticated routine testing laboratory over the last ten years is that
they have totally lost that incentive, and the quality of the work has gone down because of that.
I just don't think, just by offering the carrot of money, that you can get people to
have the interest level necessary to do these sophisticated kinds of analyses.
MR. APRIL: But wouldn't a counter to that be that
the regulated community has a great interest in having data developed that are not showing high
levels of contamination? Because commercial labs who hand back to the regulated clients
excessively high values because of contamination are going to be at a severe commercial
disadvantage to people who are doing it right?
MR. BLOOM: Right, but that is still taking a stick
rather than a carrot approach. I mean, we turn down work, routine monitoring work, all the time
just because it is not fun. I mean, there is no interest for us to get 10,000 blind samples, you
know, and I think that is what you will find. The really top notch research labs aren't going to
be interested in doing this kind of work.
MR. HUNT: Carlton Hunt with Battelle.
MR. BLOOM: He is from the other branch.
243
-------
MR. HUNT: I am East Coast.
The incentive issue is very clearly that. Some of the work we are doing as an
outgrowth of the meetings in January, clearly our incentive is a lot of capital investments that various
firms may have to put in place if it is based on bad data, if the numbers are high as a result of
bad data.
That is an incentive that has to be really looked at, and that is an issue that has
to be addressed carefully. That is just a general comment I would make.
The other side of the coin is that I agree that, in general monitoring programs,
compliance monitoring is probably not the incentive for the commercial firms. The incentive
for the type of firm that you have, or that I am with on the East Coast at Battelle, is that we want to
do the kind of monitoring work that gives realistic values, that puts us into the place where we
are gaining knowledge as a result of the monitoring, and I think there are a lot of firms out there
that are interested in doing that type of work.
I know in one program I worked with, the 106-Mile Site, these issues came up in
terms of sewage sludge disposal in the ocean, and we ended up going through these clean
techniques in order to really address the issues of water quality criteria exceedance.
Without the clean techniques, we wouldn't have demonstrated that the near field
disposal was not a problem from that perspective. So, it is applied in a lot of monitoring
programs and needs to be applied in monitoring programs that are looking at those end points
of water quality criteria.
New York Harbor is the other classic point, and there are a lot of reports out on
that. So, that is just my general comment.
MR. APRIL: Thank you.
MR. TELLIARD: Anyone else? You are awfully quiet out there. George?
MR. APRIL: Yes, I have one question very
specifically. People in the audience have worked with EPA and EPA methods. EPA is thinking
about developing guidance on how to do clean techniques. What kinds of things would you like
to see us do? What kinds of things do you think that we would possibly do that might not be
productive?
MR. TELLIARD: Let me make a suggestion. Bob
has posed like three questions to you. I promised you a paper that was somewhat mislaid this
morning. Why don't I send you those three questions, since I have all your names and I know
244
-------
where you live, and think over what you have heard here this afternoon and give me some
feedback.
I know, you know, some of you are reluctant to talk openly, but drop me a note.
Let me know if you have any suggestions, ideas, or if you know somebody who has done some
of this stuff.
The Agency does not wish to reinvent the wheel if it is already in someone's
garage. A lot of you are involved in this. A lot of you know the stuff going on. You tell me
this in the hallways and at the bar. So, why don't you drop me a line, because before we spend
a trillion dollars of his money and my money, we would like to take what is already in the
community and use it.
We are certainly talking to USGS, and that will continue. People like Nick Zeffert
and people in the research area, they are feeding us.
There is also the practical level, which we are getting at, which is the NPDES and
compliance monitoring effort that you are involved in. I don't want the letter that comes in
and says, "I don't want to have to put a clean room in; I think it is crazy; we have never needed
it."
If you have anything informative to come in with, or suggestions,
please send them in. We will send you the paper; you send us the questions or the answers back.
Okay?
I would like to thank our morning speakers or afternoon speakers, whatever they
were. We are going to take a quick 10-minute break. There is coffee outside and soda. Do
what you need to do and come on back in.
(WHEREUPON, a brief recess was taken.)
245
-------
246
-------
MR. TELLIARD: Could we get started, please?
If you would come in and take your seats, I would appreciate it.
The final session for the day is going to deal with some radiochemistry activities
and also a discussion of the determination of diesel, mineral, and crude oil in drilling muds.
The first speaker is Dave Demorest from Core Laboratories. He is going to be
talking about the cost and minimization of cost for radiochemistry determinations.
Dave?
MR. DEMOREST: In the commercial analytical
business, there are three areas that need to be addressed in order to maintain a viable business
in today's market. The first issue is that the client's requirements must be met. In the case of
analytical services for the environmental marketplace, the requirements are driven by regulatory
and/or liability issues. The requirement is to provide legally defensible data. In order to provide
defensible data, the laboratory must utilize approved methodology performed by a qualified staff.
The operating procedures and training programs must be documented and continually updated.
The second issue is one concerned with maintaining a viable business. In order
to maintain a share of the market, a commercial laboratory must provide the data at a competitive
price. As the data packages become more complex and the analytical requirements more
stringent, it is difficult to keep costs down. Hence, it becomes more difficult to maintain a
reasonable profit.
Finally, as the data is produced for the data packages, it is essential that no new
waste streams be generated. Any wastes generated may come back as a liability to the laboratory
and client, or to the general public as a health risk. This necessitates that the last requirement of
an analytical laboratory is that it performs its business in a manner that does not create new
problems as it attempts to solve old ones.
This paper will describe an approach that may provide an avenue to minimize
and/or eliminate waste streams generated as analytical data is produced in an analytical
laboratory.
Most of the current methods utilized for isolation and purification of radiochemical
analytes produce complex waste streams, including mineral acid wastes, organic solvents,
and mixed waste.
These waste streams, as well as associated production costs, can be simplified and
reduced by utilization of advances in technology such as new extraction chromatography.
247
-------
In this presentation, we will examine the results of preliminary investigations using
these technologies in a commercial laboratory setting. We are going to examine the accuracy,
the waste generation profile, and the manpower requirements of two conventional analytical
schemes. We will compare these to two alternative analytical schemes in which extraction
chromatography has been introduced as a means of separation and isolation of the target isotope.
The first method that will be examined is a derivation of a method developed by
Percivel and Martin at INEL. This procedure isolates radium-226, radium-228 and thorium-230
from water and soil. Core Laboratories has developed a permutation of the Percivel and Martin
method that will also isolate lead-210. The isotopes of interest are isolated by co-precipitation on lead
sulfate. The sulfate precipitate is dissolved in DTPA and the radium isolated by co-precipitation
on barium sulfate. The radium-226 and 228 are then isolated and determined by classical
methods.
The thorium and lead-210 remaining in the DTPA fraction are isolated by re-
precipitation out of DTPA on lead sulfate. The sulfate is removed and the lead converted to a
carbonate. The carbonate precipitate is dissolved in 2 M nitric acid. The thorium is isolated by
co-precipitation on bismuth phosphate, purified by a TOPO extraction and counted via alpha-
spectroscopy. The lead is isolated as a carbonate, redissolved and counted by liquid scintillation.
This technology has been approved for use at Wright Patterson Air Force Base in
Ohio, at Fernald in Ohio and by the Hazwrap program.
In this paper, classical methodology was compared to a new methodology that
utilizes TRU-Spec resins, an extraction technology developed by ElChroM Industries in
Chicago. The isotopes are isolated via lead sulfate precipitation as above. The sulfate is
converted to a carbonate and dissolved in nitric acid. The radium-228 decay product is allowed
to grow in. The sample is then run through a TRU-Spec column. The radium and lead isotopes
are not bound on the resin and the radium-226 and lead-210 are isolated as described in the
classical method described above. The radium-228 decay product actinium-228 and the thorium
isotopes are bound on the column. The actinium-228 is stripped, isolated as an oxalate and beta
counted. The thorium is then stripped from the column and isolated for alpha-spectroscopy
analysis. The Tru-Spec isolation method was developed at ElChroM in Chicago. The actinium
method was developed and presented at a conference in 1992 in Santa Fe by Cable, Brinnell, and
Westmoreland. The thorium procedure was provided to Core Laboratories by ElChroM
Industries.
In examining the effectiveness of this new separation technique, thorium recovery
was examined. Known standards were separated on TRU-Spec, each elution was saved,
and the thorium was isolated and analyzed to determine recovery. The first eluent (E1), the
6 M hydrochloric acid containing the actinium-228, contained less than 5% of the thorium.
The nitric acid wash (E2) contained only 2% of the thorium. The thorium wash (E3)
contained the majority of the thorium (average of 90%). This demonstrates that the technology
is very effective.
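As a rough sketch of how such per-fraction recoveries are computed (illustrative only; the spike activity and per-eluent values below are hypothetical numbers chosen to mirror the pattern reported above, not the actual data):

# Illustrative sketch: percent recovery of a spiked thorium standard in each
# TRU-Spec elution fraction. All activity values are hypothetical examples.

spike_activity_dpm = 100.0  # activity added to the known standard (hypothetical)

fractions = {                       # measured activity recovered in each eluent (hypothetical)
    "E1 (6 M HCl)": 4.1,
    "E2 (HNO3 wash)": 2.0,
    "E3 (thorium strip)": 90.3,
}

for name, measured_dpm in fractions.items():
    recovery = 100.0 * measured_dpm / spike_activity_dpm
    print(f"{name}: {recovery:.1f}% recovery")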
248
-------
A known radium-226 standard was isolated on lead sulfate and then taken through
the TRU-Spec columns. The radium was isolated and counted via alpha proportional counting.
The radium data was compared with the average recovery achieved by the traditional EPA
method over a 2 month period in our analytical laboratory. The radium-226 recovery by the EPA
method was 98% with an error of 8%. The radium-226 recovery by the TRU-Spec method was
95% with an error of 9%. There is no significant difference in the data from the two analytical
methods.
The radium-228 data were compared. The traditional method yielded a recovery
of 97% with an error of 10% while the TRU-Spec method yielded a 90% recovery with an error
of 7%. Again there is no significant difference in the analytical results between methods.
Finally, the lead-210 data were compared and again there was no difference in the
data. The traditional method yielded 93% chemical recovery with an error of 15% while the
TRU-Spec method demonstrated an 88% recovery with an error of 10%.
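One simple way to see why these differences are not significant (an editor's sketch; treating the quoted errors as one-sigma uncertainties on the mean recoveries is an assumption, since the paper does not state what the errors represent):

# Illustrative check: compare each pair of methods by asking whether the
# difference in recovery exceeds the combined (quadrature) uncertainty.
import math

pairs = {
    # isotope: (traditional recovery %, error %), (TRU-Spec recovery %, error %)
    "Ra-226": ((98, 8), (95, 9)),
    "Ra-228": ((97, 10), (90, 7)),
    "Pb-210": ((93, 15), (88, 10)),
}

for isotope, ((r1, e1), (r2, e2)) in pairs.items():
    diff = abs(r1 - r2)
    combined = math.sqrt(e1 ** 2 + e2 ** 2)
    verdict = "not significant" if diff < combined else "possibly significant"
    print(f"{isotope}: difference {diff}% vs combined uncertainty {combined:.1f}% -> {verdict}")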
The second procedure examined compared the proposed EPA CLP procedure for
isolation of plutonium and americium to a procedure utilizing TRU-Spec for the separation. The
traditional method isolates the isotopes by a ferric hydroxide precipitation followed by a
reprecipitation on bismuth phosphate. The precipitate is dissolved in 8 M hydrochloric acid. The
plutonium is isolated by extraction with TIOA and counted via alpha-spectroscopy. The
americium is purified by TOPO extraction and counted via alpha-spectroscopy. (See flow chart).
Both the TIOA and TOPO extractions generate fairly significant mixed waste streams.
In comparison, plutonium and americium can be isolated by one pass through a
TRU-Spec resin column. (See flow chart). The method involves isolation of the isotopes on
ferric hydroxide. This precipitate is dissolved in 2 M nitric acid and loaded on a TRU-Spec
resin column. The americium is eluted with 4 N hydrochloric acid and counted via alpha-
spectroscopy. The plutonium is eluted with 2 N hydrochloric acid and 0.1 N hydroquinone. The
results demonstrating the activity present in each fraction from the americium and plutonium
standards are presented in the bar chart. The results demonstrate that 86% of the americium was
recovered in the E4 elution. The plutonium was recovered in the E5 eluent with a recovery of
87%. There was less than a 12% loss of either americium or plutonium into the other elution and
wash fractions.
In comparison, the average recovery with the conventional method for plutonium
and americium runs from 75% to 90%. It is apparent that the TRU-Spec separation is as
effective as or more effective than the conventional techniques.
It is apparent that extraction chromatography provides recovery and accuracy
similar to the traditional technology available. However, the waste streams generated and the
manpower requirements for the conventional versus the extraction chromatography methods are
much different. In examining these issues, three areas were identified to use as a comparison:
249
-------
The acid waste stream, the organic (mixed waste) waste stream and the time requirements in
terms of man-hours.
A model was devised which examined the flow through our own laboratory over
a 28-day period. Each sample set included a full set of 20 samples for all of the analytical
parameters. The model was developed with 8 sets a day for 28 days, so that 224 sets were run
for all the parameters.
The acid waste stream generated by the traditional method was 4.7 barrels
(42 gallons/barrel), or 197 gallons, of acid waste. In comparison, the TRU-Spec method generated 2.7
barrels, or 98 gallons, of acid waste. This represents roughly a 50% reduction in a significant waste
stream.
Examining the organic waste stream, the traditional method generated around 1.2 barrels
a month (60 gallons). The TRU-Spec method generated 1 kg of a solid waste which contained
no detectable activity. The resin utilized for the thorium extraction was isolated and alpha
counted. There was no detectable activity above background. This indicates that the resin could
be disposed of in a local landfill.
In comparing the time involved, man-hours or man-days per month were
determined for the traditional and the new methods. With the traditional method, 117 man-days,
or the equivalent of 4.2 staff members, were required to perform this work. The
TRU-Spec method utilized 75 man-days, or an average of 2.7 people, to perform the same amount
of work. This cuts the time and personnel necessary by roughly one-third.
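For the record, the percent reductions implied by the figures just quoted can be checked directly (an editor's sketch using only the numbers stated in the text; the 42-gallon barrel is the conversion given above):

# Illustrative sketch: percent reductions for the Ra-226/Ra-228/Pb-210/thorium
# workload model, computed from the monthly figures quoted in the text.

def pct_reduction(old: float, new: float) -> float:
    return 100.0 * (old - new) / old

acid_gallons = (197, 98)   # traditional vs. TRU-Spec, gallons of acid waste per month
man_days = (117, 75)       # traditional vs. TRU-Spec, man-days per month

print(f"Acid waste reduction: {pct_reduction(*acid_gallons):.0f}%")  # ~50%
print(f"Man-day reduction:    {pct_reduction(*man_days):.0f}%")      # ~36%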
When the americium and plutonium methods were examined for the same three
criteria, the following results were obtained. The traditional method generated 7.1 barrels of acid
waste per month, in comparison to 1.1 barrels for the TRU-Spec method. When the organic waste
stream was examined, the traditional method generated 1.3 barrels of organic waste compared to
1.17 kg of solid resin. Finally, 96 man-days (3.4 staff members) were required using traditional methods,
compared to 66 man-days (2.4 staff members) with the TRU-Spec method.
The results indicated that there is very little cross contamination between isotopes
utilizing TRU-Spec (less than 10 percent between americium, plutonium, and thorium).
The recoveries range from 85% to 95% on all isotopes of interest using the
extraction chromatography methods.
The acid waste streams are reduced 40% in the radium-228 method and 80% in
the case of the plutonium method.
Organic waste generated decreased from 1.1 barrels of potential mixed waste to
an average of 1.1 kilograms of waste that can be disposed of in a local landfill.
250
-------
The man-hours are decreased overall by 30 percent utilizing the column
chromatography.
In conclusion, isolation of radionuclides with TRU-Spec gives recovery fairly
comparable to liquid-liquid extraction. There is no sacrifice in recovery or accuracy when
comparing against historical data from our facility. And the decreased time at the bench level will
decrease turnaround and allow laboratories to be more competitive.
Finally, the pronounced decrease in waste generation should result in substantial
savings in disposal costs and also have a significant reduction in liabilities by eliminating the
mixed waste stream generation. Utilizing this technology is beneficial to the analytical labs, the
client, and to the general public by not generating a mixed waste stream.
We would like to thank ElChroM for their support and help and advice on this
technology. John Mitchell and Dee Fairservis did a lot of work on the analytical side for this
paper.
I will take questions if you want.
MR. TELLIARD: Questions?
(No response.)
MR. TELLIARD: Okay, thank you.
251
-------
252
-------
Minimization of Production Costs
and Waste Generation in
Radiochemical Analysis
by
Dave Demorest and John M. DeHart
EPA Annual Methods Conference, Norfolk
VA. April 1993
-------
INTRODUCTION
• Current methods for the isolation and
purification of radiochemical analytes
produce complex waste streams of mineral
acids and organic solvents.
• These waste streams, as well as
associated production costs, can be
simplified and reduced by utilization of
recent advances in extraction chromato-
graphy.
• In this presentation we will examine the
results of a preliminary investigation using
these technologies in a commercial
laboratory setting.
-------
Extraction Procedure for Thorium, Ra-226, Ra-228, and Pb-210
• 0.4 grams of TRU-Spec® (ElChroM) is placed in a 20 ml column.
• Conditioned with 2N HNO3.
• Sample is loaded on the column. Wash is saved for later isolation of Ra-226 and Pb-210.
• Column is washed with 6N HCl. Ac-228 elutes (E1).
• Column is washed with 2N HCl (E2).
• Column is washed with .1N ammonium oxalate. Thorium elutes (E3).
-------
Flow Chart of Ra-Th-Pb Procedure
[Flow chart: PbSO4 → DTPA → BaSO4 and PbSO4; Ra-226 and Ra-228 (Ac-228) from the BaSO4 fraction; Th via BiPO4 and TOPO liquid-liquid extraction to alpha-spec; Pb-210 to liquid scintillation counting (LSC)]
-------
TRU-Spec® Procedure for Ra-Th-Pb
[Flow chart: PbSO4 → 2N HNO3 → TRU-Spec®; Ra-226 and Pb-210 pass through; Ac-228 eluted with 6N HCl (beta-count); 2N HCl wash; Th eluted and counted by alpha-spec]
-------
[Bar chart: Percent recovery of thorium in eluents E1, E2, and E3]
-------
Percent Recovery of Standards, Ra-226
[Bar chart: Traditional Procedure vs. TRU-Spec (σ = 9)]
-------
Percent Recovery of Standards, Ra-228
[Bar chart: Traditional Procedure vs. TRU-Spec (σ = 10)]
-------
Percent Recovery of Standards, Pb-210
[Bar chart: Traditional Procedure (σ = 15) vs. TRU-Spec (σ = 10); scale 0-100%]
-------
Procedure for Separating Pu, Am, and U
• Column is prepared with TRU-Spec® resin and conditioned as in previous procedure.
• Sample is prepared in 2N HNO3 and loaded on the column. Wash collected is E1.
• Column is rinsed twice with 1N HNO3 (E2 and E3).
• Am is eluted with 4N HCl (E4).
• Pu is eluted with .1M hydroquinone in 4N HCl (E5).
-------
EPA Pu-Am
[Flow chart: FeOH precipitation → BiSO4 (liquid-liquid); Am purified by TOPO, counted by alpha-spec; Pu extracted with TIOA, counted by alpha-spec; wash]
-------
264
-------
Separation of Americium and Plutonium
[Bar chart: Percent recovery of Am and Pu by elution fraction]
-------
Ra-226, Ra-228, Pb-210 and Thorium Procedure
Represents (1 Month)
Acid Waste Generation
[Bar chart: barrels (42 gal.), Traditional Procedure vs. TRU-Spec®]
-------
Ra-226, Ra-228, Pb-210 and Thorium Procedure
Represents (1 Month)
Organic Waste Generation
[Bar chart: barrels (42 gal.), Traditional Procedure vs. TRU-Spec® (TRU-Spec® bar annotated in kg of resin)]
-------
Ra-226, Ra-228, Pb-210 and Thorium Procedure
Represents (1 Month)
Man-Days Per Month
[Bar chart: Traditional Procedure vs. TRU-Spec®]
-------
Am and Pu Procedure
Represents (1 Month)
Acid Waste Generation
[Bar chart: barrels (42 gal.), EPA-CLP vs. TRU-Spec®]
-------
Am and Pu Procedure
Represents (1 Month)
Organic Waste Generation
[Bar chart: barrels (42 gal.), EPA-CLP vs. TRU-Spec® (1.17 Kg resin)]
-------
Am and Pu Procedure
Represents (1 Month)
Man-Days Per Month
[Bar chart, scale 0-140: EPA-CLP vs. TRU-Spec®]
-------
RESULTS
• Cross contamination between isotopes using TRU-Spec® is <10%, and usually much less.
• The recoveries ranged from 85% to 95% on all methods.
• Acid waste can be decreased by 40% (Ra, Th, Pb-210 procedure) to 80% (Am, Pu, U procedure).
• Organic waste generation decreased from 1.1 barrels of liquid waste to 1.1 Kg of solid waste.
• Man hours decreased by 30% by utilizing column extraction techniques.
-------
CONCLUSIONS
• Isolation of radionuclides using liquid-liquid extraction or liquid-solid extraction using TRU-Spec® results in comparable recoveries.
• There is no sacrifice in recovery or accuracy when comparing historical data of the more established methods to these results.
• Decreased time at the bench level will decrease turnaround time and allow the laboratory to be more competitive.
-------
CONCLUSIONS (cont.)
The pronounced decrease in waste generation should result in:
(1) Substantial savings in disposal costs
(2) Significant reduction in long term liability by eliminating the mixed waste streams generated by the conventional methods.
-------
-------
276
-------
MR. TELLIARD: Our next speaker is Richard
Rivera from Shell Development. He is going to talk about high purity germanium gamma
spectrometry, and late in the afternoon, you have got to stay awake, because there is a quiz on
this or you don't get dinner.
MR. RIVERA: My talk this afternoon is on the
development of a procedure using high purity germanium detector for the analysis of radium-226
and radium-228 in barium sulfate scale and produced solids, that are present in the oil production
facilities.
The determination of radium-226 and radium-228 for solids containing naturally
occurring radioactive material was developed at Shell Development Company in conjunction with
projects and radium fate studies carried out by sister companies like Shell Offshore, Incorporated
and Shell Western Exploration and Production.
The people involved in the development of the procedure were J.C. Postlewaite,
W.T. Shebs who is now retired but works as a consultant for us, and myself.
Naturally occurring radioactive materials are found in the barium sulfate scale and
produced solids in the oil and gas production facilities and the separation equipment that is
involved. The isotopes of major concern are radium-226, radon-222, and radium-228.
The radon is not so much a problem in the barium sulfate scale but is a problem
in the gas plants.
The radium-226 and radium-228 are presumed to be solubilized during the water
flood process in the secondary phase of oil recovery. Under the proper temperature and pressure,
the barium and the radium solubilize into the formation water and, when mixed with sulfate, form
a scale with the radium inside the barium sulfate matrix.
The origins of these two isotopes are uranium-238, which gives rise to the radium-226,
and thorium-232, which gives rise to the radium-228.
Occasionally, we will see a sample that may have some U-235 in very minor
amounts. We can tell that uranium-235 is there, because it affects part of our results.
The radioactivity is concentrated by this process and, in some cases, sufficient
material may accumulate so that the external radiation dose rates are in the radiologically
significant range. We have to be careful to familiarize the employees in the field as to
what they are handling and how much.
Here we have the uranium decay chain that shows uranium-238 and its immediate
descendants. The uranium-238 is mostly immobile and remains in the formation. The radium-226
277
-------
is partially mobilized and accumulates in the barium sulfate scale and the sludges. The other
isotope is the radon-222 that is soluble in the petroleum liquids and also emanates from the
radium/barium sulfate scale and the sludges.
The reason that Shell Development had to come up with a procedure for the
analysis of radium-226 and radium-228 was for guidance in handling of NORM scale and NORM-
contaminated equipment, for use in radiation safety, for waste disposal purposes, risk
management calculations, and in environmental studies.
The data were also used by regulatory agencies such as the MMS and other
agencies that regulate offshore activity, and also by non-regulatory agencies such as API.
In a study that we did, we sent NORM samples to commercial labs and got
results back that we did not like, because the results were very inconsistent. This is one reason
why the procedure was developed.
This vu-graph shows the results obtained when we sent out some NORM scale
containing radium-226 and radium-228 in these concentrations in pCi/g to the seven commercial
laboratories. As you can see from this information, laboratory number 1 over-reported on the
radium-226 and did quite well or what we consider quite well on radium-228.
The second lab over-reported on radium-226 and then under-reported on radium-
228. Lab number 3 used a radiochemical procedure that did not digest the barium sulfate scale
enough and, consequently, reported low numbers, but when they switched to a gamma ray
spectroscopy method, they reported numbers that were more in line with what was submitted.
Labs 4, 5, and 6 over-reported in both cases, in radium-226 and radium-228. Lab
number 7 here over-reported on the radium-226 amount and then did fairly well on the radium-
228.
As it turned out, I sent out samples to lab number 6 and to lab number 7, but lab
number 7 forwarded the samples to lab number 6. You can see that the results are not consistent
and this adds to the confusion.
The procedure that we developed gave results that are listed here under lab number
8 which, we feel, are a lot more consistent than the commercial labs.
We also sent along some samples that had radium-226 on the sand, not inside the
sand matrix. We picked two concentrations, 60 and 240 pCi/g. As you can see, lab numbers
1, 2, 4, 6, and 7 under-reported the results. Lab number 3, using the radiochemical procedure,
was able to extract the radium-226 off the sand and reported results back that were right on the
money, but when they reported results by gamma spec, the results were a little low.
278
-------
Lab number 5 over-reported on the radium-226. Lab number 8 shows the results
that we obtained using our method.
These two samples had no radium-228 in them, and some labs still reported radium-
228 in these cases.
The procedure that we have developed, as in any other procedure, requires
standards so that you can make the calculations against your standards. You heard earlier today
from Bob April that matrix effect is a very important part of your analysis.
What we have here in the oil patch are samples that are produced in the production
of oil and gas. There can be barium sulfate scale or produced solids from the reservoir or a
combination of both. Since these two types of samples have different densities, the way we
decided to handle this situation was to make two sets of standards.
The standards are prepared in the same matrices as the unknown. A radium-226
standard is made in barite, and we also have a thorium nitrate standard that is made in barite.
We also have the radium-226 standard made in sand and also a thorium-232 made in sand.
The liquid standards are added to the matrix, followed by mixing and drying, and
the standard matrices are transferred to a sample holder, where they are sealed. The standard is
counted after it reaches secular equilibrium.
For geometry consistency, the standards and samples are counted in the same type
holders. This is very important.
The preparation of the samples in our procedure, as I said a while ago, are either
NORM scale, produced solids which are mainly sand particles from the reservoir along with
some clay and maybe a mixture of a little bit of NORM scale in the produced solids.
When we get clean wet samples, all we are required to do is to dry the sample.
Samples that contain oil require removal of the hydrocarbon. This is done by extracting the oil
off the sample with toluene. You decant the toluene off, and you dry the sample with a solvent
such as acetone. This gives you a quick cleaning and drying procedure.
Samples with heavy crude or very tarry material require Soxhlet extraction and
then drying.
Once the sample is dried, the material is ground, because sometimes we do get big
chunks of material. The sample is sieved through a 35-mesh sieve to match the particle size
of the standard also.
A weighed amount of material is placed in the petri dish. The petri dish is
covered, sealed and then placed on the detector for analysis.
279
-------
The detector used for this gamma ray spectroscopy method is a high purity
germanium detector, which requires liquid nitrogen temperatures to function.
The detector is coupled with a dewar that contains the liquid nitrogen. As shown here, the
detector is inside our lead shield.
On the electronic side, we use a spectroscopy amplifier, a multichannel analyzer,
a high voltage supply, and the NIM power supply. Here you see a lot of electronic NIM bins, but
that is because we have four of these detectors.
We also have a computer that is coupled to the instrumentation, has the proper
software and does the calculations.
In the procedure steps, first, you must calibrate your MCA energy scale, and we
do this using a cobalt-57 and a cobalt-60 source. We use the 122 keV line from the cobalt-57 source
to calibrate the low end of the energy scale, and we use the 1332 keV line from the cobalt-60
source to calibrate the high end of the energy scale.
Once the energy scale is calibrated, you count the radium-226 standard at seven
regions of interest. You count the thorium-232 standard at 4 regions of interest for the radium-
228. You have a background count at 17 energies and your unknown sample count at 17
energies.
The difference between the 7 ROIs for radium-226 plus the 4 ROIs for radium-228 and
the 17 is that we have 6 ROIs thrown in there for the determination of thorium-228.
The data are stored in a directory. The data files are imported into a Lotus spread-
sheet, and after other data are entered into the spreadsheet, the software calculates the results.
The picture that you get on the video terminal of your computer may look like this
where you have energy in keV and counts on your vertical scale. Here you see nothing but
straight lines that look like they don't mean anything, but this is a full gamma ray spectrum.
You can zero in on these regions of interest and expand to show that there are
some real counts in the regions of interest that you set. Here, for example, we set a region of
interest from channel 546 to channel 558.
We have a background spectrum with counts in the peak, the background peak,
and a gross sample with counts in the sample gross count peak. The software takes care of
subtracting the background from the sample gross counts, giving you a sample net count.
This is done for each region of interest that you pick for all the isotopes that you
select.
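A minimal sketch of that background subtraction for a single region of interest (illustrative only; the channel range, counts, and live times are hypothetical, and the live-time scaling is an assumption, since the talk simply describes subtracting the background):

# Illustrative sketch: net counts in one ROI, scaling the background spectrum
# to the sample live time before subtracting. All values are hypothetical.

def net_counts(gross: float, background: float,
               sample_live_s: float, background_live_s: float) -> float:
    return gross - background * (sample_live_s / background_live_s)

gross_roi = 1850.0   # counts summed over, e.g., channels 546-558 in the sample spectrum
bkg_roi = 420.0      # counts summed over the same channels in the background spectrum
print(f"net counts: {net_counts(gross_roi, bkg_roi, 3600.0, 3600.0):.0f}")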
280
-------
For the determination of radium-226, we do not rely solely on the 186 peak,
because that is a weak energy and can be influenced by several factors. We allow our sample
to reach equilibrium. In order to do that, we must trap the radon gas being given off in samples
that are not equilibrated.
Once you trap the radon-222, the other daughters of the radon-222 will grow in,
and after they reach secular equilibrium, you can use the results from these peaks to determine
and check your answer for radium-226.
The other reason we can do this is that all these daughters from radium-226 have
short half-lives and equilibrate in a rather short time, say, 30 days or so.
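For context (an editor's sketch, not part of the talk), the standard ingrowth expression 1 - exp(-λt) shows why roughly 30 days is enough: using the 3.82-day radon-222 half-life shown on the decay-chain slide, the sealed sample is within a fraction of a percent of secular equilibrium after a month.

# Illustrative sketch: fraction of Rn-222 (and its short-lived daughters) grown
# in toward secular equilibrium after sealing, using the 3.82-day half-life.
import math

HALF_LIFE_RN222_DAYS = 3.82
decay_const = math.log(2) / HALF_LIFE_RN222_DAYS   # per day

for days in (7, 14, 21, 30):
    fraction = 1.0 - math.exp(-decay_const * days)
    print(f"{days:2d} days sealed: {100 * fraction:.1f}% of equilibrium")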
For the radium-226, we use 295 and 352 from the lead-214. We use four energies
from the bismuth-214. The other one not included in here is a 934 peak energy.
From the thorium series, since the radium-228 has no gamma rays, we wait for
the ingrowth of the actinium, because it has a very short half-life. We use four energy peaks
from the actinium-228. The fourth one here is 463.
Once the actinium equilibrates, you can use the answers you get here for reporting
the radium-228 concentration.
For the thorium-228, which we do not report but just like to analyze for, we use
the 239 keV peak from lead-212, the 727 keV peak from bismuth-212, and four peaks from thallium-208.
In all of these regions of interest, the calculation is done very simply, as shown
here. We have the activity for whatever region of interest you want in pCi/g. The software takes
the counts per hour for the unknown in that region of interest and divides by the counts per hour per
nanocurie of your standard for that same region of interest.
You multiply that by 1000 to convert nanocuries to picocuries and divide by
the weight of your sample.
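Written out as a small function (an editor's sketch of the arithmetic just described; the variable names and example values are illustrative, not Shell's spreadsheet):

# Illustrative sketch of the per-ROI activity calculation described above:
# activity (pCi/g) = (unknown counts/hr in ROI) / (standard counts/hr per nCi in ROI)
#                    * 1000 (nCi -> pCi) / sample weight (g)

def roi_activity_pci_per_g(unknown_cph: float,
                           standard_cph_per_nci: float,
                           sample_weight_g: float) -> float:
    nanocuries = unknown_cph / standard_cph_per_nci   # apparent activity, nCi
    return nanocuries * 1000.0 / sample_weight_g      # pCi per gram of sample

# hypothetical example values
print(f"{roi_activity_pci_per_g(900.0, 20.0, 100.0):.1f} pCi/g")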
A result sheet would look like this where you have the count time that the sample
was counted for, the weight of the sample, and the sample identification, when the standard was
last counted, what the standard's concentrations are, and some identification code for the
standards that you are using.
Here, shown in red, are the energies that we use for the analysis of radium-226.
You can see here that the sample is at equilibrium. The answer came out to 44.8 for radium-226,
and we say that these are all in equilibrium with the parent and, therefore, the software calculates
an average.
281
-------
The result is not an arithmetic average. It is a weighted average based on the percent
abundance, the prominence, of each line.
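One way such an abundance-weighted average could be formed is sketched below (illustrative only; the per-line activities are hypothetical, the exact weighting in Shell's spreadsheet may differ, and the two Pb-214 gamma abundances are the 19.2% and 37.2% values shown on the decay-chain slide):

# Illustrative sketch: abundance-weighted average of per-line Ra-226 results.
# Per-line activities are hypothetical example values.

lines = [
    # (energy keV, per-line activity pCi/g, gamma abundance %)
    (295.2, 44.1, 19.2),
    (351.9, 45.2, 37.2),
]

weighted = sum(act * abund for _, act, abund in lines) / sum(abund for _, _, abund in lines)
print(f"abundance-weighted Ra-226: {weighted:.1f} pCi/g")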
The same computation is done for the radium-228. We use the actinium energies
shown in blue, and, again, the software calculates a weighted average.
If you come across some results like this with the method that we have (I am not
sure how well it shows in the back), here you have a number that is higher for the radium-
226 and results that are quite close to each other for the rest of the isotopes from radium-226.
After looking at the spectrum, we decided to go looking for uranium-235 in this sample, because
uranium-235 has an energy at 185 keV that will add counts in this region of interest. We looked
also for the 1000 keV peak from uranium-235.
So, given that this answer of 8.8 is not the correct one, in this case we would average
the numbers shown in black here.
Earlier today, you heard about matrix effects or the differences in matrix. You
will remember that we match our unknown sample to the standard matrix. If it is a barium sulfate
sample, we match it to the barium sulfate standards.
Just to give you an example of what happens when you have a produced solids
sample, mainly sand from the production reservoir, and you calculate it against a sand standard,
you will come up, after your sample equilibrates, with numbers in this fashion, giving you an
average of about 605. But if you calculate the results against a wrong standard, against a barium
sulfate standard, you will probably get answers looking like this.
In conclusion, there are a lot of methods out there that the different commercial
labs use, but by matching the unknown with the standard matrix and staying at
approximately the same density as your standard, we feel that our method has several advantages:
One, we do not need a radiochemical separation to determine radium-226 and
radium-228 in these types of samples;
The sample preparation is quick and straightforward;
The standard and sample use the same matrix and geometry;
We have a quick turnaround time, though not as quick as the approximately 8 hours
some of the labs claim; we can't beat that;
The procedure is tailored for production and environmental soil samples;
We feel that our answers are more accurate when compared to commercial labs;
282
-------
And our results are not based on the 186 peak alone but are based on equilibrium
of radium and its daughters;
And we also have the ability to look for potassium-40 and the other uranium
isotopes that may creep up.
And that is the end of my talk.
MR. TELLIARD: Do we have any questions?
(No response.)
MR. TELLIARD: I thank you, sir. Thanks so
much. Appreciate it.
283
-------
284
-------
SHELL DEVELOPMENT COMPANY
J. C. POSTLEWAITE
R. V. RIVERA
U. T. SHEBS
SHELL OFFSHORE INC.
SHELL WESTERN EXP & PROD INC
285
-------
ISOTOPES OF
MAJOR CONCERN
226Ra from 238U
222Rn from 226Ra
228Ra from 232Th
286
-------
ORIGIN OF NORM
U-238 Half-life 4.5 billion years
Th-232 Half-life 14.1 billion years
K-40 Half-life 1.5 billion years
U-235 Half-life 700 million years
These isotopes have been present since the
formation of the earth.
287
-------
[Figure: Uranium-238 decay series, from U-238 (half-life 4.5 billion years) through
Th-234, Pa-234m, U-234, Th-230, Ra-226 (1602 yr; 186.2 keV gamma, 3.3%), Rn-222 (3.82 d),
Po-218, Pb-214 (295.2 keV, 19.2%; 351.9 keV, 37.2%), Bi-214 (1764.5 keV), Po-214, Pb-210
(46.5 keV), Bi-210, Po-210 (138.4 d), to stable Pb-206. Legend: alpha decay energies in MeV,
beta decay energies in keV, gamma decay (energy in keV) with percent abundance;
D = short-lived daughters, O = long-lived daughters. Annotations: the uranium parents are
mostly immobile, remaining in the underground formation; radium is partially mobilized and
occasionally accumulates in scale and sludges; radon is a mobile gas, 5-22% emanated from
scales and sludges and soluble in petroleum liquids.]
-------
WHY NORM ANALYTICAL PROCEDURE NEEDED
TO PROVIDE THE SHELL COMPANIES
WITH RELIABLE, DEPENDABLE ANALYTICAL DATA
* FOR GUIDANCE IN HANDLING OF NORM SCALE AND NORM
CONTAMINATED EQUIPMENT
* TO USE IN RADIATION SAFETY, WASTE DISPOSAL PURPOSES, RISK
MANAGEMENT CALCULATIONS, AND IN ENVIRONMENTAL STUDIES
* FOR USE BY REGULATORY AGENCIES
* THAN PROVIDED BY COMMERCIAL LABORATORIES
-------
NORM SCALE ROUND ROBIN RESULTS
BY GAMMA SPECTROSCOPY METHOD
SAMPLE
CONTENTS
26
52
104
208
0
18
35
70
141
0
LABORATORY NUMBER
8
RADIUM- 226,
34
62
119
245
0.7
43
97
186
425
0
31
40
64
65
1.4
31
57
98
199
<1
32
63
130
260
<.2
RADIUM- 228.
23
38
73
138
0.5
5
21
34
109
0.1
16
16
27
10
0.5
20
33
61
121
<1
25
46
94
200
<.4
pCi/gm
35
71
137
251
0
pCi/gm
23
48
93
172
0
51
81
190
323
1.3
24
20
96
214
0.3
37
81
162
346
<.2
18
28
71
117
0.2
26
52
105
192
0.5
18
35
72
137
0
RADIOCHEMICAL ANALYSIS
-------
II RADIUM IN SAND ROUND ROBIN RESULTS
LABORATORY NUMBER
SAMPLE
CONTENTS 1 2
3* 3
4
RADIUM-226,
60 33 38
240 112 145
60 51
240 214
40
160
RADIUM-228,
0 0.9 0.8
0 1.0 0.8
2.8 <1
5.8 1.6
<.6
<3.0
5 6
pCi/gm
70 47
270 183
pCi/gm
0 6
0 16
7 8
37 59
184 236
1.8 0.5
9.0 0.6
* RADIOCHEMICAL ANALYSIS
-------
STANDARDS PREPARATION
STANDARDS ARE PREPARED IN SAME MATRICES AS THE UNKNOWN SAMPLES
RADIUM-226 STANDARD IN BARITE OR SAND
THORIUM NITRATE (TH-232) FOR RADIUM-228 IN BARITE OR SAND
STANDARDS ARE ADDED TO MATRIX, FOLLOWED BY MIXING AND DRYING
STANDARDS MATRICES TRANSFERRED TO SAMPLE HOLDER AND SEALED
STANDARDS COUNTED AFTER REACHING SECULAR EQUILIBRIUM
FOR GEOMETRY CONSISTENCY - STANDARDS AND SAMPLE COUNTED IN SAME TYPE HOLDER
-------
SAMPLE PREPARATION
SAMPLES ARE NORM SCALE, PRODUCED SOLIDS, OR A COMBINATION
* CLEAN WET SAMPLES REQUIRE ONLY DRYING
* OILY/SLUDGY SAMPLES REQUIRE REMOVAL OF HYDROCARBONS AND DRYING
* SAMPLES WITH HEAVY CRUDE REQUIRE SOXHLET EXTRACTION AND DRYING
* ONCE DRIED, MATERIAL IS GROUND AND SIEVED (35 MESH)
* MATERIAL IS PLACED IN PETRI DISH AND WEIGHED
* PETRI DISH IS COVERED AND SEALED
* SAMPLE PLACED ON DETECTOR FOR ANALYSIS
-------
-------
PROCEDURE STEPS
MCA ENERGY SCALE CALIBRATED
RADIUM-226 STANDARD COUNTED AT SEVEN ENERGIES
THORIUM-232 STANDARD COUNTED AT FOUR ENERGIES
BACKGROUND COUNTED AT SEVENTEEN ENERGIES
UNKNOWN SAMPLE COUNTED AT SEVENTEEN ENERGIES
DATA ARE STORED IN A DIRECTORY
DATA FILES ARE IMPORTED INTO LOTUS SPREADSHEET
OTHER DATA INPUTTED INTO SPREADSHEET
SOFTWARE CALCULATES RESULTS
295
-------
[Figure 6. Gamma Ray Spectrum from Scale: counts versus energy, 0 to 3000 keV.]
-------
[Figure 7. Example of Sample and Background Peaks: counts versus channel number,
channels 528 to 570.]
-------
URANIUM SERIES

NUCLIDE                              HALF-LIFE   GAMMA ENERGIES (keV)
U-238, Th-234, Pa-234m and Pa-234,
U-234, Th-230
Ra-226                               1602 yrs    186 (4%)
Rn-222                               3.8 days    510 (0.07%)
Po-218                               3.1 min     no gammas
Pb-214                               26.8 min    295 (19%), 352 (36%)
At-218                               2 sec       no gammas
Bi-214                               19.7 min    609 (47%), 1120 (17%), 1764 (17%)
Po-214                               164 usec    799 (0.014%)
Tl-210                               1.3 min     296 (80%), 795 (100%), 1310 (21%)
Pb-210                               21 yrs      47 (4%)
Bi-210                               5 days      no gammas
Po-210                               138 days    803 (0.0011%)
Tl-206                               4.2 min     no gammas
Pb-206                               stable
298
-------
THORIUM SERIES

NUCLIDE     HALF-LIFE    GAMMA ENERGIES (keV)
Th-232      1.41 B yrs   no gammas
Ra-228      5.75 yrs     no gammas
Ac-228      6.13 hrs     340 (15%), 908 (25%), 960 (20%)
Th-228      1.91 yrs     84 (1.6%), 214 (0.3%)
Ra-224      3.64 days    241 (3.7%)
Rn-220      55 sec       550 (0.07%)
Po-216      .15 sec      no gammas
Pb-212      10.6 hrs     239 (47%), 300 (3.2%)
Bi-212      60 min       40 (2%), 727 (7%), 1620 (1.8%)
Po-212      304 nsec     no gammas
Tl-208      3.1 min      511 (23%), 583 (86%), 860 (12%), 2614 (100%)
Pb-208      stable
299
-------
CALCULATION FOR EACH ENERGY
REGION OF INTEREST

A_i, pCi/gm = [ (c/h)_(i,U) / (c/h/nCi)_(i,S) ] x 1000 / W_U

where (c/h)_(i,U) is the counts per hour for the unknown sample in region of interest i,
(c/h/nCi)_(i,S) is the counts per hour per nanocurie of the standard in the same region of
interest, W_U is the sample weight in grams, and the factor 1000 converts nanocuries to
picocuries.
-------
RADIUM GAMMA RAY ANALYSIS
Project Name: MC-194A
Project Number: CPI UNIT 3/7/93
Sample Identification: 229810824
Count time (s)
Weight (g)
83459.9
32.02
CALIBRATION DATA
Date Stnd. Activity (nCi) Ct time (hr) Identif. Spectrum File
03/09/93 Ra-226 36.0 2.00 2298-36-1 D4RA0393.CHN
03/09/93 Th-232 46.0 2.00 2298-36-2 D4TH0393.CHN
03/09/93 27.30 D4BG0393.CHN
Analyst: RVR
Analysis Date:
03/16/93
Isotope, picocuries/gram
Peak#
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
Isotope
Ra-226
Pb-212
Pb-214
Ac-228
Pb-214
Ac-228
TI-208
TI-208
Bi-214
Bi-212
TI-208
Ac-228
Bi-214
Ac-228
Bi-214
Bi-214
TI-208
Energy Counts
186.1
240.0
295.2
338.4
351.9
463.1
510.6
583.0
609.3
727.3
860.3
911.6
934.0
965.0
1120.3
1764.5
2614.4
Average
11226
33228
39167
24587
64780
6231
27 '17
3732
44140
872
501
23008
2058
16834
8456
6869
1313
(pCi/g)
Ra-226 Ra-228
44.8
43.9
54.2
43.8
50.1
43.5
50.9
42.9
49.3
42.0
42.7
43.7 51.5
Th-228
5.7
6.1
5.4
5.8
5.6
5.6
5-ZJ
301
-------
Isotope, picocuries/gram
Peak # Isotope
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
Ra-226
Pb-212
Pb-214
Ac-228
Pb-214
Ac-228
Tl-208
Tl-208
Bi-214
Bi-212
Tl-208
Ac-228
Bi-214
Ac-228
Bi-214
Bi-214
Tl-208
Energy
186.1
240
295.2
338.4
351.9
463.1
510.6
583.0
609.3
727.3
860.3
911.6
934.0
965
1120.3
1764.5
2614.4
Counts
5405
9376
3943
1078
6343
377
1117
1691
4093
356
111
1115
176
858
774
693
553
Average, pCi/gram
Ra-226 Ra-228
32.1
7.3
4.0
7.1
4.8
7.0
4.3
6.6
4.4
6.9
7.6
8.8 4.3
Th-228
4.2
4.3
4.5
4.1
2.2
4.9
4.3
302
-------
EFFECTS OF MATRIX DIFFERENCE
PRODUCED SOLIDS SAMPLE CALCULATED
AGAINST TWO DIFFERENT MATRICES
ENERGY AGAINST AGAINST
KeV SAND BARITE
STD STD
(pCi/gm) (pCi/gm)
186 612 830
295 600 684
352 602 660
609 614 646
934 632 638
1120 607 629
1764 603 615
303
-------
ADVANTAGES OF METHOD
NO RADIOCHEMICAL SEPARATION METHOD NEEDED
QUICK, STRAIGHTFORWARD SAMPLE PREPARATION
STANDARDS AND SAMPLE USE SAME MATRIX AND GEOMETRY
QUICK TURNAROUND TIME
TAILORED FOR PRODUCTION OR SOIL SAMPLES
MORE ACCURATE WHEN COMPARED TO COMMERCIAL LABS
RESULTS BASED ON EQUILIBRIUM OF RADIUM AND DAUGHTERS
ABILITY TO SEARCH FOR OTHER NORM ISOTOPES ON SPECTRUM
-------
MR. TELLIARD: Our final speaker for this
afternoon is Joe Raia. Joe is going to be talking on a project that the Office of Water has been
involved with for the last decade. It is on the analysis and determination of diesel, mineral, and
crude oil in drilling muds.
For those of you who aren't familiar with what drilling muds are, they are muds
that you use for drilling, and it is basically a characteristic of the petroleum industry which, as
we all know, is dirty anyhow and that is why they have muds.
So, this particular project is something that Joe and I and numerous other people
who have since retired, passed on, quit, been maimed, have been working on so long we have
forgotten why. Now we are to that point where we are almost done, and I am sure we have lost
the original data, but Joe is going to report on, we hope, the final results of that study.
305
-------
306
-------
METHODS FOR THE DETERMINATION OF DIESEL, MINERAL, AND CRUDE OILS
IN DRILLING MUDS FROM OFFSHORE DRILLING OPERATIONS
by
Joseph C. Raia (Consultant),
Dan Caudle (Conoco Inc.),
Ronald E. Benjamin (Southern Petroleum Labs),
Donald J. Weintritt (Weintritt Consulting Services)
16th Annual EPA Conference on Analysis of
Pollutants in the Environment
Norfolk, Virginia
May 5-6, 1993
307
-------
ABSTRACT
The effluent limitation guidelines being promulgated by the
United States Environmental Protection Agency (EPA) for the
offshore oil and gas industry include the prohibition of the
discharge of diesel oil in drilling muds and drill cuttings from
offshore oil and gas platforms. Analytical test procedures have
been developed by the EPA to allow monitoring for diesel oil in
drilling fluids whenever necessary to ensure compliance with the
regulation. In the development of these analytical techniques,
the EPA and the Technology and Diesel Analysis Work Group of the
American Petroleum Institute (API) conducted studies to evaluate
various extraction and analytical measurement techniques for
reliable determination of diesel, mineral, and crude oils in
drilling muds. This work has resulted in Method 1662 (Soxhlet
Dean-Stark Extraction and Gravimetry for Total Extractable
Material in Drilling Mud), Method 1654A (HPLC/UV for Polynuclear
Aromatic Hydrocarbons Content of Oil), and Method 1663 (GC/FID
for differentiation of Diesel and Crude Oil).
This paper will discuss the methods, the interlaboratory
validation results, and how the methods are employed for
differentiating diesel, mineral, and crude oils in drilling mud
discharges.
308
-------
INTRODUCTION
Diesel oil in drilling fluids (muds) and drill cuttings cannot be
discharged from offshore oil and gas platforms. The diesel oil
prohibition is part of the effluent limitation guidelines for the
offshore oil and gas industry being promulgated by the U.S.
Environmental Protection Agency (EPA) [1]. In support of the
final rule, EPA has issued a compendium of analytical methods for
the determination of diesel, mineral, and crude oils in offshore
oil and gas industry discharges [2]. The analytical test
procedures were developed to allow monitoring for diesel oil in
drilling fluids whenever necessary to ensure compliance with the
regulation.
Initially, Method 1651 (Retort, Gravimetry, and GC-FID) was
developed for diesel monitoring and proposed as part of the 40
CFR Part 435 rule (56 FR 10664-10715). This method uses a retort
apparatus to thermally extract oil from drilling mud. The oil in
the extract is weighed and then further analyzed by gas
chromatography with flame ionization detection (GC-FID). Diesel
oil identification is done by comparing the pattern of GC peaks
in the oil with the pattern produced by a diesel oil reference.
The American Petroleum Institute (API) and its member companies
criticized Method 1651 because the method is not definitive for
diesel, since it can show potential interferences from mineral
oil and crude oil. Mineral oil is an allowed lubricity additive
for drilling fluids, which may be discharged in drilling muds as
long as the discharge passes the sheen test and toxicity limits
are met. Crude oil arising from the oil bearing formation can
have hydrocarbons which interfere in the same boiling range used
in Method 1651 to identify diesel hydrocarbons. Other objections
to Method 1651 were that the retort apparatus used in the method
is not sufficiently reproducible to serve as an analytical
extraction technique, and the device can produce analytical
artifacts with some types of muds.
In order to develop a suitable analytical procedure for diesel in
drilling muds, the EPA and the Technology and Diesel Analysis
Work Group of the API conducted studies to evaluate alternative
extraction and analytical measurement techniques for diesel,
mineral, and crude oils in drilling muds [3, 4, 5]. This work
has resulted in Method 1662 (Soxhlet Dean-Stark Extraction and
Gravimetry for Total Extractable Material in Drilling Mud),
Method 1654A (HPLC/UV for Polynuclear Aromatic Hydrocarbons
Content of Oil), and Method 1663 (GC/FID for differentiation of
Diesel and Crude Oil).
309
-------
This paper will discuss these methods and the interlaboratory
validation results.
DRILLING MUD DISCHARGE MONITORING FOR OIL AND TOXICITY
The discharge of drilling muds from offshore platforms requires
environmental compliance monitoring for oil and toxicity.
Regulatory requirements are: no free oil can be present, as
measured by the static sheen test (the visual sheen test is
allowed in EPA Region VI); a toxicity limitation in the suspended
particulate phase of the mud to mysids as measured by the 96-h
LC50 >= 30,000 ppm; no diesel can be present as documented by the
well inventory record, and verified by confirmatory analytical
testing when required. For confirmatory analysis of diesel, EPA
Methods 1662, 1654A, and 1663 are used in a tiered analysis
approach as discussed below.
DEVELOPMENT OF EPA METHODS 1662, 1654A, AND 1663
In the development of Methods 1662, 1654A, and 1663, work
conducted by EPA and the API Task Group was aimed at obtaining a
good alternative extraction procedure to the retort, and a
measurement finish that would allow diesel to be distinguished
from mineral oil and crude oil.
The extraction techniques evaluated in addition to the retort
were: Soxhlet Dean-Stark (Soxhlet-DS), sonication with
acetone/methylene chloride (1:1 V/V), and supercritical fluid
extraction (SFE) with carbon dioxide. Laboratory-prepared hot-
rolled muds were spiked at two concentration levels of diesel.
One level was at 0.2% and the other at 2.0%. Similarly, other
mud samples were spiked with mineral oil and with crude. Based on
the recovery data from these extraction studies, Soxhlet-DS was
selected as the best extraction procedure for diesel in drilling
muds [3]. SFE gave lower recoveries for diesel than did the
other techniques tested in this study. This may have been due in
part to problems caused from the relatively high water contents
of drilling muds.
In the analytical measurement of diesel in drilling muds,
definitive techniques are required that allow diesel to be
distinguished from potential interferences caused by mineral oil
and crude oil. Diesel oil is known to generally contain higher
concentrations of polynuclear aromatic hydrocarbons (PAHs) than
does mineral oil. Further, the alkane hydrocarbons in diesel are
typically in the boiling range of C10-C24, while in crude oils,
the alkane hydrocarbons generally range lower than C10 and extend
beyond C24. These distinguishing characteristics, PAH content
310
-------
and alkane boiling range, were the basis of selecting High
Performance Liquid Chromatography/Ultraviolet (HPLC/UV) for
measuring PAHs, and GC-FID for determining n-alkane boiling point
profiles.
A study was then made to quantify the PAH contents and n-alkane
distributions in diesel, mineral, and crude oils [4]. Retort
results for drilling muds from offshore drilling sites were
surveyed to determine levels of total extractable material in
drilling muds [5]. These data provided PAH concentration levels
that could be used to distinguish diesel oil from mineral oil,
and n-alkane distributions that could be used to differentiate
diesel oil from crude oil. The survey also provided an
indication of background concentration levels of extractable
material in drilling muds to which diesel had never been added.
This information provided the basis for how the tiered analysis
approach is employed, using PAH content and n-alkane
distributions, to determine diesel oil in drilling muds.
DIFFERENTIATION OF DIESEL, MINERAL, AND CRUDE OILS
BY EPA METHODS 1662, 1654A, AND 1663
The tiered analysis approach is used to determine the presence of
diesel in drilling muds as shown in Figure 1.
Method 1662 uses a Soxhlet/Dean-Stark extractor to remove oil
from the drilling mud. The total oil in the extract can be
measured by weighing a measured portion of the extract. The
other portion of the extract is used in Methods 1654A and 1663.
The PAH content of the extracted oil is measured as phenanthrene
by HPLC/UV in Method 1654A. If the PAH content is less than 0.35
weight percent, the oil is mineral oil. If the PAH content is
equal to or greater than 0.35 weight percent, the oil is diesel
oil or crude oil.
Method 1663 uses GC-FID to measure the presence and distribution
of hydrocarbons in the extracted oil. The presence of n-alkanes
in the C9-C24 range indicates the presence of diesel or crude
oil. If fewer than 10 n-alkanes are present in the C9-C24 range (at a
signal-to-noise ratio of 3 or greater for each n-alkane), diesel
oil is not present. If 10 or more n-alkanes are present in the
C9-C24 range, the percentage of n-alkanes in the C25-C30 range
is used to determine if the oil is crude oil. The oil is crude
oil if the C25-C30 n-alkane content is greater than 1.2 percent
of the total C9-C30 n-alkane content.
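The tiered logic just described can be summarized in a short sketch (our paraphrase, in Python, of the decision points stated above, assuming the Method 1654A and 1663 measurements are already in hand; this is not code taken from the methods themselves):

    def classify_oil(pah_wt_pct, n_alkanes_c9_c24, c25_c30_pct_of_c9_c30):
        # pah_wt_pct:            PAH content of the extracted oil, weight percent (Method 1654A)
        # n_alkanes_c9_c24:      number of n-alkanes detected in the C9-C24 range at S/N >= 3 (Method 1663)
        # c25_c30_pct_of_c9_c30: C25-C30 n-alkanes as a percent of total C9-C30 n-alkanes (Method 1663)
        if pah_wt_pct < 0.35:
            return "mineral oil"
        if n_alkanes_c9_c24 < 10:
            return "diesel oil not present"
        if c25_c30_pct_of_c9_c30 > 1.2:
            return "crude oil"
        return "diesel oil"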
INTERLABORATORY STUDY OF METHODS 1662, 1654A, AND 1663
An interlaboratory round-robin test of methods 1662, 1654A and
311
-------
1663 is underway, and the results are being compiled at this
time. The results presented in this paper are preliminary
and not yet complete. The test design includes six
analytical laboratory participants, and one additional laboratory
which prepared and distributed the drilling mud samples for
testing. DynCorp Viar is serving as the sample control center for
the study. The laboratory participants have received three
drilling mud samples. One sample was spiked with diesel oil,
another with mineral oil, and the other with crude oil. The
laboratories are reporting initial and ongoing precision and
recovery results for each method as specified. Each of the mud
samples is analyzed in duplicate.
Results obtained thus far show that all four laboratories that
have reported data have been able to meet the initial precision
and recovery acceptance criteria for the methods (Table 1).
Results for the spiked drilling mud samples, however, show
measurable interlaboratory variability (Table 2). As soon as
all the data have been reported, the results will be
examined in detail for verification. The completed results of
the interlaboratory study will then be used to determine any
required method revisions.
REFERENCES
[1] 40 CFR Part 435 (58 FR 12454-12512, No. 41, March 4, 1993)
[2] "Methods for the Determination of Diesel, Mineral, and Crude
Oils in Offshore Oil and Gas Industry Discharges", EPA-821-R-92-
008, 1992
[3] "Results of the API Study of Extraction and Analysis
Procedures for the Determination of Diesel Oil in Drilling Muds"
Final Report, American Petroleum Institute, Offshore Guidelines
Steering Committee, Technology Work Group, prepared by J. C.
Raia, Shell Development Company, Houston, Texas, April 8, 1992.
[4] "Polycyclic Aromatic Hydrocarbon and Normal Alkane
Distributions in Diesel, Crude, and Mineral Oils: A Comparative
Study", Volumes 1 and 2, American Petroleum Institute, Technology
Work Group, Offshore Effluent Guidelines Task Force, Report
Prepared by Ronald Benjamin, Core Laboratories, Lafayette, LA,
January 10, 1992.
[5] "The Analysis of Drilling Fluids and Cuttings from 14
Offshore Drilling Sites", Final Report, Weintritt Testing
Laboratories, March 31, 1989.
312
-------
Method 1662
  SDS extraction
  Rotovap to approximately 1 mL
  Adjust volume to 5.0 mL with acetonitrile
  Evaporate 4.0-mL portion to dryness with nitrogen blowdown
  Determine total oil in 4.0-mL portion by gravimetry

Method 1654A
  Determine PAH content of 1.0-mL portion by HPLC/UV
  If PAH content >= 0.35 wt %, oil may be diesel or crude oil

Method 1663
  Determine n-alkane pattern of 1.0-mL portion by GC/FID
  If <10 n-alkanes present in C9-C24 range at S/N >= 3, no diesel
  If >=10 n-alkanes present in C9-C24 range and C25-C30 n-alkanes > 1.2% of total
  C9-C30 n-alkanes, oil is crude oil

FIGURE-1
Differentiation of Diesel, Mineral, and Crude Oils by SDS Extraction,
HPLC/UV, and GC/FID, using Methods 1662, 1654A, and 1663
-------
TABLE-1
RESULTS OF INTERLABORATORY STUDY OF METHODS
1662, 1654A, AND 1663 (ALL DATA NOT YET REPORTED)
INITIAL PRECISION & RECOVERY
LABORATORY A B C D E F (one column per laboratory reporting to date)

1662 (%W Extract.)
SPIKE    0.25    0.25    0.20    0.25
MEAN     0.22    0.18    0.15    0.20
S.D.     0.02    0.04    0.02    0.02

1654A (mg/ml PAH)
SPIKE    1.25    3.21    1.25    1.25
MEAN     1.39    3.56    1.05    1.36
S.D.     0.04    0.41    0.03    0.13

1663 (mg/ml C25-C30)
SPIKE    1.25    1.25    1.25    1.25
MEAN     1.24    1.16    1.13    1.20
S.D.     0.02    0.14    0.12    0.22
-------
TABLE-2
RESULTS OF INTERLABORATORY STUDY OF METHODS
1662, 1654A, AND 1663 (ALL DATA NOT YET REPORTED)

LABORATORY        A        B        C        D E F
%W Extract.
  M+Mineral       0.18     0.028    0.26     0.05
  M+Diesel        0.14     0.050    0.17     0.34
  M+Crude         0.14     0.041    0.13     0.14
%W PAH
  M+Mineral       0.18     0.50     0.19     <0.01
  M+Diesel        2.70     2.74     0.28     0.04
  M+Crude         1.16     1.78     0.40     <0.01
%W C25-C30
  M+Mineral       <1.0     <1.0     <1.0     <1.2
  M+Diesel        <1.0     0.6      <1.0
  M+Crude         1.53     7.5      <1.0
-------
METHODS FOR THE DETERMINATION OF DIESEL,
MINERAL, AND CRUDE OILS IN DRILLING MUDS
FROM OFFSHORE DRILLING OPERATIONS
by
Joseph C. Raia, Consultant; Dan Caudle, Conoco;
Ronald E. Benjamin, Southern Petroleum labs, and
Donald J. Weintritt, Weintritt Consulting Services
Presented at the
16th Annual EPA Conference on
Analysis of Pollutants in the Environment
Norfolk, Virginia
May 5-6, 1993
-------
ACKNOWLEDGMENTS
Robert C. Ayers, Consultant(Exxon)
Carrie Buswell, DynCorp Viar
Thomas M. Randolph, Consultant(Shell Offshore)
Dale R. Rushneck, Interface
George H. Stanko, Shell Development
Alexis E. Steen, American Petroleum Institute
Michael T. Stephenson, Texaco
William A. Telliard, Environmental Protection Agency
-------
DRILLING MUD DISCHARGE MONITORING
OIL AND TOXICITY
• NO FREE OIL
STATIC/VISUAL SHEEN TEST
NO TOXICITY
MYSID LC50 >= 30,000 PPM
• NO DIESEL
WELL INVENTORY RECORD/ANALYSIS
• Ref.: Oil and Gas Extraction Point Source Category;
Offshore Subcategory Effluent Guidelines and New
Source Performance Standards; Final Rule
[Fed. Reg. 56, No. 41, March 4, 1993]
-------
EPA & API METHOD DEVELOPMENT EFFORT
DIESEL IN DRILLING MUDS
INITIAL WORK - EPA METHOD 1651A(RETORT-GC/FID)
ALTERNATIVE EXTRACTION/ANALYSIS STUDIES
Ref. [API Diesel Analysis Work Group Report, April 92]
TIERED ANALYSIS APPROACH
EPA METHODS 1662, 1654A, 1663
Ref. [EPA 821-R-92-008, December 1992]
INTERLABORATORY METHOD VALIDATION
-------
OIL RECOVERY vs EXTRACTION METHOD
BASE MUD SPIKED WITH DIESEL (D)
% Wt. Recovery of Oil Added

EXTRACTION METHOD   Mud + 0.2% D   Mud + 2% D
Soxhlet-DS               87             94
Retort                  123            116
Sonication              171            170
SFE                      69             22

Ref: API Comments of 5/13/91 (Vol. 8, Tab 2) to 56 Fed Reg 10664-10715 (3/13/91)
-------
SOLVENT EXTRACTION VS RETORT GRAVIMETRIC
Extraction of Diesel Oil in Drilling Mud
[Chart: measured oil (g/kg) versus diesel oil added (0 to 20 g/kg) for the nominal
amount, retort, Soxhlet-DS, and sonication.]
Ref: API Comments of 5/13/91 (Vol. 8, Tab 2) to 56 Fed Reg 10664-10715 (3/13/91)
-------
GRAVIMETRIC OIL RESULTS
A) BASE MUD - NO OIL ADDED
[Chart: mg/kg oil recovered from base mud by extraction method
(Sonication, Soxhlet-DS, SFE, Retort); bar values 837, 579, 278, 138.]
-------
SURVEY OF EXTRACTABLES IN DRILLING MUDS, PAH IN
MINERAL OILS, AND C25-C30/C9-C30 IN DIESEL OILS

            a) EXTRACTABLES IN   b) PAH IN        b) C25-C30 IN
               DRILLING MUDS        MINERAL OILS     DIESEL OILS
               (mg/kg)              (% WT)           (% WT)
N                  14                  9                10
Mean             1267.                 0.1590            0.45
S.D.              748.                 0.0935            0.39
Mean+2SD         2764.                 0.3459            1.22

a) "The Analysis of Drilling Fluids and Cuttings from 14 Offshore Drilling Sites",
   Weintritt Testing Laboratories Report, March 31, 1989.
b) "PAH and n-Alkane Distributions in Diesel, Crude, and Mineral Oils: A Comparative
   Study", Core Laboratories Report, January 10, 1992.
-------
Method 1662
  SDS extraction
  Rotovap to approximately 1 mL
  Adjust volume to 5.0 mL with acetonitrile
  Evaporate 4.0-mL portion to dryness with nitrogen blowdown
  Determine total oil in 4.0-mL portion by gravimetry

Method 1654A
  Determine PAH content of 1.0-mL portion by HPLC/UV
  If PAH content >= 0.35 wt %, oil may be diesel or crude oil

Method 1663
  Determine n-alkane pattern of 1.0-mL portion by GC/FID
  If <10 n-alkanes present in C9-C24 range at S/N >= 3, no diesel
  If >=10 n-alkanes present in C9-C24 range and C25-C30 n-alkanes > 1.2% of total
  C9-C30 n-alkanes, oil is crude oil
-------
METHOD 1662 [From EPA 821-R-92-008,12/92]
Soxhlet/Dean-Stark Extractor
-------
METHOD 1654A [From EPA 821-R-92-008, 12/92]
HPLC/UV of Standard & No. 2 Diesel Oil
[Figure: HPLC/UV chromatograms of the standard and of No. 2 diesel oil;
retention time axis 0 to 2.0 x 10^1 minutes.]
-------
METHOD 1663 [From EPA 821-R-92-008, 12/92]
GC/FID of Crude Oil & Diesel through C25-C30
[Figure: GC/FID chromatograms of crude oil and diesel oil through C25-C30.]
-------
INTERLABORATORY STUDY OF METHODS 1662, 1654A, 1663
Six Lab Participants + One Sample Prep Lab

Sample                   M1662         M1654A       M1663      M1663
                         Extractable   PAH in Oil   C25-C30    Diesel Oil
                         %W            %W           %W         mg/ml
IPR                      4X            4X           4X         4X
Blank (1662)             1X            1X           1X         1X
Blank (1654A or 1663)    1X            1X           1X         1X
OPR                      1X            1X           1X         1X
Mud+Diesel               2X            2X           2X         2X
Mud+Mineral              2X            2X           2X         2X
Mud+Crude                2X            2X           2X         2X

Mean     % Diff.
-------
RESULTS OF INTERLABORATORY STUDY OF METHODS
1662, 1654A, AND 1663 (ALL DATA NOT YET REPORTED)
INITIAL PRECISION & RECOVERY
LABORATORY A B C D E F (one column per laboratory reporting to date)

1662 (%W Extract.)
SPIKE    0.25    0.25    0.20    0.25
MEAN     0.22    0.18    0.15    0.20
S.D.     0.02    0.04    0.02    0.02

1654A (mg/ml PAH)
SPIKE    1.25    3.21    1.25    1.25
MEAN     1.39    3.56    1.05    1.36
S.D.     0.04    0.41    0.03    0.13

1663 (mg/ml C25-C30)
SPIKE    1.25    1.25    1.25    1.25
MEAN     1.24    1.16    1.13    1.20
S.D.     0.02    0.14    0.12    0.22
-------
RESULTS OF INTERLABORATORY STUDY OF METHODS
1662, 1654A, AND 1663 (ALL DATA NOT YET REPORTED)

LABORATORY        A        B        C        D E F
%W Extract.
  M+Mineral       0.18     0.028    0.26     0.05
  M+Diesel        0.14     0.050    0.17     0.34
  M+Crude         0.14     0.041    0.13     0.14
%W PAH
  M+Mineral       0.18     0.50     0.19     <0.01
  M+Diesel        2.70     2.74     0.28     0.04
  M+Crude         1.16     1.78     0.40     <0.01
%W C25-C30
  M+Mineral       <1.0     <1.0     <1.0     <1.2
  M+Diesel        <1.0     0.6      <1.0
  M+Crude         1.53     7.5      <1.0
-------
CONCLUDING COMMENTS
• THE TIERED ANALYSIS APPROACH IS PRESENTLY THE
MOST DEFINITIVE AND COST EFFECTIVE WAY TO
MEASURE DIESEL, MINERAL, AND CRUDE OILS IN
DRILLING MUDS FROM OFFSHORE DRILLING OPERATIONS
• EPA METHODS 1662, 1654A, and 1663 WERE DEVELOPED
FOR THE MEASUREMENT OF PAH CONTENT OF OIL TO
DISTINGUISH MINERAL FROM DIESEL, AND THE ALKANE
DISTRIBUTION TO DIFFERENTIATE DIESEL AND CRUDE OIL
THE RESULTS OF THE INTERLABORATORY VALIDATION STUDY
OF THESE METHODS, WHICH IS NEARLY COMPLETED, WILL
BE USED TO DETERMINE ANY REQUIRED METHOD REVISIONS
-------
QUESTION AND ANSWER SESSION
MR. TELLIARD: Do we have any questions?
MR. CROWLEY: Ray Crowley from Millipore.
I have a question on 1654. You do an HPLC separation, and then you add up all
the peaks. Why was that chosen over just doing a UV analysis and doing a total absorption,
since you deal with the sum at the end anyhow and reference it to the phenanthrene?
MR. RAIA: Are you talking about not doing a
separation?
MR. CROWLEY: Yes. You have got an
acetonitrile extract. Why don't you just put it in the UV spectrometer and measure total absorbance?
Because you add it up at the end anyhow.
MR. RAIA: I think that you may end up with some
interferences.
MR. CROWLEY: From?
MR. RAIA: From other additives that could be
extracted in the drilling mud.
MR. TELLIARD: There is more than one material
that would come over in the extraction that we...it isn't real oil but would appear as oil in the
analysis which is why we are doing the separation.
Anyone else? (No response.)
MR. TELLIARD: I would like to have a round of
applause for our speakers today for this afternoon's session.
Thank you all for your attention. Tomorrow morning at 8:30, quarter of 9:00, we
will start. For those of you attending the magic show tonight, please do not disappear,
particularly if you are a speaker.
We'll see you.
(WHEREUPON, the proceedings were recessed at 4:30 p.m.)
332
-------
PROCEEDINGS
May 6, 1993
MR. TELLIARD: I would like to get started, please.
Could you come on in and take a seat?
To get the day off on a really upbeat note, I have a schedule change. Mike
Kravitz who is supposed to be here at 9:45 will be showing up in Ms. Rhodes' place at 3:15.
So, we are switching two papers. We are going to have the pesticide paper this morning and the
dredge materials this afternoon. I hope that does not inconvenience anyone.
This morning, we open with George Stanko. George has been, I think, at every
Norfolk meeting since inception. There is no prize for that. He just keeps coming back.
George is going to talk about the storm water sampling and analysis program for
Shell.
333
-------
334
-------
STORM WATER SAMPLING AND ANALYSIS
Author: G. H. Stanko
Shell Development Co.
Presented at: 16TH Annual EPA Conference on Analysis
of Pollutants in the Environment
Norfolk, Virginia
May 5-6, 1993
335
-------
ABSTRACT
The Final Rule for storm water discharges under the NPDES Permit system
was published in the Federal Register on November 16, 1990. This Final
Rule established requirements for the storm water permit application
process. Initial review of the Final Rule identified a number of
technical issues associated with the flow-weighted compositing requirement
for organics.
EPA published their "NPDES Storm Water Sampling Guidance Document" in July
1992 and the EPA guidance document was very explicit about the regulatory
requirement to use flow-proportioned samples for storm water monitoring.
A comprehensive review of the EPA guidance document indicated it provides
detailed information and should prove helpful in meeting regulatory
requirements, but EPA may need to provide additional information in some
areas.
A hypothetical example for a storm water event along with all the needed
calculations to prepare the flow-weighted composite samples for organics
and volatile organics was prepared and is included in this paper. The
hypothetical example demonstrates the EPA flow-weighted compositing
requirement could be met with some difficulty and also identified a number
of areas where additional EPA guidance may be considered.
336
-------
STORM WATER SAMPLING AND ANALYSIS
INTRODUCTION
The Final Rule for storm water discharges under the NPDES Permit system
was published in the Federal Register of November 16, 1990. This Final
Rule established requirements for the storm water permit application
process. The Chemical Manufacturers Association (CMA) Environmental
Monitoring Task Group (EMTG) was asked to review the Final Rule and draft
a guidance document for member companies. The review indicated that
compliance monitoring for storm water regulations was not a simple task
and identified a major technical issue concerning the requirement to use
flow-weighted composite samples for organics analysis. The Final Rule did
not provide sufficient details for preparation of flow-weighted composite
samples for semi-volatile organics nor for volatile organics. The EPA
methods also do not contain protocols for preparation of flow-weighted
composite samples. The CMA Task Group prepared a draft guidance document
which identified this major technical issue and recommended using time-
weighted composite samples based on protocols that were used during
effluent guideline limitations samplings conducted by EPA and
participating CMA member companies.
All of the technical issues associated with the flow-weighted compositing
requirement were subsequently discussed with EPA and the CMA guidance
document was offered for EPA review and concurrence. In July 1992, EPA
published their guidance document, "NPDES Storm Water Sampling Guidance
Document" (EPA 833-B-92-001, July 1992). The CMA EMTG was asked to review
the EPA guidance document and to prepare comments.
Review of the EPA guidance document identified several technical issues
which could bias the analytical results. However, the EPA guidance
document was very explicit about the regulatory requirement to use flow-
proportioned samples to satisfy storm water regulations. Comments
resulting from the review were prepared to share with EPA and a
hypothetical example for a storm water event along with all the needed
calculations to prepare the flow-weighted composite samples for organics
and volatile organics were prepared to demonstrate how one might comply
with storm water regulations. The example demonstrated that the EPA flow-
weighted compositing requirement could be met but with some difficulty.
337
-------
COMMENTS FOR EPA's NPDES STORM WATER SAMPLING GUIDANCE DOCUMENT
Review of the EPA guidance document revealed that EPA provided detailed
information which should prove helpful in meeting storm water regulatory
requirements; however, there are still some areas where more details or
further clarifications are needed. The comments that follow address the
specific actions required to meet the regulatory requirements for the
entire process and more importantly, point out potential problem areas and
options that one may want to consider for meeting the regulatory
requirements.
Chapter 2
Chapter 2 (p. 15-18) clearly identified the specific nature and criteria
for a storm event. The EPA guidance specifies that one needs to establish
and document details for the storm event to verify the event meets EPA
specified criteria. One first has to establish in which rain zone of the
United States the sampling is to occur (Exhibit 2-8, p. 21), then identify
the annual statistics and the average parameters for each independent
storm event for the site to be sampled. For example, if the site were in
East Texas (Houston), one would find that the average storm duration is 8
hours with an intensity of 0.137 inches per hour, and the average volume
of rain is 0.76 inches. The average interval between storm midpoints is
213 hours. With such information in hand, one can prepare a sampling plan
and then establish whether a given storm event met the specified criteria.
The example (Exhibit 2-9, p. 22) was quite easy to follow and understand.
For a site in Houston, Texas, a storm event that lasts between 4 to 12
hours where the rain volume is between 0.38 and 1.14 inches would meet the
criteria provided it occurred 72 hours after the last storm event.
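As a hedged illustration, the criteria check for such a site could be written as follows (a sketch only; the duration and depth windows are the Houston-area values quoted above, and the 72-hour figure is the minimum interval since the previous storm event):

    def storm_event_meets_criteria(duration_hr, rain_depth_in, hours_since_last_storm,
                                   duration_window=(4.0, 12.0),
                                   depth_window=(0.38, 1.14),
                                   min_interval_hr=72.0):
        # Returns True if the storm event is representative for the site.
        return (duration_window[0] <= duration_hr <= duration_window[1]
                and depth_window[0] <= rain_depth_in <= depth_window[1]
                and hours_since_last_storm >= min_interval_hr)

    # The hypothetical event described later in this paper:
    # 4 hours 15 minutes, 0.42 inches, 240 hours since the last rainfall.
    print(storm_event_meets_criteria(4.25, 0.42, 240))   # True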
One detail missing from Chapter 2 is the need to have and use a rain gage
at the sampling site during the storm event. Rain volume (depth)
information is needed to establish that the storm event met the specified
criteria. While the EPA guidance document does not preclude the use of
local weather information to establish meeting the storm event criteria,
local weather reports may not be appropriate because of the distance of
an official weather-gathering site from the sampling location or due to
the local nature of the storm event. These and other factors need to be
considered. Use of a rain gage at the sampling site is perhaps the most
reliable way to obtain the needed information. It is important to note
that the rain volume information is only used to establish that the storm
event met criteria, and this information is not used in any subsequent
calculations for the flow-weighted composite samples.
Chapter 3
Chapter 3 provides useful information concerning different options
available to measure or estimate flow rates. Perhaps one of the least
338
-------
expensive and practical ways is described as the "Float Method" (p. 49).
Initial review of the guidance revealed no major problems. Exhibit 3-8
(p. 51) provides an example for a storm event. These data in Exhibit 3-8
were used for the hypothetical example. Sample #1 in this example was
taken at "0 minutes" when there should have been no flow. One explanation
would be that they set the clock to "zero" at the time the first sample
was actually collected. This may be the situation since Exhibit 3-8
showed flow and flow calculations at S0. However, a corresponding example
in Exhibit 3-16 is inconsistent with 3-8 because Sample #1 was collected
when there was no flow. We think the best way to address this problem is
the way it was done in Table 1 of our hypothetical example. "S0" was used
to denote that no sample was collected; t0 was used to denote when the sampling
clock was started; and "clock time" was used to document the actual times
involved. No flow calculations were made at this time (t0) since the flow
just started. Table 1 for our hypothetical example also included the use
of "S," "t," and "Q" to make the Table easier to use when calculations are
made. All of these terms are used in subsequent EPA example equations.
The problem with S#1 was previously covered and addressed in Table 1 of
the hypothetical example. Exhibit 3-16 (p. 63) shows that the flow was
measured and calculated when S#9 was collected. Exhibit 3-16 (p. 64)
shows how the data from p. 63 would be plotted. In this example, it shows
S#1 would have been collected 20 minutes after flow started. S#9 was
collected at 180 minutes and the curve shows the flow (Q9) was zero at 180
minutes.
Step 3 advises to "assume that flow drops uniformly from the last
calculated flow rate (Q9) to zero at the time when Q10 would have been
taken." This is where the real gray areas exist. First, the curve
actually shows zero flow at Q9. It was confirmed that Q9 = zero was used
to calculate the volume of runoff (V9) during the time from 160 to 180
minutes (p. 66). Secondly, Q10 is not used in subsequent calculations.
We think Step 3 should have stated the flow at Q9 should have arbitrarily
been set to zero.
This brings up a second point. Do you have to measure the flow when S#9
is collected? All the EPA examples show such a calculation and we also
showed a flow calculation in Table 1 of the hypothetical example. The
second point is why should the flow be assumed to be zero rather than use
the measured flow when S#9 is collected. Arbitrarily setting the flow to
zero would introduce some bias (low) to the actual flow calculation from
the time flow was observed and when S#9 was collected. If there was a
large incremental increase in flow from Q8 to Q9, the amount of bias would
increase substantially. The EPA guidance document needs further
clarification. In the hypothetical example, Q9 was arbitrarily set to
zero to follow the EPA example in Exhibit 3-16. We are not certain this
is the correct way to calculate the volume or the way EPA intended it to
be done.
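To make the arithmetic concrete, the sketch below integrates the measured flows over time under the assumption that flow varies linearly between successive measurements (a trapezoidal approximation of our own choosing, not a procedure confirmed by the EPA guidance), with a flag for the arbitrary "set the final flow to zero" treatment discussed above:

    def total_runoff_volume(times_min, flows_cfm, force_last_flow_to_zero=False):
        # Approximate total runoff (cubic feet) from paired time (minutes) and
        # flow-rate (cfm) readings, assuming the flow varies linearly between
        # successive measurements.
        flows = list(flows_cfm)
        if force_last_flow_to_zero:
            flows[-1] = 0.0   # the Exhibit 3-16 convention questioned above
        volume = 0.0
        for i in range(1, len(times_min)):
            volume += 0.5 * (flows[i - 1] + flows[i]) * (times_min[i] - times_min[i - 1])
        return volume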
Chapter 3 (p. 39) specifies the nature of the samples to be collected.
One must take one grab sample (for each parameter to be monitored) within
339
-------
the first 30 minutes of discharge, or as soon as possible. One must also
collect a flow-weighted composite sample for at least the first three
hours of the discharge, or for the event's entire duration (if it is less
than three hours). The flow-weighted composite samples must be a
combination of at least three sample aliquots taken during each hour of
discharge, with a minimum of 15 minutes between each aliquot. These are
the identified sampling requirements to meet the regulatory criteria.
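One loose reading of those timing rules is sketched below (illustrative Python only; the aliquot times in the example are minutes after flow began for the hypothetical event described later in this paper):

    def aliquot_timing_ok(aliquot_times_min, discharge_duration_min):
        # Check the composite-aliquot rules described above: aliquots at least
        # 15 minutes apart, at least three per hour of discharge, and all
        # collected within the first three hours.
        times = sorted(aliquot_times_min)
        hours = min(3, int(round(discharge_duration_min / 60.0)))
        spacing_ok = all(later - earlier >= 15 for earlier, later in zip(times, times[1:]))
        count_ok = len(times) >= 3 * hours
        window_ok = all(t <= 180 for t in times)
        return spacing_ok and count_ok and window_ok

    # Nine aliquots at 20, 40, 60, 80, 100, 120, 140, 155, and 170 minutes
    # after flow began, with at least three hours of discharge.
    print(aliquot_timing_ok([20, 40, 60, 80, 100, 120, 140, 155, 170], 180))   # True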
While a lot of details and useful information are presented for
these samples, Exhibit 3-22 (Chapter 3, p. 74) gives an excellent example
of some sampling times that might be used; however, there are a few
things to consider. The example identified the discharge time from 2:15pm
through 5:15pm. In the example, they collected the first grab sample for
the composite at 2:20pm which is only 5 minutes after flow or discharge
was first noted. The area of concern is for one's ability to estimate
flow with any degree of accuracy only 5 minutes after it started. Perhaps
an alternate option would be to collect samples at 2:35pm (20 minutes
after flow), 2:50pm, 3:05pm, 3:25pm, 3:45pm, 4:05pm, 4:25pm, 4:45pm, and
5:05pm. The initial grab sample for all parameters would be collected at
the 2:35pm sampling time following directions given in Exhibit 3-17 (p.
69).
The nine required grab samples would be collected at the times indicated
and would be used to prepare the flow-weighted composite samples. The EPA
guidance document did indicate on p. 75 that the flow-weighted composite
samples should be prepared at the laboratory. The guidance document did
not specify the container size to be used to collect the grab sample for
the composite sample. Later in the document EPA indicated that a total
volume of 5,000 mL is needed to have sufficient sample to perform all the
analyses. This total volume requirement would be independent of the VOA
samples for volatile organics. A one-liter amber glass container for each
sampling time would provide more than the required volume. Page 75 gives
two different ways one should consider to collect the nine individual grab
samples.
Page 75 also states that "generally, 1,000 ml for each aliquot collected
should provide enough sample volume, when composited." Their example on
p. 80 resulted in a final composite of 5,100 mL. Our hypothetical example
in Table 3 resulted in a final volume of 5,950 mL. If one uses the
example in Exhibit 3-16 (p. 63) and goes through the calculations, the final
volume would have been 4,000 mL, which is somewhat short. Review of the flow
data showed a surge during three samplings with minimal flow during the
rest. It is important to note that the 1,000 mL at each sample may not be
sufficient to achieve the 5,000 mL target volume. The nature of the storm
event and subsequent flow impacts this final volume considerably. In our
hypothetical example we recommended collecting two 1-L containers to cover
the possibility of breakage. This second container could also be used for
preparing a composite when an unusual storm event was experienced.
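One plausible way to carry out the flow proportioning is sketched below: each grab contributes a volume proportional to the flow rate measured when it was collected, with the highest-flow grab contributing the full 1-L bottle. This is our illustration of the arithmetic, not the convention used in the paper's Table 3 (which arrived at a 5,950-mL composite) or in the EPA guidance; proportioning to the incremental discharge volume between samples would be an equally reasonable alternative. The flow rates are those of the hypothetical event described later in this paper.

    def flow_proportioned_volumes(flows_cfm, max_aliquot_ml=1000.0):
        # Aliquot volumes (mL) proportional to flow rate, with the highest-flow
        # grab contributing the full collected bottle (max_aliquot_ml).
        peak = max(flows_cfm)
        volumes = [max_aliquot_ml * q / peak for q in flows_cfm]
        return volumes, sum(volumes)

    # Flow rates (cfm) at the nine sampling times of the hypothetical event.
    flows = [1.8, 3.5, 3.6, 3.9, 4.0, 3.7, 1.8, 1.9, 1.7]
    volumes, total = flow_proportioned_volumes(flows)
    print(round(total))   # 6475 mL with these flows, above the 5,000-mL target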
The EPA guidance document provided information for proper sample
containers and preservation requirements (taken from 40 CFR Part 136) but
additional information may be required for some analytical tests. For
340
-------
example, the pH method specifies that the pH should be taken at the time
of collection which is a field measurement. EPA has not approved the use
of narrow-range pH paper in 40 CFR Part 136, so one would have to use a
meter in the field to meet EPA's pH method requirements. It should also
be noted that the approved EPA methods for pH also have a requirement to
record the sample temperature at the time the pH measurement is taken.
This would have to be done for full compliance with the method.
Assuming that one measures the pH of the storm water when each of the nine
individual grab samples are collected for the flow-weighted composite
sample, one would end up with nine pH readings. No guidance is provided
with respect to reporting the pH data. The EPA guidance document did
specify in Chapter 3, p. 38, "Monitoring by grab sample must be conducted
for pH, temperature, cyanide, total phenols, residual chlorine, oil and
grease (O&G), fecal coliform, and fecal streptococcus. Composite samples
are not appropriate for these parameters " This guidance is
consistent with Page 48803 of the November 16, 1990 Federal Register
Notice which states, "you are not required to analyze a flow-weighted
composite" for oil and grease. One could interpret this to mean that these
tests are to be performed only on the initial grab sample taken within the
first 30 minutes of the storm event and there is no further need for
additional samples or testing for these parameters.
Use of narrow-range pH paper would be more realistic and reliable, but EPA
has not approved the use of narrow-range pH paper in 40 CFR Part 136 and
may not be able to accept such measurements.
Presumably the flow-weighted composite for all other environmental
parameters except volatile organics is prepared at the laboratory in one
large container. Method specified volumes would have to be portioned out
from the large container to other smaller containers specified for each of
the analytical parameters. The different chemical preservatives could be
added at this point.
Another associated problem is with the one-liter sample used for
semivolatile organics. The sample container used for the semivolatile
organics analysis is also rinsed with the extraction solvent (as with oil
and grease) to prevent loss of organics to the sample container walls. If
one considers that nine different bottles were used to collect the
individual grab samples to make the composite sample; that some kind of
measuring device was used to transfer the proper volume to a single
compositing container; and that another device was used to transfer the
sample from the single compositing container to the one-liter bottle used
for the semivolatile organics methods, one has to be concerned with the
potential loss of semivolatile organics during this entire operation. One
should be aware of the potential problems relating to the measurement of
semivolatiles, etc. Clarification may be required from EPA. The same
situation would exist for other tests such as pesticides.
The problems associated with the preparation of a flow-weighted composite
sample for volatile organics was even more difficult than any previously
stated problems. The EPA guidance document first acknowledged the
341
-------
uniqueness of the vials used for volatile organics. In Chapter 3, p. 39,
the document states automatic samplers cannot be used to collect volatile
organic compound (VOC) samples. Additional guidance for collection of VOC
samples can be found on p. 69. Again, the guidance clearly identifies the
VOC sample as being a grab sample. Some additional guidance is given on
p. 70 as well. On p. 75 a reference is given to Section 3.5.2 of Chapter
3 for preparation of the flow-weighted composite sample for volatile
organics. Pages 85 and 86 of Section 3.5.2 give the details for two
different ways for obtaining a flow-weighted composite result for volatile
organics.
One way is "mathematical compositing" and the second is "procedural
compositing." There are technical and procedural problems associated with
the EPA guidance on pages 85 and 86.
The "mathematical compositing" approach is the most technically, and
procedurally sound; however, it is quite costly since it involves nine
different sample analyses (approximately $2,000) to have the necessary
data to construct the "mathematical" composite result. As the EPA
guidance pointed out, this approach also provides specific information
concerning each of the nine sampling events.
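As we understand it, the arithmetic behind "mathematical compositing" is a flow-weighted average of the nine individual results, along the lines of the sketch below (illustrative only; not a formula given in the guidance document):

    def mathematical_composite(concentrations, flows_cfm):
        # Flow-weighted average concentration from individually analyzed grabs;
        # each result is weighted by the flow rate measured when its grab was taken.
        total_flow = sum(flows_cfm)
        return sum(c * q for c, q in zip(concentrations, flows_cfm)) / total_flow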
The reference on page 85 for 40 CFR Part 141 along with the four EPA 500
Series Methods is inappropriate. The reference is for requirements under
the Safe Drinking Water Act and not for NPDES samples under the Clean
Water Act. The 500 Series method cannot be used for compliance monitoring
under the Clean Water Act. The reference should have been to 40 CFR Part 136,
and the 600 Series methods should have been referenced. A reference is
also given for a 25 mL purge vessel. This refers to Method 524.2 and not
Method 624, where the purge vessel is 5 mL.
Review of the specific sections cited in the EPA guidance document (40 CFR
Part 141.24(f)14(iv) and (v)) revealed additional problems. While this
section provides instructions for preparing a composite sample for
volatile organics, it must be noted that the instructions are for a time-
weighted composite since equal volumes are required to prepare the
composite. To further complicate matters, the maximum number of grab
samples allowed for the composite is five. The minimum total number of
grab samples resulting from a storm event sampling is nine which would not
be allowed. Based on the review of 40 CFR Part 141, EPA really did not
provide specific details for preparation of a flow-proportioned composite
sample for volatile organics.
The "procedural compositing" approach is more cost-effective but has some
serious technical and procedural problems. Normally, a 5 ml syringe is
used to introduce the proper volume of sample to the purge vessel. For
one sample, there is no problem. The flow-weighted portions from each of
the nine vials will be approximately 0.5 ml and will be determined from
the flow rate/volume measurements. One could probably calculate the
precise volume to three significant figures. An example would be that the
first aliquot required is 0.543 mL and the second is 0.473 mL and so on.
It is not technically possible to measure any of the calculated
342
-------
volumes with a 5 mL syringe.
There is some difficulty in trying to place 0.473 ml of sample into a 5 ml
syringe containing 0.543 ml and so on. One could probably use a 1 ml
syringe to measure 0.54 mL and 0.47 mL and so on and then introduce each
of the portions directly to the purge vessel. One could also introduce
each of the portions to a 5 mL syringe which is then used to introduce the
flow-weighted composite sample to the purge vessel. One would have to
introduce the internal standard(s) and/or surrogates into one of the nine
different syringes or to the 5 mL syringe. This alternative is viable but
is both time and labor intensive and still has the potential for loss of
volatile organics.
The EPA guidance gives directions to "draw the sample into the syringe."
This technique is not the normal practice followed for volatile organics
analysis. You can lose volatile organics following this practice. This
is the reason why one cannot use an automatic pump to collect volatile
organic samples. Volatile organic samples are normally poured into the
barrel of the syringe. To go back to the previous example, how does one
pour 0.473 mL of water into the barrel of a syringe containing 0.543 mL of
water and so on?
One proposal that may be considered to prepare the flow-weighted composite
sample for volatiles is to use a 1 mL syringe (Hamilton 1000 series
syringe with fixed needle, Model# 1001-LTN, Cat.# Hamilton 81317) to pierce
the VOA septum. The correct volume to two significant figures can be
carefully drawn into the syringe without creating a headspace. The flow-
weighted aliquot is then transferred to a 5 mL syringe (Hamilton 1000
series syringe with Teflon luer lock, no needle, Model# 10050-TLL, Cat.#
Hamilton 81520) fitted with a two-way valve with CTFE fittings (Hamilton
Cat.# 86580). When this operation has been completed for all the VOA grab
samples, the internal standard(s) or surrogate(s) are added to the 5 mL
syringe prior to transferring the flow-weighted composite sample to the
purging vessel. This procedure has been attempted and appears to work
quite well. For the hypothetical example that follows, this proposal was
used.
Another technical consideration is that none of the current EPA procedures
being followed by commercial environmental laboratories or in fact
industry laboratories include the necessary instructions to
prepare a "procedural compositing" sample for volatile organics. The
laboratories also lack experience in preparing these "procedural
composite" samples. Laboratories will have to learn how to do the
compositing and there will be some additional costs since this procedure
is not routine and has to be done manually. Because of the number of
samples and syringes involved, cross contamination problems may be
amplified.
343
-------
Comments Summary
In summary, the review of the EPA guidance document identified a number of
problem areas and attempts were made to provide suggestions or details in
order to gain an understanding of these problems. Some technical options
were posed and should be considered. Faced with the various problems and
options noted, just how would one envision a storm water sampling that
would meet the regulatory requirements? A hypothetical example for a
storm water sampling event was prepared to gain a better understanding of
the actual physical requirements to conduct a sampling meeting regulatory
requirements and the subsequent analyses for flow-weighted composite
samples.
Disclaimer
While neither the author nor CMA assume responsibility for the
hypothetical example, we feel the example does meet the regulatory
requirement of the Final Rule for storm water regulations.
STORM WATER SAMPLING AND ANALYSIS HYPOTHETICAL EXAMPLE
Site Location Houston Texas
Facility Chemical Plant
Parameters Required on NPDES Permit:
Oil and Grease
pH
BOD5
COD
TSS
Total P
TKN
Nitrate + Nitrite
Volatile Organics (Method 624)
Semivolatile Organics (Method 625)
Cyanide
Fecal Coliform
Sampling Point "V" Shaped Grass-Lined Ditch (completely dry)
Last Rainfall Ten Days Ago
Forecast Light to Moderate Rain Expected During the Morning Hours
with Clearing in the Afternoon. Approximately 0.25 to 0.75
Inches Total Expected.
Decision Conditions Appear Optimal to Conduct a Storm Water Sampling
and Sampling was Approved and Authorized.
Logistics Sampling Crew was at the Site at 5am.
344
-------
Sequence of Events:
1. It was recorded that the rain started at 5:30am.
2. Flow at the selected sampling point began at 5:40am. The rain gage
indicated that 0.10 inches of rain had fallen the first 10 minutes
and the rain continued at a relatively steady pace. Since it
appeared the forecast might be accurate, a decision was made to
collect the samples.
3. At 6:00am the flow in the ditch was such that one could make a
reasonable estimate of the flow rate using the "Float Method" and
the flow was determined to be 1.8 cfm. (Table 1 contains the flow
calculations).
4. All of the required grab samples were collected in labeled
containers at 6:00am. A 1-L glass container and three VOA vials
were collected for the composite sample. The pH was measured on
site with a calibrated meter and was found to be 6.9 units. This pH
value was verified with narrow-range pH paper. The temperature was
recorded and found to be 53 degrees centigrade. All of the
collected samples were placed in a chest containing wet ice.
5. Since the commercial laboratory selected to perform the analyses of
samples was located within 10 minutes of the sampling site, a
decision was made to transport the initial grab samples to the
laboratory for preservation and analysis. No chain of custody form
was required since a member of the sampling team took the samples
directly to the laboratory. The laboratory was advised to add the
proper preservation to the containers and to analyze for the
selected parameters using EPA approved methods specified in 40 CFR
Part 136 under the Clean Water Act. The analysis requirements for
the initial grab samples were quite simple since they are handled
the same as any other samples normally submitted for NPDES Permit
compliance monitoring.
6. The flow was measured at 6:20am and was 3.5 cfm. A 1-L container
and three VOA vials were collected and placed on ice. The pH was
6.7 units and the temperature of the water was 49 degrees
centigrade. The rain gage showed a total accumulated rainfall of
0.13 inches of rain.
7. The flow was measured at 6:40am and was 3.6 cfm. A 1-L sample and
three VOA vials were collected and placed on ice. The pH was 6.5
units and the water temperature was 48 degrees centigrade. The rain
gage indicated a total of 0.17 inches of accumulated rain had
fallen.
345
-------
8. The flow was measured at 7:00am and was 3.9 cfm. A 1-L sample and
three VOA vials were collected and placed on ice. The pH was 6.3
units and the water temperature was 48 degrees centigrade. The rain
gage indicated a total of 0.25 inches of rain had fallen.
9. The flow was measured at 7:20am and was 4.0 cfm. A 1-L sample and
three VOA vials were collected and placed on ice. The pH was 6.2
units and the water temperature was 48 degrees centigrade. The rain
gage indicated a total of 0.30 inches of rain had fallen.
10. The flow was measured at 7:40am and was 3.7 cfm. A 1-L sample and
three VOA vials were collected and placed on ice. The pH was 6.2
units and the water temperature was 48 degrees centigrade. The rain
gage indicated a total of 0.35 inches of rain had fallen.
11. The flow was measured at 8:00am and was 1.8 cfm. A 1-L sample and
three VOA vials were collected and placed on ice. The pH was 6.3
units and the water temperature was 48 degrees centigrade. The rain
gage indicated a total of 0.37 inches of rain had fallen and the
rain was letting up.
12. The flow was measured at 8:15am and was 1.9 cfm. A 1-L sample and
three VOA vials were collected and placed on ice. The pH was 6.3
units and the water temperature was 47 degrees centigrade. The rain
gage indicated a total of 0.39 inches of rain had fallen.
13. The flow was measured at 8:30am and was 1.7 cfm. A 1-L sample and
three VOA vials were collected and placed on ice. The pH was 6.3
units and the water temperature was 48 degrees centigrade. The rain
gage indicated a total of 0.40 inches of rain had fallen.
While the rain continued lightly for the next hour, the sampling event was
terminated since the required number of grab samples had been collected to
prepare the flow-weighted composite samples.
Verification of Storm Event
There was no rain reported at the site for the last 72 hours prior to the
current storm event. Officially, the last rainfall was recorded 240 hours
before this storm event. The rain started at 5:30am and ended at 9:45am
which is a total of 4 hours and 15 minutes. The rain gage used in the
field during the storm event indicated that the accumulated volume of rain
was 0.42 inches during the time the samples were collected. A check on
the officially recorded rainfall reported by the local TV station
indicated that 0.48 inches of rain had fallen that day. All of the rain
was in the morning hours.
346
-------
The storm event's duration of 4 hours and 15 minutes met the 4 to 12 hour
criteria calculated for the Houston area and both the field recorded
volume of 0.42 inches and the official report of 0.48 inches met the 0.38
to 1.14 inches criteria for the Houston area. The storm event clearly met
the regulatory criteria and the necessary information was documented for
regulatory purposes.
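As an illustration only, the storm event check can be reduced to a few lines. The Python sketch below is our own construction and not part of the EPA guidance document; it simply restates the Houston criteria and the field values from this example.

    # Sketch of the storm event verification (illustrative only)
    DURATION_RANGE_HR = (4.0, 12.0)     # storm duration criteria for Houston, hours
    DEPTH_RANGE_IN = (0.38, 1.14)       # rainfall depth criteria for Houston, inches
    MIN_DRY_PERIOD_HR = 72              # minimum hours since the last rainfall

    duration_hr = 4.25                  # 5:30 a.m. to 9:45 a.m.
    depth_in = 0.42                     # field rain gage reading
    hours_since_last_rain = 240         # officially recorded dry period

    event_ok = (DURATION_RANGE_HR[0] <= duration_hr <= DURATION_RANGE_HR[1]
                and DEPTH_RANGE_IN[0] <= depth_in <= DEPTH_RANGE_IN[1]
                and hours_since_last_rain >= MIN_DRY_PERIOD_HR)
    print(event_ok)                     # True for this storm event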
Verification of Sampling Criteria
A review of the field log notebook indicated the initial grab sample was
collected 20 minutes after flow was observed at the sampling point. The
reason for not collecting the grab sample sooner was the inability to
estimate the flow rate any sooner. The initial grab sample was collected
within the regulatory criteria.
Review of the field log notebook further revealed the grab samples to
prepare the composite samples were collected at 6:00am, 6:20am, 6:40am,
7:00am, 7:20am, 7:40am, 8:00am, 8:15am, and 8:30am. There were 9 grab
samples collected. Three samples were collected each hour, and the
minimum interval of 15 minutes was used for the last two samples because
the rain was letting up. The rain started at 5:30am; storm water flow was
noted at 5:40am; and the first sample was collected at 6:00am. The last
sample was collected at 8:30am, which represents a span of 2 hours and
50 minutes. All samples were collected as soon as feasible and did occur
within the first 3 hours of storm water flow. The nine samples all met
the 15 minute/3 per hour criteria.
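The timing checks can be sketched the same way. The short Python fragment below is our own restatement (the calendar date is arbitrary) and verifies the 30-minute initial grab, the 15-minute spacing, and the 3-hour window for the nine samples in this example.

    from datetime import datetime, timedelta

    flow_start = datetime(1993, 5, 5, 5, 40)          # storm water flow first observed
    sample_times = [datetime(1993, 5, 5, h, m) for h, m in
                    [(6, 0), (6, 20), (6, 40), (7, 0), (7, 20),
                     (7, 40), (8, 0), (8, 15), (8, 30)]]

    # Initial grab sample within the first 30 minutes of flow
    initial_ok = (sample_times[0] - flow_start) <= timedelta(minutes=30)
    # Grab samples at least 15 minutes apart ...
    spacing_ok = all((b - a) >= timedelta(minutes=15)
                     for a, b in zip(sample_times, sample_times[1:]))
    # ... and all collected within the first 3 hours of storm water flow
    window_ok = (sample_times[-1] - flow_start) <= timedelta(hours=3)

    print(initial_ok, spacing_ok, window_ok)          # True True True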
Sampling Conclusions
All of the storm event criteria were met and the samples collected also
met regulatory criteria. The sampling was a success. What remains to be
done is to transport the remainder of the samples to the laboratory;
prepare the necessary calculations needed by the laboratory to prepare the
flow-weighted composite sample for most parameters; make a decision on how
to analyze for volatile organics and advise the laboratory how to do it;
and to provide the specific instructions on how to preserve and analyze
the flow-weighted composite samples.
A decision was made to use the "procedural compositing" proposal for our
hypothetical example because we actually tested the proposal and it
appeared to work.
347
-------
Calculations for Preparation of Flow-Weighted Composite Sample
1. It should be noted that the flow rates expressed as cfm's that were
listed in our example Table 1 for each sampling event were taken
directly from Exhibit 3-8 on p. 51.
Exhibit 3-8 was rather straightforward and was easily understood;
however, for the hypothetical storm event we used, it was necessary
to make some modifications. It also became obvious later that
Column "B" would be better labeled "t"; Column "G" would be better
as "Q"; and Column "A" would be better as "S." The reason for this
suggestion is that later the equations show time as "t" and flow in
cfm as "Q." "S" represents the samples that we identified as SI,
S2, etc. making things clearer when performing the calculations.
It also became obvious later that the time column in Exhibit 3-8 was
offset by one position. Since we collected our first sample 20
minutes after flow started, the first number should be 20 which is
for tl. It is understood that to is 0 minutes. Table 1 was
prepared to measure flow for our hypothetical example and is the
equivalent for Exhibit 3.8 in the EPA guidance document. The same
numbers were used. We liked our version better than Exhibit 3.8.
Table 1.
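For readers who prefer to see the arithmetic spelled out, the float-method flow calculation behind Table 1 can be written as a small Python function; the function name and layout below are ours and are offered only as a sketch.

    def flow_rate_cfm(distance_ft, travel_min, depth_ft, width_ft):
        """Flow rate Q (cfm) = velocity (ft/min) x cross-sectional area (ft2)."""
        velocity = distance_ft / travel_min      # float travel from point A to point B
        area = depth_ft * width_ft               # water depth x width of flow at point B
        return velocity * area

    # Sample S1 from Table 1: 5.0 ft in 0.17 min, 0.12 ft deep, 0.5 ft wide
    print(round(flow_rate_cfm(5.0, 0.17, 0.12, 0.5), 1))    # 1.8 cfm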
2. The next calculation is to estimate the outfall volumes associated
with each sampling event. One portion of this calculation uses the
cfm's calculated in Table 1 from Step 1. Exhibit 3-16, starting on
p. 63, provides the step-by-step process for the volume calculation.
Essentially Exhibit 3-8 (and Table 1) is somewhat equivalent to
Exhibit 3-16 on p. 63 except that they calculated cfm's differently.
It was found later that one really does not have to perform Step 1
of Exhibit 3-16 since all the needed information is contained in
Exhibit 3-8 or for our hypothetical example in Table 1. The only
information used for subsequent calculations is the values for "t"
and "Q," which are time and flow rate. It is recommended that only
an Exhibit 3-8-type table such as Table 1 be prepared. Also, Step
2 of Exhibit 3-16 represents another way to calculate flow which
again was found to be redundant and not needed since Table 1 already
contains the same information.
Steps 3, 4, and 5 of Exhibit 3-16 provide examples for calculation of
flow volume; however, it was later found that it was not necessary
to plot the data to complete the calculations. Some problem areas
were identified, as well as some possible errors (these were
covered in the comments section of the paper). Some necessary
details were also missing.
348
-------
Step 3 shows a plot of "Q" and "t" values. The area under this plot
represents the total volume of flow at the time the storm water grab
samples were collected. Instructions say to assume that flow drops
uniformly from the last calculated flow rate (Q9) to zero at the
time when (Q10) would have been taken. This statement is not
consistent with the drawn curve and is one of the identified
problems. While not entirely agreeing with the procedure in Steps
4 and 5, we arbitrarily set Q9 at zero when the last sample S9 was
taken because that appears to be what EPA did in their example. The EPA
calculations on p. 66 were validated and found to be consistent with
arbitrarily setting Q9 to zero.
On p. 66 of Exhibit 3-16 the equations used to calculate the flow
are provided. Everything was fine until the calculation for Volume
#6 of the hypothetical example. In our example, it was noted that
Q6 was less than Q5. In the equation the quantity (Q6 - Q5) results
in a negative value. The same is true for (Q9 - Q8). The EPA
guidance document did not point out how to handle these negative
values. It was established that the equations are correct and the
proper flow volume results when using these negative values.
The next problem was noted in the calculation of V9. Table 1 shows
the calculated flow rate as 1.7 cfm. Using this value for Q9 will
result in a higher value than if one assumes Q9 to be zero (based on
one interpretation of Step 3). For purposes of this hypothetical
example, Q9 was set to zero. The volume calculations were completed
and are shown in Table 2.
Table 2
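The Table 2 volumes can be reproduced with the short Python sketch below. It is our own restatement of the Exhibit 3-16 equations, with Q9 arbitrarily set to zero as discussed above and with the 28.32 liters per cubic foot conversion from Exhibit 3-24 applied at the end.

    CF_TO_L = 28.32                                          # liters per cubic foot

    t = [0, 20, 40, 60, 80, 100, 120, 140, 155, 170]         # minutes
    q = [0, 1.8, 3.5, 3.6, 3.9, 4.0, 3.7, 1.8, 1.9, 0.0]     # cfm, with Q9 set to zero

    # Vi = 1/2(Qi - Qi-1)(ti - ti-1) + Qi-1(ti - ti-1); negative (Qi - Qi-1)
    # values are simply carried through the equation.
    volumes_ft3 = [0.5 * (q[i] - q[i - 1]) * (t[i] - t[i - 1]) + q[i - 1] * (t[i] - t[i - 1])
                   for i in range(1, len(t))]
    volumes_l = [v * CF_TO_L for v in volumes_ft3]

    print([round(v, 2) for v in volumes_ft3])   # [18.0, 53.0, 71.0, 75.0, 79.0, 77.0, 55.0, 27.75, 14.25]
    print([round(v) for v in volumes_l])        # [510, 1501, 2011, 2124, 2237, 2181, 1558, 786, 404]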
The next set of calculations uses the volume information from Table
2 to make the necessary calculations to prepare the flow-weighted
composite sample for most parameters (also for volatile organics).
The values listed in Table 2, in cubic feet units, were also
converted to volume expressed in liters (L) using the equation:
Volume (liters) = Volume (cubic feet) x 28.32 liters per cubic foot
This equation was listed on p. 78 of Exhibit 3-24. Table 2 was also
used to show the discharged volume expressed in liter units. The
EPA example used another step and table to accomplish this
conversion. We felt this was not necessary.
Step 8 of Exhibit 3-24 provides the equations needed to calculate
the volume one must take from each of the 1-L grab samples to
prepare the flow-weighted composite sample with an approximate final
volume of 5,000 mL. Table 3 was prepared to summarize the results
from these calculations and provides the proper aliquots (A1 - A9)
needed for the flow-weighted composite sample for all parameters for
the hypothetical example.
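The same Step 8 equation can be sketched in Python as follows; the names are ours, and the discharge volumes are taken from Table 2.

    MIN_ALIQUOT_ML = 1000.0                                  # minimum aliquot volume (1-L grab)

    discharge_l = [510, 1501, 2011, 2124, 2237, 2181, 1558, 786, 404]   # Table 2, in liters
    largest = max(discharge_l)

    # Aliquot (mL) = minimum aliquot volume x (discharge volume / largest discharge volume)
    aliquots_ml = [MIN_ALIQUOT_ML * v / largest for v in discharge_l]

    print([round(a) for a in aliquots_ml])      # [228, 671, 899, 949, 1000, 975, 696, 351, 181]
    print(round(sum(aliquots_ml)))              # 5951, i.e., about 5,950 mL as reported in Table 3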
349
-------
Table 3
The final volume of the flow-weighted composite was more than 5,000
mL which should be more than sufficient for all the analyses. It
follows that the same information calculated for the flow-weighted
composite sample can also be used and was used to give instructions
on how to prepare the flow-weighted composite (VOA) sample for
volatile organics. To accomplish this, one just had to divide the
values for the 5-L composite aliquots (A1 - A9) by 1,000 and round
the results to two significant figures. This information for the
volatile organics flow-weighted composite sample was also included
in Table 4. The EPA guidance document provided no guidance for the
volatile organics calculation.
The calculation for the flow-weighted composite sample for volatile
organics indicated a total volume of 5.96 mL. This should be no
problem since the 5 ml purge vessel is capable of handling a volume
of 6 ml. It should be noted that this volume difference of 5.96
instead of 5.0 also needs to be corrected for in the calculations
for analyte concentration since the normal sample size for volatile
organics is 5 mL.
One option available to prepare the flow-weighted composite sample
for volatile organics that will result in a final volume of 5.0 mL
is to multiply the A1 - A9 values in Table 4 by an appropriate
factor. For example, multiplying the A1 - A9 values by the factor
0.000840 will result in a final total volume of 4.99 mL or 5.0 ml.
If this practice were followed, there would be no need to make any
volume correction in the method calculations.
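Both volatile organics options can be expressed the same way; the sketch below is ours and simply restates the divide-by-1,000 and the 0.000840-factor approaches.

    aliquots_ml = [228, 671, 899, 949, 1000, 975, 696, 351, 181]   # A1 - A9 from Table 4

    # Option 1: divide by 1,000; the composite totals about 5.95 mL unrounded
    # (Table 4 shows 5.96 mL after rounding each aliquot), so the 5-mL purge
    # assumption must be corrected in the concentration calculation.
    voa_ml = [a / 1000.0 for a in aliquots_ml]
    print(round(sum(voa_ml), 2))               # 5.95

    # Option 2: multiply by 0.000840 so the composite totals about 5.0 mL and
    # no volume correction is needed.
    voa_5ml = [a * 0.000840 for a in aliquots_ml]
    print(round(sum(voa_5ml), 2))              # 5.0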
IT IS ABSOLUTELY ESSENTIAL THAT THE LABORATORY IS PROVIDED THE
INFORMATION CONTAINED IN TABLE 4 AND THAT THEY COMPLETELY UNDERSTAND THE
SIGNIFICANCE OF TABLE 4 AND THE NEEDED CORRECTION IN ANALYTE CONCENTRATION
CALCULATIONS BECAUSE OF THE VOLATILE ORGANICS SAMPLE VOLUME DIFFERENCE.
The laboratory should be able to prepare the flow-weighted composite
samples using the information from Table 4 and provide the data needed to
complete the storm water permit application. However, it would be a good
idea to have a capable analytical chemist within your own company validate
the data.
Other Considerations
For the hypothetical example, a total of 9 grab samples were collected.
The volume collected followed EPA guidance and no additional samples were
collected to cover the possibility of breakage. We strongly recommend
collecting additional samples to cover possible
350
-------
breakage; specifically, two 1-L containers should be collected
at each sampling event.
It was also determined from one of the EPA examples that the 1-L volume
may be insufficient to prepare a composite volume of 5,000 ml if there
were unusual flow patterns. The second 1-L container that was collected
to cover the possibility of breakage could also be used if such a flow
pattern was experienced (see comment section for details). A correction
factor applied to the A1 - A9 volumes in Table 4 can also be used to ensure
a final target volume of 5,000 mL.
Another recommendation would be to keep the collection intervals equal for
the entire sampling program if possible. Later in the calculations all
the quantities (Table 2) of (t1 - t0), (t2 - t1), (t3 - t2), etc., become
a constant value for the interval. Using a constant interval would
simplify preparation of an electronic spreadsheet to perform all the
necessary calculations. Either a 15 or 20 minute interval could be used.
Using different intervals for the hypothetical example resulted in some
initial errors which had to be corrected.
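With a constant interval, the Table 2 equation collapses to a single trapezoid term per interval, which is what makes a spreadsheet implementation trivial; the brief Python restatement below is ours.

    def interval_volume_ft3(q_prev, q_curr, dt_min=20):
        # 1/2(Qi - Qi-1)dt + Qi-1*dt reduces to the trapezoid dt*(Qi-1 + Qi)/2
        return dt_min * (q_prev + q_curr) / 2.0

    print(interval_volume_ft3(1.8, 3.5))       # 53.0 ft3, the same as V2 in Table 2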
Review of Table 4 shows that the calculated aliquot volumes (A1 - A9) are
reported to three significant figures. The EPA guidance did not specify any
regulatory criteria for significant figures for these volumes. Since the
volumes ranged from 181 mL to 1,000 mL, it was obvious that a graduated
cylinder should be considered. However, it was learned that a 1-L
graduated cylinder can only measure to the closest 10 mL; a 500 mL
cylinder to 5 mL; and a 250 mL cylinder to 2 mL. Several options are
available to consider. One option is to round to two significant figures
and use only the 1-L graduated cylinder. A second option is to use the
appropriate combination of graduated cylinders to measure the volume to
three significant figures. A third option, and perhaps the most precise,
is to use a balance capable of weighing 3,000 grams and to pour the proper
weight corresponding to the A1 - A9 aliquots into a 1-L graduated cylinder.
Several experiments were conducted to see if there was a preferred way to
do it. All three ways had some small errors, but all three ways appeared
to work. Since the EPA guidance did not provide specific criteria, it
would appear that the laboratories can make the choice.
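If the two-significant-figure option is chosen, the rounding can be done consistently with a small helper such as the one sketched below (our own construction, offered only as an illustration).

    from math import floor, log10

    def two_sig_figs(volume_ml):
        """Round a positive aliquot volume (mL) to two significant figures."""
        return round(volume_ml, 1 - int(floor(log10(volume_ml))))

    aliquots_ml = [228, 671, 899, 949, 1000, 975, 696, 351, 181]   # Table 3 values
    print([two_sig_figs(a) for a in aliquots_ml])
    # [230, 670, 900, 950, 1000, 980, 700, 350, 180]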
351
-------
CONCLUSIONS
The EPA's "NPDES Storm Water Sampling Guidance Document" provides
detailed information which should prove helpful in meeting storm water
regulatory requirements. It is strongly recommended that anyone who has
to perform sampling or analysis of storm water obtain a copy of the
EPA guidance document. It would also be advisable to prepare a
hypothetical example for the sampling location to gain a good
understanding of the sequence of events and what is required.
It is possible to meet the storm water regulatory requirement for flow-
weighted composite samples. The hypothetical example demonstrated it can
be done but with some difficulty and with a lot of detailed instructions
and subsequent calculations. There are also a number of problem areas
where EPA may want to provide more details. It also appears some options
are available that one may want to consider. Concerns still remain for
the potential to lose analytes to container walls because of the process
needed to prepare these flow-weighted composite samples. There is also
some concern with commercial laboratories, which will have to be provided
explicit instructions in order to prepare the flow-weighted composite
samples. The volatile organics flow-weighted sample preparation is far
from a routine operation and also poses some potential problems.
We think the hypothetical example meets regulatory criteria and should be
acceptable to regulatory agencies.
352
-------
TABLE 1 - Example Calculation of Float Method for Unimpeded Open Channel Flow
Step 1: When each grab sample was collected, record the time involved; the length
between points A and B (at least 5 feet apart); the flow depth and width; and the
flow calculation.
Sample    Clock      Time in    Distance   Time of    Depth of   Width of   Calculated
Number    Time       Minutes    Between    Travel     Water at   Flow at    Flow Rate
(S)                  (t)        Points     (A to B)   Point B    Point B    (cfm) (Q)
                                A & B (ft) (min)      (ft)       (ft)
S0        5:40 a.m.  t0 = 0     5.0        0.00       0.00       0.0        Q0 = 0
S1        6:00 a.m.  t1 = 20    5.0        0.17       0.12       0.5        Q1 = 1.8
S2        6:20 a.m.  t2 = 40    5.0        0.18       0.25       0.5        Q2 = 3.5
S3        6:40 a.m.  t3 = 60    5.0        0.20       0.29       0.5        Q3 = 3.6
S4        7:00 a.m.  t4 = 80    5.0        0.21       0.33       0.5        Q4 = 3.9
S5        7:20 a.m.  t5 = 100   5.0        0.18       0.29       0.5        Q5 = 4.0
S6        7:40 a.m.  t6 = 120   5.0        0.17       0.25       0.5        Q6 = 3.7
S7        8:00 a.m.  t7 = 140   5.0        0.17       0.12       0.5        Q7 = 1.8
S8        8:15 a.m.  t8 = 155   5.0        0.16       0.12       0.5        Q8 = 1.9
S9        8:30 a.m.  t9 = 170   5.0        0.18       0.12       0.5        Q9 = 1.7

For S1:  V = 5.0 ft / 0.17 min = 29.4 ft/min
         A = 0.12 ft x 0.5 ft = 0.06 ft2
         Q1 = 29.4 ft/min x 0.06 ft2 = 1.8 cfm
353
-------
TABLE 2
Volume (V) = Flow Rate (cfm) x Duration (minutes)

V1 = 1/2(Q1 - Q0)(t1 - t0)
   = 1/2(1.8 - 0)(20 - 0)
   = 18 ft3 x 28.32 L/ft3 = 510 liters

V2 = 1/2(Q2 - Q1)(t2 - t1) + Q1(t2 - t1)
   = 1/2(3.5 - 1.8)(40 - 20) + 1.8(40 - 20)
   = 1/2(1.7)(20) + 1.8(20)
   = 17 + 36
   = 53 ft3 x 28.32 L/ft3 = 1,501 liters

V3 = 1/2(Q3 - Q2)(t3 - t2) + Q2(t3 - t2)
   = 1/2(3.6 - 3.5)(60 - 40) + 3.5(60 - 40)
   = 1/2(0.1)(20) + 3.5(20)
   = 1 + 70
   = 71 ft3 x 28.32 L/ft3 = 2,011 liters

V4 = 1/2(Q4 - Q3)(t4 - t3) + Q3(t4 - t3)
   = 1/2(3.9 - 3.6)(80 - 60) + 3.6(80 - 60)
   = 1/2(0.3)(20) + 3.6(20)
   = 3 + 72
   = 75 ft3 x 28.32 L/ft3 = 2,124 liters

V5 = 1/2(Q5 - Q4)(t5 - t4) + Q4(t5 - t4)
   = 1/2(4.0 - 3.9)(100 - 80) + 3.9(100 - 80)
   = 1/2(0.1)(20) + 3.9(20)
   = 1 + 78
   = 79 ft3 x 28.32 L/ft3 = 2,237 liters
354
-------
TABLE 2 (CONTINUED)  Volume (V) = Flow Rate (cfm) x Duration (minutes)

V6 = 1/2(Q6 - Q5)(t6 - t5) + Q5(t6 - t5)
   = 1/2(3.7 - 4.0)(120 - 100) + 4.0(120 - 100)
   = 1/2(-0.3)(20) + 4.0(20)
   = [-3] + 80
   = 77 ft3 x 28.32 L/ft3 = 2,181 liters

V7 = 1/2(Q7 - Q6)(t7 - t6) + Q6(t7 - t6)
   = 1/2(1.8 - 3.7)(140 - 120) + 3.7(140 - 120)
   = 1/2(-1.9)(20) + 3.7(20)
   = [-19] + 74
   = 55 ft3 x 28.32 L/ft3 = 1,558 liters

V8 = 1/2(Q8 - Q7)(t8 - t7) + Q7(t8 - t7)
   = 1/2(1.9 - 1.8)(155 - 140) + 1.8(155 - 140)
   = 1/2(0.1)(15) + 1.8(15)
   = 0.75 + 27
   = 27.75 ft3 x 28.32 L/ft3 = 786 liters

V9 = 1/2(Q9 - Q8)(t9 - t8) + Q8(t9 - t8)
   = 1/2(0 - 1.9)(170 - 155) + 1.9(170 - 155)
   = 1/2(-1.9)(15) + 1.9(15)
   = [-14.25] + 28.5
   = 14.25 ft3 x 28.32 L/ft3 = 404 liters
355
-------
TABLE 3
Step 8: Calculate the volume of each sample aliquot (A) which must be used to comprise
a flow-weighted composite sample. The following equation should be used:

Aliquot volume (A) (mL) = Minimum aliquot volume (mL) x
   [Aliquot discharge volume (V) (liters) / Largest discharge volume (V) (liters)]

A1 = 1,000 mL x (510 liters / 2,237 liters)   =   228 mL
A2 = 1,000 mL x (1,501 liters / 2,237 liters) =   671 mL
A3 = 1,000 mL x (2,011 liters / 2,237 liters) =   899 mL
A4 = 1,000 mL x (2,124 liters / 2,237 liters) =   949 mL
A5 = 1,000 mL x (2,237 liters / 2,237 liters) = 1,000 mL
A6 = 1,000 mL x (2,181 liters / 2,237 liters) =   975 mL
A7 = 1,000 mL x (1,558 liters / 2,237 liters) =   696 mL
A8 = 1,000 mL x (786 liters / 2,237 liters)   =   351 mL
A9 = 1,000 mL x (404 liters / 2,237 liters)   =   181 mL
TOTAL = 5,950 mL
356
-------
TABLE 4 - Summary of Volumes Needed to Prepare a Flow-Weighted Composite
for All Parameters and Volatile Organics

Sample     Parameter Composite     Volatile Organic Composite
S1         A1 = 228 mL             VOA1 = 0.23 mL
S2         A2 = 671 mL             VOA2 = 0.67 mL
S3         A3 = 899 mL             VOA3 = 0.90 mL
S4         A4 = 949 mL             VOA4 = 0.95 mL
S5         A5 = 1,000 mL           VOA5 = 1.00 mL
S6         A6 = 975 mL             VOA6 = 0.98 mL
S7         A7 = 696 mL             VOA7 = 0.70 mL
S8         A8 = 351 mL             VOA8 = 0.35 mL
S9         A9 = 181 mL             VOA9 = 0.18 mL
Total (V)  5,950 mL                5.96 mL
357
-------
358
-------
STORM WATER SAMPLING AND ANALYSIS
G. H. STANKO
SHELL DEVELOPMENT COMPANY
16TH ANNUAL EPA CONFERENCE
ANALYSIS OF POLLUTANTS
IN THE ENVIRONMENT
NORFOLK, VIRGINIA
MAY 5-6, 1993
-------
FINAL RULE STORM WATER 40 CFR PARTS 122, 123,124
NATIONAL POLLUTANT DISCHARGE ELIMINATION SYSTEM PERMIT
APPLICATION REGULATIONS FOR STORM WATER DISCHARGES,
FEDERAL REGISTER FRIDAY NOVEMBER 16, 1990.
"NPDES STORM WATER GUIDANCE DOCUMENT1 -- JULY 1992
(EPA 833-B-92-001, JULY 1992)
-------
COMMENTS FOR EPA'S
"NPDES STORM WATER SAMPLING GUIDANCE DOCUMENT"
CHAPTER 2
IDENTIFIED SPECIFIC NATURE AND CRITERIA FOR STORM EVENT
V HOUSTON, TEXAS SITE:
• STORM EVENT -- 4 TO 12 HOURS.
• RAIN VOLUME BETWEEN 0.38 AND 1.14 INCHES.
• LAST RAIN 72 HOURS.
• NEED RAIN GAGE AT SAMPLING SITE.
• RAIN DEPTH DATA NEEDED TO ESTABLISH CRITERIA MET.
• RAIN VOLUME INFORMATION ONLY USED TO ESTABLISH
STORM EVENT CRITERIA WAS MET.
-------
COMMENTS FOR EPA'S
"NPDES STORM WATER SAMPLING GUIDANCE DOCUMENT"
CHAPTER 3
PROVIDES THE NECESSARY INFORMATION FOR THE DIFFERENT OPTIONS
AVAILABLE TO MEASURE OR ESTIMATE FLOW RATES, CALCULATING FLOW
VOLUME, PREPARATION OF FLOW-WEIGHTED COMPOSITE SAMPLES, AND
METHODOLOGY.
PROBLEMS WERE EXPERIENCED AND A NUMBER OF PROBLEM AREAS WERE
IDENTIFIED IN CHAPTER 3.
-------
EXHIBIT 3-8 (Cont.)
Step 1: When each sample or aliquot is taken, record the data for the time the sample was
        taken and the length between points A and B (at least 5 feet apart). See columns
        A, B and C.

Example Data:

  A        B         C          D         E          F          G
  Sample   Time in   Distance   Time of   Depth of   Width of   Calculated
  Number   Minutes   Between    Travel    Water at   Flow at    Flow Rate,
                     Points     (A&B)     Point B,   Point B,   cfm
                     A&B, ft    (min)     ft         ft
  1        0         5.0        0.17      0.12       0.5        1.8
  2        20        5.0        0.18      0.25       0.5        3.5
  3        40        5.0        0.20      0.29       0.5        3.6
  4        60        5.0        0.21      0.33       0.5        3.9
  5        80        5.0        0.18      0.29       0.5        4.0
  6        100       5.0        0.17      0.25       0.5        3.7
  7        120       5.0        0.17      0.12       0.5        1.8
  8        140       5.0        0.16      0.12       0.5        1.9
  9        160       5.0        0.18      0.12       0.5        1.7
-------
EXHIBIT 3-8 (Cont.)
Step 2: Place a float in the water flow at point A and time it as it moves from point A to
        point B. Record the time in minutes. See column D.
Step 3: Measure the depth of the water and the width of the flow at point B. See columns
        E and F.
Step 4: Calculate the flow rate for each sample time using the common flow rate formula.
        See column G.

Formulas:  Velocity (V) = Length from A to B / Time of Travel
           Area (A) = Water Depth x Width of Flow
           Flow Rate (Q) = (V) x (A)

Example: For Sample 1
           V = 5.0 ft / 0.17 min = 29.4 ft/min
           A = 0.12 ft x 0.5 ft = 0.06 ft2
           Q = 29.4 ft/min x 0.06 ft2 = 1.8 cfm
-------
EXHIBIT 3-16 EXAMPLE CALCULATION OF TOTAL RUNOFF VOLUME FROM FLOW RATE DATA (Cont.)
Step 1: Measure and tabulate flow depths and velocities every 20 minutes (at the same time
        that the sample is collected) during at least the first 3 hours of the runoff event.

Example Data:

  A        B         C           D           E         F
  Sample   Time in   Flow        Flow        Width,    Calculated
  Number   Minutes   Velocity,   Depth, ft   ft        Flow Rate,
                     ft/min                            cfm
  1        0         --          --          --        --
  2        20        4           0.2         5         4
  3        40        8           0.4         5         16
  4        60        12          0.4         5         24
  5        80        8           0.4         5         16
  6        100       4           0.2         5         4
  7        120       8           0.2         5         8
  8        140       4           0.2         5         4
  9        160       4           0.2         5         4
-------
EXHIBIT 3-16 (Cont.)
Step 2: Calculate and tabulate the cross-sectional area of flow for each of the flow depths
        measured. Calculate the flow rate for each discrete set of measurements.

Formula:
        Flow Rate Q (cfm) = Velocity (ft/min) x Area (sq ft)
        Area = Depth x Width

Example: For Sample 1   Area = 0.2 ft x 5 ft = 1 sq ft
                        Flow Rate = 4 ft/min x 1 sq ft = 4 cfm

Step 3: Plot the flow rate, Q, versus time. Also, assume that flow drops uniformly from the
        last calculated flow rate (Q9) to zero at the time when Q10 would have been taken.

Example: The flow rates calculated in Step 2 are plotted against the time between
         samples.
         [Plot: flow rate, Q (cfm), versus time (minutes), 0 to 180 minutes]
-------
EXHIBIT 3-16 (Cont.)
Step 4: The total flow volume (Vt) can be calculated by geometrically determining the area
        under the curve. The summation of the individual volumes per increment of time (V1
        through V9) is the total flow volume of the event.

Example:
         [Plot: flow rate (cfm) versus time (minutes), 0 to 180 minutes, with the area
         under the curve divided into increments]
-------
EXHIBIT 3-16 (Cont.)
Step 5: Compute the flow volume associated with each observation (V1, V2, ..., V9) by
        multiplying the measured flow rate by the duration (in this case, 20 minutes). Be sure
        the units are consistent. For example, if durations are in minutes and flow velocities
        are in cubic feet per second (cfs), convert the durations to seconds or the velocities
        to feet per minute.

Example:
         [Plot: incremental flow volume versus time (minutes)]
-------
CHAPTER 3
• EXHIBIT 3-22 (P. 74) PROVIDES EXAMPLE OF TIMES
TO CONSIDER FOR SAMPLING.
EXAMPLE SHOWS FIRST GRAB SAMPLE COLLECTED
AFTER 5 MINUTES OF FLOW.
• GRAB SAMPLES FOR ALL PARAMETERS COLLECTED
FOLLOWING DIRECTIONS IN EXHIBIT 3-17 (P.69).
-------
CHAPTER 3
• NINE GRAB SAMPLES NEEDED FOR COMPOSITE SAMPLE.
FLOW-WEIGHTED COMPOSITE SAMPLE SHOULD BE
PREPARED IN LABORATORY (P. 75).
1,000 mL GRAB SHOULD PROVIDE ENOUGH
VOLUME FOR THE COMPOSITE
PROPER CONTAINERS/PRESERVATION TAKEN FROM
40 CFR PART 136
-------
CHAPTER 3
FLOW-WEIGHTED COMPOSITE FOR SEMIVOLATILE ORGANICS
PROBLEM
• NINE BOTTLES TO COLLECT GRAB SAMPLES.
MEASURING DEVICE TO TRANSFER PROPER VOLUME.
• ANOTHER DEVICE USED TO TRANSFER TO FINAL SAMPLE
BOTTLES.
• LOSS OF SEMIVOLATILE ORGANICS TO CONTAINER WALLS?
-------
CHAPTER 3
FLOW-WEIGHTED COMPOSITE FOR VOLATILE ORGANICS
• AUTOMATIC SAMPLERS CANNOT BE USED TO COLLECT
VOLATILE ORGANIC SAMPLES.
• GUIDANCE FOR COLLECTION OF VOC SAMPLE IS ON P. 69.
• GUIDANCE IDENTIFIES VOC SAMPLES AS GRAB SAMPLES.
• PAGES 85 AND 86 OF SECTION 3.5.2 PROVIDE DETAILS
FOR COMPOSITE.
-------
CHAPTER 3
FLOW-WEIGHTED COMPOSITE FOR VOLATILE ORGANICS
MATHEMATICAL COMPOSITING
PROCEDURAL COMPOSITING
-------
CHAPTER 3
FLOW-WEIGHTED COMPOSITE FOR VOLATILE ORGANICS
PROCEDURAL COMPOSITING
EPA GUIDANCE DOCUMENT (40 CFR Part 141.24(f)14(iv)
AND (v)) REVEALED THIS SECTION PROVIDES INSTRUCTIONS
FOR PREPARING A TIME-WEIGHTED COMPOSITE SAMPLE
FOR VOLATILE ORGANICS.
THE MAXIMUM NUMBER OF GRAB SAMPLES ALLOWED FOR THE
COMPOSITE IS FIVE.
BASED ON THE REVIEW OF 40 CFR PART 141, EPA REALLY
DID NOT PROVIDE SPECIFIC DETAILS FOR PREPARATION OF
FLOW-PROPORTIONED COMPOSITE SAMPLE FOR VOLATILE
ORGANICS.
-------
CHAPTER 3
METHODOLOGY PROBLEMS
• ON PAGE 85, EPA PROVIDED AN INAPPROPRIATE REFERENCE
TO 40 CFR PART 141 AND THE EPA 500 SERIES METHODS.
• REFERENCE SHOULD HAVE BEEN FOR 40 CFR PART 136 AND
600 SERIES METHODS.
• A REFERENCE IS ALSO GIVEN FOR A 25 mL PURGE VESSEL.
THIS REFERS TO METHOD 524.2 AND NOT METHOD 624.
-------
STORM WATER SAMPLING AND ANALYSIS HYPOTHETICAL EXAMPLE
Site Location — Houston, Texas
Facility — Chemical Plant
Parameters Required on NPDES Permit
Oil and Grease
pH
BOD5
COD
TSS
Total P
TKN
Nitrate + Nitrite
Volatile Organics (Method 624)
Semivolatile Organics (Method 625)
Cyanide
Fecal Coliform
Sampling Point — "V" Shaped Grass-Lined Ditch (completely dry)
Last Rainfall —- Ten Days Ago
Forecast -— Light to Moderate Rain Expected During the Morning Hours
with Clearing in the Afternoon. Approximately 0.25 to 0.75
Inches Total Expected.
Decision — Conditions Appear Optimal to Conduct a Storm Water Sampling
and Sampling was Approved and Authorized.
Logistics — Sampling Crew was at the Site at 5 a.m.
-------
STORM WATER SAMPLING AND ANALYSIS
HYPOTHETICAL EXAMPLE
VERIFICATION OF STORM EVENT
NO RAIN IN LAST 240 HOURS (72 HOURS REQUIREMENT).
RAIN LASTED 4 HOURS 15 MINUTES (4 TO 12 HOURS CRITERIA)
• RAIN GAGE VOLUME 0.42 INCHES (0.38 TO 1.14 CRITERIA)
CONCLUSION
STORM EVENT MET REQUIREMENTS
-------
STORM WATER SAMPLING AND ANALYSIS
HYPOTHETICAL EXAMPLE
VERIFICATION OF SAMPLING CRITERIA
INITIAL GRAB SAMPLE AT 20 MINUTES (WITHIN FIRST
30 MINUTES)
NINE GRAB SAMPLES COLLECTED (3 EACH HOUR, MORE
THAN 15 MINUTES APART)
-------
STORM WATER SAMPLING AND ANALYSIS
HYPOTHETICAL EXAMPLE
SAMPLING CONCLUSIONS
• EVENT MET REGULATORY CRITERIA FOR HOUSTON AREA.
• ALL SAMPLES WERE COLLECTED WITHIN REGULATORY CRITERIA.
-------
STORM WATER SAMPLING AND ANALYSIS
HYPOTHETICAL EXAMPLE
CALCULATIONS FOR FLOW
• TABLE 1 DATA TAKEN FROM EXHIBIT 3-8 P. 51
COLUMN "B" LABELED "t".
COLUMN "G" LABELED "Q".
• COLUMN "A" LABELED "S".
-------
TABLE 1 - EXAMPLE CALCULATION OF FLOAT METHOD FOR UNIMPEDED OPEN CHANNEL FLOW
Step 1: When each grab sample was collected, record the time involved; the length between
        points A and B (at least 5 feet apart); the flow depth and width; and the flow
        calculation.

  Sample    Clock      Time in    Distance   Time of    Depth of   Width of   Calculated
  Number,   Time       Minutes,   Between    Travel     Water at   Flow at    Flow Rate,
  S                    t          Points     (A to B),  Point B,   Point B,   cfm, Q
                                  A&B, ft    min        ft         ft
  S0        5:40 a.m.  t0 = 0     5.0        0.00       0.00       0.0        Q0 = 0
  S1        6:00 a.m.  t1 = 20    5.0        0.17       0.12       0.5        Q1 = 1.8
  S2        6:20 a.m.  t2 = 40    5.0        0.18       0.25       0.5        Q2 = 3.5
  S3        6:40 a.m.  t3 = 60    5.0        0.20       0.29       0.5        Q3 = 3.6
  S4        7:00 a.m.  t4 = 80    5.0        0.21       0.33       0.5        Q4 = 3.9
  S5        7:20 a.m.  t5 = 100   5.0        0.18       0.29       0.5        Q5 = 4.0
  S6        7:40 a.m.  t6 = 120   5.0        0.17       0.25       0.5        Q6 = 3.7
  S7        8:00 a.m.  t7 = 140   5.0        0.17       0.12       0.5        Q7 = 1.8
  S8        8:15 a.m.  t8 = 155   5.0        0.16       0.12       0.5        Q8 = 1.9
  S9        8:30 a.m.  t9 = 170   5.0        0.18       0.12       0.5        Q9 = 1.7

For S1:  V = 5.0 ft / 0.17 min = 29.4 ft/min
         A = 0.12 ft x 0.5 ft = 0.06 ft2
         Q = 29.4 ft/min x 0.06 ft2 = 1.8 cfm
-------
STORM WATER SAMPLING AND ANALYSIS
HYPOTHETICAL EXAMPLE
CALCULATIONS FOR TOTAL VOLUME
• EXHIBIT 3-16 P.63 PROVIDES STEP BY STEP PROCESS.
• NOT NECESSARY TO DO STEP 1 THE WAY TABLE 1 WAS PREPARED.
• ONLY "t" AND "Q" FROM TABLE 1 ARE USED FOR VOLUME CALCULATIONS.
• STEP 2 FOUND TO BE REDUNDANT SINCE TABLE 1 CONTAINS
INFORMATION.
• STEPS 3, 4, AND 5 OF EXHIBIT 3-16 USED TO CALCULATE
VOLUME.
• ARBITRARILY SET Q9 TO ZERO (EPA EXAMPLE).
• VOLUME CONVERTED TO LITER UNITS USING EQUATION FROM
EXHIBIT 3-24 P. 78.
-------
TABLE 2 - VOLUME (V) = FLOW RATE (cfm) x DURATION (MINUTES) (Cont.)

V1 = 1/2 (Q1 - Q0) (t1 - t0)
   = 1/2 (1.8 - 0) (20 - 0)
   = 18 ft3 x 28.32 L/ft3 = 510 liters

V2 = 1/2 (Q2 - Q1) (t2 - t1) + Q1 (t2 - t1)
   = 1/2 (3.5 - 1.8) (40 - 20) + 1.8 (40 - 20)
   = 1/2 (1.7) (20) + 1.8 (20)
   = 17 + 36
   = 53 ft3 x 28.32 L/ft3 = 1,501 liters

V3 = 1/2 (Q3 - Q2) (t3 - t2) + Q2 (t3 - t2)
   = 1/2 (3.6 - 3.5) (60 - 40) + 3.5 (60 - 40)
   = 1/2 (0.1) (20) + 3.5 (20)
   = 1 + 70
   = 71 ft3 x 28.32 L/ft3 = 2,011 liters
-------
TABLE 2 (Cont.)

V4 = 1/2 (Q4 - Q3) (t4 - t3) + Q3 (t4 - t3)
   = 1/2 (3.9 - 3.6) (80 - 60) + 3.6 (80 - 60)
   = 1/2 (0.3) (20) + 3.6 (20)
   = 3 + 72
   = 75 ft3 x 28.32 L/ft3 = 2,124 liters

V5 = 1/2 (Q5 - Q4) (t5 - t4) + Q4 (t5 - t4)
   = 1/2 (4.0 - 3.9) (100 - 80) + 3.9 (100 - 80)
   = 1/2 (0.1) (20) + 3.9 (20)
   = 1 + 78
   = 79 ft3 x 28.32 L/ft3 = 2,237 liters

V6 = 1/2 (Q6 - Q5) (t6 - t5) + Q5 (t6 - t5)
   = 1/2 (3.7 - 4.0) (120 - 100) + 4.0 (120 - 100)
   = 1/2 (-0.3) (20) + 4.0 (20)
   = [-3] + 80
   = 77 ft3 x 28.32 L/ft3 = 2,181 liters

V7 = 1/2 (Q7 - Q6) (t7 - t6) + Q6 (t7 - t6)
   = 1/2 (1.8 - 3.7) (140 - 120) + 3.7 (140 - 120)
   = 1/2 (-1.9) (20) + 3.7 (20)
   = [-19] + 74
   = 55 ft3 x 28.32 L/ft3 = 1,558 liters
-------
TABLE 2 (Cont.)

V8 = 1/2 (Q8 - Q7) (t8 - t7) + Q7 (t8 - t7)
   = 1/2 (1.9 - 1.8) (155 - 140) + 1.8 (155 - 140)
   = 1/2 (0.1) (15) + 1.8 (15)
   = 0.75 + 27
   = 27.75 ft3 x 28.32 L/ft3 = 786 liters

V9 = 1/2 (Q9 - Q8) (t9 - t8) + Q8 (t9 - t8)
   = 1/2 (0 - 1.9) (170 - 155) + 1.9 (170 - 155)
   = 1/2 (-1.9) (15) + 1.9 (15)
   = [-14.25] + 28.5
   = 14.25 ft3 x 28.32 L/ft3 = 404 liters
-------
STORM WATER SAMPLING AND ANALYSIS
HYPOTHETICAL EXAMPLE
CALCULATIONS FOR ALIQUOT VOLUME
• EXHIBIT 3-24 STEP 8 PROVIDES EQUATION NEEDED TO
CALCULATE ALIQUOT VOLUME NEEDED FROM EACH 1-L
GRAB SAMPLE TO PREPARE THE FLOW-WEIGHTED COMPOSITE
SAMPLE.
-------
TABLE 3
Step 8: Calculate the volume of each sample aliquot (A) which must be used
        to comprise a flow-weighted composite sample. The following
        equation should be used:

Aliquot volume (A) (mL) = Minimum aliquot volume (mL) x
        [Aliquot discharge volume (V) (liters) / Largest discharge volume (V) (liters)]

A1 = 1,000 mL x (510 liters / 2,237 liters)   =   228 mL
A2 = 1,000 mL x (1,501 liters / 2,237 liters) =   671 mL
A3 = 1,000 mL x (2,011 liters / 2,237 liters) =   899 mL
A4 = 1,000 mL x (2,124 liters / 2,237 liters) =   949 mL
A5 = 1,000 mL x (2,237 liters / 2,237 liters) = 1,000 mL
A6 = 1,000 mL x (2,181 liters / 2,237 liters) =   975 mL
A7 = 1,000 mL x (1,558 liters / 2,237 liters) =   696 mL
A8 = 1,000 mL x (786 liters / 2,237 liters)   =   351 mL
A9 = 1,000 mL x (404 liters / 2,237 liters)   =   181 mL
-------
TABLE 4 - SUMMARY OF VOLUMES NEEDED TO PREPARE A FLOW-WEIGHTED COMPOSITE
FOR ALL PARAMETERS AND VOLATILE ORGANICS

Sample     Parameter Composite     Volatile Organic Composite
S1         A1 = 228 mL             VOA1 = 0.23 mL
S2         A2 = 671 mL             VOA2 = 0.67 mL
S3         A3 = 899 mL             VOA3 = 0.90 mL
S4         A4 = 949 mL             VOA4 = 0.95 mL
S5         A5 = 1,000 mL           VOA5 = 1.00 mL
S6         A6 = 975 mL             VOA6 = 0.98 mL
S7         A7 = 696 mL             VOA7 = 0.70 mL
S8         A8 = 351 mL             VOA8 = 0.35 mL
S9         A9 = 181 mL             VOA9 = 0.18 mL
TOTAL (V)  5,950 mL                5.96 mL
-------
STORM WATER SAMPLING AND ANALYSIS
HYPOTHETICAL EXAMPLE
IT IS ABSOLUTELY ESSENTIAL THAT THE LABORATORY IS
PROVIDED THE INFORMATION CONTAINED IN TABLE 4 AND
THAT THEY COMPLETELY UNDERSTAND THE SIGNIFICANCE
OF TABLE 4 AND THE NEEDED CORRECTION IN ANALYTE
CONCENTRATION CALCULATIONS BECAUSE OF THE VOLATILE
ORGANICS SAMPLE VOLUME DIFFERENCE.
-------
STORM WATER SAMPLING AND ANALYSIS
HYPOTHETICAL EXAMPLE
OTHER CONSIDERATIONS
• TWO 1-L CONTAINERS SHOULD BE COLLECTED AT EACH
SAMPLING.
• KEEP COLLECTION INTERVALS EQUAL FOR ENTIRE PROGRAM
ROUND ALIQUOT VOLUMES TO TWO SIGNIFICANT FIGURES
WHERE NEEDED.
-------
STORM WATER SAMPLING AND ANALYSIS
HYPOTHETICAL EXAMPLE
CONCLUSIONS
• EPA'S GUIDANCE DOCUMENT IS HELPFUL IN MEETING
REGULATIONS.
• PREPARE A HYPOTHETICAL EXAMPLE FOR SAMPLING LOCATION.
• IT IS POSSIBLE TO MEET STORM WATER REGULATORY
REQUIREMENTS.
• EPA MAY WANT TO PROVIDE MORE DETAILS.
• CONCERN REMAINS FOR POTENTIAL TO LOSE ANALYTES TO
CONTAINER WALLS.
• CONCERN WITH COMMERCIAL LABORATORIES TO PREPARE
FLOW-WEIGHTED COMPOSITE SAMPLES.
• COMPOSITE SAMPLES FOR VOLATILES IS FAR FROM ROUTINE.
• HYPOTHETICAL EXAMPLE MEETS REGULATORY CRITERIA (?).
• ELECTRONIC SPREADSHEET SHOULD BE PREPARED FOR CALCULATIONS.
-------
STORM WATER SAMPLING AND ANALYSIS
HYPOTHETICAL EXAMPLE
DISCLAIMER
THE AUTHOR AND CMA ASSUME NO RESPONSIBILITY FOR
ANYONE WHO MAY WANT TO USE OR FOLLOW THE HYPOTHETICAL
EXAMPLE; HOWEVER, WE FEEL THE EXAMPLE DOES MEET THE
REGULATORY REQUIREMENT OF THE FINAL RULE FOR STORM
WATER REGULATIONS.
-------
QUESTION AND ANSWER SESSION
MR. TELLIARD: Do we have any questions? We
all have storm water answers? Come to the microphone. Please identify yourself and your
organization.
MR. CROWLEY: Ray Crowley from Millipore.
You didn't talk about the specific results, but if you profiled the VOCs or the non-
volatiles, the semi-volatiles, do you see most of the runoff in the first 30-minute sample?
MR. STANKO: This was only a hypothetical
example. We believe that most of the volatiles are going to be in that first grab sample, and you
have to analyze the first grab sample for the volatiles and semi-volatiles in addition to doing this
flow-weighted composite.
If there is anything there, we think that the first flush, as it is called,
is probably going to have it, and it kind of comes down to why we are doing all this flow-
weighted compositing for zeros. The rule says we have to.
MR. CROWLEY: And a second point, since I am
not aware of all the regs. How often do you have to do this to meet site compliance?
MR. NOEL: That is negotiable. I think everyone
is going to have to have it in place. October is the deadline for this year? I think it is.
MR. CROWLEY: And then every year? Or what
is it, once every five years?
MR. STANKO: I think that is negotiable. You could
do it on a quarterly basis, whatever your agency wants you to do.
MR. CROWLEY: And for a chem plant like in
Houston, how many sites do you have to take samples at?
MR. STANKO: Wherever you have a storm water
runoff.
MR. CROWLEY: I guess you have got to have one,
huh?
MR. STANKO: Well, I guess we are going to start
changing our ditches.
MR. CROWLEY: Thanks.
393
-------
MR. TELLIARD: Any other questions?
(No response.)
MR. TELLIARD: Thanks so much, George.
Appreciate it.
MR. STANKO: Thank you.
394
-------
MR. TELLIARD: Our next speaker will be
speaking on one of our favorite-est subjects, national sewage sludge survey or sludge sampling.
John McGuire is with our Athens laboratory under the Office of Research and Development.
John has been here before, and we want to welcome him back. And he is now armed.
MR. MCGUIRE: Armed and loaded, Bill. Thank
you, Bill.
This talk is going to be quite a bit different from the talks that you are used to
hearing at this meeting, so bear with me.
The title of this talk is, "Commonality of Non-target Organics and Extracts of
National Sewage Sludge Survey Samples."
Identification of organic compounds present in sludge has been important in
sewage treatment plants for many years. Twenty years ago, Miller and Thistlethwaite defined
several types of sewage, including sullage, all-inclusive sewage, industrial waste, storm water,
groundwater, and municipal sewage. From the engineering viewpoint, each of these has its own
composition.
However, the importance and complexity of domestic sewage, which is defined as
sullage plus fecal matter and urine, have made that the area that has been most widely
investigated.
Fulton and Klein reported that domestic sewage consists of carbohydrates, fats,
proteins, their decomposition products and synthetic detergents. Having said that, they did not
identify specific organics.
Garrison and his coworkers...it just happens that is in my lab...seem to have been
the first to report specific organic chemicals in sewage treated by several treatments, including
activated sludge. These workers found 19 straight chain carboxylic acids, two unsaturated
carboxylic acids, seven branched chain carboxylic acids, seven oxyacids, four ring-containing
carboxylic acids, seven alcohols, three thiolates, eight chlorine-containing compounds, two
steroids, four drugs, six aromatic compounds, and 11 miscellaneous.
In 1981, Giger first reported the presence of nonylphenols and nonylphenol
ethoxylates in sewage. Since then, he has published extensively in the area of improved methods
for these compounds in various samples.
He pointed out that the nonylphenols, which are toxic to fish in environmental
waters, may be the result of biological treatment of sewage sludge containing the ethoxylated
nonylphenols, common non-ionic surfactants. He also noted that anaerobic sludge treatment led
to higher levels of nonylphenols than did aerobic treatment. His postulate that the nonylphenols
395
-------
come from the ethoxylates is a concern in view of last year's CMA/EPA report that nonylphenol
ethoxylates themselves do not persist in the environment.
Rosen and his group also detected nonylphenols but attributed them to degradation
of an antioxidant, tris-nonylphenol phosphate.
Six years ago, Demirijian studied the fate of sludge applied to soils. The initial
sludge was analyzed and found to contain 12 priority pollutants as well as 29 other organics,
including aliphatics and aromatic amines, acids, and phenols.
Getting up to date, the National Sewage Sludge Survey was conducted in '88 to
'89. Among other objectives, it was designed to provide concentrations of 419 target
organic analytes, including 176 semi-volatiles, for which the EGD...I am sorry, that is now the Engineering
and Analysis Division of the Office of Science and Technology of the Office of Water, the group
that is so well represented here today...had standards.
These target compounds were chosen based on Section 307(a) of the Clean Water
Act, Appendix VIII of RCRA, compounds that were detected in the Domestic Sewage Study,
and ones that were suspected to be of interest in municipal sludge. "Suspected" pretty much
means Bill Telliard.
Nationwide, 180 plants were sampled for this study. The results, which were
based on low resolution electron impact mass spec, were reported by the contract labs conducting
the analyses and summarized in the Office of Water report in 1989.
Low resolution EI-GC/mass spec has been applied since the early '70s to
identification of organics in water. At this time, it is the accepted method for positive
identification of target analytes and the most significant analytical technique for monitoring and
regulation of organic pollutants, at least semi-volatiles.
GC/MS with automated spectra and retention time matching against a reference
collection is known to be excellent for specific substantiation of target compounds, but its current
success rate for tentative identification of unknowns is poor.
There are about three studies I know of associated with the tentative identification
of unknown organics in the Superfund program, and they range from 1 study that had a 1 percent
confidence in the identification to the highest confidence rate that was 42 percent. So, current
success rates are rather poor.
In particular, it fails to detect and identify compounds whose mass spectra are not
in the spectral libraries.
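As background for readers less familiar with library searching, a match between an unknown spectrum and a reference spectrum is commonly scored with a normalized dot product. The Python sketch below is purely illustrative; the toy spectra and names are ours and are not taken from the NSSS data or from any particular search algorithm.

    import math

    def spectral_similarity(spec_a, spec_b):
        """spec_a, spec_b: dicts of nominal m/z -> relative intensity."""
        mzs = set(spec_a) | set(spec_b)
        dot = sum(spec_a.get(m, 0.0) * spec_b.get(m, 0.0) for m in mzs)
        norm_a = math.sqrt(sum(v * v for v in spec_a.values()))
        norm_b = math.sqrt(sum(v * v for v in spec_b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    # Hypothetical spectra for illustration only
    unknown = {60: 100.0, 73: 85.0, 129: 40.0}
    library = {"hexanoic acid": {60: 100.0, 73: 80.0, 87: 30.0},
               "naphthalene": {128: 100.0, 127: 15.0}}

    best = max(library, key=lambda name: spectral_similarity(unknown, library[name]))
    print(best)                                # hexanoic acid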
As was to be expected, most of the compounds that appeared in the course of the
laboratory sample workups were not target analytes, and they were not identified. Accordingly,
396
-------
the EAD asked that qualitative multispectral analysis methods be applied to a subset of 20 of the
NSSS sludges.
This paper summarizes some of the highlights of that work. The full report will
be published in the fall.
The multispectral analysis approach uses high resolution mass spec to determine
elemental composition of ions, Fourier transform infrared spectroscopy to recognize submolecular
structures, and chemical ionization mass spectrometry, CI, to establish molecular weights of the
unknowns.
The spectral information is then melded together by a team of analysts to postulate
the structures of the unknown compounds.
At another meeting last year, one of the audience asked well, how does that differ
from good analytical chemistry? Well, in one sense, it doesn't, but the big difference in our
multispectral analysis program is the bit I just mentioned, a team of analysts who are used to
applying this sort of approach.
A number of locations have the individual scientists and equipment necessary to
apply FTIR, high res, and CI, but most of them are not set up as a team. We are.
The objective of this work is not the usual EPA quantitative methods approach but
the qualitative identification of sample constituents. Nor is it a complete qualitative/quantitative
analysis.
Results and application of this technique to unidentified compounds in
environmental samples have been excellent. Upon reexamination of samples from another survey
conducted by the Office of Water, we identified two series of aldehydes...I believe that work was
reported here at an earlier meeting as well as a variety of organophosphates whose spectra were
not included in the reference collection of mass spectra.
Following the identification, spectra of the identified compounds have been
included in both the Wiley and the NIST collections of spectra so that future identifications of
these compounds should be simplified for others.
The 20 samples chosen by the EAD for the in-depth examination were to be
broadly representative of samples taken in the NSSS. They included samples from seven regions
with examples of sludges that had been produced during primary and/or secondary treatments,
ones produced during aerobic and/or anaerobic digestion, ones produced by one or more of six
methods of separating liquids and solids, and ones dried with or without drying beds. All but
one of the samples have been characterized by one of the contract laboratories of the Office of
Water as part of the NSSS. Two of the samples were duplicates.
397
-------
This history, gross appearance, and residue weight of the samples as received are
given in the next slide.
Based on screening tests, three principal approaches were used for extraction of
the analytes from the sludge matrix. The first of these was used for drier sludges and consisted
of extraction with a 50/50 mixture of acetone and methylene chloride followed by filtration of
the solvent through a bed of sodium sulfate.
For those samples that appeared moist, a significant amount of granular sodium
sulfate was ground up...by that, I mean about 25 to 30 percent...with the sludge prior to
extraction.
When a sample appeared to be essentially liquid at room temperature, water was
added prior to a methylene chloride shakeout. In all cases, the final step was concentration of
the solvent using a Kuderna-Danish.
That slide showed basically that sample appearance was related to dry weight, and
because appearance also was related to the method of prep, the dry weight and the method of
prep are confounded statistically. That certainly would introduce a bias into any quantitative
analysis, but we expected at the start, and have no reason to believe now, that it had any effect
on the qualitative results.
Now for this slide. Results of the contract laboratory analysis for 176 target semi-
volatile analytes were reviewed in conjunction with the analyses of the sludge samples. This
table shows the 21 target compounds that were reported in at least one sample by the
laboratories.
Whether or not these identifications were confirmed in our re-analysis is indicated
by an appropriate tag in the table, and I am not sure whether you can read the tag. Basically,
a single asterisk indicates that the presence was confirmed at least once in this study. A double
plus sign indicates that we could not confirm those compounds in any of the samples, and we
seriously doubt the identification from the laboratories.
You will notice that most of the compounds that do have the double plus sign on
them are the real "nasties," the benz[a]pyrene, the benz[b]fluoranthene, the benz[k]fluoranthene,
benzo[ghi]perylene are all double plussed.
Samples that have a pound sign next to them...I am sorry...compounds that have
a pound sign next to them...and I believe the only one in this table is pyrene...were not confirmed
due to a high sample background.
Most of the non-phthalate target compounds that were reported were not reported
in more than one sample. The phthalates, in general, and bis-iso-octyl phthalate in particular,
were found in all samples.
398
-------
The reason for the small number of targets reported is the same as the reason for
the small number of confirmations. The samples are so complex and dirty it is extremely
difficult to obtain a good mass spectrum of any specific compound.
As an example of this, the level of detection for the semi-volatile
fraction was found by the contract laboratories to range from a relatively clean sample having
a minimum detection level of 250 ppb to a particularly bad one having a detection level of 500
ppm. Our work in no way contradicted this.
Yesterday, Jim Rice proposed a CMQL which I believe would make a great deal
of sense in analytical work such as that of the NSSS if they get lists that are going to be the
basis for regulation (which happens to be an approach I don't agree with.)
The chromatographic profiles of the various samples did not resemble one another,
as is shown in the next slide. However, this quantitative attribute was not reflected in the
qualitative aspect of the analyses. Many of the same compounds could be found in all samples,
although a few were much more prominent in some samples than in others.
These compounds are ones that, although not target compounds, are definitely
anthropogenic in the most restricted meaning of the word and are not surprising in sewage
sludge. Specifically, we found numerous fatty acids as well as their degradation products such
as aldehydes and polyunsaturated straight chain hydrocarbons. Other types of compounds found
that are man-made appear to be sterols and degradation products (thianes, thiols, and sulfur) of
proteins.
Other compound types found are ones that are anthropogenic in the usual broader
sense of the term. These include surfactant amines and phenols, perfumes used in detergents and
soaps, and chlorine- and nitrogen-containing aromatics.
The next slide is a brief summary of the various classes found in the course of the
study. Chlorine- and nitrogen-containing aromatics consisted of mono and dichloroanilines and/or
mono and chloroisocyanatobenzenes. Because reduction of the isocyanatobenzene could lead to
aniline and oxidation of the aniline could lead to isocyanatobenzene, it is not possible to say
which came first, nor is it possible to say whether both may simply be disproportionation
products of Triclocarban®, a bacteriostat used in soaps which has been identified by Rosen using
LC/MS.
Nonylphenols have been reported before in sludges, as I have already said, and
it has been observed that aerobic sludge treatment tends to result in lower levels than anaerobic.
As expected, the level of nonylphenols in the two aerobically treated sludges was very low. In
fact, nonylphenols could only be found in sludge 16525 by using specific ion monitoring for the
most significant peaks of the nonylphenol.
399
-------
The levels in the other sludges were variable, ranging from levels equivalent to
those of the aerobic treated samples and some samples which were totally untreated to that of
sludge 16825 where the nonylphenols were the most significant GC peaks.
One of the most obvious compound types evidenced in these sludge samples is that
of fatty acids. The next slide shows a mass 60 chromatogram which shows the presence of
these compounds. The series of acids is clearly evident from butanoic acid at scan 290 through
arachidic acid at scan 1575. The significant peak at 303 is a degradation product of acetone and
is not part of the series.
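For readers unfamiliar with the technique, a mass 60 (or mass 70, or mass 47) chromatogram is simply the summed intensity near one m/z value plotted scan by scan. The Python sketch below shows the idea with made-up scan data; the only detail taken from the slide is that butanoic acid appears near scan 290.

    def extracted_ion_chromatogram(scans, target_mz, tolerance=0.5):
        """scans: list of (scan_number, {m/z: intensity}) pairs from a GC/MS run."""
        trace = []
        for scan_number, peaks in scans:
            intensity = sum(i for mz, i in peaks.items()
                            if abs(mz - target_mz) <= tolerance)
            trace.append((scan_number, intensity))
        return trace

    # Toy data: scan 290 (butanoic acid region) shows a strong m/z 60 response
    scans = [(289, {55.0: 120.0, 73.0: 300.0}),
             (290, {60.1: 2500.0, 73.0: 900.0}),
             (291, {60.0: 400.0})]
    print(extracted_ion_chromatogram(scans, 60.0))
    # [(289, 0), (290, 2500.0), (291, 400.0)]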
Although these acids, in general, represent the largest organic components of the
samples, it is possible that they are themselves only degradation products of fats that were
extracted but could not be chromatographed under the conditions used.
As an example of the types of identifications made in this study, the next slide
gives a full list of the carboxylic acids found. I am sure you won't be able to read it, but it is
not significant other than the fact that that long list is entirely made up of acids.
Other families of compounds also can be enhanced away from the high chemical
noise of the sample. The next slide shows a mass 70 chromatogram which emphasizes the
aldehyde series, starting with hexanal at scan 244 and continuing at 363, 484, 599, and ending
with decanal at 705.
Mass 70 is less distinctive than the mass 60 used in the other slide. Therefore,
this representation has a very significant number of GC peaks that are not part of the series.
Nonetheless, the presence of the series is evident.
The next slide shows the mass 47 chromatogram that is characteristic of thiols in
a sample, starting with pentylthiol at scan 215 and continuing with hexylthiol at 314, heptylthiol
at 416, octylthiol, nonylthiol, and ending with decylthiol at scan 692. The interferences here are
generally dimethylpolythianes.
In the course of this work, we stored one of the extracts that had been rich in both
aldehyde and thiane families in a refrigerator for several months between analyses. Aldehydes,
thiols, and a related series of thianes all had disappeared when the sample was rerun. Because
all of these compounds are both relatively volatile and relatively active chemically, the
significance cannot be pinned down as to which effect caused the disappearance, but the need
for a short storage time is obvious.
Of the 157 compounds and classes found in this study, only 15 of them, or 10
percent, are regulated compounds. This is based on the EMI which is being advertised rather
heavily at this meeting...although an additional 55, or 35 percent, are non-regulated target
compounds such as those from the "4C other" list that was put together by Bill following the tape
study.
400
-------
Hence, more than one-half of all compounds found in these very common types
of samples are non-target compounds. It raises the question whether we may be failing to
regulate things we should be regulating.
It is worth reemphasizing that this is to be expected for all but the most highly
characterized types of samples. There are significantly less than 1000 target compounds, but
Chem Abstracts lists over 10 million reported chemicals.
It is also worth noting that there was little need for a full range of multispectral
analysis in this work. Most of the compounds were readily identified through the use of
conventional mass spec matching when a sufficiently large collection of reference spectra was
used. In this work, we used both the most recent NIH collection and the Wiley collection. The
older NIH collection by itself would have been inadequate.
Examples of compounds whose identifications did use other techniques are the
aldehydes, the mercaptopropionic acid and its ester, the chloroisocyanatobenzenes, the
alkyltetralins, the perfumes shown in the next slide.
(I like that one at the top. That is actually caryophyllene, but it certainly looks
like pac man, doesn't it? These are two of the perfumes that were found in the sewage sludge.
They are conventional perfumes that are used frequently in the soap industry.) And, finally, the
almost ubiquitous musk compounds shown in the next slide that we identify as galaxolide and
versalide.
The structures of these last two compounds seemed so unlikely that we decided
to investigate some commercial detergents to see if we could determine a likely source of such
compounds. We then took two commercial detergents from our home shelves. The second
detergent that we analyzed showed the same two compounds in approximately the same GC peak
ratio. It is a very well-known detergent. I might refer to it as Detergent C, and it is quite
probably one of the sources of these two compounds in the sewage sludge.
Three unsaturated straight chain compounds appear to be significant in these
samples. The last eluting of these compounds was easily identified by spectra matching as
squalene, which is a reasonable compound in sewage, but the other two had less obvious ionic
characteristics.
Based on the hypothesis that this might arise from some of the major fatty acids
in the sample, we deliberately decarboxylated several acids and analyzed their degradation
products by both MS and GC retention times. We conclude, as a result, that the first of these
unsaturated compounds is 8-heptadecene, arising from the decarboxylation of oleic acid. The
second is 6,9-heptadecadiene, arising from decarboxylation of linoleic acid, both of which acids
were very common in these studies.
401
-------
This is probably a general process, but we did not determine if the level of
heptadecane, presumably from decarboxylation of stearic acid, is higher than would be expected
from some of the other normal alkanes that were present in the samples.
The sterols/phytosterols represent a new class of compounds for us. They may not
be for some of the people in the audience. Their presence in the various samples is manifest,
but the precise identifications that we make are very tentative.
Many of you may have noted an instability in the mass spectrum of cholesterol
in your own mass spectrometers. This dehydration effect is but one of many in the determination
of some of the less common sterols. As a result, we are sure that sterols are present but much
less sure of which ones that we see may be merely due to the GC thermal rearrangement of
others.
GC is probably not the best way to pin down specific identification of this type.
It requires too high temperatures.
More than half of the 157 compounds found in this qualitative analysis were non-
target organics. Of the 15 regulated target compounds found, the only one that appeared to be
significant in all samples was the common plasticizer, bis-iso-octylphthalate.
As you know, that is also a very common interfering compound in a laboratory.
We were one of the first labs to spot the source of that as an interference, and I am quite sure
it was not an interference in our lab. It may or may not have been a contamination that was
introduced during the collection of the sample. I don't know.
The most common non-regulated target compound was nonylphenol, a compound
that has been reported to be less common in aerobic treated sludges than in anaerobic treated
ones, which we certainly agree with.
I feel regulation should be based on what is present rather than what is on lists.
Further analytical work is indicated in the field of sterol/phytosterol analysis in order to
characterize these late-eluting components of sludge.
At least four compounds in the sample are associated with soaps. This suggests
that it may be worthwhile to characterize the products of some of the large volume
manufacturers. In particular, it seems likely that some of the polynuclears found by one of the
contract laboratories (and only one) may have, in fact, been sterols which were beginning to elute
near the end of the GC run.
The source of both the chlorinated anilines and the chlorinated isocyanatobenzenes
needs to be determined to permit fuller characterization of sludges in the future.
402
-------
I thank Bill for the use of his EMI system in the course of determining whether
or not a compound is listed, and I thank him for once again for having me speak here. And
thank you all. Are there any questions?
403
-------
QUESTION AND ANSWER SESSION
MR. TELLIARD: Any questions?
MR. SULLIVAN: Richard Sullivan, Environment
Canada. Have you ever looked for any of the compounds related to birth control pills? I
understand that some biologists have reported the effects of these on minnows and similar
species.
MR. MCGUIRE: Richard, as soon as we saw
sterols coming in, we thought, "aha!" The answer is we looked, and we did not see any.
MR. KING: Jim King, DynCorp-Viar.
Dr. McGuire, did you make any attempt to review the raw data tapes from the contract labs to
check on that misidentification?
MR. MCGUIRE: I did not. I took the reports that
were included in, what was it, two years ago summary, and went through those line by line. I
did not go back to the raw data tapes. I hope that Bill has given you the correction to EMI that
I gave him the other day.
MR. CROWLEY: John, Ray Crowley, Millipore.
When you use these high-powered analytical techniques like your soap analysis, I mean, now that
you have got the CMA and the EPA working together, are you going to move more toward the
direction of finding out who is putting what into the sludge area? Let's say you call up P&G and
ask them what their favorite fragrances are rather than putting in a lot of work trying to
identify unknowns that you might otherwise know ahead of time?
MR. MCGUIRE: That sort of work doesn't fit into
our charter. Whether the regulatory offices in Washington will do that is something I can't address. I
don't know.
MR. CROWLEY: But doesn't that make your
information base more powerful, because you have...you are problem solving and you have better
information up front?
MR. MCGUIRE: Absolutely. Incidentally, in
conjunction with that, that is one of the approaches that my TIC identification task group that
I head up for Superfund does do but not under this program.
MR. TELLIARD: Thank you, John.
MR. MCGUIRE: Thank you, Bill.
404
-------
Table 1
Sample Summary

EPA Sample Number/Episode   Plant and Location                     Physical Appearance      Residue Reported by
                                                                   at Room Temperature      Contract Laboratory
16525/1362                  Rainbow MWD, Cal.                      Dry                      100%
16534/1367                  Fredericksburg STP, Va.                Solid and extruded       89.6%
16537/1430                  Burnham STP, Pa.                       Dirt                     66.0%
16542/1493                  Oakland WWTP, Kans.                    Granular                 83.4%
16544/1380                  Sioux Falls WWTP, S.D.                 Damp                     33.3%
16803/1544                  Hilton NWQTP, N.Y.                     Some water               20.3%
16819/1381                  Brookings STP, S.D.                    Some water               3.4%
16825/1443                  St. Joseph WWTP, Mo.                   Wet                      3.9%
16827/1411                  Port Clinton STP, Oh.                  Granular                 52.0%
16835/1389                  Lake Zurich NWSTP, Ill.                Some water               34.8%
16838/1454                  Egan WRP, Chicago, Ill.                Granular                 38.3%
16848/1399                  Wyoming WWTP, Grandville, Mich.        Some water               4.2%
17028/1475                  Weirton WWTF, W.Va.                    Some water               30.7%
17042/1509                  Garden City WPCP, Garden City, Ga.     Granular                 47.6%
17047/1439                  Cinnaminson STP, N.J.                  Wet                      10.1%
17055/1476*                 Corbin STP, Ky.                        Damp                     22.5%
17056/1476*                 Corbin STP, Ky.                        Damp                     24.5%
17059/1477                  No information                         Damp                     -
17087/1538                  Phoenix WWTP, Ariz.                    Granular                 70.5%
17131/1486                  Mason Farm WTP, Carrboro, N.C.         Mainly water             2.6%

* Duplicates
-------
Table 2
Compounds found by Contract Laboratories
Benz[a]pyrene++
Benz[b]fluoranthene++
Benz[k]fluoranthene++
Benzo[ghi]perylene++
Benzyl alcohol++
Bis-(iso-octyl) phthalate*
Bis-(n-Octyl) phthalate*
Butyl-benzyl phthalate*
Chloroaniline*
p-Cresol*
p-Cymene*
Diepoxybutane++
Even-carbon-#-alkanes from decane to
triacontane*
Fluoranthene*
Hexanoic acid*
Naphthalene*
Phenol*
Pyrene#
Squalene*
Terpineol++
Thiophenol++
* Presence was confirmed at least once in this study
++ Not confirmed & probably wrong
# Not confirmed due to high sample background
406
-------
o
•-4
Table 3
Compound Classes found in Sludge Study
ALDEHYDES
ALKANES (NONANE to TRIACONTANE)
ALKYL PHENOLS
AMINES
AROMATICS (Cl to C12 BENZENES)
CHLORINATED ANILINES and ISOCYANATOBENZENES
DECARBOXYLATION PRODUCTS of FATTY ACIDS
FATTY ACIDS
PERFUME INGREDIENTS and MUSKS
PHTHALATES
STEROLS and PHYTOSTEROLS
TETRALINS
THIANES and SULFUR
THIOLS
-------
Table 4
CARBOXYLIC ACIDS FOUND IN SLUDGES
Methylthiopropanoic acid
Benzeneacetic acid
Pentanoic acid
Hexanoic acid
Heptanoic acid
Octanoic acid
Nonanoic acid
Phenylpropanoic acid
Decanoic acid
Undecanoic acid
Isododecanoic acid
Anteisododecanoic acid
Dodecanoic acid
Isotridecanoic acid
Anteisotridecanoic acid
Tridecanoic acid
12-Methyltridecanoic acid (iso)
11-Methyltridecanoic acid (anteiso)
Tetradecanoic acid
x-Methyltetradecanoic acid
13-Methyltetradecanoic acid (iso)
12-Methyltetradecanoic acid (anteiso)
Octadecanoic acid (Stearic acid)
Arachidic acid
408
-------
Figure 5 Caryophyllene and Aromadendrene
409
-------
Figure 6 Versalide and Galaxolide Musks
410
-------
[Figure: Total ion chromatograms of Sludges 16525, 16825, 16835, and 17042; time axis 0:00 to 35:00]
-------
[Figure: Acids in Sludge 16525; scan axis 200 to 1400]
-------
[Figure: Aldehydes in Sludge 16525; scan axis 200 to 800]
-------
[Figure: Thiols in Sludge 16525; scan axis 200 to 800]
-------
MR. TELLIARD: Our next speaker is Ileana
Rhodes from Shell. She is going to speak on the determination of organochlorine
pesticides in soils.
MS. RHODES: Good morning. Can you hear me?
One of the nice things about changing the schedule is that I actually have an
audience. I didn't expect to have one, having one of the last papers of the day. I figured it was
going to be an empty house.
I am going to talk about determination of selected pesticides in soils. One of the
things you might wonder is what is an oil company doing with pesticides. Well, we used to
manufacture aldrin, dieldrin, endrin and so forth at the Rocky Mountain Arsenal from the '50s
to the '80s, and we continue working in remediation aspects of that area.
One thing that is going to become quite evident is that some people like myself
have two different kinds of functions. One of my functions is to work with our locations,
meaning refineries or chemical plants, in making sure that they are meeting their permits or they
are cleaning the areas they need to clean appropriately.
In that sense, I wear a regulatory hat, and I insist that all the contract laboratories
who do the work for us follow the methods as they are specified.
When it comes to the work I do in-house for our environmental engineers and
biodegradation experts, I do whatever is necessary to meet data quality objectives. The kind of
paper I am going to present to you today involves taking some EPA procedures and some
USATHAMA type procedures, because we have to work with the Army as well on this project,
and then making it meet the data quality objectives, in other words, going from A to B the
shortest way possible and still getting data that makes sense.
What I would like to show you today is the fact that you can put the two of them
together and get a lot better data with a lot less trouble.
Essentially, we took the EPA procedures/USATHAMA procedures...they are kind
of similar...and we took the best out of both of them and went on from there.
I am going to talk about the project requirements and goals. I have some very
specific goals in the project, and I wanted to support research. These were the people who are
doing different technologies, things like bioremediation. We all know that doesn't work very well
for chlorinated compounds, but we had to give it a try. Thermal desorption techniques, chemical
treatment such as solvent washes and things of that nature were also investigated.
415
-------
So, my goal was to support those experiments in-house. We developed a single
extraction technique, and I am going to talk about that some more.
We developed two different analytical methods. In other words, when we put the
sample into the instrument, we followed two procedures, a split procedure and a splitless
procedure to kind of accommodate the wide range of samples without having to do a lot of
sample manipulation.
Then we went on and evaluated the need to clean up the samples. These are
primarily soil samples, but the soil samples that I was getting were not only just field soil
samples. They were also soil samples mixed with cow manure, bacteria, fungi, algae. You name
it, we got them. And also brines and impinger fluids, all kinds of things.
So, when I say soil, it is not just soil. It is soil with a lot of extra stuff on it.
We need to compare the techniques for cleanup not only in terms of
chromatographic quality but also how well can we quantitate the sample without doing all the
cleanup steps.
Finally, I like to compare the short technique that we developed with the standard
approach done by contract laboratories.
Essentially, we had several goals for this project. One of them is that we were
going to get samples that vary from ppb up to percent levels of pesticides, primarily dieldrin, aldrin and so
forth. From now on, I am going to call them drins for short. And they came from sludges,
sediments from evaporation ponds, windblown soils, etc.
We have a variety of samples, and we have to do it as quickly as possible, because
there were critical path samples. In other words, the next experiment couldn't be planned until
we had the results from the previous experiment.
Our goals were to get from point A to point B, in other words, from extraction to
analysis without doing any dilutions if we didn't need to, without doing any cleanups if we didn't
need to, and so forth. Our solution was to come up with a single method that goes from point
A to point B with minimal effort.
One of the things that I want to bring up and will show up on every single slide
just about is data quality objectives. Whenever we support research, I always find out what is
it that we are going to use this data for. And that is what we home in on. This is the only thing
we need to do.
Not what should be done based on some procedure that was written 20 years ago
but what makes sense for our project.
416
-------
And I am going to talk about the development of a single extraction procedure
which is based on EPA and PMRMA methods.
PMRMA is Project Management Rocky Mountain Arsenal, and the methods are
similar to the EPA procedures, but they are more capillary oriented rather than the old packed
column, and the cleanup techniques are slightly different. Essentially, you take your soil sample,
you dry it out with sodium sulfate, you extract it three times, treat it always as if it were a low
level soil rather than a high level soil.
Most laboratories treat samples with a low level method. Most laboratories treat
the sample as if it were the last part per billion on earth. It is extracted three times and
concentrated and so forth.
Then you go through a series of extraction procedures. We follow the protocol
from the USATHAMA procedure which was alumina column cleanup followed by a sulfur
cleanup, water washing, et cetera, concentration, and then analysis.
We started taking steps one at a time, and essentially what we did is we extracted
a sample following the high level method. Just sonicate and then centrifuge, put it on a sample
vial, and went from there.
I would like to show you for purposes of this talk that it doesn't make any
difference what levels you got from parts per billion to percent level. The recoveries are the
same.
We take the soil sample and weigh 5 to 20 grams. Typically, we take about 10
grams of soil and 10 ml of solvent. We use acetone/hexane. That was the protocol in the
USATHAMA procedure that we used at the time, and we vortex the sample for 1 minute. Later
on, we found out that vortexing alone takes out more than 90 percent of the stuff that is
there in the first place.
Then we go ahead and sonicate using a horn sonicator for about 5 minutes. The
alternate procedure is that if we have a few samples and we want to do it in a hurry, we use the
sonicator. If we have a bunch of samples, more than ten or so, we just throw them in the shaker
overnight and pick them up the next day, and we are ready to go.
Then we go ahead and centrifuge the sample. We take the extract and put it in
another sample vial, and then we analyze it by GC/ECD. Our detection limits in the soil are 1
to 10 ppb. That is the PQL, not the MDL. The MDL is more like 0.25 ppb.
And a second column is used for confirmation if need be.
Now, most of the time we are working with samples that we knew were spiked
or contaminated, so confirmation was not necessary.
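A minimal sketch of the bookkeeping behind the "shake and shoot" numbers just described; the function name and default values are illustrative, not part of the talk.

```python
# With a 1:1 soil-to-solvent extraction, the extract concentration (ug/L) maps
# almost directly onto the soil concentration (ug/kg), so no concentration step
# is needed to reach the 1-10 ppb PQL mentioned above.

def soil_conc_ug_per_kg(extract_conc_ug_per_L, soil_g=10.0, solvent_mL=10.0, dilution=1.0):
    """Convert a GC/ECD extract concentration (ug/L) to a soil concentration (ug/kg)."""
    ug_in_extract = extract_conc_ug_per_L * (solvent_mL / 1000.0) * dilution
    return ug_in_extract / (soil_g / 1000.0)

# Example: 10 ug/L in the extract from 10 g of soil and 10 mL of 1:1 acetone/hexane
print(soil_conc_ug_per_kg(10.0))  # -> 10.0 ug/kg
```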
417
-------
Now, we have got an extraction procedure that is very simple, and one thing that
I want to bring up is the fact that a lot of times you are doing all these cleanup steps to get rid
of interferences. The more you manipulate the sample, the more you pass it through sep funnels,
the more you pass it through columns, the more chances you have got to get it exposed to
phthalates and things of that nature that you end up having to clean up for.
You saw that in some of the people that talked yesterday about mercury analysis
and the like, the fact that they thought a lot of the numbers and a lot of results that were reported
were mostly contamination rather than actually being present in the sample in the first place.
And I truly believe that a lot of the reason why you have to do cleanups in many
respects is because of the stuff you add to it while you are trying to clean up the sample and
handle it.
But we developed a split and a splitless analysis, and this is the same extract.
Remember, our goal was to try to do the least treatment to the sample. If it doesn't need to be
diluted, then don't dilute it.
So, we simply used Hewlett-Packard instruments, and we use a split/splitless liner,
and if the sample went higher than 10 ppb, we just simply shot it as it was. If we expected
lower levels, then we went ahead and did the splitless method.
This is kind of a summary of the two methods that we used. Essentially, the only
difference between the two methods is whether we used a splitter system or we don't use a
splitter.
What I call the high concentration method, which actually is a misnomer, is
not 10 ppb to 10,000 ppb but 10 ppb to 100,000 ppb. In this particular method, we simply use
an isothermal run of 20 minutes to try to minimize the time. All we did is we shook the sample,
extracted, injected. We were able to get our results in a very quick manner.
For the splitless method, we use temperature programming to try to use focusing
for the solvent, but that was the only difference between the two methods, and the difference was
10 minutes longer. That is just to be able to get down an order of magnitude lower.
We had two different columns, DB-17 which has some phenyl substitution, and
the DB-1701 was our confirmatory column. Even though the numbers are similar, the DB-1701
has cyanopropyl groups, so the polarity is quite different.
This is a typical series of chromatograms of different standards for the split
procedure. For the split procedure, we used heptachlor as an internal standard, and you see aldrin
through endrin there from 40 ppb to 4 ppm in the same run.
418
-------
This one is the splitless method in which you can see a 1 ppb standard and a 10
ppb standard. Since we do 1:1 extractions from the soil, we don't go through any concentration
step. Pretty much what you get in the extract is what you get in the soil for detection limits.
There is really no discrimination there. The peaks are wider towards the end. The
area counts are pretty much the same for all the compounds.
And what you see in the front is pretty much the solvent. The acetone portion,
the 1:1 acetone:hexane, the acetone does have a response in the ECD, but it is well away
from the area of interest.
Now, we have methods that we could use, very quick methods that we can extract
and shoot into an instrument and get our answers very quickly. Now, what we wanted to know
is do we need to do all the cleanup steps that we originally started doing, and what I am going
to do is I am going to show you data for spiked samples.
Now, remember, our spiked samples are not just clean soils. These things have
cow manure and wood rot fungi and you name it. So, they are not really that clean to
begin with. The soil was clean originally, but we pretty much dirtied it in a hurry by putting all
that in it.
I am going to show you data with some sludges or sediments and data from native
soils. What I would like to show you is that no matter what you do, quantitation doesn't get
compromised, and the chromatographic quality does not change, either.
This is the spiked samples. The people in the biodegradation group spiked soil
at about 30 ppm or so, and we went and sent those samples off for analysis using standard
Method 8080 with the cleanup steps associated with it, and then we did it our way by just a
simple shake and shoot procedure, and you can see that the numbers are just about the same
within soil variability and methodology.
And for soil samples, this is no mean feat, especially when you take a soil sample
that is 10 grams and a piece of it could be a fertilizer in these samples.
Then we wanted to try out some real samples, because we thought well, spiked
samples, even though we have junk added to it, they are not as realistic as real samples that have
been exposed to the pesticide for quite some time.
The only thing we had originally in-house was a sediment from an evaporation
pond. This particular material is very, very high with percent level of dieldrin and aldrin present.
There is diesel oil present in there, there was a lot of urea, salts, very briny, lots of copper in it.
So, it was a very, very nasty matrix.
419
-------
What we first did was just use a hexane extract, and as you can see...let's see if
this pointer will work...you can see that when you use hexane only, not much is extracted from
the sample. Aldrin is in blue, isodrin is red, and isodrin is pretty low in the sample. Dieldrin
is green and endrin is in red.
As you can see, not much is extracted with just hexane. If you go to
acetone:hexane, your extraction works out pretty well, and you pull out everything in there.
Then we go through a water washing step. Then we get to an alumina column
cleanup followed by just plain hexane. We wanted to see if we didn't add acetone to it we could
leave behind some of the polar species.
Well, you also leave behind your epoxide linkage kind of pesticides. For example,
you don't really pull out dieldrin very much, and you don't pull out endrin very much with just
hexane. So, we had to put, I believe it is, about 20 percent acetone back in to be able to pull
them out from the alumina column, and then we get recoveries similar to the very first run.
Finally, we went through copper shavings to get rid of sulfur, but essentially, we
got pretty much the same results, and this was a very, very nasty sample.
Next, this was a series of eight samples. There are four on the first slide and four
on the next one. What it is is that we took four random samples from the field. These are
contaminated at different levels from non-detect to 20 or so ppm.
The first column is aldrin, then isodrin, dieldrin, endrin. The last column was
blacked out. There is nothing sinister there. We didn't hide any compound or anything. That
was intended to be the sum of all of them, but when I made these overheads and I imported from
Lotus into Harvard, I had some columns misplaced, and I never got a chance to correct them.
So, I don't have anything hidden there. I don't have this 2,3,-dinitrobadstuff or anything hidden
in there, Bill.
MR. TELLIARD: Likely story.
MS. RHODES: So, what happened was that what
we did in here is we analyzed the sample the way that I have described here just by a simple
shake and shoot procedure. The next is a water wash. Then alumina which, allegedly, will get
rid of the phthalates and things of that nature, but we didn't have them in the first place, because
we didn't expose the sample to anything but just a clean vial.
One thing that I forgot to mention early on is one of the advantages of this method
is that we take a VOA vial or a mini VOA, what I call those half-vials, and we put the soil in
there, extracted that, and that is the only container we use. We sonicate in there, and then from
then on, we centrifuge in that container. We put it into another sample vial, and everything is
disposable.
420
-------
So, we don't have anything, any glassware to fool with, and that helps quite a bit
both in manpower savings and contamination and cross-contamination problems.
If one looks at one or two of these, they are essentially the same numbers, whether
you go through these cleanup steps or not.
This is the second set in here. Again, I don't expect you to read it, but if you want
to just kind of pick one of them, you can see pretty well that there is really no change in the
numbers as you go down the different cleanup steps.
This is a graphic presentation of the same thing. These are the eight samples.
Some of them are just simply too low to see, but what I want to point out is whether you are at
ppb levels or ppm levels, the simple extraction procedure, the so-called high level method, works
quite well for low levels as well.
So, you are really wasting your time to go through three extractions and then
having to concentrate, and then you might end up having to dilute your sample. The high level
method works quite well for low level as well, at least for soil samples for organochlorine
pesticides.
As you can see, it didn't make any difference what cleanup steps were used. The
concentrations came out pretty much about the same.
Now, the next thing you ask yourself is, what is it doing to my instrument? What
is it doing to your column, what is it doing to your system? What do your chromatograms look
like?
This is one of the chromatograms, and this is ppb levels of material present. The
front end of the chromatograms are the solvents. The top, the red line, is the as-is sample,
extract and shoot kind of thing. The next one is a water wash, and the last one is an alumina
column cleanup.
As you can see, all the other peaks don't really change. They are all still present.
You don't gain anything by going through column cleanup.
One of the things that you could get rid of are phthalates, but we didn't introduce
any, because we didn't manipulate the sample. So, we didn't have any kind of late eluting peaks
to worry about.
More of the same. This sample was also sulfur cleanup using copper shavings.
One thing that I want to point out that the top chromatogram is the as-is sample, the method that
we really pretty much settled on.
421
-------
The next one is the water wash. The next one down is the alumina column
cleanup, and the last one is the copper shavings cleanup for sulfur.
As you can see, all of them look the same. The only thing that I want to point
out is the last two are diluted by a factor of 2 by virtue of going through the cleanup steps. The
sample was diluted in those steps, and we didn't bother to reconcentrate it.
So, the peaks look a factor of 2 smaller simply due to dilution, but what I would
like to point out is that none of the peaks that were present in the non-cleanup sample have gone
away. They are all there. So, we really don't gain anything by doing any of the cleanup steps.
More of the same, a little bit more complex a sample. By the way, the
chromatograms are offset a bit so you can see the peaks, but, again, what I want to point out is
there is no change other than a factor of 2 dilution for the latter two. So, in other words, there
is no compromise in the chromatographic quality.
More of the same. Again, the peaks of interest are aldrin, isodrin, dieldrin, and
endrin. As you can see, the hash is in all of them. Cleanups don't help you a bit in the area of
interest.
Now, the last thing I would like to go over is a comparison. What I showed to
you is that cleanups don't make any difference in the chromatographic quality or the ability to
quantitate organochlorine pesticides in soils, in dirty soils and sediments.
What I would like to do is compare it to what results we are getting in-house to
the results you would get using the standard EPA procedures as provided by contract labs.
We took 20 samples and split them and sent some to the contract lab and some
stayed with us.
The first set would be what we call the high level method, the split method. The
next row would be the splitless method for those samples of below 10 ppb for a compound, and
the last row would be for each set is the contract laboratory results.
By and large, the numbers are quite comparable, as well as can be expected for
soils. This is shown quite clearly in a plot, and what this shows is a comparison
between the contract laboratory results and our results using the
procedures outlined.
What we did in this case is sum all the pesticides. In other words, we took the
four pesticides of interest and added them up rather than to do them individually for simplicity.
422
-------
The green line, or the 45-degree line, indicates complete agreement
between the contract laboratory results following standard EPA procedures and our approach.
If there were complete agreement, all the points would be lined up along the green line.
These are the Shell results, and these are the contract laboratory results. There is
a slight bias, and there are two outliers. On the two big samples, we disagree totally on the
levels. The low samples are shown in an expansion of everything below 7 ppm on this side
over here.
If you look at it, more or less, most of the results obtained or reported by the
contract laboratory fall below the 45-degree line. That indicates that the contract laboratory is
usually biased a little bit on the low side, and that has probably to do with all the sample
manipulations and dilutions and triple extractions and so forth.
But, essentially, the two methods are quite comparable.
This is a box-and-whisker plot, and for each set, there are two boxes.
The first box is an average of 6 Shell results using the methods that I have
described here. The second box is the average of three results provided by the contract
laboratory for each sample.
By and large, what you can see is that most of the two boxes kind of agree with
each other. That is all I wanted to point out in here. When you use a simple shake and shoot
procedure, it works just as well as using the long procedure and going through all the cleanup
steps.
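A rough sketch of the kind of comparison just described, using made-up paired values rather than the study data (numpy and matplotlib assumed); it is meant only to illustrate the 45-degree agreement check and the low bias noted for the contract laboratory.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative paired total-drin results in ppm (placeholders, not the study data)
shell_ppm    = np.array([0.2, 0.5, 1.1, 2.4, 3.0, 5.8, 12.0, 28.0])
contract_ppm = np.array([0.1, 0.4, 0.9, 2.1, 2.8, 5.2, 10.5, 24.0])

fig, ax = plt.subplots()
ax.scatter(shell_ppm, contract_ppm, label="soil samples")
lim = 1.05 * max(shell_ppm.max(), contract_ppm.max())
ax.plot([0.0, lim], [0.0, lim], label="45-degree (complete agreement) line")
ax.set_xlabel("Shell methods average results (ppm total drins)")
ax.set_ylabel("EPA method results, contract lab (ppm total drins)")
ax.legend()

# Points falling below the line mean the contract laboratory reported lower values,
# the slight low bias attributed above to extra sample handling and dilutions.
plt.show()
```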
Now, what are the advantages of the simple extraction procedure? One of them is
that we used everything disposable. We start out with VOA vials or the mini VOAs like I call
them, the 20 ml or so. When you are through, you throw everything away. The only thing I
have is a VOA vial and an autosampler vial, nothing else.
We eliminate all cross-contamination problems. We have minimal sample
handling. We have labor savings. We have no washing of glassware to do, no glassware to deal
with, really.
We have great space savings. No Kuderna-Danish to deal with. Nothing to deal
with. Just sonicate appropriately or shake and you are done.
Great space savings. We don't need a solvent exchange. We don't use methylene
chloride.
By the way, the contract laboratory results that you saw were using methylene
chloride as an extraction solvent. I use acetone/hexane, and the data shows that there is really
423
-------
no difference. If there is any difference, it is a little bit of bias on the negative side for the
contract labs.
We reduce the turnaround time. We can get a sample in and out in 25 minutes
or so, half an hour, if we need to.
We have no deleterious effects on GC columns or ECD detectors. One thing that
I didn't mention earlier is for the split procedure which we use an isothermal method, we run our
injector at 200 degrees to prevent degradation of endrin. The column is run isothermally at 215,
and the detector is at 350. Anything that gets into the column makes it through the detector, and
the detector is hotter, so there is no contamination. Everything is kept very clean.
We can say that this method is about a third cheaper than the standard methods,
and, of course, time savings go without saying.
As an example, in the first year that we worked on this project, from 1988
through June of 1989, we ran about 1500 samples in support of thermal degradation experiments,
biodegradation experiments, solvent washing, chemical treatments and so forth. We ran spiked
soils, we ran RMA soils, and we ran RMA sediments. That is Rocky Mountain Arsenal.
We ran spiked water. The method applies for water, too. What we use is the
hexane extraction of the water, and because we have such good detection limits, we only extract
100 ml of water with about 10 ml - 25 ml of solvent.
We even used water samples that were loaded with just nothing but all kinds of
algae and fungi and you name it when we were screening cultures to use in soils. We ran them
in impinger fluids from different desorption experiments, etc.
We ran 1500 samples in the first year. It says there we ran about 20 to 50
samples a month, depending on the load. It has sped up quite a bit recently, but those original
1300 samples or so were run on a single instrument with a single column. We have never
changed the column. The only thing we do is change the liner every so often, and that is about
all we do.
We have used the same extraction procedure for PNAs, we have used it for
lindane, DDT, PCBs, and the method lends itself quite well to field applications, because it is
so simple. This is one of the cases in which if you can go from A to B without jumping through
a lot of hoops, don't do it.
That is all I have to say.
424
-------
QUESTION AND ANSWER SESSION
MR. TELLIARD: Do we have any questions,
comments, suggestions?
MR. STANKO: We had papers back there, Bill,
but...
MR. TELLIARD: Oh, there are papers in the back
of the room?
MR. STANKO: They are gone.
MS. RHODES: They are gone. I expected a much
smaller audience, since I was going to be so late, so I only brought 20.
MR. TELLIARD: Oh, well, there are more than 20.
MR. STANKO: If you didn't get a copy of the
papers, mine or Ileana's, if you want, just leave a business card and we will make sure you get
one.
MR. TELLIARD: Thanks so much. Coffee break.
Come on back in and we will get going again in a few minutes.
(WHEREUPON, a brief recess was taken.)
425
-------
426
-------
DETERMINATION OF SELECTED ORGANOCHLORINE
PESTICIDES IN SOIL
ILEANA RHODES
SHELL DEVELOPMENT COMPANY
HOUSTON, TX
NORFOLK, VA
APRIL 1993
-------
DETERMINATION OF SELECTED ORGANOCHLORINE PESTICIDES
IN SOIL
PROJECT REQUIREMENTS AND GOALS
DEVELOPMENT OF SINGLE EXTRACTION PROCEDURE
DEVELOPMENT OF SPLIT AND SPLITLESS ANALYSES
- >10 µg/kg
- <10 µg/kg
• EVALUATION OF NEED FOR CLEANUP STEPS FOR SOIL EXTRACTS
USING SPIKED SOILS AND NATIVE SOILS
- QUANTITATION
- CHROMATOGRAPHIC QUALITY
• COMPARISON AND STATISTICAL EVALUATION OF RESULTS
OBTAINED USING THE SINGLE EXTRACTION / NO CLEANUPS /
GC-ECD PROCEDURES WITH RESULTS OBTAINED FROM STANDARD
EPA METHODS
-------
FLEXIBILITY IS NEEDED FOR THE DETERMINATION
OF DRINS BECAUSE...
• SAMPLES RANGE FROM PPB TO % LEVEL OF DRINS
• SAMPLES INCLUDE CLEAN SPIKED SOIL TO
HETEROGENEOUS SOLIDS TO WATER WITH
ORGANISMS TO SOUPY LIQUIDS
• ALL SAMPLES REQUIRE SOME SORT OF
EXTRACTION PROCEDURE PRIOR TO ANALYSIS
GOALS:
• MINIMIZE SAMPLE HANDLING STEPS IN THE
EXTRACTION PROCEDURE
• ELIMINATE CLEANUP STEPS
• MINIMIZE DILUTIONS OF EXTRACTS
• MAINTAIN FLEXIBILITY
SOLUTIONS:
• DEVELOPMENT AND VERIFICATION OF
EXTRACTION AND ANALYSIS PROCEDURES
-------
DETERMINATION OF SELECTED ORGANOCHLORINE PESTICIDES
IN SOIL
PROJECT REQUIREMENTS AND GOALS
/ DEVELOPMENT OF SINGLE EXTRACTION PROCEDURE
• DEVELOPMENT OF SPLIT AND SPLITLESS ANALYSES
- >10 µg/kg
- <10 µg/kg
• EVALUATION OF NEED FOR CLEANUP STEPS FOR SOIL EXTRACTS
USING SPIKED SOILS AND NATIVE SOILS
- QUANTITATION
- CHROMATOGRAPHIC QUALITY
• COMPARISON AND STATISTICAL EVALUATION OF RESULTS
OBTAINED USING THE SINGLE EXTRACTION / NO CLEANUPS /
GC-ECD PROCEDURES WITH RESULTS OBTAINED FROM STANDARD
EPA METHODS
-------
DETERMINATION OF ORGANOCHLORINE PESTICIDES
EPA/PMRMA
WEIGH SOIL
ADD EXTRACTION
SOLVENTS
SHAKE OR SONICATE (X3)
CENTRIFUGE
WATER WASH
EXTRACT/DRY
CONCENTRATE EXTRACT
ALUMINA COLUMN
CLEANUP
SULFUR CLEANUP
EXTRACT IN A/S VIALS
GC/ECD ANALYSIS
-------
DETERMINATION OF ORGANOCHLORINE PESTICIDES

EPA/PMRMA:
WEIGH SOIL -> ADD EXTRACTION SOLVENTS -> SHAKE OR SONICATE (X3) -> CENTRIFUGE ->
WATER WASH -> EXTRACT/DRY -> CONCENTRATE EXTRACT -> ALUMINA COLUMN CLEANUP ->
SULFUR CLEANUP -> EXTRACT IN A/S VIALS -> GC/ECD ANALYSIS

SHELL WRC:
WEIGH SOIL -> ADD EXTRACTION SOLVENTS -> SHAKE OR SONICATE ->
EXTRACT IN A/S VIALS -> GC/ECD ANALYSIS
-------
PROCEDURE FOR DETERMINATION OF ORGANOCHLORINE PESTICIDES IN SOIL

WEIGH SOIL (5-20 g) -> ADD 1:1 ACETONE:HEXANE (5-25 mL) -> VORTEX 1 min ->
SONICATE 5 min -> CENTRIFUGE 5 min -> TRANSFER EXTRACT TO A/S VIAL -> ANALYZE BY GC/ECD

ESTIMATED PQL: 1-10 PPB
A SECOND COLUMN IS USED FOR CONFIRMATION
-------
DETERMINATION OF SELECTED ORGANOCHLORINE PESTICIDES
IN SOIL
PROJECT REQUIREMENTS AND GOALS
DEVELOPMENT OF SINGLE EXTRACTION PROCEDURE
V DEVELOPMENT OF SPLIT AND SPLITLESS ANALYSES
- >10 µg/kg
- <10 µg/kg
• EVALUATION OF NEED FOR CLEANUP STEPS FOR SOIL EXTRACTS
USING SPIKED SOILS AND NATIVE SOILS
- QUANTITATION
- CHROMATOGRAPHIC QUALITY
• COMPARISON AND STATISTICAL EVALUATION OF RESULTS
OBTAINED USING THE SINGLE EXTRACTION / NO CLEANUPS /
GC-ECD PROCEDURES WITH RESULTS OBTAINED FROM STANDARD
EPA METHODS
-------
GAS CHROMATOGRAPHIC PROCEDURES
COLUMN:
1. DB-17, 30 m X 0.32 mm, 0.25 µm, J&W
2. DB-1701, 30 m X 0.32 mm, 0.25 µm, J&W
CARRIER GAS: He
MAKE-UP GAS: P-10
SAMPLE SIZE: 1-3 µL
DETECTOR: ECD
SPLIT METHOD
• ISOTHERMAL
• 10 - 10,000 µg/kg
SPLITLESS METHOD
• PROGRAMMED
• 1 - 1,000 µg/kg
-------
SPLIT INJECTION METHOD WITH INTERNAL STANDARD
CHROMATOGRAMS (GC-ECD) OF A SERIES OF STANDARDS FROM 0.04 TO 4 PPM EACH
[Figure: overlaid chromatograms with the heptachlor internal standard labeled; retention time axis 2.0 to 19.0 min]
-------
SPLITLESS INJECTION METHOD
CHROMATOGRAMS (GC-ECD) OF SELECTED STANDARDS
[Figure: 10 µg/L and 1 µg/L standards with aldrin and isodrin peaks labeled; time axis 15.0 to 25.0 minutes]
-------
DETERMINATION OF SELECTED ORGANOCHLORINE PESTICIDES
IN SOIL
PROJECT REQUIREMENTS AND GOALS
DEVELOPMENT OF SINGLE EXTRACTION PROCEDURE
DEVELOPMENT OF SPLIT AND SPLITLESS ANALYSES
V EVALUATION OF NEED FOR CLEANUP STEPS FOR SOIL
EXTRACTS USING SPIKED SOILS AND NATIVE SOILS
- QUANTITATION
- CHROMATOGRAPHIC QUALITY
• COMPARISON AND STATISTICAL EVALUATION OF RESULTS
OBTAINED USING THE SINGLE EXTRACTION / NO CLEANUPS /
GC-ECD PROCEDURES WITH RESULTS OBTAINED FROM STANDARD
EPA METHODS
-------
COMPARISON OF RESULTS OF SPIKED SOILS
SIMPLE PROCEDURE VS. EPA APPROACH

SAMPLE ID   EPA METHOD 8080 (PPM)   SHELL APPROACH (PPM)
1           35                      33
2           37                      38
3           35                      35
4           41                      38
5           35                      34
6           38                      39
7           36                      36
8           30                      29
9           32                      30
10          21                      20
-------
-------
RESULTS OF FIELD SOIL SAMPLES BEFORE AND AFTER CLEANUP STEPS

SAMPLE ID   EXTRACT     ALDRIN (PPB)   ISODRIN (PPB)   DIELDRIN (PPB)   ENDRIN (PPB)
A           NONE        180            11              1420             147
            WASHED      195            12              1430             142
            AL COLUMN   166            15              1450             150
            SULFUR      200            14              1400             150
B           NONE        359            15              2460             180
            WASHED      349            15              2450             177
            AL COLUMN   250            16              2500             180
            SULFUR      360            14              2625             194
C           NONE        1600           50              10000            600
            WASHED      1500           60              7800             700
            AL COLUMN   1800           19              9600             640
            SULFUR      1400           25              9700             680
D           NONE        26             1               160              8
            WASHED      20             1               140              8
            AL COLUMN   21             1               150              8
-------
RESULTS OF FIELD SOIL SAMPLES BEFORE AND AFTER CLEANUP STEPS (CONTINUED)

SAMPLE ID   EXTRACT     ALDRIN (PPB)   ISODRIN (PPB)   DIELDRIN (PPB)
E           NONE        <1             <1              2
            WASHED      <1             <1              2
            AL COLUMN   <1             <1              5
F           NONE        13             8               300
            WASHED      12             7               280
            AL COLUMN   13             11              280
            SULFUR      11             11              280
G           NONE        26             8               280
            WASHED      22             8               260
            AL COLUMN   19             11              260
            SULFUR      21             12              260
H           NONE        1040           47              23000
            WASHED      1060           55              23100
            AL COLUMN   1010           110             22600
            SULFUR      930            98              22200
-------
Results after cleanup steps for selected field samples
[Figure: bar chart of total drins (thousands, PPB) by soil sample ID for extracts with no cleanup, water wash, alumina column, and sulfur cleanup]
-------
SPLITLESS INJECTION METHOD
COMPARISON OF CHROMATOGRAMS OBTAINED BEFORE AND AFTER CLEANUP PROCEDURES
Selected Isometric Plots
[Figure: Sample 341 (0-6"); Aldrin: 22 ppb, Isodrin: 1 ppb, Dieldrin: 150 ppb, Endrin: 8 ppb]
From Top to Bottom: acetone/hexane extract, water washed extract, alumina cleanup extract
-------
SPLITLESS INJECTION METHOD
COMPARISON OF CHROMATOGRAMS OBTAINED BEFORE AND AFTER CLEANUP PROCEDURES
Selected Isometric Plots
[Figure: Sample 221 (0-6"); Aldrin: 200 ppb, Isodrin: 13 ppb, Dieldrin: 1400 ppb, Endrin: 150 ppb]
From Top to Bottom: acetone/hexane extract, water washed extract, alumina cleanup extract, sulfur cleanup extract. Note dilution factors.
-------
SPLITLESS INJECTION METHOD
COMPARISON OF CHROMATOGRAMS OBTAINED BEFORE AND AFTER CLEANUP PROCEDURES
Selected Isometric Plots
[Figure: Aldrin: 350 ppb, Isodrin: 15 ppb, Dieldrin: 2500 ppb, Endrin: 180 ppb]
From Top to Bottom: acetone/hexane extract, water washed extract, alumina cleanup extract, sulfur cleanup extract. Note dilution factors.
-------
SPLITLESS INJECTION METHOD
COMPARISON OF CHROMATOGRAMS OBTAINED BEFORE AND AFTER CLEANUP PROCEDURES
Selected Isometric Plots
[Figure: Aldrin: 22 ppb, Isodrin: 10 ppb, Dieldrin: 270 ppb, Endrin: 50 ppb]
From Top to Bottom: acetone/hexane extract, water washed extract, alumina cleanup extract, sulfur cleanup extract. Note dilution factors.
-------
DETERMINATION OF SELECTED ORGANOCHLORINE PESTICIDES
IN SOIL
PROJECT REQUIREMENTS AND GOALS
DEVELOPMENT OF SINGLE EXTRACTION PROCEDURE
DEVELOPMENT OF SPLIT AND SPLITLESS ANALYSES
- >10 µg/kg
- <10 µg/kg
• EVALUATION OF NEED FOR CLEANUP STEPS FOR SOIL
EXTRACTS USING SPIKED SOILS AND NATIVE SOILS
- QUANTITATION
- CHROMATOGRAPHIC QUALITY
V COMPARISON AND STATISTICAL EVALUATION OF RESULTS
OBTAINED USING THE SINGLE EXTRACTION / NO CLEANUPS /
GC-ECD PROCEDURES WITH RESULTS OBTAINED FROM
STANDARD EPA METHODS
-------
Summary of analyses of soil samples using Shell WRC methods
and analysis at a contract laboratory. Drins in ug/kg (ppb)

SAMPLE ID         METHOD   ALDRIN     ISODRIN    DIELDRIN   ENDRIN
1  121 (0-6")     HiSM     20         ND, <20    170        ND, <20
                  LoSM     18         1, <1      140        11
                  CLM      9          ND, <2     60         7
2  121 (6-12")    HiSM     ND, <20    ND, <20    ND, <20    ND, <20
                  LoSM     <1         ND, <1     2          ND, <1
                  CLM      3          ND, <2     3          3
3  221 (0-6")     HiSM     130        ND, <20    1020       140
                  LoSM     140        9          1400       140
                  CLM      140        ND, <10    850        110
4  222 (0-6")     HiSM     64         ND, <20    430        53
                  LoSM     77         5          420        44
                  CLM      70         ND, <4     400        60
5  242D (0-6")    HiSM     200        ND, <20    1810       150
                  LoSM     250        13         2500       160
                  CLM      200        ND, <20    1800       95
6  242D (6-12")   HiSM     ND, <20    ND, <20    65         ND, <20
                  LoSM     1          1          45         1
                  CLM      ND, <3     ND, <2     29         ND, <2
7  321 (0-6")     HiSM     3800       50         >5000      440
                  LoSM     1600       50         10000      600
                  CLM      5000       ND, <400   24000      ND, <500
8  341 (0-6")     HiSM     38         ND, <20    220        ND, <20
                  LoSM     27         1          160        9
                  CLM      21         ND, <2     160        18
9  341 (6-12")    HiSM     ND, <20    ND, <20    ND, <20    ND, <20
                  LoSM     <1, <1     ND, <1     2          ND, <1
                  CLM      ND, <3     ND, <2     ND, <2     ND, <2
10 421 (0-6")     HiSM     160        ND, <20    2200       360
                  LoSM     180        9          3300       430
                  CLM      180        42         2400       300

HiSM: RELATIVELY HIGH CONCENTRATION SHELL METHOD, SRC 9W37/89R
LoSM: TRACE SHELL METHOD, SRC 9W38/89R
CLM: CONTRACT LABORATORY
NOTE: EACH SET OF RESULTS OBTAINED FROM ANALYSIS OF INDEPENDENT SOIL EXTRACTIONS
449
-------
Summary of analyses of soil samples using Shell WRC methods
(CONT'D) and analysis at a contract laboratory. Drins in ug/kg (ppb)

SAMPLE ID         METHOD   ALDRIN     ISODRIN    DIELDRIN   ENDRIN
11 421 (6-12")    HiSM     32         ND, <20    560        90
                  LoSM     60         6          840        93
                  CLM      36         ND, <8     520        56
12 422 (0-6")     HiSM     460        25         2500       130
                  LoSM     600        10         3600       170
                  CLM      400        40         2200       74
13 422 (6-12")    HiSM     65         ND, <20    440        <20
                  LoSM     72         7          560        28
                  CLM      120        ND         480        12
14 423 (0-6")     HiSM     ND, <20    ND, <20    320        240
                  LoSM     10         5          290        200
                  CLM      8          ND, <2     200        130
15 423 (6-12")    HiSM     26         ND, <20    230        60
                  LoSM     13         7          290        170
                  CLM      ND, <3     ND, <2     78         46
16 423D (0-6")    HiSM     ND, <20    ND, <20    110        46
                  LoSM     4          3          62         30
                  CLM      7          ND, <4     200        110
17 423D (6-12")   HiSM     ND, <20    ND, <20    305        220
                  LoSM     26         7          270        60
                  CLM      ND, <3     ND, <2     55         26
18 442 (0-6")     HiSM     100        40         1660       200
                  LoSM     80         7          1500       110
                  CLM      63         ND, <8     780        74
19 531 (0-6")     HiSM     250        ND, <20    4100       850
                  LoSM     400        19         5800       800
                  CLM      280        ND         3800       600
20 531 (6-12")    HiSM     940        90         >11000     2310
                  LoSM     1000       50         23000      2900
                  CLM      970        ND         9900       1700

HiSM: RELATIVELY HIGH CONCENTRATION SHELL METHOD, SRC 9W37/89R
LoSM: TRACE SHELL METHOD, SRC 9W38/89R
CLM: CONTRACT LABORATORY
NOTE: EACH SET OF RESULTS OBTAINED FROM ANALYSIS OF INDEPENDENT SOIL EXTRACTIONS
450
-------
Correlation plots of results of analysis of soils using Shell WRC method and EPA method.
[Figure: DRINS IN SOIL: CONTRACT LAB/SHELL. EPA method results (contract lab) vs. Shell methods average results, PPM total drins (0-30 ppm), with the theoretical 45-degree line]
[Figure: DRINS IN SOIL: CONTRACT LAB/SHELL. Same comparison expanded for samples with <7 PPM total drins]
-------
BOX AND WHISKER PLOT
COMPARISON OF LOG OF SUM OF DRINS OBTAINED USING NO SAMPLE CLEANUP (SHELL)
AND WITH SAMPLE CLEANUP (CONTRACT LAB, EPA METHOD)
[Figure: paired box plots of the log of the sum of drins, by sample]
The first box plot of each pair summarizes 6 analyses done by Shell for each sample.
The second box summarizes the 3 analyses done by the Contract Lab for each sample.
-------
ADVANTAGES OF SIMPLE EXTRACTION PROCEDURE
• DISPOSABLE SAMPLE EXTRACTION VIALS (VOA's)
• ELIMINATION OF CROSS-CONTAMINATION PROBLEMS
• MINIMUM SAMPLE HANDLING
• LABOR SAVINGS: NO WASHING OF GLASSWARE
• LABORATORY SPACE SAVINGS
• NO SOLVENT EXCHANGE
• REDUCTION OF TURNAROUND TIME
• NO DELETERIOUS EFFECT TO GC COLUMNS OR ECD DETECTORS
• COST SAVINGS (~1/3 CHEAPER)
• TIME SAVINGS
-------
DETERMINATION OF "DRINS"
MARCH 88 TO JUNE 89
BIODEGRADATION, 739 (55%)
THERMAL & CHEMICAL
TREATMENTS, 539 (40%)
OTHER, 20 (1%)
THERMAL DESORPTION, 53 (4%)
- SPIKED SOILS
- RMA SOILS
- SEDIMENTS
- SPIKED WATER
- BRINY LIQUIDS
- IMPINGER FLUIDS
TOTAL NUMBER OF SAMPLES: 1351
-------
SHELL RESEARCH COMPLEX METHOD SERIES
SHELL DEVELOPMENT COMPANY
ENVIRONMENTAL ANALYSIS DEPARTMENT
Determination of
ALDRIN, ISODRIN, DIELDRIN AND ENDRIN IN SOIL BY SPLIT INJECTION
CAPILLARY GAS CHROMATOGRAPHY WITH ELECTRON CAPTURE DETECTION
Hazardous materials used in this
method are designated (*CAUTION*) in
the REAGENTS and PROCEDURE sections.
A current Material Safety Data Sheet
(MSDS) for each material so
designated, as well as for all other
chemicals and reagents, should be
reviewed before proceeding with this
method. Operations which may present
a hazard are given in the UNUSUAL
OPERATIONAL HAZARDS section.
SCOPE
1. This gas chromatographic method describes the determination of the
chlorocyclodiene pesticides Aldrin, Isodrin, Dieldrin and Endrin (collectively
referred to as "Drins" in this method) in soils. The method involves
extraction followed by analysis of the extract by capillary gas chromatography
with electron capture detection. The method has been applied to spiked soil
samples as well as contaminated soil samples in a range of concentrations from
0.01 µg/g (ppm) to 100 µg/g (ppm). Samples with percent concentrations of
"Drins" can be analyzed by this method after appropriate dilutions.
METHOD SUMMARY
2. A weighed portion of soil is extracted with a mixture of 1:1
acetone/hexane. The samples are placed in either a) a horizontal shaker for at
least 4 hours, or b) sonicated using a probe for 5-10 minutes for extraction.
The extracts are analyzed by capillary gas chromatography with the use of
electron capture detection (GC-ECD). Separation is done using a high
resolution fused silica capillary column with bonded methylphenyl-polysiloxane
phase (DB-17). The internal standard method of calibration is used. This
method was developed for the determination of "Drins" in laboratory spiked
soil samples to evaluate remediation technologies. The method has also been
applied to the analysis of moderately to highly contaminated soil samples.
UNUSUAL OPERATIONAL HAZARDS
3. Caution should be exercised in handling all samples containing the
type of toxic components being determined by this method. Handle all samples,
extracts and calibration solutions in a well ventilated hood and use
appropriate gloves for hand protection.
455
-------
4. Current health hazard data indicates that Aldrin and Dieldrin are
known carcinogens. Heptachlor is a suspected carcinogen. Presently, Isodrin
and Endrin are classified as highly hazardous based on compounds of similar
structure. It is recommended that all of these compounds be handled with equal
precautions.
5. The solvents used for extraction, acetone and hexane, are flammable.
INTERFERENCES
6. Any compound with the same chromatographic retention time as the
compounds of interest will interfere with the analysis. A second
chromatographic column with different polarity from the one used in this
method may be used for confirmation. Alternatively, gas chromatography/mass
spectrometry analysis of the samples may be carried out to confirm compound
identification.
7. Sample extracts may be screened prior to addition of internal
standard to ensure that there are no coeluting peaks in the sample with the
internal standard.
APPARATUS
8. (a) Gas chromatograph - Hewlett-Packard 5880a or equivalent,
equipped with an electron capture detector, capillary injector with glass
liner packed with Pyrex wool.
(b) Chromatographic Column - Fused silica capillary column, 30 m X
0.32 mm, methylphenyl-polysiloxane (DB-17) bonded phase of 0.25 µm film
thickness (J&W Corporation).
(c) Chromatographic data system - Capable of on-line electronic
integration of chromatographic data. VG Multichrom data system is used for
this method.
(d) Vials - VOA vials (40 ml) with caps with Teflon lined septa.
(e) Transfer pipettes, glass
(f) Autosampler - Hewlett-Packard 7673a Robotic Arm Autosampler or
equivalent.
(g) Autosampler vials
(h) Horizontal shaker - Eberbach Corporation or equivalent.
(i) Ultrasonic Processor - Heat Systems-Ultrasonics, Incorporated,
Model W-385 or equivalent.
(j) Centrifuge - IEC Centra-7, International Equipment Company.
(k) Balance - Capable of weighing to the nearest 0.1 mg.
456
-------
(1) Vortex Mixer - Maxi Mix 1, Thermolyne or equivalent.
REAGENTS
9. (a) Acetone (*CAUTION*). Baxter, Burdick & Jackson, B&J Brand™,
High Purity Solvent.
(b) Hexane (*CAUTION*) . Baxter, Burdick & Jackson, Hexane UV, High
Purity Solvent.
(c) Chromatographic grade air, hydrogen, and helium.
(d) Aldrin. (*CAUTION*). 99.0% purity, Chem Service.
(e) Isodrin. (*CAUTION*). 99.0% purity, Chem Service.
(f) Dieldrin. (*CAUTION*). 99.0% purity, Chem Service.
(g) Endrin. (*CAUTION*). 98.0% purity, Chem Service.
(h) Heptachlor. (*CAUTION*). 98.0% purity, Chem Service.
PROCEDURE
10. (a) Instrumental parameters
The gas chromatograph is operated according to the parameters
listed in Table 1. The VG Multichrom data system acquisition method file is
included in Appendix A.
(b) Standard preparations
(1) "Drins" standards - Standards are prepared by weighing out
appropriate amounts of each "Drin" and then diluting by volume with the
extraction solvent mixture (1:1 acetone/hexane). Typical concentration of the
stock solution is 100-1000 µg/mL (ppm). The stock solution is then used for
preparation of the calibration standards in the range of 0.01 to 10 µg/mL.
(2) Internal standard - The internal standard used in this
method is Heptachlor. A stock solution is prepared by weighing out the
appropriate amount of Heptachlor and diluting by volume with the extraction
solvent mixture (1:1 acetone/hexane) to obtain a 100-1000 µg/mL (ppm)
solution. This stock solution is diluted accordingly to obtain a working
internal standard solution of Heptachlor with a concentration level of 1 µg/mL.
Note 1: The concentration level of 1 µg/mL for the internal standard
solution was chosen because most of the samples for which the method was
developed were at this concentration level in solution. The concentration of
the internal standard solution should be in the same concentration range of
the samples for best results, thus it should be adjusted accordingly as
needed.
457
-------
Note 2: Sample extracts may be screened prior to addition of the
internal standard to ensure that there are no other compounds in the sample
that elute with the same retention time as Heptachlor.
(3) Calibration standards - Each standard level is mixed
(1:1) with the 1 µg/mL internal standard solution. Typically, 3 mL of each
"Drins" standard level is mixed with 3 mL of the internal standard solution.
Figure 1 shows a chromatogram of a nominal 1 µg/mL "Drins" standard with
internal standard.
Note 3: It should be noted that the actual concentrations of a
calibration standard are half of the indicated values since it is prepared by
mixing equal volumes of a "Drins" standard with the internal standard
solution. This dilution is not indicated because soil extracts are similarly
mixed with equal volumes of the internal standard solution and are diluted
also by half. Therefore, this dilution cancels out and is not indicated to the
data system.
(c) Sample preparation
(1) Weigh a portion of soil sample (1-20 grams) in a VOA vial.
(2) Add 10-20 ml of 1:1 acetone/hexane.
Note 4: For customer spiked samples where approximate concentrations of
the "Drins" are known, the soil weights and extraction volumes can be adjusted
accordingly to place most of the samples in the middle of the calibration
curve to minimize uncertainty in quantitation. For example, soil samples with
"Drins" concentration range of 0.01 to 1 /ig/g are extracted 1:1 (10-20 g of
soil with 10-20 ml of extractant). For soils ranging in "Drins" concentrations
from 0.1-20 /ig/g, 1-5 g of soil are extracted with 10 ml of extractant. For
soils ranging in "Drins" concentrations from 20-100 /ig/g, 1 g of soil is
extracted with 25 ml of extractant. The extracts can also be diluted as needed
for higher "Drins" concentrations. Homogeneity of the soil sample must also be
considered in selection of sample size. For visibly heterogeneous samples, at
least 5-10 g of soil may be needed with subsequent dilutions of the extracts
as needed.
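A hedged helper sketch following the ranges given in Note 4; the function name and the exact cutoffs are illustrative only and are not part of the published method.

```python
def suggest_prep(expected_drins_ug_per_g):
    """Suggest soil weight (g) and 1:1 acetone/hexane volume (mL) for extraction."""
    if expected_drins_ug_per_g <= 1.0:
        return {"soil_g": "10-20", "solvent_mL": "10-20"}   # roughly 1:1 extraction
    if expected_drins_ug_per_g <= 20.0:
        return {"soil_g": "1-5", "solvent_mL": "10"}
    if expected_drins_ug_per_g <= 100.0:
        return {"soil_g": "1", "solvent_mL": "25"}
    return {"soil_g": "1", "solvent_mL": "25", "note": "dilute extract as needed"}

print(suggest_prep(5.0))   # -> 1-5 g of soil with 10 mL of extractant
```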
Note 5: It is recommended that for samples containing "Drins" below 1
µg/g, the analysis be done using a splitless injection method documented
separately.
(3) Extract samples by either a) shaking for at least 4 hours
using a horizontal shaker, or b) sonication for 5-10 minutes.
(4) Centrifuge for 5 minutes at 2000 RPM if necessary.
(5) Take a portion of the extract and mix (1:1) with a portion
of the internal standard solution (Heptachlor, 1 µg/mL). Vortex mix briefly.
(6) Transfer to an autosampler vial and analyze by gas
chromatography using parameters listed in Table 1. Typical chromatograms are
shown in Figures 1-3.
458
-------
(d) Data collection - Data is collected by means of a level 4
Hewlett-Packard 5880A integrator and/or by VG Multichrom Data System. The data
system is used for calibration, quantitation and reporting of results.
Appendices A-C include data acquisition method, calibration and a sample
sequence files.
CALCULATIONS
11. Calculations are done using an internal standard method based on
peak areas. The VG Multichrom data system allows the operator to select from
several curve fit options. The operator can test different curve fits until an
appropriate fit of the data is obtained. Typically, linear fits as well as
polynomial fits (up to third order) with a correlation coefficient of 0.999 or
better are required. Visual display of the calibration curves is quite helpful
for inspection of curve fit. Figure 1 shows a typical analysis report for one
of the calibration standards. The calculated concentrations for the standards
are compared to the prepared concentrations to diagnose curve fit. Information
on sample weights, extraction volumes and dilutions (if any) are entered into
the VG data system before the samples are analyzed. The data system identifies
the presence of "Drins" by retention time, calculates the concentration of the
"Drins" in the extracts from the corresponding calibration curve, then applies
the necessary weight/volume corrections. Figure 2 shows a quantitation report
for a soil sample containing Aldrin and Dieldrin. Figure 3 shows a series of
isometric plots for analysis of "Drins" standards from "0.05-4/ig/mL typical
calibration curves for each "Drin". Figures 4-7 show the individual
calibration curves obtained from the analysis of these calibration standards.
Analysis reported in Figures 1-2 were quantitated using these calibration
curves. Figure 8 shows isometric plots of a series of Dieldrin standards.
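A minimal sketch (not the VG Multichrom software) of the internal-standard calibration just described: area ratio (drin/heptachlor) versus amount, fit with a third-order polynomial and checked with a coefficient of determination. The numbers below are illustrative, not the reported calibration data.

```python
import numpy as np

amounts = np.array([0.05, 0.1, 0.5, 1.0, 2.0, 4.0])       # ug/mL
ratios  = np.array([0.03, 0.07, 0.33, 0.66, 1.40, 3.10])  # area / area(heptachlor)

coeffs = np.polyfit(amounts, ratios, 3)       # cubic fit, highest power first
fitted = np.polyval(coeffs, amounts)
ss_res = float(np.sum((ratios - fitted) ** 2))
ss_tot = float(np.sum((ratios - ratios.mean()) ** 2))
r_squared = 1.0 - ss_res / ss_tot             # target: 0.999 or better
print(coeffs, r_squared)

# Reading a sample back off the curve: solve fitted_polynomial(x) = observed_ratio
observed_ratio = 0.90
shifted = coeffs.copy()
shifted[-1] -= observed_ratio                 # move the observed ratio into the constant term
roots = np.roots(shifted)
amount = [r.real for r in roots if abs(r.imag) < 1e-9 and 0.0 <= r.real <= amounts.max()]
print(amount)                                 # concentration (ug/mL) in the extract
```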
12. The VG Multichrom data system method for data acquisition and
calibration files are included in Appendices A-B.
13. Calculations can also be done by plotting the ratio of area counts
obtained from the analysis of a series of standards for each compound to the
area counts of the Heptachlor internal standard versus concentration of each
level of each "Drin". The concentration of Heptachlor is constant in both
calibration standards and samples and does not need to be taken into account.
The ratios of the area counts obtained for each compounds detected in the
sample to the area counts of the Heptachlor internal standard added to the
sample are then used to read off the calibration curve the corresponding
concentration of a given "Drin" in the sample. A calibration curve is
constructed for each "Drin". Once the concentration of the "Drin" in the
extract is determined, the concentration in the soil can be calculated as
follows:
Cone. "Drin" Extraction Dilution
Concentration in extract X Volume X Factor
of a "Drin" (/xg/mL) (ml) (If any)
in a soil sample =
(M9/9) Weight of soil sample (g)
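A small illustrative implementation of the calculation above; it is not part of the published method text, and the example values are only for demonstration.

```python
def drin_in_soil_ug_per_g(extract_conc_ug_per_mL, extraction_volume_mL,
                          soil_weight_g, dilution_factor=1.0):
    """Concentration of a "Drin" in a soil sample (ug/g)."""
    return (extract_conc_ug_per_mL * extraction_volume_mL * dilution_factor) / soil_weight_g

# Example: a 27.9 ug/mL aldrin reading in an extract from 10 g of soil extracted
# with 10 mL of 1:1 acetone/hexane, with no further dilution, is 27.9 ug/g in soil.
print(drin_in_soil_ug_per_g(27.9, 10.0, 10.0))  # -> 27.9
```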
459
-------
EVALUATION OF METHOD
14. The method was tested by spiking known amounts of "Drins" in clean
reference soil. Recoveries are 95% or better. The limit of detection is
estimated to be 0.01 µg/g. The limit of detection can be lowered by decreasing
the split ratio and increasing the sample injection size.
15. Standard EPA methods involve water washing of the acetone/hexane
extracts to remove polar compounds and acetone, drying of the extract with
sodium sulfate followed by clean-up through an activated alumina column.
Twenty contaminated soil samples were analyzed using the simple extraction
described in this method and the chromatograms were compared to those obtained
after each additional clean-up step. The results indicated that for these soil
samples, it is not necessary to perform these time consuming clean-up steps.
In addition, the method uses all disposable glassware and eliminates potential
cross-contamination.
REFERENCES
16. a) M. E. Wilcox and C. C. Chou, "Determination of Chlorocyclodiene
Insecticides in Soil by a Simplified Method", MRS January, 1988.
b) EPA Method 3550, "Sonication Extraction", Test Methods for
Evaluating Solid Wastes, SW 846, Vol. IB, 3rd. Edition, 1986.
c) EPA Method 8080, "Organochlorine Pesticides and PCBs", Test
Methods for Evaluating Solid Wastes. SW 846, Vol. IB, 3rd. Edition, 1986.
d) EPA Method 8270, "Gas Chromatography/Mass Spectrometry for
Semivolatile Organics: Capillary Column Technique", Test Methods for
Evaluating Solid Wastes, SW 846, Vol. IB, 3rd. Edition, 1986.
Westhollow Research Center
I. A. L. Rhodes
R. Z. Olvera
March 21, 1989
460
-------
Table 1
Instrumental Parameters

Gas Chromatograph:   Hewlett-Packard 5880a
Column:              J&W fused silica capillary column, 30 m X 0.32 mm ID,
                     0.25 µm film thickness (methylphenylsiloxane) DB-17
Carrier gas:         Helium, 10.5 psig
Make-up gas:         P-10 (10% Methane/Argon), 30 ml/min
Split Ratio:         100:1
Sample size:         1 µL
Injector:            200°C
Detector:            Electron Capture, 350°C
Column program:      215°C, isothermal. Hold for 20 minutes.
                     Post analysis bakeout to 265°C for 10 min.
Chart speed:         1 cm/min
Attenuation:         24
Threshold:           3
Peak width:          0.04
461
-------
Injection Report
Chromatogram: [GW-HW] 12 WRC232809,7,1
Acquired on 5-MAR-1989 at 12:35; reported on 17-MAR-1989 at 10:20
Sample Name: 1.00 PPM + I.S.    Sample Type: Standard    Amount = 1.00000    Bottle No: 7

PEAK INFORMATION
Peak   RT (mins)   Area (uVs)   Calculated PPM   Peak name    Prepared Concentration (ppm)
1      5.993       110147                        HEPTACHLOR   1.05
2      7.078       88310        1.16             ALDRIN       1.18
3      8.864       118897       1.60             ISODRIN      1.64
4      13.571      82175        1.06             DIELDRIN     1.08
5      16.762      103788       1.55             ENDRIN       1.57
Residual           0            N/A
Total              503317       5.37

[Figure: chromatogram of the calibration standard]

Figure 1: Chromatogram and Quantitation Report for a "DRINS" Calibration Standard.
462
-------
Injection Report
Chromatogram: [GW-HW] 12 WRC232809,18,1
Acquired on 5-MAR-1989 at 16:44; reported on 6-MAR-1989 at 10:21
Sample Name: LR 019432 - 8-3    Sample Id: 19396-37-4    Sample Amount = 1.00000    Bottle No: 18

PEAK INFORMATION
Peak   RT (mins)   Area (uVs)   PPM      Peak name
1      5.989       110931                HEPTACHLOR
3      7.076       85498        27.9     ALDRIN
5      13.569      47626        15.4     DIELDRIN
Residual           52974        N/A
Total              244056       43.442

[Figure: chromatogram of the soil sample extract]

Figure 2: Chromatogram and Quantitation Report for a Soil Sample Containing Aldrin and Dieldrin.
463
-------
Analysis View - Screen copy. Reported on 15-MAR-1989
Chromatogram: 12 WRC232809,7,1

[Isometric plots not reproduced. Labeled peak: HEPTACHLOR (INT. STD.).]

Figure 3: Isometric Plots of a Series of "DRINS" Standards from 0.04 to 4 µg/mL.
Heptachlor Internal Standard is 1.05 µg/mL.
-------
SHELL WRC ENVIRONMENTAL ANALYSIS VG SYSTEM
Calibration Name: 12 232809
CALIBRATION FOR DIELDRIN ANALYSIS BY SPLIT ISOTHERMAL GC
Peak: ALDRIN

[Calibration level plot (Amount in ppm) not reproduced]

Curve fit: Cubic
Constant: -0.02696    1st degree: 0.66145    2nd degree: 0.08446    3rd degree: -0.00643
Coeff of determination: 1.01032    Standard error: 0.12168

Figure 4
Reported on 15-MAR-1989
-------
SHELL WRC ENVIRONMENTAL ANALYSIS VG SYSTEM
Calibration Name: 12 232809
CALIBRATION FOR DIELDRIN ANALYSIS BY SPLIT ISOTHERMAL GC
Peak: ISODRIN

[Calibration level plot (Amount in ppm) not reproduced]

Curve fit: Cubic
Constant: -0.02830    1st degree: 0.61729    2nd degree: 0.07787    3rd degree: -0.00685
Coeff of determination: 1.00801    Standard error: 0.14373

Figure 5
Reported on 15-MAR-1989 at 12:48
-------
SHELL WRC ENVIRONMENTAL ANALYSIS VG SYSTEM
Calibration Name: 12 232809
CALIBRATION FOR DIELDRIN ANALYSIS BY SPLIT ISOTHERMAL GC
Peak: DIELDRIN

[Calibration level plot (Amount in ppm) not reproduced]

Curve fit: Cubic
Constant: -0.01784    1st degree: 0.75716    2nd degree: -0.00174    3rd degree: 0.00107
Coeff of determination: 1.00919    Standard error: 0.09109

Figure 6
Reported on 17-MAR-1989 at 13:51
-------
SHELL WRC ENVIRONMENTAL ANALYSIS VG SYSTEM
Calibration Name: 12 232809
CALIBRATION FOR DIELDRIN ANALYSIS BY SPLIT ISOTHERMAL GC
Peak: ENDRIN

[Calibration level plot (Amount in ppm) not reproduced]

Curve fit: Cubic
Constant: -0.03079    1st degree: 0.65087    2nd degree: 0.00431    3rd degree: -0.00012
Coeff of determination: 1.01248    Standard error: 0.12915

Figure 7
Reported on 15-MAR-1989 at 12:49
-------
SHELL WRC ENVIRONMENTAL ANALYSIS VG SYSTEM
Selected Isometric Plots

[Isometric plots not reproduced. Annotated peaks: Extraction Solvents; Heptachlor
(Int. Std.), 1.05 ppm; Dieldrin at 6.3, 4.2, 2.1, 1.05, 0.53, and 0.21 ppm.
Analyses: [GW-HW] 12 WRC227005, injections 4, 5, 6, 14, 15, and 16.]

Reported on 23-NOV-1988 at 12:48

Figure 3: Isometric Plots of a Series of Dieldrin Standards.
-------
SHELL RESEARCH COMPLEX METHOD SERIES
SHELL DEVELOPMENT COMPANY
ENVIRONMENTAL ANALYSIS DEPARTMENT
Determination of
TRACE AMOUNTS OF ALDRIN, ISODRIN, DIELDRIN AND ENDRIN IN SOIL BY SPLITLESS
INJECTION CAPILLARY GAS CHROMATOGRAPHY WITH ELECTRON CAPTURE DETECTION
Hazardous materials used in this
method are designated "(*CAUTION*)" in
the Reagents and Procedure sections. A
current Material Safety Data Sheet
(MSDS) for each material so
designated, as well as for all other
chemicals and reagents, should be
reviewed before proceeding with this
method. Operations which may present
a hazard are given in the UNUSUAL
OPERATIONAL HAZARDS section.
SCOPE
1. This gas chromatographic method describes the determination of the
chlorocyclodiene pesticides Aldrin, Isodrin, Dieldrin, and Endrin (collectively
referred to as "Drins" in this method) in soils. The method involves
extraction followed by analysis of the extract by capillary gas chromatography
with electron capture detection. The method has been applied to spiked soil
samples as well as contaminated soil samples in a range of concentrations from
0.001 µg/g to 1 µg/g. Soil samples with concentrations higher than 1 µg/g may
be analyzed by this method after dilution.
METHOD SUMMARY
2. A weighed portion of soil is extracted with a mixture of 1:1
acetone/hexane. The samples are placed in either a) a horizontal shaker for at
least 4 hours, or b) sonicated using a probe for 5-10 minutes for extraction.
The extracts are analyzed by capillary gas chromatography with the use of
electron capture detection (GC-ECD). Separation is done using a high
resolution fused silica capillary column with bonded methylphenyl-polysiloxane
phase (DB-17). The external standard method of calibration is used. This
method was developed for the determination of "Drins" in soil samples to
evaluate remediation technologies.
UNUSUAL OPERATIONAL HAZARDS
3. Caution should be exercised in handling all samples containing the
type of toxic components being determined by this method. Handle all samples,
extracts and calibration solutions in a well ventilated hood and use
appropriate gloves for hand protection.
4. Current health hazard data indicates that Aldrin and Dieldrin are
known carcinogens. Heptachlor is a suspected carcinogen. Presently, Isodrin
470
-------
and Endrin are classified as highly hazardous based on compounds of similar
structure. It is recommended that all of these compounds be handled with equal
precautions.
5. The solvents used for extraction, acetone and hexane, are flammable.
INTERFERENCES
6. Any compound with the same chromatographic retention time as the
compounds of interest will interfere with the analysis. A second
chromatographic column with different polarity from the one used in this
method may be used for confirmation. Alternatively, gas chromatography/mass
spectrometry analysis of the samples may be carried out to confirm compound
identification.
APPARATUS
7. (a) Gas chromatograph - Hewlett-Packard 5880a or equivalent,
equipped with an electron capture detector, capillary injector with glass
liner packed with Pyrex wool.
(b) Chromatographic Column - Fused silica capillary column, 30 m X
0.32 mm, methylphenyl-polysiloxane (DB-17) bonded phase of 0.25 µm film
thickness (J&W Corporation).
(c) Chromatographic data system - Capable of on-line electronic
integration of chromatographic data. VG Multichrom data system is used for
this method.
(d) Vials - VOA vials (40 mL) with Teflon-lined septum caps.
(e) Transfer pipettes, glass
(f) Autosampler - Hewlett-Packard 7673a Robotic Arm Autosampler or
equivalent.
(g) Autosampler vials
(h) Horizontal shaker - Eberbach Corporation or equivalent.
(i) Ultrasonic Processor - Heat Systems-Ultrasonics, Incorporated
Model W-385 or equivalent.
(j) Centrifuge - IEC Centra-7, International Equipment Company.
(k) Balance - Capable of weighing to the nearest 0.1 mg.
(l) Vortex Mixer - Maxi Mix 1, Thermolyne or equivalent.
REAGENTS
8. (a) Acetone (*CAUTION*). Baxter, Burdick & Jackson, B&J Brand™,
High Purity Solvent.
471
-------
(b) Hexane (*CAUTION*). Baxter, Burdick & Jackson, Hexane UV, High
Purity Solvent.
(c) Chromatographic grade air, hydrogen, and helium.
(d) Aldrin. (*CAUTION*). 99.0% purity, Chem Service.
(e) Isodrin. (*CAUTION*). 99.0% purity, Chem Service.
(f) Dieldrin. (*CAUTION*). 99.0% purity, Chem Service.
(g) Endrin. (*CAUTION*). 98.0% purity, Chem Service.
PROCEDURE
9. (a) Instrumental parameters
The gas chromatograph is operated according to the parameters
listed in Table 1. The VG Multichrom data system acquisition method file is
included in Appendix A.
(b) Standard preparations
Standards are prepared by weighing out appropriate amounts of
each "Drin" and then diluting by volume with the extraction solvent mixture
(1:1 acetone/hexane). Typical concentration of the stock solution is 100-1000
µg/mL (ppm). The stock solution is then used for preparation of the
calibration standards in the range of 1 µg/L to 1000 µg/L. Multilevel
calibration is essential since the electron capture detector response is often
nonlinear.
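As an aside, the dilution arithmetic for preparing such a multilevel series can be
sketched as follows; the working-solution concentration, target levels, and final volume
shown are illustrative assumptions, not values prescribed by this method.

    # Aliquot of a working solution needed for each calibration level (C1 x V1 = C2 x V2).
    # All values below are illustrative assumptions.

    working_ug_per_l = 10_000.0     # e.g., a 10 ug/mL working dilution of the stock
    final_volume_ml = 10.0          # final volume of each calibration standard
    levels_ug_per_l = [1, 10, 50, 100, 500, 1000]

    for target in levels_ug_per_l:
        aliquot_ul = target * final_volume_ml * 1000.0 / working_ug_per_l
        print(f"{target:>5} ug/L standard: add {aliquot_ul:.0f} uL, dilute to {final_volume_ml:.0f} mL")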
(c) Sample preparation
(1) Weigh a portion of soil sample (10-20 grams) in a VOA vial.
(2) Add 10-20 ml of 1:1 acetone/hexane.
(3) Extract samples by either a) shaking for at least 4 hours
using a horizontal shaker, or b) sonication for 5-10 minutes.
(4) Centrifuge for 5 minutes at 2000 RPM if necessary.
(5) Transfer to an autosampler vial and analyze by gas
chromatography using parameters listed in Table 1. Typical chromatograms are
shown in Figures 1-3.
(d) Data collection - Data is collected by means of a level 4
Hewlett-Packard 5880A integrator and/or by VG Multichrom Data System. The data
system is used for calibration, quantitation and reporting of results.
Appendices A-C include the data acquisition method file, a calibration file, and a sample
sequence file.
472
-------
CALCULATIONS
10. Calculations are done using external standard calibration based on
peak areas. The VG Multichrom data system allows the operator to select from
several curve fit options. The operator can test different curve fits until an
appropriate fit of the data is obtained. Typically, linear fits as well as
polynomial fits with a correlation coefficient of 0.999 or better are
required. Visual display of the calibration curves is quite helpful for
inspection of curve fit. Figure 1 shows a typical analysis report for one of
the calibration standards. The calculated concentrations for the standards are
compared to the prepared concentrations to diagnose curve fit. Information on
sample weights, extraction volumes and dilutions (if any) are entered into the
VG data system before the samples are analyzed. The data system identifies the
presence of "Drins" by retention time, calculates the concentration of the
"Drins" in the extracts from the corresponding calibration curve, then applies
the necessary weight/volume corrections. Figure 2 shows a quantitation report
for a soil sample. Figure 3 shows isometric plots obtained from the analysis
of "Drins" standards. Figures 4-7 show the individual calibration curves
obtained from the analysis of calibration standards in the range of 1-500
µg/L. The analyses reported in Figures 1-2 were quantitated using these
calibration curves.
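For readers who wish to reproduce the curve-fitting step outside the VG data system, a
minimal sketch using a cubic fit (as in Figures 4-7) is given below; the calibration points
and the sample area are illustrative numbers, not the data plotted in the figures.

    # Minimal sketch of polynomial calibration and back-calculation of an
    # extract concentration from its peak area. Data are illustrative only.
    import numpy as np

    amounts = np.array([10.0, 50.0, 100.0, 250.0, 500.0])    # ug/L
    areas   = np.array([1.4e5, 7.0e5, 1.4e6, 3.4e6, 6.6e6])  # area counts

    # Fit area = f(amount) with a cubic, as the data system does for Figures 4-7.
    coeffs = np.polyfit(amounts, areas, deg=3)

    def amount_from_area(area):
        """Solve f(amount) = area and return the root within the calibrated range."""
        c = coeffs.copy()
        c[-1] -= area                       # constant term of f(x) - area
        roots = np.roots(c)
        real = roots[np.isreal(roots)].real
        in_range = [r for r in real if 0.0 <= r <= amounts.max()]
        return min(in_range)

    sample_area = 9.0e5
    print(f"Extract concentration: {amount_from_area(sample_area):.1f} ug/L")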
Note 1: It is recommended that calibration be performed with standards
bracketing the samples in concentration. For example, if the sample extracts
range in concentration from not detected to 0.020 µg/L, then it is best to
include in the calibration curve standards covering a similar range (i.e.,
0.001-0.020 µg/L). Including all of the standards (0.001-1 µg/L) in the
calibration is not recommended; it is not needed and would only add to the
uncertainty of the measurement, because the electron capture detector response
is linear over only a limited concentration range.
11. The VG Multichrom data system method for data acquisition and
calibration files are included in Appendices A-B.
12. Calculations can also be done by plotting the area counts obtained
from the analysis of a series of standards for each compound versus the
respective "Drin" concentration at each standard level. The area counts
obtained for each compound detected in the sample are then used to read the
corresponding concentration of a given "Drin" in the sample from the
calibration curve. A calibration curve is constructed for each "Drin". Once
the concentration of the "Drin" in the extract is determined, the
concentration in the soil can be calculated as follows:
Cone. "Drin" Extraction Dilution
Concentration in extract X 4/olume X Factor
of a "Drin" (jig/L) (L) (If any)
in a soil sample =
(pg/g) Weight of soil sample (g)
473
-------
EVALUATION OF METHOD
13. The method was tested by spiking known amounts of "Drins" in clean
reference soil and with contaminated soil samples. Recoveries are 95% or
better. The limit of detection is estimated to be 0.001 µg/g.
14. Standard EPA methods involve water washing of the acetone/hexane
extracts to remove polar compounds and acetone, drying of the extract with
sodium sulfate followed by clean-up through an activated alumina column.
Twenty contaminated soil samples were analyzed using the simple extraction
described in this method and the chromatograms were compared to those obtained
after each additional clean-up step. The results indicated that for these soil
samples, it is not necessary to perform these time consuming clean-up steps.
In addition, the method uses all disposable glassware and eliminates potential
cross-contamination. Figure 8 shows the chromatograms of the acetone/hexane
extract of a contaminated soil sample. Also included are the chromatograms of
the same extract after water washing and after passing the water-washed
extract through activated alumina. There are no differences in the quality of
the chromatograms.
REFERENCES
15. (a) M. E. Wilcox and C. C. Chou, "Determination of Chlorocyclodiene
Insecticides in Soil by a Simplified Method", MRS January, 1988.
(b) EPA Method 3550, "Sonication Extraction", Test Methods for
Evaluating Solid Wastes, SW 846, Vol. IB, 3rd. Edition, 1986.
(c) EPA Method 8080, "Organochlorine Pesticides and PCBs", Test
Methods for Evaluating Solid Wastes, SW 846, Vol. IB, 3rd. Edition, 1986.
(d) EPA Method 8270, "Gas Chromatography/Mass Spectrometry for
Semivolatile Organics: Capillary Column Technique", Test Methods for
Evaluating Solid Wastes, SW 846, Vol. IB, 3rd. Edition, 1986.
Westhollow Research Center
I. A. L. Rhodes
R. Z. Olvera
T. E. Vipond
M. E. Wilcox
March 21, 1989
474
-------
                                    Table 1
                            Instrumental Parameters

Gas Chromatograph:   Hewlett-Packard 5880a
Column:              J&W, fused silica capillary column, 30 m X 0.32 mm ID,
                     0.25 µm film thickness (methylphenylsiloxane) DB-17
Carrier gas:         Helium, 10.5 psig
Make-up gas:         P-10 (10% Methane in Argon) @ 30 mL/min
Split Ratio:         Splitless. Then, after 0.5 min, 80 mL/min.
Sample size:         1 µL
Injector:            200°C
Detector:            Electron Capture, 350°C
Column program:      60°C, hold for 0.5 min; program at 20°C/min to 215°C;
                     hold for 30 min at 215°C. Post-analysis bakeout to
                     265°C for 10 min.
Chart speed:         1 cm/min
Attenuation:         24
Threshold:           3
Peak width:          0.04
475
-------
[GW-HW] 12 LR19396-29B,20,1
Injection Report
Acquired on 2-MAR-1989 at 04:32; reported on 21-MAR-1989 at 10:32

Sample Name: 10 PPB 4 DRINS   Sample Type: Standard   Amount = 1.00000   Bottle No: 20

[Chromatogram not reproduced]

PEAK INFORMATION
                       Calculated                  Prepared
Peak   RT (min)        (ppb)         Peak name     concentration (ppb)
  2     13.831         [not legible] ALDRIN        [not legible]
  5     15.756          14.7         ISODRIN        16.4
  8     20.738          10.4         DIELDRIN       10.8
 10     24.053          15.1         ENDRIN         15.7
Total                   51.4
[Area and peak-width columns not legible in the source.]

Figure 1: Chromatogram and Quantitation Report for a "DRINS" Calibration Standard.
476
-------
[GW-HW] 12 LR19396-29B,3,1
Injection Report
Acquired on 1-MAR-1989 at 11:42; reported on 21-MAR-1989 at 13:41

[Chromatogram not reproduced]
-------
SHELL WRC ENVIRONMENTAL ANALYSIS VG SYSTEM
Analysis Name: [GW-HW] 12 LR19396-24,19,1
DETERMINATION OF DRINS IN SOIL EXTRACTS BY SPLITLESS GC-ECD

[Isometric plots not reproduced. Labeled peaks: Aldrin, Isodrin, Dieldrin, Endrin.
Time axis: 0-35 minutes.]

Instrument Channel #12    Method: DRIN    Calibration: DRIN    Run Sequence: DRIN
Acquired on 18-FEB-1989 at 05:00; reported on 16-MAR-1989

Figure 3: Isometric Plots
-------
SHELL WRC ENVIRONMENTAL ANALYSIS VG SYSTEM
Calibration Name: 12 DRIN
DETERMINATION OF DRINS IN SOIL EXTRACTS BY SPLITLESS GC-ECD
Peak: ALDRIN

[Calibration level plot (Amount in ppb) not reproduced]

Curve fit: Quadratic
Constant: 7.79825E+2    1st degree: 1.38161E+4    2nd degree: -6.76714
Coeff of determination: 0.99992    Standard error: 3.09208E+4

Figure 4
Reported on 21-MAR-1989 at 08:52
-------
SHELL WRC ENVIRONMENTAL ANALYSIS VG SYSTEM
Calibration Name: 12 DRIN
DETERMINATION OF DRINS IN SOIL EXTRACTS BY SPLITLESS GC-ECD
Peak: ISODRIN

[Calibration level plot (Amount in ppb) not reproduced]

Curve fit: Quadratic
Constant: -4.222208E+4    1st degree: 1.23152E+4    2nd degree: -4.71631
Coeff of determination: 0.99989    Standard error: 4.18238E+4

Figure 5
Reported on 21-MAR-1989 at 08:52
-------
SHELL WRC ENVIRONMENTAL ANALYSIS VG SYSTEM
Calibration Name: 12 DRIN
DETERMINATION OF DRINS IN SOIL EXTRACTS BY SPLITLESS GC-ECD
Peak: DIELDRIN

[Calibration level plot (Amount in ppb) not reproduced]

Curve fit: Quadratic
Constant: -1.780815E+4    1st degree: 1.02786E+4    2nd degree: 2.68141
Coeff of determination: 0.99995    Standard error: 2.56285E+4

Figure 6
Reported on 21-MAR-1989 at 08:53
-------
SHELL WRC ENVIRONMENTAL ANALYSIS VG SYSTEM
Calibration Name: 12 DRIN
DETERMINATION OF DRINS IN SOIL EXTRACTS BY SPLITLESS GC-ECD
Peak: ENDRIN

[Calibration level plot (Amount in ppb) not reproduced]

Curve fit: Quadratic
Constant: -1.886063E+4    1st degree: 7.96342E+3    2nd degree: 0.95070
Coeff of determination: 0.99996    Standard error: 2.50020E+4

Figure 7
Reported on 21-MAR-1989 at 08:54
-------
SHELL WRC ENVIRONMENTAL ANALYSIS VG SYSTEM
Analysis Name: [GW-HW] 12 LR19396-24,3,2
DETERMINATION OF DRINS IN SOIL EXTRACTS BY SPLITLESS GC-ECD

[Overlaid chromatograms not reproduced. Three traces are shown:
 1. Acetone/hexane extract
 2. Water-washed extract
 3. Water-washed extract passed through alumina column
Labeled peaks: Aldrin, 0.04 µg/g; Isodrin, 0.02 µg/g; Dieldrin, 0.20 µg/g;
Endrin, 0.005 µg/g. Time axis: 0-35 minutes.]

Instrument Channel #12    Method: DRIN    Calibration: DRIN    Run Sequence: DRIN
Acquired on 16-FEB-1989 at 22:59; reported on 17-FEB-1989 at 12:31

Figure 8: Comparison of Chromatograms Obtained After Sample Cleanup Procedures.
-------
484
-------
MR. MCCARTY: The one thing this meeting lacks
is an official T-shirt, so we have chosen a design which, for those of you who were here
Tuesday, there is a lovely picture of Bill out here sampling with a caption that reads, "Hi, I am
from EPA, and I am here to help."
Any of those who are interested, please leave your card or give your name to
Tony. We have another 200 on order. This is the prototype, but we thought that...yeah, it will
fit, just barely.
MR. TELLIARD: Thanks, Harry.
MR. MCCARTY: The original slide is in a safe
deposit box with instructions to be delivered to The Washington Post if I don't make it home this
year.
MR. TELLIARD: You know, the last T-shirt went
to Bob Metz for...what was that one? That was for 304(h), yeah. I guess that puts me in the
same old category? Yeah, I know.
The next set of speakers are going to be talking basically about matrix
interferences, problems, and issues.
The Office of Water has put out a guidance package that is presently in circulation
dealing with matrix problems and some suggestions on both corrective actions that could be taken
as well as an outline for the user community as to what type of information and data to collect
for supporting their issues or claims that there is a matrix problem. Many of the matrix problems
that the Agency is facing stem from the fact that many of the dischargers send their samples out,
they come back, the detection limit can't be met, and, with no supporting data, they then solicit the
Agency for some sort of waiver, and, of course, we have a number of these that we can't
make any real judgment on.
So, the document does lay out a check sheet of things that either you, as
laboratories, should make available to your clients or you, as clients, should get from your
laboratories. These are floating around the back. If we run out, we can send more out, and they
are available.
Following along that line, Zhouyao Zhang from the Chemistry Department at the
University of Waterloo is going to discuss that now. Thank you.
485
-------
486
-------
MR. ZHANG: Good morning. Before my
presentation, I would like to thank my collaborator, Miss Karen Buchholz, and my supervisor,
Dr. Pawliszyn, for their contribution to this work.
I would also like to thank Supelco, Varian, and the Natural Science and
Engineering Research Council of Canada for the financial support.
In this presentation, I will introduce to you a new extraction technique called solid
phase microextraction or SPME. I will also discuss what SPME is and how it works.
The main topic will be focused on how we can deal with matrix interferences
in SPME. There are mainly two types of matrix interferences. One is that when you extract the
targeted organic compounds, you also extract hundreds or sometimes even thousands of non-
targeted organic compounds, which will cause interference in your analysis.
The other major interference arises when the matrix absorbs the analytes very
strongly, so you cannot get a very good recovery, and that will affect accurate quantitation.
I will also discuss briefly how we can use some matrix effects to our advantage.
Solid phase microextraction or SPME uses a fiber coated with a polymeric coating.
This coating here has a strong affinity toward organic compounds. So, the analytes will partition
between the sample matrix and the coating.
Due to the strong affinity of the coating toward organic compounds, the organic
compounds will be absorbed by the coating, and then the fiber is directly transferred into the
GC injector for thermal desorption and analysis.
The extraction process of SPME is very simple and time efficient and also can be
easily automated. Another very important element in our SPME is that it completely eliminates
the solvent both for the extraction and the injection.
In the SPME device, the fiber is connected with a stainless steel tubing to increase
the fibre's mechanical strength. This assembly is contained in this syringe for easy operation.
When you do the SPME sampling, you first withdraw the fiber into the syringe
needle, and the syringe needle is used to punch through the sample vial septum. Then, the fiber
is lowered into the vial by pressing down the plunger so the analytes will be absorbed by the
fiber coating through the partitioning process.
So, as you can see, in the SPME process, we combine the extraction, the
concentration (because of the strong affinity of the fiber coating), and the injection into a single
process.
487
-------
There are two ways to do the SPME extraction. One way is direct sampling. That
is, you put the fiber into a liquid sample such as an aqueous sample. The analytes in the aqueous
matrix will be partitioned between the fiber coating and the aqueous matrix. The amount of the
analytes absorbed by the coating will be directly related to the volume of the coating and the
partition coefficients of the analytes between the coating and the aqueous phase.
Also, these parameters can be easily modified through the design of the fiber.
This direct sampling has been working very successfully in analyzing volatile
organic compounds from aqueous samples. We have published more than a dozen papers on this
subject.
But direct sampling has its limitations. The main limitation will be you cannot
use the direct sampling to extract analytes from a solid or very complex matrix, because the fiber
is very fragile and could easily be broken.
Another way to do the SPME sampling would be to sample the analytes from the
headspace above the sample matrix. In this way, we can sample the analytes virtually from any
matrix, because the fiber is not in contact with the sample matrix.
Sampling analytes from the headspace, as we all know, is not a new idea. It has
been used very extensively to extract volatile components from food or beverage or volatile
pollutants from the environmental samples.
But the conventional headspace method has its problems. One problem is that a
portion of the headspace gas is injected into the GC column. So, some of the oxygen and the
moisture will get into the column. It will degrade your stationary phase and shorten the column's
lifetime, or you will need some kind of device to remove the oxygen or moisture, which makes your
whole system very complicated.
Another thing is that conventional headspace does not have a concentration effect.
So, only volatile compounds can be sampled, and the sensitivity is usually low.
But headspace SPME is able to eliminate these
problems. We use a fiber coating which is hydrophobic, and it has a very high affinity toward
the organic compounds.
Due to these two properties, the headspace SPME can extract trace target
compounds but not moisture and oxygen.
This is an example of headspace SPME sampling. This is an extraction time
profile. We can see that the system equilibrates in about 2 minutes. So, practically, the
sampling time will be 2 minutes, because sampling for 2 minutes or 10 minutes will not make a
difference. The sampling is very efficient.
488
-------
Another thing is that it also has very good sensitivity. For example, for 1 ppm
o-xylene, more than 50 ng of analyte has been absorbed by the fiber coating and
then transferred into the GC column. So, if we use MS as a detector, it is capable of detecting
pg amounts of analytes, so we can detect parts-per-trillion levels of analytes in the matrix.
This is 0.1 ppm PAH in an aqueous sample, sampled from the headspace at room
temperature. We can see the naphthalene and anthracene each equilibrate in less than 10
minutes, but for phenanthrene and chrysene, it will take quite a long time to reach equilibration,
but between 20 and 30 minutes, a substantial amount of analyte has already been absorbed by
the fiber coating.
So, if we use some kind of timing device or use isotopic labeled compounds as
internal standards, we can still very efficiently quantitate these analytes.
This is 1 ppm BTEX in a soil matrix. This is in sand. It was also sampled from
the headspace at a moderate temperature, about 50 degrees. And again the sampling is very
efficient. The sampling time is less than 1 minute, and the sensitivity is also very good. For
o-xylene, more than 50 ng has been absorbed by the fiber coating.
This is a 40 ppb PAH in sand being sampled by headspace SPME at room
temperature.
This is several chlorinated organic compounds in three matrices at 5 ppb level
sampled from the headspace. We can see good detection limit and precision. Most precisions
are below 10 percent.
This is an example of using the headspace SPME to analyze a real PAH
contaminated soil sample. We used isotopic labeled PAH as an internal standard. We were able
to quantitate 0.23 ppm benz[a]anthracene and the chrysene.
So, these results clearly show that headspace SPME can isolate volatile and
semivolatile compounds from various matrices, but if the matrix absorbs analytes very strongly,
the amount of analyte that exists in the headspace will be low, and that will affect the sensitivity of
this headspace SPME method.
But there are ways to solve these matrix problems. One simple way is to sample
the analytes at a higher temperature.
By heating the sample, we speed up the mass transfer of the analytes and release
more analytes into the headspace. At the higher temperature, we also increase the volatility of
the analytes. Both of these effects will increase the amount of analytes in the headspace and
increase their concentrations there. Thus, we can improve the sensitivity of the
headspace SPME method.
489
-------
This is an example of the temperature dependence of 1 ppm BTEX in soil. At
room temperature, the sensitivity is quite low. As we increase the temperature, we can see the
sensitivity improve quite significantly.
When we further increase the temperature, the sensitivity starts to decrease. The
reason for that is while you increase the temperature, you increase the concentration in the
headspace, but at the higher temperature, the fiber coating is also starting to desorb some of the
analytes. So, you have a competing process.
So, you need to be careful to choose an optimum temperature.
This is the same for the clay matrix. At about 50 degrees, you achieve the
optimum sensitivity.
But even at the optimum sensitivity, we can see the amount of the analyte
absorbed by the coating is still low if we compare it to the aqueous example.
The problem, as I mentioned, is because the coating starts to desorb some of the
analytes at high temperature. So, if we can heat the sample matrix to a higher temperature while
keeping the fiber coating at a reasonable temperature, it is going to improve the sensitivity.
We designed an experiment called the heat-cooling experiment. What we did is
we heated the sample to 140 degrees centigrade, then put it into a zero degree ice water bath, and
immediately put the fiber into the sample, and do an extraction time profile.
You can see at about 5 seconds, we achieved a maximum sensitivity. The point
here is the sensitivity at the optimum temperature. So, you see a dramatic improvement in
sensitivity.
This is the temperature dependence of the PAHs in sand. You can see for the
relatively volatile PAHs, there was an optimum temperature at about 50 degrees before the
increased temperature decreases the sensitivity, but for the less volatile PAHs like phenanthrene,
chrysene, and perylene, increased temperature will increase the sensitivity.
This effect is very important, because we can use this effect to do some kind of
temperature differentiation. We can sample a volatile compound at a lower temperature while
sampling less volatile ones at a higher temperature. So, we can reduce the interference between
compounds.
Temperature is not the only means we can use to solve this matrix problem. One
very important advantage of the SPME technique is that we can not only improve our extraction
efficiency by changing the extraction medium, we can also actually modify the matrix.
490
-------
As we all know, in the Soxhlet or in the supercritical fluid extraction, what we can
do is to change the solvent or add a modifier. In other words, changing the extraction media.
We can do the same for the headspace SPME by changing the coating. We can use a more
selective coating to extract a certain category of analytes and use another coating to extract
another category of analytes.
For example, the polydimethylsiloxane coating can be used very successfully to
extract BTEX and PAHs, but polydimethylsiloxane cannot extract phenols very well. If we use
a polyacrylate coating to extract phenols, it works very well.
So, coating selectivity is also a way to reduce the interference from non-targeted
compounds.
Besides this coating selection, we can actually modify the matrix. This is an
example of 1 ppm BTEX in clay. It is also a comparison. It is a very interesting experiment.
At room temperature, you can see the sensitivity is very low. At the optimum
temperature of 50 degrees Centigrade, the sensitivity increases. Then we add 10 percent of water
into the clay matrix, and you can see the further improvement of the sensitivity. With 30 percent
of water, the sensitivity is the same as that with 10 percent of water.
When you further increase the water to 50 percent, (50 percent means you
basically cover all the matrix with water), the sensitivity starts to decrease.
This is another example, the PAH in the sand. At 50 degrees, we cannot detect
chrysene. With 15 percent of water, now we can detect chrysene. At 100 degrees, we cannot
detect perylene. By adding 15 percent of water, we now can detect perylene at 100 degrees.
For the soil # 7, (this soil # 7 has 30 percent of sand, 30 percent of clay, and 4
percent of organic carbon content), you can see the improvement of the limit of detection by
adding some water here. Also, you can see the improvement of the limit of detection in clay
sample as well.
BTEX in all four matrices have a limit of detection at the ppt level.
Increasing temperature and adding water works well for the solid matrix. So, what
about the aqueous matrix? Well, we can modify the aqueous matrix as well.
One simple modification is the use of a very old technique: salting out.
This is 1 ppm BTEX in aqueous matrix. The extraction was done from the
headspace. The white bar is BTEX extracted from the water without salt. The black bar is
491
-------
BTEX extracted from water with saturated salt. You can see the significant improvement when
salt was added.
We can also modify the matrix pH values in the aqueous sample. This is
extraction of phenols. This is direct sampling, that is, you put the fiber into an aqueous sample,
not sampling from the headspace. The white bar is for our control sample, pH of 7, no salt. The
black bar is for pH=2 and saturated with salt. You can see a dramatic increase of sensitivity for
all the phenols.
The coating used here, as I mentioned, is polyacrylate.
With the help of pH and salt, we can actually sample the phenols from the
headspace. More interestingly, some of the phenols sampled from the headspace here have got
better sensitivity.
This is another interesting experiment. We spiked some phenols into a sewage
sample, then do the sampling, direct sampling. Some of the phenols have very bad recovery.
So, we saturate this sewage sample with salt and adjust the pH to 2. Then we can
achieve a very good recovery.
So, what does this experiment tell us? If we are analyzing phenols from various
aqueous samples like river water or waste water or drinking water, what we can do is adjust
the pH and salt it out to basically overwhelm the matrix or, in other words, normalize the
matrix, so you can achieve a very consistent recovery for all the aqueous samples.
In summary, by sampling analytes from the headspace, we extend the SPME
technique to sample very complex matrices and soil matrices. Through temperature and
matrix modification, we can help to release analytes from matrices that absorb analytes very
strongly.
We can also use some kind of matrix normalization to achieve a more consistent
recovery and better precision, or we can, through a temperature differentiation and the use of
coating selectivity, reduce the interference from the non-targeted compounds.
Thank you all for your attention.
492
-------
QUESTION AND ANSWER SESSION
MR. TELLIARD: Do we have any questions?
MR. COCHRAN: Jack Cochran, Hazardous Waste
Center. Considering that you probably don't have an inert atmosphere in your sample vial and
you elevate the temperature sometimes as much as 150 degrees C, are you worried that the
stationary phase on the fiber will break down under those conditions?
MR. ZHANG: Oh, no, the fiber is actually used for
GC coating, so it is stable at about 300 degrees.
MR. COCHRAN: But you do have oxygen in the
sample vial, and you have it hot. I was just concerned that maybe the phase would break down.
MR. ZHANG: We did extensive experiments, and
the fiber can be used sometimes for a hundred times and it won't break down.
MR. CROWLEY: Ray Crowley, Millipore. You
have shown a lot of work on sensitivity and limit of detection. Do you have any plans to do an
actual accuracy study, let's say to compare your methodology with something like EPA 525? My
concern is that you have tremendous matrix interferences here that will impact the quality of
results.
MR. ZHANG: Oh, yes, we are actually...one person
is right now doing head-to-head comparison of SPME with purge and trap. We also plan to do
a lot of comparison experiments in the future.
We have done a lot of experiments in the aqueous samples like Method 624 and
525, a lot in the aqueous samples.
This headspace method to extract analytes from the solid matrix is just beginning.
I just show you some of the ways we can do. Certainly, we are going to do a lot of experiments
to compare with the others to show whether this method has advantages or not.
MR. CROWLEY: Do you want to predict how long
it would take to calibrate your system for a 50-analyte component mixture?
MR. ZHANG: That is a difficult question.
Basically, this type of technique can be used two ways. One way to use it is for rapid screening.
If you have hundreds of compounds and you want to see whether they exist or not, you just use
this technique to do it very easily.
493
-------
The other one is to do accurate quantitation. It depends how accurate you want
it.
Suppose you have 50 compounds. You need to divide them into several groups
using compounds with relatively comparable partition coefficients as internal standards.
If you want to do it very accurately, then you will have to use isotopically labeled
internal standards to match each compound of interest.
MR. CROWLEY: Thanks.
MR.RISSER: Nelson Risser, Lancaster Labs. What
is the mechanical life of the fiber? How rugged is this system?
MR. ZHANG: The fiber usually can last quite a
long time.
MR. TELLIARD: How many analyses? Do we
know?
MR. ZHANG: What do you mean?
MR. TELLIARD: How many analyses?
MR. ZHANG: Oh, usually if you analyze volatile
compounds, you desorb at 200 degrees like I did, it can last more than a year. You can do it
hundreds of times, but I guess sometimes if the sample is very complicated, then you are likely
to have some kind of interference compounds. Then you want to desorb at higher temperatures.
You probably will shorten the fibre's lifetime.
But it just won't be a problem. Once you try it, you will know it won't be a
problem.
MR. RISSER: Well, with that fiber sliding inside
the needle, isn't there a mechanical wear on the fiber as you continue to use it over time?
MR. ZHANG: You can continuously use it.
Supelco is making this syringe right now. I did not bring it. It is just like an ordinary syringe,
but you have a fiber inside, so you can push it down with your plunger and expose the fiber.
Then you pull up the plunger and withdraw the fibre inside the needle.
MR. RISSER: But don't you rub the liquid coating
off of the fiber as you use the fiber?
494
-------
MR. ZHANG: Liquid coating has a very high
viscosity so it won't come off. It is just the kind of coating.
MR. RISSER: Okay. What do your blanks look
like for volatiles?
MR. ZHANG: Blanks?
MR. RISSER: Blanks, system blanks.
MR. ZHANG: It depends on the desorption
temperature. For the volatiles, it is very clean.
MR. RISSER: Okay, thank you.
MR. TELLIARD: Thank you very much.
495
-------
496
-------
ELIMINATION OF MATRIX INTERFERENCES IN
SOLID PHASE MICROEXTRACTION
Zhouyao Zhang, Karen Buchholz, and Janusz Pawliszyn
Department of Chemistry
Waterloo Centre for Groundwater Research
University of Waterloo
Waterloo, Ontario, Canada
FINANCIAL SUPPORT
Supelco
Varian
NSERC
-------
OUTLINE
What is SPME?
What is headspace SPME?
How do SPME and headspace SPME deal with matrix interferences?
Can SPME reduce the interferences from non-target compounds?
Can SPME overcome matrix adsorption to achieve good recovery?
Can we use the matrix effects to our advantage?
-------
SOLID PHASE MICROEXTRACTION (SPME)

[Device schematic not reproduced. Labeled parts: plunger, needle, epoxy,
stainless steel tubing, fused silica rod, coating (enlarged view).]
-------
DIRECT SPME SAMPLING

[Schematic not reproduced: coated fused silica rod immersed in the aqueous phase.]

V1: volume of coating
V2: volume of water
C0: initial concentration of the analyte in the aqueous phase
K:  coating/water partition coefficient of the analyte

The amount of the analyte absorbed by the fibre coating in direct SPME sampling:

    n = K C0 V1 V2 / (K V1 + V2)
-------
HEADSPACE SPME SAMPLING

[Schematic not reproduced: coated fused silica rod (coating/headspace partition
coefficient K1) suspended in the headspace (volume V3) above the aqueous phase
(volume V2, initial analyte concentration C0); K2 is the headspace/water
partition coefficient.]

V1: volume of coating

The amount of the analyte absorbed by the fibre coating:

    n = K1 K2 C0 V1 V2 / (K1 K2 V1 + K2 V3 + V2)
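Both equilibrium expressions can be checked numerically with a short sketch like the one
below; the partition coefficients, volumes, and concentration used here are illustrative
assumptions, not measurements from this work.

    # Amount of analyte absorbed by the fibre coating at equilibrium for direct
    # and headspace SPME sampling. All numbers below are illustrative assumptions.

    def n_direct(K, C0, V1, V2):
        """Direct sampling: coating (V1) immersed in water (V2) at initial concentration C0."""
        return K * C0 * V1 * V2 / (K * V1 + V2)

    def n_headspace(K1, K2, C0, V1, V2, V3):
        """Headspace sampling: K1 = coating/headspace, K2 = headspace/water partitioning."""
        return K1 * K2 * C0 * V1 * V2 / (K1 * K2 * V1 + K2 * V3 + V2)

    C0 = 1000.0      # ng/mL (1 ppm) analyte in the aqueous phase
    V1 = 6.6e-4      # mL of coating (roughly a 100 um film on a 1 cm fibre)
    V2 = 3.0         # mL of water
    V3 = 1.0         # mL of headspace
    K, K1, K2 = 500.0, 1000.0, 0.5    # illustrative partition coefficients (K = K1*K2)

    print(f"direct sampling:    {n_direct(K, C0, V1, V2):.0f} ng absorbed")
    print(f"headspace sampling: {n_headspace(K1, K2, C0, V1, V2, V3):.0f} ng absorbed")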
-------
Headspace SPME
BTEX IN WATER, SAMPLE AGITATED (room temperature)

[Extraction time profiles not reproduced. Analytes: benzene, toluene, ethylbenzene,
m,p-xylene, o-xylene. X-axis: extraction time, 0-10 minutes.]
502
-------
Headspace SPME
PAHs IN WATER, SAMPLE AGITATED (room temperature)

[Extraction time profiles not reproduced. Analytes: naphthalene, acenaphthene,
phenanthrene, chrysene. X-axis: extraction time (minutes).]
503
-------
Headspace SPME
1 ppm BTEX in sand (time profile), 47 °C

[Extraction time profiles (traces A-E) not reproduced. X-axis: extraction time,
0-14 minutes.]
504
-------
Headspace SPME
40 ppb of PAHs in sand (time profile), room temperature

[Extraction time profiles not reproduced. Analytes: naphthalene, acenaphthene,
phenanthrene. X-axis: extraction time, 0-60 minutes.]
505
-------
THE PRECISION AND LIMIT OF DETECTION (LOD) OF SEVERAL VOLATILE ORGANIC
COMPOUNDS IN THREE MATRICES AT 5 ppb CONCENTRATION LEVEL
(LOD is calculated from the MS spectra assuming S/N=3)

                          Waste water (5 ppb)    Aqueous sludge (5 ppb)   Sand (5 ppb)
Compound                  Precision  LOD (ppt)   Precision  LOD (ppt)     Precision  LOD (ppt)
1,1-Dichloroethane          14%        400         14%        450           4%         80
Chloroform                   5%          6          9%        150           2%         40
Carbon tetrachloride         6%         20          3%         70           4%         20
Trichloroethane              7%        110          7%        550           6%         70
Dibromochloromethane         3%        210          7%        110           3%          3
Chlorobenzene               14%          2          6%         30           4%          1

Sampling in headspace at room temperature for 2 minutes.
-------
ANALYSIS OF SOIL SAMPLES BY HEADSPACE SPME/GC/MS

[Chromatogram not reproduced. Labeled peaks: benz[a]anthracene and chrysene.
X-axis: retention time, 15:00-16:20 (minutes).]
-------
Headspace SPME
Temperature dependence (1 ppb BTEX in soil#7), 2 min sampling

[Plot not reproduced. X-axis: extraction temperature, 20-100 °C.]
508
-------
Headspace SPME
Temperature dependence (1 ppm BTEX in clay), 2 min sampling

[Plot not reproduced. X-axis: extraction temperature, 20-100 °C.]
509
-------
Headspace SPME
Heating-Cooling Experiments (1 ppm BTEX in clay), 140 °C to 0 °C

[Extraction time profile for o-xylene not reproduced. X-axis: extraction time,
0-25 seconds. *: o-xylene at 50 °C shown for comparison.]
510
-------
Headspace SPME
Temperature dependence (40 ppb PAHs in sand), 5 minutes sampling

[Plot not reproduced. Analytes: naphthalene, acenaphthene, phenanthrene, chrysene,
perylene. X-axis: sampling temperature, 50-200 °C.]
511
-------
Headspace SPME
The effect of water content in clay (1 ppm BTEX, 5 minutes sampling)

[Bar chart not reproduced. Conditions:
 A, room temperature
 B, 50 °C
 C, 50 °C, 10% water
 D, 50 °C, 30% water
 E, 50 °C, 50% water
Analytes: benzene, toluene, ethylbenzene, m,p-xylene, o-xylene.]
512
-------
Headspace SPME
40 ppb PAHs in sand (5 minutes sampling)

[Bar chart not reproduced. Conditions:
 A, 50 °C
 B, 50 °C / 15% water
 C, 100 °C
 D, 100 °C / 15% water
Analytes: naphthalene, acenaphthene, phenanthrene, chrysene, perylene.]
513
-------
THE LIMIT OF DETECTION (LOD) OF BTEX IN SEVERAL MATRICES
(LOD is calculated from the MS spectra of BTEX at 5 ppb level assuming S/N=3)

               Water   Sand   Soil#7   Soil#7/15%    Clay   Clay/15%
               (ppt)   (ppt)  (ppt)    water (ppt)   (ppt)  water (ppt)
Benzene          84      39    1000         8         1700      357
Toluene           6       5     125         2          190       24
Ethylbenzene      5       3      28         1           96       28
m,p-Xylene        3       3      16         1           38       18
o-Xylene          5       6      32         7           94       41

Water and sand samples were extracted at room temperature;
Soil#7 samples at 65°C (optimum temperature);
Clay samples at 50°C (optimum temperature).
Sampling in headspace for 2 minutes.
-------
Headspace SPME
Salt-out effect (1 ppm BTEX, 2 min sampling)

[Bar chart not reproduced. Comparison of water vs. salt-saturated water for
benzene, toluene, ethylbenzene, m,p-xylene, and o-xylene.]
515
-------
EFFECT OF ACID & SALT

[Bar charts of GC area counts not reproduced. Legend: control, acid, acid & salt.
Compounds: Phe, 2C, 2N, 24DM, 24DC, 24DN, 4C3M, 246TC, 4N, 2M46DN, PCP.]
516
-------
PHENOLS FROM HEADSPACE

[Bar chart of GC area counts not reproduced. Legend: control, headspace.
Compounds: Phe, 2C, 2N, 24DM, 24DC, 4C3M, 246TC, PCP.]
-------
ANALYTE RECOVERY FROM SEWAGE MATRIX

Compound                        % Recovery    % Recovery, Acid + Salt
Phenol                             74.2              92*
2-Chlorophenol                    128                92*
o-Cresol                          118                95.9
m-Cresol                           95.8              97.8
p-Cresol                           95.8              97.8
2,4-Dimethylphenol                104                96.5
2,4-Dichlorophenol                105                78.8
2,6-Dichlorophenol                 21.3              83.4
4-Chloro-3-methylphenol            91.5              85.0
2,3,5-Trichlorophenol              56.1              71.3
2,4,6-Trichlorophenol              21.5              66.1
2,4,5-Trichlorophenol              64.5              66.1
2,3,4-Trichlorophenol              66.8              71.9
2,4-Dinitrophenol                   2.7             111
4-Nitrophenol                      38.4             118
Tetrachlorophenol isomers          16.7              61.4
2-Methyl-4,6-dinitrophenol          0                83.1
Pentachlorophenol                   8.0              32.2

* Coeluted on GC column
518
-------
SUMMARY
Headspace sampling
Matrix modification
Matrix normalization
Temperature differentiation
Coating selectivity
519
-------
520
-------
MR. TELLIARD: Our next speaker is Bruce Colby
from Pacific Analytical. Bruce is going to talk about general approaches to solving matrix
problems.
Bruce?
MR. COLBY: Good morning. I am going to talk
about a general approach to dealing with matrix problems. I don't have any magic in my pocket,
unfortunately, but we have developed a way to contend with matrix problems when they arise,
and they do arise with a fair degree of frequency, as people out there who have to analyze
samples I am sure are aware.
Before I really get into the discussion I would like to define to some degree what
matrix problems are. There are two things that we use as categories for matrix problems. These
are situations where an analyte that we are after is either not recovered from a sample matrix or
it cannot be isolated from the matrix. In other words, we suspect that it is there, or, in some
cases, we know that it is there, but we can't seem to find it when we try to determine it.
That is one situation. It is really quite different from the other matrix problem,
which is when something in the sample matrix interferes with the detection of a particular
analyte.
Sometimes it can be a bit confusing which problem is which, but if you can't get a
compound or an element into your instrument, can't get it out of the sample and into an
instrument, that is kind of the first one. If you have gotten it out of the sample and into the
instrument but there is something else from the sample that you have put into the instrument
which keeps you from making the measurement, that is the second kind of problem.
The thing I would like to do right now, is to quickly run through the approach we
use when we face these problems.
Incidentally, my whole discussion is going to be based around discrete analytes. I
am not trying to address any kind of matrix problem associated with method-defined parameters
like "oil and grease" where the method defines what "oil and grease" is.
The first thing we do when we are going to try to solve a matrix problem is kind
of classic. We define what the problem is. Is there an interference or is it a recovery problem?
We have to settle on what kind of problem we are trying to solve first.
Then we identify the boundary conditions we are going to deal with. If we are
going to continue trying to make a measurement, what kind of detection limits do we have to
achieve and what kind of time restrictions are there on getting the measurements and so forth?
521
-------
Once we have decided we are going to move ahead with it, we have to generate
a hypothesis of how we are going to deal with the problem, what really explains the situation that
we have encountered. If we can come up with an explanation of the situation, then we can move
along and start to make some sort of tests with respect to dealing with the problem.
We then try to identify chemical differences between the analyte or analytes that
we are trying to measure and the matrix, whether it is a matrix recovery problem or a spectral
interference problem or a detector problem or what not.
We then try to come up with some sort of physical separation technology that will
let us isolate the matrix from the analyte that we are after. Sometimes we come up with more
than one possibility. We then have to pick the best one, and go ahead and test it.
Well, let's take these steps one at a time here.
In defining the problem, the first step really is to decide whether we have an
interference, or whether we don't have recovery. This is a very fundamental difference, and it
has a lot to do with how we attack the problem.
If it is an interference problem, then we have to identify what the interference is,
so it is apt to require someone with knowledge of the sample, what is present, what is likely to
be present in the sample. From a commercial lab's viewpoint, normally we would have to go
back to someone who understands the source of the sample, what created the sample, what is
likely to be there; "We think there might be thus and so present, does that make sense to you?"
If we are trying to get rid of something or separate something, it is critical to know
what those things are.
Identifying the boundary conditions that we have to deal with, some of the
traditional QA/QC-related things, is next. We need to know what detection limits are needed.
If we are going to put something together, what kind of precision are we going to have to
generate, and what sort of accuracy? How many different analytes are we going to have to
contend with? Can we deal with just one, or are we dealing with 30 problem analytes? What
kind of turnaround time do we have to come up with?
We have got to do some thinking, and possibly do some experiments. We may
also have a development time issue, and, of course, time converts itself into costs at some rate.
Sometimes it is important to put a lot of development time into something, because
we need a low-cost, quick-turnaround measurement in the end. Other times, we just need an
answer once, so there is no point in putting a lot of development time into it.
522
-------
Once we have established in our own minds what the problem is and what the
boundary conditions are, we establish an hypothesis of how we might contend with things, and
then we move on and try to evaluate that hypothesis.
We look at existing data and talk to people who know something about the sample.
Does our hypothesis make sense to someone who is aware of what might be in the sample? If
it is an interference situation, we might have to collect some additional data to help identify the
interference.
There are lots of things, but basically what we are doing is trying to look at the
information we have, whether it is experimental data or information from external sources, to
justify the hypothesis we have generated to explain the problem.
Now we try to identify chemical or physical differences between the analyte and
the matrix. Is there something that we can use as a handle to help separate these from
one another? The types of parameters we go for are differences in vapor pressures, solubilities,
polarities, reactivities, and sometimes pKa's.
Once we have decided what the differences are between the analyte and the matrix,
we can pick some sort of separation technology to deal with the situation. Things that we have
to work with here are different solvent systems and possibly distillation; this could be
evaporation, but it is some mechanism that uses vapor pressure to isolate an analyte from the matrix.
We can sometimes remove the matrix and leave the analyte behind and vice versa. Sometimes
chemical reactions are useful. Derivatizations, for instance, could be a useful thing. We also
have got a lot of different liquid chromatography tools at our disposal, alumina column
fractionations, potentially HPLC separations, although those are usually fairly expensive if you
are putting them in front of something else. We also have different gas chromatography
techniques and columns plus different types of detector systems.
Well, having put all these pieces together, we try to select a candidate approach,
one that our gut feeling says is going to work. It has also got to be one that we can accomplish
with the tools that we have at our disposal.
At some point, then, we have got to move along and undertake some experimental
effort to test our theory with real samples. We usually start out looking at blanks, spiked blanks,
then clean samples. We need to make sure that the technique that we are working with is going
to work in the easy situations.
We then go on to looking at field samples and spiked field samples. We look at
things like the recoveries and the precision of the recoveries, i.e., what kind of accuracy are we
getting.
523
-------
If we fail in working with a field sample, we are definitely going to run into
problems. So, doing spiked field samples is very, very important. It is the only way we can
really assure ourselves that we have solved the problem.
Well, that is the basic approach. It is not a cookbook thing, it is a thought
process.
The next thing I would like to do is run through a few examples that I brought
which illustrate some of these steps as we go through. They are real problems that we have
encountered and solved. I should mention that we have encountered problems that we have been
unable to solve. Fortunately, we have solved more than we have failed to solve. So, overall,
things look pretty good.
The first example is a situation where we were trying to measure, among other
things, nitrobenzene in a wastewater sample. There was a huge spectral interference which made
it absolutely impossible for us to achieve the detection limit required by the regulatory people.
The detection limit required was around 2000 ug/L. So, this is serious interference territory.
Well, that is a boundary condition, 2000 ug/L. We needed a method that will get
down to that level on these particular samples.
We looked at the GC/MS data we had and we felt that the interference was some
kind of phenylpropanol or something like that, and when we discussed this with the client, they
agreed that that was reasonable and that it was quite probably 2-phenyl-2-propanol.
Well, we have nitrobenzene, water, and 2-phenyl-2-propanol, and we have to get
them separated. Things we can use for handles are different polarities, different solubilities, and
different reactivities. There are also potential differences in volatility that we can
work with.
Tools that we might apply in this situation would include different GC columns,
ion exchange technology, back extraction techniques, and derivatization (relying on that OH
group as a key). Finally, purge and trap might be a way to go after things. Nitrobenzene is,
after all, reasonably volatile.
Well, we tried all of these, and finally kind of threw in the towel and said okay,
we will have to try purge and trap, because we have tried everything else, and nothing really
worked.
You can see in this chromatogram of the sample where nitrobenzene elutes. It is
somewhere under the largest peak. There is quite a bit of the interference in there, and there is
not much nitrobenzene that we can see. You can see this is a pretty ugly looking chromatogram.
524
-------
The nitrobenzene-d5 that we put in to track exactly where nitrobenzene eluted
came out as shown in the top trace. The center trace is the quantitation mass for nitrobenzene.
It happens to be a minor peak in the 2-phenyl-2 propanol spectrum, so it is totally wiped out.
The bottom trace is a total ion chromatogram.
Nothing seemed to work so we finally took a crack at it with purge and trap using
a capillary column.
Now, this is a fair deviation from the normal EPA wastewater GC/MS technology.
Purge and trap is not normal technology for nitrobenzene, but in this case, it
works. We were able to demonstrate an MDL in the vicinity of 10 ug/L, considerably lower than
the 2000 that was necessary. There was also plenty of precision. We were able to demonstrate
that, in fact, there was no nitrobenzene present even down at the 10 ug/L level.
Another example we ran into had a volatile organics analysis that was quite
important. The difficulty with this particular sample was percent levels of acetone and phenol.
If you have ever tried to run a purge and trap on a sample that has several percent of acetone or
phenol in it, you know that your instrument becomes so contaminated that after running the first
sample, it takes a day or more to get the thing back on line again.
So, there was no way this sample could be run without totally destroying the
instrument. If we followed traditional thinking, i.e., dilution of the sample, the detection limits
for volatile aromatics and chlorinated solvents would not be achievable.
The problem here is a matrix isolation problem. We don't have a spectral
interference problem. We have got to get rid of this acetone and phenol somehow before we do
our measurement.
Well, what have we got to work with? Acetone and phenol are fairly polar, as is
the matrix. We are after volatile aromatics and chlorinated solvents which are not very polar.
We finally settled on trying to do something with a solvent system. This, again,
is a fair deviation from normal EPA methodology.
What we looked at was the possibility of doing a hexadecane:water partition and
then taking the hexadecane, putting that in an autoinjector vial and shooting it on a cap column
GC/MS.
The detection limit requirements were 5 ug/L, and we felt the only way we could
get the necessary precision and accuracy for the measurements here was to use isotope dilution.
That is more work, but we get a better result when we do it.
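As a rough illustration of how isotope dilution quantitation works in general, here is a minimal sketch; the function name, the example numbers, and the assumption of a relative response factor of 1.0 are all hypothetical, not values from the talk.

```python
# Minimal sketch of isotope dilution quantitation (hypothetical numbers).
# The native analyte is quantified against its labeled analog spiked into
# the sample before work-up, so losses largely cancel in the ratio.

def isotope_dilution_conc(area_native, area_labeled, ng_labeled_spiked,
                          rrf, sample_volume_L):
    """Concentration (ug/L) = (An/Al) * labeled spike / RRF / volume."""
    ng_native = (area_native / area_labeled) * ng_labeled_spiked / rrf
    return ng_native / 1000.0 / sample_volume_L  # ng -> ug

# Hypothetical result: 5 mL water aliquot, 50 ng of a labeled analog spiked
print(isotope_dilution_conc(area_native=12000, area_labeled=48000,
                            ng_labeled_spiked=50.0, rrf=1.0,
                            sample_volume_L=0.005))  # ~2.5 ug/L
```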
525
-------
After we settled on those two things, we decided we had to calibrate our
instrument down to the equivalent of 0.1 ng/uL. So, we are pushing instrument technology and
using extraction instead of purge and trap, to solve a problem.
We ended up with a chromatogram that looks like this. This is a matrix spike of
a field sample. All of the peaks in there are analytes that we put in. The hump in the middle
is probably the result of phenol. We didn't see much of the acetone.
We were able to get MDLs at the 5 ug/L level. We eliminated the acetone and
the phenol; established that we had accurate, precise measurements; and everybody went away
happy. It took some time to develop it, but it worked.
Another example. This one is not a wastewater example; not all of my examples
are from wastewater.
We had a need to measure volatile organics in waste oil. The data I am going to
show will be from used motor oil.
The purge and trap type technology is essentially useless if you have an oil
sample. Very oily waters present a similar kind of a problem. The analytes we are interested
in are very soluble in the matrix, and they just don't recover.
One thing that had been tried on the oil was headspace, but it didn't produce the
required precision, so the MDLs were not acceptable. We had to get detection limits for...well,
the lowest one was for vinyl chloride, down at 1 ug/kg of oil. Again, the problem in this case
was a matrix isolation problem. How do we get our analytes away from the matrix? There was
no interference problem in particular.
The only thing we had to go on that looked promising here was vapor pressure, so
we settled on doing a headspace type measurement.
We did some initial work that indicated that we had to equilibrate the samples at
a fairly high temperature for 8 hours prior to analyzing them. I think we used 80 degrees C for
the equilibration period.
We used isotope dilution for the quantitation to try to improve the precision. This
helps get MDL values down to the boundary condition level. Some of the detection limits were
at fairly high levels. We wanted to be able to analyze a sample in one run yet get all the detection
limits covered. We didn't want to have to analyze a sample twice, essentially diluting it so that
we could get everything on scale.
Consequently, we ended up with a calibration range on the order of a factor of 1000 to
cover everything.
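One way to document that a single calibration really holds over a factor of 1000 is to look at the spread of the response factors across the levels; the sketch below assumes a response-factor RSD check with an illustrative 20 percent limit, which is not a value quoted in the talk.

```python
# Illustrative check of a wide-range (factor-of-1000) calibration using the
# relative standard deviation of response factors; the 20% limit is an
# assumption for the sketch.
import statistics

def rf_rsd(cal_points):
    """cal_points: list of (concentration, instrument response) pairs."""
    rfs = [resp / conc for conc, resp in cal_points]
    return 100.0 * statistics.stdev(rfs) / statistics.mean(rfs)

# Hypothetical five-point calibration spanning 1 to 1000 (arbitrary units)
cal = [(1, 1020), (10, 9800), (100, 101500), (500, 492000), (1000, 985000)]
rsd = rf_rsd(cal)
print(f"RF RSD = {rsd:.1f}% -> {'acceptable' if rsd <= 20 else 'recalibrate'}")
```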
526
-------
The next slide shows a chromatogram. This is a motor oil sample. It is spiked
with regulatory levels of volatile TCLP analytes. They are all in there at regulatory levels, and
this is what a chromatogram looks like from a headspace isotope dilution run at the regulatory
levels.
The next slide gives you some idea of the kind of sensitivity. This is the same
sample showing a 500 ppb benzene peak near the center on the top trace and a 200 ppb vinyl
chloride peak. As you can see, it is possible to get those analytes out of the motor oil sample
matrix.
The next example I have got is a problem we ran into with some PAH
measurements in soil samples. The situation had arisen where someone was trying to measure
"carcinogenic" PAHs using Method 8310. The result was that there was a big huge peak that
started near the beginning of the chromatogram and went to the end of the chromatogram. There
were essentially no discrete peaks, just one big wide one.
They spent a lot of money and finally decided that it wasn't particularly useful.
They needed to get a detection limit for any given PAH down in the 1 ug/kg area. Now, the
8310 method doesn't even produce that, according to the method specifications, but for some of
the analytes, it will get close.
We talked to the people about the samples a bit and discovered that the site had
most likely been contaminated with a coal tar derived waste. Consequently there were more
alkyl PAHs there than unalkylated PAHs. In effect there are thousands of PAHs present, not just
six or seven.
We decided that the thing we had to work with in this case was detector
specificity. Fluorescence wasn't going to do the job, because all of the alkyl PAHs fluoresce.
We also needed more chromatographic resolution than HPLC was going to
provide. Finally, we settled on mass spec to increase detector specificity. We also used some
additional chromatographic separations to remove other components from the matrix, and
we used a capillary column to increase chromatographic resolution.
Some of the other things we did included a 100 gram sample instead of sort of a
standard 30 gram sample in order to improve sensitivity. The column chromatography selected
was an alumina column fractionation, which yielded three fractions: a polar one, which we didn't
analyze; an aromatic one, which we did analyze; and an aliphatic one, which we didn't analyze.
There was a lot of sulfur present. If anyone has ever analyzed the NIST sediment
sample, that one also has a lot of sulfur in it. So, a sulfur removal procedure was added.
We used a small final volume, 300 uL, but we could still use an autoinjector at
that volume. We calibrated the GC/MS down to 0.1 ng/uL for the injections.
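A quick back-of-the-envelope check, not part of the talk itself, shows why those choices support sub-1 ug/kg detection limits:

```python
# Rough check that the stated conditions support sub-1-ug/kg DLs for the PAHs.
cal_low_ng_per_uL = 0.1     # lowest GC/MS calibration level
final_volume_uL = 300.0     # final extract volume
sample_mass_g = 100.0       # soil sample extracted

ng_in_extract = cal_low_ng_per_uL * final_volume_uL       # 30 ng detectable
method_level_ug_per_kg = ng_in_extract / sample_mass_g    # ng/g == ug/kg
print(method_level_ug_per_kg)  # 0.3 ug/kg, below the 1 ug/kg target
```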
527
-------
Quantitation was done by isotope dilution. We tend to use this technology a lot,
because it reduces development time substantially and because it improves precision and
accuracy.
The next slide shows a total ion chromatogram. This is a field sample. The hump
at the end of this is...actually, it is all kinds of different things, all aromatic things. We have no
idea what most of them are.
If we just take a quick look at this, what I put up here was a...well, the bottom
trace was a total ion chromatogram. The next trace up shows phenanthrene and then anthracene
as the two little peaks on the left. The bigger one is phenanthrene.
The next trace up with the two sets of two peaks is for the methylanthracenes and
methylphenanthrenes. The next trace up is the dimethyls and ethyls, and the next trace up is for
the ones that are C3 anthracenes and phenanthrenes. So, you can see that, indeed, there were
large quantities of these alkyl PAHs present.
In this case, we can put traces up for all the other alkyl-PAHs, and, basically, we
just keep finding more and more of them the more we look.
But if we now look at the quantitation mass peaks for the carcinogenic PAHs in
here that were identified as important in this case, we have clean traces for them, and they are
all present. Note that these are field samples, not spiked samples.
One thing that is worth pointing out is that the middle trace there is benzo(b)-
and benzo(k)fluoranthene, and there is a peak at scan 2111. The peak at 2111 is benzo(e)pyrene.
It is normally present in field samples at a higher level than benzo(a)pyrene. If there were no
real benzo(a)pyrene present, most GC/MS methods, if followed strictly, would identify
benzo(e)pyrene as benzo(a)pyrene. This is because it is within the retention time window for
benzo(a)pyrene, and it has got exactly the same spectrum. That can be a real problem with some
of the methods if, say, a health effects guy tries to use the data.
Anyway, we achieved detection limits on all the carcinogenic PAHs, actually on
all the PAHs that we measured, down below 1 ug/kg.
In the final example, we had a sample where the total organic carbon numbers for
some well water samples were very high. There was particular concern about contamination of
the groundwater at this point because of its proximity to some drinking water wells. None of
EPA's traditional HPLC or GC/MS methods had detected any identifiable analytes.
Effectively, none of the compounds on the lists were showing up.
It bothered some people that the TOC values could be high, and yet there weren't
any detectable analytes present. That, clearly, could be a problem.
528
-------
The hypothesis that we came up with was that there were polar compounds in the
water that weren't on anybody's list.
Well, if they are polar compounds in well water, we have got something we can
do with solubilities and possibly something with vapor pressures.
In dealing with solubility, we decided to alter the ionic strength of the sample.
We also tried working with vapor pressure through vacuum distillation.
We maximized the ionic strength of the water sample by saturating it with sodium
chloride. We would have preferred to use calcium chloride, but we couldn't find any that
satisfied us in terms of the blank.
We then extracted the water in a continuous liquid-liquid extractor for 72 hours
and concentrated the extract by K-D. We concentrated the extract to 100 uL.
In the second experiment we took a liter of water and essentially evaporated the
water in a vacuum centrifuge. It took quite a while to do this, about a week to get 1 liter down
to dryness. We then took up the residue in 100 uL methylene chloride.
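The arithmetic behind those work-ups is simple but worth spelling out; the 0.5 ug/L analyte level in the sketch below is hypothetical, and quantitative recovery is assumed only for illustration.

```python
# Rough concentration-factor arithmetic for the two work-ups (illustrative).
sample_volume_mL = 1000.0      # 1 L of well water
final_extract_uL = 100.0       # final extract volume
factor = sample_volume_mL * 1000.0 / final_extract_uL
print(f"concentration factor ~ {factor:,.0f}x")     # 10,000x

# A hypothetical trace component at 0.5 ug/L, assuming quantitative recovery:
analyte_ug_per_L = 0.5
ug_in_sample = analyte_ug_per_L * sample_volume_mL / 1000.0    # 0.5 ug total
ng_per_uL_in_extract = ug_in_sample * 1000.0 / final_extract_uL
print(ng_per_uL_in_extract, "ng/uL in the 100 uL extract")     # 5 ng/uL
```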
We analyzed the extracts by GC/MS and used some special software techniques
to deconvolute the spectra. I am not going to go into this in detail, because I talked about it here
last year.
At any rate, the total ion traces for the two extracts look very much alike.
The five big peaks in the example are the internal standards. The rest of the peaks, only a few
of which are fairly good sized, are peaks down in the baseline.
You can see on the left-hand side there is a peak at scan 303. If I take that
portion of the chromatogram and expand it, I get something that looks like this. You can see 303
is now the biggest peak.
In this region of the chromatogram, the part that is displayed, we were able to
identify 15 discrete compounds and get good mass spectra from all of them. Some of them were
even in the NIST library. Many of them were not, but since there are several series of
homologous type compounds present, we were able to pick up most of what was there.
Some of the compounds were things like this butoxyethanol. They are basically
glycol ethers with different end groups on them. There are all kinds of different end groups, and
each end group creates a different series.
There were a few other things as well. There were some relatively nasty
chemicals in there which I can't go into in much detail, but we were able to come up with about
529
-------
150 identifications out of this. At this point, we don't know what any of the recoveries are,
because some attorney is still trying to figure out what to do about it.
That is the last example I am going to go through. In conclusion, I would like to
say that there are usually ways to get around matrix problems. In order to get around them, we
have to be willing to deviate from the cookbook methods that are the heart and soul of most
environmental labs and generate modified methods. When we do this, we come up with results
that are acceptable from a data quality standpoint.
We have found regulators generally reasonable to talk to and willing to listen to
this kind of method data. They haven't been hard to deal with, but they do want to be talked to,
and they want to understand what is going on.
It takes some extra time and money, and some decisions have to be made
regarding when to throw one's hands up in the air or whether to persist.
If anybody has any questions, I will try to take a crack at them.
530
-------
QUESTION AND ANSWER SESSION
MS. RHODES: Ileana Rhodes with Shell
Development. I would like to suggest...well, actually, I would like to add a comment. Actually,
I would like to congratulate you on doing unusual things. Some of the things you described,
like fractionation into aromatics, polars, and nonpolars, we have been doing in the oil industry
for some time. We have done that for quite a long time, and that is what we know how to do best, but
most of the time, we are forced to use EPA procedures that don't work for pure oils.
As an example I would like to share with you, we were forced to look at
benzo[a]pyrene in a diesel sample, and that was because of emissions. The benzo[a]pyrene
number that we obtained using simple waste dilution was just too high a detection limit, and
when you plugged it into the modeling equations, it came out as an exceedance.
So, the engineers came by crying to us, and we just went the old-fashioned way.
We did the fractionation using a column such as the one you used, then took the aromatic
fraction, and were able to drop the detection limit.
We still got a non-detect, but that non-detect plugged into the equations and gave
them a non-exceedance, and that was a lot better.
So, this is the way. It is kind of refreshing to see good old oil company
methods applied to environmental problems instead of trying to analyze an oil sample as if it
were a water sample.
That is my comment.
MR. COLBY: The alumina column fractionation
technique, incidentally, is an SW-846 recognized method.
MS. RHODES: But nobody uses it, though.
MR. COLBY: Yes, they do. We do.
MS. RHODES: Very few people do.
MR. COLBY: It is on the books, and it works very
nicely. There is a good silica gel one, too.
MR. SCHREINER: My name is Dave Schreiner,
City of Phoenix. Have you had any success in inorganic matrices for solving problems?
531
-------
MR. COLBY: There are things that can be done
with different chelating agents to solve some of the problems that we have seen in the inorganic
area. Typically, the increase in cost to work out those things has scared people away.
Brine samples, in particular, are one area where people sometimes want what I
guess would strike most of us as ludicrous detection limit numbers. The only way to go after
them is to go to chelation and do a fair amount of chemical separation.
So, yes, there are ways to go after some of it, but we haven't had people that are
too willing to pursue it at this point. I would be very interested in doing some of it, but we can't
do it for free.
MR. TELLIARD: No, really?
MR. COLBY: Really.
MR. SCHREINER: But your approach would be
more towards prior to digestion to, you know, change the methods or whatever?
MR. COLBY: Chelation is not something that is
really anywhere that I have seen in any EPA technology at this point. So, that would be a pretty
big deviation from anything I have seen.
It is in the literature, Analytical Chemistry and places like that, but it is not in the
environmental methods at this stage.
MR. SCHREINER: Thanks.
MR. TELLIARD: Thanks, Bruce. Appreciate it.
532
-------
A GENERALIZED APPROACH TO
SOLVING MATRIX PROBLEMS
Bruce Colby, Lee Helms and Steve Parsons
Pacific Analytical
Carlsbad, CA
and
James Smith
Trillium
Coatesville, PA
-------
MATRIX PROBLEMS
Analyte not recovered/isolated from the matrix
Analyte interfered with by non-target chemical
-------
APPROACH
Define the analysis problem
Identify boundary conditions
Form a hypothesis which explains the problem
Test the hypothesis experimentally
Identify chemical differences between analyte and
matrix/interference
Identify separation mechanisms which play on the
above differences
Select the best candidate separation
Test the best candidate
-------
DEFINE THE PROBLEM
Interference vs. isolation/recovery
Identify the interference/matrix
-------
IDENTIFY BOUNDARY CONDITIONS
• Detection limit(s)
• Precision
• Accuracy
• Range of problem analytes
• Turnaround time
- Development
- Routine application
• Cost
- Development
- Routine application
-------
FORM A HYPOTHESIS
-------
EVALUATE THE HYPOTHESIS
• Use existing data
• Collect additional data
-------
IDENTIFY CHEMICAL DIFFERENCES
• Vapor pressures
• Solubilities
• Polarities
• Reactivities
• pKa's
-------
IDENTIFY SEPARATION MECHANISMS
• Solvent systems
• Distillation
• Chemical reactions
• Liquid chromatography
• Gas chromatography
• Spectrometry
-------
SELECT A CANDIDATE APPROACH
Solve the analytical problem
Satisfy the boundary conditions
-------
TEST THE BEST APPROACH
Blanks
Spiked blanks
Field samples
Spiked field samples
-------
NITROBENZENE INTERFERENCE
1. Spectral interference causes DL requirement not
to be met
2. DL < 2000 ug/L, routine monitoring
3. Interference is 2-phenyl-2-propanol
4. Polarities, solubilities, reactivities, volatilities
5. GC Column, ion exchange, back extraction,
derivatization, P&T
-------
ROUTINE 1625 DATA
Pacific Analytical
Sample: 45281 ES NOGPC
Instrument: UG06
-------
METHOD 1625 10:1 DILUTION
Pacific Analytical
Sample: 452B1 DIL ES NO GPC
Instrument: UG86
[Chromatogram traces showing nitrobenzene-d5 and 2-phenyl-2-propanol]
-------
PURGE & TRAP DATA
Pacific Analytical
Sample: FIELD SAMPLE
Instrument: UG87
[Chromatogram traces showing nitrobenzene-d5 and nitrobenzene; MDL = 18 ug/L]
-------
EXCESS ACETONE & PHENOL
1. Inadequate DLs for all volatiles using Method
624
2. Instrument contamination precludes analysis
3. Problem is acetone & phenol
4. Polarities, solubilities
5. Solvent system
-------
SOLVENT EXTRACTION
1. Hexadecane:water partition/syringe injection
2. Isotope dilution for precision and accuracy
3. Calibration down to 0.1 ug/L
-------
MICROEXTRACTION SAMPLE TIC
Pacific Analytical
Sample: Matrix Spike
[Total ion chromatogram]
-------
VOLATILE ORGANICS IN OIL
1. Purge & Trap ineffective, Headspace not
accurate at low cone.
2. Need DL for vinyl chloride at 100 ug/kg
3. Matrix isolation problem
4. Vapor pressure
5. Headspace
-------
HEADSPACE ISOTOPE DILUTION
1. Heated headspace with 8 hr equilibrium time
2. Isotope dilution quantitation
3. Wide range (10^3) calibration
-------
HEADSPACE TIC
Pacific Analytical
Sample: NEW OIL #3 SPIKED AT REG LIMITS, 5 PPM IS
Instrument: UG 6
[Total ion chromatogram]
-------
VOLATILE ANALYTES
Pacific Analytical
Sample: NEW OIL #3 SPIKED AT REG LIMITS, 5 PPM IS
Instrument: UG 6
[Extracted ion traces showing benzene at 500 ppb and vinyl chloride at 200 ppb]
-------
PAHs IN SOIL
1. HPLC-fluorescence chromatograms have few
distinct peaks
2. Need 1 ug/kg DLs for carcinogenic PAHs
3. Alkyl-PAHs from coal tar obscuring target peaks
4. Detector specificity, chromatographic resolution
5. MS, liquid chromatography, capillary column
-------
GC/MS PAHs
1. Extract 100 g sample
2. Alumina column fractionation to generate an
aromatic fraction
3. Sulfur removal
4. Final volume of 300 uL
5. Calibrate GC/MS to 0.10 ng/uL
6. Quantify using isotope dilution
-------
PAH TIC
Pacific Analytical
Sample: 595
Instrument: UG86
[Total ion chromatogram of a field sample]
-------
CARCINOGENIC PAHs
Pacific Analytical
Sample: 595
Instrument: UG86
[Quantitation mass traces for the carcinogenic PAHs plus the total ion chromatogram]
-------
ALKYL-PAHs
Sample: 595
Pacific Analytical
Instrument: MG86
-------
WATER SOLUBLE BNAs
1. Something is present (TOC data) but HPLC &
GC/MS show nothing
2. Need to know if ground water is contaminated with dangerous chemicals
3. Polar compounds not being extracted
4. Solubilities, vapor pressure
5. Increase ionic strength, distillation
-------
APPROACHES
1a. Maximize ionic strength
2a. Extract in CLLE for 72 hrs
1b. Vacuum evaporate 1 L of sample
3. Concentrate final extract to 100 uL
4. Use deconvolution background subtraction on the
GC/MS data
-------
WATER SOLUBLES TIC
Pacific Analytical
Sample: 5B62592C
Instrument: UG81
[Total ion chromatogram; the five large peaks are internal standards]
-------
WATER SOLUBLES TIC
Pacific Analytical
Sample: 5062592C
Instrument: UG81
[Expanded region of the total ion chromatogram around scan 303]
-------
WATER SOLUBLES SPECTRA
Pacific Analytical
Sample: 5862592C
Instrument: UG81
[Mass spectrum with NBS library search result, hit #1: ETHANOL, 2-BUTOXY- (2-butoxyethanol)]
-------
CONCLUSIONS
Additional matrix isolation techniques are
available
Analyte specific interferences can usually be
resolved
Extra effort and flexible analysis approaches are
required to deal with matrix problems
-------
566
-------
MR. TELLIARD: Our next speaker is from Battelle.
Hazel Burkholder is going to be talking on anion exchange resins for the collection of phenols
from air and water.
MS. BURKHOLDER: This morning, I will be
discussing the use of anion exchange resins for collection of phenols from air and water. The
first section of this talk will be rather general, the second section will cover phenols in water, and
the final section will cover phenols in air.
Methods have been established for sampling air and water with polyurethane foam
(PUF). These methods are applied to pesticides and PAHs and do not include phenols.
Current methods for phenols include Method 604, a liquid-liquid extraction of
wastewater, and Method TO8, an aqueous sodium hydroxide impinger collection method.
Method TO8 is only a suggested method and has not been validated. Our work demonstrates a
new technology, the use of anion exchange resins, for collection of phenols from either air or
water.
This is a schematic representation of an anion exchange resin, AG MP-1. The
resin has a styrene-divinylbenzene polymeric backbone with chemically bound quaternary amine
sites. Sodium hydroxide is used to convert the counter ion of the resin to the hydroxide form
and thus obtain the strongest possible aqueous base.
When analytes are introduced to the resin, the hydroxide on the resin readily
abstracts a proton from phenols and acids and causes attachment of their anion to a quaternary
amine site.
Analyte anions are displaced from the resin by application of an acid of greater
strength, in this case methanolic HCl, regenerating the analytes in neutral form in the eluent.
Anion exchange resins were obtained from BioRad. The granular resins were
evaluated in four mesh sizes, with the smallest size proving to be the most efficient for air
sampling. We also used an anion exchange membrane consisting of BioRad AG-1 resin, 5 um
particles, embedded in a 3M teflon matrix. This membrane is very similar to the C18 Empore
membrane.
The desired environmental detection limits are 1 ppb to 0.01 ppb for waters and
1 to 0.1 ppb for air.
We try to match detection limits to methods of derivatization and instrumentation
systems. For high levels (1-10 ug/mL in the analysis volume), it is not necessary to derivatize,
and analysis can be done by GC/FID or GC/MS. With mid-range (0.1-1 ug/mL analysis
concentration), it is necessary to derivatize with BSTFA and use GC/FID or positive chemical
567
-------
ionization GC/MS. Low levels require derivatization with pentafluorobenzyl bromide and analysis
with GC/ECD or negative chemical ionization GC/MS.
These are scenarios for achieving these desired detection limits. With current
methods, it is essential to extract a liter of water or to sample 7200 liters of air. This volume
of air can be achieved by sampling at 10 liters per minute for 12 hours. In all cases the final
analysis volume is 1 mL.
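The arithmetic behind those scenarios can be reproduced in a few lines; the helper function is only an illustration of the volume-to-final-concentration bookkeeping, using the targets quoted in the talk.

```python
# Arithmetic behind the sampling scenarios (targets from the talk; the helper
# function itself is just an illustration).
def final_conc_ug_per_mL(env_level, volume, final_mL=1.0):
    """env_level in ug/L (water) or ug/m3 (air); volume in L or m3."""
    return env_level * volume / final_mL

print(final_conc_ug_per_mL(1.0, 1.0))    # 1 ug/L waste water, 1 L  -> 1 ug/mL
print(final_conc_ug_per_mL(0.7, 7.2))    # 0.7 ug/m3 ambient air, 7.2 m3 -> ~5 ug/mL
print(10 * 60 * 12 / 1000.0, "m3")       # 10 L/min for 12 hr = 7200 L = 7.2 m3
```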
Advantages of the anion exchange resin for water include eliminating large
separatory funnels; there are no continuous extractors, no refluxing solvents, and no emulsions.
These features add up to reduced volumes of toxic solvents. High extraction efficiency and high
retention efficiency result because of the formation of a chemical bond. There is no
breakthrough.
Advantages of anion exchange resins for air sampling include no glass impingers.
The resin is lightweight, it has high collection efficiency, high sampling rates can be used, and
high retention efficiency results. Because of the chemical bond, humid air has no effect.
Disadvantages of anion exchange include the efficient passive sampling of the
resin. The resin must be protected from the atmosphere before it is used. There are also phenols
in styrene divinylbenzene polymers, and these resins must be cleaned thoroughly before use.
I will now briefly review the extraction of phenols and acids in water. These data
were presented here last year by Marielle Brinkman. Liquid-solid extraction of phenols and acids
from water was accomplished by placing a silanized glasswool plug in a silanized glass
chromatography column and adding 0.5 g AG MP-1 resin and water. The top of the water
column was spiked with phenols and the water was drained through the anion exchange resin.
The resin was then eluted with 2 percent hydrochloric acid in methanol/methylene
chloride. Water was added to the extract and it was then acidified. The extract was partitioned,
the aqueous layer discarded, and the organic layer dried through sodium sulfate. After K-D
concentration, the appropriate internal standards were added.
The sample was then split. 900 uL for acids was solvent exchanged into MTBE.
This portion was methylated with diazomethane and analyzed using GC/ECD.
The remaining 100 uL was solvent exchanged into acetone, diluted to 1 mL, and
derivatized with PFBBr prior to GC/ECD analysis.
This graph shows the collection efficiency of the AG MP-1 anion exchange resin
for phenols in water. Even with a liter of water, there is no breakthrough due to the formation
of the chemical bond. Please note, however, that the 1 ug in 1L used silanized glassware, while
the 1 ug in 100 mL did not. The differences in recovery clearly show the need to tie up active
sites on glass surfaces. Except for 1-naphthol, recoveries are generally greater than 80 percent.
568
-------
The shaded areas of this GC/ECD chromatogram mark spiked analyte peaks.
Despite some background peaks, the analytes are well resolved. These data indicate that using
an anion exchange resin and PFBBr derivatization for trace levels of phenols in water is quite
feasible with GC/ECD analysis. This approach eliminates the need for more costly GC/MS
analyses.
This GC/ECD chromatogram shows that 4-nitrophenol and pentachlorophenol can
be determined along with organic acids by this method without interferences.
Method recoveries for acids, 4-nitrophenol and pentachlorophenol are generally
good, greater than 70%.
In developing a method for sampling air, vapor spikes were done using a GC oven
held at 55 degrees C and an injector held at 250 degrees. Analytes spiked into the injector were
vaporized, mixed with helium carrier gas, and swept onto the anion exchange resin.
The resin was then eluted with 2 percent hydrochloric acid in methyl-t-butyl ether
and methanol. Aqueous sodium chloride was added, along with additional methyl-t-butyl ether.
After partitioning, the aqueous layer was discarded. The organic layer was dried through sodium
sulfate, concentrated to 1 mL, appropriate internal standards were added, and, again, the sample
was split, with 800 uL being analyzed by GC/FID and 200 uL diluted to 1 mL, methylated and
analyzed by GC/ECD.
High and low concentration standards show good resolution of all analytes as
shown in the GC/FID chromatogram. The analytes are easily quantifiable at the lower level of
0.6 ng/mL.
Collection and retention efficiencies of vapor spiked phenols are related to face
velocity of the sampler. When velocity is too high, there is not enough residence time for
analytes to react with the resin, and channeling is much more apt to occur. The 4.3 cm/sec
velocity was obtained using a 7 mm i.d. tube with helium flow at 100 mL/min (0.1 L/min). The
1.75 cm/sec velocity was obtained using an 11 mm i.d. tube with helium flow at 100 mL/min.
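Those face velocities follow directly from the flow rate and the tube diameter, as the short check below shows; the helper function is only an illustration of that geometry calculation.

```python
# Face-velocity check for the two sampler geometries quoted in the talk.
import math

def face_velocity_cm_s(flow_mL_min, id_mm):
    """Linear velocity through a tube of given inside diameter."""
    area_cm2 = math.pi * (id_mm / 20.0) ** 2   # radius in cm = (id_mm/2)/10
    return (flow_mL_min / 60.0) / area_cm2     # (mL/s) / cm2 -> cm/s

print(face_velocity_cm_s(100, 7))    # ~4.3 cm/s for the 7 mm i.d. tube
print(face_velocity_cm_s(100, 11))   # ~1.75 cm/s for the 11 mm i.d. tube
```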
Neutral compounds that are collected and retained by the polymer backbone can
be sequentially extracted from the resin with neutral solvents without extracting phenols. When
using the sequential extraction with neutral solvents, alkanes, aromatics, and ketones are removed
without affecting the phenols. The phenols can be recovered in a subsequent acidic eluent.
These GC/MS chromatograms resulted from ambient air sampled using the tube
sampler as shown in an earlier slide. Several of the alkyl phenols were identified using EI-
GC/MS, and electronegative components, including nitro-phenols, were identified using negative
CI-GC/MS.
569
-------
However, sampling at 0.1 L/min with granular resin would not allow detection of
phenols at ambient levels. An anion exchange membrane was used to achieve higher sampling
rates.
This is a schematic representation of the inlet for a PS-1 high volume air sampler.
This air sampler normally contains a single filter and a sorbent trap. However, we used the inlet
for a series of stacked filters consisting of a teflon coated glass fiber filter, the AG-1 membrane,
a second teflon coated glass fiber filter, and a sodium hydroxide coated glass fiber filter.
We would have preferred a second AG-1 membrane instead of the sodium
hydroxide coated filter as a backup filter. However, in attempting this, we were not able to
achieve the required flow rates. So, we used the sodium hydroxide coated filter in its place. The
analytes are spiked onto the first teflon coated glass fiber filter before it is assembled. The
analytes are then swept onto the AG-1 membrane by the air flow. The phenols have sufficiently
high volatility that they are not retained by this glass fiber filter.
Because of the somewhat fragile nature of the membrane, initial extraction of
neutral compounds is not possible. It is necessary to extract all compounds, then partition
neutrals from the phenols. Our analytical scheme includes extraction of the membrane with 2
percent hydrochloric acid in methanol/methylene chloride, addition of hexane and aqueous
sodium hydroxide and partitioning. The organic layer, which now contains the neutrals, is
discarded. The aqueous layer is acidified with HCl, and MTBE and sodium chloride (for salting out)
are added. After partitioning we discard the aqueous layer, dry the organic layer over sodium
sulfate, K-D concentrate, add the appropriate internal standard, and derivatize with BSTFA. The
extract is analyzed using GC/FID.
Although there are detectable amounts of the alkyl phenols in the neutral fraction,
there is a very good material balance, and recovery of phenols in the acidic fraction is high. The
low recovery for 1-naphthol is thought to be due to its somewhat unstable nature under acidic
conditions.
Continuous 8 hr sampling at 10 L/min demonstrated good collection and retention
of phenols following a vapor phase spike, generally >60%. The less volatile phenols (e.g.
pentachlorophenol) are left on the teflon coated glass fiber filter. For these phenols, we still
achieve a very good material balance. Analytes were not detected on the back-up sodium
hydroxide glass fiber filter.
The associated GC/FID chromatograms show well resolved analyte peaks and a
clean blank.
Conclusions, then, a strong anion exchange resin can be used to efficiently sample
phenols from air and water. It provides high collection efficiency, high retention efficiency, and
field portability.
570
-------
Are there questions?
MR. TELLIARD: Any questions?
(No response.)
MR. TELLIARD: Thank you, Hazel.
Thank you very much for your attention. I would like a round of applause for our morning
speakers.
It is lunchtime. For those of you who come back, we have a real treat. You can
buy a performance-based method, you can purchase accreditation, whatever you want this
afternoon. So, come on back, and we will see you right after lunch.
(WHEREUPON, a luncheon recess was taken.)
571
-------
572
-------
Anion Exchange Resins
for Collection of
Phenols in Air and Water
Hazel Burkholder, Marielle Brinkman, and
Marcia Nishioka
Battelle - Columbus, OH
Jimmie Hodgeson
EMSL-EPA - Cincinnati, OH
Joachim Pleil
AREAL-EPA - Research Triangle Park, NC
-------
Air and Water Sampling with PUF
PUF: Polyurethane foam
Air: pesticides 240 L/min (TO4)
Air: PAH 20 L/min (IP-7, TO-13)
Air: pesticides 4 L/min (IP-8)
Water: PAH [Saxena et al; ES&T 11, 682, 1977]
-------
Current Methods for Phenols
Waste Water: Liquid-liquid extraction (604)
Air: Aqueous NaOH impinger (TO8)
Not validated
1-5 ppb
-------
Retention and Elution of Phenols and Organic Acids
from Anion Exchange Resin (AG MP-1)

Initial State: Suspend AG MP-1 in solvent in an open chromatography column;
resin sites are -CH2N(CH3)3OH.

Retention: Attach phenol or acid to AG MP-1.
  Added material: phenol or acid in H2O
  Resin sites: -CH2N(CH3)3O- (analyte anion bound)
  Eluted material: H2O, neutral molecules, basic molecules

Elution: Remove phenol or acid from AG MP-1.
  Added material: HCl : MeOH : MeCl2
  Resin sites: -CH2N(CH3)3Cl
  Eluted material: phenol or acid (neutral form), excess HCl, MeOH, MeCl2
-------
Strong Anion Exchange Resins
Granular: BioRad AG MP-1
  38-75 um
  75-150 um
  150-300 um
  300-1100 um
Membrane: BioRad/3M AG-1
  5 um particles in Teflon ("Empore")
-------
Environmental Detection of Phenols
Waste Water: 1 ppb (1 ug/L)
Drinking Water: 0.01 ppb (0.01 ug/L)
Near-Source Air: 1 ppb (~7 ug/m3)
Ambient Air: 0.1 ppb (~0.7 ug/m3)
-------
Analyses of Phenols
                   Detection Limit    Instrument
Underivatized      1-10 ug/mL         GC/FID; GC/MS
BSTFA              1 ug/mL            GC/FID
                   0.1 ug/mL          PCI GC/MS
PFBBr              0.01 ug/mL         GC/ECD; NCI GC/MS
-------
Sampling and Detection
                    Water                      Air
                    Waste       Drinking       Source      Ambient
Lower Detection     1 ug/L      0.01 ug/L      7 ug/m3     0.7 ug/m3
Volume              1 L         1 L            7200 L      7200 L
                                               (12 hr x 10 L/min)
Final conc          1 ug/mL     0.01 ug/mL     50 ug/mL    5 ug/mL
-------
Advantages of Anion Exchange for Water
• No separatory funnels
• No continuous extractors
• No refluxing solvents
• No emulsions
• Reduced volumes of toxic solvents
• High extraction efficiency
• High retention efficiency
— Chemical bond — no breakthrough
-------
Advantages of Anion Exchange for Air
• No glass impingers
• Lightweight
• High collection efficiency
• High sampling rates (>5 L/min)
• High retention efficiency
— Humid air has no effect
-------
Disadvantages of Anion Exchange
• Efficient passive sampling -
must protect before use
• Phenols in styrene divinylbenzene
polymers - must clean thoroughly
-------
LSE of Phenols and Acids
from Water
Silanized glass
chromatography
column
0.5 g AG MP-1
Silanized
glass wool
1 mL of 0.1 ug/mL
analyte mix in MeOH
BioRad AG MP-1
100-200 mesh resin
70-150 um, chloride form
584
-------
Analytical Procedure
Elute AG MP-1:
  6 mL 2% HCl in MeOH:MeCl2 (20:80)
  5 mL MeCl2 (x2)
Na2SO4 (ca 2 g) drying column
10 mL H2O
130 uL conc HCl
Discard aqueous layer
K-D concentration
Add IS: 3,4-diCH3-phenol for phenols
        4-Cl-benzoic acid for acids
Acids
  - Solvent exchange to MTBE
  - Methylate
  - GC-ECD
Phenols
  - Solvent exchange to acetone, dilute to 1 mL
  - PFBBr derivatization
  - GC-ECD
585
-------
1 ug in 100 mL vs 1 ug in 1 L
LSE of Phenols from Water with AG MP-1, Methylation and GC-FID
[Bar graph comparing recoveries: 1 ug in 100 mL vs 1 ug in 1 L with silanized glassware]
-------
LSE of Phenols from Water with AG MP-1, PFBBr and GC-ECD
Spike: 0.1 ug; Analysis: 1:10 dilution
[GC-ECD response vs. retention time (min)]
-------
LSE of Acids from Water with AG MP-1, Methylation and GC-ECD
Spike: 0.1 ug
[GC-ECD response vs. retention time (min); labeled peaks: diCl-acetic acid,
4-Cl-benzoic acid (IS), 2,4-diCl-benzoic acid, 4-NO2-phenol, 2,4-D, PCP, 2,4,5-T;
blank shown below]
-------
Method Recovery and LSE Efficiency of AG MP-1 for Acids
Simulated 0.1 ug/L Drinking Water, Methylation and GC-ECD
[Bar chart of percent recovery: Method Recovery mean = 79 ± 9; LSE Efficiency mean = 79 ± 12]
-------
Vapor Spike Experiment
[Schematic: GC oven at 55°C; He carrier gas sweeps vapor-spiked analytes onto the resin]
590
-------
Analytical Procedure
• Elute 1 g AG MP-1 with 10 mL 2% concentrated HCl in
  15:85 methanol : methyl-t-butyl ether
• Add to separatory funnel:
  15 mL 15% aq. NaCl
  7 mL methyl-t-butyl ether
Aqueous Layer: discard
Organic Layer:
  K-D concentration to 1 mL
  IS for GC/FID: 3,4-diCH3-phenol
  IS for GC/ECD: Br2F8-biphenyl
  Split: 800 uL to GC/FID; 200 uL diluted to 1 mL, methylated, GC/ECD
-------
Phenol Standard GC/FID
[GC/FID chromatograms of high and low concentration phenol standards]
-------
Recovery of Phenols vs Face Velocity at 100 mL/min Sampling Rate
Spike, 200-400 Mesh AG MP-1
[Bar chart of corrected recovery (%) by phenol class (alkyl, chloro, nitro, other)
at the two face velocities; corrected recoveries shown: 98, 102, 76, 97, 73, 101, 90, 105]
-------
Recovery of Neutral Compounds in Neutral Solvents
Biphenyl: 89%
Naphthalene: 84%
Fluorenone: 91%
-------
Demonstration of Separate Extractions of Neutral Compounds and Phenols
Experiment: Spike 500 uL solution of analytes to dry AG MP-1
            Extract AG MP-1 with MTBE (x3) and methanol (x2)
            Extract AG MP-1 with acidic methanol:MTBE

Results:           Average Total Recovery in      Average Corrected Recovery in
                   Neutral Solvent Extracts, %    Acidic Methanol:MTBE Extract, %
Alkanes            95                             0
Aromatics          98                             0
Ketone             96                             0
Alkyl Phenols      0                              86
Chloro Phenols     0                              104
Nitro Phenols      0                              89
Other              0                              100
-------
[Ambient air GC/MS chromatogram; labeled peaks include C2-alkyl phenol, C3-alkyl phenols,
ACA, palmitic acid, and PHTH]
-------
Air Spiking with AG-1 Membrane
[Hi-Vol air sampler inlet schematic, d = 9 cm, air at 10 L/min with phenol spike;
stacked filters with 1/8" gaskets in order: TCGFF, AG-1 membrane, TCGFF, NaOH-GFF]
-------
Analytical Method for Membrane AG-1
Extract: 2% HCl in 65:35 MeOH:DCM
Add: hexane, aqueous NaOH
Partition: discard organic (neutrals)
Add: aqueous HCl, MTBE
Partition: discard aqueous
Derivatize: BSTFA
-------
Analytical Method Recovery
Alkyl                              Chloro
phenol        96 (2*; 98+)         2-Cl          89
2-CH3         84 (11; 95)          4-Cl          94
4-CH3         87 (8; 95)           2,4-diCl      93
2,4-diCH3     55 (34; 89)          pentaCl       100
Nitro                              Other
2-NO2         99                   1-naphthol    63
4-NO2         88                   2-naphthol    83
* Recovery in neutral fraction
+ Total recovery; material balance
-------
Recovery of Vapor Spike
AG-1 Membrane, 10 L/min, 8 hr
Alkyl                                Chloro
phenol        72 ± 1                 2-Cl          NT
2-CH3         60 ± 4                 4-Cl          81 ± 4
4-CH3         74 ± 2                 2,4-diCl      75 ± 0
2,4-diCH3     35 ± 5                 pentaCl       12 ± 4 (79*; 91+)
Nitro                                Other
2-NO2         60 ± 6                 1-naphthol    46 ± 2
4-NO2         52 ± 17 (49*; 101+)    2-naphthol    57 ± 8
* Recovery from TCGFF
+ Total recovery
-------
[GC/FID chromatograms: AG-1 membrane blank (showing surrogate recovery phenol),
AG-1 membrane phenol spike at 10 L/min (phenols as BSTFA derivatives), and a
phenol standard; x-axis: elution time, minutes]
601
-------
Conclusions
Strong anion exchange resins can be used to
efficiently sample phenols from air and water
• High collection efficiency
• High retention efficiency
• Field portable
-------
Acknowledgment
Battelle acknowledges support for this research
from U.S. EPA on Contracts:
68-DO-0007 AREAL/RTP
68-O2-4127 AREAL/RTP
68-CO-0003 EMSL/Cin
-------
604
-------
MR. TELLIARD: Could we get started, please, for
our afternoon session?
This afternoon, we are going to address basically two small issues in the first
session. One is a derivation of an approach to methods development consolidation and use
called, basically, performance-based methods. In addition, a later speaker will talk on lab
certification.
The application of performance-based methods are going to be expressed by Mike
Conlon from the US EPA and Richard Turle for Environment Canada.
Our first speaker is Mike Conlon from the Office of Ground and Drinking Water,
Drinking Water Standards Division, and he is going to give you basically the Agency's approach
as to the implementation or use of performance-based methods.
MR. CONLON: Thank you, Bill.
As some of you know, I have been associated with Bill for almost 18 years now
in one way or another. I have spent a lot of Bill's money; Bill has spent a lot of my money. But
in spite of all that money we spent, this is the first time he has ever let me come to Norfolk and
I thank you for the opportunity today.
One of the things that a member of the audience said to me on my way up here was "...I
remember when you were Bill's boss..." and it was funny, because I tried to think of when I was
Bill's boss. The best I can ever remember is being a tolerated associate, never a boss.
Well, maybe now as we get into performance-based methods, Bill and I will
probably have not 100 percent meeting of the minds, but at least we will come a little bit closer.
I think both of us realize that there are major changes that need to be made, some major steps
that need to be taken in the way the Agency approaches the specification and the acceptance of
alternative chemistry methods.
Jerry, could you put that first slide up there?
I want to tell you I use slides that I steal from other people. I don't believe in
coming up with brand new stuff all the time.
This particular slide...and for those of you in the back of the room who cannot see
the text, don't worry. The thing I want you to really focus on is the expression on the
decisionmaker's face.
605
-------
EPA IM/DA
Benefits:
• Improved decision making
• More flexible/adaptable organization and systems
• More effective identification, definition, collection, storage, and dissemination of data
• More efficient development and use/reuse of systems
From the brain of Rick Jackson
You have all seen that glazed over look, the vacant stare. You have all contributed
to that. You have helped him have that glazed look, that "...gee, what the hell do I do now..."
kind of look. All too often that look is caused by information that doesn't fit the decision being
made. And what I would like you to keep in mind is there is no use for chemistry data unless
it results in improved decisionmaking and putting some light behind those eyes. It is not good
enough to simply know that you have measured something.
My purpose is to help improve that decisionmaking process, and very often, quite
frankly, the character on this slide has become a self-portrait because we didn't know just how
information was developed. But in spite of the need for certainty we want to see more flexible
adaptation in how we generate information.
We also need more explicit organization and systems, and I want to see more
effective identification, definition, collection, storage, and dissemination of the information.
Now, as you have suspected that all applies to chemistry and the work that we do.
This particular slide, however, happened to be put together by people in an Information and Data
Processing group. Yet, we all have the same objective and in many respects a common problem.
606
-------
If I am successful this afternoon, you are going to go away from here saying "...I
didn't learn anything new".
"...All he is talking about is going back to some old basic science principles...."
That will mean that I have been successful if you come up with that understanding.
Telliard has accused me of being a marketeer of performance-based methods, and
that is an understatement. If you have any sense when I am done here today that I am neutral
on the subject;...if you leave your name and address, I will come by your house and preach in
your front yard.
I am not neutral. I believe very strongly that the Agency has to take some steps
very quickly and needs to change the way it has done business.
I get ahead of myself. Over the years...can I have that next slide? We got started
Background
Asynchronous adoption of environmental
legislation
Validation and approval processes varied by
program
Formal evaluation/validation processes
cumbersome
Formal equivalency processes basically nonexistent
From the brain of Llew Williams
in this business with basically the adoption of different kinds of legislation. We had a bunch of
uncoordinated and redundant methods developments. We had 15 or 18 different laboratories hi
the Agency, each one being perfection itself knowing what the answer was, not willing to
recognize that anyplace else in the country had any skill, talent, or knowledge that they
themselves didn't possess at a higher level of understanding.
607
-------
We had four different validation and approval processes that varied by program.
Now, the purpose of that was not to keep the folks outside the Agency off balance; it was to
keep the bureaucrats in charge of the methods. "...Make sure that my method of approval was
just a little bit tougher than yours..."
We had formal evaluation and validation processes that took forever. Now, you
are going to say hey, this isn't all in the past, and yeah, that is the point of all this. Some of it
needs to be in the past.
And we had some formal equivalency processes that were basically nonexistent.
I have got to tell you a little story. About a year ago, I sent a microbiological
procedure to Cincinnati to have it validated. The term of art is, "...is the method equivalent?.."
And I got a memo back that said no, it is not equivalent.
So, I called them up, and I said I had looked at the data, and the test looks to be
more sensitive, more accurate, if there is such a thing, less biased maybe, and it seemed to be
faster, and it cost 10 percent of what the conventional method cost. I asked, "what is the
problem?"
He said well, you asked if it was equivalent. This one is more sensitive, costs
less, and is faster. I said oh, okay, now I understand the problem.
Background (continued)
Long leadtime requirement
- One-at-a-time method development
- Within - and among - lab evaluation
- Prolonged public comment period
Public and Congressional pressure
Results of the tension?
- Methods by committee
- Prescriptive methods for contracts
- Sub-optimal science
From the brain of Llew Williams
608
-------
We had long lead times. We felt we had to prove...still do...we have to prove
every scientific fact. We went through one-at-a-time method development within and amongst
labs, we had public and Congressional pressure, and the result of the tension was that we had
methods by committee.
We had prescriptive methods for contracts, and it has resulted in suboptimal
science in both some of our contract efforts and in our regulatory requirements.
Now, before I move to the next slide, I want everybody here who has used an EPA
method without change to stand up in place. I want everybody here who has ever run a drinking
water sample method without change to stand up. (No one stands - but there are chuckles).
Yet our rules say you follow our methods without change, except what is allowed
in the procedure.
In the Past
Take what was available
- Live with it if you could
- Improve it if you could not
Balance quality with practicality/throughput/cost
Emphasize multianalyte methods to address long
list of regulated chemicals
Regulate what could be measured
From the brain of Llew Williams
Quite frankly, in many ways, we get a bum rap. Our methods allow more
flexibility than people are willing to give us credit for, and on the other hand we are not willing
very often in our audit processes to allow the flexibility that exists, because the people who are
doing the audits may not have the full understanding of what the method was supposed to do.
So, the way we deal with that is: it is a hell of a lot safer to say "...do you follow
the methods?..." And the lab says "yes, we follow the methods, especially when the auditor is
609
-------
there. See? And I have them written down right here. This is the EPA book, and I can show
you. I have got finger marks and pencil smudges in absolutely the right places." " Oh, okay.
Well, then, everything is all right."
So, what happens is we end up scamming each other. I am not talking now about
crooks. Okay? I am talking now about good-hearted, well intentioned smart analysts who are
working to give a better, more accurate, more reliable and cheaper result that actually serves the
purpose better.
We all know there are crooks out there. We all know there are cheats and liars,
and you all know who they are. Every one of you has at least got your own list of suspects.
Some of them are in your competitors' labs, and some are in your own, but we will leave that
alone for now, I am not talking about those kinds of problems. I am talking about what should
be legitimate changes in methodologies created in an environment in which they are suppressed.
We have finally come to realize that at a variety of levels in the Agency. It only
took us 3 years, and we even got Telliard to admit it...well, at a sort of superficial level. He is
still not prepared to give up his particular set of methods, but he is willing to consider
alternatives that might also work. We are actually going to use the transcript of this meeting as
evidence in our behalf.
MR. TELLIARD: That is why I am not saying
anything.
MR CONLON: What has happened in the Agency
is that there is a group of people who has gotten together under the auspices of the EMMC. That
is the Environmental Methods Monitoring Council. It is a physical science policy group that is
one of the really good interoffice organizations and has the potential for being one of the good
longlived interoffice organizations.
Its purpose is to try and bring some sense and some order out of this mess
associated with chemistry methods. All of us in the different offices, we all have our methods.
The solid waste people have their method, the air people have a method, the water people have
their method. The water people actually have two methods.
Some of them are validated and some of them aren't. Pick the one you want and
take your chances.
One of the aspects coming out of the EMMC group, though, is they have fostered
a committee called the Performance-Based Methods Committee. It is a group of people who are
kind of remarkable in the Agency.
The committee has been in existence a little over a year. But the remarkable part
is the same people come to the meetings all the time. That is the fascinating part.
610
-------
I have been in the business more years than I care to remember, and the one thing
that never happened on interoffice committees was you always got the same people, because
people would lose interest. In this particular group, the same people are coming, the same offices
are being represented, and they are making material progress.
Performance Based Methods
Write Method in EMMC Format
Calibrate with reagent blank and minimum of 4
standards bracketing expected range
Use DQO's of the Program Office(s) to establish
performance range
Establish precision and bias with a minimum of 4-7
replicate spiked matrices using certified QC materials
at levels required to satisfy DQO's
Verify acceptable performance in a blind PE study
(alternatively, with a certified QC sample from an
external source)
From the brain of Llew Williams
What they have done is they have focused on what it would take to actually come
up with a performance-based method process that would work. The first thing that they agreed
on was that we had to have some target.
The second thing they agreed on was we will always have a reference method.
Now, let me say that again. We will always have a reference method especially in the drinking
water program.
I am speaking now primarily about the Drinking Water Office, but what I say, I
believe, will be applicable to the other offices as well. There will never be a regulation that
deals with chemistry that is politically acceptable unless you can demonstrate that you can
611
-------
measure at the compliance level. So, you will always have to have an analytical method that
demonstrates you can measure at that compliance level.
In the Office of Ground and Drinking Water, that target...or back in the olden
days, some of Bill's QA/QC friends used to call it the DQO, the data quality objective...will be
a derivative of the adverse health effect. It may or may not be a derivative of how well a
method can measure, in fact or how low a method can quantitate. What a method can achieve
and what a laboratory has to demonstrate to be certified are two different questions and should
not be confused especially by us folks at EPA.
We put out a rule last spring...I am sorry, a year ago, and we had an MCL at 600
ppb. We had a required performance level for the laboratory at 0.7 ppb. Now, can anybody
explain to me the logic of that?
I signed it. It went over my desk. I agreed with it. I missed it.
We had a copper number that was fairly high, a copper action level in our lead
and copper rule. We had a required performance level for a laboratory to be certified two orders
of magnitude below the action level. And in those two orders of magnitude was a shift in
measurement technique in which you drove the price up by a factor of four, at least, that
particular problem we have fixed in a subsequent modification to the rule.
So, what we are trying to do in the Drinking Water program is separate the idea
that the target has to be what the reference method is capable of measuring. We will have a
reference method. It will calculate an MDL (or, if Larry Keith has his way, an RQL instead of
our present PQL) and we will also set a separate performance level for lab certification; please
keep that in mind.
Now, I want to talk about how we might go about demonstrating that your own
laboratory could achieve whatever is required.
Remember, there is a reference method. There will also be a separate target, a
nominal concentration X plus or minus some confidence interval, an MDC and confidence
interval and an RQL, with its own confidence interval. Hopefully, the confidence interval will
not overlap the adverse effect level. I mean, that is part of our theoretical objective, to set RQL
requirements such that the upper error band does not exceed the MCL.
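As a purely illustrative sketch of that idea, one could compare the upper bound of a confidence interval on replicate results at a candidate quantitation level against the MCL; the t value, the data, and the single-laboratory framing here are assumptions for the example, not the Agency's procedure.

```python
# Illustrative check of the "upper error band below the MCL" idea.
# The 95% two-sided t value (6 df) and the replicate data are assumptions.
import statistics

def upper_bound(replicates, t=2.447):
    """Upper bound of a confidence interval on the mean of the replicates."""
    n = len(replicates)
    return statistics.mean(replicates) + t * statistics.stdev(replicates) / n ** 0.5

mcl_ug_per_L = 10.0
candidate_rql_results = [7.6, 8.4, 7.9, 8.8, 8.1, 7.5, 8.3]   # hypothetical
ub = upper_bound(candidate_rql_results)
print(f"upper bound {ub:.1f} ug/L {'<' if ub < mcl_ug_per_L else '>='} MCL")
```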
In the past, the condition was you had to live with what was available. You
balanced quality with practicality and throughput, and you emphasized the multi-analyte methods if you
could. And we regulated what could be measured.
I am not advocating that you deviate from any rules. Our rules say follow the
method, you follow the method. Please do so until we get these new rules in place.
612
-------
The Performance-Based Methods Committee has actually looked at the situation
that they believe would provide adequate documentation for somebody using an alternative method
or a derivative of one of the Agency methods. These could include a fairly major significant
change, maybe even a shift in solvent, maybe even a shift in extractions, who knows, maybe even
a shift in detectors.
Basically, it comes down to the idea that there would have to be an initial
validation. There would have to be, more or less, a routine validation in the particular matrix,
and then some performance check stuff.
First of all, they established data quality objectives (DQOs). Those, as I said,
would come from the programs and for drinking water derive from toxicological data. The
DQOs would be derivatives of the adverse effects information, or maybe a combination of what
we wanted to achieve using a specific technology - or what we thought was allowable for other
reasons under a particular drinking water rule.
To benefit from this, you would have to have a written method, and I will say a little
bit more about that later. You would have to have a required performance range. If I have an
MCL, for example, that is at 10 ppb, we are not going to let you use a method that operates from
0.5 to 1 ppb and then goes to hell and gone when you get above 2 or 5 or vice versa.
I know that seems pretty silly to have to say that out loud, but it is amazing some
of the stuff that the Agency gets sent.
Initial Validation Criteria
Established data quality objectives
Written method
Required performance range
Calibration
Precision
Bias
Acceptable performance in PE study
Detection limit
Holding time
Preservatives
Surrogates, where applicable
613
-------
You would have to have, obviously, some calibration data, some precision data,
some bias data. For those of us who are a little older, bias is now what we call accuracy.
Acceptable performance in a PE study.
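As an illustration of how a laboratory might document those precision and bias numbers from replicate spiked matrices, here is a minimal sketch; the data and the choice of reporting mean recovery and relative standard deviation are assumptions for the example, not a prescribed calculation.

```python
# One way a lab might summarize precision and bias from replicate spiked
# matrices; the data below are hypothetical.
import statistics

def precision_and_bias(measured, true_value):
    """Return (bias as mean % recovery, precision as % RSD of the results)."""
    recoveries = [100.0 * m / true_value for m in measured]
    bias = statistics.mean(recoveries)
    precision = 100.0 * statistics.stdev(measured) / statistics.mean(measured)
    return bias, precision

# Seven replicate spikes of a certified QC material at 10 ug/L (hypothetical)
measured = [9.4, 10.6, 9.9, 10.2, 9.1, 10.8, 9.7]
bias, rsd = precision_and_bias(measured, true_value=10.0)
print(f"bias (mean recovery) = {bias:.0f}%, precision = {rsd:.1f}% RSD")
```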
Let me answer the question now. If we don't have PE samples available, then
there would be a provisional acceptance. Of course, on the other hand, if you have done your
calibrations right, then participation in a PE study is a useful but not necessarily essential part
of the process except from a regulatory point of view.
I don't believe that the Agency is ever going to accept alternative methods
without participation in fairly serious PE studies, and I don't think it should.
You would have to specify detection limits, holding times, preservatives, and
surrogates where applicable.
Jerry?
Routine Performance Criteria
Calibration, precision, bias
Internal and external check standards
PE studies, surrogates, where available
Independent performance samples
Matrix-spiked duplicates
"System blank" (through whole method)
Routine calibration checks involving external or
internal reference materials over required range of
the method
Acceptable results on blind PE study or analysis
of external, certified QC sample
"Batch" = <20 samples of similar matrix prepared/
analyzed together or in series
We would require documentation on routine performance criteria, calibration,
precision and bias information again, internal/external check standards, PE studies, surrogates
where available, independent performance samples.
614
-------
This is a little bit different from what we would mean by PEs, a performance
evaluation sample in a sort of round robin run by the Agency. This probably will create a
whole second market for people who are supplying performance standards.
Here comes the one that all of you folks know most about, the matrix spiked
duplicates. We have played these games in the Agency...and games is a bad word; it is not
at all disingenuous. We have pretended that somehow the standard methods work in all the
matrices, and then we turn around and we say, in order to be able to run a method, you must
test it in your matrix.
By the time people get done running a method in their matrix, very often, they
have made changes in the method. You have to do it in order to make the method work.
What we are doing is we are sort of fessing up to that condition, and in a
regulatory context being a lot more open with it.
We would require you to run system blanks, routine calibration checks
involving external or internal reference materials over the required range of the method,
acceptable results on a blind PE, and...here comes one of the stickers for the high output
laboratories...we would begin to define batches as 20 samples in a similar matrix and begin to
require specific calibrations in those intervals.
There is no free lunch. This is not necessarily going to result in less
documentation in your laboratories. It would, however, provide you with a ratification of
some of the work changes you have already made, and it would provide you with a lot more
flexibility in coming up with the innovative, cheaper techniques.
I believe very strongly that unless we make some serious changes in
measurement cost, first, it is going to kill us politically, then it is going to kill us
economically, and then it is going to kill us physically.
I think we are in a situation in this country right now where we have need of
analytical data, and the data costs too much. Unless we do something to help the venture
capital people or the innovative technique folks to move toward faster and better analytical
measurement techniques across the board, not just in drinking water but in everything else, all
of it will go for naught, because the country will finally decide that they can't afford it, and
compound-by-compound regulation, over the next ten-year period, will be in jeopardy.
I don't think that is going to happen. I think what will happen is we will come
up with better and faster analytical techniques. Each of you who has spent considerable time
in the laboratory probably has in your heart of hearts one or two ideas that will work better,
places where we could save a hell of a lot of money and improve our accuracy at the same
time.
615
-------
What we are trying to do...there is a growing handful of us in the
Agency...what we are trying to do is create an environment in which you can make positive
changes in relative regulatory safety.
Let's go to the next slide.
Performance Check Frequency
System Blank - Each batch or new reagent
Calibration check - Undecided
Matrix spiked duplicates - Each batch
PE study participation - 1 or more/year
Surrogates - As in reference method
Finally, we have a performance check frequency requirement, run system
blank, calibration check, matrix spike dups, et cetera, et cetera.
Now, all of this is great, but if you are actually in a laboratory, the real hard
part is what do I have to do to prove it. One of the members of our Performance-Based
Methods Committee is a fellow by the name of Joe Slayton, who, I have got to tell you, if Joe
tells me to jump out a window, I will do it, because I know good things will happen.
Joe is a pretty bright fellow.
MR. TELLIARD: I will have to talk to Joe.
MR. CONLON: Bill is going to talk to Joe and
see if he can arrange that.
But Joe has put together a draft, and some of us have critiqued it a little bit,
a draft of what a basic documentation package might look like. What would a reporting form
look like?
616
-------
As I said, there is no free lunch. Let's go to the next slide.
To be able to use one of these methods, you are going to have to keep it in our
format. It is the only thing us bureaucrats got left. We are going to require format if nothing
else.
You have got to calibrate with the reagent blanks, et cetera, et cetera, et cetera.
The key to this, though, is going to be in the documentation. One group within
the Agency opposes the use of performance-based methods, because they say it will make
audits and lab inspections harder. They will actually have to look more and more at data. I
agree that for some auditors, this program means more work. I believe we ought to be
looking more and more at data anyhow. Reviews of procedures alone don't get you answers
to the tougher questions.
So, what we are going to do, is to trade off a much more explicit, consistent
documentation process for more flexibility in the actual use of whatever measurement
techniques that you choose.
Jerry Thoma with Environmental Health Laboratories characterizes our
approach as loosening the grip and tightening the control, and I suspect that he is probably
right. We are not giving up all that much. In fact, what I believe will happen as a result of
this new documentation process is that the science will actually improve.
People will feel comfortable exercising more scientific judgment. There will be
more responsibility on the analyst, and there will be more responsibility on the laboratory
management to see that what comes out of the lab is actually a correct answer, not one that
simply was generated by an EPA-approved method.
What I would like to do is distribute 100 copies of this documentation to
those folks who happened to get here right at 1:30. So, as the copies pass in front of you,
if you weren't sitting here at 1:30, we will have to get you your copy later.
617
-------
Performance Based Method System
Checklist (Initial Demonstration of Method Performance1)
Program:
Analyst:
Analyte:
Matrix Type2:
1 Performed Each Time There Is A Change In Equipment, Personnel Or Procedure
2 Major Matrix Types (Wastewater; Drinking Water; Hz. Liq.; Hz. Solid; Air)
From the brain of Joe Slayton, et al.
-------
Criteria:                                        Results3:

DQOs (Program Specified4)
Written Method (EMMC Format)                     Y/N
Copy of "Reference Method" On-Site               Y/N
Listing of Difference Between PBM
  & "Reference Method"                           Y/N
Performance Range5                               ( - )
Linear Working Range/s                           ( - )
Calibration Stds. (units6)                       Mblk: : : :
Calibration Curve Available:                     Y/N
Slope & Line (Sensitivity):

3 All Associated Supporting Data Necessary To Fully Reproduce These Analytical
Results Must Be Retained On File
4 Reflect Program Needs And "Reference Method" Performance
5 The Conc. Range Specified By The Program DQOs Is Of Primary Importance.
However, Additional Ranges May Be Delineated.
6 Conc. Of Calibration Standards Bracket The Measured Conc. For The Samples
And Lowest Standard Is At 4 Times The MDL
From the brain of Joe Slayton, et al.
-------
Preservatives & Holding Times7:
Interferences8:
Qualitative Identifications:
Performance Evaluation Study:
"Performance Evaluation"9:
Study Title/#:
Surrogate Recoveries:

7 If Different From "Reference Method"
8 Detected During Matrix Spikes, Experience Or As Indicated In The Literature
9 List Analytes For Which "Not Acceptable" Results Were Obtained. Corrective
Actions Taken Must Be Recorded.
From the brain of Joe Slayton, et al.
-------
Spike Levels (Use External QC Material):
Number of Matrices:
Matrix Description:

                              Lab Pure   Sub        Sub        Sub        Sub        Sub
                              Water      Matrix A   Matrix B   Matrix C   Matrix D   Matrix E   DQOs
Precision (Std. Dev. (n-1))
Bias (% Recovery)
Method Detection Limits

From the brain of Joe Slayton, et al.
-------
Analyst's Name:
Signature:
Date:

Manager's Name10:
Signature:
Date:

QCO's Name:
Signature:
Date:

10 As Personnel Are Replaced, The Information On This Form Must Be Signed By The Current
Responsible Official.
From the brain of Joe Slayton, et al.
-------
Performance Based Method System
Checklist (Continuing Demonstration of Method Performance)
Program:
Analyst:
Analyte:
Matrix Type11:
Matrix Supplement:
11 Major Matrix Types (Air; Wastewater; Drinking Water; Hz. Liq.; Hz. Solid)
From the brain of Joe Slayton, et al.
-------
Criteria:                            Frequency Required12    Results    Acceptable13

System Blank14:                      2/Batch16                          Y/N
Calibration15:
  Linear Working Range Verified      1/Batch16                          Y/N
  Calibration Check Std.             1/Batch16                          Y/N
  Fresh Ext. QC Material                                                Y/N
12 Reflect Program DQOs And "Reference Method" Performance
13 The Continuing Performance Checks Must Be Acceptable Or The Problems Must Be
Corrected And The Affected Samples Reanalyzed
14 Carried Through All Steps Of The Method
15 Conc. of Calibration Standards Bracket The Measured Conc. for the Samples And
Lowest Standard Is At 4 Times the MDL
16 A "Batch" Is Defined As "A Group Of Samples Not Greater Than 20 Of A Similar
Matrix That Are Analyzed Together Or In Series"
From the brain of Joe Slayton, et al.
-------
Performance Evaluation Study:
"Performance Evaluation"17:             1/Yr              Y/N
Study Title/#
Surrogate Recoveries:                                     Y/N
Matrix Spike Duplicate Analyses         1/Batch/Matrix    Y/N
17 List Analytes For Which "Not Acceptable" Results Were Obtained. Corrective Actions
Must Be Recorded
From the brain of Joe Slayton, et al.
-------
Analyst's Name:
Signature:
Date:
Quality Control Officer's Name:
Signature:
Date:
Manager's Name18:
Title:
Signature:
Date:
18 As Personnel Are Replaced, The Information On This Form Must Be Signed By The Current
Responsible Official.
From the brain of Joe Slayton, et al.
-------
There actually will be complete copies in the proceedings of an updated
version of this form. It is basically a 5-page form; when and if you fill it all out, you will
presumably have entered all the data pieces, and I would expect people to understand and be
able to judge whether or not your method actually was equivalent to, maybe even better than,
or less than the reference method. In any event, this documentation certainly should satisfy
most auditors and inspectors, and demonstrate that the method you were using would measure
what it was supposed to measure.
When we put out the proceedings, there will be a name and address where you
can send your comments. If you want to send them before then, send them to me. My
address is in the program. And we will see if we can consolidate them and begin to use a
more formal docket process to review the comments we receive.
We are very serious. We are going forward with this in the drinking water
program. We will probably make it a part of the so-called 6b proposal that is due out next
year.
In that proposal, we will specify analytical requirements for somewhere
between 15 and 20 compounds. This notice will articulate what we think the reference
methods are capable of achieving and what level of performance is necessary for a lab to be
certified. In order to maintain some consistency between the old way in which we calculated
MDLs and PQLs and the old ways we required people to document what they were doing, we
will set some equivalency between the reference method and what would have to be
demonstrated using this new performance-based system.
But in the main, the record keeping requirements would be pretty much what is
being passed out here.
There will be a separate piece of paper that will be available later on, maybe at
the door, and it is a form that talks about certification of laboratory results. One of the
essential parts of this program is a requirement that the analyst and the management of the
laboratory certify the results and that what is in this 5-page documentation set that you see
was actually what was done.
627
-------
WE CERTIFY that the methods
in use by this facility
for the analyses of samples
for the programs of the
U.S. Environmental Protection
Agency have:
-------
1. Met the Initial and Continuing
Demonstration of Method
Performance Criteria as specified
under the PBMS.
-------
2. Written SOPs available on site
in EMMC format (a copy of the
associated "Reference Methods "
are also available).
-------
In addition, WE CERTIFY that the data
associated with the Initial and Continuing
Demonstration of Method Performance
Criteria are complete (including the
mandatory PBMS Checklist), that all
necessary "raw" data to reconstruct the
analyses are retained, and that the
associated information is well organized
and available for review by authorized inspectors.
-------
Analyst's Name:
Signature:
Date:
-------
QCO's Name:
Signature:
Date:
-------
Manager's Name:
Title:
Signature:
Date:
-------
That is not an accident. There is only one criminal violation. It is not criminal
under the Safe Drinking Water Act to deliver bad water. It is only criminal to lie about it or
to submit or to generate bad data.
So, what we are going to do is require generators of information under the Safe
Drinking Water Act to certify its accuracy.
In exchange for that, what you get is a much greater level of flexibility in your
analytical techniques than the Agency has ever allowed before.
Thank you.
635
-------
QUESTION AND ANSWER SESSION
MR. TELLIARD: Any questions, comments,
suggestions? Let me, before Jim gets going, point out a couple of things that Mike touched
on.
Presently, in the 600 series, the dirty water techniques as opposed to the 500,
Mike's methods, there is a provision that you can alter the methods. This is one of the things
that Mike points out. We don't advertise this for some reason. I don't know why. But you
can make changes to a method, and it specifies what you must do to show equivalency.
In the case of the 1600 and 600 series methods, it basically means if you want
to change an extraction technique, a column, a temperature ramp, you could probably do that
and still meet the method specs. You run the first four start-up samples like you did to show
proficiency, document that, keep it in the back of your folder so that when your gestapo kick
in your door and come to audit you, you say see, I did that and that is why I went to...I
wanted to run my samples through peanut butter before I analyzed them.
So, that is already in existence. That capability for change and flexibility is
already there.
The other thing is there are certain methodologies for which performance-based
methods, of course, do not apply, and those are definitive methods such as BOD, adsorbable
organic halides, things along that line where the actual technique defines the measurement.
Oil and grease, one of our favorites, the cutting edge of science.
All of those methods that, by their procedural format, define the analyte would,
of course, not be amenable to that.
The other thing that is inherent in this is that it does take away some of the
vulnerability that we all know you do it out there but then you don't tell us and if you get
caught, then we have this thing called persecution, prosecution, or plagiarism, whatever it is.
By having this flexibility and documentation, it allows you to make these changes as
necessary in your matrix and not have the liability for it.
So, now that I have made my 2-cent advertisement, go ahead.
MR. RICE: Jim Rice. It is all very interesting.
I haven't had a chance to look at it any more carefully than your presentations, so with that in
mind, I did notice one thing in passing that it said the data quality objectives will determine
the detection limit. Now, that is intriguing.
636
-------
How do you...is someone by fiat simply going to determine that is the detection
limit of the method? We have been talking at great length about how one really establishes a
valid detection limit.
MR. CONLON: Well, let me answer that,
because if I said that, I was wrong. I do not believe that an MDL or a method detection limit
should be the DQO. In fact, in my example where I said we had an MCL, a maximum
contaminant level of 600 and then for some reason, we had an MDL of 0.6, what I should
have said was that what we should do is recognize and calculate what the MDL is but then,
separately, establish what we think we need to achieve with the method.
In the case of glyphosate, I don't believe we need to be able to measure at 0.6.
I think it is more likely to be somewhere closer to 100.
MR. RICE: Well, I will be happy to send
you a copy of the paper that I gave yesterday, which addresses this area, and it is not
quite that simple.
One of the problems that one really faces in the whole area, with all of our
methods where we try to analyze at trace concentrations in the environment, is that the whole
history is that these methods are constantly pushed to the low end of anything that is
available. When you do that, one of the real problems is in calibration.
In almost all of the areas in the water arena, anyway, there are no
traceable external standards at the trace or detection level. Until you have that, you
have a real problem in adequately defining that level, short of running very carefully
designed interlaboratory studies at that level.
That is what the paper yesterday addressed, and that is a subject that
cannot be handled in any one single lab. You can't bootstrap yourself.
MR. CONLON: Well, I agree that there is some
wisdom in what you say, but I also believe personally that our industry has misused interlab
studies. We have substituted interlab studies for method validations.
Very often, what we have done is we take the data from 5 or 6 or 10 or 20 or
100 laboratories, we pool the data, we do a statistical analysis, and we say that is what the
method is capable of. That is not what the method is capable of; it is what those laboratories
were capable of on the day they ran the method.
And what I propose is that we change the basic validation requirements on our
reference methods so that we spend a lot more time in the Agency when we validate a
reference method, we spend a lot more time and take a somewhat different approach.
637
-------
I am not saying don't do interlab studies, but I am saying recognize that they
are what they are. They are comparisons of competencies amongst laboratories. They are not
necessarily driven by inherent characteristics in the method.
MR. RICE: Not so. If you simply look at...you
are literally saying that all of the error, all of the differences that you observe are truly the
result of inadequacy or incompetence by the people that are running it.
MR. CONLON: Not at all.
MR. RICE: It is not so. There is a very basic
variability to any measurement process. You cannot get around that law. And you can only
measure that in the environmental area at the trace levels interlaboratory, because there are
errors that only that picks up. You can't find them by yourself.
MR. CONLON: No, I didn't mean to imply that
I thought there weren't inherent method errors. There are, but the statistical approach to
defining method error is different than the statistical approach for defining interlab or operator
error.
When you get into operator error and interlab comparisons, the actual numbers
of samples required and the numbers of runs required in each laboratory are generally well
above what we do now.
MR. SHIREY: I am Bob Shirey, Supelco,
Incorporated. As we introduce new products, I will give an example, like a purge trap, the
methodology seems to be very rigid in the selection of traps, very flexible in selection of
columns. We get calls from customers saying your trap is outperforming the existing trap,
but it is not listed in methodology.
And we also see variation, depending on the region that is doing...the region
that they are in. And this has created a great deal of problems for us in responding to their
needs.
MR. CONLON: I think what you describe is
probably more common than any of us want to admit, but it is what I was talking about.
There is an unlevel playing field in many respects.
What we hope to do is substitute this in-house documentation process for
line-by-line approvals. It would be up to you to demonstrate that
your traps did the job, but once they did, then you would be able to use them yourself and to
prove and demonstrate and provide the data to your customers to demonstrate that they
actually work as well as they should.
638
-------
Theresa Prado with Suburban Water Company in Philadelphia makes a very
nice presentation where she has demonstrated that she has a microextraction process that
works great, works better than our extraction process ever did, yet it is not necessarily
approvable under the drinking water rules as they stand today.
MS. MOORE: Marlene Moore with Advanced
Systems. I just want to commend you; it is good to finally see performance-based methods get such excitement
from the Agency. The one question, of course, that constantly comes up when you see these
kinds of definitions is what is a matrix.
MR. CONLON: For us, it is just drinking water.
For Bill Telliard, it can be just about anything.
MR. TELLIARD: We look upon our matrices as
much more challenging.
MS. MOORE: Is there going to be some kind of
consensus as you establish these DQOs that this might be more uniformly defined in all the
programs?
MR. TELLIARD: Well, in the case of, for
example, the Agency has put together a combined or consolidated VOA method which I was
hoping we would talk about this year, but we didn't, in which all three program offices have
now set their levels. So, while it will be the same method, same equipment, same QA, the
performance levels are going to be different, of course, for my samples as opposed to
drinking water or for the contract laboratory program for Superfund, and those are driven by
our DQOs which are a programmatic thing.
But what we have done is we have at least standardized on what a shift is, how
many spikes you run, and what your start-up time is, what instrumentation to use, and also
the instrument standardization.
MS. MOORE: I think you have kind of
answered it by saying that you think there might be a move within the Agency toward just
classifying it by program, just wastewater, drinking water, RCRA, and that it might be
program-specific instead of looking at a matrix as being actually chemically what is the
combination in each sample?
MR. CONLON: Well, if you really take the
theory behind performance-based methods, it means that the matrix is really the sample that
you are running right now.
MS. MOORE: Exactly.
639
-------
MR. CONLON: And it may be that this artificial
distinction we have between a wastewater and a sludge and a solid and a semi-solid and all
that goes away, because, quite frankly, what you ought to be doing is calibrating that
technique in those samples with those interferences that you have on the table right in front of
you, not something that one statute or another happened to define somewhat arbitrarily 20
years ago.
MS. MOORE: Well, I think that is something
that really has got to get into the comments, because I prefer to see us try to generalize things
a little bit more, to stick to program-specific DQOs generalities with more specific
developments having to be done for a particular matrix or a particular waste stream or
whatever you are dealing with.
MR. TELLIARD: I think we should underline
two things here, and I will speak for someone who isn't here and myself. If it is in a
contract, you don't mess with it. There is no such thing as performance-based contracts.
MR. CONLON: And for those of you who are
Superfund or RCRA contractors, I would suggest you write what Bill just said down on the
back of your hand and never ever forget it, or they won't pay you.
MR. TELLIARD: Hi, Yves.
MR. TONDEUR: Hi, Bill. Yves Tondeur,
Triangle Labs. I notice that in your routine performance requirements, you indicated matrix
spike duplicates, and I didn't notice duplicates by itself. Was it an oversight or intentional?
MR. CONLON: That is not meant to be an all-
inclusive list, and please don't take it as that. What we are trying to do is sell the principle.
Okay? If you have additional things that you think we ought to have on that list...
MR. TELLIARD: Let us know.
MR. CONLON: Let us know.
MR. TELLIARD: Thank you. Thank you,
Mike. This is round one. We will talk about this next year, and we will probably have some
more neat and spiffy stuff.
640
-------
MR. TELLIARD: Now we are going to talk about
the Canadian basis for performance-based methods. Richard, you want to come up here?
MR. TURLE: Can everybody hear me? Before I
start my talk, I would just like to give some introductory comments.
One thing I am very conscious of, and I don't think I have ever been so
conscious of it before, is that there is a great difference between Canada and the United States when it comes
down to environmental regulations. I have learned maybe 50 new acronyms this last day or two!
One of the things is that many of the activities that we would call monitoring are
regulated in the United States; they are not in Canada, and I would like to just touch upon the
way we do monitoring methods.
There is no particular requirement that a method be published or that a particular
method be used in much of the work that Environment Canada does under what we would call
monitoring. Federal or provincial labs are free to choose the most appropriate method.
However, I would state there has been a consensus on the most suitable method
for a particular analyte for a particular type of matrix.
Often, the federal or the larger provincial labs have established a method which
is, as it spreads through the community, used by other labs. Agencies contracting out work for
monitoring may decide that they want a particular method used. They will actually, in fact, say
it will be done this way or no way. Alternatively, they may leave that choice up to the
contractor as part of the contract proposal.
Let me just leave that there and then move into the talk. Let's have the first slide,
please.
I would like to acknowledge my co-workers in this. We are from a center just
outside Ottawa within what we call Environmental Protection. This work is financed underneath
Canada's Green Plan which you may have heard about which is essentially spending several
billion dollars over the next few years to improve environmental quality in the longer term in
Canada.
Slide 2. I think one of the things that we recognized is that we have to
occasionally regulate. I think the reasons for this have to be made clear. We are doing this
primarily to protect the environment. It is not just a nice scientific activity.
I think one of the major objectives is definitely to bring about new technologies.
This has certainly occurred with the pulp and paper dioxin regulation which I will be talking
about.
641
-------
It is obviously the basis for enforcement. You can't say to somebody don't do that
unless you have a legal framework.
The other thing it does is it puts a level playing field across all the players in the
industry. For example, we have something like 47 pulp mills in Canada that use chlorine-based
bleaching or did. They are all on the same level playing field. They all have to spend money
to come up to meet the same standard.
The regulatory authority is the Canadian Environmental Protection Act which is
a federal piece of legislation.
Slide 3. The consequences of this are obviously we must have analytical methods
which have appropriate detection levels, and I think I would underscore the word appropriate.
We should establish suitable accuracy and precision for those methods. I believe that we should
have a broad acceptability of the method. It should not be based on one that has been...a method
that is based on one experiment done in one university laboratory which gets down to a super-
low detection level and somebody says that is what we need.
We should use widely available equipment, and that has obviously, in some cases,
made our decision as to which type of method we go. That has certainly influenced our decision
as we were revising our PCB methods. And the method should reflect generally established
procedures.
Slide 4. There are two possible approaches. I guess there is the rigidly, strictly
defined method, highly detailed, which assumes, I think...I think this is the basis...that if you follow
that method, then you must get good results. Perhaps I will have to stand corrected after realizing
there is more flexibility, but I always thought that was the way the EPA did business. I stand
corrected.
I think the performance-based method is you must be very clear about what your
performance means. You must define those criteria. I think it can be flexible methodology as
long as the criteria are met, and this is certainly the approach that we would go.
This didn't take any great decision. It took ten lab managers who were sitting
around a table saying yes, this is the way we go. It is not in regulatory format, but that is the
way we have decided to go.
I must admit we have had a lot of support for this approach from the private
sector laboratories, who feel that this is a good thing.
Slide 5. Performance Based Methods can be built in one of two ways, as far as I
can see. You can use defined recoveries of surrogates which is what we have done with the
dioxin and will be doing with the PCB method. I believe if you have suitable standard reference
642
-------
materials, you could perhaps use this approach for methods which are now well established and
you are rewriting them. Of course, in some cases, there are not suitable reference materials.
If we were, for example, to revisit the lead in gasoline method which we have, I
think we would probably go to one based on standard reference materials, because there is a good
NIST material available. Similarly, in some regulations as I think they will come up to be
revisited, there are enough standards out there in matrix spikes which we could actually use for
that particular purpose.
Slide 6. There are limitations. Bill mentioned that where the method defines the
analyte, for example, AOX. You either follow that procedure and you have an AOX method, or
you don't. Incidentally, Canada is not going to regulate AOX at the federal level.
Alternatively, there is another situation where it is impossible. We have just
revisited the method for doing vinyl chloride in air and in PVC resin, and for the life of us, we
cannot find a way you can really assess performance. It is difficult. There just does not seem
to be an obvious way. Maybe we have missed it, but we cannot do it.
Slide 7. The two examples I would like to talk about are the dioxins and furans
in the pulp and paper industry regs and the new regulations that are in the process of being
proclaimed in Canada for waste disposal and for new electrical products.
Slide 8. Why do we actually, in fact, bother to regulate dioxins and furans? I
think it is pretty obvious. Considering the effects on fish and birds, particularly in two areas in
Canada, British Columbia, which is a marine environment, and the Great Lakes, there are
definitely serious effects on fish and birds, which are well documented. I don't think there is any
doubt in the scientific community that they had to be eliminated.
It would appear that this could be achieved by either eliminating or reducing the
amount of chlorine in the bleaching process and by perhaps improving secondary treatment
in those mills and also by removing the native dibenzodioxin and dibenzofuran from defoamers
which are, in fact, also regulated in Canada.
This has been an amazing regulation in one sense, in that there has already been
more than a 90 percent reduction from 1987 levels, and we are probably on track to achieve a 99
percent reduction in the amount of dioxins and furans going out of the end of the pipes
by the end of this year.
Slide 9. How do we deal with this? Well, this was quite a problem. We actually
had to come up with a regulation stating that effluents must not contain a measurable
quantity of 2,3,7,8-TCDD, or a measurable concentration of 2,3,7,8-TCDF which, multiplied
by 0.1 (the TEF, or toxic equivalency factor), exceeds 5 ppq.
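
For illustration only, the compliance arithmetic described here might be sketched as follows. The function name, the example concentrations, and the use of the limit of quantitation as the operational meaning of "measurable" are assumptions made for this sketch, not the regulatory text itself.

    # A minimal sketch, assuming "measurable" means at or above the limit of
    # quantitation (LOQ), and reading the rule as stated above: TCDD must not be
    # measurable, and TCDF times its TEF of 0.1 must not exceed 5 ppq.
    TEF_TCDF = 0.1  # toxic equivalency factor for 2,3,7,8-TCDF, as quoted above

    def effluent_compliant(tcdd_ppq, tcdf_ppq, loq_ppq):
        """Return True if neither condition described in the talk is violated."""
        tcdd_measurable = tcdd_ppq >= loq_ppq
        tcdf_teq_exceeds = tcdf_ppq * TEF_TCDF > 5.0
        return not (tcdd_measurable or tcdf_teq_exceeds)

    # Hypothetical concentrations, using the 15 ppq LOQ arrived at later in the talk
    print(effluent_compliant(tcdd_ppq=4.0, tcdf_ppq=30.0, loq_ppq=15.0))   # True
    print(effluent_compliant(tcdd_ppq=21.0, tcdf_ppq=33.0, loq_ppq=15.0))  # False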
643
-------
Well, how do you define measurable concentration, or no measurable
concentration? Well, we had to make a decision, and what we decided to use was the definition that was
most commonly available, one that we felt we could handle in the laboratory and that could be
applicable. We decided to take the ACS definition of the limit of quantitation, which, as you
probably well know, is ten times the standard deviation of a sample near the blank.
Slide 10. There it is described schematically. If you have a signal increasing, you
pass through a region of high uncertainty, you reach a limit of detection which is three times the
standard deviation of the blank to the limit of quantitation which is ten times the standard
deviation of the blank, and then you enter your solid area of quantitation.
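
As a minimal sketch of that definition, with invented replicate values: the limit of detection is taken as three times, and the limit of quantitation as ten times, the standard deviation of replicate measurements of a sample near the blank.

    # Illustration only; the replicate values are invented, not study data.
    import statistics

    blank_replicates_ppq = [1.2, 0.8, 1.5, 1.1, 0.9, 1.3, 1.0]  # hypothetical

    sd_blank = statistics.stdev(blank_replicates_ppq)  # (n-1) standard deviation
    lod = 3 * sd_blank    # limit of detection, per the slide
    loq = 10 * sd_blank   # limit of quantitation, ACS definition quoted above

    print(f"LOD = {lod:.2f} ppq, LOQ = {loq:.2f} ppq")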
Slide 11. Well, we decided we needed a process to define this. We decided that
we could not just do it in our own laboratory, and the comments regarding data of one laboratory
have been well noted. We did draft a reference method. That was the first thing. We said that
is how we are going to do this, so that other people have a pretty good idea of our thinking.
The labs had to follow this initially in some of this work. They were free to
modify it to some extent, but they had to tell us what those modifications were, but we asked
them to stick as closely as possible to the draft method.
We made one decision which we said was not changeable, and that was that it was going
to be high resolution GC/MS, at a resolution of 10,000. The method was going to cover tetra
through octa, even though the regulation only essentially regulates the tetra, because we were
certain that we would want and we did require that all data be reported for tetra through octa,
and we decided to base the performance on C13 surrogate standards which were generally
available.
So, those were the decisions that we made up front. We obtained composite
effluents for an LOQ study. We asked some labs to participate. There weren't enough Canadian
labs, so we went to North American labs. We submitted samples to them, and we analyzed the
data. That was the overall process.
Let's look at some individual steps.
Slide 12. This is a key slide. I am not going to read it in detail, but an
important point is that we based it on a 1 liter sample. We have people who are reporting data
on 10 liter samples. They cannot read. It says you must use a 1 liter sample.
Now, that causes us problems, because they are effectively reporting results which
are now measurable, thereby making mills which are in compliance out of compliance.
You know, it says 1 liter.
644
-------
Once you reach the end of that sentence where the curly brackets are all the way
down to that slide where it says adjust to volume, essentially, I don't care what goes on in
between. That is the decision of the lab.
We have used a method which I don't think is very different from that of Method
1613, and I know some labs in Canada are using Method 1613 essentially adapted to this
procedure, and it works well. But if somebody comes up with a particular technique, gel
permeation chromatography, supercritical extraction, I don't care. As long as it meets the
performance characteristics, they are free to do whatever they like between those two points in
the method.
And we distinguish this in the written method by means of bold type of steps
which are mandatory and regular type for those parts of the method where there is choice.
Slide 13. How do we go around preparing a sample? Well, what we went to is
we asked an organization...gee, I am so acronymed out...the Pulp and Paper Research Institute
of Canada, PAPRICAN, to look at the 47 bleach mills. Some mills were already in the process
of cleaning up their act, so that was a bit of a problem. We didn't want to know which ones,
obviously, because there were legal implications.
So, they identified nine mills as possible sources of effluent samples which would
have the right range of analytes. Two mills were selected. We got from those 210 liters of
composite samples. We bottled them up into about 180 bottles, and we confirmed the
homogeneity of each bottle by just measuring the suspended particulate. We found that the
relative standard deviation for that was less than 5 percent which wasn't bad, I understand, for
this type of sample.
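
The homogeneity check amounts to a relative standard deviation calculation across the bottled aliquots; a minimal sketch follows, with invented suspended-particulate values rather than the study's own measurements.

    # Illustration only; the measurements below are invented.
    import statistics

    suspended_solids_mg_per_L = [42.1, 40.8, 41.5, 43.0, 41.9, 42.4]  # hypothetical

    mean = statistics.mean(suspended_solids_mg_per_L)
    rsd_percent = 100 * statistics.stdev(suspended_solids_mg_per_L) / mean

    # Accept the bottling as homogeneous if the RSD is under 5 percent,
    # the criterion quoted above.
    print(f"RSD = {rsd_percent:.1f}%  homogeneous: {rsd_percent < 5.0}")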
Slide 14. We found there were ten labs willing to participate in the study. One
of the differences between our study and the one that was talked about yesterday is we paid the
labs.
Amazing, we got results really quickly. It cost money. It cost us $80,000 to do
that, and that is Canadian dollars, but we went and fought for the money and we got it, because
we said we were on a regulatory time table, and it just could not be changed.
So, each laboratory received eight 1 liter samples of the two effluents as blind
samples. They were just labeled 1 through 8. They also received a set of standards, C13
standards. These are not the same as the EPA standards. They were ones that we had prepared.
They received four GC/MS calibration standards which I believe go lower than the Method 1613
standards and two sets of fortifying solutions.
We were a little sneaky here, because one of these was a blank, and not everybody
realized that.
645
-------
Slide 15. Out of the set of the 10 labs, we got 8 sets of useful data. Basically,
we had to reject 2 labs immediately, because they failed to meet the initial calibration criteria.
In other words, they could not get the criteria that we required on the calibration step on the
GC/MS.
Only three labs met all of the criteria that we required. It was interesting that no
more than two labs failed to meet any one of the performance criteria, and there were five of
these criteria that we required. One lab could not get the instrument detection level of 4 ppq,
and two labs failed to meet 40 percent recovery criteria.
Now, I find it truly amazing that some labs cannot get at least 40 percent
recovery of surrogates in a method like this. There are other labs, I know, who regularly, on
an everyday basis, including the octas, are getting 80 percent recovery.
The proof rinses and the method blanks indicated that contamination was not a
specific problem in this particular method.
Slide 16. Well, what data do you accept and what don't you accept? Well, we
used both Grubbs' and Dixon's tests at the 5 percent level. One lab had an outlier
for TCDD, and another one had one for TCDF, and we did a whole pile of interlab comparisons
between the sets of labs that really indicated that one lab had mean values that were
significantly higher or lower than the other labs, and we found out that their precision on the
duplicates was poor for both dioxin and furan, so we threw out some of that data.
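
As an illustration of the kind of outlier screening described here, a two-sided Grubbs' test at the 5 percent level might look like the sketch below; the lab means are invented, and the study's own Grubbs'/Dixon's implementation is not reproduced in this transcript. SciPy is assumed to be available for the t-distribution quantile.

    # Illustration only; not the study's actual data or code.
    import math
    from scipy import stats

    def grubbs_outlier(values, alpha=0.05):
        """Return (suspect value, flagged?) for the point farthest from the mean."""
        n = len(values)
        mean = sum(values) / n
        sd = math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))
        suspect = max(values, key=lambda x: abs(x - mean))
        g = abs(suspect - mean) / sd
        t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
        g_crit = (n - 1) / math.sqrt(n) * math.sqrt(t**2 / (n - 2 + t**2))
        return suspect, g > g_crit

    lab_means_ppq = [20.5, 21.2, 19.8, 22.0, 21.5, 20.9, 40.3, 21.1]  # hypothetical
    print(grubbs_outlier(lab_means_ppq))  # the 40.3 value is flagged as an outlier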
Slide 17. The maximum difference between the lab mean and the interlab median
was 35 percent for effluent 1, which was around 21 ppq TCDD, and one lab had a mean value
for TCDD 92 percent higher than the interlab median, which really indicated a problem.
Effluent 2 was a lot lower, around 4 ppq for TCDD, so it was a more
difficult sample. So, it is not surprising we had that problem.
Slide 18. The key points of this slide are indicated by the red arrows. The
standard deviations we got for the first effluent were 1.8 for TCDD and 2.7 for TCDF. You can see
the levels there were, roughly, 21 ppq for TCDD and 33 for TCDF.
Slide 19. Similarly, for the second effluent, we had standard deviations of 1.1
for dioxin and 2.4 for furan. The levels were a lot lower, as I said earlier: 4.3 for TCDD and
28 for TCDF.
Slide 20. We managed to pool these. This is where a statistician is a great friend.
This basically said there was no statistical difference between any of the sets of data that we had,
and we came up with this magic number of 1.543.
646
-------
We decided that that, based on a 1 liter sample, was our LOQ. I wouldn't put
great confidence in those significant figures, but that was the number that spewed out of the
computer. So, we in actual fact decided that the limit of quantitation will be written into the
regulation and into the method as 15 ppq for both TCDD and TCDF.
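
Reading the slides as giving a pooled standard deviation of roughly 1.5 ppq, the 15 ppq figure follows from the ten-times-the-standard-deviation definition quoted earlier. The sketch below shows that arithmetic; the pooling helper uses the usual degrees-of-freedom weighting only as an illustration, with invented replicate counts, since the study's exact statistical treatment is not reproduced in this transcript.

    # Illustration only; the replicate counts below are invented, and this simple
    # pooling does not reproduce the slide's 1.543 because the study's own
    # statistical treatment is not given in the talk.
    import math

    def pooled_sd(sds_and_counts):
        """Degrees-of-freedom-weighted pooled standard deviation (illustrative)."""
        num = sum((n - 1) * s**2 for s, n in sds_and_counts)
        den = sum(n - 1 for s, n in sds_and_counts)
        return math.sqrt(num / den)

    # Standard deviations quoted from slides 18 and 19, with assumed counts:
    print(pooled_sd([(1.8, 16), (2.7, 16), (1.1, 16), (2.4, 16)]))

    loq = 10 * 1.543   # ten times the pooled value reported on the slide
    print(round(loq))  # 15 ppq, as written into the regulation and the method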
The method has been published and, as required in Canada, it was Gazetted as part
of the regulation.
So, we actually in fact now have a performance-based method on hand and
available. If people would like copies of this, I suggest they write to me, because I only have
the one with me.
Slide 21. We are now in the process of revising all the PCB regulations. This
is required because regulations obviously change over time. They were looked at about ten years
ago, and it was decided to update them.
They basically cover waste disposal and new electrical products, transformers, oils,
capacitors for fluorescent lights, things like this, and we decided to classify acceptable limits
for various categories of waste and for treatment.
One thing you should be well aware of is that in Canada, we also have a whole
series of provincial regulations which may differ in value from the federal regulations, but our
regulations will certainly cover inter-provincial trade, cover import and export, as well as any
waste generated on federal lands.
Slide 22. In Canada, we define PCB as a chlorobiphenyl which has 3 to 10 atoms
of chlorine. That means mono and dichlorobiphenyl are not regulated substances.
This is often a surprise to people, and we find people in actual fact spending a
great deal of money or did at one time trying to get rid of aroclor 1228 which is largely
unaffected by the regulation.
Slide 23. The destruction regulations are quite stringent. Basically, they require a
high performance, a five nines efficiency, 1 in 1 million, and the release at the end of the pipe
or the stack should be no more than 1 mg/kg of the PCB put into the system. There are also requirements for dioxin,
measured as TEF equivalents, which I think is a very good approach, and also for PCBs, whether
solid or liquid. They are both covered in that regulation.
Slide 24. Treatment systems basically require that at the end of the day, we have
less than 2 mg/kg of PCB. We do allow conversion of the higher chlorinated species to lower
chlorinated species. This seems a good approach, given the fact we are only regulating tri to
deca chlorinated species.
647
-------
The other requirement is that PCBs must not be released into the environment if
greater than 50 mg/kg, and waste oil is sometimes used for road oiling to keep down dust, but
it can't be used if it has more than 5 mg/kg of PCBs. People thought this was a great way of
getting rid of PCBs, but that was stopped pretty quickly.
Slide 25. New product regulations, I think they are basically the same as in the
States. Less than 2 ppm of PCBs in new products. The previous limit set in 1985 was 50
mg/kg.
Our concern now is not material manufactured in North America but goods coming
in from outside North America. Some of the Third World countries are trying to tap into North
American markets, and some of the stuff coming out of Eastern Europe has been, we understand,
for sale which may cause problems, but most of the major North American companies say this
is not a problem for them. They are well below those limits.
Slide 26. So, we decided we should review how we should go about PCB
measurement. Traditionally, it has been extraction and cleanup and quantitation by GC/ECD and
by GC/MS. We actually took a decision that, since MSDs are quite universal these days in
most private and government labs, that was a fairly good approach that we should try and
use. We would not try a high resolution approach.
Slide 27. Traditionally, the quantitation has been done at various levels: as Aroclors,
as homologs, or as individual congeners, and these were decisions that we had to look at.
Slide 28. Measurement based on Aroclors is quite acceptable for a transformer
oil which does contain Aroclors and is not too degraded. Traditionally, there have been various
approaches. The Webber-McCall approach, which has been modified these days for capillary columns, gives quite
good numbers.
Single peaks which are unique to each Aroclor give, I think, poor numbers.
I have found in previous work that I have dealt with that we had numbers a factor of two too
high.
We found that sometimes, when there were complicated mixtures, people completely
blew it because they used the wrong standard, but these approaches are useful for rapid screening. After all,
if it is 1000 ppm, it is out of compliance. It has to be destroyed. It just can't be treated as a
PCB-free sample. So, there is some merit to those approaches.
Slide 29. Specific congener analysis, on the other hand, requires the use of many
congener standards. They are available, but they cost an arm and a leg. Also, we found that the
calculation aspect of the methodology is tedious, to say the least, and there are many errors.
648
-------
One of the problems, of course, is there is no one column that can resolve all the
PCBs that you find in common aroclor mixtures. So, I think we decided that an approach that
would be quite useful would be to look at the homologs.
Slide 30. This obviously lends itself to the GC/MS approach. It is easy
quantitation based on a characteristic ion. We have made the decision that a
representative PCB from each of the homolog groups can be used as a standard, and we can
obtain the total PCB number simply by summing those homologs.
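
A minimal sketch of that homolog-based bookkeeping follows; all response factors, ion areas, and the sample mass are invented for illustration, and real work would of course add the surrogate corrections and calibration checks discussed elsewhere in this talk.

    # Illustration only; numbers and response factors are invented.
    # One representative congener per homolog group supplies a response factor
    # (area per ng) for that group's characteristic ion; each homolog is
    # quantitated from its summed ion response, and total PCB is the sum.
    response_factors = {"tri": 950.0, "tetra": 900.0, "penta": 860.0,
                        "hexa": 820.0, "hepta": 780.0}   # area per ng

    ion_areas = {"tri": 19000.0, "tetra": 36000.0, "penta": 43000.0,
                 "hexa": 24600.0, "hepta": 7800.0}       # summed area per homolog

    sample_mass_g = 0.5  # e.g., transformer oil taken through the cleanup

    homolog_ng = {h: ion_areas[h] / response_factors[h] for h in ion_areas}
    total_mg_per_kg = sum(homolog_ng.values()) / sample_mass_g / 1000.0  # ng/g -> mg/kg

    print({h: round(ng, 1) for h, ng in homolog_ng.items()})
    print(f"total PCB = {total_mg_per_kg:.2f} mg/kg")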
We did note the poor sensitivity of the MSD in comparison with the electron
capture detector as we set out to do our methodology development. We decided, in actual fact,
to do all our work on a very ancient MSD, because we felt if we could do it with that
instrument, then anybody else could with a newer one.
Slide 31. We looked at a whole variety of cleanup methods. We hadn't visited
this for about ten years, and we found out most people were using a sort of "dilute and shoot"
technique, sometimes doing a little bit of cleanup with alumina or silica column, but we found
out with the MSD that it pays to get rid of most of the transformer oil. Indeed, we would like
to have got to 100 percent removal of transformer or mineral oil away from the PCB.
We use liquid-liquid extraction with DMSO, DMF, or acetonitrile. DMSO in our
hands seems to be the best. We looked at gel permeation chromatography, and it has not worked
out as well as we anticipated. We used other column cleanup techniques which have been fairly
traditional.
There are some people who seem to do very well on silica gel or silica gel with
silver nitrate. We are aware of other people who use Florisil and it works very well in their
hands, and other people say they can't use Florisil; they prefer alumina.
But, basically, what we are saying is that, when we have this method published, the
choice of extraction and cleanup method will be totally dependent on the analyst, as long as it
meets the performance criteria. We don't care which one of these techniques people use as long
as it works and meets the criteria.
Just to sort of bring some chemistry into it, this is one of our results using HPLC.
We used an amino silane column, and this was our best performance to date. Obviously, the
chromophores affect the size of the two peaks. The PCBs are the second peak, and the mineral
oil is the first peak. There is actually about 99.2 percent separation between those two peaks.
Unfortunately, this is still not good enough, because when we put that same sample
onto an MSD, we had some problems. Of the three chromatograms there, essentially, the
top one is the hepta, the middle one is the hexa, and the bottom one is the penta. If you focus
your attention on the bottom one, you will see what I call a humpogram. There is lots of mineral
649
-------
oil causing interference, and it is obvious that we have to deal with that, but to date, that is our
best performance in terms of separation.
We think this may be something we can live with, but that will be one of the
criteria that you will have to get baseline...a certain base performance for your baseline before
you will be allowed to quantitate.
Slide 32. So, that is basically what we are doing in approaching this new reference
method. It is a definitive method for legal purposes in the sense that it is quoted in the
regulation, and if Environment Canada takes somebody to court because they have violated the
PCB regulations, then this is the method we use.
We feel strongly that there must be a method that we know works. You just can't
put out performance criteria and not publish a method. I think this was the point the previous
speaker alluded to, and I must admit I agree with it.
We allow a variety of cleanup methods. We feel the technique should be based
on GC/MSD as appropriate. Maybe the ion trap that Barry markets (Varian) or Finnigan
markets would be acceptable.
We did, in actual fact, look at and toy with using other detectors. A quick
search indicated that no one had an atomic emission detector, including us, but
apparently it works really well for PCB analysis. You can do some really good work with that
if the levels are high enough.
The quantitation using homologs seems a lot simpler, and it has been well
established, I think, in some EPA methods, certainly for monitoring.
This slide is now wrong. The detection level that we are going to require is based
on what the regulation requires: 0.4 ppm total PCB as determined by a particular column.
We are going to specify the column for quantitation. It will probably be a DB-1 or a DB-5
column. We personally prefer the DB-1, but we know some other people prefer a DB-5 for this
type of analysis.
And any single peak or congener has got to be no more than 0.4 mg/kg, as long
as the total is less than 2 mg/kg as far as the regulation is concerned. So, that requires a detection
level of at least 0.2 mg/kg per resolvable peak, which is the way we are going to define it in the
method.
We are going to use surrogates to determine performance. C13 PCB standards will
be required to be added to the sample right at the beginning, and we are going to build in QA
specs based on the NIST PCBs in oil which are readily available.
650
-------
We also found there was a distinct problem in the analytical world, because if you
give them a sample of 2 ppm, some people can do quite well. 50 ppm is certainly no problem,
but if you give them a sample of 50 percent PCBs, we get numbers back at anything from
30 to 70 percent PCB.
That causes a problem, because when you put that into an incinerator or something
like this, there is obviously a problem in defining its performance.
So, we are going to try and develop a method where we can control that with a
surrogate. That work is not completed yet. Finding a suitable surrogate is going to be difficult.
Slide 33. The method will encompass...a method that has already been published
for ashes, stack effluents and so on which is available from us. We are going to basically
incorporate all this into the one method.
One of the points that has been raised to us is, well, this will stop people using
screening. No, we see screening as a very valuable tool. There are many, many people who have to
test PCBs in oil. A lot of companies just do it out of habit, because they don't want to be held up
for any liability.
So, what we will say is if you want to use a GC/ECD for a simple screen, that is
fine, but if you are working at one of the regulatory limits, then you had better be very sure that
that method is accurate and precise, because if we take a sample and we decide to prosecute, we
will be using our method based on GC/MSD as written for the basis of that prosecution.
Slide 34. I think there are some conclusions that we can draw. We believe that
this performance-based methodology is a key to future successful regulation, and it will be the
approach that we will try and adopt in all methods, particularly complicated methods where you
have very low levels of detection required as for dioxins or you have naughty components such
as PCBs, but we think this can be applied right across the spectrum, and we will be doing that.
The method must be defined very well as far as the quantitation steps are concerned. That does not
mean that you cannot use different columns, different detectors, but there has to be some way
of assessing that quantitation. We like the use of surrogates.
We certainly believe that you should have complete freedom as to the choice of
extraction and cleanup, and I think that the reference method that should be published should cite
a typical method. We like to use the broadest-based method in the sense of one technique that
is the most widely available so it becomes a typical method that should be performed well in the
hands of most competent analysts.
I thank you.
MR. TELLIARD: Thank you.
651
-------
652
-------
QUESTION AND ANSWER SESSION
MR. TELLIARD: Any questions?
MR. COCHRAN: Jack Cochran, Hazardous Waste
Center. Does Canada regulate polychlorinated naphthalenes, and if so, do they have methods and
congener standards available?
MR. TURLE: I believe we do regulate them, but
I don't think we actually have a method. I know we don't have a method.
Perhaps I could at this point...in actual fact, believe it or not, in Canada at the
federal level, we only have about 20 methods actually specified in the regulations. We don't
really have a method for polychlorinated naphthalenes. It doesn't mean they are not regulated,
though. (Added in proof - polychlorinated naphthalenes are not regulated in Canada.)
MR. COCHRAN: Do you see the PCNs when you
do your analysis?
MR. TURLE: In what sort of matrix?
MR. COCHRAN: I am just curious as to how
widespread they are when you do, say, PCB analysis or even dioxin analysis.
MR. TURLE: I am not with you. We do find them
in environmental samples. For example, in wildlife, I think they can be found if you look for
them.
MR. COCHRAN: So, if they are regulated and you
find them, how do you measure them, then, I guess is my question, if you don't have standards
available?
MR. TURLE: We don't have a method available.
There is a difference. We buy the standards from the same places you guys get them.
MR. COCHRAN: As far as I know, we don't have
standards available for PCNs.
MR. TURLE: I don't know. I have not done any
PCN analysis.
MR. RICE: I am curious about...and this applies to
Conlon's paper as well, but you both have discussed the absolute necessity of having a reference
method. When compliance data may be subject to litigation as to the validity of it, for example,
653
-------
and the reference method I assume...and you correct me if I am wrong, but the reference method
would be the final arbiter. Is that true, and, if so, why then would anyone run anything but a
reference method?
MR. TELLIARD: The reference method, Jim, is
the floor. In other words, we don't want to just give you a method spec and tell you to go out
there and do it. If your method has met the checklist as far as the data qualifications and the
method specifications, there is no reason...we are not going to say they are equivalent, but there
is no reason to disallow that data.
MR. TURLE: Can I make a point? In the Canadian
context, if a company had used a laboratory which had used a method different from the
reference method as published, but had produced data which met the
performance criteria, I think their argument would be that they had used what in Canadian law
is called the due diligence argument. That would be their standing.
What we are interested in is we have not actually prosecuted anybody underneath
these regulations. I expect a lot of fun and games the first time where it actually happens.
MR. RICE: That answers part of my question.
MR. CONLON: Jim, to answer your question on
behalf of the Drinking Water Office, once we get these kinds of changes into rules, then any
analytical results that are documented according to the requirements of the regulation would be
just as equivalent as those generated by the reference method if you can prove it.
Now, obviously, the rubber is going to meet the road when the auditors sit down
and the analyst from one lab looks at the other guy's data. People are going to go through it
very, very carefully, but, obviously, I think we are trying to create a field where you can actually
use alternative methods.
MR. TELLIARD: The answer, too, is that if
you want to use the, quote, reference method and it feels warm and fuzzy and makes you happier,
of course, that is a logical approach. But what we find is that people make what we would call
insignificant changes and get themselves in trouble, because it seemed like no big deal until they
got audited.
This system allows you presently to do that as long as you document it. Let us
know what you are doing. Not necessarily call me personally, but somewhere in that lab book
there has got to be some documentation of what you are doing.
MR. TURLE: Can I just add to that? Under
the dioxin regulations, mills have to submit data to Environment Canada, and we are auditing the
654
-------
results that are coming in from the private labs. I would say the majority have no problems
meeting our performance criteria.
MR. TONDEUR: Richard, you mentioned GC/MSD.
MR. TELLIARD: Tell us who you are.
MR. TONDEUR: Oh, sorry. Yves Tondeur,
Triangle Labs.
MR. TELLIARD: Thank you.
MR. TONDEUR: You said that the method would
have to be done using GC/MSD or low resolution. Does that mean that you have to use an MSD
to do it, or can you use another type of mass spectrometer that can be operated in low resolution
mode?
MR. TURLE: Yes, any sort of low resolution mass
spectrometer. We just thought we would take the worst-case example from our own experience.
One of the arguments that we had initially when we proposed this is people said
well, we don't have the equipment available. We found out that was not true, but we are using
one of our older MSDs, because it has a slightly poorer performance than newer ones.
MR. TONDEUR: So, we're not restricted to MSDs?
MR. TURLE: No, no.
MR. TONDEUR: And if, for instance, the lab were
to run into a humpogram like the one you showed for the pentas and decided to solve this
problem by running the GC/MS with the mass spectrometer in high resolution mode, and that takes
care of the humpogram problem, would the analysis be considered valid?
MR. TURLE: Yes, I believe so.
MR. TONDEUR: Okay, thanks.
655
-------
656
-------
Environment Canada's Approach
to Performance Based Methods
Richard Turle, Chung Chiu, Gary Poole, & Bob Thomas
Chemistry Division
River Road Environmental Technology Centre
Environment Canada
Ottawa
Canada K1A OH3
CANADA'S GREEN PLAN
657
-------
Environment Canada's Approach
to Performance Based Methods
Richard Turle, Chung Chiu, Gary Poole, & Bob Thomas
Chemistry Division
River Road Environmental Technology Centre
Environment Canada
Ottawa
Canada K1A OH3
Consequence:
' Establish analytical methods
-with appropriate detection levels
-established accuracy and precision
-broad acceptability
-widely available equipment
-reflecting generally established
procedures
Why regulate?
• Protect the Environment
» Bring about new technologies
• Basis for enforcement
• Ensures level playing field within industry
• Regulatory Authority is the Canadian
Environmental Protection Act
Possible Approaches:
• 1. Rigid, strictly defined method
-highly detailed - assumes if procedure
followed explicitly then accurate results
must follow
-eg US EPA methods
» 2. Performance based method
-performance criteria defined
-flexible methodology as long as criteria
met
-eg Environment Canada approach
-------
Performance based methods
• can be built around
-defined recoveries of surrogates
-comparison with SRMs
- limited number of suitable RMs
-eg
- for Pb in gasoline
- common water parameters
Examples
' Dioxins and furans for pulp and paper
industry
-2,3,7,8-TCDD and 2,3,7,8-TCDF
' PCBs for
-waste disposal
- new products
Limitations to performance based
approach
• where method defines the analyte
- eg AOX
• where a surrogate is impossible
- eg analysis of vinyl chloride
Dioxins and furans for pulp and
paper industry
' Regulated because of effects of dioxins on
fish and birds especially in British Columbia
and
exp
-------
Regulation
• effluents must not contain a "measurable concentration" of 2,3,7,8-TCDD
• or a "measurable concentration" of 2,3,7,8-TCDF times 0.1 (TEF factor) which exceeds 5 ppq
• how to define?
• used "Limit of Quantitation" as the level to regulate
• LOQ (ACS definition) is 10 x SD near the blank
Process to determine LOQ
• Drafted reference method - labs had to
follow - based on high resolution GC-MS
-covers tetra- through octa-PCDDs and
PCDFs
-uses surrogate C-13 standards for
assessment of performance
» Obtained composite effluents for LOQ study
• Invited labs to participate
« Submitted samples
• Analyzed data
Definition of LOQ (diagram)
• Limit of Quantitation = Sb + 10s: above this lies the region of quantitation
• between the Limit of Detection and the Limit of Quantitation: region of less certain quantitation
• Limit of Detection = Sb + 3s: below this lies the region of high uncertainty
• Sb = blank (zero) signal
Method Summary
• A 1 litre sample of effluent is spiked with isotopically labelled PCDD and PCDF standards. It is then separated into phases by filtering. The solid phase is Soxhlet extracted. The liquid phase is extracted with DCM. Concentrated extracts are cleaned up on (1) an acid/base/silver nitrate/silica column, (2) a basic alumina column (PCDD/F eluted with 50% DCM/hexane), and (3) a second alumina or carbon/silica column. Adjusted to volume, then analyzed using GC-MS.
-------
Sample preparation for LOQ study
• PAPRICAN assessed effluent data from 47
bleached chemical pulp mills
• identified 9 mills as possible sources of
effluent samples with TCDD and TCDF near
estimated LOQ - 2 mills selected
» 210 litres of composite effluent were
collected & homogenized - split into 180
bottles
• Homogeneity confirmed using suspended particulate
• SD <= 5% (n=5)
Results
• 8 labs produced useful data
• 2 labs' data rejected since they failed to meet initial calibration criteria
• Only 3 labs met all the criteria
• No more than 2 labs failed to meet any single performance criterion
• One lab failed to meet the DL criterion of 4 ppq
• Two labs failed to meet the 40% recovery criterion
• Proof rinses and method blanks indicate contamination was not a problem
Study design -10 labs across N.
America
• Each lab received 8 one litre samples of 2
effluents - blind samples
• Set of PCDD & PCDF C-13 standards
« Four GC-MS calibration standards from
0.25 to 25 pg/uL
« Two sets of fortifying solutions (one was
blank)
Data analysis
• Grubbs' and Dixon's tests indicated at the 5% level that one lab had an outlier for TCDD and a second lab had one outlier for TCDF (see the sketch below)
• Inter-lab comparisons indicated one lab had mean values significantly higher or lower than other labs - also precision was poor for both TCDD & TCDF
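Where the slides mention Grubbs' and Dixon's tests, a minimal sketch of the two-sided Grubbs test at the 5% level is given below (this is the textbook form of the test, written with numpy/scipy; the slides do not say which variant or software Environment Canada actually used, and the example values are invented for illustration):

import numpy as np
from scipy import stats

def grubbs_outlier(values, alpha=0.05):
    """Two-sided Grubbs test: return the suspect value if it is an outlier at level alpha, else None."""
    x = np.asarray(values, dtype=float)
    n = len(x)
    mean, s = x.mean(), x.std(ddof=1)
    idx = np.argmax(np.abs(x - mean))            # most extreme observation
    g = abs(x[idx] - mean) / s                   # Grubbs statistic
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)  # critical t for the two-sided test
    g_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t**2 / (n - 2 + t**2))
    return x[idx] if g > g_crit else None

# Illustrative inter-lab TCDD means (ppq); the last value is a deliberate outlier
print(grubbs_outlier([21.1, 21.6, 20.9, 22.0, 21.4, 21.8, 20.7, 29.5]))   # 29.5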
-------
Data analysis (cont'd)
• Maximum difference between lab mean and inter-lab median was 35% for effluent 1 (21 ppq TCDD)
• One lab had a mean value for TCDD 92% higher than the inter-lab median for effluent 2 - due to the lower concentration in this effluent (4 ppq v 21 ppq in effluent 1)
Effluent 2
• TCDD: mean = 4.31 ppq, SD = 1.152, DF = 42, RSD = 27%
• TCDF: mean = 28.58 ppq, SD = 2.432, DF = 47, RSD = 8.5%
Results
• Effluent 1
- TCDD: mean = 21.53 ppq, SD = 1.825, DF = 48, RSD = 8.5%
- TCDF: mean = 33.75 ppq, SD = 2.744, DF = 42, RSD = 8.1%
LOQ
• Pooling the SDs for both effluents resulted in an SD for TCDD of 1.543 ppq (1 litre sample) - see the sketch below
• LOQ of 15 ppq adopted for both 2,3,7,8-TCDD & TCDF
• incorporated into final Reference Method
• Reference Method "gazetted" as part of regulation
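A minimal sketch of the arithmetic behind the adopted LOQ (the degrees-of-freedom weighting used here to pool the two standard deviations is an assumption on my part; the slides report a pooled SD of 1.543 ppq and an adopted LOQ of 15 ppq):

import math

# TCDD standard deviations and degrees of freedom reported for the two effluents
sd = [1.825, 1.152]   # ppq
df = [48, 42]

# Pool the variances, weighting by degrees of freedom (assumed weighting scheme)
pooled_sd = math.sqrt(sum(d * s**2 for d, s in zip(df, sd)) / sum(df))

# ACS definition used in the regulation: LOQ = 10 x SD near the blank
loq = 10 * pooled_sd
print(f"pooled SD ~ {pooled_sd:.3f} ppq, LOQ ~ {loq:.0f} ppq")   # roughly 1.55 ppq and 15 ppq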
-------
Canadian Federal PCB
regulations cover
• waste disposal
• new products
• classification as to acceptable limits
PCBs
• defined as chlorobiphenyls containing 3 to 10 atoms of chlorine per molecule
• thus monochlorobiphenyls and dichlorobiphenyls are NOT PCBs for the purposes of the regulation
PCB destruction regulation
• requires
- releases less than 1 mg/kg of PCBs put into system
- i.e. a 1:1,000,000 destruction efficiency
- 0.5 mg/kg of PCBs or 1 ug/kg of PCDD/F (measured as TEF equivalents) as a solid
- 5 ug/L of PCBs or 0.6 ng/L PCDD/F (measured as TEF equivalents) as a liquid
PCB treatment systems
- must reduce PCBs to less than 2 mg/kg
- allows conversion of 3 to 10 Cl PCBs to mono- and dichloro-PCBs
• PCBs
- must not be released into the environment if greater than 50 mg/kg
- 5 mg/kg if used for road oiling
-------
PCBs for new products
' must contain less than 2 mg/kg PCB
' previous limit set in 1985 was 50 mg/kg
PCB
Quantitation
• as Aroclors
• as homologues
• as individual congeners to give sum PCB
PCB measurement
• extraction and clean-up
• quantitation by
- GC-ECD (packed or capillary)
- GC-MS (MSD or low-resolution)
Aroclors
' generally based on either
-Webb & McCall procedure (packed or
capillary)
-single peaks unique to each Aroclor
-often inaccurate for complex mixtures
-useful for rapid screening
-------
Specific congener analysis
• requires many congener standards
• difficult to obtain appropriate standards
• complex calculation to obtain total PCB
Clean-up methods evaluated
• liquid/liquid extraction using DMSO, DMF or MeCN to remove oil
• GPC
• HPLC using amino-silane columns
• column cleanup using
- acid silica/base silica/neutral silica/AgNO3 silica
- Florisil or alumina
• choice dependent on analyst and sample type
Homologue
• best applied using GC-MS
• easy quantitation based on characteristic ion
• only representative PCB from each group required as standards
• total PCB obtained by summing homologues
• poor sensitivity of MSD cf ECD
New Reference Method - 1
• definitive method for legal purposes
• allows a variety of cleanup methods
• based on GC-MSD
• quantitation using homologues
• DL of 0.4 mg/kg total PCB (limited by MSD)
• DL of <= 0.2 mg/kg per congener
• uses C-13 surrogates to ensure adequate performance
• QA based on reference materials such as NIST PCBs in Oil
-------
New Reference Method - 2
• method for high level PCBs
-problems with dilution
-level of surrogate
• methods for ashes, stack effluents etc
-similar to published method.
• screening method for transformer and
waste oils
-based on GC-ECD (megabore)
Conclusion
• Performance of methodology is key to successful regulation
• Methods must be well defined as to quantitation
• Allow choice of extraction (clean-up) techniques
• Define and assess performance (surrogates)
• Reference method cites "typical" method
-------
MR. TELLIARD: Thank you very much.
Appreciate it. Mike, Rich, thank you very much for the performance-based methods scenario.
Now we are going to talk about almost as much fun, a thing that we are all into,
lab accreditation. Kenneth White is going to talk to us. He is with Consultive Services.
MR. WHITE: Thanks, Bill.
Hello, Kenn White from AS&M in Hampton, Virginia. I am here representing the
American Industrial Hygiene Association Environmental Lead Laboratory Accreditation
Committee, and I am here to talk about the accreditation program.
The American Industrial Hygiene Association is a professional association
dedicated to the health and safety of workers and the community.
The association has over 11,000 members now worldwide, 75 local sections
worldwide, has 45 technical committees, and provides professional training worldwide. Further,
we publish a litany of magazines and guides.
Of interest here is that, based on our latest demographics, half or more of our
membership have at least one master's degree and have more than 5 years of experience in the
industry.
AIHA offers accreditation and registration programs. They include the Industrial
Hygiene Laboratory Accreditation Program called IHLAP, the Asbestos Analysts Registry, a
registry of people who, by training and/or documentation, are qualified to perform asbestos
analyses, and now a new program, the Environmental Lead Laboratory Accreditation Program
called ELLAP.
AIHA offers several proficiency testing programs. The one that is probably the
best known is the PAT, the Proficiency Analytical Testing program, which includes various
contaminants in air. The Asbestos Analytical Testing program, the AAT, involves air samples
of asbestos for fiber count. The bulk asbestos program uses the same source that the NIST
NAVLAP program uses for its bulk asbestos samples. And now the new program for
environmental lead.
AIHA's ELLAP deals with accreditation for analysis of lead in three matrices; lead
in dry film paint, in soil, and in wipe dust, that is settled dust. The AIHA considers air
monitoring as an industrial hygiene exposure to be handled by its industrial hygiene laboratory
accreditation program. So, only these three matrices are included in the environmental program.
667
-------
The major program requirements include site visits, proficiency testing, an audit
of facilities and equipment, audit of personnel criteria, an audit of the quality control program,
and an audit of documented analytical methods.
Site visits occur prior to the issuance of accreditation. They occur every three
years thereafter. They are announced visits. They are conducted by a trained and experienced
representative of the association. They use a checklist format. Since we are dealing only with
analyses of environmental matrices, only these subjects will be addressed during the site visit,
and the site visitor carries a sample into the laboratory to assure that the laboratory indeed can
perform the analysis for which it is being accredited.
Proficiency testing occurs quarterly. It includes the matrices of interest; dry paint,
soil, and/or dust wipes. The concentration varies from sample to sample and from round to
round. Data is handled by NIOSH, which then reports the results to AIHA.
Facilities and equipment are reviewed.
Accreditation is granted to a laboratory; it is not granted to an individual or an
organization. So, if an individual is operating a laboratory and moves from one site to another,
the accreditation does not follow.
If a corporation has more than one laboratory, each of those laboratories must, in
their own turn, go through the accreditation process which includes the site visit.
Mobile laboratories are allowed, and the committee defines those mobile
laboratories as defined internal sites, a volume, the inside of a trailer, the inside of a mobile
home, et cetera.
Specific instruments are not required. The protocol does not demand use of any
specific instrument nor any specific instrument type. Good housekeeping is, of course, required
in the laboratory proper.
Personnel criteria. The following people are defined as personnel in the accredited
laboratory: a technical manager, by whatever title; a quality assurance coordinator, again by
whatever title; and analysts. Further, accreditation requires that management staff must be on
site at least half-time and must be full-time employees of the entity being accredited.
Personnel criteria, in particular the technical manager: The technical manager must
have a college degree in chemistry or a related science, must have a minimum of two years of
analytical chemistry experience and must have a minimum of at least six months of metals
analysis experience.
There are two options for the QA coordinator. The QA coordinator should have
a college degree in a basic science and a minimum of one year of analytical chemistry experience
668
-------
and documented training in statistics; or, have a minimum of four years of analytical chemistry
experience and, again, documented training in statistics.
Those personnel performing the analyses are titled analysts. The analysts must
complete either an internal or an external training course in metals analysis. This training must
be documented and those records available for examination. Further, the analysts must
demonstrate their ability to produce reliable results through the successful analysis of SRMs,
proficiency samples that we have mentioned before, and/or quality control samples manufactured
internally.
The quality assurance program must include in-house training. There must be a
written QA manual. Instrument calibration procedure is required in that manual. QC checks are
required. We will talk about those a little later.
Statistical quality control must be used as part of the quality control checks. There
must be a written corrective plan of action. There must be analysis documentation for each
analysis performed, and there must be calculation and report review within the laboratory.
Quality control checks include instrument calibration verification, blanks that must
be analyzed at the rate of 1 in 20, spike samples that must be analyzed at the rate of 1 in 10, and
duplicates of real samples that must be analyzed at the rate of 1 in 10, that is, 10 percent.
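As a simple illustration of these QC frequencies (the batching arithmetic below, including rounding up to whole QC samples per batch, is my own sketch rather than a stated AIHA rule):

import math

def qc_sample_counts(n_field_samples: int) -> dict:
    """Minimum QC analyses for a batch: blanks at 1 in 20, spikes and duplicates at 1 in 10."""
    return {
        "blanks": math.ceil(n_field_samples / 20),
        "spikes": math.ceil(n_field_samples / 10),
        "duplicates": math.ceil(n_field_samples / 10),
    }

print(qc_sample_counts(45))   # {'blanks': 3, 'spikes': 5, 'duplicates': 5}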
The analytical methods must be listed in the manual. None are specified. One
can use whatever method one decides to use as long as it is documented. The lab must
demonstrate, though, that this method will yield adequate performance by documenting the
detection limit and accuracy and precision measures, and the method must be documented in the
methods manual and be available to those who are actually performing the analyses.
Further, the method must be reviewed periodically by management.
The AIHA ELLAP proficiency program has begun. In November 1992, the first set
was sent out. It included analyses for lead in paint chips, in soil, and in dust wipes. It will
continue quarterly with four samples per matrix. It was supported by a partial grant from the
EPA, with results and data handling provided by NIOSH. The
samples went out December 1st with results due by January 15th of '93. The final report was
issued on February 19 of '93. All told, there were 110 laboratories participating in the program,
but only 104 of those 110 actually reported results in the time period.
Concentrations in samples ranged as follows: paint chips ranged from 0.089 to
5.5 percent by weight. Mass of lead in soils ranged from 133 to 3370 mg/kg, and dust wipe
samples ranged from 38 to 4400 ug/sample.
669
-------
Relative standard deviation based on reference laboratory analysis ranged for paint
chips from 7 to 15 percent, for soils from 8 to 12 percent, and for dust wipes from 10 to 14
percent.
Outliers, defined as results equal to or greater than three standard deviations from that
reported by the reference laboratories, ranged as follows: for paint chips, 9.4 percent, or a little less than
10 percent of the 404 analyses performed, were outside the acceptable range. For soils, it was
7.7 percent of the 336 analyses that were reported. For dust wipes, it was 8.5 percent of 364
analyses reported. Overall, 8.6 percent were outliers; that is, of the 1,104 analyses reported,
8.6 percent failed.
Acceptable results: for paint chips, 88 of the 101 laboratories reported acceptable
results; for soils, 77 of 84 laboratories (92 percent); for dust wipes, 82 of 91 (90 percent).
We were very pleased with this response.
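As a quick consistency check on those rates (the per-matrix outlier counts below are back-calculated from the reported percentages, so they are approximate):

analyses = {"paint chips": (0.094, 404), "soils": (0.077, 336), "dust wipes": (0.085, 364)}

# Back-calculate approximate outlier counts from the reported rates
outliers = {matrix: round(rate * n) for matrix, (rate, n) in analyses.items()}
total_outliers = sum(outliers.values())
total_analyses = sum(n for _, n in analyses.values())

print(outliers)                                                   # roughly 38, 26, and 31
print(f"overall: {100 * total_outliers / total_analyses:.1f}%")   # ~8.6%, matching the reported overall rate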
The various programs are available through AIHA, which has moved to a new
office in Fairfax, Virginia.
That ends my report to you on the first round of ELLAP. I will entertain
questions.
670
-------
QUESTION AND ANSWER SESSION
MR. TELLIARD: Questions? Kenn, how often do
they do the site visits?
MR. WHITE: The site visits occur once prior to
accreditation and then are repeated every three years thereafter. In the case where a complaint
is listed against the laboratory for cause, fraud, failure to report data, failure to pay one's bills,
et cetera, a site visit can occur unannounced. Usually, the site visits occur with two or three
months prior notice. We try not to sneak up on anyone.
MR. TELLIARD: Sneaking is good.
MR. WHITE: Sneaking is good when you need to
do it.
MS. MOORE: Marlene Moore with Advanced
Systems. A question. Who all is conducting site visits? Is only AIHA doing this?
MR. WHITE: There are two...the EPA is currently
in negotiation for a memorandum of understanding for certification of laboratory accreditation
for analysis for lead from environmental sources. The two organizations are the American
Industrial Hygiene Association and the American Association for Laboratory Accreditation,
A2LA.
We have an agreement with A2LA concerning site visitor personnel.
MS. MOORE: Okay, thank you.
MR. SMITH: Keith Smith, Analytical Services,
Atlanta, Georgia. Are you requiring your inspecting personnel to be competent in these methods?
Can they pass the methods themselves?
I am sure your inspecting people know the paperwork and have their checklist, but
are they capable...do you require them to actually be able to perform the method?
MR. WHITE: Yes. When the site visitor arrives
on site, he or she carries a sample that they observe during the time that they are there being
handled (as far as the paperwork is concerned), prepared, and analyzed.
Now, they are not worried too much about the data that comes out from that
analysis. All that carry-sample is for is to see what the process is within the laboratory and the
fact that they can run the type of analysis for which they are being accredited.
671
-------
MR. CONLON: Mike Conlon with EPA.
Maybe you mentioned this and I just missed it, but are there other compounds that you certify
laboratories for?
MR. WHITE: Are there other analyses that AIHA
certifies laboratories for?
MR. CONLON: Yes.
MR. WHITE: AIHA has an industrial hygiene
laboratory accreditation program that accredits laboratories for analysis of metals, a general group
of metals in air, for asbestos in air by fiber count, for organics in air, and for silica in air.
MR. CONLON: It is fair to presume that the
requirements generally as you described them here also apply to all those other things that are
covered by the certification program?
MR. WHITE: Yes, sir. In fact, the ELLAP, this
accreditation program, is formed around the industrial hygiene program, but the industrial
hygiene program has specific requirements dealing with man monitoring.
MR. CONLON: Thank you.
MR. TELLIARD: Thank you very much, Kenn.
Appreciate your time.
A little round of applause for our speakers, and you get a five-minute coffee break.
Let's get back out and get back in.
(WHEREUPON, a brief recess was taken.)
672
-------
"The American Industrial
Hygiene Association
Environmental! Lead
Laboratory Accreditation
Program (ASHA/ELLAP)B1
Kenneth T. White
MS, MBA (MM), CM, CSP
AS&M
Hampton, VA
673
-------
The
AMERICAN INDUSTRIAL
HYGIENE ASSOCIATION
is a Professional
Association Dedicated to
the Health and Safety of
Workers and the
Community
674
-------
The American Industrial
Hygiene Association
Has >10,000 members
worldwide
Has 75 local sections
worldwide
Has 45 standing
(technical) committees
Provides professional
training worldwide
Publishes
675
-------
AIHA Accreditation and
Registration Programs
Industrial Hygiene
Laboratory Accreditation
Program (IHLAP)
Asbestos Analysts
Registry (AAR)
Environmental Lead
Laboratory Accreditation
Program (ELLAP)
676
-------
AIHA Proficiency Testing
(Quality Assurance)
Programs
- Proficiency Analytical
Testing (PAT) Program
- Asbestos Analytical
Testing (AAT) Program
- Bulk Asbestos Program
- Environmental Lead
Proficiency Analytical
Testing (ELPAT) Program
677
-------
AIHA Environmental Lead
Laboratory Accreditation
Program (ELLAP)
- Accreditation for analysis
of lead in:
+ paint chips (dry film)
+ soil
+ wiped dust (settled)
678
-------
Major Program
Requirements:
- Site Visits
- Proficiency Testing
- Facilities and Equipment
- Personnel Criteria
- Quality Assurance
Program
- Analytical Methods
679
-------
Site Visits
- Prior to Accreditation
- Every 3 years thereafter
- Announced visits
- Trained and experienced
AIHA Representative
- Checklist format
- Sample analysis required
during visit
680
-------
Proficiency Testing
- Quarterly
- Paint chips, Soils and/or
Dust Wipes
- Varying concentration
levels
- Data/Results handled by
NIOSH
681
-------
Facilities and Equipment
- Laboratory is accredited
not an Individual or
organization
- Mobile laboratories are
allowed
Specific instruments not
required
Good housekeeping
required
682
-------
Personnel Criteria
- Technical Manager
- Quality Assurance
Coordinator
- Analyst(s)
- Management is on-site
>50% of the time
683
-------
Personnel Criteria
- Technical Manager
+ college degree in Chemistry
or related science, and
+ a minimum of 2 years of
analytical chemistry
experience, and
+ a minimum of 6 months of
metals analysis experience
684
-------
Personnel Criteria
- Quality Assurance
Coordinator
+ college degree in a basic
science, and
+ a minimum of one year of
analytical chemistry
experience, and
+ training in statistics
or
+ a minimum of 4 years of
analytical chemistry
experience, and
+ training in statistics
685
-------
Personnel Criteria
- Analyst(s)
+ completion of a training
program in metals analysis
(internal or external), and
+ demonstrated ability to
produce reliable results
through successful analysis
of blind SRMs, proficiency
testing samples, and/or QC
samples
686
-------
Quality Assurance
Program
In-house training program
required
Written QA Manual
Instrument calibration
required
QC checks required
Statistically derived
control limits required
for QC checks
Written corrective action
plan required
Analysis documentation
required
Calculation and report
review required
687
-------
Quality Control Checks
- Instrument calibration
verification
- Blanks analyzed at rate of
one in 20 (5%)
- Spiked samples analyzed
at rate of one in 10 (10%)
- Duplicate samples
analyzed at rate of one in
10 (10%)
688
-------
Analytical Methods
- No specific method is
required
- Laboratory must
demonstrate acceptable
performance of and by
the chosen method (e.g.
LOD, LOQ, accuracy,
precision)
- Method must be
documented in a
methods manual that is
accessible to analysts
and is reviewed
periodically by
management
689
-------
AIHA/ELPAT Program
- Began November 1992
- Matrices are dry paint
chips, soils, and dust
wipes
- Rounds occur quarterly,
with 4 samples per
matrix
- Is EPA supported
- Data/Results handled by
NIOSH
690
-------
ELPAT Round 001
- Samples sent: Dec 1, 1992
- Results due: Jan 15, 1993
- Report issued: Feb 19,
1993
- Labs participating: 110
- Labs submitting results:
104
691
-------
ELPAT Round 001
Concentration Ranges
- Paint chips: 0.089% to
5.5% by weight Pb
- Soils: 133 mg Pb/kg to
3370 mg Pb/kg
- Dust wipes: 38 ug Pb to
4400 ug Pb
692
-------
ELPAT Round 001
Relative Standard
Deviation
(Reference Laboratories)
- Paint chips: 7% to 15%
- Soils: 8% to 12%
- Dust wipes: 10% to 14%
693
-------
ELPAT Round 001
Outliers
(>3 SD)
- Paint chips: 9.4% of 404
- Soils: 7.7% of 336
- Dust wipes: 8.5% of 364
- Overall: 8.6% of 1104
694
-------
ELPAT Round 001
Acceptable Results
(acceptable on all
samples)
- Paint chips: 88 of 101 labs
- Soils: 77 of 84 labs (92%)
- Dust wipes: 82 of 91 labs
(90%)
695
-------
696
-------
MR. TELLIARD: Can we bring your coffee and
soda in? We are in the stretch, gang. Come on, we can do it.
Our next speaker is going to talk to us on the use of immunochemistry and x-ray
fluorescence methods for a site investigation, and this is from Woodward-Clyde Federal Services.
I only knew of Woodward-Clyde-Envircon. I guess that was the same thing. And
this is Alex Tracy.
You are on.
697
-------
698
-------
AN INTEGRATED APPLICATION OF FIELD SCREENING TO
ENVIRONMENTAL SITE INVESTIGATIONS: A CASE STUDY
Tina Cline-Thomas, William Mills, Alex Tracy
Woodward-Clyde Federal Services, Rockville MD
Abstract
A base in the Washington D.C. area was slated to undergo facility expansion. This
expansion was to include construction of a commissary and parking lot, along with movement
of an existing sports field and a playground to new areas. Immediately prior to the start of the
construction, information became available which indicated that a landfill had existed in the
general area slated for construction activities. The exact location and extent of the landfill were
not known. Woodward-Clyde Federal Services (WCFS) was retained by the Baltimore District
of the U. S. Army Corps of Engineers (COE) to perform an investigation of the area to determine
the extent of the landfill and the health risk to construction workers and playground users. Due
to the construction schedule, the project work had to be completed in an 8-week time period.
Normally this type of project would require up to 6 months. In order to meet both time and
budgetary constraints, an intensive field sampling effort was undertaken in conjunction with the
use of field screening methods. Several field screening methodologies were employed to more
fully characterize the site during the time between field sampling and lab analysis: 22 metals by
portable X-Ray Fluorescence Spectrometry (XRF) and Polychlorinated Biphenyls (PCB) and
Aromatic Hydrocarbons (AH) by immunoassay. The field screening was used to:
• direct field sampling efforts by delineating contaminated areas
• prioritize samples for laboratory analysis
• provide the laboratory with information on the expected range of contaminants
In total, approximately 130 samples were screened in the field and approximately 30 of those
samples had results verified by lab analysis. The metals results for XRF and lab analysis
generally corresponded to each other, provided samples were thoroughly homogenized. Although
some false positives were observed by field screening for PCB and AH, no false negatives were
observed. This presentation will discuss time and budgetary savings, QA/QC procedures and
comparability of the field screening and lab results.
Introduction
Woodward-Clyde Federal Services (WCFS) was charged with the task of clearing a site for
construction activities within 8 weeks while ensuring that sufficient samples had been taken to
characterize the site and that those sample results were accurate. In addition to the historical
information which indicated the presence of a landfill whose size and contents were unknown,
there was the possibility that the landfill area contained burn pits where PCB transformer
carcasses had been disposed. Some preliminary work indicated elevated levels of PCBs,
miscellaneous other organics and metals in the areas where the present playground was located
and where the athletic field was to be relocated (Figure 1).
699
-------
Figure 1: Site Map Showing Proposed Construction
As a result of health and safety concerns for construction workers and future residents a site
clearance was undertaken. Since the construction contract had already been awarded and the
government would face penalties if construction was delayed the time frame available for the
investigation was very short.
Lab analysis takes 3-4 weeks at normal turnaround times. Faster turnaround times are
possible at premium rates but even the fastest analysis requires 24-48 hours before results can
be reported. The approach that was developed for this project involved field screening and lab
analysis with five day turnaround. Lab analysis was required to verify field screening results and
to provide the quantitative information required for the health-based risk assessment.
Field screening was performed on soil samples for PCB and aromatic hydrocarbons using
immunoassay technology (the Petrorisc™ immunoassay which was chosen is designed to provide total
petroleum hydrocarbon data but is very sensitive to bi- and tri-cyclic polyaromatic hydrocarbons),
and for 22 metals using a field portable X-Ray fluorescence (XRF) spectrometer. The
field screening methods were used to prioritize samples for lab analysis, provide information to
the lab on the approximate concentration range expected (to minimize reanalysis), and provide
extent of contamination information for the areas being investigated.
Methods
Immunoassay technologies are well established within the medical lab industry where they have
been used to provide rapid, accurate test results for medical professionals. In recent years this
technology has also started to emerge in the environmental analysis field. The two parameters
700
-------
analyzed by immunoassays for this project were PCB, using the Envirogard™ immunoassay by
Millipore, and aromatic hydrocarbons, using Ensys' Petrorisc™ immunoassay kits. Soil samples
were analyzed for 22 metals using the Spectrace 9000 field-portable XRF. The Spectrace 9000™
XRF uses a mercuric iodide (HgI2) detector along with a fundamental parameters algorithm to
qualitatively and quantitatively identify the metals.
A lab facility was set up on base for sample log-in and analysis. All samples were labeled,
logged into a sample tracking system, and screened at this location. A portable computer was
used for sample tracking as well as to store both the results and the spectra produced by the
XRF. All immunoassay data, including balance calibration, extraction weight and the
absorbances of both the samples and standards were recorded in bound lab notebooks.
Immunoassays-General
As both immunoassay kits used methanol as their extraction solvent and immunoassay tests
are quite specific for their target compound(s), one extraction was performed on each sample and
the extract was refrigerated in a labeled screw-top vial. This ensured any re-analysis would be
performed on the same aliquot of each environmental sample. The analysis reagents were added
according to each manufacturer's instructions1,2,3,4 and all samples were run immediately following
calibration of the test kit. A spectrophotometer set to 450 nm was used to record the
absorbances of both the standards and samples.
Aromatic Hydrocarbons (AH)
For the Petrorisc™ immunoassays, sample absorbances were determined relative to a low
standard (0.7 ppm m-Xylene, which is equivalent to 100 ppm gasoline) which served as the
threshold of detection. Two aliquots of the methanol extract were analyzed relative to this
standard: the first represented the sample without any dilution and the second was the same
extract at a ten-fold dilution. In this manner, approximate concentrations of petroleum
constituents can be determined with relative ease. While the petroleum kits were calibrated using
m-Xylene, they were sensitive to a variety of compounds found in petroleum products including
bi- and tri-cyclic aromatics.1,2 WCFS utilized the Petrorisc™ kits' sensitivity to aromatic
hydrocarbons to indicate burn areas where these aromatic hydrocarbons remained as products of
incomplete combustion.
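As an illustration of this bracketing logic (the decision rule below is a sketch of how the paired results might be interpreted, not the kit protocol; the function name and return strings are hypothetical):

def bracket_concentration(undiluted_detect: bool, diluted_10x_detect: bool,
                          threshold_ppm: float = 100.0) -> str:
    """Interpret paired immunoassay results run against a single threshold standard.

    threshold_ppm is the gasoline-equivalent concentration of the low standard
    (100 ppm gasoline for the kit described in the text).
    """
    if not undiluted_detect:
        return f"< {threshold_ppm:g} ppm (non-detect)"
    if not diluted_10x_detect:
        return f"between {threshold_ppm:g} and {10 * threshold_ppm:g} ppm"
    return f"> {10 * threshold_ppm:g} ppm"

# Example: the undiluted aliquot responds above the threshold, the ten-fold dilution does not
print(bracket_concentration(True, False))   # between 100 and 1000 ppm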
Because the Petrorisc™ kits are sensitive to a variety of compounds, the immunoassay results
correlated well with the hot spots as defined by lab analysis. Data on the correlation between
the two methods is shown in Table 1.
701
-------
Table 1: Comparison of Lab and Field Values for Petrorisc™

Sample Number    Petrorisc™ Value    Sum of PAH Values*
Sample 1         ND                  ND
Sample 2         ND                  ND
Sample 3         ND                  ND
Sample 4         ND                  ND
Sample 5         ND                  ND
Sample 6         Detect              ND**
Sample 7         Detect              >1 ppm
Sample 8         ND                  ND
Sample 9         Detect              ND
Sample 10        Detect              ND
Sample 11        Detect              ND
Sample 12        ND                  ND
Sample 13        ND                  ND
Sample 14        ND                  ND
Sample 15        ND                  ND
Sample 16        ND                  ND
Sample 17        ND                  ND
Sample 18        Detect              >1 ppm
Sample 19        ND                  ND
Sample 20        ND                  ND
Sample 21        ND                  ND
Sample 22        ND                  ND
Sample 23        ND                  ND
Sample 24        Detect              >1 ppm
Sample 25        ND                  ND
Sample 26        ND                  ND
Sample 27        ND                  ND
Sample 28        ND                  ND
Sample 29        ND                  ND

*Sum PAH = Sum of all detects for compounds listed in SW-846 Method 8100.
**Dilution at lab prevented proper quantitation.
702
-------
The protocol for performing analysis dictated that the difference between duplicate standards
(Delta Std) could not exceed 0.2 absorbance units (a.u.) or the calibration would be considered
invalid and the samples would be re-analyzed. Although the precision data for the petroleum kits
was acceptable, the use of a repeat pipettor rather than dropper bottles would have improved the
precision. (All figures showing precision data are scaled to equal size, so a visual comparison
may be made.) Ensys will supply the reagents either in dropper bottles or in bulk (for use with
a pipettor), but for this project the dropper bottles were used. A Shewhart plot of Delta Std for
aromatic hydrocarbon analysis is shown in Figure 2.
Figure 2: Precision Data for Petrorisc™ Immunoassay
[Shewhart plot of Delta Std with upper/lower warning and control limits]
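A minimal sketch of the Delta Std acceptance check and of the kind of control-chart limits plotted in Figure 2 (basing the warning and control limits on the historical mean +/- 2 and 3 standard deviations is an assumption; the paper does not state how its plotted limits were derived, and the history values are invented):

import statistics

def delta_std_acceptable(std_a: float, std_b: float, limit_au: float = 0.2) -> bool:
    """Calibration is valid only if duplicate standards agree within limit_au absorbance units."""
    return abs(std_a - std_b) <= limit_au

def control_limits(historical_deltas: list) -> dict:
    """Shewhart-style limits from historical Delta Std values (assumed mean +/- 2s and 3s)."""
    mean = statistics.mean(historical_deltas)
    s = statistics.stdev(historical_deltas)
    return {"warning": (mean - 2 * s, mean + 2 * s), "control": (mean - 3 * s, mean + 3 * s)}

history = [0.02, -0.05, 0.08, 0.01, -0.03, 0.06, -0.01, 0.04]   # illustrative values only
print(delta_std_acceptable(0.71, 0.78))   # True: duplicates agree within 0.2 a.u.
print(control_limits(history))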
Polychlorinated Biphenyls (PCB)
For the PCB kits3,4 (Envirogard™), the concentrations of the two calibration standards (2 and
10 ppm Aroclor 1248) were used and a linear dose-response was assumed between those two
points in order to calculate an approximate Aroclor 1248 concentration. As Aroclor 1260 was
the PCB found at the site, Aroclor 1248 concentrations were converted to Aroclor 1260
concentrations using relative response data for the two Aroclors provided by Millipore.3,4 The
PCB kits were used to delineate the volume of the burn pits being investigated, and the sampling
for lab analysis in those areas was directed by the field screening results. Correlation data
are presented in Table 2.
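A sketch of the two-point interpolation and Aroclor conversion described above (the absorbance values and the relative response factor in the example are placeholders, not Millipore's published numbers; in a competitive immunoassay the absorbance normally falls as concentration rises, which the interpolation handles automatically):

def aroclor_1248_ppm(sample_abs: float, abs_2ppm: float, abs_10ppm: float) -> float:
    """Linear dose-response assumed between the 2 ppm and 10 ppm Aroclor 1248 standards."""
    slope = (10.0 - 2.0) / (abs_10ppm - abs_2ppm)
    return 2.0 + slope * (sample_abs - abs_2ppm)

def as_aroclor_1260(conc_1248_ppm: float, relative_response: float) -> float:
    """Convert the Aroclor 1248 result to Aroclor 1260 using a kit-specific relative response factor."""
    return conc_1248_ppm / relative_response

# Hypothetical absorbances and relative response factor, for illustration only
c1248 = aroclor_1248_ppm(sample_abs=0.55, abs_2ppm=0.80, abs_10ppm=0.30)
print(as_aroclor_1260(c1248, relative_response=0.7))   # roughly 8.6 ppm as Aroclor 1260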
703
-------
Table 2: Comparison of Lab and Field Values for PCB
Sample Number    Immunoassay Value    PCBs by Method 8080
Sample 1         ND                   ND
Sample 2         ND                   ND
Sample 3         ND                   ND
Sample 4         ND                   ND
Sample 5         Detect               ND
Sample 6         ND                   ND
Sample 7         ND                   ND
Sample 8         ND                   ND
Sample 9         ND                   ND
Sample 10        Detect               ND
Sample 11        ND                   ND
Sample 12        ND                   ND
Sample 13        ND                   ND
Sample 14        ND                   ND
Sample 15        Detect               ND*
Sample 16        Detect               ND
Sample 17        ND                   ND
Sample 18        Detect               ND
Sample 19        ND                   ND
Sample 20        ND                   ND
Sample 21        Detect               ND
Sample 22        ND                   ND
Sample 23        ND                   ND
Sample 24        ND                   ND
Sample 25        ND                   ND
Sample 26        ND                   ND
Sample 27        ND                   ND
Sample 28        ND                   ND
Sample 29        ND                   ND
Sample 30        ND                   ND
Sample 31        ND                   ND
Sample 32        ND                   ND
704
-------
Sample Number    Immunoassay Value    PCBs by Method 8080
Sample 33        ND                   ND
Sample 34        ND                   ND
Sample 35        ND                   ND
Sample 36        ND                   ND
Sample 37        Detect               ND
Sample 38        ND                   ND

*Dilution at lab prevented proper quantitation.
Reagents were added using an Eppendorf repeat pipettor, which allowed for rapid analysis
with good accuracy and precision. Typically a total of twenty tubes were analyzed per run:
with analysis performed in duplicate, three standards and seven samples could be analyzed
in one run. Shewhart plots (Delta Std) for both the low and high standards are provided in
Figures 3 and 4.
Figure 3: Precision Data for PCB Low Std.
[Shewhart plot of Delta Std with upper/lower warning and control limits]
705
-------
Figure 4: Precision Data for PCB High Std.
[Shewhart plot of Delta Std with upper/lower warning and control limits]
Field-Portable XRF
All samples were first air dried and sieved through a 10-mesh sieve and sample descriptions
were recorded for all samples. If a significant amount of material would not pass through the
sieve, the material remaining in the sieve was retained and labeled as the exclusion products of
that environmental sample. The dried samples were then placed in 32mm sample cups and
covered with Mylar film.5,6 Each sample cup was labeled with the field sample number and
retained for re-analysis, if necessary. At the end of each day, both the results and spectra were
downloaded to a laptop computer for storage and data processing.
A mid-range standard reference material (SRM) and a quartz blank were run daily prior to
any samples, after every 10 samples, and at the end of the analytical run (10% frequency). This
standard was an environmental sample which had been certified using traditional wet prep
techniques followed by GFAA and ICP analysis. Both the SRM and blank were used to confirm
instrument stability during the project. The standard was less homogeneous than was initially
assumed at the start of the project: approximately a week after the XRF screening had begun,
particles were discovered in the sample cup which would not have passed through a 10-mesh
sieve. As there was already a week of data on the standard, it was not re-prepped. Particle-size
effects from these large particles were believed to introduce some variability in the standard, as
illustrated in the Shewhart plots for the standard. The samples were believed to be more
homogeneous because all samples were dried and sieved prior to analysis. A plot of the Pb
results (Pb was one of the elements which had poor precision relative to most of the analytes)
for the standard over the course of the project is shown in Figure 5, and a table of the accuracy
and precision data is shown in Table 3.
706
-------
Figure 5: Precision Data for XRF Std (Pb)
[Shewhart plot of Pb concentration (mg/kg) in the SRM with upper/lower warning and control limits]
Table 3: Accuracy and Precision Data for XRF Standard Reference Material

                       Cr        Ni        Cu       Pb       Cd
Average (mg/kg)        184170    16014     2787     115      387
True Value             160287    13105     2946     141      292
Percent Recovery       114.9%    122.2%    94.6%    81.6%    132.6%
Std. Dev.              2263      423       144      19       65
Relative Std. Dev.     1.2%      2.6%      5.2%     16.9%    16.9%
Conclusion
Method-Specific: Immunoassay
The staff of two chemists performing analysis in the field lab was able to screen approximately
20 samples per day for metals, PCBs, and aromatic hydrocarbons. The use of a repeat pipettor
is recommended both to speed immunoassay analysis and to achieve better precision and
accuracy. The correlation between lab and field data was good, but the difference in detection
limits and sample heterogeneity sometimes made it difficult to directly compare immunoassay
and lab data. However, the regions indicated as contaminated by field screening correlated very
well with the areas indicated as contaminated by lab analysis, historical data, and PID/OVA
results of samples taken in the field.
707
-------
Method-Specific: XRF
The use of an independent standard which was certified by traditional metals techniques
(GFAA/ICP) gave the data produced by the XRF an additional level of confidence. The
instrument showed good stability over the course of the project and 16.9% was the worst
relative standard deviation for any of the certified analytes (Cr, Ni, Cu, Pb, and Cd). Had the
standard reference material been completely homogeneous, the standard deviation would
probably have been considerably lower: Shewart plots showed only a few values which were
close to the control limit (Average+/-[3*Std. Dev.]). If these values were considered outliers
and removed, the standard deviation decreased markedly. XRF analysis of the standard reference
material correlated very well with its certified values: the average percent recovery (defined as
[XRF value/True Value]*100) was 109.2% with a high of 132.6% for Cd and a low of 81.6%
for Pb.
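A small sketch of the recovery and control-limit checks described above (the recovery example uses the Pb values from Table 3; the control-limit check mirrors the Average +/- 3*Std. Dev. criterion in the text, but the example data passed to it are invented):

import statistics

def percent_recovery(xrf_value: float, true_value: float) -> float:
    """Percent recovery as defined in the text: (XRF value / True Value) * 100."""
    return xrf_value / true_value * 100.0

def within_control_limits(value: float, values: list) -> bool:
    """True if a measurement lies inside the control limits (average +/- 3 * std. dev.)."""
    mean, s = statistics.mean(values), statistics.stdev(values)
    return abs(value - mean) <= 3 * s

print(round(percent_recovery(115, 141), 1))   # 81.6, the Pb recovery reported in Table 3
print(within_control_limits(180.0, [112.0, 118.0, 109.0, 115.0, 121.0, 117.0, 111.0, 114.0]))   # False: an outlier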
QA/QC Issues
Field screening can provide either Level I or Level II data9: for this project the field
screening data was regarded as Level I data and the laboratory analysis was used to make all
final decisions regarding site contamination. Specific guidelines for producing Level II data may
vary from site to site, and the sampling and analysis program must address the problems of
sample heterogeneity, matrix effects, interfering compounds, and sample contamination as a
result of improper handling or preparation7.
Although field screening can present additional challenges to the field team, there are many
instances where the additional data produced from the lower-cost field screening tests can
significantly reduce the sampling error in site investigations. Analytical error (bias and
variability introduced in the lab) typically accounts for only 15% of the total error introduced
in the site investigation process. The remaining 85% of the error in site investigations results
from insufficient samples or samples which do not accurately represent the contamination at the
site8. Field screening allows for rapid analysis following sample collection, which reduces
problems in sample handling, preservation and transport, and gives the field team the flexibility
to employ an iterative sampling strategy to fully characterize the contamination.
Effect on Sampling & Lab Analysis
Through the integrated use of field screening, WCFS completed the site clearance on time,
better delineated the extent of contamination, and was better able to direct the activities of the field crew.
In addition to the data provided by the lab, historical information such as aerial photos were used
to identify the area occupied by the former landfill (Figure 6). While XRF has been used
successfully in site investigations in the past, new advances in detector technology will provide
field teams with an instrument which is both portable and sensitive. At the time of this project
(April-May 1992), none of the immunoassay techniques had been recognized as methods by
EPA. Subsequently EPA has granted SW-846 third update numbers of 4010
(pentachlorophenol), 4020 (PCB), and 4030 (Total Petroleum Hydrocarbons) for immunoassay
screening techniques.
708
-------
In keeping with the DQO development process defined by EPA9, the project should be
planned with field screening in mind from the outset, and a chemist familiar with the technology
to be employed should be involved during the planning stage. The overall effectiveness of field
screening will depend on project-specific needs. It is recommended that the actual screening
analysis be carried out by, or under the supervision of, a qualified chemist to minimize
resampling and reanalysis and to ensure that results are not used inappropriately. Both
immunoassay and XRF are mature screening technologies which, when used properly, can be
very cost-effective tools in the site investigation process.
Figure 6: Site Contamination as Delineated by Field Screening and Lab Results
709
-------
References
1. Petro Risc™ User's Guide, Ensys Inc., 1992.
2. "Soil Screening for Petroleum Hydrocarbons by Immunoassay," Draft Method 4030, USEPA
SW-846 Third Update, July 1992.
3. Envirogard™ Test Kits User's Guide, Millipore Corporation, 1992.
4. "Soil Screening for Polychlorinated Biphenyls by Immunoassay," Draft Method 4020, USEPA
SW-846 Third Update, July 1992.
5. Spectrace 9000 User's Guide, TN Technologies.
6. Donald E. Leyden, Fundamentals of X-Ray Spectrometry as Applied to Energy Dispersive
Techniques, Tracor Xray, 1984.
7. Kevin J. Nesbitt, "Application and QA/QC Guidance USEPA SW-846 Immunoassay-Based
Field Methods 4010, 4020 & 4030," Ensys Inc., 1992.
8. Francis Pittard, Principles of Environmental Sampling: A Short Course Presented Prior to the
8th Annual Waste Testing & Quality Assurance Symposium, July 11-12, 1992.
9. USEPA, Data Quality Objectives for Remedial Response Activities, EPA/540/G-87/003, March
1987.
710
-------
Integrated Application of Field Screening
A Case Study
Tina Cline-Thomas
William J. Mills
Alex Tracy
Woodward-Clyde Federal Services
Rockville, Maryland
Woodward-Clyde
-------
Disclaimer
This presentation has not been subject to peer
review by the U.S. Army Corps of Engineers
(USACE) and, therefore, does not represent USACE
opinion or endorsement. The mention of trademarks
in this presentation is provided for documentation
purposes and does not constitute endorsement by
Woodward-Clyde.
-------
Introduction
U.S. Army base in Washington, D.C. area
Planned construction of a new commissary
and parking lot
Relocation of sports fields and playground
to other areas
-------
Introduction
(continued)
Prior to construction, information became
available that a former landfill existed in
general area
Extent of landfill is not known
Some preliminary information available -
PCBs, VOCs, metals were observed
Concerns about worker and playground
users' health
Site clearance required
-------
The Problem
Perform a site clearance investigation
Very little information on landfill available
Determine extent and location of former landfill
Determine health risk to workers and users
Extremely tight timeframe
-------
Site Map Showing Proposed Construction
-------
Solution
Intense sampling effort
Implement field screening as part of site
investigation process
Field screening for PCBs, aromatics,
metals (soils only)
Lab analysis to provide backup of field
screening and more detailed information
for health risk assessment
One week TAT for most lab analyses
-------
Field Screening
Used to:
Direct sampling efforts
Provide information to labs on sample
concentration range expected
Prioritize samples for lab analyses
Provide extent of contamination information
while awaiting lab results
-------
Field Screening Methods
PCBs → Immunoassay
Aromatics → Immunoassay
Metals → X-ray fluorescence
-------
Field Screening Setup
Field lab setup on base:
- Sample log in
- Sample analysis
- Packaging of samples for shipment to lab
- Tabulation of results
- Planning of sampling effort
-------
Immunoassays
General
- Immunoassays in chemical diagnostic use
since ~1965
- Immunoassays being applied to pollutant
testing in 1990's
- Immunoassays offer:
• Selectivity
• Sensitivity
• Speed of analysis
• Concentration range information
-------
Immunoassay procedure (flow diagram):
Soil Sample
1. Weigh out
2. Solvent extraction
Extract
Add extract to antibody-coated tube → Incubate → Decant off extract
Add conjugate → Incubate → Decant off conjugate
Add substrate → Incubate → Quench
Measure absorbance
Compare to standards (color is inversely proportional to concentration of analyte)
PCB Analysis
Millipore Envirogard™ kits used
2 ppm and 10 ppm concentration ranges
Calibrate with Aroclor 1248
Methanol extraction
-------
Precision Data for PCB Low Std.
[Shewhart plot of Delta Std with upper/lower warning and control limits]
-------
Precision Data for PCB High Std.
[Shewhart plot of Delta Std with upper/lower warning and control limits]
-------
Aromatic Hydrocarbons
Ensys Petro Risc™ kits
100 ppm gasoline
Sensitivity to bi- and tricyclic aromatics
Methanol extraction
-------
Precision Data for Petrorisc™ Immunoassay
[Shewhart plot of Delta Std with upper/lower warning and control limits]
-------
Immunoassays Discussion
Able to use one extract for two types of analyses
Fast
Semi-quantitative
Very little temperature effect observed
Good correlation with lab results (30-40 samples)
No false negatives
-------
XRF
Spectrace 9000 Field Portable XRF
HgI2 detection
- No cooling
- Resolution
Different sources available
Wide variety of metals analyzed
All data stored in computer
-------
Precision Data for XRF Std. (Pb)
[Shewhart plot of Pb concentration (mg/kg) in the SRM with upper/lower warning and control limits]
-------
XRF Discussion
Excellent stability
Good performance for SRM
Ease of use
Ability to reinspect specific results and spectra
Good correlation with lab results
-------
EPA QA/QC Levels
Level I
- Screening objective - rapid, preliminary field assessment
Level II
- Screening objective - field methods or quick lab methods
with a portion of results verified by rigorous lab analysis
Level III
- Rigorous objective - lab methods with assessment of
accuracy and precision of each analytical determination
-------
Level I Field Screening QA/QC
XRF and immunoassays
- Sample documentation
- Calibration or performance check
-------
Level II Field Screening QA/QC
XRF and immunoassays
- Sample documentation
- Calibration
- Blanks
- Matrix background
- Performance evaluation
samples
- Matrix spikes
- Duplicate sample analysis
- Laboratory confirmation
- Size fractioning of soil
sample
-------
Project Conclusions
130 samples field screened over a two
and one-half week period
Many samples analyzed in duplicate
Many samples backed up by lab analysis
No false negatives, some false positives
Data produced was Level 1.5
Project completed successfully to meet
client needs
-------
General Conclusions
Rapidly developing area
Definite application to environmental
site investigation
Actual application will depend on
project needs
Can provide cost-effective answers
-------
General Conclusions
(continued)
Speeds up information transfer to
project manager
Different levels of QA/QC possible
Most methods should be operated under
supervision of a qualified chemist
-------
Site Contamination as Delineated
by Field Screening and Lab Results
BURN PIT AREA
-------
Acknowledgements
Ensys
Millipore
Spectrace
U.S. EPA
U.S. Army Corps of Engineers
-------
740
-------
MR. TELLIARD: Our last speaker of the day is
Mike Kravitz from the Office of Water. Mike is going to talk about QA/QC guidance that the
office has generated for the analysis of dredged materials and evaluation thereof.
Mike just arrived. He was a little late. Gave me a little heartburn, but he is
armed.
QA/QC Guidance for Dredged Material Evaluations
MR. KRAVITZ: My name is Mike Kravitz, and I
am with the EPA's Office of Science and Technology (OST).
For the past few years, the Risk Assessment and Management Branch of OST has
been working on issues pertaining to the evaluation of dredged material. We are working closely
with the Army Corps of Engineers to produce a number of national
documents.
One of these, the draft "Inland Testing Manual" is a national document for the
evaluation of potential contaminant-related impacts of dredged material proposed for discharge
into freshwater, estuarine and near coastal waters of the United States. It provides guidance on
the kinds of tests that need to be performed by dredging permit applicants.
Along with this guidance, it is very important that we have good quality assurance
and quality control. This is, to some extent, built into the document, but to ensure that
evaluations are conducted in a consistent manner around the country, we are producing "Quality
Assurance/Quality Control Guidance for Sampling and Analysis of Sediments, Water and Tissues
for Dredged Material Evaluations." Let me go into a little more detail.
The purpose of the document is twofold. One is to provide guidance on the
development of QA project plans for ensuring the reliability of physical, chemical, and biological
testing data gathered to evaluate dredged material proposed for discharge under the Clean Water
Act (CWA) or the Marine Protection, Research, and Sanctuaries Act (MPRSA). (Essentially, the
CWA regulates the discharge of dredged material within the baseline of the territorial sea,
whereas MPRSA governs transportation of dredged material seaward of the baseline for discharge
purposes.) The second purpose is to outline procedures that need to be followed when sampling
and analyzing sediments, water, and tissues.
The QA guidance document serves as a companion document to the draft Inland
Testing Manual (mentioned at the start of this talk) and the 1991 Ocean Testing Manual for
evaluating dredged material proposed for discharge into CWA or MPRSA waters, respectively.
741
-------
When dredged material is evaluated, it is evaluated for whether or not it can be
discharged into open water without management actions, in other words, without capping or other
control measures. If the material "passes", then it is okay to dispose of the material as long as
it complies with other regulations.
The testing procedures in the Inland and Ocean testing manuals are arranged in
a series of tiers, or levels of intensity (and cost) of investigation. The first tier is an analysis of
historical data, and it is important that the data be of good quality in order that we can use it in
the evaluation. The second tier is concerned with chemical analyses. It is here that you compare
the dredged material discharge to water quality criteria or standards, depending on whether
disposal will occur in ocean or inland waters. The third tier involves biological tests (bioassay),
i.e. toxicity and bioaccumulation tests. The fourth tier allows for case-specific laboratory and
field testing, and is intended for use in unusual circumstances.
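As a purely illustrative aside, and not part of the testing manuals themselves, the tiered flow just
described can be sketched in a few lines of Python; the tier labels and the simple
pass/fail/inconclusive outcomes are assumptions made only for this sketch.

    # Minimal sketch of the four-tier evaluation flow (illustrative only).
    # Each tier's outcome is simplified to "pass", "fail", or "inconclusive".
    def evaluate_dredged_material(tier_outcomes):
        tiers = {
            1: "review of historical data",
            2: "chemical analyses against applicable criteria or standards",
            3: "biological tests (toxicity and bioaccumulation)",
            4: "case-specific laboratory and field testing",
        }
        for tier, description in sorted(tiers.items()):
            outcome = tier_outcomes.get(tier, "inconclusive")
            if outcome == "pass":
                return f"Tier {tier} ({description}): open-water discharge acceptable"
            if outcome == "fail":
                return f"Tier {tier} ({description}): management actions required"
            # "inconclusive" -> proceed to the next, more intensive (and costly) tier
        return "Inconclusive after Tier 4; case-specific expert judgment is needed"

    print(evaluate_dredged_material({1: "inconclusive", 2: "pass"}))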
Most of the sediment and water chemistry analyses take place in tier 2.
Validation of the chemistry data ensures that objectives for data precision and bias were met, that
data were generated in accordance with the QA project plan and standard operating procedures, and
that data are traceable and defensible. Figure 1, from the QA document, provides evaluation
criteria that could alert a project manager to potential problems with data acceptability.
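A minimal, hypothetical sketch of the screening logic that Figure 1 describes, in which data are
accepted, qualified, or rejected depending on how far results fall outside limits, might look like
the following; the outcome categories come from the figure, but the function itself is only an
illustration.

    # Illustrative sketch of the Figure 1 screening outcomes (not from the QA document).
    def screen_result(status):
        # status is one of: "within limits", "marginally outside", "severely outside"
        actions = {
            "within limits": "Accept data for use",
            "marginally outside": "Accept data with appropriate qualifications; consult expert",
            "severely outside": "Reject data (and consider reanalysis)",
        }
        return actions.get(status, "Status unknown; consult the project QA officer")

    print(screen_result("marginally outside"))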
Figure 2 shows the contents of the QA document. The main emphasis is on
drafting a quality assurance project plan; the elements listed here are typical. We also include
a fairly comprehensive list of references.
The document includes a number of useful appendices (Fig. 3), such as examples
of various checklists; an example statement of work for the laboratory; description of calibration,
quality control checks, and widely used analytical methods; and example standard operating
procedures (SOPs).
One thing that is very important is that we are using a performance-based approach in
the development of this document, even though we do recommend methods that should be used
for various analyses. Performance-based goals for accuracy (precision and bias),
representativeness, comparability, and completeness, as well as the required sensitivity of
chemical measurements should be defined in project-specific Data Quality Objectives.
With respect to required sensitivity of chemical measurements, the QA guidance
contains recommended "target detection limits" (TDLs) that are judged to be feasible by a variety
of methods, cost effective, and adequate to meet the requirements for dredged material evaluations.
If significantly higher or lower TDLs are required to meet rigorously defined data quality
objectives for a specific project, modification of existing analytical procedures may be necessary.
Such modifications must be documented in the QA Project Plan.
742
-------
The TDL should not be confused with the MDL, the PQL, or any of those other kinds of
detection limits. The TDL is a performance goal set between the lowest technically feasible
detection limit for routine analytical methods and available regulatory criteria or guidelines
("effects levels") for evaluating dredged material. For example, some metals have relatively high
target detection limits, whereas dioxin has a very low target detection limit. The QA guidance document
recommends target detection limits for sediments, water, and tissues.
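To make the relationship concrete, the small Python sketch below expresses the two checks implied
above: a TDL should lie between the lowest routinely achievable detection limit and the applicable
effects level, and a laboratory's achieved detection limit should not exceed the TDL. The numeric
values are hypothetical placeholders, not values taken from the QA guidance.

    # Illustrative only; all numbers below are hypothetical placeholders.
    def tdl_is_reasonable(feasible_dl, tdl, effects_level):
        # A TDL should be achievable by routine methods yet low enough
        # to support comparison against the applicable effects level.
        return feasible_dl <= tdl <= effects_level

    def meets_tdl(achieved_dl, tdl):
        # The laboratory's achieved detection limit should not exceed the TDL.
        return achieved_dl <= tdl

    feasible_dl, tdl, effects_level, achieved_dl = 0.05, 0.1, 1.0, 0.08
    print(tdl_is_reasonable(feasible_dl, tdl, effects_level))  # True
    print(meets_tdl(achieved_dl, tdl))                         # True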
Much of the information in the QA guidance is in tabulated form. These (Fig. 4)
are some examples of the types of information covered. We include a list of PCB congeners
recommended for quantification as potential contaminants of concern. We also list octanol/water
partition coefficients for organic compound priority pollutants and pesticides. This is important
because analytes having a high octanol/water partition coefficient are likely to bioaccumulate in
animals. Of particular usefulness is the summary of recommended procedures for sample
collection, preservation, and storage.
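As an illustration of how such a table of partition coefficients might be used in screening, the
short Python sketch below flags analytes whose log octanol/water partition coefficient meets an
assumed screening threshold; the 3.5 cutoff and the example values are assumptions made for this
sketch and are not taken from the QA guidance.

    # Illustrative only; the 3.5 log Kow screening threshold is an assumption.
    LOG_KOW_SCREEN = 3.5

    def bioaccumulation_candidates(log_kow_by_analyte):
        # Return analytes whose log Kow meets or exceeds the screening value.
        return [name for name, log_kow in sorted(log_kow_by_analyte.items())
                if log_kow >= LOG_KOW_SCREEN]

    # Hypothetical example values
    print(bioaccumulation_candidates({"phenol": 1.5, "naphthalene": 3.3, "pyrene": 5.2}))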
The QA document is being developed in two phases. The first phase, planned for
distribution this fall, is concerned with chemical evaluations. The second phase will cover
biological evaluations, including bioassays (bioaccumulation and toxicity tests). EPA is currently
standardizing a number of bioassay tests.
In summary, dredged material evaluations can be very comprehensive,
involving measures of sediment and water chemistry as well as biological evaluations. The
present QA guidance, along with the national testing manuals, should allow for technically
appropriate and consistent assessments of potential contaminant-related impacts associated with
the discharge of dredged material in U.S. waters.
Any questions?
MR. TELLIARD: Any questions?
(No response.)
MR. TELLIARD: Thank you.
743
-------
744
-------
[Figure 1 is a flowchart relating INFORMATION SOURCE, EVALUATION CRITERIA, TECHNICAL
CONCLUSION, and MANAGEMENT ACTION. Analytical Data and Supporting Documentation are
screened against Detection Limits; the technical conclusion determines the management action:
- Within Limits (Acceptable): Accept Data for Use
- Marginally Outside Limits: Accept Data with Appropriate Qualifications; Consult Expert
- Severely Outside Limits: Reject Data (and consider reanalysis)]
Figure 1. Guidance for data assessment and screening for data quality.
-------
CONTENTS
1. INTRODUCTION
- Government (Data User) Program
- Contractor (Data Generator) Program
2. DRAFTING A QUALITY ASSURANCE PROJECT PLAN
- Introductory Material
- Quality Assurance Organization and Responsibilities
- Quality Assurance Objectives
- Standard Operating Procedures
- Sampling Strategy and Procedures
- Sampling Custody
- Calibration Procedures and Frequency
- Analytical Procedures
- Data Screening, Validation, Reduction, and Reporting
- Internal Quality Control Checks
- Performance and System Audits
- Preventive Maintenance
- Calculation of Data Quality Indicators
- Corrective Actions
- Quality Assurance Reports to Management
- References
3. REFERENCES
4. GLOSSARY
APPENDICES
Figure 2
746
-------
APPENDICES
- Example QA/QC Checklists, Forms, and Records
- Example Statement of Work for the Laboratory
- Description of Calibration, Quality Control Checks, and
Widely Used Analytical Methods
- Standard Operating Procedures
- EPA Priority Pollutants and Additional Hazardous Substance
List Compounds
- Example Quality Assurance Reports
- Analytical/Environmental Laboratory Audit Standard Operating
Procedure
- Format for the Sediment Testing Report
Figure 3
747
-------
MUCH OF INFORMATION IS IN TABULATED FORM:
- Checklist of Laboratory Deliverables for the Analysis of
Organic Compounds
- Checklist of Laboratory Deliverables for the Analysis of
Metals
- Routine Analytical Methods and Target Detection Limits for
Sediment, Water, and Tissue
- Levels of Data Quality for Historical Data
- Summary of Recommended Procedures for Sample Collection,
Preservation, and Storage
- Example Calibration Procedures
- Polychlorinated Biphenyl Congeners Recommended for
Quantification as Potential Contaminants of Concern
- Octanol/water Partition Coefficients for Organic Compound
Priority Pollutants and 301(h) Pesticides
- Bioconcentration Factors of Inorganic Priority Pollutants
- Levels of Data Validation
- Example Warning and Control Limits for Calibration and
Quality Control Samples
- Sources of Standard Reference Materials
Figure 4
748
-------
CLOSING REMARKS
MR. TELLIARD: Thanks to all of the speakers and
to all of you for your attention and cooperation.
I would also like to thank the County Court Reporters, Inc. for taking down all
your golden droppings of knowledge; Jan Sears who made this happen and facilitated it; and
Dale Rushneck for planning and coordinating the program.
I hope we are still trying to meet the purpose of this meeting which is to exchange
information, not only on analytical work but also on the policies and direction of the Agency.
We are trying to let the world out there know that there are changes afoot. We change slowly.
The wheels of government do not blind you with their quickness. But we are also trying to keep
this an open forum, and we would like input.
I have asked for suggestions on the oil and grease issue. Many of you have come
up to me and said you will be sending me information.
Bob April requested some information on the metals, and we will be sending you
out those questions. If you would give it a little thought going home on the plane, train, bus or
the goat cart and come up with some information and send it back, we would appreciate it.
Thank you again for your attention. We hope to be back here next year, barring
some unforeseen disaster. In the meantime, if you have any suggestions for program sessions
or whatever, please feel free to give me a call. Thanks again for a great meeting!
(WHEREUPON, the proceedings were concluded at 4:00 p.m.)
749
-------
750
-------
16th Annual EPA Conference on Analysis
of Pollutants in the Environment
May 5-6, 1993
List of Speakers
Bob April
Office of Science and Technology
US EPA, Office of Water
401 M Street, S.W.
Washington, DC 20460
Telephone No: 202-260-6322
Nicholas Bloom
Senior Scientist
Frontier GeoSciences, Inc.
414 Pontius North, Suite F
Seattle, WA 98109
Telephone No: 206-622-6960
Marielle Brinkman
Battelle Memorial Institute
505 King Avenue
Columbus, OH 43201
Telephone No: 614-424-5277
Hazel M. Burkholder
Research Chemist
Battelle Memorial Institute
505 King Avenue
Columbus, OH 43201
Telephone No: 614-424-5311
Bruce Colby
Pacific Analytical
349 Paseo Del Lago
Carlsbad, CA 92009
Telephone No: 619-438-3100
Dave Demorest
Core Laboratories
A division of Atlas International
420 West First Street
Casper, WY 82601
Telephone No: 307-235-5741
James M. Conlon
Drinking Water Standards
Office of Ground Water and
Drinking Water
US EPA, Office of Water
401 M Street, S.W.
Washington, DC 20460
Telephone No: 202-260-7575
Mike Kravitz
Office of Science and Technology
US EPA, Office of Water
401 M Street, S.W.
Washington, DC 20460
Telephone No: 202-260-8085
Yan Liu
California Operations
Midwest Research Institute
625-B Clyde Avenue
Mountain View, CA 94043
Telephone No: 415-694-7700
Ed Marti
Triangle Labs of RTP, Inc.
P.O. Box 13485
Research Triangle Pk, NC 27709
Telephone No: 919-544-8353
Theodore D. Martin
Environmental Monitoring Systems Laboratory
Chemistry Research Division
26 W. Martin Luther King Dr.
Cincinnati, OH 45268
Telephone No: 513-569-7312
Dr. Harry B. McCarty
Hazardous Waste Methods
Support Division
SAIC
7600-A Leesburg Pike
Falls Church, VA 22043
Telephone No: 703-821-4709
751
-------
John McGuire
Measurements Branch
US EPA
Environmental Research Laboratory
Athens, GA 30613-7799
Telephone No: 706-546-3185
Joe Raia
15402 Park Estates
Houston, TX 77062
Telephone No: 713-486-4354
Ileana Rhodes
Senior Research Chemist
Shell Development Company
P.O. Box 1380
Houston, TX 77251
Telephone No: 713-493-8215
James K. Rice
17415 Batchelor's Forest Road
Olney, MD 20832
Telephone No: 301-774-2210
Richard Rivera
Shell Development Company
P.O. Box 1380
Houston, TX 77251
Telephone No: 713-245-7904
George Stanko
Senior Staff Chemist
Shell Development Company
P.O. Box 1380
Houston, TX 77251
Telephone No: 713-493-7702
William A. Telliard, Director
Analytical Methods Staff
Office of Science and Technology
US EPA, Office of Water
401 M Street, S.W.
Washington, DC 20460
Telephone No: 202-260-7185
Alex Tracy
Woodward-Clyde Federal Services
One Church St., Suite 700
Rockville, MD 20850
Telephone No: 301-309-0800
Richard Turle
Chemistry Division
River Road Environmental Technology Centre
Environment Canada
Ottawa Ontario K1A 0H3, CANADA
Telephone No: 613-990-8559
Kenneth T. White
Consultive Services
4428 Ironwood Drive
Virginia Beach, VA 23462
Telephone No: 804-499-4420
Zhouyao Zhang
Department of Chemistry
University of Waterloo
200 University Avenue, West
Waterloo Ontario N2L 3G1, CANADA
Telephone No: 519-885-1211
752
-------
16th Annual EPA Conference on
Analysis of Pollutants in the Environment
May 5-6, 1993
List of Attendees
Amos Lee Adams
Chemist
Petroleum Testing Lab
Fleet & Industrial Supply Center
Building W
388 Virginia Avenue
Norfolk VA 23511
804-444-2761
Merrill Anderson
Navy Public Works Center
9742 Maryland Ave
Code 900
Norfolk VA 23511-3095
804-445-8850
Steve Arpie
QC Supervisor
Absolute Standards, Inc.
498 Russell Street
New Haven CT 06513
203-468-7407
David E. Ashkenaz
MW Regional Mgr.
Varian Sample Preparation Products
388 Forest Knoll Drive
Palatine IL 60067
708-705-9629
Federico Asmar
Laboratory Manager
High Technology Lab.
PO Box 3964
Guaynabo PR 00970-3964
809-790-0251
Charley W. Banks
Env. Engineer Sr.
VA DEQ
4900 Cox Rd.
Glen Allen VA 23060
804-527-5087
Thomas Barber
Group Leader
CIBA-GEIGY
41 Swing Road
Greensboro NC 27409
919-632-7297
Sarah Barkowski
Senior Research Chemist
Boise Cascade Corp, R & D
4435 N. Channel Ave.
Portland OR 97217
503-286-7441
Bob Beimer
Lab Manager
S-Cubed
3398 Carmel Mtn. Road
San Diego CA 92121
619-587-8848
753
-------
S. Mark Benfield
Manager
Wet Chemistry Laboratory
Webb Technical Group, Inc.
4325 Pleasant Valley Rd.
Suite 110
Raleigh NC 27612
919-787-9171
Julie M. Blackwell
Chemist
GZA GeoEnvironmental
320 Needham St.
Boston MA 02164
617-630-6108
John Bernard, Jr.
Lab Manager
Alexandria Sanit. Auth.
P. O. Box 1987
Alexandria VA 22313
703-549-3381
E. Blank
Complete Analysis Labs.
1259 Route 46
Bldg. 4
Parsippany NJ 07054-4909
201-335-CALI
Daniel Bolt
Environmental Products Mgr.
Cambridge Isotope Labs., Inc.
20 Commerce Way
Woburn MA 01801
617-938-0067
Petra Bott
Chemical Tech.
HRSD
1432 Air Rail Ave.
Virginia Beach VA 23455
804-460-2261
Paul Bouis
J.T. Baker
222 Red School Lane
Phillipsburg NJ 08865
908-859-2151
Keith Bounds
Tech. Lead Environmental
Sverdrup Technology, Inc.
Building 2423
Stennis Spa Ctr MS 35929
601-888-3158
John Bourbon
US EPA, Region 2
Environmental Service Division
Building 10
2890 Woodbridge Avenue
Edison NJ 08837
908-321-6729
Bettie J. Bradley
Navy Public Works Center
9742 Maryland Ave
Code 900
Norfolk VA 23511-3095
804-445-8850
Patrick Bradley
Environmental Scientist
Dept. of Navy
Atlantic Division
1510 Gilbert Street
Norfolk VA 23511-2699
804-445-2930
Parry Bragg
Lab. Supervisor/Sr.Chm.
Marine Chemist Service
11850 Tugboat Lane
Newport News VA 23606
804-873-0933
754
-------
Ronald Brenton
Supervisory Chemist
US Geological Survey
5293 Ward Road
Arvada CO 80002
303-467-8215
Don W. Brown
EQ Superintendent
City of Danville, WPCP
Riverview Industrial Park
229 Stinson Drive
Danville VA 24540
804-799-5137
Nancy Broyles
Advanced Chemist
Technical Center
Union Carbide Corporation
3200 Kanawha Turnpike
South Charleston WV 25303
304-747-4729
Leslie Bucina
Organic Laboratory Manager
Kemron Environmental
109 Starlite Park
Marietta OH 45750
614-373-4071
Barbara Brumbrugh
Sr. Environmental Insp.
VA DEQ
Water Division
287 Pembrook Office Park
Pembrook 2, Suite 310
Virginia Beach VA 23462
804-552-1174
Patrick A. Buddrus
Manager
Organics Laboratory
CHESTER LabNet - Portland
12242 SW Garden Place
Tigard OR 97223
503-624-2773
Lisa M. Burgesser
Chemist
Environmental Resource Associates
5540 Marshall Street
Arvada CO 80002
303-431-8474
Anne Burnett
Quality Control Officer
Environmental Test. Svcs.
888 Norfolk Square
Norfolk VA 23502
804-461-3874
E. A. (Tony) Burns
Contract Administrator
Quality Assurance Laboratory
6605 Nancy Ridge Drive
San Diego CA 92121
619-552-3636
Carrie Buswell
Environmental Scientist
DynCorp - Viar, Inc.
300 North Lee St., #200
Alexandria VA 22314
703-519-1385
Bill Castle
Petroleum & Chemical Lab
State of California
Dept. of Fish & Game
1995 Nimbus Rd.
Rancho Cordova CA 95670
916-355-0142
Dan Caudle
Conoco, Inc.
Suite DU2002
PO Box 2197
Houston TX 77252
713-293-1246
755
-------
Krishna Chakravorty
Defense General Supply Center
8000 Jefferson Davis Hwy.
Richmond VA 23297-5685
804-279-4097
Shin-Ling Chang
Director of Analytical Testing
Commonwealth Technology, Inc.
2520 Regency Road
Lexington KY 40503
606-276-3506
Terry N. Chamberlain
Supy. Envir. Engineer
Environmental Prot. Dept.
Fleet & Industrial Supply
Installation Services
Suite 600
Code 71
Norfolk VA 23511
804-444-5446
Rich Chrostek
Technical Sales Rep.
Varian Sample Prep. Prod.
24201 Frampton Avenue
Harbor City CA 90710
800-421-2825
Ida Church
Navy Public Works Center
9742 Maryland Ave
Code 900
Norfolk VA 23511-3095
804-445-8850
Roger Claff
Environmental Scientist
American Petroleum Inst.
1220 L Street, N.W.
Washington DC 20005
202-682-8324
David Clampitt
Environmental Affairs Director
Institute of Industrial Launderers
1730 M St., NW, Suite 610
Washington DC 20036
202-296-6744
Joy G. Clark
Organics Section Chief
HydroLogic, Inc.
130 Placid Valley Road
Gaston SC 29053
803-750-0913
Louis C. Clay
Lab Director
Continental Cement Co., Inc.
3000 Highway 79S
PO Box 71
Hannibal MO 63431
314-221-1740
Alyson Cockrell
Technician
HydroLogic, Inc.
100 Ashland Park Lane
Suite E
Columbia SC 29210
803-750-0913
Jack Cochran
Sr. Organic/Analy. Chem.
IL Hazardous Waste Res.
Information Center
One E Hazelwood Drive
Champaign IL 61820
217-244-8910
Tracey L. Colbert
Group Leader
NUS Laboratory
5350 Campbells Run Road
Pittsburgh PA 15205
412-747-2533
756
-------
Martin K. Collamore
Laboratory Supervisor
City of Tacoma
2201 Portland Ave.
Tacoma WA 98421
206-591-5588
Susana Comte-Walters
Research Specialist
Westvaco Corporation
5600 Virginia Ave.
N. Charleston SC 29411
803-745-3711
Sandra Conley
Chemist
Arlington County WPCD
3401 S. Glebe Rd.
Arlington VA 22202
703-358-6832
Steve Connor
Consulting Health Phys.
Haliburton-NUS Corp.
900 Trail Ridge Road
Aiken SC 29803
803-649-7963
William E. Corl, III
Navy Public Works Center
9742 Maryland Ave
Code 900
Norfolk VA 23511-3095
804-445-8850
B. Rod Corrigan
Quality Assurance Officer
Environmental Consultants
391 Newman Avenue
Clarksville IN 47129
812-282-8481
Jack Criscio
President
Absolute Standards, Inc.
498 Russell Street
New Haven CT 06513
203-468-7407
Raymond J. Crowley
Project Manager
Sample Prep.
Millipore Corporation
34 Maple Street
Milford MA 01757
508-478-2000
Joanna Culver
Supervisory Chemist
Norfolk Naval Shipyard
Code 130
Portsmouth VA 23709
804-396-9307
Ann T. Davis
Analytical Chemist
Eastman Kodak
Kodak Park
Rochester NY 14652
716-722-5328
T.L. Dawson
Group Leader
Technical Center
Union Carbide Corporation
3200 Kanawha Turnpike
South Charleston WV 25303
304-747-4729
Deborah DeBiasi
Environmental Engineer
VA DEQ
PO Box 11143
Richmond VA 23230
804-527-5073
757
-------
Gerald J. DeMenna
Consultant
Chem-Chek Laboratories
594 Dial Avenue
Piscataway NJ 08854
908-752-7793
Jane Dennison
Organic Lab Manager
Princeton Testing Lab
3490 US Rt. 1
Princeton NJ 08543
609-452-9050
Ashok D. Deshpande
Research Chemist
NOAA, NMFS, NEFSC
US Dept. of Commerce
Sandy Hook Laboratory
Highlands NJ 07732
908-872-3043
Frank Dias
Waste Management
Environmental Monit. Lab.
2100 Cleanwater Drive
Geneva IL 60134
708-208-3112
Ann Dombrowski
Env. Contam. Analysis Lab Coord.
Applied Marine Research Lab
1034 West 45th Street
Norfolk VA 23529
804-683-4787
Willard Douglas
Tech. Lead Environmental
Sverdrup Technology, Inc.
Building 2423
Stennis Spa Ctr MS 35929
601-888-3158
Jan D. Dunn, PhD
Director
EML, Inc.
Environmetrix Research
59 N. Plains Industrial Park
Wallingford CT 06492
203-284-0555
Robin Y. Eaton
Group Leader, GC/MS
Lancaster Laboratories
2425 New Holland Pike
Lancaster PA 17601-5994
717-656-2301
Bethany Ann Ebling
Group Leader, Water Quality
Lancaster Laboratories, Inc.
2425 New Holland Pike
Lancaster PA 17601
717-656-2301
Andrew Ecklund
Act. Laboratory Director
Free-Col Labs
PO Box 557
Cotton Road
Meadville PA 16335
814-724-6242
Kenneth Edgell
Section Chief
The Bionetics Corporation
16 Triangle Park Drive
Cincinnati OH 45246
513-771-0448
Stephen P. Ellis
Laboratory Manager
Ecolochem, Inc.
4545 Patent Road
Norfolk VA 23502
804-855-9000
758
-------
Paul S. Epstein
Director
NSF International
3475 Plymouth Road
Ann Arbor MI 48105
313-769-8774
Valerie Evans
WQ Client Svcs. Manager
Triangle Labs of RTP
801 Capitola Drive
Durham NC 27713
919-544-8353
Kirby Feldmann
Sample Prep Section Manager
Environmental Science & Engr.
8901 N. Industrial Road
Peoria IL 61615
309-692-4422
Conrad Ferro
Laboratory Supervisor
City of Jacksonville
Public Util/WW Div.
2221 Bucknow St.
Jacksonville FL 32206
904-630-4210
Tom Fieldsend
Project Mgr.
Environmental Survey Associates
PO Box 867
Ramsey NJ 07446
201-934-1102
Gary Folk
Technical Officer
IEA, Inc.
3000 Weston Parkway
Cary NC 27513
919-677-0090
Jack David Fox
Chemist
9219 Main Street
Apartment 16
Woburn MA 01801-1256
617-932-4743
Drew Francis
QA Officer
HRSD
PO Box 5911
Virginia Beach VA 23455
804-460-2261
Kay Gamble
Director of Analytical Services
McGinnes Laboratories
4168 Westroads Drive
West Palm Beach FL 33458
407-842-2849
A. J. Gilbert
Technical Director
V.G. Masslab
Crewe Road, Wythenshawe
Manchester M23 9BE
ENGLAND
061-946-1060
Jenny Goeglein
QA Chemist
DynCorp - Viar, Inc.
300 North Lee St., Suite 200
Alexandria VA 22314
703-519-1278
Michael Goergen
President
Fire & Environmental
Consulting Labs
1451 East Lansing Drive
Suite 222
East Lansing MI 48823
517-332-0167
759
-------
Russell W. Grice
Lab Manager
City of Columbus
Sewer & Drains
900 Dublin Road
Columbus OH 43215-1170
614-645-7016
Zoe Grosser
Sr. Marketing Specialist
The Perkin-Elmer Corp.
761 Main Ave.
MS-219
Norwalk CT 06859
203-761-2874
John P. Gute
Laboratory Supervisor
LA County Sanitation District
1965 Workman Mill Road
Whittier CA 90601
310-699-0405
David W. Haddaway
Senior Chemist
City of Portsmouth
105 Maury Place
Suffolk VA 23434
804-398-0682
Donald J. Haertel
Laboratory Manager
Center for Applied
Engineering, Inc.
10301 Ninth St. N.
St. Petersburg FL 34683
813-576-4171
Jack Hall
Div. Technical Director
IT Corp.
9000 Executive Pk Dr,A110
Knoxville TN 37923
615-690-3211
Jeff Hall
President
Environmental Testing Services
888 Norfolk Square
Norfolk VA 23502
804-461-3874
Pam Hall
Laboratory Technician
Environmental Test. Svcs
888 Norfolk Square
Norfolk VA 23502
804-461-3874
Jeff Halvorson
Chemist
Burdick & Jackson
1953 S. Harvey St.
Muskegon MI 49442
616-726-3171
Scott Hanigan
VA DEQ
PO Box 11143
Richmond VA 23230
804-527-5069
Bob Harrison
Manager of Assay Development
ImmunoSystems
4 Washington Avenue
Scarborough ME 04074
207-883-9900
Jerry Hart
Product Manager
Environmental Applicat.
Fisons Instr./VG Analy.
Floats Road, Wythenshawe
Manchester M23 9LE
ENGLAND
061-945-4170
760
-------
Ken Hart
Laboratory Director
Free-Col Labs
5815 Airport Road, Suite A-2
Roanoke VA 24012
703-265-2544
Riaz-ul Hasan
Supervisor
Bergen County Utilities
Foot of Mehrhof Road
P.O. Box 122
Little Ferry NJ 07643
201-807-5855
Chuck Haskins
Sales Development Manager
3M Company
3M Center
Bldg. 220-9E-10
St. Paul MN 55144-1000
612-736-2899
Elaine T. Hasty
Sr. Applications Spec.
GEM Corp.
PO Box 200
Matthews NC 28106
704-821-7015
R. E. Hawley
Market Development Manager
Varian Sample Preparation Products
24201 Frampton Avenue
Harbor City CA 90710
310-539-6490
Nathan Heldenbrand
Senior Chemist
Koch Refining
PO Box 64596
St. Paul MN 55164
612-437-0668
Mike Heniken
Chemist
City of Columbus
Sewer & Drains
900 Dublin Road
Columbus OH 43215-1170
614-645-7016
Michael Herbert
Technologist
Baxter Health Care Corp.
120 Wilson Road
Round Lake IL 60073
708-270-4956
Ted Hess
Spectro Analytical Instruments
160 Authority Drive
Fitchburg MA 01420
508-342-3400
Rochelle Hickmott
Envir. Customer Service
Cambridge Isotope Labs.
20 Commerce Way
Woburn MA 01801
617-938-0067
Barbara Hill
Director of Administration
Waste Management
Environmental Monitoring Lab.
2100 Cleanwater Drive
Geneva IL 60134
708-208-3112
Kathy J. Hillig, PhD
Manager
Ecology Analytical Svcs.
BASF Corporation
1609 Biddle Avenue
Wyandotte MI 48192-3799
313-246-6334
761
-------
Richard L. Hoag, Jr.
Physical Science Technician
Fleet & Industrial Supply Center
1968 Gilbert Street
Suite 600, Code 700
Norfolk VA 23511-3392
804-444-2761
Mary Hoganson
Haliburton-NUS Corp.
900 Trail Ridge Road
Aiken SC 29803
803-649-7963
Pamela Holbrook
Associate Staff
Toyota Motor Manufacturing
Inc., USA
1001 Cherry Blossom Way
Georgetown KY 40324
502-868-2491
Kevin Holbrooks
Chemist
City of Jacksonville
Public Utilities/WW Div.
2221 Bucknow St.
Jacksonville FL 32206
904-721-9529
Dawn Holdren
Analytical Chemist
NASA
PO Box 44
Wallops Island VA 23337
804-824-1761
William T. Holt
Lab Director
Trace Analytical Labs.
2241 Black Creek Road
Muskegon MI 49444-2673
616-773-5998
Ben Honaker
Chemist
US EPA, Office of Water
OST, EAD, (WH-552)
401 M Street, SW
Washington DC 20460
202-260-2272
Roxane Hook
Gelman Sciences
600 S. Wagner Rd.
Ann Arbor MI 48106-1448
800-521-1520x623
Bob Houser
DynCorp - Viar, Inc.
300 North Lee St., Suite 200
Alexandria VA 22314
703-557-5040
Lyman H. Howe III
Research Chemist
TVA Environmental Chem.
1101 Market St. (CClA-C)
Chattanooga TN 37401
615-751-3711
Dean Howell
Division Director Quality
Fuels Department
Fleet & Industrial Supply Center
1968 Gilbert Street
Suite 600, Code 700
Norfolk VA 23511-3392
804-444-2761
Danny Hubbard
Graduate Student
Clark Atlanta University
223 James P. Brawley Dr.
Atlanta GA 30314
404-438-9645
762
-------
Greg Hudson
Laboratory Director
Enviro Compliance Labs., Inc.
Route 4, Box 2864
Route 1 and Old Keeton Road
Glen Allen VA 23060
804-550-3971
Frank Hund
Chemist
US EPA, Office of Water
OST, EAD, (WH-552)
401 M Street, SW
Washington DC 20460
202-260-7182
Carlton D. Hunt
Senior Research Scientist
Battelle Ocean Sciences
397 Washington Street
Duxbury MA 02332
617-934-0571
Ron Isaacson
Scientist
Weyerhaeuser Company
WTC 2F25
Tacoma WA 98477
206-924-6149
Carol Isenhour
Vice President
James R. Reed & Associates, Inc.
11864 Canon Blvd.
Newport News VA 23606
804-873-4703
Sohail Jahani
Phoenix Envr. Labs, Inc.
PO Box 418
Manchester CT 06040
203-645-1102
Richard A. Javick
Sr. Research Associate
FMC Corporation
P.O. Box 8
Princeton NJ 08543
609-951-3639
James S. Jones
Materials Engineer
NASA
Kennedy Space Center
DM-MSL
Kennedy Spa Ctr FL 32899
407-867-7051
Phanishushan B. Joshipura
Supervisory Chemist
Fleet & Industrial Supply Center
1968 Gilbert Street
Suite 600, Code 700
Norfolk VA 23511-3392
804-484-6430
Kevin W. Keeley
Laboratory Director
Great Lakes Analytical
1380 Busch Parkway
Buffalo Grove IL 60089
708-808-7766
Larry Keith
Radian Corporation
PO Box 201088
Austin TX 78720-1088
512-454-4797
George D. Kennedy
Environmental Scientist
HRSD
1436 Air Rail Ave.
Virginia Beach VA 23455
804-460-2261
763
-------
R. Michael Kennedy
Laboratory Supervisor
City of Rock Hill
Env. Mon. Laboratory
P.O. Box 11706
Rock Hill SC 29731-1706
803-329-8704
Jerry Kidwell
Ogden Envir. & Energy
Services Company, Inc.
3211 Jermantown Road
Fairfax VA 22030
703-246-0288
Jim King
Project Manager
DynCorp - Viar, Inc.
300 North Lee St., Suite 200
Alexandria VA 22314
703-519-1380
Wendy Kirkeeng
Lab Manager
Davis Analytical Labs.
PO Box 29
Tallevast FL 34270
813-355-2971
Dewey Klahn
Environmental Science Corp.
1910 Mays Chapel Drive
Mt. Juliet TN 37122
615-758-5858
Susan Kopacz
Environmental Analyst
PPB Environmental Labs.
6821 SW Archer Rd
Gainesville FL 32608
904-977-2349
Lawrence J. Korn
President
V.O.C. Analytical Labs., Inc.
877 NW 61st Street, Suite 202
Ft. Lauderdale FL 33309
305-938-8823
Rosanna Kroll
Environmental Specialist
MD Dept. of Environment
2500 Broening Highway
Room 1120
Baltimore MD 21224
410-631-3906
Larry LaFleur
NCASI
PO Box 458
Corvallis OR 97339
503-752-8801
Joan LaRock
Consultant to 3M
LaRock Associates, Inc.
801 Pennsylvania Av. NW
Washington DC 20004
202-628-4322
Cynthia H. Lee
Laboratory Supervisor
Kenvirons, Inc.
PO Drawer V
452 Versailles Road
Frankfort KY 40601
502-695-4357
Nathan Levy
President
Analytical & Envir.
Testing, Inc.
1717 Seaboro Dr.
Baton Rouge LA 70810
504-769-1930
764
-------
Ronald Lewis
Chemist
Norfolk Naval Shipyard
Code 130
Portsmouth VA 23709
804-396-9307
Mark Liner
US EPA, Office of Water
OST, (WH-552)
401 M Street, SW
Washington DC 20460
202-260-2090
Roger Litow
DynCorp - Viar, Inc.
300 North Lee St., Suite 200
Alexandria VA 22314
703-519-1385
Barry J. Llewellyn
Supervisor
Environmental Water Lab
GPU Systems Lab
PO Box 15152
Reading PA 19612-5152
215-375-5494
Viorica Lopez-Avila
Midwest Research Institute
California Operations
625-B Clyde Avenue
Mountain View CA 94043
415-694-7700
Norman Low
Product Manager
Hewlett-Packard
1601 California Avenue
Palo Alto CA 94304
415-857-7381
Ted W. Lufriu
President
Chesapeake Analytical Lab., Inc.
106 A Rockefeller Ct.
Waldorf MD 20602
301-932-4775
Carol Malone
QA/QC Coordinator
Jennings Laboratories
1118 Cypress Avenue
Virginia Beach VA 23451
804-425-1498
Craig Markell
Research Specialist
3M Company
3M Center
Bldg. 209-1W-24
St. Paul MN 55144
612-733-2813
Michael F. Martin
Analytical Chemist Senior
Commonwealth of VA DGS
1 North 14th Street
Richmond VA 23219-3691
804-371-2874
Tom Mascarenas
Analyst
Star Analytical
14500 Trinity Blvd 5-119
Ft. Worth TX 76155
817-571-6800
Sandra Mays
Instrument Specialist
Applied Marine Res. Lab
1034 West 45th Street
Norfolk VA 23529
804-683-4787
765
-------
Craig T. McCaffrey
Marketing Manager
Ohmicron Corp.
275 Pheasant Run
Newtown PA 18940
215-860-5115
Helen McCarthy
Supervising Chemist
RI Dept. of Health Labs.
50 Orms St.
Providence RI 02904
401-274-1011
Melinda McDavies
Chemist
Norfolk Naval Shipyard
Code 130
Portsmouth VA 23709
804-396-9307
Barry McKenzie
Research Chemist
Mallinckrodt Spec. Chem.
Paris By-Pass
P.O. Box 800
Paris KY 40362
606-987-7000
Lisa McMillan
Analytical Chemist
VA DEQ
Water Division
PO Box 11143
Richmond VA 23230
804-527-5181
Tom McVicker
QA/QC Officer
Gascoyne Laboratories
2101 Van Deman Street
Baltimore MD 21224
410-633-1800
Rodney T. Miller
Corporate QA Officer
PACE, Inc.
1710 Douglas Drive North
Minneapolis MN 55422
612-525-3465
Robert S. Mitzel
Director of Air Toxics
ALTA Analytical Lab.
5070 Robert J. Mathews Pk
El Dorado Hills CA 95762
916-933-1640
Jennifer L. Molnar
Analytical Chemist
Lockheed Environmental Systems
and Technologies
839 Bestgate Rd.
Annapolis MD 21401
410-266-9180
Marlene O. Moore
President
Advanced Systems, Inc.
P.O. Box 8090
Newark DE 19714
302-834-9796
Reginald D. Morehead
Scientist
Duke Power Company
GSD/ED/MG03A2
13339 Hagars Ferry Road
Huntersville NC 28078
704-875-5399
Pam Morhard
Physical Science Tech.
Norfolk Naval Shipyard
Quality Assurance Office
Code 130.02
Portsmouth VA 23709-5000
804-396-9309
766
-------
Ken Moura
QA Chemist
DynCorp - Viar, Inc.
300 North Lee St., Suite 200
Alexandria VA 22314
703-519-1165
Dierdre Murphy
Environmental Specialist
MD Dept. of Environment
2500 Broening Highway
Room 1120
Baltimore MD 21224
410-631-3906
Robert C. Murphy
Technical Coordinator
Technical Testing Labs
1256 Greenbrier Street
Charleston WV 25311
304-346-0725
Reynold Murray
Clark Atlanta University
University Box 296
Atlanta GA 30314
404-880-8744
Stephen P. Naughton
Environmental Manager
Coyne Textile Services
140 Cortland Ave.
Syracuse NY 13221
315-475-1626
J. R. Nein
Group Leader, Environment
Chesapeake Paper Products
19th and Main Streets
PO Box 311
West Point VA 23181
804-843-5750
Deborah Nelson
Chemist
Hampton Roads Sanitation District
1432 Air Rail Ave.
Virginia Beach VA 23455
804-460-2261
Guenter Niessen
Senior Project Manager
EM Science Division
EM Industries, Inc.
480 Democrat Road
Gibbstown NJ 08024
609-354-9200
Bill Nivens
Laboratory Programs Manager
Water Environment Federation
601 Wythe St.
Alexandria VA 22314
703-684-2400
Babu R. Nott
Project Manager
EPRI
3412 Hillview Avenue
Palo Alto CA 94304
415-855-7946
Alicia P. Ordono
VA Div. of Consolidated
Laboratory Services
1 North 14th Street
Richmond VA 23113
804-786-3411
Veriti P. Overby
Chemist
Fleet & Industrial Supply
Building W-388, Code 702
Norfolk VA 23511
804-444-2761
767
-------
Robert G. Owens, Jr.
Analytical Services, Inc.
390 Trabert Ave.
Atlanta GA 30309
404-892-8144
Jac L. Padgett
Vice President
EC Labs, Inc.
PO Box 569
Farmersburg IN 47850
812-696-5076
V. K. Palat
Wayne Analytical and
Environmental Services Inc.
Suite 915
992 Old Eagle School Road
Wayne PA 19087
215-888-7485
Lisa Palfey
Environmental Consultant
PA Power & Light
2 North 9th Street
Allentown PA 18101
215-774-5930
Susan E. Park
Treasurer
PPB Environmental Labs., Inc.
6821 SW Archer Road
Gainesville FL 32608
904-377-2349
Jerry L. Parr
Director, QA & Technology
Enseco-Rocky Mountain
Analytical Laboratory
4955 Yarrow Street
Arvada CO 80002
303-421-6611
Patricia Parsly
Project Coordinator
IT Corp.
304 Directors Drive
Knoxville TN 37923
615-690-3211
Lisa Kelley Peterson
Law Engineering
4465 Brookfield Corp. Dr.
Chantilly VA 22081
703-968-4700
William F. Pfeiffer
President
Ginosko Laboratories, Inc.
17875 Cherokee St.
P.O. Box 8
Harpster OH 43323
614-496-4051
James W. Pinson
President
American Labs & Research
Services, Inc.
1008 SE Circle
Hattiesburg MS 39402
601-264-9320
Joel A. Pitman
Senior Project Manager
Twin City Testing Corporation
737 Pelham Blvd.
St. Paul MN 55114
612-659-7476
James J. Pletl
Environmental Scientist
HRSD
1436 Air Rail Ave.
Virginia Beach VA 23455
804-460-2261
768
-------
Lee N. Polite
Research Chemist
AMOCO Corporation
Box 3011, M/S F-7
Naperville IL 60566
708-420-3110
Marion Poythress
President
Poythress Environmental, Inc.
PO Box 5298
Macon GA 31208
912-994-3527
Glenn Powell
Manager
Environmental Labs.
Webb Technical Group
4325 Pleasant Valley Rd.
Suite 110
Raleigh NC 27612
919-787-9171
Kerry Prescott
Director of Operations
IEA, Inc.
1113 Sawgrass Corp. Pkwy.
Sunrise FL 33323
305-846-1730
Susan M. Price
Sr. Technical Service Chemist
3M Company
3M Center
Bldg. 209-1C-30
St. Paul MN 55144-1000
612-733-3461
Bill Purcell
Inspections Coordinator
VA DEQ, Water Division
PO Box 11143
Richmond VA 23230
804-527-5077
Sarala Rajeshuni
Chemist
Roy F. Weston Inc.
Auburn Chemistry Laboratory
835 N. Gay Street #6
Auburn AL 36830
205-821-8039
Edgar Raker
System Laboratory Supr.
Kentucky Utilities Co.
PO Box 437
Ghent KY 41045
502-347-5361
J. Roberto Ramirez
President
Quantum Laboratories Inc.
PO Box 366950
San Juan PR 00936-6950
809-793-7288
Tom Randolph
Randolph Consulting
PO Box 82860
Baton Rouge LA 70884-2860
504-767-6302
Kenneth T. Raum
Environmental Insp. Supervisor
VA DEQ, Water Division
287 Pembroke Office Park
Virginia Beach VA 23462
804-552-1172
Susan Redding
GC-MS Operations
Davis Analytical Labs
PO Box 815
Edgewater FL 32132
904-428-5720
769
-------
G. Sudhakar Reddy
Shrader Analytical Labs
3814 Vinewood
Detroit MI 48208
313-894-4440
Edgard Resto, PhD
Associate Professor
University of Turabo
PO Box 3030
Gurabo PR 00656
809-743-7979
Hal Rhodes
Senior Research Assoc.
Texaco R & D
PO Box 1608
Pt. Arthur TX 77641
409-989-6487
Mark Richardson
VA DEQ
PO Box 11143
Richmond VA 23230
804-527-5078
Lynn Riddick
Associate Project Manager
DynCorp - Viar, Inc.
300 North Lee St., Suite 200
Alexandria VA 22314
703-519-1385
Nelson Risser
Manager, Pest./Sample
Sample Support
Lancaster Laboratories
2425 New Holland Pike
Lancaster PA 17601-5994
717-656-2301
Patty A. Rollins
Chemist
Hampton Roads Sanitation District
1432 Air Rail Ave.
Virginia Beach VA 23455
804-460-2261
Miriam Roman
Group Leader
WMX Technologies, Inc.
2100 Cleanwater Drive
Geneva IL 60134
708-208-3170
Ann Rosecrance
Corporate QA Director
Core Laboratories
10205 Westheimer
Houston TX 77042
713-972-6316
James R. Roth
Laboratory Manager
Alpha Analytical Lab
8 Walkup Drive
Westborough MA 01581
508-898-9220
Nancy C. Rothman
Chief Organic Scientist
Enseco
205 Alewife Brook Parkway
Cambridge MA 02138
617-661-3111
John T. Roy
Project Leader
Dow Chemical
1602 Chemical
Midland MI 48667
517-638-6912
770
-------
Anna M. Rule
Chief, Laboratory Div.
Hampton Roads Sanitation District
PO Box 5911
Virginia Beach VA 23455
804-460-2261
Dale Rushneck
Interface, Inc.
PO Box 297
Ft. Collins CO 80522-0297
303-223-2013
Gautam Saha
Clark Atlanta University
University Box 296
Atlanta GA 30314
404-880-8744
Ed Saltzberg
Senior Vice President
DynCorp - Viar, Inc.
300 North Lee St., #200
Alexandria VA 22314
703-519-1200
Cheryl G. Sawyer
Mgr. of Environmental Affairs
Cogentrix, Inc.
3105 American Legion Rd., Ste B
Chesapeake VA 23321
804-484-9008
Aisling Scallan
Marketing Manager
EnSys, Inc.
P.O. Box 14063
RTP NC 27709
919-941-5509x129
Robert B. Schaffer
Mgr. Eastern Region
Ogden Environmental & Energy
Services Company, Inc.
3211 Jermantown Road
Fairfax VA 22030
703-246-0274
George A. Schmitt
Business Development Mgr.
3M Company
3M Center
Bldg. 220-9E-10
St. Paul MN 55144-1000
612-733-0307
William C. Schnute, Jr.
979 Azalea Drive
Sunnyvale CA 94086
408-736-6265
David Schreiner
Chemist
City of Phoenix
2301 West Durango St.
Phoenix AZ 85009
602-495-5974
Christine Schwerdtferger
Radian Corporation
300 N. Sepulveda Blvd., Ste 1000
El Segundo CA 90245
310-640-0045
Janice Sears
Project Manager
Ogden Envir. & Energy
Services Company, Inc.
3211 Jermantown Road
Fairfax VA 22030
714-589-4301
771
-------
Jim Seeger
Chemist
US Army Environmental Hygiene
Agency, OECD/PAB
Building E2100 (EA)
Aberdeen Proving Ground MD 21010
410-671-8304
Andy Sendelbach
Sales Manager
Varian Sample Prep. Prod.
1829 Grande Oaks Rd.
Durham NC 27712
919-477-1015
Christopher Shane
Laboratory Supervisor
Technical Testing Laboratory
4643 Benson Avenue
Baltimore MD 21227
410-247-7400
C. Philip Shank
Director of Technology
Mallinckrodt Spec. Chem.
Paris By-Pass
P.O. Box 800
Paris KY 40362
606-987-7000
Robert Shirey
R&D Chemist
Supelco, Inc.
Supelco Park
Bellefonte PA 16823
814-359-5706
Jennifer Sieger
Chemist
US Army Envir. Hygiene
Agency, RICO/MAB
Building E2100 (EA)
Aberdeen Proving Ground
MD 21010
410-671-8304
Cindy Simbanin
DynCorp - Viar, Inc.
300 North Lee St., Suite 200
Alexandria VA 22314
703-519-1386
Kate Simmons
Tighe & Bond, Inc.
53 Southampton Road
Westfield MA 01085
415-572-3210
Rachael Simms
Clark Atlanta University
University Box 296
Atlanta GA 30314
404-880-8744
Louis Slapshack
Research Manager
Anheuser-Busch
1101 Wyoming Street
St. Louis MO 63118
314-577-2638
Joe Slayton
US EPA, Region 3
841 Chestnut Building
Philadelphia PA 19107
215-597-9800
John Sledge
Lab Manager
Burlington Research Inc.
PO Box 2481
Burlington NC 27215
919-584-5564
772
-------
Peggy S. Sleevi
Director of Quality Assurance
Enseco
2612 Olde Stone Rd.
Midlothian VA 23113
804-378-1851
Dewey W. Smith
Applications Manager
Antek Instruments, Inc.
300 Bammel Westfield Rd.
Houston TX 77090-3508
713-580-0339
James S. Smith
President/Chemist
Trillium, Inc.
7A Grace's Dr.
Coatesville PA 19320
215-383-7233
Roy-Keith Smith
Analytical Services Inc.
390 Trabert Ave.
Atlanta GA 30309
404-892-8144
Terry Smith
Section Manager GCMS
USPCI Analytical Services
4322 South 49th West Avenue
Tulsa OK 74107-6100
918-446-1163
Tom Smith
QA Chemist
DynCorp - Viar, Inc.
300 North Lee St., #200
Alexandria VA 22314
703-519-1279
Patrick Spink
WindowChem Software, Inc.
1955 West Texas Street, Suite 7-288
Fairfield CA 94533-4462
408-956-9666
Bruce Staples
Physical Science Tech.
Norfolk Naval Shipyard
Quality Assurance Office
Code 130.02
Portsmouth VA 23709-5000
804-396-9309
Douglas L. Stevenson
Chemist
Laboratory Support Division
Rocky Mountain Arsenal
Attn-AMXRM-L5
Commerce City CO 80022-2180
303-289-0217
William P. Stork
Chemical Analyst
Environmental Analysis
3278 North Highway 67
Florissant MO 63033
314-921-4488
Ed Stuber
Pace
RD #6 Robinson Lane
Wappingers Falls NY 12590
914-227-2811
Louise Stunkard
Physical Science Tech.
US Army Envir. Hygiene
Agency
Building E2100
Aberdeen Proving Ground
MD 21010
410-671-3269
773
-------
Mark Taylor
G.C. Product Specialist
Shimadzu Scientific Instruments
7102 Riverwood Drive
Columbia MD 21046
410-381-1227
Dawn Thomas
Sr. QA Specialist
PSI, Inc.
6913 Highway 225
Deer Park TX 77536
713-479-8307
Marion K. Thompson
Environmental Prot. Spec.
US EPA, Office of Water
OST, EAD, (WH-552)
401 M Street, S.W.
Washington DC 20460
202-260-7117
James C. Todaro
Lab Director
Matrix Analytical
106 South Street
Hopkinton MA 01748
508-435-6824
David Tompkins
President
ETS Analytical Services
1401 Municipal Rd.
Roanoke VA 24012
703-265-0004
Yves Tondeur
V.P. Technology
Triangle Laboratories
6320 Quadrangle Dr.
Suite 240
Chapel Hill NC 27514
919-493-0877
Allan M. Tordini
President
Quality Works, Inc.
8 Strafford Circle Road
Medford NJ 08055
609-953-9163
Felicitas Trinidad
Sr. Scientist/Supervisor
Hoffmann La Roche
340 Kingsland Road
Nutley NJ 07110
201-235-3131
Dion Tsourides
Product Manager - ICP
Spectro Analytical Instruments
160 Authority Drive
Fitchburg MA 01420
508-342-3400
Teri L. Tumolo
Mgr., Inorganic Analyses
Killam Associates
100 Allegheny Drive
Warrendale PA 15086
412-772-0200
Mark E. Tuttle
Program Director
ENTER Manufacturing
1632 NW Vicksburg Avenue
Bend OR 97701
503-389-4525
Richard Ungvarsky
Chemist
Akzo Salt Inc.
Abington Executive Park
Clarks Summit PA 18411
717-587-9403
774
-------
Jim Vance
Product Line Manager
Horiba Instruments
17671 Armstrong Ave.
Irvine CA 92714
714-250-4811x170
Yupha Vatcharapijarn
Clark Atlanta University
University Box 296
Atlanta GA 30314
404-880-8744
Joe Viar
Chairman
DynCorp - Viar, Inc.
300 North Lee St., Suite 200
Alexandria VA 22314
703-519-1000
Albert F. Vicinie
Supr., Industrial Lab
DeYor Laboratories, Inc.
7655 Market Street
Youngstown OH 44512
216-758-5788
Joseph S. Vitalis
Chemical Engineer
US EPA, Office of Water
OST, EAD, (WH-552)
401 M Street, SW
Washington DC 20460
202-260-7172
Tracy R. Volpe
V.O.C. Analytical Labs
877 NW 61st St., #202
Ft. Lauderdale FL 33309
305-938-8823
Leonard Voo
US EPA, Region 2
Environmental Service Division
(N5230)
2890 Woodbridge Avenue
Edison NJ 08837
908-321-6710
Bruce Wagner
Lab Manager
IT Corporation
304 Directors Drive
Knoxville TN 37923
615-690-3211
Jack Wahlstrom
Lab Manager
GCWDA
10800 Bay Area Boulevard
Pasadena TX 77507
713-474-4111
Tonie M. Wallace, RPR
President
County Court Reporters
124 E. Cork Street
Winchester VA 22601
703-667-0600
Claudia Walters
QA Officer
US EPA, Region 3
Chesapeake Bay Program
410 Severn Avenue, Suite 109
Annapolis MD 21403
410-267-0061
Randy Ward
Chief Chemist
Envir. Science Corp.
1910 Mays Chapel Rd.
Mt. Juliet TN 37122
615-758-5858
775
-------
Spence Ward
Chemist
Norfolk Naval Shipyard
Quality Assurance Office
Code 130.02
Portsmouth VA 23709-5000
804-396-9309
John Whitescarver
Vice President
Carter-Burgess
PO Box 16525
Washington DC 20041
703-777-9384
Robert Wichser
Chief Utility Engineer
Richmond Dept. of Public Utilities
600 East Broad St., Room 831
Richmond VA 23219
804-780-5202
Ann Wilkes
Lab. Regulation News
1350 Connecticut Ave., NW
Suite 1000
Washington DC 20036
202-862-0916
David F. Williams
President
Kenwill Environmental Laboratory
505 East Broadway
Maryville TN 37801
615-977-1200
David S. Williams
Product Manager
Zymark Corporation
Zymark Center
Hopkinton MA 01748
508-435-9500
Allison Wilson
Chief Chemist
Hampton Roads Sanitation District
1432 Air Rail Ave.
Virginia Beach VA 23455
804-460-2261
Hugh E. Wise
Environmental Scientist
US EPA, Office of Water
OST, EAD, (WH-552)
401 M Street, SW
Washington DC 20460
202-260-7177
Michael W. Woods
Mgr., Environmental Services
Technical Services Laboratories
1612 N. Lexington Ave.
Springfield MO 65802
417-864-3195
John E. Young
Principal Scientist
Westinghouse
Building 773-A
Savannah River Tech. Ctr.
Aiken SC 29808
803-725-3565
Steve Zajicek
Laboratory Operations Manager
PDC Laboratories, Inc.
PO Box 9071
Peoria IL 61612-9071
309-676-4893
776
------- |