Table 3. Rate of confirmation for selected chemical classes. (Chart not reproducible in text.)
QUESTIONS AND ANSWERS
MR. STANKO: George Stanko, Shell
Development. Walt, obviously, this work was done
with some of the data and some of the extracts
that had been sent to Athens as part of the
screening phase.
MR. SHACKELFORD: That's correct.
MR. STANKO: Could you tell us
what has happened with that program; where are you
in that particular program now?
MR. SHACKELFORD: Well, as far as
Athens is concerned, we have finished all of the
computer matching tests that we are going to do.
We have done the confirmation study and we were
able to confirm 435 compounds.
These are compounds that were found at a high
frequency. We also tried to confirm each com-
pound at least once in every industrial effluent
in which it was found. The library of unknowns
is presently being evaluated right now. We have
some 55 or 60 candidates presently being studied
whose spectrum does not exist in our reference
library. For the final end of the data you have
to refer to the Effluent Guidelines Division.
MR. TELLIARD: I want to add
some toxics to the list, George, only in petro-
leum.
Thank you, Walter. We are going to try to
continue this program with the additional indus-
tries of offshore oil and gas and organic chemi-
cals that will be coming up this year. We will
continue to use the tape program and the extracts,
with the new quality assurance built into the
data set, which will make Walter's life easier.
Of course, Walter is saying, "Who's going to pay
me to do this?"
Our next speaker is Paul Mills from Mead
CompuChem. Mead has spent some time in develop-
ing a quality assurance decision tree, I guess that is
the best way to describe it, for real time quality
assurance. This was developed primarily for
the garbage people, the solid waste people, but
I think a lot of these measurement decisions can
be made applicable to the work we are doing. So
we have invited Paul to come today and explain
the system.
QUALITY ASSURANCE DECISION MODELS
FOR HAZARDOUS WASTE ANALYSIS
Paul Mills, Mead CompuChem
MR. MILLS: Thank you. I
have asked Nancy to handle some transparencies
for me.
Now, this will be a multi-media show because
it deals not only with transparencies and slides,
but also with soils, sludges,
solid and hazardous waste as well as the water that you
are primarily interested in.
Earlier in the program we have heard several
speakers talking about quality assurance and
what you do, for example, at the instrument, how
can an operator make decisions as to the quality
of the data that has been produced. I thought
for those people who may or may not have some
familiarity with quality assurance I would put
the obligatory quality assurance and quality
control definition up there; is that focused
well? (Indicating.)
CompuChem is one of the largest analytical
facilities in the country. We have quite a few
GC/MS instruments and it poses some unique prob-
lems for me as Director, Quality Assurance, some
of which we will get into which led me to help
develop the model that I will be talking about.
Some of the things down here that I would like
to point out (indicating). We do hundreds of
samples a month, by a variety of methods, both
EPA and commercial methods for a variety of cus-
tomers and industries. We have three shifts,
24-hours a day, we never close. We have signi-
ficant computer capability so that we can pro-
cess the data that is generated and turn it
around quickly. We have a laboratory at Research
Triangle Park, North Carolina, and one in Gary,
Illinois, near Chicago. We have 24 GC/MS
instruments and trying to keep track of the
data from all of those can be time-consuming.
The manner and the size, the scope of
CompuChem is set up so that samples come in
on what amounts to an assembly line; no one
person sees the entire job on the sample from
extraction to concentration through clean-up,
through GC/MS analysis through data reporting.
So we found that it is critical that each
person who does a piece of that sample as it
is passed along knows how well that job
has been done because they get immediate
feedback as to "Did I do my job correctly, or
did I screw it up, does it have to be done
again?" The person next in line that gets
that sample to be able to do his part of the
job with it, like an auto assembly plant,
needs to know that that job was done correctly
so that his piece will have value when it
is added as the product goes down the line.
So we must make sure that the quality
of the product that went out the door to the
customer meets the standards that are demanded,
whether it is by contract or purchase agreement.
Also, to facilitate intralaboratory transfers
between departments and between people of pro-
ducts of known quality, we started by implement-
ing a system so that each person, each product,
each lab area was defined as to the type of
quality that was required.
May I have the next transparency, please,
Nancy. This you should have seen before, the
elements of quality assurance that are listed
in the EPA quality assurance guidelines. We
started to look at how are we organized and
who is responsible for what aspects of quality
in the laboratory, what are the quality assurance
objectives for the data in terms of these
parameters.
In EPA contracts these are very well spelled
out in some regards with a number of definitive
criteria that are supposed to be applied:
Surrogate recoveries, internal standard areas,
how well your spikes and duplicates are sup-
posed to be recovered and duplicated, things
like that. However, there are sections in the
contracts which read such as, '...If these cri-
teria are not met it is left up to the judgment
of the analyst in order to take corrective ac-
tions'; it's not spelled out clearly what those
should be or how those should be implemented.
There are also procedures spelled out for how
sample custody should be handled, how do you
calibrate, how do you tell if your instruments
are properly calibrated, the methods that are
to be used. Some of the methods in the hazardous
waste program that we found have been developed
in advance of validation data because of the
urgencies for some of the data to be produced.
We find, not surprisingly, that the methods
don't work for all kinds of samples very well.
Some they will work for very well, but some they
won't. Then, how do you produce and validate
your data, what checks are performed within the
laboratory on how well that data has been pro-
duced, and the procedures that are used; in
particular, corrective actions and reports to
the management on the corrective actions.
May I have the next slide, please. I went
up to the mountain one day and came down with a
stone tablet with the Four Laws of Quality
Assurance engraved on it, which are not my
invention but they seem to make some kind of a
sense and at least the people that I work with
understand them. The first law, the most
important one is, 'Do it right the first time'.
If you are going to take the time to process
a sample and report it out, do it right the
first time so there are no mistakes. Secondly,
'Detect errors as soon as possible'. If you
know that there was a mistake made in the
laboratory, try and get that mistake rectified
or start the reprocessing of the sample; don't
wait until it is ready to be reported out the
door to say there was a problem. You have lost
time and you have wasted a lot of energy. Again,
this gets back to one of the things Phil Ryan
said earlier, you want to correct the error
as close as possible to its source. If an
instrument operator can detect that there is a
problem with the surrogate recovery, that's
the time that something could be done about it.
It is also cheapest and quickest to do it that
way; and, from a quality assurance standpoint
I demand that all of the actions that have been
taken for problem data be documented. I want
to know what the corrective actions were, who
did them, what was their rationale, what was
the result.
On the next transparency, we started to build
an example of criteria for building a decision
model. How do you apply those four laws of
quality assurance so that you would apply
some sort of a logical or hierarchical frame-
work for making decisions based on problems
that you might see?
So we looked at, first, what data can be
examined by the analyst or someone who detects
the error. For example, you could look at it
as a GC/MS operator: Was the tune correct?
Was the blank run okay? Was the standard with-
in the criteria for calibration? Did all of
the pieces of information that were passed to
him concerning the preparation of that sample
match what it was supposed to be for that pro-
cedure? Were there other samples in that data
set, say if they came from a particular case,
that have similar problems that could account
for the problems that are being seen? Essen-
tially, what were the quality criteria for the
product and were they met?
If some of these things are not correct then
in what order should you examine the possible
causes? You could look and say certain things
like the tune, the blank, the standard must
have been acceptable or the analyst would not
have run the sample. You can check internal
standard areas, you can check the worksheets,
you can check response factors, things like
that, and check to see whether there was
anything special about those samples. Was
there any additional data that may be necessary
to determine the source of the error? For
example, the data from other sets of samples.
In our set up, a particular operator may
not have analyzed all of these samples from a
particular case. They may have been prepared
at different times, they may have been done by
a different instrument, a different operator,
a different shift; but, the laboratory manager
in charge of that area can go back and
determine whether similar samples from the
same set of samples have the same problems.
Are there additional people outside of that
laboratory who may be necessary to determine
the source of the error, like the lab manager
or the QC Department? And what are the options
for taking corrective action? What are the ones
that are most prompt, likely to lead to the
solution and elimination of the errors and saving
costs, especially saving time in identifying and
correcting the problems?
It may be possible that a calculation error
was made in the information that was provided
to the analyst. If that is detected a calculation
correction can be made; that is quick, that is
simple, that does not affect the quality of the
data except to correct the mistake. You may be
able to reinject the sample, in the worst case
you may have to go back and reanalyze an entire
lot of samples. Then, how are the corrective
actions documented? They are supposed to
be documented on the worksheet associated
with the sample in the laboratory files by
a personal memo to me, to the files, and in
the report to the customer.
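As a rough illustration of the least-cost-first ordering just described, here is a minimal sketch of how such a corrective-action decision might be written out. The flag names, the checks, and the exact cost ordering are illustrative assumptions, not CompuChem's actual system.

```python
from dataclasses import dataclass, field

@dataclass
class SampleResult:
    # Hypothetical flags an analyst or manager would set after the bench-level checks
    calculation_error: bool = False
    bad_injection: bool = False
    lot_out_of_control: bool = False
    log: list = field(default_factory=list)

def choose_corrective_action(s: SampleResult) -> str:
    """Pick the cheapest action that addresses the detected cause (least to most costly)."""
    ordered_actions = [
        (s.calculation_error,  "correct the calculation"),
        (s.bad_injection,      "reinject the sample"),
        (s.lot_out_of_control, "reprepare and reanalyze the lot"),
    ]
    for cause_present, action in ordered_actions:
        if cause_present:
            s.log.append(action)   # Fourth Law: document all actions taken
            return action
    return "escalate to lab manager / QC department"

# Example: a simple calculation error is corrected without reanalysis
print(choose_corrective_action(SampleResult(calculation_error=True)))
```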
The next slide, please. These are some of
the advantages and disadvantages of the
implementation of this system, at least as
it applies within CompuChem. It has shown
an improvement in the turnaround time
because it will detect and correct problems
earlier and, I'm sure, avoid repetition.
It has improved the laboratory working
relationships. If you have established with
each part of the laboratory that you
expect a certain quality of product from
them, all of the little pieces of paper
completely filled out, and you don't get it,
then you turn it back to them or don't
accept it, they tend to get the message very
quickly when things pile up on them, that
it's got to be right or it won't be passed
along.
It has reduced rework and the associated costs
because people are starting to do things right
more often. It has improved our goodwill and
our prestige with the customers because you have
decreased the turn-around time and improved the
quality of the product to the customer. It
tends to free higher level staff for planning
instead of problem-solving if things can get
solved at lower levels. You can document the
accountability for quality, something I am
particularly interested in. I am always trying
to establish that quality control and quality
assurance are really profit centers, they are
not cost centers or overhead; they contribute to
the value of the products. If you can document
what corrective actions were made and taken
and that you can reduce costs, you can show that
the quality departments are paying their way.
The detailed logic that goes into the
corrective actions for each area can be put into
the computer so that eventually there will be
no human intervention. Data can go directly from
the GC/MS instrument to a main-frame computer
that has the logic of the corrective actions and
decisions built into it so that those data can
be rejected or accepted right there. You save a
lot of manual intervention. We have this
currently in force for our biomedical area which
deals with much less complex samples than the
environmental ones. We are a few months away
from implementing it entirely for the environ-
mental samples, but the concepts and what we have learned
in biomedical will apply in environmental sam-
ples. Having the defined criteria we found
makes training of new staff quicker and more
effective. They know what is expected of them
and they know what they have to do.
The system for documentation as required by
the customer for this product is on the computer
that allows ready access by managers, so if
there is a question: "How good does this piece
of information have to be?" it is spelled out
and it is readily accessible. It's nice to be
able to know how much things cost so that you
can bid on some of the new work, for example.
It is an excellent management tool for measuring
performance.
Some of the disadvantages are there are some
costs associated with implementation because
you have to make changes in how the laboratory
does some things. In the past, it has been
the policy of CompuChem to use code numbers so
that the analysts working in the laboratory do
not know which samples are duplicates, blanks,
or spiked samples. This is so that we can
identify how well the laboratory does on all
kinds of samples. In order to make sure certain
kinds of information are detected at the
earliest possible level, the analysts need to
know the identities of those samples. If
someone thinks that that may distort the
performance, that if they know it is a QC sample
they are going to do even better than normal
on regular samples, there are still periodic
blind samples submitted that are doctored by
our quality control and quality assurance
department which come in as true "blinds" and
will test how well people are doing. Those are
submitted for each of the analysts and operators
every month. As you will see later in the
presentation that data is available for their
managers to review, comment on, and correct if
performance is under par.
May I have the next slide, please. This is
just a brief summary of the decision model
steps as applied to, for example, if you
are looking for contamination in a blank
associated with a set of soil samples that
has been prepared. You have to define the
product; usually that's defined by the
customer or in the contract. What are the
attributes that you want to be determined?
How much contamination do you want or how
little? How is the report to be delivered?
How do you make that product? Is it on the
GC/MS or do you want it on the GC? It is
usually defined by the contract for that
product. What quality of performance is
desired? For example, that no blanks fail
the criteria. How do you measure the product
quality? How often do you measure it? Who
is responsible for measuring it? Then,
this is where the managers and the actual
technical staff have to get heavily involved,
listing in detail all of the possible
reasons why you might not be able to meet
that criteria; such as, contamination in
various parts of the laboratory. Then, how
would you test and document those...or
eliminate those sources of problems in a
logical manner. Describe the documentation,
corrective actions, train the staff, report
it, and then notify the customer if it is
necessary.
Some of the changes that we have come up
with, now, for example in the processing of
blanks for the volatiles we have changed
hoods, we have changed types of impingers,
we have changed the location of sample
preparation based on the results of some
studies indicating there is some volatile
contamination in certain parts of the
laboratory. We have changed certain times
when we do things to limit the contamination
and have seen an overall improvement,
for example, in the quality of the volatile
blanks.
Next slide, please. This is a listing of
the desired product quality; for example, from
the previous slide, our example of the volatile
soil blank. Most of this is taken straight
from the contracts that come from the Hazardous
Waste Program office, but it is translated into
saying "This for our laboratory is what has to
be produced." You have to say the RIC of the
sample doesn't end on an eluting peak, for
example. You have to have certain kinds of
information, document control number, you have
to label certain peaks, identify who did it,
when, how, what standards they used; all of
these kinds of things go into making up the
attributes of an acceptable product. If there
is any qualifying data it is important to put
footnotes in there so people understand it.
Next slide, please. This is the example
for the volatile blank of what happens. There
are three different blanks made up to be able
to determine for a particular set of 20
samples where a contaminant might occur. If
you check the first blank and it is clean
then you move to check the second blank. If
that is clean you check the third blank.
If any of the blanks have a contamination
problem in them you can then narrow down
where the source of the contamination might
be coming from, go back and correct it;
or, you reprepare the sample and do it until
you have gotten a clean blank and clean
samples associated with it. This is the kind
of logic that is being applied for other
types of samples, the spikes and the duplicates,
as well as the blanks, in other areas across
the laboratory.
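The blank-checking sequence just described can be written out as a small decision routine. This is only a sketch of the logic as I read it from the talk and from Figure 4; the blank names and the interpretation attached to each are assumptions.

```python
def localize_contamination(instrument_blank_clean: bool,
                           blank_a_clean: bool,
                           blank_b_clean: bool) -> str:
    """Narrow down the likely contamination source from the VOA blanks.

    Assumed interpretation (drawn from the Figure 4 logic):
      instrument blank -> GC/MS lab air, water, standards, instrument
      "A" blank        -> hood area before sample preparation
      "B" blank        -> contamination introduced during prep or transit
    """
    if not instrument_blank_clean:
        return "GC/MS lab or instrument contaminated: correct and reanalyze"
    if not blank_a_clean:
        return "hood area contaminated before prep: clean hood, reprepare samples"
    if not blank_b_clean:
        return "contamination introduced during prep/transit: check seals, reprepare"
    return "all blanks clean: report samples"

# Example: contamination first appears in the "B" blank
print(localize_contamination(True, True, False))
```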
Now, I would like to switch, if possible,
back to the 35 millimeter slides and then
at the end of that I'll have about three more
transparencies.
Could someone turn on that slide projector,
please. In case you haven't seen a GC/MS
laboratory with a lot of instruments in it
that's what it looks like (indicating). This
is, for the example that I was using, the
volatile bottles are on the top and packed in
nice little Styrofoam containers so that
they don't break. There are four there, they
are contained in four volatile bottles.
That's CompuChem's...our EPA samples don't
come in nice containers like this. If they
did we wouldn't have as many breakage problems
as we see with them; in these we don't lose
samples for our commercial customers, they
come in like this (indicating).
As part of the laboratory quality control
some of these things...well, all of these
things have to be met. These are pieces of
information that are reported to the customer;
for this instance, in the Hazardous Waste
Program of EPA. I certainly echo, I nearly
stood up and applauded when I heard that the
Effluent Guidelines is trying to reduce the
amount of paper that is produced, they can
get more on tape because for several years
we have been trying to get the Hazardous
Waste Program to do that. All of our customers
are the regions so they are demanding
additional paperwork. Some of the tests that
are done in order to ensure that there is no
contamination in the glassware, for example,
for every set of samples, say, for sample
containers that are prepared, a portion of them
are prepared for tests to determine if they
look clean before they are used. We have
storage stability tests in order to monitor
the atmosphere in the walk-in
refrigerator in which the volatiles that
are prepared are stored to make sure that
there is no contamination occurring from
the storage of the samples.
This is the purged water that is used
for the preparation of the samples so that
we can demonstrate that we are not
contaminating the samples with the water.
The purge and trap, GC/MS, is where they
get analyzed. These are some of the reports that
we put out to the managers to document how
well the decision models are being followed and
how well the people that they have doing
their work are performing. We have computer-
generated reports which will show surrogate
recoveries by matrix, by level, by individual
extraction person, by GC/MS instrument, by
operator, by shifts; all of the different
ways so that the manager can actually
determine if somebody is out of line and
what needs to be done about that. You have
this kind of information that is also useful
to determine how well the laboratory is
performing and all of that. Quarterly we
will take a look at our recoveries and
determine whether or not they need to be
tightened based on what we are seeing on
the results.
The thrust of what we are trying to do
is to establish, if we find a problem, is it
a problem with the laboratory technique?
Is it a problem with the method and its
applicability for those samples so that we
can document for our customer it's our fault,
we screwed up, don't pay us for our mistakes?
Or, if it is not our fault we can present
that to the customer and say it is a problem
that either the method or the matrix causes
the data not to be acceptable to meet the
criteria as has been specified.
This is the main frame of the computer or it
is back behind there that we use for processing
some of the reports (indicating). I'll turn up
the lights again. There are just a few more
transparencies that I wanted to show which will
show the form of some of the reports that we
get.
This is an example of a pie chart that
essentially tells the managers, the people
that I work with, how many repeat requests
for sample repreparation or, for example,
reinjection were done during February.
The pie chart is divided into different
sections depending on the type of fraction
which is analyzed. The volatiles are over here;
the acid, base neutrals, semi-volatiles are part
of the chart and that seemed to be where a
large number of the problems were.
If you are interested to know how big a
problem this represents, this is less than
four percent of the total number of samples
that we processed for the month that
represented things that had to be reprepared.
The next one, please. An example of some
of the information, say, for surrogate
recoveries for volatile samples. A target
range is set up based on the criteria
that are in the EPA contracts that we have;
assuming a normal distribution of the
surrogate recoveries, you would want to see
the same distribution up here (indicating).
This is the actual distribution that we are
seeing, to be able to see for the kinds of
samples that we are getting in, if we are
within the control limits for those types
of samples and on that particular indicator.
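As a rough illustration of that comparison, the sketch below tallies observed surrogate recoveries against a contract target range and against control limits computed under a normality assumption. The range, the three-sigma limits, and the recovery numbers are invented for illustration; they are not the contract criteria.

```python
import statistics

def recovery_summary(recoveries, target_low, target_high):
    """Compare observed surrogate recoveries (%) to a target range
    and to +/- 3 sigma control limits assuming a normal distribution."""
    mean = statistics.mean(recoveries)
    sd = statistics.stdev(recoveries)
    in_target = sum(target_low <= r <= target_high for r in recoveries)
    return {
        "mean": mean,
        "std_dev": sd,
        "pct_in_target": 100.0 * in_target / len(recoveries),
        "control_limits": (mean - 3 * sd, mean + 3 * sd),
    }

# Illustrative numbers only (not contract criteria)
obs = [88, 95, 102, 76, 91, 99, 85, 107, 93, 90]
print(recovery_summary(obs, target_low=80, target_high=120))
```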
The last one, please. Here is an example
of tracking internal standard response
verifications. Here is an instance in which
suddenly something went way out of control
down here, the instrument was stopped, maintenance
was performed on the instrument and it was brought
back up (indicating); and, it's within the criteria.
That kind of information is available promptly
at the instrument, although the graph was not
made until later. The operator at the instru-
ment has to make decisions if he sees that
happen; that's it.
Thank you. I would like to thank Nancy.
I'm sorry you only got to see the back of her
head while she was doing that; the rest of
her is nice, too. That's the end of my
presentation. I would be glad to answer any
questions that you may have.
MR. TELLIARD: Thank you,
Paul.
A couple of announcements.
******
QUALITY ASSURANCE DECISION MODEL
FOR
HAZARDOUS WASTE ANALYSIS
Paul E. Mills, Director
Quality Assurance
Mead CompuChem
P.O. Box 12652
Research Triangle Park, NC 27709
ABSTRACT:
Adopting a customer and service-orientation within a laboratory Quality
Assurance program provides better-defined product quality within laboratory
areas, and ultimately for the laboratory's customers. This presentation descri-
bes a quality assurance decision model which is being installed at Mead
CompuChem. The model defines the various products of the laboratory as
"analytical results" which are conveyed via paper or magnetic tape to each part
of the laboratory or to customers. Each product must meet a pre-defined set of
criteria; for EPA hazardous waste analyses, these are based on contract-
specified deliverable documentation. Each laboratory area is responsible for
the quality of its products. The Quality Control and Quality Assurance
Departments monitor product quality, and check that documentation of corrective
actions is complete and consistent with the model. The second generation of the
model will extend the concept to allow "quality-costing" to be applied to
finished and reworked products. The advantages of this decision model are:
1) It is accessible to each manager through a common file; 2) Corrective
actions are made consistently the same for similar problems; 3) Logic applied
can be used for automating review systems, and ultimately "networking" to elimi-
nate the need for manual interventions; 4) Cost of producing products of
defined quality can be determined and used in assessing laboratory performance.
INTRODUCTION:
This paper provides a description of a management concept for improved product
quality from an analytical chemistry laboratory. I have provided a summary of
its development, and a description of the advantages of the system. Some
examples are provided of the logic applied, and the system performance data on
portions installed and currently operating.
Because of the size of Mead CompuChem (24 GC/MS instruments at 2 locations) and
the scope of the company's business, the concept of manufacturing centers has
been used in planning and production. Productivity is essential to warrant
capital investment. Specialization is applied to tasks such as sample receipt,
extractions, analysis, and data reporting. No one person does the "whole job"
on a sample. Therefore, it is critical that information accompany the sample in
process down the "assembly line". Each lab area must check its products'
quality prior to transferring samples to another lab area.
The original concept for this paper developed from attempts to quantitate the
contribution of QA and QC to profits. Several references are included which
provided assistance in this effort.
The productivity of a quality system can be measured by its contribution to
business profits. It is desirable to develop a quality system to achieve and
maintain product quality and decrease variability. To accomplish this, it is
necessary to define the product, its quality, and the quality of the perfor-
mance necessary to make the product. The procedures or methods used in produc-
tion place constraints on the quality by specifying: limits of detection,
sensitivity, safety, cost, measurability, precision, accuracy, selectivity, and
specificity. It is important to identify areas of responsibility to be effec-
tively managed to obtain control of product quality. It is also important to
measure the performance of procedures and analysts for the samples being
analyzed.
The objective of CompuChem's system is to improve product quality for the
laboratory's customers. I have suggested the definition that the lab's products
are "analytical results". These products consist of information packaged in
customer-requested formats such as EPA deliverable paper, GC/MS magnetic tapes,
etc. A sub-objective is to improve the quality of information exchanged within
the laboratory used to produce the customer-reportable data; i.e., each area of
the laboratory is a customer for intra-lab data; QA uses it to determine how
well those areas are performing.
To attain the objective, at least the following goals must be met; they are
stated in the form of Quality Assurance Laws for impact and for easy remembering.
The First Law of Quality Assurance is "Do it right the first time!" The Second
Law is "Detect errors as soon as possible!" The Third Law is "Correct the error
as close as possible to its source!" The Fourth Law is "Document all actions
taken!"
The objectives and goals can be met if the following concepts are adopted for
the laboratory: 1) Each product of the laboratory must be of a defined quality.
2) Criteria are established by Marketing and the customer for the products to be
delivered; criteria are established by QA for those products remaining inhouse.
Specifications for finished products must define the desired attributes and sub-
components. They must specify the inspection methods and frequencies and who is
responsible for inspections. Specifications must be expressed as "targets" and
"ranges". 3) Each lab area is responsible for the quality of its products, or
the product is returned for rework or explanation. Specifications for the
disposition of rejects (rework or scrap) must be made. No one should have to
look at bad data from another part of the lab! Each lab area should be viewed
as a "customer" for the products of the other lab areas. Each lab area has the
right as a customer to demand that the quality levels be met and maintained. 4)
Make QC samples, such as blanks, spikes, and duplicates, known to the analysts,
in the laboratory, to allow for prompt detection of problems. 5) Before changes
in product specifications or procedures are made, the approval of the Director,
Quality Assurance, is necessary. 6) Training and documentation are critical
steps to ensure quality. Figure 1 shows an example which summarizes the steps
in establishing the decision model.
While these system concepts were being developed for application at Mead
CompuChem, several EPA customers began requesting that the current set of hazar-
dous waste analytical contracts be modified to define more specifically the data
quality desired and corrective actions to be taken if acceptance criteria were
not met. EPA contract requirements would seem to imply that, by using the
required procedures for analyzing hazardous waste samples, it is possible to
produce data of acceptable quality on most samples, as determined by specified
quality indicators. Unfortunately, insufficient data is available to prove this
is true for all samples to which the methods are being applied. For the
contracts, the nature of corrective actions has been left to the "judgment of
the analyst" without specifying that the same problems (i.e., exceeding accep-
tance criteria) should be treated in a standardized fashion for all who
experience the same problems. However, there may in fact be samples for which
the methods and therefore the quality criteria do not apply. The lab must
therefore demonstrate that the analytical procedure and the techniques of ana-
lysts are in control, or that the problems are inherent in the method or the
nature of the sample. This can be established, for example, by using duplicates,
spikes, blanks, and other test samples to evaluate lab performance.
Using the EPA contract-specified deliverables list, I have produced a document
which defines the desired quality of the products (pieces of paper) which make
up the EPA data package. The criteria applied are either specified in the EPA
contract, or have been established by CompuChem in their absence in the
contract. It is the responsibility of the manager of each lab area that his
products meet the quality criteria. An example is provided in Figure 2 of the
criteria used in building the model. Each manager is responsible for rework
until the product is acceptable. The system for detection and correction of
such problems is established within a lab area by its manager, who presumably
knows its capabilities and resources best. Each manager goes through the logic
required to produce acceptable quality products. Quality of product should be
considered as well as the constraints of productivity and resources. Where
there are conflicts, top management must resolve them. This will give some
options to management in producing certain products. For example, if the pro-
duct is a "screening analysis" to determine approximate amounts of organics in a
sample, it may be that the screening data can be acceptably produced either by GC
or GC/MS. The system must define how many and what types of errors are to be
monitored and corrected, the frequency of testing, and what kinds of corrective
actions are appropriate. In addition, quality measures of performance are
required. An example of the product quality, procedure for production, and flow
chart of the decision model is shown in Figures 3, 4, and 5.
It is the responsibility of each lab manager to monitor for errors within his
area, to implement corrective actions, and to report the problem, its extent,
and the effectiveness of remedies, to QC and QA. If quality control samples are
outside control limits, the manager is informed by the QC Department, so that
the manager can correct the problems. QC and QA can assist and advise on the
appropriate actions. QC monitors the effectiveness of these actions, and
reports this to QA. Documentation of problems and actions must be made, either
by footnotes or written explanations within the body of the report. This docu-
mentation should provide adequate detail to state the problem, actions taken,
and their effectiveness, what data was affected, what dates these things
occurred, and the names of parties responsible, should there be questions. In
Figure 6 I have listed advantages and disadvantages of conversion to this
system.
The system described has demonstrated improved product quality and lowered costs
for those areas in which it has been installed. The system is being expanded
into other lab areas, and its operation continues to be refined with experience.
Figures 7, 8, and 9 show the type of management information generated.
Appraisal of the effectiveness of the system will eventually be handled by
acceptance sampling at CompuChem during review processes. Currently, several
levels evaluate all the data prior to release to other lab areas and to custo-
mers. Acceptance sampling will be instituted as observed error rates fall.
I would like to acknowledge my colleagues who contributed their time, effort, and
study results to developing parts of the system: Mrs. Patty Ragsdale; Mr.
Robert Meierer; Mr. Robert Whitehead.
REFERENCES:
ASTM Standard E882-82: "Standard Guide for Accountability and Quality Control
in the Chemical Analysis Laboratory"
Managing Quality for Higher Profits, Robert A. Broh, McGraw-Hill, 1982.
Quality Control in Analytical Chemistry, G. Kateman & F. W. Pijpers,
Wiley-Interscience, 1981.
FIGURE 1:
SUMMARY OF QA DECISION MODEL STEPS: EXAMPLE OF VOA BLANK, SOIL SAMPLE
1) Define product (VOA blank)
2) Describe attributes to be determined (extent of contamination, form and con-
tent of report to be delivered)
3) Define how product is to be made (GC/MS output, contract method)
4) Define quality of performance desired (no blanks fail criteria)
5) Define product quality criteria (specified in contract, priority pollutants
less than half detection limits)
6) Determine measurements of product quality, frequency of measurement, and
responsibilities (each set of samples prepared, analyzed by GC/MS operator,
within acceptance criteria)
7) List in detail all possible problems which could cause unacceptable product
quality (contaminated standards, glassware, water, etc.)
8) For all problems, list tests to determine source of problem in a logical,
hierarchical order (operator, manager check, reanalysis).
9) Describe documentation of corrective actions to be reported (reanalysis)
10) Implement system with training for staff responsible
11) Monitor and report on system effectiveness
12) Modify as necessary, and document changes (e.g., change type of impinger,
change location of sample preparation).
FIGURE 2:
EXAMPLE CRITERIA APPLIED TO BUILD A DECISION MODEL
What data can be examined by analyst who detects error? (For example, instrument
performance Tune, blank, standard data, worksheet, vials are all available for
inspection at the bench, as well as results of previous, related, samples, and
the quality criteria for the product)
In what order should it be examined? (Tune, blank, and standard must have been
acceptable, or no samples could be run; check internal standard areas;
check worksheets for amount of sample used, volume of
concentrates, surrogate and spike standards used, any nonroutine actions taken
or problems encountered in prep.)
What additional data may be necessary to determine the source of error? (Other
data from same set of samples)
What additional people may be necessary to determine the source of error? (Lab
manager, QC, etc.)
What options for corrective actions are most prompt, likely to lead to elimina-
tion of errors, save costs? (From least to most costly, identify and correct
calculation errors; reinject sample; reprepare and reanalyze samples.)
How are corrective actions documented? (In report, in lab files, by memo, etc.)
FIGURE 3
DESIRED PRODUCT QUALITY: VOA SOIL BLANK
Desired Product:
The RIC must be normalized to the largest, non-solvent peak.
The RIC must cover the range of Hazardous Substances List compounds.
Internal and surrogate standards must be labelled on the RIC.
There should be no tailing or elevated baselines (the latter portion of the
baseline should not rise by more than 4X the midrange level).
The RIC must not end on an eluting peak; peaks must not be cut off by the
end of a page.
Contaminants must be less than 1/2 the detection limits for HSL compounds, and
less than 25% the peak height of the nearest internal standard for others.
Contaminants must be accounted for.
There must be a document control number on the RIC, representing the EPA
case and sample numbers.
The RIC scan starts before the first eluting HSL compound, and ends no sooner
than the latest eluting HSL compound.
File header information is included to identify the ID number, standards
used, operator, shift, instrument, and time.
Tabulated results (identification, quantity, scan number or retention time)
of the specified HSL compounds must be submitted, validated and signed in
original signature by the Laboratory Manager.
On the EPA reporting form, the appropriate units and detection limit factors
must be circled and/or adjusted.
Appropriate footnotes for qualifying data must be included.
FIGURE 4
VOA SOLID SAMPLE PREPARATION
Three different laboratory areas are involved in preparation and analysis of VOA
solid samples: Glassware preparation; Inorganics lab hood for sample
preparation; GC/MS lab for addition of water, sample storage, and analysis.
Glass impingers are taken from the oven in Glassware preparation area and
transported to the Inorganics lab hood. Samples are transported to the lab area
for preparation, but kept outside the hood until each one's turn for preparation.
Only one impinger and one sample at a time are in the hood during preparation.
Water from the GC/MS lab (purged organic-free water) is taken into the hood for
filling the designated "A" and "B" blanks. A "C" blank is filled in the GC/MS
lab and makes the trip with the other samples, but is not opened in the hood; it
is similar to a trip blank.
The "A" blank is prepared first, by filling the impinger with the GC/MS water.
(This tests the hood area, to demonstrate it is clean before preparing other
samples).
20 samples are prepared, one at a time. Weighed quantities of samples are
transferred from jars into impingers with appropriate utensils, then capped.
After the 20th sample is prepared, the "B" blank is made, similarly to the "A"
blank. (This tests that there has been no contamination introduced into the
hood during sample preparation).
Prepared samples are taken into the GC/MS lab, filled with aliquots of GC/MS
water, and stored in the GC/MS lab refrigerator for VOA's only. It is equipped
with a charcoal scrubber.
The order of analysis for these samples and blanks is described on the following
flow chart.
LOGIC: The instrument blank shows that the internal and surrogate standards were
not contaminated, and that the GC/MS lab air and water are clean, and that the
instrument is not contaminated.
The "A" blank will show is the hood area was contaminated prior to sample
preparation.
The "B" blank will show if there has been "cross-contamination during transit or
staorage due to faulty impinger seals, lab air, etc.
The latter three blanks will also show if there is contamination of syringes or
glassware.
FIGURE 5: Flow chart of the order of analysis for the VOA solid samples and associated blanks (graphic not reproducible in text).
FIGURE 6
ADVANTAGES AND DISADVANTAGES
ADVANTAGES:
Improvements in turnaround time
Improved intralab working relationships
Reduced rework and associated costs
Improved goodwill and prestige with customers
Correction of problems at earliest stages
Higher-level staff are freed for planning, not problem-solving
Accountability for quality can be well-established
Detailed logic of corrective actions can be automated, "networked"
Defined criteria make training quicker, more effective
Automated system allows prompt access, consistent responses
Costs of errors can be documented
Costs of corrective actions can be documented
Cost data assist in bidding new work, measuring performance, etc.
DISADVANTAGES:
Minor costs of implementation: Changes in paperwork, work flow, training
Allowing analysts to know identities of QC samples may distort true
performance; can be corrected by submitting true "blinds"
FIGURE 7 (graphic not reproducible in text).
FIGURE 8: Percent of sample population versus surrogate recovery (graphic not reproducible in text).
FIGURE 9
INT. STD. RESP. VERIFICATION CONTROL CHARTS
Relative response ratios of semivolatile internal standards (phenol/d8-naphthalene, d8-naphthalene/d10-phenanthrene, and d10-phenanthrene/d12-chrysene) plotted against date and analyst, each with mean, UCL, and LCL lines. (Charts not reproducible in text.)
PROCEEDINGS
MR. TELLIARD: Good morning.
Contrary to the program, we have made a few
minor changes, there is no break this morning
so that we can stop about 11:30 for lunch
and people can check out.
Our first speaker this morning is Barry
Eynon from SRI. Barry is part three of the
continuing saga of the new procedure that we
are trying to enact and which we discussed
all day yesterday and which we will touch on
again today. Barry is going to discuss this
morning something we all want to listen to at
about quarter to 10, statistics; the joy and
fun of numbers.
STATISTICAL METHODS FOR EFFLUENT GUIDELINES
Barrett P. Eynon
SRI International
MR. EYNON: Good morning, I
am glad to see we are all still here or at
least partly. Bill asked me to come down and
talk to you a little bit about what we at SRI
and other groups working with effluent guidelines
have been doing as far as the statistical
analysis of pollutant data for setting effluent
guidelines.
The statistical analysis of industrial waste
pollution data is an important step in the
determination of effluent water guidelines. A
number of different statistical techniques are
used to address the information needs of the
technical staff at EPA in setting limitations
guidelines. There is not time today to talk
about all of the different methodologies, but
what I will try to do is give a general review
of the circumstances, methods, and objectives
of some of these analyses.
SRI, and I personally, have been
involved in three major industrial categories
over the past three years on both conventional
and priority pollutants: pharmaceutical, petroleum
refining, and organic chemicals manufacturing.
This work has been in cooperation with Effluent
Guidelines and the Office of Analysis and
Evaluation.
The data that we use in these analyses is
usually voluntary data submitted by the plants
and will consist of influent and effluent
treatment concentrations of pollutants. The
plants are selected from among the voluntary
participants to be those which have well-designed
and operating treatment systems of the
appropriate type for the regulatory package.
For instance, of the set of 22 pharmaceutical
plants which submitted data, 13 were judged to
have well-designed and operating biological
treatment systems and were designated as BAT/
BCT plants for the purposes of constructing
regulations. A further subset of 10 plants
among the 13 were designated as NSPS plants for
setting NSPS limits. The characterization
of these plants represents the engineering and
technical evaluations of the plants by EPA.
The sampling and data handling for each study
proceeds through several stages to ensure high
data quality. The samples are usually taken by
the plant according to a pre-determined sampling
plan. The sampling plan can be as straight-
forward as one sample taken each day or each
week at each sampling point; or, as extensive as
that in the first slide which is the sampling
design for the Organic Chemicals 5-Plant Study.
In this study at each of the participant plants
approximately 30 days of sampling were performed.
On each sampling day, one sample was taken at the
pre-treatment and the post-treatment sampling
points. Each sample was analyzed by an EPA
contract laboratory for a specific set of
priority pollutants and the samples were
also analyzed by a Chemical Manufacturers
Association contract laboratory and also by
the participant plants.
The pollutants that were analyzed for were
chosen from one or more of the analytical
fractions of the organic priority pollutants
so as to reduce the number of analyses needed
on each sample and also to satisfy the
confidentiality restrictions of each of the
plants. In order to evaluate the accuracy
and precision of the priority pollutant
measurements several quality control measures
were included in the sampling design.
Approximately two-thirds of the samples were
spiked with known concentrations of priority
pollutants after their analysis and reanalyzed
in order to measure the percent recovery of
the analytical methods. The remaining one-third
of the samples were analyzed in duplicate in
order to measure laboratory precision. The
samples were also spiked with known amounts of
"surrogate" chemicals known not to be present
in the waste stream, in order to aid in measuring
the recovery of the analytical methods. Measure-
ments were also made on blank samples of distilled
water, some of which were shipped with the waste
samples to check for contamination.
Upon receipt of the laboratory reports on the
chemical analyses, the data from such studies
are coded and entered into the computer data base.
As we have heard today, hopefully some of this
stuff will be obviated in the future, but for the
current work on this study we coded the data and
checked the data and then reviewed the data.
The package that we have found to be very
effective for setting up data bases to handle
complicated studies like this is the Statistical
Analysis System package, SAS, which we have
available on EPA's IBM computer and it runs on
IBM main frames. It is a very powerful and flexible
package for data processing that has data
management and reporting facilities and also
has the capabilities for sophisticated
statistical analyses.
Once the data is stored in the computer, data
listings and plots of the data can be generated.
These are checked for unusual or extreme values
which may indicate coding or transcription
errors. The values are reviewed with the
laboratory reports and with the laboratory to
correct any errors. In addition, concentrations
which are confirmed by the laboratory, but
which are attributable to known plant treatment
upsets, or deemed to show variation beyond that
associated with well-operated treatment systems,
can be removed from the analysis, in order to
focus on the behavior of well-operating treatment
systems.
Figure 2 shows plots of the Effluent Total
Suspended Solids concentration versus time for one
of the plants in the pharmaceutical data base
before and after removal of an extreme value.
In the top picture we can see that one point just
sticks out like a sore thumb and we went back
and checked it out. I'm not sure exactly what
was going on in this one, it could have been a
typo or it was a value that just was an upset.
We reviewed the plant records and removed that
value from the data set and then we get...when
we replot the data and rescale it we get a much
more reasonable looking view of the situation
at that plant.
So this is done for the set of data and the
final data set or edited data set is then
available for analysis by statistical methods.
So now we go into what is it that we are
trying to determine from this data once it is
in the computer. There are two major
quantities of interest in all of these studies.
The first is to measure the average concentration
of each pollutant in the wastewater of each
plant, before and after treatment.
Why don't we put up the next slide. This can
be directly estimated from the data, using the
arithmetic averages of the measured concentrations
for each sample. If the set of plants for which
the data is available is deemed to be a
representative set of the set of all plants with
well-operating treatment systems for the
industry, then the average pollutant concentrations
can be taken across plants to estimate the
average effluent concentrations for the industry.
So we will start with an averaging, if we have
multiple analyses per sample we will start with
an average and come up with a number for each
sample. Then, we would take an average across
those samples to come up with a value for the
plant and then an average across the plants to
come up with an overall concentration value.
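A minimal sketch of that nested averaging, assuming the data are held as replicate analyses grouped by sample and by plant (the structure and the numbers shown are assumptions for illustration only):

```python
from statistics import mean

# plant -> sample -> list of replicate analyses (illustrative concentrations)
data = {
    "plant_A": {"s1": [12.0, 14.0], "s2": [9.0]},
    "plant_B": {"s1": [30.0], "s2": [25.0, 27.0]},
}

sample_means = {p: [mean(v) for v in samples.values()] for p, samples in data.items()}
plant_means = {p: mean(vals) for p, vals in sample_means.items()}   # one value per plant
industry_mean = mean(plant_means.values())                          # average across plants

print(plant_means, industry_mean)
```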
In other situations similar to the organics
five-plant study where the plants were more on
the order of case studies with specific
pollutants of interest, there we want to actually
review the pollutants on a pollutant-by-
pollutant basis on a plant-by-plant basis to
look for pollutants in each different kind of
effluent.
If both influent and effluent data are
available on a particular data base, a second
quantity which can be calculated is the
percentage reduction of the pollutant; and,
that is given in the format up there, influent
minus effluent divided by influent and
multiplied by 100 to turn it into a percentage.
This is used to quantify the effectiveness of
the treatment system by the plant.
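In the notation just described, the percentage reduction is

\[ \%\ \text{reduction} = \frac{\text{influent} - \text{effluent}}{\text{influent}} \times 100 . \]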
The second main quantity of interest is
to characterize the day-to-day variability
in the concentrations of a pollutant in a
waste stream. For regulatory purposes, the
quantity of interest known as the variability
factor is defined to be the 99th percentile
of the distribution of daily concentrations
divided by their long term mean. This quantity
is similar in concept to the usual
coefficient of variation, except we are aiming
at a different percentile; it is found to
be a reasonably stable measure of the amount of
day-to-day variation in a pollutant independent
of the overall level of the pollutant in the
effluent.
If the appropriate variability factor for a pollutant
is determined, then it could be multiplied by a
designated long-term plant mean concentration
for that pollutant such that if the plant is
discharging overall at the designated long-term
mean concentration, then the rate of exceedance
of the limitation will be one day in 100. Long-
term mean effluent concentrations above the
designated mean level will show an exceedance
rate in excess of one in 100.
In order to calculate the variability factor
from a set of data, an estimate of the 99th
percentile of the distribution is necessary.
This is a more complex problem than the
estimation of mean concentrations, since the
data at hand often only consist of 30 to 50
points, or less. Several statistical methods of
estimating the 99th percentile have been examined
in the course of these studies. Figure 3 shows
the models used in the three main methods, super-
imposed on a hypothetical data histogram.
If sufficient numbers of points are available,
nonparametric estimates of the 99th percentile
can be calculated directly by looking at the
histogram. These estimates make no parametric
assumption about the shape of the distribution.
In particular, the specific estimator which was used in
work where we have sufficient data is the 50 percent
non-parametric tolerance estimator. I have the
reference in the paper when it comes out. This
requires at least 69 data points to be calculated.
There is also another form of estimator known
as the tail-exponential estimator which makes a
parametric assumption about the upper tail
of the distribution. That's the dotted curve up
there, and it assumes that beyond a certain
percentile, usually we take like a base 90 of
the percentile, that the tail of the distribution
falls off like an exponential distribution
(indicating). Taking only the data in the tail
we can construct a smooth...we smooth that out
and use that to estimate the 99th percentile.
Again, this requires about 70 data points to be
an effective method of calculation.
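Here is a sketch of a tail-exponential estimate of the 99th percentile, under the assumption described above that exceedances over roughly the 90th percentile fall off exponentially. The cut point and the algebra are a generic version of this idea, not necessarily the exact estimator used in the study, and the sample data are invented.

```python
import math
import random

def tail_exponential_p99(values, base_quantile=0.90):
    """Estimate the 99th percentile assuming an exponential upper tail
    beyond the base quantile (generic tail-exponential form)."""
    xs = sorted(values)
    n = len(xs)
    q_base = xs[int(base_quantile * (n - 1))]        # empirical base percentile
    excesses = [x - q_base for x in xs if x > q_base]
    theta = sum(excesses) / len(excesses)            # mean excess = exponential scale
    # Solve (1 - base_quantile) * exp(-(p99 - q_base) / theta) = 0.01
    return q_base + theta * math.log((1 - base_quantile) / 0.01)

# Illustrative use with made-up effluent concentrations
random.seed(1)
sample = [random.lognormvariate(1.0, 0.6) for _ in range(100)]
print(tail_exponential_p99(sample))
```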
For cases with fewer data points, distributional
models are necessary. The best general
distributional model that we have found
for low concentration pollutant data is the
log-normal distribution. The log-normal
distribution is the distribution of the
variable whose logarithm has a normal
distribution. Log-normal distribution is
appropriate for this type of data because
it does not assign any probability to
negative concentrations, and it has an
appropriate frequency distribution which
accords with the actual distributions
observed in the sample data.
The next figure shows a sample cumulative
distribution for an actual set of data along
with a fitted cumulative distribution of log-
normal. As you can see, they fit each other very
well...I'm afraid that's a little light, but
the jagged line is the frequency distribution
of the actual data. It's a rather large data
set in this case. The smooth line is the
fitted log-normal distribution (indicating).
So they do appear to fit each other very well
and have the appropriate type of behavior.
To estimate the 99th percentile, the log-normal
distribution is fitted to the data and then
we obtain the 99th percentile from tables of the
fitted distribution.
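A minimal sketch of the log-normal fit just described: take logs, fit a normal by the mean and standard deviation of the logged data, and read off the 99th percentile (2.326 is the standard normal 99th-percentile point). The concentrations shown are invented for illustration, and the variability factor here uses the fitted log-normal mean, which may differ in detail from the study's calculation.

```python
import math
import statistics

def lognormal_p99_and_vf(concentrations):
    """Fit a log-normal via the mean/sd of the logged data; return the
    estimated 99th percentile and the variability factor P99 / mean."""
    logs = [math.log(c) for c in concentrations]
    mu, sigma = statistics.mean(logs), statistics.stdev(logs)
    p99 = math.exp(mu + 2.326 * sigma)                # 99th percentile of fitted log-normal
    long_term_mean = math.exp(mu + sigma ** 2 / 2)    # mean of the fitted log-normal
    return p99, p99 / long_term_mean

# Illustrative daily effluent concentrations (ug/L)
daily = [3.1, 5.4, 2.2, 7.9, 4.4, 3.8, 6.1, 2.9, 5.0, 4.2]
print(lognormal_p99_and_vf(daily))
```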
The concept of variability factor is also
applied to the situation of determining
limitations for average concentrations over
longer time periods. For instance, for the
pharmaceutical and petroleum studies, variability
factors were calculated for averages over 30
consecutive measuring days. The long-term
mean of 30-day averages is equal to that of the
daily values, but the averaging decreases the
variability in the resulting measure.
Therefore, the appropriate variability factor
for 30-day average concentrations will be
-------
245
smaller than that for daily concentrations. If
the concentrations on each day were completely
independent, the appropriate formula for a
variability factor for these averages would be
as given in the first figure there. This comes
about through the central limit theorem of
statistics, which says that if X bar here is
the mean of the data we are investigating and
S of X is the sample standard deviation, then
averages of size 30 from a process with this
mean and standard deviation will tend to have
the same mean and a standard deviation which
is reduced by
root 30.
Actually, in practice, we find that the
concentration values on successive days tend
to be more similar than would be
suggested by independence. This is
presumably due to dependencies in the effluent
discharge from the plant and mixing and
-------
246
holding systems in the treatment process.
Figure 5 shows some sample graphs
of the autocorrelation functions which we
calculate at...it doesn't...we can take
either half. These were calculated on some
pharmaceutical data where we had long-term
data and we could calculate the autocorrelation
which is the correlation between values, a
particular fixed number of days apart. So the
autocorrelation of lag one is the correlation
between concentrations one day apart, an
autocorrelation of lag 30 is the correlation
between values 30 days apart. If we plot those
as a function of the lag going down,
correlations running between minus one and one,
we see that for each of these
situations we have positive autocorrelations,
and they are positive and tail off, getting smaller and
smaller as we get a longer and longer lag.
Of course, we would expect as the distance
-------
247
between any two measurements goes towards
infinity, that the correlation between those
measurements would tend to go to zero. The
effect of this is that the averaging process
on consecutive days reduces the variability,
but not by as much as would be suggested by
independence.
Could we back up one slide; that's the one.
The calculated autocorrelation for lags up...
I guess we need lags 1 to 29 in order to
calculate a 30-day variability factor, then the
formula for the appropriate variability
factor is similar, but it has another term in
it which depends on the autocorrelation. This
can be used to calculate appropriate variability
factors for 30-day averages for consecutive
days. For 4-day averages such as have been
suggested for priority pollutant limitations,
the appropriate variability factor would be
what we have got on the bottom because those
-------
248
are being suggested for non-consecutive days
of measurement. Also because when we look at
the priority pollutants we see less auto-
correlation than in the conventional pollutants.
This could be due to the effect of analysis
variability or just that priority pollutants
work differently, but our preliminary look
at priority pollutants shows that there is less
evidence of autocorrelation present.
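To put numbers on the formulas just described, here is a minimal Python sketch for the variability factor of n-day averages; it assumes SciPy, uses 2.326 as the 99th percentile of the standard normal, and applies an autocorrelation correction of the standard form for averages of consecutive days. It is an illustration of the idea, not the exact formula used in the rulemaking calculations.

    import numpy as np
    from scipy import stats

    Z99 = stats.norm.ppf(0.99)                   # about 2.326

    def vf_n_day(x, n=30, rho=None):
        """Variability factor for n-day averages.  With rho=None the days
        are treated as independent (central limit formula); otherwise rho
        is a sequence of autocorrelations, rho[0] for lag 1 up through
        rho[n-2] for lag n-1, and the variance of the average is inflated
        accordingly."""
        x = np.asarray(x, dtype=float)
        xbar, s = x.mean(), x.std(ddof=1)
        var_avg = s**2 / n                       # independent-days variance
        if rho is not None:                      # rho must supply n-1 lags
            lags = np.arange(1, n)
            r = np.asarray(rho[:n - 1], dtype=float)
            var_avg *= 1.0 + (2.0 / n) * np.sum((n - lags) * r)
        return (xbar + Z99 * np.sqrt(var_avg)) / xbar

    # 4-day averages over non-consecutive days:  vf_n_day(x, n=4, rho=None)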
So that's really where we're coming from on
objectives and how we would calculate these
numbers if data were perfect, perfect in the
sense of no reporting problems and no other
external considerations. All laboratory data,
of course, are always expected
to have some variability in them.
There are some special topics I would like
to mention, things that are particularly
applicable to priority pollutants and the way
that they affect our statistical analysis. In
-------
249
particular, there is the reporting and
handling of detection limit values. When
the concentration of the sample is too
small to measure, the laboratory will report
not detected. This is fine and very
appropriate as an analytical tool, but just
drives the statisticians nuts because it
is not a numerical value. Somehow in order
to do a calculation with these values we
have to come up with some sort of numerical
value to use. The first cut on this would
be to stick these values in at a concentration
of zero. This is not bad, but it may
under-estimate the concentration of the
pollutant.
So what we want to do is, we would also
like to explore the sensitivity of our analyses
to the assignment of these values by also
assigning them to an upper value for the
concentration. This works best if we know
-------
250
the detection limit for the methodology; and,
that's not always true in the data that we see.
If we calculate a statistic with the data
assigned at zero and then assigned to the
detection limit, we get a sensitivity type of
analysis which will tell us how much the
means, for instance, could change between
these two values.
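That zero-versus-detection-limit bracketing takes only a few lines; the sketch below is illustrative (NumPy assumed, with non-detects represented as None).

    import numpy as np

    def mean_bounds_for_nd(values, detection_limit):
        """Sensitivity check for non-detects: the mean with every ND set
        to zero and again set to the detection limit brackets the unknown
        true mean."""
        lo = np.array([0.0 if v is None else v for v in values])
        hi = np.array([detection_limit if v is None else v for v in values])
        return lo.mean(), hi.mean()

    # e.g. mean_bounds_for_nd([12.0, None, 8.5, None], detection_limit=10.0)
    # returns (5.125, 10.125)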
There are also other more sophisticated
techniques for handling these quantities
that deal with the detection limit data as
missing information or censored data, not
in any pejorative sense; simply, that the
concentration would be known to be below a
certain low level but would not be known
quantitatively further than that, and that
would be the kind of model.
The appropriate handling of detection
limit data is a question that has to be
approached for each different technique,
-------
251
statistical technique that we are going
to use. For the calculation of variability,
simply assigning the values to a
particular numerical concentration is not
quite the right thing to do. We have
done a lot of work with what is called the
Delta log-normal model where we explicitly
give these concentrations their own
probability mass at zero and this allows...
and then we model the data above the
detection limit by a log-normal distribution.
This seems to work fairly well.
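A minimal sketch of a delta log-normal calculation along these lines is shown below; it is an illustration of the idea (NumPy and SciPy assumed, with the non-detect mass placed at zero and the fraction of non-detects assumed to be well under 99 percent), not the production code used in these studies.

    import numpy as np
    from scipy import stats

    def delta_lognormal_vf(detected, n_nondetect):
        """Delta-lognormal variability factor: non-detects get a
        probability spike at zero, detected values are modeled as
        lognormal; returns the 99th percentile of the mixture divided
        by its mean."""
        detected = np.asarray(detected, dtype=float)
        n = detected.size + n_nondetect
        delta = n_nondetect / n                    # mass at zero
        logs = np.log(detected)
        mu, sigma = logs.mean(), logs.std(ddof=1)
        mean = (1.0 - delta) * np.exp(mu + sigma**2 / 2.0)
        z = stats.norm.ppf((0.99 - delta) / (1.0 - delta))
        p99 = np.exp(mu + sigma * z)               # 99th pct of the mixture
        return p99 / mean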
For other types of considerations and
situations, it has to be a continuing factor in
the statistician's mind as to how he is going
to handle these detection limit values. What
I would also like to say is that continued
attention by analytical chemists to the
definition and reporting of detection limits
will be an important step in clarifying how
-------
252
these values should be used. I have seen
some American Chemical Society publications
and some of the things that we talked
about in these conferences on clarifying
detection limits and defining them. I
would like to suggest that that always can
be carried further in terms of standards
and reporting practices for all of the
laboratories.
Another issue in the analysis of priority
pollutant data is inter-laboratory and
intra-laboratory variation. The analysis
replication in the organic 5-plant study
allows an investigation of the sources of
variation in the concentration measurements
because the study includes multiple samples,
multiple laboratories analyzing each sample
and replicate analyses by laboratories on
at least a portion of the samples. Using
statistical variance components estimation
-------
253
techniques, the variability in these samples
can be broken down into four components.
There is the inter-sample variability which
would be the natural variability of the true
concentrations in each sample which we would
see if there were no analytical errors. This
would be representative of time or sampling
variation in these samples. The second
factor is the consistent inter-laboratory
variability which is, if we take a set of
samples and give them to a set of laboratories
and look at the mean concentration that each
laboratory gives, each laboratory will vary
slightly and the variation between laboratories
on that is another factor that can come out.
This could be called the inter-laboratory
accuracy or lab bias. The third factor is,
within-sample inter-laboratory variability
which has to do with the individual handling
of each sample by the laboratory; and, if we
-------
254
took one sample and gave it to a bunch of
laboratories they would also vary. This
could be thought of as the inter-laboratory
precision. The fourth factor is the
intra-laboratory variability which would be
the variability between pairs of replicates
run at the same laboratory. So this would
be the intra-laboratory precision.
These four components were estimated in
the organic study for each pollutant for which
there was sufficient data and the model we
used was a slight modification on the ordinary
variance components model in that we applied
this to the log-normal model effectively
analyzing the logarithm of the concentrations,
and what we end up with is a multiplicative
model rather than the ordinary additive model.
It seems to work fairly well with the log-
normal distribution and all of our other
assumptions in the analysis.
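For a balanced layout, the four components can be estimated from the usual analysis-of-variance mean squares applied to the logarithms. The sketch below is a simplified illustration for a fully balanced crossed design with at least two replicates (NumPy assumed); the actual study data were unbalanced and required a more careful fit.

    import numpy as np

    def log_variance_components(y):
        """y[i, j, k] = log concentration for sample i, laboratory j,
        replicate k (balanced design).  Returns method-of-moments
        estimates of the inter-sample, consistent inter-lab, within-sample
        inter-lab, and intra-lab variance components; negative estimates
        are truncated to zero."""
        I, J, K = y.shape
        grand = y.mean()
        m_ij = y.mean(axis=2)                        # sample-by-lab means
        m_i, m_j = y.mean(axis=(1, 2)), y.mean(axis=(0, 2))
        ms_e = ((y - m_ij[:, :, None])**2).sum() / (I * J * (K - 1))
        ms_sl = K * ((m_ij - m_i[:, None] - m_j[None, :] + grand)**2).sum() \
                / ((I - 1) * (J - 1))
        ms_lab = I * K * ((m_j - grand)**2).sum() / (J - 1)
        ms_samp = J * K * ((m_i - grand)**2).sum() / (I - 1)
        var_e = ms_e                                     # intra-laboratory
        var_sl = max((ms_sl - ms_e) / K, 0.0)            # within-sample inter-lab
        var_lab = max((ms_lab - ms_sl) / (I * K), 0.0)   # consistent lab bias
        var_samp = max((ms_samp - ms_sl) / (J * K), 0.0) # inter-sample
        return var_samp, var_lab, var_sl, var_e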
-------
255
We did run into some problems with lots
of detection limit data on some of this data.
A lot of the effluent data was consistently
down to detection limit. We can't really say
much about these sources of variability in
such cases. That's why it's nice to have
studies like George's study with designed
levels of pollutant concentrations so you can
actually see what's going on there. So these
kinds of factors can be quantified in cases
where we have sufficient data.
The last issue that I wanted to mention was
spike sample analysis. We had data in the
organic study for both priority pollutant
spiking and also surrogate chemicals, deuterated
or halogen substituted pollutants which were
added to the sample after the original analysis
and measured for their concentration.
The last slide here shows...just gives the
ordinary formula for percent recovery and here
we have the spike...we take the raw sample
-------
256
concentration, C, the spike level as L and
the spike sample concentration as S; we
calculate the percent recovery this way.
For the surrogate chemicals, C would be fixed
at zero because we know these chemicals
would not be present in the sample and we can
calculate the recovery.
Now, the important point here from a
statistician's point of view is that we can
assume fairly well that we know L because it
is a laboratory standard, but S and C are
both subject to analytical variation and,
therefore, for a single sample the estimate of
the recovery is subject to analytical
variation. Therefore, when we calculate these
recoveries we like to take an average over
many samples in order to evaluate the overall
recovery of the method; and, that's fine.
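Taking the recovery formula as the usual 100 (S - C) / L, which is consistent with the definitions of C, L, and S given here, a minimal sketch looks like this; the function name and the example numbers are ours.

    def percent_recovery(raw_conc, spike_level, spiked_conc):
        """Percent recovery for one spiked sample: 100 * (S - C) / L,
        where C is the unspiked result, L the known spike level, and S
        the result after spiking.  For surrogate chemicals C is taken to
        be zero."""
        return 100.0 * (spiked_conc - raw_conc) / spike_level

    # e.g. percent_recovery(raw_conc=2.0, spike_level=20.0, spiked_conc=19.0)
    # gives 85.0; single-sample values are then averaged over many samples
    # to characterize the overall recovery of the method.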
The other issue which comes up is the question
of correcting individual sample values. We
decided that it was probably better not to do
-------
257
that because while you are increasing the
accuracy, you are also decreasing the precision
of the concentration measurement. So we
decided it was better in this case not to do
this on these samples.
There was also an additional consideration
in that not all samples were spiked and so we
couldn't...we wanted to make sure we were doing
everything the same on all samples. So we
evaluated that for each method, for each
chemical. We evaluated recovery and that's
part of our summary that we will be presenting
to the agency.
Hopefully, this has given an idea of some
of the types of techniques and issues in the
statistical analysis of the data for effluent
guidelines, and I hope that the cooperation
between statisticians and analytical chemists
will continue. I think it is
important in exploring all of the facets of this
complex subject. Thank you.
-------
257a
Statistical Methods for Effluent Guidelines
Barrett P. Eynon
SRI International
I. Introduction
The statistical analysis of industrial waste pollution data
is an important step in the determination of effluent water
quality guidelines. A number of different statistical
techniques are used to address the information needs of the
technical staff at EPA in setting limitations guidelines.
There is insufficient time today for a detailed discussion
of all of these methodologies; what will be aimed for in
this talk is a general overview of the circumstances,
methods, and objectives of some of these statistical
analyses. SRI has been involved in the data analysis for
three major industrial categories over the past three years:
pharmaceutical manufacturing (1), petroleum refining (2) ,
and organic chemicals manufacturing industries (3). This
work has been performed under the auspices of the EPA Office
of Analysis and Evaluation. The topics presented here are
drawn from our work on these projects, and are intended to
indicate some of the important concepts and methods in this
-------
257b
work.
II. Description of Data
The basic data used in these projects consists of
measurements of pollutant concentrations in water samples
taken at the treatment influent and effluent points, at a
set of representative plants from the industrial category in
question. The plants involved in the study are generally
voluntary participants from among the set of plants having
well-designed and operating treatment systems of the
appropriate type for the regulatory package. For instance,
of the set of 22 pharmaceutical plants which submitted data,
13 were judged to have well-designed and operating
biological treatment systems, and were designated as BAT/BCT
(Best Available Technology/Best Conventional Technology)
plants for the purposes of constructing regulations. A
further subset of ten plants from among the 13 were
designated as NSPS (New Source Performance Standards) plants
for setting NSPS limits. The characterization of the plants
represents engineering and technical evaluations of the
plants by EPA.
The sampling and data handling for each study proceeds
through several stages, to insure high data quality. The
samples are usually taken by the plant according to a
predetermined sampling plan. The sampling plan can be as
-------
257c
straightforward as one sample taken each day or each week at
each sampling point over the sampling period, or as
extensive as that shown in Figure 1, the sampling design for
the Organic Chemicals 5-Plant Study.
-------
257d
[Figure 1. Sampling design for the Organic Chemicals 5-Plant Study.]
-------
257e
In this study, at each of the participant plants,
approximately 30 days of sampling were performed. On each
sampling day, one sample was taken at the pre-treatment and
the post-treatment sampling points. Each sample was analyzed
by an EPA contract laboratory for a specific set of priority
pollutants. The pollutants analyzed for were chosen from one
or more of the analytical subsets of the organic priority
pollutants, so as to reduce the number of analyses needed on
each sample, and to satisfy the confidentiality restrictions
of each of the participant plants. Approximately one-fourth
of the samples were also analysed by a Chemical
Manufacturers Association (CMA) contract laboratory, and the
participant plants were also encouraged to conduct their own
analyses of the samples.
In order to evaluate the accuracy and precision of the
priority pollutant measurements, several quality control
measures were included in the sampling design. Approximately
two-thirds of the samples were spiked with known
concentrations of priority pollutants after their analysis,
then reanalyzed, in order to measure the percentage recovery
of the analytical methods. The remaining one-third of the
samples were analyzed in duplicate, in order to measure
laboratory precision. Samples were also spiked with known
amounts of "surrogate" chemicals known not to be present in
the waste stream, in order to aid in measuring the recovery
of the analytical methods. Measurements were also made on
-------
257f
blank samples of distilled water, some of which were shipped
with the waste samples to check for contamination.
Upon receipt of the laboratory reports on the chemical
analyses, the data are coded and entered into a computer
data base for processing. An appropriate data base
structure is set up to incorporate the elements of the study
design. In our work at SRI, we have found the Statistical
Analysis System (SAS) computer package (4), which is
available on EPA's NCC-IBM system, to be the most effective
system for flexible and efficient data processing, because
it both provides data management and reporting facilities,
and has the capabilities for sophisticated statistical
analyses.
Once the data is stored in the computer, data listings and
plots can be generated. These are checked for unusual or
extreme values, which may indicate coding or transcription
errors. These values are reviewed with the laboratory
reports and with the laboratory to correct any errors. In
addition, concentrations which are confirmed by the
laboratory, but which are attributable to known plant
treatment upsets, or deemed to show variation beyond that
associated with well-operated treatment systems, can be
removed from the analysis, in order to focus on the behavior
of well-operating systems. Figure 2 shows plots of the
Effluent Total Suspended Solids (EFTSS) concentration versus
-------
257g
time for one of the plants in the pharmaceutical data base,
before and after removal of an extreme value. Note that the
plots of the data after correction have been rescaled, and
the data now exhibits much more homogenous behavior.
-------
[Two plot panels: Plant 12097 EFTSS (MG/L) versus time, original data (scale 0 to 800) and data after correction (scale 0 to 150).]
Figure 2. Plot of Effluent Total Suspended Solids for a
pharmaceutical plant, before and after correction of an
outlier.
-------
257i
III. Statistical Analysis
After the data set is cleaned and checked, statistical
analyses are performed. There are two major quantities of
interest in all of these studies. The first is to measure
the average concentration of each pollutant in the
wastewater of each plant, before and after treatment. This
can be directly estimated from the data, using the
arithmetic average of the measured concentrations for each
sample. If the set of plants for which data is available is
deemed to be representative of the set of all plants with
well-operated treatment systems in the industry, then the
average pollutant concentrations for the industry can be
estimated by taking the average across plants of the average
concentrations for each plant. In other situations, such as
the organics study, where there are only a few plants, each
analyzed for a different set of pollutants, a case-by-case
analysis can be prepared for each plant, focusing on the
pollutants found to be present in large concentrations in
the effluent streams of specific plants.
-------
257 j
If both influent and effluent data are available for a
pollutant at a plant, the percentage reduction of the
pollutant can be calculated by:
    100 x (influent concentration - effluent concentration) / influent concentration
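Expressed as a small helper function (the name is ours, purely for illustration):

    def percent_reduction(influent_conc, effluent_conc):
        """Percentage reduction of a pollutant across treatment."""
        return 100.0 * (influent_conc - effluent_conc) / influent_conc

    # e.g. percent_reduction(2.4, 0.3) gives 87.5 percent removal.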
The second main quantity of interest is a characterization
of the day-to-day variability in the concentrations of a
pollutant in a waste stream. For regulatory purposes, the
quantity of interest, known as the variability factor, is
defined to be the 99th percentile of the distribution of
daily concentrations, divided by their long term mean. This
quantity, (which is similar in concept to the usual
coefficient of variation, the ratio of the standard
deviation to the mean), is found to be a reasonably stable
measure of the amount of day-to-day variation of a
pollutant, independent of overall level of the pollutant
concentration. If the appropriate variability factor for a
pollutant is determined, then it can be multiplied by a
designated long-term plant mean concentration for that
pollutant at that plant, such that if the plant is
discharging overall at the designated long-term mean
concentration, then the rate of exceedance of the limitation
will be 1 day in 100. Long-term mean effluent
concentrations above the designated mean level will show an
-------
257k
exceedance rate in excess of 1 in 100.
In order to calculate the variability factor from a set of
data, an estimate of the 99th percentile of the distribution
of daily values is necessary. This is a more complex problem
than the estimation of the mean concentrations, since the
data at hand often only consist of 30-50 points, or less.
Several statistical methods of estimating the 99th
percentile have been examined in the course of these
studies. Figure 3 shows the models used in the three main
methods, superimposed on a hypothetical data histogram.
-------
257l
[Figure 3. Models used in the three main methods of estimating the 99th percentile, superimposed on a hypothetical data histogram.]
257m
If sufficient numbers of points are available, nonparametric
estimates of the 99th percentile can be calculated. These
make no parametric assumption about the shape of the
distribution. In particular, the 50% nonparametric tolerance
estimator (5, pp 40-43) is a useful estimator, but requires
at least 69 data points to be calculated. Also, tail-
exponential estimators (6), which make assumptions about only
the shape of the upper tail of the distribution, have been
found to be effective, but require about 70 points to be
effectively calculated. In cases with fewer data points
available, distributional models are necessary. The best
general distributional model we have found for low
concentration pollutant data is the lognormal distribution.
The lognormal distribution is the distribution of a variable
whose logarithm has a normal, or Gaussian distribution. The
lognormal distribution is appropriate for this type of data,
because it does not assign any probability to negative
concentrations, and it has a skewed frequency distribution,
which accords with the actual distributions observed in the
sample data. Figure 4 shows a sample cumulative
distribution, along with the cumulative distribution of a
fitted lognormal. To estimate the 99th percentile, the
lognormal distribution is fitted to the data, and the 99th
percentile of the fitted distribution, obtained from tables
of the lognormal distribution, is used to calculate the
variability factor.
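A sketch of the order-statistic (50 percent nonparametric tolerance) estimate of the 99th percentile mentioned above is given below; it assumes SciPy, takes the estimator to be the m-th largest observation with m chosen from the binomial tail probabilities, and is an illustration of the idea rather than a transcription of the estimator in reference (5).

    import numpy as np
    from scipy import stats

    def tolerance_p99(x, conf=0.50, p=0.99):
        """Distribution-free upper tolerance estimate of the p-th quantile:
        the m-th largest order statistic, with m the largest integer for
        which P(Binomial(n, 1-p) >= m) >= conf.  With conf = 0.50 and
        p = 0.99 this needs n >= 69, since 1 - 0.99**69 just exceeds 0.5."""
        x = np.sort(np.asarray(x, dtype=float))
        n = x.size
        if 1.0 - p**n < conf:                  # even the maximum falls short
            raise ValueError("too few data points for this tolerance bound")
        m = 1
        while stats.binom.sf(m, n, 1.0 - p) >= conf:   # P(count >= m+1)
            m += 1
        return x[n - m]                        # m-th largest observation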
-------
257n
[Figure 4. Sample cumulative distribution for an actual data set, with the fitted lognormal cumulative distribution.]
257o
The concept of the variability factor is also applied to the
determination of limitations for average concentrations over
longer time periods. For instance, for the pharmaceutical
and petroleum studies, variability factors were calculated
for averages over 30 consecutive measuring days. The long-
term mean of 30-day averages is equal to that of the daily
values, but the averaging will decrease the variability of
the result. Therefore the appropriate variability factor
for 30-day average concentrations will be smaller than that
for daily concentrations. If the concentrations on each day
were completely independent, the appropriate formula for the
variability factor would be
    VF(30) = [ X + 2.326 S / sqrt(30) ] / X
where X and S are the sample mean and standard deviation of
the daily concentrations. (The numerator is the 99th
percentile of the asymptotic distribution of 30-day averages
according to the Central Limit Theorem of statistics).
However, examination of the data reveals that concentration
values on successive days tend to be similar, presumably due
to dependencies in the effluent discharge from the plant,
and mixing and holding systems in the treatment process.
Figure 5 shows sample graphs of autocorrelation functions
for effluent Biological Oxygen Demand (BOD) and Total
Suspended Solids (TSS), measured in concentration and mass
discharge units, at a representative pharmaceutical plant.
-------
257p
The autocorrelation rho(l), l = 1,...,30, represents the
correlation between concentration values measured l days
apart. Note that, in the figure, the calculated
autocorrelations are all positive and decrease with
increasing time lag, which is consistent with the physical
model proposed above. If rho(l) is calculated for a plant, then
the appropriate formula for the variability factor for
30-day averages is
    VF(30) = [ X + 2.326 (S / sqrt(30)) sqrt( 1 + (2/30) SUM(l = 1 to 29) (30 - l) rho(l) ) ] / X
See, for instance, Switzer (7).
For 4-day averages, as are under consideration for priority
pollutant limitations, the appropriate variability factor
would be
    VF(4) = [ X + 2.326 S / sqrt(4) ] / X
No autocorrelation correction would be used, because the
sets of four days are not consecutive. In addition,
preliminary analysis of priority pollutant data shows much
less autocorrelation than the conventional pollutants.
-------
257q
[Four panels of estimated autocorrelation functions, lags 1 through 30, for TSS (mg/L and lb/day) and BOD (mg/L and lb/day) at a pharmaceutical plant. All of the calculated autocorrelations are positive, largest at lag 1, and decline with increasing lag.]
Figure 5. Estimated autocorrelation functions for BOD and
TSS in mg/l and lb/day at a pharmaceutical plant.
-------
257r
IV. EXCEPTIONS AND SPECIAL TOPICS
The objective of calculating means and variability factors
can be accomplished as described above, for any set of
standard data. However, in many cases, there are side issues
and complications which affect the data analysis. Some of
them are particularly prevalent in the analysis of priority
pollutant data, due to the necessity of measuring
concentrations very near the limits of the measurement
technique.
One issue in particular is the reporting and handling of
detection limit values. When the concentration in a sample
is too small to measure, the concentration is reported as
"not detected" (ND). This is fine as a descriptive
statement, but causes problems for the statistician, because
statistical procedures must work with numerical
concentration values, and these measurements still reflect
valid samples and must be accounted for in calculations. For
even the usually straightforward process of calculating mean
concentrations, the handling of these values can be
approached in several ways. Using a zero concentration is a
reasonable first approximation, but may understate the
actual average concentration. If the analytical detection
limit for the particular method was supplied by the
laboratory, or can be assumed to be known, a sensitivity
analysis can be performed by also calculating statistics
-------
257s
with ND values set to the detection limit, giving an upper
and lower limit to the "true" value. Compromise solutions,
with ND values set to 1/2 the detection limit, are also often
used. However, all of these methods produce a somewhat
distorted estimate of variability, because all of the
detection limit values are being placed at the same point.
In our work, we have made extensive use of an extension of
the lognormal distribution, known as the delta-lognormal
distribution, in which the detection limit data are placed
in a separate probability spike at zero, or the detection
limit value, and the concentrations above the detection
limit are modeled with the lognormal distribution. This
allows calculation of appropriate variability factors.
However, the appropriate statistical handling of detection
limit data has to be addressed for each statistical
technique. Continued attention by analytical chemists to
the definition and reporting of detection limits, and
standardization of reporting formats and notation would be
of great aid in this task. Various recent American Chemical
Society papers and talks have addressed these questions
(8,9), but more needs to be done to implement standards in
practice.
Another issue in the analysis of priority pollutant data is
inter- and intra-laboratory variation. The analysis
replication in the organics 5-plant study allows an
-------
257t
investigation into the sources of variation in the
concentration measurements, because the study includes
multiple samples, multiple laboratories analyzing each
sample, and replicate analyses by laboratories on some
samples. Using statistical variance components techniques,
the variability can be broken down into four components:
- Inter-sample variability. Variation in the true
concentration of each sample (time and sampling
variation).
- Consistent inter-laboratory variability. Variation
between the average concentrations measurements from
each laboratory (laboratory bias, or inter-laboratory
accuracy).
- Within-sample inter-laboratory variability. Variation
between laboratories in the analysis of each sample
(inter-laboratory precision).
- Intra-laboratory variability. Variation between
repeated measurements at the same laboratory (intra-
laboratory precision).
For the organics study, these analyses were performed in
terms of a multiplicative effects model consistent with the
-------
257u
lognormal distribution. These analyses were done for each
pollutant, at each sampling point at each plant, for all
situations having sufficient data above the detection limit.
The final issue I will mention is that of spiked sample
analyses. The organics study included both priority
pollutant and surrogate chemical spiking of samples. The
calculated percent recovery for a sample is:
    percent recovery = 100 (S - C) / L
where C is the raw concentration measurement, L is the spike
level, and S is the spiked concentration measurement. For
surrogate chemicals, C is zero. These quantities can be
computed, and then averaged across samples, to give a
measure of the average recovery for each chemical by each
method.
Some consideration was given to the correction of individual
sample measurements according to the measured recovery in
that sample (10). While this method generally increases the
accuracy of the measurements, it also decreases the
precision of the measurements substantially. Because of
this, and because not all samples were spiked in the study,
it was decided to do all statistical analyses on the
uncorrected data.
-------
257v
V. CONCLUSIONS
Hopefully, this has given an idea of the types of techniques
and issues in the statistical analysis of data for effluent
guidelines. Continued cooperation between the statistician
and the analytical chemist is important in exploring all
aspects of this complex subject.
-------
257w
References
1. Eynon, B. P., Javitz, H. S., Valdes, A. J., Skurnick, J. H.,
Gofer, R. L., Maxwell, C., and Rollin, J. D., "Pharmaceutical
Effluent Data Analysis: Long-Term Pollutant Data Analysis,"
Final Report on Task 1, EPA Contract 68-01-6062, for EPA
Office of Water Regulations and Standards, SRI International
(1982) .
2. Eynon, B. P., Valdes, A. J., Maxwell, C., Walter, L., and Rollin,
J. D.,"Petroleuro Refining: Self Monitoring Data Analysis,"
Final Report on Task 6, EPA Contract 68-01-6062, for EPA
Office of Water Regulations and Standards, SRI International
(1982).
3. Eynon, B. P., "Organic Chemicals Manufacturing: EPA/CMA Long
Term Study", Final Report on Task 7, EPA Contract 68-01-6062,
for EPA Office of Water Regulations and Standards, SRI
International (in preparation).
4. SAS Institute, SAS User's Guide, SAS Institute,
Raleigh, NC (1979).
5. Gibbons, J. D., Nonparametric Statistical Inference, McGraw-Hill,
New York (1971).
6. Breiman, L., et al.,"Statistical Analysis and Interpretation
of Peak Air Pollution Measurements," Final Report by
Technology Service Corporation to Thomas Curan, U.S.
Environmental Protection Agency, MD-14, Research Triangle
Park, North Carolina (1978).
7. Switzer, P., "Variances and Autocorrelations for Time-Averaged
Autocorrelations," SIMS working paper No. 18, Stanford
University (1981).
8. Crummett, W.C., et al. "Guidelines for Data Acquisition
and Data Quality Evaluation in Environmental Chemistry,"
ACS Committee on Environmental Quality, Subcommittee
on Environmental Analytical Chemistry (Final Draft, 1980).
9. Kagel, R. 0., "Validation and Priority Pollutant Analysis",
Invited Plenary Address, American Chemical Society National
Meeting, Division on Environmental Chemistry (1980).
10. Eynon, B. P., "Percentage Recovery Information in Organics
Long Term Data", Memorandum to R. Roegner, et al, US EPA,
August 20,1982.
-------
258
QUESTIONS AND ANSWERS
MR. MADDALONE: Ray Maddalone,
TRW. I actually have a comment and a question.
MR. EYNON: Sure.
MR. MADDALONE: We run into
the problem of detection limits and the problem
of trying to determine what value to put into
the data base, and some of the reports have a not
detected value. I don't think there's any
really good solutions when you don't have a
base of data to go back on.
My question is, when you don't have a base
of data, a historical base of data just a single
number without a detection limit reported,
what do you recommend? Is it a zero, is it one-
half of the value that the report has as the less
than or non-detected?
MR. EYNON: I'll tell you what
-------
259
we did in the organics data base and that is,
we stored the number as a zero, but we also
kept a comment field associated with each entry
Part of that comment field was the fact that
this was really a non-detected value. Now,
actually the only way you could get an
actual zero concentration was to have a not
detect or a less than 10 or something like that
as the statement by the laboratory; but, we
kept the detection limit if it was present.
We knew the fact that these were not detected
values, and there is no easy answer to this
question. I don't think there is any single
numerical value which is the correct value to
put in. If we knew the correct value we
would have detected, you know, we wouldn't be
worried about this detection question. I
think that what has to be done is, you have to
do some exploratory statistics on the effect
of these values on your overall statistical
-------
260
analysis.
MR. MADDALONE: What about less
than numbers? I asked, actually, a two part
question. Less than, you have a number and that
really can't...
MR. EYNON: Well, let's say you
have a number; I found that the difference
between saying less than 10 and saying not
detected seems to be more of a phenomenon of the
laboratory reporting criteria than anything
else which is really going on with the data.
So I think it would be unfair to treat those
particularly differently.
It would be nice to always have the detection
limit; and, if you have a not detect and you
have no other information about the upper limit
and you have to calculate a mean, I don't see
that you can have any argument for anything
other than a zero value to put in. You
simply have to caveat your result and say, look,
-------
261
this is all we could do, we have no further
information about this value, we're going to
have to stick it into zero. The thing you can't
do is, you can't ignore that value because you
are told something positive, although not exact
about the concentration. You can't leave it
out of the calculation, you can't ignore it, you
can't put it as missing; so, you know, with a
total lack of any sort of information about the
level of that pollutant, or about the detection
limit, I would say, yes, you would have to put a
zero.
MR. MADDALONE: My comment is,
is that as part of the effort that we did we
reviewed the various definitions of limit of
detection. I think one definition that has been
grossly overlooked by agencies and the people
using or setting definitions for the limit of
detection is the ACS guidelines that
were published in Analytical Chemistry in 1980;
-------
262
it was Volume 52, page 2252. Those are
extremely good because they set three
different levels. They chose a three sigma
detection limit and they said if the values
are less than your three sigma detection
limit you report them as not detected with
the LOD. Now, that would solve part of
the problem that you get with these not
detected values.
Then if you have a value above your three
sigma you report it as a real value, but you
also, again, put the limit of detection in
parenthesis so you know where you are in
relationship to that. I think that some sort
of consistent use of a definition ought to
be assigned; and, then, some explanation of
what the risk is associated with that
definition because it really varies and
whether you consider false positive or false
negative errors.
-------
263
MR. EYNON: I couldn't agree
with you more. In fact, I think I'm talking
about the...when I mentioned ACS I think I'm
thinking of the exact same paper you are
talking about. So, yes, I think that that's
an important question to standardize the
laboratory reporting practices on these issues.
MR. DELLINGER: Bob Dellinger,
Effluent Guidelines Division. My question is on
the use of autocorrelation in establishing 30
day variability factors. I was wondering if you
checked your 30-day, 99th percentile estimates
on the data sets from which they were derived
to see if they were good estimators of the 99th
percentile?
MR. EYNON: We have in cases
where that's possible. Sometimes we can't
because we don't have complete sample
information on every day. Therefore, we cannot
construct 30-day running averages from the
-------
264
daily data that we are given. There are
many cases in which we could estimate the
autocorrelation function and come up with
the variability factor without being able
to go back and check that.
However, there are a few cases that we
have, we have looked at some other smaller
data sets on...I think we looked at the
leather tanning and the iron industry and in
the cases we have checked out, yes, there is much
better agreement than the central limit
theorem value would give, much more...much
closer agreement with the actual empirical.
I mean, if we had years and years and
years of data we could even think about
constructing the 30-day averages, then actually
using some sort of non-parametric estimate
or model the 30-day averages directly,
but that's well beyond any scope of any
data set we have.
-------
265
MR. DELLINGER: Okay, because
we have checked the central limit theorem as a
predictor with biological treatment and it is
not very good at all. We get something like
20 percent of the values from the data set from
which the number was derived higher than
the 99th percentile.
MR. EYNON: I'm not surprised.
The correction for autocorrelation can actually
make a noticeable difference in the variability
factor that you would get and, indeed, they
have used the central limit theorem that assumes
independence; actually the central limit theorem
underlies both of these. The ordinary central
limit theorem, which is the independent one, would,
indeed, give too small a variability factor.
If you went back and checked it, you would get
exactly what you're saying; which is, you would
get too many exceedances even in the data set
that you calculated it from.
-------
266
MR. DELLINGER: Now, we have
used things like taking the 30-day averages.
Let's say we had 12 or 14 or 16 30-day averages
on a set.
MR. EYNON: That's a different
question because there if you are calculating
your 30 day averages on less than 30 days data,
you are also going to get a larger variation
because...
MR. DELLINGER: No, these would
be straight 30 day values.
MR. EYNON: See, I mean, if you
only measure 12 days out of the 30.
MR. DELLINGER: No, that's not
what I am...what I am saying is, we have used...
let's say we have had 12 sets of 30 day
averages.
MR. EYNON: Right, okay, that
gives you 12 numbers.
MR. DELLINGER: That's right
-------
267
and we have checked for...
MR. EYNON: Just enough to look
at, to examine how many are exceedances.
MR. DELLINGER: And then we have
just checked for using parametric procedures and
establish variability factors that way.
MR. EYNON: You could do that
also, although I think that this method will be
stronger because you are using more of the
information in the data to actually calculate
the autocorrelation.
MR. DELLINGER: You are using
each individual data point as opposed to using..
MR. EYNON: Right, rather than
combining each...
MR. DELLINGER: ...30 day
average.
MR. EYNON: That's a tough
question; I think that's true. Yes, we have
done...we have our program that does this.
-------
268
I have been working...if you would like to
catch me later and you have your data set on
the PA System I can talk to you about maybe
running our stuff through if you are interested in
seeing 30-day numbers based on the stuff;
it's not too hard to do.
MR. DELLINGER: Sure.
MR. TELLIARD: Anyone else?
Thank you, Barry.
Our next speaker is from Battelle, Columbus.
Jim Brasch is going to talk about something that
we haven't utilized too much in this program,
but we have skirted it; that is, the utilization
of GC/FT-IR as it relates to priority pollutant
analysis.
-------
269
GC/FT-IR and GC/MS: WHICH, WHEN and WHY?
Jim Brasch
Battelle's Columbus Laboratory
MR. BRASCH: Have you ever heard
the expression, as confused as the little farm
boy who dropped his chewing gum on the floor of
the chicken house and didn't know which one to
pick up?
I know why I am here; why I, personally, am
here. It's because Dale Rushneck called
Battelle and he started talking to people and
filtered down through the hierarchy. By the
time he got down to my level to talk to me, I
had been told that I would give a talk on GC/FT-
IR if he asked me to. I did respond positively
to his request. Let me assure you, if I had
had any idea how big he was when I was talking
to him, I would have responded much faster.
Now, what continued to puzzle me was, why one
GC/FT-IR talk in a GC/MS Symposium? Those
-------
270
of you attending the Hershey meeting in 1981
saw the same phenomenon; one GC/FT-IR
talk in a GC Mass Spec Symposium. It was only
last night after an exquisite meal and a
delightful glass of wine that it became smash-
ingly clear to me; mass spec ceremonies
require the periodic sacrifice of a pristine
virgin. Obviously, these qualities require they
go outside the mass spec community for their
victim. I am complimented by Battelle's
recognition that I have these qualities. This
is mitigated by the fact that they also,
obviously, consider me totally expendable.
Nevertheless, I am here and I want to give
you a state of the art status report of GC/FT-IR
stressing its complementary nature with GC/MS.
How do you do GC/FT-IR? You can obtain one of
the earlier generations of the system, such as
the DIGILAB instrument, shown in Figure 1, first
produced three or four years ago. You can get
-------
271
one of the later generation, shown in Figure
2, again, a DIGILAB system which is much more
cosmetically nice and is configured so the
instrument is free for normal operation. All
of the major manufacturers produce these now;
Nicolet, IBM, and Analect. Beckman and Bomem
are very hard at work on their systems.
You can also do like we did at Battelle
where we are faced with two problems; one, the
equipment is expensive and sometimes we can't
buy it; secondly, we are concerned only with
selling the output, we don't have to sell
the instrument. So we are somewhat less
concerned with aesthetics and cosmetics and
you can do as in Figure 3, which shows our
interface, the chromatograph and our instrument.
What else is required? Nothing particularly
profound as diagramed in Figure 4. What you
do is just take the infrared beam from your
-------
272
instrument through the light pipe through which
the GC effluent is going. Take the output from
an MCT detector and you can get the spectrum
that way; nothing particularly profound. There
is a little technology in the light pipe, but
it is also not difficult as shown in Figure 5.
You just have some way to get the effluent in,
traverse it down the pipe and back out. For
mid infrared spectroscopy, one generally uses
KBr windows. There is a little technology
effecting a good seal at the windows and the
transfer lines so that you don't lose the sam-
ple. You also need to heat the light pipe.
This can be done relatively simply as I'll show
you in just a moment, but the only other require-
ment then is some technology in the light pipe
coating. I really hesitate to use the word tech-
nology; it's absolutely a black art. We make
our own light pipes. They are simply precision
bore glass tubes that we put a gold coating on
-------
273
ourselves. Most of the manufacturers are doing
the same thing or else they have a sole source
of supply. Nothing really profound. It is just
difficult to get a really good gold coating on
it.
The only other problem then is, how do you
heat it? Again, in our system we enclose it
with a very simple aluminum block as shown in
Figure 6. This is the end of the light pipe
here; ours is only about four inches long. So
this is relatively compact. We have a heated
transfer line here through which we are actually
bringing the fused capillary from the GC, rout-
ing it over to here so you can get the effluents
into the light pipe, traverse it down here, it
comes right back through the heated transfer
line back over to the FID of the GC. So we get
infrared data, and FID traces after it has been
through the light pipe; really rather simple.
What do you do with it then and why would I
-------
274
want to give you a comparison to mass spec?
(Figure 7). I want to compare the information
content, the speed and ease of operation, the
sensitivity, and the chromatographic resolution.
I can actually dismiss the last one because
when that slide was made the first approaches
to doing capillary column work were being made.
There was considerable necessity to justify all
of the additional work and complexities for the
capillary work to show that you did get an
improvement in data that was worth the extra
trouble.
Now, with the capillary ability, the
chromatography is the same. So that has
become quite irrelevant. The other three I do
want to talk about some more. I will demonstrate
this to you by using what I will define only as
a "hazardous waste sample." What happened with
this was that in our laboratory we did GC/FT-IR
on the sample using a packed column.
-------
275
We also did the capillary column work and
this is where we first demonstrated that the
additional difficulty was well worth it.
A second laboratory was also running
capillary GC/FT-IR. At Battelle we also
were doing the mass spec on it. Another
laboratory, completely independent of all
of these was also doing mass spec on it.
So we had an excellent cross-check here;
from laboratories using mass spec
and giving, for all practical purposes,
absolutely identical results, and two
different laboratories using GC/FT-IR and,
while they were not absolutely identical
because they did not use exactly the same
column, there was no difficulty in correlating
the two and seeing that they reproduced each
other extremely well.
So we had a very nice cross-check here,
not only of the two different techniques,
-------
276
but the validity of the technique in our labora-
tory, and outside of our laboratory. What do
you get, then? As I mentioned, as the sample
traverses the light pipe, in addition to trans-
forming the data and producing a spectrum every
second, if there is any absorption above the
baseline, the computer also takes a point and
stores it to reconstruct a gas chromatogram;
a Gram-Schmidt reconstructed gas chromatogram.
Then, the effluent goes on to the FID. So, as
shown in Figure 8, we have an FID trace where
we can check the chromatographic resolution and
make sure we haven't degraded that. We also
have a reconstructed gas chromatogram based on
the infrared data so we can correlate those. I
don't know if it is apparent from that slide,
but there is very nice correlation there. You
have no trouble whatsoever in correlating a
GC/FID peak with its corresponding infrared
-------
277
peak in the data bank.
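For those curious about the Gram-Schmidt reconstruction mentioned a moment ago, the following short Python sketch shows the basic idea (NumPy assumed; the segment window and basis size are arbitrary illustrative choices, not the settings of any particular instrument): an orthonormal basis is built from analyte-free interferogram segments, and the chromatogram point for each scan is the size of the part of its segment that lies outside that basis.

    import numpy as np

    def gram_schmidt_chromatogram(interferograms, n_basis=10, seg=(100, 200)):
        """interferograms: 2-D array, one interferogram per scan, in time
        order.  Build an orthonormal basis from the first n_basis scans
        (assumed to contain no eluting component), using only the points
        seg[0]:seg[1] of each interferogram, then report for every scan
        the length of the component orthogonal to that basis; large values
        mark infrared absorption, i.e. an eluting peak."""
        segs = interferograms[:, seg[0]:seg[1]].astype(float)
        basis = []
        for v in segs[:n_basis]:                  # classical Gram-Schmidt
            w = v - sum(np.dot(v, b) * b for b in basis)
            norm = np.linalg.norm(w)
            if norm > 1e-12:
                basis.append(w / norm)
        basis = np.array(basis)
        proj = segs @ basis.T @ basis             # projection onto the basis
        return np.linalg.norm(segs - proj, axis=1)   # reconstructed trace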
How does that stack up with the mass spec
data? Well, Figure 9 shows the RGC from mass
spec (a total ion count RGC) and what you just
saw, the RGC from the infrared. Now, there
are differences here, but there are also great
similarities. The differences are sensitivity
differences and I can point out instances
where, for example, mass spec saw a rather
intense peak that was missing in the infrared.
On the other hand there are instances where the
converse is true. I don't want to spend too
much time on the whys of that; it has to do
with the absorbtivity in the infrared which
determines whether it is going to see it or
not. The major point I want to make is that
their differences are complementary.
Again, on that slide I want to make the point
that we can correlate these data very well. We
have no difficulty in correlating an infrared
-------
278
peak to a mass spec peak. What sort of data
do we get in the infrared? Well, Figure 10 shows
three spectra that are pulled out almost at
random showing one of the strong peaks, a medium
peak, and a very weak peak. The middle region
includes strong absorption from CO2, which we
do not purge out of our instrument and, indeed,
all of the search programs completely obliterate
this region in their searches; you do not use
this region. You can see that as we get to the
very weak peaks we have a much lower signal-to-noise
level and this is where we ultimately lose out.
If we do not have the discrimination of the sig-
nal there we can't get any useful spectrum infor-
mation. The lower example is an excellent spec-
trum and this is from one of the very weak peaks
in that RGC. What else do we see? We see a
lot of structural information, functional group
information. I'll mention that one again later.
-------
279
The software programs that are available are
nice, getting nicer, getting faster, getting
better all of the time. Figure 11 illustrates
some software features. This is a DIGILAB
slide, it is not of the data from this hazard-
ous waste sample. I just wanted to show you
what you can do with this. The lower trace is
a spectrum from a GC run; the other spectra are
the results of their search program through
their catalog of spectra. They have a HIT
index listed.
Now, another thing I wanted to point out;
if you get a very low number here, it's an
excellent identification, particularly if there
is a low first number here and then a wide gap
between the others; that is what you call a
positive ID. Actually, what I have chosen
here, I don't know if you can read that number,
but the HIT index ranges from .61 here to
.69; this means that the search really wasn't
sure what this compound was. It couldn't
-------
280
discriminate between these four or five
candidates. One of the flexibilities you
have, you can tell it to look for the top
ten, the top five, or another number of your
choice. The other point I wanted to make
here, that they are all chemically similar.
This particular one is an ester; and, while it
couldn't identify the particular ester this
was, the search program picked out all esters.
That is because of the nature of the infrared
information that's here; this carbonyl group
absorption is specifically characteristic of
esters. From this you also could tell that it
is not terribly complicated. So all of this
information is inherent in the infrared spectrum
and even if it doesn't come out positively
identified, you will get excellent chemical
type information.
Some of the results, now, from the hazardous
-------
281
waste sample are shown in Figure 12, again,
to talk about this complementary nature.
Here an X means the compound was positively
identified, and zero means the compound type
was identified, but not specifically. The
mass spec did not see the fluorinated alcohol;
the infrared not only saw it, it identified
it. Why? Carbon-fluorine stretching vibration
is one of the most intense infrared absorbers.
So if there is much there, the infrared is
going to see it. Other things you might expect:
mass spec, certainly cannot discriminate ortho-
and para-chlorotoluene; infrared did.
Similar things here are seen in Figure 13.
Another isomer, mass spec typed it, infrared
identified it. A case where mass spec gave an
identification and infrared didn't even see
it; a very weak infrared absorber with very few
bands for the infrared to key on. So, again,
the complementary nature of the two techniques.
-------
282
Now, I want to very quickly show Figure 14
where I have more recent tabulations of this
data. This summarizes where we are with this
particular sample. There were 44 components
in the GC FID trace. By the infrared data, we
identified specifically 28 of them, and we got
information on 15 types. By infrared alone we
got good chemical evidence on 43 of the 44
components. Mass spec gave positive IDs on
13; and good information on 23 compound types;
a total of 36.
At this point, now, I want to say something
with great caution. This sample was probably
optimum to show the value of infrared. We did
not choose the sample, however, to demonstrate
this point. The sample came in totally blind.
We had no idea what it was. It worked out that
infrared gave a lot more data than mass spec
on this sample. I can suggest some other
-------
283
samples where the converse would be completely
true, namely, long chain hydrocarbons; then,
mass spec would shine, infrared would tell
you it was hydrocarbons, but it would not
give great definitive information. This one
happened to work out to show the power of
infrared.
Another point I want to make and I cannot
emphasize too much. If we combine the two
sets of data we see the complementarity even
greater; of the 23 compound types identified
by mass spec, 19 of them are positively
identified by infrared. There were only
seven overlaps in here; five of these GC
mass spec identified that infrared did not
identify. If we combine these two sets of
data, we get useful information on all 44 of
them. We would have specific identifications
on 33 of the 44. This impresses me. I
-------
284
think this demonstrates beyond any question
that the two together are best. What if you
can't do that?
Let's compare them in Figure 15.
Sensitivity: the gap is not as great as in
the past, and it is getting smaller. But there
is no question, if you know what compound you
are looking for and where to look for it,
infrared will never compete with mass spec on
sensitivity. The gap now is certainly one,
perhaps as much as three orders of magnitude.
Infrared is going to improve and I expect mass
spec will also. So I think that ultimate gap
is going to remain there.
Ease of operation: the mass spec is
better. That gap is closing also, but there
is one very important difference. At our
laboratories, and I'm sure this is common
with other laboratories also, we can bring
-------
285
a kid with a decent high school education
into our mass spec lab and we can have him
getting reliable, useful data in a day or two.
It is just so well automated, so well soft-
wared that that is no problem. Infrared GC
software is very nice, but it presently requires
an experienced spectroscopist to utilize the
system and to make sure it is doing what it
should be doing. That will obtain for quite
a while because of the different nature of the
data, the information that is coming out.
The time element is not that much different
now. The software programs for the GC/FT-IR
are becoming very fast now and just last week
at the Pittsburgh Conference there were some
very exciting new developments that are going
to make it even faster. So I think that's
going to be quite comparable. I've mentioned
the chromatography is identical (capillary
-------
286
column). In fact, there have been several
laboratories, including ours that have
successfully coupled a GC to an infrared and
then onto a mass spec; that is super
powerful, but it is going to be a few years
before that is routine. Information Content:
infrared is the best; no question.
Analysis time. Again, they are just about
equal now; with the exception, again, that
the infrared requires an experienced
spectroscopist and there are some manual
operations that help you out.
In a pseudo-summary, on Figure 16, if your
problem is to detect a specific component and
you know exactly where it is, GC mass spec is
the way to go. If you want to identify compon-
ents in an unknown sample, far and away the
best thing to do is use both. If you can only
use one, you will get more information by GC
infrared.
-------
287
Now, at considerable risk I am going to
completely change subjects. The risk is that
neither Bill nor Dale knew I would do this
and by so doing, I am following a philosophy
expressed by a colleague, that if you want to
do something, it is almost always easier to
obtain forgiveness than it is permission.
In a very few minutes I want to give you an
abbreviated version of a development first
announced publicly only a week ago yesterday
at the Pittsburgh Conference.
Ken Shafer has added another important
member to the analytical alphabet soup; SFC/
FT-IR. He has successfully coupled an FT-IR
system to the effluent of a supercritical
fluid chromatograph. Why do you want to do
that? What is a supercritical fluid? (Figure
17) It is one that is above its critical tem-
perature and pressure. It is neither liquid
nor gas. It has properties intermediate. The
-------
288
one I want to talk about is CO2, which has a
critical temperature of 31 degrees centigrade and
a critical pressure of about 73 atmospheres.
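[Editor's illustration, not part of the talk: a minimal Python check of whether a given temperature and pressure put CO2 above the critical point just quoted; the example operating conditions are hypothetical.]

    # CO2 critical point as quoted in the talk: about 31 C and 73 atmospheres.
    CO2_CRITICAL_T_C = 31.0
    CO2_CRITICAL_P_ATM = 73.0

    def co2_is_supercritical(temp_c, pressure_atm):
        return temp_c >= CO2_CRITICAL_T_C and pressure_atm >= CO2_CRITICAL_P_ATM

    print(co2_is_supercritical(40.0, 100.0))   # True: above both critical values
    print(co2_is_supercritical(25.0, 100.0))   # False: below the critical temperature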
Why is this important or useful to anybody?
(Figure 18) In normal GC you have lots of
stationary phases, one mobile phase. In HPLC
you have a few stationary phases and many
mobile phases. SFC has those intermediate
properties. It can use all GLC and HPLC columns
It can use a variety of mobile phases. The one
I'll talk about today is C02, but pentane, N20
and the Freons have also been used.
(Figure 19) Common detectors: GC uses
FIDs, HPLC uses UV; SFC can use both of them.
I'll show you data to support that. (Figure
20) GC uses temperature programming to get
your separation; in HPLC you use solvent pro-
gramming; SFC you use pressure programming.
This is the major difficulty of it, but it's
not that hard to do. Figure 21 shows some
-------
289
separations and the use of two detectors. This
is a mixture...of biphenyl, isomers of terphenyl,
another phenyl, a triphenyl benzene, and two
quaterphenyls. These are highly condensed
ring compounds. The separation here is very
good and what you see slightly displaced here
is a UV trace followed then later on by an FID
trace (indicating). So it's possible to use both
detectors, and you see here the
separation of these isomers and some condensed
ring compounds of relatively high molecular
weight.
This is one of the more exciting avenues for
this: the separation of higher molecu-
lar weight compounds. But I want to show you
today that it can also be used for low molecular
weight materials that you would be interested
in. Some other considerations in interfacing
FT-IR with various chromatographies are shown
in Figure 22.
-------
290
In GC/FT-IR, you use a light pipe with a
volume as you would like to have it, the
exact volume of the peak that is coming out.
In HPLC you either have to get rid of the
solvent or you have to use a flow through
cell that is much, much less than the volume
of the peak; and, one of the major handicaps
of HPLC/FT-IR is this problem right here
(indicating). With SFC you can use a flow
cell with the volume equal to or greater than
the peak width. What I'll talk about is using
CO2 as the mobile phase. You can eliminate
the solvent much more easily than you can with
HPLC, but with infrared you don't need to. CO2
is a beautiful infrared solvent.
Figure 23 is the spectrum, a transmis-
sion single beam spectrum of CO2; this band is
about 2400 wave numbers. There is another
strong band out here about 3600; this is a
-------
291
little unfortunate because you would like to
look at some alcohols out here, but the only
thing that ever shows up in the 2400 cm-1
region is a few nitriles, C≡N compounds, and
you don't see those very often; otherwise,
it is an absolutely beautiful solvent. This
cut off here was caused by the window of
the cell, the only one that he had at this
moment that would take this pressure. CO2
remains a very good solvent on down for several
hundred wavenumbers.
Now, the only problem with it, it shows
changes with pressure as shown in Figure 24.
This is a relatively low pressure and at a
higher pressure you see some other bands
coming out here because of a Fermi resonance
interaction. This is very easy, you can just
program your computer to use a particular
background of whatever pressure you are at
-------
292
and these will subtract out. It handicaps
your sensitivity here a little bit, but that
can be handled by the software quite nicely.
So it is a very good infrared solvent.
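[Editor's illustration, not part of the talk: a Python sketch of the pressure-matched background handling described above; the file names and data layout are assumptions, not the actual instrument software.]

    import numpy as np

    # Reference spectra of supercritical CO2 recorded at several pressures
    # (hypothetical files); each file holds one absorbance spectrum.
    background_library = {
        900:  np.load("co2_0900psi.npy"),
        1300: np.load("co2_1300psi.npy"),
        1700: np.load("co2_1700psi.npy"),
        2100: np.load("co2_2100psi.npy"),
    }

    def subtract_co2_background(sample_spectrum, pressure_psi):
        # Pick the reference recorded closest to the working pressure, so the
        # pressure-dependent (Fermi resonance) bands subtract out cleanly.
        nearest = min(background_library, key=lambda p: abs(p - pressure_psi))
        return sample_spectrum - background_library[nearest]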
How do you do it? (Figure 25) You have
a Varian syringe pump for HPLC that nobody
wanted. He hooked up the CO2 tank to it, a
simple pressure controller, through a preheat-
ing coil and a valve loop injector into the
conventional gas chromatograph with a conven-
tional capillary column, went through the UV
detector; and, that's another neat thing. All
he did was run the capillary all of the way
through here and just scratch off the outer
coating and actually do the UV detection
directly in the capillary. Then went to this
FT-IR (in this case it was one of the small,
low cost Analect systems) and on beyond that
to the FID. So you had the UV detection here,
the IR detection here, and the FID detection
-------
293
here (indicating); a very powerful combination.
Did he get data? Well, Figure 26 shows
the chromatography on it. This is a mixture
of anisole, acetophenone and nitrobenzene. You
see the differing sensitivities of the two dif-
ferent detectors, the UV detector here, the
FID here, the solvent peak from the chloroform
and the separation of those three materials.
Figure 27 shows the spectra he obtained, anisole,
acetophenone and nitrobenzene (indicating).
This is brand new. The slides were still wet
when Ken reported it at the Pittsburgh meeting
last week. So these data are about two weeks
old. The first public report is a week and one
day old; it is very exciting. So in overall
summary, then, GC/MS or GC/FT-IR. Which and
when are a matrix into which are factored the
nature of the sample, the information you
need, and the differing selectivities, speci-
ficities, and sensitivities of the two
-------
294
techniques. Why? Because the complementary
nature of these techniques effects a synergism
such that the whole is substantially greater
than the sum of the parts.
Last, but very far from least, SFC/FT-IR
has been accomplished. Its potential is truly
exciting because the chromatography is excep-
tionally versatile and the spectroscopy is
relatively simple. That potential is further
enhanced by earlier but still recent demonstra-
tions of SFC/MS. The very same complementary
nature I have stressed today will be evidenced
in this new field. With this I have now ful-
filled my commitment to Battelle and I dutifully
submit myself to the remainder of the ceremony,
whether this be the leap into the flaming
volcano; or maybe questions. Thank you.
MR. TELLIARD: Any questions?
-------
294a - 294aa
[Slides accompanying the preceding talk; most are reproduced too poorly to read. Legible content follows.]

FIGURE 1. DIAGRAM OF 1/16" LIGHTPIPE AND FITTINGS (Battelle-Columbus)
    KBr windows (glued to 1/16" Swagelok fitting)
    Hewlett Packard fused silica capillary line
    1/16" O.D. stainless steel transfer line
    1/16" graphite ferrule
    1/16" O.D. gold-coated lightpipe

RESULTS WITH HAZARDOUS WASTE SAMPLE (44 COMPONENTS)

    Method used          Specific ID's    Compound types
    GC/IR                     28                15
    GC/MS                     13                23
    Combined data sets:  33 specific ID's, 11 compound types

SUPERCRITICAL CARBON DIOXIDE - 1 CM PATHLENGTH - EFFECT OF PRESSURE
    (spectra at 900, 1100, 1300, 1500, 1700, 1900 and 2100 psi; abscissa in wavenumbers)
-------
295
QUESTIONS AND ANSWERS
DR. VINCENT: Frank Vincent,
James River Corporation. That was a fascinating
presentation. I never heard of either one of
these methods before. Are you talking about a
precision bore glass tube, gold coated? I would
assume that the interior surface has to be pretty
smooth or you get so much breakup of the beam
that you don't really get much energy out the
other end.
Also, I was wondering about the gold coating.
Is this something that is relatively simple to
get done?
MR. BRASCH: The answer to the
first is, yes, it must be quite smooth; and,
the answer to the second is, yes, in the sense
that the procedure itself is quite simple.
It is just a solution that is poured
into the tube to coat it and then it is heated.
-------
296
The technology, or the "black art," comes in
how to get the right thickness and the heating
rate, to get uniformity of the coating. And
that is, just from my point of view, a black
art. There is nothing profound or difficult to
it at all. It is used in many laboratories;
but, it is just that there is an art to it.
DR. VINCENT: This was done
at Battelle rather than some...
MR. BRASCH: Yes.
DR. VINCENT: So it is basi-
cally...it is something like silvering a dewar
flask, except, apparently, somewhat more critical
in the way you handle it?
MR. BRASCH: Yes, very much so.
DR. VINCENT: Is the coating
critical? The amount of coating and the amount
of gold on the tube?
MR. BRASCH: I cannot give you
an answer, only that there is a lot of work
-------
297
going on on geometries and different coatings.
If there is a critical thickness, it must not
be too thick; an interesting phenomenon that
the physicist understands, but I don't. It
must be relatively thin, but not transparent.
DR. VINCENT: Thank you.
MR. TELLIARD: Anyone else?
For the presenters, for the proceedings we
would like to have copies of your slides
or your overheads; if you could supply us
with xeroxes of them so that we can
incorporate them. Otherwise, these two ladies
up front here will come after you, that may
not be bad. It wasn't a very good threat;
sorry.
-------
298
Our next speaker is Drew Sauter from EMSL-
Las Vegas. Now that we have all mastered the
use of a mass spectrometer, why not put them
in tandem? If one is good, probably more is
better; is that true? You will see. Drew, come
on up.
-------
299
RAPID ORGANIC ANALYTICAL METHODS DEVELOPMENT,
THE TRIPLE QUADRUPOLE MASS SPECTROMETRY POTENTIAL
Andrew Sauter
U.S. Environmental Protection Agency
EMSL-Las Vegas
MR. SAUTER: May I have the
first slide, please. The original work for
Triple Quadrupole Mass Spectrometry that was
funded within the agency was done out of the
Athens Laboratory.
The Triple Quadrupole was sold to the agency
because supposedly one could reduce sample work-
up. What I hope to do today is give some idea
of the analytical utility of the instrument,
hitting probably too many areas and to demon-
strate why we feel that it does have great
utility to the hazardous waste programs and I
think in many specific areas to Effluent Guide-
lines or Priority Pollutant type programs.
Just last year in Analytical Chemistry,
Burlingame said what is on that particular slide
-------
300
and I think that's effectively true. Hopefully,
what we will do today is give you an idea from
about five or six areas why we think the Triple
Quadrupole does have some great potential and,
hopefully, demonstrate a little bit about it.
The ion optical train of a Triple Quadrupole
is shown on this slide. There are three Quad-
rupoles, you might focus on that. The first
one can be used to select and/or scan. The
second Quadrupole is the collision chamber
where the ions can be made to undergo collision-
induced dissociation, generally in the range
of a few volts. The instrument that we currently
have is, by the way, a Finnigan Triple
Quadrupole Mass Spectrometer. The third mass
filter can be scanned and/or set at a given
mass depending on the configuration. Alterna-
tively, quadrupole one and quadrupole three
can be offset...both scanned and offset, to give
characteristic neutral loss or gain.
-------
301
So we have a variety of options with the
instrument. Now, one of the things that most
people did when they discussed the analytical
utility, the potential of the Triple Quadrupole,
they resolutely ignored both source introduction
problems and problems which might occur from
introductions of large volumes of material due
to...for example, problems with source saturation
which are found in all types of mass spectroscopy.
So you can see that there is a fairly complex
set of choices that one could have and what
I'll do is take a few of these configurations
today and try to give you an example of why we
think the instrument is useful.
We have published in Analytical Chemistry
in January a comparison of response factors
from GC/MS, GC/MS/MS and compared those values.
This comparison I think firmly establishes
that one should be able to attain quantitative
data out of such instruments which is effectively
-------
302
identical to good GC/MS work. Peter Dawson,
who is probably the most well-known gentleman
in ion optics of Quadrupoles has described the
ion optics of Triple Quadrupoles as complex.
I submit that we should take his word there.
So such observations are not trivial and are
of some practical utility.
One does not buy a Triple Quadrupole to do
GC/MS. One would like to be able to do other
things and because the instrument costs approxi-
mately $350,000, one would like to be able to
do a lot of other things.
This is a view of the Triple Quadrupole and
you will note that in the front of the instrument
is a moving belt, LC/MS interface. While this
is a mechanically crude device, one can use this
device to rapidly introduce samples into the
ionization region and then perform a variety
of different experiments. We have been doing
this with a variety of samples and mixtures
-------
303
and we believe that it will find great utility,
perhaps in screening analysis.
Let me move on. By simply placing in this
crude fashion, an extract on the belt, for
example, neat transformer oils; one can screen
for a variety of different compounds. One can
also do that fairly quickly. This slide shows
25 determinations of Aroclor 1260; it is essen-
tially a single level precision study that was
done in slightly over 1,000 seconds. There
are 25 measurements of Aroclor 1260 at 50 nano-
grams.
The precision, including all data there,
was approximately...16 percent relative standard
deviation and if you will allow me to throw out
three outliers, the RSD improves to 12 percent.
This is a total ion current plot of a negative
daughter ion experiment introducing standards
of Aroclor 1260 into the ion source.
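[Editor's illustration, not part of the talk: a Python sketch of the precision figures quoted above, the percent relative standard deviation of replicate responses with and without suspected outliers; the 25 area counts below are simulated stand-ins, not the measured data.]

    import random
    import statistics

    def percent_rsd(values):
        return 100.0 * statistics.stdev(values) / statistics.mean(values)

    # Hypothetical stand-in for 25 area responses from repeated 50 ng
    # Aroclor 1260 determinations.
    random.seed(0)
    responses = [random.gauss(1000.0, 160.0) for _ in range(25)]

    print(percent_rsd(responses))          # all 25 points (about 16% in the study)
    # Discard the three points farthest from the mean, as the speaker allows:
    mean = statistics.mean(responses)
    trimmed = sorted(responses, key=lambda r: abs(r - mean))[:-3]
    print(percent_rsd(trimmed))            # the RSD improves (about 12% in the study)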
-------
304
The next slide shows triplicate analyses
of Aroclor 1260 from five to 100 nanograms per
microliter and with the subsequent analyses
of eight transformer oils in triplicate. These
particular transformer oils were diluted by, I
believe, a factor of two because the chlorinated
biphenyls identified in these samples were found
in the relatively high concentrations. In 27
minutes there were three times eight plus five
times three determinations of Aroclor 1260.
The ionization mode was methane chemical ioniza-
tion at approximately one torr. We were doing
negative parent ion scanning and it's obvious
that analysis at this rate is of considerable
utility for a couple of reasons.
Most of the environmental measurements that
are made, are made on one sample. They are not
made in triplicate. It would be nice to have
triplicate measurements to examine sample related
precision. This is a multi-level calibration
-------
305
curve of Aroclor 1260 using negative parent
ions with methane at 0.94 torr and argon at
approximately 0.5 millitorr.
Again, the methane is utilized to create
negative ions which are, in this case, then
selected and undergo induced dissociation in
Q2 creating ion current which is sensed at the
multiplier. This is an example of a calibration
curve that we can currently get now and such
determinations can be done in the order of
minutes. We think that is also useful.
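[Editor's illustration, not part of the talk: a Python sketch of fitting and using a multilevel calibration curve like the one described, response versus nanograms of Aroclor 1260; the response values are hypothetical.]

    import numpy as np

    amount_ng = np.array([5.0, 10.0, 25.0, 50.0, 75.0, 100.0])
    response  = np.array([0.16, 0.31, 0.77, 1.55, 2.28, 3.05])   # hypothetical

    slope, intercept = np.polyfit(amount_ng, response, 1)

    def quantitate(sample_response):
        """Convert an observed response to ng of Aroclor 1260."""
        return (sample_response - intercept) / slope

    print(round(quantitate(1.0), 1))    # ng corresponding to a response of 1.0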
This is a complex sample workup scheme that
was utilized to obtain the data in the previous
slide. Essentially, the transformer oils have
been taken out of the vial and placed directly
on the belt. We have done this probably
eight or nine different times over the course
of approximately an hour, demonstrating that,
in fact, the system can take the abuse of
direct complex, mixture analyses of chlorinated
-------
306
biphenyls in transformer oils. The fact that
the LC/MS Interface tends to throw away quite
a bit of the material itself is the reason this
system works. We gain back the sensitivity
lost in that we are using negative ions.
So using this introduction technique one
would take transformer oil and place it directly
on the belt. A normal negative ion Q3 mass
scan produces a complex mass spectrum. Tak-
ing the same sample under the same ionization
and introduction conditions and doing a negative
parent ion scanning for the same sample, this
is the resulting spectrum.
Most of the ion clusters are related to the
formation of molecular anions of chlorinated
biphenyls. The nice thing about this type of
detection technique is, it takes a complex mix-
ture, chlorinated biphenyls, and makes it sim-
ple. That is, I believe, of regulatory interest.
One would not want to use this type of technique
-------
307
if one was trying to study metabolism of given
isomers, but for making regulatory decisions I
think it is a valuable approach. Aroclor 1260
standard run under the identical conditions
is shown there. So they are quite similar;
in fact, Aroclor 1260, 1254, 1248 and 1242 and
perhaps 1232 can be differentiated. The mixture
mass spectrum in the negative parent ion scanning
mode is unique. That is not to say that we
could differentiate mixtures of those given
mixtures, but under such analyses conditions
we seem to be able to unequivocally determine
that there are, in fact, molecular anions con-
taining chlorine of the molecular weight which
coincides with chlorinated biphenyls. One can
do this quite rapidly, with effectively no
sample workup.
We are interested, in our programs, in
hazardous waste areas. In our particular aspect
of the MS/MS Program we are particularly inter-
-------
308
ested in compounds which cannot be done by Gas
Chromatography, Mass Spectroscopy. This slide
shows a variety of compounds, many of which
cannot be done by Gas Chromatography, Mass
Spectroscopy, but can be directly introduced in
the fashion discussed previously.
We expect from our work that, in fact, methods
for these given compounds of regulatory interest
to RCRA could be rapidly developed. One, in
fact, does need quite a bit of manpower to develop
methods for many different molecules and while
this is a major problem with rapid analytical
method development, we feel quite certain that
for a variety of molecules MS/MS coupled with
this crude introduction technique can be exploited
to develop methods rapidly.
This is an example of a positive daughter
ion spectrum of diethylstilbestrol. This slide
presents an indication of the information con-
tent available in daughter ion spectra acquired
-------
309
in this nature. We have noticed that for many
molecules the information content is sufficient
to identify polar molecules in hazardous waste
extracts.
Professor Hunt at the University of Virginia
is developing priority pollutant methods. This
slide presents a direct comparison of results
done independently by GC/MS and MS/MS. A gen-
eral summary of the work to date by Professor
Hunt, based on performance evaluation samples
and a variety of hazardous waste samples, very
complex mixtures, is that qualitatively the
MS/MS scheme that he has developed is quite
promising. In many cases the data is excellent
qualitatively; in a quantitative sense it
requires improvement.
The interesting thing about Professor Hunt's
work is that sample workup for the priority
pollutants and analyses and acquisition requires
on the order of 25 minutes, total. Will that
-------
310
type of methodology apply to every sample in the
universe? I could probably say unequivocally,
no. Will it have great analytical utility for
certain industries and for certain waste indus-
trial effluents? I believe it will. In fact,
I had thought that his mission to develop analy-
tical methods which would compete with the eco-
nomics of fused-silica capillary column, GC/MS
was a particularly difficult one. Both the
qualitative and quantitative reliability of
the data that has been provided to date has
been good, but we anticipate further improve-
ments. He is working under a cooperative agree-
ment with EMSL-Las Vegas and Dr. Don Betowski
at our laboratory is monitoring that program.
We are not concentrating on MS/MS analysis
of priority pollutants at our lab, but as Bill
invited us to talk here about MS/MS and as we
were analyzing hazardous waste extracts, I
thought we should examine some priority pollutant
-------
311
data by MS/MS. What you are looking at right
now is a positive ion Q1MS of an actual extract
provided by Dr. Larry Straton at NEIC. This
is a particularly clean hazardous waste extract.
This is positive ion methane CI with a full scan.
This is as if one would introduce a sample on
the LC belt directly into a single Quadrupole
Mass Spectrometer. You will note that in many
cases fragments corresponding to molecular ions
of the priority pollutants which were spiked
into this mixture are obvious. This sample
was spiked at approximately 100 nanograms per
microliter.
It is interesting to look at the region...
where the pointer is (indicating). The power
of MS/MS becomes apparent when one looks, for
example, at this region. At mass 139 and mass
140, the protonated positive molecular ions for
isophorone and 2-nitrophenol. What one can
do is introduce this sample in the same fashion
-------
312
and instead of doing a full Ql scan, one can
ask for daughters of 139 or 140. This is a
positive daughter ion spectrum of 140 and you
can see the protonated molecular ion and you can
also see loss of water and phenol and a variety
of other peaks which are quite characteristic
of nitrophenols. In fact, the CID spectra of
positive ions are, in fact, very similar to,
in many cases, low energy electron impact mass
spectra. I guess in retrospect that really
shouldn't surprise anyone, but it is nice to
know that if you can interpret electron impact
mass spectra it is fairly easy to interpret
CID spectra.
Taking m/z 140, in the next few milliseconds
of a scan for the positive daughters of 139,
alternative identification of isophorone is
made. So going back again, selecting these
peaks and doing daughter ion analysis allows
one, despite the fact that they are only
-------
313
1 amu apart, to identify these compounds in
hazardous waste sample extracts.
Other things can be done. This is a nega-
tive Q3, CI mass spectrum, full scan of another
complex hazardous waste extract. Chlorinated
materials are present, someone will say maybe
that's a polynuclear. In selecting the daughter
ion, M/Z 182, from this sample, just with the
electronics of the instrument one gets a very
characteristic and clean spectrum for that
molecular anion of a dinitrotoluene. It amazed
us that in many cases the instrument selects
ions out of incredible garbage and provides one
with reasonably clean spectra. We have been
able to qualitatively verify a variety of
priority pollutants in hazardous waste extracts
via this approach. With proper quality con-
trol, we expect to attain good qualitative
results. We had done some work with fused
silica capillary column along with a lot of
-------
314
other people here and the acquisition time for
priority pollutant analysis was reduced to ap-
proximately 30 minutes. What would happen if
we put all of the priority pollutants on the
belt at one time and performed a full scan Ql
mass spectrum? What one observes is 95 for
phenol, 124 for nitrobenzene...let's see,
185...anyone that will give me help with that?
I believe that's benzidine. And a variety of
other compounds can be identified through
appropriate daughter or parent ion scanning
techniques. The information content, in many
respects, of the priority pollutants' daughter
ions and other scanning modes is quite adequate
for qualitative identification.
This is the negative ion CI Q1MS acquisition
for the priority pollutant standard. So that
half a second later we are taking negative ion
Q1MS data from the same sample that you saw
previously, and you will note that where the
-------
315
sensitivity is low in positive ion spectra, the
negative ion CI is more sensitive. You see
hexachlorobenzene, benzo(a)pyrene-d12 or
benzo(a)pyrene or a molecular anion with that
weight, trichlorophenol and a variety of other
molecules at 25 nanograms are observed. Doing
daughter ion experiments we have repeated this
at one nanogram and using the belt introduction
technique. You are able to observe signals for
most of the priority pollutants, including some
very low molecular weight compounds which sur-
prised us, like dipropylnitrosamine and dimethyl-
nitrosamine also. So MS/MS offers the possibi-
lity of a rapid screening procedure for
priority pollutants. One could analyze, at
least theoretically, on the order of 150 to
200 samples an hour. It's not clear whether
one could do 800 a day; it's not clear that
one would want to do 800 a day, but one could
surely do triplicate analyses of the samples
-------
316
of interest. One might be able, then, to screen
extracts for selected priority pollutants and
other compounds of interest which can be done
by GC/MS and in this fashion determine whether
one needs to do GC/MS analysis.
A perfect example of this is the Missouri
dioxin problem. The information that I have
indicates that approximately 80 percent times
4,000 times $400 per sample minus the cost of
a Triple Quadrupole screening scheme could be
saved in that program by a MS/MS dioxin screen-
ing scheme. One of the reasons that we work
with the chlorinated biphenyls was related to
the interest in dioxin. In fact, it appears...
there is every reason to believe that in actual
extracts one will be able to have a very rapid
screening technique for this molecule. The
reason for this talk is to present an idea
which has become obvious to me, though it is
still just an idea: that analytical methods
-------
317
development can be structured such that people
and perhaps a few automated instru-
ments can rapidly develop analytical methods in
crash problems.
The Effluent Guidelines Division program has
evolved over a number of years now, but it is
still saddled in many cases with the matrix
problems. When we were first told to write
methods for priority pollutants I remember a
lot of analytical organic chemists standing up
and saying you can't write methods for everything
in anything. The progress that has been made
is really amazing, but there are reasons why
one might want to have a matrix specific approach
or a structured approach to methods development
using, for example, the 1600 methods as the
quality control check. Using that as a model
and knowing a little bit about scanning options
and ionization processes at MS/MS, it is not too
difficult to say how one could go about making
-------
318
the development of a method routine. Methods
Development costs quite a lot and I think it is
worthwhile for us to look into Rapid Analytical
Methods Development.
To give you another idea what you can do with
a MS/MS. We have a program to develop methods
for dyes. There is a gentleman by the name of
Professor McGovern at the University of Pennsyl-
vania. If you have read C&E News last February,
there was an article discussing instrumental
applications to archeology. A dye that was
discussed in that article called 6,6'-dibro-
moindigo was of particular interest to Professor
McGovern. The dye apparently at one time was
used by the Greeks, Egyptians, Phoenicians and
Romans. Work to date has not been able to confirm
that this dye is present in samples of archeo-
logical interest. We have contacted Professor
McGovern and suggested another introduction
technique which is a FAB-like technique where
-------
319
one bombards a sample on a platform with ions.
This mass spectrum is full negative ion scan
of a polar dye, bromocresol purple. The struc-
ture of 6,6'-dibromoindigo does have some
structural similarity to this molecule and we
sent him the spectra and asked if he would
like to send us some sample. He appears to be
interested in this application of MS/MS to his
problem. We are going up to Philadelphia tomor-
row and probably take a sample back from this
artifact, but based on the structure of the
molecule an MS/MS approach has become obvious.
If the dye is present and if there are not
ionization suppression problems with the matrix
we should be able to unequivocally conclude...if
the Phoenicians practiced unregulated dye dumping
in the year 1300 B.C. using DISIMS ionization
and negative ion daughter scanning techniques.
Is screening important enough to warrant
purchase of an MS/MS? The work of Dr. Shackelford
-------
320
at Athens in creating the data base on the
industrial effluents has indicated that, in
fact, in industrial effluents the occurrence
of priority pollutants is relatively rare. I
believe the highest compound that was found
was phenol and the frequency was 5 percent;
that would indicate, then, that priority pollu-
tant screening methods via MS/MS, would seem
to be viable. It would seem to be economical
to screen samples by MS/MS and the data could
be produced for project officers in a more
timely fashion.
For dioxin, let me repeat 80 percent of
4,000 times $400 could be saved minus the cost
of a TSQ mixture screening scheme. Is a method
ready to go right now off the shelf? It
isn't. Should you go out and immediately
purchase a Triple Quadrupole at $350K? I would
probably wait. However, I believe you can see
why we are excited about the technique and why
-------
321
we think it has advantages. I think it will
eventually find its way into the programs
for programmatic as well as technical reasons.
Any questions?
-------
321a - 321r
[Slides accompanying the preceding talk; most are reproduced too poorly to read. Legible content follows.]

MULTILEVEL CALIBRATION - NEGATIVE PARENTS - AROCLOR 1260
    Introduced via LC belt; negative parent ions; CH4 CI 0.90 torr; argon 0.5 mtorr
    (response versus ng Aroclor 1260, 10 to 100 ng)

Summary of Results Obtained by the University of Virginia on an EPA "Rag Oil" Sample

    Compound                   Radian GC/MS (ug/mg)    MS/MS (ug/mg)
    C2-benzenes                        14                  12.4
    toluene                             6                   3.7
    C1-dibenzothiophenes              3.4                   8.0
    C1-phenanthrenes                  3.2                   0.9
    C3-benzenes                       3.1                   8.6
    phenanthrene                      2.0                   1.0
    dibenzothiophene                  1.5                   4.0
    C1-naphthalenes                   1.5                   0.5
    C4-benzenes                       1.2                   4.3
    C2-phenanthrenes                  1.1                   0.3
    C2-dibenzothiophenes              1.1                   4.8
    benzene                           1.0                   2.0
    C2-naphthalenes                   0.9                   0.2
-------
322
QUESTIONS AND ANSWERS
DR. COLBY: Bruce Colby, S-CUBED.
Drew, I did a quick calculation here based on
your 200 sample per hour estimate as a through-
put. If there were 90 compounds per sample that
you were interested in, you would ultimately
require or have available for any single compound
identification, quantification, whatever you
are going to do; one-fifth of one second in terms
of data reduction.
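[Editor's illustration, not part of the questioner's remarks: the arithmetic behind the one-fifth-of-a-second figure.]

    # 200 samples per hour, 90 compounds of interest per sample.
    samples_per_hour = 200
    compounds_per_sample = 90
    seconds_per_compound = 3600 / (samples_per_hour * compounds_per_sample)
    print(seconds_per_compound)    # 0.2 seconds available per compound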
How is the agency addressing the data
manipulation problem that would appear to be
generated here?
MR. SAUTER: I think one of the
ways that we would address that is not to
address it, frankly. The problem...what I see
in this meeting, for example, George Stanko's
talk I thought was particularly interesting.
Earlier on, what people have done in these methods
-------
323
and programs is, they have come in and have said
we can do anything in anything and a whole lot
of that.
What you see now in the analytical community,
I think, is a concentration on very specific
issues. I feel quite confident from data with
standards and samples to indicate that this type
of screening technique would be of great
interest to people worried about the polynuclears
at given levels. In the type of workup that I
am talking about for parts per billion analysis,
one does not gain the minimal sample workup; one
still needs the concentration factor.
There is no reason to think on an average
that MS/MS will be more sensitive than a single
Quadrupole instrument. So that I don't think
one would, in fact, want to look for every prio-
rity pollutant in a sample by the method that
we have discussed. One might want to screen
samples that way. Based on Dr. Shackelford's
-------
324
work and on other areas of our work it strikes
me that screening in such a fashion has inter-
esting properties.
This is a potential approach for limited
objective analytical strategies. For example,
chlorinated biphenyls in transformer oils, dio-
xins in extracts, which have gone through some
workup, polynuclear aromatics in a variety of
different samples and one could go on and on.
To me, the analytical applications are obvious.
We have given a few examples of applications.
We have not shown that it is an unequivocal method
to supplant GC/MS. It will augment GC/MS. It
will not supplant GC/MS.
MR. RUSHNECK: Dale Rushneck,
Interface, Inc. Well, the answer to Bruce's
question seems pretty obvious, in addition to
the Triple-stage Quadrupole you need a Triple-
stage computer.
-------
325
My question is one concerning isotope dilu-
tion. I noticed this belt technique in having
worked with detectors of that nature myself.
There is, of course, a lot of variability in
getting reproducibility; that didn't play to-
gether. There is a lot of variability in that
technique from the standpoint of getting the
sample on the belt precisely the same way;
and, I have wondered if you've tried isotope
dilution in terms of the...
MR. SAUTER: That data that was
presented was isotopic dilution.
MR. RUSHNECK: Pardon?
MR. SAUTER: The data that was
presented, that multiple level concentration
curve was effectively isotopic dilution.
We were using the C-13 labeled chlorinated
biphenyl standards which are now being used in
the interlaboratory PCB study. We used the
per C-13 CL-8 molecular anion relative to the
-------
326
most intense molecular ion in the negative
parent ion scan of Aroclor 1260 which was from
the molecular anion of the heptachloro isomer,
of all heptachloro isomers. So it was almost
isotopic dilution. I think your point is well
taken; because of your work, Bruce's, Bill's,
the labeled materials for priority pollutants
are available. It would not take much ingenuity
to take the 1600 methods and that's what I meant
before, take the material available because of
the work on the 1600 methods and lace that into
some sort of screening scheme.
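[Editor's illustration, not part of the talk: a minimal Python sketch of isotope dilution quantitation of the kind just described, ratioing the native response to that of a labeled analog spiked at a known amount; the responses, spike level and response factor below are hypothetical.]

    def isotope_dilution_amount(native_response, labeled_response,
                                labeled_spike_ng, relative_response_factor=1.0):
        """Nanograms of native analyte, by ratio to the labeled internal standard."""
        return (labeled_spike_ng * native_response
                / (labeled_response * relative_response_factor))

    print(isotope_dilution_amount(native_response=4.2e5,
                                  labeled_response=3.9e5,
                                  labeled_spike_ng=50.0))   # about 54 ng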
Your point about the belt is well taken, it
is mechanically crude. I do not personally
believe that the way we put material on the belt
was, in fact, the best introduction method; but,
I think in many cases it can work.
MR. RUSHNECK: Sure.
MR. SAUTER: I do believe very
strongly that, for example, newer developments
-------
327
of the thermospray LC/MS interface may provide a
superior introduction technique. I think the
speed of the belt is worth considering and I
think if one could demonstrate, unequivocally,
the analytical utility of that approach then
someone would figure out a damn precise way to
put it on that belt or some other type of
LC/MS interface.
MR. RUSHNECK: The second ques-
tion I had was concerning the analysis of PCB's
and transformer oils. Do you think with negative
ion CI you could get sufficient results from a
single stage instrument to be unequivocal?
MR. SAUTER: In many cases you
can't. It depends on which regulation and pro-
bably which transformer oil. The OTS regula-
tions are worried about 50 PPM, I believe,
whether to incinerate or not.
Many of the samples that were provided to
us could have been done in that fashion. How
-------
328
well quantitatively and qualitatively it could
have been done, I can't really say; but, at
levels above 100 PPM, 75 PPM, one could use
a 4,000 in theory to do this. One would like
to have, I can tell you from a certain amount
of experience; one would like to have the
selectivity of a triple-stage instrument.
Thank you.
MR. TELLIARD: Thank you, Drew.
MR. KEEN: Gary Keen with Conoco.
I may make one comment, Dale. We do use a single
stage instrument for PCBs and negative CI, but
we use mass 35 and 37 and not the molecular ion,
and it works very well.
MR. RUSHNECK: And no GC; it is
just a production sample?
MR. KEEN: No, we do have
capillary GC on it. We find it works extremely
well, better than the specific GC
techniques.
-------
329
MR. TELLIARD: Our last speaker
for this morning's session is Bob Beimer from
TRW. Bob, as you know, has been on this program
before and as we know Bob can't speak to
metals analyses, but he is here to talk about
some organic analyses which is, perhaps, more
in his area of expertise.
Bob is going to talk about a direct injection
technique that EGD has been working on, on and
off, for the last year and a half. It,
basically, is a selective little tubing. So
Bob, now, is going to talk to you about a hose
job; Bob.
-------
330
EVALUATION OF A NEW GC/MS DIRECT AQUEOUS
INJECTION INTERFACE FOR VOLATILE ORGANIC ANALYSES
Robert G. Beimer
TRW, Inc.
MR. BEIMER: There have been a
lot of comments out there about the length of
this morning's session. I'm going to try to run
along pretty fast so that you won't miss the
rubber chicken and peas.
At the request of Bill Telliard and others,
we have done some work on evaluating a DuPont
polymer called Nafion. We evaluated this material
as a concentrator technique for the determination
of volatile organic compounds in water. The
analysis is conducted by directly injecting the
water without any previous separation. The
sample passes through the Nafion tube and right
onto the GC column where the analysis is
conducted.
The interface consists of an injector block
that's the injector (indicating). We were
-------
331
using an all-glass system in order to minimize
contamination problems. The carrier gas is
pre-heated by winding through coils within that
injector block, which is maintained at 150
degrees centigrade, passed into the injector
port itself, and then the water sample is
injected through the septum and is flashed in
this zone in the glass injector. The material
is carried from the injector in a vapor state
into a six foot length of Nafion tubing. I
have no idea of the chemical structure of this
stuff; but, basically, it is a material which is
at least permeable to water and at the most
permeable to all polar organic compounds while
being impermeable to non-volatile species.
The tube itself is this inner line here on
the drawing, you can see that there are two
lines there and the inner line is the Nafion
tubing (indicating). The outer sheath is just
-------
332
a nylon tube through which one passes a dry
gas in a counter-current direction to the flow
of the helium. The countercurrent flow of dry
gas around the Nafion tubing is flowing in the
reverse direction carrying away the water that
is permeating through the Nafion tube.
The reduction in relative humidity here is
substantial as shown by work that has been done
by Peter Simmons of International Science
Consultants in England, the person who came up
with this technique to begin with. Basically,
the sample once injected at this point enters
the GC column as a dry gas. There is no problem
with water buildup in the system. We studied
how much water could be injected into this sys-
tem on a routine basis without detrimental
effects on the mass spectrometer system and/
-------
333
or the GC column.
Originally, it had been reported that seven
microliters was the maximum water injection
which could be tolerated when an electron
capture detector was used. We felt that the
mass spectrometer might be a little more
tolerant of water than the electron capture
detector system, so we started at seven micro-
liters and worked our way up. Our determina-
tion was that 250 microliters or a quarter of
a milliliter of water could be injected into
this system on a routine basis. You could do
that for at least eight hours at one sample
each 45 minutes and not get an increase in the
water background in the mass spectrometer and
you could maintain your vacuum.
We did, however, find that when you inject
in a half a milliliter of water, the system
-------
334
self-destructs. If you will notice, we
have got connectors here connecting the Nafion
to the injector; and, then, there is
another connector down here where the Nafion
is connected to the GC column. We blew them
both apart. There is quite a volume
change when you go from a half a ml of water
in liquid to its equivalent vapor state. We
ended with most of that half a ml of water on
the end of the GC column which we ruined. It
took the better part of the day to get the
mass spectrometer vacuum back; but, basically,
a quarter of a ml was not a problem. If in-
jected in a reasonable way I think a half a ml
could be done as well. In other words, you
would have to inject it slowly, not trying to
slug it in at a given instant because the way
the technique works is, you are maintaining
the GC column at room temperature or below.
While you are making the injection, at this
-------
335
point, and if you made it slowly you could
hold the GC for just a little longer at room
temperature before you programmed it up to do
your analyses. Effectively, your samples will
be concentrating on the head of the GC column
anyway.
The whole idea of this was to be able to run
particularly nasty samples without going through
the purging operation and the secondary trapping
operation. Before we did that we had to deter-
mine whether or not this technique was compati-
ble, reasonable and similar to the purge and
trap operation. In order to do that, we ran a
rather significant number of standards by both
the purge and trap technique and by the Nafion
interface technique using similar concentrations
of materials.
This is just a reconstructed ion trace of a 100
nanogram standard run, using the purge and trap
technique. A number of the peaks are identified,
-------
336
but that is really not important; more impor-
tant is the shape of what you see here and then
compared to the same standard run using the
Nafion (indicating). Down at this end, you
will notice the typical starting end of the GC
trace for a volatiles analysis, the peaks are
broad and unresolved; maybe that's only me,
maybe the rest of you do better. If you will
notice, with the Nafion injection a much better
resolution at the low end (indicating). A
reason, of course, that you have this is that
you don't hold the GC for a significant period
of time using the Nafion system like you do
with the purge and trap. With the purge and
trap you are holding the GC at the low end while
you desorb the materials from the trapping col-
umn. Here, of course, you start the GC at room
temperature and you inject through the Nafion
and then you program the gas chromatograph.
So there isn't that lag time, the material
-------
337
doesn't have a chance to diffuse at the head of
the column at low temperature, and you get a much
sharper chromatogram.
Well, that's fine for the beginning peaks,
but one might expect that the later peaks could
be a problem in that we are introducing a
significant amount of volume before the GC
column itself by having this six feet of tubing.
This is a comparative trace, mass 78, I hope;
I can't see all of the way over there, for
benzene (indicating). The top trace is the
Nafion direct injection interface, the bottom
trace is the purge and trap; and, I think to
anyone's satisfaction the GC resolution is
virtually identical in both cases.
Now, this is all well and good, but assuming
that you can only inject 250 microliters of
water into this system, you are limited to a
factor of 20 loss in sensitivity, assuming that
five milliliters would normally be used in a
-------
338
purge and trap operation. Therefore, we are
not proposing that this technique supplant purge
and trap. We are only saying that in those
samples where you have a very high concentration
of material this may offer a solution to
diluting and rediluting your sample with water
and purging it, you can just inject different
volumes of it into the system using this tech-
nique and get some pretty good analytical
results.
On this slide we have a response plot.
The bottom axis is the amount injected in nano-
grams and up this side is an arbitrary uncor-
rected area count measurement. The idea here
is to show you that although there is a displace-
ment in the slope of the direct aqueous injec-
tion response curve which is this bottom line;
I took some liberties and dotted it down here
at the low end where it didn't have any sensi-
-------
339
tivity (indicating). The purge and trap line is
the top one and basically they are the same, in
the sense that you can get good linear response
over a broad range of concentration. This con-
centration out here is about 1200; 1200 nano-
grams is this last data point injected into the
system (indicating).
On this slide we have determined the limit
of detection based on a 250 microliter injection
volume into the Nafion system. What I want to
point out here is that there are some compounds that
have poorer detection than others. Simply put,
1,2-dichloroethane has a 2,000 nanogram detection
limit which is rather poor. The bromodichloro-
methane also had a 2,000 nanogram detection limit.
I have no real good explanation for this since a
lot of this work is preliminary. It may be that
those materials have some significant affinity
for the Nafion and, therefore, they are not
transmitted effectively at low concentration.
-------
340
However, benzene down here at 80 nanograms in
a 250 microliter injection...or excuse me,
this 80 micrograms per liter based on a 250
microliter injection. Toluene at 40. Aromatic
hydrocarbons give very good transmission
through the Nafion.
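[Editor's illustration, not part of the talk: the unit conversion behind the detection limits just quoted, relating a concentration in micrograms per liter to the nanograms actually placed on column in a 250 microliter injection.]

    def ng_on_column(conc_ug_per_l, injection_ul=250.0):
        # 1 ug/L equals 1 ng/mL; a 250 uL injection is 0.25 mL.
        return conc_ug_per_l * injection_ul / 1000.0

    print(ng_on_column(80))      # benzene limit of 80 ug/L -> 20 ng injected
    print(ng_on_column(2000))    # 2000 ug/L -> 500 ng injected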
In order to try to nail down what the
mechanism of some compounds being better
performers than others, we calculated the
recovery, if you will, of various different
levels of standards injected through the
Nafion interface and run by the purge and trap
technique. We assumed the purge and trap
technique was perfect; and, therefore, we
ratioed everything to the purge and trap data
at the same concentration. This chart shows
the recovery of the materials that we studied.
At the top is chloromethane and needless to
say, a highly volatile gas which is not
trapped all that well on the tenax trap
-------
341
and lost somewhat through the GC column by migra-
tion. Performs much better by the Nafion tech-
nique. In fact, the ratio of the concentrations
of the same material injected was 360. So we
are getting almost a four-fold increase in sen-
sitivity of chloromethane by this technique; but,
down the list the rest of them, for the most
part, are less than 100 which says that the
Nafion is not transmitting quite as well as the
purge and trap. With a couple of exceptions.
The benzene, for example, is 160.
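[Editor's illustration, not part of the talk: a Python sketch of the relative recovery calculation described above, ratioing the direct aqueous injection response to the purge and trap response for the same standard concentration; the responses below are hypothetical.]

    def relative_recovery_percent(dai_response, pt_response):
        return 100.0 * dai_response / pt_response

    # Chloromethane came out around 360 percent (better than purge and trap);
    # benzene around 160; most other compounds below 100.
    print(relative_recovery_percent(3.6, 1.0))   # 360.0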
To get some idea of what effect the Nafion
has on this transmission, I have also put on
this chart boiling point. We are dealing with
a 150 degree injector, we are taking that down
to room temperature, presumably the injection
has allowed the organic materials to move
quickly through the Nafion before the water
itself condenses. So let's think about boil-
ing point as being the mechanism by which
-------
342
materials are transmitted or lost. Well,
that didn't work because you go down this
list and you look at the boiling points and
in some cases the higher boiling points
have higher transmission efficiencies; in
other cases they don't. So I think the
mechanism or the thing to describe, the
transmission efficiency probably has more to
do with the polarity of the molecule than the
boiling point. However, when you deal with
similar molecules (i.e., benzene, toluene and
xylene) with similar polarities, the transmis-
sion efficiency drops off as the boiling point
increases.
The same is true for the chlorinated
organic molecules, chloromethane, methylene
chloride, chloroform and carbon tetrachloride
which have the same trend as the boiling point
increases, the transmission efficiency drops off;
-------
343
which says that there is some condensation
taking place in the Nafion tube. The idea
here was also to determine how one can do
the analyses on complex or nasty samples.
We had a bunch of really nasty samples with
water from a low BTU gasifier and if any
of you have done any synfuel wastewater work
you know that it may only be about half
water and the rest of it is suspended garbage.
This is a sample run through the Nafion
of the synfuel wastewater from a low BTU
gasifier. The movements in the baseline down
here are benzene and toluene; the two consti-
tuents of the priority pollutants that were
actually observed in this sample. If you can
notice these peaks, that's phenol and those
are methyl phenols. The phenol and the methyl
phenols injected by this technique not only
passed the Nafion onto the GC column, but those
-------
344
rascals can actually be chromatographed very
nicely with the Carbowax on Carbopack column;
something I didn't realize would be the case.
A sample was also run by purge and
trap; the phenols, obviously, were not
observed under those circumstances because phenol
itself is not purged. We compared the two
pieces of data and there was a reasonable
correlation between the direct aqueous injection
analysis of the sample and the purge and
trap analysis of the sample, if one assumed
that the low concentration materials would not
be seen by the Nafion injection, which was the
case because we have a higher sensitivity
cutoff of about 20 fold.
This is a standard which we ran three days
after we finished the low BTU gasification
study and surprise, surprise, we got phenols
coming out in our standard. Well, obviously,
we didn't have any phenols in our standard
-------
345
to begin with, so it became very obvious to
us that one of the problems we would have in
running nasty samples through this Nafion
interface is a carryover or contamination within
the tube itself. When subsequent injections
are made we get a steam distillation effect
and the phenols come off in the next sample.
Well, that's not very good.
So we baked the Nafion tube at 100 degrees
centigrade overnight, put it back in the
system and ran the standard again. Now all
of the phenol materials are gone. So we
learned something about the interface; in
that when one is running dirty samples they
are going to have to clean it up and you can
clean it up by thermally desorbing it and
purging it with an inert gas. That's all I
have on the interface itself. I would like
to talk to you just a little bit about where
we are going from here.
-------
346
It's obvious that the loss in sensitivity
of a factor of 20 can be debilitating. It's
obvious that the contamination problem that
one has when you inject dirty samples onto the
interface is a shortcoming; but, if one
assumes that the Nafion is a good system
for reducing the relative humidity of a gas
and based on what we have seen that the
Nafion can transmit non-polar halogenated
organics or non-halogenated organics, non-
water soluble species very well, then the
thought that comes to mind is, why not use
the purging apparatus, purge the volatile
organics from the water, and then rather than
trapping them secondarily on a Tenax trap, which
essentially is a water removing system, why
not trap them directly on the end of the GC
column, but remove the water by running this
effluent through the Nafion tube.
Hopefully, within the next few months we
-------
347
will have some data to show that this technique
works and we can remove the trap part of the
purge and trap and still maintain the sensiti-
vity that one gets using this technique. If
this works then we can go from there because
there are many other applications that we are
looking at in terms of removing moisture prior
to analyses by GC/MS especially when one is
dealing with capillary columns where even small
amounts of water frozen on the end of the column
using subambient conditions will cause the
analyses to be totally useless. Thank you
very much.
MR. TELLIARD: Any questions;
none?
Thank you, Bob; and, that does it for
our morning session. Checkout time is 1
o'clock. For those of you who want to go up and
bring your bags down and put them along the
-------
348
back. If the government people will put
their bags on one side and not mingle with the
industry bags.
Lunch is next door. So we will break until
this afternoon.
(WHEREUPON, the lunch recess was taken.)
-------
348g
CHLOROFORM RESPONSE CURVE
-------
348i
OBSERVED DETECTION LIMITS FOR NAFION DIRECT AQUEOUS INJECTION

Reference                                             Limit of
Number     Compound                                   Detection (ug/L)*
  1.       chloromethane                                  40
  2.       bromomethane                                  120
  3.       dichlorodifluoromethane                       400
  4.       vinyl chloride                                120
  5.       chloroethane                                  200
  6.       methylene chloride                            160
  7.       trichlorofluoromethane                        160
  8.       1,1-dichloroethene                            200
  9.       1,1-dichloroethane                            120
 10.       trans-1,2-dichloroethene                      160
 11.       chloroform                                    160
 12.       1,2-dichloroethane                           2000
 13.       1,1,1-trichloroethane                         160
 14.       carbon tetrachloride                          200
 15.       bromodichloromethane                         2000
 16.       1,2-dichloropropane                           200
 17.       trans-1,3-dichloropropene                     120
 18.       trichloroethene                               160
 19.       dibromochloromethane                          160
 20.       1,1,2-trichloroethane                         160
 21.       cis-1,3-dichloropropene                       120
 22.       benzene                                        80
 23.       2-chloroethylvinyl ether                      N/A
 24.       bromoform                                     120
 25.       1,1,2,2-tetrachloroethane                     160
 26.       tetrachloroethene                             160
 27.       toluene                                        40
 28.       chlorobenzene                                  80
 29.       ethyl benzene                                 160

* Based on 250 uL injection volume
-------
348j
DIRECT AQUEOUS INJECTION RECOVERY EXPRESSED RELATIVE TO PURGE
AND TRAP DATA FOR THE SAME STANDARDS OF VARYING CONCENTRATION

                                             Direct Aqueous Recovery
Reference                                    Relative to Purge
Number     Compound                          and Trap (%)        B.P. (°C)
  1.       chloromethane                         360                -24
  2.       bromomethane                           62                  4
  3.       dichlorodifluoromethane                --                 --
  4.       vinyl chloride                         81                -14
  5.       chloroethane                           45                 12
  6.       methylene chloride                     --                 --
  7.       trichlorofluoromethane                 --                 --
  8.       1,1-dichloroethene                     --                 --
  9.       1,1-dichloroethane                     39                 57
 10.       trans-1,2-dichloroethene              110                 47
 11.       chloroform                             53                 62
 12.       1,2-dichloroethane                     --                 --
 13.       1,1,1-trichloroethane                  81                 74
 14.       carbon tetrachloride                   33                 77
 15.       bromodichloromethane                   --                 --
 16.       1,2-dichloropropane                    --                 --
 17.       trans-1,3-dichloropropene              71                104
 18.       trichloroethene                        38                 87
 19.       dibromochloromethane                   --                 --
 20.       1,1,2-trichloroethane                  --                 --
 21.       cis-1,3-dichloropropene                67                112
 22.       benzene                               160                 80
 23.       2-chloroethylvinyl ether               --                 --
 24.       bromoform                              61                150
 25.       1,1,2,2-tetrachloroethane              42                146
 26.       tetrachloroethene                      36                121
 27.       toluene                                84                111
 28.       chlorobenzene                          38                132
 29.       ethyl benzene                          64                136
-------
349
AFTERNOON SESSION
MR. TELLIARD: Bob Maxfield is
from Versar. About two years ago we had some
concern in two particular areas in the mining
industries, where we looked at some comparability
between ICAP and AA. Versar, i.e., Bob, spent a
lot of time and effort putting together a study
which looked at the comparability, in a small
sense, within the matrix of a mining sample; and,
since then they have done the national validation
study for ICAP and Bob is here today to tell us
a little bit about it.
-------
350
RESULTS OF THE U.S. EPA NATIONAL VALIDATION
STUDY OF THE INDUCTIVELY COUPLED PLASMA METHOD
Robert Maxfield, Versar, Inc.
MR. MAXFIELD: Good afternoon.
This afternoon I would like to briefly discuss
the Inductively Coupled Plasma Method and the
validation study that Versar is currently con-
ducting on this method for EMSL-Cincinnati.
The method was originally published on
December 3rd, 1979 in the Federal Register and
has since been revised. The method that we are
validating is method 200.7 which, as I said, is
a revised method based upon the December 3rd,
1979 Federal Register version.
The method describes the requirements for ICP
in the analyses of water and wastewater and
details analytical procedures such as the
sample preparation, interference testing,
operating conditions, and quality control
procedures that are required for the analyses
-------
351
of water and waste by Inductively Coupled
Plasma Emission Spectroscopy, or ICP. The
objective of the validation study is to define
the precision and accuracy of the ICP method.
As we heard yesterday, EMSL has come up with
a standard validation procedure which they
have used on the 600 methods as well as the
624, 625 GC/MS methods. This is, in fact,
the same sort of validation procedure that is
being used on the ICP method.
My objective this afternoon will be to
give you an idea of the study design and also
to discuss some preliminary results of the
study. The study is not complete at this
point and a final report is not expected until
sometime this spring. So, therefore, any
comments I have with regard to the data are
subject to further review and the EPA has
not yet reviewed any data at this point.
-------
352
The overall study design defined by EPA
at the outset is shown on this slide and is
also in a handout that you have in front of
you. I will be discussing the various points
of the overall design in some detail as I go
along. This study design uses aspects of
Youden's unit block approach as well as the
ASTM method, Standard Practice for Determination
of Precision and Accuracy. This approach has
been used, as I said, to validate other methods
and is currently being used by us, again, to
validate the ICP method.
The first parameter that is included in
the study design is the elements; 27 were
studied. All of the priority pollutants are
included with the exception of mercury. I have
put an asterisk next to the metals which are
the priority pollutants.
The second aspect of the study design is
the water types, there were six water types;
-------
353
laboratory pure water, drinking water, surface
water and three treated effluents from the chemi-
cal manufacturing industry, copper sulphate,
sodium hydrosulphate and chrome pigments manufac-
turing. These particular effluents were selected
to present an analytical challenge and, indeed,
were rather difficult samples to handle with the
ICP instrumentation.
The two digestion types that we studied are
termed the hard or total metal digestion, and
the soft or total recoverable metals digestion.
As the names imply, the hard digestion is a more
rigorous procedure requiring a greater degree of
refluxing and a greater degree of evaporation
during the process. It is a somewhat longer
procedure than is the soft digestion. These
procedures are similar, although not identical
to the methods that are included for the atomic
absorption procedures in "Methods for Chemical
Analyses of Water and Waste," the EMSL method book.
-------
354
Another variable that we have in our study
is the sample spikes. All of the samples were
analyzed without any spike, that would be
the background analyses. Then, we looked at
three concentration levels, at each concentra-
tion level we had a Youden pair of spikes; that
is, two spikes of similar concentration. That
would total seven analyses per sample, background,
plus six individual spikes.
The spike solutions were prepared in sealed
glass ampules. All of the 27 elements were
included in three individual spiked solutions.
Very specific instructions were provided to the
participating laboratories on how to go about
spiking the water types with the elements
of interest.
Again, the overall design included 27 ele-
ments, six water types, two digestion procedures,
six spike samples, plus the background
analyses and 12 participating laboratories.
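For illustration, the design just recapped multiplies out as follows; this is only a rough check on the size of the study, not the study's own bookkeeping:

    # Rough count of analyses implied by the ICP validation design
    # (illustrative only; the study's actual total may differ slightly).
    elements = 27
    water_types = 6
    digestions = 2
    analyses_per_sample = 7    # background + 3 levels x Youden pair of spikes
    laboratories = 12
    print(elements * water_types * digestions * analyses_per_sample * laboratories)
    # -> 27216, consistent with the "approximately 30,000 data points" quoted next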
-------
355
That totals approximately 30,000 data points
for 12 participating laboratories. The parti-
cipating laboratories involved are listed on the
slide, there are 12 of them. One is EMSL-Cin-
cinnati. The other 11 were selected by Versar
through a selection process whereby we collected
bids, these bids were evaluated, the bidders
deemed responsive were then included in a preli-
minary performance evaluation study where they
received one sample which was treated in a manner
similar to that which would be used in the study
later on. They analyzed the sample, provided
data to Versar and based upon this data Versar
selected the 11 participants that would be in-
cluded in the study.
The 30,000 data points were then evaluated
using a software program developed by EMSL,
termed IMVS, and this data is treated such that
we produce measures of precision and accuracy
for each of the various permutations of water
-------
356
type, element, and digestion procedure. As you
may well imagine, to digest the information
generated from this program is very difficult;
therefore, the summary plots are generated
as an easier way to visualize this vast
amount of data. The plots that we have generated
summarize precision and accuracy under differ-
ent conditions. Another plot we use is called
the scatter plot which is also termed a Youden
plot. These various plots allow one to visualize
the vast amount of data and make some interpreta-
tions and comparisons between water types, diges-
tion procedures and the like.
This is an example of a precision plot for
lab pure water for copper using a hard digestion.
Mean recovery is along the horizontal axis in
micrograms per liter; and, on the vertical I
have precision as S or overall precision, and
Sr, single operator precision. The lower line
represents a linear regression of the individual
-------
357
operator precision. The upper, the regression
analysis of the overall precision for the
12 laboratories. As is the case in most of my
plots, the individual laboratory precision,
the single operator precision is better than
the overall laboratory precision. There are
some 300 of these precision plots.
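The transcript does not give the formulas behind these precision estimates; the sketch below shows one common way, in the spirit of Youden's paired-sample approach, to estimate single-operator (Sr) and overall (S) precision from one Youden pair reported by several laboratories. The function name and the example numbers are illustrative only, not taken from the study.

    import math
    import statistics as st

    def youden_precision(pairs):
        # `pairs` holds one (spike A result, spike B result) tuple per laboratory
        # for a Youden pair of similar concentration.
        diffs = [a - b for a, b in pairs]    # lab bias cancels; random error remains
        totals = [a + b for a, b in pairs]   # carries random plus between-lab error
        var_d = st.variance(diffs)           # estimates 2 * sigma_r^2
        var_t = st.variance(totals)          # estimates 2 * sigma_r^2 + 4 * sigma_L^2
        s_r = math.sqrt(var_d / 2)                  # single-operator precision, Sr
        s_overall = math.sqrt((var_t + var_d) / 4)  # overall precision, S
        return s_r, s_overall

    # Example: five laboratories analyzing a pair spiked near 100 ug/L.
    print(youden_precision([(96, 102), (88, 95), (105, 110), (99, 101), (92, 97)]))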
This next plot is an accuracy plot and what
we have along the horizontal axis is the true
concentration of the spiked samples and on the
vertical the mean concentration for the 12 labora-
tories. This slide also represents data for
copper, laboratory pure water and the hard diges-
tion. There are also 300 of these plots.
The third type of plot I call a scatter
plot or a Youden plot, allows me to look at both
precision and accuracy in one diagram. This
plot has concentration plotted along the horizon-
tal axis and along the vertical. On the hori-
zontal axis we have one ampule from a Youden
-------
358
pair, on the vertical a second spike of the
Youden pair. The crossed lines in the upper
right-hand portion of the plot are the true
values. If we were to analyze these vials and
get exactly the true value in the vial the data
point should fall squarely in the center of that
crossed area. As you can see, the plots are
somewhat scattered about that point. The Xs I
have on the diagram indicate one laboratory's
data; a "Z" indicates that two laboratory's data
fall on top of one another (indicating). This
particular data is for chromium in drinking
water for the hard digestion. If I show the
next slide, I have an ellipse drawn around the
same set of data. This ellipse is at a 45-degree
angle to the plot and this is indicative of
the larger systematic error involved in the
analyses relative to the random error. The
random error being made up of two possibilities;
that is, random error within the laboratory,
-------
359
or random error that may be a result of non-
uniform samples. If the systematic error is
dominant, this elliptical pattern is the pattern
that one will get on a Youden or scatter plot.
This appears to be the general case at this
point in the study; most of the plots seem to
form this sort of elliptical pattern.
I would now like to go into a few examples
showing you some of the comparisons that we can
make using the precision, accuracy and scatter-
type plots. In my first comparison I have alumi-
num in laboratory pure water on your left and
aluminum in effluent number one on the right.
We are looking at precision; again, mean recovery
for all laboratories on the horizontal axis, and
precision on the vertical axis. The axes are
identical on both plots. So, therefore, the
regression lines are comparable. It would
appear, then, that the laboratory pure water, as
one might expect, exhibits better precision than
-------
360
in the case of the effluent; the effluent being
the more difficult matrix.
In my next example, we're looking at the same
water types. Again, aluminum for laboratory
pure water on the left, and aluminum for
effluent number one on the right (indicating).
These are accuracy plots, true concentration
on the horizontal axis and mean recovery for
all laboratories on the vertical axis. If the
slope of the line approaches 1.0 that would
indicate 100 percent recovery or perfect agree-
ment between the true value of the sample and
the mean observed value by the laboratories. As
you can see, the laboratory pure water, the
easier solution to analyze, has a slope of .93
approaching 1.0 which would indicate good re-
covery. In the case of the effluent, .78 indi-
cates somewhat poorer recovery. This is the
general case one would expect when analyzing a
more difficult sample, that is, poorer recovery and
-------
361
poorer precision than would be achieved with
the lab pure water. Indeed, these plots for
this particular example do point that out.
My next comparison is between digestion
types. Again, we had the hard digestion, a
more rigorous procedure, and the soft digestion,
a less rigorous procedure. These are precision
plots, mean recovery along the horizontal and
precision along the vertical again. In this
particular case, the scatter of points and the
linear regressions that result from these points
are inconclusive and I wouldn't like to say too
much about the differences in precision between
the hard and soft digestion. If we look at the
accuracy plots for the same data, chromium in
effluent number one, hard digestion on the left;
chromium effluent number one, soft digestion on
the right. The accuracy of the two methods
appear to be very similar; that is, the recovery
for the hard digestion and the soft digestion
-------
362
appear to be about the same. Now, if the hard
digestion is a more rigorous procedure and we
are having some recovery problems with this
effluent, one would think that the hard diges-
tion might produce better data. In fact, that
doesn't appear to be the case. The soft diges-
tion, a simpler or more economical procedure to
use, appears to be producing, for this particular
example, data with similar recovery.
In my last example I have a pair of
Youden plots. Again, for chromium on the left
in the lab pure water, the control; and, chromium
for effluent number one on the right. These are
scatter plots and the scatter of the points are
indicative of the precision with which these
laboratories were able to analyze the sample.
Note the obvious better precision that one has
with the laboratory pure water. The scatter is
much tighter, the elliptical pattern again is
there in both cases. It does appear, however,
-------
363
that the chromium data for the effluent is
skewed toward the lower left-hand quadrant of
the Youden plot. This would be indicative of
low recovery, poorer recovery than in the case
of the laboratory pure water. If the data
points were skewed toward the upper right
quadrant, this would indicate high recovery;
and, again, if the pattern were not elliptical
but more circular in nature one would expect
that the random error is more dominant than
the systematic error or, at least, the errors
are somewhat equivalent.
In conclusion I would like to say that the
ICP Validation Program is ongoing. The data
has not been totally analyzed at this point.
We are in the process of analyzing the data
and the report is not due until sometime in
the spring. What this report should do is
allow us to quantitate the precision and
accuracy for the Inductively Coupled Plasma
-------
364
Method under a variety of conditions. Specifi-
cally with six different water types, realizing
that that is not the universe, 27 elements and
the two digestion procedures that were used
in the study. I thank you for your attention
and if there are any questions I would be more than
happy to try and answer them.
-------
364a
S1
VALIDATION
of ICP for
27 ELEMENTS
in
WATER and WASTES
METHOD 200.7
SPONSORED BY:
ENVIRONMENTAL MONITORING AND SUPPORT LABORATORY
U.S. ENVIRONMENTAL PROTECTION AGENCY
CINCINNATI, OHIO
-------
364b
S2
ELEMENTS
Al
Sb*
As*
Ba
B
Be*
Cd*
Ca
Cr*
Co
Cu*
Fe
Pb*
Li
Mg
Mn
Mo
Ni*
K
Si
Ag*
Se*
Na
Sr
Tl*
V
Zn*
-------
364c
S3
WATER TYPES
1. LAB PURE WATER
2. DRINKING WATER
3. SURFACE WATER
TREATED EFFLUENTS
FROM
CHEMICAL MANUFACTURING
INDUSTRY
4. COPPER SULFATE
5. SODIUM HYDROSULFATE
6. CHROME PIGMENTS
-------
364d
S4
DIGESTION TYPES
HARD DIGESTION
"TOTAL METALS"
SOFT DIGESTION
"TOTAL RECOVERABLE METALS"
-------
364e
S5
SAMPLE SPIKES
BACKGROUND
CONCENTRATION LEVEL 1
YOUDEN PAIR OF SPIKES:  SPIKE 1, SPIKE 2
CONCENTRATION LEVEL 2
YOUDEN PAIR OF SPIKES:  SPIKE 3, SPIKE 4
CONCENTRATION LEVEL 3
YOUDEN PAIR OF SPIKES:  SPIKE 5, SPIKE 6
-------
364f
S6
OVERALL DESIGN
27 ELEMENTS
6 WATER TYPES
2 DIGESTION PROCEDURES
6 SPIKED SAMPLES + BACKGROUND SAMPLE
12 PARTICIPATING LABS
TOTAL OF ~30,000 DATA POINTS
-------
364g
S7
PARTICIPATING LABORATORIES
WEYERHAUSER TECHNOLOGY CENTER
HARRIS LABORATORIES
RALTECH
MONSANTO RESEARCH CORPORATION
ANALYTICS
ERCO
VETTER RESEARCH INCORPORATED
BATTELLE COLUMBUS LABORATORY
JOHNSON CONTROLS
RADIAN CORPORATION
GCA TECHNOLOGY DIVISION
ENVIRONMENTAL MONITORING AND SUPPORT LABORATORY
-------
364h
S8
COPPER - LAB PURE, HARD DIGESTION
[Precision plot: precision as S or Sr vs. mean recovery, ug/L]
-------
364i
S9
COPPER - LAB PURE, HARD DIGESTION
[Accuracy plot: mean recovery vs. concentration, ug/L]
-------
364j
S10
CHROMIUM - DRINKING WATER, HARD DIGESTION
[Youden plot: ampul 4 vs. ampul 5, ug/L]
-------
364k
S11
CHROMIUM - DRINKING WATER, HARD DIGESTION
[Youden plot: ampul 4 vs. ampul 5, ug/L]
-------
364l
S12
[Precision plots: precision as S or Sr vs. mean recovery]
-------
364m
S13
[Accuracy plots: mean recovery, ug/L]
-------
364n
S14
[Precision plots: precision as S or Sr vs. mean recovery]
-------
364o
S15
[Accuracy plots: mean recovery, ug/L]
-------
364p
S16
[Youden plots: ampul 4 vs. ampul 5, ug/L]
-------
365
QUESTIONS AND ANSWERS
FROM THE AUDIENCE: I have one
for you. It's my understanding and recollection,
and Bill you correct me if I am wrong, that the
original ICP, Effluent Guidelines Study on Mining
waste was conducted on field samples, spiked and
shipped.
MR. MAXFIELD: That is correct.
MR. TELLIARD: That's right.
FROM AUDIENCE: I notice that
this study was conducted on ampules split and
received and diluted.
MR. MAXFIELD: In fact, it's a
little bit more complicated than that. Could I
explain.
FROM AUDIENCE: Well, then, my
question is and you can maybe cover that in
your explanation, then, too, is, did you evaluate
the differences in errors that are introduced
by those two processes?
-------
366
MR. MAXFIELD: The answer to
that question is no. In fact, what was done
is, effluent samples were collected by Versar
and tested, split and sent to the participating
laboratories. Spiking solutions for all six
water types were prepared by Versar and sent
to all participating laboratories. The three
other water types, the laboratory pure water,
surface water and the drinking water were, in
fact, collected at each of the participating
laboratories in the study. So they are not the
same waters.
MR. TELLIARD: The industrial
samples, were those treated or untreated?
MR. MAXFIELD: Those were treated
wastes.
MR. PRESCOTT: I am Bill
Prescott, American Cyanamid Company. I have a
question about the spiking solutions. You had
obviously Youden pairs at each level, was that
correct?
-------
367
MR. MAXFIELD: That is correct.
MR. PRESCOTT: This implied... I
guess I'm having difficulty saying what I want
to say; 27 metals, the two Youden pairs that were
high for one metal were the same spikes for all
27 metals?
MR. MAXFIELD: Do you mean the
same spiked concentrations?
MR. PRESCOTT: In spike
concentration.
MR. MAXFIELD: No.
MR. PRESCOTT: Let's say you
have got vials A, B, C, D and E. And vials A
and B for aluminum were the two low
concentrations.
MR. MAXFIELD: That's right.
MR. PRESCOTT: Were those vials
also the low concentrations for the other 26
metals?
MR. MAXFIELD: Not necessarily.
-------
368
There was some mixing involved. For some metals
it would not have been the same.
MR. PRESCOTT: Thank you.
MR. MAXFIELD: In fact, there were
more than six spiking solutions because of the
various matrices involved we had some effluent
that had very high background concentrations for
many of the metals. So, therefore, if we took
something that would be an effective spike in,
say, drinking water and attempted to put that into
an effluent it would not be a reasonable spike
level. There were, in fact, ten sets of spiking
solutions; or, ten spiking solutions, five sets.
Five sets of Youden pairs.
MR. TONKIN: Dave Tonkin, Centec.
I missed the beginning of your talk so maybe you
already addressed this, but were all of the
instruments used in the study simultaneous or
were there any sequential?
MR. MAXFIELD: There were 11
-------
369
direct readers and one sequential device.
MR. TONKIN: Is there any
conclusion about the precision and accuracy, one
versus the other at this point?
MR. MAXFIELD: There is none
at this point and I doubt seriously whether we
will be able to draw any conclusion with regard
to direct reader versus sequential device with
only one sequential device included in the study.
MR. TONKIN: Would you anticipate
a need for this in the future? It seems like the
instrumentation industry, in terms of ICAP, is
going towards the sequential.
MR. MAXFIELD: It would seem like
a very reasonable thing to do. The problem I see
with that is the sequential devices operate very
differently and the operator of the sequential
device can operate his particular device in so
many different ways, using so many different
lines and different procedures for background
correction, et cetera.
-------
370
Any other questions?
MR. MEDZ: In the regulations
or in the write up of the procedure, are there
going to be any changes to reflect that fact,
that you have more latitude with sequential
instruments in choosing your background correction
or moving to another line when there are
interferences?
MR. MAXFIELD: The method as it
is currently written, I don't believe addresses
the sequential device to any great degree. In
fact, the lines are not specified at this point.
There are some lines that are referred to in
the method, but lines are not specified for
individual elements; at least that's my under-
standing at this point.
MR. TELLIARD: Thank you, Bob.
When did you say that report was going to be?
MR. MAXFIELD: The spring.
MR. TELLIARD: Direct draft,
interim draft; you and Bob Medz, I'm sorry, Bob.
-------
371
MR. TELLIARD: Our next speaker
is from TRW and Ray is going to talk about
precision. I won't address the rest of his title
because bias is in the eye of the beholder.
-------
372
A SURVEY OF PRECISION AND BIAS DATA FOR METHODS
OF ANALYSIS FOR PRIORITY POLLUTANT ELEMENTS
Ray F. Maddalone, TRW, Inc.
MR. MADDALONE: Having sat through
a few days worth of GC/Mass Spec and being a per-
son who is more attuned to the inorganic analysis,
I'm going to try to prove that there are other
elements than carbon, hydrogen, oxygen, chlorine,
and fluorine. I am going to talk about the
other parts of the periodic table, in particular
the 13 priority pollutant metals.
What we have been listening to is what has
been going on with the forefront of technology.
What TRW has tried to do in a study for the
Electric Power Research Institute (EPRI) is to
develop a picture of what the people in the
trenches are actually doing and what they are
capable of doing. What we have found in this
study is that the analysts in the field are not
-------
373
performing as well as the people on the forefront
of technology expect them to.
Before I get into the actual presentation,
I want to give you a brief outline of the program
that TRW has with the Electric Power Research
Institute. It is RP1851-1 and the EPRI program
manager is Winston Chow. The project consists
of four primary tasks. The first task is one on
data base development. For this data base, we took
data from a number of sources, in particular 100
of the most recent NPDES 2C permit forms which
were coded and then put into our computer system
at TRW. In addition, all of EPA/EGU's data and
information from open sources were included. All
of this data was then computerized and statisti-
cally evaluated for outliers, and used to calcu-
late the aqueous discharge concentrations for the
steam electric power industry. We wrote a data
base report which is now in the hands of the Pro-
ject Manager and should be published this spring.
-------
374
The second task, which is the main focus of
the program, concerns the review of the sampling
and analysis methods. This task had two major
components, one of which was a precision and
bias data compilation effort which I will talk
about today. The second subtask is the litera-
ture review effort, which consists of reviewing
the chemical literature for the last ten years
with the intent of identifying interferences and
finding solutions for the problems that exist
with the NPDES approved methods for priority
pollutant metal analysis.
The third task is a small effort to plan
for Phase II, which we believe will be a vali-
dation study of the methods used for NPDES prio-
rity pollutant metal analysis. The fourth task
was a workshop. At this workshop utility chemists
came to Los Angeles for formal presentations and
then broke up into working groups to discuss
sampling and analysis problems related to the
-------
375
utility industry. There will be a proceedings
document from the workshop containing the formal
presentations and the consensus R&D development
ideas that were recommended by the utility
chemists.
Today I am going to discuss the findings from
two major sources of precision data on the prior-
ity pollutant metals analysis methods. The first
source was the data tape from the DMR-QA-I study,
which was obtained through the good offices of
Bob Medz and Wayne Gueder in Washington, and
John Winters and Paul Britton of EMSL-Cincinnati.
DMR-QA stands for Discharge Monitoring Report,
Quality Assurance program. The second source or
rather sources of precision and bias data was
compiled from the validation studies that we
could find in the open or governmental literature.
The DMR-QA study we evaluated was conducted
in 1980 and consisted of distilled water ampules
containing 26 parameters, including 10 of the
-------
376
13 priority pollutant metals. There were two
concentration levels for each parameter which
varied depending on the element and parameter.
For the sake of this presentation I will simply
refer to them by their code names: red and
white. The data tape obtained from EMSL was
coded in a manner which permitted us to make
various data evaluations. For example, we
could break out the EPA State results and com-
pare them to the Permittee laboratory results.
The data tape contained results from all the
NPDES Permittees responding, so it wasn't spe-
cific to the utility industry. I want to define
two words. When I say method, I'm referring to
a generic title such as Graphite Furnace Spec-
troscopy (GFAAS), ICP, Flame Atomic Absorption
Spectroscopy (Flame AAS). When I mention pro-
cedure, I'm referring to the protocol, such as
ASTM, or Standard Methods that were used to
perform the GFAAS or Flame AAS analyses.
-------
377
The DMR-QA data reduction was done with soft-
ware developed by TRW and using our CDC com-
puter system. Without going into great detail,
the first steps in the data reduction effort
consisted of an outlier test. At the suggestion
of Paul Britton, we used a screening test to get
rid of the decimal point errors or obvious
recording errors. We did that by excluding any
data point that was a factor of 5, higher or
lower than the true value. The data that passed
through this initial screening test was then
tested with the ASTM D-2777-77 (a one percent
double tail test). The data that failed either
test were omitted from the final compilation.
We calculated the mean, the standard deviation,
and the relative standard deviation. We also
calculated biases and differences. Biases
being the mean of the EPA/State or the Permittees
as compared to the true value. By differences,
I am referring to the EPA/State mean compared
-------
378
to the Permittee mean. All of this was placed
with other details on a single page format for
each parameter. If you are interested, I have
a copy of the report here in the draft form and
I can show you the type of format that was out-
put. Incidentally, all the data for the 26
parameters were reduced.
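As an illustration of the reduction steps just described, the sketch below applies the factor-of-5 screen and then a simple Grubbs-type check standing in for the ASTM D2777-77 one-percent two-tailed outlier test (the 3.0 cutoff is illustrative, not the critical value from the standard), before computing the mean, standard deviation, relative standard deviation, and bias:

    import statistics as st

    def reduce_parameter(results, true_value):
        # 1. Screening test: drop decimal-point and gross recording errors,
        #    i.e. anything more than a factor of 5 above or below the true value.
        data = [x for x in results if true_value / 5 <= x <= true_value * 5]

        # 2. Stand-in for the ASTM D2777-77 outlier test: drop the most extreme
        #    point while it lies far outside the rest of the data.
        while len(data) > 2:
            m, s = st.mean(data), st.stdev(data)
            worst = max(data, key=lambda x: abs(x - m))
            if s > 0 and abs(worst - m) / s > 3.0:   # illustrative cutoff only
                data.remove(worst)
            else:
                break

        mean, sd = st.mean(data), st.stdev(data)
        return {"n": len(data), "mean": mean, "sd": sd,
                "rsd_pct": 100 * sd / mean,
                "bias_pct": 100 * (mean - true_value) / true_value}

    # Example: six reported results for a parameter with a true value of 50 ug/L.
    print(reduce_parameter([48, 52, 55, 47, 260, 51], true_value=50.0))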
This whole exercise was completed for the
individual method procedures. The procedure
data was also compiled so that all of the proce-
dures for a given method were placed into one
data set. We did that by taking the equivalent,
alternate procedures listed under 40 CFR 136.
First, some general observations about the
DMR-QA data set. The DMR-QA test concentrations
were compared to the non-cooling water discharge
(NCWD) concentrations calculated from the Task 1
data base. The non-cooling water discharge
streams are all the power plant discharge
streams except the cooling water streams.
-------
379
Had we added the cooling water streams and com-
puted the average, it would have given us an
arbitrarily low number. So we sequestered the
once through cooling water data into a separate
group. The NCWD concentrations represent the
nominal concentrations a plant chemist would
monitor.
We found that the red sample set was approxi-
mately five times higher than the non-cooling
water discharge concentration and the white set
was generally higher by a factor of more than 15. As
a result, we are not sure that the precision data
that we saw in the DMR-QA data set is represen-
tative of the actual samples that the utility
industry has to monitor.
The methods that were used by the EPA/State
laboratories and the Permittees were primarily
the same. The biggest difference was that the
Permittees used wet chemical analyses (WCA) some-
where between two and nine percent of the time;
-------
380
whereas, the EPA/State laboratories never used
it at all. The biggest difference in Atomic
absorption usage was for the Graphite Furnace
AAS analyses of arsenic and selenium. As you
can see by the data in the slide, the EPA/State
laboratories primarily used GFAAS for those two
elements; whereas, the Permittees used the combina-
tion of gaseous hydride, wet chemical methods,
and GFAAS.
The procedure selection was also very inter-
esting. The EPA/State laboratories, as you would
expect, used the "Methods of Chemical Analysis
for Water and Wastewaters" (MCAW) most of the
time. The Permittees only used it 57 percent of
the time and very surprisingly, at least as far
as I was concerned, is that they used "Standard
Methods" as their second choice. I think it's
very important that we, as a group, try to get
the message across to the users that the ASTM
-------
381
procedures are far better written than "Stan-
dard Methods" on the MCAW. There are no pre-
cision and bias statements in the "Standard
Methods"; whereas, the ASTM methods have preci-
sion and bias statements for each procedure.
Also, each ASTM procedure is written from start
to finish for each metal and not grouped under
a general method as they are in "Standard
Methods".
One final general observation about the
DMR-QA data is that the EPA/State data set had
far fewer outliers than the Permittee's. Many
of the data points were removed by the initial
screening test.
The next two slides are histograms summari-
zing the relative standard deviation data for
flame and graphite atomic absorption. The rela-
tive standard deviation is plotted on the X axis
with the number of elements falling in a given
range of relative standard deviation plotted
-------
382
on the Y axis. The top two histograms are for
the Permittees red and the white concentration
test sets. The bottom two are for the EPA/State
data for the red and white test concentrations.
So if you look at it in the vertical sense, you
can compare the two histograms for data distribu-
tions. For Flame AAS both distributions are
similar. The EPA/State RSD's tend to cluster in
the 10 to 20 percent bracket. The Permittee
Flame AAS data is slightly higher compared to
the EPA/State laboratory data. In particular,
there are three bad elements (As, Se, Hg) pro-
bably because they were determined by the Per-
mittees using gaseous hydride absorption.
The next slide shows the same type of histo-
grams for GFAAS. There is a much bigger differ-
ence in RSD's between the two different organiza-
tions when you look at the GFAAS data. In this
case, you can see that the Permittees had a
much wider spread in their relative standard
-------
383
deviation versus the EPA/State, which was
clustered, around or less than plus or minus 20
percent. Clearly, there is a difference in how
these two groups are able to apply the methods.
In addition to the DMR-QA data, we collated
precision data from a number of sources. This
next slide shows a list of the documents that
we collected, and reviewed. During the course
of this review we found that the 1975 AOAC Manual
and "Standard Methods" used the same precision
and bias data. The source for this precision
and bias data was a 1968 Public Health Service
study. This fact re-enforces my concern about
using "Standard Methods" as a procedure manual.
The best source for precision and bias data is
the ASTM, Part 31, Water. In many cases we were
able to obtain the original research reports
used in ASTM, Part 31, Water.
-------
384
We also collated data from Bill Telliard's
study on Mining Effluents and the Utility Water
Act Group (UWAG) inorganic analysis round robin
study. These are the only two studies that used
samples that were collected, spiked, and split
in the field, and then sent to the participants.
Even with all of these studies that were
performed, we found a lack of high quality data
for matrices that might be considered challenging.
If you look at this next slide which shows the
matrices that were tested and the various methods
that were used, you can see the limited extent
of validation data. If you look upward from the
Ohio River water, you will see that there is a
lot of data collected for standards either in
distilled tap, or surface waters. Whereas you
go down from there, you find the same elements
are being done and only a few, maybe six or
seven of the priority pollutant metals have
been tested in matrices that are challenging
-------
385
or representative of common SIC matrices.
Now, what did we do when we collected this
precision data? The idea was to have precision
data at three concentrations, so we could calcu-
late a regression equation of the single operator
and overall standard deviation obtained at the
mean concentration tested. With these equations,
we could go back and calculate what the relative
standard deviation is at the specific non-cooling
water discharge concentration. We could also
calculate the limit of detection using the inter-
cept of this equation. Finally, we could use
this equation to calculate the limit of quantita-
tion using a specific relative standard deviation.
Now, the idea with using the regression equa-
tion to calculate the limit of detection (LOD)
is based on the idea that if you have a plot of
standard deviation versus the test concentration,
you then can extrapolate to the standard deviation
at zero concentration. Some factor times the
-------
386
standard deviation at zero concentration is
defined as the LOD of a method. In most cases
it is obtained by taking a distilled water blank
or your blank reagent sample and analyzing it a
number of times. In this case, we are using
the actual data generated from these validation
data to extrapolate to the standard deviation
at zero concentration. LODs calculated in this
manner are fairly conservative (i.e., low) esti-
mates since as you approach the limit of detec-
tion, the absolute standard deviation tends to
reach a limiting value and not linearly decrease.
This next slide shows the approach that we
took in calculating the limit of detection.
Generally, you can define the limit of detection
as the minimum concentration that produces
a specific relative standard deviation. This
is pretty much what the ACS guidelines are and
the general approach that Lloyd Currie took in
his article on limits of detection and quantitation.
-------
387
Based on the RSDs calculated at NCWD concentra-
tions, the relative standard deviation for rou-
tine analyses should be near plus or minus 20
percent. As this slide shows, you can use the
linear regression equation to calculate a con-
centration that would give you a relative stan-
dard deviation of 0.2 (20 percent relative stan-
dard deviation). Taking this approach, we calcu-
lated both the extrapolated three sigma limit
of detection and the calculated limit of quanti-
tation.
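In compact form, and assuming a straight-line fit of standard deviation (s) against test concentration (C), s = m*C + b, those two quantities can be computed as sketched below; the limit of detection is taken as three times the extrapolated standard deviation at zero concentration, and the limit of quantitation as the concentration at which the predicted relative standard deviation equals 20 percent. The example numbers are made up.

    def lod_loq(concs, sds, factor=3.0, target_rsd=0.20):
        # Least-squares fit of standard deviation against concentration: s = m*C + b
        n = len(concs)
        cbar = sum(concs) / n
        sbar = sum(sds) / n
        m = (sum((c - cbar) * (s - sbar) for c, s in zip(concs, sds))
             / sum((c - cbar) ** 2 for c in concs))
        b = sbar - m * cbar

        lod = factor * b    # three times the standard deviation at zero concentration
        # LOQ: concentration where (m*C + b) / C = target_rsd, i.e. C = b / (target_rsd - m)
        loq = b / (target_rsd - m) if m < target_rsd else float("inf")
        return lod, loq

    # Example: overall standard deviations of 4, 7 and 13 ug/L measured at
    # test concentrations of 20, 50 and 100 ug/L.
    print(lod_loq([20, 50, 100], [4, 7, 13]))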
Before I show you a table which compares those
numbers to the NCWD concentrations, I want to
give you an overview of the precision and bias
data that was obtained from these validation
studies. We generally found that at the non-
cooling water discharge concentrations
Flame AAS produced poor precision. This was
not totally unexpected because NCWD concentra-
tions are generally below its limit of detection.
-------
388
We also couldn't see any trends with precision
based on matrix effects, but this correlation
may have been obscured because we didn't have
the exact composition of a matrix to rank the
matrices. Future validation studies should have
a spark source mass spectroscopic analysis and an
anion analyses of the matrix, so you have some
idea of why one matrix's precision is different
from another. There is another problem that
there was no single study that covered all of
the matrices, so we were trying to compare differ-
ent groups of people doing different matrices.
The biggest single finding was that the over-
all precision was two times higher than the sin-
gle operator precision for all the methods that
we studied (Flame AAS, GFAAS, and ICP). This
has a major impact when you calculate the limit
of detection. It will make a factor of two
difference when you use either the overall pre-
cision or the single operator precision.
-------
389
To illustrate calculation of precision based
on limits of detection, I took the data presented
by George Stanko using the standard addition
technique with Method 624 and compared it to the
detection limits listed in Method 1624. I assume that
in Method 1624 two sigma detection limits are
reported though I may be wrong, but just for the
purposes of discussion let's assume that that is
the case. If a LOD was listed as 10, we converted
that to a three sigma so it would be 15. If you
compare that to a three sigma detection limit
calculated from the data that George presented,
you find that in some cases you have reasonably
good agreement, but in other cases you have
very poor agreement. This is the same sort
of thing that we found with the trace metal data.
On this slide there is a listing of the non-
cooling water discharge concentrations. I set
a criterion that a method could detect or quanti-
tate an element if its limit of detection or
-------
390
limit of quantitation, based on overall precision,
was below the non-cooling water discharge concen-
tration. This slide shows that ICP could not
detect no more than six of the elements at their
non-cooling water discharge concentration. GFAAS
did a bit better. They were able to detect 10
of the 13, but unfortunately five of the data
points were actually based on single operator
precision.
When we get to quantitating; that is, being
able to measure the element at plus or minus 20
percent relative standard deviation, we found
that ICP can quantitate only a few elements at
NCWD concentrations. Only two of the metals
(cadmium and zinc) could be quantitated at the
non-cooling water discharge concentration.
GFAAS dropped down to five elements out of
the 13, but four of those elements are based
on single operator precision. As we mentioned
earlier, the single operator precision is approxi-
-------
391
mately a factor of two less than the overall
precision. As a result, we are not exactly sure
whether the data that we have really does indi-
cate that the graphite furnace can be used to
detect priority pollutant metals at non-cooling
water discharge concentrations.
So what we have done in this EPRI program
is to establish what the state of the art is for
the methods being used to analyze the 13 priority
pollutant metals. What we expect to do in the
future is to extend the study to other parameters.
In fact, we have a recent add-on to the project
to do six more conventional and non-conventional
parameters. We also hope to validate the
methods used to monitor these metals using power
plant discharge streams.
If you have any questions, I would be happy
to answer them at this time.
-------
391m
CALCULATION OF LOQ
LOQ: MINIMUM CONCENTRATION THAT PRODUCES A SPECIFIC RELATIVE STANDARD DEVIATION (RSD)
-------
392
QUESTION AND ANSWER SESSION
MR. RICE: You might point out
the number of participants and the composition
in the DMR-QA1.
MR. MADDALONE: Generally, for
the metals the number was about 200. I think
the maximum number that I saw was on the order
of 200 respondents reporting for a given metal.
Most metals had on the order of 40 or 50 people re-
porting. There is a correlation. Since the
DMR-QA study allowed them to monitor all or none
of the parameters that were in the vials, depend-
ing on what elements are required by their per-
mits, the number in the DMR-QA study related to
the number of people required to monitor a pol-
lutant.
MR. MEDZ: Ray, the 1980 DMR-
QA program was a pilot program and we
only had two states participating in the 1980
-------
393
program. We had one state that had the primacy,
that was Minnesota, and we had one state that
did not have the primacy, that was New Jersey.
The study on which we based the number two study,
which is completed now, had almost a full
8,000 dischargers.
MR. MADDALONE: We would really
like to get a copy of that information for our
program.
MR. RICE: Bob, I had a question.
As far as I was concerned we were led to believe
that the DMR-QA tape that was made available to
EPRI covered the round of the five or six major
SIC category industries and that this represented
all responses. It wasn't just a two-state affair.
MR. MEDZ: I remember, in 1980
we had a pilot program.
MR. RICE: Well, there was a
pilot program prior to this, as far as I know,
but that was New Jersey; wasn't it?
-------
394
MR. MADDALONE: One problem with
the DMR-QA1 study was the number of procedure
codes available. There was a large number of
procedures that were used that weren't expected
to be used. There was a code "99" that lumps
all those responses together. We would like to
recover that information. I understand there's
more codes in the second study based on the
results from the first.
MR. STANKO: George Stanko, Shell
Development. I think if you will check you will
find that there was a DMR-QA study 1 with approxi-
mately 7,000 permit holders and that DMR-QA Study
2 with approximately 7,000 to 8,000. There was
also a pilot program before Study 1 or Study 2.
So there should have been a lot more data for
Study 1 than what you show.
MR. MADDALONE: Well, it depends
on the parameter. If you look at the pH, we had
something like 2,000 respondents; but, then, in
-------
395
the metals you would end up with 40, 50, to 200
people responding on that particular element.
MR. STANKO: I would have thought
for zinc you would have had a lot more than what
you did.
MR. MADDALONE: I'll have to
look through... I don't have that data with me.
MR. RICE: I do, you will see it
on the slide I have, George.
MR. STANKO: Thank you.
MR. MADDALONE: Bob, one question
about that. The tape that we received, was that
the pilot study?
MR. MEDZ: Well, when you said
that you only had 40 or 50 respondents...
MR. RICE: No, Bob, I'm almost
answering for Ray, and that's what George says is
true. Our understanding was that this was the
first major round, it wasn't the pilot study,
that on pH and total suspended solids and common
-------
396
parameters such as that there were thousands of
responses on that data tape. The numbers that
I will show on the slide I have are for those
who had to run these elements.
MR. TELLIARD: Our next speaker,
now speaking, is Jim Rice. Jim is a consultant
to the utility industry. He and I have been
jousting over monitoring questions for the last
seven years and today he would like to talk a
little bit about compliance monitoring and the
poisons being discharged from public utilities.
-------
397
COMPLIANCE MONITORING METHODS FOR PRIORITY
POLLUTANT ELEMENTS IN THE DISCHARGES FROM
STEAM ELECTRIC POWER PLANTS
James K. Rice, PE
Consulting Engineer
Olney, Maryland 20832
ABSTRACT
The data presented in the report by the Electric
Power Research Institute, "Aqueous Discharges from
Steam Electric Power Plants: Analytical Methods
Precision and Bias," November 1982, clearly supports
a concern of the Steam Electric Power Generating
Industry that insufficient interlaboratory precision
data exists for the compliance monitoring methods
for priority pollutant elements associated with power
plant discharges. In addition, the potential for
greatly lowered NPDES permit limitations based on
water quality standards emphasizes the need for vali-
dation at these concentrations in effluent matrices
as well as in fresh, estuarine and ocean water.
In the absence of a national program for con-
sensus validation of environmental monitoring methods
-------
398
at appropriate concentrations and in representative
matrices, the Electric Power Research Institute has
been urged to undertake in cooperation with ASTM the
task of validating existing and future EPA methods
relevant to the power industry's discharges.
POLLUTANT PARAMETERS OF CONCERN
Pollutants derived from the fuel being burned,
chemicals added for cleaning or for corrosion and
deposit control, as well as pollutants present in
the intake may appear in the process discharges from
the steam electric power industry. The average con-
centration of the priority pollutant elements in
such discharges is presented in a recent study of one
hundred steam electric power plant NPDES Application
Form 2C's by the Electric Power Research Institute
(1). In a parallel study, EPRI determined the avail-
able precision and bias data for the approved analy-
tical methods for the priority pollutant elements
(2)(3). This latter study included an analysis of
the results of the performance sample program con-
-------
399
ducted by EPA in 1980 under Sec. 308 of the Clean
Water Act (DMR/QA-1).
Table I summarizes the data on the major pollu-
tants in coal-fired power plant process discharges.
The parameters are shown ranked by mass discharge
rate normalized by plant name plate capacity. It is
important to note that the priority pollutant ele-
ments are present in the lowest two of the four orders
of magnitude spanned by the mass discharges of all of
the pollutants. The average concentration of the
priority pollutant elements in coal-fired plant dis-
charges is used herein as the basis for examining the
adequacy of the compliance monitoring methods ap-
proved for these elements.
REQUIREMENTS FOR COMPLIANCE MONITORING
As per Sec. 304(h) of the Clean Water Act, EPA
has published analytical methods for use by permittees
to determine whether their aqueous discharges comply
with the terms of their NPDES permit. Any determina-
tion of the compliance of that result with the
-------
400
limitations in the permit must take into account the
precision of the method employed. Since the result
is always subject to verification by the regulatory
agency, a minimum of two laboratories are involved,
expressly or by implication, in making any compliance
determination. Thus, determinations of compliance
with a permit limitation can be made properly only
in terms of the interlaboratory precision of the
method employed on the matrix in question.
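One way to make this point concrete, under the simplifying assumption of normally distributed measurement error, is to ask how far above the permit limit a single result must fall before the excess cannot plausibly be explained by interlaboratory imprecision; the figures below are hypothetical.

    def minimum_demonstrable_exceedance(permit_limit, interlab_sd, z=1.645):
        # Smallest single result lying more than z interlaboratory standard
        # deviations above the permit limit (z = 1.645 corresponds to roughly
        # 95 percent one-sided confidence for normal errors).
        return permit_limit + z * interlab_sd

    # Hypothetical example: a 50 ug/L limit and a method whose interlaboratory
    # standard deviation in the matrix of interest is 10 ug/L.
    print(minimum_demonstrable_exceedance(50.0, 10.0))   # about 66.5 ug/L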
AVAILABILITY AND QUALITY OF PRECISION DATA
In view of the foregoing, the methods for the
priority pollutant elements, as contained in 40 CFR
Part 136, were examined by the EPRI study (2) to
determine both the single operator and the inter-
laboratory precision.
It is important to note that very little of the
interlaboratory precision data was found by the study
to have been collected in a manner that reflected
the errors introduced by the sample container, by
preservation, shipping and storage. ASTM Committee
-------
401
D-19 has recently adopted a definition for a multi-
laboratory operational precision that encompasses
all of these errors as well as the more common
within-the-laboratory errors.
An additional point revealed by the study is
that the very largest portion of the precision data
available on the Part 136 methods was developed on
reagent water, or on fresh natural water, by employ-
ing vials of concentrated standards that were diluted
by the recipient. Only a few studies determined pre-
cision data on specified effluent water samples,
none separately on estuarine or seawater.
For the average concentrations in power plant
process discharges Table II shows the relative stan-
dard deviation (RSD) as reported in the different
flame AAS procedures approved by EPA in Part 136.
It should be pointed out that the RSD's for the
ASTM procedures may not be applicable to the con-
centrations shown since, except for Se and As, they
were developed over a concentration range much higher
than those in Table II. Note that the mining indus-
-------
402
try's effluents are the only ones for which the
priority pollutant elements by flame AAS are specifi-
cally validated. In view of the many important and
varied matrices for which these methods are approved,
the amount of precision data available is clearly
inadequate.
Table III shows data similar to that in Table II
except for furnace AAS. Here there is even less
interlaboratory precision data than for flame AAS.
Even the 1979 MCAW does not contain any interlabora-
tory precision statements for furnace AAS. The only
study available, with one exception, on the furnace
AAS procedures for As, Cr, Cu, Ni and Zn as they
appear in the 1979 MCAW was performed by the power
industry on one ash pond effluent and on one river
water (4).
APPROVED ALTERNATIVE METHODS
EPA faces numerous problems with validating the
Part 136 methods. One underlying problem stems from
1973 when EPA accepted, a priori, that the differently
-------
403
written procedures for a given method, such as flame
AAS, for a given element as they appeared in several
widely employed standards publications produced
equivalent results. That is, the same level of con-
fidence could be placed in the results produced by
any of these several procedures when used by quali-
fied operators. Subsequent experience with the
methods concerned shows in hindsight that this
conclusion was incorrect. The clarity, the precise-
ness, and the detail with which a method is written
greatly influence the manner in which that method
is carried out by different skilled, or unskilled,
operators. Thus, the skill and care with which a
method is written greatly influences the closeness
with which one laboratory can verify the results of
another (one measure of which is interlaboratory
precision).
Table IV illustrates the varying degree of
equivalence of two of the most widely used alterna-
tive procedure sources, the 1974 METHODS FOR CHEMICAL
ANALYSIS OF WATER AND WASTES (5) (MCAW) and
-------
404
the 14th Edition of STANDARD METHODS (6) (SM). The
relative standard deviations (RSD) were obtained
from EPRI's analysis of the results for one of the
two sample sets furnished by EPA/EMSL under the DMR/
QA-1 program. In that program, each of the several
thousand permit holders who received the samples
(vials) diluted them with reagent water and
then analyzed them for selected parameters (those
required by their permits plus any others they chose)
by employing the procedures they normally used for
obtaining their compliance monitoring data. In addi-
tion to the results of their analyses, each permittee
reported, according to a prescribed code, the speci-
fic procedure that they employed. EPRI examined the
data using this code. It must be cautioned that
there is no way of knowing if each respondent em-
ployed the procedures exactly as written. Nonethe-
less, the data in Table IV are very informative.
Of the nine elements studied, six have RSD's
for the MCAW and the SM procedures that are signifi-
cantly different at the 99% level of confidence.
-------
405
Of these six elements, four (Cr, Cu, Ni, and Zn)
have RSD's that are significantly higher for results
determined using the procedure as it appears in
STANDARD METHODS than if the results were deter-
mined following the procedure as written in the
1974 MCAW; two, Cd and Se, have lower RSD's follow-
ing the SM rather than the MCAW procedures. It is
well to remember that these significant differences
in the performance of two widely used procedure
sources arose on reagent water. What the perfor-
mance differences would be on actual effluent
matrices is not known.
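One plausible way to judge whether two such RSD's differ
significantly, consistent with the squared-ratio column of
Table IV, is to treat the ratio of squared RSD's as a
variance ratio and compare it with a critical F value. The
Python sketch below illustrates that idea; it is offered
only as an interpretation of the kind of test involved and
is not presented as the actual EPRI computation.

    # Sketch of a two-sided F test on the ratio of squared RSD's, at the
    # 99% confidence level. Not the EPRI computation itself.
    from scipy.stats import f

    def rsds_differ(rsd_a, n_a, rsd_b, n_b, alpha=0.01):
        """Return True if the two RSD's differ significantly at level alpha."""
        # Order so the ratio is >= 1, as in the R-squared column of Table IV.
        (hi, n_hi), (lo, n_lo) = sorted([(rsd_a, n_a), (rsd_b, n_b)], reverse=True)
        ratio = (hi / lo) ** 2
        critical = f.ppf(1.0 - alpha / 2.0, n_hi - 1, n_lo - 1)
        return ratio > critical

    # Zn from Table IV: SM 12.4% (n=208) vs. MCAW 7.95% (n=317)
    print(rsds_differ(12.4, 208, 7.95, 317))   # expected: True (significant)
    # As from Table IV: SM 35.3% (n=36) vs. MCAW 28.5% (n=39)
    print(rsds_differ(35.3, 36, 28.5, 39))     # expected: False (not significant)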
EXISTING VALIDATION REQUIREMENTS
In 1978, EPA/EMSL made known formal requirements
for applicants who propose test procedures as alter-
natives to the procedures approved in 40 CFR Part 136.
These requirements for nationwide approval of equi-
valency specified comparative testing of representa-
tive samples of the point source discharges from five
Standard Industrial Classification codes or subcate-
-------
406
gories. It would appear from the EPRI study that
none of the Part 136 methods for the nine priority
pollutant elements discussed here has been so tested.
Table V summarizes the applicability of approved
methods for the nine priority pollutant elements
when these methods are evaluated by comparing their
detection and quantitation limits with the average
concentrations in power plant process discharges.
By this criterion, approved methods are available
to detect six of the nine for compliance purposes,
but to quantify only two, As and Se.
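The screening logic behind Table V can be stated compactly:
a method is judged able to detect (or quantify) an element
only if its LOD (or LOQ) falls below the lowest
concentration of interest. The short Python sketch below
illustrates that comparison; the LOD and LOQ figures in the
example are placeholders, not values from the cited tables.

    # Sketch of the detect/quantify screening used in Table V.
    # The LOD and LOQ values in the example call are hypothetical.
    def capability(lod, loq, wqs, proc_waste):
        required = min(wqs, proc_waste)    # lowest concentration of interest, ug/L
        return {"detect": lod < required, "quantify": loq < required}

    # Example with assumed furnace AAS figures for As (WQS 50, process waste 41 ug/L)
    print(capability(lod=1.0, loq=5.0, wqs=50, proc_waste=41))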
POWER INDUSTRY CONCERNS
The absence of interlaboratory precision data
for all of the matrices for each of the alternative
procedures discussed herein would probably not be of
great concern to the power industry if compliance
with effluent limitations were to be enforced on only
an order-of-magnitude basis at technology-based
effluent concentrations. The Part 136 methods' pre-
cision data allows adequate confidence to be placed
-------
407
in results under such circumstances. However, EPA's
major effort under the Clean Water Act has now
shifted from technology-based effluent limitations
to limitations based upon water quality standards.
The changes proposed in October 1982 in the Water
Quality Standards regulations (7) are a major step
toward implementing that shift.
The concentrations for the priority pollutant
elements in the present National Water Quality
Criteria (8) are significantly lower than the
discharge concentrations shown in Table I for power
plant effluents. The latest draft revisions of the
National Guidelines for Deriving Water Quality Cri-
teria (9) will lower many of these concentrations
still further. If future permits are likely to con-
tain effluent limitations for the priority pollutant
elements that result from stricter water quality
standards, or from the waste load allocation systems
that may be put in place, a major effort to correct
the situation evident in Table V must begin soon.
-------
408
VOLUNTARY VALIDATION
The Electric Power Research Institute has been
urged to begin a program whereby it would conduct,
in cooperation with ASTM and, it is hoped, with EPA,
validation studies of selected EPA-approved methods of
concern to the power industry (10). These studies
would be carried out on matrices representative of
the industry's process discharges and of the major
receiving waters, fresh, estuarine and sea. Power
plant and selected state and federal laboratories
would be the participants in the round robin studies.
The result of the program would be precision and
bias data that would be acceptable to the EPA, to
the industry and to the courts as representative of
the expected performance of the EPA-approved methods
on power plant discharges and associated receiving
waters. It is possible that this effort could begin
before the end of 1983.
Other industry associations may wish to consider
conducting methods validation programs
for their own members. It is essential that a solu-
tion be found to the present impasse.
-------
409
BIBLIOGRAPHY
1. "Aqueous Discharges from Steam Electric Power
Plants: Data Evaluation," Interim Report RP
1851-1, Electric Power Research Institute,
Palo Alto, CA, October 1982.
2. "Aqueous Discharges from Steam Electric Power
Plants: Analytical Methods Precision and Bias,"
Draft Report RP 1851-1, Electric Power Research
Institute, Palo Alto, CA, November 1982.
3. Maddalone, R.F., "A Survey of Precision and
Bias Data on Methods of Analysis for Priority
Pollutant Elements," Sixth Annual Priority Pol-
lutant Symposium, Norfolk, VA, March 1983.
4. "Round Robin Interlaboratory Inorganics Analy-
ses," Utility Water Act Group, Hunton & Williams,
Washington, D.C., January 1980.
5. Methods for Chemical Analysis of Water and
Wastes, U.S. Environmental Protection Agency,
Environmental Monitoring and Support Laboratory,
Cincinnati, OH, 1974.
6. Standard Methods for the Examination of Water
and Wastewater, 14th Edition, APHA-AWWA-WPCF,
Washington, D.C., 1975.
7. "Water Quality Standards Regulation," Proposed
Rule, Federal Register, 47, 49234 et seq.,
October 1982.
8. "National Water Quality Criteria," Federal
Register, 45, 79318 et seq., November 1980.
9. Stephan, C.E., et al., "Guidelines for Deriving
Numerical National Water Quality Criteria for
the Protection of Aquatic Life and Its Uses,"
Draft, U.S. Environmental Protection Agency,
Environmental Research Laboratory, Duluth, MN,
February 1983.
10. Rice, J.K., "Utility Perspective on Pollutant
Analysis Requirements," EPRI Seminar on Sampling
and Analysis of Utility Pollutants, Los Angeles,
CA, February 1983.
-------
409a
TABLE I
NORMAL CHARACTERISTICS OF
COAL-FIRED PLANT
PROCESS WASTE DISCHARGES

            Mass         RSD     Conc.     RSD
Parameter   Kg/day/GW     %      mg/L       %

TSS           1870        87     32         96
O&G            172        70      3.3       68
Mn              53       183      1.1      173
Fe              45        77      0.71      83
P               18       295      0.22     246
NH3             12       118      0.28     109
Zn               4.4      68      0.075     76
Cu               3.2     118      0.043    147
As               2.8     130      0.050    112
Pb               2.6      94      0.034     87
Ni               2.2      84      0.041     88
Cr               1.0     128      0.017    108
Se               0.58    130      0.012    123
Be               0.35     65      0.005     71
Cd               0.34    156      0.005    147

Source: Tables 4-11 and 4-17, (1)
-------
409b
TABLE II
REPORTED FLAME AAS RELATIVE STANDARD DEVIATIONS
BASED ON INTERLABORATORY PRECISION AT
PROCESS WASTE DISCHARGE AVERAGE CONCENTRATIONS

                                 RSD (%) by Data Source
          Conc.   ASTM 1981   AOAC      USGS     MCAW 1979   EPA/EGD
Element   ug/L    Reagent     Reagent   River    Natural     Mining
                  Water       Water     Water    Water       Effl.

As         41        8.5         -        -        41.2         -
Be          5.2     41.1         -        -          -          -
Cd          4.6   1134         121        -        128          -
Cr         19       62.3       67.5     26.5        66.3       86.6
Cu         45      274          38.9    27.1        34.6       27.6
Pb         35      169         120        -         69.4      157
Ni         39      240           -        -          -         70.1
Se         16       16.9         -      26.7        50.4        -
Zn         76       58.1        20.2    25.2        46.6       18.9

Source: Tables 4-2 and 4-3, (2)
-------
409c
TABLE III
REPORTED FURNACE AAS RELATIVE STANDARD DEVIATIONS BASED ON
INTERLABORATORY PRECISION AT PROCESS WASTE DISCHARGE
AVERAGE CONCENTRATIONS

                                 RSD (%) by Data Source
          Conc.   ASTM 1981   UWAG      UWAG       MCAW 1979   EPA/EGD
Element   ug/L    Reagent     River     Ash Pond   Natural     Mining
                  Water       Water     Effluent   Water       Effl.

As         41         -        47.1       9.1          -        53.8
Be          5.2       -          -         -           -          -
Cd          4.6       -          -         -           -          -
Cr         19         -        21.0      38.1          -          -
Cu         45         -        17.9      13.4          -          -
Pb         35         -          -         -           -          -
Ni         39         -        19.1      25.6          -          -
Se         16         -          -         -           -          -
Zn         76         -        24.5      42.3          -          -

Source: Tables 4-2 and 4-3, (2)
-------
409d
TABLE IV
COMPARISON OF RELATIVE STANDARD DEVIATIONS BETWEEN
ALTERNATIVE PROCEDURES FOR FLAME AAS BASED UPON
EPA DMR/QA-1 PERMITTEE RESULTS

          Conc.   MCAW 1974        STANDARD METHODS
Element   ug/L    RSD%     (n)     RSD%     (n)        R²      F[.01]

As        235     28.5     (39)    35.3     (36)      1.54      No
Be        235      6.95    (34)     7.94    (16)      1.30      No
Cd         39     19.4    (171)    14.2    (111)      0.53      Yes
Cr        261     13.9    (265)    18.2    (204)      1.72      Yes
Cu        339      6.74   (310)     8.63   (177)      1.64      Yes
Pb        435     15.0    (238)    13.5    (135)      0.81      No
Ni        207     11.5    (210)    15.0    (128)      1.69      Yes
Se         50.4   94.6     (27)    37.3     (34)      0.16      Yes
Zn        418      7.95   (317)    12.4    (208)      2.43      Yes

R = (RSD STANDARD METHODS) ÷ (RSD MCAW)
Source: Table 2-16, (2)
-------
409e
TABLE V
CAPABILITY TO DETECT OR
TO QUANTIFY AT LOWEST REQUIRED
CONCENTRATION(1)

                   Proc.        Detect              Quantify
          WQS      Waste
Element   ug/L     ug/L     GF/AAS   F/AAS      GF/AAS   F/AAS

As         50      41         Y       Y(2)        Y       Y(2)
Be          5.3     5.2       -       Y           -       N
Cd         10       4.6       N       N           N       N
Cr         50      19         Y       N           N       N
Cu         72      45         Y       N           N       N
Pb         50      35         N       N           N       N
Ni         13      39         Y       N           N       N
Se         10      16         N       Y(2)        N       Y(2)
Zn         47      76         N       N           N       N

(1) (Y) means that the method LOD or LOQ is lower than either
    concentration shown; (N) means not lower than the lowest of
    the concentrations shown.
(2) Gaseous hydride method
Source: Tables 5-19 and 5-20, (2)
-------
410
QUESTION AND ANSWER SESSION
MR. TELLIARD: And you say you
looked at coal-fired and gas-fired?
MR. RICE: Maddalone separated
the effluent data on steam electric plants into
three categories: coal-fired, oil-fired, and
gas-fired.
MR. TELLIARD: Any difference...
you didn't sample any hydro?
MR. RICE: We decided that the
question of potential pollution from hydro-power
dams was better left to the courts.
-------
411
MR. TELLIARD: That concludes
this year's presentation. Thank you for coming,
I hope you enjoyed it; hope to see you next
March, same time, same station, same players,
maybe a few more. Thanks a lot.
(WHEREUPON, the hearing was concluded.)
-------