EPA
United States
Environmental Protection
Agency
EPA-600/9-80-048
October 1980
Research and Development
Proceedings of a
Workshop on
Exposure
Assessment
January 2-3, 1979
Belmont House,
Maryland
Sponsored by:
Office of Health and
Environmental Assessment
Washington DC 20460
EPA-600/9-80-048
October 1980
Proceedings of a Workshop
on Exposure Assessment
January 2-3, 1979—Belmont House, Maryland
John L. Buckley and David L. Jackson, Editors
OFFICE OF RESEARCH AND DEVELOPMENT
U.S. ENVIRONMENTAL PROTECTION AGENCY
WASHINGTON, D.C. 20460
Preface
The general conclusions reached by participants in the
Workshop on Exposure Assessment have been taken seriously by
EPA's Office of Research and Development. We have initiated a
series of steps to increase recognition of the importance of exposure
assessments in the Agency's regulatory decisions and to improve the
coordination between specific media programs and the consistency
of their assessments.
Specifically, the Administrator authorized the formation of an
Exposure Assessment Group as part of the Office of Health and
Environmental Assessment in the Office of Research and
Development. The Group's responsibilities are: 1) to provide state-of-
the-art methodology, guidance and procedures for exposure
determinations; 2) to ensure quality and consistency in the Agency's
scientific risk assessments; 3) to advise the Program Offices on
proposed testing requirements with emphasis on the information
needed for adequate exposure determinations; and 4) to provide
independent assessments of exposure and recommendations to the
appropriate regulatory offices concerning the exposure potential of
specific agents.
Also, an Agency-wide Exposure Assessment Work Group has
been formed to ensure the full contributions of the operating
programs within EPA during the dynamic period in the development
of policy and guidelines on exposure assessment. Under the
leadership of Courtney Riordan, Chairman, and Tom McLaughlin,
Executive Secretary, this work group has reviewed the Workshop
report and supports its conclusions and recommendations: increased
priority and institutional visibility are necessary for exposure
assessments; "standardization" of the detailed methodology for
making assessments is not possible at this time; integrated
assessments and validation by measurement should be attempted
whenever possible; peer review is desirable for both the assessment
plan and the completed program; consistency must be improved for
assessments made throughout the Agency; and uncertainties and
limitations should be stated explicitly.
The Work Group has applied these recommendations in
constructing a draft of Exposure Assessment Guidelines to provide a
solid foundation for the formal operation of the Exposure Assessment
Group. The new Group will give valuable guidance to regulatory
decision-makers in assessing exposure to environmental pollutants.
Stephen J. Gage
Assistant Administrator
for Research and Development
Table of Contents
Preface ii
Acknowledgment iv
Introduction 1
Workshop Overview and Recommendations 3
Introductory Presentation: Exposure Assessment
Regulations and Enforcement
Cara Jablon 9
Case Study #1: Particulate Sulfates and the
Catalytic Converter
Roger Cortesi 13
Case Study #2: Chlorobenzilate
David J. Severn 19
Case Study #3: Benzene
Michael Berry 23
Case Study #4: Bladder Cancer Epidemiology
Study
Kenneth Cantor 27
Case Study #5: Lead
Lester Grant 33
Acknowledgment
The editors express their appreciation to Dr. Richard Marland for
his many hours of hard work in organizing the symposium and for his
substantive contribution to organizing this proceedings document.
We would also like to thank Ms. Melinda Graves of ORD for her
continuing assistance in the organization of the symposium and the
preparation of this manuscript. Thanks are also extended to the
workshop participants:
Elizabeth Anderson, Office of Research and Development/EPA
Darryl Banks, Office of Research and Development/EPA
Michael Berry, Office of Research and Development/EPA
John Buckley, Workshop Co-chairman, Whitney Point, NY
Kenneth Cantor, Office of Research and Development/EPA
(on detail to National Cancer Institute)
Stanley Coerr, Office of Air, Noise and Radiation/EPA
Roger Cortesi, Office of Research and Development/EPA
Phillip Enterline, University of Pittsburgh
Stephen Gage, Office of Research and Development/EPA
David Gaylor, Food and Drug Administration
Lester Grant, Office of Research and Development/EPA
George Hutchinson, Discussant, Harvard University
Cara Jablon, Office of General Counsel/EPA
David Jackson, Workshop Co-chairman, Case Western
Reserve University
William Marcus, Office of Water and Waste/EPA
Richard Marland, Office of Research and Development/EPA
Thomas Murphy, Office of Research and Development/EPA
William Murray, Office of Research and Development/EPA
Charles Poole, Office of Toxic Substances/EPA
Courtney Riordan, Office of Research and Development/EPA
Judah Rosenblatt, Discussant, Case Western Reserve University
Marvin Schneiderman, Discussant, National Cancer Institute
Edward Schuck, Office of Research and Development/EPA
David Severn, Office of Toxic Substances/EPA
Syed Shariq, Office of Research and Development/EPA
Introduction
In January 1979, the Office of Research and Development (ORD)
of the Environmental Protection Agency sponsored a two-day
workshop that permitted exchange of ideas and discussion among
regulatory decision-makers and experts in biostatistics,
epidemiology, and dosimetry/exposure modeling. Participants came
from both inside and outside the Agency. Their discussions
emphasized: (a) risk assessment, (b) approaches to improve the
methodologic consistency in exposure assessment, (c) improved use
of existing data bases, (d) areas where new methodology for data
acquisition or data analysis is needed, (e) methodologies for
identifying data requirements in a timely fashion for future regulatory
decisions, and (f) suggestions on priorities for ORD funding and
research efforts in exposure assessment in the near- and long-term
future.
This report summarizes the proceedings of the workshop and
synthesizes the recommendations stemming from the discussions.
Workshop Overview
and Recommendations
Participants in the Exposure Assessment Workshop brought to it
a keen awareness of the difficulties inherent in the assessment
process as well as of the necessity for decision-makers to have
reliable exposure information on which to base regulatory standards.
One of the major topics of workshop discussions was the complexity
involved in assessing exposures from multiple sources.
None of the five case studies discussed represented a true multi-
media assessment, though several did consider more than a single
medium of exposure. The media orientation (air and water) of EPA
program offices and the character of the authorizing legislation tend
to require, or at least encourage, single-medium exposure
assessments and single-medium regulations. Such assessments can
be done reasonably well with existing methods and data that are
already available or easily obtainable. Even in these relatively simple
situations, it is important to state explicitly the assumptions made
and the uncertainties in the assessment, making clear the range and
most probable kinds of exposure. Uncertainties will occur not only in
the estimates of geographic distribution, duration, concentration,
and transformation of the pollutant of interest, but also in the
behavior and movement of individuals in the area of concern. Such
single-medium assessments can be based on empirical data or
models, with actual measurements used as bases for model verifi-
cation whenever possible.
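The explicit bookkeeping recommended above can be sketched in a few lines of code. This is an illustration only, not an EPA method: the function, the inhalation-rate value, and the single uncertainty factor are invented placeholders for whatever assumptions a real assessment would document.

```python
# Illustrative sketch (not an EPA method): a single-medium exposure
# estimate that states its assumptions and uncertainty range explicitly.

def single_medium_exposure(concentration_ug_m3, inhalation_m3_day,
                           days_exposed, uncertainty_factor=3.0):
    """Return (best_estimate, low, high) intake in micrograms.

    Assumptions, stated explicitly as the text recommends:
      - concentration is a long-term average at the receptor location
      - the inhalation rate is constant over the exposure period
      - uncertainty_factor (assumed) bounds the combined error from
        model assumptions and population behavior
    """
    best = concentration_ug_m3 * inhalation_m3_day * days_exposed
    return best, best / uncertainty_factor, best * uncertainty_factor

# Hypothetical inputs: 10 ug/m3 average, 20 m3/day breathed, one year.
best, low, high = single_medium_exposure(10.0, 20.0, 365)
print(f"best estimate: {best:.0f} ug (range {low:.0f}-{high:.0f} ug)")
```

Carrying the uncertainty factor as an explicit argument, rather than burying it in the arithmetic, is what lets a reviewer see the range and most probable exposure at a glance.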
Multi-media exposure assessment is inhibited by institutional
arrangements, traditional scientific disciplines (for example,
hydrology, meteorology), and the lack of agreed-upon assessment
methodology. Also, legislatively and judicially imposed deadlines
frequently prohibit adequate exposure assessments, so that there is
time only to use existing data or data that can be acquired quickly;
there may be insufficient time for field measurements or model
verification.
The multi-media exposure monitoring and modeling conducted
by the Environmental Monitoring Systems Laboratory (EMSL), Las
Vegas, is the only example we have within EPA that demonstrates an
understanding of complete and complex exposure assessment. The
relative freedom from regulatory pressures probably contributes to
this approach, since there is no stimulus to favor one aspect of control
over another. We suggest that a larger investment in exposure
assessments drawing on Las Vegas experience and personnel would
be worthwhile.
Participants generally believed that the bladder cancer
epidemiological study (Case Study 4) was an excellent study model
that will be useful in the regulatory decision process. It was
emphasized that there was an extensive critique of the study before it
was undertaken. A similar pre-study critique could well be
incorporated into all such extensive Federal studies, not only to
improve their scientific quality, but also to blunt post hoc attacks on
the results once the study is completed.
It was also noted that an important difference exists between
qualitative hazard identification and quantitative hazard assessment.
The possibility was raised of reviewing state implementation plan
(SIP) guidelines for exposure assessments to determine the
consistency and quality of such guidelines. It was also noted that
perhaps there was information to be gained from the SIP guidelines
in a broader EPA context.
Additional emphasis was placed on the need for multi-media
exposure assessments that include an estimate of the relative
contribution of each exposure route. Also emphasized was the
importance of conducting "real world" monitoring to verify modeling
estimates as early in the regulatory decision process as possible. It
was concluded that it is difficult to arrive at a system that could
prioritize resource allocation, both for exposure assessment alone
and in combination with other information needs. This is particularly
important since many of the methodologies required to improve our
abilities in exposure assessment are relatively costly.
RECOMMENDATIONS FOR PRIORITIES
AND FUNDING
Workshop participants agreed that exposure assessments and
the methodologies necessary for generating adequate assessments
have not been assigned a high enough priority in Federal resource
allocation. It was also clear that no obvious way currently exists to
"standardize" the details of an approach to exposure assessment.
The wide disparities in data base uncertainties, the severity of
expected results, and the resources available in the area under
review necessitate a case-by-case approach to many exposure
assessments. Given this necessity for individualization, it is critical
that the Agency's general approach to exposure assessment have an
inherent logical consistency. This is important not only for the legal
defense of a given regulatory decision, but also as a basis for ensuring
that all the important factors in assembling an optimal exposure
assessment have at least been formally addressed in the process.
RECOMMENDATIONS FOR ADMINISTRATIVE
AND ORGANIZATIONAL ARRANGEMENTS
Because of the diversity of expertise within EPA, total
centralization of the responsibility for exposure assessments is likely
to be less productive than an arrangement of clearly established lines
of responsibility across the Agency. For such an arrangement to work
effectively, it must avoid the dangers of a too narrow, medium-
specific perspective. It is vital to have a central coordinating group or
office with major responsibility for issuance of general guidance for
exposure assessments. The group should also coordinate and review
the efforts in exposure assessment across the Agency.
The issue of coordination reaches beyond the confines of any
single office within EPA, or even the boundaries of the Agency as a
whole. Exposure assessment methodology should be a major area for
interagency coordination and cooperation, perhaps through a
working group of the Interagency Regulatory Liaison Group.
Additionally, the Agency should continue efforts to establish stronger
ties with the scientific and academic communities concerned with
exposure assessments. Multi-discipline, multi-agency workshops
like this one, as well as grant/contract strategies in this field, can all
help achieve this goal.
RECOMMENDATIONS ABOUT TIME FACTORS IN
REGULATORY ACTIONS
No matter what coordinative framework is established, two
generalities about the exposure assessment process should be
emphasized. First, for maximum effectiveness, exposure assessment
should be formally required to be considered at a very early stage in
the regulatory decision and review process. This may be simply a brief
qualitative review (for example, is there likely to be significant
exposure or not?) in the early stages, becoming a complete, detailed
exposure assessment when certain criteria are met. Secondly, the
system should include a formal attempt to conduct anticipatory
research in exposure assessment, in an effort to acquire data for
future regulatory actions. This must be accomplished in addition to
the more common "fire-fighting" mode of operation.
RECOMMENDATIONS FOR SPECIFIC GUIDELINES
A number of specific recommended guidelines for exposure
assessment also emerged from the discussions. It was almost
universally agreed that any assessment should include consideration
of multi-media exposure. Only if it can be specifically concluded that
other media do not contribute in any significant way to exposure of
any population group can a single-medium approach be used.
Methodologies for evaluating agent-specific as well as the more
common medium-specific exposure should be developed.
RECOMMENDATIONS FOR MODELING AND
DATA COLLECTION
Modeling of exposures will remain a key tool in exposure
assessment, particularly for evaluating substances not yet produced
in commercial quantities. It was stressed that, whenever possible,
field measurement of the most critical parameters should be carried
out to evaluate the precision of model estimations. Whenever
verification measurements cannot be made, the exposure
assessment should include explicit statements on the conditions
postulated in the model, the probabilities for each of the assumptions
in the model, and an estimate of the range of uncertainty introduced
by the inability to check on the model's accuracy. Whenever worst
case analyses are performed, it is particularly important that the
conditions of the worst case be clearly defined. A consistent approach
across the Agency to the definition of "worst case" should be
encouraged. "Worst case" can be derived by generating a range of
estimates for exposure and taking the "worst" or highest level as the
one upon which to base the exposure assessment. Alternatively, one
could use the most susceptible population as the population to be
protected, thus generating a different worst case analysis.
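The two "worst case" constructions just described can be illustrated with a short hypothetical sketch. All numbers, the effect threshold, and the susceptibility multipliers below are invented for illustration; they are not data from any case study.

```python
# Hypothetical illustration of the two "worst case" constructions.

# Approach 1: generate a range of exposure estimates and take the
# highest level as the basis for the assessment.
exposure_estimates_ug_m3 = [2.1, 4.8, 7.5, 12.0]  # invented model outputs
worst_case_level = max(exposure_estimates_ug_m3)

# Approach 2: protect the most susceptible population. The multipliers
# below are assumed; a real assessment would derive them from
# health-effects data.
susceptibility = {"general adult": 1.0, "children": 2.5, "asthmatics": 4.0}
most_susceptible = max(susceptibility, key=susceptibility.get)

effect_threshold_ug_m3 = 10.0  # assumed effect level, general population
protective_level = effect_threshold_ug_m3 / susceptibility[most_susceptible]

# The two approaches yield different analyses: approach 1 raises the
# exposure side, approach 2 lowers the level considered acceptable.
print(worst_case_level, most_susceptible, protective_level)
```

The sketch makes the workshop's point concrete: "worst case" is not a single definition, and an assessment should state which construction it used.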
Whenever various control strategies are being considered in the
decision-making process, the long-range impact of the various
strategies should be considered, as well as the short-term impacts. A
control strategy that achieves the greatest reduction in the short-
term may not be the most effective strategy when all the long-term
implications are reviewed. In order for the long-term impact of
various control strategies on exposure assessments to be carried out
effectively, it is necessary that the transport and fate of the substance
under consideration and the possibility of future exposure from
"sinks" be well understood.
SOME SPECIFIC RESEARCH AND DEVELOPMENT
NEEDS
Some of the case study discussion groups noted research and
development needs specific to their areas. The sulfate discussion
group took note of exposure assessment needs in the areas of diesel
emissions, toxic substances and control, and resource conservation
and recovery. They also urged work in measurement methods, model
verification, and diesel emissions transport and fate studies. The
need for personal dosimeters was noted by both the chlorobenzilate
and the bladder cancer discussion groups. The chlorobenzilate group
called attention to the urgent need for an evaluation of exposure
information pertaining to home and garden pesticides. They also
suggested that industries need guidelines on generation of data for
pesticide exposure assessment for the Rebuttable Presumption
Against Registration (RPAR) process. The bladder cancer group
suggested a search for indicator variables or "flags" that might be
used as evidence of exposure; they also suggested a study to consider
the implication and utility of the concept of a "risk budget."
Introductory Presentation:
Exposure Assessment
Regulations and Enforcement
Cara Jablon
Office of General Counsel, EPA
Exposure assessment is a key factor in any statutory scheme
requiring either that exposure considerations be taken into account
or that a risk/benefit analysis be performed to determine whether a
substance poses an unacceptable health risk. Without adequate
exposure data, it is extremely difficult to arrive at reasonable
regulatory decisions or to defend such decisions at Agency hearings
or in judicial review.
The Rebuttable Presumption Against Registration (RPAR)
process under the Federal Insecticide, Fungicide, and Rodenticide
Act (FIFRA) is an example of a regulatory scheme in which exposure
assessment is an important factor. Many of the considerations faced
in this process are applicable to any statutory scheme that requires a
risk/benefit analysis. The RPAR process is a review of pesticides to
determine if these substances pose an unreasonable adverse risk to
human beings or to the environment. The statutory provisions of
FIFRA give a broad mandate to EPA to conduct these reviews. The
regulations provide risk criteria to measure the potential for adverse
effects and require that the Agency issue an RPAR notice when the
risk criteria are met or exceeded. The risk criteria require that the
Agency take exposure into account in determining whether a
rebuttable presumption should be issued for chronic effects other
than oncogenicity or mutagenicity. These latter two effects give
cause for issuing an RPAR notice, even in the absence of a specific
exposure assessment.¹ After an RPAR notice has been issued, the
registrant has 105 days to respond with rebuttal information. EPA
then reviews the rebuttal submissions and any additional available
information to determine whether any of the presumptions have
been successfully rebutted. Any remaining presumptions are then
¹The conference report to the 1978 FIFRA amendments directs the Agency to
"establish suitable scientific protocols for the development of human exposure data; to
work cooperatively with the registrants in the assembly and collection of such data;
and then to evaluate and weigh such data prior to initiating an RPAR process." Thus, it
is likely that all available exposure information for oncogenicity and mutagenicity will
be considered prior to the issuance of an RPAR.
subjected to a risk/benefit analysis, using exposure data that
incorporate the information generated during the rebuttal period, to
determine the potential risk attributable to the various uses of the
pesticide. If a determination is made that the risk is greater than the
benefits, a notice of intent to cancel or restrict the use of the pesticide
is issued. At this point, registrants usually request an Agency
hearing.
At the Agency hearing level, the exposure data base is often a
major area of concern. In previous cancellation hearings, solid
evidence was often presented of exposure such as measurable levels
of the pesticide substances in human tissues and fluids. However, for
many of the pesticides currently under review, sufficient data are not
available to determine levels of exposure. The lack of data means
that a multitude of assumptions must be made in order to generate
an exposure assessment. These assumptions may be
difficult to defend against a rigorous attack in any formal hearing or
court review. Thus, it is essential not only to generate more exposure
data, but also to assure that the data developed will be useful in
arriving at a supportable regulatory decision. There is a specific need
with regard to pesticides for studies on levels of human exposure by
dermal, inhalation, and ingestion routes, as well as for information on
the size and exposure profile of the population at risk. Studies must be
generated to allow a determination of the actual human intake by
inhalation from ambient levels, the actual amount of absorption of
the material through the skin, and the actual level of pesticide
residues present in raw and processed foods.
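Combining the dermal, inhalation, and ingestion routes into a total intake can be sketched as follows. This is an illustration only; every rate, deposition value, and absorption fraction below is an invented placeholder for what the studies called for above would actually measure.

```python
# Illustrative sketch: combining route-specific pesticide intakes into
# a total daily intake. All input values below are invented placeholders.

def total_daily_intake_ug(air_ug_m3, breathing_m3_day,
                          dermal_deposit_ug_day, dermal_absorption,
                          residue_ug_kg_food, food_kg_day):
    """Sum intake (ug/day) over the inhalation, dermal, and dietary routes."""
    inhaled = air_ug_m3 * breathing_m3_day        # assumes full retention
    dermal = dermal_deposit_ug_day * dermal_absorption
    ingested = residue_ug_kg_food * food_kg_day
    return inhaled + dermal + ingested

# Hypothetical values: ambient air level, breathing rate, skin
# deposition, fraction absorbed through skin, food residue, food intake.
intake = total_daily_intake_ug(0.5, 20.0, 200.0, 0.1, 3.0, 1.5)
print(f"total intake: {intake:.1f} ug/day")
```

Keeping each route as a separate term is what allows the relative contribution of dermal, inhalation, and ingestion exposure to be reported alongside the total.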
Unless studies are designed to provide information concerning
real world exposure, they provide little aid in arriving at reasonable
regulatory decisions. Both physical monitoring (human and residue)
and animal studies are important in assessment of exposure to a
hazardous substance. As a starting point, it is useful to have studies
on methods and rates of application for families of chemicals. The
most valuable information is, of course, derived from studies on the
specific pesticide of concern. Particular emphasis should be placed
on performing monitoring studies on the specific population at risk in
order to determine the exposure per unit time.
Each exposure study must consider critical exposure factors
such as the specific chemical formulation, the population at risk, the
geographic and meteorologic conditions, the frequency and duration
of use, the method of application, the particle size, and any customary
use not in accordance with label directions. Information is most
useful when the exposure data are generated by the highest
application rate permissible by label instructions. The design of these
studies requires specific input from toxicologists, engineers,
environmental and analytical chemists, biomedical scientists,
agricultural experts, and attorneys. Attorneys should be consulted
early in the planning process as to the usefulness of the study design
in generating data that can form a basis for a supportable regulatory
decision.
In the design of animal studies, similar considerations must be
applied. These include attention to realistic routes of exposure, the
proper selection of animal species, an adequate number of animals in
the exposure groups and appropriate levels of exposure.
Clearly, if properly designed studies are not available to form the
basis for an exposure assessment, the Agency will have great
difficulty in promulgating a regulatory decision, and in defending this
decision. Although the burden of proof is on the registrant to
establish the safety of his product, it is incumbent upon the Agency to
establish the validity of the data base on which regulatory action is
taken. When exposure assessment is based on a multitude of
unsubstantiated assumptions, the Agency's regulatory action may
be effectively challenged in court. The more uncertainties and
assumptions that are present, the greater the chance that the
registrant will be able to successfully challenge the impact of the
exposure assessment and render it potentially "unreasonable." A
risk assessment can be seriously damaged by the demonstration of
potential error in each sequential step of the exposure estimate. The
decision process may become so clouded by the magnification of the
potential error from such an analysis that the regulatory decision will
not be upheld under review. These problems can be greatly reduced
by using data from well-designed studies that provide information
about real-world exposure. In the absence of definitive studies, it is
essential that the potential error in each assumption be limited by
reference to a solid data base on related compounds and application
methodology. The presence of confirmatory data from a number of
sufficiently different sources is important in establishing the validity
of any assumptions underlying the assessment.
Several other considerations about the use of exposure data in a
hearing or court situation are important. First, in order to effectively
convince a judge to support an Agency decision that a given pesticide
poses a significant health or environmental problem, the Agency
must be forthright about the shortcomings of the data base. It is often
helpful to state a range of estimated values for an exposure
assessment if the assessment is based on assumptions that cannot
be directly substantiated. Also, the process by which exposure
assessments are generated should be consistent with Agency
treatment of other pesticides and other substances amenable to
similar treatment, as well as internally consistent for the given
pesticide under review. If there are inconsistencies in the process,
the professional quality of the Agency's supporting documentation
may be called into question and the Agency's case seriously
damaged.
In the judicial review process, the standard of review is
substantial evidence on the record as a whole. Thus, a regulatory
decision will be upheld in judicial review if it is within the regulatory
authority of the Agency, if the Agency took into account all relevant
issues and considered all the evidence, and if the decision is
reasonable based on this evidence. Thus, from the perspective of the
Office of General Counsel, the importance of designing studies with a
view toward whether the data will hold up in court and will convince
the judge cannot be overemphasized. It is essential that the Agency's
exposure assessments be based on relevant data derived from well-
designed studies and that the Agency take a consistent approach to
the generation of such exposure assessments.
Case Study #1
Particulate Sulfates and
the Catalytic Converter
Roger Cortesi
Office of Research and Development, EPA
THE PRESENTATION
In the spring of 1973, the Administrator granted a two-year delay
to auto manufacturers for meeting the statutory emission standards
under the Clean Air Act. At about the same time, Ford Motor
Company notified EPA that it had measured increased sulfuric acid
(H2SO4) emissions from autos fitted with the oxidative catalytic
converter, one technology that had been projected to provide
eventual compliance with the statutory automotive emission
standards.
EPA was sharply divided in opinion as to the significance of the
health threat from H2SO4 emissions. The Air Program Office
considered the problem to be relatively insignificant. The Human
Health Effects group within the Office of Research and Development
(ORD) took a strong stand that a significant health hazard could occur
in two to six years after widespread introduction of the catalytic
converter. This ORD position was based on the assumption that
significant adverse health effects occurred after exposure to 10
µg/m³ of acid sulfate—an exposure effects level derived from the
Community Health Environmental Surveillance Study (CHESS) data,
and from estimates derived from both direct and surrogate (for
example, carbon monoxide carboxyhemoglobin) models. Various EPA
modeling efforts produced a wide range of exposure estimates, some
considerably exceeding 10 µg/m³. In fact, major uncertainties
pervaded all key data elements: emission factors, ambient level
model predictions (which are highly dependent upon assumptions of
worst case meteorology), and levels of exposure that produce adverse
health effects.
In late 1973 and early 1974, a review of the issues involved in
this dispute was undertaken by representatives of the Office of the
Administrator, the Office of Research and Development, and the Air
Program Office. Some of the modeling assumption differences were
resolved and the areas of uncertainty were made explicit. So that
"real world" emission levels could be approximated, a roadside
monitoring program in California was recommended and
implemented for H2SO4 particles. Later, a joint General Motors and
EPA study of H2SO4 roadside emission levels was also completed at
the GM test track.
In January 1975, the Administrator was petitioned for an
additional year's delay in imposing the statutory emission standards.
Faced with strong arguments by some EPA scientists that the
technology for meeting these standards could present a health
hazard greater than the hazard of not meeting the standards, he
granted the one-year delay.
Internal arguments within EPA were intense. The major
technical disputes related to exposure estimates, including modeling
techniques, modeling assumptions, and data interpretation. By late
1975, the California roadside study and GM/EPA test track
monitoring had added substantial data to the discussion. In both
studies,
• Emissions proved to be about a third as high as previous
estimates
• Model estimates of exposure levels fell into the lower end of
the range of previous EPA estimates
• The advent of oxidation/reduction catalysts promised further
emissions reductions.
By early 1976, EPA declared the H2SO4 issue no longer of
immediate concern and announced that the Agency would not set an
H2SO4 emission standard for automobiles, as previously planned.
THE WORK GROUP DISCUSSION
The group made the following general observations on exposure
assessment:
A. Exposure assessments cannot be performed by any single,
uniform method; each case may differ significantly in:
1) Purpose: Cases can be as disparate as single-medium
exposure to sulfates from catalyst-equipped automobiles,
TSCA-type analyses where a large number of new chemicals
potentially present in several media are involved, or analyses of
the relative significance of multi-media exposure pathways (for
example, cadmium, benzene, or lead).
2) Timing: This can range from relatively crude assessments
early in the regulatory analysis, in order to evaluate the problem,
to detailed assessments that quantify exposure for a major
regulatory decision.
3) Relative application of necessary disciplines: That is, what
role is required of meteorology, environmental chemistry,
population dynamics, pharmacokinetics, etc. in a given exposure
assessment?
B. One must recognize that:
1) Exposure assessments will always be "custom-made," and
often the approach used successfully in one case will not be
suitable for other problems. However, major efforts at
consistency in some basic issues are still important.
2) Exposure assessments must be an integral part of a
regulatory analysis. These considerations should be included
early in the process and integrated with other elements of
regulatory analysis, especially the consideration of potential for
adverse health effects.
3) All media and pathways for exposure should be considered
from the outset. The process must identify, and when possible
quantify, both significant and insignificant sources and
pathways. The potential impact of cumulative exposure from all
media/sources also must be specifically addressed.
4) Exposure assessment within EPA should not be
organizationally centralized, although the new ORD Exposure
Assessment Group will play a major role in generating many of
these assessments. There is a need to institutionalize the more
formal, detailed assessments, at least by the mandatory
involvement of all interested offices in the exposure assessment
process.
In discussing the state-of-the-art for exposure assessment
methodology, the group pointed out the following needs and
priorities:
A. Exposure assessments can be done reasonably well on specific,
single-medium problems, when this is the appropriate assessment
approach.
1) The studies must have been carefully designed, clearly
testing well defined hypotheses.
2) All assumptions, methods, and uncertainties must be dealt
with explicitly.
3) Data from both predictive models and empirical studies must
be integrated.
4) Recognition must be given to the different levels of
complexity required for exposure assessment analyses at
different stages of regulatory decision-making.
5) The possibility of significant multi-media exposure must be
explicitly analyzed before a single-medium approach is adopted.
B. Data acquisition to test the applicability of exposure models to a
given case is a high priority need. However, some participants felt this
need might not be met simply by the expenditure of additional
resources.
C. In the formalized exposure assessment process, some
systematization of methodology would be useful (recognizing that the
entire process often must be individualized).
D. Exposure can be assessed in two ways:
1) Predictive or modeled. The greater short-term need for the
Agency is the development of more accurate predictive methods
and the methodology/resources necessary to validate the
modeling estimates.
2) Evaluated/estimated from actual data for chemicals already
in the environment.
The group agreed upon several specific EPA Research and
Development needs in exposure assessment:
A. There are specific areas in which very real short-term Agency
needs exist for optimal exposure assessments to be produced. These
include:
1) Air: diesel emissions;
2) Toxic Substances Control Act: new and existing chemicals;
3) Resource Conservation and Recovery Act: the total dimen-
sion of the need was still unknown; and
4) Water: toxics control strategies will require technically
sound, legally defensible exposure assessments.
B. Methodologic development to enable better exposure assess-
ments in the future.
1) Correlation of actual human exposures with ambient levels
through:
a) Human behavior and mobility analyses
b) Personal monitors/individual dosimetry
2) Verification of modeling predictions. Early in any analysis
involving a substance or substances already in the environment
(in contrast to a new chemical under TSCA), it should be
determined whether or not the models to be used can feasibly
and economically be verified. If they can, this validation effort
should be incorporated into early resource allocation decisions. If
model validation is not feasible, other models that might be
verifiable should be explored. Alternatively, the model should be
used with the explicit caveat that no validation efforts will be
undertaken and an estimate given of the range of uncertainty
introduced by the use of such a model.
3) General environmental transport and fate studies to develop
better approaches (for example, pathways analysis or mass
balance determinations) are required to improve multi-media
assessment capability.
Case Study #2
Chlorobenzilate
David J. Severn
Office of Pesticides Program, OTS
THE PRESENTATION
Pesticides differ from most chemicals of environmental concern
in that they are deliberately distributed into the environment. An
Agency assessment of Chlorobenzilate exposure made in 1978
stands as an example of how the Office of Pesticide Programs (OPP)
used an existing base of exposure data for other pesticides to
estimate the exposures received by workers who apply
Chlorobenzilate.
A considerable bank of data is available concerning pesticide
applicator exposure. However, much of this information is focused on
application techniques rather than on specific pesticides. Like many
other pesticides, Chlorobenzilate is applied by air blast or speed
sprayer equipment; these techniques produce a fine spray that
penetrates the foliar canopy and that also creates potential for
exposure of the applicators.
The Wenatchee Research Station of ORD has developed
techniques to measure both inhalation and dermal exposure during
pesticide operations. To make the measurements, a series of pads
made of alpha-cellulose is attached to the arms, chest, and back of
the applicator. At the conclusion of the exposure period, the center
100 cm2 are cut out of the pads and analyzed. Cartridge respirators
are worn during exposure to measure inhalation; these cartridges are
also analyzed. Alternatively, air concentrations of pesticides may be
measured by air sampling devices, and the concentrations converted
to inhalation exposure values by assuming a standard breathing rate.
The exposure data are tabulated in terms of milligrams per hour of
dermal and inhalation exposure.
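The conversion from a measured air concentration to an hourly inhalation exposure rate, described above, is a simple product of concentration and breathing rate. A minimal sketch; the concentration and breathing rate values are hypothetical, chosen only to illustrate the order of magnitude:

```python
def inhalation_exposure_mg_per_hr(air_conc_mg_per_m3, breathing_rate_m3_per_hr):
    """Convert an ambient air concentration to an hourly inhalation
    exposure rate by assuming a standard breathing rate."""
    return air_conc_mg_per_m3 * breathing_rate_m3_per_hr

# Hypothetical illustration only: 0.08 mg/m3 in air and an assumed
# breathing rate of 1.25 m3/hr give about 0.1 mg/hr.
rate = inhalation_exposure_mg_per_hr(0.08, 1.25)
```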
In various airblast operations, average values for dermal expo-
sure have ranged from about 15 to 50 mg/hr. Inhalation exposure
has been observed to be much lower, averaging about 0.1 mg/hr.
OPP used these values as reasonable estimates of applicator expo-
sure to Chlorobenzilate. In order to convert the dermal values into ac-
tual doses absorbed by the applicators, it was necessary to correct for
the efficiency of penetration and absorption through human skin. No
data were available for chlorobenzilate, but, based on data for a num-
ber of other chemicals, it was concluded that 10 percent may be a
reasonable estimate. The dermal exposure value estimated for
chlorobenzilate was accordingly reduced to 1.5 to 5.0 mg/hr, or 1.6 to
5.1 mg/hr for the combined routes of exposure.
Since the toxic effect of chlorobenzilate for which a risk
assessment was required was carcinogenesis, the exposure had to
be presented as long-term or chronic exposure. OPP relied on data
from USDA and user groups to estimate that there are about 700
people who apply chlorobenzilate to orange groves. Each person
probably works about 20 to 40 days per year applying the pesticide.
The lifetime daily average exposure was then computed by assuming
that the applicators work 8 hours per day, 40 days per year, for 40
years of a 70-year life. With the higher exposure rate, the resulting
lifetime value was 2.57 mg per day. Finally, the Carcinogen
Assessment Group of ORD used this higher exposure estimate with
data on tumor incidence from a chronic mouse feeding study to
predict a tumor incidence of 400 to 1,400 per million exposed people.
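The dose arithmetic described above can be reproduced directly from the figures quoted in the text. This sketch uses the upper-bound exposure values; the small difference from the reported 2.57 mg per day presumably reflects rounding of intermediate figures in the original calculation.

```python
# Reproduce the OPP chlorobenzilate applicator dose estimate
# (upper-bound case) from the figures quoted in the text.
dermal_mg_per_hr = 50.0        # measured dermal exposure, upper bound
inhalation_mg_per_hr = 0.1     # measured inhalation exposure
dermal_absorption = 0.10       # estimated skin penetration/absorption

absorbed_mg_per_hr = dermal_mg_per_hr * dermal_absorption + inhalation_mg_per_hr
# combined routes: 5.1 mg/hr

# Lifetime daily average: 8 hr/day, 40 days/yr, for 40 years of a 70-year life.
lifetime_total_mg = absorbed_mg_per_hr * 8 * 40 * 40
lifetime_daily_avg_mg = lifetime_total_mg / (70 * 365)  # about 2.55 mg/day
```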
THE WORK GROUP DISCUSSION
The discussion revealed that a great deal of information is
needed for pesticide exposure assessment. Exposure is dependent on
actual use practices as well as on the nature of the pesticide. The
group recommended that:
A. A synthesis of known exposure information be undertaken for
home and garden pesticide use (including interior use pesticides), to
be followed by an analysis of exposure assessment data needs. Home
and garden use of pesticides has recently been shown to be
unexpectedly high. In addition, recent surveys have revealed that the
overall national incidence of acute human poisoning is highest in the
home, particularly with young children. The undertaking of this
synthesis should receive the highest priority.
B. There is an urgent need to collect and analyze available data and
to identify data gaps relative to exposure of farm workers who enter
pesticide-treated fields to cultivate or harvest crops. Attempts to
develop simple, effective field indicators of personnel exposures
(dosimeters) should be augmented. A large number of farm workers
are unknowingly exposed to pesticide residues. Recorded incidents,
as well as additional (extrapolated) situations not documented,
represent considerable impetus for corrective action. This recommendation represents the highest priority.
C. Protocols should be developed for industry (applicant) guidance
in their provision of human exposure data. These data are required in
the RPAR process in order to assure reasonable safety in the uses of
pesticides for which application for registration is made. HEW
guidelines for human testing, as applied by EPA, restrict overt
exposure of human subjects to pesticides. Yet, the burden of proof of
the safety of pesticides is on the registrants. The industry, as a group,
is unclear as to what exposure data are expected. Work to be carried
out under this recommendation is of second level priority.
D. Total exposure of all populations should be assessed. Exposure
may occur to workers during manufacture, formulation, application,
and field operations such as cultivation and harvest. Other exposed
populations include those in areas adjacent to treated areas and
those who utilize products that may still contain residues.
Case Study #3
Benzene
Michael Berry
Office of Research and Development, EPA
THE PRESENTATION
Three years ago the Environmental Protection Agency was faced
with legal action to require the listing of benzene under Section 112
of the Clean Air Act (hazardous pollutant emission limit). At that time,
the Air Programs Office, with the National Air Quality Data Bank in its
domain, took the lead role in generating a population exposure
assessment. The Office of Research and Development took major
responsibility for the assessment of health effects data and, through
the Carcinogen Assessment Group, for the formal risk assessment.
The outcome of these efforts was an EPA decision that benzene
should be regulated as a potential human carcinogen, though it was
not yet proven that benzene actually acts as a human carcinogen.
The exposure assessment for benzene was based on a worst
case analysis that took into account the maximum possible number of individuals exposed at the maximum projected levels. It is
very difficult to translate the experience from industrial exposures
that range in the hundreds of parts per million over 40 hours a
week to the final range of 0.1 to 10 parts per billion as the limit for
annual average exposure for the general population. The study gives
no number as a discrete value, nor does it provide estimates of
variance. However, as several workshop participants noted, it is
extremely difficult to state the variance for any number that is in
reality based on a model that already includes inherently large
uncertainties. The "body count" used in the risk assessment was
based on the non-threshold linear dose/response assumption. The
validity of this assumption is still open to scientific debate. However,
the basis for regulatory action should be and was a detailed risk
analysis. One of the major shortcomings of the exposure assessment
in the case of benzene was the lack of a truly broad multi-media base
for the assessment. The possible contribution of non-airborne
sources of benzene was not included in this analysis. For example, it
was noted in the workshop that up to 650 micrograms of benzene can
be inhaled by a smoker who consumes one pack of cigarettes per day;
also, some commercially produced eggs contain up to 100
micrograms of benzene. The need for multi-media exposure
assessments was again emphasized.
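The non-threshold linear dose/response assumption referred to above takes lifetime excess risk to be directly proportional to exposure, with no dose below which the risk is zero. A minimal sketch, using entirely hypothetical potency and exposure values (not the figures used by the Carcinogen Assessment Group):

```python
def linear_excess_cases(unit_risk_per_ppb, exposure_ppb, population):
    """Non-threshold linear model: excess lifetime cases are taken to be
    directly proportional to dose, with no assumed safe threshold."""
    return unit_risk_per_ppb * exposure_ppb * population

# Entirely hypothetical numbers: a unit risk of 1e-5 per ppb of lifetime
# average exposure and a 1 ppb exposure over a population of one million
# yield about 10 excess lifetime cases.
cases = linear_excess_cases(1e-5, 1.0, 1_000_000)
```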
Another area of concern was that, where there were monitoring
data available to compare with the estimates generated by the
models used in the exposure assessment, the data did not agree
closely. The importance of verifying modeled concentration
estimates with real world monitoring data was also emphasized. It is
important to target those areas where monitoring data will be of
immediate help in improving the modeled estimations. When a worst
case analysis is generated by a model, it is important that the basis for
such a worst case be stated quantitatively and explicitly. It is also
useful to state the probability estimates that surround the worst case
circumstances.
THE WORK GROUP DISCUSSION
The group used the experience of the benzene exposure
assessment as a basis on which to assess the state-of-the-art and to
recommend improvements in the existing methodology. Their review
of the benzene document and their collective knowledge about
exposure assessments led them to the following recommendations:
A. Integrated Assessments. The consensus of the group was that
assessments should include all sources and all media. Partial
assessments were deemed inadequate because they might not result
in the adoption of cost-effective and rational control strategies.
B. Fate and Future Exposures. Assessments should account for the
fate of the substance. If "sinks" exist, the assessment should
estimate the future exposures that might result from those
accumulations. If a substance has toxic degradation products,
exposures to the latter should be estimated. Conversely, if a
substance is rapidly degraded to non-toxic by-products, this should
be recognized.
C. Define Uncertainty. In any assessment, gaps will exist with
respect to some part of the analysis—for instance, inadequate data
on releases, transport, or transformation. Any credible assessment
should make explicit the assumptions that have been made to
overcome such gaps. Such openness identifies priority areas for
research and data gathering and also invites attempts at improving
the methodology and the substantive basis for making decisions.
D. Actual Field Measurement. Every assessment should be
supported by field measurement whenever possible. When field
measurements are not possible, laboratory data from various test
procedures designed to provide insights into environmental behavior
and fate should be used.
E. Single Effects and Exposure Document. There was a strong
feeling among the group members that exposure and effects
documents produced thus far display a lack of coherence. For
example, effects documents often ignore the exposure data. The
exposure documents may conflict with or duplicate material in
the effects document. At a minimum, it was urged that preparation of
the documents be more closely coordinated. At least one member of
the group urged that the effects and exposure estimates be combined
into one document, thereby forcing coherence.
F. Consistent Methods. One member of the group felt strongly that
the exposure assessment process could be improved by development
of guidance or, in some cases, even the standardization of methods
for carrying out exposure estimates. However, because of our present
lack of information on environmental quality and spatial activities and
behavior of populations, it would seem prudent to evaluate the
desirability of developing guidelines for standardizing assessment
methods.
Case Study #4
Bladder Cancer Epidemiology Study
Kenneth Cantor
Office of Research and Development, EPA
(on detail to National Cancer Institute)
THE PRESENTATION
The impetus for establishing the Bladder Cancer Epidemiology
Study was a recent surge of interest in the relationship between
saccharin ingestion and bladder cancer. However, the general
problem of potential organic carcinogens in the Nation's water
supplies has been debated for a much longer time.
It is known that chlorinated organic substances in water supplies
come not only from the water source itself, but also from substances
produced by the water treatment process. Ten years ago, the
introduction of gas chromatography to examine drinking water
supplies revealed multiple peaks representing the presence of
numerous organic compounds in many water supplies. A supply
survey in 1973 showed that there were 81 identifiable compounds in
the water supply for New Orleans. The Environmental Defense Fund
helped support a study that showed higher rates of cancer in
Louisiana parishes where the water supply came from the
Mississippi River than in parishes using other water sources. In the
Miami, Florida, water supply (obtained from an area of the Everglades
which has relatively high natural organic content, and then
chlorinated) chloroform levels are 350 times higher than in deep well
water sources in the same areas. There have been a number of
studies correlating increased rates of cancer mortality in counties
whose water supply is from a surface or shallow source. High
trihalomethane levels in particular have been linked to elevated
cancer mortality, especially bladder cancer. These suggestive
correlations have not clearly been established as causal. There is
debate within EPA about whether regulation of such substances can
be based on suggestive correlations or whether more definitive
evidence is necessary.
The review of saccharin as a potential carcinogen by the FDA in
1972 did not trigger application of the Delaney clause at that time.
However, in 1977, a Canadian animal study showed an increased bladder
cancer rate in male rats that ingested saccharin. In that study, one
group of rodents was exposed to pure saccharin, a first control group
was exposed to the most common contaminant of commercial saccharin,
and a second control group was unexposed.
cancer rate was not seen in the contaminant control group. On the
basis of this evidence, the FDA started proceedings to decide whether
saccharin should be removed from the market in the United States.
While this was occurring, results became available from a Canadian
epidemiology case control study which showed a statistically
significant positive correlation between bladder cancer and
saccharin ingestion in males, but no such correlation in females.
There were also negative studies reported at this time, but these
studies used hospitalized individuals with hospitalized controls,
rather than a random sample of the general population.
Because of the debate about saccharin in the U.S., the Congress
mandated that the National Cancer Institute and other concerned
agencies organize a broad nationwide epidemiological study on the
relationship of saccharin and bladder cancer. It was desirable and
necessary to study as many of the interactive variables as possible,
obtaining a history not only of saccharin ingestion, but also of
industrial exposure to bladder carcinogens, smoking (both non-
filtered and filtered cigarettes), water supply, and coffee drinking.
Nine regional surveillance areas plus the state of New Jersey were
chosen. (In each of these areas, there was a population-based tumor
registry to maximize total case reporting. With a rapid ascertainment
procedure, the lag time between diagnosis of bladder cancer and
notification of the tumor registry is less than six months. Less than
four percent of the patients died between their original diagnosis and
the time the epidemiology survey team attempted to contact them.)
The ten epidemiology survey areas represent 12 to 15 percent of the
U.S. population. These groups are not totally random, because
minority population groups are slightly over-represented. It is
estimated that 4,000 of the 30,000 expected cases per year of bladder
cancer in this country will be included in the study. Controls for age,
race, and sex-specific case groups are randomly selected from a
stratified population sample. Control selection takes account of the
demographic characteristics of bladder cancer patients; patients
have a median age of 67 years, and the disease occurs three times as
often in males as in females. The control series subjects have similar
characteristics.
A personalized questionnaire is used that includes a complete
residential history of water supply. To support this, a separate survey
of historical municipal water supply characteristics in the area under
study has been undertaken. Items included in this survey include
source of water (surface versus ground), the history of chlorination
levels, and the possibility of any upstream industrial or municipal
contamination for those areas with surface supplies. Water samples
from the ten areas in the study are also being analyzed. If there has
been no change in the treatment methodology or water supply
source, present samples will be used as a measure of long-term
water quality/organic pollutant content.
The interviewers, who have not been informed that the
questionnaire is related to a bladder cancer study, also ask
patients/controls to quantitate their use of saccharin, including
artificial sweeteners, diet drinks, and other uses. It is known that
approximately 10 to 20 percent of the population used saccharin to
some extent 20 years ago. A worst case estimate is that there may be
as many as 15,000 people who develop bladder cancer each year and report some past saccharin use.
To date, there has been an 80 percent response rate for both
cases and controls. In the case series, there is approximately a 4 per-
cent fatality rate prior to interview. Some patients move out of the
study area and a few patients have refused to participate. In some
study areas, physicians have not permitted study personnel to
contact their patients. More than 90 percent of the controls contacted
have been willing to participate. The cost of the study has been $1.4
million over 18 months for the more than 10,000 respondents; this
figure does not include the cost of the water testing. The study will
include all cases diagnosed in the ten areas in calendar year 1978.
Mid-1979 was the target date for a preliminary report of
study findings with regard to saccharin. Such epidemiology studies
require the best possible assessment of individual exposure. The
retrospective construction of exposure profiles is the best one can do
in this type of epidemiology study. Additional useful information
could be gathered by studying epidemiology data from workers in the
saccharin production industry in its early years, when it was a
particularly dusty process. These individuals constitute a relatively
easily defined group of high level exposure individuals.
Animal studies show saccharin by itself to be a relatively weak
carcinogen. However, if an animal is given a short exposure to a low
concentration of a known bladder carcinogen and then is given a high
dose of saccharin, there is a much higher incidence of bladder cancer.
This co-carcinogenicity potential makes a multi-factor epidemiology
approach mandatory if any useful data are to be generated.
THE WORK GROUP DISCUSSION
The discussions of the workshop centered on two broad areas: (1)
the generation of a systematic list of generic issues to be confronted
in organizing an adequate exposure assessment, and (2) some
general recommendations for further research efforts within EPA in
the area of exposure assessment. First, the group strongly felt that in
the past there has been a lack of consistent organization in the
generation of exposure assessments within the Office of Research
and Development in particular and throughout the Agency in general.
It was suggested that there might be some benefit from applying a
systematic list of questions that must be asked in conducting any
exposure assessment. The workshop group decided that some efforts
should be expended in developing a general outline or approach that
all exposure assessments could utilize. They proposed a partial list of
such generic issues, including:
A. Where is the pollutant? What is known about the dispersion of
the pollutant from its known sources? Are there any available models
that can be used to predict pollutant dispersal? Can such models and
predictions be feasibly validated by ambient monitoring in the real-
world setting?
B. Where are the people (that is, the population at risk)?
1. Additional information is necessary concerning the distribution of the population in space (in terms of population distribution
around known sources of pollution) and in time.
2. Sampling studies or survey interview studies should be used
to gather data on personal behavior patterns as they relate to
exposure assessment.
C. What are the routes of exposure?
1. It is necessary to define all of the potential sources of
exposure for the population at risk.
2. Methodologic problems associated with measuring the
"insult" (exposure) must be addressed. This includes the area of
personal dosimetry technology.
3. Data are also necessary concerning the distribution frequency in both time and space of the exposure to the population at
risk.
D. In defining the interactive role between health effects assess-
ment and exposure assessment, two aspects should be considered:
1. What are the sources of information available? That is, are
there human data from either occupational or accidental
exposure? Are there primate, other mammalian, or other in vivo
animal exposure data available? Are there in vitro laboratory
screening data available? The relative importance assigned to
these various sources of information is controversial, and some
overall coordinating guidelines would be useful.
2. Which health effects should be looked for specifically? That
is, should all potential adverse health effects be screened for, or
can we predict for a given category of pollutants which poten-
tially adverse health effects should be specifically evaluated?
E. Non-human ecological impact of the pollutant must also be
formally considered in the decision process.
In the area of high priority items for further ORD resource
allocation, the work group recommended that the following efforts
should receive high priority:
A. Research in the area of validation of both exposure and effect
models. This was felt to be one area where there was the possibility
of relatively high potential information gain for limited resource
expenditure.
B. Are there any indicator variables ("flags") for a given exposure of
a population to a specific pollutant or class of pollutants that can
supply generic information with regard to the population at risk?
C. Measurement methodology research:
Anticipatory research, which might assist in developing exposure
measurement methodology for substances with the potential to cause
health effects in the exposed population many years in the future.
Such research might help avoid a
situation similar to asbestos exposure, where even after the
adverse effect was clearly documented, uncertainty on measure-
ment methodology and on the characteristics of dangerous
fiber types (size, composition, etc.) hampered the regulatory
decision process.
D. Long-range development and validation of the "risk budget"
concept for exposure assessment. ORD should develop and evaluate
a system for assigning a risk budget (upper limit of risk) for exposure
to any environmental hazard (for example, individual pollutant,
aggregate health risk). This concept would dictate management/
monitoring of population exposure to the environmen-
tal hazard under consideration until the limit specified by
the risk budget is reached. At that point, before any additional
exposure/risk could be permitted, it would have to be balanced by a
concomitant reduction in exposure/risk within this defined area.
Such an approach would require significant resources for its
development. Evaluation of the efficacy of such an approach, and a
full discussion of the social/political implications are, of course,
necessary before it could be implemented even on a trial basis.
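The bookkeeping the risk budget concept would require can be sketched in a few lines. The class, its method names, and the numeric limit below are hypothetical illustrations only, not an Agency procedure:

```python
# A minimal sketch of "risk budget" bookkeeping: additional exposure is
# permitted until the budget limit is reached; beyond that point, a new
# risk must be balanced by a concomitant reduction (an offset).
class RiskBudget:
    def __init__(self, limit):
        self.limit = limit   # upper limit of acceptable aggregate risk
        self.total = 0.0     # risk already allocated to exposures

    def can_add(self, risk):
        """True while the budget limit would not be exceeded."""
        return self.total + risk <= self.limit

    def add(self, risk, offset=0.0):
        """Record a new exposure's risk, net of any offsetting reduction."""
        net = risk - offset
        if self.total + net > self.limit:
            raise ValueError("risk budget exceeded; an offsetting reduction is required")
        self.total += net

# Hypothetical numbers for illustration only.
budget = RiskBudget(limit=1.0)
budget.add(0.7)              # permitted: within budget
budget.add(0.5, offset=0.3)  # permitted only because of the offset
```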
Case Study #5
Lead
Lester Grant
Office of Research and Development, EPA
THE PRESENTATION
In September 1978, a National Ambient Air Quality Standard for
lead was promulgated by EPA. The standard selected, 1.5 ug per
cubic meter (1.5 ug Pb/m3) averaged over a calendar quarter, was
based mainly on health effects/exposure information evaluated in
Air Quality Criteria for Lead, published by ORD in December 1977.
The standard is designed to protect 99.5% of the most sensitive pop-
ulation, which is children.
Setting of the final standard involved difficult policy decisions
that centered on resolving uncertainties about some aspects of the
basic information and the provision of an appropriate margin of safety
within the standard.
Partly, the difficulty in decision-making was related to several
unique properties of lead as an air pollutant. First, a significant
fraction of population exposure to lead occurs from sources other
than respiration of airborne lead particles. Second, there is
substantial variability in the blood lead levels of individuals who have
had similar lead exposures. Third, there is considerable scientific
debate over the public health significance of lead-induced biological
changes that occur before marked symptoms of disease are detected.
The sequence used in developing the standards was first to
derive estimates of internal dose (blood lead levels) associated with
the induction of various health effects, and to define or select the
pivotal adverse effect(s) encountered as a function of increasing
internal exposure. The most susceptible population at risk was then
defined, and efforts were made to estimate the proportion of the
population that would be at risk for particular health effects, given a
mean level of integrated external exposure to all sources of lead.
Contributions of non-air lead sources to an "acceptable" mean level
of internal exposure were then subtracted out, leaving an allowable
amount of exposure due to air alone. This was then translated in
terms of equivalent allowable external exposure, or, in other words,
the ambient air level(s) chosen as the standard, including a margin of
safety.
In the development of the standard, 30 micrograms of lead per
deciliter was defined as the maximum safe level for any individual
child. Statistically, for protection of 99.5% of the target population,
this translates into a geometric mean population blood level of 15
micrograms Pb/dl. Of this latter level, only 3 ug Pb/dl are attributable
to air exposure. Given that in children a 1 ug Pb/m3 change in air
results in a change of blood lead of 2 ug Pb/dl, and given the policy
decision to establish the population blood level at 12 ug Pb/dl in order
to provide an adequate safety margin, the calculated standard is 1.5
ug Pb/m3.
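The final step of this derivation can be reproduced directly from the figures given above: the target mean blood level, minus the non-air contribution, leaves the allowable air contribution, which the blood-to-air slope converts into the ambient standard.

```python
# Reproduce the lead standard arithmetic from the figures in the text.
target_blood_ug_dl = 15.0    # geometric mean protecting 99.5% of children
non_air_blood_ug_dl = 12.0   # blood lead attributed to non-air sources
slope_ug_dl_per_ug_m3 = 2.0  # blood lead change per 1 ug/m3 air lead (children)

air_allowance_ug_dl = target_blood_ug_dl - non_air_blood_ug_dl  # 3 ug/dl
standard_ug_m3 = air_allowance_ug_dl / slope_ug_dl_per_ug_m3    # 1.5 ug/m3
```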
THE WORK GROUP DISCUSSION
The work group on the lead case study initially reviewed the
overall meaning of exposure assessment in the context of the
requirements of regulatory decision-making within the
Environmental Protection Agency. It was generally agreed that the
five individual case studies presented at the symposium highlighted
individual aspects of the exposure assessment problems, but that no
one case study by itself represented a "total" exposure assessment.
The group further felt that the Agency, at this time, possesses neither
the methodology nor the institutional mechanisms to implement
successfully a truly "total" exposure assessment. At the present
time, there are available methodologies and institutional
mechanisms for separately defining the various aspects of a total
exposure assessment. For example, water-borne exposure is
evaluated separately from air-borne exposure, etc. While each of
these analyses is an essential part of the total, individually they do
not, per se, represent an adequate "total exposure" assessment. The
major recommendation of the work group was that the Agency
should use whatever administrative/legislative/resource-allocation
means necessary to ensure the coordination required for the
production of effective total exposure assessments. It was
additionally felt that coordination of exposure assessment efforts within EPA is at present not as effective as it could be.
The establishment of an Exposure Assessment Group within the
Office of Health and Environmental Assessment (EPA-ORD) provides
an opportunity for such coordinative efforts in the future.
It was further concluded that EPA was the proper and logical
focal point within the government for the performance of total
exposure assessments. The necessary coordinative effort does not
require any additional legislative authority, but does require
continued effort in developing close working relationships with other
federal agencies interested in exposure assessments (for example,
the Intergovernmental Regulatory Liaison Group). A major advantage
of adequate total exposure assessments in the regulatory decision-making process is that such assessments can help to define more
clearly where the most environmental gain (that is, reduction in an
exposure) can be obtained for a given resource allocation. Thus, a
combination of strategies could be evaluated and the control
approach most likely to be cost-effective could be selected.
These options could range from strict control of a pollutant in one
medium to partial control in several different media, any one of
which could hold total exposure to an acceptable level.
It was also emphasized that the long-term effects, as well as the
short-term effects, of any control strategy must be evaluated. For
example, in the case of lead exposure, if one found that elimination of
metal and plastic food containers could reduce the general exposure
to acceptable levels, would one then choose as a major emphasis the
modification of containers or the control of air-borne lead? Obviously,
in many cases, the most attractive option in the short-term might not
necessarily be the best long-term solution.
Within the concept of total exposure assessment, it was felt that
EPA should develop and evaluate an agent-specific approach in
addition to the traditional media approach. An additional area of
concern was the approach to control of pollution in uninhabited or
sparsely inhabited areas. For example, should the emission
requirements for a lead smelter in a relatively uninhabited area
differ from those for one in a more urban area? Restating the issue,
should the emphasis on pollution control be on environmental
contamination or on human health effects? The group felt that both
should be considered, though health should receive more weight in
the final decision process. However, it was emphasized that
environmental contamination must be considered because of its
potential widespread impact on the ecosystem and its possible
contribution to future human exposure.
There was also some discussion that questioned whether the
time pressure generated by various legislative and court decisions
was counter-productive to optimal operation of the EPA decision-
making system. It was felt that in some cases, attempts to alleviate
the time pressures, especially in the face of an inadequate data base,
could be useful. However, in some instances a deadline has
generated a more timely Agency response, and thus EPA should not
completely disregard time pressures.
There were three further general recommendations from the
lead case study work group:
A. It was felt that a major effort was required in anticipatory
planning, in an attempt to avoid the recurrent "fire-fighting"
approach to environmental problems. It was recognized that no
matter how effective the Agency becomes in the prediction of future
informational needs, it is very likely that it will still need an effective
mechanism to deal with unanticipated crises.
B. A number of the participants felt that, as a corollary to the need
for a stronger anticipatory effort, the Agency should strengthen its
ties with the general scientific community by use of grants and/or
other mechanisms to support appropriate research. It was felt that
the workshop format, with input both from the regulatory
governmental community and the academic and general scientific
community, could also be a useful tool in improving the
communications between the scientific community and EPA.
C. In the final recommendation of the work group, it was noted that
the air-borne lead standard was based explicitly on a decision to
protect 99.5% of the population. The group felt strongly that this was
a welcome, though unusual, departure from previous regulatory
decision processes. It was recommended that for all future exposure
assessments, data should be provided to permit proposed standards
or revisions of existing standards to provide an explicit statement of
the fraction of the population to be protected by the proposed
standard. The fraction to be protected, a non-technical decision, will
vary as a function of several factors, including the severity of the
effect, the size of the population at risk, extent of exposure, and the
type of control available. It was also noted that one could make
additional statements about the percentage of the population
protected by the lead standard: setting the standard at level X
results in protecting 99.5% of the population from adverse effect A,
99.9% against adverse effect B, and perhaps 100% against
adverse effect C.