60OZ92001A
GUIDELINES FOR EXPOSURE ASSESSMENT
AGENCY: U.S. Environmental Protection Agency
ACTION: Final Guidelines for Exposure Assessment
SUMMARY: The U.S. Environmental Protection Agency (EPA) is today issuing
final guidelines for exposure assessment. The Guidelines for Exposure
Assessment (hereafter "Guidelines") are intended for risk assessors in EPA, and
those exposure and risk assessment consultants, contractors, or other persons who
perform work under Agency contract or sponsorship. In addition, publication of
these Guidelines makes information on the principles, concepts, and methods
used by the Agency available to all interested members of the public. These
Guidelines supersede and replace both the Guidelines for Estimating Exposures
published September 24, 1986 (51 FR 34042-34054) (hereafter "1986 Guidelines")
and the Proposed Guidelines for Exposure-Related Measurements published for
comment on December 2, 1988 (53 FR 48830-48853) (hereafter "1988 Proposed
Guidelines"). In response to recommendations from the Science Advisory Board
(SAB) and the public, the 1986 Guidelines were updated and combined with the
1988 Proposed Guidelines and retitled as the current Guidelines for Exposure
Assessment.
These Guidelines establish a broad framework for Agency exposure
assessments by describing the general concepts of exposure assessment including
definitions and associated units, and by providing guidance on the planning and
conducting of an exposure assessment. Guidance is also provided on presenting
the results of the exposure assessment and characterizing uncertainty. Although
these Guidelines focus on exposures of humans to chemical substances, much of
the guidance contained herein also pertains to assessing wildlife exposure to
chemicals, or to human exposures to biological, noise, or radiological agents.
Since these latter four areas present unique challenges, assessments on these
topics must consider additional factors beyond the scope of these Guidelines.
The Agency may, at a future date, issue additional specific guidelines in these
areas.
EFFECTIVE DATE: The Guidelines will be effective [insert date of publication
in the FEDERAL REGISTER]
FOR FURTHER INFORMATION CONTACT: Michael A. Callahan,
Director, Exposure Assessment Group,
Office of Health and Environmental Assessment (RD-689),
U.S. Environmental Protection Agency,
401 M Street, S.W.,
Washington, DC 20460, 202-260-8909.
SUPPLEMENTARY INFORMATION
In its 1983 book Risk Assessment in the Federal Government: Managing
the Process, the National Academy of Sciences recommended that Federal
regulatory agencies establish "inference guidelines" to promote consistency and
technical quality in risk assessment, and to ensure that the risk assessment process
is maintained as a scientific effort separate from risk management. A task force
within EPA accepted that recommendation and requested that Agency scientists
begin to develop such guidelines.
In 1984, EPA scientists began work on risk assessment guidelines for
carcinogenicity, mutagenicity, suspect developmental toxicants, chemical mixtures,
and estimating exposures. Following extensive scientific and public review, these
guidelines were issued on September 24, 1986 (51 FR 33992-34054). Subsequent
work resulted in the publishing of four additional proposals (one of which has
recently become final): Proposed Guidelines for Assessing Female Reproductive
Risk (53 FR 24834-24847), Proposed Guidelines for Assessing Male Reproductive
Risk (53 FR 24850-24869), Proposed Guidelines for Exposure-Related
Measurements (53 FR 48830-48853), and Proposed Amendments to the
Guidelines for the Health Assessment of Suspect Developmental Toxicants (54
FR 9386-9403). The final Guidelines for Developmental Toxicity Risk
Assessment, published December 5, 1991 [56 FR 63798-63826], supersede and
replace the proposed amendments.
The Guidelines issued today continue the guidelines development process
initiated in 1984. Like the guidelines issued in 1986, the Guidelines issued today
set forth principles and procedures to guide EPA scientists in the conduct of
Agency risk assessments and to inform Agency decision makers and the public
about these procedures. In particular, the Guidelines standardize terminology
used by the Agency in exposure assessment and in many areas outline the limits
of sound scientific practice. They emphasize that exposure assessments done as
part of a risk assessment need to consider the hazard identification and dose-
response parts of the risk assessment in the planning stages of the exposure
assessment so that these three parts can be smoothly integrated into the risk
characterization. The Guidelines discuss and reference a number of approaches
and tools for exposure assessment, along with discussion of their appropriate use.
The Guidelines also stress that exposure estimates along with supporting
information will be fully presented in Agency risk assessment documents, and that
Agency scientists will identify the strengths and weaknesses of each assessment by
describing uncertainties, assumptions, and limitations, as well as the scientific
basis and rationale for each assessment.
Work on these Guidelines began soon after publication of the 1986
Guidelines. At that time, the SAB recommended that the Agency develop
supplementary guidelines for conducting exposure studies. This supplementary
guidance was developed by an Agency work group composed of scientists from
throughout the Agency, a draft was peer reviewed by experienced professionals
from environmental groups, industry, academia, and other governmental agencies,
and proposed for comment on December 2, 1988 (as Proposed Guidelines for
Exposure-Related Measurements). In the public notice, the Agency asked for
comment on whether the proposed guidelines should be combined with the 1986
guidelines in order to have a single Agency guideline for exposure assessment.
Comments from the public and the SAB were heavily in favor of combining the
two guidelines.
Since proposal, the Agency has reformatted the 1988 Proposed Guidelines
to allow incorporation of the information in the 1986 Guidelines, and
incorporated revisions resulting from additional public and SAB comments, to
establish the current Guidelines. The current Guidelines were reviewed by the
Risk Assessment Forum and the Risk Assessment Council, subjected to an
external peer review, and presented to the SAB on September 12, 1991 for final
comment (EPA-SAB-IAQC-92-015). In addition, the Guidelines were reviewed
by the Working Party on Exposure Assessment, an interagency working group
under the Subcommittee on Risk Assessment of the Federal Coordinating
Committee on Science, Engineering and Technology. Comments of these groups
have been considered in the revision of these Guidelines. The full text of the
final Guidelines for Exposure Assessment is published here.
These Guidelines were developed as part of an interoffice guidelines
development program under the auspices of the Risk Assessment Forum and the
Office of Health and Environmental Assessment in the Agency's Office of
Research and Development. The Agency is continuing to study risk assessment
issues raised in these Guidelines, and will revise them in line with new
information as appropriate.
Following this preamble are two parts: Part A is the Guidelines and Part B is
the Response to the Public and Science Advisory Board comments submitted in
response to the 1988 Proposed Guidelines.
References, supporting documents, and comments received on the 1988
Proposed Guidelines, as well as a copy of these final Guidelines for Exposure
Assessment are available for inspection at the ORD Public Information Shelf, EPA
Headquarters Library (202-260-5926), 401 M Street, S.W., Washington, DC,
between the hours of 8:00 a.m. and 4:30 p.m.
TABLE OF CONTENTS
1. INTRODUCTION 13
   1.1. Intended Audience 13
   1.2. Purpose and Scope of the Guidelines 14
   1.3. Organization of the Guidelines 15
2. GENERAL CONCEPTS IN EXPOSURE ASSESSMENT 16
   2.1. Concepts of Exposure, Intake, Uptake, and Dose 18
        2.1.1. Exposure 19
        2.1.2. Applied Dose and Potential Dose 20
        2.1.3. Internal Dose 22
        2.1.4. Exposure and Dose Relationships 23
               2.1.4.1. Calculating Potential Dose for Intake Processes 25
               2.1.4.2. Calculating Internal Dose for Uptake Processes 30
               2.1.4.3. Calculating Internal Dose for Intake Processes 35
        2.1.5. Summary of Exposure and Dose Terms With Example Units 36
   2.2. Approaches to Quantification of Exposure 39
        2.2.1. Measurement of Exposure at the Point-of-Contact 40
        2.2.2. Estimates of Exposure from Scenario Evaluation 41
        2.2.3. Exposure Estimation by Reconstruction of Internal Dose 43
   2.3. Relationships of Exposure and Dose to Risk 44
        2.3.1. Individual Risk 45
        2.3.2. Population Risk 49
        2.3.3. Risk Descriptors 52
3. PLANNING AN EXPOSURE ASSESSMENT 54
   3.1. Purpose of the Exposure Assessment 55
        3.1.1. Using Exposure Assessments in Risk Assessment 55
        3.1.2. Using Exposure Assessments for Status and Trends 58
        3.1.3. Using Exposure Assessments in Epidemiologic Studies 58
   3.2. Scope of the Assessment 59
   3.3. Level of Detail of the Assessment 60
   3.4. Determining the Approach for the Exposure Assessment 60
   3.5. Establishing the Exposure Assessment Plan 62
        3.5.1. Planning an Exposure Assessment as Part of a Risk Assessment 62
        3.5.2. Establishing the Sampling Strategy 64
               3.5.2.1. Data Quality Objectives 65
               3.5.2.2. Sampling Plan 66
               3.5.2.3. Quality Assurance Samples 69
               3.5.2.4. Background Level 70
               3.5.2.5. Quality Assurance and Quality Control 70
               3.5.2.6. Quality Assurance and Quality Control for Previously Generated Data 71
               3.5.2.7. Selection and Validation of Analytical Methods 71
        3.5.3. Establishing the Modeling Strategy 72
               3.5.3.1. Setting the Modeling Study Objectives 72
               3.5.3.2. Characterization and Model Selection 72
               3.5.3.3. Obtaining and Installing the Computer Code 74
               3.5.3.4. Calibrating and Running the Model 75
               3.5.3.5. Model Validation 75
        3.5.4. Planning an Exposure Assessment to Assess Past Exposures 76
4. GATHERING AND DEVELOPING DATA FOR EXPOSURE ASSESSMENTS 78
   4.1. Measurement Data for Point-of-Contact Assessments 78
   4.2. Obtaining Chemical Concentration Information 79
        4.2.1. Concentration Measurements in Environmental Media 85
        4.2.2. Use of Models for Concentration Estimation 87
        4.2.3. Selection of Models for Environmental Concentrations 88
   4.3. Estimating Duration of Contact 89
        4.3.1. Observation and Survey Data 90
        4.3.2. Developing Other Estimates of Duration of Contact 92
   4.4. Obtaining Data on Body Burden or Biomarkers 93
   4.5. Obtaining Data for Pharmacokinetic Relationships 94
   4.6. Obtaining Data on Intake and Uptake 95
5. USING DATA TO DETERMINE OR ESTIMATE EXPOSURE AND DOSE 96
   5.1. Use of Data in Making Inferences for Exposure Assessments 96
        5.1.1. Relevance of Data for the Intended Exposure Assessment 97
        5.1.2. Adequacy of Data for the Intended Assessment 98
               5.1.2.1. Evaluation of Analytical Methods 99
               5.1.2.2. Evaluation of Analytical Data Reports 99
                        5.1.2.2.1. Evaluation of Censored Data Sets 100
                        5.1.2.2.2. Blanks and Recovery 103
        5.1.3. Combining Measurement Data Sets from Various Studies 104
        5.1.4. Combining Measurement Data and Modeling Results 104
   5.2. Dealing with Data Gaps 105
   5.3. Calculating Exposure and Dose 107
        5.3.1. Short-Term Versus Long-Term Data for Population Exposures 107
        5.3.2. Using Point-of-Contact Data to Calculate Exposure and Dose 108
        5.3.3. The Role of Exposure Scenarios in Exposure Assessment 109
               5.3.3.1. Scenarios as a Means to Quantify Exposure and Dose 111
               5.3.3.2. Exposure Scenarios and Exposure Estimators as Input to Risk Descriptors 113
               5.3.3.3. Exposure Scenarios as a Tool for Option Evaluation 113
        5.3.4. General Methods for Estimating Exposure and Dose 114
               5.3.4.1. Preliminary Evaluation and Bounding Estimates 115
               5.3.4.2. Refining the Estimates of Exposure and Dose 117
        5.3.5. Using Estimates for Developing Descriptors 119
               5.3.5.1. Individual Exposure, Dose, and Risk 119
               5.3.5.2. Population Exposure, Dose, and Risk 129
6. ASSESSING UNCERTAINTY 134
   6.1. Role of Uncertainty Analysis in Exposure Assessment 134
   6.2. Types of Uncertainty 136
        6.2.1. Scenario Uncertainty 137
        6.2.2. Parameter Uncertainty 138
        6.2.3. Model Uncertainty 142
   6.3. Variability Within a Population Versus Uncertainty in the Estimate 144
7. PRESENTING THE RESULTS OF THE EXPOSURE ASSESSMENT 146
   7.1. Communicating the Results of the Assessment 146
        7.1.1. Exposure Characterization 146
        7.1.2. Risk Characterization 148
               7.1.2.1. Integration of Hazard Identification, Dose-Response, and Exposure Assessments 149
               7.1.2.2. Quality of the Assessment and Degree of Confidence 150
               7.1.2.3. Descriptors of Risk 151
               7.1.2.4. Communicating Results of a Risk Assessment to the Risk Manager 151
        7.1.3. Establishing the Communication Strategy 152
   7.2. Format for Exposure Assessment Reports 153
   7.3. Reviewing Exposure Assessments 153
8. GLOSSARY OF TERMS 157
9. REFERENCES 166
FIGURES
2-1. Schematic of dose and exposure 26
5-1. Schematic of exposure estimators for unbounded simulated population
distributions 125
TABLES
2-1. Explanation of exposure and dose terms 37
4-1. Examples of types of measurements to characterize exposure-related media
and parameters 81
ABBREVIATIONS AND ACRONYMS
ADD       Average daily dose
AF        Absorption fraction
AT        Averaging time
BW        Body weight
C         Exposure concentration
C(t)      Exposure concentration as a function of time
CO        Carbon monoxide
CT        Contact time
D         Dose
Dapp      Applied dose
Dint      Internal dose
Dpot      Potential dose
DQO       Data quality objective
E         Exposure
ED        Exposure duration
EPA       U.S. Environmental Protection Agency
Fadh      Adherence factor for soil
f(t)      Absorption function
IR        Intake rate (also ingestion or inhalation rate)
J         Flux
Kp        Permeability coefficient
LADD      Lifetime average daily dose
LOAEL     Lowest observable adverse effect level
LOD       Limit of detection
LT        Lifetime
Mmedium   Amount (mass) of carrier medium material applied to the skin
MDL       Method detection limit
MEI       Maximum exposed individual or maximally exposed individual
ND        Not detected
PMN       Premanufacture notice
QA        Quality assurance
QAPjP     Quality assurance project plan
QC        Quality control
QL        Quantification limit
RfC       Reference concentration
RfD       Reference dose
SA        Surface area
SAB       Science Advisory Board
TEAM      Total exposure assessment methodology
TUBE      Theoretical upper bounding estimate
UCL       Upper confidence limit (often used to refer to the upper confidence limit of the mean)
UR        Uptake rate
PART A: GUIDELINES FOR EXPOSURE ASSESSMENT
1. INTRODUCTION
In 1984, the U. S. Environmental Protection Agency (EPA) initiated a
program to ensure scientific quality and technical consistency of Agency risk
assessments. One of the goals of the program was to develop risk assessment
guidelines that would be used Agencywide. The guidelines development process
includes a public review and comment period for all proposed guidelines as well
as Agency Science Advisory Board review. Following the review process, the
guidelines are revised if needed and then issued as final guidelines. The
Guidelines for Estimating Exposures (hereafter "1986 Guidelines") were one of
five guidelines issued as final in 1986 (U.S. EPA, 1986a). In 1988, the Proposed
Guidelines for Exposure-Related Measurements (hereafter "1988 Proposed
Guidelines") were published in the Federal Register for public review and
comment (U.S. EPA, 1988a). The 1988 Proposed Guidelines were intended to be
a companion and supplement to the 1986 Guidelines.
When proposing the 1988 guidelines, the Agency asked both the EPA
Science Advisory Board (SAB) and the public for comments on combining the
1986 and 1988 exposure guidelines into a larger, more comprehensive guideline;
the majority of comments received were in favor of doing so. Thus, these 1992
Guidelines For Exposure Assessment (hereafter "Guidelines") combine, reformat,
and substantially update the earlier guidelines. These guidelines make use of
developments in the exposure assessment field since 1988, both revising the
previous work and adding several topics not covered in the 1986 or 1988
guidelines. Therefore, the 1992 guidelines are being issued by the Agency as a
replacement for both the 1986 Guidelines and the 1988 Proposed Guidelines.
1.1. Intended Audience
This document is intended for exposure and risk assessors in the Agency
and those exposure and risk assessment consultants, contractors, or other persons
who perform work under Agency contract or sponsorship. Risk managers in the
Agency may also benefit from this document since it clarifies the terminology and
methods used by assessors, which in some cases could strengthen the basis for
decisions. In addition, publication of these guidelines makes information on the
principles, concepts, and methods used by the Agency available to other agencies,
States, industry, academia, and all interested members of the public.
1.2. Purpose and Scope of the Guidelines
There are a number of different purposes for exposure assessments,
including their use in risk assessments, status and trends analysis, and
epidemiology. These Guidelines are intended to convey the general principles of
exposure assessment, not to serve as a detailed instructional guide. The technical
documents cited here provide more specific information for individual exposure
assessment situations. As the Agency performs more exposure assessments and
incorporates new approaches, these Guidelines will be revised.
Agency risk assessors should use these Guidelines in conjunction with
published guidelines for assessing health effects such as cancer (U.S. EPA,
1986b), developmental toxicity (U.S. EPA, 1991a), mutagenic effects (U.S. EPA,
1986c), and reproductive effects (U.S. EPA, 1988b; U.S. EPA, 1988c). These
exposure assessment guidelines focus on human exposure to chemical substances.
Much of the guidance contained herein also applies to wildlife exposure to
chemicals, or human exposure to biological, physical (i.e., noise), or radiological
agents. Since these areas present unique challenges, however, assessments on
these topics must consider additional factors beyond the scope of these
Guidelines.
For example, ecological exposure and risk assessment may deal with many
species which are interconnected via complex food webs, while these guidelines
deal with one species, humans. While these guidelines discuss human exposure on
the individual and population levels, ecological exposure and risk assessments
may need to address community, ecosystem, and landscape levels as well. Whereas
chemical agents may degrade or be transformed in the environment, biological
agents may of course grow and multiply, an area not covered in these guidelines.
The Agency may, at a future date, issue specific guidelines in these areas.
Persons subject to these Guidelines should use the terms associated with
chemical exposure assessment in a manner consistent with the glossary in Section
8. Throughout the public comment and SAB review process, the Agency has
sought definitions that have consensus within the scientific community, especially
those definitions common to several scientific fields. The Agency is aware that
certain well understood and widely accepted concepts and definitions in the area
of health physics (such as the definition of exposure) differ from the definitions
in this glossary. The definitions in this glossary are not meant to replace such
basic definitions used in another field of science. It was not possible, however, to
reconcile all the definitions used in various fields of science, and the ones used in
the glossary are thought to be the most appropriate for the field of chemical
exposure assessment.
The Agency may, from time to time, issue updates of or revisions to these
Guidelines.
1.3. Organization of the Guidelines
These Guidelines are arranged in an order that assessors commonly use in
preparing exposure assessments. Section 2 deals with general concepts, Section 3
with planning, Section 4 with data development, Section 5 with calculating
exposures, Section 6 with uncertainty evaluation, and Section 7 with presenting
the results. In addition, these Guidelines include a glossary of terms (Section 8)
and references to other documents (Section 9).
2. GENERAL CONCEPTS IN EXPOSURE ASSESSMENT
Exposure assessment in various forms dates back at least to the early
twentieth century, and perhaps before, particularly in the fields of epidemiology
(World Health Organization [WHO], 1983), industrial hygiene (Cook, 1969;
Paustenbach, 1985), and health physics (Upton, 1988). Epidemiology is the study
of disease occurrence and the causes of disease, while the latter fields deal
primarily with occupational exposure. Exposure assessment combines elements of
all three disciplines. This has become increasingly important since the early 1970s
due to greater public, academic, industrial, and governmental awareness of
chemical pollution problems.
Because there is no agreed-upon definition of the point on or in the body
where exposure takes place, the terminology used in the current exposure
assessment literature is inconsistent. Although there is reasonable agreement that
human exposure means contact with the chemical or agent (Allaby, 1983; Environ
Corporation, 1988; Hodgson et al., 1988; U.S. EPA, 1986a), there has not yet been
widespread agreement as to whether this means contact with (a) the visible
exterior of the person (skin and openings into the body such as mouth and
nostrils), or (b) the so-called exchange boundaries where absorption takes place
(skin, lung, gastrointestinal tract).1 These different definitions have led to some
ambiguity in the use of terms and units for quantifying exposure.2
Comments on the 1986 Guidelines and the 1988 Proposed Guidelines
suggested that EPA examine how exposure and dose were defined in Agency
assessments and include guidance on appropriate definitions and units. After
1 A third, less common, scheme is that exposure is contact with any boundary outside or inside
of the body, including internal boundaries around organs, etc. This scheme is alluded to, for
example, in an article prepared by the National Research Council (NRC, 1985, p. 91). One could
then speak of exposure to the whole person or exposure to certain internal organs.
2 For example, the amount of food ingested would be a dose under scheme (a) and an exposure
under scheme (b). Since the amount ingested in an animal toxicology study is usually termed
administered dose, this leads to the use of both exposure and dose for the same quantity under
scheme (b). There are several such ambiguities in any of the currently used schemes. Brown
(1987) provides a discussion of various units used to describe exposures due to multiple schemes.
internal discussions and external peer review, it is the Agency's position that
defining exposure as taking place at the visible external boundary, as in (a)
above, is less ambiguous and more consistent with nomenclature in other
scientific fields. This is a change from the 1986 Guidelines.
Under this definition, it is helpful to think of the human body as having a
hypothetical outer boundary separating inside the body from outside the body.
This outer boundary of the body is the skin and the openings into the body such
as the mouth, the nostrils, and punctures and lesions in the skin. As used in these
Guidelines, exposure to a chemical is the contact of that chemical with the outer
boundary. An exposure assessment is the quantitative or qualitative evaluation of
that contact; it describes the intensity, frequency, and duration of contact, and
often evaluates the rates at which the chemical crosses the boundary (chemical
intake or uptake rates), the route by which it crosses the boundary (exposure
route; e.g., dermal, oral, or respiratory), and the resulting amount of the chemical
that actually crosses the boundary (a dose) and the amount absorbed (internal
dose).
Depending on the purpose for which an exposure assessment will be used,
the numerical output of an exposure assessment may be an estimate of either
exposure or dose. If exposure assessments are being done as pan of a risk
assessment that uses a dose-response relationship, the output usually includes an
estimate of dose.3 Other risk assessments, for example many of those done as
part of epidemiologic studies, use empirically derived exposure-response
relationships, and may characterize risk without the intermediate step of
estimating dose.
3 The National Research Council's 1983 report RL
2.1. Concepts of Exposure, Intake, Uptake, and Dose
The process of a chemical entering the body can be described in two steps:
contact (exposure), followed by actual entry (crossing the boundary). Absorption,
either upon crossing the boundary or subsequently, leads to the availability of an
amount of the chemical to biologically significant sites within the body (internal
dose4). Although the description of contact with the outer boundary is simple
conceptually, the description of a chemical crossing this boundary is somewhat
more complex.
There are two major processes by which a chemical can cross the boundary
from outside to inside the body. Intake involves physically moving the chemical
in question through an opening in the outer boundary (usually the mouth or
nose), typically via inhalation, eating, or drinking. Normally the chemical is
contained in a medium such as air, food, or water; the estimate of how much of
the chemical enters into the body focuses on how much of the carrier medium
enters. In this process, mass transfer occurs by bulk flow, and the amount of the
chemical itself crossing the boundary can be described as a chemical intake rate.
The chemical intake rate is the amount of chemical crossing the outer boundary
per unit time, and is the product of the exposure concentration times the
ingestion or inhalation rate. Ingestion and inhalation rates are the amount of the
carrier medium crossing the boundary per unit time, such as m3 air
breathed/hour, kg food ingested/day, or liters of water consumed/day. Ingestion
4 These guidelines use the term internal dose to refer to the amount of a chemical absorbed across
the exchange boundaries, such as the skin, lung, or gastrointestinal tract. The term absorbed dose is
often used synonymously for internal dose, although the connotation for the term absorbed dose seems
to be more related to a specific boundary (the amount absorbed across a membrane in an experiment,
for example), while the term internal dose seems to connote a more general sense of the amount
absorbed across one or more specific sites. For the purpose of these guidelines, the term internal dose
is used for both connotations. The term internal dose as used here is also consistent with how it is
generally applied to a discussion of biomarkers (NRC, 1989a). It is also one of the terms used in
epidemiology (NRC, 1985).
or inhalation rates typically are not constant over time, but often can be observed
to vary within known limits.5
The second process by which a chemical can cross the boundary from
outside to inside the body is uptake. Uptake involves absorption of the chemical
through the skin or other exposed tissue such as the eye. Although the chemical
is often contained in a carrier medium, the medium itself typically is not absorbed
at the same rate as the chemical so estimates of the amount of the chemical
crossing the boundary cannot be made in the same way as for intake (see Section
2.1.3.). Dermal absorption is an example of direct uptake across the outer
boundary of the body.6 A chemical uptake rate is the amount of chemical
absorbed per unit time. In this process, mass transfer occurs by diffusion, so
uptake can depend on the concentration gradient across the boundary,
permeability of the barrier, and other factors. Chemical uptake rates can be
expressed as a function of the exposure concentration, permeability coefficient,
and surface area exposed, or as a flux (see Section 2.1.4.).
The conceptual process of contact, then entry and absorption, can be used
to derive the equations for exposure and dose for all routes of exposure.
2.1.1. Exposure
The condition of a chemical contacting the outer boundary of a human is
exposure. Most of the time, the chemical is contained in air, water, soil, a
product, or a transport or carrier medium; the chemical concentration at the point
of contact is the exposure concentration. Exposure over a period of time can be
represented by a time-dependent profile of the exposure concentration. The area
5 Ingestion of food or water is an intermittent rather than continuous process, and can be
expressed as (amount of medium per event) x (events per unit clock or calendar time) [the
frequency of contact]; (e.g., 250 mL of water/glass of water ingested x 8 glasses of water
ingested/day).
6 Uptake through the lung, gastrointestinal tract, or other internal barriers also can occur
following intake through ingestion or inhalation.
under the curve of this profile is the magnitude of the exposure, in concentration-
time units (Lioy, 1990; NRC, 1990):

    E = ∫ C(t) dt   (integrated from t1 to t2)                                (2-1)
where E is the magnitude of exposure, C(t) is the exposure concentration as a
function of time, and t is time, t2 - t1 being the exposure duration (ED). If ED is
a continuous period of time (e.g., a day, week, year, etc.), then C(t) may be zero
during part of this time.7 Integrated exposures are done typically for a single
individual, a specific chemical, and a particular pathway or exposure route over a
given time period.8
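As an illustration of Equation 2-1, the following minimal sketch numerically approximates
an integrated exposure from a sampled concentration-time profile using the trapezoidal
rule; the sampling times and concentration values are hypothetical and chosen only to
show the arithmetic, not as recommended inputs.

    # Minimal sketch (hypothetical values): integrated exposure per Equation 2-1,
    # E = integral of C(t) dt, approximated with the trapezoidal rule.

    def integrated_exposure(times_hr, conc_mg_per_m3):
        """Approximate the area under a concentration-time profile (mg/m3-hr)."""
        total = 0.0
        for i in range(1, len(times_hr)):
            dt = times_hr[i] - times_hr[i - 1]
            total += 0.5 * (conc_mg_per_m3[i] + conc_mg_per_m3[i - 1]) * dt
        return total

    # Hypothetical 8-hour profile; C(t) is zero during part of the exposure duration.
    times = [0, 2, 4, 6, 8]                 # hours
    concs = [0.0, 0.30, 0.25, 0.0, 0.0]     # mg/m3 at each time point

    E = integrated_exposure(times, concs)   # mg/m3-hr
    twa = E / (times[-1] - times[0])        # time-weighted average concentration, mg/m3
    print(f"Integrated exposure E = {E:.2f} mg/m3-hr; TWA concentration = {twa:.3f} mg/m3")

Dividing the integrated exposure by the total duration, as in the last step of the sketch,
gives a time-weighted average concentration of the kind discussed later in Section 2.1.4.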
The integrated exposures for a number of different individuals (a
population or population segment, for example), may then be displayed in a
histogram or curve (usually, with integrated exposure increasing along the abscissa
or x-axis, and the number of individuals at that integrated exposure increasing
along the ordinate or y-axis). This histogram or curve is a presentation of an
exposure distribution for that population or population segment. The utility of
both individual exposure profiles and population exposure distributions is
discussed in Section 2.3.
2.1.2. Applied Dose and Potential Dose
Applied dose is the amount of a chemical at the absorption barrier (skin,
lung, gastrointestinal tract) available for absorption. It is useful to know the
7 Contact time (CT) is that part of the exposure duration where C(t) does not equal zero; that is,
the actual time periods (events, episodes) during which actual exposure is taking place. The exposure
duration as defined here, on the other hand, is a time interval of interest for assessment purposes
during which exposure occurs, either continuously or intermittently.
8 An exposure pathway is the course a chemical takes from its source to the person being
contacted. An exposure route is the particular means of entry into the body, e.g., inhalation,
ingestion, or dermal absorption.
applied dose if a relationship can be established between applied dose and
internal dose, a relationship that can sometimes be established experimentally.
Usually, it is very difficult to measure the applied dose directly, as many of the
absorption barriers are internal to the human and are not localized in such a way
to make measurement easy. An approximation of applied dose can be made,
however, using the concept of potential dose9 (Lioy, 1990; NRC, 1990).
Potential dose is simply the amount of the chemical ingested, inhaled, or in
material applied to the skin. It is a useful term or concept for those instances in
which there is exposure to a discrete amount of chemical or transport medium,
such as eating a certain amount of food or applying a certain amount of material
to the skin.10
The potential dose for ingestion and inhalation is analogous to the
administered dose in a dose-response experiment. Human exposure to
environmental chemicals is generally inadvertent rather than administered, so in
these Guidelines it is termed potential dose rather than administered dose.
Potential dose can be used for dose-response relationships based on administered
dose.
For the dermal route, potential dose is the amount of chemical applied, or
the amount of chemical in the medium applied, for example as a small amount of
9 Potential dose is the potential amount of the chemical that could be absorbed if it were 100%
bioavailable. Note, however, that this does not imply that 100% bioavailability or 100% absorption
is assumed when using potential dose. The equations and discussion in this chapter use potential dose
as a measurable quantity that can then be converted to applied or absorbed dose by the use of the
appropriate factors. Potential dose is a general term referring to any of the exposure routes. The terms
respiratory dose, oral dose, or dermal dose are sometimes used to refer to the route-specific potential
doses.
10 It is not useful to calculate potential doses in cases where there is partial or total immersion in
a fluid such as air or water. In these cases, it is more useful to describe the situation in terms of
exposure (concentration of the chemical in the medium times the time of contact) or absorbed dose.
For cases such as contact with water in a swimming pool, the person is not really exposed to the entire
mass of the chemical that would be described by a potential dose. Nor is it useful to calculate dermal
applied doses because the boundary layer is being constantly renewed. The use of alternate ways to
calculate a dose that might occur while swimming is discussed in Section 2.1.4.2., in conjunction with
Equations 2-7 and 2-8.
particulate deposited on the skin. Note that because not all of the chemical in the
particulate is contacting the skin, this differs from exposure (the concentration
in the particulate times the time of contact) and applied dose (the amount in the
layer actually touching the skin).
The applied dose, or the amount that reaches the exchange boundaries of
the skin, lung, or gastrointestinal tract, may often be less than the potential dose
if the material is only partly bioavailable. Where data on bioavailability are
known, adjustments to the potential dose to convert it to applied dose and
internal dose may be made.11
2.1.3. Internal Dose
The amount of a chemical that has been absorbed and is available for
interaction with biologically significant receptors is called the internal dose. Once
absorbed, the chemical can undergo metabolism, storage, excretion, or transport
within the body. The amount transported to an individual organ, tissue, or fluid
of interest is termed the delivered dose. The delivered dose may be only a small
part of the total internal dose. The biologically effective dose, or the amount that
actually reaches cells, sites, or membranes where adverse effects occur (NRC,
1990, p. 29), may only be a part of the delivered dose, but it is obviously the
crucial part. Currently, most risk assessments dealing with environmental
chemicals (as opposed to pharmaceutical assessments) use dose-response
relationships based on potential (administered) dose or internal dose, since the
pharmacokinetics necessary to base relationships on the delivered dose or
biologically effective doses are not available for most chemicals. This may
change in the future, as more becomes known about the pharmacokinetics of
environmental chemicals.
11 This may be done by adding a bioavailability factor (range: 0 to 1) to the dose equation. The
bioavailability factor would then take into account the ability of the chemical to be extracted from the
matrix, absorption through the exchange boundary, and any other losses between ingestion and contact
with the lung or gastrointestinal tract. When no data or information are available to indicate otherwise,
the bioavailability factor is usually assumed to be 1.
Doses are often presented as dose rates, or the amount of a chemical dose
(applied or internal) per unit time (e.g., mg/day), or as dose rates on a per-unit-
body-weight basis (e.g., mg/kg/day).
Distributions of individual doses within a population or population segment
may be displayed in a histogram or curve analogous to the exposure distributions
described in Section 2.1.1. The utility of individual dose profiles, as well as the
utility of population distributions of dose, are described more fully in Section 2.3.
2.1.4. Exposure and Dose Relationships
Depending on the use of the exposure assessment, estimates of exposure
and dose in various forms may be required.
Exposure concentrations are useful when comparing peak exposures
to levels of concern such as short-term exposure limits (STELs).
They are typically expressed in units such as µg/m3, mg/m3, mg/kg,
µg/L, mg/L, ppb, or ppm.
Exposure or dose profiles describe the exposure concentration or
dose as a function of time. Concentration and time are used to
depict exposure, while amount and time characterize dose; graphical
or tabular presentations may be used for either type of profile.
Such profiles are very important for use in risk assessment where the
severity of effect is dependent on the pattern by which the exposure occurs rather
than the total (integrated) exposure. For example, a developmental toxin may
only produce effects if exposure occurs during a particular stage of development.
Similarly, a single acute exposure to very high contaminant levels may induce
adverse effects even if the average exposure is much lower than apparent no-
effect levels. Such profiles will become increasingly important as biologically
based dose-response models become available.
Integrated exposures are useful when a total exposure for a
particular route (i.e., the total for various pathways leading to
exposure via the same route) is needed. Units of integrated
exposure are concentration times time. The integrated exposure is
the total area under the curve of the exposure profile (Equation 2-
1). Note that an exposure profile (a picture of exposure
concentration over time) contains more information than an
integrated exposure (a number), including the duration and
periodicity of exposure, the peak exposure, and the shape of the
area under the time-concentration curve.
Time-weighted averages are widely used in exposure assessments,
especially as part of a carcinogen risk assessment. A time-weighted
average exposure concentration (units of concentration) is the
integrated exposure divided by the period where exposure occurs,
and is useful in some of the equations discussed below in estimating
dose. A time-weighted average dose rate is the total dose divided
by the time period of dosing, usually expressed in units of mass per
unit time, or mass/time normalized to body weight (e.g.,
mg/kg/day). Time-weighted average dose rates such as the lifetime
average daily dose (LADD) are often used in dose-response
equations to estimate effects or risk.12
The discussion in the next three sections focuses on exposure via
inhalation, oral intake, and dermal absorption. Other exposure routes are
possible, however, including direct introduction into the bloodstream via injection
or transfusion, contamination of exposed lesions, placental transfer, or use of
12 Current carcinogen risk models, such as the linearized multistage procedure and other linear
nonthreshold models, use lifetime exposures to develop the dose-response relationships, and
therefore use lifetime time-weighted average exposures to estimate risks. Within the range of
linearity for risk, this procedure effectively treats exposures and doses as a series of "units," with
each unit of dose being equal to any other unit of dose in terms of risk potential without respect to
prior exposure or dose patterns. Current research in the field of dose-response modeling is focusing
on biologically based dose-response models which may take into account the effects of the exposure
or dose patterns, making use of all of the information in an exposure or dose profile. For a more
in-depth discussion on the implications of the use of time-weighted averages, see Atherley (1985).
suppositories. The exposures and doses for these routes can be calculated in a
similar manner, depending on whether an intake or uptake process is involved.
Although equations for calculating exposure, dose, and their various
averages are in widespread use in exposure assessment, the assessor should
consider the implications of the assumptions used to derive the equations.
Simplifying assumptions used in deriving the equations may mean that variations
in exposure concentration, ingestion or inhalation rate, permeability coefficient,
surface area exposed, and absorption fraction can introduce error into the
estimate of dose if average values are used, and this must be considered in the
evaluation of uncertainty (Section 6).
2.1.4.1. Calculating Potential Dose for Intake Processes
The general equation for potential dose for intake processes, e.g.,
inhalation and ingestion (see Figure 2-1 for illustration of various exposures and
doses) is simply the integration of the chemical intake rate (concentration of the
chemical in the medium times the intake rate of the medium, C times IR) over
time:
    Dpot = ∫ C(t) · IR(t) dt   (integrated from t1 to t2)                     (2-2)

where Dpot is potential dose and IR(t) is the ingestion or inhalation rate.
Figure 2-1. Schematic of dose and exposure.
The quantity t2 - t1, as before, represents the period of time over which
exposure is being examined, or the exposure duration (ED). The exposure
duration may contain times where the chemical is in contact with the person, and
also times when C(t) is zero. Contact time represents the actual time period
where the chemical is in contact with the person. For cases such as ingestion,
where actual contact with food or water is intermittent, and consequently the
actual contact time may be small, the intake rate is usually expressed in terms of
a frequency of events (e.g., 8 glasses of water consumed per day) times the intake
per event (e.g., 250 mL of water/glass of water consumed). Intermittent air
exposures (e.g., 8 hours exposed/day times one cubic meter of air inhaled/hour)
can also be expressed easily using exposure duration rather than contact time.
Hereafter, the term exposure duration will be used in the examples below to
refer to the term t2 - t1, since it occurs frequently in exposure assessments and it
is often easier to use.
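For instance, with the example values above, the average water intake rate would be
IR = 8 glasses/day x 250 mL/glass = 2 L of water per day; multiplying this rate by the
exposure concentration C (e.g., in mg/L) then gives the average chemical intake rate in mg/day.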
Equation 2-2 can also be expressed in discrete form as a summation of the
doses received during various events i:
    Dpot = Σi Ci · IRi · EDi                                                  (2-3)
where EDi is the exposure duration for event i. If C and IR are nearly constant
(which is a good approximation if the contact time is very short), Equation 2-3
becomes:
    Dpot = C · IR · ED                                                        (2-4)

where ED is the sum of the exposure durations for all events, and C and IR are
the average values for these parameters. Equation 2-4 will not necessarily hold in
cases where C and IR vary considerably. In those cases, Equation 2-3 can be
used if the exposure can be broken out into segments where C and IR are
approximately constant. If even this condition cannot be met, Equation 2-2 may
be used.
For risk assessment purposes, estimates of dose should be expressed in a
manner that can be compared with available dose-response data. Frequently,
dose-response relationships are based on potential dose (called administered dose
in animal studies), although dose-response relationships are sometimes based on
internal dose.
Doses may be expressed in several different ways. Solving Equations 2-2,
2-3, or 2-4, for example, gives a total dose accumulated over the time in question.
The dose per unit time is the dose rate, which has units of mass/time (e.g.,
mg/day). Because intake and uptake can vary, dose rate is not necessarily
constant. An average dose rate over a period of time is a useful number for
many risk assessments.
Exposure assessments should take into account the time scale related to
the biological response studied unless the assessment is intended to provide data
on the range of biological responses (NRC, 1990, p. 28). For many noncancer
effects, risk assessments consider the period of time over which the exposure
occurred, and often, if there are no excursions in exposure that would lead to
acute effects, average exposures or doses over the period of exposure are
sufficient for the assessment. These averages are often in the form of average
daily doses (ADDs).
An ADD can be calculated from Equation 2-2 by averaging Dpot over body
weight and an averaging time, provided the dosing pattern is known so the
integral can be solved. It is unusual to have such data for human exposure and
intake over extended periods of time, so some simplifying assumptions are
commonly used. Using Equation 2-4 instead of 2-2 or 2-3 involves making
steady-state assumptions about C and IR, but this makes the equation for ADD
easier to solve.13 For intake processes, then, using Equation 2-4, this becomes:
    ADDpot = [C · IR · ED] / [BW · AT]                                        (2-5)

where ADDpot is the average daily potential dose, BW is body weight, and AT is
the time period over which the dose is averaged (converted to days). As with
Equation 2-4, the exposure concentration C is best expressed as an estimate of
the arithmetic mean regardless of the distribution of the data. Again, using
average values for C and IR in Equation 2-5 assumes that C and IR are
approximately constant.
For effects such as cancer, where the biological response is usually
described in terms of lifetime probabilities, even though exposure does not occur
over the entire lifetime, doses are often presented as lifetime average daily doses
(LADDs). The LADD takes the form of Equation 2-5, with lifetime (LT)
replacing the averaging time (AT):
    LADDpot = [C · IR · ED] / [BW · LT]                                       (2-6)
The LADD is a very common term used in carcinogen risk assessment
where linear nonthreshold models are employed.
13 The assessor should keep in mind that this steady state assumption has been made when using
Equation 2-5, and should be able to discuss what effect using average values for C, IR, and ED has on
the resulting estimate.
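As an illustration of Equations 2-5 and 2-6, the sketch below computes an ADDpot and a
LADDpot for a simple drinking-water ingestion case; all of the input values (concentration,
intake rate, durations, body weight, lifetime) are hypothetical and are used only to show
how the terms combine, not as recommended defaults.

    # Minimal sketch (hypothetical inputs): ADDpot (Eq. 2-5) and LADDpot (Eq. 2-6)
    # for ingestion of a chemical in drinking water, assuming C and IR are
    # approximately constant over the exposure duration.

    C = 0.005            # exposure concentration, mg chemical per L of water (assumed)
    IR = 2.0             # water ingestion rate, L/day (assumed)
    ED_days = 10 * 365   # exposure duration: 10 years, in days (assumed)
    BW = 70.0            # body weight, kg (assumed)
    AT_days = 10 * 365   # averaging time for a noncancer ADD, in days (assumed)
    LT_days = 70 * 365   # lifetime, in days (assumed)

    ADD_pot = (C * IR * ED_days) / (BW * AT_days)    # mg/kg-day
    LADD_pot = (C * IR * ED_days) / (BW * LT_days)   # mg/kg-day

    print(f"ADDpot  = {ADD_pot:.2e} mg/kg-day")
    print(f"LADDpot = {LADD_pot:.2e} mg/kg-day")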
2.1.4.2. Calculating Internal Dose for Uptake Processes (Especially via the
Dermal Route)
For absorption processes, there are two methods generally in use for
calculating internal dose. The first, commonly used for dermal absorption from a
liquid where at least partial immersion occurs, is derived from the equation for
internal dose, Dint, which is analogous to Equation 2-2 except that the chemical
uptake rate (C Kp SA) replaces the chemical intake rate (C IR). Thus,
    Dint = ∫ C(t) · Kp · SA(t) dt   (integrated from t1 to t2)                (2-7)

where Kp is the permeability coefficient and SA(t) is the surface area of skin exposed as a
function of time; these quantities are, at least in principle, experimentally
measurable.14 The internal dose that is analogous to the
potential dose in Equation 2-4 would be:
    Dint = C · Kp · SA · ED                                                   (2-8)
where SA is the average surface area exposed and the ADDint (average daily
internal dose) becomes:
    ADDint = [C · Kp · SA · ED] / [BW · AT]                                   (2-9)
(The corresponding LADDint would be obtained by substituting LT for
AT.) This is the method to use when calculating internal dose for a swimmer.
The total body surface area (SA) is assumed to be exposed to a layer of water
with an average chemical concentration C for a period of time (ED). It is not
necessary to know the mass of the chemical that comes in contact with the skin.
The assumptions necessary in going from Equation 2-7 to Equation 2-9 are
comparable to those made in deriving Equation 2-5. Recall that both C and SA
will vary over time, and Kp may not be constant over different parts of the body.
If the assumption used to derive Equation 2-5 (that these variables are nearly
14 The permeability coefficient, Kp, can be experimentally calculated for a chemical and a
particular barrier (e.g., skin type) by observing the flux rate in vitro (typical units: mg chemical
crossing/sec-cm2), and dividing it by the concentration of the chemical in the medium in contact
with the barrier (typical units: mg chemical/cm3). This allows the relationship between bulk
concentration and the crossing of the chemical itself to be made. Kp has the advantage of being
fairly constant over a range of concentrations and can be used for concentrations other than the one
used in the experiment. The chemical uptake rate, relating the crossing of the barrier of the
chemical itself in terms of the bulk concentration, then becomes C times Kp times the surface area
exposed (SA).
constant) does not hold, a different form of the equation having several terms
must be used.
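As an illustration of Equation 2-9 applied to the swimming example above, the following
sketch uses hypothetical values for the exposure concentration, permeability coefficient,
exposed surface area, and event pattern; none of these values is a recommended default.

    # Minimal sketch (hypothetical inputs): average daily internal dose from dermal
    # uptake during swimming, per Equation 2-9: ADDint = (C * Kp * SA * ED) / (BW * AT).
    # Units are chosen so that C * Kp * SA has units of mg/hr.

    C_water = 0.1 / 1000.0   # 0.1 mg/L expressed as mg/cm3 (1 L = 1000 cm3); assumed
    Kp = 1.0e-3              # permeability coefficient, cm/hr (assumed)
    SA = 18000.0             # skin surface area exposed, cm2 (assumed whole-body value)
    ED_hr = 150.0            # exposure duration: 1 hr/event x 150 swimming events (assumed)
    BW = 70.0                # body weight, kg (assumed)
    AT_days = 365.0          # averaging time, days (assumed)

    D_int = C_water * Kp * SA * ED_hr      # total internal dose over the period, mg
    ADD_int = D_int / (BW * AT_days)       # mg/kg-day
    print(f"Dint = {D_int:.3f} mg; ADDint = {ADD_int:.2e} mg/kg-day")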
The second method of calculating internal dose uses empirical observations
or estimates of the rate that a chemical is absorbed when a dose is administered
or applied. It is useful when a small or known amount of material (such as a
particulate) or a chemical (such as a pesticide) contacts the skin. The potential
dose of a chemical to the skin, Dpot, can often be calculated from knowing the
concentration (C) and the amount of carrier medium applied (Mmedium), either as a
whole or on a unit surface area basis. For example, potential dose from dermal
contact with soil can be calculated using the following equation:
    Dpot = C · Mmedium = C · Fadh · SA · ED                                   (2-10)

where Dpot is potential dose, Mmedium is the amount of soil applied, and Fadh is the
adherence factor for soil (the amount of soil applied to and adhering to the skin
on a unit surface area per unit time).
The relationship between potential dose and applied dose for dermal
exposures is that potential dose includes the amount of the chemical in the total
amount of medium contacting the skin, e.g., the amount of chemical in the soil
whether or not all the chemical itself ever comes in direct contact, and applied
dose includes only that amount of the chemical which actually directly touches
the skin. Theoretically, the relationship between the applied dose (Dapp) and the
internal (or absorbed) dose (Dint) can be thought of as:

    Dint = Dapp · ∫ f(t) dt   (integrated from the start of exposure to time T)    (2-11)
where f(t) is a complicated nonlinear absorption function, usually not measurable,
having the dimensions of mass absorbed per mass applied per unit time. The
absorption function will vary due to a number of factors (concentration gradient
of chemical, carrier medium, type of skin, skin moisture, skin condition, etc.). If
f(t) could be integrated over time from the start of exposure until time T, it
would yield the absorption fraction, AF, which is the fraction of the applied dose
that is absorbed after time T. The absorption fraction is a cumulative number
and can increase with time to a possible maximum of 1 (or 100% absorption), but
due to competing processes may reach steady state long before reaching 100%
absorption. Equation 2-11 then becomes:
    Dint = Dapp · AF                                                          (2-12)
where AF is the absorption fraction in units of mass absorbed/mass applied
(dimensionless).
If one assumes that all the chemical contained in the bulk material will
eventually come in contact with the skin, then Dapp equals Dpot and, using Equation
2-12, the Dint equation becomes:

    Dint = C · Mmedium · AF                                                   (2-13)
and (using Equations 2-9 and 2-10) consequently:
    ADDint = [C · Mmedium · AF] / [BW · AT]                                   (2-14)

where Mmedium is the total amount of the carrier medium (here, soil) applied to the skin,
under the assumption that all of the chemical in that medium eventually comes in
contact with the skin. Although with certain liquids or small amounts of material, the
applied dose may be approximately equal to the potential dose, in cases where
there is contact with more than a minimal amount of soil, there is research that
indicates that using this approximation may cause serious error (Yang et al.,
1989). When this approximation does not hold, the assessor must make
assumptions about how much of the bulk material actually contacts the skin, or
use the first method of estimating internal dose outlined above.
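A sketch of the absorption-fraction approach of Equations 2-10, 2-13, and 2-14 follows;
the soil concentration, adherence factor, exposed skin area, contact time, and absorption
fraction are hypothetical, and the calculation carries the assumption discussed above that
the applied dose equals the potential dose.

    # Minimal sketch (hypothetical inputs): internal dose from dermal contact with
    # soil using the absorption-fraction method (Eqs. 2-10, 2-13, and 2-14),
    # assuming all chemical in the adhering soil eventually contacts the skin.

    C_soil = 5.0        # chemical concentration in soil, mg/kg (assumed)
    F_adh = 1.0e-6      # soil adherence factor, kg soil per cm2 of skin per hour (assumed)
    SA = 2000.0         # skin surface area contacted, cm2 (assumed: hands and forearms)
    ED_hr = 100.0       # total hours of soil contact over the averaging period (assumed)
    AF = 0.05           # absorption fraction over the contact period (assumed)
    BW = 70.0           # body weight, kg (assumed)
    AT_days = 365.0     # averaging time, days (assumed)

    M_medium = F_adh * SA * ED_hr           # total mass of soil applied to skin, kg
    D_pot = C_soil * M_medium               # potential dose, mg (Eq. 2-10)
    D_int = D_pot * AF                      # internal dose, mg (Eq. 2-13)
    ADD_int = D_int / (BW * AT_days)        # mg/kg-day (Eq. 2-14)
    print(f"Dpot = {D_pot:.2e} mg; Dint = {D_int:.2e} mg; ADDint = {ADD_int:.2e} mg/kg-day")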
Unfortunately, almost no data are available concerning the relationship
between potential dose and applied dose for dermal exposures. Experimental
data on absorption fractions derived for soil commonly use potential dose rather
than applied dose, which may make the experimental data at least in part
dependent on experimental conditions such as how much soil was applied. If the
exposure assessment conditions are similar to those in the experiment, this would
not usually introduce much error, but if the conditions vary widely, the error
introduced may be difficult to determine.
As a practical matter, estimates of absorption fraction are often crude
approximations and may be difficult to refine even if some data from
experiments are available in the published literature. Typically, absorption
experiments report results as an absorption fraction after a given time (e.g., 50%
after 24 hours). Since absorption fraction is a function of several variables such
as skin temperature, pH, moisture content, and exposed surface area, as well as
characteristics of the matrix in which the chemical occurs (e.g., soil particle size
distribution, organic matter content, and moisture content), it is often difficult to
make comparisons between experimental data and conditions being considered
for an assessment.
With single data points, it may not be clear whether the experiment
reached steady state. If several data points are available from different times in
the experiment, a plot of absorption fraction vs. time may be instructive. For
chemicals where data are available for steady-state conditions, the steady-state
value will probably be a good approximation to use in assessments where
exposure duration is at least this long, provided the conditions in the experiment
are similar to those of the case being assessed. Assessors should be very cautious
in applying absorption fractions for moderately absorbed chemicals (where
observed experimental absorption fractions are not in the steady-state part of the
cumulative curve), or in using experimental data for estimates of absorption over
a much shorter duration than in the experiment.
In almost all cases, the absorption fraction method of estimating internal
dose from applied dose gives only an approximation of the internal dose. The
interested reader is referred to U.S. EPA (1992b) for more thorough guidance on
dermal exposure assessment.
2.1.4.3. Calculating Internal Dose for Intake Processes (Especially via
Respiratory and Oral Routes)
Chemicals in air, food, or drinking water normally enter the body through
intake processes, then are subsequently absorbed through internal uptake
processes in the lung or gastrointestinal tract. Sometimes it is necessary to
estimate the resulting internal dose, Dint, after intake. In addition, if enough is known
about the pharmacokinetics of the chemical to make addition of doses across
routes a meaningful exercise, the doses must be added as internal dose, not
applied dose, potential dose, or exposure.
Theoretically, one could calculate Dint in these cases by using an equation
similar to Equation 2-7; but C in that equation would become the concentration
of the chemical in the lung or gastrointestinal tract, SA would be the internal
surface area involved, and Kp would be the permeability coefficient of the lung or
gastrointestinal tract lining. Although data from the pharmaceutical field may be
helpful in determining, for example, internal surface areas, all of the data
mentioned above are not known, nor are they measurable with current
instrumentation.
Because Equations 2-2 through 2-4 estimate the potential dose Dpot, which
is the amount ingested or inhaled, and Equations 2-11 and 2-12 provide
relationships between the applied dose (Dapp) and internal dose (Dint), all that is
necessary is a relationship between potential dose and applied dose for intake
processes. Again, data on this topic are virtually nonexistent, so a common
assumption is that for intake processes, the potential dose equals the applied
dose. Although arguments can be made that this assumption is likely to be more
nearly accurate than for the case of soil contact, the validity of this assumption is
unknown at this point. Essentially, the assumption of equality means that
whatever is eaten, drunk, or inhaled touches an absorption barrier inside the
person.
Assuming potential dose and applied dose are approximately equal, the
internal dose after intake can be estimated by combining Equations 2-2 or 2-3
with Equations 2-11 or 2-12. Using Equations 2-3 and 2-12, this becomes:
    Dint = C · IR · ED · AF                                                   (2-15)
The ADDint for the two-step intake/uptake process becomes:
    ADDint = [C · IR · ED · AF] / [BW · AT]                                   (2-16)
Using average values for C and IR in Equations 2-15 and 2-16 involves the
same assumptions and cautions as were discussed in deriving the ADD and
LADD equations in the previous two sections, and of course, the same cautions
apply to the use of the absorption fraction as were outlined in Section 2.1.4.2.
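The sketch below illustrates Equations 2-15 and 2-16 for an inhalation case; the inputs
are hypothetical, and the assumption that potential dose equals applied dose is the same
one discussed in the text.

    # Minimal sketch (hypothetical inputs): internal dose after an intake process,
    # per Equations 2-15 and 2-16, assuming potential dose equals applied dose.

    C = 0.02          # exposure concentration in air, mg/m3 (assumed)
    IR = 20.0         # inhalation rate, m3/day (assumed)
    ED_days = 365.0   # exposure duration, days (assumed)
    AF = 0.5          # absorption fraction in the lung (assumed)
    BW = 70.0         # body weight, kg (assumed)
    AT_days = 365.0   # averaging time, days (assumed)

    D_int = C * IR * ED_days * AF          # internal dose, mg (Eq. 2-15)
    ADD_int = D_int / (BW * AT_days)       # mg/kg-day (Eq. 2-16)
    print(f"Dint = {D_int:.1f} mg; ADDint = {ADD_int:.2e} mg/kg-day")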
2.1.5. Summary of Exposure and Dose Terms With Example Units
Table 2-1 provides a summary of the exposure and dose terms discussed in
Section 2.1, along with examples of units commonly used.
Table 2-1. Explanation of exposure and dose terms.

TERM: Exposure
REFERS TO: Contact of chemical with outer boundary of a person, e.g., skin, nose, mouth.
GENERIC UNITS: concentration × time
SPECIFIC EXAMPLE UNITS:
    Dermal:       (mg chem/L water)(hrs of contact)
                  (mg chem/kg soil)(hrs of contact)
    Respiratory:  (ppm chem in air)(hrs of contact)
                  (µg chem/m³ air)(days of contact)
    Oral:         (mg chem/L water)(min of contact)
                  (mg chem/kg food)(min of contact)

TERM: Potential Dose
REFERS TO: Amount of a chemical contained in material ingested, air breathed, or bulk material applied to the skin.
GENERIC UNITS: mass of the chemical; the dose rate is mass of the chemical/time; the dose rate is sometimes normalized to body weight: mass of chemical/unit body weight · time
SPECIFIC EXAMPLE UNITS:
    Dermal:       (mg chem/kg soil)(kg soil on skin) = mg chem in soil applied to skin
    Respiratory:  (µg chem/m³ air)(m³ air breathed/min)(min exposed) = µg chemical in air breathed
    Oral:         (mg chem/L water)(L water consumed/day)(days exposed) = mg chemical ingested in water (also dose rate: mg/day)

TERM: Applied Dose
REFERS TO: Amount of chemical in contact with the primary absorption boundaries (e.g., skin, lungs, gastrointestinal tract) and available for absorption.
GENERIC UNITS: as above
SPECIFIC EXAMPLE UNITS:
    Dermal:       (mg chem/kg soil)(kg soil directly touching skin)(% of chem in soil actually touching skin) = mg chem actually touching skin
    Respiratory:  (µg chem/m³ air)(m³ air directly touching lung)(% of chemical actually touching lung) = µg chemical actually touching lung absorption barrier
    Oral:         (mg chem/kg food)(kg food consumed/day)(% of chemical touching g.i. tract) = mg chemical actually touching g.i. tract absorption barrier (also dose rate: mg/day)

TERM: Internal (Absorbed) Dose
REFERS TO: The amount of a chemical penetrating across an absorption barrier or exchange boundary via either physical or biological processes.
GENERIC UNITS: as above
SPECIFIC EXAMPLE UNITS:
    Dermal:       mg chemical absorbed through skin
    Respiratory:  mg chemical absorbed via lung
    Oral:         mg chemical absorbed via g.i. tract
                  (dose rate: mg chemical absorbed/day or mg/kg·day)

TERM: Delivered Dose
REFERS TO: Amount of chemical available for interaction with any particular organ or cell.
GENERIC UNITS: as above
SPECIFIC EXAMPLE UNITS:
    mg chemical available to organ or cell
    (dose rate: mg chemical available to organ/day)
2.2. Approaches to Quantification of Exposure
Although exposure assessments are done for a variety of reasons (see
Section 3), the quantitative exposure estimate can be approached in three
different ways:16
1. The exposure can be measured at the point of contact (the outer
boundary of the body) while it is taking place, measuring both
exposure concentration and time of contact and integrating them
(point-of-contact measurement),
2. The exposure can be estimated by separately evaluating the
exposure concentration and the time of contact, then combining this
information (scenario evaluation),
3. The exposure can be estimated from dose, which in turn can be
reconstructed through internal indicators (biomarkers,17 body
burden, excretion levels, etc.) after the exposure has taken place
(reconstruction).
These three approaches to quantification of exposure (or dose) are
independent, as each is based on different data. The independence of the three
methods is a useful concept in verifying or validating results. Each of the three
16 These three ways are approaches for arriving at a quantitative estimate of exposure. Sometimes
the approaches to assessing exposure are described in terms of "direct measures" and "indirect
measures" of exposure (e.g., NRC, 1990). Measurements that actually involve sampling on or within
a person, for example, use of personal monitors and biomarkers, are termed "direct measures" of
exposure. Use of models, microenvironmental measurements, and questionnaires, where measurements
do not actually involve personal measurements, are termed "indirect measures" of exposure. The
direct/indirect nomenclature focuses on the type of measurements being made; the scenario
evaluation/point-of-contact/reconstruction nomenclature focuses on how the data are used to develop
the dose estimate. The three-term nomenclature is used in these guidelines to highlight the point that
three independent estimates of dose can be developed.
17 Biomarkers can be used to study exposure, effects, or susceptibility. The discussion of
biomarkers in these guidelines is limited to their use in indicating exposure.
has strengths and weaknesses; using them in combination can considerably
strengthen the credibility of an exposure or risk assessment. Sections 2.2.1
through 2.2.3 briefly describe some of the strengths and weaknesses of each
approach.
2.2.1. Measurement of Exposure at the Point-of-Contact
Point-of-contact exposure measurement evaluates the exposure as it occurs,
by measuring the chemical concentrations at the interface between the person and
the environment as a function of time, resulting in an exposure profile. The best
known example of the point-of-contact measurement is the radiation dosimeter.
This small badge-like device measures exposure to radiation as it occurs and
provides an integrated estimate of exposure for the period of time over which the
measurement has been taken. Another example is the Total Exposure
Assessment Methodology (TEAM) studies (U.S. EPA, 1987a) conducted by the
EPA. In the TEAM studies, a small pump with a collector and absorbent was
attached to a person's clothing to measure his or her exposure to airborne
solvents or other pollutants as it occurred. A third example is the carbon
monoxide (CO) point-of-contact measurement studies where subjects carried a
small CO measuring device for several days (U.S. EPA, 1984a). Dermal patch
studies and duplicate meal studies are also point-of-contact measurement studies.
In all of these examples, the measurements are taken at the interface between the
person and the environment while exposure is occurring. Use of these data for
estimating exposures or doses for periods that differ from those for which the
data are collected (e.g., for estimates of lifetime exposures) will require some
assumptions, as discussed in Section 5.3.1.
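As a purely illustrative sketch of how a point-of-contact exposure profile can be integrated over time, the Python fragment below applies the trapezoidal rule to a hypothetical personal-monitoring time series; the sampling times and concentrations are assumptions, not data from any of the studies cited above.

```python
# Integrate a hypothetical personal-monitoring exposure profile (concentration vs. time)
# using the trapezoidal rule; the result has units of concentration x time.

times = [0.0, 1.0, 2.0, 4.0, 8.0]          # hours since start of monitoring (assumed)
concs = [0.02, 0.05, 0.04, 0.01, 0.01]     # mg/m^3 measured at the breathing zone (assumed)

exposure = 0.0
for i in range(1, len(times)):
    dt = times[i] - times[i - 1]
    exposure += 0.5 * (concs[i] + concs[i - 1]) * dt   # trapezoid for each interval

print(f"Integrated exposure = {exposure:.3f} (mg/m^3)-hours")
```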
The strength of this method is that it measures exposure directly; provided that
the measurement devices are accurate, it is likely to give the most
accurate exposure value for the period of time over which the measurement was
taken. It is often expensive, however, and measurement devices and techniques
do not currently exist for all chemicals. This method may also require
assumptions to be made concerning the relationship between short-term sampling
and long-term exposures, if appropriate. This method is also not source-specific,
a limitation when particular sources will need to be addressed by risk managers.
2.2.2. Estimates of Exposure from Scenario Evaluation
In exposure scenario evaluation, the assessor attempts to determine the
concentrations of chemicals in a medium or location and link this information
with the time that individuals or populations contact the chemical. The set of
assumptions about how this contact takes place is an exposure scenario. In
evaluating exposure scenarios, the assessor usually characterizes the chemical
concentration and the time of contact separately. This may be done for a series
of events, e.g., by using Equation 2-3, or using a steady-state approximation, e.g.,
using Equation 2-4.
The goal of chemical concentration characterization is to develop estimates
of exposure concentration. This is typically accomplished indirectly by measuring,
modeling, or using existing data on concentrations in the bulk media, rather than
at the point of contact. Assuming the concentration in the bulk medium is the
same as the exposure concentration is a clear source of potential error in the
exposure estimate and must be discussed in the uncertainty analysis. Generally,
the closer the medium can be measured to the point of contact (in both space and
time), the less uncertainty there is in the characterization of exposure
concentration.
The goal of characterizing time of contact is to identify who is exposed
and to develop estimates of the frequency and duration of exposure. Like
chemical concentration characterization, this is usually done indirectly by use of
demographic data, survey statistics, behavior observation, activity diaries, activity
models, or, in the absence of more substantive information, assumptions about
behavior.
The chemical concentration and population characterizations are ultimately
combined in an exposure scenario, and there are various ways to accomplish this.
One of the major problems in evaluating dose equations such as Equations 2-4
through 2-6 is that the limiting assumptions or boundary conditions used to derive
them (e.g., steady-state assumptions; see Section 2.1.4.) do not always hold true.
Two major approaches to this problem are (1) to evaluate the exposure or dose
equation under conditions where the limiting assumptions do hold true, or (2) to
deal with the uncertainty caused by the divergence from the boundary conditions.
As an example of the first way, the microenvironment method, usually used for
evaluating air exposures, evaluates segments of time and location where the
assumption of constant concentration is approximately true, then sums over all
such time segments for a total exposure for the respiratory route, effectively
removing some of the boundary conditions by falling back to the more general
Equation 2-3. While estimates of exposure concentration and time-of-contact are
still derived indirectly by this method, the concentration and time-of-contact
estimates can be measured for each microenvironment. This avoids much of the
error due to using average values in cases where concentration varies widely
along with time of contact.18
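A minimal sketch of the microenvironment method is given below; the microenvironments, concentrations, and times are hypothetical, and the total respiratory exposure is simply the sum of concentration times time over the segments, in the spirit of Equation 2-3.

```python
# Sum concentration x time over microenvironments where concentration is roughly constant.
# All labels and values are hypothetical.

microenvironments = [
    # (label, concentration in ug/m^3, hours per day spent there)
    ("home, night",   12.0, 8.0),
    ("in transit",    45.0, 1.5),
    ("office",         8.0, 9.0),
    ("outdoors",      20.0, 1.0),
    ("home, evening", 15.0, 4.5),
]

total_exposure = sum(conc * hours for _, conc, hours in microenvironments)
print(f"Daily respiratory exposure = {total_exposure:.0f} (ug/m^3)-hours")
```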
As examples of the second approach, there are various tools used to
describe uncertainty caused by parameter variation, such as Monte Carlo analysis
(see Section 5). Section 6 discusses some of these techniques in more detail.
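As a sketch of the second approach, a Monte Carlo analysis propagates assumed parameter distributions through a dose equation rather than using single point values. The distributions, parameters, and steady-state dose form below are purely illustrative assumptions.

```python
# Monte Carlo sketch: propagate assumed parameter distributions through a simple
# steady-state dose equation. Distribution choices and parameters are illustrative only.
import random

random.seed(1)

def sample_dose():
    c  = random.lognormvariate(-5.3, 0.6)   # mg/L water concentration (assumed lognormal)
    ir = random.triangular(1.0, 3.0, 2.0)   # L/day ingestion rate (assumed triangular)
    bw = random.normalvariate(70.0, 12.0)   # kg body weight (assumed normal)
    return c * ir / bw                      # mg/kg-day, steady-state form

doses = sorted(sample_dose() for _ in range(10_000))
median = doses[len(doses) // 2]
p95 = doses[int(0.95 * len(doses))]
print(f"median dose ~ {median:.2e} mg/kg-day, 95th percentile ~ {p95:.2e} mg/kg-day")
```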
One strength of the scenario evaluation approach is that it is usually the
least expensive method of the three. Also, it is particularly suited to analysis of
the risk consequences of proposed actions. It is both a strength and a weakness
of scenario development that the evaluation can be performed with little or no
18 This technique still may not deal effectively with the problem of short-term "peak concentrations"
exceeding some threshold and leading to an acute effect. Even the averaging process used in a
microenvironment may miss significant concentration spikes and average them out to lower
concentrations which are apparently less toxicologically significant. A similar problem exists when
evaluating sources; a "peak release" of a toxic chemical for a short time may cause serious acute effects,
even though the average concentration over a longer period of time might not indicate serious chronic
effects.
data; it is a technique that is best used when some knowledge exists about the
soundness, validity, and uncertainty of the underlying assumptions.
2.2.3. Exposure Estimation by Reconstruction of Internal Dose
Exposure can also be estimated after it has taken place. If a total dose is
known, or can be reconstructed, and information about intake and uptake rates is
available, an average past exposure rate can be estimated. Reconstruction of
dose relies on measuring internal body indicators after exposure and intake and
uptake have already occurred, and using these measurements to back-calculate
dose. However, the data on body burden levels or biomarkers cannot be used
directly unless a relationship can be established between these levels or
biomarker indications and internal dose, and interfering reactions (e.g.,
metabolism of unrelated chemicals) can be accounted for or ruled out. Biological
tissue or fluid measurements that reveal the presence of a chemical may indicate
directly that an exposure has occurred, provided the chemical is not a metabolite
of other chemicals.
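The back-calculation step depends entirely on the biological model linking the measured indicator to dose. As a purely illustrative sketch, the fragment below assumes a one-compartment, first-order model at steady state, in which the average uptake rate equals the elimination rate constant times the measured body burden; the model choice, half-life, absorption fraction, and measured value are all hypothetical assumptions, not recommendations.

```python
# Reconstruction sketch: back-calculate average uptake (and intake) from a measured
# body burden, assuming a one-compartment, first-order model at steady state.
# Every value below, and the model itself, is an illustrative assumption.

body_burden_mg = 1.4              # measured chemical in the body (mg), hypothetical
half_life_days = 30.0             # assumed elimination half-life
k_elim = 0.693 / half_life_days   # first-order elimination rate constant (1/day)

uptake_mg_per_day = k_elim * body_burden_mg     # at steady state, uptake = elimination

absorption_fraction = 0.5         # assumed AF for the dominant (oral) route
intake_mg_per_day = uptake_mg_per_day / absorption_fraction

print(f"Estimated average uptake: {uptake_mg_per_day:.4f} mg/day")
print(f"Estimated average intake (potential dose rate): {intake_mg_per_day:.4f} mg/day")
```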
Biological monitoring can be used to evaluate the amount of a chemical in
the body by measuring one or more of the following items. Not all of these can
be measured for every chemical:
the concentration of the chemical itself in biological tissues or sera
(blood, urine, breath, hair, adipose tissue, etc.),
the concentration of the chemical's metabolite(s),
the biological effect that occurs as a result of human exposure to
the chemical (e.g., alkylated hemoglobin or changes in enzyme
induction), or
the amount of a chemical or its metabolites bound to target
molecules.
The results of biomonitoring can be used to estimate chemical uptake
during a specific interval if background levels do not mask the marker and the
relationships between uptake and the marker selected are known. The time of
sampling for biomarkers can be critical. Establishing a correlation between
exposure and the measurement of the marker, including pharmacokinetics, can
help optimize the sampling conditions.
The strengths of this method are that it demonstrates that exposure to and
absorption of the chemical has actually taken place, and it theoretically can give a
good indication of past exposure. The drawbacks are that it will not work for
every chemical due to interferences or the reactive nature of the chemical, it has
not been methodologically established for very many chemicals, data relating
internal dose to exposure are needed, and it may be expensive.
2.3. Relationships of Exposure and Dose to Risk
Exposure and dose information are often combined with exposure-
response or dose-response relationships to estimate risk, the probability of an
adverse effect occurring. There are a variety of risk models, with various
mathematical relationships between risk and dose or (less frequently) exposure.
A major function of the exposure assessment as part of a risk assessment is to
provide the exposure or dose values and their interpretations.
The exposure and dose information available will often allow estimates of
individual risk or population risk, or both. Presentation of risks in a risk
assessment involves more than merely a numerical value, however. Risks can be
described or characterized in a number of different ways. This section discusses
the relationships between exposure and dose and a series of risk descriptors.
In preparing exposure information for use in a risk assessment, the use of
several descriptors, including descriptors of both individual and population risk,
often provides more useful information to the risk manager than a single
descriptor or risk value. Developing several descriptors may require the exposure
assessor to analyze and evaluate the exposure and dose information in several
different ways. The exposure assessor should be aware of the purpose, scope,
and level of detail of the assessment (see Sections 3.1 through 3.3) before
gathering data, since the types and amounts of data needed may differ. The
questions that need to be addressed as a result of the purpose of the assessment
determine the type of risk descriptors used in the assessment.
2.3.1. Individual Risk
Individual risk is risk borne by individual persons within a population.
Risk assessments almost always deal with more than a single individual.
Frequently, individual risks are calculated for some or all of the persons in the
population being studied, and are then put into the context of where they fall in
the distribution of risks for the entire population.
Descriptions of individual risk can take various forms, depending on the
questions being addressed. For the risk manager, there are often key questions in
mapping out a strategy for dealing with individual risk. For cancer (or when
possible, noncancer) assessments, the risk manager may need answers to questions
such as:
Are individuals at risk from exposure to the substances under study?
For substances, such as carcinogens, that are assumed to have no
threshold, only a zero dose would result in no excess risk; for
noncarcinogens, however, this question can often be addressed. In the
case of hazard indices, where exposures or doses are compared to a
reference dose or some other acceptable level, the risk descriptor
would be a statement based on the ratio between the dose incurred
and the reference dose.
To what risk levels are the persons at the highest risk subjected?
Who are these people, what are they doing, where do they live, etc.,
and what might be putting them at this higher risk?
Can people with a high degree of susceptibility be identified?
What is the average individual risk?
In addressing these questions, risk descriptors may take any of several forms:
an estimate of the probability that an individual in the high end of
the distribution may suffer an adverse effect, along with an
explanation (to the extent known) of the (exposure or susceptibility)
factors which result in their being in the high end;
an estimate of the probability that an individual at the average or
median risk may suffer an adverse effect; or
an estimate of the probability that an individual will suffer an
adverse effect given a specific set of exposure circumstances.
Individuals at the high end of the risk distribution are often of interest to
risk managers when considering various actions to mitigate risk. These
individuals often are either more susceptible to the adverse health effect than
others in the population or are highly exposed individuals, or both.
Higher susceptibility may be the result of a clear difference in the way the
chemical is processed by the body, or it may be the result of being in the extreme
part of the normal range in metabolism for a population. It may not always be
possible to identify persons or subgroups who are more susceptible than the
general population. If groups of individuals who have clearly different
susceptibility characteristics can be identified, they can be treated as a separate
sub population, and the risk assessment for this subgroup may require a different
dose-response relationship from the one used for the general population. When
highly susceptible individuals can be identified, but when a different dose-
response relationship is not appropriate or feasible to develop, the risks for these
individuals are usually treated as part of the variability of the general population.
Highly exposed individuals have been described in the literature using
many different terms. Due to unclear definitions, terms such as maximum
exposed individual,19 worst case exposure,20 and reasonable worst case
exposure21 have sometimes been applied to a variety of ad hoc estimates with
unclear target ranges. The term maximum exposed individual has often been
used synonymously with worst case exposure, that is, to estimate the exposure of
the individual with the highest actual or possible exposure. An accurate estimate
of the exposure of the person in the distribution with the highest exposure is
extremely difficult to develop; uncertainty in the estimate usually increases
greatly as the more extreme ends of the distribution are approached. Even using
techniques such as Monte Carlo simulations can result in high uncertainty about
whether the estimate is within, or above, the actual exposure distribution.
For the purpose of these guidelines, a high end exposure estimate is a
plausible estimate of the individual exposure for those persons at the upper end
of an exposure distribution. The intent of this designation is to convey an
estimate of exposures in the upper range of the distribution, but to avoid
estimates that are beyond the true distribution. Conceptually, the high end of the
distribution means above the 90th percentile of the population distribution, but
19 The uppermost portion of the high-end exposure range has generally been the target for terms
such as "maximum exposed individual," although actual usage has varied.
20 The term "worst case exposure" has historically meant the maximum possible exposure, or where
everything that can plausibly happen to maximize exposure, happens. While in actuality, this worst case
exposure may fall on the uppermost point of the population distribution, in most cases, it will be
somewhat higher than the individual in the population with the highest exposure. The worst case
represents a hypothetical individual and an extreme set of conditions; this will usually not be observed
in an actual population. The worst case and the so-called maximum exposed individual are therefore
not synonymous, the former describing a statistical possibility that may or may not occur in the
population, and the latter ostensibly describing an individual that does, or is thought to, exist in the
population.
21 The lower part of the high-end exposure range, e.g., conceptually above the 90th percentile but
below about the 98th percentile, has generally been the target used by those employing the term
"reasonable worst case exposure." Above about the 98th percentile has been termed the "maximum
exposure" range. Note that both these terms should refer to estimates of exposure on the actual
distribution, not above it.
not higher than the individual in the population who has the highest exposure.
High-end dose estimates are described analogously.
The concept of the high end exposure, as used in this guidance, is
fundamentally different from terms such as worst case, in that the estimate is by
definition intended to fall on the actual (or in the case of scenarios dealing with
future exposures, probable) exposure distribution.
Key Point: The primary objective when developing an estimate of high-end
exposure or dose is to arrive at an estimate that will fall within the actual
distribution, rather than above it. (Estimates above the distribution are
bounding estimates; see Section 5.3.4.1.) Often this requires professional
judgment when data are sparse, but the primary objective of this type of
estimator is to be within this fairly wide conceptual target range.
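Where an exposure distribution (measured or simulated) is available, one way to approximate a high-end estimate is to read off an upper percentile, such as the 90th or above, while keeping the estimate at or below the highest value in the distribution; estimates above that maximum are bounding rather than high end. The sketch below is illustrative only, and the exposure values are hypothetical.

```python
# Sketch: pick a high-end estimate from an exposure distribution as an upper percentile,
# keeping it at or below the maximum observed (or simulated) exposure.
# The exposure values are hypothetical.

exposures = sorted([0.2, 0.4, 0.5, 0.7, 0.9, 1.1, 1.3, 1.8, 2.4, 3.9])  # mg/kg-day

def percentile(sorted_values, p):
    """Nearest-rank percentile (0 < p <= 100)."""
    rank = max(1, round(p / 100 * len(sorted_values)))
    return sorted_values[rank - 1]

high_end = percentile(exposures, 90)      # conceptually above the 90th percentile
maximum = exposures[-1]                   # highest exposure in the distribution

print(f"High-end (90th percentile) estimate: {high_end} mg/kg-day")
print(f"Maximum in distribution: {maximum} mg/kg-day (estimates above this are bounding)")
```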
The relationship between answering the questions about high-end
individual risk and what the exposure assessor must do to develop the descriptors
is discussed in Section 3.4. Individual risk descriptors will generally require the
assessor to make estimates of high-end exposure or dose, and sometimes
additional estimates (e.g., estimates of central tendency such as average or
median exposure or dose).
Another type of individual risk descriptor results from specific sets of
circumstances that can be hypothesized as part of a scenario, for example:
What if a homeowner lives at the edge of this site for his entire
life?
What if a pesticide applicator applies this pesticide without using
protective equipment?
What if a consumer uses this product every day for ten years?
Once a month? Once a week?
What risk level will occur if we set the standard at 100 ppb?
The assumptions made in answering these assessment-specific postulated
questions should not be confused with the approximations made in developing an
exposure estimate for an existing population or with the adjustments in parameter
values made in performing a sensitivity analysis. The assumptions in these
specific questions address a purer "if/then" relationship and, as such, are more
helpful in answering specific hypothetical or anecdotal questions. The answers to
these postulated questions do not give information about how likely the
combination of values might be in the actual population or about how many (if
any) persons might actually be subjected to the calculated risk.
Exposure scenarios employing these types of postulated questions are
encountered often in risk assessments, especially in those where actual exposure
data are incomplete or nonexistent. Although the estimates of individual
exposure derived from these assumptions provide numerical values for calculating
risk, they do so more as a matter of context than a determination of actual
exposure. They are not the same types of estimates as high-end exposure or risk,
where some statement must be made about the likelihood of their falling within a
specified range in the actual exposure or risk distribution.
2.3.2. Population Risk
Population risk refers to an estimate of the extent of harm for the
population or population segment being addressed. Risk managers may need
questions addressed such as the following:
How many cases of a particular health effect might be
probabilistically estimated for a population of interest during a
specified time period?
For noncarcinogens, what portion of the population exceeds the
reference dose (RfD), the reference concentration (RfC), or other
health concern level?
For carcinogens, how many persons are above a certain risk level
such as 10⁻⁶, or a series of risk levels such as 10⁻⁵, 10⁻⁴, etc.?
How do various subgroups fall within the distributions of exposure,
dose, and risk?
What is the risk for a particular population segment?
Do any particular subgroups experience a high exposure, dose, or
risk?
The risk descriptors for population risk can take any of several forms:
a probabilistic projection of the estimated extent of occurrence of a
particular effect for a population or segment (sometimes called
"number of cases" of effect);
a description of what part of the population (or population
segment) is above a certain risk value of interest; or
a description of the distribution of risk among various segments or
subgroups of the population.
In theory, an estimate of the extent of effects a population might incur
(e.g., the number of individual cases that might occur during a specified time) can
be calculated by summing the individual risks for all individuals within the
population or population segment of interest. The ability to calculate this
estimate depends on whether the individual risks are in terms of probabilities for
each individual, rather than a hazard index or other nonprobabilistic risk. The
calculation also requires a great deal more information than is normally available.
For some assessments, an alternate method is used, provided certain
conditions hold. An arithmetic mean dose is usually much easier to estimate than
the individual doses of each person in the population or population segment, but
calculating the hypothetical number of cases by using mean doses, slope factors,
and population size must be done with considerable caution. If the risk varies
linearly with dose, and there is no threshold below which no effect ever occurs,
an estimate of the number of cases that might occur can be derived from the
definition of arithmetic mean. If A = T/n, where A is the arithmetic mean of n
numbers, and T is the sum of the same n numbers, simple rearrangement gives
T = A · n. If the arithmetic mean risk for the population (A) can be estimated,
and the size of the population (n) is known, then this relationship can be used to
calculate a probabilistic estimate of the extent of effects (T).22 Even so, several
other cautions apply when using this method.
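A minimal sketch of this rearrangement, using hypothetical values for the mean individual risk and the population size, is shown below.

```python
# Sketch of T = A * n: probabilistic estimate of the extent of effects from the
# arithmetic mean individual risk and the population size (hypothetical values).

mean_individual_risk = 2.0e-5   # arithmetic mean lifetime excess risk, A (assumed)
population_size = 150_000       # number of persons, n (assumed)

estimated_cases = mean_individual_risk * population_size   # T = A * n
print(f"Probabilistic estimate of the extent of effects: {estimated_cases:.1f} cases")
```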
Individual risks are usually expressed on an upper bound basis, and the
resulting number of cases estimated in this manner will normally be an upper
bound estimate due to the nature of the risk model used. This method will not
work at all for nonlinear dose-response models, such as those for many noncancer
effects, or for nonlinear carcinogenic dose-response models.
In practice, it is difficult even to establish an accurate mean health effect
risk for a population. This is due to many complications, including uncertainties
in using animal data for human dose-response relationships, nonlinearities in the
dose-response curve, projecting incidence data from one group to another
dissimilar group, etc. Although it has been common practice to estimate the
number of cases of disease, especially cancer, for populations exposed to
chemicals, it should be understood that these estimates are not meant to be
accurate predictions of real (or actuarial) cases of disease. The estimate's value
lies in framing hypothetical risk in an understandable way rather than in any
literal interpretation of the term "cases."
Another population risk descriptor is a statement regarding how many
people are thought to be above a certain risk level or other point of demarcation.
For carcinogens, this might be an excess risk level such as 10⁻⁶ (or a series of
levels, i.e., 10⁻⁵, 10⁻⁴, etc.). For noncarcinogenic risk, it might be the portion of the
population that exceeds the RfD (a dose), the RfC (an exposure concentration),
22 Since the geometric mean (G) is defined differently, use of the geometric mean individual risk
(where G does not equal A, such as is often found in environmental situations) in the above
relationship will obviously give an erroneous (usually low) estimate of the total. Geometric means have
appropriate uses in exposure and risk assessment, but estimating population risk in this way is not one
of them.
an effect-based level such as a lowest observed adverse effect level (LOAEL),
etc. For the exposure assessor, this type of descriptor usually requires detailed
information about the distribution of exposures or doses.
Other population risk descriptors address the way the risk burden is
distributed among various segments of the subject population. The segments (or
subgroups) could be divided by geographic location, age, sex, ethnic background,
lifestyle, economic factors, or other demographic variables, or they could
represent groups of persons with atypical sensitivity or susceptibility, such as
asthmatics.
For assessors, this means that data may need to be evaluated for both
highly exposed population segments and highly sensitive population segments. In
cases involving a highly exposed population segment, the assessor might approach
this question by having this segment of the population in mind when developing
the descriptors of high-end exposure or dose. Usually, however, these segments
are identified (either a priori or from inspection of the data) and then treated as
separate, unique populations in themselves, with segment-specific risk descriptors
(population, individual, etc.) analogous to those used for the larger population.
2.3.3. Risk Descriptors
In summary, exposure and dose information developed as part of an exposure
assessment may be used in constructing risk descriptors. These are statements to
convey information about risk to users of that information, primarily risk managers.
Risk descriptors can be grouped as descriptors of individual risk or population risk,
and within these broad categories, there are several types of descriptors. Not all
descriptors are applicable to all assessments. As a matter of policy, the Agency or
individual program offices within the Agency may require one or more of these
descriptors to be included in specific risk assessments. Because the type of
descriptor translates fairly directly into the type of analysis the exposure assessor
must perform, the exposure assessor needs to be aware of these policies. Additional
information on calculating and presenting exposure estimates and risk descriptors is
found in Sections 5 and 7 of these Guidelines.
3. PLANNING AN EXPOSURE ASSESSMENT
Exposure assessments are done for a variety of purposes, and for that
reason, cannot easily be regimented into a set format or protocol. Each
assessment, however, uses a similar set of planning questions, and by addressing
these questions the assessor will be better able to decide what is needed to
perform the assessment and how to obtain and use the information required. To
facilitate this planning, the exposure assessor should consider some basic
questions:
Purpose: Why is the study being conducted? What questions will the
study address and how will the results be used?
Scope: Where does the study area begin and end? Will inferences be
made on a national, regional, or local scale? Who or what is to be
monitored? What chemicals and what media will be measured, and for
which individuals, populations, or population segments will estimates of
exposure and dose be developed?
Level of Detail: How accurate must the exposure or dose estimate be to
achieve the purpose? How detailed must the assessment be to properly
account for the biological link between exposure, dose, effect, and risk, if
necessary? How is the depth of the assessment limited by resources (time
and money), and what is the most effective use of those resources in terms
of level of detail of the various parts of the assessment?
Approach: How will exposure or dose be measured or estimated, and are
these methods appropriate given the biological links among exposure, dose,
effect, and risk? How will populations be characterized? How will
exposure concentrations be estimated? What is known about the
environmental and biological fate of the substance? What are the
important exposure pathways? What is known about expected
concentrations, analytical methods, and detection limits? Are the presently
available analytical methods capable of detecting the chemical of interest
and can they achieve the level of quality needed in the assessment? How
many samples are needed? When will the samples be collected? How
frequently? How will the data be handled, analyzed, and interpreted?
By addressing each of these questions, the exposure assessor will develop a
clear and concise definition of study objectives that will form the basis for further
planning.
3.1. Purpose of the Exposure Assessment
The particular purpose for which an exposure assessment will be used will
often have significant implications for the scope, level of detail, and approach of
the assessment. Because of the complex nature of exposure assessments, a
multidisciplinary approach that encompasses the expertise of a variety of
scientists is necessary. Exposure assessors should seek assistance from other
scientists when they lack the expertise necessary in certain areas of the
assessment.
3.1.1. Using Exposure Assessments in Risk Assessment
The National Research Council (NRC, 1983) described exposure
assessment as one of the four major areas of risk assessment (the others are
hazard identification, dose-response assessment, and risk characterization). The
primary purpose of an exposure assessment in this application is often to estimate
dose, which is combined with chemical-specific dose-response data (usually from
animal studies) in order to estimate risk. Depending on the purpose of the risk
assessment, the exposure assessment will need to emphasize certain areas in
addition to quantification of exposure and dose.
If the exposure assessment is part of a risk assessment to support
regulations for specific chemical sources, such as point emission sources,
consumer products, or pesticides, then the link between the source and the
exposed or potentially exposed population is important. In this case, it is often
necessary to trace chemicals from the source to the point of exposure by using
source and fate models and exposure scenarios. By examining the individual
components of a scenario, assessors can focus their efforts on the factors that
contribute the most to exposure, and perhaps use the exposure assessment to
select possible actions to reduce risk. For example, exposure assessments are
often used to compare and select control or cleanup options. Most often the
scenario evaluation is employed to estimate the residual risk associated with each
of the alternatives under consideration. These estimates are compared to the
baseline risk to determine the relative risk reduction of each alternative. These
types of assessments can also be employed to make screening decisions about
whether to further investigate a particular chemical. These assessments can also
benefit from verification through the use of personal or biological monitoring
techniques.
If the exposure assessment is part of a risk assessment performed to set
standards for environmental media, usually the concentration levels in the
medium that pose a particular risk level are important. Normally, these
assessments place less emphasis on the ultimate source of the chemical and more
emphasis on linking concentration levels in the medium with exposure and dose
levels of those exposed. A combination of media measurements and personal
exposure monitoring could be very helpful in assessments for this purpose, since
what is being sought is the relationship between the two. Modeling may also
support or supplement these assessments.
If the exposure assessment is part of a risk assessment used to determine
the need to remediate a waste site or chemical spill, the emphasis is on
calculating the risk to an individual or small group, comparing that risk to an
acceptable risk level, and if necessary determining appropriate cleanup actions to
reach an acceptable risk. The source of chemical contamination may or may not
be known. Although personal exposure monitoring can give a good indication of
the exposure or dose at the present time, often the risk manager must make a
decision that will protect health in the future. For this reason, modeling and
scenario development are the primary techniques used in this type of assessment.
Emphasis is usually placed on linking sources with the exposed individuals.
Biological monitoring may also be helpful (in cases where the methodology is
established) in determining if exposure actually results in a dose, since some
chemicals are not bioavailable even if intake occurs.
If the exposure assessment is part of a risk assessment used as a screening
device for setting priorities, the emphasis is more on the comparative risk levels,
perhaps with the risk estimates falling into broad categories (e.g., semi-
quantitative categories such as high, medium, and low). For such quick-sorting
exercises, rarely are any techniques used other than modeling and scenario
development. Decisions made in such cases rarely involve direct cleanup or
regulatory action without further refinement of the risk assessment, so the
scenario development approach can be a cost-effective way to set general
priorities for future investigation, addressing the worst risks first.
If the exposure assessment is part of a risk assessment that is wholly
predictive in nature, such as for the premanufacture notice (PMN) program, a
modeling and scenario development approach is recommended. In such cases,
measurement is not possible because the chemicals have yet to be manufactured or
to enter the environment. In this case again, the link between source and exposed individuals is
emphasized.
Not only are risk assessments done for a variety of purposes, but the toxic
endpoints being assessed (e.g., cancer, reproductive effects, neurotoxic effects)
can also vary widely. Endpoints and other aspects of the hazard identification
and dose-response relationships can have a major effect on how the exposure
information must be collected and analyzed for a risk assessment. This is
discussed in more detail in Section 3.5.1.
3.1.2. Using Exposure Assessments for Status and Trends
Exposure assessments can also be used to determine whether exposure
occurs and to monitor status and trends. The emphasis in these exposure
assessments is on what the actual exposure (or dose) is at one particular time, and
how the exposure changes over time. Examples of this type of assessment are
occupational studies. Characteristics and special considerations for occupational
studies have been discussed by the National Institute for Occupational Safety and
Health (NIOSH, 1988).
Exposure status is the snapshot of exposure at a given time, usually the
exposure profile of a population or population segment (perhaps a segment or
statistical sample that can be studied periodically). Exposure trends show how
this profile changes with time. Normally, status and trends studies make use of
statistical sampling strategies to assure that changes can be interpreted
meaningfully. These data are particularly useful when exposure trend
measurements can be used to demonstrate the effectiveness of actions taken for
risk amelioration.
Measurement is critical to such assessments. Personal monitoring can give
the most accurate picture of exposure, but biological or media monitoring can
indicate exposure levels, provided a strong link is established between the
biological or media levels and the exposure levels. Usually this link is established
first by correlating biological or media levels with personal monitoring data for
the same population over the same period.
3.1.3. Using Exposure Assessments in Epidemiologic Studies
Exposure assessments can also be important components of epidemiologic
studies, where the emphasis is on using the exposure assessment to establish
exposure-incidence (or dose-effect) relationships. For this purpose, personal
monitoring, biological monitoring, and scenario development have all been used.
If the population under study is being currently exposed, personal monitoring or
biological monitoring may be particularly helpful in establishing exposure or dose
levels. If the exposure took place in the past, biological monitoring may provide
useful data, provided the chemical is amenable to detection without interference
or degradation, and the pharmacokinetics are known. More often, however,
scenario development techniques are used to estimate exposure in the past, and
often the accuracy of the estimate is limited to classifying exposure as high,
medium, or low. This type of categorization is rather common, but sometimes it
is very difficult to determine who belongs in a category, and to interpret the
results of the study. Although epidemiologic protocols are beyond the scope of
these Guidelines, the use of exposure assessment for epidemiology has been
described by the World Health Organization (WHO, 1983).
3.2. Scope of the Assessment
The scope of an assessment refers to its comprehensiveness. For example,
an important limitation in many exposure assessments relates to the specific
chemical(s) to be evaluated. Although this seems obvious, where exposure to
multiple chemicals or mixtures is possible, it is not always clear whether assessing
"all" chemicals will result in a different risk value than if only certain significant
chemicals are assessed and the others assumed to contribute only a minor amount
to the risk. This may also be true for cases where degradation products have
equal or greater toxicological concern. In these cases, a preliminary investigation
may be necessary to determine which chemicals are likely to be in high enough
concentrations to cause concern, with the possible contribution of the others
discussed in the uncertainty assessment. The assessor must also determine
geographical boundaries, population exposed, environmental media to be
considered, and exposure pathways and routes of concern.
The purpose of the exposure assessment will usually help define the scope.
There are characteristics that are unique to national exposure assessments as
opposed to industry-wide or local exposure assessments. For example, exposure
assessments in support of national regulations must be national in scope; exposure
assessments to support cleanup decisions at a site will be local in scope.
Exposure assessments to support standards for a particular medium will often
concentrate on that medium's concentration levels and typical exposure pathways
and routes, although the other pathways and routes are also often estimated for
perspective.
3.3. Level of Detail of the Assessment
The level of detail, or depth of the assessment, is measured by the amount
and resolution of the data used, and the sophistication of the analysis employed.
It is determined by the purpose of the exposure assessment and the resources
available to perform the assessment. Although in theory the level of detail
needed can be established by determining the accuracy of the estimate required,
this is rarely the case in practice. To conserve resources, most assessments are
done in an iterative fashion, with a screening done first; successive iterations add
more detail and sophistication. After each iteration, the question is asked, is this
level of detail or degree of confidence good enough to achieve the purpose of the
assessment? If the answer is no, successive iterations continue until the answer is
affirmative, new input data are generated, or as is the case for many assessments,
the available data, time, or resources are depleted. Resource-limited assessments
should be evaluated in terms of what part of the original objectives has been
accomplished, and how this affects the use of the results.
The level of detail of an exposure assessment can also be influenced by
the level of sophistication or uncertainty in the assessment of health effects to be
used for a risk assessment. If only very weak health information is available, a
detailed, costly, and in-depth exposure assessment will in most cases be wasteful,
since the most detailed information will not add significantly to the certainty of
the risk assessment.
3.4. Determining the Approach for the Exposure Assessment
The intended use of the exposure assessment will generally favor one
approach to quantifying exposure over the others, or suggest that two or more
approaches be combined. These approaches to exposure assessment can be
viewed as different ways of estimating the same exposure or dose. Each has its
own unique characteristics, strengths, and weaknesses, but the estimate should
theoretically be the same, independent of the approach taken.
The point-of-contact approach requires measurements of chemical
concentrations at the point where they contact the exposed individuals, and a record of
the length of time of contact at each concentration. Some integrative techniques
are inexpensive and easy to use (radiation badges), while others are costly and
may present logistical challenges (personal continuous-sampling devices), and
require public cooperation.
The scenario evaluation approach requires chemical concentration and
time-of-contact data, as well as information on the exposed persons. Chemical
concentration may be determined by sampling and analysis or by use of fate and
transport models (including simple dilution models). Models can be particularly
helpful when some analytical data are available, but resources for additional
sampling are limited. Information on human behavior and physical characteristics
may be assumed or obtained by interviews or other techniques from individuals
who represent the population of interest.
For the reconstruction of dose approach, the exposure assessor usually uses
measured body burden or specific biomarker data, and selects or constructs a
biological model that uses these data to account for the chemical's behavior in the
body. If a pharmacokinetic model is used, additional data on metabolic processes
will be required (as well as model validation information). Information on
exposure routes and relative source strengths is also helpful.
One of the goals in selecting the approach should include developing an
estimate having an acceptable amount of uncertainty. In general, estimates based
on quality-assured measurement data, gathered to directly answer the questions of
the assessment, are likely to have less uncertainty than estimates based on
indirect information. The approach selected for the assessment will determine
which data are needed. All three approaches also require data on intake and
uptake rates if the final product of the assessment is a calculated dose.
Sometimes more than one approach is used to estimate exposure. For
example, the TEAM study combines point-of-contact measurement with the
microenvironment (scenario evaluation) approach and breath measurements for
the reconstruction of dose approach (U.S. EPA, 1987a). If more than one
approach is used, the assessor should consider how using each approach
separately can verify or validate the others. In particular, point-of-contact
measurements can be used as a check on assessments made by scenario
evaluation.
3.5. Establishing the Exposure Assessment Plan
Before starting work on an exposure assessment, the assessor should have
determined the purpose, scope, level of detail, and approach for the assessment,
and should be able to translate these into a set of objectives. These objectives
will be the foundation for the exposure assessment plan. The exposure
assessment plan need not be a lengthy or formal document, especially for
assessments that have a narrow scope and little detail. For more complex
exposure assessments, however, it is helpful to have a written plan.
For exposure assessments being done as part of a risk assessment, the
exposure assessment plan should reflect (in addition to the objectives) an
understanding of how the results of the exposure assessment will be used in the
risk assessment. For some assessments, three additional components may be
needed: the sampling strategy (Section 3.5.2), the modeling strategy (Section
3.5.3), and the communications strategy (Section 7.1.3).
3.5.1. Planning an Exposure Assessment as Part of a Risk Assessment
For risk assessments, exposure information must be clearly linked to the
hazard identification and dose-response relationship (or exposure-response
relationship; see Section 3.5.4). The toxic endpoints (e.g., cancer, reproductive
effects, neurotoxic effects) can vary widely, and along with other aspects of the
hazard identification and dose-response relationships, can have a major effect on
how the exposure information must be collected and analyzed for a risk
assessment. Some of these aspects include implications of limited versus repeated
exposures, dose-rate considerations, reversibility of toxicological processes, and
composition of the exposed population.
Limited versus Repeated Exposures. Current carcinogen risk
models often use lifetime time-weighted average doses in the dose-
response relationships owing to their derivation from lifetime
animal studies. This does not mean cancer cannot occur after single
exposures (witness the A-bomb experience), merely that exposure
information must be consonant with the source of the model. Some
toxic effects, however, occur after a single or a limited number of
exposures, including acute reactions such as anesthetic effects and
respiratory depression or certain developmental effects following
exposure during pregnancy. For developmental effects, for
example, lifetime time-weighted averages have little relevance, so
different types of data must be collected, in this case usually
shorter-term exposure profile data during a particular time window.
Consequently, the exposure assessors and scientists who conduct
monitoring studies need to collaborate with those scientists who
evaluate a chemical's hazard potential to assure the development of
a meaningful risk assessment. If short-term peak exposures are
related to the effect, then instruments used should be able to
measure short-term peak concentrations. If cumulative exposure is
related to the effect, long-term average sampling strategies will
probably be more appropriate.
Dose-Rate Effects. The use of average daily exposure values
(e.g., ADD, LADD) in a dose-response relationship assumes that
within some limits, increments of C times T (exposure concentration
times time) that are equal in magnitude are equivalent in their
potential to cause an effect, regardless of the pattern of exposure
(the so-called Haber's Rule; see Atherley, 1985). In those cases
where toxicity depends on the dose rate, one may need a more
precise determination of the time people are exposed to various
concentrations and the sequence in which these exposures occur (a
brief numerical sketch follows this list of considerations).
Reversibility of Toxicological Processes. The averaging process
for daily exposure assumes that repeated dosing continues to add to
the risk potential. In some cases, after cessation of exposure,
toxicological processes are reversible over time. In these cases,
exposure assessments must provide enough information so that the
risk assessor can account for the potential influence of episodic
exposures.
Composition of the Exposed Population. For some substances,
the type of health effect may vary as a function of age or sex.
Likewise, certain behaviors (e.g., smoking), diseases (e.g., asthma),
and genetic traits (e.g., glucose-6-phosphate dehydrogenase
deficiency) may affect the response of a person to a chemical
substance. Special population segments, such as children, may also
call for a specialized approach to data collection (WHO, 1986).
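The following sketch illustrates the dose-rate point made above: two hypothetical exposure patterns with identical C × T products yield the same time-weighted average (the quantity an averaged daily dose captures), even though their peak concentrations differ greatly, which is why peak-sensitive endpoints may require the time sequence of exposure rather than the average. The patterns and values are illustrative assumptions only.

```python
# Illustrative comparison of two hypothetical exposure patterns with equal C x T
# (and therefore equal time-weighted averages) but very different peak concentrations.

def summarize(pattern):
    """pattern: list of (concentration in ppm, duration in hours)."""
    total_ct = sum(c * t for c, t in pattern)
    total_t = sum(t for _, t in pattern)
    twa = total_ct / total_t
    peak = max(c for c, _ in pattern)
    return total_ct, twa, peak

steady = [(1.0, 24.0)]                  # 1 ppm for 24 hours
spiked = [(12.0, 2.0), (0.0, 22.0)]     # 12 ppm for 2 hours, then clean air

for name, pattern in [("steady", steady), ("spiked", spiked)]:
    ct, twa, peak = summarize(pattern)
    print(f"{name}: C*T = {ct:.0f} ppm-hours, 24-h TWA = {twa:.2f} ppm, peak = {peak:.0f} ppm")
```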
3.5.2. Establishing the Sampling Strategy
If the objectives of the assessment are to be met using measurements, it is
important to establish the sampling strategy before samples are actually taken.
The sampling strategy includes setting data quality objectives, developing the
sampling plan and design, using spiked and blank samples, assessing background
levels, developing quality assurance project plans, validating previously generated
data, and selecting and validating analytical methods.
3.5.2.1. Data Quality Objectives
All measurements are subject to uncertainty because of the inherent
variability in the quantities being measured (e.g., spatial and temporal variability)
and analytical measurement variability introduced during the measurement
process through sampling and analysis. Some sources of variability can be
expressed quantitatively, but others can only be described qualitatively. The
larger the variability associated with individual measurements, the lower the data
quality, and the greater the probability of errors in interpretation. Data quality
objectives (DQOs) describe the degree of uncertainty that an exposure assessor
and other scientists and management are willing to accept.
Realistic DQOs are essential. Data of insufficient quality will have little
value for problem solving, while data of quality vastly in excess of what is needed
to answer the questions asked provide few, if any, additional advantages. DQOs
should consider data needs, cost-effectiveness, and the capability of the
measurement process. The amount of data required depends on the level of
detail necessary for the purpose of the assessment. Estimates of the number of
samples to be taken and measurements to be made should account for expected
sample variability. Finally, DQOs help clarify study objectives by compelling the
exposure assessor to establish how the data will be used before they are collected.
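As one illustration of how expected variability can drive the number of samples, the classical normal-theory approximation n ≈ (z·s/E)² gives the samples needed to estimate a mean within a margin of error E at a given confidence level; the variability and precision targets below are hypothetical, and other study designs would call for other formulas.

```python
# Sketch: approximate sample size to estimate a mean within a target margin of error,
# assuming roughly normal data: n = (z * s / E)^2. All inputs are hypothetical.
import math

z = 1.96                 # z-score for ~95% confidence
s = 40.0                 # expected standard deviation of measurements (e.g., ug/L), assumed
margin_of_error = 10.0   # acceptable half-width of the confidence interval, assumed

n = math.ceil((z * s / margin_of_error) ** 2)
print(f"Approximate number of samples needed: {n}")
```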
The exposure assessor establishes data criteria by proposing limits (based
on best judgment or perhaps a pilot study) on the acceptable level of uncertainty
for each conclusion to be drawn from new data, considering the resources
available for the study. DQOs should include:
A clear statement of study objectives, to include an estimation of
the key study parameters, identifying the hypotheses being tested,
the specific aims of the study, and how the results will be used.
The scope of study objectives, to include the minimum size of
subsamples from which separate results may be calculated, and the
largest unit (area, time period, or group of people) the data will
represent.
A description of the data to be obtained, the media to be sampled,
and the capabilities of the analytical methodologies.
The acceptable probabilities and uncertainties associated with false
positive and false negative statements.
A discussion of statistics used to summarize the data; any standards,
reference values, or action levels used for comparison; and a
description and rationale for any mathematical or statistical
procedures used.
An estimate of the resources needed.
3.5.2.2. Sampling Plan
The sampling plan specifies how a sample is to be selected and handled.
An inadequate plan will often lead to biased, unreliable, or meaningless results.
Good planning, on the other hand, makes optimal use of limited resources and is
more likely to produce valid results.
The sampling design specifies the number and types of samples needed to
achieve DQOs. Factors to be considered in developing the sampling design
include study objectives, sources of variability (e.g., temporal and spatial
heterogeneity, analytical differences) and their relative magnitudes, relative costs,
and practical limitations of time, cost, and personnel.
Sampling design considers the need for temporal and spatial replication,
compositing (combining several samples prior to analysis), and multiple
determinations on a single sample. A statistical or environmental process model
may be used to allocate sampling effort in the most efficient manner.
Data may be collected using a survey or an experimental approach. It may
be desirable to stratify the sample if it is suspected that differences exist between
segments of the statistical population being sampled. In such cases, the stratified
sampling plan assures representative samples of the obviously different parts of
the sample population while reducing variance in the sample data. The survey
approach estimates population exposure based on the measured exposure of a
statistically representative sample of the population. In some situations the study
objectives are better served by an experimental approach; this approach involves
experiments designed to determine the relationship between two or more factors,
(e.g., between house construction and a particular indoor air pollutant). In the
experimental approach, experimental units are selected to cover a range of
situations (e.g., different housing types), but do not reflect the frequency of those
units in the population of interest. An understanding of the relationship between
factors gained from an experiment can be combined with other data (e.g.,
distribution of housing types) to estimate exposure. An advantage of the
experimental approach is that it may provide more insight into underlying
mechanisms which may be important in targeting regulatory action. However, as
in all experimental work, one must argue that the relationships revealed apply
beyond that particular experiment.
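As a sketch of how experimental results can be combined with population data,
the hypothetical values below weight housing-type-specific indoor concentrations
(from an experiment) by the frequency of each housing type in the population of
interest; the housing types and all numbers are invented for illustration.

    # Hypothetical indoor concentrations (ug/m3) by housing type, from an
    # experiment that deliberately covered a range of housing types.
    conc_by_type = {"pre-1950 detached": 12.0,
                    "post-1980 detached": 4.0,
                    "apartment": 7.0}

    # Hypothetical fraction of the exposed population in each housing type.
    population_fraction = {"pre-1950 detached": 0.30,
                           "post-1980 detached": 0.45,
                           "apartment": 0.25}

    # Population-weighted average exposure concentration (ug/m3).
    weighted = sum(conc_by_type[t] * population_fraction[t]
                   for t in conc_by_type)
    print(round(weighted, 2))   # 7.15 ug/m3 in this invented example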
A study may use a combination of survey and experimental techniques and
involve a variety of sampling procedures. A summary of methods for measuring
worker exposure is found in Lynch (1985). Smith et al. (1987) provide guidance
for field sampling of pesticides. Relevant EPA reference documents include
Survey Management Handbook, Volumes I and II (U.S. EPA, 1984b); Soil
Sampling Quality Assurance User's Guide (U.S. EPA, 1990a); and A Rationale
for the Assessment of Errors in the Sampling of Soils (U.S. EPA, 1989a). A
detailed description of methods for enumerating and characterizing populations
exposed to chemical substances is contained in Methods for Assessing Exposure
to Chemical Substances, Volume 4 (U.S. EPA, 1985a).
Factors to be considered in selecting sampling locations include population
density, historical sampling results, patterns of environmental contamination and
environmental characteristics such as stream flow or prevailing wind direction,
access to the sample site, types of samples, and health and safety requirements.
The frequency and duration of sample collection will depend on whether
the risk assessor is concerned with acute or chronic exposures, how rapidly
contamination patterns are changing, ways in which chemicals are released into
the environment, and whether and to what degree physical conditions are
expected to vary in the future.
There are many sources of information on methods for selecting sampling
locations. Schweitzer and Black (1985) and Schweitzer and Santolucito (1984)
give statistical methods for selecting sampling locations for ground water, soil, and
hazardous wastes. A practical guide for ground-water sampling (U.S. EPA,
1985b) and a handbook for stream sampling (U.S. EPA, 1986d) are also available.
The type of sample to be taken and the physical and chemical properties
of the chemical of concern usually dictate the sampling frequency. For example,
determining the concentration of a volatile chemical in surface water requires a
higher sampling frequency than necessary for ground water because the chemical
concentration of the surface water changes more rapidly. Sampling frequency
might also depend on whether the health effects of concern result from acute or
chronic exposures. More frequent sampling may be needed to determine peak
exposures versus average exposure.
A preliminary survey is often used to estimate the optimum number,
spacing, and sampling frequency. Factors to be considered include technical
objectives, resources, program schedule, types of analyses, and the constituents to
be evaluated. Shaw et al. (1984), Sanders and Adrian (1978), and Nelson and
Ward (1981) discuss statistical techniques for determining the optimal number of
samples.
Sampling duration depends on the analytical method chosen, the limits of
detection, the physical and chemical properties of the analyte, chemical
concentration, and knowledge of transport and transformation mechanisms.
Sampling duration may be extended to ensure adequate collection of a chemical
at low concentration or curtailed to prevent the breakthrough of one at high
concentration. Sampling duration is directly related to selection of statistical
procedures, such as trend or cross-sectional analyses.
Storage stability studies with periodic sample analysis should normally be
run concurrently with the storage of treated samples. However, in certain
situations where chemicals are prone to break down or have high volatility, it is
advisable to run a storage stability study in advance so that proper storage and
maximum time of storage can be determined prior to sample collection and
storage. Unless storage stability has been previously documented, samples should
be analyzed as soon as possible after collection to avoid storage stability
problems. Individual programs may have specific time limits on storage,
depending on the types of samples being analyzed.
3.5.2.3. Quality Assurance Samples
Sampling should be planned to ensure that the samples are not biased by
the introduction of field or laboratory contaminants. If sample validity is in
question, all associated analytical data will be suspect. Field- and laboratory-
spiked samples and blank samples should be analyzed concurrently to validate
results. The plan should provide instructions clear enough so that each worker
can collect, prepare, preserve, and analyze samples according to established
protocols.
Any data not significantly greater than blank sample levels should be used
with considerable caution. All values should be reported as measured by the
laboratory, but with appropriate caveats on blank sample levels. The method for
interpreting and using the results from blank samples depends on the analyte and
should be specified in the sampling plan. The following guidance is
recommended (an illustrative calculation follows the list):
For volatiles and semivolatiles, no positive sample results should be
reported unless the concentration of the compound in the sample
exceeds 10 times the amount in any blank for the common
laboratory contaminants methylene chloride, acetone, toluene, 2-butanone,
and common phthalate esters. The amount for other volatiles and semivolatiles
should exceed 5 times the amount in the blank (U.S. EPA, 1988d).
For pesticides and polychlorinated biphenyls (PCBs) no positive
sample results should be reported unless the concentration in the
sample exceeds 5 times that in the blank (U.S. EPA, 1988d). If a
pesticide or PCB is found in a blank but not in a sample, no action
is taken.
For inorganics, no positive sample results should be reported if the
results are less than 5 times the amount in any blank (U.S. EPA,
1988e).
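A minimal sketch of how the blank-comparison convention above might be applied is
shown below; the factors of 10 and 5 follow the guidance in the list, but the
analyte names and concentrations are hypothetical.

    COMMON_LAB_CONTAMINANTS = {"methylene chloride", "acetone", "toluene",
                               "2-butanone"}  # plus common phthalate esters

    def reportable(analyte, sample_conc, blank_conc):
        """Return True if a positive result should be reported: the sample must
        exceed 10 times the blank for common laboratory contaminants and
        5 times the blank for other analytes."""
        factor = 10 if analyte in COMMON_LAB_CONTAMINANTS else 5
        return sample_conc > factor * blank_conc

    print(reportable("acetone", sample_conc=45.0, blank_conc=5.0))            # False
    print(reportable("trichloroethylene", sample_conc=45.0, blank_conc=5.0))  # True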
3.5.2.4. Background Level
Background presence may be due to natural or anthropogenic sources. At
some sites, it is significant and must be accounted for. The exposure assessor
should try to determine local background concentrations by gathering data from
nearby locations clearly unaffected by the site under investigation.
When differences between a background (control area) and a target site
are to be determined experimentally, the control area must be sampled with the
same detail and care as the target.
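When both sets of samples are in hand, the site-versus-background comparison is
usually made statistically. The sketch below computes a Welch two-sample t
statistic on hypothetical log-transformed soil concentrations; it is illustrative
only and is not a prescribed statistical procedure.

    import math
    from statistics import mean, stdev

    def welch_t(site, background):
        """Welch's two-sample t statistic comparing site and background data."""
        se = math.sqrt(stdev(site) ** 2 / len(site) +
                       stdev(background) ** 2 / len(background))
        return (mean(site) - mean(background)) / se

    # Hypothetical log-transformed concentrations (log mg/kg).
    site = [1.8, 2.1, 2.4, 1.9, 2.2, 2.0]
    background = [1.1, 0.9, 1.3, 1.0, 1.2, 1.1]
    print(round(welch_t(site, background), 1))   # about 9.2; site exceeds background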
3.5.2.5. Quality Assurance and Quality Control
Quality assurance (QA) assures that a product meets defined standards of
quality with a stated level of confidence. QA includes quality control.
Quality assurance begins with the establishment of DQOs and continues
throughout the measurement process. Each laboratory should have a QA
program and, for each study, a detailed quality assurance project plan, with
language clear enough to preclude confusion and misunderstanding. The plan
should list the DQOs and fully describe the analytes, all materials, methods, and
procedures used, and the responsibilities of project participants. The EPA has
prepared a guidance document (U.S. EPA, 1980) that describes all these elements
and provides complete guidance for plan preparation. Quality control (QC)
ensures a product or service is satisfactory, dependable, and economical. A QC
program should include development and strict adherence to principles of good
laboratory practice, consistent use of standard operational procedures, and
carefully-designed protocols for each measurement effort. The program should
ensure that errors have been statistically characterized and reduced to acceptable
levels.
3.5.2.6. Quality Assurance and Quality Control for Previously Generated
Data
Previously generated data may be used by the exposure assessor to fulfill
current needs. Any data developed through previous studies should be validated
with respect to both quality and extrapolation to current use. One should
consider how long ago the data were collected and whether they are still
representative. The criteria for method selection and validation should also be
followed when analyzing existing data. Other points considered in data
evaluation include the collection protocol, analytical methods, detection limits,
laboratory performance, and sample handling.
3.5.2.7. Selection and Validation of Analytical Methods
There are several major steps in the method selection and validation
process. First, the assessor establishes methods requirements. Next, existing
methods are reviewed for suitability to the current application. If a new method
must be developed, it is subjected to field and laboratory testing to determine its
performance; these tests are then repeated by other laboratories using a round
robin test. Finally, the method is revised as indicated by laboratory testing. The
reader is referred to Guidance for Data Useability in Risk Assessment (U.S.
EPA, 1990b) for extensive discussion of this topic.
3.5.3. Establishing the Modeling Strategy
Often the most critical element of the assessment is the estimation of
pollutant concentrations at exposure points. This is usually carried out by a
combination of field data and mathematical modeling results. In the absence of
field data, this process often relies on the results of mathematical models (U.S.
EPA, 1986e, 1987b, 1987c, 1988f, 1991b). EPA's Science Advisory Board (U.S.
EPA, 1989b) has concluded that, ideally, modeling should be linked with
monitoring data in regulatory assessments, although this is not always possible
(e.g., for new chemicals).
A modeling strategy has several aspects, including setting objectives, model
selection, obtaining and installing the code, calibrating and running the computer
model, and validation and verification. Many of these aspects are analogous to
the QA/QC measures applied to measurements.
3.5.3.1. Setting the Modeling Study Objectives
The first step in using a model to estimate concentrations and exposure is
to clearly define the goal of the exposure assessment and how the model can help
address the questions or hypotheses of the assessment. This includes a clear
statement of what information the model will help estimate, and how this
estimate will be used. The approach must be consistent with known project
constraints (i.e., schedule, budget, and other resources).
3.5.3.2. Characterization and Model Selection
Regardless of whether models are extensively used in an assessment and a
formal modeling strategy is documented in the exposure assessment plan, when
computer simulation models such as fate and transport models and exposure
models are used in exposure assessments, the assessor must be aware of the
performance characteristics of the model and state how the exposure assessment
requirements are satisfied by the model.
If models are to be used to simulate pollutant behavior at a specific site,
the site must be characterized. Site characterization for any modeling study
includes examining all data on the site such as source characterization, dimensions
and topography of the site, location of receptor populations, meteorology, soils,
geohydrology, and ranges and distributions of chemical concentrations. For
exposure models that simulate both chemical concentration and time of exposure
(through behavior patterns), data on these two parameters must be evaluated.
For all models, the modeler must determine if databases are available to
support the site, chemical, or population characterization, and that all parameters
required by the model can be obtained or reasonable default values are available.
The assessment goals and the results of the characterization step provide the
technical basis for model selection.
Criteria are provided in U.S. EPA (1987b, 1988f) for selection of surface
water models and ground-water models respectively; the reader is referred to
these documents for details. Similar selection criteria exist for air dispersion
models (U.S. EPA, 1986e, 1987c, 1991b).
A primary consideration in selecting a model is whether to perform a
screening study or to perform a detailed study. A screening study makes a
preliminary evaluation of a site or a general comparison between several sites. It
may be generic to a type of site (i.e., an industrial segment or a climatic region)
or may pertain to a specific site for which sufficient data are not available to
properly characterize the site. Screening studies can help direct data collection at
the site by, for example, providing an indication of the level of detection and
quantification that would be required and the distances and directions from a
point of release where chemical concentrations might be expected to be highest.
The value of the screening-level analysis is that it is simple to perform and
may indicate that no significant contamination problem exists. Screening-level
models are frequently used to get a first approximation of the concentrations that
may be present. Often these models use very conservative assumptions; that is,
they tend to overpredict concentrations or exposures. If the results of a
conservative screening procedure indicate that predicted concentrations or
exposures are less than some predetermined no-concern level, then a more
detailed analysis is probably not necessary. If the screening estimates are above
that level, refinement of the assumptions or a more sophisticated model are
necessary for a more realistic estimate.
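The logic of a conservative screening comparison can be reduced to a few lines:
estimate an upper-bound concentration with deliberately simple assumptions and
compare it with the predetermined no-concern level. The box-model form, the
parameter values, and the no-concern level below are all hypothetical and are
shown only to illustrate the decision logic.

    def screening_air_conc(emission_g_per_s, mixing_height_m, box_width_m,
                           wind_speed_m_per_s):
        """Conservative box-model estimate of air concentration (g/m3)
        downwind of a source, assuming complete mixing within the box."""
        return emission_g_per_s / (mixing_height_m * box_width_m *
                                   wind_speed_m_per_s)

    conc = screening_air_conc(emission_g_per_s=0.5, mixing_height_m=100.0,
                              box_width_m=1000.0, wind_speed_m_per_s=1.0)
    no_concern_level = 1.0e-5   # g/m3, hypothetical
    print("refine the analysis" if conc > no_concern_level
          else "no further analysis indicated")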
Screening-level models also help the user conceptualize the physical
system, identify important processes, and locate available data. The assumptions
used in the preliminary analysis should represent conservative conditions, such
that the predicted results overestimate potential conditions, limiting false
negatives. If the limited field measurements or screening analyses indicate that a
contamination problem may exist, then a detailed modeling study may be useful.
A detailed study is one in which the purpose is to make a detailed
evaluation of a specific site. The approach is to use the best data available to
make the best estimate of spatial and temporal distributions of chemicals.
Detailed studies typically require much more data of higher quality and models of
greater sophistication.
3.5.3.3. Obtaining and Installing the Computer Code
It may be necessary to obtain and install the computer code for a model
on a specific computer system. Modern computer systems and software have a
variety of differences that require changes to the source code being installed. It
is essential to verify that these modifications do not change the way the model
works or the results it provides. If the model is already installed and supported
on a computer system to which the user has access, this step is simplified greatly.
Criteria for using a model include its demonstrated acceptability and the
ease with which the model can be obtained. Factors include availability of
specific models and their documentation, verification, and validation. These so-
called implementation criteria relate to the practical considerations of model use
and may be used to further narrow the selection of technically acceptable models.
3.5.3.4. Calibrating and Running the Model
Calibration is the process of adjusting selected model parameters within an
expected range until the differences between model predictions and field
observations are within selected criteria. Calibration is highly recommended for
all operational, deterministic models. Calibration accounts for spatial variations
not represented by the model formulation; functional dependencies of parameters
that are either nonquantifiable, unknown, or not included in the model
algorithms; or extrapolation of laboratory measurements to field conditions.
Extrapolation of laboratory measurements to field conditions requires
considerable care since many unknown factors may cause differences between
laboratory and field.
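Conceptually, calibration amounts to searching an expected parameter range for
the value that brings model predictions closest to field observations. The
one-parameter sketch below uses a hypothetical first-order decay model and
invented observations; real calibrations involve many parameters and more formal
optimization.

    import math

    def predict(conc0, rate, times):
        """Hypothetical model: first-order decay from an initial concentration."""
        return [conc0 * math.exp(-rate * t) for t in times]

    def calibrate(conc0, times, observed, rate_range, steps=1000):
        """Grid-search the decay rate within its expected range to minimize the
        sum of squared differences between predictions and observations."""
        lo, hi = rate_range
        best_rate, best_err = lo, float("inf")
        for i in range(steps + 1):
            rate = lo + (hi - lo) * i / steps
            err = sum((p - o) ** 2
                      for p, o in zip(predict(conc0, rate, times), observed))
            if err < best_err:
                best_rate, best_err = rate, err
        return best_rate

    # Hypothetical field observations (mg/L) at 0, 5, 10, and 20 days.
    times = [0, 5, 10, 20]
    observed = [10.0, 6.2, 3.7, 1.4]
    print(round(calibrate(10.0, times, observed, rate_range=(0.01, 0.5)), 3))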
The final step in the modeling portion of an exposure assessment is to run
the model and generate the data needed to answer the questions posed in the
study objectives.
Experience and familiarity with a model can also be important. This is
especially true with regard to the more complex models. Detailed models can be
quite complex with a large number of input variables, outputs, and computer-
related requirements. It frequently takes months to years of experience to fully
comprehend all aspects of a model. Consequently, it is suggested that an
exposure assessor select a familiar model if it possesses all the selection criteria,
or seek the help of experienced exposure modelers.
3.5.3.5. Model Validation
Model validation is a process by which the accuracy of model results is
compared with actual data from the system being simulated. There are numerous
levels of validation of an environmental fate model, for example, such as
verifying that the transport and transformation concepts are appropriately
represented in the mathematical equations, verifying that the computer code is
free from error, testing the model against laboratory microcosms, running field
tests under controlled conditions, running general field tests, and repeatedly
comparing field data to the modeling results under a variety of conditions and
chemicals. In essence, validation is an independent test of how well the model
(with its calibrated parameters) represents the important processes occurring in
the natural system. Although field and environmental conditions are often
different during the validation step, parameters fixed as a result of calibration are
not readjusted during validation.23
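In its simplest quantitative form, this comparison can be summarized with
statistics such as mean bias and root-mean-square error between paired
predictions and observations, with the calibrated parameters held fixed. The
paired values below are invented for illustration.

    import math

    def bias_and_rmse(predicted, observed):
        """Mean bias and root-mean-square error for paired model/field values."""
        diffs = [p - o for p, o in zip(predicted, observed)]
        bias = sum(diffs) / len(diffs)
        rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
        return bias, rmse

    # Hypothetical independent field data (not used during calibration).
    predicted = [2.1, 4.0, 6.3, 8.8]
    observed = [2.4, 3.6, 6.9, 8.1]
    print(bias_and_rmse(predicted, observed))   # bias ~0.05, RMSE ~0.52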
The performance of models (their ability to represent measured data) is
often dramatically influenced by site characterization and how models represent
such characteristics. Characterizing complex, heterogeneous physical systems
presents major challenges; modeling representations of such systems must be
evaluated in light of that difficulty. In many cases, the apparent inability to
model a system is caused by incomplete physical characterization of the system.
In other cases the uncertainties cannot be readily apportioned between the model
per se and the model's input data.
In addition to comparing model results with actual data (thus illustrating
accuracy, bias, etc.), the model validation process provides information about
conditions under which a simulation will be acceptable and accurate, and under
what conditions it should not be used at all. All models have specific ranges of
application and specific classes of chemicals for which they are appropriate.
Assessors should be aware of these limitations as they develop modeling
strategies.
3.5.4. Planning an Exposure Assessment to Assess Past Exposures
In addition to the considerations discussed in Sections 3.5.1 through 3.5.3,
if the data are being collected to assess past exposures, such as in epidemiologic
studies, they need to be representative of the past exposure conditions, which may
have changed with time. The scope and level of detail of the assessment depends
greatly on the availability and quality of past data. Several approaches for
determining and estimating past exposure are provided in the literature
(Waxweiler et al., 1988; Stern et al., 1986; NIOSH, 1988; Greife et al., 1988;
Hornung and Meinhardt, 1987).

23 In other words, a fundamental rule is that a model should not be validated using data that
were already used to generate or calibrate the model, since doing so would not be an independent
test.
4. GATHERING AND DEVELOPING DATA FOR EXPOSURE
ASSESSMENTS
The information needed to perform an exposure assessment will depend
on the approach(es) selected in the planning stage (Section 3). For those
assessments using point-of-contact measurements, the information includes:
Measured exposure concentrations and duration of contact.
For assessments using the scenario evaluation method for estimating exposures,
the needed information includes:
Information on chemical concentrations in media, usually desirable
in the format of a concentration-time-location profile.
Information on persons who are exposed and the duration of
contact with various concentrations.
For assessments estimating exposure from dose, the information includes:
Biomarker data.
Pharmacokinetic relationships, including the data to support
pharmacokinetic models.
If dose is to be calculated, data are needed on:
Intake and uptake, usually in the form of rates.
Information on sources, both natural and anthropogenic, is usually helpful.
If the agent has natural sources, the contribution of these to environmental
concentrations may be relevant. These background concentrations may be
particularly important when the results of toxicity tests show a threshold or
distinctly nonlinear dose-response relationship. In a situation where only relative
or additional risk is considered, background levels may not be relevant.
4.1. Measurement Data for Point-of-Contact Assessments
This approach requires that chemical concentrations be measured at the
interface between the person and the environment, usually through the use of
personal monitors; there are currently no models to assist in the process of
obtaining the concentration-time data itself. The chemical concentrations
contacted in the media are measured by sampling the individual's breathing zone,
food, and water. These methodologies were originally developed for occupational
monitoring; they may have to be modified for exposures outside the workplace.
An example of this is the development of a small pump and collector used in the
TEAM studies (U.S. EPA, 1987a). In order to conduct these studies, a
monitoring device had to be developed that was sufficiently small and lightweight
so that it could be worn by the subjects.
The Total Human Exposure and Indoor Air Quality (U.S. EPA, 1988h)
report is a useful bibliography covering models, field data, and emerging research
methodologies, as well as new techniques for accurately determining exposure at
nonoccupational levels.
New data for a particular exposure assessment may be developed through
the use of point-of-contact methods, or data from prior studies can sometimes be
used. In determining whether existing point-of-contact monitoring data can be
used in another assessment, the assessor must consider the factors that existed in
the original study and that influenced the exposure levels measured. Some of
these factors are proximity to sources, activities of the studied individuals, time of
day, season, and weather conditions.
Point-of-contact data are valuable in evaluating overall population
exposure and checking the credibility of exposure estimates generated by other
methods.
4.2. Obtaining Chemical Concentration Information
The distribution of chemical concentrations is used to estimate the
concentration that comes in contact with the individual(s) at any given time and
place. This can be done through personal monitoring, but for a variety of
reasons, in a given assessment, personal monitoring may not be feasible.
Alternative methods involve measuring the concentration in the media, or
modeling the concentration distribution based on source strength, media transport,
and chemical transformation processes. For exposure scenario evaluation,
measurements and modeling of media concentrations are often used together.
Many types of measurements can be used to help determine the
distribution of chemical concentrations in media. They can be measurements of
the concentrations in the media themselves, measurements of source strength, or
measurements of environmental fate processes which will allow the assessor to
use a model to estimate the concentration in the media at the point of contact.
Table 4-1 illustrates some of the types of measurements used by exposure
assessors, along with notes concerning what additional information is usually
needed to use these measurements in estimating exposure or dose. For
epidemiologic studies, questionnaires are often used when data are not
measurable or are otherwise unavailable.
Table 4-1. Examples of types of measurements to characterize exposure-related
media and parameters.a

A. FOR USE IN EXPOSURE SCENARIO EVALUATION

1. Fixed-Location Monitoring
   Usually attempts to characterize: Environmental medium; samples used to
   establish long-term indications of media quality and trends.
   Examples: National Stream Quality Accounting Network (NASQAN),b water
   quality networks, air quality networks.
   Typical information needed to characterize exposure: Population location and
   activities relative to monitoring locations; fate of pollutants over distance
   between monitoring and point of exposure; time variation of pollutant
   concentration at point of exposure.

2. Short-Term Media Monitoring
   Usually attempts to characterize: Environmental or ambient medium; samples
   used to establish a snapshot of quality of medium over a relatively short time.
   Examples: Special studies of environmental media, indoor air.
   Typical information needed: Population location and activities (this is
   critical since it must be closely matched to variations in concentrations due
   to the short period of study); fate of pollutants between measurement point
   and point of exposure; time variation of pollutant concentration at point of
   exposure.

3. Source Monitoring of Facilities
   Usually attempts to characterize: Release rates to the environment from
   sources (facilities); often given in terms of relationships between release
   amounts and various operating parameters of the facilities.
   Examples: Stack sampling, effluent sampling, leachate sampling from landfills,
   incinerator ash sampling, fugitive emissions sampling, pollution control
   device sampling.
   Typical information needed: Fate of pollutants from point of entry into the
   environment to point of exposure; population location and activities; time
   variation of release.

4. Food Samples (also see #9 below)
   Usually attempts to characterize: Concentrations of contaminants in the food
   supply.
   Examples: FDA Total Diet Study Program,c market basket studies, shelf
   studies, cooked-food diet sampling.
   Typical information needed: Dietary habits of various age, sex, or cultural
   groups; relationship between food items sampled and groups (geographic,
   ethnic, demographic) studied; relationships between concentrations in
   uncooked versus prepared food.

5. Drinking Water Samples
   Usually attempts to characterize: Concentrations of pollutants in the
   drinking water supply.
   Examples: Ground Water Supply Survey,d Community Water Supply Survey,e
   tap water.
   Typical information needed: Fate and distribution of pollutants from point of
   sample to point of consumption; population served by specific facilities and
   consumption rates. For exposure due to other uses (e.g., cooking and
   showering), activity patterns and volatilization rates are also needed.

6. Consumer Products
   Usually attempts to characterize: Concentration levels of various products.
   Examples: Shelf surveys, e.g., solvent concentration in household cleaners.f
   Typical information needed: Use patterns and/or market share of particular
   products, individual exposure at various usage levels, extent of passive
   exposure.

7. Breathing Zone Measurements
   Usually attempts to characterize: Exposure to airborne chemicals.
   Examples: Industrial hygiene surveys, indoor air studies.
   Typical information needed: Location, activities, and time spent relative to
   monitoring locations; protective measures/avoidance.

8. Microenvironmental Studies
   Usually attempts to characterize: Ambient medium in a defined area, e.g.,
   kitchen, automobile interior, office setting, parking lot.
   Examples: Special studies of indoor air, house dust, radon measurements,
   office building studies.
   Typical information needed: Activities of study populations relative to
   monitoring locations and time exposed.

9. Surface Soil Sample
   Usually attempts to characterize: Degree of contamination of soil available
   for contact.
   Examples: Soil samples at contaminated sites.
   Typical information needed: Fate of pollution on/in soil; activities of
   potentially exposed populations.

10. Soil Core
   Usually attempts to characterize: Soil, including pollution available for
   ground water; indication of quality and trends over time.
   Examples: Soil sampling at hazardous waste sites.
   Typical information needed: Fate of substance in soil; speciation and
   bioavailability; contact and ingestion rates as a function of activity
   patterns and age.

11. Fish Tissue
   Usually attempts to characterize: Extent of contamination of edible fish
   tissue.
   Examples: National Shellfish Survey.g
   Typical information needed: Relationship of samples to food supply for
   individuals or population of interest; consumption habits; preparation
   habits.

B. FOR USE IN POINT-OF-CONTACT MEASUREMENT

1. Air Pump/Particulates and Vapors
   Usually attempts to characterize: Exposure of an individual or population via
   the air medium.
   Examples: TEAM study,h carbon monoxide study,i sampling in industrial
   settings.
   Typical information needed: Direct measurement of individual exposure during
   the time sampled. In order to characterize exposure to a population,
   relationships between the sampled individuals and the population must be
   established, as well as relationships between the times sampled and other
   times for the same individuals, and between these individuals and other
   populations. In order to make these links, the activities of the sampled
   individuals are needed in some detail.

2. Passive Vapor Sampling
   Usually attempts to characterize: Same as above.
   Examples: Same as above.
   Typical information needed: Same as above.

3. Split Sample Food/Split Sample Drinking Water
   Usually attempts to characterize: Exposure of an individual or population via
   ingestion.
   Examples: TEAM study.j
   Typical information needed: Same as above.

4. Skin Patch Samples
   Usually attempts to characterize: Dermal exposure of an individual or
   population.
   Examples: Pesticide Applicator Survey.k
   Typical information needed: 1) Same as above. 2) Skin penetration.

C. FOR USE IN EXPOSURE ESTIMATION FROM RECONSTRUCTED DOSE

1. Breath
   Usually attempts to characterize: Total internal dose for individuals or
   population (usually indicative of relatively recent exposures).
   Examples: Measurement of volatile organic chemicals (VOCs), alcohol (usually
   limited to volatile compounds).
   Typical information needed: 1) Relationship between individuals and
   population; exposure history (i.e., steady-state or not); pharmacokinetics
   (chemical half-life); possible storage reservoirs within the body.
   2) Relationship between breath content and body burden.

2. Blood
   Usually attempts to characterize: Total internal dose for individuals or
   population (may be indicative of either relatively recent exposures to
   fat-soluble organics or a long-term body burden for metals).
   Examples: Lead studies, pesticides, heavy metals (usually best for soluble
   compounds, although blood lipid analysis may reveal lipophilic compounds).
   Typical information needed: 1) Same as above. 2) Relationship between blood
   content and body burden.

3. Adipose
   Usually attempts to characterize: Total internal dose for individuals or
   population (usually indicative of long-term averages for fat-soluble
   organics).
   Examples: NHATS,l dioxin studies,m PCBs (usually limited to lipophilic
   compounds).
   Typical information needed: 1) Same as above. 2) Relationship between adipose
   content and body burden.

4. Nails, Hair
   Usually attempts to characterize: Total internal dose for individuals or
   population (usually indicative of past exposure in the weeks to months range;
   can sometimes be used to evaluate exposure patterns).
   Examples: Heavy metal studies (usually limited to metals).
   Typical information needed: 1) Same as above. 2) Relationship between nails
   or hair content and body burden.

5. Urine
   Usually attempts to characterize: Total internal dose for individuals or
   population (usually indicative of elimination rates); time from exposure to
   appearance in urine may vary, depending on chemical.
   Examples: Studies of [...] and trichloroethylene.n
   Typical information needed: 1) Same as above. 2) Relationship between urine
   content and body burden.

a To characterize dose, intake or uptake information is also needed (see Section 2).
b U.S. EPA (1985c).
c U.S. EPA (1986f).
d U.S. EPA (1985c).
e U.S. EPA (1985d).
f U.S. EPA (1985a).
g U.S. EPA (1986f).
h U.S. EPA (1987a).
i U.S. EPA (1987a).
j U.S. EPA (1987a).
k U.S. EPA (1987d).
l U.S. EPA (1986g).
m U.S. EPA (1986h).
n U.S. EPA (1987e).
4.2.1. Concentration Measurements in Environmental Media
Measured concentration data can be generated for the exposure
assessment by a new field study, or by evaluating concentration data from
completed field study results and using them to estimate concentrations. Media
measurements taken close to the point of contact with the individual(s) in space
and time are preferable to measurements far removed geographically or
temporally. As the distance from the point of contact increases, the certainty of
the data at the point of contact usually decreases, and the obligation for the
assessor to show relevance of the data to the assessment at hand becomes greater.
For example, an outdoor air measurement, no matter how close it is taken to the
point of contact, cannot by itself adequately characterize indoor exposure.
Concentrations can vary considerably from place to place, seasonally, and
over time due to changing emission and use patterns. This needs to be
considered not only when designing studies to collect new data, but especially
when evaluating the applicability of existing measurements as estimates of
exposure concentrations in a new assessment. It is a particular concern when the
measurement data will be used to extrapolate to long time periods such as a
lifetime. Transport and dispersion models are frequently used to help answer
these questions.
The exposure assessor is likely to encounter several different types of
measurements. One type of measurement used for general indications and trends
of concentrations is outdoor fixed-location monitoring. This measurement is used
by EPA and other groups to provide a record of pollutant concentration at one
place over time. Nationwide air and water monitoring programs have been
established so that baseline values in these environmental media can be
documented. Although it is not practical to set up a national monitoring network
to gather data for a particular exposure assessment, the data from existing
networks can be evaluated for relevance to an exposure assessment. These data
are usually somewhat removed, and often far removed, from the point of contact.
Adapting data from previous studies usually presents challenges similar to those
encountered when using network data. If new data are needed for the
assessment, studies measuring specific chemicals at specific locations and times
can be conducted.
Contaminant concentrations in indoor air can vary as much as or more than
those in outdoor air. Consequently, indoor exposure is best represented by
measurements taken at the point of contact. However, because pollutants such as
carbon monoxide can exhibit substantial indoor penetration, indoor exposure
estimates should consider potential outdoor as well as indoor sources of the
contaminant(s) under evaluation.
Food and drinking water measurements can also be made. General
characterization of these media, such as market basket studies (where
representative diets are characterized), shelf studies (where foodstuffs are taken
from store shelves and analyzed), or drinking water quality surveys, are usually
far removed from the point of contact for an individual, but may be useful in
evaluating exposure concentrations over a large population. Closer to the point
of contact would be measurements of tap water or foodstuffs in a home, and how
they are used. In evaluating the relevance of data from previous studies,
variations in the distribution systems must be considered as well as the space-time
proximity.
Consumer or industrial product analysis is sometimes done to characterize
the concentrations of chemicals in products. The formulation of products can
change substantially over time, similar products do not necessarily have similar
formulations, and regional differences in product formulation can also occur.
These should be considered when determining relevance of extant data and when
setting up sampling plans to gather new data.
Another type of concentration measurement is the microenvironmental
measurement. Rather than using measurements to characterize the entire
medium, this approach defines specific zones in which the concentration in the
medium of interest is thought to be relatively homogeneous, then characterizes the
concentration in that zone. Typical microenvironments include the home or parts
of the home, office, automobile, or other indoor settings. Microenvironments can
also be divided into time segments (e.g., kitchen-day, kitchen-night). This
approach can produce measurements that are closely linked with the point of
contact both in location and time, especially when new data are generated for a
particular exposure assessment. The more specific the microenvironment,
however, the greater the burden on the exposure assessor to establish that the
measurements are representative of the population of interest. Adapting existing
data bases in this area to a particular exposure assessment requires the usual
evaluation discussed throughout this section.
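When microenvironmental concentrations and the time spent in each zone are both
available, they are commonly combined into a time-weighted average exposure
concentration, as in the sketch below; the zones, concentrations, and hours are
invented for illustration.

    # Hypothetical microenvironmental concentrations (ug/m3) and hours per day.
    microenvironments = {
        "kitchen": (35.0, 2.0),
        "rest of home": (15.0, 14.0),
        "office": (10.0, 7.0),
        "automobile": (60.0, 1.0),
    }

    total_hours = sum(hours for _, hours in microenvironments.values())
    time_weighted = sum(conc * hours
                        for conc, hours in microenvironments.values()) / total_hours
    print(round(time_weighted, 1))   # about 17.1 ug/m3 over a 24-hour day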
The concentration measurement that provides the closest link to the actual
point of contact uses personal monitoring, which is discussed in Section 4.3.
4.2.2. Use of Models for Concentration Estimation
If concentrations in the media cannot be measured, they can frequently be
estimated indirectly by using related measurements and models. To accomplish
this, source and fate information are usually needed. Source characterization
data are used as input to transport and transformation models (environmental
fate models). These models use a combination of general relationships and
situation-specific information to estimate concentrations. In exposure
assessments, mathematical models are used extensively to calculate environmental
fate and transport, concentrations of chemicals in different environmental media,
the distribution of concentrations over space and time, indoor air levels of
chemicals, concentrations in foods, etc. In determining the relevance of this type
of model for estimating concentrations, the same rules apply as for the
measurements of concentrations discussed in the previous section. When
concentrations in the media are available, models can be used to interpolate
concentrations between measurements. Because models rely on indirect
measurements and data remote from the point of contact, statistically valid
analytical measurements take precedence when discrepancies arise. When it is
necessary to estimate contributions of individual sources to overall concentrations,
models are commonly used.
Source characterization measurements usually determine the rate of
release of chemicals into the environment from a point of emission such as an
incinerator, landfill, industrial facility, or other source. Often these measurements
are used to estimate emission factors, or a relationship between releases and
facility operations. Since emission factors are usually averages over time, the
assessor must determine whether given emission factors from previous work are
relevant to the time specificity and source type needed for the exposure
assessment. Generally, emission factors are more useful for long-term average
emission calculations, and become less useful when applied to intermittent or
short-term exposures.
Environmental fate measurements can be either field measurements (field
degradation studies, for example) or laboratory measurements (partition
coefficients, hydrolysis, or biodegradation rates, etc.). Approximations for these
rates can sometimes also be calculated (Lyman et al., 1982).
Environmental fate models calculate estimated concentrations in media that,
in turn, are linked to the concentrations at the point of contact. The use of
estimated properties or rates adds to the uncertainty in the exposure
concentration estimate. When assessors use these methods to estimate exposures,
uncertainties attributable to the model and the validation status of the model
must be clearly discussed in the uncertainty section (see discussion in Section 6).
4.2.3. Selection of Models for Environmental Concentrations
Selection of an appropriate model is essential for successful simulation of
chemical concentrations. In most cases assessors will be able to choose between
several models, any of which could be used to estimate environmental
concentrations. There is no right model; there may not even be a best model.
There are, however, several factors that will help in selecting an appropriate
model for the study. The assessor should consider the objectives of the study, the
technical capabilities of the models, how readily the models can be obtained, and
how difficult each is to use (U.S. EPA, 1987b, 1988f).
The primary consideration in selecting a model is the objective of the
exposure assessment. The associated schedule, budget, and other resource
constraints will also affect model selection options. Models are available to
support both screening-level and detailed, site-specific studies. Screening models
can provide quick, easy, and cost-effective estimates of environmental
concentrations. They can support data collection efforts at the site by indicating
the required level of detection and quantification and the locations where
chemical concentrations are expected to be highest. They are also used to
interpolate chemical concentrations between measurements. Where study
objectives require the best estimates of spatial and temporal distributions of
chemicals, more sophisticated models are available. These models require more
and better data to characterize the site, and therefore site-specific data may be
needed in order to use them.
The technical capabilities of a model are expressed in its ability to
simulate site-specific contaminant transport and transformation processes. The
model must be able to simulate the relevant processes occurring within the
specified environmental setting. It must adequately represent the physical setting
(e.g., the geometric configuration of hydrogeological systems, river widths and
depths, soil profiles, meteorological patterns, etc.) and the chemical
transformation processes. Field data from the area where doses are to be
estimated are necessary to define the input parameters required to use the
models. In cases in which these data are not available, parameter values
representative of field conditions should be used as defaults. Assumptions of
homogeneity and simplification of site geometry may allow use of simpler models.
In addition, it is important to thoroughly understand the performance
characteristics of the model used. This is especially true with regard to the more
complex models. Detailed models can be quite complex with a large number of
input variables, outputs, and computer-related requirements.
4.3. Estimating Duration of Contact
As discussed in Section 2, the duration of contact is linked to a particular
exposure concentration to estimate exposure. Depending on the purpose of the
assessment and the confidence needed in the accuracy of the final estimate,
several approaches for obtaining estimates of duration of contact can be used.
Ideally, the time that the individual is in contact with a chemical would be
observed and recorded, and linked to the concentrations of the chemical during
those time segments. Although it is sometimes feasible to do this (by point-of-
contact measurement, see Section 4.1), many times it is not. In those cases, as in
concentration characterization, the duration of contact must be estimated by using
data that may be somewhat removed from the actual point of contact, and
assumptions must be made as to the relevance of the data.
It is common for the estimate of duration of contact at a given
concentration to be the single largest source of uncertainty in an exposure
assessment.24 The exposure assessor, in developing or selecting data for making
estimates of duration of contact, must often assume that the available data
adequately represent exposure.
4.3.1. Observation and Survey Data
Observation and recording of activities, including location-time data, are
likely to be the types of data collection closest to the point of contact. This can
be done by an observer or the person(s) being evaluated for exposure, and can be
done for an individual, a population segment, or a population. The usual method
for obtaining these data for population segments or populations is survey
questionnaires. Surveys can be performed as part of the data-gathering efforts of
the exposure assessment, or existing survey data can be used if appropriate.
There are several approaches used in activity surveys, including diaries,
respondent or third-party estimates, momentary sampling, videomonitoring, and
behavioral meters. The diary approach, probably the most powerful method for
developing activity patterns, provides a sequential record of a person's activities
during a specified time period. Typical time-diary studies are done across a day
or a week. Diary forms are designed to have respondents report all their
activities and locations for that period. Carefully designed forms are especially
important for diary studies to ensure that data reported by each individual are
comparable.

24 Conversely, it may be stated that the largest source of uncertainty is the concentration for a
given exposure duration. Often, however, the concentration in the media is known with more
certainty than the activities of the individual(s) exposed.

The resulting time budget is a sample of activity that can be used to
characterize an individual's behavior, activities, or other features during the
observation period. Sequential activity monitoring forms the basis of an activity
profile. Several studies have demonstrated the reliability of the diary method in
terms of its ability to produce similar estimates. One study (Robinson, 1977)
found a 0.85 correlation between diary estimates using the yesterday and
tomorrow approaches and a 0.86 correlation between overall estimates.
However, no definitive study has established the validity of time-diary data.
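A diary is typically reduced to a time budget by summing the durations recorded
for each location or activity category; the short sketch below, with invented
diary entries, shows the basic bookkeeping.

    from collections import defaultdict

    # Hypothetical one-day diary: (start hour, end hour, location).
    diary = [(0, 7, "home"), (7, 8, "commute"), (8, 12, "office"),
             (12, 13, "outdoors"), (13, 17, "office"), (17, 18, "commute"),
             (18, 24, "home")]

    time_budget = defaultdict(float)
    for start, end, location in diary:
        time_budget[location] += end - start

    print(dict(time_budget))   # hours per location for this respondent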
Questionnaires use direct questions to collect the basic data needed.
Questionnaire design is a complex and subtle process, and should only be
attempted with the help of professionals well-versed in survey techniques. A
useful set of guidelines is provided in the Survey Management Handbook (U.S.
EPA, 1984b).
Respondent estimates are the least expensive and most commonly used
questionnaire alternative. Respondents are simply asked to estimate the time
they spend at a particular activity. Basically, the question is, how many hours did
you spend doing this activity (or in this location or using a certain product)? In
exposure studies, respondents may be asked how often they use a chemical or
product of interest or perform a specific activity. These data are less precise and
likely to be somewhat less accurate than a carefully conducted diary approach.
At a less demanding level, respondents may be asked whether their homes
contain items of interest (pesticides, etc.). Since this information is not time-of-
activity data, it is more useful in characterizing whether the chemical of interest is
present. It does, however, give the assessor some indication that use may or may
not occur.
Estimates from other respondents (third parties) use essentially the same
approach, except that other informants respond for that individual. Here the
question is how many hours per week does the target person spend doing this
activity?
Momentary (beeper) sampling or telephone-coincidental techniques ask
respondents to give only brief reports for a specific moment - usually the
moment the respondent's home telephone or beeper sounds. This approach is
limited to times when people are at home or able to carry beepers with them.
Methods that use behavioral meter or monitoring devices are probably the
most expensive approach, since they require the use or development of
equipment, respondent agreement to use such equipment, and technical help to
install or adjust the equipment.
The Exposure Factors Handbook (U.S. EPA, 1989c) contains a summary
of published data on activity patterns along with citations. Note that the
summary data and the mean values cited are for the data sets included in the
Handbook, and may or may not be appropriate for any given assessment.
4.3.2. Developing Other Estimates of Duration of Contact
When activity surveys cannot be used to estimate duration of contact, it
may be estimated from more indirect data. This is the least expensive and most
commonly used approach for generating estimates of duration of contact; it is also
the least accurate. But for some situations, such as assessing the risk of new
chemicals being introduced into the marketplace or in assessing future possible
uses of contaminated sites, it is the only approach that can be used.
In general the methods used to make these estimates fall into two areas:
(1) those where the time it takes to perform an activity is itself estimated, and (2)
those where an average duration of contact is estimated by combining the time of
a unit activity with data on the use of a product or commodity.
Methods that try to estimate the time of a particular activity include
general time-and-motion studies that might be adapted for use in an exposure
assessment, general marketing data which include time of use, anecdotal
information, personal experience, and assumptions about the amount of time it
takes to perform an activity.
Methods that estimate average times for activities from product or
commodity use usually interpret data on product sales or marketing surveys,
water use, general food sales, etc. Information on use can be combined with an
estimate of the number of persons using the product to estimate the average
consumption of the product. If an estimate of the duration of contact with one
unit (product, gallon of water, etc.) can be made, this can then be multiplied by
the average number of units consumed to arrive at an estimate of average
duration of contact for each individual.
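As a hypothetical illustration of this second method, total product sales can be
divided by the estimated number of users to give average units per person, which
is then multiplied by an assumed contact time per unit; every value below is
invented.

    # Hypothetical marketing data for a consumer product.
    units_sold_per_year = 2.0e6          # units sold annually
    persons_using_product = 5.0e5        # estimated number of users
    minutes_of_contact_per_unit = 30.0   # assumed contact time per unit used

    units_per_person = units_sold_per_year / persons_using_product
    hours_per_person_per_year = (units_per_person *
                                 minutes_of_contact_per_unit / 60.0)
    print(units_per_person, hours_per_person_per_year)   # 4.0 units, 2.0 hours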
Duration-of-contact estimates based on data collected close to the actual
point of contact are preferable to those based on indirect measurements; both of
these are preferred to estimates based on assumptions alone. This hierarchy is
useful in both the data-gathering process and uncertainty analysis.
4.4. Obtaining Data on Body Burden or Biomarkers
Body burden or biomarker data denote the presence of the chemical inside
the body of exposed individuals. In a reconstructive assessment, these data, in
conjunction with other environmental monitoring data, may provide a better
estimate of exposure.
A biomarker of exposure has been defined as an exogenous substance or
its metabolite or the product of an interaction between a xenobiotic agent and
some target molecule or cell that is measured in a compartment within an
organism (NRC, 1989a). Examples of simple direct biomarkers include the
chemical itself in body fluid, tissue, or breath. Measurable changes in the
physiology of the organism can also constitute markers of exposure. Examples
include changes in a particular enzyme synthesis and activity. The interaction of
xenobiotic compounds with physiological receptors can produce measurable
complexes which also serve as exposure biomarkers. Other markers of exposure
include xenobiotic species adducted to protein or DNA, as well as a variety of
genotoxicity endpoints, such as micronuclei and mutation. Some biomarkers are
specific to a given chemical while others may result from exposure to numerous
individual compounds or classes of compounds.
Biomarker data alone do not usually constitute a complete exposure
assessment, since these data must be associated with external exposures.
However, biomarker data complement other environmental monitoring data and
modeling activities in estimating exposure.
4.5. Obtaining Data for Pharmacokinetic Relationships
To estimate dose from exposure, one must understand the
pharmacokinetics of the chemical of interest. This is particularly true when
comparing risks resulting from different exposure situations. Two widely
different exposure profiles for the same chemical may have the same integrated
exposure (area under the curve), but may not result in the same internal dose due
to variations in disposition of the chemical under the two profiles. For example,
enzymes that normally could metabolize low concentrations of a chemical may be
saturated when the chemical is absorbed in high doses, resulting in a higher dose
delivered to target tissues. The result of these two exposures may even be a
different toxicological endpoint, if pharmacokinetic sensitivities are severe
enough.
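The point about equal integrated exposures producing different internal doses can
be illustrated with a crude one-compartment sketch having saturable
(Michaelis-Menten) elimination; it compares a short, high-concentration profile
with a long, low-concentration profile of equal area under the curve. The model
form, parameter values, and units are hypothetical and are not drawn from any
particular chemical.

    def body_burden_after_exposure(conc, hours, vmax=5.0, km=2.0, uptake=1.0,
                                   dt=0.01):
        """Euler integration of body burden during a constant exposure lasting
        `hours`, with linear uptake and Michaelis-Menten metabolism. Returns
        the amount remaining in the body at the end of the exposure."""
        amount = 0.0
        for _ in range(int(hours / dt)):
            intake = uptake * conc                       # absorption rate
            metabolism = vmax * amount / (km + amount)   # saturable elimination
            amount += (intake - metabolism) * dt
        return amount

    # Two profiles with the same integrated exposure (concentration x time = 40).
    print(round(body_burden_after_exposure(conc=20.0, hours=2.0), 1))  # short, high
    print(round(body_burden_after_exposure(conc=2.0, hours=20.0), 1))  # long, low

In this invented example the brief, high-concentration profile leaves far more of
the chemical in the body because metabolism saturates, even though the two areas
under the curve are identical.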
An iterative approach, including both monitoring and modeling, is
necessary for proper data generation and analysis. Data collection includes
monitoring of environmental media, personal exposure, biomarkers, and
pharmacokinetic data. It may involve monitoring for the chemical, metabolites,
or the target biomarker. Monitoring activities must be designed to yield data that
are useful for model formulation and validation. Modeling activities must be
designed to simulate processes that can be monitored with available techniques.
The pharmacokinetic data necessary for model development are usually obtained
from laboratory studies with animals. The data are generated in experiments
designed to estimate such model parameters as the time course of the process,
absorption, distribution, metabolism, and elimination of the chemical. These data,
and the pharmacokinetic models developed from them, are necessary to interpret
field biomarker data.
4.6. Obtaining Data on Intake and Uptake
The Exposure Factors Handbook (U.S. EPA, 1989c) presents statistical
data on many of the factors used in assessing exposure, including intake rates, and
provides citations for the primary references. Some of these data were developed
by researchers using approaches discussed in Section 4.3.1 (for example, Pao et
al. (1982) used the diary approach in a study of food consumption). Intake
factors included are:
drinking water consumption rates;
consumption rates for homegrown fruits, vegetables, beef, and dairy
products;
consumption rates for recreationally caught fish and shellfish;
incidental soil ingestion rates;
pulmonary ventilation rates; and
surface areas of various parts of the human body.
The Exposure Factors Handbook is being updated to encompass additional
factors and to include new research data on the factors currently covered. It also
provides default parameter values that can be used when site-specific data are not
available. Obviously, general default values should not be used in place of
known, valid data that are more relevant to the assessment being done.
5. USING DATA TO DETERMINE OR ESTIMATE EXPOSURE AND
DOSE
Collecting and assembling data, as discussed in the previous section, is
often an iterative process. Once the data are assembled, inferences can be made
about exposure concentrations, times of contact, and exposures to persons other
than those for whom data are available. During this process, there usually will be
gaps in information that can be filled by making a series of assumptions. If these
gaps are in areas critical to the accuracy of the assessment, further data collection
may be necessary.
Once an acceptable data set25 is available, the assessor can calculate
exposure or dose. Depending on the method used to quantify exposure, there are
several ways to calculate exposure and dose. This chapter will discuss making
inferences (Section 5.1), assumptions (Section 5.2), and calculations (Section 5.3).
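For example, when exposure is quantified by scenario evaluation, potential dose is
commonly computed by combining an exposure concentration with an intake rate,
exposure duration, body weight, and averaging time. The sketch below uses that
familiar average-daily-dose form with hypothetical inputs; the values are not
recommended defaults, and the equation is only one of the dose metrics discussed
in Section 2.

    def average_daily_dose(conc_mg_per_l, intake_l_per_day, exposure_days,
                           body_weight_kg, averaging_days):
        """Potential average daily dose (mg/kg-day) from ingestion of drinking
        water, averaged over the chosen averaging time."""
        return (conc_mg_per_l * intake_l_per_day * exposure_days /
                (body_weight_kg * averaging_days))

    # Hypothetical inputs: 0.005 mg/L in tap water, 2 L/day, 350 days/year for
    # 9 years, 70 kg body weight, averaged over the 9-year exposure period.
    print(average_daily_dose(0.005, 2.0, 350 * 9, 70.0, 365 * 9))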
5.1. Use of Data in Making Inferences for Exposure Assessments
Inferences are generalizations that go beyond the information contained in
a data set. The credibility of an inference is often related to the method used to
make it and the supporting data. Anecdotal information is the source of one type
of inference, but the assessor has only limited knowledge of how well one
anecdote represents the realm of possibilities, so anecdotes as a basis for
inference should be used only with considerable caution. Professional judgment
is usually preferred to anecdotes, assuming that it is based on experience
representing a variety of conditions. Statistical inferences also are generalizations
that go beyond the data set. They may take any of several forms (see any
statistics textbook for examples), but unlike those described above, a statistical
inference will usually include a measure of how certain it is. For that reason,
statistical inferences are often preferable to anecdotes or professional judgment
provided the data are shown to be relevant and adequate.
25 An acceptable data set is one that is consistent with the scope, depth, and purpose of the
assessment, and is both relevant and adequate as discussed in Section 5.1.
As discussed above, the primary use of data from exposure-related
measurements is to infer more general information about exposure
concentrations, contact times, exposures, or doses. For example, measured
concentrations in a medium can be used to infer what the concentration might be
at the point of contact, which may not have been measured directly. Point-of-
contact measurement data for one group of people may be used to infer the
exposures of a similar group, or to infer what the exposures of the same group
might be at different times.
In all cases, the exposure assessor must have a clear picture of the
relationship between the data at hand and what is being characterized by
inference. For example, surface water concentration data alone, although
essential for characterizing the medium itself, are not necessarily useful for
inferring exposures from surface water, since other information is necessary to
complete the link between surface water and exposure. But the medium's
characteristics (over space and time) can be used, along with the location and
activities of individuals or populations, to estimate exposures. Samples taken for
exposure assessment may be designed to characterize different aspects (or
components) of exposure. For example, a sample taken as a point-of-contact
exposure measurement is qualitatively different from a sample of an
environmental medium or body fluid.
Different measurements taken under the general category of exposure-
related measurements cannot necessarily all be used in the same way. The
exposure assessor must explain the relationship between the sample data and the
inferences or conclusions being drawn from them. In order to do this, data
relevance, adequacy, and uncertainty must be evaluated.
5.1.1. Relevance of Data for the Intended Exposure Assessment
When making inferences from a data set, the assessor must establish a
clear link between the data and the inference. When statistically based sampling
is used to generate data, relevance is a function of how well the sample
represents the medium or parameter being characterized. When planning data
collection for an exposure assessment, the assessor can use information about the
inferences that will be made to select the best measurement techniques. In many
cases data are also available from earlier studies. The assessor must determine
(and state) how relevant the available data are to the current assessment; this is
usually easier for new data than for previously collected information.
5.1.2. Adequacy of Data for the Intended Assessment
Table 4-1 in the previous section illustrated how different types of
measurements may be used to characterize a variety of concentrations, contact
times, and intake or uptake parameters. Nevertheless, the fact that certain types
of measurements generally can be used to make certain inferences does not
guarantee that this can always be done. The adequacy of the data to make
inferences is determined by evaluating the amount of data available and the
accuracy of the data. Evaluation of the adequacy of data will ensure that the
exposure assessment is conducted with data of known quality.
In general, inadequate data should not be used, but when it can be
demonstrated that the inadequacies do not affect results, it is sometimes possible
to use such data. In these cases, an explanation should be given as to why the
inadequacies do not invalidate conclusions drawn from them. In some cases, even
seriously inadequate or only partially relevant data may be the only data
available, and some information may be gained from their consideration. It may
not be possible to discard these data entirely unless better data are available. If
these data are used, the uncertainties and resulting limitations of the inferences
should be clearly stated. If data are rejected for use in favor of better data, the
rationale for rejection should be clearly stated and the basis for retaining the
selected data should be documented. QA/QC considerations are paramount in
considerations of which data to keep and which to discard.
Outliers should not be eliminated from data analysis procedures unless it
can be shown that an error has occurred in the sample collection or analysis
phases of the study. Very often outliers provide much information to the study
evaluators. Statistical tests such as the Dixon test exist to determine the presence
of outliers (Dixon, 1950, 1951, 1953, 1960).
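As an illustration, a minimal Python sketch of Dixon's Q statistic is given below; the concentrations are hypothetical, and the computed Q would still have to be compared against a tabulated critical value for the sample size and significance level chosen.

def dixon_q(values):
    """Dixon's Q statistic for the most extreme observation in a small
    sample: the gap between the suspect value and its nearest neighbor,
    divided by the overall range. The result is compared against a
    tabulated critical value (see Dixon, 1950, 1951, 1953, 1960)."""
    xs = sorted(values)
    data_range = xs[-1] - xs[0]
    if data_range == 0:
        return 0.0
    q_low = (xs[1] - xs[0]) / data_range      # suspect low outlier
    q_high = (xs[-1] - xs[-2]) / data_range   # suspect high outlier
    return max(q_low, q_high)

# Hypothetical concentrations with one suspiciously high value
print(round(dixon_q([2.1, 2.4, 2.2, 2.3, 2.0, 5.9]), 2))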
5.1.2.1. Evaluation of Analytical Methods
Analytical methods are evaluated in order to develop a data set based on
validated analytical methods and appropriate QA/QC procedures. In a larger
sense, analytical methods can be evaluated to determine the strength of the
inferences made from them, and in turn, the confidence in the exposure
assessment itself. Consequently, it is just as important to evaluate analytical
methods used for data generated under another study as it is to evaluate the
methods used to generate new data.
The EPA has established extensive QA/QC procedures (U.S. EPA, 1980).
Before measurement data are used in the assessment, they should be evaluated
against these procedures and the results stated. If this is not possible, the assessor
must consider what effect the unknown quality of the data has on the confidence
placed on the inferences and conclusions of the assessment.
5.1.2.2. Evaluation of Analytical Data Reports
An assortment of qualifiers is often used in data validation. These
qualifiers are used to indicate QA/QC problems such as uncertain chemical
identity or difficulty in determining chemical concentration. Qualifiers usually
appear on a laboratory analysis report as a letter of the alphabet next to the
analytical result. Some examples of data qualifiers, applied by U.S. EPA regional
reviewers for Contract Laboratory Program (CLP) data include:
B (blank) - the analyte was found in blank samples;
J (judgment) - the compound is present but the concentration value is
estimated;
U (undetected) - the chemical was analyzed for but not detected at the
detection limit;
R (reject) - the quality control indicates that the data are unusable.
The exposure assessor may contact the laboratory or the person who validated
the data if the definitions of the qualifiers are unclear. Since the exposure
assessment is only as good as the data supporting it, it is essential to interpret
these types of data properly to avoid misrepresenting the data set or biasing the
results.
5.1.2.2.1. Evaluation of Censored Data Sets
Exposure assessors commonly encounter data sets containing values that
are lower than limits deemed reliable enough to report as numerical values (i.e.,
quantification limits [QL]). These data points are often reported as nondetected
and are referred to as censored. The level of censoring is based on the
confidence with which the analytical signal can be discerned from the noise.
While the concentration may be highly uncertain for substances below the
reporting limit, it does not necessarily mean that the concentration is zero. As a
result the exposure assessor is often faced with the problem of having to estimate
values for the censored data. Although a variety of techniques have been
described in the literature, no one procedure is appropriate under all exposure
assessment circumstances; thus, the exposure assessor will need to decide on the
appropriate method for a given situation. Techniques for analyzing censored data
sets can be grouped into three classes (Helsel, 1990): simple substitution
methods, distributional methods, and robust methods.
Simple substitution methods, the most commonly encountered technique,
involve substitution of a single value as a proxy for each nondetected data value.
Frequently used values have included zero, the QL, QL/2, and QL/√2.26
26 Some programs, such as the U.S. Department of Energy (1991), do not recommend this
procedure at all, if it can be avoided.
In the worst-case approach, all nondetects are assigned the value of the
QL, which is the lowest level at which a chemical may be accurately and
reproducibly quantitated. This approach biases the mean upward. On the other
hand, assigning all nondetects the value of zero biases the mean downward. The
degree to which the results are biased will depend on the relative number of
detects and nondetects in the data set and the difference between the reporting
limit and the measured values above it.
In an effort to minimize the obvious bias introduced by choosing either
zero or the QL as the proxy, two other values have been suggested, i.e., QL/2
and QL/√2. Assigning all nondetects as QL/2 (Nehls and Akland, 1973) assumes
that all values between the QL and zero are equally likely; therefore, an average
value would result if many samples in this range were measured. Hornung and
Reed (1990) discuss the merits of assigning a value of QL/√2 for nondetects
rather than QL/2 if the data are not highly skewed (geometric standard deviation
< 3.0); otherwise they suggest using QL/2.
Based on reported analyses of simulated data sets that have been censored
to varying degrees (Gleit, 1985; Hornung and Reed, 1990; Gilliom and Helsel,
1986; Helsel and Cohn, 1988), it can be concluded that substitution with QL/2 or
QL/√2 for nondetects will be adequate for most exposure assessments provided
that the nondetects do not exceed 10% to 15% of the data set and the data are not
highly skewed. When such situations arise, the additional effort to make use of
more sophisticated methods as discussed below is recommended. On the other
hand, the exposure assessor may encounter situations in which the purpose of the
assessment is only to serve as a screen to determine whether a health concern has
been triggered or whether a more detailed study is required; in such cases, assigning
the value of the QL to all nondetect values can be justified. If, when using this
conservative approach, no concern is indicated, then no further effort is warranted.
This method cannot be used to prove that an unacceptable risk exists, and any
exposure values calculated using this method should be caveated and clearly
presented as "less than" estimates.
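A minimal Python sketch of the simple substitution approach is given below; the concentrations, quantitation limit, and choice of proxy are hypothetical.

import math
import statistics

def substitute_nondetects(values, detected, ql, method="ql/sqrt2"):
    """Replace nondetected results with a simple-substitution proxy.
    values   : reported concentrations
    detected : True where the analyte was detected, False for nondetects
    ql       : quantitation limit
    method   : "zero", "ql", "ql/2", or "ql/sqrt2"
    """
    proxies = {"zero": 0.0, "ql": ql, "ql/2": ql / 2.0, "ql/sqrt2": ql / math.sqrt(2.0)}
    proxy = proxies[method]
    return [v if d else proxy for v, d in zip(values, detected)]

# Hypothetical data set: 8 detects and 3 nondetects, with a QL of 1.0
conc = [4.2, 2.7, 1.3, 6.1, 1.0, 1.0, 1.0, 3.3, 2.2, 5.0, 1.8]
det = [True, True, True, True, False, False, False, True, True, True, True]

filled = substitute_nondetects(conc, det, ql=1.0, method="ql/sqrt2")
print("mean =", round(statistics.mean(filled), 2))
print("fraction nondetect =", det.count(False) / len(det))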
Distributional methods, unlike simple substitution methods, make use of
the data above the reporting limit to extrapolate below it. One such technique is
the use of log-probit analysis. This approach assumes a lognormal probability
distribution of the data. In the probit analysis, the detected values are plotted on
a log-probability scale and the nondetectable values are treated as unknowns, but their
percentages are accounted for. The geometric mean is determined from the 50th
percentile. Travis and Land (1990) discuss limitations of the method, but it is
generally less biased and more accurate than the frequently
used substitution methods. This method is useful in situations where the data set
contains enough data points above the reporting limit to define the distribution
function for the exposure values (i.e., lognormal) with an acceptable degree of
confidence. The treatment of the nondetectable samples is then straightforward,
assuming the nondetectable samples follow the same distribution as those above
the reporting limit.
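The following Python sketch shows one way a log-probit estimate of the geometric mean might be computed, assuming a lognormal distribution and fitting an ordinary least-squares line to the detected values plotted against normal scores; the data and the plotting-position convention are illustrative assumptions rather than a prescribed procedure.

import math
from statistics import NormalDist

def logprobit_geometric_mean(detects, n_total):
    """Estimate the geometric mean of a left-censored data set by
    regressing log(detected value) on normal scores; the censored
    observations are counted only through their share of the ranks."""
    nd = NormalDist()
    n_cens = n_total - len(detects)
    xs, ys = [], []
    for i, v in enumerate(sorted(detects), start=1):
        p = (n_cens + i - 0.5) / n_total   # plotting position above the censored fraction
        xs.append(nd.inv_cdf(p))           # normal score (probit)
        ys.append(math.log(v))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    # The fitted value at the 50th percentile (normal score 0) is the geometric mean.
    return math.exp(intercept)

# Hypothetical example: 7 detects out of 10 samples
print(round(logprobit_geometric_mean([1.2, 1.8, 2.5, 3.1, 4.4, 6.0, 9.5], n_total=10), 2))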
Robust methods have an advantage over distributional methods in so far
as they do not assume that the data above the reporting limit follow a defined
distribution (e.g., lognormal) and they are not subject to transformation bias in
going from logarithms back to original units. Gilliom and Helsel (1986) have
described the application of several approaches to data sets of varying sample
size and degree of censoring. These methods involve somewhat more data
manipulation than the log-probit method discussed earlier in this Section, but they
may be more appropriate to use when the observed data do not fit a lognormal
distribution. Generally, these methods only assume a distributional form for the
censored values rather than the entire data set, and extrapolation from the
uncensored data is done by using regression techniques.
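A minimal sketch of a robust regression-on-order-statistics calculation, in the spirit of the approaches described by Gilliom and Helsel (1986), is given below; the plotting positions, data, and choice of the arithmetic mean as the summary statistic are illustrative assumptions.

import math
from statistics import NormalDist, mean

def robust_ros_mean(detects, n_nondetects):
    """Fit a lognormal model to the detected values only, impute the
    censored observations from the fitted line at their plotting
    positions, and compute the mean in original units from the combined
    data (retransformation is applied only to the imputed values)."""
    nd = NormalDist()
    n_total = len(detects) + n_nondetects
    det_sorted = sorted(detects)
    zs_det = [nd.inv_cdf((n_nondetects + i - 0.5) / n_total)
              for i in range(1, len(det_sorted) + 1)]
    ys = [math.log(v) for v in det_sorted]
    n = len(zs_det)
    mz, my = sum(zs_det) / n, sum(ys) / n
    slope = sum((z - mz) * (y - my) for z, y in zip(zs_det, ys)) / sum((z - mz) ** 2 for z in zs_det)
    intercept = my - slope * mz
    zs_cens = [nd.inv_cdf((i - 0.5) / n_total) for i in range(1, n_nondetects + 1)]
    imputed = [math.exp(intercept + slope * z) for z in zs_cens]
    return mean(imputed + det_sorted)

# Hypothetical example: 3 nondetects and 7 detects
print(round(robust_ros_mean([1.2, 1.8, 2.5, 3.1, 4.4, 6.0, 9.5], n_nondetects=3), 2))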
In summary, when dealing with censored data sets, a variety of approaches
can be used by the exposure assessor. Selecting the appropriate method requires
consideration of the degree of censoring, the goals of the exposure assessment,
and the accuracy required. Regardless of the method selected, the assessor
should explain the choice made and how it may affect the summary statistics.
Presenting only the summary statistics developed by one of these methods should
be avoided. It is always useful to include a characterization of the data by the
percentage of detects and nondetects in language such as "in 37% of the samples
the chemical was detected above the quantitation limit; of these 37%, the mean
concentration was 47 ppm, the standard deviation was 5 ppm, etc."
5.1.2.2.2. Blanks and Recovery
Blank samples should be compared with the results from their
corresponding samples. When comparing blank samples to the data set, the
following rules should be followed (outlined in Section 3):
Sample results should be reported only if the concentrations in the
sample exceed 10 times the maximum amount detected in the blank
for common laboratory contaminants. Common laboratory
contaminants include: acetone, 2-butanone (or methyl ethyl
ketone), methylene chloride, toluene, and phthalate esters.
Sample results should be reported only if the concentrations in the
sample exceed 5 times the maximum amount detected in a blank for
chemicals that are not common laboratory contaminants.
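A minimal Python sketch of this blank-comparison screen is given below; the contaminant names come from the list above, while the sample and blank concentrations are hypothetical.

COMMON_LAB_CONTAMINANTS = {
    "acetone", "2-butanone", "methyl ethyl ketone",
    "methylene chloride", "toluene", "phthalate esters",
}

def reportable(chemical, sample_conc, max_blank_conc):
    """Report a sample result only if it exceeds 10 times the maximum
    blank concentration for common laboratory contaminants, or 5 times
    for chemicals that are not common laboratory contaminants."""
    factor = 10.0 if chemical.lower() in COMMON_LAB_CONTAMINANTS else 5.0
    return sample_conc > factor * max_blank_conc

# Hypothetical results (arbitrary concentration units)
print(reportable("acetone", sample_conc=45.0, max_blank_conc=5.0))   # False: below 10x the blank
print(reportable("benzene", sample_conc=45.0, max_blank_conc=5.0))   # True: above 5x the blank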
In general, for other types of qualifiers, the exposure assessor may include
the data with qualifiers if they indicate that a chemical's concentration is
uncertain, but its identity is known. If possible, the uncertainties associated with
the qualifier should be noted.
Chemical spike samples that show abnormally high or low recoveries may
result in qualified or rejected data. Assessors should not use rejected data; these
samples should be treated as if the samples were not taken, since the resulting
data are unreliable. Typically, analytical results are reported from the laboratory
unadjusted for recovery, with the recovery percentage also reported. The
assessor must determine how these data should be used to calculate exposures. If
recovery is near 100%, concentrations are not normally adjusted (although the
implicit assumption of 100% recovery should be mentioned in the uncertainty
section). However, the assessor may need to adjust the data to account for
consistent, but abnormally high or low recovery. The rationale for such
adjustments should be clearly explained; individual program offices may develop
guidance on the acceptable percent recovery limits before data adjustment or
rejection is necessary.
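The sketch below illustrates one way such a recovery adjustment might be implemented; the 70% to 130% acceptance limits are hypothetical placeholders for whatever limits a program office might establish.

def adjust_for_recovery(measured_conc, recovery_pct, low=70.0, high=130.0):
    """If recovery is near 100% (within hypothetical program limits),
    report the unadjusted concentration and note the implicit assumption
    of complete recovery; otherwise correct for consistent, abnormally
    high or low recovery by dividing by the recovery fraction."""
    if recovery_pct <= 0:
        raise ValueError("recovery percentage must be positive")
    if low <= recovery_pct <= high:
        return measured_conc
    return measured_conc * 100.0 / recovery_pct

print(adjust_for_recovery(40.0, recovery_pct=50.0))   # 80.0: corrected for low recovery
print(adjust_for_recovery(40.0, recovery_pct=95.0))   # 40.0: within limits, unadjusted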
5.1.3. Combining Measurement Data Sets from Various Studies
Combining data from several sources into a single data set must be done
cautiously. The circumstances under which each data set was collected (target
population, sampling design, location, time, etc.) and its quality (precision, accuracy,
representativeness, completeness, etc.) must be evaluated. Combining summary
statistics of the data sets (e.g., means) into a single set may be more appropriate
than combining the original values. Statistical methods are available for
combining results from individual statistical tests. For example, it is sometimes
possible to use several studies with marginally significant results to justify an
overall conclusion of a statistically significant effect.
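One standard technique of this kind is Fisher's method for combining p-values from independent tests; the sketch below is illustrative, and the three marginally significant p-values are hypothetical.

import math

def fisher_combined_pvalue(pvalues):
    """Fisher's method: X = -2 * sum(ln p_i) follows a chi-square
    distribution with 2k degrees of freedom under the joint null
    hypothesis, where k is the number of independent tests."""
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    # Chi-square survival function with even degrees of freedom (2k) has
    # the closed form exp(-x/2) * sum_{i<k} (x/2)**i / i!
    half = x / 2.0
    return math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))

# Three hypothetical studies, each only marginally significant on its own
print(round(fisher_combined_pvalue([0.08, 0.06, 0.09]), 3))   # combined p of roughly 0.017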
The best way to report data is to provide sufficient background
information to explain what was done and why, including clear documentation of
the source of the data and any references.
5.1.4. Combining Measurement Data and Modeling Results
Combining model results with measurement data must be done with an
understanding of how this affects the resulting inferences, conclusions, or
exposure estimates. If model results are used in lieu of additional data points,
they must be evaluated for accuracy and representativeness as if they were
additional data, and the uncertainty associated with this data combination must be
described fully, as discussed in Section 5.1.3.
On the other hand, measurement data are often used within the context of
the model itself, as calibration and verification points, or as a check on the
plausibility of the model results. If measurements are used within the model, the
uncertainty in these measurements affects the uncertainty of the model results,
and should be discussed as part of the uncertainty of the model results.
5.2. Dealing with Data Gaps
Even after supplementing existing measurement data with model results,
there are likely to be gaps in the information base to be used for calculating
exposures and doses. There are several ways to deal with data gaps. None are
entirely satisfactory in all situations, but they can be useful depending on the
purposes of the assessment and the resources available. The following options
can be used singly or in combination:
New data can be collected. This may be beyond the reach of the
assessor's resources, but promises the best chance for getting an
accurate answer. It is most likely to be a useful option if the new
data are quick and easy to obtain.
The scope of the assessment can be narrowed. This is possible if
the data gaps are in one pathway or exposure route, and the others
have adequate data. It may be a viable option if the pathway or
route has values below certain bounds, and those bounds are small
relative to the other pathways being evaluated. This is unlikely to
be satisfactory if the part of the assessment deleted is an important
exposure pathway or route and must be evaluated.
Conservative" assumptions can be used. This option is useful for
establishing bounds on exposure parameters, but limits how the
resulting exposures and doses can be expressed. For example, if
one were to assume that a person stays at home 24 hours a day as a
conservative assumption, and used this value in calculations, the
resulting contact time would have to be expressed as an upper limit
rather than a best estimate. When making conservative
assumptions, the assessor must be aware of (and explain) how many
r "Conservative" assumptions are those which tend to maximize estimates of exposure or dose, such
as choosing a value near the high end of the concentration or intake rate range.
105
-------
of these are made in the assessment, and how they influence the
final conclusions of the assessment.28
Models may be used in some cases, not only to estimate values for
concentrations or exposures, but also to check on how conservative
certain assumptions are.
Surrogate data may also be used in some cases. For example, for
pesticide applicators' exposure to pesticides, the EPA Office of
Pesticide Programs (U.S. EPA, 1987d) assumes that the general
parameters of application (such as the human activity that leads to
exposure) are more important than the properties of the pesticide in
determining the level of exposure.29 This option assumes that
surrogate data are available and that the differences between the
chemical and the surrogate are small. If a clear relationship can be
determined between the concentration of a chemical and the
surrogate (usually termed an indicator chemical) in a medium, this
relationship could also be used to fill data gaps. In any case, the
strength and character of the relationship between the chemical and
the surrogate must be explained.
Professional judgment can be used. The utility of this option
depends on the confidence placed in the estimate. Expert opinion
based on years of observation of similar circumstances usually
carries more weight than anecdotal information. The assessor must
discuss the implications of these estimates in the uncertainty
analysis.
27 "Conservative" assumptions are those which tend to maximize estimates of exposure or dose, such
as choosing a value near the high end of the concentration or intake rate range.
28 Obviously, the mathematical product of several conservative assumptions is more conservative
than any single assumption alone. Ultimately, this could lead to unrealistically conservative bounding
estimates (see Section 5.3).
29 Note that when using a passive dosimetry monitoring method, what is measured is the amount
of chemical impinging on the skin surface or available for inhalation, that is, exposure, not the actual
dose received. Factors such as dermal penetration are, of course, expected to be highly chemical
dependent.
5.3. Calculating Exposure and Dose
Depending on the approach used to quantify exposure and dose, various
types of data will have been assembled. In calculating exposures and doses from
these data, the assessor needs to direct attention specifically to certain aspects of
the data. These aspects include the use of short-term data for long-term
projections, the role of personal monitoring data, and the particular way the data
might be used to construct scenarios. Each of these aspects is covered in turn
below.
5.3.1. Short-Term versus Long-Term Data for Population Exposures
Short-term data, for the purposes of this discussion, are data representing a
short period of time measured (or modeled) relative to the time period covered
in the exposure assessment. For example, a 3-day sampling period would produce
short-term data if the exposure assessment covered a period of several years to a
lifetime. The same 3-day sampling period would not be considered short-term if
the assessment covered, say, a few days to a week.
Short-term data can provide a snapshot of concentrations or exposures
during that time, and an inference must be made about what that means for the
longer term if the exposure assessment covers a long period. The assessor must
determine how well the short-term data represent the longer period.
Even when short-term population data are statistically representative (i.e.,
they describe the shape of the distribution, the mean, and other statistics), use of
these short-term data to infer long-term exposures and risks must be done with
caution. Using short-term data to estimate long-term exposures has a tendency to
underestimate the number of people exposed, but to overestimate the exposure
levels at the upper end of the distribution, even though the mean will remain the
same.30 Both concentration variation at a single point and population mobility
will drive the estimates of the levels of exposure for the upper tail of the
distribution toward the mean. If short-term data are used for long-term exposure
or dose estimates, the implications of this on the estimated exposures must be
discussed in the assessment. Likewise, use of long-term monitoring data for
specific short-term assessments can miss significant variations due to short-term
conditions or activities. Long-term data should be used cautiously when
estimating short-term exposures or doses, and the implications should be discussed
in the assessment.
30 Consider, for example, a hypothetical set of 100 rooms (microenvironments) where the
concentration of a particular pollutant is zero in 50 of them, and ranges stepwise from 1 to 50 (nominal
concentration units) in the remainder. If one person were in each room, short-term "snapshot"
monitoring would show that 50 people were unexposed and the others were exposed to concentrations
ranging from 1 to 50. If the concentration in each room remained constant and people were allowed
to visit any room at random, long-term monitoring would indicate that all 100 were exposed to a mean
concentration of 12.75. The short-term data would tend to overestimate concentration and
underestimate the number of persons exposed if applied to long-term exposures. If only average values
were available, the long-term data would tend to underestimate concentration and overestimate the
number exposed if applied to short-term exposures. Because populations are not randomly mobile or
static, the exposure assessor should determine what effect this has on the exposure estimate.
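The arithmetic in footnote 30 can be checked with a short calculation such as the following.

# 100 rooms: 50 with zero concentration and 50 ranging stepwise from 1 to 50
concentrations = [0] * 50 + list(range(1, 51))

long_term_mean = sum(concentrations) / len(concentrations)
exposed_in_snapshot = sum(1 for c in concentrations if c > 0)

print(long_term_mean)        # 12.75, the long-term mean experienced by all 100 people
print(exposed_in_snapshot)   # 50, the number counted as exposed by a short-term snapshot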
5.3.2. Using Point-of-Contact Data to Calculate Exposure and Dose
Point-of-contact exposure assessments are often done with the intent of
protecting the individuals, often in an occupational setting. When exposures are
being evaluated to determine whether they exceed an action level or other
benchmark, point-of-contact measurements are the most relevant data.
Typically, point-of-contact measurement data reflect exposures over
periods of minutes to perhaps a week or so. For individuals whose exposures
have been measured, these data may be used directly as an indication of their
exposure during the sampling period, provided they are of adequate quality,
measure the appropriate chemical, and actually measure exposure while it occurs.
This is the only case in which measurement data may be used directly as exposure
data.
When using point-of-contact measurements, even with statistically based
data, several inferences still must be made to calculate exposure or dose:
Inferences must be made to apply short-term measurements of
exposure to long-term estimates of exposure; these are subject to
the cautions outlined in Section 5.3.1.
Inferences must be made about the representativeness of the
individual or persons sampled for the individual or population
segment for which the assessment is done.
Inferences must be made about the factors converting measured
exposure to potential or internal dose for use in a risk assessment.
If the assessment requires it, inferences must be made about the
relationship between the measured chemical exposures and the
presence and relative contribution of various sources of the
chemical.
5.3.3. The Role of Exposure Scenarios in Exposure Assessment
Exposure scenarios have several functions in exposure and risk
assessments. First, they are calculational tools to help the assessor develop
estimates of exposure, dose, and risk. Whatever combination of data and models
is used, the scenario will help the assessor to picture how the exposure is taking
place, and will help organize the data and calculations. Second, the estimates
derived from scenarios are used to develop a series of exposure and risk
descriptors, which were discussed in Section 2.3. Finally, exposure scenarios can
often help risk managers make estimates of the potential impact of possible
control actions. This is usually done by changing the assumptions in the exposure
scenario to the conditions as they would exist after the contemplated action is
implemented, and reassessing the exposure and risk. These three uses of
exposure assessments are explained in Sections 5.3.3.1, 5.3.3.2, and 5.3.3.3,
respectively.
An exposure scenario is the set of information about how exposure takes
place. An exposure scenario generally includes facts, data, assumptions,
inferences, and sometimes professional judgment about the following:
The physical setting where exposure takes place (exposure setting)
The exposure pathway(s) from source(s) to exposed individual(s)
(exposure pathways)
The characterization of the chemical, i.e., amounts, locations, time
variation of concentrations, source strength, environmental pathways
from source to exposed individuals, fate of the chemical in the
environment, etc. (characterization of the chemical)
Identification of the individual(s) or population(s) exposed, and the
profile of contact with the chemical based on behavior, location as a
function of time, characteristics of the individuals, etc.
(characterization of the exposed population)
If the dose is to be estimated, assumptions about the transfer of the
chemical across the boundary, i.e., ingestion rates, respiration rates,
absorption rates, etc. (intake and uptake rates)
It usually is necessary to know whether the effect of concern is chronic,
acute, or dependent on a particular exposure time pattern.
The risk characterization, the link between the development of the
assessment and the use of the assessment, is usually communicated in part to the
risk manager by means of a series of "risk descriptors," which are merely different
ways to describe the risk. Section 2.3 outlined two broad types of descriptors:
individual risk descriptors and population risk descriptors, with several variations
for each. To the exposure or risk assessor, different types of risk information
require different risk descriptors and different analyses of the data. The
following paragraphs discuss some of the aspects of developing and using
exposure scenarios in various functions for exposure assessment.
5.3.3.1. Scenarios as a Means to Quantify Exposure and Dose
When using exposure scenario evaluation as a means to quantify exposure
and dose, it is possible to accumulate a large volume of data and estimated
values, and both the amount and type of information can vary widely. The
exposure scenario also contains the information needed to calculate exposure,
since the last three bullets above (Section 5.3.3) are the primary variables in most
exposure and dose equations.
As an example, consider Equation 2-5, the equation for lifetime average
daily potential dose (LADDpot). This equation uses the variables of exposure
concentration (C), intake rate (IR), and exposure duration (ED) as the three
primary variables. Body weight (BW) and averaging time (AT) (in this case,
lifetime, LT) are not related to the exposure or dose per se, but are averaging
variables used to put the resulting dose in convenient units of lifetime average
exposure or dose per kg of body weight.
In looking at the three primary variables (C, IR, and ED), the exposure
assessor must determine what value to use for each to solve the equation. In
actuality, the information available for a variable like C may consist of
measurements of various points in an environmental medium, source and fate
characterizations, and model results. There will be uncertainty in the values for
C for any individual; there will also be variability among individuals. Each of
these primary variables will be represented by a range of values, even though at
times, the boundaries of this range will be unknown. How exposure or dose is
calculated depends on how these ranges are treated.
In dealing with these ranges in trying to solve the equation for LADD, the
assessor has at least two choices. First, statistical tools, such as Monte Carlo
analysis, can be used to enter the values as frequency distributions, which results
in a frequency distribution for the LADD. This is an appropriate strategy when
the frequency distributions are known for C, IR, and ED (or for the uptake
analogs, C, Kp, SA, and ED introduced in Section 2), and when these variables
are independent.
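A minimal Monte Carlo sketch of this first approach is given below; the input distributions, parameter values, and fixed body weight and lifetime are hypothetical placeholders, and a real assessment would use distributions supported by data and would examine their independence.

import random

def simulate_ladd(n, conc_dist, ir_dist, ed_dist, bw=70.0, lt_years=70.0):
    """Monte Carlo sketch of LADDpot = (C x IR x ED) / (BW x LT), with C,
    IR, and ED drawn from frequency distributions and BW and LT fixed."""
    lt_days = lt_years * 365.0
    samples = []
    for _ in range(n):
        c = conc_dist()                  # exposure concentration
        ir = ir_dist()                   # intake rate, in units matching C, per day
        ed_days = ed_dist() * 365.0      # exposure duration, years converted to days
        samples.append((c * ir * ed_days) / (bw * lt_days))
    return samples

random.seed(1)
ladds = sorted(simulate_ladd(
    n=10000,
    conc_dist=lambda: random.lognormvariate(0.0, 0.5),   # hypothetical
    ir_dist=lambda: random.uniform(1.0, 2.0),            # hypothetical, L/day
    ed_dist=lambda: random.uniform(5.0, 30.0),           # hypothetical, years
))
print("median LADD:", ladds[len(ladds) // 2])
print("95th percentile LADD:", ladds[int(0.95 * len(ladds))])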
A second approach is to select or estimate discrete values from the ranges
of each of the variables and use these values to solve the LADD equation. This
approach usually results in a less certain estimate, but may be easier to do.
Which values are used determines how the resulting estimate will be described.
Several terms for describing such estimates are discussed in Section 5.3.3.2.
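For comparison, the discrete-value approach can be sketched as follows; the central and high-end input values shown are hypothetical, and the description attached to each result must reflect the values chosen.

def ladd_point_estimate(c, ir, ed_years, bw=70.0, lt_years=70.0):
    """Discrete-value form of the same equation: (C x IR x ED) / (BW x LT)."""
    return (c * ir * ed_years * 365.0) / (bw * lt_years * 365.0)

# Hypothetical central-tendency and high-end input values
central = ladd_point_estimate(c=1.0, ir=1.4, ed_years=9.0)
high_end = ladd_point_estimate(c=3.0, ir=2.0, ed_years=30.0)
print(central, high_end)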
Since exposure to chemicals occurs through a variety of different pathways,
contact patterns, and settings, sufficient perspective must be provided to the users
of the assessment (usually risk managers) to help them make an informed
decision. Providing this perspective and insight would be relatively
straightforward if complete and accurate information were known about the
exposure, dose, and risk for each and every person within the population of
interest. In this hypothetical situation, these individual data could actually be
arrayed next to the name of each person in the population, or the data could be
compiled into frequency distribution curves. From such distributions, the average,
median, maximum, or other statistical values could easily be read off the curves
and presented to the risk manager. In addition, accurate information could be
provided about how many persons are above certain exposure, dose, or risk levels
as well as information about where various subgroups fall within the subject
distribution.
Unfortunately, an assessor rarely has these kinds of data; the reality an
assessor faces usually falls far short of this ideal. But it is precisely this kind of
information about the distribution of exposure, dose, and risk that is needed many
times by the risk assessor to characterize risk, and by the risk manager to deal
with risk-related issues.
In the absence of comprehensive data, or if the scenario being evaluated is
a possible future use or post-control scenario, an assessor must make assumptions
in order to estimate what the distribution would look like if better data were
available, or if the possible future use becomes a reality. Communicating this
estimated distribution to the risk manager can be difficult. The assessor must not
only estimate exposure, dose, and risk levels, but must also estimate where those
levels might fall on the actual distributions or estimated distributions for potential
future situations. To help communicate where on the distribution the estimate
might fall, loosely defined terms such as reasonable worst case, worst case, and
maximally exposed individual have been used by assessors. Although these terms
have been used to help describe the exposure assessor's perceptions of where
estimated exposures fall on the actual or potential distribution for the future use,
the ad hoc nature of the historical definitions used has led to some inconsistency.
One of the goals of these Guidelines is to promote greater consistency in the use
of terms describing exposure and risk.
5.3.3.2. Exposure Scenarios and Exposure Estimators as Input to Risk
Descriptors
As discussed in Section 2.3, risk descriptors convey information about risk
to users of that information, primarily risk managers. This information usually
takes the form of answers to a relatively short set of questions, not all of which
are applicable to all assessments. Section 5.3.5 provides more detail on how the
exposure assessor's analysis leads to construction of the risk descriptors.
5.3.3.3. Exposure Scenarios as a Tool for Option Evaluation
A third important use for exposure scenarios is as a tool for evaluating
proposed options for action. Risk managers often have a number of choices for
dealing with environmental problems, from taking no action on one extreme to a
number of different actions, each with different costs, on the other. Often the
exposure scenarios developed as part of the baseline risk assessment provide a
powerful tool to evaluate the potential reduction of exposure and risk for these
various options, and consequently are quite useful in many cost-benefit analyses.
There are several additional related uses of exposure scenarios for risk
managers. They may help establish a range of options for cleanup by showing
the sensitivity of the risk estimates to the changes in assumed source or exposure
levels. The exposure assessor can use the sensitivity analysis of the exposure
scenario to help evaluate and communicate the uncertainty of the assumptions,
and what can be done to reduce that uncertainty. Well-crafted and soundly based
exposure scenarios may also help communicate risks and possible options to
community groups.
Although it is beyond the scope of these Guidelines to detail the methods
used for option evaluation and selection, the assessor should be aware of this
potential use. Discussing strategy (and specific information needs) with risk
managers is usually prudent before large resource expenditures are made in the
risk assessment area.
5.3.4. General Methods for Estimating Exposure and Dose
A variety of methods are used to obtain estimates of dose necessary for
risk characterization. These range from quick screening level calculations and
rules of thumb to more sophisticated techniques. The technique to be used in a
given case is a matter of the amount of information available and the purpose of
the assessment. Several of the methods are outlined in the following sections.
Normally it is neither practicable nor advisable to immediately develop
detailed information on all the potential pathways, since not all may contribute
significantly to the outcome of the assessment.31 Rather, evaluation of the
scenario is done in an iterative manner. First, screening or bounding techniques
are used to ascertain which pathways are unimportant, then the information for
the remaining pathways is refined, iteratively becoming more accurate, until the
quantitative objectives of the assessment are met (or resources are depleted).
31 There are some important exceptions to this statement. First, the public or other concerned
groups may express particular interest in certain pathways, which will not normally be dropped entirely
at this point. Second, for routine repetitive assessments using a certain standard scenario for many
chemicals, once the general bounding has been done on the various possible pathways, it may become
standard operating procedure to immediately begin developing information for particular pathways as
new chemicals are assessed.
In beginning the evaluation phase of any assessment, the assessor should
have a scenario's basic assumptions (setting, scope, etc.) well identified, one or
more applicable exposure pathways defined, an equation for evaluating the
exposure or dose for each of those exposure pathways, and the data and
information requirements pertinent to solving the equations. Quality and quantity
of data and information needed to substitute quantitative values or ranges into
the parameters of the exposure equation will often vary widely, from postulated
assumptions to actual high-quality measurements. Many times, there are several
exposure pathways identified within the scenario, and the quality of the data and
information may vary for each.
A common approach to estimating exposure and dose is to do a
preliminary evaluation, or screening step, during which bounding estimates are
used, and then to proceed to refine the estimates for those pathways that cannot
be eliminated as being of trivial importance.
5.3.4.1. Preliminary Evaluation and Bounding Estimates
The first step that experienced assessors usually take in evaluating the
scenario involves making bounding estimates for the individual exposure
pathways. The purpose of this is to eliminate further work on refining estimates
for pathways that are clearly not important.
The method used for bounding estimates is to postulate a set of values for
the parameters in the exposure or dose equation that will result in an exposure or
dose higher than any exposure or dose expected to occur in the actual population.
The estimate of exposure or dose calculated by this method is clearly outside of
(and higher than) the distribution of actual exposures or doses. If the value of
this bounding estimate is not significant, the pathway can be eliminated from
further refinement.32
12 "Not significant" can mean either that it is so small relative to other pathways that it will not add
perceptibly to the total exposure being evaluated or that it falls so far below a level of concern that
even when added to other results from other pathways, it will be trivial. Note that a 'level of concern"
is a risk management term, and the assessor must discuss and establish any such levels of concern with
risk managers (and in some cases, concerned groups such as the local community) before eliminating
pathways as not significant.
The theoretical upper bounding estimate (TUBE) is a type of bounding
estimate that can be easily calculated and is designed to estimate exposure, dose,
and risk levels that are expected to exceed the levels experienced by all
individuals in the actual distribution. The TUBE is calculated by assuming limits
for all the variables used to calculate exposure and dose that, when combined,
will result in the mathematically highest exposure or dose (highest concentration,
highest intake rate, lowest body weight, etc.). The theoretical upper bound is a
bounding estimate that should, if the limits of the parameters used are known,
ensure that the estimate is above the actual exposures received by all individuals
in the population. It is not necessary to go to the formality of the TUBE to
assure that the exposure or dose calculated is above the actual distribution,
however, since any combination that results in a value clearly higher than the
actual distribution can serve as a suitable upper bound.
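A minimal sketch of a TUBE-style bounding calculation and screening comparison is given below; the parameter limits and the screening level are hypothetical, and any actual level of concern would be established with the risk manager.

def tube(conc_max, intake_max, ed_max_years, bw_min=50.0, lt_years=70.0):
    """Theoretical upper bounding estimate: combine the limits of the
    parameters so the result is mathematically the highest possible dose
    (highest concentration and intake, lowest body weight)."""
    return (conc_max * intake_max * ed_max_years) / (bw_min * lt_years)

def pathway_needs_refinement(bounding_dose, screening_level):
    """Carry a pathway forward only if its bounding estimate cannot be
    dismissed as trivial relative to the agreed level of concern."""
    return bounding_dose >= screening_level

bound = tube(conc_max=5.0, intake_max=2.0, ed_max_years=70.0)   # hypothetical limits
print(bound, pathway_needs_refinement(bound, screening_level=1.0))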
The bounding estimate (a limit of individual exposure, dose or risk) is
most often used only to eliminate pathways from further consideration. This is
often done in screening-level assessments, where bounding estimates of exposure,
dose, or risk provide a quick and relatively easy check on whether the levels to be
assessed are trivial relative to a level that would cause concern. If acceptably
lower than the concern level, then additional assessment work is not necessary.
Bounding estimates also are used in other types of assessments. They can
be used for deregulation of chemicals when pathways or concentrations can be
shown to present insignificant or de minimis risk. They can be used to determine
whether more information is needed to determine whether a pathway is
significant; if the pathway's significance cannot be ruled out by a bounding
estimate, test data may be needed to refine the estimate.
There are two important points about bounding estimates. First, the only
thing the bounding estimate can establish is a level to eliminate pathways from
further consideration. It cannot be used to make a determination that a pathway
is significant (that can only be done after more information is obtained and a
refinement of the estimate is made), and it certainly cannot be used for an
estimate of actual exposure (since by definition it is clearly outside the actual
distribution). Second, when an exposure scenario is presented in an assessment, it
is likely that the amount of refinement of the data, information, and estimates
will vary by pathway, some having been eliminated by bounding estimates, some
eliminated after further refinement, and others fully developed and quantified.
This is an efficient way to evaluate scenarios. In such cases, bounding estimates
must not be considered to be equally as sophisticated as an estimate of a fully
developed pathway, and should not be described as such.
Experienced assessors can often eliminate some obvious pathways more or
less by inspection as they may have evaluated these pathways many times
before.33 In these cases, the assessor must still explain why the pathway is being
eliminated. For less experienced assessors, developing bounding estimates for all
pathways is instructive and will be easier to defend.
33 Experienced assessors may also be able to determine quickly that a pathway requires refined
estimation.
5.3.4.2. Refining the Estimates of Exposure and Dose
For those pathways not eliminated by bounding estimates or judged trivial,
the assessor will then evaluate the resulting exposure or dose. At this point, the
assessor will make estimates of exposure or dose that are designed to fall on the
actual distribution. The important point here is that unlike a bounding estimate,
these estimates of exposure or dose should focus on points in the actual
distribution. Both estimates of central tendency and estimates of the upper end
of the distribution curve are useful in crafting risk descriptors.
Consider Equation 2-6 for the lifetime average daily potential dose
(LADDpot), an equation often used for linear, nonthreshold carcinogen risk
models. The assessor will use the data, ranges of data, distributions of data, and
assumptions about each of the factors needed to solve the equation for dose.
Generally, both central estimates and high-end estimates are performed. Each of
these estimates has uncertainty (perhaps unquantifiable uncertainty), and the
better the quality and comprehensiveness of data used as input to the equation,
the less uncertainty.
After solving the equation, the assessor will determine whether the
uncertainty associated with the answer is sufficiently narrow to allow the risk
descriptors to be developed (see Section 3.4) and to answer satisfactorily the
questions posed in the exposure assessment statement of purpose. Evaluating
whether the data, uncertainty, risk descriptors, and answers to the questions are
good enough is usually a joint responsibility of the risk assessor and the risk
manager.
Should the estimates of exposure or dose have sufficiently narrow
uncertainty, the assessor can then proceed to develop the descriptors and finish
the assessment. If not, the data or assumptions used usually will have to be
refined, if resources allow, in an attempt to bring the estimated exposure or dose
closer to what the assessor believes are the actual values in the population.
Refining the estimates usually requires that new data be brought into
consideration34; this new information can be other studies from the literature,
information previously developed for another, related purpose that can be
adapted, or new survey, laboratory, or field data. The decision about which
particular parts of the information base to refine should be based both on which
data will most significantly reduce the uncertainty of the overall exposure or dose
estimate, and on which data are in fact obtainable either technologically or within
resource constraints.
After refinement of the estimate, the assessor and risk manager again
determine whether the estimates provided will be sufficient to answer the
questions posed to an acceptable degree, given the uncertainties that may be
associated with those estimates. Refinements proceed iteratively until the
assessment provides an adequate answer within the resources available.
34 It also can involve new methods or additional methods for analyzing the old data.
5.3.5. Using Estimates for Developing Descriptors
Risk assessors and risk managers are encouraged to explore a range of
ways to describe exposure and risk information, depending on the purpose of the
assessment and the questions for which the risk manager must have answers.
Section 2.3 outlines a series of risk descriptors; in the sections below, these are
discussed in the context of how an exposure assessor's analysis of the data would
lead to various descriptors for risk.
5.3.5.1. Individual Exposure, Dose, and Risk
Questions about individual risk are an important component of any
assessment, especially an estimate of the high end of the distribution. Section
5.3.4.1 indicated that bounding estimates are actually a useful but limited form of
individual risk estimate, a form which is by definition beyond the highest point on
the population distribution. This section deals with estimates that are actually on
the distribution of exposure, dose, or risk.
There are several approaches for arriving at an individual risk estimate.
Since calculation of risk involves using information from fields other than
exposure assessment, the reader is advised to consult other Agency guidelines for
more detailed discussions (e.g., U.S. EPA, 1986b, 1986c, 1988b, 1988c, 1991a).
The uncertainty in the risk estimate will depend heavily on the quality of the
information used. There are several steps in the process:
First, the question of unusual susceptibility of part of the population must
be addressed. If equal doses result in widely different responses in two
individuals, it may be necessary to consult with scientists familiar with the
derivation of the dose-response relationship for the chemical in question in order
to ascertain whether this is normal variability among members of a population.
Normal variability should have been considered as part of the development of the
dose-response relationship; unusual susceptibility may not have been. If such a
highly susceptible subgroup can be identified, it is often useful to assess their risk
separately from the general population. It will not be common, given the current
data availability, to clearly identify such susceptible subgroups. If none can be
identified, the default has usually been to assume the dose-response relationship
applies to all members of the population being assessed. Where no information
shows the contrary, this assumption may be used provided it is highlighted as a
source of uncertainty.
Second, once it is established that the population or population segment can be
represented by a single dose-response relationship, the appropriate dose for use in the dose-
response relationship (absorbed/internal dose, potential dose, applied dose,
effective dose) must be identified. For dose-response relationships based on
administered dose in animal studies, potential dose will usually be the human
analogue. If the dose-response relationship is based on internal dose, then that is
the most appropriate human dose. If the estimates of exposure and dose from
the exposure assessment are in an inappropriate form (say, potential dose rather
than internal dose), they must be converted before they are used for risk
calculations. This may involve analysis of bioavailability, absorption rates as a
function of form of the chemical and route, etc. If these data are not available,
the default has been to assume the entire potential dose becomes the internal
dose.35 As more data become available concerning absorption for different
chemicals, this conservative assumption may not always be the best, or even a
credible, default. Whatever assumption is made concerning absorption (or the
relationships among any of the different dose terms if used, for that matter), it
should be highlighted in the uncertainty section.
35 The unstated assumption is often made that the relationship between administered dose and
absorbed dose in the animal is the same as that between potential dose and internal dose in humans,
provided a correction is made for body weight/surface area. In other words, the bioavailability and
absorption fractions are assumed to be the same in the human as in the animal experiment. If no
correction is made for absorption, this leads to the assumption that the absorption percent is the same
as in the animal experiment from which the dose-response relationship was derived. Note this
uncorrected conversion of potential dose to internal dose does not assume "100% absorption" unless
there was 100% absorption in the animal study.
Once the first two steps have been done, and the dose-response
relationship and type of dose have been identified, the exposure and dose
information needs to be put in the appropriate form. Ideally, this would be a
distribution of doses of the appropriate type across the population or population
subgroup of interest. This may involve converting exposures into potential doses
or converting potential doses into internal, delivered, or biologically effective
doses. Once this is accomplished, the high-end estimate of dose will often (but
not always) lead fairly directly to the high-end estimate of risk. The method used
to develop the high-end estimate for dose depends on the data available.
Because of the skewed nature of exposure data, there is no exact formula that
will guarantee an estimate will fall into this range in the actual population if only
sparse data are available.
The high-end risk is a plausible estimate of the individual risk for those
persons at the upper end of the risk distribution. The intent of this descriptor is
to convey an estimate of risk in the upper range of the distribution, but to avoid
estimates that are beyond the true distribution. Conceptually, high-end risk
means risks above the 90th percentile of the population distribution, but not
higher than the individual in the population who has the highest risk. This
descriptor is intended to estimate the risks that are expected to occur in small but
definable high-end segments of the subject population. The use of "above the 90th
percentile" in the definition is not meant to precisely define the range of this
descriptor, but rather to clarify what is meant conceptually by high end.
The high-end segments of the exposure, dose, and risk populations may
represent different individuals. Since the location of individuals on the exposure,
dose, and risk distributions may vary depending on the distributions of
bioavailability, absorption, intake rates, susceptibility, and other variables, a high
exposure does not necessarily result in a high dose or risk, although logically one
would expect a moderate to highly positive correlation among exposure, dose, and
risk.
When the complete data on the population distributions of exposures and
doses are available, and the significance of the factors above (bioavailability, etc.)
is known well enough to allow a risk distribution to be constructed, the high-
end risk estimate can be represented by reporting risks at selected percentiles of
the distributions, such as the 90th, 95th, or 98th percentile. When the complete
distributions are not available, the assessor should conceptually target something
above the 90th percentile on the actual distribution.
In developing estimates of high-end individual exposure and dose, the
following conditions must be met:
The estimated exposure or dose is on the expected distribution, not
above the value one would expect for the person with the highest
estimated risk in the population. This means that when constructing
this estimate from a series of factors (environmental concentrations,
intake rates, individual activities, etc.), not all factors should be set
to values that maximize exposure or dose, since this will almost
always lead to an estimate that is much too conservative.
The combination of values assigned to the exposure and dose
factors can be expected to be found in the actual population. In
estimating high-end exposures or doses for future use or post-
control scenarios, the criterion to be used should be that it is
expected to be on the distribution provided the future use or control
measure occurs.36
36 This means that estimates of high-end exposure or dose for future uses are limited to the same
conceptual range as current uses. Although a "worst-case" combination of future conditions or events
may result in an exposure that is conceivably possible, the assessor should not merely use a worst-case
combination as an estimate of high-end exposure for possible future uses. Rather, the assessor must
use judgment as to what the range of exposures or doses would plausibly be, given the population size
and probability of certain events happening.
Some of the alternative methods for determining a high-end estimate of
dose are:
If sufficient data on the distribution of doses are available, take the
value directly for the percentile(s) of interest within the high end.
If possible, the actual percentile(s) should be stated, or the number
of persons determined in the high end above the estimate, in order
to give the risk manager an idea of where within the high-end range
the estimate falls.
If data on the distribution of doses are not available, but data on
the parameters used to calculate the dose are available, a simulation
(such as an exposure model or Monte Carlo simulation) can
sometimes be made of the distribution. In this case, the assessor
may take the estimate from the simulated distribution. As in the
method above, the risk manager should be told where in the high-
end range the estimate falls by stating the percentile or the number
of persons above this estimate. The assessor and risk manager
should be cautioned that unless a great deal is known about
exposures or doses at the high end of the distribution, simulated
distributions may not be able to differentiate between bounding
estimates and high-end estimates. Simulations often include low-
probability estimates at the upper end that are higher than those
actually experienced in a given population, due to improbability of
finding these exposures or doses in a specific population of limited
size, or due to nonobvious correlations among parameters at the
high ends of their ranges.37 Using the highest estimate from a
Monte Carlo simulation may therefore overestimate the exposure or
dose for a specific population, and it is advisable to use values
somewhat less than the highest Monte Carlo estimated value if one
37 For example, although concentration breathed, frequency, duration, and breathing rate may be
independent for a consumer painting rooms in a house under most normal circumstances, if the
concentration is high enough, it may affect the other parameters such as duration or breathing rate.
These types of high-end correlations are difficult to quantify, and techniques such as Monte Carlo
simulations will not consider them unless relationships are known and taken into account in the
simulation. If extreme concentration in this case resulted in lower breathing rate or duration, a non-
corrected Monte Carlo simulation could overestimate the exposure or dose at the high end. The case
in which a high concentration increases duration or intake rate seems far less likely, due to
self-preservation processes, although this theoretically might also occur.
is to defend the estimate as being within the actual population
distribution and not above it.
Simulations using finite ranges for parameters will result in a
simulated distribution with a calculable finite maximum exposure,
and the maximum exposures calculated in repeated simulations will
not exceed this theoretical maximum.38 When unbounded default
distributions, such as lognormal distributions, are used for input
parameters to generate the simulated exposure distributions, there
will not be a finite maximum exposure limit for the simulation, so
the maximum value of the resulting simulated distribution will vary
with repeated simulations. The EPA's Science Advisory Board
(SAB) (U.S. EPA, 1992a) has recommended that values above a
certain percentile in these simulations be treated as if they were
bounding estimates, not estimates of high-end exposures (see Figure
5-1). The SAB noted that for large populations, simulated
exposures, doses, and risks above the 99.9th percentile may not be
meaningful when unbounded lognormal distributions are used as a
default.
38 This maximum is the theoretical upper bounding estimate (TUBE).
[Figure 5-1 graphic not reproduced; the schematic marks the 95th, 98th, and 99th percentiles along the high end of the exposure distribution.]
Figure 5-1. Schematic of exposure estimators for unbounded simulated population distributions.
For simulated distributions, whether derived from measured data or statistical methods
such as Monte Carlo analysis, the high-end estimator should not exceed the 99.9th
percentile. Bounding estimates should reflect the size of the population (see text);
therefore, bounding estimates for this type of distribution should not automatically be
set at the 99.9th percentile. Several statistical estimators of exposure should be identified,
e.g., the 50th, 90th, or 95th percentiles. The distribution should reflect exposures, not just
concentrations.
Although the Agency has not specifically set policy on this
matter, exposure assessors should observe the following caution
when using simulated distributions. The actual percentile cutoff
above which a simulation should be considered a bounding estimate
may be expected to vary depending on the size of the population.
Since bounding estimates are established to develop statements that
exposures, doses, and risks are "not greater than...," it is prudent that
the percentile cutoff bound expected exposures for the size of the
population being evaluated. For example, if there are 100 persons
in the population, it may be prudent to consider simulated
exposures above the 1 in 500 level or 1 in 1000 level (i.e., above the
99.5th or 99.9th percentile, respectively) to be bounding estimates.
Due to uncertainties in simulated distributions, assessors should be
cautious about using estimates above the 99.9th percentile for
estimates of high-end exposure regardless of the size of the
population. The Agency or individual program offices may issue
more direct policy for setting the exact cutoff value for use as high-
end and bounding estimates in simulations.
If some information on the distribution of the variables making up
the exposure or dose equation (e.g., concentration, exposure
duration, intake or uptake rates) is available, the assessor may
estimate a value which falls into the high end by meeting the
defining criteria of "high end": an estimate that will be within the
distribution, but high enough so that less than 1 out of 10 in the
distribution will be as high. The assessor often constructs such an
estimate by using maximum or near-maximum values for one or
more of the most sensitive variables, leaving others at their mean
values.39 The exact method used to calculate the estimate of high-end
exposure or dose is not critical; what is important is that the
exposure assessor explain why the estimate, in his or her opinion,
falls into the appropriate range, not above or below it (an
illustrative sketch follows this list).
If almost no data are available, it will be difficult, if not impossible,
to estimate exposures or doses in the high end. One method that
has been used, especially in screening-level assessments, is to start
with a bounding estimate and back off the limits used until the
combination of parameter values is, in the judgment of the assessor,
clearly in the distribution of exposure or dose. Obviously, this
method results in a large uncertainty. The availability of pertinent
data will determine how easily and defensibly the high-end estimate
can be developed by simply adjusting or backing off from the ultra
conservative assumptions used in the bounding estimates. This
estimate must still meet the defining criteria of "high end," and the
assessor should be ready to explain why the estimate is thought to
meet the defining criteria.
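As a purely illustrative sketch of the simulated-distribution and sensitive-variable approaches described in the list above, the following Python fragment evaluates a hypothetical dose equation with assumed input distributions; the equation form, all parameter values, and the treatment of values above the 99.9th percentile are assumptions chosen only for illustration, not Agency defaults.

# Illustrative sketch only: hypothetical dose equation and input
# distributions.  Average daily dose (ADD, mg/kg/day) is taken as
# ADD = C * IR * EF / BW, with assumed lognormal/normal inputs.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # simulated individuals

C  = rng.lognormal(np.log(0.5), 0.6, size=n)      # mg/L, assumed
IR = rng.lognormal(np.log(1.4), 0.4, size=n)      # L/day, assumed
BW = rng.normal(70, 12, size=n).clip(min=40)      # kg, assumed
EF = 350 / 365                                    # exposure frequency fraction, assumed

add = C * IR * EF / BW

# High-end estimates: report the percentile used, per the text above.
for p in (90, 95, 98, 99.9):
    print(f"{p}th percentile ADD: {np.percentile(add, p):.3e} mg/kg/day")

# Values above the 99.9th percentile of an unbounded simulated distribution
# are treated here as bounding-type estimates rather than high-end estimates.
cutoff = np.percentile(add, 99.9)
print("simulated values above 99.9th percentile:", int((add > cutoff).sum()))

# Alternative construction when only summary statistics are available:
# near-maximum value for the most sensitive variable (here C at its 95th
# percentile), other variables at their means.
high_end_alt = np.percentile(C, 95) * IR.mean() * EF / BW.mean()
print(f"constructed high-end estimate: {high_end_alt:.3e} "
      f"(~{(add < high_end_alt).mean()*100:.1f}th percentile of simulation)")

In this sketch the constructed estimate can be checked against the simulated distribution to confirm that it falls within the high end of the distribution rather than above it.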
A descriptor of central tendency may be either the arithmetic mean risk
(average estimate) or the median risk (median estimate), but should be clearly
labeled as such. Where both the arithmetic mean and the median are available,
but differ substantially, it is helpful to present both.
Exposure and dose profiles often fall in a skewed distribution that many
times appears to be approximately lognormally distributed, although statistical
tests for lognormality may fail. The arithmetic mean and the median are the
same in a normal distribution, but exposure data are rarely normally distributed.
As the typical skewness in the distribution increases, the exposure or dose
distribution comes to resemble a lognormal curve where the arithmetic mean will
39 Maximizing all variables, as is done in bounding estimates, will in virtually all cases result in an
estimate that is above the bounds of this range, that is, above the actual values seen in the population.
be higher than the median. It is not unusual for the arithmetic mean to be
located at the 75th percentile of the distribution or higher. Thus, the arithmetic
mean is not necessarily a good indicator of the midpoint (median, 50th percentile)
of a distribution.
The average estimate, used to describe the arithmetic mean, can be
approximated by using average values for all the factors making up the exposure
or dose equation. It does not necessarily represent a particular individual on the
distribution, but will fall within the range of the actual distribution. Historically,
this calculation has been referred to as the average case, but as with other ad hoc
descriptors, definitions have varied widely in individual assessments.
When the data are highly skewed, it is sometimes instructive to
approximate the median exposure or dose, or median estimate. This is usually
done by calculating the geometric mean of the exposure or dose distribution, and
historically this has often been referred to as the typical case, although again,
definitions have varied widely. Both the average estimate and median estimate
are measures of the central tendency of the exposure or dose distribution, but
they must be clearly differentiated when presenting the results.
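The following brief Python sketch, using an assumed lognormal exposure distribution (geometric mean of 1 and geometric standard deviation of 4, chosen only for illustration), shows how the arithmetic mean can fall well above the median while the geometric mean approximates it.

# Illustrative sketch: for a hypothetical lognormal exposure distribution,
# compare the arithmetic mean, the median, and the geometric mean, and
# locate the arithmetic mean within the distribution.
import numpy as np

rng = np.random.default_rng(1)
gm, gsd = 1.0, 4.0                       # assumed geometric mean and GSD
x = rng.lognormal(np.log(gm), np.log(gsd), size=200_000)

arith_mean = x.mean()
median = np.median(x)
geo_mean = np.exp(np.log(x).mean())

print(f"arithmetic mean : {arith_mean:.2f}")
print(f"median          : {median:.2f}")
print(f"geometric mean  : {geo_mean:.2f}  (close to the median)")
print(f"arithmetic mean falls at ~{(x < arith_mean).mean()*100:.0f}th percentile")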
It will often be useful to provide additional specific individual risk
information to provide perspective for the risk manager. This specific
information may take the form of answers to "what if" questions, such as, "What if a
consumer should use this product without adequate ventilation?" For the risk
manager, these questions are likely to put bounds on various aspects of the risk
question. For the assessor, these are much less complicated problems than trying
to estimate baseline exposure or dose in an actual population, since the answers
to these questions involve choosing values for various parameters in the exposure
or risk equations and solving them for the estimate.
This type of risk descriptor is a calculation of risk to specific hypothetical
or actual combinations of factors postulated within the exposure assessment. It is
often valuable to ask and answer specific questions of the "what if" nature to add
perspective to the risk assessment.
Each assessment may have none, one, or several of these specific types of
descriptors. The answers to these questions might be a point estimate or a range,
but are usually fairly simple to calculate. The answers to these types of
postulated questions, however, do not directly give information about how likely
that combination of values might be in the actual population, so there are some
limits to the applicability of these descriptors.
5.3.5.2. Population Exposure, Dose, and Risk
Questions about population exposure, dose, and risk are central to any risk
assessment. Ideally, given the time and methods, the assessor might strive to
construct a picture of exposure, dose, and risk in which each individual exposure,
dose and risk is known. These data could then be displayed in a frequency
distribution.
The risk manager, perhaps considering what action might be necessary for
this particular situation, might ask how many cases of the particular effect might
be probabilistically estimated in a population during a specific time period, or
what percentage of the population is (or how many people are) above a certain
exposure, dose, or risk level.
For those who do the assessments, answering these questions requires
some knowledge of the population frequency distribution. This information can
be obtained or estimated in several ways, leading to two descriptors of population
risk.
The first is the probabilistic number of health effect cases estimated in the
population of interest over a specified time period. This descriptor can be
obtained either by summing the individual risks over all the individuals in the
population, or by multiplying the slope factor obtained from a carcinogen dose-
response relationship, the arithmetic mean of the dose, and the size of the
population. The latter approach may be used only if the risk model assumes a
single linear, nonthreshold response to dose, and then only with some caution.40
If risk varies linearly with dose, knowing the arithmetic mean risk and the
population size can lead to an estimate of the extent of harm for the population
as a whole, excluding sensitive subgroups for which a different dose-response
curve may need to be used. For noncarcinogens, or for nonlinear, nonthreshold
carcinogen models, using the arithmetic mean exposure or dose, multiplying by a
slope factor to calculate an average risk, and multiplying by the population size is
not appropriate, and risks should be summed over individuals.41
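The following Python sketch illustrates the difference between the two calculations for a hypothetical population in which one member receives a high dose; the one-hit dose-response form and the slope value are assumptions used only to show the arithmetic (compare footnote 40).

# Illustrative sketch of the two population-risk calculations described
# above, using a hypothetical one-hit dose-response model
# risk = 1 - exp(-q * dose); the slope q and the doses are assumed.
import numpy as np

q = 0.005                                        # (mg/kg/day)^-1, assumed slope
doses = np.array([500.0, 0.0, 0.0, 0.0, 0.0])    # mg/kg/day; one exposed person

def risk(dose):
    return 1.0 - np.exp(-q * dose)

# Preferred where risks are high or the model is nonlinear:
cases_summed = risk(doses).sum()

# Shortcut (risk at the mean dose times population size); valid only in the
# approximately linear, low-risk range:
cases_shortcut = risk(doses.mean()) * doses.size

print(f"sum of individual risks : {cases_summed:.2f} cases")
print(f"mean-dose shortcut      : {cases_shortcut:.2f} cases (overestimate here)")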
Obviously, the more relevant information one has, the less uncertain this
descriptor will be; but in any case, the estimate used to develop the descriptor is also
limited by the inherent uncertainties in risk assessment methodology, e.g., the risk
estimates often being upper confidence level bounds. With the current state of
the science, this descriptor should not be confused with an actuarial prediction of
cases in the population (which is a statistical prediction based on a great deal of
empirical data).
The second type of population risk descriptor is an estimate of the
percentage of the population, or the number of persons, above a specified level
40 For example, when calculating risks using doses and "slope factors," the risk is approximately
linear with dose until relatively high individual risks (about 10⁻¹) are attained, after which the
relationship is no longer even approximately linear. This results from the fact that no matter how high
the dose, the individual risk cannot exceed 1, and the dose-risk curve approaches 1 asymptotically. This
can result in artifacts when calculating population risk from average individual doses and population
size if there are individuals in the population in this nonlinear risk range. Consider a population of five
persons, only one of whom is exposed. As an example, assume a lifetime average daily dose of 100
mg/kg/day corresponds to an individual risk of 4 x 10⁻¹. Increasing the dose fivefold, to 500
mg/kg/day, would result in a higher individual risk for that individual, but due to the nonlinearity of
the dose-risk curve, not yet a risk of 1. The average dose for the five persons in the population would
then be 100 mg/kg/day. Multiplying the "average risk" of 4 x 10⁻¹ by the population size of five results
in an estimate of two cases, even though in actuality only one person is exposed. Although calculating
average individual dose, estimating individual risk from it, and multiplying by the population size is a
useful approximation if all members of the population are within the approximately linear range of the
dose-risk curve, this method should not be used if some members of the population have calculated
individual risks higher than about 10⁻¹, since it will overestimate the number of cases.
41 In these cases, a significant problem can be the lack of a constant (or nearly constant) "slope
factor" that would be appropriate over a wide exposure/dose range, since the dose-response curve may
have thresholds, windows, or other discontinuities.
of risk, RfD, RfC, LOAEL, or other specific level of interest. This descriptor
must be obtained by measuring or simulating the population distribution, which
can be done in several ways.
First, if the population being studied is small enough, it may be possible to
measure the distribution of exposure or dose. This approach is usually
moderately to highly costly, but it may be the most accurate. Possible problems
with this approach are lack of measuring techniques for the chemical of interest,
the availability of a suitable population subset to monitor, and the problem of
extrapolating short-term measurements to long-term exposures.
Second, the distribution itself may be simulated from a model such as an
exposure model (a model that reports exposures or doses by linking
concentrations with contact times for subsets of the population, such as those
living various distances from a source) or a Monte Carlo simulation. Although
this may be considerably less costly than measurements, it will probably be less
accurate, especially near the high end of the distribution. Although models and
statistical simulations can be fairly accurate if the proper input data are available,
these data are often difficult to obtain and assumptions must be made; use of
assumptions may reduce the certainty of the estimated results.
Third, it may be possible to estimate how many people are above a certain
exposure, dose, or risk level by identifying and enumerating certain population
segments known to be at higher exposure, dose, sensitivity, or risk than the level
of interest.
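As an illustration of the second approach above, the following Python sketch counts the fraction (and number) of a simulated population above an assumed level of concern; the distribution, population size, and reference level are hypothetical.

# Illustrative sketch: estimate how many persons exceed a level of concern
# from a simulated exposure distribution.  The distribution parameters,
# population size, and reference level are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(2)
population_size = 25_000                     # assumed
level_of_concern = 0.02                      # mg/kg/day, assumed

dose = rng.lognormal(np.log(0.005), 0.9, size=population_size)  # mg/kg/day

frac_above = (dose > level_of_concern).mean()
print(f"fraction above level of concern : {frac_above:.1%}")
print(f"estimated persons above level   : {int(frac_above * population_size)}")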
For those who use the assessments, this descriptor can be used in the
evaluation of options if a level can be identified as an exposure, dose, or risk
level of concern. The options can then be evaluated by estimating how many
persons would go from the higher category to the lower category after the option
is implemented.
Questions about the distribution of exposure, dose, and risk often require
the use of additional risk descriptors. In considering the risks posed by the
particular situation being evaluated, a risk manager might want to know how
various subgroups fall within the distribution, and if there are any particular
subgroups at disproportionately high risk.
It is often helpful for the risk assessor to describe risk by an identification,
and if possible, characterization and quantification of the magnitude of the risk
for specific highly exposed subgroups within the population. This descriptor is
useful when there is (or is expected to be) a subgroup experiencing significantly
different exposures or doses from that of the larger population.
It is also helpful to describe risk by an identification, and if possible,
characterization and quantification of the magnitude of risk for specific highly
sensitive or highly susceptible subgroups within the population. This descriptor is
useful when the sensitivity or susceptibility to the effect for specific subgroups
within the population is (or is expected to be) significantly different from that of
the larger population. In order to calculate risk for these subgroups, it will
sometimes be necessary to use a different dose-response relationship.
Generally, selection of the subgroups or population segments is a matter of
either a priori interest in the subgroup, in which case the risk manager and risk
assessor can jointly agree on which subgroups to highlight, or a matter of
discovery of a subgroup during the assessment process. In either case, the
subgroup can be treated as a population in itself and characterized the same way
as the larger population using the descriptors for population and individual risk.
Exposures and doses for highly-exposed subpopulations can be calculated
by defining the population segment as a population, then estimating the doses as
for a population. The assessor must make it clear exactly which population was
considered.
A special case of a subpopulation is that of children. For exposures that
take place during childhood, when low body weight results in a higher dose rate
than would be calculated using the LADDpot (Equation 2-6), it is appropriate to
average the dose rate (intake rate/body weight) rather than dose. The LADDpot
equation then becomes

    LADDpot = Σi [ Ci (IRi / BWi) (EDi / LT) ]     (5-1)

where LADDpot is the lifetime average daily potential dose, EDi is the exposure
duration (time over which the contact actually takes place), Ci is the average
exposure concentration during period of calendar time EDi, IRi is the average
ingestion or inhalation rate during EDi, BWi is body weight during exposure
duration EDi, and LT is the averaging time, in this case, a lifetime (converted to
days). This form of the LADDpot equation, if applied to an exposure that occurs
primarily in childhood (for example, inadvertent soil ingestion), may result in an
LADDpot calculation somewhat higher than that obtained by using Equation 2-6,
but there is some evidence that it is more defensible (Kodell et al., 1987;
additional discussion in memorandum from Hugh McKinnon, EPA, to Michael
Callahan, EPA, November 9, 1990).
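A minimal Python sketch of the arithmetic in Equation 5-1 follows; the concentrations, intake rates, body weights, and exposure durations are hypothetical values chosen only to show how the childhood and later-life periods are combined.

# Illustrative sketch of Equation 5-1 for an exposure concentrated in
# childhood (e.g., inadvertent soil ingestion).  All values are assumptions
# chosen only to show the arithmetic of averaging dose rate (intake/body weight).
LT_days = 70 * 365.0          # lifetime averaging time, days (assumed)

# Each period i: (C_i mg/kg soil, IR_i kg soil/day, BW_i kg, ED_i days)
periods = [
    (100.0, 2.0e-4, 15.0, 6 * 365.0),    # childhood, assumed values
    (100.0, 1.0e-4, 70.0, 24 * 365.0),   # later years, assumed values
]

# LADDpot = sum_i [ C_i * (IR_i / BW_i) * (ED_i / LT) ]
ladd_pot = sum(C * (IR / BW) * (ED / LT_days) for C, IR, BW, ED in periods)
print(f"LADDpot = {ladd_pot:.2e} mg/kg/day")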
6. ASSESSING UNCERTAINTY
Assessing uncertainty may involve simple or very sophisticated techniques,
depending on the requirements of the assessment. Uncertainty characterization
and uncertainty assessment are two activities that lead to different degrees of
sophistication in describing uncertainty. Uncertainty characterization generally
involves a qualitative discussion of the thought processes that lead to the selection
and rejection of specific data, estimates, scenarios, etc. For simple exposure
assessments, where not much quantitative information is available, uncertainty
characterization may be all that is necessary.
The uncertainty assessment is more quantitative. The process begins with
simpler measures (i.e., ranges) and simpler analytical techniques (i.e., sensitivity
analysis), and progresses, to the extent needed to support the decision for which
the exposure assessment is conducted, to more complex measures and techniques.
The development and implementation of an appropriate uncertainty assessment
strategy can be viewed as a decision process. Decisions are made about ways to
characterize and analyze uncertainties, and whether to proceed to increasingly
more complex levels of uncertainty assessment.
6.1. Role of Uncertainty Analysis in Exposure Assessment
Exposure assessment uses a wide array of information sources and
techniques. Even where actual exposure-related measurements exist, assumptions
or inferences will still be required (see Section 5.2). Most likely, data will not be
available for all aspects of the exposure assessment and those data that are
available may be of questionable or unknown quality. In these situations, the
exposure assessor will have to rely on a combination of professional judgment,
inferences based on analogy with similar chemicals and conditions, estimation
techniques, and the like. The net result is that the exposure assessment will be
based on a number of assumptions with varying degrees of uncertainty.
The decision analysis literature has focused on the importance of
explicitly incorporating and quantifying scientific uncertainty in risk assessments
(Morgan, 1983; Finkel, 1990). Reasons for addressing uncertainties in exposure
assessments include:
Uncertain information from different sources of different quality
must be combined.
A decision must be made about whether and how to expend
resources to acquire additional information (e.g., production, use,
and emissions data; environmental fate information; monitoring
data; population data) to reduce the uncertainty.
There is considerable empirical evidence that biases may result in
so-called best estimates that are not actually very accurate. Even if
all that is needed is a best-estimate answer, the quality of that
answer may be improved by an analysis that incorporates a frank
discussion of uncertainty.
Exposure assessment is an iterative process. The search for an
adequate and robust methodology to handle the problem at hand
may proceed more effectively, and to a more certain conclusion, if
the associated uncertainty is explicitly included and can be used as a
guide in the process of refinement.
A decision is rarely made on the basis of a single piece of analysis.
Further, it is rare for there to be one discrete decision; a process of
multiple decisions spread over time is the more common
occurrence. Chemicals of concern may go through several levels of
risk assessment before a final decision is made. Within this process,
decisions may be made based on exposure considerations. An
exposure analysis that attempts to characterize the associated
uncertainty allows the user or decision-maker to better evaluate it in
the context of the other factors being considered.
Exposure assessors have a responsibility to present not just numbers
but also a clear and explicit explanation of the implications and
limitations of their analyses. Uncertainty characterization helps
carry out this responsibility.
Essentially, the construction of scientifically sound exposure assessments
and the analysis of uncertainty go hand in hand. The reward for analyzing
uncertainties is knowing that the results have integrity or that significant gaps
exist in available information that can make decision-making a tenuous process.
6.2. Types of Uncertainty
Uncertainty in exposure assessment can be classified into three broad
categories:
1. Uncertainty regarding missing or incomplete information needed to
fully define the exposure and dose (scenario uncertainty)
2. Uncertainty regarding some parameter (parameter uncertainty)
3. Uncertainty regarding gaps in scientific theory required to make
predictions on the basis of causal inferences (model uncertainty)
Identification of the sources of uncertainty in an exposure assessment is
the first step toward eventually determining the type of action necessary to
reduce that uncertainty. The three types of uncertainty mentioned above can be
further defined by examining some principal causes for each.
Exposure assessments often are developed in a phased approach. The
initial phase usually involves some type of broad-based screening in which the
scenarios that are not expected to pose a risk to the receptor are eliminated from
a more detailed, resource-intensive review, usually through developing bounding
estimates. These screening-level scenarios often are constructed to represent
exposures that would fall beyond the extreme upper end of the expected exposure
distribution. Because the screening-level assessments for these nonproblem
scenarios usually are included in the final exposure assessment document, this
final document may contain scenarios that differ quite markedly in level of
sophistication, quality of data, and amenability to quantitative expressions of
uncertainty. These also can apply to the input parameters used to construct
detailed exposure scenarios.
The following sections will discuss sources, characterization, and methods
for analyzing the different types of uncertainty.
6.2.1. Scenario Uncertainty
The sources of scenario uncertainty include descriptive errors, aggregation
errors, errors in professional judgment, and incomplete analysis.
Descriptive errors include errors in information, such as the current
producers of the chemical and its industrial, commercial, and consumer uses.
Information of this type is the foundation for the eventual development of
exposure pathways, scenarios, exposed populations, and exposure estimates.
Aggregation errors arise as a result of lumping approximations. Included
among these are assumptions of homogeneous populations, and spatial and
temporal approximations such as assumptions of steady-state conditions.
Professional judgment comes into play in virtually every aspect of the
exposure assessment process, from defining the appropriate exposure scenarios, to
selecting the proper environmental fate models, to determining representative
environmental conditions, etc. Errors in professional judgment also are a source
of uncertainty.
A potentially serious source of uncertainty in exposure assessments arises
from incomplete analysis. For example, the exposure assessor may overlook an
important consumer exposure due to lack of information regarding the use of a
chemical in a particular product. Although this source of uncertainty is essentially
unquantifiable, it should not be overlooked by the assessor. At a minimum, the
rationale for excluding particular exposure scenarios should be described and the
uncertainty in those decisions should be characterized as high, medium, or low.
The exposure assessor should discuss whether these decisions were based on
actual data, analogues, or professional judgment. For situations in which the
uncertainty is high, one should perform a reality check where credible upper
limits on the exposure are established by a "what if" analysis.
Characterization of the uncertainty associated with nonnumeric
assumptions (often relating to setting the assessment's direction and scope) will
generally involve a qualitative discussion of the rationale used in selecting specific
scenarios. The discussion should allow the reader to make an independent
judgment about the validity of the conclusions reached by the assessor by
describing the uncertainty associated with any inferences, extrapolations, and
analogies used and the weight of evidence that led the assessor to particular
conclusions.
6.2.2. Parameter Uncertainty
Sources of parameter uncertainty include measurement errors, sampling
errors, variability, and use of generic or surrogate data.
Measurement errors can be random or systematic. Random error results
from imprecision in the measurement process. Systematic error is a bias or
tendency away from the true value.
Sampling errors concern sample representativeness. The purpose of
sampling is to make an inference about the nature of the whole from a
measurement of a subset of the total population. If the exposure assessment uses
data that were generated for another purpose, for example, consumer product
preference surveys or compliance monitoring surveys, uncertainty will arise if the
data do not represent the exposure scenario being analyzed.
The inability to characterize the inherent variability in environmental and
exposure-related parameters is a major source of uncertainty. For example,
meteorological and hydrological conditions may vary seasonally at a given
location, soil conditions can have large spatial variability, and human activity
patterns can vary substantially depending on age, sex, and geographical location.
The use of generic or surrogate data is common when site-specific data are
not available. Examples include standard emission factors for industrial
processes, generalized descriptions of environmental settings, and data pertaining
to structurally related chemicals as surrogates for the chemical of interest. This is
an additional source of uncertainty, and should be avoided if actual data can be
obtained.
The approach to characterizing uncertainty in parameter values will vary.
It can involve an order-of-magnitude bounding of the parameter range when
uncertainty is high, or a description of the range for each of the parameters
including the lower- and upper-bound and the best estimate values and
justification for these based on available data or professional judgment. In some
circumstances, characterization can take the form of a probabilistic description of
the parameter range. The appropriate characterization will depend on several
factors, including whether a sensitivity analysis indicates that the results are
significantly affected by variations within the range. When the results are
significantly affected by a particular parameter, the exposure assessor should
attempt to reduce the uncertainty by developing a description of the likely
occurrence of particular values within the range. If enough data are available,
standard statistical methods can be used to obtain a meaningful representation. If
available data are inadequate, then expert judgments can be used to develop a
subjective probabilistic representation. Expert judgments should be developed in
a consistent, well-documented manner. Examples of techniques to solicit expert
judgments have been described (Morgan et al., 1979; Morgan et al., 1984; Rish,
1988).
Most approaches for analyzing uncertainty have focused on techniques that
examine how uncertainty in parameter values translates into overall uncertainty in
the assessment. Several published reports (Cox and Baybutt, 1981; U.S. EPA,
1985f; Iman and Helton, 1988; Seiler, 1987; Rish and Marnicio, 1988) have
reviewed the many techniques available; the assessor should consult these for
details. In general, these approaches can be described, in order of increasing
complexity and data requirements, as either sensitivity analysis, analytical
uncertainty propagation, probabilistic uncertainty analysis, or classical statistical
methods.
Sensitivity analysis is the process of changing one variable while leaving
the others constant and determining the effect on the output. The procedure
involves fixing each uncertain quantity, one at a time, at its credible lower-bound
and then its upper-bound (holding all others at their medians), and then
computing the outcomes for each combination of values. These results are useful
to identify the variables that have the greatest effect on exposure and to help
focus further information gathering. The results do not provide any information
about the probability of a quantity's value being at any level within the range;
therefore, this approach is most useful at the screening level when deciding about
the need and direction of further analyses.
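The following Python sketch illustrates the one-at-a-time procedure just described for a hypothetical dose equation; the parameter bounds and medians are assumptions for illustration.

# Illustrative sketch of one-at-a-time sensitivity analysis.  The exposure
# equation and the lower/median/upper bounds for each parameter are assumed.
def exposure(C, IR, BW):
    # hypothetical dose equation, mg/kg/day
    return C * IR / BW

# parameter: (credible lower bound, median, credible upper bound)
params = {
    "C":  (0.1, 0.5, 2.0),     # mg/L
    "IR": (0.5, 1.4, 3.0),     # L/day
    "BW": (50.0, 70.0, 90.0),  # kg
}

medians = {name: vals[1] for name, vals in params.items()}
baseline = exposure(**medians)

for name, (lo, mid, hi) in params.items():
    for label, value in (("low", lo), ("high", hi)):
        args = dict(medians, **{name: value})
        result = exposure(**args)
        print(f"{name} at {label:4s}: dose = {result:.3f} "
              f"({result / baseline:.2f} x baseline)")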
Analytical uncertainty propagation involves examining how uncertainty
in individual parameters affects the overall uncertainty of the exposure
assessment. Intuitively, it seems clear that uncertainty in one parameter
may propagate through a model very differently than uncertainty in another
parameter of approximately the same magnitude. Some parameters are more important than
others, and the model structure is designed to account for the relative sensitivity.
Thus, uncertainty propagation is a function of both the data and the model
structure. Accordingly, both model sensitivity and input variances are evaluated
in this procedure. Application of this approach to exposure assessment requires
explicit mathematical expressions of exposure, estimates of the variances for each
of the variables of interest, and the ability either analytically or numerically to
obtain a mathematical derivative of the exposure equation.
Although uncertainty propagation is a powerful tool, it should be applied
with caution, and the assessor should consider several points. It is difficult to
generate and solve the equations for the sensitivity coefficients. In addition, the
technique is most accurate for linear equations, so any departure from linearity
must be carefully evaluated. Assumptions, such as independence of variables and
normality of errors in the variables, need to be checked. Finally, this approach
requires estimates of parameter variance, and the information to support these
may not be readily available.
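As a simple illustration of analytical propagation, the following Python sketch applies a first-order (Taylor series) approximation to a purely multiplicative dose equation, for which the squared coefficients of variation of independent inputs approximately add; the equation form, means, and coefficients of variation are assumptions.

# Illustrative sketch of first-order uncertainty propagation for a purely
# multiplicative exposure equation, dose = C * IR / BW, with independent
# inputs.  All means and coefficients of variation (CVs) are assumed.
import math

inputs = {            # mean, coefficient of variation (assumed)
    "C":  (0.5, 0.40),
    "IR": (1.4, 0.25),
    "BW": (70.0, 0.15),
}

mean_dose = inputs["C"][0] * inputs["IR"][0] / inputs["BW"][0]
cv_dose = math.sqrt(sum(cv ** 2 for _, cv in inputs.values()))

print(f"approximate mean dose : {mean_dose:.4f} mg/kg/day")
print(f"approximate CV of dose: {cv_dose:.2f}")
print(f"approximate std. dev. : {cv_dose * mean_dose:.4f} mg/kg/day")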
Probabilistic uncertainty analysis is generally considered the next level
of refinement. The most common example is the Monte Carlo technique where
probability density functions are assigned to each parameter, then values from
these distributions are randomly selected and inserted into the exposure equation.
After this process is completed many times, a distribution of predicted values
results that reflects the overall uncertainty in the inputs to the calculation.
The principal advantage of the Monte Carlo method is its very general
applicability. There is no restriction on the form of the input distributions or the
nature of the relationship between input and output; computations are also
straightforward. There are some disadvantages as well as inconveniences,
however. The exposure assessor should only consider using this technique when
there are credible distribution data (or ranges) for most key variables. Even if
these distributions are known, it may not be necessary to apply this technique.
For example, if only average exposure values are needed, these can often be
computed as accurately by using average values for each of the input parameters.
Another inconvenience is that the sensitivity of the results to the input
distributions is somewhat cumbersome to assess. Changing the distribution of
only one value requires rerunning the entire calculation (typically, several
hundreds or thousands of times). Finally, Monte Carlo results do not tell the
assessor which variables are the most important contributors to output
uncertainty. This is a disadvantage since most analyses of uncertainty are
performed to find effective ways to reduce uncertainty.
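A minimal Python sketch of the Monte Carlo technique follows; the exposure equation and the input distributions are assumptions chosen for illustration, and a real application would require credible distribution data for the key variables.

# Illustrative sketch of the Monte Carlo technique: distributions are
# assigned to each uncertain parameter, values are drawn at random, and the
# exposure equation is evaluated repeatedly.  Equation and distributions assumed.
import numpy as np

rng = np.random.default_rng(3)
n_iter = 50_000

C  = rng.triangular(0.1, 0.5, 2.0, size=n_iter)    # mg/L, assumed
IR = rng.lognormal(np.log(1.4), 0.3, size=n_iter)  # L/day, assumed
BW = rng.normal(70, 10, size=n_iter).clip(min=40)  # kg, assumed

dose = C * IR / BW

print(f"median estimate      : {np.median(dose):.4f} mg/kg/day")
print(f"90% credible interval: {np.percentile(dose, 5):.4f} "
      f"to {np.percentile(dose, 95):.4f} mg/kg/day")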
Classical statistical methods can be used to analyze uncertainty in
measured exposures. Given a data set of measured exposure values for a series
of individuals, the population distribution may be estimated directly, provided
that the sample design was developed properly to capture a representative
sample. The measured exposure values also may be used to directly compute
confidence interval estimates for percentiles of the exposure distribution
(American Chemical Society, 1988). When the exposure distribution is estimated
from measured exposures for a probability sample of population members,
confidence interval estimates for percentiles of the exposure distribution are the
primary uncertainty characterization. Data collection survey design should also
be discussed, as well as accuracy and precision of the measurement techniques.
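One way to construct such interval estimates is a bootstrap calculation, sketched below in Python with synthetic stand-in measurements; the percentile of interest and the data are assumptions for illustration.

# Illustrative sketch: a bootstrap confidence interval for the 90th
# percentile of a set of measured exposures.  The "measurements" are
# synthetic values generated here only so the example runs end to end.
import numpy as np

rng = np.random.default_rng(4)
measurements = rng.lognormal(np.log(1.0), 0.8, size=120)  # stand-in data

point_estimate = np.percentile(measurements, 90)

boot = np.array([
    np.percentile(rng.choice(measurements, size=measurements.size, replace=True), 90)
    for _ in range(5_000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"90th percentile estimate    : {point_estimate:.2f}")
print(f"95% bootstrap conf. interval: {lo:.2f} to {hi:.2f}")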
Often the observed exposure distribution is skewed; many sample members
have exposure values at or below the detection limit. In this situation,
estimates of the exposure distribution may require a very large sample size.
Fitting the data to a distribution type can be problematic in this situation because
data are usually scant in the low probability areas (the tails) where numerical
values vary widely. As a consequence, for data sets for which the sampling has
been completed, means and standard deviations may be determined to a good
approximation, but characterization of the tails of the distribution will have much
greater uncertainty. This difference should be brought out in the discussion. For
data sets for which sampling is still practical, stratification of the statistical
population to oversample the tail may give more precision and confidence in the
information in the tail area of the distribution.
6.2.3. Model Uncertainty
At a minimum, the exposure assessor should describe in qualitative terms
the rationale for selection of any conceptual and mathematical models. This
discussion should address the status of these approaches and any plausible
alternatives in terms of their acceptance by the scientific community, how well
the model(s) represents the situation being assessed, e.g., high end estimate, and
to what extent verification and validation have been done. Relationship errors
and modeling errors are the primary sources of modeling uncertainty.
Relationship errors include errors in correlations between chemical
properties, structure-reactivity correlations, and environmental fate models. In
choosing to use these tools, the exposure assessor must decide among the many
possible functional forms available. Even though statistics on the performance of
the methodology for a given test set of chemicals may be available and can help
guide in the selection process, the exposure assessor must decide on the most
appropriate methodology for the chemical of interest based on the goals of the
assessment.
Modeling errors are due to models being simplified representations of
reality, for example approximating a three-dimensional aquifer with a two-
dimensional mathematical model. Even after the exposure assessor has selected
the most appropriate model for the purpose at hand, one is still faced with the
question of how well the model represents the real situation. This question is
compounded by the overlap between modeling uncertainties and other
uncertainties, e.g., natural variability in environmental inputs, representativeness
of the modeling scenario, and aggregation errors. The dilemma facing exposure
assessors is that many existing models (particularly the very complex ones) and
the hypotheses contained within them cannot be fully tested (Beck, 1987),
although certain components of the model may be tested. Even when a model
has been validated under a particular set of conditions, uncertainty will exist in its
application to situations beyond the test system.
A variety of approaches can be used to quantitatively characterize the
uncertainty associated with model constructs. One approach is to use different
modeling formulations (including the preferred and plausible alternatives) and
consider the range of the outputs to be representative of the uncertainty range.
.This strategy is most useful when no clear best approach can be identified due to
the lack of supporting data or when the situations being assessed require
extrapolation beyond the conditions for which the models were originally
designed.
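The following short Python sketch illustrates the alternative-formulation strategy with two hypothetical stand-in models; the formulations and parameter values are assumptions, and the spread of their outputs is reported as part of the uncertainty range.

# Illustrative sketch of using alternative model formulations to bracket
# model uncertainty.  The two "models" are hypothetical stand-ins (e.g.,
# alternative decay assumptions for an environmental concentration).
import math

def model_a(c0, k, t):
    # first-order decay, assumed formulation
    return c0 * math.exp(-k * t)

def model_b(c0, k, t):
    # no-decay (conservative) alternative, assumed formulation
    return c0

c0, k, t = 10.0, 0.01, 60.0   # assumed initial concentration, rate, days
outputs = [model(c0, k, t) for model in (model_a, model_b)]
print(f"predicted concentration range: {min(outputs):.2f} to {max(outputs):.2f} mg/L")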
Where the data base is sufficient, the exposure assessor should characterize
the uncertainty in the selected model by describing the validation and verification
efforts. Validation is the process of examining the performance of the model
compared to actual observations under situations representative of those being
assessed. Approaches for model validation have been discussed (U.S. EPA,
1985e). Verification is the process of confirming that the model computer code is
producing the proper numerical output. In most situations, only partial validation
is possible due to data deficiencies or model complexity.
6.3. Variability Within a Population Versus Uncertainty in the Estimate
For clarity, it should be emphasized that variability (the receipt of
different levels of exposure by different individuals) is being distinguished from
uncertainty (the lack of knowledge about the correct value for a specific exposure
measure or estimate). Most of the exposure and risk descriptors discussed in this
report deal with variability directly, but estimates must also be made of the
uncertainty of these descriptors.42 This may be done qualitatively or
quantitatively, and it is beyond the scope of this report to discuss the mechanics
of uncertainty analysis in detail. It is an important distinction, however, since the
risk assessor and risk manager need to know if the numbers being reported for
exposures take variability, uncertainty, or both, into consideration.
Not all approaches historically used to construct measures or estimates of
exposure attempted to distinguish variability and uncertainty. In particular, in
many cases in which estimates were termed worst case, focusing on the high end
of the exposed population while also selecting high-end values for uncertain
physical quantities resulted in values that were quite conservative. By
using both the high-end individuals (variability) and upper confidence bounds43
on data or physical parameters (uncertainty), these estimates might be interpreted
42 Each measure or estimate of exposure will have its associated uncertainty which should be
addressed both qualitatively and quantitatively. For example, if population mean exposure is being
addressed by use of direct personal monitoring data, qualitative issues will include the
representativeness of the population monitored to the full population, the representativeness of the
period selected for monitoring, and confidence that there were not systematic errors in the measured
data. Quantitative uncertainty could be addressed through the use of confidence intervals for the actual
mean population exposure.
43 The confidence interval is interpreted as the range of values within which the assessor knows
the true measure lies, with specified statistical confidence. The upper bound confidence limit is the
higher of the two ends of the confidence interval.
as "not exceeding an upper bound on exposures received by certain high-end
individuals."
Note that this approach will provide an estimate that considers both
variability and uncertainty, but by only reporting the upper confidence bound, it
appears to be merely a more conservative estimate of the variability. High end
estimates which include consideration of uncertainty should be presented with
both the upper and lower uncertainty bounds on the high end estimate. This
provides the necessary information to the risk manager. Without specific
discussion of what was done, risk managers may view the results as not having
dealt with uncertainty. It is fundamental to exposure assessment that assessors
have a clear distinction between the variability of exposures received by
individuals in a population, and the uncertainty of the data and physical
parameters used in calculating exposure.
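One way to present variability and uncertainty separately, sketched below in Python, is a nested (two-dimensional) simulation in which an outer loop samples uncertain quantities and an inner loop samples inter-individual variability; the distributions and parameter values are assumptions for illustration, and this is not a required Agency procedure.

# Illustrative sketch of a nested simulation that keeps variability and
# uncertainty separate.  The outer loop samples uncertain quantities (here,
# the true geometric mean and GSD of the exposure distribution); the inner
# loop samples inter-individual variability given those values.  All
# distributions and parameter values are assumed.
import numpy as np

rng = np.random.default_rng(5)
n_outer, n_inner = 200, 5_000

high_end_estimates = []
for _ in range(n_outer):
    gm  = rng.normal(1.0, 0.2)     # uncertainty about the true GM (assumed)
    gsd = rng.normal(2.5, 0.3)     # uncertainty about the true GSD (assumed)
    exposures = rng.lognormal(np.log(max(gm, 0.1)), np.log(max(gsd, 1.1)), n_inner)
    high_end_estimates.append(np.percentile(exposures, 95))  # variability

high_end_estimates = np.array(high_end_estimates)
print(f"95th-percentile exposure (variability), central value: "
      f"{np.median(high_end_estimates):.2f}")
print(f"uncertainty bounds on that high-end estimate: "
      f"{np.percentile(high_end_estimates, 5):.2f} to "
      f"{np.percentile(high_end_estimates, 95):.2f}")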
The discussion of estimating exposure and dose presented in Section 5.3.4
addresses the rationale and approaches for constructing a range of measures or
estimates of exposure, with emphasis on how these can be used for exposure or
risk characterization. The distinction between these measures or estimates (e.g.,
average versus high end) is often a difference in anticipated variability in the
exposures received by individuals (i.e., average exposure integrates exposures
across all individuals, while high-end exposure focuses on the upper percentiles of
the exposed group being assessed).
characterize risk in different ways, this does not address which of these measures
or characterizations is used for decisions. The selection of the point or measure
of exposure or risk upon which regulatory decisions are made is a risk
management decision governed by programmatic policy, and is therefore beyond
the scope of these guidelines.
7. PRESENTING THE RESULTS OF THE EXPOSURE ASSESSMENT
One of the most important aspects of the exposure assessment is
presenting the results. It is here that the assessment ultimately succeeds or fails
in meeting the objectives laid out in the planning as discussed in Section 3. This
section discusses communication of the results, format considerations, and
suggested tips for reviewing exposure assessments either as a final check or as a
review of work done by others.
7.1. Communicating the Results of the Assessment
Communicating the results of an exposure assessment is more than a
simple summary of conclusions and quantitative estimates for the various
pathways and routes of exposure. The most important part of an exposure
assessment is the overall narrative exposure characterization, without which the
assessment is merely a collection of data, calculations, and estimates. This
exposure characterization should consist of discussion, analysis, and conclusions
that synthesize the results from the earlier portions of the document, present a
balanced representation of the available data and their relevance to the health
effects of concern, and identify key assumptions and major areas of uncertainty.
Section 7.1.1 discusses the exposure characterization, and Section 7.1.2 discusses
how this is used in the risk characterization step of a risk assessment.
7.1.1. Exposure Characterization
The exposure characterization is the summary explanation of the exposure
assessment. In this final step, the exposure characterization:
provides a statement of purpose, scope, level of detail, and
approach used in the assessment, including key assumptions;
presents the estimates of exposure and dose by pathway and route
for individuals, population segments, and populations in a manner
appropriate for the intended risk characterization;
provides an evaluation of the overall quality of the assessment and
the degree of confidence the authors have in the estimates of
exposure and dose and the conclusions drawn;
interprets the data and results; and
communicates results of the exposure assessment to the risk
assessor, who can then use the exposure characterization, along with
characterizations of the other risk assessment elements, to develop a
risk characterization.
As part of the statement of purpose, the exposure characterization explains
why the assessment was done and what questions were asked. It also reaches a
conclusion as to whether the questions posed were in fact answered, and with
what degree of confidence. It should also note whether the exposure assessment
brought to light additional or perhaps more appropriate questions, if these were
answered, and if so, with what degree of confidence.
The statement of scope discusses the geographical or demographic
boundaries of the assessment. The specific populations and population segments
that were the subjects of the assessment are clearly identified, and the reasons for
their selection and any exclusions are discussed. Especially sensitive groups or
groups that may experience unusual exposure patterns are highlighted.
The characterization also discusses whether the scope and level of detail of
the assessment were ideal for answering the questions of the assessment and
whether limitations in scope and level of detail were made because of technical,
practical, or financial reasons, and the implications of these limitations on the
quality of the conclusions.
The methods used to quantify exposure and dose are clearly stated in the
exposure characterization. If models are used, the basis for their selection and
validation status is described. If measurement data are used, the quality of the
data is discussed. The strengths and weaknesses of the particular methods used
to quantify exposure and dose are described, along with comparison and contrast
to alternate methods, if appropriate.
In presenting the exposure and dose estimates, the important sources,
pathways, and routes of exposure are identified and quantified, and reasons for
excluding any from the assessment are discussed.
A variety of risk descriptors, and where possible, the full population
distribution is presented. Risk managers should be given some sense of how
exposure is distributed over the population and how variability in population
activities influences this distribution. Ideally, the exposure characterization links
the purpose of the assessment with specific risk descriptors, which in turn are
presented in such a way as to facilitate construction of a risk characterization.
A discussion of the quality of the exposure and dose estimates is critical to
the credibility of the assessment. This may be based in part on a quantitative
uncertainty analysis, but the exposure characterization must explain the results of
any such analysis in terms of the degree of confidence to be placed in the
estimates and conclusions drawn.
Finally, a description of additional research and data needed to improve
the exposure assessment is often helpful to risk managers in making decisions
about improving the quality of the assessment. For this reason, the exposure
characterization should identify key data gaps that can help focus further efforts
to reduce uncertainty.
Additional guidance on communicating the results of an exposure
assessment can be found in the proceedings of a recent workshop on risk
communication (American Industrial Health Council, 1989).
7.1.2. Risk Characterization
Most exposure assessments will be done as part of a risk assessment, and
the exposure characterization must be useful to the risk assessor in constructing a
risk characterization. Risk characterization is the integration of information from
hazard identification, dose-response assessment, and exposure assessment into a
coherent picture. A risk characterization is a necessary part of any Agency report
on risk whether the report is a preliminary one prepared to support allocation of
resources toward further study or a comprehensive one prepared to support
regulatory decisions.
Risk characterization is the culmination of the risk assessment process. In
this final step, the risk characterization:
integrates the individual characterizations from the hazard
identification, dose-response, and exposure assessments;
provides an evaluation of the overall quality of the assessment and
the degree of confidence the authors have in the estimates of risk
and conclusions drawn;
describes risks to individuals and populations in terms of extent and
severity of probable harm; and
communicates results of the risk assessment to the risk manager.
It provides a scientific interpretation of the assessment. The risk manager can
then use the risk assessment, along with other risk management elements, to
make public health decisions. The following sections describe these four aspects
of the risk characterization in more detail.
7.1.2.1. Integration of Hazard Identification, Dose-Response, and Exposure
Assessments
In developing the hazard identification, dose-response, and exposure
portions of the risk assessment, the assessor makes many judgments concerning
the relevance and appropriateness of data and methodology. These judgments
are summarized in the individual characterizations for hazard identification, dose-
response, and exposure. In integrating the parts of the assessment, the risk
assessor determines if some of these judgments have implications for other parts
of the assessment, and whether the parts of the assessment are compatible. For
example, if the hazard identification assessment determines that a chemical is a
developmental toxicant but not a carcinogen, the dose-response and exposure
information is presented accordingly; this differs greatly from the way the
presentation is made if the chemical is a carcinogen but not a developmental
toxicant.
The risk characterization not only examines these judgments, but also
explains the constraints of available data and the state of knowledge about the
phenomena studied in making them, including:
the qualitative, weight-of-evidence conclusions about the likelihood
that the chemical may pose a specific hazard (or hazards) to human
health, the nature and severity of the observed effects, and by what
route(s) these effects are seen to occur. These judgments affect
both the dose-response and exposure assessments;
for noncancer effects, a discussion of the dose-response behavior of
the critical effect(s), data such as the shapes and slopes of the
dose-response curves for the various other toxic endpoints, and how
this information was used to determine the appropriate
dose-response assessment technique; and
the estimates of the magnitude of the exposure, the route, duration
and pattern of the exposure, relevant pharmacokinetics, and the
number and characteristics of the population exposed. This
information must be compatible with both the hazard identification
and dose-response assessments.
The presentation of the integrated results of the assessment draws from
and highlights key points of the individual characterizations of hazard,
dose-response, and exposure analysis performed separately under these
Guidelines. The summary integrates these component characterizations into an
overall risk characterization.
7.1.2.2. Quality of the Assessment and Degree of Confidence
The risk characterization summarizes the data brought together in the
analysis and the reasoning upon which the assessment is based. The description
also conveys the major strengths and weaknesses of the assessment that arise
from data availability and the current limits of understanding of toxicity
mechanisms.
Confidence in the results of a risk assessment is consequently a function of
confidence in the results of analysis of each element: hazard, dose-response, and
exposure. Each of these three elements has its own characterization associated
with it. For example, the exposure assessment component includes an exposure
characterization. Within each characterization, the important uncertainties of the
analysis and interpretation of data are explained so that the risk manager is given
a clear picture of any consensus or lack thereof about significant aspects of the
assessment. For example, whenever more than one view of dose-response
assessment is supported by the data and by the policies of these Guidelines, and
choosing between them is difficult, the views are presented together. If one has
been selected over another, the rationale is given; if not, then both are presented
as plausible alternatives.
If a quantitative uncertainty analysis is appropriate, it is summarized in the
risk characterization; in any case a qualitative discussion of important
uncertainties is appropriate. If other organizations, such as other Federal
agencies, have published risk assessments, or prior EPA assessments have been
done on the substance or an analogous substance and have relevant similarities or
differences, these too are described.
7.1.2.3. Descriptors of Risk
There are a number of different ways to describe risk in quantitative or
qualitative terms. Section 2.3 explains how risk descriptors are used. It is
important to explain what aspect of the risk is being described, and how the
exposure data and estimates are used to develop the particular descriptor.
7.1.2.4. Communicating Results of a Risk Assessment to the Risk Manager
Once the risk characterization is completed, the focus turns to
communicating results to the risk manager. The risk manager uses the results of
the risk characterization, technologic factors, and socioeconomic considerations in
reaching a regulatory decision. Because of the way these risk management
factors may impact different cases, consistent, but not necessarily identical, risk
management decisions must be made on a case-by-case basis. Consequently, it is
entirely possible and appropriate that a chemical with a specific risk
characterization may be regulated differently under different statutes. These
Guidelines are not intended to give guidance on the nonscientific aspects of risk
management decisions.
7.1.3. Establishing the Communication Strategy
For assessments that must be explained to the general public, a
communication strategy is often required. Although risk communication is often
considered a part of risk management, it involves input from the exposure and
risk assessors; early planning for a communication strategy can be very helpful to
the ultimate risk communication.
The EPA has guidance on preparing communication strategies (U.S. EPA,
1988g). Additional sources of information are the New Jersey Department of
Environmental Protection (1988a, 1988b) and the NRC (1989b). These
documents, and the sources listed within them, are valuable resources for all who
will be involved with the sensitive issues of explaining environmental health risks.
The NRC (1989b, p. 148) states:
It is a mistake to simply consider risk communication to be an add-
on activity for either scientific or public affairs staffs; both elements
should be involved. There are clear dangers if risk messages are
formulated ad hoc by public relations personnel in isolation from
available technical expertise; neither can they be prepared by risk
analysts as a casual extension of their analytic duties.
7.2. Format for Exposure Assessment Reports
The Agency does not require a set format for exposure assessment reports,
but individual program offices within the Agency may have specific format
requirements. Section 3 illustrates that exposure assessments are performed for a
variety of purposes, scopes, and levels of detail, and use a variety of approaches.
While it is impracticable for the Agency to specify an outline format for all types
of assessments being performed within the Agency, program offices are
encouraged to use consistent formats for similar types of assessments within their
own purview.
All exposure assessments must, at a minimum, contain a narrative exposure
characterization section that contains the types of information discussed in
Section 7.1. For the purpose of consistency, this section should be titled exposure
characterization. Placement of this section within the assessment is optional, but
it is strongly suggested that it be prominently featured in the assessment. It is
not, however, an executive summary and should not be used interchangeably with
one.
7.3. Reviewing Exposure Assessments
This section provides some suggestions on how to effectively review an
exposure assessment and highlights some of the common pitfalls. The emphasis
in these Guidelines has been on how to properly conduct exposure assessments;
this section can serve as a final checklist in reviewing the completed assessment.
An exposure assessor also may be called upon to critically review and evaluate
exposure assessments conducted by others; these suggestions should be helpful in
this regard.
Reviewers of exposure assessments are usually asked to identify
inconsistencies with the underlying science and with Agency-developed guidelines,
factors, and methodologies, and to determine the effect these inconsistencies might
have on the results and conclusions of the exposure assessment. Often the
reviewer can only describe whether these inconsistencies or deficiencies might
underestimate or overestimate exposure.
Some of the questions a reviewer should ask to identify the more common
pitfalls that tend to underestimate exposure are:
Has the pathways analysis been broad enough to avoid overlooking a
significant pathway? For example, in evaluating exposure to soil contaminated
with PCBs, the exposure assessment should not be limited only to evaluating the
dermal contact pathway. Other pathways, such as inhalation of dust and vapors
or the ingestion of contaminated gamefish from an adjacent stream receiving
surface runoff containing contaminated soil, should also be evaluated as they
could contribute higher levels of exposure from the same source.
Have all the contaminants of concern in a mixture been evaluated? Since
risks resulting from exposures to complex mixtures of chemicals with the same
mode of toxic action are generally treated as additive (by summing the risks) in a
risk assessment, failure to evaluate one or more of the constituents would neglect
its contribution to the total exposure and risk. This is especially critical for
relatively toxic or potent chemicals that tend to drive risk estimates even when
present in relatively low quantities.
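As a purely illustrative sketch (the chemical names, doses, and potency values below are hypothetical and are not Agency-recommended values), additivity for same-mode-of-action constituents amounts to summing each evaluated chemical's contribution, so that a constituent left out of the analysis simply drops out of the sum:

```python
# Hypothetical illustration of risk additivity for a chemical mixture; names,
# doses (mg/kg-day), and potency values are invented, not Agency values.
mixture = {
    "chemical_A": {"dose": 1.0e-4, "potency": 0.05},  # higher dose, low potency
    "chemical_B": {"dose": 2.0e-6, "potency": 7.5},   # low dose, high potency
}

# Each constituent's risk approximated as dose x potency (linear low-dose form).
risks = {name: c["dose"] * c["potency"] for name, c in mixture.items()}
total_risk = sum(risks.values())

print(risks)
# Omitting chemical_B would drop the total from 2.0e-5 to 5.0e-6, a fourfold
# understatement, even though it is present at a much lower dose.
print(total_risk)
```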
Have exposure levels or concentration measurements been compared with
appropriate background levels? Contaminant concentrations or exposure levels
should not be compared with other contaminated media or exposed populations.
When comparing with background levels, the exposure assessor must determine
whether these concentrations or exposure levels are also affected by
contamination from anthropogenic activities.
Were the detection limits sensitive enough to make interpretations about
exposures at levels corresponding to health concerns? Were the data interpreted
correctly? Because values reported as not detected (ND) mean only that the
chemical of interest was not found at the particular detection limit used in the
laboratory analysis, ND does not rule out the possibility that the chemical may be
present in significant concentrations. Depending on the purpose and the degree
of conservatism warranted in the exposure assessment, results reported as ND
should be handled as discussed in Section 5.
Has the possibility of additive pathways been considered for the population
being studied? If the purpose of the exposure assessment is to evaluate the total
exposure and risk of a population, then exposures from individual pathways
within the same route may be summed in cases in which concurrent exposures can
realistically be expected to occur.
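A minimal sketch of this summation, assuming hypothetical pathway names and dose values (in mg/kg-day) and assuming that concurrent exposure has already been judged realistic for the population studied:

```python
# Hypothetical sketch: summing doses from pathways within the same route
# (here, ingestion). Pathway names and dose values are invented.
ingestion_pathways = {
    "drinking_water": 2.0e-3,
    "home_grown_produce": 5.0e-4,
    "incidental_soil_ingestion": 1.0e-4,
}

concurrent = True  # sum only if the same individuals plausibly receive all pathways
if concurrent:
    total_dose = sum(ingestion_pathways.values())
    print(f"Total ingestion dose: {total_dose:.1e} mg/kg-day")  # 2.6e-03
```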
Some questions a reviewer should ask to avoid the more prevalent errors
that generally tend to overestimate exposure are:
Have unrealistically conservative exposure parameters been used in the
scenarios? The exposure assessor must conduct a reality check to ensure that the
exposure cases used in the scenario(s) (except bounding estimates) could actually
occur.
Have potential exposures been presented as existing exposures? In many
situations, especially when the scenario evaluation approach is used, the objective
of the assessment is to estimate potential exposures. (That is, if a person were to
be exposed to these chemicals under these conditions, then the resultant exposure
would be this much.) In determining the need and urgency for regulatory action,
risk managers often weigh actual exposures more heavily than higher levels of
potential exposures. Therefore, the exposure assessment should clearly note
whether the results represent actual or potential exposures.
Have exposures derived from "not detected" levels been presented as actual
exposures? For some exposure assessments it may be appropriate to assume that
a chemical reported as not detected is present at either the detection limit or one-
half the detection limit. The exposure estimates derived from these nondetects,
however, should be clearly labeled as hypothetical since they are based on the
conservative assumption that chemicals are present at or below the detection
limit, when, in fact, they may not be present at all. Exposures, doses, or risks
estimated by substituting detection limit values for "not detected"
samples must be reported as "less than" the resulting exposure, dose, or risk
estimate.
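The following sketch illustrates one way such a substitution might be carried out; the sample values, the detection limit, and the choice of one-half the detection limit are all hypothetical, and Section 5 governs which treatment is appropriate in a given assessment:

```python
# Hypothetical sketch of handling "not detected" (ND) results when a
# conservative substitution is warranted; values and the detection limit
# are invented.
samples = [0.8, 1.2, "ND", 0.5, "ND"]  # measured concentrations, mg/L
detection_limit = 0.2                   # mg/L

def substitute(value, dl, fraction=1.0):
    """Replace ND with dl * fraction (1.0 gives DL, 0.5 gives DL/2)."""
    return dl * fraction if value == "ND" else value

substituted = [substitute(v, detection_limit, fraction=0.5) for v in samples]
mean_conc = sum(substituted) / len(substituted)

# Estimates built on substituted NDs are hypothetical upper bounds and should
# be flagged ("less than") rather than presented as actual exposures.
print(f"Mean concentration: < {mean_conc:.2f} mg/L (NDs set to DL/2)")
```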
Questions a reviewer should ask to identify common errors that may
underestimate or overestimate exposure are:
Are the results presented with an appropriate number of significant figures?
The number of significant figures should reflect the uncertainty of the numeric
estimate. If the likely range of the results spans several orders of magnitude,
then using more than one significant figure implies more confidence in the results
than is warranted.
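As a small illustration (the calculated value below is hypothetical), rounding a result to a single significant figure is straightforward:

```python
# Hypothetical illustration: report only as many significant figures as the
# uncertainty supports. The calculated value is invented.
calculated_dose = 3.47162e-4  # mg/kg-day from a chain of uncertain inputs

def round_sig(x, sig=1):
    """Round x to the given number of significant figures."""
    return float(f"{x:.{sig - 1}e}")

# If the plausible range spans orders of magnitude, one significant figure
# is usually all the precision the estimate can support.
print(round_sig(calculated_dose, sig=1))  # 0.0003
```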
Have the calculations been checked for computational errors? Obviously,
calculations should be checked for arithmetic errors and mistakes in converting
units. This is overlooked more often than one might expect.
Are the factors for intake rates, etc. used appropriately? Exposure factors
should be checked to ensure that they correspond to the site or situation being
evaluated.
Have the uncertainties been adequately addressed? Exposure assessment is
an inexact science, and the confidence in the results may vary tremendously. It is
essential that the exposure assessment include an uncertainty assessment that places
these uncertainties in perspective.
If Monte Carlo simulations were used, were correlations among input
distributions known and properly accounted for? Is the maximum value simulated
by this method in fact a bounding estimate? Was Monte Carlo simulation
necessary? (A Monte Carlo simulation randomly selects the values from the input
parameters to simulate an individual. If data already exist to show the
relationship between variables for the actual individuals, it makes little sense to
use Monte Carlo simulation, since one already has the answer to the question of
how the variables are related for each individual. A simulation is unnecessary.)
8. GLOSSARY OF TERMS
Absorbed dose - See internal dose.
Absorption barrier - Any of the exchange barriers of the body that allow
differential diffusion of various substances across a boundary. Examples of
absorption barriers are the skin, lung tissue, and gastrointestinal tract wall.
Accuracy - The measure of the correctness of data, as given by the difference
between the measured value and the true or standard value.
Administered dose - The amount of a substance given to a test subject (human
or animal) in determining dose-response relationships, especially through
ingestion or inhalation. In exposure assessment, since exposure to chemicals is
usually inadvertent, this quantity is called potential dose.
Agent - A chemical, physical, mineralogical, or biological entity that may cause
deleterious effects in an organism after the organism is exposed to it.
Ambient - The conditions surrounding a person, sampling location, etc.
Ambient measurement - A measurement (usually of the concentration of a
chemical or pollutant) taken in an ambient medium, normally with the intent of
relating the measured value to the exposure of an organism that contacts that
medium.
Ambient medium - One of the basic categories of material surrounding or
contacting an organism, e.g., outdoor air, indoor air, water, or soil, through which
chemicals or pollutants can move and reach the organism. (See also biological
medium, environmental medium)
Applied dose - The amount of a substance in contact with the primary
absorption boundaries of an organism (e.g., skin, lung, gastrointestinal tract) and
available for absorption.
Arithmetic mean - The sum of all the measurements in a data set divided by the
number of measurements in the data set.
Background level (environmental) - The concentration of a substance in a
defined control area during a fixed period of time before, during, or after a data-
gathering operation.
Breathing zone - A zone of air in the vicinity of an organism from which
respired air is drawn. Personal monitors are often used to measure pollutants in
the breathing zone.
Bias - A systematic error inherent in a method or caused by some feature of the
measurement system.
Bioavailability - The state of being capable of being absorbed and available to
interact with the metabolic processes of an organism. Bioavailability is typically a
function of chemical properties, physical state of the material to which an
organism is exposed, and the ability of the individual organism to physiologically
take up the chemical.
Biological marker of exposure (sometimes referred to as a biomarker of
exposure) - Exogenous chemicals, their metabolites, or products of interactions
between a xenobiotic chemical and some target molecule or cell that is measured
in a compartment within an organism.
Biological measurement - A measurement taken in a biological medium. For
the purpose of exposure assessment via reconstruction of dose, the measurement
is usually of the concentration of a chemical/metabolite or the status of a
biomarker, normally with the intent of relating the measured value to the internal
dose of a chemical at some time in the past. (Biological measurements are also
taken for purposes of monitoring health status and predicting effects of
exposure.) (See also ambient measurement)
Biological medium - One of the major categories of material within an
organism, e.g., blood, adipose tissue, or breath, through which chemicals can
move, be stored, or be biologically, physically, or chemically transformed. (See
also ambient medium, environmental medium)
Biologically effective dose - The amount of a deposited or absorbed chemical
that reaches the cells or target site where an adverse effect occurs, or where that
chemical interacts with a membrane surface.
Blank (blank sample) - An unexposed sampling medium, or an aliquot of the
reagents used in an analytical procedure, in the absence of added analyte. The
measured value of a blank sample is the blank value.
Body burden - The amount of a particular chemical stored in the body at a
particular time, especially a potentially toxic chemical in the body as a result of
exposure. Body burdens can be the result of long-term or short-term storage, for
example, the amount of a metal in bone, the amount of a lipophilic substance
such as PCB in adipose tissue, or the amount of carbon monoxide (as
carboxyhemoglobin) in the blood.
Bounding estimate - An estimate of exposure, dose, or risk that is higher than
that incurred by the person in the population with the highest exposure, dose, or
risk. Bounding estimates are useful in developing statements that exposures,
doses, or risks are "not greater than" the estimated value.
Comparability - The ability to describe likenesses and differences in the quality
and relevance of two or more data sets.
Data quality objectives (DQO) - Qualitative and quantitative statements of the
overall level of uncertainty that a decision-maker is willing to accept in results or
decisions derived from environmental data. DQOs provide the statistical
framework for planning and managing environmental data operations consistent
with the data user's needs.
Dose - The amount of a substance available for interaction with metabolic
processes or biologically significant receptors after crossing the outer boundary of
an organism. The potential dose is the amount ingested, inhaled, or applied to the
skin. The applied dose is the amount of a substance presented to an absorption
barrier and available for absorption (although not necessarily having yet crossed
the outer boundary of the organism). The absorbed dose is the amount crossing a
specific absorption barrier (e.g., the exchange boundaries of skin, lung, and
digestive tract) through uptake processes. Internal dose is a more general term
denoting the amount absorbed without respect to specific absorption barriers or
exchange boundaries. The amount of the chemical available for interaction by
any particular organ or cell is termed the delivered dose for that organ or cell.
Dose rate - Dose per unit time, for example in mg/day, sometimes also called
dosage. Dose rates are often expressed on a per-unit-body-weight basis, yielding
units such as mg/kg/day (mg/kg-day). They are also often expressed as averages
over some time period, for example a lifetime.
Dose-response assessment - The determination of the relationship between the
magnitude of administered, applied, or internal dose and a specific biological
response. Response can be expressed as measured or observed incidence, percent
response in groups of subjects (or populations), or the probability of occurrence
of a response in a population.
Dose-response curve - A graphical representation of the quantitative
relationship between administered, applied, or internal dose of a chemical or
agent, and a specific biological response to that chemical or agent.
Dose-response relationship - The resulting biological responses in an organ or
organism expressed as a function of a series of different doses.
Dosimeter - Instrument to measure dose; many so-called dosimeters actually
measure exposure rather than dose.
Dosimetry - Process of measuring or estimating dose.
Ecological exposure - Exposure of a nonhuman receptor or organism to a
chemical, or a radiological or biological agent.
Effluent - Waste material being discharged into the environment, either treated
or untreated. Effluent generally is used to describe water discharges to the
environment, although it can refer to stack emissions or other material flowing
into the environment.
Environmental fate - The destiny of a chemical or biological pollutant after
release into the environment. Environmental fate involves temporal and spatial
considerations of transport, transfer, storage, and transformation.
Environmental fate model - In the context of exposure assessment, any
mathematical abstraction of a physical system used to predict the concentration of
specific chemicals as a function of space and time subject to transport, intermedia
transfer, storage, and degradation in the environment.
Environmental medium - One of the major categories of material found in the
physical environment that surrounds or contacts organisms, e.g., surface water,
ground water, soil, or air, and through which chemicals or pollutants can move
and reach the organisms. (See ambient medium, biological medium)
Exposure - Contact of a chemical, physical, or biological agent with the outer
boundary of an organism. Exposure is quantified as the concentration of the
agent in the medium in contact, integrated over the time duration of that contact.
Exposure assessment - The determination or estimation (qualitative or
quantitative) of the magnitude, frequency, duration, and route of exposure.
Exposure concentration - The concentration of a chemical in its transport or
carrier medium at the point of contact.
Exposure pathway - The physical course a chemical or pollutant takes from the
source to the organism exposed.
Exposure route - The way a chemical or pollutant enters an organism after
contact, e.g., by ingestion, inhalation, or dermal absorption.
Exposure scenario - A set of facts, assumptions, and inferences about how
exposure takes place that aids the exposure assessor in evaluating, estimating, or
quantifying exposures.
Fixed-location monitoring - Sampling of an environmental or ambient medium
for pollutant concentration at one location continuously or repeatedly over some
length of time.
Geometric mean - The nth root of the product of n values.
Guidelines - Principles and procedures to set basic requirements for general
limits of acceptability for assessments.
Hazard identification - A description of the potential health effects attributable
to a specific chemical or physical agent. For carcinogen assessments, the hazard
identification phase of a risk assessment is also used to determine whether a
particular agent or chemical is, or is not, causally linked to cancer in humans.
High-end exposure (dose) estimate - A plausible estimate of individual
exposure or dose for those persons at the upper end of an exposure or dose
distribution, conceptually above the 90th percentile, but not higher than the
individual in the population who has the highest exposure or dose.
High-end Risk Descriptor - A plausible estimate of the individual risk for those
persons at the upper end of the risk distribution, conceptually above the 90th
percentile but not higher than the individual in the population with the highest
risk. Note that persons in the high end of the risk distribution have high risk due
to high exposure, high susceptibility, or other reasons, and therefore persons in
the high end of the exposure or dose distribution are not necessarily the same
individuals as those in the high end of the risk distribution.
Intake - The process by which a substance crosses the outer boundary of an
organism without passing an absorption barrier, e.g., through ingestion or
inhalation. (See also potential dose)
Internal dose - The amount of a substance penetrating across the absorption
barriers (the exchange boundaries) of an organism, via either physical or
biological processes. For the purpose of these Guidelines, this term is
synonymous with absorbed dose.
Limit of detection (LOD) [or Method detection limit (MDL)] - The
minimum concentration of an analyte that, in a given matrix and with a specific
method, has a 99% probability of being identified, qualitatively or quantitatively
measured, and reported to be greater than zero.
Matrix - A specific type of medium (e.g., surface water, drinking water) in which
the analyte of interest may be contained.
Maximally exposed individual (MEI) - The single individual with the highest
exposure in a given population (also, maximum exposed individual). This term
has historically been defined various ways, including as defined here and also
synonymously with worst case or bounding estimate. Assessors are cautioned to
look for contextual definitions when encountering this term in the literature.
Maximum exposure range - A semiquantitative term referring to the extreme
uppermost portion of the distribution of exposures. For consistency, this term
(and the dose or risk analogues) should refer to the portion of the individual
exposure distribution that conceptually falls above about the 98th percentile of the
distribution, but is not higher than the individual with the highest exposure.
Median value - The value in a measurement data set such that half the
measured values are greater and half are less.
Microenvironment method - A method used in predictive exposure assessments
to estimate exposures by sequentially assessing exposure for a series of areas
(microenvironments) that can be approximated by constant or well-characterized
concentrations of a chemical or other agent.
Microenvironments - Well-defined surroundings such as the home, office,
automobile, kitchen, store, etc. that can be treated as homogeneous (or well
characterized) in the concentrations of a chemical or other agent.
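As an illustration of the microenvironment method (all concentrations and time allocations below are hypothetical), exposure is accumulated as the concentration-time product over each microenvironment:

```python
# Hypothetical sketch of the microenvironment method: exposure is built up as
# the sum of (concentration x time) over locations with well-characterized
# concentrations. All values are invented for the example.
microenvironments = [
    # (name, concentration in ug/m3, hours spent per day)
    ("home", 12.0, 16.0),
    ("office", 25.0, 7.0),
    ("commute (car)", 60.0, 1.0),
]

# Exposure as a concentration-time product (ug/m3 x hr), consistent with the
# definition of exposure as concentration integrated over contact time.
daily_exposure = sum(conc * hours for _, conc, hours in microenvironments)

# A time-weighted average concentration is the same quantity divided by total time.
total_hours = sum(hours for _, _, hours in microenvironments)
twa_concentration = daily_exposure / total_hours
print(daily_exposure, twa_concentration)
```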
Mode - The value in the data set that occurs most frequently.
Monte Carlo technique - A repeated random sampling from the distribution of
values for each of the parameters in a generic (exposure or dose) equation to
derive an estimate of the distribution of (exposures or doses in) the population.
Nonparametric statistical methods - Methods that do not assume a functional
form with identifiable parameters for the statistical distribution of interest
(distribution-free methods).
Pathway - The physical course a chemical or pollutant takes from the source to
the organism exposed.
Personal measurement - A measurement collected from an individual's
immediate environment using active or passive devices to collect the samples.
Pharmacokinetics - The study of the time course of absorption, distribution,
metabolism, and excretion of a foreign substance (e.g., a drug or pollutant) in an
organism's body.
Point-of-contact measurement of exposure - An approach to quantifying
exposure by taking measurements of concentration over time at or near the point
of contact between the chemical and an organism while the exposure is taking
place.
Potential dose - The amount of a chemical contained in material ingested, air
breathed, or bulk material applied to the skin.
Precision - A measure of the reproducibility of a measured value under a given
set of conditions.
Probability samples - Samples selected from a statistical population such that
each sample has a known probability of being selected.
Quality assurance (QA) - An integrated system of activities involving planning,
quality control, quality assessment, reporting and quality improvement to ensure
that a product or service meets defined standards of quality with a stated level of
confidence.
Quality control (QC) - The overall system of technical activities whose purpose
is to measure and control the quality of a product or service so that it meets the
needs of the users. The aim is to provide quality that is satisfactory, adequate,
dependable, and economical.
Quantification limit (QL) - The concentration of analyte in a specific matrix for
which the probability of producing analytical values above the method detection
limit is 99%.
Random samples - Samples selected from a statistical population such that each
sample has an equal probability of being selected.
Range - The difference between the largest and smallest values in a
measurement data set.
Reasonable worst case - A semiquantitative term referring to the lower portion
of the high end of the exposure, dose, or risk distribution. The reasonable worst
case has historically been loosely defined, including synonymously with maximum
exposure or worst case, and assessors are cautioned to look for contextual
definitions when encountering this term in the literature. As a semiquantitative
term, it is sometimes useful to refer to individual exposures, doses, or risks that,
while in the high end of the distribution, are not in the extreme tail. For
consistency, it should refer to a range that can conceptually be described as above
the 90th percentile in the distribution, but below about the 98th percentile.
(compare maximum exposure range, worst case).
Reconstruction of dose - An approach to quantifying exposure from internal
dose, which is in turn reconstructed after exposure has occurred, from evidence
within an organism such as chemical levels in tissues or fluids or from evidence of
other biomarkers of exposure.
Representativeness - The degree to which a sample is, or samples are,
characteristic of the whole medium, exposure, or dose for which the samples are
being used to make inferences.
Risk - The probability of deleterious health or environmental effects.
Risk characterization - The description of the nature and often the magnitude
of human or nonhuman risk, including attendant uncertainty.
Route - The way a chemical or pollutant enters an organism after contact, e.g.,
by ingestion, inhalation, or dermal absorption.
Sample - A small part of something designed to show the nature or quality of the
whole. Exposure-related measurements are usually samples of environmental or
ambient media, exposures of a small subset of a population for a short time, or
biological samples, all for the purpose of inferring the nature and quality of
parameters important to evaluating exposure.
Sampling frequency - The time interval between the collection of successive
samples.
Sampling plan - A set of rules or procedures specifying how a sample is to be
selected and handled.
Scenario evaluation - An approach to quantifying exposure by measurement or
estimation of both the amount of a substance contacted, and the
frequency/duration of contact, and subsequently linking these together to
estimate exposure or dose.
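A minimal sketch of a scenario evaluation, using hypothetical drinking-water parameter values solely to show how contact concentration, contact rate, frequency, and duration are linked into a potential dose estimate:

```python
# Hypothetical scenario-evaluation sketch: an estimated contact concentration
# is combined with assumed frequency and duration of contact to yield a
# potential dose. All parameter values are invented and should be replaced
# with scenario-specific, documented values.
concentration_mg_per_L = 0.005   # chemical concentration in tap water
intake_L_per_day = 2.0           # assumed drinking-water intake
exposure_frequency = 350         # days per year the contact occurs
exposure_duration_yr = 9         # years of exposure
body_weight_kg = 70.0
averaging_days = exposure_duration_yr * 365

potential_dose = (concentration_mg_per_L * intake_L_per_day *
                  exposure_frequency * exposure_duration_yr) / (body_weight_kg * averaging_days)
print(f"{potential_dose:.2e} mg/kg-day")
```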
Source characterization measurements - Measurements made to characterize
the rate of release of agents into the environment from a source of emission such
as an incinerator, landfill, industrial or municipal facility, consumer product, etc.
Standard operating procedure (SOP) - A procedure adopted for repetitive use
when performing a specific measurement or sampling operation.
Statistical control - The process by which the variability of measurements or of
data outputs of a system is controlled to the extent necessary to produce stable
and reproducible results. To say that measurements are under statistical control
means that there is statistical evidence that the critical variables in the
measurement process are being controlled to such an extent that the system yields
data that are reproducible within well-defined limits.
Statistical significance - An inference that the probability is low that the
observed difference in quantities being measured could be due to variability in
the data rather than an actual difference in the quantities themselves. The
inference that an observed difference is statistically significant is typically based
on a test to reject one hypothesis and accept another.
Surrogate data - Substitute data or measurements on one substance used to
estimate analogous or corresponding values of another substance.
Uptake - The process by which a substance crosses an absorption barrier and is
absorbed into the body.
Worst case - A semiquantitative term referring to the maximum possible
exposure, dose, or risk, that can conceivably occur, whether or not this exposure,
dose, or risk actually occurs or is observed in a specific population. Historically,
this term has been loosely defined in an ad hoc way in the literature, so assessors
are cautioned to look for contextual definitions when encountering this term. It
should refer to a hypothetical situation in which everything that can plausibly
happen to maximize exposure, dose, or risk does in fact happen. This worst case
may occur (or even be observed) in a given population, but since it is usually a
very unlikely set of circumstances, in most cases, a worst-case estimate will be
somewhat higher than occurs in a specific population. As in other fields, the
worst-case scenario is a useful device when low probability events may result in a
catastrophe that must be avoided even at great cost, but in most health risk
assessments, a worst-case scenario is essentially a type of bounding estimate.
9. REFERENCES
Allaby, M. (1983) A dictionary of the environment. 2nd ed. New York, NY:
New York University Press, p. 195.
American Chemical Society. (1988) Principles of environmental sampling. In:
Keith, C.H., ed. ACS professional reference book. Washington, DC:
American Chemical Society.
American Industrial Health Council. (1989) Presentation of risk assessments of
carcinogens. Report of an Ad Hoc Study Group on Risk Assessment
Presentation. Washington, DC: American Industrial Health Council.
Atherley, G. (1985) A critical review of time-weighted average as an index of
exposure and dose, and of its key elements. Am. Ind. Hyg. Assoc. J.
46(9):481-487.
Beck, M.B. (1987) Water quality modeling: A review of the analysis of
uncertainty. Water Resources Research 23(8): 1393-1442.
Brown, S.L. (1987) Exposure assessment. In: Tardiff, R.G.; Rodricks, J.V., eds.
Toxic substances and human risk. New York, NY: Plenum Press, pp. 377-
390.
Cook, W.A. (1969) Problems of setting occupational exposure standards -
background. Arch. Environ. Health 19:272-276.
Cox, D.C.; Baybutt, P.C. (1981) Methods for uncertainty analysis: A comparative
survey. Risk Analysis 1(4):251-258.
Dixon, W.J. (1950) Analysis of extreme values. Ann. Math. Statist. 21:488-506.
Dixon, W.J. (1951) Ratios involving extreme values. Ann. Math. Statist. 22:68-78.
Dixon, W.J. (1953) Processing data for outliers. Biometrics 9:74-89.
Dixon, W.J. (1960) Simplified estimation from censored normal samples. Ann.
Math. Statist. 31:385-391.
Environ Corporation. (1988) Elements of toxicology and chemical risk
assessment. Rev. ed. Washington, DC: Environ Corporation, p. 67.
Finkel, A.M. (1990) Confronting uncertainty in risk management: A guide for
decision-makers. Washington, DC: Resources for the Future.
Gilliom, R.J.; Helsel, D.R. (1986) Estimation of distributional parameters for
censored trace level water quality data. 1. Estimation techniques. Water
Resources Research 22(2): 135-146.
Gleit, A. (1985) Estimation for small normal data sets with detection limits.
Environ. Sci. Technol. 19(12): 1201-1206.
Greife, A.; Hornung, R.W.; Stayner, L.G.; Steenland, K.N. (1988) Development
of a model for use in estimating exposure to ethylene oxide in a
retrospective cohort study. Scand. J. Work Environ. Health 14(1):29-30.
Helsel, D.R. (1990) Less than obvious. Environ. Sci. Technol. 24(12): 1766-1774.
Helsel, D.R.; Cohn, T.A. (1988) Estimation of descriptive statistics for multiply
censored water quality data. Water Resources Research 24(12): 1997-2004.
Hodgson, E.; Mailman, R.B.; Chambers, J.E. (1988) Dictionary of toxicology.
New York, NY: Van Nostrand Reinhold Co., p. 154.
Hornung, R.W.; Meinhardt, T.J. (1987) Quantitative risk assessment of lung
cancer in U.S. uranium miners. Health Physics 52(4):417-430.
Hornung, R.W.; Reed, L.D. (1990) Estimation of average concentration in the
presence of nondetectable values. Appl. Occup. Environ. Hyg. 5:46-51.
Inman, R.L.; Helton, J.C. (1988) An investigation of uncertainty and sensitivity
analysis techniques for computer models. Risk Analysis 8(1):71-90.
Kodell, R.L.; Gaylor, D.W.; Chen, J.J. (1987) Using average dose rate for
intermittent exposures to carcinogens. Risk Analysis 7(3):339-345.
Lioy, P.J. (1990) Assessing total human exposure to contaminants. Environ. Sci.
Technol. 24(7):938-945.
Lyman, W.J.; Reehl, W.F.; Rosenblatt, D.H. (1982) Handbook of chemical
property estimation methods. New York, NY: McGraw Hill.
Lynch, J.R. (1985) Measurement of worker exposure. In: Cralley, L.J.; Cralley,
L.V., eds. Patty's industrial hygiene and toxicology. Volume 3a: The work
environment. 2nd ed. New York, NY: Wiley-Interscience, pp. 569-615.
Morgan, M.G. (1983) The role of decision analysis and other quantitative tools in
environmental policy analysis. Organization for Economic Cooperation
and Development, Chemicals Division Environment Directorate, Paris,
France. ENZ/CHEM/CM/83.5.
Morgan, M.G.; Henrion, M.; Morris, S.C. (1979) Expert judgements for policy
analysis. Brookhaven National Laboratory, Upton, NY. BNL51358.
Morgan, M.G.; Morris, S.C.; Henrion, M.; Amaral, D.A.L.; Rish, W.R. (1984)
Technical uncertainty in quantitative policy analysis - a sulfur air pollution
example. Risk Analysis 4(3):201-213.
National Institute for Occupational Safety and Health. (1988) Comments by the
National Institute for Occupational Safety and Health on The
Occupational Safety and Health Administration's Advance Notice of
Proposed Rulemaking on Generic Standard for Exposure Monitoring.
National Research Council. (1983) Risk assessment in the federal government:
Managing the process. Committee on the Institutional Means for
Assessment of Risks to Public Health, Commission on Life Sciences, NRC.
Washington, DC: National Academy Press.
National Research Council. (1985) Epidemiology and air pollution. Committee
on the Epidemiology of Air Pollutants, Board on Toxicology and
Environmental Health Standards, Commission on Life Sciences, NRC.
Washington, DC: National Academy Press.
National Research Council. (1989a) Biologic markers in pulmonary toxicology.
Committee on Biologic Markers, Commission on Life Sciences, NRC.
Washington, DC: National Academy Press.
National Research Council. (1989b) Improving risk communication. Washington,
DC: National Academy Press.
National Research Council. (1990) Human exposure assessment for airborne
pollutants: Advances and applications. Committee on Advances in
Assessing Human Exposure to Airborne Pollutants, Committee on
Geosciences, Environment, and Resources, NRC. Washington, DC:
National Academy Press.
Nehls, G.J.; Akland, G.G. (1973) Procedures for handling aerometric data. J. Air
Pollut. Control Assoc. 23:180.
Nelson, J.D.; Ward, R.C. (1981) Statistical considerations and sampling
techniques for ground-water quality monitoring. Ground Water 19(6):617-
625.
New Jersey Department of Environmental Protection. (1988a) Improving
dialogue with communities: A risk communication manual for
government. Division of Science and Research, Risk Communication Unit,
Trenton, NJ.
New Jersey Department of Environmental Protection. (1988b) Improving
dialogue with communities: A short guide for government risk
communication. Division of Science and Research, Risk Communication
Unit, Trenton, NJ.
Pao, E.M.; Fleming, K.H.; Guenther, P.M.; Mickle, S.J. (1982) Foods commonly
eaten by individuals: Amount per day and per eating occasion. U.S.
Department of Agriculture, Washington, DC. Home Economics Research
Report Number 44.
Paustenbach, D.J. (1985) Occupational exposure limits, pharmacokinetics, and
usual work schedules. In: Cralley, L.J.; Cralley, L.V., eds. Patty's
industrial hygiene and toxicology. Volume 3a: The work environment. 2nd
ed. New York, NY: Wiley-Interscience, pp. 111-277.
Rish, W.R.; Marnicio, R.J. (1988) Review of studies related to uncertainty in risk
analysis. Oak Ridge National Laboratory, Oak Ridge, TN. ORNL/TM-
10776.
Rish, W.R. (1988) Approach to uncertainty in risk analysis. Oak Ridge National
Laboratory, Oak Ridge, TN. ORNL/TM-10746.
Robinson, J. (1977) How Americans use time: A social psychological analysis of
everyday behavior. New York, NY: Praeger Publishers, Praeger Special
Studies.
Sanders, T.G.; Adrian, D.D. (1978) Sampling frequency for river quality
monitoring. Water Resources Research 14(4):569-576.
Schweitzer, G.E.; Black, S.C. (1985) Monitoring statistics. Environ. Sci. Technol.
19(11):1026-1030.
Schweitzer, G.E.; Santolucito J.A. (1984) Environmental sampling for hazardous
wastes. American Chemical Society, Washington, DC. ACS Symposium
Series Number 267.
Seiler, F.A. (1987) Error propagation for large errors. Risk Analysis 7(4):509-
518.
Shaw, R.W.; Smith, M.V.; Pour, R.J.J. (1984) The effect of sample frequency on
aerosol mean-values. J. Air Pollut. Control Assoc. 34(8):839-841.
Smith, C.N.; Parrish, R.S.; Carsel, R.F. (1987) Estimating sample requirements
for field evaluations of pesticide leaching. Environ. Toxicol. Chem. 6:345-
357.
Stern, F.B.; Waxweiler, R.A.; Beaumont, J.J.; Lee, S.T.; Rinsky, R.A.; Zumwalde,
R.D.; Halperin, W.E.; Bierbaum, R.J.; Landrigan, R.J.; Murray, W.E.
(1986) Original contribution: A case-control study of leukemia at a naval
nuclear shipyard. Am. J. Epidemiol. 123(6):980-992.
Travis, C.C.; Land, M.L. (1990) Estimating the mean of data sets with
nondetectable values. Environ. Sci. Technol. 24(7):961-962.
Upton, A.C. (1988) Evolving perspectives on the concept of dose in radiobiology
and radiation protection. Health Physics 55(4):605-614.
U.S. Department of Energy. (1991) Environmental regulatory guide for
radiological effluent monitoring and environmental surveillance.
Department of Energy, Washington, DC. DOE/EH-0173T.
U.S. Environmental Protection Agency. (1980) Interim guidelines and
specifications for preparing quality assurance project plans. Office of
Monitoring Systems and Quality Assurance, Office of Research and
Development, Washington, DC. QAMS-005/80.
U.S. Environmental Protection Agency. (1984a) Study of carbon monoxide
exposure of residents of Washington, D.C., and Denver, Colorado.
Environmental Monitoring Systems Laboratory, Office of Research and
Development, Research Triangle Park, NC. EPA-600/S4-84/031, NTIS
PB84-183516.
U.S. Environmental Protection Agency. (1984b) Survey management handbook.
Volumes I and II. Office of Policy, Planning and Evaluation, Washington,
DC. EPA-230/12-84/002.
U.S. Environmental Protection Agency. (1985a) Methods for assessing exposure
to chemical substances. Volume 4: Methods for enumerating and
characterizing populations exposed to chemical substances. Office of Toxic
Substances, Washington, DC. EPA-560/5-85/004, NTIS PB86-107042.
U.S. Environmental Protection Agency. (1985b) Practical guide for ground-water
sampling. Robert S. Kerr Environmental Research Lab, Office of
Research and Development, Ada, OK. EPA-600/2-85/104, NTIS PB86-
137304.
U.S. Environmental Protection Agency. (1985c) Methods for assessing exposure
to chemical substances. Volume 2: Methods for assessing exposure to
chemicals in the ambient environment. Office of Toxic Substances,
Washington, DC. EPA-560/5-85/002, NTIS PB86-107067.
U.S. Environmental Protection Agency. (1985d) Methods for assessing exposure
to chemical substances. Volume 5: Methods for assessing exposure to
chemical substances in drinking water. Office of Toxic Substances.
Washington, DC. EPA-560/5-85/006, NTIS PB86-1232156.
U.S. Environmental Protection Agency. (1985e) Validation methods for chemical
exposure and hazard assessment models. Environmental Research
Laboratory, Office of Research and Development, Athens, GA.
EPA/600/D-85/297.
U.S. Environmental Protection Agency. (1985f) Methodology for characterization
of uncertainty in exposure assessments. Office of Health and
Environmental Assessment, Office of Research and Development,
Washington, DC. EPA/600/8-86/009, NTIS PB85-240455/AS.
U.S. Environmental Protection Agency. (1986a) Guidelines for estimating
exposures. Federal Register 51:34042-34054.
U.S. Environmental Protection Agency. (1986b) Guidelines for carcinogen risk
assessment. Federal Register 51(185):33992-34003.
U.S. Environmental Protection Agency. (1986c) Guidelines for mutagenic risk
assessment. Federal Register 51(185):34006-34012.
U.S. Environmental Protection Agency. (1986d) Handbook: Stream sampling for
waste load allocations applications. Office of Research and Development,
Cincinnati, OH. EPA-600/2-86/013.
U.S. Environmental Protection Agency. (1986e) Guideline on air quality models
(Revised). Office of Air Quality Planning and Standards, Research
Triangle Park, NC. EPA-450/2-78/027R.
U.S. Environmental Protection Agency. (1986f) Methods for assessing exposure
to chemical substances. Volume 8: Methods for assessing environmental
pathways of food contamination. Office of Toxic Substances, Washington,
DC. EPA-560/5-85/008.
U.S. Environmental Protection Agency. (1986g) Analysis for polychlorinated
dibenzo-p-dioxins (PCDD) and dibenzofurans (PCDF) in human adipose
tissue: method evaluation study. Office of Toxic Substances, Washington,
DC. EPA-560/5-86/020.
U.S. Environmental Protection Agency. (1986h) Addendum to the health
assessment document for tetrachloroethylene (perchloroethylene): Updated
carcinogenicity assessment for tetrachloroethylene (perchloroethylene,
PERC, PCE). Review Draft. Office of Health and Environmental
Assessment, Office of Research and Development, Washington, DC. EPA-
600/8-82/005FA, NTIS PB86-174489/AS.
U.S. Environmental Protection Agency. (1987a) The total exposure assessment
methodology (TEAM) study. Volume I: Summary and analysis. Office of
Acid Deposition, Environmental Monitoring and Quality Assurance, Office
of Research and Development, Washington, DC. EPA-600/6-87/002a.
U.S. Environmental Protection Agency. (1987b) Selection criteria for
mathematical models used in exposure assessments: Surface water models.
Office of Health and Environmental Assessment, Office of Research and
Development, Washington, DC. EPA-600/8-87/042, NTIS PB88-
139928/AS.
U.S. Environmental Protection Agency. (1987c) Supplement A to the guideline
on air quality models (Revised). Office of Air Quality Planning and
Standards, Research Triangle Park, NC. EPA-450/2-78/027R.
U.S. Environmental Protection Agency. (1987d) Pesticide assessment guidelines
for applicator exposure monitoring - subdivision U. Office of Pesticide
Programs, Office of Pesticides and Toxic Substances, Washington, DC.
EPA-540/9-87/127.
U.S. Environmental Protection Agency. (1987e) Addendum to the health
assessment document for trichloroethylene: Updated carcinogenicity
assessment for trichloroethylene. Review Draft. Office of Health and
Environmental Assessment, Office of Research and Development,
Washington, DC. EPA-600/8-82/006FA, NTIS PB87-228045/AS.
U.S. Environmental Protection Agency. (1988a) Proposed guidelines for
exposure-related measurements. Federal Register 53(232):48830-48853.
U.S. Environmental Protection Agency. (1988b) Proposed guidelines for assessing
female reproductive risk. Federal Register 53(126):24834-24847.
U.S. Environmental Protection Agency. (1988c) Proposed guidelines for assessing
male reproductive risk. Federal Register 53(126):24850-24869.
U.S. Environmental Protection Agency. (1988d) Laboratory data validation
functional guidelines for evaluating organic analyses. USEPA Data
Review Work Group, prepared for the Hazardous Site Evaluation
Division, Office of Solid Waste and Emergency Response, Washington,
DC. February 1, 1988.
U.S. Environmental Protection Agency. (1988e) Laboratory data validation
functional guidelines for evaluating inorganic analyses. USEPA Data
Review Work Group, prepared for the Hazardous Site Evaluation
Division, Office of Solid Waste and Emergency Response, Washington,
DC. July 1, 1988.
U.S. Environmental Protection Agency. (1988f) Selection criteria for
mathematical models used in exposure assessments: Ground-water models.
Office of Health and Environmental Assessment, Office of Research and
Development, Washington, DC. EPA-600/8-88/075, NTIS PB88-
248752/AS.
U.S. Environmental Protection Agency. (1988g) Seven cardinal rules of risk
communication. Office of Policy Analysis, Office of Policy, Planning, and
Evaluation, Washington, DC. OPA-87-020.
U.S. Environmental Protection Agency. (1988h) Total human exposure and
indoor air quality: An automated bibliography (BLIS) with summary
abstracts. Office of Acid Deposition, Monitoring, and Quality Assurance,
Office of Research and Development, Washington, DC. EPA-600/9-
88/011.
U.S. Environmental Protection Agency. (1989a) A rationale for the assessment of
errors in the sampling of soils. Office of Research and Development,
Washington, DC. EPA-600/4-90/013.
U.S. Environmental Protection Agency. (1989b) Resolution on use of
mathematical models by EPA for regulatory assessment and decision-
making. Environmental Engineering Committee, Science Advisory Board,
Washington, DC. EPA-SAB-EEC-89-012.
U.S. Environmental Protection Agency. (1989c) Exposure factors handbook.
Office of Health and Environmental Assessment, Office of Research and
Development, Washington, DC. EPA-600/8-89/043, NTIS PB90-
106774/AS.
U.S. Environmental Protection Agency. (1990a) Soil sampling quality assurance
user's guide. Environmental Monitoring Systems Laboratory, Office of
Research and Development, Washington, DC. EPA-600/8-89/046.
U.S. Environmental Protection Agency. (1990b) Guidance for data useability in
risk assessment. Interim Final. Office of Emergency and Remedial
Response, Washington, DC. EPA-540/G-90/008.
U.S. Environmental Protection Agency. (1991a) Guidelines for developmental
toxicity risk assessment. Federal Register 56(234):63798-63826.
U.S. Environmental Protection Agency. (1991b) Selection criteria for
mathematical models used in exposure assessment: Atmospheric dispersion
models. Office of Health and Environmental Assessment, Office of
Research and Development, Washington, DC. EPA-600/8-91/038.
U.S. Environmental Protection Agency. (1992a) Science Advisory Board's review
of the draft final exposure assessment guidelines (SAB Final Review Draft
dated August 8, 1991). EPA Science Advisory Board, Washington, DC.
EPA-SAB-IAQC-92-015, Dated January 13, 1992.
U.S. Environmental Protection Agency. (1992b) Dermal exposure assessment:
Principles and applications. Office of Health and Environmental
Assessment, Office of Research and Development, Washington, DC. EPA-
600/8-91/011F.
Waxweiler, R.J.; Zumwalde, R.D.; Ness, G.O.; Brown, D.P. (1988) A
retrospective cohort mortality study of males mining and milling attapulgite
clay. Am. J. Ind. Med. 13:305-315.
World Health Organization (WHO). (1983) Guidelines on studies in
environmental epidemiology. Geneva, Switzerland: WHO, Environmental
Health Criteria 27.
World Health Organization (WHO). (1986) Principles for evaluating health risks
from chemicals during infancy and early childhood: The need for a special
approach. Geneva, Switzerland: WHO, Environmental Health Criteria 59,
pp. 26-33.
Yang, J.J.; Roy, T.A.; Krueger, A.J.; Neil, W.; Mackerer, C.R. (1989) In vitro and
in vivo percutaneous absorption of benzo[a]pyrene from petroleum crude-
fortified soil in the rat. Bull. Environ. Contam. Toxicol. 43:207-214.
PART B: RESPONSE TO PUBLIC AND SCIENCE ADVISORY BOARD
COMMENTS
1. INTRODUCTION
This section summarizes the major issues raised in public comments on the
Proposed Guidelines for Exposure-Related Measurements (hereafter "1988
Proposed Guidelines") published December 2, 1988 (53 FR 48830-48853). In
addition to general comments, reviewers were requested to comment specifically
on the guidance for interpreting contaminated blanks versus field data, the
interpretation of data at or near the limit of detection, approaches to assessing
uncertainty, and the Glossary of Terms. Comment was also invited on the
following questions: Should the 1988 Proposed Guidelines be combined with the
1986 Guidelines for Estimating Exposures (hereafter "1986 Guidelines")? Is the
current state-of-the-art in making measurements of population activities for the
purpose of exposure assessment advanced to the point where the Agency can
construct guidelines in this area? Given that EPA Guidelines are not protocols or
detailed literature reviews, is the level of detail useful and appropriate, especially
in the area of statistics?
The Science Advisory Board (SAB) met on December 2, 1988, and
provided written comments in a May, 1989 letter to the EPA Administrator
(EPA-SAB-EETFC-89-020). The public comment period extended until March 2,
1989. Comments were received from 17 individuals or organizations.
After the SAB and public comment, Agency staff prepared summaries of
the comments and analyses of major issues presented by the commentors. These
were considered in the development of these final Guidelines. In response to the
comments, the Agency has modified or clarified most of the sections of the
Guidelines. For the purposes of this discussion, only the most significant issues
reflected by the public and SAB comments are discussed. Several minor
recommendations, which do not warrant discussion here, were considered and
adopted by the Agency in the revision of these Guidelines.
The EPA revised the 1988 Proposed Guidelines in accordance with the
public and SAB comments, retitling them Guidelines for Exposure Assessment
(hereafter "Guidelines"). The Agency presented the draft final Guidelines to the
SAB at a public meeting on September 12, 1991, at which time the SAB invited
public comment for a period of 30 days on the draft. The SAB discussed the
final draft in a January 13, 1992 letter to the Administrator of the EPA (EPA-
SAB-IAQC-92-015). No additional public comments were received.
2. RESPONSE TO GENERAL COMMENTS
In general, the reviewers were complimentary regarding the overall quality
of the 1988 Proposed Guidelines. Several reviewers requested that the Agency
better define the focus and intended audiences and refine the Guidelines with
regard to treatment of nonhuman exposure. The Agency has refined its approach
and coverage in these Guidelines. Although these Guidelines deal specifically
with human exposures to chemicals, additional supplemental guidance may be
developed for ecological exposures, and exposures to biological or radiological
entities. The Agency is currently developing separate guidelines for ecological
risk assessment.
Concerns were expressed about the Agency's use of the terms exposure
and dose. Consequently, the Agency reviewed its definitions and uses of these
terms and evaluated their use elsewhere in the scientific community. The Agency
has changed its definitions and uses of these terms from that in both the 1986
Guidelines and the 1988 Proposed Guidelines. It is believed that the definitions
contained in the current Guidelines are now in concert with the definitions
suggested by the National Academy of Sciences and others in the scientific field.
Many reviewers urged the Agency to be more explicit in its
recommendations regarding uncertainty in statistics, limits of detection, censored
data sets, and the use of models. Some reviewers felt the level of detail was
appropriate for statistical uncertainty while others wanted additional methods for
dealing with censored data. Several commended the Agency for its
acknowledgement of uncertainty in exposure assessments and the call for its
explicit description in all exposure assessments, while others expressed concern
for lack of acknowledgement of model uncertainty. Accordingly, these areas have
been revisited and an entire section has been devoted to uncertainty. We agree
with the reviewers that much more work remains to be done in this area,
particularly with evaluating overall exposure assessment uncertainty, not only with
models but also with the distributions of exposure parameters. The Agency may
issue additional guidance in this area in the future.
Some reviewers submitted extensive documentation regarding detection
limits and statistical representations. Several submitted comments arguing against
data reporting conventions that result in censored data sets and recommended
that the Agency issue a guidance document for establishing total system detection
limits. The Agency found the documentation to be helpful and has revised the
sections of the Guidelines accordingly. Unfortunately, several of the other
suggestions go beyond the scope of this document.
The reviewers generally commented that the glossary was useful,
presenting many technical terms and defining them in an appropriate manner.
The glossary has been expanded to include the key terms used in the Guidelines,
while at the same time correcting some definitions that were inconsistent or
unclear. In particular, the definitions for exposure and dose have been revised.
3. RESPONSE TO COMMENTS ON THE SPECIFIC QUESTIONS
3.1. Should the 1988 Proposed Guidelines be combined with the 1986
Guidelines?
The SAB and several other commentors recommended that the 1986
Guidelines and the 1988 Proposed Guidelines be combined into an integrated
document. The Agency agrees with this recommendation and has made an effort
to produce a single guideline that progresses logically from start to finish. This
was accomplished through an extensive reformatting of the two sets of guidelines
as an integrated document, rather than a simple joining together of the previous
versions.
In integrating the two previous guidelines, the Agency has revised and
updated the section in the 1986 Guidelines that suggests an outline for an
exposure assessment. A more complete section (Section 7 of the current
Guidelines) now discusses how assessments should be presented and suggests a
series of points to consider in reviewing assessments.
The Agency has also expanded the section in the 1986 Guidelines that
discussed exposure scenarios, partly by incorporating material from the 1988
Proposed Guidelines, and partly as a result of comments requesting clarification
of the appropriate use of certain types of scenarios (e.g., "worst case"). Section 5.3
of the current Guidelines extensively discusses the appropriateness of using
various scenarios, estimates, and risk descriptors, and defines certain scenario-
related terms for use in exposure assessments.
3.2. Is the current state-of-the-art in making measurements of population
activities for the purpose of exposure assessment advanced to the point
where the Agency can construct guidelines in this area?
Both the SAB and public comments recommended the inclusion of
demographics, population dynamics, and population activity patterns in the
exposure assessment process. In response, the Agency has included additional
discussion on the use of activity patterns in the current Guidelines, while recognizing
that more research has to be done in this area.
3.3. Is the level of detail of the Guidelines useful and appropriate,
especially in the area of statistics?
As might be expected, there was no clear consensus of opinion on what
constitutes appropriate coverage. Regarding quality assurance (QA) and quality
control (QC), it was felt that a strong statement on the need for QA/QC
followed by reference to appropriate EPA documents was a suitable level of
detail. Statistical analyses, sampling issues, limit of detection, and other analytical
issues all elicited many thoughtful comments. Where the recommendations did
not exceed the scope of the document or the role of EPA, the Agency has
attempted to blend the various recommendations into the current Guidelines. In
all these areas, therefore, the previous sections have been revised in accordance
with comments.